ENGR 421 • Midterm II • Introduction to Machine Learning
Instructor
Nursena Köprücü Aslan
MSc in Machine Learning and AI
I studied Computer Engineering at Koç University, with a double major in Mathematics. I then completed my master's degree in Machine Learning and Artificial Intelligence at Imperial College London. Along the way I took part in research projects abroad and gained both academic and hands-on experience in machine learning, artificial intelligence, and data science. My goal in this course is to present the core concepts of machine learning in a clear, application-oriented way.
Complete the Package
🎓 92% of students at Koç University study with the full package.
Topics
Probability Review
Counting and Probability
Conditional Probability and Independence
Bayes' Rule
Discrete Random Variables
Continuous Random Variables
Expected Value and Variance
Bernoulli and Binomial Distributions
Continuous Uniform Distribution
Exponential Distribution
Normal Distribution
Laplace and Logistic Distributions
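Two of the distributions listed above can be checked numerically. Below is a minimal sketch, in plain Python, of the Binomial pmf and the Normal pdf; the example values (n = 4, p = 0.5, standard normal) are arbitrary illustrations, not values from the course material.

```python
import math

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

print(binomial_pmf(2, 4, 0.5))               # 6 * 0.5^4 = 0.375
print(round(normal_pdf(0.0, 0.0, 1.0), 4))   # 1/sqrt(2*pi) ≈ 0.3989
```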
Non-parametric Methods
What Nonparametric Means
Density Estimation
Nonparametric Classification
Condensed Nearest Neighbor
Outlier Detection
Nonparametric Regression
Additive Models & How to Choose h/k
Histogram Density Estimator
Naive / Uniform Kernel Density Estimator
k-Nearest Neighbor (k-NN) Classifier in 1D
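Two of the worked examples listed above can be sketched in a few lines of plain Python: the naive (uniform-kernel) density estimator and a 1-D k-NN classifier. The sample points, labels, and bandwidth below are invented for illustration.

```python
def naive_density(x, sample, h):
    """Naive estimator: fraction of points within h/2 of x, scaled by 1/h."""
    inside = sum(1 for xi in sample if abs(x - xi) <= h / 2)
    return inside / (len(sample) * h)

def knn_classify_1d(x, points, labels, k):
    """Majority label among the k nearest 1-D training points."""
    nearest = sorted(zip(points, labels), key=lambda pl: abs(pl[0] - x))[:k]
    votes = [lab for _, lab in nearest]
    return max(set(votes), key=votes.count)

sample = [1.0, 1.2, 1.4, 3.0, 3.1]
print(naive_density(1.2, sample, h=1.0))   # 3 of 5 points in [0.7, 1.7] -> 0.6
print(knn_classify_1d(2.8, [1.0, 1.2, 3.0, 3.1], ["a", "a", "b", "b"], k=3))
```

Note how the bandwidth h (and k for k-NN) controls smoothness, which is the point of the "How to Choose h/k" topic above.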
Decision Trees
What is a Decision Tree?
Splitting in Classification Trees
Pruning Trees
From Trees to Rules
Multivariate/Oblique Trees
Choosing Between Two Splits: Gini vs. Misclassification
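The "Gini vs. Misclassification" comparison above can be made concrete with a toy split comparison. The class counts below are invented; they are chosen so that misclassification error ties between the two splits while Gini impurity prefers the purer one.

```python
def gini(counts):
    """Gini impurity of a node with the given class counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def misclassification(counts):
    """Misclassification error of a node with the given class counts."""
    n = sum(counts)
    return 1.0 - max(counts) / n

def weighted_impurity(impurity, children):
    """Impurity of a split: child impurities weighted by child size."""
    total = sum(sum(c) for c in children)
    return sum(sum(c) / total * impurity(c) for c in children)

split_a = [(300, 100), (100, 300)]   # two mixed children
split_b = [(200, 400), (200, 0)]     # one mixed child, one pure child

print(weighted_impurity(misclassification, split_a))  # 0.25
print(weighted_impurity(misclassification, split_b))  # 0.25 -> a tie
print(weighted_impurity(gini, split_a))               # 0.375
print(weighted_impurity(gini, split_b))               # ~0.333 -> Gini prefers B
```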
Kernel Machines
What & Why
Maximum Margin Classification
Maximizing the Margin
Lagrangian Formulation of the Hard-Margin SVM
From Primal to Dual: Solving the SVM Optimization
Why only a few points matter (KKT & sparsity)
From 𝛼 to parameters
Prediction uses only support vectors
Soft Margin SVM
Soft Margin Dual
Margin, distance, and support vectors
Solving a tiny SVM dual problem (linear kernel)
Polynomial kernel and feature map
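The last topic above can be verified numerically: the degree-2 polynomial kernel K(x, z) = (x·z + 1)² equals the dot product of explicit feature maps. The 2-D vectors below are arbitrary examples.

```python
import math

def poly_kernel(x, z):
    """Degree-2 polynomial kernel (x.z + 1)^2 for 2-D inputs."""
    return (x[0] * z[0] + x[1] * z[1] + 1) ** 2

def feature_map(x):
    """Explicit phi(x) whose dot product reproduces (x.z + 1)^2 in 2-D."""
    x1, x2 = x
    return [x1 * x1, x2 * x2,
            math.sqrt(2) * x1 * x2,
            math.sqrt(2) * x1, math.sqrt(2) * x2,
            1.0]

x, z = (1.0, 2.0), (3.0, -1.0)
lhs = poly_kernel(x, z)
rhs = sum(a * b for a, b in zip(feature_map(x), feature_map(z)))
print(lhs, rhs)   # both equal 4 up to rounding
```

The kernel evaluates in 2 dimensions what the feature map computes in 6, which is the point of the kernel trick.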
Dimensionality Reduction
Dimensionality Reduction
Principal Component Analysis (PCA)
Feature Embedding & Factor Analysis (FA)
Singular Value Decomposition and Matrix Factorization
Multidimensional Scaling
Linear Discriminant Analysis (LDA)
Canonical Correlation Analysis
Isomap, Locally Linear Embedding, Laplacian Eigenmaps
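For the PCA topic above, the first principal direction of 2-D data can be sketched without any linear-algebra library, using the closed-form eigendecomposition of the 2×2 covariance matrix. The data points are invented (roughly on the line y = x), and the eigenvector formula assumes a nonzero off-diagonal covariance.

```python
import math

def pca_first_component(points):
    """First principal direction of 2-D data, via the closed-form
    largest eigenvalue of the 2x2 covariance matrix (assumes the
    off-diagonal covariance b is nonzero)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    a = sum((p[0] - mx) ** 2 for p in points) / n
    c = sum((p[1] - my) ** 2 for p in points) / n
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    vx, vy = b, lam - a                    # eigenvector of the larger eigenvalue
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)

pts = [(0, 0), (1, 1), (2, 2), (3, 3.2)]
print(pca_first_component(pts))   # close to (0.707, 0.707): data lie near y = x
```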
Sample Midterm Questions I
Linear Discriminant with Equal Variance
Naive Histogram Estimator vs. Parzen Windows (Kernel)
Kernel Smoother
Naive Density Estimator (Bandwidth effect & validity)
Comparing Two Splits (Gini vs. Misclassification)
Prepruning vs. Postpruning (Which and Why?)
Discrete Attribute in Decision Trees
Kernel Density Estimation
Naive Bayes Text Classification with Binary Features
Sample Midterm Questions II
Decision Trees: Gini Impurity Split Comparison
Decision Trees: Entropy & Information Gain Split Comparison
Kernel Engineering
1-NN LOOCV on Patient Dataset
k-NN Regression Prediction
Decision Boundary and Building a Network for Binary Classification
True/False on Scaling, k-NN, Intrinsic Error and Model Complexity
Reviews
No reviews yet.
Frequently Asked Questions
For example, take Koç University's MATH 101 (Calculus), or a similar course at another school: our packages are designed specifically for that course. That way you study with pinpoint focus and save time.
Each package contains exam-specific videos: topic lectures, past exam questions with solutions, and summary notes. It targets the questions that come up most often on exams. Our instructors follow the university's academic calendar and keep the packages continuously updated, so you can focus on raising your grade without wasting time on unnecessary detail.
