CMPE 468 • Final • Machine Learning for Engineers
Take the course together as a group of 3
Instructor
Nursena Köprücü Aslan
MSc in Machine Learning and AI
I studied Computer Engineering at Koç Üniversitesi, where I also completed a double major in Mathematics. I then earned my master's degree in Machine Learning and Artificial Intelligence at Imperial College London. Along the way I took part in various research projects abroad and gained both academic and hands-on experience, particularly in machine learning, artificial intelligence, and data science. My aim in this course is to share the fundamental concepts of machine learning with you in a clear, application-oriented way.
Complete the Package
🎓 At Atılım Üniversitesi, 92% of students study with the full package.

CMPE 468 • Final
Machine Learning for Engineers
Nursena Köprücü Aslan
1499 TL

CMPE 468 • Midterm
Machine Learning for Engineers
Nursena Köprücü Aslan
1499 TL
Topics
Probability Review
Counting and Probability
Conditional Probability and Independence
Bayes' Rule
Discrete Random Variables
Continuous Random Variables
Expected Value and Variance
Bernoulli and Binomial Distributions
Continuous Uniform Distribution
Exponential Distribution
Normal Distribution
Laplace and Logistic Distributions
Decision Trees
What is a Decision Tree?
Splitting in Classification Trees
Pruning Trees
From Trees to Rules
Multivariate/Oblique Trees
Support Vector Machines
What & Why
Maximum Margin Classification
Maximizing the Margin
Lagrangian Formulation of the Hard-Margin SVM
From Primal to Dual: Solving the SVM Optimization
Why only a few points matter (KKT & sparsity)
From α to parameters
Prediction uses only support vectors
Soft Margin SVM
Soft Margin Dual
Clustering: K-Means
Introduction and Mixture Densities
K-Means Clustering
One Iteration of k-Means
Expectation-Maximization (EM)
Mixture Models & Practical Use of Clusters
Spectral and Hierarchical Clustering
Choose the Right Clustering Tool
“Clustering as Preprocessing” Pitfall
Model Evaluation & Evaluation Metrics
Cross-validation, Generalization, Bias-Variance Trade-off
Evaluation/Performance Metrics
Loss Functions: Measuring Mistakes
Feature Selection vs Feature Extraction
Principal Component Analysis (PCA)
Feature Embedding & Factor Analysis (FA)
Sample Final Questions I
Pass Rates & Majors (Bayes; Law of Total Probability)
Linear Discriminant with Equal Variance
Comparing Two Splits (Gini vs. Misclassification)
Prepruning vs. Postpruning (Which and Why?)
Weighted Least Squares (Closed-Form Solution, Matrix View & Interpretation)
Mean Square Error for Linear Regression
Gradient Descent Update
k-NN Regression Prediction
Decision Boundary and Building a Network for Binary Classification
Derivative of Squared Error
Computing Input and Output of a Convolution Node
True/False Reasoning on Activation, Linear Networks, and Gradient Descent
Computing Total Probability
True/False on Scaling, k-NN, Intrinsic Error and Model Complexity
Regression: Test-Set MSE
Generalization & Overfitting: True/False
Baseline Error: ZeroR vs Random Guessing
Entropy: Fair Die & Bias Effect
Decision Trees: ID3 Optimality + Key Advantage
Decision Tree Split: Remaining Entropy
Sample Final Questions II
Discrete Attribute in Decision Trees
From Binary to Multiclass: One-vs-All / One-vs-One with a Binary Classifier
Why Not Regression for Classification?
Adaptive Learning Rates in Gradient Descent
Mahalanobis vs. Euclidean: Why and When?
Regularized Least Squares
Gaussian Generative Model → Logistic Posterior
Why Initialize Weights Near Zero?
Naive Bayes Text Classification with Binary Features
Choosing Between Two Splits: Gini vs. Misclassification
Reviews
No reviews yet.
Frequently Asked Questions
For example, whether it is Koç Üniversitesi - MATH 101 (Calculus) or a comparable course at another school, our packages are designed for that exact course. That way you study with pinpoint focus and save time.
Each package includes exam-specific videos: topic lectures, past exam questions with worked solutions, and summary notes. It targets the questions that appear most frequently on the exam. Our instructors follow the university's academic calendar and keep the packages continuously up to date, so you can focus on improving your grade without wasting time on unnecessary detail.