MSc in Machine Learning and AI
I studied Computer Engineering at Koç University, with a double major in Mathematics. I then completed my master's degree in Machine Learning and Artificial Intelligence at Imperial College London. Along the way, I took part in research projects abroad and gained both academic and hands-on experience, particularly in machine learning, artificial intelligence, and data science. My goal in this course is to share the fundamental concepts of machine learning with you in a clear, practice-oriented way.

Machine Learning
Nursena Köprücü Aslan
1999 TL

Introduction
Introduction to Machine Learning
Machine Learning Notation Explained
Machine Learning Preliminaries
Extra: Supervised Learning
Why Supervised?
Hypothesis Space & Occam's Razor
Loss Functions: Measuring Mistakes
Example: Least-Squares Linear Regression
k-Nearest Neighbor (k-NN)
Nearest Neighbor Approach
Value of k
Geometric View: Voronoi Intuition
Distance Measure
Distances for Real Vectors
Example: Computing Distance Between Two Points
Distance for Non-Numeric Data
Scaling and Normalization
Voting Mechanism
k-NN Regression
Decision Trees
What is a Decision Tree?
Splitting in Classification Trees
Pruning Trees
From Trees to Rules
Multivariate/Oblique Trees
Regression
What Is Regression?
Linear Regression
Multiple Linear Regression
Polynomial Regression
Summary: Linear, Multiple & Polynomial Regression
Feature Transformations & Feature Engineering
Feature Selection vs Feature Extraction
Feature Embedding & Factor Analysis (FA)
Logistic Regression
Motivation
Probabilistic Interpretation
Binary Cross Entropy / Log-loss
Optimization with Gradient Descent
Classification with Logistic Regression
Summary & Multi-Class Logistic Regression
Neural Networks, MLP and Backpropagation
Perceptron
Training a Perceptron
Limitation: XOR
MLP Architecture & Representation View
Backpropagation
Regression
Discrimination
Deep Learning
Introduction to Deep Learning & Activation Functions
Training Deep Networks
Regularization Techniques
Tuning Network Structure
Learning Time
Time-Delay Neural Networks (TDNN)
RNN / LSTM / GRU
Generative Adversarial Networks (GANs)
Extra: Probability Review
Counting and Probability
Conditional Probability and Independence
Bayes' Rule
Discrete Random Variables
Continuous Random Variables
Expected Value and Variance
Bernoulli and Binomial Distributions
Continuous Uniform Distribution
Exponential Distribution
Normal Distribution
Laplace and Logistic Distributions
Parametric Methods and Bayesian Learning
Maximum Likelihood Estimation (MLE)
Bernoulli Likelihood
Multinomial Likelihood and Smoothing
Bayes' Theorem
Parametric Classification
Unequal Variances → Quadratic Boundary
Gaussian Classification Boundary
Parametric & Polynomial Regression
Naive Bayes
Naive Bayes Approach
Curse of Dimensionality
Bayes Classifier vs. Naive Bayes
Independence & Conditional Independence
Naive Bayes Classification
How Naive Bayes Simplifies Parameter Estimation
Sample Midterm Questions I
Pass Rates & Majors (Bayes; Law of Total Probability)
Weighted Least Squares (Closed-Form Solution, Matrix View & Interpretation)
MLE for α (positive support, exponential tail)
MLP with Hard-Threshold Units
Should we initialize all MLP weights to zero?
One Shared Network vs. Three Separate Networks
Naive Histogram Estimator vs. Parzen Windows (Kernel)
Comparing Two Splits (Gini vs. Misclassification)
Prepruning vs. Postpruning (Which and Why?)
Sample Midterm Questions II
Why Not Regression for Classification?
From Binary to Multiclass: One-vs-All / One-vs-One with a Binary Classifier
Max-Shift for Softmax
Why Initialize Weights Near Zero?
Adaptive Learning Rates in Gradient Descent
When Do Direct Input-to-Output Links Help in an MLP?
Mahalanobis vs. Euclidean: Why and When?
Discrete Attribute in Decision Trees
Regularized Least Squares
Gaussian Generative Model → Logistic Posterior
Naive Bayes Text Classification with Binary Features
Derivative of Softmax
Choosing Between Two Splits: Gini vs. Misclassification
Past Exam Questions
Mean Square Error for Linear Regression
Gradient Descent Update
k-NN Regression Prediction
Decision Boundary and Building a Network for Binary Classification
Derivative of Squared Error
Computing Input and Output of a Convolution Node
True/False Reasoning on Activation, Linear Networks, and Gradient Descent
Computing Total Probability
True/False on Scaling, k-NN, Intrinsic Error and Model Complexity
Output Size of a Conv Layer