
\(k\) Nearest Neighbor


Represents each record as a datapoint in \(m\)-dimensional space, where \(m\) is the number of attributes.

Requirements

  • Set of labelled records
  • Normalized Proximity/Distance metric (both normalizations are sketched after this list)
      • Min-Max normalization
      • \(Z\)-normalization
  • Odd value of \(k\) (to avoid ties in majority voting)
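
A minimal NumPy sketch of the two normalizations listed above (the function names are illustrative, not from the source):

```python
import numpy as np

def min_max_normalize(X: np.ndarray) -> np.ndarray:
    """Rescale each feature (column) to the [0, 1] range."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min)

def z_normalize(X: np.ndarray) -> np.ndarray:
    """Standardize each feature to zero mean and unit variance."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
print(min_max_normalize(X))  # each column now spans [0, 1]
print(z_normalize(X))        # each column now has mean 0, std 1
```

Without normalization, the feature with the largest scale (here, the second column) would dominate the distance metric.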

Choosing \(k\)

Similar to the bias-variance tradeoff:

$$
\begin{aligned}
\text{Flexibility} &\propto \dfrac{1}{k} \\
\implies \text{Bias} &\propto k \\
\text{Variance} &\propto \dfrac{1}{k}
\end{aligned}
$$

Value of \(k\)

| \(k\) | Problems | Low Bias | Low Variance |
| --- | --- | --- | --- |
| Too small | Overfitting; susceptible to noise | ✅ | ❌ |
| Too large | Underfitting; susceptible to far-off points: neighborhood includes points from other classes | ❌ | ✅ |

*Figure: KNN decision boundary*

*Figure: KNN bias-variance tradeoff*

Finding optimal \(k\), as sketched in the loop below:

1. Use a test set
2. Let \(k = 3\)
3. Record the error rate of the regressor/classifier
4. Set \(k = k + 2\)
5. Repeat steps 3 and 4 until reaching the value of \(k\) for which
    1. error is minimum
    2. accuracy is maximum
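
A minimal sketch of this search loop, assuming scikit-learn and the Iris dataset purely for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

best_k, best_error = None, float("inf")
for k in range(3, 30, 2):  # start at k = 3, step by 2 to keep k odd
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    error = 1 - model.score(X_test, y_test)  # misclassification rate on the test set
    if error < best_error:
        best_k, best_error = k, error

print(f"best k = {best_k}, error = {best_error:.3f}")
```

Minimizing the error rate and maximizing accuracy pick out the same \(k\), since accuracy = 1 − error.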

Types

|  | Classification | Regression |
| --- | --- | --- |
| Output | Class label is the majority label of the \(k\) nearest neighbors | Predicted value is the average of the continuous labels of the \(k\) nearest neighbors |
| Steps | 1. Compute the distance between the test record and all train records <br> 2. Identify the \(k\) nearest neighbors of the test record (low distance = high similarity) <br> 3. Use majority voting to find the class label of the test sample | Same steps, but take the average of the neighbors' labels instead of voting |
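
A from-scratch sketch of both variants following the steps above, assuming Euclidean distance (the source does not fix a specific metric):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_test, k=3, task="classification"):
    """Predict the label/value of a single test record from its k nearest neighbors."""
    # Step 1: compute the distance between the test record and all train records
    distances = np.linalg.norm(X_train - x_test, axis=1)
    # Step 2: identify the k nearest neighbors (low distance = high similarity)
    neighbors = np.argsort(distances)[:k]
    # Step 3: majority vote (classification) or average (regression)
    if task == "classification":
        return Counter(y_train[neighbors]).most_common(1)[0][0]
    return y_train[neighbors].mean()

X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([4.5, 5.0]), k=3))  # -> 1
```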

Distance-Weighted KNN

Closer points are given larger weights in the majority-voting scheme, so near neighbors influence the prediction more than far ones (see the sketch below).
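
The source does not specify the weight function; a common choice is inverse squared distance, which is what this sketch assumes:

```python
import numpy as np
from collections import defaultdict

def weighted_knn_classify(X_train, y_train, x_test, k=3, eps=1e-9):
    """Classify x_test; each of the k nearest neighbors votes with weight 1 / d^2."""
    distances = np.linalg.norm(X_train - x_test, axis=1)
    neighbors = np.argsort(distances)[:k]
    votes = defaultdict(float)
    for i in neighbors:
        votes[y_train[i]] += 1.0 / (distances[i] ** 2 + eps)  # closer -> larger weight
    return max(votes, key=votes.get)
```

With these weights, a single very close neighbor can outvote several distant ones within the same \(k\).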
