
Do you know K-Nearest Neighbors?

K-Nearest Neighbors (KNN) is an instance-based algorithm. You won't build an ML model with trainable parameters here; instead, a prediction is made by comparing the query point directly against the already known data.


The algorithm is "fitted" with labeled training data (having many features). When a prediction is queried, it compares the features of the given data point with every point in the training set, then returns the K most similar (closest by numerical distance) points. The predicted class is the majority class among those K neighbors.
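The idea above can be sketched in a few lines of plain Python. This is a minimal, from-scratch illustration (the cluster coordinates and labels below are made up for the example), not a production implementation:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Compute the Euclidean distance from the query to every training point
    distances = [(math.dist(x, query), label) for x, label in zip(train_X, train_y)]
    # Keep the k closest points and take a majority vote on their labels
    distances.sort(key=lambda pair: pair[0])
    k_labels = [label for _, label in distances[:k]]
    return Counter(k_labels).most_common(1)[0][0]

# Toy dataset: two clusters in 2-D
train_X = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
train_y = ["A", "A", "A", "B", "B", "B"]

print(knn_predict(train_X, train_y, (2, 2), k=3))  # query near cluster A
```

Notice that "fitting" is trivial: all the work happens at prediction time, which is exactly what "instance-based" means.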

KNN can be implemented in Python using a library called scikit-learn, which contains a built-in implementation.
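For example, here is a short sketch using scikit-learn's `KNeighborsClassifier` on the bundled Iris dataset (assuming scikit-learn is installed):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load a small labeled dataset and hold out a test split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# "Fitting" KNN essentially just stores the training data;
# the distance comparisons happen when predict/score is called
clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on the held-out split
```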

How do you decide the value of K? Does it matter?

The value of K decides how well your model fits the data. If the value is very low, say 1, the predicted class is simply that of the single closest data point, so even two nearly identical query points might get different class outputs. This means your model is overfitted, or has very high variance.

If the value of K is very high, say 10, the predicted class is the majority among the 10 closest points. Suppose 3-4 points of one class are very close to the given point while the other 6-7 belong to a different class and lie farther away; the farther class would still be assigned to the given data point. This means your model is underfitted, or has high bias.

An appropriate value has to be chosen for K. This won't happen in one go; it will require some experimentation.

The value K here is a 'hyperparameter'. That is, it cannot be learned by the algorithm itself but has to be provided by the developer.
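One common way to do that experimentation is to score several candidate values of K with cross-validation and keep the best one. A rough sketch, again using scikit-learn's Iris dataset as a stand-in for your own data:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Try several candidate values of K and record the mean CV accuracy of each
scores = {}
for k in range(1, 16, 2):  # odd values help avoid tie votes
    clf = KNeighborsClassifier(n_neighbors=k)
    scores[k] = cross_val_score(clf, X, y, cv=5).mean()

# Pick the K with the highest cross-validated score
best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 3))
```

The same search could be done with `sklearn.model_selection.GridSearchCV`; the explicit loop just makes the idea visible.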



