
Use K-nearest Neighbors To Find Similar Datapoints with Python and Scikit-learn

We'll continue with the iris dataset to implement k-nearest neighbors (KNN), which predicts a data point's label based on its similarity to other data points. We'll visualize how the KNN algorithm works by watching it predict a point's label from the labels of its nearest neighbors. We'll also examine the confusion matrix a bit further.

KNN can be used for both classification and regression problems.

KNN is well suited to low-dimensional data (data without too many input variables). It is not a good fit for imbalanced datasets, and it can be computationally expensive, since predicting each new point requires comparing it against the training data.
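The workflow described above can be sketched as follows. This is a minimal illustration, not the lesson's exact code (which is member-only): it assumes scikit-learn's built-in copy of the iris dataset, a default distance metric, and k=5 neighbors, none of which are confirmed by the text.

```python
# Hypothetical sketch of KNN classification on iris with a confusion matrix.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

# Load the iris dataset: 150 samples, 4 features, 3 species labels.
X, y = load_iris(return_X_y=True)

# Hold out a test set so the confusion matrix reflects unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

# Each prediction is the majority label among the 5 nearest training points.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)

# Rows are true classes, columns are predicted classes; off-diagonal
# entries count misclassifications.
print(confusion_matrix(y_test, y_pred))
```

Because KNN stores the whole training set and measures distances at prediction time, it has no real "training" step, which is also why it gets expensive as the dataset grows.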
