K-means++ is often used because it is both faster and more accurate than regular K-means.
K-means++ initialization differs from regular K-means initialization in that it is based on a probabilistic approach to centroid definition. The algorithm tries to initialize centroids that are far apart from each other.
Following the first, random centroid initialization, the next point is chosen based on a probability that depends on the distance to the first point: the further apart the point is, the more probable it is to be chosen.
This adds complexity to the K-means algorithm, but it reduces the probability of a bad initialization and thus of a bad clustering result.
K-means++ uses a different seeding technique for defining the centroids than standard K-means.
The steps are the following:
- The smallest distance between a point x and the nearest already-chosen centroid is denoted by D(x)
- A first centroid is chosen uniformly at random from the dataset
- The next centroid is chosen from the dataset with probability D(x)² / Σ D(x)²
- Step 3 is repeated until k centroids are chosen
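The steps above can be sketched in a few lines of Python. This is a minimal illustration, not a library implementation: the function name `kmeans_pp_init` is my own, and points are assumed to be tuples of floats.

```python
import random

def kmeans_pp_init(points, k):
    """K-means++ seeding: pick k centroids, each new one sampled with
    probability proportional to its squared distance D(x)^2 to the
    nearest centroid already selected."""
    # Step 1: the first centroid is chosen uniformly at random.
    centroids = [random.choice(points)]
    while len(centroids) < k:
        # D(x)^2 for every point: squared distance to the nearest centroid.
        d2 = [min(sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                  for c in centroids)
              for x in points]
        # Sample the next centroid with probability D(x)^2 / sum of D(x)^2.
        total = sum(d2)
        r = random.uniform(0, total)
        acc = 0.0
        for x, w in zip(points, d2):
            acc += w
            if acc >= r:
                centroids.append(x)
                break
    return centroids
```

Note that an already-chosen centroid has D(x) = 0, so it cannot be picked twice: the weighting automatically pushes new centroids toward points that are far from all existing ones.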
K-means++ is said to be O(log k)-competitive. The O notation is used to describe the asymptotic behavior of functions: it characterizes how a function behaves for large arguments in a simple but rigorous way, so that the growth of several functions can be compared with one another. The O notation is commonly used to denote an asymptotic upper bound; tight asymptotic bounds are more formally and properly denoted by the letter Θ.
K-means attempts to minimize a "potential" that is calculated as the sum of the squared distances of points from their cluster centers. For any dataset, the expected potential of a K-means++ solution is at most 8(ln k + 2) times the potential of the best possible solution. This bound is shortened to O(log k).
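The potential being bounded here is straightforward to write down; a minimal sketch (the helper name `potential` is my own) might look like:

```python
def potential(points, centroids):
    """The k-means 'potential' phi: the sum of squared distances of
    each point to its nearest centroid."""
    return sum(min(sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                   for c in centroids)
               for x in points)
```

The O(log k) guarantee says that, in expectation, the potential of the centroids returned by K-means++ seeding exceeds the optimal potential by at most this logarithmic factor, for any input.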
Reference: David Arthur and Sergei Vassilvitskii, "k-means++: The Advantages of Careful Seeding", SODA 2007.
- K-means: used to find clusters in a data space, where clusters group data points that share similar features with one another.
- Unsupervised learning: a branch of machine learning that tries to make sense of data that has not been labeled, classified, or categorized by extracting features and patterns on its own.
- Spectral clustering: a simple implementation of clustering that can be solved efficiently by standard linear algebra software.
- Hierarchical clustering
- Density clustering