K-means

K-means is used to find clusters in a data space, where each cluster groups data points that share similar features.

Introduction



The K-means algorithm is a clustering algorithm that divides a set of objects into K partitions based on their attributes. Each cluster is identified by a centroid, or midpoint. The algorithm follows an iterative procedure.



Here are the steps that allow K-means to converge to a separation of the data:

  • It initially creates K partitions and assigns the data points to each partition either randomly or using some known heuristic;
  • The algorithm then calculates the centroid of each group; distances are computed as the Euclidean distance between point and centroid, and each point is assigned to the centroid given by the formula:
\underset{c_i \in C}{\operatorname{argmin}}\ \mathrm{dist}(c_i, x)^2

where c_i is a centroid in the set C (which contains all the centroids), x is a data point, and dist is the standard Euclidean distance.



  • The centroids of the new clusters are recalculated repeatedly until the algorithm converges. The new value of a centroid is the average of all the data points assigned to that cluster. In mathematical terms, this gives the following expression:
C_i = \frac{1}{|S_i|} \sum_{x_i \in S_i} x_i

where S_i is the set of data points assigned to the i-th cluster and |S_i| is its size. The new centroid position is obtained from the average of all the data points assigned to the cluster in the previous step.
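To make the procedure concrete, below is a minimal NumPy sketch of the two alternating steps, assignment and centroid update. The function name, parameters, and initialization scheme are illustrative choices, not taken from any particular library:

    import numpy as np

    def kmeans(X, k, n_iters=100, seed=0):
        """Minimal K-means sketch: X is an (n, d) array, k the number of clusters."""
        rng = np.random.default_rng(seed)
        # Step 1: create K partitions by picking K distinct data points
        # at random as the initial centroids.
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iters):
            # Step 2: assign each point to its nearest centroid
            # (the argmin over squared Euclidean distances).
            dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
            labels = dists.argmin(axis=1)
            # Step 3: recompute each centroid as the mean of the points
            # assigned to it (keep the old centroid if a cluster is empty).
            new_centroids = np.array([
                X[labels == i].mean(axis=0) if np.any(labels == i) else centroids[i]
                for i in range(k)
            ])
            # Converged: the centroids no longer move.
            if np.allclose(new_centroids, centroids):
                break
            centroids = new_centroids
        return centroids, labels

A call such as centroids, labels = kmeans(X, k=3) would return the final centroid positions and the cluster index assigned to each point.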

K-means clustering is one of the simplest and most effective algorithms in unsupervised learning. Clusters represent groups of objects that share a particular similarity with one another, and their number is chosen a priori, before the execution of the algorithm.



K-means works by identifying K centroids, imaginary points in space that represent the center of a grouping, and by assigning each point to the nearest cluster. The variable K is defined by the data scientist and determines the number of centroids (or clusters) to be identified.



Each point is assigned to a specific cluster by calculating its distance from each centroid and selecting the closest one. For each point, the error is the distance from the centroid of the cluster to which it is assigned.

The value of K is arbitrary, but there are several ways to calculate the optimal value, one of which is the Elbow method.



How to find the ideal number of clusters

Elbow method

Researchers using unsupervised learning must determine the appropriate number of clusters to use in a given problem.



The question is very pertinent because, if researchers already had this information, as in supervised settings, the assignments could be derived from the labels and categories that are already defined. Here, instead, the number of clusters is defined a priori and the cluster memberships are then calculated. One of the most immediate methods for choosing K is the so-called Elbow method.



The Elbow method consists of computing the total sum of the distances between each point and its nearest centroid. Since increasing the number of clusters produces smaller groupings and smaller distances, this sum decreases as K increases, and vice versa.

As an extreme example, if we choose a value of K equal to the number of data points we have, the sum will be zero, because every centroid coincides with a data point and each distance is zero.



The goal of this process is to find the point where increasing K causes only a very small decrease in the sum, while decreasing K increases it sharply.

This sweet spot is called the elbow point.



In the example described, the elbow point falls at the value 4, which is then the number of clusters with which to initialize the K-means algorithm.
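As a sketch of how the Elbow method can be carried out in practice, the snippet below computes this sum for a range of K values using scikit-learn, whose inertia_ attribute holds the total within-cluster sum of squared distances; the data set here is a random placeholder:

    import numpy as np
    from sklearn.cluster import KMeans

    X = np.random.rand(200, 2)  # placeholder data; substitute a real data set

    # Fit K-means for a range of K and record the total within-cluster
    # sum of squared distances (scikit-learn calls this "inertia").
    inertias = []
    for k in range(1, 11):
        model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        inertias.append(model.inertia_)

    # Inspecting (or plotting) inertia against K reveals the elbow:
    # the K beyond which the sum decreases only slightly.
    for k, inertia in zip(range(1, 11), inertias):
        print(k, round(inertia, 2))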



Advantages and disadvantages of using K-means

Advantages:

  • It is possible to vary the initial position of the centroids to try to reduce the dependence on the initial conditions
  • Efficient in managing large amounts of data
  • The algorithm often converges to a local optimum



Disadvantages:

  • K-means works only on numerical values, as it minimizes a cost function by calculating the averages of the clusters
  • K-means is difficult to use when trying to find clusters of non-convex shape
  • It is necessary to set K a priori
  • Situations can occur in which one or more clusters end up with no data points assigned to them (K is too large)



Applications

K-means clustering can be used for numerous applications. Here are some examples.

  • Customer segmentation: clusters identify customers based on their behavior, allowing the company to deliver specific products to specific users
  • Pattern recognition: K-means can be used to distinguish between signal and noise, which applies to essentially all scientific fields
  • Identification of outliers: outliers are data points that differ greatly from all the other elements of a data set. Identifying them can serve two purposes: eliminating anomalous values that may be caused by errors, or isolating particular cases that may have a certain importance for the business (see the sketch below)
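As an illustration of the outlier use case, here is a small sketch that flags points lying unusually far from their assigned centroid; the three-standard-deviation threshold is an arbitrary illustrative choice:

    import numpy as np
    from sklearn.cluster import KMeans

    X = np.random.rand(300, 2)  # placeholder data

    model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

    # Distance from each point to the centroid of its assigned cluster.
    assigned_centroids = model.cluster_centers_[model.labels_]
    distances = np.linalg.norm(X - assigned_centroids, axis=1)

    # Flag points more than three standard deviations above the mean
    # distance; points above the threshold are outlier candidates.
    threshold = distances.mean() + 3 * distances.std()
    outliers = X[distances > threshold]
    print(f"Flagged {len(outliers)} potential outliers")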

K-means is usually implemented in Python, R, Octave, or Matlab.
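In Python, for example, the scikit-learn library provides a ready-made implementation; a minimal usage sketch with illustrative toy data looks like this:

    import numpy as np
    from sklearn.cluster import KMeans

    X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0],
                  [8.0, 8.0], [1.0, 0.6], [9.0, 11.0]])

    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(model.labels_)           # cluster index assigned to each point
    print(model.cluster_centers_)  # final centroid positions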


People

  • J. MacQueen (coined the term "k-means" in 1967)

Further reading

  • Constrained K-means Clustering with Background Knowledge, by Kiri Wagstaff, Claire Cardie, Seth Rogers, and Stefan Schroedl (PDF)
  • K-Means Clustering: From A to Z, by Azika Amelia (Web)
  • Some Methods for Classification and Analysis of Multivariate Observations, by J. MacQueen (PDF)
  • Understanding K-means Clustering in Machine Learning, by Dr. Michael J. Garbade (Web)

Documentaries, videos and podcasts

  • Machine Learning | Coursera
