KMeans Explained, Practical Guide To Data Clustering & How To In Python

by Neri Van Otten | Aug 23, 2023 | Data Science, Machine Learning

What is KMeans?

KMeans is a popular clustering algorithm used in machine learning and data analysis. It’s used to partition a dataset into distinct, non-overlapping clusters. The goal of KMeans is to assign each data point to the nearest cluster centroid while minimizing the sum of squared distances between data points and their assigned centroids.
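
In other words, with cluster centroids \(\mu_1, \dots, \mu_K\) and clusters \(C_1, \dots, C_K\), KMeans minimizes the within-cluster sum of squares, often reported as the inertia:

\[
J = \sum_{j=1}^{K} \sum_{x \in C_j} \lVert x - \mu_j \rVert^2
\]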

KMeans partitions a dataset into distinct, non-overlapping clusters.

The KMeans algorithm explained step-by-step

1. Initialization:

  • Choose the number of clusters, K.
  • Initialize K cluster centroids. This can be done by randomly selecting K data points from the dataset.

2. Assignment:

  • For each data point in the dataset, calculate the distance (usually the Euclidean distance) to each of the K centroids.
  • Assign the data point to the cluster corresponding to the nearest centroid.

3. Update Centroids:

  • After all data points are assigned to clusters, calculate the mean of all data points in each cluster. This mean becomes the new centroid for that cluster.

4. Re-Assignment:

  • Repeat the assignment step using the updated centroids. Data points are reassigned to clusters based on the new centroids.

5. Iteration:

  • Repeat the Update Centroids and Re-Assignment steps iteratively until a stopping criterion is met, such as a maximum number of iterations or a sufficiently small change in the centroids between iterations.

6. Convergence:

  • The algorithm converges when the centroids no longer change significantly between iterations, and the assignments become stable.

7. Final Result:

  • Once the algorithm converges, the data points are effectively partitioned into K clusters based on their proximity to the centroids.

8. Choosing Optimal K:

  • Determining the optimal number of clusters, K, can be a challenge. One common approach is the “elbow method,” where you plot the sum of squared distances (inertia) for different values of K and look for an “elbow point” where the rate of decrease in inertia slows down.

9. Centroid Initialization Strategies:

  • While random initialization is common, it can lead to suboptimal solutions. KMeans++ is a popular initialization technique that spreads the initial centroids apart across the data, improving convergence speed and the quality of the results.

10. Handling Convergence to Local Optima:

  • To mitigate convergence to local optima, you can run the KMeans algorithm multiple times with different initializations and select the best result based on a criterion such as the lowest sum of squared distances. (A from-scratch sketch of these steps follows below.)
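
To make these steps concrete, below is a minimal from-scratch sketch of the algorithm in NumPy. It is a single run with random initialization for simplicity; as noted above, a production implementation would use KMeans++ initialization and multiple restarts.

import numpy as np

def kmeans(data, k, max_iters=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Initialization: pick k distinct data points as starting centroids
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(max_iters):
        # 2. Assignment: each point joins the cluster of its nearest centroid
        distances = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # 3. Update: each centroid becomes the mean of its assigned points
        # (an empty cluster keeps its previous centroid)
        new_centroids = np.array([
            data[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        # 5.-6. Convergence: stop once the centroids barely move
        if np.linalg.norm(new_centroids - centroids) < tol:
            centroids = new_centroids
            break
        centroids = new_centroids
    return labels, centroids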

Remember that the choice of a distance metric, initialization strategy, and convergence criteria can influence the efficiency and effectiveness of the KMeans algorithm.

Additionally, KMeans assumes that clusters are spherical and equally sized, which might not always match the underlying data distribution. Therefore, it’s essential to understand your data and consider other clustering algorithms if KMeans doesn’t suit your specific needs.

How to choose the right K?

Choosing the right number of clusters, K, for KMeans is an important task, and there are several methods you can use to make an informed decision. Here are a few commonly used techniques:

1. Elbow Method: Plot the sum of squared distances (inertia) between data points and their assigned centroids for different values of K. The idea is to look for a point in the plot where the decrease in inertia starts to slow down, resembling an “elbow.” This point can indicate a reasonable number of clusters.

Here is a simple example in Python to show how this can be done:

from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
import numpy as np

# Sample data: two loose groups in 2D
data = np.array([[1, 2], [5, 8], [1.5, 1.8], [8, 8], [1, 0.6], [9, 11], [7, 5], [2, 3], [1.1, 1.3], [9, 9], [1.5, 1.6], [9, 11]])

inertias = []
for k in range(1, 11):
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
    kmeans.fit(data)
    inertias.append(kmeans.inertia_)

plt.plot(range(1, 11), inertias, marker='o')
plt.xlabel('Number of Clusters (K)')
plt.ylabel('Inertia')
plt.title('Elbow Method')
plt.show()

An example of the elbow method with the optimum K=2.

2. Silhouette Score: The silhouette score measures how similar a point is to its own cluster (cohesion) compared to other clusters (separation). For each point, s = (b − a) / max(a, b), where a is the mean distance to the other points in its cluster and b is the mean distance to the points in the nearest other cluster. A higher average silhouette score indicates that points are well matched to their own cluster and poorly matched to neighbouring clusters.

Here is an example in Python as to how this can be done:

from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
import matplotlib.pyplot as plt
import numpy as np

# Sample data: two loose groups in 2D
data = np.array([[1, 2], [5, 8], [1.5, 1.8], [8, 8], [1, 0.6], [9, 11], [7, 5], [2, 3], [1.1, 1.3], [9, 9], [1.5, 1.6], [9, 11]])

# The silhouette score needs at least 2 clusters, so start at k=2
silhouette_scores = []
for k in range(2, 11):
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
    labels = kmeans.fit_predict(data)
    score = silhouette_score(data, labels)
    silhouette_scores.append(score)

plt.plot(range(2, 11), silhouette_scores, marker='o')
plt.xlabel('Number of Clusters (K)')
plt.ylabel('Silhouette Score')
plt.title('Silhouette Score Method')
plt.show()

The highest silhouette score is when K=2.

3. Gap Statistic: The gap statistic compares the clustering’s performance on the actual data to its performance on random reference data with no cluster structure. A larger gap indicates a better-supported number of clusters.
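
Here is a simplified sketch in Python (the full gap statistic procedure also uses the standard error of the reference dispersions to pick K; this version just computes the gap itself):

import numpy as np
from sklearn.cluster import KMeans

def gap_statistic(data, k, n_refs=10, seed=0):
    # Inertia (within-cluster dispersion) on the real data
    inertia = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(data).inertia_
    # Inertia on reference datasets drawn uniformly over the data's bounding box
    rng = np.random.default_rng(seed)
    lo, hi = data.min(axis=0), data.max(axis=0)
    ref_inertias = [
        KMeans(n_clusters=k, n_init=10, random_state=seed)
        .fit(rng.uniform(lo, hi, size=data.shape)).inertia_
        for _ in range(n_refs)
    ]
    # Gap: how much better the clustering fits real data than structureless data
    return np.mean(np.log(ref_inertias)) - np.log(inertia)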

4. Cross-Validation: You can use cross-validation techniques to evaluate the performance of KMeans with different values of K. For example, you could split your data into training and validation sets and measure the clustering performance using metrics like the silhouette score or the Davies-Bouldin index (where lower values are better).
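
As a minimal illustration, scikit-learn provides the Davies-Bouldin index directly; here it is evaluated on the toy data used above:

from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score
import numpy as np

data = np.array([[1, 2], [5, 8], [1.5, 1.8], [8, 8], [1, 0.6], [9, 11], [7, 5], [2, 3], [1.1, 1.3], [9, 9], [1.5, 1.6], [9, 11]])

for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    print(f"K={k}: Davies-Bouldin index = {davies_bouldin_score(data, labels):.3f}")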

5. Domain Knowledge: Your domain expertise can sometimes provide insights into the appropriate number of clusters. For instance, if you are clustering customer data, you might have a business reason to believe there are specific distinct segments.

Remember that these methods are not always definitive, and it’s common to use a combination of approaches to make a more confident decision. It’s also good practice to visualize the clustering results with different K values to understand the patterns and structure in your data.

How to choose a KMeans distance metric?

KMeans uses distance metrics to measure the similarity between data points and cluster centroids. The most commonly used distance metric is the Euclidean distance, but other metrics can be employed based on the data’s nature and the problem’s specific requirements. Here are some distance metrics that can be used with KMeans:

  1. Euclidean Distance: This is the most common distance metric used in KMeans. It measures the straight-line distance between two points in the Euclidean space.
  2. Manhattan Distance (City Block Distance): This metric sums the absolute differences between the coordinates of two points. It’s also known as the L1 distance; the name comes from the grid-like route a taxi would travel along the streets of a city.
  3. Cosine Similarity: Cosine similarity measures the cosine of the angle between two vectors. It’s often used in text mining and natural language processing when considering the similarity of term frequencies in documents.
  4. Correlation Distance: Correlation distance is derived from the correlation between two vectors, so highly correlated vectors are treated as close. It’s useful when the data’s magnitude is less important than its direction.
  5. Mahalanobis Distance: Mahalanobis distance accounts for correlations and scales between data dimensions. It’s especially useful when the data is not isotropic (each dimension has a different variance).
  6. Hamming Distance: Hamming distance measures the number of positions at which two equal-length strings of symbols differ. It’s commonly used for clustering categorical data.
  7. Jaccard Distance: Jaccard distance is used for sets and calculates the proportion of elements that are dissimilar in two sets.
  8. Minkowski Distance: Minkowski distance is a generalization of the Euclidean and Manhattan distances. It’s parameterized by the exponent “p” and can handle different distance calculations based on the value of “p”.

When using a distance metric other than Euclidean distance, it’s important to ensure that the chosen metric aligns with the characteristics of your data and the goals of your analysis.

Different metrics can lead to different results, so experimentation and understanding the domain of your problem are key. Note, however, that scikit-learn’s KMeans supports only the Euclidean distance, because the mean-based centroid update is only guaranteed to reduce squared Euclidean error; for other metrics, consider related algorithms such as k-medoids, whose implementations typically accept an arbitrary distance metric.
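
While scikit-learn’s KMeans itself is Euclidean-only, SciPy can compute most of the metrics listed above, which is useful for custom clustering code or k-medoids-style algorithms. A quick sketch (the metric names are SciPy’s):

import numpy as np
from scipy.spatial.distance import cdist

points = np.array([[1.0, 2.0], [5.0, 8.0]])
centroids = np.array([[1.2, 1.8], [8.0, 9.0]])

for metric in ['euclidean', 'cityblock', 'cosine', 'minkowski']:
    # cdist returns the matrix of pairwise distances between points and centroids
    print(metric, cdist(points, centroids, metric=metric))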

Advantages and disadvantages of KMeans

KMeans clustering, like any algorithm, comes with its advantages and disadvantages. Here’s a breakdown of both:

Advantages

  1. Simplicity: KMeans is relatively easy to understand and implement, making it a great starting point for learning about clustering algorithms.
  2. Efficiency: KMeans is computationally efficient, especially when dealing with many data points; its cost per iteration grows roughly linearly with the number of data points.
  3. Scalability: KMeans can handle large datasets well, especially compared to more complex clustering algorithms.
  4. Interpretability: The results of KMeans are easy to interpret. Each data point is assigned to a cluster, and cluster centroids can provide insight into the characteristics of each cluster.
  5. Spherical Clusters: KMeans works well when the underlying clusters are spherical and of similar sizes. It’s suitable for cases where clusters are relatively compact.
  6. Initialization Techniques: While the initial centroid placement can influence the results, techniques like KMeans++ offer improved initialization strategies.

Disadvantages

  1. Number of Clusters (K): The most significant challenge is determining the optimal number of clusters, K. This can be subjective and might require domain knowledge or trial-and-error.
  2. Sensitive to Initial Placement: The algorithm’s results can vary based on the initial placement of centroids, which can sometimes lead to suboptimal solutions.
  3. Cluster Shape: KMeans assumes that clusters are spherical and equally sized. It doesn’t handle clusters with irregular shapes or varying densities well.
  4. Outliers: Outliers can significantly impact the positions of centroids, pulling them away from the centre of the main cluster.
  5. Non-Convex Clusters: KMeans struggles with clusters that have non-convex shapes or complex structures.
  6. Global Optima: Depending on initialization, the algorithm might converge to local optima instead of the global optimum.
  7. Influence of Features: KMeans treats all features equally, which can be a drawback if some features are more relevant.
  8. Not Probabilistic: KMeans doesn’t provide a probabilistic framework like Gaussian Mixture Models (GMMs), which can lead to less robust results when data distribution assumptions are violated.
  9. Need for Preprocessing: The algorithm can be sensitive to the scale and distribution of features, requiring preprocessing like normalization or standardization.

In summary, KMeans is a straightforward and efficient clustering algorithm suitable for well-separated, spherical clusters. It’s a good starting point for clustering tasks but may not be the best choice for all data types and cluster shapes. Depending on your data characteristics and goals, you might need to explore other clustering methods that can handle more complex structures and densities.

Application of KMeans

KMeans clustering has a wide range of applications across various fields. Here are some common use cases where KMeans is employed:

  1. Customer Segmentation: Businesses use KMeans to segment customers based on purchasing behaviour, demographics, or other relevant features. This helps tailor marketing strategies and product offerings to specific customer groups.
  2. Image Compression: KMeans can reduce the number of colours in an image, thus compressing the image while retaining its visual quality. Each cluster’s centroid represents a colour, and pixels are reassigned to the nearest centroid (see the sketch after this list).
  3. Anomaly Detection: KMeans can help identify anomalies: after clustering the normal data, points that lie far from every centroid can be flagged as outliers.
  4. Document Clustering: KMeans is used to group similar documents, which helps organize extensive collections of text data, news articles, or documents for data retrieval.
  5. Market Segmentation: Like customer segmentation, KMeans can be applied to segment markets based on various attributes, allowing businesses to customize their marketing strategies for different market segments.
  6. Social Network Analysis: KMeans can cluster users in social networks based on their connections, interactions, and shared interests, providing insights into community structures.
  7. Genetic Analysis: In bioinformatics, KMeans can group genes with similar expression patterns across different samples, aiding in gene function discovery.
  8. Image Segmentation: KMeans can partition an image into distinct regions based on colour or texture, which is helpful in computer vision tasks like object detection and image editing.
  9. Stock Market Analysis: KMeans can be used to group stocks based on their historical price movements, helping investors identify similar patterns for trading strategies.
  10. Recommendation Systems: KMeans can cluster users based on their preferences and behaviour, which can be used to recommend products, movies, or content to similar users.
  11. Manufacturing: KMeans can be applied to identify clusters of products or components with similar quality characteristics, improving quality control.
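
As an illustration of the image compression use case, here is a sketch that quantizes an image to a 16-colour palette. A synthetic random image stands in for a real photo, which you would instead load as a pixel array with a library such as Pillow:

import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for an image: a 64x64 grid of random RGB values
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3))
pixels = image.reshape(-1, 3).astype(float)

# Quantize to 16 colours: each centroid becomes one palette colour
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(pixels)
palette = kmeans.cluster_centers_

# Replace every pixel with its palette colour and restore the image shape
compressed = palette[kmeans.labels_].reshape(image.shape).astype(np.uint8)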

While KMeans is versatile, it’s not the best solution for every clustering problem. It’s essential to consider the assumptions and limitations of the algorithm and evaluate whether it suits your specific dataset and objectives. In some cases, more advanced clustering methods might be more appropriate.

KMeans in NLP

KMeans clustering can also be applied effectively in Natural Language Processing (NLP) tasks, where the goal is to group text documents or text data into meaningful clusters based on their content. Here are some use cases and approaches for using KMeans in NLP:

  1. Document Clustering: KMeans can be used to cluster similar documents together. Each document is typically represented as a vector in a high-dimensional space, where each dimension represents a term frequency or some other measure of word importance. The clusters obtained can reveal topics or themes within the document collection.
  2. Topic Modeling: By applying KMeans to a term-document matrix obtained through techniques like TF-IDF (Term Frequency-Inverse Document Frequency), you can identify coherent topics within a collection of documents. The clusters represent different topics, and the top terms in each cluster can provide insights into the topics.
  3. Sentiment Analysis: You can use it to group text snippets or reviews with similar sentiments. By transforming text data into numerical vectors (using techniques like word embeddings or TF-IDF), you can cluster similar sentiment expressions to help understand public opinion or customer feedback.
  4. Document Summarization: KMeans can group similar sentences, extracting representative sentences for summarizing a document. Summarization systems can select sentences from different clusters to create a coherent summary.
  5. Text Classification: KMeans can help preprocess text data for text classification tasks. You can cluster the training data with KMeans and use the cluster assignments as features or as coarse pseudo-labels for the dataset. This can provide a simplified representation of the data for further classification tasks.
  6. Named Entity Recognition: It can be used to group similarly named entities together, which can assist in creating entity lists or dictionaries for tasks like Named Entity Recognition (NER).
  7. Authorship Attribution: It can be applied to group texts by different authors. The idea is that each author might have a unique style or choice of words, and it can help identify patterns in their writing.
  8. Document Grouping for Search Results: When users search for information, KMeans can group search results based on their content, making it easier for users to find relevant documents.

Remember that preprocessing, vectorization, and feature engineering are crucial in NLP tasks before applying KMeans. Techniques like word embeddings, TF-IDF, and dimensionality reduction (such as Principal Component Analysis) are often used to convert text data into numerical vectors that can be fed into the KMeans algorithm. Also, post-processing, visualization, and domain knowledge are essential for interpreting clustering results in NLP.
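
As a minimal sketch of document clustering with TF-IDF (use case 1 above), with four made-up documents standing in for a real corpus and assuming a recent scikit-learn version:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "the stock market rallied as shares rose",
    "investors sold shares after the market dipped",
    "the team scored a late goal to win the match",
    "fans cheered as the team won the championship",
]

# Convert documents to TF-IDF vectors, then cluster them
vectorizer = TfidfVectorizer(stop_words='english')
X = vectorizer.fit_transform(docs)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Show the terms with the highest centroid weight as a rough topic label
terms = vectorizer.get_feature_names_out()
for i, centroid in enumerate(kmeans.cluster_centers_):
    top_terms = [terms[j] for j in centroid.argsort()[::-1][:3]]
    print(f"Cluster {i}: {top_terms}")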

How to implement KMeans clustering in Python with sklearn

Here’s an example of how to perform KMeans clustering in Python using the popular machine learning library scikit-learn:

import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt

# Sample data: six points forming two loose groups in 2D
X = np.array([[1, 2], [5, 8], [1.5, 1.8], [8, 8], [1, 0.6], [9, 11]])

# Define the number of clusters (K)
num_clusters = 2

# Initialize the KMeans model (n_init restarts guard against bad initializations)
kmeans = KMeans(n_clusters=num_clusters, n_init=10, random_state=0)

# Fit the model to the data
kmeans.fit(X)

# Get cluster assignments and centroids
cluster_assignments = kmeans.labels_
centroids = kmeans.cluster_centers_

# Plot the data coloured by cluster, with centroids marked as black crosses
plt.scatter(X[:, 0], X[:, 1], c=cluster_assignments, cmap='rainbow')
plt.scatter(centroids[:, 0], centroids[:, 1], marker='X', s=200, c='black')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('KMeans Clustering')
plt.show()

In this example:

  • We first import the necessary libraries: numpy for data manipulation, KMeans from sklearn.cluster for the clustering algorithm, and matplotlib for visualization.
  • We generate some sample data points in a 2D space.
  • We define the number of clusters (num_clusters) we want to find in the data.
  • We create an instance of the KMeans model with the specified number of clusters, a fixed random_state for reproducibility, and n_init restarts to guard against poor initializations.
  • We fit the model to the data using the fit method.
  • We obtain cluster assignments for each data point using the labels_ attribute and centroids of the clusters using the cluster_centers_ attribute.
  • Finally, we plot the data points with different colours representing their assigned clusters, and we also plot the centroids.

The created plot with the data points and centroids.

Remember that you would replace the sample data with your own dataset in practice. Additionally, you might need to preprocess your data and experiment with different values of num_clusters to find the optimal number of clusters for your specific problem.
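
For instance, a common preprocessing step is standardizing features so that no single feature dominates the distance calculation. A sketch using a scikit-learn pipeline with the X defined above:

from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.pipeline import make_pipeline

# Scaling first stops large-range features from dominating the Euclidean distance
model = make_pipeline(StandardScaler(), KMeans(n_clusters=2, n_init=10, random_state=0))
model.fit(X)
labels = model.named_steps['kmeans'].labels_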

What other clustering algorithms should you consider if KMeans doesn’t suit your specific need?

If KMeans doesn’t suit your specific needs due to its assumptions or limitations, you can consider several alternative clustering algorithms, each with its own strengths and weaknesses. Here are a few popular options:

  1. Hierarchical Clustering: Hierarchical clustering creates a tree-like structure of clusters, allowing you to explore data at different levels of granularity. It doesn’t require you to specify the number of clusters in advance. Agglomerative and divisive are two common types of hierarchical clustering.
  2. DBSCAN (Density-Based Spatial Clustering of Applications with Noise): DBSCAN is excellent for discovering clusters of arbitrary shapes and handling noisy data. It identifies clusters as dense regions separated by areas of lower point density, can manage clusters of different sizes, and doesn’t assume spherical shapes (a quick sketch follows this list).
  3. Mean Shift Clustering: Mean Shift identifies clusters by shifting candidate centroids towards regions of higher point density. It doesn’t assume prior information about cluster shapes or sizes, making it useful for various data distributions.
  4. Gaussian Mixture Models (GMMs): GMMs model data as a combination of several Gaussian distributions. They can capture more complex cluster shapes and handle overlapping clusters. GMMs also provide a probabilistic framework for clustering.
  5. Agglomerative Clustering: Agglomerative clustering is a hierarchical approach that starts with each data point as its own cluster and iteratively merges the closest clusters. It produces a dendrogram that can help in determining the number of clusters.
  6. Spectral Clustering: Spectral clustering works well for data with complex cluster shapes. It uses graph theory to map data points into a low-dimensional space where traditional clustering algorithms can be applied.
  7. OPTICS (Ordering Points to Identify Clustering Structure): OPTICS is an extension of DBSCAN that orders data points based on their density. It can find clusters of varying densities and is less sensitive to the choice of parameters.
  8. Agglomerative Information Bottleneck: This method focuses on finding clusters that preserve the most information while minimizing the number of clusters. It can be handy for applications where maintaining the underlying data structure is crucial.
  9. Affinity Propagation: Affinity Propagation doesn’t require specifying the number of clusters in advance. It identifies exemplars and assigns other data points to these exemplars based on their similarity.
  10. BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies): BIRCH is suitable for large datasets. It builds a tree-like structure representing the data distribution, making clustering and outlier detection efficient.
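
As a quick illustration of option 2, DBSCAN recovers the two interleaved crescents of scikit-learn’s make_moons dataset, a non-convex shape on which KMeans typically fails (the eps and min_samples values below are illustrative choices):

from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Two crescent-shaped, non-convex clusters
X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)

# Density-based clustering: no K required; a label of -1 marks noise points
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
print(set(labels))  # typically {0, 1} for the two moons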

The choice of clustering algorithm depends on your data’s characteristics, objectives, and assumptions. It’s often a good idea to experiment with multiple algorithms and evaluate their performance using appropriate metrics for your specific use case.

Conclusion

KMeans clustering is a widely used algorithm that can effectively group data points into clusters based on similarity. It has advantages and limitations that should be considered when deciding whether to use it for a particular task. Here’s a brief conclusion:

Advantages:

  • Simple and easy to implement.
  • Efficient and can handle large datasets.
  • Interpretable results with clear cluster assignments and centroids.
  • Suitable for well-separated spherical clusters.
  • It can be used as a preprocessing step for other tasks.

Limitations:

  • Requires specifying the number of clusters (K) beforehand.
  • Sensitive to the initial placement of centroids, which can lead to suboptimal results.
  • Assumes spherical and equally sized clusters.
  • Struggles with non-convex clusters or clusters of varying densities.
  • Not probabilistic and doesn’t handle outliers well.

KMeans can be a great starting point for various clustering tasks, especially when clusters are relatively well-behaved. However, other algorithms like hierarchical clustering, DBSCAN, and Gaussian Mixture Models (GMMs) might be more appropriate for more complex data distributions or non-spherical clusters.

Remember to preprocess your data appropriately, choose an optimal value for K using techniques like the elbow method or silhouette score, and consider the implications of your data’s characteristics when deciding whether KMeans is the right choice for your problem.

About the Author

Neri Van Otten

Neri Van Otten is the founder of Spot Intelligence, a machine learning engineer with over 12 years of experience specialising in Natural Language Processing (NLP) and deep learning innovation. Dedicated to making your projects succeed.

