Tutorial On How To Implement Document Clustering In Python With K-means


Introduction to document clustering and its importance

Document clustering, also known as text clustering, is the task of grouping similar documents together based on their content. This unsupervised machine learning method is used to analyse and organise large collections of text data.

Document clustering is important because it can uncover patterns and structure in unorganised text data. This is useful for many tasks, such as text mining, natural language processing, and information retrieval.

Documents can be clustered by how similar they are.

Some examples of how document clustering can be used include:

  • Grouping news articles by topic to improve search results and recommendation systems
  • Organising customer feedback by theme to identify critical issues and improve customer service
  • Clustering research papers by topic to assist with literature reviews and research organisation

Document clustering can help make sense of large and complex text data, making it more manageable and actionable.

Overview of standard document clustering techniques in Python

There are several standard techniques used for document clustering, including:

  1. K-means: This well-known algorithm partitions a dataset into k clusters, where k is a parameter specified by the user. K-means tends to produce roughly spherical clusters, each represented by the mean (centroid) of its points.
  2. Hierarchical Clustering: With this approach, the nested grouping of the data is represented by a tree-like structure called a dendrogram. A hierarchy of clusters is built by merging or dividing clusters. Hierarchical clustering comes in two flavours:
    • Agglomerative: This bottom-up method starts with each data point as its own cluster and iteratively merges the closest clusters until a stopping criterion is satisfied.
    • Divisive: This top-down method starts with all data points in one large cluster and recursively splits it until a stopping criterion is met.
  3. Expectation-Maximization (EM): This technique estimates the parameters of a mixture of probability distributions assumed to have generated the data. By modelling the documents as samples from such a mixture, the EM algorithm assigns each one to the component most likely to have produced it.
  4. Latent Dirichlet Allocation (LDA): This generative probabilistic model clusters text data according to the latent topics that produce it. LDA is a popular topic modelling method that has also been applied to document clustering.
  5. DBSCAN: Density-Based Spatial Clustering of Applications with Noise (DBSCAN) groups together data points that are closely packed (points with many close neighbours) and labels points that lie alone in low-density regions as outliers (noise points).

Each of these algorithms has benefits and drawbacks of its own, and the technique to be used will depend on the specifics of the dataset and the desired result.
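
All five techniques are available through standard Python libraries. The following is a minimal sketch, assuming scikit-learn is used throughout, of how each technique maps onto an estimator; the hyperparameter values shown are illustrative placeholders, not recommendations. Note that AgglomerativeClustering and GaussianMixture expect dense input, so sparse document vectors need to be converted with .toarray().

from sklearn.cluster import KMeans, AgglomerativeClustering, DBSCAN
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import LatentDirichletAllocation

# K-means: the number of clusters k must be chosen up front
kmeans = KMeans(n_clusters=3, random_state=42)

# Agglomerative (bottom-up) hierarchical clustering
agglomerative = AgglomerativeClustering(n_clusters=3)

# Gaussian mixture model fitted with the Expectation-Maximization algorithm
em = GaussianMixture(n_components=3, random_state=42)

# LDA topic model; each document can be assigned to its dominant topic
lda = LatentDirichletAllocation(n_components=3, random_state=42)

# DBSCAN: clusters dense regions and labels isolated points as noise (-1)
dbscan = DBSCAN(eps=0.5, min_samples=5)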

Selection of a document clustering technique for your dataset

The best method for a particular dataset will depend on several factors, such as the dataset’s size and complexity, the desired result, and the available computing resources.

  • K-means is a good choice if the dataset is small to medium-sized and the number of clusters is known or can be estimated. It scales well to big datasets and is relatively quick and simple to implement.
  • Hierarchical clustering is a good choice if the number of clusters is unknown or a nested grouping of the documents is useful. The number of clusters does not need to be fixed in advance, although the method can become computationally expensive on very large datasets.
  • Expectation-Maximization (EM) or Latent Dirichlet Allocation (LDA) may be appropriate if the dataset has many features and the objective is to find latent structure in the data. With these methods, you can find patterns in the data that are not obvious at first glance.
  • DBSCAN is a good choice if the dataset contains many points closely clustered in some feature space regions and separated by large regions of low point density.
  • If you have little to no prior knowledge about the nature of the data and want to explore the dataset and discover patterns, it is a good idea to start with general-purpose techniques, such as hierarchical clustering and k-means, and then move on to more specialised techniques, like LDA or EM.

It’s important to keep in mind that to choose the right method, you need to think about both the problem and the data. You should try out different methods, evaluate how well they work using the available evaluation metrics, and then choose the method that best solves the problem.

Preparation of the dataset for clustering

Before any document clustering method can be applied, the dataset must be prepared: the text needs to be cleaned, tokenized, vectorized, and optionally reduced in dimensionality. A short code sketch of these steps follows the list below.

  1. Text cleaning: This step eliminates extraneous or unnecessary text, including punctuation, stop words, special characters, and numbers. By eliminating data noise, this step enhances the clustering algorithm’s accuracy.
  2. Tokenization: In this step, the text is broken up into tokens, typically single words or short phrases, which are the units the algorithm works with.
  3. Stemming/Lemmatization: In this step, inflectional endings are stripped from words, either with a rule-based stemmer or with a lexicon-based lemmatizer that returns each word to its base form. Mapping related word forms to a single token helps reduce the dataset’s dimensionality.
  4. Vectorization: In this step, the text is transformed into a numerical representation that the clustering algorithm can use as input. Bag-of-words, term frequency-inverse document frequency (TF-IDF), and word embeddings are a few techniques for vectorizing text. The best method will depend on the specific characteristics of the dataset and the desired result. Each of these methods has its benefits and drawbacks.
  5. Dimensionality reduction: By removing correlated features or projecting the data onto a lower-dimensional space, this step reduces the number of features. When the dataset has a lot of features and the clustering algorithm needs to be sped up, this step is especially helpful.
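
Below is a minimal sketch of these pre-processing steps, using NLTK for cleaning, tokenization, stop-word removal, and lemmatization, and scikit-learn’s TfidfVectorizer for vectorization. The preprocess helper and the two example sentences are purely illustrative and would be adapted to your own dataset.

import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer

# One-off downloads of the NLTK resources used below
# (newer NLTK versions may also require the 'punkt_tab' resource)
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')

def preprocess(text):
    # Text cleaning: lowercase and remove punctuation, numbers and symbols
    text = re.sub(r'[^a-z\s]', ' ', text.lower())
    # Tokenization: split the text into individual word tokens
    tokens = nltk.word_tokenize(text)
    # Stop-word removal and lemmatization
    stop_words = set(stopwords.words('english'))
    lemmatizer = WordNetLemmatizer()
    return ' '.join(lemmatizer.lemmatize(t) for t in tokens if t not in stop_words)

# Illustrative documents only; replace with your own dataset
docs = ["I love playing football on the weekends",
        "I enjoy hiking and camping in the mountains"]
cleaned = [preprocess(d) for d in docs]

# Vectorization: convert the cleaned text into TF-IDF vectors
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(cleaned)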

The dataset is ready to be used as input for the clustering algorithm once it has been prepared. Keep in mind that the quality of the pre-processing largely determines the quality of the clustering results, so it is worth taking the time to prepare the dataset properly.

Implementation of the chosen document clustering technique using Python

Here is a list of Python libraries that can be used for document clustering.

  • scikit-learn: This is a popular machine learning library that provides a wide range of clustering algorithms, including k-means, hierarchical clustering, and EM. It also provides tools for pre-processing and evaluating the clustering results.
  • NLTK: This is a natural language processing library that provides tools for text cleaning, tokenization, stemming, and lemmatization.
  • Gensim: This is a library for topic modelling and document indexing that provides tools for vectorization, dimensionality reduction, and LDA (a brief sketch follows this list).
  • spaCy: This industrial-strength natural language processing library provides tools for tokenization, lemmatization, and part-of-speech tagging.
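
To illustrate how one of these libraries is used in practice, here is a minimal sketch of topic-based clustering with Gensim’s LDA. It assumes the documents have already been tokenized; the toy token lists are for illustration only. Each document is assigned to its most probable topic, which then acts as its cluster label.

from gensim import corpora
from gensim.models import LdaModel

# Toy tokenized documents, purely for illustration
tokenized_docs = [["playing", "football", "weekend"],
                  ["hiking", "camping", "mountains"],
                  ["reading", "books", "movies"]]

# Build a dictionary and a bag-of-words corpus from the tokens
dictionary = corpora.Dictionary(tokenized_docs)
corpus = [dictionary.doc2bow(doc) for doc in tokenized_docs]

# Fit an LDA model with two topics
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=42)

# Assign each document to its most probable topic (its "cluster")
for bow in corpus:
    topic, prob = max(lda.get_document_topics(bow), key=lambda t: t[1])
    print(topic, round(prob, 2))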

Clustering text documents using k-means

Here is an example of how to use the k-means algorithm for document clustering in Python using the scikit-learn library and a sample dataset:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Sample dataset
dataset = ["I love playing football on the weekends",
           "I enjoy hiking and camping in the mountains",
           "I like to read books and watch movies",
           "I prefer playing video games over sports",
           "I love listening to music and going to concerts"]

# Vectorize the dataset
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(dataset)

# Define the number of clusters
k = 2

# Create a k-means model and fit it to the data
km = KMeans(n_clusters=k)
km.fit(X)

# Predict the clusters for each document
y_pred = km.predict(X)

# Print the cluster assignments
print(y_pred)
# Example output (cluster labels may vary between runs): [1 1 0 1 0]

In this example, we first use the TfidfVectorizer to vectorize the dataset. This converts the text into a numerical representation that can be used as input for the k-means algorithm.

Then we specify the number of clusters to be used (in this case, 2) and build a KMeans model. The model is fitted to the data, and each document is assigned to a cluster with the predict method.

In the example output shown above, one cluster contains the documents about football, hiking, and video games, while the other contains the documents about reading, watching movies, and listening to music. With such a small dataset, the exact grouping can vary between runs and depends heavily on the TF-IDF weights.

This is a simple example, but the same process can be applied to larger datasets and more complex text data. The number of clusters can also be adjusted to fit the desired outcome.
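
A useful follow-up, not part of the original example, is to inspect the highest-weighted terms in each cluster centroid to verify the themes. The sketch below assumes the km and vectorizer objects from the example above and a reasonably recent scikit-learn (get_feature_names_out was added in version 1.0).

import numpy as np

# Map the columns of the TF-IDF matrix back to their terms
terms = vectorizer.get_feature_names_out()

# Print the five highest-weighted terms in each cluster centroid
for i, centroid in enumerate(km.cluster_centers_):
    top_terms = [terms[idx] for idx in np.argsort(centroid)[::-1][:5]]
    print(f"Cluster {i}: {', '.join(top_terms)}")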

Evaluation of the clustering results

Assessing the clustering results is a crucial step in ensuring the algorithm behaves as intended. The quality of the results can be evaluated using several metrics, such as:

  1. Silhouette Score: This metric measures how similar each point is to its own cluster compared with other clusters. It ranges from -1 to 1: a score near 1 means the point is well matched to its own cluster, while a score near -1 means it would be better matched to another cluster. A high average silhouette score indicates clearly separated clusters.
  2. Adjusted Rand Index (ARI): This metric measures how closely the predicted labels match known ground-truth labels, correcting for chance. It ranges from -1 to 1, where 1 is a perfect match and values near 0 correspond to random labelling.
  3. Normalized Mutual Information (NMI): This metric measures how much information the predicted labels share with the ground-truth labels, on a scale from 0 (no agreement) to 1 (perfect agreement).
  4. Fowlkes-Mallows index (FMI): This metric is the geometric mean of the pairwise precision and recall between the ground-truth and predicted labels.
  5. V-measure: This metric is the harmonic mean of the homogeneity and completeness of the clustering results.
  6. Davies-Bouldin index (DBI): This metric measures the average similarity between each cluster and its most similar cluster; lower values indicate better-separated clusters.
  7. Calinski-Harabasz index (CHI): This metric measures the ratio of between-cluster dispersion to within-cluster dispersion; higher values indicate denser, better-separated clusters.

Note that ARI, NMI, FMI, and V-measure require ground-truth labels, whereas the silhouette score, DBI, and CHI can be computed from the data and cluster assignments alone. No single metric provides a comprehensive evaluation of clustering results, so it is advisable to use several, and different data types and clustering algorithms may favour different metrics.
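
As a concrete example, the internal metrics above can be computed directly with scikit-learn. This is a minimal sketch assuming the X and y_pred from the earlier k-means example; the label-based metrics (ARI, NMI, FMI, and V-measure) would additionally require ground-truth labels, which the sample dataset does not have.

from sklearn.metrics import silhouette_score, davies_bouldin_score, calinski_harabasz_score

# Davies-Bouldin and Calinski-Harabasz expect dense input
X_dense = X.toarray()

print("Silhouette score:", silhouette_score(X, y_pred))
print("Davies-Bouldin index:", davies_bouldin_score(X_dense, y_pred))
print("Calinski-Harabasz index:", calinski_harabasz_score(X_dense, y_pred))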

Visualization of the document clustering using Python libraries

Viewing the clusters can aid in understanding the dataset’s structure and data grouping. To visualise the document clustering, a variety of Python libraries are available, including:

  • Matplotlib: This is a plotting library that provides a wide range of plotting options, including scatter plots, line plots, and bar plots.
  • Seaborn: This is a library that is built on top of Matplotlib and provides advanced visualisation options such as heatmaps, pair plots, and violin plots.
  • Plotly: This is an open-source library that provides interactive and web-based visualisation options.
  • Bokeh: This is another interactive visualisation library that is well suited for large datasets and streaming data.
  • Yellowbrick: This visualisation library is built on top of Matplotlib and specifically designed for machine learning tasks.

Here is an example of how to visualise the clusters using Matplotlib and Seaborn:

import matplotlib.pyplot as plt
import seaborn as sns

# Convert the sparse TF-IDF matrix to a dense array for plotting
X_dense = X.toarray()

# Create a scatter plot of two TF-IDF dimensions, coloured by the predicted clusters
sns.scatterplot(x=X_dense[:, 0], y=X_dense[:, 5], hue=y_pred, palette='rainbow')
plt.show()

Here, we create a scatter plot using two of the TF-IDF dimensions (columns 0 and 5 of the vectorized data) and colour the points according to the predicted cluster assignments.


The plot shows how the data are spread out within each cluster and makes it easier to see patterns in the data.
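
Individual TF-IDF dimensions are sparse and rarely easy to interpret, so reducing the vectors to two dimensions first usually gives a more readable picture. The following is a minimal sketch, assuming the X and y_pred from the earlier example, that uses PCA from scikit-learn for the projection; other techniques such as t-SNE could be substituted.

import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.decomposition import PCA

# Project the TF-IDF vectors onto their first two principal components
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X.toarray())

# Scatter plot of the 2-D projection, coloured by cluster assignment
sns.scatterplot(x=X_2d[:, 0], y=X_2d[:, 1], hue=y_pred, palette='rainbow')
plt.title('Document clusters after PCA')
plt.show()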

Similarly, the other visualisation libraries can be used to produce different views of the clusters, such as heatmaps and pair plots.

The example above should be considered a starting point, and you may need to adjust the code depending on the specific requirements of your project.

Conclusion

Document clustering is a powerful technique for analysing and organising large amounts of text data. It is a multi-step process that includes pre-processing the dataset, choosing an appropriate clustering method, applying it, evaluating the results, and visualising the clusters.

Several Python libraries can be used to put document clustering strategies into action, and there are several ways to measure the quality of the clustering results. In addition, visualisation libraries can be used to comprehend the dataset’s structure and grouping.

Potential next steps

Potential next steps for further analysis include the following:

  1. Fine-tuning the pre-processing steps: Depending on the dataset, the text cleaning, tokenization, and vectorization steps may need to be changed to improve the quality of the clustering results.
  2. Using different clustering methods: If the results of one method aren’t good enough, it might be helpful to try other methods and compare how well they work.
  3. Applying supervised learning: After clustering the data, it’s possible to use the clusters as features to train a supervised model on the same dataset for classification or regression tasks (a brief sketch follows this list).
  4. Using domain-specific knowledge: If domain-specific knowledge can be used to guide the clustering process, it can be used to improve the quality of the results.
  5. Interpreting the clusters: It’s also important to interpret the clusters and understand the characteristics of the grouped documents. This can help figure out what patterns are going on in the data and guide further analysis.
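
As one possible interpretation of step 3, the sketch below appends each document’s cluster assignment to its TF-IDF features and trains a classifier on the augmented matrix. It assumes the X and y_pred from the earlier k-means example; the labels array is a hypothetical set of downstream class labels and is purely illustrative.

import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.linear_model import LogisticRegression

# Hypothetical downstream class labels, one per document (illustration only)
labels = np.array([0, 0, 1, 0, 1])

# Append each document's cluster assignment as an extra feature column
# (in practice the cluster IDs would usually be one-hot encoded)
cluster_feature = csr_matrix(y_pred.reshape(-1, 1).astype(float))
X_with_clusters = hstack([X, cluster_feature])

# Train a classifier on the augmented feature matrix
clf = LogisticRegression()
clf.fit(X_with_clusters, labels)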

Overall, document clustering in Python is an effective method for making sense of large and complex text data, making it more manageable and valuable. It’s important to remember that the quality of the pre-processing steps determines the quality of the clustering results, so it’s essential to take the time to prepare the dataset properly.

About the Author

Neri Van Otten

Neri Van Otten is the founder of Spot Intelligence, a machine learning engineer with over 12 years of experience specialising in Natural Language Processing (NLP) and deep learning innovation. Dedicated to making your projects succeed.
