MinHash — How To Deal With Finding Similarity At Scale With Python Code To Get Started

by Neri Van Otten | Jan 2, 2023 | Data Science, Natural Language Processing

What is MinHash?

MinHash is a technique for estimating the similarity between two sets. It was first introduced in information retrieval to evaluate the similarity between documents quickly. The basic idea is to hash the elements of the sets and then take the minimum hash value as a representation of the set. Because the minimum value is used, the technique is called MinHash.

MinHash is used in many applications, including plagiarism detection, document classification, and collaborative filtering. It is handy for large sets where it is infeasible to compare all the elements directly. By using MinHash, we can quickly estimate how similar two sets are without comparing them in detail.

How does MinHash work?

MinHash represents a set as a hash value, obtained by taking the minimum hash value over all the elements in the set. To do this, we first need to define a hash function that maps the elements of the set to a large range of hash values. The hash function should spread elements uniformly across that range, so that it behaves like a random permutation of the possible elements; it doesn’t need to be a perfect hash function (i.e., it is allowed to have collisions). The key property is that the probability that two sets share the same minimum hash value equals their Jaccard similarity.

MinHash calculates the similarity between two sets.

To estimate the similarity between two sets, we can compute the MinHash values for both sets and compare them. If the MinHash values match, the sets are likely to be similar; the larger the fraction of matching MinHash values, the higher the estimated similarity.

To improve the accuracy of the similarity estimate, we can compute multiple MinHash values for each set using different hash functions. The fraction of MinHash values that agree between the two sets is then an unbiased estimate of their Jaccard similarity.

MinHash is an efficient technique for estimating set similarity because it allows us to avoid performing an exhaustive comparison of all the elements in the sets. Instead, we only need to compare the hash values, which are much smaller and easier to compare.
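Here is a minimal pure-Python sketch of this process. The linear hash functions and the small example sets are illustrative assumptions, not a production-quality choice:

import random

def make_hash(a, b, p=2_147_483_647):
    # A simple linear hash: h(x) = (a*x + b) mod p, with p a large prime.
    return lambda x: (a * x + b) % p

# 128 hash functions with randomly chosen parameters (illustrative).
rng = random.Random(42)
hash_funcs = [make_hash(rng.randrange(1, 2_147_483_647), rng.randrange(2_147_483_647))
              for _ in range(128)]

def minhash_signature(s, hash_funcs):
    # One minimum per hash function: the MinHash signature of the set.
    return [min(h(x) for x in s) for h in hash_funcs]

def estimate_similarity(sig_a, sig_b):
    # The fraction of matching positions approximates the Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

sig_a = minhash_signature({1, 2, 3, 4}, hash_funcs)
sig_b = minhash_signature({2, 3, 4, 5}, hash_funcs)
print(estimate_similarity(sig_a, sig_b))  # should be close to the true Jaccard, 3/5

Note that the signatures have a fixed length (here 128), no matter how large the input sets are, which is what makes the comparison cheap.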

Why use MinHash?

There are several reasons why MinHash is a useful technique:

  1. It allows us to quickly estimate the similarity between two sets without exhaustively comparing all the elements. This is especially helpful for large sets where comparing every element directly would be infeasible.
  2. It is an efficient technique that can handle large sets of high-dimensional data.
  3. It is a robust technique that is relatively insensitive to the specific choice of the hash function. As long as the hash function spreads elements uniformly across its range (so that it approximates a random permutation of the elements), the MinHash estimate of set similarity will be accurate.
  4. It can be used in various applications, including plagiarism detection, document classification, and collaborative filtering.
  5. It has been well studied and has a solid theoretical foundation, which makes it a reliable technique for estimating set similarity.

Overall, MinHash is a practical and widely used technique for estimating the similarity between data sets.

MinHash vs SimHash

MinHash and SimHash are both techniques for estimating the similarity between data sets. They both work by representing a set as a hash value and then comparing the hash values to calculate their similarity. However, there are some critical differences between the two techniques:

  1. MinHash takes the minimum hash value of all the elements in the set (one minimum per hash function), while SimHash combines the bitwise hashes of all the elements, weighted by importance, into a single fingerprint whose bits are the signs of the weighted sums.
  2. MinHash is typically used to estimate the Jaccard similarity between two sets, while SimHash is generally used to detect duplicate or near-duplicate documents.
  3. MinHash compares sets through signatures of per-hash minimums, while SimHash reduces each document to one fixed-length fingerprint, so duplicate lookups become fast Hamming-distance comparisons.
  4. MinHash degrades gracefully as sets diverge, while SimHash is deliberately sensitive to small changes, which is what makes it effective for near-duplicate detection.

Overall, MinHash and SimHash are both valuable techniques for different purposes. MinHash is particularly useful for estimating the similarity between large sets, while SimHash is more suitable for detecting near-duplicate documents.
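For contrast, here is a minimal, illustrative SimHash sketch. The 64-bit fingerprint size, the use of hashlib.md5, and the unit weight per token are all assumptions made for the example; real systems typically weight tokens, e.g. by frequency:

import hashlib

def simhash(tokens, bits=64):
    # Sum each token's bit contributions, then keep only the signs.
    v = [0] * bits
    for token in tokens:
        h = int.from_bytes(hashlib.md5(token.encode('utf8')).digest()[:8], 'big')
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1  # unit weight per token (an assumption)
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming_distance(x, y):
    # Near-duplicate documents yield fingerprints with a small Hamming distance.
    return bin(x ^ y).count('1')

print(hamming_distance(simhash('the quick brown fox'.split()),
                       simhash('the quick brown dog'.split())))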

MinHash example

Here is an example of how MinHash can be used to estimate the similarity between two sets:

Suppose we have two sets of integers:

Set A: {1, 3, 5, 7, 9}

Set B: {2, 3, 5, 8, 9}

To compute the hash values for these sets, we first need to define a hash function that maps the integers to a large range of hash values. For simplicity, let’s say that our hash function maps each integer to itself (i.e., the hash value of an integer is just the integer itself).

To compute the hash value for Set A, we take the minimum hash value of all the elements in the set. In this case, the minimum hash value is 1, so the hash value for Set A is 1.

To compute the hash value for Set B, we take the minimum hash value of all the elements in the set. In this case, the minimum hash value is 2, so the hash value for Set B is 2.

To estimate the similarity between the two sets, we compare their hash values. In this case, the hash values are different (1 for Set A and 2 for Set B), so this single hash function suggests the sets are not similar. A single MinHash value is a very coarse signal, which is why several hash functions are used in practice.

Suppose we wanted to improve the accuracy of the similarity estimate. In that case, we could compute multiple hash values for each set using different hash functions and then take the fraction of hash values that agree as the overall similarity estimate.
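For reference, the quantity this multi-hash estimate converges to is the true Jaccard similarity of the two sets. A quick check in Python:

set_a = {1, 3, 5, 7, 9}
set_b = {2, 3, 5, 8, 9}

# |A ∩ B| / |A ∪ B| = 3/7 ≈ 0.43: the value a MinHash estimate approaches
# as more hash functions are used.
print(len(set_a & set_b) / len(set_a | set_b))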

What algorithms can be used for MinHashing?

MinHash estimates the similarity between two sets of data by representing each set as a hash value and comparing the values. Several different algorithms can be used to compute the MinHash values for a set:

  1. Hashing: The most common approach is to use a hash function to map the elements of the set to a large range of hash values. The hash function should spread elements uniformly across that range, but it doesn’t need to be a perfect hash function (i.e., it is allowed to have collisions).
  2. Shingling: Another approach is to break the elements of the set into overlapping subsequences (called “shingles”) and then hash the shingles (see the short sketch after this list). Because shingles preserve local ordering, they capture similarity between texts that share phrases rather than just individual words.
  3. Locality-sensitive hashing (LSH): LSH is a family of techniques for hashing data points so that similar points are likely to land in the same bucket. It is often combined with MinHash signatures to find candidate pairs of similar sets efficiently.
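As mentioned in point 2, here is a minimal character-shingling sketch; the shingle size k=3 is an illustrative choice:

def shingles(text, k=3):
    # Break a string into the set of its overlapping character k-grams.
    return {text[i:i + k] for i in range(len(text) - k + 1)}

print(shingles('minhash'))  # the 5 shingles 'min', 'inh', 'nha', 'has', 'ash' (set order may vary)

The resulting shingle sets can then be fed to MinHash in place of the raw elements.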

Overall, the choice of algorithm will depend on the application’s specific requirements and the data’s characteristics. Different algorithms are better or worse at capturing the kind of similarity that matters for the data, so it’s vital to pick the right one for the job.

Use cases in NLP

MinHash is a technique for estimating the similarity between two sets of data. It can be used in many different applications in the field of natural language processing (NLP), including:

  1. Plagiarism detection: Compare documents to find passages copied from other sources.
  2. Document classification: Classify documents into different categories based on their content, for example to flag spam emails or to group news articles by topic.
  3. Information retrieval: Help search engines work better by ranking similar documents higher in the search results.
  4. Finding similar documents across languages: Match related documents in different languages, for example as a signal when building machine translation corpora.
  5. Text summarization: By assessing the similarity between sentences and choosing the most representative ones, we can determine which sentences are the most important in a document.

MinHash is a valuable technique for many NLP applications that need to estimate how similar two sets of data, like documents, are to each other.

MinHash in Python

Here is an example of how MinHash can be implemented in Python:

from datasketch import MinHash

# Define the sets
set_a = {1, 3, 5, 7, 9}
set_b = {2, 3, 5, 8, 9}

# Create MinHash objects for the sets
mh_a = MinHash()
mh_b = MinHash()

# Add the elements of the sets to the MinHash objects
for item in set_a:
    mh_a.update(str(item).encode('utf8'))  # update() expects bytes, so stringify first

for item in set_b:
    mh_b.update(str(item).encode('utf8'))

# Print the MinHash values
print(mh_a.digest())
print(mh_b.digest())

# Estimate the similarity between the sets
similarity = mh_a.jaccard(mh_b)
print(similarity)

This code creates two sets of integers (set_a and set_b) and then computes the MinHash values for the sets using the MinHash class from the datasketch library. Because update() expects bytes, each integer is converted to a string and encoded first. The MinHash values are then printed, and the similarity between the sets is estimated using the jaccard method.

This code gives you the hash values for the sets and an estimate of their similarity. The estimate is a number between 0 and 1, where 1 means the signatures suggest the sets are identical.
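The accuracy of this estimate depends on the number of hash permutations used. datasketch’s MinHash defaults to 128 permutations; increasing num_perm lowers the variance of the estimate at the cost of speed and memory (both objects must be created with the same value):

from datasketch import MinHash

# More permutations -> a lower-variance Jaccard estimate, at a higher cost.
mh_a = MinHash(num_perm=256)
mh_b = MinHash(num_perm=256)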

Clustering with MinHash

MinHash can be used to cluster data points based on their similarity. The basic idea is to compute the hash values for each data point and then use these values to group similar points into the same cluster.

Here is a high-level outline of how MinHash clustering could work:

  1. Compute the hash values for each data point using a suitable hash function.
  2. Using the hash values, build a similarity matrix whose rows and columns are the data points and whose entries are the estimated pairwise similarities.
  3. Use a clustering algorithm, like k-means or hierarchical clustering, to group the data points into clusters based on their similarities.
  4. Output the clusters and the data points in each cluster.

MinHash clustering can efficiently and effectively group data points based on their similarity, particularly for large datasets where comparing all the points directly is infeasible. We can quickly estimate how similar any two points are and then use this information to group the points.
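Here is a minimal sketch of this workflow using datasketch’s MinHashLSH, which replaces the explicit similarity matrix of steps 2 and 3 with a locality-sensitive index. The toy documents, the word-level tokens, and the 0.5 threshold are illustrative assumptions:

from datasketch import MinHash, MinHashLSH

docs = {
    'doc1': 'minhash estimates the similarity between sets',
    'doc2': 'minhash estimates similarity between large sets',
    'doc3': 'simhash detects near duplicate documents',
}

# Step 1: compute a MinHash for each document from its words.
minhashes = {}
for name, text in docs.items():
    m = MinHash(num_perm=128)
    for word in text.split():
        m.update(word.encode('utf8'))
    minhashes[name] = m

# Steps 2-3: index the signatures so that similar documents land together.
lsh = MinHashLSH(threshold=0.5, num_perm=128)
for name, m in minhashes.items():
    lsh.insert(name, m)

# Query each document for its approximate cluster.
for name, m in minhashes.items():
    print(name, '->', lsh.query(m))  # doc1 and doc2 should usually group together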

Conclusion

MinHash is a technique for estimating the similarity between two sets of data. It works by representing a set as a hash value and then comparing the hash values to estimate their similarity. MinHash is an efficient and robust technique that is well-suited to large and high-dimensional data sets.

It has been used in many applications, including plagiarism detection, document classification, collaborative filtering, and text summarization. One key advantage is that it is relatively insensitive to the specific choice of hash function, making it a reliable technique for estimating set similarity. Overall, MinHash is a valuable and widely used technique for estimating the similarity between data sets.

About the Author

Neri Van Otten

Neri Van Otten is the founder of Spot Intelligence, a machine learning engineer with over 12 years of experience specialising in Natural Language Processing (NLP) and deep learning innovation. Dedicated to making your projects succeed.

