Top 3 Easy Ways To Remove Stop Word In Python With SpaCy, NLTK & Gensim

by | Dec 10, 2022 | Natural Language Processing

What is stop word removal?

Stop words are commonly used words that have very little meaning, such as “a,” “an,” “the,” or “in.” Stopwords are typically excluded from natural language processing (NLP) and information retrieval applications because they do not contribute much to the meaning or context of the text.

Stop word removal filters these common, low-information words out of text before analysis.

In many NLP and information retrieval applications, words are filtered out of the text data before further processing is performed. This can reduce the dimensionality of the data and make the algorithms more efficient and effective. For example, removing stopwords from a document can help a text classification algorithm focus on the most important and relevant words and assign the document to the correct category or label.

There are many lists available, and the specific list to be used will depend on the language and domain of the text data. Some common stopwords in English, for example, include:

  • articles (a, an, the)
  • conjunctions (and, but, or)
  • prepositions (in, on, at)
  • pronouns (he, she, it, they)
  • auxiliary verbs (is, are, was, were)
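As a quick illustration, a few words from each of these categories can be combined into a small hand-rolled stopword set (the words below are just a sample, not a complete list) and used to filter a sentence:

```python
# A small illustrative stopword set built from the categories above
# (articles, conjunctions, prepositions, pronouns, auxiliary verbs).
stop_words = {
    "a", "an", "the",           # articles
    "and", "but", "or",         # conjunctions
    "in", "on", "at",           # prepositions
    "he", "she", "it", "they",  # pronouns
    "is", "are", "was", "were", # auxiliary verbs
}

sentence = "she was at the station and it is raining"

# Keep only the words that are not in the stopword set
filtered = [w for w in sentence.split() if w not in stop_words]
print(filtered)
# Output: ['station', 'raining']
```

Only the two content-bearing words survive, which is exactly the effect stopword removal is after.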

Keep in mind that these words are not always meaningless or irrelevant. In some cases, including or excluding stopwords can change the meaning or context of the text and may impact the performance of NLP and information retrieval algorithms. Therefore, it is essential to consider carefully which stopwords to remove and how removal fits your application.

Advantages and disadvantages of removing stop words

Advantages

There are both advantages and disadvantages to removing stopwords from text data. Some of the benefits of stopword removal include the following:

  • Reducing the text data size can make it more manageable and faster to process.
  • Improving the performance of natural language processing algorithms by reducing the number of irrelevant words that the algorithm needs to process.
  • Improving the interpretability of the results by removing words that do not carry much meaning.

Disadvantages

However, there are also some disadvantages to stopword removal, including:

  • The possibility of losing important information by removing words that may be significant in a specific context.
  • The subjectivity of choosing which words to include in the stopword list can affect the results of any downstream tasks.
  • The need to maintain and update the stopword list as the language and domain evolve.
  • Relevant stop word lists can be hard to find for some languages, so the approach may not scale as more languages need to be processed.

Overall, whether or not to remove stopwords depends on the specific task and the desired outcome. In some cases, stopword removal can be beneficial, but in other cases, it may be better to keep the stopwords in the text data.

Remove stop words with Python

1. NLTK stop words

To remove stopwords with Python, you can use a pre-built list of stopwords from a library such as NLTK, or create your own list of stopwords.

Here is an example of how to remove stopwords using NLTK:

import nltk 
from nltk.corpus import stopwords 

nltk.download('stopwords') 

# Create a set of stop words 
stop_words = set(stopwords.words('english')) 

# Define a function to remove stop words from a sentence 
def remove_stop_words(sentence): 
  # Split the sentence into individual words 
  words = sentence.split() 
  
  # Use a list comprehension to remove stop words (lowercasing each word for 
  # the lookup, since the NLTK stopword list is all lowercase) 
  filtered_words = [word for word in words if word.lower() not in stop_words] 
  
  # Join the filtered words back into a sentence 
  return ' '.join(filtered_words)

In this example, the NLTK library is imported, and the stopwords.words function is used to create a set of stop words in English. Then, a function called remove_stop_words is defined, which takes a sentence as input and splits it into individual words. A list comprehension is used to remove any words that are in the stopword set, and the filtered words are joined back into a sentence and returned.

To use this function, you can simply call it on a sentence, and it will return the sentence with the stopwords removed. For example:

sentence = "This is an example sentence with stopwords." 

filtered_sentence = remove_stop_words(sentence) 
print(filtered_sentence) 
# Output: "example sentence stopwords."

In this case, the stopwords “this”, “is”, “an”, and “with” would be removed from the input sentence.
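One caveat worth knowing: a plain `split()` keeps punctuation attached to words, so a token like “stopwords.” (with the full stop) would never match a stopword entry, and neither would a capitalised word against a lowercase list. Here is a minimal sketch of normalising tokens before the lookup; it uses a small assumed stopword set rather than the full NLTK list so the snippet is self-contained:

```python
import string

# Small assumed stopword set for illustration; in practice you would use
# the full NLTK list from stopwords.words('english').
stop_words = {"this", "is", "an", "with", "the"}

def remove_stop_words(sentence):
    filtered = []
    for word in sentence.split():
        # Lowercase and strip surrounding punctuation before the lookup,
        # but append the original word so the output text is unchanged
        normalised = word.lower().strip(string.punctuation)
        if normalised not in stop_words:
            filtered.append(word)
    return ' '.join(filtered)

print(remove_stop_words("This is an example sentence with stopwords."))
# Output: "example sentence stopwords."
```

The original casing and punctuation are preserved in the output; only the membership test is normalised.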

2. SpaCy stop words

To remove stopwords with spaCy, you can use the spacy.lang.en.stop_words.STOP_WORDS attribute to get a set of stopwords in English, and then use the token.is_stop attribute to check if a token is a stop word. Here is an example of how to remove stopwords using spaCy:

import spacy 

nlp = spacy.load('en_core_web_sm') 

# Create a set of stop words (token.is_stop below checks against this same list) 
stop_words = spacy.lang.en.stop_words.STOP_WORDS 

# Define a function to remove stop words from a sentence 
def remove_stop_words(sentence): 
  # Parse the sentence using spaCy 
  doc = nlp(sentence) 
  
  # Use a list comprehension to remove stop words 
  filtered_tokens = [token for token in doc if not token.is_stop] 
  
  # Join the filtered tokens back into a sentence 
  return ' '.join([token.text for token in filtered_tokens])

In this example, SpaCy is used to parse a sentence and identify the individual tokens. A list comprehension is used to remove any tokens that are stopwords, and the filtered tokens are joined back into a sentence and returned.

To use this function, you can simply call it on a sentence, and it will return the sentence with the stopwords removed. For example:

sentence = "This is an example sentence with stop words." 

filtered_sentence = remove_stop_words(sentence) 
print(filtered_sentence) 
# Output: "example sentence stop words ."
# (note that spaCy tokenises the full stop as a separate token)

In this case, the words “this”, “is”, “an”, and “with” would be removed from the input sentence.

3. Gensim stop words

To remove stop words using the Gensim library, you can use the gensim.parsing.preprocessing.remove_stopwords function. This function takes a string as input and returns the string with Gensim’s default stopwords removed.

Here is an example of how to use this function to remove stopwords from a sentence:

from gensim.parsing.preprocessing import remove_stopwords 

# Define a sentence 
sentence = "the quick brown fox jumps over the lazy dog" 

# Remove the stop words 
filtered_sentence = remove_stopwords(sentence) 

# Print the filtered sentence 
print(filtered_sentence)
# Output: "quick brown fox jumps lazy dog"

As you can see, the words “the” and “over” have been removed from the sentence. Note that the Gensim library uses its own default list of stopwords, but you can also specify a custom list if needed.

Create a domain-specific stop words list

It can be incredibly useful to create your own domain-specific list of stop words. For example, when analysing social media content, you will come across many irrelevant character sequences that you may not wish to analyse. Think of “RT”, marking a re-tweet on Twitter. You could add “RT” to a domain-specific stop word list and remove these tokens automatically.

To make a domain-specific list, you need to figure out the most common words in your domain or subject area that don’t have much to do with your task. This usually means looking at a lot of text data from your domain and finding the words used most often. Once you have determined these words, you can add them to your stopword list.
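One simple way to find those frequent words is to count token frequencies across your corpus with `collections.Counter`; the documents below are made up purely for illustration:

```python
from collections import Counter

# Toy domain corpus (invented tweets) for illustration
corpus = [
    "RT great match today RT",
    "RT the team played well",
    "what a match RT amazing goal",
]

# Count word frequencies across the whole corpus
counts = Counter(word for doc in corpus for word in doc.split())

# The most common words are candidates for a domain-specific stop list
print(counts.most_common(3))
```

Here “RT” tops the counts, flagging it as a candidate for the domain-specific list; in practice you would review the top of the ranking by hand, since frequent words are not automatically irrelevant.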

To create your own stopword list in Python, you can simply define a list of strings containing the stopwords that you want to use. For example:

stop_words = ["a", "an", "the", "and", "but", "or", "because", "as", "until", "while", "of", "at", "by", "for", "with", "about", "against", "between", "into", "through", "during", "before", "after", "above", "below", "to", "from", "up", "down", "in", "out", "on", "off", "over", "under", "again", "further", "then", "once", "here", "there", "when", "where", "why", "how", "all", "any", "both", "each", "few", "more", "most", "other", "some", "such", "no", "nor", "not", "only", "own", "same", "so", "than", "too", "very", "can", "will", "just"]

You can then use this list in your code to remove them from text data. For example, if you have a string containing some text, you can use the .split() method to split the string into a list of words, and then use a for loop to iterate over the list of words and remove any words that are in the stop word list:

# Define a string containing some text 
text = "The quick brown fox jumps over the lazy dog." 

# Split the string into a list of words 
words = text.split() 

# Create a new list to hold the filtered words 
filtered_words = [] 

# Iterate over the list of words 
for word in words: 
  # If the word (lowercased) is not in the stop word list, add it to the filtered list 
  if word.lower() not in stop_words: 
    filtered_words.append(word) 
    
# Print the filtered list of words 
print(filtered_words)
# Output: ['quick', 'brown', 'fox', 'jumps', 'lazy', 'dog.']

As you can see, the words from the list have been removed from the text. Note that you can use this technique with any list of stopwords, whether it is the default list in one of the libraries or your own.
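One practical refinement: membership tests against a Python list are O(n), so for large texts it is worth converting the stopword list to a set, which gives constant-time lookups. The loop above also collapses naturally into a list comprehension (the abbreviated stopword set below is just for the example):

```python
# Convert the stopword list to a set for O(1) membership tests
stop_words = {"a", "an", "the", "over", "in", "on"}  # abbreviated for the example

text = "The quick brown fox jumps over the lazy dog."

# Lowercase each word for the lookup so "The" matches "the"
filtered_words = [w for w in text.split() if w.lower() not in stop_words]
print(filtered_words)
# Output: ['quick', 'brown', 'fox', 'jumps', 'lazy', 'dog.']
```

With the full list from the previous example, simply wrap it in `set(...)`; the behaviour is identical, only the lookup cost changes.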

Key Takeaways

  • Stop word removal is one of the most helpful NLP pre-processing techniques. It removes insignificant words from the data set you work with, can improve the performance of your machine learning model, and improves the interpretability of your results.
  • The disadvantages are that you could lose vital information, it’s not straightforward to decide which words to remove, and you need to maintain the list over time.
  • It could be hard to scale this technique when working with multiple languages, as you must maintain a separate list per language used.

At Spot Intelligence, this is also one of our favourite techniques as it allows us to analyse a new data set quickly and run it through machine learning models without losing much of the interpretability you would get with other more advanced techniques like word embedding and sentence embedding.

What are your favourite pre-processing techniques to use in NLP? Let us know in the comments.

About the Author

Neri Van Otten


Neri Van Otten is the founder of Spot Intelligence, a machine learning engineer with over 12 years of experience specialising in Natural Language Processing (NLP) and deep learning innovation. Dedicated to making your projects succeed.

