Top 5 Easy Ways To Implement NER In Python With SpaCy, BERT, NLTK & Flair

by Neri Van Otten | Dec 6, 2022 | Machine Learning, Natural Language Processing

What is Named entity recognition (NER)?

Named entity recognition (NER) is a part of natural language processing (NLP) that involves finding and classifying named entities in text. Named entities are words or phrases that refer to specific real-world objects, such as people, organisations, locations, etc.

For example, in the sentence “Barack Obama was the 44th president of the United States,” the named entities would be “Barack Obama” and “United States.” Named entity recognition can help to extract meaningful information from the text and organise it in a structured format for further analysis.

“Barack Obama was the 44th president of the United States” contains two entities that can be extracted with NER: a person and a location.

Top 8 entity types most commonly extracted by NER

Several different types of named entities can be identified through named entity recognition. These include:

1. People

People are named entities that refer to specific people, such as “Barack Obama” or “Neri Van Otten.”

2. Locations

These entities refer to specific locations, such as “The United States” or “Paris, France.”

3. Organisations

Organisational entities refer to specific organisations, such as “Google” or “United Nations.”

4. Events

Event-named entities refer to specific events, such as “World War II” or “Olympic Games.”

5. Products

Products are named entities that refer to specific products, such as “the iPhone” or “Ford Mustang.”

6. Artefacts

These refer to objects such as the “Eiffel Tower” or the “Mona Lisa.”

7. Dates

Dates are entities that can sometimes be easily recognised, such as “02/04/2022” or “5th January 2022”, but they can also be textual expressions like “New Year” or “Easter”.

8. Monetary Values

These entities refer to monetary values in pounds, euros or any other currency. Examples would be “£2.10” or “€5.50”.

Extracting monetary values is useful when automating the parsing of financial reports.

These are the most common entities, and many libraries have implementations to detect them. Other types of named entities can also be identified, depending on the specific application. For example, named entity recognition can be used to identify medical conditions in clinical text or financial entities in financial documents. Training your own NER model is often the only option if you need to extract specific entities for a specific use case.

When implementing your own NER, knowing the different approaches you can take is useful. The different approaches are discussed in the next section.

What are the different approaches to NER?

There are several approaches to NER, including rule-based systems, dictionary-based systems, and supervised, unsupervised or neural-network-based machine learning approaches. Each of these systems has its advantages and disadvantages. We discuss the various methods below.

1. Rule-based NER

Rule-based NER uses handcrafted rules to identify entities in natural-language text. Utilising predefined tags like “organisation,” “product name” and “date”, these rules can be used to categorise and label content found in documents, articles and websites. You could, for instance, establish a rule that whenever “Apple” or “Windows” appears, the NER labels it as a “technology corporation” or an “operating system”, respectively. This has the benefit of being easy to implement: all you have to do is create some rules for the entities you are interested in and then apply them. However, this method’s obvious drawback is that it ignores context, which means that even the fruit “apple” would be tagged as a “technology corporation”.
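
To make the idea concrete, here is a minimal sketch of a rule-based tagger using plain regular expressions; the patterns and labels are invented purely for illustration and, as noted above, the rules have no notion of context.

import re

# Hand-written rules: each pattern maps to a fixed entity label (illustrative only)
rules = {
    r"\bApple\b|\bMicrosoft\b": "TECHNOLOGY_CORPORATION",
    r"\bWindows\b|\bmacOS\b": "OPERATING_SYSTEM",
}

text = "Apple released a new version of macOS, while Microsoft updated Windows."

# Apply every rule to the text and print each match with its label
for pattern, label in rules.items():
    for match in re.finditer(pattern, text):
        print(match.group(), "->", label)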

2. Dictionary-based NER

Dictionary-based methods match words and phrases in the text against entries in dictionaries, ontologies and vocabularies (sometimes called gazetteers). The approach applies to both small datasets and large corpora of texts that fall under a specific entity class (such as persons or locations). Dictionary-based methods can also be used to implement synonym substitution, which involves treating the document’s words as interchangeable within their respective categories based on a database of accepted terms.
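
As a bare-bones sketch of the idea, the snippet below scans a tiny hand-made gazetteer against a sentence; the entries are invented for illustration, and a real system would also handle overlaps, casing and longest matches.

# A tiny gazetteer mapping known phrases to entity classes (illustrative entries only)
gazetteer = {
    "Barack Obama": "PERSON",
    "United States": "LOCATION",
    "United Nations": "ORGANISATION",
}

text = "Barack Obama addressed the United Nations in the United States."

# Look each dictionary entry up in the text (overlap and boundary handling omitted for brevity)
for phrase, label in gazetteer.items():
    if phrase in text:
        print(phrase, "->", label)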

3. A supervised machine learning approach

Supervised approaches train algorithms such as maximum entropy classifiers and conditional random fields (CRFs) on labelled training data. These techniques automatically learn patterns from the labelled examples, which can then be applied to new, unseen text to predict the associated entity labels.
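
If you want to experiment with a CRF yourself, one option is the third-party sklearn-crfsuite package (installed with pip install sklearn-crfsuite); the hand-crafted features and two-sentence training set below are toy assumptions for illustration, not a real corpus.

import sklearn_crfsuite

# Turn each token into a small dictionary of hand-crafted features (toy features only)
def features(sentence, i):
    word = sentence[i]
    return {
        "word.lower": word.lower(),
        "is_title": word.istitle(),
        "is_first": i == 0,
    }

# A tiny labelled dataset using the usual BIO tagging scheme (illustrative only)
sentences = [["Barack", "Obama", "visited", "Paris"],
             ["Angela", "Merkel", "lives", "in", "Berlin"]]
labels = [["B-PER", "I-PER", "O", "B-LOC"],
          ["B-PER", "I-PER", "O", "O", "B-LOC"]]

X_train = [[features(s, i) for i in range(len(s))] for s in sentences]

# Train a conditional random field on the toy data
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, labels)

# Predict labels for a new sentence
test = ["Emmanuel", "Macron", "visited", "Berlin"]
print(crf.predict([[features(test, i) for i in range(len(test))]]))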

4. An unsupervised machine learning approach

Unsupervised approaches group names into distinct classes without labelled data, relying on linguistic characteristics such as capitalisation, part-of-speech tags and domain terminology found in the corpus. They are often used to bootstrap or refine imperfect classifications and can ultimately increase labelling precision.

5. Neural network-based approaches

Neural network-based approaches combine artificial neural networks, language models and embedding techniques to produce an entire NER framework. Three options to think about are:

  • BiLSTM (Bidirectional Long Short-Term Memory)
  • ELMo (Embeddings from Language Models)
  • BERT (Bidirectional Encoder Representations from Transformers)

This general strategy is effective in many sequence labelling tasks, including POS tagging, and typically improves NER accuracy.

These architectures underpin deep learning models that improve predictive power while minimising manual intervention.

What are the different Python tools that implement NER?

1. SpaCy NER

To use the named entity recognition (NER) functionality in the spaCy library, you must have spaCy installed on your machine. You can install spaCy using pip, the Python package manager, with the following command:

pip install spacy
python -m spacy download en_core_web_sm

Once spaCy and the small English model (en_core_web_sm) are installed, you can load the model and use it to identify named entities in text. Here is an example of how to do this:

import spacy

# Load the small English language model
nlp = spacy.load("en_core_web_sm")

# Define a sample text
text = "Barack Obama was the 44th president of the United States."

# Parse the text with the loaded pipeline (which includes the NER component)
doc = nlp(text)

# Iterate over the entities in the document and print their text and labels
for ent in doc.ents:
    print(ent.text, ent.label_)

In this example, the NER model will identify the named entities “Barack Obama” and “United States” in the sample text and print them with their labels (“PERSON” and “GPE”, respectively). You can then use the identified entities and their labels for further analysis or processing.
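
As a quick follow-up, spaCy also ships with the displacy visualiser, which can highlight the entities found in the doc from the example above (handy when working in a Jupyter notebook):

from spacy import displacy

# Render the entities found in the doc from the example above
# (use displacy.serve(doc, style="ent") when running outside a notebook)
displacy.render(doc, style="ent")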

2. BERT NER

To use BERT for named entity recognition (NER), you need a BERT model that has been fine-tuned for token classification; the base BERT model on its own does not predict entity labels. A straightforward way to do this in Python is with the Hugging Face Transformers library, which provides pre-trained BERT-based NER models and supports both PyTorch and TensorFlow. You can install it with pip, the Python package manager, along with a backend such as PyTorch:

pip install transformers torch

Once Transformers is installed, you can perform NER on text. Here is an example using a publicly available BERT model fine-tuned on an English NER dataset (the dslim/bert-base-NER checkpoint from the Hugging Face Hub):

from transformers import pipeline

# Load a BERT model fine-tuned for NER (the model is downloaded on first use)
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

# Define a sample text
text = "Barack Obama was the 44th president of the United States."

# Use the BERT model to predict named entities
ner_predictions = ner(text)

# Iterate over the predicted named entities and print their text spans and labels
for ner_prediction in ner_predictions:
    print(ner_prediction["word"], ner_prediction["entity_group"])

In this example, the BERT model will identify the named entities “Barack Obama” and “United States” in the sample text and print them with their labels (“PER” and “LOC”, respectively). You can then use the identified entities and their labels for further analysis or processing.

Note that this example uses a pre-trained model from the Hugging Face Hub, but you can also use a custom BERT model that you have fine-tuned yourself.
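
If you have fine-tuned your own token-classification model with the Transformers library, you can plug it into the same pipeline; the checkpoint path below is a hypothetical placeholder.

from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Load a locally fine-tuned checkpoint (hypothetical path, shown for illustration)
model = AutoModelForTokenClassification.from_pretrained("path/to/my-ner-model")
tokenizer = AutoTokenizer.from_pretrained("path/to/my-ner-model")

# Build an NER pipeline around the custom model
custom_ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(custom_ner("Barack Obama was the 44th president of the United States."))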

3. NLTK NER

To use the named entity recognition (NER) functionality in the NLTK library, you will need to have NLTK installed on your machine. You can install NLTK using pip, the Python package manager, with the following command:

pip install nltk 

Once NLTK is installed, you can use the nltk.ne_chunk() function to identify named entities in text. This function uses NLTK’s built-in named entity chunker, a maximum entropy classifier trained on the ACE corpus, and it needs a few NLTK data packages to be downloaded first. Here is an example of how to use the nltk.ne_chunk() function to identify named entities in text:

import nltk

# Download the required NLTK data packages (only needed once)
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
nltk.download("maxent_ne_chunker")
nltk.download("words")

# Define a sample text
text = "Barack Obama was the 44th president of the United States."

# Tokenise the text
tokens = nltk.word_tokenize(text)

# Part-of-speech tag the tokens
tagged = nltk.pos_tag(tokens)

# Use the NLTK named entity chunker to identify named entities
named_entities = nltk.ne_chunk(tagged)

# Iterate over the named entities and print their labels
for entity in named_entities:
    if hasattr(entity, "label"):
        print(entity.label())

In this example, the NLTK NER classifier will identify the named entities “Barack Obama” and “United States” in the sample text and will print their labels (i.e., “PERSON” and “GPE”, respectively). You can then use the identified entities and their labels for further analysis or processing.

Note that the nltk.ne_chunk() function requires the input text to be tokenised and part-of-speech tagged first, both of which are shown in the example above.
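
Because nltk.ne_chunk() returns a tree, you will often want the entity text as well as its label; the short snippet below (reusing named_entities from the example above) joins the leaves of each named-entity subtree to recover the original phrase.

# Recover both the entity text and its label from the chunk tree
for entity in named_entities:
    if hasattr(entity, "label"):
        entity_text = " ".join(token for token, pos in entity.leaves())
        print(entity_text, "->", entity.label())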

4. Flair NER

To use the named entity recognition (NER) functionality of Flair, you will need to have the Flair library installed on your machine. You can install Flair using pip, the Python package manager, with the following command:

pip install flair 

Once Flair is installed, you can perform NER on text. Here is an example of how to do this:

from flair.data import Sentence
from flair.models import SequenceTagger

# Load a pre-trained Flair NER model
tagger = SequenceTagger.load("ner-fast")

# Define a sample text and wrap it in a Flair Sentence object
sentence = Sentence("Barack Obama was the 44th president of the United States.")

# Use the Flair NER model to predict named entities (the sentence is annotated in place)
tagger.predict(sentence)

# Iterate over the predicted named entities and print them with their labels
for entity in sentence.get_spans("ner"):
    print(entity)

 

In this example, the Flair NER model will identify the named entities “Barack Obama” and “United States” in the sample text and print them with their labels (“PER” and “LOC”, respectively). You can then use the identified entities and their labels for further analysis or processing.

Note that this example uses a pre-trained Flair NER model, but you can also use a custom Flair model that you have trained yourself.

Key Takeaways

  • NER can help extract important entities from text.
  • The most common entities extracted are people, organisations, locations, events, products, artefacts, dates and monetary values. If you need other entities extracted, you will probably need to train your own model.
  • There are many different ways of implementing named entity recognition. The simplest is a rule-based system. Slightly more complicated is the dictionary approach, and the more complicated systems use machine learning. Supervised and unsupervised learning can both be used to do entity extraction.
  • The most complicated NER techniques are based on neural networks. Common options are BiLSTM, ELMo and BERT architectures.
  • Python has several really good NER implementations to choose from. SpaCy, NLTK, BERT (via Hugging Face Transformers) and Flair all have solid implementations you can use out of the box or train your own model with.

At Spot Intelligence, we use the out-of-the-box approach and build and train our own for specific use cases. What NER implementations do you use and why? Let us know in the comments.

About the Author

Neri Van Otten

Neri Van Otten is the founder of Spot Intelligence, a machine learning engineer with over 12 years of experience specialising in Natural Language Processing (NLP) and deep learning innovation. Dedicated to making your projects succeed.
