How To Implement Abstractive Text Summarization In Python [2 Ways]

by Neri Van Otten | Dec 29, 2022 | Data Science, Machine Learning, Natural Language Processing

Abstractive text summarization is a valuable tool in Python when working with large documents or when you want to summarize data quickly. In this article, we discuss the applications of abstractive text summarization, the advantages and disadvantages of the technique, and code examples of two implementations in Python. We also weigh up the pros and cons of each approach so that you can choose the method that best suits your use case.

What is abstractive text summarization?

Abstractive text summarization is a natural language processing (NLP) technique that generates a concise summary of a document or text. The summary represents the main points of the original text. In contrast to extractive summarization, which selects and condenses essential parts of the original text, abstractive summarization generates new words and sentences that capture the original text’s meaning.


Abstractive text summarization uses new words to create a summary.

Abstractive summarization is typically performed with machine learning models. Popular approaches, such as sequence-to-sequence neural networks, are trained to generate coherent and meaningful text. These models usually analyse the original text’s structure and content to summarize its main points and ideas.

Example of abstractive text summarization

Original text

“Apple has announced that it will be releasing a new iPhone in the coming months. The new phone, called the iPhone 12, will feature a completely redesigned exterior and a host of new features, including a more powerful processor and improved camera. The iPhone 12 will be available in a variety of colors and storage capacities, and will be available for pre-order later this month.”

Abstractive summary

“Apple is releasing a new iPhone called the iPhone 12 with a redesigned exterior and improved features like a more powerful processor and better camera.”

Applications of abstractive summarization

Some applications of abstractive text summarization include automated news summarization, the generation of summaries for long documents, and the generation of summaries for social media posts or other online content. It can be a helpful tool for quickly extracting information from a large volume of text and generating human-like summaries that are more readable and understandable than purely extractive summaries.

  1. News summarization: Abstractive text summarization can be used to generate short summaries of news articles so that users can quickly understand the story’s main points without reading the whole article.
  2. Meeting summaries: Abstractive text summarization can be used to generate summaries of meetings or conference calls, allowing attendees to review the critical points discussed quickly.
  3. Customer feedback summarization: Abstractive text summarization can be used to generate summaries of customer feedback or reviews, allowing businesses to understand the main issues and concerns that customers have quickly.
  4. Legal document summarization: Abstractive text summarization can be used to generate summaries of legal documents, allowing lawyers and legal professionals to understand the main points and arguments of a document quickly.
  5. Research paper summarization: Abstractive text summarization can be used to generate summaries of research papers, allowing researchers to review a paper’s essential findings and contributions quickly.

Advantages and disadvantages of abstractive text summarization

Advantages

  1. Since it doesn’t have to include whole sentences from the original text, it can produce summaries that are shorter and more to the point than extractive summaries.
  2. It can produce summaries that are more coherent and easier to read because it can rephrase and reorganize the information from the original text in a more logical way.
  3. It can generate summaries that include information not explicitly stated in the original text but implied or inferred based on the context.

Disadvantages

  1. It requires more processing power than extractive text summarization because large language models must be trained and fine-tuned.
  2. It is less reliable than extractive text summarization, as it is prone to generating inaccurate or misleading summary text, especially if the language model is poorly trained.
  3. Evaluating the quality of abstractive summaries is more challenging, as there is no single “correct” summary for a given document; a common workaround is shown in the sketch after this list.
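A common, if imperfect, workaround is to score a generated summary against one or more human-written reference summaries with an overlap metric such as ROUGE. Below is a minimal sketch that assumes the third-party rouge-score package is installed; the generated and reference summaries are made up for illustration.

# Install the rouge-score package
!pip install rouge-score

from rouge_score import rouge_scorer

# A generated summary and a human-written reference (illustrative only)
generated = "Apple is releasing the iPhone 12 with a redesigned exterior and a better camera."
reference = "Apple announced the iPhone 12, featuring a new design, faster processor and improved camera."

# Score unigram overlap (ROUGE-1) and longest-common-subsequence overlap (ROUGE-L)
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, generated)

print(scores["rouge1"].fmeasure, scores["rougeL"].fmeasure)

Higher scores mean more overlap with the reference, but ROUGE only measures surface overlap, so it should be read alongside human judgement.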

Abstractive text summarization in Python

Several libraries and frameworks in Python can be used for abstractive text summarization. Here are a few options:

  1. GPT-3: GPT-3 (short for “Generative Pre-trained Transformer 3”) is a state-of-the-art language model developed by OpenAI. It can be used for many natural language processing tasks, including abstractive summarization. To use GPT-3 for abstractive summarization, you must sign up for an API key and use the GPT-3 API through one of the available client libraries, such as the openai library.
  2. Hugging Face’s Transformers: This library, developed by Hugging Face, provides access to a variety of pre-trained transformer models, including sequence-to-sequence models such as T5 and BART. It can be used to perform abstractive summarization out of the box or by fine-tuning a pre-trained model on a dataset of summaries.
  3. TensorFlow’s Text Summarization with Transformer: TensorFlow can be used to build and train a Transformer model that performs abstractive summarization.
  4. PyTextRank: PyTextRank is a Python library that implements the TextRank algorithm for summarization. TextRank is a graph-based ranking model that identifies the most important sentences in a document and builds a summary from them. Strictly speaking, this is extractive rather than abstractive summarization, but it is often useful as a lightweight baseline.

There are many other options for performing abstractive text summarization in Python, and the appropriate choice will depend on your specific needs and requirements.

We chose two of the most popular methods and provide code examples so that you can get started and try out both techniques.

1. Abstractive summarization with Hugging Face Transformers

Here is an example of how you might use the Hugging Face Transformers library in Python to perform abstractive summarization on a piece of text:

# Install the Transformers library
!pip install transformers

# Import necessary modules
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Load the T5 model and tokenizer
model = T5ForConditionalGeneration.from_pretrained('t5-small')
tokenizer = T5Tokenizer.from_pretrained('t5-small')

# Use a GPU if one is available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Define the input text and the maximum summary length (in tokens)
text = "This is a piece of text that you want to summarize."
max_length = 20

# Preprocess the text and encode it as input for the model
input_text = "summarize: " + text
input_ids = tokenizer.encode(input_text, return_tensors='pt').to(device)

# Generate a summary
summary = model.generate(input_ids, max_length=max_length)

# Decode the summary
summary_text = tokenizer.decode(summary[0], skip_special_tokens=True)
print(summary_text)

This code will summarize the input text using the T5 model. The summary will be no longer than max_length tokens (roughly max_length words).

Keep in mind that this is just a simple example. You may need to consider many other factors when using a machine learning model for abstractive summarization, such as fine-tuning the model on a specific dataset or adjusting the model’s hyperparameters.
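As a concrete illustration of that second point, the sketch below reuses the model, tokenizer and input_ids from the example above and passes a few of the standard generate() arguments (beam search, a minimum length, a length penalty and an n-gram repetition block). The parameter values shown are illustrative starting points, not recommendations.

# Generate a summary with explicit generation hyperparameters (values are illustrative)
summary = model.generate(
    input_ids,
    max_length=60,           # upper bound on the summary length, in tokens
    min_length=10,           # force the model to produce at least 10 tokens
    num_beams=4,             # beam search usually gives more fluent summaries than greedy decoding
    length_penalty=2.0,      # values above 1.0 favour longer summaries
    no_repeat_ngram_size=3,  # block repeated 3-grams to reduce repetition
    early_stopping=True      # stop beam search once all beams have finished
)
print(tokenizer.decode(summary[0], skip_special_tokens=True))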

2. Abstractive summarization with OpenAI

The second option is to use an API, rather than a library, to summarize the text for us.

Here is an example of how you might use the GPT-3 API from the openai library in Python to perform abstractive summarization on a piece of text:

# Install the openai library
!pip install openai

# Import necessary modules
import openai

# Set your API key
openai.api_key = "YOUR_API_KEY"

# Define the input text and the summary length
text = "This is a piece of text that you want to summarize."
length = 20

# Use the GPT-3 API to generate a summary
model_engine = "text-davinci-002"
prompt = (f"Summarize the following text in {length} words or fewer: "
         f"{text}")
completions = openai.Completion.create(
    engine=model_engine,
    prompt=prompt,
    max_tokens=length * 2,  # allow roughly 2 tokens per word so the summary is not cut off
    n=1,
    stop=None,
    temperature=0.5
)
summary = completions.choices[0].text

print(summary)

This code will summarize the input text using the GPT-3 API. The prompt asks for a summary of no more than length words, and max_tokens caps the length of the generated output.

As with the previous example, this is just a simple example. You may need to consider many other factors when using the GPT-3 API for abstractive summarization, such as adjusting the API parameters or fine-tuning the model on a specific dataset.
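As a rough sketch of what adjusting those parameters might look like, the hypothetical helper below wraps the same Completion call and exposes the summary length and temperature as arguments. The function name and default values are ours for illustration; they are not part of the openai library.

# Hypothetical helper wrapping the Completion call with adjustable parameters
def summarize(text, model_engine="text-davinci-002", length=20, temperature=0.5):
    prompt = f"Summarize the following text in {length} words or fewer: {text}"
    response = openai.Completion.create(
        engine=model_engine,
        prompt=prompt,
        max_tokens=length * 2,    # rough allowance of ~2 tokens per word
        n=1,
        stop=None,
        temperature=temperature,  # lower values give more focused, deterministic output
    )
    return response.choices[0].text.strip()

# A lower temperature tends to produce a more conservative summary
print(summarize("This is a piece of text that you want to summarize.", temperature=0.2))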

What solution should you choose?

There are several advantages and disadvantages to using a library (e.g. Hugging Face Transformers) versus an API (e.g. openai) implementation for text summarization in Python. We look at the advantages and disadvantages so that you can make an informed decision about which method would more closely meet your needs.

Advantages of using a library for abstractive text summarization in Python

  • Greater control: When you use a library for text summarization, you have more control over the implementation and can customize it to fit your specific needs. For example, you can fine-tune the library on a particular dataset or adjust the model’s hyperparameters to improve performance.
  • No need for an API key: Many text summarization libraries are open-source and do not require an API key. This can be an advantage if you do not want to sign up for an API key or do not have access to one.
  • Offline use: A library can be used offline, which can be helpful if you do not have an internet connection or are concerned about data privacy.

Advantages of using an API for abstractive text summarization in Python

  • Ease of use: APIs are often easier to use than libraries because they provide a simple interface for making requests and receiving responses. This can be especially useful if you are unfamiliar with machine learning or natural language processing and want to avoid dealing with the complexity of implementing a text summarization model yourself.
  • Pre-trained models: Many text summarization APIs provide access to pre-trained models that can be used without additional training. This can be advantageous if you do not have a large enough dataset to fine-tune a model yourself or want to get started quickly.
  • Scalability: APIs can handle a large volume of requests, which can be an advantage if you need to perform text summarization on many documents.

Disadvantages of using a library for abstractive text summarization in Python

  • Requires more setup: Using a library for text summarization requires more setup than using an API. You will need to install the library and possibly download and preprocess data before you can use it.
  • May require more knowledge: Using a library for text summarization may require a deeper understanding of machine learning and natural language processing than using an API. You may need to understand how the library works and how to fine-tune it for your specific needs.

Disadvantages of using an API for abstractive text summarization in Python

  • Requires an API key: Many text summarization APIs require an API key, which can be inconvenient if you want to avoid signing up for one.
  • Limited control: When you use an API for text summarization, you have less control over the implementation and may not be able to customize it as much as you could with a library.
  • Dependent on internet connection: An API requires an internet connection to work, which can be a disadvantage if you do not have a stable connection or need to perform text summarization offline.

Whether you summarize text with a library or an API will depend on your needs and preferences. For example, a library may be a better choice if you have a particular dataset on which you want to fine-tune a model or want more control over the implementation. On the other hand, if you want an easy-to-use solution that requires less setup or knowledge, an API may be a better choice. 

What implementation did you go with? Let us know in the comments.

About the Author

Neri Van Otten

Neri Van Otten is the founder of Spot Intelligence and a machine learning engineer with over 12 years of experience specialising in Natural Language Processing (NLP) and deep learning innovation, dedicated to making your projects succeed.
