How To Use One Hot Encoding In Python With 3 Tutorials

by | Jan 12, 2023 | Data Science, Machine Learning, Natural Language Processing

Categorical variables are variables that can take on one of a limited number of values. They are common in datasets but can’t be fed directly into most machine learning models, which only accept numerical inputs. One-hot encoding and label encoding are two common techniques for converting categorical variables into numerical values that machine learning models can use.

One-hot encoding is used for nominal variables, which do not have a natural ordering, whereas label encoding is used for ordinal variables, which have a natural ordering. Both methods have their own advantages and disadvantages, and the best one to use depends on the specific problem and the dataset.

What is one hot encoding?

One-hot encoding is a method used to represent categorical variables as numerical values that can be input into machine learning models. It creates a binary vector with a length equal to the number of categories, where the index of the “hot” (i.e. 1) value corresponds to the class.

For example, if there are three categories: “red”, “green”, and “blue”, then a value of “green” would be encoded as [0, 1, 0].
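As a minimal sketch in plain Python (the category list and helper name are just for illustration, not from any library):

# A toy one-hot encoder: a 1 at the index of the matching category
categories = ["red", "green", "blue"]

def one_hot(value, categories):
    return [1 if c == value else 0 for c in categories]

print(one_hot("green", categories))  # [0, 1, 0]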

This is useful because many machine learning algorithms cannot handle categorical data directly.

One hot encoding can turn words into 0s and 1s

Why is it called one hot encoding?

It’s called “one-hot” encoding because only one element of the binary vector is “hot” (i.e. set to 1), while all other elements are “cold” (i.e. set to 0).

For example, if there are three categories: “red”, “green”, “blue”, then a value of “green” would be encoded as [0, 1, 0], with the second element being “hot” (i.e. 1) and the first and third elements being “cold” (i.e. 0).

Because only one element is “hot” at a time, it’s called one-hot.

When to use one hot encoding?

One-hot encoding is typically used when the categorical variable is a nominal variable, i.e. a variable that does not have a natural ordering. It represents the categorical variable as a set of binary features that can be fed into machine learning models, which are typically designed to work with numerical data.

Some examples of when one-hot encoding may be used include:

  • When working with text data, such as classifying emails as spam or not spam
  • When encoding the different types of iris plants in a dataset
  • When working with categorical variables that have a large number of levels, such as country or city names

It’s important to note that one-hot encoding can cause the dimensionality of the dataset to increase rapidly, leading to the curse of dimensionality. Therefore, alternative encoding methods, such as binary, ordinal, or count encoding, may be preferred in some cases.

When should one hot encoding be avoided?

One-hot encoding should be avoided or used with caution in certain situations, such as:

  • High cardinality categorical variables: One-hot encoding can lead to a large number of binary features, which can cause the dimensionality of the dataset to increase rapidly. This can lead to the curse of dimensionality, where the model becomes less accurate as the number of features grows. In such cases, alternative encoding methods such as binary, ordinal, or count encoding may be preferred (a hashing sketch follows this list).
  • When working with ordinal variables: Ordinal variables have a natural ordering, and one-hot encoding cannot capture it. In these cases, ordinal encoding may be a better choice.
  • When working with a limited dataset: One-hot encoding increases the sparsity of the dataset, so each binary feature carries little signal when training examples are scarce. This can make the model less robust and less accurate.
  • When working with tree-based models: Some tree-based implementations, such as LightGBM and CatBoost, can handle categorical variables natively, so one-hot encoding may be unnecessary.
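For the high-cardinality case, one option is the hashing trick, which caps the number of features regardless of how many categories exist. A minimal sketch using scikit-learn’s FeatureHasher (the choice of 8 features is arbitrary for this example):

from sklearn.feature_extraction import FeatureHasher

# Each sample is a list of string feature values
cities = [["London"], ["Paris"], ["Tokyo"], ["Paris"]]

# Hash each city name into one of 8 columns (8 is an arbitrary choice here)
hasher = FeatureHasher(n_features=8, input_type="string")
hashed = hasher.transform(cities)

print(hashed.toarray())  # 4 rows x 8 columns, however many distinct cities exist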

In general, it’s essential to understand the types of variables in your dataset and the characteristics of the model you are using, and to choose the encoding method accordingly.

Can you use one-hot encoding for multiple categories?

When using one-hot encoding with multiple categories, each instance is encoded as a binary vector whose length equals the number of categories. Each vector has exactly one “hot” value (1), and the rest of the values are “cold” (0).

The position of the “hot” value corresponds to the category. For example, if you have three categories: “red”, “green”, and “blue”, then the encoding for “green” would be [0,1,0], with the second position being “hot” (i.e. 1) and the rest being “cold” (i.e. 0).

It’s important to note that each instance of a categorical variable is encoded separately, resulting in as many binary vectors as there are instances. So, if you have 100 cases of “green” and 200 cases of “blue”, you will have 300 binary vectors, with 100 of them having [0,1,0] and 200 of them having [0,0,1].

If you have a large number of categories and a large number of instances, one-hot encoding can result in a high-dimensional dataset with a lot of sparse features. In these situations, consider using other ways to encode, like binary encoding, ordinal encoding, or count encoding, to reduce the number of dimensions in the dataset.
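To see this concretely, here is a quick pandas sketch (note that get_dummies only creates columns for categories that actually appear in the data):

import pandas as pd

# 100 "green" instances and 200 "blue" instances -> 300 one-hot rows
colors = pd.Series(["green"] * 100 + ["blue"] * 200)
encoded = pd.get_dummies(colors)

print(encoded.shape)  # (300, 2): one binary vector per instance
print(encoded.sum())  # blue: 200, green: 100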

What are the alternatives?

There are several ways to represent categorical variables as numbers besides one-hot encoding, such as:

  1. Binary encoding: This first assigns each category an integer and then represents that integer in binary, with each binary digit becoming a feature. It needs only about log2(k) features for k categories.
  2. Label encoding: This assigns a unique integer value to each category. It avoids the curse of dimensionality and, when the integers follow the categories’ natural order (ordinal encoding), it also captures that ordering.
  3. Count encoding: This replaces a categorical variable with the number of times each category appears in the dataset.
  4. Target encoding: This replaces a categorical variable with the average value of the target variable for each category.
  5. Helmert encoding: This compares the mean of the target variable for each category with the mean of the subsequent categories.
  6. Frequency encoding: This replaces a categorical variable with the proportion of instances in which each category appears (count encoding uses the raw count instead).
  7. Backward difference encoding: This compares the mean of the target variable for each category with the mean for the previous category.
  8. Leave-one-out encoding: This replaces each instance’s category with the mean of the target variable over all other instances of that category, excluding the current row to reduce target leakage.

All these alternatives have advantages and disadvantages, and the best one depends on the specific problem and the dataset. Therefore, it’s essential to experiment with different encoding methods and compare the results.
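A few of these can be sketched in a handful of pandas lines (the toy data and column names are invented for this example; the category_encoders library offers production-ready versions of all of them):

import pandas as pd

# Toy data: a categorical feature and a numeric target, invented for this sketch
df = pd.DataFrame({
    "color": ["red", "green", "blue", "green", "red", "red"],
    "target": [1, 0, 1, 1, 0, 1],
})

# Count encoding: each category replaced by how often it appears
df["color_count"] = df["color"].map(df["color"].value_counts())

# Frequency encoding: same idea, but as a proportion of all rows
df["color_freq"] = df["color"].map(df["color"].value_counts(normalize=True))

# Target encoding: each category replaced by the mean target for that category
df["color_target"] = df["color"].map(df.groupby("color")["target"].mean())

print(df)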

One hot encoding vs label encoding

One-hot and label encoding are methods used to represent categorical variables as numerical values that can be input into machine learning models. However, they are used in different situations and have distinct advantages and disadvantages:

  • One-hot encoding: This method creates a binary vector for each category, with a length equal to the number of categories. Only one element of the vector is “hot” (i.e. set to 1), while all other elements are “cold” (i.e. set to 0). This method is helpful for nominal variables, which do not have a natural ordering. One-hot encoding can cause the dimensionality of the dataset to increase rapidly, leading to the curse of dimensionality.
  • Label encoding: This method assigns a unique integer value to each category. It is helpful for ordinal variables, which have a natural ordering, and it’s less prone to the curse of dimensionality than one-hot encoding. However, it can introduce an arbitrary ordering to the categorical variable, which might not match the real ordering and can cause problems in some models.

For example, if you are working with a variable representing the size of a t-shirt (small, medium, large), there is a natural order, so labelling the sizes 0, 1, 2 is appropriate; label encoding works well here. However, if you are working with a variable representing the colour of a t-shirt (red, blue, green), there is no natural ordering, so one-hot encoding is more appropriate.
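A sketch of both choices with scikit-learn (the toy data is invented; OrdinalEncoder is used for the size variable so the ordering can be passed explicitly):

from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

sizes = [["small"], ["medium"], ["large"], ["medium"]]
colours = [["red"], ["blue"], ["green"], ["blue"]]

# Ordinal variable: encode with an explicit, meaningful order
size_encoder = OrdinalEncoder(categories=[["small", "medium", "large"]])
print(size_encoder.fit_transform(sizes))  # [[0.], [1.], [2.], [1.]]

# Nominal variable: one-hot encode, since colours have no order
colour_encoder = OneHotEncoder()
print(colour_encoder.fit_transform(colours).toarray())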

It’s important to note that the choice between one-hot encoding and label encoding depends on the specific problem and the dataset. Therefore, it’s essential to experiment with different encoding methods and compare the results.

What should you do after one hot encoding?

After one-hot encoding, there are a few steps that you may want to consider:

  1. Check for collinearity: The one-hot columns for a variable always sum to 1, so they are linearly dependent, which can cause problems for some machine learning models (the “dummy variable trap”). Dropping one column per variable (drop='first' in OneHotEncoder, drop_first=True in get_dummies) removes this redundancy, and a correlation matrix can reveal any remaining correlated features.
  2. Handle missing values: Missing values in the original column should be dealt with before or during encoding, for example by imputing the mode or treating “missing” as its own category. Also decide how to handle categories that appear at prediction time but were not seen during fitting (OneHotEncoder’s handle_unknown='ignore' option).
  3. Scale the data: If the dataset also contains numerical features, scale them so all features are on a comparable scale. This can be done using the StandardScaler or MinMaxScaler classes from scikit-learn.
  4. Train a model: After preprocessing the data, you can train a machine learning model on the dataset. For example, you can use a logistic regression, a decision tree, or a neural network.
  5. Evaluate the model: After training the model, you should evaluate it using metrics such as accuracy, precision, recall, and F1-score. You can also use k-fold cross-validation to get a more robust estimate of the model’s performance.
  6. Optimize the model: You can fine-tune the model’s parameters or try different models to improve it based on the evaluation results.

It’s also important to check your model’s performance on a test dataset that the model has not seen before to estimate its performance on unseen data.
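As a rough sketch tying the encoding, training, and evaluation steps together (the toy data is invented; handle_unknown='ignore' guards against categories unseen during fitting):

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy data, invented for this sketch
X = pd.DataFrame({"color": ["red", "green", "blue", "green", "red", "blue"] * 10})
y = [1, 0, 1, 0, 1, 0] * 10

pipeline = Pipeline([
    ("encode", ColumnTransformer([
        # handle_unknown="ignore" encodes unseen categories as all zeros;
        # OneHotEncoder(drop="first") is another option to avoid collinear columns
        ("onehot", OneHotEncoder(handle_unknown="ignore"), ["color"]),
    ])),
    ("model", LogisticRegression()),
])

# 5-fold cross-validation for a more robust performance estimate
print(cross_val_score(pipeline, X, y, cv=5).mean())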

One hot encoding in Python

1. Scikit-Learn

In Python, one-hot encoding can be easily performed using the OneHotEncoder class from the sklearn.preprocessing module. Here is an example of how to use it:

from sklearn.preprocessing import OneHotEncoder

# Input data: OneHotEncoder expects a 2D array, so each value is wrapped in a list
data = [['red'], ['green'], ['blue'], ['green'], ['red']]

# Create an instance of the one-hot encoder
encoder = OneHotEncoder()

# Fit and transform the input data (returns a sparse matrix)
encoded_data = encoder.fit_transform(data)

# Print the encoded data as a dense array
print(encoded_data.toarray())

This will output an array with the one-hot encoded data. Note that OneHotEncoder orders its columns alphabetically by category (blue, green, red), which you can confirm via encoder.categories_:

[[0. 0. 1.]
 [0. 1. 0.]
 [1. 0. 0.]
 [0. 1. 0.]
 [0. 0. 1.]]
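In scikit-learn 1.2 and later, you can ask for a dense array directly and skip the toarray() call:

encoder = OneHotEncoder(sparse_output=False)  # the parameter was named `sparse` before 1.2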

2. Pandas

It’s also possible to use the get_dummies function from pandas to perform one-hot encoding. Here is an example:

import pandas as pd

# Input data
data = ['red', 'green', 'blue', 'green', 'red']

# One-hot encode the input data
encoded_data = pd.get_dummies(data)

# Print the encoded data
print(encoded_data)

This will output a DataFrame with the one-hot encoded data:

   blue  green  red
0     0      0    1
1     0      1    0
2     1      0    0
3     0      1    0
4     0      0    1
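One caveat: from pandas 2.0 onwards, get_dummies returns boolean (True/False) columns by default; pass dtype=int to get the 0/1 integers shown above:

encoded_data = pd.get_dummies(data, dtype=int)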

Both the OneHotEncoder class and the get_dummies function are very convenient ways to perform one-hot encoding in Python.

3. PySpark

In PySpark, one-hot encoding can be performed using the StringIndexer and OneHotEncoder classes from the pyspark.ml.feature module.

Here is an example of how to use them:

from pyspark.sql import SparkSession
from pyspark.ml.feature import StringIndexer, OneHotEncoder
from pyspark.ml import Pipeline

# Create a SparkSession (reuses an existing one if available)
spark = SparkSession.builder.getOrCreate()

# Input data
data = [("red",), ("green",), ("blue",), ("green",), ("red",)]
df = spark.createDataFrame(data, ["color"])

# Create a StringIndexer to map each category to a numeric index
indexer = StringIndexer(inputCol="color", outputCol="color_index")

# Create a OneHotEncoder; dropLast=False keeps one column per category
# (by default Spark drops the last category to avoid collinearity)
encoder = OneHotEncoder(inputCol="color_index", outputCol="color_vec", dropLast=False)

# Create a pipeline
pipeline = Pipeline(stages=[indexer, encoder])

# Fit the pipeline to the input data
transformed_df = pipeline.fit(df).transform(df)

# Print the transformed dataframe
transformed_df.show()

This will output the one-hot encoded dataframe. StringIndexer assigns indices by descending frequency and breaks ties alphabetically, so “green” gets index 0.0 and “red” gets 1.0:

+-----+-----------+-------------+
|color|color_index|    color_vec|
+-----+-----------+-------------+
|  red|        1.0|(3,[1],[1.0])|
|green|        0.0|(3,[0],[1.0])|
| blue|        2.0|(3,[2],[1.0])|
|green|        0.0|(3,[0],[1.0])|
|  red|        1.0|(3,[1],[1.0])|
+-----+-----------+-------------+

As you can see, the pipeline first applies the StringIndexer to map the categorical variable “color” to a numerical variable “color_index”, and then applies the OneHotEncoder to create a binary vector with a length equal to the number of categories.

Each vector will have only one “hot” value (1) and the rest of the values will be “cold” (0). The position of the “hot” value corresponds to the category.

In Spark 3.x, OneHotEncoder can also encode several columns at once via its inputCols and outputCols parameters (this replaced the separate OneHotEncoderEstimator class from Spark 2.x).
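A minimal sketch of the multi-column form (the size_index column and the indexed_df DataFrame are hypothetical, assumed to come from earlier StringIndexer stages):

from pyspark.ml.feature import OneHotEncoder

# Encode two indexed columns at once (Spark 3.x); inputs here are hypothetical
encoder = OneHotEncoder(
    inputCols=["color_index", "size_index"],
    outputCols=["color_vec", "size_vec"],
)
encoded_df = encoder.fit(indexed_df).transform(indexed_df)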

Conclusion

One-hot and label encoding are methods used to represent categorical variables as numerical values that can be input into machine learning models.

One-hot encoding is helpful for nominal variables, which do not have a natural ordering, whereas label encoding is helpful for ordinal variables, which have a natural ordering.

One-hot encoding can cause the dimensionality of the dataset to increase rapidly, leading to the curse of dimensionality.

Label encoding can introduce an arbitrary ordering to the categorical variable, which might not match the real ordering and can cause problems in some models.

The choice between one-hot encoding and label encoding depends on the specific problem and the dataset, and it’s essential to experiment with different encoding methods and compare the results.

About the Author

Neri Van Otten

Neri Van Otten is the founder of Spot Intelligence, a machine learning engineer with over 12 years of experience specialising in Natural Language Processing (NLP) and deep learning innovation. Dedicated to making your projects succeed.
