How To Apply Feature Scaling In Machine Learning [Top Methods & Tutorials In Python]

by Neri Van Otten | Jul 31, 2023 | Data Science, Machine Learning

What is feature scaling in machine learning?

Feature scaling is a preprocessing technique used in machine learning and data analysis to bring all the input features to a similar scale. It is essential because many machine learning algorithms are sensitive to the scale of the input features. When features are on different scales, some algorithms might give excessive weight to features with larger scales, leading to biased or inefficient models.

There are primarily two common methods of feature scaling:

1. Min-Max Scaling (Normalization): This method scales the features to a fixed range, usually between 0 and 1. The formula for min-max scaling is:

X_scaled = (X - X_min) / (X_max - X_min) 

where X is the original feature value, X_min is the minimum value of the feature, and X_max is the maximum value of the feature.

2. Standardization (Z-score scaling): Standardization transforms the features with a mean of 0 and a standard deviation of 1. The formula for standardization is:

X_scaled = (X - mean(X)) / std(X) 

where X is the original feature value, mean(X) is the mean of the feature, and std(X) is the standard deviation of the feature.

Both methods have their advantages and use cases:

Min-Max Scaling is useful when you want to scale the features to a specific range, especially if you know the maximum and minimum values have specific meanings or boundaries.

Standardization is more suitable when you have features with varying scales and want to give them all equal importance. It is commonly used in algorithms that rely on distance calculations, such as k-nearest neighbours or gradient-based optimization methods.

The choice between normalization and standardization depends on the nature of the data and the requirements of the specific machine learning algorithm being used.

Standardization is often preferred as it makes the data more amenable to various algorithms and can improve the convergence speed during training. However, it is always a good practice to try both methods and observe their impact on the model’s performance before making a final decision.
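
If you want to compare the two quickly, a minimal sketch along these lines can help (assuming scikit-learn is installed; the built-in wine dataset and a k-nearest-neighbours classifier are used purely as stand-ins for your own data and model):

from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# The wine dataset stands in for your own data
X, y = load_wine(return_X_y=True)

for scaler in (MinMaxScaler(), StandardScaler()):
    # Putting the scaler and model in one pipeline refits the scaler on each cross-validation fold
    pipeline = make_pipeline(scaler, KNeighborsClassifier())
    scores = cross_val_score(pipeline, X, y, cv=5)
    print(scaler.__class__.__name__, "mean accuracy:", round(scores.mean(), 3))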

When should you use feature scaling in machine learning?

Feature scaling is generally used in the following situations:

  1. Gradient-Based Optimization Algorithms: Many machine learning algorithms, such as gradient descent, use optimization techniques to minimize or maximize a particular objective function. Feature scaling helps these algorithms converge faster and prevents them from getting stuck in local minima or maxima. Standardization (Z-score scaling) is particularly beneficial in this context.
  2. Distance-Based Algorithms: Algorithms that rely on distances, such as k-nearest neighbours (KNN) and support vector machines (SVM), are sensitive to the scale of input features. If the features are not scaled, those with larger scales dominate the distance calculations, leading to biased results. Scaling the features ensures that each feature contributes equally to the distance calculations, improving the algorithm’s performance (see the sketch after this list).
  3. Regularization: Regularization methods, like L1 and L2 regularization, introduce penalty terms based on the magnitudes of the coefficients of features. When features are on different scales, the regularization may unfairly penalize certain features over others. Scaling the features helps to avoid this issue.
  4. Principal Component Analysis (PCA): PCA is used for dimensionality reduction and feature extraction. It works based on the covariance matrix of the data. Since PCA aims to find the directions of maximum variance, it is sensitive to the scale of the features. Feature scaling is necessary to ensure each feature contributes equally to the covariance matrix.
  5. Neural Networks: Neural networks benefit from feature scaling, especially when using activation functions like sigmoid or tanh. Scaled features help avoid vanishing or exploding gradients during backpropagation, leading to more stable training.
  6. Clustering Algorithms: Clustering algorithms like k-means use distance measures to assign data points to clusters. Scaling the features ensures that the clusters are formed based on the actual relationships between data points rather than being influenced by the scale of the features.
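
To make point 2 above concrete, here is a minimal sketch (assuming NumPy and scikit-learn; the age and income values are made up for illustration) showing how a feature on a much larger scale dominates Euclidean distances until the data is standardized:

import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features on very different scales: age in years and income in dollars
X = np.array([[25, 50_000], [26, 80_000], [60, 52_000]], dtype=float)

def euclidean(a, b):
    return np.sqrt(((a - b) ** 2).sum())

# On the raw data income dominates, so the 25-year-old appears far closer
# to the 60-year-old (similar income) than to the 26-year-old
print("Raw distances:", euclidean(X[0], X[1]), euclidean(X[0], X[2]))

# After standardization, both features contribute comparably to the distance
X_scaled = StandardScaler().fit_transform(X)
print("Scaled distances:", euclidean(X_scaled[0], X_scaled[1]), euclidean(X_scaled[0], X_scaled[2]))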

As a good practice, scaling the features before applying any machine learning algorithm is often beneficial. This ensures the algorithm performs optimally and avoids unexpected issues from varying feature scales.

What examples of machine learning algorithms do not require feature scaling?

Some machine learning algorithms are less sensitive to the scale of input features and do not require explicit feature scaling as a preprocessing step. These algorithms make decisions based on thresholds or on the presence of individual features rather than on their relative magnitudes, so scaling does not significantly impact their performance. Here are some examples of such algorithms:

  1. Decision Trees: Decision trees make binary splits based on individual features at each node. Since the feature magnitudes do not influence the splits, scaling is unnecessary.
  2. Random Forests: Random forests are an ensemble of decision trees. Like decision trees, individual trees are unaffected by feature scales, making scaling unnecessary.
  3. Gradient Boosting Machines (GBM): GBM builds an ensemble of weak learners (usually decision trees) to create a strong learner. Like decision trees, each weak learner in the ensemble makes decisions based on individual features, making scaling irrelevant.
  4. Naive Bayes: Naive Bayes is a probabilistic classification algorithm that assumes independence between features given the class label. Feature scaling does not impact the conditional probabilities in the model.
  5. Association Rule Mining: Algorithms like Apriori for association rule mining work on binary (presence/absence) features, so feature scaling has no effect on the patterns they find.
  6. Text Classification with Bag-of-Words: In bag-of-words representations of text data, features represent the occurrence or frequency of specific words. Scaling is unnecessary as the focus is on the presence of words, not their magnitudes.

While these algorithms do not require feature scaling as a preprocessing step, scaling generally does not harm their performance either. In some cases, a slight improvement can still be achieved by scaling the features, especially when dealing with large or sparse datasets. Compared to algorithms that are highly sensitive to feature scales, however, the impact of scaling on these algorithms is usually far less pronounced.
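
As a quick illustration of this scale-invariance, the following sketch (assuming scikit-learn and NumPy, with synthetic data made up for the example) fits a decision tree on raw and on min-max-scaled versions of the same data and checks that the predictions are identical:

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) * np.array([1.0, 1000.0])  # second feature on a much larger scale
y = (X[:, 0] + X[:, 1] / 1000 > 0).astype(int)

tree_raw = DecisionTreeClassifier(random_state=0).fit(X, y)
X_scaled = MinMaxScaler().fit_transform(X)
tree_scaled = DecisionTreeClassifier(random_state=0).fit(X_scaled, y)

# The split thresholds adapt to the feature scale, so the two trees predict identically
print("Identical predictions:", np.array_equal(tree_raw.predict(X), tree_scaled.predict(X_scaled)))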

What are the different types of feature scaling that you can use?

There are several types of feature scaling methods used in data preprocessing. Here are some common ones:

1. Min-Max Scaling (Normalization)

This method scales the features to a fixed range, usually between 0 and 1. The formula for min-max scaling is:

X_scaled = (X - X_min) / (X_max - X_min)

where X is the original feature value, X_min is the minimum value of the feature, and X_max is the maximum value of the feature.

Advantages:

  • Scales data to a fixed range (typically between 0 and 1), which can be helpful for certain algorithms that require input features within specific boundaries.
  • Preserves the relative relationships between data points.

Disadvantages:

  • Sensitive to outliers, as they can significantly impact the scaling of the entire feature.
  • Handles extreme values less gracefully than more robust scaling methods.

Use Cases:

  • Algorithms that use distances or gradients, such as gradient descent and neural networks with scale-sensitive activation functions.
  • Image processing, where pixel values (typically between 0 and 255) are often normalized to a smaller range.

2. Standardization (Z-score scaling)

Standardization transforms the features with a mean of 0 and a standard deviation of 1. The formula for standardization is:

X_scaled = (X - mean(X)) / std(X)

where X is the original feature value, mean(X) is the mean of the feature, and std(X) is the standard deviation of the feature.

Advantages:

  • Shifts the mean to 0 and scales the variance to 1, making the features comparable and helping algorithms converge faster during training.
  • Less affected by outliers compared to Min-Max Scaling.

Disadvantages:

  • The scaled features do not have a fixed range, which might be undesirable for certain algorithms with specific requirements.

Use Cases:

  • Linear regression, logistic regression, and support vector machines (SVMs).
  • Principal Component Analysis (PCA) or other methods that require standardization to avoid features dominating the results.

3. Max Abs Scaling

Similar to Min-Max Scaling, but instead of mapping to a fixed range, it divides each feature by its maximum absolute value, so the scaled values lie between -1 and 1 and the sign of the original data is preserved. The formula is:

X_scaled = X / max(abs(X))

Advantages:

  • Preserves the sign of the original data, making it suitable for algorithms where both the sign and magnitude of the features matter.

Disadvantages:

  • Not ideal for data with extreme outliers.
  • Does not achieve the mean centring and variance scaling provided by standardization.

Use Cases:

  • Data where both the sign and the magnitude of the features matter, such as signed inputs to neural networks, and sparse data, since Max Abs Scaling does not shift the values and therefore preserves sparsity.

4. Robust Scaling

Robust Scaling centres the features on the median and scales them by the interquartile range (IQR), making it less sensitive to outliers. It is calculated as follows:

X_scaled = (X - median(X)) / (Q3 - Q1)

where X is the original feature value, median(X) is the median of the feature, and Q1 and Q3 are the first and third quartiles (so Q3 - Q1 is the IQR).

Advantages:

  • Less sensitive to outliers compared to Min-Max Scaling and Standardization.
  • Maintains the central tendency of the data.

Disadvantages:

  • It does not scale the features to a specific range, which might be necessary for certain algorithms.

Use Cases:

  • Data with a high presence of outliers, especially in regression tasks.
  • Clustering algorithms such as k-means, which rely on distances and are therefore sensitive to outliers.
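
A minimal sketch with scikit-learn’s RobustScaler, which centres on the median and divides by the IQR; the extreme value in the sample data is made up to show the effect:

import numpy as np
from sklearn.preprocessing import RobustScaler

# Sample data with one extreme outlier that would distort min-max scaling or standardization
data = np.array([[1.0], [2.0], [3.0], [4.0], [1_000_000.0]])

robust_scaler = RobustScaler()  # subtracts the median and divides by the IQR
data_robust_scaled = robust_scaler.fit_transform(data)
print("Robust Scaled Data:")
print(data_robust_scaled)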

5. Log Transformation

If the data is positively skewed, a log transformation (X_scaled = log(X), or log(1 + X) to accommodate zeros) can help normalize the distribution. It is particularly useful for data that varies over several orders of magnitude.

Advantages:

  • It helps normalize data with a positively skewed distribution.
  • Useful when data varies over several orders of magnitude.

Disadvantages:

  • The plain logarithm is not defined for zero or negative values, so such data must be shifted first or handled with a variant like log(1 + X).

Use Cases:

  • Financial data, where prices or incomes are often heavily right-skewed.
  • Data that exhibit exponential growth patterns.
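
A minimal NumPy sketch; np.log1p (log(1 + x)) is used here rather than the plain logarithm so that zero values remain defined, and the sample values are made up for illustration:

import numpy as np

# Positively skewed data spanning several orders of magnitude (e.g. incomes)
data = np.array([1_000.0, 5_000.0, 20_000.0, 100_000.0, 2_000_000.0])

# log1p computes log(1 + x), so zero values are still defined
data_log_transformed = np.log1p(data)
print("Log-Transformed Data:")
print(data_log_transformed)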

6. Power Transformation 

Power transformations, such as Box-Cox or Yeo-Johnson, stabilize variance and make the data more normally distributed.

Advantages:

  • It can stabilize variance and make the data more normally distributed.
  • The Yeo-Johnson variant accommodates data with both positive and negative values.

Disadvantages:

  • The Box-Cox variant is not appropriate for data with zero or negative values.
  • The transformation parameter (lambda) must be estimated, typically by maximum likelihood, or chosen through experimentation.

Use Cases:

  • When dealing with data with varying degrees of skewness.
  • Financial data analysis to make it more amenable to statistical tests.
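
A minimal sketch with scikit-learn’s PowerTransformer; the Yeo-Johnson method (the default) accepts zero and negative values, whereas method="box-cox" requires strictly positive data. The sample values are made up for illustration:

import numpy as np
from sklearn.preprocessing import PowerTransformer

# Skewed sample data including zero and negative values, so Yeo-Johnson is used
data = np.array([[-5.0], [0.0], [1.0], [10.0], [100.0], [1_000.0]])

power_transformer = PowerTransformer(method="yeo-johnson", standardize=True)
data_power_transformed = power_transformer.fit_transform(data)
print("Power-Transformed Data:")
print(data_power_transformed)
print("Estimated lambda:", power_transformer.lambdas_)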

7. Mean Normalization

This method centres the data to have a mean of 0 by subtracting the mean of each feature from every data point:

X_scaled = X - mean(X)

Advantages:

  • Centres the data around 0, which can be helpful for specific optimization algorithms.

Disadvantages:

  • It does not normalize the variance, and the scale of the data can still vary significantly.

Use Cases:

  • Optimization algorithms that benefit from centred data.
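
Scikit-learn does not ship a dedicated mean-normalization scaler, so a minimal NumPy sketch of the mean centring described above might look like this (StandardScaler(with_std=False) is an equivalent option):

import numpy as np

# Sample data (replace this with your actual dataset)
data = np.array([[10, 1000], [5, 500], [3, 300], [8, 800]], dtype=float)

# Mean normalization as described above: centre each feature on a mean of 0
data_mean_centred = data - data.mean(axis=0)
print("Mean-Centred Data:")
print(data_mean_centred)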

8. Unit Vector Scaling (Vector Normalization)

This method scales each sample (row) in the dataset to have a Euclidean norm (magnitude) of 1. It is often used in machine learning algorithms that rely on distance calculations, such as k-nearest neighbours.

Advantages:

  • Ensures all data points have the same scale, making distance-based algorithms work better.

Disadvantages:

  • A sample whose values are all zero has a norm of zero and cannot be scaled (division by zero), and information about each sample’s overall magnitude is discarded.

Use Cases:
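
  • Distance- or similarity-based algorithms such as k-nearest neighbours, where the direction of the feature vector matters more than its magnitude.
  • Text data represented as term-count or TF-IDF vectors, where cosine similarity between unit-length vectors is common.

A minimal sketch with scikit-learn’s Normalizer, which rescales each sample (row) to unit Euclidean norm; the sample array mirrors the data used in the tutorial below:

import numpy as np
from sklearn.preprocessing import Normalizer

# Sample data (replace this with your actual dataset)
data = np.array([[10, 1000], [5, 500], [3, 300], [8, 800]], dtype=float)

normalizer = Normalizer(norm="l2")  # each row is divided by its Euclidean norm
data_unit_scaled = normalizer.fit_transform(data)
print("Unit Vector Scaled Data:")
print(data_unit_scaled)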

The choice of feature scaling method depends on the nature of the data, the characteristics of the machine learning algorithm being used, and any specific requirements on the scale of the features in the context of the problem. Experimentation and data analysis can help determine the most appropriate feature scaling technique for a given task.

How to implement feature scaling using Python

1. Min-max normalization and standardization in Python

In Python, you can perform feature scaling using various libraries. Here we will demonstrate how to do feature scaling using two popular libraries: scikit-learn and NumPy.

A. Using scikit-learn

Scikit-learn is a powerful machine learning library that includes utilities for data preprocessing, including feature scaling.

First, you need to install scikit-learn if you haven’t already:

pip install scikit-learn 

Here’s an example of how to perform feature scaling using Min-Max Scaling (Normalization) and Standardization (Z-score scaling) using scikit-learn:

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Sample data (replace this with your actual dataset)
data = np.array([[10, 1000], [5, 500], [3, 300], [8, 800]])

# Min-Max Scaling (Normalization)
min_max_scaler = MinMaxScaler()
data_minmax_scaled = min_max_scaler.fit_transform(data)
print("Min-Max Scaled Data:")
print(data_minmax_scaled)

# Standardization (Z-score scaling)
standard_scaler = StandardScaler()
data_standard_scaled = standard_scaler.fit_transform(data)
print("Standardized Data:")
print(data_standard_scaled)

B. Using NumPy

If you prefer, you can implement feature scaling manually with NumPy. Its array operations let you apply the scaling formulas directly to your data.

Here’s an example of how to perform Min-Max Scaling and Standardization using NumPy:

import numpy as np

# Sample data (replace this with your actual dataset)
data = np.array([[10, 1000], [5, 500], [3, 300], [8, 800]])

# Min-Max Scaling (Normalization)
data_minmax_scaled = (data - data.min(axis=0)) / (data.max(axis=0) - data.min(axis=0))
print("Min-Max Scaled Data:")
print(data_minmax_scaled)

# Standardization (Z-score scaling)
data_standard_scaled = (data - data.mean(axis=0)) / data.std(axis=0)
print("Standardized Data:")
print(data_standard_scaled)

Both methods will achieve the same results in scaling the features of your dataset. The choice between scikit-learn and NumPy depends on your specific requirements and the complexity of the preprocessing steps you need to perform. Scikit-learn provides a more straightforward interface for everyday preprocessing tasks, while NumPy allows for more customization and flexibility.

2. Max Abs Scaler in Python

You can use the MaxAbsScaler from scikit-learn to perform Max Abs Scaling in Python. The MaxAbsScaler scales the data so that the maximum absolute value of each feature is 1, preserving the sign of the original data. This is particularly useful when you have positive and negative features and want to scale them based on their absolute maximum value.

First, make sure you have scikit-learn installed:

pip install scikit-learn

Here’s an example of how to use MaxAbsScaler:

import numpy as np
from sklearn.preprocessing import MaxAbsScaler

# Sample data (replace this with your actual dataset)
data = np.array([[10, 1000], [5, 500], [3, 300], [8, 800]])

# Max Abs Scaling
max_abs_scaler = MaxAbsScaler()
data_maxabs_scaled = max_abs_scaler.fit_transform(data)
print("Max Abs Scaled Data:")
print(data_maxabs_scaled)

The fit_transform() method in MaxAbsScaler will compute the maximum absolute value for each feature and then scale the data accordingly. After applying MaxAbsScaler, the data will have a maximum absolute value of 1 for each feature while preserving their original signs.

Applying the same scaler to the training and test data is essential when working with machine learning models. This ensures that the scaling is consistent across the entire dataset. To do that, call transform() on the test data with the scaler that was already fitted on the training data. For example:

# Sample test data
test_data = np.array([[15, 1500], [2, 200]])

# Use the already fitted MaxAbsScaler to transform the test data
test_data_maxabs_scaled = max_abs_scaler.transform(test_data)
print("Max Abs Scaled Test Data:")
print(test_data_maxabs_scaled)

Remember to only use the fit_transform() method on the training data to avoid data leakage, which can lead to biased results.
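
One way to make this discipline automatic is to wrap the scaler and the model in a scikit-learn Pipeline, so the scaler is only ever fitted on the training portion. A minimal sketch, using synthetic data as a stand-in for your own dataset:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MaxAbsScaler
from sklearn.svm import SVC

# Synthetic data standing in for your own dataset
X, y = make_classification(n_samples=200, n_features=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# The pipeline fits the scaler on the training data only, then reuses it for the test data
model = make_pipeline(MaxAbsScaler(), SVC())
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))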

Conclusion

Feature scaling is a fundamental data preprocessing technique that is critical to the performance of many machine learning algorithms and data analysis tasks. Bringing the input features to a common scale yields multiple benefits and prevents the issues that arise when features sit on very different scales.

Here are the key takeaways:

  1. Normalization and Standardization: The two most common scaling methods are Min-Max Scaling (Normalization), which scales features to a fixed range (often between 0 and 1), and Standardization (Z-score scaling), which standardizes features to have zero mean and unit variance.
  2. Impact on Algorithms: Feature scaling is essential for algorithms that rely on distance measures or gradient-based optimization, as it ensures equal contributions from all features, avoids biases due to scale, and helps algorithms converge faster and more reliably.
  3. Handling Outliers: Some scaling methods, such as Robust Scaling, offer better resistance to outliers than Min-Max Scaling and Standardization.
  4. Use Cases: Feature scaling is relevant in numerous scenarios, including linear regression, logistic regression, support vector machines (SVMs), k-nearest neighbours (KNN), neural networks, clustering algorithms, and principal component analysis (PCA).
  5. Choice of Scaling Method: The selection of the scaling method depends on the characteristics of the data and the specific requirements of the algorithm being used. Experimentation and analysis are crucial for selecting the most suitable scaling technique.
  6. Good Practice: Although some algorithms might be less sensitive to feature scales, it is generally considered good practice to scale features before applying most machine learning algorithms. This ensures consistent and optimal performance across different models.

In summary, feature scaling is a powerful tool to improve machine learning models’ accuracy, stability, and efficiency. Bringing features to a common scale facilitates fair comparisons between different features. It enables algorithms to focus on the underlying patterns within the data, leading to better and more reliable results. Always consider feature scaling as an essential step in your data preprocessing pipeline to maximize the performance of your machine learning models.

About the Author

Neri Van Otten

Neri Van Otten is the founder of Spot Intelligence, a machine learning engineer with over 12 years of experience specialising in Natural Language Processing (NLP) and deep learning innovation. Dedicated to making your projects succeed.
