Principal Component Analysis Made Easy & How To Python Tutorial With Scikit-Learn

by Neri Van Otten | Aug 25, 2023 | Data Science, Machine Learning

What is the meaning of PCA in machine learning?

PCA stands for Principal Component Analysis. It is a statistical technique used in data analysis and machine learning to simplify the complexity of high-dimensional data while retaining its important features.

PCA primarily aims to transform a dataset’s original variables into a new set of uncorrelated variables called principal components. These components are linear combinations of the original variables and are chosen in such a way that they capture the maximum variance present in the data.

PCA is often used for dimensionality reduction, which is particularly useful when dealing with datasets with many variables. By reducing the number of dimensions, PCA can help mitigate issues related to the “curse of dimensionality” and make subsequent analysis or modelling more efficient and accurate. Additionally, PCA can be used for data visualization and noise reduction.

In PCA, the first principal component captures the most variance in the data; the second principal component captures the second most, and so on. These principal components are orthogonal to each other, meaning they are uncorrelated. Finding these components involves computing eigenvectors and eigenvalues of the data’s covariance matrix.

An Intuitive Explanation Behind PCA

The intuition behind Principal Component Analysis (PCA) revolves around simplifying complex data by focusing on its most significant patterns. Imagine you have a high-dimensional dataset with numerous variables. Each variable represents a different aspect or measurement, and together, they form a multi-dimensional space. However, not all of these variables may contribute equally to the underlying structure of the data.

PCA aims to find a new set of axes, called principal components, in this multi-dimensional space such that when the data is projected onto these components, the variance (spread) of the data is maximized along the first component, followed by the second most variance on the second component, and so on. PCA identifies the directions in the original data space where the data varies the most.

A step-by-step breakdown of how PCA works:

  1. Data Variability: Consider your high-dimensional data as points scattered in space. Each point is a data instance whose dimensions correspond to the different variables. These dimensions might be correlated or might contain redundant information.
  2. Variance as Information: The spread of the data along each dimension represents the amount of information contained in that variable. If a variable has a larger spread (higher variance), it carries more information about the differences between data points.
  3. New Coordinate System: PCA seeks to find a new set of axes (principal components) in this space. The first principal component is the direction along which the data varies the most (has the highest variance). The second principal component is orthogonal (perpendicular) to the first, captures the next highest variance, and so on.
  4. Projection: When you project the data onto these principal components, you’re essentially looking at the data from a new perspective. The projection keeps the most significant information while discarding the less critical information.
  5. Dimensionality Reduction: Now, if you’re willing to sacrifice some information (variance), you can choose to retain only the top few principal components. This effectively reduces the dimensionality of the data while preserving the most important patterns. It can simplify visualization and subsequent analysis.
  6. Interpretability: In many cases, these principal components might have physical or intuitive interpretations. They might represent underlying factors or trends in the data that are difficult to discern in the original high-dimensional space.

PCA helps to highlight the underlying structure of the data by finding the directions in which it varies the most. Focusing on the most important patterns and reducing dimensionality can lead to better data understanding, visualization, and analysis.

Let’s illustrate this process with a simple 2D example that shows how variance determines which directions become the principal components.

A simple 2D example of PCA


Imagine you have a dataset of points in a 2D space, where each point represents an observation with two variables: X and Y. Here’s the dataset:

  X   |   Y
----------------
  2   |   3
  4   |   5
  6   |   7
  8   |   9
 10   |  11

1. Calculating Means: The first step in PCA is calculating the means of both variables (X and Y). In this case, the mean of X is (2 + 4 + 6 + 8 + 10) / 5 = 6, and the mean of Y is (3 + 5 + 7 + 9 + 11) / 5 = 7.

2. Centering the Data: Subtract the respective means from each data point. This centers the data around the origin (0, 0):

  X   |   Y
----------------
 -4   |  -4
 -2   |  -2
  0   |   0
  2   |   2
  4   |   4

3. Calculating Covariance: Calculate the covariance matrix of the centred data. Using the sample covariance (dividing by n - 1 = 4), the variance of X, the variance of Y and the covariance between them all work out to 40 / 4 = 10:

        |    X    |    Y
 -------------------------
    X   |  10.0   |  10.0
    Y   |  10.0   |  10.0

Notice that the off-diagonal elements equal the diagonal elements, which tells us that X and Y are perfectly correlated in this example.

4. Finding Eigenvectors and Eigenvalues: The next step is to find the eigenvectors and eigenvalues of the covariance matrix. For this matrix, the eigenvalues are 20 and 0. The eigenvector for the eigenvalue 20 points along the diagonal direction (1, 1) / √2, and the eigenvector for the eigenvalue 0 points along the perpendicular direction (1, -1) / √2. In other words, all of the variability in this dataset lies along a single direction.

5. Choosing Principal Components: The first principal component is the direction (1, 1) / √2, which captures all of the variance (its eigenvalue, 20, equals the total variance of 10 + 10). The second principal component, (1, -1) / √2, captures no variance at all, so it can be discarded.

In this example, the data points lie exactly on a straight line, so a single principal component describes them perfectly, and we can reduce the data from two dimensions to one without losing any information. In more complex examples, PCA would select the directions where the data varies the most, allowing you to capture the most important patterns and reduce dimensionality.

Remember that this is a highly simplified example. In real-world scenarios, PCA becomes particularly powerful when there’s a noticeable difference in variance along different directions, allowing it to capture the main patterns in high-dimensional data effectively.
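
If you want to verify these numbers yourself, here is a minimal NumPy sketch of the example above (the variable names are purely illustrative):

import numpy as np

# The 2D example dataset from the table above
data = np.array([[2, 3], [4, 5], [6, 7], [8, 9], [10, 11]], dtype=float)

# Centre the data by subtracting the column means (6 and 7)
centred = data - data.mean(axis=0)

# Sample covariance matrix of the centred data
cov = np.cov(centred, rowvar=False)
print(cov)           # [[10. 10.]
                     #  [10. 10.]]

# Eigen-decomposition of the symmetric covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov)
print(eigenvalues)   # approximately [ 0. 20.]
print(eigenvectors)  # the column paired with 20 points along the (1, 1) diagonal (up to sign)

The eigenvalue of 20 equals the total variance (10 + 10), confirming that a single direction carries all of the information in this dataset.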

The Mathematics Behind Principal Component Analysis

Principal Component Analysis (PCA) might sound complex, but at its core, it relies on straightforward mathematical principles to uncover the intrinsic structure of data. In this section, we’ll delve into the mathematical underpinnings of PCA, breaking down the steps that lead to identifying those crucial principal components.

1. Covariance Matrix and Centered Data

At the heart of PCA lies the covariance matrix. This matrix quantifies the relationships between different variables in your data. But we need to centre the data before we compute the covariance matrix. Centring involves subtracting the mean of each variable from its respective values, ensuring that the new origin is at the mean of the data.

Mathematically, for each data point (x, y), the centred point becomes (x - mean(x), y - mean(y)). Once all data points are centred, we can construct the covariance matrix. This matrix captures how much the variables vary together.

2. Eigenvalues and Eigenvectors

With the covariance matrix in hand, we find its eigenvalues and eigenvectors. Eigenvalues and eigenvectors are fundamental concepts in linear algebra. An eigenvector of a matrix remains in the same direction, only scaled, when the matrix is applied to it. The corresponding eigenvalue represents the amount by which the eigenvector is scaled.

For the covariance matrix, the eigenvectors represent the directions along which the data varies the most. The eigenvalues tell us how much variance is captured along each eigenvector direction. The eigenvector with the largest eigenvalue corresponds to the first principal component, the direction with the most variance in the data. The second largest eigenvalue corresponds to the second principal component, and so on.
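
You can check this defining property numerically: multiplying an eigenvector by the matrix only rescales it by the corresponding eigenvalue. A tiny sketch (the matrix here is just an arbitrary symmetric example):

import numpy as np

# A small symmetric matrix standing in for a covariance matrix
C = np.array([[2.0, 1.5], [1.5, 2.0]])

eigenvalues, eigenvectors = np.linalg.eigh(C)
v = eigenvalues_vector = eigenvectors[:, -1]   # eigenvector paired with the largest eigenvalue (3.5)
lam = eigenvalues[-1]

# C @ v is just v rescaled by its eigenvalue
print(C @ v)
print(lam * v)   # the same vector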

3. Selecting Principal Components

The final step involves selecting the top k eigenvectors (principal components) corresponding to the k largest eigenvalues. These principal components collectively form a new coordinate system for the data. The original data is projected onto this new coordinate system, capturing the essential patterns while discarding the less significant information.

In practice, you can choose how many principal components to retain based on the variance you want to preserve. Retaining more components preserves more information but leaves you with a higher-dimensional representation.
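
To make these steps concrete, here is a small from-scratch sketch in NumPy. It is not how you would run PCA in production (scikit-learn uses a more numerically robust SVD under the hood), but it mirrors the recipe above: centre the data, compute the covariance matrix, take its eigen-decomposition and project onto the top k eigenvectors. The function and variable names are purely illustrative.

import numpy as np

def pca_from_scratch(X, k):
    """Project X (n_samples x n_features) onto its top k principal components."""
    # 1. Centre the data
    mean = X.mean(axis=0)
    X_centred = X - mean

    # 2. Covariance matrix of the centred data
    cov = np.cov(X_centred, rowvar=False)

    # 3. Eigen-decomposition; eigh returns eigenvalues in ascending order
    eigenvalues, eigenvectors = np.linalg.eigh(cov)

    # Reorder so the direction with the most variance comes first
    order = np.argsort(eigenvalues)[::-1]
    eigenvalues = eigenvalues[order]
    eigenvectors = eigenvectors[:, order]

    # 4. Keep the top k eigenvectors and project the centred data onto them
    components = eigenvectors[:, :k]
    projected = X_centred @ components
    return projected, components, eigenvalues, mean

# Example usage on random correlated data
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0, 0], [[3, 1, 0.5], [1, 2, 0.3], [0.5, 0.3, 1]], size=200)
Z, W, eigvals, mu = pca_from_scratch(X, k=2)
print(Z.shape)                    # (200, 2)
print(eigvals / eigvals.sum())    # fraction of the total variance along each direction

Up to a possible sign flip of each component, this projection matches what scikit-learn’s PCA produces on the same data.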

4. Dimensionality Reduction and Reconstruction

One of the primary applications of PCA is dimensionality reduction. By selecting a subset of the principal components, you reduce the dimensionality of your data while retaining most of its essential characteristics. This can significantly simplify subsequent analysis, visualization, and modelling.

Additionally, you can use the retained principal components to reconstruct an approximation of the original data. This is done by projecting the data back into the original space using the selected principal components.
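
Concretely, if Z is the projected data and W holds the retained components, the reconstruction is Z times W transposed, plus the mean that was subtracted during centring. A short, self-contained sketch (the data here is made up for illustration):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0, 0], [[3, 1, 0.5], [1, 2, 0.3], [0.5, 0.3, 1]], size=200)

# Keep only the first two principal components of three-dimensional data
pca = PCA(n_components=2)
Z = pca.fit_transform(X)

# Reconstruction by hand: Z times the components, plus the mean
X_hat = Z @ pca.components_ + pca.mean_

# scikit-learn's inverse_transform does exactly this
assert np.allclose(X_hat, pca.inverse_transform(Z))

# The reconstruction error reflects the variance in the discarded component
print(np.mean((X - X_hat) ** 2))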

The mathematical machinery behind PCA might seem intricate, but its conceptual core is accessible. By focusing on the relationships between variables, the eigenvalues and eigenvectors guide us to the principal components that capture the essence of the data. With this understanding, we can move on to practical implementations and explore how PCA works its magic on real-world datasets.

How To Implement PCA With sklearn In Python

Now that we have a solid grasp of the mathematical foundation of Principal Component Analysis (PCA), let’s dive into the practical steps of implementing PCA using popular libraries like scikit-learn in Python. By the end of this section, you’ll be equipped to apply PCA to your datasets and harness its power for dimensionality reduction and data analysis.

1. Data Preparation

Before applying PCA, ensure that your data is preprocessed and normalized. This is crucial for PCA’s performance. Suppose you have your dataset loaded into a NumPy array or a Pandas DataFrame.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Generating some fake data for this demonstration
np.random.seed(42)
num_samples = 100

# Create correlated data with a positive correlation
mean = [5, 7]
cov = [[2, 1.5], [1.5, 2]]
data = np.random.multivariate_normal(mean, cov, num_samples)

# Standardize the data
scaler = StandardScaler()
scaled_data = scaler.fit_transform(data)

2. Applying PCA

With your data preprocessed, you can apply PCA. Scikit-learn provides an easy-to-use PCA class for this purpose.

# Apply PCA

# Instantiate PCA with the number of components you want to retain 
num_components = 2
pca = PCA(n_components=num_components)

# Fit PCA to the scaled data
pca_data = pca.fit_transform(scaled_data)

3. Explained Variance Ratio

One of the critical pieces of information PCA provides is the explained variance ratio of each principal component. This ratio tells you the proportion of the total variance in the original data captured by each component.

# Explained variance ratio
explained_variance_ratio = pca.explained_variance_ratio_
print("Explained Variance Ratio:", explained_variance_ratio)

Output:

Explained Variance Ratio: [0.8373527 0.1626473]

4. Visualization: PCA plot

Visualizing the transformed data in the PCA space can be insightful. For a 2D PCA space, you can create a scatter plot.

# Visualize the original and PCA-transformed data
plt.figure(figsize=(10, 5))

plt.subplot(1, 2, 1)
plt.scatter(data[:, 0], data[:, 1])
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Original Data')

plt.subplot(1, 2, 2)
plt.scatter(pca_data[:, 0], pca_data[:, 1])
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.title('PCA Transformed Data')

plt.tight_layout()
plt.show()

Scatter plot of the original data vs the PCA-transformed data.

5. Reconstruction (Optional)

You can also reconstruct the data from the PCA space back to the original (scaled) space. If you retain fewer components than the original number of dimensions, some information is lost in this round trip; in the example above, we kept both components, so the reconstruction is essentially exact.

original_data_reconstructed = pca.inverse_transform(pca_data)  # back to the standardized feature space
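
Note that inverse_transform only undoes the PCA projection; the result is still in the standardized space. Continuing with the variables defined above, one way to get back to the original units and sanity-check the round trip is:

# Undo the standardization as well to return to the original units
reconstructed_original_units = scaler.inverse_transform(original_data_reconstructed)

# With all components retained, the round trip is essentially lossless
print(np.max(np.abs(data - reconstructed_original_units)))   # close to zero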

You’ve successfully implemented PCA using scikit-learn. Following these steps, you’ve transformed your data into a lower-dimensional space, retained the most significant patterns, and possibly gained new insights into your dataset. Remember that PCA is a versatile tool that can be applied to a wide range of domains, from image processing to finance, and its practical benefits are substantial.

Applications of Principal Component Analysis

Principal Component Analysis (PCA) isn’t just a theoretical concept – it’s a practical tool with applications across many fields. Let’s explore how PCA is employed to tackle real-world challenges and unearth hidden insights.

1. Image Compression and Reconstruction

In image processing, images are often represented by many pixels, resulting in high-dimensional data. PCA can be applied to compress images by reducing the number of dimensions while preserving essential features. This compression is achieved by retaining the most significant principal components. Despite the dimensionality reduction, the reconstructed images can still capture the essence of the original images, albeit with some loss of detail.
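
As a rough illustration of the idea, the sketch below compresses the 8 x 8 digit images that ship with scikit-learn down to 16 principal components and reconstructs them. The dataset, the component count and the variable names are arbitrary choices for demonstration.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

digits = load_digits()          # 1,797 images of 8 x 8 pixels = 64 features each
X = digits.data

# Keep 16 of the 64 dimensions
pca = PCA(n_components=16)
X_compressed = pca.fit_transform(X)
X_reconstructed = pca.inverse_transform(X_compressed)

print(X.shape, "->", X_compressed.shape)                         # (1797, 64) -> (1797, 16)
print("Variance retained:", pca.explained_variance_ratio_.sum())
print("Mean reconstruction error:", np.mean((X - X_reconstructed) ** 2))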

2. Face Recognition

PCA has played a pivotal role in face recognition systems. PCA can extract the most critical facial features by treating each face as a high-dimensional data point, enabling efficient recognition. In this context, the principal components, often called eigenfaces, capture characteristic patterns of variation across faces. By comparing the principal component representations of new faces to those of known faces, recognition algorithms can accurately identify individuals.

3. Bioinformatics

In genomics and proteomics, datasets can be exceedingly high-dimensional due to the many genes or proteins considered. PCA helps reveal patterns in gene expression data, facilitating the identification of clusters or groups of genes that share similar expression profiles. This can aid in understanding biological processes and classifying diseases based on gene expression.

4. Financial Analysis

PCA finds applications in financial analysis, particularly in portfolio management and risk assessment. By applying PCA to historical stock price data, you can identify the primary modes of variability among stocks. This information is invaluable for constructing diversified portfolios that balance risk and return.

5. Noise Reduction

In scenarios where data is noisy or contains irrelevant information, PCA can help filter the noise by retaining only the principal components that capture the signal. Focusing on the most significant patterns can enhance signal-to-noise ratios and improve subsequent analysis or modelling.
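
A minimal sketch of this idea: generate data that really lives along one direction, add noise, and keep only the first principal component. The numbers below are made up purely for illustration.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# An underlying 1D signal embedded in 2D, plus isotropic noise
t = rng.normal(size=(300, 1))
clean = np.hstack([t, 2 * t])                  # points on the line y = 2x
noisy = clean + rng.normal(scale=0.3, size=clean.shape)

# Keep only the dominant direction and project the rest of the noise away
pca = PCA(n_components=1)
denoised = pca.inverse_transform(pca.fit_transform(noisy))

print("Noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("Denoised MSE:", np.mean((denoised - clean) ** 2))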

Principal Component Analysis in Data Visualization

One of the most tangible benefits of Principal Component Analysis (PCA) is its role in data visualization. Visualizing high-dimensional data can be daunting, as our ability to comprehend data diminishes as the number of dimensions increases. PCA alleviates this challenge by transforming data into a lower-dimensional space that retains much of the essential information. Let’s explore how PCA enhances data visualization and aids in understanding complex datasets.

1. Dimension Reduction for Visualization

PCA’s primary function in data visualization is dimension reduction. It identifies the principal components that capture the most significant variance in the data and projects the data onto these components. By reducing the number of dimensions while preserving the most critical patterns, PCA enables data visualization in two or three dimensions.

2. Scatter Plots and Clustering

In a two-dimensional PCA space, you can create scatter plots that display the distribution of data points. Clusters, patterns, and relationships that were difficult to discern in the original high-dimensional space can become apparent in the reduced PCA space. This aids in identifying groups of similar data points and understanding their underlying structures.
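
For example, the classic Iris dataset has four measurements per flower; projecting it onto the first two principal components gives a 2D scatter plot in which the three species form visible groups. A minimal sketch:

import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

iris = load_iris()
X_scaled = StandardScaler().fit_transform(iris.data)

# Reduce the four measurements to two principal components
X_2d = PCA(n_components=2).fit_transform(X_scaled)

plt.scatter(X_2d[:, 0], X_2d[:, 1], c=iris.target)
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.title('Iris data in the 2D PCA space')
plt.show()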

3. Anomaly Detection

Outliers, anomalies, or data points that deviate significantly from the norm are often hard to spot in high-dimensional data. However, anomalies might stand out in a PCA-transformed space as data points that lie far from the rest. This can be immensely valuable for identifying exceptional cases or errors in your dataset.
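
One simple recipe is to fit PCA on the data, reconstruct each point from a few components, and flag the points with the largest reconstruction error as potential anomalies. A sketch with synthetic data, purely for illustration:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Mostly "normal" correlated points, plus a handful of points off the main trend
normal = rng.multivariate_normal([0, 0, 0], [[3, 2, 2], [2, 3, 2], [2, 2, 3]], size=500)
outliers = rng.uniform(-10, 10, size=(5, 3))
X = np.vstack([normal, outliers])

# Reconstruct every point from a single principal component
pca = PCA(n_components=1).fit(X)
X_hat = pca.inverse_transform(pca.transform(X))

# Use the reconstruction error as an anomaly score
scores = np.sum((X - X_hat) ** 2, axis=1)
print("Indices with the highest scores:", np.argsort(scores)[-5:])
# The injected outliers (indices 500-504) will typically dominate this list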

4. Interactive Visualization

Interactive data visualization tools can provide a dynamic way to explore PCA-transformed data. By allowing users to select combinations of principal components to visualize interactively, these tools can unveil intricate patterns that might not be immediately obvious. Such interactive exploration can lead to deeper insights and hypothesis generation.

Considerations and Limitations

While Principal Component Analysis (PCA) is a powerful technique for dimensionality reduction and uncovering patterns in data, it’s essential to be aware of its considerations and limitations. Understanding these aspects will help you make informed decisions when applying PCA to your datasets.

1. Linear Assumption

PCA is based on the assumption that the underlying relationships in the data are linear. This means that PCA might not be suitable for datasets where the relationships between variables are highly non-linear. In such cases, alternative techniques like Kernel PCA can be considered to capture non-linear patterns.
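
As a quick illustration, scikit-learn’s KernelPCA with an RBF kernel can unfold two concentric circles, a classic non-linear structure that ordinary (linear) PCA cannot untangle. The gamma value below is just an illustrative choice.

from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# Two concentric circles: a classic non-linear structure
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Linear PCA only rotates the data; the circles stay intertwined
X_linear = PCA(n_components=2).fit_transform(X)

# Kernel PCA with an RBF kernel can separate the two rings
X_kernel = KernelPCA(n_components=2, kernel='rbf', gamma=10).fit_transform(X)

print(X_linear.shape, X_kernel.shape)   # both (400, 2)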

2. Retained Variance

One crucial decision when using PCA is determining how many principal components to retain. Retaining too few components might result in losing information, while retaining too many might not provide significant benefits and could lead to overfitting. The explained variance ratio can guide your decision, helping you balance dimensionality reduction against information preservation.
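
scikit-learn makes this decision easy to explore: you can inspect the cumulative explained variance, or pass a fraction between 0 and 1 to n_components and let PCA keep just enough components to reach it. A short sketch on the digits dataset (the 0.95 threshold is an arbitrary example):

import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = StandardScaler().fit_transform(load_digits().data)

# Inspect how the explained variance accumulates component by component
pca_full = PCA().fit(X)
cumulative = np.cumsum(pca_full.explained_variance_ratio_)
print("Components needed for 95% variance:", np.argmax(cumulative >= 0.95) + 1)

# Or let PCA choose the number of components for you
pca_95 = PCA(n_components=0.95).fit(X)
print("Chosen automatically:", pca_95.n_components_)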

3. Interpretability

While PCA is excellent for reducing dimensionality and capturing patterns, the interpretability of the resulting components might not always be straightforward. Sometimes, the principal components might not have direct physical or intuitive interpretations. This is especially true when the original variables have complex relationships.

4. Data Scaling and Outliers

PCA is sensitive to the scale of the data. Before applying PCA, it’s crucial to standardize or normalize your data to ensure that variables with larger scales do not dominate the principal component selection process. Additionally, outliers can influence PCA, so outlier detection and handling should be considered part of your preprocessing steps.
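
A common pattern is to chain the scaler and PCA in a scikit-learn Pipeline so the scaling is always applied in the right order (and, during cross-validation, fitted only on the training data). A minimal sketch:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = load_iris().data

# Scaling and PCA bundled together so they are always applied consistently
pipeline = make_pipeline(StandardScaler(), PCA(n_components=2))
X_2d = pipeline.fit_transform(X)

print(X_2d.shape)   # (150, 2)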

5. Curse of Dimensionality

While PCA addresses the “curse of dimensionality” to some extent by reducing dimensionality, it’s essential to remember that PCA might not always be a magic solution. PCA might struggle to capture the most critical patterns in extremely high-dimensional spaces, and more advanced techniques or domain-specific knowledge might be necessary.

6. Overfitting and Generalization

When applying PCA for machine learning tasks, such as feature reduction, be cautious not to overfit your model to the reduced-dimensional space. Always evaluate your model’s performance on validation or test data to ensure the reduced features generalize well to new data.

Conclusion

Principal Component Analysis (PCA) stands as a cornerstone in data analysis and machine learning, offering a potent methodology to unveil the underlying structure of complex datasets. Through this journey, we’ve explored the essence of PCA, its mathematical foundations, practical implementation, and a spectrum of its applications.

PCA’s ability to reduce high-dimensional data to its essential components is crucial in various domains. From image compression to bioinformatics, PCA empowers us to extract meaningful information, streamline computations, and gain insights that might remain hidden in the sea of dimensions.

As you venture into the world of PCA, remember its considerations and limitations. While it excels at capturing linear patterns, it might falter when dealing with intricate non-linear relationships. The decision of how many principal components to retain balances information preservation against dimensionality reduction, and the variance explained by each component should guide this choice.

PCA is a beacon in data visualization, guiding us through the labyrinth of high-dimensional spaces. It empowers us to explore data clusters, identify anomalies, and recognize patterns that might not be evident in their original form.

In essence, PCA is more than a technique; it’s a lens through which we can better perceive the intricate tapestry of data. As you apply PCA to your projects, may it unveil insights, simplify complexities, and help you make informed decisions grounded in the profound patterns that underlie your data.

About the Author

Neri Van Otten

Neri Van Otten is the founder of Spot Intelligence, a machine learning engineer with over 12 years of experience specialising in Natural Language Processing (NLP) and deep learning innovation. Dedicated to making your projects succeed.
