What Do Principal Components Actually Do Mathematically?
I have recently taken an interest in PCA after watching Professor Gilbert Strang’s PCA lecture. I must have watched at least 15 other videos and read 7 different blog posts on PCA since. They are all excellent resources, but I found myself somewhat unsatisfied. What most of them teach us is the following:
What the PCA promise is;
Why that promise is very useful in Data Science; and
How to extract these principal components. (I don't agree with how some of them do it, namely by applying SVD to the covariance matrix, but that can be saved for another post.)
Some of them go the extra mile and show graphically how the promise is fulfilled. For example, a transformed vector can be shown to still cluster with its original group in a plot.
Objective
To me, that plot does not provide a striking enough visual effect. The component-extraction part, on the other hand, mostly covers the how and not the why. The objective of this post is therefore to sharpen both areas: to establish a more precise goal before we dive into extracting the components, and to end the post with a more striking visual.
Prerequisites
This post is for you if:
You have already seen the aforementioned plot (just a bonus, actually);
You have a decent understanding of what the covariance matrix is about;
You have a good foundation in linear algebra; and
Your heart is longing to discover the principal components, instead of being told what they are!
How to Choose P?
After hearing my dissatisfaction, my friend Calvin recommended this paper by Jonathon Shlens to me: A Tutorial on Principal Component Analysis. It is by far the best resource I have come across on PCA. However, it is also a bit lengthier than your typical blog post, so the remainder of this post will focus on section 5 of the paper. There, Jonathon immediately establishes the following goal:
The [original] dataset is $X$, an $m \times n$ matrix. Find some orthonormal matrix $P$ in $Y = PX$ such that $C_Y \equiv \frac{1}{n}YY^T$ is a diagonal matrix.[1] The rows of $P$ shall be the principal components of $X$.
As you might have noticed, $C_Y$ here is the covariance matrix of our rotated dataset $Y$. Why do we want $C_Y$ to be diagonal? Before we address this question, let's generate a dataset consisting of 4 features with some random values.
"""
Mostly helper functions.
Skip ahead unles you would like to follow the steps on your local machine.
"""
from IPython.display import Latex, display
from string import ascii_lowercase
import numpy as np
import pandas as pd
FEAT_NUM, SAMPLE_NUM = 4, 4
def covariance_matrix(dataset):
return dataset @ dataset.transpose() / SAMPLE_NUM
def tabulate(dataset, rotated=False):
'''
Label row(s) and column(s) of a matrix by wrapping it in a dataframe.
'''
if rotated:
prefix = 'new_'
feats = ascii_lowercase[FEAT_NUM:2 * FEAT_NUM]
else:
prefix = ''
feats = ascii_lowercase[0:FEAT_NUM]
return pd.DataFrame.from_records(dataset,
columns=['sample{}'.format(num) for num in range(SAMPLE_NUM)],
index=['{}feat_{}'.format(prefix, feat) for feat in feats])
def display_df(dataset, latex=False):
rounded = dataset.round(15)
if latex:
display(Latex(rounded.to_latex()))
else:
display(rounded)
x = tabulate(np.random.rand(FEAT_NUM, SAMPLE_NUM))
display_df(x)
         sample0   sample1   sample2   sample3
feat_a  0.472612  0.453242  0.811147  0.237625
feat_b  0.728994  0.916212  0.202783  0.116406
feat_c  0.803590  0.967202  0.659594  0.726142
feat_d  0.771849  0.753178  0.153215  0.459026
$X$ above just looks like a normal dataset. Nothing special. What about its covariance matrix?
c_x = covariance_matrix(x)
display_df(c_x)
          feat_a    feat_b    feat_c    feat_d
feat_a  0.285804  0.237986  0.381435  0.234878
feat_b  0.237986  0.356387  0.422564  0.334312
feat_c  0.381435  0.422564  0.635896  0.445776
feat_d  0.234878  0.334312  0.445776  0.349302

Its covariance matrix doesn't look that interesting either. However, let us recall that the covariance matrix is always a symmetric matrix with the variances on its diagonal and the covariances off-diagonal, i.e., having the following form:

$$C_X = \begin{bmatrix} \sigma^2_{a} & \sigma_{ab} & \sigma_{ac} & \sigma_{ad} \\ \sigma_{ab} & \sigma^2_{b} & \sigma_{bc} & \sigma_{bd} \\ \sigma_{ac} & \sigma_{bc} & \sigma^2_{c} & \sigma_{cd} \\ \sigma_{ad} & \sigma_{bd} & \sigma_{cd} & \sigma^2_{d} \end{bmatrix}$$

Let's also recall that $\sigma_{xy}$ is zero if and only if features $x$ and $y$ are uncorrelated. The non-zero covariances in $C_X$ are an indication that there are quite a few redundant features in $X$. What we are going to do here is feature extraction. We would like to rotate our dataset in such a way that the change of basis brings us features that are uncorrelated with each other, i.e., a new covariance matrix that is diagonal.
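To make the "zero covariance means uncorrelated" point concrete, here is a minimal, hypothetical sketch that is not part of the walkthrough itself; it uses np.cov, which (unlike the covariance_matrix helper defined above) subtracts the mean. A feature that is essentially a scaled copy of another produces a large off-diagonal entry, while an independently generated feature produces an entry near zero.

import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(1000)
b = 2 * a + 0.01 * rng.standard_normal(1000)   # b is almost a scaled copy of a
c = rng.standard_normal(1000)                  # c is generated independently of a

print(np.cov(np.vstack([a, b])))  # large off-diagonal entries: a and b are redundant
print(np.cov(np.vstack([a, c])))  # off-diagonal entries near zero: a and c are uncorrelated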
Time to Choose
With a clearer goal now, let's figure out how we can achieve it.
From the givens above, we are able to derive the relationship between $C_Y$ and $C_X$ in terms of $P$:

$$C_Y = \frac{1}{n}YY^T = \frac{1}{n}(PX)(PX)^T = \frac{1}{n}PXX^TP^T = P\left(\frac{1}{n}XX^T\right)P^T = PC_XP^T$$
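As a quick numerical check of this identity, here is a sketch that is not part of the original code; it reuses the x, covariance_matrix, and FEAT_NUM defined earlier and grabs an arbitrary orthonormal matrix from a QR decomposition. Note that the identity holds for any $P$; orthonormality only matters for the particular choice we are about to make.

x_arr = x.to_numpy()                                         # raw 4x4 data matrix
p_arb, _ = np.linalg.qr(np.random.rand(FEAT_NUM, FEAT_NUM))  # an arbitrary orthonormal matrix
lhs = covariance_matrix(p_arb @ x_arr)                       # C_Y computed directly from Y = PX
rhs = p_arb @ covariance_matrix(x_arr) @ p_arb.transpose()   # P C_X P^T
print(np.allclose(lhs, rhs))                                 # True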
Let's recall one more time that all covariance matrices are symmetric, and any symmetric matrix can be "eigendecomposed" as

$$C_X = Q\Lambda Q^T$$

where $Q$ is an orthogonal matrix whose columns are the eigenvectors of the symmetric matrix, and $\Lambda$ is a diagonal matrix whose entries are the corresponding eigenvalues. There is usually more than one way to choose $P$, but eigendecomposing $C_X$ will prove to make our life much easier. Let's see what we can do with it:
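Before we do, here is a quick sanity check of the factorization itself. This is a sketch rather than part of the original walkthrough; it reuses the c_x computed above and calls np.linalg.eigh, NumPy's eigensolver for symmetric matrices, whereas the demonstration below sticks with np.linalg.eig.

# Sketch: verify that C_X really factors as Q Λ Q^T with an orthogonal Q.
eigvals, q_check = np.linalg.eigh(c_x)
print(np.allclose(q_check @ np.diag(eigvals) @ q_check.transpose(), c_x))  # C_X = Q Λ Q^T
print(np.allclose(q_check.transpose() @ q_check, np.eye(FEAT_NUM)))        # Q^T Q = I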
Now, since we know $\Lambda$ is diagonal and $Q$ is orthogonal ($Q^TQ = QQ^T = I$), what if we choose $P$ to be $Q^T$?

$$C_Y = PC_XP^T = Q^T\left(Q\Lambda Q^T\right)Q = \left(Q^TQ\right)\Lambda\left(Q^TQ\right) = \Lambda$$
Voilà, by choosing the rows of $P$ to be the eigenvectors of $C_X$, we are able to transform $X$ into a $Y$ whose features are uncorrelated with each other!
Test it
Well, that was quite convenient, wasn't it? What's even better is that we can demonstrate it in a few lines of code:
_, q = np.linalg.eig(c_x)          # eigendecomposition: columns of q are the eigenvectors of C_X
p = q.transpose()                  # the rows of P (the principal components) are those eigenvectors
y = tabulate(p @ x, rotated=True)  # Y = PX, the rotated dataset
display_df(y)
             sample0   sample1   sample2   sample3
new_feat_e  1.400186  1.576029  0.906121  0.830166
new_feat_f -0.162144 -0.225848  0.572904  0.076917
new_feat_g -0.042285 -0.086877 -0.091316  0.335921
new_feat_h  0.087761 -0.072164 -0.002497 -0.008295
The transformed dataset $Y$, with the newly extracted features $e$ to $h$, doesn't look like anything special either. What about its covariance matrix?
c_y = covariance_matrix(y)
display_df(c_y)
            new_feat_e  new_feat_f  new_feat_g  new_feat_h
new_feat_e    1.488654    0.000000    0.000000    0.000000
new_feat_f    0.000000    0.102858    0.000000   -0.000000
new_feat_g    0.000000    0.000000    0.032629   -0.000000
new_feat_h    0.000000   -0.000000   -0.000000    0.003246
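One more check worth doing here. This is a small sketch that reuses the c_x and c_y computed above and is not part of the original demonstration: the entries on the diagonal of $C_Y$ are exactly the eigenvalues of $C_X$, i.e., the variances of the data along each principal component.

# The eigenvalues were discarded earlier with `_`, so recompute them here.
eigenvalues, _ = np.linalg.eig(c_x)
print(np.allclose(np.sort(np.diag(c_y)), np.sort(eigenvalues)))  # True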
Holy moly, isn't this exactly what we were aiming for from the beginning, achieved with just a few lines of code? From a dataset with some redundant and less interesting features, we have extracted new features that are much more meaningful to look at, simply by diagonalizing its covariance matrix. Let's wrap this up with some side-by-side comparisons.
display_df(x, latex=True)
display_df(c_x, latex=True)
display_df(y, latex=True)
display_df(c_y, latex=True)
Look at this. Isn't it just beautiful?
[1]: The reason orthonormality is part of the goal is that we do not want to do anything more than a rotation. We do not want to modify $X$; we only want to re-express it by carefully choosing a change of basis.