What is Kernel PCA?

Kernel PCA extends ordinary PCA to situations where linear transformations cannot adequately capture the variability in the data. It works analogously to a kernelized SVM: a problem that is not linearly separable in the original feature space becomes separable after an implicit mapping into a higher-dimensional feature space. As with SVMs, it relies on the kernel trick, so the computation never has to be carried out explicitly in that higher-dimensional space. Common kernel choices include the polynomial, Gaussian (RBF), and sigmoid kernels.
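A minimal sketch of this idea using scikit-learn's `KernelPCA` on two concentric circles, a classic dataset that linear PCA cannot untangle. The parameter values (`gamma=10`, `noise=0.05`, etc.) are illustrative choices, not prescribed settings:

```python
# Compare linear PCA with RBF-kernel PCA on data with nonlinear structure.
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# Two concentric circles: not linearly separable in the original 2-D space.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Linear PCA is only a rotation of the plane, so the circles stay nested.
X_lin = PCA(n_components=2).fit_transform(X)

# Kernel PCA with a Gaussian (RBF) kernel: the kernel trick implicitly maps
# the points into a higher-dimensional space where the two rings become
# linearly separable, then extracts principal components in that space.
X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)
```

After the kernel projection, the two rings occupy distinct regions along the leading kernel principal components, whereas the linear projection leaves them nested.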
