Kernel principal component analysis, or kernel PCA for short, is an extension of principal component analysis (PCA), a popular tool for linear dimensionality reduction and feature extraction.
Kernel PCA is the nonlinear form of PCA, which makes it better suited to exploiting the complicated spatial structure of high-dimensional features. In other words, kernel PCA is useful for machine learning problems whose data have structure too complicated to be represented in a linear subspace.
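The contrast can be sketched with scikit-learn on a classic toy dataset of two concentric circles, a structure that no linear subspace can capture (the dataset, `gamma` value, and class labels here are illustrative choices, not taken from the text):

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# Two concentric circles: a nonlinear structure in 2-D.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Linear PCA can only rotate/scale the data; the circles stay entangled.
X_pca = PCA(n_components=2).fit_transform(X)

# Kernel PCA with an RBF kernel implicitly maps the data into a nonlinear
# feature space, where the two circles become (nearly) linearly separable.
X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)

# Compare the first component for each class in both projections.
inner = X_kpca[y == 1, 0]
outer = X_kpca[y == 0, 0]
print("kernel PCA class means on PC1:", inner.mean(), outer.mean())
```

After the RBF mapping, the first kernel principal component alone already distinguishes the inner circle from the outer one, which linear PCA cannot do for this data.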
Generally speaking, a "kernel" is a continuous function that takes two inputs (e.g. real numbers, vectors, or functions) and maps them to a real value that is independent of the order of the arguments, i.e. the function is symmetric.
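A minimal sketch of this definition, using the Gaussian (RBF) kernel as a concrete example (the function name and `gamma` value are illustrative):

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel: k(x, y) = exp(-gamma * ||x - y||^2)."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

# Symmetry: the value does not depend on the order of the arguments.
print(rbf_kernel(x, y) == rbf_kernel(y, x))  # True

# A point compared with itself gives the kernel's maximum value, 1.0.
print(rbf_kernel(x, x))  # 1.0
```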
- Principal component analysis (PCA): A technique used to find the most valuable parts of all of the variables in a dataset and then transform the original variables into a smaller set of linear combinations.
- Sparse PCA: An extension of the classic principal component analysis (PCA) method that offers dimensionality reduction of data with better statistical properties and interpretability than classic PCA.
- Nonlinear dimensionality reduction (NDR or NLDR): A process of mapping higher-dimensional data onto a lower-dimensional nonlinear manifold within the higher-dimensional space so that the data can be more easily visualized and interpreted.
- Unsupervised learning: A branch of machine learning that tries to make sense of data that has not been labeled, classified, or categorized by extracting features and patterns on its own.