Kernel principal component analysis, or kernel PCA for short, is an extension of principal component analysis (PCA), a popular tool for linear dimensionality reduction and feature extraction.
Kernel PCA is the nonlinear form of PCA, which makes it better suited to exploiting the complicated spatial structure of high-dimensional data. In other words, kernel PCA is useful for machine learning problems whose data have structure that cannot be captured in a linear subspace.
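A minimal sketch of this difference, assuming scikit-learn is available (the dataset, kernel choice, and `gamma` value are illustrative, not from the article): two concentric circles have no linear subspace that separates them, but kernel PCA with an RBF kernel unrolls them into a space where they do separate.

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# Two concentric rings: a classic example of nonlinear structure.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Linear PCA is just a rotation here; the rings stay intermixed.
X_pca = PCA(n_components=2).fit_transform(X)

# Kernel PCA with an RBF kernel maps the rings so they become
# (approximately) linearly separable along the leading components.
X_kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)
```

Plotting `X_pca` and `X_kpca` side by side makes the contrast visible: only the kernel projection pulls the two rings apart.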
Generally speaking, a "kernel" is a continuous function that takes two inputs (e.g. real numbers, functions, vectors) and maps them to a real value, independently of the order of the arguments.
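To make this concrete, here is a small illustration (the function name and `gamma` value are my own, not from the article) using the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2), which returns the same real value regardless of argument order:

```python
import numpy as np

def rbf_kernel(x, y, gamma=0.5):
    """Map two vectors to a real value; swapping x and y changes nothing."""
    diff = np.asarray(x) - np.asarray(y)
    return float(np.exp(-gamma * np.dot(diff, diff)))

a, b = [1.0, 2.0], [3.0, 0.5]

# Symmetry: k(a, b) == k(b, a), since ||a - b|| == ||b - a||.
forward = rbf_kernel(a, b)
backward = rbf_kernel(b, a)
```

The symmetry follows directly from the fact that the kernel depends on the two inputs only through the distance between them.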