What's singular value decomposition? How is it typically used for machine learning?
There are many dimensionality reduction techniques:
- Singular Value Decomposition (SVD)
- Principal Component Analysis (PCA)
- Linear Discriminant Analysis (LDA)
- T-distributed Stochastic Neighbor Embedding (t-SNE)
- Fourier and Wavelet Transforms
Singular Value Decomposition (SVD) is a general matrix factorization method that decomposes a matrix X into three matrices: L (left singular vectors), Σ (a diagonal matrix of singular values) and R^T (right singular vectors), so that X = LΣR^T.
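A minimal sketch of this factorization using NumPy (the small data matrix here is purely illustrative):

```python
import numpy as np

# A small hypothetical 4x3 data matrix
X = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 4.0],
              [2.0, 2.0, 2.0]])

# Thin SVD: X = L @ diag(sigma) @ R^T
L, sigma, Rt = np.linalg.svd(X, full_matrices=False)

# Multiplying the factors back together recovers X
X_rec = L @ np.diag(sigma) @ Rt
assert np.allclose(X, X_rec)

# The singular values on the diagonal are sorted in descending order
assert np.all(sigma[:-1] >= sigma[1:])
```

Truncating the factors to the top k singular values gives the best rank-k approximation of X, which is what makes SVD useful for dimensionality reduction.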
For machine learning, Principal Component Analysis (PCA) is typically used. PCA is SVD applied to the mean-centered data matrix: the right singular vectors are the eigenvectors of the data's covariance matrix, and the squared singular values are proportional to its eigenvalues. These components are statistically descriptive because they capture the directions of maximum variance in the data.
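This equivalence can be checked numerically: the eigenvalues obtained from the SVD of the centered data match those of the covariance matrix (the random data below is just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

# PCA starts by centering each feature at zero mean
Xc = X - X.mean(axis=0)

# Route 1: SVD of the centered data; squared singular values
# divided by (n - 1) give the covariance eigenvalues
_, sigma, _ = np.linalg.svd(Xc, full_matrices=False)
eigvals_svd = sigma**2 / (X.shape[0] - 1)

# Route 2: eigendecomposition of the covariance matrix directly
cov = np.cov(Xc, rowvar=False)
eigvals_cov = np.sort(np.linalg.eigvalsh(cov))[::-1]

assert np.allclose(eigvals_svd, eigvals_cov)
```

In practice the SVD route is preferred because it avoids forming the covariance matrix explicitly, which is more numerically stable.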
Having calculated the eigenvectors and eigenvalues, we can use the Kaiser-Guttman criterion, a scree plot, or the proportion of explained variance to choose how many principal components to keep (i.e. the final dimensionality).
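Two of these selection rules can be sketched in a few lines; the eigenvalues and the 90% variance threshold below are hypothetical choices for illustration:

```python
import numpy as np

# Hypothetical eigenvalues, sorted in descending order
eigvals = np.array([4.0, 2.0, 1.0, 0.5, 0.5])

# Proportion of explained variance per component, and its cumulative sum
ratio = eigvals / eigvals.sum()
cumulative = np.cumsum(ratio)

# Rule 1: keep the smallest k components explaining >= 90% of the variance
k_variance = int(np.argmax(cumulative >= 0.90) + 1)

# Rule 2 (Kaiser-Guttman): keep components whose eigenvalue
# exceeds the average eigenvalue
k_kaiser = int(np.sum(eigvals > eigvals.mean()))

print(k_variance)  # -> 4
print(k_kaiser)    # -> 2
```

Note the two criteria can disagree, as they do here; the scree plot is often used as a visual tie-breaker, looking for the "elbow" where eigenvalues level off.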