In multilinear algebra, the higher-order singular value decomposition (HOSVD) is a misnomer. There does not exist a single tensor decomposition that retains all the defining properties of the matrix SVD. The matrix SVD simultaneously yields a rank-R decomposition and orthonormal subspaces for the row and column spaces.
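In matrix notation (the symbols here are chosen only for illustration), these two properties read $A = U \Sigma V^{T} = \sum_{r=1}^{R} \sigma_r\, u_r v_r^{T}$, where $R = \operatorname{rank}(A)$ and the columns of $U$ and $V$ form orthonormal bases for the column and row spaces of $A$, respectively.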
These properties are not realized within a single algorithm for higher-order tensors; instead, they are realized by two distinct algorithmic developments that represent two distinct research directions. Harshman, as well as the team of Carroll and Chang, proposed the canonical polyadic decomposition (CPD), a variant of the tensor rank decomposition in which a tensor is approximated as a sum of K rank-1 tensors for a user-specified K. L. R. Tucker proposed a strategy for computing orthonormal subspaces for third-order tensors. Aspects of these algorithms can be traced as far back as F. L. Hitchcock in 1928.[1]
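For a third-order tensor, the CPD approximation described above can be written as (the symbols here are illustrative) $\mathcal{A} \approx \sum_{k=1}^{K} \lambda_k\, \mathbf{u}_k \otimes \mathbf{v}_k \otimes \mathbf{w}_k$, where each term $\mathbf{u}_k \otimes \mathbf{v}_k \otimes \mathbf{w}_k$ is a rank-1 tensor.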
De Lathauwer et al.[2][3] brought conceptual clarity to the Tucker concepts, while Vasilescu and Terzopoulos[4][5][6] brought algorithmic clarity. Vasilescu and Terzopoulos[4][6] introduced the M-mode SVD, the algorithm that is currently referred to in the literature as the Tucker decomposition or the HOSVD. The Tucker approach and De Lathauwer's implementation are both sequential and rely on iterative procedures such as gradient descent or the power method. By contrast, the M-mode SVD provides a closed-form solution that can be executed sequentially and is well suited to parallel computation.
This misattribution has had a lasting impact on the scholarly record, obscuring the original source of a widely adopted algorithm and complicating efforts to trace its development, reproduce results, and recognize the respective contributions of different research efforts.
The term M-mode SVD accurately reflects the algorithm employed: it captures the actual computation, a set of SVDs on mode flattenings, without making assumptions about the structure of the core tensor or implying a rank decomposition.
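A minimal sketch of this computation in NumPy is shown below, assuming a dense real-valued tensor; the function name m_mode_svd and the particular use of np.moveaxis and np.tensordot for the mode products are illustrative choices rather than a reference implementation.

```python
import numpy as np

def m_mode_svd(A):
    """Return the core tensor and mode factor matrices of a dense tensor A."""
    factors = []
    for mode in range(A.ndim):
        # Mode-m flattening: bring mode m to the front and reshape into a matrix.
        A_m = np.moveaxis(A, mode, 0).reshape(A.shape[mode], -1)
        # The left singular vectors of the flattening give the mode-m factor matrix.
        U_m, _, _ = np.linalg.svd(A_m, full_matrices=False)
        factors.append(U_m)
    # Core tensor: multiply A along each mode by the transpose of its factor matrix.
    core = A
    for mode, U_m in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U_m.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

# Example: decompose a random third-order tensor and reconstruct it.
A = np.random.rand(3, 4, 5)
core, factors = m_mode_svd(A)
recon = core
for mode, U_m in enumerate(factors):
    recon = np.moveaxis(
        np.tensordot(U_m, np.moveaxis(recon, mode, 0), axes=1), 0, mode)
print(np.allclose(A, recon))  # True, up to floating-point error
```

Because each mode's SVD depends only on its own flattening, the per-mode computations are independent of one another, which is what makes the procedure well suited to parallel execution.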
Robust and L1-norm-based variants of this decomposition framework have since been proposed.[7][8][9][10]
^ Hitchcock, Frank L. (1928-04-01). "Multiple Invariants and Generalized Rank of a P-Way Matrix or Tensor". Journal of Mathematics and Physics. 7 (1–4): 39–79. doi:10.1002/sapm19287139. ISSN 1467-9590.
^ De Lathauwer, L.; De Moor, B.; Vandewalle, J. (2000-01-01). "On the Best Rank-1 and Rank-(R1, R2, ..., RN) Approximation of Higher-Order Tensors". SIAM Journal on Matrix Analysis and Applications. 21 (4): 1324–1342. CiteSeerX 10.1.1.102.9135. doi:10.1137/S0895479898346995. ISSN 0895-4798.
^ Vasilescu, M. A. O.; Terzopoulos, D. (2005). "Multilinear Independent Component Analysis". Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'05). San Diego, CA.
^ Goldfarb, Donald; Qin, Zhiwei (2014). "Robust low-rank tensor recovery: Models and algorithms". SIAM Journal on Matrix Analysis and Applications. 35 (1): 225–253. arXiv:1311.6182. doi:10.1137/130905010. S2CID 1051205.
^ Markopoulos, Panos P.; Chachlakis, Dimitris G.; Prater-Bennette, Ashley (21 February 2019). "L1-Norm Higher-Order Singular-Value Decomposition". 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP). pp. 1353–1357. doi:10.1109/GlobalSIP.2018.8646385. ISBN 978-1-7281-1295-4. S2CID 67874182.