Functional data analysis
Functional data analysis (FDA) is a branch of statistics that analyzes data providing information about curves, surfaces or anything else varying over a continuum. In its most general form, under an FDA framework each sample element is considered to be a function. The physical continuum over which these functions are defined is often time, but may also be spatial location, wavelength, probability, etc.
History
Functional data analysis has roots going back to work by Grenander and Karhunen in the 1940s and 1950s.[1][2][3] They considered the decomposition of square-integrable continuous-time stochastic processes into eigencomponents, now known as the Karhunen-Loève decomposition. A rigorous analysis of functional principal components analysis was carried out in the 1970s by Kleffe, Dauxois and Pousse, including results about the asymptotic distribution of the eigenvalues.[4][5][6] More recently, in the 1990s and 2000s, the field focused more on applications and on understanding the effects of dense and sparse observation schemes. Core contributions in this era were made by James O. Ramsay (who coined the term "functional data analysis"), Bernard Silverman and John Rice.[7][8][9]
Mathematical formalism
Random functions can be viewed as random elements taking values in a Hilbert space, or as a stochastic process. The former is mathematically convenient, whereas the latter is somewhat more suitable from an applied perspective. These two approaches coincide if the random functions are continuous and a condition called mean-squared continuity is satisfied. For more on the probabilistic underpinnings of functional data analysis, see Chapter 7 of Hsing and Eubank.[10]
Hilbertian random variables
In the Hilbert space viewpoint, one considers an $H$-valued random element $X$, where $H$ is a separable Hilbert space such as the space of square-integrable functions $L^2[0,1]$. Under the integrability condition that $\mathbb{E}\|X\|$ is finite, one can define the mean of $X$ as the unique element $\mu \in H$ satisfying

$$\mathbb{E}\langle X, h\rangle = \langle \mu, h\rangle, \qquad h \in H.$$
This formulation is the Pettis integral, but the mean can also be defined in the Bochner sense as $\mu = \mathbb{E}X$. Under the integrability condition that $\mathbb{E}\|X\|^2$ is finite, the covariance operator of $X$ is the linear operator $\mathcal{C}\colon H \to H$ that is uniquely defined by the relation

$$\mathcal{C}h = \mathbb{E}\left[\langle h, X-\mu\rangle (X-\mu)\right], \qquad h \in H,$$
or, in tensor form, $\mathcal{C} = \mathbb{E}\left[(X-\mu)\otimes(X-\mu)\right]$. The spectral theorem allows one to decompose $X$ via the Karhunen-Loève decomposition

$$X = \mu + \sum_{i=1}^{\infty} \langle X-\mu, \varphi_i\rangle \varphi_i,$$

where $\varphi_i$ are the eigenvectors of $\mathcal{C}$, corresponding to the nonnegative eigenvalues of $\mathcal{C}$ in nonincreasing order. Truncating this infinite series to a finite order underpins functional principal component analysis.
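The decomposition above suggests a simple empirical procedure: estimate the mean and covariance from a sample of curves, take the leading eigencomponents of the estimated covariance operator, and project the centred curves onto them. The following sketch, in R with simulated curves observed on a common dense grid (an assumption not made above, and with purely illustrative names), approximates the relevant integrals by Riemann sums.

```r
# Empirical functional PCA on densely observed curves: a minimal sketch assuming
# the curves are stored row-wise on a common grid (all names are illustrative).
set.seed(1)
n <- 50                                   # number of curves
m <- 101                                  # number of grid points
tgrid <- seq(0, 1, length.out = m)
h <- tgrid[2] - tgrid[1]                  # grid spacing for Riemann-sum approximations

# simulate smooth random curves as combinations of two fixed functions
X <- outer(rnorm(n), sin(2 * pi * tgrid)) + outer(rnorm(n, sd = 0.5), cos(2 * pi * tgrid))

mu_hat <- colMeans(X)                     # pointwise sample mean function
Xc <- sweep(X, 2, mu_hat)                 # centred curves
C_hat <- crossprod(Xc) / n                # discretised sample covariance function
eig <- eigen(C_hat * h, symmetric = TRUE) # discretised covariance operator

lambda_hat <- eig$values                  # estimated eigenvalues, nonincreasing
phi_hat <- eig$vectors / sqrt(h)          # estimated eigenfunctions, L2-normalised
scores <- Xc %*% phi_hat[, 1:2] * h       # first two principal component scores
```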
Stochastic processes
The Hilbertian point of view is mathematically convenient, but abstract; the above considerations do not necessarily even view $X$ as a function at all, since common choices of $H$ like $L^2[0,1]$ and Sobolev spaces consist of equivalence classes, not functions. The stochastic process perspective views $X$ as a collection of random variables

$$\{X(t)\}_{t \in [0,1]}$$
indexed by the unit interval (or more generally some compact metric space $T$). The mean and covariance functions are defined in a pointwise manner as

$$\mu(t) = \mathbb{E}X(t), \qquad \Sigma(s,t) = \operatorname{Cov}\bigl(X(s), X(t)\bigr)$$
(if $\mathbb{E}[X(t)^2] < \infty$ for all $t \in [0,1]$). We can hope to view $X$ as a random element of the Hilbert function space $L^2[0,1]$. However, additional conditions are required for such a pursuit to be fruitful, since if we let $X$ be Gaussian white noise, i.e. $X(t)$ is standard Gaussian and independent of $X(s)$ for any $s \neq t$, it is clear that we have no hope of viewing this as a square-integrable function.
A convenient sufficient condition is mean square continuity, stipulating that $\mu$ and $\Sigma$ are continuous functions. In this case $\Sigma$ defines a covariance operator $\mathcal{C}\colon L^2[0,1] \to L^2[0,1]$ by

$$(\mathcal{C}f)(t) = \int_0^1 \Sigma(s,t)\, f(s)\, \mathrm{d}s.$$
The spectral theorem applies to $\mathcal{C}$, yielding eigenpairs $(\lambda_j, \varphi_j)$, so that in tensor product notation $\mathcal{C}$ can be written

$$\mathcal{C} = \sum_{j=1}^{\infty} \lambda_j\, \varphi_j \otimes \varphi_j.$$
Moreover, since $\mathcal{C}f$ is continuous for all $f \in L^2[0,1]$, all of the $\varphi_j$ are continuous. Mercer's theorem then states that the covariance function $\Sigma$ admits the analogous decomposition

$$\Sigma(s,t) = \sum_{j=1}^{\infty} \lambda_j\, \varphi_j(s)\varphi_j(t).$$
Finally, under the extra assumption that $X$ has continuous sample paths, namely that with probability one the random function $X$ is continuous, the Karhunen-Loève expansion above holds for $X$ and the Hilbert space machinery can subsequently be applied. Continuity of sample paths can be shown using the Kolmogorov continuity theorem.
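As a concrete illustration of Mercer's decomposition, standard Brownian motion on $[0,1]$ is mean square continuous with covariance function $\Sigma(s,t) = \min(s,t)$, whose eigenpairs are known in closed form: $\lambda_j = \bigl((j-\tfrac{1}{2})\pi\bigr)^{-2}$ and $\varphi_j(t) = \sqrt{2}\sin\bigl((j-\tfrac{1}{2})\pi t\bigr)$. The short R sketch below (grid size and variable names are illustrative) discretises the covariance operator and checks that its leading eigenvalues and first eigenfunction match the analytic ones.

```r
# Numerical check of Mercer's theorem for Brownian motion, Sigma(s, t) = min(s, t).
# The integral operator is discretised by a Riemann sum on a midpoint grid.
m <- 500
tgrid <- (seq_len(m) - 0.5) / m                 # midpoint grid on (0, 1)
h <- 1 / m
Sigma <- outer(tgrid, tgrid, pmin)              # covariance kernel evaluated on the grid
eig <- eigen(Sigma * h, symmetric = TRUE)       # discretised covariance operator

lambda_hat <- eig$values[1:3]
lambda_true <- 1 / (((1:3) - 0.5)^2 * pi^2)
round(cbind(lambda_hat, lambda_true), 5)        # leading eigenvalues agree closely

phi1_hat <- eig$vectors[, 1] / sqrt(h)          # first eigenfunction, L2-normalised
phi1_true <- sqrt(2) * sin(0.5 * pi * tgrid)
max(abs(abs(phi1_hat) - abs(phi1_true)))        # small, up to sign and discretisation error
```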
Regression methods for functional data
Several regression methods have been developed for functional data, depending on whether the response, the covariates, or both are functions.
Scalar-on-function regression
A well-studied model for scalar-on-function regression is a generalisation of linear regression. Classical linear regression assumes that a scalar variable of interest $Y$ is related to a $p$-dimensional covariate vector $X$ through the equation

$$Y = \langle X, \beta\rangle + \varepsilon$$
for a $p$-dimensional vector of coefficients $\beta$ and a scalar noise variable $\varepsilon$, where $\langle \cdot, \cdot\rangle$ denotes the standard inner product on $\mathbb{R}^p$. If we instead observe a functional variable $X$, which we assume is an element of the space $L^2[0,1]$ of square-integrable functions on the unit interval, we can consider the same linear regression model as above using the $L^2[0,1]$ inner product. In other words, we consider the model

$$Y = \langle X, \beta\rangle + \varepsilon = \int_0^1 X(t)\beta(t)\, \mathrm{d}t + \varepsilon$$
for a square-integrable coefficient function $\beta \in L^2[0,1]$ and $\varepsilon$ as before (see Chapter 13[7]).
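In practice the covariate curves are observed on a grid, and replacing the integral by a Riemann sum turns the model into a high-dimensional linear regression that needs regularisation. The R sketch below uses simulated data and a plain ridge penalty standing in for the smoothness penalties used by dedicated software; all names are illustrative.

```r
# Scalar-on-function linear regression by discretising the inner product:
# a sketch on a common grid with an illustrative ridge penalty.
set.seed(2)
n <- 100; m <- 101
tgrid <- seq(0, 1, length.out = m)
h <- tgrid[2] - tgrid[1]

X <- outer(rnorm(n), sin(2 * pi * tgrid)) + outer(rnorm(n), cos(2 * pi * tgrid))
beta_true <- 4 * tgrid * (1 - tgrid)                    # true coefficient function
y <- drop(X %*% beta_true) * h + rnorm(n, sd = 0.1)     # y_i = int X_i(t) beta(t) dt + noise

Z <- X * h                                              # design of discretised integrals
lambda <- 1e-3                                          # ridge tuning parameter (illustrative)
beta_hat <- solve(crossprod(Z) + lambda * diag(m), crossprod(Z, y))
plot(tgrid, beta_hat, type = "l"); lines(tgrid, beta_true, lty = 2)
```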
Function-on-scalar regression
Analogously to the scalar-on-function regression model, we can consider a functional response $Y \in L^2[0,1]$ and a $p$-dimensional covariate vector $X = (X_1, \ldots, X_p)^\top$, and again draw inspiration from the usual linear regression model by modelling $Y$ as a linear combination of functions. In other words, we assume that the relationship between $Y$ and $X$ is

$$Y(t) = \sum_{j=1}^{p} X_j \beta_j(t) + \varepsilon(t)$$
for functions $\beta_1, \ldots, \beta_p \in L^2[0,1]$ and a functional error term $\varepsilon$.
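When the response curves are recorded on a common grid, a rough estimator fits an ordinary least-squares regression separately at every grid point, since for each fixed $t$ the model is a standard linear model; dedicated software additionally smooths the estimates across $t$. A minimal R sketch with simulated data and illustrative names:

```r
# Function-on-scalar regression fitted pointwise: at each grid point t the model
# is an ordinary linear model, so lm() is run separately at every t.
set.seed(3)
n <- 80; m <- 101; p <- 2
tgrid <- seq(0, 1, length.out = m)

X <- cbind(rnorm(n), rbinom(n, 1, 0.5))                   # scalar covariates (n x p)
B_true <- rbind(sin(2 * pi * tgrid), tgrid^2)             # p x m matrix of coefficient functions
Y <- X %*% B_true + matrix(rnorm(n * m, sd = 0.2), n, m)  # functional responses on the grid

B_hat <- sapply(seq_len(m), function(k) coef(lm(Y[, k] ~ X - 1)))  # p x m pointwise estimates
matplot(tgrid, t(B_hat), type = "l")                      # estimated coefficient functions
```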
Function-on-function regression
Both of the previous regression models can be viewed as instances of a general linear model between Hilbert spaces. Assuming that $X$ and $Y$ are elements of Hilbert spaces $H_1$ and $H_2$, the Hilbertian linear model assumes that

$$Y = \Gamma X + \varepsilon$$
for a Hilbert-Schmidt operator $\Gamma\colon H_1 \to H_2$ and a noise variable $\varepsilon$ taking values in $H_2$. If $H_1 = L^2[0,1]$ and $H_2 = \mathbb{R}$, we get the scalar-on-function regression model above. Similarly, if $H_1 = \mathbb{R}^p$ and $H_2 = L^2[0,1]$, then we get the function-on-scalar regression model above. If we let $H_1 = H_2 = L^2[0,1]$, we get the function-on-function linear regression model, which can equivalently be written

$$Y(t) = \int_0^1 \beta(s,t)\, X(s)\, \mathrm{d}s + \varepsilon(t)$$
for a square-integrable coefficient function $\beta \in L^2([0,1]^2)$ and a functional noise variable $\varepsilon$.
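Discretising the integral on a common grid reduces the function-on-function model to a multivariate ridge-type regression for the coefficient surface $\beta(s,t)$. The following R sketch uses simulated data, illustrative names, and a simple ridge penalty in place of a proper smoothness penalty.

```r
# Function-on-function linear regression by discretising the integral
# Y(t) = int beta(s, t) X(s) ds + eps(t): a ridge-regularised least-squares sketch.
set.seed(4)
n <- 100; m <- 61
grid <- seq(0, 1, length.out = m)
h <- grid[2] - grid[1]

X <- outer(rnorm(n), sin(2 * pi * grid)) + outer(rnorm(n), cos(2 * pi * grid))
beta_true <- outer(grid, grid, function(s, t) cos(pi * (s - t)))   # true surface beta(s, t)
Y <- X %*% beta_true * h + matrix(rnorm(n * m, sd = 0.1), n, m)

Z <- X * h                                                         # discretised integral design
lambda <- 1e-3                                                     # illustrative ridge parameter
beta_hat <- solve(crossprod(Z) + lambda * diag(m), crossprod(Z, Y))  # rows index s, columns index t
```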
Practical considerations
While the presentation of the models above assumes fully observed functions, in practice functional data are observed discretely, and software is available for fitting the models in this setting, notably in R. The R packages refund[11] and FDboost[12] fit these models by reformulating them as generalized additive models and as boosted models, respectively.
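For example, a scalar-on-function fit with refund might look like the following sketch; the data are simulated, the names are illustrative, and the exact arguments of pfr() and lf() should be checked against the documentation of the installed package version.

```r
# Hedged sketch of a scalar-on-function fit with the refund package; verify
# argument names and defaults against the package documentation for your version.
# install.packages("refund")
library(refund)

set.seed(5)
n <- 100; m <- 101
tgrid <- seq(0, 1, length.out = m)
X <- outer(rnorm(n), sin(2 * pi * tgrid)) + outer(rnorm(n), cos(2 * pi * tgrid))
y <- drop(X %*% (4 * tgrid * (1 - tgrid))) / m + rnorm(n, sd = 0.1)

fit <- pfr(y ~ lf(X, argvals = tgrid))   # penalised functional regression via a GAM reformulation
summary(fit)
```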
Further reading
- Ramsay, J. O. and Silverman, B. W. (2002) Applied Functional Data Analysis: Methods and Case Studies, Springer Series in Statistics, New York: Springer, ISBN 0-387-95414-7
- Ramsay, J. O. and Silverman, B. W. (2005) Functional Data Analysis, 2nd ed., New York: Springer, ISBN 0-387-40080-X
- Hsing, T. and Eubank, R. (2015) Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators, Wiley Series in Probability and Statistics, Chichester: John Wiley & Sons, ISBN 978-0-470-01691-6
- Morris, J. (2015) Functional Regression, Annual Review of Statistics and Its Application, Vol. 2, 321–359, https://doi.org/10.1146/annurev-statistics-010814-020413
- Wang et al. (2016) Functional Data Analysis, Annual Review of Statistics and Its Application, Vol. 3, 257–295, https://doi.org/10.1146/annurev-statistics-041715-033624
References
- Grenander, Ulf (1950). "Stochastic processes and statistical inference". Arkiv för Matematik. 1 (3): 195–277. doi:10.1007/BF02590638. ISSN 0004-2080. Retrieved 2021-01-27.
- Müller, Hans-Georg (2016). "Peter Hall, functional data analysis and random objects". The Annals of Statistics. 44 (5): 1867–1887. doi:10.1214/16-AOS1492. ISSN 0090-5364. Retrieved 2021-01-27.
- Karhunen, Kari (1946). "Zur Spektraltheorie stochastischer Prozesse". Annales Academiae Scientiarum Fennicae. Series A. I. Mathematica-Physica. 1946 (34): 7. ISSN 0365-2300. Retrieved 2021-01-27.
- Kleffe, Jürgen (1973). "Principal components of random variables with values in a seperable hilbert space". Mathematische Operationsforschung und Statistik. 4 (5): 391–406. doi:10.1080/02331887308801137. ISSN 0047-6277. Retrieved 2021-01-27.
- Dauxois, J.; Pousse, A. (1976). Les analyses factorielles en calcul des probabilités et en statistique: essai d'étude synthétique.
- Dauxois, J.; Pousse, A.; Romain, Y. (1982). "Asymptotic theory for the principal component analysis of a vector random function: Some applications to statistical inference". Journal of Multivariate Analysis. 12 (1): 136–154. doi:10.1016/0047-259X(82)90088-4. ISSN 0047-259X. Retrieved 2021-01-27.
- Ramsay, James O.; Silverman, Bernard W. (2005). Functional Data Analysis (2nd ed.). Springer. ISBN 0-387-40080-X.
- Rice, John A.; Silverman, B. W. (1991). "Estimating the Mean and Covariance Structure Nonparametrically When the Data are Curves". Journal of the Royal Statistical Society: Series B (Methodological). 53 (1): 233–243. doi:10.1111/j.2517-6161.1991.tb01821.x. ISSN 2517-6161. Retrieved 2021-01-27.
- Rice, John A.; Wu, Colin O. (2001). "Nonparametric Mixed Effects Models for Unequally Sampled Noisy Curves". Biometrics. 57 (1): 253–259. doi:10.1111/j.0006-341X.2001.00253.x. ISSN 0006-341X. Retrieved 2021-01-27.
- Hsing, Tailen; Eubank, Randall (2015). Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators. Wiley Series in Probability and Statistics. Chichester, UK: John Wiley & Sons, Ltd. ISBN 978-0-470-01691-6. Retrieved 2021-02-04.
- "CRAN - Package refund".
- "CRAN - Package FDboost".