Measurement invariance

Measurement invariance or measurement equivalence is a statistical property of measurement that indicates that the same construct is being measured across some specified groups.[1] For example, measurement invariance can be used to study whether a given measure is interpreted in a conceptually similar manner by respondents representing different genders or cultural backgrounds. Violations of measurement invariance may preclude meaningful interpretation of measurement data. Tests of measurement invariance are increasingly used in fields such as psychology to supplement evaluation of measurement quality rooted in classical test theory.[1]

Measurement invariance is often tested in the framework of multiple-group confirmatory factor analysis (CFA).[2] In the context of structural equation models, including CFA, measurement invariance is often termed factorial invariance.[3]

Definition

In the common factor model, measurement invariance may be defined as the following equality:

f(Y | η, s) = f(Y | η)

where f is a distribution function, Y is an observed score, η is a factor score, and s denotes group membership (e.g., Caucasian = 0, African American = 1). Measurement invariance therefore entails that, given a subject's factor score, his or her observed score does not depend on his or her group membership.[4]
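As a minimal numerical sketch of this definition (all parameter values below are hypothetical, assuming a single factor and two groups): when the intercept, loading, and residual variance are the same in both groups, the conditional distribution of the observed score given the factor score is identical across groups, even if the groups differ in their factor means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Common factor model: Y = tau + lam * eta + epsilon.
# Under measurement invariance, tau, lam, and the residual variance
# are identical in both groups, so the conditional distribution of Y
# given eta does not depend on group membership s.
tau, lam, resid_sd = 1.0, 0.8, 0.5  # hypothetical parameters
n = 200_000

def observed_scores(group_factor_mean):
    """Simulate factor and observed scores for one group; groups may
    differ in their factor means without violating invariance."""
    eta = rng.normal(group_factor_mean, 1.0, n)
    y = tau + lam * eta + rng.normal(0.0, resid_sd, n)
    return eta, y

# Two groups with different factor means but invariant measurement.
eta0, y0 = observed_scores(0.0)   # s = 0
eta1, y1 = observed_scores(0.5)   # s = 1

# Regressing Y on eta within each group recovers (approximately)
# the same intercept and loading in both groups.
for eta, y in [(eta0, y0), (eta1, y1)]:
    slope, intercept = np.polyfit(eta, y, 1)
    print(f"intercept ~ {intercept:.2f}, loading ~ {slope:.2f}")
```

Both groups recover approximately the same intercept and loading, illustrating that group membership carries no information about Y beyond the factor score.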

Types of invariance

Several different types of measurement invariance can be distinguished in the common factor model for continuous outcomes:[5]

1) Equal form: The number of factors and the pattern of factor-indicator relationships are identical across groups.
2) Equal loadings: Factor loadings are equal across groups.
3) Equal intercepts: When observed scores are regressed on each factor, the intercepts are equal across groups.
4) Equal residual variances: The residual variances of the observed scores not accounted for by the factors are equal across groups.
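The four levels above can be summarized compactly in the multiple-group common factor model; the following is a sketch (with g indexing groups and G the number of groups), not a unique notation:

```latex
% Multiple-group common factor model for group g:
\[
  Y_g = \tau_g + \Lambda_g \eta_g + \varepsilon_g,
  \qquad \operatorname{Cov}(\varepsilon_g) = \Theta_g .
\]
% Equal form:       the zero/nonzero pattern of \Lambda_g is the same for all g
% Equal loadings:   additionally \Lambda_1 = \dots = \Lambda_G
% Equal intercepts: additionally \tau_1 = \dots = \tau_G
% Equal residuals:  additionally \operatorname{diag}(\Theta_1) = \dots = \operatorname{diag}(\Theta_G)
```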

The same typology can be generalized to the discrete outcomes case:

1) Equal form: The number of factors and the pattern of factor-indicator relationships are identical across groups.
2) Equal loadings: Factor loadings are equal across groups.
3) Equal thresholds: When observed scores are regressed on each factor, the thresholds are equal across groups.
4) Equal residual variances: The residual variances of the observed scores not accounted for by the factors are equal across groups.

Each of these conditions corresponds to a multiple-group confirmatory factor model with specific constraints. The tenability of each model can be tested statistically by using a likelihood ratio test or other indices of fit. Meaningful comparisons between groups usually require that all four conditions are met, which is known as strict measurement invariance. However, strict measurement invariance rarely holds in applied contexts.[6] In practice, invariance is usually tested by introducing these constraints sequentially, starting from the equal form condition and proceeding toward the equal residual variances condition as long as model fit does not deteriorate.
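The sequential comparison of nested models can be sketched as follows, using entirely hypothetical χ2 fit statistics for illustration (the values are not from any real analysis):

```python
from scipy.stats import chi2

# Hypothetical chi-square fit statistics for a sequence of nested
# multiple-group CFA models, from least to most constrained.
models = [
    ("equal form",       (24.3, 16)),  # (chi2, df) -- illustrative values
    ("equal loadings",   (29.1, 21)),
    ("equal intercepts", (36.0, 26)),
    ("equal residuals",  (55.8, 32)),
]

# Compare each model with the previous, less constrained one via the
# chi-square difference (likelihood ratio) test: for nested models,
# the difference in chi2 values is itself chi-square distributed with
# df equal to the difference in degrees of freedom.
for (name_a, (c_a, df_a)), (name_b, (c_b, df_b)) in zip(models, models[1:]):
    d_chi2, d_df = c_b - c_a, df_b - df_a
    p = chi2.sf(d_chi2, d_df)
    verdict = "retained" if p >= 0.05 else "rejected"
    print(f"{name_a} -> {name_b}: diff chi2 = {d_chi2:.1f}, "
          f"df = {d_df}, p = {p:.3f} ({verdict})")
```

With these illustrative numbers, the loading and intercept constraints are retained but the residual variance constraints are rejected, the common pattern in which strict invariance fails even though weaker levels hold.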

Tests for invariance

Although further research is necessary on the application of various invariance tests and their respective criteria across diverse testing conditions, two approaches are common among applied researchers. For each model being compared (e.g., equal form, equal intercepts), a χ2 fit statistic is iteratively estimated from the minimization of the difference between the model-implied mean and covariance matrices and the observed mean and covariance matrices.[7] As long as the models under comparison are nested, the difference between the χ2 values and their respective degrees of freedom of any two CFA models of varying levels of invariance follows a χ2 distribution (diff χ2) and, as such, can be inspected for significance as an indication of whether increasingly restrictive models produce appreciable changes in model-data fit.[7] However, there is some evidence that the diff χ2 is sensitive to factors unrelated to changes in invariance-targeted constraints (e.g., sample size).[8] Consequently, it is recommended that researchers also use the difference between the comparative fit indices (ΔCFI) of the two models specified to investigate measurement invariance. When the difference between the CFIs of two models of varying levels of measurement invariance (e.g., equal forms versus equal loadings) is greater than 0.01, invariance is likely untenable.[8] The CFI values being subtracted are expected to come from nested models, as in the case of diff χ2 testing;[9] however, it seems that applied researchers rarely take this into consideration when applying the CFI test.[10]
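The ΔCFI criterion can be sketched numerically. The CFI of a model is computed from its fit relative to a null (baseline) model, CFI = 1 − max(χ2_M − df_M, 0) / max(χ2_B − df_B, 0); the fit statistics below are hypothetical, chosen only to illustrate the 0.01 cutoff:

```python
def cfi(chi2_model, df_model, chi2_baseline, df_baseline):
    """Comparative fit index:
    CFI = 1 - max(chi2_M - df_M, 0) / max(chi2_B - df_B, 0)."""
    num = max(chi2_model - df_model, 0.0)
    den = max(chi2_baseline - df_baseline, 0.0)
    return 1.0 - num / den

# Hypothetical fit statistics: one null (baseline) model plus two
# nested invariance models (equal form vs. equal loadings).
chi2_b, df_b = 840.0, 30          # baseline (null) model
cfi_form = cfi(24.3, 16, chi2_b, df_b)
cfi_load = cfi(38.1, 21, chi2_b, df_b)

delta_cfi = cfi_form - cfi_load
print(f"CFI(equal form)     = {cfi_form:.3f}")
print(f"CFI(equal loadings) = {cfi_load:.3f}")
print(f"delta CFI           = {delta_cfi:.3f}")
# A delta CFI greater than 0.01 suggests the added loading
# constraints are not tenable (Cheung & Rensvold's criterion).
```

Here ΔCFI exceeds 0.01, so by this criterion the equal-loadings constraints would be judged untenable for these (illustrative) data.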

Levels of equivalence

Measurement equivalence can also be categorized into three hierarchical levels.[11][12]

  1. Configural equivalence: The factor structure is the same across groups in a multi-group confirmatory factor analysis.
  2. Metric equivalence: Factor loadings are similar across groups.[11]
  3. Scalar equivalence: Item intercepts/means are also equivalent across groups.[11]

Implementation

Tests of measurement invariance are available in the R programming language.[13]

References

  1. Vandenberg, Robert J.; Lance, Charles E. (2000). "A Review and Synthesis of the Measurement Invariance Literature: Suggestions, Practices, and Recommendations for Organizational Research". Organizational Research Methods. 3: 4–70. doi:10.1177/109442810031002.
  2. Chen, Fang Fang; Sousa, Karen H.; West, Stephen G. (2005). "Testing Measurement Invariance of Second-Order Factor Models". Structural Equation Modeling. 12 (3): 471–492. doi:10.1207/s15328007sem1203_7.
  3. Widaman, K. F.; Ferrer, E.; Conger, R. D. (2010). "Factorial Invariance within Longitudinal Structural Equation Models: Measuring the Same Construct across Time". Child Dev Perspect. 4 (1): 10–18. doi:10.1111/j.1750-8606.2009.00110.x. PMC 2848495. PMID 20369028.
  4. Lubke, G. H.; et al. (2003). "On the relationship between sources of within- and between-group differences and measurement invariance in the common factor model". Intelligence. 31 (6): 543–566. doi:10.1016/s0160-2896(03)00051-5.
  5. Brown, T. (2015). Confirmatory Factor Analysis for Applied Research, Second Edition. The Guilford Press.
  6. Van De Schoot, Rens; Schmidt, Peter; De Beuckelaer, Alain; Lek, Kimberley; Zondervan-Zwijnenburg, Marielle (2015-01-01). "Editorial: Measurement Invariance". Frontiers in Psychology. 6: 1064. doi:10.3389/fpsyg.2015.01064. PMC 4516821. PMID 26283995.
  7. Loehlin, John (2004). Latent Variable Models: An Introduction to Factor, Path, and Structural Equation Analysis. Taylor & Francis. ISBN 9780805849103.
  8. Cheung, G. W.; Rensvold, R. B. (2002). "Evaluating goodness-of-fit indexes for testing measurement invariance". Structural Equation Modeling. 9 (2): 233–255. doi:10.1207/s15328007sem0902_5.
  9. Widaman, Keith F.; Thompson, Jane S. (2003-03-01). "On specifying the null model for incremental fit indices in structural equation modeling". Psychological Methods. 8 (1): 16–37. CiteSeerX 10.1.1.133.489. doi:10.1037/1082-989x.8.1.16. ISSN 1082-989X. PMID 12741671.
  10. Kline, Rex (2011). Principles and Practice of Structural Equation Modeling. Guilford Press.
  11. Steenkamp, Jan-Benedict E. M.; Baumgartner, Hans (1998-06-01). "Assessing Measurement Invariance in Cross-National Consumer Research". Journal of Consumer Research. 25 (1): 78–90. doi:10.1086/209528. ISSN 0093-5301. JSTOR 10.1086/209528.
  12. Ariely, Gal; Davidov, Eldad (2012-09-01). "Assessment of Measurement Equivalence with Cross-National and Longitudinal Surveys in Political Science". European Political Science. 11 (3): 363–377. doi:10.1057/eps.2011.11. ISSN 1680-4333.
  13. Hirschfeld, Gerrit; von Brachel, Ruth (2014). "Improving Multiple-Group confirmatory factor analysis in R – A tutorial in measurement invariance with continuous and ordinal indicators". Practical Assessment, Research & Evaluation. 19. doi:10.7275/qazy-2946.