Quadratic Effects and Interaction Effects
Quadratic effects are essentially a special case of interaction
effects, where a variable interacts with itself. As such, all of the
methods in modsem can also be used to estimate quadratic effects.
Below is a simple example using the LMS approach.
library(modsem)
m1 <- '
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3
# Inner model
Y ~ X + Z + Z:X + X:X
'
est1_lms <- modsem(m1, data = oneInt, method = "lms")
summary(est1_lms)
#>
#> modsem (version 1.0.11):
#>
#> Estimator LMS
#> Optimization method EMA-NLMINB
#> Number of observations 2000
#> Number of iterations 45
#> Loglikelihood -14661.5
#> Akaike (AIC) 29387.01
#> Bayesian (BIC) 29566.24
#>
#> Numerical Integration:
#> Points of integration (per dim) 24
#> Dimensions 1
#> Total points of integration 24
#>
#> Fit Measures for Baseline Model (H0):
#> Loglikelihood -17831.87
#> Akaike (AIC) 35723.75
#> Bayesian (BIC) 35891.78
#> Chi-square 17.52
#> Degrees of Freedom (Chi-square) 24
#> P-value (Chi-square) 0.826
#> RMSEA 0.000
#>
#> Comparative Fit to H0 (LRT test):
#> Loglikelihood change 3170.37
#> Difference test (D) 6340.74
#> Degrees of freedom (D) 2
#> P-value (D) 0.000
#>
#> R-Squared Interaction Model (H1):
#> Y 0.599
#> R-Squared Baseline Model (H0):
#> Y 0.395
#> R-Squared Change (H1 - H0):
#> Y 0.204
#>
#> Parameter Estimates:
#> Coefficients unstandardized
#> Information observed
#> Standard errors standard
#>
#> Latent Variables:
#> Estimate Std.Error z.value P(>|z|)
#> X =~
#> x1 1.000
#> x2 0.803 0.013 63.914 0.000
#> x3 0.914 0.013 67.731 0.000
#> Z =~
#> z1 1.000
#> z2 0.810 0.012 65.089 0.000
#> z3 0.881 0.013 67.618 0.000
#> Y =~
#> y1 1.000
#> y2 0.798 0.007 107.546 0.000
#> y3 0.899 0.008 112.580 0.000
#>
#> Regressions:
#> Estimate Std.Error z.value P(>|z|)
#> Y ~
#> X 0.673 0.031 21.649 0.000
#> Z 0.570 0.030 18.739 0.000
#> X:X -0.004 0.021 -0.194 0.847
#> X:Z 0.719 0.029 24.856 0.000
#>
#> Intercepts:
#> Estimate Std.Error z.value P(>|z|)
#> .x1 1.024 0.024 42.847 0.000
#> .x2 1.217 0.020 60.917 0.000
#> .x3 0.921 0.022 41.444 0.000
#> .z1 1.011 0.024 41.562 0.000
#> .z2 1.206 0.020 59.260 0.000
#> .z3 0.916 0.022 42.052 0.000
#> .y1 1.041 0.038 27.481 0.000
#> .y2 1.223 0.031 39.889 0.000
#> .y3 0.957 0.034 27.901 0.000
#>
#> Covariances:
#> Estimate Std.Error z.value P(>|z|)
#> X ~~
#> Z 0.200 0.024 8.247 0.000
#>
#> Variances:
#> Estimate Std.Error z.value P(>|z|)
#> .x1 0.158 0.009 18.167 0.000
#> .x2 0.162 0.007 23.160 0.000
#> .x3 0.164 0.008 20.761 0.000
#> .z1 0.167 0.009 18.507 0.000
#> .z2 0.160 0.007 22.682 0.000
#> .z3 0.158 0.008 20.781 0.000
#> .y1 0.160 0.009 18.012 0.000
#> .y2 0.154 0.007 22.686 0.000
#> .y3 0.164 0.008 20.682 0.000
#> X 0.981 0.036 27.037 0.000
#> Z 1.017 0.038 26.933 0.000
#> .Y 0.980 0.038 25.926 0.000
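The likelihood-ratio comparison against the baseline model reported above can be reproduced by hand from the log-likelihoods. A minimal base-R check, plugging in the values from the summary output: the difference test D is twice the change in log-likelihood, with degrees of freedom equal to the number of added terms (here X:X and X:Z).

```r
# Log-likelihoods taken from the summary output above
ll_h1 <- -14661.5    # interaction model (H1)
ll_h0 <- -17831.87   # baseline model (H0)

# Difference test: D = 2 * (ll_h1 - ll_h0), df = number of added terms
D  <- 2 * (ll_h1 - ll_h0)
df <- 2
p  <- pchisq(D, df = df, lower.tail = FALSE)

D  # 6340.74, matching the 'Difference test (D)' row
p  # effectively zero, matching 'P-value (D)'
```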
Next, we specify a model with two quadratic effects and one
interaction effect, and estimate it using both the QML and
double-centering approaches, with data from a subset of the PISA 2006
dataset.
m2 <- '
ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
CAREER =~ career1 + career2 + career3 + career4
SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
'
est2_dca <- modsem(m2, data = jordan)
est2_qml <- modsem(m2, data = jordan, method = "qml")
#> Warning: Standard errors for some coefficients could not be computed.
#> Warning: The variance-covariance matrix of the estimated parameters
#> (vcov) does not appear to be positive definite! The smallest
#> eigenvalue (= -7.886871e-04) is smaller than zero. This may
#> be a symptom that the model is not identified.
summary(est2_qml)
#>
#> modsem (version 1.0.11):
#>
#> Estimator QML
#> Optimization method NLMINB
#> Number of observations 6038
#> Number of iterations 55
#> Loglikelihood -110516.71
#> Akaike (AIC) 221135.42
#> Bayesian (BIC) 221477.42
#>
#> Fit Measures for Baseline Model (H0):
#> Loglikelihood -110521.29
#> Akaike (AIC) 221138.58
#> Bayesian (BIC) 221460.46
#> Chi-square 1016.34
#> Degrees of Freedom (Chi-square) 87
#> P-value (Chi-square) 0.000
#> RMSEA 0.042
#>
#> Comparative Fit to H0 (LRT test):
#> Loglikelihood change 4.58
#> Difference test (D) 9.16
#> Degrees of freedom (D) 3
#> P-value (D) 0.027
#>
#> R-Squared Interaction Model (H1):
#> CAREER 0.524
#> R-Squared Baseline Model (H0):
#> CAREER 0.510
#> R-Squared Change (H1 - H0):
#> CAREER 0.014
#>
#> Parameter Estimates:
#> Coefficients unstandardized
#> Information observed
#> Standard errors standard
#>
#> Latent Variables:
#> Estimate Std.Error z.value P(>|z|)
#> ENJ =~
#> enjoy1 1.000
#> enjoy2 1.002 0.020 50.553 0.000
#> enjoy3 0.894 0.020 43.654 0.000
#> enjoy4 1.000 0.021 48.220 0.000
#> enjoy5 1.047 0.021 50.375 0.000
#> SC =~
#> academic1 1.000
#> academic2 1.105 0.028 38.935 0.000
#> academic3 1.235 0.030 41.700 0.000
#> academic4 1.254 0.030 41.808 0.000
#> academic5 1.114 0.029 38.631 0.000
#> academic6 1.199 0.030 40.348 0.000
#> CAREER =~
#> career1 1.000
#> career2 1.040 0.016 65.181 0.000
#> career3 0.952 0.016 57.860 0.000
#> career4 0.818 0.017 48.364 0.000
#>
#> Regressions:
#> Estimate Std.Error z.value P(>|z|)
#> CAREER ~
#> ENJ 0.543 0.021 26.139 0.000
#> SC 0.461 0.023 19.650 0.000
#> ENJ:ENJ 0.051 0.020 2.523 0.012
#> ENJ:SC -0.015 0.038 -0.407 0.684
#> SC:SC -0.004 0.030 -0.124 0.901
#>
#> Intercepts:
#> Estimate Std.Error z.value P(>|z|)
#> .enjoy1 0.000
#> .enjoy2 0.000 0.011 0.014 0.989
#> .enjoy3 0.000 0.011 -0.027 0.979
#> .enjoy4 0.000 0.011 0.010 0.992
#> .enjoy5 0.000 0.011 0.034 0.973
#> .academic1 0.000 0.012 -0.012 0.991
#> .academic2 0.000
#> .academic3 0.000 0.008 -0.053 0.957
#> .academic4 0.000 0.007 -0.031 0.975
#> .academic5 -0.001 0.009 -0.066 0.947
#> .academic6 0.001 0.009 0.080 0.936
#> .career1 -0.021 0.015 -1.344 0.179
#> .career2 -0.022 0.015 -1.423 0.155
#> .career3 -0.020 0.015 -1.313 0.189
#> .career4 -0.018 0.015 -1.216 0.224
#>
#> Covariances:
#> Estimate Std.Error z.value P(>|z|)
#> ENJ ~~
#> SC 0.217 0.009 25.472 0.000
#>
#> Variances:
#> Estimate Std.Error z.value P(>|z|)
#> .enjoy1 0.487 0.011 44.355 0.000
#> .enjoy2 0.489 0.011 44.445 0.000
#> .enjoy3 0.596 0.012 48.252 0.000
#> .enjoy4 0.487 0.011 44.571 0.000
#> .enjoy5 0.442 0.010 42.491 0.000
#> .academic1 0.645 0.013 49.811 0.000
#> .academic2 0.566 0.012 47.857 0.000
#> .academic3 0.473 0.011 44.316 0.000
#> .academic4 0.455 0.010 43.576 0.000
#> .academic5 0.565 0.012 47.690 0.000
#> .academic6 0.501 0.011 45.425 0.000
#> .career1 0.374 0.009 40.414 0.000
#> .career2 0.328 0.009 37.034 0.000
#> .career3 0.436 0.010 43.268 0.000
#> .career4 0.576 0.012 48.379 0.000
#> ENJ 0.499 0.017 29.535 0.000
#> SC 0.338 0.015 23.179 0.000
#> .CAREER 0.299 0.010 29.423 0.000
NOTE: We can also use the LMS approach to estimate this model, but it
will be much slower, since we have to integrate along both ENJ and
SC. In the first example it is sufficient to integrate along X alone,
but the addition of the SC:SC term means that we have to explicitly
model SC as a moderator as well. As a result, we (by default) have to
integrate along 24^2 = 576 nodes, which slows down the optimization
process and dramatically increases the computation time of the
standard errors. To speed up the estimation, we can reduce the number
of quadrature nodes and calculate standard errors using the outer
product of the score function instead of the negative of the Hessian
matrix. Additionally, we can pass mean.observed = FALSE, constraining
the intercepts of the indicators to zero.
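The growth in integration points can be verified directly: the total number of quadrature nodes is the per-dimension node count raised to the number of latent moderators. A quick base-R illustration of why lowering `nodes` helps so much with two moderators:

```r
# Total quadrature points = nodes ^ dimensions
total_points <- function(nodes, dims) nodes^dims

total_points(24, 1)  # first example, one moderator (X):   24
total_points(24, 2)  # this model at the default:          576
total_points(15, 2)  # this model with nodes = 15:         225
```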
m2 <- '
ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
CAREER =~ career1 + career2 + career3 + career4
SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
'
est2_lms <- modsem(m2, data = jordan, method = "lms",
nodes = 15, OFIM.hessian = FALSE,
mean.observed = FALSE)
summary(est2_lms)
#>
#> modsem (version 1.0.11):
#>
#> Estimator LMS
#> Optimization method EMA-NLMINB
#> Number of observations 6038
#> Number of iterations 39
#> Loglikelihood -99999.35
#> Akaike (AIC) 200072.71
#> Bayesian (BIC) 200320.82
#>
#> Numerical Integration:
#> Points of integration (per dim) 15
#> Dimensions 2
#> Total points of integration 225
#>
#> Fit Measures for Baseline Model (H0):
#> Loglikelihood -110521.29
#> Akaike (AIC) 221108.58
#> Bayesian (BIC) 221329.87
#> Chi-square 1016.34
#> Degrees of Freedom (Chi-square) 87
#> P-value (Chi-square) 0.000
#> RMSEA 0.042
#>
#> Comparative Fit to H0 (LRT test):
#> Loglikelihood change 10521.94
#> Difference test (D) 21043.87
#> Degrees of freedom (D) 4
#> P-value (D) 0.000
#>
#> R-Squared Interaction Model (H1):
#> CAREER 0.511
#> R-Squared Baseline Model (H0):
#> CAREER 0.510
#> R-Squared Change (H1 - H0):
#> CAREER 0.001
#>
#> Parameter Estimates:
#> Coefficients unstandardized
#> Information observed
#> Standard errors standard
#>
#> Latent Variables:
#> Estimate Std.Error z.value P(>|z|)
#> ENJ =~
#> enjoy1 1.000
#> enjoy2 1.002 0.021 48.191 0.000
#> enjoy3 0.894 0.020 45.409 0.000
#> enjoy4 0.998 0.019 53.052 0.000
#> enjoy5 1.047 0.019 55.767 0.000
#> SC =~
#> academic1 1.000
#> academic2 1.105 0.028 39.153 0.000
#> academic3 1.236 0.029 43.129 0.000
#> academic4 1.255 0.029 44.009 0.000
#> academic5 1.115 0.027 40.609 0.000
#> academic6 1.200 0.028 42.805 0.000
#> CAREER =~
#> career1 1.000
#> career2 1.040 0.019 54.669 0.000
#> career3 0.952 0.019 49.624 0.000
#> career4 0.818 0.017 47.319 0.000
#>
#> Regressions:
#> Estimate Std.Error z.value P(>|z|)
#> CAREER ~
#> ENJ 0.524 0.021 24.961 0.000
#> SC 0.466 0.024 19.454 0.000
#> ENJ:ENJ 0.027 0.018 1.525 0.127
#> ENJ:SC -0.048 0.029 -1.652 0.099
#> SC:SC 0.002 0.023 0.095 0.924
#>
#> Intercepts:
#> Estimate Std.Error z.value P(>|z|)
#> .CAREER -0.004 0.013 -0.317 0.751
#>
#> Covariances:
#> Estimate Std.Error z.value P(>|z|)
#> ENJ ~~
#> SC 0.216 0.007 29.325 0.000
#>
#> Variances:
#> Estimate Std.Error z.value P(>|z|)
#> .enjoy1 0.487 0.009 56.164 0.000
#> .enjoy2 0.488 0.009 55.990 0.000
#> .enjoy3 0.596 0.011 53.605 0.000
#> .enjoy4 0.488 0.009 57.047 0.000
#> .enjoy5 0.442 0.009 50.867 0.000
#> .academic1 0.645 0.012 52.067 0.000
#> .academic2 0.566 0.010 55.056 0.000
#> .academic3 0.473 0.009 50.901 0.000
#> .academic4 0.455 0.009 49.255 0.000
#> .academic5 0.565 0.010 55.868 0.000
#> .academic6 0.502 0.010 52.325 0.000
#> .career1 0.373 0.008 46.719 0.000
#> .career2 0.328 0.007 45.886 0.000
#> .career3 0.436 0.009 46.208 0.000
#> .career4 0.576 0.011 54.852 0.000
#> ENJ 0.498 0.015 34.255 0.000
#> SC 0.337 0.013 26.588 0.000
#> .CAREER 0.302 0.010 29.689 0.000
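As a back-of-the-envelope check (plain arithmetic, not a modsem API call), the reported AIC and BIC are consistent with the log-likelihood: AIC = 2k - 2*ll and BIC = k*log(n) - 2*ll, where k is the number of free parameters. Solving the AIC formula for k using the values above:

```r
ll  <- -99999.35   # log-likelihood from the summary output
aic <- 200072.71   # reported AIC
n   <- 6038        # number of observations

# AIC = 2k - 2*ll  =>  k = (aic + 2*ll) / 2
k <- (aic + 2 * ll) / 2
round(k)  # about 37 free parameters

# BIC = k*log(n) - 2*ll should then recover the reported value
bic <- round(k) * log(n) - 2 * ll
round(bic, 2)  # close to the reported 200320.82
```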