modsem
The Basic Syntax
modsem introduces a new feature to the lavaan syntax: the colon operator (:). The colon operator works the same way as in the lm() function. To specify an interaction effect between two variables, you join them as Var1:Var2.
Models can be estimated using one of the product indicator approaches ("ca", "rca", "dblcent", "pind"), the latent moderated structural equations approach ("lms"), or the quasi maximum likelihood approach ("qml"). The product indicator approaches are estimated via lavaan, while the lms and qml approaches are estimated by modsem itself.
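As a rough illustration of what the product indicator approaches do behind the scenes, here is a minimal base-R sketch of double-mean-centering (the idea behind "dblcent"): each indicator is mean-centered, products are formed, and the products themselves are re-centered. The data here are simulated and the variable names merely mirror the oneInt example below; this is not modsem's actual implementation.

```r
set.seed(1)
n  <- 200
x1 <- rnorm(n)  # one indicator of the latent X
z1 <- rnorm(n)  # one indicator of the latent Z

# Step 1: mean-center each indicator
x1c <- x1 - mean(x1)
z1c <- z1 - mean(z1)

# Step 2: form the product indicator, then center the product itself
# (the "double" in double-mean-centering)
x1z1 <- x1c * z1c
x1z1 <- x1z1 - mean(x1z1)

# The centered product then serves as an indicator of the latent
# interaction term (e.g., XZ) in the measurement model.
round(mean(x1z1), 10)  # 0 by construction
```

modsem builds these product indicators (one per pair of indicators) automatically when a product indicator method is chosen.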
A Simple Example
Here is a simple example of how to specify an interaction effect between two latent variables in the lavaan syntax.
m1 <- '
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3
# Inner Model
Y ~ X + Z + X:Z
'
est1 <- modsem(m1, oneInt)
summary(est1)
#> modsem (version 1.0.4, approach = dblcent):
#> lavaan 0.6-19 ended normally after 161 iterations
#>
#> Estimator ML
#> Optimization method NLMINB
#> Number of model parameters 60
#>
#> Number of observations 2000
#>
#> Model Test User Model:
#>
#> Test statistic 122.924
#> Degrees of freedom 111
#> P-value (Chi-square) 0.207
#>
#> Parameter Estimates:
#>
#> Standard errors Standard
#> Information Expected
#> Information saturated (h1) model Structured
#>
#> Latent Variables:
#> Estimate Std.Err z-value P(>|z|)
#> X =~
#> x1 1.000
#> x2 0.804 0.013 63.612 0.000
#> x3 0.916 0.014 67.144 0.000
#> Y =~
#> y1 1.000
#> y2 0.798 0.007 107.428 0.000
#> y3 0.899 0.008 112.453 0.000
#> Z =~
#> z1 1.000
#> z2 0.812 0.013 64.763 0.000
#> z3 0.882 0.013 67.014 0.000
#> XZ =~
#> x1z1 1.000
#> x2z1 0.805 0.013 60.636 0.000
#> x3z1 0.877 0.014 62.680 0.000
#> x1z2 0.793 0.013 59.343 0.000
#> x2z2 0.646 0.015 43.672 0.000
#> x3z2 0.706 0.016 44.292 0.000
#> x1z3 0.887 0.014 63.700 0.000
#> x2z3 0.716 0.016 45.645 0.000
#> x3z3 0.781 0.017 45.339 0.000
#>
#> Regressions:
#> Estimate Std.Err z-value P(>|z|)
#> Y ~
#> X 0.675 0.027 25.379 0.000
#> Z 0.561 0.026 21.606 0.000
#> XZ 0.702 0.027 26.360 0.000
#>
#> Covariances:
#> Estimate Std.Err z-value P(>|z|)
#> .x1z1 ~~
#> .x2z2 0.000
#> .x2z3 0.000
#> .x3z2 0.000
#> .x3z3 0.000
#> .x2z1 ~~
#> .x1z2 0.000
#> .x1z2 ~~
#> .x2z3 0.000
#> .x3z1 ~~
#> .x1z2 0.000
#> .x1z2 ~~
#> .x3z3 0.000
#> .x2z1 ~~
#> .x1z3 0.000
#> .x2z2 ~~
#> .x1z3 0.000
#> .x3z1 ~~
#> .x1z3 0.000
#> .x3z2 ~~
#> .x1z3 0.000
#> .x2z1 ~~
#> .x3z2 0.000
#> .x3z3 0.000
#> .x3z1 ~~
#> .x2z2 0.000
#> .x2z2 ~~
#> .x3z3 0.000
#> .x3z1 ~~
#> .x2z3 0.000
#> .x3z2 ~~
#> .x2z3 0.000
#> .x1z1 ~~
#> .x1z2 0.115 0.008 14.802 0.000
#> .x1z3 0.114 0.008 13.947 0.000
#> .x2z1 0.125 0.008 16.095 0.000
#> .x3z1 0.140 0.009 16.135 0.000
#> .x1z2 ~~
#> .x1z3 0.103 0.007 14.675 0.000
#> .x2z2 0.128 0.006 20.850 0.000
#> .x3z2 0.146 0.007 21.243 0.000
#> .x1z3 ~~
#> .x2z3 0.116 0.007 17.818 0.000
#> .x3z3 0.135 0.007 18.335 0.000
#> .x2z1 ~~
#> .x2z2 0.135 0.006 20.905 0.000
#> .x2z3 0.145 0.007 21.145 0.000
#> .x3z1 0.114 0.007 16.058 0.000
#> .x2z2 ~~
#> .x2z3 0.117 0.006 20.419 0.000
#> .x3z2 0.116 0.006 20.586 0.000
#> .x2z3 ~~
#> .x3z3 0.109 0.006 18.059 0.000
#> .x3z1 ~~
#> .x3z2 0.138 0.007 19.331 0.000
#> .x3z3 0.158 0.008 20.269 0.000
#> .x3z2 ~~
#> .x3z3 0.131 0.007 19.958 0.000
#> X ~~
#> Z 0.201 0.024 8.271 0.000
#> XZ 0.016 0.025 0.628 0.530
#> Z ~~
#> XZ 0.062 0.025 2.449 0.014
#>
#> Variances:
#> Estimate Std.Err z-value P(>|z|)
#> .x1 0.160 0.009 17.871 0.000
#> .x2 0.162 0.007 22.969 0.000
#> .x3 0.163 0.008 20.161 0.000
#> .y1 0.159 0.009 17.896 0.000
#> .y2 0.154 0.007 22.640 0.000
#> .y3 0.164 0.008 20.698 0.000
#> .z1 0.168 0.009 18.143 0.000
#> .z2 0.158 0.007 22.264 0.000
#> .z3 0.158 0.008 20.389 0.000
#> .x1z1 0.311 0.014 22.227 0.000
#> .x2z1 0.292 0.011 27.287 0.000
#> .x3z1 0.327 0.012 26.275 0.000
#> .x1z2 0.290 0.011 26.910 0.000
#> .x2z2 0.239 0.008 29.770 0.000
#> .x3z2 0.270 0.009 29.117 0.000
#> .x1z3 0.272 0.012 23.586 0.000
#> .x2z3 0.245 0.009 27.979 0.000
#> .x3z3 0.297 0.011 28.154 0.000
#> X 0.981 0.036 26.895 0.000
#> .Y 0.990 0.038 25.926 0.000
#> Z 1.016 0.038 26.856 0.000
#> XZ 1.045 0.044 24.004 0.000
By default, the model is estimated using the "dblcent" method. If you want to use another method, you can change it via the method argument.
est1 <- modsem(m1, oneInt, method = "lms")
summary(est1)
#>
#> modsem (version 1.0.4):
#> Estimator LMS
#> Optimization method EM-NLMINB
#> Number of observations 2000
#> Number of iterations 84
#> Loglikelihood -14687.86
#> Akaike (AIC) 29437.73
#> Bayesian (BIC) 29611.35
#>
#> Numerical Integration:
#> Points of integration (per dim) 24
#> Dimensions 1
#> Total points of integration 24
#>
#> Fit Measures for H0:
#> Loglikelihood -17832
#> Akaike (AIC) 35723.75
#> Bayesian (BIC) 35891.78
#> Chi-square 17.52
#> Degrees of Freedom (Chi-square) 24
#> P-value (Chi-square) 0.826
#> RMSEA 0.000
#>
#> Comparative fit to H0 (no interaction effect)
#> Loglikelihood change 3144.01
#> Difference test (D) 6288.02
#> Degrees of freedom (D) 1
#> P-value (D) 0.000
#>
#> R-Squared:
#> Y 0.596
#> R-Squared Null-Model (H0):
#> Y 0.395
#> R-Squared Change:
#> Y 0.201
#>
#> Parameter Estimates:
#> Coefficients unstandardized
#> Information expected
#> Standard errors standard
#>
#> Latent Variables:
#> Estimate Std.Error z.value P(>|z|)
#> X =~
#> x1 1.000
#> x2 0.804 0.018 45.23 0.000
#> x3 0.915 0.015 62.37 0.000
#> Z =~
#> z1 1.000
#> z2 0.810 0.014 55.97 0.000
#> z3 0.881 0.014 64.52 0.000
#> Y =~
#> y1 1.000
#> y2 0.799 0.009 85.43 0.000
#> y3 0.899 0.010 88.71 0.000
#>
#> Regressions:
#> Estimate Std.Error z.value P(>|z|)
#> Y ~
#> X 0.677 0.039 17.21 0.000
#> Z 0.572 0.038 14.98 0.000
#> X:Z 0.712 0.032 22.46 0.000
#>
#> Intercepts:
#> Estimate Std.Error z.value P(>|z|)
#> x1 1.026 0.025 41.65 0.000
#> x2 1.218 0.020 61.14 0.000
#> x3 0.922 0.024 38.69 0.000
#> z1 1.016 0.031 32.90 0.000
#> z2 1.209 0.029 42.26 0.000
#> z3 0.920 0.027 34.22 0.000
#> y1 1.046 0.024 43.95 0.000
#> y2 1.228 0.022 54.98 0.000
#> y3 0.962 0.025 39.06 0.000
#> Y 0.000
#> X 0.000
#> Z 0.000
#>
#> Covariances:
#> Estimate Std.Error z.value P(>|z|)
#> X ~~
#> Z 0.198 0.032 6.27 0.000
#>
#> Variances:
#> Estimate Std.Error z.value P(>|z|)
#> x1 0.160 0.010 15.38 0.000
#> x2 0.163 0.008 19.89 0.000
#> x3 0.165 0.009 19.18 0.000
#> z1 0.166 0.010 16.74 0.000
#> z2 0.160 0.009 17.18 0.000
#> z3 0.158 0.009 18.54 0.000
#> y1 0.160 0.010 16.78 0.000
#> y2 0.154 0.008 20.01 0.000
#> y3 0.163 0.009 18.39 0.000
#> X 0.972 0.038 25.86 0.000
#> Z 1.017 0.052 19.55 0.000
#> Y 0.984 0.045 21.94 0.000
Interactions Between Two Observed Variables
modsem allows you to estimate interactions not only between latent variables but also between observed variables. Below, we first run a regression with only observed variables, where there is an interaction between x1 and z1, and then run the equivalent model using modsem().
Using a Regression
reg1 <- lm(y1 ~ x1*z1, oneInt)
summary(reg1)
#>
#> Call:
#> lm(formula = y1 ~ x1 * z1, data = oneInt)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -3.7155 -0.8087 -0.0367 0.8078 4.6531
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 0.51422 0.04618 11.135 <2e-16 ***
#> x1 0.05477 0.03387 1.617 0.1060
#> z1 -0.06575 0.03461 -1.900 0.0576 .
#> x1:z1 0.54361 0.02272 23.926 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 1.184 on 1996 degrees of freedom
#> Multiple R-squared: 0.4714, Adjusted R-squared: 0.4706
#> F-statistic: 593.3 on 3 and 1996 DF, p-value: < 2.2e-16
Using modsem
When you have interactions between observed variables, it is generally recommended to use method = "pind". Interaction effects between observed variables are not supported by the LMS and QML approaches. In some cases, you can define a latent variable with a single indicator to estimate the interaction effect between two observed variables in the LMS and QML approaches, but this is generally not recommended.
# Using "pind" as the method (see Chapter 3)
est2 <- modsem('y1 ~ x1 + z1 + x1:z1', data = oneInt, method = "pind")
summary(est2)
#> modsem (version 1.0.4, approach = pind):
#> lavaan 0.6-19 ended normally after 1 iteration
#>
#> Estimator ML
#> Optimization method NLMINB
#> Number of model parameters 4
#>
#> Number of observations 2000
#>
#> Model Test User Model:
#>
#> Test statistic 0.000
#> Degrees of freedom 0
#>
#> Parameter Estimates:
#>
#> Standard errors Standard
#> Information Expected
#> Information saturated (h1) model Structured
#>
#> Regressions:
#> Estimate Std.Err z-value P(>|z|)
#> y1 ~
#> x1 0.055 0.034 1.619 0.105
#> z1 -0.066 0.035 -1.902 0.057
#> x1z1 0.544 0.023 23.950 0.000
#>
#> Variances:
#> Estimate Std.Err z-value P(>|z|)
#> .y1 1.399 0.044 31.623 0.000
Interactions Between Latent and Observed Variables
modsem also allows you to estimate interaction effects between latent and observed variables. To do so, simply join a latent and an observed variable with a colon (e.g., 'latent:observed'). As with interactions between observed variables, it is generally recommended to use method = "pind" when estimating the effect between a latent and an observed variable.
m3 <- '
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
# Inner Model
Y ~ X + z1 + X:z1
'
est3 <- modsem(m3, oneInt, method = "pind")
summary(est3)
#> modsem (version 1.0.4, approach = pind):
#> lavaan 0.6-19 ended normally after 45 iterations
#>
#> Estimator ML
#> Optimization method NLMINB
#> Number of model parameters 22
#>
#> Number of observations 2000
#>
#> Model Test User Model:
#>
#> Test statistic 4468.171
#> Degrees of freedom 32
#> P-value (Chi-square) 0.000
#>
#> Parameter Estimates:
#>
#> Standard errors Standard
#> Information Expected
#> Information saturated (h1) model Structured
#>
#> Latent Variables:
#> Estimate Std.Err z-value P(>|z|)
#> X =~
#> x1 1.000
#> x2 0.803 0.013 63.697 0.000
#> x3 0.915 0.014 67.548 0.000
#> Y =~
#> y1 1.000
#> y2 0.798 0.007 115.375 0.000
#> y3 0.899 0.007 120.783 0.000
#> Xz1 =~
#> x1z1 1.000
#> x2z1 0.947 0.010 96.034 0.000
#> x3z1 0.902 0.009 99.512 0.000
#>
#> Regressions:
#> Estimate Std.Err z-value P(>|z|)
#> Y ~
#> X 0.021 0.034 0.614 0.540
#> z1 -0.185 0.023 -8.096 0.000
#> Xz1 0.646 0.017 37.126 0.000
#>
#> Covariances:
#> Estimate Std.Err z-value P(>|z|)
#> X ~~
#> Xz1 1.243 0.055 22.523 0.000
#>
#> Variances:
#> Estimate Std.Err z-value P(>|z|)
#> .x1 0.158 0.009 17.976 0.000
#> .x2 0.164 0.007 23.216 0.000
#> .x3 0.162 0.008 20.325 0.000
#> .y1 0.158 0.009 17.819 0.000
#> .y2 0.154 0.007 22.651 0.000
#> .y3 0.164 0.008 20.744 0.000
#> .x1z1 0.315 0.017 18.328 0.000
#> .x2z1 0.428 0.019 22.853 0.000
#> .x3z1 0.337 0.016 21.430 0.000
#> X 0.982 0.036 26.947 0.000
#> .Y 1.112 0.040 27.710 0.000
#> Xz1 3.965 0.136 29.217 0.000
Quadratic Effects
Quadratic effects are essentially a special case of interaction effects. Thus, modsem can also be used to estimate quadratic effects.
m4 <- '
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3
# Inner Model
Y ~ X + Z + Z:X + X:X
'
est4 <- modsem(m4, oneInt, method = "qml")
summary(est4)
#>
#> modsem (version 1.0.4):
#> Estimator QML
#> Optimization method NLMINB
#> Number of observations 2000
#> Number of iterations 104
#> Loglikelihood -17496.2
#> Akaike (AIC) 35056.4
#> Bayesian (BIC) 35235.62
#>
#> Fit Measures for H0:
#> Loglikelihood -17832
#> Akaike (AIC) 35723.75
#> Bayesian (BIC) 35891.78
#> Chi-square 17.52
#> Degrees of Freedom (Chi-square) 24
#> P-value (Chi-square) 0.826
#> RMSEA 0.000
#>
#> Comparative fit to H0 (no interaction effect)
#> Loglikelihood change 335.68
#> Difference test (D) 671.35
#> Degrees of freedom (D) 2
#> P-value (D) 0.000
#>
#> R-Squared:
#> Y 0.607
#> R-Squared Null-Model (H0):
#> Y 0.395
#> R-Squared Change:
#> Y 0.212
#>
#> Parameter Estimates:
#> Coefficients unstandardized
#> Information observed
#> Standard errors standard
#>
#> Latent Variables:
#> Estimate Std.Error z.value P(>|z|)
#> X =~
#> x1 1.000
#> x2 0.803 0.013 63.963 0.000
#> x3 0.914 0.013 67.795 0.000
#> Z =~
#> z1 1.000
#> z2 0.810 0.012 65.122 0.000
#> z3 0.881 0.013 67.621 0.000
#> Y =~
#> y1 1.000
#> y2 0.798 0.007 107.568 0.000
#> y3 0.899 0.008 112.543 0.000
#>
#> Regressions:
#> Estimate Std.Error z.value P(>|z|)
#> Y ~
#> X 0.674 0.032 20.888 0.000
#> Z 0.566 0.030 18.947 0.000
#> X:X -0.005 0.023 -0.207 0.836
#> X:Z 0.713 0.029 24.554 0.000
#>
#> Intercepts:
#> Estimate Std.Error z.value P(>|z|)
#> x1 1.023 0.024 42.894 0.000
#> x2 1.216 0.020 60.994 0.000
#> x3 0.919 0.022 41.484 0.000
#> z1 1.012 0.024 41.575 0.000
#> z2 1.206 0.020 59.269 0.000
#> z3 0.916 0.022 42.062 0.000
#> y1 1.042 0.038 27.683 0.000
#> y2 1.224 0.030 40.158 0.000
#> y3 0.958 0.034 28.101 0.000
#> Y 0.000
#> X 0.000
#> Z 0.000
#>
#> Covariances:
#> Estimate Std.Error z.value P(>|z|)
#> X ~~
#> Z 0.200 0.024 8.238 0.000
#>
#> Variances:
#> Estimate Std.Error z.value P(>|z|)
#> x1 0.158 0.009 18.145 0.000
#> x2 0.162 0.007 23.188 0.000
#> x3 0.165 0.008 20.821 0.000
#> z1 0.166 0.009 18.340 0.000
#> z2 0.159 0.007 22.621 0.000
#> z3 0.158 0.008 20.713 0.000
#> y1 0.159 0.009 17.975 0.000
#> y2 0.154 0.007 22.670 0.000
#> y3 0.164 0.008 20.711 0.000
#> X 0.983 0.036 26.994 0.000
#> Z 1.019 0.038 26.951 0.000
#> Y 0.943 0.038 24.819 0.000
More Complicated Examples
Here is a more complex example using the theory of planned behavior (TPB) model.
tpb <- '
# Outer Model (Based on Hagger et al., 2007)
ATT =~ att1 + att2 + att3 + att4 + att5
SN =~ sn1 + sn2
PBC =~ pbc1 + pbc2 + pbc3
INT =~ int1 + int2 + int3
BEH =~ b1 + b2
# Inner Model (Based on Steinmetz et al., 2011)
INT ~ ATT + SN + PBC
BEH ~ INT + PBC + INT:PBC
'
# The double-centering approach
est_tpb <- modsem(tpb, TPB)
# Using the LMS approach
est_tpb_lms <- modsem(tpb, TPB, method = "lms")
#> Warning: It is recommended that you have at least 32 nodes for interaction
#> effects between exogenous and endogenous variables in the lms approach 'nodes =
#> 24'
summary(est_tpb_lms)
#>
#> modsem (version 1.0.4):
#> Estimator LMS
#> Optimization method EM-NLMINB
#> Number of observations 2000
#> Number of iterations 88
#> Loglikelihood -23463.87
#> Akaike (AIC) 47035.73
#> Bayesian (BIC) 47338.18
#>
#> Numerical Integration:
#> Points of integration (per dim) 24
#> Dimensions 1
#> Total points of integration 24
#>
#> Fit Measures for H0:
#> Loglikelihood -26393
#> Akaike (AIC) 52892.45
#> Bayesian (BIC) 53189.29
#> Chi-square 66.27
#> Degrees of Freedom (Chi-square) 82
#> P-value (Chi-square) 0.897
#> RMSEA 0.000
#>
#> Comparative fit to H0 (no interaction effect)
#> Loglikelihood change 2929.36
#> Difference test (D) 5858.71
#> Degrees of freedom (D) 1
#> P-value (D) 0.000
#>
#> R-Squared:
#> INT 0.361
#> BEH 0.248
#> R-Squared Null-Model (H0):
#> INT 0.367
#> BEH 0.210
#> R-Squared Change:
#> INT -0.006
#> BEH 0.038
#>
#> Parameter Estimates:
#> Coefficients unstandardized
#> Information expected
#> Standard errors standard
#>
#> Latent Variables:
#> Estimate Std.Error z.value P(>|z|)
#> PBC =~
#> pbc1 1.000
#> pbc2 0.911 0.023 39.06 0.000
#> pbc3 0.802 0.016 51.13 0.000
#> ATT =~
#> att1 1.000
#> att2 0.877 0.016 54.64 0.000
#> att3 0.789 0.017 47.31 0.000
#> att4 0.695 0.018 38.72 0.000
#> att5 0.887 0.018 49.78 0.000
#> SN =~
#> sn1 1.000
#> sn2 0.889 0.027 32.67 0.000
#> INT =~
#> int1 1.000
#> int2 0.913 0.026 34.76 0.000
#> int3 0.807 0.019 43.58 0.000
#> BEH =~
#> b1 1.000
#> b2 0.961 0.044 21.59 0.000
#>
#> Regressions:
#> Estimate Std.Error z.value P(>|z|)
#> INT ~
#> PBC 0.217 0.045 4.78 0.000
#> ATT 0.213 0.033 6.38 0.000
#> SN 0.177 0.036 4.90 0.000
#> BEH ~
#> PBC 0.228 0.035 6.45 0.000
#> INT 0.182 0.034 5.29 0.000
#> PBC:INT 0.204 0.025 8.01 0.000
#>
#> Intercepts:
#> Estimate Std.Error z.value P(>|z|)
#> pbc1 0.960 0.028 34.86 0.000
#> pbc2 0.951 0.024 39.44 0.000
#> pbc3 0.961 0.022 44.16 0.000
#> att1 0.988 0.036 27.66 0.000
#> att2 0.984 0.030 32.28 0.000
#> att3 0.996 0.026 37.69 0.000
#> att4 0.981 0.022 44.69 0.000
#> att5 0.969 0.028 34.07 0.000
#> sn1 0.980 0.031 31.89 0.000
#> sn2 0.987 0.034 28.80 0.000
#> int1 0.996 0.031 32.40 0.000
#> int2 0.996 0.027 37.26 0.000
#> int3 0.991 0.024 40.79 0.000
#> b1 0.990 0.024 40.94 0.000
#> b2 1.008 0.026 39.40 0.000
#> INT 0.000
#> BEH 0.000
#> PBC 0.000
#> ATT 0.000
#> SN 0.000
#>
#> Covariances:
#> Estimate Std.Error z.value P(>|z|)
#> PBC ~~
#> ATT 0.658 0.037 17.84 0.000
#> SN 0.657 0.043 15.32 0.000
#> ATT ~~
#> SN 0.616 0.047 13.06 0.000
#>
#> Variances:
#> Estimate Std.Error z.value P(>|z|)
#> pbc1 0.147 0.012 12.31 0.000
#> pbc2 0.164 0.010 16.73 0.000
#> pbc3 0.154 0.010 15.21 0.000
#> att1 0.167 0.011 15.71 0.000
#> att2 0.150 0.010 15.45 0.000
#> att3 0.159 0.009 17.26 0.000
#> att4 0.163 0.008 19.28 0.000
#> att5 0.159 0.009 16.97 0.000
#> sn1 0.178 0.023 7.72 0.000
#> sn2 0.156 0.017 9.32 0.000
#> int1 0.157 0.013 12.37 0.000
#> int2 0.160 0.011 14.88 0.000
#> int3 0.168 0.009 18.87 0.000
#> b1 0.186 0.026 7.28 0.000
#> b2 0.135 0.025 5.45 0.000
#> PBC 0.933 0.039 23.75 0.000
#> ATT 0.985 0.056 17.72 0.000
#> SN 0.974 0.058 16.66 0.000
#> INT 0.491 0.032 15.36 0.000
#> BEH 0.456 0.034 13.58 0.000
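The warning above recommends at least 32 integration nodes when the interaction involves an endogenous variable. Assuming the nodes argument quoted in that warning ('nodes = 24' is the default being flagged), the model can be re-estimated with more quadrature points, at some computational cost:

```r
# Increase the number of integration nodes per dimension
# (assumes the `nodes` argument referenced in the warning above;
# requires the modsem package and the TPB dataset)
est_tpb_lms32 <- modsem(tpb, TPB, method = "lms", nodes = 32)
summary(est_tpb_lms32)
```

More nodes make the numerical integration in the LMS approach more accurate but slower.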
Here is an example that includes two quadratic effects and one interaction effect, using the jordan dataset. The dataset is a subset of the PISA 2006 dataset.
m2 <- '
ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
CAREER =~ career1 + career2 + career3 + career4
SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
'
est_jordan <- modsem(m2, data = jordan)
est_jordan_qml <- modsem(m2, data = jordan, method = "qml")
#> Warning: SE's for some coefficients could not be computed.
summary(est_jordan_qml)
#>
#> modsem (version 1.0.4):
#> Estimator QML
#> Optimization method NLMINB
#> Number of observations 6038
#> Number of iterations 86
#> Loglikelihood -110520.22
#> Akaike (AIC) 221142.45
#> Bayesian (BIC) 221484.45
#>
#> Fit Measures for H0:
#> Loglikelihood -110521
#> Akaike (AIC) 221138.58
#> Bayesian (BIC) 221460.46
#> Chi-square 1016.34
#> Degrees of Freedom (Chi-square) 87
#> P-value (Chi-square) 0.000
#> RMSEA 0.005
#>
#> Comparative fit to H0 (no interaction effect)
#> Loglikelihood change 1.06
#> Difference test (D) 2.13
#> Degrees of freedom (D) 3
#> P-value (D) 0.546
#>
#> R-Squared:
#> CAREER 0.513
#> R-Squared Null-Model (H0):
#> CAREER 0.510
#> R-Squared Change:
#> CAREER 0.003
#>
#> Parameter Estimates:
#> Coefficients unstandardized
#> Information observed
#> Standard errors standard
#>
#> Latent Variables:
#> Estimate Std.Error z.value P(>|z|)
#> ENJ =~
#> enjoy1 1.000
#> enjoy2 1.002 0.020 50.584 0.000
#> enjoy3 0.894 0.020 43.669 0.000
#> enjoy4 0.999 0.021 48.225 0.000
#> enjoy5 1.047 0.021 50.399 0.000
#> SC =~
#> academic1 1.000
#> academic2 1.104 0.028 38.946 0.000
#> academic3 1.235 0.030 41.720 0.000
#> academic4 1.254 0.030 41.829 0.000
#> academic5 1.113 0.029 38.649 0.000
#> academic6 1.198 0.030 40.357 0.000
#> CAREER =~
#> career1 1.000
#> career2 1.040 0.016 65.181 0.000
#> career3 0.952 0.016 57.839 0.000
#> career4 0.818 0.017 48.358 0.000
#>
#> Regressions:
#> Estimate Std.Error z.value P(>|z|)
#> CAREER ~
#> ENJ 0.523 0.020 26.293 0.000
#> SC 0.467 0.023 19.899 0.000
#> ENJ:ENJ 0.026 0.022 1.221 0.222
#> ENJ:SC -0.040 0.046 -0.874 0.382
#> SC:SC -0.002 0.034 -0.049 0.961
#>
#> Intercepts:
#> Estimate Std.Error z.value P(>|z|)
#> enjoy1 0.000
#> enjoy2 0.000 0.005 0.032 0.975
#> enjoy3 0.000 0.009 -0.035 0.972
#> enjoy4 0.000
#> enjoy5 0.000 0.006 0.059 0.953
#> academic1 0.000 0.007 -0.016 0.988
#> academic2 0.000 0.011 -0.014 0.989
#> academic3 0.000 0.011 -0.039 0.969
#> academic4 0.000 0.010 -0.022 0.983
#> academic5 -0.001 0.011 -0.061 0.951
#> academic6 0.001 0.011 0.064 0.949
#> career1 -0.004 0.015 -0.246 0.806
#> career2 -0.005 0.015 -0.299 0.765
#> career3 -0.004 0.015 -0.255 0.798
#> career4 -0.004 0.014 -0.269 0.788
#> CAREER 0.000
#> ENJ 0.000
#> SC 0.000
#>
#> Covariances:
#> Estimate Std.Error z.value P(>|z|)
#> ENJ ~~
#> SC 0.218 0.009 25.477 0.000
#>
#> Variances:
#> Estimate Std.Error z.value P(>|z|)
#> enjoy1 0.487 0.011 44.332 0.000
#> enjoy2 0.488 0.011 44.407 0.000
#> enjoy3 0.596 0.012 48.234 0.000
#> enjoy4 0.488 0.011 44.561 0.000
#> enjoy5 0.442 0.010 42.472 0.000
#> academic1 0.644 0.013 49.816 0.000
#> academic2 0.566 0.012 47.862 0.000
#> academic3 0.474 0.011 44.316 0.000
#> academic4 0.455 0.010 43.580 0.000
#> academic5 0.565 0.012 47.696 0.000
#> academic6 0.502 0.011 45.434 0.000
#> career1 0.373 0.009 40.392 0.000
#> career2 0.328 0.009 37.018 0.000
#> career3 0.436 0.010 43.274 0.000
#> career4 0.576 0.012 48.373 0.000
#> ENJ 0.500 0.017 29.546 0.000
#> SC 0.338 0.015 23.196 0.000
#> CAREER 0.302 0.010 29.710 0.000
Note: Other approaches also work but may be quite slow depending on the number of interaction effects, particularly the LMS and constrained approaches.