modsem
The Basic Syntax
modsem introduces a new feature to the lavaan syntax: the colon operator (:). The colon operator works the same way as in the lm() function. To specify an interaction effect between two variables, you join them by Var1:Var2.
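To illustrate the analogy with lm(), here is a small base-R sketch (the simulated data and variable names are invented for illustration and are not part of the vignette's datasets):

```r
set.seed(1)
df <- data.frame(x = rnorm(100), z = rnorm(100))
df$y <- 0.5 * df$x + 0.3 * df$z + 0.4 * df$x * df$z + rnorm(100)

# In lm(), `x:z` adds the product term between two predictors;
# `x*z` is shorthand for x + z + x:z.
fit <- lm(y ~ x + z + x:z, data = df)
coef(fit)  # intercept, x, z, and the x:z product term
```

modsem uses the same `x:z` notation, but between (latent or observed) variables in a structural equation model.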
Models can be estimated using one of the product indicator approaches ("ca", "rca", "dblcent", "pind"), the latent moderated structural equations approach ("lms"), or the quasi-maximum likelihood approach ("qml"). The product indicator approaches are estimated via lavaan, while the lms and qml approaches are estimated by modsem itself.
A Simple Example
Here is a simple example of how to specify an interaction effect between two latent variables using the modsem syntax.
m1 <- '
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3
# Inner Model
Y ~ X + Z + X:Z
'
est1 <- modsem(m1, oneInt)
summary(est1)
#> Estimating baseline model (H0)
#> modsem (version 1.0.11, approach = dblcent):
#>
#> Interaction Model Fit Measures (H1):
#> Loglikelihood -26807.61
#> Akaike (AIC) 53735.22
#> Bayesian (BIC) 54071.28
#> Chi-square 122.92
#> Degrees of Freedom 111
#> P-value (Chi-square) 0.207
#> RMSEA 0.007
#> CFI 1.000
#> SRMR 0.008
#>
#> Fit Measures for Baseline Model (H0):
#> Loglikelihood -27137.74
#> Akaike (AIC) 54393.48
#> Bayesian (BIC) 54723.93
#> Chi-square 783.18
#> Degrees of Freedom 112
#> P-value (Chi-square) 0.000
#> RMSEA 0.055
#> CFI 0.987
#> SRMR 0.141
#>
#> Comparative Fit to H0 (LRT test):
#> Chi-square diff 660.257
#> Degrees of freedom diff 1
#> P-value (LRT) 0.000
#>
#> R-Squared Interaction Model (H1):
#> Y 0.602
#> R-Squared Baseline Model (H0):
#> Y 0.397
#> R-Squared Change (H1 - H0):
#> Y 0.204
#>
#> lavaan 0.6-19 ended normally after 161 iterations
#>
#> Estimator ML
#> Optimization method NLMINB
#> Number of model parameters 60
#>
#> Number of observations 2000
#>
#> Model Test User Model:
#>
#> Test statistic 122.924
#> Degrees of freedom 111
#> P-value (Chi-square) 0.207
#>
#> Parameter Estimates:
#>
#> Standard errors Standard
#> Information Expected
#> Information saturated (h1) model Structured
#>
#> Latent Variables:
#> Estimate Std.Err z-value P(>|z|)
#> X =~
#> x1 1.000
#> x2 0.804 0.013 63.612 0.000
#> x3 0.916 0.014 67.144 0.000
#> Y =~
#> y1 1.000
#> y2 0.798 0.007 107.428 0.000
#> y3 0.899 0.008 112.453 0.000
#> Z =~
#> z1 1.000
#> z2 0.812 0.013 64.763 0.000
#> z3 0.882 0.013 67.014 0.000
#> XZ =~
#> x1z1 1.000
#> x2z1 0.805 0.013 60.636 0.000
#> x3z1 0.877 0.014 62.680 0.000
#> x1z2 0.793 0.013 59.343 0.000
#> x2z2 0.646 0.015 43.672 0.000
#> x3z2 0.706 0.016 44.292 0.000
#> x1z3 0.887 0.014 63.700 0.000
#> x2z3 0.716 0.016 45.645 0.000
#> x3z3 0.781 0.017 45.339 0.000
#>
#> Regressions:
#> Estimate Std.Err z-value P(>|z|)
#> Y ~
#> X 0.675 0.027 25.379 0.000
#> Z 0.561 0.026 21.606 0.000
#> XZ 0.702 0.027 26.360 0.000
#>
#> Covariances:
#> Estimate Std.Err z-value P(>|z|)
#> .x1z1 ~~
#> .x1z2 0.115 0.008 14.802 0.000
#> .x1z3 0.114 0.008 13.947 0.000
#> .x2z1 0.125 0.008 16.095 0.000
#> .x3z1 0.140 0.009 16.135 0.000
#> .x1z2 ~~
#> .x1z3 0.103 0.007 14.675 0.000
#> .x2z2 0.128 0.006 20.850 0.000
#> .x3z2 0.146 0.007 21.243 0.000
#> .x1z3 ~~
#> .x2z3 0.116 0.007 17.818 0.000
#> .x3z3 0.135 0.007 18.335 0.000
#> .x2z1 ~~
#> .x2z2 0.135 0.006 20.905 0.000
#> .x2z3 0.145 0.007 21.145 0.000
#> .x3z1 0.114 0.007 16.058 0.000
#> .x2z2 ~~
#> .x2z3 0.117 0.006 20.419 0.000
#> .x3z2 0.116 0.006 20.586 0.000
#> .x2z3 ~~
#> .x3z3 0.109 0.006 18.059 0.000
#> .x3z1 ~~
#> .x3z2 0.138 0.007 19.331 0.000
#> .x3z3 0.158 0.008 20.269 0.000
#> .x3z2 ~~
#> .x3z3 0.131 0.007 19.958 0.000
#> X ~~
#> Z 0.201 0.024 8.271 0.000
#> XZ 0.016 0.025 0.628 0.530
#> Z ~~
#> XZ 0.062 0.025 2.449 0.014
#>
#> Variances:
#> Estimate Std.Err z-value P(>|z|)
#> .x1 0.160 0.009 17.871 0.000
#> .x2 0.162 0.007 22.969 0.000
#> .x3 0.163 0.008 20.161 0.000
#> .y1 0.159 0.009 17.896 0.000
#> .y2 0.154 0.007 22.640 0.000
#> .y3 0.164 0.008 20.698 0.000
#> .z1 0.168 0.009 18.143 0.000
#> .z2 0.158 0.007 22.264 0.000
#> .z3 0.158 0.008 20.389 0.000
#> .x1z1 0.311 0.014 22.227 0.000
#> .x2z1 0.292 0.011 27.287 0.000
#> .x3z1 0.327 0.012 26.275 0.000
#> .x1z2 0.290 0.011 26.910 0.000
#> .x2z2 0.239 0.008 29.770 0.000
#> .x3z2 0.270 0.009 29.117 0.000
#> .x1z3 0.272 0.012 23.586 0.000
#> .x2z3 0.245 0.009 27.979 0.000
#> .x3z3 0.297 0.011 28.154 0.000
#> X 0.981 0.036 26.895 0.000
#> .Y 0.990 0.038 25.926 0.000
#> Z 1.016 0.038 26.856 0.000
#> XZ 1.045 0.044 24.004 0.000
By default, the model is estimated using the "dblcent" method. If you want to use another method, you can change it via the method argument.
est1 <- modsem(m1, oneInt, method = "lms")
summary(est1)
#>
#> modsem (version 1.0.11):
#>
#> Estimator LMS
#> Optimization method EMA-NLMINB
#> Number of observations 2000
#> Number of iterations 46
#> Loglikelihood -17493.6
#> Akaike (AIC) 35049.21
#> Bayesian (BIC) 35222.84
#>
#> Numerical Integration:
#> Points of integration (per dim) 24
#> Dimensions 1
#> Total points of integration 24
#>
#> Fit Measures for Baseline Model (H0):
#> Loglikelihood -17831.87
#> Akaike (AIC) 35723.75
#> Bayesian (BIC) 35891.78
#> Chi-square 17.52
#> Degrees of Freedom (Chi-square) 24
#> P-value (Chi-square) 0.826
#> RMSEA 0.000
#>
#> Comparative Fit to H0 (LRT test):
#> Loglikelihood change 338.27
#> Difference test (D) 676.54
#> Degrees of freedom (D) 1
#> P-value (D) 0.000
#>
#> R-Squared Interaction Model (H1):
#> Y 0.599
#> R-Squared Baseline Model (H0):
#> Y 0.395
#> R-Squared Change (H1 - H0):
#> Y 0.204
#>
#> Parameter Estimates:
#> Coefficients unstandardized
#> Information observed
#> Standard errors standard
#>
#> Latent Variables:
#> Estimate Std.Error z.value P(>|z|)
#> X =~
#> x1 1.000
#> x2 0.803 0.013 63.91 0.000
#> x3 0.914 0.013 67.73 0.000
#> Z =~
#> z1 1.000
#> z2 0.810 0.012 65.09 0.000
#> z3 0.881 0.013 67.62 0.000
#> Y =~
#> y1 1.000
#> y2 0.798 0.007 107.55 0.000
#> y3 0.899 0.008 112.58 0.000
#>
#> Regressions:
#> Estimate Std.Error z.value P(>|z|)
#> Y ~
#> X 0.672 0.031 21.66 0.000
#> Z 0.570 0.030 18.75 0.000
#> X:Z 0.718 0.028 25.84 0.000
#>
#> Intercepts:
#> Estimate Std.Error z.value P(>|z|)
#> .x1 1.024 0.024 42.85 0.000
#> .x2 1.217 0.020 60.92 0.000
#> .x3 0.921 0.022 41.45 0.000
#> .z1 1.011 0.024 41.54 0.000
#> .z2 1.205 0.020 59.24 0.000
#> .z3 0.915 0.022 42.03 0.000
#> .y1 1.037 0.033 31.40 0.000
#> .y2 1.220 0.027 45.42 0.000
#> .y3 0.954 0.030 31.80 0.000
#>
#> Covariances:
#> Estimate Std.Error z.value P(>|z|)
#> X ~~
#> Z 0.200 0.024 8.24 0.000
#>
#> Variances:
#> Estimate Std.Error z.value P(>|z|)
#> .x1 0.158 0.009 18.17 0.000
#> .x2 0.162 0.007 23.16 0.000
#> .x3 0.164 0.008 20.76 0.000
#> .z1 0.167 0.009 18.50 0.000
#> .z2 0.160 0.007 22.68 0.000
#> .z3 0.158 0.008 20.78 0.000
#> .y1 0.160 0.009 18.01 0.000
#> .y2 0.154 0.007 22.69 0.000
#> .y3 0.164 0.008 20.68 0.000
#> X 0.981 0.036 27.04 0.000
#> Z 1.018 0.038 26.93 0.000
#> .Y 0.980 0.038 25.94 0.000
Interactions Between Two Observed Variables
modsem allows you to estimate interactions not only between latent variables but also between observed variables. Below, we first run a regression with only observed variables, where there is an interaction between x1 and z1, and then run an equivalent model using modsem().
Using a Regression
reg1 <- lm(y1 ~ x1*z1, oneInt)
summary(reg1)
#>
#> Call:
#> lm(formula = y1 ~ x1 * z1, data = oneInt)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -3.7155 -0.8087 -0.0367 0.8078 4.6531
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 0.51422 0.04618 11.135 <2e-16 ***
#> x1 0.05477 0.03387 1.617 0.1060
#> z1 -0.06575 0.03461 -1.900 0.0576 .
#> x1:z1 0.54361 0.02272 23.926 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 1.184 on 1996 degrees of freedom
#> Multiple R-squared: 0.4714, Adjusted R-squared: 0.4706
#> F-statistic: 593.3 on 3 and 1996 DF, p-value: < 2.2e-16
Using modsem
modsem can also be used to estimate interaction effects between observed variables. Interaction effects between observed variables are not directly supported by the LMS and QML approaches. In most cases, however, you can define a latent variable with a single indicator to estimate the interaction effect between two observed variables with the LMS and QML approaches.
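As a sketch of that single-indicator workaround (the latent variable names X1, Z1, and Y1 are our own; the model is not run in this vignette), the observed-variable interaction could be estimated with the LMS approach like this:

```r
library(modsem)

# Sketch: wrap each observed variable in a single-indicator latent
# variable so the interaction can be estimated via the LMS approach.
m_single <- '
  X1 =~ x1
  Z1 =~ z1
  Y1 =~ y1
  Y1 ~ X1 + Z1 + X1:Z1
'
est_single <- modsem(m_single, data = oneInt, method = "lms")
summary(est_single)
```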
est2 <- modsem('y1 ~ x1 + z1 + x1:z1', data = oneInt, method = "dblcent")
summary(est2)
#> Estimating baseline model (H0)
#> modsem (version 1.0.11, approach = dblcent):
#>
#> Interaction Model Fit Measures (H1):
#> Loglikelihood -3173.58
#> Akaike (AIC) 6355.15
#> Bayesian (BIC) 6377.56
#> Chi-square 0.00
#> Degrees of Freedom 0
#> P-value (Chi-square) NA
#> RMSEA 0.000
#> CFI 1.000
#> SRMR 0.000
#>
#> Fit Measures for Baseline Model (H0):
#> Loglikelihood -3425.74
#> Akaike (AIC) 6857.49
#> Bayesian (BIC) 6874.29
#> Chi-square 504.33
#> Degrees of Freedom 1
#> P-value (Chi-square) 0.000
#> RMSEA 0.502
#> CFI 0.604
#> SRMR 0.123
#>
#> Comparative Fit to H0 (LRT test):
#> Chi-square diff 504.333
#> Degrees of freedom diff 1
#> P-value (LRT) 0.000
#>
#> R-Squared Interaction Model (H1):
#> y1 0.471
#> R-Squared Baseline Model (H0):
#> y1 0.320
#> R-Squared Change (H1 - H0):
#> y1 0.152
#>
#> lavaan 0.6-19 ended normally after 1 iteration
#>
#> Estimator ML
#> Optimization method NLMINB
#> Number of model parameters 4
#>
#> Number of observations 2000
#>
#> Model Test User Model:
#>
#> Test statistic 0.000
#> Degrees of freedom 0
#>
#> Parameter Estimates:
#>
#> Standard errors Standard
#> Information Expected
#> Information saturated (h1) model Structured
#>
#> Regressions:
#> Estimate Std.Err z-value P(>|z|)
#> y1 ~
#> x1 0.605 0.025 24.012 0.000
#> z1 0.490 0.025 19.827 0.000
#> x1z1 0.544 0.023 23.950 0.000
#>
#> Variances:
#> Estimate Std.Err z-value P(>|z|)
#> .y1 1.399 0.044 31.623 0.000
Interactions Between Latent and Observed Variables
modsem also allows you to estimate interaction effects between latent and observed variables. To do so, simply join a latent and an observed variable with a colon (e.g., 'latent:observed'). In most of the product indicator approaches, the residuals of product indicators with common variables (e.g., x1z1 and x1z2) are allowed to covary freely. This is problematic when there is an interaction term between an observed variable (or a latent variable with a single indicator) and a latent variable (with multiple indicators), since all of the product indicators share a common element.
In the example below, all the generated product indicators (x1z1, x2z1, and x3z1) share z1. If all the indicator residuals of a latent variable are allowed to covary freely, the model will (in general) be unidentified. The simplest way to fix this issue is to constrain the residual covariances to zero, using the res.cov.method argument.
m3 <- '
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
# Inner Model
Y ~ X + z1 + X:z1
'
est3 <- modsem(m3, oneInt, method = "dblcent", res.cov.method = "none")
summary(est3)
#> Estimating baseline model (H0)
#> modsem (version 1.0.11, approach = dblcent):
#>
#> Interaction Model Fit Measures (H1):
#> Loglikelihood -18004.00
#> Akaike (AIC) 36052.00
#> Bayesian (BIC) 36175.22
#> Chi-square 105.21
#> Degrees of Freedom 32
#> P-value (Chi-square) 0.000
#> RMSEA 0.034
#> CFI 0.996
#> SRMR 0.055
#>
#> Fit Measures for Baseline Model (H0):
#> Loglikelihood -18307.00
#> Akaike (AIC) 36655.99
#> Bayesian (BIC) 36773.61
#> Chi-square 711.21
#> Degrees of Freedom 33
#> P-value (Chi-square) 0.000
#> RMSEA 0.101
#> CFI 0.967
#> SRMR 0.165
#>
#> Comparative Fit to H0 (LRT test):
#> Chi-square diff 605.997
#> Degrees of freedom diff 1
#> P-value (LRT) 0.000
#>
#> R-Squared Interaction Model (H1):
#> Y 0.525
#> R-Squared Baseline Model (H0):
#> Y 0.333
#> R-Squared Change (H1 - H0):
#> Y 0.192
#>
#> lavaan 0.6-19 ended normally after 44 iterations
#>
#> Estimator ML
#> Optimization method NLMINB
#> Number of model parameters 22
#>
#> Number of observations 2000
#>
#> Model Test User Model:
#>
#> Test statistic 105.208
#> Degrees of freedom 32
#> P-value (Chi-square) 0.000
#>
#> Parameter Estimates:
#>
#> Standard errors Standard
#> Information Expected
#> Information saturated (h1) model Structured
#>
#> Latent Variables:
#> Estimate Std.Err z-value P(>|z|)
#> X =~
#> x1 1.000
#> x2 0.804 0.013 63.560 0.000
#> x3 0.916 0.014 67.103 0.000
#> Y =~
#> y1 1.000
#> y2 0.798 0.008 103.776 0.000
#> y3 0.899 0.008 108.502 0.000
#> Xz1 =~
#> x1z1 1.000
#> x2z1 0.804 0.012 66.809 0.000
#> x3z1 0.878 0.013 68.890 0.000
#>
#> Regressions:
#> Estimate Std.Err z-value P(>|z|)
#> Y ~
#> X 0.699 0.027 26.192 0.000
#> z1 0.487 0.023 21.289 0.000
#> Xz1 0.620 0.024 25.927 0.000
#>
#> Covariances:
#> Estimate Std.Err z-value P(>|z|)
#> X ~~
#> Xz1 0.001 0.026 0.022 0.982
#>
#> Variances:
#> Estimate Std.Err z-value P(>|z|)
#> .x1 0.160 0.009 17.802 0.000
#> .x2 0.162 0.007 22.955 0.000
#> .x3 0.163 0.008 20.110 0.000
#> .y1 0.158 0.009 17.724 0.000
#> .y2 0.154 0.007 22.593 0.000
#> .y3 0.164 0.008 20.670 0.000
#> .x1z1 0.157 0.010 15.792 0.000
#> .x2z1 0.190 0.008 23.007 0.000
#> .x3z1 0.199 0.009 21.617 0.000
#> X 0.981 0.036 26.895 0.000
#> .Y 1.107 0.040 27.402 0.000
#> Xz1 1.203 0.044 27.606 0.000
Quadratic Effects
Quadratic effects are essentially a special case of interaction effects (a variable interacting with itself). Thus, modsem can also be used to estimate quadratic effects.
m4 <- '
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3
# Inner Model
Y ~ X + Z + Z:X + X:X
'
est4 <- modsem(m4, oneInt, method = "qml")
summary(est4)
#>
#> modsem (version 1.0.11):
#>
#> Estimator QML
#> Optimization method NLMINB
#> Number of observations 2000
#> Number of iterations 110
#> Loglikelihood -17501.57
#> Akaike (AIC) 35067.14
#> Bayesian (BIC) 35246.37
#>
#> Fit Measures for Baseline Model (H0):
#> Loglikelihood -17831.87
#> Akaike (AIC) 35723.75
#> Bayesian (BIC) 35891.78
#> Chi-square 17.52
#> Degrees of Freedom (Chi-square) 24
#> P-value (Chi-square) 0.826
#> RMSEA 0.000
#>
#> Comparative Fit to H0 (LRT test):
#> Loglikelihood change 330.30
#> Difference test (D) 660.61
#> Degrees of freedom (D) 2
#> P-value (D) 0.000
#>
#> R-Squared Interaction Model (H1):
#> Y 0.617
#> R-Squared Baseline Model (H0):
#> Y 0.395
#> R-Squared Change (H1 - H0):
#> Y 0.222
#>
#> Parameter Estimates:
#> Coefficients unstandardized
#> Information observed
#> Standard errors standard
#>
#> Latent Variables:
#> Estimate Std.Error z.value P(>|z|)
#> X =~
#> x1 1.000
#> x2 0.803 0.013 63.96 0.000
#> x3 0.913 0.013 67.77 0.000
#> Z =~
#> z1 1.000
#> z2 0.810 0.012 65.15 0.000
#> z3 0.881 0.013 67.67 0.000
#> Y =~
#> y1 1.000
#> y2 0.798 0.007 107.55 0.000
#> y3 0.899 0.008 112.52 0.000
#>
#> Regressions:
#> Estimate Std.Error z.value P(>|z|)
#> Y ~
#> X 0.673 0.031 21.89 0.000
#> Z 0.566 0.030 18.80 0.000
#> X:X -0.006 0.021 -0.28 0.779
#> X:Z 0.715 0.028 25.48 0.000
#>
#> Intercepts:
#> Estimate Std.Error z.value P(>|z|)
#> .x1 1.023 0.024 42.84 0.000
#> .x2 1.216 0.020 60.92 0.000
#> .x3 0.920 0.022 41.44 0.000
#> .z1 1.012 0.024 41.58 0.000
#> .z2 1.206 0.020 59.27 0.000
#> .z3 0.916 0.022 42.06 0.000
#> .y1 1.043 0.038 27.46 0.000
#> .y2 1.225 0.031 39.83 0.000
#> .y3 0.959 0.034 27.88 0.000
#>
#> Covariances:
#> Estimate Std.Error z.value P(>|z|)
#> X ~~
#> Z 0.201 0.024 8.24 0.000
#>
#> Variances:
#> Estimate Std.Error z.value P(>|z|)
#> .x1 0.158 0.009 18.12 0.000
#> .x2 0.162 0.007 23.17 0.000
#> .x3 0.165 0.008 20.83 0.000
#> .z1 0.166 0.009 18.46 0.000
#> .z2 0.160 0.007 22.70 0.000
#> .z3 0.158 0.008 20.81 0.000
#> .y1 0.159 0.009 17.98 0.000
#> .y2 0.154 0.007 22.66 0.000
#> .y3 0.164 0.008 20.69 0.000
#> X 0.983 0.036 26.99 0.000
#> Z 1.018 0.038 26.94 0.000
#> .Y 0.901 0.040 22.69 0.000
More Complicated Examples
Here is a more complex example using the theory of planned behavior (TPB) model.
tpb <- '
# Outer Model (Based on Hagger et al., 2007)
ATT =~ att1 + att2 + att3 + att4 + att5
SN =~ sn1 + sn2
PBC =~ pbc1 + pbc2 + pbc3
INT =~ int1 + int2 + int3
BEH =~ b1 + b2
# Inner Model (Based on Steinmetz et al., 2011)
INT ~ ATT + SN + PBC
BEH ~ INT + PBC + INT:PBC
'
# The double-centering approach
est_tpb <- modsem(tpb, TPB)
# Using the LMS approach
est_tpb_lms <- modsem(tpb, TPB, method = "lms")
#> Warning: It is recommended that you have at least 32 nodes for interaction
#> effects between exogenous and endogenous variables in the lms approach 'nodes =
#> 24'
summary(est_tpb_lms)
#>
#> modsem (version 1.0.11):
#>
#> Estimator LMS
#> Optimization method EMA-NLMINB
#> Number of observations 2000
#> Number of iterations 28
#> Loglikelihood -26326.1
#> Akaike (AIC) 52760.2
#> Bayesian (BIC) 53062.65
#>
#> Numerical Integration:
#> Points of integration (per dim) 24
#> Dimensions 1
#> Total points of integration 24
#>
#> Fit Measures for Baseline Model (H0):
#> Loglikelihood -26393.22
#> Akaike (AIC) 52892.45
#> Bayesian (BIC) 53189.29
#> Chi-square 66.27
#> Degrees of Freedom (Chi-square) 82
#> P-value (Chi-square) 0.897
#> RMSEA 0.000
#>
#> Comparative Fit to H0 (LRT test):
#> Loglikelihood change 67.12
#> Difference test (D) 134.24
#> Degrees of freedom (D) 1
#> P-value (D) 0.000
#>
#> R-Squared Interaction Model (H1):
#> INT 0.366
#> BEH 0.263
#> R-Squared Baseline Model (H0):
#> INT 0.367
#> BEH 0.210
#> R-Squared Change (H1 - H0):
#> INT -0.001
#> BEH 0.053
#>
#> Parameter Estimates:
#> Coefficients unstandardized
#> Information observed
#> Standard errors standard
#>
#> Latent Variables:
#> Estimate Std.Error z.value P(>|z|)
#> PBC =~
#> pbc1 1.000
#> pbc2 0.912 0.013 69.44 0.000
#> pbc3 0.801 0.012 66.07 0.000
#> ATT =~
#> att1 1.000
#> att2 0.878 0.012 71.56 0.000
#> att3 0.789 0.012 66.38 0.000
#> att4 0.695 0.011 61.00 0.000
#> att5 0.887 0.013 70.85 0.000
#> SN =~
#> sn1 1.000
#> sn2 0.888 0.017 52.60 0.000
#> INT =~
#> int1 1.000
#> int2 0.913 0.015 59.06 0.000
#> int3 0.807 0.014 55.74 0.000
#> BEH =~
#> b1 1.000
#> b2 0.960 0.030 31.76 0.000
#>
#> Regressions:
#> Estimate Std.Error z.value P(>|z|)
#> INT ~
#> PBC 0.217 0.030 7.34 0.000
#> ATT 0.213 0.026 8.17 0.000
#> SN 0.176 0.028 6.39 0.000
#> BEH ~
#> PBC 0.233 0.022 10.41 0.000
#> INT 0.190 0.025 7.70 0.000
#> PBC:INT 0.205 0.018 11.31 0.000
#>
#> Intercepts:
#> Estimate Std.Error z.value P(>|z|)
#> .pbc1 1.000 0.024 42.36 0.000
#> .pbc2 0.987 0.022 44.88 0.000
#> .pbc3 0.993 0.020 50.38 0.000
#> .att1 1.015 0.024 41.97 0.000
#> .att2 1.008 0.021 46.93 0.000
#> .att3 1.017 0.020 51.41 0.000
#> .att4 0.999 0.018 55.61 0.000
#> .att5 0.993 0.022 45.63 0.000
#> .sn1 1.007 0.024 41.64 0.000
#> .sn2 1.011 0.022 46.68 0.000
#> .int1 1.014 0.022 46.94 0.000
#> .int2 1.013 0.020 50.38 0.000
#> .int3 1.006 0.018 54.78 0.000
#> .b1 1.002 0.021 46.99 0.000
#> .b2 1.020 0.020 51.07 0.000
#>
#> Covariances:
#> Estimate Std.Error z.value P(>|z|)
#> PBC ~~
#> ATT 0.678 0.029 23.64 0.000
#> SN 0.677 0.029 23.25 0.000
#> ATT ~~
#> SN 0.629 0.029 21.90 0.000
#>
#> Variances:
#> Estimate Std.Error z.value P(>|z|)
#> .pbc1 0.144 0.008 18.37 0.000
#> .pbc2 0.160 0.007 21.44 0.000
#> .pbc3 0.155 0.006 23.88 0.000
#> .att1 0.167 0.007 23.54 0.000
#> .att2 0.150 0.006 24.72 0.000
#> .att3 0.159 0.006 26.39 0.000
#> .att4 0.162 0.006 27.65 0.000
#> .att5 0.159 0.006 24.93 0.000
#> .sn1 0.178 0.015 12.09 0.000
#> .sn2 0.157 0.012 13.25 0.000
#> .int1 0.157 0.009 18.11 0.000
#> .int2 0.160 0.008 20.41 0.000
#> .int3 0.168 0.007 23.55 0.000
#> .b1 0.186 0.018 10.12 0.000
#> .b2 0.135 0.017 8.14 0.000
#> PBC 0.962 0.035 27.16 0.000
#> ATT 0.998 0.037 27.10 0.000
#> SN 0.987 0.039 25.38 0.000
#> .INT 0.491 0.020 24.65 0.000
#> .BEH 0.455 0.023 20.14 0.000
Here is an example that includes two quadratic effects and one interaction effect, using the jordan dataset. The dataset is a subset of the PISA 2006 dataset.
m2 <- '
ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
CAREER =~ career1 + career2 + career3 + career4
SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
'
est_jordan <- modsem(m2, data = jordan)
est_jordan_qml <- modsem(m2, data = jordan, method = "qml")
#> Warning: Standard errors for some coefficients could not be computed.
#> Warning: The variance-covariance matrix of the estimated parameters
#> (vcov) does not appear to be positive definite! The smallest
#> eigenvalue (= -7.886871e-04) is smaller than zero. This may
#> be a symptom that the model is not identified.
summary(est_jordan_qml)
#>
#> modsem (version 1.0.11):
#>
#> Estimator QML
#> Optimization method NLMINB
#> Number of observations 6038
#> Number of iterations 55
#> Loglikelihood -110516.71
#> Akaike (AIC) 221135.42
#> Bayesian (BIC) 221477.42
#>
#> Fit Measures for Baseline Model (H0):
#> Loglikelihood -110521.29
#> Akaike (AIC) 221138.58
#> Bayesian (BIC) 221460.46
#> Chi-square 1016.34
#> Degrees of Freedom (Chi-square) 87
#> P-value (Chi-square) 0.000
#> RMSEA 0.042
#>
#> Comparative Fit to H0 (LRT test):
#> Loglikelihood change 4.58
#> Difference test (D) 9.16
#> Degrees of freedom (D) 3
#> P-value (D) 0.027
#>
#> R-Squared Interaction Model (H1):
#> CAREER 0.524
#> R-Squared Baseline Model (H0):
#> CAREER 0.510
#> R-Squared Change (H1 - H0):
#> CAREER 0.014
#>
#> Parameter Estimates:
#> Coefficients unstandardized
#> Information observed
#> Standard errors standard
#>
#> Latent Variables:
#> Estimate Std.Error z.value P(>|z|)
#> ENJ =~
#> enjoy1 1.000
#> enjoy2 1.002 0.020 50.553 0.000
#> enjoy3 0.894 0.020 43.654 0.000
#> enjoy4 1.000 0.021 48.220 0.000
#> enjoy5 1.047 0.021 50.375 0.000
#> SC =~
#> academic1 1.000
#> academic2 1.105 0.028 38.935 0.000
#> academic3 1.235 0.030 41.700 0.000
#> academic4 1.254 0.030 41.808 0.000
#> academic5 1.114 0.029 38.631 0.000
#> academic6 1.199 0.030 40.348 0.000
#> CAREER =~
#> career1 1.000
#> career2 1.040 0.016 65.181 0.000
#> career3 0.952 0.016 57.860 0.000
#> career4 0.818 0.017 48.364 0.000
#>
#> Regressions:
#> Estimate Std.Error z.value P(>|z|)
#> CAREER ~
#> ENJ 0.543 0.021 26.139 0.000
#> SC 0.461 0.023 19.650 0.000
#> ENJ:ENJ 0.051 0.020 2.523 0.012
#> ENJ:SC -0.015 0.038 -0.407 0.684
#> SC:SC -0.004 0.030 -0.124 0.901
#>
#> Intercepts:
#> Estimate Std.Error z.value P(>|z|)
#> .enjoy1 0.000
#> .enjoy2 0.000 0.011 0.014 0.989
#> .enjoy3 0.000 0.011 -0.027 0.979
#> .enjoy4 0.000 0.011 0.010 0.992
#> .enjoy5 0.000 0.011 0.034 0.973
#> .academic1 0.000 0.012 -0.012 0.991
#> .academic2 0.000
#> .academic3 0.000 0.008 -0.053 0.957
#> .academic4 0.000 0.007 -0.031 0.975
#> .academic5 -0.001 0.009 -0.066 0.947
#> .academic6 0.001 0.009 0.080 0.936
#> .career1 -0.021 0.015 -1.344 0.179
#> .career2 -0.022 0.015 -1.423 0.155
#> .career3 -0.020 0.015 -1.313 0.189
#> .career4 -0.018 0.015 -1.216 0.224
#>
#> Covariances:
#> Estimate Std.Error z.value P(>|z|)
#> ENJ ~~
#> SC 0.217 0.009 25.472 0.000
#>
#> Variances:
#> Estimate Std.Error z.value P(>|z|)
#> .enjoy1 0.487 0.011 44.355 0.000
#> .enjoy2 0.489 0.011 44.445 0.000
#> .enjoy3 0.596 0.012 48.252 0.000
#> .enjoy4 0.487 0.011 44.571 0.000
#> .enjoy5 0.442 0.010 42.491 0.000
#> .academic1 0.645 0.013 49.811 0.000
#> .academic2 0.566 0.012 47.857 0.000
#> .academic3 0.473 0.011 44.316 0.000
#> .academic4 0.455 0.010 43.576 0.000
#> .academic5 0.565 0.012 47.690 0.000
#> .academic6 0.501 0.011 45.425 0.000
#> .career1 0.374 0.009 40.414 0.000
#> .career2 0.328 0.009 37.034 0.000
#> .career3 0.436 0.010 43.268 0.000
#> .career4 0.576 0.012 48.379 0.000
#> ENJ 0.499 0.017 29.535 0.000
#> SC 0.338 0.015 23.179 0.000
#> .CAREER 0.299 0.010 29.423 0.000