Quadratic Effects and Interaction Effects
Quadratic effects are essentially a special case of interaction
effects: a variable interacting with itself. As such, all of the
methods in modsem can also be used to estimate quadratic
effects.
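To see why a quadratic term is just a self-interaction, consider an ordinary regression on observed variables: the product of a variable with itself is its square. A minimal base-R sketch with simulated data (illustrative only, not part of modsem):

```r
# Simulated data: y depends on x both linearly and quadratically
set.seed(1)
x <- rnorm(200)
y <- 0.5 * x + 0.3 * x^2 + rnorm(200)

# A "quadratic effect" is the product of a variable with itself ...
fit_prod <- lm(y ~ x + I(x * x))
# ... which is identical to including the squared term directly
fit_sq   <- lm(y ~ x + I(x^2))

# Both formulations yield the same coefficients
all.equal(unname(coef(fit_prod)), unname(coef(fit_sq)))
```

The same logic carries over to the latent level, where `X:X` in the model syntax plays the role of `I(x^2)`.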
Below is a simple example using the LMS approach.
library(modsem)
m1 <- '
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3
# Inner model
Y ~ X + Z + Z:X + X:X
'
est1_lms <- modsem(m1, data = oneInt, method = "lms")
summary(est1_lms)
#> 
#> modsem (1.0.14) ended normally after 16 iterations
#> 
#>   Estimator                                        LMS
#>   Optimization method                       EMA-NLMINB
#>   Number of model parameters                        32
#> 
#>   Number of observations                          2000
#> 
#> Loglikelihood and Information Criteria:
#>   Loglikelihood                              -17493.58
#>   Akaike (AIC)                                35051.16
#>   Bayesian (BIC)                              35230.39
#>  
#> Fit Measures for Baseline Model (H0):
#>                                               Standard
#>   Chi-square                                     17.52
#>   Degrees of Freedom (Chi-square)                   24
#>   P-value (Chi-square)                           0.826
#>   RMSEA                                          0.000
#>                                                       
#>   Loglikelihood                              -17831.87
#>   Akaike (AIC)                                35723.75
#>   Bayesian (BIC)                              35891.78
#>  
#> Comparative Fit to H0 (LRT test):
#>   Loglikelihood change                          338.29
#>   Difference test (D)                           676.59
#>   Degrees of freedom (D)                             2
#>   P-value (D)                                    0.000
#>  
#> R-Squared Interaction Model (H1):
#>   Y                                              0.598
#> R-Squared Baseline Model (H0):
#>   Y                                              0.395
#> R-Squared Change (H1 - H0):
#>   Y                                              0.203
#> 
#> Parameter Estimates:
#>   Coefficients                          unstandardized
#>   Information                                 observed
#>   Standard errors                             standard
#>  
#> Latent Variables:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   X =~          
#>     x1              1.000                             
#>     x2              0.803      0.013   63.813    0.000
#>     x3              0.914      0.014   67.614    0.000
#>   Z =~          
#>     z1              1.000                             
#>     z2              0.810      0.012   65.087    0.000
#>     z3              0.881      0.013   67.616    0.000
#>   Y =~          
#>     y1              1.000                             
#>     y2              0.798      0.007  107.545    0.000
#>     y3              0.899      0.008  112.580    0.000
#> 
#> Regressions:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   Y ~           
#>     X               0.672      0.031   21.633    0.000
#>     Z               0.568      0.030   18.688    0.000
#>     X:X            -0.004      0.021   -0.197    0.844
#>     Z:X             0.720      0.029   24.856    0.000
#> 
#> Intercepts:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .x1              1.022      0.024   42.767    0.000
#>    .x2              1.215      0.020   60.836    0.000
#>    .x3              0.919      0.022   41.363    0.000
#>    .z1              1.011      0.024   41.540    0.000
#>    .z2              1.205      0.020   59.236    0.000
#>    .z3              0.915      0.022   42.027    0.000
#>    .y1              1.039      0.038   27.453    0.000
#>    .y2              1.222      0.031   39.867    0.000
#>    .y3              0.956      0.034   27.873    0.000
#> 
#> Covariances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   X ~~          
#>     Z               0.200      0.024    8.242    0.000
#> 
#> Variances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .x1              0.158      0.009   18.170    0.000
#>    .x2              0.162      0.007   23.156    0.000
#>    .x3              0.164      0.008   20.758    0.000
#>    .z1              0.167      0.009   18.509    0.000
#>    .z2              0.160      0.007   22.683    0.000
#>    .z3              0.158      0.008   20.780    0.000
#>    .y1              0.160      0.009   18.012    0.000
#>    .y2              0.154      0.007   22.686    0.000
#>    .y3              0.164      0.008   20.682    0.000
#>     X               0.981      0.036   26.974    0.000
#>     Z               1.017      0.038   26.932    0.000
#>    .Y               0.980      0.038   25.929    0.000

In the next example, we have a model with two quadratic effects
and one interaction effect. We estimate the model using both the
QML and double-centering approaches, with data from a
subset of the PISA 2006 dataset.
m2 <- '
ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
CAREER =~ career1 + career2 + career3 + career4
SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
'
est2_dca <- modsem(m2, data = jordan)
est2_qml <- modsem(m2, data = jordan, method = "qml")
#> Warning: The variance-covariance matrix of the estimated parameters
#> (vcov) does not appear to be positive definite! The smallest
#> eigenvalue (= -3.811339e-05) is smaller than zero. This may
#> be a symptom that the model is not identified.
summary(est2_qml)
#> 
#> modsem (1.0.14) ended normally after 61 iterations
#> 
#>   Estimator                                        QML
#>   Optimization method                           NLMINB
#>   Number of model parameters                        51
#> 
#>   Number of observations                          6038
#> 
#> Loglikelihood and Information Criteria:
#>   Loglikelihood                             -110519.99
#>   Akaike (AIC)                               221141.99
#>   Bayesian (BIC)                             221483.98
#>  
#> Fit Measures for Baseline Model (H0):
#>                                               Standard
#>   Chi-square                                   1016.34
#>   Degrees of Freedom (Chi-square)                   87
#>   P-value (Chi-square)                           0.000
#>   RMSEA                                          0.042
#>                                                       
#>   Loglikelihood                             -110521.29
#>   Akaike (AIC)                               221138.58
#>   Bayesian (BIC)                             221460.46
#>  
#> Comparative Fit to H0 (LRT test):
#>   Loglikelihood change                            1.30
#>   Difference test (D)                             2.59
#>   Degrees of freedom (D)                             3
#>   P-value (D)                                    0.459
#>  
#> R-Squared Interaction Model (H1):
#>   CAREER                                         0.513
#> R-Squared Baseline Model (H0):
#>   CAREER                                         0.510
#> R-Squared Change (H1 - H0):
#>   CAREER                                         0.003
#> 
#> Parameter Estimates:
#>   Coefficients                          unstandardized
#>   Information                                 observed
#>   Standard errors                             standard
#>  
#> Latent Variables:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ =~        
#>     enjoy1          1.000                             
#>     enjoy2          1.002      0.020   50.577    0.000
#>     enjoy3          0.894      0.020   43.667    0.000
#>     enjoy4          0.999      0.021   48.222    0.000
#>     enjoy5          1.047      0.021   50.394    0.000
#>   SC =~         
#>     academic1       1.000                             
#>     academic2       1.104      0.028   38.946    0.000
#>     academic3       1.235      0.030   41.721    0.000
#>     academic4       1.254      0.030   41.830    0.000
#>     academic5       1.113      0.029   38.649    0.000
#>     academic6       1.198      0.030   40.358    0.000
#>   CAREER =~     
#>     career1         1.000                             
#>     career2         1.040      0.016   65.181    0.000
#>     career3         0.952      0.016   57.840    0.000
#>     career4         0.818      0.017   48.360    0.000
#> 
#> Regressions:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   CAREER ~      
#>     ENJ             0.526      0.020   26.308    0.000
#>     SC              0.465      0.023   20.113    0.000
#>     ENJ:ENJ         0.029      0.021    1.373    0.170
#>     ENJ:SC         -0.046      0.043   -1.083    0.279
#>     SC:SC           0.002      0.033    0.049    0.961
#> 
#> Intercepts:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .enjoy1          0.000      0.010   -0.022    0.983
#>    .enjoy2          0.000      0.001    0.043    0.966
#>    .enjoy3          0.000      0.011   -0.037    0.970
#>    .enjoy4          0.000      0.001   -0.063    0.950
#>    .enjoy5          0.000      0.010    0.021    0.984
#>    .academic1       0.000      0.012   -0.018    0.986
#>    .academic2       0.000      0.012   -0.019    0.985
#>    .academic3      -0.001      0.012   -0.042    0.966
#>    .academic4       0.000      0.012   -0.029    0.977
#>    .academic5      -0.001      0.012   -0.060    0.952
#>    .academic6       0.001      0.012    0.045    0.964
#>    .career1        -0.005      0.016   -0.307    0.759
#>    .career2        -0.006      0.016   -0.357    0.721
#>    .career3        -0.005      0.016   -0.312    0.755
#>    .career4        -0.005      0.015   -0.321    0.748
#> 
#> Covariances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ ~~        
#>     SC              0.218      0.009   25.478    0.000
#> 
#> Variances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .enjoy1          0.487      0.011   44.350    0.000
#>    .enjoy2          0.489      0.011   44.420    0.000
#>    .enjoy3          0.596      0.012   48.234    0.000
#>    .enjoy4          0.488      0.011   44.569    0.000
#>    .enjoy5          0.442      0.010   42.478    0.000
#>    .academic1       0.644      0.013   49.812    0.000
#>    .academic2       0.566      0.012   47.864    0.000
#>    .academic3       0.473      0.011   44.319    0.000
#>    .academic4       0.455      0.010   43.581    0.000
#>    .academic5       0.565      0.012   47.687    0.000
#>    .academic6       0.502      0.011   45.439    0.000
#>    .career1         0.373      0.009   40.396    0.000
#>    .career2         0.328      0.009   37.016    0.000
#>    .career3         0.436      0.010   43.273    0.000
#>    .career4         0.576      0.012   48.375    0.000
#>     ENJ             0.500      0.017   29.547    0.000
#>     SC              0.338      0.015   23.194    0.000
#>    .CAREER          0.302      0.010   29.725    0.000

NOTE: We can also use the LMS approach to estimate
this model, but it will be considerably slower, since we have to integrate
along both ENJ and SC. In the first example it
is sufficient to integrate along X alone, but the addition
of the SC:SC term means that we have to explicitly model
SC as a moderator as well. By default, this means integrating
along 24^2 = 576 quadrature nodes, which slows down the
optimization process and, even more dramatically, the computation
of the standard errors. To speed up estimation, we can
reduce the number of quadrature nodes. Additionally, we can pass
mean.observed = FALSE, constraining the intercepts of the
indicators to zero.
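The quadratic growth in the size of the quadrature grid is easy to see: with two latent dimensions to integrate along and 24 nodes per dimension, the grid has 24^2 = 576 points, whereas 15 nodes per dimension gives 15^2 = 225, about 60% fewer (illustrative arithmetic only):

```r
# Default: 24 nodes per latent dimension to integrate along
24^2  # grid points with two dimensions (ENJ and SC)

# Reduced: 15 nodes per dimension
15^2  # grid points
```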
m2 <- '
ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
CAREER =~ career1 + career2 + career3 + career4
SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
'
est2_lms <- modsem(m2, data = jordan, method = "lms",
                   nodes = 15, mean.observed = FALSE)
summary(est2_lms)
#> 
#> modsem (1.0.14) ended normally after 22 iterations
#> 
#>   Estimator                                        LMS
#>   Optimization method                       EMA-NLMINB
#>   Number of model parameters                        37
#> 
#>   Number of observations                          6038
#> 
#> Loglikelihood and Information Criteria:
#>   Loglikelihood                             -110520.02
#>   Akaike (AIC)                               221114.03
#>   Bayesian (BIC)                             221362.15
#>  
#> Fit Measures for Baseline Model (H0):
#>                                               Standard
#>   Chi-square                                   1016.34
#>   Degrees of Freedom (Chi-square)                   87
#>   P-value (Chi-square)                           0.000
#>   RMSEA                                          0.042
#>                                                       
#>   Loglikelihood                             -110521.29
#>   Akaike (AIC)                               221108.58
#>   Bayesian (BIC)                             221329.87
#>  
#> Comparative Fit to H0 (LRT test):
#>   Loglikelihood change                            1.27
#>   Difference test (D)                             2.55
#>   Degrees of freedom (D)                             4
#>   P-value (D)                                    0.636
#>  
#> R-Squared Interaction Model (H1):
#>   CAREER                                         0.512
#> R-Squared Baseline Model (H0):
#>   CAREER                                         0.510
#> R-Squared Change (H1 - H0):
#>   CAREER                                         0.002
#> 
#> Parameter Estimates:
#>   Coefficients                          unstandardized
#>   Information                                 observed
#>   Standard errors                             standard
#>  
#> Latent Variables:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ =~        
#>     enjoy1          1.000                             
#>     enjoy2          1.002      0.020   50.379    0.000
#>     enjoy3          0.894      0.021   43.536    0.000
#>     enjoy4          0.998      0.021   48.014    0.000
#>     enjoy5          1.047      0.021   50.180    0.000
#>   SC =~         
#>     academic1       1.000                             
#>     academic2       1.105      0.029   38.558    0.000
#>     academic3       1.236      0.030   41.239    0.000
#>     academic4       1.255      0.030   41.331    0.000
#>     academic5       1.114      0.029   38.257    0.000
#>     academic6       1.199      0.030   39.900    0.000
#>   CAREER =~     
#>     career1         1.000                             
#>     career2         1.040      0.016   65.193    0.000
#>     career3         0.952      0.016   57.847    0.000
#>     career4         0.818      0.017   48.365    0.000
#> 
#> Regressions:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   CAREER ~      
#>     ENJ             0.525      0.020   26.398    0.000
#>     SC              0.465      0.023   20.167    0.000
#>     ENJ:ENJ         0.028      0.021    1.321    0.186
#>     ENJ:SC         -0.049      0.042   -1.158    0.247
#>     SC:SC           0.003      0.032    0.081    0.935
#> 
#> Intercepts:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .CAREER         -0.004      0.013   -0.330    0.741
#> 
#> Covariances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ ~~        
#>     SC              0.216      0.009   25.383    0.000
#> 
#> Variances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .enjoy1          0.487      0.011   44.349    0.000
#>    .enjoy2          0.488      0.011   44.393    0.000
#>    .enjoy3          0.596      0.012   48.227    0.000
#>    .enjoy4          0.488      0.011   44.630    0.000
#>    .enjoy5          0.442      0.010   42.529    0.000
#>    .academic1       0.645      0.013   49.799    0.000
#>    .academic2       0.566      0.012   47.870    0.000
#>    .academic3       0.473      0.011   44.311    0.000
#>    .academic4       0.455      0.010   43.588    0.000
#>    .academic5       0.565      0.012   47.685    0.000
#>    .academic6       0.502      0.011   45.432    0.000
#>    .career1         0.373      0.009   40.396    0.000
#>    .career2         0.328      0.009   37.021    0.000
#>    .career3         0.436      0.010   43.282    0.000
#>    .career4         0.576      0.012   48.380    0.000
#>     ENJ             0.498      0.017   29.579    0.000
#>     SC              0.337      0.015   22.823    0.000
#>    .CAREER          0.302      0.010   29.736    0.000