Quadratic Effects and Interaction Effects

Quadratic effects are essentially a special case of interaction effects—where a variable interacts with itself. As such, all of the methods in modsem can also be used to estimate quadratic effects.
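To see why a quadratic term is just a self-interaction, consider an ordinary regression with observed variables (this toy example is not part of the modsem workflow, just an illustration): the model matrix column for `I(x^2)` is identical to the one for `I(x * x)`, so both fits yield the same coefficients.

```r
set.seed(1)
x <- rnorm(200)
y <- 0.5 * x + 0.3 * x^2 + rnorm(200)

# A quadratic term is literally x interacting with itself:
fit_sq  <- lm(y ~ x + I(x^2))    # quadratic specification
fit_int <- lm(y ~ x + I(x * x))  # "x times x" interaction specification
all.equal(unname(coef(fit_sq)), unname(coef(fit_int)))  # TRUE
```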

Below is a simple example using the LMS approach.

library(modsem)
m1 <- '
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3

# Inner model
Y ~ X + Z + Z:X + X:X
'

est1_lms <- modsem(m1, data = oneInt, method = "lms")
summary(est1_lms)
#> Estimating baseline model (H0)
#> 
#> modsem (version 1.0.11):
#> 
#>   Estimator                                         LMS
#>   Optimization method                        EMA-NLMINB
#>   Number of observations                           2000
#>   Number of iterations                               43
#>   Loglikelihood                                -14687.6
#>   Akaike (AIC)                                 29439.21
#>   Bayesian (BIC)                               29618.43
#>  
#> Numerical Integration:
#>   Points of integration (per dim)                    24
#>   Dimensions                                          1
#>   Total points of integration                        24
#>  
#> Fit Measures for Baseline Model (H0):
#>   Loglikelihood                                  -17832
#>   Akaike (AIC)                                 35723.75
#>   Bayesian (BIC)                               35891.78
#>   Chi-square                                      17.52
#>   Degrees of Freedom (Chi-square)                    24
#>   P-value (Chi-square)                            0.826
#>   RMSEA                                           0.000
#>  
#> Comparative Fit to H0 (LRT test):
#>   Loglikelihood change                          3144.27
#>   Difference test (D)                           6288.54
#>   Degrees of freedom (D)                              2
#>   P-value (D)                                     0.000
#>  
#> R-Squared Interaction Model (H1):
#>   Y                                               0.595
#> R-Squared Baseline Model (H0):
#>   Y                                               0.395
#> R-Squared Change (H1 - H0):
#>   Y                                               0.199
#> 
#> Parameter Estimates:
#>   Coefficients                           unstandardized
#>   Information                                  observed
#>   Standard errors                              standard
#>  
#> Latent Variables:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   X =~ 
#>     x1               1.000                             
#>     x2               0.804      0.012   64.321    0.000
#>     x3               0.915      0.013   67.946    0.000
#>   Z =~ 
#>     z1               1.000                             
#>     z2               0.810      0.012   65.093    0.000
#>     z3               0.881      0.013   67.607    0.000
#>   Y =~ 
#>     y1               1.000                             
#>     y2               0.798      0.007  107.545    0.000
#>     y3               0.899      0.008  112.576    0.000
#> 
#> Regressions:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   Y ~ 
#>     X                0.672      0.031   21.680    0.000
#>     Z                0.569      0.030   19.028    0.000
#>     X:X             -0.005      0.021   -0.258    0.796
#>     X:Z              0.715      0.029   24.732    0.000
#> 
#> Intercepts:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>     x1               1.022      0.021   47.661    0.000
#>     x2               1.215      0.018   67.203    0.000
#>     x3               0.919      0.020   45.924    0.000
#>     z1               1.011      0.024   41.694    0.000
#>     z2               1.205      0.020   59.449    0.000
#>     z3               0.915      0.022   42.181    0.000
#>     y1               1.041      0.037   28.058    0.000
#>     y2               1.223      0.030   40.701    0.000
#>     y3               0.957      0.034   28.478    0.000
#> 
#> Covariances:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   X ~~ 
#>     Z                0.199      0.024    8.323    0.000
#> 
#> Variances:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>     x1               0.160      0.008   19.299    0.000
#>     x2               0.163      0.007   23.886    0.000
#>     x3               0.165      0.008   21.226    0.000
#>     z1               0.167      0.009   18.477    0.000
#>     z2               0.160      0.007   22.672    0.000
#>     z3               0.158      0.008   20.798    0.000
#>     y1               0.160      0.009   18.011    0.000
#>     y2               0.154      0.007   22.680    0.000
#>     y3               0.164      0.008   20.680    0.000
#>     X                0.972      0.033   29.873    0.000
#>     Z                1.017      0.038   26.946    0.000
#>     Y                0.984      0.038   25.985    0.000
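With both `X:X` and `X:Z` in the model, the marginal effect of X on Y is no longer a single number: differentiating the structural equation gives dY/dX = b_X + 2·b_XX·X + b_XZ·Z. The sketch below plugs in the unstandardized coefficients printed above (typed in by hand, so they carry the rounding of the summary) to compute simple slopes of X at the mean of Z and at one standard deviation above and below it.

```r
# Coefficients copied from the summary above (unstandardized, rounded)
b_x  <-  0.672  # Y ~ X
b_xx <- -0.005  # Y ~ X:X
b_xz <-  0.715  # Y ~ X:Z

# Marginal effect of X on Y: dY/dX = b_x + 2*b_xx*X + b_xz*Z
slope <- function(X, Z) b_x + 2 * b_xx * X + b_xz * Z

sd_z <- sqrt(1.017)  # SD of Z, from the variance estimate above

slope(0, -sd_z)  # effect of X one SD below the mean of Z, approx. -0.049
slope(0,  0)     # effect of X at the mean of Z: 0.672
slope(0,  sd_z)  # effect of X one SD above the mean of Z, approx. 1.393
```

Since `b_xx` is essentially zero (and nonsignificant), the slope barely changes across values of X itself; the variation is driven almost entirely by the `X:Z` interaction.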

The next example is a model with two quadratic effects and one interaction effect. We estimate it using both the QML and double-centering approaches, with data from a subset of the PISA 2006 dataset.

m2 <- '
ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
CAREER =~ career1 + career2 + career3 + career4
SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
'

est2_dca <- modsem(m2, data = jordan)
est2_qml <- modsem(m2, data = jordan, method = "qml")
#> Warning: SE's for some coefficients could not be computed.
summary(est2_qml)
#> Estimating baseline model (H0)
#> 
#> modsem (version 1.0.11):
#> 
#>   Estimator                                         QML
#>   Optimization method                            NLMINB
#>   Number of observations                           6038
#>   Number of iterations                               81
#>   Loglikelihood                              -110520.22
#>   Akaike (AIC)                                221142.45
#>   Bayesian (BIC)                              221484.45
#>  
#> Fit Measures for Baseline Model (H0):
#>   Loglikelihood                                 -110521
#>   Akaike (AIC)                                221138.58
#>   Bayesian (BIC)                              221460.46
#>   Chi-square                                    1016.34
#>   Degrees of Freedom (Chi-square)                    87
#>   P-value (Chi-square)                            0.000
#>   RMSEA                                           0.042
#>  
#> Comparative Fit to H0 (LRT test):
#>   Loglikelihood change                             1.06
#>   Difference test (D)                              2.13
#>   Degrees of freedom (D)                              3
#>   P-value (D)                                     0.546
#>  
#> R-Squared Interaction Model (H1):
#>   CAREER                                          0.512
#> R-Squared Baseline Model (H0):
#>   CAREER                                          0.510
#> R-Squared Change (H1 - H0):
#>   CAREER                                          0.002
#> 
#> Parameter Estimates:
#>   Coefficients                           unstandardized
#>   Information                                  observed
#>   Standard errors                              standard
#>  
#> Latent Variables:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ =~ 
#>     enjoy1           1.000                             
#>     enjoy2           1.002      0.020   50.586    0.000
#>     enjoy3           0.894      0.020   43.669    0.000
#>     enjoy4           0.999      0.021   48.228    0.000
#>     enjoy5           1.047      0.021   50.401    0.000
#>   SC =~ 
#>     academic1        1.000                             
#>     academic2        1.104      0.028   38.957    0.000
#>     academic3        1.235      0.030   41.734    0.000
#>     academic4        1.254      0.030   41.843    0.000
#>     academic5        1.113      0.029   38.658    0.000
#>     academic6        1.198      0.030   40.369    0.000
#>   CAREER =~ 
#>     career1          1.000                             
#>     career2          1.040      0.016   65.181    0.000
#>     career3          0.952      0.016   57.839    0.000
#>     career4          0.818      0.017   48.358    0.000
#> 
#> Regressions:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   CAREER ~ 
#>     ENJ              0.523      0.020   26.289    0.000
#>     SC               0.467      0.023   19.892    0.000
#>     ENJ:ENJ          0.026      0.022    1.224    0.221
#>     ENJ:SC          -0.040      0.046   -0.876    0.381
#>     SC:SC           -0.002      0.034   -0.046    0.963
#> 
#> Intercepts:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>     enjoy1           0.000                             
#>     enjoy2           0.000      0.006    0.025    0.980
#>     enjoy3           0.000      0.009   -0.032    0.974
#>     enjoy4           0.000                             
#>     enjoy5           0.000      0.005    0.068    0.946
#>     academic1        0.000      0.002   -0.065    0.948
#>     academic2        0.000                             
#>     academic3        0.000      0.005   -0.080    0.936
#>     academic4        0.000                             
#>     academic5       -0.001      0.004   -0.146    0.884
#>     academic6        0.001      0.003    0.243    0.808
#>     career1         -0.004      0.014   -0.268    0.788
#>     career2         -0.005      0.014   -0.324    0.746
#>     career3         -0.004      0.014   -0.274    0.784
#>     career4         -0.004      0.014   -0.282    0.778
#> 
#> Covariances:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ ~~ 
#>     SC               0.218      0.009   25.481    0.000
#> 
#> Variances:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>     enjoy1           0.487      0.011   44.337    0.000
#>     enjoy2           0.488      0.011   44.407    0.000
#>     enjoy3           0.596      0.012   48.231    0.000
#>     enjoy4           0.488      0.011   44.562    0.000
#>     enjoy5           0.442      0.010   42.471    0.000
#>     academic1        0.644      0.013   49.816    0.000
#>     academic2        0.566      0.012   47.863    0.000
#>     academic3        0.473      0.011   44.321    0.000
#>     academic4        0.455      0.010   43.582    0.000
#>     academic5        0.565      0.012   47.694    0.000
#>     academic6        0.502      0.011   45.435    0.000
#>     career1          0.373      0.009   40.392    0.000
#>     career2          0.328      0.009   37.019    0.000
#>     career3          0.436      0.010   43.272    0.000
#>     career4          0.576      0.012   48.373    0.000
#>     ENJ              0.500      0.017   29.548    0.000
#>     SC               0.338      0.015   23.206    0.000
#>     CAREER           0.302      0.010   29.711    0.000
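The "Comparative Fit to H0 (LRT test)" section above can be reproduced by hand: the difference statistic D is twice the log-likelihood change, and it is referred to a chi-square distribution with degrees of freedom equal to the number of added interaction terms (here 3: `ENJ:ENJ`, `SC:SC`, and `ENJ:SC`). A minimal check using the printed values:

```r
D  <- 2.13  # difference test statistic from the summary (2 * 1.06)
df <- 3     # three interaction/quadratic terms added to the baseline model

p <- pchisq(D, df = df, lower.tail = FALSE)
round(p, 3)  # 0.546, matching the summary
```

The large p-value says the three nonlinear terms jointly add essentially nothing over the baseline model, consistent with the near-zero R-squared change.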

NOTE: We can also use the LMS approach to estimate this model, but it will be considerably slower, since we have to integrate along both ENJ and SC. In the first example it was sufficient to integrate along X only, but the addition of the SC:SC term means that SC must also be explicitly modelled as a moderator. By default, this requires integrating over 24^2 = 576 quadrature nodes, which slows down not only the optimization process but, even more dramatically, the computation of the standard errors. To speed up estimation, we can reduce the number of quadrature nodes and compute standard errors from the outer product of the score function instead of the negative of the Hessian matrix. Additionally, we can pass mean.observed = FALSE, constraining the intercepts of the indicators to zero.
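The quadrature grid grows exponentially in the number of moderating dimensions (nodes^dims), which is why reducing the per-dimension node count pays off so quickly. A quick back-of-the-envelope check of the counts mentioned above:

```r
dims <- 2  # both ENJ and SC act as moderators here

24^dims  # 576 total quadrature points with the default of 24 nodes
15^dims  # 225 total points with nodes = 15, roughly a 60% reduction
```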

m2 <- '
ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
CAREER =~ career1 + career2 + career3 + career4
SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
'

est2_lms <- modsem(m2, data = jordan, method = "lms", 
                   nodes = 15, OFIM.hessian = FALSE, 
                   mean.observed = FALSE)
#> Warning: It is recommended that you have at least 16 nodes for interaction
#> effects between exogenous variables in the lms approach 'nodes = 15'
summary(est2_lms)
#> Estimating baseline model (H0)
#> 
#> modsem (version 1.0.11):
#> 
#>   Estimator                                         LMS
#>   Optimization method                        EMA-NLMINB
#>   Number of observations                           6038
#>   Number of iterations                               51
#>   Loglikelihood                              -100036.82
#>   Akaike (AIC)                                200147.64
#>   Bayesian (BIC)                              200395.75
#>  
#> Numerical Integration:
#>   Points of integration (per dim)                    15
#>   Dimensions                                          2
#>   Total points of integration                       225
#>  
#> Fit Measures for Baseline Model (H0):
#>   Loglikelihood                                 -110521
#>   Akaike (AIC)                                221108.58
#>   Bayesian (BIC)                              221329.87
#>   Chi-square                                    1016.34
#>   Degrees of Freedom (Chi-square)                    87
#>   P-value (Chi-square)                            0.000
#>   RMSEA                                           0.042
#>  
#> Comparative Fit to H0 (LRT test):
#>   Loglikelihood change                         10484.47
#>   Difference test (D)                          20968.94
#>   Degrees of freedom (D)                              4
#>   P-value (D)                                     0.000
#>  
#> R-Squared Interaction Model (H1):
#>   CAREER                                          0.509
#> R-Squared Baseline Model (H0):
#>   CAREER                                          0.510
#> R-Squared Change (H1 - H0):
#>   CAREER                                         -0.001
#> 
#> Parameter Estimates:
#>   Coefficients                           unstandardized
#>   Information                                  observed
#>   Standard errors                              standard
#>  
#> Latent Variables:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ =~ 
#>     enjoy1           1.000                             
#>     enjoy2           1.002      0.021   47.917    0.000
#>     enjoy3           0.895      0.020   45.290    0.000
#>     enjoy4           0.997      0.019   52.613    0.000
#>     enjoy5           1.047      0.019   55.454    0.000
#>   SC =~ 
#>     academic1        1.000                             
#>     academic2        1.105      0.028   39.155    0.000
#>     academic3        1.236      0.029   43.117    0.000
#>     academic4        1.255      0.029   44.024    0.000
#>     academic5        1.115      0.027   40.605    0.000
#>     academic6        1.199      0.028   42.786    0.000
#>   CAREER =~ 
#>     career1          1.000                             
#>     career2          1.040      0.019   54.673    0.000
#>     career3          0.952      0.019   49.620    0.000
#>     career4          0.818      0.017   47.318    0.000
#> 
#> Regressions:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   CAREER ~ 
#>     ENJ              0.525      0.021   24.881    0.000
#>     SC               0.465      0.024   19.423    0.000
#>     ENJ:ENJ          0.025      0.018    1.437    0.151
#>     ENJ:SC          -0.050      0.029   -1.715    0.086
#>     SC:SC            0.003      0.023    0.113    0.910
#> 
#> Intercepts:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>     CAREER          -0.004      0.013   -0.277    0.782
#> 
#> Covariances:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ ~~ 
#>     SC               0.213      0.007   29.509    0.000
#> 
#> Variances:
#>                   Estimate  Std.Error  z.value  P(>|z|)
#>     enjoy1           0.488      0.009   56.029    0.000
#>     enjoy2           0.490      0.009   55.913    0.000
#>     enjoy3           0.595      0.011   53.584    0.000
#>     enjoy4           0.490      0.009   56.846    0.000
#>     enjoy5           0.444      0.009   50.759    0.000
#>     academic1        0.645      0.012   52.068    0.000
#>     academic2        0.566      0.010   55.059    0.000
#>     academic3        0.473      0.009   50.901    0.000
#>     academic4        0.455      0.009   49.260    0.000
#>     academic5        0.565      0.010   55.869    0.000
#>     academic6        0.502      0.010   52.330    0.000
#>     career1          0.373      0.008   46.727    0.000
#>     career2          0.328      0.007   45.884    0.000
#>     career3          0.436      0.009   46.215    0.000
#>     career4          0.576      0.011   54.848    0.000
#>     ENJ              0.490      0.014   34.422    0.000
#>     SC               0.336      0.013   26.607    0.000
#>     CAREER           0.301      0.010   29.673    0.000