
Quadratic Effects and Interaction Effects

Quadratic effects are essentially a special case of interaction effects, in which a variable interacts with itself. As such, all of the methods in modsem can also be used to estimate quadratic effects.

Below is a simple example using the LMS approach.

library(modsem)
m1 <- '
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3

# Inner model
Y ~ X + Z + Z:X + X:X
'

est1_lms <- modsem(m1, data = oneInt, method = "lms")
summary(est1_lms)
#> 
#> modsem (1.0.13) ended normally after 50 iterations
#> 
#>   Estimator                                        LMS
#>   Optimization method                       EMA-NLMINB
#>   Number of model parameters                        32
#>                                                       
#>   Number of observations                          2000
#>  
#> Loglikelihood and Information Criteria:
#>   Loglikelihood                              -17493.58
#>   Akaike (AIC)                                35051.16
#>   Bayesian (BIC)                              35230.39
#>  
#> Numerical Integration:
#>   Points of integration (per dim)                   24
#>   Dimensions                                         1
#>   Total points of integration                       24
#>  
#> Fit Measures for Baseline Model (H0):
#>                                               Standard
#>   Chi-square                                     17.52
#>   Degrees of Freedom (Chi-square)                   24
#>   P-value (Chi-square)                           0.826
#>   RMSEA                                          0.000
#>                                                       
#>   Loglikelihood                              -17831.87
#>   Akaike (AIC)                                35723.75
#>   Bayesian (BIC)                              35891.78
#>  
#> Comparative Fit to H0 (LRT test):
#>   Loglikelihood change                          338.29
#>   Difference test (D)                           676.59
#>   Degrees of freedom (D)                             2
#>   P-value (D)                                    0.000
#>  
#> R-Squared Interaction Model (H1):
#>   Y                                              0.599
#> R-Squared Baseline Model (H0):
#>   Y                                              0.395
#> R-Squared Change (H1 - H0):
#>   Y                                              0.204
#> 
#> Parameter Estimates:
#>   Coefficients                          unstandardized
#>   Information                                 observed
#>   Standard errors                             standard
#>  
#> Latent Variables:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   X =~          
#>     x1              1.000                             
#>     x2              0.804      0.013   63.813    0.000
#>     x3              0.914      0.014   67.610    0.000
#>   Z =~          
#>     z1              1.000                             
#>     z2              0.810      0.012   65.085    0.000
#>     z3              0.881      0.013   67.613    0.000
#>   Y =~          
#>     y1              1.000                             
#>     y2              0.798      0.007  107.549    0.000
#>     y3              0.899      0.008  112.580    0.000
#> 
#> Regressions:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   Y ~           
#>     X               0.673      0.031   21.654    0.000
#>     Z               0.569      0.030   18.712    0.000
#>     X:X            -0.004      0.021   -0.194    0.846
#>     Z:X             0.720      0.029   24.855    0.000
#> 
#> Intercepts:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .x1              1.023      0.024   42.796    0.000
#>    .x2              1.215      0.020   60.867    0.000
#>    .x3              0.919      0.022   41.391    0.000
#>    .z1              1.011      0.024   41.565    0.000
#>    .z2              1.206      0.020   59.262    0.000
#>    .z3              0.916      0.022   42.054    0.000
#>    .y1              1.040      0.038   27.463    0.000
#>    .y2              1.222      0.031   39.874    0.000
#>    .y3              0.956      0.034   27.884    0.000
#> 
#> Covariances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   X ~~          
#>     Z               0.200      0.024    8.240    0.000
#> 
#> Variances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .x1              0.158      0.009   18.172    0.000
#>    .x2              0.162      0.007   23.157    0.000
#>    .x3              0.164      0.008   20.759    0.000
#>    .z1              0.167      0.009   18.507    0.000
#>    .z2              0.160      0.007   22.681    0.000
#>    .z3              0.158      0.008   20.780    0.000
#>    .y1              0.160      0.009   18.012    0.000
#>    .y2              0.154      0.007   22.685    0.000
#>    .y3              0.164      0.008   20.683    0.000
#>     X               0.981      0.036   26.975    0.000
#>     Z               1.017      0.038   26.934    0.000
#>    .Y               0.979      0.038   25.929    0.000
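
For comparison, the same model can also be estimated with the QML approach (`method = "qml"`), which avoids numerical integration and is usually faster than LMS. A minimal sketch; the estimates should be close to, but not identical to, the LMS results above:

```r
library(modsem)

m1 <- '
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3

# Inner model
Y ~ X + Z + Z:X + X:X
'

# QML replaces numerical integration with a quasi-maximum likelihood
# approximation, which typically scales better as the number of
# latent product terms grows
est1_qml <- modsem(m1, data = oneInt, method = "qml")
summary(est1_qml)
```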

Next is a model with two quadratic effects and one interaction effect. We estimate it using both the QML and double-centering approaches, with data from a subset of the PISA 2006 dataset.

m2 <- '
ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
CAREER =~ career1 + career2 + career3 + career4
SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
'

est2_dca <- modsem(m2, data = jordan)
est2_qml <- modsem(m2, data = jordan, method = "qml")
summary(est2_qml)
#> 
#> modsem (1.0.13) ended normally after 48 iterations
#> 
#>   Estimator                                        QML
#>   Optimization method                           NLMINB
#>   Number of model parameters                        51
#>                                                       
#>   Number of observations                          6038
#>  
#> Loglikelihood and Information Criteria:
#>   Loglikelihood                             -110519.99
#>   Akaike (AIC)                               221141.98
#>   Bayesian (BIC)                             221483.98
#>  
#> Fit Measures for Baseline Model (H0):
#>                                               Standard
#>   Chi-square                                   1016.34
#>   Degrees of Freedom (Chi-square)                   87
#>   P-value (Chi-square)                           0.000
#>   RMSEA                                          0.042
#>                                                       
#>   Loglikelihood                             -110521.29
#>   Akaike (AIC)                               221138.58
#>   Bayesian (BIC)                             221460.46
#>  
#> Comparative Fit to H0 (LRT test):
#>   Loglikelihood change                            1.30
#>   Difference test (D)                             2.59
#>   Degrees of freedom (D)                             3
#>   P-value (D)                                    0.458
#>  
#> R-Squared Interaction Model (H1):
#>   CAREER                                         0.513
#> R-Squared Baseline Model (H0):
#>   CAREER                                         0.510
#> R-Squared Change (H1 - H0):
#>   CAREER                                         0.003
#> 
#> Parameter Estimates:
#>   Coefficients                          unstandardized
#>   Information                                 observed
#>   Standard errors                             standard
#>  
#> Latent Variables:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ =~        
#>     enjoy1          1.000                             
#>     enjoy2          1.002      0.020   50.574    0.000
#>     enjoy3          0.894      0.020   43.665    0.000
#>     enjoy4          0.999      0.021   48.219    0.000
#>     enjoy5          1.047      0.021   50.391    0.000
#>   SC =~         
#>     academic1       1.000                             
#>     academic2       1.104      0.028   38.951    0.000
#>     academic3       1.235      0.030   41.727    0.000
#>     academic4       1.254      0.030   41.836    0.000
#>     academic5       1.113      0.029   38.653    0.000
#>     academic6       1.198      0.030   40.364    0.000
#>   CAREER =~     
#>     career1         1.000                             
#>     career2         1.040      0.016   65.185    0.000
#>     career3         0.952      0.016   57.843    0.000
#>     career4         0.818      0.017   48.361    0.000
#> 
#> Regressions:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   CAREER ~      
#>     ENJ             0.526      0.020   26.309    0.000
#>     SC              0.464      0.023   20.004    0.000
#>     ENJ:ENJ         0.029      0.022    1.341    0.180
#>     ENJ:SC         -0.046      0.045   -1.015    0.310
#>     SC:SC           0.001      0.036    0.025    0.980
#> 
#> Intercepts:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .enjoy1          0.000      0.011   -0.010    0.992
#>    .enjoy2          0.000      0.013    0.011    0.991
#>    .enjoy3          0.000      0.017   -0.018    0.986
#>    .enjoy4          0.000      0.016    0.005    0.996
#>    .enjoy5          0.000      0.016    0.020    0.984
#>    .academic1       0.000      0.014   -0.011    0.991
#>    .academic2       0.000      0.013   -0.011    0.992
#>    .academic3       0.000      0.013   -0.034    0.973
#>    .academic4       0.000      0.014   -0.018    0.986
#>    .academic5      -0.001      0.013   -0.049    0.961
#>    .academic6       0.001      0.015    0.046    0.964
#>    .career1        -0.005      0.020   -0.231    0.817
#>    .career2        -0.005      0.020   -0.270    0.787
#>    .career3        -0.005      0.019   -0.240    0.811
#>    .career4        -0.005      0.018   -0.257    0.798
#> 
#> Covariances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ ~~        
#>     SC              0.218      0.009   25.479    0.000
#> 
#> Variances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .enjoy1          0.487      0.011   44.350    0.000
#>    .enjoy2          0.489      0.011   44.421    0.000
#>    .enjoy3          0.596      0.012   48.234    0.000
#>    .enjoy4          0.488      0.011   44.568    0.000
#>    .enjoy5          0.442      0.010   42.478    0.000
#>    .academic1       0.644      0.013   49.812    0.000
#>    .academic2       0.566      0.012   47.864    0.000
#>    .academic3       0.473      0.011   44.320    0.000
#>    .academic4       0.455      0.010   43.582    0.000
#>    .academic5       0.565      0.012   47.684    0.000
#>    .academic6       0.502      0.011   45.441    0.000
#>    .career1         0.373      0.009   40.396    0.000
#>    .career2         0.328      0.009   37.017    0.000
#>    .career3         0.436      0.010   43.275    0.000
#>    .career4         0.576      0.012   48.375    0.000
#>     ENJ             0.500      0.017   29.545    0.000
#>     SC              0.338      0.015   23.199    0.000
#>    .CAREER          0.302      0.010   29.725    0.000
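
The double-centering fit stored in `est2_dca` can be inspected in the same way. A brief sketch; the product-indicator approaches (including double centering, the default method) are estimated using lavaan under the hood:

```r
m2 <- '
ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
CAREER =~ career1 + career2 + career3 + career4
SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
'

# "dblcent" (double centering) is the default method, so no
# method argument is needed here
est2_dca <- modsem(m2, data = jordan)
summary(est2_dca)
```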

NOTE: We can also use the LMS approach to estimate this model, but it will be considerably slower, since we have to integrate along both ENJ and SC. In the first example it was sufficient to integrate along X only, but here the SC:SC term means that SC must also be explicitly modeled as a moderator. By default, this requires integrating over 24^2 = 576 quadrature nodes, which slows down both the optimization process and, even more dramatically, the computation of the standard errors. To speed up estimation, we can reduce the number of quadrature nodes. Additionally, we can pass mean.observed = FALSE, constraining the intercepts of the indicators to zero.

m2 <- '
ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
CAREER =~ career1 + career2 + career3 + career4
SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
'

est2_lms <- modsem(m2, data = jordan, method = "lms",
                   nodes = 15, mean.observed = FALSE)
summary(est2_lms)
#> 
#> modsem (1.0.13) ended normally after 22 iterations
#> 
#>   Estimator                                        LMS
#>   Optimization method                       EMA-NLMINB
#>   Number of model parameters                        37
#>                                                       
#>   Number of observations                          6038
#>  
#> Loglikelihood and Information Criteria:
#>   Loglikelihood                             -110520.02
#>   Akaike (AIC)                               221114.03
#>   Bayesian (BIC)                             221362.15
#>  
#> Numerical Integration:
#>   Points of integration (per dim)                   15
#>   Dimensions                                         2
#>   Total points of integration                      225
#>  
#> Fit Measures for Baseline Model (H0):
#>                                               Standard
#>   Chi-square                                   1016.34
#>   Degrees of Freedom (Chi-square)                   87
#>   P-value (Chi-square)                           0.000
#>   RMSEA                                          0.042
#>                                                       
#>   Loglikelihood                             -110521.29
#>   Akaike (AIC)                               221108.58
#>   Bayesian (BIC)                             221329.87
#>  
#> Comparative Fit to H0 (LRT test):
#>   Loglikelihood change                            1.27
#>   Difference test (D)                             2.55
#>   Degrees of freedom (D)                             4
#>   P-value (D)                                    0.636
#>  
#> R-Squared Interaction Model (H1):
#>   CAREER                                         0.512
#> R-Squared Baseline Model (H0):
#>   CAREER                                         0.510
#> R-Squared Change (H1 - H0):
#>   CAREER                                         0.002
#> 
#> Parameter Estimates:
#>   Coefficients                          unstandardized
#>   Information                                 observed
#>   Standard errors                             standard
#>  
#> Latent Variables:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ =~        
#>     enjoy1          1.000                             
#>     enjoy2          1.002      0.020   50.379    0.000
#>     enjoy3          0.894      0.021   43.536    0.000
#>     enjoy4          0.998      0.021   48.014    0.000
#>     enjoy5          1.047      0.021   50.180    0.000
#>   SC =~         
#>     academic1       1.000                             
#>     academic2       1.105      0.029   38.558    0.000
#>     academic3       1.236      0.030   41.239    0.000
#>     academic4       1.255      0.030   41.331    0.000
#>     academic5       1.114      0.029   38.257    0.000
#>     academic6       1.199      0.030   39.900    0.000
#>   CAREER =~     
#>     career1         1.000                             
#>     career2         1.040      0.016   65.193    0.000
#>     career3         0.952      0.016   57.847    0.000
#>     career4         0.818      0.017   48.365    0.000
#> 
#> Regressions:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   CAREER ~      
#>     ENJ             0.525      0.020   26.396    0.000
#>     SC              0.465      0.023   20.168    0.000
#>     ENJ:ENJ         0.028      0.021    1.321    0.187
#>     ENJ:SC         -0.049      0.042   -1.160    0.246
#>     SC:SC           0.003      0.032    0.081    0.935
#> 
#> Intercepts:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .CAREER         -0.004      0.013   -0.330    0.742
#> 
#> Covariances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ ~~        
#>     SC              0.216      0.009   25.383    0.000
#> 
#> Variances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .enjoy1          0.487      0.011   44.349    0.000
#>    .enjoy2          0.488      0.011   44.393    0.000
#>    .enjoy3          0.596      0.012   48.227    0.000
#>    .enjoy4          0.488      0.011   44.630    0.000
#>    .enjoy5          0.442      0.010   42.529    0.000
#>    .academic1       0.645      0.013   49.799    0.000
#>    .academic2       0.566      0.012   47.870    0.000
#>    .academic3       0.473      0.011   44.311    0.000
#>    .academic4       0.455      0.010   43.588    0.000
#>    .academic5       0.565      0.012   47.684    0.000
#>    .academic6       0.502      0.011   45.432    0.000
#>    .career1         0.373      0.009   40.396    0.000
#>    .career2         0.328      0.009   37.021    0.000
#>    .career3         0.436      0.010   43.282    0.000
#>    .career4         0.576      0.012   48.380    0.000
#>     ENJ             0.498      0.017   29.579    0.000
#>     SC              0.337      0.015   22.823    0.000
#>    .CAREER          0.302      0.010   29.736    0.000
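
Continuing with the est2_lms object fitted above, standardized coefficients can make the (here quite small) quadratic and interaction effects easier to compare across predictors. A sketch, assuming the standardized_estimates() helper available in recent versions of modsem:

```r
# Standardize the parameter estimates of the fitted LMS model;
# est2_lms is the model fitted in the previous code block
standardized_estimates(est2_lms)
```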