library(modsem)
#> This is modsem (1.0.12). Please report any bugs!

The Basic Syntax

modsem extends the lavaan syntax with the colon operator (:), which works the same way as in formulas passed to the lm() function. To specify an interaction effect between two variables, you join them as Var1:Var2.

Models can be estimated using one of the product indicator approaches ("ca", "rca", "dblcent", "pind"), the latent moderated structural equations approach ("lms"), or the quasi-maximum likelihood approach ("qml"). The product indicator approaches are estimated via lavaan, while the lms and qml approaches are estimated by modsem itself.

A Simple Example

Here is a simple example of how to specify an interaction effect between two latent variables using modsem.

m1 <- '
  # Outer Model
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3

  # Inner Model
  Y ~ X + Z + X:Z
'

est1 <- modsem(m1, oneInt)
summary(est1)
#> Estimating baseline model (H0)
#> modsem (version 1.0.12, approach = dblcent):
#> 
#> Interaction Model Fit Measures (H1):
#>   Loglikelihood                              -26807.61 
#>   Akaike (AIC)                                53735.22 
#>   Bayesian (BIC)                              54071.28 
#>   Chi-square                                    122.92 
#>   Degrees of Freedom                               111 
#>   P-value (Chi-square)                           0.207 
#>   RMSEA                                          0.007 
#>   CFI                                            1.000 
#>   SRMR                                           0.008 
#> 
#> Fit Measures for Baseline Model (H0):
#>   Loglikelihood                              -27137.74 
#>   Akaike (AIC)                                54393.48 
#>   Bayesian (BIC)                              54723.93 
#>   Chi-square                                    783.18 
#>   Degrees of Freedom                               112 
#>   P-value (Chi-square)                           0.000 
#>   RMSEA                                          0.055 
#>   CFI                                            0.987 
#>   SRMR                                           0.141 
#> 
#> Comparative Fit to H0 (LRT test):
#>   Chi-square diff                              660.257 
#>   Degrees of freedom diff                            1 
#>   P-value (LRT)                                  0.000 
#> 
#> R-Squared Interaction Model (H1):
#>   Y                                              0.602 
#> R-Squared Baseline Model (H0):
#>   Y                                              0.397 
#> R-Squared Change (H1 - H0):
#>   Y                                              0.204 
#> 
#> lavaan 0.6-19 ended normally after 161 iterations
#> 
#>   Estimator                                         ML
#>   Optimization method                           NLMINB
#>   Number of model parameters                        60
#> 
#>   Number of observations                          2000
#> 
#> Model Test User Model:
#>                                                       
#>   Test statistic                               122.924
#>   Degrees of freedom                               111
#>   P-value (Chi-square)                           0.207
#> 
#> Parameter Estimates:
#> 
#>   Standard errors                             Standard
#>   Information                                 Expected
#>   Information saturated (h1) model          Structured
#> 
#> Latent Variables:
#>                    Estimate  Std.Err  z-value  P(>|z|)
#>   X =~                                                
#>     x1                1.000                           
#>     x2                0.804    0.013   63.612    0.000
#>     x3                0.916    0.014   67.144    0.000
#>   Y =~                                                
#>     y1                1.000                           
#>     y2                0.798    0.007  107.428    0.000
#>     y3                0.899    0.008  112.453    0.000
#>   Z =~                                                
#>     z1                1.000                           
#>     z2                0.812    0.013   64.763    0.000
#>     z3                0.882    0.013   67.014    0.000
#>   XZ =~                                               
#>     x1z1              1.000                           
#>     x2z1              0.805    0.013   60.636    0.000
#>     x3z1              0.877    0.014   62.680    0.000
#>     x1z2              0.793    0.013   59.343    0.000
#>     x2z2              0.646    0.015   43.672    0.000
#>     x3z2              0.706    0.016   44.292    0.000
#>     x1z3              0.887    0.014   63.700    0.000
#>     x2z3              0.716    0.016   45.645    0.000
#>     x3z3              0.781    0.017   45.339    0.000
#> 
#> Regressions:
#>                    Estimate  Std.Err  z-value  P(>|z|)
#>   Y ~                                                 
#>     X                 0.675    0.027   25.379    0.000
#>     Z                 0.561    0.026   21.606    0.000
#>     XZ                0.702    0.027   26.360    0.000
#> 
#> Covariances:
#>                    Estimate  Std.Err  z-value  P(>|z|)
#>  .x1z1 ~~                                             
#>    .x1z2              0.115    0.008   14.802    0.000
#>    .x1z3              0.114    0.008   13.947    0.000
#>    .x2z1              0.125    0.008   16.095    0.000
#>    .x3z1              0.140    0.009   16.135    0.000
#>  .x1z2 ~~                                             
#>    .x1z3              0.103    0.007   14.675    0.000
#>    .x2z2              0.128    0.006   20.850    0.000
#>    .x3z2              0.146    0.007   21.243    0.000
#>  .x1z3 ~~                                             
#>    .x2z3              0.116    0.007   17.818    0.000
#>    .x3z3              0.135    0.007   18.335    0.000
#>  .x2z1 ~~                                             
#>    .x2z2              0.135    0.006   20.905    0.000
#>    .x2z3              0.145    0.007   21.145    0.000
#>    .x3z1              0.114    0.007   16.058    0.000
#>  .x2z2 ~~                                             
#>    .x2z3              0.117    0.006   20.419    0.000
#>    .x3z2              0.116    0.006   20.586    0.000
#>  .x2z3 ~~                                             
#>    .x3z3              0.109    0.006   18.059    0.000
#>  .x3z1 ~~                                             
#>    .x3z2              0.138    0.007   19.331    0.000
#>    .x3z3              0.158    0.008   20.269    0.000
#>  .x3z2 ~~                                             
#>    .x3z3              0.131    0.007   19.958    0.000
#>   X ~~                                                
#>     Z                 0.201    0.024    8.271    0.000
#>     XZ                0.016    0.025    0.628    0.530
#>   Z ~~                                                
#>     XZ                0.062    0.025    2.449    0.014
#> 
#> Variances:
#>                    Estimate  Std.Err  z-value  P(>|z|)
#>    .x1                0.160    0.009   17.871    0.000
#>    .x2                0.162    0.007   22.969    0.000
#>    .x3                0.163    0.008   20.161    0.000
#>    .y1                0.159    0.009   17.896    0.000
#>    .y2                0.154    0.007   22.640    0.000
#>    .y3                0.164    0.008   20.698    0.000
#>    .z1                0.168    0.009   18.143    0.000
#>    .z2                0.158    0.007   22.264    0.000
#>    .z3                0.158    0.008   20.389    0.000
#>    .x1z1              0.311    0.014   22.227    0.000
#>    .x2z1              0.292    0.011   27.287    0.000
#>    .x3z1              0.327    0.012   26.275    0.000
#>    .x1z2              0.290    0.011   26.910    0.000
#>    .x2z2              0.239    0.008   29.770    0.000
#>    .x3z2              0.270    0.009   29.117    0.000
#>    .x1z3              0.272    0.012   23.586    0.000
#>    .x2z3              0.245    0.009   27.979    0.000
#>    .x3z3              0.297    0.011   28.154    0.000
#>     X                 0.981    0.036   26.895    0.000
#>    .Y                 0.990    0.038   25.926    0.000
#>     Z                 1.016    0.038   26.856    0.000
#>     XZ                1.045    0.044   24.004    0.000

By default, the model is estimated using the "dblcent" method. If you want to use another method, you can change it using the method argument.

est1 <- modsem(m1, oneInt, method = "lms")
summary(est1)
#> 
#> modsem (1.0.12) ended normally after 43 iterations
#> 
#>   Estimator                                        LMS
#>   Optimization method                       EMA-NLMINB
#>   Number of model parameters                        31
#>                                                       
#>   Number of observations                          2000
#>  
#> Loglikelihood and Information Criteria:
#>   Loglikelihood                              -17493.60
#>   Akaike (AIC)                                35049.20
#>   Bayesian (BIC)                              35222.83
#>  
#> Numerical Integration:
#>   Points of integration (per dim)                   24
#>   Dimensions                                         1
#>   Total points of integration                       24
#>  
#> Fit Measures for Baseline Model (H0):
#>   Loglikelihood                              -17831.87
#>   Akaike (AIC)                                35723.75
#>   Bayesian (BIC)                              35891.78
#>   Chi-square                                     17.52
#>   Degrees of Freedom (Chi-square)                   24
#>   P-value (Chi-square)                           0.826
#>   RMSEA                                          0.000
#>  
#> Comparative Fit to H0 (LRT test):
#>   Loglikelihood change                          338.27
#>   Difference test (D)                           676.55
#>   Degrees of freedom (D)                             1
#>   P-value (D)                                    0.000
#>  
#> R-Squared Interaction Model (H1):
#>   Y                                              0.599
#> R-Squared Baseline Model (H0):
#>   Y                                              0.395
#> R-Squared Change (H1 - H0):
#>   Y                                              0.204
#> 
#> Parameter Estimates:
#>   Coefficients                          unstandardized
#>   Information                                 observed
#>   Standard errors                             standard
#>  
#> Latent Variables:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   X =~          
#>     x1              1.000                             
#>     x2              0.804      0.013   63.806    0.000
#>     x3              0.914      0.014   67.599    0.000
#>   Z =~          
#>     z1              1.000                             
#>     z2              0.810      0.012   65.079    0.000
#>     z3              0.881      0.013   67.610    0.000
#>   Y =~          
#>     y1              1.000                             
#>     y2              0.798      0.007  107.547    0.000
#>     y3              0.899      0.008  112.574    0.000
#> 
#> Regressions:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   Y ~           
#>     X               0.674      0.031   21.689    0.000
#>     Z               0.570      0.030   18.739    0.000
#>     X:Z             0.718      0.028   25.829    0.000
#> 
#> Intercepts:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .x1              1.023      0.024   42.812    0.000
#>    .x2              1.216      0.020   60.879    0.000
#>    .x3              0.920      0.022   41.405    0.000
#>    .z1              1.012      0.024   41.586    0.000
#>    .z2              1.206      0.020   59.281    0.000
#>    .z3              0.916      0.022   42.072    0.000
#>    .y1              1.037      0.033   31.391    0.000
#>    .y2              1.220      0.027   45.413    0.000
#>    .y3              0.954      0.030   31.800    0.000
#> 
#> Covariances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   X ~~          
#>     Z               0.200      0.024    8.241    0.000
#> 
#> Variances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .x1              0.158      0.009   18.169    0.000
#>    .x2              0.162      0.007   23.156    0.000
#>    .x3              0.165      0.008   20.755    0.000
#>    .z1              0.167      0.009   18.505    0.000
#>    .z2              0.160      0.007   22.680    0.000
#>    .z3              0.158      0.008   20.777    0.000
#>    .y1              0.160      0.009   18.010    0.000
#>    .y2              0.154      0.007   22.684    0.000
#>    .y3              0.164      0.008   20.681    0.000
#>     X               0.981      0.036   26.966    0.000
#>     Z               1.017      0.038   26.930    0.000
#>    .Y               0.980      0.038   25.934    0.000

Interactions Between Two Observed Variables

modsem allows you to estimate interactions between not only latent variables but also observed variables. Below, we first run a regression with only observed variables, where there is an interaction between x1 and z1, and then fit an equivalent model using modsem().

Using a Regression

reg1 <- lm(y1 ~ x1*z1, oneInt)
summary(reg1)
#> 
#> Call:
#> lm(formula = y1 ~ x1 * z1, data = oneInt)
#> 
#> Residuals:
#>     Min      1Q  Median      3Q     Max 
#> -3.7155 -0.8087 -0.0367  0.8078  4.6531 
#> 
#> Coefficients:
#>             Estimate Std. Error t value Pr(>|t|)    
#> (Intercept)  0.51422    0.04618  11.135   <2e-16 ***
#> x1           0.05477    0.03387   1.617   0.1060    
#> z1          -0.06575    0.03461  -1.900   0.0576 .  
#> x1:z1        0.54361    0.02272  23.926   <2e-16 ***
#> ---
#> Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#> 
#> Residual standard error: 1.184 on 1996 degrees of freedom
#> Multiple R-squared:  0.4714, Adjusted R-squared:  0.4706 
#> F-statistic: 593.3 on 3 and 1996 DF,  p-value: < 2.2e-16

Using modsem

modsem can also be used to estimate interaction effects between observed variables. Note that interaction effects involving observed variables are not supported by the LMS and QML approaches. In most cases, however, you can work around this by defining a latent variable with a single indicator for each observed variable and estimating the interaction between the resulting latent variables.
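The single-indicator workaround might look like the sketch below (not run here). The model name m2 and the latent variable names LX, LZ, and LY are illustrative; depending on your use case, you may also want to fix the indicator residual variances (e.g., x1 ~~ 0*x1) so that each latent variable is equivalent to its observed indicator.

```r
# Sketch: estimating the x1:z1 interaction with the LMS approach by
# wrapping each observed variable in a single-indicator latent variable.
m2 <- '
  # Single-indicator latent variables (names are illustrative)
  LX =~ x1
  LZ =~ z1
  LY =~ y1

  # Inner Model
  LY ~ LX + LZ + LX:LZ
'

est_lms <- modsem(m2, oneInt, method = "lms")
summary(est_lms)
```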

est2 <- modsem('y1 ~ x1 + z1 + x1:z1', data = oneInt, method = "dblcent")
summary(est2)
#> Estimating baseline model (H0)
#> modsem (version 1.0.12, approach = dblcent):
#> 
#> Interaction Model Fit Measures (H1):
#>   Loglikelihood                               -3173.58 
#>   Akaike (AIC)                                 6355.15 
#>   Bayesian (BIC)                               6377.56 
#>   Chi-square                                      0.00 
#>   Degrees of Freedom                                 0 
#>   P-value (Chi-square)                              NA 
#>   RMSEA                                          0.000 
#>   CFI                                            1.000 
#>   SRMR                                           0.000 
#> 
#> Fit Measures for Baseline Model (H0):
#>   Loglikelihood                               -3425.74 
#>   Akaike (AIC)                                 6857.49 
#>   Bayesian (BIC)                               6874.29 
#>   Chi-square                                    504.33 
#>   Degrees of Freedom                                 1 
#>   P-value (Chi-square)                           0.000 
#>   RMSEA                                          0.502 
#>   CFI                                            0.604 
#>   SRMR                                           0.123 
#> 
#> Comparative Fit to H0 (LRT test):
#>   Chi-square diff                              504.333 
#>   Degrees of freedom diff                            1 
#>   P-value (LRT)                                  0.000 
#> 
#> R-Squared Interaction Model (H1):
#>   y1                                             0.471 
#> R-Squared Baseline Model (H0):
#>   y1                                             0.320 
#> R-Squared Change (H1 - H0):
#>   y1                                             0.152 
#> 
#> lavaan 0.6-19 ended normally after 1 iteration
#> 
#>   Estimator                                         ML
#>   Optimization method                           NLMINB
#>   Number of model parameters                         4
#> 
#>   Number of observations                          2000
#> 
#> Model Test User Model:
#>                                                       
#>   Test statistic                                 0.000
#>   Degrees of freedom                                 0
#> 
#> Parameter Estimates:
#> 
#>   Standard errors                             Standard
#>   Information                                 Expected
#>   Information saturated (h1) model          Structured
#> 
#> Regressions:
#>                    Estimate  Std.Err  z-value  P(>|z|)
#>   y1 ~                                                
#>     x1                0.605    0.025   24.012    0.000
#>     z1                0.490    0.025   19.827    0.000
#>     x1z1              0.544    0.023   23.950    0.000
#> 
#> Variances:
#>                    Estimate  Std.Err  z-value  P(>|z|)
#>    .y1                1.399    0.044   31.623    0.000

Interactions Between Latent and Observed Variables

modsem also allows you to estimate interaction effects between latent and observed variables. To do so, simply join a latent and an observed variable with a colon (e.g., 'latent:observed'). In most of the product indicator approaches, the residuals of product indicators that share a common variable (e.g., x1z1 and x1z2) are allowed to covary freely. This is problematic when there is an interaction term between an observed variable (or a latent variable with a single indicator) and a latent variable with multiple indicators, since all of the product indicators then share a common element.

In the example below, all the generated product indicators (x1z1, x2z1, and x3z1) share z1. If all the indicator residuals of a latent variable are allowed to covary freely, the model will (in general) be unidentified. The simplest way to fix this is to constrain the residual covariances to zero, using the res.cov.method argument.

m3 <- '
  # Outer Model
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3

  # Inner Model
  Y ~ X + z1 + X:z1
'

est3 <- modsem(m3, oneInt, method = "dblcent", res.cov.method = "none")
summary(est3)
#> Estimating baseline model (H0)
#> modsem (version 1.0.12, approach = dblcent):
#> 
#> Interaction Model Fit Measures (H1):
#>   Loglikelihood                              -18004.00 
#>   Akaike (AIC)                                36052.00 
#>   Bayesian (BIC)                              36175.22 
#>   Chi-square                                    105.21 
#>   Degrees of Freedom                                32 
#>   P-value (Chi-square)                           0.000 
#>   RMSEA                                          0.034 
#>   CFI                                            0.996 
#>   SRMR                                           0.055 
#> 
#> Fit Measures for Baseline Model (H0):
#>   Loglikelihood                              -18307.00 
#>   Akaike (AIC)                                36655.99 
#>   Bayesian (BIC)                              36773.61 
#>   Chi-square                                    711.21 
#>   Degrees of Freedom                                33 
#>   P-value (Chi-square)                           0.000 
#>   RMSEA                                          0.101 
#>   CFI                                            0.967 
#>   SRMR                                           0.165 
#> 
#> Comparative Fit to H0 (LRT test):
#>   Chi-square diff                              605.997 
#>   Degrees of freedom diff                            1 
#>   P-value (LRT)                                  0.000 
#> 
#> R-Squared Interaction Model (H1):
#>   Y                                              0.525 
#> R-Squared Baseline Model (H0):
#>   Y                                              0.333 
#> R-Squared Change (H1 - H0):
#>   Y                                              0.192 
#> 
#> lavaan 0.6-19 ended normally after 44 iterations
#> 
#>   Estimator                                         ML
#>   Optimization method                           NLMINB
#>   Number of model parameters                        22
#> 
#>   Number of observations                          2000
#> 
#> Model Test User Model:
#>                                                       
#>   Test statistic                               105.208
#>   Degrees of freedom                                32
#>   P-value (Chi-square)                           0.000
#> 
#> Parameter Estimates:
#> 
#>   Standard errors                             Standard
#>   Information                                 Expected
#>   Information saturated (h1) model          Structured
#> 
#> Latent Variables:
#>                    Estimate  Std.Err  z-value  P(>|z|)
#>   X =~                                                
#>     x1                1.000                           
#>     x2                0.804    0.013   63.560    0.000
#>     x3                0.916    0.014   67.103    0.000
#>   Y =~                                                
#>     y1                1.000                           
#>     y2                0.798    0.008  103.776    0.000
#>     y3                0.899    0.008  108.502    0.000
#>   Xz1 =~                                              
#>     x1z1              1.000                           
#>     x2z1              0.804    0.012   66.809    0.000
#>     x3z1              0.878    0.013   68.890    0.000
#> 
#> Regressions:
#>                    Estimate  Std.Err  z-value  P(>|z|)
#>   Y ~                                                 
#>     X                 0.699    0.027   26.192    0.000
#>     z1                0.487    0.023   21.289    0.000
#>     Xz1               0.620    0.024   25.927    0.000
#> 
#> Covariances:
#>                    Estimate  Std.Err  z-value  P(>|z|)
#>   X ~~                                                
#>     Xz1               0.001    0.026    0.022    0.982
#> 
#> Variances:
#>                    Estimate  Std.Err  z-value  P(>|z|)
#>    .x1                0.160    0.009   17.802    0.000
#>    .x2                0.162    0.007   22.955    0.000
#>    .x3                0.163    0.008   20.110    0.000
#>    .y1                0.158    0.009   17.724    0.000
#>    .y2                0.154    0.007   22.593    0.000
#>    .y3                0.164    0.008   20.670    0.000
#>    .x1z1              0.157    0.010   15.792    0.000
#>    .x2z1              0.190    0.008   23.007    0.000
#>    .x3z1              0.199    0.009   21.617    0.000
#>     X                 0.981    0.036   26.895    0.000
#>    .Y                 1.107    0.040   27.402    0.000
#>     Xz1               1.203    0.044   27.606    0.000

Quadratic Effects

Quadratic effects are essentially a special case of interaction effects, where a variable interacts with itself. Thus, modsem can also be used to estimate quadratic effects.

m4 <- '
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3

# Inner Model
Y ~ X + Z + Z:X + X:X
'

est4 <- modsem(m4, oneInt, method = "qml")
summary(est4)
#> 
#> modsem (1.0.12) ended normally after 117 iterations
#> 
#>   Estimator                                        QML
#>   Optimization method                           NLMINB
#>   Number of model parameters                        32
#>                                                       
#>   Number of observations                          2000
#>  
#> Loglikelihood and Information Criteria:
#>   Loglikelihood                              -17493.63
#>   Akaike (AIC)                                35051.26
#>   Bayesian (BIC)                              35230.49
#>  
#> Fit Measures for Baseline Model (H0):
#>   Loglikelihood                              -17831.87
#>   Akaike (AIC)                                35723.75
#>   Bayesian (BIC)                              35891.78
#>   Chi-square                                     17.52
#>   Degrees of Freedom (Chi-square)                   24
#>   P-value (Chi-square)                           0.826
#>   RMSEA                                          0.000
#>  
#> Comparative Fit to H0 (LRT test):
#>   Loglikelihood change                          338.25
#>   Difference test (D)                           676.49
#>   Degrees of freedom (D)                             2
#>   P-value (D)                                    0.000
#>  
#> R-Squared Interaction Model (H1):
#>   Y                                              0.599
#> R-Squared Baseline Model (H0):
#>   Y                                              0.395
#> R-Squared Change (H1 - H0):
#>   Y                                              0.204
#> 
#> Parameter Estimates:
#>   Coefficients                          unstandardized
#>   Information                                 observed
#>   Standard errors                             standard
#>  
#> Latent Variables:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   X =~          
#>     x1              1.000                             
#>     x2              0.804      0.013   63.894    0.000
#>     x3              0.914      0.013   67.706    0.000
#>   Z =~          
#>     z1              1.000                             
#>     z2              0.810      0.012   65.076    0.000
#>     z3              0.881      0.013   67.605    0.000
#>   Y =~          
#>     y1              1.000                             
#>     y2              0.798      0.007  107.531    0.000
#>     y3              0.899      0.008  112.557    0.000
#> 
#> Regressions:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   Y ~           
#>     X               0.673      0.031   21.659    0.000
#>     Z               0.569      0.030   18.724    0.000
#>     X:X            -0.004      0.021   -0.208    0.835
#>     Z:X             0.720      0.029   24.863    0.000
#> 
#> Intercepts:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .x1              1.023      0.024   42.843    0.000
#>    .x2              1.216      0.020   60.921    0.000
#>    .x3              0.920      0.022   41.437    0.000
#>    .z1              1.012      0.024   41.577    0.000
#>    .z2              1.206      0.020   59.270    0.000
#>    .z3              0.916      0.022   42.064    0.000
#>    .y1              1.041      0.038   27.515    0.000
#>    .y2              1.224      0.031   39.923    0.000
#>    .y3              0.958      0.034   27.933    0.000
#> 
#> Covariances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   X ~~          
#>     Z               0.200      0.024    8.239    0.000
#> 
#> Variances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .x1              0.158      0.009   18.146    0.000
#>    .x2              0.162      0.007   23.150    0.000
#>    .x3              0.165      0.008   20.765    0.000
#>    .z1              0.167      0.009   18.510    0.000
#>    .z2              0.160      0.007   22.678    0.000
#>    .z3              0.158      0.008   20.773    0.000
#>    .y1              0.160      0.009   18.011    0.000
#>    .y2              0.154      0.007   22.679    0.000
#>    .y3              0.164      0.008   20.679    0.000
#>     X               0.983      0.036   26.972    0.000
#>     Z               1.017      0.038   26.927    0.000
#>    .Y               0.980      0.038   25.899    0.000

More Complicated Examples

Here is a more complex example, using the theory of planned behavior (TPB) model.

tpb <- '
# Outer Model (Based on Hagger et al., 2007)
  ATT =~ att1 + att2 + att3 + att4 + att5
  SN =~ sn1 + sn2
  PBC =~ pbc1 + pbc2 + pbc3
  INT =~ int1 + int2 + int3
  BEH =~ b1 + b2

# Inner Model (Based on Steinmetz et al., 2011)
  INT ~ ATT + SN + PBC
  BEH ~ INT + PBC + INT:PBC
'

# The double-centering approach
est_tpb <- modsem(tpb, TPB)

# Using the LMS approach
est_tpb_lms <- modsem(tpb, TPB, method = "lms")
#> Warning: It is recommended that you have at least 32 nodes for interaction
#> effects between exogenous and endogenous variables in the lms approach 'nodes =
#> 24'
summary(est_tpb_lms)
#> 
#> modsem (1.0.12) ended normally after 27 iterations
#> 
#>   Estimator                                        LMS
#>   Optimization method                       EMA-NLMINB
#>   Number of model parameters                        54
#>                                                       
#>   Number of observations                          2000
#>  
#> Loglikelihood and Information Criteria:
#>   Loglikelihood                              -26326.10
#>   Akaike (AIC)                                52760.20
#>   Bayesian (BIC)                              53062.65
#>  
#> Numerical Integration:
#>   Points of integration (per dim)                   24
#>   Dimensions                                         1
#>   Total points of integration                       24
#>  
#> Fit Measures for Baseline Model (H0):
#>   Loglikelihood                              -26393.22
#>   Akaike (AIC)                                52892.45
#>   Bayesian (BIC)                              53189.29
#>   Chi-square                                     66.27
#>   Degrees of Freedom (Chi-square)                   82
#>   P-value (Chi-square)                           0.897
#>   RMSEA                                          0.000
#>  
#> Comparative Fit to H0 (LRT test):
#>   Loglikelihood change                           67.12
#>   Difference test (D)                           134.24
#>   Degrees of freedom (D)                             1
#>   P-value (D)                                    0.000
#>  
#> R-Squared Interaction Model (H1):
#>   INT                                            0.366
#>   BEH                                            0.263
#> R-Squared Baseline Model (H0):
#>   INT                                            0.367
#>   BEH                                            0.210
#> R-Squared Change (H1 - H0):
#>   INT                                           -0.001
#>   BEH                                            0.053
#> 
#> Parameter Estimates:
#>   Coefficients                          unstandardized
#>   Information                                 observed
#>   Standard errors                             standard
#>  
#> Latent Variables:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   ATT =~        
#>     att1            1.000                             
#>     att2            0.878      0.012   71.568    0.000
#>     att3            0.789      0.012   66.377    0.000
#>     att4            0.695      0.011   61.003    0.000
#>     att5            0.887      0.013   70.856    0.000
#>   SN =~         
#>     sn1             1.000                             
#>     sn2             0.888      0.017   52.599    0.000
#>   PBC =~        
#>     pbc1            1.000                             
#>     pbc2            0.912      0.013   69.337    0.000
#>     pbc3            0.801      0.012   65.982    0.000
#>   INT =~        
#>     int1            1.000                             
#>     int2            0.913      0.015   59.060    0.000
#>     int3            0.807      0.014   55.741    0.000
#>   BEH =~        
#>     b1              1.000                             
#>     b2              0.961      0.030   31.747    0.000
#> 
#> Regressions:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   INT ~         
#>     ATT             0.213      0.026    8.170    0.000
#>     SN              0.177      0.028    6.391    0.000
#>     PBC             0.217      0.030    7.341    0.000
#>   BEH ~         
#>     PBC             0.233      0.022   10.413    0.000
#>     INT             0.190      0.025    7.692    0.000
#>     INT:PBC         0.205      0.018   11.307    0.000
#> 
#> Intercepts:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .pbc1            0.999      0.024   42.326    0.000
#>    .pbc2            0.986      0.022   44.844    0.000
#>    .pbc3            0.992      0.020   50.353    0.000
#>    .att1            1.015      0.024   41.988    0.000
#>    .att2            1.008      0.021   46.949    0.000
#>    .att3            1.017      0.020   51.432    0.000
#>    .att4            1.000      0.018   55.629    0.000
#>    .att5            0.993      0.022   45.656    0.000
#>    .sn1             1.007      0.024   41.640    0.000
#>    .sn2             1.011      0.022   46.683    0.000
#>    .int1            1.014      0.022   46.949    0.000
#>    .int2            1.013      0.020   50.392    0.000
#>    .int3            1.006      0.018   54.791    0.000
#>    .b1              1.001      0.021   46.969    0.000
#>    .b2              1.019      0.020   51.041    0.000
#> 
#> Covariances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   ATT ~~        
#>     SN              0.629      0.029   21.899    0.000
#>   PBC ~~        
#>     ATT             0.677      0.029   23.622    0.000
#>     SN              0.677      0.029   23.234    0.000
#> 
#> Variances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .pbc1            0.144      0.008   18.373    0.000
#>    .pbc2            0.160      0.007   21.442    0.000
#>    .pbc3            0.155      0.006   23.881    0.000
#>    .att1            0.167      0.007   23.536    0.000
#>    .att2            0.150      0.006   24.722    0.000
#>    .att3            0.159      0.006   26.385    0.000
#>    .att4            0.162      0.006   27.653    0.000
#>    .att5            0.159      0.006   24.934    0.000
#>    .sn1             0.178      0.015   12.088    0.000
#>    .sn2             0.157      0.012   13.254    0.000
#>    .int1            0.157      0.009   18.110    0.000
#>    .int2            0.160      0.008   20.413    0.000
#>    .int3            0.168      0.007   23.555    0.000
#>    .b1              0.186      0.018   10.129    0.000
#>    .b2              0.135      0.017    8.120    0.000
#>     ATT             0.998      0.037   27.106    0.000
#>     SN              0.987      0.039   25.379    0.000
#>     PBC             0.961      0.035   27.094    0.000
#>    .INT             0.491      0.020   24.650    0.000
#>    .BEH             0.455      0.023   20.138    0.000

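The INT:PBC coefficient above is positive and significant, meaning the effect of INT on BEH grows stronger at higher levels of PBC. One way to see what this implies is to compute simple slopes by hand from the printed estimates (a quick base-R sketch using the coefficients and the PBC variance from the output above; modsem also provides plotting utilities for probing interactions):

```r
# Simple slope of INT on BEH at low/mean/high PBC, from the estimates
# printed above: BEH ~ 0.190*INT + 0.233*PBC + 0.205*INT:PBC
b_int    <- 0.190            # main effect of INT
b_intpbc <- 0.205            # interaction effect INT:PBC
sd_pbc   <- sqrt(0.961)      # SD of PBC (its variance is printed above)

z <- c(-sd_pbc, 0, sd_pbc)   # PBC one SD below / at / above its mean
slopes <- b_int + b_intpbc * z
round(slopes, 3)
#> [1] -0.011  0.190  0.391
```

At one SD below the mean of PBC the effect of INT on BEH is essentially zero, while at one SD above it is roughly twice the average effect.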
Quadratic effects are specified the same way, by interacting a variable with itself (e.g., ENJ:ENJ). Here is an example with two quadratic effects and one interaction effect, using the jordan dataset, which is a subset of the PISA 2006 dataset.

m2 <- '
ENJ =~ enjoy1 + enjoy2 + enjoy3 + enjoy4 + enjoy5
CAREER =~ career1 + career2 + career3 + career4
SC =~ academic1 + academic2 + academic3 + academic4 + academic5 + academic6
CAREER ~ ENJ + SC + ENJ:ENJ + SC:SC + ENJ:SC
'

est_jordan <- modsem(m2, data = jordan) # default: double centering ("dblcent")
est_jordan_qml <- modsem(m2, data = jordan, method = "qml")
summary(est_jordan_qml)
#> 
#> modsem (1.0.12) ended normally after 59 iterations
#> 
#>   Estimator                                        QML
#>   Optimization method                           NLMINB
#>   Number of model parameters                        51
#>                                                       
#>   Number of observations                          6038
#>  
#> Loglikelihood and Information Criteria:
#>   Loglikelihood                             -110519.99
#>   Akaike (AIC)                               221141.99
#>   Bayesian (BIC)                             221483.98
#>  
#> Fit Measures for Baseline Model (H0):
#>   Loglikelihood                             -110521.29
#>   Akaike (AIC)                               221138.58
#>   Bayesian (BIC)                             221460.46
#>   Chi-square                                   1016.34
#>   Degrees of Freedom (Chi-square)                   87
#>   P-value (Chi-square)                           0.000
#>   RMSEA                                          0.042
#>  
#> Comparative Fit to H0 (LRT test):
#>   Loglikelihood change                            1.30
#>   Difference test (D)                             2.59
#>   Degrees of freedom (D)                             3
#>   P-value (D)                                    0.459
#>  
#> R-Squared Interaction Model (H1):
#>   CAREER                                         0.513
#> R-Squared Baseline Model (H0):
#>   CAREER                                         0.510
#> R-Squared Change (H1 - H0):
#>   CAREER                                         0.003
#> 
#> Parameter Estimates:
#>   Coefficients                          unstandardized
#>   Information                                 observed
#>   Standard errors                             standard
#>  
#> Latent Variables:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ =~        
#>     enjoy1          1.000                             
#>     enjoy2          1.002      0.020   50.567    0.000
#>     enjoy3          0.894      0.020   43.661    0.000
#>     enjoy4          0.999      0.021   48.213    0.000
#>     enjoy5          1.047      0.021   50.383    0.000
#>   SC =~         
#>     academic1       1.000                             
#>     academic2       1.104      0.028   38.932    0.000
#>     academic3       1.235      0.030   41.703    0.000
#>     academic4       1.254      0.030   41.812    0.000
#>     academic5       1.114      0.029   38.635    0.000
#>     academic6       1.199      0.030   40.342    0.000
#>   CAREER =~     
#>     career1         1.000                             
#>     career2         1.040      0.016   65.185    0.000
#>     career3         0.952      0.016   57.843    0.000
#>     career4         0.818      0.017   48.363    0.000
#> 
#> Regressions:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   CAREER ~      
#>     ENJ             0.526      0.020   26.309    0.000
#>     SC              0.465      0.023   20.106    0.000
#>     ENJ:ENJ         0.029      0.021    1.365    0.172
#>     ENJ:SC         -0.046      0.043   -1.067    0.286
#>     SC:SC           0.001      0.033    0.030    0.976
#> 
#> Intercepts:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .enjoy1          0.000      0.012   -0.009    0.993
#>    .enjoy2          0.000      0.012    0.012    0.990
#>    .enjoy3          0.000      0.011   -0.027    0.978
#>    .enjoy4          0.000      0.014    0.006    0.995
#>    .enjoy5          0.000      0.011    0.031    0.975
#>    .academic1       0.000      0.014   -0.010    0.992
#>    .academic2       0.000      0.013   -0.011    0.991
#>    .academic3       0.000      0.012   -0.036    0.971
#>    .academic4       0.000      0.012   -0.022    0.983
#>    .academic5      -0.001      0.012   -0.053    0.958
#>    .academic6       0.001      0.012    0.056    0.956
#>    .career1        -0.005      0.015   -0.299    0.765
#>    .career2        -0.005      0.016   -0.349    0.727
#>    .career3        -0.005      0.015   -0.303    0.762
#>    .career4        -0.005      0.015   -0.313    0.755
#> 
#> Covariances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>   ENJ ~~        
#>     SC              0.217      0.009   25.474    0.000
#> 
#> Variances:
#>                  Estimate  Std.Error  z.value  P(>|z|)
#>    .enjoy1          0.487      0.011   44.350    0.000
#>    .enjoy2          0.489      0.011   44.421    0.000
#>    .enjoy3          0.596      0.012   48.235    0.000
#>    .enjoy4          0.488      0.011   44.569    0.000
#>    .enjoy5          0.442      0.010   42.477    0.000
#>    .academic1       0.645      0.013   49.813    0.000
#>    .academic2       0.566      0.012   47.863    0.000
#>    .academic3       0.473      0.011   44.319    0.000
#>    .academic4       0.455      0.010   43.581    0.000
#>    .academic5       0.565      0.012   47.685    0.000
#>    .academic6       0.502      0.011   45.439    0.000
#>    .career1         0.373      0.009   40.395    0.000
#>    .career2         0.328      0.009   37.017    0.000
#>    .career3         0.436      0.010   43.274    0.000
#>    .career4         0.576      0.012   48.375    0.000
#>     ENJ             0.500      0.017   29.540    0.000
#>     SC              0.338      0.015   23.183    0.000
#>    .CAREER          0.302      0.010   29.727    0.000
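
Here, none of the three nonlinear terms (ENJ:ENJ, SC:SC, ENJ:SC) is individually significant, and the likelihood ratio test against the baseline model confirms that they add nothing jointly. That test can be reproduced by hand from the printed log-likelihoods (the result differs from the summary's 2.59 in the last digit only because the printed log-likelihoods are rounded):

```r
# Reproduce the "Comparative Fit to H0 (LRT test)" block from the
# log-likelihoods printed in the summary above
ll_h1 <- -110519.99   # interaction model (H1)
ll_h0 <- -110521.29   # baseline model without nonlinear terms (H0)

D  <- 2 * (ll_h1 - ll_h0)          # difference test statistic
df <- 3                            # ENJ:ENJ, SC:SC, ENJ:SC
p  <- pchisq(D, df, lower.tail = FALSE)
round(c(D = D, p = p), 3)
#>     D     p 
#> 2.600 0.457
```

With p well above conventional thresholds, the simpler linear model is preferred for these data.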