Interaction between latent variables using lms and qml approaches
modsem_da.Rd
modsem_da() is a function for estimating interaction effects between latent variables in structural equation models (SEMs) using distribution analytic (DA) approaches. Methods for estimating interaction effects in SEMs can broadly be split into two frameworks:
1. Product indicator-based approaches ("dblcent", "rca", "uca", "ca", "pind")
2. Distributionally based approaches ("lms", "qml")
modsem_da() handles the latter, and can estimate models using both the QML and LMS approaches.
NOTE: Run default_settings_da to see the default arguments.
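The defaults can be inspected directly. A minimal sketch, assuming the modsem package is installed:

```r
library(modsem)

# Print the default argument values used by modsem_da().
defaults <- default_settings_da()
str(defaults)
```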
Usage
modsem_da(
model.syntax = NULL,
data = NULL,
method = "lms",
verbose = NULL,
optimize = NULL,
nodes = NULL,
convergence.abs = NULL,
convergence.rel = NULL,
optimizer = NULL,
center.data = NULL,
standardize.data = NULL,
standardize.out = NULL,
standardize = NULL,
mean.observed = NULL,
cov.syntax = NULL,
double = NULL,
calc.se = NULL,
FIM = NULL,
EFIM.S = NULL,
OFIM.hessian = NULL,
EFIM.parametric = NULL,
robust.se = NULL,
R.max = NULL,
max.iter = NULL,
max.step = NULL,
start = NULL,
epsilon = NULL,
quad.range = NULL,
adaptive.quad = NULL,
adaptive.frequency = NULL,
n.threads = NULL,
algorithm = NULL,
em.control = NULL,
rcs = FALSE,
rcs.choose = NULL,
...
)
Arguments
- model.syntax
lavaan syntax
- data
dataframe
- method
method to use:
"lms" = latent moderated structural equations (not passed to lavaan).
"qml" = quasi maximum likelihood estimation of latent moderated structural equations (not passed to lavaan).
- verbose
should estimation progress be shown
- optimize
should starting parameters be optimized
- nodes
number of quadrature nodes (points of integration) used in lms. A larger number gives better estimates but slower computation. How many are needed depends on the complexity of the model. For simple models, somewhere between 16-24 nodes should be enough; for more complex models, higher numbers may be needed. For models with an interaction effect between an endogenous and an exogenous variable, the number of nodes should be at least 32, but practically (e.g., with ordinal or skewed data), more than 32 is recommended. In cases where the data is non-normal, it might be better to use the qml approach instead. For large numbers of nodes, you might want to change the quad.range argument.
- convergence.abs
Absolute convergence criterion. Lower values give better estimates but slower computation. Not relevant when using the QML approach. For the LMS approach the EM-algorithm stops whenever the relative or absolute convergence criterion is reached.
- convergence.rel
Relative convergence criterion. Lower values give better estimates but slower computation. For the LMS approach the EM-algorithm stops whenever the relative or absolute convergence criterion is reached.
- optimizer
optimizer to use; can be either "nlminb" or "L-BFGS-B". For LMS, "nlminb" is recommended. For QML, "L-BFGS-B" may be faster if there is a large number of iterations, but slower if there are few iterations.
- center.data
should the data be centered before fitting the model
- standardize.data
should the data be scaled before fitting the model? This will be overridden by standardize if standardize is set to TRUE.
NOTE: It is recommended that you estimate the model normally, and then standardize the output using standardize_model, standardized_estimates, or summary(<modsem_da-object>, standardize = TRUE).
- standardize.out
should the output be standardized? Note that this will alter the relationships of parameter constraints, since parameters are scaled unevenly even if they have the same label. This does not alter the estimation of the model, only the output.
NOTE: It is recommended that you estimate the model normally, and then standardize the output using standardized_estimates.
- standardize
will standardize the data before fitting the model, remove the mean structure of the observed variables, and standardize the output. Note that standardize.data, mean.observed, and standardize.out will be overridden by standardize if standardize is set to TRUE.
NOTE: It is recommended that you estimate the model normally, and then standardize the output using standardized_estimates.
- mean.observed
should the mean structure of the observed variables be estimated? This will be overridden by standardize if standardize is set to TRUE.
.NOTE: Not recommended unless you know what you are doing.
- cov.syntax
model syntax for the implied covariance matrix (see vignette("interaction_two_etas", "modsem"))
- double
try to double the number of dimensions of integration used in LMS. This will be extremely slow, but should be more similar to mplus.
- calc.se
should standard errors be computed? NOTE: If FALSE, the information matrix will not be computed either.
- FIM
should the Fisher information matrix be calculated using the observed or expected values? Must be either "observed" or "expected".
- EFIM.S
if the expected Fisher information matrix is computed, EFIM.S selects the number of Monte Carlo samples. Defaults to 100. NOTE: This number should likely be increased for better estimates (e.g., 1000), but it might drastically increase computation time.
- OFIM.hessian
Logical. If TRUE (default), standard errors are based on the negative Hessian (observed Fisher information). If FALSE, they come from the outer product of individual score vectors (OPG). For correctly specified models, these two matrices are asymptotically equivalent, yielding nearly identical standard errors in large samples. The Hessian usually shows smaller finite-sample variance (i.e., it is more consistent), and is therefore the default. Note that the Hessian is not always positive definite, and is more computationally expensive to calculate. The OPG should always be positive definite, and is a lot faster to compute. If the model is correctly specified and the sample size is large, the two should yield similar results, and switching to the OPG can save a lot of time. Note that the required sample size depends on the complexity of the model.
A large difference between the Hessian and OPG suggests misspecification, and robust.se = TRUE should be set to obtain sandwich (robust) standard errors.
- EFIM.parametric
should data for calculating the expected Fisher information matrix be simulated parametrically (based on the assumptions and implied parameters from the model), or non-parametrically (stochastically sampled)? If you believe that normality assumptions are violated, EFIM.parametric = FALSE might be the better option.
- robust.se
should robust standard errors be computed, using the sandwich estimator?
- R.max
Maximum population size (not sample size) used in the calculation of the expected Fisher information matrix.
- max.iter
maximum number of iterations.
- max.step
maximum steps for the M-step in the EM algorithm (LMS).
- start
starting parameters.
- epsilon
finite difference for numerical derivatives.
- quad.range
the range, in z-scores, over which to perform numerical integration in LMS when using quasi-adaptive Gauss-Hermite quadratures. By default Inf, such that f(t) is integrated from -Inf to Inf, but this will likely be inefficient and pointless at a large number of nodes. Nodes outside +/- quad.range will be ignored.
- adaptive.quad
should a quasi-adaptive quadrature be used? If TRUE, the quadrature nodes are adapted to the data; if FALSE, they are fixed. Default is FALSE. The quasi-adaptive quadrature does not fit an adaptive quadrature to each participant, but instead tries to place more nodes where the posterior distribution is highest. Compared with a fixed Gauss-Hermite quadrature, this usually means that fewer nodes are placed at the tails of the distribution.
- adaptive.frequency
How often should the quasi-adaptive quadrature be calculated? Defaults to 3, meaning that it is recalculated every third EM-iteration.
- n.threads
number of cores to use for parallel processing. If NULL, it will use <= 2 threads. If an integer is specified, it will use that number of threads (e.g., n.threads = 4 will use 4 threads). If "default", it will use the default number of threads (2). If "max", it will use all available threads; "min" will use 1 thread.
- algorithm
algorithm to use for the EM algorithm. Can be either "EM" or "EMA". "EM" is the standard EM algorithm. "EMA" is an accelerated EM procedure that uses Quasi-Newton and Fisher scoring optimization steps when needed. Default is "EM".
- em.control
a list of control parameters for the EM algorithm. See default_settings_da for defaults.
- rcs
should latent variable indicators be replaced with reliability-corrected single-item indicators? See relcorr_single_item.
- rcs.choose
which latent variables should have their indicators replaced with reliability-corrected single items? Corresponds to the choose argument in relcorr_single_item.
- ...
additional arguments to be passed to the estimation function.
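As recommended in the notes on standardize.data, standardize.out, and standardize, the model is best estimated on the raw scale first, with standardization applied to the output afterwards. A minimal sketch of that workflow, using the functions and the oneInt dataset referenced above:

```r
library(modsem)

m1 <- "
  # Outer Model
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3
  # Inner model
  Y ~ X + Z + X:Z
"

# Estimate on the raw scale first ...
est <- modsem_da(m1, data = oneInt, method = "qml")

# ... then standardize the output, instead of setting standardize.out = TRUE.
standardized_estimates(est)
summary(est, standardize = TRUE)
```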
Examples
library(modsem)
# For more examples, check README and/or GitHub.
# One interaction
m1 <- "
# Outer Model
X =~ x1 + x2 + x3
Y =~ y1 + y2 + y3
Z =~ z1 + z2 + z3
# Inner model
Y ~ X + Z + X:Z
"
if (FALSE) { # \dontrun{
# QML Approach
est_qml <- modsem_da(m1, oneInt, method = "qml")
summary(est_qml)
# Theory Of Planned Behavior
tpb <- "
# Outer Model (Based on Hagger et al., 2007)
ATT =~ att1 + att2 + att3 + att4 + att5
SN =~ sn1 + sn2
PBC =~ pbc1 + pbc2 + pbc3
INT =~ int1 + int2 + int3
BEH =~ b1 + b2
# Inner Model (Based on Steinmetz et al., 2011)
# Covariances
ATT ~~ SN + PBC
PBC ~~ SN
# Causal Relationships
INT ~ ATT + SN + PBC
BEH ~ INT + PBC
BEH ~ INT:PBC
"
# LMS Approach
est_lms <- modsem_da(tpb, data = TPB, method = "lms")
summary(est_lms)
} # }
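The rcs and rcs.choose arguments can be combined in the same call. A hedged sketch (the choice of X and Z for rcs.choose is purely illustrative; see relcorr_single_item for details):

```r
library(modsem)

m1 <- "
  X =~ x1 + x2 + x3
  Y =~ y1 + y2 + y3
  Z =~ z1 + z2 + z3
  Y ~ X + Z + X:Z
"

# Replace the indicators of X and Z (only) with
# reliability-corrected single items before estimation.
est_rcs <- modsem_da(m1, data = oneInt, method = "qml",
                     rcs = TRUE, rcs.choose = c("X", "Z"))
summary(est_rcs)
```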