Algorithms

Solving Linear Mixed Model Equations

lmt supports two types of solver for the mixed model equations (MME): a direct solver and an iterative solver.
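
To fix notation, here is a minimal Python sketch (all names are illustrative; this is not lmt's API) that builds Henderson's mixed model equation system C x = b for a toy single-trait model y = Xb + Zu + e with one fixed and one random factor, where λ = σ²_e/σ²_u:

import numpy as np

rng = np.random.default_rng(0)
n, p, q = 8, 2, 3                            # records, fixed levels, random levels
X = np.column_stack([np.ones(n), np.tile([0.0, 1.0], n // 2)])  # fixed incidence
Z = np.eye(q)[rng.integers(0, q, n)]         # random-factor incidence
y = rng.normal(size=n)
Ainv = np.eye(q)                             # identity relationship, for simplicity
lam = 2.0                                    # assumed variance ratio sigma2_e/sigma2_u

# Henderson's mixed model equations  C x = b:
#   [ X'X       X'Z            ] [b_hat]   [X'y]
#   [ Z'X   Z'Z + lam * A^{-1} ] [u_hat] = [Z'y]
C = np.block([[X.T @ X, X.T @ Z],
              [Z.T @ X, Z.T @ Z + lam * Ainv]])
b = np.concatenate([X.T @ y, Z.T @ y])

Both solvers described below operate on this system; the iterative solver never needs C in explicit form, only products of C with vectors.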

Iterative solver

The iterative solver uses the preconditioned conjugate gradient (PCG) method and is lmt's default solver. It does not require the explicit construction of the mixed model equation system and is therefore less resource-demanding than the direct solver. As a result, many models that cannot be solved with the direct solver can still be solved with the iterative solver. Even for small models, the iterative solver usually outperforms the direct solver in terms of total processing time.

Whether the iterative solver has converged in round $$i$$ can be evaluated with the convergence criteria $$log_e\left(\sqrt{\frac{||Cx_i-b||}{||b||}}\right)<t$$ or $$log_e\left(\sqrt{\frac{||x_{i}-x_{i-1}||}{||x_{i-1}||}}\right)<t$$, where $$C$$ is the mixed model coefficient matrix, $$x_i$$ is the solution vector in round $$i$$, $$b$$ is the right-hand side and $$t$$ is the convergence threshold, which defaults to $$-18.42$$, i.e. $$log_e(10^{-8})$$.
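
As an illustration, a minimal PCG sketch using the first criterion above (the Jacobi/diagonal preconditioner is an assumption made for the sketch, not necessarily lmt's choice):

import numpy as np

def pcg(C, b, t=-18.42, max_iter=1000):
    """Solve C x = b; stop once log_e(sqrt(||C x - b|| / ||b||)) < t."""
    Minv = 1.0 / np.diag(C)                  # Jacobi (diagonal) preconditioner
    x = np.zeros_like(b)
    r = b - C @ x                            # residual
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Cp = C @ p
        alpha = rz / (p @ Cp)
        x += alpha * p
        r -= alpha * Cp
        if np.log(np.sqrt(np.linalg.norm(r) / np.linalg.norm(b))) < t:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# x_hat = pcg(C, b)   # with C, b from the MME sketch above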

Direct solver

The direct solver requires the mixed model coefficient matrix to be built and all Kronecker products to be resolved. This can be quite memory-demanding and should therefore be used with care. The direct solver uses a Cholesky decomposition followed by forward-backward substitution to solve the mixed model equation system; the decomposition step in particular can be very resource- and time-consuming.
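
In code, the procedure amounts to the following dense sketch (lmt's sparse storage and factorization details will differ):

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def direct_solve(C, b):
    c_lower = cho_factor(C, lower=True)      # Cholesky decomposition: C = L L'
    return cho_solve(c_lower, b)             # forward, then backward substitution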

Variance component estimation

Gibbs sampling

Single-pass Gibbs sampling

lmt's single-pass Gibbs sampling algorithm is described in [1]. In short, all location parameters are drawn from their joint conditional posterior distribution. Note that this requires solving the mixed model equation system once per iteration, which usually leads to a substantial increase in processing time.
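
One iteration of such a sampler can be sketched as follows (illustrative, not lmt's implementation): with the MME as built above, the joint conditional posterior of all location parameters is N(C⁻¹b, σ²_e C⁻¹), so every draw involves one solve of the system:

import numpy as np

def joint_location_draw(C, b, sigma2_e, rng):
    mean = np.linalg.solve(C, b)             # solution of the MME
    Lc = np.linalg.cholesky(C)               # C = Lc Lc'
    z = rng.normal(size=b.shape)
    dev = np.linalg.solve(Lc.T, z)           # Cov(dev) = C^{-1}
    return mean + np.sqrt(sigma2_e) * dev    # draw from N(C^{-1} b, sigma2_e C^{-1})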

Blocked Gibbs sampling

For random factors, lmt's blocked Gibbs sampler draws the correlated location parameters within each factor level from their joint conditional posterior distribution. Location parameters of fixed factors are drawn one at a time (in scalar mode) from their full conditional posteriors.
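
A minimal sketch of one blocked sweep (illustrative; `blocks` is an assumed list holding the equation indices of each level of the random factor):

import numpy as np

def blocked_gibbs_sweep(C, b, theta, blocks, sigma2_e, rng):
    """One sweep: redraw each level's correlated effects jointly."""
    for idx in blocks:
        Cbb = C[np.ix_(idx, idx)]                    # the level's own block
        # right-hand side adjusted for all other current effects
        r = b[idx] - C[idx] @ theta + Cbb @ theta[idx]
        mean = np.linalg.solve(Cbb, r)
        Lb = np.linalg.cholesky(Cbb)
        z = rng.normal(size=len(idx))
        theta[idx] = mean + np.sqrt(sigma2_e) * np.linalg.solve(Lb.T, z)
    return theta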

Restricted Maximum Likelihood

MC-EM-REML

lmt provides a Monte Carlo expectation-maximisation REML algorithm which uses the preconditioned conjugate gradient solver for solving the mixed model equations and a blocked Gibbs sampler to sample the necessary traces [2].
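
The flavour of the Monte Carlo step can be sketched roughly (an illustrative formulation, not lmt's code) for a single random factor u with Var(u) = A σ²_u: the EM update of σ²_u needs E[u'A⁻¹u | y], whose trace term is the expensive part; averaging the quadratic form over Gibbs samples of u replaces the exact expectation:

import numpy as np

def mc_em_update(u_samples, Ainv):
    """u_samples: (S, q) array of Gibbs draws of the random effects u."""
    S, q = u_samples.shape
    # the sample average of u' A^{-1} u approximates
    # E[u' A^{-1} u | y] = u_hat' A^{-1} u_hat + tr(A^{-1} Cuu) * sigma2_e
    quad = np.mean([u @ Ainv @ u for u in u_samples])
    return quad / q                          # updated sigma2_u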

Average information (AI) REML

lmt provides the calculation of variance components using average information REML as described in [3], [4] and [5]. REML estimates of covariance matrices can be derived using either the phenotypic covariance matrix $$V$$ or the coefficient matrix $$C$$ of the mixed model equation system.

lmt provides three different AI-REML convergence criteria (a small code sketch of all three follows the list):

  • the relative change of the log-likelihood, calculated as $$log_e\left(\sqrt{\frac{|l_{i}-l_{i-1}|}{|l_{i-1}|}}\right)$$ where $$l$$ is the log-likelihood and $$i$$ is the iteration counter.
  • the relative change of the parameter vector, $$log_e\left(\sqrt{\frac{||p_{i}-p_{i-1}||}{||p_{i-1}||}}\right)$$ where $$p$$ is the parameter vector and $$i$$ is the iteration counter.
  • the norm of the gradient, $$log_e\left(\sqrt{||g_{i}||}\right)$$ where $$g$$ is the gradient vector and $$i$$ is the iteration counter.
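
Translated literally into code (function names are illustrative):

import numpy as np

def loglik_criterion(l_i, l_prev):
    return np.log(np.sqrt(abs(l_i - l_prev) / abs(l_prev)))

def parameter_criterion(p_i, p_prev):
    return np.log(np.sqrt(np.linalg.norm(p_i - p_prev) / np.linalg.norm(p_prev)))

def gradient_criterion(g_i):
    return np.log(np.sqrt(np.linalg.norm(g_i)))
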
AI-REML-C

lmt supports AI-REML-C, which relies on the construction and factorization of the coefficient matrix $$C$$ of the mixed model equation system. Consequently, only models and variance structures for which these operations are feasible are supported.

Elements of the inverse of the mixed model coefficient matrix

In principle, lmt can generate any element of the inverse of the mixed model coefficient matrix. However, the user interface is currently limited to the diagonal elements for fixed factors and the diagonal blocks for random factors. These elements can either be sampled or obtained exactly by solving.

Gibbs Sampling

Following the approach of Harville (1999) [6], lmt can sample the diagonal elements of the inverse of the mixed model coefficient matrix for fixed factors, and the diagonal blocks of the inverse for random factors, where the block size is determined by the dimension of the related $$\Sigma$$ matrix. The blocks are the prediction error covariance matrices of the factor levels of correlated sub-factors. When sampling prediction error variances, lmt can run many Gibbs chains in parallel, allowing it to exploit multi-core hardware architectures. However, it is recommended not to specify more chains than the number of available physical cores, excluding hyper-threading.
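
A deliberately simplified scalar-Gibbs sketch of the underlying idea: draws x ~ N(0, C⁻¹) are generated by Gibbs sampling, and their empirical second moments estimate elements of C⁻¹. lmt samples in blocks and only the required elements; the dense accumulation below is for clarity only:

import numpy as np

def sample_inverse(C, n_samples=5000, burn_in=500, seed=0):
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    x = np.zeros(n)
    acc = np.zeros((n, n))
    for s in range(burn_in + n_samples):
        for i in range(n):
            # full conditional of x_i under x ~ N(0, C^{-1}):
            # mean -sum_{j != i} C_ij x_j / C_ii, variance 1 / C_ii
            m = -(C[i] @ x - C[i, i] * x[i]) / C[i, i]
            x[i] = rng.normal(m, 1.0 / np.sqrt(C[i, i]))
        if s >= burn_in:
            acc += np.outer(x, x)
    return acc / n_samples                   # Monte Carlo estimate of C^{-1}

Independent chains of this kind can be run in parallel and averaged, which is what the parallel-chains recommendation above refers to.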

Solving

lmt can obtain elements of the inverse of the coefficient matrix by solving the mixed model equations. This method is currently only supported for the diagonal prediction error covariance blocks of random factors, where the block size is determined by the dimension of the related $$\Sigma$$ matrix. For this algorithm lmt can utilize either the iterative solver or the direct solver described above.
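
A minimal sketch of this approach (function and parameter names are illustrative): each column of a level's prediction error covariance block is obtained by one solve of the system with a unit right-hand side, and `solve` may be either the iterative or the direct solver:

import numpy as np

def pev_block(C, idx, solve=np.linalg.solve):
    """idx: equation indices of one factor level."""
    n = C.shape[0]
    block = np.empty((len(idx), len(idx)))
    for k, j in enumerate(idx):
        e = np.zeros(n)
        e[j] = 1.0                           # unit right-hand side
        z = solve(C, e)                      # j-th column of C^{-1}
        block[:, k] = z[idx]                 # keep only the level's rows
    return block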

Iterative inbreeding

lmt supports the iterative calculation of inbreeding coefficients as described in VanRaden (1992) [7].
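
For orientation, here is a small sketch of the quantity involved, using the textbook recursive kinship rather than VanRaden's memory-saving iterative scheme: an animal's inbreeding coefficient is the kinship between its parents (half the parents' additive relationship):

from functools import lru_cache

# toy pedigree: animal -> (sire, dam); 0 marks an unknown parent,
# and parents are assumed to have lower IDs than their offspring
pedigree = {1: (0, 0), 2: (0, 0), 3: (1, 2), 4: (1, 2), 5: (3, 4)}

@lru_cache(maxsize=None)
def kinship(i, j):
    if i == 0 or j == 0:
        return 0.0                           # unknown parents are unrelated
    if i == j:
        s, d = pedigree[i]
        return 0.5 * (1.0 + kinship(s, d))   # self-kinship: (1 + F_i) / 2
    if i < j:
        i, j = j, i                          # recurse through the younger animal
    s, d = pedigree[i]
    return 0.5 * (kinship(s, j) + kinship(d, j))

def inbreeding(i):
    s, d = pedigree[i]
    return kinship(s, d)                     # F_i = kinship(sire, dam)

print(inbreeding(5))                         # 0.25 for this full-sib mating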


References

  1. D. Sorensen and D. Gianola; Likelihood, Bayesian, and MCMC Methods in Quantitative Genetics; Springer; 2002; pp. 584-588
  2. D. A. Harville; Making REML computationally feasible for large data sets: use of the Gibbs sampler; Journal of Statistical Computation & Simulation; 2004
  3. D. L. Johnson and R. Thompson; Restricted maximum likelihood estimation of variance components for univariate animal models using sparse matrix techniques and average information; Journal of Dairy Science; 1995
  4. A. R. Gilmour, R. Thompson and B. R. Cullis; Average information REML: an efficient algorithm for variance parameter estimation in linear mixed models; Biometrics; 1995
  5. J. Jensen, E. A. Mäntysaari, P. Madsen and R. Thompson; Residual maximum likelihood estimation of (co)variance components in multivariate mixed linear models using average information; Journal of the Indian Society of Agricultural Statistics; 1997
  6. D. A. Harville; Use of the Gibbs sampler to invert large, possibly sparse, positive definite matrices; Linear Algebra and its Applications; 1999
  7. P. M. VanRaden; Accounting for Inbreeding and Crossbreeding in Genetic Evaluation of Large Populations; Journal of Dairy Science; 1992