Examples

The lmt parameter file is written in the Extensible Markup Language (XML). To understand the examples, you may want to get a jump start on XML file structure first. Don't be scared: even without any previous knowledge, you'll be able to understand the XML structure used for lmt in less than half an hour. Please consult the Internet to find a suitable introduction to XML.
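
As a quick taste of the building blocks (nested elements, attributes, and text content), here is a minimal sketch that parses a generic XML fragment with Python's standard library. The tag and attribute names are invented purely for illustration; they are not lmt's actual parameter schema.

 # Generic XML illustration: elements nest, and each element may carry
 # attributes (key="value") and text. The names below are hypothetical,
 # NOT lmt's parameter schema.
 import xml.etree.ElementTree as ET
 
 doc = """
 <analysis>
     <data file="pheno.txt"/>
     <model>
         <fixed>mean</fixed>
     </model>
 </analysis>
 """
 
 root = ET.fromstring(doc)                # parse the document
 print(root.tag)                          # -> analysis   (the root element)
 print(root.find("data").get("file"))     # -> pheno.txt  (an attribute)
 print(root.find("model/fixed").text)     # -> mean       (nested element text)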

Estimating a mean

Consider the linear model $$y=Xb+e$$, where $$y$$ is a vector of $$n$$ observations, $$X$$ is a single-column matrix of ones, $$e$$ is the vector of residuals, and $$y\sim N(Xb,R)$$, where $$R$$ is an $$n \times n$$ covariance matrix. Since the only source of variation in $$y$$ is $$e$$, $$R$$ is equivalent to the residual covariance matrix. The generalized least squares solution for $$b$$ is obtained from $$b=(X'R^{-1}X)^{-1}X'R^{-1}y$$. Note that in simple cases $$R=I\sigma_e^2$$, where $$I$$ is an $$n \times n$$ identity matrix; the scalar $$\sigma_e^{-2}$$ then cancels, and estimating $$b$$ simplifies to $$b=(X'X)^{-1}X'y$$. Since $$X'X=n$$ and $$X'y=\sum_{i=1}^n y_i$$, this is equivalent to $$b=\frac{\sum_{i=1}^n y_i}{n}$$, the arithmetic mean. These formulas may seem a bit excessive for the task of estimating a mean; however, they should raise awareness that a sound general model always needs a residual covariance matrix.
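
As a numerical check of the algebra above, the following sketch (using numpy, not lmt itself) builds $$X$$ as a column of ones and $$R=I\sigma_e^2$$, then verifies that the generalized least squares solution, the ordinary least squares solution, and the arithmetic mean all coincide:

 # Numerical check (numpy, not lmt): with X a column of ones and
 # R = I * sigma_e^2, the GLS estimate of b equals the sample mean.
 import numpy as np
 
 rng = np.random.default_rng(1)
 n = 5
 y = rng.normal(loc=10.0, scale=2.0, size=n)  # n observations
 X = np.ones((n, 1))                          # single column of ones
 R = np.eye(n) * 2.0**2                       # R = I * sigma_e^2
 Rinv = np.linalg.inv(R)
 
 b_gls = np.linalg.solve(X.T @ Rinv @ X, X.T @ Rinv @ y)  # (X'R^-1X)^-1 X'R^-1 y
 b_ols = np.linalg.solve(X.T @ X, X.T @ y)                # (X'X)^-1 X'y
 
 print(b_gls[0], b_ols[0], y.mean())  # all three agree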

Assume we have a data file: