3.10.4 Joint-normal Distributions
Let X be an n-dimensional random vector with mean vector μ and covariance matrix Σ. Suppose the marginal distribution of each component Xi is normal. Let Y be a random variable defined as a linear polynomial

Y = b1X1 + b2X2 + … + bnXn + a

of X. Must Y be normal? Marginal normality of the components alone does not guarantee it.
The definition of joint-normality is almost trivial. A random vector X is said to be joint-normal if every nontrivial linear polynomial Y of X is normal. Joint-normal distributions are sometimes called multivariate normal or multinormal distributions.
We denote the n-dimensional joint-normal distribution with mean vector μ and covariance matrix Σ as Nn(μ,Σ). If Σ is positive definite, it has PDF

φ(x) = (2π)^(–n/2) |Σ|^(–1/2) exp[ –(1/2)(x – μ)′ Σ^(–1) (x – μ) ],
where |Σ| is the determinant of Σ. Exhibit 3.20 illustrates a joint-normal distribution in two random variables X1 and X2. If we define Y = X1 + X2, then Y is normal.
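As a sketch of the PDF above, the closed-form density can be evaluated directly and compared against SciPy's `multivariate_normal` (the parameters below are hypothetical, chosen only for illustration):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical parameters for a bivariate joint-normal distribution N2(mu, Sigma)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])  # positive definite

def joint_normal_pdf(x, mu, Sigma):
    """Evaluate the Nn(mu, Sigma) density via the closed-form expression."""
    n = len(mu)
    diff = x - mu
    det = np.linalg.det(Sigma)                # |Sigma|
    quad = diff @ np.linalg.solve(Sigma, diff)  # (x - mu)' Sigma^(-1) (x - mu)
    return (2 * np.pi) ** (-n / 2) * det ** (-0.5) * np.exp(-0.5 * quad)

x = np.array([0.5, -1.5])
manual = joint_normal_pdf(x, mu, Sigma)
library = multivariate_normal(mean=mu, cov=Sigma).pdf(x)
```

The two evaluations agree, which is a quick sanity check on the closed-form expression.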
A random vector is joint-normal with uncorrelated components if and only if the components are independent normal random variables.
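One way to see the independence claim numerically: with a diagonal covariance matrix (uncorrelated components), the joint density factors into the product of the univariate normal marginal densities. A minimal sketch, with hypothetical parameters:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Hypothetical uncorrelated joint-normal: diagonal covariance matrix
mu = np.array([0.5, -1.0])
Sigma = np.diag([2.0, 0.5])

x = np.array([1.2, -0.3])
joint = multivariate_normal(mean=mu, cov=Sigma).pdf(x)

# Product of the univariate marginal densities (norm takes a standard deviation)
product = norm(0.5, np.sqrt(2.0)).pdf(1.2) * norm(-1.0, np.sqrt(0.5)).pdf(-0.3)
```

The factorization `joint == product` is exactly the definition of independence for continuous random variables.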
A useful property of joint-normal distributions is that marginal distributions and conditional distributions are themselves normal (if univariate) or joint-normal (if multivariate). Specifically, let X ~ Nn(μ,Σ). Select k components. Without loss of generality, suppose these are the first k components X1, X2, …, Xk. Let X1 be the k-dimensional vector comprising these components, and let X2 be the (n – k)-dimensional vector of the remaining components. These partition X, μ and Σ into sub-vectors and sub-matrices as follows:

X = (X1′, X2′)′,  μ = (μ1′, μ2′)′,  Σ = ( Σ1,1  Σ1,2 ; Σ2,1  Σ2,2 ),

where Σ1,1 is the k × k covariance matrix of X1, Σ2,2 is the (n – k) × (n – k) covariance matrix of X2, and Σ1,2 = Σ2,1′ holds the cross-covariances.
The marginal distribution of X1 is Nk(μ1,Σ1,1) and that of X2 is Nn–k(μ2,Σ2,2). If Σ2,2 is positive definite, the conditional distribution of X1 given that X2 = x2 is

Nk( μ1 + Σ1,2 Σ2,2^(–1) (x2 – μ2),  Σ1,1 – Σ1,2 Σ2,2^(–1) Σ2,1 ).
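The conditional mean and covariance can be computed directly from the partitioned parameters. A sketch with hypothetical numbers, taking n = 3 and k = 1:

```python
import numpy as np

# Hypothetical partitioned parameters for X ~ N3(mu, Sigma), with k = 1
mu = np.array([0.0, 1.0, -1.0])
Sigma = np.array([[4.0, 1.2, 0.8],
                  [1.2, 2.0, 0.5],
                  [0.8, 0.5, 1.5]])
k = 1
mu1, mu2 = mu[:k], mu[k:]
S11 = Sigma[:k, :k]   # Sigma_{1,1}
S12 = Sigma[:k, k:]   # Sigma_{1,2}
S21 = Sigma[k:, :k]   # Sigma_{2,1}
S22 = Sigma[k:, k:]   # Sigma_{2,2}

x2 = np.array([1.5, -0.5])  # observed value of X2

# Conditional distribution of X1 given X2 = x2:
# mean  mu1 + Sigma_{1,2} Sigma_{2,2}^(-1) (x2 - mu2)
# cov   Sigma_{1,1} - Sigma_{1,2} Sigma_{2,2}^(-1) Sigma_{2,1}
cond_mean = mu1 + S12 @ np.linalg.solve(S22, x2 - mu2)
cond_cov = S11 - S12 @ np.linalg.solve(S22, S21)
```

Note that the conditional covariance does not depend on the observed value x2, and conditioning never increases the variance of X1.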
If X ~ Nn(μ,Σ), b is a constant m × n matrix, and a is an m-dimensional constant vector, then

bX + a ~ Nm(bμ + a, bΣb′).
This generalizes property [3.94] of one-dimensional normal distributions.
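The transformation property can be checked by Monte Carlo: sample moments of bX + a should approach bμ + a and bΣb′. A sketch with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: X ~ N3(mu, Sigma), transformed by Y = bX + a
mu = np.array([1.0, 0.0, -1.0])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.5]])
b = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 1.0]])  # constant 2 x 3 matrix
a = np.array([0.5, -0.5])

# Parameters of the transformed distribution N2(b mu + a, b Sigma b')
mean_Y = b @ mu + a
cov_Y = b @ Sigma @ b.T

# Monte Carlo check: sample moments of bX + a approach these parameters
X = rng.multivariate_normal(mu, Sigma, size=200_000)
Y = X @ b.T + a
```

With a couple hundred thousand draws, the sample mean and sample covariance of Y match the theoretical parameters to within a few standard errors.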