How do we define independence for random vectors?
Eg, let $\mathbf{e}$ be a random $n \times 1$ vector and $\mathbf{b}$ be a random $k \times 1$ vector (where $n$ does not necessarily have to equal $k$); then how do we define independence between $\mathbf{e}$ and $\mathbf{b}$?
For example, if we take just 2 random variables $X$, $Y$, then $X$ and $Y$ are independent iff $f(x,y) = f(x)f(y)$, where $f(x,y)$ characterizes their joint pdf and $f(x)$, $f(y)$ are their marginal pdfs. If we adapt this to the random vector case, we have $f(\mathbf{e}, \mathbf{b}) = f(\mathbf{e})\, f(\mathbf{b})$. Examining the RHS, we see that $f(\mathbf{e})$ is simply the joint distribution of $e_1, \ldots, e_n$ and likewise, $f(\mathbf{b})$ is the joint distribution of $b_1, \ldots, b_k$, but then what is $f(\mathbf{e}, \mathbf{b})$? How is it defined?
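To pin down the factorization I'm trying to adapt, here is how I would write it out, assuming (and this is my assumption, which I'd like confirmed) that $f(\mathbf{e}, \mathbf{b})$ means the density of the stacked $(n+k) \times 1$ vector $(\mathbf{e}', \mathbf{b}')'$:

$$f_{\mathbf{e},\mathbf{b}}(u_1,\ldots,u_n,\,v_1,\ldots,v_k) \;=\; f_{\mathbf{e}}(u_1,\ldots,u_n)\, f_{\mathbf{b}}(v_1,\ldots,v_k) \quad \text{for all } (u,v) \in \mathbb{R}^{n+k}.$$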
Continuing on, for a more special case, consider when $\mathbf{e}$ is jointly multivariate normal, ie, $\mathbf{e} \sim \mathcal{N}(\boldsymbol{\mu}_{\mathbf{e}}, \Sigma_{\mathbf{e}})$, and $\mathbf{b}$ is also jointly multivariate normal, ie, $\mathbf{b} \sim \mathcal{N}(\boldsymbol{\mu}_{\mathbf{b}}, \Sigma_{\mathbf{b}})$. Now, in the special case of two jointly normally distributed variables $X$ and $Y$, a sufficient condition for independence is $\operatorname{cov}(X,Y) = 0$, where $\operatorname{cov}(\cdot,\cdot)$ denotes the covariance between $X$ and $Y$. How, then, do we generalise that to the random vector case? Ie, can we say that $\mathbf{e}$ and $\mathbf{b}$ are independent iff $\operatorname{Cov}(\mathbf{e}, \mathbf{b}) = 0$ (the $n \times k$ matrix of cross-covariances)? But then we would have to show that $\mathbf{e}$ and $\mathbf{b}$ are jointly normally distributed... but this doesn't really make sense, since $\mathbf{e}$ and $\mathbf{b}$ are each already jointly normally distributed themselves, so then wouldn't we be talking about the 'joint' distribution of two joint distributions?
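To make the special case concrete, here is a small numpy sketch of the situation I have in mind (the sizes $n=3$, $k=2$ and the particular covariance matrices are just toy choices of mine, not from any source): the stacked $(n+k) \times 1$ vector is drawn as jointly multivariate normal with a block-diagonal covariance, so the $n \times k$ cross-covariance block $\operatorname{Cov}(\mathbf{e}, \mathbf{b})$ is exactly zero by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 2  # toy sizes; n need not equal k

# Marginal covariances of e (n x n) and b (k x k) -- arbitrary positive-definite examples.
Sigma_e = np.array([[2.0, 0.5, 0.0],
                    [0.5, 1.0, 0.3],
                    [0.0, 0.3, 1.5]])
Sigma_b = np.array([[1.0, 0.4],
                    [0.4, 2.0]])

# Covariance of the stacked (n + k) x 1 vector: block-diagonal, so the
# n x k cross-covariance block Cov(e, b) is the zero matrix.
Sigma = np.block([[Sigma_e, np.zeros((n, k))],
                  [np.zeros((k, n)), Sigma_b]])

# Draw many realisations of the stacked vector and split them back into e and b.
draws = rng.multivariate_normal(np.zeros(n + k), Sigma, size=100_000)
e, b = draws[:, :n], draws[:, n:]

# Sample cross-covariance matrix between e and b; it should be close to the zero matrix.
cross_cov = (e - e.mean(axis=0)).T @ (b - b.mean(axis=0)) / (len(draws) - 1)
print(np.round(cross_cov, 3))
```

The printed $n \times k$ matrix comes out essentially zero, which is the vector analogue of $\operatorname{cov}(X,Y) = 0$ that I'm asking whether (together with joint normality of the stacked vector) is enough for independence.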
EDIT: Thanks Ahmad for clarifying on IRC
