
I'm reading through the proof that OLS estimates are biased when an important independent variable is omitted, and I'm stuck at the very end; the mathematical step that confuses me is the part circled in red.
When we take the expectation of
$$\frac{\sum_{i=1}^n (x_{i1} - \bar{x_1})(x_{i2} - \bar{x_2})}{\sum_{i=1}^{n}(x_{i1}-\bar{x_1})^2},$$
how does it simply end up as the same thing?
That is, why is
$$E\left[\frac{\sum_{i=1}^n (x_{i1} - \bar{x_1})(x_{i2} - \bar{x_2})}{\sum_{i=1}^{n}(x_{i1}-\bar{x_1})^2}\right] = \frac{\sum_{i=1}^n (x_{i1} - \bar{x_1})(x_{i2} - \bar{x_2})}{\sum_{i=1}^{n}(x_{i1}-\bar{x_1})^2}\,?$$
Since
$$\frac{\sum_{i=1}^n (x_{i1} - \bar{x_1})(x_{i2} - \bar{x_2})}{\sum_{i=1}^{n}(x_{i1}-\bar{x_1})^2}$$
isn't a constant (it depends on the $x_{i1}$ and $x_{i2}$), it behaves like a random variable that takes different values as the sample changes. So why does it follow the rule
$$E(c) = c,$$
where $c$ is a constant?
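For context, I believe the proof follows the standard omitted-variable argument (my notation may differ slightly from the picture): the true model is
$$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + u_i,$$
but $y$ is regressed on $x_1$ alone, and substituting the true model into the OLS slope formula gives
$$\tilde{\beta}_1 = \beta_1 + \beta_2\,\frac{\sum_{i=1}^n (x_{i1} - \bar{x_1})(x_{i2} - \bar{x_2})}{\sum_{i=1}^{n}(x_{i1}-\bar{x_1})^2} + \frac{\sum_{i=1}^n (x_{i1} - \bar{x_1})\,u_i}{\sum_{i=1}^{n}(x_{i1}-\bar{x_1})^2}.$$
The step that confuses me is taking the expectation of the middle term.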

I also don't get the part boxed in red.
Note that $X$, $Y$ and $U$ are all random variables, so how is
$$E(\beta_1 X|X) = \beta_1 X\,?$$
Shouldn't it be
$$E(\beta_1 X|X) = \beta_1 E(X|X),$$
since $\beta_1$ is just a constant? Then
$$E(X|X) = \sum_{i=1}^n x_i\,P(X=x_i|X=x_i) = \sum_{i=1}^n x_i, \quad \text{since } P(X=x_i|X=x_i) = 1.$$
Thus
$$E(\beta_1 X|X) = \beta_1 \left(\sum_{i=1}^n x_i\right) \neq \beta_1 X.$$
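Going back to my first question, I also tried to check numerically what happens when the regressors are held fixed and only the errors are redrawn (a rough sketch; the coefficients and setup below are my own, not from the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 10_000
b0, b1, b2 = 1.0, 2.0, 3.0

# Draw x1 and x2 once and hold them fixed across replications,
# i.e. work conditionally on the regressors.
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)  # x2 correlated with x1

# The ratio from the proof; with x1 and x2 fixed it is just a number.
ratio = np.sum((x1 - x1.mean()) * (x2 - x2.mean())) / np.sum((x1 - x1.mean()) ** 2)

slopes = np.empty(reps)
for r in range(reps):
    u = rng.normal(size=n)           # only the error term is redrawn
    y = b0 + b1 * x1 + b2 * x2 + u
    # OLS slope of y on x1 alone (the short regression that omits x2)
    slopes[r] = np.sum((x1 - x1.mean()) * (y - y.mean())) / np.sum((x1 - x1.mean()) ** 2)

print("average short-regression slope:", slopes.mean())
print("b1 + b2 * ratio:               ", b1 + b2 * ratio)
```

The average slope comes out very close to $\beta_1 + \beta_2$ times the ratio, which at least confirms the formula numerically, even though I still don't see why the expectation step is legal.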
Thanks
