Ok, that's a pretty broad area, but I'll give it a shot.
1. Residuals: Basically, the residual value is just the difference between the actual y value and the predicted y value (found using the least squares regression line). For example (just making things simple), if the least squares regression line was y = 2x, and we had a point at (1, 3), then for when x = 1:

predicted y = 2 x 1 = 2
residual = actual y - predicted y = 3 - 2 = 1

(The way you get the predicted value is by substituting x = 1 into the equation for the least squares regression line.) In simple terms, on the graph, this point (1, 3) will sit ABOVE the least squares regression line, because the residual is positive. Any point BELOW the regression line will have a negative residual.
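If you ever want to check your residuals outside the calculator, here's a rough sketch in Python (the data points are just made up for illustration, not from any real question):

```python
import numpy as np

# Made-up example data for a scatterplot
x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([3.0, 3.9, 6.2, 8.1, 9.8])

# Fit the least squares regression line: y = slope*x + intercept
slope, intercept = np.polyfit(x, y, 1)

# Predicted values: substitute each x into the regression equation
y_pred = slope * x + intercept

# Residual = actual y - predicted y
residuals = y - y_pred

print(residuals)  # positive residual = point above the line, negative = below
```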
2. Transformations: As much as we like having linear data to analyse because of its simplicity, obviously not everything is going to be linearly related, which makes it more difficult to analyse. To fix this problem, we can linearise the data using a transformation. There should/will be a section in your textbook on this topic, showing all the relevant transformations for different types of data. Putting this in your summary book is an ABSOLUTE MUST, as it will most likely come up in at least one of the exams.
The transformations you have listed are all ones you can apply for different types of data. By different types, I'm referring to the different patterns a scatterplot can show. In Further there's only 4 different types, so get familiar with which transformations to apply to each.
Now, actually applying the transformations is all done on the calculator, and it's quite straightforward. Happy to step you through it if needed.
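If you're curious what the calculator is doing behind the scenes, here's a rough sketch in Python (the data and the choice of a log(x) transformation are just made up for illustration; the same idea works for x², 1/x, and so on):

```python
import numpy as np

# Made-up non-linear data: y grows roughly with log(x)
x = np.array([1, 2, 5, 10, 20, 50], dtype=float)
y = np.array([2.1, 3.0, 4.2, 5.1, 5.9, 7.2])

# Apply the transformation: replace x with log10(x)
log_x = np.log10(x)

# Now fit the least squares line to the TRANSFORMED data
slope, intercept = np.polyfit(log_x, y, 1)

# The linearised model is: y = slope*log10(x) + intercept
print(f"y = {slope:.2f} * log10(x) + {intercept:.2f}")
```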
Hope that helped somewhat; feel free to ask any questions you have.