Author Topic: brightsky's Maths Thread  (Read 61354 times)

brightsky

  • Victorian
  • ATAR Notes Legend
  • *******
  • Posts: 3136
  • Respect: +200
Re: brightsky's Maths Thread
« Reply #180 on: November 17, 2013, 07:05:01 pm »
0
My understanding is that a hyperplane is an n-1 dimensional subspace of some n dimensional space.
For example, in R^2 any line through the origin could be considered a hyperplane since it is the span of 1 (= 2 - 1) vector.
In R^3 any plane through the origin could be considered a hyperplane since it is the span of 2 (= 3 - 1) vectors.
The way I think about dimension with this sort of stuff is just the least number of vectors needed to span the object in question, i.e. a two dimensional subspace (e.g. a plane in R^3) requires at least 2 spanning vectors.
These may be enlightening:
http://en.wikipedia.org/wiki/Half-space_(geometry)
http://en.wikipedia.org/wiki/Ham_sandwich_theorem
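To see the "n - 1" come out in coordinates: a hyperplane through the origin is exactly the set of vectors orthogonal to a single nonzero normal vector, and counting a spanning set for it gives n - 1. A quick sympy sketch (my own made-up normal vector, nothing official):

```python
import sympy as sp

# a hyperplane through the origin in R^n is {x : a.x = 0} for one nonzero normal a
n = 4
a = sp.Matrix([[1, 2, 3, 4]])    # made-up normal vector, written as a 1 x n matrix

basis = a.nullspace()            # spanning vectors for {x : a.x = 0}
print(len(basis))                # 3, i.e. n - 1 vectors are enough to span it
```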

'3 linearly independent vectors' makes sense and means exactly what you would expect.
You could take a set S of m vectors in a vector space V of dimension n (where m<n) which are linearly independent, and then find another vector v which is linearly independent to the set (perhaps it should be linearly independent 'of' the set, but 'to' is the standard terminology). In this case you have used linearly independent to describe both a set and a vector, and it is well defined in both scenarios.

More precisely, the above would be interpreted as:
S = {v_1, ..., v_m} is linearly independent (i.e. c_1*v_1 + ... + c_m*v_m = 0 only when c_1 = ... = c_m = 0), and v ∉ span(S).

(I feel that may have made my explanation even more unclear... :P)

thanks rife! yeah that's the definition of hyperplane I was taught...although if the definition were actually that strict, then what would you call an n-2 dimensional subspace in an n dimensional vector space? what about n-3? is there a general name that mathematicians attribute to subspaces of these dimensions?

and with regard to linear independence, i have a suspicion that statements like 'let v be a linearly independent vector' do not really make any sense. i suppose by 'vector independent to a set of vectors', you mean the set containing the new vector + vectors in original set is linearly independent...? i always get a little unsettled when i write stuff like '4 linearly independent vectors are required to span R^4 and we have 4 linearly independent vectors so the set spans R^4'...but statements like this are fine?

thanks again!
2020 - 2021: Master of Public Health, The University of Sydney
2017 - 2020: Doctor of Medicine, The University of Melbourne
2014 - 2016: Bachelor of Biomedicine, The University of Melbourne
2013 ATAR: 99.95

Currently selling copies of the VCE Chinese Exam Revision Book and UMEP Maths Exam Revision Book, and accepting students for Maths Methods and Specialist Maths Tutoring in 2020!

brightsky

  • Victorian
  • ATAR Notes Legend
  • *******
  • Posts: 3136
  • Respect: +200
Re: brightsky's Maths Thread
« Reply #181 on: November 17, 2013, 08:10:23 pm »
0
unrelated question, but can someone explain what the purpose of 'fourier series' is? or rather, what is the purpose of projecting a given function f onto the space spanned by the orthonormal set {1/sqrt(2pi), sinx/sqrt(pi), cosx/sqrt(pi), etc.}? the booklet includes fourier series as one of the topics...but I don't really understand what the point of it is...although i've heard rumours of its being a boss technique...
2020 - 2021: Master of Public Health, The University of Sydney
2017 - 2020: Doctor of Medicine, The University of Melbourne
2014 - 2016: Bachelor of Biomedicine, The University of Melbourne
2013 ATAR: 99.95

Currently selling copies of the VCE Chinese Exam Revision Book and UMEP Maths Exam Revision Book, and accepting students for Maths Methods and Specialist Maths Tutoring in 2020!

b^3

  • Honorary Moderator
  • ATAR Notes Legend
  • *******
  • Posts: 3529
  • Overloading, just don't do it.
  • Respect: +631
  • School: Western Suburbs Area
  • School Grad Year: 2011
Re: brightsky's Maths Thread
« Reply #182 on: November 17, 2013, 08:28:12 pm »
+2
Fourier Series, what I'm currently trying to cram... :P
Really what it's doing is: say we have some periodic function that is defined piecewise (a hybrid function).
Then we can represent it as a single function, not a hybrid function, by adding together a lot of basis functions (sines and cosines) with coefficients given by its Fourier series. So we're representing one repeating function with another function. The more terms we take, the closer it will be to the actual function.
e.g. see the partial-sum plots on the Wikipedia Fourier series page.
Anyways, one of the main applications that we're looking at is applying it to find the solution to the Heat Equation (below is the 1-D heat equation).
∂u/∂t = alpha * ∂^2u/∂x^2
Now if we have some function that represents the distribution of temperatures at points along a 1-D rod of length L (with the ends held at zero), then our solution (not going to include the derivation, procrastinating but can't procrastinate for that long :P) will be of the form:
u(x,t) = sum_{n=1}^inf B_n * sin(n*pi*x/L) * exp(-alpha*(n*pi/L)^2 * t)
Which we get by using a separation of variables method for the PDE, in which we get a solution with just one of the terms in the sum above; to get a more general solution we take a linear combination of all possible solutions, which results in the equation above.
Now let's say we have the initial condition u(x,0) = f(x); then we get a Fourier Sine Series:
f(x) = u(x,0) = sum_{n=1}^inf B_n * sin(n*pi*x/L)
Which means we need to find the coefficients B_n, by applying our techniques for Fourier Series (here B_n = (2/L) * integral from 0 to L of f(x)*sin(n*pi*x/L) dx).
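If it helps to see the coefficients actually come out, here's a rough Python sketch (my own choice of f and rod length, nothing from the booklet); the integral for each B_n is basically the inner product of f with the corresponding sine, i.e. the projection brightsky was asking about:

```python
import numpy as np

# Fourier sine coefficients B_n for an initial temperature profile f on [0, L]:
# B_n = (2/L) * integral_0^L f(x) sin(n*pi*x/L) dx, done here with a crude sum.
L = 1.0
f = lambda x: x * (L - x)          # hypothetical initial condition, f(0) = f(L) = 0

def B(n, num=2000):
    x = np.linspace(0.0, L, num)
    dx = x[1] - x[0]
    return (2.0 / L) * np.sum(f(x) * np.sin(n * np.pi * x / L)) * dx

def partial_sum(x, N):
    # first N terms of the Fourier sine series for f
    return sum(B(n) * np.sin(n * np.pi * x / L) for n in range(1, N + 1))

xs = np.linspace(0.0, L, 5)
print(f(xs))                 # the actual initial condition
print(partial_sum(xs, 1))    # one term already gets reasonably close
print(partial_sum(xs, 9))    # more terms, closer still

# plugging these B_n into the sum with the exp(-alpha*(n*pi/L)**2 * t) factors
# then gives the temperature u(x, t) at later times.
```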

I'm not sure if that's how you'd be looking at it, or what the point of looking into it is for what you're doing, but it's one motivation to use it anyway.

Sorry about being a bit vague though, really can't afford to procrastinate for too long.

EDIT: Added a bit, really crap explanation though....
* hands over to TrueTears/rife168
« Last Edit: November 17, 2013, 08:40:58 pm by b^3 »
2012-2016: Aerospace Engineering/Science (Double Major in Applied Mathematics - Monash Uni)
TI-NSPIRE GUIDES: METH, SPESH

Co-Authored AtarNotes' Maths Study Guides


I'm starting to get too old for this... May be on here or irc from time to time.

rife168

  • Victorian
  • Forum Obsessive
  • ***
  • Posts: 408
  • Respect: +36
  • School Grad Year: 2012
Re: brightsky's Maths Thread
« Reply #183 on: November 17, 2013, 08:56:49 pm »
+3
thanks rife! yeah that's the definition of hyperplane I was taught...although if the definition were actually that strict, then what would you call an n-2 dimensional subspace in an n dimensional vector space? what about n-3? is there a general name that mathematicians attribute to subspaces of these dimensions?

and with regard to linear independence, i have a suspicion that statements like 'let v be a linearly independent vector' do not really make any sense. i suppose by 'vector independent to a set of vectors', you mean the set containing the new vector + vectors in original set is linearly independent...? i always get a little unsettled when i write stuff like '4 linearly independent vectors are required to span R^4 and we have 4 linearly independent vectors so the set spans R^4'...but statements like this are fine?

thanks again!

I don't know of particular names for the n-2, n-3, ... cases; I am under the impression that the n-1 case is more important and pervasive, so there was a need to give it a name that made it easier to talk about. I could be wrong here.

Yeah, so you cannot talk of a linearly independent vector without some assumed underlying set, other vector or vector space. So I guess the context is the important part.
Check out the explanation on Wikipedia, the two bullet points at the top explain exactly the ambiguity you are facing.
http://en.wikipedia.org/wiki/Linear_independence

So you can talk of a set of linearly independent vectors, and also families of vectors (i.e. '__ vectors are linearly independent').

Maybe it would help to think of it like this:
A vector v is linearly independent to a set of vectors S if v ∉ span(S), noting that a set can have zero elements, one element and so on.
Then we can say: vectors v_1, ..., v_n are linearly independent if, for each i, v_i is linearly independent to the set {v_1, ..., v_n} \ {v_i} (the set of all the vectors excluding v_i),
i.e. v_i ∉ span({v_1, ..., v_n} \ {v_i}).

I think this is a well-defined construction, ignore this if it confuses things more, it is as much an exercise for me as it is you to try to precisely formulate things like this.
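In coordinates you can turn "is v linearly independent to S?" into a rank check, which might make the definition above feel more concrete; a small sympy sketch with made-up vectors:

```python
import sympy as sp

# v is linearly independent to the set S exactly when v is not in span(S),
# i.e. when appending v as an extra column increases the rank.
S = sp.Matrix([[1, 0],
               [0, 1],
               [0, 0]])             # columns span a plane in R^3 (made-up example)
v = sp.Matrix([1, 1, 1])

print(S.rank())                     # 2
print(S.row_join(v).rank())         # 3 > 2, so v is not in span(S)

# likewise, n vectors are linearly independent exactly when the matrix having
# those vectors as its columns has rank n.
```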


edit: I know next to nothing about Fourier Series, but the Wikipedia page is quite clear and has some nice visualisations.
« Last Edit: November 17, 2013, 08:59:38 pm by rife168 »
2012: VCE - 99.10
2013: PhB(Sci)@ANU

Alwin

  • Victorian
  • Forum Leader
  • ****
  • Posts: 838
  • Respect: +241
Re: brightsky's Maths Thread
« Reply #184 on: November 17, 2013, 09:36:33 pm »
+1
unrelated question, but can someone explain what the purpose of 'fourier series' is? or rather, what is the purpose of projecting a given function f onto the space spanned by the orthonormal set {1/sqrt(2pi), sinx/sqrt(pi), cosx/sqrt(pi), etc.}? the booklet includes fourier series as one of the topics...but I don't really understand what the point of it is...although i've heard rumours of its being a boss technique...

just on a side note, fourier series is mentioned in the UMEP notes as a side example; apparently it's not something they test explicitly, my teacher said. so don't worry about it too much for the exam?

but it is pretty cool, looked at it in MUEP. But now that I'm done with the muep exams, I've promptly forgotten everything like any good student, so I'll leave it up to the other guys to explain (esp TrueTears :P)
2012:  Methods [48] Physics [49]
2013:  English [40] (oops) Chemistry [46] Spesh [42] Indo SL [34] Uni Maths: Melb UMEP [4.5] Monash MUEP [just for a bit of fun]
2014:  BAeroEng/BComm

A pessimist says a glass is half empty, an optimist says a glass is half full.
An engineer says the glass has a safety factor of 2.0

brightsky

  • Victorian
  • ATAR Notes Legend
  • *******
  • Posts: 3136
  • Respect: +200
Re: brightsky's Maths Thread
« Reply #185 on: November 19, 2013, 01:21:45 pm »
0
ah okay thanks guys!

I have another (minor) dilemma. say we have a square matrix A and wanted to determine whether or not A is invertible. two methods:
1. determine whether det(A) = 0. if yes, invertible. if no, singular.
2. determine whether rank(A) = no. of columns in A. if yes, invertible. if no, singular.

are both methods legitimate? i see no issues with either method, but just want to make sure...because I've seen examiner's reports which prefer one method over the other...

thanks!
2020 - 2021: Master of Public Health, The University of Sydney
2017 - 2020: Doctor of Medicine, The University of Melbourne
2014 - 2016: Bachelor of Biomedicine, The University of Melbourne
2013 ATAR: 99.95

Currently selling copies of the VCE Chinese Exam Revision Book and UMEP Maths Exam Revision Book, and accepting students for Maths Methods and Specialist Maths Tutoring in 2020!

brightsky

  • Victorian
  • ATAR Notes Legend
  • *******
  • Posts: 3136
  • Respect: +200
Re: brightsky's Maths Thread
« Reply #186 on: November 19, 2013, 03:34:30 pm »
0
another question: how does one prove that nullspace of A is orthogonal to the row space of A. I sort of understand what's going on, but if someone can explain it in extremely dumbed-down lingo, that would be great! :p
2020 - 2021: Master of Public Health, The University of Sydney
2017 - 2020: Doctor of Medicine, The University of Melbourne
2014 - 2016: Bachelor of Biomedicine, The University of Melbourne
2013 ATAR: 99.95

Currently selling copies of the VCE Chinese Exam Revision Book and UMEP Maths Exam Revision Book, and accepting students for Maths Methods and Specialist Maths Tutoring in 2020!

brightsky

  • Victorian
  • ATAR Notes Legend
  • *******
  • Posts: 3136
  • Respect: +200
Re: brightsky's Maths Thread
« Reply #187 on: November 19, 2013, 09:01:15 pm »
0
another question:

f(x,y)=x^2/4-y^2/9
x(s,t) = st^2 - 2 and y(s,t) = 8s - t^3
use a linear approximation to estimate f when s = 0.99 and t = 2.01.

how do I go about this?

I know that f(x+g,y+h) = f(x,y) + g*fx(x,y) + h*fy(x,y) (just an extension of linear approx from methods), but how do I apply this formula to this particular question, where x and y are themselves functions of two variables?

confuzzled. any help appreciated!

thanks!
2020 - 2021: Master of Public Health, The University of Sydney
2017 - 2020: Doctor of Medicine, The University of Melbourne
2014 - 2016: Bachelor of Biomedicine, The University of Melbourne
2013 ATAR: 99.95

Currently selling copies of the VCE Chinese Exam Revision Book and UMEP Maths Exam Revision Book, and accepting students for Maths Methods and Specialist Maths Tutoring in 2020!

rife168

  • Victorian
  • Forum Obsessive
  • ***
  • Posts: 408
  • Respect: +36
  • School Grad Year: 2012
Re: brightsky's Maths Thread
« Reply #188 on: November 20, 2013, 03:10:07 am »
+3
ah okay thanks guys!

I have another (minor) dilemma. say we have a square matrix A and wanted to determine whether or not A is invertible. two methods:
1. determine whether det(A) = 0. if yes, invertible. if no, singular.
2. determine whether rank(A) = no. of columns in A. if yes, invertible. if no, singular.

are both methods legitimate? i see no issues with either method, but just want to make sure...because I've seen examiner's reports which prefer one method over the other...

thanks!

First, I think you have the det(A) = 0 part the wrong way around: if yes, singular; if no, invertible.

Finding the determinant is in general a more useful method, in particular because det(AB)=det(A)det(B) among other properties.
Using the rank method gives the required result, but I would only use this if the row echelon form of A is given, or if you had to find it in a previous question.
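Both checks are easy to run numerically if you ever want to sanity-check a result (made-up matrices below, just to illustrate):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])            # made-up invertible example

print(np.linalg.det(A))                 # ~8, nonzero -> invertible
print(np.linalg.matrix_rank(A))         # 3 = number of columns -> invertible

B = np.array([[1., 2.],
              [2., 4.]])                # second row is twice the first -> singular

print(np.linalg.det(B))                 # 0 (up to floating point) -> singular
print(np.linalg.matrix_rank(B))         # 1 < 2 -> singular
```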


another question: how does one prove that nullspace of A is orthogonal to the row space of A. I sort of understand what's going on, but if someone can explain it in extremely dumbed-down lingo, that would be great! :p

Think about what you are doing with matrix multiplication, say multiplying A and x. The first component of the product is the first row of A dotted with x, the second component is the second row of A dotted with x and so on...
So if Ax=0 then every component of the product is zero, so x dotted with every row of A is zero, hence any x in the nullspace of A is orthogonal to every row of A.



another question:

f(x,y)=x^2/4-y^2/9
x(s,t) = st^2 - 2 and y(s,t) = 8s - t^3
use a linear approximation to estimate f when s = 0.99 and t = 2.01.

how do I go about this?

I know that f(x+g,y+h) = f(x,y) + g*fx(x,y) + h*fy(x,y) (just an extension of linear approx from methods), but how do I apply this formula to this particular question, where x and y are themselves functions of two variables?

confuzzled. any help appreciated!

thanks!

I remember doing this question or one very similar to it last year in UMEP, but I can't remember precisely the way that I did it.
I think (I may be wrong) we had to find an approximation for x and y using the linear approx formula, and then sub them into f.
I do recall not being entirely comfortable with what I did, whatever it was. It didn't feel 'complete'.
2012: VCE - 99.10
2013: PhB(Sci)@ANU

brightsky

  • Victorian
  • ATAR Notes Legend
  • *******
  • Posts: 3136
  • Respect: +200
Re: brightsky's Maths Thread
« Reply #189 on: November 20, 2013, 07:46:35 am »
0
thanks so much rife! you're a legend!

just a few points/queries:

re: invertible matrices - I see I see (although the first question in the umep paper typically gives the reduced row echelon form of some huge matrix...but then again I guess finding the product of the entries along the main diagonal is more intuitive than looking at the rank...)

re: nullspace and rank - yep i get that part. but i think i'm missing something critical, a final link in the chain so to speak. so if x is in the nullspace, then for some matrix (of transformation) A, Ax = 0. if we break A down into its component rows, then r1.x = 0, r2.x = 0, etc. so every row of A is orthogonal to the nullspace. so far so good. but we are required to prove that EVERY vector in the row space of A is orthogonal to the nullspace. i can't quite draw the final link. has it got something to do with the fact that the set of all rows of A is always a spanning set of the row space, so therefore if every row of A is orthogonal to the nullspace, then the space they span (the row space) must always be orthogonal to the nullspace?

re: linear approximation question - yeah that's what i did, although as you said not sure if right...

actually a random thought just struck me. would it be right to sub x(s,t) and y(s,t) into f(x,y) at the start to make f a function of s and t, i.e. f = f(s,t) and then do normal linear approx. on that new function?

thanks again rife! hugely appreciated!
2020 - 2021: Master of Public Health, The University of Sydney
2017 - 2020: Doctor of Medicine, The University of Melbourne
2014 - 2016: Bachelor of Biomedicine, The University of Melbourne
2013 ATAR: 99.95

Currently selling copies of the VCE Chinese Exam Revision Book and UMEP Maths Exam Revision Book, and accepting students for Maths Methods and Specialist Maths Tutoring in 2020!

scribble

  • is sexier than Cthulhu
  • Victorian
  • Forum Leader
  • ****
  • Posts: 814
  • Respect: +145
  • School Grad Year: 2012
Re: brightsky's Maths Thread
« Reply #190 on: November 20, 2013, 09:02:27 am »
0
poops. reading this thread makes me realise that i've forgotten all the math i did this year rofl

actually a random thought just struck me. would it be right to sub x(s,t) and y(s,t) into f(x,y) at the start to make f a function of s and t, i.e. f = f(s,t) and then do normal linear approx. on that new function?
yep thats right.
so f(s+g,t+h) ≈ f(s,t) + g*fs(s,t) + h*ft(s,t)
you can find fs and ft using the chain rule for two variables: ∂f/∂t = ∂f/∂x*∂x/∂t + ∂f/∂y*∂y/∂t (and similarly for ∂f/∂s).
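for the actual numbers in that question, here's a quick sympy check of the substitute-first approach (expanding about the nearby nice point (s,t) = (1,2), which is my choice of base point):

```python
import sympy as sp

s, t = sp.symbols('s t')

# substitute x(s,t) and y(s,t) into f first, so f becomes a function of (s, t)
x = s*t**2 - 2
y = 8*s - t**3
f = x**2/4 - y**2/9

s0, t0 = 1, 2                               # nearby "nice" base point
ds, dt = sp.Rational(-1, 100), sp.Rational(1, 100)

f0  = f.subs({s: s0, t: t0})                # 1
fs0 = sp.diff(f, s).subs({s: s0, t: t0})    # 4
ft0 = sp.diff(f, t).subs({s: s0, t: t0})    # 4

print(f0 + fs0*ds + ft0*dt)                 # linear approximation: 1
print(sp.N(f.subs({s: 0.99, t: 2.01}), 6))  # exact value: ~0.995228
```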

brightsky

  • Victorian
  • ATAR Notes Legend
  • *******
  • Posts: 3136
  • Respect: +200
Re: brightsky's Maths Thread
« Reply #191 on: November 20, 2013, 09:58:03 am »
0
poops. reading this thread makes me realise that i've forgotten all the math i did this year rofl
yep thats right.
so f(s+g,t+h) ≈ f(s,t) + g*fs(s,t) + h*ft(s,t)
you can find fs and ft using the chain rule for two variables: ∂f/∂t = ∂f/∂x*∂x/∂t + ∂f/∂y*∂y/∂t (and similarly for ∂f/∂s).

ah I see I see thanks scribbles!

and for people who've done umep maths or something similar (linear algebra?), how did you guys manage to sketch 3D surfaces? it's easy to sketch something like an ellipsoid (relatively speaking), but how are we supposed to sketch some random graph like f(x,y) = xy/10 on a 2D paper? :O

thanks!
2020 - 2021: Master of Public Health, The University of Sydney
2017 - 2020: Doctor of Medicine, The University of Melbourne
2014 - 2016: Bachelor of Biomedicine, The University of Melbourne
2013 ATAR: 99.95

Currently selling copies of the VCE Chinese Exam Revision Book and UMEP Maths Exam Revision Book, and accepting students for Maths Methods and Specialist Maths Tutoring in 2020!

Alwin

  • Victorian
  • Forum Leader
  • ****
  • Posts: 838
  • Respect: +241
Re: brightsky's Maths Thread
« Reply #192 on: November 20, 2013, 10:19:01 am »
0
ah I see I see thanks scribbles!

and for people who've done umep maths or something similar (linear algebra?), how did you guys manage to sketch 3D surfaces? it's easy to sketch something like an ellipsoid (relatively speaking), but how are we supposed to sketch some random graph like f(x,y) = xy/10 on a 2D paper? :O

thanks!

Personally, for this graph I would consider what happens when x and y -> 0, and then the cases as x -> ±infty and y -> ±infty: both going off to negative infinity, or one going to positive infinity and one to negative infinity, and so on.
Then, sketch it and maybe show what I'm doing by marking intercepts and whatnot.

Hopefully you get what I mean :)
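Not something you'd do in the exam, but if you want to check your hand sketch, a quick matplotlib plot (my own code) of the surface and its level curves shows the saddle shape:

```python
import numpy as np
import matplotlib.pyplot as plt

# f(x, y) = x*y/10: level curves x*y = c are hyperbolas, and the surface is a saddle
xs = np.linspace(-5, 5, 200)
ys = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(xs, ys)
Z = X * Y / 10

fig = plt.figure(figsize=(10, 4))

ax1 = fig.add_subplot(1, 2, 1, projection='3d')
ax1.plot_surface(X, Y, Z, cmap='viridis')
ax1.set_title('f(x, y) = xy/10')

ax2 = fig.add_subplot(1, 2, 2)
levels = ax2.contour(X, Y, Z, levels=10)   # these are what you'd sketch by hand
ax2.clabel(levels)
ax2.set_title('level curves xy/10 = c')

plt.show()
```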

ps: have you seen any continuity questions on past exams? I swear we covered it in umep but haven't seen any qs on it...
2012:  Methods [48] Physics [49]
2013:  English [40] (oops) Chemistry [46] Spesh [42] Indo SL [34] Uni Maths: Melb UMEP [4.5] Monash MUEP [just for a bit of fun]
2014:  BAeroEng/BComm

A pessimist says a glass is half empty, an optimist says a glass is half full.
An engineer says the glass has a safety factor of 2.0

brightsky

  • Victorian
  • ATAR Notes Legend
  • *******
  • Posts: 3136
  • Respect: +200
Re: brightsky's Maths Thread
« Reply #193 on: November 20, 2013, 10:28:56 am »
0
Personally, for this graph I would consider what happens when x and y -> 0, and then the cases as x -> ±infty and y -> ±infty: both going off to negative infinity, or one going to positive infinity and one to negative infinity, and so on.
Then, sketch it and maybe show what I'm doing by marking intercepts and whatnot.

Hopefully you get what I mean :)

ps: have you seen any continuity questions on past exams? I swear we covered it in umep but haven't seen any qs on it...

yeah I can kind of visualise the graph in my head but when I go to sketch the graph on a piece of paper, my graph looks like poo. it's literally a collection of random curves which sort of resemble a horse's back but not really :/

re: continuity - nope not yet. I've only encountered one limit question that asks us to approach the point from different directions and check whether or not the limit is the same. but i'm interested...if we did get a question that asks us to determine whether a random function of two variables is continuous at a certain point, how would you go about it? as far as i know, to be continuous at a point the limit when you approach it from any direction must be the same, and the value of that limit must be the same as the value you get when you substitute the point into the function. but it's the first bit that's worrying me - how do you show that the limit is the same when you approach the point from ANY direction without resorting to fancy stuff like l'hopital's rule and other limit techniques (which i don't think are on the course anymore)?
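e.g. with the classic f(x,y) = xy/(x^2 + y^2) (my own example, not from any paper), just approaching the origin along the straight lines y = mx already gives different limits for different m, which is enough to rule out continuity there - a quick sympy check:

```python
import sympy as sp

x, y, m = sp.symbols('x y m')

# classic example: f(x, y) = x*y/(x**2 + y**2), candidate point (0, 0)
f = x*y / (x**2 + y**2)

# approach the origin along the straight lines y = m*x
along_line = sp.simplify(f.subs(y, m*x))
print(along_line)                     # m/(m**2 + 1), which depends on the slope m
print(sp.limit(along_line, x, 0))     # so different directions give different limits

# hence the two-variable limit at (0, 0) does not exist, and no choice of f(0, 0)
# makes the function continuous there
```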
2020 - 2021: Master of Public Health, The University of Sydney
2017 - 2020: Doctor of Medicine, The University of Melbourne
2014 - 2016: Bachelor of Biomedicine, The University of Melbourne
2013 ATAR: 99.95

Currently selling copies of the VCE Chinese Exam Revision Book and UMEP Maths Exam Revision Book, and accepting students for Maths Methods and Specialist Maths Tutoring in 2020!

rife168

  • Victorian
  • Forum Obsessive
  • ***
  • Posts: 408
  • Respect: +36
  • School Grad Year: 2012
Re: brightsky's Maths Thread
« Reply #194 on: November 20, 2013, 04:25:06 pm »
+2
Label the rows of a matrix A by r_1, r_2, ..., r_m.
Then any vector v in the row space can be written as a linear combination of the rows.
I.e. v = c_1*r_1 + c_2*r_2 + ... + c_m*r_m for some constants c_1, ..., c_m.
Then if you take the dot product of v with any vector x in the nullspace of A, you get
v.x = c_1*(r_1.x) + c_2*(r_2.x) + ... + c_m*(r_m.x) = c_1*0 + c_2*0 + ... + c_m*0 = 0.
So any vector in the row space of A is orthogonal to any vector in the nullspace of A.
The key here is that the dot product distributes and is linear w.r.t. constants.
Try to prove this.
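And a tiny sympy sanity check with a made-up matrix, which is literally the computation above with numbers in it:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [4, 5, 6]])                    # made-up example, rank 2

rows = [A.row(i).T for i in range(A.rows)]    # r_1, r_2 as column vectors
x = A.nullspace()[0]                          # a vector in the nullspace of A

# an arbitrary vector in the row space: v = c_1*r_1 + c_2*r_2
c1, c2 = 2, -3
v = c1*rows[0] + c2*rows[1]

print(v.dot(x))     # 0 = c_1*(r_1.x) + c_2*(r_2.x), exactly as in the argument
```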

edit: fixed up vector notation for clarity, one tends to get a bit lazy with this after a little while...
« Last Edit: November 20, 2013, 10:27:01 pm by rife168 »
2012: VCE - 99.10
2013: PhB(Sci)@ANU