Author Topic: brightsky's Maths Thread  (Read 57645 times)


b^3

  • Honorary Moderator
  • ATAR Notes Legend
  • *******
  • Posts: 3529
  • Overloading, just don't do it.
  • Respect: +631
  • School: Western Suburbs Area
  • School Grad Year: 2011
Re: brightsky's Maths Thread
« Reply #210 on: November 23, 2013, 07:43:21 pm »
+1
thanks kamil!

also this might sound silly, but is there a foolproof way of identifying and sketching quadrics? I was confronted with a surface with equation 2x^2 - 2y^2 - 4z^2 = 1 and asked to identify it, but the equation doesn't seem to fit any of the possible cases: http://en.wikipedia.org/wiki/Quadric. perhaps some rearrangement is in order?

help much appreciated!
Since there's no translation of it, look at the curves that lie on the surface for x = 0, y = 0 and z = 0.
I.e. for z = 0, we get the hyperbola 2x^2 - 2y^2 = 1, so you have a hyperbola with the x-axis as the major axis in the x-y plane. Going along the z-axis here will just increase the RHS (for z = c it becomes 2x^2 - 2y^2 = 1 + 4c^2), so the hyperbola gets wider.
Same logic goes for y = 0, to get 2x^2 - 4z^2 = 1.

For x = 0, we get -2y^2 - 4z^2 = 1, which we know nothing can satisfy. So we know there is no part of the surface going through x = 0, and in the x-y plane we can see where that gap between the two halves comes from. You can substitute in a value of x for which it does work to see the shape you get, e.g. for x = 1 we will get 2y^2 + 4z^2 = 1, which means along the x = 1 plane we'll get an ellipse. The ellipse grows as you go further out, i.e. as you increase the value of |x| you put in. Those two separate halves are what identify the surface as a hyperboloid of two sheets.

Drawing these (well, they're really contours) will give you the surface.

If anything I'd say don't just go and memorise the shapes; it's better to be able to do the above and see what it will result in. After a while you won't even need to draw anything out first; it won't take long to just picture it from the contours in your head.

Hope that helps.

EDIT: tl;dr, look at the section cuts (2D curves) in the x = 0, y = 0 and z = 0 planes.
EDIT2: Added a tiny bit.
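A quick numerical sanity check of those cross-sections (a sketch in numpy; the sample points and tolerance are arbitrary choices):

```python
import numpy as np

def on_surface(x, y, z, tol=1e-9):
    # surface: 2x^2 - 2y^2 - 4z^2 = 1
    return abs(2*x**2 - 2*y**2 - 4*z**2 - 1) < tol

# z = 0 trace: the hyperbola 2x^2 - 2y^2 = 1
y = 1.5
x = np.sqrt((1 + 2*y**2) / 2)
assert on_surface(x, y, 0.0)

# x = c trace (|c| > 1/sqrt(2)): the ellipse 2y^2 + 4z^2 = 2c^2 - 1
c = 2.0
z = np.sqrt((2*c**2 - 1) / 4)   # point on the ellipse with y = 0
assert on_surface(c, 0.0, z)

# x = 0: -2y^2 - 4z^2 = 1 has no real solutions, so the surface splits
# into two sheets, one with x >= 1/sqrt(2) and one with x <= -1/sqrt(2)
```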
« Last Edit: November 23, 2013, 07:48:47 pm by b^3 »
2012-2016: Aerospace Engineering/Science (Double Major in Applied Mathematics - Monash Uni)
TI-NSPIRE GUIDES: METH, SPESH

Co-Authored AtarNotes' Maths Study Guides


I'm starting to get too old for this... May be on here or irc from time to time.

brightsky

  • Victorian
  • ATAR Notes Legend
  • *******
  • Posts: 3136
  • Respect: +200
Re: brightsky's Maths Thread
« Reply #211 on: November 23, 2013, 07:50:20 pm »
0
Since there's no translation of it, look at the curves that lie on the surface for x = 0, y = 0 and z = 0.
I.e. for z = 0, we get the hyperbola 2x^2 - 2y^2 = 1, so you have a hyperbola with the x-axis as the major axis in the x-y plane. Going along the z-axis here will just increase the RHS (for z = c it becomes 2x^2 - 2y^2 = 1 + 4c^2), so the hyperbola gets wider.
Same logic goes for y = 0, to get 2x^2 - 4z^2 = 1.

For x = 0, we get -2y^2 - 4z^2 = 1, which we know nothing can satisfy. So we know there is no part of the surface going through x = 0, and in the x-y plane we can see where that gap between the two halves comes from. You can substitute in a value of x for which it does work to see the shape you get, e.g. for x = 1 we will get 2y^2 + 4z^2 = 1, which means along the x = 1 plane we'll get an ellipse. The ellipse grows as you go further out, i.e. as you increase the value of |x| you put in. Those two separate halves are what identify the surface as a hyperboloid of two sheets.

Drawing these (well, they're really contours) will give you the surface.

If anything I'd say don't just go and memorise the shapes; it's better to be able to do the above and see what it will result in. After a while you won't even need to draw anything out first; it won't take long to just picture it from the contours in your head.

Hope that helps.

EDIT: tl;dr, look at the section cuts (2D curves) in the x = 0, y = 0 and z = 0 planes.
EDIT2: Added a tiny bit.

right, right, makes sense!! yeah I should train myself to visualise the contours in my head...i'm not usually very good at visualising 3D objects and rotating stuff around in my head...

thanks so much b^3!
2020 - 2021: Master of Public Health, The University of Sydney
2017 - 2020: Doctor of Medicine, The University of Melbourne
2014 - 2016: Bachelor of Biomedicine, The University of Melbourne
2013 ATAR: 99.95

Currently selling copies of the VCE Chinese Exam Revision Book and UMEP Maths Exam Revision Book, and accepting students for Maths Methods and Specialist Maths Tutoring in 2020!

brightsky

  • Victorian
  • ATAR Notes Legend
  • *******
  • Posts: 3136
  • Respect: +200
Re: brightsky's Maths Thread
« Reply #212 on: November 23, 2013, 10:53:59 pm »
0
does Melbourne uni post up solutions to past papers?
2020 - 2021: Master of Public Health, The University of Sydney
2017 - 2020: Doctor of Medicine, The University of Melbourne
2014 - 2016: Bachelor of Biomedicine, The University of Melbourne
2013 ATAR: 99.95

Currently selling copies of the VCE Chinese Exam Revision Book and UMEP Maths Exam Revision Book, and accepting students for Maths Methods and Specialist Maths Tutoring in 2020!

TrueTears

  • TT
  • Honorary Moderator
  • Great Wonder of ATAR Notes
  • *******
  • Posts: 16363
  • Respect: +667
Re: brightsky's Maths Thread
« Reply #213 on: November 24, 2013, 05:58:46 am »
+1
for your question, I can present a vague, informal proof, but not sure how to write it up formally. so we have the augmented matrix A|b. we perform row operations to turn it into reduced row echelon form. the stuff on the LHS of the | would be identical to the reduced row echelon form of A. so the columns in A corresponding to the columns on the LHS of the | which contain pivots form a basis for the column space of A. now if rank(A) = rank(augmented matrix), then that means that b can be written as a linear combination of the vectors in a basis for the column space of A and so b is in the column space of A. if rank(A) < rank(augmented matrix), then b cannot be written as a linear combination of the vectors in the basis for the column space of A and so b is not in the column space of A. is this right? (I need to learn the art of writing up proofs in fancy maths notation though...any help would be much appreciated!)
I can see where you're heading, here is perhaps a clearer argument:

So assume that rank[A] = rank[A|b] is true. Now, name the column vectors of A: a_1, a_2, ..., a_n. We know that rank[A] <= min(m,n), so some subset of the column vectors of A must form a basis for the column space of A; call this set of vectors a_1, a_2, ..., a_r, where r <= min(m,n). Now the column vectors of A|b are given by a_1, a_2, ..., a_r, ..., a_n, b. By the initial assumption, we know that the dimension of the column space of A|b is also r = rank[A]. Since a_1, a_2, ..., a_r are a linearly independent set of vectors and they also belong to the column space of A|b, they form a basis for the column space of A|b. Thus, b is expressible as a linear combination of a_1, a_2, ..., a_r, hence by definition, b lies in the column space of A.

In fact, the converse is also true; that is, if b lies in the column space of A, then rank[A] = rank[A|b].
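This characterisation is easy to spot-check numerically. A small sketch (the matrix and vectors below are made up for illustration):

```python
import numpy as np

rank = np.linalg.matrix_rank

A = np.array([[1., 2.],
              [2., 4.],
              [0., 1.]])            # rank 2

b_in = A @ np.array([3., -1.])      # a linear combination of A's columns
b_out = np.array([1., 0., 0.])      # not in the column space of A

# b lies in col(A)  <=>  rank[A|b] = rank[A]
assert rank(np.column_stack([A, b_in])) == rank(A)
assert rank(np.column_stack([A, b_out])) == rank(A) + 1
```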
« Last Edit: November 24, 2013, 06:02:46 am by TrueTears »
PhD @ MIT (Economics).

Interested in asset pricing, econometrics, and social choice theory.

vcestudent94

  • Victorian
  • Forum Obsessive
  • ***
  • Posts: 419
  • Respect: +36
Re: brightsky's Maths Thread
« Reply #214 on: November 24, 2013, 11:34:49 am »
+1
does Melbourne uni post up solutions to past papers?
The university doesn't. Whether any solutions are given out depends on the individual lecturer.

brightsky

  • Victorian
  • ATAR Notes Legend
  • *******
  • Posts: 3136
  • Respect: +200
Re: brightsky's Maths Thread
« Reply #215 on: November 24, 2013, 12:04:29 pm »
0
right, right makes sense! I really need to learn how to express stuff clearly in formal notation :S

I have another question, albeit a minor one: are these two statements the same as each other?
1. row space of A and nullspace of A are orthogonal complements
2. every vector in row space of A is orthogonal to every vector in the nullspace of A

the proof that rife presented a few pages earlier established the truth of statement no. 2. but apparently to prove statement no. 1, you need to present an if and only if proof, i.e. prove that if v is in the nullspace of A then it is orthogonal to every vector in the row space of A, AND prove that if v is orthogonal to every vector in the row space of A, then v is in the nullspace of A. I never thought the second part was at all necessary...but apparently it is?

any help/clarification appreciated!
2020 - 2021: Master of Public Health, The University of Sydney
2017 - 2020: Doctor of Medicine, The University of Melbourne
2014 - 2016: Bachelor of Biomedicine, The University of Melbourne
2013 ATAR: 99.95

Currently selling copies of the VCE Chinese Exam Revision Book and UMEP Maths Exam Revision Book, and accepting students for Maths Methods and Specialist Maths Tutoring in 2020!

brightsky

  • Victorian
  • ATAR Notes Legend
  • *******
  • Posts: 3136
  • Respect: +200
Re: brightsky's Maths Thread
« Reply #216 on: November 24, 2013, 12:25:29 pm »
0
also (unrelated to the above), if we have a linear transformation T: R^2 --> R^2, and a fancy basis B = {(2,2), (2,1)}, and we're only given that T([2;2]) = [2;2] and T([2;1]) = [2;2] + [2;1], is there a quick way to work out the matrix of transformation with respect to the fancy basis B? it is relatively easy to find the matrix of transformation with respect to the standard basis S = {(1,0), (0,1)}, but apparently it is even easier to find it first in terms of the fancy basis B? I think i'm missing something here...

thanks!
2020 - 2021: Master of Public Health, The University of Sydney
2017 - 2020: Doctor of Medicine, The University of Melbourne
2014 - 2016: Bachelor of Biomedicine, The University of Melbourne
2013 ATAR: 99.95

Currently selling copies of the VCE Chinese Exam Revision Book and UMEP Maths Exam Revision Book, and accepting students for Maths Methods and Specialist Maths Tutoring in 2020!

Alwin

  • Victorian
  • Forum Leader
  • ****
  • Posts: 838
  • Respect: +241
Re: brightsky's Maths Thread
« Reply #217 on: November 24, 2013, 02:01:01 pm »
+2
also (unrelated to the above), if we have a linear transformation T: R^2 --> R^2, and a fancy basis B = {(2,2), (2,1)}, and we're only given that T([2;2]) = [2;2] and T([2;1]) = [2;2] + [2;1], is there a quick way to work out the matrix of transformation with respect to the fancy basis B? it is relatively easy to find the matrix of transformation with respect to the standard basis S = {(1,0), (0,1)}, but apparently it is even easier to find it first in terms of the fancy basis B? I think i'm missing something here...

thanks!

Assuming when you write [a;b] you mean a column matrix with a on top of b. I use "_" to denote subscripts, sorry no LaTeX. Continuing:
EDIT: looked too ugly, had to change it :P

let the basis vectors for basis B be: b_1 = [2;2] and b_2 = [2;1]
therefore, T(b_1) = b_1 = 1*b_1 + 0*b_2 and T(b_2) = b_1 + b_2 = 1*b_1 + 1*b_2

the B-coordinate vectors [T(b_1)]_B = [1;0] and [T(b_2)]_B = [1;1] form the columns of our matrix, A = [T]_B,
where A = [1 1; 0 1]

as for your first q, don't see much of a difference :P
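Reading off the B-coordinates of T(b_1) = b_1 and T(b_2) = b_1 + b_2 gives [T]_B = [1 1; 0 1]; here's a quick numpy check that this B-matrix reproduces the given action of T (a sketch; P is the change-of-basis matrix whose columns are b_1 and b_2):

```python
import numpy as np

P = np.array([[2., 2.],
              [2., 1.]])            # columns are b_1 = (2,2), b_2 = (2,1)
T_B = np.array([[1., 1.],
                [0., 1.]])          # matrix of T with respect to basis B
T_S = P @ T_B @ np.linalg.inv(P)    # the same map in the standard basis

b1, b2 = P[:, 0], P[:, 1]
assert np.allclose(T_S @ b1, b1)        # T(b_1) = b_1
assert np.allclose(T_S @ b2, b1 + b2)   # T(b_2) = b_1 + b_2
```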
« Last Edit: November 24, 2013, 02:31:56 pm by Alwin »
2012:  Methods [48] Physics [49]
2013:  English [40] (oops) Chemistry [46] Spesh [42] Indo SL [34] Uni Maths: Melb UMEP [4.5] Monash MUEP [just for a bit of fun]
2014:  BAeroEng/BComm

A pessimist says a glass is half empty, an optimist says a glass is half full.
An engineer says the glass has a safety factor of 2.0

brightsky

  • Victorian
  • ATAR Notes Legend
  • *******
  • Posts: 3136
  • Respect: +200
Re: brightsky's Maths Thread
« Reply #218 on: November 24, 2013, 05:03:57 pm »
0
ooohhh yes of course of course, thanks alwin!!!!

another question: i'm probably a bit pedantic here but just say you have a 3*3 matrix with one repeated eigenvalue. do you write lambda_1 = 1, lambda_2 = 1, lambda_3 = 2, or do you just write lambda_1 = 1, lambda_2 = 2, and solve as per usual, perhaps with some indication that you know it's a repeated root, e.g. algebraic multiplicity = 2 or something?

also...if say for lambda_1 = 1 you find that AN eigenvector is [1,-2]. but you know that every scalar multiple of [1,-2] is technically an eigenvector, and the question asks: "find the eigenvalues and corresponding eigenvectors for A", would you just put the eigenvector [1,-2], since it's kind of self-explanatory that the eigenspace is a line not a dot, or would you put something like t*[1,-2], where t E R as the eigenvector?

thanks!
« Last Edit: November 24, 2013, 05:07:56 pm by brightsky »
2020 - 2021: Master of Public Health, The University of Sydney
2017 - 2020: Doctor of Medicine, The University of Melbourne
2014 - 2016: Bachelor of Biomedicine, The University of Melbourne
2013 ATAR: 99.95

Currently selling copies of the VCE Chinese Exam Revision Book and UMEP Maths Exam Revision Book, and accepting students for Maths Methods and Specialist Maths Tutoring in 2020!

TrueTears

  • TT
  • Honorary Moderator
  • Great Wonder of ATAR Notes
  • *******
  • Posts: 16363
  • Respect: +667
Re: brightsky's Maths Thread
« Reply #219 on: November 24, 2013, 05:20:01 pm »
+2
I have another question, albeit a minor one: are these two statements the same as each other?
1. row space of A and nullspace of A are orthogonal complements
2. every vector in row space of A is orthogonal to every vector in the nullspace of A
Well let's try prove the equivalence (iff) between statements 1. and 2.

First 1. => 2.

By definition, if 1. is true, then every vector in the row space of A is orthogonal to every vector in the nullspace of A (and every vector in the nullspace of A is orthogonal to every vector in the row space of A), which is exactly statement 2.

2. => 1.

To prove this, we must show that if a vector v is orthogonal to every vector in the row space, then it must belong to the nullspace of A, that is, v satisfies Av=0. Conversely, we must show that if Av=0 (that is, v belongs to the nullspace of A), then v is orthogonal to every vector in the rowspace.

So first assume that v is orthogonal to every vector in the row space of A. Let the row vectors of A be r_1, r_2, ..., r_m. Assuming we are working with the Euclidean inner product defined on R^n, we have v.r_1 = v.r_2 = ... = v.r_m = 0. Remembering that the nullspace of A, Ax = 0, can be expressed as the system:

r_1 . x = 0
r_2 . x = 0
...
r_m . x = 0

then v must clearly be a solution to this system of equations and hence it lies in the nullspace of A.

Conversely, assume v is a vector from the nullspace of A, so Av=0. Then clearly, from the aforementioned expression for the nullspace, r_1.v = r_2.v= ... = r_m.v = 0. Now let r be any vector from the rowspace of A, by definition r = c_1 r_1 + ... + c_m r_m, where c_1, ..., c_m are just scalar constants. Now, r.v =  (c_1 r_1 + ... + c_m r_m).v = c_1(r_1.v) + ... + c_m(r_m.v) = 0, hence v is orthogonal to every vector in the rowspace of A.

And we are done.
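A quick numerical illustration of the result (a sketch; the matrix is arbitrary, and the nullspace vector is pulled from the SVD):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])    # rank 2, so the nullspace of A is 1-dimensional

_, _, Vt = np.linalg.svd(A)
v = Vt[-1]                      # last right-singular vector spans the nullspace here
assert np.allclose(A @ v, 0)    # v really is in the nullspace

r = np.array([2., -1.]) @ A     # an arbitrary vector in the row space of A
assert abs(r @ v) < 1e-9        # orthogonal, as the proof shows
```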
PhD @ MIT (Economics).

Interested in asset pricing, econometrics, and social choice theory.

Alwin

  • Victorian
  • Forum Leader
  • ****
  • Posts: 838
  • Respect: +241
Re: brightsky's Maths Thread
« Reply #220 on: November 24, 2013, 05:20:54 pm »
0
ooohhh yes of course of course, thanks alwin!!!!

another question: i'm probably a bit pedantic here but just say you have a 3*3 matrix with one repeated eigenvalue. do you write lambda_1 = 1, lambda_2 = 1, lambda_3 = 2, or do you just write lambda_1 = 1, lambda_2 = 2, and solve as per usual, perhaps with some indication that you know it's a repeated root, e.g. algebraic multiplicity = 2 or something?

also...if say for lambda_1 = 1 you find that AN eigenvector is [1,-2]. but you know that every scalar multiple of [1,-2] is technically an eigenvector, and the question asks: "find the eigenvalues and corresponding eigenvectors for A", would you just put the eigenvector [1,-2], since it's kind of self-explanatory that the eigenspace is a line not a dot, or would you put something like t*[1,-2], where t E R as the eigenvector?

thanks!

no worries!

Normally I write:
lambda_1 = lambda_2 = 1, lambda_3 = 2
Assuming your second question is not related to the first, as that matrix would be deficient (geometric multiplicity < algebraic multiplicity, etc.):
When it asks you to find the eigenvectors of a matrix, you list vectors.
When it asks for the eigenspace, then yes, you are completely correct and you include the parameters.

And just being a bit picky: if there was a repeated factor for one eigenvalue, then for a non-deficient matrix there would be more than one eigenvector, so the eigenspace would be the span of the eigenvectors corresponding to the repeated eigenvalue, not just a line.
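For a concrete non-deficient example of a repeated eigenvalue (a sketch; the matrix is made up):

```python
import numpy as np

A = np.diag([1., 1., 2.])       # eigenvalue 1 has algebraic multiplicity 2
vals, vecs = np.linalg.eig(A)
assert np.allclose(sorted(vals), [1., 1., 2.])

# the eigenspace for lambda = 1 is 2-dimensional (a plane, not a line),
# so geometric multiplicity = algebraic multiplicity here
ones = vecs[:, np.isclose(vals, 1.)]
assert np.linalg.matrix_rank(ones) == 2
```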
2012:  Methods [48] Physics [49]
2013:  English [40] (oops) Chemistry [46] Spesh [42] Indo SL [34] Uni Maths: Melb UMEP [4.5] Monash MUEP [just for a bit of fun]
2014:  BAeroEng/BComm

A pessimist says a glass is half empty, an optimist says a glass is half full.
An engineer says the glass has a safety factor of 2.0

Alwin

  • Victorian
  • Forum Leader
  • ****
  • Posts: 838
  • Respect: +241
Re: brightsky's Maths Thread
« Reply #221 on: November 24, 2013, 05:50:30 pm »
+1
Oh, and @TrueTears or anyone else who can help, can you show me how to finish my proof off? :)

To prove: Eigenvectors of a matrix form a linearly independent set.

Base case: consider two eigenvectors, v_1 and v_2, with corresponding distinct eigenvalues lambda_1 and lambda_2.

A lemma of sorts
Assume linear dependence: v_2 = c*v_1 for some scalar c =/= 0    [1]

A*v_2 = lambda_2*v_2    [2]
A*(c*v_1) = c*lambda_1*v_1    [3]
so c*lambda_1*v_1 = c*lambda_2*v_1
c*(lambda_1 - lambda_2)*v_1 = 0

But since lambda_1 =/= lambda_2 and v_1 =/= 0, we must have c = 0, and we have a contradiction.
So, the initial assumption is wrong and the two eigenvectors are linearly independent.


General case: A has the set of eigenvectors {v_1, v_2, ..., v_n}

Assume:
Linear independence for the set of some eigenvectors {v_1, v_2, ..., v_j}.
We have already shown that there are two linearly independent eigenvectors above.
The other eigenvectors, v_{j+1}, ..., v_n, are linear combinations of these linearly independent vectors.

Thus, one of the eigenvectors in the set can be expressed as a combination of the linearly independent vectors:

v_{j+1} = c_1*v_1 + c_2*v_2 + ... + c_j*v_j    [4]

Multiplying [4] by lambda_{j+1}:
lambda_{j+1}*v_{j+1} = c_1*lambda_{j+1}*v_1 + ... + c_j*lambda_{j+1}*v_j    [5]

Applying A to [4]:
lambda_{j+1}*v_{j+1} = c_1*lambda_1*v_1 + ... + c_j*lambda_j*v_j    [6]

[6] - [5]:
0 = c_1*(lambda_1 - lambda_{j+1})*v_1 + ... + c_j*(lambda_j - lambda_{j+1})*v_j

Anddd now what??? can I say that since lambda_i =/= lambda_{j+1} and v_i =/= 0, I have a contradiction?? imho that seems wrong because they could I guess still add + subtract to give 0.
Or have I done something wrong?

Or... should I end with c_1 = c_2 = ... = c_j = 0, so then the set is linearly independent...? But that was in my assumption that the eigenvectors from 2 to j were linearly independent and I'm looking for a contradiction yeah.. ?


THANKS :D
« Last Edit: November 24, 2013, 05:52:48 pm by Alwin »
2012:  Methods [48] Physics [49]
2013:  English [40] (oops) Chemistry [46] Spesh [42] Indo SL [34] Uni Maths: Melb UMEP [4.5] Monash MUEP [just for a bit of fun]
2014:  BAeroEng/BComm

A pessimist says a glass is half empty, an optimist says a glass is half full.
An engineer says the glass has a safety factor of 2.0

kamil9876

  • Victorian
  • Part of the furniture
  • *****
  • Posts: 1943
  • Respect: +109
Re: brightsky's Maths Thread
« Reply #222 on: November 24, 2013, 08:34:13 pm »
0
You should reformulate the actual statement more carefully. We are of course assuming that lambda_i =/= lambda_k for i =/= k, and that each eigenvector is not the zero vector.

That way you get from your equation [6] - [5] the conclusion that c_i*(lambda_i - lambda_{j+1}) = 0 for each i, by the assumed linear independence of v_1, ..., v_j. But now since lambda_i - lambda_{j+1} =/= 0 (by the crucial assumption that the eigenvalues are all distinct), we get that c_1 = c_2 = ... = c_j = 0 as desired (which would be a contradiction, as it would imply (by [4]) that v_{j+1} = 0, which contradicts our assumption).
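The result is easy to spot-check numerically (a sketch; the matrix below is an arbitrary example with distinct eigenvalues):

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])            # distinct eigenvalues 2 and 3
vals, vecs = np.linalg.eig(A)
assert len(set(np.round(vals, 9))) == 2     # eigenvalues really are distinct

# distinct eigenvalues => the eigenvectors form a linearly independent set,
# so the matrix of eigenvectors has full rank
assert np.linalg.matrix_rank(vecs) == 2
```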

Voltaire: "There is an astonishing imagination even in the science of mathematics ... We repeat, there is far more imagination in the head of Archimedes than in that of Homer."

brightsky

  • Victorian
  • ATAR Notes Legend
  • *******
  • Posts: 3136
  • Respect: +200
Re: brightsky's Maths Thread
« Reply #223 on: September 06, 2014, 03:11:05 pm »
0
Two questions:

1. A self map is a function whose range is a subset of the domain. A fixed point of a self-map f is a point c with c = f(c). Let [a,b] be a closed finite interval and let f:[a,b] --> [a,b] be continuous on [a,b]. Prove that there exists at least one fixed point.

2. A function f(x) is defined as 1 if x is rational and sin(x) if x is irrational. Find the derivative of f(x). If at any point the derivative does not exist, explain why it doesn't and report the values of the left and right derivatives if these exist.

Any insight would be much appreciated! :)
2020 - 2021: Master of Public Health, The University of Sydney
2017 - 2020: Doctor of Medicine, The University of Melbourne
2014 - 2016: Bachelor of Biomedicine, The University of Melbourne
2013 ATAR: 99.95

Currently selling copies of the VCE Chinese Exam Revision Book and UMEP Maths Exam Revision Book, and accepting students for Maths Methods and Specialist Maths Tutoring in 2020!

Phenomenol

  • Victorian
  • Trendsetter
  • **
  • Posts: 114
  • Class of 2013
  • Respect: +60
Re: brightsky's Maths Thread
« Reply #224 on: September 06, 2014, 05:05:45 pm »
0

Two questions:

1. A self map is a function whose range is a subset of the domain. A fixed point of a self-map f is a point c with c = f(c). Let [a,b] be a closed finite interval and let f:[a,b] --> [a,b] be continuous on [a,b]. Prove that there exists at least one fixed point.

2. A function f(x) is defined as 1 if x is rational and sin(x) if x is irrational. Find the derivative of f(x). If at any point the derivative does not exist, explain why it doesn't and report the values of the left and right derivatives if these exist.

Any insight would be much appreciated! :)

1. Let F(x) = f(x) - x
Now apply the Intermediate Value Theorem: since f maps into [a,b], F(a) = f(a) - a >= 0 and F(b) = f(b) - b <= 0, so F has a zero c in [a,b], i.e. f(c) = c.
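For intuition, that argument turns directly into a numerical method (a sketch: bisection on F(x) = f(x) - x, with cos as an example self-map of [0, 1]):

```python
import math

def fixed_point(f, a, b, tol=1e-12):
    """Find c in [a, b] with f(c) = c, given f maps [a, b] into [a, b].
    F(x) = f(x) - x satisfies F(a) >= 0 and F(b) <= 0, so bisect on F."""
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

c = fixed_point(math.cos, 0.0, 1.0)   # cos maps [0, 1] into [cos 1, 1], a subset of [0, 1]
assert abs(math.cos(c) - c) < 1e-9
```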
« Last Edit: September 06, 2014, 07:49:29 pm by Phenomenol »
Methods 46, Music Performance 49 (Top Acts), Chemistry 50, English 43, Physics 45, Specialist 48, University Maths 93%

ATAR: 99.80 (ASP)

2014-2016: BSc (Chemistry) UoM

2017-2018: MSc (Chemistry) UoM

Stuff I've written:
Free VCE Chemistry Trial Exam (2017)

VCE Chemistry Revision Questions (2017)

PM me if you are looking for a 1/2 or 3/4 VCE Chemistry tutor in 2018. I can also do other subjects including Methods, Specialist and Physics depending on availability.