So here's the standard (in my experience) way to solve for eigenvalues and eigenvectors:
The meaning of eigenvectors and eigenvalues:
The definition of an eigenvalue/eigenvector pair for a given matrix is that when you hit the eigenvector with the matrix, you get the same result as when you multiply the eigenvector by the eigenvalue.
This is written like so:
matrix*eigenvector = eigenvalue*eigenvector, or
A*v_eig = lambda*v_eig.
Basically when you take any vector 'v_initial' and "hit it" with a matrix 'A',
i.e.: A*v_initial = v_final,
the vector 'v_initial' is transformed into a new vector 'v_final'. Depending on the matrix, this transformation can be a reflection about a line, a rotation, a scaling, a combination of those things, etc.
Geometrically, eigenvectors are just the special case where the vector is only scaled - no rotations, reflections, etc. You hit v_eig with A, and you get back v_eig multiplied by a scaling factor (and that scaling factor is the eigenvalue lambda).
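Here's a quick numerical sanity check of that definition (a minimal sketch, assuming NumPy; the matrix and vector are just made-up illustrations):

    import numpy as np

    # A 2x2 matrix whose eigenvectors are easy to see by inspection.
    A = np.array([[2.0, 0.0],
                  [0.0, 3.0]])

    v_eig = np.array([0.0, 1.0])   # an eigenvector of A (points along the y-axis)
    lam = 3.0                      # its eigenvalue

    print(A @ v_eig)     # [0. 3.]
    print(lam * v_eig)   # [0. 3.]  -- same vector: hitting v_eig with A only scales it

Both products come out identical, which is exactly the A*v_eig = lambda*v_eig condition.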
Ok, so now onto actually solving for eigenvalues and eigenvectors (without the guesswork):
Finding Eigenvalues:
We start with the basic definition for eigenvectors and eigenvalues:
A*v_eig = lambda*v_eig
We want to somehow solve for lambda from that. So first we might try subtracting lambda*v_eig from both sides of the equation.
A*v_eig - lambda*v_eig = 0_vector
Here we get mildly excited since we notice that both of the terms have a v_eig in them. It would be super-nice if we could factor out that v_eig from both terms. Unfortunately we can't factor it as (A-lambda)*v_eig, since you can't subtract a scalar (lambda) from a matrix (A).
The trick is to remember that if you hit any vector with the identity matrix, you get back the original vector. Using this fact, we can write the equation as:
A*v_eig - lambda*(I*v_eig) = 0_vector, or
A*v_eig - (lambda*I)*v_eig = 0_vector
This we can factor, and we get
(A - lambda*I)*v_eig = 0_vector
Now we have a matrix hitting a vector, and we get out the zero vector. If you recall from matrix multiplication and determinant properties, this can happen in 2 cases:
1: v_eig is the zero vector (not very interesting), or
2: the determinant of (A - lambda*I) is zero.
The 2nd case is interesting because it gives us a nice equation that we can use to solve for lambda:
det(A-lambda*I) = 0
All you have to do is calculate that determinant, set it equal to zero, and solve for lambda. You will likely notice that the equation you get has multiple "roots", i.e. you will be able to write it as
(lambda - x1)(lambda - x2)(...) = 0.
These multiple solutions (lambda_1 = x1, lambda_2 = x2, ..., lambda_n = xn) are all valid, and each one corresponds to (at least) one eigenvector.
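If you'd rather not expand the determinant by hand, here is a sketch of those same steps in Python using SymPy (assuming SymPy is installed; the matrix is just an example I made up):

    import sympy as sp

    lam = sp.symbols('lambda')
    A = sp.Matrix([[2, 1],
                   [1, 2]])
    I = sp.eye(2)

    char_poly = (A - lam * I).det()   # det(A - lambda*I)
    print(sp.factor(char_poly))       # factors as (lambda - 1)*(lambda - 3)
    print(sp.solve(char_poly, lam))   # [1, 3] -- the eigenvalues

For this A, the determinant works out to (2 - lambda)^2 - 1 = (lambda - 1)(lambda - 3), so lambda_1 = 1 and lambda_2 = 3.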
Ok, now on to finding the eigenvectors they correspond to:
Finding Eigenvectors Given Their Eigenvalues:
Once you have the eigenvalues, you can just plug them into the equation:
(A - lambda*I)*v_eig = 0_vector
But that's just a homogeneous system of linear equations that you can solve by row-reduction. So all you have to do is row-reduce the matrix (A - lambda*I) and read off its null space; any nonzero vector in that null space is an eigenvector for that lambda.
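As a sketch of that last step in SymPy (continuing the made-up example from above, where the eigenvalues were 1 and 3), nullspace() does the row-reduction for you and returns a basis for the solutions of (A - lambda*I)*v_eig = 0_vector:

    import sympy as sp

    A = sp.Matrix([[2, 1],
                   [1, 2]])

    for lam in [1, 3]:                # the eigenvalues found earlier
        M = A - lam * sp.eye(2)
        print(lam, M.nullspace())     # each basis vector is an eigenvector for that lambda

    # lam = 1 gives the direction (-1, 1); lam = 3 gives the direction (1, 1).

Note that any nonzero scalar multiple of these vectors is an equally valid eigenvector.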
http://maze5.net