A set $V$ and a field $F$, with $(V, +)$ being an abelian group, and a scalar multiplication $F \times V \to V$ that is associative, with $1 \in F$ producing the identity map on $V$, distributive laws both ways between $+$ on $V$ and $+$ on $F$, and scalar multiplication by a product in $F$ being the composition of the scalar multiplications.
Elements of $V$ are called vectors, and elements of $F$ are called scalars. $U$ is a subspace of $V$ if $U$ is closed under vector addition and scalar multiplication, inherited from $V$, and also contains the zero vector.
I don't know why I wrote the definition like this; maybe I was just too lazy to type it out.
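A quick numeric sanity check of the subspace conditions (a sketch, assuming $V = \mathbb{R}^2$ over $\mathbb{R}$ and the made-up subspace $U = \{(x, 2x)\}$, a line through the origin):

```python
import numpy as np

# Spot checks that U = span((1, 2)) satisfies the three subspace conditions.
def in_U(v, tol=1e-12):
    return abs(v[1] - 2 * v[0]) < tol   # membership test for U = {(x, 2x)}

u = np.array([1.0, 2.0])
w = np.array([-3.0, -6.0])
lam = 4.5

assert in_U(u + w)          # closed under vector addition
assert in_U(lam * u)        # closed under scalar multiplication
assert in_U(np.zeros(2))    # contains the zero vector
```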
Linear combination
Let $V$ over $F$ be a vector space, $v_1, \dots, v_n$ a list of vectors, and $a_1, \dots, a_n$ a list of scalars. The linear combination of these lists is the vector $a_1 v_1 + \dots + a_n v_n$.
The set of all linear combinations of $v_1, \dots, v_n$ is called $\operatorname{span}(v_1, \dots, v_n)$ and is a subspace of $V$: choosing all scalars to be zero gives the zero vector in the span, and by definition the span is closed under both operations.
If there exists a list of scalars $a_1, \dots, a_n$, not all zero, for which $a_1 v_1 + \dots + a_n v_n = 0$, then the list of vectors is called linearly dependent.
On the other hand, if $a_1 v_1 + \dots + a_n v_n = 0$ if and only if each $a_i = 0$, then the list of vectors is linearly independent.
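A handy numeric test (a sketch, not from LADR): stack the vectors as columns of a matrix; the list is linearly independent exactly when the rank equals the length of the list.

```python
import numpy as np

# If the matrix whose columns are v_1, ..., v_n has rank n, the list is linearly
# independent; a smaller rank means some nontrivial combination gives zero.
v1, v2, v3 = np.array([1., 0., 0.]), np.array([0., 1., 0.]), np.array([1., 1., 0.])

A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))          # 2 < 3: the list (v1, v2, v3) is linearly dependent
print(np.linalg.matrix_rank(A[:, :2]))   # 2 == 2: the sublist (v1, v2) is linearly independent
```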
The dimension of a vector space is the length of the smallest list of vectors that spans it. The same applies to subspaces.
For a linearly dependent list of vectors, the dimension of the subspace induced by its span is at most the length of the list minus one: one of these vectors is already in the span of the others, so we need not include it in the list, and we can still span this subspace.
If $V$ is a vector space, any smallest list of linearly independent vectors that spans $V$ is called a basis, and its elements are the basis vectors.
Linear map
If $V, W$ are vector spaces (over the same field $F$, up to isomorphism), then a function $f: V \to W$ is a linear map if:
$$f(u + v) = f(u) + f(v) \quad \text{and} \quad f(\lambda v) = \lambda f(v)$$
for all $u, v \in V$ and $\lambda \in F$.
The linear map is a homomorphism between two vector spaces. Let $\mathbf{e_1}, \dots, \mathbf{e_n}$ be a basis list of $V$. Then for any vector $v \in V$ we have scalars $a_1, \dots, a_n$ such that $$v = a_1 \mathbf{e_1} + \dots + a_n \mathbf{e_n}.$$
Therefore, $f(v) = a_1 f(\mathbf{e_1}) + \dots + a_n f(\mathbf{e_n})$.
This allows us to write a matrix $$
M = \begin{bmatrix} \uparrow &  & \uparrow \\ f(\mathbf{e_{1}}) & \dots & f(\mathbf{e_{n}}) \\ \downarrow &  & \downarrow \end{bmatrix}
$$ whose columns are the images of the basis vectors, and when we write $v$ as a column vector $\mathbf{a} = (a_1, \dots, a_n)^{T}$, with components w.r.t. the same basis, the application $f(v)$ is equivalent to the matmul $M\mathbf{a}$.
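A small sketch of this construction, using a made-up linear map $f: \mathbb{R}^3 \to \mathbb{R}^2$ and the standard basis:

```python
import numpy as np

# Build M column-by-column from the images of the standard basis vectors,
# then check that applying f agrees with the matrix-vector product M @ a.
def f(v):
    x, y, z = v
    return np.array([x + 2 * y, 3 * z])      # an arbitrary (made-up) linear map R^3 -> R^2

e = np.eye(3)                                 # columns are e_1, e_2, e_3
M = np.column_stack([f(e[:, i]) for i in range(3)])

a = np.array([1.0, -2.0, 5.0])                # components of v w.r.t. the same basis
assert np.allclose(f(a), M @ a)
print(M)
```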
Also, $f(V)$ (also called $\operatorname{im}(f)$) is a subspace of $W$, simply due to the structure-preserving nature of the linear map: if $f(u), f(v) \in f(V)$, then $f(u) + f(v) = f(u + v) \in f(V)$. Similarly, if $f(v) \in f(V)$, then $\lambda f(v) = f(\lambda v) \in f(V)$. The zero vector is in there too, since $f(0) = 0$.
Hence we are concerned with the dimension of $f(V)$, to see whether our linear map squashes the space into a lower dimension than $\dim V$.
Rank of a linear map
Let $V$ be a vector space with $\dim V = n$, and $\mathbf{e_1}, \dots, \mathbf{e_n}$ a basis of $V$, and let $W$ also be a vector space, both over a scalar field $F$.
Then the rank of the linear map $f: V \to W$ is the dimension of $f(V)$.
By definition, the list $f(\mathbf{e_1}), \dots, f(\mathbf{e_n})$ spans $f(V)$ and has length $n$. Therefore $\dim f(V)$ is at most $n$ (we may find smaller spanning lists for $f(V)$).
In other words, $\operatorname{rank}(f) \le \dim V$.
Kernel of a linear map
For a linear map $f: V \to W$, the kernel of $f$, $\ker(f)$, is the subset of $V$ that maps to the zero vector in $W$: $\ker(f) = \{ v \in V : f(v) = 0 \}$.
We notice that $\ker(f)$ is a subspace of $V$. If $u, v \in \ker(f)$, then $f(u + v) = f(u) + f(v) = 0$. Similarly, if $v \in \ker(f)$ and $\lambda \in F$, then $f(\lambda v) = \lambda f(v) = 0$. Finally, notice that $f(0) = f(0 + 0) = f(0) + f(0)$; but $f(0)$ has some additive inverse in $W$, therefore $f(0) = 0$ and $0 \in \ker(f)$.
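A numeric spot check, with a made-up matrix $M$ playing the role of $f$:

```python
import numpy as np

# Two vectors in ker(M) for a made-up 2x4 matrix M, plus spot checks of closure.
M = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.]])
u = np.array([1., -1., 0., 0.])
w = np.array([0., 0., 1., -1.])

assert np.allclose(M @ u, 0) and np.allclose(M @ w, 0)   # both lie in ker(M)
assert np.allclose(M @ (u + w), 0)                        # closed under addition
assert np.allclose(M @ (7.5 * u), 0)                      # closed under scalar multiplication
assert np.allclose(M @ np.zeros(4), 0)                    # contains the zero vector
```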
We need some nice spanning-list and linearly-independent-list lemmas from LADR for this one.
2.19 Linear Dependence Lemma
Suppose $v_1, \dots, v_m$ is a linearly dependent list in $V$. Then there exists $k \in \{1, 2, \dots, m\}$ such that $$v_k \in \operatorname{span}(v_1, \dots, v_{k-1}).$$
Furthermore, if $k$ satisfies the condition above and the $k$th term is removed from $v_1, \dots, v_m$, then the span of the remaining list equals $\operatorname{span}(v_1, \dots, v_m)$.
Proof
Because the list $v_1, \dots, v_m$ is linearly dependent, there exist numbers $a_1, \dots, a_m \in F$, not all 0, such that $$a_1 v_1 + \dots + a_m v_m = 0.$$
Let $k$ be the largest element of $\{1, \dots, m\}$ such that $a_k \neq 0$. Then $$v_k = -\frac{a_1}{a_k} v_1 - \dots - \frac{a_{k-1}}{a_k} v_{k-1},$$
which proves that $v_k \in \operatorname{span}(v_1, \dots, v_{k-1})$, as desired. (For the second claim, in any linear combination of $v_1, \dots, v_m$ we can substitute the expression above for $v_k$, so removing $v_k$ does not change the span.)
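A concrete instance of the lemma, with made-up vectors in $\mathbb{R}^2$:

```python
import numpy as np

# Dependent list (v1, v2, v3) with 2*v1 + 3*v2 - 1*v3 = 0. The largest index with a
# nonzero coefficient is k = 3, so v3 lies in span(v1, v2) and can be dropped
# without changing the span.
v1, v2, v3 = np.array([1., 0.]), np.array([0., 1.]), np.array([2., 3.])
a = np.array([2., 3., -1.])

assert np.allclose(a[0]*v1 + a[1]*v2 + a[2]*v3, 0)   # the dependence relation
vk = -(a[0]*v1 + a[1]*v2) / a[2]                      # solve the relation for v_3
assert np.allclose(vk, v3)                            # v_3 is in span(v_1, v_2)
```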
Length of any linearly independent list is at most the length of any spanning list
Let $V$ be a vector space, $u_1, \dots, u_m$ a linearly independent list of length $m$ in $V$, and $w_1, \dots, w_n$ a spanning list of length $n$ in $V$.
Then $m \le n$.
Consider the following process:
Step 1: create the list $u_1, w_1, \dots, w_n$ of length $n+1$. This list is linearly dependent because $u_1 \in \operatorname{span}(w_1, \dots, w_n) = V$, but $u_1 \neq 0$ as it comes from a linearly independent list. Hence, by the linear dependence lemma, there is some $w_j$ we can remove to get a list $B_1$ of length $n$ which is still a spanning list of $V$.
Step $k$: repeating this process, we have $B_{k-1}$, which contains $u_1, \dots, u_{k-1}$, and the rest are the remaining $w$'s. Make $B_k'$ by adding $u_k$ to $B_{k-1}$, just after $u_{k-1}$. $B_k'$ is a list of length $n+1$ and is linearly dependent, as (by induction) $B_{k-1}$ was a spanning list. Yet again there is some removable vector (linear dependence lemma), and it can't be any one of $u_1, \dots, u_k$, as each of those is preceded only by other $u$'s, which are all linearly independent. So the removable vector is one of the $w$'s; remove it to obtain $B_k$, again a spanning list of length $n$.
At the end of this process we obtain $B_m$, which is a spanning list containing all of $u_1, \dots, u_m$, obtained by removing (at least) $m$ of the $w$'s. This means that $m \le n$.
All bases have the same length, allowing the definition of a dimension (for a finite-dimensional vector space, obviously)
If $V$ is a vector space, and $e_1, \dots, e_n$ and $b_1, \dots, b_m$ are both linearly independent and spanning lists (they're both bases), then they have the same length.
Moreover, any linearly independent list of this length is a basis of $V$.
Apply the above lemma, treating $e_1, \dots, e_n$ as a linearly independent list and $b_1, \dots, b_m$ as a spanning list, to get $n \le m$. Now do it the other way around to get $m \le n$, and hence $m = n$.
For the second part, let $u_1, \dots, u_n$ be a linearly independent list with length $n = \dim V$. Do the replacement process, treating $u_1, \dots, u_n$ as the linearly independent list and a basis $e_1, \dots, e_n$ as the spanning list, both of the same length. Each step adds an element of $u_1, \dots, u_n$ to the list and removes an element which was in $e_1, \dots, e_n$, while keeping the list spanning throughout the process. So we modify $e_1, \dots, e_n$ until it becomes $u_1, \dots, u_n$, maintaining spanning-ness at each step. Therefore $u_1, \dots, u_n$ is spanning, hence a basis.
Rank Nullity theorem
Let $V, W$ be vector spaces over $F$ and $f: V \to W$ a linear map. Then $\dim V = \dim f(V) + \dim \ker(f)$.
Let $k_1, \dots, k_m$ be a basis for $\ker(f)$ of length $m$. Let $e_1, \dots, e_n$ be a basis of $V$ of length $n$. Using the process described in the above lemma, treating $k_1, \dots, k_m$ as the linearly independent list and $e_1, \dots, e_n$ as the spanning list, create $B = k_1, \dots, k_m, v_1, \dots, v_{n-m}$ (where the $v_i$ are the surviving $e_j$'s). This list spans $V$. Moreover, since $k_1, \dots, k_m$ and $e_1, \dots, e_n$ are both linearly independent, at each step, after adding a $k_i$ and removing an $e_j$, we have a linearly independent list (this can again be shown by induction on the same process). Therefore $B$ is a linearly independent list. Since it is also of length $n$, $B$ is a basis of $V$. Hence $f(k_1), \dots, f(k_m), f(v_1), \dots, f(v_{n-m})$ spans $f(V)$ (refer to the definition of a linear map). Since each $k_i \in \ker(f)$, each $f(k_i) = 0$. Therefore $f(v_1), \dots, f(v_{n-m})$ still spans $f(V)$. We claim $f(v_1), \dots, f(v_{n-m})$ is linearly independent.
Otherwise we have $a_1 f(v_1) + \dots + a_{n-m} f(v_{n-m}) = 0$ with not all $a_i$ zero. This means that $f(a_1 v_1 + \dots + a_{n-m} v_{n-m}) = 0$, and hence $u := a_1 v_1 + \dots + a_{n-m} v_{n-m} \in \ker(f)$. If $u \in \ker(f)$, then $u$ can be written as a linear combination in two ways, one using just the $v_i$'s and the other using just the $k_i$'s. Subtracting the two, we get a linear combination of $k_1, \dots, k_m, v_1, \dots, v_{n-m}$ that equals zero. So if even one $a_i \neq 0$, then we contradict the linear independence of $k_1, \dots, k_m, v_1, \dots, v_{n-m}$ (which is $B$). Hence, bubbling back, $f(v_1), \dots, f(v_{n-m})$ is linearly independent, as each $a_i$ has to be zero whenever $a_1 f(v_1) + \dots + a_{n-m} f(v_{n-m}) = 0$. Therefore $\dim f(V) = n - m = \dim V - \dim \ker(f)$.
In practice, if $M$ is a matrix representing a linear map from $V$ to $W$, then by solving $M\mathbf{x} = 0$ for the kernel/nullspace of $M$ and finding the dimension of this nullspace, we can say that the rank (the dimension of the image of this linear map) is equal to the dimension of $V$ minus the dimension of the nullspace/kernel of $M$.
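A sketch of this recipe for a made-up $3 \times 4$ matrix (so $\dim V = 4$), counting the kernel's dimension from the singular values:

```python
import numpy as np

# rank(M) = dim of the image; nullity = dim of the kernel; they sum to dim V = 4.
M = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 1.],
              [1., 3., 1., 2.]])            # third row = first + second, so the rank is 2

rank = np.linalg.matrix_rank(M)

# Kernel dimension = number of columns minus the number of nonzero singular values.
s = np.linalg.svd(M, compute_uv=False)
nullity = M.shape[1] - np.sum(s > 1e-10)

print(rank, nullity)                        # 2 2
assert rank + nullity == M.shape[1]         # rank-nullity: dim f(V) + dim ker(f) = dim V
```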
Invertible linear maps
Linear maps from smaller to larger dimensions are not surjective
Let $V, W$ be vector spaces with $\dim V < \dim W$, and $f: V \to W$ a linear map. Since $\dim f(V) \le \dim V < \dim W$, $f(V)$ cannot span $W$. Hence $f$ is not surjective. (Refer to the definition of the rank of a linear map, or use the rank-nullity theorem to conclude $\dim f(V) \le \dim V$.)
Linear maps from larger to smaller dimensions are not injective
Let $U, V$ be vector spaces with $\dim U > \dim V$, and $f: U \to V$ a linear map. Using the rank-nullity theorem, we have $\dim U = \dim f(U) + \dim \ker(f)$. Hence $\dim \ker(f) = \dim U - \dim f(U) \ge \dim U - \dim V$ (the rightmost inequality owing to the fact that $f(U)$ lives in $V$, so its dimension is at most $\dim V$). Finally, $\dim U - \dim V > 0$, which means that the kernel of $f$ is not the trivial subspace $\{0\}$. Therefore there are at least two vectors in the kernel, both of which (by definition) map to zero in $V$, so $f$ is not injective.
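A numeric sketch, with a made-up $2 \times 3$ matrix standing in for a map from a 3-dimensional space to a 2-dimensional one:

```python
import numpy as np

# Rank-nullity forces a nontrivial kernel here, so the map cannot be injective.
M = np.array([[1., 2., 3.],
              [0., 1., 1.]])
rank = np.linalg.matrix_rank(M)
nullity = M.shape[1] - rank
print(rank, nullity)                    # 2 1: a whole line of inputs is sent to zero

k = np.array([-1., -1., 1.])            # a kernel vector: M @ k == 0
x = np.array([1., 1., 1.])
assert np.allclose(M @ k, 0)
assert np.allclose(M @ x, M @ (x + k))  # two different inputs, same image
```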
Combining the two ideas above, and stating them with a little flair, we have:
Invertible linear maps
A linear map $f: V \to W$ is bijective if and only if $\dim V = \dim W$ and $\ker(f) = \{0\}$.
Necessity of the two conditions is already clear from the above two theorems. We just have to show sufficiency.
Well, given $\dim V = \dim W$ and $\ker(f) = \{0\}$: suppose $f$ is not injective. Then there exist vectors $u \neq v$ such that $f(u) = f(v)$; since the two vectors are not equal, $u - v$ is nonzero. However, it follows that $f(u - v) = f(u) - f(v) = 0$, hence $u - v \in \ker(f)$, contradicting that $\ker(f) = \{0\}$. Moreover, using rank-nullity along with $\dim \ker(f) = 0$, we see that $\dim f(V) = \dim V = \dim W$. It is a trivial fact that the lowest- and highest-dimensional subspaces are uniquely the trivial subspace and the original space itself, respectively. Therefore $f(V) = W$, showing surjectivity.
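A quick check of the sufficiency direction on a made-up $2 \times 2$ matrix (same dimension on both sides, trivial kernel):

```python
import numpy as np

# Full rank means nullity 0, i.e. ker(M) = {0}; by the argument above M is invertible.
M = np.array([[2., 1.],
              [1., 1.]])
assert np.linalg.matrix_rank(M) == 2
M_inv = np.linalg.inv(M)
assert np.allclose(M @ M_inv, np.eye(2))
```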
Maps on the same vector space.
Eigenvectors of a linear map
Let $f$ be a linear map on a vector space $V$ (to itself). Then the eigenvectors of $f$ are the nonzero vectors $v \in V$ with $f(v) = \lambda v$ for some scalar $\lambda$. The $\lambda$ here is called an eigenvalue of $f$. The set of all eigenvalues of $f$ is called the spectrum of $f$ and denoted $\sigma(f)$. If we define $E_\lambda$ as the set that contains all vectors $v$ for which $f(v) = \lambda v$ for that particular $\lambda$, then we have $E_\lambda = \ker(f - \lambda \operatorname{id})$.
For any particular $\lambda$ and a linear map $f$, $E_\lambda$ is a subspace of $V$: notice that if $f(u) = \lambda u$ and $f(v) = \lambda v$, then $f(u + v) = \lambda(u + v)$, and $f(cv) = c\,f(v) = \lambda(cv)$.
The set of eigenvectors of $f$ itself, without fixing a particular eigenvalue, IS NOT ALWAYS A SUBSPACE of $V$!!!
For example, just think about $f(x, y) = (x, -y)$ on $\mathbb{R}^2$: then $(1, 0)$ and $(0, 1)$ are eigenvectors (with eigenvalues $1$ and $-1$), but their sum $(1, 1)$ maps to $(1, -1)$, which is not a scalar multiple of $(1, 1)$.
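A numeric check of that counterexample, taking $M = \operatorname{diag}(1, -1)$ as the matrix of $f$:

```python
import numpy as np

# (1, 0) and (0, 1) are eigenvectors (eigenvalues 1 and -1), but their sum is not.
M = np.diag([1., -1.])
u = np.array([1., 0.])
w = np.array([0., 1.])
s = u + w

print(M @ s)   # (1, -1) is not a scalar multiple of s = (1, 1)
assert np.linalg.matrix_rank(np.column_stack([s, M @ s])) == 2   # s and M @ s are independent
```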
This is pretty cool: when we transform a vector space $V$, there are subspaces of $V$ which contain vectors that each just get scaled by a particular amount under the transform.
The eigenvalue and eigenvector sections are pure gold. Read them :) (LADR)
The idea is to have invariant subspaces:
Operator, invariant subspace
If $V$ is a vector space, a linear map on $V$ (to $V$) is called an operator. A subspace $U$ is invariant under $f$ if $f(u) \in U$ for each $u \in U$. In other words, the restriction $f|_U$ is an operator on $U$.
Let's talk about one-dimensional invariant subspaces. Pick a particular nonzero $v \in V$. Let $U = \operatorname{span}(v) = \{\lambda v : \lambda \in F\}$. Then, if $U$ is invariant under $f$, for any $u \in U$ we have $f(u) \in U$. But any $u = \lambda v$ for some $\lambda$. Since $f$ is linear, $f(u) = \lambda f(v)$. This means that the map $f|_U$ is determined by the image of $v$ itself, and that image is in $U$; therefore there has to be some unique scalar $\mu$ for which $f(v) = \mu v$ (note that $v \neq 0$).
Conversely, if there exists an operator $f$ such that, for some nonzero $v$, $f(v) = \mu v$ for some scalar $\mu$, then $\operatorname{span}(v)$ is an invariant subspace under $f$.
eigenvalue
If $f$ is an operator on $V$, then $\lambda$ is an eigenvalue of $f$ if there exists a nonzero $v \in V$ such that $f(v) = \lambda v$. Writing $f$ as a matrix $M$, we have $(M - \lambda I)v = 0$. Since $v$ is nonzero, the linear map $M - \lambda I$ has a nontrivial kernel/null space. As we know (from the discussion on invertibility), this is equivalent to saying $M - \lambda I$ is not injective. Using the rank-nullity theorem, since the kernel is nontrivial, the dimension of the image of $M - \lambda I$ is smaller than the dimension of $V$, which means that this map is also not surjective. Hence, finally, this map is not invertible.
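A numeric sketch with a made-up upper-triangular matrix (its eigenvalues sit on the diagonal):

```python
import numpy as np

# For an eigenvalue lam of M, the matrix M - lam*I is singular: zero determinant,
# rank below the dimension, hence neither injective nor surjective nor invertible.
M = np.array([[2., 1.],
              [0., 3.]])
lam = 3.0
A = M - lam * np.eye(2)

print(np.linalg.det(A))           # ~0.0
print(np.linalg.matrix_rank(A))   # 1 < 2
```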
Linearly independent eigenvectors
If $f$ is an operator on $V$, and we pick a list of distinct eigenvectors, each associated with its own distinct eigenvalue, then that list is linearly independent. That is, if $v_1, \dots, v_n$ are eigenvectors such that $f(v_i) = \lambda_i v_i$ with the $\lambda_i$ all distinct, then $v_1, \dots, v_n$ is linearly independent.
Since each $v_i$ is nonzero (by definition), suppose the list were linearly dependent, and pick the smallest sublist of $v_1, \dots, v_n$, of length $m$, for which there exists a linear combination $a_1 v_{i_1} + \dots + a_m v_{i_m} = 0$ with each $a_j$ nonzero (if some $a_j$ is zero, just drop that vector).
Using matrix notation, apply $(M - \lambda_{i_m} I)$ to this combination. Since $(M - \lambda_{i_m} I)\,v_{i_m} = 0$ while $(M - \lambda_{i_m} I)\,v_{i_j} = (\lambda_{i_j} - \lambda_{i_m})\,v_{i_j} \neq 0$ for $j < m$, we obtain a smaller sublist (of length $m-1$) that is also linearly dependent, contradicting the minimality of $m$.
The number of eigenvalues is at most the dimension of the space
If $f$ is an operator on $V$, then the number of distinct eigenvalues of $f$ is at most $\dim V$.
Pick a list of eigenvectors, one for each distinct eigenvalue. Then, since this list is linearly independent (from the theorem above), its length is at most $\dim V$ (the length of any linearly independent list is at most the length of any spanning list, in particular a basis).
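A numeric illustration with a made-up $3 \times 3$ matrix that has three distinct eigenvalues:

```python
import numpy as np

# Three distinct eigenvalues give three linearly independent eigenvectors,
# so the count of distinct eigenvalues cannot exceed dim V = 3.
M = np.array([[1., 1., 1.],
              [0., 2., 1.],
              [0., 0., 3.]])            # upper triangular: eigenvalues 1, 2, 3
eigvals, eigvecs = np.linalg.eig(M)

print(np.round(eigvals, 6))              # three distinct values: 1, 2, 3 (in some order)
print(np.linalg.matrix_rank(eigvecs))    # 3: one eigenvector per eigenvalue, all independent
```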