Multilinear Algebra

Hom(V,W)

The set of all linear maps from $V$ to $W$, where $V, W$ are vector spaces, is called $\operatorname{Hom}(V,W)$. It is itself a vector space: the zero map $O(v) = 0_W$ for each $v \in V$ acts as the additive identity, the map $f+g$ is defined as the map $x \mapsto f(x) + g(x)$, and the map $\lambda f$ as the map $x \mapsto \lambda f(x)$.
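As a minimal sketch of this vector-space structure, here is the pointwise addition and scaling of linear maps on $\mathbb{R}^2$ in Python; all names (`add_maps`, `scale_map`, the sample maps `f`, `g`) are illustrative, not from any library.

```python
# Sketch: the vector space Hom(V, W) with V = W = R^2, vectors as tuples.

def f(v):
    # a linear map R^2 -> R^2
    x, y = v
    return (2 * x + y, x)

def g(v):
    # another linear map R^2 -> R^2
    x, y = v
    return (y, 3 * x - y)

def add_maps(f, g):
    # (f + g)(x) := f(x) + g(x), defined pointwise
    return lambda v: tuple(a + b for a, b in zip(f(v), g(v)))

def scale_map(lam, f):
    # (lam * f)(x) := lam * f(x), defined pointwise
    return lambda v: tuple(lam * a for a in f(v))

zero = lambda v: (0, 0)   # the zero map O, the additive identity

v = (1, 4)
h = add_maps(f, g)
print(h(v))                          # (10, 0), i.e. f(v) + g(v)
print(add_maps(f, zero)(v) == f(v))  # True: adding O changes nothing
```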
Now, since $\operatorname{Hom}(V_1,V_2)$ and $\operatorname{Hom}(W_1,W_2)$ are themselves vector spaces, we can consider linear maps between them, and spaces of such maps in turn. This layering of maps between spaces of maps is the "multilinear" nature of the subject.

Dual vector space

If $V$ is a vector space over $\mathbb{R}$, then $\operatorname{Hom}(V,\mathbb{R}) = V^{*}$ is called the dual space of $V$. An element of $V^{*}$ is called a "co-vector".
For example, if $P$ is the vector space of all polynomials with real coefficients, the definite integral from $0$ to $1$ of an element of $P$ defines an operator $I$, namely $I(p) = \int_{0}^{1} p(x)\,dx$. Then $I$ is a linear map from $P$ to $\mathbb{R}$, hence $I \in P^{*}$, so $I$ is a co-vector of $P$.
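A quick numerical sketch of this functional, representing a polynomial by its coefficient list $[c_0, c_1, c_2, \dots]$ (a representation chosen purely for illustration):

```python
from fractions import Fraction

# Sketch: the integral functional I on P, with p = c0 + c1*x + c2*x^2 + ...
# stored as the list [c0, c1, c2, ...].

def I(p):
    # integral of p from 0 to 1 equals the sum of c_k / (k + 1)
    return sum(Fraction(c, k + 1) for k, c in enumerate(p))

def add(p, q):
    # sum of two polynomials (pad the shorter coefficient list with zeros)
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

p = [1, 2, 3]   # 1 + 2x + 3x^2
q = [0, 1]      # x

print(I(p))                          # 1/1 + 2/2 + 3/3 = 3
print(I(add(p, q)) == I(p) + I(q))   # True: I is linear
```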
In general, if $V$ is a vector space over a field $F$, then $\operatorname{Hom}(V,F) = V^{*}$ is called the dual of $V$. An element of $V^{*}$ is often called a functional.

Hamel Bases

If $V$ is a vector space (maybe of infinite dimension), then $B$ is a Hamel basis for $V$ if for every nonzero $v \in V$, there exist a unique finite subset $\{v_1, v_2, \dots, v_n\}$ of $B$ and $n$ nonzero scalars $f_1, f_2, \dots, f_n$ such that $v = \sum_{i=1}^{n} f_i v_i$. For example, $\{1, x, x^2, \dots\}$ is a Hamel basis for the space $P$ of polynomials.

Co-ordinate projection maps

If $V$ is finite dimensional, with basis $\langle e_i \rangle_{i=1}^{n}$, then the $j$'th projection map $p_j : V \to F$ is the one that gives the scalar coefficient of the $j$'th basis vector when the input vector is written as a linear combination of the basis vectors. That is, $$ p_{j}(\lambda_{1}\mathbf{e_{1}}+ \lambda_{2}\mathbf{e_{2}} + \dots + \lambda_{n}\mathbf{e_{n}}) = \lambda_{j}$$
If $V$ is infinite dimensional, with Hamel basis $B$, then $p_b : V \to F$ gives the scalar coefficient of the basis vector $b \in B$ when the input vector is written as its unique linear combination of a finite subset of $B$. That is,

$$ p_{b}\left(\sum_{i=1}^{n} f_i v_i\right) = \begin{cases} f_k & b \in \{v_1, v_2, \dots, v_n\},\ \text{i.e. } b = v_k \\ 0 & b \notin \{v_1, v_2, \dots, v_n\} \end{cases} $$
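The case-wise definition can be sketched on the polynomial space $P$, whose Hamel basis is $\{1, x, x^2, \dots\}$; the dict-of-coefficients representation below is chosen purely for illustration.

```python
# Sketch: co-ordinate projection p_b on P for b = x^k. A polynomial is
# stored as {power: coefficient}, keeping only nonzero terms.

def proj(k, poly):
    # coefficient of x^k in poly; 0 if x^k is absent (second case above)
    return poly.get(k, 0)

poly = {0: 4, 2: -1, 7: 3}   # 4 - x^2 + 3x^7

print(proj(2, poly))   # -1  (x^2 appears in the combination)
print(proj(5, poly))   # 0   (x^5 does not appear)
```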

In both cases, the co-ordinate projection maps are linear maps.
In the finite dimensional case, $$p_{j}\left( a\sum_{i=1}^n\lambda_{i}\mathbf{e_{i}} +b\sum_{i=1}^n\mu_{i}\mathbf{e_{i}} \right) = a\lambda_{j} + b\mu_{j},$$ and the infinite dimensional case is the same argument with a case-wise breakdown, which I won't write out. Moreover, in both cases the set of all co-ordinate projection maps (also called co-ordinate functionals) is a linearly independent subset of $V^{*}$. For the infinite dimensional case, suppose a linear combination of co-ordinate maps equals the zero functional: $$ L =\sum_{b \in B}f_{b}p_{b} = \mathbf{0},$$ where only finitely many $f_b$ are nonzero.
For any basis vector $b \in B$, $L(b) = 0_F$ (as $L$ is the zero functional), but also $L(b) = f_b$, since $p_{b'}(b) = 0$ for every $b' \neq b$. Hence $f_b = 0_F$; running through all $b \in B$, we see that each $f_b = 0_F$. Hence the set $\{p_b\}_{b \in B}$ is linearly independent.
Similarly, for the finite dimensional case, let $$ T = \sum_{i=1}^n\lambda_{i}p_{i} = \mathbf{0}.$$ Then $T(e_j) = \lambda_j = 0_F$; running through all $e_j$, we have $\lambda_i = 0_F$ for each $i = 1, \dots, n$. So $\langle p_i \rangle_{i=1}^{n}$ is a list of $n$ linearly independent vectors in $V^{*}$. Now let $f \in V^{*}$. For any $v = \sum_{i=1}^{n} \lambda_i e_i \in V$, we have $f(v) = \sum_{i=1}^{n} \lambda_i f(e_i)$. That is, $f$ is completely determined by what it does to each of the basis vectors.
Hence we see (beautifully!) that $f = \sum_{i=1}^{n} f(e_i)\, p_i$. Notice that $\left(\sum_{i=1}^{n} f(e_i)\, p_i\right)(\mu_k \mathbf{e_k}) = \sum_{i=1}^{n} f(e_i)\, \mu_k\, p_i(\mathbf{e_k}) = f(e_k)\mu_k = f(\mu_k \mathbf{e_k})$.
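The identity $f = \sum_i f(e_i)\, p_i$ can be checked numerically; the functional below on $\mathbb{R}^3$ is made up for illustration.

```python
# Sketch: rebuilding a functional on R^3 from its values on the standard
# basis, via f = sum_i f(e_i) * p_i.

def f(v):
    x, y, z = v
    return 3 * x - y + 5 * z     # an arbitrary linear functional

basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def p(j, v):
    # j-th co-ordinate projection: the coefficient of e_j in v
    return v[j]

coeffs = [f(e) for e in basis]   # f(e_1), f(e_2), f(e_3) = 3, -1, 5

def f_rebuilt(v):
    # sum_i f(e_i) * p_i(v)
    return sum(c * p(j, v) for j, c in enumerate(coeffs))

v = (2, 7, -1)
print(f(v), f_rebuilt(v))   # both give 3*2 - 7 + 5*(-1) = -6
```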
So the co-ordinate projection functionals are a basis for $V^{*}$ when $V$ is finite dimensional; this is called the dual basis.
Moreover, even if $V$ is infinite dimensional, for any nonzero $v \in V$ there exists a functional $g$ for which $g(v) \neq 0$. Notice $v = \sum_{i=1}^{n} f_i v_i$ with each $f_i$ nonzero; then simply set $g = p_{v_k}$, so $g(v) = f_k \neq 0_F$.
Also, $\dim(V^{*}) = \dim(V)$ when $V$ is finite dimensional.

Injections into the double dual:

Fix a particular vector $v \in V$, and let $f \in V^{*}$.
Define an evaluation map $\operatorname{ev}_v : V^{*} \to F$ by $\operatorname{ev}_v(f) = f(v)$. Now,

$$\operatorname{ev}_v(f_1 + f_2) = (f_1 + f_2)(v) = f_1(v) + f_2(v) = \operatorname{ev}_v(f_1) + \operatorname{ev}_v(f_2)$$

and,

$$\operatorname{ev}_v(\lambda f) = (\lambda f)(v) = \lambda\, f(v) = \lambda \operatorname{ev}_v(f).$$
Hence, for each $v \in V$, $\operatorname{ev}_v \in \operatorname{Hom}(V^{*}, F) = V^{**}$.

Now consider the map $\phi : V \to V^{**}$ given by $\phi(v) = \operatorname{ev}_v$. Is this map $\phi$ linear? Notice that $\phi(v_1 + v_2)$ is equal to the function $\operatorname{ev}_{v_1 + v_2}$. Now, for any $f \in V^{*}$, $\operatorname{ev}_{v_1+v_2}(f) = f(v_1 + v_2) = f(v_1) + f(v_2) = \operatorname{ev}_{v_1}(f) + \operatorname{ev}_{v_2}(f)$. So for any input, the map $\operatorname{ev}_{v_1+v_2}$ is equivalent to the map $\operatorname{ev}_{v_1} + \operatorname{ev}_{v_2}$, where addition of maps is defined as usual: $\operatorname{ev}_{v_1} + \operatorname{ev}_{v_2}$ is the map that takes $f$ to $\operatorname{ev}_{v_1}(f) + \operatorname{ev}_{v_2}(f)$. Therefore $\phi(v_1 + v_2) = \phi(v_1) + \phi(v_2)$. Similarly, it can be shown that $\phi(\lambda v) = \lambda \phi(v)$. Therefore $\phi$ is a linear map from $V$ to $V^{**}$.

Moreover, this map is injective. Let us suppose $\phi(u) = \phi(v)$. Then the two maps $\operatorname{ev}_u = \operatorname{ev}_v$ as functions, meaning that for all $f \in V^{*}$, $\operatorname{ev}_u(f) = \operatorname{ev}_v(f)$. Hence, for each $f \in V^{*}$, $f(u) = f(v)$, so $f(u - v) = 0_F$ for all $f \in V^{*}$. By the result shown above, unless $u = v$ there exists at least one functional $f_{u-v}$ for which $f_{u-v}(u - v) \neq 0$. Hence $u = v$, and the map is injective.

Moreover, if $V$ is of finite dimension, then $\phi$ is bijective. From the definition of co-ordinate projection maps, we know that $\dim(V^{*}) = \dim(V)$. Treating $V^{*}$ as the finite dimensional vector space and taking its co-ordinate projection maps as the basis for $V^{**}$, the same argument gives $\dim(V^{**}) = \dim(V^{*})$. Hence $\phi : V \to V^{**}$ is an injective function between two spaces of the same dimension. Using the rank-nullity theorem, $\dim(V) = \dim(\ker(\phi)) + \dim(\operatorname{range}(\phi))$; since $\phi$ is injective, $\dim(\ker(\phi)) = 0$. Hence $\dim(V) = \dim(\operatorname{range}(\phi))$, and therefore $\dim(V^{**}) = \dim(\operatorname{range}(\phi))$. We know there is only one subspace of $V^{**}$ of full dimension, namely itself, therefore $\operatorname{range}(\phi) = V^{**}$. Hence for finite dimensional vector spaces $V$, $V$ is isomorphic to its double dual $V^{**}$.

>[!def] Tensor
>An $(r,s)$ tensor on $V$ is a multilinear map $t : (V^{*})^{r} \times V^{s} \to \mathbb{R}$; that is, $t$ eats a tuple whose first $r$ elements are dual vectors and whose remaining $s$ elements are vectors, and spits out a real number, such that $t$ is linear in each entry of the tuple (or in each variable).
>
>Consider the $(1,1)$ tensors $t : V^{*} \times V \to \mathbb{R}$. Now consider a map $\phi : V \to (V^{*})^{*}$ given by $\phi(v) = t(-, v)$; that is, $\phi$ takes in a particular vector $v$ and spits out the map defined by fixing the second entry of a $(1,1)$ tensor at $v$, call it $t(-, v)$, which is a function from $V^{*}$ to $\mathbb{R}$. Informally, this is basically just splitting the process of $t$: first give it a vector, and let it spit out a map that eats a co-vector on the left, with the right entry fixed, equal to the input vector. So if $V$ is finite dimensional, the $(1,1)$ tensors can be seen as isomorphic to the set of all linear maps on $V$.
>
>In a similar fashion, $V^{*} = \operatorname{Hom}(V, \mathbb{R})$, therefore co-vectors are $(0,1)$ tensors. For finite dimensional $V$, $V = V^{**} = \operatorname{Hom}(V^{*}, \mathbb{R})$, therefore vectors are $(1,0)$ tensors. If $V$ is of finite dimension, then to specify a linear map from $V$ it is sufficient to know $f(e_1), f(e_2), \dots, f(e_n)$: it is enough to know the evaluation at each basis vector to reconstruct the evaluation of the linear map on any vector.

>[!def] Components of a Tensor over a finite dimensional vector space
>Let $t$ be an $(r,s)$ tensor over a finite dimensional vector space $V$, with basis $\langle e_i \rangle_{i=1}^{n}$ and, for the dual space, the basis of co-ordinate projection maps $\langle e^j \rangle_{j=1}^{n}$. Then, a component of $t$ is written

$$T^{j_{1},j_{2},\dots,j_{r}}{}_{i_{1},i_{2},\dots,i_{s}} = T(\mathbf{e^{j_{1}}}, \mathbf{e^{j_{2}}}, \dots, \mathbf{e^{j_{r}}},\mathbf{e_{i_{1}}}, \mathbf{e_{i_{2}}}, \dots, \mathbf{e_{i_{s}}}).$$

So each component is the evaluation of $T$ on some $r$-length choice of dual basis vectors, concatenated with an $s$-length choice of basis vectors. By multilinearity, these components are all the information we need to evaluate $T$ on any input.
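As a sketch of the component formula for a $(1,1)$ tensor on $\mathbb{R}^2$: the tensor below is built from a fixed $2\times 2$ matrix $A$ via $t(f, v) = f(Av)$, purely for illustration, with co-vectors represented by their coefficient rows.

```python
# Sketch: components T^j_i = t(e^j, e_i) of a (1,1) tensor recover the
# matrix A behind it, and by multilinearity determine t on any input.

A = [[1, 2],
     [3, 4]]

def matvec(A, v):
    return tuple(sum(row[i] * v[i] for i in range(len(v))) for row in A)

def t(f, v):
    # the (1,1) tensor: feed v through A, then apply the covector f
    Av = matvec(A, v)
    return sum(f[i] * Av[i] for i in range(len(Av)))

basis = [(1, 0), (0, 1)]        # e_1, e_2
dual_basis = [(1, 0), (0, 1)]   # e^1, e^2 as coefficient rows

# components T^j_i = t(e^j, e_i); they equal the entries A[j][i]
components = [[t(ej, ei) for ei in basis] for ej in dual_basis]
print(components)   # [[1, 2], [3, 4]]

# evaluating t on arbitrary inputs via the components:
# t(f, v) = sum_{j,i} f_j v_i T^j_i
f, v = (2, -1), (5, 7)
by_components = sum(f[j] * v[i] * components[j][i]
                    for j in range(2) for i in range(2))
print(t(f, v) == by_components)   # True
```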