Linear Algebra Math3E

Lecturer: Tianyu Zhang (YMSC)

Contact: bidenbaka@gmail.com

Questions: https://piazza.com/tymathdb/spring2024/math3e/

Join Piazza via the code: math3e

Recommended Calculator: https://matrixcalc.org/


Lecture 1: Determinant and Matrix Intro

Abstract:

In this lecture we introduced the concept of the determinant and how to use determinants to solve a linear system. In particular, we introduced some algorithms for calculating the determinant. We also introduced a vector space structure and used it to derive the concept of a matrix. Later in the course we will introduce the concept of a normalized basis and use it to define the dimension of a vector space. We also covered the addition, scalar multiplication, and multiplication of matrices.

In Lecture Notes
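
For readers who want to experiment with these ideas, here is a minimal NumPy sketch (the 3×3 system is a made-up example, not one from the lecture notes) that computes a determinant and solves a linear system by Cramer's rule, one standard use of determinants.

```python
import numpy as np

# A made-up 3x3 system A x = b, used only for illustration.
A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

det_A = np.linalg.det(A)              # nonzero determinant => unique solution
print("det(A) =", det_A)

# Cramer's rule: x_i = det(A_i) / det(A), where A_i has column i replaced by b.
x = np.empty(3)
for i in range(3):
    A_i = A.copy()
    A_i[:, i] = b
    x[i] = np.linalg.det(A_i) / det_A

print("x =", x)                       # agrees with NumPy's direct solver below
print("check:", np.linalg.solve(A, b))
```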

Lecture 2: Matrix Introduction and the Invertible Matrix Theorem

Abstract:

In this lecture we introduced the operations on matrices. Since a matrix can be viewed as a collection of vectors, matrix addition and scalar multiplication are derived naturally from the vector space structure. In particular, since each entry of a vector (and hence of a matrix) is a scalar, the power of a matrix bears a similarity to the power of a number. We introduced matrix multiplication and showed that matrices indeed act as vector space transformations. Such a transformation (since matrix multiplication yields another matrix), once invertible, serves as a vector space isomorphism. We therefore introduced the Invertible Matrix Theorem, which connects all the material we have encountered so far.

In Lecture Notes
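
The following NumPy sketch, with made-up 2×2 matrices used purely for illustration, mirrors the operations discussed above: addition, scalar multiplication, multiplication, powers, the action on vectors, and an invertibility check.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(A + B)                          # matrix addition, entrywise
print(2.5 * A)                        # scalar multiplication
print(A @ B)                          # matrix multiplication (composition of maps)
print(np.linalg.matrix_power(A, 3))   # A^3, the "power of a matrix"

# A matrix acts as a transformation of vectors: x -> A x.
x = np.array([1.0, -1.0])
print(A @ x)

# Invertibility: det(A) != 0, and A^{-1} A recovers the identity.
if np.linalg.det(A) != 0:
    A_inv = np.linalg.inv(A)
    print(A_inv @ A)                  # approximately the identity matrix
```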

Lecture 3: Vector Subspace

Abstract:

In this lecture we introduced another method for finding the inverse of a matrix: rather than using the adjugate, we reduce the matrix to its row-reduced form and use the equivalence relation to derive the inverse (provided it exists). We also introduced the linear span, basis, and dimension. We covered the definition of a vector subspace, and we introduced the column space and the null space, which turn out to be vector subspaces. The pivot positions of the row-reduced form identify the columns that span the column space. We further decomposed a linear transformation into its null space and column space, and then revisited the Invertible Matrix Theorem.
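
Below is a minimal sketch of the row-reduction method for the inverse, assuming NumPy; the Gauss-Jordan routine and the 2×2 example are illustrative, not taken from the notes.

```python
import numpy as np

def inverse_by_row_reduction(A):
    """Row-reduce the augmented matrix [A | I] to [I | A^{-1}].
    A toy Gauss-Jordan elimination with partial pivoting; assumes A is square."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))   # pick a usable pivot row
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular, no inverse")
        M[[col, pivot]] = M[[pivot, col]]                # swap rows
        M[col] /= M[col, col]                            # scale pivot row to 1
        for r in range(n):                               # clear the other rows
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
print(inverse_by_row_reduction(A))   # should match np.linalg.inv(A) = [[3, -1], [-5, 2]]
```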

Lecture 4: LU Decomposition and Orthogonality

Abstract:

We introduced the kernel and the range of a linear transformation and constructed a one-to-one correspondence with matrices, so dimension computations for matrices are equivalent to dimension computations for linear transformations. We introduced the LU decomposition by tracing back the row reduction process. We introduced the dot product and the cross product, and from these the projection operation, a linear transformation, together with the special case when two vectors are perpendicular. We then proved the orthogonal complement relation between the row space and the null space, and between the column space and the null space of the transpose. We introduced the row rank and the column rank; the fact that they agree yields the rank of a transformation (resp. matrix).
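
Here is a small NumPy sketch, with made-up matrices and vectors, of the LU factorization obtained by recording the row-reduction multipliers, together with the dot product, cross product, and projection mentioned above.

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle LU factorization without pivoting (a sketch: assumes no zero pivot appears).
    Tracing back the row-reduction multipliers gives L; the reduced matrix is U."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float)
    for col in range(n - 1):
        for row in range(col + 1, n):
            m = U[row, col] / U[col, col]   # multiplier used in this elimination step
            U[row] -= m * U[col]            # eliminate the entry below the pivot
            L[row, col] = m                 # record the multiplier in L
    return L, U

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
L, U = lu_no_pivot(A)
print(L, U, L @ U, sep="\n")                # L @ U recovers A

# Dot product, cross product, and the projection of v onto u.
u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])
print(np.dot(u, v), np.cross(u, v), (v @ u) / (u @ u) * u)
```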

Lecture 5: Eigenvalues, Eigenvectors, and Eigenspace

Abstract:

In this lecture we introduced eigenvalues, eigenvectors, and eigenspaces. We specifically introduced the method for finding the eigenvalues and eigenvectors. We also introduced the relation between the eigenvalues and the determinant, as well as a basis for each eigenspace. We then introduced diagonalization and diagonalizability. Serving as a way to represent the eigenvalues and eigenvectors, diagonalization yields similar matrices, hence the same determinant. Diagonalizability, however, does not always hold, so we introduced two different criteria for deciding whether a matrix is diagonalizable: one for the case of distinct eigenvalues and one for the case of repeated eigenvalues. We further introduced the change-of-basis linear transformations.
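
The following NumPy sketch, on a made-up 2×2 matrix with distinct eigenvalues, finds the eigenvalues and eigenvectors and checks the diagonalization $A = PDP^{-1}$ together with the determinant relation.

```python
import numpy as np

# A made-up matrix with distinct eigenvalues, so it is diagonalizable.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)       # columns of eigvecs are eigenvectors
print("eigenvalues:", eigvals)            # 5 and 2 for this matrix

# Diagonalization A = P D P^{-1}: P holds eigenvectors, D the eigenvalues.
P = eigvecs
D = np.diag(eigvals)
print(P @ D @ np.linalg.inv(P))           # recovers A (up to round-off)

# The determinant equals the product of the eigenvalues.
print(np.linalg.det(A), np.prod(eigvals))
```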

Lecture 6: Review, Complex Vector Space, and Vector Space Operations

Abstract:

We did some exercises on determinants, bases, the LU decomposition, and inverse calculation. We then introduced the complex vector space and the polar form (argument) of complex numbers. We then introduced the concept of the inner direct sum, and we argued that every element of a direct sum has a unique representation. We further proved that a direct sum is again a vector space; in particular, two vector subspaces form a direct sum if and only if their intersection is $\{0\}$.
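
A short NumPy illustration, using made-up data, of the polar form of a complex number and of the unique representation of a vector in a direct sum of two subspaces.

```python
import numpy as np

# Polar form of a complex number z = r * exp(i*theta).
z = 3 + 4j
r, theta = abs(z), np.angle(z)
print(r, theta, r * np.exp(1j * theta))          # recovers z (up to round-off)

# span{(1,0)} and span{(1,1)} intersect only in 0, so they form a direct sum:
# every vector of R^2 decomposes uniquely as a sum of one element from each.
B = np.column_stack([[1.0, 0.0], [1.0, 1.0]])    # the two basis vectors as columns
v = np.array([5.0, 2.0])
coeffs = np.linalg.solve(B, v)                   # the unique coefficients
print(coeffs)                                    # [3, 2]: v = 3*(1,0) + 2*(1,1)
```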

Lecture 7: Orthogonality and the Gram-Schmidt Process

Abstract:

We introduced the inner product operation and then derived the concept of a norm. We then saw that two vectors having zero inner product means they are orthogonal to each other, so we introduced orthogonality and the orthogonal complement. We then introduced the projection component, which offers a way to decompose a vector into a scaled component along another vector and a component orthogonal to it. We then used this concept, along with some results on orthogonal (resp. orthonormal) bases, to derive the Gram-Schmidt process.
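
Here is a minimal sketch of the classical Gram-Schmidt process in NumPy; the three starting vectors are a made-up linearly independent example.

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: turn a linearly independent list into an orthonormal list.
    A sketch for illustration; numerically one would prefer the modified variant or QR."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for q in basis:
            w = w - (v @ q) * q            # subtract the projection of v onto q
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(V)
print(np.round(Q @ Q.T, 10))               # identity: the rows are orthonormal
```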

Lecture 8: Orthogonality and Symmetric Matrices

Abstract:

We first did some recitation exercises and then reviewed all the concepts we have covered so far. We then discussed the QR decomposition and introduced symmetric matrices: though defined simply by $A=A^T$, they yield many elegant results, especially in spectral theory.
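
A short NumPy sketch, with made-up matrices, of the QR decomposition and of the spectral theorem for a symmetric matrix.

```python
import numpy as np

# QR decomposition: Q has orthonormal columns, R is upper triangular.
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q, R = np.linalg.qr(A)
print(np.round(Q.T @ Q, 10))      # identity: the columns of Q are orthonormal
print(np.round(Q @ R - A, 10))    # zero: Q R recovers A

# Spectral theorem for a symmetric matrix S = S^T:
# real eigenvalues and an orthonormal basis of eigenvectors.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, U = np.linalg.eigh(S)    # eigh is specialized to symmetric matrices
print(eigvals)                    # 1 and 3, both real
print(np.round(U @ np.diag(eigvals) @ U.T - S, 10))   # zero: S = U D U^T
```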

Lecture 9: Quadratic Form and Singular Value Decomposition

Abstract:

In this lecture we introduced the quadratic form and the singular value decomposition, and we saw how powerful it is for a matrix to be symmetric. We also introduced the Jordan form, which provides an intermediate form between diagonalizable matrices and those that are not diagonalizable.
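
The sketch below, assuming NumPy and using made-up matrices, evaluates a quadratic form and computes a singular value decomposition.

```python
import numpy as np

# Quadratic form q(x) = x^T S x for a symmetric S; its sign behaviour is governed
# by the eigenvalues of S (here both positive, so the form is positive definite).
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, -1.0])
print(x @ S @ x)                      # value of the quadratic form at x

# Singular value decomposition A = U Sigma V^T works for any matrix, square or not.
A = np.array([[ 3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
print(sigma)                          # singular values, in decreasing order
print(np.round(U @ np.diag(sigma) @ Vt - A, 10))   # zero: the factorization recovers A
```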

Lecture 10: System of Linear Equations

Abstract:

In this lecture we reviewed solving systems of linear equations. We then saw additional properties of the elementary operations and the elementary matrices: the former act on a system without changing its solution set, and the latter realize those operations as matrix multiplication. We then reviewed the Invertible Matrix Theorem and argued that its 22 equivalent statements are indeed linear space properties.
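
A small NumPy illustration, on a made-up system, that an elementary row operation, realized as left multiplication by an elementary matrix, does not change the solution of a linear system.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

E = np.array([[ 1.0, 0.0],
              [-0.5, 1.0]])           # elementary matrix: R2 <- R2 - 0.5*R1

print(np.linalg.solve(A, b))          # solution of the original system
print(np.linalg.solve(E @ A, E @ b))  # same solution after the row operation
```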

Lecture 11: Vector Spaces

Abstract:

In this lecture we gave a detailed view of vector spaces. We first reviewed vector spaces and vector subspaces, then introduced the operations between vector spaces. We also discussed spanning sets, the dimension theorem, and their connection to the row and column spaces, as well as ordered bases and coordinate matrices. We then established the relationship between real vector spaces and complex vector spaces, via the method called complexification.
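
A minimal NumPy sketch, with a made-up ordered basis, of computing the coordinate vector of a vector relative to that basis.

```python
import numpy as np

# Coordinates of v relative to an ordered basis B = (b1, b2): solve [b1 b2] c = v.
b1 = np.array([1.0,  1.0])
b2 = np.array([1.0, -1.0])
B = np.column_stack([b1, b2])

v = np.array([3.0, 1.0])
c = np.linalg.solve(B, v)
print(c)                    # [2, 1]: v = 2*b1 + 1*b2
```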

Lecture 12: Linear Transformations

Abstract:

In this lecture we discussed linear transformations. We first introduced the notions of a linear operator, a linear transformation, and a linear functional. We then revisited the concepts of isomorphism, endomorphism, homomorphism, and epimorphism that we have covered before. We then used linear transformations to establish isomorphisms of vector spaces and the rank-nullity theorem. Finally, we assigned each linear transformation a matrix and discussed the change-of-basis (resp. change-of-coordinates) transformations.
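
The following sketch verifies the rank-nullity theorem numerically for a made-up matrix; it assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.linalg import null_space   # assumes SciPy is installed

# Rank-nullity: for T: R^n -> R^m given by A, rank(A) + dim(null(A)) = n.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])       # rank 1, so the null space is 2-dimensional
rank = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]
print(rank, nullity, rank + nullity == A.shape[1])   # 1, 2, True
```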

Lecture 13: Change of Basis Mappings

Abstract:

In this lecture we identified the coordinate transformations, and we saw that performing row operations is equivalent to multiplying the matrix $A$ on the left by $P$ for some invertible $P$, while performing column operations is equivalent to multiplying $A$ on the right by $Q^{-1}$ for some invertible $Q$. Moreover, performing row operations is equivalent to changing the basis used to represent vectors in the image, while performing column operations is equivalent to changing the basis used to represent vectors in the domain. We also revisited the Invertible Matrix Theorem from the change-of-basis point of view.
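
A small NumPy illustration, with made-up matrices, of row and column operations realized as multiplication by invertible elementary matrices on the left and on the right.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Row operation "R2 <- R2 - 3*R1" encoded as an invertible elementary matrix P:
P = np.array([[ 1.0, 0.0],
              [-3.0, 1.0]])
print(P @ A)          # same result as performing the row operation on A directly

# A column operation "C2 <- C2 - 2*C1" acts by multiplication on the right:
Q = np.array([[1.0, -2.0],
              [0.0,  1.0]])
print(A @ Q)
```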

Lecture 14: Eigenspace, Trace, and Multiplicities

Abstract:

In this lecture we identified the eigenspace associated with a set of eigenvectors. We argued that row operations (resp. column operations) may change the eigenvalues, while the eigenspace is an invariant. We then introduced the trace as the sum of all eigenvalues (counted with multiplicity), which also equals the sum of the entries on the main diagonal. We then introduced the geometric multiplicity and the algebraic multiplicity and argued that the former is at most as large as the latter.
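
The sketch below, assuming NumPy and using a made-up triangular matrix, checks that the trace equals the sum of the eigenvalues and compares the geometric and algebraic multiplicities of a repeated eigenvalue.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

eigvals = np.linalg.eigvals(A)
print(np.trace(A), np.sum(eigvals))   # both 7: trace = sum of eigenvalues with multiplicity

# For the eigenvalue 2: algebraic multiplicity 2 (it appears twice on the diagonal of
# this triangular matrix), but the geometric multiplicity is only 1, since A - 2I has rank 2.
geometric = 3 - np.linalg.matrix_rank(A - 2.0 * np.eye(3))
print(geometric)                      # 1 <= 2: geometric <= algebraic
```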

Lecture 15: Inner Product Space and Orthogonalization

Abstract:

In this lecture we revisited the definition of the dot product and introduced its general version, the inner product. We saw how the norm serves as a middle ground between describing the magnitude of a vector and describing distances in a vector space. We then used the norm to recover the inner product, and by applying the Gram-Schmidt process we can generate an orthonormal basis from any given linearly independent collection of vectors.
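
A one-screen NumPy check, on made-up vectors, of the real polarization identity, which is one way the norm recovers the inner product.

```python
import numpy as np

# Polarization identity on a real inner product space:
# <u, v> = ( ||u + v||^2 - ||u - v||^2 ) / 4, so the norm determines the inner product.
u = np.array([1.0, 2.0, -1.0])
v = np.array([0.5, 0.0,  3.0])

lhs = np.dot(u, v)
rhs = (np.linalg.norm(u + v) ** 2 - np.linalg.norm(u - v) ** 2) / 4
print(lhs, rhs)    # both -2.5
```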

Lecture 16: QR Factorization, Projection Theorem, and Best Approximation

Abstract:

In today’s lecture we proved that any matrix admits a QR decomposition, with the square, non-singular matrices as a special case. Applying the Gram-Schmidt process yields the orthogonal column vectors, and then the projection theorem and the best approximation follow.
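
A minimal NumPy sketch, on a made-up overdetermined system, of the best-approximation (least-squares) solution obtained through the QR factorization.

```python
import numpy as np

# Best approximation / least squares: solve min ||A x - b|| via the QR factorization.
# (A has linearly independent columns here, so R is invertible.)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

Q, R = np.linalg.qr(A)                        # A = Q R with orthonormal columns in Q
x = np.linalg.solve(R, Q.T @ b)               # the normal equations reduce to R x = Q^T b
print(x)                                      # least-squares solution
print(np.linalg.lstsq(A, b, rcond=None)[0])   # agrees with NumPy's built-in solver
```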

Lecture 17: On Finding Positive Solutions to System of Linear Equations

Abstract:

Previously we saw how to identify the number of solutions to a system of linear equations (that is, whether it has infinitely many solutions, exactly one solution, or no solution). Today we discussed how to guarantee that all the solutions are positive (resp. non-negative); the negative case then follows by assigning a minus sign. Establishing such a result requires an investigation of the convex hull. We introduced the concept of a convex hull and used Farkas' Lemma to derive the desired result.
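
As a hedged illustration only (not the method used in lecture), the sketch below uses SciPy's linear programming routine on a made-up system to test whether $Ax=b$ has a non-negative solution, which is the alternative that Farkas' Lemma pits against the existence of a separating certificate.

```python
import numpy as np
from scipy.optimize import linprog   # assumes SciPy is installed

# Feasibility check for A x = b with x >= 0 (the non-negative case discussed above).
# By Farkas' lemma, either such an x exists or there is a certificate y with
# A^T y >= 0 and b^T y < 0; linear programming lets us test the first alternative.
A = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 0.0]])
b = np.array([4.0, 1.0])

res = linprog(c=np.zeros(3), A_eq=A, b_eq=b, bounds=(0, None))
print(res.status == 0, res.x)        # status 0 means a non-negative solution exists
```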

Lecture 18: Polar Decomposition and Isomorphism Between $\Bbb{R}^n$ and $B_1(0)$

Abstract:

In today’s lecture we covered the concept of polar coordinates. We introduced the one-dimensional and two-dimensional cases, and a more general result can be obtained by extending the isomorphism between $\Bbb{R}^n$ and its $n$-dimensional open unit ball $B_1(0):=\{x\in\Bbb{R}^n \mid \|x\|<1\}$.

One can consult Sheldon Axler, 2014, Chapter 7, pp.252-259.
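
A small NumPy sketch of one standard (nonlinear) bijection between $\Bbb{R}^n$ and the open unit ball; the map $x\mapsto x/(1+\|x\|)$ is an illustrative choice, not necessarily the one used in lecture.

```python
import numpy as np

# A standard bijection between R^n and the open unit ball B_1(0) = {x : ||x|| < 1}:
# f(x) = x / (1 + ||x||), with inverse g(y) = y / (1 - ||y||).
# (This is a homeomorphism, not a linear map; it illustrates the identification above.)
def to_ball(x):
    return x / (1.0 + np.linalg.norm(x))

def from_ball(y):
    return y / (1.0 - np.linalg.norm(y))

x = np.array([3.0, -4.0])            # any point of R^2
y = to_ball(x)
print(np.linalg.norm(y) < 1)         # True: the image lies in the open unit ball
print(from_ball(y))                  # recovers x (up to round-off)
```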
