Table of contents

L1 Systems of linear equations, Gaussian elimination

BCS1410-1 Lecture 1

Key Concepts:

  • Systems of Linear Equations: A set of linear equations considered together; a solution must satisfy every equation in the set. In two variables, each equation represents a line in $\mathbb{R}^2$.
  • Unique Solution: If the system of linear equations (SLE) has a unique solution, it can be efficiently found through Gaussian elimination.
  • Row Echelon Form (REF): A form of a matrix where each leading (first non-zero) entry of a row is strictly to the right of the leading entry in the previous row, and any all-zero rows are at the bottom.
  • Gaussian Elimination: A method for solving SLEs, involving three types of row operations: replacement, scaling, and interchange.

Formulas and Definitions:

  • Augmented Matrix: Combines the coefficients of the variables and the constants from the equations of an SLE into a single matrix.
  • Row Equivalence: Two matrices are row equivalent if one can be obtained from the other by a series of row operations.
  • Consistent SLE: An SLE that has at least one solution.
  • Inconsistent SLE: An SLE that has no solution, typically indicated by a row where all the variable coefficients are 0, but the constant is non-zero.

Examples:

  • SLE with No Solutions: Exhibits a row like $[\,0\ 0\ \cdots\ 0 \mid b\,]$ with $b \neq 0$ after Gaussian elimination.
  • SLE with Infinitely Many Solutions: Appears when the system describes lines that overlap, indicating an infinite number of intersection points.
  • Parametric Form: A way to express the solution set when there are infinitely many solutions.

Important Theorems and Proofs:

  • Pivot Positions: In REF, a pivot is the first non-zero entry in a row, and all entries below it in its column are zero; in RREF, the entries above each pivot are zero as well.
  • Reduced Row Echelon Form (RREF): A further refinement of REF, where each leading entry is 1 and is the only non-zero entry in its column, making the RREF unique for a given matrix.
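
As a minimal sketch of how Gaussian elimination and RREF look in practice, the snippet below uses SymPy's `rref()` on an illustrative augmented matrix (the matrix is an arbitrary example, not one from the lecture; SymPy is assumed to be available).

```python
import sympy as sp

# Augmented matrix [A | b] for an illustrative 3x3 system.
M = sp.Matrix([
    [1, 2, 1, 4],
    [2, 1, 1, 3],
    [1, 1, 2, 5],
])

# rref() returns the reduced row echelon form and the pivot columns.
R, pivots = M.rref()
print(R)       # RREF of the augmented matrix
print(pivots)  # pivot column indices; a pivot in the last column would signal inconsistency
```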

L2 Vector and matrix equations

BCS1410-2 Lecture 2

Key Concepts:

  • Matrix Equations: Viewed as a column perspective to an SLE, especially useful in applications like machine learning where columns represent features.
  • Feature Vectors: Data points in feature space, such as height and weight measured for a number of animals, which can be represented as vectors in $\mathbb{R}^2$.
  • Linear Combination: The vector $\mathbf{y} = c_1\mathbf{v}_1 + \cdots + c_p\mathbf{v}_p$ is called a linear combination of the vectors $\mathbf{v}_1, \dots, \mathbf{v}_p$ with weights $c_1, \dots, c_p$.

Formulas and Definitions:

  • Scaling and Addition: Fundamental operations with vectors; for example, scaling a vector by 2 or adding two vectors together.
  • Vector Equation: An equation of the form $x_1\mathbf{a}_1 + x_2\mathbf{a}_2 + \cdots + x_n\mathbf{a}_n = \mathbf{b}$, which is equivalent to an SLE.

Examples:

  • Column View of an SLE: Investigating whether a vector $\mathbf{b}$ can be written as a linear combination of the columns of a matrix $A$.
  • Parametric Vector Form: Expresses the solution set of an SLE with infinitely many solutions in $\mathbb{R}^n$; for example, $\mathbf{x} = t\mathbf{v}$, where $t$ is any scalar, so every solution is a scalar multiple of $\mathbf{v}$.

Important Theorems and Proofs:

  • Matrix Equation: Given by $A\mathbf{x} = \mathbf{b}$ and has the same solution set as the corresponding vector equation and SLE.
  • Consistency: An SLE is consistent if and only if the vector $\mathbf{b}$ is in the column space of $A$; this is equivalent to saying that there exist weights $x_1, \dots, x_n$ such that $x_1\mathbf{a}_1 + \cdots + x_n\mathbf{a}_n = \mathbf{b}$.
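
The consistency criterion can be checked numerically: $A\mathbf{x} = \mathbf{b}$ is consistent exactly when appending $\mathbf{b}$ to $A$ does not increase the rank. A small NumPy sketch with illustrative data (not taken from the lecture):

```python
import numpy as np

# Illustrative matrix A and right-hand side b.
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

# The SLE A x = b is consistent exactly when rank(A) == rank([A | b]),
# i.e. when b is a linear combination of the columns of A.
aug = np.column_stack([A, b])
consistent = np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)
print(consistent)  # True here: b equals the first column of A
```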

L3 Solution sets, linear independence

BCS1410-3 Lecture 3

Key Concepts:

  • Homogeneous SLE: An SLE of the form $A\mathbf{x} = \mathbf{0}$ is always consistent because it has at least the trivial solution (all zeros).
  • Non-homogeneous SLE: An SLE of the form $A\mathbf{x} = \mathbf{b}$, where $\mathbf{b} \neq \mathbf{0}$. The solution set is the sum of a particular solution of the non-homogeneous SLE and the solution set of the corresponding homogeneous SLE.
  • Linear Independence: A set of vectors $\{\mathbf{v}_1, \dots, \mathbf{v}_p\}$ is linearly independent if the only solution to $c_1\mathbf{v}_1 + \cdots + c_p\mathbf{v}_p = \mathbf{0}$ is $c_1 = c_2 = \cdots = c_p = 0$.

Formulas and Definitions:

  • Trivial Solution: In the context of a homogeneous SLE, it is the solution where all variables are set to zero.
  • Linear Combination: The form $c_1\mathbf{v}_1 + \cdots + c_p\mathbf{v}_p$ represents a linear combination of the vectors $\mathbf{v}_1, \dots, \mathbf{v}_p$ with coefficients $c_1, \dots, c_p$.

Examples:

  • Testing for Linear Independence: To test a set of vectors for linear independence, transform the matrix formed by these vectors to REF and look for pivot positions in every column.
  • Single Vector Linear Independence: Any non-zero vector is linearly independent by itself.
  • Zero Vector: A set containing the zero vector is always linearly dependent.

Important Theorems and Proofs:

  • Set of Solutions for Non-homogeneous SLE: The solution set can be written as the set of all vectors of the form $\mathbf{x} = \mathbf{p} + \mathbf{v}_h$, where $\mathbf{p}$ is a particular solution to the non-homogeneous SLE and $\mathbf{v}_h$ is any solution to the corresponding homogeneous SLE $A\mathbf{x} = \mathbf{0}$.
  • Linear Dependence with Zero Vector: If the set includes the zero vector, it is linearly dependent because a non-trivial combination can give zero by using zero coefficients for other vectors and a non-zero coefficient for the zero vector.
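
The REF-based independence test from the Examples above amounts to checking that the matrix built from the vectors has a pivot in every column, i.e., its rank equals the number of vectors. A minimal NumPy sketch (the vectors are illustrative):

```python
import numpy as np

def is_linearly_independent(vectors):
    """Return True if the given list of equal-length vectors is linearly independent."""
    A = np.column_stack(vectors)  # the vectors become the columns of A
    # Independent exactly when every column of A contains a pivot,
    # i.e. rank(A) equals the number of vectors.
    return np.linalg.matrix_rank(A) == len(vectors)

print(is_linearly_independent([[1, 0, 0], [0, 1, 0], [1, 1, 0]]))  # False: third = first + second
print(is_linearly_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True: standard basis
```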

L4 Linear transformations, matrix algebra

BCS1410-4 Lecture 4

Key Concepts:

  • Matrix Algebra and Transformations: Focus on linear transformations and their representation through matrices.
  • Linear Transformations: Functions that map vectors from one vector space to another, preserving vector addition and scalar multiplication.
  • Matrix Vector Product: A transformation of a vector $\mathbf{x}$ by a matrix $A$ resulting in a vector $\mathbf{b}$, written $A\mathbf{x} = \mathbf{b}$.
  • Examples of Linear Transformations: Rotation, reflection, and projection are all linear transformations.

Formulas and Definitions:

  • Linear Transformation: A transformation $T$ is linear if it satisfies $T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})$ and $T(c\mathbf{u}) = cT(\mathbf{u})$ for all vectors $\mathbf{u}, \mathbf{v}$ in its domain and every scalar $c$.
  • Matrix Representation of Linear Transformations: The standard matrix for a linear transformation $T$ from $\mathbb{R}^n$ to $\mathbb{R}^m$ is the $m \times n$ matrix $A$ such that $T(\mathbf{x}) = A\mathbf{x}$ for all $\mathbf{x}$ in $\mathbb{R}^n$.

Examples:

  • Rotation about the Origin: Represented by a matrix that rotates vectors by a specified angle.
  • Reflection: Represented by a matrix that reflects vectors across a specified axis or line.
  • Projection onto an Axis: Represented by a matrix that projects vectors onto a specified axis.
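
The three example transformations have simple standard matrices in $\mathbb{R}^2$. A short NumPy sketch (the angle and the choice of axis are illustrative assumptions):

```python
import numpy as np

theta = np.pi / 2  # rotate by 90 degrees (illustrative angle)

# Standard matrices of three linear transformations of the plane.
rotation   = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])   # rotation about the origin
reflection = np.array([[1.0,  0.0],
                       [0.0, -1.0]])                        # reflection across the x-axis
projection = np.array([[1.0, 0.0],
                       [0.0, 0.0]])                         # projection onto the x-axis

x = np.array([1.0, 2.0])
print(rotation @ x)    # approximately [-2, 1]
print(reflection @ x)  # [ 1, -2]
print(projection @ x)  # [ 1,  0]
```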

Important Theorems and Proofs:

  • Injectivity and Surjectivity of Linear Transformations: A linear transformation $T(\mathbf{x}) = A\mathbf{x}$ is injective (one-to-one) if and only if it maps distinct vectors to distinct vectors, which for a linear map is equivalent to $A\mathbf{x} = \mathbf{0}$ having only the trivial solution; it is surjective (onto) if every vector in the codomain is the image of at least one vector from the domain, i.e., the columns of $A$ span the codomain.
  • Matrix Algebra Properties: Including the properties of matrix addition, scalar multiplication, and the special cases of the transpose and symmetric matrices.

Transpose matrices

Symmetry for Square Matrices
  • Definition: A square matrix $A$ is symmetric if $A = A^T$.
  • Implication: The matrix is equal to its transpose.
Transpose of a Transpose
  • Property: $(A^T)^T = A$
  • Implication: Applying the transpose operation twice returns the original matrix.
Transpose of a Sum
  • Property: $(A + B)^T = A^T + B^T$
  • Implication: The transpose of a sum is the sum of the transposes.
Transpose of a Product
  • Property: $(AB)^T = B^T A^T$
  • Implication: When matrices are multiplied, the order of multiplication is reversed in the transpose.
Transpose of a Scalar Multiplication
  • Property: $(cA)^T = cA^T$, where $c$ is a scalar.
  • Implication: Scalar multiplication commutes with the transpose operation.
Rank
  • Property: $\operatorname{rank}(A^T) = \operatorname{rank}(A)$
  • Implication: The rank of a matrix is unchanged by transposition.
Determinant
  • Property: $\det(A^T) = \det(A)$ for square matrices.
  • Implication: The determinant of a matrix equals that of its transpose.
Inverse
  • Property: If $A$ is invertible, then $(A^T)^{-1} = (A^{-1})^T$.
  • Implication: The inverse of the transpose is the transpose of the inverse.
Eigenvalues
  • Property: The eigenvalues of $A^T$ are the same as the eigenvalues of $A$.
  • Implication: Transposition does not affect the eigenvalues of a matrix.
Orthogonal Matrices
  • Property: A square matrix $A$ is orthogonal if $A^TA = AA^T = I$, where $I$ is the identity matrix. For such matrices, $A^T$ is also the inverse of $A$: $A^{-1} = A^T$.
  • Implication: Orthogonal matrices preserve lengths and angles.
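
These identities are easy to spot-check numerically. A small NumPy sketch on random matrices (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Spot-check a few of the transpose identities listed above.
print(np.allclose((A.T).T, A))                              # (A^T)^T = A
print(np.allclose((A + B).T, A.T + B.T))                    # (A + B)^T = A^T + B^T
print(np.allclose((A @ B).T, B.T @ A.T))                    # (AB)^T = B^T A^T
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))     # det(A^T) = det(A)
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))  # (A^T)^{-1} = (A^{-1})^T
```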

L5 Perspective projections, inverse of matrices

BCS1410-5 Lecture 5

Key Concepts:

  • Inverse of a Matrix: The concept of an inverse matrix is crucial for solving systems of linear equations, among other applications.
  • Homogeneous Coordinates: Used in computer graphics to represent three-dimensional objects on a two-dimensional screen.
  • Elementary Matrices: Result from performing a single elementary row operation on an identity matrix. They are crucial for finding the inverse of a matrix.

Formulas and Definitions:

  • Elementary Row Operations: Include replacement (adding multiples of one row to another), scaling (multiplying a row by a non-zero scalar), and interchange (swapping two rows).
  • Invertible Matrix (Nonsingular Matrix): A square matrix $A$ is invertible if there exists a matrix $A^{-1}$ such that $AA^{-1} = A^{-1}A = I$, where $I$ is the identity matrix.

Examples:

  • Finding the Inverse of a Matrix: Involves applying a series of elementary row operations to transform the matrix into the identity matrix, while performing the same operations on the identity matrix to obtain the inverse.
  • Application in Computer Graphics: Homogeneous coordinates are employed to easily perform linear transformations, like rotation and translation, on points and vectors in a 3D space projected onto a 2D plane.
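
One way to see why homogeneous coordinates are convenient: a translation is not a linear map in ordinary coordinates, but in homogeneous coordinates it becomes one extra matrix factor, so rotations and translations compose by matrix multiplication. A minimal 2D sketch (the point and the translation/rotation parameters are illustrative):

```python
import numpy as np

# In homogeneous coordinates a 2D point (x, y) is written as (x, y, 1),
# which lets a translation be expressed as a single matrix product.
def translation(tx, ty):
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0],
                     [s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

p = np.array([1.0, 0.0, 1.0])                     # the point (1, 0) in homogeneous form
M = translation(2.0, 3.0) @ rotation(np.pi / 2)   # rotate first, then translate
print(M @ p)                                      # approximately [2, 4, 1] -> the point (2, 4)
```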

Important Theorems and Proofs:

  • Criteria for Invertibility: ==A square matrix is invertible if and only if it is row equivalent to the identity matrix, i.e., it has a pivot position in every row (and column).==
  • Uniqueness of the Inverse: The inverse of a matrix, if it exists, is unique. This uniqueness is fundamental for the stability and predictability of many mathematical and computational procedures.
  • Algorithm for Finding $A^{-1}$: The process to find $A^{-1}$ involves applying a series of elementary row operations (Gaussian elimination) to reduce $A$ to $I$, simultaneously transforming $I$ into $A^{-1}$; that is, row reduce the augmented matrix $[\,A \mid I\,]$ to $[\,I \mid A^{-1}\,]$.

Invertible Matrix Theorem

The Invertible Matrix Theorem is a central result in linear algebra that provides a list of equivalent statements, which, if true for a square matrix $A$, imply that $A$ is invertible (and conversely, if any of these statements is false, then $A$ is not invertible). This theorem links various concepts in linear algebra, showing their interdependence. Here are the key points of the theorem, summarized in bullet points:

[!note] Square Matrix $A$ is Invertible: This is the foundational statement that the rest of the conditions either affirm or deny.

  • Determinant: $\det(A) \neq 0$. A matrix is invertible if and only if its determinant is non-zero.
  • Matrix Rank: The rank of $A$ equals its number of columns (or rows), i.e., $\operatorname{rank}(A) = n$, indicating full rank.
  • Unique Solutions: The equation $A\mathbf{x} = \mathbf{b}$ has a unique solution for every vector $\mathbf{b}$ in $\mathbb{R}^n$.
  • Null Space: The null space of $A$ contains only the zero vector, i.e., $\operatorname{Nul}(A) = \{\mathbf{0}\}$.
  • Column Space: The columns of $A$ span $\mathbb{R}^n$; for a square matrix this is equivalent to the columns being linearly independent.
  • Row Equivalence: The matrix $A$ is row equivalent to the identity matrix $I$.
  • Inverse Existence: There exists an $n \times n$ matrix $C$ such that $CA = AC = I$.
  • Eigenvalues: $A$ has no zero eigenvalues. A zero eigenvalue would imply a determinant of zero, contradicting invertibility.
  • Transpose Invertibility: The transpose $A^T$ is invertible.
  • Subspaces: The column space and row space of $A$ are both equal to $\mathbb{R}^n$.
  • Linear Transformations: The linear transformation $\mathbf{x} \mapsto A\mathbf{x}$ is both one-to-one and onto, covering the entire space without overlap or omission.
  • Product of Invertible Matrices: If $A$ can be expressed as a product of elementary matrices, then it is invertible, as elementary matrices are invertible.

This theorem elegantly ties together different properties and concepts regarding square matrices, showing how they provide various perspectives on the idea of invertibility. The equivalence of these conditions means that verifying any one of them for a particular matrix guarantees that all the other conditions hold true as well, offering multiple pathways to determine or utilize the invertibility of a matrix in problem-solving.
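
As a quick illustration of this equivalence, the sketch below checks several of the listed conditions numerically for one small invertible matrix (an illustrative choice); they all agree, as the theorem predicts:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # illustrative invertible matrix
n = A.shape[0]

# A few of the equivalent conditions from the Invertible Matrix Theorem:
print(not np.isclose(np.linalg.det(A), 0.0))         # det(A) != 0
print(np.linalg.matrix_rank(A) == n)                  # full rank
print(np.all(np.abs(np.linalg.eigvals(A)) > 1e-12))   # no zero eigenvalue
print(np.allclose(A @ np.linalg.inv(A), np.eye(n)))   # an inverse exists
```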


L6 Determinants

BCS1410-6 Lecture 6

Key Concepts:

  • Inverse Matrix and Cryptography: Explores the application of inverse matrices in cryptography, such as the Hill cipher algorithm.
  • Determinant of a Matrix: A scalar associated with a square matrix, indicating how a transformation changes area or volume.
  • Cofactor Expansion: Method for calculating the determinant of a matrix by expanding along a row or column.

Formulas and Definitions:

  • Determinant Notation: Denoted as $\det(A)$ or $|A|$, the determinant measures the scaling factor of the transformation represented by the matrix $A$.
  • Homogeneous Coordinates: Used in graphical applications to manage dimensions and perspective transformations efficiently.
  • Elementary Matrices: Arise from performing elementary row operations on the identity matrix, instrumental in deriving the inverse of a matrix.

Examples:

  • Determinant and Area Scaling: Demonstrates how the determinant reflects the scaling of areas or volumes, such as stretching or compressing.
  • Invertibility and Determinants: A matrix is invertible if and only if its determinant is non-zero.

Calculating determinants

To calculate the determinant of a matrix, different formulas and methods are used depending on the size of the matrix. Here are some common formulas for calculating determinants:

  • 2x2 Matrix: For a matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, the determinant is calculated as $\det(A) = ad - bc$.
  • 3x3 Matrix: For a matrix $A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}$, the determinant can be calculated using the rule of Sarrus: $\det(A) = aei + bfg + cdh - ceg - bdi - afh$.
  • nxn Matrix (General Case): For larger matrices, the determinant can be calculated using cofactor expansion. Expanding along any row $i$ (or, analogously, any column $j$), the determinant of the matrix $A$ is $\det(A) = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} \det(A_{ij})$, where $a_{ij}$ is the element in the $i$-th row and $j$-th column, and $A_{ij}$ is the matrix obtained by removing the $i$-th row and $j$-th column from $A$ (this is called the minor of $A$ at $(i, j)$).

The sign factor $(-1)^{i+j}$ is positive if the sum of the row and column indices is even, and negative if the sum is odd, ensuring the correct sign based on the position of the element in the matrix.

  • Special Cases and Properties:
    • Triangular Matrices (both upper and lower): The determinant is the product of the diagonal elements.
    • Determinant of the Identity Matrix: The determinant of the identity matrix of any size is 1.
    • Determinant of a Product: The determinant of a product of matrices equals the product of their determinants: $\det(AB) = \det(A)\det(B)$.
    • Inverse and Determinant: A square matrix $A$ is invertible if and only if $\det(A) \neq 0$. If $A$ is invertible, then $\det(A^{-1}) = 1/\det(A)$.
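
Cofactor expansion translates directly into a short recursive function. The sketch below is illustrative and practical for small matrices only (the cost grows factorially with the size):

```python
def det(M):
    """Determinant by cofactor expansion along the first row (fine for small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: remove row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        sign = (-1) ** j  # (-1)^(0 + j)
        total += sign * M[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                    # 1*4 - 2*3 = -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))   # triangular: product of the diagonal = 24
```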

Calculating the inverse of a matrix

To calculate the inverse of a matrix, it’s crucial to understand that not all matrices have inverses. A square matrix $A$ has an inverse, denoted $A^{-1}$, if there exists a matrix that, when multiplied by $A$, results in the identity matrix $I$. Here’s how to calculate the inverse of a matrix, with examples:

  • Formula to Calculate Inverse of a 2x2 Matrix: For a 2x2 matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, the inverse is given by $A^{-1} = \frac{1}{\det(A)} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$, where $\det(A) = ad - bc$ is the determinant of $A$. The matrix is invertible if and only if $\det(A) \neq 0$.

  • Example for 2x2 Matrix: Let, for instance, $A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$ (an illustrative matrix). The determinant is $\det(A) = 2 \cdot 1 - 1 \cdot 1 = 1$. Thus, the inverse of $A$ is $A^{-1} = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$.

  • Formula for nxn Matrix Using Elementary Row Operations: For larger matrices, the inverse can be found by augmenting the matrix $A$ with the identity matrix $I$ of the same size, and then performing elementary row operations to transform $A$ into $I$. The operations that transform $A$ into $I$ will transform $I$ into $A^{-1}$ on the augmented side.

  1. Write the augmented matrix $[\,A \mid I\,]$.
  2. Use row operations to transform $A$ into the identity matrix.
  3. The matrix that appears where $I$ was is $A^{-1}$.
  • Example for 3x3 Matrix: Consider a $3 \times 3$ matrix $A$. To find $A^{-1}$, augment $A$ with the identity matrix and apply row operations to transform $A$ into $I$. This process requires multiple steps, so we won’t solve it fully here; however, the methodology follows the described procedure (see the sketch below).
  • Important Notes:
    • Existence of Inverse: A square matrix $A$ is invertible if and only if $\det(A) \neq 0$.
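
The $[\,A \mid I\,] \to [\,I \mid A^{-1}\,]$ procedure can be carried out with SymPy's `rref()`. A sketch with an illustrative (invertible) 3x3 matrix, assuming SymPy is available:

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [1, 2, 1],
               [0, 1, 2]])   # illustrative 3x3 matrix with det(A) = 4

# Augment A with the identity and row-reduce: [A | I] -> [I | A^{-1}].
augmented = A.row_join(sp.eye(3))
R, _ = augmented.rref()
A_inv = R[:, 3:]             # the right-hand block is the inverse

print(A_inv)
print(sp.simplify(A * A_inv))  # the identity matrix, confirming the result
```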


L7 Vector spaces

BCS1410-7 Lecture 7

Key Concepts:

  • Vector Spaces: Fundamental concept in linear algebra, encompassing sets of vectors in any dimension, including functions, polynomials, and more.
  • Subspaces: A subset of a vector space that is itself a vector space under the same addition and scalar multiplication operations.

Formulas and Definitions:

  • Definition of a Vector Space: A set of vectors is a vector space if it is closed under vector addition and scalar multiplication. For any vectors $\mathbf{u}, \mathbf{v}$ in the set and any scalar $c$, the set is closed under addition ($\mathbf{u} + \mathbf{v}$ is in the set) and scalar multiplication ($c\mathbf{u}$ is in the set).
  • Subspaces: A subset $H$ of a vector space $V$ is a subspace of $V$ if $H$ itself satisfies the vector space properties (closure under addition and scalar multiplication, contains the zero vector).

Examples:

  • Real Numbers as a Vector Space: The set of all real numbers is a basic example of a vector space.
  • Polynomials of Degree at Most $n$: The set of all polynomials with degree at most $n$, denoted $\mathbb{P}_n$, is a vector space.
  • Subspaces in $\mathbb{R}^n$: For any $m \times n$ matrix $A$, the set of all vectors $\mathbf{x}$ that are solutions to the homogeneous system $A\mathbf{x} = \mathbf{0}$ forms a subspace of $\mathbb{R}^n$.

Important Theorems and Proofs:

  • Criteria for Subspaces: A subset $H$ of a vector space $V$ is a subspace if it contains the zero vector and is closed under vector addition and scalar multiplication.
  • Subspace Example with Polynomials: The set of polynomials of degree at most $n$ forms a subspace, as it includes the zero polynomial and is closed under addition and scalar multiplication.

L8 More about vector spaces

BCS1410-8 Lecture 8

Key Concepts:

  • Basis and Dimension: Fundamental concepts for understanding vector spaces, including how to construct a vector space and the minimum number of vectors needed for its basis.
  • Nul(A), Col(A), and Row(A): Essential subspaces associated with any matrix $A$, representing the null space, column space, and row space, respectively.

Formulas and Definitions:

  • Basis of a Vector Space: A set of vectors in a vector space $V$ that is linearly independent and spans $V$. The number of vectors in a basis is the dimension of $V$, denoted $\dim(V)$.
  • Null Space (Nul A): The set of all vectors $\mathbf{x}$ such that $A\mathbf{x} = \mathbf{0}$. A basis for $\operatorname{Nul}(A)$ can be found by solving $A\mathbf{x} = \mathbf{0}$ and expressing the solution set in parametric vector form.
  • Column Space (Col A): The set of all linear combinations of the columns of $A$. A basis for $\operatorname{Col}(A)$ is given by the pivot columns of $A$ itself; the pivot positions are identified from the reduced echelon form.
  • Row Space (Row A): The set of all linear combinations of the rows of $A$. A basis for $\operatorname{Row}(A)$ corresponds to the non-zero rows in the reduced echelon form of $A$.
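
SymPy exposes these three subspaces directly, which makes it easy to check the basis constructions described above. A sketch with an illustrative matrix:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 1, 1]])   # illustrative matrix with a non-trivial null space

print(A.nullspace())    # basis for Nul(A): solutions of A x = 0
print(A.columnspace())  # basis for Col(A): the pivot columns of A
print(A.rowspace())     # basis for Row(A): non-zero rows of an echelon form of A
print(A.rank())         # dim Col(A) = dim Row(A); here (number of columns) - rank = dim Nul(A)
```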

Examples:

  • Basis for $\mathbb{R}^n$: The standard basis is $\{\mathbf{e}_1, \mathbf{e}_2, \dots, \mathbf{e}_n\}$, where $\mathbf{e}_i$ has a 1 in the $i$-th entry and zeros elsewhere.
  • Basis for a Polynomial Space $\mathbb{P}_n$: The set of polynomials of degree at most $n$ has the basis $\{1, t, t^2, \dots, t^n\}$.
  • Subspaces: For any $m \times n$ matrix $A$, $\operatorname{Nul}(A)$, $\operatorname{Col}(A)$, and $\operatorname{Row}(A)$ are subspaces of the appropriate $\mathbb{R}^n$ or $\mathbb{R}^m$ spaces, with dimensions giving key insights into the properties of $A$.

Important Theorems and Proofs:

  • Invertible Matrix Theorem: Connects the concept of an invertible matrix with properties like being row equivalent to the identity matrix, having a non-zero determinant, and the absence of free variables for uniqueness of solutions to systems of linear equations.
  • Subspace Criterion: A subset of a vector space is a subspace if and only if it is non-empty, closed under vector addition and scalar multiplication, and contains the zero vector.

L9 Eigenvalues and Eigenvectors

BCS1410-9 Lecture 9

Key Concepts:

  • Eigenvalues and Eigenvectors: Fundamental concepts in linear algebra that describe vectors that only change by a scalar factor when a linear transformation is applied.
  • Characteristic Polynomial: A polynomial that is instrumental in finding the eigenvalues of a matrix.

Formulas and Definitions:

  • Eigenvalue and Eigenvector Definition: For a square matrix $A$, a scalar $\lambda$ is an eigenvalue if there exists a non-zero vector $\mathbf{v}$ such that $A\mathbf{v} = \lambda\mathbf{v}$. The vector $\mathbf{v}$ is called an eigenvector corresponding to $\lambda$.
  • Characteristic Polynomial: Given by $\det(A - \lambda I)$, where $I$ is the identity matrix of the same size as $A$. Solving the characteristic equation $\det(A - \lambda I) = 0$ for $\lambda$ gives the eigenvalues of $A$.
  • Eigenspace: The set of all eigenvectors corresponding to a particular eigenvalue $\lambda$, along with the zero vector, forms a subspace (equal to $\operatorname{Nul}(A - \lambda I)$) called the eigenspace of $\lambda$.

Examples:

  • Finding Eigenvalues: For a matrix $A$, solve $\det(A - \lambda I) = 0$ to find the eigenvalues $\lambda$.
  • Finding Eigenvectors: Once the eigenvalues are known, solve $(A - \lambda I)\mathbf{v} = \mathbf{0}$ for each $\lambda$ to find the corresponding eigenvectors $\mathbf{v}$.

Important Theorems and Proofs:

  • Invertibility and Eigenvalues: A matrix $A$ is invertible if and only if $0$ is not an eigenvalue of $A$.
  • Trace and Determinant: The trace of $A$, denoted $\operatorname{tr}(A)$, is the sum of the eigenvalues of $A$, and the determinant $\det(A)$ is the product of its eigenvalues.

Concrete examples

To illustrate the process of finding eigenvalues and eigenvectors, let’s go through the procedure for a 2x2 matrix; a fully worked numerical sketch follows at the end of this section. The process involves two main steps: finding the eigenvalues and then finding the eigenvectors corresponding to those eigenvalues.

Step 1: Finding Eigenvalues

Consider a 2x2 matrix $A$.

  1. Determine the characteristic equation by calculating the determinant of $A - \lambda I$, where $I$ is the identity matrix and $\lambda$ represents the eigenvalues of $A$: $\det(A - \lambda I) = 0$.

  2. Solve the characteristic equation for $\lambda$.

    For a 2x2 matrix this is a quadratic equation; its solutions are the eigenvalues $\lambda_1$ and $\lambda_2$.

Step 2: Finding Eigenvectors

Once the eigenvalues are determined, we find the eigenvectors by solving $(A - \lambda I)\mathbf{v} = \mathbf{0}$ for each eigenvalue.

  • For $\lambda_1$:

    1. Substitute $\lambda_1$ into the equation $(A - \lambda_1 I)\mathbf{v} = \mathbf{0}$ and solve for $\mathbf{v}$.

    2. Solving this system gives a relationship between the components of $\mathbf{v}$; choosing a convenient value for the free component yields an eigenvector $\mathbf{v}_1$ corresponding to $\lambda_1$.
  • For $\lambda_2$:

    1. Substitute $\lambda_2$ into the equation $(A - \lambda_2 I)\mathbf{v} = \mathbf{0}$ and solve for $\mathbf{v}$.
    2. Solving this system in the same way yields an eigenvector $\mathbf{v}_2$ corresponding to $\lambda_2$.
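
Since the lecture's specific matrix is not reproduced above, here is a fully worked numerical sketch on an illustrative 2x2 matrix using NumPy (the matrix is an assumption chosen for convenience):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # illustrative 2x2 matrix (not the one from the lecture)

# Eigenvalues are the roots of det(A - lambda I) = 0;
# numpy solves the eigenvalue problem directly.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # the eigenvalues 3 and 1 (order may vary)
print(eigenvectors)   # columns are unit-length eigenvectors

# Check the defining property A v = lambda v for the first pair.
v = eigenvectors[:, 0]
print(np.allclose(A @ v, eigenvalues[0] * v))  # True
```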

L10 Diagonalization

BCS1410-10 Lecture 10

Key Concepts:

  • Diagonalization: A matrix $A$ is diagonalizable if it is similar to a diagonal matrix, meaning there exists an invertible matrix $P$ such that $P^{-1}AP$ is a diagonal matrix.
  • Eigenvalues and Eigenvectors: Essential for the process of diagonalization, where the eigenvalues populate the diagonal of the diagonal matrix and the eigenvectors form the columns of the matrix $P$.

Formulas and Definitions:

  • Diagonal Matrix: A square matrix in which the entries outside the main diagonal are all zero. The main diagonal can contain any scalars; in a diagonalization of $A$, the diagonal entries are the eigenvalues of $A$.
  • Similar Matrices: Two matrices $A$ and $B$ are similar if $B = P^{-1}AP$, where $P$ is an invertible matrix.
  • Theorem on Eigenvalues: Similar matrices have the same eigenvalues.

Examples:

  • Diagonalization Process:
    1. Find Eigenvalues: Solve $\det(A - \lambda I) = 0$ for $\lambda$ to find the eigenvalues of $A$.
    2. Find Eigenvectors: For each eigenvalue $\lambda$, solve $(A - \lambda I)\mathbf{v} = \mathbf{0}$ to find the corresponding eigenvectors.
    3. Form Matrix $P$: Assemble a matrix $P$ using the eigenvectors as columns.
    4. Compute $P^{-1}AP$: The result is a diagonal matrix $D$ with the eigenvalues on the diagonal.
  • Example with Specific Matrix:
    • Given a specific matrix $A$, find the eigenvalues $\lambda_1$ and $\lambda_2$ and corresponding eigenvectors $\mathbf{v}_1$ and $\mathbf{v}_2$. The matrix $P$ is formed from $\mathbf{v}_1$ and $\mathbf{v}_2$ as columns, and $P^{-1}AP = D = \operatorname{diag}(\lambda_1, \lambda_2)$.

Important Theorems and Proofs:

  • Diagonalizability Criteria: An $n \times n$ matrix $A$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors. If $A$ has $n$ distinct eigenvalues, it is automatically diagonalizable.

Applications:

  • Solving Differential Equations: Diagonalization simplifies the process of solving systems of linear differential equations.
  • Efficient Computation: Powers of a matrix can be computed efficiently when the matrix is diagonalizable, since $A^k = PD^kP^{-1}$ (see the sketch below).
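
A minimal NumPy sketch of both points: diagonalize an illustrative matrix and use $A^k = PD^kP^{-1}$ to compute a power (the matrix and the exponent are arbitrary choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # illustrative diagonalizable matrix

# Diagonalize: the columns of P are eigenvectors, D holds the eigenvalues.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # A = P D P^{-1}

# Powers become cheap: A^k = P D^k P^{-1}, and D^k just raises the diagonal entries.
k = 5
A_to_k = P @ np.diag(eigvals ** k) @ np.linalg.inv(P)
print(np.allclose(A_to_k, np.linalg.matrix_power(A, k)))  # True
```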

L11 Orthogonality and symmetric matrices

BCS1410-11 Lecture 11

Key Concepts:

  • Symmetric Matrices: A central topic of the lecture, where the matrix is equal to its transpose, i.e., $A = A^T$.
  • Inner (Dot) Product: The dot product is an algebraic operation that takes two equal-length sequences of numbers and returns a single number. It is expressed as $\mathbf{u} \cdot \mathbf{v} = \mathbf{u}^T\mathbf{v} = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n$.
  • Length (Norm) of a Vector: Given by $\|\mathbf{v}\| = \sqrt{\mathbf{v} \cdot \mathbf{v}}$, it represents the magnitude of the vector.
  • Unit Vector: A vector with a length of 1, obtained by normalizing: $\mathbf{u} = \mathbf{v} / \|\mathbf{v}\|$.
  • Distance Between Two Vectors: The Euclidean distance given by $\operatorname{dist}(\mathbf{u}, \mathbf{v}) = \|\mathbf{u} - \mathbf{v}\|$.
  • Orthogonal Vectors: Two vectors are orthogonal if their dot product is zero, i.e., $\mathbf{u} \cdot \mathbf{v} = 0$.
  • Null Space of a Matrix (Nul A): All vectors $\mathbf{x}$ such that $A\mathbf{x} = \mathbf{0}$. Every vector in Nul A is orthogonal to every vector in Row A.

Formulas and Definitions:

  • Orthogonal Set: A set of vectors is orthogonal if the dot product between each pair of distinct vectors is zero.
  • Orthonormal Set: An orthogonal set of vectors where each vector has a length of 1.
  • Orthogonal Matrix: A square matrix $A$ is orthogonal if $A^TA = I$, meaning its transpose is also its inverse ($A^{-1} = A^T$).
  • Orthogonal Projection: Projecting a vector $\mathbf{y}$ onto a subspace $W$ spanned by an orthogonal basis $\{\mathbf{u}_1, \dots, \mathbf{u}_p\}$: $\hat{\mathbf{y}} = \sum_{i=1}^{p} \frac{\mathbf{y} \cdot \mathbf{u}_i}{\mathbf{u}_i \cdot \mathbf{u}_i}\,\mathbf{u}_i$.
  • Orthogonal Basis: A basis of a subspace where all the basis vectors are orthogonal to each other.

Examples:

  • Orthogonality in Applications: Important for feature detection in signal processing and statistics.
  • Testing for Orthogonality: To check whether a set of vectors is orthogonal, form the matrix $A$ whose columns are the vectors and calculate $A^TA$. If $A^TA$ is diagonal, the set is orthogonal. If $A^TA$ is the identity matrix, the set is orthonormal.
  • Diagonalization: A matrix $A$ is orthogonally diagonalizable if and only if it is symmetric, meaning $A = PDP^T$ where $D$ is diagonal and $P$ is orthogonal.

Important Theorems and Proofs:

  • For a symmetric matrix, eigenvectors corresponding to different eigenvalues are orthogonal.
  • The dimensionality relationship: $\dim W + \dim W^{\perp} = n$, where $W^{\perp}$ is the orthogonal complement of the subspace $W$ in $\mathbb{R}^n$.

Concrete examples

Testing for Orthogonality:
  1. Example 1:

    • Consider two vectors $\mathbf{u}$ and $\mathbf{v}$ in $\mathbb{R}^2$.
    • To test for orthogonality, compute their dot product $\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2$.
    • If the dot product is not zero, the vectors are not orthogonal.
  2. Example 2:

    • Given two other vectors $\mathbf{u}$ and $\mathbf{v}$.
    • Compute their dot product in the same way.
    • If the dot product is zero, the vectors are orthogonal.
Testing for Orthonormality:
  1. Example 1:

    • Consider a set of three vectors in $\mathbb{R}^3$: $\mathbf{u}_1$, $\mathbf{u}_2$, and $\mathbf{u}_3$.
    • To test for orthonormality, compute the dot products between each pair of vectors. They should all equal 0 except when the vectors are the same, where the dot product should equal 1.
    • For example, $\mathbf{u}_1 \cdot \mathbf{u}_2$ should be $0$ and $\mathbf{u}_1 \cdot \mathbf{u}_1$ should be $1$.
    • Repeat this process for all pairs of vectors to confirm orthonormality.
  2. Example 2:

    • Given an orthonormal basis $\{\mathbf{u}_1, \dots, \mathbf{u}_n\}$, where each vector has a length of 1 and they are mutually orthogonal.
    • To test orthonormality, compute the dot products as above. Each dot product should be 0 except for dot products of a vector with itself, which should be 1.
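
A minimal NumPy sketch of both tests, using illustrative vectors (not those from the lecture): the dot-product test for a pair of vectors, and the $A^TA$ test for a set of vectors stacked as columns:

```python
import numpy as np

# Dot-product test for orthogonality (vectors are illustrative).
u = np.array([1.0, 2.0])
v = np.array([-2.0, 1.0])
print(np.dot(u, v))        # 0.0 -> u and v are orthogonal

# Matrix test: stack the (normalized) vectors as columns of A and inspect A^T A.
A = np.column_stack([u / np.linalg.norm(u), v / np.linalg.norm(v)])
gram = A.T @ A
print(np.allclose(gram, np.eye(2)))   # True -> the columns form an orthonormal set
```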