Table of contents
Key concepts for the exam
- Eigenvectors
- Orthogonality
- Bases
- Inverse matrices
- Null space
- Unit vectors
- Magnitude of vectors
Key definitions
1. Eigenvectors
An eigenvector is a special vector that only changes by a scalar factor when a linear transformation is applied to it. Think of it as a vector that doesn’t change its direction when a transformation (like rotation or scaling) is applied, although its length might change. The relationship involves a matrix $A$, an eigenvector $v$, and an eigenvalue $\lambda$:

$$Av = \lambda v$$

For example, if $A$ is a 2x2 matrix that scales objects by 2 in the x-direction and $v = [1, 0]$ (a vector pointing along the x-axis), then $Av = 2v$, making $v$ an eigenvector of $A$ with an eigenvalue of 2.
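A minimal sketch with NumPy (the tooling is an assumption; the notes don’t name one) that confirms this example numerically:

```python
import numpy as np

# A scales by 2 along x and leaves y unchanged.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)         # [2. 1.]
print(eigenvectors[:, 0])  # [1. 0.] -> eigenvector for lambda = 2

v = np.array([1.0, 0.0])
print(A @ v)               # [2. 0.] == 2 * v, i.e. Av = 2v
```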
2. Orthogonality
Orthogonality in mathematics means “at a right angle.” Two vectors are orthogonal if their dot product is zero. This concept is akin to perpendicular lines in geometry. The formula for the dot product of two vectors $a$ and $b$ is:

$$a \cdot b = a_1 b_1 + a_2 b_2 + \dots + a_n b_n$$

If $a$ and $b$ are orthogonal, $a \cdot b = 0$. For example, the vectors $[1, 2]$ and $[2, -1]$ are orthogonal because $1 \cdot 2 + 2 \cdot (-1) = 0$.
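A quick sketch of the zero-dot-product test (the vectors mirror the example above):

```python
import numpy as np

# Two vectors are orthogonal exactly when their dot product is zero.
a = np.array([1, 2])
b = np.array([2, -1])

print(np.dot(a, b))        # 0
print(np.dot(a, b) == 0)   # True -> a and b are orthogonal
```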
3. Bases
A basis of a vector space is a set of vectors that are linearly independent and span the entire vector space. This means you can express any vector in that space as a combination of the basis vectors. For example, in $\mathbb{R}^2$, the vectors $[1, 0]$ and $[0, 1]$ form a basis because you can form any vector in $\mathbb{R}^2$ by scaling and adding these two.
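To make this concrete, here is a sketch that finds the coordinates of a vector in a basis by solving a linear system; the basis $[1, 1]$, $[1, -1]$ is an illustrative choice, not one from the notes:

```python
import numpy as np

# Coordinates c of v in a basis B satisfy B @ c = v,
# where B holds the basis vectors as columns.
B = np.column_stack(([1.0, 1.0], [1.0, -1.0]))
v = np.array([3.0, 1.0])

c = np.linalg.solve(B, v)
print(c)   # [2. 1.] -> v = 2*[1, 1] + 1*[1, -1]
```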
4. Inverse Matrices
An inverse matrix is essentially the “opposite” of a given matrix. When you multiply a matrix by its inverse, you get the identity matrix, which acts like 1 in matrix math. The formula for finding an inverse (when it exists) varies, but the key property is:

$$A A^{-1} = A^{-1} A = I$$

where $I$ is the identity matrix. For example, if $A = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}$, then $A^{-1} = \begin{bmatrix} 1/2 & 0 \\ 0 & 1/2 \end{bmatrix}$, because their product is the identity matrix.
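A sketch that computes an inverse and checks the defining property (the matrix matches the example above):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 2.0]])

# np.linalg.inv raises LinAlgError if A is singular (no inverse exists).
A_inv = np.linalg.inv(A)
print(A_inv)                               # [[0.5 0. ] [0.  0.5]]
print(np.allclose(A @ A_inv, np.eye(2)))   # True: A @ A_inv = I
```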
5. Null Space
The null space of a matrix $A$ consists of all vectors $v$ for which $Av = 0$. It tells us which vectors become zero vectors when transformed by $A$. For example, if $A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$, the null space includes any scalar multiple of the vector $[-2, 1]$, since $A \begin{bmatrix} -2 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$.
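A sketch computing a null-space basis with SciPy (an assumed tool; SymPy’s `nullspace` would work as well):

```python
import numpy as np
from scipy.linalg import null_space

# The matrix mirrors the example above; its rows are linearly dependent.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# null_space returns an orthonormal basis for {v : A v = 0} as columns.
N = null_space(A)
print(N.ravel())   # a unit vector proportional to [-2, 1]
print(A @ N)       # ≈ [[0.] [0.]]
```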
6. Unit Vectors
A unit vector is a vector with a magnitude (or length) of 1. It is often used to indicate direction without specifying magnitude. To find a unit vector in the direction of a given vector $v$, you divide $v$ by its magnitude $\|v\|$:

$$\hat{v} = \frac{v}{\|v\|}$$

For example, to find the unit vector in the direction of $v = [3, 4]$, divide by its magnitude (5), resulting in $\hat{v} = [0.6, 0.8]$.
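The same normalization in a short sketch:

```python
import numpy as np

v = np.array([3.0, 4.0])

# Dividing a vector by its magnitude yields a unit vector.
v_hat = v / np.linalg.norm(v)
print(v_hat)                   # [0.6 0.8]
print(np.linalg.norm(v_hat))   # 1.0
```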
7. Magnitude of Vectors
The magnitude (or length) of a vector provides a measure of how long the vector is. For a vector $v = [x, y]$ in two dimensions, its magnitude is found using the Pythagorean theorem:

$$\|v\| = \sqrt{x^2 + y^2}$$

For example, the magnitude of $v = [3, 4]$ is $\|v\| = \sqrt{3^2 + 4^2} = 5$.
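A sketch showing that the hand computation agrees with `np.linalg.norm`:

```python
import numpy as np

v = np.array([3.0, 4.0])

# Pythagorean theorem by hand vs. the library routine.
manual = np.sqrt(v[0]**2 + v[1]**2)
print(manual)              # 5.0
print(np.linalg.norm(v))   # 5.0
```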
What does the Row Echelon Form (REF) tell you about an SLE?
- Existence of Solutions: If any row in the REF consists entirely of zeros except for the last entry (which corresponds to the constants in the equations), then that row represents an equation of the form $0 = c$, where $c$ is a non-zero constant. This situation indicates that the system of linear equations is inconsistent and has no solution.
- Uniqueness of Solution: If the number of non-zero rows in the REF equals the number of variables in the system, and each leading entry (the first non-zero number from the left in a row) is 1 with all other entries in its column being 0, then the system has a unique solution. In this fully reduced case the matrix is in Reduced Row Echelon Form, and you can read off the values of the variables directly.
- Infinite Solutions: If the system is consistent (i.e., it does not have a row indicating no solution as described above) and the number of non-zero rows is less than the number of variables, then the system has infinitely many solutions. This situation occurs because there are more variables than independent equations, leading to at least one free variable that can take an infinite number of values.
- Dependency of Equations: If a row in the REF is all zeros, it indicates that the corresponding equation is a linear combination of the other equations. This means it does not add new information to the system.
- Basic and Free Variables: The variables corresponding to the columns with leading 1s in the REF are called basic variables. If the system has infinitely many solutions, the basic variables can be expressed in terms of the free variables. The variables whose columns contain no leading 1 are called free variables; they are not determined directly by the equations and can take any value.
- Simplified System Representation: The REF simplifies the original system of equations, making it easier to understand the relationships between variables and to solve the system either by back substitution or by further transformation into Reduced Row Echelon Form (RREF); a short sketch follows this list.
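Here is a sketch that row-reduces an augmented matrix with SymPy (an assumed tool) and applies the classification above; the system itself is hypothetical:

```python
from sympy import Matrix

# Augmented matrix [A | b] for:  x + 2y = 3,  2x + 4y = 6
aug = Matrix([[1, 2, 3],
              [2, 4, 6]])

# rref() returns the Reduced Row Echelon Form and the pivot column indices.
rref_matrix, pivot_cols = aug.rref()
print(rref_matrix)   # Matrix([[1, 2, 3], [0, 0, 0]])

num_vars = aug.cols - 1
if num_vars in pivot_cols:          # pivot in the constants column -> 0 = c
    print("inconsistent: no solution")
elif len(pivot_cols) == num_vars:   # a pivot for every variable
    print("unique solution")
else:                               # consistent, with free variables
    print("infinitely many solutions")
```

This prints “infinitely many solutions”: the second row reduces to all zeros, leaving one pivot for two variables.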
Identity matrix
An identity matrix, denoted as $I$ (or $I_n$), is a square matrix in which all the elements of the principal diagonal are ones, and all other elements are zeros. The principal diagonal of a matrix runs from the top left corner to the bottom right corner. The identity matrix serves as the multiplicative identity in matrix multiplication, meaning that when any square matrix is multiplied by an identity matrix of compatible size, it remains unchanged.
The general form of an identity matrix of size $n \times n$ is:

$$I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$

Here, $I_n$ represents an $n$-th order identity matrix. The subscript $n$ can be omitted when the size is understood from the context.
Properties of the Identity Matrix
- Multiplicative Identity: For any $m \times n$ matrix $A$, $I_m A = A I_n = A$, where $I$ is the identity matrix.
- Unchanged by Multiplication: Multiplying any matrix by the identity matrix (of compatible dimensions) on either side doesn’t change the original matrix.
- Inverse: The identity matrix is its own inverse, so $I^{-1} = I$.
- Uniqueness: For each order $n$, there is exactly one identity matrix $I_n$.
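A brief sketch of these properties using NumPy’s `np.eye`:

```python
import numpy as np

I = np.eye(3)   # the 3x3 identity matrix
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

print(np.allclose(I @ A, A))              # True: I A = A
print(np.allclose(A @ I, A))              # True: A I = A
print(np.allclose(np.linalg.inv(I), I))   # True: I is its own inverse
```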
Relationship between augmented matrices and $Ax = b$
The matrix $A$ represents the coefficients of the variables in the system, $x$ represents the column vector of variables, and $b$ represents the column vector of constants on the right-hand side of the equations.
- Write down the system of equations: Start by writing each equation in the system so that all terms are aligned by their variables and the constants are on the right side. For example, consider a system of two equations:
$$x + 2y = 5$$
$$3x + 4y = 6$$
- Extract the coefficients and the constants: For each equation, write down the coefficients of the variables in the order they appear and the constant term from the right-hand side. For the example above, the coefficients are $1$, $2$ for the first equation and $3$, $4$ for the second, with constants $5$ and $6$, respectively.
- Form the coefficient matrix $A$: This matrix is formed by placing the coefficients of the variables in their respective positions as they appear in the equations:
$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$
- Form the constant vector $b$: This vector contains the constants from the right-hand side of each equation:
$$b = \begin{bmatrix} 5 \\ 6 \end{bmatrix}$$
- Combine into the augmented matrix: The augmented matrix is formed by appending the constant vector $b$ to the right of the coefficient matrix $A$. This creates a new matrix that includes all the information from the system of equations in a compact form. The augmented matrix for the system above is:
$$\left[\begin{array}{cc|c} 1 & 2 & 5 \\ 3 & 4 & 6 \end{array}\right]$$
The vertical line in the augmented matrix separates the coefficients of the variables (on the left) from the constants (on the right), visually distinguishing the parts of the system of equations.
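A sketch that builds the same augmented matrix by stacking $A$ and $b$ side by side:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([[5.0],
              [6.0]])

# np.hstack appends b as an extra column, forming [A | b].
augmented = np.hstack([A, b])
print(augmented)
# [[1. 2. 5.]
#  [3. 4. 6.]]
```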
Span
In the context of Systems of Linear Equations (SLEs), the term “span” refers to the concept of spanning sets from linear algebra. Specifically, it describes the set of all possible vectors that can be created by taking linear combinations of a given set of vectors. In the context of SLEs, these vectors usually represent the columns of the coefficient matrix associated with the system.
Definition
Given a set of vectors $v_1, v_2, \dots, v_n$ in a vector space $V$, the span of these vectors, denoted as $\operatorname{span}(v_1, v_2, \dots, v_n)$, is the set of all vectors that can be formed by linear combinations of $v_1, v_2, \dots, v_n$. A linear combination of these vectors can be written as $c_1 v_1 + c_2 v_2 + \dots + c_n v_n$, where $c_1, c_2, \dots, c_n$ are scalars.
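A small sketch of a linear combination (the vectors and scalars here are illustrative):

```python
import numpy as np

v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, 1.0])

# By definition, c1*v1 + c2*v2 is an element of span(v1, v2).
c1, c2 = 2.0, -1.0
print(c1 * v1 + c2 * v2)   # [-1.  3.] -> a vector in span(v1, v2)
```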
Span in SLEs
In the context of an SLE represented as $Ax = b$, where $A$ is the coefficient matrix, $x$ is the vector of variables, and $b$ is the vector of constants:
- The columns of $A$ represent the vectors.
- The span of these column vectors determines the set of all possible outcomes ($b$ vectors) that can be achieved through the system of linear equations.
- If $b$ is in the span of the columns of $A$, the system has at least one solution.
- The concept of span is closely related to the idea of the column space of $A$, which is the span of its column vectors.
Importance
The span of the column vectors in $A$ tells us about the solutions of the SLE:
- Consistency: If the vector $b$ is within the span of the columns of $A$, the system is consistent (it has at least one solution).
- Inconsistency: If $b$ is not in the span, the system is inconsistent (no solution exists).
- Dimensionality of Solutions: The dimension of the span (or the column space) of $A$ can indicate the number of linearly independent directions in which solutions to the system can vary, affecting the number and type of solutions (unique, infinite, or none).
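A sketch of the consistency test via ranks: $b$ lies in the span of $A$’s columns exactly when $\operatorname{rank}(A) = \operatorname{rank}([A \mid b])$ (the numbers here are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([[3.0],
              [6.0]])

# Appending b cannot lower the rank; it raises the rank only
# if b is outside the column space of A.
rank_A = np.linalg.matrix_rank(A)
rank_aug = np.linalg.matrix_rank(np.hstack([A, b]))

if rank_A == rank_aug:
    print("b is in the span of A's columns: consistent")
else:
    print("b is outside the span: inconsistent")
```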
Example of finding the Span
Let’s consider a system of linear equations (SLE) and illustrate how the concept of span relates to finding solutions.
System of Linear Equations Example
Consider the following system of linear equations:

$$x_1 + 2x_2 = 3$$
$$2x_1 + 4x_2 = 6$$

These equations can be represented in matrix form as $Ax = b$, where

$$A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}, \quad x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \quad b = \begin{bmatrix} 3 \\ 6 \end{bmatrix}$$
Column Vectors of A
The matrix $A$ has two column vectors:

$$a_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad a_2 = \begin{bmatrix} 2 \\ 4 \end{bmatrix}$$
Span of the Column Vectors
The span of these vectors, $\operatorname{span}(a_1, a_2)$, represents all linear combinations of $a_1$ and $a_2$. However, if you look closely, you’ll notice that $a_2$ is just $a_1$ scaled by a factor of 2. Therefore, $a_1$ and $a_2$ are linearly dependent, and the span of these vectors is essentially all vectors along the line defined by $a_1$ (or $a_2$).
Solution to the SLE
- By inspecting the system, we see that the second equation is just the first equation multiplied by 2. This redundancy means the columns of our system don’t span the entire plane $\mathbb{R}^2$ but rather a line.
- The vector $b = [3, 6]$ is exactly three times the vector $a_1$, which can be obtained from the equation $Ax = b$ by choosing $x_1 = 3$ and $x_2 = 0$.
- Since $b$ is on the line spanned by $a_1$ and $a_2$, the system has solutions, and in this case an infinite number of solutions, because the system is underdetermined (the equations are linearly dependent).
Geometric Interpretation
Geometrically, both equations represent the same line in $\mathbb{R}^2$. Any point on this line is a solution to the system. The span of the column vectors of $A$ (the line) includes the vector $b$, which confirms that solutions exist.
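A closing sketch that verifies the example numerically: a particular solution plus any multiple of a null-space direction solves the system (SciPy is an assumed tool):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
b = np.array([3.0, 6.0])

# lstsq returns one (minimum-norm) solution even though A is singular.
x_p, *_ = np.linalg.lstsq(A, b, rcond=None)
n = null_space(A).ravel()   # direction of the solution line

# Every x = x_p + t * n satisfies A x = b, e.g. t = 5:
x = x_p + 5 * n
print(A @ x)                # ≈ [3. 6.]
```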