Time spent: 4h
Total: 45h/10000h
I’m reading through my linear algebra book and this is just a brief look over the various things I went over today.
LU Decomposition
We briefly talked about upper and lower triangular matrices on Day 10 of ML. Today we’ll look a bit further into how they can be used to solve systems. We’ll also be going over matrix inverses and transposes.
NOTE: A prerequisite for reading this section would be having read Day 10 of ML, since that’s where we go over the principles of the upper and lower triangular matrices.
Following the principles of Day 10 of ML, we know that the lower triangular matrix is used for forward elimination, and the upper triangular matrix is used for back-substitution. With forward elimination, we go from $Ax = b$ to $Ux = c$, and with back-substitution we go from $Ux = c$ to $x$.
The system $Ax = b$ can be split into two different parts: $Lc = b$ and $Ux = c$. Knowing that $A = LU$, we can multiply the second equation with $L$ to get $LUx = Lc$, which is simply equivalent to $Ax = b$.
Elimination algorithms tend to do this. The algorithm can be broken into two steps:
- factoring (finding $L$ and $U$ from $A$), and
- solving (finding $x$ using $L$, $U$, and $b$).
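The two steps above can be sketched in code. This is a minimal NumPy sketch, not a production routine: it assumes the pivots are nonzero (no row swaps), and the function names are my own.

```python
import numpy as np

def lu_no_pivot(A):
    """Factor A = LU without row swaps (assumes every pivot is nonzero)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # the multiplier used in elimination
            U[i, :] -= L[i, k] * U[k, :]  # zero out the entry below the pivot
    return L, U

def solve_lu(L, U, b):
    """Solve Ax = b in two steps: Lc = b (forward), then Ux = c (back)."""
    n = len(b)
    c = np.zeros(n)
    for i in range(n):                        # forward elimination / substitution
        c[i] = b[i] - L[i, :i] @ c[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):              # back-substitution
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[2.0, 1.0], [6.0, 8.0]])
b = np.array([5.0, 26.0])
L, U = lu_no_pivot(A)
x = solve_lu(L, U, b)
```

Note how the factoring step never touches $b$: once you have $L$ and $U$, you can reuse them to solve for many different right-hand sides cheaply.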
The Permutation Matrix P
What if you have a system of equations that looks something like this:
We run into a problem. No multiple of row 1 can eliminate the element (3) below it, because the pivot above it is zero. We can remedy this by swapping rows 1 and 2 of the matrix, which gives us a normal-looking system again. To express this in matrix terms, we have the permutation matrix $P$ that swaps the rows. It looks like this:
Note that we also have to swap the rows in $b$. The new system is $PAx = Pb$. For a matrix with $n$ rows, there are $n!$ permutations.
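Here is a small sketch of the row swap in action. The matrix below is my own made-up example of the zero-pivot situation, not the one from the book:

```python
import numpy as np

# First pivot is zero: no multiple of row 1 can eliminate the 3 below it.
A = np.array([[0.0, 2.0],
              [3.0, 4.0]])
b = np.array([4.0, 10.0])

P = np.array([[0.0, 1.0],   # permutation matrix that swaps rows 1 and 2
              [1.0, 0.0]])

PA = P @ A   # rows swapped: now upper triangular with nonzero pivots
Pb = P @ b   # the right-hand side must be swapped the same way
x = np.linalg.solve(PA, Pb)
```

Because $P$ only reorders the equations, $PAx = Pb$ has exactly the same solution as the original system.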
Inverses
The inverse of some matrix $A$ exists when there is a matrix $A^{-1}$ that satisfies the following property: $A^{-1}A = I = AA^{-1}$. If you multiply by $A$ first and then multiply by $A^{-1}$, you get back to where you started. The inverse of an $n \times n$ matrix is another $n \times n$ matrix.
Inverse matrices are useful for solving systems. If $A$ is invertible, the one and only solution to $Ax = b$ is $x = A^{-1}b$. Let’s prove this by multiplying $Ax = b$ with $A^{-1}$:

$$A^{-1}Ax = A^{-1}b \implies Ix = A^{-1}b \implies x = A^{-1}b$$
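A quick numerical sanity check of $x = A^{-1}b$, using a made-up invertible matrix:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
b = np.array([1.0, 3.0])

# x = A^{-1} b -- the unique solution when A is invertible
x_inv = np.linalg.inv(A) @ b

# In practice np.linalg.solve is preferred: it factors A (LU-style)
# instead of explicitly forming the inverse, but the answer is the same.
x_solve = np.linalg.solve(A, b)
```

This also connects back to LU decomposition: solvers avoid computing $A^{-1}$ explicitly because factoring and substituting is cheaper and more numerically stable.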
Here are some things to note about inverses:
- The inverse exists only for nonsingular matrices.
- A matrix has at most one inverse.
- A matrix is invertible only if its determinant is not zero.
- The inverse of a product of two invertible matrices is $(AB)^{-1} = B^{-1}A^{-1}$. A similar rule holds for any number of invertible matrices. Consider a set of matrices $A_1, A_2, \ldots, A_n$. The inverse of the product of all these matrices is:

$$(A_1 A_2 \cdots A_n)^{-1} = A_n^{-1} \cdots A_2^{-1} A_1^{-1}$$
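The reversed order in the product rule is easy to verify numerically. This sketch uses random matrices (which are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

lhs = np.linalg.inv(A @ B)                 # (AB)^{-1}
rhs = np.linalg.inv(B) @ np.linalg.inv(A)  # B^{-1} A^{-1}, note the reversed order
```

Intuitively, to undo "apply $B$, then apply $A$", you must undo $A$ first and $B$ second.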
Calculating Inverses with the Gauss-Jordan Method
The Gauss-Jordan Method is a simple system for getting the inverse of some matrix . Here are the steps on how it works:
- Combine $A$ into one matrix with the identity matrix $I$ (also known as the augmented matrix $[A \; I]$).
- Reduce $A$ to its upper triangular form $U$, so we end up with $[U \; L^{-1}]$.
- Create zeros above the pivots as well and divide each row by its pivot to form the identity matrix. This results in $[I \; A^{-1}]$.
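The steps above can be sketched as a short function. This is a teaching sketch rather than a robust library routine; I’ve added partial pivoting (a row swap per column) so a zero pivot doesn’t break it:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce the augmented matrix [A | I] until it becomes [I | A^{-1}]."""
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])  # the augmented matrix [A I]
    for k in range(n):
        p = k + np.argmax(np.abs(aug[k:, k]))      # pick the largest pivot below
        aug[[k, p]] = aug[[p, k]]                  # swap it into the pivot row
        aug[k] /= aug[k, k]                        # divide the row by its pivot
        for i in range(n):
            if i != k:
                aug[i] -= aug[i, k] * aug[k]       # zeros above AND below the pivot
    return aug[:, n:]                              # right half is now A^{-1}

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
A_inv = gauss_jordan_inverse(A)
```

Clearing entries both above and below each pivot in one pass is exactly what distinguishes Gauss-Jordan from plain Gaussian elimination, which only clears below.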
Let’s work through a (ChatGPT-generated) example. This will be the matrix for it:
We start with the augmented matrix :
Step 1: We first eliminate the entries below the first element in the first column.
Step 2: Use the third row to make the third column above it zero.
Step 3: Normalize the second row by dividing it by its pivot.
Step 4: Clear the second column in rows 1 and 3.
The right side of the augmented matrix now represents the inverse of :
The Transpose Of a Matrix
Fortunately, this idea is much simpler than the inverse. The transpose of a matrix essentially just swaps the rows and columns: the entry in row $i$, column $j$ of $A$ becomes the entry in row $j$, column $i$ of $A^T$. Yes, it is as simple as it sounds. Take a look:
As with inverses, the product of matrix transposes follows a similar rule. The transpose of a product of two matrices is $(AB)^T = B^T A^T$.
Again, consider a set of matrices $A_1, A_2, \ldots, A_n$. The transpose of the product of all these matrices is:

$$(A_1 A_2 \cdots A_n)^T = A_n^T \cdots A_2^T A_1^T$$
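The reversed order can be checked with non-square matrices, which also makes the shape bookkeeping visible (these dimensions are my own example):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))   # 2x3
B = rng.standard_normal((3, 4))   # 3x4, so AB is 2x4 and (AB)^T is 4x2

lhs = (A @ B).T   # (AB)^T
rhs = B.T @ A.T   # B^T A^T -- the order reverses, just like with inverses
```

Note that $A^T B^T$ wouldn’t even have compatible shapes here ($3 \times 2$ times $4 \times 3$), which is a good way to remember why the order must flip.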
Symmetric matrices
A symmetric matrix is one that possesses the following property: $A^T = A$. Here are some examples of symmetric matrices:
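For instance, this made-up matrix is its own mirror image across the main diagonal, and the check $A^T = A$ is one line of code. (The $A^T A$ trick at the end is a standard fact, not something from the book excerpt above.)

```python
import numpy as np

S = np.array([[1.0, 2.0, 7.0],
              [2.0, 5.0, 3.0],
              [7.0, 3.0, 9.0]])   # entry (i, j) equals entry (j, i)

is_symmetric = np.array_equal(S, S.T)

# A^T A is symmetric for ANY matrix A, since (A^T A)^T = A^T A --
# a handy way to construct symmetric matrices.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
G = A.T @ A
```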
Conclusion
That was that. Not sure what I’ll be doing in the upcoming days. Perhaps I’ll look into something like decision trees and just keep going with linear algebra. I really want to get into neural networks and stuff but I want to have a great linear algebra base before I do that.