\documentclass{article}
\usepackage{axiom}
\usepackage{pstricks}
\usepackage{pst-node}
\begin{document}
\title{CAISSline Overview}
\author{Timothy Daly}
\maketitle
\eject
\tableofcontents
\eject
\vfill
\begin{abstract}
CAISSline is a proof-of-concept program for using the Zero Learning
Curve interface in courseware. For this example we chose Linear
Algebra. Since the idea is to demonstrate the use of this software in
both a classroom setting and a single-student setting, we have chosen
to use Gilbert Strang's video lecture \#2 from the MIT OpenCourseWare
project as a reference example.
We follow the class presentation as closely as possible in order to
show how this software would operate in a classroom. However, since
the inputs can be modified, students can work through
examples of their own choosing.
\end{abstract}
\eject
\vfill
\section{Back Substitution}
Our set of equations is now:
\[
\left[
\begin{array}{rrrrrrr}
x & + & 2y & + & z & = & 2\\
& & 2y & - & 2z & = & 6\\
& & & & 5z & = & -10
\end{array}
\right]
\]
This can be written in matrix notation as the augmented matrix:
\[
\left[
\begin{array}{rrrr}
1 & 2 & 1 & 2\\
0 & 2 & -2 & 6\\
0 & 0 & 5 & -10
\end{array}
\right]
\]
We wish to find the values of $x$, $y$, and $z$. The method for doing
this is called {\bf Back Substitution} and the idea is quite simple. We have
arranged the equations so that the final equation contains only one
variable, $z$. Thus the last equation can be solved immediately.
Since we now know the value of $z$ we can substitute it into the
second equation. This leaves an equation with only one variable, $y$.
Thus the second equation can be solved immediately.
Since we now know the values of $z$ and $y$ we can substitute them
into the first equation. This leaves an equation with only one variable, $x$.
Thus the first equation can be solved immediately.
So the procedure, as equations, amounts to:
\begin{itemize}
\item solve Equation 3 for $z$
\[5z = -10\]
\[z = -2\]
\item substitute $z=-2$ into Equation 2
\[2y-2z=6\]
\[2y-2(-2)=6\]
\[2y+4=6\]
\item solve Equation 2 for $y$
\[2y+4=6\]
\[2y+4-4=6-4\]
\[2y=2\]
\[y=1\]
\item substitute $z=-2$ and $y=1$ into Equation 1
\[x+2y+z=2\]
\[x+2(1)+(-2)=2\]
\[x+2-2=2\]
\[x=2\]
\end{itemize}
giving the final result
\[x = 2, y = 1, z = -2\]
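The three steps above can be sketched in plain Python (a minimal
illustration of the hand calculation; the variable names are ours):

```python
# Back substitution on the triangular system
#   x + 2y +  z =   2
#       2y - 2z =   6
#            5z = -10

# Solve Equation 3 for z
z = -10 / 5            # z = -2

# Substitute z into Equation 2 and solve for y
y = (6 + 2 * z) / 2    # 2y - 2z = 6  =>  y = 1

# Substitute y and z into Equation 1 and solve for x
x = 2 - 2 * y - z      # x + 2y + z = 2  =>  x = 2

print(x, y, z)         # 2.0 1.0 -2.0
```

Each assignment mirrors one bullet of the procedure: solve the last
equation first, then substitute upward one equation at a time.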
\section{Elimination Matrices}
Given the matrix
\[
\left[
\begin{array}{rrrr}
1 & 2 & 1 & 2\\
0 & 2 & -2 & 6\\
0 & 0 & 5 & -10
\end{array}
\right]
\]
we want to perform elimination.
First we want to discuss two ideas from a ``big picture'' point of view.
We need to understand what happens if we multiply a matrix on the left
hand side and on the right hand side.
It is important to think in terms of whole-row and whole-column
operations. So first we need a way to think about
operations (such as addition and subtraction) on whole rows and columns.
\subsection{Column Operations}
If we take any matrix and multiply it on the {\bf right} by a column
vector, the result is a combination of the {\bf columns} of the matrix:
\[
\left[
\begin{array}{rrr}
- & - & -\\
- & - & -\\
- & - & -
\end{array}
\right]
\left[
\begin{array}{r}
3\\
4\\
5
\end{array}
\right]
=
3\times({\rm column\ 1}) + 4\times({\rm column\ 2}) + 5\times({\rm column\ 3})
\]
Notice that the result of a {\bf matrix times a column is a column}.
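The column picture can be checked numerically. Here is a small sketch
in plain Python (the sample matrix is our own, chosen only for
illustration):

```python
# A matrix times a column vector is a combination of the matrix's columns.
A = [[1, 2, 1],
     [3, 8, 1],
     [0, 4, 1]]
v = [3, 4, 5]

# Direct matrix-vector product: each entry is a row of A dotted with v
product = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

# The same result as 3*(column 1) + 4*(column 2) + 5*(column 3)
combination = [3 * A[i][0] + 4 * A[i][1] + 5 * A[i][2] for i in range(3)]

print(product == combination)  # True
```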
\subsection{Row Operations}
If we take any matrix and multiply it on the {\bf left} by a row
vector, the result is a combination of the {\bf rows} of the matrix:
\[
\left[ 1{\rm\ \ } 2{\rm \ \ } 7 \right]
\left[
\begin{array}{rrr}
- & - & -\\
- & - & -\\
- & - & -
\end{array}
\right]
=
1\times({\rm row\ 1}) + 2\times({\rm row\ 2}) + 7\times({\rm row\ 3})
\]
Notice that the result of a {\bf row times a matrix is a row}.
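The row picture can be checked the same way, again with a sample
matrix of our own choosing:

```python
# A row vector times a matrix is a combination of the matrix's rows.
A = [[1, 2, 1],
     [3, 8, 1],
     [0, 4, 1]]
r = [1, 2, 7]

# Direct vector-matrix product: each entry is r dotted with a column of A
product = [sum(r[i] * A[i][j] for i in range(3)) for j in range(3)]

# The same result as 1*(row 1) + 2*(row 2) + 7*(row 3)
combination = [1 * A[0][j] + 2 * A[1][j] + 7 * A[2][j] for j in range(3)]

print(product == combination)  # True
```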
Now let's go back to our original problem. We wish to find a matrix
that subtracts 3 times row 1 from row 2 and leaves the other
rows unchanged. In matrix notation
we're looking for a matrix like:
\[
\left[
\begin{array}{rrr}
{\rm\ \ } & {\rm\ \ } & {\rm\ \ }\\
{\rm\ \ } & {\rm\ \ } & {\rm\ \ }\\
{\rm\ \ } & {\rm\ \ } & {\rm\ \ }
\end{array}
\right]
\left[
\begin{array}{rrr}
1 & 2 & 1\\
3 & 8 & 1\\
0 & 4 & 1
\end{array}
\right]
=
\left[
\begin{array}{rrr}
1 & 2 & 1\\
0 & 2 & -2\\
0 & 4 & 1
\end{array}
\right]
\]
The matrix will be simple because we are only changing row 2.
Since we are not changing the first row of the matrix we reason as follows:
We need a row vector that ``chooses''
\[1\times{\rm\ row\ 1} + 0\times{\rm\ row\ 2} + 0\times{\rm\ row\ 3}\]
which we can read as ``choose one of the first row and none of the
other rows'',
so we can fill in the first row of the new matrix:
\[
\left[
\begin{array}{rrr}
1 & 0 & 0\\
{\rm\ \ } & {\rm\ \ } & {\rm\ \ }\\
{\rm\ \ } & {\rm\ \ } & {\rm\ \ }
\end{array}
\right]
\left[
\begin{array}{rrr}
1 & 2 & 1\\
3 & 8 & 1\\
0 & 4 & 1
\end{array}
\right]
=
\left[
\begin{array}{rrr}
1 & 2 & 1\\
0 & 2 & -2\\
0 & 4 & 1
\end{array}
\right]
\]
This matrix will also not affect the third row of the matrix so
by similar reasoning
we need a row vector that ``chooses''
\[0\times{\rm\ row\ 1} + 0\times{\rm\ row\ 2} + 1\times{\rm\ row\ 3}\]
which we can read as ``choose one of the third row and none of the
other rows''.
So we can fill in the third row of the new matrix:
\[
\left[
\begin{array}{rrr}
1 & 0 & 0\\
{\rm\ \ } & {\rm\ \ } & {\rm\ \ }\\
0 & 0 & 1
\end{array}
\right]
\left[
\begin{array}{rrr}
1 & 2 & 1\\
3 & 8 & 1\\
0 & 4 & 1
\end{array}
\right]
=
\left[
\begin{array}{rrr}
1 & 2 & 1\\
0 & 2 & -2\\
0 & 4 & 1
\end{array}
\right]
\]
\subsection{The Identity Matrix}
As an aside, suppose we didn't want to change the matrix at all.
Then we could reason that
we need a row vector that ``chooses''
\[0\times{\rm\ row\ 1} + 1\times{\rm\ row\ 2} + 0\times{\rm\ row\ 3}\]
which we can read as ``choose one of the second row and none of the
other rows''.
So we can fill in the second row of the new matrix:
\[
\left[
\begin{array}{rrr}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}
\right]
\]
This special matrix is known as the {\bf Identity} matrix. Multiplying
by the identity matrix changes nothing about the original matrix. It
acts like the number 1 for matrices.
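A quick numerical check that the identity matrix changes nothing
(a sketch in plain Python; the sample matrix is ours):

```python
# The identity matrix: ones on the diagonal, zeros elsewhere
I = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
A = [[1, 2, 1],
     [3, 8, 1],
     [0, 4, 1]]

# Multiply I times A: entry (i, j) is row i of I dotted with column j of A
IA = [[sum(I[i][k] * A[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]

print(IA == A)  # True
```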
However, in the problem we're discussing we {\sl do} want to change the
matrix. In fact, we want 3 times row 1 subtracted from row 2.
We need a row vector that ``chooses''
\[-3\times{\rm\ row\ 1} + 1\times{\rm\ row\ 2} + 0\times{\rm\ row\ 3}\]
which we can read as ``choose $-3$ times the first row plus 1 times
the second row and none of the third row''.
So we can fill in the second row of the new matrix:
\[
\left[
\begin{array}{rrr}
1 & 0 & 0\\
-3 & 1 & 0\\
0 & 0 & 1
\end{array}
\right]
\left[
\begin{array}{rrr}
1 & 2 & 1\\
3 & 8 & 1\\
0 & 4 & 1
\end{array}
\right]
=
\left[
\begin{array}{rrr}
1 & 2 & 1\\
0 & 2 & -2\\
0 & 4 & 1
\end{array}
\right]
\]
We can check our result. Suppose we want to check the entry in
row 2, column 3 of our result matrix, which is $-2$.
This entry is computed by taking row 2 of the first matrix and
column 3 of the second matrix. This is a row vector times a column
vector, which is called the {\bf dot product}.
We then get:
\[
[-3{\rm\ \ }1{\rm\ \ }0]
\left[
\begin{array}{r}
1\\
1\\
1
\end{array}
\right]
=
-3\times 1 + 1\times 1 + 0\times 1
= -2
\]
We will call the matrix we just computed the ``Elementary'' or
``Elimination'' matrix. This matrix combines row 2 with row 1
so that there is a zero in
row 2, column 1 of the result matrix. We call this matrix $E_{2,1}$:
\[
E_{2,1}=
\left[
\begin{array}{rrr}
1 & 0 & 0\\
-3 & 1 & 0\\
0 & 0 & 1
\end{array}
\right]
\]
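The whole multiplication $E_{2,1}$ times the original matrix can be
checked at once; here is a sketch in plain Python (the matrix names
follow the text):

```python
# The elimination matrix: subtract 3 times row 1 from row 2
E21 = [[ 1, 0, 0],
       [-3, 1, 0],
       [ 0, 0, 1]]
A = [[1, 2, 1],
     [3, 8, 1],
     [0, 4, 1]]

# Multiply E21 times A: entry (i, j) is row i of E21 dotted with column j of A
result = [[sum(E21[i][k] * A[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]

print(result)  # [[1, 2, 1], [0, 2, -2], [0, 4, 1]]
```

Row 2 of the result, $[0, 2, -2]$, is exactly row 2 minus 3 times
row 1, and the entry in row 2, column 3 is the $-2$ we checked with
the dot product above.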
We'd like to express the whole elimination procedure in matrix language.
We have now done the first step of the elimination procedure: the
result matrix has a zero in row 2, column 1. Next we
need to find a matrix which will eliminate (make zero) the entry
in row 3, column 2. We will call this matrix $E_{3,2}$.
This step subtracts 2 times row 2 from row 3.
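By the same row-picture reasoning, $E_{3,2}$ keeps rows 1 and 2 and
chooses $-2$ times row 2 plus 1 times row 3 for its third row. A
sketch of this second step in plain Python (the matrix names are
ours; the starting matrix is the result of the $E_{2,1}$ step):

```python
# The second elimination matrix: subtract 2 times row 2 from row 3
E32 = [[1,  0, 0],
       [0,  1, 0],
       [0, -2, 1]]
B = [[1, 2, 1],    # the result of the first step, E21 times the original matrix
     [0, 2, -2],
     [0, 4, 1]]

# Multiply E32 times B to eliminate the entry in row 3, column 2
U = [[sum(E32[i][k] * B[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]

print(U)  # [[1, 2, 1], [0, 2, -2], [0, 0, 5]]
```

The result is the upper triangular matrix from the Back Substitution
section, with the $5$ in row 3, column 3.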
\section{Associativity}
\section{Permutation Matrices}
\section{Commutativity}
\section{Inverses}
\vfill
\eject
\begin{thebibliography}{99}
\bibitem{1} Daly, et al., ``Axiom, The 30 Year Horizon''
\bibitem{2} Strang, Gilbert, MIT OpenCourseWare Lecture
``Linear Algebra'' Video Lecture \#2
\end{thebibliography}
\end{document}