# May 10, 2011

## A better n-Point Curve

### Math, Math Demonstration, Programming

In the following demo, try dragging the black dots around. The image below will be recalculated to find a polynomial function that crosses each of these points. Points can be added or removed using their respective buttons, up to a maximum of 75. I wrote an article back in June of 2009 about finding coefficients for a polynomial that will pass through n points. It turns out this is covered in the second week of my linear algebra class, so it was not much of an accomplishment. However, I was using Cramer's Rule to solve the system of equations, which took (n+1)(2n!+n) multiplications, where n is the number of points. By the time n is 10, there are 79,833,710 multiplications to perform. I knew a better method to solve the system of equations must exist, but at the time I didn't know much about it.
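
For scale, that multiplication count can be tabulated directly. Here is a quick Python check of the formula quoted above (Python rather than the site's PHP, purely for illustration):

```python
from math import factorial

def cramer_multiplications(n):
    """Multiplications needed to solve for n points via Cramer's Rule,
    using the (n+1)(2*n! + n) count from the text."""
    return (n + 1) * (2 * factorial(n) + n)

for n in (5, 8, 10):
    print(n, cramer_multiplications(n))
```

The factorial term dominates almost immediately, which is why the method falls apart past a handful of points.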

Now that I am in linear algebra, I decided to come back to this problem and use elementary row operations to reduce an augmented matrix in order to obtain the polynomial coefficients.

To start, take for instance the following system of equations, its matrix equivalent, and its form when placed in an augmented matrix. This system can be solved by using row operations to reduce the left side down to the identity matrix; the last column then holds the result. This is probably the easiest approach when solving a system of linear equations on paper. It can also be done in a spreadsheet, and I made just such a spreadsheet to assist with my class homework. One problem with the spreadsheet is that it cannot row-swap, so sometimes the rows of the initial matrix have to be rearranged before the spreadsheet can solve it. This isn't a problem when using the spreadsheet, but it isn't great when implementing software to do row reduction. We could swap the rows in software, but that slows everything down.

The key to putting this system together was doing out-of-order row reduction: we do not swap rows during the reduction. The outer loop goes through all but the last column (the augmented column) of the matrix. This loop first searches for a row that has a term in this column (i.e. a nonzero entry). Take for instance the three matrices above. Normally, rows would be swapped in a matrix like B, in this case rows 1 and 3. The outer loop's first iteration will find row 1 for matrix A, row 2 for matrix B, and nothing for matrix C. If the search finds a row, the row is normalized by dividing each term by its first term.

Above are the results after first-term normalization. Matrix A was already normalized, matrix B row 2 is now normalized, and no row was found in matrix C. Once normalized, the active row is subtracted from all rows that have a leading term and have not yet been normalized. The active row is marked as complete, and the loop continues on to the next column. The active row will be 2 for matrix A, 1 for matrix B, and 1 for matrix C. Now the normalization happens again.

For subtraction, row 1 in matrix A is complete, so subtraction only needs to be done to row 3. For matrix B, row 2 is complete, so only row 3 remains. And in matrix C, rows 2 and 3 need subtraction because the initial pass did not reduce anything. This process continues one more time for the second-to-last column.

Notice that matrices A and C have multiple solutions due to the rows of zeros. This is perfectly alright. During this process, the order of the rows is saved: for matrix A the order is [1, 2, 3]; for matrix B, [2, 1, 3]; and for matrix C, [1, 2, 3]. This is needed to know which coefficient is represented by which row. One could find this by looking at all the preceding zeros in the row, but that is time consuming; it is easier just to store the order in an array.
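
The forward pass just described can be sketched compactly. This is my own Python illustration (not the site's PHP): it reduces an augmented matrix without ever swapping rows, recording the pivot row chosen for each column. The sample matrix has a zero in the top-left, the situation where matrix B above would normally need a row swap:

```python
def forward_reduce(m):
    """Out-of-order forward elimination on an augmented matrix m
    (n rows, n+1 columns).  Rows are never swapped; the pivot row
    chosen for each column is recorded in `order` (0-based)."""
    n = len(m)
    done = [False] * n
    order = {}
    for col in range(n):
        # Find an unfinished row with a nonzero term in this column.
        active = next((r for r in range(n)
                       if not done[r] and m[r][col] != 0), None)
        if active is None:
            continue                       # column has no pivot; skip it
        order[col] = active
        # Normalize the row so its leading term becomes 1.
        lead = m[active][col]
        for c in range(col, n + 1):
            m[active][c] /= lead
        done[active] = True
        # Subtract the active row from every unfinished row with a
        # term in this column.
        for r in range(n):
            if not done[r] and m[r][col] != 0:
                f = m[r][col]
                for c in range(col, n + 1):
                    m[r][c] -= f * m[active][c]
    return order

# A system whose first row starts with zero, so the first pivot is row 2:
b = [[0.0, 1.0, 2.0, 3.0],
     [2.0, 2.0, 2.0, 12.0],
     [1.0, 2.0, 1.0, 8.0]]
print(forward_reduce(b))   # → {0: 1, 1: 0, 2: 2}
```

In 1-based terms the recorded order is [2, 1, 3], the same shape as the order array for matrix B in the walkthrough.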

The last step is the second loop, where back substitution is performed. The outer loop goes through the columns in reverse and selects the row representing each column. So on the first iteration, this is row 3 for all the matrices. Once selected, this row is subtracted from all remaining rows. The next selected row will be 2 for matrices A and C, and 1 for matrix B. Each of these matrices is now fully reduced, and the resulting coefficients can be read off. Notice that the results for B have rows 2 and 1 swapped. This was accounted for in the order array, which for B was [2, 1, 3]. This entire process scales to a matrix of any size.
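
Putting the two passes together, a complete solver is short. Again, this is an illustrative Python model of the procedure rather than the production code; it fits a parabola through three points to show the end-to-end flow:

```python
def solve(m):
    """Row-reduce augmented matrix m (n rows, n+1 columns) without row
    swaps, then back-substitute, returning the solution vector."""
    n = len(m)
    done = [False] * n
    order = [None] * n
    # Forward pass: out-of-order elimination, tracking pivot order.
    for col in range(n):
        active = next(r for r in range(n) if not done[r] and m[r][col] != 0)
        order[col] = active
        lead = m[active][col]
        for c in range(col, n + 1):
            m[active][c] /= lead
        done[active] = True
        for r in range(n):
            if not done[r] and m[r][col] != 0:
                f = m[r][col]
                for c in range(col, n + 1):
                    m[r][c] -= f * m[active][c]
    # Back substitution: walk the columns in reverse, using the order
    # array to pick the row that represents each column.
    done = [False] * n
    x = [0.0] * n
    for col in reversed(range(n)):
        active = order[col]
        done[active] = True
        for r in range(n):
            if not done[r]:
                f = m[r][col]
                for c in range(col, n + 1):
                    m[r][c] -= f * m[active][c]
        x[col] = m[active][n]
    return x

# Fit a parabola through (0, 1), (1, 3), (2, 9): rows are [x^2, x, 1 | y].
pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 9.0)]
m = [[x * x, x, 1.0, y] for x, y in pts]
print(solve(m))   # → [2.0, 0.0, 1.0], i.e. y = 2x^2 + 1
```

Note that the first pivot here is row 2 (row 1 starts with a zero), yet no rows are ever physically moved; the order array resolves which solution entry each row holds.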

Here is the updated function for finding the coefficients:

```php
//---------------------------------------------------------------------------
// Return an array of coefficients that will produce a polynomial that passes
// through all the points in the list.
//
// This solves a system of linear equations.
//   | x0^0  x0^1  x0^2 ... x0^n || c0 |   | y0 |
//   | x1^0  x1^1  x1^2 ... x1^n || c1 | = | y1 |
//   |   .     .     .        .  || .  |   | .  |
//   |   .     .     .        .  || .  |   | .  |
//   | xn^0  xn^1  xn^2 ... xn^n || cn |   | yn |
//
// Input:
//   $x0 - Array of X-coordinates
//   $y0 - Array of Y-coordinates
//
// Output:
//   Array of coefficients for a polynomial that crosses all the points in
//   the x/y input arrays.
//---------------------------------------------------------------------------
function GetCoefficents( $x0, $y0 )
{
  // Count elements.
  $Degree = count( $x0 );

  // Build base matrix using a combination of x powers and augmenting on
  // the y values.
  //
  // This matrix will have the form:
  //   | x0^0  x0^1  x0^2 ... x0^n  y0 |
  //   | x1^0  x1^1  x1^2 ... x1^n  y1 |
  //   |   .     .     .        .   .  |
  //   |   .     .     .        .   .  |
  //   | xn^0  xn^1  xn^2 ... xn^n  yn |
  $IsDone = array();
  $Matrix = array();
  for ( $Row = 0; $Row < $Degree; ++$Row )
  {
    $Matrix[ $Row ] = array();
    for ( $Column = 0; $Column < $Degree; ++$Column )
      $Matrix[ $Row ][ $Column ] = pow( $x0[ $Row ], $Degree - $Column - 1 );

    $Matrix[ $Row ][ $Degree ] = $y0[ $Row ];
    $IsDone[ $Row ] = false;
  }

  //-----------------------------------
  // We now row-reduce the matrix to obtain the coefficients for the
  // polynomial.
  //-----------------------------------

  // This loop will result in an upper-triangle matrix with the
  // diagonals all 1--the first part of row-reduction--using 2
  // elementary row operations: multiplying a row by a scalar, and
  // subtracting a row by a multiple of an other row.
  // NOTE: This loop can be done out-of-order.  That is, the first
  // row may not begin with the first term.  Order is tracked in the
  // "Order" array.
  $Order = array();
  for ( $Column = 0; $Column < $Degree; ++$Column )
  {
    // Find a row to work with--one that has a term in this column, and
    // hasn't yet been reduced.
    $ActiveRow = 0;
    while ( ( $ActiveRow < $Degree )
         && ( ( 0 == $Matrix[ $ActiveRow ][ $Column ] )
           || ( $IsDone[ $ActiveRow ] ) ) )
    {
      ++$ActiveRow;
    }

    // Do we have a term in this row?
    if ( $ActiveRow < $Degree )
    {
      // Remember the order.
      $Order[ $Column ] = $ActiveRow;

      // Normalize row--results in the first term being 1.
      $FirstTerm = $Matrix[ $ActiveRow ][ $Column ];
      for ( $SubColumn = $Column; $SubColumn <= $Degree; ++$SubColumn )
        $Matrix[ $ActiveRow ][ $SubColumn ] /= $FirstTerm;

      // This row is finished.
      $IsDone[ $ActiveRow ] = true;

      // Subtract the active row from all rows that are not finished.
      for ( $Row = 0; $Row < $Degree; ++$Row )
        if ( ( ! $IsDone[ $Row ] )
          && ( 0 != $Matrix[ $Row ][ $Column ] ) )
        {
          // Get first term in row.
          $FirstTerm = $Matrix[ $Row ][ $Column ];
          for ( $SubColumn = $Column; $SubColumn <= $Degree; ++$SubColumn )
            $Matrix[ $Row ][ $SubColumn ] -=
              $FirstTerm * $Matrix[ $ActiveRow ][ $SubColumn ];
        }
    }
  }

  // Reset done.
  for ( $Row = 0; $Row < $Degree; ++$Row )
    $IsDone[ $Row ] = false;

  $Coefficents = array();

  // Back-substitution.
  // This will solve the matrix completely, resulting in the identity
  // matrix in the x-locations, and the coefficients in the last column.
  //   | 1  0  0 ... 0  c0 |
  //   | 0  1  0 ... 0  c1 |
  //   | .  .  .     .   . |
  //   | .  .  .     .   . |
  //   | 0  0  0 ... 1  cn |
  for ( $Column = ( $Degree - 1 ); $Column >= 0; --$Column )
  {
    // The active row is based on order.
    $ActiveRow = $Order[ $Column ];

    // The active row is now finished.
    $IsDone[ $ActiveRow ] = true;

    // For all rows not finished...
    for ( $Row = 0; $Row < $Degree; ++$Row )
      if ( ! $IsDone[ $Row ] )
      {
        $FirstTerm = $Matrix[ $Row ][ $Column ];

        // Back substitution.
        for ( $SubColumn = $Column; $SubColumn <= $Degree; ++$SubColumn )
          $Matrix[ $Row ][ $SubColumn ] -=
            $FirstTerm * $Matrix[ $ActiveRow ][ $SubColumn ];
      }

    // Save this coefficient for the return.
    $Coefficents[ $Column ] = $Matrix[ $ActiveRow ][ $Degree ];
  }

  return $Coefficents;

} // GetCoefficents
```

Looking at the loops in the software, we can obtain a set of summations that represent the complexity of the calculation. Using summation identities, it can then be reduced. This gives a maximum, as the loops may end early depending on the data in the equations being solved. I could step through the summation reductions, but it takes forever, so I cheated and used software. What we get is an equation for the maximum number of operations needed to reduce any matrix of n rows and columns. For 10 points, that turns out to be 750 multiplications, about 100,000 times fewer than using Cramer's Rule. That is worst case; the actual number of multiplications may be less.
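
The "about 100,000 times" figure follows directly from the two worst-case counts quoted above (a trivial Python check):

```python
from math import factorial

n = 10
cramer  = (n + 1) * (2 * factorial(n) + n)  # Cramer's Rule count, from the text
reduced = 750                               # row-reduction worst case, from the text
print(cramer, round(cramer / reduced))      # the ratio is roughly 100,000
```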

The increased performance allows 50 points (85,750 multiplications) to be calculated with around the same complexity as 7 points in the old system. Even at 75 points (286,750 multiplications), the performance is good enough that the web server can handle it.

The drawback is that floating-point precision only holds up to around 16 points. Beyond that, the accuracy of the calculations becomes so poor, due to accumulated rounding errors, that the resulting polynomial begins to miss its targets. Nonetheless, I have allowed up to 75 points to be added to the demo, despite the fact that they will often be missed by the resulting polynomial.

In addition, there is more optimization that can be done to improve the speed at which the solutions are derived. At some point, I plan on looking into implementing LU decomposition, which should further improve speeds. However, it would be necessary to use arbitrary-precision arithmetic to make this worthwhile, since floating-point precision is a much more limiting factor than the speed at which the matrices can be solved.
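
For reference, here is a minimal Doolittle-style LU decomposition in Python, my own sketch rather than anything from this site. The speed win would come from factoring the matrix once into L and U and then reusing that factorization for each right-hand side:

```python
def lu_decompose(a):
    """Doolittle LU decomposition without pivoting: returns (l, u) such
    that a = l * u, with l unit lower triangular and u upper triangular."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    u = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Row i of U.
        for j in range(i, n):
            u[i][j] = a[i][j] - sum(l[i][k] * u[k][j] for k in range(i))
        # Column i of L (unit diagonal).
        l[i][i] = 1.0
        for j in range(i + 1, n):
            l[j][i] = (a[j][i] - sum(l[j][k] * u[k][i] for k in range(i))) / u[i][i]
    return l, u

a = [[4.0, 3.0], [6.0, 3.0]]
l, u = lu_decompose(a)
print(l)   # → [[1.0, 0.0], [1.5, 1.0]]
print(u)   # → [[4.0, 3.0], [0.0, -1.5]]
```

As written this fails on a zero pivot, so a production version would need pivoting, which is exactly the row-order bookkeeping problem the article's out-of-order reduction already deals with.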

# May 09, 2011

Angela.  This is one of three shots I proofed of Angela, and I think it is the best of the set.

# May 08, 2011

Maggie at coffee after Cinco.

# May 07, 2011

## Happy Cinco de Moustache!

Another Cinco de Moustache, and the Garage Crowd from around the county have come down for our largest gathering of the year.  It is always good to see everyone.

Xiphos.

# May 05, 2011

I don't know that I am all that good at Chemistry, but I do like putting together setups for the lab assignments.

# May 04, 2011

Letting people use my laptop to vote.  Goodbye United Council!

# May 03, 2011

## Happy Birthday Tara

Differential equations homework at the school library.  I've really taken a liking to Wolfram Alpha.  I am notorious for making algebraic mistakes, and Wolfram lets me check any step of my problem solving.

# May 02, 2011

Pit kids and laptops.

# May 01, 2011

A picture from the Garage dinner a few nights ago.  Xen sits under one of the shelves as he was the only person small enough to fit.