Pictured is Dr. Fuller and myself, taken by Zam.

I was asked to attend commencement at U-Rock. Unsure exactly why, I volunteered to be an usher. Shortly before the ceremony began I was told I had a reserved seat and should go up on stage when my name was called. I was unexpectedly presented the "Hari Ram Luddhar math award" for my work on error correcting codes, linear algebra, and asking a lot of math-related questions. I was presented the award by Dr. Mark Fuller, who was my professor for my honors discrete mathematics and differential equations courses this year. Dr. Fuller is a fantastic professor—he knows his field extremely well, he can convey it, and he has an uncanny ability to answer in just a few words questions you've spent hours trying to solve. Thanks for everything Dr. Fuller!

Mike and Shara at my last Denny's night coffee gathering. Not sure when I'll be able to do one of these again.

That's it—my last exam and I am done at UW-Rock. Now it's time to pack for the move on Saturday.

Birthday shout out to Zen—have a good one!

Pictured is Maggie, barefoot and pregnant in the kitchen. I kicked her out shortly after I took this picture—I insist on cleaning my own dishes.

I think Angela is making her "study face" as we try to prepare for our chemistry exam tomorrow evening.

Yesterday I wrote about using the first derivative to create a curve fit where the peaks/valleys (the stationary points) were at the specified point locations. I could have released the demo with this, but I had the idea I could improve upon it.

In this demo, each point can have the first and/or second derivative forced to zero at the point's location. Setting the first derivative to zero will cause the point to occur on a stationary point in the polynomial—that is, a point where the polynomial is at a local minimum or maximum. Setting the second derivative to zero will cause the point to sit where the curvature changes locally (an inflection point).

This is done by allowing each point to specify additional restrictions to the overall system of equations. With no derivative, each point adds one term to the polynomial and one equation to the system. Each derivative adds another term and another equation. So a plot involving 3 points, the first with no derivative, the second with the first derivative forced to zero, and the third with the second derivative forced to zero, will result in a polynomial with (1 + 2 + 2) = 5 terms and 5 equations.

Part of the solving takes advantage of this relationship: if *f*(x) = x^{n}, then

*f*^{(m)}(x) = n!/(n - m)! · x^{n - m}

Where *f*^{(m)} is the *m*^{th} derivative of the function (this is Lagrange's notation). Using this, it is easy to make a function that can construct an equation (or a row in a matrix). The order of the rows in the matrix isn't important; the construction loop for the matrix simply loops over the points, and then over the number of derivatives for each point.

What this means is that we could have any number of points, with any number of derivatives. I limited the derivative number to 2, but I really didn't have a reason to even use the 2^{nd} derivative—it could easily go higher. Also, I force the derivatives to zero. We could instead specify a value for a derivative, although I haven't thought out how that could be useful yet.

As of this writing, I'm considering releasing a library with these functions in it. So for now, no source code—it's a mess and needs to be cleaned up.
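Since the demo's source isn't released, here is an independent sketch of the construction described above, in Python/NumPy rather than the demo's PHP. The function name and the encoding of the constraints are my own, and for simplicity every derivative up to the given order is forced to zero (rather than the and/or selection the demo allows).

```python
import numpy as np

def fit_with_derivatives(points, derivs):
    """Fit a polynomial through the given (x, y) points, forcing the
    1st..derivs[i]-th derivatives to zero at point i.

    Each point contributes 1 + derivs[i] equations, so the polynomial
    has sum(1 + d) terms: c[0] + c[1]*x + c[2]*x^2 + ...
    """
    terms = sum(1 + d for d in derivs)
    A = np.zeros((terms, terms))
    b = np.zeros(terms)
    row = 0
    for (x, y), d in zip(points, derivs):
        for m in range(d + 1):              # m = 0 is the point itself
            for j in range(m, terms):
                # m-th derivative of x^j is j!/(j-m)! * x^(j-m)
                fall = 1.0
                for k in range(j, j - m, -1):
                    fall *= k
                A[row, j] = fall * x ** (j - m)
            b[row] = y if m == 0 else 0.0   # derivatives forced to zero
            row += 1
    return np.linalg.solve(A, b)
```

For example, `fit_with_derivatives([(0, 0), (1, 1), (2, 0)], [0, 1, 1])` builds a 5-equation system and returns the five coefficients of a quartic whose last two points are stationary.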

One item I had read about polynomial curve fitting was using the derivative to place the points at the peaks of the function. We touched on this idea in my linear algebra class, but like most of the class, it was a vague mention. However, using my notes I was able to work out what was being attempted.

The system of equations we are trying to solve for polynomial curve fitting is of this form:

y_{i} = c_{0} + c_{1} x_{i} + c_{2} x_{i}^{2} + … + c_{n-1} x_{i}^{n-1}, for *i* = 1 … *n*

Where *n* is the number of known points through which the polynomial is trying to pass. As a consequence, the resulting polynomial always has *n* terms, and so is of degree *n* - 1. That is, 4 known points will result in a polynomial y = c_{0} + c_{1} x + c_{2} x^{2} + c_{3} x^{3}.

If we desire the points to be at peaks/valleys in the curve, we need to go back to first-semester calculus. Stationary points occur where the derivative of the function is equal to zero. So we have a second equation for each point:

0 = c_{1} + 2 c_{2} x_{i} + 3 c_{3} x_{i}^{2} + …, for *i* = 1 … *n*

This will double the degree of the polynomial, as we will have twice the number of equations to solve. The result is the following augmented matrix, with each point *i* contributing one row to the top half and one row to the bottom half:

[ 1   x_{i}   x_{i}^{2}   …   x_{i}^{2n-1} | y_{i} ]
[ 0   1   2 x_{i}   …   (2n-1) x_{i}^{2n-2} | 0 ]

It looks a bit messy, but it isn't all that bad. The top half of the matrix is much like solving for an n-point polynomial, except there are twice the number of terms. The bottom half is the derivative of the top half.
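As a sanity check of the doubled system, here is a small Python/NumPy sketch (mine, not part of the original demo; the name `stationary_fit` is made up) that builds the matrix just described and solves it:

```python
import numpy as np

def stationary_fit(xs, ys):
    """Polynomial through n points with the derivative zero at each
    point, so every point sits at a peak or valley.  Uses 2n terms
    (degree 2n-1): the top n rows force p(x_i) = y_i, the bottom n
    rows force p'(x_i) = 0."""
    n = len(xs)
    T = 2 * n
    A = np.zeros((T, T))
    b = np.zeros(T)
    for i, (x, y) in enumerate(zip(xs, ys)):
        A[i] = [x ** j for j in range(T)]            # p(x_i) = y_i
        A[n + i] = [j * x ** (j - 1) if j else 0.0   # p'(x_i) = 0
                    for j in range(T)]
        b[i] = y
    return np.linalg.solve(A, b)
```

`stationary_fit([0, 1, 2], [0, 1, 0])` returns the six coefficients of a quintic that passes through all three points and is flat at each of them.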

This demo can show the effects of a linear approximation on trigonometric functions. The initial setup shows sine from 0 to 2 π made up of 20 points. The linear approximation is constructed using a 19^{th} degree polynomial from the data points. The coefficients are listed in the background.

Try removing points using the “Delete point” button. Notice how the right end stays around 0.5 until around 10 points. It has subtle variations which continue to get worse.

Now hit the “Sine centered->20 points” button. This will produce what looks to be a sine wave spanning 2 π. Now try “Sine centered->30 points.” Notice how deformed the sine wave has become before and after the points. Try the 10 point version.

One last thing to try is moving just slightly one point close to either end. The graph will not change much. Now try moving a point near the center slightly. The graph will change wildly near the ends, especially when a large number of points are used.

These are all effects of the linear approximation being used to represent the function.

This article is a follow-up to my May 10, 11, and 12 articles. This demo will accurately find a polynomial passing through up to 40 points. It uses PHP's BCMath Arbitrary Precision Mathematics functions to overcome the limitations of floating point precision.

On May 12, I wrote that I had created this implementation, but that it was slow. What I found was that rendering the polynomial plot was the slowest part of the system. After looking into this, I changed from using the “power-of” function to accumulating the power using multiplication. This noticeably improved the speed, and I decided to add a few more features.
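The change was in the PHP renderer, but the idea is easy to show in Python (function names are mine): instead of calling a power function for every term, keep a running power and multiply it up once per term. With arbitrary-precision arithmetic, where exponentiation is expensive, this saves a lot of work per plotted pixel.

```python
def eval_pow(coeffs, x):
    """Evaluate c0 + c1*x + c2*x^2 + ... using one
    exponentiation per term (the slow way)."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

def eval_accumulated(coeffs, x):
    """Same polynomial, but the power of x is accumulated with one
    multiplication per term instead of a fresh exponentiation."""
    total, p = 0.0, 1.0
    for c in coeffs:
        total += c * p
        p *= x
    return total
```

Both return the same value; only the cost per term differs.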

I have allowed up to 40 points to be placed in this implementation, which takes the web server around three and a half seconds to render. The coefficients are displayed in the background of the plot to 30 digits, although the actual precision used may be higher.

The title of this article is a tribute to the demo group Triton. Early in their 1993 demo Crystal Dreams 2 is the introduction “now more linear algebra.”

I did a quick implementation of the n-Point curve system using arbitrary-precision arithmetic. While the system is good for as many points as I could place on the image, computation took seconds once the number of points reached 20. While I could improve the system by implementing LU decomposition, the slowest part of the system is not solving for the coefficients, but plotting the resulting polynomial. There isn't too much that can be done about that. I suppose I could try some calculus on the polynomial and guess how the function is acting near each point, and draw things that way. Could be fun, but not something I can just throw together.
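BCMath isn't needed to show the idea; in Python the same thing can be sketched with exact rationals from the standard library. This is my own plain Gauss-Jordan elimination, not the demo's code, with only the minimal pivoting needed to avoid a zero pivot (no LU decomposition, as noted above):

```python
from fractions import Fraction

def solve_exact(A, b):
    """Solve A x = b over exact rationals -- no floating point
    rounding at all, at the cost of speed."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(rhs)]
         for row, rhs in zip(A, b)]
    for col in range(n):
        # swap a nonzero pivot into place (raises if singular)
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        # eliminate this column from every other row
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]
```

Feeding it the 3-point system for (0,0), (1,1), (2,0) returns the exact coefficients with no rounding error at all.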

Just some follow-ups to yesterday's story:

There are some interesting results for situations that are impossible. For example, two points at different y locations but the same x location. Polynomials are functions, and thus this isn't allowed. Let's analyze one simple scenario.

Now expand the augmented matrix and the row-equivalent matrix for this scenario:

This system is inconsistent, as row two says 0 = 0.1—a clearly false statement. The implementation deals with inconsistencies by simply ignoring them. The resulting equation from this system is *y = -2 x*^{2}* + 1.8 x*. There should be an *x*^{0} term for 3 points, but there isn't. One of the points is simply ignored—in this case, y_{1}.

The other variant of this is when two points are identical.

In this case, the matrix is consistent, but there are an infinite number of solutions. No points are ignored, but the polynomial has been reduced from 2^{nd} order to 1^{st} order. Reading the last column, the equation is *y = 0 x*^{2}* + 1 x*, or *y = x*. There is no x^{2} coefficient.
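Both variants show up in the matrix itself: a duplicated x value makes two rows of the matrix identical, so it drops rank no matter what the y values are; the right-hand side then decides between "inconsistent" and "infinitely many solutions." A quick NumPy check (the point values here are arbitrary):

```python
import numpy as np

# three points where two share x = 0.5
xs = [0.0, 0.5, 0.5]
A = np.vander(xs, 3, increasing=True)  # rows are [1, x, x^2]

# rows 2 and 3 are identical, so the matrix is rank deficient
print(np.linalg.matrix_rank(A))  # 2, not 3
```

With rank 2, either one equation contradicts the others (different y values, the inconsistent case) or one equation is redundant (identical points, the infinite-solutions case).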

This is similar to what happens if 3 separate points are defined but happen to fall on a line rather than a parabola.

This leads to our last interesting scenario, which is a special case of two identical x values.

What happened? Earlier I claimed that a function could not have two different y values for a single x value, or it wouldn't be a function. The simple answer: the solution is wrong—these coefficients do not pass through (0.5,0.6) or (0.5,0.5).

Our resulting equation is *y = 66666675.05555565 x*^{2}* - 53333338.23333337 x + 10000001*. Using an arbitrary-precision math system with x = 0.5, we get around 0.647222227499999. This is not 0.6 or 0.5 as expected. However, the coefficients are large enough that the function looks almost vertical around x = 0.5. At x = 0.4999, y ≈ -1332, and at x = 0.5001, y ≈ 1335. It is this transition that makes it look as though the function is passing through both (0.5,0.6) and (0.5,0.5) when in reality it does not pass through either.
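This is easy to reproduce in any double-precision environment by evaluating the quoted coefficients directly (a quick Python check, not part of the original demo):

```python
# coefficients quoted above, produced by the floating point solve
a = 66666675.05555565
b = -53333338.23333337
c = 10000001.0

def f(x):
    return (a * x + b) * x + c  # Horner form of a*x^2 + b*x + c

print(f(0.5))     # ~0.6472, not 0.6 or 0.5
print(f(0.4999))  # ~ -1332
print(f(0.5001))  # ~ +1335
```

The huge coefficients nearly cancel, leaving a curve that whips from large negative to large positive across a width of 0.0002.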

This is a limitation of floating point. If we set the precision higher, to say 25 decimal places, we get y = 1000000000000000000000001 - 5333333333333333333333338.2333333333333333333333337 x + 6666666666666666666666675.0555555555555555555555565 x^{2}.

Remember that there is no solution; a solution is only being found because of rounding errors. However, the error looks close to the impossible situation we desire. This result is like the small-angle approximation, where sines, cosines, and tangents of small values are treated as the value rather than the function. Another method of doing something similar is to use the following values:

This will produce an almost identical graph, but the system has an exact solution. We engineers like this fact of math. I call it “close enough.”