At work I stumbled on a function for doing a log in base 2. After studying the function, I decided to try a similar approach to the exponential function.
We start with Taylor series. I'm not sure how James Gregory discovered infinite series or how Brook Taylor figured out a general method for constructing them back in the early 1700s, but they sure do come in handy in the age of computers. This topic is infamous amongst calculus 2 students, but although they were a lot of work, I rather enjoyed them. Let's first look at the general form of the Taylor series:
    f(x) = Σ_{n=0}^{∞} f^(n)(a)/n! · (x − a)^n

The notation f^(n)(a) denotes the nth derivative of the function, evaluated at a. The variable a is some starting point. Our function is f(x) = e^x. To perform the Taylor series expansion, we need a list of the function's derivatives. For our function, that's easy: every derivative of e^x is just e^x. Filling in the blanks, our series becomes:

    e^x = Σ_{n=0}^{∞} e^a/n! · (x − a)^n
To see why this is useful, let's step back to the Maclaurin series for e^x (the Taylor series with a = 0):

    e^x = Σ_{n=0}^{∞} x^n/n!

This series will work for any value of x. There's just one problem, which should be easy to see from looking at the terms: n!. This value grows very rapidly, and it doesn't take long before we run out of floating-point precision. This is why the Taylor series is better: we can speed up convergence by starting from some known point a closer to x.
In order to implement this in software, we now need a known point. If a general e^x is desired, then we are going to need several known points. I used some known limitations to bound the implementation: 64-bit double-precision floating point has a range from roughly 2^-1024 to 2^1023. Any value outside of this range is too much to work with. Just judging from a spreadsheet, I could see that the useful range was -50 < x < 708. To keep it simple, I decided to use whole-number increments in the table, so 758 entries must be known.
To generate these, I used a program I've been using more and more: PARI/GP, a computer algebra system. One of its features is arbitrary-precision arithmetic (i.e. the ability to calculate with very big numbers). I used the program to generate my table of known values with 50 digits of accuracy.
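A GP script along these lines will do the job (a sketch from memory of GP syntax; the output file name is made up, and the post's actual script may differ):

```
\p 50                                    \\ 50 significant digits
for (a = -50, 708, write("exp_table.txt", exp(a)))
```

Each `write` appends one line to the file, so the result is one 50-digit value of e^a per whole number a in the range.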
And here is the implementation:
Download the tables here.
If you follow the code, you will notice a couple of things. We can take advantage of the fact that a summation is a loop. This speeds up two items, (a − x)^n and n!, because each can be built from the previous iteration's value:

    (a − x)^n = (a − x)^(n−1) · (a − x)
    n! = (n − 1)! · n

Thus, we can accumulate these results just like the summation itself, except that instead of adding, we multiply. The upper limit on the loop is not infinity. The reality is, we only want as much precision as we can store, so once the loop has produced enough precision that the numbers no longer change, it stops. Also, the summation loop starts at 1, not 0. This is because of the factorial: 0! = 1 and 1! = 1. Rather than deal with that special case in the loop, I started with the second term and moved the result of the first term outside the loop. Evaluated, the first term of the series (n = 0) comes out to be e^a. The (−1)^n means this is an alternating series, i.e. the sign flips between even and odd terms. We can handle that better with an if statement. The loop is really:

    e^a + Σ_{n=1}^{k} (−1)^n · e^a · (a − x)^n / n!

Note that k is some finite number based on the precision, usually less than 20.
My initial testing shows this function is accurate to about 12 digits.