Interpolation is a method in which the values of unknown data are filled in using known data. It is basically making an educated guess--hypothesizing. I've used interpolation a number of times in the field and today I'm going to write a little about it.
I've most often used linear interpolation. It is the easiest to implement and, when filling in small gaps in the data, quite sufficient. For larger gaps between data points, one option is polynomial interpolation.
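To show just how little is involved, here is a minimal sketch of linear interpolation. It uses the same data layout as the polynomial version later in this post (an array of points, each with an "x" and a "y" key); the function name is my own.
//---------------------------------------------------------------------------
// Linear Interpolate (sketch)
//---------------------------------------------------------------------------
function LinearInterpolate( $x , $Data )
{
  // $Data is assumed to be sorted by "x". Find the pair of known points that
  // bracket $x and draw a straight line between them.
  for ( $Index = 0 ; $Index < count( $Data ) - 1 ; ++$Index )
  {
    $Left  = $Data[ $Index ];
    $Right = $Data[ $Index + 1 ];
    if ( ( $x >= $Left[ "x" ] ) && ( $x <= $Right[ "x" ] ) )
    {
      $Slope = ( $Right[ "y" ] - $Left[ "y" ] ) / ( $Right[ "x" ] - $Left[ "x" ] );
      return $Left[ "y" ] + $Slope * ( $x - $Left[ "x" ] );
    }
  }
  // $x falls outside the known range -- no interpolation possible.
  return null;
}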
Here we have two signals, both missing 98% of the real data. The first chart uses linear interpolation to fill in the gaps. The second uses polynomial interpolation. The real signal is in green and the interpolated signal is in red. The black dots denote locations where data is present. Note that most of the green is covered because both methods of interpolation reconstruct the signal fairly well. However, aside from the ends, the polynomial interpolation is a much closer fit. This is evident when closely examining a segment of the chart.

Here is the same function, but zoomed into a specific area. The linear interpolation's weak fit is much more apparent, whereas the polynomial interpolation is pretty much a perfect fit.
Here is the equation for polynomial interpolation (the Lagrange form, which is what the code below implements):
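$$y(x) = \sum_{n=0}^{N} y_n \prod_{\substack{m=0 \\ m \neq n}}^{N} \frac{x - x_m}{x_n - x_m}$$

where $(x_n, y_n)$ are the $N+1$ known data points.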
This expands to:
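$$y(x) = \frac{(x - x_1)(x - x_2)\cdots(x - x_N)}{(x_0 - x_1)(x_0 - x_2)\cdots(x_0 - x_N)}\,y_0 + \frac{(x - x_0)(x - x_2)\cdots(x - x_N)}{(x_1 - x_0)(x_1 - x_2)\cdots(x_1 - x_N)}\,y_1 + \cdots + \frac{(x - x_0)(x - x_1)\cdots(x - x_{N-1})}{(x_N - x_0)(x_N - x_1)\cdots(x_N - x_{N-1})}\,y_N$$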
This equation predicts the value at any new point using all of the data from the known points. It builds a polynomial that passes through each of the known points, and since every polynomial is a continuous function, a value of y can be obtained for any value of x.
It looks messy, but as source code it turns into a couple of nested loops.
//---------------------------------------------------------------------------
// Polynomial Interpolate
//---------------------------------------------------------------------------
// Returns the interpolated y value at $x, given $Data as an array of points,
// each with an "x" and a "y" key.
function PolynomialInterpolate( $x , $Data )
{
  $y = 0.0;
  foreach ( $Data as $Y_Index => $Y_DataPoint )
  {
    $y_n = $Y_DataPoint[ "y" ];
    $x_n = $Y_DataPoint[ "x" ];
    // Build the product term for this data point, skipping the point itself.
    $Numerator   = 1.0;
    $Denominator = 1.0;
    foreach ( $Data as $X_Index => $X_DataPoint )
    {
      if ( $Y_Index == $X_Index )
        continue;
      $Numerator   *= $x   - $X_DataPoint[ "x" ];
      $Denominator *= $x_n - $X_DataPoint[ "x" ];
    }
    // Weight this point's y value by its product term and accumulate.
    $y += ( $Numerator / $Denominator ) * $y_n;
  }
  return $y;
}
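For example, with a handful of made-up sample points (values chosen purely for illustration), an estimate at any x in range is a single call:
// A few made-up sample points, purely for illustration.
$Data = array
(
  array( "x" => 0.0 , "y" => 0.0 ) ,
  array( "x" => 1.0 , "y" => 0.8 ) ,
  array( "x" => 2.0 , "y" => 0.9 ) ,
  array( "x" => 3.0 , "y" => 0.1 )
);

// Estimate y halfway between the second and third known points.
$y = PolynomialInterpolate( 1.5 , $Data );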
Because of all the multiplication and division in the nested loops, the process is fairly CPU intensive: the work per interpolated value grows with the square of the number of data points, so the more points there are, the longer each interpolation takes.
Polynomial interpolation can also end up producing significantly worse results than linear interpolation when there are not enough data points.

Using a 5 Hz signal

Using a 7 Hz signal

Using a 7 Hz signal
The above charts show polynomial interpolation without enough sample points to discern the frequencies. Note how the interpolation still passes through each of the data points, but can become radically different between sample points. Linear interpolation actually ends up making a better fit under these conditions. Notice how the ends of the function are worse than the middle; this end behavior, known as Runge's phenomenon, is typical of polynomial interpolation over equally spaced points.
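To get a feel for this, here is a rough sketch (not the exact setup behind the charts above): sample a 7 Hz sine at only 10 evenly spaced points over one second -- well under the 14 samples per second such a signal needs -- and compare the real signal with the polynomial interpolation between those samples.
//---------------------------------------------------------------------------
// Undersampling demo (sketch)
//---------------------------------------------------------------------------
// Ten samples of a 7 Hz sine over one second.
$Data = array();
for ( $Index = 0 ; $Index < 10 ; ++$Index )
{
  $t = $Index / 10.0;
  $Data[] = array( "x" => $t , "y" => sin( 2.0 * M_PI * 7.0 * $t ) );
}

// Compare the real signal with the interpolated one across the sampled range.
// The interpolation hits every sample exactly, but between the samples it no
// longer resembles the real 7 Hz signal.
for ( $Step = 0 ; $Step <= 90 ; ++$Step )
{
  $t = $Step / 100.0;
  printf
  (
    "%0.2f  real: %+0.4f  interpolated: %+0.4f\n" ,
    $t ,
    sin( 2.0 * M_PI * 7.0 * $t ) ,
    PolynomialInterpolate( $t , $Data )
  );
}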
There are other types of interpolation, such as bilinear and trilinear, as well as the more popular spline interpolation. I will probably write about these methods later on.