I have expanded on the Gibbs Phenomenon demo I wrote about the other day. In addition to the plot of the series output, I have added the output of a Fast Fourier Transform (FFT).
The top graph is the signal. The bottom graph shows the first 150 values of the FFT output, interlacing the real and imaginary values. Arranged in this manner, the graph is the spectrum of the signal. Each column on the lower graph represents an increment of 1 Hz, so the full graph spans 0 to 150 Hz. The height of each bar is the amplitude of the sine wave at that frequency. The default graph is set up to create a square wave out of 10 sine waves. The first column is at 5 Hz with an amplitude of 0.6 * 4 / π. The 4 / π is the scale factor used in the series equation, and the 0.6 is the default amplitude. The next spike is at 15 Hz with 1/3 the amplitude of the first, and the third is at 25 Hz with 1/5 the amplitude of the first. There are 10 bars in the FFT graph altogether because the default number of sine waves to add together is 10.
The demo can switch between the approximated square wave and a true square wave. When the switch is made, note that the FFT graph only adds additional bars; the amplitude and position of the existing bars do not change. What this shows is that the series output and a true square wave really do share a close relation to one another.
The idea behind a Fourier transform is to describe a function as a series of sine waves at different frequencies and amplitudes added together. A Fast Fourier Transform is a method to calculate the coefficients for the amplitudes at equally spaced frequencies from a set of samples. Since the Gibbs Phenomenon is a series of sine waves added together, an FFT of this signal has a very predictable output.
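Because the demo's signal is just a handful of sine waves at integer frequencies, the spectrum is easy to check numerically. Here is a rough sketch in Python (it uses a naive DFT rather than a true FFT, which lands on the same 1 Hz bins; the sample rate and bar scaling are my assumptions, not the demo's internals):

```python
import math
import cmath

# One second of the default signal sampled at 300 Hz, so bin k is k Hz.
N = 300
a, f, waves = 0.6, 5.0, 10     # demo defaults: amplitude, frequency, sine count
signal = [
    a * (4 / math.pi) * sum(
        math.sin(2 * math.pi * (2 * i - 1) * f * t / N) / (2 * i - 1)
        for i in range(1, waves + 1)
    )
    for t in range(N)
]

def bar(k):
    """Amplitude of the k Hz component, via one naive DFT bin."""
    X = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / N) for t in range(N))
    return 2 * abs(X) / N      # scaled so a unit sine wave reads as 1.0

print(bar(5))    # ≈ 0.6 * 4 / π ≈ 0.764
print(bar(15))   # ≈ one third of the 5 Hz bar
print(bar(10))   # ≈ 0 -- the series has no even harmonics
```

The bars land exactly where the series puts them: the base frequency at the full scaled amplitude, and each odd harmonic divided down accordingly.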
While adjusting the values of the demo you will notice some interesting things. Frequencies between integer numbers produce FFT bars that sweep over a range. This is because the output only has equally spaced frequency bins every 1 Hz, so in-between frequencies must be represented by the frequencies available. You may also notice small bars next to the primary bars. This is caused by accumulated error in the FFT calculations: floating-point numbers only have so much precision.
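The sweeping-bar effect is ordinary spectral leakage, and it is easy to reproduce: sample a sine wave that falls between two 1 Hz bins and look at the bars around it. A minimal sketch (the 300 Hz sample rate is an arbitrary choice of mine):

```python
import math
import cmath

N = 300                         # one second at 300 Hz, so bin k is k Hz
# 5.5 Hz falls halfway between two bins, so no single bar can hold it.
sig = [math.sin(2 * math.pi * 5.5 * t / N) for t in range(N)]

def bar(k):
    X = sum(sig[t] * cmath.exp(-2j * math.pi * k * t / N) for t in range(N))
    return 2 * abs(X) / N

# The energy smears across the neighboring bins instead of one clean spike.
for k in range(3, 9):
    print(k, round(bar(k), 3))
```

Instead of one bar at 1.0, the 5 Hz and 6 Hz bins each take a sizable share, with a tail spreading into the rest of the spectrum.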
Notice that the phase has no effect on the FFT output, which is expected.
So the amplitude may be a little confusing. To understand it we need to look at the series equation.
The scale factor is 4 / π. This results in the steady-state of the square wave resting at the set amplitude. For example, the default settings of the demo have an amplitude of 0.6, and a look at the graph reveals the ringing signal hovers right around 0.6. To get a known amplitude on the FFT, simply divide the desired amplitude by 4 / π. For example, setting the amplitude to 0.785 will result in the first FFT bar at 1.0 for amplitude. Setting the number of sine waves to sum to 1 will then produce a pure sine wave with peaks at -1 and 1.
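Dividing by 4 / π is the same as multiplying the target bar height by π / 4, which is where the 0.785 comes from. A quick check:

```python
import math

desired_bar = 1.0                # amplitude we want the first FFT bar to show
a = desired_bar * math.pi / 4    # undo the series' 4 / π scale factor
print(round(a, 3))               # → 0.785, the value used in the example above
print(a * 4 / math.pi)           # → 1.0, what the first bar will read
```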
Feel free to experiment, and post if you find something interesting.
A few days ago I wrote an article about using a sloped average filter technique. My friend Noah commented that the distortion caused by a change in slope looked similar to the ringing artifact on a square wave. He was referring to what is known as the Gibbs Phenomenon, which shows that a square wave can be made from a series of sine waves added together.
Here is a little demo I put together to show the effect.
The equation that drives this function is:

y( t ) = a * ( 4 / π ) * Σ( i = 1 → n ) sin( 2 π ( 2 i - 1 ) f t + p ) / ( 2 i - 1 )
Where a is amplitude (a ∈ R | 0 ≤ a < ∞), f is the frequency (f ∈ R | 0 ≤ f < ∞), p is phase (p ∈ R | -π ≤ p < π), and n is the number of sine waves to sum together (n ∈ Z | 1 ≤ n < ∞). If you can't follow the interval notations, have a quick look at this.
The summation looks worse than it is. It starts with a scale factor that keeps the steady-state near 1 and -1 (for an amplitude of 1). The sum itself has (2 i - 1) in it twice. This is just selecting all the odd numbers.
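Written out as code, the sum is only a few lines. This sketch assumes the phase p is added inside each sine term; checking a point midway into the high half-cycle shows the steady-state landing on the amplitude:

```python
import math

def gibbs(t, a=1.0, f=1.0, p=0.0, n=10):
    """Partial square-wave series: n odd harmonics of f, scaled by 4 / π."""
    return a * (4 / math.pi) * sum(
        math.sin(2 * math.pi * (2 * i - 1) * f * t + p) / (2 * i - 1)
        for i in range(1, n + 1)
    )

# Midway into the half-cycles the ringing has settled near ±a.
print(gibbs(0.25, n=500))   # ≈ 1.0
print(gibbs(0.75, n=500))   # ≈ -1.0
```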
This phenomenon happens in actual electrical signals such as the square wave clock signal driving the CPU on the computer you are using. There is always some resistance, capacitance, and inductance in any length of wire, which acts as a low-pass filter on the square wave signal. The result is that the signal rings.
A quick follow-up to my article from a few days ago about the sloped average filter. After thinking about it, I found there is a shortcut for doing the calculation. If the calculation is performed with the X values counting down, then the Y-intercept ends up being the filter's prediction.
The typical method for calculating slope and intercept on a set of one-dimensional data is to assume the X data starts at zero and increments evenly for each Y element. The only constraint on the X data is that it have even increments; it could start or end anywhere.
The equation for linear regression results in a slope (m) and Y-intercept (b) to produce the linear equation: y( x ) = m x + b. Once the values for m and b are found, the prediction for the filter can be made by plugging in the value of x we want to predict. This value of x is the X value for the last piece of data entered. By counting down to zero, the last X value is 0. Plugging that into the linear equation results in y( 0 ) = m * 0 + b, or y( 0 ) = b.
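A sketch of the trick in Python: plain least squares, with the X values generated counting down so the newest sample sits at x = 0 and the intercept is the prediction:

```python
def predict(ys):
    """Fit y = m x + b with x counting down to 0; the intercept b is then
    the fitted value at the newest sample, i.e. the filter's prediction."""
    n = len(ys)
    xs = range(n - 1, -1, -1)              # oldest sample gets n-1, newest gets 0
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return b                               # y( 0 ) = m * 0 + b = b

print(predict([1, 2, 3, 4, 5]))            # → 5.0 on perfectly linear data
```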
So back to the full equation for linear regression on one-dimensional data. First, in expanded matrix form:

| Σ( x² )  Σ( x ) | | m |   | Σ( x y ) |
| Σ( x )   n      | | b | = | Σ( y )   |
The left matrix of summations can be simplified. With x running 0, 1, …, n - 1:

Σ( x ) = n ( n - 1 ) / 2
Σ( x² ) = n ( n - 1 )( 2 n - 1 ) / 6
When solved this turns into:

m = ( n Σ( x y ) - Σ( x ) Σ( y ) ) / ( n Σ( x² ) - Σ( x )² )
b = ( Σ( y ) Σ( x² ) - Σ( x ) Σ( x y ) ) / ( n Σ( x² ) - Σ( x )² )
We are only interested in the Y-intercept, and after substituting the simplified sums and rearranging, the total equation becomes:

b = 2 ( ( 2 n - 1 ) Σ( y ) - 3 Σ( x y ) ) / ( n ( n + 1 ) )
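In code, the intercept is then computed directly from the sums of y and of x·y. This sketch uses my own working of the closed form (derived from the regression equations with x = n - 1 … 0), so treat the exact coefficients as a reconstruction; it agrees with the full regression on linear data:

```python
def intercept(ys):
    """Closed-form Y-intercept with x counting down to 0 (my reconstruction)."""
    n = len(ys)
    sy = sum(ys)
    sxy = sum((n - 1 - i) * y for i, y in enumerate(ys))   # x counts down
    return 2 * ((2 * n - 1) * sy - 3 * sxy) / (n * (n + 1))

print(intercept([1, 2, 3, 4, 5]))   # → 5.0, matching the full regression
```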
Although mathematically this is more straightforward, it unfortunately doesn't improve the speed of the filter. There are 3n + 4 additions/subtractions, n + 4 multiplications and 1 division. One less multiplication is required, but n + 2 more additions are required. So not a great improvement over the original.