Today, instead of tending to yard work, I decided to dive into the next phase of a new project: an Amiga module player. The first milestone has been reached and I’m impressed by how much I’ve already accomplished.
The journey started on Friday when I used some base reference material to read in a standard 4-channel MOD file. Initially I thought I’d detail all of those steps as part of this article, but I have decided instead to make a multi-part series on the process of creating my own MOD player. In this article I will focus on what happened during the day that allowed me to take a skeleton of a library that could load a MOD file, add a very simple WAV file writer, and render the first MOD I composed.
I spent a fair amount of time on Saturday trying to work out how the Amiga computer played back notes. The reference document was fairly vague and its numbers slightly inaccurate, but we’ll save the details for another article. In the end, I learned that the sample period defines the rate at which new words are sent to the digital-to-analog converter. This determines the sample’s pitch. That part was easy. What I wasn’t sure about was how to convert these values to the sample frequency I was using. I finished Saturday with some understanding of how the rates worked.
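The relationship I eventually settled on can be sketched like this. (This is a sketch, not my library's code: `period_to_rate` is a name I'm making up here, and the constant is the commonly cited PAL Amiga master clock.)

```python
PAL_CLOCK = 7093789.2  # PAL Amiga master clock in Hz (assumption: PAL machine)

def period_to_rate(period):
    """Samples per second the hardware sends to the DAC for a given period."""
    # The period counts clock ticks between samples; each sample takes
    # two ticks of the master clock per period unit.
    return PAL_CLOCK / (2 * period)

# Period 428 is the classic middle C, which works out to roughly 8287 Hz.
print(round(period_to_rate(428)))
```

Halving the period doubles the playback rate, which is why smaller period values mean higher notes.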
Today it was time to make sound. It made sense to me to render the first MOD I ever wrote: Que’s First. While technically Crazzy was the first MOD I wrote, it was not composed so much as randomly thrown together. Que’s First’s melody was composed on a keyboard before it was put into MOD form. So it is really the first MOD I composed.
I started by ignoring periods altogether. Somewhere I had read that the samples that make up MOD instruments were sampled at 8000 Hz. So I made a program to extract MOD samples and turn them into WAV files set up with an 8000 Hz playback frequency. This was quick but required one conversion: Amiga samples are two’s complement 8-bit signed integers, while an 8-bit WAV sample is unsigned. So all the samples need to have 0x80 added to them.
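A minimal sketch of that extraction step, using Python's standard `wave` module (the function name and defaults are my own; my actual extractor isn't shown here):

```python
import wave

def mod_sample_to_wav(signed_bytes, path, rate=8000):
    """Write raw signed 8-bit MOD sample data as an 8-bit unsigned mono WAV."""
    # Adding 0x80 (mod 256) maps two's-complement -128..127 onto 0..255,
    # which is the offset-binary form 8-bit WAV expects.
    unsigned = bytes((b + 0x80) & 0xFF for b in signed_bytes)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(1)      # 8 bits per sample
        w.setframerate(rate)   # assumed 8000 Hz playback frequency
        w.writeframes(unsigned)
```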
This produced all the samples from my MOD as I expected, and their pitch sounded reasonable. My first pass at playback would simply render the first channel of the pattern. This would work well because, for that song, the first channel is the drum beat—a simple kick drum and snare combination. Pitch doesn’t matter too much there. This would allow me to get the speed of playback correct.
This produced a WAV file that had my bass/snare beat. A good start. The next step was to render all 4 channels. The song begins with a measure of just the beat: the kick/snare on the first channel, and a rattle shake (like a maraca) on the second. The volume of the shake alternates between full and half, which loosely mimics the beads in the rattle sounding at different volumes depending on the side they hit.
This code produced the entire beat which is looped throughout the song. Now it was time for pitch. My work yesterday gave me an equation to convert the note’s period value to the number of samples per second sent to the Digital-to-Analog Converter (DAC). I had a fixed number of samples per second sent to the DAC, so what I needed to calculate was which sample would be getting sent to the DAC at a moment in time. I plan to write in much more detail about how MOD timing works, but for now just understand that songs are broken up into patterns, which consist of 64 divisions in which a note can be played. The speed at which divisions are played is based on the tempo, which defaults to 125 beats/minute. There are 4 divisions in a beat. A division is further divided into ticks, but other than knowing that the speed calculations assume 6 ticks/division, ticks are not yet used elsewhere.
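That timing arithmetic can be sketched as follows (the function name and the 44.1 kHz output rate are my assumptions, not necessarily what my code uses):

```python
def samples_per_division(tempo=125, ticks_per_division=6, out_rate=44100):
    """How many output samples one division spans at the given output rate."""
    # At 125 BPM with 4 divisions per beat and 6 ticks per division,
    # a tick lasts 2.5 / tempo seconds (0.02 s at the default tempo).
    tick_seconds = 2.5 / tempo
    return int(out_rate * tick_seconds * ticks_per_division)

# 0.12 s per division at the default tempo -> 5292 samples at 44.1 kHz
print(samples_per_division())
```

As a sanity check: 125 beats/minute times 4 divisions/beat is 500 divisions/minute, or 0.12 seconds each, which agrees with the tick math.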
So I added a function to mix a single division’s worth of samples. This function has a fixed-point index used to figure out where in the channel’s instrument sample the next output sample comes from. The index advances by some fractional amount based on the note’s period. We only use the whole number for the index, but keep the fractional part so it can properly accumulate as playback continues.
I needed to add the calculation to compute the note’s playback increment rate. This is how far (including the fractional part) the instrument sample index advances for each output sample of the mix. Just simple scaling math here. To make it easy on myself I used floating-point for doing the calculation. There are no speed concerns and I was just trying to move quickly, so I didn’t feel bad about this.
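Putting those two pieces together, the increment calculation and the fixed-point stepping might look like this. (A sketch under my own assumptions: 16.16 fixed point, a PAL master clock, and invented names; the real mixer also handles volume and multiple channels.)

```python
PAL_CLOCK = 7093789.2  # assumed PAL Amiga master clock in Hz
FRAC_BITS = 16         # 16.16 fixed point, a choice made for this sketch

def note_increment(period, out_rate=44100):
    """Fixed-point step through the instrument sample per output sample."""
    rate = PAL_CLOCK / (2 * period)          # samples/sec the note plays at
    return int(rate / out_rate * (1 << FRAC_BITS))

def mix_division(sample, period, count, out_rate=44100):
    """Resample `sample` (a list of ints) into `count` output samples."""
    inc, pos, out = note_increment(period, out_rate), 0, []
    for _ in range(count):
        idx = pos >> FRAC_BITS               # whole part indexes the sample
        out.append(sample[idx] if idx < len(sample) else 0)  # past end: silence
        pos += inc                           # fractional part accumulates
    return out
```

Keeping the fractional bits in `pos` is what prevents the pitch from drifting: truncating to a whole index on every step would lose the remainder each time.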
The results: I had a full playback of my first module that was mostly accurate. For my next iteration I addressed two issues: the pattern break effect and volume slides. In order to do this I needed to address ticks. As briefly stated, each division is further broken into a number of ticks. Effects are applied on the tick level. For volume slide, the amount the volume changes is applied each tick.
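A sketch of the per-tick volume slide, assuming the common tracker convention that the slide is skipped on the first tick of a division and that Amiga volume clamps to the 0–64 range (the function and parameter names are mine):

```python
def volume_slide(volume, slide, ticks=6):
    """Apply a volume slide across one division.
    `slide` is the signed change per tick; volume clamps to 0..64."""
    for _ in range(1, ticks):                 # no slide on the first tick
        volume = max(0, min(64, volume + slide))
    return volume
```

At the default 6 ticks/division this applies the slide five times, so a down-slide of 2 takes a full-volume channel from 64 to 54 over one division.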
Although I didn’t need it for this song, I also added instrument loops. While rendering my own song, I was also rendering a classic: Bjorn Lynne’s 12th Warrior. I wasn’t worried about getting everything correct—just pieces—and did get instrument loops working.
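The loop wrap-around can be sketched like this, assuming the MOD convention that a tiny repeat length (one word, i.e. two bytes) means the instrument doesn’t loop at all (names are my own):

```python
def wrap_loop_index(idx, loop_start, loop_len):
    """Wrap a sample index back into an instrument's loop region.
    loop_start and loop_len are in samples; loop_len <= 2 means no loop."""
    if loop_len <= 2:
        return idx                      # one-shot: caller treats past-end as done
    loop_end = loop_start + loop_len
    if idx >= loop_end:
        idx = loop_start + (idx - loop_end) % loop_len
    return idx
```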
That was it. With the volume slide functional, I was able to fully render my very first Amiga module. For the first time, I am releasing this MOD and my first rendering of it. Be aware, I was 13 or 14 years old when I composed this song and was not (and am still not) a musical prodigy.
The song itself was composed around 1991 or 1992 on a Yamaha PortaSound PSS-140 keyboard acquired from a garage sale, and I tracked this MOD sometime between mid and late 1993. 28 years later, I am able to render it into a playable waveform using only my own software.
All the code posted here was written in a full day of work, built on code written in bits over the prior 4 days. I typically don’t release uncleaned code, but this is kind of a unique project in its ability to show code as it develops.