At the end of 2022, Benjie from Benjiao Modular released an Atari Punk Console synthesizer using a couple of 555 timer chips and a handful of other components. Now, there are lots of 555-based synthesizers out there, but in this case he printed it onto a PCB the size of a business card and added his contact information into the silkscreen of the PCB. Thus the SynthCard was born!
This clearly genius idea gained traction and soon others followed. Karltron created a triple VCO/LFO SynthCard, and Juanito Moore of Modular for the Masses created an 808 Kick Drum and MS-20 style filter.
Now, I don’t want to get too sidetracked here because ultimately, we are still trying to build a YM3812 EuroRack module. But, all of the hardware we’ve needed for the first eight articles of this series fits nicely onto a single breadboard. And that got me wondering… would it be possible to fit all that onto a SynthCard-sized PCB? What if it could only use through-hole components?
Well, this wouldn’t be much of an article if you couldn’t, so… yes! You totally can!
The YM3812 SynthCard
This business card sized module takes a MIDI input, generates sound using a YM3812 OPL2 sound processor and then outputs that sound through a mono jack. The schematic for this is basically the same as the one we created all the way back in article 4, but I’ve included a status LED and also a 5v voltage regulator.
It also has all of the same great features implemented so far:
Support for the General MIDI 1.0 patches and instrument selection
Support for percussion patches on MIDI channel 10
Support for level control on each incoming note
What can I do with it?
My favorite use case for this module is to send it General MIDI music from video games. While you might think it would sound the same as an Adlib card—because it has the same OPL2 sound chip—the instrumentation can be quite different. For example, take a look at this side by side comparison of Leisure Suit Larry 3:
Notice how the original Adlib instrumentation uses far fewer voices than the General MIDI? I find that very strange because both the Adlib card and this SynthCard use the same OPL2 chip. While the OPL2 chip is clearly capable of playing more sophisticated music, the Adlib instrumentation doesn’t take advantage of it. This leads me to believe that perhaps the original Adlib composition statically assigned patches to each voice and kept them the same throughout the song. Perhaps dynamically reassigning them was too CPU intensive? If anyone knows the answer here, I’d love to hear it!
Bottom line, General MIDI music rendered on an OPL2 chip sounds very different in this game. (I dare say, even better!)
Then go to JLCPCB (or PCB WAAAAaaay!), whichever PCB manufacturer you like best, and drag the zip file onto the “Add Gerber File” box. After a few seconds, you will see an order form. There are only a couple of key changes to make:
The most important change is to select LeadFree HASL. Sure it adds $1 to the price, but since this is supposed to be like a business card that you can pass around and handle… with your hands… you really want to avoid getting lead poisoning. And with my name on it… I just don’t want to feel responsible for that. So please, go unleaded. (And print at your own risk).
OK, enough disclaimers. From here you are able to select your favorite color and then you can also choose “Specify Location” for the Remove Order Number option. (I specified a location in the Gerber File).
From there, just go ahead and place your order! For the low, low price of $3 (plus ~$16 shipping and handling) you get not one… but TWO boards. And if you act now, they will MORE THAN DOUBLE YOUR ORDER and send you FIVE BOARDS! Juuuuust kidding. This deal lasts forever (it’s their minimum order quantity). It’s still a pretty great deal though. So get 5 and pass the extras around to your friends!
How do I build one?
First, you are going to need a few parts. If you have been following along, you probably have most of these parts already—especially the AVR microcontroller, the YM3812, Y3014b, and other chips, as well as the passive components. This board adds just a few new components that you’ll need to source. Specifically: a 78L05 voltage regulator (in a TO-92 package), a 1N4001 rectifier diode, a mono (black) Thonkiconn jack, and a stereo (green) Thonkiconn jack. For U6 I’m using an OPA2604 operational amplifier instead of a TL072 because this is a single power rail design, but in theory, you might be able to get away with an alternative op-amp. Just play with it a bit.
Bill of Materials
Here is the full list of materials that you will need to populate the board:
CAPACITORS:
• 100 nF (Ceramic), x9 (C1-C6, C11-C13)
• 4.7 uF (Electrolytic), x1 (C7)
• 10 uF (Electrolytic), x3 (C8-C10)
RESISTORS:
• 220R (Axial), x2 (R1, R2)
• 100K (Axial), x1 (R3)
• 2K (Axial), x1 (R6)
• 1K (Axial), x3 (R4, R5, R7)
DIODES:
• 1N4148 Signal Diode (Axial), x2 (D1, D2)
• 1N4001 Rectifier Diode (Axial), x1 (D4)
• LED (3.0mm), x1 (D3)
JUMPERS:
• Pin Header 01×02 2.54mm Vertical, x3 (J1, J2, J6)
• Pin Header 01×04 2.54mm Vertical, x1 (J3)
JACKS:
• MIDI IN: Green Thonkiconn Stereo 3.5mm Audio Jack (PJ366ST)
Hopefully the labeling is mostly self explanatory. On the front, I’ve listed all of the component values, and on the back, you’ll find all of the component reference numbers. Here are the usual tips and gotchas for building modules:
Start with the smallest / flattest components first—This is a pretty densely populated board, so soldering things on in the right order will make your life easier. Start with the resistors, diodes, capacitors, and voltage regulator. Then move on to the switches, ICs and the oscillator. Finally solder on the jumpers and Thonkiconn jacks.
Check the alignment of polarized components. The line on the diode should match the line on the board. The strip on the electrolytic capacitors should match the white silk screen on the board. The 78L05 should align to the shape in the silk screen. The longer pin of the LED should go in the round hole. The notch in the ICs (or IC sockets) should match the notch on the board. The square corner of the oscillator should match the corner on the silk screen.
Sockets vs. No Sockets – This is totally up to you. I personally use sockets, but I know there are lots of arguments on why NOT to use sockets (the connections fall out, sometimes they cost as much as the chips, etc.). I would recommend using a socket for the YM3812, Y3014b and the AVR128DA28, as those are the rarest and/or most expensive chips on the board. To be on the safe side, you might want to power up the board and test the voltages before plugging them in.
Stereo (Green) Thonkiconn jack is used for MIDI In, and the Mono (Black) Thonkiconn jack is used for the Audio Output.
Powering Up!
The 5v regulator allows you to power this device using anything from 6v to 12v DC. No negative voltages are required, so you could use a 9v battery, an external power supply, or even just plug it into the positive end of a EuroRack connector.
Alternatively, you can plug a regulated 5v supply into the UPDI header at the top of the card. This allows you to power the device directly from a USB-to-FTDI cable or even an external Li-Po battery.
CAUTION: The regulated power connectors on the left and right side of the board have reverse polarity protection, but the 5v connector on the top does not. Be sure to only use 5v and make sure you plug it in the right way around!
Uploading the firmware
The firmware for this module is exactly the same as what we’ve built throughout the article series thus far. Just go find the latest code from Article 8 and then upload it from the Arduino IDE using an FTDI cable. There are lots of details on how to do that throughout this series, but especially in Article 2.
Usage and Troubleshooting
To use a MIDI cable, you will need an adapter. These are pretty easy to find online, but there are several different versions. You will want a version that connects pins 4 and 5 of the MIDI connector to the tip and ring of the 3.5mm jack. I talk more about the different standards in Article 4 if you want some more background. Depending on which version of the adapter you get, you may need to change the direction of the Korg/Art switch on the left side of the SynthCard. If you have a BeatStep Pro or another device that uses a TRS jack for MIDI output, you may be able to connect it directly using a 3.5mm stereo audio cable (no adapter required!).
If you turn the device on and the red activity light just stays on, press the reset button a bunch of times until it turns off. I have yet to track down why that happens. If you guys have any ideas, let me know in the comments!
If things still don’t work, check that all of the polarized parts are in the right way around.
You want some more?
Well, you are in luck! Benjie put together a website, synthcard.com, to host other SynthCard projects! I hope others will join in the fun and create even more mini-synth business card goodness. If you want to jump into the conversation, then head on over to the Modular for the Masses discord community.
Errata
The microswitch was labeled as an EG1271 which is an SPDT switch. The correct part should be EG2271 which is a DPDT switch. Many thanks to Peter for debugging this one and identifying the issue!
Today, it’s time to take our module to the next level. Lots of levels in fact! See, right now our module is full volume all the time. And well, music is more than notes and drum sounds—it needs dynamics! Fortissimo, pianissimo, and every level in between. Of course, our General MIDI friends thought of this, and their solution has been staring at us the whole time! Here, check out the MIDI note-on handler function:
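For context, here is roughly how that handler looked coming out of the earlier patch article (later articles may have tweaked it, so treat this as a sketch). Notice the third argument: the MIDI library hands us a velocity value with every note-on, we just haven’t used it yet.

void handleNoteOn( byte channel, byte midiNote, byte velocity ){ // Handle MIDI Note On Events
    uint8_t ch = channel - 1;                                    // Convert from MIDI's 1-indexed channel numbers
    PROC_YM3812.patchNoteOn( inst_patch_data[ch], midiNote );    // So far, velocity never makes it past this point
}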
Velocity refers to the speed that you press a key on the keyboard. People often refer to this as “Touch Sensitivity,” but in most keyboards, this is a measure of key press speed, not the amount of pressure applied. There ARE keyboards that measure pressure, and that can be useful in something called “aftertouch” but we aren’t going there… yet…
Every time you press a key, a touch sensitive keyboard communicates both the note number and the velocity over MIDI. The MIDI library passes both of these values to the note on handler function. From there, it’s up to us to do something with it.
Level Scaling
Scaling the level of a channel should be pretty straightforward, right? Just take the current level of the note and multiply it by the fraction velocity / 127. This way, high velocity notes will be proportionally louder. Right? Well, if it were that simple, this would be a boring article. So, not quite. There are a couple of gotchas here we need to talk about.
Operator Scaling
As you might suspect, the level setting controls the loudness of a channel. But level isn’t a channel setting. Level is an operator setting. And our patch definition has four different level settings per channel. So the question becomes, which level setting should we change? To figure that out, we need to review how an operator’s level affects the sound production.
Mixing
Sticking with the two-operator capability of the YM3812, there are only two ways to configure the “algorithm”: mixing and frequency modulation. If we mix the two operators together, then BOTH operators contribute equally to the sound output. And, to scale that sound, we need to scale the level of BOTH operators.
Frequency Modulation
In frequency modulation mode, the two operators play very different roles—carrier and modulator. In this mode, only the carrier—operator 2—affects the level of the sound. The level of the modulator—operator 1—affects the timbre of the output sound instead. So, we only want to scale the level of operator 2. If we scaled operator 1 as well, then the sound would become brighter the harder we press a key. This could be an interesting way to add aftertouch, but again, not what we are trying to do today.
So, let’s put this all together:
First check the algorithm to see if we are mixing or modulating
If mixing, scale both operators
If modulating, only scale the carrier operator (op2)
Calculating Level
Now that we know which numbers to combine, we need to figure out how these properties work. The YM3812 accepts a 6-bit value for the level property. That means it supports values from 0 through 63. On the other hand, the level value of our patch goes from 0 through 127. Take a look at article 6 if you need a refresher on why that is. Our velocity value goes from 0 through 127 as well. To put these together, we need to scale the level by the fraction of velocity out of 127:
patchLevel * velocity / 127
This formula results in values between 0 and 127, but we need those values to go from 0 to 63. So to fix that, we just need to divide the whole thing by 2 again. Or, better yet, divide by 256. (Yes I know that 127 x 2 is 254, but it actually doesn’t change the integer solutions and 256 is much easier to divide by):
patchLevel * velocity / 256
Let’s plot out some values based on this formula to better understand what’s going on. The table below shows patch levels (columns) and velocities (rows) from 0-127. I’ve also color coded the cells so things are easier to see:
The color coding does a nice job here of showing how the highest values sit in the top right corner. And then those values decrease as you move left or down. This also brings to light our first “gotcha.” The value of Patch Level (for some reason) assumes that 0 is the loudest and 127 is the quietest. We need our values to radiate from the top left corner, instead of the top right. To do that, we can just invert the Patch Level:
(127-patchLevel) * velocity / 256
By subtracting patchLevel from the highest possible value (127), our table plot flips around. Let’s take a look:
Ok, now we are getting somewhere. The top left corner of the table appears to be going from loud to soft in all directions. Now for the second “gotcha.” The YM3812 also expects that zero is the loudest value. So now, we need to invert the entire table:
63 - ((127-patchLevel) * velocity / 256)
Just like before, we take the highest possible value (63) and subtract the entire formula. And now, plotting this out, you can see that the loudest values sit in the top left corner. As you move toward a softer patch level (127) or a lower velocity (0), the result gets closer to 63, the quietest value the chip supports.
Beautiful. Now that we have our formula, let’s go write some code!
Code Updates!
For the most part, we only need to make a few tweaks from the last article. To keep this straightforward, let’s start at the noteHandler function and work our way inward.
The first change is pretty simple. We need to pass the velocity argument on to the patchNoteOn function of our YM3812 library. Of course for patchNoteOn to accommodate this new argument we have to update that too. So let’s do that next—starting with the function definition in the YM3812.h file:
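Something along these lines; the exact signature in the repository may differ slightly, but the key change is the extra velocity argument:

void patchNoteOn( PatchArr &patch, uint8_t midiNote, uint8_t velocity ); // Now takes the note's velocity as well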
Remember that there are two versions of the patchNoteOn function, and they both need to be updated. The second version simply calls the first, but now passes velocity along. Now let’s modify the patchNoteOn implementation. This code is in the YM3812.cpp file:
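Here is a sketch of the updated implementation: it’s the function from the last article with one new line that stores velocity (everything else is unchanged):

void YM3812::patchNoteOn( PatchArr &patch, uint8_t midiNote, uint8_t velocity ){
    last_channel = chGetNext();                               // Pick the next available YM3812 channel
    channel_states[ last_channel ].pPatch = &patch;           // Store pointer to the patch
    channel_states[ last_channel ].midi_note = midiNote;      // Store midi note associated with the channel
    channel_states[ last_channel ].velocity = velocity;       // NEW: store the velocity so chSendPatch can use it later
    channel_states[ last_channel ].note_state = true;         // Indicate that the note is turned on
    channel_states[ last_channel ].state_changed = millis();  // Save the time that the note was turned on
    chPlayNote( last_channel, midiNote );                     // Play the note on the chosen YM3812 channel
}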
patchNoteOn chooses the next YM3812 channel and then keeps track of the note metadata. We are going to track velocity—just like the midiNote—by saving it into the channel_states array. The one new line of code in this function saves velocity into the channel_states entry for the current channel. This will make the information available later when we need to update the settings of the YM3812. Now, remember that channel_states is an array of YM_Channel data structures. So, to save velocity into it, we need to add a velocity property to the data structure:
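A sketch of the updated structure (the default value for velocity is my assumption):

struct YM_Channel{
    PatchArr *pPatch = NULL;       // Pointer to the patch playing on the channel
    uint8_t midi_note = 0;         // The pitch of the note associated with the channel
    uint8_t velocity = 127;        // NEW: velocity of the note playing on the channel (assumed default: full level)
    bool note_state = false;       // Whether the note is on (true) or off (false)
    unsigned long state_changed;   // The time that the note state changed (millis)
};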
After tracking the channel states, the patchNoteOn function calls chPlayNote which in turn calls chSendPatch before playing the note. And, chSendPatch configures the YM3812 for the instrument we want to play, and this is where we get to work our magic. For reference, here is the full function:
Most of this function works exactly as it did before. But now, instead of using the operator’s level value in the patch data, we calculate it using some fancy math. And that math comes in the form of these 5 lines of code:
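Reconstructed from the description below (variable names and the exact shifts are my sketch rather than the verbatim code):

bool scale_level = (patch[PATCH_ALGORITHM] >> 6) || (op > 0);                  // Scale unless this is the modulator (op 0) in FM mode (algorithm 0)
uint8_t op_level = patch[PATCH_LEVEL + patch_offset] >> 1;                     // "Don't scale" case: map the patch level straight from 0..127 down to 0..63
if( scale_level ){                                                             // "Do scale" case:
    op_level = 63 - (((127 - patch[PATCH_LEVEL + patch_offset]) * channel_states[ch].velocity) >> 8); // Flip, scale by velocity, divide by 256, flip back
}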
The first line of code decides whether or not to scale the level of the patch:
Thinking back to the algorithm we talked about earlier, there’s only one case where we don’t scale the level: when the algorithm is zero (FM mode) AND the operator is zero (the modulator operator). In that case, level controls timbre, not volume; in every other case, level affects the volume of the sound. Chips with more operators require a more complex table, but this one is nice and simple.
The second line of code handles the “don’t scale” case by assigning the current patch’s level value to op_level. This line also right shifts the patch value by one so it falls into the 0 to 63 range. The fourth line handles the “do scale” case, and adjusts the patch’s level proportionally to velocity. While the code looks a little complicated, it really just derives from the formula we created earlier:
63 - ((127-patchLevel) * velocity / 256)
First let’s substitute patchLevel with the reference to the level value in the patch array:
And finally, we use one more trick. Whenever you divide an integer by a power of two, you can right shift instead, which is MUCH faster. Here, for example, we right shift by 8 bits to divide by 256. Because these are integers, we want to multiply level and velocity first and only then shift; doing the division too early would truncate away most of the precision. That’s why the parentheses are there.
And that’s how you derive that line of code. There’s one more small tweak to make in this function. We need to update the sendData command to use our newly calculated op_level value:
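Something like this: the key-scale-level bits still come straight from the patch, and only the level portion is swapped for op_level (the exact line is my sketch; reg_offset is the operator’s register offset on the chip):

sendData( 0x40 + reg_offset, ((patch[PATCH_LEVEL_SCALING + patch_offset] >> 5) << 6) | op_level ); // KSL in bits 6-7, our calculated level in bits 0-5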
In this first demo, let’s see how this SHOULD sound when only carrier operators are scaled:
Notice how the waveforms scale up and down with the note velocity, but otherwise don’t change? This works because in FM mode we don’t scale the modulator operator. Just for the heck of it, let’s see what happens when we scale everything including the modulator operator:
Notice here how the edges and corners of the waveforms become more pronounced as velocity increases. This is how that modulator operator affects the sound. Now, there may be times where it makes sense to have velocity change the quality of a note. Maybe a bass that gets more plucky the harder you play the note? That feels to me like a special case, but perhaps there is an opportunity to add velocity response as an operator property in the patch? Definitely open to feedback there.
Conclusion & Links
I don’t know about you, but I never thought scaling the level of a note would be so difficult. Still, this opens the door to some other cool features down the road like more granular panning—once we have multiple chips of course. As always you can find the code on GitHub. And if you run into any issues, feel free to drop a comment below. For the next article, I’m a bit torn between pitch bend and virtual 4-op voices. If you have a preference, let me know!
A couple years ago, I had this crazy idea of controlling multiple Yamaha FM sound chips with a single microcontroller. At the time, I had a YM3812, YM2151, YM2413 and even a non-Yamaha chip, the SN76489 on a breadboard outputting sound together. It turned out that controlling the chips themselves wasn’t really that hard. They all follow generally the same principles that we have been using to control the YM3812. The tricky part turned out to be finding a way to control the sound properties in an even remotely consistent way. Each of the chips has different properties, and those properties have different ranges. Those differences became pretty impossible to keep track of in my head, which began my quest to find a unified way of controlling them.
If you’ve been following along through the blog over the last few months, I’m building a EuroRack module around the YM3812 OPL2 sound processor. In the last entry, we got our module running with 9 voice polyphony and that made it a pretty playable instrument. Still, to change the characteristics of the sound itself, you have to set all of the sound properties one at a time. In this article, we are going to fix that by bringing together all of the settings into a single data structure called a patch. But this patch isn’t just going to work for the YM3812, it’s going to be cross compatible with several Yamaha sound processors. Moreover, we are going to explore how to assign different MIDI channels to different patches. This will allow us to use multiple patches at the same time and turn our instrument into an orchestra! Let’s get started.
A Unified Patch Structure
Over the years Yamaha produced a variety of sound chips, each with its own unique set of features. Now to keep this exercise reasonably simple, I’m going to focus on a subset of the chips that use four operators or less. Within this list, there are some interesting familial relationships between the chips.
Yamaha Chip Evolution
The YM3526 (OPL) was the first in the OPL family of sound processors. It combined sine waves through frequency modulation to produce a wide variety of sound characteristics and represented a radical departure from the square-wave chips at the time.
The YM3812 (OPL2) added multiple waveforms, but otherwise stayed much the same. In fact, this chip is backwards compatible with the YM3526 and you actually have to take it out of compatibility mode in order to use the broader waveform set.
Yamaha introduced the YM2413 (OPLL) as a budget version of the YM3812 designed for keyboard usage. This chip comes with a set of pre-configured instruments as well as a user-configurable one. You can only use one instrument at a time—which makes sense for a keyboard. The chip has a built-in DAC, but I have found the audio to be rather noisy.
The YMF262 (OPL3) advanced the OPL family considerably by upping the number of operators per voice to 4, adding even more waveforms, and—assuming you spring for two DAC chips—up to four output channels. Still, even this chip is backwards compatible all the way to the YM3526.
While the YM2151 (OPM) launched in 1984 around the same time as the YM3526, this family of sound chips departed from the OPL family in a few key ways. For one, this chip started out with four operators per voice, but did not include additional waveforms beyond sine waves. This chip provided a more advanced low frequency oscillator (that supports multiple waveforms) and the ability to change tremolo/vibrato levels at the individual channel level. The OPM line also introduced detuning and an alternative style of envelope generation.
The YM2612 (OPN2) builds on the functionality of the YM2151, adopting its 4-op sine-wave-only approach to FM synthesis. This chip removes some of the LFO and noise-generation capabilities of the YM2151, but also adds a DAC as well as SSG envelopes.
Envelope Differences
The OPL family provides two different envelope types that you can select through the “Percussive Envelope” setting. In a percussive envelope, the release cycle begins as soon as the decay cycle ends. This allows you to create percussive instruments like drums, xylophones, wood blocks, etc.
The OPM/N family provides only one envelope type, but adds a new setting called Sustain Decay. With this setting you can create both ADR and ADSR envelope styles as well as anything in between.
SSG envelopes control the channel level in a repeating envelope pattern according to the chart above. The AY-3-8910 introduced this feature in 1978. Later, Yamaha picked up manufacturing of this chip with a couple tweaks and rebadged it as the YM2149 (SSG). These three-voice chips produce sound with square waves and don’t use FM synthesis at all. Still the SSG envelope has a very distinctive sound and it’s interesting to see them reappear as a feature in the YM2612 (OPN2).
Lastly, it’s worth noting that while chips like the YM2149 and the SN76489 don’t provide envelopes or LFOs, you can emulate them by using a microcontroller to adjust the level directly. Maybe a good topic for a future article? 🤔
Comparing Property Ranges
Of course, these chip families differ at more than a feature level. Once you look at the range of values each property supports, their familial relationships become even clearer:
Ignoring the YM2149 for a second, the major differences between the chips fall along the lines of the two chip families (OPL vs. OPM/N). The OPM/N family provides more granular control over the attack and decay envelope settings (0..31 vs. 0..15), level (0..127 vs. 0..63), and envelope scaling (0..3 vs. 0..1). With more operators, the OPM/N family also allows a wider range of values for algorithm (0..7 vs. 0..1). Even the 4-op YMF262 (OPL3) only provides 4 algorithms.
In order to produce a patch that works across these chips, we need a way to represent these ranges consistently. And to do that, we’ll review a little bit of math. If you recall from the first article, each register byte combines multiple settings by dedicating a few bits to each one.
The maximum values derive from the limited number of bits associated with each setting. The more bits, the broader the range, and every maximum is one less than a power of two.
With this in mind, scaling settings between different chips could be as simple as left or right shifting the value. For example, Attack on the YM2151 ranges from 0-31 (a 5-bit number), while Attack on the YM3812 ranges from 0-15 (a 4-bit number). So if you had a YM3812 attack value, you could scale it for the YM2151 by shifting the number one bit to the left. This is equivalent to multiplying the number by two. Unfortunately, we always lose fidelity when we scale a number down and back up. For example:
(19 >> 1) << 1 == 18 (not 19)
Finding Consistency
We solve this by only scaling numbers down, and never back up. This means we have to start with the largest possible value and scale down from there. By default, the MIDI specification allows values that range from 0-127. You can send larger values, but you need to break them up across multiple bytes. Thankfully, this range covers nearly all of the values we’d need anyway, so it’s a good place to start. With this in mind, I updated the combined spec:
Non-patch settings
Most of the operator-level and channel-level settings make sense to include in the patch because they alter the properties of the sound. However, settings like frequency and output channel are more “runtime” properties. They affect things like the note you are playing and the speaker you are playing it through. Additionally, global settings like Tremolo Depth change all patches being played on the system at the same time. So, if you reassigned those every time you loaded a patch, those patches would conflict and override each other. For this reason, global settings can’t be included in the patch either.
Adding a few extras
Processor Config associates the patch with a sound processor type. Since our module only includes a YM3812 chip, we can ignore this setting for now. For a multi-sound processor module though, this would become super important.
Note Number indicates the note frequency to use when playing percussion sounds. We will dig more into this with the next article, but MIDI channel 10 aligns to a dedicated percussion mode. In this mode, each note on the keyboard aligns to a different patch, providing access to 42 different drum sounds. Of course those percussive sounds still need to be played at a specific frequency. And since we can’t rely on the keyboard key for that frequency, we use the note number setting instead.
Pitch Envelopes (PEG)
This one is a bit of a stretch goal for me. Pitch envelopes—like other envelopes—trigger upon playing a note. But, instead of affecting the sound level, they bend the pitch into the note while it’s held, and then away from the note upon release. You can find this feature on sample-based modules like Yamaha’s MU series, but not in these FM chips. To make this work, we will have to emulate the functionality in the micro-controller. For now, I’ve added it to the spec as a placeholder.
Coding the Final Spec
Putting everything together, we get a 78 element array of 8-bit integers. Each element of this array corresponds to an index shown above. The first 10 elements store the general patch & channel settings, while the remaining elements store operator settings in groups of 17. Since there are four operators, you can calculate the full patch size by multiplying the number of operator settings by 4 and adding the number of general settings.
I’ve defined these values in a new file called, YMDefs.h:
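The arithmetic from the paragraph above shakes out like this (PATCH_SIZE and PATCH_OP_SETTINGS are the names used later in the article; the name for the count of general settings is my placeholder):

#define PATCH_GENERAL_SETTINGS 10                                   // General patch & channel settings at the start of the array
#define PATCH_OP_SETTINGS      17                                   // Settings stored for each of the four operators
#define PATCH_SIZE (PATCH_GENERAL_SETTINGS + 4 * PATCH_OP_SETTINGS) // 10 + 4*17 = 78 elements total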
Also in the YMDefs.h file, the PatchArr data type defines the patch structure. It is simply an array of 8-bit integers:
typedef uint8_t PatchArr[PATCH_SIZE]; // Define the PatchArr type as a uint8_t array
Additionally, I’ve added definitions for every array index. Each setting includes comments that show the supported sound processor types.
// Instrument Level / Virtual
#define PATCH_PROC_CONF 0 // Specifies the processor and desired processor configuration
#define PATCH_NOTE_NUMBER 1 // Indicates the pitch to use when playing as a drum sound
#define PATCH_PEG_INIT_LEVEL 2 // Initial pitch shift before attack begins (Virtual setting - none of the YM chips have this by default)
#define PATCH_PEG_ATTACK 3 // Pitch Envelope Attack (Virtual setting - none of the YM chips have this by default)
#define PATCH_PEG_RELEASE 4 // Pitch Envelope Release (Virtual setting - none of the YM chips have this by default)
#define PATCH_PEG_REL_LEVEL 5 // Final Release Level (Virtual setting - none of the YM chips have this by default)
// Channel Level
#define PATCH_FEEDBACK 6 // Used by: YM3526 | YM3812 | YMF262 | YM2413 | YM2151 | | YM2612 |
#define PATCH_ALGORITHM 7 // Used by: YM3526 | YM3812 | YMF262 | | YM2151 | | YM2612 |
#define PATCH_TREMOLO_SENS 8 // Used by: | | | | YM2151 | | YM2612 | *SN76489
#define PATCH_VIBRATO_SENS 9 // Used by: | | | | YM2151 | | YM2612 | *SN76489
// Operator Level
#define PATCH_WAVEFORM 10 // Used by: | YM3812 | YMF262 | YM2413 | | | |
#define PATCH_LEVEL 11 // Used by: YM3526 | YM3812 | YMF262 | YM2413 | YM2151 | YM2149 | YM2612 | *SN76489
#define PATCH_LEVEL_SCALING 12 // Used by: YM3526 | YM3812 | YMF262 | YM2413 | | | |
#define PATCH_ENV_SCALING 13 // Used by: YM3526 | YM3812 | YMF262 | YM2413 | YM2151 | | YM2612 |
#define PATCH_PERCUSSIVE_ENV 14 // Used by: YM3526 | YM3812 | YMF262 | YM2413 | | | | *SN76489
#define PATCH_ATTACK 15 // Used by: YM3526 | YM3812 | YMF262 | YM2413 | YM2151 | | YM2612 | *SN76489
#define PATCH_DECAY 16 // Used by: YM3526 | YM3812 | YMF262 | YM2413 | YM2151 | | YM2612 | *SN76489
#define PATCH_SUSTAIN_LEVEL 17 // Used by: YM3526 | YM3812 | YMF262 | YM2413 | YM2151 | | YM2612 | *SN76489
#define PATCH_SUSTAIN_DECAY 18 // Used by: | | | | YM2151 | | YM2612
#define PATCH_RELEASE_RATE 19 // Used by: YM3526 | YM3812 | YMF262 | YM2413 | YM2151 | | YM2612 | *SN76489
#define PATCH_TREMOLO 20 // Used by: YM3526 | YM3812 | YMF262 | YM2413 | YM2151 | | YM2612
#define PATCH_VIBRATO 21 // Used by: YM3526 | YM3812 | YMF262 | YM2413 | | |
#define PATCH_FREQUENCY_MULT 22 // Used by: YM3526 | YM3812 | YMF262 | YM2413 | YM2151 | | YM2612 |
#define PATCH_DETUNE_FINE 23 // Used by: | | | | YM2151 | | YM2612 | *SN76489
#define PATCH_DETUNE_GROSS 24 // Used by: | | | | YM2151 | | YM2612 | *SN76489
#define PATCH_SSGENV_ENABLE 25 // Used by: | | | | | YM2149 | YM2612 | *SN76489
#define PATCH_SSGENV_WAVEFORM 26 // Used by: | | | | | | YM2612 | *SN76489
Now, to access a value inside of a PatchArr object—let’s say you had one called, “patch”—you can use these defined indexes. For example:
patch[PATCH_FEEDBACK] = 27;
To access a specific operator’s setting, you need to add on PATCH_OP_SETTINGS times the operator number that you want—assuming the operator numbers go from 0 to 3:
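For example, setting the level of the second operator (operator index 1) looks something like this:

patch[PATCH_LEVEL + PATCH_OP_SETTINGS * 1] = 27; // Level setting of operator 1 (the second operator)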
Now that we have our patch structure, we need to decide how and when to send the patch information to the YM3812. This part of the project threw me for the biggest loop. So, just in case you get my same bad idea, I’ll show you where I went off the rails.
Partitioning YM3812 Channels
First, partition the YM3812’s nine channels into sets of channels that each represent an “instrument.” Then, depending on the assigned instrument, upload the appropriate patch information. Now that everything is configured, listen to the different MIDI input commands and—depending on the MIDI channel number—direct the play note functions to the appropriate set of channels.
In this configuration, each set of channels becomes its own polyphonic instrument. But the more instruments you have, the fewer voices each instrument can support. In fact, if you had 9 instruments, they would all be monophonic. Clearly, to support 16 MIDI channels, we are going to have to do this in another way…
Dynamic Patch Reassignment
The assumption that led me to partition the channels was that uploading patch information takes too long. Thus, I assumed that it should only occur once during setup—before receiving note information. Let’s set that assumption aside for a moment and see what happens if we reassign voices on the fly.
In this new model, we associate our instrument patches with MIDI channels. When we receive a note, we choose a YM3812 channel using our normal polyphonic algorithm. Then, we can upload the associated patch information to the channel, and finally play the note.
There are several key advantages to this algorithm:
It supports different instruments for every MIDI channel. While it’s true that only nine notes can be played at once, those notes could come from any number of different instruments. By associating a patch with each MIDI channel, we can support a different instrument for each of the 16 MIDI channels.
It supports up to nine-voice polyphony: Even with 16 different instruments, each instrument can play up to 9 notes—just not all at once. We don’t have to decide up front how much polyphony each instrument gets.
We never need to know what’s on the YM3812: Because we keep overwriting the sound properties on the chip, we never really need to know what’s on it. Thus, we don’t need to track all of the register values, and that saves a good bit of RAM.
Testing Reassignment Speed
But what about the disadvantage? Can we really upload a new patch every time we play a note?
Here I played 5 notes and monitored the debug LED that flashes during the SendData command. Each horizontal division represents 1 millisecond. So, each note takes approximately 1ms to send. During my experimentation, I really tried to play all 5 notes at exactly the same time. And yet, as you can see, there is still a gap between the 4th and 5th notes. Clearly, human time just isn’t the same as computer time.
With this in mind, let’s look at the worst possible case of playing 9 notes at the same time. It takes 9ms to update every patch on the YM3812 and turn the notes on. At 120BPM, that’s about 7% of a 16th note. Math aside, if there’s a delay, I honestly can’t seem to notice it.
YM3812 Library Updates
Sending all of the patch data at once requires a fundamental rearchitecting of our YM3812 class.
Replacing Register Functions
First, we replace our Channel and Operator register functions with a single “chSendPatch” function. Yes, this change eliminates a lot of the work we’ve put into the module to date. But, it also provides us with a TON of flexibility. Whether you code the chSendPatch to interface with a YM3812 or a YM2151, you call the function the same way. You pass the same patch object, and use the same 0..127 ranges for all properties.
Let’s have a look at how the chSendPatch function works:
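The real function is essentially a list of sendData calls; here is an abbreviated sketch of its shape. The register addresses and operator offsets follow the YM3812 register map, but treat the details as an approximation of the code in the repository rather than a verbatim copy:

void YM3812::chSendPatch( uint8_t ch, PatchArr &patch ){                            // Repackage a generic patch into YM3812 register values
    sendData( 0xC0 + ch, ((patch[PATCH_FEEDBACK]  >> 4) << 1) |                     // Feedback: scale 0..127 down to 3 bits, shift into bits 1-3
                          (patch[PATCH_ALGORITHM] >> 6) );                          // Algorithm: scale down to 1 bit, OR into bit 0
    for( uint8_t op = 0; op < 2; op++ ){                                            // Two operators per channel on the YM3812
        uint8_t patch_offset = PATCH_OP_SETTINGS * op;                              // Where this operator's settings start in the patch
        uint8_t reg_offset   = (ch % 3) + 8 * (ch / 3) + 3 * op;                    // Where this operator's registers sit on the chip
        sendData( 0x40 + reg_offset, ((patch[PATCH_LEVEL_SCALING + patch_offset] >> 5) << 6) |  // Key scale level into bits 6-7
                                      (patch[PATCH_LEVEL         + patch_offset] >> 1) );       // Level scaled from 0..127 down to 0..63
        sendData( 0x60 + reg_offset, ((patch[PATCH_ATTACK + patch_offset] >> 3) << 4) |         // Attack scaled to 4 bits in the high nibble
                                      (patch[PATCH_DECAY  + patch_offset] >> 3) );              // Decay scaled to 4 bits in the low nibble
        // ...waveform (0xE0), sustain/release (0x80) and tremolo/vibrato/mult (0x20) follow the same scale, shift, OR, send pattern
    }
}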
This function takes our generic patch and repackages all of the properties into YM3812 register values. Then, it takes those register bytes and uploads them to the chip. To repackage those properties, this function:
Scales the property from the 0..127 range to the range appropriate for the setting by right shifting the value
Aligns the new value to the correct bits in the register map by left-shifting
ORs together the values that go into the same register byte
Uploads the combined byte to the YM3812 using the sendData function
If you need to brush up on how the formulas above calculate the register addresses, take a look at article #3 of the series.
Calculating the index of each setting in the patch works just like we discussed earlier in the article. We use a named index like PATCH_FEEDBACK to locate the correct setting in the patch array. And for operator settings, we add on PATCH_OP_SETTINGS multiplied by the operator number we want to edit. By cycling through the operators in a for loop, we can easily calculate a patch_offset and just add it onto the name of the setting we want to edit.
This looping structure scales up nicely for the YM2151, which has four operators. You just change the loop to go from 0..4 instead of 0..2. In fact, take a look at the same function written for the YM2151. Pretty similar, eh?
According to our new algorithm we now upload patch information to the YM3812 every time we play a note. So we need to swap our noteOn and noteOff functions with new versions called patchNoteOn and patchNoteOff. Unlike the noteOn function—which takes only a MIDI note number to play—patchNoteOn will take both a MIDI note number and a reference to a PatchArr object.
void YM3812::patchNoteOn( PatchArr &patch, uint8_t midiNote ){
last_channel = chGetNext();
channel_states[ last_channel ].pPatch = &patch; // Store pointer to the patch
channel_states[ last_channel ].midi_note = midiNote; // Store midi note associated with the channel
channel_states[ last_channel ].note_state = true; // Indicate that the note is turned on
channel_states[ last_channel ].state_changed = millis(); // save the time that the note was turned on
chPlayNote( last_channel, midiNote ); // Play the note on the correct YM3812 channel
}
This function works almost identically to the original noteOn function, but now saves a pointer to the patch data in the channel_states array. If you recall from the last article on polyphony, the channel_states array tracks the status of YM3812 channels to determine which to use for every incoming note. By saving a pointer to the patch, we not only know what data to upload to the YM3812, but also which instrument is playing on each channel. This becomes important when we go to turn the note off:
void YM3812::patchNoteOff( PatchArr &patch, uint8_t midiNote ){
for( uint8_t ch = 0; ch<num_channels; ch++ ){
if( channel_states[ch].pPatch == &patch ){
if( channel_states[ch].midi_note == midiNote ){
channel_states[ ch ].state_changed = millis(); // Save the time that the state changed
channel_states[ ch ].note_state = false; // Indicate that the note is currently off
regKeyOn( ch, 0 ); // Turn off any channels associated with the midiNote
}
}
}
}
The patchNoteOff function also works mostly like the noteOff function. Both loop through all of the channels in the channel_states array looking for a matching note that’s currently turned on. But now, in patchNoteOff, the note has to both be turned on AND associated with the same patch. Otherwise, if two instruments played the same note, then one instrument would turn them both off. We check this by validating that the patch pointer stored in the channel_states array points to the same patch that was passed to the patchNoteOff function.
Oh, this is probably a good time to mention that I updated the YM_Channel structure to include a PatchArr pointer. Remember, the channel_states array is composed of YM_Channels:
struct YM_Channel{
PatchArr *pPatch = NULL; // Pointer to the patch playing on the channel
uint8_t midi_note = 0; // The pitch of the note associated with the channel
bool note_state = false; // Whether the note is on (true) or off (false)
unsigned long state_changed; // The time that the note state changed (millis)
};
I also moved this data structure into the YMDefs.h file. This file now includes all of the functions, definitions and macros that aren’t specific to the YM3812. Structuring things this way allows us to create separate library files for each sound chip without interdependencies between them. I moved the SET_BITS and GET_BITS macros there as well.
Updating chPlayNote
In order to send the patch up to the YM3812, we need to do more than just attach it to the channel_states array; we need to call the chSendPatch function. And we need to call it at the moment when the note has been turned off and we are updating the frequency properties, just before turning the note back on. This occurs in the chPlayNote function:
void YM3812::chPlayNote( uint8_t ch, uint8_t midiNote ){ // Play a note on channel ch with pitch midiNote
regKeyOn( ch, 0 ); // Turn off the channel if it is on
if( midiNote > 114 ) return; // Note is out of range, so return
chSendPatch( ch, *channel_states[ch].pPatch ); // Send the patch to the YM3812
if( midiNote < 19 ){ // Note is at the bottom of range
regFrqBlock( ch, 0 ); // So use block zero
regFrqFnum( ch, FRQ_SCALE[midiNote] ); // and pull the F-Num value from the FRQ_SCALE table
} else { // If in the normal range
regFrqBlock( ch, (midiNote - 19) / 12 ); // Increment block every 12 notes from zero to 7
regFrqFnum( ch, FRQ_SCALE[((midiNote - 19) % 12) + 19] ); // Increment F-Num from 19 through 30 over and over
}
regKeyOn( ch, 1 ); // Turn the channel back on
}
I only added one line to this function that calls chSendPatch after turning off the note. Keep in mind the chSendPatch function expects the patch to be passed by reference, not as a pointer to the patch. So we need to dereference pPatch using an asterisk. Now this function will turn off the note, upload the patch, set the frequency and then turn it back on.
New Functions
In addition to updating the noteOn and noteOff functions, we can add some new ones as well. The patchAllOff function looks for any channels playing a specific patch and then turns them off:
void YM3812::patchAllOff( PatchArr &patch ){
for( uint8_t ch = 0; ch<num_channels; ch++ ){
if( channel_states[ch].pPatch == &patch ){
channel_states[ ch ].state_changed = millis(); // Save the time that the state changed
channel_states[ ch ].note_state = false; // Indicate that the note is currently off
regKeyOn( ch, 0 ); // Turn off any channels associated with the patch
}
}
}
The patchUpdate function takes a patch and then updates the settings of any channels playing that patch. This becomes very useful when we start modifying properties of the patch. Using this function, you can change a patch property generically (using the 0-127 range) and then re-upload the patch to any relevant channels all at once.
void YM3812::patchUpdate( PatchArr &patch ){ // Update the patch data of any active channels associated with the patch
for( byte ch = 0; ch < num_channels; ch++ ){ // Loop through each channel
if( channel_states[ch].pPatch == &patch ) chSendPatch( ch, patch ); // If the channel uses the patch, update the patch data on the chip
}
}
The New YM3812 Library
Since we have edited so much, let’s clean up our revised YM3812 library structure:
If you draw an imaginary line under the new patch functions, you can see which functions require an understanding of how the YM3812 works and which ones are fully generic. With this new structure, we can not only turn a note on and off, but adjust the sound settings as well. We only need to know the structure of a generic patch, and we are good to go.
It’s also worth noting that, of the functions below the line, most of the details on how the sound chip works are confined to the chSendPatch function. By altering this function, we can quickly rebuild the library for any number of sound processors.
Creating Instruments
We sure have talked a lot about patches. Patch structures… Patch algorithms… Generic patches… but um… how do we make the patches themselves? Well, this turns out to be a pretty well solved problem. The YM3812 has been around for a long time, and as such there are various patch libraries available. Many of these libraries even follow the General MIDI standard. General MIDI provides a naming convention for patches while leaving the nature of the sound up to manufacturers. The naming convention defines 128 different instrument names broken into groups of eight:
Editing Sound Patches
For my purposes, I found a library containing the old DOOM sound patches, but if you are interested in editing them or creating your own, I highly recommend the OPL3 Bank Editor project:
This open source project lets you create and preview 2-operator, 4-operator and even dual 2-operator sounds. You can build up a library of these patches and then export them into a variety of formats.
Instruments.h
In order to use these sounds in our module, we need to import all of the patch information. I took the easy way by creating an instruments.h file that compiles directly into the code. To create this file, start by exporting a patch library from the OPL3 Bank Editor into a DMX .op2 format file. Then you can use a NodeJS program I wrote to generate the instruments.h file. You can find this converter utility in the InstrumentConvert folder on the GitHub for this article. To use it, just put your GENMIDI.op2 file in the same directory and type:
node op2.js GENMIDI.op2 instruments.h
This will create an instruments.h file that you can copy into your Arduino folder. If you peek into this file, you will find a ton of code. But, it really boils down to a couple of things:
patches[] – Contains all 128 instrument (and 47 drum) patch definitions in our generic patch format
patchNames[] – Contains character arrays for the names of every patch
NUM_MELODIC – Defines the number of melodic patches (128)
NUM_DRUMS – Defines the number of drum patches (47)
PROGMEM
If you look closely at the instruments.h file, you will notice the PROGMEM keyword used in the definition of each patch array and every patch label. Here is an excerpt:
const unsigned char ym_patch_0[78] PROGMEM = {0x22,0x00,0x00,0x00,0x00,0x00,0x60,0x00,0x00,0x00,0x40,0x34,0x40,0x40,0x40,0x70,0x08,0x28,0x00,0x18,0x00,0x00,0x10,0x00,0x00,0x00,0x00,0x20,0x02,0x00,0x40,0x40,0x78,0x08,0x78,0x00,0x20,0x00,0x00,0x08,0x00,0x00,0x00,0x00,0x40,0x48,0x00,0x40,0x40,0x78,0x00,0x78,0x00,0x18,0x00,0x00,0x10,0x00,0x00,0x00,0x00,0x00,0x14,0x00,0x40,0x40,0x78,0x08,0x78,0x00,0x20,0x00,0x00,0x08,0x00,0x00,0x00,0x00}; // Acoustic Grand Piano
const unsigned char ym_patch_1[78] PROGMEM = {0x22,0x00,0x00,0x00,0x00,0x00,0x60,0x00,0x00,0x00,0x40,0x24,0x20,0x40,0x40,0x78,0x00,0x78,0x00,0x18,0x00,0x00,0x08,0x00,0x00,0x00,0x00,0x20,0x00,0x00,0x40,0x40,0x78,0x08,0x78,0x00,0x20,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x78,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x78,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00}; // Bright Acoustic Piano
const unsigned char ym_patch_2[78] PROGMEM = {0x23,0x00,0x00,0x00,0x00,0x00,0x40,0x00,0x00,0x00,0x60,0x34,0x00,0x40,0x40,0x70,0x08,0x58,0x00,0x18,0x00,0x00,0x08,0x00,0x00,0x00,0x00,0x00,0x0E,0x00,0x40,0x40,0x78,0x08,0x78,0x00,0x20,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x04,0x00,0x40,0x40,0x78,0x08,0x58,0x00,0x18,0x00,0x00,0x00,0x10,0x00,0x00,0x00,0x40,0x0E,0x00,0x40,0x00,0x78,0x10,0x08,0x00,0x20,0x00,0x00,0x08,0x00,0x00,0x00,0x00}; // Electric Grand Piano
const unsigned char ym_patch_3[78] PROGMEM = {0x23,0x00,0x00,0x00,0x00,0x00,0x60,0x00,0x00,0x00,0x40,0x00,0x40,0x40,0x40,0x78,0x50,0x38,0x00,0x18,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x20,0x08,0x00,0x40,0x00,0x68,0x08,0x78,0x00,0x20,0x00,0x00,0x10,0x00,0x00,0x00,0x00,0x40,0x00,0x40,0x40,0x40,0x78,0x50,0x38,0x00,0x18,0x00,0x00,0x00,0x50,0x00,0x00,0x00,0x20,0x08,0x00,0x40,0x00,0x68,0x08,0x78,0x00,0x20,0x00,0x00,0x18,0x00,0x00,0x00,0x00}; // Honky-tonk Piano
The AVR microcontrollers (and well, a whole lot of microcontrollers) use a Harvard Architecture that splits memory up between Program Memory and RAM. Code resides in Program Memory, while variables (and things that change) reside in RAM.
Microcontrollers typically have far more Program Memory, so it’s useful to store large data constructs in PROGMEM and only load them into RAM when we need to. Our patches array consumes ~13.5k and the patchNames array consumes another 2k. Keeping this in RAM would consume nearly all of it, so instead I’ve put it in PROGMEM. Just keep this in mind, because we will need to write a function to move things from PROGMEM into RAM.
Using the Library in our .ino file
With our fancy new YM3812 library in hand, it’s time to use it in our main program! Let’s go over what we want this program to do:
Define and manage instruments for every MIDI channel
Manage MIDI events and play each note with the correct patch
Listen for MIDI Program Change events that assign patches to channels
Managing MIDI Instruments
Conceptually, we want to assign a different patch to each MIDI channel. Those patches come from the patches array defined in the instruments.h file. So, we need to add a couple of lines of code to track this in our .ino file:
#define MAX_INSTRUMENTS 16 // Total MIDI instruments to support (one per midi channel)
uint8_t inst_patch_index[ MAX_INSTRUMENTS ]; // Contains index of the patch used for each midi instrument / channel
The inst_patch_index array contains the indexes of the active patches in our patches array. The array contains 16 elements, one for each MIDI channel. Additionally, we need a data structure in RAM to store the PROGMEM patch information. I called this, inst_patch_data:
PatchArr inst_patch_data[ MAX_INSTRUMENTS ]; // Contains one patch per instrument
Finally, we need a function that loads PROGMEM patch data from the patches array into this inst_patch_data structure:
void loadPatchFromProgMem( byte instIndex, byte patchIndex ){ // Load patch data from program memory into inst_patch_data array
for( byte i=0; i<PATCH_SIZE; i++ ){ // Loop through instrument data
inst_patch_data[instIndex][i] = pgm_read_byte_near( patches[patchIndex]+i ); // Copy each byte into inst_patch_data
}
}
This function takes an instrument index—a.k.a. the MIDI channel to associate with the patch—and the index of the patch from the patches array.
MIDI Event Handlers
In the articles so far, our program only needed two event handler functions—one for note on and one for note off. We still need these two functions, but now we need to adjust them to send patch information:
void handleNoteOn( byte channel, byte midiNote, byte velocity ){ // Handle MIDI Note On Events
uint8_t ch = channel - 1; // Convert to 0-indexed from MIDI's 1-indexed channel nonsense
PROC_YM3812.patchNoteOn( inst_patch_data[ch], midiNote ); // Pass the patch information for the channel and note to the YM3812
}
void handleNoteOff( byte channel, byte midiNote, byte velocity ){ // Handle MIDI Note Off Events
uint8_t ch = channel - 1; // Convert to 0-indexed from MIDI's 1-indexed channel nonsense
PROC_YM3812.patchNoteOff( inst_patch_data[ch], midiNote ); // Pass the patch information for the channel and note to the YM3812
}
Because we have all of the patch data contained in the inst_patch_data array, all we need to do is pass the patch data associated with the MIDI channel. Note, MIDI channels run from 1..16, not 0..15. So, we have to subtract one in order for it to be zero-indexed.
Next, to allow the user to swap patches via MIDI commands, we need to introduce a new event handler called, handleProgramChange.
void handleProgramChange( byte channel, byte patchIndex ){
uint8_t ch = channel-1; // Convert to 0-indexed from MIDI's 1-indexed channel nonsense
inst_patch_index[ch] = patchIndex; // Store the patch index
loadPatchFromProgMem( ch, inst_patch_index[ch] ); // Load the patch from progmem into regular memory
}
This event runs every time a MIDI device indicates that it wants to select a different patch. To do that, we first update the inst_patch_index array entry associated with the MIDI channel to point to the new patch index. Then, we run the loadPatchFromProgMem function to copy the patch data from the patches array into RAM. From then on, any note played on this MIDI channel will use this new patch.
The Setup Function
In all of the previous articles, the setup function loaded all of the sound settings individually into the YM3812. Now, we have patches to do that, and don’t need any of that noise. So if you compare our new setup function to the old one, it looks a whole lot shorter:
void setup(void) {
PROC_YM3812.reset();
// Initialize Patches
for( byte i=0; i<MAX_INSTRUMENTS; i++ ){ // Load patch data from the patches array into the
inst_patch_index[i] = i; // By default, use a different patch for each midi channel
loadPatchFromProgMem( i, inst_patch_index[i] ); // instruments array (inst_patch_data)
}
//MIDI Setup
MIDI.setHandleNoteOn( handleNoteOn ); // Setup Note-on Handler function
MIDI.setHandleNoteOff( handleNoteOff ); // Setup Note-off Handler function
MIDI.setHandleProgramChange( handleProgramChange ); // Setup Program Change Handler function
MIDI.begin(); // Start listening for incoming MIDI
}
That said, we DO need to load an initial set of default patches for each of the MIDI channels from PROGMEM into RAM. Being lazy, I just used a for loop to load them sequentially, so MIDI channel 1 aligns to patch 0, channel 2 to patch 1, and so on.
Finally, we need to call the setHandleProgramChange function from the MIDI library to connect our handleProgramChange function… and that’s it!
Conclusion?
Well, that was a lot of words. I hope they added at least a little more clarity than confusion. But if not, maybe try looking at the code all together on GitHub.
This was a pretty pivotal moment during the development of this module. I can still remember ruthlessly deleting broad swaths of code hoping for the best. And thankfully, everything kind of clicked into place. By abstracting all of the settings into generic patches, we now have the foundation for multi-sound chip modules. And, by loading patches with every new note, we can even implement percussion mode… which will be our topic of discussion in the next article.
If you’ve followed the series so far, welcome back! In the last article, we added MIDI control to our YM3812 module. This allows us to play music by sending signals from a computer, keyboard or any other midi source. Still, you can only play one note at a time, and there is a pretty annoying bug. If you hold a note down, play another note, and then release the first note, both notes turn off. This makes playing legato scales sound terrible. Here, have a listen:
Fixing the Legato Issue
As it turns out, fixing this legato issue brings us one step closer to polyphony, so let’s do it!
The problem stems from our handleNote functions. The handleNoteOn function looks for any incoming note-on commands from the MIDI input. It then updates channel 0 of the YM3812 to play that note—even if another note is playing. But then, when it receives ANY note-off command, it turns channel zero off. To fix this, we need to add a noteOff function that only turns off the channel if the released key matches the last key we turned on. This way, releasing a key other than the one you last played won’t turn off the current note.
To start, let’s add a variable—channel_note—that keeps track of the last played note:
uint8_t channel_note = 0;
Then, in the play note function, we can update this variable to the current note being played:
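A sketch of that change inside the existing play-note function (the block/F-Num/key-on code that was already there stays the same):

void YM3812::chPlayNote( uint8_t ch, uint8_t midiNote ){
    channel_note = midiNote;   // Remember the last note we turned on
    // ...the existing frequency and key-on code continues here unchanged
}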
We also need to add a new function called noteOff to our library. This function turns off the keyOn register for the channel by setting it to false. But first, it checks that the MIDI note to turn off is the same as the last one played.
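A minimal sketch of that function; at this point in the series we are still only using channel 0:

void YM3812::noteOff( uint8_t midiNote ){
    if( channel_note == midiNote ) regKeyOn( 0, 0 );  // Only release the channel if this key matches the last note we played
}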
Now that we’ve fixed the legato issue, let’s find a way to use more than one channel on the YM3812. For our first attempt, we will simply select a new channel with each new note. This will effectively “rotate” through each YM3812 channel. While the YM3812 can play up to nine notes simultaneously, for demonstration purposes, let’s pretend there are only 3. Don’t worry, we will scale back up at the end.
Take a look at a quick demonstration of the algorithm:
With this algorithm, we rotate through each channel every time we play a new note. For example, to play a C chord, we play the C on channel 1, the E on channel 2, and the G on channel 3. Playing a fourth note just rotates back to the beginning and overwrites the first channel.
Implementing the Rotate Algorithm
We will implement this algorithm into a new noteOn function that only needs the note you want to play. This function will internally manage all of the logic that decides which channel to use. As such, we can update the handleNoteOn function to call this one instead of chPlayNote:
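Roughly like so, assuming the library instance is called PROC_YM3812 as it is in later articles:

void handleNoteOn( byte channel, byte midiNote, byte velocity ){ // Handle MIDI Note On Events
    PROC_YM3812.noteOn( midiNote );                              // Let the library decide which YM3812 channel to use
}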
We also need to replace the channel_note integer with an array of integers. This way we can track the notes associated with every channel. I called it channel_notes 🤯. Again, we can assume there are only three channels for demonstration purposes. And, we can keep track of that assumption in a num_channels variable.
Finally, we need a variable to track which channel we played as we rotate through them. We can call it last_channel. With these variables defined, we can write the implementations for our note functions:
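// Sketch of the variables (in the .h file):
uint8_t num_channels     = 3;                    // pretend the chip only has three channels for now
uint8_t channel_notes[3] = { 0, 0, 0 };          // the MIDI note assigned to each channel
uint8_t last_channel     = 0;                    // the channel we used most recently

// And a sketch of the rotate versions of noteOn / noteOff:
void YM3812::noteOn( uint8_t midiNote ){
  last_channel = (last_channel + 1) % num_channels;    // rotate to the next channel
  channel_notes[last_channel] = midiNote;              // remember which note lives there
  chPlayNote( last_channel, midiNote );                // and play it
}

void YM3812::noteOff( uint8_t midiNote ){
  for( uint8_t ch = 0; ch < num_channels; ch++ ){                // check every channel
    if( channel_notes[ch] == midiNote ) regKeyOn( ch, false );   // release any channel holding this note
  }
}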
The noteOn function increments the last_channel variable. If this index hits the maximum (as defined by num_channels) then the modulus operator just rotates it back to zero. The next line of code saves the MIDI note number into the channel_notes array at the new index. And finally, it uses the chPlayNote function to turn the channel on.
The noteOff function compares each of the values stored in the channel_notes array to midiNote. If the current channel matches midiNote, then it turns that channel off using the regKeyOn function. With these changes in place, let’s see how it works:
The Trouble With Rotation
Well, that definitely created polyphony. But it also introduced some unexpected behavior. As you can see in the video above, if you hold two notes and press a third note repeatedly, then the held notes eventually stop playing. Let’s take a look at the algorithm to understand why this happens.
Here, we play the three notes of our C chord: C E G. But then, we let go of the E. Now when we add a new note, it still overwrites the lower C even though there is an unused channel. This happens because we are just blindly rotating through channels. Clearly this algorithm has some shortfalls. Let’s see if we can make it better.
Smart Polyphonic Algorithm
Just like before, we play the C, E, G of the C chord. But this time we also keep track of when each key turns on. Then when we let go of the E, we turn it off and track when we turned it off. Finally, when we turn on the high C, we search for the track that has been off the longest and overwrite it.
Why the longest? Well, when you press a key, you trigger the envelope for that note. You get the attack and decay, and then the note stays on until you let go. At that point, the note starts its release cycle. To keep things sounding natural, we want the release cycle to go on as long as possible. To allow for this, we always overwrite the note that has been off the longest.
What happens when all of the notes are still turned on? Well, no big surprise here, you overwrite the note that has been on the longest. OK! We have the rules. Let’s figure out how to implement this.
Implementing Smart Polyphony
In the rotation algorithm, we only kept track of the note assigned to each channel. For this algorithm, we need to track a bit more information. To do that, we are going to use a “struct” data structure. Structures are just collections of variables. Our structure—which we will call YM_Channel—contains three variables:
struct YM_Channel{
uint8_t midi_note = 0; // The pitch of the note
bool note_state = false; // Whether the note is on (true) or off (false)
unsigned long state_changed; // The time that the note state changed (millis)
};
Because structures work like other data types, we can adjust the channel_notes array to be an array of YM_Channel objects instead of integers. We’d better rename it to something more generic too. How about channel_states.
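YM_Channel channel_states[3];    // replaces channel_notes: one state record per channel (still three for now)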
This might be the most complex function yet! But we can break it down into three parts.
In the first part, we create a set of variables. These variables track the channel that has been on the longest and off the longest. The variables on_channel and off_channel store the index of the channel, while oldest_on_time and oldest_off_time store the time that their state changed. Note that we set on_channel and off_channel to 0xFF by default. Obviously there isn’t a channel 255 on the YM3812, so this can’t possibly reference a valid channel. But if we get through the second step without changing the variable, then its value will remain at 0xFF. And that will be helpful in step three. (Just go with it…)
In the second step, we loop through each of the channels. If the note_state is on, we check if the state_changed time stamp for that channel is less than oldest_on_time. If it is, then we set on_channel to be this index (ch) and oldest_on_time to be this channel’s state_changed time. In this way, by the time we get to the end of the loop, oldest_on_time will contain the state_changed time of the channel that has been playing the longest. And on_channel will point to that channel’s index. If note_state is false, then we do the same thing, but track the oldest_off_time and off_channel instead.
In step three, we decide whether to return on_channel or off_channel. If we made it through step two and off_channel equals 0xFF, then all of the channels are in use. But if off_channel is less than 0xFF then at least one channel must currently be turned off. So, we return off_channel—which contains the index of the note that’s been off the longest. Otherwise, if no notes are currently off, then we return on_channel—which contains the note that’s been on the longest.
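Putting those three steps together, a sketch of chGetNext might look something like this (the member names come from the text above; the details are my reconstruction, not the exact library code):
uint8_t YM3812::chGetNext(){
  uint8_t       on_channel      = 0xFF;            // 0xFF means "nothing found yet"
  uint8_t       off_channel     = 0xFF;
  unsigned long oldest_on_time  = 0xFFFFFFFF;      // start impossibly far in the future
  unsigned long oldest_off_time = 0xFFFFFFFF;

  for( uint8_t ch = 0; ch < num_channels; ch++ ){                // Step 2: scan every channel
    if( channel_states[ch].note_state ){                         // this note is still on
      if( channel_states[ch].state_changed < oldest_on_time ){
        oldest_on_time = channel_states[ch].state_changed;       // remember the longest-held note
        on_channel     = ch;
      }
    } else {                                                     // this note is off
      if( channel_states[ch].state_changed < oldest_off_time ){
        oldest_off_time = channel_states[ch].state_changed;      // remember the longest-released note
        off_channel     = ch;
      }
    }
  }

  if( off_channel != 0xFF ) return off_channel;                  // Step 3: prefer a channel that's already off
  return on_channel;                                             // otherwise steal the oldest playing note
}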
Writing that out felt like writing a Dr. Seuss poem 🤯—hopefully it wasn’t too confusing.
NoteOn/Off Function Tweaks
While we definitely buried most of the complexity into the chGetNext function, there are still a few other changes to make.
In the noteOn function, we use chGetNext to update last_channel (instead of rotating through channel numbers). But then, instead of just saving the MIDI note number, we update all three variables associated with the channel. We set midi_note to the currently playing midi note. We set note_state to true (since it is now playing). Finally, we set state_changed to the current time (in milliseconds).
We also need to make similar updates to the noteOff function, saving the state_change time and setting note_state to false.
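As a sketch, the two functions end up looking something like this:
void YM3812::noteOn( uint8_t midiNote ){
  last_channel = chGetNext();                                    // ask the smart algorithm for a channel
  channel_states[last_channel].midi_note     = midiNote;         // remember the pitch
  channel_states[last_channel].note_state    = true;             // the note is now on
  channel_states[last_channel].state_changed = millis();         // and it changed state right now
  chPlayNote( last_channel, midiNote );
}

void YM3812::noteOff( uint8_t midiNote ){
  for( uint8_t ch = 0; ch < num_channels; ch++ ){
    if( channel_states[ch].note_state && (channel_states[ch].midi_note == midiNote) ){
      channel_states[ch].note_state    = false;                  // the note is now off
      channel_states[ch].state_changed = millis();               // remember when we released it
      regKeyOn( ch, false );                                     // start the release cycle
    }
  }
}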
That should do it! Give it a compile and see if it works. Here is a demo of how things turned out on my end:
Going FULL Polyphonic
Now that we have our “smart” algorithm working, let’s update the number of channels. We want to take full advantage of the YM3812’s 9-voice polyphony! All you need to do is update these lines of code in the .h file to point to the YM3812_NUM_CHANNELS constant:
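// Sketch of the updated lines:
uint8_t    num_channels = YM3812_NUM_CHANNELS;        // use all nine voices
YM_Channel channel_states[YM3812_NUM_CHANNELS];       // one state record per voice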
And with that, you should have a note for (almost) every finger!
Wrap Up & Links
Hopefully the code snippets above provided enough clues to get your program working. But if not, you can find the full files on GitHub. I included a separate folder for the updated monosynth algorithm, the rotate algorithm and the “smart” algorithm:
In the next article, we will move on from MIDI and tackle sound patches. Just like polyphony, patches will create another substantial step forward. But to make that leap forward, we are going to take a big step back too.
If you run into any trouble, feel free to leave a note in the comments below!
If you followed along through the last three posts, then hopefully you have a Yamaha YM3812 OPL Sound Chip on a breadboard playing a lovely F Major 7th chord over and over and over 😅. Or maybe, the annoying repetition drove you to experiment and find new patterns. It’s a great proof of concept that ensures the hardware and software work, but not exactly an instrument. Today, we are going to fix that by implementing MIDI. This way, you can use any external musical input device or even your computer to play music—not just sound. And that, to me, makes it an instrument.
The Hardware
Building hardware that interfaces MIDI with a microcontroller requires only a handful of components and is actually pretty simple. Fundamentally, the schematic just passes a serial TTL logic signal from the Tx pin of one microcontroller to the Rx pin of another. The real magic is that this circuit fully isolates the sender from the receiver. They don’t even need a common ground! Let’s see how.
Input & Output Combined
If you connect these two circuits through a MIDI cable, you get the schematic above. An LED inside the 6N138 optocoupler drives a light-sensitive transistor. When the LED turns on, the transistor connects pins 5 and 6 of the optocoupler and allows electricity to pass through. When the LED turns off, that connection breaks. The 220 ohm pull-up resistor ensures that pin 6 stays high whenever the LED is off, and low when the LED turns on.
You can think of the other half of the circuit as an LED on an extension cord. 5v flows through two 220 ohm resistors before going through the LED inside of the 6N138. It then flows back through another 220 ohm resistor and into the Tx pin of the microcontroller. When the Tx pin goes high, both sides of the LED are at 5v and no current can pass. But when Tx goes low, the current sinks into the microcontroller and the LED turns on. Turning the LED on closes the connection between pins 5 and 6 on the 6N138, which shorts the Rx line to ground. This way when Tx goes low, Rx goes low—and vice versa.
Also take note that the ground connection that passes into the MIDI cable does not connect to the receiving circuit. This shields the cable from interference, but also keeps both sides fully isolated.
On another note, the placement of the resistors ensures that foul play by either device won’t hurt the microcontroller or the optocoupler. The extra diode ensures that even connecting these pins backwards won’t cause any damage to the 6n138.
MIDI TRS Jacks
If you are familiar with MIDI devices, then you may be wondering why I used a TRS (tip/ring/sleeve) jack instead of the traditional 5-pin DIN connector. And well, I can think of two good reasons: size and cost. Ultimately, I plan to create a set of patchable EuroRack midi devices that allow patching of MIDI signals just like CV or Audio. In this context, 5-pin DIN connectors (which can cost multiple dollars each) and their associated MIDI cables become prohibitively expensive—especially in comparison to TRS jacks and stereo patch cables. Not to mention that a patchwork of MIDI cables would become far too bulky for a EuroRack setup.
Of course, I am far from the first person to decide this, and DIN to TRS converters abound. So I purchased the kind used by big-name brands like Korg and Akai. The circuits above are based on the pinout of these adapters. Later, I added a BeatStep Pro to the mix.
The BeatStep Pro uses TRS jacks straight out of the box, so I happily connected my device with a stereo patch cable… and it didn’t work. This is because Arturia inverts the connection of the tip and ring. Of course you could just as easily say that Korg inverts the tip and the ring connections, because it all depends on your frame of reference. There is no fully standard way to convert between a DIN MIDI connector and a TRS jack. And now I had one Korg converter and two Arturia converters… what to do…
TRS Input Switchable Module
Well, this is easy… Just add a switch and then you can swap from one standard to the other! Besides, like I said earlier, that diode prevents anything bad from happening if you connect things backwards. After building this circuit dozens of times I finally just built a small module that plugs right into a breadboard. This makes things way more compact and simple:
You can find all of the breakout board schematics and PCB files on GitHub in the Article 4 folder. I even included a zip file with a panelized layout that you can upload to your favorite PCB manufacturer and print out for dirt cheap. I think I got 20 copies for $2 USD through JLCPCB—though shipping probably cost 10x that.
Breadboard Schematic
Anyway, however you build the circuit out, just hook up the Rx pin of the MIDI input to the Rx pin on the AVR128DA28. Now, we should be good to get to the software. Here is the full schematic for reference:
Frequency Nonsense
Before getting into the code that plays a MIDI note, we first need to understand how the YM3812 encodes frequencies. Looking at the register options, two settings control the pitch of a channel: FNum and Block. Note: F-Number splits across two 8-bit register bytes to form a 10-bit number. Block, on the other hand, uses only a 3-bit number.
According to the data sheet, the note frequency (fmus) connects to the F-Number and Block through the formula:
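fmus = (F-Num * fsam) / 2^(20 - Block)
Rearranged to solve for the value we actually program into the chip, that becomes F-Num = fmus * 2^(20 - Block) / fsam. (The data sheet spreads the powers of two around a little differently, but it simplifies to this.)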
Note: “fsam” (or sample frequency) equals the master clock frequency (3.6 MHz) divided by 72.
Calculating F-Numbers
Now you may be asking yourself, why do we need more than one variable to select a frequency? To figure that out, let’s try calculating the F-Num for every MIDI note’s frequency and for every block. To do this, I created a frequency number calculator. Feel free to open that up in another window and play along!
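If you’d rather compute the values in code than in the spreadsheet, the math boils down to something like this sketch (using the 3.6 MHz clock from the data sheet; anything over 1023 won’t fit in the 10-bit register):
#include <math.h>

// Returns the F-Number for a given MIDI note and block value
uint32_t calcFNum( uint8_t midiNote, uint8_t block ){
  double fsam = 3600000.0 / 72.0;                            // master clock divided by 72
  double freq = 440.0 * pow( 2.0, (midiNote - 69) / 12.0 );  // MIDI note number to frequency in Hz
  return (uint32_t)round( freq * pow( 2.0, 20 - block ) / fsam );
}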
A couple of interesting insights become apparent when we do this.
Depending on the block value used, some of the F-Numbers (shown in grey) end up being too large to represent using a 10-bit number. 10-bit numbers max out at 1023.
Even with the maximum, you can still represent some frequencies with more than one block value / F-Num combination.
Looking at the highest possible values in each column, they seem to follow a pattern: 970, 916, 864, etc. And comparing each instance of those numbers to the note name on the left, those numbers appear to be an octave apart. For example, 970 in block 3 represents F#4 where 970 in block 2 represents F#3. This seems to indicate that blocks are one octave apart.
Now the question remains: knowing we can represent a frequency with multiple block/f-num combinations, which block should we use? To figure that out, let’s use our newly calculated F-Numbers and block values in reverse to calculate the original note frequency.
Rounded Frequencies
Now things are getting interesting. Some of the newly calculated frequencies are way off—especially for higher block numbers. Because F-Numbers have no decimal component, they will always be somewhat inaccurate. Moreover, the lower the F-Number, the more inaccurate the frequency becomes. Take a look at the bottom of the table:
Yikes, that’s a lot of red! Even block 0 does a mediocre job of representing the MIDI note. Still, A0, which is the lowest note on the piano, is MIDI note 21… so I suppose that’s not THAT bad… Everything below that is going to be difficult to hear anyway.
Let’s take this further and come up with a few general rules for choosing blocks and F-numbers. Namely:
We always want to use the lowest possible block value and thus the highest—and most in-tune—F-numbers
The note frequencies follow the same pattern across each block separated by 12 notes (one octave)
Building an Algorithm
With this in mind, I exported the top 31 F-Numbers into an array and lined them up with the midi note scale. Also, I recalculated the F-Numbers for a 3.57954 MHz crystal (instead of 3.6 MHz). You can do this by adjusting the master clock frequency in the calculator tool.
Now, with our array lined up based on block number, you can start to see the pattern. The first 19 MIDI notes (0 through 18) use block zero—it’s the only block that can reach those frequencies. Also, for those first notes, the index of the F-Number in our array aligns to the MIDI note number. So, MIDI note 0 aligns with frequency array index 0, and so forth.
Then, beginning at midi note 19, the block number starts at 0 and, every 12 notes, increments by 1. This way, when we get to midi note 31, we start using block 1. And when we get to midi note 43, we use block 2, etc. Because block is an integer, you can represent this behavior by the formula:
Block = (MidiNote - 19) / 12
To select the appropriate index from our F-Number array, we need to rotate through the top 12 values. We start at index 19, count to 30, and repeat from 19 again. You can represent this using the following formula:
FNumIndex = ((MidiNote - 19) % 12) + 19
Both of these formulas work great until we hit midi note 115. At that point, we’d need a block value of 8—which is more than a 3-bit number can represent. Honestly though… that’s REALLY high so, I think we will be fine.
The Software
OK, enough yapping about theory. Let’s implement this into our YM3812 library.
Play Note Function
First up, we need to add the frequency array into our .h file:
As well as a definition for the play note function:
void chPlayNote( uint8_t ch, uint8_t midiNote );
Here the function takes a YM3812 channel (ch) to play the note on as well as the midi note pitch to play (midiNote).
void YM3812::chPlayNote( uint8_t ch, uint8_t midiNote ){
regKeyOn( ch, 0 ); // Turn off the channel if it is on
if( midiNote > 114 ) return; // Note is out of range, so return
if( midiNote < 19 ){ // Note is at the bottom of range
regFrqBlock( ch, 0 ); // So use block zero
regFrqFnum( ch, FRQ_SCALE[midiNote] ); // and pull the value straight from the FRQ_SCALE array
} else { // If in the normal range
regFrqBlock( ch, (midiNote - 19) / 12 ); // Increment block every 12 notes
regFrqFnum( ch, FRQ_SCALE[((midiNote - 19) % 12) + 19] ); // Increment F-Num
}
regKeyOn( ch, 1 ); // Turn the channel back on
}
The algorithm of the function first turns off the channel we want to update and then checks to see if the midi note resides in the range of valid YM3812 pitches. If not, the function quits.
For valid notes, under midi note 19, we follow our algorithm and use block zero with the index of the F-Number associated directly with the midi note. For midi notes 19 and over, we use the formulas for block and F-Number index that we determined earlier.
Finally, after setting the frequency, we just turn on the channel to make it start playing the note.
Installing the MIDI Library
Now that we have a way to play notes using a midi note number, we need to update our .ino file to listen for midi events and then trigger the play note function.
First things first, we need to install the MIDI Library by Francois Best. You can find it using the Arduino IDE’s library manager:
If you are having trouble finding the library, select “Communication” in the Topic dropdown. Apparently “midi” appears in the middle of lots of words, so you will likely get a ton of random results.
Now that we have the library installed, include it at the top of the .ino file:
#include <MIDI.h>
Next, we need to create an instance of the MIDI object and tell it to listen to the correct Serial port (Serial2 in our case).
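With Francois Best’s library, that’s a one-liner using its MIDI_CREATE_INSTANCE macro (the exact serial class name can vary a little depending on the board core you use):
MIDI_CREATE_INSTANCE( HardwareSerial, Serial2, MIDI );   // a "MIDI" object that reads and writes on Serial2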
From there, we need two callback functions: handleNoteOn and handleNoteOff. Both of these functions receive the same arguments:
channel: The MIDI channel of the incoming note
midiNote: The pitch number of the note
velocity: The volume of the note (for touch sensitivity)
In the handleNoteOn function, we simply use our play note function and pass the midi note number. For now, let’s keep this a mono-synth and play everything on channel zero of the YM3812.
In the handleNoteOff function, we (rather unsophisticatedly) tell channel zero to stop playing. There are a few issues with this, but we will address those when we make this thing polyphonic.
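Put together, the two handlers are only a few lines each (a sketch; the names match what we register below):
void handleNoteOn( byte channel, byte midiNote, byte velocity ){
  PROC_YM3812.chPlayNote( 0, midiNote );     // mono synth for now: everything plays on channel zero
}

void handleNoteOff( byte channel, byte midiNote, byte velocity ){
  PROC_YM3812.regKeyOn( 0, false );          // unsophisticated: just turn channel zero off
}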
For these two functions to do anything, we need to register them with the MIDI Library. We do this by adding a few lines to the setup() function:
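MIDI.setHandleNoteOn( handleNoteOn );        // run handleNoteOn whenever a note-on event arrives
MIDI.setHandleNoteOff( handleNoteOff );      // run handleNoteOff whenever a note-off event arrives
MIDI.begin( MIDI_CHANNEL_OMNI );             // start listening (OMNI listens on every channel; read(0) below does the filtering anyway)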
setHandleNoteOn and setHandleNoteOff connect the functions we wrote above to the MIDI object. This way the MIDI object can run our function whenever it detects the associated event.
The begin function kicks things into gear and tells the MIDI library to start listening.
Listening for Events
Now that we have things set up, we need to regularly prompt the MIDI object to check for new information. We do this by modifying our loop function. If you still have all of the code from article 3, you can delete all of that and update the function to look like this:
void loop() {
while( MIDI.read(0) ){}
}
The read function checks for any events related to the MIDI channel passed to it. If you pass a zero (as we did above) then it reacts to ALL incoming MIDI channels. When the read function detects an event, it initiates the associated handler function to manage that event and then returns true. If it does not detect an event it returns false. This way, by calling the function in a while loop, we ensure that we manage all captured events before moving on.
And with that, compile this puppy and send it up to the micro. With any luck, you should be able to control your YM3812 with an external MIDI keyboard, computer or really any midi device!
Some Things to Try
Once you have things up and running, try playing a note and then another note and then quickly release the first one. The issue becomes especially apparent if you set the properties of the voice to be more organ like. Take a look at the note-off function and see if there is something we can tweak to make things a bit more reliable.
A Few Final Thoughts
Having the ability to receive a midi signal and then play it on our YM3812 represents kind of a milestone. We now have something that functions as a playable instrument—even if that instrument has rather limited capabilities. Still, this was a necessary first step that lays a solid foundation for what’s to come. And in the next article, we will considerably enhance the capabilities of our device through polyphony. I guarantee it will become far more playable.
In the last article, we built a simple circuit to get our YM3812 up and running on a breadboard. To do that, we used code from the GitHub repo and just kind of glossed over how it works. In this article, let’s delve into the sea of complexity that is C++ programming. Obligatory disclaimer: C++ is an incredibly flexible language with MANY ways of writing code from the simplest to the most enigmatic. I chose one way—not the best way, nor the simplest, nor the most elegant—just the way I thought to do it. If you have recommendations on how to improve the code, please leave a comment!
Code Structure
For now, the code resides in three files—though we will add a couple more before this project concludes:
YM3812_Breadboard.ino will be our primary “program”—that is, where the setup and main loop functions reside
YM3812.h header file defines the YM3812 library that wraps the chip’s functionality into a class
YM3812.cpp provides implementations for functions defined in the header file
Inside of the YM3812 class are two different types of functions. The Chip Control Functions instruct the microcontroller to send data to (or reset) the YM3812. Their implementation depends entirely on the type of microcontroller used and how its pins are configured. The Register Functions utilize these Chip Control Functions to manipulate register settings in the YM3812. By coding things this way, if you want to use an Arduino or Teensy instead of the AVR128DA28, then you only need to change the sendData and reset functions. Everything else stays the same.
The last piece of our class is the Register Cache. Because we want to manipulate pieces of bytes in the registers at a bit level, we need to know the current state of all of the bits that we don’t want to change. The YM3812 doesn’t allow us to read registers on the chip, so instead, we have to store their current values in the microcontroller’s memory. At some point, we will discuss how to circumvent this, but that’s a few articles away.
Chip Control Functions
Let’s start with the lowest level functions that electrically manipulate the pins on the YM3812. The good news is that there are only two of them: sendData and reset.
sendData( reg, val )
If you read the first article, this should feel pretty familiar. sendData directly implements the timing diagram at the end of that article—but with one small change. “Putting data on the bus” requires sending the byte over SPI and then latching it onto the 74HC595. Let’s take a look:
//Port Bits defined for control bus:
#define YM_WR 0b00000001 // Pin 0, Port D - Write
#define YM_A0 0b00000010 // Pin 1, Port D - Differentiates between address/data
#define YM_IC 0b00000100 // Pin 2, Port D - Reset Pin
#define YM_LATCH 0b00001000 // Pin 3, Port D - Output Latch
#define YM_CS 0b00010000 // Pin 4, Port D - Left YM3812 Chip Select
...
void YM3812::sendData( uint8_t reg, uint8_t val ){
PORTD.OUTCLR = YM_CS; // Enable the chip
PORTD.OUTCLR = YM_A0; // Put chip into register select mode
SPI.transfer(reg); // Put register location onto the data bus through SPI port
PORTD.OUTSET = YM_LATCH; // Latch register location into the 74HC595
delayMicroseconds(1); // wait a tic.
PORTD.OUTCLR = YM_LATCH; // Bring latch low now that the location is latched
PORTD.OUTCLR = YM_WR; // Bring write low to begin the write cycle
delayMicroseconds(10); // Delay so chip completes write cycle
PORTD.OUTSET = YM_WR; // Bring write high
delayMicroseconds(10); // Delay until the chip is ready to continue
PORTD.OUTSET = YM_A0; // Put chip into data write mode
SPI.transfer(val); // Put value onto the data bus through SPI port
PORTD.OUTSET = YM_LATCH; // Latch the value into the 74HC595
delayMicroseconds(1); // wait a tic.
PORTD.OUTCLR = YM_LATCH; // Bring latch low now that the value is latched
PORTD.OUTCLR = YM_WR; // Bring write low to begin the write cycle
delayMicroseconds(10); // Delay so chip completes write cycle
PORTD.OUTSET = YM_WR; // Bring write high
delayMicroseconds(10); // Delay until the chip is ready to continue
PORTD.OUTSET = YM_CS; // Bring Chip Select high to disable the YM3812
}
The pin definitions at the top of the code provide a name for each pin of PORT D on the microcontroller. The sendData function then uses those definitions to directly manipulate the registers and turn on and off those pins.
If you are more accustomed to the digitalWrite() function on an Arduino, then OUTSET and OUTCLR might look a little strange. These map to registers in the AVR128DA28 microcontroller that directly manipulate the output pins. By writing a byte to OUTSET, any bits in that byte that are 1 will turn the corresponding pins on. Bits that are 0 will just be ignored. OUTCLR does the opposite. Any bits that are 1 in the byte will turn off their corresponding pins and, again, the 0s get ignored. While maybe a little less intuitive, directly manipulating these registers makes things MUCH faster.
Take note that in order for SPI.transfer() to work, you first have to run SPI.begin() at some point prior in the code. I got stumped by this bug while reducing the module code for this article. When I removed the code for the TFT screen—which also uses the SPI bus—the microcontroller locked up when it ran the SPI.transfer() function. This is because the TFT screen’s initialization function runs SPI.begin(). Once I added SPI.begin() to the YM3812’s reset function, everything worked perfectly!
void reset()
void YM3812::reset(){
SPI.begin(); // Initialize the SPI bus
//Hard Reset the YM3812
PORTD.OUTCLR = YM_IC; delay(10); // Bring Initialize / Clear line low
PORTD.OUTSET = YM_IC; delay(10); // And then high
//Clear all of our register caches
reg_01 = reg_08 = reg_BD = 0; // Clear out global settings
for( uint8_t ch = 0; ch < YM3812_NUM_CHANNELS; ch++ ){ // Loop through all of the channels
reg_A0[ch] = reg_B0[ch] = reg_C0[ch] = 0; // Set all register info to zero
}
for( uint8_t op = 0; op < YM3812_NUM_OPERATORS; op++ ){ // Loop through all of the operators
reg_20[op] = reg_40[op] = reg_60[op] = 0;
reg_80[op] = reg_E0[op] = 0; // Set all register info to zero
}
regWaveset( 1 ); // Enable all wave forms (not just sine waves)
}
The reset function does four main tasks. First it initializes the SPI bus, then it hard-resets the YM3812 by bringing its reset line low and then high. The delays ensure the chip is ready before continuing. Third, it resets the register cache variables stored in the class to zero (so they match what should be in the chip now). And finally, it sets the “Waveset” flag to allow for all waveform types.
Twiddling Bits
The register functions build upon the chip control functions and follow a very similar pattern:
Mask the new value to ensure it has the correct number of bits for the property you want to change
Look up the current value of the register containing the property
Erase the bits associated with the property that you want to overwrite
Insert the masked property value‘s bits into the register
Save the new combined register value into the register cache and send it to the YM3812
Bit manipulation turns out to be the most complicated part of this process. So let’s start with a macro that, once written, will allow us to tuck all of those details away and not worry about them any more:
SET_BITS( regVal, bitMask, offset, newVal )
This macro takes a register value (regVal) and combines a new property value (newVal) into it. The bitMask determines which of the bits from newVal to include, and offset determines where inside of regVal to insert those bits.
Looking at the left side of the chart above, we can see binary versions of newVal as well as bitMask. In this case, we want to insert a two-bit number into bits 4 and 5 of an existing register value without accidentally overwriting any of the other bits. A two-bit value should only ever have 1s in the lowest two bits, but of course, nothing prevents a developer from accidentally pushing a bigger value. So, to solve this, we use a bitMask that has 1s in the two lowest bits—the ones we want to keep—and then AND it with newVal. This way, we block out any rogue bits (like that 1 in red).
In order to insert those two bits into the current register value, we need to first set any bits we want to overwrite to zero. To create this “hole,” we shift the bitMask to the left by offset and then invert it (with the ~ operator). This puts a 1 in all of the bits we want to keep (as shown on the right side of the chart above). If we logically AND this new mask with regVal then those two bits will be zero.
Now, to insert our trimmed newVal into the two bit hole in regVal, we need to shift the value over by offset so it is in the right spot. Then we just logically OR the repositioned value with the masked regVal and presto! We get a new regVal with newVal inserted into just the right spot, and without altering any other bits.
Representing this as a single formula looks something like this:
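regVal = (regVal & ~(bitMask << offset)) | ((newVal & bitMask) << offset);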
In this case, regVal will be overwritten with the new combined value. And, because assigning one value to another returns the assigned value, you could wrap this entire thing in parentheses and think of it like a function that both updates regVal and returns the updated value.
This is fairly complicated bit-math, so if its confusing, that’s OK. We are going to use this formula a lot though, so let’s tuck away all of that complexity into a macro:
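// One way to write it (wrapped in parentheses so that it also "returns" the updated value, as described above):
#define SET_BITS( regVal, bitMask, offset, newVal ) (regVal = ((regVal) & ~((bitMask) << (offset))) | (((newVal) & (bitMask)) << (offset)))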
Macros are a little like functions, except they aren’t executed at runtime. Instead, before compiling the code, the pre-compiler looks for any macro calls and replaces them with their associated code. For example, when the pre-compiler finds:
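SET_BITS( reg_BD, 0b00000001, 7, val )
…it replaces it with the macro’s body, arguments filled in:
(reg_BD = ((reg_BD) & ~((0b00000001) << (7))) | (((val) & (0b00000001)) << (7)))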
We could write all of our functions without the macro using this expanded format, but personally, I find the first version easier to read. Also, we can combine this with our sendData function to write a single line of code for each of our register update functions. Let’s dive into those next.
Global Register Fns
Let’s put our sendData function and SET_BITS macro together to actually modify a register. There are three types of these register update functions—global, channel and operator. The Global registers are the simplest, so we’ll start there. Let’s write a function that sets the Tremolo Depth for the chip.
Looking at the Register Map, we see that the Deep Tremolo flag is located at bit D7 of register address 0xBD. To update this, we have to fetch the current value of register 0xBD from our register cache and then update bit D7.
What is this register cache? Well, there isn’t any way to read the current value of a register on the YM3812. So in order to update only a few bits of a register value, we need to keep track of the current values for all of the registers in memory on the microcontroller. Global registers only have one instance of their value, so they can be represented with integers. Channels and Operators both have multiple instances of their registers so we use integer arrays to store them. Here are the global register caches:
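// Sketch of the global caches; the comments are just my shorthand for what lives at each address:
uint8_t reg_01 = 0;     // test register / waveform select enable
uint8_t reg_08 = 0;     // speech synthesis mode & keyboard split (note select)
uint8_t reg_BD = 0;     // deep tremolo, deep vibrato, drum mode flags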
As a convention, I’ve named each register cache variable after its base address. For global registers that have only one setting per base address, these are simply integers. Channel and Operator caches have multiple settings, so they are stored in arrays. Ok, let’s start piecing together our function.
void regTremoloDepth( uint8_t val ){ sendData( regAddress, value ); }
We know that our function is going to call sendData to set the register value on the YM3812, and that the function requires both a register location and a value to set it to. regAddress comes from the chart above—0xBD—so that’s easy to fill in. value needs to combine the current value of our register cache with the val that was passed into the function. Good thing we have that SET_BITS macro, let’s pop that in:
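void regTremoloDepth( uint8_t val ){ sendData( regAddress, SET_BITS( regVal, bitMask, offset, newVal ) ); }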
Filling in the blanks above, we first swap regVal for the variable containing the current value of the register we want to modify—reg_BD in this case. Then, because we are updating only a single bit, we swap bitMask for the constant, 0b00000001. Then, because the DeepTrem flag sits in bit 7, we swap offset for the constant, 7. Finally, we swap newVal for the variable that was passed into the function—val. Here’s what the formula looks like once you make all of those changes:
void regTremoloDepth( uint8_t val ){
sendData(0xBD, SET_BITS( reg_BD, 0b00000001, 7, val ) );
}
And there you have it, our one-liner function that allows the user to update the global tremolo depth value.
Channel Register Fns
Updating the channel registers works much the same way, but with one new wrinkle. Instead of one register per base address, we now have 9—one for each channel. Let’s look at the register caches to see how they are affected:
uint8_t reg_A0[YM3812_NUM_CHANNELS] = {0,0,0,0,0,0,0,0,0}; // frequency (lower 8-bits)
uint8_t reg_B0[YM3812_NUM_CHANNELS] = {0,0,0,0,0,0,0,0,0}; // key on, freq block, frequency (higher 2-bits)
uint8_t reg_C0[YM3812_NUM_CHANNELS] = {0,0,0,0,0,0,0,0,0}; // feedback, algorithm
Instead of simple integers, we use integer arrays to store our cache values. The arrays contain one integer for each of the 9 channels supported by the YM3812. This also means that in our register writing functions, we have to choose the correct register cache value based on the channel we want to update. Here is an example:
void regChFeedback( uint8_t ch, uint8_t val ){
sendData(0xC0+ch, SET_BITS( reg_C0[ch], 0b00000111, 1, val ) );
}
This function modifies the feedback property of a specific channel. It takes two parameters: the channel to update and the value to set the feedback property to. Looking at our call to sendData, we see two important changes. First, we have to compute the address of the register we want to update by adding the channel number (ch) to the base address (0xC0). Second, we need to pull the correct register value from our cache by using ch as the array index.
Operator Register Fns
Updating operator registers works the same way as channels, but with yet another twist. The channel registers are all in sequence, so computing the register address is super simple (just add the channel number to the base address). We aren’t so lucky with the operators.
Operators 1, 2, 3, 4, 5, and 6 all pair with register offsets 0, 1, 2, 3, 4, and 5… so far so good. But then… Operator 7 pairs with register offset 8! What happened to 6 and 7? Then, later, we skip offset locations 0xE and 0xF. Woof.
To get around this, we use an array that maps the operator number to the memory offset:
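// Reconstructed from the pattern described above: operator index -> register offset
const uint8_t op_map[YM3812_NUM_OPERATORS] = {
  0x00, 0x01, 0x02, 0x03, 0x04, 0x05,     // operators 1-6
  0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D,     // operators 7-12 (offsets 0x06 and 0x07 skipped)
  0x10, 0x11, 0x12, 0x13, 0x14, 0x15      // operators 13-18 (offsets 0x0E and 0x0F skipped)
};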
void regOpAttack( uint8_t op, uint8_t val ){
sendData(0x60+op_map[op], SET_BITS( reg_60[op], 0b00001111, 4, val ) );
}
Now, instead of adding the operator number to the base address (like we did with channels), we add the mapped offset from our array.
All of the register functions follow these patterns, and when you put them together, they create a lovely and readable, almost table-like pattern:
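To give you a taste, here are a few of the operator envelope functions side by side. (regOpAttack comes straight from above; the other names and masks are my best guess at the pattern, so double-check them against the Register Map.)
void regOpAttack(  uint8_t op, uint8_t val ){ sendData( 0x60+op_map[op], SET_BITS( reg_60[op], 0b00001111, 4, val ) ); }
void regOpDecay(   uint8_t op, uint8_t val ){ sendData( 0x60+op_map[op], SET_BITS( reg_60[op], 0b00001111, 0, val ) ); }
void regOpSustain( uint8_t op, uint8_t val ){ sendData( 0x80+op_map[op], SET_BITS( reg_80[op], 0b00001111, 4, val ) ); }
void regOpRelease( uint8_t op, uint8_t val ){ sendData( 0x80+op_map[op], SET_BITS( reg_80[op], 0b00001111, 0, val ) ); }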
Really, the only things that change are the function name, the base address, the register cache, the bitMask and the offset. And, of course, all of those values come from the Register Map.
For the full implementation of all of these functions, check out the GitHub repo. This version has been heavily simplified to match this article and will serve as a foundation for all of the other functionality we’ll be adding.
Making Music
The time has finally come… Let’s go back to our YM3812_Breadboard.ino file and put our library to good use. Again, we are going to keep this as simple as possible. There are two parts: setting things up in a setup() function, and then playing a sequence of notes in a loop() function.
setup()
The first thing we need to do is initialize the YM3812 library. I’ve defined a global instance of the library with the line:
YM3812 PROC_YM3812;
Then, inside of the setup function, we initialize the library with the reset function:
PROC_YM3812.reset();
At this point, we get to set up the properties of the sound that we want to create. To keep this simple, let’s just tell every channel to have the same sound properties:
As we loop through the channels, we have to first compute the index of the operators we want to modify. And of course, it’s quirky. Operator 2‘s index will always be 3 higher than Operator 1‘s. The pattern for Operator 1‘s index, however, is less straightforward. To get around this, we are going to add another array to our YM3812 class that maps from channel number to its associated Operator 1 index:
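// My reconstruction of that map, based on the channel/operator pairing described in the first article:
const uint8_t channel_map[YM3812_NUM_CHANNELS] = { 0, 1, 2, 6, 7, 8, 12, 13, 14 };  // Operator 1 index for each channel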
Now that we know which channel to modify as well as the associated operator indexes, the rest of the function configures each of the sound parameters using the register functions.
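I won’t try to reproduce the exact patch here, but the shape of that loop is roughly this (the register values are arbitrary, and I’m assuming channel_map is reachable from the sketch):
for( uint8_t ch = 0; ch < YM3812_NUM_CHANNELS; ch++ ){
  uint8_t op1 = PROC_YM3812.channel_map[ch];     // Operator 1 (modulator) index for this channel
  uint8_t op2 = op1 + 3;                         // Operator 2 (carrier) is always three higher
  PROC_YM3812.regOpAttack( op1, 0x0B );          // arbitrary envelope settings, just to make sound
  PROC_YM3812.regOpAttack( op2, 0x0B );
  PROC_YM3812.regChFeedback( ch, 0x00 );         // no feedback for a cleaner tone
}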
Loop()
Now finally, we get to play the four notes in our chord. And there are just a few steps to do it:
First, we turn off any notes that might happen to be on:
PROC_YM3812.regKeyOn( 0, false ); //Turn Channel 0 off
PROC_YM3812.regKeyOn( 1, false ); //Turn Channel 1 off
PROC_YM3812.regKeyOn( 2, false ); //Turn Channel 2 off
PROC_YM3812.regKeyOn( 3, false ); //Turn Channel 3 off
Then we set the octave we want the notes to be in:
PROC_YM3812.regFrqFnum( 0, 0x1C9 ); // F
PROC_YM3812.regFrqFnum( 1, 0x240 ); // A
PROC_YM3812.regFrqFnum( 2, 0x2AD ); // C
PROC_YM3812.regFrqFnum( 3, 0x360 ); // E
Wait… 0x1C9, 0x240, 0x2AD and 0x360 look nothing like F A C E… Nothing to see here… we will talk about that in the next article. Anyway… Finally, we turn the notes on:
//Turn on all of the notes one at a time
PROC_YM3812.regKeyOn( 0, true );
delay(100);
PROC_YM3812.regKeyOn( 1, true );
delay(100);
PROC_YM3812.regKeyOn( 2, true );
delay(100);
PROC_YM3812.regKeyOn( 3, true );
delay(1000);
Now, I turned all of the notes on one at a time here by adding delays, but if you remove those, they would turn on all at once.
Final Thoughts
My goodness that was a long article. Apparently “FACE reveal” is a bit more involved than “hello world.” If you made it all of the way through, I hope that a lovely F Major 7th chord is echoing through your speakers at this very moment. If you ran into any trouble along the way, feel free to post questions in the comments. All of the working code is posted on GitHub, so hopefully that will help you in the troubleshooting process as well.
While the journey so far may have been arduous, it’s just getting started. Remember how I glossed over the frequencies for F A C E? Well, if we want to add midi control we are going to need a way to translate midi notes into those values. You know all of those register control functions we made? Eventually we are going to get rid of most of those—there’s a better way. But for now, don’t worry about all of that. Enjoy the fruits of your labor. See if you can create more interesting instruments than the one I used. Maybe program in a nice little jig! You got this.
In the last article we dug into the YM3812 registers and how to manipulate them at an electrical level. Today we are going to build up the first part of the schematic—just enough to get sound working through a microcontroller.
Whenever you’re building on a new platform, what’s the first thing you implement? Hello World. It’s a great starting point that ensures everything works—the hardware, the development toolchain, etc. We need an equivalent project for our YM3812 that uses sound instead of text on the screen. We need… a FACE reveal! (you know… play an F major 7th chord… F A C E… yeah… woof… #DadJokes 🙄). OK but seriously… to play a note on this chip, we need working hardware, a library that interfaces with the chip, a properly configured voice, and the ability to turn on/off a note at the right frequency. This is going to be an epic journey… and it’s going to take two articles to complete. In this article, we will start with the hardware and schematic, and then upload the code from the GitHub repo using the Arduino IDE. Then in the next article, we will look into the code in a bit more detail to see exactly how it’s working.
The Components
In the spirit of starting small, I’ve pulled together a simple implementation of the YM3812 hardware. Technically, you could simplify this further—interfacing the full 8-bit data bus directly with the microcontroller, or having the microcontroller produce the 3.58 MHz clock. But we are going to need some of those pins down the line and this all fits on a single breadboard. Let’s take a look at our cast of silicon characters.
AVR128DA28 Microcontroller
I like to think of the AVR128DA28 as an Arduino Nano, but in chip form—and far more capable. This chip sports:
128KB of program memory and 16KB of RAM
10 AD converters, and a DA converter
I2C, SPI, and even 4 hardware UART serial ports
Operating voltage of 1.8v to 5.5v
Internal auto-tuned 24 MHz oscillator
UPDI programming & Arduino IDE Support
And sooooo much more…
All in an itty bitty 28-pin DIP package. There are even more powerful surface mount versions of the chip as well, but this will give us plenty of oomph in our project. I have to give a shout out here to Robbie Langmore from Tangible Waves for introducing me to these microcontrollers. In a world of chip shortages, there are still ~300 in stock at Mouser—as of writing this.
74HC595 Shift Register
The 74HC595 shift register is next in our lineup. This chip takes the serial output from the AVR’s SPI bus and converts it into the 8-bit parallel bus required by the YM3812.
YM3812 OPL2 Sound Processor
If you are reading this article and wondering what this chip does, then I highly recommend reading part one of this series. Other than some decoupling capacitors, the YM3812 requires only a 3.58 MHz crystal oscillator.
Y3014B DAC
Unlike some of the other YM sound processors, the YM3812 does not have an analog output. Instead, it needs to be paired with the Y3014B 10-bit Digital to Analog Converter. This chip takes a serial data stream coming from the YM3812 and converts it into an analog output. The Y3014B requires a voltage of 1/2 VCC on pin 8 (MP) to bias the output signal (pin 2) around. Fear not though, it produces its own reference voltage on pin 7 (RB), but you still need to buffer that voltage using an operational amplifier. Also, buffering the output signal (pin 2) with an operational amplifier is another solid plan.
TL072 / LM358 OpAmp
Did somebody say operational amplifier? OpAmps are the workhorses of Eurorack modules and analog circuitry in general. Unless it’s a passive attenuator, odds are, there’s going to be 1 or maybe 10 of these versatile building blocks. In our case, we will use these to buffer the reference voltage and output signal generated by the Y3014B.
Other Parts
After the ICs, there’s only a handful of other parts.
The six .1uF capacitors provide decoupling for the ICs
The two 10 uF capacitors stabilize the reference voltage on the Y3014b
And the 4.7 uF capacitor provides capacitive coupling on the audio output
The 100k resistor ties the output capacitor to ground and the 1k resistor ensures appropriate output impedance
The 1N4148 signal diode converts the single-wire UPDI pin into separate Tx/Rx pins that attach to an FTDI cable
Lastly, a micro-switch provides a reset button for the AVR microcontroller.
Schematic Time!
Microcontroller Connections
Now that we’ve talked through how the pieces work, the schematic should seem pretty straightforward. The Clock (pin 26) and Data (pin 28) of the SPI bus of the AVR128DA28 connect to the Serial Data (pin 14) and Shift Clock (pin 11) inputs on the 74HC595 shift register. PortD-3 (Pin 9) on the microcontroller connects to the Latch Clock (pin 12) on the 74HC595 and latches the output of the register. This sends the 8 Data Bits (pins 15, 1-7) to the YM3812 Data Inputs (pins 10-11, 13-18).
PortD 0, 1, 2 and 4 (pins 6, 7, 8, 10) on the microcontroller connect to the four control lines of the YM3812—Write (pin5), A0 (pin 4), Initialize/Clear (pin 3), and Chip Select (pin 7) respectively. On the YM3812, tie Read (pin 6) high to prevent rogue reads. IRQ (pin 2) can be left floating in the breeze.
Sound Processor & DAC Connections
The output of the 3.58 MHz Crystal Oscillator flows into the Master Clock Input (pin 24) on the YM3812. Enable (pin 1) on the oscillator should float. Tying it to ground disables the clock output.
The Serial Data and Clock output pins on the YM3812 connect to the Serial Data and Clock input pins on the Y3014B and transfer the raw sound data to the DAC. Sample & Hold (pin 20) on the YM3812 connects to the Latch (pin 3) on the Y3014B, and updates the analog value of the output.
The Y3014B produces a reference voltage of 1/2 VCC on RB (pin 7) which the TL072 then buffers and sends back to the output bias control pin MP (pin 8) on the Y3014B. C7 and C8 stabilize the reference voltage.
The Analog Output signal of the Y3014B (pin 2) gets buffered through another operational amplifier on the TL072 before passing through the 4.7 uF coupling capacitor to remove the DC bias. A high value resistor (R1) ensures our output stays centered at ground, and R2 ensures our output remains at a 1k impedance.
The Other Stuff
Of course all of the ICs require decoupling capacitors on their power pins (C1 – C6). And a reset button connects the ResetLine (pin 18) of the AVR128DA28 to ground. As far as I can tell, the reset button doesn’t require a debouncing filter; you can just connect it directly.
Lastly, the AVR128DA28 can be programmed through a one-wire Universal Programming & Debugging Interface. If you don’t have one of those programmers, you can use an FTDI to USB connector. There are many variations of this connector, some include the RTS/CTS lines (which we don’t use) and some don’t. Also, there are a couple of color variations of the wires on the connector. Here is an alternate color scheme that I have seen:
Starter Code
Our circuit isn’t going to do much without a bit of code. We need to program the microcontroller to see if everything works. As it happens, I created a simple library on my GitHub for just that purpose. I’ve trimmed it down to its basic elements and added loads of comments to make things as readable as possible. Feel free to download and play around with it, and in the next video we’ll dive into exactly how the code works.
Note, to get the YM3812_Breadboard.ino file to work, you will need to create a “YM3812_Breadboard” folder in your Arduino folder and then copy YM3812_Breadboard.ino, YM3812.h, and YM3812.cpp into it.
Programming the Micro
Assuming that you chose to use an AVR128DA28 as your microcontroller, you’ll need to install DxCore in your Boards Manager. DxCore is an open-source board package written by Spence Konde that allows you to use the AVR128 line of microcontrollers in the Arduino IDE. There are detailed installation instructions on the GitHub page, but this should at least get you started.
First we need to add a URL to the Additional Boards Manager URLs. To do that, open the menu: Arduino > Preferences:
Enter the url, http://drazzy.com/package_drazzy.com_index.json in the Additional Boards Manager URLs text box as shown above.
Then, locate the boards manager in Tools > Boards:… > Boards Manager
Search for DxCore and locate a package that looks similar to the one below:
After installing the DxCore boards library, you should be able to select the AVR DA-series (no bootloader) from the board menu. Also, select AVR128DA28 as the chip, and 24 MHz Internal as the Clock Speed.
The programmer that worked best for me was SerialUPDI – 230400 baud. It’s quite fast.
Now, just connect your FTDI cable as described in the schematic section of this article. And fingers crossed your program should upload directly into the micro. If it doesn’t… check that the Tx/Rx pins aren’t swapped—I’ve done that a few times.
Final Thoughts
With any luck, you have now experienced the total exhilaration of getting the YM3812 to generate that F Major 7 chord! Realistically, ANY sound at this point is great, because it shows that the hardware is working. If you got everything working, feel free to leave a comment. Similarly, if you see any issues in the schematic, let me know. After all, I’m still learning too!
In the next article, we will look through the code in detail.
Errata
In the schematic, the labels on the SPI Clock and MOSI lines were swapped. Thank you to Peter Hesterman for catching this one!
In conducting a pretty extensive deep dive into Yamaha’s FM synthesis chips, I’ve come to realize that while these chips produce very distinctive sounds, they are also largely very similar. In fact, if you compare their register settings (and ignore the level of granular control that you get), you can see just how many of the settings are the same:
With all of these similarities in mind, I originally planned to build a single module with a variety of sound processor types all controlled by a single microcontroller. After breadboarding this idea out with a YM3812, YM2151 and even a non-Yamaha SN76494, it worked! And I was even able to translate chip settings from one chip to another seamlessly. But when I started to document how it worked, I realized that, from a complexity standpoint, I needed to back waaaay up and start from the beginning. To get into the advanced stuff, we need to start with a smaller project that lays a strong foundation. And what better foundation than a Eurorack module powered by Yamaha’s YM3812 OPL2 sound chip: a chip that revolutionized computer audio for the video games of my childhood.
This article is only the first in a series. Together, we will tour the properties of the YM3812, design the hardware for a module and write the supporting microcontroller code. Hopefully, each article will pair with a YouTube video on the subject as well.
Why the YM3812?
If you grew up in the 80’s and 90’s and played video games on a PC, then it is highly likely that you have listened to the glorious 8-bit audio output of a YM3812 sound processor. To me, this sound is synonymous with Sierra and LucasArts adventure games. I can still remember reading advertisements for the sound card in Sierra’s InterAction magazine. The idea of getting music and sound effects other than square waves from a computer was like magic. So after saving up my dog-sitting money I bought a SoundBlaster Pro from Fry’s Electronics. The first time using this thing was symphonic—it took adventure gaming to a whole new level. Now, 30-ish years later, I want to recapture that magic.
From a more practical standpoint, the YM3812 supports only two operators per voice making it significantly simpler to understand than other 4-operator or even 6-operator FM synthesizers. But don’t be fooled, this is still a powerful chip, capable of creating a vast variety of nostalgia-generating instrumental and percussion sounds—and that’s what makes this chip such a great starting point.
How does FM synthesis work on the YM3812?
Let’s start with a quick review of FM Synthesis and how sound processors use it to produce sound. The YM3812 is an FM Synthesis sound processor. It plays up to 9 different sounds (voices) simultaneously, with each sound composed of two different operators.
An operator is the basic building block of FM synthesis. It contains an oscillator that generates a sound wave, and an envelope that adjusts the sound’s level over time. Each voice of the YM3812 includes two of these operators, and they combine together using two possible algorithms:
Mixing is the simplest form of “synthesis.” It adds the output of the two oscillators together like a mixer. In the output, both oscillators’ sounds are audible and distinct.
Frequency Modulation uses the first operator (called the modulator) to modulate the second operator (called the carrier). In this style, only the level of the carrier operator affects the level (volume) of the output signal. The level of the modulator operator affects the brightness of the timbre of the output. The higher the modulator level, the brighter the sound will be.
Frequency Modulation Demo
Words simply can’t do this justice, so let’s try a demonstration. I’ve connected an oscilloscope to the output of the final module (spoiler alert… the module eventually does work and this story will have a happy ending). I configured the module into Frequency Modulation mode and set the modulator operator to have a slowly decaying envelope. This way, you can hear how the amount of modulation changes the timbre of the sound as it slowly decreases. After playing a few notes, I also show how using different waveforms for the modulator and carrier signal also change the overall output.
Notice how the brightness quickly drops as the level of the modulating operator decays. In the Oscilloscope, it looks like the waveform collapses back into itself. Pretty cool right? Let’s try something even more mind bending…
Feedback
In the YM3812, the modulator operator (operator 1) also has the ability to modulate itself. What does that mean? This was incredibly difficult to wrap my head around, so I made an animation that starts with a sine wave and slowly increases the amount of feedback. As the amount of feedback increases, the waveform shifts from a sine wave into something more like a sawtooth wave. Then, after adding too much feedback, the wave deteriorates into inharmonic white noise. While less than musical, this white noise is a critical component of making percussive sounds on the YM3812, so it’s actually a good thing!
As an experiment to see if my animation aligns with reality, I plugged the final module into an oscilloscope. Now, we only want operator #1 because that is the operator that has the option for feedback. So, I set the module in mixing mode and turned the second operator’s signal all of the way down. This ensures that only the first operator’s signal appears in the output. I then set the first operator’s waveform to be a sine wave and then slowly increased the amount of feedback. You can see it shift from sine to saw… to crazy…
Types of Settings in the YM3812
The register settings that control sound production on the YM3812 fall into three different categories.
Operator Level settings control how the oscillator and envelope function. Oscillator settings include things like the waveform, detuning, and vibrato, while the envelope settings include things like attack, decay, sustain, and release. Operator settings have to be defined for every operator on the chip, so there are 18 sets of these settings on the YM3812.
Channel level settings determine how the operators will work together to form a sound, as well as the things that affect that sound overall. Channel settings include the pitch of the note to play, whether the sound is turned on or off, and adding effects like tremolo and vibrato.
Global level settings control the general properties of the chip like turning on “drum mode” or affecting internal chip timers. The “Deep Tremolo” and “Deep Vibrato” settings change the amount of vibrato / tremolo applied to the channels that have vibrato / tremolo enabled. Because these two settings are global, they affect all channels at once. As for the drum mode, you get 6 drum sounds for the cost of 3 channels. It seems like a good thing, but realistically you can achieve far superior drum sounds using normal instrument settings.
What are Registers?
In order to produce sound, the YM3812 uses a set of special-purpose memory locations called registers to store the sound settings. Because space comes at a premium, and there are so many different settings to store, the YM3812 compresses multiple settings into each register byte. So, for example, let’s say we found the value stored at register 0x41 to be 0x4C.
This value represents two different settings—Level Scaling and Total Level for Operator 1 of Channel 2. To understand how these values were combined, you have to look at the value of 0x4C in a binary representation. As shown above, this translates to a binary value of 01001100. The two “highest” bits on the left store the value 01, or in decimal, 1, and the remaining 6 bits store the value 001100, or in decimal, 12. The first number maps to the Level Scaling setting and the second to Total Level. Combining variables in this way saves memory because not every setting requires 8 bits to represent its value. Conversely, this also means that the maximum integer value a variable can represent depends on the number of bits used.
For example, a 1-bit number can only be on or off, whereas a 2-bit number can have 4 states (0-3). The number of possible states doubles with each additional bit, up to a maximum of 256 (0-255) for an 8-bit number.
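If you want to play with this in code, here is a minimal sketch (the variable names are mine, not anything from the datasheet) that unpacks the two settings from the 0x4C example and packs them back together:

```cpp
#include <cstdio>
#include <cstdint>

int main() {
    uint8_t reg = 0x4C;                    // the value we found in register 0x41 (binary 01001100)

    int levelScaling = (reg >> 6) & 0x03;  // top 2 bits  -> 0b01     = 1
    int totalLevel   = reg & 0x3F;         // low 6 bits  -> 0b001100 = 12

    // Packing goes the other way: shift the high field up and OR in the low field.
    uint8_t repacked = (uint8_t)((levelScaling << 6) | (totalLevel & 0x3F));

    printf("Level Scaling: %d, Total Level: %d, repacked: 0x%02X\n",
           levelScaling, totalLevel, repacked);
    return 0;
}
```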
Note, the Frequency Number setting is the only one that uses MORE than 8 bits. Its value is broken up across two register settings: an 8-bit Frequency Number LOW setting and a 2-bit Frequency Number HIGH setting. To combine these into a single 10-bit value, you shift the 2 high bits to the left 8 times and then logically OR the result with the low setting.
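In code, that shift-and-OR looks something like this (again, the function name is just my own placeholder):

```cpp
#include <stdint.h>

// Combine the 8-bit Frequency Number LOW setting with the 2-bit
// Frequency Number HIGH setting into one 10-bit value (0-1023).
uint16_t combineFrequencyNumber(uint8_t fNumLow, uint8_t fNumHigh) {
    return (uint16_t)((fNumHigh & 0x03) << 8) | fNumLow;
}

// Splitting a 10-bit value back apart when it's time to write the registers:
//   low  byte : fNum & 0xFF
//   high bits : (fNum >> 8) & 0x03
```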
Finding Register Locations
Earlier, I pulled a random location 0x41 and somehow said, “oh that’s these two settings.” How the heck did I know that? Well, the answer is that I looked it up in the data sheet. But I think with a spreadsheet and a little information design, we can build a simple map to use going forward.
Global Registers
The chart above shows how settings map to the memory locations shown on the right. These 6 global registers scrunch together 18 different settings. The left side of the chart shows how the settings map to each bit of those 6 bytes. The D0 column represents the lowest bit in the byte and D7 represents the highest bit. So, for example, if you wanted to set the Deep Vibrato flag, you would take the current value of BD, do a logical OR with 0b01000000 to set the correct bit, and then write the result back into the BD register.
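One wrinkle when you turn that into code: as we'll see below, the YM3812's registers can't be read back, so you have to keep a cached copy of each register value in your own RAM and modify that. Here's a minimal sketch; writeYM3812() and regCache are my own names for a bus-write helper (sketched at the end of this article) and the RAM copy:

```cpp
#include <stdint.h>

uint8_t regCache[256] = {0};            // RAM copy of every register value we've written

// Performs the two-step bus write described later in this article.
void writeYM3812(uint8_t reg, uint8_t value);

void setDeepVibrato(bool on) {
    uint8_t value = regCache[0xBD];     // start from the last value written to register BD
    if (on)  value |=  0b01000000;      // set bit 6 (Deep Vibrato)...
    else     value &= ~0b01000000;      // ...or clear it, leaving the other BD bits alone
    regCache[0xBD] = value;             // remember what we wrote
    writeYM3812(0xBD, value);
}
```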
Channel Registers
The channel-level registers follow a similar pattern, but with one added twist. Instead of each register type having one location in memory, they have 9—one for each channel. The columns on the left still indicate how bits map to settings, but now the columns on the right show the memory location of the register that corresponds to each channel. So, for example, if I wanted to turn sound generation on for Channel 5, I would have to set bit 5 of memory location 0xB4 to a 1, and that would play the note.
One more note: the base address column in the center provides the memory location of the first channel, and we can use it as an offset to calculate the other memory locations. So if we wanted the memory location of the "B0" setting for channel 5, we could find it with the formula "0xB0 + 5 - 1". The minus 1 is there because the channel names are "1 indexed" instead of "0 indexed." If you labeled them channels 0 - 8, then "channel 5" would become "channel 4" and the formula would simply be "0xB0 + 4".
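Putting the channel math together, a sketch of keying a note on for channel 5 might look like this (same hypothetical writeYM3812() helper and register cache as before):

```cpp
#include <stdint.h>

// Same register cache and bus-write helper as in the Deep Vibrato sketch above.
extern uint8_t regCache[256];
void writeYM3812(uint8_t reg, uint8_t value);

// Channel-level register address: the base address of the channel-1 register
// (e.g. 0xB0, the register with the Key-On flag) plus the 1-indexed channel.
uint8_t channelRegister(uint8_t baseAddress, uint8_t channel) {
    return baseAddress + (channel - 1);
}

// Turn sound generation on for a channel by setting bit 5 (Key-On).
void keyOn(uint8_t channel) {
    uint8_t reg = channelRegister(0xB0, channel);   // channel 5 -> 0xB4
    uint8_t value = regCache[reg] | 0b00100000;     // set bit 5, keep the rest
    regCache[reg] = value;
    writeYM3812(reg, value);
}
```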
Operator Registers
The operator-level registers add a bit more complexity to the mix. I've laid them out in a similar fashion, with the mapping of bits to setting types on the left and memory locations on the right, but now we have 18 different locations to contend with (one for each operator) instead of 9. Every channel has two operators: a Modulator and a Carrier. We can also abstract these into "Slot 1" and "Slot 2," because other Yamaha chips have up to 4 operators per channel, and numbering the slots keeps the naming clear.
I’ve arranged the memory location columns in the table above to keep each operator visually connected with its associated channel. If you look closely, a curious pattern emerges. Channel 1 associates with operators 1 and 4, Channel 2 associates with operators 2 and 5, and so on. The carrier operator number is always three higher than the modulator’s operator number.
In the chart above, the row below the operator numbers (that starts, “+0, +3, +1,…”) represents the memory offsets for each operator from the base address. So to find a memory location for a specific operator, you could add this value to the base address associated with the setting you want to change. Here again, there is another “gotcha.” The memory locations JUMP between channels 3 and 4, and skip offsets 0x6 and 0x7. A similar jump occurs between channels 6 and 7, where we skip offsets 0xE and 0xF as well. As far as I can tell, the easiest way to manage this is to put the offsets into an array that maps operator to memory location.
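Here is one way to express that lookup in code, assuming we number the operators 1-18 the same way the chart does and use the base addresses from the register map (e.g. 0x40 for Level Scaling / Total Level):

```cpp
#include <stdint.h>

// Offset of each operator's registers from the base address, indexed by the
// operator number (1-18) used in the chart. Index 0 is unused so the array
// lines up with the 1-indexed operator names. Note the skipped offsets
// 0x06/0x07 and 0x0E/0x0F.
const uint8_t OP_OFFSET[19] = {
    0x00,                                   // (unused)
    0x00, 0x01, 0x02, 0x03, 0x04, 0x05,     // operators 1-6
    0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D,     // operators 7-12
    0x10, 0x11, 0x12, 0x13, 0x14, 0x15      // operators 13-18
};

// Operator-level register address: base address plus the operator's offset.
// For example, operatorRegister(0x40, 2) -> 0x41, the Level Scaling /
// Total Level register for channel 2's first operator from the earlier example.
uint8_t operatorRegister(uint8_t baseAddress, uint8_t op) {
    return baseAddress + OP_OFFSET[op];
}
```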
Full Register Map
If we put all of these pieces together, the full register map emerges:
If you want to develop code that controls a YM3812, I highly recommend printing this chart, laminating it, and pinning it to your wall. You are going to need it!
Oh, one other thing: if you turn on drum mode, then channels 7, 8, and 9 become dedicated to drum sound production. I’ve indicated this with orange asterisks in the chart above. Again, I’d recommend against using the default percussion sounds, but hey, it’s there if you need it!
Setting Registers on the YM3812
Now that we know the mapping of settings to register locations, let’s talk about how to set those registers… electrically.
YM3812 Pins
The pins of the YM3812 fit into three different types. The power pins connect to 5v and ground. The three pins in yellow provide a serial connection to the Y3014b digital to analog converter chip. The 8 pins in blue connect to the data bus, and the 6 green pins work as control lines. There are also a few unused pins in grey that we can ignore.
For this exercise, our interest lies in the 8 data lines and the A0, Write, and Chip Select control lines. The Init-Clear pin clears the contents of the chip’s registers, and the IRQ and Read pins are used to read status information from the chip. Unfortunately, you can’t read the contents of a register back, so honestly, reading information from the chip just isn’t that useful.
Selecting Registers & Sending Data
Setting a register requires two main steps: selecting the register, and then writing its data. The full sequence goes like this (a code sketch follows the list):
Begin with the Chip Select and Write control lines high (disabled). The A0 line could be either high or low.
Set the Chip Select line low to enable the chip
Set A0 low to tell the YM3812 that we are selecting a register
Put the address of the register we want to write onto the Data Bus
Set Write to low to tell the YM3812 to use the data on the Data Bus to select the register
Wait 10 microseconds
Set Write to high to complete the write cycle
Wait 10 microseconds. At this point the register is selected
Flip the A0 control line high to indicate that we are now writing data into the register
Put the value onto the Data Bus to write into the selected register
Set Write to low to tell the YM3812 to write the data into the selected register
Wait 10 microseconds
Set Write to high to complete the write cycle.
Wait 10 microseconds. At this point, the data has been written into the YM3812
Set Chip Select back to high to disable the YM3812 again
And that’s it! If you followed the steps above, then you have written data into a register!
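To tie it all together, here is what those steps might look like as an Arduino-style sketch. The pin numbers are placeholders, and the real module may well drive the data bus through a shift register or direct port writes instead, so treat this as a literal translation of the list above rather than the module's actual firmware:

```cpp
#include <Arduino.h>

// Placeholder pin assignments -- adjust to match your wiring.
const uint8_t PIN_A0 = 10;
const uint8_t PIN_WR = 11;
const uint8_t PIN_CS = 12;
const uint8_t DATA_PINS[8] = {2, 3, 4, 5, 6, 7, 8, 9};   // D0..D7

// Put an 8-bit value onto the data bus, one GPIO pin per data line.
static void putDataBus(uint8_t value) {
    for (uint8_t i = 0; i < 8; i++) {
        digitalWrite(DATA_PINS[i], (value >> i) & 1);
    }
}

// One write cycle: data on the bus, pulse Write low, wait, release, wait.
static void writeCycle(uint8_t value) {
    putDataBus(value);
    digitalWrite(PIN_WR, LOW);
    delayMicroseconds(10);
    digitalWrite(PIN_WR, HIGH);
    delayMicroseconds(10);
}

// Select a register (A0 low), then write its data (A0 high).
void writeYM3812(uint8_t reg, uint8_t value) {
    digitalWrite(PIN_CS, LOW);      // enable the chip
    digitalWrite(PIN_A0, LOW);      // address phase: select the register
    writeCycle(reg);
    digitalWrite(PIN_A0, HIGH);     // data phase: write into the selected register
    writeCycle(value);
    digitalWrite(PIN_CS, HIGH);     // disable the chip again
}

void setup() {
    pinMode(PIN_A0, OUTPUT);
    pinMode(PIN_WR, OUTPUT);
    pinMode(PIN_CS, OUTPUT);
    for (uint8_t i = 0; i < 8; i++) pinMode(DATA_PINS[i], OUTPUT);
    digitalWrite(PIN_CS, HIGH);     // start with the chip disabled
    digitalWrite(PIN_WR, HIGH);

    writeYM3812(0xBD, 0b01000000);  // example: set the Deep Vibrato flag from earlier
}

void loop() {}
```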
Wrap Up
Now that you know where all of the registers are on the chip, and how to manipulate them, you have the fundamental building blocks to control the YM3812. Of course, getting from register settings to a working MIDI-controlled module will require a bit more discussion. My hope is to take these topics one at a time in future articles. In the next article, we’ll go through the module schematic, and after that we’ll get into some more of the software algorithms.