The B-Side Of The Moon

-slapdash media appraisal-

Searching For Analog In Digital: Synthesizing Prepared Piano

I got the chance to score a short film. The opportunity came because I took a pretty audacious step this past autumn: creating my first piano piece. It was audacious because I do not play the piano. Well, not well anyway. Piano was actually the instrument I started out on, but guitar was where I really came into my own as a musician. The physicality of guitar just made instant sense to me in a way piano never did. Now I see myself as more of a singer / songwriter, and I wear every hat in the audio production chain making music from scratch all by myself, but when I envision notes, the representative interface that comes to mind is always a keyboard. The shapes chords form from the keys, the cues that let you know you’re entering a new octave, the ergonomics of figuring out if you can even play that beefy cluster of notes all at once or if you can fly through that run for a flashy melody… It’s my first language. I am a transplant on any other instrument, fluent as I may be. The first music I fell for was piano pieces, too. Solo piano. Beethoven, Mozart, Chopin, Debussy… Debussy is probably my favorite musician ever. To play piano proficiently, in my worldview, is the loftiest endeavor. I cannot do it. The muscular disorder that kept me from mastering Van Halen when I cut my teeth on guitar by imitation means I’ll never be able to really tickle the ivories either. I’ve never been able to bifurcate my brain to get each hand playing a separate musical passage anyway. I have a hell of a time multitasking anything repetitive. What the hell am I doing writing piano pieces? You can hear them, they’re on my Whyp or SoundCloud! How is that possible? Well, I cheat. No, I can’t play an entire piece from start to finish with my own two hands. I can play each individual part a buncha times until I get it right. I can record it from the decent keybed of my Yamaha SY77 as MIDI data. 
I can create a composite from each take until I have the whole song in MIDI, just waiting to trigger a synth. It’s studio magic, the same way you hear a polished, astonishing performance of a song that’s in actuality a patchwork of the absolute best takes of each part, stitched together and encased in the resin of audio engineering & production tricks. Is that wrong? Is it removing the human element from art, so any piano piece I create isn’t worth a damn? Idk, man, listen and tell me! Since the MIDI is a patchwork of my physical playing, it contains my mistakes, my missed timing, the velocity data of how hard or soft I hit the notes, as much of me as can be encoded in a digital signal. Art means different things to every artist. For me, it’s the most vivid hallucination, a fever dream rapping on the door of my conscious mind that will drive me insane if left unanswered. I didn’t answer it for most of my life, and now it is a fugue state that seizes me. It’s triggered by my synaesthesia, so it has the power to shunt me into catatonia at any moment of my waking life. Now that I’ve spent years assembling a home studio and have finally embarked on translating these intrusive signals into forms anyone else can experience, it’s calmed down. I believe I have a calling to make music. I don’t know if anything will ever come of it, I just have to do it. I will answer the call by any means necessary! Even if that means resorting to machines.
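If you’re curious what that compositing amounts to in the abstract, it’s simple enough to sketch in code. This is a toy illustration in Python, not my actual workflow (that happens in a sequencer); every note, velocity, and timestamp below is invented:

```python
# Toy MIDI comping: each take is a list of (time_in_beats, note, velocity)
# events, and the composite keeps a chosen time region from each take.
# All of the note data here is made up for illustration.

def splice_takes(takes, regions):
    """Build one continuous part from several recorded takes.

    takes   -- list of event lists, one per recorded pass
    regions -- list of (take_index, start_beat, end_beat) picks
    """
    composite = []
    for take_index, start, end in regions:
        for time, note, velocity in takes[take_index]:
            if start <= time < end:
                composite.append((time, note, velocity))
    return sorted(composite)  # keep events in chronological order

# Two imaginary passes at the same two-bar phrase (4 beats per bar).
take_a = [(0.0, 60, 72), (1.0, 64, 58), (4.0, 67, 90), (5.0, 72, 40)]
take_b = [(0.0, 60, 80), (1.0, 64, 66), (4.0, 67, 85), (5.0, 72, 95)]

# Keep bar 1 from take A and bar 2 from take B.
part = splice_takes([take_a, take_b], [(0, 0.0, 4.0), (1, 4.0, 8.0)])
print(part)  # [(0.0, 60, 72), (1.0, 64, 58), (4.0, 67, 85), (5.0, 72, 95)]
```

The crucial detail is that the velocities ride along with the notes: the composite keeps however hard or soft I actually struck each key.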

When I tried to score this short film, I relied on inspirational lightning strikes as usual. I’ve learned by now how to wade out into the stream of the collective unconscious to catch a big fish, David Lynch style. Everything I got was really good, actually probably pretty damn impressive on a moment’s notice (in my experience, melodies need to steep like tea, but for years upon years if possible), but none of it was what the director envisioned. He namedropped Olafur Arnalds’ “They Sink” as a reference and it clicked. Acoustic! Also, E flat major, which turned out to be astute, but the piece I finally recorded that won the director’s approval was a workup of an old song of mine that ended up sounding like a classical guitarist soundtracking a beach wedding (a beautiful event I once witnessed). All it is in the end is acoustic guitar recorded messily into a tube compressor. Now, I know it was really the melodic content, the sentiment that the director was looking for. The other pieces I submitted were the wrong mood. This acoustic guitar instrumental wound up kinda cool, cavalier, ambivalent. That’s all true, but I think the texture must’ve been a pivotal part of it too. There’s just something about the way an acoustic instrument radiates sound in a room, and how that interacts with the realm of analog audio tech. It feels alive to us. It’s organic. It reminds us of ourselves, so we can imprint on it. That brings me to the real problem of recording synthesized piano pieces. Even if the MIDI data is spliced with my DNA, what can I do with that data to breathe life into the final product? There are a few really nerdy answers. If I really wanted to be as authentic as possible about this, I would forgo MIDI with its outdated, 2-dimensional conventions. Timing, velocity, and pitch are only 3 factors, expressive as they may be in conjunction. What if I used an MPE controller for more variables? 
Well, that would absolutely capture another element of myself in the note data, but it still doesn’t answer the question of what to do with it! I’m not as rich as Aphex Twin; I can’t buy a piano with MIDI triggers for the keys! The piano I have access to was obviously dropped at some point in its journey to its final resting place in the front room of this house where I’m renting. The keys are sunk into the keybed so you can’t wring the full range of expression from them; the dynamics are stuck in a dampened timbre. It’s a discouraging slog to play. It’s also horrifically out of tune, so not even good fodder for a sampler. What did I use to create the sound in these tracks then? It’s all a pitifully obsolete rompler from Y2K: the Roland XV-5080.

The 5080 is an anachronism now, but back in 2000, it was the zenith of Roland’s progress. “Rompler” is a pejorative term for a class of synthesizer: one whose oscillators are samples, recordings of acoustic sounds or synth waves. “Rompler” is a contraction of “ROM player”. In other hopefully less geeky words, it’s a synthesizer engine that doesn’t actually generate sound by itself, but just plays back the contents of a memory bank of recordings. To the worst kind of synth nerd, that’s an affront. A rompler isn’t a “pure” synthesizer, a legit instrument, but a convenient doodad to help sellout hacks churn out drivel. I think that’s really funny. I’ve heard hackneyed crap come from a Prophet VS, and I’ve heard ragged glory come from a Yamaha SY55. It’s the creator, not the tools. There are folks making scorched-earth industrial punk with General MIDI. So, trying to extract organic emotion from what’s basically a CD player with a home security system interface? I’m committed to that bit. The 5080 is digital to the core. Sampled waves through digital filters into digital amplifiers, summed into a digital multiFX section. Only at the last second does it become analog, and only because it has to. It could’ve very easily been more analog; it could’ve run sampled waves into the VCA/VCF chips used in analog synths from around that time, but it wouldn’t be anywhere near as powerful. The 5080 is a powerhouse. The basic architecture is a wide open horizon. On a patch level, it’s 4 oscillators, but each oscillator is its own individual synth with tuning & pitch scaling, frequency modulation, signal clipping, amplifier & filter with an envelope for each, and an LFO with its own mod matrix. The oscillators don’t generate their own sound electronically, but play back samples from a bank of thousands of sounds. 
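If that architecture sounds abstract, here’s a drastically simplified sketch in Python of what one oscillator-as-synth boils down to: a stored sample run through a filter and an amplifier envelope. The one-pole filter and linear decay are toy stand-ins of my own devising, nothing like Roland’s actual DSP:

```python
import math

def one_pole_lowpass(signal, fs, cutoff_hz):
    """Toy one-pole low-pass filter (the 5080's filters are far fancier)."""
    out, state = [], 0.0
    coeff = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / fs)
    for x in signal:
        state += coeff * (x - state)
        out.append(state)
    return out

def rompler_voice(sample, fs, cutoff_hz, amp_decay_s):
    """One oscillator: stored sample -> filter -> amp with a decay envelope."""
    filtered = one_pole_lowpass(sample, fs, cutoff_hz)
    return [x * max(0.0, 1.0 - n / (amp_decay_s * fs))
            for n, x in enumerate(filtered)]

# A stand-in "sampled wave": one second of a 220 Hz tone at 32 kHz.
FS = 32_000
wave = [math.sin(2 * math.pi * 220 * n / FS) for n in range(FS)]
voice = rompler_voice(wave, FS, cutoff_hz=2_000, amp_decay_s=0.5)
```

Multiply that little voice by four per patch, each with its own LFO and mod matrix, and then hand it the real star: the wave bank.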
Not just bread & butter orchestral instruments either, but weird stuff like fret noise, closeup captures of piano hammers in action, just the breath sounds of a flute, or clangorous synth waves that on their own are grating. Having the option to layer all these sounds in one patch is dizzying. Add in the option to clash two of the waves together in a distortion or ring modulator, and things can get outta hand quick. Yes, the 5080 is capable of saccharine cinematic soundscapes, but I’ve made some of the ugliest instruments imaginable with it too. You’d think a digital synth wouldn’t offer any special character of its own the way even a hybrid digital-waves-through-analog-filters-&-amps synth would, but this was only the year 2000; the tech was still in its early adolescence. Music manufacturers have been marching towards ironed-smooth perfection since the 80’s. May they never truly arrive there. I do not find a total absence of inconveniences compelling. I don’t want to just type in my MIDI data and hear it spat flawlessly back out at me. I want errors, artifacts. The 5080 offers them. I’m not even totally sure why, and I’ve done brain surgery on this thing to resurrect it from the sorry state I bought it for peanuts in. The sampled wavebanks are all culled from recording sessions done in the 90’s, both before digital recording reached high definition and before RAM was bountiful & cheap. The onboard sample RAM of the 5080 is tiny, megabytes compared to today’s gigabytes. Studio pros would laugh, but that’s ideal for me. To fit all those samples onto such cramped chips, the sample rate is degraded. Detail is jettisoned, unwanted side effects are invited in. The range of human hearing is supposed to top out at 20kHz, but it’s been demonstrated that we can still perceive frequencies beyond that. Respectable digital recording devices at the time topped out at 48k, but now they go all the way up to 96k, 192k. 
The samples in the 5080 mostly cut off abruptly at 32k. Most of them aren’t multisamples, but a single sample transposed over several octaves. When a sample is transposed lower than the original frequency, clarity is shaved off. The treble disappears by the time you get to bass notes. Instead, you start to hear ghost notes, a geode encrusting that music manufacturers seem to find embarrassing. I love it. I think it’s beautiful. It scintillates. Something similar happens in the high end when you transpose a sample up from the original pitch: aliasing. It’s the high frequencies doubling over on themselves, treble information compounding into new harmonics that may not sit in harmony with the fundamental pitch. It’s not always a pretty sound. It can mar a celestial melody. Sometimes, it even annoys me if I’m trying to record, say, a music box ditty that keeps devolving into a series of wrong notes duking it out with the right ones. Manufacturers have tricks for exorcising aliasing from a signal, but it’s often at the cost of clarity. Try as Roland might, they couldn’t evict all the artifacts from the 5080’s signal path. Ironically, of all the soundscaping tools they bestowed on this synth, none impart more realism than these inconveniences! Acoustic is the sound of inconvenience. Most acoustic instruments have terrible intonation. There’s just nothing for it. You can adjust bridges & saddles on an electric guitar, but my $1k Martin dreadnought has phantom overtones everywhere. There’s nothing that can be done. Thank god! If it was free of demons in the details, I would wonder if I was even really playing the thing or if it was a clever simulation. The sampled waves the 5080 relies on, a lot of them capture the nuances of acoustic inconvenience, but by nature of the lower resolution the synth is working with, a lot of those details are jettisoned. If it weren’t for the shortcomings of turn-of-the-millennium digital audio, the 5080 might just sound lifeless. 
As it works out in reality, the nuances it can’t help but exude make it an interesting instrument to work with!
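If you’d rather see the aliasing arithmetic than take my word for it, here’s a small self-contained Python sketch of the naive transposition a rompler performs. Transposing a 9 kHz tone up an octave at a 32 kHz sample rate asks for 18 kHz, which is above the 16 kHz Nyquist limit, so it folds back down to 32 - 18 = 14 kHz. (The numbers are mine, picked for the demo; I don’t know the 5080’s exact interpolation scheme.)

```python
# Why naive upward transposition aliases: play a tone an octave up by
# reading every other sample, and frequencies over Nyquist fold back.
import math

FS = 32_000          # sample rate, like the 5080's ~32 kHz waves
F0 = 9_000           # original pitch of the "sample"

# One second of the original tone (the small phase offset keeps samples
# off exact zero, which would confuse the crossing count below).
original = [math.sin(2 * math.pi * F0 * n / FS + 0.1) for n in range(FS)]

# "Transpose up an octave" the naive rompler way: skip every other sample.
octave_up = original[::2]

def estimated_hz(signal, fs):
    """Estimate dominant frequency from zero-crossing density."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if (a < 0) != (b < 0))
    return crossings * fs / (2 * len(signal))

print(estimated_hz(original, FS))   # ~9000 Hz, as expected
print(estimated_hz(octave_up, FS))  # ~14000 Hz: the 18 kHz target, aliased
```

That 14 kHz intruder is exactly the kind of new harmonic that may not sit in harmony with the fundamental.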

If you want a passable piano sound out of the XV-5080, all you have to do is initialize a patch. Initializing is a menu setting that resets a patch to a predetermined basic state. In the 5080’s case, it sets the envelopes, filter, amplifier, and mod matrix to wide open, zero variation. It also shuts off all the oscillators but the first one, and sets the first one’s wave sample to the very first one in the onboard memory: a softer piano sound. You could already create music with this sound, but after playing it for a moment, you realize it sounds stilted, joyless. There’s a lack of vitality. That has to be added. How can vitality be achieved? Roland provided a couple of means. The technology was still in its infancy, so Roland were a little naive in their aspirations. The shortcomings of the 5080 necessitated further development of the rompler concept. We’ve come very far since Y2K, but how does the 5080 fare? It’s pretty damn dodgy. There are phase issues, phantom clipping, nasty overtone spikes, and everything is swathed in the plastic wrap of early CD quality sound. The initial jump from carved grooves in wax & electromagnetic imprints on tape to zeroes & ones wasn’t a smooth transition. I remember playing a cassette of an album against the CD copy and laughing, because neither was perfect. The cassette was warmer, more organic, but had a comically higher noise floor. The CD sounded like a cheap & slightly queasy facsimile of itself, like a teleported man who hadn’t quite been reconstituted 100%. I much preferred the cassette. If you don’t know what the hell I’m talking about, if you never paid that much attention or just missed the CD era altogether, envision the sound of a low-res mp3. When you get down to 128kbps, you start to hear the missing, the information discarded. It manifests as odd clicks, pops, warbles, a strange aluminum shimmering tone coating the treble range. In extremes, it sounds like the music is playing through strange water on an alien planet. 
That’s what I’m talking about. It’s present in all CD player DAC’s if you listen too closely. The 5080 displays early CD DAC dirt in its final output. You’d think I’d hate that, but I’ve come to love it! I’m not sure when I changed my tune; it must’ve come over me at some point since CD’s were my only option. Now, I embrace it, per Brian Eno’s treatise on the shortcomings of a supplanted technology becoming their hallmark in an artist’s exploration of them.

Anyway, how does this all shake out in practice? How do you create a reasonably convincing piano sound from outdated digital technology? Let’s dig into those means Roland provided for us. First is the oscillator. The 5080 offers thousands of samples in its internal wave memory, but offers up to 6 slots for expansion cards too. It also has user sample RAM and allows you to import your own samples. The internal memory of the 5080 offers around 25 piano sounds, including recordings of piano hammer noises without a fundamental pitch or just the attack portion of a piano strike. I have a few expansion cards I collected fixing up & reselling other Roland synths in the JV/XP/XV series, but none with more piano samples. I could import some really nice piano samples into the user wave memory, but the user wave memory caps out at 128 megabytes. This was generous for Y2K, but today it’s not even enough to load up a decent set of multisamples. I’d have to truncate their duration (NOOOO, anything but truncating a duration!!!) and downsample them into smaller wav files because the 5080 is so old, it doesn’t work with mp3’s. Even then, I might not have any memory space left to autoload my trove of lo-fi single cycle waves too! So, guess I’m stuck with the onboard piano samples for now. No worries, they’re a great starting point, even if they’re sorely inadequate right off the bat. If you only had one oscillator to work with, you would be despondent. The 5080 was designed to be layered and filtered and manipulated. It is a true synthesizer. Unadulterated, the piano waves are a little thin, brittle, plunky. When you consider that that’s just because they’re meant to be filtered and chopped up by envelopes and layered with other sounds, you understand that it wasn’t an oversight. Since the 80’s, Roland has been obsessed with the idea of layering different attack sounds over sustain sounds. 
The D50 kicked off this novel form of synthesis and is now one of the most beloved & recognizable digital synths there will ever be. The 5080 takes everything a step further with the massive trove of sampled waves. If a piano sound is too wan and decays too quickly, just layer it with a distorted Rhodes sample! Pair it with a fat Oberheim saw wave. If the soft piano samples are perfect for the quieter sections of your song, but you need more high-end punch for the chorus, layer it with the spiky sounding synth piano wave pulled from the MKS-20 digital piano module and filter it, set the cutoff to add brightness when the notes are played harder. Which brings me to the next tool Roland gave us: the filter. The filter siphons off frequencies depending on what type of filter you use. Most filters are low-pass: they shave off treble. The 5080 filters are multimode; you can switch ’em from low-pass to high-pass (filtering low frequencies) to band-pass (filtering everything but a very narrow clump of frequencies). You can task the filter with filtering more or less depending on how hard you hit the notes or how much mod wheel you dial in. For tones functioning as a fundamental pitch, you can add a low-pass filter that filters less treble as you play harder to simulate the way a piano responds to really banging on the keys. You can also use a high-pass filter on other oscillators so you could keep just the key ticks from an electric piano sound in the mix, but introduce more of the fundamental pitch the harder you play for a thicker sound. To my ears, the basic piano samples in the 5080 are a little dry and dark. For gentle, sentimental scores, maximum impact can come from the sound of an acoustic instrument played too softly into a mic preamp turned up very high to compensate. 
At that gain stage, you can hear so much normally-unwanted detail from the instrument, mechanical sounds that I find imbue the music with more life, even a sense of intimacy you don’t get from a louder performance mic’d from further away. To simulate a closemic’d sound for piano, find one of the samples of an acoustic instrument’s operating noises. The piano hammer sample is good, but the Strat fret & pick noise is great too. For softer sounds, you can blend it in just to the point of audibility, filter it, and even drop the pitch an octave so the sample’s high-end energy is diminished. It’s possible to achieve sweet results before even having to resort to using Velocity, but Velocity is absolutely the 5080’s most effective weapon. In MIDI terms, Velocity is the data for how hard or soft you hit a note. It’s encoded with MIDI notes if you play on a Velocity-equipped keyboard. It’s practically primitive now, but it’s still such sleight-of-hand. It’s the difference between the uncanny valley & naturalism in electronic music. It’s the human element that would otherwise be missed. The 5080 allows velocity to control a host of parameters all at once. It can trigger a swap between sample layers, open or close the filter, or tell the amplifier to shout or shut the heck up. No, it was not the only expressive tool we would ever need, and MPE tech has improved upon it to the point that it’s probably obsolete. It’s a neat trick for coaxing humanity from these outdated synths, but its limits become clear pretty quickly. It can’t change the data in a sampled sound. Velocity assigned to the amp of an oscillator would only make it quieter or louder; it wouldn’t interact with the harmonic content of the sampled wave the way striking a piano key vs. gently tapping it generates entirely different tones. If the output signal of the synth is run hot into a preamp, velocity could change the harmonic saturation, but that’s still only a nitpicky subtlety. 
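Conceptually, that routing is just one incoming number fanned out to several patch parameters at once. A toy Python version might look like this; the ranges, the curve, and the switch point are all my inventions, not Roland’s actual response curves:

```python
# A toy version of routing one MIDI velocity (0-127) to several patch
# parameters at once, roughly in the spirit of the 5080's mod routing.
# The numbers below are invented for illustration.

def velocity_routing(velocity):
    v = max(0, min(127, velocity)) / 127.0
    return {
        # Open the low-pass filter as you play harder (brighter attack).
        "cutoff_hz": 400.0 + v * 7_600.0,
        # Amp follows a gentle curve so soft notes aren't inaudible.
        "amp": 0.2 + 0.8 * v ** 1.5,
        # Swap to the harder sample layer above a velocity threshold.
        "layer": "forte" if velocity >= 96 else "mezzopiano",
    }

print(velocity_routing(30)["layer"])    # mezzopiano
print(velocity_routing(120)["layer"])   # forte
```

The point is the fan-out: one physical gesture nudges timbre, loudness, and sample choice together, which is what keeps the result from sounding like a mere volume knob.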
How do I overcome that fact? The piano samples in the 5080 vary in keyboard velocity from mezzopiano to forte. Not a universe of range, but I can work with it. You can set one oscillator layer to yield to another as you play the keyboard harder, so set the mezzopiano sample to blend into the mezzoforte and on to the forte sample with your playing dynamics… It’s not the best, but in a mix, heavily reverbed as it will be, it actually sings. Again, the XV-5080 is meant to be layered; every sound is only one voice in a choir. Altogether, it adds up to more than the sonic impression a single patch implies. Still, mixing engineers know: you can’t just dash in a parade of ingredients and expect them to gel into a meal. There’s gotta be GLUE. Some element has to tie it all together. Here’s where the last trick of the 5080 comes in: the FX. The XV-5080 has a multiFX engine you can run each patch through. There’s one souped-up FX slot (MFX) with hundreds of options, everything from light EQing to multiFX patches for hair metal guitarists. I like using the Enhancer to draw out artifacts in the details of samples Roland wanted to hide. There’s also a dedicated chorus & reverb bus, and typical of Roland, they’re lush & enveloping. You can dial in the best of both worlds all at once, the dreamy & the nightmarish. There are some esoteric offerings in the MFX section. Guitar amp modeling, weird distortions, strange modulated delays, pitch shifting that isn’t entirely successful… One of the FX options is a lo-fi compressor. It doesn’t just squash the signal, it adds bit crushing & aliasing artifacts & radio static & vinyl crackle too. Just slathered onto a sound, it can come across as chintzy, but dialed in sparingly, it can pepper in just the right amount of grit I hope for. This is a digital synth, though, so it should be recorded too hot into a 1073 style preamp anyway. I employed all these tricks in my pursuit of a great dream piano sound, yet something still isn’t right. 
It’s not wrong enough!

What I need is a prepared piano emulation.

Prepared piano becomes a whole new instrument. It takes the aural equivalent of a sunrise, such a normality to us by now that we can take it for boring granted, and introduces an element of chaos that makes it somehow even more beautiful than it was unhindered. Humans gravitate towards the imperfections in music, and I love a noise that haunts a melody. The sound of disruption can be enthralling. Prepared piano is the sound of aristocratic eloquence cracking under the meddling of nuisances. By nature, the sound of that failure is what endears it to us. An audible representation of the struggle of life, as portrayed as organically as inhumanly possible… Why the hell am I trying to replicate it with this haplessly digital synthesizer? By synthesizing the sound of mechanical limitation with a limited computer instrument, I hope to create a new experience: the sound of emulation failing in its imitation. I have to be careful, though, or it can tumble into an off-putting uncanny valley, and I can sterilize my music of the human element I’ve gone to such lengths to preserve. The question is, can synthesized sound be made acoustic? Can a binary boy become a real boy? Well, yeah, sorta. Through distortion! The sound of failure. We may grow impatient when our computers display limitations, but in their failure, we see ourselves. After all, they only do what we tell them. If they fail, was it not our failure, really? When I run the MIDI of my playing into the XV-5080, and it spits it back out with the human element erased, that’s my failure. I did not succeed in keeping myself intact in the MIDI data. If I can get the 5080 to fail at its goal (to be as perfect a powerful music computer as was possible a quarter century ago) instead of mine, I have won, the human has triumphed, and the human element I seek to impart might just find itself present. How do I get a computer to fail? It turns out the XV-5080 is exactly the anachronism it needs to be for my purposes. 
If it were the latest Roland workstation, with a cutting edge physically modeled piano program layered with pristine velocity-switching multisamples of a Steinway grand, I would be up shit crick. You might hear something you like in the end result, but it wouldn’t be one iota of me. It might even fool the average listener, and prompt them to ask when I became such an accomplished pianist, but if I basked in the moment, I would be a fraud. I don’t want you to hear this music and wonder if it was a microphone recording of me actually playing a real piano in a room. I want you to hear this music and be transported to another world, parallel to our own, and only ever so slightly extraterrestrial. I want you to kinda wonder what instrument you’re even listening to. Some of the timbres present suggest a piano, but the attack is too clacky, the decay portion of the envelope is all wrong, the sustain lingers unnaturally long. I want you to hear this music and notice the distortion, how it sparkles in the moonlight like a shattered gemstone. How it reflects sunlight like shrapnel. I hope that stands out just enough that you notice it, but not enough that it overtakes the impact of the treacly melody or the melancholy reverb. I hope all the elements blend together to create tension, dissonance, but also harmony, unique resolution that couldn’t have germinated from another source. In the Roland XV-5080’s limitations, in the way its digital convertor circuitry glitches, or the way the waveforms clash against each other, or the way the signal clips when an audio engineer going by the book would say it shouldn’t… I hope you hear me in those elements. I put them there! I set the stage for them to happen. I hope you hear them and understand why I decided they were necessary, even if you may not agree. In the end, that’s the human element all art needs to be considered art: human decisionmaking. 
It’s what AI will always lack, and why I don’t feel conflicted in exploring this approach even though I’m the type of imperfection perfectionist who records every guitar part again for each repetition in the song, even though I could just copypaste the first iteration into the second instance. My body is a machine that fails me. Any machine I can make fail becomes an extension of my body.

Last night, I finished the structure & melodic details of my next piano piece. I was lying in bed as I worked on it, Handel-style, so I just used a VST as a stand-in for the sounds. The VST I used was Plogue Sforzando, an SFZ file player. A VST rompler, similar to a software Roland XV-5080, in other words. I auditioned several different piano patches made by strangers on the internet, found one that just seemed to fit. The piano piece is a romantic one, a melody that came to me on the first date I ever went on way back when I was a teen and has knocked around in my imagination ever since. It’s a soft serenade in A# major, a key that triggers my synaesthesia to show me amber hues, warm summer nights under electric lights with the gentle rustle of trees swaying in a breeze. It’s a piece that needs a pretty damn gentle piano sound. This patch in Sforzando, it approached the ideal in my head, but didn’t quite get there. The piece is composed of 2 piano parts, rhythm & lead, and a warm synth bass. I tried running the piano parts through distortion. At just the threshold of breakup, I found perfect imperfection. The rhythm part is powered by MIDI of me playing without much variance in the dynamics, so it jibes with the MO of limited velocity control I employed for the XV-5080. It sounds like a piano, but slightly unreal, and suggests synthesis. The lead line, though, is me playing expressively. Sforzando offers fairly powerful velocity layer-switching, and this piano patch has surprisingly decent dynamics for a thing I downloaded for free off the internet. It took some extra practice, but it was worth it to get the dynamics right, line them up to shift with the moods of the song. It worked out so well that I wondered if the piece was finished, if I even needed to try running the MIDI through the XV-5080 to achieve a final product. 
The expressivity of Sforzando’s multisample layer-switching is much greater than what I could coax out of the 5080 with just amplitude and filter cutoff or even the 5080’s layer-switching. This Sforzando patch just feels more natural. Is it the imperfections in the sample recordings? That the room is more audible in these samples? Run through distortion, all the artifacts of a person recording their own playing in a room leap out of the audio image. As much as the frequencies the distortion conjured out of the tracks, it was these artifacts that I was missing upon listening back to the initial mix. Bringing them to the fore made me forget for a moment that I wasn’t hearing a microphone recording of me playing an acoustic instrument in a room. The illusion was that convincing. I’m still not finished, though. I hear only more of what’s missing when I listen back. I think I know what it is: running these yet-digital tracks through an analog preamp, then replacing the plugin reverb with a similar spaciousness from my Lexicon 300.
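For anyone who wants to peek under Sforzando’s hood: SFZ is just a plain-text format, and velocity layer-switching is spelled out as regions with velocity ranges. A hypothetical three-layer split might look like this (the sample filenames are invented; any mp/mf/f piano recordings would do):

```text
// Hypothetical velocity-split piano; the filenames are placeholders.
<group> amp_veltrack=100 ampeg_release=0.4

<region> sample=piano_mp.wav lovel=0   hivel=63
<region> sample=piano_mf.wav lovel=64  hivel=95
<region> sample=piano_f.wav  lovel=96  hivel=127
```

There are also crossfade opcodes (xfin_lovel & friends) that blend layers instead of hard-switching them, closer in spirit to the oscillator crossfading I described on the 5080.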

Art is as much what the artist intends as what the audience interprets. Maybe none of this matters, and listeners will only hear a pleasant melody they either bond with or tune out. Maybe for my next piece (or this one if a conclusion can’t be reached through my current MO) I’ll try to find someone with a piano in decent shape I may plod away on for an afternoon. I could hold a blind test, see if anyone can spot which one is all me or only partly me. It would be interesting just to see if anyone cared about the distinction. I know that for myself, it’s not really me I want to hear in this music, only the human element the music requires. I would love if, while I’m listening to it, I forgot I’m me at all.

Here’s a demonstration of this case study. It’s 100% Roland XV-5080, run through a Lexicon 300. Prepared piano & guitar pinch harmonic sounds. An ode to 90’s George Winston, Enya, and Steve Roach. Also a sendup of the idea of eternal recurrence. Eternal recurrence is the notion that everything that happens has happened before and is only ever happening again. I set a reiterated melody over shifting chords then altered the melody over the same chords to demonstrate how, even if there are observable historic parallels between past & present events, context has the power to render them utterly disparate. Nothing is truly cyclical, nothing is exactly the same. It’s impossible. The 5080 is proof. As a digital synth playing static samples, each repeated note should sound the same, but they never do for a host of reasons. I could ferret all of them out, cut the patch down to one oscillator with the filter & amplitude all the way up and the LFO off and the velocity not touching anything, and still the notes would vary noticeably because this is an old synth with an outdated microprocessor translating computer commands into audio through a digital convertor that’s not getting any younger.

https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/2072183540&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true&visual=true

Spooky Mulder · Recurrence
