Arg, no more lengthy replies while needing to catch a plane. Of course, I 
didn’t mean to say Alesis synths (and most others) were drop sample—I meant 
linear interpolation. The point was that stuff that seems to be substandard can 
be fine under musical circumstances...

Sent from my iPhone

> On Aug 6, 2018, at 11:57 AM, Nigel Redmon <> wrote:
> Hi Robert,
> On the drop-sample issue:
> Yes, it was a comment about “what you can get away with”, not about 
> precision. First, it’s frequency dependent (a sample or half is a much bigger 
> relative error for high frequencies), frequency content (harmonics), relative 
> harmonic amplitudes (upper harmonics are usually low amplitude), relative 
> oversampling (at constant table size, higher tables are more oversampled), 
> etc. So, for a sawtooth for instance, the tables really don’t have to be so 
> big before it’s awfully hard to hear the difference between that and linear 
> interpolation. The 512K table analysis is probably similar to looking at 
> digital clipping and figuring out the oversampling ratio needed. By the 
> numbers, the answer would be that you need to upsample to something like a 5 
> MHz rate (using that number only because I recall reading that conclusion in 
> someone’s analysis once). In reality, you can get away with much less, 
> because when you calculate worst-case expected clipping levels, you tend to 
> forget you’ll have a helluva time hearing the aliased tones amongst all the 
> “correct” harmonic grunge, despite what your dB attenuation calculations tell 
> you. :-)
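A quick numerical sketch of that "frequency dependent" point (my own illustration, not code from the thread): read a sine table back at many fractional phases with drop-sample vs. linear interpolation and compare SNR. Doubling the table size — equivalently, doubling the effective oversampling of a given harmonic — buys roughly 6 dB for drop-sample but roughly 12 dB for linear.

```python
import math

def read_table(table, phase, interp):
    """Read one stored cycle at fractional phase in [0, 1)."""
    n = len(table)
    pos = phase * n
    i = int(pos)
    if interp == "drop":                      # drop-sample: just truncate
        return table[i]
    frac = pos - i                            # linear: blend two neighbors
    return table[i] * (1.0 - frac) + table[(i + 1) % n] * frac

def snr_db(table_size, interp, steps=4096):
    """Approximate SNR of a sine table read at many fractional phases."""
    table = [math.sin(2 * math.pi * k / table_size) for k in range(table_size)]
    golden = (5 ** 0.5 - 1) / 2               # irrational stride covers phases
    sig = err = 0.0
    for k in range(steps):
        phase = (k * golden) % 1.0
        ideal = math.sin(2 * math.pi * phase)
        e = read_table(table, phase, interp) - ideal
        sig += ideal * ideal
        err += e * e
    return 10 * math.log10(sig / err)
```

Since upper harmonics of a sawtooth are low in amplitude, their (larger) drop-sample error is buried that much deeper, which is why modest tables can already be hard to distinguish from linear interpolation.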
> To be clear, though, I’m not advocating zero-order—linear interpolation is 
> cheap. The code on my website was a compile-time switch, for the intent that 
> the user/student can compare the effects. I think it’s good to think about 
> tradeoffs and why some may work and others not. In the mid ‘90s, I went to 
> work for my old Oberheim mates at Line 6. DSP had been a hobby for years, and 
> at a NAMM Marcus said hey why don’t you come work for us. Marcus was an 
> analog guy, not a DSP guy. But he taught me one important thing, early on: 
> The topic of interpolation, or lack thereof, came up. I remember I 
> said something like, “but that’s terrible!”. He replied that every Alesis 
> synth ever made was drop sample. He didn’t have to say more—I immediately 
> realized that probably most samplers ever made were that way (not including 
> E-mu), because samplers typically didn’t need to cater to the general case. 
> You didn’t need to shift pitch very far, because real instruments need to be 
> heavily multisampled anyway. And, of course, musical instruments only need to 
> sound “good” (subjective), not fit an audio specification.
> It’s doubtful that many people will really come down to a performance 
> issue where skipping linear interpolation is the difference between 
> realizing a plugin or device and not. On the old Digidesign TDM systems, I 
> ran into similar tradeoffs often, where I’d have to do something that on 
> paper seemed to be a horrible thing to do, but to the ear it was fine, and 
> it kept the product viable by staying within the available cycle count.
>> Nigel, i remember your code didn't require big tables and you could have 
>> each wavetable a different size (i think you had the accumulated phase be a 
>> float between 0 and 1 and that was scaled to the wavetable size, right?) but 
>> then that might mean you have to do better interpolation than linear, if you 
>> want it clean.
> Similar to above—for native computer code, there’s little point in variable 
> table sizes, mainly a thought exercise. I think somewhere in the articles I 
> also noted that if you really needed to save table space (say in ROM, in a 
> system with limited RAM to expand), it made sense to reduce/track the table 
> sizes only up to a point. I think I gave an audio example of one that tracked 
> octaves with halved table lengths, but up to a minimum of 64 samples. Again, 
> this was mostly a thought exercise, exploring the edge of overt aliasing.
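To make that scheme concrete, here is a sketch (my own, with made-up sizes and a sawtooth — not Nigel's actual code): one table per octave, harmonic count halving going up, table length halving too but floored at 64 samples, and the phase kept as a float in [0, 1) that is scaled to whichever table is active.

```python
import math

MIN_SIZE = 64          # floor on table length, per the octave-tracking scheme

def make_saw_table(size, harmonics):
    """One cycle of an additive (bandlimited) sawtooth."""
    return [sum(math.sin(2 * math.pi * h * k / size) / h
                for h in range(1, harmonics + 1))
            for k in range(size)]

# One table per octave: harmonics halve going up; sizes track, down to MIN_SIZE.
tables = []
size, harmonics = 1024, 256
while harmonics >= 1:
    tables.append(make_saw_table(size, harmonics))
    harmonics //= 2
    size = max(MIN_SIZE, size // 2)

def osc_read(table, phase):
    """Phase is a float in [0, 1); scale it to this table's own length."""
    pos = phase * len(table)
    i = int(pos)
    frac = pos - i
    return table[i] * (1.0 - frac) + table[(i + 1) % len(table)] * frac
```

Because the phase is normalized, tables of different lengths are interchangeable at the read point, which is what makes the variable-size experiment cheap to try.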
> Hope it’s apparent I was just going off on a thought tangent there, some 
> things I think are good to think about for people getting started. Would have 
> been much shorter if just replying to you ;-)
> Nigel
>> On Aug 5, 2018, at 4:27 PM, robert bristow-johnson 
>> <> wrote:
>> ---------------------------- Original Message ----------------------------
>> Subject: Re: [music-dsp] Antialiased OSC
>> From: "Nigel Redmon" <>
>> Date: Sun, August 5, 2018 1:30 pm
>> To:
>> --------------------------------------------------------------------------
>> > Yes, that’s a good way, not only for LFO but for that rare time you want 
>> > to sweep down into the nether regions to show off.
>> i, personally, would rather see a consistent method used throughout the MIDI 
>> keyboard range; high notes or low.  it's hard to gracefully transition from 
>> one method to a totally different method while the note sweeps.  like what 
>> if portamento is turned on?  the only way to clicklessly jump from wavetable 
>> to a "naive" sawtooth would be to crossfade.  but crossfading to a wavetable 
>> richer in harmonics is already built in.  and what if the "classic" waveform 
>> wasn't a saw but something else?  more general?
>> > I think a lot of people don’t consider that the error of a “naive” 
>> > oscillator becomes increasingly smaller for lower frequencies. Of course, 
>> > it’s waveform specific, so that’s why I suggested bigger tables. (Side 
>> > comment: If you get big enough tables, you could choose to skip linear 
>> > interpolation altogether—at constant table size, the higher frequency 
>> > octave/whatever tables, where it matters more, will be progressively more 
>> > oversampled anyway.)
>> well, Duane Wise and i visited this drop-sample vs. linear vs. various 
>> different cubic splines (Lagrange, Hermite...) question a couple decades ago.  for 
>> really high quality audio (not the same as an electronic musical 
>> instrument), i had been able to show that, for 120 dB S/N, 512x oversampling 
>> is sufficient for linear interpolation but 512K is what is needed for drop 
>> sample.  even relaxing those standards, choosing to forgo linear 
>> interpolation for drop-sample "interpolation" might require bigger 
>> wavetables than you might wanna pay for.  for the general wavetable synth 
>> (or NCO or DDS or whatever you wanna call this LUT thing, including just 
>> sample playback) i would never recommend interpolation cruder than linear.  
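A back-of-envelope version of that analysis (my reconstruction, not the Wise/bristow-johnson paper itself): drop-sample is a zero-order hold and linear interpolation a first-order (triangular) hold, so their spectra roll off as sinc and sinc² respectively, giving image rejection that grows at roughly 6 vs. 12 dB per octave of oversampling:

```latex
% drop-sample (ZOH):  H_0(f) = \operatorname{sinc}(f/f_s)   \;\sim\; 6\ \text{dB/oct}
% linear (FOH):       H_1(f) = \operatorname{sinc}^2(f/f_s) \;\sim\; 12\ \text{dB/oct}
\text{Image rejection at oversampling ratio } R:\qquad
A_{\text{drop}} \approx 20\log_{10} R, \qquad
A_{\text{lin}} \approx 40\log_{10} R .
```

Setting both to 120 dB gives $R_{\text{drop}} \sim 10^{6}$ and $R_{\text{lin}} \sim 10^{3}$, consistent to within constant factors with the 512x and 512K figures quoted here.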
>> Nigel, i remember your code didn't require big tables and you could have 
>> each wavetable a different size (i think you had the accumulated phase be a 
>> float between 0 and 1 and that was scaled to the wavetable size, right?) but 
>> then that might mean you have to do better interpolation than linear, if you 
>> want it clean.
>> > Funny thing I found in writing the wavetable articles. One soft synth 
>> > developer dismissed the whole idea of wavetables (in favor of minBLEPs, 
>> > etc.). When I pointed out that wavetables allow any waveform, he said the 
>> > other methods did too. I questioned that assertion by giving an example of 
>> > a wavetable with a few arbitrary harmonics. He countered that it wasn’t a 
>> > waveform. I guess some people only consider the basic synth waves as 
>> > “waveforms”. :-D
>> >
>> i've had arguments like this with other Kurzweil people while i worked there 
>> a decade ago (still such a waste, when you consider how good their 
>> sample-playback, looping, and interpolation hardware was and how much work 
>> went into it; only a small modification was needed to make it into a decent 
>> wavetable synth with morphing).
>> for me, a "waveform" is any quasi-periodic function.  A note from any 
>> decently harmonic instrument: piano, fiddle, a plucked guitar, oboe, 
>> trumpet, flute; all of those can be done with wavetable synthesis (and most, 
>> maybe all, of them can be limited to 127 harmonics, allowing archived 
>> wavetables to be as small as 256 samples).
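The 127-harmonic / 256-sample pairing is just Nyquist counting: an N-point cycle can hold harmonics 1 through N/2 − 1. A sketch (a hypothetical helper, not anything from the thread):

```python
import math

def table_from_harmonics(amps, size=256):
    """Build one cycle from harmonic amplitudes.  A 256-point table holds
    harmonics 1..127 (i.e. up to size // 2 - 1) without aliasing."""
    assert len(amps) <= size // 2 - 1, "too many harmonics for this table size"
    return [sum(a * math.sin(2 * math.pi * (h + 1) * k / size)
                for h, a in enumerate(amps))
            for k in range(size)]
```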
>> these are the two necessary ingredients to wavetable synthesis:  
>> quasi-periodic note (that means it can be represented as a Fourier series 
>> with slowly-changing Fourier coefficients) and bandlimited.  if it's 
>> quasi-periodic and bandlimited, it can be done with wavetable synthesis.  to 
>> me, for someone to argue against that is to argue against Fourier and 
>> Shannon.
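One way to write down what "quasi-periodic and bandlimited" means here (my notation, not Robert's):

```latex
x(t) \;\approx\; \sum_{k=1}^{K} a_k(t)\,
\cos\!\bigl(2\pi k f_0(t)\, t + \phi_k(t)\bigr),
\qquad K f_0 < \tfrac{f_s}{2},
```

where the amplitudes $a_k(t)$, phases $\phi_k(t)$, and fundamental $f_0(t)$ all vary slowly compared to the period $1/f_0$ — exactly the situation a sequence of crossfaded single-cycle wavetables can represent.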
>> there is a straight-forward way of pitch tracking the sampled note from 
>> attack to release, and from that slowly-changing period information, there 
>> is a straight-forward way to resample it to 256 points per cycle and 
>> convert each adjacent cycle into a wavetable.  that's a lotta redundant 
>> data and most of the wavetables (nearly all of them) can be culled with the 
>> assumption that the wavetables surviving the culling process will be 
>> linearly cross-faded from one to the next. 
>> and if several notes (say up and down the keyboard) are sampled, there is a 
>> way to align the wavetables (before culling) between the different notes to 
>> be phase aligned.  then, say you have a split every half octave, the note at 
>> E-flat can be a mix of the wavetables for C below and F# above.  it's like 
>> the F# is pitched down 3 semitones and the C is pitched up 3 semitones and 
>> the Eb is a phase-aligned mix of the two.  this can be done with any 
>> harmonic or quasi-periodic instrument, even a piano (but maybe you will need 
>> more than 2 splits per octave).
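Both the culling crossfade and the key-split mix described above reduce to the same operation: a weighted sum of phase-aligned single-cycle tables. A minimal sketch (my own, with hypothetical table names):

```python
def mix_tables(t0, t1, a):
    """Linear crossfade between two phase-aligned single-cycle wavetables.
    a = 0 gives t0, a = 1 gives t1."""
    assert len(t0) == len(t1), "tables must be equal length and phase-aligned"
    return [(1.0 - a) * x + a * y for x, y in zip(t0, t1)]

# With half-octave splits, E-flat sits 3 semitones from both the C table below
# and the F# table above, so (roughly) equal weights:
# eflat = mix_tables(c_table, fsharp_table, 0.5)
```

The phase alignment done before culling is what makes this sum safe; mixing unaligned tables would instead cause comb-filter-like cancellation.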
>> > Hard sync is another topic...
>> hard sync is sorta hard, but still very doable with wavetable (and morphing 
>> along one dimension) as long as one is willing to put a lotta memory into 
>> it.  each incremental change in the slave/master frequency ratio (which is a 
>> timbre control) will require a separate wavetable to cross-fade into and out of.
>> --
>> r b-j               
>> "Imagination is more important than knowledge."
>> _______________________________________________
>> dupswapdrop: music-dsp mailing list