Hi Kevin,

Wavetables are for synthesizing ANY band-limited *periodic* signal. The BLEP family of methods, on the other hand, is for synthesizing band-limited *discontinuities* (steps, and/or discontinuities in higher-order derivatives).

It is true that BLEP can be used to synthesize SOME band-limited periodic signals (typically those with a few low-order discontinuities per cycle, such as square, saw and triangle waveforms). In this case, BLEP can be thought of as adding "corrective grains" that cancel out the aliasing present in a naive (aliased) waveform that has sample-aligned discontinuities. The various BL-family methods differ mainly in how they synthesize the corrective grains.
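To make the "corrective grain" picture concrete, here is a rough C++ sketch of a sawtooth using the common two-sample polyBLEP residual as the grain (my own naming and structure, not taken from any particular codebase):

    // Two-sample polynomial approximation of (band-limited step - ideal step).
    // t is the oscillator phase in [0,1), dt is the phase increment per sample.
    static float polyBlepResidual(float t, float dt)
    {
        if (t < dt) {                         // just after the wrap
            float x = t / dt;
            return x + x - x * x - 1.0f;
        }
        if (t > 1.0f - dt) {                  // just before the wrap
            float x = (t - 1.0f) / dt;
            return x * x + x + x + 1.0f;
        }
        return 0.0f;                          // grain is zero away from the discontinuity
    }

    // One output sample: naive ramp minus the corrective grain at the wrap.
    float sawSample(float &phase, float dt)
    {
        float y = 2.0f * phase - 1.0f;        // naive (aliased) sawtooth in [-1, 1)
        y -= polyBlepResidual(phase, dt);     // cancel the aliasing of the step
        phase += dt;
        if (phase >= 1.0f)
            phase -= 1.0f;
        return y;
    }

The residual is non-zero for only one sample either side of the wrap; that short burst is the "grain" being mixed into the naive waveform.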

If all you need to do is synthesize bandlimited periodic signals, I don't see many benefits to BLEP methods over wavetable synthesis.

Where BLEP comes into its own (in theory at least) is when the signal contains discontinuities that are not synchronous with the fundamental period. The obvious application is hard sync, where the discontinuities move around relative to the phase of the primary (synced) waveform.
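For example (a rough C++ sketch, my own naming, with the actual grain insertion elided): on each master wrap the slave is reset part-way through a sample, so both the timing and the size of the step change from one master period to the next, which is exactly what a single precomputed periodic wavetable of the combined signal cannot capture.

    struct HardSyncSaw {
        float masterPhase = 0.0f;
        float slavePhase  = 0.0f;

        float tick(float masterInc, float slaveInc)     // phase increments per sample
        {
            masterPhase += masterInc;
            slavePhase  += slaveInc;

            if (masterPhase >= 1.0f) {                  // sync event this sample
                masterPhase -= 1.0f;
                float frac   = masterPhase / masterInc; // fraction of the sample remaining after the reset
                float before = slavePhase - frac * slaveInc;      // slave phase at the reset moment
                float step   = (2.0f * before - 1.0f) - (-1.0f);  // height of the jump down to -1
                slavePhase   = frac * slaveInc;         // slave restarts part-way through the sample
                // A band-limited version would add a BLEP grain of amplitude
                // 'step' at fractional offset 'frac'; both differ on every
                // master period.
                (void)step; (void)frac;
            }
            if (slavePhase >= 1.0f)                     // the slave's own wrap also needs a grain
                slavePhase -= 1.0f;
            return 2.0f * slavePhase - 1.0f;            // naive (aliased) output
        }
    };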

The original BLEP method used wavetables for the corrective grains, so it has no obvious performance benefit over periodic wavetable synthesis. Other derived methods (maybe polyBLEP?) don't use wavetables for the corrective grains, so they might have benefits in settings where RAM is limited or where the cost of memory access and/or cache pollution is high (e.g. modern memory hierarchies) -- but you'd need to measure!
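As a back-of-envelope comparison (my numbers, purely illustrative): a mipmapped wavetable oscillator with 2 tables per octave over 10 octaves at 2048 floats per table needs roughly 10 x 2 x 2048 x 4 bytes = 160 kB per waveform, whereas a polynomial residual like the one sketched above needs no table data at all, just a handful of multiplies per sample near each discontinuity.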

You mention frequency modulation. A couple of thoughts on that:

(1) With wavetable switching, frequency modulation will cause high-frequency harmonics to fade in and out as the wavetables are crossfaded -- a kind of amplitude modulation on the high-order harmonics. The strength of the effect depends on the amount of high-frequency content in the waveform and on the number of wavetables (per octave, say): fewer wavetables per octave means lower-frequency harmonics are affected; more wavetables per octave lessens the effect on low-order harmonics but increases the rate of the amplitude modulation. To some extent you can push this AM effect to higher frequencies by allowing some aliasing (say above 18kHz). You could eliminate the AM effect entirely with 2x oversampling.
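A minimal sketch of the crossfaded table read I have in mind (C++, my own naming; tables[k] holds the waveform band-limited for k octaves above a base frequency f0Hz). Under FM the 'mix' value sweeps back and forth, which is exactly the amplitude modulation of the top harmonics described above:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    float readCrossfaded(const std::vector<std::vector<float>> &tables,
                         float phase, float freqHz, float f0Hz)
    {
        // Position in the table stack, in octaves above the base table.
        float octave = std::log2(freqHz / f0Hz);
        float pos    = std::clamp(octave, 0.0f, float(tables.size() - 1));
        int   k      = std::min(int(pos), int(tables.size()) - 2);
        float mix    = pos - float(k);        // 0..1 crossfade between tables k and k+1

        auto read = [&](const std::vector<float> &t) {
            float idx = phase * float(t.size());
            int   i0  = int(idx) % int(t.size());
            int   i1  = (i0 + 1) % int(t.size());
            float fr  = idx - std::floor(idx);
            return t[i0] + fr * (t[i1] - t[i0]);   // linear interpolation
        };
        return (1.0f - mix) * read(tables[k]) + mix * read(tables[k + 1]);
    }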

(2) With BLEP-type methods, full alias suppression depends on generating corrective band-limited pulses for all non-zero higher-order derivatives of the signal. Unless your original signal is a square/rectangle waveform, any frequency modulation will introduce additional derivative terms (product rule) that need to be compensated. For sufficiently slow, low-depth modulation you may be able to ignore these terms, but beyond some threshold they will become significant and would need to be dealt with. I don't recall how PolyBLEP deals with higher-order corrective terms.
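To spell out the product-rule point (plain-text notation, my sketch): if the output is y(t) = f(phi(t)) for a waveform shape f and modulated phase phi(t), then

    y'(t)  = f'(phi(t)) * phi'(t)
    y''(t) = f''(phi(t)) * phi'(t)^2 + f'(phi(t)) * phi''(t)

so the size of any jump in y' is scaled by the instantaneous phi'(t), and y'' picks up an extra f' * phi'' term that simply isn't there without modulation. A square wave largely gets away with it because only the zeroth-order step matters, and its height doesn't depend on phi' at all.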

In any case, my main point here is that BLEP methods don't magically support artifact-free frequency modulation (except maybe for square waves).

In the end I don't think there's one single standard, because there are mutually exclusive trade-offs to be made. The design space includes:

Features:

- Supported frequency modulation? (just pitch bend? low frequency LFOs? audio-rate modulation?).

- Support for hard sync?

- Support for arbitrary waveforms?

Audio quality:

- Allowable aliasing specification (e.g. aliasing at least 120 dB down across the whole audio spectrum, or at least 70 dB down below 10 kHz, etc.)

- How much amplitude modulation of high-frequency harmonics is acceptable under FM?

Compute performance:

- RAM usage

- CPU usage per voice

And of course:

- Development time/cost (initial and cost of adding features)


Today, desktop CPUs are fast enough to support the most difficult-to-achieve synthesis capabilities with no measurable audio artifacts. There are plugins that aim for that and use a whole CPU core to synthesize a single voice; there is a market for that. But if your goal is a 128-voice polysynth that uses at most 2% CPU on a smartphone, then you may not want to aim for, say, completely alias-free hard sync of audio-rate frequency-modulated arbitrary waveforms.

Cheers,

Ross.


On 4/08/2018 7:23 AM, Kevin Chi wrote:
Hi,

Is there such a thing as today's standard for softSynth antialiased oscillators?

I was looking up PolyBLEP oscillators, and was wondering how it would relate to a 1-2 waveTables per octave based oscillator or maybe to some other algos.

thanks for any ideas and recommendations in advance,
Kevin