Re: [music-dsp] Antialiased OSC

2018-11-01 Thread Ethan Duni
Well you definitely want a monotonic, equal-amplitude crossfade, and
probably also time symmetry. So I think raised sinc is right out.

In terms of finer design considerations it depends on the time scale. For
longer crossfades (>100ms), steady-state considerations apply, and you can
design for frequency domain characteristics. I.e., raised cosine, half of
your favorite analysis window, etc.

But for shorter crossfades (particularly 20ms and below), time domain
considerations dominate and you want to minimize the max slope of the
crossfade curve. So a linear crossfade is indicated here.

Of course linear crossfade is also the cheapest option, so you really need
a reason *not* to use it.
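
As a rough illustration (a sketch added here, not code from the thread), the
two fade laws compared above can be written as gain pairs that sum to 1 at
every point of the fade; the linear law has the smallest maximum slope, while
the raised-cosine (Hann-shaped) law is smoother at the endpoints but about
1.57x steeper in the middle, which is why linear wins for short fades:

#include <math.h>

/* gains for fading "out" -> "in" as x goes 0 -> 1; both laws sum to 1 */
void crossfade_linear(float x, float *g_out, float *g_in)
{
    *g_in  = x;
    *g_out = 1.0f - x;
}

void crossfade_raised_cosine(float x, float *g_out, float *g_in)
{
    *g_in  = 0.5f - 0.5f * cosf(3.14159265f * x);  /* max slope pi/2 ~= 1.57 */
    *g_out = 1.0f - *g_in;
}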

Ethan (D)

On Thu, Nov 1, 2018 at 12:18 PM robert bristow-johnson <
r...@audioimagination.com> wrote:

>
>
>  Original Message --------
> Subject: Re: [music-dsp] Antialiased OSC
> From: "Sampo Syreeni" 
> Date: Wed, October 31, 2018 9:35 pm
> To: philb...@mobileer.com
> "A discussion list for music-related DSP" 
> Cc: "robert bristow-johnson" 
> --
>
> > On 2018-08-06, Phil Burk wrote:
> >
> >> I crossfade between two adjacent wavetables.
> >
> > Yes. Now the question is, how to fade between them, optimally.
> >
> > I once again don't have any math to back this up, but intuition says the
> > mixing function ought to be something like a sinc function or a raised
> > cosine, at the lower rate. Because of the inherent bandlimit. And then
> > the ability of such linear phase thingies to be turned into one-off
> > interpolation thingies.
> >
> > Doing it at the lower rate, for the lower wavetable, would seem to be
> > the easiest, while holding to band limitation.
>
> interpolating between samples of a wavetable and crossfading between
> wavetables are different issues.
>
> if this wavetable synthesis is for the purpose of synthesizing a
> bandlimited saw, square, triangle, PWM, sync saw, sync square, then your
> adjacent wavetables going up and down the keyboard should be identical,
> except one will have more of its top harmonics set to zero.
>
> i think a linear crossfade, mixing only the two adjacent wavetables, is
> the correct way to do it.
>
>
> --
>
> r b-j r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
>
>
>
>

Re: [music-dsp] Antialiased OSC

2018-11-01 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] Antialiased OSC

From: "Sampo Syreeni" 

Date: Wed, October 31, 2018 9:35 pm

To: philb...@mobileer.com

"A discussion list for music-related DSP" 

Cc: "robert bristow-johnson" 

--



> On 2018-08-06, Phil Burk wrote:

>

>> I crossfade between two adjacent wavetables.

>

> Yes. Now the question is, how to fade between them, optimally.

>

> I once again don't have any math to back this up, but intuition says the

> mixing function ought to be something like a sinc function or a raised

> cosine, at the lower rate. Because of the inherent bandlimit. And then

> the ability of such linear phase thingies to be turned into one-off

> interpolation thingies.

>

> Doing it at the lower rate, for the lower wavetable, would seem to be

> the easiest, while holding to band limitation.
interpolating between samples of a wavetable and crossfading between wavetables 
are different issues.
if this wavetable synthesis is for the purpose of synthesizing a bandlimited
saw, square, triangle, PWM, sync saw, sync square, then your adjacent
wavetables going up and down the keyboard should be identical, except one
will have more of its top harmonics set to zero.
i think a linear crossfade, mixing only the two adjacent wavetables, is the 
correct way to do it.

--



r b-j                     r...@audioimagination.com



"Imagination is more important than knowledge."


Re: [music-dsp] Antialiased OSC

2018-09-01 Thread Theo Verelst

Content-wise, you also need to consider what the meaning of the terminology
"anti-aliased" is; technically, it means you prevent the double interpretation
of wave partials, which is usually associated with AD or DA conversion, in the
sense that high-frequency components fed into an AD converter will be "aliased"
because they mirror around the Nyquist frequency and become indistinguishable
from a similar frequency in the input signal.

A more common thing to think about in software waveform synthesis, apart
from this principle but then in the "virtual" sampling of a waveform, is to
consider the error of an actual DAC (digital-to-analog converter, like your
sound card has) compared with a "perfect reconstructor", which would take
your properly bandwidth-limited signal (to prevent aliasing) and (given a very
long latency) turn it into a perfect output signal from your sound card.

The DAC in your sound card will not do this job perfectly, even if you're
perfectly anti-aliasing or bandwidth-limiting your digital signal. That's
because the sampling reconstruction theorem calls for a very long filter, while
an actual DAC has a very short reconstruction filter.

One of the effects of this limitation is probably the most important to
consider for musical instruments producing sound which will be amplified into
the higher decibel domains: mid-frequency blare. Especially in highly resonant
spaces, like those with undamped parallel reflective walls, certain sound wave
patterns tend to amplify through reverberation, causing a lot of clutter in the
sensitive range of human hearing, the middle frequencies (let's say 1000
through 4000 Hz). This "blare" becomes louder because of various digital
processing and DAC reconstruction ensemble effects, and preferably should be
controlled.

So especially for serious "live" music reproduction, measures ought to be in
place to control the amount (and kind) of blare your software instrument
produces, probably at a higher priority than the exact type and amount of
"anti-aliasing" you provide.

Theo V.



Re: [music-dsp] Antialiased OSC

2018-08-22 Thread Andrew Simper
To bandlimit a discontinuity in the n-th derivative of any function you add
a corrective grain formed from a band-limited step approximation integrated n
times. For saw and sqr, which have C0 discontinuities, you add in
band-limited corrective step functions directly. For a non-synced triangle,
where you only have C1 discontinuities, you add in band-limited corrective
ramp functions (integrated steps). A corrective function is just the
band-limited one with the trivial one subtracted, so you can generate the
trivial waveform and then add the correction to get the band-limited one.
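
A hedged sketch of the C0 case (a generic 2-point polyBLEP, written here as an
illustration and not Andy's own code; the function names are made up):

/* 2-point polynomial BLEP residual: (band-limited step) minus (trivial step),
 * normalized for a rising step of height 2.  t is the oscillator phase in
 * [0,1) and dt is the per-sample phase increment f0/fs. */
static float poly_blep(float t, float dt)
{
    if (t < dt) {                   /* just after the discontinuity at phase 0 */
        t /= dt;
        return t + t - t * t - 1.0f;
    } else if (t > 1.0f - dt) {     /* just before the discontinuity at phase 1 */
        t = (t - 1.0f) / dt;
        return t * t + t + t + 1.0f;
    }
    return 0.0f;                    /* elsewhere the trivial waveform needs no fix */
}

/* trivial saw plus the corrective grain gives an (approximately) band-limited saw */
float blep_saw(float *phase, float dt)
{
    float naive = 2.0f * (*phase) - 1.0f;        /* trivial ramp in [-1,1) */
    float out = naive - poly_blep(*phase, dt);   /* the saw falls by 2 at the wrap, hence the minus */
    *phase += dt;
    if (*phase >= 1.0f) *phase -= 1.0f;
    return out;
}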

Andy

On Wed, 8 Aug 2018 at 06:02, Kevin Chi  wrote:

> I just want to thank you guys for the amount of experience and knowledge
> you are sharing here! This list is a gem!
>
> I started to replace my polyBLEP oscillators with waveTables to see how
> it compares!
>
>
> Although while experimenting with PolyBLEP I just run into something I
> don't get and probably you will know the answer for this.
>
> I read at a couple of places if you use a leaky integrator on a Square
> then you can get a Triangle. But as a leaky integrator
> is a first order lowpass filter, you won't get a Triangle waveform, but
> this:
>
> https://www.dropbox.com/s/1xq321xqcb7ir3a/PolyBLEPTri.png?dl=0
>
> Is it me doing something wrong misunderstanding the concept, or what is
> the best way to make a triangle with PolyBLEPs?
>
>
> thanks again for the great discussion on wavetables!
>
> --
> Kevin @ Fine Cut Bodies
>

Re: [music-dsp] Antialiased OSC

2018-08-22 Thread Vladimir Pantelic

On 22/08/18 17:00, Theo Verelst wrote:

There's a lot of ways to look at wave tables and their use, for instance the
way I used one quite some years ago in an Open Source hardware analog
synthesizer simulation that was advanced enough for its time


can you name it?




Re: [music-dsp] Antialiased OSC

2018-08-22 Thread Theo Verelst

Kevin Chi wrote:

Hi,

Is there such a thing as today's standard for softSynth antialiased oscillators?



It's easier to know that for Open Source products, which use a variety of
methods as far as I checked, and nothing out there I've heard (probably
including some stuff from the best-known packages) is "standard worthy" in
terms of deep understanding of the underlying theory, or even made with
quantitative error bounds in mind. Parts of some hardware-based oscillator
designs probably excepted.

There's a lot of ways to look at wave tables and their use, for instance the
way I used one quite some years ago in an Open Source hardware analog
synthesizer simulation that was advanced enough for its time, mostly dealing
with signal aliasing/reconstruction errors (don't confuse the two) and the
possibility to store accurate waveforms that are hard to compute in real time.

The dealings with the harmonic distortion from interpolating in the wave
table, and its subsequent effects on the rest of the signal chain, are often
done in such a way as to take a few parts of the relevant theory while ignoring
the others, leading to a rather dead and bland sound in many commercial
products, simply because preparing the samples for DA conversion is hard to do,
and even relatively simple interpolations cost work, and it isn't easy to make
a scheme based on WTs that overall gives guaranteed good results, including
pitch bends, fast and slow modulations and some sort of consistent sound feel.

It's possible to make a real design which tries to eliminate serious
distortion and works for a number of musical applications with a lot better
sound, in terms of high fidelity, but thus far that's above the reach of the
people in this group, to my knowledge.


T.V.




Re: [music-dsp] Antialiased OSC (Kevin Chi)

2018-08-08 Thread Ethan Fenn
And if you want to stick with BLEP-like approaches, rather than BLEPs
followed by an integrator you can use a pre-integrated BLEP, usually called
a BLAMP (Band-Limited rAMP). This gives you a short waveform you can mix in
any time there is a discontinuity in the first derivative of your signal,
rather than in the signal itself.

By combining BLEPs and BLAMPs you can make an accurately bandlimited
version of any signal that is piecewise linear -- including square/pulse,
triangle, or sawtooth.
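
A hedged sketch of the same idea for the C1 case (a 2-point polyBLAMP residual,
i.e. the integral of the 2-point polyBLEP residual, applied to a trivial
triangle; an illustration under the assumption of a +/-1 triangle with a
period-1 phase, not anyone's production code):

#include <math.h>

/* corrective grain for a corner whose slope increases by 2 per unit of phase */
static float poly_blamp(float t, float dt)
{
    if (t < dt) {
        t = 1.0f - t / dt;
        return dt * t * t * t / 3.0f;   /* decays to 0 one sample after the corner */
    } else if (t > 1.0f - dt) {
        t = (t - 1.0f) / dt + 1.0f;
        return dt * t * t * t / 3.0f;   /* grows from 0 one sample before the corner */
    }
    return 0.0f;
}

/* trivial triangle with BLAMP corrections at its two corners per cycle */
float blamp_triangle(float *phase, float dt)
{
    float t     = *phase;
    float naive = 1.0f - 4.0f * fabsf(t - 0.5f);  /* corners at t = 0.5 and at the wrap */
    float half  = t + 0.5f;                       /* shifted phase: the peak maps to 0 */
    if (half >= 1.0f) half -= 1.0f;
    /* slope jumps by +8 at the trough (the wrap) and by -8 at the peak (t = 0.5) */
    float out = naive + 4.0f * poly_blamp(t, dt) - 4.0f * poly_blamp(half, dt);
    *phase += dt;
    if (*phase >= 1.0f) *phase -= 1.0f;
    return out;
}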

-Ethan



On Wed, Aug 8, 2018 at 12:12 PM, Frank Sheeran  wrote:

> Hi Kevin,
>
> > I read at a couple of places if you use a leaky integrator on a Square
> > then you can get a Triangle. But as a leaky integrator
> > is a first order lowpass filter, you won't get a Triangle waveform, but
> > this
>
> A leaky integrator may function as a lowpass filter, but it may not work
> exactly as the low-pass filter you seem to be showing there.  Or, if that
> IS your leaky integrator, it may just have too much leak.
>
> That said, once you have a wavetable oscillator, you can simply generate a
> triangle in it directly that will be bandwidth-limited.
>
> Frank
>
> http://moselle-synth.com/
>

Re: [music-dsp] Antialiased OSC (Kevin Chi)

2018-08-08 Thread Frank Sheeran
Hi Kevin,

> I read at a couple of places if you use a leaky integrator on a Square
> then you can get a Triangle. But as a leaky integrator
> is a first order lowpass filter, you won't get a Triangle waveform, but
> this

A leaky integrator may function as a lowpass filter, but it may not work
exactly as the low-pass filter you seem to be showing there.  Or, if that
IS your leaky integrator, it may just have too much leak.

That said, once you have a wavetable oscillator, you can simply generate a
triangle in it directly that will be bandwidth-limited.
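
For what it's worth, a minimal leaky-integrator sketch (an assumption for
illustration, not Kevin's or Frank's code): with "leak" very close to 1 the
integral of a +/-1 band-limited square approaches a triangle, while a smaller
leak behaves like the one-pole lowpass in Kevin's plot.

#include <stddef.h>

/* y[n] = leak * y[n-1] + gain * x[n].
 * An ideal integration (leak = 1) of a +/-1 square at frequency f0 gives a
 * triangle with peak fs/(4*f0), so gain = 4*f0/fs roughly normalizes to +/-1. */
void leaky_integrate(const float *square_in, float *out, size_t n,
                     float leak /* e.g. 0.999f */, float gain /* e.g. 4*f0/fs */)
{
    float y = 0.0f;
    for (size_t i = 0; i < n; ++i) {
        y = leak * y + gain * square_in[i];
        out[i] = y;
    }
}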

Frank

http://moselle-synth.com/

Re: [music-dsp] Antialiased OSC

2018-08-07 Thread Kevin Chi
I just want to thank you guys for the amount of experience and knowledge 
you are sharing here! This list is a gem!


I started to replace my polyBLEP oscillators with waveTables to see how 
it compares!



Although while experimenting with PolyBLEP I just run into something I 
don't get and probably you will know the answer for this.


I read at a couple of places if you use a leaky integrator on a Square 
then you can get a Triangle. But as a leaky integrator
is a first order lowpass filter, you won't get a Triangle waveform, but 
this:


https://www.dropbox.com/s/1xq321xqcb7ir3a/PolyBLEPTri.png?dl=0

Is it me doing something wrong misunderstanding the concept, or what is 
the best way to make a triangle with PolyBLEPs?



thanks again for the great discussion on wavetables!

--
Kevin @ Fine Cut Bodies




Re: [music-dsp] Antialiased OSC

2018-08-06 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] Antialiased OSC

From: "Phil Burk" 

Date: Tue, August 7, 2018 12:59 am

To: "robert bristow-johnson" 

"A discussion list for music-related DSP" 

--



> On Sun, Aug 5, 2018 at 4:27 PM, robert bristow-johnson <

> r...@audioimagination.com> wrote:

>

> i, personally, would rather see a consistent method used throughout the

>> MIDI keyboard range; high notes or low. it's hard to gracefully transition

>> from one method to a totally different method while the note sweeps. like

>> what if portamento is turned on? the only way to clicklessly jump from

>> wavetable to a "naive" sawtooth would be to crossfade. but crossfading to

>> a wavetable richer in harmonics is already built in.

>

>

> Yes. I crossfade between two adjacent wavetables. It is just at the bottom

> that I switch to the "naive sawtooth". I want to be able to sweep the

> frequency through zero to negative frequency.
okay, i get it.  if DC wasn't the bottom, but some octave on the keyboard, it
would be a saw with sample values very close to the "naive sawtooth" ramp but
could still have some limit to harmonics.
> So I need a signal near zero.
> But as I get closer to zero I need an infinite number of octaves.
yup.
> So the region near zero has to be handled differently anyway.

>

>

>> and what if the "classic" waveform wasn't a saw but something else? more

>> general?

>

> I only use the MultiTable for the Sawtooth. Then I generate Square and

> Pulse from two Sawteeth.
yup. that works.  detune them slightly and it sounds like a monster analog
synth.



> Also note that for the octave between Nyquist and Nyquist/2 that I use a

> table with a pure sine wave. If I added a harmonic in that range then it

> would be above the Nyquist.
that i would expect.  pretty high octave.
the octave below that would have a sine and its octave up.  the octave below
that would have the sine (at the fundamental) and three harmonics above it...
--



r b-j                     r...@audioimagination.com



"Imagination is more important than knowledge."


Re: [music-dsp] Antialiased OSC

2018-08-06 Thread Phil Burk
On Sun, Aug 5, 2018 at 4:27 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

i, personally, would rather see a consistent method used throughout the
> MIDI keyboard range; high notes or low.  it's hard to gracefully transition
> from one method to a totally different method while the note sweeps.  like
> what if portamento is turned on?  the only way to clicklessly jump from
> wavetable to a "naive" sawtooth would be to crossfade.  but crossfading to
> a wavetable richer in harmonics is already built in.


Yes. I crossfade between two adjacent wavetables. It is just at the bottom
that I switch to the "naive sawtooth". I want to be able to sweep the
frequency through zero to negative frequency. So I need a signal near zero.
But as I get closer to zero I need an infinite number of octaves. So the
region near zero has to be handled differently anyway.
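
A hypothetical sketch of that arrangement (table layout, names and sizes are
assumptions for illustration, not jsyn's actual MultiTable code): one table per
octave, a linear crossfade of the two adjacent tables by fractional octave
position, and a fade toward the trivial ramp below the lowest table so the same
oscillator keeps working as an LFO or through zero frequency.

#include <math.h>

#define NUM_TABLES 8
#define TABLE_SIZE 2048

/* one single-cycle table per octave; the last entry duplicates the first so
 * linear interpolation never reads past the end */
static float tables[NUM_TABLES][TABLE_SIZE + 1];

static float table_read(const float *t, float phase)   /* phase in [0,1) */
{
    float pos = phase * TABLE_SIZE;
    int   i   = (int)pos;
    float fr  = pos - (float)i;
    return t[i] + fr * (t[i + 1] - t[i]);               /* linear interpolation */
}

float multitable_saw(float phase, float freq_hz, float lowest_table_freq)
{
    float naive = 2.0f * phase - 1.0f;                  /* trivial ramp */
    float f = fabsf(freq_hz);
    if (f <= 0.0f)
        return naive;                                   /* DC / LFO limit */
    if (f < lowest_table_freq) {
        float mix = f / lowest_table_freq;              /* fade ramp <-> lowest table */
        return (1.0f - mix) * naive + mix * table_read(tables[0], phase);
    }
    float pos  = log2f(f / lowest_table_freq);          /* fractional octave index */
    int   idx  = (int)pos;
    float frac = pos - (float)idx;
    if (idx >= NUM_TABLES - 1)
        return table_read(tables[NUM_TABLES - 1], phase);
    return (1.0f - frac) * table_read(tables[idx], phase)       /* linear crossfade */
         + frac          * table_read(tables[idx + 1], phase);  /* of adjacent tables */
}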


> and what if the "classic" waveform wasn't a saw but something else?  more
> general?


I only use the MultiTable for the Sawtooth. Then I generate Square and
Pulse from two Sawteeth.
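
And a sketch of the two-sawtooth trick (again an illustrative assumption,
building on the multitable_saw sketch above rather than on jsyn's code):
subtracting a second copy of the band-limited saw read "width" later in phase
gives a DC-free band-limited pulse, and width = 0.5 gives a square.

float pulse_from_two_saws(float phase, float width /* 0..1 */,
                          float freq_hz, float lowest_table_freq)
{
    float phase2 = phase + width;        /* second saw, offset by the pulse width */
    if (phase2 >= 1.0f) phase2 -= 1.0f;
    return multitable_saw(phase,  freq_hz, lowest_table_freq)
         - multitable_saw(phase2, freq_hz, lowest_table_freq);
}
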
  
Also note that for the octave between Nyquist and Nyquist/2 that I use a
table with a pure sine wave. If I added a harmonic in that range then it
would be above the Nyquist.

Phil Burk

Re: [music-dsp] Antialiased OSC

2018-08-06 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] Antialiased OSC

From: "Scott Gravenhorst" 

Date: Mon, August 6, 2018 8:06 pm

To: music-dsp@music.columbia.edu

--



>

> Nigel Redmon via music-dsp@music.columbia.edu wrote:

>>

>>Arg, no more lengthy replies while needing to catch a plane. Of

>>course, I didn't mean to say Alesis synths (and most others) were

>>drop sample—I meant linear interpolation. The point was that stuff

>>that seems to be substandard can be fine under musical

>>circumstances...

>

> This is an important point. I see so often developer approaches that start

> at the most complex and most computationally burdensome when simpler

> approaches work well enough to pass the "good enough" test. I start at the

> low end and if I can hear stuff I don't like, then I'll try more "advanced"

> techniques and that has served me very well.

>
well, we might have different definitions about what "complex and
computationally burdensome" is.  i don't always measure it out as what's the
least MIPS.  sometimes it's a computational burden to toss in a conditional
execution instruction.  sometimes it's less computational burden to just extend
the model to use more resources, like more wavetables just after NoteOn to
cover the attack.  i just think that, as long as 4K word wavetables can cover
10+ octaves (but there's a lotta redundancy in those wavetables for the high
pitches), the easiest way to do it simply is just do it the same old way, but
throw enough wavetable length at it.
again, this is a softsynth running on a platform with ca. a gig of memory.  the
cost is about 16K per wavetable.  maybe an instrument needs 64 wavetables laid
out in some array to cover modulation in all of the dimensions.  that's a meg
for one instrument.
i'll admit it's an aesthetic issue.  even though i do it too, i dislike if()
statements in the real-time processing code.  it's just that's how i would
extend the range of an instrument rather than grafting this wavetable biz to
the "naive" method (that might produce some aliases at a low level).  how do
you graft them together clicklessly or glitchlessly?  it just seems to me that
doing it the same consistent way makes it the least complex and computationally
burdensome.
--


r b-j                     r...@audioimagination.com



"Imagination is more important than knowledge."


Re: [music-dsp] Antialiased OSC

2018-08-06 Thread Andrew Simper
I definitely agree here, start with the easy approach, then put in more
effort when it's needed - but keep in mind you won't be able to get decent
feedback from non-dsp people until the final quality version is done.

If the code is not a key part of your product then you can even take
another step back, if you can find someone else's code that does what you
want then just use that! If that code has detailed information on how it
was derived then if you do need to make a change in the future you can put
in the work later on to understand how it was done and make changes as
needed.

I really appreciate RBJ's Audio EQ cookbook for this sort of approach, and
hopefully I have helped people with the technical papers I've done too for
the non-LTI (modulation) case of linear 2-pole resonant filters / EQ.

Cheers,

Andy


On Tue, 7 Aug 2018 at 08:06, Scott Gravenhorst  wrote:

>
> Nigel Redmon via music-dsp@music.columbia.edu wrote:
> >
> >Arg, no more lengthy replies while needing to catch a plane. Of
> >course, I didn't mean to say Alesis synths (and most others) were
> >drop sample—I meant linear interpolation. The point was that stuff
> >that seems to be substandard can be fine under musical
> >circumstances...
>
> This is an important point.  I see so often developer approaches that start
> at the most complex and most computationally burdensome when simpler
> approaches work well enough to pass the "good enough" test.  I start at the
> low end and if I can hear stuff I don't like, then I'll try more "advanced"
> techniques and that has served me very well.
>
> You may now start heaving rotten eggs and expired vegetables at me  :)
>
> -- ScottG
> 
> -- Scott Gravenhorst
> -- http://scott.joviansynth.com/
> -- When the going gets tough, the tough use the command line.
> -- Matt 21:22
>

Re: [music-dsp] Antialiased OSC

2018-08-06 Thread Scott Gravenhorst


Nigel Redmon via music-dsp@music.columbia.edu wrote:
>
>Arg, no more lengthy replies while needing to catch a plane. Of 
>course, I didn't mean to say Alesis synths (and most others) were 
>drop sample—I meant linear interpolation. The point was that stuff 
>that seems to be substandard can be fine under musical 
>circumstances... 

This is an important point.  I see so often developer approaches that start
at the most complex and most computationally burdensome when simpler
approaches work well enough to pass the "good enough" test.  I start at the
low end and if I can hear stuff I don't like, then I'll try more "advanced"
techniques and that has served me very well.

You may now start heaving rotten eggs and expired vegetables at me  :)

-- ScottG

-- Scott Gravenhorst
-- http://scott.joviansynth.com/
-- When the going gets tough, the tough use the command line.
-- Matt 21:22




Re: [music-dsp] Antialiased OSC

2018-08-06 Thread Nigel Redmon
Arg, no more lengthy replies while needing to catch a plane. Of course, I 
didn’t mean to say Alesis synths (and most others) were drop sample—I meant 
linear interpolation. The point was that stuff that seems to be substandard can 
be fine under musical circumstances...

Sent from my iPhone

> On Aug 6, 2018, at 11:57 AM, Nigel Redmon  wrote:
> 
> Hi Robert,
> 
> On the drop-sample issue:
> 
> Yes, it was a comment about “what you can get away with”, not about 
> precision. First, it’s frequency dependent (a sample or half is a much bigger 
> relative error for high frequencies), frequency content (harmonics), relative 
> harmonic amplitudes (upper harmonics are usually low amplitude), relative 
> oversampling (at constant table size, higher tables are more oversampled), 
> etc. So, for a sawtooth for instance, the tables really don’t have to be so 
> big before it’s awfully hard to hear the difference between that and linear 
> interpolation. The 512k table analysis is probably similar to looking at 
> digital clipping and figuring out the oversampling ratio needed. But the 
> numbers, the answer would be that you need to upsample to something like a 5 
> MHz rate (using that number only because I recall reading that conclusion of 
> someone’s analysis once). In reality, you can get away with much less, 
> because when you calculate worst-case expected clipping levels, you tend to 
> forget you’ll have a helluva time hearing the aliased tones amongst all the 
> “correct” harmonic grunge, despite what your dB attenuation calculations tell 
> you. :-)
> 
> To be clear, though, I’m not advocating zero-order—linear interpolation is 
> cheap. The code on my website was a compile-time switch, for the intent that 
> the user/student can compare the effects. I think it’s good to think about 
> tradeoffs and why some may work and others not. In the mid ‘90s, I went to 
> work for my old Oberheim mates at Line 6. DSP had been a hobby for years, and 
> at a NAMM Marcus said hey why don’t you come work for us. Marcus was an 
> analog guy, not a DSP guy. But he taught me one important thing, early on: 
> The topic of interpolation came up, or lack thereof, came up. I remember I 
> said something like, “but that’s terrible!”. He replied that every Alesis 
> synth ever made was drop sample. He didn’t have to say more—I immediately 
> realized that probably most samplers ever made were that way (not including 
> E-mu), because samplers typically didn’t need to cater to the general case. 
> You didn’t need to shift pitch very far, because real instruments need to be 
> heavily multisampled anyway. And, of course, musical instruments only need to 
> sound “good” (subjective), not fit an audio specification.
> 
> It’s doubtful that people are really going to come down to a performance 
> issue where skipping linear interpolation is the difference in realizing a 
> plugin or device or not. On the old Digidesign TDM systems, I ran into 
> similar tradeoffs often. Where I’d have to do something that on paper seemed 
> to be a horrible thing to do, but to the ear it was fine, and it kept the 
> product viable by staying within the available cycle count.
>> Nigel, i remember your code didn't require big tables and you could have 
>> each wavetable a different size (i think you had the accumulated phase be a 
>> float between 0 and 1 and that was scaled to the wavetable size, right?) but 
>> then that might mean you have to do better interpolation than linear, if you 
>> want it clean.
>> 
> 
> Similar to above—for native computer code, there’s little point in variable 
> table sizes, mainly a thought exercise. I think somewhere in the articles I 
> also noted that if you really needed to save table space (say in ROM, in a 
> system with limited RAM to expand), it made sense to reduce/track the table 
> sizes only up to a point. I think I gave an audio example of one that tracked 
> octaves with halved table lengths, but up to a minimum of 64 samples. Again, 
> this was mostly a thought exercise, exploring the edge of overt aliasing.
> 
> Hope it’s apparent I was just going off on a thought tangent there, some 
> things I think are good to think about for people getting started. Would have 
> been much shorter if just replying to you ;-)
> 
> Robert
> 
>> On Aug 5, 2018, at 4:27 PM, robert bristow-johnson 
>>  wrote:
>> 
>> 
>> 
>>  Original Message 
>> Subject: Re: [music-dsp] Antialiased OSC
>> From: "Nigel Redmon" 
>> Date: Sun, August 5, 2018 1:30 pm
>> To: music-dsp@music.columbia.edu
>> --
>> 
>&

Re: [music-dsp] Antialiased OSC

2018-08-06 Thread Ethan Duni
rbj wrote:
>i, personally, would rather see a consistent method used throughout the
MIDI keyboard range

If you squint at it hard enough, you can maybe convince yourself that the
naive sawtooth generator is just a memory optimization for low-frequency
wavetable entries. I mean, it does a perfect job at DC right? :]



On Sun, Aug 5, 2018 at 4:27 PM, robert bristow-johnson <
r...@audioimagination.com> wrote:

>
>
>  Original Message --------
> Subject: Re: [music-dsp] Antialiased OSC
> From: "Nigel Redmon" 
> Date: Sun, August 5, 2018 1:30 pm
> To: music-dsp@music.columbia.edu
> --
>
> > Yes, that’s a good way, not only for LFO but for that rare time you want
> to sweep down into the nether regions to show off.
>
>
> i, personally, would rather see a consistent method used throughout the
> MIDI keyboard range; high notes or low.  it's hard to gracefully transition
> from one method to a totally different method while the note sweeps.  like
> what if portamento is turned on?  the only way to clicklessly jump from
> wavetable to a "naive" sawtooth would be to crossfade.  but crossfading to
> a wavetable richer in harmonics is already built in.  and what if the
> "classic" waveform wasn't a saw but something else?  more general?
>
>
> > I think a lot of people don’t consider that the error of a “naive”
> oscillator becomes increasingly smaller for lower frequencies. Of course,
> it’s waveform specific, so that’s why I suggested bigger tables. (Side
> comment: If you get big enough tables, you could choose to skip linear
> interpolation altogether—at constant table size, the higher frequency
> octave/whatever tables, where it matters more, will be progressively more
> oversampled anyway.)
>
> well, Duane Wise and i visited this drop-sample vs. linear vs. various
> different cubic splines (Lagrange, Hermite...) a couple decades ago.  for
> really high quality audio (not the same as an electronic musical
> instrument), i had been able to show that, for 120 dB S/N, 512x
> oversampling is sufficient for linear interpolation but 512K is what is
> needed for drop sample.  even relaxing those standards, choosing to forgo
> linear interpolation for drop-sample "interpolation" might require bigger
> wavetables than you might wanna pay for.  for the general wavetable synth
> (or NCO or DDS or whatever you wanna call this LUT thing, including just
> sample playback) i would never recommend interpolation cruder than linear.
> Nigel, i remember your code didn't require big tables and you could have
> each wavetable a different size (i think you had the accumulated phase be a
> float between 0 and 1 and that was scaled to the wavetable size, right?)
> but then that might mean you have to do better interpolation than linear,
> if you want it clean.
>
>
>
> > Funny thing I found in writing the wavetable articles. One soft synth
> developer dismissed the whole idea of wavetables (in favor of minBLEPs,
> etc.). When I pointed out that wavetables allow any waveform, he said the
> other methods did too. I questioned that assertion by giving an example of
> a wavetable with a few arbitrary harmonics. He countered that it wasn’t a
> waveform. I guess some people only consider the basic synth waves as
> “waveforms”. :-D
> >
>
> i've had arguments like this with other Kurzweil people while i worked
> there a decade ago (still such a waste when you consider how good and how
> much work they put into their sample-playback, looping, and interpolation
> hardware, only a small modification was needed to make it into a decent
> wavetable synth with morphing).
>
> for me, a "waveform" is any quasi-periodic function.  A note from any
> decently harmonic instrument; piano, fiddle, a plucked guitar, oboe,
> trumpet, flute, all of those can be done with wavetable synthesis (and
> most, maybe all, of them can be limited to 127 harmonics allowing archived
> wavetables to be as small as 256).
>
> these are the two necessary ingredients to wavetable synthesis:
> quasi-periodic note (that means it can be represented as a Fourier series
> with slowly-changing Fourier coefficients) and bandlimited.  if it's
> quasi-periodic and bandlimited it can be done with wavetable synthesis.  to
> me, for someone to argue against that, means to me that they are arguing
> against Fourier and Shannon.
>
> there is a straight-forward way of pitch tracking the sampled note from
> attack to release, and from that slowly-changing period information, there
> is a straight-forward way to sample it to 256 points per cycle and
> converting e

Re: [music-dsp] Antialiased OSC

2018-08-06 Thread Nigel Redmon
Hi Robert,

On the drop-sample issue:

Yes, it was a comment about “what you can get away with”, not about precision. 
First, it’s frequency dependent (a sample or half is a much bigger relative 
error for high frequencies), frequency content (harmonics), relative harmonic 
amplitudes (upper harmonics are usually low amplitude), relative oversampling 
(at constant table size, higher tables are more oversampled), etc. So, for a 
sawtooth for instance, the tables really don’t have to be so big before it’s 
awfully hard to hear the difference between that and linear interpolation. The 
512k table analysis is probably similar to looking at digital clipping and 
figuring out the oversampling ratio needed. But the numbers, the answer would 
be that you need to upsample to something like a 5 MHz rate (using that number 
only because I recall reading that conclusion of someone’s analysis once). In 
reality, you can get away with much less, because when you calculate worst-case 
expected clipping levels, you tend to forget you’ll have a helluva time hearing 
the aliased tones amongst all the “correct” harmonic grunge, despite what your 
dB attenuation calculations tell you. :-)

To be clear, though, I’m not advocating zero-order—linear interpolation is 
cheap. The code on my website was a compile-time switch, for the intent that 
the user/student can compare the effects. I think it’s good to think about 
tradeoffs and why some may work and others not. In the mid ‘90s, I went to work 
for my old Oberheim mates at Line 6. DSP had been a hobby for years, and at a 
NAMM Marcus said hey why don’t you come work for us. Marcus was an analog guy, 
not a DSP guy. But he taught me one important thing, early on: The topic of 
interpolation came up, or lack thereof, came up. I remember I said something 
like, “but that’s terrible!”. He replied that every Alesis synth ever made was 
drop sample. He didn’t have to say more—I immediately realized that probably 
most samplers ever made were that way (not including E-mu), because samplers 
typically didn’t need to cater to the general case. You didn’t need to shift 
pitch very far, because real instruments need to be heavily multisampled 
anyway. And, of course, musical instruments only need to sound “good” 
(subjective), not fit an audio specification.

It’s doubtful that people are really going to come down to a performance issue 
where skipping linear interpolation is the difference in realizing a plugin or 
device or not. On the old Digidesign TDM systems, I ran into similar tradeoffs 
often. Where I’d have to do something that on paper seemed to be a horrible 
thing to do, but to the ear it was fine, and it kept the product viable by 
staying within the available cycle count.
> Nigel, i remember your code didn't require big tables and you could have each 
> wavetable a different size (i think you had the accumulated phase be a float 
> between 0 and 1 and that was scaled to the wavetable size, right?) but then 
> that might mean you have to do better interpolation than linear, if you want 
> it clean.
> 

Similar to above—for native computer code, there’s little point in variable 
table sizes, mainly a thought exercise. I think somewhere in the articles I 
also noted that if you really needed to save table space (say in ROM, in a 
system with limited RAM to expand), it made sense to reduce/track the table 
sizes only up to a point. I think I gave an audio example of one that tracked 
octaves with halved table lengths, but up to a minimum of 64 samples. Again, 
this was mostly a thought exercise, exploring the edge of overt aliasing.

Hope it’s apparent I was just going off on a thought tangent there, some things 
I think are good to think about for people getting started. Would have been 
much shorter if just replying to you ;-)

Robert

> On Aug 5, 2018, at 4:27 PM, robert bristow-johnson 
>  wrote:
> 
> 
> 
>  Original Message ------------
> Subject: Re: [music-dsp] Antialiased OSC
> From: "Nigel Redmon" 
> Date: Sun, August 5, 2018 1:30 pm
> To: music-dsp@music.columbia.edu
> --
> 
> > Yes, that’s a good way, not only for LFO but for that rare time you want to 
> > sweep down into the nether regions to show off.
>  
> 
> i, personally, would rather see a consistent method used throughout the MIDI 
> keyboard range; high notes or low.  it's hard to gracefully transition from 
> one method to a totally different method while the note sweeps.  like what if 
> portamento is turned on?  the only way to clicklessly jump from wavetable to 
> a "naive" sawtooth would be to crossfade.  but crossfading to a wavetable 
> richer in harmonics is already built in.  and what if the "classic" waveform 
> wasn't a saw but something else?  more general

Re: [music-dsp] Antialiased OSC

2018-08-05 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] Antialiased OSC

From: "Ross Bencina" 

Date: Sat, August 4, 2018 2:12 am

To: "A discussion list for music-related DSP" 

--



> Hi Kevin,

>

> Wavetables are for synthesizing ANY band-limited *periodic* signal.

...

>

> If all you need to do is synthesize bandlimited periodic signals, I

> don't see many benefits to BLEP methods over wavetable synthesis.
i might suggest prepending "quasi-" to "periodic".  notes don't have to be
perfectly periodic, just *mostly* periodic.  one cycle in this quasi-periodic
waveform should look just like its adjacent cycles on the left or right.  but
it wouldn't necessarily look like a cycle 1 second away.

> (1) With wavetable switching, frequency modulation will cause high

> frequency harmonics to fade in and out as the wavetables are crossfaded

> -- a kind of amplitude modulation on the high-order harmonics. The

> strength of the effect will depend on the amount of high-frequency

> content in the waveform, and the number of wavetables (per octave, say):

> Less wavetables per-octave will cause lower frequency harmonics to be

> affected, more wavetables per-octave will lessen the effect on low-order

> harmonics, but will cause the rate of amplitude modulation to increase.

> To some extent you can push this AM effect to higher frequencies by

> allowing some aliasing (say above 18kHz). You could eliminate the AM

> effect entirely with 2x oversampling.
two things:  1.  the AM effect should *not* affect the lower harmonics that
are not potential aliases.  as you cross-fade, those harmonics are identical in
amplitude and phase in the beginning and ending wavetables.  it's only the
highest harmonics that start to fade out (before they alias) as the pitch
increases.
2. it's possible to show that if you have a sample rate of 48 kHz (so 24 kHz
is the "folding frequency") and two wavetables per octave you can make all
this nasty harmonic aliasing (and variation in amplitude) happen above
19.88 kHz.
at the bottom of the half-octave split, the fundamental is f0.  the Nth
harmonic is right at 19.88 kHz so N = (19.88 kHz)/f0 and there are no other
harmonics above it.  at the top of the split the fundamental is 2^(1/2)*f0 and
the Nth harmonic is at 2^(1/2)*(19.88 kHz) = 28.11 kHz, if there was no fold
over.  but since there is, this top harmonic folds over to 48 kHz - 28.11 kHz =
19.89 kHz.  admittedly that's a little tight to start fading it out and fading
in the waveform that has fundamental at 2^(1/2)*f0 and top harmonic of
19.88 kHz, but you can back off a little and maybe just do this all above
19 kHz and put a brickwall LPF at 19 kHz.  i know i (at age 62) won't be
missing any harmonics above 19 kHz.  below 19 kHz, every harmonic is harmonic
(unaliased) and unchanging in amplitude.
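(Added note on where the 19.88 kHz figure comes from, using the setup above:
require that the alias of the top kept harmonic, after the pitch rises by half
an octave, lands no lower than the kept harmonic itself, i.e.
48 kHz - 2^(1/2)*f_top >= f_top, which gives
f_top <= (48 kHz)/(1 + 2^(1/2)) = (48 kHz)/2.414 ~= 19.88 kHz.)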
another thing you can do is make your splits be smaller than 6 semitone
spacing.
and, of course, if the sample rate is 96 kHz, no one will be hearing any
aliasing nor loss of harmonics below even 24 kHz.  but at 96 kHz, maybe you can
get away with "naive" sawtooths and square and maybe even naive hard-sync.
maybe not.
--


r b-j                     r...@audioimagination.com



"Imagination is more important than knowledge."


Re: [music-dsp] Antialiased OSC

2018-08-05 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] Antialiased OSC

From: "Nigel Redmon" 

Date: Sun, August 5, 2018 1:30 pm

To: music-dsp@music.columbia.edu

--



> Yes, that's a good way, not only for LFO but for that rare time you
> want to sweep down into the nether regions to show off.

i, personally, would rather see a consistent method used throughout the MIDI
keyboard range; high notes or low.  it's hard to gracefully transition from one
method to a totally different method while the note sweeps.  like what if
portamento is turned on?  the only way to clicklessly jump from wavetable to a
"naive" sawtooth would be to crossfade.  but crossfading to a wavetable richer
in harmonics is already built in.  and what if the "classic" waveform wasn't a
saw but something else?  more general?

> I think a lot of people don't consider that the error of a
> "naive" oscillator becomes increasingly smaller for lower
> frequencies. Of course, it's waveform specific, so that's why I
> suggested bigger tables. (Side comment: If you get big enough tables, you
> could choose to skip linear interpolation altogether—at constant table size,
> the higher frequency octave/whatever tables, where it matters more, will be
> progressively more oversampled anyway.)
well, Duane Wise and i visited this drop-sample vs. linear vs. various
different cubic splines (Lagrange, Hermite...) a couple decades ago.  for
really high quality audio (not the same as an electronic musical instrument), i
had been able to show that, for 120 dB S/N, 512x oversampling is sufficient for
linear interpolation but 512K is what is needed for drop sample.  even relaxing
those standards, choosing to forgo linear interpolation for drop-sample
"interpolation" might require bigger wavetables than you might wanna pay for.
for the general wavetable synth (or NCO or DDS or whatever you wanna call this
LUT thing, including just sample playback) i would never recommend
interpolation cruder than linear.  Nigel, i remember your code didn't require
big tables and you could have each wavetable a different size (i think you had
the accumulated phase be a float between 0 and 1 and that was scaled to the
wavetable size, right?) but then that might mean you have to do better
interpolation than linear, if you want it clean.
> Funny thing I found in writing the wavetable articles. One soft synth
> developer dismissed the whole idea of wavetables (in favor of minBLEPs,
> etc.). When I pointed out that wavetables allow any waveform, he said the
> other methods did too. I questioned that assertion by giving an example of a
> wavetable with a few arbitrary harmonics. He countered that it wasn't a
> waveform. I guess some people only consider the basic synth waves as
> "waveforms". :-D
>
i've had arguments like this with other Kurzweil people while i worked there a 
decade ago (still such a waste when you consider how good and how much work 
they put into their sample-playback, looping, and interpolation hardware, only 
a small modification was needed to make it into a
decent wavetable synth with morphing).
for me, a "waveform" is any quasi-periodic function.� A note from any decently 
harmonic instrument; piano, fiddle, a plucked guitar, oboe, trumpet, flute, all 
of those can be done with wavetable synthesis (and most, maybe all, of them can
be limited to 127 harmonics allowing archived wavetables to be as small as 256).
these are the two necessary ingredients for wavetable synthesis: a
quasi-periodic note (that means it can be represented as a Fourier series with
slowly-changing Fourier coefficients) and bandlimited.  if it's quasi-periodic
and bandlimited it can be done with wavetable synthesis.  for someone to argue
against that means, to me, that they are arguing against Fourier and Shannon.
there is a straight-forward way of pitch tracking the sampled note from attack
to release, and from that slowly-changing period information, there is a
straight-forward way to sample it to 256 points per cycle and convert each
adjacent cycle into a wavetable.  that's a lotta redundant data and most of the
wavetables (nearly all of them) can be culled with the assumption that the
wavetables surviving the culling process will be linearly cross-faded from one
to the next.
and if several notes (say up and down the keyboard) are sampled, there is a
way to align the wavetables (before culling) between the different notes to be
phase aligned.  then, say you have a split every half octave, the note at
E-flat can be a mix of the wavetables for C below and F# above.  it's like the
F# is pitched down 3 semitones and the C is pitched up 3 semitones and the Eb
is a phase-aligned mix of the two.  this can be done with any harmonic or
quasi-periodic instr

Re: [music-dsp] Antialiased OSC

2018-08-05 Thread Nigel Redmon
Yes, that’s a good way, not only for LFO but for that rare time you want to 
sweep down into the nether regions to show off. I think a lot of people don’t 
consider that the error of a “naive” oscillator becomes increasingly smaller 
for lower frequencies. Of course, it’s waveform specific, so that’s why I 
suggested bigger tables. (Side comment: If you get big enough tables, you could 
choose to skip linear interpolation altogether—at constant table size, the 
higher frequency octave/whatever tables, where it matters more, will be 
progressively more oversampled anyway.)

Funny thing I found in writing the wavetable articles. One soft synth developer 
dismissed the whole idea of wavetables (in favor of minBLEPs, etc.). When I 
pointed out that wavetables allow any waveform, he said the other methods did 
too. I questioned that assertion by giving an example of a wavetable with a few 
arbitrary harmonics. He countered that it wasn’t a waveform. I guess some 
people only consider the basic synth waves as “waveforms”. :-D

Hard sync is another topic...

> On Aug 4, 2018, at 1:39 PM, Phil Burk  wrote:
> 
> On Sat, Aug 4, 2018 at 10:53 AM, Nigel Redmon  > wrote: 
> With a full-bandwidth saw, though, the brightness is constant. That takes 
> more like 500 harmonics at 40 Hz, 1000 at 20 Hz. So, as Robert says, 2048 or 
> 4096 are good choices (for both noise and harmonics).
> 
>  As I change frequencies  above 86 Hz, I interpolate between wavetables with 
> 1024 samples. For lower frequencies I interpolate between a bright wavetable 
> and a pure sawtooth phasor that is not band limited. That way I can use the 
> same oscillator as an LFO.
> 
> https://github.com/philburk/jsyn/blob/master/src/com/jsyn/engine/MultiTable.java#L167
>  
> 
> 
> Phil Burk


Re: [music-dsp] Antialiased OSC

2018-08-04 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] Antialiased OSC

From: "Ross Bencina" 

Date: Sun, August 5, 2018 12:17 am

To: music-dsp@music.columbia.edu

--



> Hi Robert,

>

> On 5/08/2018 8:17 AM, robert bristow-johnson wrote:

>> In a software

>> synthetic that runs on a modern computer, the waste of memory does not

>> seem to be salient.  4096 × 4 × 64 = 1 meg.  That's 64 wavetables for

>> some instrument.
i meant to say "In a software synth" and the android spell-fascist insisted on 
"synthetic".


>

> The salient metric is amortized number of L1/L2/L3 cache misses per

> sample lookup.

>
that's a good point.  it's sorta one reason that the code that i posted in
dropbox processes samples by blocks of samples.  but since the wavetables are
so large (so that we can get away with linear interpolation between samples),
the pointer is striding through the wavetable in big steps.  and then there is
the morphing with other wavetables that are stored somewhere else.
caching is complicated.  besides morphing/mixing with other wavetables in up
to, say, 3 dimensions of control, those other wavetables can be anywhere.  i
think it would be pretty hard to avoid L1 cache misses (each wavetable is 16K).
i suppose the rows and columns of the wavetables can be switched so that
samples having the same index number for each wavetable are stored interlaced
and adjacent to each other, then mixing between the wavetables (for a single
output sample) would be less of a problem.  but they will still stride from the
beginning to the end of the wavetable in just a few samples.
i haven't thought about cache misses.  seems to me the only thing that can
maybe help is to reorder ("reshape" in MATLAB) the array of wavetables;
grouping together samples of the same sample index in the waveform.  in
3-dimensional mixing, the code that i posted would mix samples from 8 different
wavetables out of a larger constellation.  and, in the simple-minded "all
wavetables the same size" approach of mine (i think Nigel avoids this problem
in his code), we know that in one output sample computation, the index is the
same for all of the wavetables.  i just dunno what else can be done to reduce
L1 cache misses.  looks like L3 cache miss is not a problem.
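
A sketch of that interleaving (layout and names are assumptions for
illustration): store the k-th sample of every wavetable contiguously, so that
mixing several tables for one output sample touches neighboring memory instead
of one distant table per operand.

#define N_TABLES  64
#define TABLE_LEN 4096

/* interleaved[sample][table] instead of tables[table][sample] */
static float interleaved[TABLE_LEN][N_TABLES];

/* mix two adjacent tables at integer sample index i; both operands sit next to
 * each other on the same row (inter-sample interpolation omitted for brevity) */
static inline float mix_adjacent(int i, int t, float frac)
{
    const float *row = interleaved[i];
    return (1.0f - frac) * row[t] + frac * row[t + 1];
}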

--



r b-j                     r...@audioimagination.com



"Imagination is more important than knowledge."


Re: [music-dsp] Antialiased OSC

2018-08-04 Thread Ross Bencina

Hi Robert,

On 5/08/2018 8:17 AM, robert bristow-johnson wrote:
In a software 
synthetic that runs on a modern computer, the waste of memory does not 
seem to be salient.  4096 × 4 × 64 = 1 meg.  That's 64 wavetables for 
some instrument.


The salient metric is amortized number of L1/L2/L3 cache misses per 
sample lookup.


Ross.

Re: [music-dsp] Antialiased OSC

2018-08-04 Thread robert bristow-johnson


I am not sure what a "pure sawtooth phasor" is.  Do you mean a "naive sawtooth" 
a.k.a. a ramp function?
The technique that I have suggested is, say, 4096 samples for all active 
wavetables so that alignment and crossfading are simpler.  In a software 
synthetic that runs on a modern computer, the waste of memory does not seem to 
be salient.  4096 × 4 × 64 = 1 meg.  That's 64 wavetables for some instrument.
Of course the wavetables for higher pitches will have fewer harmonics.

--
r b-j                     r...@audioimagination.com
"Imagination is more important than knowledge."




 Original message 
From: Phil Burk  
Date: 8/4/2018  1:39 PM  (GMT-08:00) 
To: A discussion list for music-related DSP  
Subject: Re: [music-dsp] Antialiased OSC 

On Sat, Aug 4, 2018 at 10:53 AM, Nigel Redmon  wrote: 
With a full-bandwidth saw, though, the brightness is constant. That takes more 
like 500 harmonics at 40 Hz, 1000 at 20 Hz. So, as Robert says, 2048 or 4096 
are good choices (for both noise and harmonics). As I change frequencies  above 
86 Hz, I interpolate between wavetables with 1024 samples. For lower 
frequencies I interpolate between a bright wavetable and a pure sawtooth phasor 
that is not band limited. That way I can use the same oscillator as an LFO.
https://github.com/philburk/jsyn/blob/master/src/com/jsyn/engine/MultiTable.java#L167

Phil Burk


Re: [music-dsp] Antialiased OSC

2018-08-04 Thread Phil Burk
On Sat, Aug 4, 2018 at 10:53 AM, Nigel Redmon 
wrote:
>
> With a full-bandwidth saw, though, the brightness is constant. That takes
> more like 500 harmonics at 40 Hz, 1000 at 20 Hz. So, as Robert says, 2048
> or 4096 are good choices (for both noise and harmonics).
>
 As I change frequencies  above 86 Hz, I interpolate between wavetables
with 1024 samples. For lower frequencies I interpolate between a bright
wavetable and a pure sawtooth phasor that is not band limited. That way I
can use the same oscillator as an LFO.

https://github.com/philburk/jsyn/blob/master/src/com/jsyn/engine/MultiTable.java#L167

Phil Burk

Re: [music-dsp] Antialiased OSC

2018-08-04 Thread Nigel Redmon
> even though the wavetables can be *archived* with as few as 128 or 256 
> samples per wavetable (this can accurately represent the magnitude *and* 
> phase of each harmonic up to the 63rd or 127th harmonic), i very much 
> recommend at Program Change time, when the wavetables that will be used are 
> loaded from the archive to the memory space where you'll be 
> rockin'-n-rollin', that these wavetables be expanded (using bandlimited 
> interpolation) to 2048 or 4096 samples and then, in the oscillator code, you 
> do linear interpolation in real-time synthesis.  that wavetable expansion at 
> Program Change time will take a few milliseconds (big fat hairy deal).
> 
I know what you’re saying when you say “can be” (for many possible waves, or 
all waves if you’re willing to accept limitations), but to save possible grief 
for implementors: First, I’m sure many digital synths use 256 sample tables 
(and probably very common for synths that let you “draw” or manipulate wave 
tables), so it’s certainly not wrong. Just realize that 127 harmonics isn’t 
nearly enough if you expect to play a low sawtooth, filter open, with all the 
splendor of an analog synth. At 40 Hz, harmonics will top out at 5 kHz. As you 
play higher or lower notes, you’ll hear the harmonics walk up or down with the 
notes, as if a filter were tracking. With a full-bandwidth saw, though, the 
brightness is constant. That takes more like 500 harmonics at 40 Hz, 1000 at 20 
Hz. So, as Robert says, 2048 or 4096 are good choices (for both noise and 
harmonics).

I just didn’t want someone writing a bunch of code based on 256-sample tables, 
only to be disappointed that it doesn’t sound as analog as expected. We like 
our buzzy sawtooths ;-)
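
As an aside, here is a sketch of the expand-at-Program-Change step Robert
recommends in the passage quoted above (an illustration written for this
archive, not his C file): a 256-sample single-cycle table can hold harmonics 1
through 127 exactly, so expanding it to 4096 samples by Fourier analysis and
resynthesis is ideal band-limited interpolation of the periodic cycle.  In
practice you would do the same thing with an FFT and zero-padding of the
spectrum; the direct form below is just the easiest to read.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SMALL 256
#define BIG   4096
#define NHARM (SMALL / 2 - 1)   /* harmonics 1..127; DC and the Nyquist bin are ignored */

void expand_wavetable(const float small_table[SMALL], float big_table[BIG])
{
    double a[NHARM + 1], b[NHARM + 1];

    /* analysis: real DFT of the archived table at harmonics 1..127 */
    for (int k = 1; k <= NHARM; ++k) {
        a[k] = b[k] = 0.0;
        for (int n = 0; n < SMALL; ++n) {
            double w = 2.0 * M_PI * k * n / SMALL;
            a[k] += small_table[n] * cos(w);
            b[k] += small_table[n] * sin(w);
        }
        a[k] *= 2.0 / SMALL;
        b[k] *= 2.0 / SMALL;
    }

    /* resynthesis into the big table; a few milliseconds at Program Change time */
    for (int n = 0; n < BIG; ++n) {
        double s = 0.0;
        for (int k = 1; k <= NHARM; ++k) {
            double w = 2.0 * M_PI * k * n / BIG;
            s += a[k] * cos(w) + b[k] * sin(w);
        }
        big_table[n] = (float)s;
    }
}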



> On Aug 3, 2018, at 2:56 PM, robert bristow-johnson 
>  wrote:
>  Original Message 
> Subject: [music-dsp] Antialiased OSC
> From: "Kevin Chi" 
> Date: Fri, August 3, 2018 2:23 pm
> To: music-dsp@music.columbia.edu
> --
> >
> > Is there such a thing as today's standard for softSynth antialiased
> > oscillators?
> 
> i think there should be, but if i were to say so, i would sound like a stuck 
> record (and there will be people who disagree).
>  
> 
> stuck record:  "wavetable ... wavetable ... wavetable ..."
> 
> 
> >
> > I was looking up PolyBLEP oscillators, and was wondering how it would relate
> > to a 1-2 waveTables per octave based oscillator or maybe to some other
> > algos.
> >
> > thanks for any ideas and recommendations in advance,
> 
> if you want, i can send you a C file to show one way it can be done.  Nigel 
> Redmon also has some code online somewhere.
> 
> if your sample rate is 48 kHz and you're willing to put in a brickwall LPF at 
> 19 kHz, you can get away with 2 wavetables per octave, no aliasing, and 
> represent each surviving harmonic (that is below 19 kHz) perfectly.  if your 
> sample rate is 96 kHz, then there is **really** no problem getting the 
> harmonics down accurately (up to 30 kHz) and no aliases.
> 
> even though the wavetables can be *archived* with as few as 128 or 256 
> samples per wavetable (this can accurately represent the magnitude *and* 
> phase of each harmonic up to the 63rd or 127th harmonic), i very much 
> recommend at Program Change time, when the wavetables that will be used are 
> loaded from the archive to the memory space where you'll be 
> rockin'-n-rollin', that these wavetables be expanded (using bandlimited 
> interpolation) to 2048 or 4096 samples and then, in the oscillator code, you 
> do linear interpolation in real-time synthesis.  that wavetable expansion at 
> Program Change time will take a few milliseconds (big fat hairy deal).
> 
> lemme know, i'll send you that C file no strings attached.  (it's really 
> quite simple.)  and anyone listening in, i can do the same if you email me.  
> now this doesn't do the hard part of **defining** the wavetables (the C file 
> is just the oscillator with morphing).  but we can discuss how to do that 
> here later.
> 
> 
> --
> 
> r b-j r...@audioimagination.com
> 
> "Imagination is more important than knowledge.”
> 

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialiased OSC

2018-08-04 Thread Nigel Redmon
Robert mentioned I have some code (and more importantly, I think, discussion):

http://www.earlevel.com/main/category/digital-audio/oscillators/wavetable-oscillators/?order=ASC
 



> On Aug 3, 2018, at 2:23 PM, Kevin Chi  wrote:
> 
> Hi,
> 
> Is there such a thing as today's standard for softSynth antialiased 
> oscillators?
> 
> I was looking up PolyBLEP oscillators, and was wondering how it would relate
> to a 1-2 waveTables per octave based oscillator or maybe to some other algos.
> 
> thanks for any ideas and recommendations in advance,
> Kevin

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialiased OSC

2018-08-04 Thread Ross Bencina

Hi Kevin,

Wavetables are for synthesizing ANY band-limited *periodic* signal. On 
the other hand, the BLEP family of methods is for synthesizing 
band-limited *discontinuities* (first order, and/or higher order 
discontinuities).


It is true that BLEP can be used to synthesize SOME bandlimited periodic 
signals (typically those involving a few low-order discontinuities per 
cycle such as square, saw and triangle waveforms). In this case, BLEP can 
be thought of as adding "corrective grains" that cancel out the aliasing 
present in a naive (aliased) waveform that has sample-aligned 
discontinuities. The various __BL__ methods tend to vary in how they 
synthesize the corrective grains.
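
For a concrete picture of one such corrective grain, here is a minimal polyBLEP 
sawtooth sketch (a generic rendering of the technique, not any particular 
library's code; dt is f0/fs and all names are mine):

/* 2-sample polynomial residual around a unit-step discontinuity at phase 0. */
static double poly_blep(double t, double dt)
{
    if (t < dt) {                       /* just after the wrap  */
        t /= dt;
        return t + t - t * t - 1.0;
    }
    if (t > 1.0 - dt) {                 /* just before the wrap */
        t = (t - 1.0) / dt;
        return t * t + t + t + 1.0;
    }
    return 0.0;
}

/* One output sample of a saw: naive ramp minus the corrective grain. */
double polyblep_saw_tick(double *phase, double dt)
{
    double naive = 2.0 * (*phase) - 1.0;         /* aliased ramp in [-1, 1] */
    double out = naive - poly_blep(*phase, dt);
    *phase += dt;
    if (*phase >= 1.0)
        *phase -= 1.0;
    return out;
}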


If all you need to do is synthesize bandlimited periodic signals, I 
don't see many benefits to BLEP methods over wavetable synthesis.


Where BLEP comes into its own (in theory at least) is when the signal 
contains discontinuities that are not synchronous with the fundamental 
period. The obvious application is hard-sync, where discontinuities vary 
relative to the phase of the primary waveform.


The original BLEP method used wavetables for the corrective grains, so 
has no obvious performance benefit over periodic wavetable synthesis. 
Other derivative methods (maybe polyBLEP?) don't use wavetables for the 
corrective grains, so they might potentially have benefits in settings 
where there is limited RAM or where the cost of memory access and/or 
cache pollution is large (e.g. modern memory hierarchies) -- but you'd 
need to measure!


You mention frequency modulation. A couple of thoughts on that:

(1) With wavetable switching, frequency modulation will cause high 
frequency harmonics to fade in and out as the wavetables are crossfaded 
-- a kind of amplitude modulation on the high-order harmonics. The 
strength of the effect will depend on the amount of high-frequency 
content in the waveform, and the number of wavetables (per octave, say): 
fewer wavetables per octave will cause lower-frequency harmonics to be 
affected; more wavetables per octave will lessen the effect on low-order 
harmonics, but will increase the rate of the amplitude modulation. 
To some extent you can push this AM effect to higher frequencies by 
allowing some aliasing (say above 18kHz). You could eliminate the AM 
effect entirely with 2x oversampling.


(2) With BLEP-type methods, full alias suppression is dependent on 
generating corrective bandlimited pulses for all non-zero higher-order 
derivatives of the signal. Unless your original signal is a 
square/rectangle waveform, any frequency modulation will introduce 
additional derivative terms (product rule) that need to be compensated. 
For sufficiently low frequency, low amplitude modulation you may be able 
to ignore these terms, but beyond some threshold they will become 
significant and would need to be dealt with. I don't recall how PolyBLEP 
deals with higher-order corrective terms.


In any case, my main point here is that BLEP methods don't magically 
support artifact-free frequency modulation (except maybe for square waves).


In the end I don't think there's one single standard, because there are 
mutually exclusive trade-offs to be made. The design space includes:


Features:

- Supported frequency modulation? (just pitch bend? low frequency LFOs? 
audio-rate modulation?).


- Support for hard sync?

- Support for arbitrary waveforms?

Audio quality:

- Allowable aliasing specification (e.g. below 120dB over whole audio 
spectrum, or below 70dB below 10kHz, etc.)


- High-frequency harmonic modulation margin under FM?

Compute performance:

- RAM usage

- CPU usage per voice

And of course:

- Development time/cost (initial and cost of adding features)


Today, desktop CPUs are fast enough to support the most 
difficult-to-achieve synthesis capabilities with no measurable audio 
artifacts. There are plugins that aim for that, and use a whole CPU core 
to synthesize a single voice. There is a market for that. But if you 
goal is a 128-voice polysynth that uses a maximum of 2% CPU on a 
smartphone then you may not want to aim for say, completely-alias-free 
hard-sync of audio-rate frequency modulated arbitrary waveforms.


Cheers,

Ross.


On 4/08/2018 7:23 AM, Kevin Chi wrote:

Hi,

Is there such a thing as today's standard for softSynth antialiased 
oscillators?


I was looking up PolyBLEP oscillators, and was wondering how it would 
relate
to a 1-2 waveTables per octave based oscillator or maybe to some other 
algos.


thanks for any ideas and recommendations in advance,
Kevin
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp



Re: [music-dsp] Antialiased OSC

2018-08-03 Thread Phil Burk
Hello Kevin,

There are some antialiased oscillators in JSyn (Java) that might interest
you.

Harmonic table approach
https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SawtoothOscillatorBL.java

Square generated from Sawtooth
https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SquareOscillatorBL.java

Differentiated Parabolic Waveform - Simpler, faster and almost as clean.
https://github.com/philburk/jsyn/blob/master/src/com/jsyn/unitgen/SawtoothOscillatorDPW.java
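
For reference, the first-order DPW idea behind that last link boils down to a 
few lines; this is a generic sketch of the technique, not the JSyn code itself:

/* Square the naive ramp, take the first difference, rescale by fs/(4*f0).
   State: phase in [0,1) and the previous parabola sample. */
double dpw_saw_tick(double *phase, double *prevParabola, double f0, double fs)
{
    double saw = 2.0 * (*phase) - 1.0;        /* naive (aliased) saw      */
    double parabola = saw * saw;              /* piecewise-parabolic wave */
    double out = (parabola - *prevParabola) * fs / (4.0 * f0);
    *prevParabola = parabola;

    *phase += f0 / fs;
    if (*phase >= 1.0)
        *phase -= 1.0;
    return out;
}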

Phil Burk

On Fri, Aug 3, 2018 at 2:23 PM, Kevin Chi  wrote:

> Hi,
>
> Is there such a thing as today's standard for softSynth antialiased
> oscillators?
>
> I was looking up PolyBLEP oscillators, and was wondering how it would
> relate
> to a 1-2 waveTables per octave based oscillator or maybe to some other
> algos.
>
> thanks for any ideas and recommendations in advance,
> Kevin
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
>
>
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialiased OSC

2018-08-03 Thread robert bristow-johnson







 Original Message 

Subject: Re: [music-dsp] Antialiased OSC

From: "Kevin Chi" 

Date: Sat, August 4, 2018 2:44 am

To: music-dsp@music.columbia.edu

--



> Thank you for the quick ideas, the code looks nice.

>

> Currently I am not designing a wavetable OSC, I am just trying to do

> some basic VA waveforms (saw, tri, square) so naively I thought

> I just set up the 15-20 tables/waveform from Fourier series whenever I

> start the app/plugin and that could do the job. Then interpolating

> between the samples of the closest wavetables. But maybe I am too naive? :)

>

> With wavetables I was also wondering how one can act when there is a

> pitch modulation of a running note that will go

> up/down an octave... as my understanding this would mean if I don't

> switch wavetables at a certain pitch mod, then it will

> introduce more and more aliasing. But checking this at every sample

> sounds like some overhead maybe shouldn't be there.

as the pitch modulates, let's say the pitch is increasing, you *crossfade* from 
a wavetable with more harmonics to a wavetable that is identical except with 
fewer harmonics.
as the pitch is decreasing you crossfade from a wavetable with fewer harmonics 
to one with more
harmonics.
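
A minimal sketch of that crossfade (the table layout, the log2 mapping and all 
names here are assumptions, not the contents of the C file mentioned earlier):

#include <math.h>

/* tables[0..numTables-1] hold the same waveform with progressively fewer
   harmonics as the index goes up; tablesPerOctave of them per octave,
   starting at a reference pitch fRef. */
double crossfaded_table_read(const float **tables, int numTables, int tableLen,
                             double tablesPerOctave, double fRef,
                             double f0, double phase)
{
    double idx = tablesPerOctave * log2(f0 / fRef);   /* fractional table index */
    if (idx < 0.0) idx = 0.0;
    if (idx > (double)(numTables - 1)) idx = (double)(numTables - 1);

    int lo = (int)idx;
    int hi = (lo + 1 < numTables) ? lo + 1 : lo;
    double fade = idx - (double)lo;                   /* 0..1 linear crossfade  */

    int k = (int)(phase * (double)tableLen) % tableLen;  /* nearest-sample read */
    return (1.0 - fade) * tables[lo][k] + fade * tables[hi][k];
}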

--



r b-j r...@audioimagination.com



"Imagination is more important than knowledge."

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialiased OSC

2018-08-03 Thread Kevin Chi

Thank you for the quick ideas, the code looks nice.

Currently I am not designing a wavetable OSC, I am just trying to do 
some basic VA waveforms (saw, tri, square), so naively I thought 
I'd just set up the 15-20 tables/waveform from Fourier series whenever I 
start the app/plugin and that could do the job, then interpolate 
between the samples of the closest wavetables. But maybe I am too naive? :)
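
A minimal sketch of that table setup in C (one table per harmonic limit; the 
truncated Fourier series of a saw, with a normalization and names that are mine):

#include <math.h>

/* Fill one saw table with harmonics 1..numHarmonics (1/k amplitude rolloff). */
void build_saw_table(float *table, int tableLen, int numHarmonics)
{
    const double pi = 3.14159265358979323846;
    for (int n = 0; n < tableLen; n++) {
        double phase = 2.0 * pi * (double)n / (double)tableLen;
        double sum = 0.0;
        for (int k = 1; k <= numHarmonics; k++)
            sum += sin(k * phase) / k;          /* Fourier series of a saw    */
        table[n] = (float)(sum * 2.0 / pi);     /* scale roughly into [-1, 1] */
    }
}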

With wavetables I was also wondering what to do when there is a 
pitch modulation on a running note that goes 
up/down an octave... as I understand it, this would mean that if I don't 
switch wavetables at a certain point of the pitch modulation, it will 
introduce more and more aliasing. But checking this at every sample 
sounds like overhead that maybe shouldn't be there.


--
Kevin @ Fine Cut Bodies
Mailto: ke...@finecutbodies.com
Web: http://finecutbodies.com
LinkedIn: http://lnkd.in/XZiSjF

On 8/3/18 3:11 PM, music-dsp-requ...@music.columbia.edu wrote:

lemme know if this doesn't 
work: https://www.dropbox.com/s/cybcs7tgzgplnwc/wavetable_oscillator.c
remember that this is *synthesis* code.  it does not extract wavetables from a 
sampled sound (i wrote a paper a quarter century ago how to do that, but it's 
not code).
nor does it define bandlimited square, saw, PWM, hard-sync, whatever.  that's a 
sorta difficult problem, but one that someone has for sure solved and we can 
discuss here how to do that (perhaps in MATLAB).  extracting wavetables from 
sampled notes requires pitch detection/tracking and
interpolation.


___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialiased OSC

2018-08-03 Thread ćwiek
Works like a charm!
I've implemented a wavetable osc in the Xaoc Batumi module, which is why this
interests me.
Looking forward to doing some comparison. Thanks!

sob., 4 sie 2018, 01:11 użytkownik robert bristow-johnson <
r...@audioimagination.com> napisał:

>
>
>
> lemme know if this doesn't work:
> https://www.dropbox.com/s/cybcs7tgzgplnwc/wavetable_oscillator.c
>
> remember that this is *synthesis* code.  it does not extract wavetables
> from a sampled sound (i wrote a paper a quarter century ago how to do that,
> but it's not code).  nor does it define bandlimited square, saw, PWM,
> hard-sync, whatever.  that's a sorta difficult problem, but one that
> someone has for sure solved and we can discuss here how to do that (perhaps
> in MATLAB).  extracting wavetables from sampled notes requires pitch
> detection/tracking and interpolation.
>
> L8r,
>
> r b-j
>
> -------- Original Message ----
> Subject: Re: [music-dsp] Antialiased OSC
> From: ćwiek 
> Date: Fri, August 3, 2018 6:00 pm
> To: r...@audioimagination.com
> music-dsp@music.columbia.edu
> --
>
> > Can you provide the code with something like pastebin/ Dropbox / gdrive?
> > I'm also very interested in seeing this implementation.
> > Thanks,
> > napent
> >
> > sob., 4 sie 2018, 00:57 użytkownik robert bristow-johnson <
> > r...@audioimagination.com> napisał:
> >
> >>
> >>
> >>  Original Message 
> >> Subject: [music-dsp] Antialiased OSC
> >> From: "Kevin Chi" 
> >> Date: Fri, August 3, 2018 2:23 pm
> >> To: music-dsp@music.columbia.edu
> >> --
> >> >
> >> > Is there such a thing as today's standard for softSynth antialiased
> >> > oscillators?
> >>
> >> i think there should be, but if i were to say so, i would sound like a
> >> stuck record (and there will be people who disagree).
> >>
> >>
> >> stuck record: "wavetable ... wavetable ... wavetable ..."
> >>
> >>
> >> >
> >> > I was looking up PolyBLEP oscillators, and was wondering how it would
> >> relate
> >> > to a 1-2 waveTables per octave based oscillator or maybe to some other
> >> > algos.
> >> >
> >> > thanks for any ideas and recommendations in advance,
> >>
> >> if you want, i can send you a C file to show one way it can be done.
> >> Nigel Redmon also has some code online somewhere.
> >>
> >> if your sample rate is 48 kHz and you're willing to put in a brickwall LPF
> >> at 19 kHz, you can get away with 2 wavetables per octave, no aliasing, and
> >> represent each surviving harmonic (that is below 19 kHz) perfectly. if
> >> your sample rate is 96 kHz, then there is **really** no problem getting the
> >> harmonics down accurately (up to 30 kHz) and no aliases.
> >>
> >> even though the wavetables can be *archived* with as few as 128 or 256
> >> samples per wavetable (this can accurately represent the magnitude *and*
> >> phase of each harmonic up to the 63rd or 127th harmonic), i very much
> >> recommend at Program Change time, when the wavetables that will be used are
> >> loaded from the archive to the memory space where you'll be
> >> rockin'-n-rollin', that these wavetables be expanded (using bandlimited
> >> interpolation) to 2048 or 4096 samples and then, in the oscillator code,
> >> you do linear interpolation in real-time synthesis. that wavetable
> >> expansion at Program Change time will take a few milliseconds (big fat
> >> hairy deal).
> >>
> >> lemme know, i'll send you that C file no strings attached. (it's really
> >> quite simple.) and anyone listening in, i can do the same if you email
> >> me. now this doesn't do the hard part of **defining** the wavetables (the
> >> C file is just the oscillator with morphing). but we can discuss how to do
> >> that here later.
> >>
> >>
> >> --
> >>
> >> r b-j r...@audioimagination.com
> >>
> >> "Imagination is more important than knowledge."
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >> ___
> >> dupswapdrop: music-dsp mailing list
> >> music-dsp@music.columbia.edu
> >> https://lists.columbia.edu/mailman/listinfo/music-dsp
> >
>
>
>
>
>
>
>
>
> --
>
> r b-j r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
>
>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialiased OSC

2018-08-03 Thread robert bristow-johnson





lemme know if this doesn't 
work: https://www.dropbox.com/s/cybcs7tgzgplnwc/wavetable_oscillator.c
remember that this is *synthesis* code.  it does not extract wavetables from a 
sampled sound (i wrote a paper a quarter century ago how to do that, but it's 
not code).
nor does it define bandlimited square, saw, PWM, hard-sync, whatever.  that's a 
sorta difficult problem, but one that someone has for sure solved and we can 
discuss here how to do that (perhaps in MATLAB).  extracting wavetables from 
sampled notes requires pitch detection/tracking and
interpolation.
L8r,
r b-j


 Original Message 

Subject: Re: [music-dsp] Antialiased OSC

From: ćwiek 

Date: Fri, August 3, 2018 6:00 pm

To: r...@audioimagination.com

music-dsp@music.columbia.edu

--



> Can you provide the code with something like pastebin/ Dropbox / gdrive?

> I'm also very interested in seeing this implementation.

> Thanks,

> napent

>

> sob., 4 sie 2018, 00:57 użytkownik robert bristow-johnson <

> r...@audioimagination.com> napisał:

>

>>

>>

>>  Original Message 

>> Subject: [music-dsp] Antialiased OSC

>> From: "Kevin Chi" 

>> Date: Fri, August 3, 2018 2:23 pm

>> To: music-dsp@music.columbia.edu

>> --

>> >

>> > Is there such a thing as today's standard for softSynth antialiased

>> > oscillators?

>>

>> i think there should be, but if i were to say so, i would sound like a

>> stuck record (and there will be people who disagree).

>>

>>

>> stuck record: "wavetable ... wavetable ... wavetable ..."

>>

>>

>> >

>> > I was looking up PolyBLEP oscillators, and was wondering how it would

>> relate

>> > to a 1-2 waveTables per octave based oscillator or maybe to some other

>> > algos.

>> >

>> > thanks for any ideas and recommendations in advance,

>>

>> if you want, i can send you a C file to show one way it can be done.

>> Nigel Redmon also has some code online somewhere.

>>

>> if your sample rate is 48 kHz and you're willing to put in a brickwall LPF

>> at 19 kHz, you can get away with 2 wavetables per octave, no aliasing, and

>> represent each surviving harmonic (that is below 19 kHz) perfectly. if

>> your sample rate is 96 kHz, then there is **really** no problem getting the

>> harmonics down accurately (up to 30 kHz) and no aliases.

>>

>> even though the wavetables can be *archived* with as few as 128 or 256

>> samples per wavetable (this can accurately represent the magnitude *and*

>> phase of each harmonic up to the 63rd or 127th harmonic), i very much

>> recommend at Program Change time, when the wavetables that will be used are

>> loaded from the archive to the memory space where you'll be

>> rockin'-n-rollin', that these wavetables be expanded (using bandlimited

>> interpolation) to 2048 or 4096 samples and then, in the oscillator code,

>> you do linear interpolation in real-time synthesis. that wavetable

>> expansion at Program Change time will take a few milliseconds (big fat

>> hairy deal).

>>

>> lemme know, i'll send you that C file no strings attached. (it's really

>> quite simple.) and anyone listening in, i can do the same if you email

>> me. now this doesn't do the hard part of **defining** the wavetables (the

>> C file is just the oscillator with morphing). but we can discuss how to do

>> that here later.

>>

>>

>> --

>>

>> r b-j r...@audioimagination.com

>>

>> "Imagination is more important than knowledge."

>>

>>

>>

>>

>>

>>

>>

>> ___

>> dupswapdrop: music-dsp mailing list

>> music-dsp@music.columbia.edu

>> https://lists.columbia.edu/mailman/listinfo/music-dsp

>


--



r b-j r...@audioimagination.com



"Imagination is more important than knowledge."

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialiased OSC

2018-08-03 Thread ćwiek
Can you provide the code with something like pastebin/ Dropbox / gdrive?
I'm also very interested in seeing this implementation.
Thanks,
napent

sob., 4 sie 2018, 00:57 użytkownik robert bristow-johnson <
r...@audioimagination.com> napisał:

>
>
>  Original Message 
> Subject: [music-dsp] Antialiased OSC
> From: "Kevin Chi" 
> Date: Fri, August 3, 2018 2:23 pm
> To: music-dsp@music.columbia.edu
> --
> >
> > Is there such a thing as today's standard for softSynth antialiased
> > oscillators?
>
> i think there should be, but if i were to say so, i would sound like a
> stuck record (and there will be people who disagree).
>
>
> stuck record:  "wavetable ... wavetable ... wavetable ..."
>
>
> >
> > I was looking up PolyBLEP oscillators, and was wondering how it would
> relate
> > to a 1-2 waveTables per octave based oscillator or maybe to some other
> > algos.
> >
> > thanks for any ideas and recommendations in advance,
>
> if you want, i can send you a C file to show one way it can be done.
> Nigel Redmon also has some code online somewhere.
>
> if your sample rate is 48 kHz and you're willing to put in a brickwall LPF
> at 19 kHz, you can get away with 2 wavetables per octave, no aliasing, and
> represent each surviving harmonic (that is below 19 kHz) perfectly.  if
> your sample rate is 96 kHz, then there is **really** no problem getting the
> harmonics down accurately (up to 30 kHz) and no aliases.
>
> even though the wavetables can be *archived* with as few as 128 or 256
> samples per wavetable (this can accurately represent the magnitude *and*
> phase of each harmonic up to the 63rd or 127th harmonic), i very much
> recommend at Program Change time, when the wavetables that will be used are
> loaded from the archive to the memory space where you'll be
> rockin'-n-rollin', that these wavetables be expanded (using bandlimited
> interpolation) to 2048 or 4096 samples and then, in the oscillator code,
> you do linear interpolation in real-time synthesis.  that wavetable
> expansion at Program Change time will take a few milliseconds (big fat
> hairy deal).
>
> lemme know, i'll send you that C file no strings attached.  (it's really
> quite simple.)  and anyone listening in, i can do the same if you email
> me.  now this doesn't do the hard part of **defining** the wavetables (the
> C file is just the oscillator with morphing).  but we can discuss how to do
> that here later.
>
>
> --
>
> r b-j r...@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
>
>
>
>
> ___
> dupswapdrop: music-dsp mailing list
> music-dsp@music.columbia.edu
> https://lists.columbia.edu/mailman/listinfo/music-dsp
___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Antialiased OSC

2018-08-03 Thread robert bristow-johnson



 Original Message 

Subject: [music-dsp] Antialiased OSC

From: "Kevin Chi" 

Date: Fri, August 3, 2018 2:23 pm

To: music-dsp@music.columbia.edu

--

>

> Is there such a thing as today's standard for softSynth antialiased

> oscillators?
i think there should be, but if i were to say so, i would sound like a stuck 
record (and there will be people who disagree).

stuck record:  "wavetable ... wavetable ... wavetable ..."


>

> I was looking up PolyBLEP oscillators, and was wondering how it would relate

> to a 1-2 waveTables per octave based oscillator or maybe to some other

> algos.

>

> thanks for any ideas and recommendations in advance,
if you want, i can send you a C file to show one way it can be done.  Nigel 
Redmon also has some code online somewhere.
if your sample rate is 48 kHz and you're willing to put in a brickwall LPF at 
19 kHz, you can get away with 2 wavetables per octave, no aliasing, and 
represent each surviving harmonic (that is below 19 kHz) perfectly.  if your 
sample rate is 96 kHz, then there is **really** no problem getting the 
harmonics down accurately (up to 30 kHz) and no aliases.
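
One way to check that claim numerically (my reading of it, not from the post): 
with 2 tables per octave a table is read at most half an octave, a factor of 
sqrt(2), above its design pitch, so a harmonic kept at 19 kHz can reach about 
26.9 kHz; it folds back around the 24 kHz Nyquist to about 21.1 kHz, which is 
still above the 19 kHz brickwall and gets filtered out.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double fs = 48000.0, nyquist = fs / 2.0;
    double brickwall = 19000.0;
    double step = sqrt(2.0);             /* half an octave: 2 tables per octave */

    double worst = brickwall * step;     /* highest kept harmonic, worst case   */
    double alias = fs - worst;           /* where it folds back to              */

    printf("worst-case harmonic %.0f Hz (Nyquist %.0f Hz)\n", worst, nyquist);
    printf("aliases back to %.0f Hz, %s the 19 kHz brickwall\n", alias,
           alias > brickwall ? "above (so it is filtered out)" : "below (audible!)");
    return 0;
}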
even though the wavetables can be *archived* with as few as 128 or 256 samples 
per wavetable (this can accurately represent the magnitude *and* phase of each 
harmonic up to the 63rd or 127th harmonic), i very much recommend at Program 
Change time, when the wavetables that will be used are loaded from the archive 
to the memory space where you'll be rockin'-n-rollin', that these wavetables be 
expanded (using bandlimited interpolation) to 2048 or 4096 samples and then, in 
the oscillator code, you do linear interpolation in real-time synthesis.  that 
wavetable expansion at Program Change time will take a few milliseconds (big 
fat hairy deal).
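
The run-time read on such an expanded table is then just a phase accumulator 
plus a two-point interpolation; a minimal sketch (names are mine):

/* Read a (say 2048- or 4096-point) table at phase in [0,1) with linear
   interpolation between neighbouring samples. */
double table_read_linear(const float *table, int tableLen, double phase)
{
    double pos = phase * (double)tableLen;
    int i0 = (int)pos;
    int i1 = (i0 + 1) % tableLen;            /* wrap at the end of the table */
    double frac = pos - (double)i0;
    return table[i0] + frac * ((double)table[i1] - (double)table[i0]);
}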
lemme know, i'll send you that C file no strings attached.  (it's really quite 
simple.)  and anyone listening in, i can do the same if you email me.  now this 
doesn't do the hard part of **defining** the wavetables (the C file is just the 
oscillator with morphing).  but we can discuss how to do that here later.

--



r b-j r...@audioimagination.com



"Imagination is more important than knowledge."

___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp