Re: [music-dsp] sinc interp, higher orders

2015-09-20 Thread robert bristow-johnson

On 9/11/15 3:25 PM, Nigel Redmon wrote:

Great—glad to hear the articles were helpful, Nuno. (Website back up.)

To build the oscillator tables I'm using that multi-table technique 
you describe in your waveform series posts, where maxHarms is: int 
maxHarms = 44100.f / (3.0 * BASE_FREQ) + 0.5; Is this a heuristic?


This equation is specific to my goal of showing the minimum number of 
wavetables for a high-quality oscillator at 44.1 kHz sample rate. That 
is, one wavetable per octave. The oscillator I demonstrated does alias 
in the audio band, but those components are confined to being above 
one-third of the sample rate (in this case 44100.0 / 3.0). I think the 
rest of the equation is obvious: dividing by the base frequency (20 Hz, 
for instance), because the lower the lowest (base) frequency that you 
want to represent, the more harmonics are needed to fill the audio 
band. And adding 0.5 before the int truncation rounds to the nearest integer.


So, back to the SR/3 part: Again, we’re allowing one wavetable per 
octave. If you wanted no aliasing at all, you’d need to limit the 
highest harmonic to 22050 Hz (half SR) when that table is shifted up 
an octave. But the highest harmonic, for such a table, would only be 
at 11025 when not shifted up an octave. That’s not very good. Instead, 
we say that we’ll allow aliasing, but limit it to staying very high in 
the audio band where we can’t hear it, or at least it’s unnoticed.


the way i would quantify the problem is to define a frequency band from 
somewhere below Nyquist (like 19 kHz) up to Nyquist (22.05 kHz) in which 
you *do* allow foldover and aliasing.  or, on the other hand, missing 
harmonics.  if you don't mind crap above 19 kHz (put a brick-wall LPF on 
it, if you do mind), you can get away with 2 wavetables per octave with 
Fs = 44.1 kHz, with non-zero harmonics up to 19 kHz and no aliases below 
19 kHz.  some harmonics will fold back and become non-harmonic, but 
they'll stay above 19.


i thought i tossed out the equations before, but i don't remember.  i 
can do it again, i s'pose.


Well, if we allow 1 kHz of aliasing, then when shifted up, the aliases 
would reach down to 21050 Hz, and at the low end of the octave the 
highest harmonic would be at 11525 Hz. The aliasing would be 
acceptable (not only inaudible, but this is for a synth: higher 
harmonics are typically lower in amplitude than lower harmonics, AND we 
usually run through a lowpass filter), but there's not much 
improvement in the highest frequency component at the start of the octave. 
Optimal is probably where the highest component at the bottom of the 
octave meets the worst-case aliasing from the top, and that crossover 
lies in the top octave of the audio band (11025 to 22050 Hz): solving 
f = 44100 - 2f gives f = 44100/3 (which is also 22050/1.5), i.e. 14700 Hz.


So, the equation just determines how many harmonics should be in the 
table for a given octave—the highest harmonic for a given octave table 
should be limited to 14700 Hz to allow for shifting up an octave.


Of course, you could use more tables and limit the shifting to a 
fraction of an octave, and do better, but good luck hearing the 
difference ;-) It’s simply a good tradeoff.


--

r b-j  r...@audioimagination.com

"Imagination is more important than knowledge."



___
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp

Re: [music-dsp] Announcement: libsoundio 1.0.0 released

2015-09-20 Thread Ross Bencina

On 21/09/2015 10:34 AM, Bjorn Roche wrote:

>> I noticed that PortAudio's API allows one to open a duplex stream
>> with different stream parameters for each device. Does it actually
>> make sense to open an input device and an output device with...
>>
>>   * ...different sample rates?


> PA certainly doesn't support this. You might have two devices open at
> one time (one for input and one for output) and they might be running at
> separate sample rates, but the stream itself will only have one sample
> rate -- at least one device will be sample-rate converted if necessary.


A full duplex PA stream has a single sample rate. There is only one 
sample rate parameter to Pa_OpenStream().
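For reference, Pa_OpenStream's declaration (as it appears in PortAudio's portaudio.h): separate PaStreamParameters structs for input and output, but a single sampleRate for the whole stream.

```c
PaError Pa_OpenStream(PaStream **stream,
                      const PaStreamParameters *inputParameters,
                      const PaStreamParameters *outputParameters,
                      double sampleRate,  /* one rate for both directions */
                      unsigned long framesPerBuffer,
                      PaStreamFlags streamFlags,
                      PaStreamCallback *streamCallback,
                      void *userData);
```

Per-device properties (device, channelCount, sampleFormat, suggestedLatency) live in the two PaStreamParameters structs, which is why latency can be tuned separately while the sample rate cannot.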




>>   * ...different latency / hardware buffer values?


Some host APIs support separate values for input and output. And yes, in 
my experience you can get lowest full-duplex latency by tuning the 
parameters separately for input and output.




> PA probably only uses one of the two values in at least some situations
> like this. In fact, on OS X (and possibly on other APIs),


It depends on the host API. e.g. an ASIO full duplex stream only has one 
buffer size parameter.




> the latency
> parameter is often ignored completely anyway. (or at least it was when I
> last looked at the code)


That is false. Phil and I did a lot of work a couple of years back to 
fix the interpretation of latency parameters.




>>   * ...different sample formats?


> I don't think this is of much use to many people (anybody?). If it is, I
> don't think the person who needs it would complain too much about a few
> extra lines of conversion code, but maybe I'm wrong.


Agree. There is no particular benefit.

Ross.









Re: [music-dsp] Announcement: libsoundio 1.0.0 released

2015-09-20 Thread Bjorn Roche
On Sun, Sep 20, 2015 at 10:21 AM, Andrew Kelley 
wrote:

> On Fri, Sep 4, 2015 at 11:47 AM Andrew Kelley 
> wrote:
>


> A ringbuffer introduces a buffer's worth of delay. Not good for
>>> applications that require low latency. A DAW would be a better example
>>> than a reverb. No low-latency monitoring with this arrangement.
>>>
>>
>> I'm going to look carefully into this. I think you brought up a potential
>> flaw in the libsoundio API, in which case I'm going to figure out how to
>> address the problem and then update the API.
>>
>
> I think you are right that duplex streams is a missing feature from
> libsoundio's current API. Upon reexamination, it looks like it is possible
> to support duplex streams on each backend.
>

This will be a boon for libsoundio!


> I noticed that PortAudio's API allows one to open a duplex stream with
> different stream parameters for each device. Does it actually make sense to
> open an input device and an output device with...
>
>  * ...different sample rates?
>

PA certainly doesn't support this. You might have two devices open at one
time (one for input and one for output) and they might be running at
separate sample rates, but the stream itself will only have one sample rate
-- at least one device will be sample-rate converted if necessary.


>  * ...different latency / hardware buffer values?
>

PA probably only uses one of the two values in at least some situations
like this. In fact, on OS X (and possibly on other APIs), the latency
parameter is often ignored completely anyway. (or at least it was when I
last looked at the code)


>  * ...different sample formats?
>

I don't think this is of much use to many people (anybody?). If it is, I
don't think the person who needs it would complain too much about a few
extra lines of conversion code, but maybe I'm wrong.

bjorn

-- 
Bjorn Roche
@shimmeoapp

Re: [music-dsp] Announcement: libsoundio 1.0.0 released

2015-09-20 Thread Andrew Kelley
On Fri, Sep 4, 2015 at 11:47 AM Andrew Kelley  wrote:

> >>
>> >> And an observation: libsoundio has a read and a write callback. If I
>> was
>> >> writing an audio program that produced output based on the input (such
>> as a
>> >> reverb, for example), do I have any guarantee that a write callback
>> will
>> >> only come after a read callback, and that for every write callback
>> there is
>> >> a read callback?
>> >
>> >
>> > I don't think that every sound driver guarantees this. I see that
>> PortAudio
>> > supports this API but I think they have to do additional buffering to
>> > accomplish it in a cross platform manner.
>> >
>> > If you're writing something that is reading from an input device and
>> writing
>> > to an output device, I think your best bet is to use a ring buffer to
>> store
>> > the input.
>> >
>> > But, if you're creating an effect such as reverb, why bother with sound
>> > devices at all? Sounds like a good use case for JACK or LV2.
>> >
>>
>> A ringbuffer introduces a buffer's worth of delay. Not good for
>> applications that require low latency. A DAW would be a better example
>> than a reverb. No low-latency monitoring with this arrangement.
>>
>
> I'm going to look carefully into this. I think you brought up a potential
> flaw in the libsoundio API, in which case I'm going to figure out how to
> address the problem and then update the API.
>

I think you are right that duplex streams is a missing feature from
libsoundio's current API. Upon reexamination, it looks like it is possible
to support duplex streams on each backend.

I noticed that PortAudio's API allows one to open a duplex stream with
different stream parameters for each device. Does it actually make sense to
open an input device and an output device with...

 * ...different sample rates?
 * ...different latency / hardware buffer values?
 * ...different sample formats?

Regards,
Andrew