On 04/09/15 02:44, robert bristow-johnson wrote:
>> In both cases the sampling rate is already available before processing starts, via prepareToPlay (int samplesPerBlockExpected, double sampleRate). Having it stored on AudioSampleBuffer, while handy, would be redundant and, more importantly, it would require all calling code to be modified to initialise it properly.

> no.  it would not.

Would you trust a class where the sampling rate is an optional feature or can default to 0?

What I was saying was that, in order to be able to trust the class, a non-backward-compatible change would be necessary to go from:

AudioSampleBuffer (int numChannels, int numSamples);
AudioSampleBuffer (float *const *dataToReferTo, int numChannels, int numSamples);
AudioSampleBuffer (float *const *dataToReferTo, int numChannels, int startSample, int numSamples);

to

AudioSampleBuffer (int numChannels, int numSamples, double samplingRate);
AudioSampleBuffer (float *const *dataToReferTo, int numChannels, int numSamples, double samplingRate);
AudioSampleBuffer (float *const *dataToReferTo, int numChannels, int startSample, int numSamples, double samplingRate);

in order to *force* a compile-time error, making it clear to all client code of this class that the responsibilities have changed.

Of course, you can add a variable like samplingRate silently and initialise it to 0 (or to a potentially wrong sampling rate). Then, long after that change, the time will come when somebody tries to use the sampling rate as something that can be trusted. That kind of bug is easy to introduce and forget about, but not so easy to track down afterwards.
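
To make the hazard concrete (a contrived sketch; the getSampleRate() accessor and the designLowpass() helper are hypothetical):

AudioSampleBuffer buffer (2, 512);                 // legacy code, never sets the rate
// ... much later, in code written after the silent change ...
double nyquist = 0.5 * buffer.getSampleRate();     // silently 0.0
auto coeffs = designLowpass (cutoffHz / nyquist);  // division by zero, garbage filter

The compiler stays silent, and nothing fails until someone listens to the output.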

That's the reason why, most of the time, I prefer a non-backward-compatible change that forces you to update your client code and lets the compiler show you where you need to take action, over a silent modification that breaks assumptions.

Jules uses that policy in JUCE most of the time. Whenever an important change that can affect client code has to be made in the JUCE codebase, he goes to great lengths to make sure that deprecated uses of the code trigger a compile-time error, and he tells you how to fix it in a comment next to the error location.
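
For instance, the pattern looks roughly like this (a hypothetical sketch using a C++11 deleted overload, not the actual JUCE source):

class AudioSampleBuffer
{
public:
    AudioSampleBuffer (int numChannels, int numSamples, double samplingRate);

    // This constructor has been deprecated: the buffer now needs to know
    // its sampling rate. Pass it explicitly as the last argument, e.g.
    //     AudioSampleBuffer (2, 512, 44100.0)
    AudioSampleBuffer (int numChannels, int numSamples) = delete;
};

Old client code then fails to compile right on top of the comment that explains the fix.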

> perhaps, after sampleRate is added to AudioSampleBuffer, in time someone will make a backward-compatible version of prepareToPlay() that is
>
> prepareToPlay(samplesPerBlockExpected);
>
> a call to prepareToPlay() like that would use the sampleRate embedded in the AudioSampleBuffer.
>
> old code would still work, without modification, just like it had before.

That would be a significant change of API.

In one case you're told beforehand what the sampling rate is, and it's guaranteed to be constant during all processing. This is the place to perform offline, sampling-rate-dependent initialisation, like resampling an impulse response or designing a FIR filter on the fly.

In the other case, your code would have to be prepared to deal with any sampling rate during the processing call. The host could change the sampling rate without notifying you, and you would have to react on the fly, when it is often too late to perform expensive, time-sensitive initialisation.

In the first case, the sampling rate is a property of a higher-level object, the audio stream context; the audio buffers are just low-level blocks carrying the payload.
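
To make the contrast concrete (a sketch with JUCE-style names; resampleImpulseResponse(), rebuildFilters() and a per-buffer getSampleRate() accessor are hypothetical):

struct MyProcessor
{
    // current contract: the rate is pinned down once, before processing starts
    void prepareToPlay (int samplesPerBlockExpected, double sampleRate)
    {
        resampledIR = resampleImpulseResponse (originalIR, sampleRate); // expensive, but safe here
    }

    // hypothetical contract, if the rate travelled with every buffer instead
    void processBlock (AudioSampleBuffer& buffer)
    {
        if (buffer.getSampleRate() != lastRate)       // may change under your feet
        {
            lastRate = buffer.getSampleRate();
            rebuildFilters (lastRate);                // too late for heavy work here
        }
        // ... per-sample processing ...
    }

    double lastRate = 0.0;
    // originalIR, resampledIR and the two helpers elided for brevity
};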

> yup. that's the reason. in my case we had AudioSampleBuffers that held many seconds of sound, so it could just as well be a whole song or a whole piece of media. stereo, 16-bit, 44.1 kHz is about 10 megabytes per minute; double that for 32-bit floats. so we're talking about, say, 1 meg at a minimum and 100 megs max. it would be nice to not have to either copy or resize that whole mess and just be able to lay down the frames on the original AudioSampleBuffer, and not have to write special code to deal with the first two frames or the last two frames, where your window would extend beyond the audio sample array proper. if it were sufficiently zero-padded, extending beyond the ends of the audio would not hurt you.
>
> in my work, i found it easier and more robust (against bugs) to zero-pad the damn thing myself, rather than write the special code to deal with the edges. i was doing that so often that i finally started thinking to myself that AudioSampleBuffer should just do that.

I 100% agree with that, but to me they're different use cases. In one case we're talking about dataflow streams split into AudioSampleBuffer chunks, and in the other we're talking about offline, random-access soundfile analysis. I think it's hard to tune a class to work correctly in two broadly different contexts.
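
For what it's worth, that hand zero-padding can be written against the current AudioSampleBuffer API along these lines (a sketch; pad would be half the analysis window length):

// build a copy with `pad` zeroed frames on each side
AudioSampleBuffer padded (source.getNumChannels(), source.getNumSamples() + 2 * pad);
padded.clear();   // zero everything, including both margins
for (int ch = 0; ch < source.getNumChannels(); ++ch)
    padded.copyFrom (ch, pad, source, ch, 0, source.getNumSamples());
// a window centred on any original frame can now read up to `pad` samples
// past either end without any edge special-casing

although for a 100-meg buffer that copy is exactly the cost you'd like to avoid.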

BTW, there's a design pattern that I really like. It is closely related to what you suggest, and it really helps when trying to write frame-based processing code in a streaming context. It is called PhantomBuffer. It implements a circular buffer that guarantees contiguous memory access for any offset by keeping a 'phantom' zone at the end of the buffer that always mirrors the beginning.

here is the reference paper: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.77.6998&rep=rep1&type=pdf
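
The core of the idea looks roughly like this (a minimal sketch of the technique, not the paper's actual implementation):

#include <cstddef>
#include <vector>

class PhantomBuffer
{
public:
    // the backing store is size + phantomSize samples; the phantom zone at
    // the end always mirrors the first phantomSize samples of the buffer
    PhantomBuffer (size_t size, size_t phantomSize)
        : data (size + phantomSize, 0.0f),
          size (size), phantomSize (phantomSize), writePos (0) {}

    void push (float x)
    {
        data[writePos] = x;
        if (writePos < phantomSize)        // mirror the start into the phantom zone
            data[size + writePos] = x;
        writePos = (writePos + 1) % size;
    }

    // any window of up to phantomSize samples, starting at any offset, is
    // contiguous in memory: no wrap-around test in the inner loop
    const float* window (size_t offset) const
    {
        return data.data() + (offset % size);
    }

private:
    std::vector<float> data;
    size_t size, phantomSize, writePos;
};

With phantomSize equal to your frame length, every analysis frame comes out as a plain pointer, which is exactly what frame-based code wants.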

Anyway, my goal was only to give some hints about why it may be done the way it is in JUCE, and to offer another point of view.
