On 9/3/15 5:57 AM, mdsp wrote:
> As a long-time JUCE user and observer let me give you my opinion
> regarding AudioSampleBuffer.
thank you. i hope it's okay if i respond (and disagree, respectfully).
now, i want us to be clear about the definition of "backward
compatible". Google defines it simply as:
"(of computer hardware or software) able to be used with an older piece
of hardware or software without special adaptation or modification."
i am interpreting it, in the current context, to mean that code written
using JUCE, specifically AudioSampleBuffer, that (of course) does not
fiddle with any private variables (i.e. accesses them only via member
functions), and that functions correctly before the change (or
"enhancement"), will also function identically after the change. with
the MATLAB origin-generalization enhancement i have been advocating for
20 years, i would add the extra condition that the backward-compatible
enhancement functions identically save for a possible extremely small
loss of execution efficiency from one or two extra instructions. but
this is not the case with what i am advocating regarding
AudioSampleBuffer. it might take a teeny-weeny bit more memory (unless
you go crazy with the amount of zero padding), but it will *not* cost a
dime in extra computational burden. (and even if it did cost an extra
instruction or two, i would still call it "backward compatible" if it
doesn't break any existing code.)
> While I totally understand why you're suggesting that kind of
> enhancement, it's important to consider where AudioSampleBuffer is
> mostly used (i.e. in AudioProcessor and AudioSource) and why it was
> created. It may seem simple and not harmful at first but that's not so
> simple:
> In both cases the sampling rate is already available before the
> processing starts using prepareToPlay(int samplesPerBlockExpected,
> double sampleRate). Having it stored on AudioSampleBuffer while handy
> would be redundant, and more importantly it would require all calling
> code to be modified to initialize it properly,
no. it would not.
> not only in JUCE codebase but in all JUCE's users code too if they
> want to be able to use it reliably.
nope.
> In order to enforce that new contract, the constructor should be
> changed to force a compile-time error in all places making use of it
> so not a backward-compatible change in the end.
prepareToPlay(samplesPerBlockExpected, sampleRate)
would ignore the value of sampleRate embedded in the AudioSampleBuffer
(i will confess that i do not know exactly how an AudioSampleBuffer gets
connected to an AudioSource, but i don't think that detail matters).
old code would still work the same way.
perhaps, after sampleRate is added to AudioSampleBuffer, in time someone
will make a backward compatible version of prepareToPlay() that is
prepareToPlay(samplesPerBlockExpected);
a call to prepareToPlay() like that would use the sampleRate embedded in
the AudioSampleBuffer.
old code would still work, without modification, just like it had before.
and new code *could* choose to take advantage of the new feature that
greatly simplifies the calling conventions.
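something like this (just a sketch with made-up class names, not the real JUCE API; i'm assuming a buffer with an embedded sampleRate member, which is exactly the proposal):

```cpp
#include <cassert>

// hypothetical stand-ins for the real JUCE classes
struct Buffer
{
    double sampleRate = 0.0;   // the proposed embedded member
};

struct Source
{
    int    blockSize  = 0;
    double sampleRate = 0.0;
    Buffer buffer;

    // existing two-argument version: behaves exactly as before and
    // simply ignores any rate embedded in the buffer
    void prepareToPlay (int samplesPerBlockExpected, double rate)
    {
        blockSize  = samplesPerBlockExpected;
        sampleRate = rate;
    }

    // possible future one-argument overload: pulls the rate out of
    // the buffer instead of requiring it as a separate argument
    void prepareToPlay (int samplesPerBlockExpected)
    {
        blockSize  = samplesPerBlockExpected;
        sampleRate = buffer.sampleRate;
    }
};
```

old calls to the two-argument version compile and behave exactly as before; only new code opts into the one-argument form.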
> Moreover it is most often used to convey small blocks of audio (64 -
> 4096) so zero-padding could be a significant overhead over existing data,
> in memory.
memory is cheap.
also the zeroPad parameter can be 0. no reason it can't. i would
suggest a default initial value of more than zero, but perhaps i am
wrong and the default should be 0. in that case it doesn't cost a
single extra word in the sample array data.
> but more importantly AudioSampleBuffer doesn't necessarily own its
> memory (cf the second constructor that is meant to reference external
> data) it is meant as a lightweight wrapper around existing API like
> VST that gives you float** buffers.
doesn't matter. if zeroPad>0 it just means more memory in that other
memory space, wherever it is. if memory is super-expensive in that
space or if there is any other reason you don't want to waste any words
on zero-padding, set it to 0. so i guess i have to concede that the
default value should be 0.
> Having zero-padding would break this and require allocating an array
> (potentially in a real-time thread), and copying the external data.
nothing is broken if zeroPad=0. AudioSampleBuffer already has its means
for allocating memory (Jules has his own malloc() and there is private
data embedded in an AudioSampleBuffer that is used solely for that
purpose). want to increase the size of an AudioSampleBuffer? just call
setSize().
of course if zeroPad is increased, setSize() would have to be called.
this can be done transparently to the user. essentially there would be
a new private function called "setZeroPad()" (or something like that)
that would never be called by legacy code. you cannot claim that future
code that calls setZeroPad() and then breaks something is evidence of a
backward-compatibility violation.
the objection doesn't "cut the mustard", in my opinion. you have not
shown that this is inconsistent with backward compatibility.
> If however, the design goal had been to be a self-contained object
> representing a whole sound file ready to be processed / analyzed by
> overlap-add methods then the sampling rate + zero-padding would be handy.
yup. that's the reason. in my case we had AudioSampleBuffers that held
many seconds of sound, so it could just as well be a whole song or a
whole piece of media. stereo, 16-bit, 44.1 kHz is 10 megabytes per
minute. double that for 32-bit floats. so we're talking about, say, 1
meg at a minimum and 100 megs max. it would be nice to not have to
either copy or resize that whole mess and just be able to lay down the
frames on the original AudioSampleBuffer. and not have to write special
code to deal with the first two frames or the last two frames where your
window would extend beyond the audio sample array proper. if the buffer
were sufficiently zero-padded, extending beyond the ends of the audio
would not hurt you.
in my work, i found it easier and more robust (against bugs) to zero-pad
the damn thing myself, rather than write the special code to deal with
the edges. i was doing that so often that i finally started thinking to
myself that AudioSampleBuffer should just do that.
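here is the kind of edge code the padding eliminates (a sketch with a hypothetical padded array, assuming the pad is at least as long as a frame hangs over either end):

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// audio of `length` samples stored inside a larger array with `pad`
// zeros on each side; hypothetical layout, not the real JUCE storage
struct PaddedAudio
{
    int pad, length;
    std::vector<float> storage;   // size = length + 2 * pad

    const float* audio() const { return storage.data() + pad; }
};

// extract a frameLen-sample frame starting at sample `start`, which may
// be negative or run past the end: with enough padding this is a single
// memcpy, with no special-case branches for the first or last frames
std::vector<float> getFrame (const PaddedAudio& a, int start, int frameLen)
{
    std::vector<float> frame ((size_t) frameLen);
    std::memcpy (frame.data(), a.audio() + start,
                 (size_t) frameLen * sizeof (float));
    return frame;
}
```

without the padding, getFrame() would need bounds checks and partial copies at both ends, which is exactly the special code i kept writing by hand.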
similarly, in my work, i found myself passing sampleRate to processing
functions (like an IIR filter or some other effect) every single time
that i was also passing an AudioSampleBuffer to that function. i
started thinking to myself that the two objects should be glued together
into a single object.
modular code has both this "separation of concerns"
https://en.wikipedia.org/wiki/Separation_of_concerns and this "single
responsibility principle"
https://en.wikipedia.org/wiki/Single_responsibility_principle that
really requires the sample rate to be embedded with the audio you're
passing around. you cannot process the audio (unless it is a simple
gain or a simple adder or mixer) nor play back the audio without knowing
the sample rate. it is an inherent parameter of the physical audio, no
less than the number of channels (which is part of an AudioSampleBuffer,
as it should be) or the length in samples (which is part of an
AudioSampleBuffer, as it should be) or the actual sample data.
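concretely, here is the difference in calling convention (a sketch with hypothetical names; a one-pole lowpass, whose coefficient depends on the sample rate, stands in for any rate-dependent process):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// hypothetical buffer that carries its own sample rate
struct RateBuffer
{
    double sampleRate;
    std::vector<float> samples;
};

// the status quo: every processing function needs the rate passed
// alongside the sample data, every single time
void lowpassOld (std::vector<float>& samples, double sampleRate,
                 double cutoffHz)
{
    const double pi = 3.14159265358979323846;
    const double a  = std::exp (-2.0 * pi * cutoffHz / sampleRate);
    float state = 0.0f;
    for (auto& x : samples)
        x = state = (float) ((1.0 - a) * x + a * state);
}

// the proposal: the buffer is self-contained, so the separate rate
// argument disappears from every such signature
void lowpassNew (RateBuffer& buf, double cutoffHz)
{
    lowpassOld (buf.samples, buf.sampleRate, cutoffHz);
}
```

same computation either way; the only thing that changes is that the caller stops hauling sampleRate around as a parallel argument.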
On 03/09/15 01:06, Jean-Baptiste Thiebaut wrote:
> Sorry to hear you had a bad experience adjusting the sample rate.
> Could you take that issue to the JUCE forum? I'm sorry I can't give
> you a satisfying answer here, but I can tell you that we care for
> users' feedback. If you browse through the forum (juce.com/forum
> <http://juce.com/forum>) you will see that we fix issues quickly.
i guess i have to sign up for the forum and i am already on too many
mailing lists and newsgroups the way it is. and, at the moment, i am
not working on any project with JUCE.
> Please let us know if there's anything that you feel needs fixing!
i did. i let Jules know and we discussed it back-and-forth. he just
didn't agree with me that this was a problem. (i take it that by this
he meant that it was good or natural that sampleRate always gets passed
around as a separate argument to all of these functions, which i am
saying is neither good nor natural. it's bad and contrived.)
the goofy thing that really bothers me is seeing code written in C++
that just doesn't get what we wanna do with modular structures. 90% of
C++ code i have examined was spaghetti code. the C++ was not used in
any manner that i would recognize as modular or tight. it's like these
guys missed the whole point of why we would be coding in an
object-oriented language.
spaghetti C++ code is far worse, far less readable than lower-level
code, like C or assembly, that is spaghetti.
i just don't get it. why bother with it?
--
r b-j r...@audioimagination.com
"Imagination is more important than knowledge."
_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp