On Tue, Oct 11, 2016 at 18:43 +0000, Chris Dreher wrote:
> Theoretically, as Gerhard suggested earlier, an output module
> could go a step further and translate from one format to
> another similar format (ex: all audio-based PDs output WAV data
> but the output module can write it as WAV or AU format).
Oh, wait. That's not really what I said. Never did I suggest to
have (several) protocol decoders generate (something similar to)
WAV file format data that gets converted to other formats within
output modules. I'm sure that there are tons of converters out
there which operate on files already, if one needs such a thing.
What I said was that a protocol decoder shall generate a stream
of audio samples that it extracted from its input stream. And
output modules could write those audio samples to files in any
format they please. And the principle applies to any other kind
of data, too.
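To make the separation concrete, here is a rough sketch of the idea, not the actual sigrok API: the decoder stage only emits plain 16-bit PCM samples, and each output-module-style writer serializes that same stream into whatever container it likes. All function names are hypothetical.

```python
import struct

def decode_audio(input_stream):
    """Hypothetical PD stage: extract audio samples from its input."""
    for value in input_stream:
        yield value  # plain integer samples, no container format involved

def write_wav(samples, rate):
    """Serialize 16-bit mono PCM samples as a minimal WAV file."""
    data = b"".join(struct.pack("<h", s) for s in samples)
    hdr = b"RIFF" + struct.pack("<I", 36 + len(data)) + b"WAVE"
    # fmt chunk: size 16, PCM (1), mono, sample rate, byte rate, block align, bits
    hdr += b"fmt " + struct.pack("<IHHIIHH", 16, 1, 1, rate, rate * 2, 2, 16)
    hdr += b"data" + struct.pack("<I", len(data))
    return hdr + data

def write_au(samples, rate):
    """Serialize the same samples as a minimal Sun AU file."""
    data = b"".join(struct.pack(">h", s) for s in samples)  # AU data is big-endian
    # magic, data offset, data size, encoding 3 (16-bit linear PCM), rate, channels
    return struct.pack(">4sIIIII", b".snd", 24, len(data), 3, rate, 1) + data
```

The point is that the decoder never knows (or cares) which container the writer picks; the same sample stream feeds both `write_wav()` and `write_au()`.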
As was determined before, the specific WAV format that one of the
decoders currently happens to use (for empirical or historical
reasons, not necessarily by design) is not exactly a good match
for the purpose. I explicitly questioned the use of the WAV file
format within the protocol decoders, which form a pipeline of
processing components and operate on strictly linear data
streams.
If you were talking about the 'data' chunks of WAV files with
just the samples in them, and not the specific WAV file format in
general, then this objection of mine of course becomes moot. In
that case I apologize for my strict interpretation of what you
wrote above ("audio-based PDs output WAV data").
If I understand Soeren correctly, he suggested using a metadata
packet to carry the information about the audio sample stream
which currently gets put into the fake WAV header. The stream of
audio samples then remains mostly transparent, and need not
involve any of the WAV file format's peculiarities.
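As a hypothetical sketch of that suggestion (the packet-type tags and helper names below are made up, not real libsigrokdecode constants): the decoder announces the stream's properties once in a metadata packet, and every subsequent packet carries opaque sample bytes.

```python
# Assumed packet-type tags for illustration only.
SRD_OUTPUT_META = "meta"
SRD_OUTPUT_BINARY = "binary"

def audio_decoder(raw_chunks, samplerate):
    """One metadata packet up front instead of a fake WAV header."""
    yield (SRD_OUTPUT_META,
           {"samplerate": samplerate, "channels": 1, "unitsize": 2})
    for chunk in raw_chunks:
        yield (SRD_OUTPUT_BINARY, chunk)  # transparent sample payload

def collect(stream):
    """Toy consumer: separate the metadata from the opaque payload."""
    meta, payload = None, b""
    for ptype, pdata in stream:
        if ptype == SRD_OUTPUT_META:
            meta = pdata
        else:
            payload += pdata
    return meta, payload
```

An output module could then build a WAV, AU, or any other header from `meta` alone, while the payload passes through untouched.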
For the audio samples case, is there prior art to "get inspired
by"? Like signal generators, audio channels from spectrum
analyzers, waveform captures from AWG devices, etc.? Are "analog
channels which communicate amplitudes" already a good fit, like
those from oscilloscope channels? Although the audio data was
just an example, I'd like to keep it under consideration, to keep
thinking about the subject in general terms that fit most
decoders and data types.
For MIDI data the file format might be a better fit, as you
suggested (I'm ignorant about that topic). The "just one length
spec in the header, everything else already being suitable for
transparent streaming" approach sounds good to me. Updating that
single 32-bit length field, which resides at a fixed offset,
after appending a number of samples should be rather
straightforward.
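That patch-the-length step could look like the following sketch, under the assumption that the container keeps a single 32-bit length field at a known fixed offset (MIDI track chunks, for instance, store a big-endian 32-bit payload length right after the "MTrk" tag). The helper name and event bytes are made up for illustration.

```python
import struct

def append_events(buf, new_data, length_offset):
    """Append data and patch the 32-bit big-endian length field in place."""
    buf.extend(new_data)
    (old_len,) = struct.unpack_from(">I", buf, length_offset)
    struct.pack_into(">I", buf, length_offset, old_len + len(new_data))

# Toy track chunk: tag plus a zero length, then two appended events.
track = bytearray(b"MTrk" + struct.pack(">I", 0))
append_events(track, b"\x00\x90\x3c\x40", 4)  # note-on event bytes (example)
append_events(track, b"\x00\x80\x3c\x40", 4)  # note-off event bytes (example)
```

Since the length field sits at a fixed offset, the writer can append indefinitely and seek back to rewrite just those four bytes.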
Just look at the srzip.c source and how it employs the