I'm putting together a program whose features include slowing down and
speeding up sound files.  This works fine for WAV files, which are just
a header plus the raw sample data that gets sent to the speaker, and
now I need to implement it for MP3 (ideally this would also support
AAC, Ogg, and WMA, but since those formats are less popular it's not
required).  Android does not expose an interface for decoding an MP3
without playing it, so I need to build that interface myself.
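
Roughly, the WAV case amounts to something like the following sketch,
assuming the common 44-byte canonical PCM header (an assumption on my
part; a robust reader should walk the RIFF chunks instead of
hard-coding the offset):

import java.io.IOException;
import java.io.RandomAccessFile;

class WavPcmReader {
    // Sketch: pull the raw sample data out of a PCM WAV, assuming the
    // common 44-byte canonical header (assumption -- real files can
    // carry extra chunks, so a robust reader walks the RIFF chunks).
    static byte[] readPcm(String path) throws IOException {
        RandomAccessFile f = new RandomAccessFile(path, "r");
        try {
            final int headerSize = 44;            // assumed, not parsed
            byte[] pcm = new byte[(int) (f.length() - headerSize)];
            f.seek(headerSize);
            f.readFully(pcm);                     // little-endian samples
            return pcm;
        } finally {
            f.close();
        }
    }
}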

Three options present themselves, though I'm open to others:
1)  Write my own decoder.  I already have a functional frame detector
that I was originally hoping to use for option (3), so the main piece
left should be the Huffman decoding tables (a frame-sync scan along
these lines is sketched after this list).
2)  Use JLayer, or an equivalent Java library, to handle the decoding
(a rough decode loop is also sketched below).  I'm not entirely clear
on what the license ramifications are here.
3)  Connect to the libmedia library/MediaPlayerService directly.  This
is what SoundPool does, and how heavily that service is used makes me
believe that, while it's officially unstable, the implementation isn't
going anywhere.  It means writing JNI code to talk to the service, but
that's turning out to be a deep rabbit hole; right at the surface I'm
having trouble with the sp<> template.
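
For context on option (1), frame detection amounts to scanning for the
11-bit sync word (0xFF followed by a byte whose top three bits are set)
and sanity-checking a few header fields.  An illustrative sketch, not
my actual detector:

class FrameSync {
    // Illustrative only: find the next plausible MPEG audio frame
    // header.  The sync word is 11 set bits; the reserved layer value
    // and invalid bitrate/sample-rate indices are rejected as a basic
    // sanity check.
    static int findNextFrame(byte[] data, int from) {
        for (int i = from; i + 3 < data.length; i++) {
            if ((data[i] & 0xFF) != 0xFF) continue;
            int b1 = data[i + 1] & 0xFF;
            int b2 = data[i + 2] & 0xFF;
            if ((b1 & 0xE0) != 0xE0) continue;   // sync bits not set
            if ((b1 & 0x06) == 0x00) continue;   // reserved layer
            if ((b2 & 0xF0) == 0xF0) continue;   // invalid bitrate index
            if ((b2 & 0x0C) == 0x0C) continue;   // reserved sample rate
            return i;                            // plausible frame header
        }
        return -1;
    }
}

For option (2), as far as I can tell JLayer's decode loop looks roughly
like this (a sketch against the javazoom.jl.decoder classes; the
decoder reuses its output buffer between frames, hence the per-frame
copy):

import java.io.InputStream;
import java.util.ArrayList;

import javazoom.jl.decoder.Bitstream;
import javazoom.jl.decoder.Decoder;
import javazoom.jl.decoder.Header;
import javazoom.jl.decoder.JavaLayerException;
import javazoom.jl.decoder.SampleBuffer;

class Mp3ToPcm {
    // Sketch: decode a whole MP3 stream to 16-bit PCM with JLayer.
    static short[] decode(InputStream mp3) throws JavaLayerException {
        Bitstream bitstream = new Bitstream(mp3);
        Decoder decoder = new Decoder();
        ArrayList<short[]> chunks = new ArrayList<short[]>();
        int total = 0;
        Header frame;
        while ((frame = bitstream.readFrame()) != null) {
            SampleBuffer out =
                    (SampleBuffer) decoder.decodeFrame(frame, bitstream);
            short[] pcm = new short[out.getBufferLength()];
            System.arraycopy(out.getBuffer(), 0, pcm, 0, pcm.length);
            chunks.add(pcm);
            total += pcm.length;
            bitstream.closeFrame();              // advance to the next frame
        }
        short[] all = new short[total];
        int pos = 0;
        for (short[] c : chunks) {
            System.arraycopy(c, 0, all, pos, c.length);
            pos += c.length;
        }
        return all;
    }
}

The sample rate and channel count for playback would presumably come
from the decoded Header (frequency() and mode(), I believe), though I
haven't verified that on Android.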
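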

What is the best way to get the audio encoded in an MP3 (and ideally
in AAC/Ogg/WMA) decoded into a Java array or ByteBuffer that I can
then manipulate?
