Hi Ben,

There is a nice example related to what you touched on in your question that helps explain how the time and frequency "domains" of sound are related: http://flexmonkey.blogspot.com/2008/12/born-to-synthesise-sound-synthesis-in.html

FFTs and DFTs are fairly nasty beasts to get to grips with, so if you are able to do your stuff using extract() and computeSpectrum(), I would try that... The calculations can be a bit processor intensive and may not work in real time in the Flash Player (I may be wrong???), but I would hunt around and see if anyone is doing stuff with FFTs / DFTs in Pixel Bender, because you might be able to get your GPU to do all the heavy maths for you if someone has written a nice filter :)

Hope this is useful.
Glen

Gregory Boland wrote:
Ben,

You're going to want to perform an FFT (Fast Fourier Transform) and move from
the time domain to the frequency domain.

To be more specific, I believe you are going to want to perform a Discrete
Fourier Transform.

I know a lot of work has been done on this in C, and perhaps you could find
something and port it over to ActionScript.

There could be a problem computing all of that in real time though.
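For anyone curious what that computation actually involves, here is a minimal sketch of a naive DFT (JavaScript for illustration, since the loops translate almost line for line to ActionScript; the 8-sample test tone is made up, not anything from this thread). The nested loop is O(N^2), which is exactly why real time is doubtful:

```javascript
// Naive DFT sketch: magnitude of each frequency bin for N time-domain
// samples. An FFT computes the same result in O(N log N) instead.
function dftMagnitudes(samples) {
  var N = samples.length;
  var mags = [];
  for (var k = 0; k < N; k++) {
    var re = 0, im = 0;
    for (var n = 0; n < N; n++) {
      var angle = -2 * Math.PI * k * n / N;
      re += samples[n] * Math.cos(angle);
      im += samples[n] * Math.sin(angle);
    }
    mags.push(Math.sqrt(re * re + im * im));
  }
  return mags;
}

// A pure tone completing 2 cycles in an 8-sample window shows up as a
// peak at bin 2 (and its mirror at bin 6); every other bin is ~0.
var tone = [];
for (var n = 0; n < 8; n++) tone.push(Math.sin(2 * Math.PI * 2 * n / 8));
var mags = dftMagnitudes(tone);
```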

best of luck

greg

On Wed, Jun 24, 2009 at 9:19 PM, Juan Pablo Califano <
califa010.flashcod...@gmail.com> wrote:

Hi

1) From the docs:


http://help.adobe.com/en_US/AS3LCR/Flash_10.0/flash/media/Sound.html#extract()
 The audio data is always exposed as 44100 Hz Stereo. The sample type is a
32-bit floating-point value, which can be converted to a Number using
ByteArray.readFloat().
So each sample is 32 bits, that is, 4 bytes. If you read fewer than four
bytes, you're discarding part of the data; if you read more, you're mixing
samples. The same thing happens if your reads are not 4-byte aligned.
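To see why alignment matters, here is a sketch (JavaScript DataView standing in for ByteArray, and the 0.5 / -0.25 sample values are made up). Two 32-bit floats are packed back to back, the way extract() packs them; a read at a non-multiple-of-4 offset straddles both samples and gives garbage:

```javascript
// Two consecutive 32-bit float samples in one buffer.
var buf = new ArrayBuffer(8);
var view = new DataView(buf);
view.setFloat32(0, 0.5);   // sample 0
view.setFloat32(4, -0.25); // sample 1

// Aligned read at offset 4: a real sample.
var aligned = view.getFloat32(4);
// Misaligned read at offset 2: two bytes from each sample, reinterpreted
// as a float. The result is neither sample, just "wacky" noise.
var misaligned = view.getFloat32(2);
```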

2) Not sure if I misunderstood what you meant, but the number of samples
extracted is calculated here:

var extract:Number = Math.floor ((sound.length/1000)*44100);

And it is not a power of 2. It looks like the total number of samples,
rounded down (seconds of audio data * samples per second).
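As a concrete check of that arithmetic (JavaScript sketch; Sound.length is in milliseconds, and the 2.5-second clip length is an assumed value):

```javascript
// Hypothetical clip length in milliseconds.
var lengthMs = 2500;
// Same formula as the AS3 code: seconds * 44100 samples per second.
var totalSamples = Math.floor((lengthMs / 1000) * 44100);
// 2.5 * 44100 = 110250: just the sample count, not a power of two.
```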

3) I'm by no means an audio programming expert, but I think that should be
possible. I don't know how performant it would be in AS, in real time,
though. Nor do I know algorithms for it, but look up audio frequency
analysis; you might find some pointers. By the way, I don't think your
mental picture is wrong. Digital audio data is a finite/discrete
representation of a sound wave.
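To make that concrete, here is a sketch of what the interleaved data conceptually holds (JavaScript for illustration; the 440 Hz tone and 10 ms duration are assumed values). Each frame is one instant in time, a left float then a right float, and each float is the sum of every frequency sounding at that instant:

```javascript
var rate = 44100; // Hz, the rate extract() always delivers
var freq = 440;   // assumed test tone
var frames = [];
for (var n = 0; n < rate / 100; n++) { // 10 ms of audio = 441 frames
  var amp = Math.sin(2 * Math.PI * freq * n / rate);
  frames.push(amp, amp); // left and right channels, interleaved
}
// frames.length is 882: 441 stereo frames, two floats per frame.
```

Going from this per-instant view to per-frequency amplitudes is precisely what the DFT/FFT does.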


Cheers
Juan Pablo Califano


2009/6/24 ben gomez farrell <b...@yellow5labs.com>

Hey guys,
I'm just digging into the Sound.extract feature in FP10.  I've been using
code found at http://www.bytearray.org/?p=329 to learn from, but have a
few questions.  I'm able to create a waveform and draw to the stage, but
want to take it beyond that.

Basically the code I have is this:

var samples:ByteArray = new ByteArray();
var extract:Number = Math.floor((_sound.length / 1000) * 44100);
_sound.extract(samples, extract);

var step:int = samples.length / 4096;
do { step--; } while (step % 4);

samples.position = 0;
for (var c:int = 0; c < 4096; c++) {
    left = samples.readFloat();
    right = samples.readFloat();
    samples.position = c * step;
}

My questions are:

  1. I understand incrementing the ByteArray read position by a step
amount. What I don't understand is why the step amount needs to be a
multiple of four. I get some very wacky results from readFloat() if I
don't do this.

  2. Does anybody know why the number of samples we're taking is a power of
two? Is this important, maybe just for graphics performance?

  3. Here's the big one: is it possible to isolate the amplitude at certain
frequencies, to get something like you'd get from computeSpectrum? I guess
I'm overly confused by the data I'm getting back from the byte array and
how to dig deep into it. The way I imagine this ByteArray in my head is
that each position in the ByteArray, one by one, would be a composite
amplitude of all frequencies at a small point in time. I think I must be
imagining this data wrong, and you'll probably cringe at my composite
amplitude remark because it'll probably make no sense.

Thanks - and again, I really think I have some huge knowledge gap in how
sound data is used and read.  Can anyone help?
ben
_______________________________________________
Flashcoders mailing list
Flashcoders@chattyfig.figleaf.com
http://chattyfig.figleaf.com/mailman/listinfo/flashcoders




--

Glen Pike
01326 218440
www.glenpike.co.uk

