OK, thanks for the lead. The files are indeed multichannel (stereo), so maybe
it is an interleaving issue.
(Originally I was using WAV files with 16-bit integers, but I recently changed
them to float, because these days I should be able to use that format
interchangeably across Mac OS and iOS. The 16-bit WAV files played back at the
correct pitch.)
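
In case it helps rule out Steve's point #1 (sample rate), the stream format I
intend for the mixer input is roughly the one below. This is a sketch, not a
paste from my code; the field values are my assumption of the canonical
non-interleaved Float32 setup:

AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate       = 44100.0;  // assumption: must match the source file's rate
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagIsFloat
                       | kAudioFormatFlagIsPacked
                       | kAudioFormatFlagIsNonInterleaved;
asbd.mChannelsPerFrame = 2;
asbd.mBitsPerChannel   = 32;
// With kAudioFormatFlagIsNonInterleaved these describe ONE channel's data:
asbd.mFramesPerPacket  = 1;
asbd.mBytesPerFrame    = sizeof(Float32);
asbd.mBytesPerPacket   = sizeof(Float32);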
Below is my render callback, after adjusting it to Float32. Does it look like I
am mishandling the interleaving? (I suppose I may be, since I am handling only a
single channel, but how do I adjust this to handle both channels correctly? My
tentative guess is sketched below, after the explanation of MT_PopSample.)
static OSStatus multiChannelMixerRenderCallback(void *inRefCon,
                                                AudioUnitRenderActionFlags *ioActionFlags,
                                                const AudioTimeStamp *inTimeStamp,
                                                UInt32 inBusNumber,
                                                UInt32 inNumberFrames,
                                                AudioBufferList *ioData)
{
    __unsafe_unretained SWAudioEngine *self = (__bridge SWAudioEngine *)inRefCon;
    uint32_t channel = 0;
    Float32 *out = (Float32 *)ioData->mBuffers[channel].mData;
    for (UInt32 i = 0; i < inNumberFrames; ++i) {
        MTSampleData data = MT_PopSample(inBusNumber);
        out[i] = data.sample;
    }
    return noErr;
}
MT_PopSample pops an entry from a ring buffer. That entry contains, among other
info, one sample retrieved from the audio file. The sample is retrieved from the
audio file thus: self->audioData[self->packetIndex[voice]++]; So, a straight
sample from the audioData that was read into memory when the file was loaded.
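
As to my own question above: if audioData holds interleaved stereo
(L, R, L, R, ...) while the mixer input uses the canonical non-interleaved
Float32 format, then I suppose the loop needs to pop two samples per output
frame and fill one buffer per channel. Something like this sketch (untested; it
assumes MT_PopSample returns consecutive interleaved samples and that ioData
carries one AudioBuffer per channel):

static OSStatus multiChannelMixerRenderCallback(void *inRefCon,
                                                AudioUnitRenderActionFlags *ioActionFlags,
                                                const AudioTimeStamp *inTimeStamp,
                                                UInt32 inBusNumber,
                                                UInt32 inNumberFrames,
                                                AudioBufferList *ioData)
{
    // Assumption: the input ASBD is non-interleaved stereo, so ioData
    // carries two mono buffers, one per channel.
    Float32 *outL = (Float32 *)ioData->mBuffers[0].mData;
    Float32 *outR = (Float32 *)ioData->mBuffers[1].mData;
    for (UInt32 i = 0; i < inNumberFrames; ++i) {
        // Assumption: audioData is interleaved L,R,L,R..., so two
        // consecutive pops yield the left and right sample of one frame.
        // Popping both per output frame keeps one source frame from being
        // stretched over two output frames.
        outL[i] = MT_PopSample(inBusNumber).sample;
        outR[i] = MT_PopSample(inBusNumber).sample;
    }
    return noErr;
}

If the interleaving assumption is right, it would also explain Steve's point #2
and the octave drop: reading each interleaved stereo frame as two mono frames
doubles the playback duration, which halves the pitch. Does that sound
plausible?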
-António
> On 16 Apr 2015, at 19:26, Chad Wagner <[email protected]> wrote:
>
> Re: Steve's point #2:
> I assume the files are multichannel (stereo)? You may also want to make sure
> you're not interleaving 2 channels when you shouldn't be, and/or are setting
> mNumberChannels correctly in your AudioBuffer.
>
>> On Apr 16, 2015, at 11:11 AM, Steve Bird <[email protected]> wrote:
>>
>>
>>> On Apr 16, 2015, at 12:46 PM, Antonio Nunes <[email protected]>
>>> wrote:
>>> When my app plays the files, they sound fine, except for the fact that they
>>> sound an octave (I guess) lower than they are supposed to. I suppose
>>> something is not right with my conversion method. Maybe I shouldn’t be
>>> using NSConvertHostFloatToSwapped to get the sample data into an
>>> NSSwappedFloat format? Any ideas on achieving correct results?
>>
>>
>> If they sound OK, except for the pitch, then you have one of two problems:
>>
>> 1… you've mishandled the sample rate somewhere. It’s recorded at 44100 and
>> you’re playing back at 22050, or something similar.
>> 2… you’re duplicating samples somewhere, reading one sample and turning it
>> into two samples of playback.
>>
>>
>>
>> ----------------------------------------------------------------
>> Steve Bird
>> Culverson Software - Elegant software that is a pleasure to use.
>> www.Culverson.com (toll free) 1-877-676-8175