Try an Objective-C wrapper - I have personal experience with
TheAmazingAudioEngine. It takes care of a lot of the low-level work while
still performing very well.

The task you are trying to do is a few lines of code with this framework.

http://theamazingaudioengine.com
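
For example, wrapping Apple's low-pass effect unit takes roughly this much
code (a sketch from memory of the 1.x headers - treat the exact method names
as my assumption and check the current headers):

AEAudioController *ac = [[AEAudioController alloc]
    initWithAudioDescription:[AEAudioController nonInterleaved16BitStereoAudioDescription]];

// Describe Apple's built-in low-pass effect unit...
AudioComponentDescription lpf =
    AEAudioComponentDescriptionMake(kAudioUnitManufacturer_Apple,
                                    kAudioUnitType_Effect,
                                    kAudioUnitSubType_LowPassFilter);

// ...wrap it as a TAAE filter and start the engine.
NSError *error = nil;
AEAudioUnitFilter *filter =
    [[AEAudioUnitFilter alloc] initWithComponentDescription:lpf
                                            audioController:ac
                                                      error:&error];
[ac addFilter:filter];
[ac start:&error];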

Jindrich

On 21 Mar 2015, at 8:33, Haris Ali <[email protected]> wrote:

> Hey, if you need a few examples of valid AudioStreamBasicDescriptions, check 
> out my library's helper functions: 
> https://github.com/syedhali/EZAudio/blob/master/EZAudio/EZAudio.m#L184
> 
> I think reading the code and looking at the examples for EZAudio might 
> help... at least, that's why I wrote it :)
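> 
> In spirit, each helper just fills in an internally consistent
> AudioStreamBasicDescription. As a quick check, you can also assert the
> arithmetic that makes an ASBD valid before handing it to a unit - a
> hypothetical helper sketch, not an actual EZAudio function:
> 
> #include <AudioToolbox/AudioToolbox.h>
> #include <stdbool.h>
> 
> static bool ASBDLooksValid(const AudioStreamBasicDescription *asbd)
> {
>     UInt32 bytesPerSample = asbd->mBitsPerChannel / 8;
>     // Non-interleaved buffers carry one channel each, so a frame is one
>     // sample wide; interleaved frames carry every channel.
>     UInt32 samplesPerFrame =
>         (asbd->mFormatFlags & kAudioFormatFlagIsNonInterleaved)
>             ? 1 : asbd->mChannelsPerFrame;
>     return asbd->mBytesPerFrame == bytesPerSample * samplesPerFrame
>         && asbd->mBytesPerPacket == asbd->mBytesPerFrame * asbd->mFramesPerPacket;
> }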
> 
>> On Friday, March 20, 2015, Dave O'Neill <[email protected]> wrote:
>> Here's a suitable AudioStreamBasicDescription for mono SInt16:
>> 
>> 
>> mSampleRate         44100.000000
>> mFormatFlags        kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian |
>>                     kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved
>> mFormatID           kAudioFormatLinearPCM
>> mFramesPerPacket    1
>> mBytesPerFrame      2
>> mChannelsPerFrame   1
>> mBitsPerChannel     16
>> mBytesPerPacket     2
>> 
>> But I think (not 100% sure) that the effect units want stereo floats.
>> 
>> One way to get the right format is to call AudioUnitGetProperty on the 
>> input scope of the "downstream" unit, then set the output scope of the 
>> upstream unit to that format:
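>> 
>> Something like this (a sketch, with error checking omitted; "upstreamUnit"
>> and "downstreamUnit" are placeholders for your own units):
>> 
>> AudioStreamBasicDescription fmt;
>> UInt32 size = sizeof(fmt);
>> 
>> // Ask the downstream unit what format it expects on input bus 0...
>> AudioUnitGetProperty(downstreamUnit, kAudioUnitProperty_StreamFormat,
>>                      kAudioUnitScope_Input, 0, &fmt, &size);
>> 
>> // ...and tell the upstream unit to produce exactly that.
>> AudioUnitSetProperty(upstreamUnit, kAudioUnitProperty_StreamFormat,
>>                      kAudioUnitScope_Output, 0, &fmt, sizeof(fmt));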
>> 
>> But here is a stereo float one anyway:
>> 
>> mSampleRate         44100.000000
>> mFormatFlags        kAudioFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved |
>>                     kAudioFormatFlagIsPacked
>> mFormatID           kAudioFormatLinearPCM
>> mFramesPerPacket    1
>> mBytesPerFrame      4
>> mChannelsPerFrame   2
>> mBitsPerChannel     32
>> mBytesPerPacket     4
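>> 
>> Spelled out as code, ready for an AudioUnitSetProperty call (the mono
>> SInt16 description above translates the same way, with sizeof(SInt16) and
>> the integer flags):
>> 
>> AudioStreamBasicDescription fmt = {0};
>> fmt.mSampleRate       = 44100.0;
>> fmt.mFormatID         = kAudioFormatLinearPCM;
>> fmt.mFormatFlags      = kAudioFormatFlagIsFloat |
>>                         kAudioFormatFlagIsNonInterleaved |
>>                         kAudioFormatFlagIsPacked;
>> fmt.mFramesPerPacket  = 1;
>> fmt.mChannelsPerFrame = 2;
>> fmt.mBitsPerChannel   = 32;
>> // Non-interleaved, so bytes per frame/packet count one channel only.
>> fmt.mBytesPerFrame    = sizeof(Float32);
>> fmt.mBytesPerPacket   = sizeof(Float32);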
>> 
>> I was able to get an offline render going. I wasn't quite there in my 
>> current project, but knew I would be soon, so I'm in the same boat.  I found 
>> a really good answer that sums it up on Stack Overflow: 
>> http://stackoverflow.com/questions/15297990/core-audio-offline-rendering-genericoutput
>> 
>> but I'll paste in the most relevant section here in case the link dies:
>> 
>> AudioUnitRenderActionFlags flags = 0;
>> AudioTimeStamp inTimeStamp;
>> memset(&inTimeStamp, 0, sizeof(AudioTimeStamp));
>> inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
>> inTimeStamp.mSampleTime = 0;
>> UInt32 busNumber = 0;
>> UInt32 numberFrames = 512;
>> int channelCount = 2;
>> 
>> int totFrms = MaxSampleTime;   // total frames to render
>> while (totFrms > 0)
>> {
>>     // Clamp the final slice to however many frames remain.
>>     if (totFrms < (int)numberFrames)
>>     {
>>         numberFrames = totFrms;
>>         NSLog(@"Final numberFrames: %u", (unsigned int)numberFrames);
>>     }
>>     totFrms -= numberFrames;
>> 
>>     // One mono buffer per channel, since the format is non-interleaved.
>>     AudioBufferList *bufferList =
>>         (AudioBufferList *)malloc(sizeof(AudioBufferList) +
>>                                   sizeof(AudioBuffer) * (channelCount - 1));
>>     bufferList->mNumberBuffers = channelCount;
>>     for (int j = 0; j < channelCount; j++)
>>     {
>>         bufferList->mBuffers[j].mNumberChannels = 1;
>>         bufferList->mBuffers[j].mDataByteSize   = numberFrames * sizeof(AudioUnitSampleType);
>>         bufferList->mBuffers[j].mData           = calloc(numberFrames, sizeof(AudioUnitSampleType));
>>     }
>> 
>>     // Pull a slice of audio through the graph via the GenericOutput unit.
>>     CheckError(AudioUnitRender(mGIO,
>>                                &flags,
>>                                &inTimeStamp,
>>                                busNumber,
>>                                numberFrames,
>>                                bufferList),
>>                "AudioUnitRender mGIO");
>> 
>>     // ...consume the rendered samples in bufferList here...
>> 
>>     // Advance the timestamp by the frames just rendered.
>>     inTimeStamp.mSampleTime += numberFrames;
>> 
>>     for (int j = 0; j < channelCount; j++)
>>         free(bufferList->mBuffers[j].mData);
>>     free(bufferList);
>> }
>> 
>> In my test demo I tried looping through some audio in multiple passes. If 
>> you do this, you must increment the mSampleTime of the AudioTimeStamp on 
>> each render, as per the documentation (the mSampleTime += numberFrames 
>> line above).
>> 
>> Dave
>> 
>>> On Fri, Mar 20, 2015 at 8:55 PM, Patrick J. Collins 
>>> <[email protected]> wrote:
>>> Hi everyone,
>>> 
>>> So a week or so has gone by, and I feel like I am getting nowhere (or at
>>> least close to nowhere) with my goal of being able to simply do:
>>> 
>>>   input buffer -> low pass -> new buffer
>>> 
>>> Can anyone please please please help me?
>>> 
>>> I have read pretty much all of Apple's documentation on this subject and
>>> I just do not understand so many things...
>>> 
>>> At first I was trying to just use the default output so that I could at
>>> least hear the low pass happening...  Unfortunately all I hear is
>>> garbage...  I figured it's because the asbd is wrong-- so I tried
>>> setting the asbd on the lowpass unit, and I get "-10868" when trying to
>>> set the stream format on the low pass unit's input scope or output
>>> scope...
>>> 
>>> Then I tried to set the asbd on the output unit, and then I get error
>>> -50, which says a parameter is wrong-- but...  the parameters are not
>>> wrong!
>>> 
>>> AudioStreamBasicDescription asbd;
>>> asbd.mSampleRate = 8000;
>>> asbd.mFormatID = kAudioFormatLinearPCM;
>>> asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger;
>>> asbd.mFramesPerPacket = 1;
>>> asbd.mChannelsPerFrame = 1;
>>> asbd.mBitsPerChannel = 16;
>>> asbd.mBytesPerPacket = 2;
>>> asbd.mBytesPerFrame = 2;
>>> 
>>> There should be absolutely nothing wrong with those parameters, so I
>>> don't understand why it's giving a -50 error...
>>> 
>>> Regardless, I ultimately don't want to output to the hardware, I want to
>>> do a quick offline render to lowpass filter my buffer...  So, I change
>>> my output description from kAudioUnitSubType_DefaultOutput to
>>> kAudioUnitSubType_GenericOutput.
>>> 
>>> And then suddenly my lowpass input render proc is not getting called--
>>> which I assume is because I need to call AudioUnitRender...  However, I
>>> cannot find any documentation or examples anywhere about how to
>>> correctly do this!
>>> 
>>> Where do you call AudioUnitRender?  I assume this needs to be in a loop,
>>> but--  clearly I don't want to manually call this in a loop myself...  I
>>> tried adding an InputProc callback to my generic output unit, but it
>>> doesn't get called either.
>>> 
>>> Here is my code:
>>> 
>>>   https://gist.github.com/patrick99e99/9221d8d7165d610fd3e1
>>> 
>>> I keep asking myself:  Why is this so difficult??  Why is there so
>>> little information out on the internet about how to do this??  All I can
>>> find are a bunch of people asking somewhat similar questions on
>>> stackoverflow that aren't similar enough to help answer my questions.
>>> Core Audio has been around for a long time, and there are tons of apps
>>> doing this sort of thing, so I am just really surprised by the lack of
>>> information and available help for what seems like it should be a simple
>>> thing to do...
>>> 
>>> How about if I try this:
>>> 
>>> HELP!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
>>> 
>>> Thank you!
>>> 
>>> Patrick J. Collins
>>> http://collinatorstudios.com
>>> 
> 
> 
> -- 
> Syed Haris Ali
> Website: http://syedharisali.com
> Github: https://github.com/syedhali
> 