<<
Date: Tue, 14 Jul 2015 12:27:54 -0700
From: Tom Jeffries <[email protected]>
To: "[email protected] Audio"
Subject: Swift and CoreAudio

Swift is Apple's newest language. How does it work with CoreAudio?
>>

Off the top of my head, there will be challenges directly manipulating audio 
buffers, writing interleaved buffers, using ASBDs, populating AudioBufferLists, 
and using existing constants (and the constants issue is not limited to CoreAudio).

It works, but it is very challenging.  I spent a lot of time on workarounds 
because things like sample rate conversion were out of the question using the 
APIs: there was no solution for function callbacks.  An example is something 
like AudioConverterFillComplexBuffer.
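
To make the callback problem concrete: AudioConverterFillComplexBuffer wants a 
C function pointer for its input proc, and early Swift had no way to produce 
one.  A rough sketch of the shape involved (in more recent Swift syntax than we 
had at the time, with a placeholder body, so treat it as illustration rather 
than a working converter):

import AudioToolbox

// Sketch only: the input proc must be a C function pointer, i.e. a global
// function or a capture-free closure, so any per-call state has to travel
// through the opaque inUserData pointer rather than being captured.
let inputProc: AudioConverterComplexInputDataProc = {
    converter, ioNumberDataPackets, ioData, outDataPacketDescription, inUserData in
    // A real proc would point ioData at fresh source samples here; reporting
    // zero packets tells the converter we have no more input to give.
    ioNumberDataPackets.pointee = 0
    return noErr
}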
Accessing audio buffers directly was difficult because the prescribed, almost 
secretive way (secretive because it is not documented) is read only.  Writing 
to (manipulating) an audio buffer directly in Swift can be done, but not in any 
documented way, which was entirely frustrating.
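
For what it's worth, the kind of direct write I mean looks something like this 
(recent Swift syntax; the Float32, non-interleaved layout is my assumption):

import AudioToolbox

// Sketch: writing into an AudioBuffer by hand via a typed raw-pointer view.
// Nothing documents this as the sanctioned path.
func applyGain(_ buffer: AudioBuffer, gain: Float32) {
    guard let raw = buffer.mData else { return }
    let count = Int(buffer.mDataByteSize) / MemoryLayout<Float32>.size
    let samples = raw.assumingMemoryBound(to: Float32.self)
    for i in 0..<count {
        samples[i] *= gain  // the write that early Swift made so awkward
    }
}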
I don’t think I ever figured out how to read and write interleaved buffers, 
which should have been easy.
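
The arithmetic itself is trivial once you can get a typed pointer; a sketch of 
pulling a stereo interleaved buffer apart (again assuming Float32 samples and 
recent Swift syntax):

import AudioToolbox

// Sketch: interleaved stereo data is laid out L R L R ..., so frame n's
// channels sit at indices n*2 and n*2 + 1.
func deinterleave(_ buffer: AudioBuffer) -> (left: [Float32], right: [Float32]) {
    let frameCount = Int(buffer.mDataByteSize) / (MemoryLayout<Float32>.size * 2)
    guard let base = buffer.mData?.assumingMemoryBound(to: Float32.self)
    else { return ([], []) }
    var left = [Float32](), right = [Float32]()
    for frame in 0..<frameCount {
        left.append(base[frame * 2])
        right.append(base[frame * 2 + 1])
    }
    return (left, right)
}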
Creating ASBDs and determining how to populate an AudioBufferList took some 
time to figure out, because the way you would do it in ObjC is different and, 
if I recall correctly, the translation is not even close to obvious.
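
To show the gap: in C you memset an ASBD to zero and assign a few fields, 
while Swift wants every member up front (and early Swift imported some of 
these constants as Int, forcing UInt32 casts that current Swift no longer 
needs).  A sketch, with a mono Float32 LPCM format as the assumed target:

import AudioToolbox

// Sketch: a mono 44.1 kHz Float32 linear-PCM ASBD via the memberwise initializer.
var asbd = AudioStreamBasicDescription(
    mSampleRate: 44100.0,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked,
    mBytesPerPacket: 4,
    mFramesPerPacket: 1,
    mBytesPerFrame: 4,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 32,
    mReserved: 0)

// Sketch: populating a single-buffer AudioBufferList over a Swift array.
// The pointer is only valid inside the closure, which is easy to get wrong.
var samples = [Float32](repeating: 0, count: 512)
samples.withUnsafeMutableBufferPointer { ptr in
    var abl = AudioBufferList(
        mNumberBuffers: 1,
        mBuffers: AudioBuffer(
            mNumberChannels: 1,
            mDataByteSize: UInt32(ptr.count * MemoryLayout<Float32>.size),
            mData: UnsafeMutableRawPointer(ptr.baseAddress)))
    _ = abl  // a real caller would pass &abl to a render or converter call here
}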
One thing that really bothered me was needing to use an existing constant that 
Swift considered an unmanaged CFString, and having to jump through hoops either 
to use the constant or to make a new string to supplant it.  If 
kCoreAudioConstant in the headers is a string in ObjC but an unmanaged CFString 
in Swift, it takes some hoop jumping to use kCoreAudioConstant in your program.
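
The hoop in question is Unmanaged's takeUnretainedValue(); a sketch using the 
hypothetical kCoreAudioConstant from above (faked up here as a stand-in, since 
it is not a real symbol):

import Foundation

// Stand-in for a header constant that C/ObjC see as a CFStringRef but that
// early Swift imported as Unmanaged<CFString>. kCoreAudioConstant is hypothetical.
let kCoreAudioConstant: Unmanaged<CFString> =
    Unmanaged.passUnretained("com.example.some.property" as CFString)

// Global constants are not retained on our behalf, so takeUnretainedValue()
// (not takeRetainedValue()) is the correct unwrap.
let cfKey: CFString = kCoreAudioConstant.takeUnretainedValue()
let swiftKey = cfKey as String  // bridge to a Swift String when an API wants one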

That was the stuff off the top of my head.

Now, with that being said, there are always going to be challenges in using 
something new, especially a new programming language.  Mindsets and paradigms, 
along with the obvious syntax, will need to change.  I very much like Swift, 
but there are a great number of adjustments to be made, and the language and 
IDE are still evolving.  In terms of CoreAudio, it has been a difficult 
journey.  I've rewritten a few programs and had a number of times where I felt 
I was in a desert, searching for water, because the existing documentation is 
geared toward C and ObjC and there do not seem to be many people interested in 
using CoreAudio with Swift.  Could it be the documentation, the lack of 
Swift-CoreAudio examples, the difficulty of translating existing programs, the 
evolving nature of the language, or a combination of the above?  I would like 
to see people interested in Swift and CoreAudio make examples, documenting what 
works, what can be bridged, and what, from their perspective, cannot be bridged 
at all.  I like Swift and CoreAudio and would like the two to work well 
together, even if we (initially) have to work a little harder to get the job 
done.


W.
