Hi,

I'm doing a lot of audio buffering work while learning real-time audio synthesis and
generation, and I find the NSSpeechSynthesizer class a bit lacking on OS X
compared to iOS. My main frustration is not knowing where I can capture
the synthesized buffer before it goes to the output. Does anyone have an idea
where I should look to capture this?
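For context, the only capture point I'm aware of so far is rendering to a file rather than the audio device, via startSpeaking(_:to:). A minimal sketch of that workaround (the polling loop is just for illustration; a delegate callback would be cleaner):

```swift
import AppKit
import Foundation

// Workaround sketch: instead of tapping the live output, ask
// NSSpeechSynthesizer to render straight to an AIFF file, then
// read the file back for buffer-level processing.
let synth = NSSpeechSynthesizer()
let outputURL = URL(fileURLWithPath: NSTemporaryDirectory())
    .appendingPathComponent("capture.aiff")

synth.startSpeaking("Hello, world.", to: outputURL)

// Crude wait for synthesis to finish; NSSpeechSynthesizerDelegate's
// didFinishSpeaking callback is the proper way to do this.
while synth.isSpeaking {
    RunLoop.current.run(until: Date().addingTimeInterval(0.1))
}

// outputURL now points at an AIFF file that can be opened with
// AVAudioFile and read into AVAudioPCMBuffers for processing.
```

This is file-based rather than a true intercept of the output stream, which is exactly why I'm asking whether there's a way to get at the buffers directly.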


Best regards,






_______________________________________________
swift-users mailing list
swift-users@swift.org
https://lists.swift.org/mailman/listinfo/swift-users
