Re: Confusion with low-level audio libraries

2021-01-26 Thread AudioGames . net Forum — Developers room : camlorn via Audiogames-reflector


  



No, it's not that.  If you want to learn a native systems language, Rust is probably where you want to go, and one of the positive things people say about it is "I came from Python and".  But it's really different, and there are a lot of constraints.  For instance: no garbage collector.  Stuff at the Rust/C/C++ level typically can't afford one.  That includes Synthizer, because a 1ms or 2ms freeze at the wrong time is super bad.  Mind you, if whatever you're doing *can* afford one, that's a good sign you should not be using a native-level language.

URL: https://forum.audiogames.net/post/609931/#p609931




-- 
Audiogames-reflector mailing list
Audiogames-reflector@sabahattin-gucukoglu.com
https://sabahattin-gucukoglu.com/cgi-bin/mailman/listinfo/audiogames-reflector


Re: Confusion with low-level audio libraries

2021-01-25 Thread AudioGames . net Forum — Developers room : chrisnorman7 via Audiogames-reflector


  



@8 Wow, OK, that completely shattered my misguided notion of Rust as "Python for clever people", haha.  Thanks for the explanation.

URL: https://forum.audiogames.net/post/609830/#p609830






Re: Confusion with low-level audio libraries

2021-01-25 Thread AudioGames . net Forum — Developers room : camlorn via Audiogames-reflector


  



Rust needs to stabilize an advanced enough form of const generics, as well as specialization, before I'd jump at using Rust for this.  There's also the fact that Rust can't easily express object-oriented inheritance hierarchies, which turn out to be really useful for audio libraries, believe it or not.

It is possible to write safe C++ if you know what you're doing and use modern C++ features as opposed to the older stuff.  I know what I'm doing.  I think there have been fewer than 5 segfault/invalid-pointer issues.  Running under ASan and dealing with them is a lesser cost than trying to use Rust.  If I weren't experienced at C++, or if Rust had actually followed through on stabilizing things they've been promising for years, the trade-off might be different.  Admittedly, specialization and const generics are both very hard problems that are only needed by niche applications, but audio is one of said niche applications, so here we are.

As for whether it gets rid of machine-specific weirdness?  Really depends.  Segfaults, yeah, maybe, until I end up using unsafe for stuff because reasons.  Or until I get the memory orderings on atomics wrong on platforms less forgiving than x86.  Rust isn't some sort of magic silver bullet: it guarantees a lack of data races and a lack of invalid pointers if and only if you never use unsafe, nothing more than that.

URL: https://forum.audiogames.net/post/609752/#p609752






Re: Confusion with low-level audio libraries

2021-01-25 Thread AudioGames . net Forum — Developers room : chrisnorman7 via Audiogames-reflector


  



@Camlorn Can't remember if you've addressed this elsewhere, so feel free to tell me to JFGI, but why did you choose to write Synthizer in C++ and not Rust?  I thought you were a big fan of Rust these days?  I seem to remember you saying something about Rust not having some of the stuff you needed, but I don't remember what specifically, and I might not understand it even if I did.  Would you ever bother converting to Rust?  I know you've talked in the past about machine-specific C++ weirdness.  Wouldn't Rust eliminate that?

URL: https://forum.audiogames.net/post/609747/#p609747






Re: Confusion with low-level audio libraries

2021-01-24 Thread AudioGames . net Forum — Developers room : Dragonlee via Audiogames-reflector


  



Really interesting stuff, +1.

URL: https://forum.audiogames.net/post/609486/#p609486






Re: Confusion with low-level audio libraries

2021-01-24 Thread AudioGames . net Forum — Developers room : Ethin via Audiogames-reflector


  



@4, yeah, I've heard of that.

URL: https://forum.audiogames.net/post/609459/#p609459






Re: Confusion with low-level audio libraries

2021-01-24 Thread AudioGames . net Forum — Developers room : camlorn via Audiogames-reflector


  



Autovectorization is interesting in the sense that it's never guaranteed, but it's much easier to write an autovectorizable loop than it is to write SIMD intrinsics for two or more platforms at once.  The compiler will recognize all the common structures pretty reliably, e.g. adding two arrays, and that's that.  The only real trick is getting all the conditionals out of the loops, then maybe marking with whatever your compiler's loop-hinting pragmas are to say "yes, it's worth doing this here".  But I haven't had to use those even once, and even for something this performance-sensitive you can just wait until something is actually too slow before worrying about it.  In general, as long as you write even halfway decent math code that's friendly to the architecture, the compiler will just do it and you can treat all of that as a black box.  Synthizer used to use the Clang vector extensions, but I removed them and they're probably not coming back.

If you want to go down the road of really understanding this stuff, honestly even more than I need to, there's actually an accessible version of Compiler Explorer: https://godbolt.org/noscript

If you don't know what that is: you paste a C/C++/Rust/Go/a-bunch-of-others program into it, select a compiler and some flags, and it gives you the assembly.  It supports all the common platforms/architectures, and frankly I have no idea how it's free, but it is.
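To make the "adding two arrays" case concrete, here is the kind of loop compilers vectorize reliably at -O2/-O3: a simple counted loop, no conditionals in the body, and `__restrict` so the compiler can assume the buffers don't alias.  This is an illustrative sketch, not Synthizer code:

```cpp
#include <cstddef>

// Sum two float arrays into an output buffer.  GCC and Clang will
// typically emit SIMD for this shape without any intrinsics or pragmas:
// the trip count is a plain counter and the body is branch-free.
void sumBuffers(const float *__restrict a, const float *__restrict b,
                float *__restrict out, std::size_t n) {
  for (std::size_t i = 0; i < n; i++) {
    out[i] = a[i] + b[i];
  }
}
```

Paste it into Compiler Explorer with -O3 and you can watch the vector instructions appear; add a branch inside the loop and watch them disappear.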

URL: https://forum.audiogames.net/post/609457/#p609457






Re: Confusion with low-level audio libraries

2021-01-24 Thread AudioGames . net Forum — Developers room : Ethin via Audiogames-reflector


  



@2, wow, a lot to digest there. Autovectorization for me has always been difficult to get right; it seems like you have to write your code in a particular way. And pulling in architecture intrinsics is a pain (I still haven't figured out how to actually use SIMD properly and correctly). Your post was informative; I'll dig into Synthizer sometime and see if I can figure it out from there.

URL: https://forum.audiogames.net/post/609446/#p609446






Re: Confusion with low-level audio libraries

2021-01-24 Thread AudioGames . net Forum — Developers room : camlorn via Audiogames-reflector


  



The really short answer is: read Synthizer, which uses Miniaudio.  The slightly longer answer is that it depends, so I will explain Synthizer.

First, you want everything to be at the same samplerate at some point.  Samplerate conversions are expensive: not so much that a couple matter, but enough that if you run one per sound you're going to start hurting.  So you push them to the edge; e.g. Synthizer buffers are resampled on load.  Unfortunately, for streaming and for audio output this isn't always possible: in the streaming case you're of course limited to the samplerate of the original audio and have to insert a conversion there, and for audio output it's sometimes the case that running the library at a fixed samplerate internally is a big win.  For Synthizer that's 44100, because HRTF datasets like to be that way, and resampling those at runtime is complicated.

The rest of this is basically summing arrays.  You take your sources of audio, Synthizer generators for example.  Sum those to sources.  Pan them.  Sum the output of those to the audio output buffers.  At each stage of this process you might have to convert between channel formats, especially mono->stereo and stereo->mono.  Synthizer does that with specialized functions.  The general case is a matrix multiplication, but stereo->mono is (l+r)/2 and mono->stereo is just copying to two arrays, so the specializations are worth it.  Also, you want to specialize the no-conversion case, either to a memcpy or to adding to an output buffer.

Synthizer is fast enough to run in prod in debug builds.  You shouldn't, but you can, and that's been valuable for testing.  If you optimize, you can do very demanding audio on the CPU.  Unfortunately this means running at maximum efficiency; Synthizer in a real-world scenario can easily approach 50 megaflops, or even a couple gigaflops or more, depending.  That's not too bad, except that you're sharing the system, and it's effectively a hard realtime requirement, even more so than graphics.
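The stereo<->mono specializations mentioned above are tiny, which is exactly why they beat the general matrix multiply.  A minimal sketch with interleaved samples (function names are mine, not Synthizer's API):

```cpp
#include <cstddef>

// Stereo -> mono: average left and right.  Input is interleaved LRLR...
void stereoToMono(const float *in, float *out, std::size_t frames) {
  for (std::size_t i = 0; i < frames; i++) {
    out[i] = (in[2 * i] + in[2 * i + 1]) * 0.5f;
  }
}

// Mono -> stereo: copy the single channel to both output channels.
void monoToStereo(const float *in, float *out, std::size_t frames) {
  for (std::size_t i = 0; i < frames; i++) {
    out[2 * i] = in[i];
    out[2 * i + 1] = in[i];
  }
}
```

Both loops are branch-free and counted, so they also fall into the autovectorization-friendly shape discussed earlier in the thread.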
To do so, you have to address three aspects.

First, anything that might block the audio generation thread cannot happen on the audio generation thread.  That's not a hard rule, because sometimes it's just not possible to avoid memory allocation or the like, but mutexes/locks are not your friend in any fashion.  You can make it a hard rule, but only if you add limitations like a maximum number of sources.  Synthizer basically just says "here are some reasonably sized pre-allocated buffers; if you do something crazy, things might click while we grow them".

Second, memory bandwidth is a problem.  Any excess zeroing of buffers, or any excess buffers at all for that matter, will push things out of L1.  Your worst case is that things get pushed all the way to RAM, and you don't want that.  Synthizer deals with this in two ways.  First, instead of making buffers per object, you can make buffers per invocation of a function and cache them.  I estimated the number of buffers a typical generator->source->context stack would need, then wrote a fixed-size cache which hands them out on request.  Instead of putting a bunch of temporary buffers in your class for the intermediate steps, you ask the cache for buffers; since it's a stack, the one you get is likely still in L1.  Second, Synthizer establishes a convention that all audio processing adds to the specified output buffer rather than just writing to it.  The naive way is one buffer per source: you fill all of those, then loop over every source and add.  But that's one buffer per source, usually a few KB each, and you just did the worst thing you could: read all of them start to finish, pushing everything else out of the cache.  Adding to the output buffer shifts this to some need to zero, but you've gone from O(n) buffers to roughly O(1) buffers.
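A stripped-down version of those two ideas, the stack-like buffer cache and the add-to-output convention, might look like this.  Names, the block size, and the fallback allocation are illustrative, not Synthizer's actual internals:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

constexpr std::size_t kBlockSize = 256; // frames per processing block

// A small LIFO cache of pre-allocated buffers.  Because it's a stack,
// the buffer you acquire is usually the one most recently released,
// i.e. the one most likely to still be in L1.
class BufferCache {
public:
  std::vector<float> acquire() {
    if (stack_.empty()) {
      // Fallback allocation; a realtime-safe version would pre-fill
      // the cache up front instead of allocating here.
      return std::vector<float>(kBlockSize, 0.0f);
    }
    std::vector<float> b = std::move(stack_.back());
    stack_.pop_back();
    return b;
  }

  void release(std::vector<float> b) { stack_.push_back(std::move(b)); }

private:
  std::vector<std::vector<float>> stack_;
};

// The add-to-output convention: processors accumulate into the shared
// output buffer instead of overwriting it, so n sources need one output
// buffer between them rather than one buffer each.
void renderSource(float gain, float *out, std::size_t frames) {
  for (std::size_t i = 0; i < frames; i++) {
    out[i] += gain; // stand-in for real synthesis
  }
}
```

With this convention the mixer zeroes the output once per block, then every source adds itself in; only the zeroing cost remains, and the per-source buffers disappear.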
Synthizer isn't perfect about this, but it's fast enough, and I'll probably only finish improving it when it becomes time to optimize for the Pi or something like that.  Also, pointer chasing is bad, so allocate buffers inline when you can, but this is already long enough.  And also also: it doesn't matter how much memory is allocated, only how much you access, so allocate all day long as long as you don't have to read all of it all the time.  Going into that is also probably beyond the scope of what is quickly becoming an essay; suffice it to say that Synthizer uses lots of arrays that are waaay oversized for what they need to be, and it's fine because you only access the front.  In particular there's a hard-coded internal limit of 16 audio channels (why 16 is another topic, but I didn't pull it out of my ass and it's not related to CPU efficiency).

Third, you have to take advantage of the CPU.  This means autovectorization or hand-vectorized code, and being friendly to the branch predictor and compiler optimizations.  This gets a little bit speculative.