On 11/07/2014 12:56 PM, Raine M. Ekman wrote:
> The "one thread per note" thing felt quite surprising when I first
> heard of it. I then went from "WTF?" through "sounds a little odd" to
> "OK, there must've been sound reasons for it". This change makes sense
> to my coding instincts, but thinking a bit more I'm pretty divided.
>
> So, this would be one thread for each track, one for the GUI and... N
> for the mixer?
Well, no - I was a bit inaccurate there. It'd be one /threadable job/ per track and one /threadable job/ per mixer channel... the actual number of worker /threads/ is determined by Qt, I think mostly based on core count and other system attributes. The worker threads then process the threadable jobs in a first-come-first-served manner. The current situation isn't exactly "one thread per note" either: it's one threadable job per note, one threadable job per audioport, and one job per mixer channel. All my idea does is basically cut down on the number of jobs; the number of worker threads stays the same. (Also, in my idea we can have static job queues and not have to rebuild them from scratch every period, which probably counts for something as well.)

> That benefit will be real if you have enough stacking, arps and heavy
> filters in a track using e.g. SID. Basically, you're talking about
> stomping heavily on most native instruments to give the rest a little
> bit more CPU time, if any.

Actually I think that benefit is very questionable. No matter how many threadable jobs you divide the task into, you still can't process more of those jobs in parallel than you have worker threads. In fact I expect the one-job-per-track paradigm will also improve the performance of native instruments, because there'll be less hopping between jobs, less building of job queues every period, less locking and fewer thread-safety hacks... Since the notes have to be mixed together serially anyway, it makes sense to process them all serially as well. I'm thinking of a model where each track is one threadable job - each track in its entirety: notes, arp/stack, rendering, soundshaping, audioport, fx. These have to be processed serially, so I think it just makes sense to squish them into one serial job.

> I'd imagine a Zyn-only project not being
> hurt much by the overhead of non-existing threads handling
> non-existing notes.

Well, they're not non-existent. Each nph still gets processed as a threadable job, even if the nph doesn't do any actual rendering. This means that a thread has to go through several jobs, and all each of them does is increment the frame counters etc. of the NPH and generate midi events for the instrument to use (and that only at noteon/noteoff; in between, the jobs just run through to increment frame counts, which I think could be done more efficiently by just doing it serially). I think it'd make much more sense to just pass the whole bunch of notes to the instrument at once and let the instrument sort them out, instead of running threadable jobs just to generate events that get processed serially anyway.

And then there are other things to take into account here... For instance, we currently process things in 3 stages: first notes, then audioports (+ their fx chains), then mixer channels. This means that once the notes of a track have been processed, the audioport of that track can't be processed until all the other tracks' notes have been processed too. With my model, we could combine the 1st and 2nd stages - each instrument track could proceed to its audioport immediately after its notes are processed. We could possibly extend this to the fx mixer too and get rid of the "stages" entirely, so that a fx channel could start processing as soon as its "dependencies" are met.
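To make that a bit more concrete, here's a very rough sketch of the kind of thing I have in mind. Names like TrackJob and the Track methods are made up just for illustration, this is not actual engine code, and Qt's thread pool classes are only one possible way to run the jobs:

// Rough sketch only - not real LMMS code. One threadable job per track;
// the job runs the whole per-track chain serially on one worker thread.

#include <QRunnable>
#include <QThreadPool>
#include <QVector>

// Hypothetical stand-in for the real track class, just to keep the
// sketch self-contained.
struct Track
{
	void processNotes() {}     // noteons/noteoffs -> events for the instrument
	void renderInstrument() {} // instrument renders the whole period in one go
	void processAudioPort() {} // soundshaping + the track's own fx chain
};

class TrackJob : public QRunnable
{
public:
	explicit TrackJob( Track * track ) : m_track( track )
	{
		setAutoDelete( false ); // job objects are static, reused every period
	}

	void run() override
	{
		// everything for this track, in order, on whichever worker
		// thread picked the job up:
		m_track->processNotes();
		m_track->renderInstrument();
		m_track->processAudioPort();
		// ...then hand the buffer to the track's mixer channel; a channel
		// job could be kicked off as soon as all of its input jobs
		// ("dependencies") have finished, e.g. via an atomic counter.
	}

private:
	Track * m_track;
};

void renderOnePeriod( const QVector<TrackJob *> & jobs )
{
	// the job list is static, nothing gets rebuilt per period; the number
	// of worker threads is whatever QThreadPool picked (core count etc.)
	for( TrackJob * job : jobs )
	{
		QThreadPool::globalInstance()->start( job );
	}
	QThreadPool::globalInstance()->waitForDone();
}

The mixer-channel dependency handling is of course the trickier part, but the track side really would be just one plain serial job per track.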
> Is there any kind of guesstimate on how much overhead there is in the
> current massively threading model?

No. But overhead isn't the only concern. All the locking etc. that becomes necessary for thread-safety also affects RT-safety, and the lack of RT-safety is bad because it keeps us from e.g. properly supporting JACK and such.

> I'd look closely at what kind of code compilers can vectorize. Might
> be better to put the audio in something like float[notes*2][bufsize]
> and possibly get 4 or more channels through the env/lfo section in
> parallel. Same probably goes for parts of the mixer.

Well, the thing is that, again, the notes still have to be mixed serially. And channel inputs also have to be processed serially, because they all mix into the same channel. I kind of suspect that setting up the threads for parallelizing such a small part of the chain would cost more than it would save... Of course you're welcome to run some tests on this and report your findings.

> "One knob, one track" is good, but... what about those relative
> automations you mentioned earlier, shouldn't you be able to apply them
> on top of absolute automation?

Well... yeah, that would be nice, but it again adds a whole bunch of complexity. I think if we leave the relative automation feature until after this whole rehaul business, we can then assess whether it's worth the effort & added complexity to allow multiple relative automations on a knob.

> Negative on adding/removing sub-tracks manually. If I move a block to
> overlap, why not just add another row automatically to hold the block?
> And the row would of course disappear when not needed as well.

Well, this is getting a bit more into some other ideas which I haven't discussed here yet... I didn't want to mention this yet because there was already a whole lot of stuff, but basically, I'm thinking of multitimbral instruments. You could add subtracks to an instrument track, and you could then optionally assign a separate midi channel to each subtrack, so that the notes on each subtrack could have a different midi channel. Then, if the instrument on that track is multitimbral (e.g. Zyn), each subtrack could control a different timbre of the instrument. Subtracks on sample tracks could also have some kind of added function, although I don't yet know what that would be... perhaps per-subtrack FX chains, in addition to the track-specific FX chain, allowing you to easily have transient effects in a sample track? Or something like that.

Automatic addition of subtracks could still be a nice extra piece of functionality, but that's also something that can be implemented later as an improvement, if we so decide... it could be a bit tricky to implement. There's already so much to chew on in this whole 2.0 business, I'm just trying to somewhat limit the amount of biting here...

> The MIDI model of note-on/note-off really has no alternatives when you
> think about live playing and streaming input events, and it should
> work for everything else, too. Maybe everything could move to that?
> I.e. a note plays until it's turned off. Or is there a need to know
> the note length in advance for any features? I could imagine some kind
> of advanced arpeggiator doing something with that knowledge.

There isn't a need to know in advance, but this is getting into how notes are stored in patterns. We store the notes in a map based on their starting position, and each note contains its length.
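Roughly like this, heavily simplified - not the actual classes, just an illustration of that storage model:

// Simplified sketch of the current storage idea: a note is keyed by its
// start position and carries its own length, so there is no separate
// noteoff event anywhere.

#include <map>

struct Note
{
	float pitch;  // sliding scale, not restricted to MIDI note numbers
	int   length; // in ticks; the "noteoff" is implied by start + length
	// volume, panning, detuning, ...
};

// multimap, because several notes - even of the same pitch - may start
// on the very same tick
typedef std::multimap<int /* start position in ticks */, Note> PatternNotes;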
If you wanted to instead have notes with no length, you'd have to store both noteon and noteoff events in that map, and you'd have to somehow link those events together so we'd know which noteoff ends which noteon (remember, we have a sliding scale of note pitch and allow overlapping notes of the same pitch, unlike MIDI, so we can't identify the pairs by pitch alone)... I think it could be doable, and it's something I've also considered. I'm just a bit wary about it because I feel there's a lot of potential for things to go wrong there, e.g. orphan noteons/noteoffs caused by some bug in processing... orphan noteoffs may not hurt much, but an orphan noteon would be infinitely annoying, literally...
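Just to make the pairing problem concrete, the event-based storage would have to look something like this (again purely an illustrative sketch, not a proposal for actual class names):

// Sketch of the event-based alternative: two map entries per note,
// linked by an explicit id, because with a sliding pitch scale and
// overlapping same-pitch notes the pitch alone can't identify the pair.

#include <map>

enum NoteEventType { NoteOn, NoteOff };

struct NoteEvent
{
	NoteEventType type;
	float         pitch;
	int           pairId; // links a NoteOff back to its NoteOn; if a bug
	                      // ever drops one half of the pair, you get the
	                      // orphan noteon/noteoff problem described above
};

typedef std::multimap<int /* position in ticks */, NoteEvent> PatternEvents;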