Quoting Vesa <dii....@nbl.fi>:

> On 11/07/2014 12:56 PM, Raine M. Ekman wrote:
> The current situation isn't exactly "one thread per note" either, it's
> one threadable job per note, one threadable job per audioport, and one
> job per mixer channel. All my idea does is basically cut down on the
> number of jobs; the number of worker threads stays the same.
That sounds better than my misunderstanding of the situation. Thanks for
the clarification.

>> That benefit will be real if you have enough stacking, arps and heavy
>> filters in a track using e.g. SID. Basically, you're talking about
>> stomping heavily on most native instruments to give the rest a little
>> bit more CPU time, if any.
>
> Actually I think that benefit is very questionable. No matter how many
> threadable jobs you divide the task into, you still can't process more
> of those jobs in parallel than you have worker threads.

If there are cores sitting around just twiddling their bits waiting for
something to process, the "one job per note" model won't run out of CPU
when the instrument has too much processing to do. It evens out the peak
demand in busy parts when using one of the multistream instruments.

>> I'd look closely at what kind of code compilers can vectorize. Might
>> be better to put the audio in something like float[notes*2][bufsize]
>> and possibly get 4 or more channels through the env/lfo section in
>> parallel. Same probably goes for parts of the mixer.
>
> Well there's the thing that again the notes still have to be mixed
> serially. And channel inputs also have to be processed serially because
> they all mix to the same channel. I kind of suspect that setting up the
> threads for parallelizing such a small part of the chain would cost
> more than it would save...

I'm not talking about threads, but SIMD: Single Instruction, Multiple
Data, i.e. making the CPU do the same calculation on more than one
number at a time. But if the mixing has to be "serial" I'll just
withdraw the suggestion.

>> The MIDI model of note-on/note-off really has no alternatives when you
>> think about live playing and streaming input events, and it should
>> work for everything else, too. Maybe everything could move to that?
>> I.e. a note plays until it's turned off. Or is there a need to know
>> the note length in advance for any features?
>> I could imagine some kind of advanced arpeggiator doing something with
>> that knowledge.
>
> There isn't a need to know in advance, but this is getting to how notes
> are stored in patterns. We store the notes in a map based on their
> starting position, and each note contains its length.

Oops, spoke too much from an instrument POV. I was probably thinking "it
should work for every instrument". As a data structure on the sequencer
side a note is totally the correct unit of music compared to plain MIDI
events.

--
ra...@iki.fi
softrabbit on #lmms

------------------------------------------------------------------------------
_______________________________________________
LMMS-devel mailing list
LMMS-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lmms-devel
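P.S. To make sure we mean the same thing by "a map based on starting
position", here's a minimal sketch. The types are hypothetical, not the
actual LMMS pattern code:

```cpp
#include <cstdint>
#include <map>

// Hypothetical sketch, not the actual LMMS pattern code: notes keyed by
// start position, each carrying its own length, so a note's end is
// start + length rather than a separate note-off event.
struct Note {
    int key;        // MIDI-style pitch
    int64_t length; // in ticks
};

using Pattern = std::multimap<int64_t, Note>; // start tick -> note

// A note sounds at tick t if start <= t < start + length.
bool soundsAt(const Pattern &p, int64_t t) {
    for (auto it = p.begin(); it != p.end() && it->first <= t; ++it)
        if (t < it->first + it->second.length)
            return true;
    return false;
}
```

That's exactly why it beats raw note-on/note-off as a storage format:
the length lives with the note, and nothing has to pair up events.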
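P.P.S. And on the SIMD point, this is the kind of loop layout I meant — a
toy sketch, not LMMS code. With the audio laid out as
float[notes*2][bufsize], the per-channel stage becomes a flat, contiguous,
branch-free loop that compilers can auto-vectorize:

```cpp
#include <cstddef>

// Toy sketch of the layout idea: per-note stereo buffers stored
// contiguously so the envelope/gain stage is a flat loop the compiler
// can auto-vectorize (same multiply across many samples at once).
constexpr size_t kNotes = 8;
constexpr size_t kFrames = 256;

// buf[note*2 + channel][frame]
void applyGain(float buf[kNotes * 2][kFrames], const float gain[kNotes * 2]) {
    for (size_t c = 0; c < kNotes * 2; ++c)
        for (size_t f = 0; f < kFrames; ++f)
            buf[c][f] *= gain[c]; // contiguous and branch-free: SIMD-friendly
}
```

No extra threads involved; the parallelism is inside one core's vector
unit, so the "mixing must stay serial" objection doesn't apply here.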