On 28.01.2014 04:32, Tim E. Real wrote:
> On January 27, 2014 11:04:44 PM Dennis Schulmeister wrote:
>> On Mon, 27 Jan 2014 06:54:49 +0100
>>
>> Florian Jung <[email protected]> wrote:
> 
>> I have an idea, stolen from Ardour.
>>
>> there are three threads:
>> GUI
>> audio
>> prefetch
>>
>> there is 'the model', containing all information about our song. It is
>> shared between the GUI and prefetch threads. audio never touches it.
>>
>> GUI and prefetch do mutex locking when operating on the model.
>>
>> All data (including MIDI data) is read by the prefetch thread and written
>> into ringbuffers, to be read and played by the audio thread. the audio
>> thread does effects etc.
> 
> What do you mean all data? 
> A prefetch and ring buffer are only for playing/recording disk material, 
>  such as wave files. You can't 'prefetch' audio from Jack inputs.

I meant "all disk data, *including* MIDI data", sorry.

> During our Jack process callback, the thing reaches waaay down 
>  into the routing system, which *may* include Audio Input tracks,
>  starting from Audio Output tracks and working its way back from there, 
>  and gathers up all the data to be written into the Jack outputs - 
>  immediately. Doing that in another thread is useless and adds latency.       
> I just explained that. 
> Unless ...
> While in the Jack process callback, once you have read the input data 
>  from Jack input ports, you could *immediately* pass control to some 
>  other thread, and make the Jack process callback wait for that thread,
>  and if it does not respond in enough /time/, finish the Jack process callback
>  by simply writing zeros to all Jack output ports. But how?

Naah. I considered this for multithreading, but I don't think it'll work.


>  
>> There are three kinds of operations:
>>
>> fast operations: are executed immediately, with low latency.
>>     Implemented by the GUI sending a message over a ringbuffer directly
>>     to the audio thread, like we do everything currently.
> 
> Not quite. We currently wait for the audio thread.

> 
>>     Examples: sliding mixer controls
>>               muting/unmuting tracks
> 
> This is exactly what I said I wanted to do. 
> 
> Install a ring buffer so that operations like these do not wait for the audio 
>  messaging. These only use the realtime 'stage 2' operations, not the 
>  non-realtime stages 1 or 3, which would be harder to work with.
> 
> What I meant about careful timing, mouse press inhibiting etc:
> 
> Currently moving a midi slider or knob sends a message and waits, 
>  guaranteeing synchronization - that the GUI continues only after all of the 
>  work has been done. The next time the heartbeat routines update the 
>  position of that control, its value is /already/ current.
> So there is no jitter moving the control. (Just jerky with big Jack periods.) 
> However, if you use a ring buffer to decouple the GUI movements from audio,  
>  the GUI control is now /free/ to move at any speed it wants but the actual
>  underlying controller values may not be updated until some time later
>  when the audio thread has processed the messages.
> Thus the heartbeat routine comes along and attempts to update the control
>  with a /stale/ previous value which has not been updated yet by audio.
> Thus you get horrible control jitter as it flips back and forth.

I'd actually really put GUI movements directly into the ringbuffer,
without waiting. All data will apply in the next audio period, with the
same latency that mapped MIDI controller automation would have.

About updating with stale values: Just don't update controls that are
currently held down, and we're fine.
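For illustration, here is a minimal sketch of such a fire-and-forget GUI-to-audio ring buffer. All names (ControlMsg, RingBuffer) are my own for this sketch, not MusE's actual classes:

```cpp
#include <atomic>
#include <cstddef>

// Hypothetical control-change message pushed by the GUI thread.
struct ControlMsg {
    int controllerId;
    double value;
};

// Minimal lock-free single-producer/single-consumer ring buffer.
template <typename T, std::size_t N>
class RingBuffer {
    T buf_[N];
    std::atomic<std::size_t> head_{0};  // advanced by the producer (GUI)
    std::atomic<std::size_t> tail_{0};  // advanced by the consumer (audio)
public:
    // GUI side: push without waiting; if full, drop rather than block.
    bool push(const T& msg) {
        std::size_t head = head_.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) % N;
        if (next == tail_.load(std::memory_order_acquire))
            return false;               // full - never block the GUI
        buf_[head] = msg;
        head_.store(next, std::memory_order_release);
        return true;
    }
    // Audio side: drain all pending messages at the start of process().
    bool pop(T& out) {
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;               // empty
        out = buf_[tail];
        tail_.store((tail + 1) % N, std::memory_order_release);
        return true;
    }
};
```

The heartbeat side then simply skips any control whose "pressed" flag is set, so a stale value can never overwrite the user's movement.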


> 
> So what I did with all audio controls was:
> a) Installed a ring buffer, decoupling the GUI from audio.
> b) While such controls are being pressed down with the mouse, I inhibit
>  /any/ updating of the control from the heartbeat routines (that includes 
>  any audio automation streams).[...]

Cool! Seems like we already have the "Mixer controls ringbuffer" thing I
wanted :)

> 
> There is another detail: Things like midi controls use the heartbeat function
>  to periodically update their values, but things like controller graphs rely 
>  on songChanged signals to redraw themselves.
> Currently we signal songChanged from 'operations' routines in GUI thread. 
> But when using a ring buffer, we can't signal a songChanged until the audio 
>  thread has finished its work processing the ring buffer. So we need to
>  signal the GUI thread right from inside the audio thread. We have such
>  an 'audio-to-GUI' message pipe mechanism as I mentioned, but it sends
>  only single characters. We would need to modify it, if that's what we use.

I would consider controller graphs (as opposed to mixer controls) a
"slow operation": i.e. there is no need for "fast" synchronisation via
the ringbuffers we just mentioned.

>  
>>
>> slow operations: might take up to a few seconds to apply.
>>     Implemented by mutex-locking and editing the model. The prefetcher
>>     might wait for a moment, but that's no problem because our prefetch
>>     buffer is large enough. (*)
>>
>>     Examples: moving, creating, deleting notes or new wave parts.
>>               turning on/off tracks
>>
>> offline operations: cannot be executed while playing back.
>>     They require the audio thread to be msgIdled; used for "large",
>>     seldom operations.
>>     Examples: adding/removing (but not bypassing) effects,
>>               adding/removing tracks
>>
>>
>> *) in case of an audio buffer underflow, we de-click the thing:
>> The ringbuffer always holds some hundred frames more than it actually
>> needs. If the number of available frames in the ringbuf drops under a
>> few hundred, then a soft underrun occurs and MusE will play back the
>> remaining samples, but fading out (thus, de-clicking everything).
>> If the buffer comes back again, MusE will fade-in for ~100 samples.
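The soft-underrun fading I describe above could be sketched roughly like this; the constants and names are assumptions for illustration, not the real prefetch code:

```cpp
#include <cstddef>

// Illustrative constants: "a few hundred" spare frames, ~100-sample fade-in.
const std::size_t kSoftLimit = 300;
const std::size_t kFadeLen   = 100;

// Per-sample gain ramp applied by the audio thread on a soft underrun.
struct DeClicker {
    double gain = 1.0;
    bool   underrun = false;

    // 'framesAvailable' is the current fill level of the prefetch ringbuffer.
    double process(double sample, std::size_t framesAvailable) {
        if (framesAvailable < kSoftLimit) {
            underrun = true;
            gain -= 1.0 / kSoftLimit;        // fade out the remaining samples
            if (gain < 0.0) gain = 0.0;      // then silence, but no click
        } else if (underrun) {
            gain += 1.0 / kFadeLen;          // buffer came back: fade in
            if (gain >= 1.0) { gain = 1.0; underrun = false; }
        }
        return sample * gain;
    }
};
```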
>>
>>
>>
>> Also, time-stretching is done inside the audio thread, not inside
>> the prefetching thread, in order to quickly adapt to new tempi (->
>> external synchronisation). The prefetcher must prefetch an appropriate
>> number of samples, obviously. (We may assume that the external tempo
>> does not deviate by more than 10% from the tempo map.)
> 
> 
>>> On 27.01.2014 02:02, Dennis Schulmeister wrote:
>>>> Hi Florian,
>>>>
>>>> So here is a slightly modified idea in pseudo-code as it's too late for
>>>> me to describe it in words. :-)
>>>
>>> [you meant having one model in usage and one copy for editing, and
>>> swapping their pointers after editing since swapping is a 'fast'
>>> operation]
>>
>> Uhm yes. I knew it was easy ...
>>
>>> I've thought about (and ardour does it :)) using a Diff system:
> 
> Again, that's like what I said, albeit by way of a VCS system, for song 
>  files as well, so we can use complete undo trees.
> But what do you mean diff system? Not an actual text based diff file I hope.
> This system needs to operate on data structures.
> Which... we already have - the undo/redo stacks.
> Maybe not how they are currently used, but the actual stacks themselves.
> I'd like to turn them into trees.

Yes. I actually repeated you, trying to get everything we want into one
text instead of scattered across a whole discussion ;)

> 
>> So the model contains a list of changes filled by the GUI. During the
>> lock phase all changes are applied at once. If most time is spent
>> waiting on the user to complete his actions, this should work. But it
>> will complicate things due to the indirect nature of making changes to
>> the model. The interface needs to be well thought about, before-hand.
>>
>> Dennis
> 
> Yes of course, if we use locks, then gather up all changes that we possibly 
>  can and then apply them in one quick burst inside a lock.
> Just... I don't know if that's as quickly done as said.

Depends on your view of "quick".

It will not be as quick as we would need for realtime operations. It
would be too slow if the *audio* thread had to wait for this lock.

But the audio thread doesn't. The *prefetch* thread waits for this lock,
and thus the wait may take up to half the length of our prefetch queue.

Locking this mutex and "quickly" applying the diff will be WAY quicker
than reading audio data from disk, so we're safe.
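A rough sketch of that division of responsibility, with made-up names: the GUI applies gathered changes under the model mutex, and only the prefetch thread, never the audio thread, ever waits on it:

```cpp
#include <mutex>
#include <vector>

// The model and its mutex are shared by the GUI and prefetch threads only;
// the audio thread never touches either. All names here are illustrative.
struct SongModel {
    std::vector<int> events;   // stand-in for the real song data
    std::mutex mutex;
};

// GUI thread: gather changes, then apply them in one quick burst.
void applyChanges(SongModel& model, const std::vector<int>& diff) {
    std::lock_guard<std::mutex> lock(model.mutex);
    for (int e : diff)
        model.events.push_back(e);   // stand-in for real edit operations
}

// Prefetch thread: may block briefly on the mutex while reading; the audio
// thread keeps draining its ringbuffer meanwhile, so nothing drops out as
// long as the wait stays well under the queued duration.
int prefetchNext(SongModel& model) {
    std::lock_guard<std::mutex> lock(model.mutex);
    return model.events.empty() ? 0 : model.events.back();
}
```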

> Actually this 'gathering' phase is pretty much what we already have with the 
>  operations/undo/redo stack system. But now it is a question of /when/ to 
>  apply them.

That's a bit too detailed for what I tried to describe.

But I'd apply them roughly the way operation groups are applied currently.

The only difference from the current Undo system will be: each UndoOp is a
class inheriting from UndoOpBase, implementing the virtual functions
apply() and undo(). That's no conceptual change, but it will lead to nicer
code.
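A minimal sketch of what that interface could look like; UndoOpBase, apply() and undo() come from the text above, while the example operation and its fields are made up for illustration:

```cpp
#include <memory>
#include <vector>

// Proposed base class: every operation knows how to apply and undo itself.
class UndoOpBase {
public:
    virtual ~UndoOpBase() = default;
    virtual void apply() = 0;
    virtual void undo() = 0;
};

// Example concrete operation: moving a note (purely illustrative).
class MoveNoteOp : public UndoOpBase {
    int& notePos_;
    int oldPos_, newPos_;
public:
    MoveNoteOp(int& notePos, int newPos)
        : notePos_(notePos), oldPos_(notePos), newPos_(newPos) {}
    void apply() override { notePos_ = newPos_; }
    void undo() override { notePos_ = oldPos_; }
};

// An operation group applies in order and undoes in reverse order,
// roughly like the current operation groups.
class OpGroup {
    std::vector<std::unique_ptr<UndoOpBase>> ops_;
public:
    void add(std::unique_ptr<UndoOpBase> op) { ops_.push_back(std::move(op)); }
    void apply() { for (auto& op : ops_) op->apply(); }
    void undo() {
        for (auto it = ops_.rbegin(); it != ops_.rend(); ++it)
            (*it)->undo();
    }
};
```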


BTW, I'm trying to write a document about how I imagine MusE3. I'll put
it on GitHub later; I'd be glad if you would read it.

Seems like we have basically the same thoughts, right?

Cheers,
flo

> 
> Thanks.
> Tim.
> 
> ------------------------------------------------------------------------------
> WatchGuard Dimension instantly turns raw network data into actionable 
> security intelligence. It gives you real-time visual feedback on key
> security issues and trends.  Skip the complicated setup - simply import
> a virtual appliance and go from zero to informed in seconds.
> http://pubads.g.doubleclick.net/gampad/clk?id=123612991&iu=/4140/ostg.clktrk
> _______________________________________________
> Lmuse-developer mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/lmuse-developer
> 


