On January 27, 2014 11:04:44 PM Dennis Schulmeister wrote:
> On Mon, 27 Jan 2014 06:54:49 +0100
> 
> Florian Jung <[email protected]> wrote:

> i have an idea, stolen from ardour.
> 
> there are three threads:
> GUI
> audio
> prefetch
> 
> there is 'the model', containing every information about our song. This
> is shared among GUI and prefetch thread. audio never touches it.
> 
> GUI and prefetch do mutex locking when operating on the model.
> 
> All data, (also MIDI data), is read from the prefetch thread and written
> into ringbuffers, to be read and played by the audio thread. the audio
> thread does effects etc.

What do you mean all data? 
A prefetch and ring buffer are only for playing/recording disk material, 
 such as wave files. You can't 'prefetch' audio from Jack inputs.
During our Jack process callback, the thing reaches waaay down 
 into the routing system, which *may* include Audio Input tracks,
 starting from Audio Output tracks and working its way back from there, 
 and gathers up all the data to be written into the Jack outputs - 
 immediately. Doing that in another thread is useless and adds latency.         
I just explained that. 
Unless ...
While in the Jack process callback, once you have read the input data 
 from Jack input ports, you could *immediately* pass control to some 
 other thread, and make the Jack process callback wait for that thread,
 and if it does not respond in /time/, finish the Jack process callback
 by simply writing zeros to all Jack output ports. But how?
As I mentioned, the purpose of a system like this - doing the work outside 
 the process callback - would be to avoid taking too long in the process 
 callback itself. If we take 
 too long there, we get a glitch /and/ an xrun. Conversely, with the system 
 described above, if we take too long in some other thread but are able to 
 return soon enough from it so that the Jack process callback keeps going 
 and writes muting zeros to all outputs, we get a glitch but /no/ xruns. 
That's the only difference.
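A minimal sketch of that timed hand-off, assuming std::condition_variable 
for the bounded wait. TimedHandoff, process() and render are hypothetical 
names for illustration, not MusE code; a real version would keep one 
long-lived realtime worker rather than spawning a thread per cycle:

```cpp
#include <algorithm>
#include <chrono>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical sketch: the (simulated) Jack process callback hands the cycle
// to a worker and waits a bounded time; if the worker is late, the callback
// writes muting zeros instead - a glitch, but no xrun.
class TimedHandoff {
public:
    explicit TimedHandoff(std::size_t nframes)
        : out_(nframes, 0.0f), scratch_(nframes, 0.0f) {}

    // 'render' stands in for the heavy DSP work done off the callback.
    // Returns true if the worker finished within 'budget', false if we muted.
    bool process(std::chrono::microseconds budget,
                 void (*render)(std::vector<float>&)) {
        {
            std::lock_guard<std::mutex> lk(m_);
            done_ = false;
        }
        // Sketch only: spawning per cycle is not realtime-safe.
        std::thread worker([&] {
            render(scratch_);
            std::lock_guard<std::mutex> lk(m_);
            done_ = true;
            cv_.notify_one();
        });
        bool ok;
        {
            std::unique_lock<std::mutex> lk(m_);
            ok = cv_.wait_for(lk, budget, [&] { return done_; });
        }
        if (ok)
            out_ = scratch_;                           // worker made it in time
        else
            std::fill(out_.begin(), out_.end(), 0.0f); // mute: glitch, no xrun
        worker.join();  // sketch only; a real callback could never join here
        return ok;
    }

    const std::vector<float>& output() const { return out_; }

private:
    std::vector<float> out_, scratch_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};
```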
 
> There are three kinds of operations:
> 
> fast operations: are executed immediately, with low latency.
>     Implemented by the GUI sending a message over a ringbuffer directly
>     to the audio thread, like we do everything currently.

Not quite. We currently wait for the audio thread.

>     Examples: sliding mixer controls
>               muting/unmuting tracks

This is exactly what I said I wanted to do. 

Install a ring buffer so that operations like these do not wait for the audio 
 messaging. These use only the realtime 'stage 2' of the operations, not the 
 non-realtime stages 1 or 3, which would be harder to work with.
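For reference, a minimal sketch of the kind of lock-free single-producer/
single-consumer ring buffer this implies: the GUI thread push()es control 
messages without waiting, the audio thread pop()s them at the start of its 
cycle. ControlMsg and SpscRing are illustrative names, not MusE's classes:

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Hypothetical message type: which control changed, and its new value.
struct ControlMsg {
    int   controller;
    float value;
};

template <std::size_t N>  // N must be a power of two
class SpscRing {
public:
    bool push(const ControlMsg& m) {          // GUI thread only
        std::size_t w = write_.load(std::memory_order_relaxed);
        std::size_t r = read_.load(std::memory_order_acquire);
        if (w - r == N) return false;         // full: caller drops or retries
        buf_[w & (N - 1)] = m;
        write_.store(w + 1, std::memory_order_release);
        return true;
    }
    bool pop(ControlMsg& m) {                 // audio thread only
        std::size_t r = read_.load(std::memory_order_relaxed);
        std::size_t w = write_.load(std::memory_order_acquire);
        if (r == w) return false;             // empty
        m = buf_[r & (N - 1)];
        read_.store(r + 1, std::memory_order_release);
        return true;
    }
private:
    std::array<ControlMsg, N> buf_{};
    std::atomic<std::size_t> write_{0}, read_{0};
};
```

Since neither side ever blocks, the GUI returns immediately and the audio 
thread drains whatever has arrived each cycle.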

What I meant about careful timing, mouse press inhibiting etc:

Currently moving a midi slider or knob sends a message and waits, 
 guaranteeing synchronization - that the GUI continues only after all of the 
 work has been done. The next time the heartbeat routines update the 
 position of that control, its value is /already/ current.
So there is no jitter moving the control. (Just jerky with big Jack periods.) 
However, if you use a ring buffer to decouple the GUI movements from audio,  
 the GUI control is now /free/ to move at any speed it wants but the actual
 underlying controller values may not be updated until some time later
 when the audio thread has processed the messages.
So the heartbeat routine comes along and attempts to update the control
 with a /stale/ previous value which audio has not yet processed,
 and you get horrible control jitter as the control flips back and forth.

So what I did with all audio controls was:
a) Installed a ring buffer, decoupling the GUI from audio.
b) While such controls are being pressed down with the mouse, I inhibit
 /any/ updating of the control from the heartbeat routines (including 
 any audio automation streams), eliminating movement jitter. When the 
 mouse is released, the shown value should theoretically match the next 
 update - or soon after, anyway - unless automation streams then take over.
To fix more elaborate situations, with the midi controller graphs for example 
 (probably the best worst-case example to use - they also wait), the graphs 
 currently hold their own data as 'drawing items'. By pushing to a ring buffer 
 the mouse would be free to move as fast as it wants and the graph is free to 
 be drawn at the same speed, but the underlying midi controller data actually 
 changes later when the audio thread does it.
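Point (b) boils down to a small amount of state in the control itself. A 
hypothetical sketch (GuiControl and its methods are illustrative names): 
while pressed, the user's position wins and heartbeat values are ignored, 
so the knob cannot flip between the dragged value and a stale engine value:

```cpp
// Hypothetical sketch of press-inhibited heartbeat updates.
class GuiControl {
public:
    void mousePress()       { pressed_ = true; }
    void mouseDrag(float v) { shown_ = v; }      // user's position wins
    void mouseRelease()     { pressed_ = false; }

    // Called periodically from the heartbeat with the engine's current value.
    void heartbeat(float engineValue) {
        if (pressed_) return;                    // inhibit while held
        shown_ = engineValue;
    }

    float shown() const { return shown_; }

private:
    bool  pressed_ = false;
    float shown_   = 0.0f;
};
```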

There is another detail: Things like midi controls use the heartbeat function
 to periodically update their values, but things like controller graphs rely 
 on songChanged signals to redraw themselves.
Currently we signal songChanged from 'operations' routines in GUI thread. 
But when using a ring buffer, we can't signal a songChanged until the audio 
 thread has finished its work processing the ring buffer. So we need to
 signal the GUI thread right from inside the audio thread. We have such
 an 'audio-to-GUI' message pipe mechanism as I mentioned, but it sends
 only single characters. We would need to modify it, if that's what we use.
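One way to widen that pipe - purely a sketch, the record layout and type 
codes below are made up, not MusE's actual mechanism - is to send small 
fixed-size records instead of single characters, so the audio thread can 
say *what* changed (e.g. which songChanged flags to emit):

```cpp
#include <unistd.h>

// Hypothetical record for the audio-to-GUI pipe.
struct AudioToGuiMsg {
    char type;   // e.g. 'C' = songChanged, 'G' = controller graph dirty
    int  flags;  // e.g. which SC_* bits to pass along (illustrative)
};

// Fixed-size writes below PIPE_BUF are atomic on POSIX pipes, so the audio
// thread can write a whole record without interleaving with other writers.
bool sendToGui(int fd, const AudioToGuiMsg& m) {
    return write(fd, &m, sizeof m) == static_cast<ssize_t>(sizeof m);
}
bool recvFromAudio(int fd, AudioToGuiMsg& m) {
    return read(fd, &m, sizeof m) == static_cast<ssize_t>(sizeof m);
}
```

The GUI side would watch the read end (e.g. with a socket notifier) and emit 
the actual Qt signal from the GUI thread, as now.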
 
> 
> slow operations: might take up to some seconds to apply.
>     Implemented by mutex-locking and editing the model. The prefetcher
>     might wait for a moment, but no problem because our prefetch buffer
>     is large enough. (*)
> 
>     Examples: moving, creating, deleting notes or new wave parts.
>               turning on/off tracks
> 
> offline operations: cannot be executed while playing back
>     require the audio thread to be msgIdled; used for "large", seldom
>     operations.
>     Examples: adding/removing (but not bypassing) effects,
>               adding/removing tracks
> 
> 
> *) in case of an audio buffer underflow, we de-click the thing:
> The ringbuffer always holds some hundred frames more than it actually
> needs. If the number of available frames in the ringbuf drops under a
> few hundred, then a soft underrun occurs and MusE will play back the
> remaining samples, but fading out (thus, de-clicking everything).
> If the buffer comes back again, MusE will fade-in for ~100 samples.
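That de-click is essentially a gain ramp driven by the ring buffer fill 
level. A hypothetical sketch, using your ballpark figures (a few hundred 
frames low-water mark, ~100-sample ramp) - DeClicker and its parameters 
are illustrative, not existing code:

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical de-clicker: when the prefetch ring buffer runs low, ramp the
// gain toward zero (soft underrun); when frames are available again, ramp
// back up over roughly 'rampLen' samples.
class DeClicker {
public:
    explicit DeClicker(std::size_t lowWater = 300, std::size_t rampLen = 100)
        : lowWater_(lowWater), step_(1.0f / rampLen) {}

    // Apply to one audio block; 'available' = frames left in the ring buffer.
    void process(float* buf, std::size_t n, std::size_t available) {
        float target = (available < lowWater_) ? 0.0f : 1.0f;
        for (std::size_t i = 0; i < n; ++i) {
            if (gain_ < target)
                gain_ = std::min(target, gain_ + step_);   // fade in
            else if (gain_ > target)
                gain_ = std::max(target, gain_ - step_);   // fade out
            buf[i] *= gain_;
        }
    }

    float gain() const { return gain_; }

private:
    std::size_t lowWater_;
    float step_;
    float gain_ = 1.0f;
};
```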
> 
> 
> 
> Also, time-stretching is done inside the audio thread, and not inside
> the prefetching thread, to be able to quickly adapt to new tempi (->
> external synchronisation). The prefetcher must prefetch an appropriate
> number of samples, obviously. (We may assume that the external tempo
> does not deviate by more than 10% from the tempo map).
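If I follow the 10% figure, the back-of-the-envelope margin would be 
something like this - my reading, not anything specified above: if the 
external tempo can run up to a fraction 'deviation' away from the tempo 
map, time-stretching may consume source frames up to 1/(1 - deviation) 
times faster than nominal, so the prefetcher must keep that many buffered:

```cpp
#include <cmath>
#include <cstddef>

// Hypothetical sizing helper: worst-case frames the prefetcher must keep
// buffered if the external tempo may deviate by 'deviation' (e.g. 0.10).
std::size_t framesToPrefetch(std::size_t nominalFrames, double deviation) {
    return static_cast<std::size_t>(
        std::ceil(nominalFrames / (1.0 - deviation)));
}
```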


> > Am 27.01.2014 02:02, schrieb Dennis Schulmeister:
> > > Hi Florian,
> > > 
> > > So here is a slightly modified idea in pseudo-code as it's too late for
> > > me to describe it in words. :-)
> > 
> > [you meant having one model in usage and one copy for editing, and
> > swapping their pointers after editing since swapping is a 'fast'
> > operation]
> 
> Uhm yes. I knew it was easy ...
> 
> > I've thought about (and ardour does it :)) using a Diff system:

Again, that's like what I said, albeit by way of a VCS-like system, for song 
 files as well, so that we can use complete undo trees.
But what do you mean diff system? Not an actual text based diff file I hope.
This system needs to operate on data structures.
Which... we already have - the undo/redo stacks.
Maybe not how they are currently used, but the actual stacks themselves.
I'd like to turn them into trees.
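The shape I have in mind is roughly this - a sketch only, with placeholder 
names (UndoTree, UndoNode, and a string standing in for a real operation 
object): each edit becomes a child of the current node, so undoing and then 
editing again *branches* instead of throwing away the old redo path:

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical undo tree node; 'op' stands in for a real operation object.
struct UndoNode {
    std::string op;
    UndoNode* parent = nullptr;
    std::vector<std::unique_ptr<UndoNode>> children;
};

class UndoTree {
public:
    UndoTree() : root_(new UndoNode{}), cur_(root_.get()) {}

    void record(const std::string& op) {     // apply an edit: branch here
        auto child = std::make_unique<UndoNode>();
        child->op = op;
        child->parent = cur_;
        cur_->children.push_back(std::move(child));
        cur_ = cur_->children.back().get();
    }
    bool undo() {                            // step toward the root
        if (!cur_->parent) return false;
        cur_ = cur_->parent;
        return true;
    }
    bool redo(std::size_t branch = 0) {      // choose which branch to redo
        if (branch >= cur_->children.size()) return false;
        cur_ = cur_->children[branch].get();
        return true;
    }
    const std::string& current() const { return cur_->op; }
    std::size_t branches() const { return cur_->children.size(); }

private:
    std::unique_ptr<UndoNode> root_;
    UndoNode* cur_;
};
```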

> So the model contains a list of changes filled by the GUI. During the
> lock phase all changes are applied at once. If most time is spent
> waiting on the user to complete his actions, this should work. But it
> will complicate things due to the indirect nature of making changes to
> the model. The interface needs to be well thought about, before-hand.
> 
> Dennis

Yes of course, if we use locks, then gather up all changes that we possibly 
 can and then apply them in one quick burst inside a lock.
Just... I don't know if that's as quickly done as said.
Actually this 'gathering' phase is pretty much what we already have with the 
 operations/undo/redo stack system. But now it is a question of /when/ to 
 apply them.
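In sketch form, the gather-then-burst idea might look like this - 
OperationBatch is a made-up name, and std::function stands in for our real 
operation objects; the point is only that the lock is held for one short 
burst rather than per edit:

```cpp
#include <cstddef>
#include <functional>
#include <mutex>
#include <vector>

// Hypothetical batch: the GUI queues model edits while the user works, then
// applies them all in one short critical section. The prefetch thread takes
// the same mutex when reading the model; its ring buffer rides out the pause.
class OperationBatch {
public:
    using Op = std::function<void()>;

    void queue(Op op) { pending_.push_back(std::move(op)); }  // GUI thread

    // One quick burst inside the lock; returns how many ops were applied.
    std::size_t applyAll(std::mutex& modelMutex) {
        std::lock_guard<std::mutex> lk(modelMutex);
        std::size_t n = pending_.size();
        for (auto& op : pending_) op();
        pending_.clear();
        return n;
    }

private:
    std::vector<Op> pending_;
};
```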

Thanks.
Tim.

_______________________________________________
Lmuse-developer mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/lmuse-developer
