----- Original Message -----
From: "Josh Green" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, 28 October 2001 9:16
Subject: Re: [linux-audio-dev] multitrack and editor separate?
> On Sat, 2001-10-27 at 11:47, Paul Davis wrote:
> >
> > ok, i've been thinking about this for the last couple of days. here's my
> > proposal for ardour's handling of such an idea:
> >
> > * user selects "edit region externally"
> > * relevant data is written to a file
> > * external editor is forked using some environment or config variable
> >   (e.g. "snd %f")
> > * when editor process exits, check the exit status:
> >   if 0: take the new file
> >   ...
> >
> > no. this is pretty bad. as you note, you really need to be able to
> > tell ardour that the file has been changed before exiting, so that you
> > can hear the result "in the mix".
> >
> > defining Yet Another API for inter-editor communication? i'd rather
> > just write or merge an existing wave editor.
> >
> > --p
>
> I think an API for embedding an audio editor would be nice though. Since
> an EDL-based wave editor is responsible for rendering its own edits, why
> not use something like JACK to access the rendered audio? I guess you
> would need some control stream from the client program to the wave
> editor, so maybe something like this:
>
>    client <---- JACK audio stream <----- Wave Editor
>      |                                       ^
>      +---------- Control stream -------------+
>
> The client program would hand off the audio to the wave editor in some
> defined fashion (give it a file name, shared memory, or through a pipe
> for smaller wave files). The wave editor is then responsible for
> rendering this audio on demand. There would then need to be some API for
> the control stream that would contain commands for Play, Stop, Seek,
> Speed, etc. I guess there would also need to be a status stream, or some
> way to get the current length of the edited audio to the client. When
> you are done editing the audio, it could be rendered to disk and control
> given back to the client.
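[as an illustration of the control-stream idea quoted above: a minimal sketch in C of what the wire format for those Play/Stop/Seek/Speed commands might look like. the message layout, the opcode names and the 16-byte framing are all my own assumptions, not anything from JACK or ardour; it's just one plausible shape for such a protocol, written so the same struct could be pushed through a pipe or a socket.]

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical control-stream message: one opcode plus one numeric
 * argument (seek position in seconds, or playback speed ratio).
 * Play and Stop simply ignore the argument. */
typedef enum { CTL_PLAY, CTL_STOP, CTL_SEEK, CTL_SPEED } ctl_opcode;

typedef struct {
    ctl_opcode op;
    double arg;
} ctl_msg;

/* Assumed fixed 16-byte frame: opcode in the first 4 bytes, the
 * argument in the last 8 (host byte order, both ends local). */
size_t ctl_encode(const ctl_msg *m, unsigned char buf[16])
{
    uint32_t op = (uint32_t)m->op;
    memset(buf, 0, 16);
    memcpy(buf, &op, 4);
    memcpy(buf + 8, &m->arg, 8);
    return 16;
}

/* Returns 0 on success, -1 if the opcode is out of range. */
int ctl_decode(const unsigned char buf[16], ctl_msg *m)
{
    uint32_t op;
    memcpy(&op, buf, 4);
    if (op > CTL_SPEED)
        return -1;
    m->op = (ctl_opcode)op;
    memcpy(&m->arg, buf + 8, 8);
    return 0;
}
```

[the client would write frames like these down the control pipe and the wave editor would read, decode and act on them, while the rendered audio itself travels back over the JACK stream.]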
i see here that an embedded editor means that [if it's even possible] the wave file needs to be saved with the EDL data before it can be used in the host application, and that there is no means of keeping the audio data non-destructive across both the editor and the multitrack. but.. the last thing you want to be doing at this stage is rendering it to disk. from a user's perspective that kind of operation slows down the working process dramatically. interruptions like saving a file to disk in order to import / open it up in a multitracker are unnecessary if both the editor and the multitracker are integrated within the same application. if there was however some way to automate this via a pipe straight over to the multitracker [or vice versa] then this would be usable. several applications on windows and macOS allow one to quick-key the edited sample over to the multitrack, where it appears at the playhead in the session...

secondly, if the editor is to be EDL based then there needs to be a means to ensure that samples remain in a non-destructive form during editing, and that edits to the wave file are constantly reflected in the multitrack window [from what little i know about making an API for embedded applications i don't see how this would be possible]. in this way there is no need to ever close the wave or *commit* it to disk. it is best that the wave remain open for further work after it has been tested in the multitrack session. a comprehensive *instance* based total editing and multitrack environment is where i think the DA scene seems to be going; this intermittent re-saving and re-opening is clumsy and is a thing of the past. de|
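[to make the non-destructive / instance-based point above concrete, here is a small sketch in C of the usual EDL trick: the source audio is never modified, and every "instance" placed in the multitrack is just a window into that immutable data, with any processing applied at render time. the struct and function names are my own invention, not any existing API.]

```c
#include <stdint.h>

/* Hypothetical non-destructive region. The source buffer is treated
 * as read-only; editing means changing offset/length/gain, never the
 * samples themselves, so every instance of the same source in the
 * multitrack stays consistent and nothing has to be committed to disk. */
typedef struct {
    const float *source;  /* immutable source samples */
    uint64_t src_frames;  /* total frames available in source */
    uint64_t offset;      /* region start within source, in frames */
    uint64_t length;      /* region length, in frames */
    float gain;           /* per-instance gain, applied on render */
} region;

/* Render up to max_frames of the region into out, applying the
 * per-instance gain on the fly. Returns the number of frames written;
 * the source data is left untouched. */
uint64_t region_render(const region *r, float *out, uint64_t max_frames)
{
    uint64_t n = r->length < max_frames ? r->length : max_frames;
    uint64_t i;
    for (i = 0; i < n; i++)
        out[i] = r->source[r->offset + i] * r->gain;
    return n;
}
```

[with a model like this, "re-saving and re-opening" disappears: the multitrack just re-renders the regions whenever the editor adjusts them.]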
