On Sun, 28 Oct 2001 02:40, Paul Davis wrote:

> >Huh, from a software engineering point of view they are distinctly
> >separate tasks, but from a human interface viewpoint the user is
> >"editing digital audio", where both aspects complete a whole picture,
> >or sound(s) in this case.
>
> i don't agree. the process of getting a sample of a drum hit, or a
> bass riff, or the chorus, or the door slam, into "shape" is totally
> different than the task of putting the drum hit in the right place,
> copying it, pasting it, and so forth.
Both aspects squarely come under "digital audio editing", where the end
goal is to have processed audio ready for further distribution. Creating
a 3D model, assembling a scene, then animating a whole sequence uses
distinctly separate techniques for each part, but the overall process is
_often_ viewed as a single procedure.

> i agree that to use two applications with different UI conventions for
> this would be confusing, but the only alternative to that is to

And that is the mess we have in Linux now when someone tries to do any
"digital audio editing". It's painful to try and pull together an audio
work chain only to see it fall apart on the next kernel or application
upgrade.

> require that each program be capable of doing *everything*. The Unix
> Way (TM) provides some hope of avoiding that, but perhaps not enough
> in a world where the data is highly ordered (and not really just a
> stream of bytes at the semantic level we care about).

I love the *nix way, but stdin/stdout doesn't quite cut it for audio.

BTW, I was not suggesting complex GUI interaction via a high-level
scripted language, but rather being able to call up and "embed" (for
want of a better word) "chunks of code" (for want of 3 better words) in
different layouts, and yes, most likely on a monitor (but not always).

> anyway, it would appear that if i'm headed anywhere with this, it
> would be toward an embedded wave editor for Regions, so in some ways,
> we're in violent agreement.

See, there ya go, can't help yourself, can you :-) I hope you find a few
more aspects of your program that can be rolled up into nice containers
that can be reused elsewhere.

> >What would be brilliant is to make Ardour and friends embeddable
> >widgets that can be incorporated into the dozens of front end GUI
> >systems that are available under linux (and windows too, courtesy
> >of us penguin heads).
>
> You have to be kidding? Ardour as an embeddable widget?

Not at all. You would be aware of OpenGL.
It is far more sophisticated than Ardour may ever be... it exists to
facilitate both presentation of and interaction with visual 3D objects.
The ability to prototype a usable app quickly, actually use it, then
have the option of further streamlining certain parts of the app in
C/C++ is very "natural" with gigahertz++ boxes these days.

My interest is in visualizing audio (any control or data stream), and
one thing I want to do is tap into a complex audio engineering process
and map various time/quanta to various 3D representations of audio while
it's actually in the pipeline, not just when it's all finished and this
is what the final "render" looks like. I'm going to be sitting here
trying like crazy with Python and its OpenGL kit to visually externalise
not just the data but also some control widgets. I want to see some
cuts/regions in 3D and be able to MMB-scroll around them as a rendered
model, making adjustments and constraints using almost 3D-modelling
techniques.

I don't have the skill/time/mandate to even think about these kinds of
potentially very delicious extensions in C/C++, especially without being
able to explore them first via a decent OO script language like Python.
And here we do hit the "way of *nix"... the ability for anyone to extend
others' fine work, which is a whole bunch easier if the original authors
don't lock up their code.

> Ardour exists as a shared library that can be used by a variety of
> different user interfaces. But any given instance of Ardour
> (e.g. gtk-ardour, ksi-ardour), which combines the library with some
> kind of user interface (not necessarily graphical), is a complex piece
> of work, and the idea that you could just make it an embeddable widget
> is a little crazy. Besides, we'd be right back at the same old
> Gtk/Qt/Gtk--/WxWindows/FLTK/XFORMS issues that we've touched upon here
> so many times before.
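As a rough illustration of the 3D-visualization idea above: the core of
it is just mapping an audio region onto (x, y, z) vertex data that an
OpenGL binding could then render. Everything here is hypothetical --
libardour exposes no such API, and `region_to_vertices` is a name I made
up purely for the sketch; it reduces a mono region to one vertex ribbon
per track, with x = time step, y = peak amplitude, z = track depth.

```python
# Hypothetical sketch: turn a region's samples into a 3D vertex ribbon.
# Not a real libardour or PyOpenGL API -- just the data mapping.
import math

def region_to_vertices(samples, track_index, samples_per_step=8):
    """Reduce a mono region to (x, y, z) vertices:
    x = time step, y = peak amplitude in that step, z = track depth."""
    vertices = []
    for step, start in enumerate(range(0, len(samples), samples_per_step)):
        window = samples[start:start + samples_per_step]
        peak = max(abs(s) for s in window)
        vertices.append((float(step), peak, float(track_index)))
    return vertices

# A synthetic 440 Hz test tone standing in for real region data.
tone = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(64)]
mesh = region_to_vertices(tone, track_index=2)
```

A renderer (e.g. Python's OpenGL bindings) would then draw each ribbon
as a line strip, with the z axis letting you MMB-scroll between tracks.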
> You can't embed widgets from one toolkit into an
> application that uses another without some disgusting hacks being
> present in the toolkit(s) themselves.

Especially if the project has been written to _not_ take this into
account in the first place.

> Sure. Then just use libardour, which has no UI code of any kind at
> all. it's less than 1/3 of the total source code of gtk-ardour,
> however, so you can plan on spending quite some time working on your
> own GUI for it :)

Not impossible at all then. If libardour is in any way built like OpenGL
(perhaps even OpenAL) then I might be able to have my cake and eat it
too (use your GUI and roll my own 3D one too).

For those who think I'm stretching CPU cycles... my next box will be
3 GHz, and next year we'll have 64-bit varieties. "We" are only a year
or so away from having more CPU cycles than we know what to do with. I
intend to (ab)use them with a text editor and the likes of Python.

--markc
