On 06.02.2013 10:01, Tim E. Real wrote:
> On February 5, 2013 05:58:48 PM Florian Jung wrote:
>> Hi
>>
>> this is going to be a very long mail, but please read and understand
>> it fully before answering. (And look at the diagram :) )
>>
>> On 04.02.2013 09:47, Tim E. Real wrote:
>>> No such methods. You probably want WaveEventBase::readAudio(WavePart* ...)
>>> That's where I put my processing code.
>>
>> Did I get this right: WaveEventBase::readAudio does not need to be
>> hard real-time capable, because its output is not sent directly to
>> the speakers, but is first fed into a buffer several seconds long (by
>> the AudioPrefetch code), and data from this buffer is then read by
>> the audio thread and sent to the speakers?
>>
>> Please correct me if I'm wrong.
>
> Correct.
>
> It runs in its own thread.
> Basically a disk helper thread that fetches the wave data via SndFile
> several blocks into the future, reset upon seek of course.
>
> It is a (the?) perfect place to interject /specifically/ with realtime
> converter processing. Whereas fixed-block processing - effects and so
> on - is placed directly into the audio thread process routines, as you
> know.

Agreed.
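Just to be sure we're picturing the same mechanism, here is a minimal
sketch of such a disk-helper thread as I understand it. All names are
hypothetical; this is not the actual AudioPrefetch code:

// Hypothetical sketch of a disk-prefetch helper thread. A worker keeps
// a FIFO filled several blocks ahead; the audio thread only pops from
// it and never touches the disk; a seek flushes the stale blocks.

#include <atomic>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

struct Block { std::vector<float> samples; };

class Prefetch {
public:
    explicit Prefetch(size_t depth)
        : depth_(depth), pos_(0), run_(true), worker_([this]{ loop(); }) {}
    ~Prefetch() { run_ = false; cv_.notify_all(); worker_.join(); }

    // Audio thread: non-blocking; on underrun the caller outputs silence.
    bool pop(Block& out) {
        std::unique_lock<std::mutex> lk(m_, std::try_to_lock);
        if (!lk.owns_lock() || q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop_front();
        cv_.notify_one();                 // tell the worker to refill
        return true;
    }

    // "Reset upon seek": drop stale blocks, continue reading at 'frame'.
    void seek(long frame) {
        std::lock_guard<std::mutex> lk(m_);
        q_.clear();
        pos_ = frame;
        cv_.notify_one();
    }

private:
    void loop() {
        while (run_) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this]{ return !run_ || q_.size() < depth_; });
            if (!run_) break;
            const long at = pos_;
            lk.unlock();
            Block b = readBlock(at);      // the slow part (SndFile, disk I/O)
            const long n = (long)b.samples.size();
            lk.lock();
            if (at == pos_) {             // a seek meanwhile makes it stale
                q_.push_back(std::move(b));
                pos_ += n;
            }
        }
    }

    // Stub; the real thing would read via libsndfile at the given frame.
    Block readBlock(long /*frame*/) { return Block{std::vector<float>(4096, 0.f)}; }

    const size_t depth_;
    long pos_;
    std::atomic<bool> run_;
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<Block> q_;
    std::thread worker_;
};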
>> I don't like this design (passing information about the parent to the
>> children). IMHO, this should be one-way: children should not need to
>> know about their parents.
>
> So why does QWidget have a parent() method?
>
> Passing the WavePart down to the children and utilizing a map was
> far more sane than giving each child a pointer to its parent WavePart,
> as much as I would like to do that.

That's true; but I wouldn't like the children to have a pointer to the
parent stored, either.

> When faced with a situation like that I try to pass down, not up.
>
> This was the solution I had.
>
> And I wasn't about to go ripping up the complete design of clones which
> already took soooo much of my time to understand and get fixed
> in the first place.

Heh, maybe that was the reason for clones taking soooo much time ;)?
Really, that's usually a sign of a bad design.

>> I'd like to redesign it as follows.
>>
>> Please have a look at the following epically drawn diagram ;)
>> http://ente.hawo.stw.uni-erlangen.de/~ki08jofa/diagram.jpg
>>
>> (I left out the MIDI stuff; the Wave stuff is what's relevant.)
>>
>> First, some definitions:
>> "Event" also means EventBase.
>> WaveFile == SndFile.
>> frame-to-frame map = f2fmap = stretch profile: a map<int,int> which
>> maps sample positions of the desired output to the corresponding
>> sample positions in the original wave file.
>> E.g., for (i=0;i<10000;i+=100) f2fmap[i] = i*i/10000; causes the wave
>> to be extremely stretched at the beginning, then to get faster and
>> faster until it's faster than the original at the end.
>> The total duration will be the same.
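(A quick illustration of that definition, in code. This is a
hypothetical sketch, not existing MusE code; I'm assuming positions
between the stored map points are interpolated linearly:)

#include <cstdio>
#include <iterator>
#include <map>

using F2FMap = std::map<int,int>;   // output frame -> original frame

// Which original-file frame should be played at a given output frame?
double sourceFrame(const F2FMap& m, int outFrame)
{
    if (m.empty()) return 0.0;
    auto hi = m.lower_bound(outFrame);
    if (hi == m.begin()) return hi->second;            // before first point
    if (hi == m.end())   return std::prev(hi)->second; // past last point
    auto lo = std::prev(hi);
    double t = double(outFrame - lo->first) / double(hi->first - lo->first);
    return lo->second + t * (hi->second - lo->second); // linear interpolation
}

int main()
{
    // The quadratic example from above:
    F2FMap f2fmap;
    for (int i = 0; i < 10000; i += 100)
        f2fmap[i] = i * i / 10000;

    std::printf("out  100 -> src %7.2f\n", sourceFrame(f2fmap, 100));  // ~1: stretched
    std::printf("out 9900 -> src %7.2f\n", sourceFrame(f2fmap, 9900)); // ~9801: sped up
}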
>> The green stuff is the current as-is state; the red additions are
>> what I'd like to change sooner or later.
>>
>> My point is that calls and queries shall only go from parent to
>> children; a parent should never have to hand information about itself
>> to its child. (I.e., an Event shall not need to access
>> parentPart->pos()!)
>>
>> Instead, I'd like to assign every Event the frame-to-frame map it
>> needs, so it can just pass this to the SndFile and expect the SndFile
>> to process it appropriately. (SndFile may or may not do caching; this
>> is irrelevant to Event, Part and Track.)
>>
>> Whenever a Part is moved, or the tempomap is changed, the Parts are
>> informed about it. They then need to iterate through all of their
>> events and adjust the f2fmaps of those events.
>>
>> I understand that my design is incompatible with our clone parts,
>> which use shared EventLists; because then we'd have the SAME Event
>> (with the same f2fmap) in multiple Parts - i.e., one Event may have
>> more than one parent!
>>
>> There are two ways of resolving this immediate problem:
>>
>> 1) Change the way clone parts are implemented: don't share the
>>    EventList; instead, each clone has its own EventList. Whenever one
>>    of the clones changes its EventList, this is communicated to all
>>    other clones, which then adjust their private EventLists
>>    accordingly.
>>
>> or
>>
>> 2) Events do not contain the f2fmap. Instead, the Part passes this
>>    f2fmap [1] to Event::getData() on every call.
>>    Not only does this defeat the concept of self-contained objects
>>    (which know everything they need, and only that), it also
>>    introduces a second problem: how can the Part find out the f2fmap
>>    it needs for an event? It could recalculate it on every getData()
>>    call, which would be dead slow. It cannot store it next to the
>>    Event, because the Events live in a (shared) EventList.
>>    It could store it in a map<EventID,f2fmap>, but lookup there is
>>    O(log(number of events)), which also makes it dead slow.
>>
>> My point is: every variant of 2) sucks. We cannot avoid redesigning
>> the clone parts.
>>
>> 1) might sound like a heavy performance hit if we need to propagate
>> all event list changes to all clone parts; but it isn't. We would not
>> substitute the whole EventList (which would require a full rebuild of
>> all other Parts' event lists), but only issue requests like "please
>> get the event at $position and change it". We can easily execute such
>> a request for all clone parts, because Events are lightweight:
>> MIDI events only contain a few bytes (MIDI data is small), and
>> Wave events also only contain a few bytes (they only contain the
>> *pointer* to the SndFile).
>>
>> Okay. I hope you can agree with me so far. The clone design worked
>> fine as long as nothing an Event could do was influenced by its
>> parent Part. But now it is, and now it explodes.
>>
>> [1] Well, I lied in the text above: actually we're not passing around
>> f2fmaps; neither does an Event store a full f2fmap<int,int>, nor does
>> SndFile::getData() accept one.
>> Rather, SndFile::create_stretch_profile() accepts a full f2fmap and
>> does the following:
>> 1. Look whether that f2fmap is already among those we have stored.
>> 2. If it's not, add this f2fmap together with a unique key to our
>>    list and return the generated key (the key is an int).
>> 3. If it is, just return the existing key.
>> (4. Maybe inform our background worker that it might want to start
>>    calculating things.)
>> Conversely, SndFile::drop_stretch_profile() accepts a key and removes
>> the stored map from the list (and possibly clears the associated
>> cache, if any).
>> And finally, SndFile::get_data() accepts "from", "to" and "key"; it
>> might or might not use cached material (which might have been
>> calculated by some background thread before), or it retrieves the
>> actual f2fmap plus context from our internal, private list and does
>> the processing.
>>
>> You might have noticed all these "might or might nots". I write it
>> this way because I want *abstraction*. It doesn't matter HOW SndFile
>> gives us the stretched data. That way we can easily make it cache or
>> not cache stuff, and we can easily add new algorithms. We'll only
>> need to change one source file then.
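In case the footnote is too abstract, here is roughly what I have in
mind - a hypothetical sketch, names and signatures made up, nothing of
this exists yet:

// Hypothetical sketch of the proposed profile-key interface on
// SndFile. Caching is deliberately hidden behind it.

#include <map>
#include <vector>

using F2FMap = std::map<int,int>;   // output frame -> original frame

class SndFile {
public:
    // Return a key for this profile, reusing an existing entry if the
    // same map was registered before (steps 1-3 of the footnote).
    int create_stretch_profile(const F2FMap& m) {
        for (const auto& p : profiles_)   // linear scan is fine: few profiles per file
            if (p.second == m)
                return p.first;           // already known: reuse its key
        int key = next_key_++;
        profiles_[key] = m;
        // (4.) here we could kick a background worker to pre-render.
        return key;
    }

    // Forget a profile (and whatever cache may hang off it).
    void drop_stretch_profile(int key) { profiles_.erase(key); }

    // Deliver stretched data for output frames [from, to). Whether the
    // result comes from a cache or is computed on the fly is an
    // internal detail - the caller cannot tell, and that's the point.
    std::vector<float> get_data(long from, long to, int key) {
        const F2FMap& profile = profiles_.at(key);
        std::vector<float> out((size_t)(to - from), 0.f);
        // Stub: the real code would map each output frame through
        // 'profile' (see the earlier lookup sketch), read the source
        // frames via libsndfile and run the stretcher - or just copy
        // from a pre-rendered cache.
        (void)profile;
        return out;
    }

private:
    std::map<int, F2FMap> profiles_;
    int next_key_ = 0;
};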
>> What do you think of it? It might be some work, but do you consider
>> this design clean?
>>
>> (I really don't want to build the stretching feature upon a bad
>> design. And as I have pointed out, I consider the current design bad
>> for this purpose.)
>>
>>> Your new cache is just for stretching?
>>
>> Yes. It might or might not exist at all :)
>>
>>> Maybe some overlap of functionality with AudioPrefetch.
>>> Maybe you can tap into the audio prefetch.
>>
>> No, because...
>>
>>> These caches are meant to be semi-small. So they should not eat much
>>> memory.
>>
>> ...that's the reason :)
>
>> My cache is meant to be large or huge. It shall contain the whole
>> stretched wave file.
>
> What?
> You're going to make stretched copies in either RAM or disk,
> of potentially hundreds of megabytes of waves?

Only if the user wishes that.

> And you're going to make additional copies for every event using
> the same wave? Because you'll need to. Different events at different
> times on the time-line and tempo map will need their own copies.

The Event data structure will be copied, that's right.
The SndFile (and thus also those hundreds of user-requested MB of
waves) only exists once.

> Look, I *do* agree that at some point it is good to make physical
> copies, in an off-line processing sort of way; no doubt this is a good
> idea - we could do all kinds of off-line stuff. Especially resampling,
> which I hope you plan to support as well. But if I understand
> correctly, you are wrongly dismissing realtime processing altogether.

Depends on what you consider "realtime":

"Hard realtime", like the audio thread: right, I do not intend the
stretching to work that way.

"Dynamically prefetch the data for the next 500 ms or so": I am doing
that.

You, with little RAM but a strong CPU, might want to tell MusE: "please
do not waste my RAM with all this; just pipe the waves through a
stretcher every time I play them."
I, with lots of RAM/disk space but a weak CPU, will tell MusE: "please
use memory. Do the stretching once, store the result in RAM or in a
file, and then never bother my CPU with it again (unless I move the
event or change the tempomap)."
Both will be possible.

> How are you going to sync waves to a tempo stream instead of a map?
> Such as Jack Timebase, external midi clock, or MTC time code?

Oh. I see. Damn.

But how do you want to stretch it in real time then? You'd need to know
in advance when the next clock signal will occur (which you don't). And
because you cannot look into the future, you will have to readjust
again and again. But you cannot readjust in RT mode, because you don't
know exactly whether the stretcher has used your desired ratio of 0.9,
or maybe 0.8999999, due to the way these stretchers work. Errors would
accumulate.
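(To put a toy number on that accumulation - the effective ratio of
0.8999999 is of course just made up:)

// Toy illustration of the drift argument: if the effective stretch
// ratio differs minutely from the requested one, the position error
// grows without bound and is never corrected.

#include <cstdio>

int main()
{
    const double requested = 0.9;        // the ratio we ask for
    const double effective = 0.8999999;  // what the stretcher really does (made up)
    const double rate      = 44100.0;    // frames per second

    // Position error after t seconds, if nobody can measure it:
    for (int min = 10; min <= 60; min += 10) {
        double drift = min * 60.0 * rate * (requested - effective);
        std::printf("after %2d min: drift = %5.2f frames\n", min, drift);
    }
    // Tiny per block, but it only ever grows - and in RT mode we
    // cannot know 'effective', so we cannot subtract it out.
}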
> These libraries were designed for both realtime as well as off-line
> processing.
>
> Obviously too many of them slow down the CPU in high-quality realtime
> mode. But used sparingly they are useful.
>
> The simple time-domain methods take almost no CPU time at all,
> less time than some LADSPA plugins.
> In editing mode, allowing these lower-quality methods is good.
> The thing is, you /can/ use a few of these converters on the
> highest quality setting in MusE if you want.
> Even if there are a dozen tracks you can turn some of them 'off',
> or assign them a lower quality.
> Or do a mixdown to a single track, and use one /very/ high quality
> converter.
> The point is, even left with that single track, syncing can be done,
> and a recording of /that/ session can be made.

I see. My whole idea was tailored around the "offline mode" of such a
stretcher; even if we calculate the data just in time, the stretcher
has a frame2frame map and is offline. Which IMHO is the best solution,
as long as the tempo master is MusE. Can you agree with that?

Real-time processing for external clocks... I honestly have no idea
about external clocks, how they work, how they are used within MusE,
etc., and no idea how we could sync any time stretcher to them, even in
real-time mode. Do you have one?

> And you want slow? Try importing OGG or FLAC files into MusE.
> The realtime conversions can take just as much time as these
> stretchers. And that's in libsndfile - we can't change that.
> So why don't we make hundreds of megabytes of uncompressed copies
> of them as well, then?
> (And yes, despite my sarcasm, using off-line processing we /could/
> provide handy conversion features for those OGG and FLAC files and
> anything else, while MusE runs.)

That was sarcastic? I DO favour creating hundreds of megabytes of
uncompressed copies on disk (of course not in RAM), if my CPU can then
do more useful things. Especially if there is a button labeled "I need
that disk space now, delete all of your redundant files!". Disk space
is cheap, isn't it?

> So it would be best to have both real-time /and/ off-line processing.
>
> Both are required and useful.

Definitely. That's going to be a tough one.

Still, none of this defeats the idea of changing the clone parts
design. If doing this in realtime, you will need one stretcher for
every single event, right? Or how would you handle stretching?

First of all: do we want to stretch events, or stretch parts? I went
for events, because this saves us processing work; but it raises the
initial problem of changing how clones work, and it would also require
two independent implementations: stretching events offline, and
stretching parts online during playback. (Stretching single events
online would be useless: complicated, and of no benefit because no
caching is possible.)

greetings
flo

> Tim.
>
>>> MusE (supposedly) prevents disk memory swapping by locking all
>>> current and future memory at start. I think it's working but I
>>> guess at some point it must swap.
>>
>> My cache is meant to be treated like any other audio file. It would
>> be cool if we didn't need to swap it, but we would swap it if
>> necessary.
>>
>> greetings
>> flo
