On February 4, 2013 01:44:50 AM Florian Jung wrote:
> On 04.02.2013 00:35, Tim E. Real wrote:
> > On February 3, 2013 02:44:40 PM Florian Jung wrote:
> >> Hi
> >> 
> >> just played around with three GPLed time stretching libraries and wanted
> >> to share my (unscientific and subjective) results
> >> 
> >> 
> >> RubberBand (phase vocoder)
> >> pro: fast, decent quality under non-extreme conditions, we can supply a
> >>      frame map (which is important for avoiding cumulative errors)
> >> con: bad quality under extreme conditions (multiply tempo by a factor
> >>      of 0.33 or 10.0)
> >> 
> >> LibSBSMS (sinusoidal subband modeling)
> >> pro: REALLY good quality, still REALLY good quality under extreme
> >>      conditions, natively supports sliding tempi.
> >> con: we can't supply a frame map, very slow (30 seconds need ~40 sec of
> >>      processing on my 1.4 GHz machine)
> >> 
> >> SoundTouch (SOLA: basically finds zero-transients of your signal,
> >>             cuts your audio into small pieces, and glues them
> >>             together)
> >> pro: very fast
> >> con: really bad quality under extreme conditions (sounds choppy), we
> >>      can't supply a frame map
> >> 
> >> i'd conclude from this:
> >> don't use SoundTouch (RubberBand is similarly fast and better quality).
> >> Use RubberBand within MusE for timestretching.
> >> Maybe offer LibSBSMS for slow but high-quality processing, e.g. for a
> >> final downmix.
> >> 
> >> 
> >> 
> >> if you want to try them out:
> >> LibSBSMS and SoundTouch are used in Audacity.
> >> When you choose Effects -> Change Tempo, you're getting SoundTouch,
> >> and when you choose Effects -> Sliding Time Scale/Pitch Shift, you're
> >> getting libSBSMS.
> >> And RubberBand support will hopefully be in MusE soon :)
> >> 
> >> greetings
> >> flo
> > 
> > Thanks dude! Was wondering about these things yesterday.
> > Was going to comment on CPU cost during mix-down vs simple editing needs.
> > 
> > I heard of SOLA in QTractor. Didn't know it used a simple time-domain
> > method.
> > 
> > For editing tasks in MusE we want low CPU usage.
> > I sometimes end up with dozens of parts on dozens of tracks.
> > The libraries must be applied to each of the wave parts.
> > And it must be done independently - do not 'share' a running instance
> >  of the lib.
> > 
> > *Note* Pay attention to part clones! They share an EventList pointer!
> > You must somehow keep track of which WaveEvent in which EventList
> >  in which WavePart is associated with which specific dedicated running
> >  instance of the library!
> > 
> > Yeah, that's the bit that killed my last fully functioning attempt!
> > Had to tear it all down.
> > You will see in my audioconvert.cpp some initial attempts at providing
> >  this /crucial/ link between wave events and each stretcher/shifter
> >  instance.
> > It's just an STL map from a pointer to my AudioConverter class to the
> >  WaveEvent.
> > It is installed as a member right now in each Part (or WavePart?).
> > Well, something like that.
> > But man was I discouraged...  FYI I left off here:
> > It means I have to look all through MusE and find places that might
> >  add or remove wave events and parts or manipulate them somehow,
> >  and be sure to update that AudioConverter-to-WaveEvent map!
> > 
> > (See below: Why I want expandable converter classes with base classes.)
> > 
> > And the CPU usage really piles up! After a few tracks and parts,
> >  even in stop mode it becomes pretty bad on high quality settings.
> > 
> > Even when a track is muted but not 'off', it must be ready to go at all
> >  times, because mute had to be designed (my fix!) in the code to respond
> >  instantly, whereas 'off' is lazily slower. It's due to the audio
> >  caching system.
> 
> you cannot do time-stretching in real time with an acceptable number of
> tracks. at least not if you still want some CPU power left for useful tasks.

Sure, using a simple but poor-sounding time-domain method like in 
 SoundTouch. That should use virtually no CPU time at all.
Or, as I say, if we can get one of the other libraries to go down to a 
 low enough quality that CPU usage equals the time-domain method.

Anyway caching is good.

But read on. Turns out it looks like I *was* using caching after all...

> That's why i want to precalculate or heavily cache this.
> 
> > So I was going to describe the simple (but poor sounding) time-domain
> >  method, but I see that SoundTouch uses it. I read about SoundTouch
> >  before but I could swear I thought it used a more sophisticated method.
> > 
> > So I was wondering if any of the libraries had a mode which would be
> >  this simple method we could throttle down to during editing.
> > 
> > Even better, if they used the more sophisticated methods but also allowed
> >  their quality to be adjusted to the point where CPU use would equal
> >  that of the simple method. RubberBand in particular.
> 
> you can adjust RubberBand's quality, iirc. and btw, SoundTouch isn't
> that much faster than RubberBand.
> 
> > So you've answered my question: we could stick with RubberBand for
> >  editing, and use it or something else at a higher quality setting for
> >  final mix-downs.
> 
> "something else": libsbsms :)
> this stuff sounds REALLY cool
> (i tried this with the "Pirates of the Caribbean" theme, and slowed it
> down to 0.33 of its original speed. soundtouch: awful, rubberband:
> smeary, but acceptable, libSBSMS: almost perfect!)
> 
> > I wanted to ask again that we try to keep it as modular classes where
> >  new stretchers/shifters could be added quickly, based on a common base
> >  class.
> definitely.
> 
> > In my implementation, I figured on a few common members like time and
> >  pitch factors and a common MusE-side 'quality' setting; keep it simple,
> >  say high, med, low. Plus the process routine.
> > 
> > The Secret Rabbit Code resampler only uses the time factor while RB uses
> >  both the time and pitch factors (used together they can act as a
> >  resampler, IIRC).
> > 
> > All of them, I think, have quality settings.
> > So a common settings GUI could be used then.
> > Anything more specific to the libraries would require custom GUIs for
> > each.
> 
> well, there will be specific settings as well, because each method has
> its own tunables. But agreed, we can put this together to some degree
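To make that concrete, here's a rough sketch of the kind of common base
class I mean. Everything here is hypothetical (names, interface, the
quality enum) - none of it is existing MusE code:

```cpp
#include <algorithm>
#include <cstddef>

// Sketch of a common stretcher/shifter base class; all names are
// hypothetical, not actual MusE API.
class AudioConverter {
public:
    enum Quality { Low, Medium, High };

    virtual ~AudioConverter() {}

    // Common tunables shared by all stretchers/shifters.
    void setTimeRatio(double r)  { _timeRatio = r; }
    void setPitchRatio(double r) { _pitchRatio = r; }
    void setQuality(Quality q)   { _quality = q; }

    double timeRatio() const  { return _timeRatio; }
    double pitchRatio() const { return _pitchRatio; }
    Quality quality() const   { return _quality; }

    // Each library wrapper (RubberBand, SoundTouch, SBSMS...) implements
    // this; returns the number of output frames produced.
    virtual std::size_t process(const float* in, std::size_t inFrames,
                                float* out, std::size_t outFrames) = 0;

protected:
    double _timeRatio  = 1.0;
    double _pitchRatio = 1.0;
    Quality _quality   = High;
};

// Trivial pass-through 'converter', only to show the interface in use.
class PassThroughConverter : public AudioConverter {
public:
    std::size_t process(const float* in, std::size_t inFrames,
                        float* out, std::size_t outFrames) override {
        std::size_t n = std::min(inFrames, outFrames);
        std::copy(in, in + n, out);  // no stretching, just copy
        return n;
    }
};
```

The point being that a common settings GUI only ever needs the base
interface, while library-specific tunables stay in the derived classes.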
> 
> 
> Tim, please review my following design draft:
> 

> - WaveEvent::process() or ::getData() is responsible for calculating the
>   time-stretch (and also frequency shift, as this is just an additional
>   resampling)
> - WaveEvent::process() gets a WavePart* along, in order to handle
>   different caches for different clones.

No such methods. You probably want WaveEventBase::readAudio(WavePart* ...)
That's where I put my processing code.
(The code is all commented out and was migrated into the AudioConverter 
 class, but if you look you can mostly see the way it was when it was
 functioning. You should have a look for some tips.) 

I already pass a WavePart* all the way from Event::readAudio to
 WaveEventBase::readAudio().

This was in preparation for my AudioConverter-to-WaveEvent map.
You are going to need something similar, no?
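For illustration, the bookkeeping might be sketched like this. The types
are hypothetical stand-ins for the real MusE classes, and my real map
ran converter-to-event rather than event-to-converter, but the sync
problem is the same:

```cpp
#include <map>

// Hypothetical stand-ins for the real MusE types, for illustration only.
struct WaveEvent      { int id; };
struct AudioConverter { double timeRatio; };

// One dedicated converter per wave event. Clones share an EventList,
// so the key must identify the exact event (here by pointer), not the
// shared list.
typedef std::map<const WaveEvent*, AudioConverter*> ConverterMap;

// Every place that adds/removes/manipulates wave events must keep the
// map in sync, e.g. on event removal:
void removeEvent(ConverterMap& m, const WaveEvent* e) {
    ConverterMap::iterator it = m.find(e);
    if (it != m.end()) {
        delete it->second;  // the event's dedicated converter goes with it
        m.erase(it);
    }
}
```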

Are you aware of our wave AudioPrefetch memory cache?

class WaveTrack {
      Fifo _prefetchFifo;  // prefetch FIFO
};

AudioPrefetch::prefetch(...) {
      ... if (track->prefetchFifo()->getWriteBuffer(ch,
                  MusEGlobal::segmentSize, bp, writePos)) ...
}

WaveTrack::getData() reads from the prefetch cache FIFO, 
 and AudioPrefetch::prefetch() fills the cache via calls to
 WaveEventBase::readAudio().

So you see, I lied I guess. My processing wasn't exactly 'live' real-time.
It was already cached.
I did mine through WaveEventBase::readAudio(WavePart* ...),
 which is cached via AudioPrefetch.

The audio prefetch already runs in its own thread.
Its job is to fetch more blocks of future wave data from the wave files and 
 put them into the WaveTrack::_prefetchFifo FIFO, then getData will read it. 

But will you still have another helper thread to do your processing?
Could we use the same thread somehow?

Your new cache is just for stretching?
Maybe some overlap of functionality with AudioPrefetch.
Maybe you can tap into the audio prefetch.

These caches are meant to be semi-small. So they should not eat much memory. 
MusE (supposedly) prevents disk memory swapping by locking all current and 
 future memory at start. I think it's working but I guess at some point it
 must swap.

So in the end your idea is pretty much the same as how I tapped into the 
 audio prefetch, given also that you'll need, as I did, some 
 AudioConverter-to-WaveEvent map. Am I right?
So AudioPrefetch could help you?

Anyway have a look there, tell me whachya think 'bout all that.

> - WaveEvent::process() keeps track of the last-calculated sample no.

I needed a running frame number (at least in my real-time method) 
 because these things need to be allowed to progress on their own 
 instead of being forced to a specific frame on each process cycle.
My code used a "sfCurFrame". But it was moved in preparation.

>   If the requested sample is near the last-calculated, we just continue
>   retrieving data from our converter until we have the requested sample
>   (we might ignore some hundreds or thousands of samples).
>   Only if the difference is larger than a threshold do we do a seek() in
>   our SndFile and reinstantiate our Converter. This might result in
>   an audible tick, but we have that tick when seeking anyway.

You shouldn't need to re-instantiate the converter. 
The converter should be able to be instantiated once and left running 
 while in use. Maybe you are thinking of 'Reset' which some of them have.
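That continue-vs-seek decision could be sketched like this (hypothetical
names throughout; 'threshold' is how many frames we are willing to
calculate and throw away rather than seeking):

```cpp
// Decide whether to keep rolling the converter forward or to seek.
// All values are in frames; names are hypothetical, not MusE code.
enum SeekAction { Continue, Reseek };

SeekAction decide(long lastCalculated, long requested, long threshold) {
    long diff = requested - lastCalculated;
    if (diff >= 0 && diff <= threshold)
        return Continue;   // roll the converter forward to 'requested',
                           // discarding the intermediate samples
    return Reseek;         // seek() the SndFile and Reset the converter
}
```

Note a request *behind* the last-calculated frame always means a seek,
since the converter only runs forward.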
 
Since one may want several converters in existence, yet not all in use 
 at the same time, and it's bad to keep re-creating and deleting 
 converters on the fly, I had an idea for keeping a 'pool' of them - 
 ready to be used anytime and then 'given up' when done being used, 
 as the song plays on and on.
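A minimal sketch of that pool idea (hypothetical names, not existing
MusE code):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical converter; in MusE this would be the common base class.
struct Converter {
    bool inUse;
    Converter() : inUse(false) {}
    void reset() {}  // stand-in for the library's Reset
};

// A tiny fixed-size pool: converters are created once up front, handed
// out with acquire() and returned with release(), instead of being
// created and deleted on the fly while the song plays.
class ConverterPool {
public:
    explicit ConverterPool(std::size_t n) : _pool(n) {}

    Converter* acquire() {
        for (std::size_t i = 0; i < _pool.size(); ++i)
            if (!_pool[i].inUse) {
                _pool[i].inUse = true;
                _pool[i].reset();  // hand the new user a clean state
                return &_pool[i];
            }
        return 0;  // pool exhausted
    }

    void release(Converter* c) { if (c) c->inUse = false; }

private:
    std::vector<Converter> _pool;  // fixed size, so pointers stay valid
};
```

Whether reset-on-acquire is enough (versus a fresh instance) depends on
how well each library's Reset actually clears its internal state.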

Later.
Tim.

> - MusE will get a new AudioCache class. This class works like a buffer
>   to the outside, but might internally do one of those:
>     - calculate everything as needed (with very little caching, some
>       kilobytes at most)
>     - try to cache to some large memory (which usually is your harddisk,
>       but might be your RAM if you have plenty).
>   in the second case, there will be a nice-d background task which
>   tries to completely fill the cache.
>   if data which has already been precalculated by this background task
>   is requested, then this data is returned.
>   if data at a place where the cache is still empty is requested, we
>   do much the same as in the first case: instantiate *another* stretcher
>   and let it calculate whatever we need right now. we'll discard the
>   result of this stretcher then (i.e., the background task will need to
>   do the same work again; this is to ensure there will be no crackles).
> 
>   Whenever some relevant parameter changes, the cache is dropped.
> 
> 
> All relevant parameters form some kind of checksum. The cache contents
> may be saved to disk, together with that "parameter-checksum".
> If MusE is started the next time, it looks for a cache file which fits
> to the song, the part, the event and also to the param-checksum. If
> present, there is no need to recalculate.
> 
> -> no crackle during playback
> -> acceptable crackle during seek
> -> no unnecessary CPU load, because most of the time you'll be getting
>    data from the caches
> -> no unnecessary waiting time for cache rebuilds (if you need the data
>    NOW, you will get it calculated on-the-fly. the background tasks have
>    a lower priority, and thus don't get in your way)
> -> no waiting time after a song load (cache files are on your disk)
> -> still usable for people with too little disk space (they'll have to wait
>    for the caches after song loading, however)
> 
> i hope this design will work for muse. what do you think tim?
> 
> greetings
> flo
> 
> > Cheers. Tim.
> > 

_______________________________________________
Lmuse-developer mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/lmuse-developer
