On Jul 2, 2009, at 12:48 AM, Greg Klanderman wrote:

>>>>>> Andy Grundman <[email protected]> writes:
>
>> The rescan code fires 15 seconds after the last change.  If using a
>> network share, the delay is a bit longer.
>
> Sorry not to be clear, I wasn't asking how long before the rescan
> fires, what I want to know is how fast you expect the rescan to take
> to incorporate tag changes to a single track?  What I'm trying to get
> at is whether the time is proportional to the amount of tag changes,
> or is it proportional to the size of the db?  Are you just running the
> current 'Look for new and changed music', in which case the post-scan
> cleanup steps take a very long time, time proportional to the size of
> the db.  Or have you re-done the post scan logic to be incremental?

Oh, yeah, the time is definitely proportional to the number of tag
changes.  It does not run the same code as a "look for changes" scan.

>> Yeah playlists are also rescanned this way (although not well tested
>> yet).
>
> Right, but again, can I incorporate a playlist change in seconds?
> Currently, after doing a full wipe and rescan, and having made no
> changes whatsoever, rescanning my playlists takes 13 seconds, but
> merging various artists and database cleanups take like 3 more
> minutes.  If no track tags have changed, those post-scan steps should
> be completely unnecessary, and even if a small number of track tags
> have changed, it should be possible to incrementally incorporate those
> changes quickly.

The goal is to support playlist changes in seconds, yes.  As you say,
there's no need to merge artists in that case.

> A full rescan on 7.3 trunk takes 35 minutes.  So if we assume the tag
> reading in 7.3 is 1/3 the time, with the C tag reading that might go
> down to 25 minutes, assuming the new tag reading takes twice the time
> of my C program.  That leaves about 23 minutes manipulating tag
> data and stuffing it in the database - all with the CPU at 100%
> utilization.  That's an awful lot of computation on a 2.8 GHz xeon
> which I just cannot see being justified for 5 MB of data - that's
> approaching (actually 77% of) a million CPU cycles per byte of data.

There's not much else I can say other than to invite you to use
Devel::NYTProf to profile a scan and get an idea of which bits are
slow.  It's almost all DBIx::Class code.  I plan to work with those
guys to try to improve performance, but not for this release.
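For reference, a typical NYTProf session looks roughly like this; the
scanner.pl name and the --wipe flag are assumptions based on a stock
SqueezeCenter checkout, so adjust to your install:

```shell
# Sketch of profiling a scan with Devel::NYTProf (assumed paths/flags).

# Run the scanner under the profiler; this writes ./nytprof.out:
perl -d:NYTProf scanner.pl --wipe

# Turn the raw profile into a browsable HTML report under ./nytprof/:
nytprofhtml
```

Sorting the report by exclusive time should point straight at the
hot spots, which in my experience are mostly DBIx::Class internals.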

_______________________________________________
beta mailing list
[email protected]
http://lists.slimdevices.com/mailman/listinfo/beta
