>>>>> Andy Grundman <[email protected]> writes:

> The rescan code fires 15 seconds after the last change. If using a
> network share, the delay is a bit longer.
Sorry not to be clear: I wasn't asking how long before the rescan fires. What I want to know is how fast you expect the rescan to incorporate tag changes to a single track. What I'm trying to get at is whether the time is proportional to the amount of tag changes, or proportional to the size of the db. Are you just running the current 'Look for new and changed music', in which case the post-scan cleanup steps take a very long time, time proportional to the size of the db? Or have you re-done the post-scan logic to be incremental?

> Yeah playlists are also rescanned this way (although not well tested
> yet).

Right, but again, can I incorporate a playlist change in seconds? Currently, after doing a full wipe and rescan, and having made no changes whatsoever, rescanning my playlists takes 13 seconds, but merging various artists and database cleanups take about 3 more minutes. If no track tags have changed, those post-scan steps should be completely unnecessary, and even if a small number of track tags have changed, it should be possible to incorporate those changes incrementally and quickly.

> Of course, but improving the performance of DBIx::Class is not a
> trivial task. Definitely something we need to improve, no question.

I didn't expect it to be easy, and I'm glad to hear it is something you want to improve.

> I don't really like comments like this; you are comparing apples and
> oranges. You are completely ignoring the large amount of work SC does
> once it gets your tag information. Raw tag scanning is fast, and
> Audio::Scan is very fast by itself. But then you have to do something
> with the tags, translate them into the right values, put them in the
> database, etc. Sure, you could write all of that stuff in C but
> matching the features in SC would be an incredible amount of work.
By no means did I intend to trivialize what SC is doing. I'm actually very happy with how the scanning works, I have watched how much effort it has taken to get it right over many years, and I think you in particular have been doing great stuff on SC for a long while now, so I'm not trying to flame you or anyone there.

But I do think it is useful to take a step back and consider with an open mind how much time scanning tags should reasonably take. Compared to the speed and amount of RAM in modern computers, the amount of tag data for 20k tracks is puny. I'm pretty familiar with the scanner code and not aware of any processing it's doing that isn't essentially linear time w.r.t. the size of the tag data. I see no reason that perl shouldn't be completely capable of doing that tag post-processing in no time at all, and ignoring the database operations, I don't have any reason to believe it is taking appreciable time. Can you point me to any tag post-processing logic that is fundamentally computationally intensive?

My tag data can be slurped out of the tracks using a C program in 45 seconds. Stripping out the extraneous output from 'id3info', that tag data is just under 5 MB, and to give an idea of the redundancy in that data, it compresses to less than 600 KB. A full rescan on 7.3 trunk takes 35 minutes. So if we assume the tag reading in 7.3 is 1/3 of that time, with the C tag reading the total might go down to 25 minutes, assuming the new tag reading takes twice the time of my C program. That leaves about 23 minutes manipulating tag data and stuffing it into the database, all with the CPU at 100% utilization. That's an awful lot of computation on a 2.8 GHz Xeon which I just cannot see being justified for 5 MB of data: that's approaching (actually 77% of) a million CPU cycles per byte of data.

thanks,
Greg

_______________________________________________
beta mailing list
[email protected]
http://lists.slimdevices.com/mailman/listinfo/beta
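[Editorial aside: the cycles-per-byte figure above can be reproduced with a quick back-of-the-envelope sketch. It assumes the numbers stated in the post, with "5 MB" taken as 5,000,000 bytes and the full 23 minutes attributed to one core at 2.8 GHz.]

```python
# Sanity check of the "77% of a million CPU cycles per byte" estimate.
# Assumed inputs, all taken from the post above:
clock_hz = 2.8e9        # 2.8 GHz Xeon, one core at 100% utilization
cpu_seconds = 23 * 60   # ~23 minutes of post-scan tag/database work
tag_bytes = 5_000_000   # ~5 MB of raw tag data for 20k tracks

cycles_per_byte = clock_hz * cpu_seconds / tag_bytes
print(f"{cycles_per_byte:,.0f} cycles per byte")        # 772,800
print(f"{cycles_per_byte / 1e6:.0%} of a million")      # 77%
```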
