>>>>> On September 21, 2010 Andy Grundman <[email protected]> wrote:

> Good explanation, this is exactly what 7.6 is doing. :)

I haven't been paying much attention to all of the scanning work
you've been doing over the last couple of releases, so sorry if this
is the direction you're already headed, but it seems to me that you
should have a table in the DB which contains just the absolutely raw
tag data.  Have the new/changed scan update that - there are no
dependencies to worry about.  Then have a separate phase that
reprocesses the raw data into the tables you actually use at runtime,
and do that from scratch even for new/changed scans.  That should
completely and fundamentally solve these sorts of problems.  Also, I
suggest doing both phases in RAM, and only at the end writing it all
to the DB in big chunks rather than making a zillion little writes
and reads against the DB - that was where all the time was going,
last I looked.  For 25k tracks, my tag data is only a few MB, so even
much larger libraries should easily fit in RAM, even on fairly anemic
but somewhat-modern hardware.
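To make the idea concrete, here is a rough sketch of the two-phase
approach using SQLite.  Table and column names are made up for
illustration - this is not the actual schema, just the shape of it:
scan into a raw-tags table, rebuild the runtime tables from scratch,
do it all in an in-memory DB, and flush to disk once at the end.

```python
import sqlite3

def scan_phase(db, files):
    # Phase 1: store only the raw tag data.  No derived tables are
    # touched, so there are no dependencies to worry about.
    db.execute("""CREATE TABLE IF NOT EXISTS raw_tags
                  (path TEXT PRIMARY KEY, artist TEXT,
                   album TEXT, title TEXT)""")
    db.executemany(
        "INSERT OR REPLACE INTO raw_tags VALUES (?, ?, ?, ?)",
        [(f["path"], f["artist"], f["album"], f["title"])
         for f in files])

def reprocess_phase(db):
    # Phase 2: rebuild the runtime tables from scratch from raw_tags,
    # even for new/changed scans, so incremental-update bugs can't
    # creep in.
    db.executescript("""
        DROP TABLE IF EXISTS albums;
        CREATE TABLE albums (artist TEXT, album TEXT, tracks INTEGER);
        INSERT INTO albums
            SELECT artist, album, COUNT(*) FROM raw_tags
            GROUP BY artist, album;
    """)

# Do both phases entirely in RAM...
mem = sqlite3.connect(":memory:")
scan_phase(mem, [
    {"path": "/music/a1.flac", "artist": "A", "album": "X", "title": "t1"},
    {"path": "/music/a2.flac", "artist": "A", "album": "X", "title": "t2"},
])
reprocess_phase(mem)

# ...then write it all out in one big chunk instead of a zillion
# little writes.
disk = sqlite3.connect("library.db")
mem.backup(disk)
disk.close()
```

The win is that phase 2 is a pure function of the raw_tags table, so
a rescan can never leave the runtime tables half-updated.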

Greg
_______________________________________________
beta mailing list
[email protected]
http://lists.slimdevices.com/mailman/listinfo/beta
