> I haven't been paying a lot of attention to all of the scanning work
> you've been doing the last couple releases, so sorry if this is the
> direction you're headed, but it seems to me that you should have a
> table in the DB which contains just the absolutely raw tag data. Have
> the new/changed scan update that - there are no dependencies to worry
> about. Then have a separate phase that reprocesses the raw data into
> the tables you actually use at runtime, and do that from scratch even
> for new/changed scans. That should completely and fundamentally solve
> these sorts of problems.

That was discussed in the past - I think it was the plan for the "new schema" branch, which was going to be SBS 8.0. Effectively, one pass through the file system, reading tags into a metadata DB, and then processing that into an optimised model for SBS to use.

I'm not totally convinced it would perform better; it depends on library size. The file-tag scanning phase would take just as long, and there would be more DB writes: writing tags to the first DB, then reading the data back, processing it and writing to a new DB. Would it be any quicker than running a full clear+scan?
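For what it's worth, the two-phase idea can be sketched in a few lines. This is a minimal illustration in Python with SQLite - the table and column names are made up for the example and are not SBS's actual schema:

```python
import sqlite3

# Illustrative schema, not SBS's real one: a raw tag table that the
# scanner updates, plus a runtime table rebuilt from scratch each pass.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE raw_tags (
        path   TEXT PRIMARY KEY,  -- updated by the new/changed scan
        artist TEXT, album TEXT, title TEXT
    );
    CREATE TABLE tracks (         -- optimised table used at runtime
        id INTEGER PRIMARY KEY,
        path TEXT, artist TEXT, album TEXT, title TEXT
    );
""")

def scan_phase(files):
    """Phase 1: store the absolutely raw tag data; no dependencies."""
    con.executemany(
        "INSERT OR REPLACE INTO raw_tags VALUES (?, ?, ?, ?)", files)

def reprocess_phase():
    """Phase 2: rebuild the runtime tables from scratch from raw data,
    even when only a few files were new/changed."""
    con.execute("DELETE FROM tracks")
    con.execute("""
        INSERT INTO tracks (path, artist, album, title)
        SELECT path, artist, album, title FROM raw_tags
    """)
    con.commit()

scan_phase([("/music/a.flac", "Artist", "Album", "Song A")])
reprocess_phase()
```

The point of the split is that phase 1 never has to worry about cross-table consistency - but as noted above, it also doubles the write traffic, since every tag is written once to the raw table and again to the runtime tables.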
> Also, I suggest doing both phases in ram,
> and only at the end write it all to the DB in big chunks rather than
> making a zillion little writes and reads from the DB - that was where
> all the time was going last I looked. For 25k tracks, my tag data is
> only a few mb and so even much larger libraries should easily fit in
> ram even on fairly anemic somewhat modern hardware.

It's not just the memory required to hold the data; the data has to be processed, which means holding several copies of it. Better to leave caching to the DB engine: tell the DB to use more memory so it avoids multiple disk accesses.

_______________________________________________
beta mailing list
[email protected]
http://lists.slimdevices.com/mailman/listinfo/beta
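Both suggestions - batching the writes and giving the DB engine more cache - are easy to show with SQLite. A minimal sketch (the table, row counts and cache size are illustrative, not anything SBS actually does):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tags (path TEXT, title TEXT)")

# "Tell the DB to use more memory": enlarge SQLite's page cache so it
# avoids repeated disk accesses. A negative value is a size in KiB,
# so -65536 is roughly 64 MB (the figure here is arbitrary).
con.execute("PRAGMA cache_size = -65536")

# Write in one big chunk inside a single transaction, rather than a
# zillion little autocommitted writes (one fsync instead of 25,000).
rows = [(f"/music/{i}.flac", f"Track {i}") for i in range(25_000)]
with con:  # commits once when the block exits
    con.executemany("INSERT INTO tags VALUES (?, ?)", rows)
```

On an on-disk database the transaction batching alone typically makes orders-of-magnitude difference, because each autocommitted write forces its own journal sync.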
