6k tracks is far too few to make the database a bottleneck; I think you
need at least 30k tracks (probably more) for the database to be the
limiting factor.

Then I'm of little help: I have no more than 19k tracks.

For a small library, I believe previous tests have shown that the main
bottleneck is always disk I/O on the music files themselves.

That contrasts with the other thread referenced, where using a ramdisk for the cache made a huge difference. At least during a scan, the DB can cause considerable I/O as well. I'm currently running a test with an updated DBD::SQLite on my main server, which is on the really slow side when it comes to disk I/O.
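
For what it's worth, here is the kind of quick-and-dirty comparison I
have in mind (a sketch only: the paths, row count and per-insert
transactions are made up for illustration):

#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use Time::HiRes qw(gettimeofday tv_interval);

# Time the same insert workload against a database on disk and on a
# ramdisk (/dev/shm on most Linux boxes).
for my $path ('/tmp/bench.db', '/dev/shm/bench.db') {
    unlink $path;
    my $dbh = DBI->connect("dbi:SQLite:dbname=$path", '', '',
        { RaiseError => 1, AutoCommit => 1 });
    $dbh->do('CREATE TABLE tracks (id INTEGER PRIMARY KEY, title TEXT)');

    my $t0 = [gettimeofday];
    # AutoCommit means one transaction (and one fsync) per insert,
    # which is roughly the worst case a scan can hit.
    for my $i (1 .. 5000) {
        $dbh->do('INSERT INTO tracks (title) VALUES (?)', undef, "track $i");
    }
    printf "%-20s %.2fs for 5000 inserts\n", $path, tv_interval($t0);
    $dbh->disconnect;
}

If the ramdisk run is an order of magnitude faster, the scan really is
paying for DB disk I/O, not just for reading tags from the music files.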

As I said before, we should get a better understanding of what the performance pain points really are. I only scan once in a while; most users will spend more time navigating their music collection than scanning it. Is navigation a problem? As some alternative UIs have shown, many aspects of the UIs (the web UI in particular) can be improved without an updated database backend. Most of the data is pretty static, so caching (on the server or even the client side) can improve things considerably.
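
To illustrate what I mean by caching: since the library only changes on
a scan, even something this simple would help (a sketch only;
browse_albums() and the cache key are hypothetical names):

use strict;
use warnings;

my %cache;           # browse results keyed by query
my $generation = 0;  # bumped whenever a scan changes the library

sub cached_browse {
    my ($key, $query) = @_;
    my $entry = $cache{$key};
    return $entry->{rows} if $entry && $entry->{gen} == $generation;
    my $rows = $query->();    # hit the database only on a cache miss
    $cache{$key} = { gen => $generation, rows => $rows };
    return $rows;
}

sub on_rescan_done { $generation++ }    # invalidate everything at once

# e.g. my $albums = cached_browse('albums:all', \&browse_albums);

The same idea works client side with HTTP caching headers, since the
browse pages are equally static between scans.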

Also, I think it would be preferable to focus on optimizing browse speed
rather than scanning speed.

Oh, you already summarized my thoughts pretty well :-).

I would personally prefer to re-introduce MySQL as an option for people
with large libraries.

Isn't it still possible with no code changes?
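
As far as I can tell it should be, at least at the DBI layer, where
switching backends is just a different DSN (a sketch; the DSN details,
credentials and the USE_MYSQL switch are illustrative, and differences
in schema and SQL dialect are a separate question):

use strict;
use warnings;
use DBI;

# Pick the backend at connect time; the rest of the code talks to
# DBI and does not care which driver sits underneath.
my $dbh = $ENV{USE_MYSQL}
    ? DBI->connect('dbi:mysql:database=slimserver;host=localhost',
                   'slimserver', 'secret', { RaiseError => 1 })
    : DBI->connect('dbi:SQLite:dbname=library.db', '', '',
                   { RaiseError => 1 });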

--

Michael