> Well, SQLite currently seems to meet my performance needs; I don't think that rolling my own file format would pay off. I will consider that only as a last resort.

I was being somewhat facetious, mainly to prove that you don't
actually want the best possible performance. You are far better off
writing whatever code is simplest, especially to test, and then
profiling the results. If performance is not acceptable, you can
then change how things work, and you will have prior code to test
against.


 http://c2.com/cgi/wiki?PrematureOptimization

> The higher-level tasks will be performed by various callers in various
> threads. But in the end, each task requires several db commands to be
> executed, so the db will be the bottleneck;

I did say that the work items should be higher level than a single SQL command.
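For instance, a work item could be a callable that performs a whole transaction's worth of SQL, handed to a single worker thread that owns the connection. A minimal sketch in Python (the `db_worker` helper and the work item below are hypothetical names, not part of SQLite):

```python
import queue
import sqlite3
import threading

def db_worker(db_path, work_queue):
    # One thread owns the connection; work items arrive via the queue.
    conn = sqlite3.connect(db_path)
    while True:
        item = work_queue.get()
        if item is None:          # sentinel: shut down
            break
        with conn:                # one transaction per work item
            item(conn)
        work_queue.task_done()
    conn.close()

work_queue = queue.Queue()
worker = threading.Thread(target=db_worker, args=(":memory:", work_queue))
worker.start()

results = []

def create_and_insert(conn):
    # Several SQL commands grouped into a single higher-level work item.
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
    results.append(conn.execute("SELECT SUM(x) FROM t").fetchone()[0])

work_queue.put(create_and_insert)
work_queue.put(None)
worker.join()
print(results[0])  # 6
```

Because the worker executes each item as one transaction, callers never contend for the connection, and the queue serializes whole tasks rather than individual statements.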

> adding more threads and CPUs
> won't help if I have to serialize all requests into a queue that is
> processed by a worker thread.

Worker thread*s*. You will generally want one more worker thread than you have CPUs. However, until you actually profile, you won't know where your bottleneck is. It could be disk, memory, CPU or other factors.
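As a rough illustration of that sizing rule, using Python's standard thread pool (the squaring task is just a stand-in for real work):

```python
import os
from concurrent.futures import ThreadPoolExecutor

# One more worker than the machine has CPUs, so a thread blocked on
# I/O still leaves enough runnable threads to keep the CPUs busy.
n_workers = (os.cpu_count() or 1) + 1

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    squares = list(pool.map(lambda x: x * x, range(8)))

print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49]
```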

> The thread switching at some point will limit scalability.

And you know this how? Given that operating systems and CPUs have been tuned for thread switching for decades, actual evidence to the contrary would be useful first.

> The other option - opening the db in each thread -
> will lead to more memory overhead, also limiting the scalability.

The default settings use about 2 MB of cache per connection. With a pragma you can change this up or down.
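For example, `PRAGMA cache_size` reads or sets the per-connection page cache; a negative value is interpreted as KiB rather than pages. A small sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Read the current cache size (pages if positive, KiB if negative).
default_cache = conn.execute("PRAGMA cache_size").fetchone()[0]

# Cap the cache at roughly 1 MiB for this connection.
conn.execute("PRAGMA cache_size = -1024")
new_cache = conn.execute("PRAGMA cache_size").fetchone()[0]

print(default_cache, new_cache)
conn.close()
```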

As the page says:

Make it work. Make it right. Make it fast.

Roger
