On 20 Feb 2013, at 5:20, Eduardo Morras wrote:

> Execution time doing what? Waiting for I/O? How do you get execution time? What SQL are you doing?

I'm using sqlite3_profile and summing the reported times in nanoseconds.
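
For reference, the accumulation amounts to something like this (a sketch, not pseudo's actual code; the running-total variable is mine):

    #include <sqlite3.h>

    /* Running total of the per-statement times reported by
     * sqlite3_profile, in nanoseconds. */
    static sqlite3_uint64 total_ns = 0;

    /* SQLite invokes this after each statement completes, passing the
     * SQL text and an estimate of wall-clock time in nanoseconds. */
    static void profile_cb(void *unused, const char *sql, sqlite3_uint64 ns)
    {
        (void)unused;
        (void)sql;
        total_ns += ns;
    }

    /* Register the callback on an open connection. */
    static void enable_profiling(sqlite3 *db)
    {
        sqlite3_profile(db, profile_cb, NULL);
    }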

> Don't run with synchronous off; it only calms the symptom, doesn't cure the problem, and can mask the real one.

We cannot afford to have any fsyncs occurring. This is a host system limitation: if sqlite asks for its database to be flushed to disk, two dozen other disk-intensive processes suddenly have to wait for all their writes to be flushed too.

> Are you using a join?

No.

> I don't use a :memory: db. When I need to, I use a normal db with 10-20% more page cache than the file size. In rare use cases, I use a RAM disk and backup().

I don't have any way to know in advance what the filesize might be.
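
(For reference, the page-cache suggestion above would amount to roughly this; the -20000 value, about 20 MB of cache, is just a placeholder, precisely because the file size isn't known up front:)

    /* A negative cache_size is interpreted by SQLite as a size in KiB
     * rather than a page count, so this asks for roughly 20 MB of
     * page cache on this connection. */
    sqlite3_exec(db, "PRAGMA cache_size = -20000;", NULL, NULL, NULL);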

The test case I was using was pseudo version 1.4.5, on Linux hosts, using the pseudo wrapper to untar a 28,000-file tarball.

> Surely I've misinterpreted it, but is the sqlite db in a directory with 28,000 files? Each time a journal or temporary file is created, modified, or deleted, and the main db file is modified, the directory entry must be updated, and with 28,000 files that's a very slow process.

There are no journal files; journaling is also off.
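
(Concretely, that setup amounts to something like this; a sketch of the settings described, not pseudo's literal source:)

    /* Avoid journal files and fsync entirely, as described above.
     * With journal_mode=OFF there is no rollback journal at all, and
     * synchronous=OFF means SQLite never asks the OS to flush. */
    sqlite3_exec(db, "PRAGMA journal_mode = OFF;", NULL, NULL, NULL);
    sqlite3_exec(db, "PRAGMA synchronous = OFF;", NULL, NULL, NULL);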

"pseudo" is a program in which all file-related syscalls in a client program are routed through to a server. The server is using sqlite to maintain a database of files. The database stores virtualized permissions and ownership. It's used to allow a build system to create root filesystems without root privileges. There are no rollbacks, and data persistence is mostly short-term; most databases last under ten minutes.

> Please post an example of your SQL; perhaps it can be tuned for sqlite.

First: The SQL is completely trivial.
Second: I am not having performance problems with sqlite; I am having performance problems with :memory:. Performance on files is lovely.

My claim is that, at a bare minimum, a :memory: database should not be SLOWER than a file on disk. It probably ought to be faster, but being slower implies that something has gone horribly wrong.

No joins, only one table, only a couple of columns used as keys, and they have indexes. Performance is adequate without :memory:.
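
To make the comparison concrete, a minimal reproduction would look something like the sketch below. The table and column names are invented for illustration (they are not pseudo's actual schema), the on-disk filename is arbitrary, and the 28,000 rows just echo the tarball size above. The point is that the identical workload is timed against ":memory:" and against a file by changing only the string passed to sqlite3_open():

    #include <sqlite3.h>
    #include <stdio.h>

    static sqlite3_uint64 total_ns;

    /* Accumulate the per-statement times reported by sqlite3_profile. */
    static void profile_cb(void *u, const char *sql, sqlite3_uint64 ns)
    {
        (void)u; (void)sql;
        total_ns += ns;
    }

    /* Run the same trivial workload against the given database name
     * (":memory:" or an ordinary file) and report the total. */
    static void run(const char *dbname)
    {
        sqlite3 *db;
        sqlite3_stmt *ins;
        int i;

        total_ns = 0;
        sqlite3_open(dbname, &db);
        sqlite3_profile(db, profile_cb, NULL);
        sqlite3_exec(db, "PRAGMA journal_mode = OFF; PRAGMA synchronous = OFF;",
                     NULL, NULL, NULL);
        /* One table, a couple of indexed key columns -- hypothetical schema. */
        sqlite3_exec(db,
            "CREATE TABLE files (path TEXT, ino INTEGER, mode INTEGER, uid INTEGER);"
            "CREATE INDEX files_path ON files (path);"
            "CREATE INDEX files_ino ON files (ino);",
            NULL, NULL, NULL);
        sqlite3_prepare_v2(db, "INSERT INTO files VALUES (?, ?, ?, ?);", -1, &ins, NULL);
        for (i = 0; i < 28000; i++) {
            char path[64];
            snprintf(path, sizeof(path), "/some/dir/file%d", i);
            sqlite3_bind_text(ins, 1, path, -1, SQLITE_TRANSIENT);
            sqlite3_bind_int64(ins, 2, i);
            sqlite3_bind_int64(ins, 3, 0644);
            sqlite3_bind_int64(ins, 4, 0);
            sqlite3_step(ins);
            sqlite3_reset(ins);
        }
        sqlite3_finalize(ins);
        sqlite3_close(db);
        printf("%s: %llu ns\n", dbname, (unsigned long long)total_ns);
    }

    int main(void)
    {
        run(":memory:");
        run("testcase.db");
        return 0;
    }

On my reading of the situation, the first call should never come out slower than the second; when it does, that's the problem being reported.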

-s