Peter T. Breuer wrote:
> The only operations being done are simple "find the row with this key",
> or "update the row with this key". That's all. The queries are not an
> issue (though why the PG thread chooses to max out the CPU when it gets
> the chance to do so through a unix socket, I don't know).
> There is no disk as such... it's running on a ramdisk at the server
> end. But assuming you mean i/o, i/o was completely stalled. Everything
> was idle, all waiting on the net.
> Indeed, it is a single connection, because that's my application. I
> don't have 50 simultaneous connections. The database is used as a
> permanent storage area for the results of previous analyses (static
> analysis of the Linux kernel code) from a single client.
>> I'm not sure your setup is typical, interesting though the figures are.
>> Google a bit for pgbench perhaps and see if you can reproduce the
>> effect with a more typical load. I'd be interested in being proved wrong.
> But the load is typical HERE. The application works well against gdbm,
> and I was hoping to see a speedup from using a _real_ full-fledged DB
> instead.
I'm not sure you really want a full RDBMS. If you only have a single
connection and are making basic key-lookup queries then 90% of
PostgreSQL's code is just getting in your way. Sounds to me like gdbm
(or one of its alternatives) is a good match for you. Failing that,
sqlite is probably the next lowest-overhead solution.
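For what it's worth, the access pattern described above maps almost
directly onto gdbm's store/fetch calls. A minimal sketch in C, purely
for illustration (the file name "analysis.db" and the key/value strings
are made up, and error handling is trimmed):

#include <gdbm.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Hypothetical file name; block size 0 means "use the default". */
    GDBM_FILE db = gdbm_open("analysis.db", 0, GDBM_WRCREAT, 0644, NULL);
    if (db == NULL) {
        fprintf(stderr, "gdbm_open: %s\n", gdbm_strerror(gdbm_errno));
        return 1;
    }

    /* Made-up key/value pair standing in for one analysis result. */
    datum key = { "fn:do_exit",   sizeof("fn:do_exit") - 1 };
    datum val = { "analysed=yes", sizeof("analysed=yes") - 1 };

    /* "update the row with this key": overwrite any existing entry. */
    if (gdbm_store(db, key, val, GDBM_REPLACE) != 0)
        fprintf(stderr, "gdbm_store failed\n");

    /* "find the row with this key": one in-process hash lookup,
       no connection, no parser, no planner, no network. */
    datum out = gdbm_fetch(db, key);
    if (out.dptr != NULL) {
        printf("%.*s\n", out.dsize, out.dptr);
        free(out.dptr);  /* gdbm_fetch returns malloc()ed memory */
    }

    gdbm_close(db);
    return 0;
}

Everything happens in-process against a local file: no connection
setup, no parse/plan step, and no round trip per lookup, which is
likely where your per-query overhead was going. The sqlite route costs
a few more calls but is likewise in-process.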
Of course, if you want to have multiple clients interacting and
performing complex 19-way joins on gigabyte-sized tables with full-text
indexing and full transaction control, then you *do* want an RDBMS.
--
Richard Huxton
Archonet Ltd