Combining the INSERT statements into one big string joined by
semicolons, rather than sending each individually, can drastically
speed up your inserts, bringing them much closer to the speed of COPY.
For example, instead of sending the statements separately, it's much
faster to send them as a single string.
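As a concrete sketch of the batching idea (the table name, columns, and sample data below are illustrative, not from the original posts), building the semicolon-joined string might look like this:

```python
# Build one semicolon-joined string of INSERT statements so the whole
# batch travels to the server in a single round trip, instead of paying
# per-statement network and parse overhead for each row.
# NOTE: string interpolation is used only to keep the sketch self-contained;
# real code should use the driver's parameter binding to avoid SQL injection.

rows = [(1, "alpha"), (2, "beta"), (3, "gamma")]  # sample data

def batch_insert_sql(table, rows):
    """Return one string of semicolon-joined INSERT statements."""
    statements = [
        "INSERT INTO %s (id, name) VALUES (%d, '%s')" % (table, rid, name)
        for rid, name in rows
    ]
    return "; ".join(statements)

sql = batch_insert_sql("test_table", rows)
print(sql)
# A single driver call, e.g. cursor.execute(sql), would then send all
# three inserts in one round trip (cursor is hypothetical here).
```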
On Mon, 2006-06-26 at 17:20 -0400, Michael Stone wrote:
On Mon, Jun 26, 2006 at 08:33:34PM +0100, Simon Riggs wrote:
of the SQL standard, so being unaware of them when using SQL is strange
to me.
Welcome to the world of programs designed for mysql. You'll almost never
see them batch inserts, take advantage of referential integrity, etc.
You end
For long involved reasons I'm hanging out late at work today, and rather
than doing real, productive work, I thought I'd run some benchmarks
against our development PostgreSQL database server. My conclusions are
at the end.
The purpose of the benchmarking was to find out how fast Postgres
Brian Hurt [EMAIL PROTECTED] writes:
For long involved reasons I'm hanging out late at work today, and rather
than doing real, productive work, I thought I'd run some benchmarks
against our development PostgreSQL database server. My conclusions are
at the end.
Ummm ... you forgot to
Brian,
Any idea what your bottleneck is? You can find out at a crude level by
attaching an strace to the running backend, assuming it's running long
enough to grab it, then look at what the system call breakdown is.
Basically, run one of your long insert streams, do a "top" to find which
process
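The two steps described above, spotting the busy backend with top and then attaching strace with -c for a per-syscall time summary, can be sketched as follows. The PID is a placeholder, and the commands are composed but not executed here:

```python
# Sketch of the diagnosis described above: find the busy postgres backend,
# then attach strace with -c to summarize where its system-call time goes.
# The PID below is a placeholder, not a real process.
import shlex

def strace_attach_cmd(pid):
    """Build the argv for attaching strace to a backend, counting syscalls."""
    # -c : tally time, calls, and errors per system call, printed on detach
    # -p : attach to an already-running process instead of launching one
    return shlex.split("strace -c -p %d" % pid)

argv = strace_attach_cmd(12345)  # 12345 = the PID spotted in top
print(argv)
# In practice you would run this (as root) while the insert stream is
# going, e.g. subprocess.run(argv), then Ctrl-C to detach and read the
# per-syscall summary table strace prints.
```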