On Apr 1, 2005 1:06 PM, Marc G. Fournier <[EMAIL PROTECTED]> wrote:
> Just curious, but does anyone have an idea of what we are capable of? I
> realize that size of record would affect things, as well as hardware, but
> if anyone has some ideas on max, with 'record size', that would be
> appreciated ...
On an AMD64/3000 with 1 GB RAM and 2 SATA drives (1 for log, 1 for data),
inserting in batches of 500-1000 rows, and also using the COPY
syntax, I have seen an interesting thing. There are 5 indexes
involved here, BTW. This is Linux 2.6 running on an XFS file system
(ext3 was even worse for me).
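To illustrate the batching approach mentioned above, here is a minimal
Python sketch that groups rows into multi-row INSERT statements of 500-1000
rows each. The table and column names ("events", "id", "name") are made up
for illustration; the original schema isn't shown, and real code would use
parameterized queries or COPY rather than naive string quoting.

```python
# Sketch: group rows into chunks and build one multi-row INSERT per chunk.
# Hypothetical table/columns; repr()-based quoting is for illustration only.

def batch(rows, size=500):
    """Yield successive chunks of `rows`, each at most `size` long."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def build_insert(table, columns, chunk):
    """Build one multi-row INSERT statement for a chunk of rows."""
    cols = ", ".join(columns)
    values = ", ".join(
        "(" + ", ".join(repr(v) for v in row) + ")" for row in chunk
    )
    return f"INSERT INTO {table} ({cols}) VALUES {values};"

rows = [(i, f"name{i}") for i in range(1200)]
stmts = [build_insert("events", ("id", "name"), c) for c in batch(rows, 500)]
# 1200 rows in batches of 500 -> 3 statements (500 + 500 + 200 rows)
```

For bulk loads, Postgres's COPY is faster still than multi-row INSERTs,
since it bypasses per-statement parse/plan overhead.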
I can start at about 4,000 rows/second, but at about 1M rows, it
plummets, and by 4M it's taking 6-15 seconds to insert 1000 rows.
That's roughly 65-165 rows/second, which is quite pathetic. The
problem seems to be related to my indexes, since I have to keep them
online (the system is continually querying as well).
This was an application originally written for MySQL/MyISAM, and it's
looking like PostgreSQL can't hold up for it, simply because it's "too
much database", if that makes sense. The same box, running the MySQL
implementation (which uses no transactions), runs at around 800-1000
rows/second.
Just a point of reference. I'm trying to collect some data so that I
can provide some charts of the degradation, hoping to pinpoint where
throughput collapses and thereby where it needs attention.
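The kind of instrumentation described above can be sketched as a small
Python helper that records rows/second per batch against total rows
inserted, ready for charting. `do_insert` here is a stand-in for the real
batch-insert call, which isn't shown in the original.

```python
# Sketch: time each batch insert and record (total_rows, rows_per_second)
# so throughput degradation can be charted. `do_insert` is a placeholder.
import time

def measure(batches, do_insert):
    """Return one (total_rows_so_far, rows_per_second) sample per batch."""
    samples = []
    total = 0
    for chunk in batches:
        start = time.perf_counter()
        do_insert(chunk)
        elapsed = time.perf_counter() - start
        total += len(chunk)
        rate = len(chunk) / elapsed if elapsed > 0 else float("inf")
        samples.append((total, rate))
    return samples

# Example with a dummy insert that just sleeps briefly:
samples = measure([[0] * 100, [0] * 100], lambda chunk: time.sleep(0.01))
```

Plotting rows/second against total rows should make the knee in the curve
obvious once the indexes stop fitting in cache.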
| Christopher Petrilli
| [EMAIL PROTECTED]