I have a (big) problem with PostgreSQL when making lots of
inserts per second. I have a tool that generates an output of ~2500
lines per second. I wrote a script in Perl that opens a pipe to that
tool, reads every line, and inserts the data.
I tried both variants, committing every insert and batching the
commits (the batched inserts were committed every 60 seconds), and the
problem persists.
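For reference, the insert loop is roughly along these lines (a minimal
sketch only, assuming DBI with DBD::Pg; the tool name, table and column
are placeholders):

    use DBI;

    my $dbh = DBI->connect("dbi:Pg:dbname=test", "user", "pass",
                           { RaiseError => 1, AutoCommit => 0 });
    my $sth = $dbh->prepare("INSERT INTO log_lines (line) VALUES (?)");

    open(my $pipe, "the_tool |") or die "cannot start tool: $!";
    my $last_commit = time();
    while (my $line = <$pipe>) {
        chomp $line;
        $sth->execute($line);
        # commit the batched inserts every 60 seconds
        if (time() - $last_commit >= 60) {
            $dbh->commit;
            $last_commit = time();
        }
    }
    $dbh->commit;
    close($pipe);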
The problem is that only ~15% of the lines are inserted into
the database. The same script, modified to insert the same data into a
similar table in a MySQL database, inserts 100% of them.
I also dropped the indexes on various columns, just to make sure
that the overhead is not too big (but I also need those indexes because
I'll be doing lots of SELECTs on that table).
I tried both variants: connecting to a host over TCP and connecting
locally through the PostgreSQL server's socket (/tmp/s.PGSQL.5432).
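(In DBI terms, the two connect calls look roughly like this; the host
and database names are placeholders:)

    use DBI;

    # connecting to a host over TCP:
    my $dbh_tcp   = DBI->connect("dbi:Pg:dbname=test;host=dbhost",
                                 "user", "pass");
    # connecting locally through the server's Unix socket (no host given):
    my $dbh_local = DBI->connect("dbi:Pg:dbname=test", "user", "pass");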
Where could the problem be?
I'm using PostgreSQL 7.4 devel snapshots 20030628 and 20030531.
Some of the settings are:
shared_buffers = 520
max_locks_per_transaction = 128
wal_buffers = 8
max_fsm_relations = 30000
max_fsm_pages = 482000
sort_mem = 131072
vacuum_mem = 131072
effective_cache_size = 10000
random_page_cost = 2