On Wed, 27 Aug 2003, Bruno Wolff III wrote:
Did you check the error status for the records that weren't entered?
My first guess is that you have some bad data you are trying to insert.
Of course I checked the error status for every insert; there is
no error. It seems like in my
Hi,
I have a (big) problem with PostgreSQL when making lots of
inserts per second. I have a tool that generates an output of ~2500
lines per second. I wrote a script in Perl that opens a pipe to that
tool, reads every line, and inserts the data.
I tried both committed
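The original Perl script is not shown, but the usual cure for per-row insert overhead is to batch many rows into a single transaction instead of committing each one. A minimal sketch of the batching idea in Python (the `events` table, column name, and batch size are illustrative assumptions, not from the post; the psycopg2 part is shown commented out since it needs a live database):

```python
import itertools

def batches(iterable, size):
    """Yield successive lists of up to `size` items from any iterable."""
    it = iter(iterable)
    while True:
        chunk = list(itertools.islice(it, size))
        if not chunk:
            return
        yield chunk

# Hypothetical usage with psycopg2, committing once per 1000 rows
# rather than once per row:
#
# import psycopg2
# conn = psycopg2.connect("dbname=test")
# cur = conn.cursor()
# for chunk in batches(tool_output_lines, 1000):
#     cur.executemany("INSERT INTO events (line) VALUES (%s)",
#                     [(line,) for line in chunk])
#     conn.commit()
```

Grouping commits this way (or using COPY for pure bulk loads) avoids paying the per-transaction fsync cost on every one of the ~2500 lines per second.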
On Sun, 26 Oct 2003, Dror Matalon wrote:
Here's the structure of the items table
[snip]
pubdate | timestamp with time zone |
Indexes:
item_channel_link btree (channel, link)
item_created btree (dtstamp)
item_signature btree (signature)
items_channel_article btree
On Sat, 13 Dec 2003, Kari Lavikka wrote:
I evaluated pg 7.4 on our development server and it looked just fine
but performance with production loads seems to be quite poor. Most of
the performance problems are caused by nonsensical query plans, but there's
also some strange slowness that I can't
On Sun, 2 Jul 2006, Gene wrote:
can use an index and perform as fast as searching with like '2345%'?
Is the only way to create a reverse function, create an index using
the reverse function, and modify the queries to use:
where reverse(column) like reverse('%2345') ?
Hmm..
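The reply above is cut off, but the trick in the question rests on a simple identity: a suffix match on `column` is exactly a prefix match on `reverse(column)`, and a prefix pattern is what lets a btree index (here, an expression index on the reversed column) be used. A quick sanity check of that identity in plain Python, no database required (the function names are mine):

```python
def suffix_match(value, suffix):
    # Equivalent of: column LIKE '%2345'
    return value.endswith(suffix)

def reversed_prefix_match(value, suffix):
    # Equivalent of: reverse(column) LIKE reverse('%2345'),
    # i.e. a prefix pattern like '5432%'
    return value[::-1].startswith(suffix[::-1])

# The two predicates agree on matches and non-matches alike.
for v in ["abc2345", "2345", "abc234", "x2345y"]:
    assert suffix_match(v, "2345") == reversed_prefix_match(v, "2345")
```

Since the reversed pattern is anchored at the start, the planner can use an index range scan on the expression index, just as it would for `like '2345%'` on the plain column.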
Hi guys,
I'm looking for a database+hardware solution that should be able
to handle up to 500 requests per second. The requests will consist of:
- single row updates in indexed tables (the WHERE clauses will use
the index(es), the updated column(s) will not be indexed);
-
On Mon, 14 May 2007, Richard Huxton wrote:
1. Is this one client making 500 requests, or 500 clients making one request
per second?
Up to 250 clients will make up to 500 requests per second.
2. Do you expect the indexes at least to fit in RAM?
not entirely... or not all of