involves file I/O) improve the
above scenario?
Thanks.
- Original Message
From: James Mansion <[EMAIL PROTECTED]>
To: andrew klassen <[EMAIL PROTECTED]>
Cc: pgsql-performance@postgresql.org
Sent: Wednesday, June 4, 2008 3:20:26 PM
Subject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rows
From: <[EMAIL PROTECTED]>
To: pgsql-performance@postgresql.org
Sent: Wednesday, June 4, 2008 10:10:38 AM
Subject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rows
On Wed, 4 Jun 2008, andrew klassen wrote:
> I am using multiple threads, but only one worker thread for insert/updates.
> Since the CPU usage by the corresponding postgres server process for my
> thread is small, it does not seem to be the bottleneck. There has to be a
> bottleneck somewhere else.
> Do you agree or is there some flaw in my reasoning?
- Original Message
From: Matthew Wakeling <[EMAIL PROTECTED]>
To: andrew klassen <[EMAIL PROTECTED]>
length of two
text fields. There are 5 total indices: 1 8-byte, 2 4-byte and 2 text fields.
As mentioned, all indices are btree.
- Original Message
From: PFC <[EMAIL PROTECTED]>
To: andrew klassen <[EMAIL PROTECTED]>; pgsql-performance@postgresql.org
Sent: Tuesday, June
Running postgres 8.2.5
I have a table that has 5 indices and no foreign keys or dependencies on
any other table. If I delete the database and start entering entries,
everything works very well until I get to some point (let's say 1M rows).
Basically, I have a somewhat constant rate of inserts/updates
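[Editor's note: the slowdown pattern described above, steady throughput that collapses once the table and its indexes grow large, is often dominated by per-row commit and index-maintenance cost. A minimal sketch of the batching idea that comes up in this kind of thread, using Python's built-in sqlite3 as a stand-in since Postgres 8.2 is not available here; the schema and batch size are illustrative assumptions, not from the thread:]

```python
import sqlite3

def bulk_insert(rows, batch_size=1000):
    """Insert rows in batches, committing once per batch instead of per row.

    Fewer commits means fewer synchronous flushes. The same idea applies to
    Postgres: wrap many INSERTs in one transaction, or use COPY for loads.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (id INTEGER, payload TEXT)")
    # Every index is maintained on every insert; with five of them, per-row
    # cost grows as the table (and each index's depth) grows.
    conn.execute("CREATE INDEX t_id ON t (id)")
    conn.execute("CREATE INDEX t_payload ON t (payload)")
    for start in range(0, len(rows), batch_size):
        conn.executemany("INSERT INTO t VALUES (?, ?)",
                         rows[start:start + batch_size])
        conn.commit()  # one commit per batch, not one per row
    return conn

rows = [(i, f"row-{i}") for i in range(5000)]
conn = bulk_insert(rows)
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 5000
```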
I am using Postgres 8.2.5.
I have a table that has rows containing a variable length array with a known
maximum.
I was doing selects on the array elements using an ANY match. The performance
was not too good as my table got bigger. So I added an index on the array.
That didn't help since the
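[Editor's note: on the array question, one standard alternative, not necessarily what the poster settled on, is to normalize the array into a child table so that element-membership lookups can use an ordinary btree index instead of scanning arrays with ANY. Sketched here with Python's sqlite3 rather than Postgres; table and column names are hypothetical:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY)")
# One row per (item, element) instead of an array column; the index below
# then serves membership lookups directly.
conn.execute("CREATE TABLE item_elems (item_id INTEGER, elem INTEGER)")
conn.execute("CREATE INDEX item_elems_elem ON item_elems (elem)")

conn.execute("INSERT INTO items VALUES (1)")
conn.executemany("INSERT INTO item_elems VALUES (?, ?)",
                 [(1, 10), (1, 20), (1, 30)])

# Equivalent of "WHERE 20 = ANY(arr)" under the array design:
hits = conn.execute(
    "SELECT DISTINCT item_id FROM item_elems WHERE elem = ?", (20,)
).fetchall()
print(hits)  # [(1,)]
```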