Tom Lane wrote:
> Sezai YILMAZ <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> The slowdown you report probably is due to the rewrite of hash indexing
> >> to allow more concurrency --- the locking algorithm is more complex than
> >> it used to be. I am surprised that the effect is so large though.
Sezai YILMAZ <[EMAIL PROTECTED]> writes:
> I changed the three hash indexes to btree.
> The performance increased about 2 times (1905 rows/s in PostgreSQL 7.3.4).
> Concurrent inserts now work.
Concurrent inserts should work with hash indexes in 7.4, though not 7.3.
The slowdown you report probably is due to the rewrite of hash indexing
to allow more concurrency --- the locking algorithm is more complex than
it used to be. I am surprised that the effect is so large though.
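Converting such hash indexes to btree is mechanical; a minimal sketch, using the index and table names quoted elsewhere in this thread (btree is the default access method, so "using btree" could also be omitted; the drops come first because an index name cannot be reused while the old index still exists):

drop index agentid_ndx;
drop index ownerid_ndx;
drop index hostid_ndx;
create index agentid_ndx on logs using btree (agentid);
create index ownerid_ndx on logs using btree (ownerid);
create index hostid_ndx on logs using btree (hostid);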
On Saturday 28 February 2004 21:27, Tom Lane wrote:
> Shridhar Daithankar <[EMAIL PROTECTED]> writes:
> > Everything default except for shared_buffers=100 and
> > effective_cache_size=25000,
>
> 100?
1000.. That was a typo..
Shridhar
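For reference, the corrected postgresql.conf lines would presumably read as follows (in 7.x both settings are counted in 8 kB buffers; effective_cache_size is only a planner hint and allocates nothing):

shared_buffers = 1000            # about 8 MB of shared buffer cache
effective_cache_size = 25000     # about 200 MB assumed cached by the OS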
> > create index agentid_ndx on logs using hash (agentid);
> > create index ownerid_ndx on logs using hash (ownerid);
> > create index hostid_ndx on logs using hash (hostid);
> > What about concurrent inserts (several copies of the test program running
> > at once) into the same table? It did not work.
Hash indexes did not support concurrent inserts before 7.4.
Sezai YILMAZ wrote:
create index agentid_ndx on logs using hash (agentid);
create index ownerid_ndx on logs using hash (ownerid);
create index hostid_ndx on logs using hash (hostid);
Try btree instead --- the 7.4 hash index rework trades some insert speed for
concurrency, and btree should serve these columns just as well.
I don't know the answer to the question of why 7.4 is slower, but I have
some suggestions on additional things to test, and how to make it faster.
First off, try 200 transactions of 1000 records each; you might even want
to try 20 transactions of 10,000 records each. Postgres seems to run much
faster the fewer transactions you spread the work across.
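The shape of that change, in SQL terms (a sketch; the column list is taken from the indexes quoted earlier in the thread, and the values are placeholders):

begin;
insert into logs (agentid, ownerid, hostid) values (1, 2, 3);
-- ... 999 more inserts ...
commit;

Each commit forces a WAL flush to disk, so paying that cost once per 1000 rows instead of once per 100 can make a real difference.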
Sezai YILMAZ wrote:
Test Hardware:
IBM Thinkpad R40
CPU: Pentium 4 Mobile, 1993 MHz (running at full clock speed)
RAM: 512 MB
OS: GNU/Linux, Fedora Core 1, kernel 2.4.24
A test program developed with libpq inserts 200,000 rows into the table
logs. Insertions are made at 100 rows per transaction (2,000 transactions
in total).
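A minimal sketch of that kind of libpq driver (the connection string, the column list, and the generated values here are assumptions; the logs table presumably has more columns than the three indexed ones, and real code would check every statement's result):

#include <stdio.h>
#include <libpq-fe.h>

#define TOTAL_ROWS  200000
#define ROWS_PER_TX 100     /* 2,000 transactions; raise to 1000+ to test batching */

int main(void)
{
    /* assumed connection string */
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    char sql[256];
    for (int i = 0; i < TOTAL_ROWS; i++) {
        if (i % ROWS_PER_TX == 0)
            PQclear(PQexec(conn, "BEGIN"));

        /* hypothetical values; only the three indexed columns are filled in */
        snprintf(sql, sizeof(sql),
                 "INSERT INTO logs (agentid, ownerid, hostid) "
                 "VALUES (%d, %d, %d)", i % 50, i % 20, i % 10);

        PGresult *res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);

        if (i % ROWS_PER_TX == ROWS_PER_TX - 1)
            PQclear(PQexec(conn, "COMMIT"));
    }

    PQfinish(conn);
    return 0;
}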