Date: Mon, 14 Mar 2005 09:41:30 +0800
From: Qingqing Zhou [EMAIL PROTECTED]
To: pgsql-performance@postgresql.org
Subject: Re: One tuple per transaction
Message-ID: [EMAIL PROTECTED]
Tambet Matiisen [EMAIL PROTECTED] writes
...
If I'm correct, the dead
Tambet Matiisen wrote:
Not exactly. The dead tuple in the index will be scanned the
first time (and the heap tuple it points to as well), then we
will mark it dead; the next time we come here, we will know
that the index tuple actually points to a useless tuple, so we
will not scan its pointed
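The behaviour described above (visit the heap once, then mark the index entry so later scans can skip the heap fetch) can be sketched as a toy model. All class and field names here are illustrative, not PostgreSQL's actual structures; the real mechanism is a hint flag on the index item.

```python
# Toy model of the "killed index tuple" optimization described above:
# the first index scan that finds the heap tuple dead marks the index
# entry, so later scans skip the heap visit entirely.
# All names are illustrative, not PostgreSQL internals.

class IndexEntry:
    def __init__(self, heap_tid):
        self.heap_tid = heap_tid
        self.known_dead = False   # analogous to the LP_DEAD hint flag

heap_visits = 0

def heap_tuple_is_visible(tid, heap):
    global heap_visits
    heap_visits += 1              # count how often we touch the heap
    return heap[tid]["visible"]

def index_scan(entry, heap):
    if entry.known_dead:
        return None               # skip the heap fetch entirely
    if not heap_tuple_is_visible(entry.heap_tid, heap):
        entry.known_dead = True   # remember the verdict for next time
        return None
    return heap[entry.heap_tid]

heap = {0: {"visible": False}}
e = IndexEntry(0)
index_scan(e, heap)               # first scan: visits heap, marks entry
index_scan(e, heap)               # second scan: no heap visit
print(heap_visits)                # -> 1
```

The point of the sketch: the dead entry costs one heap visit in total, not one per scan.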
-----Original Message-----
From: Richard Huxton [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 15, 2005 11:38 AM
To: Tambet Matiisen
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] One tuple per transaction
...
Consider the often suggested solution for speeding up
Josh Berkus wrote:
I don't agree. The defaults are there for people who aren't going to read
enough of the documentation to set them. As such, conservative for the
defaults is appropriate.
Sure, but I would argue that 4 is *too* conservative.
On Mar 15, 2005, at 6:35 AM, Greg Sabino Mullane wrote:
Granted, I don't work on any huge, complex, hundreds-of-gigs
databases, but that supports my point - if you are really
better off with a /higher/ (than 3) random_page_cost, you
already should be tweaking a lot of stuff yourself anyway.
I
Hello,
just recently I held a short course on PG.
One course attendant, Robert Dollinger, got
interested in benchmarking single inserts (since
he currently maintains an application that does
exactly that on Firebird and speed is an issue
there).
He came up with a table that I think is
One thing that stands out is how terribly bad Windows
performed with many small single transactions and fsync=true.
Apparently fsync on Windows is a very costly operation.
What's the hardware? If you're running on disks with the write cache
enabled, fsync on Windows will write through the
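The per-commit fsync cost being discussed can be measured directly with a short loop of small write+fsync cycles, which is roughly what a database does for each committed transaction with fsync enabled. This is a minimal sketch, not a rigorous benchmark; note that a lying write cache will make the numbers look unrealistically good.

```python
# Minimal sketch: measure the per-commit cost of fsync by timing a loop
# of small write+fsync cycles, similar in spirit to a database flushing
# a WAL record for each committed transaction.
import os
import tempfile
import time

def time_fsyncs(n=100):
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(n):
            os.write(fd, b"x" * 64)   # tiny stand-in for a WAL record
            os.fsync(fd)              # force it to stable storage
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)
    return elapsed / n                # seconds per write+fsync

print(f"{time_fsyncs() * 1000:.3f} ms per fsync")
```

On a disk that honours flushes, each iteration costs at least one platter rotation, which is why many small single transactions with fsync=true are so slow.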
Chris Mair wrote:
Timings are in msec; note that you cannot directly
compare Windows and Linux performance, since the machines
were different.
You can, however, compare PG to Firebird, and you
can see the effect of the three varied parameters.
One thing that stands out is how terribly bad
Hi all,
I suspect this problem/bug has been dealt with already, but I couldn't
find anything in the mail archives.
I'm using Postgres 7.3, and I managed to recreate the problem using the
attached files.
The database structure is in slow_structure.sql
After creating the database, using this
Greg Sabino Mullane [EMAIL PROTECTED] writes:
N.B. My own personal starting default is 2, but I thought 3 was a nice
middle ground more likely to reach consensus here. :)
Your argument seems to be "this produces nice results for me", not
"I have done experiments to measure the actual value of the
I have asked him for the data and played with his queries, and obtained
massive speedups with the following queries :
http://boutiquenumerique.com/pf/miroslav/query.sql
http://boutiquenumerique.com/pf/miroslav/query2.sql
http://boutiquenumerique.com/pf/miroslav/materialize.sql
Note that my
On my machine (a laptop with a Pentium-M 1.6 GHz and 512 MB DDR333) I get the
following timings:
Big joins query with all the fields and no ORDER BY (I just put a SELECT
* in the first table), yielding about 6k rows:
= 12136.338 ms
Replacing the SELECT * from the table with many fields by
Hi all,
I get this strange problem when deleting rows from a Java program.
Sometimes (from what I've noticed, it's not all the time) the server takes
almost forever to delete rows from a table.
Here it takes 20 minutes to delete the IC table.
Java logs:
INFO [Thread-386] (Dao.java:227) 2005-03-15
On Tuesday 15 March 2005 04:37, Richard Huxton wrote:
Tambet Matiisen wrote:
Now, if typical inserts into your most active table occur in batches of
3 rows in one transaction, then the row count for this table is updated 3
times during the transaction. 3 updates generate 3 tuples, while 2 of them
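The arithmetic in this point can be made concrete with a toy calculation (not PostgreSQL code): every UPDATE of the summary row writes a new row version, and all versions except the last one written in the transaction are dead the moment it commits.

```python
# Sketch of the bloat arithmetic described above: one count-update per
# inserted row creates one new version of the summary row, and only the
# final version survives the transaction.

def dead_versions_per_batch(rows_per_batch, updates_per_row=1):
    total_versions = rows_per_batch * updates_per_row
    return total_versions - 1     # every version but the last is dead

print(dead_versions_per_batch(3))    # batch of 3 inserts -> 2 dead versions
```

So a trigger-maintained row count turns every N-row batch into N-1 dead tuples in the summary table, which is what makes frequent vacuuming necessary.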
Tom Lane wrote:
Greg Sabino Mullane [EMAIL PROTECTED] writes:
N.B. My own personal starting default is 2, but I thought 3 was a nice
middle ground more likely to reach consensus here. :)
Your argument seems to be "this produces nice results for me", not
"I have done experiments to measure the actual
On Tue, Mar 15, 2005 at 04:24:17PM -0500, David Gagnon wrote:
I get this strange problem when deleting rows from a Java program.
Sometimes (from what I've noticed, it's not all the time) the server takes
almost forever to delete rows from a table.
Do other tables have foreign key references to
Gregory Stark wrote:
The "this day and age" argument isn't very convincing. Hard drive capacity
growth has far outstripped hard drive seek time and bandwidth improvements.
Random access has more penalty than ever.
In point of fact, there haven't been noticeable seek time improvements
for years.
Robert Treat [EMAIL PROTECTED] writes:
On a similar note I was just wondering if it would be possible to
mark any of these dead tuples as ready to be reused at transaction
commit time, since we know that they are dead to any and all other
transactions currently going on.
I believe VACUUM
On Tue, Mar 15, 2005 at 06:51:19PM -0500, Tom Lane wrote:
Robert Treat [EMAIL PROTECTED] writes:
On a similar note I was just wondering if it would be possible to
mark any of these dead tuples as ready to be reused at transaction
commit time, since we know that they are dead to any and all
Your argument seems to be "this produces nice results for me", not
"I have done experiments to measure the actual value of the parameter
and it is X". I *have* done experiments of that sort, which is where
the default of 4 came from. I remain of
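The kind of experiment being referred to, measuring the actual random-vs-sequential read penalty, can be sketched as below. This is only the shape of such a test, under stated assumptions: a rigorous measurement needs a file much larger than RAM and cold caches, otherwise everything is served from memory and the ratio comes out near 1.

```python
# Crude sketch: estimate the random_page_cost ratio as the time for
# random 8 kB page reads divided by the time for sequential reads of
# the same pages.  A tiny demo file like this will be cached, so the
# printed ratio is illustrative only; real tests need multi-GB files.
import os
import random
import tempfile
import time

PAGE = 8192
NPAGES = 512                          # demo size; real tests need GBs

fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(PAGE * NPAGES))
os.fsync(fd)

def read_pages(order):
    start = time.perf_counter()
    for p in order:
        os.lseek(fd, p * PAGE, os.SEEK_SET)
        os.read(fd, PAGE)
    return time.perf_counter() - start

seq = read_pages(range(NPAGES))                       # sequential pass
rnd = read_pages(random.sample(range(NPAGES), NPAGES))  # random pass
os.close(fd)
os.unlink(path)

print(f"random/sequential ratio: {rnd / max(seq, 1e-9):.2f}")
```

The measured ratio, taken on a cold cache and a realistically large file, is what the random_page_cost setting is supposed to model.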
Alvaro Herrera [EMAIL PROTECTED] writes:
On Tue, Mar 15, 2005 at 06:51:19PM -0500, Tom Lane wrote:
I believe VACUUM already knows that xmin = xmax implies the tuple
is dead to everyone.
Huh, that is too simplistic in a world with subtransactions, isn't it?
Well, it's still correct as a fast-path
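The check being discussed can be sketched as a toy model (all names are illustrative; the real logic lives in PostgreSQL's vacuum and transaction code). The subtransaction caveat is that xmin and xmax can be different subtransaction ids belonging to the same top-level transaction, so a correct test has to map both back to the top-level id before comparing.

```python
# Toy sketch of the vacuum fast path discussed above: a tuple inserted
# and deleted by the same transaction is dead to every other transaction.
# With subtransactions, xmin and xmax may be *different* subtransaction
# ids of the same top-level transaction, so the naive equality check
# misses some dead tuples.  All names are illustrative.

def dead_to_everyone(xmin, xmax, toplevel_of=None):
    if xmax is None:
        return False                        # tuple was never deleted
    if toplevel_of is None:                 # naive, pre-subtransaction check
        return xmin == xmax
    # subtransaction-aware: compare top-level transaction ids instead
    return toplevel_of.get(xmin, xmin) == toplevel_of.get(xmax, xmax)

# Same txid inserts and deletes the tuple: dead to everyone.
print(dead_to_everyone(100, 100))              # -> True
# Two subtransactions (101, 102) of top-level txid 100:
subxact_map = {101: 100, 102: 100}
print(dead_to_everyone(101, 102))              # -> False (naive check misses it)
print(dead_to_everyone(101, 102, subxact_map)) # -> True
```

This matches the exchange above: xmin == xmax remains correct as a fast path, it just no longer catches every same-transaction death once subtransactions exist.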
David Brown [EMAIL PROTECTED] writes:
Gregory Stark wrote:
The "this day and age" argument isn't very convincing. Hard drive capacity
growth has far outstripped hard drive seek time and bandwidth improvements.
Random access has more penalty than ever.
In point of fact, there haven't been