On Wed, 24 Apr 2002, Tom Lane wrote:
Curt Sampson [EMAIL PROTECTED] writes:
Grabbing bigger chunks is always optimal, AFAICT, if they're not
*too* big and you use the data. A single 64K read takes very little
longer than a single 8K read.
Proof?
Well, there are various sorts of proof
On Thu, 25 Apr 2002, Bruce Momjian wrote:
Well, we are guilty of trying to push as much as possible on to other
software. We do this for portability reasons, and because we think our
time is best spent dealing with db issues, not issues that can be dealt
with by other existing software, as
At 12:19 PM 4/25/02 +0900, Curt Sampson wrote:
Grabbing bigger chunks is always optimal, AFAICT, if they're not
*too* big and you use the data. A single 64K read takes very little
longer than a single 8K read.
Yes, I agree that read-ahead helps when sequential scans are being done.
And often doesn't
On Thu, 25 Apr 2002, Curt Sampson wrote:
Here's the ratio table again, with another column comparing the
aggregate number of requests per second for one process and four
processes:
Just for interest, I ran this again with 20 processes working
simultaneously. I did six runs at each blockread
On Thu, 25 Apr 2002, Lincoln Yeoh wrote:
I think the raw partitions will be more trouble than they are worth.
Reading larger chunks at appropriate circumstances seems to be the low
hanging fruit.
That's certainly a good start. I don't know if the raw partitions
would be more trouble than
Curt Sampson [EMAIL PROTECTED] writes:
1. Theoretical proof: two components of the delay in retrieving a
block from disk are the disk arm movement and the wait for the
right block to rotate under the head.
When retrieving, say, eight adjacent blocks, these will be spread
across no more than
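The arithmetic behind this argument can be illustrated with assumed drive numbers (an ~8 ms average seek, a 7200 RPM spindle, ~0.2 ms transfer per 8 KB block; none of these figures come from the thread), since the one-time positioning cost is paid regardless of how many adjacent blocks are then transferred:

```ruby
SEEK_MS    = 8.0                    # assumed average seek time
ROT_MS     = 60_000.0 / 7200 / 2    # avg rotational latency at 7200 RPM, ~4.2 ms
XFER_MS_8K = 0.2                    # assumed transfer time per 8 KB block

# Cost of one request for n adjacent 8 KB blocks:
# one seek, one rotational wait, then n back-to-back transfers.
def read_cost_ms(nblocks)
  SEEK_MS + ROT_MS + nblocks * XFER_MS_8K
end
```

With these numbers, reading eight adjacent blocks costs only about 11% more than reading one, for eight times the data.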
Curt Sampson wrote:
3. Proof by testing. I wrote a little ruby program to seek to a
random point in the first 2 GB of my raw disk partition and read
1-8 8K blocks of data. (This was done as one I/O request.) (Using
the raw disk partition I avoid any filesystem buffering.) Here are
typical
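The benchmark described above can be sketched roughly as follows (this is not Curt's original script; the alignment, loop structure, and the requirement that the target be a raw partition or large file are assumptions):

```ruby
BLOCK = 8 * 1024  # 8 KB, matching the blocks described in the test

# Seek to a random 8 KB-aligned offset within the first `span` bytes
# and read `nblocks` blocks as a single read(2) call.  Assumes the
# target is at least nblocks * BLOCK bytes long.
def read_random_chunk(io, nblocks, span)
  max_start = (span - nblocks * BLOCK) / BLOCK
  offset = rand(max_start + 1) * BLOCK
  io.sysseek(offset, IO::SEEK_SET)
  io.sysread(nblocks * BLOCK)
end

# Time `count` random reads of `nblocks` blocks each, over the first
# 2 GB of `path` (a raw partition avoids filesystem buffering), and
# return the achieved requests per second.
def requests_per_sec(path, nblocks, count)
  File.open(path, 'rb') do |io|
    span = [File.size(path), 2 * 1024**3].min
    start = Time.now
    count.times { read_random_chunk(io, nblocks, span) }
    count / (Time.now - start)
  end
end
```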
On Thu, 2002-04-25 at 12:47, Curt Sampson wrote:
On Thu, 25 Apr 2002, Lincoln Yeoh wrote:
I think the raw partitions will be more trouble than they are worth.
Reading larger chunks at appropriate circumstances seems to be the low
hanging fruit.
That's certainly a good start. I don't
Tom Lane wrote:
...
Curt Sampson [EMAIL PROTECTED] writes:
3. Proof by testing. I wrote a little ruby program to seek to a
random point in the first 2 GB of my raw disk partition and read
1-8 8K blocks of data. (This was done as one I/O request.) (Using
the raw disk partition I avoid
Nice test. Would you test simultaneous 'dd' on the same file, perhaps
with a slight delay between the two so they don't read each other's
blocks?
seek() in the file will turn off read-ahead in most OSes. I am not
saying this is a major issue for PostgreSQL but the numbers would be
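The two-simultaneous-readers experiment Bruce suggests could be sketched like this (dd(1) from two shells would do the same job; the file path, delay, and buffer size here are arbitrary choices):

```ruby
# Read `path` sequentially from start to end in 8 KB chunks,
# optionally starting after a delay; returns total bytes read.
def sequential_read(path, delay: 0.0, bufsize: 8 * 1024)
  sleep delay
  bytes = 0
  File.open(path, 'rb') do |io|
    while (chunk = io.read(bufsize))
      bytes += chunk.bytesize
    end
  end
  bytes
end

# Fork two readers on the same file, the second starting 0.5 s behind
# the first, so they walk the file at offset positions and compete for
# any kernel read-ahead.
def race_two_readers(path)
  pids = [0.0, 0.5].map do |delay|
    Process.fork { sequential_read(path, delay: delay) }
  end
  pids.each { |pid| Process.wait(pid) }
end
```

Comparing the wall-clock time of one reader against two concurrent ones would show whether the second stream defeats the OS's sequential read-ahead detection.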
On Thu, 25 Apr 2002, Tom Lane wrote:
Curt Sampson [EMAIL PROTECTED] writes:
1. Theoretical proof: two components of the delay in retrieving a
block from disk are the disk arm movement and the wait for the
right block to rotate under the head.
When retrieving, say, eight adjacent
Curt Sampson wrote:
At 12:41 PM 4/23/02 -0400, Bruce Momjian wrote:
This is an interesting point, that an index scan may fit in the cache
while a sequential scan may not.
If so, I would expect that the number of pages read is significantly
smaller than it was with a sequential scan. If
On Wed, 24 Apr 2002, Bruce Momjian wrote:
We expect the file system to do read-aheads during a sequential scan.
This will not happen if someone else is also reading buffers from that
table in another place.
Right. The essential difficulties are, as I see it:
1. Not all systems do
Curt Sampson wrote:
On Wed, 24 Apr 2002, Bruce Momjian wrote:
We expect the file system to do read-aheads during a sequential scan.
This will not happen if someone else is also reading buffers from that
table in another place.
Right. The essential difficulties are, as I see it:
On Wed, 24 Apr 2002, Bruce Momjian wrote:
1. Not all systems do readahead.
If they don't, that isn't our problem. We expect it to be there, and if
it isn't, the vendor/kernel is at fault.
It is your problem when another database kicks Postgres' ass
performance-wise.
And at that
Curt Sampson [EMAIL PROTECTED] writes:
Grabbing bigger chunks is always optimal, AFAICT, if they're not
*too* big and you use the data. A single 64K read takes very little
longer than a single 8K read.
Proof?
regards, tom lane
Curt Sampson [EMAIL PROTECTED] writes:
Grabbing bigger chunks is always optimal, AFAICT, if they're not
*too* big and you use the data. A single 64K read takes very little
longer than a single 8K read.
Proof?
Long time ago I tested with the 32k block size and got 1.5-2x speed up
Well, this is a very interesting email. Let me comment on some points.
Curt Sampson wrote:
On Wed, 24 Apr 2002, Bruce Momjian wrote:
1. Not all systems do readahead.
If they don't, that isn't our
Tom Lane wrote:
Curt Sampson [EMAIL PROTECTED] writes:
Grabbing bigger chunks is always optimal, AFAICT, if they're not
*too* big and you use the data. A single 64K read takes very little
longer than a single 8K read.
Proof?
I contest this statement.
It's optimal to a point. I know that