Don Baccus wrote:
...
I expect TOAST to work even better). Users will still be able to change to
larger block sizes (perhaps a wise thing to do if a large percentage of their
data won't fit into a single PG block). Users using the default will
be able to store rows of *awesome* length.
The cost difference between 32K and 8K disk reads/writes is so small
these days, compared with the overall cost of the disk operation itself,
that you can barely even measure it: well below 1%. Remember, seek times
advertised on disks are an average.
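A quick back-of-the-envelope sketch of that claim. The latency and transfer figures below are illustrative assumptions, not measurements from any particular drive; the point is only that the extra transfer time for 24K of additional data is tiny next to the fixed cost of positioning the head.

```python
# Back-of-the-envelope: extra transfer time for a 32K read vs. an 8K read,
# compared with the fixed cost of seeking and rotational latency.
# All latency figures are assumed for illustration, not measured.
avg_seek_s = 0.009            # assumed average seek time: 9 ms
rotational_latency_s = 0.004  # assumed half-rotation wait: ~4 ms
transfer_rate_bps = 160e6     # assumed sustained transfer rate: 160 MB/s

fixed_cost = avg_seek_s + rotational_latency_s
extra_bytes = 32 * 1024 - 8 * 1024      # the additional 24K
extra_transfer = extra_bytes / transfer_rate_bps

print(f"fixed positioning cost: {fixed_cost * 1000:.2f} ms")
print(f"extra transfer for 24K: {extra_transfer * 1000:.3f} ms")
print(f"relative overhead: {extra_transfer / (fixed_cost + extra_transfer):.1%}")
```

With these assumed numbers the extra transfer is a fraction of a millisecond against roughly 13 ms of positioning cost; the exact percentage depends on the drive, but the positioning cost dominates either way.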
It has been said how small the difference is -
Kevin O'Gorman wrote:
mlw wrote:
Tom Samplonius wrote:
On Tue, 28 Nov 2000, mlw wrote:
Tom Samplonius wrote:
On Mon, 27 Nov 2000, mlw wrote:
This is just a curiosity.
Why is the default postgres block size 8192? These days, with caching
Matthew Kirkwood wrote:
On Tue, 28 Nov 2000, Tom Lane wrote:
Nathan Myers [EMAIL PROTECTED] writes:
In the event of a power outage, the drive will stop writing in
mid-sector.
Really? Any competent drive firmware designer would've made sure that
can't happen. The drive has to
Kevin O'Gorman wrote:
mlw wrote:
Many operating systems use a fixed memory block size allocation for
their disk cache. They do not allocate a new block for every disk
request; they maintain a pool of fixed-size buffer blocks. So if you
use fewer bytes than the OS block size you waste the rest of the buffer.
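The waste argument above is simple arithmetic. A minimal sketch, with assumed sizes (the 8192-byte OS buffer and the smaller hypothetical database block are illustrative, not taken from any specific OS):

```python
# Illustration (assumed sizes): how much of a fixed-size OS buffer-cache
# block goes unused when the database block is smaller than it.
os_buffer = 8192   # assumed OS buffer-cache block size
db_block = 2048    # hypothetical smaller database block size

wasted = os_buffer - db_block
print(f"wasted per cached block: {wasted} bytes ({wasted / os_buffer:.0%})")
```

Matching the database block size to the OS buffer size drives this waste to zero, which is one argument for 8K as a default.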
On Tue, Nov 28, 2000 at 12:38:37AM -0500, Tom Lane wrote:
"Christopher Kings-Lynne" [EMAIL PROTECTED] writes:
I don't believe it's a performance issue; I believe it's that writes to
blocks greater than 8k cannot be guaranteed 'atomic' by the operating
system. Hence, 32k blocks would break the transactions system.
On Tue, Nov 28, 2000 at 04:24:34PM -0500, Tom Lane wrote:
Nathan Myers [EMAIL PROTECTED] writes:
In the event of a power outage, the drive will stop writing in
mid-sector.
Really? Any competent drive firmware designer would've made sure that
can't happen. The drive has to detect power
Tom Samplonius wrote:
On Tue, 28 Nov 2000, mlw wrote:
Tom Samplonius wrote:
On Mon, 27 Nov 2000, mlw wrote:
This is just a curiosity.
Why is the default postgres block size 8192? These days, with caching
file systems, high speed DMA disks, hundreds of megabytes of
I've been using a 32k BLCKSZ for months now without any trouble, though I've
not benchmarked it to see if it's any faster than one with a BLCKSZ of 8k..
-Mitch
This is just a curiosity.
Why is the default postgres block size 8192? These days, with caching
file systems, high speed DMA disks,
...the transactions system. (Or
something like that - am I correct?)
From: [EMAIL PROTECTED] On Behalf Of Mitch Vincent
Sent: Tuesday, November 28, 2000 8:40 AM
Subject: Re: [HACKERS] 8192 BLCKSZ ?
I've been using a 32k BLCKSZ for months now without any trouble,
though I've
not benchmarked it to see if it's
At 08:39 PM 11/27/00 -0500, Bruce Momjian wrote:
[ Charset ISO-8859-1 unsupported, converting... ]
If it breaks anything in PostgreSQL I sure haven't seen any evidence -- the
box this database is running on gets hit pretty hard and I haven't had a
single ounce of trouble since I went to 7.0.X
At 09:30 PM 11/27/00 -0500, Bruce Momjian wrote:
Well, true, but when you have 256 MB or a half-gig or more to devote to
the cache, you get plenty of blocks, and in pre-PG 7.1 the 8KB limit is a
pain for a lot of folks.
Agreed. The other problem is that most people have 2-4MB of cache, so a
"Christopher Kings-Lynne" [EMAIL PROTECTED] writes:
I don't believe it's a performance issue, I believe it's that writes to
blocks greater than 8k cannot be guaranteed 'atomic' by the operating
system. Hence, 32k blocks would break the transactions system.
As Nathan remarks nearby, it's hard
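The hazard under discussion is the classic "torn page": a large database block spans many disk sectors, and a crash mid-write can leave some sectors new and some old. This is an illustrative simulation of that failure mode, not PostgreSQL code; the sector and block sizes are the conventional 512 bytes and the proposed 32K.

```python
# Sketch of the "torn page" hazard (illustrative, not PostgreSQL code):
# a 32K block spans 64 sectors of 512 bytes. If power fails after only
# some sectors reach the platter, the on-disk block is a mix of old and
# new data, which a whole-block checksum can detect after restart.
import hashlib

SECTOR = 512
BLOCK = 32 * 1024
sectors_per_block = BLOCK // SECTOR   # 64 chances for a partial write

old = bytes([0xAA]) * BLOCK           # block contents before the write
new = bytes([0xBB]) * BLOCK           # block contents being written

# Simulate a crash after 40 of the 64 sectors were rewritten.
written = 40 * SECTOR
on_disk = new[:written] + old[written:]

def checksum(block):
    return hashlib.md5(block).hexdigest()

print(sectors_per_block)                    # 64
print(checksum(on_disk) == checksum(new))   # False: the write was torn
```

An 8K block spans fewer sectors, but the same failure mode exists at any block size larger than whatever unit the drive truly writes atomically; that is why the guarantee has to come from somewhere other than the block size alone.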
On Mon, 27 Nov 2000, mlw wrote:
This is just a curiosity.
Why is the default postgres block size 8192? These days, with caching
file systems, high speed DMA disks, hundreds of megabytes of RAM, maybe
even gigabytes. Surely, 8K is inefficient.
I think it is a pretty wild assumption to