I don't believe it's a performance issue; I believe it's that writes to
blocks greater than 8k cannot be guaranteed 'atomic' by the operating
system.  Hence, 32k blocks would break the transaction system.  (Or
something like that - am I correct?)
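
To illustrate the concern (a minimal sketch, not PostgreSQL's actual
mechanism; the head/tail stamp scheme below is just an assumption for
demonstration): if the OS only guarantees atomicity up to some smaller
unit, a crash in the middle of a 32k write() can leave a page on disk
that is half old version, half new.  Stamping both ends of the page
lets a reader detect such a tear:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 32768

    /* Stamp the same sequence number at the head and tail of the page. */
    static void page_stamp(uint8_t *page, uint32_t seq)
    {
        memcpy(page, &seq, sizeof(seq));
        memcpy(page + PAGE_SIZE - sizeof(seq), &seq, sizeof(seq));
    }

    /* Return 1 if the head and tail stamps disagree (torn write). */
    static int page_is_torn(const uint8_t *page)
    {
        uint32_t head, tail;
        memcpy(&head, page, sizeof(head));
        memcpy(&tail, page + PAGE_SIZE - sizeof(tail), sizeof(tail));
        return head != tail;
    }

    int main(void)
    {
        static uint8_t page[PAGE_SIZE];

        page_stamp(page, 41);   /* old version, fully on disk */

        /* Simulate a crash mid-write: only the first part of the new
         * version reaches disk, so the head stamp is updated but the
         * tail still carries the old one. */
        uint32_t new_seq = 42;
        memcpy(page, &new_seq, sizeof(new_seq));

        printf("torn: %s\n", page_is_torn(page) ? "yes" : "no");
        return 0;
    }

The smaller the block relative to whatever the OS can write atomically,
the smaller the window for that kind of partial write.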

Chris

> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]] On Behalf Of Mitch Vincent
> Sent: Tuesday, November 28, 2000 8:40 AM
> To: mlw; Hackers List
> Subject: Re: [HACKERS] 8192 BLCKSZ ?
>
>
> I've been using a 32k BLCKSZ for months now without any trouble,
> though I've not benchmarked it to see if it's any faster than one
> with a BLCKSZ of 8k.
>
> -Mitch
>
> > This is just a curiosity.
> >
> > Why is the default postgres block size 8192? These days, with caching
> > file systems, high-speed DMA disks, and hundreds of megabytes of RAM
> > (maybe even gigabytes), 8K surely seems inefficient.
> >
> > Has anyone done any tests to see if a default 32K block would provide
> > better overall performance? 8K seems so small, and 32K looks to be the
> > sweet spot for most x86 operating systems.
> >
> > If someone has the answer off the top of their head, and I'm just being
> > stupid, let me have it. However, I have needed to up the block size to
> > 32K for a text management system and have seen no performance problems.
> > (It has not been a scientific experiment, admittedly.)
> >
> > This isn't a rant, but my gut tells me that a 32k block size as default
> > would be better, and that smaller deployments should adjust down as
> > needed.
> >
>
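
For anyone who wants to try what Mitch describes: BLCKSZ is a
compile-time constant, so changing it means editing the define in the
source tree and rebuilding (the exact header location varies by
version; the path in the comment below is an assumption, so check your
own tree).  Something like:

    /* Assumed location, e.g. src/include/config.h; varies by version.
     * BLCKSZ must be a power of 2 and can be at most 32768, since the
     * on-page item pointers store offsets in 15 bits. */
    #define BLCKSZ 32768    /* default is 8192 */

Note that pages written under one block size are not readable by a
server built with another, so a rebuild also means a fresh initdb and
a dump/reload of your data.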
