On 8/27/13 3:54 PM, Josh Berkus wrote:
> I believe that Greenplum currently uses 128K. There's a definite
> benefit for the DW use-case.
Since Linux read-ahead can easily give big gains on fast storage, I
normally set that to at least 4096 sectors = 2048KB. That's a lot
bigger than even this, a
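The read-ahead arithmetic above checks out: Linux expresses read-ahead in 512-byte sectors, so 4096 sectors is 2048KB. A minimal sketch (the device name /dev/sdb is a placeholder, and blockdev requires root):

```shell
# Linux read-ahead is configured in 512-byte sectors; verify the arithmetic:
echo $((4096 * 512 / 1024))KB    # prints 2048KB
# On a real device (placeholder name, requires root):
#   blockdev --setra 4096 /dev/sdb   # set read-ahead to 4096 sectors
#   blockdev --getra /dev/sdb        # confirm the setting
```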
Kevin,
> I think I've seen a handful of reports of performance differences
> with different BLCKSZ builds (perhaps not all on community lists).
> My recollection is that some people sifting through data in data
> warehouse environments see a performance benefit up to 32KB, but
> that tests of GiST
Tom Lane
> Another point here is that you could get some of the hoped-for
> benefit just by increasing BLCKSZ ... but nobody's ever
> demonstrated any compelling benefit from larger BLCKSZ (except on
> specialized workloads, if memory serves).
I think I've seen a handful of reports of performance differences
with different BLCKSZ builds (perhaps not all on community lists).
On Thu, Aug 22, 2013 at 8:53 PM, Kohei KaiGai wrote:
> An idea that I'd like to investigate is, PostgreSQL allocates a set of
> continuous buffers to fit larger i/o size when block is referenced due to
> sequential scan, then invokes consolidated i/o request on the buffer.
> It probably makes sense
Would it make sense to have something easier to configure than recompiling
postgresql and managing a custom executable, say a block size that could be
configured from initdb and/or postmaster.conf, or maybe per-object settings
specified at creation time?
I love the idea of per-object block sizes
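Since BLCKSZ is fixed at compile time, experimenting today means exactly the custom build Fabien describes. A sketch of what that involves (the `--with-blocksize` configure option is real and takes a size in kilobytes, limited to 1, 2, 4, 8, 16, or 32; the prefix and paths below are illustrative, and this is an untested build outline, not a recipe):

```shell
# Build PostgreSQL with 32kB blocks instead of the default 8kB.
# Run from a source tree; install to a separate prefix so the
# non-default build can live alongside a stock one.
./configure --with-blocksize=32 --prefix=/usr/local/pgsql-32k
make && make install
# Data directories are not interchangeable across block sizes:
# a 32kB cluster must be initdb'd by the 32kB binaries.
/usr/local/pgsql-32k/bin/initdb -D /var/lib/pgsql-32k/data
```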
2013/8/23 Fabien COELHO:
>
>> The big-picture problem with work in this area is that no matter how you
>> do it, any benefit is likely to be both platform- and workload-specific.
>> So the prospects for getting a patch accepted aren't all that bright.
>
>
> Indeed.
>
> Would it make sense to have something easier to configure than recompiling
> postgresql and managing a custom executable, say a block size that could be
> configured from initdb and/or postmaster.conf, or maybe per-object settings
> specified at creation time?
The big-picture problem with work in this area is that no matter how you
do it, any benefit is likely to be both platform- and workload-specific.
So the prospects for getting a patch accepted aren't all that bright.
Indeed.
Would it make sense to have something easier to configure than recompiling
postgresql and managing a custom executable, say a block size that could be
configured from initdb and/or postmaster.conf, or maybe per-object settings
specified at creation time?
2013/8/23 Tom Lane:
> Merlin Moncure writes:
>> On Thu, Aug 22, 2013 at 2:53 PM, Kohei KaiGai wrote:
>>> An idea that I'd like to investigate is, PostgreSQL allocates a set of
>>> continuous buffers to fit larger i/o size when block is referenced due to
>>> sequential scan, then invokes consolidated i/o request on the buffer.
Merlin Moncure writes:
> On Thu, Aug 22, 2013 at 2:53 PM, Kohei KaiGai wrote:
>> An idea that I'd like to investigate is, PostgreSQL allocates a set of
>> continuous buffers to fit larger i/o size when block is referenced due to
>> sequential scan, then invokes consolidated i/o request on the buffer.
On Thu, Aug 22, 2013 at 2:53 PM, Kohei KaiGai wrote:
> Hello,
>
> A few days before, I got a question as described in the subject line on
> a discussion with my colleague.
>
> In general, larger i/o size per system call gives us wider bandwidth on
> sequential read, than multiple system calls with smaller i/o size.
Hello,
A few days ago, I got a question, as described in the subject line, during
a discussion with my colleague.
In general, a larger i/o size per system call gives us wider bandwidth on
sequential read than multiple system calls with smaller i/o size.
Probably, people know this heuristic.
On the
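KaiGai's premise — the same bytes cost fewer system calls at a larger I/O size — can be illustrated with dd's syscall accounting (the file path and sizes here are illustrative; actual bandwidth gains depend on the storage device and on kernel read-ahead):

```shell
# Create a 1 MiB test file, then read it back at two block sizes.
# dd's "records in" line counts the read() calls it issued:
# 8 kB blocks take 128 calls; 128 kB blocks take only 8.
dd if=/dev/zero of=/tmp/seqscan_demo bs=8k count=128 2>/dev/null
dd if=/tmp/seqscan_demo of=/dev/null bs=8k   2>&1 | grep 'records in'
dd if=/tmp/seqscan_demo of=/dev/null bs=128k 2>&1 | grep 'records in'
```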