Tom Lane <t...@sss.pgh.pa.us> wrote:
Another point here is that you could get some of the hoped-for
benefit just by increasing BLCKSZ ... but nobody's ever
demonstrated any compelling benefit from larger BLCKSZ (except on
specialized workloads, if memory serves).
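BLCKSZ is a compile-time constant, so trying a larger value means rebuilding the server. A minimal sketch of how one might build a 32KB-block binary for such an experiment (the 32KB choice is illustrative; a cluster built with a different BLCKSZ needs a fresh initdb):

```shell
# Build PostgreSQL with 32KB blocks instead of the default 8KB.
# configure takes the block size in kilobytes (a power of 2, up to 32).
./configure --with-blocksize=32
make && make install

# After initdb with the new binaries, the running server reports it in bytes:
psql -c "SHOW block_size;"   # 32768 with the build above
```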
Kevin,
I think I've seen a handful of reports of performance differences
with different BLCKSZ builds (perhaps not all on community lists).
My recollection is that some people sifting through data in data
warehouse environments see a performance benefit up to 32KB, but
that tests of GiST [...]
On 8/27/13 3:54 PM, Josh Berkus wrote:
> I believe that Greenplum currently uses 128K. There's a definite
> benefit for the DW use-case.
Since Linux read-ahead can easily give big gains on fast storage, I
normally set that to at least 4096 sectors = 2048KB. That's a lot
bigger than even this [...]
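For reference, a sketch of how that read-ahead setting is applied on Linux with blockdev (the device name is a placeholder; this needs root and does not survive a reboot):

```shell
# Read-ahead is measured in 512-byte sectors:
# 4096 sectors * 512 bytes = 2048KB, the figure quoted above.
blockdev --getra /dev/sdb          # print the current read-ahead window
blockdev --setra 4096 /dev/sdb     # raise it to 2048KB
```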
> The big-picture problem with work in this area is that no matter how you
> do it, any benefit is likely to be both platform- and workload-specific.
> So the prospects for getting a patch accepted aren't all that bright.

Indeed.

Would it make sense to have something easier to configure than recompiling
postgresql and managing a custom executable, say a block size that could be
configured from initdb and/or postmaster.conf, or maybe per-object settings
specified at creation time?
2013/8/23 Fabien COELHO <coe...@cri.ensmp.fr>:
> Would it make sense to have something easier to configure than recompiling
> postgresql and managing a custom executable, say a block size that could be
> configured from initdb and/or postmaster.conf, or maybe per-object settings
> specified at creation time?
I love the idea of per-object block [...]
On Thu, Aug 22, 2013 at 8:53 PM, Kohei KaiGai <kai...@kaigai.gr.jp> wrote:
> An idea that I'd like to investigate is, PostgreSQL allocates a set of
> continuous buffers to fit larger i/o size when block is referenced due to
> sequential scan, then invokes consolidated i/o request on the buffer.
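The consolidated-read idea sketched above can be illustrated outside PostgreSQL with a vectored read: one preadv() call fills several block-sized buffers at once, instead of issuing one read() per block. A toy sketch, not PostgreSQL's buffer manager (requires os.preadv, i.e. Python 3.7+ on Linux/BSD; the function name and BLCKSZ constant are just illustrative):

```python
import os

BLCKSZ = 8192  # mirrors PostgreSQL's default block size

def read_block_run(path, start_block, nblocks):
    """Read nblocks consecutive BLCKSZ-sized blocks with a single
    vectored system call, filling one buffer per block."""
    buffers = [bytearray(BLCKSZ) for _ in range(nblocks)]
    fd = os.open(path, os.O_RDONLY)
    try:
        # One preadv() replaces nblocks separate read() calls.
        nread = os.preadv(fd, buffers, start_block * BLCKSZ)
    finally:
        os.close(fd)
    return buffers, nread
```

The kernel scatters the single contiguous range across the supplied buffers, which is roughly the shape of the proposal: one i/o request covering a run of consecutive blocks.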
Hello,

A few days ago, I got the question described in the subject line during a
discussion with a colleague.

In general, a larger i/o size per system call gives us wider bandwidth on
sequential read than multiple system calls with a smaller i/o size.
People probably know this heuristic.
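The heuristic is easy to demonstrate: reading the same file with a larger buffer needs proportionally fewer system calls. A minimal sketch (the 8KB/64KB sizes below are arbitrary illustrations, not PostgreSQL settings):

```python
import os

def count_reads(path, bufsize):
    """Scan a whole file at the given i/o size, returning
    (bytes_read, read_calls); the final zero-byte read that
    signals EOF is counted too."""
    total = calls = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        while True:
            chunk = os.read(fd, bufsize)
            calls += 1
            if not chunk:
                break
            total += len(chunk)
    finally:
        os.close(fd)
    return total, calls
```

For a 1MB file this makes 129 read() calls at 8KB but only 17 at 64KB; the bytes transferred are identical, so the larger i/o size spends far less time crossing the user/kernel boundary.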
Merlin Moncure <mmonc...@gmail.com> writes:
> On Thu, Aug 22, 2013 at 2:53 PM, Kohei KaiGai <kai...@kaigai.gr.jp> wrote:
>> An idea that I'd like to investigate is, PostgreSQL allocates a set of
>> continuous buffers to fit larger i/o size when block is referenced due to
>> sequential scan, then invokes [...]
2013/8/23 Tom Lane <t...@sss.pgh.pa.us>:
> Merlin Moncure <mmonc...@gmail.com> writes:
>> On Thu, Aug 22, 2013 at 2:53 PM, Kohei KaiGai <kai...@kaigai.gr.jp> wrote:
>>> An idea that I'd like to investigate is, PostgreSQL allocates a set of
>>> continuous buffers to fit larger i/o size when block is referenced [...]