Tom Lane t...@sss.pgh.pa.us writes:
What I intend to do over the next day or so is commit the prefetch
infrastructure and the bitmap scan prefetch logic, but I'm bouncing the
indexscan part back for rework. I think that it should be implemented
in or near index_getnext() and pay attention
Gregory Stark st...@enterprisedb.com writes:
Tom Lane t...@sss.pgh.pa.us writes:
2. I fixed it so that setting effective_io_concurrency to zero disables
prefetching altogether; there was no way to do that in the patch as
submitted.
Hm. The original intent was that effective_io_concurrency 1
Robert Haas robertmh...@gmail.com writes:
OK, here's an update of Greg's patch with the runtime configure test
ripped out, some minor documentation tweaks, and a few unnecessary
whitespace diff hunks quashed. I think this is about ready for
committer review.
I've started to look through
* As coded, it generates prefetch bursts that are much too large and too
widely spaced to be effective, not to mention that they entirely
ignore the effective_io_concurrency control knob as well as the order
in which the pages will actually be needed. I wonder now whether
Robert's inability
Tom Lane t...@sss.pgh.pa.us writes:
Robert Haas robertmh...@gmail.com writes:
OK, here's an update of Greg's patch with the runtime configure test
ripped out, some minor documentation tweaks, and a few unnecessary
whitespace diff hunks quashed. I think this is about ready for
committer
Robert Haas robertmh...@gmail.com writes:
FWIW, following your first set of commits, I extracted, cleaned up,
and re-posted the uncommitted portion of the patch last night. Based
on this it doesn't sound like there's much point in continuing to do
that, but I'm happy to do so if anyone thinks
Robert Haas robertmh...@gmail.com writes:
OK, here's an update of Greg's patch with the runtime configure test
ripped out, some minor documentation tweaks, and a few unnecessary
whitespace diff hunks quashed. I think this is about ready for
committer review.
I've applied most of this, with a
Tom Lane t...@sss.pgh.pa.us writes:
Robert Haas robertmh...@gmail.com writes:
OK, here's an update of Greg's patch with the runtime configure test
ripped out, some minor documentation tweaks, and a few unnecessary
whitespace diff hunks quashed. I think this is about ready for
committer
Peter Eisentraut pete...@gmx.net writes:
The way I read this is that this was a temporary kernel/libc mismatch in
a development version of Debian 3 years ago that was fixed within 2
months of being reported and was never released to the general public.
So it would be on the same level as
Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
The way I read this is that this was a temporary kernel/libc mismatch in
a development version of Debian 3 years ago that was fixed within 2
months of being reported and was never released to the general public.
So it would be on
I think at a minimum there should be a manual configuration switch
(ie something in pg_config_manual.h) to allow the builder to disable
use of posix_fadvise, even if configure thinks it's there. Depending
on buildfarm results we may have to do more than that.
This seems pretty pointless,
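The manual override Tom asks for could look something like this (a sketch only; the symbol names `DISABLE_POSIX_FADVISE`, `USE_POSIX_FADVISE`, and the `PREFETCH` macro are hypothetical, not what any committed version necessarily uses):

```c
/* pg_config_manual.h (sketch): let the builder veto configure's
 * result even when posix_fadvise was detected.  Symbol names here
 * are hypothetical. */
#if defined(HAVE_POSIX_FADVISE) && !defined(DISABLE_POSIX_FADVISE)
#define USE_POSIX_FADVISE
#endif

/* call sites then compile the hint away entirely when it's vetoed: */
#ifdef USE_POSIX_FADVISE
#define PREFETCH(fd, off, len) \
    ((void) posix_fadvise((fd), (off), (len), POSIX_FADV_WILLNEED))
#else
#define PREFETCH(fd, off, len) ((void) 0)
#endif
```

The point is that a buildfarm member hitting a broken libc can turn the feature off with one `#define`, without touching configure.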
Gregory Stark wrote:
Peter Eisentraut pete...@gmx.net writes:
On Friday 02 January 2009 06:49:57 Greg Stark wrote:
The guc run-time check is checking for known-buggy versions of glibc
using sysconf to check what version of glibc you have.
Could you please mention the bug number in the
On Friday 02 January 2009 06:49:57 Greg Stark wrote:
The guc run-time check is checking for known-buggy versions of glibc
using sysconf to check what version of glibc you have.
Could you please mention the bug number in the relevant source code comments?
--
Sent via pgsql-hackers mailing list
Peter Eisentraut pete...@gmx.net writes:
On Friday 02 January 2009 06:49:57 Greg Stark wrote:
The guc run-time check is checking for known-buggy versions of glibc
using sysconf to check what version of glibc you have.
Could you please mention the bug number in the relevant source code
On Thu, 1 Jan 2009, Robert Haas wrote:
The only thing I haven't been able to do is demonstrate that this change
actually produces a performance improvement. Either I'm testing the
wrong thing, or it just doesn't provide any benefit on a single-spindle
system.
When I did a round of testing
Greg Smith gsm...@gregsmith.com writes:
On Thu, 1 Jan 2009, Robert Haas wrote:
The only thing I haven't been able to do is demonstrate that this change
actually produces a performance improvement. Either I'm testing the
wrong thing, or it just doesn't provide any benefit on a single-spindle
On Fri, 2 Jan 2009, Tom Lane wrote:
ISTM that you *should* be able to see an improvement on even
single-spindle systems, due to better overlapping of CPU and I/O effort.
The earlier synthetic tests I did:
http://archives.postgresql.org/pgsql-hackers/2008-09/msg01401.php
Showed a substantial
Greg Smith gsm...@gregsmith.com writes:
On Fri, 2 Jan 2009, Tom Lane wrote:
ISTM that you *should* be able to see an improvement on even
single-spindle systems, due to better overlapping of CPU and I/O effort.
The earlier synthetic tests I did:
Tom Lane t...@sss.pgh.pa.us writes:
In principle you should be able to adjust the constant so that vmstat
shows about 50% CPU busy, and then enabling fadvise should improve
matters significantly.
I think in practice individual queries don't interleave much CPU with I/O
work. A single random
Greg Smith wrote:
On Fri, 2 Jan 2009, Tom Lane wrote:
ISTM that you *should* be able to see an improvement on even
single-spindle systems, due to better overlapping of CPU and I/O effort.
The earlier synthetic tests I did:
I've got a stack of hardware I can do performance testing of this patch on,
what I haven't been able to find time for is setting up any sort of test
harness right now. If you or Greg have any benchmark or test program you
could suggest that should show off the improvements here, I'd be glad
On Fri, Jan 2, 2009 at 5:36 PM, Robert Haas robertmh...@gmail.com wrote:
I've got a stack of hardware I can do performance testing of this patch on,
what I haven't been able to find time for is setting up any sort of test
harness right now. If you or Greg have any benchmark or test program you
Gregory Stark st...@enterprisedb.com writes:
Tom Lane t...@sss.pgh.pa.us writes:
In principle you should be able to adjust the constant so that vmstat
shows about 50% CPU busy, and then enabling fadvise should improve
matters significantly.
I think in practice individual queries don't
Hm, what were those plans? You might want to put the old code back in
explain.c to print the prefetching target to see how well it's doing.
Well, bad news. Here's one where prefetching seems to make it WORSE.
rhaas=# explain select sum(1) from enormous where l_shipdate in
('1992-01-01',
On Fri, Jan 2, 2009 at 8:42 PM, Robert Haas robertmh...@gmail.com wrote:
Hm, what were those plans? You might want to put the old code back in
explain.c to print the prefetching target to see how well it's doing.
Well, bad news. Here's one where prefetching seems to make it WORSE.
rhaas=#
Any chance you could put back the code in explain.c which showed
whether posix_fadvise is actually getting used? Another thing I did
when testing was attaching with strace to see if posix_fadvise (the
syscall on Linux is actually fadvise64, IIRC) is actually getting
called.
I tried changing
Tom Lane t...@sss.pgh.pa.us writes:
The point of the suggestion is to prove that the patch works as
advertised. How wide the sweet spot is for this test isn't nearly as
interesting as proving that there *is* a sweet spot. If you can't
find one it suggests that either the patch or the local
On Fri, Jan 2, 2009 at 11:13 PM, Robert Haas robertmh...@gmail.com wrote:
When I did that, it went back from 50 s to 33 s, which I think means
that posix_fadvise is getting called and that that is what is making
it slower.
And is this on a system with multiple spindles? How many?
Latitude
I tried this on my laptop running FC9, and because I forgot to run
autoconf, I got this error message when I tried to turn on
posix_fadvise.
rhaas=# set effective_io_concurrency to 3;
ERROR: could not determine if this system has a working posix_fadvise
DETAIL: Check configure.log produced by
Robert Haas robertmh...@gmail.com writes:
Am I correct in thinking that the only thing we're really checking for
here is whether a trivial posix_fadvise() call returns success? If
so, is this test really worth doing?
Runtime tests performed during configure are generally a bad idea to
start
On Thu, Jan 1, 2009 at 3:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Am I correct in thinking that the only thing we're really checking for
here is whether a trivial posix_fadvise() call returns success? If
so, is this test really worth doing?
Runtime
In theory there should be no benefit on a single spindle system. There
could be a slight benefit due to reordering of I/o but only on a raid
array would you see a significant speedup -- which should be about
equal to the number of spindles.
What would be interesting is whether you see a
Now that there's an actual run-time sysconf check for the buggy glibc, called
by the GUC function, we arguably don't need the autoconf check_run check
anymore anyway.
Isn't that the check I just removed for you, or are you talking about
some other check that can also be removed?
...Robert
Sorry for top-posting -- phone mail client sucks.
I thought the autoconf ac_run_check was the test that people were
questioning. That calls posix_fadvise to see if it crashes at
configure time.
The guc run-time check is checking for known-buggy versions of glibc
using sysconf to check
On Thu, Jan 1, 2009 at 11:49 PM, Greg Stark greg.st...@enterprisedb.com wrote:
Sorry for top-posting -- phone mail client sucks.
I thought the autoconf ac_run_check was the test that people were
questioning. That calls posix_fadvise to see if it crashes at configure
time.
Yes, that's what I
I'll send another path with at least 1 and 3 fixed and hunt around
again for a header file to put this guc into.
On 10 Dec 2008, at 04:22, ITAGAKI Takahiro [EMAIL PROTECTED]
wrote:
Hello,
Gregory Stark [EMAIL PROTECTED] wrote:
Here's an update to eliminate two small bitrot conflicts.
Greg Stark [EMAIL PROTECTED] writes:
The variable prefetch_pages is defined as unsigned in some places
and as int in others. Why not define it just once in a header
and include that header in the source files?
Just... Which header?
MHO: the header that goes with the source file that is most concerned with
On Thu, Dec 11, 2008 at 4:29 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Greg Stark greg.st...@enterprisedb.com writes:
The variable prefetch_pages is defined as unsigned in some places
and as int in others. Why not define it just once in a header
and include that header in the source files?
Just... Which
Here's the update
I also skimmed through and cleaned up a couple of other things. There's *still* one
function prototype for which I can't see the right header file: the one in
port/posix_fadvise.c, which contains a single function with a single caller, guc.c.
Attachment: posix_fadvise_v23.diff.gz
Hello,
Gregory Stark [EMAIL PROTECTED] wrote:
Here's an update to eliminate two small bitrot conflicts.
I read your patch with interest, but found a few minor problems.
* LET_OS_MANAGE_FILESIZE is already obsolete; you don't need to handle
that option.
* Type mismatch in prefetch_pages
Here's an update to eliminate two small bitrot conflicts.
Attachment: posix_fadvise_v22.diff.gz (binary data)
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's RemoteDBA services!