Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kevin.gritt...@wicourts.gov writes:
The checkpoint_segments seems dramatic enough to be real. I wonder
if the test is short enough that it never got around to re-using
any of them, so it was doing extra writes for the initial creation
during the test?
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Regarding the fact that even with the xlog files pre-populated, the
smaller set of xlog files is faster: I'm only guessing, but I suspect
the battery backed RAID controller is what's defeating conventional
wisdom here. By writing to the same,
On Fri, June 26, 2009 4:13 pm, Kevin Grittner wrote:
By the way, the number of xlog files seemed to always go to two above
2x checkpoint_segments.
The docs say:
There will always be at least one WAL segment file, and will normally not
be more than (2 + checkpoint_completion_target) * checkpoint_segments + 1 files.
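[With hypothetical settings of checkpoint_segments = 10 and
checkpoint_completion_target = 0.5, that formula gives a ceiling of
(2 + 0.5) * 10 + 1 = 26 segment files, a bit above the observed
2 * checkpoint_segments + 2 = 22. A sketch of watching the actual count
from SQL; pg_ls_dir() is superuser-only:

  -- count the 24-hex-character WAL segment files in pg_xlog
  SELECT count(*) FROM pg_ls_dir('pg_xlog') AS f
  WHERE f ~ '^[0-9A-F]{24}$';
]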
Tom Lane t...@sss.pgh.pa.us wrote:
How big is your BBU cache?
On this machine, I guess it is 512MB. (Possibly 1GB, but I'm having
trouble finding the right incantation to check it at the moment, so
I'm going by what the hardware tech remembers.)
-Kevin
On Mon, 2009-06-22 at 15:18 -0500, Kevin Grittner wrote:
Tom Lane t...@sss.pgh.pa.us wrote:
default postgresql.conf (comments stripped)
max_connections = 100
shared_buffers = 32MB
This forces the ring size to 4MB, since it is clamped to min(32MB/8, ringsize) = min(4MB, ringsize).
Please re-run tests with your config, ring
Hi Tom,
How much concern is there for the contention for use cases where the WAL
can't be bypassed?
Thanks, Alan
On Sun, Jun 21, 2009 at 10:00 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
The following copies 3M rows (each) into a separate table of the same database.
Tom Lane wrote:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
I was going to say that since we flush the WAL every 16MB anyway (at
every XLOG file switch), you shouldn't see any benefit with larger ring
buffers, since to fill 16MB of data you're not going to write more than 16MB WAL.
On Mon, 2009-06-22 at 10:52 +0300, Heikki Linnakangas wrote:
Tom Lane wrote:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
I was going to say that since we flush the WAL every 16MB anyway (at
every XLOG file switch), you shouldn't see any benefit with larger ring buffers.
Alan Li a...@truviso.com writes:
How much concern is there for the contention for use cases where the WAL
can't be bypassed?
If you mean "is something going to be done about it in 8.4", the
answer is no. This is a pre-existing issue that there is no simple
fix for.
Tom Lane wrote:
Alan Li a...@truviso.com writes:
How much concern is there for the contention for use cases where the WAL
can't be bypassed?
If you mean "is something going to be done about it in 8.4", the
answer is no. This is a pre-existing issue that there is no simple
fix for.
Andrew Dunstan and...@dunslane.net writes:
Tom Lane wrote:
Alan Li a...@truviso.com writes:
How much concern is there for the contention for use cases where the WAL
can't be bypassed?
If you mean "is something going to be done about it in 8.4", the
answer is no. This is a pre-existing issue
Tom Lane wrote:
I thought he was asking for a solution to the problem of WALInsertLock
contention. In any case, we have WAL bypass on a table by table basis
now, don't we?
If we do I'm ignorant of it ;-) How do we say "Never WAL this table"?
cheers
andrew
Andrew Dunstan and...@dunslane.net writes:
Tom Lane wrote:
I thought he was asking for a solution to the problem of WALInsertLock
contention. In any case, we have WAL bypass on a table by table basis
now, don't we?
If we do I'm ignorant of it ;-) How do we say "Never WAL this table"?
Make it a temp table.
* Andrew Dunstan and...@dunslane.net [090622 10:47]:
If we do I'm ignorant of it ;-) How do we say "Never WAL this table"?
CREATE TEMPORARY TABLE ...
a.
--
Aidan Van Dyk  ai...@highrise.ca
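[For what it's worth, a minimal sketch of that answer; table and file
names are hypothetical:

  -- rows in a temporary table are never WAL-logged
  CREATE TEMPORARY TABLE staging_load (id int, payload text);
  COPY staging_load FROM '/tmp/load.dat';
]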
* Tom Lane (t...@sss.pgh.pa.us) wrote:
The more useful case for data load is create or truncate it in the
same transaction, of course.
Unfortunately, WAL bypass also requires not being in archive mode with
no way to turn that off w/o a server restart, aiui.
Thanks,
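[A sketch of the create-or-truncate pattern Tom mentions, assuming
archive mode is off (8.4 behavior) and hypothetical names:

  BEGIN;
  TRUNCATE TABLE lineitem;                 -- truncated in the same xact
  COPY lineitem FROM '/tmp/lineitem.dat';  -- can skip WAL; the heap is
  COMMIT;                                  -- fsync'd at commit instead
]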
Stephen Frost sfr...@snowman.net writes:
* Tom Lane (t...@sss.pgh.pa.us) wrote:
The more useful case for data load is create or truncate it in the
same transaction, of course.
Unfortunately, WAL bypass also requires not being in archive mode with
no way to turn that off w/o a server restart, aiui.
* Tom Lane (t...@sss.pgh.pa.us) wrote:
Stephen Frost sfr...@snowman.net writes:
Unfortunately, WAL bypass also requires not being in archive mode with
no way to turn that off w/o a server restart, aiui.
Well, if you're trying to archive then you certainly wouldn't want WAL
off, so I'm
On Mon, 2009-06-22 at 11:14 -0400, Tom Lane wrote:
Stephen Frost sfr...@snowman.net writes:
* Tom Lane (t...@sss.pgh.pa.us) wrote:
The more useful case for data load is create or truncate it in the
same transaction, of course.
Unfortunately, WAL bypass also requires not being in archive mode with
no way to turn that off w/o a server restart, aiui.
Simon Riggs si...@2ndquadrant.com writes:
I was thinking it might be beneficial to be able to defer writing WAL
until COPY is complete, so heap_sync would either fsync the whole heap
file or copy the whole file to WAL.
What about indexes?
regards, tom lane
On Mon, 2009-06-22 at 11:24 -0400, Tom Lane wrote:
Simon Riggs si...@2ndquadrant.com writes:
I was thinking it might be beneficial to be able to defer writing WAL
until COPY is complete, so heap_sync would either fsync the whole heap
file or copy the whole file to WAL.
What about indexes?
Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
Tom Lane wrote:
I thought he was asking for a solution to the problem of WALInsertLock
contention. In any case, we have WAL bypass on a table by table basis
now, don't we?
If we do I'm ignorant of it ;-) How do we say "Never WAL this table"?
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
Tom Lane wrote:
I'm not convinced that WAL segment boundaries are particularly relevant
to this. The unit of flushing is an 8K page, not a segment.
We fsync() the old WAL segment every time we switch to a new WAL
segment.
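[A quick way to see that segment-switch behavior from SQL, using the
8.4-era admin functions (pg_switch_xlog() is superuser-only):

  SELECT pg_xlogfile_name(pg_current_xlog_location()); -- current segment
  SELECT pg_switch_xlog();  -- force a switch; the old segment is fsync'd
  SELECT pg_xlogfile_name(pg_current_xlog_location()); -- a new segment
]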
On 22 June 2009 at 17:24, Tom Lane t...@sss.pgh.pa.us wrote:
Simon Riggs si...@2ndquadrant.com writes:
I was thinking it might be beneficial to be able to defer writing WAL
until COPY is complete, so heap_sync would either fsync the whole heap
file or copy the whole file to WAL.
What about indexes?
Tom Lane t...@sss.pgh.pa.us wrote:
I wonder though whether the wal_buffers setting interacts with the
ring size. Has everyone who's tested this used the same 16MB
wal_buffers setting as in Alan's original scenario?
I had been using his postgresql.conf file, then added autovacuum =
off.
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Tom Lane t...@sss.pgh.pa.us wrote:
I wonder though whether the wal_buffers setting interacts with the
ring size. Has everyone who's tested this used the same 16MB
wal_buffers setting as in Alan's original scenario?
I had been using his postgresql.conf file, then added autovacuum = off.
On Mon, 22 Jun 2009, Kevin Grittner wrote:
When I tried setting the ring size to 16MB, I accidentally left off
the step to copy the postgresql.conf file, and got better performance.
Do you happen to have a build with assertions turned on? That is one
common cause of performance
Greg Smith gsm...@gregsmith.com wrote:
Do you happen to have a build with assertions turned on?
Nope. I showed my ./configure options upthread, but can confirm with
pg_config:
BINDIR = /usr/local/pgsql-8.4rc1/bin
DOCDIR = /usr/local/pgsql-8.4rc1/share/doc
HTMLDIR =
Stefan Kaltenbrunner ste...@kaltenbrunner.cc wrote:
A 25-30% performance regression in our main bulk loading mechanism
should at least be explained before the release...
I think a performance regression of that magnitude merits holding up
a release to resolve.
Note that in a follow-up
Tom Lane t...@sss.pgh.pa.us wrote:
Huh, that's bizarre. I can see that increasing shared_buffers
should make no difference in this test case (we're not using them
all anyway). But why should increasing wal_buffers make it slower?
I forget the walwriter's control algorithm at the moment
I wrote:
Stefan Kaltenbrunner ste...@kaltenbrunner.cc wrote:
A 25-30% performance regression in our main bulk loading mechanism
should at least be explained before the release...
I think a performance regression of that magnitude merits holding
up a release to resolve.
Wow. That
Kevin Grittner kevin.gritt...@wicourts.gov writes:
The checkpoint_segments seems dramatic enough to be real. I wonder if
the test is short enough that it never got around to re-using any of
them, so it was doing extra writes for the initial creation during the
test?
That's exactly what I was
On Mon, Jun 22, 2009 at 7:16 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Alan Li a...@truviso.com writes:
How much concern is there for the contention for use cases where the WAL
can't be bypassed?
If you mean "is something going to be done about it in 8.4", the
answer is no. This is a pre-existing issue that there is no simple fix for.
On Sat, 20 Jun 2009, Simon Riggs wrote:
I would suggest that we check how much WAL has been written. There may
be a secondary effect or a different regression hidden in these results.
What's the easiest way to do that? My first thought was to issue a
checkpoint before the test (which is a
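[One possible answer to "what's the easiest way", sketched with the 8.4
functions — there is no built-in location diff yet, so the hex positions
have to be subtracted by hand; the values shown are made up:

  SELECT pg_current_xlog_location();  -- e.g. 0/12000000, before the run
  -- ... run the COPY test ...
  SELECT pg_current_xlog_location();  -- e.g. 0/52000000, after
  -- 0x52000000 - 0x12000000 = 0x40000000 bytes = 1GB of WAL written
]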
Tom Lane wrote:
Simon Riggs si...@2ndquadrant.com writes:
On Sat, 2009-06-20 at 13:15 +0200, Stefan Kaltenbrunner wrote:
8192 6m43.203s/6m48.293s
16384 6m24.980s/6m24.116s
32768 6m20.753s/6m22.083s
65536 6m22.913s/6m22.449s
1048576 6m23.765s/6m24.645s
The rest of the patch should have had a
Heikki Linnakangas wrote:
Tom Lane wrote:
Simon Riggs si...@2ndquadrant.com writes:
On Sat, 2009-06-20 at 13:15 +0200, Stefan Kaltenbrunner wrote:
8192 6m43.203s/6m48.293s
16384 6m24.980s/6m24.116s
32768 6m20.753s/6m22.083s
65536 6m22.913s/6m22.449s
1048576 6m23.765s/6m24.645s
The rest of
On Sun, 2009-06-21 at 10:28 +0200, Stefan Kaltenbrunner wrote:
I did some limited testing on that but I was unable to measure any
significant effect - especially since the difference between
wal-logged and not is rather small for a non-parallel COPY (ie in the
above example you get around
On Sun, 2009-06-21 at 02:45 -0400, Greg Smith wrote:
On Sat, 20 Jun 2009, Simon Riggs wrote:
I would suggest that we check how much WAL has been written. There may
be a secondary effect or a different regression hidden in these results.
What's the easiest way to do that?
Simon Riggs wrote:
On Sun, 2009-06-21 at 10:28 +0200, Stefan Kaltenbrunner wrote:
I did some limited testing on that but I was unable to measure any
significant effect - especially since the difference between
wal-logged and not is rather small for a non-parallel COPY (ie in the
above example
Stefan Kaltenbrunner wrote:
Simon Riggs wrote:
On Sun, 2009-06-21 at 10:28 +0200, Stefan Kaltenbrunner wrote:
I did some limited testing on that but I was unable to measure any
significant effect - especially since the difference between
wal-logged and not is rather small for a non-parallel
On Sun, Jun 21, 2009 at 6:48 AM, Stefan Kaltenbrunner ste...@kaltenbrunner.cc wrote:
So I do think that IO is in fact not too significant for this kind of
testing and we still have ways to go in terms of CPU efficiency in COPY.
It would be interesting to see some gprof or oprofile output from that test.
Robert Haas robertmh...@gmail.com writes:
It would be interesting to see some gprof or oprofile output from that
test. I went back and dug up the results that I got when I profiled
this patch during initial development, and my version of the patch
applied, the profile looked like this on my
Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
It would be interesting to see some gprof or oprofile output from that
test. I went back and dug up the results that I got when I profiled
this patch during initial development, and my version of the patch
applied, the profile looked
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
I wonder if using the small ring showed any benefit when the COPY is not
WAL-logged? In that scenario block-on-WAL-flush behavior doesn't happen,
so the small ring might have some L2 cache benefits.
I think the notion that we
On Sun, Jun 21, 2009 at 11:31 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
It would be interesting to see some gprof or oprofile output from that
test. I went back and dug up the results that I got when I profiled
this patch during initial development, and
On Sun, Jun 21, 2009 at 11:52 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
I wonder if using the small ring showed any benefit when the COPY is not
WAL-logged? In that scenario block-on-WAL-flush behavior doesn't happen,
so the small ring
Greg Stark gsst...@mit.edu writes:
There was some discussion of doing this in general for all inserts
inside the indexam. The btree indexam could buffer up any inserts done
within the transaction and keep them in an in-memory btree. Any
lookups done within the transaction first look up in the in-memory btree.
Tom Lane wrote:
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
I wonder if using the small ring showed any benefit when the COPY is not
WAL-logged? In that scenario block-on-WAL-flush behavior doesn't happen,
so the small ring might have some L2 cache benefits.
I think the
Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
The following copies 3M rows (each) into a separate table of the same
database.
Is this with WAL, or bypassing WAL? Given what we've already seen,
a lot of contention for WALInsertLock wouldn't surprise me much here.
It should be possible
Robert Haas wrote:
On Sun, Jun 21, 2009 at 11:52 AM, Tom Lane t...@sss.pgh.pa.us wrote:
So to my mind, the only question left to answer (at least for the 8.4
cycle) is "is 16MB enough, or do we want to make the ring even bigger?".
Right at the moment I'd be satisfied with 16, but I wonder whether
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
I was going to say that since we flush the WAL every 16MB anyway (at
every XLOG file switch), you shouldn't see any benefit with larger ring
buffers, since to fill 16MB of data you're not going to write more than
16MB WAL.
I'm
Tom Lane wrote:
Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
The following copies 3M rows (each) into a separate table of the same
database.
Is this with WAL, or bypassing WAL? Given what we've already seen,
a lot of contention for WALInsertLock wouldn't surprise me much here.
It
On Sun, 2009-06-21 at 12:38 -0400, Tom Lane wrote:
Greg Stark gsst...@mit.edu writes:
There was some discussion of doing this in general for all inserts
inside the indexam. The btree indexam could buffer up any inserts done
within the transaction and keep them in an in-memory btree. Any
On Sun, 2009-06-21 at 20:37 +0300, Heikki Linnakangas wrote:
Robert Haas wrote:
On Sun, Jun 21, 2009 at 11:52 AM, Tom Lane t...@sss.pgh.pa.us wrote:
So to my mind, the only question left to answer (at least for the 8.4
cycle) is "is 16MB enough, or do we want to make the ring even bigger?".
On Sat, 20 Jun 2009, Simon Riggs wrote:
At the time, I also proposed a filled buffer list change to bufmgr to
allow bgwriter to preferentially target COPY's filled blocks, which
would also help with this effect.
One of the things I keep meaning to investigate is whether there's any
benefit
On Sat, 2009-06-20 at 02:53 -0400, Greg Smith wrote:
On Sat, 20 Jun 2009, Simon Riggs wrote:
At the time, I also proposed a filled buffer list change to bufmgr to
allow bgwriter to preferentially target COPY's filled blocks, which
would also help with this effect.
One of the things I
On Fri, 2009-06-19 at 22:03 -0400, Greg Smith wrote:
This makes me wonder if in addition to the ring buffering issue, there
isn't just plain more writing per average completed transaction in 8.4
with this type of COPY.
I would suggest that we check how much WAL has been written. There may
be
On Sat, Jun 20, 2009 at 9:22 AM, Simon Riggs si...@2ndquadrant.com wrote:
That would seem to me to be a more robust general approach for solving
this class of problem than the whole ring buffer idea, which is a great
start but bound to run into situations where the size of the buffer just
isn't
Greg Smith wrote:
On Fri, 19 Jun 2009, Stefan Kaltenbrunner wrote:
In my case both the CPU (an Intel E5530 Nehalem) and the IO subsystem
(8GB Fiberchannel connected NetApp with 4GB cache) are pretty fast.
The server Alan identified as "Solaris 10 8/07 s10x_u4wos_12b X86" has a
Xeon E5320
On Sat, Jun 20, 2009 at 12:10 PM, Greg Stark gsst...@mit.edu wrote:
I don't understand what you mean by "size of the buffer" either.
Ok, having gone back and read the whole thread I understand the
context for that statement. Nevermind.
--
greg
http://mit.edu/~gsstark/resume.pdf
Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
and I believe I have seen the more IO with 8.4 thing here too but I
have not actually paid enough attention yet to be sure.
FSM/VM overhead maybe? I think COPY IN is setting the SKIP_FSM bit,
but I wonder if there's some vestigial
On Sat, 2009-06-20 at 13:15 +0200, Stefan Kaltenbrunner wrote:
8.3.7: 0m39.266s 0m43.269s (alan: 36.2 - 39.2)
8192: 0m40.715s 0m42.480s
16384: 0m41.318s 0m42.118s
65536: 0m41.675s 0m42.955s
hmm interesting - I just did a bunch of runs using the lineitem table
from
On Sat, 20 Jun 2009, Simon Riggs wrote:
The reason for not doing that would be that we don't know that the
blocks are free to use; we know very little about them. The longer we
leave them the more likely they are to be reused, so putting buffers
onto the freelist when they aren't actually free
Simon Riggs si...@2ndquadrant.com writes:
On Sat, 2009-06-20 at 13:15 +0200, Stefan Kaltenbrunner wrote:
8192 6m43.203s/6m48.293s
16384 6m24.980s/6m24.116s
32768 6m20.753s/6m22.083s
65536 6m22.913s/6m22.449s
1048576 6m23.765s/6m24.645s
The rest of the patch should have had a greater effect
It doesn't look like it's related to autovacuum. I re-ran the test against
the two solaris boxes with autovacuum turned off and the results look about
the same.
8.3.7 - Solaris 10 11/06 s10x_u3wos_10 X86
real 0m43.662s
user 0m0.001s
sys  0m0.003s
real 0m43.565s
user 0m0.001s
sys
Yes, you are right. I thought that they were absolute function
counts. The data makes more sense now.
Regards,
Ken
On Thu, Jun 18, 2009 at 07:03:34PM -0500, Kevin Grittner wrote:
Kenneth Marshall k...@rice.edu wrote:
What is not clear from Stefan's function listing is how the 8.4
server could issue 33% more XLogInsert() and CopyReadLine()
calls than the 8.3.7 server using the same input file.
Kevin Grittner wrote:
8.3.7
real 0m24.249s
real 0m24.054s
real 0m24.361s
8.4rc1
real 0m33.503s
real 0m34.198s
real 0m33.931s
Ugh. This looks like a poster child case for a benchfarm ...
Is there any chance you guys could triangulate this a bit? Good initial
On 6/19/09, Andrew Dunstan and...@dunslane.net wrote:
Kevin Grittner wrote:
8.3.7
real 0m24.249s
real 0m24.054s
real 0m24.361s
8.4rc1
real 0m33.503s
real 0m34.198s
real 0m33.931s
Ugh. This looks like a poster child case for a benchfarm ...
Is there any
Andrew Dunstan wrote:
Kevin Grittner wrote:
8.3.7
real 0m24.249s
real 0m24.054s
real 0m24.361s
8.4rc1
real 0m33.503s
real 0m34.198s
real 0m33.931s
Ugh. This looks like a poster child case for a benchfarm ...
indeed...
Is there any chance you guys could
Just eyeing the code ... another thing we changed since 8.3 is to enable
posix_fadvise() calls for WAL. Any of the complaints want to try diking
out this bit of code (near line 2580 in src/backend/access/transam/xlog.c)?
#if defined(USE_POSIX_FADVISE) && defined(POSIX_FADV_DONTNEED)
	if (!XLogArchivingActive())
		(void) posix_fadvise(openLogFile, 0, 0, POSIX_FADV_DONTNEED);
#endif
Tom Lane wrote:
Just eyeing the code ... another thing we changed since 8.3 is to enable
posix_fadvise() calls for WAL. Any of the complaints want to try diking
out this bit of code (near line 2580 in src/backend/access/transam/xlog.c)?
#if defined(USE_POSIX_FADVISE)
Tom Lane wrote:
Just eyeing the code ... another thing we changed since 8.3 is to enable
posix_fadvise() calls for WAL. Any of the complaints want to try diking
out this bit of code (near line 2580 in src/backend/access/transam/xlog.c)?
#if defined(USE_POSIX_FADVISE)
On Fri, Jun 19, 2009 at 07:49:31PM +0200, Stefan Kaltenbrunner wrote:
Tom Lane wrote:
Just eyeing the code ... another thing we changed since 8.3 is to enable
posix_fadvise() calls for WAL. Any of the complaints want to try diking
out this bit of code (near line 2580 in
Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
ok after a bit of bisecting I'm happy to announce the winner of the contest:
http://archives.postgresql.org/pgsql-committers/2008-11/msg00054.php
this patch causes a 25-30% performance regression for WAL logged copy,
however in the WAL
Tom Lane wrote:
Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
ok after a bit of bisecting I'm happy to announce the winner of the contest:
http://archives.postgresql.org/pgsql-committers/2008-11/msg00054.php
this patch causes a 25-30% performance regression for WAL logged copy,
so 4096 * 1024 / BLCKSZ (= 512 buffers, i.e. a 4MB ring with the default
8K BLCKSZ) seems to be the sweet spot and also results in
more or less the same performance that 8.3 had.
Can some folks test this with different size COPYs? That's both
larger/smaller tables, and larger/smaller rows. We should also test
copy with large blob data.
--
Josh
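[A sketch of how such test tables could be generated, narrow rows vs.
wide rows; all names and row counts here are hypothetical:

  CREATE TABLE copy_narrow AS
    SELECT g AS id FROM generate_series(1, 3000000) g;
  COPY copy_narrow TO '/tmp/narrow.dat';

  CREATE TABLE copy_wide AS
    SELECT g AS id, repeat('x', 1000) AS payload
    FROM generate_series(1, 300000) g;
  COPY copy_wide TO '/tmp/wide.dat';

  -- then time "COPY ... FROM" of each file on the builds being compared
]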
On Fri, 2009-06-19 at 14:11 -0400, Tom Lane wrote:
Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
ok after a bit of bisecting I'm happy to announce the winner of the contest:
http://archives.postgresql.org/pgsql-committers/2008-11/msg00054.php
this patch causes a 25-30% performance
On Fri, 19 Jun 2009, Stefan Kaltenbrunner wrote:
In my case both the CPU (an Intel E5530 Nehalem) and the IO subsystem
(8GB Fiberchannel connected NetApp with 4GB cache) are pretty fast.
The server Alan identified as "Solaris 10 8/07 s10x_u4wos_12b X86" has a
Xeon E5320 (1.86GHz) and a single
Any objections if I add:
http://archives.postgresql.org/pgsql-performance/2009-06/msg00215.php
to the (currently empty) list of open items for 8.4?
A 25-30% performance regression in our main bulk loading mechanism
should at least be explained before the release...
Stefan
Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
Any objections if I add:
http://archives.postgresql.org/pgsql-performance/2009-06/msg00215.php
to the (currently empty) list of open items for 8.4?
I am unable to duplicate any slowdown on this test case. AFAICT
8.4 and 8.3 branch tip are
Tom Lane t...@sss.pgh.pa.us wrote:
Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
Any objections if I add:
http://archives.postgresql.org/pgsql-performance/2009-06/msg00215.php
to the (currently empty) list of open items for 8.4?
I am unable to duplicate any slowdown on this test case.
Kevin Grittner kevin.gritt...@wicourts.gov writes:
Tom Lane t...@sss.pgh.pa.us wrote:
I am unable to duplicate any slowdown on this test case.
[ Kevin can ]
It'd be useful first off to figure out if it's a CPU or I/O issue.
Is there any visible difference in vmstat output? Also, try turning
off autovacuum in both cases, just to see if that's related.
Tom Lane t...@sss.pgh.pa.us wrote:
It'd be useful first off to figure out if it's a CPU or I/O issue.
Is there any visible difference in vmstat output? Also, try turning
off autovacuum in both cases, just to see if that's related.
Both took slightly longer with autovacuum off, but
On Thu, Jun 18, 2009 at 05:20:08PM -0400, Tom Lane wrote:
Stefan Kaltenbrunner ste...@kaltenbrunner.cc writes:
Any objections if I add:
http://archives.postgresql.org/pgsql-performance/2009-06/msg00215.php
to the (currently empty) list of open items for 8.4?
I am unable to duplicate any
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
I've got to go keep an appointment
Sorry about that. Back now. Anything else I can do to help with
this?
-Kevin
Kenneth Marshall k...@rice.edu wrote:
What is not clear from Stefan's function listing is how the 8.4
server could issue 33% more XLogInsert() and CopyReadLine()
calls than the 8.3.7 server using the same input file.
I thought those were profiling numbers -- the number of times a timer
interrupt caught execution inside each function, not absolute call counts.