Robert Haas robertmh...@gmail.com writes:
One possible way to make an improvement in this area would be to
move the responsibility for accepting connections out of the
postmaster. Instead, you'd have a group of children that would all
call accept() on the socket, and the OS would
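
A minimal sketch of that scheme, outside any PostgreSQL code: pre-forked children all block in accept() on one listening socket and the kernel hands each incoming connection to one of them. Port number, worker count, and the lack of error handling are placeholder choices for illustration, not anything proposed in the thread.

/* Sketch only: pre-forked workers sharing one listening socket. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define NWORKERS 4              /* placeholder */

int main(void)
{
    struct sockaddr_in addr;
    int lsock = socket(AF_INET, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5433);                /* placeholder port */
    if (bind(lsock, (struct sockaddr *) &addr, sizeof(addr)) < 0 ||
        listen(lsock, 64) < 0)
    {
        perror("bind/listen");
        return 1;
    }

    for (int i = 0; i < NWORKERS; i++)
    {
        if (fork() == 0)
        {
            for (;;)
            {
                /* All children sleep here; the kernel wakes one per connection. */
                int csock = accept(lsock, NULL, NULL);

                if (csock < 0)
                    continue;
                /* ... authenticate and run the session here ... */
                close(csock);
            }
        }
    }
    for (;;)
        wait(NULL);             /* parent only supervises */
}
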
On 12/06/2010 09:38 AM, Tom Lane wrote:
Another issue that would require some thought is what algorithm the
postmaster uses for deciding to spawn new children. But that doesn't
sound like a potential showstopper.
We'd probably want a couple of different ones, optimized for different
On Mon, Dec 6, 2010 at 12:38 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
One possible way to make an improvement in this area would be to
move the responsibility for accepting connections out of the
postmaster. Instead, you'd have a group of children
At some point Hackers should look at pg vs MySQL multi-tenancy, but it
is way tangential today.
My understanding is that our schemas work like MySQL databases; and
our databases are an even higher level of isolation. No?
That's correct. Drizzle is looking at implementing a feature like
On Mon, Dec 6, 2010 at 12:57 PM, Josh Berkus j...@agliodbs.com wrote:
At some point Hackers should look at pg vs MySQL multi-tenancy, but it
is way tangential today.
My understanding is that our schemas work like MySQL databases; and
our databases are an even higher level of isolation. No?
Please explain more precisely what is wrong with SET SESSION
AUTHORIZATION / SET ROLE.
1) Session GUCs do not change with a SET ROLE (this is a TODO I haven't
had any time to work on)
2) Users can always issue their own SET ROLE and then hack into other
users' data.
On Mon, Dec 6, 2010 at 2:47 PM, Josh Berkus j...@agliodbs.com wrote:
Please explain more precisely what is wrong with SET SESSION
AUTHORIZATION / SET ROLE.
1) Session GUCs do not change with a SET ROLE (this is a TODO I haven't
had any time to work on)
2) Users can always issue their own
Excerpts from Robert Haas's message of Mon Dec 06 23:09:56 -0300 2010:
On Mon, Dec 6, 2010 at 2:47 PM, Josh Berkus j...@agliodbs.com wrote:
Please explain more precisely what is wrong with SET SESSION
AUTHORIZATION / SET ROLE.
1) Session GUCs do not change with a SET ROLE (this is a
It seems plausible to fix the first one, but how would you fix the
second one? You either allow SET ROLE (which you need, to support the
pooler changing authorization), or you don't. There doesn't seem to be
a usable middle ground.
Well, this is why such a pooler would *have* to be built
On Mon, Dec 6, 2010 at 9:37 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
Excerpts from Robert Haas's message of Mon Dec 06 23:09:56 -0300 2010:
On Mon, Dec 6, 2010 at 2:47 PM, Josh Berkus j...@agliodbs.com wrote:
Please explain more precisely what is wrong with SET SESSION
On 07/12/10 10:48, Josh Berkus wrote:
It seems plausible to fix the first one, but how would you fix the
second one? You either allow SET ROLE (which you need, to support the
pooler changing authorization), or you don't. There doesn't seem to be
a usable middle ground.
Well, this is why
On 12/01/2010 05:32 AM, Jeff Janes wrote:
On 11/28/10, Robert Haasrobertmh...@gmail.com wrote:
In a close race, I don't think we should get bogged down in
micro-optimization here, both because micro-optimizations may not gain
much and because what works well on one platform may not do much at
* no coordination of restarts/configuration changes between the cluster
and the pooler
* you have two separate config files to configure your pooling settings
(having all that available say in a catalog in pg would be awesome)
* you lose all of the advanced authentication features of pg
On Sun, Dec 5, 2010 at 11:59 AM, Josh Berkus j...@agliodbs.com wrote:
* no coordination of restarts/configuration changes between the cluster
and the pooler
* you have two separate config files to configure your pooling settings
(having all that available say in a catalog in pg would be
On Sun, Dec 5, 2010 at 12:45 PM, Rob Wultsch wult...@gmail.com wrote:
One thing I would suggest the PG community keep in mind while
talking about built-in connection process caching is that it is a very
nice feature for memory leaks caused by a connection to not exist for
and continue
On Sun, Dec 5, 2010 at 3:17 PM, Rob Wultsch wult...@gmail.com wrote:
On Sun, Dec 5, 2010 at 12:45 PM, Rob Wultsch wult...@gmail.com wrote:
One thing I would suggest the PG community keep in mind while
talking about built-in connection process caching is that it is a very
nice feature for
On Sun, Dec 5, 2010 at 2:45 PM, Rob Wultsch wult...@gmail.com wrote:
I think you have read a bit more into what I have said than is
correct. MySQL can deal with thousands of users and separate schemas
on commodity hardware. There are many design decisions (some
questionable) that have made
On Sat, Dec 4, 2010 at 8:04 PM, Jeff Janes jeff.ja...@gmail.com wrote:
But who would be doing the passing? For the postmaster to be doing
that would probably go against the minimalist design. It would have
to keep track of which backend is available, and which db and user it
is primed for.
On Sun, Dec 5, 2010 at 6:59 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Dec 5, 2010 at 2:45 PM, Rob Wultsch wult...@gmail.com wrote:
I think you have read a bit more into what I have said than is
correct. MySQL can deal with thousands of users and separate schemas
on commodity
On Sun, Dec 5, 2010 at 9:35 PM, Rob Wultsch wult...@gmail.com wrote:
On Sun, Dec 5, 2010 at 6:59 PM, Robert Haas robertmh...@gmail.com wrote:
On Sun, Dec 5, 2010 at 2:45 PM, Rob Wultsch wult...@gmail.com wrote:
I think you have read a bit more into what I have said than is
correct. MySQL can
On Wed, Dec 1, 2010 at 6:20 AM, Robert Haas robertmh...@gmail.com wrote:
On Tue, Nov 30, 2010 at 11:32 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On 11/28/10, Robert Haas robertmh...@gmail.com wrote:
In a close race, I don't think we should get bogged down in
micro-optimization here, both
On Tue, Nov 30, 2010 at 11:32 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On 11/28/10, Robert Haas robertmh...@gmail.com wrote:
In a close race, I don't think we should get bogged down in
micro-optimization here, both because micro-optimizations may not gain
much and because what works well on
On Wednesday 01 December 2010 15:20:32 Robert Haas wrote:
On Tue, Nov 30, 2010 at 11:32 PM, Jeff Janes jeff.ja...@gmail.com wrote:
On 11/28/10, Robert Haas robertmh...@gmail.com wrote:
To some degree we're a
victim of our own flexible and extensible architecture here, but I
find it pretty
Robert Haas robertmh...@gmail.com wrote:
Jeff Janes jeff.ja...@gmail.com wrote:
Oracle's backend start up time seems to be way higher than PG's.
Interesting. How about MySQL and SQL Server?
My recollection of Sybase ASE is that establishing a connection
doesn't start a backend or even a
On Mon, 2010-11-29 at 13:10 -0500, Tom Lane wrote:
Rolling in calloc in place of
malloc/memset made no particular difference either, which says that
Fedora 13's glibc does not have any optimization for that case as I'd
hoped.
glibc's calloc is either mmap of /dev/zero or malloc followed by
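
A crude way to check that on a given box is to time the two paths directly. The block size below is arbitrary; glibc typically serves an allocation this large from a fresh anonymous mmap, which is already zeroed, so calloc can skip the explicit memset. Note that the calloc number is only meaningful once the pages are actually touched (the page faults are merely deferred, which is exactly the concern raised elsewhere in the thread).

/* Time malloc+memset against calloc for one large block (size arbitrary). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static double secs(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    size_t n = 256UL * 1024 * 1024;
    struct timespec t0, t1, t2, t3;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    char *a = malloc(n);
    memset(a, 0, n);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    char *b = calloc(n, 1);
    clock_gettime(CLOCK_MONOTONIC, &t2);
    memset(b, 0, n);            /* touch the calloc'd pages for a fair total */
    clock_gettime(CLOCK_MONOTONIC, &t3);

    printf("malloc+memset %.3fs, calloc %.3fs, first touch of calloc'd block %.3fs\n",
           secs(t0, t1), secs(t1, t2), secs(t2, t3));
    free(a);
    free(b);
    return 0;
}
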
Peter Eisentraut pete...@gmx.net writes:
On Mon, 2010-11-29 at 13:10 -0500, Tom Lane wrote:
Rolling in calloc in place of
malloc/memset made no particular difference either, which says that
Fedora 13's glibc does not have any optimization for that case as I'd
hoped.
glibc's calloc is
On 11/28/10, Robert Haas robertmh...@gmail.com wrote:
In a close race, I don't think we should get bogged down in
micro-optimization here, both because micro-optimizations may not gain
much and because what works well on one platform may not do much at
all on another. The more general issue
On Tue, 2010-11-30 at 15:49 -0500, Tom Lane wrote:
Peter Eisentraut pete...@gmx.net writes:
On Mon, 2010-11-29 at 13:10 -0500, Tom Lane wrote:
Rolling in calloc in place of
malloc/memset made no particular difference either, which says that
Fedora 13's glibc does not have any optimization
Robert Haas robertmh...@gmail.com writes:
Well, the lack of extensible XLOG support is definitely a big handicap
to building a *production* index AM as an add-on. But it's not such a
handicap for development.
Realistically, it's hard for me to imagine that anyone would go to the
trouble of
On Sun, Nov 28, 2010 at 11:51 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Yeah, very true. What's a bit frustrating about the whole thing is
that we spend a lot of time pulling data into the caches that's
basically static and never likely to change
On Monday 29 November 2010 17:57:51 Robert Haas wrote:
On Sun, Nov 28, 2010 at 11:51 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Yeah, very true. What's a bit frustrating about the whole thing is
that we spend a lot of time pulling data into the caches
On Mon, Nov 29, 2010 at 12:24 PM, Andres Freund and...@anarazel.de wrote:
Hm. A quick test shows that it's quite a bit faster if you allocate memory
with:
size_t s = 512*1024*1024;
char *bss = mmap(0, s, PROT_READ|PROT_WRITE,
                 MAP_PRIVATE|MAP_POPULATE|MAP_ANONYMOUS, -1, 0);
Numbers?
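
Filled out with the necessary includes, the quoted snippet compiles as below; MAP_POPULATE (Linux-specific) pre-faults the pages, so a subsequent memset pays only the zeroing cost, not the fault cost. The timing scaffolding is added here purely for illustration.

/* The quoted mmap() call plus enough scaffolding to compile and time it. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

int main(void)
{
    size_t s = 512 * 1024 * 1024;
    struct timespec t0, t1;
    char *bss = mmap(0, s, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_POPULATE | MAP_ANONYMOUS, -1, 0);

    if (bss == MAP_FAILED)
        return 1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    memset(bss, 0, s);          /* pages are already resident and zeroed */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("memset over populated mapping: %.3f s\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    return 0;
}
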
On Monday 29 November 2010 18:34:02 Robert Haas wrote:
On Mon, Nov 29, 2010 at 12:24 PM, Andres Freund and...@anarazel.de wrote:
Hm. A quick test shows that it's quite a bit faster if you allocate memory
with:
size_t s = 512*1024*1024;
char *bss = mmap(0, s, PROT_READ|PROT_WRITE,
On Mon, Nov 29, 2010 at 9:24 AM, Andres Freund and...@anarazel.de wrote:
On Monday 29 November 2010 17:57:51 Robert Haas wrote:
On Sun, Nov 28, 2010 at 11:51 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Yeah, very true. What's a bit frustrating about the
Robert Haas robertmh...@gmail.com writes:
I guess the word "run" is misleading (I wrote the program in 5
minutes); it's just zeroing the same chunk twice and measuring the
times. The difference is presumably the page fault overhead, which
implies that faulting is two-thirds of the overhead on
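
The program described is easy to reproduce; a version along these lines (block size arbitrary) separates the first-touch fault cost from the pure zeroing cost:

/* Zero a freshly malloc'd block twice and time both passes: the first
   pass pays the page faults, the second only the zeroing. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static double secs(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    size_t n = 256UL * 1024 * 1024;     /* arbitrary */
    char *p = malloc(n);
    struct timespec t0, t1, t2;

    if (p == NULL)
        return 1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    memset(p, 0, n);                    /* faults + zeroing */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    memset(p, 0, n);                    /* zeroing only */
    clock_gettime(CLOCK_MONOTONIC, &t2);
    printf("first pass %.3f s, second pass %.3f s\n",
           secs(t0, t1), secs(t1, t2));
    free(p);
    return 0;
}
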
Jeff Janes jeff.ja...@gmail.com writes:
Are you sure you haven't just moved the page-fault time to a part of
the code where it still exists, but just isn't being captured and
reported?
I'm a bit suspicious about that too. Another thing to keep in mind
is that Robert's original program doesn't
On Mon, Nov 29, 2010 at 12:50 PM, Tom Lane t...@sss.pgh.pa.us wrote:
(On the last two machines I had to cut the array size to 256MB to avoid
swapping.)
You weren't kidding about that "not so recent" part. :-)
This makes me pretty
pessimistic about the chances of a meaningful speedup here.
Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
On Sat, Nov 27, 2010 at 11:18 PM, Bruce Momjian br...@momjian.us wrote:
Not sure that information moves us forward.  If the postmaster cleared
the memory, we would have COW in the child and probably be even slower.
Well, we can
Robert Haas wrote:
In a close race, I don't think we should get bogged down in
micro-optimization here, both because micro-optimizations may not gain
much and because what works well on one platform may not do much at
all on another. The more general issue here is what to do about our
high
Tom Lane wrote:
BTW, this might be premature to mention pending some tests about mapping
versus zeroing overhead, but it strikes me that there's more than one
way to skin a cat. I still think the idea of statically allocated space
sucks. But what if we rearranged things so that palloc0
Robert Haas wrote:
On Sun, Nov 28, 2010 at 7:15 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
One possible way to get a real speedup here would be to look for ways
to trim the number of catcaches.
BTW, it's not going to help to remove catcaches that
Greg Stark wrote:
On Mon, Nov 29, 2010 at 12:33 AM, Tom Lane t...@sss.pgh.pa.us wrote:
The most portable way to do that would be to use calloc instead of malloc,
and hope that libc is smart enough to provide freshly-mapped space.
It would be good to look and see whether glibc actually does
On Monday 29 November 2010 19:10:07 Tom Lane wrote:
Jeff Janes jeff.ja...@gmail.com writes:
Are you sure you haven't just moved the page-fault time to a part of
the code where it still exists, but just isn't being captured and
reported?
I'm a bit suspicious about that too. Another thing
Robert Haas robertmh...@gmail.com writes:
On Sat, Nov 27, 2010 at 11:18 PM, Bruce Momjian br...@momjian.us wrote:
Not sure that information moves us forward. If the postmaster cleared
the memory, we would have COW in the child and probably be even slower.
Well, we can determine the answers
On Sun, Nov 28, 2010 at 11:41 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Sat, Nov 27, 2010 at 11:18 PM, Bruce Momjian br...@momjian.us wrote:
Not sure that information moves us forward. If the postmaster cleared
the memory, we would have COW in the
Robert Haas wrote:
On Sat, Nov 27, 2010 at 11:18 PM, Bruce Momjian br...@momjian.us wrote:
Not sure that information moves us forward.  If the postmaster cleared
the memory, we would have COW in the child and probably be even slower.
Well, we can determine the answers to these questions
Robert Haas robertmh...@gmail.com writes:
The more general issue here is what to do about our
high backend startup costs. Beyond trying to recycle backends for new
connections, as I've previously proposed and with all the problems it
entails, the only thing that looks promising here is to try
On Sun, Nov 28, 2010 at 3:53 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
The more general issue here is what to do about our
high backend startup costs. Beyond trying to recycle backends for new
connections, as I've previous proposed and with all the
Robert Haas robertmh...@gmail.com writes:
After our recent conversation
about KNNGIST, it occurred to me to wonder whether there's really any
point in pretending that a user can usefully add an AM, both due to
hard-wired planner knowledge and due to lack of any sort of extensible
XLOG
On Sun, Nov 28, 2010 at 6:41 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
After our recent conversation
about KNNGIST, it occurred to me to wonder whether there's really any
point in pretending that a user can usefully add an AM, both due to
hard-wired
Robert Haas robertmh...@gmail.com writes:
One possible way to get a real speedup here would be to look for ways
to trim the number of catcaches.
BTW, it's not going to help to remove catcaches that have a small
initial size, as the pg_am cache certainly does. If the bucket zeroing
cost is
BTW, this might be premature to mention pending some tests about mapping
versus zeroing overhead, but it strikes me that there's more than one
way to skin a cat. I still think the idea of statically allocated space
sucks. But what if we rearranged things so that palloc0 doesn't consist
of
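
One way to read that direction — a guess at the shape, not Tom's actual proposal — is to have the zero-allocating path carve chunks out of memory that is already known to be zero (for example, straight from a fresh anonymous mmap) instead of memset()ing each allocation. All names here are invented.

/* Sketch: hand out slices of freshly mmap'd memory, which the kernel
   guarantees is zero-filled, instead of zeroing every allocation.
   Oversize requests and freeing are not handled in this sketch. */
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

#define ZERO_ARENA_SIZE (8 * 1024 * 1024)   /* arbitrary */

static char  *zero_arena;
static size_t zero_used;

void *
alloc_zeroed(size_t size)
{
    void *chunk;

    size = (size + 7) & ~(size_t) 7;        /* keep 8-byte alignment */
    if (zero_arena == NULL || zero_used + size > ZERO_ARENA_SIZE)
    {
        zero_arena = mmap(NULL, ZERO_ARENA_SIZE, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (zero_arena == MAP_FAILED)
            return NULL;
        zero_used = 0;
    }
    chunk = zero_arena + zero_used;         /* already all zeroes */
    zero_used += size;
    return chunk;
}
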
On Mon, Nov 29, 2010 at 12:33 AM, Tom Lane t...@sss.pgh.pa.us wrote:
The most portable way to do that would be to use calloc instead of malloc,
and hope that libc is smart enough to provide freshly-mapped space.
It would be good to look and see whether glibc actually does so,
of course. If not
Greg Stark gsst...@mit.edu writes:
On Mon, Nov 29, 2010 at 12:33 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Another question that would be worth asking here is whether the
hand-baked MemSet macro still outruns memset on modern architectures.
I think it's been quite a few years since that was last
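
That is easy to re-measure on any given machine. The loop below is a crude stand-in for the word-at-a-time idea (it is not the real MemSet macro), timed against libc memset on a small aligned block; the asm statement is a GCC-style compiler barrier to keep the redundant clears from being folded away.

/* Crude word-at-a-time zeroing loop vs. libc memset on a small block.
   len is assumed to be a multiple of sizeof(long). */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define BUFSIZE 256             /* small, aligned: the case MemSet targets */
#define ITERS   10000000

static void
zero_words(void *dst, size_t len)
{
    long *p = dst;
    long *end = (long *) ((char *) dst + len);

    while (p < end)
        *p++ = 0;
}

int main(void)
{
    static long buf[BUFSIZE / sizeof(long)];
    struct timespec t0, t1, t2;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++)
    {
        zero_words(buf, sizeof(buf));
        __asm__ __volatile__("" ::: "memory");  /* compiler barrier */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    for (int i = 0; i < ITERS; i++)
    {
        memset(buf, 0, sizeof(buf));
        __asm__ __volatile__("" ::: "memory");
    }
    clock_gettime(CLOCK_MONOTONIC, &t2);
    printf("word loop %.3f s, memset %.3f s\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9,
           (t2.tv_sec - t1.tv_sec) + (t2.tv_nsec - t1.tv_nsec) / 1e9);
    return 0;
}
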
On Sun, Nov 28, 2010 at 7:15 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
One possible way to get a real speedup here would be to look for ways
to trim the number of catcaches.
BTW, it's not going to help to remove catcaches that have a small
initial size,
Robert Haas robertmh...@gmail.com writes:
Yeah, very true. What's a bit frustrating about the whole thing is
that we spend a lot of time pulling data into the caches that's
basically static and never likely to change anywhere, ever.
True. I wonder if we could do something like the relcache
Robert Haas wrote:
In fact, it wouldn't be that hard to relax the "known at compile time"
constraint either.  We could just declare:
char lotsa_zero_bytes[NUM_ZERO_BYTES_WE_NEED];
...and then peel off chunks.
Won't this just cause loads of additional pagefaults after fork() when those
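
Spelled out, the "peel off chunks" idea is just a bump pointer over a zero-initialized BSS array: the loader maps those pages zero-filled, so nothing needs to memset them on first use. Names are illustrative, and overflow simply falls back to the regular allocation path. Andres's point here is that after fork() each child still has to fault those pages in, and again on first write; the fork/fault measurement sketch further down the thread listing makes that visible.

/* Bump allocator over a zero-initialized BSS array.  Illustrative only. */
#include <stddef.h>

#define NUM_ZERO_BYTES_WE_NEED (4 * 1024 * 1024)    /* placeholder budget */

static char   lotsa_zero_bytes[NUM_ZERO_BYTES_WE_NEED];
static size_t zero_bytes_used;

void *
peel_off_zero_chunk(size_t size)
{
    void *chunk;

    size = (size + 7) & ~(size_t) 7;                /* 8-byte alignment */
    if (zero_bytes_used + size > sizeof(lotsa_zero_bytes))
        return NULL;                                /* caller uses palloc0 instead */
    chunk = lotsa_zero_bytes + zero_bytes_used;
    zero_bytes_used += size;
    return chunk;
}
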
On Sat, Nov 27, 2010 at 11:18 PM, Bruce Momjian br...@momjian.us wrote:
Not sure that information moves us forward. If the postmaster cleared
the memory, we would have COW in the child and probably be even slower.
Well, we can determine the answers to these questions empirically. I
think some
On Wed, Nov 24, 2010 at 2:10 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Anything we can do about this? That's a lot of overhead, and it'd be
a lot worse on a big machine with 8GB of shared_buffers.
Micro-optimizing that search for the non-zero value helps a little bit
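
For reference, the kind of micro-optimization meant here (a sketch, not the actual attached patch) is to scan the refcount array several elements at a time, only examining individual entries once a group is known to contain something non-zero:

/* Scan an array of reference counts for any non-zero entry, ORing
   groups together so the common all-zero case is mostly sequential
   reads with few branches.  Illustrative only. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

bool
any_refcount_nonzero(const int32_t *refcounts, size_t n)
{
    size_t i = 0;

    for (; i + 4 <= n; i += 4)
    {
        if (refcounts[i] | refcounts[i + 1] |
            refcounts[i + 2] | refcounts[i + 3])
            return true;
    }
    for (; i < n; i++)
    {
        if (refcounts[i] != 0)
            return true;
    }
    return false;
}
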
Robert Haas robertmh...@gmail.com writes:
On Wed, Nov 24, 2010 at 2:10 AM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
Micro-optimizing that search for the non-zero value helps a little bit
(attached). Reduces the percentage shown by oprofile from about 16% to 12%
on my
On Wed, Nov 24, 2010 at 10:25 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
The first optimization that occurred to me was to remove the loop
altogether.
Or make it execute only in assert-enabled mode, perhaps.
This check had some use back in the bad old
Robert Haas robertmh...@gmail.com writes:
On Wed, Nov 24, 2010 at 10:25 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Or make it execute only in assert-enabled mode, perhaps.
But making the check execute only in assert-enabled mode
doesn't seem right, since the check actually acts to mask other
On Wed, Nov 24, 2010 at 11:33 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Wed, Nov 24, 2010 at 10:25 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Or make it execute only in assert-enabled mode, perhaps.
But making the check execute only in assert-enabled mode
Robert Haas robertmh...@gmail.com writes:
OK, patch attached.
Two comments:
1. A comment would help, something like "Assert we released all buffer pins".
2. AtProcExit_LocalBuffers should be redone the same way, for
consistency (it likely won't make any performance difference).
Note the comment
On Wed, Nov 24, 2010 at 1:06 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
OK, patch attached.
Two comments:
Revised patch attached.
I tried configuring oprofile with --callgraph=10 and then running
oprofile with -c, but it gives kooky looking output I
On Wed, Nov 24, 2010 at 01:20:36PM -0500, Robert Haas wrote:
On Wed, Nov 24, 2010 at 1:06 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
OK, patch attached.
Two comments:
Revised patch attached.
I tried configuring oprofile with --callgraph=10 and
On Wednesday 24 November 2010 19:01:32 Robert Haas wrote:
Somehow I don't think I'm going to get much further with this without
figuring out how to get oprofile to cough up a call graph.
I think to do that sensibly you need CFLAGS=-O2 -fno-omit-frame-pointer...
Gerhard Heift ml-postgresql-20081012-3...@gheift.de writes:
On Wed, Nov 24, 2010 at 01:20:36PM -0500, Robert Haas wrote:
I tried configuring oprofile with --callgraph=10 and then running
oprofile with -c, but it gives kooky looking output I can't interpret.
Have a look at the wiki:
Robert Haas robertmh...@gmail.com writes:
Revised patch attached.
The asserts in AtProcExit_LocalBuffers are a bit pointless since
you forgot to remove the code that forcibly zeroes LocalRefCount[]...
otherwise +1.
regards, tom lane
Robert Haas robertmh...@gmail.com writes:
Full results, and call graph, attached. The first obvious fact is
that most of the memset overhead appears to be coming from
InitCatCache.
AFAICT that must be the palloc0 calls that are zeroing out (mostly)
the hash bucket headers. I don't see any
On Wed, Nov 24, 2010 at 3:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Full results, and call graph, attached. The first obvious fact is
that most of the memset overhead appears to be coming from
InitCatCache.
AFAICT that must be the palloc0 calls that
On Wednesday 24 November 2010 21:47:32 Robert Haas wrote:
On Wed, Nov 24, 2010 at 3:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Full results, and call graph, attached. The first obvious fact is
that most of the memset overhead appears to be coming
On Wed, Nov 24, 2010 at 3:53 PM, Andres Freund and...@anarazel.de wrote:
On Wednesday 24 November 2010 21:47:32 Robert Haas wrote:
On Wed, Nov 24, 2010 at 3:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Full results, and call graph, attached. The first
Robert Haas robertmh...@gmail.com writes:
On Wed, Nov 24, 2010 at 3:53 PM, Andres Freund and...@anarazel.de wrote:
The idea I had was to go the other way and say, hey, if these hash
tables can't be expanded anyway, let's put them on the BSS instead of
heap-allocating them.
Won't this just
On Wednesday 24 November 2010 21:54:53 Robert Haas wrote:
On Wed, Nov 24, 2010 at 3:53 PM, Andres Freund and...@anarazel.de wrote:
On Wednesday 24 November 2010 21:47:32 Robert Haas wrote:
On Wed, Nov 24, 2010 at 3:14 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com
On Nov 24, 2010, at 4:05 PM, Andres Freund and...@anarazel.de wrote:
Won't this just cause loads of additional pagefaults after fork() when
those pages are used the first time and then a second time when first
written to (to copy it)?
Aren't we incurring those page faults anyway, for
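
Those two rounds of faults are easy to observe: fork(), then have the child read and then write an inherited, never-touched BSS region while watching ru_minflt. The region size is arbitrary.

/* Count the minor faults a child takes on an inherited, untouched BSS
   region: roughly one per page on first read (zero-page mapping) and
   another on first write (copy-on-write). */
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

#define REGION_SIZE (64 * 1024 * 1024)

static char region[REGION_SIZE];        /* BSS: zero, never touched by parent */

static long minflt(void)
{
    struct rusage ru;

    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void)
{
    if (fork() == 0)
    {
        volatile char sink = 0;
        long base = minflt();

        for (size_t i = 0; i < REGION_SIZE; i += 4096)
            sink += region[i];              /* first touch: read */
        long after_read = minflt();

        memset(region, 1, REGION_SIZE);     /* first write: COW faults */
        long after_write = minflt();

        printf("read faults %ld, write faults %ld\n",
               after_read - base, after_write - after_read);
        (void) sink;
        return 0;
    }
    wait(NULL);
    return 0;
}
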
Robert Haas robertmh...@gmail.com writes:
On Nov 24, 2010, at 4:05 PM, Andres Freund and...@anarazel.de wrote:
Yes, but only once. Also scrubbing a page is faster than copying it... (and
there were patches floating around to do that in advance, not sure if they
got integrated into mainline
On Wednesday 24 November 2010 22:18:08 Robert Haas wrote:
On Nov 24, 2010, at 4:05 PM, Andres Freund and...@anarazel.de wrote:
Won't this just cause loads of additional pagefaults after fork() when
those pages are used the first time and then a second time when first
written to (to copy
On Wednesday 24 November 2010 22:25:45 Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
On Nov 24, 2010, at 4:05 PM, Andres Freund and...@anarazel.de wrote:
Yes, but only once. Also scrubbing a page is faster than copying it...
(and there were patches floating around to do that in
On Wed, Nov 24, 2010 at 4:05 PM, Tom Lane t...@sss.pgh.pa.us wrote:
(You might be able to confirm or disprove this theory if you ask
oprofile to count memory access stalls instead of CPU clock cycles...)
I don't see an event for that.
# opcontrol --list-events | grep STALL
Robert Haas robertmh...@gmail.com writes:
On Wed, Nov 24, 2010 at 4:05 PM, Tom Lane t...@sss.pgh.pa.us wrote:
(You might be able to confirm or disprove this theory if you ask
oprofile to count memory access stalls instead of CPU clock cycles...)
I don't see an event for that.
You probably
On Wednesday 24 November 2010 23:03:48 Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
On Wed, Nov 24, 2010 at 4:05 PM, Tom Lane t...@sss.pgh.pa.us wrote:
(You might be able to confirm or disprove this theory if you ask
oprofile to count memory access stalls instead of CPU clock
On Wed, Nov 24, 2010 at 5:15 PM, Andres Freund and...@anarazel.de wrote:
On Wednesday 24 November 2010 23:03:48 Tom Lane wrote:
Robert Haas robertmh...@gmail.com writes:
On Wed, Nov 24, 2010 at 4:05 PM, Tom Lane t...@sss.pgh.pa.us wrote:
(You might be able to confirm or disprove this theory
Robert Haas robertmh...@gmail.com writes:
I don't see anything for BUS OUTSTANDING. For CACHE and MISS I have
some options:
DATA_CACHE_MISSES: (counter: all)
L3_CACHE_MISSES: (counter: all)
Those two look promising, though I can't claim to be an expert.
regards, tom
On Wed, Nov 24, 2010 at 5:42 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
I don't see anything for BUS OUTSTANDING. For CACHE and MISS I have
some options:
DATA_CACHE_MISSES: (counter: all)
L3_CACHE_MISSES: (counter: all)
Those two look promising,
Per previous threats, I spent some time tonight running oprofile
(using the directions Tom Lane was foolish enough to provide me back
in May). I took testlibpq.c and hacked it up to just connect to the
server and then disconnect in a tight loop without doing anything
useful, hoping to measure the
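
For context, the harness described boils down to something like this (connection string and iteration count are placeholders; build against libpq with -lpq):

/* Connect and disconnect in a tight loop, doing no other work, so that
   backend startup/shutdown dominates the profile. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    const char *conninfo = "dbname=postgres";   /* placeholder */

    for (int i = 0; i < 1000; i++)
    {
        PGconn *conn = PQconnectdb(conninfo);

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }
        PQfinish(conn);
    }
    return 0;
}
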
On Wed, Nov 24, 2010 at 12:07 AM, Robert Haas robertmh...@gmail.com wrote:
Per previous threats, I spent some time tonight running oprofile
(using the directions Tom Lane was foolish enough to provide me back
in May). I took testlibpq.c and hacked it up to just connect to the
server and then
On 24.11.2010 07:07, Robert Haas wrote:
Per previous threats, I spent some time tonight running oprofile
(using the directions Tom Lane was foolish enough to provide me back
in May). I took testlibpq.c and hacked it up to just connect to the
server and then disconnect in a tight loop without