On Fri, 13 Mar 2009, Kevin Grittner wrote:
Tom Lane wrote:
Robert Haas writes:
I think that changing the locking behavior is attacking the problem
at the wrong level anyway.
Right. By the time a patch here could have any effect, you've
already lost the game --- having to deschedule and reschedule a
process is a large cost compared to the typical lock hold time.
Robert Haas writes:
> On Fri, Mar 13, 2009 at 10:06 PM, Tom Lane wrote:
>
>> I assume you meant effective_io_concurrency. We'd still need a special
>> case because the default is currently hard-wired at 1, not 0, if
>> configure thinks the function exists. Also there's a posix_fadvise call
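For readers following along: effective_io_concurrency works by issuing
posix_fadvise(POSIX_FADV_WILLNEED) hints for blocks a scan will need
shortly before actually reading them. A minimal sketch of the pattern,
not PostgreSQL's actual code (the BLCKSZ value and the fixed prefetch
distance are illustrative):

    /* Sketch of fadvise-based prefetching: hint the kernel about blocks
     * we will read shortly, then read them.  The prefetch distance plays
     * the role of effective_io_concurrency. */
    #include <fcntl.h>
    #include <unistd.h>

    #define BLCKSZ          8192
    #define PREFETCH_DEPTH  4       /* illustrative, cf. effective_io_concurrency */

    void
    scan_blocks(int fd, const long *blocknums, int nblocks)
    {
        char    buf[BLCKSZ];

        for (int i = 0; i < nblocks; i++)
        {
            /* hint PREFETCH_DEPTH blocks ahead of the current read */
            if (i + PREFETCH_DEPTH < nblocks)
                (void) posix_fadvise(fd,
                                     (off_t) blocknums[i + PREFETCH_DEPTH] * BLCKSZ,
                                     BLCKSZ, POSIX_FADV_WILLNEED);
            (void) pread(fd, buf, BLCKSZ, (off_t) blocknums[i] * BLCKSZ);
        }
    }

On a platform where posix_fadvise is a no-op, every one of those hint
calls is pure overhead, which is the complaint in this subthread.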
On Fri, Mar 13, 2009 at 10:06 PM, Tom Lane wrote:
> Gregory Stark writes:
>> Tom Lane writes:
>>> Ugh. So apparently, we actually need to special-case Solaris to not
>>> believe that posix_fadvise works, or we'll waste cycles uselessly
>>> calling a do-nothing function. Thanks, Sun.
>
>> Do we? Or do we just document that setting effective_cache_size
On Fri, Mar 13, 2009 at 7:08 PM, Tom Lane wrote:
> Vamsidhar Thummala writes:
> > I am wondering why we are subtracting the entire Seq Scan time of
> > Lineitem from the total time to calculate the HashJoin time.
>
> Well, if you're trying to identify the speed of the join itself and not
> how long it takes to provide the input for it, that seems like a
> sensible way to do it.
Gregory Stark writes:
> Tom Lane writes:
>> Ugh. So apparently, we actually need to special-case Solaris to not
>> believe that posix_fadvise works, or we'll waste cycles uselessly
>> calling a do-nothing function. Thanks, Sun.
> Do we? Or do we just document that setting effective_cache_size
Tom Lane writes:
> Alan Stange writes:
>> Gregory Stark wrote:
>>> AFAIK Opensolaris doesn't implement posix_fadvise() so there's no benefit.
>
>> It's implemented. I'm guessing it's not what you want to see though:
>> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libc/port
Vamsidhar Thummala writes:
> I am wondering why we are subtracting the entire Seq Scan time of Lineitem
> from the total time to calculate the HashJoin time.
Well, if you're trying to identify the speed of the join itself and not
how long it takes to provide the input for it, that seems like a
sensible way to do it.
Thanks for such a quick response.
On Fri, Mar 13, 2009 at 5:34 PM, Tom Lane wrote:
> > 2) Why is the Hash Join (top most) so slow?
>
> Doesn't look that bad to me. The net time charged to the HashJoin node
> is 186107.210 - 53597.555 - 112439.592 = 20070.063 msec. In addition it
> would be reason
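Spelled out, since the same numbers come up again below: EXPLAIN ANALYZE
charges a node for the time spent in its children, so the join's own
contribution is its total minus its inputs' totals (which figure belongs
to the Seq Scan side and which to the Hash side is my reading of the plan):

    net HashJoin time = 186107.210  (total actual time of the HashJoin node)
                      -  53597.555  (actual time of the Seq Scan input)
                      - 112439.592  (actual time of the Hash input)
                      =  20070.063 msec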
Vamsidhar Thummala writes:
> 1) The actual time on Seq Scan on Lineitem shows that the first record is
> fetched at time 0.022ms and the last record is fetched at 53.5s. Does it
> mean the sequential scan is completed within the first 53.4s (absolute time)?
No, it means that we spent a total of 53.5 seconds fetching rows from
that scan; the work is interleaved with the rest of the plan, so it is
not an absolute completion time.
From the documentation, I understand that range of actual time represents
the time taken for retrieving the first result and the last result
respectively. However, the following output of explain analyze confuses me:
GroupAggregate (cost=632185.58..632525.55 rows=122884 width=57) (actual time=18
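For reference, the two figures in "actual time=A..B" are the node's
startup time (elapsed work until its first output row) and total time
(elapsed work through its last row). Both include time spent in the
node's children, and both are per-loop averages (multiply by loops for
the grand total); neither is an absolute wall-clock timestamp. A made-up
line of the same shape (values invented for illustration):

    GroupAggregate (cost=632185.58..632525.55 rows=122884 width=57)
                   (actual time=18234.1..185003.4 rows=122884 loops=1)

reads as: the first group was emitted after 18234.1 ms of cumulative work
in this node and its children, and all 122884 groups were done after
185003.4 ms of cumulative work.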
Redid the test with waking up all waiters, irrespective of shared or
exclusive.
480: 64: Medium Throughput: 66688.000 Avg Medium Resp: 0.005
540: 72: Medium Throughput: 74355.000 Avg Medium Resp: 0.005
600: 80: Medium Throughput: 82920.000 Avg Medium Resp: 0.005
660: 88: Medium Throughput: 91466.0
Somebody else asked a question: This is actually a two-socket machine
(128 threads), but one socket is disabled by the OS so only 64 threads
are available... The idea being: let me choke one socket first with 100%
CPU ..
Forgot some data: with the second test above, CPU: 48% user, 18% sys,
35% idle
It's an interesting question, but the answer is most likely simply that the
client can't keep up. And in the real world, no matter how incredible your
connection pool is, there will be some inefficiency, there will be some network
delay, there will be some client side time, etc.
I'm still not s
Tom Lane wrote:
> Robert Haas writes:
>> I think that changing the locking behavior is attacking the problem
>> at the wrong level anyway.
>
> Right. By the time a patch here could have any effect, you've
> already lost the game --- having to deschedule and reschedule a
> process is a large cost compared to the typical lock hold time.
On 3/13/09 10:29 AM, "Scott Carey" wrote:
Now, with 0ms delay, no threading change:
Throughput is 136000/min @184 users, response time 13ms. Response time has not
jumped too drastically yet, but linear performance increases stopped at about
130 users or so. ProcArrayLock bus
On 3/13/09 10:16 AM, "Tom Lane" wrote:
Robert Haas writes:
> I think that changing the locking behavior is attacking the problem at
> the wrong level anyway.
Right. By the time a patch here could have any effect, you've already
lost the game --- having to deschedule and reschedule a process is a
large cost compared to the typical lock hold time.
On 3/13/09 9:42 AM, "Jignesh K. Shah" wrote:
Now with a modified Fix (not the original one that I proposed but
something that works like a heart valve: opens and shuts to a minimum
default way, thus controlling how many waiters are woken up)
Is this the server with 128-thread capability or 64?
Alan Stange writes:
> Gregory Stark wrote:
>> AFAIK Opensolaris doesn't implement posix_fadvise() so there's no benefit.
> It's implemented. I'm guessing it's not what you want to see though:
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libc/port/gen/posix_fadvise.c
Ugh.
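A special case of the kind Tom describes would presumably be a
compile-time test. Something like this hypothetical fragment
(USE_POSIX_FADVISE/USE_PREFETCH are the flavor of symbols PostgreSQL uses
for this, but the Solaris exclusion shown is my illustration, not a
committed change):

    /* Hypothetical: refuse fadvise-based prefetch on Solaris, where
     * libc accepts posix_fadvise() but the call does nothing. */
    #if defined(USE_POSIX_FADVISE) && defined(POSIX_FADV_WILLNEED) && \
        !defined(__sun)
    #define USE_PREFETCH
    #endif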
Scott Carey wrote:
On 3/13/09 8:55 AM, "Kevin Grittner" wrote:
>>> "Jignesh K. Shah" wrote:
> usr sys wt idl sze
> 38 11 0 50 64
The fact that you're maxing out at 50% CPU utilization has me
wondering -- are there really 64 CPUs here, or are there 32 CPUs with
"hyper
Robert Haas writes:
> I think that changing the locking behavior is attacking the problem at
> the wrong level anyway.
Right. By the time a patch here could have any effect, you've already
lost the game --- having to deschedule and reschedule a process is a
large cost compared to the typical lock hold time.
On Fri, 13 Mar 2009, Jignesh K. Shah wrote:
I can use dbt2, dbt3 tests to see how 8.4 performs and compare it with
8.3?
That would be very helpful. There's been some work at updating the DTrace
capabilities available as well; you might compare what that's reporting too.
* Visibility map - Reduce Vacuum overhead
On 3/13/09 8:55 AM, "Kevin Grittner" wrote:
>>> "Jignesh K. Shah" wrote:
> usr sys wt idl sze
> 38 11 0 50 64
The fact that you're maxing out at 50% CPU utilization has me
wondering -- are there really 64 CPUs here, or are there 32 CPUs with
"hyperthreading" technology (or something conc
Now with a modified Fix (not the original one that I proposed but
something that works like a heart valve: opens and shuts to a minimum
default way, thus controlling how many waiters are woken up)
Time: Users: Throughput: Response
60: 8: Medium Throughput: 7774.000 Avg Medium Resp: 0.004
120: 1
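To make the contrast with the earlier wake-them-all run concrete, here is
one way the two wake-up policies could look in plain pthreads. This is a
sketch of the policy only, with invented names and an invented batch size;
it is not the actual patch and not PostgreSQL's LWLock code:

    #include <pthread.h>

    #define WAKE_BATCH 4            /* hypothetical cap on waiters woken at once */

    typedef struct
    {
        pthread_mutex_t mutex;
        pthread_cond_t  wakeup;
        int             nwaiters;   /* threads blocked on the condition */
        int             grants;     /* wakeups handed out, not yet consumed */
    } valve_lock;

    /* Block until granted a wakeup. */
    void
    valve_wait(valve_lock *lk)
    {
        pthread_mutex_lock(&lk->mutex);
        lk->nwaiters++;
        while (lk->grants == 0)
            pthread_cond_wait(&lk->wakeup, &lk->mutex);
        lk->grants--;
        lk->nwaiters--;
        pthread_mutex_unlock(&lk->mutex);
    }

    /* The earlier experiment: wake everyone, flooding the run queue. */
    void
    valve_release_all(valve_lock *lk)
    {
        pthread_mutex_lock(&lk->mutex);
        lk->grants += lk->nwaiters;
        pthread_cond_broadcast(&lk->wakeup);
        pthread_mutex_unlock(&lk->mutex);
    }

    /* The "heart valve": open only wide enough for a bounded batch. */
    void
    valve_release_some(valve_lock *lk)
    {
        pthread_mutex_lock(&lk->mutex);
        int n = lk->nwaiters < WAKE_BATCH ? lk->nwaiters : WAKE_BATCH;
        lk->grants += n;
        while (n-- > 0)
            pthread_cond_signal(&lk->wakeup);
        pthread_mutex_unlock(&lk->mutex);
    }

The point of the bounded batch is that a release never makes more
processes runnable than the scheduler can usefully run, which is the
"opens and shuts to a minimum" behavior described above.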
>>> "Jignesh K. Shah" wrote:
> 600: 80: Medium Throughput: 82632.000 Avg Medium Resp: 0.005
Personally, I'd be pretty interested in seeing what the sampling shows
in a steady state at this level. Any blocking at this level which
wasn't waiting for input or output in communications with the client
>>> "Jignesh K. Shah" wrote:
> usr sys wt idl sze
> 38 11 0 50 64
The fact that you're maxing out at 50% CPU utilization has me
wondering -- are there really 64 CPUs here, or are there 32 CPUs with
"hyperthreading" technology (or something conceptually similar)?
-Kevin
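(If this is Solaris mpstat -a processor-set output, which is my
assumption, then usr/sys/wt/idl are percent user, system, wait-I/O and
idle time, and sze is the number of CPUs in the set: idl=50 on sze=64
means roughly 64 x 0.50 = 32 hardware threads' worth of capacity sitting
idle.)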
Gregory Stark wrote:
A minute ago I said:
AFAIK Opensolaris doesn't implement posix_fadvise() so there's no benefit. It
would be great to hear if you could catch the ear of the right people to get
an implementation committed. Depending on how the i/o scheduler system is
written it might not
In general, I suggest that it is useful to run tests with a few
different types of pacing. Zero delay pacing will not have a realistic
number of connections, but will expose bottlenecks that are
universal, and less controversial. Small latency (100ms to 1s) tests
are easy to make from the zero delay tests.
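Concretely, the difference between the pacing regimes is just the think
time inserted between transactions. A sketch, where run_transaction() and
the parameters are placeholders for whatever the benchmark actually does:

    /* Paced test client: zero delay saturates the server outright; a
     * small per-transaction think time models more realistic users. */
    #include <unistd.h>

    void run_transaction(void);     /* placeholder, supplied by the benchmark */

    void
    client_loop(long think_usec, long iterations)
    {
        for (long i = 0; i < iterations; i++)
        {
            run_transaction();
            if (think_usec > 0)
                usleep(think_usec); /* e.g. 100000..1000000 for 100ms..1s */
        }
    }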
A minute ago I said:
AFAIK Opensolaris doesn't implement posix_fadvise() so there's no benefit. It
would be great to hear if you could catch the ear of the right people to get
an implementation committed. Depending on how the i/o scheduler system is
written it might not even be hard -- the Li
"Jignesh K. Shah" writes:
> Gregory Stark wrote:
>> Keep in mind when you do this that it's not interesting to test a number of
>> connections much larger than the number of processors you have. Once the
>> system reaches 100% cpu usage it would be a misconfigured connection pooler
>> that kept m
"Jignesh K. Shah" writes:
> Can we do a vote on which specific performance features we want to test?
>
> Many of the improvements may not be visible through these standard tests, so
> feedback on testing methodology for those is also appreciated.
> * Visibility map - Reduce Vacuum overhead - (I thin
Gregory Stark wrote:
"Jignesh K. Shah" writes:
Scott Carey wrote:
On 3/12/09 11:37 AM, "Jignesh K. Shah" wrote:
In general, I suggest that it is useful to run tests with a few different
types of pacing. Zero delay pacing will not have a realistic number of
connections, but will expose bottlenecks that are universal, and less
controversial.
"Jignesh K. Shah" writes:
> Scott Carey wrote:
>> On 3/12/09 11:37 AM, "Jignesh K. Shah" wrote:
>>
>> In general, I suggest that it is useful to run tests with a few different
>> types of pacing. Zero delay pacing will not have a realistic number of
>> connections, but will expose bottlenecks that are universal, and less
>> controversial.
"Jignesh K. Shah" writes:
> Scott Carey wrote:
>> On 3/12/09 11:37 AM, "Jignesh K. Shah" wrote:
>>
>> In general, I suggest that it is useful to run tests with a few different
>> types of pacing. Zero delay pacing will not have realistic number of
>> connections, but will expose bottlenecks that
Greg Smith wrote:
On Thu, 12 Mar 2009, Jignesh K. Shah wrote:
As soon as I get more "cycles" I will try variations of it but it
would help if others can try it out in their own environments to see
if it helps their instances.
What you should do next is see whether you can remove the bottleneck
Scott Carey wrote:
On 3/12/09 11:37 AM, "Jignesh K. Shah" wrote:
And again, this is the third time I am saying it: the test users also
have some latency built into them, which is what is generally
exploited to get more users than the number of CPUs on the system, but
that's the point
On Fri, Mar 13, 2009 at 9:28 AM, sathiya psql wrote:
> for profiling, you can also use the epqa.
>
> http://epqa.sourceforge.net/
or PGSI: http://bucardo.org/pgsi/
But it requires a syslog date format we don't use here, so I wasn't
able to test it :/
--
F4FQM
Kerunix Flan
Laurent Laborde
for profiling, you can also use the epqa.
http://epqa.sourceforge.net/
On Fri, Mar 13, 2009 at 3:02 AM, Laurent Laborde wrote:
> On Wed, Mar 11, 2009 at 11:42 PM, Frank Joerdens
> wrote:
> >
> > effective_cache_size= 4GB
>
> Only 4GB with 64GB of RAM?
>
> About logging, we have 3 p
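For what it's worth, effective_cache_size allocates nothing; it only
tells the planner how much of the data set the OS page cache can
plausibly hold, so index scans are not unduly penalized. A common rule of
thumb (an assumption on my part, not something established in this
thread) is 1/2 to 3/4 of RAM on a dedicated box:

    # postgresql.conf -- rule-of-thumb starting point for a dedicated
    # 64GB server (illustrative value, tune for your workload)
    effective_cache_size = 48GB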