Greg Smith wrote:
This thread reminds me of Jignesh's "Proposal of tunable fix for
scalability of 8.4" thread from March, except with only a fraction of
the real-world detail. There are multiple high-profile locks causing
scalability concerns at user counts in the quadruple digits in the
Postgre
On Fri, 5 Jun 2009, Greg Smith wrote:
On Thu, 4 Jun 2009, Robert Haas wrote:
That's because this thread has altogether too much theory and
altogether too little gprof.
But running benchmarks and profiling is actual work; that's so much less fun
than just speculating about what's going on!
On Thu, 4 Jun 2009, Mark Mielke wrote:
da...@lang.hm wrote:
On Thu, 4 Jun 2009, Mark Mielke wrote:
An alternative approach might be: 1) Idle processes not currently running
a transaction do not need to be consulted for their snapshot (and other
related expenses) - if they are idle for a per
On Thu, 4 Jun 2009, Robert Haas wrote:
That's because this thread has altogether too much theory and
altogether too little gprof.
But running benchmarks and profiling is actual work; that's so much less
fun than just speculating about what's going on!
This thread reminds me of Jignesh's "Pr
da...@lang.hm wrote:
On Thu, 4 Jun 2009, Mark Mielke wrote:
You should really only have 1X or 2X as many threads as there are
CPUs waiting on one monitor. Beyond that is waste. The idle threads
can be pooled away, and only activated (with individual monitors
which can be far more easily and ef
On Thu, Jun 4, 2009 at 8:51 PM, wrote:
> if this is the case, how hard would it be to have threads add and remove
> themselves from some list as they get busy/become idle?
>
> I've been puzzled as I've been watching this conversation on what internal
> locking/lookup is happening that is causing t
On Thu, 4 Jun 2009, Mark Mielke wrote:
Kevin Grittner wrote:
James Mansion wrote:
I know that if you do use a large number of threads, you have to be
pretty adaptive. In our Java app that pulls data from 72 sources and
replicates it to eight, plus feeding it to filters which determine
what
On 6/4/09 3:08 PM, "Kevin Grittner" wrote:
> James Mansion wrote:
>> I'm sorry, but (in particular) UNIX systems have routinely
>> managed large numbers of runnable processes where the run queue
>> lengths are long without such an issue.
>
> Well, the OP is looking at tens of thousands of con
On Thu, 4 Jun 2009, Robert Haas wrote:
On Wed, Jun 3, 2009 at 5:09 PM, Scott Carey wrote:
On 6/3/09 11:39 AM, "Robert Haas" wrote:
On Wed, Jun 3, 2009 at 2:12 PM, Scott Carey wrote:
Postgres could fix its connection scalability issues -- that is entirely
independent of connection pooling.
Kevin Grittner wrote:
James Mansion wrote:
Kevin Grittner wrote:
Sure, but the architecture of those products is based around all
the work being done by "engines" which try to establish affinity to
different CPUs, and loop through the various tasks to be done. You
don't get a context
James Mansion wrote:
>> they spend a lot of time spinning around queue access to see if
>> anything has become available to do -- which causes them not to
>> play nice with other processes on the same box.
> UNIX systems have routinely managed large numbers of runnable
> processes where the r
James Mansion wrote:
> Kevin Grittner wrote:
>> Sure, but the architecture of those products is based around all
>> the work being done by "engines" which try to establish affinity to
>> different CPUs, and loop through the various tasks to be done. You
>> don't get a context switch storm becaus
Kevin Grittner wrote:
Sure, but the architecture of those products is based around all the
work being done by "engines" which try to establish affinity to
different CPUs, and loop through the various tasks to be done. You
don't get a context switch storm because you normally have the number
of e
On 6/3/09 7:32 PM, Janine Sisk wrote:
I'm sorry if this is a stupid question, but... I changed
default_statistics_target from the default of 10 to 100, restarted PG,
and then ran "vacuumdb -z" on the database. The plan is exactly the same
as before. Was I supposed to do something else? Do I need
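
A minimal way to check whether the higher target actually took effect and to
refresh the statistics for the table behind the slow query might look like
this (the table and column names here are made up for illustration):

  SHOW default_statistics_target;   -- should report 100 after the restart

  -- Re-gather statistics for the table in question ("vacuumdb -z" does this
  -- database-wide; ANALYZE VERBOSE shows how many rows were sampled).
  ANALYZE VERBOSE orders;

  -- Check that the planner now has more detailed statistics for the column
  -- used in the problem query.
  SELECT attname, n_distinct, most_common_vals
  FROM pg_stats
  WHERE tablename = 'orders' AND attname = 'customer_id';

  -- Then look at the plan again.
  EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;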
On Thu, Jun 4, 2009 at 2:04 PM, Scott Carey wrote:
> To clarify if needed:
>
> I'm not saying the two issues are unrelated. I'm saying that the
> relationship between connection pooling and a database is multi-dimensional,
> and the scalability improvement does not have a hard dependency on
> con
On 6/4/09 6:16 AM, "Robert Haas" wrote:
> On Thu, Jun 4, 2009 at 7:31 AM, Erik Aronesty wrote:
>> Seems like "VACUUM FULL" could figure out to do that too depending on
>> the bloat-to-table-size ratio ...
>>
>> - copy all rows to new table
>> - lock for a millisecond while renaming table
On 6/4/09 3:57 AM, "Robert Haas" wrote:
> On Wed, Jun 3, 2009 at 5:09 PM, Scott Carey wrote:
>> On 6/3/09 11:39 AM, "Robert Haas" wrote:
>>> On Wed, Jun 3, 2009 at 2:12 PM, Scott Carey wrote:
Postgres could fix its connection scalability issues -- that is entirely
independent of con
On 6/4/09 4:31 AM, "Erik Aronesty" wrote:
>> read the entry on pg_stat_all_tables
>
> yeah, it's running ... vacuum'ed last night
>
> it's odd, to me, that the performance would degrade so extremely
> (noticeably) over the course of one year on a table which has few
> insertions, no deletions,
Brian Herlihy wrote:
We have a problem with some of our query plans. One of our
tables is quite volatile, but postgres always uses the last
statistics snapshot from the last time it was analyzed for query
planning. Is there a way to tell postgres that it should not
trust the statistics for this
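
If the underlying issue is that the statistics go stale between analyzes on a
fast-changing table, one possible mitigation (a sketch only, with a
hypothetical table name and illustrative values, not a confirmed fix for this
case) is to have that table analyzed more aggressively:

  -- Analyze this table much more often than the global defaults would
  -- (per-table storage-parameter syntax; values are illustrative only).
  ALTER TABLE volatile_queue
    SET (autovacuum_analyze_scale_factor = 0.02,
         autovacuum_analyze_threshold = 500);

  -- Or re-analyze explicitly right after the bulk change that invalidates
  -- the old statistics.
  ANALYZE volatile_queue;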
Revisiting the thread a month back or so, I'm still investigating
performance problems with GiST indexes in Postgres.
Looking at http://wiki.postgresql.org/wiki/PostgreSQL_8.4_Open_Items I'd
like to clarify the contrib/seg issue. Contrib/seg is vulnerable to
pathological behaviour which is f
On Thu, Jun 4, 2009 at 7:31 AM, Erik Aronesty wrote:
> Seems like "VACUUM FULL" could figure out to do that too depending on
> the bloat-to-table-size ratio ...
>
> - copy all rows to new table
> - lock for a millisecond while renaming tables
> - drop old table.
You'd have to lock the table
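
A rough sketch of the copy-and-rename approach being discussed, with the
locking caveat made explicit (the table name is hypothetical):

  BEGIN;
  -- The lock has to cover the whole copy, not just the rename, or writes
  -- made while the rows are being copied would be lost.
  LOCK TABLE bloated IN ACCESS EXCLUSIVE MODE;

  CREATE TABLE bloated_compact
    (LIKE bloated INCLUDING DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES);
  INSERT INTO bloated_compact SELECT * FROM bloated;

  DROP TABLE bloated;
  ALTER TABLE bloated_compact RENAME TO bloated;
  COMMIT;
  -- Caveat: this is only a sketch; serial/owned sequences, views, foreign
  -- keys referencing the table, ownership and permissions all complicate a
  -- real rewrite and would need to be handled separately.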
> read the entry on pg_stat_all_tables
yeah, it's running ... vacuum'ed last night
it's odd, to me, that the performance would degrade so extremely
(noticeably) over the course of one year on a table which has few
insertions, no deletions, and daily updates of an integer non-null
column (stock lev
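
A quick way to see whether vacuum and analyze are actually keeping up with
that daily update traffic is to look at the table's row in pg_stat_all_tables
(the table name here is only a guess based on the description):

  SELECT relname,
         n_tup_upd, n_tup_hot_upd, n_dead_tup,
         last_vacuum, last_autovacuum,
         last_analyze, last_autoanalyze
  FROM pg_stat_all_tables
  WHERE relname = 'stock';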
On Wed, Jun 3, 2009 at 5:09 PM, Scott Carey wrote:
> On 6/3/09 11:39 AM, "Robert Haas" wrote:
>> On Wed, Jun 3, 2009 at 2:12 PM, Scott Carey wrote:
>>> Postgres could fix its connection scalability issues -- that is entirely
>>> independent of connection pooling.
>>
>> Really? I'm surprised. I
It's not that trivial with Oracle either. I guess you'd have to use shared
servers to get to that number of sessions. Most of the time they're not
activated by default (dispatchers is set to 0).
Granted, they are part of the 'main' product, so you just have to set up
dispatchers, shared servers, circu
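
For reference, the Oracle-side setup being alluded to looks roughly like the
following (parameter values are illustrative, not recommendations):

  ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=4)' SCOPE=BOTH;
  ALTER SYSTEM SET shared_servers = 50 SCOPE=BOTH;
  ALTER SYSTEM SET max_shared_servers = 200 SCOPE=BOTH;
  -- Clients then connect with (SERVER=SHARED) in the connect descriptor to
  -- be routed through a dispatcher instead of a dedicated server process.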