>> The problem with our "cheap" connection pool is that the persistent
>> connections don't seem to be available immediately after they're
>> released by the previous process. pg_close doesn't seem to help the
>> situation. We understand that pg_close doesn't really close a
>> persistent connection.
FYI - We have implemented a number of changes...
a) some query and application optimizations
b) connection pool (on the cheap: set the max number of clients on the
Postgres server and created a blocking wrapper around pg_pconnect that
will block until it gets a connection; a rough sketch follows below)
c) moved the application server to a
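For anyone curious, here is a minimal sketch of what the blocking wrapper
in (b) can look like. This is illustrative only: the function name, retry
interval, and timeout are made up, and it assumes max_connections has been
capped in postgresql.conf so that pg_pconnect() fails quickly when the
server is full.

    <?php
    // Minimal sketch of a blocking pg_pconnect() wrapper (illustrative only).
    // Assumes max_connections is capped in postgresql.conf; all names here
    // are hypothetical, not from the original application.
    function pconnect_blocking($conninfo, $timeout_secs = 30)
    {
        $deadline = time() + $timeout_secs;
        do {
            // @ suppresses the warning pg_pconnect() raises when the server
            // refuses the connection (e.g. "too many clients already").
            $conn = @pg_pconnect($conninfo);
            if ($conn !== false) {
                return $conn;      // a new or reused persistent connection
            }
            usleep(100000);        // back off 100 ms before retrying
        } while (time() < $deadline);
        return false;              // caller decides how to handle the timeout
    }

The polling is crude; an external pooler such as pgbouncer would avoid the
busy-wait, but the sketch matches the "on the cheap" approach described
above. Note that a persistent connection belongs to the individual PHP
(e.g. Apache child) process that opened it, and pg_close() does not hand it
back to a shared pool, which is consistent with the behavior described at
the top of the thread.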
> Bob, you might want to just send plain text, to avoid such problems.
Will do. Looks like gmail's interface does it nicely.
>
> -Kevin
On Tue, Jan 12, 2010 at 12:12 PM, Matthew Wakeling wrote:
> On Mon, 11 Jan 2010, Bob Dusek wrote:
>
>> How do I learn more about the actual lock contention in my db? Lock
>> contention makes some sense. Each of the 256 requests is relatively
>> similar. So, I don
>
>
> I haven't been keeping up on the hardware, so I defer to you on
> that. It certainly seems like it would fit with the symptoms. On
> the other hand, I haven't seen anything yet to convince me that it
> *couldn't* be a client-side or network bottleneck, or the sort of
> lock contention bottleneck.
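To follow up on the lock contention question quoted above: this is not from
the thread, just a suggestion, but one quick first look is to ask the server
which backends are waiting and which lock requests have not been granted. On
8.4 the relevant columns are pg_stat_activity.procpid / waiting /
current_query and pg_locks.granted. A rough sketch (the connection string
and output handling are placeholders):

    <?php
    // Rough sketch: list waiting backends and ungranted lock requests on 8.4.
    // The connection string is a placeholder; adjust for your environment.
    $conn = pg_connect('host=localhost dbname=yourdb user=youruser');

    $waiting = pg_query($conn,
        "SELECT procpid, waiting, current_query
           FROM pg_stat_activity
          WHERE waiting");
    while ($row = pg_fetch_assoc($waiting)) {
        print_r($row);             // which queries are blocked right now
    }

    $ungranted = pg_query($conn,
        "SELECT locktype, relation::regclass AS relation, mode, pid
           FROM pg_locks
          WHERE NOT granted");
    while ($row = pg_fetch_assoc($ungranted)) {
        print_r($row);             // which lock requests they are blocked on
    }

Sampling this a few times while the 256 concurrent requests are running
should show whether backends are genuinely waiting on locks or just busy.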
On Mon, Jan 11, 2010 at 1:20 PM, Kevin Grittner wrote:
> Bob Dusek wrote:
> > Kevin Grittner wrote:
> >> Bob Dusek wrote:
>
> >> Anyway, my benchmarks tend to show that best throughput occurs at
> >> about (CPU_count * 2) plus effective_spindle_count.
>
>> RAID-0
>>
>
> And how many drives?
>
> Just two.
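Just to make the rule of thumb above concrete: with the 16 cores mentioned
elsewhere in the thread and a two-drive RAID-0 (so an effective spindle
count of roughly 2), (CPU_count * 2) + effective_spindle_count works out to
(16 * 2) + 2 = 34 connections. That is a starting point for a pool size to
benchmark against, not a hard limit.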
>> We have an application server that is processing requests. Each request
>> consists of a combination of selects, inserts, and deletes. We actually see
>> degradation when we get more than 40 concurrent requests. The exact number
>> of queries
On Mon, Jan 11, 2010 at 12:17 PM, Kevin Grittner <kevin.gritt...@wicourts.gov> wrote:
> Bob Dusek wrote:
> > Scott Marlowe wrote:
> >> Bob Dusek wrote:
>
> >>> 4X E7420 Xeon, Four cores (for a total of 16 cores)
>
> >> What method of striped
>
>
> > This is to be expected, to some extent, as we would expect some
> > performance degradation with higher utilization. But the hardware
> > doesn't appear to be very busy, and that's where we're hoping for
> > some help.
>
> It's likely in io wait.
>
> >> What do the following commands tell
On Mon, Jan 11, 2010 at 9:07 AM, A. Kretschmer <andreas.kretsch...@schollglas.com> wrote:
> In response to Bob Dusek:
> > Hello,
> >
> > We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...
> >
> > 4X E7420 Xeon, Four cores (for a total of 16 cores)
On Mon, Jan 11, 2010 at 8:50 AM, Scott Marlowe wrote:
> On Mon, Jan 11, 2010 at 6:44 AM, Bob Dusek wrote:
> > Hello,
> >
> > We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...
> >
> > 4X E7420 Xeon, Four cores (for a total of 16 cores)
Hello,
We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...
4X E7420 Xeon, Four cores (for a total of 16 cores)
2.13 GHz, 8 MB cache, 1066 MHz FSB
32 GB of RAM
15K RPM drives in striped RAID
Things run fine, but when we get a lot of concurrent queries running, we see
a pretty severe performance degradation.
Hello all,
I've been running performance tests on various build configurations of
Postgres on and off for a month or so, and I've just come across some
unexpected results.
When I build Postgres as follows:
# (Scenario 1)
./configure --prefix=/usr --libdir=/usr/lib --bindir=/usr/bin
--includedir=/us