In my opinion it depends on the application, its priority, and whether or not
it is a commercially sold product. Depending on your needs, you might want to
consider having a third-party vendor with expertise in this process review and
help tune the application. One vendor
On Friday 04 June 2010 15:59:05 Tom Lane wrote:
Marc Cousin cousinm...@gmail.com writes:
I hope I'm not going to expose an already known problem, but I couldn't
find it in the mailing list archives (I only found
http://archives.postgresql.org/pgsql-hackers/2009-12/msg01543.php).
You sure this
On Friday 04 June 2010 15:59:05, Tom Lane wrote:
Marc Cousin cousinm...@gmail.com writes:
I hope I'm not going to expose an already known problem, but I couldn't
find it in the mailing list archives (I only found
http://archives.postgresql.org/pgsql-hackers/2009-12/msg01543.php).
You sure
Is this a bulk insert? Are you wrapping your statements within
transactions?
How many columns in the table? What do the table statistics look like?
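A common first check for slow bulk loads is to batch the statements into one
transaction, since each standalone INSERT commits (and flushes WAL)
individually. A minimal sketch, assuming a hypothetical events table:

    BEGIN;
    INSERT INTO events (id, payload) VALUES (1, 'a');
    INSERT INTO events (id, payload) VALUES (2, 'b');
    -- ... remaining rows ...
    COMMIT;  -- one commit, and one WAL flush, for the whole batch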
On Fri, Jun 4, 2010 at 9:21 AM, Michael Gould
mgo...@intermodalsoftwaresolutions.net wrote:
In my opinion it depends on the application,
Hi
I am performing DB insertions and updates for 3000+ records, and while doing so
CPU utilization goes
to 100%, with 67% of the CPU used by postgres.
I have also optimized the queries.
Is there any way to reduce the CPU utilization of postgres?
I am currently using postgres
As promised, I did a tiny benchmark - basically, 8 empty tables are
filled with 100k rows each within 8 transactions (somewhat typical of
my application). The test machine has 4 cores, 64 GB RAM and RAID1 10k
drives for data.
# INSERTs into a TEMPORARY table:
[joac...@testsrv scaling]$
On 02.06.2010 12:03, Pierre C wrote:
Usually WAL causes a much larger performance hit than this.
Since the following command:
CREATE TABLE tmp AS SELECT n FROM generate_series(1,1000000) AS n
which inserts 1M rows takes 1.6 seconds on my desktop, your 800k rows
INSERT taking more than 3
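For comparison, the temporary-table variant of the same experiment can be
timed directly in psql; a sketch (table names are illustrative):

    \timing on
    CREATE TABLE perm_t AS SELECT n FROM generate_series(1,1000000) AS n;
    CREATE TEMPORARY TABLE temp_t AS SELECT n FROM generate_series(1,1000000) AS n;
    -- the TEMPORARY variant skips WAL for the table contents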
On 05/26/2010 06:03 PM, Joachim Worringen wrote:
On 25.05.2010 12:41, Andres Freund wrote:
On Tuesday 25 May 2010 11:00:24 Joachim Worringen wrote:
Thanks. So, the Write-Ahead-Logging (being used or not) does not matter?
It does matter quite significantly in my experience. Both from an I/O
On Fri, May 28, 2010 at 4:04 AM, Joachim Worringen
joachim.worrin...@iathh.de wrote:
On 05/26/2010 06:03 PM, Joachim Worringen wrote:
On 25.05.2010 12:41, Andres Freund wrote:
On Tuesday 25 May 2010 11:00:24 Joachim Worringen wrote:
Thanks. So, the Write-Ahead-Logging (being used or not)
On 25.05.2010 12:41, Andres Freund wrote:
On Tuesday 25 May 2010 11:00:24 Joachim Worringen wrote:
Thanks. So, the Write-Ahead-Logging (being used or not) does not matter?
It does matter quite significantly in my experience. Both from an I/O and a CPU
overhead perspective.
O.k., looks as if
WAL matters for performance. That is why it is advisable to have it on a
separate drive :)
Greetings,
in
http://archives.postgresql.org/message-id/1056648218.7041.11.ca...@jester,
it is stated that the performance of temporary tables is the same as that of a
regular table, but without
WAL on the table contents.
I have a datamining-type application which makes heavy use of temporary
Temporary tables are handled pretty much like regular tables. The
magic happens at the schema level: a new schema is set up for each connection, so
that it can access its own temporary tables.
Temporary tables also are not autovacuumed.
And those are pretty much all of the differences.
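A quick way to see the schema magic in action, sketched with an illustrative
table name:

    CREATE TEMPORARY TABLE staging (id int, val text);
    -- the table lands in a connection-private schema such as pg_temp_3
    SELECT n.nspname
    FROM pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relname = 'staging';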
On 25.05.2010 10:49, Grzegorz Jaśkiewicz wrote:
Temporary tables are handled pretty much like regular tables. The
magic happens at the schema level: a new schema is set up for each connection, so
that it can access its own temporary tables.
Temporary tables also are not autovacuumed.
And those are pretty
2010/5/25 Joachim Worringen joachim.worrin...@iathh.de:
On 25.05.2010 10:49, Grzegorz Jaśkiewicz wrote:
Temporary tables are handled pretty much like regular tables. The
magic happens at the schema level: a new schema is set up for each connection, so
that it can access its own temporary tables.
On 25.05.2010 11:15, Thom Brown wrote:
2010/5/25 Joachim Worringen joachim.worrin...@iathh.de:
And, is there anything like RAM-only tables? I really don't care whether the
staging data is lost in the rare event of a machine crash, or whether the
query crashes due to lack of memory (I make sure
WAL does the same thing for the DB that journaling does for the FS.
Plus it allows point-in-time recovery (PITR).
As for the RAM, it will be in RAM as long as the OS decides to keep it in
the RAM cache, and/or it's in the shared buffers memory.
Unless you have a lot of doubt about the two, I don't think it makes
too much
On 25.05.2010 11:38, Grzegorz Jaśkiewicz wrote:
WAL does the same thing for the DB that journaling does for the FS.
Plus it allows point-in-time recovery (PITR).
As for the RAM, it will be in RAM as long as the OS decides to keep it in
the RAM cache, and/or it's in the shared buffers memory.
Or until I commit the
On Tuesday 25 May 2010 11:00:24 Joachim Worringen wrote:
On 25.05.2010 10:49, Grzegorz Jaśkiewicz wrote:
Temporary tables are handled pretty much like regular tables. The
magic happens at the schema level: a new schema is set up for each connection, so
that it can access its own temporary tables.
On Wed, May 12, 2010 at 1:45 AM, venu madhav venutaurus...@gmail.com wrote:
[Venu] Yes, autovacuum is running every hour; I can see it in the log
messages. All of the autovacuum configuration options are disabled except that it
should run every hour. This application runs on an embedded box, so
On Wed, May 12, 2010 at 3:17 AM, Jorge Montero
jorge_mont...@homedecorators.com wrote:
First, are you sure you are getting autovacuum to run hourly? Autovacuum
will only vacuum when certain configuration thresholds are reached. You can
set it to only check for those thresholds every so
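A sketch of how one might check those thresholds against the statistics views
(the default trigger point is autovacuum_vacuum_threshold +
autovacuum_vacuum_scale_factor * reltuples, i.e. 50 + 0.2 * reltuples unless
overridden):

    SELECT s.relname,
           s.n_dead_tup,
           50 + 0.2 * c.reltuples AS default_vacuum_trigger
    FROM pg_stat_user_tables s
    JOIN pg_class c ON c.oid = s.relid
    ORDER BY s.n_dead_tup DESC;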
PM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Performance issues when the number of records are around
10 Million
Hi all,
In my database application, I have a table whose record count can reach 10M,
and insertions can happen at a fast rate, like 100 insertions per second
On Wed, May 12, 2010 at 5:25 PM, Kevin Grittner kevin.gritt...@wicourts.gov
wrote:
venu madhav wrote:
AND e.timestamp >= '1270449180'
AND e.timestamp < '1273473180'
ORDER BY
e.cid DESC,
e.cid DESC
limit 21
offset 10539780
The second column acts as a secondary key for sorting
On Wed, May 12, 2010 at 7:26 PM, Kevin Grittner kevin.gritt...@wicourts.gov
wrote:
venu madhav venutaurus...@gmail.com wrote:
If the records are more in the interval,
How do you know that before you run your query?
I calculate the count first.
This and other comments suggest that
venu madhav venutaurus...@gmail.com wrote:
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
I calculate the count first.
This and other comments suggest that the data is totally static
while this application is running. Is that correct?
No, the data gets added when the application is
venu madhav wrote:
AND e.timestamp >= '1270449180'
AND e.timestamp < '1273473180'
ORDER BY
e.cid DESC,
e.cid DESC
limit 21
offset 10539780
The second column acts as a secondary key for sorting if the
primary sorting key is a different column. For this query both of
them are same.
venu madhav venutaurus...@gmail.com wrote:
If the records are more in the interval,
How do you know that before you run your query?
I calculate the count first.
This and other comments suggest that the data is totally static
while this application is running. Is that correct?
If
On 5/12/10 4:55 AM, Kevin Grittner wrote:
venu madhav wrote:
we display in sets of 20/30 etc. The user also has the option to
browse through any of those records, hence the limit and offset.
Have you considered alternative techniques for paging? You might
use values at the edges of the page
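A sketch of that keyset (edge-value) paging technique against the query in
this thread, with a placeholder for the last cid shown on the previous page:
instead of a huge OFFSET, seek past the edge value so the index can locate
the page directly.

    SELECT e.cid, e.timestamp
    FROM event e
    WHERE e.timestamp >= '1270449180'
      AND e.timestamp < '1273473180'
      AND e.cid < :last_seen_cid   -- edge value remembered from the previous page
    ORDER BY e.cid DESC
    LIMIT 21;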
Hi all,
In my database application, I have a table whose record count can reach 10M,
and insertions can happen at a fast rate, like 100 insertions per second in
the peak times. I configured postgres to do autovacuum on an hourly basis. I
have a frontend GUI application in CGI which displays the data
venu madhav venutaurus...@gmail.com wrote:
When I try to get the last twenty records from the database, it
takes around 10-15 mins to complete the operation.
Making this a little easier to read (for me, at least) I get this:
select e.cid, timestamp, s.sig_class, s.sig_priority,
-performance@postgresql.org
Subject: [PERFORM] Performance issues when the number of records are around 10
Million
Hi all,
In my database application, I have a table whose record count can reach 10M, and
insertions can happen at a fast rate, like 100 insertions per second in the
peak times. I
* select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE
s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp
< '1273473180' ORDER BY
Hi,
I have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3. I have
noticed that on the 8.4.* versions, a lot of our code is either taking
much longer to complete, or never completing. I think I have isolated
the problem to queries using in(), not in() or not exists(). I've put
On Tue, Apr 20, 2010 at 12:38 PM, Roger Ging rg...@musicreports.com wrote:
Hi,
I have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3. I have
noticed that on the 8.4.* versions, a lot of our code is either taking much
longer to complete, or never completing. I think I have isolated
Scott Marlowe scott.marl...@gmail.com writes:
On Tue, Apr 20, 2010 at 12:38 PM, Roger Ging rg...@musicreports.com wrote:
I have access to servers running 8.3.1, 8.3.8, 8.4.2 and 8.4.3. I have
noticed that on the 8.4.* versions, a lot of our code is either taking much
longer to complete, or
On Mon, 29 Mar 2010, randa...@bioinfo.wsu.edu wrote:
WHERE ... lower(n.name) LIKE 'Scaffold:scaffold_163:1000..1199%' ...
I'm sure you noticed that this is never going to return any rows?
(lower(n.name) can never contain the uppercase 'S' that the pattern requires.)
Matthew
Hi,
I am querying a Postgresql 8.3 database table that has approximately 22 million
records. The (explain analyze) query is listed below:
gdr_gbrowse_live= explain analyze SELECT
f.id,f.object,f.typeid,f.seqid,f.start,f.end,f.strand FROM feature as f, name
as n WHERE (n.id=f.id AND
randa...@bioinfo.wsu.edu writes:
I can see I am hitting an index that I created using the
varchar_pattern_ops setting. This is very fast and performs as I would
expect. However, when my application, GBrowse, accesses the database, I see
this in my slow query log:
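For reference, the kind of index being described might look like this; a
sketch, since the actual definition is not quoted in the thread:

    -- supports LIKE 'prefix%' searches on lower(name) in non-C locales
    CREATE INDEX name_lower_pattern_idx
        ON name (lower(name) varchar_pattern_ops);
    -- the query must use the same expression to be able to use the index
    SELECT * FROM name WHERE lower(name) LIKE 'scaffold:scaffold_163:%';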
...@sss.pgh.pa.us
To: randa...@bioinfo.wsu.edu
Cc: pgsql-performance@postgresql.org
Sent: Monday, March 29, 2010 10:00:03 AM
Subject: Re: [PERFORM] Performance regarding LIKE searches
randa...@bioinfo.wsu.edu writes:
I can see I am hitting an index that I created using the
varchar_pattern_ops
- Original Message -
From: Tom Lane t...@sss.pgh.pa.us
To: randa...@bioinfo.wsu.edu
Cc: pgsql-performance@postgresql.org
Sent: Monday, March 29, 2010 10:00:03 AM
Subject: Re: [PERFORM] Performance regarding LIKE searches
randa...@bioinfo.wsu.edu writes:
I can see I am hitting an index that I created using
On Thu, Mar 25, 2010 at 11:56 PM, Eliot Gable
egable+pgsql-performa...@gmail.com wrote:
How many rows min/max/avg are coming back in your refcursors? Are you
using cursors in order to return multiple complex data structures
(sets, etc) in a single function call?
I think the largest number
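For context, returning several result sets from one call via named refcursors
looks roughly like this; a minimal sketch with hypothetical names:

    CREATE OR REPLACE FUNCTION get_report() RETURNS SETOF refcursor
    LANGUAGE plpgsql AS $$
    DECLARE
        hdr refcursor := 'hdr_cur';        -- assigning a string fixes the portal name
        detail refcursor := 'detail_cur';
    BEGIN
        OPEN hdr FOR SELECT 1 AS id;
        RETURN NEXT hdr;
        OPEN detail FOR SELECT g AS n FROM generate_series(1,5) AS g;
        RETURN NEXT detail;
    END;
    $$;

    BEGIN;
    SELECT get_report();
    FETCH ALL FROM hdr_cur;     -- first result set
    FETCH ALL FROM detail_cur;  -- second result set
    COMMIT;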
On 26/03/10 03:56, Eliot Gable wrote:
I really am chasing milliseconds here, and I appreciate all your feedback.
You've given me a relatively large number of possible optimizations I can
try out. I will definitely try out the libpqtypes. That sounds like a
promising way to further cut down on
On Tue, Mar 23, 2010 at 5:00 PM, Eliot Gable
egable+pgsql-performa...@gmail.com wrote:
The complex type contains roughly 25 fields, mostly text, plus another 10
REFCURSORs.
How many rows min/max/avg are coming back in your refcursors? Are you
using cursors in order to return multiple complex
On Thu, Mar 25, 2010 at 10:00 PM, Merlin Moncure mmonc...@gmail.com wrote:
On Tue, Mar 23, 2010 at 5:00 PM, Eliot Gable
egable+pgsql-performa...@gmail.com wrote:
The complex type contains roughly 25 fields, mostly text, plus another 10
REFCURSORs.
How
I would greatly appreciate any advice anyone could give me in terms of
performance tuning a large PL/pgSQL stored procedure. First, I should point
out that I have read a considerable amount of information in the online
PostgreSQL documentation and on Google about optimizing SQL queries and
Hi Jeff,
Are you running VACUUM (without FULL) regularly? And if so, is that
insufficient?
Unfortunately, we have not run vacuumlo as often as we would like, and that
has caused a lot of garbage blobs to get generated by our application.
You can always expect some degree of bloat. Can you
Hello,
We are running into some performance issues with running VACUUM FULL on the
pg_largeobject table in Postgres (8.4.2 under Linux), and I'm wondering if
anybody here might be able to suggest anything to help address the issue.
Specifically, when running VACUUM FULL on the pg_largeobject
On Tue, 2010-01-19 at 12:19 -0800, PG User 2010 wrote:
Hello,
We are running into some performance issues with running VACUUM FULL
on the pg_largeobject table in Postgres (8.4.2 under Linux), and I'm
wondering if anybody here might be able to suggest anything to help
address the issue.
Are
Bob Dusek redu...@gmail.com writes:
So, pgBouncer is pretty good. It doesn't appear to be as good as
limiting TCON and using pconnect, but since we can't limit TCON in a
production environment, we may not have a choice.
You can still use pconnect() with pgbouncer, in transaction mode, if
your
So, pgBouncer is pretty good. It doesn't appear to be as good as
limiting TCON and using pconnect, but since we can't limit TCON in a
production environment, we may not have a choice.
Actually, you can: use lighttpd and PHP/FastCGI.
Lighttpd handles the network stuff, and
Jaime Casanova jcasa...@systemguards.com.ec writes:
why don't we show some of that info in EXPLAIN?
Lack of round tuits; plus concern about breaking programs that read
EXPLAIN output, which I guess will be alleviated in 8.5.
the reason I say most of the temp files is that when I removed
On Wed, Jan 13, 2010 at 1:31 AM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
On Mon, Jan 11, 2010 at 3:18 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Hmm. Not clear where the temp files are coming from, but it's *not* the
sort --- the internal sort ended line shows that that sort never went
Robert Haas robertmh...@gmail.com writes:
I had an idea at one point of making explain show the planned and
actual # of batches for each hash join. I believe that actual # of
batches > 1 is isomorphic to "hash join went to disk". The code is
actually pretty easy; the hard part is figuring out
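One way to see those spills today, sketched with the existing instrumentation
rather than the proposed EXPLAIN line: log every temporary file the query
creates.

    SET log_temp_files = 0;  -- 0 = log all temp files, regardless of size
    -- run the suspect query, then check the server log for lines like:
    -- LOG:  temporary file: path base/pgsql_tmp/pgsql_tmpNNNN.N, size ...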
On Wed, Jan 13, 2010 at 10:42 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
I had an idea at one point of making explain show the planned and
actual # of batches for each hash join. I believe that actual # of
batches > 1 is isomorphic to "hash join went to
On Wed, Jan 13, 2010 at 11:11 AM, Robert Haas robertmh...@gmail.com wrote:
Well, what about when we're just doing EXPLAIN, not EXPLAIN ANALYZE?
It'll add another line to the output for the expected number of
batches.
and when we are in EXPLAIN ANALYZE the real number as well?
On Wed, Jan 13, 2010 at 11:14 AM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
On Wed, Jan 13, 2010 at 11:11 AM, Robert Haas robertmh...@gmail.com wrote:
Well, what about when we're just doing EXPLAIN, not EXPLAIN ANALYZE?
It'll add another line to the output for the expected number of
Robert Haas robertmh...@gmail.com writes:
Yeah. My question is whether it's acceptable to add an extra line to
the EXPLAIN output for every hash join, even w/o ANALYZE.
We could add it if either VERBOSE or ANALYZE appears. Not sure if
that's just too much concern for backwards compatibility,
On Wed, Jan 13, 2010 at 11:53 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
Yeah. My question is whether it's acceptable to add an extra line to
the EXPLAIN output for every hash join, even w/o ANALYZE.
We could add it if either VERBOSE or ANALYZE appears.
FYI - We have implemented a number of changes...
a) some query and application optimizations
b) a connection pool (on the cheap: set the max number of clients on the
Postgres server and created a blocking wrapper for pg_pconnect that
blocks until it gets a connection)
c) moved the application server to
On Wed, Jan 13, 2010 at 1:10 PM, Bob Dusek redu...@gmail.com wrote:
And, we pretty much doubled our capacity... from approx 40 requests
per second to approx 80.
Excellent!
The problem with our cheap connection pool is that the persistent
connections don't seem to be available immediately
Bob Dusek wrote:
So, pgBouncer is pretty good. It doesn't appear to be as good as
limiting TCON and using pconnect, but since we can't limit TCON in a
production environment, we may not have a choice.
It may be worth looking into pgpool, as well. If you have a very
cheap-to-connect-to local
Bob Dusek wrote:
The problem with our cheap connection pool is that the persistent
connections don't seem to be available immediately after they're
released by the previous process. pg_close doesn't seem to help the
situation. We understand that pg_close doesn't really close a
persistent
On Mon, 11 Jan 2010, Bob Dusek wrote:
How do I learn more about the actual lock contention in my db? Lock
contention makes some sense. Each of the 256 requests is relatively
similar, so I don't doubt that lock contention could be an issue. I just
don't know how to observe it or correct
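A starting point for observing it, sketched against the standard pg_locks
view; ungranted requests are sessions waiting on a lock right now:

    SELECT locktype, relation::regclass AS relation, mode, pid
    FROM pg_locks
    WHERE NOT granted;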
Matthew Wakeling matt...@flymine.org wrote:
-Kevin
It'd really help us reading your emails if you could make sure
that it is easy to distinguish your words from words you are
quoting. It can be very confusing reading some of your emails,
trying to remember which bits I have seen
On Tue, Jan 12, 2010 at 12:12 PM, Matthew Wakeling matt...@flymine.org wrote:
On Mon, 11 Jan 2010, Bob Dusek wrote:
How do I learn more about the actual lock contention in my db? Lock
contention makes some sense. Each of the 256 requests is relatively
similar, so I don't doubt that
Bob, you might want to just send plain text, to avoid such problems.
Will do. Looks like gmail's interface does it nicely.
-Kevin
On Tue, 12 Jan 2010, Bob Dusek wrote:
Each of the concurrent clients does a series of selects, inserts, updates,
and deletes. The requests would generally never update or delete the same
rows in a table. However, the requests do generally write to the same
tables. And, they are all reading
On 13/01/2010 2:01 AM, Bob Dusek wrote:
The connections to postgres are happening on the localhost. Our
application server accepts connections from the network, and the
application queries Postgres using a standard pg_pconnect on the localhost.
Well, that's a good reason to have all those
On Mon, Jan 11, 2010 at 3:18 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Hmm. Not clear where the temp files are coming from, but it's *not* the
sort --- the internal sort ended line shows that that sort never went
to disk. What kind of plan is feeding the sort node?
some time ago, you said:
Hello,
We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...
4X E7420 Xeon, Four cores (for a total of 16 cores)
2.13 GHz, 8 MB cache, 1066 MHz FSB
32 GB of RAM
15K RPM drives in striped RAID
Things run fine, but when we get a lot of concurrent queries running, we see
a pretty
On Mon, Jan 11, 2010 at 6:44 AM, Bob Dusek redu...@gmail.com wrote:
Hello,
We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...
4X E7420 Xeon, Four cores (for a total of 16 cores)
2.13 GHz, 8 MB cache, 1066 MHz FSB
32 GB of RAM
15K RPM drives in striped RAID
What method
In response to Bob Dusek:
Hello,
We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...
4X E7420 Xeon, Four cores (for a total of 16 cores)
2.13 GHz, 8 MB cache, 1066 MHz FSB
32 GB of RAM
15K RPM drives in striped RAID
Things run fine, but when we get a lot of
On Mon, Jan 11, 2010 at 8:50 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Mon, Jan 11, 2010 at 6:44 AM, Bob Dusek redu...@gmail.com wrote:
Hello,
We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...
4X E7420 Xeon, Four cores (for a total of 16 cores)
2.13 GHz,
On Mon, Jan 11, 2010 at 9:07 AM, A. Kretschmer
andreas.kretsch...@schollglas.com wrote:
In response to Bob Dusek :
Hello,
We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...
4X E7420 Xeon, Four cores (for a total of 16 cores)
2.13 GHz, 8 MB cache, 1066 MHz FSB
32
Bob Dusek wrote:
On Mon, Jan 11, 2010 at 8:50 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Mon, Jan 11, 2010 at 6:44 AM, Bob Dusek redu...@gmail.com wrote:
Hello,
We're running Postgres 8.4.2 on Red Hat 5,
Bob Dusek redu...@gmail.com wrote:
Scott Marlowe scott.marl...@gmail.com wrote:
Bob Dusek redu...@gmail.com wrote:
4X E7420 Xeon, Four cores (for a total of 16 cores)
What method of striped RAID?
RAID-0
I hope you have a plan for what to do when any one drive in this
array fails, and
Ivan Voras wrote:
Yes, but you are issuing 133 write operations per second per drive -
this is nearly the limit of what you can get with 15k RPM drives
(actually, the limit should be somewhere around 200..250 IOPS but 133
isn't that far).
I saw in your other post you have fsync turned
On Mon, Jan 11, 2010 at 9:42 AM, Bob Dusek redu...@gmail.com wrote:
What method of striped RAID? RAID-5? RAID-10? RAID-4? RAID-0?
RAID-0
Just wondering how many drives?
To be more specific about the degradation, we've set the
log_min_duration_statement=200, and when we run with 40
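For reference, that setting logs every statement slower than the threshold,
and can be set per session as well as in postgresql.conf; a sketch:

    SET log_min_duration_statement = 200;  -- log statements taking longer than 200 ms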
We may want to start looking at query plans for the slowest queries.
Use explain analyze to find them and attach them here. I kinda have a
feeling you're running into a limit on the speed of your memory
though, and there's no real cure for that. You can buy a little time
with some query or db
This is to be expected, to some extent, as we would expect some performance
degradation with higher utilization. But the hardware doesn't appear to be
very busy, and that's where we're hoping for some help.
It's likely in I/O wait.
What do the following commands tell you?
On Mon, Jan 11, 2010 at 12:17 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Bob Dusek redu...@gmail.com wrote:
Scott Marlowe scott.marl...@gmail.comwrote:
Bob Dusek redu...@gmail.com wrote:
4X E7420 Xeon, Four cores (for a total of 16 cores)
What method of striped RAID?
On Sun, Sep 13, 2009 at 5:37 PM, Tom Lane t...@sss.pgh.pa.us wrote:
It might be useful to turn on trace_sort to see if the small files
are coming from sorts. If they're from hashes I'm afraid there's
no handy instrumentation ...
Yes they are; this is the log (I deleted the STATEMENT lines
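For reference, the setting in question; trace output goes to the server log:

    SET trace_sort = on;
    -- subsequent sorts emit lines like:
    -- LOG:  begin tuple sort: nkeys = 1, workMem = 1024, randomAccess = f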
On Mon, Jan 11, 2010 at 10:49 AM, Bob Dusek redu...@gmail.com wrote:
Depends, is that the first iteration of output? If so, ignore it and
show me the second and further on. Same for vmstat... In fact let
them run for a minute or two and attach the results... OTOH, if that
is the second or
Bob Dusek redu...@gmail.com wrote:
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
Bob Dusek redu...@gmail.com wrote:
Anyway, my benchmarks tend to show that best throughput occurs at
about (CPU_count * 2) plus effective_spindle_count. Since you
seem to be fully cached,
RAID-0
And how many drives?
Just two.
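Plugging this box's numbers into that formula as an illustration: 16 cores and
2 effective spindles suggest roughly (16 * 2) + 2 = 34 active connections as
the throughput sweet spot.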
We have an application server that is processing requests. Each request
consists of a combination of selects, inserts, and deletes. We actually see
degradation when we get more than 40 concurrent requests. The exact number
of queries executed by
On Mon, Jan 11, 2010 at 1:15 PM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
On Sun, Sep 13, 2009 at 5:37 PM, Tom Lane t...@sss.pgh.pa.us wrote:
It might be useful to turn on trace_sort to see if the small files
are coming from sorts. If they're from hashes I'm afraid there's
no handy
On Mon, Jan 11, 2010 at 2:07 PM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Jan 11, 2010 at 1:15 PM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
On Sun, Sep 13, 2009 at 5:37 PM, Tom Lane t...@sss.pgh.pa.us wrote:
It might be useful to turn on trace_sort to see if the small files
On Mon, Jan 11, 2010 at 2:14 PM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
On Mon, Jan 11, 2010 at 2:07 PM, Robert Haas robertmh...@gmail.com wrote:
On Mon, Jan 11, 2010 at 1:15 PM, Jaime Casanova
jcasa...@systemguards.com.ec wrote:
On Sun, Sep 13, 2009 at 5:37 PM, Tom Lane
On Mon, Jan 11, 2010 at 10:54 AM, Bob Dusek redu...@gmail.com wrote:
You want to use some connection pooling which queues requests when
more than some configurable number of connections is already active
with a request. You probably want to run that on the server side.
As for the
On Mon, Jan 11, 2010 at 1:20 PM, Kevin Grittner kevin.gritt...@wicourts.gov
wrote:
Bob Dusek redu...@gmail.com wrote:
Kevin Grittner kevin.gritt...@wicourts.gov wrote:
Bob Dusek redu...@gmail.com wrote:
Anyway, my benchmarks tend to show that best throughput occurs at
about (CPU_count
On Mon, Jan 11, 2010 at 12:36 PM, Scott Marlowe scott.marl...@gmail.com wrote:
FYI, on an 8 or 16 core machine, 10k to 30k context switches per
second aren't that much. If you're climbing past 100k you might want
to look out.
The more I read up on the 74xx CPUs and look at the numbers here
On Mon, Jan 11, 2010 at 12:34 PM, Bob Dusek redu...@gmail.com wrote:
Yeah :) We haven't run into much trouble. But, we cut our teeth doing
performance analysis of our app using PG 7.4. And, people on this list seem
to be adamantly against this config these days. Is this safer in older
I have slave dbs running on four 7200RPM SATA drives with fsync off.
They only get updated from the master db so if they go boom, I just
recreate their node. There are times when fsync off is OK; you just have to
know that that db is now considered disposable.
However, I'd suggest doing some
On Mon, Jan 11, 2010 at 1:13 PM, Dusek, Bob rd185...@ncr.com wrote:
I haven't been involved in any benchmarking of PG8 with fsync=off, but we
certainly did it with PG 7.4. fsync=off, for our purposes, was MUCH faster.
And many changes have been made since then to make fsyncing much
faster.
Jaime Casanova jcasa...@systemguards.com.ec writes:
LOG: begin tuple sort: nkeys = 1, workMem = 1024, randomAccess = f
LOG: switching to bounded heapsort at 641 tuples: CPU 0.08s/0.13u sec
elapsed 0.25 sec
LOG: temporary file: path base/pgsql_tmp/pgsql_tmp8507.5, size 471010
LOG:
Scott Marlowe scott.marl...@gmail.com writes:
On Mon, Jan 11, 2010 at 1:13 PM, Dusek, Bob rd185...@ncr.com wrote:
I haven't been involved in any benchmarking of PG8 with fsync=off, but we
certainly did it with PG 7.4. fsync=off, for our purposes, was MUCH faster.
And many changes have been
Scott Marlowe scott.marl...@gmail.com wrote:
FYI, on an 8 or 16 core machine, 10k to 30k context switches per
second aren't that much.
Yeah, on our 16 core machines under heavy load we hover around 30k.
He was around 50k, which is why I said it looked like it was
becoming a problem.
If
On Mon, Jan 11, 2010 at 3:18 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Jaime Casanova jcasa...@systemguards.com.ec writes:
LOG: begin tuple sort: nkeys = 1, workMem = 1024, randomAccess = f
LOG: switching to bounded heapsort at 641 tuples: CPU 0.08s/0.13u sec
elapsed 0.25 sec
LOG: temporary
I haven't been keeping up on the hardware, so I defer to you on
that. It certainly seems like it would fit with the symptoms. On
the other hand, I haven't seen anything yet to convince me that it
*couldn't* be a client-side or network bottleneck, or the sort of
lock contention bottleneck
Jaime Casanova jcasa...@systemguards.com.ec writes:
the temp files should be coming from hash operations, but AFAICS the
files are small and every hash operation should be using up to
work_mem memory, right?
No, when a hash spills to disk the code has to guess the partition sizes
(number of
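When hashes do spill, the usual knob is work_mem, which can be raised per
session before the offending query; a sketch (the value is illustrative, up
from the workMem = 1024 shown in the log above):

    SET work_mem = '32MB';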