Hi,
It is frequently said that for PostgreSQL the number 1 thing to pay attention
to when increasing performance is the amount of IOPS a storage system is
capable of. Now I wonder if there is any situation in which sequential IO
performance comes into play. E.g. perhaps during a tablescan on a
non-fragmented table, or during a backup or restore?
On Tue, 10 Mar 2009, henk de wit wrote:
It is frequently said that for PostgreSQL the number 1 thing to pay
attention to when increasing performance is the amount of IOPS a storage
system is capable of. Now I wonder if there is any situation in which
sequential IO performance comes into play.
Tom Lane t...@sss.pgh.pa.us writes:
Mario Splivalo mario.spliv...@megafon.hr writes:
Now I'm confused, why is 'sql' function much slower than 'direct' SELECT?
Usually the reason for this is that the planner chooses a different plan
when it has knowledge of the particular value you are
Hi,
On Tue, 10 Mar 2009, henk de wit wrote:
Now I wonder if there is any situation in which
sequential IO performance comes into play. E.g. perhaps during a
tablescan on a non-fragmented table, or during a backup or restore?
Yes, up to a point. That point is when a single CPU can no
f...@redhat.com (Frank Ch. Eigler) writes:
For a prepared statement, could the planner produce *several* plans,
if it guesses great sensitivity to the parameter values? Then it
could choose amongst them at run time.
We've discussed that in the past. "Choose at runtime" is a bit more easily
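Tom's point about parameter sensitivity can be seen directly with a prepared
statement. A minimal sketch, using a hypothetical table with a skewed
'status' column (nothing here is from the original poster's schema):

```sql
-- A direct query lets the planner see the literal value and
-- pick a plan suited to that value's selectivity:
EXPLAIN ANALYZE
SELECT * FROM events WHERE status = 'rare';

-- A prepared statement is planned without knowledge of the
-- parameter value, so one generic plan serves every EXECUTE,
-- which can be far worse on skewed distributions:
PREPARE get_events(text) AS
SELECT * FROM events WHERE status = $1;
EXPLAIN ANALYZE EXECUTE get_events('rare');
```

Comparing the two EXPLAIN ANALYZE outputs usually shows whether the generic
plan is the culprit.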
On Tue, 10 Mar 2009, henk de wit wrote:
Now I wonder if there is any situation in which sequential IO
performance comes into play. E.g. perhaps during a tablescan on a
non-fragmented table, or during a backup or restore?
If you're doing a sequential scan of data that was loaded in a fairly
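For completeness, one way to check whether a query really is served by a
sequential read; the table name is hypothetical:

```sql
-- An unindexed full-table aggregate is served by a sequential scan;
-- here raw sequential throughput (and per-CPU processing speed)
-- dominates, not IOPS:
EXPLAIN ANALYZE SELECT count(*) FROM big_table;
-- Look for "Seq Scan on big_table" in the resulting plan.
```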
On 3/10/09 6:28 AM, Matthew Wakeling matt...@flymine.org wrote:
On Tue, 10 Mar 2009, henk de wit wrote:
It is frequently said that for PostgreSQL the number 1 thing to pay
attention to when increasing performance is the amount of IOPS a storage
system is capable of. Now I wonder if there is any situation in which
sequential IO performance comes into play.
Hi,
I'd be grateful for any advice we can get... we recently switched from MySQL
to PostgreSQL on the basis of some trials we carried out with datasets of
varying sizes and varying rates of contention. For the most part we've been
pleased with performance, but one particular application runs
On Tue, Mar 10, 2009 at 3:12 PM, Steve McLellan smclel...@mintel.com wrote:
Hi,
I'd be grateful for any advice we can get... we recently switched from MySQL
to PostgreSQL on the basis of some trials we carried out with datasets of
varying sizes and varying rates of contention. For the most
Steve McLellan smclel...@mintel.com wrote:
The server itself is a dual-core 3.7GHz Xeon Dell (each core
reporting 2 logical CPUs) running an amd64 build of FreeBSD 6.2, and
postgres 8.3.5 built from source. It's got 400GB storage in RAID-5
(on 5 disks). It has 8GB of physical RAM. I'm able
On Tue, Mar 10, 2009 at 3:12 PM, Steve McLellan smclel...@mintel.com wrote:
Nested Loop (cost=466.34..192962.24 rows=15329 width=12) (actual
time=13653.238..31332.113 rows=131466 loops=1)
Both your query plans end with this nested loop join which is taking
up about half your time in your query.
Kevin Grittner kevin.gritt...@wicourts.gov writes:
You probably need to reduce random_page_cost. If your caching is
complete enough, you might want to set it equal to seq_page_cost
(never set it lower than seq_page_cost!) and possibly reduce both of
these to 0.1.
Some people have had good
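Kevin's suggestion can be tried per-session before touching postgresql.conf;
the values below are illustrative, not prescriptions:

```sql
-- Defaults are seq_page_cost = 1.0, random_page_cost = 4.0.
-- If the working set is effectively fully cached, random and
-- sequential fetches cost about the same:
SET random_page_cost = 1.0;

-- For a fully cached database both can be reduced further,
-- keeping random_page_cost >= seq_page_cost:
SET seq_page_cost = 0.1;
SET random_page_cost = 0.1;

-- Re-run the problem query under EXPLAIN ANALYZE to confirm the
-- plan change before persisting anything in postgresql.conf.
```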
Steve McLellan smclel...@mintel.com writes:
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
BTW, aside from the points already made: the above indicates that you
initialized your database in en_US.utf8 locale. This is not necessarily
*Kevin Grittner kevin.gritt...@wicourts.gov*
03/10/2009 05:06 PM EST
enable_seqscan = off
Not a good idea; some queries will optimize better with seqscans.
You can probably get the behavior you want using other adjustments.
The bullet to cure the headache, as Scott said.
You
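If enable_seqscan = off is already in place, a session-local test is the
safer way to use it: as a diagnostic, not a policy. A sketch with a
hypothetical query:

```sql
-- Diagnostic use only: see what plan the planner picks when
-- sequential scans are heavily penalized (the setting does not
-- disable them outright, it just costs them prohibitively):
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM reports
  WHERE created > now() - interval '1 day';
RESET enable_seqscan;
-- If the forced plan is better, fix the cost estimates
-- (random_page_cost, statistics) instead of leaving this off.
```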
On Tue, Mar 10, 2009 at 9:15 PM, Steve McLellan smclel...@mintel.com wrote:
Thanks - the nested loop is indeed causing problems - reducing
seq_page_cost had the same effect of removing the nested loop for this
query. We'd noticed the poor row count estimation. Increasing the statistics
doesn't
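For the row-count misestimate, the usual knob is the per-column statistics
target; table and column names below are hypothetical:

```sql
-- A larger sample can fix bad row-count estimates on skewed columns
-- (the 8.3 default statistics target was 10, often too low):
ALTER TABLE report_data ALTER COLUMN account_id SET STATISTICS 100;
ANALYZE report_data;
-- Then compare estimated vs. actual rows in EXPLAIN ANALYZE again.
```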