Re: [PERFORM] Auto ANALYZE criteria

2010-09-21 Thread Joe Miller
I was looking at the autovacuum documentation: http://www.postgresql.org/docs/9.0/interactive/routine-vacuuming.html#AUTOVACUUM For analyze, a similar condition is used: the threshold, defined as: analyze threshold = analyze base threshold + analyze scale factor * number of tuples is
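
A rough worked example of that formula, assuming the stock 9.0 defaults (autovacuum_analyze_threshold = 50, autovacuum_analyze_scale_factor = 0.1) and a hypothetical table of one million rows:

    -- analyze threshold = analyze base threshold + analyze scale factor * number of tuples
    --                   = 50 + 0.1 * 1,000,000
    --                   = 100,050 tuples inserted, updated or deleted before autoanalyze runs
    SHOW autovacuum_analyze_threshold;     -- 50 by default
    SHOW autovacuum_analyze_scale_factor;  -- 0.1 by default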

Re: [PERFORM] Auto ANALYZE criteria

2010-09-21 Thread Joe Miller
On Mon, Sep 20, 2010 at 6:28 PM, Kevin Grittner kevin.gritt...@wicourts.gov wrote: Joe Miller joe.d.mil...@gmail.com wrote: I can set up a cron job to run the ANALYZE manually, but it seems like the autovacuum daemon should be smart enough to figure this out on its own.  Deletes can have as
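
If the built-in trigger fires too late for a particular table, one alternative to a cron job is to tighten the thresholds for that table alone; a sketch, with the table name and numbers purely illustrative (storage parameters, available since 8.4):

    -- make autoanalyze fire sooner on one table without touching the global settings
    ALTER TABLE mytable SET (
        autovacuum_analyze_scale_factor = 0.02,  -- 2% of the table instead of the default 10%
        autovacuum_analyze_threshold    = 100
    );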

[PERFORM] Query much faster with enable_seqscan=0

2010-09-21 Thread Ogden
Hello, I have received some help from the IRC channel; however, the problem still exists. When running the following query with enable_seqscan set to 0, it takes less than a second, whereas with it set to 1, the query returns in 14 seconds. The machine itself has 8GB RAM and is running
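
The usual way to show this kind of discrepancy is to run the same query under EXPLAIN ANALYZE with and without sequential scans allowed; a sketch, reusing the table names that appear later in the thread and leaving the WHERE clause out:

    -- plan the planner picks on its own
    EXPLAIN ANALYZE
    SELECT tr.id, tr.sid
    FROM test_registration tr
    INNER JOIN test_registration_result r ON (tr.id = r.test_registration_id);

    -- same query with sequential scans discouraged (per-session, for diagnosis only)
    SET enable_seqscan = off;
    EXPLAIN ANALYZE
    SELECT tr.id, tr.sid
    FROM test_registration tr
    INNER JOIN test_registration_result r ON (tr.id = r.test_registration_id);
    RESET enable_seqscan;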

Re: [PERFORM] Query much faster with enable_seqscan=0

2010-09-21 Thread Kenneth Marshall
Your DB is more than likely cached. You should adjust your page costs to better reflect reality, and then the planner can make more accurate estimates and choose the proper plan. Cheers, Ken On Tue, Sep 21, 2010 at 12:32:01PM -0500, Ogden wrote: Hello, I have received some help from the
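
Page costs can be tried out in a single session before anything is changed in postgresql.conf; a sketch, with the values only illustrative:

    SET seq_page_cost = 1.0;       -- the shipped default
    SET random_page_cost = 2.0;    -- shipped default is 4.0; lower it if most reads hit cache
    -- re-run EXPLAIN ANALYZE on the slow query and compare the chosen plan
    RESET seq_page_cost;
    RESET random_page_cost;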

Re: [PERFORM] Query much faster with enable_seqscan=0

2010-09-21 Thread Ogden
I assume you mean random_page_cost? It is currently set to 4.0 - is it better to increase or decrease this value? Thank you Ogden On Sep 21, 2010, at 1:06 PM, Kenneth Marshall wrote: You DB is more than likely cached. You should adjust your page costs to better reflect reality and then

Re: [PERFORM] Query much faster with enable_seqscan=0

2010-09-21 Thread Jesper Krogh
On 2010-09-21 20:21, Ogden wrote: I assume you mean random_page_cost? It is currently set to 4.0 - is it better to increase or decrease this value? It should be lowered to a bit over seq_page_cost, and more importantly, you should make sure that you have updated your statistics; run

Re: [PERFORM] Query much faster with enable_seqscan=0

2010-09-21 Thread Ogden
How odd, I set the following: seq_page_cost = 1.0, random_page_cost = 2.0, and now the query runs in milliseconds as opposed to 14 seconds. Could this really be the change? I am running ANALYZE now - how often is it recommended to do this? Thank you Ogden On Sep 21, 2010, at 1:51
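
As for how often to ANALYZE: with autovacuum enabled it normally takes care of that on its own, and pg_stat_user_tables shows when it last did; a sketch, the table name being a placeholder:

    -- check when a table was last analyzed, manually or by autovacuum
    SELECT relname, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 'test_registration';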

Re: [PERFORM] Query much faster with enable_seqscan=0

2010-09-21 Thread Joshua D. Drake
On Tue, 2010-09-21 at 14:02 -0500, Ogden wrote: How odd, I set the following: seq_page_cost = 1.0 random_page_cost = 2.0 And now the query runs in milliseconds as opposed to 14 seconds. Could this really be the change? I am running ANALYZE now - how often is it recommended to

Re: [PERFORM] Query much faster with enable_seqscan=0

2010-09-21 Thread Greg Smith
Joshua D. Drake wrote: PostgreSQL's defaults are based on extremely small (some would say non-production) databases. As a matter of course I always recommend bringing seq_page_cost and random_page_cost more in line. Also, they presume that not all of your data is going to be in
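
The setting that tells the planner how much data is likely to be cached is effective_cache_size; a sketch for a machine with 8GB of RAM like the one in this thread, the figure being a rule-of-thumb assumption rather than anything stated in the message:

    # postgresql.conf
    effective_cache_size = 6GB   # rough estimate of shared_buffers + OS file cache
                                 # planner hint only; it allocates no memory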

Re: [PERFORM] GPU Accelerated Sorting

2010-09-21 Thread Robert Haas
On Mon, Aug 30, 2010 at 9:46 AM, Eliot Gable ega...@gmail.com wrote: Not sure if anyone else saw this, but it struck me as an interesting idea if it could be added to PostgreSQL. GPU accelerated database operations could be very... interesting. Of course, this could be difficult to do in a way

Re: [PERFORM] Query much faster with enable_seqscan=0

2010-09-21 Thread Ogden
On Sep 21, 2010, at 2:16 PM, Greg Smith wrote: Joshua D. Drake wrote: PostgreSQL's defaults are based on extremely small (some would say non-production) databases. As a matter of course I always recommend bringing seq_page_cost and random_page_cost more in line. Also, they

Re: [PERFORM] Using Between

2010-09-21 Thread Robert Haas
On Mon, Aug 30, 2010 at 12:51 PM, Ozer, Pam po...@automotive.com wrote: Yes.  ANALYZE was run after we loaded the data.  Thanks for your assistance Here is the full Query. select distinct VehicleUsed.VehicleUsedId as VehicleUsedId ,  VehicleUsed.VehicleUsedDisplayPriority as

Re: [PERFORM] slow DDL creation

2010-09-21 Thread Robert Haas
On Tue, Aug 31, 2010 at 11:35 AM, Kevin Kempter cs_...@consistentstate.com wrote: On Monday 30 August 2010 17:04, bricklen wrote: On Mon, Aug 30, 2010 at 3:28 PM, Kevin Kempter cs_...@consistentstate.com wrote: Hi all ; we have an automated partition creation process that includes the

Re: [PERFORM] Using Between

2010-09-21 Thread Ozer, Pam
There are 850,000 records in vehicleused. And the database is too big to be kept in memory. Here are our config settings. listen_addresses = '*'    # what IP address(es) to listen on

Re: [PERFORM] Auto ANALYZE criteria

2010-09-21 Thread Tom Lane
Joe Miller joe.d.mil...@gmail.com writes: I was looking at the autovacuum documentation: http://www.postgresql.org/docs/9.0/interactive/routine-vacuuming.html#AUTOVACUUM For analyze, a similar condition is used: the threshold, defined as: analyze threshold = analyze base threshold +

Re: [PERFORM] Query much faster with enable_seqscan=0

2010-09-21 Thread Tom Lane
Greg Smith g...@2ndquadrant.com writes: and the query optimizer needs to be careful about what it does and doesn't pull from disk. If that's not the case, like here where there's 8GB of RAM and a 7GB database, dramatic reductions to both seq_page_cost and random_page_cost can make sense.
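
For that fully cached case, a common rule of thumb (an assumption here, not something stated in the message) is to drop both page costs to roughly the same small value, since random and sequential page fetches cost about the same once everything is in RAM:

    SET seq_page_cost = 0.1;      -- illustrative
    SET random_page_cost = 0.1;   -- equal to seq_page_cost when the data set fits in RAM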

Re: [PERFORM] Query much faster with enable_seqscan=0

2010-09-21 Thread Tom Lane
Ogden li...@darkstatic.com writes: SELECT tr.id, tr.sid FROM test_registration tr INNER JOIN test_registration_result r on (tr.id = r.test_registration_id) WHERE.

Re: [PERFORM] Performance degradation, index bloat and planner estimates

2010-09-21 Thread Craig Ringer
On 20/09/2010 7:59 PM, Daniele Varrazzo wrote: Does anybody have some information about where the bloat is coming from and what is the best way to get rid of it? Would a vacuum full fix this kind of problem? Is there a way to fix it without taking the system offline? It's hard to know where
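
For index bloat specifically, the usual 8.3-era way to get rid of it without taking the system offline is to build a replacement index concurrently and swap it in; a sketch with hypothetical index and table names:

    -- rebuild a bloated index without blocking normal reads and writes (8.2+)
    CREATE INDEX CONCURRENTLY myindex_new ON mytable (mycolumn);
    DROP INDEX myindex;                         -- takes only a brief exclusive lock
    ALTER INDEX myindex_new RENAME TO myindex;
    -- note: VACUUM FULL on 8.3 shrinks the table but tends to bloat indexes further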

Re: [PERFORM] Performance degradation, index bloat and planner estimates

2010-09-21 Thread Tom Lane
Craig Ringer cr...@postnewspapers.com.au writes: If the bloat issue were with relations rather than indexes I'd suspect free space map issues as you're on 8.3. http://www.postgresql.org/docs/8.3/interactive/runtime-config-resource.html My (poor) understanding is that index-only bloat
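
For relation-level bloat on 8.3, the free space map limits described at that link are the first thing to check; an excerpt, values illustrative (both settings are gone in 8.4, where the FSM moved to disk, and changing them requires a restart):

    # postgresql.conf (8.3)
    max_fsm_pages = 2000000      # must cover all pages with reusable free space
    max_fsm_relations = 2000     # must cover all tables and indexes with free space
    # a database-wide VACUUM VERBOSE warns at the end if these limits are too low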