Re: [PERFORM] Index bloat, reindex weekly, suggestions etc?

2008-10-18 Thread Віталій Тимчишин
2008/10/17 Tory M Blue [EMAIL PROTECTED]


 The real issue is my index growth and my requirement for weekly
 re-indexing, which blocks and is therefore more or less a manual
 process in a live production environment (failover, changing VIPs,
 etc.).


BTW: Can't you simply recreate the indexes online? Since PostgreSQL accepts
multiple indexes with the same definition, this could look like:
1) CREATE INDEX CONCURRENTLY index_alt
2) ANALYZE the parent table (ANALYZE takes a table name, not an index)
3) DROP INDEX index_orig
with index_alt and index_orig having the same definition. A rough SQL sketch
follows.
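A minimal sketch of those steps in SQL, assuming a hypothetical table my_table
and column my_col (adjust the names to your schema). CREATE INDEX CONCURRENTLY
builds without blocking writes; the final DROP INDEX still takes a brief
exclusive lock on the table:

  -- build a replacement index without blocking writes to the table
  CREATE INDEX CONCURRENTLY index_alt ON my_table (my_col);

  -- refresh planner statistics for the table
  ANALYZE my_table;

  -- drop the bloated original (short AccessExclusive lock on the table)
  DROP INDEX index_orig;

  -- optionally keep the old name
  ALTER INDEX index_alt RENAME TO index_orig;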


[PERFORM] Explain Analyze - Total runtime very differentes

2008-10-18 Thread tarcizioab
Hello friends ...

I'm evaluating the performance of query optimization algorithms. I am
comparing results between the standard Dynamic Programming algorithm and an
implementation of Kruskal's algorithm. When I submit a query that references
only 2 tables of my database, the same query plan is, as expected, produced by
both. But the Total runtime reported by EXPLAIN ANALYZE shows a very large
variation:

Dynamic Programming Total runtime: 1204.220 ms

Kruskal Total runtime: 3744.879 ms

No data changes (insert, delete, update) were made to the tables during the
tests. The same query was submitted several times (with both the Kruskal and
Dynamic Programming algorithms) and the variation in results persists.

As I understand it, EXPLAIN ANALYZE only reports the time to *execute* the
query. Since the query plan is the same, I don't understand why this
variation occurs.
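To be explicit about what is being measured (the query text itself is omitted
here), my understanding of the two numbers shown below is:

  \timing
  EXPLAIN ANALYZE <query>;
  -- "Total runtime" in the plan output = executor time only (startup + run);
  --  it does not include parsing or planning
  -- "Time:" printed afterwards by psql = parse + plan + execute + client
  --  round trip, which is why it is slightly larger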

The query plans are attached below.

I would appreciate any help.

Thanks for your attention.

Tarcizio Bini
(Kruskal) QUERY PLAN
----------------------------------------------------------------------------------------------------
 Aggregate  (cost=474090.39..474090.40 rows=1 width=4) (actual time=3744.711..3744.712 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..474086.63 rows=1501 width=4) (actual time=33.844..3744.296 rows=573 loops=1)
         Join Filter: (public.lineitem.l_quantity < (subplan))
         ->  Seq Scan on part  (cost=0.00..7063.92 rows=200 width=4) (actual time=0.245..69.567 rows=211 loops=1)
               Filter: ((p_brand = 'Brand#12'::bpchar) AND (p_container = 'LG CAN'::bpchar))
         ->  Index Scan using i_l_partkey on lineitem  (cost=0.00..95.35 rows=23 width=12) (actual time=12.459..12.482 rows=30 loops=211)
               Index Cond: (public.lineitem.l_partkey = part.p_partkey)
         SubPlan
           ->  Aggregate  (cost=97.35..97.37 rows=1 width=4) (actual time=0.163..0.163 rows=1 loops=6334)
                 ->  Index Scan using i_l_partkey on lineitem  (cost=0.00..97.29 rows=23 width=4) (actual time=0.004..0.151 rows=31 loops=6334)
                       Index Cond: (l_partkey = $0)
 Total runtime: 3744.879 ms
(12 rows)

Time: 3778,490 ms
Timing is on.




(Dynamic Programming) QUERY PLAN
----------------------------------------------------------------------------------------------------
 Aggregate  (cost=474090.39..474090.40 rows=1 width=4) (actual time=1204.064..1204.064 rows=1 loops=1)
   ->  Nested Loop  (cost=0.00..474086.63 rows=1501 width=4) (actual time=11.754..1203.669 rows=573 loops=1)
         Join Filter: (public.lineitem.l_quantity < (subplan))
         ->  Seq Scan on part  (cost=0.00..7063.92 rows=200 width=4) (actual time=0.264..67.012 rows=211 loops=1)
               Filter: ((p_brand = 'Brand#12'::bpchar) AND (p_container = 'LG CAN'::bpchar))
         ->  Index Scan using i_l_partkey on lineitem  (cost=0.00..95.35 rows=23 width=12) (actual time=3.841..3.864 rows=30 loops=211)
               Index Cond: (public.lineitem.l_partkey = part.p_partkey)
         SubPlan
           ->  Aggregate  (cost=97.35..97.37 rows=1 width=4) (actual time=0.049..0.049 rows=1 loops=6334)
                 ->  Index Scan using i_l_partkey on lineitem  (cost=0.00..97.29 rows=23 width=4) (actual time=0.004..0.037 rows=31 loops=6334)
                       Index Cond: (l_partkey = $0)
 Total runtime: 1204.220 ms
(12 rows)

Time: 1208,423 ms
Timing is on.
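
For reference, a rough breakdown of where the time goes in the two plans above,
just multiplying the per-loop actual times by the loop counts (so these are
only estimates):

  outer Index Scan on lineitem:   211 loops x 12.482 ms =~ 2634 ms (Kruskal)   vs    211 x 3.864 ms =~ 815 ms (DP)
  SubPlan Aggregate:             6334 loops x  0.163 ms =~ 1032 ms (Kruskal)   vs   6334 x 0.049 ms =~ 310 ms (DP)

That is roughly 1819 ms + 722 ms =~ 2541 ms, which matches the
3744.879 - 1204.220 =~ 2541 ms gap between the two runs: the same nodes simply
ran more slowly per loop in the Kruskal run, even though the plan is identical.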


Re: [PERFORM] Postgres Performance on CPU limited Platforms

2008-10-18 Thread Scott Marlowe
On Fri, Sep 12, 2008 at 12:07 PM, H. Hall [EMAIL PROTECTED] wrote:

 Hmmm  ARM/XScale, 64MB.  Just curious. Are you running a Postgres server on
 a pocket pc or possibly a cell phone?


I would think SQLite would be a better choice on that kind of platform,
unless maybe you're trying to run really complex queries.

--
When fascism comes to America, it will be draped in a flag and
carrying a cross - Sinclair Lewis
