[PERFORM] loading increase into huge table with 50.000.000 records

2006-07-26 Thread nuggets72
Hello, sorry for my poor English. My problem: I am running into performance problems as the load increases: a massive update of 50,000,000 records plus 2,000,000 inserts, on a weekly basis, into a huge table (50,000,000+ records, ten fields, 12 GB on disk). Current performance obtained: 120
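The rest of the message is truncated in the archive. A common pattern for a weekly bulk load of this shape is to stage the delta with COPY and apply it with set-based statements rather than row-by-row; a minimal sketch, where the table and column names (big_table, weekly_batch, id, value) and the file path are purely illustrative and not from the thread:

    -- Sketch only: big_table, weekly_batch, id, value and the path are
    -- illustrative; the thread does not show the real schema.
    BEGIN;
    -- Stage the weekly delta with COPY (far faster than per-row INSERTs).
    COPY weekly_batch FROM '/path/to/weekly_delta.csv' WITH CSV;  -- placeholder path
    -- Apply the ~50,000,000 updates as one set-based statement.
    UPDATE big_table
       SET value = w.value
      FROM weekly_batch w
     WHERE big_table.id = w.id;
    -- Insert the ~2,000,000 genuinely new rows.
    INSERT INTO big_table (id, value)
    SELECT w.id, w.value
      FROM weekly_batch w
     WHERE NOT EXISTS (SELECT 1 FROM big_table b WHERE b.id = w.id);
    COMMIT;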

Re: [PERFORM] loading increase into huge table with 50.000.000 records

2006-07-26 Thread Sven Geisler
Hi Larry, do you run VACUUM and ANALYZE frequently? Did you check PowerPostgresql.com for PostgreSQL tuning hints? http://www.powerpostgresql.com/Docs/ You can set wal_buffers, checkpoint_segments and checkpoint_timeout much higher. Here is a sample that works for me: wal_buffers =
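Sven's sample settings are cut off in the preview. A hedged reconstruction of the kind of 8.1-era postgresql.conf values being suggested; his exact numbers are not recoverable, so these are illustrative only:

    # Illustrative values only -- Sven's actual sample is truncated above.
    wal_buffers = 128           # 8 kB pages; the 8.1 default was 8
    checkpoint_segments = 64    # default was 3; spreads checkpoint I/O during the load
    checkpoint_timeout = 1800   # seconds; default was 300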

Re: [PERFORM] loading increase into huge table with 50.000.000 records

2006-07-26 Thread Markus Schaber
Hi, Larry, hi, Sven. Sven Geisler wrote: "You can increase wal_buffers, checkpoint_segments and checkpoint_timeout much higher." You should also increase the free space map settings; they must be large enough to cope with your weekly batch. Markus -- Markus Schaber | Logical Tracking&Tracing
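For scale: a 12 GB table is roughly 1.5 million 8 kB pages, and the free space map must be able to track the pages dirtied by each weekly batch before VACUUM reclaims them. A hedged sketch of the 8.1-era settings Markus means (values illustrative; these parameters were removed in 8.4 when the FSM moved to disk):

    # Illustrative sizing -- must cover pages freed by the weekly 50M-row update.
    max_fsm_pages = 2000000     # default was 20000; ~1.5M pages in a 12 GB table
    max_fsm_relations = 1000    # the default; raise if the cluster has many tables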

Re: [PERFORM] loading increase into huge table with 50.000.000 records

2006-07-26 Thread Merlin Moncure
On 7/26/06, [EMAIL PROTECTED] wrote: Hello, sorry for my poor English. My problem: I am running into performance problems as the load increases: a massive update of 50,000,000 records plus 2,000,000 inserts, on a weekly basis, into a huge table (50,000,000+ records, ten fields, 12

[PERFORM] Is it possible to speed this query up?

2006-07-26 Thread Arnau
Hi all, I run the following query on PostgreSQL 8.1.0: SELECT u.telephone_number, u.telecom_operator_id, u.name FROM campanas_subcampaign AS sub, agenda_users AS u, agenda_users_groups ug WHERE sub.customer_app_config_id = 19362 AND sub.subcampaign_id = 9723 AND
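Tom Lane's reply below works from the EXPLAIN ANALYZE output for this query. A minimal sketch of capturing that plan; the WHERE clause is cut off in the archive, so the join predicates shown here are assumptions, not the real ones:

    -- Hypothetical completion of the truncated query, for illustration only.
    EXPLAIN ANALYZE
    SELECT u.telephone_number, u.telecom_operator_id, u.name
      FROM campanas_subcampaign AS sub,
           agenda_users AS u,
           agenda_users_groups ug
     WHERE sub.customer_app_config_id = 19362
       AND sub.subcampaign_id = 9723
       AND ug.group_id = sub.group_id   -- assumed join condition
       AND u.user_id   = ug.user_id;    -- assumed join condition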

Re: [PERFORM] Is it possible to speed this query up?

2006-07-26 Thread Tom Lane
Arnau [EMAIL PROTECTED] writes: the EXPLAIN ANALYZE shows the following: The expensive part appears to be this indexscan: - Index Scan using pk_agndusrgrp_usergroup on agenda_users_groups ug (cost=0.00..123740.26 rows=2936058 width=30) (actual time=0.101..61921.260
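A scan that pulls ~2.9 million rows through the primary-key index in ~62 seconds usually means the query's filter column is not the leading column of any index; the standard remedy is an index matching the actual predicate. A hedged sketch, since the real columns of agenda_users_groups are not visible in the thread (group_id is an assumption):

    -- Sketch: the column name group_id is assumed, not confirmed by the thread.
    CREATE INDEX idx_agenda_users_groups_group_id
        ON agenda_users_groups (group_id);
    ANALYZE agenda_users_groups;  -- refresh statistics so the planner considers it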