Re: [PERFORM] SQL Function Performance

2006-02-14 Thread Adnan DURSUN

From: Michael Fuhr
Date: 02/14/06 03:32:28
To: Tom Lane
Cc: Adnan DURSUN; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] SQL Function Performance

On Mon, Feb 13, 2006 at 07:57:07PM -0500, Tom Lane wrote:
 "Adnan DURSUN" [EMAIL PROTECTED] writes:
  EXPLAIN ANALYZE EXECUTE stmt (...);

 Here is the EXPLAIN ANALYZE output for 
prepared statement :

 This is exactly the same as the other plan --- you did not 
parameterize
 the query.To see what's going on, you need to insert 
PREPARE
 parameters in the places where the function uses plpgsql 
variables.

Actually it was an SQL function, but that also does 
PREPARE/EXECUTE,
right?

Adnan, what Tom is saying is that he requested something like this (simplified):

PREPARE stmt (integer) AS SELECT * FROM foo WHERE id = $1;
EXPLAIN ANALYZE EXECUTE stmt (12345);
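For a function with more than one variable, the same idea extends to multiple typed placeholders. A sketch (the table, column names, and values below are invented for illustration; substitute the actual variables the function uses):

```sql
-- Hypothetical: mirror each function variable with a typed PREPARE
-- parameter, then EXPLAIN ANALYZE the EXECUTE to see the generic plan.
PREPARE stmt2 (text, date) AS
  SELECT * FROM foo WHERE firma_no = $1 AND kalkis_tarihi = $2;
EXPLAIN ANALYZE EXECUTE stmt2 ('001', '2006-02-14');
DEALLOCATE stmt2;
```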

OK, I am sending the right execution plan now. I made a mistake; my apologies.


 QUERY PLAN
 HashAggregate  (cost=276.73..276.76 rows=1 width=58) (actual time=192648.385..192648.385 rows=0 loops=1)
   ->  Nested Loop  (cost=5.90..276.71 rows=1 width=58) (actual time=192648.377..192648.377 rows=0 loops=1)
         Join Filter: ((("inner".hat_no)::text = ("outer".hat_no)::text) AND ("inner".sefer_kod = "outer".sefer_kod) AND ("inner".plan_tarihi = "outer".plan_tarihi) AND ("inner".bin_yer_kod = "outer".bin_yer_kod) AND ("inner".koltuk_no = "outer".koltuk_no) AND ((("inner".islem_tarihi = $2) AND (($5)::text = 'I'::text)) OR (("outer".kalkis_tarihi = $2) AND (($5)::text = 'K'::text))) AND ((("outer".lokal_kod = $3) AND (($4)::text = 'K'::text)) OR (("inner".ypt_lcl_kod = $3) AND (($4)::text = 'I'::text))))
         ->  Nested Loop  (cost=5.90..267.19 rows=3 width=101) (actual time=76.240..30974.777 rows=63193 loops=1)
               ->  Nested Loop  (cost=5.90..123.48 rows=26 width=73) (actual time=32.082..4357.786 rows=14296 loops=1)
                     ->  Nested Loop  (cost=3.62..15.29 rows=1 width=48) (actual time=1.279..46.882 rows=41 loops=1)
                           Join Filter: (("inner".kod)::text = ("outer".durumu)::text)
                           ->  Nested Loop  (cost=3.62..13.01 rows=1 width=53) (actual time=1.209..40.010 rows=41 loops=1)
                                 ->  Nested Loop  (cost=3.62..8.49 rows=1 width=47) (actual time=1.150..38.928 rows=41 loops=1)
                                       Join Filter: (("inner"."no")::text = ("outer".hat_no)::text)
                                       ->  Nested Loop  (cost=2.25..6.79 rows=1 width=28) (actual time=0.710..24.708 rows=41 loops=1)
                                             Join Filter: ("inner".sefer_tip_kod = "outer".kod)
                                             ->  Seq Scan on t_sefer_tip t  (cost=0.00..1.03 rows=1 width=9) (actual time=0.117..0.126 rows=1 loops=1)
                                                   Filter: (((iptal)::text = 'H'::text) AND (($1)::text = (firma_no)::text))
                                             ->  Hash Join  (cost=2.25..5.74 rows=2 width=32) (actual time=0.567..24.349 rows=41 loops=1)
                                                   Hash Cond: (("outer".ek_dev)::text = ("inner".kod)::text)
                                                   ->  Seq Scan on t_seferler s  (cost=0.00..3.21 rows=34 width=37) (actual time=0.077..23.466 rows=41 loops=1)
                                                         Filter: (((iptal)::text = 'H'::text) AND ((iptal)::text = 'H'::text) AND ((firma_no)::text = ($1)::text))
                                                   ->  Hash  (cost=2.25..2.25 rows=2 width=5) (actual time=0.451..0.451 rows=2 loops=1)
                                                         ->  Seq Scan on t_domains d1  (cost=0.00..2.25 rows=2 width=5) (actual time=0.346..0.429 rows=2 loops=1)
                                                               Filter: ((name)::text = 'EKDEV'::text)
                                       ->  Merge Join  (cost=1.37..1.59 rows=9 width=24) (actual time=0.032..0.313 rows=10 loops=41)
                                             Merge Cond: ("outer".kod = "inner".kalkis_yer_kod)
                                             ->  Index Scan using t_yer_pkey on t_yer y1  (cost=0.00..9.62 rows=115 width=14) (actual time=0.013..0.164 rows=40 loops=41)
                                                   Filter: ((iptal)::text = 'H'::text)
                                             ->  Sort  (cost=1.37..1.39 rows=9 width=18) (actual time=0.007..0.025 rows=10 loops=41)
                                                   Sort Key: h.kalkis_yer_kod
                                                   ->  Seq Scan on t_hatlar h  (cost=0.00..1.23 rows=9 width=18) (actual time=0.078..0.125 rows=10 loops=1)
                                                         Filter: (($1)::text = (firma_no)::text)
                                 ->  Index Scan using t_yer_pkey on t_yer y2  (cost=0.00..4.51 rows=1 width=14) (actual time=0.011..0.015 rows=1 loops=41)
                                       Index Cond: ("outer".varis_yer_kod = y2.kod)
                                       Filter: ((iptal)::text = 'H'::text)
                           ->  Seq Scan on t_domains d2  (cost=0.00..2.25 rows=2 width=5) (actual time=0.054..0.140 rows=2 loops=41)
                                 Filter: ((name)::text = 'SFR_DURUMU'::text)
                     ->  Bitmap Heap Scan on t_lokal_plan lp  (cost=2.28..107.70 rows=33 width=30) (actual time=9.709..103.130 rows=349 loops=41)
                           Recheck Cond: (((lp.firma_no)::text = ($1)::text) AND (("outer".hat_no)::text = (lp.hat_no)::text) AND ("outer".kod = lp.sefer_kod))
                           ->  Bitmap Index Scan on t_lokal_plan_pkey  (cost=0.00..2.28 rows=33 width=0) (actual time=8.340..8.340 rows=349 loops=41)
                                 Index Cond: (((lp.firma_no)::text = ($1)::text) AND (("outer".hat_no)::text = (lp.hat_no)::text) AND ("outer".kod = lp.sefer_kod))
               ->  Index Scan using t_koltuk_son_durum_pkey on t_koltuk_son_durum sd  (cost=0.00..5.51 rows=1 width=28) (actual time=0.467..1.829 rows=4 loops=14296)
                     Index Cond: ((($1)::text = (sd.firma_no)::text) AND (("outer".hat_no)::text =
[PERFORM] copy and postgresql.conf

2006-02-14 Thread FERREIRA, William (VALTECH)

hi,

I load data from files using the COPY method.
The files contain between 2 and 7 million rows, spread across 5 tables.

Loading all the data takes 40 min, while the same processing takes 17 min
with Oracle.
I think this time can be improved by changing the postgresql configuration
file, but which parameters do I need to change, and to which values?

Here are the specifications of my system:
V250 architecture sun4u
2x CPU UltraSparc IIIi 1.28 GHz
8 GB RAM

Regards.

Will


This e-mail is intended only for the above addressee. It may contain
privileged information. If you are not the addressee you must not copy,
distribute, disclose or use any of the information in it. If you have
received it in error please delete it and immediately notify the sender.
Security Notice: all e-mail, sent to or from this address, may be
accessed by someone other than the recipient, for system management and
security reasons. This access is controlled under Regulation of
Investigatory Powers Act 2000, Lawful Business Practises.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] copy and postgresql.conf

2006-02-14 Thread Albert Cervera Areny
Hi William,
	which PostgreSQL version are you using? Newer (8.0+) versions have some
important performance improvements for the COPY command.

	Also, you'll notice significant improvements by creating primary &
foreign keys after the COPY command. I think config tweaking can improve key
and index creation, but I don't think you can improve the COPY command itself.

	There are also many threads in this list commenting on this issue;
you'll find them easily in the archives.
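The load-then-index pattern described above can be sketched like this (the table, column, and file names are invented for illustration):

```sql
-- Hypothetical bulk load: no keys or indexes exist while COPY runs.
BEGIN;
COPY measures FROM '/data/measures.dat';
COMMIT;

-- Build constraints and indexes only once the data is in place;
-- one index build over the full table is far cheaper than 7 million
-- incremental index updates during the load.
ALTER TABLE measures ADD PRIMARY KEY (id);
CREATE INDEX measures_mdate_idx ON measures (mdate);
ANALYZE measures;
```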

On Tuesday, 14 February 2006 10:44, FERREIRA, William (VALTECH) wrote:
 hi,

 i load data from files using copy method.
 Files contain between 2 and 7 millions of rows, spread on 5 tables.

 For loading all the data, it takes 40mn, and the same processing takes 17mn
 with Oracle. I think that this time can be improved by changing postgresql
 configuration file. But which parameters i need to manipulate and with
 which values ?

 Here are the specifications of my system :
 V250 architecture sun4u
 2xCPU UltraSparc IIIi 1.28 GHz.
 8 Go RAM.

 Regards.

   Will



-- 
Albert Cervera Areny
Dept. Informàtica Sedifa, S.L.

Av. Can Bordoll, 149
08202 - Sabadell (Barcelona)
Tel. 93 715 51 11
Fax. 93 715 51 12


  AVISO LEGAL  
La presente comunicación y sus anexos tiene como destinatario la
persona a la que va dirigida, por lo que si usted lo recibe por error
debe notificarlo al remitente y eliminarlo de su sistema, no pudiendo
utilizarlo, total o parcialmente, para ningún fin. Su contenido puede
tener información confidencial o protegida legalmente y únicamente
expresa la opinión del remitente. El uso del correo electrónico vía
Internet no permite asegurar ni la confidencialidad de los mensajes ni
su correcta recepción. En el caso de que el destinatario no consintiera
la utilización del correo electrónico, deberá ponerlo en nuestro
conocimiento inmediatamente.

... DISCLAIMER .
This message and its attachments are intended exclusively for the
named addressee. If you receive this message in error, please
immediately delete it from your system and notify the sender. You
may not use this message or any part of it for any purpose.
The message may contain information that is confidential or
protected by law, and any opinions expressed are those of the
individual sender. Internet e-mail guarantees neither the
confidentiality nor the proper receipt of the message sent.
If the addressee of this message does not consent to the use
of internet e-mail, please inform us immediately.



Re: [PERFORM] copy and postgresql.conf

2006-02-14 Thread FERREIRA, William (VALTECH)

thanks,

I'm using PostgreSQL 8.0.3.
There is no primary key and no index on my tables.

regards

-Message d'origine-
De : [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] la part de Albert
Cervera Areny
Envoyé : mardi 14 février 2006 12:38
À : pgsql-performance@postgresql.org
Objet : Re: [PERFORM] copy and postgresql.conf



Hi William,
which PostgreSQL version are you using? Newer (8.0+) versions have some

important performance improvements for the COPY command.

Also, you'll notice significant improvements by creating primary &
foreign

keys after the copy command. I think config tweaking can improve key and

index creation but I don't think you can improve the COPY command itself.

There are also many threads in this list commenting on this issue, 
you'll

find them easily in the archives.

On Tuesday, 14 February 2006 10:44, FERREIRA, William (VALTECH) wrote:
 hi,

 i load data from files using copy method.
 Files contain between 2 and 7 millions of rows, spread on 5 tables.

 For loading all the data, it takes 40mn, and the same processing takes 17mn
 with Oracle. I think that this time can be improved by changing postgresql
 configuration file. But which parameters i need to manipulate and with
 which values ?

 Here are the specifications of my system :
 V250 architecture sun4u
 2xCPU UltraSparc IIIi 1.28 GHz.
 8 Go RAM.

 Regards.

   Will












This mail has originated outside your organization,
either from an external partner or the Global Internet.
Keep this in mind if you answer this message.



[PERFORM] out of memory

2006-02-14 Thread martial . bizel
Hello,

I get an out of memory error with these traces:


TopMemoryContext: 32768 total in 3 blocks; 5152 free (1 chunks); 27616 used
TopTransactionContext: 8192 total in 1 blocks; 8136 free (0 chunks); 56 used
DeferredTriggerXact: 0 total in 0 blocks; 0 free (0 chunks); 0 used
MessageContext: 24576 total in 2 blocks; 2688 free (14 chunks); 21888 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 8192 total in 1 blocks; 3936 free (0 chunks); 4256 used
PortalHeapMemory: 23552 total in 5 blocks; 1160 free (4 chunks); 22392 used
ExecutorState: 8192 total in 1 blocks; 3280 free (4 chunks); 4912 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExecutorState: 24576 total in 2 blocks; 11264 free (14 chunks); 13312 used
ExprContext: 8192 total in 1 blocks; 8128 free (0 chunks); 64 used
AggContext: -1976573952 total in 287 blocks; 25024 free (414 chunks);
-1976598976 used
DynaHashTable: 503439384 total in 70 blocks; 6804760 free (257 chunks);
496634624 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
CacheMemoryContext: 516096 total in 6 blocks; 126648 free (2 chunks); 389448
used
test_query: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
test_date: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
query_string_query_string_key: 1024 total in 1 blocks; 640 free (0 chunks); 384
used
query_string_pkey: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_index_indrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704
used
pg_amop_opc_strategy_index: 1024 total in 1 blocks; 320 free (0 chunks); 704
used
pg_shadow_usename_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_conversion_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_language_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 320 free (0 chunks); 704
used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 320 free (0 chunks);
704 used
pg_shadow_usesysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_cast_source_target_index: 1024 total in 1 blocks; 320 free (0 chunks); 704
used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704
used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 320 free (0 chunks);
704 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 640 free (0 chunks); 384
used
pg_conversion_default_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344
used
pg_class_relname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704
used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 320 free (0 chunks); 704
used
pg_language_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used
pg_group_sysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_namespace_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_proc_proname_args_nsp_index: 2048 total in 1 blocks; 704 free (0 chunks);
1344 used
pg_opclass_am_name_nsp_index: 2048 total in 1 blocks; 768 free (0 chunks); 1280
used
pg_group_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_proc_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_operator_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_amproc_opc_procnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704
used
pg_index_indexrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_operator_oprname_l_r_n_index: 2048 total in 1 blocks; 704 free (0 chunks);
1344 used
pg_opclass_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 320 free (0 chunks); 704
used
pg_type_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 320 free (0 chunks);
704 used
pg_class_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used
MdSmgr: 8192 total in 1 blocks; 5712 free (0 chunks); 2480 used
DynaHash: 8192 total in 1 blocks; 6912 free (0 chunks); 1280 used
DynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
DynaHashTable: 8192 total in 1 blocks; 5080 free (0 chunks); 3112 used
DynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used
DynaHashTable: 8192 total in 1 blocks; 3016 free (0 chunks); 5176 

Re: [PERFORM] out of memory

2006-02-14 Thread Tom Lane
[EMAIL PROTECTED] writes:
 I've error out of memory with these traces :

Doing what?

 AggContext: -1976573952 total in 287 blocks; 25024 free (414 chunks);
 -1976598976 used
 DynaHashTable: 503439384 total in 70 blocks; 6804760 free (257 chunks);
 496634624 used

I'd guess that a HashAgg operation ran out of memory ...

regards, tom lane



[PERFORM] Postgres slower than MS ACCESS

2006-02-14 Thread Jay Greenfield

I am running some simple queries to benchmark Postgres 8.1 against MS Access,
and Postgres is 2 to 3 times slower than Access.

Hardware:
Dell Optiplex GX280
P4 3.20 GHz
3GB RAM
Windows XP SP1

The database has one table with 1.2 million rows.

Query:

UPDATE ntdn SET gha = area / 1

I could post the EXPLAIN ANALYZE results, but it's 4,000+ lines long.

I've run various tests on a number of Postgres parameters; none have come
close to Access's time of 5.00 min. Postgres times range between 24 min and
121 min.

Some of the Postgres variables and ranges I've tested:

work_mem: 1,000 to 2,000,000
temp_buffers: 1,000 to 10,000
shared_buffers: 1,000 to 64,000
sort_mem: 1,024,000
fsync: on / off

Why does Access run so much faster? How can I get Postgres to run as fast as
Access?

Thanks,

Jay

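For reference, parameters like those listed above are usually tested as a postgresql.conf fragment along these lines (the values are illustrative only, not recommendations, and `sort_mem` is the pre-8.0 name of what 8.1 calls `work_mem`):

```
# postgresql.conf fragment -- illustrative test values only
shared_buffers = 32000       # in 8 KB pages, ~256 MB
work_mem = 65536             # KB, per sort/hash operation
temp_buffers = 10000         # per-session temp-table buffers
fsync = on                   # only turn off for throwaway benchmark data
```

After editing the file, the server must be restarted for shared_buffers to take effect; work_mem and temp_buffers can also be changed per session with SET.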

Re: [PERFORM] out of memory

2006-02-14 Thread martial . bizel
Thanks for your response.

I've made this request:

SELECT query_string, DAY.ocu FROM search_data.query_string,
 (SELECT SUM(occurence) AS ocu, query
FROM daily.queries_detail_statistics
 WHERE date >= '2006-01-01' AND date <= '2006-01-30'
 AND portal IN (1,2)
 GROUP BY query
 ORDER BY ocu DESC
 LIMIT 1000) AS DAY
 WHERE DAY.query = id;

and after a few minutes I get the out of memory error, with this execution plan:

 Nested Loop  (cost=8415928.63..8418967.13 rows=1001 width=34)
   ->  Subquery Scan day  (cost=8415928.63..8415941.13 rows=1000 width=16)
         ->  Limit  (cost=8415928.63..8415931.13 rows=1000 width=12)
               ->  Sort  (cost=8415928.63..8415932.58 rows=1582 width=12)
                     Sort Key: sum(occurence)
                     ->  HashAggregate  (cost=8415840.61..8415844.56 rows=1582 width=12)
                           ->  Seq Scan on queries_detail_statistics  (cost=0.00..8414056.00 rows=356922 width=12)
                                 Filter: ((date >= '2006-01-01'::date) AND (date <= '2006-01-30'::date) AND (((portal)::text = '1'::text) OR ((portal)::text = '2'::text)))
   ->  Index Scan using query_string_pkey on query_string  (cost=0.00..3.01 rows=1 width=34)
         Index Cond: ("outer".query = query_string.id)
(10 rows)

If the HashAgg operation ran out of memory, what can I do?

Thanks a lot,
martial

 [EMAIL PROTECTED] writes:
  I've error out of memory with these traces :

 Doing what?

  AggContext: -1976573952 total in 287 blocks; 25024 free (414 chunks);
  -1976598976 used
  DynaHashTable: 503439384 total in 70 blocks; 6804760 free (257 chunks);
  496634624 used

 I'd guess that a HashAgg operation ran out of memory ...

   regards, tom lane







Re: [PERFORM] Postgres slower than MS ACCESS

2006-02-14 Thread Scott Marlowe
On Tue, 2006-02-14 at 09:51, Jay Greenfield wrote:
 I am running some simple queries to benchmark Postgres 8.1 against MS
 Access and Postgres is 2 to 3 times slower that Access.  

A BUNCH OF STUFF SNIPPED

 Why does Access run so much faster?  How can I get Postgres to run as
 fast as Access?

Because Access is not a multi-user database management system designed
to handle anywhere from a couple to several thousand users at the same
time?

PostgreSQL can do this update while still allowing users to access the
data in the database, and can handle updates to the same table at the
same time, as long as they aren't hitting the same rows.

They're two entirely different beasts.

One is good at batch processing moderate amounts of data for one user at
a time.  The other is good for real time processing of very large
amounts of data for a fairly large number of users while running at an
acceptable, if slower speed.



Re: [PERFORM] out of memory

2006-02-14 Thread Scott Marlowe
On Tue, 2006-02-14 at 10:03, [EMAIL PROTECTED] wrote:
 Thanks for your response,

SNIP

 if HashAgg operation ran out of memory, what can i do ?

1: Don't top post.

2: Have you run analyze? Normally when hash agg runs out of memory, the
planner THOUGHT the hash agg would fit in memory, but it was larger than
expected.  This is commonly a problem when you haven't run analyze.
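Applied to the query from earlier in the thread, the suggestion above amounts to something like this (a sketch; the table name comes from the posted query):

```sql
-- Refresh planner statistics for the table feeding the HashAggregate,
-- then re-check whether the estimated group count looks realistic:
ANALYZE daily.queries_detail_statistics;

EXPLAIN SELECT SUM(occurence), query
FROM daily.queries_detail_statistics
GROUP BY query;
```

If the estimated rows for the aggregate are still far below the real number of distinct groups, the planner will keep choosing a hash aggregate that does not fit in memory.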



Re: [PERFORM] copy and postgresql.conf

2006-02-14 Thread Albert Cervera Areny
Sorry, COPY improvements came with 8.1 
(http://www.postgresql.org/docs/whatsnew)

On Tuesday, 14 February 2006 14:26, FERREIRA, William (VALTECH) wrote:
 thanks,

 i'm using postgresql 8.0.3
 there is no primary key and no index on my tables

 regards

 -----Original Message-----
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Albert
 Cervera Areny
 Sent: Tuesday, 14 February 2006 12:38
 To: pgsql-performance@postgresql.org
 Subject: Re: [PERFORM] copy and postgresql.conf



 Hi William,
   which PostgreSQL version are you using? Newer (8.0+) versions have some

 important performance improvements for the COPY command.

   Also, you'll notice significant improvements by creating primary &
 foreign

 keys after the copy command. I think config tweaking can improve key and

 index creation but I don't think you can improve the COPY command itself.

   There are also many threads in this list commenting on this issue, 
 you'll

 find them easily in the archives.

 On Tuesday, 14 February 2006 10:44, FERREIRA, William (VALTECH) wrote:
  hi,
 
  i load data from files using copy method.
  Files contain between 2 and 7 millions of rows, spread on 5 tables.
 
  For loading all the data, it takes 40mn, and the same processing takes
  17mn with Oracle. I think that this time can be improved by changing
  postgresql configuration file. But which parameters i need to manipulate
  and with which values ?
 
  Here are the specifications of my system :
  V250 architecture sun4u
  2xCPU UltraSparc IIIi 1.28 GHz.
  8 Go RAM.
 
  Regards.
 
  Will
 
 












Re: [PERFORM] out of memory

2006-02-14 Thread martial . bizel
Yes, I've launched the ANALYZE command before sending the request.
I should add that the postgres version is 7.3.4.

 On Tue, 2006-02-14 at 10:03, [EMAIL PROTECTED] wrote:
  Thanks for your response,

 SNIP

  if HashAgg operation ran out of memory, what can i do ?

 1: Don't top post.

 2: Have you run analyze? Normally when hash agg runs out of memory, the
 planner THOUGHT the hash agg would fit in memory, but it was larger than
 expected.  This is commonly a problem when you haven't run analyze.







Re: [PERFORM] Postgres slower than MS ACCESS

2006-02-14 Thread Jay Greenfield
Is it possible to configure Postgres to behave like Access: a single user
that uses as much of the resources as required?

Thanks, 

Jay.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Scott Marlowe
Sent: Tuesday, February 14, 2006 8:05 AM
To: Jay Greenfield
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgres slower than MS ACCESS

On Tue, 2006-02-14 at 09:51, Jay Greenfield wrote:
 I am running some simple queries to benchmark Postgres 8.1 against MS
 Access and Postgres is 2 to 3 times slower that Access.  

A BUNCH OF STUFF SNIPPED

 Why does Access run so much faster?  How can I get Postgres to run as
 fast as Access?

Because Access is not a multi-user database management system designed
to handle anywhere from a couple to several thousand users at the same
time?

PostgreSQL can do this update while still allowing users to access the
data in the database, and can handle updates to the same table at the
same time, as long as they aren't hitting the same rows.

They're two entirely different beasts.

One is good at batch processing moderate amounts of data for one user at
a time.  The other is good for real time processing of very large
amounts of data for a fairly large number of users while running at an
acceptable, if slower speed.





Re: [PERFORM] out of memory

2006-02-14 Thread Scott Marlowe
On Tue, 2006-02-14 at 10:15, [EMAIL PROTECTED] wrote:
 Yes, I've launched ANALYZE command before sending request.
 I precise that's postgres version is 7.3.4

So what does explain analyze show for this query, if anything?  Can you
increase your sort_mem or shared_buffers (I forget which hash_agg uses
off the top of my head...) to make it work?  Note you can increase
sort_mem on the fly for a given connection.
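On 7.3, the per-connection change mentioned above looks like this (the value is only an example, not a recommendation):

```sql
-- Raise the sort/hash memory budget for this session only
-- (sort_mem is in KB on 7.x; later releases renamed it work_mem):
SET sort_mem = 131072;

-- ... run the aggregate query here ...

-- Return to the postgresql.conf default afterwards:
RESET sort_mem;
```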



Re: [PERFORM] Postgres slower than MS ACCESS

2006-02-14 Thread Scott Marlowe
On Tue, 2006-02-14 at 10:17, Jay Greenfield wrote:
 Is it possible to configure Postgres to behave like Access - a single user
 and use as much of the resources as required?  

No.  If you want something akin to that, try SQLite.  It's not as
featureful as PostgreSQL, but it's closer to it than Access.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] out of memory

2006-02-14 Thread martial . bizel
The EXPLAIN ANALYZE command crashes with the out of memory error.

To be precise, I've tried a lot of values for the shared_buffers and
sort_mem parameters.

Now, in the config file, the values are:
sort_mem=32768
and shared_buffer=3

The server has 4 GB of RAM,
and kernel.shmmax=30720



 On Tue, 2006-02-14 at 10:15, [EMAIL PROTECTED] wrote:
  Yes, I've launched the ANALYZE command before sending the request.
  To be precise, the Postgres version is 7.3.4.

 So what does EXPLAIN ANALYZE show for this query, if anything?  Can you
 increase your sort_mem or shared_buffers (I forget which hash_agg uses
 off the top of my head...) if necessary to make it work?  Note you can
 increase sort_mem on the fly for a given connection.

 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[PERFORM] 0ut of Memory Error during Vacuum Analyze

2006-02-14 Thread Tomeh, Husam
 
This is the second time I'm getting out of memory error when I start a
database vacuum or try to vacuum any table. Note this machine has been
used for data load batch purposes. 

=# vacuum analyze code;
ERROR:  out of memory
DETAIL:  Failed on request of size 1073741820.

I'm running Postgres 8.1.1 on RedHat 2.6 kernel (HP server). 
My maintenance work area has never been changed. It's set to 1GB.
(maintenance_work_mem = 1048576). Physical memory: 32 GB.  

Bouncing the database does not help. 

Two workarounds I have used so far:

  1) Decreasing the maintenance_work_mem to 512MB, after which vacuum
analyze works just fine.

Or 

  2) Bouncing the server (maintaining the original 1GB
maintenance_work_mem) would also work.
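For what it's worth, workaround (1) can also be applied per session without
editing the config file; a minimal sketch against the table from the example
above (the value is expressed in kB on 8.1):

```sql
-- Lower the vacuum memory budget for this session only; the
-- server-wide setting in postgresql.conf is left untouched.
SET maintenance_work_mem = 524288;  -- 512MB, expressed in kB
VACUUM ANALYZE code;
RESET maintenance_work_mem;         -- back to the server default
```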

I have not had that error on the production instances (which are
identical copies of the loading instance) - only the loading instance..

Any explanation as to why, and how to avoid it?  Thanks



 
Husam  
**
This message contains confidential information intended only for the use of the 
addressee(s) named above and may contain information that is legally 
privileged.  If you are not the addressee, or the person responsible for 
delivering it to the addressee, you are hereby notified that reading, 
disseminating, distributing or copying this message is strictly prohibited.  If 
you have received this message by mistake, please immediately notify us by 
replying to the message and delete the original message immediately thereafter.

Thank you.

   FADLD Tag
**


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] out of memory

2006-02-14 Thread Scott Marlowe
On Tue, 2006-02-14 at 10:32, [EMAIL PROTECTED] wrote:
 The EXPLAIN ANALYZE command crashes with the out of memory error.
 
 To be precise, I've tried a lot of values for the shared_buffers and
 sort_mem parameters.
 
 Now, in the config file, the values are:
 sort_mem=32768
 and shared_buffer=3

OK, on the command line, try increasing the sort_mem until hash_agg can
work.  With a 4 gig machine, you should be able to go as high as needed
here, I'd think.  Try as high as 50 or so or more.  Then when
explain analyze works, compare the actual versus estimated number of
rows.
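A per-connection sketch of that procedure (the SELECT is a placeholder for
the failing query; sort_mem is counted in kB on servers of that era):

```sql
-- Raise sort_mem for this connection only, then retest.
SET sort_mem = 262144;       -- 256MB in kB; raise further if still failing
EXPLAIN ANALYZE SELECT ...;  -- substitute the failing query here
-- Compare the estimated vs. actual "rows=" figures in the output.
RESET sort_mem;
```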

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] copy and postgresql.conf

2006-02-14 Thread FERREIRA, William (VALTECH)

30% faster!!! I will test this new version...

thanks a lot

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Albert
Cervera Areny
Sent: Tuesday, February 14, 2006 17:07
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] copy and postgresql.conf



Sorry, COPY improvements came with 8.1

(http://www.postgresql.org/docs/whatsnew)

On Tuesday, February 14, 2006 at 14:26, FERREIRA, William (VALTECH) wrote:
 thanks,

 I'm using PostgreSQL 8.0.3.
 There is no primary key and no index on my tables.

 regards

 -----Original Message-----
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Albert
 Cervera Areny
 Sent: Tuesday, February 14, 2006 12:38
 To: pgsql-performance@postgresql.org
 Subject: Re: [PERFORM] copy and postgresql.conf



 Hi William,
    which PostgreSQL version are you using? Newer (8.0+) versions have some
 important performance improvements for the COPY command.

    Also, you'll notice significant improvements by creating primary and
 foreign keys after the COPY command. I think config tweaking can improve
 key and index creation, but I don't think you can improve the COPY command
 itself.

    There are also many threads in this list commenting on this issue;
 you'll find them easily in the archives.

 On Tuesday, February 14, 2006 at 10:44, FERREIRA, William (VALTECH) wrote:
  hi,
 
  I load data from files using the COPY method.
  Files contain between 2 and 7 million rows, spread over 5 tables.
 
  Loading all the data takes 40 min, while the same processing takes
  17 min with Oracle. I think this time can be improved by changing the
  postgresql configuration file. But which parameters do I need to adjust,
  and with which values?
 
  Here are the specifications of my system:
  V250 architecture sun4u
  2x UltraSPARC IIIi CPUs at 1.28 GHz
  8 GB RAM
 
  Regards.
 
  Will
 
 
  This e-mail is intended only for the above addressee. It may contain
  privileged information. If you are not the addressee you must not copy,
  distribute, disclose or use any of the information in it. If you have
  received it in error please delete it and immediately notify the sender.
  Security Notice: all e-mail, sent to or from this address, may be
  accessed by someone other than the recipient, for system management and
  security reasons. This access is controlled under Regulation of
  Investigatory Powers Act 2000, Lawful Business Practises.
 
  ---(end of broadcast)---
  TIP 4: Have you searched our list archives?
 
 http://archives.postgresql.org

 --

 Albert Cervera Areny
 Dept. Informàtica Sedifa, S.L.

 Av. Can Bordoll, 149
 08202 - Sabadell (Barcelona)
 Tel. 93 715 51 11
 Fax. 93 715 51 12

 
   AVISO LEGAL  
 La   presente  comunicación  y sus anexos tiene como destinatario la
 persona a  la  que  va  dirigida, por  lo  que  si  usted lo  recibe
 por error  debe  notificarlo  al  remitente  y   eliminarlo   de  su
 sistema,  no  pudiendo  utilizarlo,  total  o   parcialmente,   para
 ningún  fin.  Su  contenido  puede  tener información confidencial o
 protegida legalmente   y   únicamente   expresa  la  opinión del
 remitente.  El   uso   del   correo   electrónico   vía Internet  no
 permite  asegurar  ni  la  confidencialidad   de   los  mensajes
 ni  su  correcta  recepción.  En  el  caso   de   que   el
 destinatario no consintiera la utilización  del correo  electrónico,
 deberá ponerlo en nuestro conocimiento inmediatamente.
 
 ... DISCLAIMER .
 This message and its  attachments are  intended  exclusively for the
 named addressee. If you  receive  this  message  in   error,  please
 immediately delete it from  your  system  and notify the sender. You
 may  not  use  this message  or  any  part  of it  for any  purpose.
 The   message   may  contain  information  that  is  confidential or
 protected  by  law,  and  any  opinions  expressed  are those of the
 individual  sender.  Internet  e-mail   guarantees   neither   the
 confidentiality   nor  the  proper  receipt  of  the  message  sent.
 If  the  addressee  of  this  message  does  not  consent to the use
 of  internet  e-mail,  please  inform  us  immediately.
 





 ---(end of broadcast)---
 TIP 5: don't forget to increase your free space map settings




Re: [PERFORM] Postgres slower than MS ACCESS

2006-02-14 Thread Stephen Frost
* Jay Greenfield ([EMAIL PROTECTED]) wrote:
 Database has one table with 1.2 million rows
 Query:
 
 UPDATE ntdn SET gha=area/1
 
 I could post the EXPLAIN ANALYZE results but it's 4,000+ lines long

How do you get 4,000+ lines of explain analyze for one update query in a
database with only one table?  Something's a bit fishy there.  Perhaps you 
mean explain verbose, though I don't really see how that'd be so long 
either, but it'd be closer.  Could you provide some more sane
information?

 I've run various tests on a number of Postgres parameters; none of which
 have come close to Access' time of 5.00 min.  Postgres times range between
 24 min and 121 min.
 
 Some of the Postgres variables and ranges I've tested.
 work_mem:  1,000 to 2,000,000
 temp_buffers:  1,000 to 10,000
 shared_buffers:  1,000 to 64,000
 sort_mem:  1,024,000
 fsync on / off
 
 Why does Access run so much faster?  How can I get Postgres to run as fast
 as Access?

While it's true that Access almost certainly takes some shortcuts, 24
minutes for an update across 1.2 million rows seems an awfully long time
for Postgres.  Is this table exceptionally large in some way (ie: lots 
of columns)?  I expect running with fsync off would be closer to 'Access
mode', though it has risks (of course).  Also, it might be faster to
insert into a separate table rather than run a huge update like that in
Postgres.  Also, if there are indexes on the table in question, you
might drop them before doing the update/insert and recreate them after
the query has finished.

You really haven't provided anywhere near enough information to figure
out what the actual problem is here.  Access does take shortcuts, but the
times you're posting for Postgres seem quite far off based on the
hardware and commands you've described...

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [PERFORM] out of memory

2006-02-14 Thread Tom Lane
[EMAIL PROTECTED] writes:
 Yes, I've launched the ANALYZE command before sending the request.
 To be precise, the Postgres version is 7.3.4.

Can't possibly be 7.3.4, that version didn't have HashAggregate.

How many distinct values of query actually exist in the table?

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] out of memory

2006-02-14 Thread Scott Marlowe
On Tue, 2006-02-14 at 11:36, Tom Lane wrote:
 [EMAIL PROTECTED] writes:
  Yes, I've launched the ANALYZE command before sending the request.
  To be precise, the Postgres version is 7.3.4.
 
 Can't possibly be 7.3.4, that version didn't have HashAggregate.
 
 How many distinct values of query actually exist in the table?

I thought that looked odd.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze

2006-02-14 Thread Tom Lane
Tomeh, Husam [EMAIL PROTECTED] writes:
 =# vacuum analyze code;
 ERROR:  out of memory
 DETAIL:  Failed on request of size 1073741820.

That looks a whole lot like a corrupt-data issue.  The apparent
dependency on maintenance_work_mem is probably illusory --- I suspect
some of your trials are selecting the corrupted row to use in the
ANALYZE stats, and others are randomly selecting other rows.

If you are able to pg_dump the table in question then this theory is
wrong, but I'd suggest trying that first.

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Postgres slower than MS ACCESS

2006-02-14 Thread Tom Lane
Stephen Frost [EMAIL PROTECTED] writes:
 While it's true that Access almost certainly takes some shortcuts, 24
 minutes for an update across 1.2 million rows seems an awfully long time
 for Postgres.

I did some experiments along this line with a trivial table (2 integer
columns) of 1.28M rows.  I used CVS tip with all parameters at defaults.
With no indexes, an UPDATE took about 50 seconds.  With one index, it
took 628 seconds.  It's not hard to believe you could get to Jay's
figures with multiple indexes.

Looking in the postmaster log, I see I was getting checkpoints every few
seconds.  Increasing checkpoint_segments to 30 (a factor of 10) brought
it down to 355 seconds, and then increasing shared_buffers to 2
brought it down to 165 sec.  Separating WAL and data onto different
disks would have helped too, no doubt, but I'm too lazy to try it.
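For reference, those two knobs live in postgresql.conf; a sketch with
illustrative values (some figures in this archived message are garbled, so
the numbers below are assumptions rather than Tom's exact settings):

```
# postgresql.conf -- settings increased for the bulk-update experiment
checkpoint_segments = 30   # default was 3; fewer, larger checkpoints
shared_buffers = 20000     # counted in 8KB pages here, roughly 160MB
```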

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Postgres slower than MS ACCESS

2006-02-14 Thread Stephen Frost
* Tom Lane ([EMAIL PROTECTED]) wrote:
 Stephen Frost [EMAIL PROTECTED] writes:
  While it's true that Access almost certainly takes some shortcuts, 24
  minutes for an update across 1.2 million rows seems an awfully long time
  for Postgres.
 
 I did some experiments along this line with a trivial table (2 integer
 columns) of 1.28M rows.  I used CVS tip with all parameters at defaults.
 With no indexes, an UPDATE took about 50 seconds.  With one index, it
 took 628 seconds.  It's not hard to believe you could get to Jay's
 figures with multiple indexes.

With multiple indexes, you might want to drop them and recreate them
when you're updating an entire table.

 Looking in the postmaster log, I see I was getting checkpoints every few
 seconds.  Increasing checkpoint_segments to 30 (a factor of 10) brought
 it down to 355 seconds, and then increasing shared_buffers to 2
 brought it down to 165 sec.  Separating WAL and data onto different
 disks would have helped too, no doubt, but I'm too lazy to try it.

Sure, this was kind of my point, we need more information about the
database if we're going to have much of a chance of improving the
results he's seeing.  165 seconds is certainly a great deal better than
24 minutes. :)

Thanks,

Stephen


signature.asc
Description: Digital signature


Re: [PERFORM] Postgres slower than MS ACCESS

2006-02-14 Thread Jay Greenfield
 How do you get 4,000+ lines of explain analyze for one update query in a
 database with only one table?  Something a bit fishy there.  Perhaps you 
 mean explain verbose, though I don't really see how that'd be so long 
 either, but it'd be closer.  Could you provide some more sane
 information?

My mistake - there were 4,000 lines in the EXPLAIN ANALYZE VERBOSE output.
Here is the output of EXPLAIN ANALYZE:

QUERY PLAN
Seq Scan on ntdn  (cost=0.00..3471884.39 rows=1221391 width=1592) (actual
time=57292.580..1531300.003 rows=1221391 loops=1)
Total runtime: 4472646.988 ms


 Is this table exceptionally large in same way (ie: lots 
 of columns)?

The table is 1.2 million rows X 246 columns.  The only index is the primary
key.  I will try to remove that index to see if that improves performance at
all.

Jay

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 14, 2006 12:43 PM
To: Stephen Frost
Cc: Jay Greenfield; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgres slower than MS ACCESS 

Stephen Frost [EMAIL PROTECTED] writes:
 While it's true that Access almost certainly takes some shortcuts, 24
 minutes for an update across 1.2 million rows seems an awfully long time
 for Postgres.

I did some experiments along this line with a trivial table (2 integer
columns) of 1.28M rows.  I used CVS tip with all parameters at defaults.
With no indexes, an UPDATE took about 50 seconds.  With one index, it
took 628 seconds.  It's not hard to believe you could get to Jay's
figures with multiple indexes.

Looking in the postmaster log, I see I was getting checkpoints every few
seconds.  Increasing checkpoint_segments to 30 (a factor of 10) brought
it down to 355 seconds, and then increasing shared_buffers to 2
brought it down to 165 sec.  Separating WAL and data onto different
disks would have helped too, no doubt, but I'm too lazy to try it.

regards, tom lane


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] Postgres slower than MS ACCESS

2006-02-14 Thread Tom Lane
Jay Greenfield [EMAIL PROTECTED] writes:
 The table is 1.2 million rows X 246 columns.  The only index is the primary
 key.  I will try to remove that index to see if that improves performance at
 all.

Hmm, the large number of columns might have something to do with it ...
what datatypes are the columns?

regards, tom lane

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] SQL Function Performance

2006-02-14 Thread Michael Fuhr
On Tue, Feb 14, 2006 at 11:33:57AM +0200, Adnan DURSUN wrote:
 ->  Nested Loop  (cost=5.90..267.19 rows=3 width=101) (actual 
 time=76.240..30974.777 rows=63193 loops=1)
   ->  Nested Loop  (cost=5.90..123.48 rows=26 width=73) (actual 
 time=32.082..4357.786 rows=14296 loops=1)

A prepared query is planned before the parameters' values are known,
so the planner can't take full advantage of column statistics to
estimate row counts.  The planner must therefore decide on a plan
that should be reasonable in most cases; apparently this isn't one
of those cases, as the disparity between estimated and actual rows
shows.  Maybe Tom (one of the core developers) can comment on whether
anything can be done to improve the plan in this case.

Absent a better solution, you could write a PL/pgSQL function and
build the query as a text string, then EXECUTE it.  That would give
you a new plan each time, one that can take better advantage of
statistics, at the cost of having to plan the query each time you
call the function (but you probably don't care about that cost
as long as the overall results are better).  Here's an example:

CREATE FUNCTION fooquery(qval text) RETURNS SETOF foo AS $$
DECLARE
row    foo%ROWTYPE;
query  text;
BEGIN
query := 'SELECT * FROM foo WHERE val = ' || quote_literal(qval);

FOR row IN EXECUTE query LOOP
RETURN NEXT row;
END LOOP;

RETURN;
END;
$$ LANGUAGE plpgsql STABLE STRICT;
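Calling it then looks like any set-returning function; each call builds and
plans the query afresh, so the planner can use the column statistics for the
literal value:

```sql
SELECT * FROM fooquery('some value');  -- 'some value' is a placeholder
```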

-- 
Michael Fuhr

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Postgres slower than MS ACCESS

2006-02-14 Thread Jay Greenfield
 Hmm, the large number of columns might have something to do with it ...
 what datatypes are the columns?

All sorts, but mostly float4 and varchar(2 to 10)

Jay

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 14, 2006 1:03 PM
To: Jay Greenfield
Cc: 'Stephen Frost'; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgres slower than MS ACCESS 

Jay Greenfield [EMAIL PROTECTED] writes:
 The table is 1.2 million rows X 246 columns.  The only index is the
primary
 key.  I will try to remove that index to see if that improves performance
at
 all.

Hmm, the large number of columns might have something to do with it ...
what datatypes are the columns?

regards, tom lane


---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[PERFORM] 8.2.1 on FreeBSD 5.4-RELEASE

2006-02-14 Thread Paul Khavkine


Hi Guys.


We are running v8.1.2 on FreeBSD 5.4-RELEASE and the server is running
with above-average load.

When I run top I see a lot of postmaster processes in the sbwait state:

# uptime
4:29PM  up 23 days, 20:01, 3 users, load averages: 3.73, 1.97, 1.71

# top
82808 pgsql   1   40 15580K 12008K sbwait 0 107:06  7.52%
postgres
82804 pgsql   1   40 15612K 12028K sbwait 0 106:13  7.08%
postgres
82806 pgsql   1   40 15576K 12008K sbwait 0 106:07  6.84%
postgres
82793 pgsql   1   40 15576K 12008K sbwait 0 106:05  6.54%
postgres
82801 pgsql   1   40 15612K 12032K sbwait 0 106:13  5.57%
postgres
82800 pgsql   1   40 15580K 12012K sbwait 0 105:45  4.88%
postgres
 6613 pgsql   1   40 15612K 12020K sbwait 0  28:47  4.59%
postgres
82798 pgsql   1   40 15612K 12036K sbwait 0 106:10  4.49%
postgres
82799 pgsql   1   40 15612K 12036K sbwait 0 106:27  4.39%
postgres
82797 pgsql   1   40 15612K 12036K sbwait 1 106:23  4.25%
postgres
82748 pgsql   1   40 15564K 11864K sbwait 0  48:12  3.08%
postgres
82747 pgsql   1   40 15560K 11848K sbwait 0  47:58  3.08%
postgres
82749 pgsql   1   40 15564K 11868K sbwait 0  48:27  1.95%
postgres
82751 pgsql   1   40 15564K 11864K sbwait 0  48:14  1.66%
postgres
82739 pgsql   1   40 15564K 11868K sbwait 1  48:38  1.37%
postgres
82750 pgsql   1   40 15564K 11864K sbwait 0  48:07  1.27%
postgres


The server is not very busy, but it has more or less as many writes as
reads.

I have not seen more than 10-15 simultaneous queries.


Any idea why idle postmaster processes consume 3-5% CPU?

This is a FreeBSD 5.4-RELEASE server with 2x3G Xeon CPUs, 2G memory,
RAID1 mirrored U320 drives.


Thanx
Paul



signature.asc
Description: This is a digitally signed message part


Re: [PERFORM] copy and postgresql.conf

2006-02-14 Thread Jignesh K. Shah

What version of Solaris are you using?

Have you seen the recommendations for using COPY on Solaris?
http://blogs.sun.com/roller/page/jkshah?entry=postgresql_on_solaris_better_use

wal_sync_method = fsync
wal_buffers = 128
checkpoint_segments = 128
bgwriter_percent = 0
bgwriter_maxpages = 0


And also for /etc/system on Solaris 10, 9 SPARC use the following

set maxphys=1048576
set md:md_maxphys=1048576
set segmap_percent=50
set ufs:freebehind=0
set msgsys:msginfo_msgmni = 3584
set semsys:seminfo_semmni = 4096
set shmsys:shminfo_shmmax = 15392386252
set shmsys:shminfo_shmmni = 4096


Can you try one run with these values and report back on whether they
help your workload or not?


At least I saw improvements with COPY using the above settings on
Postgres 8.0 and Postgres 8.1 on Solaris.


Regards,
Jignesh




FERREIRA, William (VALTECH) wrote:


30% faster!!! I will test this new version...

thanks a lot

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Albert
Cervera Areny
Sent: Tuesday, February 14, 2006 17:07
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] copy and postgresql.conf



Sorry, COPY improvements came with 8.1

(http://www.postgresql.org/docs/whatsnew)

On Tuesday, February 14, 2006 at 14:26, FERREIRA, William (VALTECH) wrote:
 


thanks,

i'm using postgresql 8.0.3
there is no primary key and no index on my tables

regards

-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Albert
Cervera Areny
Sent: Tuesday, February 14, 2006 12:38
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] copy and postgresql.conf



Hi William,
which PostgreSQL version are you using? Newer (8.0+) versions have some
important performance improvements for the COPY command.

Also, you'll notice significant improvements by creating primary and
foreign keys after the COPY command. I think config tweaking can improve
key and index creation, but I don't think you can improve the COPY command
itself.

There are also many threads in this list commenting on this issue;
you'll find them easily in the archives.

On Tuesday, February 14, 2006 at 10:44, FERREIRA, William (VALTECH) wrote:
   


hi,

I load data from files using the COPY method.
Files contain between 2 and 7 million rows, spread over 5 tables.

Loading all the data takes 40 min, while the same processing takes
17 min with Oracle. I think this time can be improved by changing the
postgresql configuration file. But which parameters do I need to adjust,
and with which values?

Here are the specifications of my system:
V250 architecture sun4u
2x UltraSPARC IIIi CPUs at 1.28 GHz
8 GB RAM

Regards.

Will



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org
 


--

Albert Cervera Areny
Dept. Informàtica Sedifa, S.L.

Av. Can Bordoll, 149
08202 - Sabadell (Barcelona)
Tel. 93 715 51 11
Fax. 93 715 51 12



Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and Create Index

2006-02-14 Thread Tom Lane
Tomeh, Husam [EMAIL PROTECTED] writes:
 I have run pg_dump and had no errors. I also got this error when
 creating one index but not another. When I lowered my
 maintenance_work_mem, the create index succeeded. 

Create index too?  Hm.  That begins to sound more like a memory leak.
Do you have any custom data types or anything like that in this
table?  Can you put together a self-contained test case using dummy
data?

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] SQL Function Performance

2006-02-14 Thread Adnan DURSUN






---Original Message---


From: Michael Fuhr
Date: 02/14/06 23:05:55
To: Adnan DURSUN
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] SQL 
Function Performance

On Tue, Feb 14, 2006 at 11:33:57AM +0200, Adnan DURSUN wrote:
 ->  Nested Loop  (cost=5.90..267.19 rows=3 width=101)
 (actual time=76.240..30974.777 rows=63193 loops=1)
   ->  Nested Loop  (cost=5.90..123.48 rows=26 width=73)
 (actual time=32.082..4357.786 rows=14296 loops=1)

Absent a better solution, you could write a PL/pgSQL function and
build the query as a text string, then EXECUTE it. That would give
you a new plan each time, one that can take better advantage of
statistics, at the cost of having to plan the query each time you
call the function (but you probably don't care about that cost
as long as the overall results are better). Here's an example:

 Yes, I did it. I wrote a PL/pgSQL function. Now results
come back in 100 ms. :-)
I don't like that method, but I have to use it for performance.

Many thanks to everyone who helps...


Adnan DURSUN
ASRIN Bilisim Ltd.
Ankara /TURKEY
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] 10+hrs vs 15min because of just one index

2006-02-14 Thread Aaron Turner
Well just a little update:

1) Looks like I'm definitely RAM constrained.   Just placed an order
for another 4GB.
2) I ended up dropping the primary key too which helped with disk
thrashing a lot (average disk queue wait was between 500ms and 8500ms
before and 250-500ms after)
3) Playing with most of the settings in the postgresql.conf actually
dropped performance significantly.  Looks like I'm starving the disk
cache.
4) I'm going to assume that switching to bytea helped some (width is 54 vs
66), but nothing really measurable

Thanks everyone for your help!

--
Aaron Turner
http://synfin.net/

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and

2006-02-14 Thread Tomeh, Husam

No special data types. The table is a pretty large one, at over 15GB. The
index is about 1.5 GB. Here's the table structure:

 Column  | Type  | Modifiers
-+---+---
 county_id   | numeric(5,0)  | not null
 batch_dt| numeric(8,0)  | not null
 batch_seq   | numeric(5,0)  | not null
 mtg_seq_nbr | numeric(1,0)  | not null
 mtg_rec_dt  | numeric(8,0)  |
 mtg_doc_nbr | character varying(12) |
 mtg_rec_bk  | character varying(6)  |
 mtg_rec_pg  | character varying(6)  |
 mtg_amt | numeric(11,0) |
 lndr_cd | character varying(10) |
 lndr_nm | character varying(30) |
 mtg_assm_ind| character(1)  |
 mtg_typ | character varying(5)  |
 adj_rate_ind| character(1)  |
 mtg_term_nbr| numeric(5,0)  |
 mtg_term_cd | character varying(4)  |
 mtg_due_dt  | numeric(8,0)  |
 mtg_deed_typ| character varying(6)  |
 reverse_mtg_ind | character(1)  |
 refi_ind| character(1)  |
 conform_ind | character(1)  |
 cnstr_ln_ind| character(1)  |
 title_co_cd | character varying(5)  |
 state_id| numeric(5,0)  |
 msa | numeric(4,0)  |
Indexes:
uq_mortgage UNIQUE, btree (county_id, batch_dt, batch_seq,
mtg_seq_nbr)
mortgage_idxc_county_id_mtg_rec_dt btree (county_id, mtg_rec_dt)
mortgage_idxc_state_id_mtg_rec_dt btree (state_id, mtg_rec_dt)

-

 Here's the test I did with maintenance_work_mem = 1GB:


mtrac=# show maintenance_work_mem ;
 maintenance_work_mem
--
 1048576    <==
(1 row)

mtrac=#
mtrac=#
mtrac=# create index mort_ht on mortgage(county_id,mtg_rec_dt);
ERROR:  out of memory   <===
DETAIL:  Failed on request of size 134217728.   <===

 Then I changed the parameter to 512 MB:  


mtrac=# show maintenance_work_mem ;
 maintenance_work_mem
--
 524288    <===
(1 row)

mtrac=#  create index mort_ht_512 on mortgage(county_id,mtg_rec_dt);
CREATE INDEX
---



Regards,

 
Husam  

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, February 14, 2006 2:16 PM
To: Tomeh, Husam
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and
Create Index

Tomeh, Husam [EMAIL PROTECTED] writes:
 I have run pg_dump and had no errors. I also got this error when
 creating one index but not another. When I lowered my
 maintenance_work_mem, the create index succeeded. 

Create index too?  Hm.  That begins to sound more like a memory leak.
Do you have any custom data types or anything like that in this
table?  Can you put together a self-contained test case using dummy
data?

regards, tom lane


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and Create Index

2006-02-14 Thread Tom Lane
Tomeh, Husam [EMAIL PROTECTED] writes:
 mtrac=# show maintenance_work_mem ;
  maintenance_work_mem
 ----------------------
  1048576   <==
 (1 row)

 mtrac=#
 mtrac=#
 mtrac=# create index mort_ht on mortgage(county_id,mtg_rec_dt);
 ERROR:  out of memory                            <===
 DETAIL:  Failed on request of size 134217728.    <===

It would be useful to look at the detailed allocation info that this
(should have) put into the postmaster log.  Also, if you could get
a stack trace back from the error, that would be even more useful.
To do that,
* start psql
* determine PID of connected backend (use pg_backend_pid())
* in another window, as postgres user,
	gdb /path/to/postgres backend-PID
	gdb> break errfinish
	gdb> cont
* issue failing command in psql
* when breakpoint is reached,
	gdb> bt
	... stack trace printed here ...
	gdb> q

regards, tom lane



[PERFORM] could not send data to client: Broken pipe

2006-02-14 Thread Pradeep Parmar
Hi,

I'm using Postgres 7.4. I have a web application built with PHP 4 on top of Postgres 7.4.

I was going through /var/log/messages on my Linux box (SLES 9) and encountered the following messages quite a few times.

postgres[20199]: [4-1] ERROR: could not send data to client: Broken pipe
postgres[30391]: [6-1] LOG: could not send data to client: Broken pipe
postgres[30570]: [6-1] LOG: could not send data to client: Broken pipe

Can anyone help me interpret these messages?
What is causing this error, and how severe is it?


Regards

-- Pradeep