[PERFORM] Performance problem from migrating between versions!

2005-01-17 Thread Kaloyan Iliev Iliev
Hi,
I have the following problem. A week ago we migrated from PG 7.2.3 to
7.4.6. There were a lot of things in the apps to change, and we made
them. But one query refuses to run. On the old 7.2.3 it completed in
about 10 minutes. On the new one it fails with:

DBD::Pg::st execute failed: ERROR:  out of memory
The server was not upgraded or reconfigured, so I suppose the
problem is somewhere in the configuration of Postgres. Below I post
the query and the EXPLAIN output. I can't post the EXPLAIN ANALYZE,
because :))... the query can't execute :)
I also post the result of SHOW ALL to give a view of the server 
configuration.

Thanks in advance for all suggestions.
Kaloyan Iliev
SHOW ALL

name                             setting
-------------------------------  ----------------
add_missing_from                 on
australian_timezones             off
authentication_timeout           60
check_function_bodies            on
checkpoint_segments              16
checkpoint_timeout               300
checkpoint_warning               30
client_encoding                  SQL_ASCII
client_min_messages              notice
commit_delay                     0
commit_siblings                  5
cpu_index_tuple_cost             0.001
cpu_operator_cost                0.0025
cpu_tuple_cost                   0.01
DateStyle                        ISO, DMY
db_user_namespace                off
deadlock_timeout                 1000
debug_pretty_print               off
debug_print_parse                off
debug_print_plan                 off
debug_print_rewritten            off
default_statistics_target        10
default_transaction_isolation    read committed
default_transaction_read_only    off
dynamic_library_path             $libdir
effective_cache_size             13000
enable_hashagg                   on
enable_hashjoin                  on
enable_indexscan                 on
enable_mergejoin                 on
enable_nestloop                  on
enable_seqscan                   on
enable_sort                      on
enable_tidscan                   on
explain_pretty_print             on
extra_float_digits               0
from_collapse_limit              8
fsync                            on
geqo                             on
geqo_effort                      1
geqo_generations                 0
geqo_pool_size                   0
geqo_selection_bias              2
geqo_threshold                   11
join_collapse_limit              8
krb_server_keyfile               unset
lc_collate                       C
lc_ctype                         CP1251
lc_messages                      C
lc_monetary                      C
lc_numeric                       C
lc_time                          C
log_connections                  off
log_duration                     off
log_error_verbosity              default
log_executor_stats               off
log_hostname                     off
log_min_duration_statement       -1
log_min_error_statement          panic
log_min_messages                 notice
log_parser_stats                 off
log_pid                          off
log_planner_stats                off
log_source_port                  off
log_statement                    off
log_statement_stats              off
log_timestamp                    on
max_connections                  256
max_expr_depth                   1
max_files_per_process            1000
max_fsm_pages                    2
max_fsm_relations                1000
max_locks_per_transaction        64
password_encryption              on
port                             5432
pre_auth_delay                   0
preload_libraries                unset
random_page_cost                 4
regex_flavor                     advanced
rendezvous_name                  unset
search_path                      $user,public
server_encoding                  SQL_ASCII
server_version                   7.4.6
shared_buffers                   1000
silent_mode                      off
sort_mem                         1024
sql_inheritance                  off
ssl                              off
statement_timeout                0
stats_block_level                on
stats_command_string             on
stats_reset_on_server_start      off
stats_row_level                  on
stats_start_collector            on
superuser_reserved_connections   2
syslog                           0
syslog_facility                  LOCAL0
syslog_ident                     postgres
tcpip_socket                     on
TimeZone                         unknown
trace_notify                     off
transaction_isolation            read committed
transaction_read_only            off
transform_null_equals            off
unix_socket_directory            unset
unix_socket_group                unset
unix_socket_permissions          511
vacuum_mem                       8192
virtual_host                     unset
wal_buffers                      8
wal_debug                        0
wal_sync_method                  fsync
zero_damaged_pages               off
(113 rows)
And now the query:

explain select UNPAID.ino,
       I.idate,
       round(UNPAID.saldo -
             ( select round(coalesce(sum(total),0),5)
                 from invoices I1
                where I1.iino = I.ino AND
                      I1.istatus = 0 AND
                      I1.itype = 2 )
            ,2) AS saldo,
       C.name AS client_name,
       SC.branch AS client_branch,
       I.total,
       I.nomenclature_no AS nom,
       I.subnom_no AS subnom,
       OF.description AS office,
       coalesce((select 1.2 * sum(AD.bgl_amount)::float / AC.amount
                   from acc_clients AC,
                        config C,
                        acc_debts AD,
                        debts_desc D
                  where C.office = OF.officeid AND
                        not AC.credit AND
                        AC.ino = I.ino AND
                        AC.transact_no = AD.transact_no AND
                        AD.credit AND
                        AD.debtid = D.debtid AND
                        C.confid = D.refid AND

Re: [PERFORM] Index on a function and SELECT DISTINCT

2005-01-17 Thread Frank Wiles
On Fri, 14 Jan 2005 12:32:12 -0600
Adrian Holovaty <[EMAIL PROTECTED]> wrote:

> If I have this table, function and index in Postgres 7.3.6 ...
> 
> """
> CREATE TABLE news_stories (
> id serial primary key NOT NULL,
> pub_date timestamp with time zone NOT NULL,
> ...
> )
> CREATE OR REPLACE FUNCTION get_year_trunc(timestamp with time zone)
> returns timestamp with time zone AS 'SELECT date_trunc(\'year\',$1);'
> LANGUAGE 'SQL' IMMUTABLE;
> CREATE INDEX news_stories_pub_date_year_trunc ON 
> news_stories( get_year_trunc(pub_date) );
> """
>  
> ...why does this query not use the index?
>  
> db=# EXPLAIN SELECT DISTINCT get_year_trunc(pub_date) FROM
> news_stories;
>                               QUERY PLAN
> ------------------------------------------------------------------------
>  Unique  (cost=59597.31..61311.13 rows=3768 width=8)
>    ->  Sort  (cost=59597.31..60454.22 rows=342764 width=8)
>          Sort Key: date_trunc('year'::text, pub_date)
>          ->  Seq Scan on news_stories  (cost=0.00..23390.55 rows=342764 width=8)
> (4 rows)
> 
> The query is noticeably slow (2 seconds) on a database with 150,000+
> records. How can I speed it up?

  It's doing a sequential scan because you're not limiting the query in
  the WHERE clause.  No point in using an index when you're asking for
  the entire table. :)
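
  For contrast, a quick sketch (untested; the date literal is just an
  example value) of the kind of query that can use that functional index:

    EXPLAIN SELECT * FROM news_stories
     WHERE get_year_trunc(pub_date) = '2004-01-01 00:00:00+00';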

 -
   Frank Wiles <[EMAIL PROTECTED]>
   http://www.wiles.org
 -




Re: [PERFORM] Performance problem from migrating between versions!

2005-01-17 Thread Tom Lane
Kaloyan Iliev Iliev <[EMAIL PROTECTED]> writes:
> I have the following problem. A week ago we migrated from PG 7.2.3 to
> 7.4.6. There were a lot of things in the apps to change, and we made
> them. But one query refuses to run. On the old 7.2.3 it completed in
> about 10 minutes. On the new one it fails with:
> DBD::Pg::st execute failed: ERROR:  out of memory

Does setting enable_hashagg to OFF fix it?
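
A quick way to test it (a sketch -- assuming the SET runs in the same
session as the failing query):

    SET enable_hashagg TO off;
    -- re-run the problem query here
    RESET enable_hashagg;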

regards, tom lane



[PERFORM] Optimizing this count query

2005-01-17 Thread Andrei Bintintan



Hi to all,

I have a query which counts how many elements I have in the database:

SELECT count(o.id) FROM orders o
  INNER JOIN report r ON o.id=r.id_order
  INNER JOIN status s ON o.id_status=s.id
  INNER JOIN contact c ON o.id_ag=c.id
  INNER JOIN endkunde e ON o.id_endkunde=e.id
  INNER JOIN zufriden z ON r.id_zufriden=z.id
  INNER JOIN plannung v ON v.id=o.id_plannung
  INNER JOIN mpsworker w ON v.id_worker=w.id
  INNER JOIN person p ON p.id = w.id_person
  WHERE o.id_status>3

There are not all that many rows in the tables (~ 10).

I keep the joins because the WHERE clause can also contain other search
conditions that apply to the other tables.

Now, the id_status from the orders table (>3) can be 4 or 6. id_status = 6
has by far the biggest share (4 = 10%, 6 = 70%, and the rest are other
statuses < 4). I think this is why the planner uses sequential scans.

I'm asking how I can improve the execution time of this query, because
these tables are always growing. This count sometimes takes more than
10 seconds, and I need to run it very often.

Best regards,
Andy.

The explain:
Aggregate  (cost=37931.33..37931.33 rows=1 width=4)
  ->  Hash Join  (cost=27277.86..37828.45 rows=41154 width=4)
        Hash Cond: ("outer".id_person = "inner".id)
        ->  Hash Join  (cost=27269.79..37100.18 rows=41153 width=8)
              Hash Cond: ("outer".id_worker = "inner".id)
              ->  Hash Join  (cost=27268.28..36378.50 rows=41152 width=8)
                    Hash Cond: ("outer".id_endkunde = "inner".id)
                    ->  Hash Join  (cost=25759.54..33326.98 rows=41151 width=12)
                          Hash Cond: ("outer".id_ag = "inner".id)
                          ->  Hash Join  (cost=25587.07..32331.51 rows=41150 width=16)
                                Hash Cond: ("outer".id_status = "inner".id)
                                ->  Hash Join  (cost=25586.00..31713.18 rows=41150 width=20)
                                      Hash Cond: ("outer".id_zufriden = "inner".id)
                                      ->  Hash Join  (cost=25584.85..31094.78 rows=41150 width=24)
                                            Hash Cond: ("outer".id_plannung = "inner".id)
                                            ->  Hash Join  (cost=24135.60..27869.53 rows=41149 width=24)
                                                  Hash Cond: ("outer".id = "inner".id_order)
                                                  ->  Seq Scan on orders o  (cost=0.00..2058.54 rows=42527 width=20)
                                                        Filter: (id_status > 3)
                                                  ->  Hash  (cost=23860.48..23860.48 rows=42848 width=8)
                                                        ->  Seq Scan on report r  (cost=0.00..23860.48 rows=42848 width=8)
                                            ->  Hash  (cost=1050.80..1050.80 rows=62180 width=8)
                                                  ->  Seq Scan on plannung v  (cost=0.00..1050.80 rows=62180 width=8)
                                      ->  Hash  (cost=1.12..1.12 rows=12 width=4)
                                            ->  Seq Scan on zufriden z  (cost=0.00..1.12 rows=12 width=4)
                                ->  Hash  (cost=1.06..1.06 rows=6 width=4)
                                      ->  Seq Scan on status s  (cost=0.00..1.06 rows=6 width=4)
                          ->  Hash  (cost=161.57..161.57 rows=4357 width=4)
                                ->  Seq Scan on contact c  (cost=0.00..161.57 rows=4357 width=4)
                    ->  Hash  (cost=1245.99..1245.99 rows=44299 width=4)
                          ->  Seq Scan on endkunde e  (cost=0.00..1245.99 rows=44299 width=4)
              ->  Hash  (cost=1.41..1.41 rows=41 width=8)
                    ->  Seq Scan on mpsworker w  (cost=0.00..1.41 rows=41 width=8)
        ->  Hash  (cost=7.66..7.66 rows=166 width=4)
              ->  Seq Scan on person p  (cost=0.00..7.66 rows=166 width=4)


Re: [PERFORM] Increasing RAM for more than 4 Gb. using postgresql

2005-01-17 Thread William Yu
I inferred this from reading up on the compressed vm project. It can be 
higher or lower depending on what devices you have in your system -- 
however, I've read messages from kernel hackers saying Linux is very 
aggressive in reserving memory space for devices because it must be 
allocated at boot time.


Josh Berkus wrote:
William,

The threshold for using PAE is actually far lower than 4GB. 4GB is the
total memory address space -- split that in half for 2GB for userspace,
2GB for kernel. The OS cache resides in kernel space -- after you take
away the memory allocation for devices, you're left with a window of
roughly 900MB.

I'm curious, how do you get 1.1GB for memory allocation for devices?


Re: [PERFORM] Increasing RAM for more than 4 Gb. using postgresql

2005-01-17 Thread William Yu
[EMAIL PROTECTED] wrote:
>> Since the optimal state is to allocate a small amount of memory to
>> Postgres and leave a huge chunk to the OS cache, this means you are
>> already hitting the PAE penalty at 1.5GB of memory.
>
> How could I avoid hitting this?

Upgrade to 64-bit processors + 64-bit linux.


Re: [PERFORM] Performance problem from migrating between versions!

2005-01-17 Thread Kaloyan Iliev Iliev
Thanks,
It worked. I have read in the docs what this "enable_hashagg" does, but I
couldn't understand it. What does it change?

From the Doc:
---
enable_hashagg (boolean)
   Enables or disables the query planner's use of hashed aggregation
   plan types. The default is on. This is used for debugging the query
   planner. 


How is it used to debug the query planner? And why does it lower the memory usage?
Thank you in advance.
Kaloyan Iliev

Tom Lane wrote:
> Kaloyan Iliev Iliev <[EMAIL PROTECTED]> writes:
>> I have the following problem. A week ago we migrated from PG 7.2.3 to
>> 7.4.6. There were a lot of things in the apps to change, and we made
>> them. But one query refuses to run. On the old 7.2.3 it completed in
>> about 10 minutes. On the new one it fails with:
>> DBD::Pg::st execute failed: ERROR:  out of memory
>
> Does setting enable_hashagg to OFF fix it?
>
> regards, tom lane



Re: [PERFORM] Optimizing this count query

2005-01-17 Thread Tom Lane
"Andrei Bintintan" <[EMAIL PROTECTED]> writes:
> SELECT count(o.id) FROM orders o
>   INNER JOIN report r ON o.id=r.id_order
>   INNER JOIN status s ON o.id_status=s.id
>   INNER JOIN contact c ON o.id_ag=c.id
>   INNER JOIN endkunde e ON o.id_endkunde=e.id
>   INNER JOIN zufriden z ON r.id_zufriden=z.id
>   INNER JOIN plannung v ON v.id=o.id_plannung
>   INNER JOIN mpsworker w ON v.id_worker=w.id
>   INNER JOIN person p ON p.id = w.id_person
>   WHERE o.id_status>3

> I'm asking how I can improve the execution time of this query, because
> these tables are always growing. This count sometimes takes more than
> 10 seconds, and I need to run it very often.

Unless you've increased the default value of join_collapse_limit, this
construction will be forcing the join order; see
http://www.postgresql.org/docs/7.4/static/explicit-joins.html

I'm not sure if you can improve the join order at all --- since you only
showed EXPLAIN and not EXPLAIN ANALYZE, it's hard to be sure whether any
of the steps are producing large intermediate results.  But it's
something to look into.

You should also ask yourself if you need to be joining so many tables at
all.  The planner seems to think that only the o/r join is really going
to affect the result row count.  I can't tell if it's right or not, but
if this is a star schema and the other seven tables are just detail
tables, you don't need them in order to obtain a count.
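
For example, a sketch under the assumption that every order matches
exactly one row in each detail table (so dropping those joins cannot
change the count):

    SELECT count(o.id) FROM orders o
      INNER JOIN report r ON o.id = r.id_order
      WHERE o.id_status > 3;

    -- or raise the limit past the number of joined tables, so the
    -- planner searches the possible join orders itself:
    SET join_collapse_limit = 10;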

regards, tom lane



Re: [PERFORM] Index on a function and SELECT DISTINCT

2005-01-17 Thread Adrian Holovaty
Frank Wiles wrote:
> Adrian Holovaty <[EMAIL PROTECTED]> wrote:
> > If I have this table, function and index in Postgres 7.3.6 ...
> >
> > """
> > CREATE TABLE news_stories (
> > id serial primary key NOT NULL,
> > pub_date timestamp with time zone NOT NULL,
> > ...
> > )
> > CREATE OR REPLACE FUNCTION get_year_trunc(timestamp with time zone)
> > returns timestamp with time zone AS 'SELECT date_trunc(\'year\',$1);'
> > LANGUAGE 'SQL' IMMUTABLE;
> > CREATE INDEX news_stories_pub_date_year_trunc ON
> > news_stories( get_year_trunc(pub_date) );
> > """
> >
> > ...why does this query not use the index?
> >
> > db=# EXPLAIN SELECT DISTINCT get_year_trunc(pub_date) FROM news_stories;
> >                               QUERY PLAN
> > ------------------------------------------------------------------------
> >  Unique  (cost=59597.31..61311.13 rows=3768 width=8)
> >    ->  Sort  (cost=59597.31..60454.22 rows=342764 width=8)
> >          Sort Key: date_trunc('year'::text, pub_date)
> >          ->  Seq Scan on news_stories  (cost=0.00..23390.55 rows=342764 width=8)
> > (4 rows)
> >
> > The query is noticeably slow (2 seconds) on a database with 150,000+
> > records. How can I speed it up?
>
>   It's doing a sequential scan because you're not limiting the query in
>   the WHERE clause.  No point in using an index when you're asking for
>   the entire table. :)

Ah, that makes sense. So is there a way to optimize SELECT DISTINCT queries 
that have no WHERE clause?

Adrian



Re: [PERFORM] Performance problem from migrating between versions!

2005-01-17 Thread Kaloyan Iliev Iliev




Hi,

I am asking the previous question because there is no change in the query
plan (as far as I can see), but the memory usage decreases from 258MB
to 16MB.

Kaloyan Iliev

Tom Lane wrote:
> Kaloyan Iliev Iliev <[EMAIL PROTECTED]> writes:
>> I have the following problem. A week ago we migrated from PG 7.2.3 to
>> 7.4.6. There were a lot of things in the apps to change, and we made
>> them. But one query refuses to run. On the old 7.2.3 it completed in
>> about 10 minutes. On the new one it fails with:
>> DBD::Pg::st execute failed: ERROR:  out of memory
>
> Does setting enable_hashagg to OFF fix it?
>
> regards, tom lane





Re: [PERFORM] Performance problem from migrating between versions!

2005-01-17 Thread Tom Lane
Kaloyan Iliev Iliev <[EMAIL PROTECTED]> writes:
> It worked. I have read in the docs what this "enable_hashagg" does, but I
> couldn't understand it. What does it change?

Your original 7.4 query plan has several HashAgg steps in it, which are
doing aggregate/GROUP BY operations.  The planner thinks that they will
use only nominal amounts of memory because there are only a few distinct
groups in each case.  Evidently that is wrong and at least one of them
is dealing with so many groups as to run out of memory.  So the next
question is have you ANALYZEd all of these tables recently?

I wouldn't recommend turning off hashagg as a permanent solution, it
was just a quickie to verify my suspicion of where the memory was going.
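
The longer-term fix would look more like this (a sketch -- substitute
the tables your query actually touches):

    ANALYZE invoices;
    ANALYZE acc_clients;
    ANALYZE acc_debts;
    -- then retry the query with the default setting:
    RESET enable_hashagg;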

regards, tom lane



Re: [PERFORM] Index on a function and SELECT DISTINCT

2005-01-17 Thread PFC

Try:
	EXPLAIN SELECT get_year_trunc(pub_date) AS foo FROM ... GROUP BY foo
Apart from that, you could use a materialized view (see the sketch below)...
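
A hand-rolled sketch of that idea -- PostgreSQL 7.x has no built-in
materialized views, so the summary table must be refreshed by hand or
from cron (the table name news_story_years is made up):

    CREATE TABLE news_story_years AS
      SELECT DISTINCT get_year_trunc(pub_date) AS pub_year
        FROM news_stories;

    -- periodic refresh:
    BEGIN;
    DELETE FROM news_story_years;
    INSERT INTO news_story_years
      SELECT DISTINCT get_year_trunc(pub_date) FROM news_stories;
    COMMIT;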
> db=# EXPLAIN SELECT DISTINCT get_year_trunc(pub_date) FROM news_stories;
>
> Ah, that makes sense. So is there a way to optimize SELECT DISTINCT
> queries that have no WHERE clause?
>
> Adrian


Re: [PERFORM] Performance problem from migrating between versions!

2005-01-17 Thread Kaloyan Iliev Iliev
Tom Lane wrote:
> I wouldn't recommend turning off hashagg as a permanent solution, it
> was just a quickie to verify my suspicion of where the memory was going.

Hi,
How should I understand the above sentence? Should I turn "hashagg" off
permanently for this query, or for the entire database? For now I turn it
off just for this query (see the sketch below), so it can work. If I
shouldn't, then what should I do? Will ANALYZE resolve this?
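
Something like this (a sketch -- the real statements are issued from our
Perl/DBD::Pg code, in the same session that runs the query):

    SET enable_hashagg TO off;
    -- run the big query here
    SET enable_hashagg TO on;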

Kaloyan Iliev


Re: [PERFORM] Performance problem from migrating between versions!

2005-01-17 Thread Tom Lane
Kaloyan Iliev Iliev <[EMAIL PROTECTED]> writes:
> Will ANALYZE resolve this?

Try it and find out.

regards, tom lane



Re: [PERFORM] Increasing RAM for more than 4 Gb. using postgresql

2005-01-17 Thread amrit
> >>Since the optimal state is to allocate a small amount of memory to
> >>Postgres and leave a huge chunk to the OS cache, this means you are
> >>already hitting the PAE penalty at 1.5GB of memory.
> >>
> >
> > How could I avoid hitting this?
>
> Upgrade to 64-bit processors + 64-bit linux.

Does PAE help Linux handle more than the 4GB memory limit on a 32-bit
architecture? My Intel server board can handle 12GB of memory [according
to the Intel spec], and if I use Fedora Core 2 with PAE, will the memory
above 4GB be useless?

Any comments, please?
Amrit
Thailand




Re: [PERFORM] Increasing RAM for more than 4 Gb. using postgresql

2005-01-17 Thread Dave Cramer




Amrit,

It's not useless, it's just not optimal.

All operating systems, FC2, FC3, will have the same problem with
greater than 4G of memory on a 32-bit processor.

The *only* way to avoid this is to go to a 64-bit processor (Opteron),
and then, for greater performance, use a Linux distribution compiled
for a 64-bit processor.

Have you identified and optimized the queries? Are you sure you need
more memory?

Dave

[EMAIL PROTECTED] wrote:
>>>> Since the optimal state is to allocate a small amount of memory to
>>>> Postgres and leave a huge chunk to the OS cache, this means you are
>>>> already hitting the PAE penalty at 1.5GB of memory.
>>>
>>> How could I avoid hitting this?
>>
>> Upgrade to 64-bit processors + 64-bit linux.
>
> Does PAE help Linux handle more than the 4GB memory limit on a 32-bit
> architecture? My Intel server board can handle 12GB of memory [according
> to the Intel spec], and if I use Fedora Core 2 with PAE, will the memory
> above 4GB be useless?
>
> Any comments, please?
> Amrit
> Thailand

-- 
Dave Cramer
http://www.postgresintl.com
519 939 0336
ICQ#14675561





Re: [PERFORM] Increasing RAM for more than 4 Gb. using postgresql

2005-01-17 Thread Mark Kirkwood
[EMAIL PROTECTED] wrote:
> Does PAE help Linux handle more than the 4GB memory limit on a 32-bit
> architecture? My Intel server board can handle 12GB of memory [according
> to the Intel spec], and if I use Fedora Core 2 with PAE, will the memory
> above 4GB be useless?
> Any comments, please?
I understand that the 2.6.* kernels are much better at large memory
support (with respect to performance issues), so unless you have a
64-bit machine lying around - this is probably worth a try.
You may need to build a new kernel with the various parameters :
CONFIG_NOHIGHMEM
CONFIG_HIGHMEM4G
CONFIG_HIGHMEM64G
set appropriately (or even upgrade to the latest 2.6.10). I would expect
that some research and experimentation will be required to get the best
out of it - (e.g. the 'bounce buffers' issue).
regards
Mark



Re: [PERFORM] Increasing RAM for more than 4 Gb. using postgresql

2005-01-17 Thread amrit
> I understand that the 2.6.* kernels are much better at large memory
> support (with respect to performance issues), so unless you have a
> 64-bit machine lying around - this is probably worth a try.
>
> You may need to build a new kernel with the various parameters :
>
> CONFIG_NOHIGHMEM
> CONFIG_HIGHMEM4G
> CONFIG_HIGHMEM64G
>
> set appropriately (or even upgrade to the latest 2.6.10). I would expect
> that some research and experimentation will be required to get the best
> out of it - (e.g. the 'bounce buffers' issue).

In the standard rpm FC 2-3 on a newly installed server, would it
automatically detect and configure this if I use a machine with > 4GB
[6GB], or should I configure it manually?
Amrit
Thailand



Re: [PERFORM] Increasing RAM for more than 4 Gb. using postgresql

2005-01-17 Thread Mark Kirkwood
[EMAIL PROTECTED] wrote:
> In the standard rpm FC 2-3 on a newly installed server, would it
> automatically detect and configure this if I use a machine with > 4GB
> [6GB], or should I configure it manually?
> Amrit
> Thailand
Good question. I don't have FC2-3 here to check. I recommend firing off a
question to [EMAIL PROTECTED] (you need to subscribe first):
http://www.redhat.com/mailman/listinfo/fedora-list
best wishes
Mark


Re: [PERFORM] Increasing RAM for more than 4 Gb. using postgresql

2005-01-17 Thread William Yu
My experience is that RH9 auto-detected machines with >= 2GB of RAM and
installed the PAE bigmem kernel by default. I'm pretty sure the FC2/3
installer will do the same.


[EMAIL PROTECTED] wrote:
I understand that the 2.6.* kernels are much better at large memory
support (with respect to performance issues), so unless you have a
64-bit machine lying around - this is probably worth a try.
You may need to build a new kernel with the various parameters :
CONFIG_NOHIGHMEM
CONFIG_HIGHMEM4G
CONFIG_HIGHMEM64G
set appropriately (or even upgrade to the latest 2.6.10). I would expect
that some research and experimentation will be required to get the best
out of it - (e.g. the 'bounce buffers' issue).

In the standard rpm FC 2-3 on a newly installed server, would it
automatically detect and configure this if I use a machine with > 4GB
[6GB], or should I configure it manually?
Amrit
Thailand


Re: [PERFORM] Optimizing this count query

2005-01-17 Thread Andrei Bintintan
I have to do all the joins because the WHERE clause can also contain
other conditions related to the other tables.
For example:
WHERE o.id_status>3 AND o.id_ag=72 AND v.id_worker=5 AND z.id=10.

When these extra search conditions are present, the query runs faster.
One thing I could do at this point is to skip a join when its table is
not needed in the WHERE clause.

This is the explain analyze for the first query:
Aggregate  (cost=37182.56..37182.56 rows=1 width=4) (actual time=3032.126..3032.126 rows=1 loops=1)
  ->  Hash Join  (cost=27279.22..37079.68 rows=41154 width=4) (actual time=662.600..2999.845 rows=42835 loops=1)
        Hash Cond: ("outer".id_endkunde = "inner".id)
        ->  Hash Join  (cost=25770.48..34068.10 rows=41153 width=8) (actual time=561.112..2444.574 rows=42835 loops=1)
              Hash Cond: ("outer".id_worker = "inner".id)
              ->  Hash Join  (cost=25759.54..33326.98 rows=41151 width=12) (actual time=560.514..2361.776 rows=42835 loops=1)
                    Hash Cond: ("outer".id_ag = "inner".id)
                    ->  Hash Join  (cost=25587.07..32331.51 rows=41150 width=16) (actual time=551.505..2240.217 rows=42835 loops=1)
                          Hash Cond: ("outer".id_status = "inner".id)
                          ->  Hash Join  (cost=25586.00..31713.18 rows=41150 width=20) (actual time=551.418..2150.224 rows=42835 loops=1)
                                Hash Cond: ("outer".id_zufriden = "inner".id)
                                ->  Hash Join  (cost=25584.85..31094.78 rows=41150 width=24) (actual time=551.341..2057.142 rows=42835 loops=1)
                                      Hash Cond: ("outer".id_plannung = "inner".id)
                                      ->  Hash Join  (cost=24135.60..27869.53 rows=41149 width=24) (actual time=415.189..1162.429 rows=42835 loops=1)
                                            Hash Cond: ("outer".id = "inner".id_order)
                                            ->  Seq Scan on orders o  (cost=0.00..2058.54 rows=42527 width=20) (actual time=0.046..93.692 rows=42835 loops=1)
                                                  Filter: (id_status > 3)
                                            ->  Hash  (cost=23860.48..23860.48 rows=42848 width=8) (actual time=414.923..414.923 rows=0 loops=1)
                                                  ->  Seq Scan on report r  (cost=0.00..23860.48 rows=42848 width=8) (actual time=282.905..371.401 rows=42848 loops=1)
                                      ->  Hash  (cost=1050.80..1050.80 rows=62180 width=8) (actual time=133.505..133.505 rows=0 loops=1)
                                            ->  Seq Scan on plannung v  (cost=0.00..1050.80 rows=62180 width=8) (actual time=0.034..73.048 rows=62180 loops=1)
                                ->  Hash  (cost=1.12..1.12 rows=12 width=4) (actual time=0.048..0.048 rows=0 loops=1)
                                      ->  Seq Scan on zufriden z  (cost=0.00..1.12 rows=12 width=4) (actual time=0.027..0.040 rows=12 loops=1)
                          ->  Hash  (cost=1.06..1.06 rows=6 width=4) (actual time=0.045..0.045 rows=0 loops=1)
                                ->  Seq Scan on status s  (cost=0.00..1.06 rows=6 width=4) (actual time=0.032..0.037 rows=6 loops=1)
                    ->  Hash  (cost=161.57..161.57 rows=4357 width=4) (actual time=8.973..8.973 rows=0 loops=1)
                          ->  Seq Scan on contact c  (cost=0.00..161.57 rows=4357 width=4) (actual time=0.032..5.902 rows=4357 loops=1)
              ->  Hash  (cost=10.84..10.84 rows=42 width=4) (actual time=0.557..0.557 rows=0 loops=1)
                    ->  Hash Join  (cost=1.51..10.84 rows=42 width=4) (actual time=0.182..0.523 rows=41 loops=1)
                          Hash Cond: ("outer".id = "inner".id_person)
                          ->  Seq Scan on person p  (cost=0.00..7.66 rows=166 width=4) (actual time=0.027..0.216 rows=166 loops=1)
                          ->  Hash  (cost=1.41..1.41 rows=41 width=8) (actual time=0.125..0.125 rows=0 loops=1)
                                ->  Seq Scan on mpsworker w  (cost=0.00..1.41 rows=41 width=8) (actual time=0.038..0.086 rows=41 loops=1)
        ->  Hash  (cost=1245.99..1245.99 rows=44299 width=4) (actual time=101.257..101.257 rows=0 loops=1)
              ->  Seq Scan on endkunde e  (cost=0.00..1245.99 rows=44299 width=4) (actual time=0.050..59.641 rows=44301 loops=1)
Total runtime: 3033.230 ms

Thanks for the help.
Andy.
- Original Message - 
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Andrei Bintintan" <[EMAIL PROTECTED]>
Cc: 
Sent: Monday, January 17, 2005 7:55 PM
Subject: Re: [PERFORM] Optimizing this count query


"Andrei Bintintan" <[EMAIL PROTECTED]> writes:
SELECT count(o.id) FROM orders o
  INNER JOIN report r ON o.id=r.id_order
  INNER JOIN status s ON o.id_status=s.id
  INNER JOIN contact c ON o.id_ag=c.id
  INNER JOIN endkunde e ON o.id_endkunde=e.id
  INNER JOIN zufriden z ON r.id_zufrid