Re: [PERFORM] Querying 19million records very slowly

2005-06-23 Thread Kjell Tore Fossbakk
- I also changed now() to current_time, which improved performance quite a bit. I need to make further tests before I'll post any results. - I tried to change now() - interval 'x hours' to a literal like 2005-06-22 16:00:00+02. This also improved performance. changing to time '2005-06-22 16:00:00+02'
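The rewrite described above can be sketched as follows. This is a hypothetical illustration using the `test` table and `p1` column from later in the thread; the literal timestamp has to be recomputed by the application on every run, which is the trade-off for the speedup:

```sql
-- Before: the planner sees a non-constant expression.
SELECT count(*) FROM test
WHERE p1 = 53 AND time > now() - interval '24 hours';

-- After: a precomputed literal the planner can treat as a constant.
SELECT count(*) FROM test
WHERE p1 = 53 AND time > '2005-06-22 16:00:00+02';
```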

Re: [PERFORM] Querying 19million records very slowly

2005-06-23 Thread Tobias Brox
I'm running FreeBSD 4.11, and I'm editing the file /usr/local/etc/postgresql.conf, but it doesn't help. On my system the 'live' config file resides in /var/lib/postgresql/data/postgresql.conf - maybe you have them in /usr/local/var/lib ... -- Tobias Brox, +86-13521622905 Nordicbet, IT dept

Re: [PERFORM] Querying 19million records very slowly

2005-06-22 Thread Kjell Tore Fossbakk
database=# set enable_seqscan to on;
SET
Time: 0.34 ms
database=# explain analyze select count(*) from test where p1=53;
QUERY PLAN
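A sketch of the comparison this session is running: toggling enable_seqscan to force the planner onto (or off) the index, then comparing actual timings with EXPLAIN ANALYZE. The table and predicate are taken from the thread; timings will of course differ between systems:

```sql
-- Force index use by discouraging sequential scans:
SET enable_seqscan TO off;
EXPLAIN ANALYZE SELECT count(*) FROM test WHERE p1 = 53;

-- Restore the default and compare the seqscan plan:
SET enable_seqscan TO on;
EXPLAIN ANALYZE SELECT count(*) FROM test WHERE p1 = 53;
```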

Re: [PERFORM] Querying 19million records very slowly

2005-06-22 Thread Tobias Brox
[Kjell Tore Fossbakk - Wed at 09:45:22AM +0200] database= explain analyze select count(*) from test where p1=53 and time > now() - interval '24 hours' ; Sorry to say that I have not followed the entire thread nor read the entire email I'm replying to, but I have a quick hint on this one (ref

Re: [PERFORM] Querying 19million records very slowly

2005-06-22 Thread Kjell Tore Fossbakk
Appreciate your time, Mr Brox. I'll test the use of current_timestamp rather than now(). I am not sure if Pg can do a match between a fixed timestamp and a datetime? time > current_timestamp - interval '24 hours', when time is yyyy-mm-dd hh:mm:ss+02, like 2005-06-22 16:00:00+02. If Pg can't do
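To the question of whether a fixed timestamp literal can be matched against a timestamp column: PostgreSQL casts the string literal to the column's type, so both forms below are valid. A minimal sketch against the thread's `test` table:

```sql
-- A timestamptz column compares directly with a timestamp literal;
-- the string is cast to the column's type:
SELECT count(*) FROM test
WHERE time > timestamp with time zone '2005-06-22 16:00:00+02';

-- CURRENT_TIMESTAMP is the standard-SQL spelling of now():
SELECT count(*) FROM test
WHERE time > CURRENT_TIMESTAMP - interval '24 hours';
```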

Re: [PERFORM] Querying 19million records very slowly

2005-06-22 Thread Tobias Brox
[Kjell Tore Fossbakk - Wed at 10:18:30AM +0200] I'll test the use of current_timestamp, rather than now(). I am not sure if Pg can do a match between a fixed timestamp and a datetime? I have almost all my experience with timestamps without timezones, but ... isn't that almost the same as the

Re: [PERFORM] Querying 19million records very slowly

2005-06-22 Thread Kjell Tore Fossbakk
Try typing '2005-06-21 16:36:22+08' directly in the query, and see if it makes a difference. Or probably '2005-06-21 10:36:22+02' in your case ;-) Which one does Pg read fastest? Does it convert the datetime in the table, then my where clause, and check for each row? How does it compare a datetime
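On the question of how timestamps compare: internally a timestamp is stored as a single number (an offset from 2000-01-01, as either a float or an integer count of microseconds, depending on build options), so comparison is a plain numeric compare, not a per-row string conversion. A small illustrative query exposing a similar numeric view:

```sql
-- extract(epoch ...) shows the kind of single-number representation
-- that makes timestamp comparison a cheap numeric operation:
SELECT extract(epoch FROM timestamptz '2005-06-21 10:36:22+02');
```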

Re: [PERFORM] Querying 19million records very slowly

2005-06-22 Thread Tobias Brox
[Kjell Tore Fossbakk - Wed at 11:10:42AM +0200] Which one does Pg read fastest? Does it convert the datetime in the table, then my where clause, and check for each row? How does it compare a datetime with a datetime? Timestamps are easy, one large number bigger than another large number.. time

Re: [PERFORM] Querying 19million records very slowly

2005-06-22 Thread Tom Lane
Tobias Brox [EMAIL PROTECTED] writes: time > (datetime) '2005-06-21 10:36:22+02' or time > (timestamp) 'some timestamp pointing to yesterday' If I have understood it correctly, the planner will recognize the timestamp and compare it with the statistics in the first example but not in the
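The distinction being discussed can be seen directly in plan output. With a literal, the planner consults the column statistics to estimate how many rows match; with an expression like now() - interval '24 hours', the 7.x-era planner cannot pre-evaluate it and falls back on a default selectivity guess. A hedged sketch, assuming the thread's `test` table; compare the estimated row counts in the two plans:

```sql
-- Literal: row estimate comes from column statistics.
EXPLAIN SELECT count(*) FROM test
WHERE time > '2005-06-21 10:36:22+02';

-- Expression: the old planner uses a default selectivity guess.
EXPLAIN SELECT count(*) FROM test
WHERE time > now() - interval '24 hours';
```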

Re: [PERFORM] Querying 19million records very slowly

2005-06-22 Thread Paul Ramsey
OK, so the planner is in fact making a mistake (I think). Try turning down your random_page_cost a little. It defaults to 4.0; see if 2.0 works better. (Careful: move these things around too much for one query, and you will wreck others.) 4.0 is a little large for almost all modern hardware,
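The safe way to try this advice is per-session before touching postgresql.conf, so other queries are unaffected until the change is proven. A minimal sketch:

```sql
-- Lower random_page_cost for this session only:
SET random_page_cost = 2.0;

-- Re-test the slow query with the new cost assumption:
EXPLAIN ANALYZE SELECT count(*) FROM test WHERE p1 = 53;

-- Confirm the active value:
SHOW random_page_cost;
```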

Re: [PERFORM] Querying 19million records very slowly

2005-06-22 Thread Kjell Tore Fossbakk
I can't get the config file to load into my postgres; that's the problem. I want to set it to 10k, but it is still at 1000... I save the file and restart the service.. yes, I meant 'pg_ctl reload', sorry about that one. Kjell Tore On 6/22/05, Bricklen Anderson [EMAIL PROTECTED] wrote:

Re: [PERFORM] Querying 19million records very slowly

2005-06-22 Thread Stefan Weiss
On 2005-06-22 10:55, Michael Glaesemann wrote: There has been discussion in the past on storing the time zone name with the timestamptz as well, though no one has implemented this yet. The reason for this may be that time zone names (abbreviations) are not unique. For example, ECT can mean

Re: [PERFORM] Querying 19million records very slowly

2005-06-22 Thread Tobias Brox
[Kjell Tore Fossbakk - Wed at 07:41:54AM -0700] I can't get the config file to load into my postgres; that's the problem. I want to set it to 10k, but it is still at 1000... I save the file and restart the service.. yes, I meant 'pg_ctl reload', sorry about that one. Classical problem, a
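The "classical problem" here is editing a copy of postgresql.conf that the running server never reads. A quick way to confirm whether an edited setting actually reached the server is to ask the server itself (shared_buffers is used below purely as an example setting):

```sql
-- Compare this against the value you set in the file you edited:
SHOW shared_buffers;

-- On releases newer than the 7.3 discussed here, the server can also
-- report which config file it actually loaded:
-- SHOW config_file;
```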

[PERFORM] Querying 19million records very slowly

2005-06-21 Thread Kjell Tore Fossbakk
Hello! I use FreeBSD 4.11 with PostgreSQL 7.3.8. I got a huge database with roughly 19 million records. There is just one table, with a time field, a few ints and a few strings. table test fields: time (timestamp), source (string), destination (string), p1 (int), p2 (int) I have run VACUUM ANALYZE; I
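The schema described in this post can be sketched as follows. Column types for the strings are assumptions (the post just says "string"); the multi-column index matches the WHERE clauses that come up later in the thread (equality on p1 plus a range on time):

```sql
-- Sketch of the table described above (string types assumed):
CREATE TABLE test (
    time        timestamptz,
    source      varchar,
    destination varchar,
    p1          int,
    p2          int
);

-- Index matching the thread's typical predicate shape:
CREATE INDEX test_p1_time_idx ON test (p1, time);

-- Refresh planner statistics after loading data:
ANALYZE test;
```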

Re: [PERFORM] Querying 19million records very slowly

2005-06-21 Thread Paul Ramsey
Some tips: - EXPLAIN ANALYZE provides a more useful analysis of a slow query, because it gives both the estimate and actual times/rows for each step in the plan. - The documentation is right: rows with little variation are pretty useless to index. Indexing is about selectivity, reducing the
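The selectivity point above can be checked empirically: an index only pays off when a predicate selects a small fraction of the 19 million rows. A hypothetical way to inspect how selective p1 is:

```sql
-- How skewed is p1? If p1 = 53 matches only a thin slice of the
-- table, an index scan wins; if it matches most rows, a sequential
-- scan is cheaper and the planner is right to use it.
SELECT p1, count(*)
FROM test
GROUP BY p1
ORDER BY count(*) DESC;
```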

Re: [PERFORM] Querying 19million records very slowly

2005-06-21 Thread PFC
use CURRENT_TIME, which is a constant, instead of now(), which is not considered constant... (I think)
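One caveat on this hint: CURRENT_TIME yields a time-of-day value (time with time zone), not a timestamp, so for comparing against a timestamp column the closer equivalent of now() is CURRENT_TIMESTAMP; both are fixed at transaction start. A small sketch showing the three side by side:

```sql
-- CURRENT_TIME is a time-of-day; CURRENT_TIMESTAMP and now()
-- are equivalent full timestamps, stable within a transaction:
SELECT CURRENT_TIME, CURRENT_TIMESTAMP, now();
```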