Re: [PERFORM] failures on machines using jfs

2004-01-13 Thread Hannu Krosing
Greg Spiegelberg kirjutas E, 12.01.2004 kell 19:03:
> Hannu Krosing wrote:
> > Spiegelberg, Greg kirjutas P, 11.01.2004 kell 18:21:
> > 
> >>It would seem we're experiencing something similar with our scratch
> >>volume (JFS mounted with noatime).
> > 
> > 
> > Which files/directories do you keep on "scratch" volume ?
> > 
> > All postgres files or just some (WAL, tmp) ?
> 
> No Postgres files are kept in scratch, only the files being loaded
> into the database via COPY or lo_import.

then the speedup does not make any sense !

Is reading from jfs filesystem also 5 times faster than reading from
ext3 ?

The only explanation I can give for filling the database from a jfs
volume being so much faster is some strange filesystem cache interaction.


Hannu



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [PERFORM] failures on machines using jfs

2004-01-13 Thread Bill Moran
Hannu Krosing wrote:
> Greg Spiegelberg kirjutas E, 12.01.2004 kell 19:03:
>> Hannu Krosing wrote:
>>> Spiegelberg, Greg kirjutas P, 11.01.2004 kell 18:21:
>>>> It would seem we're experiencing something similar with our scratch
>>>> volume (JFS mounted with noatime).
>>> Which files/directories do you keep on "scratch" volume ?
>>> All postgres files or just some (WAL, tmp) ?
>> No Postgres files are kept in scratch, only the files being loaded
>> into the database via COPY or lo_import.
>
> then the speedup does not make any sense !
>
> Is reading from jfs filesystem also 5 times faster than reading from
> ext3 ?
> The only explanation I can give for filling the database from a jfs
> volume being so much faster is some strange filesystem cache interaction.
http://www.potentialtech.com/wmoran/postgresql.php

--
Bill Moran
Potential Technologies
http://www.potentialtech.com


Re: [PERFORM] failures on machines using jfs

2004-01-13 Thread Greg Spiegelberg
Hannu Krosing wrote:
> Greg Spiegelberg kirjutas E, 12.01.2004 kell 19:03:
>> Hannu Krosing wrote:
>>> Spiegelberg, Greg kirjutas P, 11.01.2004 kell 18:21:
>>>> It would seem we're experiencing something similar with our scratch
>>>> volume (JFS mounted with noatime).
>>> Which files/directories do you keep on "scratch" volume ?
>>> All postgres files or just some (WAL, tmp) ?
>> No Postgres files are kept in scratch, only the files being loaded
>> into the database via COPY or lo_import.
>
> then the speedup does not make any sense !
We do a lot of preprocessing before the data gets loaded.  It's that
process that experiences the hiccups I mentioned.
--
Greg Spiegelberg
 Sr. Product Development Engineer
 Cranel, Incorporated.
 Phone: 614.318.4314
 Fax:   614.431.8388
 Email: [EMAIL PROTECTED]
Cranel. Technology. Integrity. Focus.




[PERFORM] freebsd 5.2 and max_connections

2004-01-13 Thread David Hill
Hello -
I am using postgresql to hold aliases, users, and relay_domains for postfix and
courier to do lookups from.  I am not storing mail in sql.

I need postgresql to have fast read performance, so I set up indexes on the
tables.  Also, the queries are basically "select blah from table where
domain='domain.com'", so I don't need to be able to support large results.

I will have a lot of mail servers connecting to this postgresql db, so I need to
support a lot of connections... but don't need to support large results.

I am using FreeBSD 5.2.

What are some tuning options and formulas I can use to get good values?

Thanks!
David




Re: [PERFORM] freebsd 5.2 and max_connections

2004-01-13 Thread Richard Huxton
On Tuesday 13 January 2004 16:04, David Hill wrote:
> Hello -
> I am using postgresql to hold aliases, users, and relay_domains for postfix
> and courier to do lookups from.  I am not storing mail in sql.
>
> I need postgresql to have fast read performance, so I set up indexes on the
> tables.  Also, the queries are basically "select blah from table where
> domain='domain.com'", so I don't need to be able to support large results.
>
> I will have a lot of mail servers connecting to this postgresql db, so I
> need to support a lot of connections... but don't need to support large
> results.

Firstly - if you don't know about the tuning guidelines/annotated config file, 
you should go here:
http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php

Hmm - small result sets accessed directly via indexed fields, so sort_mem 
probably isn't important to you.

Make sure your effective cache setting is accurate though, so PG can estimate 
whether it'll need to access the disks.

Not sure if clustering one or more tables will help - I'm guessing not. What 
might help is to increase the statistics gathered on important columns. That 
should give the planner a more accurate estimate of value distribution and 
shouldn't cost you too much to keep accurate, since I'm guessing a low rate 
of updating.

You might want to play with random_page_cost, but more RAM for a bigger disk 
cache is probably the simplest tweak.
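
The advice above maps onto a handful of postgresql.conf settings. A hedged 
sketch for a 7.4-era server handling many short indexed lookups — the values 
below are illustrative starting points to experiment with, not tested 
recommendations:

```
# postgresql.conf (PostgreSQL 7.4-era parameter names; values are guesses)
max_connections = 300         # one slot per mail-server connection
shared_buffers = 4096         # 8 kB pages -> 32 MB; the OS cache does the rest
sort_mem = 1024               # small result sets, so sort memory matters little
effective_cache_size = 65536  # ~512 MB: tell the planner how big the OS cache is
random_page_cost = 2          # below the default 4 when the working set is cached
```

Raising the per-column statistics suggested above would be a separate
ALTER TABLE ... ALTER COLUMN ... SET STATISTICS on the important columns.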

-- 
  Richard Huxton
  Archonet Ltd



Re: [PERFORM] COUNT & Pagination

2004-01-13 Thread mallah
> I understand that COUNT queries are expensive.  So I'm looking for advice
> on
> displaying paginated query results.
>
> I display my query results like this:
>
>   Displaying 1 to 50 of 2905.
>   1-50 | 51-100 | 101-150 | etc.
>
> I do this by executing two queries.  One is of the form:
>
>   SELECT  FROM  WHERE  LIMIT m
> OFFSET n
>
> The other is identical except that I replace the select list with
> COUNT(*).
>
> I'm looking for suggestions to replace that COUNT query.



We avoid the subsequent execution of count(*) by passing the
value of count(*) as a query parameter through the page-number
links. This works for us.

This of course assumes that the number of rows matching the
WHERE clause does not change while the user is viewing the search
results.
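
The trick can be sketched like this: run COUNT(*) once, then thread the total
through the pagination links so later page views skip the count query. The
names here (page_links, get_total) are illustrative, not from any real code in
this thread:

```python
# Pass the cached row count through the page-number links as a query parameter,
# so COUNT(*) runs only on the first request for a given filter.
from urllib.parse import urlencode

PAGE_SIZE = 50

def page_links(base_url, total):
    """Build "1-50 | 51-100 | ..." links that embed the cached row count."""
    links = []
    for offset in range(0, total, PAGE_SIZE):
        hi = min(offset + PAGE_SIZE, total)
        qs = urlencode({"offset": offset, "total": total})
        links.append((f"{offset + 1}-{hi}", f"{base_url}?{qs}"))
    return links

def get_total(params, run_count_query):
    # Hit the database with COUNT(*) only when the link carried no total,
    # i.e. the user changed the filter or arrived fresh.
    if "total" in params:
        return int(params["total"])
    return run_count_query()
```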

Hope it helps.

Regds
Mallah.




I cannot use the
> method of storing the number of records in a separate table because my
> queries
> (a) involve joins, and (b) have a WHERE clause.
>
> And an unrelated question:
> I'm running PG 7.2.2 and want to upgrade to 7.4.1.  I've never upgraded PG
> before and I'm nervous.  Can I simply run pg_dumpall, install 7.4.1, and
> then
> feed the dump into psql?  I'm planning to use pg_dumpall rather than
> pg_dump
> because I want to preserve the users I've defined.  My database is the
> only one
> on the system.
>
> Thanks.
> -David (who would love to go to Bruce Momjian's boot camp)




Re: [PERFORM] COUNT & Pagination

2004-01-13 Thread David Shadovitz
> We avert the subsequent execution of count(*) by passing the
> value of count(*) as a query parameter through the link in page
> numbers.

Mallah, and others who mentioned caching the record count:

Yes, I will certainly do this.  I can detect whether the query's filter has 
been changed, or whether the user is merely paging through the results or 
sorting* the results.

I'd love to completely eliminate the cost of the COUNT(*) query, but I guess 
that I cannot have everything.

* My HTML table column headers are hyperlinks which re-execute the query, 
sorting the results by the selected column.  The first click does an ASC 
sort; a second click does a DESC sort.

Thanks.
-David



Re: [PERFORM] COUNT & Pagination

2004-01-13 Thread scott.marlowe
On Tue, 13 Jan 2004, David Shadovitz wrote:

> > We avert the subsequent execution of count(*) by passing the
> > value of count(*) as a query parameter through the link in page
> > numbers.
> 
> Mallah, and others who mentioned caching the record count:
> 
> Yes, I will certainly do this.  I can detect whether the query's filter has 
> been changed, or whether the user is merely paging through the results or 
> sorting* the results.
> 
> I'd love to completely eliminate the cost of the COUNT(*) query, but I guess 
> that I cannot have everything.
> 
> * My HTML table column headers are hyperlinks which re-execute the query, 
> sorting the results by the selected column.  The first click does an ASC 
> sort; a second click does a DESC sort.

Another useful trick is to have your script save out the count(*) result 
in a single-row table with a timestamp, and every time you grab it, check 
whether x number of minutes have passed; if so, update that row with a fresh 
count(*).  You can even have a cron job do it so your own scripts don't 
incur the cost of the count(*) and delay output to the user.
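
A self-contained sketch of that cached-count trick, using SQLite so it runs
anywhere; against PostgreSQL the SQL would be the same shape. Table and column
names are illustrative:

```python
# Cache the expensive COUNT(*) in a single-row table with a timestamp, and
# refresh it only when the cached value is older than the refresh window.
import sqlite3
import time

REFRESH_SECONDS = 5 * 60  # "x number of minutes"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER)")
conn.executemany("INSERT INTO items VALUES (?)", [(i,) for i in range(2905)])
# One row holding the last count and when it was taken.
conn.execute("CREATE TABLE row_count_cache (n INTEGER, updated_at REAL)")
conn.execute("INSERT INTO row_count_cache VALUES (0, 0)")

def cached_count(conn):
    n, updated_at = conn.execute(
        "SELECT n, updated_at FROM row_count_cache").fetchone()
    if time.time() - updated_at > REFRESH_SECONDS:
        # Cache is stale: pay for one COUNT(*) and remember the result.
        n = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
        conn.execute("UPDATE row_count_cache SET n = ?, updated_at = ?",
                     (n, time.time()))
    return n
```

Moving the refresh into a cron job, as suggested, just means running the stale
branch on a schedule so user-facing requests only ever read the cache row.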


