Here is general information on postgres: http://www.ca.postgresql.org/docs/aw_pgsql_book/index.html

Here is an optimization file, included below. This was not written by me!
Let me know if you have questions...
Ray Hunter
Firmware Engineer
ENTERASYS NETWORKS
-----Original Message-----
From: Aron Pilhofer [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 04, 2002 9:10 AM
To: Hunter, Ray
Subject: RE: [PHP-DB] optimization (another tack)
That would be great! Thanks.
[Aron Pilhofer]
-----Original Message-----
From: Hunter, Ray [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 04, 2002 11:04 AM
To: 'Aron Pilhofer'; [EMAIL PROTECTED]
Subject: RE: [PHP-DB] optimization (another tack)
If you are using PHP and a database, you can give the script more memory and optimize the database. I only use Postgres databases for all my large data, so I can let you know how to optimize Postgres...
Ray Hunter
Firmware Engineer
ENTERASYS NETWORKS
-----Original Message-----
From: Aron Pilhofer [mailto:[EMAIL PROTECTED]]
Sent: Monday, March 04, 2002 9:02 AM
To: [EMAIL PROTECTED]
Subject: [PHP-DB] optimization (another tack)
Let me try this again more generally. I am trying to optimize a function in PHP that handles very large result sets, which are transferred to arrays, and that does some extremely heavy lifting in terms of calculations on those arrays. By design, it iterates through each possible combination of two result sets and does some calculations on those results. As you can imagine, the numbers get quite large, quite fast; sets of 500 by 1000 necessitate half a million calculations.
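The cost being described is that of a nested loop over the two result sets: every row of one is paired with every row of the other, so the work grows as the product of the two set sizes. A minimal Python sketch (the function and the trivial "calculation" are illustrative assumptions, not the poster's actual code):

```python
def pairwise_work(a, b, calc):
    """Apply calc to every (row_a, row_b) combination and count the calls."""
    count = 0
    results = []
    for row_a in a:        # outer result set
        for row_b in b:    # inner result set
            results.append(calc(row_a, row_b))
            count += 1
    return results, count

# Sets of 500 by 1000 rows mean 500 * 1000 = 500,000 calculations:
_, n = pairwise_work(range(500), range(1000), lambda x, y: x + y)
print(n)  # 500000
```

This is why the workload blows up so quickly: doubling either input doubles the total number of calculations.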
So, short of rewriting this function in C, which I cannot do, are there any suggestions for optimizing? For example:

1) Is there any advantage to caching an array as a local file?
2) The script pumps the results of the calculations into a new table. Would it be faster to dump them into a local file instead?
3) Is there any advantage to executing the script as a CGI? (Does that make sense? I don't know if I know the correct jargon here...)
Any other tips folks have for scripts that handle a lot of calculations would be greatly appreciated. Thanks in advance.
--
PHP Database Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php
Optimizing Postgresql
Ericson Smith
Following Tim Perdue's excellent article on the comparison between MySQL and Postgresql, I decided to take a shot at installing and using this database. For most of our work I use MySQL and will continue to do so, because of its ease of use and unrivaled select-query speed, and also because there is no point in trying to mess around with production systems that already work fine. But some new projects suffered greatly from MySQL's table locking when I needed to update data (which I do a lot). Here are my adventures in setting up a Postgresql database server.
Our configuration for a dedicated Postgresql server was:

Red Hat 7.1
Dual PIII 650MHz system
512MB RAM
18GB SCSI drive for the postgresql data partition
Downloading and Installing

I downloaded and installed the 7.1.2 RPMs from http://postgres.org without any trouble. For a server installation, I only installed postgresql-server and postgresql-7.1.2 (base). I then got the server up and running by executing:

/etc/init.d/postgresql start
A small database was ported from MySQL (three tables totaling about 5000 records). I created sufficient indexes for Postgresql's optimizer to use, and modified our C application to use the Postgresql C client interface for a small CGI program that would brutally query this table. This small CGI program receives thousands of queries per minute.
Optimizing
One of the first things I noticed after turning on the CGI program was that although queries were returned almost as fast as from the previous MySQL-based system, the load on the server was much higher -- in fact, almost 90 percent! Then I started to get down into the nitty-gritty of things. I had optimized MySQL before by greatly increasing cache and buffer sizes and by throwing more RAM at the problem.
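For comparison, that kind of MySQL tuning is typically done in my.cnf. A hedged sketch only; the variable values below are illustrative assumptions, not the settings actually used on this server:

```
[mysqld]
key_buffer_size  = 128M   # cache for index blocks
table_cache      = 256    # number of open tables kept cached
sort_buffer_size = 4M     # per-connection sort buffer
```

Bigger key and table caches keep more of the working set in RAM, which is the same general idea applied to Postgresql below.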
The single biggest thing you have to do before running Postgresql is to provide enough shared buffer space. Let me repeat: provide enough buffer space! Say you have about 512MB of RAM on a dedicated database server; then you need to turn over about 75 percent of it to this shared buffer. Postgresql does best when it can load most -- or, even better, all -- of a table into its shared memory space. In our case, since our database was fairly small, I decided to allocate 128MB of RAM to the shared buffer space.
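In PostgreSQL 7.1, shared_buffers is counted in disk blocks of 8KB each (the default BLCKSZ), so a 128MB allocation works out to 16384 buffers. A sketch under that assumption:

```
# /var/lib/pgsql/data/postgresql.conf (excerpt)
# shared_buffers is in 8KB blocks: 128MB / 8KB = 16384
shared_buffers = 16384
```

The kernel must also permit a shared memory segment at least that large; on Linux this is governed by kernel.shmmax (for example, `sysctl -w kernel.shmmax=134217728` allows a 128MB segment).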
The file /var/lib/pgsql/data/postgresql.conf co