On Thu, 12 Feb 2004, Leon Out wrote:

> Hello all. I am in the midst of porting a large web application from a 
> MS SQL Server backend to PostgreSQL. The migration work is basically 
> complete, and we're at the testing and optimization phase of the 
> project. The results so far have been disappointing, with Postgres 
> performing queries in about the same time as SQL Server even though 
> Postgres is running on a dedicated box with about 4 times the clock 
> speed of the SQL Server box. For a chart of my results, please see 
> http://leonout.com/pggraph.pdf for a graph of some test results.

A couple of things.  One, CPU speed is about number 5 on the list of 
things that make a database fast.  The drive subsystem (number of drives, 
controller, RAID cache), memory speed, memory size, and proper database 
tuning are all significantly more important than CPU speed.

Our old server was a dual PIII-750 with 1.5 gig ram (PC133) and it ran 
about 85% as fast as our brand spanking new Dell 2650 dual 2800MHz box 
with 2 gig ram.  They both had the same basic drive subsystem, by the way.

Using a battery backed RAID controller (the LSI MegaRAID one, not the 
Adaptec, as the Adaptec isn't very fast) made the biggest difference.  With 
that thrown in we got about double the speed on the new box compared to 
the old one.

Have you read the tuning docs on varlena?  It's a must read.

> Here are the specs of the systems:
> SQL Server
> Dell PowerEdge 2400
> Windows 2000 Advanced Server
> Dual Pentium III 667
> 2 GB Registered PC133 SDRAM
> MS SQL Server 2000 SP2 - shared database (although to be fair, this app 
> is by far the heaviest)
> RAID 1 for system / RAID 5 for data (10k RPM Ultra160 SCSI drives)
> PostgreSQL
> Dell PowerEdge 2650
> RedHat Enterprise Linux 3.1
> Dual Xeon 3.06 GHz (Hyperthreading currently disabled)
> PostgreSQL 7.4 - dedicated to this app, with no other apps running on 
> system
> RAID 5 (15k RPM Ultra160 SCSI drives)
> The database is about 4.3 GB in size.
> My postgresql.conf is as follows:
> max_connections = 50
> shared_buffers = 10000          # min 16, at least max_connections*2, 
> 8KB each - default is 1000
> sort_mem = 2000                 # min 64, size in KB - default is 1024 
> (commented out)
> effective_cache_size = 250000   # typically 8KB each - default is 1000 
> (commented out)

I'm gonna guess that you could use a larger sort_mem (at least 8 meg, no 
more than 32 meg is usually a good range).  With 4 gigs of ram, you can 
probably go to 64 or 128 meg if you only handle a handful of clients at 
a time, but sort_mem is allocated per sort, so be careful cranking it up 
too fast, as you'll throw the server into a swap storm.  I.e. setting 
sort_mem high is a foot gun.
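As a sketch of what that might look like (the exact value depends on how many concurrent sorts you expect; the number below is illustrative, not measured against your workload):

```
# postgresql.conf (PostgreSQL 7.4 syntax)
# sort_mem is in KB and is allocated PER SORT, per backend, so the
# worst case is roughly:
#   max_connections * sorts-per-query * sort_mem
sort_mem = 16384        # 16 MB; a middle-of-the-road value for ~50 connections
```

With max_connections = 50 and two sorts per query, that's up to ~1.6 gigs in the worst case, which is why you ramp it up gradually.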

Your effective_cache_size should likely be at LEAST a setting that 
represents 2 gigs, maybe more.  It's measured in blocks, so unless you've 
changed your block size from 8k, 2 gigs works out to about 250000.
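To make the arithmetic explicit (assuming the default 8 KB block size):

```
# 2 GB of OS cache, expressed in 8 KB blocks:
#   2 * 1024 * 1024 KB / 8 KB = 262144 blocks
effective_cache_size = 262144   # roughly the 250000 you already have set
```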

What are your planner settings for random_page_cost and the cpu_*_cost 
parameters?

It's likely a good idea to drop your random page cost to close to 1, as 
with this much memory, most of your data will find itself in memory.
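You can experiment with this per-session before committing anything to postgresql.conf (1.5 below is just a starting point to test, not a recommendation):

```sql
-- Lower random_page_cost for this session only, then re-run the
-- slow query and compare plans and timings.
SET random_page_cost = 1.5;

-- Revert to the configured value when done:
SET random_page_cost = DEFAULT;
```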

10000 is probably plenty for shared_buffers.  You might try setting it 
higher to see if it helps, but I'm doubting it will.

But more important, WHAT are you doing that's slow?  Matching text, 
foreign keys, triggers, stored procedures?  Use explain analyze on the 
slow / mediocre queries and we can help a bit.
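Something like this, with one of your own tables and queries substituted in (the `orders` table here is hypothetical):

```sql
-- Runs the query for real and reports estimated vs. actual row
-- counts and per-node timings.
EXPLAIN ANALYZE
SELECT o.id, o.total
FROM orders o
WHERE o.created > '2004-01-01';
```

Post the full output; big mismatches between estimated and actual row counts usually point straight at the problem.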
