Re: [PERFORM] availability of SATA vendors

2006-11-23 Thread Arjen van der Meijden

Hi Luke,

I forgot about that article, thanks for the link. It's indeed a nice 
overview of controllers that were recent as of August. The Areca 1280 in 
that test (and in the results I linked to earlier) is a pre-production 
model, so the final card might actually perform even better than it did there.


We've received samples from AMCC in the past, so a 96xx should be 
possible. I've pointed it out to the author of the previous 
RAID articles. Thanks for pointing that out to me.


Best regards,

Arjen

On 22-11-2006 22:47 Luke Lonergan wrote:

Arjen,

As usual, your articles are excellent!

Your results show again that the 3Ware 9550SX is really poor at random I/O
with RAID5 and all of the Arecas are really good.  3Ware/AMCC have designed
the 96xx to do much better for RAID5, but I've not seen results - can you
get a card and test it?

We now run the 3Ware controllers in RAID10 with 8 disks each and they have
been excellent.  Here (on your site) are results that bear this out:
  http://tweakers.net/reviews/639/9

- Luke


On 11/22/06 11:07 AM, Arjen van der Meijden [EMAIL PROTECTED]
wrote:


Jeff,

You can find some (Dutch) results here on our website:
http://tweakers.net/reviews/647/5

You'll find the AMCC/3ware 9550SX-12 with up to 12 disks, the Areca 1280 and
1160 with up to 14 disks, and a Promise and an LSI SATA RAID controller with
up to 8 disks each. Btw, that Dell Perc5 (SAS) is afaik not the same
card as the LSI MegaRAID SATA 300-8X, but I have no idea whether they
share the same controller chip.
In most of the graphs you also see an Areca 1160 with 1GB of cache instead
of its default 256MB. Hover over the labels to see only that specific line;
that makes the graphs much more readable.

You'll also see a Dell Perc5/E in the results, but that test was done using
Fujitsu SAS 15K rpm drives, not the WD Raptor 10K rpm drives.

If you dive deeper into our (still Dutch) benchmark database you may
find results for several disk configurations on several controllers
in various storage-related tests, like here:
http://tweakers.net/benchdb/test/193

If you want to filter the results, look for "Resultaatfilter 
tabelgenerator" (result filter and table generator) and click the "Toon 
filteropties" (show filter options) text. I think you'll be able to 
understand the selection overview there, even if you don't understand 
Dutch ;)
"Filter resultaten" below it means the same as in English (filter [the]
results).

Best regards,

Arjen

On 22-11-2006 17:36 Jeff Frost wrote:

On Wed, 22 Nov 2006, Bucky Jordan wrote:


Dells (at least the 1950 and 2950) come with the Perc5, which is
basically just the LSI MegaRAID. The units I have come with a 256MB BBU,
I'm not sure if it's upgradeable, but it looks like a standard DIMM in
there...

I posted some dd and bonnie++ benchmarks of a 6-disk setup a while back
on a 2950, so you might search the archive for those numbers if you're
interested- you should be able to get the same or better from a
similarly equipped LSI setup. I don't recall if I posted pgbench
numbers, but I can if that's of interest.
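For anyone wanting to reproduce that kind of dd number, a minimal sequential test might look like the sketch below. File name and sizes are illustrative; for honest numbers, use a file well over the machine's RAM size so the OS cache can't absorb it.

```shell
# Sequential write: 128 MB in 8 kB blocks (the Postgres page size),
# forcing the data to disk before dd reports its rate.
dd if=/dev/zero of=/var/tmp/ddtest bs=8k count=16384 conv=fsync

# Sequential read of the same file.  On Linux, drop the page cache
# first (as root: echo 3 > /proc/sys/vm/drop_caches) so you measure
# the disks rather than RAM.
dd if=/var/tmp/ddtest of=/dev/null bs=8k
```

bonnie++ adds seek and per-character results on top of this; remove /var/tmp/ddtest when done.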

I could only find the 6-disk RAID5 numbers in the archives, which were run
with bonnie++ 1.03.  Have you run the RAID10 tests since?  Did you settle
on the 6-disk RAID5, or a 2-disk RAID1 plus a 4-disk RAID10?





---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq





Re: [PERFORM] PostgreSQL underestimates sorting

2006-11-23 Thread Simon Riggs
On Wed, 2006-11-22 at 11:17 +0100, Markus Schaber wrote:

 PostgreSQL 8.1 (and, back then, 7.4) has a tendency to underestimate
 the cost of sort operations compared to index scans.
 
 The backend allocates gigabytes of memory (we've set sort_mem to 1 GB), and
 then starts spilling further gigabytes of temporary data to disk. So in
 the end the execution is much slower than an index scan, and wastes
 lots of disk space.
 
 We did not manage to tune the config values appropriately, at least not
 without making other query plans suffer badly.

8.2 has substantial changes to the sort code, so you may want to give the
beta a try to see how much better it works. That's another way of saying
that sort in 8.1 and earlier has performance problems when you are
sorting more than about 6 * 2 * work_mem (on randomly ordered data), and
the cost model doesn't capture this, as you observe.

Please try enabling trace_sort (available in both 8.1 and 8.2) and post
the results here; it would be very useful to have numbers from such a
large real-world sort.
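For reference, trace_sort can be enabled per session without a server restart; a minimal sketch follows (the table and column names are placeholders for the real query):

```sql
-- Surface the sort trace in the client, not just the server log:
SET client_min_messages = log;
SET trace_sort = on;

-- Run the problem query; the LOG lines report whether the sort ran in
-- memory or switched to an external (tape) sort, and the timing of each pass:
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY payload;
```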

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com



---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PERFORM] Lying drives [Was: Re: Which OS provides the

2006-11-23 Thread Bruce Momjian
Greg Smith wrote:
 On Mon, 13 Nov 2006, Guy Thornley wrote:
 
  I've yet to find a drive that lies about write completion. The problem 
  is that the drive's boot-up default is write-caching enabled (or perhaps 
  the system BIOS sets it that way). If you turn an IDE disk's write cache 
  off explicitly, using hdparm or similar, they behave.
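For ATA drives, the cache toggle Guy describes is a one-liner; a sketch assuming Linux and a drive at /dev/sda (the device name is illustrative):

```shell
# Turn the drive's write cache off and confirm the new setting:
hdparm -W0 /dev/sda
hdparm -W /dev/sda      # prints the current write-caching state

# SCSI/SAS drives expose the same bit via mode page 8, e.g. with sdparm:
#   sdparm --set WCE=0 /dev/sda
```

Note that for SATA/PATA the setting does not survive a drive reset, so it needs to be reapplied at every boot (an init script is the usual place).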
 
 I found a rather ominous warning from SGI on this subject at 
 http://oss.sgi.com/projects/xfs/faq.html#wcache_query
 
 [Disabling the write cache] is kept persistent for a SCSI disk. However, 
 for a SATA/PATA disk this needs to be done after every reset as it will 
 reset back to the default of the write cache enabled. And a reset can 
 happen after reboot or on error recovery of the drive. This makes it 
 rather difficult to guarantee that the write cache is maintained as 
 disabled.
 
 As I've been learning more about this subject recently, I've become 
 increasingly queasy about using IDE drives for databases unless they're 
 hooked up to a high-end (S|P)ATA controller.  As far as I know the BIOS 

Yes, avoiding IDE for serious database servers is a conclusion I made
long ago.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PERFORM] Direct I/O issues

2006-11-23 Thread Tom Lane
Greg Smith [EMAIL PROTECTED] writes:
 The results I get now look fishy.

There are at least two things wrong with this program:

* It does not respect the alignment requirement for O_DIRECT buffers
  (reportedly either 512 or 4096 bytes depending on filesystem).

* It does not check for errors (if it had, you might have realized the
  other problem).

regards, tom lane

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] Direct I/O issues

2006-11-23 Thread Greg Smith

On Thu, 23 Nov 2006, Tom Lane wrote:


* It does not check for errors (if it had, you might have realized the
 other problem).


All the test_fsync code needs to check for errors better; there have been 
multiple occasions where I've run it with questionable input and it 
didn't complain - it just happily ran and reported times that were almost 
0.


Thanks for the note about alignment; I had seen something about that in 
xlog.c but wasn't sure whether it was important in this case.


It's very important to the project I'm working on that I get this cleared 
up, and I think I'm in a good position to fix it myself now.  I just 
wanted to report the issue and get some initial feedback on what's wrong. 
I'll try to rewrite that code with an eye toward the "Determine optimal 
fdatasync/fsync, O_SYNC/O_DSYNC options" to-do item, which is what I'd 
really like to have.


--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [PERFORM] Priority to a mission critical transaction

2006-11-23 Thread Brad Nicholson
On Tue, 2006-11-21 at 21:43 -0200, Carlos H. Reimer wrote:
 Hi,
  
 We have an application that is mission critical, normally very fast,
 but when an I/O or CPU bound transaction appears, the mission critical
 application suffers. Is there a way go give some kind of priority to
 this kind of application?
 Reimer


Not that I'm aware of.  Depending on what the problem transactions are,
setting up a replica on a separate machine and running those
transactions against the replica might be a solution.

-- 
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


[PERFORM] Postgres scalability and performance on windows

2006-11-23 Thread Gopal
Hi all,

 

I have a postgres installation that's running at 70-80% CPU usage, while
an MSSQL7 installation did roughly the same thing with 1-2% CPU load.

 

Here's the scenario,

300 queries/second

Server: Postgres 8.1.4 on Windows 2000 Server

CPU: Dual Xeon 3.6 GHz

Memory: 4GB RAM

Disks: 3 x 36GB, 15K RPM SCSI

C# based web application calling postgres functions using Npgsql 0.7.

It's an almost completely read-only db, apart from fortnightly updates.

 

Table 1 - About 300,000 rows with simple rectangles

Table 2 - 1 million rows 

Total size: 300MB

 

Functions: simple coordinate reprojection and intersection query plus an
inner join of table1 and table2.

I think I have all the right indexes defined, and indeed the performance
for queries under low loads is fast.

 

 


==

postgresql.conf has the following settings:

max_connections = 150
shared_buffers = 2			# min 16 or max_connections*2, 8KB each
temp_buffers = 2000			# min 100, 8KB each
max_prepared_transactions = 25		# can be 0 or more
# note: increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
work_mem = 512				# min 64, size in KB
#maintenance_work_mem = 16384		# min 1024, size in KB
max_stack_depth = 2048
effective_cache_size = 82728		# typically 8KB each
random_page_cost = 4			# units are one sequential page fetch

==
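For comparison, here is a hedged sketch of settings more typical of 8.1-era advice for a machine with 4 GB of RAM and a read-mostly load. Every value below is illustrative, not a recommendation tuned for this exact workload:

```
shared_buffers = 50000		# ~400 MB; common 8.1 advice was a fraction
				# of RAM, far above the default
work_mem = 8192			# 8 MB per sort/hash node; 512 KB is very low
maintenance_work_mem = 131072	# 128 MB for VACUUM / CREATE INDEX
effective_cache_size = 350000	# ~2.7 GB; tells the planner how much the
				# OS is likely caching
```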

 

SQL Server caches all the data in memory, which makes it faster (it uses
about 1.2GB of memory, which is fine).

But postgres has everything spread across 10-15 processes, each using
about 10-30MB, not nearly enough to cache all the data, so it ends up
doing a lot of disk reads.

I've read that postgres depends on the OS to cache the files; I wonder if
this is not happening on Windows.

 

In any case I cannot believe that having 15-20 processes running on
Windows helps. Why not spawn threads instead of processes, which might
be far less expensive and more efficient? Is there any way of doing
this?

 

My question is: should I just accept the performance I am getting as the
limit on Windows, or should I be looking at some other params that I
might have missed?

 

Thanks,

Gopal







Re: [PERFORM] Postgres scalability and performance on windows

2006-11-23 Thread Heikki Linnakangas

Gopal wrote:


Functions : Simple coordinate reprojection and intersection query +
inner join of table1 and table2.

I think I have all the right indexes defined and indeed the performance
for  queries under low loads is fast.


Can you run EXPLAIN ANALYZE on your queries and send the results back 
to the list, just to be sure?
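A sketch of what to capture, with placeholder names standing in for the real tables and the intersection predicate:

```sql
-- Illustrative only: table, column, and join names are placeholders.
EXPLAIN ANALYZE
SELECT t1.id, t2.value
FROM table1 t1
JOIN table2 t2 ON t2.t1_id = t1.id
WHERE t1.box && t2.box;   -- the geometric overlap test in question
```

The ANALYZE part runs the query for real and annotates each plan node with actual row counts and timings, which is what makes planner misestimates visible.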



SQL Server caches all the data in memory, which makes it faster (it uses
about 1.2GB of memory, which is fine).

But postgres has everything spread across 10-15 processes, each using
about 10-30MB, not nearly enough to cache all the data, so it ends up
doing a lot of disk reads.


I don't know Windows memory management very well, but let me just say 
that it's not that simple.



I've read that postgres depends on the OS to cache the files; I wonder if
this is not happening on Windows.


Using the Task Manager, or whatever it's called these days, you can see 
how much memory is used for caching.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [PERFORM] Direct I/O issues

2006-11-23 Thread Bruce Momjian
Greg Smith wrote:
 On Thu, 23 Nov 2006, Tom Lane wrote:
 
  * It does not check for errors (if it had, you might have realized the
   other problem).
 
 All the test_fsync code needs to check for errors better; there have been 
 multiple occasions where I've run it with questionable input and it 
 didn't complain - it just happily ran and reported times that were almost 
 0.
 
 Thanks for the note about alignment, I had seen something about that in 
 the xlog.c but wasn't sure if that was important in this case.
 
 It's very important to the project I'm working on that I get this cleared 
 up, and I think I'm in a good position to fix it myself now.  I just 
 wanted to report the issue and get some initial feedback on what's wrong. 
 I'll try to rewrite that code with an eye toward the "Determine optimal 
 fdatasync/fsync, O_SYNC/O_DSYNC options" to-do item, which is what I'd 
 really like to have.

Please send an updated patch for test_fsync.c so we can get it working
for 8.2.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings