If SATA drives don't have the ability to replace SCSI for multi-user
Postgres apps, but you need to save on cost (ALWAYS an issue), could/would you implement SATA for your logs (pg_xlog) and keep the rest on SCSI?


Steve Poe

Mohan, Ross wrote:

I've been doing some reading up on this, trying to keep up here, and have found out that (experts, just yawn and cover your ears)

1) some SATA drives (just type II, I think?) have a "Phase Zero"
   implementation of Tagged Command Queueing (the special sauce
   for SCSI).
2) This SATA "TCQ" is called NCQ, and I believe it basically
   allows the drive itself to do the reordering
   (this is called "simple" in TCQ terminology). It does not
   yet support the TCQ "head of queue" command, which lets the
   current tagged request jump to the front of the queue, a
   simple way of expressing a "high priority" request (see
   the sketch below).

3) SATA drives are not yet multi-initiator?

Largely b/c of 2 and 3, multi-initiator SCSI RAID'ed drives
are likely to whomp SATA II drives for a while yet (read: a
year or two) in multi-user Postgres applications.
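
To make the distinction in point 2 concrete, here is a rough Python sketch contrasting "simple" reordering (roughly what NCQ offers) with a "head of queue" priority insert (what it doesn't yet offer). All names and numbers here are invented for illustration; real drive firmware obviously looks nothing like this.

    # Illustrative only: "simple" reordering vs. a "head of queue" insert.

    def simple_reorder(queue, head_pos):
        """With 'simple' tags the drive may service pending requests in
        whatever order is cheapest, e.g. by proximity to the head."""
        return sorted(queue, key=lambda lba: abs(lba - head_pos))

    def head_of_queue(queue, urgent_lba):
        """A 'head of queue' tag would let one request jump the line
        regardless of where the head currently is."""
        return [urgent_lba] + [lba for lba in queue if lba != urgent_lba]

    pending = [900, 120, 450, 300]
    print(simple_reorder(pending, head_pos=400))   # [450, 300, 120, 900]
    print(head_of_queue(pending, urgent_lba=900))  # [900, 120, 450, 300]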




-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Greg Stark
Sent: Thursday, April 14, 2005 2:04 PM
To: Kevin Brown
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?


Kevin Brown <[EMAIL PROTECTED]> writes:



Greg Stark wrote:




I think you're being misled by analyzing the write case.

Consider the read case. When a user process requests a block and that read makes its way down to the driver level, the driver can't just put it aside and wait until it's convenient. It has to go ahead and issue the read right away.


Well, strictly speaking it doesn't *have* to. It could delay for a couple of milliseconds to see if other requests come in, and then issue the read if none do. If there are already other requests being fulfilled, then it'll schedule the request in question just like the rest.



But then the cure is worse than the disease. You're basically describing exactly what happens anyway, only you're delaying more requests than necessary. That intervening time isn't really idle; it's filled with all the requests that were delayed during the previous large seek...
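
(A minimal sketch of the "delay a couple of milliseconds and batch" idea under discussion; the window length and the poll_for_request() helper are made up for the example.)

    import time

    def collect_batch(poll_for_request, window_s=0.002):
        """Hold off briefly, gather whatever requests arrive in the window,
        then return them sorted by block number so they can be issued in
        disk-layout order rather than arrival order."""
        deadline = time.monotonic() + window_s
        batch = []
        while time.monotonic() < deadline:
            req = poll_for_request()      # hypothetical: block number or None
            if req is not None:
                batch.append(req)
        return sorted(batch)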



Once the first request has been fulfilled, the driver can now schedule the rest of the queued-up requests in disk-layout order.

I really don't see how this is any different between a system that has tagged queueing to the disks and one that doesn't. The only difference is where the queueing happens.



And *when* it happens. Instead of being able to issue requests while a large seek is happening and having some of them satisfied, they have to wait until that seek is finished and then get acted on during the next large seek.
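
A toy model of that timing difference, with made-up seek and arrival times (in milliseconds), just to illustrate the argument:

    SEEK_MS = 10   # assumed service time for one seek

    def untagged(arrivals):
        """One command outstanding at a time: a request that arrives
        mid-seek sits in the OS queue until the current seek finishes."""
        now, latencies = 0, []
        for arrive in arrivals:
            start = max(now, arrive)      # can't issue until the drive is free
            now = start + SEEK_MS
            latencies.append(now - arrive)
        return latencies

    def tagged(arrivals):
        """Tagged queueing, idealised: the request goes to the drive at
        once and is folded into the work already in progress."""
        return [SEEK_MS for _ in arrivals]

    print(untagged([0, 5, 12]))   # [10, 15, 18], later arrivals pay extra
    print(tagged([0, 5, 12]))     # [10, 10, 10]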

If my theory is correct then I would expect bandwidth to be essentially 
equivalent, but the latency on SATA drives to be increased by about 50% of the 
average seek time. I.e., while a busy SCSI drive can satisfy most requests in 
about 10ms, a busy SATA drive would satisfy most requests in 15ms. (Add to that 
that 10k RPM and 15k RPM SCSI drives have even lower seek times, and no such 
IDE/SATA drives exist...)
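
Back-of-envelope version of those numbers (the 10ms figure is an assumed round number for a busy drive's average seek):

    avg_seek_ms = 10.0                      # assumed average seek/service time
    scsi_latency = avg_seek_ms              # request queued during the prior seek
    sata_latency = avg_seek_ms + 0.5 * avg_seek_ms  # waits ~half a seek to issue
    print(scsi_latency, sata_latency)       # 10.0 15.0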

In reality higher latency feeds into a system feedback loop causing your 
application to run slower causing bandwidth demands to be lower as well. It's 
often hard to distinguish root causes from symptoms when optimizing complex 
systems.




