-Original Message-
From: Alex Turner [mailto:[EMAIL PROTECTED]
Sent: Monday, April 18, 2005 5:50 PM
To: Bruce Momjian
Cc: Kevin Brown; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?
Does it really matter at which
Alex et al.,
I wonder if that's something to think about adding to PostgreSQL? A setting for
multiblock read count like Oracle (although having said that, I believe that
Oracle natively caches pages much more aggressively than PostgreSQL, which
allows the OS to do the file caching).
|| I would think so, yea. GMTA: I was just having this micro-chat with Mr. Jim
Nasby.
> -Original Message-
> From: Alex Turner [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, April 20, 2005 12:04 PM
> To: Dave Held
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] How to improve db performance with $7K?
>
> [...]
> Let's say we i
/05, Dave Held <[EMAIL PROTECTED]> wrote:
> > -Original Message-
> > From: Alex Turner [mailto:[EMAIL PROTECTED]
> > Sent: Monday, April 18, 2005 5:50 PM
> > To: Bruce Momjian
> > Cc: Kevin Brown; pgsql-performance@postgresql.org
> > Subject: Re: [PERFORM] How to improve db performance with $7K?
I wonder if that's something to think about adding to PostgreSQL? A
setting for multiblock read count like Oracle (although having said
that, I believe that Oracle natively caches pages much more
aggressively than PostgreSQL, which allows the OS to do the file
caching).
Alex Turner
netEconomist
P.S
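Since PostgreSQL leaves most of this to the OS page cache, the closest thing
today is hinting the kernel's readahead rather than adding a GUC. A rough
sketch of the idea in Python (Linux posix_fadvise; the file path is just a
hypothetical example, not anything PostgreSQL does internally):

import os

# Hypothetical data file; any large file works for the illustration.
PATH = "/var/lib/pgsql/data/somebigfile"
CHUNK = 8 * 8192  # prefetch roughly eight 8KB blocks at a time

fd = os.open(PATH, os.O_RDONLY)
try:
    # Tell the kernel we will read sequentially so it can enlarge readahead.
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    size = os.fstat(fd).st_size
    offset = 0
    while offset < size:
        # Ask for the next chunk before we actually read it.
        os.posix_fadvise(fd, offset, CHUNK, os.POSIX_FADV_WILLNEED)
        data = os.pread(fd, CHUNK, offset)
        if not data:
            break
        offset += len(data)
finally:
    os.close(fd)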
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Dawid Kuroczko
Sent: Wednesday, April 20, 2005 4:56 AM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?
On 4/19/05, Mohan, Ross <[EMAIL PROTECTED]> wrote:
> Clustered file systems are the
From: [EMAIL PROTECTED]
Sent: Tuesday, April 19, 2005 8:12 PM
To: Mohan, Ross
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?
On Mon, Apr 18, 2005 at 06:41:37PM -, Mohan, Ross wrote:
> Don't you think "optimal stripe width" would be
> a good qu
On 4/19/05, Mohan, Ross <[EMAIL PROTECTED]> wrote:
> Clustered file systems are the first/best example that
> comes to mind. Host A and Host B can both request from diskfarm, eg.
Something like a Global File System?
http://www.redhat.com/software/rha/gfs/
(I believe some other company did develop
On Tue, Apr 19, 2005 at 11:22:17AM -0500, [EMAIL PROTECTED] wrote:
>
>
> [EMAIL PROTECTED] wrote on 04/19/2005 11:10:22 AM:
> >
> > What is 'multiple initiators' used for in the real world?
>
> I asked this same question and got an answer off list: Somebody said their
> SAN hardware used multip
On Mon, Apr 18, 2005 at 06:41:37PM -, Mohan, Ross wrote:
> Don't you think "optimal stripe width" would be
> a good question to research the binaries for? I'd
> think that drives the answer, largely. (uh oh, pun alert)
>
> EG, oracle issues IO requests (this may have changed _just_
> recentl
On Mon, Apr 18, 2005 at 10:20:36AM -0500, Dave Held wrote:
> Hmm...so you're saying that at some point, quantity beats quality?
> That's an interesting point. However, it presumes that you can
> actually distribute your data over a larger number of drives. If
> you have a db with a bottleneck of
On Mon, Apr 18, 2005 at 07:41:49PM +0200, Jacques Caron wrote:
> It would be interesting to actually compare this to real-world (or
> nearly-real-world) benchmarks to measure the effectiveness of features like
> TCQ/NCQ etc.
I was just thinking that it would be very interesting to benchmark
diff
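A crude way to approximate that benchmark, short of setting up fio, is to time
random reads at different numbers of outstanding requests and see whether
throughput scales; if drive-side queueing (TCQ/NCQ) is doing its job, deeper
queues should help. A rough Python sketch, assuming a large raw device or
pre-created file (the path is hypothetical) and caches dropped beforehand:

import os, random, time
from concurrent.futures import ThreadPoolExecutor

DEV = "/dev/sdb"   # hypothetical device; a large pre-created file works too
BLOCK = 8192
READS = 2000

def random_reads(fd, size, n):
    # Issue n random, block-aligned 8KB reads at explicit offsets.
    for _ in range(n):
        offset = random.randrange(0, size - BLOCK) // BLOCK * BLOCK
        os.pread(fd, BLOCK, offset)

def bench(queue_depth):
    fd = os.open(DEV, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)
    start = time.time()
    # Each worker keeps one request outstanding, approximating a queue depth.
    with ThreadPoolExecutor(max_workers=queue_depth) as pool:
        for _ in range(queue_depth):
            pool.submit(random_reads, fd, size, READS // queue_depth)
    elapsed = time.time() - start
    os.close(fd)
    return READS / elapsed

for depth in (1, 4, 16):
    print("queue depth %2d: %7.1f random reads/sec" % (depth, bench(depth)))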
On Thu, Apr 14, 2005 at 10:51:46AM -0500, Matthew Nuzum wrote:
> So if you all were going to choose between two hard drives where:
> drive A has capacity C and spins at 15K rpms, and
> drive B has capacity 2 x C and spins at 10K rpms and
> all other features are the same, the price is the same and
requests from multiple hosts
can be queued.
-Original Message-
From: Bruce Momjian [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 19, 2005 12:16 PM
To: Mohan, Ross
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?
Mohan, Ross wrote:
[EMAIL PROTECTED] wrote on 04/19/2005 11:10:22 AM:
>
> What is 'multiple initiators' used for in the real world?
I asked this same question and got an answer off list: Somebody said their
SAN hardware used multiple initiators. I would try to check the archives
for you, but this thread is becom
---
> -Original Message-
> From: Bruce Momjian [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, April 19, 2005 12:10 PM
> To: Mohan, Ross
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] How to improve db performance with $7K?
>
>
> Mohan
Subject: Re: [PERFORM] How to improve db performance with $7K?
Mohan, Ross wrote:
> The only part I am pretty sure about is that real-world experience
> shows SCSI is better for a mixed I/O environment. Not sure why,
> exactly, but the command queueing obviously helps, and I am not sure
Mohan, Ross wrote:
> The only part I am pretty sure about is that real-world experience shows SCSI
> is better for a mixed I/O environment. Not sure why, exactly, but the
> command queueing obviously helps, and I am not sure what else does.
>
> || TCQ is the secret sauce, no doubt. I think NCQ
Good question. If the SCSI system was moving the head from track 1 to 10, and
a request then came in for track 5, could the system make the head stop at
track 5 on its way to track 10? That is something that only the controller
could do. However, I have no idea if SCSI does that.
|| SCSI, A
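For what it's worth, what Bruce describes is classic elevator (SCAN)
scheduling; whether it lives in the kernel, the controller, or the drive, the
reordering itself looks something like this toy Python sketch (track numbers
purely illustrative):

def elevator_order(current, pending):
    # Service pending track requests in SCAN order: everything ahead of the
    # head in its direction of travel first, then the rest on the way back.
    ahead = sorted(t for t in pending if t >= current)
    behind = sorted((t for t in pending if t < current), reverse=True)
    return ahead + behind

# Head at track 1, already headed for track 10 when a request for 5 arrives.
print(elevator_order(1, [10, 5]))   # -> [5, 10]: stop at 5 on the way to 10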
> -Original Message-
> From: Alex Turner [mailto:[EMAIL PROTECTED]
> Sent: Monday, April 18, 2005 5:50 PM
> To: Bruce Momjian
> Cc: Kevin Brown; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] How to improve db performance with $7K?
>
> Does it really ma
Alex Turner wrote:
> Does it really matter at which end of the cable the queueing is done
> (Assuming both ends know as much about drive geometry etc..)?
Good question. If the SCSI system was moving the head from track 1 to
10, and a request then came in for track 5, could the system make the
head stop at track 5 on its way to track 10? That is something that only
the controller could do. However, I have no idea if SCSI does that.
On 4/14/05, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> That's basically what it comes down to: SCSI lets the disk drive itself
> do the low-level I/O scheduling whereas the ATA spec prevents the drive
> from doing so (unless it cheats, ie, caches writes). Also, in SCSI it's
> possible for the drive
On Mon, Apr 18, 2005 at 06:49:44PM -0400, Alex Turner wrote:
> Does it really matter at which end of the cable the queueing is done
> (Assuming both ends know as much about drive geometry etc..)?
That is a pretty strong assumption, isn't it? Also you seem to be
assuming that the controller<->disk
Does it really matter at which end of the cable the queueing is done
(assuming both ends know as much about drive geometry, etc.)?
Alex Turner
netEconomist
On 4/18/05, Bruce Momjian wrote:
> Kevin Brown wrote:
> > Greg Stark wrote:
> >
> >
> > > I think you're being misled by analyzing the write
Kevin Brown wrote:
> Greg Stark wrote:
>
>
> > I think you're being misled by analyzing the write case.
> >
> > Consider the read case. When a user process requests a block and
> > that read makes its way down to the driver level, the driver can't
> > just put it aside and wait until it's conven
Oooops, I revived the never-ending $7K thread. :)
Well part of my message is to first relook at the idea that SATA is
cheap but slow. Most people look at SATA from the view of consumer-level
drives, no NCQ/TCQ -- basically these drives are IDEs that can connect
to SATA cables. But if you then lo
On 4/18/05, Jacques Caron <[EMAIL PROTECTED]> wrote:
> Hi,
>
> At 20:21 18/04/2005, Alex Turner wrote:
> >So I wonder if one could take this stripe size thing further and say
> >that a larger stripe size is more likely to result in requests getting
> >served parallized across disks which would lea
Mistype.. I meant 0+1 in the second instance :(
On 4/18/05, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> Alex Turner wrote:
> > Not true - the recommended RAID level is RAID 10, not RAID 0+1 (at
> > least I would never recommend 1+0 for anything).
>
> Uhmm I was under the impression that 1+0 was
Hi,
At 20:21 18/04/2005, Alex Turner wrote:
So I wonder if one could take this stripe size thing further and say
that a larger stripe size is more likely to result in requests getting
served parallelized across disks, which would lead to increased
performance?
Actually, it would be pretty much the opposite.
To: Jacques Caron
Cc: Greg Stark; William Yu; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?
So I wonder if one could take this stripe size thing further and say that a
larger stripe size is more likely to result in requests getting served
parallelized across disks, which would lead to increased performance?
Hi,
At 20:16 18/04/2005, Alex Turner wrote:
So my assertion that adding more drives doesn't help is pretty
wrong... particularly with OLTP because it's always dealing with
blocks that are smaller than the stripe size.
When doing random seeks (which is what a database needs most of the time),
the n
Alex Turner wrote:
Not true - the recommended RAID level is RAID 10, not RAID 0+1 (at
least I would never recommend 1+0 for anything).
Uhmm I was under the impression that 1+0 was RAID 10 and that 0+1 is NOT
RAID 10.
Ref: http://www.acnc.com/raid.html
Sincerely,
Joshua D. Drake
---
Jacques Caron <[EMAIL PROTECTED]> writes:
> When writing:
> - in RAID 0, 1 drive
> - in RAID 1, RAID 0+1 or 1+0, 2 drives
> - in RAID 5, you need to read on all drives and write on 2.
Actually RAID 5 only really needs to read from two drives: the existing parity
block and the block you're replacing.
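Two reads suffice because parity is plain XOR: read the old data block and the
old parity, XOR both against the new data, and write the new data and new
parity back. A toy Python illustration (tiny byte strings standing in for
blocks):

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# A stripe of three data blocks plus parity (illustrative 4-byte "blocks").
d0, d1, d2 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"
parity = xor(xor(d0, d1), d2)

# Small write: replace d1. Only the old d1 and the old parity must be read.
new_d1 = b"\x11\x22\x33\x44"
new_parity = xor(xor(parity, d1), new_d1)

# The shortcut matches recomputing parity from scratch.
assert new_parity == xor(xor(d0, new_d1), d2)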
So I wonder if one could take this stripe size thing further and say
that a larger stripe size is more likely to result in requests getting
served parallelized across disks, which would lead to increased
performance?
Again, thanks to all people on this list, I know that I have learnt a
_hell_ of a lot
Ok - well - I am partially wrong...
If your stripe size is 64KB and you are reading 256KB worth of data,
it will be spread across four drives, so you will need to read from
four devices to get your 256KB of data (RAID 0 or 5 or 10), but if you
are only reading 64KB of data, I guess you would only need to read
from one drive.
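That back-of-the-envelope generalizes: the number of member disks an I/O
touches is roughly the request size divided by the stripe (chunk) size, capped
at the number of data disks. A quick Python sketch, ignoring alignment
(numbers illustrative):

import math

def disks_touched(request_kb, stripe_kb, data_disks):
    # A request spanning ceil(request / stripe) chunks hits that many
    # distinct disks, but never more than the array has.
    return min(math.ceil(request_kb / stripe_kb), data_disks)

print(disks_touched(256, 64, 4))  # -> 4: a 256KB read spans all four disks
print(disks_touched(64, 64, 4))   # -> 1: a 64KB read can come from one disk
print(disks_touched(8, 64, 4))    # -> 1: a typical 8KB random read hits one disk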
I think the "add more disks" thing is really from the point of view that
one disk is never enough. You should really have at least four
drives configured into two RAID 1s. Most DBAs will know this, but
most average Joes won't.
Alex Turner
netEconomist
On 4/18/05, Steve Poe <[EMAIL PROTECTED]> w
Not true - the recommended RAID level is RAID 10, not RAID 0+1 (at
least I would never recommend 1+0 for anything).
RAID 10 and RAID 0+1 are _quite_ different. One gives you very good
redundancy, the other is only slightly better than RAID 5, but
operates faster in degraded mode (single drive).
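The redundancy gap is easy to put a number on: after the first failure, RAID
10 loses data only if the dead drive's mirror partner dies next, while RAID
0+1 is left with a single unmirrored stripe set, so any second failure on the
surviving side is fatal. A small Monte Carlo sketch in Python (assuming 8
drives and two random failures):

import random

def survives_raid10(n_pairs, failed):
    # Data is lost only when both drives of some mirror pair have failed.
    return not any({2 * p, 2 * p + 1} <= failed for p in range(n_pairs))

def survives_raid01(n_per_side, failed):
    # Two striped sides, mirrored against each other; data survives as long
    # as at least one side is fully intact.
    side_a_ok = all(d not in failed for d in range(n_per_side))
    side_b_ok = all(d not in failed for d in range(n_per_side, 2 * n_per_side))
    return side_a_ok or side_b_ok

def p_loss(survives, n_drives, trials=100000):
    lost = 0
    for _ in range(trials):
        failed = set(random.sample(range(n_drives), 2))  # two random failures
        if not survives(failed):
            lost += 1
    return lost / trials

print("RAID 10  loss:", p_loss(lambda f: survives_raid10(4, f), 8))  # ~1/7
print("RAID 0+1 loss:", p_loss(lambda f: survives_raid01(4, f), 8))  # ~4/7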
Alex,
In the situation of the animal hospital server I oversee, their
application is OLTP. Adding hard drives (6-8) does help performance.
Benchmarks like pgbench and OSDB agree, but in reality users
could not see a noticeable change. However, moving the top 5/10 tables and
indexes to the
Hi,
At 16:59 18/04/2005, Greg Stark wrote:
William Yu <[EMAIL PROTECTED]> writes:
> Using the above prices for a fixed budget for RAID-10, you could get:
>
> SATA 7200 -- 680GB per $1000
> SATA 10K -- 200GB per $1000
> SCSI 10K -- 125GB per $1000
What a lot of these analyses miss is that cheaper
Alex Turner wrote:
[snip]
Adding drives will not let you get lower response times than the average seek
time on your drives*. But it will let you reach that response time more often.
[snip]
I believe your assertion is fundamentally flawed. Adding more drives
will not let you reach that response t
Alex Turner wrote:
[snip]
Adding drives will not let you get lower response times than the average seek
time on your drives*. But it will let you reach that response time more often.
[snip]
I believe your assertion is fundamentally flawed. Adding more drives
will not let you reach that resp
Hi,
At 18:56 18/04/2005, Alex Turner wrote:
All drives are required to fill every request in all RAID levels
No, this is definitely wrong. In many cases, most drives don't actually
have the data requested; how could they handle the request?
When reading one random sector, only *one* drive out of
[snip]
>
> Adding drives will not let you get lower response times than the average seek
> time on your drives*. But it will let you reach that response time more often.
>
[snip]
I believe your assertion is fundamentally flawed. Adding more drives
will not let you reach that response time more o
Alex Turner <[EMAIL PROTECTED]> writes:
> This is fundamentally untrue.
>
> A mirror is still a mirror. At most in a RAID 10 you can have two
> simultaneous seeks. You are always going to be limited by the seek
> time of your drives. It's a stripe, so you have to read from all
> members of the
This is fundamentally untrue.
A mirror is still a mirror. At most in a RAID 10 you can have two
simultaneous seeks. You are always going to be limited by the seek
time of your drives. It's a stripe, so you have to read from all
members of the stripe to get data, requiring all drives to seek.
Th
> -Original Message-
> From: Greg Stark [mailto:[EMAIL PROTECTED]
> Sent: Monday, April 18, 2005 9:59 AM
> To: William Yu
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] How to improve db performance with $7K?
>
> William Yu <[EMAIL PROTECTED
William Yu <[EMAIL PROTECTED]> writes:
> Using the above prices for a fixed budget for RAID-10, you could get:
>
> SATA 7200 -- 680GB per $1000
> SATA 10K -- 200GB per $1000
> SCSI 10K -- 125GB per $1000
What a lot of these analyses miss is that cheaper == faster because cheaper
means you can
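The "cheaper can be faster" argument boils down to spindles per dollar: random
I/O throughput scales with the number of drives a fixed budget buys. A
back-of-the-envelope Python sketch (the prices and per-drive IOPS figures are
illustrative 2005-era guesses, not measurements):

BUDGET = 7000  # dollars

# Assumed (price per drive, random IOPS per drive) -- illustrative only.
drives = {
    "SATA 7200rpm":      (150, 80),
    "SATA 10k (Raptor)": (200, 110),
    "SCSI 10k":          (350, 120),
    "SCSI 15k":          (500, 160),
}

for name, (price, iops) in drives.items():
    n = BUDGET // price
    usable = n // 2 * 2            # RAID 10 wants pairs; half the spindles mirror
    agg_read_iops = usable * iops  # reads can be spread over every spindle
    print("%-18s %2d drives, ~%5d aggregate read IOPS" % (name, n, agg_read_iops))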
or two) in multiuser PostgreSQL applications.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Greg Stark
Sent: Thursday, April 14, 2005 2:04 PM
To: Kevin Brown
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?
Kevin Brown <[EMAIL PROTECTED]> writes:
Rosser Schwarz wrote:
> while you weren't looking, Kevin Brown wrote:
>
> [reordering bursty reads]
>
> > In other words, it's a corner case that I strongly suspect
> > isn't typical in situations where SCSI has historically made a big
> > difference.
>
> [...]
>
> > But I rather doubt that has
Vivek Khera wrote:
>
> On Apr 14, 2005, at 10:03 PM, Kevin Brown wrote:
>
> >Now, bad block remapping destroys that guarantee, but unless you've
> >got a LOT of bad blocks, it shouldn't destroy your performance, right?
> >
>
> ALL disks have bad blocks, even when you receive them. you honestly
Tom Lane wrote:
> Kevin Brown <[EMAIL PROTECTED]> writes:
> > In the case of pure random reads, you'll end up having to wait an
> > average of half of a rotation before beginning the read.
>
> You're assuming the conclusion. The above is true if the disk is handed
> one request at a time by a ker
Cc: Kevin Brown; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?
Tom Lane <[EMAIL PROTECTED]> writes:
> Yes, you can probably assume that blocks with far-apart numbers are
> going to require a big seek, and you might even be right in supposing
Tom Lane <[EMAIL PROTECTED]> writes:
> Yes, you can probably assume that blocks with far-apart numbers are
> going to require a big seek, and you might even be right in supposing
> that a block with an intermediate number should be read on the way.
> But you have no hope at all of making the right
On Apr 15, 2005, at 11:58 AM, Joshua D. Drake wrote:
ALL disks have bad blocks, even when you receive them. you honestly
think that these large disks made today (18+ GB is the smallest now)
that there are no defects on the surfaces?
That is correct. It is just that the HD makers will mark the ba
Vivek Khera wrote:
On Apr 14, 2005, at 10:03 PM, Kevin Brown wrote:
Now, bad block remapping destroys that guarantee, but unless you've
got a LOT of bad blocks, it shouldn't destroy your performance, right?
ALL disks have bad blocks, even when you receive them. you honestly
think that these large
On Apr 14, 2005, at 10:03 PM, Kevin Brown wrote:
Now, bad block remapping destroys that guarantee, but unless you've
got a LOT of bad blocks, it shouldn't destroy your performance, right?
ALL disks have bad blocks, even when you receive them. you honestly
think that these large disks made today (
PFC wrote:
My argument is that a sufficiently smart kernel scheduler *should*
yield performance results that are reasonably close to what you can
get with that feature. Perhaps not quite as good, but reasonably
close. It shouldn't be an orders-of-magnitude type difference.
And a controller
platter compared to the rotational speed, which would agree with the
fact that you can read 70MB/sec, but it takes up to 13ms to seek.
Actually:
- the head has to be moved; this time depends on the distance: for instance,
moving from a cylinder to the next is very fast (it needs to, to get goo
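Putting rough numbers on that: for a random 8KB read, the seek plus (on
average) half a rotation dwarfs the media transfer even at 70MB/sec. A quick
worked example in Python (drive parameters are illustrative):

def random_read_ms(seek_ms, rpm, request_kb, transfer_mb_s):
    rotational_ms = (60000.0 / rpm) / 2            # half a rotation on average
    transfer_ms = request_kb / 1024.0 / transfer_mb_s * 1000
    return seek_ms + rotational_ms + transfer_ms

# Illustrative 7200rpm SATA vs 15k SCSI, 8KB random reads, 70MB/s media rate.
for name, seek, rpm in [("SATA 7200", 8.5, 7200), ("SCSI 15k", 3.8, 15000)]:
    total = random_read_ms(seek, rpm, 8, 70)
    print("%s: ~%.1f ms per random 8KB read (~%.0f reads/sec per spindle)"
          % (name, total, 1000 / total))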
My argument is that a sufficiently smart kernel scheduler *should*
yield performance results that are reasonably close to what you can
get with that feature. Perhaps not quite as good, but reasonably
close. It shouldn't be an orders-of-magnitude type difference.
And a controller card (or drive)
Kevin Brown <[EMAIL PROTECTED]> writes:
> In the case of pure random reads, you'll end up having to wait an
> average of half of a rotation before beginning the read.
You're assuming the conclusion. The above is true if the disk is handed
one request at a time by a kernel that doesn't have any lo
Tom Lane wrote:
> Kevin Brown <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> The reason this is so much more of a win than it was when ATA was
> >> designed is that in modern drives the kernel has very little clue about
> >> the physical geometry of the disk. Variable-size tracks, bad-block
Kevin Brown <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> The reason this is so much more of a win than it was when ATA was
>> designed is that in modern drives the kernel has very little clue about
>> the physical geometry of the disk. Variable-size tracks, bad-block
>> sparing, and stuff like
3ware claim that their 'software' implemented command queueing
performs at 95% effectiveness compared to the hardware queueing on a
SCSI drive, so I would say that they agree with you.
I'm still learning, but as I read it, the bits are split across the
platters and there is only 'one' head, but ha
Tom Lane wrote:
> Kevin Brown <[EMAIL PROTECTED]> writes:
> > I really don't see how this is any different between a system that has
> > tagged queueing to the disks and one that doesn't. The only
> > difference is where the queueing happens. In the case of SCSI, the
> > queueing happens on the d
Sent: Thursday, April 14, 2005 2:04 PM
To: Kevin Brown
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?
Kevin Brown <[EMAIL PROTECTED]> writes:
Greg Stark wrote:
I think you're being misled by analyzing the write case.
Consider the read case. When a user proc
> -Original Message-
> From: Mohan, Ross [mailto:[EMAIL PROTECTED]
> Sent: Thursday, April 14, 2005 1:30 PM
> To: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] How to improve db performance with $7K?
>
> Greg Stark wrote:
> >
> > Kevin
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Greg Stark
Sent: Thursday, April 14, 2005 2:04 PM
To: Kevin Brown
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?
Kevin Brown <[EMAIL PROTECTED]> writes:
Greg Stark wrote:
I think
The real question is whether you choose the single 15kRPM drive or
additional
drives at 10kRPM... Additional spindles would give a much bigger
And the bonus question.
Expensive fast drives as a RAID for everything, or for the same price
many more slower drives (even SATA) so you can put the
or two) in multiuser PostgreSQL applications.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Greg Stark
Sent: Thursday, April 14, 2005 2:04 PM
To: Kevin Brown
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?
Kevin Brown <[EMAIL PR
> -Original Message-
> From: Greg Stark [mailto:[EMAIL PROTECTED]
> Sent: Thursday, April 14, 2005 12:55 PM
> To: [EMAIL PROTECTED]
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] How to improve db performance with $7K?
>
> "Matthew Nuzu
"Matthew Nuzum" <[EMAIL PROTECTED]> writes:
> So if you all were going to choose between two hard drives where:
> drive A has capacity C and spins at 15K rpms, and
> drive B has capacity 2 x C and spins at 10K rpms and
> all other features are the same, the price is the same and C is enough
> disk
Kevin Brown <[EMAIL PROTECTED]> writes:
> Greg Stark wrote:
>
>
> > I think you're being misled by analyzing the write case.
> >
> > Consider the read case. When a user process requests a block and
> > that read makes its way down to the driver level, the driver can't
> > just put it aside and
"Matthew Nuzum" <[EMAIL PROTECTED]> writes:
> drive A has capacity C and spins at 15K rpms, and
> drive B has capacity 2 x C and spins at 10K rpms and
> all other features are the same, the price is the same and C is enough
> disk space which would you choose?
In this case you always choose the 1
On 4/14/05, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> That's basically what it comes down to: SCSI lets the disk drive itself
> do the low-level I/O scheduling whereas the ATA spec prevents the drive
> from doing so (unless it cheats, ie, caches writes). Also, in SCSI it's
> possible for the drive t
YMMV.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Kevin Brown
Sent: Thursday, April 14, 2005 4:36 AM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?
Greg Stark wrote:
> I th
while you weren't looking, Kevin Brown wrote:
[reordering bursty reads]
> In other words, it's a corner case that I strongly suspect
> isn't typical in situations where SCSI has historically made a big
> difference.
[...]
> But I rather doubt that has to be a huge penalty, if any. When a
> pro
Kevin Brown <[EMAIL PROTECTED]> writes:
> I really don't see how this is any different between a system that has
> tagged queueing to the disks and one that doesn't. The only
> difference is where the queueing happens. In the case of SCSI, the
> queueing happens on the disks (or at least on the c
Greg Stark wrote:
> I think you're being misled by analyzing the write case.
>
> Consider the read case. When a user process requests a block and
> that read makes its way down to the driver level, the driver can't
> just put it aside and wait until it's convenient. It has to go ahead
> and issu
Kevin Brown <[EMAIL PROTECTED]> writes:
> My question is: why does this (physical I/O scheduling) seem to matter
> so much?
>
> Before you flame me for asking a terribly idiotic question, let me
> provide some context.
>
> The operating system maintains a (sometimes large) buffer cache, with
>
Tom Lane wrote:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > In any case the issue with the IDE protocol is that fundamentally you
> > can only have a single command pending. SCSI can have many commands
> > pending.
>
> That's the bottom line: the SCSI protocol was designed (twenty years ago!)
> t
Yep, that's it, as well as increased quality control. I found this from
Seagate:
http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf
With this quote (note that ES stands for Enterprise System and PS stands
for Personal System):
There is significantl
Based on the reading I'm doing, and somebody please correct me if I'm
wrong, it seems that SCSI drives contain an on-disk controller that
has to process the tagged queue. SATA-I doesn't have this. This
additional controller is basically an on-board computer that figures
out the best order in which to service the queued requests.
Another simple question: Why is SCSI more expensive? After the
eleventy-millionth controller is made, it seems like SCSI and SATA are
using a controller board and a spinning disk. Is somebody still making
money by licensing SCSI technology?
Rick
[EMAIL PROTECTED] wrote on 04/06/2005 11:58:33 PM
A good one page discussion on the future of SCSI and SATA can
be found in the latest CHIPS (The Department of the Navy Information
Technology Magazine, formerly CHIPS AHOY) in an article by
Patrick G. Koehler and Lt. Cmdr. Stan Bush.
Click below if you don't mind being logged visiting Space and Na
You asked for it! ;-)
If you want cheap, get SATA. If you want fast under
*load* conditions, get SCSI. Everything else at this
time is marketing hype, either intentional or learned.
Ignoring dollars, expect to see SCSI beat SATA by 40%.
* * * What I tell you three times is true * * *
Also, c
Greg Stark <[EMAIL PROTECTED]> writes:
> In any case the issue with the IDE protocol is that fundamentally you
> can only have a single command pending. SCSI can have many commands
> pending.
That's the bottom line: the SCSI protocol was designed (twenty years ago!)
to allow the drive to do physic
Yeah - the more reading I'm doing - the more I'm finding out.
Allegedly the Western Digital Raptor drives implement a version of
ATA-4 Tagged Queuing which allows reordering of commands. Some
controllers support this. The 3ware docs say that the controller
supports both reordering on the controller
Alex Turner <[EMAIL PROTECTED]> writes:
> SATA gives each drive its own channel, but you have to share in SCSI.
> A SATA controller typically can do 3Gb/sec (384MB/sec) per drive, but
> SCSI can only do 320MB/sec across the entire array.
SCSI controllers often have separate channels for each de
Ok - I take it back - I'm reading through this now, and realising that
the reviews are pretty clueless in several places...
On Apr 6, 2005 8:12 PM, Alex Turner <[EMAIL PROTECTED]> wrote:
> Ok - so I found this fairly good online review of various SATA cards
> out there, with 3ware not doing too h
Ok - so I found this fairly good online review of various SATA cards
out there, with 3ware not doing too hot on RAID 5, but ok on RAID 10.
http://www.tweakers.net/reviews/557/
Very interesting stuff.
Alex Turner
netEconomist
On Apr 6, 2005 7:32 PM, Alex Turner <[EMAIL PROTECTED]> wrote:
> I gue
I guess I'm setting myself up here, and I'm really not being ignorant,
but can someone explain exactly how SCSI is supposed to be better than
SATA?
Both systems use drives with platters. Each drive can physically only
read one thing at a time.
SATA gives each drive its own channel, but you have
Sorry if I'm pointing out the obvious here, but it seems worth
mentioning. AFAIK all 3ware controllers are set up so that each SATA
drive gets its own SATA bus. My understanding is that by and large,
SATA still suffers from a general inability to have multiple outstanding
commands on the bus at once
Well - unfortunately software RAID isn't appropriate for everyone, and
some of us need a hardware RAID controller. The LSI Megaraid 320-2
card is almost exactly the same price as the 3ware 9500S-12 card
(although I will concede that a 320-2 card can handle at most 2x14
devices compared with the 12
It's the same money if you factor in the 3ware controller. Even without
a caching controller, SCSI works well in multi-threaded IO
(notwithstanding crappy shit from Dell or Compaq). You can get such cards
from LSI for $75. And of course, many server MBs come with LSI
controllers built-in. Our
It's hardly the same money; the drives are twice as much.
It's all about the controller, baby, with any kind of drive. A bad SCSI
controller will give sucky performance too, believe me. We had a
Compaq Smart Array 5304, and its performance was _very_ subpar.
If someone has a simple benchmark tes
Alex Turner wrote:
I'm no drive expert, but it seems to me that our write performance is
excellent. I think what most are concerned about is OLTP where you
are doing heavy write _and_ heavy read performance at the same time.
Our system is mostly read during the day, but we do a full system
update
I'm doing some research on SATA vs SCSI right now, but to be honest
I'm not turning up much at the protocol level. A lot of stupid
benchmarks comparing 10k Raptor drives against top-of-the-line 15k
drives, where unsurprisingly the SCSI drives win but of course cost 4
times as much. Although even in
On Apr 4, 2005, at 3:12 PM, Alex Turner wrote:
Our system is mostly read during the day, but we do a full system
update every night that is all writes, and it's very fast compared to
the smaller SCSI system we moved off of. Nearly a 6x speed
improvement, as fast as 900 rows/sec with a 48 byte record
I'm no drive expert, but it seems to me that our write performance is
excellent. I think what most are concerned about is OLTP where you
are doing heavy write _and_ heavy read performance at the same time.
Our system is mostly read during the day, but we do a full system
update every night that is
Alex Turner wrote:
To be honest, I've yet to run across a SCSI configuration that can
touch the 3ware SATA controllers. I have yet to see one top 80MB/sec,
let alone 180MB/sec read or write, which is why we moved _away_ from
SCSI. I've seen Compaq, Dell and LSI controllers all do pathetically
ba