On 4/19/05, Mohan, Ross [EMAIL PROTECTED] wrote:
Clustered file systems are the first/best example that
comes to mind. Host A and Host B can both request from the disk farm, e.g.
Something like a Global File System?
http://www.redhat.com/software/rha/gfs/
(I believe some other company did develop it
I wonder if that's something to think about adding to PostgreSQL? A
setting for multiblock read count like Oracle (although, having said
that, I believe that Oracle natively caches pages much more
aggressively than PostgreSQL, which lets the OS do the file
caching).
Alex Turner
netEconomist
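For what it's worth, a toy sketch of what such a knob could buy (illustrative Python only; the function, BLOCK_SIZE, and the eight-block default are my own assumptions, not anything that exists in PostgreSQL):

import os

BLOCK_SIZE = 8192   # one PostgreSQL-sized page

def read_multiblock(fd, block_no, read_count=8):
    """Fetch read_count adjacent blocks with one pread() instead of
    issuing read_count separate single-block reads."""
    return os.pread(fd, BLOCK_SIZE * read_count, block_no * BLOCK_SIZE)

The point is just that one large request replaces several small ones, which is what a multiblock read count buys on sequential scans.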
Alex Turner wrote:
[...]
Let's say we invented a new protocol that included
Alex et al.,
I wonder if that's something to think about adding to PostgreSQL? A setting for
multiblock read count like Oracle (although
|| I would think so, yea. GMTA: I was just having this micro-chat with Mr. Jim
Nasby.
having said that I believe that Oracle natively caches pages much
Alex Turner wrote:
Does it really matter at which end of the cable the queueing is done
(assuming both ends know as much about drive geometry etc.)?
Good question. If the SCSI system was moving the head from track 1 to 10, and
a request then came in for track 5, could the system make the head stop at
track 5 on its way to track 10? That is something that only the controller
could do. However, I have no idea if SCSI does that.
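A toy illustration of the elevator idea in that question (a minimal SCAN-style sketch in Python; real SCSI firmware is of course not this):

def elevator_order(head, pending):
    """Serve pending track requests SCAN-style: everything reachable while
    the head keeps moving upward, then the remainder on the way back."""
    ahead = sorted(t for t in pending if t >= head)
    behind = sorted((t for t in pending if t < head), reverse=True)
    return ahead + behind

print(elevator_order(1, [10, 5]))   # [5, 10]: stop at track 5 on the way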
Mohan, Ross wrote:
The only part I am pretty sure about is that real-world experience shows SCSI
is better for a mixed I/O environment. Not sure why, exactly, but the
command queueing obviously helps, and I am not sure what else does.
|| TCQ is the secret sauce, no doubt. I think NCQ
[EMAIL PROTECTED] wrote on 04/19/2005 11:10:22 AM:
What is 'multiple initiators' used for in the real world?
I asked this same question and got an answer off list: Somebody said their
SAN hardware used multiple initiators. I would try to check the archives
for you, but this thread is
multiple hosts
can be queued.
On Thu, Apr 14, 2005 at 10:51:46AM -0500, Matthew Nuzum wrote:
So if you all were going to choose between two hard drives where:
drive A has capacity C and spins at 15K rpms, and
drive B has capacity 2 x C and spins at 10K rpms and
all other features are the same, the price is the same and C is enough
disk space, which would you choose?
On Mon, Apr 18, 2005 at 07:41:49PM +0200, Jacques Caron wrote:
It would be interesting to actually compare this to real-world (or
nearly-real-world) benchmarks to measure the effectiveness of features like
TCQ/NCQ etc.
I was just thinking that it would be very interesting to benchmark
On Mon, Apr 18, 2005 at 10:20:36AM -0500, Dave Held wrote:
Hmm...so you're saying that at some point, quantity beats quality?
That's an interesting point. However, it presumes that you can
actually distribute your data over a larger number of drives. If
you have a db with a bottleneck of one
On Mon, Apr 18, 2005 at 06:41:37PM -, Mohan, Ross wrote:
Don't you think optimal stripe width would be
a good question to research the binaries for? I'd
think that drives the answer, largely. (uh oh, pun alert)
E.g., Oracle issues IO requests (this may have changed _just_
recently) in
William Yu [EMAIL PROTECTED] writes:
Using the above prices for a fixed budget for RAID-10, you could get:
SATA 7200 -- 680GB per $1000
SATA 10K -- 200GB per $1000
SCSI 10K -- 125GB per $1000
What a lot of these analyses miss is that cheaper == faster because cheaper
means you can buy
This is fundamentally untrue.
A mirror is still a mirror. At most in a RAID 10 you can have two
simultaneous seeks. You are always going to be limited by the seek
time of your drives. It's a stripe, so you have to read from all
members of the stripe to get data, requiring all drives to seek.
[snip]
Adding drives will not let you get lower response times than the average seek
time on your drives*. But it will let you reach that response time more often.
[snip]
I believe your assertion is fundamentally flawed. Adding more drives
will not let you reach that response time more
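One way to see the "more often" side of this argument is a toy queueing model (my own illustration; the 8 ms service time is an assumed seek-plus-rotation cost):

def avg_latency(n_drives, n_requests, service_ms=8.0):
    # Each random read costs a full seek + rotation (service_ms) no matter
    # how many drives exist; extra spindles only reduce waiting in line.
    free_at = [0.0] * n_drives            # when each spindle is next free
    done = []
    for _ in range(n_requests):           # a burst of requests at t=0
        d = min(range(n_drives), key=lambda i: free_at[i])
        free_at[d] += service_ms
        done.append(free_at[d])
    return sum(done) / len(done)

print(avg_latency(1, 8))   # 36.0 ms: everything queues behind one spindle
print(avg_latency(4, 8))   # 12.0 ms: same 8 ms floor, reached more often

No single request ever beats the 8 ms floor, which supports the first sentence; the average under concurrent load drops sharply, which supports the second.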
Hi,
At 18:56 18/04/2005, Alex Turner wrote:
All drives are required to fill every request in all RAID levels
No, this is definitely wrong. In many cases, most drives don't actually
have the data requested; how could they handle the request?
When reading one random sector, only *one* drive out of
Alex,
In the situation of the animal hospital server I oversee, their
application is OLTP. Adding hard drives (6-8) does help performance.
Benchmarks like pgbench and OSDB agree, but in reality users
could not see a noticeable change. However, moving the top 5/10 tables and
indexes to
Not true - the recommended RAID level is RAID 10, not RAID 0+1 (at
least I would never recommend 1+0 for anything).
RAID 10 and RAID 0+1 are _quite_ different. One gives you very good
redundancy, the other is only slightly better than RAID 5, but
operates faster in degraded mode (single drive).
I think the add more disks thing is really from the point of view that
one disk isn't enough ever. You should really have at least four
drives configured into two RAID 1s. Most DBAs will know this, but
most average Joes won't.
Alex Turner
netEconomist
Ok - well - I am partially wrong...
If your stripe size is 64KB, and you are reading 256KB worth of data,
it will be spread across four drives, so you will need to read from
four devices to get your 256KB of data (RAID 0, 5, or 10), but if you
are only reading 64KB of data, I guess you would
So I wonder if one could take this stripe size thing further and say
that a larger stripe size is more likely to result in requests getting
served parallelized across disks, which would lead to increased
performance?
Again, thanks to all people on this list, I know that I have learnt a
_hell_ of
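The stripe arithmetic is easy to sanity-check (an illustrative sketch; it assumes the read starts on a stripe boundary, and an unaligned read can touch one more disk):

import math

def drives_touched(request_kb, stripe_kb, n_drives):
    """How many member disks one contiguous, aligned read lands on."""
    chunks = math.ceil(request_kb / stripe_kb)
    return min(chunks, n_drives)

print(drives_touched(256, 64, 4))  # 4: the read spans every member
print(drives_touched(64, 64, 4))   # 1: fits entirely in one stripe chunk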
Jacques Caron [EMAIL PROTECTED] writes:
When writing:
- in RAID 0, 1 drive
- in RAID 1, RAID 0+1 or 1+0, 2 drives
- in RAID 5, you need to read on all drives and write on 2.
Actually RAID 5 only really needs to read from two drives. The existing parity
block and the block you're replacing.
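That shortcut works because RAID 5 parity is a plain XOR, so the old data can be cancelled out of the parity without touching the other members (a minimal sketch, my own illustration rather than any controller's code):

def new_parity(old_parity: bytes, old_block: bytes, new_block: bytes) -> bytes:
    # XOR the old data out of the parity, then XOR the new data in:
    # two reads (old block, old parity) and two writes, on any array width.
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_block, new_block))

# Check against recomputing parity from scratch over a two-data-disk stripe:
old_block, other_block, new_block = b"\x01\x02", b"\x10\x20", b"\xff\x02"
parity = bytes(a ^ b for a, b in zip(old_block, other_block))
assert new_parity(parity, old_block, new_block) == \
    bytes(a ^ b for a, b in zip(new_block, other_block))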
Alex Turner wrote:
Not true - the recommended RAID level is RAID 10, not RAID 0+1 (at
least I would never recommend 1+0 for anything).
Uhmm I was under the impression that 1+0 was RAID 10 and that 0+1 is NOT
RAID 10.
Ref: http://www.acnc.com/raid.html
Sincerely,
Joshua D. Drake
Hi,
At 20:16 18/04/2005, Alex Turner wrote:
So my assertion that adding more drives doesn't help is pretty
wrong... particularly with OLTP because it's always dealing with
blocks that are smaller than the stripe size.
When doing random seeks (which is what a database needs most of the time),
the
Hi,
At 20:21 18/04/2005, Alex Turner wrote:
So I wonder if one could take this stripe size thing further and say
that a larger stripe size is more likely to result in requests getting
served parallelized across disks, which would lead to increased
performance?
Actually, it would be pretty much the
Mistype.. I meant 0+1 in the second instance :(
Does it really matter at which end of the cable the queueing is done
(assuming both ends know as much about drive geometry etc.)?
Alex Turner
netEconomist
On Mon, Apr 18, 2005 at 06:49:44PM -0400, Alex Turner wrote:
Does it really matter at which end of the cable the queueing is done
(assuming both ends know as much about drive geometry etc.)?
That is a pretty strong assumption, isn't it? Also you seem to be
assuming that the controller-disk
On 4/14/05, Tom Lane [EMAIL PROTECTED] wrote:
That's basically what it comes down to: SCSI lets the disk drive itself
do the low-level I/O scheduling whereas the ATA spec prevents the drive
from doing so (unless it cheats, i.e., caches writes). Also, in SCSI it's
possible for the drive to
My argument is that a sufficiently smart kernel scheduler *should*
yield performance results that are reasonably close to what you can
get with that feature. Perhaps not quite as good, but reasonably
close. It shouldn't be an orders-of-magnitude type difference.
And a controller card (or
platter compared to the rotational speed, which would agree with the
fact that you can read 70MB/sec, but it takes up to 13ms to seek.
Actually:
- the head has to be moved; this time depends on the distance. For
instance, moving from one cylinder to the next is very fast (it needs
to be, to get
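Plugging the quoted figures in shows why seeks dominate this whole thread (the 8 KB block size is my own assumption):

seq_mb_s = 70.0      # sequential throughput quoted above
seek_ms = 13.0       # worst-case seek quoted above
block_kb = 8.0       # one PostgreSQL-sized page

seeks_per_s = 1000.0 / seek_ms                  # ~77 random reads/second
random_mb_s = seeks_per_s * block_kb / 1024.0   # ~0.6 MB/s
print(f"{random_mb_s:.2f} MB/s random vs {seq_mb_s:.0f} MB/s sequential")

Random 8 KB reads extract well under 1% of the drive's sequential bandwidth.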
On Apr 14, 2005, at 10:03 PM, Kevin Brown wrote:
Now, bad block remapping destroys that guarantee, but unless you've
got a LOT of bad blocks, it shouldn't destroy your performance, right?
ALL disks have bad blocks, even when you receive them. You honestly
think that these large disks made today
On Apr 15, 2005, at 11:58 AM, Joshua D. Drake wrote:
ALL disks have bad blocks, even when you receive them. You honestly
think that these large disks made today (18+ GB is the smallest now)
have no defects on the surfaces?
That is correct. It is just that the HD makers will mark the
Tom Lane [EMAIL PROTECTED] writes:
Yes, you can probably assume that blocks with far-apart numbers are
going to require a big seek, and you might even be right in supposing
that a block with an intermediate number should be read on the way.
But you have no hope
Kevin Brown [EMAIL PROTECTED] writes:
My question is: why does this (physical I/O scheduling) seem to matter
so much?
Before you flame me for asking a terribly idiotic question, let me
provide some context.
The operating system maintains a (sometimes large) buffer cache, with
each
Greg Stark wrote:
I think you're being misled by analyzing the write case.
Consider the read case. When a user process requests a block and
that read makes its way down to the driver level, the driver can't
just put it aside and wait until it's convenient. It has to go ahead
and issue the
Kevin Brown [EMAIL PROTECTED] writes:
I really don't see how this is any different between a system that has
tagged queueing to the disks and one that doesn't. The only
difference is where the queueing happens. In the case of SCSI, the
queueing happens on the disks (or at least on the
while you weren't looking, Kevin Brown wrote:
[reordering bursty reads]
In other words, it's a corner case that I strongly suspect
isn't typical in situations where SCSI has historically made a big
difference.
[...]
But I rather doubt that has to be a huge penalty, if any. When a
process
Matthew Nuzum [EMAIL PROTECTED] writes:
drive A has capacity C and spins at 15K rpms, and
drive B has capacity 2 x C and spins at 10K rpms and
all other features are the same, the price is the same and C is enough
disk space, which would you choose?
In this case you always choose the 15k RPM
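The rotational half of that trade-off is easy to quantify (a back-of-envelope sketch; real drives add seek time on top, and 15k spindles usually seek faster too):

def avg_rotational_latency_ms(rpm):
    # On average the target sector is half a revolution away.
    return 0.5 * 60_000.0 / rpm

print(avg_rotational_latency_ms(15000))  # 2.0 ms
print(avg_rotational_latency_ms(10000))  # 3.0 ms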
3ware claim that their 'software' implemented command queueing
performs at 95% effectiveness compared to the hardware queueing on a
SCSI drive, so I would say that they agree with you.
I'm still learning, but as I read it, the bits are split across the
platters and there is only 'one' head, but
Kevin Brown [EMAIL PROTECTED] writes:
Tom Lane wrote:
The reason this is so much more of a win than it was when ATA was
designed is that in modern drives the kernel has very little clue about
the physical geometry of the disk. Variable-size tracks, bad-block
sparing, and stuff like that make
Kevin Brown [EMAIL PROTECTED] writes:
In the case of pure random reads, you'll end up having to wait an
average of half of a rotation before beginning the read.
You're assuming the conclusion. The above is true if the disk is handed
one request at a time by a kernel that doesn't have any
A good one-page discussion on the future of SCSI and SATA can
be found in the latest CHIPS (The Department of the Navy Information
Technology Magazine, formerly CHIPS AHOY) in an article by
Patrick G. Koehler and Lt. Cmdr. Stan Bush.
Click below if you don't mind being logged visiting Space and
Another simple question: Why is SCSI more expensive? After the
eleventy-millionth controller is made, it seems like SCSI and SATA both
amount to a controller board and a spinning disk. Is somebody still making
money by licensing SCSI technology?
Rick
[EMAIL PROTECTED] wrote on 04/06/2005 11:58:33
Based on the reading I'm doing, and somebody please correct me if I'm
wrong, it seems that SCSI drives contain an on-disk controller that
has to process the tagged queue. SATA-I doesn't have this. This
additional controller is basically an on-board computer that figures
out the best order in
Yep, that's it, as well as increased quality control. I found this from
Seagate:
http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf
With this quote (note that ES stands for Enterprise System and PS stands
for Personal System):
There is
Alex Turner wrote:
I'm no drive expert, but it seems to me that our write performance is
excellent. I think what most are concerned about is OLTP where you
are doing heavy write _and_ heavy read performance at the same time.
Our system is mostly read during the day, but we do a full system
update
It's the same money if you factor in the 3ware controller. Even without
a caching controller, SCSI works well in multi-threaded IO (notwithstanding
crappy shit from Dell or Compaq). You can get such cards
from LSI for $75. And of course, many server MBs come with LSI
controllers built-in. Our
Well - unfortunately software RAID isn't appropriate for everyone, and
some of us need a hardware RAID controller. The LSI MegaRAID 320-2
card is almost exactly the same price as the 3ware 9500S-12 card
(although I will concede that a 320-2 card can handle at most 2x14
devices compared with the 12
Sorry if I'm pointing out the obvious here, but it seems worth
mentioning. AFAIK all 3ware controllers are set up so that each SATA
drive gets its own SATA bus. My understanding is that by and large,
SATA still suffers from a general inability to have multiple outstanding
commands on the bus at
I guess I'm setting myself up here, and I'm really not being ignorant,
but can someone explain exactly how SCSI is supposed to be better than
SATA?
Both systems use drives with platters. Each drive can physically only
read one thing at a time.
SATA gives each drive its own channel, but you
Ok - so I found this fairly good online review of various SATA cards
out there, with 3ware not doing too hot on RAID 5, but ok on RAID 10.
http://www.tweakers.net/reviews/557/
Very interesting stuff.
Alex Turner
netEconomist
Ok - I take it back - I'm reading through this now, and realising that
the reviews are pretty clueless in several places...
Alex Turner [EMAIL PROTECTED] writes:
SATA gives each drive its own channel, but you have to share in SCSI.
A SATA controller typically can do 3Gb/sec (384MB/sec) per drive, but
SCSI can only do 320MB/sec across the entire array.
SCSI controllers often have separate channels for each device
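For what it's worth, the arithmetic behind the quoted claim, taking the 384MB/sec and 320MB/sec figures at face value (the eight-drive array is my assumption, and sustained platter rates are far below either link ceiling):

drives = 8
sata_per_drive_mb = 384   # dedicated link per drive, as quoted
scsi_bus_mb = 320         # one Ultra320 bus shared by the whole chain

print(drives * sata_per_drive_mb)  # 3072 MB/s aggregate link ceiling
print(scsi_bus_mb)                 # 320 MB/s ceiling shared by all drives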
Yeah - the more reading I'm doing - the more I'm finding out.
Allegedly the Western Digital Raptor drives implement a version of
ATA-4 Tagged Queuing which allows reordering of commands. Some
controllers support this. The 3ware docs say that the controller
supports both reordering on the
Greg Stark [EMAIL PROTECTED] writes:
In any case the issue with the IDE protocol is that fundamentally you
can only have a single command pending. SCSI can have many commands
pending.
That's the bottom line: the SCSI protocol was designed (twenty years ago!)
to allow the drive to do physical
You asked for it! ;-)
If you want cheap, get SATA. If you want fast under
*load* conditions, get SCSI. Everything else at this
time is marketing hype, either intentional or learned.
Ignoring dollars, expect to see SCSI beat SATA by 40%.
* * * What I tell you three times is true * * *
Also,
To be honest, I've yet to run across a SCSI configuration that can
touch the 3ware SATA controllers. I have yet to see one top 80MB/sec,
let alone 180MB/sec read or write, which is why we moved _away_ from
SCSI. I've seen Compaq, Dell and LSI controllers all do pathetically
badly on RAID 1, RAID
On Apr 4, 2005, at 3:12 PM, Alex Turner wrote:
Our system is mostly read during the day, but we do a full system
update every night that is all writes, and it's very fast compared to
the smaller SCSI system we moved off of. Nearly a 6x speed
improvement, as fast as 900 rows/sec with a 48 byte
I'm doing some research on SATA vs SCSI right now, but to be honest
I'm not turning up much at the protocol level. A lot of stupid
benchmarks comparing 10k Raptor drives against top-of-the-line 15k
drives, where unsurprisingly the SCSI drives win but of course cost 4
times as much. Although even
Yeah, 35MB per sec is slow for a RAID controller; the 3ware mirrored is
about 50MB/sec, and striped is about 100
Dave
PFC wrote:
With hardware tuning, I am sure we can do better than 35MB per sec. Also,
WTF?
My laptop does 19 MB/s (reading 10 KB files, reiser4)!
A recent desktop 7200rpm IDE drive
# hdparm -t /dev/hdc1
/dev/hdc1:
Timing buffered disk reads: 148 MB in 3.02 seconds = 49.01 MB/sec
# ll
On Mon, 2005-03-28 at 17:36 +, Steve Poe wrote:
I agree with you. Unfortunately, I am not the developer of the
application. The vendor uses ProIV which connects via ODBC. The vendor
could certainly do some tuning and create more indexes where applicable. I
am encouraging the vendor to
Dave Cramer [EMAIL PROTECTED] writes:
PFC wrote:
My Laptop does 19 MB/s (reading 10 KB files, reiser4) !
Yeah, 35MB per sec is slow for a RAID controller; the 3ware mirrored is
about 50MB/sec, and striped is about 100
Well you're comparing apples and oranges here. A modern 7200rpm
Have you already considered application/database tuning? Adding
indexes? shared_buffers large enough? etc.
Your database doesn't seem that large for the hardware you've already
got. I'd hate to spend $7k and end up back in the same boat. :)
On Sat, 2005-03-26 at 13:04 +, Steve Poe wrote:
Cott Lang wrote:
Cott,
I agree with you.
1. For the empty PCI-X slot, buy a single- or dual-channel SCSI-320
hardware RAID controller, like the MegaRAID SCSI 320-2X
(don't forget to check driver support for your OS),
plus battery backup,
plus (optional) expand RAM to the maximum 256MB - approx $1K.
2. Buy new MAXTOR drives - Atlas 15K II (4x36.7GB) - approx 4x$400.
3.
You could build a dual Opteron with 4 GB of RAM and 12 10k Raptor SATA
drives with a battery-backed cache for about 7k or less.
Okay. You trust SATA drives? I've been leery of them for a production
database. Pardon my ignorance, but what is a battery-backed cache? I
know the drives have a
Hi Steve,
Okay. You trust SATA drives? I've been leery of them for a production
database. Pardon my ignorance, but what is a battery-backed cache? I
know the drives have a built-in cache but I don't know if that's the same.
Are the 12 drives internal or an external chassis? Could you point me to
a
Bjoern, Josh, Steve,
Get 12 or 16 x 74GB Western Digital Raptor S-ATA drives, one 3ware
9500S-12 or two 3ware 9500S-8 RAID controllers with a battery backup
unit (in case of power loss the controller saves unflushed data), a
decent Tyan board for the existing dual Xeon with 2 PCI-X slots and