You should search the archives for Luke Lonergan's posting about how IO in
PostgreSQL is significantly bottlenecked because it's not async. A 12-disk
array is going to max out PostgreSQL's theoretical write capacity to
disk, and therefore BigRDBMS is always going to win in such a config.
The problem I see with software RAID is the issue of a battery-backed unit:
if the computer loses power, then the 'cache' which is held in system
memory goes away, and fubars your RAID.
Alex
On 12/5/06, Michael Stone [EMAIL PROTECTED] wrote:
On Tue, Dec 05, 2006 at 01:21:38AM -0500, Alex
The test that I did - which was somewhat limited - showed no benefit from
splitting disks into separate partitions for large bulk loads.
The program read from one very large file and wrote the input out to two
other large files.
The total throughput on a single partition was close to the maximum
/AMCC, LSI).
Thanks,
Alex
On 12/4/06, Scott Marlowe [EMAIL PROTECTED] wrote:
On Mon, 2006-12-04 at 01:17, Alex Turner wrote:
People recommend LSI MegaRAID controllers on here regularly, but I
have found that they do not work that well. I have bonnie++ numbers
that show the controller
http://en.wikipedia.org/wiki/RAID_controller
Alex
On 12/4/06, Michael Stone [EMAIL PROTECTED] wrote:
On Mon, Dec 04, 2006 at 12:37:29PM -0500, Alex Turner wrote:
This discussion I think is important, as it would be useful for this
list to have a list of RAID cards that _do_ work well
bring us to a _good_ card).
Alex.
On 12/4/06, Greg Smith [EMAIL PROTECTED] wrote:
On Mon, 4 Dec 2006, Alex Turner wrote:
People recommend LSI MegaRAID controllers on here regularly, but I have
found that they do not work that well. I have bonnie++ numbers that show
the controller is not performing anywhere near the disks' saturation level
in a simple RAID 1 on RedHat Linux EL4 on two separate machines provided by
Does anyone have any performance experience with the Dell Perc 5i
controllers in RAID 10/RAID 5?
Thanks,
Alex
The query explain analyze looks like this:
click-counter=# explain analyze
select count(*) as count, to_char(date_trunc('day', c.datestamp), 'DD-Mon') as day
from impression c, url u, handle h
where c.url_id = u.url_id and c.handle_id = h.handle_id and h.handle like '1.19%'
group by
ahh good point
Thanks
On 9/22/06, Tom Lane [EMAIL PROTECTED] wrote:
Alex Turner [EMAIL PROTECTED] writes: How come the query statistics showed that 229066 blocks were read given that all the blocks in all the tables put together only total 122968?
You forgot to count the indexes. Also, the use
-indexed, and no changes beyond this insert were made in that time and result_entry has recently been vacuumed. Any insight would be greatly appreciated
Alex
On 9/22/06, Alex Turner [EMAIL PROTECTED] wrote:
ahh good point
Thanks
On 9/22/06, Tom Lane
[EMAIL PROTECTED] wrote:
Alex Turner [EMAIL
Do the basic math: if you have a table with 100 million records, each of which is 200 bytes long, that gives you roughly 20 gig of data (assuming it was all written neatly and hasn't been updated much). If you have to do a full table scan, then it will take roughly 400 seconds with a single 10k RPM
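A quick sketch of that arithmetic (the 50MB/sec streaming rate for a single 10k RPM disk is my assumption; the post only states the 400-second result):

```python
# Back-of-the-envelope full-table-scan estimate using the post's figures.
rows = 100_000_000
row_bytes = 200                        # assumed fixed row width
table_bytes = rows * row_bytes         # "roughly 20 gig"
seq_bytes_per_sec = 50_000_000         # assumed single-disk sequential read rate

print(table_bytes / 1e9)               # 20.0 (GB)
print(table_bytes / seq_bytes_per_sec) # 400.0 (seconds for a full scan)
```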
Sweet - that's good - RAID 10 support seems like an odd thing to leave out.
Alex
On 9/18/06, Luke Lonergan
[EMAIL PROTECTED] wrote:
Alex,
On 9/18/06 4:14 PM, Alex Turner
[EMAIL PROTECTED] wrote: Be warned, the tech specs page: http://www.sun.com/servers/x64/x4500/specs.xml#anchor3
doesn't mention
First things first, run a bonnie++ benchmark, and post the numbers. That will give a good indication of raw IO performance, and is often the first indication of problems separate from the DB. We have seen pretty bad performance from SANs in the past. How many FC lines do you have running to your
Oh - and it's useful to know if you are CPU bound, or IO bound. Check top or vmstat to get an idea of that.
Alex
On 8/22/06, Alex Turner
[EMAIL PROTECTED] wrote:
First things first, run a bonnie++ benchmark, and post the numbers. That will give a good indication of raw IO performance, and is often
First off - very few third party tools support Debian. Debian is a sure-fire way to have an unsupported system. Use RedHat or SuSE (flame me all you want, it doesn't make it less true). Second, run a bonnie++ benchmark against your disk array(s) to see what performance you are getting, and make sure
These numbers are pretty darn good for a four-disk RAID 10, pretty close to perfect in fact. Nice advert for the 642 - I guess we have a hardware RAID controller that will read independently from mirrors.
Alex
On 8/8/06, Steve Poe [EMAIL PROTECTED] wrote:
Luke,
Here are the results of two runs of 16GB
Although I for one have yet to see a controller that actually does this (I believe software RAID on Linux doesn't either).
Alex.
On 8/7/06, Markus Schaber
[EMAIL PROTECTED] wrote:
Hi, Charles,
Charles Sprickman wrote: I've also got a 1U with a 9500SX-4 and 4 drives. I like how the 3Ware card scales
This is a great testament to the fact that very often software RAID will seriously outperform hardware RAID because the OS guys who implemented it took the time to do it right, as compared with some controller manufacturers who seem to think it's okay to provide sub-standard performance.
Based on
With 18 disks dedicated to data, you could make 100/7*9 seeks/second (7ms av seek time, 9 independent units), which is 128 seeks/second writing on average 64KB of data, which is 4.1MB/sec throughput worst case, probably 10x best case so 40MB/sec - you might want to take more disks for your data and
On 7/17/06, Mikael Carneholm [EMAIL PROTECTED] wrote:
This is something I'd also like to test, as a common best practice these days is to go for a SAME (stripe all, mirror everything) setup. From a development perspective it's easier to use SAME as the
developers won't have to think about
On 7/17/06, Ron Peacetree [EMAIL PROTECTED] wrote:
-Original Message-
From: Mikael Carneholm [EMAIL PROTECTED]
Sent: Jul 17, 2006 5:16 PM
To: Ron Peacetree [EMAIL PROTECTED], pgsql-performance@postgresql.org
Subject: RE: [PERFORM] RAID stripe size question
15Krpm HDs will have average access
I have a really stupid question about top, what exactly is iowait CPU time?
Alex
Given the fact that most SATA drives have only an 8MB cache, and your RAID controller should have at least 64MB, I would argue that the system with the RAID controller should always be faster. If it's not, you're getting short-changed somewhere, which is typical on linux, because the drivers just
Anyone who has tried x86-64 linux knows what a royal pain in the ass it is. They didn't do anything sensible, like just make the whole OS 64 bit, no, they had to split it up, and put 64-bit libs in a new directory /lib64. This means that a great many applications don't know to check in there for
He's talking about RAID 1 here, not a gargantuan RAID 6. Onboard RAM
on the controller card is going to make very little difference. All
it will do is allow the card to re-order writes to a point (not all
cards even do this).
Alex.
On 1/18/06, William Yu [EMAIL PROTECTED] wrote:
Benjamin Arai
http://www.3ware.com/products/serial_ata2-9000.asp
Check their data sheet - the cards are BBU ready - all you have to do
is order a BBU
which you can from here:
http://www.newegg.com/Product/Product.asp?Item=N82E16815999601
Alex.
On 1/18/06, Joshua D. Drake [EMAIL PROTECTED] wrote:
A 3ware card will re-order your writes to put them more in disk order,
which will probably improve performance a bit, but just going from a
software RAID 1 to a hardware RAID 1, I would not imagine that you
will see much of a performance boost. Really to get better
performance you will need to
It's irrelevant what controller, you still have to actually write the
parity blocks, which slows down your write speed because you have to
write n+n/2 blocks instead of just n blocks, making the system write
50% more data.
RAID 5 must write 50% more data to disk, therefore it will always be slower.
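The 50% figure is the narrowest RAID 5 layout (2 data disks + 1 parity per stripe); here is a sketch of the write-amplification arithmetic, with stripe width as a parameter to show that wider arrays dilute the parity overhead (the function and its defaults are mine, not from the thread):

```python
def raid5_blocks_written(data_blocks, data_disks_per_stripe=2):
    """Total blocks hitting disk for full-stripe writes: data plus one
    parity block per stripe (ignores the read-modify-write penalty for
    partial-stripe writes, which only makes RAID 5 look worse)."""
    stripes = -(-data_blocks // data_disks_per_stripe)  # ceiling division
    return data_blocks + stripes

print(raid5_blocks_written(100))      # 150: n + n/2, the 50% overhead above
print(raid5_blocks_written(100, 4))   # 125: a 5-disk array only adds 25%
```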
Yes - they work excellently. I have several medium and large servers
running 3ware 9500S series cards with great success. We have
rebuilt many failed RAID 10s over the course of time with no problems.
Alex
On 12/26/05, Benjamin Arai [EMAIL PROTECTED] wrote:
Have you have any experience rebuilding
, Alex Turner wrote:
It's irrelevant what controller, you still have to actually write the
parity blocks, which slows down your write speed because you have to
write n+n/2 blocks instead of just n blocks, making the system write
50% more data.
RAID 5 must write 50% more data to disk therefore
Personally I would split into two RAID 1s. One for pg_xlog, one for
the rest. This gives probably the best performance/reliability
combination.
Alex.
On 12/10/05, Carlos Benkendorf [EMAIL PROTECTED] wrote:
Hello,
I would like to know which is the best configuration to use 4 scsi drives
with
I would argue that you almost certainly won't by doing that, as you will
create a new place even further away for the disk head to seek to
instead of just another file on the same FS that is probably closer to
the current head position.
Alex
On 12/6/05, Michael Stone [EMAIL PROTECTED] wrote:
On Tue,
Ok - so I ran the same test on my system and get a total speed of
113MB/sec. Why is this? Why is the system so limited to around just
110MB/sec? I tuned read ahead up a bit, and my results improve a
bit..
Alex
On 11/18/05, Luke Lonergan [EMAIL PROTECTED] wrote:
Dave,
On 11/18/05 5:00
On 11/16/05, William Yu [EMAIL PROTECTED] wrote:
Alex Turner wrote:
Spend a fortune on dual core CPUs and then buy crappy disks... I bet
for most applications this system will be IO bound, and you will see a
nice lot of drive failures in the first year of operation with
consumer grade
Just pick up a SCSI drive and a consumer ATA drive.
Feel their weight.
You don't have to look inside to tell the difference.
Alex
On 11/16/05, David Boreham [EMAIL PROTECTED] wrote:
I suggest you read this on the difference between enterprise/SCSI and
desktop/IDE drives:
On 11/16/05, Joshua D. Drake [EMAIL PROTECTED] wrote:
The only questions would be:
(1) Do you need a SMP server at all? I'd claim yes -- you always need
2+ cores whether it's DC or 2P to avoid IO interrupts blocking other
processes from running.
I would back this up. Even for smaller
).
It should also be noted that 64-drive chassis are going to become
possible once 2.5" 10Krpm SATA II and FC HDs become the standard next
year (48s are the TOTL now). We need controller technology to keep up.
Ron
At 12:16 AM 11/16/2005, Alex Turner wrote:
Not at random access in RAID 10
On 11/15/05, Luke Lonergan [EMAIL PROTECTED] wrote:
Adam,
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
Claus Guttesen
Sent: Tuesday, November 15, 2005 12:29 AM
To: Adam Weisberg
Cc: pgsql-performance@postgresql.org
Subject: Re:
Not at random access in RAID 10 they aren't, and anyone with their
head screwed on right is using RAID 10. The 9500S will still beat the
Areca cards at RAID 10 database access patterns.
Alex.
On 11/15/05, Dave Cramer [EMAIL PROTECTED] wrote:
Luke,
Have you tried the Areca cards, they are
We use this memory in all our servers (well - the 512 sticks). 0
problems to date:
http://www.newegg.com/Product/Product.asp?Item=N82E16820145513
$163 for 1GB.
This stuff is probably better than the Samsung RAM dell is selling you
for 3 times the price.
Alex
On 11/10/05, Ron Peacetree [EMAIL
and 2xRAID 1. Make sure you get the
firmware update if you have these controllers though.
Alex Turner
NetEconomist
On 11/6/05, Joost Kraaijeveld [EMAIL PROTECTED] wrote:
Hi,
I am experiencing very long update queries and I want to know if it is
reasonable to expect them to perform better
b.order_val>=25 and
b.order_val<50 and a.primary_key_id=b.primary_key_id
If the data updates a lot then this won't work as well though, as the
index table will require frequent updates to a potentially large number
of records (although a small number of pages so it still won't be
horrible).
Alex Turner
Just to play devil's advocate here for a second, but if we have an
algorithm that is substantially better than just plain old LRU, which is
what I believe the kernel is going to use to cache pages (I'm no kernel
hacker), then why don't we apply that and have a significantly larger
page cache a la
This is possible with Oracle utilizing the keep pool
alter table t_name storage ( buffer_pool keep);
If Postgres were to implement its own caching system, this seems like
it would be easy to implement (beyond the initial caching effort).
Alex
On 10/24/05, Craig A. James [EMAIL PROTECTED]
Oracle uses LRU caching algorithm also, not LFU.
Alex
On 10/21/05, Martin Nickel [EMAIL PROTECTED] wrote:
I was reading a comment in another posting and it started me thinking about this. Let's say I start up an Oracle server. All my queries are a little bit (sometimes a lot bit) slow until it gets its
[snip]
to the second processor in my dual Xeon eServer) has got me to the point that the perpetually high memory usage doesn't affect my
application server.
I'm curious - how does the high memory usage affect your application server?
Alex
disk pages. It
looks to me more like either a Java problem, or a kernel problem...
Alex Turner
NetEconomist
On 10/10/05, Jon Brisbin [EMAIL PROTECTED] wrote:
Tom Lane wrote: Are you sure it's not cached data pages, rather than cached inodes? If so, the above behavior is *good*. People often have
Well - to each his own I guess - we did extensive testing on 1.4, and
it refused to allocate much past 1gig on both Linux x86/x86-64 and
Windows.
Alex
On 10/11/05, Alan Stange [EMAIL PROTECTED] wrote:
Alex Turner wrote: Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64) but I
people without
resorting to SSD.
Alex Turner
NetEconomist
On 10/4/05, Emil Briggs [EMAIL PROTECTED] wrote:
I have an application that has a table that is both read and write intensive. Data from iostat indicates that the write speed of the system is the factor that is limiting performance. The table
that lower stripe sizes impacted performance badly as did overly large stripe sizes.
Alex Turner
NetEconomist
On 16 Sep 2005 04:51:43 -0700, bmmbn [EMAIL PROTECTED] wrote:
Hi Everyone
The machine is IBM x345 with ServeRAID 6i 128mb cache and 6 SCSI 15k disks. 2 disks are in RAID1 and hold the OS, SWAP
100% usage,
they get all kinds of unhappy, we haven't had the same problem with JFS.
Alex Turner
NetEconomist
On 9/20/05, Welty, Richard [EMAIL PROTECTED] wrote:
Alex Turner wrote: I would also recommend looking at the file system. For us JFS worked significantly faster than reiser for large read loads
I have found that while the OS may flush to the controller fast with
fsync=true, the controller does as it pleases (it has BBU, so I'm not
too worried), so you get great performance because your controller is
free to determine the read/write sequence outside of what is being
demanded by an fsync.
Alex Turner
don't co-operate with linux well.
Alex Turner
NetEconomist
On 9/12/05, Brandon Black [EMAIL PROTECTED] wrote:
I'm in the process of developing an application which uses PostgreSQL
for data storage. Our database traffic is very atypical, and as a
result it has been rather challenging to figure out how
. I have two independent controllers on two
independent PCI buses to give max throughput: one with a 6-drive RAID
10 and the other with two 4-drive RAID 10s.
Alex Turner
NetEconomist
On 8/19/05, Mark Cotner [EMAIL PROTECTED] wrote:
Hi all,
I bet you get tired of the same ole questions over and
over
Are you calculating aggregates, and if so, how are you doing it? (I ask
the question from experience of a similar application where I found
that my aggregating PL/pgSQL triggers were bogging the system down, and
changed them to scheduled jobs instead.)
Alex Turner
NetEconomist
On 8/16/05, Ulrich
Also seems pretty silly to put it on a regular SATA connection, when
all that can manage is 150MB/sec. If you made it connect directly
to 66MHz/64-bit PCI then it could actually _use_ the speed of the RAM, not
to mention PCI-X.
Alex Turner
NetEconomist
On 7/26/05, John A Meinel [EMAIL PROTECTED
ol 2.5 Reg ECC.
Alex Turner
NetEconomist
On 7/26/05, PFC [EMAIL PROTECTED] wrote:
I'm a little leery as it is definitely a version 1.0 product (it is
still using an FPGA as the controller, so they were obviously pushing to
get the card into production).
Not necessarily. FPGA's
and will read from independent halves, but gives
worse redundancy.
Alex Turner
NetEconomist
On 6/18/05, Jacques Caron [EMAIL PROTECTED] wrote:
Hi,
At 18:00 18/06/2005, PFC wrote: I don't know what I'm talking about, but wouldn't mirroring be faster than striping for random reads like you often get
to default after that problem (that partition is not on a
DB server though).
Alex Turner
netEconomist
On 6/3/05, Martin Fandel [EMAIL PROTECTED] wrote:
Hi @ all, i have only a little question. Which filesystem is preferred for postgresql? I plan to use xfs (before i used reiserfs). The reason is
Until you start worrying about MVCC - we have had problems with the
MSSQL implementation of read consistency because of this 'feature'.
Alex Turner
NetEconomist
On 5/24/05, Bruno Wolff III [EMAIL PROTECTED] wrote:
On Tue, May 24, 2005 at 08:36:36 -0700, mark durrant [EMAIL PROTECTED] wrote
website I've managed
will ever see.
Why solve the complicated clustered sessions problem, when you don't
really need to?
Alex Turner
netEconomist
On 5/11/05, PFC [EMAIL PROTECTED] wrote:
However, memcached (and for us, pg_memcached) is an excellent way to
improve
horizontal scalability
size RAID
arrays on 10k discs with fsync on for small transactions. I'm sure
that could easily be bettered with a few more dollars.
Maybe my numbers are off, but somehow it doesn't seem like that many
people need a highly complex session solution to me.
Alex Turner
netEconomist
On 5/12/05, Alex
Is:
REINDEX DATABASE blah
supposed to rebuild all indices in the database, or must you specify
each table individually? (I'm asking because I just tried it and it
only did system tables.)
Alex Turner
netEconomist
On 4/21/05, Josh Berkus josh@agliodbs.com wrote:
Bill,
What about if an out
forces the system to physically re-allocate all
that data space, and now you have just 2499 entries, that use 625
blocks.
I'm not sure that 'blocks' is the correct term in postgres, it's
segments in Oracle, but the concept remains the same.
Alex Turner
netEconomist
On 4/21/05, Bill Chandler [EMAIL
I wonder if that's something to think about adding to PostgreSQL? A
setting for multiblock read count like Oracle (although having said
that, I believe that Oracle natively caches pages much more
aggressively than PostgreSQL, which allows the OS to do the file
caching).
Alex Turner
netEconomist
in an array, which would
yield a better performance to cost ratio. Therefore I would suggest it
is something that should be investigated. After all, why implement
TCQ on each drive, if it can be handled more efficiently at the other
end by the controller for less money?!
Alex Turner
netEconomist
On 4/19
having to factor in the cost of a bigger chassis
to hold more drives, which can be big bucks.
Alex Turner
netEconomist
On 18 Apr 2005 10:59:05 -0400, Greg Stark [EMAIL PROTECTED] wrote:
William Yu [EMAIL PROTECTED] writes:
Using the above prices for a fixed budget for RAID-10, you could get
multiple tablespaces on
separate partitions.
My assertion therefore is that simply adding more drives to an already
competent* configuration is about as likely to increase your database
effectiveness as swiss cheese is to make your car run faster.
Alex Turner
netEconomist
*Assertion here
).
Alex Turner
netEconomist
On 4/18/05, John A Meinel [EMAIL PROTECTED] wrote:
Alex Turner wrote:
[snip]
Adding drives will not let you get lower response times than the average
seek
time on your drives*. But it will let you reach that response time more
often.
[snip]
I
I think the add more disks thing is really from the point of view that
one disk isn't enough ever. You should really have at least four
drives configured into two RAID 1s. Most DBAs will know this, but
most average Joes won't.
Alex Turner
netEconomist
On 4/18/05, Steve Poe [EMAIL PROTECTED
only need to read
from one disk.
So my assertion that adding more drives doesn't help is pretty
wrong... particularly with OLTP, because it's always dealing with
blocks that are smaller than the stripe size.
Alex Turner
netEconomist
On 4/18/05, Jacques Caron [EMAIL PROTECTED] wrote:
Hi,
At 18:56
of a lot since subscribing.
Alex Turner
netEconomist
On 4/18/05, Alex Turner [EMAIL PROTECTED] wrote:
Ok - well - I am partially wrong...
If your stripe size is 64KB, and you are reading 256KB worth of data,
it will be spread across four drives, so you will need to read from
four devices to get
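That arithmetic can be sketched as follows (the function name is mine; it also assumes the read starts on a chunk boundary, i.e. it ignores alignment):

```python
import math

def drives_touched(read_kb, stripe_kb, n_drives):
    """How many spindles one contiguous read lands on for a given chunk size."""
    return min(math.ceil(read_kb / stripe_kb), n_drives)

print(drives_touched(256, 64, 8))   # 4: the 256KB read above spans four chunks
print(drives_touched(32, 64, 8))    # 1: a small OLTP-sized read stays on one drive
```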
Mistype.. I meant 0+1 in the second instance :(
On 4/18/05, Joshua D. Drake [EMAIL PROTECTED] wrote:
Alex Turner wrote:
Not true - the recommended RAID level is RAID 10, not RAID 0+1 (at
least I would never recommend 1+0 for anything).
Uhmm I was under the impression that 1+0 was RAID 10
On 4/18/05, Jacques Caron [EMAIL PROTECTED] wrote:
Hi,
At 20:21 18/04/2005, Alex Turner wrote:
So I wonder if one could take this stripe size thing further and say
that a larger stripe size is more likely to result in requests getting
served in parallel across disks, which would lead
Does it really matter at which end of the cable the queueing is done
(Assuming both ends know as much about drive geometry etc..)?
Alex Turner
netEconomist
On 4/18/05, Bruce Momjian pgman@candle.pha.pa.us wrote:
Kevin Brown wrote:
Greg Stark wrote:
I think you're being misled
.
The 3ware trounces the Areca in all IO/sec tests.
Alex Turner
netEconomist
On 4/15/05, Marinos Yannikos [EMAIL PROTECTED] wrote:
Joshua D. Drake wrote:
Well I have never even heard of it. 3ware is the de facto authority on
reasonable SATA RAID.
no! 3ware was rather early in this business
. Our biggest hit is reads, so
we can buy 3xSATA machines and load balance. It's all about the
application, and buying what is appropriate. I don't buy a Corvette
if all I need is a Malibu.
Alex Turner
netEconomist
On 4/15/05, Dave Held [EMAIL PROTECTED] wrote:
-Original Message-
From
I stand corrected!
Maybe I should re-evaluate our own config!
Alex T
(The dell PERC controllers do pretty much suck on linux)
On 4/15/05, Vivek Khera [EMAIL PROTECTED] wrote:
On Apr 15, 2005, at 11:01 AM, Alex Turner wrote:
You can't fit a 15k RPM SCSI solution into $7K ;) Some of us
I have read a large chunk of this, and I would highly recommend it to
anyone who has been participating in the drive discussions. It is
most informative!!
Alex Turner
netEconomist
On 4/14/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Greg,
I posted this link under a different thread
=40devID_2=259devID_3=267devID_4=261devID_5=248devCnt=6
It does illustrate some of the weaknesses of SATA drives, but all in
all the Raptor drives put on a good show.
Alex Turner
netEconomist
On 4/14/05, Alex Turner [EMAIL PROTECTED] wrote:
I have read a large chunk of this, and I would highly
as NCQ on the drive).
Alex Turner
netEconomist
On 4/14/05, Dave Held [EMAIL PROTECTED] wrote:
-Original Message-
From: Alex Turner [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 14, 2005 12:14 PM
To: [EMAIL PROTECTED]
Cc: Greg Stark; pgsql-performance@postgresql.org;
[EMAIL
linear
increment.
Alex Turner
netEconomist
On 4/14/05, Kevin Brown [EMAIL PROTECTED] wrote:
Tom Lane wrote:
Kevin Brown [EMAIL PROTECTED] writes:
I really don't see how this is any different between a system that has
tagged queueing to the disks and one that doesn't. The only
, thereby generating a
cost increase (at least that's what the manufacturers tell us). I know
if you ever held a 15k drive in your hand, you can notice a
considerable weight difference between it and a 7200RPM IDE drive.
Alex Turner
netEconomist
On Apr 7, 2005 11:37 AM, [EMAIL PROTECTED]
[EMAIL PROTECTED
I think everyone was scared off by the 5000 inserts per second number.
I've never seen even Oracle do this on a top end Dell system with
copious SCSI attached storage.
Alex Turner
netEconomist
On Apr 6, 2005 3:17 AM, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Unfortunately.
But we
think the underlying code to do this has to
be not-too-complex.
I'd say we're there.
-Original Message-
From: Alex Turner [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 06, 2005 11:38 AM
To: [EMAIL PROTECTED]
Cc: pgsql-performance@postgresql.org; Mohan, Ross
Subject: Re
I think his point was that 9 * 4 != 2400
Alex Turner
netEconomist
On Apr 6, 2005 2:23 PM, Rod Taylor [EMAIL PROTECTED] wrote:
On Wed, 2005-04-06 at 19:42 +0200, Steinar H. Gunderson wrote:
On Wed, Apr 06, 2005 at 01:18:29PM -0400, Rod Taylor wrote:
Yeah, I think that can be done provided
such tests, we'd all be delighted to see the
results so we have another option for building servers.
Alex Turner wrote:
It's hardly the same money, the drives are twice as much.
It's all about the controller baby, with any kind of drive. A bad SCSI
controller will give sucky performance
have to share in SCSI.
A SATA controller typically can do 3Gb/sec (384MB/sec) per drive, but
SCSI can only do 320MB/sec across the entire array.
What am I missing here?
Alex Turner
netEconomist
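For what it's worth, the 384MB/sec figure is the raw link-rate arithmetic (3 × 1024 ÷ 8); SATA II's 8b/10b encoding caps the usable payload nearer 300MB/sec, but the per-link vs shared-bus contrast stands either way:

```python
# Per-link vs shared-bus bandwidth, reproducing the post's arithmetic.
sata_link_mb = 3 * 1024 // 8   # 384 MB/s raw per drive link (before 8b/10b overhead)
scsi_bus_mb = 320              # Ultra320 SCSI: shared by every drive on the bus
drives = 8                     # hypothetical array size

print(sata_link_mb)            # 384 per drive
print(sata_link_mb * drives)   # 3072 aggregate across independent links
print(scsi_bus_mb)             # 320 total, no matter how many drives
```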
On Apr 6, 2005 5:41 PM, Jim C. Nasby [EMAIL PROTECTED] wrote:
Sorry if I'm pointing out the obvious here
Ok - so I found this fairly good online review of various SATA cards
out there, with 3ware not doing too hot on RAID 5, but ok on RAID 10.
http://www.tweakers.net/reviews/557/
Very interesting stuff.
Alex Turner
netEconomist
On Apr 6, 2005 7:32 PM, Alex Turner [EMAIL PROTECTED] wrote:
I guess
Ok - I take it back - I'm reading through this now, and realising that
the reviews are pretty clueless in several places...
On Apr 6, 2005 8:12 PM, Alex Turner [EMAIL PROTECTED] wrote:
Ok - so I found this fairly good online review of various SATA cards
out there, with 3ware not doing too hot
on the controller and to the drive. *shrug*
This of course is all supposed to go away with SATA II, which has NCQ,
Native Command Queueing. Of course the 3ware controllers don't
support SATA II, but a few others do, and I'm sure 3ware will come out
with a controller that does.
Alex Turner
netEconomist
On 06 Apr
would be greatly interested.
Alex Turner
netEconomist
On Mar 29, 2005 8:17 AM, Dave Cramer [EMAIL PROTECTED] wrote:
Yeah, 35MB per sec is slow for a raid controller, the 3ware mirrored is
about 50MB/sec, and striped is about 100
Dave
PFC wrote:
With hardware tuning, I am sure we can do
with SCSI. If anyone has a useful
link on that, it would be greatly appreciated.
More drives will give more throughput/sec, but not necessarily more
transactions/sec. For that you will need more RAM on the controller,
and definitely a BBU to keep your data safe.
Alex Turner
netEconomist
On Apr 4, 2005
in some, SATA wins, or draws. I'm
trying to find something more apples to apples. 10k to 10k.
Alex Turner
netEconomist
On Apr 4, 2005 3:23 PM, Vivek Khera [EMAIL PROTECTED] wrote:
On Apr 4, 2005, at 3:12 PM, Alex Turner wrote:
Our system is mostly read during the day, but we do a full
Oh - this is with a separate transaction per command.
fsync is on.
Alex Turner
netEconomist
On Apr 1, 2005 4:17 PM, Alex Turner [EMAIL PROTECTED] wrote:
1250/sec with record size average of 26 bytes
800/sec with record size average of 48 bytes
250/sec with record size average of 618 bytes
on
a fourth. Unsurprisingly this looks a lot like the Oracle recommended
minimum config.
Also a note for interest is that this is _software_ raid...
Alex Turner
netEconomist
On 13 Mar 2005 23:36:13 -0500, Greg Stark [EMAIL PROTECTED] wrote:
Arshavir Grigorian [EMAIL PROTECTED] writes:
Hi,
I
He doesn't have a RAID controller, it's software RAID...
Alex Turner
netEconomist
On Mon, 14 Mar 2005 16:18:00 -0500, Merlin Moncure
[EMAIL PROTECTED] wrote:
Alex Turner wrote:
35 trans/sec is pretty slow, particularly if they are only one row at
a time. I typically get 200-400/sec on our
to recommend against a 14-drive RAID 5!
This is statistically as likely to fail as a 7-drive RAID 0 (not
counting the spare, but rebuilding a spare is very hard on existing
drives).
Alex Turner
netEconomist
On Fri, 11 Mar 2005 16:13:05 -0500, Arshavir Grigorian [EMAIL PROTECTED]
wrote:
Hi,
I have
Not true - with fsync on I get nearly 500 tx/s, with it off I'm as
high as 1600/sec with dual Opteron and 14xSATA drives and 4GB RAM on a
3ware Escalade. Database has 3 million rows.
As long as queries use indexes, multi-billion row tables shouldn't be too
bad. Full table scans will suck though.
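A rough page-touch comparison backs that up; the row width, page size, and B-tree fan-out below are all assumed figures, not measurements:

```python
import math

rows = 3_000_000_000                     # a "multi billion row" table
heap_pages = rows * 200 // 8192          # assumed 200-byte rows on 8KB pages
probes = math.ceil(math.log(rows, 256))  # assumed index fan-out of 256 keys/page

print(probes)       # 4 page reads per indexed lookup
print(heap_pages)   # ~73 million page reads for a full table scan
```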
Alex