RAID 10, database is on RAID 10.
Data is very spread out because database turnover is very high, so our
performance is about double this with a fresh DB. (The data half-life
is probably measurable in days or weeks.)
Alex Turner
netEconomist
On Apr 1, 2005 1:06 PM, Marc G. Fournier <[EM
Oh - this is with a separate transaction per command.
fsync is on.
Alex Turner
netEconomist
On Apr 1, 2005 4:17 PM, Alex Turner <[EMAIL PROTECTED]> wrote:
> 1250/sec with an average record size of 26 bytes
> 800/sec with an average record size of 48 bytes.
> 250/sec with record size
ly in
linux), I would be greatly interested.
Alex Turner
netEconomist
On Mar 29, 2005 8:17 AM, Dave Cramer <[EMAIL PROTECTED]> wrote:
> Yeah, 35MB per sec is slow for a raid controller, the 3ware mirrored is
> about 50MB/sec, and striped is about 100
>
> Dave
>
> PFC wrote:
Yup, Battery backed, cache enabled. 6 drive RAID 10, and 4 drive RAID
10, and 2xRAID 1.
It's a 3ware 9500S-8MI - not bad for $450 plus BBU.
Alex Turner
netEconomist
On Apr 1, 2005 6:03 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Alex Turner <[EMAIL PROTECTED]> writes:
> &
otocol compared with SCSI. If anyone has a useful
link on that, it would be greatly appreciated.
More drives will give more throughput/sec, but not necessarily more
transactions/sec. For that you will need more RAM on the controller,
and definitely a BBU to keep your data safe.
Alex Turner
netEcono
hough even in some, SATA wins, or draws. I'm
trying to find something more apples to apples. 10k to 10k.
Alex Turner
netEconomist
On Apr 4, 2005 3:23 PM, Vivek Khera <[EMAIL PROTECTED]> wrote:
>
> On Apr 4, 2005, at 3:12 PM, Alex Turner wrote:
>
> > Our system is mos
imple benchmark test database to run, I would be
happy to run it on our hardware here.
Alex Turner
On Apr 6, 2005 3:30 AM, William Yu <[EMAIL PROTECTED]> wrote:
> Alex Turner wrote:
> > I'm no drive expert, but it seems to me that our write performance is
> > excellent.
I think everyone was scared off by the 5000 inserts per second number.
I've never seen even Oracle do this on a top end Dell system with
copious SCSI attached storage.
Alex Turner
netEconomist
On Apr 6, 2005 3:17 AM, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
> Unfortun
ing up the buffers.
>
> Lets try again with more data this time.
>
> 31Million tuples were loaded in approx 279 seconds, or approx 112k rows per
> second.
>
> > I'd love to see PG get into this range..i am a big fan of PG (just a
> > rank newbie) but I gott
I think his point was that 9 * 4 != 2400
Alex Turner
netEconomist
On Apr 6, 2005 2:23 PM, Rod Taylor <[EMAIL PROTECTED]> wrote:
> On Wed, 2005-04-06 at 19:42 +0200, Steinar H. Gunderson wrote:
> > On Wed, Apr 06, 2005 at 01:18:29PM -0400, Rod Taylor wrote:
> > > Yeah,
ur needs -- or spend $$$ on SCSI drives
> and be sure.
>
> Now if you want to run such tests, we'd all be delighted with to see the
> results so we have another option for building servers.
>
>
> Alex Turner wrote:
> > It's hardly the same money,
hannel, but you have to share in SCSI.
A SATA controller typically can do 3Gb/sec (384MB/sec) per drive, but
SCSI can only do 320MB/sec across the entire array.
What am I missing here?
Alex Turner
netEconomist
On Apr 6, 2005 5:41 PM, Jim C. Nasby <[EMAIL PROTECTED]> wrote:
> Sorry if I
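The per-drive vs shared-bus comparison being made here can be sketched as follows (the 3Gb/s-per-link and 320MB/s-shared-bus figures are the post's assumptions, treated as theoretical ceilings; real sustained drive throughput is far lower):

```python
# Compare aggregate interconnect bandwidth: point-to-point SATA links vs a
# shared SCSI bus. Figures mirror the post's assumptions (3Gb/s per SATA
# link, i.e. 384MB/s by the post's arithmetic; 320MB/s shared across an
# entire U320 SCSI chain). These are link ceilings, not drive throughput.
SATA_LINK_MB_S = 3_072 / 8      # 384 MB/s per drive, dedicated channel
SCSI_BUS_MB_S = 320             # shared by every drive on the chain

def aggregate_bandwidth(n_drives: int) -> tuple[float, int]:
    """Theoretical ceiling for n drives on each interconnect."""
    sata_total = n_drives * SATA_LINK_MB_S  # each drive has its own link
    scsi_total = SCSI_BUS_MB_S              # all drives share one bus
    return sata_total, scsi_total

sata, scsi = aggregate_bandwidth(8)
print(f"8 drives: SATA {sata:.0f} MB/s aggregate vs SCSI {scsi} MB/s shared")
```

The point is only that the SATA ceiling scales with drive count while the shared bus does not; individual spindles saturate long before either limit.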
Ok - so I found this fairly good online review of various SATA cards
out there, with 3ware not doing too hot on RAID 5, but ok on RAID 10.
http://www.tweakers.net/reviews/557/
Very interesting stuff.
Alex Turner
netEconomist
On Apr 6, 2005 7:32 PM, Alex Turner <[EMAIL PROTECTED]> wrot
Ok - I take it back - I'm reading through this now, and realising that
the reviews are pretty clueless in several places...
On Apr 6, 2005 8:12 PM, Alex Turner <[EMAIL PROTECTED]> wrote:
> Ok - so I found this fairly good online review of various SATA cards
> out there, with 3w
ing on the controller and to the drive. *shrug*
This of course is all supposed to go away with SATA II, which has NCQ,
Native Command Queueing. Of course the 3ware controllers don't
support SATA II, but a few others do, and I'm sure 3ware will come out
with a controller that does.
Alex Turner
net
and technology, thereby generating a
cost increase (at least that's what the manufacturers tell us). I know
if you've ever held a 15k drive in your hand, you'll notice a
considerable weight difference between it and a 7200RPM IDE drive.
Alex Turner
netEconomist
On Apr 7, 2005 11:37 AM, [EMAIL PRO
't
belong there).
Alex Turner
netEconomist
On Apr 12, 2005 10:10 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> hubert lubaczewski <[EMAIL PROTECTED]> writes:
> > and it made me wonder - is there a way to tell how much time of backend
> > was spent on triggers, index update
I have read a large chunk of this, and I would highly recommend it to
anyone who has been participating in the drive discussions. It is
most informative!!
Alex Turner
netEconomist
On 4/14/05, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Greg,
>
> I posted this link under a d
es=1&devID_0=232&devID_1=40&devID_2=259&devID_3=267&devID_4=261&devID_5=248&devCnt=6
It does illustrate some of the weaknesses of SATA drives, but all in
all the Raptor drives put on a good show.
Alex Turner
netEconomist
On 4/14/05, Alex Turner <[EMAIL PROTECTED]> wrote
Just to clarify, these are tests from http://www.storagereview.com, not
my own. I guess they couldn't get numbers for those parts. I think
everyone understands that a 0ms seek time is impossible, and indicates a
missing data point.
Thanks,
Alex Turner
netEconomist
On 4/14/05, Dave Held &l
od as NCQ on the drive).
Alex Turner
netEconomist
On 4/14/05, Dave Held <[EMAIL PROTECTED]> wrote:
> > -----Original Message-----
> > From: Alex Turner [mailto:[EMAIL PROTECTED]
> > Sent: Thursday, April 14, 2005 12:14 PM
> > To: [EMAIL PROTECTED]
> > Cc: Gre
y how
expensive it is to retrieve a given block knowing its linear
increment.
Alex Turner
netEconomist
On 4/14/05, Kevin Brown <[EMAIL PROTECTED]> wrote:
> Tom Lane wrote:
> > Kevin Brown <[EMAIL PROTECTED]> writes:
> > > I really don't see how this is a
ore to set up a good review to be honest.
The 3ware trounces the Areca in all the IO/sec tests.
Alex Turner
netEconomist
On 4/15/05, Marinos Yannikos <[EMAIL PROTECTED]> wrote:
> Joshua D. Drake wrote:
> > Well I have never even heard of it. 3ware is the defacto authority of
> >
15k RPM drive config. Our biggest hit is reads, so
we can buy 3xSATA machines and load balance. It's all about the
application, and buying what is appropriate. I don't buy a Corvette
if all I need is a Malibu.
Alex Turner
netEconomist
On 4/15/05, Dave Held <[EMAIL PROTECTED]&
I stand corrected!
Maybe I should re-evaluate our own config!
Alex T
(The Dell PERC controllers do pretty much suck on linux)
On 4/15/05, Vivek Khera <[EMAIL PROTECTED]> wrote:
>
> On Apr 15, 2005, at 11:01 AM, Alex Turner wrote:
>
> > You can't fit a 15k RPM SCSI
you start having to factor in the cost of a bigger chassis
to hold more drives, which can be big bucks.
Alex Turner
netEconomist
On 18 Apr 2005 10:59:05 -0400, Greg Stark <[EMAIL PROTECTED]> wrote:
>
> William Yu <[EMAIL PROTECTED]> writes:
>
> > Using the above price
database across multiple tablespaces on
separate partitions.
My assertion therefore is that simply adding more drives to an already
competent* configuration is about as likely to increase your database
effectiveness as Swiss cheese is to make your car run faster.
Alex Turner
netEconomist
*Asser
).
Alex Turner
netEconomist
On 4/18/05, John A Meinel <[EMAIL PROTECTED]> wrote:
> Alex Turner wrote:
>
> >[snip]
> >
> >
> >>Adding drives will not let you get lower response times than the average
> >>seek
> >>time on your drives*. But
I think the add more disks thing is really from the point of view that
one disk isn't enough ever. You should really have at least four
drives configured into two RAID 1s. Most DBAs will know this, but
most average Joes won't.
Alex Turner
netEconomist
On 4/18/05, Steve Poe <[EM
would only need to read
from one disk.
So my assertion that adding more drives doesn't help is pretty
wrong... particularly with OLTP, because it's always dealing with
blocks that are smaller than the stripe size.
Alex Turner
netEconomist
On 4/18/05, Jacques Caron <[EMAIL PROTECTED]>
a lot since subscribing.
Alex Turner
netEconomist
On 4/18/05, Alex Turner <[EMAIL PROTECTED]> wrote:
> Ok - well - I am partially wrong...
>
> If your stripe size is 64Kb, and you are reading 256k worth of data,
> it will be spread across four drives, so you will need to read
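The stripe arithmetic from the quoted correction can be sketched as follows (assuming, as a simplification, that reads start on a stripe boundary):

```python
import math

def drives_touched(read_kb: int, stripe_kb: int, n_drives: int) -> int:
    """How many spindles one contiguous read hits, assuming the read
    starts on a stripe boundary (a simplifying assumption)."""
    stripes = math.ceil(read_kb / stripe_kb)
    return min(stripes, n_drives)

print(drives_touched(256, 64, 8))  # the post's case -> 4 drives
print(drives_touched(8, 64, 8))    # a small OLTP read -> 1 drive
```

This is also why the OLTP point above holds: a read smaller than the stripe size only ever touches one spindle, leaving the others free to serve concurrent requests.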
Mistype.. I meant 0+1 in the second instance :(
On 4/18/05, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> Alex Turner wrote:
> > Not true - the recommended RAID level is RAID 10, not RAID 0+1 (at
> > least I would never recommend 1+0 for anything).
>
> Uhmm I was un
On 4/18/05, Jacques Caron <[EMAIL PROTECTED]> wrote:
> Hi,
>
> At 20:21 18/04/2005, Alex Turner wrote:
> >So I wonder if one could take this stripe size thing further and say
> >that a larger stripe size is more likely to result in requests getting
> >served pa
Does it really matter at which end of the cable the queueing is done
(assuming both ends know as much about drive geometry, etc.)?
Alex Turner
netEconomist
On 4/18/05, Bruce Momjian wrote:
> Kevin Brown wrote:
> > Greg Stark wrote:
> >
> >
> > > I think you
I wonder if that's something to think about adding to PostgreSQL? A
setting for multiblock read count like Oracle (although having said
that, I believe that Oracle natively caches pages much more
aggressively than PostgreSQL, which allows the OS to do the file
caching).
Alex Turner
netEconomist
array, which would
yield better performance to cost ratio. Therefore I would suggest it
is something that should be investigated. After all, why implement
TCQ on each drive, if it can be handled more efficiently at the other
end by the controller for less money?!
Alex Turner
netEconomist
On 4/19
Is:
REINDEX DATABASE blah
supposed to rebuild all indices in the database, or must you specify
each table individually? (I'm asking because I just tried it and it
only did system tables.)
Alex Turner
netEconomist
On 4/21/05, Josh Berkus wrote:
> Bill,
>
> > What about if an ou
the index forces the system to physically re-allocate all
that data space, and now you have just 2499 entries, that use 625
blocks.
I'm not sure that 'blocks' is the correct term in postgres, it's
segments in Oracle, but the concept remains the same.
Alex Turner
netEconomist
On
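The block arithmetic in that example can be sketched as follows (the 2499-entry / 625-block figures, and the ~4 entries per block they imply, are just the post's illustration, not real postgres internals):

```python
import math

def blocks_needed(live_entries: int, entries_per_block: int) -> int:
    """Blocks an index occupies after a rebuild packs entries densely."""
    return math.ceil(live_entries / entries_per_block)

print(blocks_needed(2499, 4))  # -> 625, matching the post's example
```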
website I've managed
will ever see.
Why solve the complicated clustered sessions problem, when you don't
really need to?
Alex Turner
netEconomist
On 5/11/05, PFC <[EMAIL PROTECTED]> wrote:
>
>
> > However, memcached (and for us, pg_memcached) is an excellent way t
a couple of mid size RAID
arrays on 10k discs with fsync on for small transactions. I'm sure
that could easily be bettered with a few more dollars.
Maybe my numbers are off, but somehow it doesn't seem like that many
people need a highly complex session solution to me.
Alex Turner
netEc
Until you start worrying about MVCC - we have had problems with the
MSSQL implementation of read consistency because of this 'feature'.
Alex Turner
NetEconomist
On 5/24/05, Bruno Wolff III <[EMAIL PROTECTED]> wrote:
On Tue, May 24, 2005 at 08:36:36 -0700, mark durrant <[EMAI
about reiser, I went
straight back to default after that problem (that partition is not on a
DB server though).
Alex Turner
netEconomist
On 6/3/05, Martin Fandel <[EMAIL PROTECTED]> wrote:
Hi @ all, I have only a little question. Which filesystem is preferred for postgresql? I'm plan to
will read from independent halves, but gives
worse redundancy.
Alex Turner
NetEconomist
On 6/18/05, Jacques Caron <[EMAIL PROTECTED]> wrote:
Hi,
At 18:00 18/06/2005, PFC wrote:
> I don't know what I'm talking about, but wouldn't mirroring be faster
> than stri
Also seems pretty silly to put it on a regular SATA connection, when
all that can manage is 150MB/sec. If you made it connect directly
to 66MHz/64-bit PCI then it could actually _use_ the speed of the RAM, not
to mention PCI-X.
Alex Turner
NetEconomist
On 7/26/05, John A Meinel <[EMAIL PROTEC
good ol' 2.5 Reg ECC.
Alex Turner
NetEconomist
On 7/26/05, PFC <[EMAIL PROTECTED]> wrote:
>
> > I'm a little leary as it is definitely a version 1.0 product (it is
> > still using an FPGA as the controller, so they were obviously pushing to
> > get the ca
Are you calculating aggregates, and if so, how are you doing it? (I ask
the question from experience of a similar application where I found
that my aggregating PL/pgSQL triggers were bogging the system down, and
changed them to scheduled jobs instead.)
Alex Turner
NetEconomist
On 8/16/05, Ulrich
ocks to
rebuild just one block where n is the number of drives in the array,
whereas a mirror only requires reading from a single spindle of the
RAID.
I would suggest running some benchmarks at RAID 5 and RAID 10 to see
what the _real_ performance actually is; that's the only way to really
tel
Don't forget that often controllers don't obey fsyncs like a plain
drive does. That's the point of having a BBU ;)
Alex Turner
NetEconomist
On 8/16/05, John A Meinel <[EMAIL PROTECTED]> wrote:
> Anjan Dave wrote:
> > Yes, that's true, though, I am a bit confu
, U320 is only 320MB/channel...
Alex Turner
NetEconomist
On 8/16/05, Anjan Dave <[EMAIL PROTECTED]> wrote:
> Thanks, everyone. I got some excellent replies, including some long
> explanations. Appreciate the time you guys took out for the responses.
>
> The gist of it i take, i
just around $7k. I have two independent controllers on two
independent PCI buses to give max throughput: one with a 6 drive RAID
10 and the other with two 4 drive RAID 10s.
Alex Turner
NetEconomist
On 8/19/05, Mark Cotner <[EMAIL PROTECTED]> wrote:
> Hi all,
> I bet you get tired of the
ck reads, which is sequential reads.
Alex Turner
NetEconomist
P.S. Sorry if I'm a bit punchy, I've been up since yesterday with
server upgrade nightmares that continue ;)
On 8/19/05, Ron <[EMAIL PROTECTED]> wrote:
> Alex mentions a nice setup, but I'm pretty sure I know how to
don't co-operate with linux well.
Alex Turner
NetEconomist
On 9/12/05, Brandon Black <[EMAIL PROTECTED]> wrote:
I'm in the process of developing an application which uses PostgreSQL
for data storage. Our database traffic is very atypical, and as a
result it has been rather chall
that lower stripe sizes impacted performance badly as did overly large stripe sizes.
Alex Turner
NetEconomist
On 16 Sep 2005 04:51:43 -0700, bmmbn <[EMAIL PROTECTED]> wrote:
Hi Everyone
The machine is an IBM x345 with a ServeRAID 6i 128mb cache and 6 SCSI 15k disks. 2 disks are in RAID1 and hold the OS
le systems hit 100% usage,
they get all kinds of unhappy, we haven't had the same problem with JFS.
Alex Turner
NetEconomist
On 9/20/05, Welty, Richard <[EMAIL PROTECTED]> wrote:
Alex Turner wrote:
> I would also recommend looking at file system. For us JFS worked significantly
> f
I have found that while the OS may flush to the controller fast with
fsync=true, the controller does as it pleases (it has a BBU, so I'm not
too worried), so you get great performance because your controller is
determining the read/write sequence outside of what is being demanded by an
fsync.
Alex T
most people without
resorting to SSD.
Alex Turner
NetEconomist
On 10/4/05, Emil Briggs <[EMAIL PROTECTED]> wrote:
I have an application that has a table that is both read and write intensive. Data from iostat indicates that the write speed of the system is the factor that is limiting performanc
doing an iostat and see how many IOs
and how much throughput is happening. That will rapidly help determine
if you are bound by IOs or by MB/sec.
Worst case I'm wrong, but IMHO it's worth a try.
Alex Turner
NetEconomist
On 10/4/05, Emil Briggs <[EMAIL PROTECTED]> wrote:
> Talk
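The iostat triage described above can be sketched as a rough classifier: given IOs/sec and MB/sec from `iostat -x`, guess whether a disk is seek-bound or bandwidth-bound. The per-spindle ceilings here are illustrative assumptions, not measured numbers or tuning advice:

```python
# Rough classifier for the iostat reading described above. The default
# ceilings (~120 random IOs/sec for a 7200rpm-class spindle, ~50 MB/s
# sequential, as was typical of that era) are assumptions for illustration.
def bottleneck(ios_per_sec: float, mb_per_sec: float,
               max_ios: float = 120.0, max_mb: float = 50.0) -> str:
    io_load = ios_per_sec / max_ios   # fraction of the seek budget used
    bw_load = mb_per_sec / max_mb     # fraction of the bandwidth budget used
    return "seek-bound" if io_load >= bw_load else "bandwidth-bound"

print(bottleneck(110, 5))   # many small random IOs -> seek-bound
print(bottleneck(20, 45))   # few large reads -> bandwidth-bound
```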
with cached disk pages. It
looks to me more like either a Java problem, or a kernel problem...
Alex Turner
NetEconomist
On 10/10/05, Jon Brisbin <[EMAIL PROTECTED]> wrote:
Tom Lane wrote:
> Are you sure it's not cached data pages, rather than cached inodes?
> If so, the above beh
Perhaps this is true for 1.5 on x86-32 (I've only used it on x86-64)
but I was more thinking 1.4 which many folks are still using.
Alex
On 10/11/05, Alan Stange <[EMAIL PROTECTED]> wrote:
Alex Turner wrote:
> Realise also that unless you are running the 1.5 x86-64 build, java
Well - to each his own I guess - we did extensive testing on 1.4, and
it refused to allocate much past 1gig on both Linux x86/x86-64 and
Windows.
Alex
On 10/11/05, Alan Stange <[EMAIL PROTECTED]> wrote:
Alex Turner wrote:
> Perhaps this is true for 1.5 on x86-32 (I've only used it on
Oracle uses LRU caching algorithm also, not LFU.
Alex
On 10/21/05, Martin Nickel <[EMAIL PROTECTED]> wrote:
I was reading a comment in another posting and it started me thinking about this. Let's say I startup an Oracle server. All my queries are a little bit (sometimes a lot bit) slow until it get
[snip] to the second processor in my dual Xeon eServer) has got me to the point that the perpetually high memory usage doesn't affect my
application server.
I'm curious - how does the high memory usage affect your application server?
Alex
Just to play devil's advocate here for a second: if we have an
algorithm that is substantially better than just plain old LRU, which is
what I believe the kernel is going to use to cache pages (I'm no kernel
hacker), then why don't we apply that and have a significantly larger
page cache a la Or
This is possible with Oracle utilizing the keep pool
alter table t_name storage ( buffer_pool keep);
If Postgres were to implement its own caching system, this seems like
it would be easy to implement (beyond the initial caching effort).
Alex
On 10/24/05, Craig A. James <[EMAIL PROTECTED]>
b.order_val>=25 and
b.order_val<50 and a.primary_key_id=b.primary_key_id
If the data updates a lot then this won't work as well, though, as the
index table will require frequent updates to a potentially large number
of records (although a small number of pages, so it still won't be
horribl
Reasons not to buy from Sun or Compaq - why get Opteron 252 when a 240
will do just fine for a fraction of the cost, which of course they
don't stock, white box all the way baby ;). My box from Sun or Compaq
or IBM is 2x the whitebox cost because you can't buy apples to apples.
We have a bitchin'
ID 10 and 2xRAID 1. Make sure you get the
firmware update if you have these controllers though.
Alex Turner
NetEconomist
On 11/6/05, Joost Kraaijeveld <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I am experiencing very long update queries and I want to know if it
> reasonable to expect
We use this memory in all our servers (well - the 512 sticks). 0
problems to date:
http://www.newegg.com/Product/Product.asp?Item=N82E16820145513
$163 for 1GB.
This stuff is probably better than the Samsung RAM dell is selling you
for 3 times the price.
Alex
On 11/10/05, Ron Peacetree <[EMAIL
On 11/15/05, Luke Lonergan <[EMAIL PROTECTED]> wrote:
> Adam,
>
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of
> > Claus Guttesen
> > Sent: Tuesday, November 15, 2005 12:29 AM
> > To: Adam Weisberg
> > Cc: pgsql-performance@postgresql.org
> > S
Not at random access in RAID 10 they aren't, and anyone with their
head screwed on right is using RAID 10. The 9500S will still beat the
Areca cards at RAID 10 database access patterns.
Alex.
On 11/15/05, Dave Cramer <[EMAIL PROTECTED]> wrote:
> Luke,
>
> Have you tried the areca cards, they are s
llowing
> for up to 8GB of cache using 2 4GB DIMMs as of this writing).
>
> It should also be noted that 64 drive chassis' are going to become
> possible once 2.5" 10Krpm SATA II and FC HDs become the standard next
> year (48's are the TOTL now). We need controller t
On 11/16/05, William Yu <[EMAIL PROTECTED]> wrote:
> Alex Turner wrote:
> > Spend a fortune on dual core CPUs and then buy crappy disks... I bet
> > for most applications this system will be IO bound, and you will see a
> > nice lot of drive failures in the f
Just pick up a SCSI drive and a consumer ATA drive.
Feel their weight.
You don't have to look inside to tell the difference.
Alex
On 11/16/05, David Boreham <[EMAIL PROTECTED]> wrote:
>
>
> I suggest you read this on the difference between enterprise/SCSI and
> desktop/IDE drives:
>
> http://w
On 11/16/05, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> >
> > The only questions would be:
> > (1) Do you need a SMP server at all? I'd claim yes -- you always need
> > 2+ cores whether it's DC or 2P to avoid IO interrupts blocking other
> > processes from running.
>
> I would back this up. Even
Ok - so I ran the same test on my system and get a total speed of
113MB/sec. Why is this? Why is the system so limited to around just
110MB/sec? I tuned read ahead up a bit, and my results improve a
bit..
Alex
On 11/18/05, Luke Lonergan <[EMAIL PROTECTED]> wrote:
> Dave,
>
> On 11/18/05 5:0
I would argue that you almost certainly won't by doing that, as you will
create a new place even further away for the disk head to seek to
instead of just another file on the same FS that is probably closer to
the current head position.
Alex
On 12/6/05, Michael Stone <[EMAIL PROTECTED]> wrote:
> On Tu
Personally I would split into two RAID 1s. One for pg_xlog, one for
the rest. This gives probably the best performance/reliability
combination.
Alex.
On 12/10/05, Carlos Benkendorf <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I would like to know which is the best configuration to use 4 scsi drives
>
It's irrelevant which controller; you still have to actually write the
parity blocks, which slows down your write speed because you have to
write n+n/2 blocks instead of just n blocks, making the system write
50% more data.
RAID 5 must write 50% more data to disk, therefore it will always be slower.
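The post's write-amplification model can be expressed as a one-liner. Note this mirrors the post's n+n/2 arithmetic as stated; real RAID 5 overhead depends on stripe width and on partial-stripe read-modify-write cycles:

```python
def blocks_written_raid5(n_data_blocks: int) -> int:
    """Total blocks hitting disk under the post's n + n/2 model:
    every n data blocks drag along n/2 parity blocks."""
    return n_data_blocks + n_data_blocks // 2

print(blocks_written_raid5(100))  # -> 150, i.e. 50% more than n alone
```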
Yes - they work excellently. I have several medium and large servers
running 3ware 9500S series cards with great success. We have
rebuilt many failed RAID 10s over the years with no problems.
Alex
On 12/26/05, Benjamin Arai <[EMAIL PROTECTED]> wrote:
> Have you have any experience rebuildin
t; wrote:
> On Mon, 26 Dec 2005, Alex Turner wrote:
>
> > It's irrelavent what controller, you still have to actualy write the
> > parity blocks, which slows down your write speed because you have to
> > write n+n/2 blocks. instead of just n blocks making the system writ
Does anyone have any performance experience with the Dell Perc 5i
controllers in RAID 10/RAID 5?
Thanks,
Alex
People recommend LSI MegaRAID controllers on here regularly, but I have
found that they do not work that well. I have bonnie++ numbers that show
the controller is not performing anywhere near the disk's saturation level
in a simple RAID 1 on RedHat Linux EL4 on two separate machines provided by
t
eca, 3Ware/AMCC, LSI).
Thanks,
Alex
On 12/4/06, Scott Marlowe <[EMAIL PROTECTED]> wrote:
On Mon, 2006-12-04 at 01:17, Alex Turner wrote:
> People recommend LSI MegaRAID controllers on here regularly, but I
> have found that they do not work that well. I have bonnie++ numbers
>
http://en.wikipedia.org/wiki/RAID_controller
Alex
On 12/4/06, Michael Stone <[EMAIL PROTECTED]> wrote:
On Mon, Dec 04, 2006 at 12:37:29PM -0500, Alex Turner wrote:
>This discussion I think is important, as I think it would be useful for
this
>list to have a list of RAID cards th
suck worse, that doesn't bring us to a _good_ card).
Alex.
On 12/4/06, Greg Smith <[EMAIL PROTECTED]> wrote:
On Mon, 4 Dec 2006, Alex Turner wrote:
> People recommend LSI MegaRAID controllers on here regularly, but I have
> found that they do not work that well. I have bonnie++
The problem I see with software raid is the issue of a battery backed unit:
If the computer loses power, then the 'cache' which is held in system
memory, goes away, and fubars your RAID.
Alex
On 12/5/06, Michael Stone <[EMAIL PROTECTED]> wrote:
On Tue, Dec 05, 2006 at 01:21:
The test that I did - which was somewhat limited - showed no benefit from
splitting disks into separate partitions for large bulk loads.
The program read from one very large file and wrote the input out to two
other large files.
The total throughput on a single partition was close to the maximum
theo
You should search the archives for Luke Lonergan's posting about how IO in
Postgresql is significantly bottlenecked because it's not async. A 12 disk
array is going to max out Postgresql's max theoretical write capacity to
disk, and therefore BigRDBMS is always going to win in such a config. You
http://www.3ware.com/products/serial_ata2-9000.asp
Check their data sheet - the cards are BBU ready - all you have to do
is order a BBU
which you can from here:
http://www.newegg.com/Product/Product.asp?Item=N82E16815999601
Alex.
On 1/18/06, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
>
> > Obvi
A 3ware card will re-order your writes to put them more in disk order,
which will probably improve performance a bit, but just going from a
software RAID 1 to a hardware RAID 1, I would not imagine that you
will see much of a performance boost. Really to get better
performance you will need to add
He's talking about RAID 1 here, not a gargantuan RAID 6. Onboard RAM
on the controller card is going to make very little difference. All
it will do is allow the card to re-order writes to a point (not all
cards even do this).
Alex.
On 1/18/06, William Yu <[EMAIL PROTECTED]> wrote:
> Benjamin Ar
Anyone who has tried x86-64 linux knows what a royal pain in the ass it is. They didn't do anything sensible, like just make the whole OS 64 bit, no, they had to split it up, and put 64-bit libs in a new directory /lib64. This means that a great many applications don't know to check in there for
Given the fact that most SATA drives have only an 8MB cache, and your RAID controller should have at least 64MB, I would argue that the system with the RAID controller should always be faster. If it's not, you're getting short-changed somewhere, which is typical on linux, because the drivers just
I have a really stupid question about top: what exactly is iowait CPU time?
Alex
With 18 disks dedicated to data, you could make 100/7*9 seeks/second (7ms av seek time, 9 independent units), which is 128 seeks/second writing on average 64kb of data, which is 4.1MB/sec throughput worst case, probably 10x best case so 40MB/sec - you might want to take more disks for your data and
On 7/17/06, Mikael Carneholm <[EMAIL PROTECTED]> wrote:
>> This is something I'd also like to test, as a common
>> best-practice these days is to go for a SAME (stripe all, mirror
>> everything) setup.
>> From a development perspective it's easier to use SAME as the
>> developers won't have to thin
On 7/17/06, Ron Peacetree <[EMAIL PROTECTED]> wrote:
-----Original Message-----
>From: Mikael Carneholm <[EMAIL PROTECTED]>
>Sent: Jul 17, 2006 5:16 PM
>To: Ron Peacetree <[EMAIL PROTECTED]>, pgsql-performance@postgresql.org
>Subject: RE: [PERFORM] RAID stripe size question
>
>15Krpm HDs will have ave
This is a great testament to the fact that very often software RAID will seriously outperform hardware RAID because the OS guys who implemented it took the time to do it right, as compared with some controller manufacturers who seem to think it's okay to provide sub-standard performance.
Based on
Although I for one have yet to see a controller that actually does this (I believe software RAID on linux doesn't either).
Alex.
On 8/7/06, Markus Schaber <[EMAIL PROTECTED]> wrote:
Hi, Charles,
Charles Sprickman wrote:
> I've also got a 1U with a 9500SX-4 and 4 drives. I like how the 3Ware
> card scal
First off - very few third party tools support debian. Debian is a sure fire way to have an unsupported system. Use RedHat or SuSe (flame me all you want, it doesn't make it less true). Second, run the bonnie++ benchmark against your disk array(s) to see what performance you are getting, and make sure