On Wed, Feb 18, 2009 at 12:52 AM, Rajesh Kumar Mallah
mallah.raj...@gmail.com wrote:
The RAID10 volume was benchmarked again,
taking the above points into consideration.
Effect of ReadAhead Settings
(disabled, 256 (default), 512, 1024)

SEQUENTIAL (all tests on sda6)
xfs_ra0      414741,  66144
xfs_ra256    403647, 545026
xfs_ra512    411357, 564769
xfs_ra1024   404392, 431168

looks like 512 was the best
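For anyone reproducing this: the readahead being varied here is the per-device block readahead, adjustable with blockdev. Values are in 512-byte sectors, and the device name below is only an example:

```shell
# Show the current readahead (in 512-byte sectors)
blockdev --getra /dev/sda

# Apply the best-performing value from the table above
blockdev --setra 512 /dev/sda
```

The setting does not persist across reboots, so it needs to be reapplied from an init script or udev rule.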
On Wed, Feb 18, 2009 at 1:44 AM, Rajesh Kumar Mallah
mallah.raj...@gmail.com wrote:
Have you tried hanging a bunch of RAID1 arrays off Linux's md and letting it
do RAID0 for you?
I've heard plenty of stories where this actually sped up performance. One
notable case is YouTube's servers.
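A minimal sketch of that layout, assuming the controller exposes four RAID1 pairs to the OS as plain block devices (device names are illustrative):

```shell
# Stripe four hardware RAID1 pairs into one software RAID0 device
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Build the filesystem on the md device as usual
mkfs.xfs /dev/md0
```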
On Wed, Feb 18, 2009 at 2:27 PM, Grzegorz Jaśkiewicz gryz...@gmail.com wrote:
have you tried hanging bunch of raid1 to linux's md, and let it do
raid0 for you ?
Hmmm, I will have only 3 bunches in that case, as the system has to boot
from the first bunch and the system has only 8 drives. I think reducing
On 2/18/09 12:31 AM, Scott Marlowe scott.marl...@gmail.com wrote:
One thing to note is that Linux's md sets the readahead to 8192 by default
instead of 128. I've noticed that in many situations, a large chunk of the
reported performance boost is due to this alone.
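So before crediting md itself, it is worth checking whether the readahead difference alone explains a benchmark gap; for example (device names illustrative):

```shell
# Compare readahead on the md device and on a plain disk
blockdev --getra /dev/md0
blockdev --getra /dev/sda

# Level the playing field before re-running the benchmark
blockdev --setra 256 /dev/md0
```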
On 2/17/09 11:52 PM, Rajesh Kumar Mallah mallah.raj...@gmail.com wrote:
There has been an error in the tests: the dataset size was not 2*MEM, it
was 0.5*MEM.
I shall redo the tests and post the results.
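For reference, bonnie++ takes the dataset size via -s (in MB); sizing it at roughly 2x RAM keeps the page cache from absorbing the whole run. The figures below assume a 16 GB machine and are illustrative:

```shell
# 32768 MB = 32 GB dataset, i.e. 2x RAM on an assumed 16 GB box;
# -u is required when running as root
bonnie++ -d /mnt/test -s 32768 -u postgres
```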
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
On Tue, 17 Feb 2009, Rajesh Kumar Mallah wrote:
sda6 -- xfs with default formatting options.
sda7 -- mkfs.xfs -f -d sunit=128,swidth=512 /dev/sda7
sda8 -- ext3 (default)
It looks like the mkfs.xfs options sunit=128 and swidth=512 did not improve
I/O throughput as such in the bonnie++ tests.
it
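For what it's worth, those values are consistent with a 64 KB per-disk stripe across 4 data spindles (8 drives in RAID10 = 4 mirrored pairs); this geometry is an assumption, not something confirmed in the thread. sunit and swidth are given in 512-byte sectors:

```shell
# sunit  = stripe unit / sector size = 65536 / 512 = 128 sectors
# swidth = sunit * data spindles     = 128 * 4     = 512 sectors
mkfs.xfs -f -d sunit=128,swidth=512 /dev/sda7
```

If the controller's actual stripe size differs, these values need recomputing; misalignment would also explain seeing no benefit from the option.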
On Tue, Feb 17, 2009 at 5:15 PM, Matthew Wakeling matt...@flymine.org wrote:
Mallah
[mallah.raj...@gmail.com]
Sent: Tuesday, February 17, 2009 5:25 AM
To: Matthew Wakeling
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] suggestions for postgresql setup on Dell 2950 , PERC6i
controller
On Tue, Feb 17, 2009 at 5:15 PM, Matthew Wakeling matt...@flymine.org wrote
Arjen van der Meijden acmmail...@tweakers.net writes:
When we purchased our Perc 5/e with an MD1000 filled with 15 15k rpm SAS
disks, my colleague actually spent some time benchmarking the PERC and an
ICP Vortex (basically an overclocked Adaptec) on those drives. Unfortunately
he doesn't have
BTW, our machine got built with 8 15k drives in RAID10.
From the bonnie++ results it looks like the machine is
able to do 400 MB/s sequential write and 550 MB/s sequential
read. The battery-backed (BB) cache is enabled with 256MB.
sda6 -- xfs with default formatting options.
sda7 -- mkfs.xfs -f -d sunit=128,swidth=512
Rajesh Kumar Mallah wrote:
I've checked out the latest Areca controllers, but the manual
available on their website states there's a limitation of 32 disks
in an array...
Where exactly is the limitation of 32 drives? The datasheet of the
1680 states support for up to 128 drives using
Scott Carey wrote:
You probably don't want a single array with more than 32 drives anyway;
it's almost always better to start carving out chunks and using software
RAID 0 or 1 on top of that, for various reasons. I wouldn't put more than
16 drives in one array on any of these RAID cards; they're
On Fri, Feb 6, 2009 at 2:04 AM, Matt Burke mattbli...@icritical.com wrote:
Scott Carey wrote:
Matt Burke wrote:
Scott Carey wrote:
Glyn Astill wrote:
Stupid question, but why do people bother with the Perc line of
cards if the LSI brand is better? It seems the headache of trying
to get the Perc cards to perform is not worth any money saved.
I think in most cases the Dell cards actually cost more; people end
up stuck
Matt Burke wrote:
Bruce Momjian wrote:
Matt Burke wrote:
we'd have no choice other than replacing the server+shelf+disks.
I want to see just how much better a high-end Areca/Adaptec controller
is, but I just don't think I can get approval for a £1000 card "because
some guy on the internet said
--- On Fri, 6/2/09, Bruce Momjian br...@momjian.us wrote:
On Fri, Feb 6, 2009 at 8:19 AM, Matt Burke mattbli...@icritical.com wrote:
Glyn Astill wrote:
On 4-2-2009 22:36 Scott Marlowe wrote:
We purchased the Perc 5E, which Dell wanted $728 for last fall, with 8
SATA disks in an MD-1000, and the performance is just terrible. No
matter what we do, the best throughput on any RAID setup was about 30
MB/s write and 60 MB/s read. I can
3. Pure s/w RAID10 if I can convince the PERC to let the OS see the disks
Look for JBOD mode.
PERC 6 does not have JBOD mode exposed. Dell disables the feature from the LSI
firmware in their customization.
However, I have been told that you can convince them to tell you the 'secret
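A commonly mentioned workaround when JBOD is hidden is to create one single-drive RAID0 logical disk per physical drive. On LSI-based firmware this can be scripted with MegaCli; the enclosure:slot IDs below are purely illustrative, and whether Dell's customized firmware permits it is not guaranteed:

```shell
# One single-drive RAID0 logical disk for the drive in
# enclosure 32, slot 2 on adapter 0; repeat per disk
MegaCli -CfgLdAdd -r0 [32:2] -a0
```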
On 6-2-2009 16:27 Bruce Momjian wrote:
The experience I have heard of is that Dell looks at server hardware
the same way they look at their consumer gear: "If I put in a cheaper
part, how much will it cost Dell to warranty-replace it?" Sorry, but I
don't look at my performance or downtime in the
On Fri, 6 Feb 2009, Bruce Momjian wrote:
Stupid question, but why do people bother with the Perc line of cards if
the LSI brand is better?
Because when you're ordering a Dell server, all you do is click a little
box and you get a PERC card with it. There aren't that many places that
carry
On 2/6/09 9:53 AM, Arjen van der Meijden acmmail...@tweakers.net wrote:
Arjen van der Meijden wrote:
Afaik the Perc 5/i and /e are more or less rebranded LSI-cards (they're
not identical in layout etc), so it would be a bit weird if they
performed much less than the similar LSI's wouldn't you think?
I've recently had to replace a PERC4/DC with the exact same card
--- On Thu, 5/2/09, Matt Burke mattbli...@icritical.com wrote:
Matt Burke wrote:
Arjen van der Meijden wrote:
Scott Marlowe scott.marl...@gmail.com writes:
On Thu, 2009-02-05 at 12:40 +, Matt Burke wrote:
Arjen van der Meijden wrote:
Are there any reasonable choices for bigger (3+ shelf) direct-connected
RAID10 arrays, or are hideously expensive SANs the only option? I've
checked out the latest Areca controllers, but the manual available
On Thu, Feb 5, 2009 at 6:10 PM, Matt Burke mattbli...@icritical.com wrote:
Arjen van der Meijden wrote:
On 2/5/09 4:40 AM, Matt Burke mattbli...@icritical.com wrote:
Hi,
I am going to get a Dell 2950 with PERC6i with
8 * 73 GB 15K SAS drives +
300 GB EMC SATA SAN storage.
I seek suggestions from users sharing their experience with
similar hardware, if any. I have the following specific concerns:
1. On the list I read that the RAID10 function in PERC5 is not really
On Wed, Feb 4, 2009 at 11:45 AM, Rajesh Kumar Mallah
mallah.raj...@gmail.com wrote:
Rajesh Kumar Mallah wrote:
On 4-2-2009 21:09 Scott Marlowe wrote:
I have little experience with the 6i. I do have experience with all
the Percs from the 3i/3c series to the 5e series. My experience has
taught me that a brand new, latest model $700 Dell RAID controller is
about as good as a $150 LSI, Areca, or
On Wed, Feb 4, 2009 at 2:11 PM, Arjen van der Meijden
acmmail...@tweakers.net wrote:
Sorry for the top posts, I don't have a client that is inline post friendly.
Most PERCs are rebranded LSIs lately. The difference between the 5 and the 6
is PCI-X versus PCIe LSI series, relatively recent ones. Just look at the
OpenSolaris drivers for the PERC cards for a clue to what is what.
Sorry for the top post --
Assuming Linux --
1: PERC 6 is still a bit inferior to other options, but not that bad. Its
random IOPS is fine; sequential speeds are noticeably less than, say, the
latest from Adaptec or Areca.
2: Random IOPS will probably scale OK from 6 to 8 drives, but depending
Scott Carey wrote:
In the archives there was a big thread about this