That's inherent to the way RAID works, no matter how 'super-ultra'
powerful a hardware controller you may have. RAID 5 or 6 will never
have the same write performance as RAID 10 or 0, for example.
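As a rough illustration of why (my own back-of-envelope numbers, not from any vendor spec), the classic per-write I/O penalty factors explain the gap:

```python
# Back-of-envelope RAID random-write throughput. Assumptions (mine, for
# illustration): ~200 random-write IOPS per disk, classic penalty factors
# (RAID 6 needs 6 disk I/Os per host write: 2 reads + data + 2 parity + 1).
DISK_IOPS = 200
PENALTY = {"RAID 0": 1, "RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def effective_write_iops(level, disks, disk_iops=DISK_IOPS):
    """Approximate random-write IOPS an array can sustain at a RAID level."""
    return disks * disk_iops // PENALTY[level]

for level in ("RAID 0", "RAID 10", "RAID 5", "RAID 6"):
    print(level, effective_write_iops(level, disks=8))
```

With the same 8 disks, RAID 6 ends up at roughly a sixth of the RAID 0 figure, which is the penalty no controller can make disappear for sustained writes.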
Writeback caches deal well with bursts, but they have a limit, so
there will always be a penalty compared to what you could otherwise
achieve. If you have a continuous stream of data (a big VM deployment
or a large data copy) there will be continuous writes, which will
likely fill up the cache and make the disks underneath the bottleneck.
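To put numbers on that (illustrative assumptions of mine: a 4 GB cache, a 1000 MB/s incoming stream, disks that can drain 400 MB/s), the cache only buys a few seconds before the stream runs at disk speed:

```python
# How long a writeback cache absorbs a sustained stream before filling.
# All figures are assumptions for illustration, not measured values.
def seconds_until_cache_full(cache_mb, ingress_mb_s, drain_mb_s):
    """Seconds until the cache is full; infinite if the disks keep up."""
    net = ingress_mb_s - drain_mb_s   # rate at which the cache fills
    return float("inf") if net <= 0 else cache_mb / net

print(seconds_until_cache_full(4096, 1000, 400))  # roughly 6.8 seconds
```

After that point every further write is limited by what the disks underneath can sustain, which is exactly the RAID 6 penalty discussed above.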
That's why in some other scenarios, like ZFS, people use multiple
groups of RAID 6 (called RAIDZ2), which improves the write speed for
these types of workloads.
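The gain comes from ZFS striping writes across the vdevs, so aggregate random-write IOPS scales roughly with the vdev count. A sketch, using an assumed 200 IOPS per vdev (a RAIDZ2 vdev delivers roughly the random-write IOPS of one member disk):

```python
# Approximate random-write scaling of a ZFS pool built from RAIDZ2 vdevs.
# Assumption (illustrative): each RAIDZ2 vdev ~ one disk's worth of IOPS.
def pool_write_iops(vdevs, iops_per_vdev=200):
    """Rough random-write IOPS of a pool striped across RAIDZ2 vdevs."""
    return vdevs * iops_per_vdev

print(pool_write_iops(1), pool_write_iops(4))  # one vdev vs four
```

So splitting the same disks into four RAIDZ2 vdevs instead of one wide one trades some usable capacity for roughly four times the random-write throughput.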
In the scenario given in this thread, with just 3 servers, each with a
RAID 6, there will be a hard limit on write performance, especially for
streamed data, no matter how powerful your hardware controller is.
I also agree the 40Gb NICs may not be fully used and 10Gb could do the
job well, but if they were available at the beginning, why not use them.
On 08/08/2017 03:16, Fabrice Bacchella wrote:
On 8 August 2017 at 04:08, FERNANDO FREDIANI <fernando.fredi...@upx.com> wrote:
Even if you have a Hardware RAID Controller with Writeback cache you will have
a significant performance penalty and may not fully use all the resources you
mentioned you have.
Nope again. From my experience with HP Smart Array and writeback cache,
writes, which go into the cache, are even faster than reads, which must
go to the disks. Of course, if the writes are too fast and too big, they
will overflow the cache. But today's controllers have multi-gigabyte
caches; you have to write a lot to fill them. And if you can afford 40Gb
cards, you can afford decent