> From: Guoqing Jiang
>> On 12/7/19 11:44 PM, John Stoffel wrote:
>>> As Daniel says, he's got lots of disk load, but plenty of CPU, so the
>>> single thread for RAID5 is a big bottleneck.
> Perhaps setting "/sys/block/mdX/md/group_thread_cnt" could help here.
Now I finally had a chance to test this.
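For anyone wanting to try the same thing, the sysfs knob Guoqing mentions can be set at runtime. A minimal sketch, assuming the array is /dev/md0 (substitute your own device) and requiring root:

```shell
# Enable 4 additional stripe-handling worker thread groups for the
# raid5 personality of /dev/md0 (writing 0 restores the default
# single-thread behaviour).
echo 4 > /sys/block/md0/md/group_thread_cnt

# Confirm the setting took effect.
cat /sys/block/md0/md/group_thread_cnt
```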
> "Gionatan" == Gionatan Danti writes:
Gionatan> On 09/12/19 11:26, Daniel Janzon wrote:
>> Exactly. The md driver executes on a single core, but with a bunch of RAID5s
>> I can distribute the load over many cores. That's also why I cannot join the
>> bunch of RAID5's with a RAID0 (as
On 09/12/19 11:26, Daniel Janzon wrote:
> Exactly. The md driver executes on a single core, but with a bunch of RAID5s
> I can distribute the load over many cores. That's also why I cannot join the
> bunch of RAID5's with a RAID0 (as someone suggested), because then again
> all data is pulled through a single core.
On 12/9/19 11:26 AM, Daniel Janzon wrote:
> The origin of my problem is indeed the poor performance of RAID5,
> which maxes out the single core the driver runs on. But if I accept that
> as a given, the next problem is LVM striping. Since I do get 10x better
What stripe size was used for the striped LV?
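For reference, the stripe count and stripe size of an LVM striped LV can be set explicitly at creation time and inspected afterwards. A sketch, assuming a VG named vg0 built from the 8 RAID5 arrays (the names, size, and 64 KiB stripe size are placeholders, not values from the thread):

```shell
# Stripe a logical volume across 8 PVs (the 8 RAID5 arrays) with an
# explicit 64 KiB stripe size.
lvcreate --type striped -i 8 -I 64 -L 100G -n lv0 vg0

# Show the stripe count and stripe size actually in use.
lvs -o lv_name,stripes,stripesize vg0
```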
> From: "John Stoffel"
> Stuart> The mdadm layer already does the striping. So doing it again
> Stuart> in the LVM layer completely screws it up. You want plain JBOD
> Stuart> (Just a Bunch Of Disks).
> Umm... not really. The problem here is more the MD layer not being
> able to run RAID5
> "Stuart" == Stuart D Gathman writes:
Stuart> On Sat, 7 Dec 2019, John Stoffel wrote:
>> The biggest harm to performance here is really the RAID5, and if you
>> can instead move to RAID 10 (mirror then stripe across mirrors) then
>> you should be a performance boost.
Stuart> Yeah, That's
On Sat, 7 Dec 2019, John Stoffel wrote:
> The biggest harm to performance here is really the RAID5, and if you
> can instead move to RAID 10 (mirror then stripe across mirrors) then
> you should see a performance boost.
Yeah, that's what I do: RAID10, and use LVM to join the arrays together as JBOD.
I forgot
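Stuart's approach can be sketched as follows: one or more md RAID10 arrays, with LVM concatenating them linearly (JBOD-style) rather than striping again on top. Device names are examples only, and all commands require root:

```shell
# One 4-device md RAID10 array over the NVMe drives.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# Put LVM on top; lvcreate's default allocation is linear
# (concatenation), so no second striping layer is introduced.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 500G -n lv0 vg0
```

With more than one RAID10 array, adding each to the same VG extends the JBOD linearly.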
> "Stuart" == Stuart D Gathman writes:
Stuart> On Tue, Oct 29, 2019 at 12:14 PM Daniel Janzon
wrote:
>> I have a server with very high load using four NVMe SSDs and
>> therefore no HW RAID. Instead I used SW RAID with the mdadm tool.
>> Using one RAID5 volume does not work well since the
Did you think about RAID 50?

Original message
From: mator...@gmail.com
Sent: 7 December 2019 17:17
To: linux-lvm@redhat.com
Reply to: linux-lvm@redhat.com
Subject: Re: [linux-lvm] Best way to run LVM over multiple SW RAIDs?
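A RAID 50 layout in md would look roughly like the sketch below: two RAID5 arrays striped together with a RAID0 on top. The partition layout is an assumption (each of the four NVMe drives split into two partitions); note the RAID0 layer is itself a single md device, so this alone does not lift the per-array single-thread limit Daniel describes.

```shell
# Two 4-device RAID5 arrays, one over each set of partitions.
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/nvme[0-3]n1p1
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/nvme[0-3]n1p2

# Stripe the two RAID5s together: RAID 5+0.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2
```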
Hello,
I have a server with very high load using four NVMe SSDs and therefore no HW
RAID. Instead I used SW RAID with the mdadm tool. Using one RAID5 volume does
not work well since the driver can only utilize one CPU core which spikes at
100% and harms performance. Therefore I created 8
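Daniel's setup can be sketched like this, though the thread does not spell out exactly how the 8 arrays were built; the partitioning scheme (8 partitions per drive, one per array) and all names below are assumptions:

```shell
# 8 four-disk RAID5 arrays, each over one partition per NVMe drive,
# so each array's raid5 thread can land on a different core.
for i in $(seq 1 8); do
    mdadm --create /dev/md$i --level=5 --raid-devices=4 /dev/nvme[0-3]n1p$i
done

# Stripe a single LV across all 8 arrays to spread the I/O load.
pvcreate /dev/md[1-8]
vgcreate vg0 /dev/md[1-8]
lvcreate --type striped -i 8 -L 1T -n lv0 vg0
```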