But I still have some doubt about how LVM will handle a disk failure!
Or even mdadm.
Both need manual intervention when one or more disks die!
What I need is something like ZFS, but using fewer resources...
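For reference, the manual intervention after a disk failure with mdadm usually looks something like the sketch below. The array and device names (/dev/md0, /dev/sdb1, /dev/sdc1) are placeholders, not from this thread:

```shell
# Sketch: replacing a failed disk in an mdadm array.
# /dev/md0 is the array; /dev/sdb1 is the failed member,
# /dev/sdc1 the replacement -- all placeholder names.
mdadm --manage /dev/md0 --fail /dev/sdb1     # mark the dying disk as failed
mdadm --manage /dev/md0 --remove /dev/sdb1   # remove it from the array
# ...physically swap the disk and partition it to match the others...
mdadm --manage /dev/md0 --add /dev/sdc1      # add the new disk; rebuild starts
cat /proc/mdstat                             # watch the resync progress
```

So the array keeps serving I/O in degraded mode, but the replacement itself is not automatic unless a hot spare was configured beforehand.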

Thanks anyway



---
Gilberto Nunes Ferreira




On Tue, Jul 28, 2020 at 5:16 PM Gilberto Nunes <
[email protected]> wrote:

> Good to know that...
> Thanks
> ---
> Gilberto Nunes Ferreira
>
>
>
>
> On Tue, Jul 28, 2020 at 5:08 PM Alvin Starr <[email protected]>
> wrote:
>
>> Having just been burnt by BTRFS I would stick with XFS and LVM/others.
>>
>> LVM will do disk replication or raid1. I do not believe that
>> raid3,4,5,6... is supported.
>> mdadm does support all the various raid modes and I have used it quite
>> reliably for years.
>> You may want to look at the raid456 write-journal, but that will require
>> an SSD or NVMe device to be used effectively.
>>
>>
>> On 7/28/20 3:43 PM, Gilberto Nunes wrote:
>>
>> Hi there....
>>
>> 'Till now, I have been using GlusterFS over XFS, and so far so good.
>> Using LVM too....
>> Unfortunately, there is no way with XFS alone to merge two or more HDDs
>> in order to use more than one HDD, like RAID1 or RAID5.
>> My primary goal is to use two servers with GlusterFS on top of multiple
>> HDDs for qemu images.
>> I have thought about BTRFS or mdadm.
>> Does anybody have some experience with this?
>>
>> Thanks a lot
>>
>> ---
>> Gilberto Nunes Ferreira
>>
>>
>>
>>
>> ________
>>
>>
>>
>> Community Meeting Calendar:
>>
>> Schedule -
>> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>> Bridge: https://bluejeans.com/441850968
>>
>> Gluster-users mailing list
>> [email protected]
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>> --
>> Alvin Starr                   ||   land:  (647)478-6285
>> Netvel Inc.                   ||   Cell:  (416)
>> [email protected]
>>
>>
>>
