Hi,

How about hardware raid with XFS? I'm assuming it would be faster than ZFS raid, since the raid controller has a physical cache for reads and writes.
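For context, this is a rough sketch of the XFS-on-hardware-raid layout I have in mind; the device name, stripe geometry, and brick path are placeholders, not tested values:

    # /dev/sdb is assumed to be the raid5 (9+1) virtual drive exposed by the controller
    # su = controller stripe unit, sw = number of data disks (9 here); adjust to the array
    mkfs.xfs -i size=512 -d su=256k,sw=9 /dev/sdb
    mkdir -p /bricks/brick1
    mount -o noatime,inode64 /dev/sdb /bricks/brick1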
Thanks,

> On Mar 6, 2017, at 3:08 PM, Gandalf Corvotempesta <[email protected]> wrote:
>
> Hardware raid with ZFS should be avoided. ZFS needs direct access to the disks,
> and with hardware raid you have a controller in the middle.
>
> If you need ZFS, skip the hardware raid and use ZFS raid.
>
> On 6 Mar 2017 9:23 PM, "Dung Le" <[email protected]> wrote:
> Hi,
>
> Since I am new to Gluster, I need your advice. I have 2 different Gluster
> configurations.
>
> Purpose: I need to create 5 Gluster volumes. I am running Gluster version 3.9.0.
>
> Config #1: 5 bricks from one zpool
> 3 storage nodes.
> Use hardware raid to create one raid5 (9+1) array per storage node.
> Create a zpool on top of the array per storage node.
> Create 5 ZFS shares (each share is a brick) per storage node.
> Create 5 volumes with replica of 3 using 5 different bricks.
>
> Config #2: 1 brick from one zpool
> 3 storage nodes.
> Use hardware raid to create one raid5 (9+1) array per storage node.
> Create a zpool on top of the array per storage node.
> Create 1 ZFS share per storage node and use that share as the brick.
> Create 5 volumes with replica of 3 from the same share.
>
> 1) Is there any difference in performance between the two configs?
> 2) Will a single brick handle parallel writes as well as multiple bricks would?
> 3) Since I am using a hardware raid controller, are there any options I need to
> enable or disable for the gluster volumes?
>
> Best Regards,
> ~ Vic Le
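For reference, the two configs above look roughly like this on each node; the pool, dataset, and volume names are placeholders and the commands are an untested sketch, not exact steps:

    # Config #1 (per node): one zpool on the raid array, 5 datasets, each one a brick
    zpool create tank /dev/sdb
    zfs create tank/brick1              # ... repeat through tank/brick5
    mkdir -p /tank/brick1/vol1
    gluster volume create vol1 replica 3 \
        node1:/tank/brick1/vol1 node2:/tank/brick1/vol1 node3:/tank/brick1/vol1
    gluster volume start vol1

    # Config #2 (per node): one zpool, one dataset, all 5 volumes carved out of it
    zpool create tank /dev/sdb
    zfs create tank/brick
    mkdir -p /tank/brick/vol1
    gluster volume create vol1 replica 3 \
        node1:/tank/brick/vol1 node2:/tank/brick/vol1 node3:/tank/brick/vol1
    # vol2 through vol5 would be created the same way from the same dataset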
_______________________________________________ Gluster-users mailing list [email protected] http://lists.gluster.org/mailman/listinfo/gluster-users
