On 4/9/20 12:47 PM, Strahil Nikolov wrote:

On April 9, 2020 5:16:47 PM GMT+03:00, Valerio Luccio <valerio.luc...@nyu.edu> wrote:
Hi,

I'm afraid I still need some help.

When I originally set up my gluster (about 3 years ago), I set it up as
Distributed-Replicated without specifying a replica count, and I believe
that defaulted to replica 2. I have 4 servers with 3 RAIDs attached to
each server. This was my result:

    Number of Bricks: 6 x 2 = 12
    Transport-type: tcp
    Bricks:
    Brick1: hydra1:/gluster1/data
    Brick2: hydra1:/gluster2/data
    Brick3: hydra1:/gluster3/data
    Brick4: hydra2:/gluster1/data
    Brick5: hydra2:/gluster2/data
    Brick6: hydra2:/gluster3/data
    Brick7: hydra3:/gluster1/data
    Brick8: hydra3:/gluster2/data
    Brick9: hydra3:/gluster3/data
    Brick10: hydra4:/gluster1/data
    Brick11: hydra4:/gluster2/data
    Brick12: hydra4:/gluster3/data

If I understand this correctly, I have 6 sub-volumes, with Brick2 a replica
of Brick1, Brick4 of Brick3, etc. Correct?
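
In other words, I believe the pairs come from taking the bricks two at a
time in the order listed above, which would give me (please correct me if
I have this wrong):

    Brick1  + Brick2   (hydra1:/gluster1/data + hydra1:/gluster2/data)
    Brick3  + Brick4   (hydra1:/gluster3/data + hydra2:/gluster1/data)
    Brick5  + Brick6   (hydra2:/gluster2/data + hydra2:/gluster3/data)
    Brick7  + Brick8   (hydra3:/gluster1/data + hydra3:/gluster2/data)
    Brick9  + Brick10  (hydra3:/gluster3/data + hydra4:/gluster1/data)
    Brick11 + Brick12  (hydra4:/gluster2/data + hydra4:/gluster3/data)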

I realize now that it would probably have been better to specify a
different order, but now I cannot change it.

Now I want to store oVirt images on the Gluster, and oVirt requires either
replica 1 or replica 3. I need to be able to reuse the bricks I have, and I
was planning to remove some bricks, re-initialize them, and add them back
as a replica 3.

Am I supposed to remove 6 bricks, one from each sub-volume? Will that
work? Will I lose storage space? Can I just remove a brick from each
server and use those for the replica 3?
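
For reference, the first step I had in mind looks something like the
command below. This is only a sketch of my plan, not something I have run;
I am assuming here that the volume is simply named 'hydra' and that I pick
the second brick of each replica pair:

    # drop from replica 2 to replica 1 by removing one brick per replica pair
    gluster volume remove-brick hydra replica 1 \
        hydra1:/gluster2/data hydra2:/gluster1/data hydra2:/gluster3/data \
        hydra3:/gluster2/data hydra4:/gluster1/data hydra4:/gluster3/data force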

Thanks for all the help.
Hi Valerio,

You can find a small system and make it an arbiter, so you will end up with
'replica 3 arbiter 1'.

The arbiter storage calculation is that you need 4K for each file in the
volume. If you don't know the number of files, the general rule of thumb is
that your arbiter brick should be 1/1024 the size of the data brick.
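For example, a data brick holding 10 million files needs roughly
10,000,000 x 4K, which is about 40GB, on the arbiter; by the 1/1024 rule of
thumb, a 40TB data brick likewise maps to roughly a 40GB arbiter brick.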

Keep in mind that it will be better if your arbiter has an SSD (for example
mounted on /gluster) on which you can create 6 directories. The arbiter
stores only metadata, so the SSD's random access performance makes it the
optimal choice.

Something like:
arbiter:/gluster/data1
arbiter:/gluster/data2
arbiter:/gluster/data3
arbiter:/gluster/data4
arbiter:/gluster/data5
arbiter:/gluster/data6
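
Converting the volume would then be a single add-brick that turns each of
the 6 replica 2 sub-volumes into 'replica 3 arbiter 1'. Just a sketch,
assuming the volume is named 'hydra' and the new node is called 'arbiter'
as above (adjust the names, and note that the arbiter bricks are taken one
per sub-volume, in the order shown by 'gluster volume info'):

    # on the arbiter node: create the 6 brick directories on the SSD
    mkdir -p /gluster/data{1..6}

    # add one arbiter brick to each of the 6 replica sets
    gluster volume add-brick hydra replica 3 arbiter 1 \
        arbiter:/gluster/data1 arbiter:/gluster/data2 arbiter:/gluster/data3 \
        arbiter:/gluster/data4 arbiter:/gluster/data5 arbiter:/gluster/data6

    # let self-heal populate the arbiter metadata before relying on it
    gluster volume heal hydra info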


Of course, proper testing on a test volume is a good approach before
implementing on prod.

Best Regards,
Strahil Nikolov

Thanks, that sounds like a great idea.

--

Valerio Luccio          (212) 998-8736
Center for Brain Imaging                4 Washington Place, Room 158
New York University             New York, NY 10003

   "In an open world, who needs windows or gates ?"
