On Sat, Jan 5, 2019 at 8:12 AM <[email protected]> wrote:
>
> From the previous thread about gluster issues, things seem to be running much
> better than before, and it has raised a few questions that I can't seem to
> find any answers to:
>
> Setup:
> 3 hyperconverged nodes
> each node has 1x 1TB SSD and 1x 500GB NVMe drive
> each node is connected via Ethernet and also by a 40Gb InfiniBand connection
> for the gluster replication.
>
> Questions:
>
> 1. I created a 3TB VDO volume on the SSD and a 1.3TB VDO volume on the NVMe
> drive with a 1GB cache on each server, and I enabled the RDMA transport.
>      a. Did I lose anything by doing the whole process manually?  Since I did
> it like that, things seem to run MUCH better so far; when rebooting nodes,
> gluster resyncs almost instantly and storage seems faster as well.

By manual process, I assume you mean you did not use the Cockpit UI to
create the bricks and gluster volumes? If so, you should review the volume
options that are set on the volumes.
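A quick way to compare is below; the volume name 'data' is just a
placeholder for yours. As far as I know, the 'virt' option group (shipped
as /var/lib/glusterd/groups/virt) is what the hyperconverged setup applies
for VM image volumes:

  # Show the options currently set on the volume
  gluster volume info data
  gluster volume get data all

  # Apply the 'virt' group of options recommended for VM image storage
  gluster volume set data group virt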

>      b. Is there a way to split out the gluster traffic from the normal oVirt
> traffic (VM and cluster communication)?  Originally I had separate names for
> each node for the gluster network and it WAS split, but if I ever reset a
> gluster node, the name got changed.  What I did now is just put a hosts
> file on each node that sends all traffic over the InfiniBand, but I don't
> feel like that's optimal.  I was unable to add the separate gluster names
> in the configuration as they were not part of the existing cluster.

You can split the gluster traffic by using a different address (on the
gluster network) to peer probe the gluster servers, and then use the
ovirtmgmt interface addresses to add the hosts to the engine.
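As a rough sketch (the hostnames below are made up; substitute the names
that resolve to your InfiniBand addresses):

  # Probe peers using names that resolve to the gluster/InfiniBand network
  gluster peer probe node1-gluster.example.com
  gluster peer probe node2-gluster.example.com
  gluster peer probe node3-gluster.example.com

Bricks are then created against those gluster-network names, while the
hosts are added to the engine with their ovirtmgmt names, so replication
traffic and management/VM traffic stay on separate interfaces.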

>      c. Why does the hyper-converged wizard not let you adjust the VDO cache
> size, the transport type, or use VDO to "over-subscribe" the drive sizes like 
> I did manually?

This is a bug that we are tracking, with a fix planned for 4.3.
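Until the wizard supports those settings, the manual creation you described
is roughly the following (untested sketch; device names and volume names are
assumptions based on your description):

  # 3T logical VDO volume on the 1TB SSD, with a 1G block map cache
  vdo create --name=vdo_ssd --device=/dev/sdb \
      --vdoLogicalSize=3T --blockMapCacheSize=1G

  # 1.3T logical VDO volume on the 500GB NVMe drive
  vdo create --name=vdo_nvme --device=/dev/nvme0n1 \
      --vdoLogicalSize=1300G --blockMapCacheSize=1G

RDMA is enabled per gluster volume, e.g. with 'transport tcp,rdma' at
volume create time.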

> 2. I want to add more space in the near future; would it be better to
> create a RAID0 with the new drives, or just use them as another separate
> storage location?

Both are valid options. Creating a new volume would mean that you may have
to move some of the virtual disks to the newly created volume to free up
space on the existing one.
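If you go the separate-volume route, it would look something like this
(paths and names are made up; assumes the new drives are already prepared
and mounted under /gluster_bricks/data2 on each node):

  # Create and start a new replica-3 volume on the new bricks
  gluster volume create data2 replica 3 \
      node1-gluster:/gluster_bricks/data2/data2 \
      node2-gluster:/gluster_bricks/data2/data2 \
      node3-gluster:/gluster_bricks/data2/data2
  gluster volume start data2

The new volume can then be added in the engine as another gluster storage
domain.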

_______________________________________________
Users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/QQDVVSIDPSTGZXPZUIDZ3NVB74X43IZO/
