Following up on the previous thread about Gluster issues: things seem to be
running much better than before, and that has raised a few questions I can't
seem to find answers to:
Setup:
3 hyperconverged nodes
each node has 1x 1 TB SSD and 1x 500 GB NVMe drive
each node is connected via Ethernet and also by a 40 Gb InfiniBand connection
for the Gluster replication.
Questions:
1. On each server I created a 3 TB VDO volume on the SSD and a 1.3 TB VDO
volume on the NVMe drive, each with a 1 GB cache, and I enabled the RDMA
transport (a rough sketch of the commands is below the questions).
    a. Did I lose anything by doing the whole process manually? Since I did it
that way, things seem to run MUCH better so far: after rebooting a node,
Gluster resyncs almost instantly, and storage seems faster as well.
    b. Is there a way to split the Gluster traffic out from the normal oVirt
traffic (VM and cluster communication)? Originally I had a separate hostname
for each node for the Gluster network and the traffic WAS split, but if I ever
reset a Gluster node, the name got changed. What I have done now is put a
hosts file on each node that sends all traffic over the InfiniBand (see the
hosts-file sketch below the questions), but I don't feel that's optimal. I was
unable to add the separate Gluster names to the configuration because they
were not part of the existing cluster.
    c. Why does the hyperconverged wizard not let you adjust the VDO cache
size or the transport type, or use VDO to "over-subscribe" the drive sizes
like I did manually?
2. I want to add more space in the near future; would it be better to create a
RAID 0 from the new drives, or just use them as another, separate storage
location?
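
For reference on question 1, a rough sketch of the kind of commands involved
(device paths, volume/brick names, and the guess that the 1 GB cache is VDO's
block map cache are just placeholders here, not my exact commands):

    # VDO volumes "over-subscribed" beyond the physical drive sizes
    vdo create --name=vdo_ssd  --device=/dev/sdb     --vdoLogicalSize=3T    --blockMapCacheSize=1G
    vdo create --name=vdo_nvme --device=/dev/nvme0n1 --vdoLogicalSize=1300G --blockMapCacheSize=1G

    # Gluster volume with the RDMA transport enabled, bricks on the
    # VDO-backed filesystems
    gluster volume create data replica 3 transport tcp,rdma \
        node1-ib:/gluster_bricks/data/data \
        node2-ib:/gluster_bricks/data/data \
        node3-ib:/gluster_bricks/data/data

    # or, to switch an existing volume:
    gluster volume set data config.transport tcp,rdma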
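
And for 1b, the hosts-file workaround on each node looks roughly like this
(IPs and names are placeholders); since the normal node names resolve to the
InfiniBand addresses, everything, not just Gluster, ends up on that network:

    # /etc/hosts on each node -- the regular node names point at the IB IPs,
    # so ALL traffic (oVirt + Gluster) currently goes over InfiniBand
    192.168.100.1   node1.example.com   node1
    192.168.100.2   node2.example.com   node2
    192.168.100.3   node3.example.com   node3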