On 10/7/21 9:07 pm, Nathaniel Roach wrote:
On 10/7/21 3:31 am, Nir Soffer wrote:
On Fri, Jul 9, 2021 at 5:57 PM nroach44--- via Users wrote:
Hi All,
After upgrading some of my hosts to 4.4.7, and after fixing the policy issue,
I'm no longer able to migrate VMs to or from 4.4.7 hosts. Starti
On Sat, Jul 10, 2021 at 6:55 PM wrote:
>
> Hi Nir
>
> Thank you for your explanation.
>
> Can I ask you if you can explain this a bit further? I performed an
> experiment and installed ovirt + a hypervisor on 2 VMs (on the hypervisor VM
> I enabled nested virtualization) - both based on Rocky L
Hi.
I have a custom use case where I would like to have two guacamole RDP
connections that point to the same host, but the underlying xrdp start
script would initialize one connection slightly differently from the
other. I don't see a simple way that I can pass anything between
Guacamole and t
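One common approach to the situation described above is to define two Guacamole connections that point at the same host but differ in a single RDP parameter the xrdp start script can inspect. A hypothetical `user-mapping.xml` sketch (connection names, addresses, and the idea of branching on the logged-in username are illustrative assumptions, not from the original message):

```xml
<user-mapping>
  <authorize username="demo" password="demo">
    <!-- Both connections target the same host; only the RDP username differs. -->
    <connection name="host-profile-a">
      <protocol>rdp</protocol>
      <param name="hostname">192.0.2.10</param>
      <param name="port">3389</param>
      <param name="username">profile-a</param>
    </connection>
    <connection name="host-profile-b">
      <protocol>rdp</protocol>
      <param name="hostname">192.0.2.10</param>
      <param name="port">3389</param>
      <param name="username">profile-b</param>
    </connection>
  </authorize>
</user-mapping>
```

The xrdp session script (e.g. `startwm.sh`) could then branch on `$USER` to initialize each session differently.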
Hi Nir
Thank you for your explanation.
Can I ask you if you can explain this a bit further? I performed an experiment
and installed ovirt + a hypervisor on 2 VMs (on the hypervisor VM I enabled
nested virtualization) - both based on Rocky Linux following the official oVirt
instructions for 4
Hi Jayme & Strahil,
Thank you again for your messages.
Reading
https://stackoverflow.com/questions/52394849/can-i-change-glusterfs-replica-3-to-replica-3-with-arbiter-1,
I think I understand now what Strahil is suggesting.
It sounds to me like you're saying I can reconfigure the existing gluste
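For reference, the conversion discussed in that StackOverflow answer boils down to dropping one full data brick and re-adding an arbiter brick. A hedged sketch with hypothetical volume and brick names (run only inside a maintenance window, with no pending heals):

```shell
# Hypothetical names: volume "vmstore", hosts host1..host3.
# 1. Confirm the volume is fully healed before touching bricks.
gluster volume heal vmstore info

# 2. Drop the third full replica (the data on host3 is no longer needed).
gluster volume remove-brick vmstore replica 2 \
    host3:/gluster_bricks/vmstore/brick force

# 3. Re-add host3 (or a smaller node) as a metadata-only arbiter brick.
gluster volume add-brick vmstore replica 3 arbiter 1 \
    host3:/gluster_bricks/vmstore/arbiter
```

The arbiter brick stores only file metadata, so it can live on much smaller storage than the data bricks.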
On 10/7/21 3:31 am, Nir Soffer wrote:
On Fri, Jul 9, 2021 at 5:57 PM nroach44--- via Users wrote:
Hi All,
After upgrading some of my hosts to 4.4.7, and after fixing the policy issue,
I'm no longer able to migrate VMs to or from 4.4.7 hosts. Starting them works
fine regardless of the host ve
Hi,
I've been trying to get oVirt Node 4.4.6 up and running on my Dell r620
hosts but am facing a strange issue where seemingly all network adapters
get reset at random times after install.
The interfaces reset as soon as any traffic starts flowing through them.
The logs also show NFS timeout
Just a thought but depending on resources you might be able to use your 4th
server as NFS storage and live migrate vm disks to it and off of your
gluster volumes. I’ve done this in the past when doing major maintenance on
gluster volumes to err on the side of caution.
On Sat, Jul 10, 2021 at 7:22
Hmm right as I said that, I just had a thought.
I DO have a "backup" server in place (that I haven't even started using yet),
that currently has some empty hard drive bays.
It would take some extra work, but I could use that 4th backup server as a
temporary staging ground to begin building t
Thank you. And yes, I agree, this needs to occur in a maintenance window and be
done very carefully. :)
My only problem with this method is that I need to *replace* disks in the two
servers.
I don't have any empty hard drive bays, so will effectively need to put a host
into maintenance mode, re
Hi David,
any storage operation can cause unexpected situations, so always plan your
activities for low-traffic hours and test them on your test environment in
advance.
I think it's easier if you (command line):
- verify no heals are pending. Not a single one.
- set the host to maintenance over ov
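The heal check in the first step can be scripted so the maintenance run aborts unless the count is exactly zero. A minimal sketch (the volume name "vmstore" is a hypothetical placeholder):

```shell
# Sum the "Number of entries:" lines across all bricks of the volume;
# refuse to proceed if any heal entries are pending.
pending=$(gluster volume heal vmstore info \
    | awk '/Number of entries:/ {sum += $NF} END {print sum + 0}')

if [ "$pending" -ne 0 ]; then
    echo "Refusing to continue: $pending heal entries pending" >&2
    exit 1
fi
echo "No pending heals, safe to proceed"
```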
The engine volume is not supposed to be that big. I think 100 GB is enough.
Don't forget to always separate engine from vmstore volumes.
Best Regards,
Strahil Nikolov
On Fri, Jul 9, 2021 at 23:08, Patrick Lomakin
wrote: Hello! I have tried to deploy a single node with gluster, but if select
I agree, you either use static IPs outside the DHCP range, or you provide
static DHCP leases (mac-to-IP binding).
In the worst-case scenario, changing the DNS A & PTR records to the new IP
should bring you to a working state.
As it was mentioned, it's a pure DHCP server-client relationship and not
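A static lease of the kind mentioned above looks like this in ISC dhcpd; the hostname, MAC address, and IP are placeholders:

```
# /etc/dhcp/dhcpd.conf -- pin one host to a fixed address
host ovirt-node1 {
    hardware ethernet 52:54:00:aa:bb:cc;   # placeholder MAC of the node
    fixed-address 192.168.1.50;            # IP outside the dynamic pool
}
```

The `fixed-address` should sit outside the dynamic range declared in the subnet's `range` statement, and the matching DNS A and PTR records should point at the same IP.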