That's a good start.

Are you building your VMs from a template? Any chance there is another system 
(outside oVirt?) with the same MAC?
You can run arping and check whether you get a response from more than one system.
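For example (a sketch — 192.168.1.50 and ovirtmgmt are placeholders for the VM's IP and the host bridge, adjust both):

```shell
# Send a few ARP requests for the VM's IP and list the distinct MACs
# that reply. More than one line of output means a duplicate MAC is
# answering on the segment.
# 192.168.1.50 / ovirtmgmt are placeholders -- adjust to your setup.
arping -I ovirtmgmt -c 4 192.168.1.50 | awk '/reply from/ {print $5}' | sort -u
```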
Best Regards,
Strahil Nikolov
 
  On Mon, Feb 21, 2022 at 11:15, jb<[email protected]> wrote:    
Thank you Nikolov,
 
I described the problem a bit incorrectly. In the UI I do see the interface, and with 
virsh dumpxml I get:
 
 
  <interface type='bridge'>
       <mac address='00:1a:4a:16:01:83'/>
       <source bridge='ovirtmgmt'/>
       <target dev='vnet9'/>
       <model type='virtio'/>
       <driver name='vhost' queues='4'/>
       <filterref filter='vdsm-no-mac-spoofing'/>
       <link state='up'/>
       <mtu size='1500'/>
       <alias name='ua-c8a50041-2d13-456d-acb6-b57fdaea434b'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
   </interface>
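(As an aside, a quick sed sketch like this can pull only the interface stanzas out of the domain XML on a host — 'name-of-vm' is a placeholder:)

```shell
# Print only the <interface>...</interface> blocks from the domain XML.
# 'name-of-vm' is a placeholder for the actual VM name.
virsh dumpxml name-of-vm | sed -n '/<interface /,/<\/interface>/p'
```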
 
 
 
 
lspci on the VM also shows the interface, and I can bring the interface up 
again with: ifup enp1s0. So it only loses the connection, not the interface.
 
 
When I restart the VM I get this log message:
 
/var/log/syslog:Feb 21 10:11:29 tv-planer sh[391]: ifup: failed to bring up 
enp1s0
 /var/log/syslog:Feb 21 10:11:29 tv-planer systemd[1]: [email protected]: 
Main process exited, code=exited, status=1/FAILURE
 /var/log/syslog:Feb 21 10:11:29 tv-planer systemd[1]: [email protected]: 
Failed with result 'exit-code'.
 
 
But the VM gets its IP and is reachable.
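To dig into why ifup@enp1s0 fails at boot, something like this could be run inside the VM (a sketch; enp1s0 is the interface name from the log above):

```shell
# Show the failed unit's status and its journal entries for this boot.
systemctl status 'ifup@enp1s0.service'
journalctl -b -u 'ifup@enp1s0.service'

# Check whether enp1s0 is actually configured in ifupdown's config --
# a mismatch here makes ifup@ fail while the VM can still get an
# address through another path.
grep -rn 'enp1s0' /etc/network/interfaces /etc/network/interfaces.d/ 2>/dev/null
```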
 

 
 

 
 On 20.02.22 at 14:06, Strahil Nikolov wrote:
  
 
 Do you see all NICs in the UI? What type are they?
  Set this alias on the hypervisors: alias virsh='virsh -c 
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf' and then 
use 'virsh dumpxml name-of-vm' to identify how many NICs the VM has.
  If you got the correct settings in oVirt, use 'lspci -vvvvv'.
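A possible one-liner on top of that (a sketch; 'name-of-vm' is a placeholder) to count the NICs the VM actually has:

```shell
# Count <interface> stanzas in the domain XML = number of NICs attached.
virsh dumpxml name-of-vm | grep -c '<interface '
```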
  Best Regards, Strahil Nikolov
 
 
  On Sun, Feb 20, 2022 at 11:32, Jonathan Baecker <[email protected]> wrote:   
Hello everybody,
  
  I have here a strange behavior: We have a 3-node self-hosted cluster 
  with around 20 VMs running on it. For a while now I have had the problem 
  that one VM loses its network interface after some days. But because 
  this VM was only for testing, I was too lazy to dive deeper and figure 
  out what was happening.
  
  Now I have a second VM with the same problem, and this VM is more 
  important. Both VMs run Debian 10 and use CIFS mounts, so maybe that 
  is related?
  
  Has any of you seen this behavior? Can you give me a hint on how I 
  can fix it?
  
  At the moment I can't provide a log file, because I don't know the 
  exact time when this happened. And I also don't know whether the problem 
  comes from oVirt or from the operating system inside the VMs.
  
  Have a nice day!
  
  Jonathan
  
  _______________________________________________
  Users mailing list -- [email protected]
  To unsubscribe send an email to [email protected]
  Privacy Statement: https://www.ovirt.org/privacy-policy.html
  oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
  List Archives: 
https://lists.ovirt.org/archives/list/[email protected]/message/DOXZRQ55LFPNKUVS3AWIPXQDJIVH3X7M/
   
    
