I think I can now narrow the problem down considerably.  When I run the
image manually as

/usr/libexec/qemu-kvm -name test -enable-kvm -cpu host -m 4G -nographic
-hda /home/kcli/images/podvm.qcow2 -device
virtio-net,netdev=netdev0,id=net0 -netdev
tap,br=virbr0,helper=/usr/libexec/qemu-bridge-helper,id=netdev0

it correctly acquires an IP address via virbr0's DHCP; in other words, it
does what I expect it to do.  I guess this means there's nothing wrong
with the image and its DHCP setup.
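
For what it's worth, the DHCP exchange of the manually started VM should
also be visible on the host with something like

# tcpdump -ni virbr0 port 67 or port 68

(just a generic capture of DHCP traffic on the bridge, nothing
libvirt-specific).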

For comparison, when the VM is launched by peer pods, the command line is

/usr/libexec/qemu-kvm -name
guest=podvm-podsandbox-totok-8f10756a,debug-threads=on -S -object
{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-23-podvm-podsandbox-tot/master-key.aes"}
-machine
pc-i440fx-rhel10.0.0,usb=off,vmport=off,dump-guest-core=on,memory-backend=pc.ram,acpi=on
-accel kvm -cpu
Cascadelake-Server,vmx=on,pdcm=on,hypervisor=on,ss=on,tsc-adjust=on,fdp-excptn-only=on,zero-fcs-fds=on,mpx=on,umip=on,pku=on,md-clear=on,stibp=on,flush-l1d=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,sbdr-ssdp-no=on,psdp-no=on,fb-clear=on,gds-no=on,rfds-no=on,vmx-ins-outs=on,vmx-true-ctls=on,vmx-store-lma=on,vmx-activity-hlt=on,vmx-activity-wait-sipi=on,vmx-vmwrite-vmexit-fields=on,vmx-apicv-xapic=on,vmx-ept=on,vmx-desc-exit=on,vmx-rdtscp-exit=on,vmx-apicv-x2apic=on,vmx-vpid=on,vmx-wbinvd-exit=on,vmx-unrestricted-guest=on,vmx-apicv-register=on,vmx-apicv-vid=on,vmx-rdrand-exit=on,vmx-invpcid-exit=on,vmx-vmfunc=on,vmx-shadow-vmcs=on,vmx-rdseed-exit=on,vmx-pml=on,vmx-xsaves=on,vmx-tsc-scaling=on,vmx-ept-execonly=on,vmx-page-walk-4=on,vmx-ept-2mb=on,vmx-ept-1gb=on,vmx-invept=on,vmx-eptad=on,vmx-invept-single-context=on,vmx-invept-all-context=on,vmx-invvpid=on,vmx-invvpid-single-addr=on,vmx-invvpid-all-context=on,vmx-invept-single-context-noglobals=on,vmx-intr-exit=on,vmx-nmi-exit=on,vmx-vnmi=on,vmx-preemption-timer=on,vmx-posted-intr=on,vmx-vintr-pending=on,vmx-tsc-offset=on,vmx-hlt-exit=on,vmx-invlpg-exit=on,vmx-mwait-exit=on,vmx-rdpmc-exit=on,vmx-rdtsc-exit=on,vmx-cr3-load-noexit=on,vmx-cr3-store-noexit=on,vmx-cr8-load-exit=on,vmx-cr8-store-exit=on,vmx-flexpriority=on,vmx-vnmi-pending=on,vmx-movdr-exit=on,vmx-io-exit=on,vmx-io-bitmap=on,vmx-mtf=on,vmx-msr-bitmap=on,vmx-monitor-exit=on,vmx-pause-exit=on,vmx-secondary-ctls=on,vmx-exit-nosave-debugctl=on,vmx-exit-load-perf-global-ctrl=on,vmx-exit-ack-intr=on,vmx-exit-save-pat=on,vmx-exit-load-pat=on,vmx-exit-save-efer=on,vmx-exit-load-efer=on,vmx-exit-save-preemption-timer=on,vmx-exit-clear-bndcfgs=on,vmx-entry-noload-debugctl=on,vmx-entry-ia32e-mode=on,vmx-entry-load-perf-global-ctrl=on,vmx-entry-load-pat=on,vmx-entry-load-efer=on,vmx-entry-load-bndcfgs=on,vmx-eptp-switching=on,hle=off,rtm=off
-m size=65536k -object
{"qom-type":"memory-backend-ram","id":"pc.ram","size":67108864} -overcommit
mem-lock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
d71c1e97-5764-4ae4-a1bc-6470523f459c -display none -no-user-config
-nodefaults -chardev socket,id=charmonitor,fd=61,server=on,wait=off -mon
chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-boot strict=on -device
{"driver":"piix3-usb-uhci","id":"usb","bus":"pci.0","addr":"0x1.0x2"}
-device {"driver":"ahci","id":"sata0","bus":"pci.0","addr":"0x3"} -blockdev
{"driver":"file","filename":"/root/kcli/images/podvm.qcow2","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}
-blockdev
{"node-name":"libvirt-3-format","read-only":true,"driver":"qcow2","file":"libvirt-3-storage","backing":null}
-blockdev
{"driver":"file","filename":"/root/kcli/images/podvm-podsandbox-totok-8f10756a-root.qcow2","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}
-blockdev
{"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":"libvirt-3-format"}
-device
{"driver":"ide-hd","bus":"sata0.0","drive":"libvirt-2-format","id":"sata0-0-0","bootindex":1}
-blockdev
{"driver":"file","filename":"/root/kcli/images/podvm-podsandbox-totok-8f10756a-cloudinit.iso","node-name":"libvirt-1-storage","read-only":true}
-device
{"driver":"ide-cd","bus":"ide.0","unit":0,"drive":"libvirt-1-storage","id":"ide0-0-0"}
-netdev
{"type":"tap","fd":"62","vhost":true,"vhostfd":"65","id":"hostnet0"}
-device
{"driver":"virtio-net-pci","netdev":"hostnet0","id":"net0","mac":"52:54:00:ed:06:2e","bus":"pci.0","addr":"0x2"}
-chardev pty,id=charserial0 -device
{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}
-audiodev {"id":"audio1","driver":"none"} -device
{"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.0","addr":"0x4"}
-sandbox
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
-msg timestamp=on

and the machine doesn't get an IP address.
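
One more data point I can try to collect is whether the guest sends any
DHCP requests at all; watching the domain's tap device on the host should
show that, something along the lines of

# tcpdump -ni vnet21 port 67 or port 68

(vnet21 being this domain's tap device, see the XML below).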

Where's the important difference?  I'm including the whole domain XML(*)
for completeness.
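
The guest's serial console should also be reachable with e.g.

# virsh console podvm-podsandbox-totok-8f10756a

which might show whether the guest boots far enough to attempt DHCP at
all.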

pvl

(*) # virsh dumpxml podvm-podsandbox-totok-8f10756a
<domain type='kvm' id='23'>
  <name>podvm-podsandbox-totok-8f10756a</name>
  <uuid>d71c1e97-5764-4ae4-a1bc-6470523f459c</uuid>
  <description>This Virtual Machine is the peer-pod VM</description>
  <memory dumpCore='on' unit='KiB'>65536</memory>
  <currentMemory unit='KiB'>65536</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel10.0.0'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
    <vmport state='off'/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Cascadelake-Server</model>
    <vendor>Intel</vendor>
    <feature policy='require' name='vmx'/>
    <feature policy='require' name='pdcm'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='ss'/>
    <feature policy='require' name='tsc_adjust'/>
    <feature policy='require' name='fdp-excptn-only'/>
    <feature policy='require' name='zero-fcs-fds'/>
    <feature policy='require' name='mpx'/>
    <feature policy='require' name='umip'/>
    <feature policy='require' name='pku'/>
    <feature policy='require' name='md-clear'/>
    <feature policy='require' name='stibp'/>
    <feature policy='require' name='flush-l1d'/>
    <feature policy='require' name='arch-capabilities'/>
    <feature policy='require' name='xsaves'/>
    <feature policy='require' name='ibpb'/>
    <feature policy='require' name='ibrs'/>
    <feature policy='require' name='amd-stibp'/>
    <feature policy='require' name='amd-ssbd'/>
    <feature policy='require' name='rdctl-no'/>
    <feature policy='require' name='ibrs-all'/>
    <feature policy='require' name='skip-l1dfl-vmentry'/>
    <feature policy='require' name='mds-no'/>
    <feature policy='require' name='pschange-mc-no'/>
    <feature policy='require' name='tsx-ctrl'/>
    <feature policy='require' name='sbdr-ssdp-no'/>
    <feature policy='require' name='psdp-no'/>
    <feature policy='require' name='fb-clear'/>
    <feature policy='require' name='gds-no'/>
    <feature policy='require' name='rfds-no'/>
    <feature policy='require' name='vmx-ins-outs'/>
    <feature policy='require' name='vmx-true-ctls'/>
    <feature policy='require' name='vmx-store-lma'/>
    <feature policy='require' name='vmx-activity-hlt'/>
    <feature policy='require' name='vmx-activity-wait-sipi'/>
    <feature policy='require' name='vmx-vmwrite-vmexit-fields'/>
    <feature policy='require' name='vmx-apicv-xapic'/>
    <feature policy='require' name='vmx-ept'/>
    <feature policy='require' name='vmx-desc-exit'/>
    <feature policy='require' name='vmx-rdtscp-exit'/>
    <feature policy='require' name='vmx-apicv-x2apic'/>
    <feature policy='require' name='vmx-vpid'/>
    <feature policy='require' name='vmx-wbinvd-exit'/>
    <feature policy='require' name='vmx-unrestricted-guest'/>
    <feature policy='require' name='vmx-apicv-register'/>
    <feature policy='require' name='vmx-apicv-vid'/>
    <feature policy='require' name='vmx-rdrand-exit'/>
    <feature policy='require' name='vmx-invpcid-exit'/>
    <feature policy='require' name='vmx-vmfunc'/>
    <feature policy='require' name='vmx-shadow-vmcs'/>
    <feature policy='require' name='vmx-rdseed-exit'/>
    <feature policy='require' name='vmx-pml'/>
    <feature policy='require' name='vmx-xsaves'/>
    <feature policy='require' name='vmx-tsc-scaling'/>
    <feature policy='require' name='vmx-ept-execonly'/>
    <feature policy='require' name='vmx-page-walk-4'/>
    <feature policy='require' name='vmx-ept-2mb'/>
    <feature policy='require' name='vmx-ept-1gb'/>
    <feature policy='require' name='vmx-invept'/>
    <feature policy='require' name='vmx-eptad'/>
    <feature policy='require' name='vmx-invept-single-context'/>
    <feature policy='require' name='vmx-invept-all-context'/>
    <feature policy='require' name='vmx-invvpid'/>
    <feature policy='require' name='vmx-invvpid-single-addr'/>
    <feature policy='require' name='vmx-invvpid-all-context'/>
    <feature policy='require' name='vmx-invvpid-single-context-noglobals'/>
    <feature policy='require' name='vmx-intr-exit'/>
    <feature policy='require' name='vmx-nmi-exit'/>
    <feature policy='require' name='vmx-vnmi'/>
    <feature policy='require' name='vmx-preemption-timer'/>
    <feature policy='require' name='vmx-posted-intr'/>
    <feature policy='require' name='vmx-vintr-pending'/>
    <feature policy='require' name='vmx-tsc-offset'/>
    <feature policy='require' name='vmx-hlt-exit'/>
    <feature policy='require' name='vmx-invlpg-exit'/>
    <feature policy='require' name='vmx-mwait-exit'/>
    <feature policy='require' name='vmx-rdpmc-exit'/>
    <feature policy='require' name='vmx-rdtsc-exit'/>
    <feature policy='require' name='vmx-cr3-load-noexit'/>
    <feature policy='require' name='vmx-cr3-store-noexit'/>
    <feature policy='require' name='vmx-cr8-load-exit'/>
    <feature policy='require' name='vmx-cr8-store-exit'/>
    <feature policy='require' name='vmx-flexpriority'/>
    <feature policy='require' name='vmx-vnmi-pending'/>
    <feature policy='require' name='vmx-movdr-exit'/>
    <feature policy='require' name='vmx-io-exit'/>
    <feature policy='require' name='vmx-io-bitmap'/>
    <feature policy='require' name='vmx-mtf'/>
    <feature policy='require' name='vmx-msr-bitmap'/>
    <feature policy='require' name='vmx-monitor-exit'/>
    <feature policy='require' name='vmx-pause-exit'/>
    <feature policy='require' name='vmx-secondary-ctls'/>
    <feature policy='require' name='vmx-exit-nosave-debugctl'/>
    <feature policy='require' name='vmx-exit-load-perf-global-ctrl'/>
    <feature policy='require' name='vmx-exit-ack-intr'/>
    <feature policy='require' name='vmx-exit-save-pat'/>
    <feature policy='require' name='vmx-exit-load-pat'/>
    <feature policy='require' name='vmx-exit-save-efer'/>
    <feature policy='require' name='vmx-exit-load-efer'/>
    <feature policy='require' name='vmx-exit-save-preemption-timer'/>
    <feature policy='require' name='vmx-exit-clear-bndcfgs'/>
    <feature policy='require' name='vmx-entry-noload-debugctl'/>
    <feature policy='require' name='vmx-entry-ia32e-mode'/>
    <feature policy='require' name='vmx-entry-load-perf-global-ctrl'/>
    <feature policy='require' name='vmx-entry-load-pat'/>
    <feature policy='require' name='vmx-entry-load-efer'/>
    <feature policy='require' name='vmx-entry-load-bndcfgs'/>
    <feature policy='require' name='vmx-eptp-switching'/>
    <feature policy='disable' name='hle'/>
    <feature policy='disable' name='rtm'/>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source
file='/root/kcli/images/podvm-podsandbox-totok-8f10756a-root.qcow2'
index='2'/>
      <backingStore type='file' index='3'>
        <format type='qcow2'/>
        <source file='/root/kcli/images/podvm.qcow2'/>
        <backingStore/>
      </backingStore>
      <target dev='sda' bus='sata'/>
      <boot order='1'/>
      <alias name='sata0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source
file='/root/kcli/images/podvm-podsandbox-totok-8f10756a-cloudinit.iso'
index='1'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='piix3-uhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x1'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='sata0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:ed:06:2e'/>
      <source network='default'
portid='315baa97-2c37-46ad-a0ff-8be16fffb275' bridge='virbr0'/>
      <target dev='vnet21'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <audio id='1' type='none'/>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04'
function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c62,c325</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c62,c325</imagelabel>
  </seclabel>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
</domain>


On Thu, Sep 18, 2025 at 11:55 AM Pavel Mores <pmo...@redhat.com> wrote:

> On Thu, Sep 18, 2025 at 11:29 AM Pavel Mores <pmo...@redhat.com> wrote:
>
>> On Thu, Sep 18, 2025 at 10:25 AM Martin Kletzander <mklet...@redhat.com>
>> wrote:
>>
>>> On Wed, Sep 17, 2025 at 04:02:12PM +0200, Pavel Mores wrote:
>>> >On Wed, Sep 17, 2025 at 3:05 PM Martin Kletzander <mklet...@redhat.com>
>>> >wrote:
>>> >
>>> >> On Wed, Sep 17, 2025 at 02:14:51PM +0200, Pavel Mores via Users wrote:
>>> >> >Hi,
>>> >> >
>>> >> >I'm examining a domain that's connected to the 'default' network
>>> >> >
>>> >> ># virsh net-dumpxml default
>>> >> ><network connections='1'>
>>> >> >  <name>default</name>
>>> >> >  <uuid>c757baa7-2b31-4794-9dfb-0df384575602</uuid>
>>> >> >  <forward mode='nat'>
>>> >> >    <nat>
>>> >> >      <port start='1024' end='65535'/>
>>> >> >    </nat>
>>> >> >  </forward>
>>> >> >  <bridge name='virbr0' stp='on' delay='0'/>
>>> >> >  <mac address='52:54:00:37:b7:92'/>
>>> >> >  <ip address='192.168.122.1' netmask='255.255.255.0'>
>>> >> >    <dhcp>
>>> >> >      <range start='192.168.122.2' end='192.168.122.254'/>
>>> >> >    </dhcp>
>>> >> >  </ip>
>>> >> ></network>
>>> >> >
>>> >>
>>> >> This is standard.
>>> >>
>>> >> >using a device as follows:
>>> >> >
>>> >> ><interface type='network'>
>>> >> >  <mac address='52:54:00:ed:06:2e'/>
>>> >> >  <source network='default'
>>> portid='83db8ca9-baed-47f3-ba0d-1a967ee86aa5'
>>> >> >bridge='virbr0'/>
>>> >> >  <target dev='vnet19'/>
>>> >> >  <model type='virtio'/>
>>> >> >  <alias name='net0'/>
>>> >> >  <address type='pci' domain='0x0000' bus='0x00' slot='0x02'
>>> >> >function='0x0'/>
>>> >> ></interface>
>>> >> >
>>> >>
>>> >> This looks fine.
>>> >>
>>> >> >The domain is running but apparently without an IP address:
>>> >> >
>>> >> ># virsh domifaddr podvm-podsandbox-totok-8f10756a
>>> >> > Name       MAC address          Protocol     Address
>>> >>
>>> >>
>>> >-------------------------------------------------------------------------------
>>> >> >
>>> >>
>>> >> This shows that libvirt does not know about any IP address.  Does
>>> adding
>>> >> "--source agent", "--source arp" or "--source lease" change anything?
>>> >>
>>> >
>>> >'arp' and 'lease' don't but
>>> >
>>> ># virsh domifaddr --source agent podvm-podsandbox-totok-8f10756a
>>> >error: Failed to query for interfaces addresses
>>> >error: argument unsupported: QEMU guest agent is not configured
>>> >
>>> >This is surprising to me since this is a peer pods setup where the
>>> domain
>>> >in question is a podvm running an image which I was told does have
>>> >the qemu agent running.
>>> >
>>> >However the agent shouldn't be necessary for IP address acquisition I
>>> guess,
>>> >right?
>>> >
>>> >>The requisite host-side interfaces look good (to me anyway :-)):
>>> >> >
>>> >> >10: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb
>>> state UP
>>> >> >group default qlen 1000
>>> >> >    link/ether 52:54:00:37:b7:92 brd ff:ff:ff:ff:ff:ff
>>> >> >    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
>>> >> >       valid_lft forever preferred_lft forever
>>> >> >[...]
>>> >> >35: vnet19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>>> >> master
>>> >> >virbr0 state UNKNOWN group default qlen 1000
>>> >> >    link/ether fe:54:00:ed:06:2e brd ff:ff:ff:ff:ff:ff
>>> >> >    inet6 fe80::fc54:ff:feed:62e/64 scope link proto kernel_ll
>>> >> >       valid_lft forever preferred_lft forever
>>> >> >
>>> >> >I can share more information about the setup if necessary but I'll
>>> stop
>>> >> >here for now since I feel this must be just a simple stupid
>>> oversight on
>>> >> my
>>> >> >part.  Please let me know if you'd like to have additional info.
>>> >> >
>>> >>
>>> >> When this happens to me sometimes, it's most often a firewall issue
>>> and
>>> >> the VM does not get any IP address or cannot communicate outside its
>>> >> network.
>>> >>
>>> >
>>> >I've seen a firewall suggested as a possible culprit, yes, however I
>>> don't
>>> >quite
>>> >know what it should look like.  iptables appear unconfigured:
>>> >
>>> ># iptables -L -v -n
>>> >Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
>>> > pkts bytes target     prot opt in     out     source
>>> >destination
>>> >
>>> >Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>>> > pkts bytes target     prot opt in     out     source
>>> >destination
>>> >
>>> >Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
>>> > pkts bytes target     prot opt in     out     source
>>> >destination
>>> >
>>> >`nft list ruleset` lists only rules that look managed by libvirt
>>> >itself(*).  At any
>>> >rate the host machine has no specific hand-configured firewall that I
>>> know
>>> >of.
>>> >
>>> >
>>> >> What it can be here is that there are some access issues to the
>>> dnsmasq
>>> >> lease file.
>>> >>
>>> >> What's in your /var/lib/libvirt/dnsmasq/virbr0.status file on the
>>> host?
>>> >>
>>> >
>>> >It's empty.
>>> >
>>> >Thanks Martin!
>>> >pvl
>>> >
>>> >(*) # nft list ruleset
>>> >table ip libvirt_network {
>>> >chain forward {
>>> >type filter hook forward priority filter; policy accept;
>>> >counter packets 85854914 bytes 398726525237 jump guest_cross
>>> >counter packets 85854914 bytes 398726525237 jump guest_input
>>> >counter packets 34777368 bytes 3386943972 jump guest_output
>>> >}
>>> >
>>> >chain guest_output {
>>> >ip saddr 192.168.122.0/24 iif "virbr0" counter packets 0 bytes 0 accept
>>>
>>> This suggests there were no incoming packets from an IP address in the
>>> range on the bridge.
>>>
>>> >iif "virbr0" counter packets 0 bytes 0 reject
>>>
>>> And no packets from outside of that range that would fall through to
>>> this above rule.
>>>
>>> [...]
>>>
>>> >}
>>> >
>>> >chain guest_input {
>>>
>>> [...]
>>>
>>> >oif "virbr0" ip daddr 192.168.122.0/24 ct state established,related
>>> counter
>>> >packets 0 bytes 0 accept
>>>
>>> No packets sent to the address range on the bridge, but
>>>
>>> >oif "virbr0" counter packets 0 bytes 0 reject
>>>
>>> basically no packets sent at all.
>>>
>>> >}
>>> >
>>> >chain guest_cross {
>>> >iif "openshift-412" oif "openshift-412" counter packets 0 bytes 0 accept
>>> >iif "openshift-419" oif "openshift-419" counter packets 0 bytes 0 accept
>>> >iif "openshift-416" oif "openshift-416" counter packets 0 bytes 0 accept
>>> >iif "openshift-415" oif "openshift-415" counter packets 0 bytes 0 accept
>>> >iif "openshift-413" oif "openshift-413" counter packets 0 bytes 0 accept
>>> >iif "virbr0" oif "virbr0" counter packets 0 bytes 0 accept
>>>
>>> No intra-network communication
>>>
>>> [...]
>>>
>>> >chain guest_nat {
>>> >type nat hook postrouting priority srcnat; policy accept;
>>>
>>> [...]
>>>
>>> >ip saddr 192.168.122.0/24 ip daddr 224.0.0.0/24 counter packets 50
>>> bytes
>>> >3676 return
>>>
>>> There were some IPv4 multicast packets, but these could've originated
>>> from the host.
>>>
>>> >ip saddr 192.168.122.0/24 ip daddr 255.255.255.255 counter packets 0
>>> bytes
>>> >0 return
>>>
>>> And no broadcast packets from the address space.
>>>
>>> >meta l4proto tcp ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24
>>> >counter packets 0 bytes 0 masquerade to :1024-65535
>>> >meta l4proto udp ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24
>>> >counter packets 0 bytes 0 masquerade to :1024-65535
>>> >ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter packets
>>> 0
>>> >bytes 0 masquerade
>>>
>>> No NAT, anything.
>>>
>>> [...]
>>> >ip saddr 192.168.14.0/24 ip daddr 224.0.0.0/24 counter packets 50 bytes
>>> >3675 return
>>>
>>> These counters on another range are the same, so I would say all the
>>> multicast packets on the range we are interested in are just the same,
>>> hence having nothing to do with the guest.
>>>
>>> >}
>>> >}
>>> >table ip6 libvirt_network {
>>> >chain forward {
>>> >type filter hook forward priority filter; policy accept;
>>> >counter packets 0 bytes 0 jump guest_cross
>>> >counter packets 0 bytes 0 jump guest_input
>>> >counter packets 0 bytes 0 jump guest_output
>>>
>>> And totally nothing with IPv6.
>>>
>>
>> As a bit of context, this is a virtlab machine whose primary purpose is
>> to run
>> kcli-based openshift clusters whose nodes are libvirt domains.  Those are
>> the
>> "openshift-41[1-9]" networks and bridges.  They are unrelated to the
>> setup I'm
>> looking into and most of them are actually obsolete (it's been years now
>> since
>> a 4.11 cluster last ran on the host :-)).
>>
>> My "guess" would be that the guest did not even get an IP address, maybe
>>> did not even try DHCP.  Are you sure the guest booted?
>>>
>>
>> I think it did, based on
>>
>> # virsh list
>>  Id   Name                              State
>> -------------------------------------------------
>> [...]
>>  20   podvm-podsandbox-totok-8f10756a   running
>>
>> But now that you mention it, I'm not positively sure that it tried DHCP.
>> The zero traffic on the virbr0 bridge you mention above is explainable
>> by the domain not having an address, *but* if it did try DHCP, those
>> packets would show up in the virbr0 stats, I guess?
>>
>> I previously checked the DHCP leases on the 'default' network:
>>
>> # virsh net-dhcp-leases default
>>  Expiry Time   MAC address   Protocol   IP address   Hostname   Client ID
>> or DUID
>>
>> -----------------------------------------------------------------------------------
>>
>> and there are none, but that doesn't rule out other failures in DHCP.
>>
>> The domain runs a peer pods podvm image which I don't have any control
>> over and frankly am not familiar with.  I assume that it does do DHCP
>> to configure its interfaces, but as the guest agent example shows, my
>> information about the image might not always be accurate.
>>
>
> I verified that the VM does do DHCP (there actually doesn't seem to be any
> other
> means for a podvm to get its network configured in peer pods).
>
> pvl
>
>
>> Is there a way to check whether the domain attempts DHCP purely from
>> the libvirt side, just using libvirt means?
>>
>> Thanks!
>> pvl
>>
>
