On Mon, Aug 9, 2021 at 1:43 PM Nir Soffer <nsof...@redhat.com> wrote:
>
> On Mon, Aug 9, 2021 at 10:35 AM Yedidyah Bar David <d...@redhat.com> wrote:
> >
> > On Sun, Aug 8, 2021 at 5:42 PM Code Review <ger...@ovirt.org> wrote:
> > >
> > > From Jenkins CI <jenk...@ovirt.org>:
> > >
> > > Jenkins CI has posted comments on this change. ( 
> > > https://gerrit.ovirt.org/c/ovirt-system-tests/+/115392 )
> > >
> > > Change subject: HE: Use node image
> > > ......................................................................
> > >
> > >
> > > Patch Set 13: Continuous-Integration-1
> > >
> > > Build Failed
> >
> > While trying to deactivate a host, the engine tried to migrate a VM
> > (vm0) from host-0 to host-1. The vdsm log on host-0 says:
> >
> > 2021-08-08 14:31:10,076+0000 ERROR (migsrc/cde311f9) [virt.vm]
> > (vmId='cde311f9-9a33-4eb9-8338-fa22ff49edc2') Failed to migrate (migration:503)
> > Traceback (most recent call last):
> >   File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 477, in _regular_run
> >     time.time(), machineParams
> >   File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 578, in _startUnderlyingMigration
> >     self._perform_with_conv_schedule(duri, muri)
> >   File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 667, in _perform_with_conv_schedule
> >     self._perform_migration(duri, muri)
> >   File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 596, in _perform_migration
> >     self._migration_flags)
> >   File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 159, in call
> >     return getattr(self._vm._dom, name)(*a, **kw)
> >   File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f
> >     ret = attr(*args, **kwargs)
> >   File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
> >     ret = f(*args, **kwargs)
> >   File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
> >     return func(inst, *args, **kwargs)
> >   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 2126, in migrateToURI3
> >     raise libvirtError('virDomainMigrateToURI3() failed')
> > libvirt.libvirtError: Unsafe migration: Migration without shared storage is unsafe
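For context, libvirt reports "Unsafe migration" when it cannot verify that both hosts see the same storage backing the disks (or when a disk cache mode is incompatible with migration). As a rough sketch, one can check on each host whether the mount point from the metadata dump below is actually mounted and what filesystem type it is (the path is taken from the dump; adjust as needed):

```shell
# Sketch: check whether the storage backing the disk looks mounted/shared
# on this host. The mount point is taken from the vdsm metadata dump.
MNT="/rhev/data-center/mnt/192.168.200.2:_exports_nfs_share1"
stat -f -c %T "$MNT" 2>/dev/null || echo "path not mounted on this host"
```

If one host prints `nfs` and the other prints "path not mounted on this host", that alone would explain why libvirt refuses the migration.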
>
> Please share the vm xml:
>
>     sudo virsh -r dumpxml vm-name

I think you should be able to find a dump of it in vdsm.log:

https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/18650/artifact/check-patch.he-basic_suite_master.el8.x86_64/test_logs/ost-he-basic-suite-master-host-0/var/log/vdsm/vdsm.log
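For anyone chasing this locally, a sketch for pulling the migration-related lines out of a downloaded copy of that log (log format assumed from the snippets quoted in this thread; a tiny stand-in log is created here so the snippet is self-contained — point LOG at the real file instead):

```shell
# Sketch: filter migration-related lines from a local copy of vdsm.log.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2021-08-08 14:31:08,350+0000 DEBUG (jsonrpc/4) [jsonrpc.JsonRpcServer] Calling 'VM.migrate' in bridge
2021-08-08 14:31:08,387+0000 DEBUG (migsrc/cde311f9) [virt.metadata.Descriptor] dumped metadata
2021-08-08 14:31:10,076+0000 ERROR (migsrc/cde311f9) [virt.vm] Failed to migrate (migration:503)
EOF
# The 'migsrc/' thread name tags all source-side migration activity.
grep -E "VM\.migrate|migsrc/" "$LOG"
```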

I think the first log line of the migration attempt is:

2021-08-08 14:31:08,350+0000 DEBUG (jsonrpc/4) [jsonrpc.JsonRpcServer] Calling 'VM.migrate' in bridge with {'vmID': 'cde311f9-9a33-4eb9-8338-fa22ff49edc2', 'params':

A few lines later:

2021-08-08 14:31:08,387+0000 DEBUG (migsrc/cde311f9) [virt.metadata.Descriptor] dumped metadata for cde311f9-9a33-4eb9-8338-fa22ff49edc2: <?xml version='1.0' encoding='utf-8'?>
<vm>
    <balloonTarget type="int">98304</balloonTarget>
    <ballooningEnabled>true</ballooningEnabled>
    <clusterVersion>4.6</clusterVersion>
    <destroy_on_reboot type="bool">False</destroy_on_reboot>
    <guestAgentAPIVersion type="int">0</guestAgentAPIVersion>
    <jobs>{}</jobs>
    <launchPaused>false</launchPaused>
    <memGuaranteedSize type="int">96</memGuaranteedSize>
    <minGuaranteedMemoryMb type="int">96</minGuaranteedMemoryMb>
    <resumeBehavior>auto_resume</resumeBehavior>
    <startTime type="float">1628431993.720967</startTime>
    <device alias="ua-b8dc7873-189b-41aa-8bbd-2994772700ff" mac_address="02:00:00:00:00:00">
        <network>ovirtmgmt</network>
    </device>
    <device alias="ua-fbd3927c-cb42-4b3a-9f31-9c9c06f4e868" mac_address="02:00:00:00:00:02">
        <network>;vdsmdummy;</network>
    </device>
    <device devtype="disk" name="sda">
        <GUID>36001405bc9d94e4419b4b80a2f702e2f</GUID>
        <imageID>36001405bc9d94e4419b4b80a2f702e2f</imageID>
        <managed type="bool">False</managed>
    </device>
    <device devtype="disk" name="vda">
        <domainID>46fa5761-bb9e-46be-8f1c-35f4b03d0203</domainID>
        <imageID>20002ad2-4a97-4d2f-b3fc-c103477b5b91</imageID>
        <managed type="bool">False</managed>
        <poolID>7d97ea80-f849-11eb-ac79-5452d501341a</poolID>
        <volumeID>614abd56-4d4f-4412-aa2a-3f7bad2f3a87</volumeID>
        <specParams>
            <pinToIoThread>1</pinToIoThread>
        </specParams>
        <volumeChain>
            <volumeChainNode>
                <domainID>46fa5761-bb9e-46be-8f1c-35f4b03d0203</domainID>
                <imageID>20002ad2-4a97-4d2f-b3fc-c103477b5b91</imageID>
                <leaseOffset type="int">0</leaseOffset>
                <leasePath>/rhev/data-center/mnt/192.168.200.2:_exports_nfs_share1/46fa5761-bb9e-46be-8f1c-35f4b03d0203/images/20002ad2-4a97-4d2f-b3fc-c103477b5b91/1d3f07dc-b481-492f-a2a6-7c46689d82ba.lease</leasePath>
                <path>/rhev/data-center/mnt/192.168.200.2:_exports_nfs_share1/46fa5761-bb9e-46be-8f1c-35f4b03d0203/images/20002ad2-4a97-4d2f-b3fc-c103477b5b91/1d3f07dc-b481-492f-a2a6-7c46689d82ba</path>
                <volumeID>1d3f07dc-b481-492f-a2a6-7c46689d82ba</volumeID>
            </volumeChainNode>
            <volumeChainNode>
                <domainID>46fa5761-bb9e-46be-8f1c-35f4b03d0203</domainID>
                <imageID>20002ad2-4a97-4d2f-b3fc-c103477b5b91</imageID>
                <leaseOffset type="int">0</leaseOffset>
                <leasePath>/rhev/data-center/mnt/192.168.200.2:_exports_nfs_share1/46fa5761-bb9e-46be-8f1c-35f4b03d0203/images/20002ad2-4a97-4d2f-b3fc-c103477b5b91/614abd56-4d4f-4412-aa2a-3f7bad2f3a87.lease</leasePath>
                <path>/rhev/data-center/mnt/192.168.200.2:_exports_nfs_share1/46fa5761-bb9e-46be-8f1c-35f4b03d0203/images/20002ad2-4a97-4d2f-b3fc-c103477b5b91/614abd56-4d4f-4412-aa2a-3f7bad2f3a87</path>
                <volumeID>614abd56-4d4f-4412-aa2a-3f7bad2f3a87</volumeID>
            </volumeChainNode>
            <volumeChainNode>
                <domainID>46fa5761-bb9e-46be-8f1c-35f4b03d0203</domainID>
                <imageID>20002ad2-4a97-4d2f-b3fc-c103477b5b91</imageID>
                <leaseOffset type="int">0</leaseOffset>
                <leasePath>/rhev/data-center/mnt/192.168.200.2:_exports_nfs_share1/46fa5761-bb9e-46be-8f1c-35f4b03d0203/images/20002ad2-4a97-4d2f-b3fc-c103477b5b91/a4309ef3-01bb-45db-8bf7-0f9498a7feeb.lease</leasePath>
                <path>/rhev/data-center/mnt/192.168.200.2:_exports_nfs_share1/46fa5761-bb9e-46be-8f1c-35f4b03d0203/images/20002ad2-4a97-4d2f-b3fc-c103477b5b91/a4309ef3-01bb-45db-8bf7-0f9498a7feeb</path>
                <volumeID>a4309ef3-01bb-45db-8bf7-0f9498a7feeb</volumeID>
            </volumeChainNode>
        </volumeChain>
    </device>
    <device devtype="disk" name="sdc">
        <managed type="bool">False</managed>
    </device>
</vm>
 (metadata:517)
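To make the disk layout above easier to eyeball, here is a small sketch that parses the metadata dump and prints the volume chain of the vda disk (the XML string is a trimmed, hypothetical copy of the dump above, with paths shortened; the element names match what vdsm emits in this log):

```python
# Sketch: list the volume chain of the vda disk from a vdsm metadata dump.
import xml.etree.ElementTree as ET

# Trimmed copy of the metadata dump above (paths shortened for brevity).
metadata = """<vm>
    <device devtype="disk" name="vda">
        <domainID>46fa5761-bb9e-46be-8f1c-35f4b03d0203</domainID>
        <volumeChain>
            <volumeChainNode>
                <volumeID>1d3f07dc-b481-492f-a2a6-7c46689d82ba</volumeID>
                <path>/rhev/data-center/mnt/192.168.200.2:_exports_nfs_share1/img/1d3f07dc-b481-492f-a2a6-7c46689d82ba</path>
            </volumeChainNode>
            <volumeChainNode>
                <volumeID>614abd56-4d4f-4412-aa2a-3f7bad2f3a87</volumeID>
                <path>/rhev/data-center/mnt/192.168.200.2:_exports_nfs_share1/img/614abd56-4d4f-4412-aa2a-3f7bad2f3a87</path>
            </volumeChainNode>
        </volumeChain>
    </device>
</vm>"""

root = ET.fromstring(metadata)
# Find the vda disk device and walk its volume chain.
vda = next(d for d in root.findall("device")
           if d.get("devtype") == "disk" and d.get("name") == "vda")
volumes = [(n.findtext("volumeID"), n.findtext("path"))
           for n in vda.findall("./volumeChain/volumeChainNode")]
for vol_id, path in volumes:
    print(vol_id, path)
```

Note that all the volume paths sit under /rhev/data-center/mnt/192.168.200.2:_exports_nfs_share1, i.e. an NFS mount, which is why the "without shared storage" complaint from libvirt is surprising here.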
-- 
Didi
_______________________________________________
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VY2GGW6UW4KRNBGM6SKTFHDA44PRFQ5Y/
