Hi all,

I have a test GPU system that was working properly under Kilo,
running 1- and 2-GPU instance types on an 8-GPU server.

After the Mitaka upgrade it always seems to try to assign the same
device, which is already in use, rather than pick one of the 5
currently available.


 Build of instance 9542cc63-793c-440e-9a57-cc06eb401839 was
 re-scheduled: Requested operation is not valid: PCI device
 0000:09:00.0 is in use by driver QEMU, domain instance-000abefa
 _do_build_and_run_instance
 /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1945

It tries to schedule 5 times, but each time it picks the same busy
device. Since only 3 are currently in use, it would have succeeded if
it had just picked a different one on each attempt.

While trying to debug this I realized I have no idea how devices are
selected. Does OpenStack track which PCI devices are claimed, or is
that a libvirt function? In either case, where would I look to find
out what it thinks the current state is?
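
For reference, here is a rough sketch of what I was planning to run on
the compute node to dump libvirt's side of things, i.e. which PCI
hostdevs each running domain actually holds (assuming libvirt-python is
installed and qemu:///system is the right URI; apologies if this is the
wrong layer to be looking at):

    #!/usr/bin/env python
    # Rough sketch: list the PCI hostdevs attached to each running
    # domain, to compare against whatever nova thinks is free.
    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.open('qemu:///system')
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        root = ElementTree.fromstring(dom.XMLDesc(0))
        for addr in root.findall(
                "./devices/hostdev[@type='pci']/source/address"):
            # Attributes are hex strings like '0x09', so parse base 16.
            print("%s  %04x:%02x:%02x.%x" % (
                dom.name(),
                int(addr.get('domain'), 16),
                int(addr.get('bus'), 16),
                int(addr.get('slot'), 16),
                int(addr.get('function'), 16)))
    conn.close()

If nova keeps its own record of claims somewhere, I'd like to compare
it against that output.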

Thanks,
-Jon
-- 

