On Fri, Jun 1, 2018 at 3:54 PM Stefan Hajnoczi <[email protected]> wrote:
> On Thu, May 31, 2018 at 11:02:01PM +0300, Nir Soffer wrote:
> > On Thu, May 31, 2018 at 1:55 AM Bernhard Dick <[email protected]> wrote:
> > >
> > > Hi,
> > >
> > > I found the reason for my timeout problems: it is the version of librbd1
> > > (which is 0.94.5) in conjunction with my CEPH test environment, which is
> > > running the luminous release.
> > > When I install the librbd1 (and librados2) packages from the
> > > centos-ceph-luminous repository (version 12.2.5) I'm able to start and
> > > migrate VMs between the hosts.
> > >
> >
> > vdsm does not require librbd since qemu brings this dependency, and vdsm
> > does not access ceph directly yet.
> >
> > Maybe qemu should require a newer version of librbd?
>
> Upstream QEMU builds against any librbd version that exports the
> necessary APIs.
>
> The choice of library versions is mostly up to distro package
> maintainers.
>
> Have you filed a bug against Ceph on the distro you are using?

Thanks for clearing this up Stefan.

Bernhard, can you give more info about your Linux version and installed
packages (e.g. qemu-*)?

Nir

> > On 25.05.2018 at 17:08, Bernhard Dick wrote:
> > > > Hi,
> > > >
> > > > as you might already know I try to use ceph with openstack in an oVirt
> > > > test environment. I'm able to create and remove volumes. But if I try to
> > > > run a VM which contains a ceph volume it is in the "Wait for launch"
> > > > state for a very long time. Then it gets into "down" state again. The
> > > > qemu log states
> > > >
> > > > 2018-05-25T15:03:41.100401Z qemu-kvm: -drive
> > > > file=rbd:rbd/volume-3bec499e-d0d0-45ef-86ad-2c187cdb2811:id=cinder:auth_supported=cephx\;none:mon_host=[mon0]\:6789\;[mon1]\:6789,file.password-secret=scsi0-0-0-0-secret0,format=raw,if=none,id=drive-scsi0-0-0-0,serial=3bec499e-d0d0-45ef-86ad-2c187cdb2811,cache=none,werror=stop,rerror=stop,aio=threads:
> > > > error connecting: Connection timed out
> > > >
> > > > 2018-05-25 15:03:41.109+0000: shutting down, reason=failed
> > > >
> > > > On the monitor hosts I see traffic on the ceph-mon port, but not on
> > > > other ports (the OSDs for example). In the ceph logs, however, I don't
> > > > really see what happens.
> > > > Do you have some tips how to debug this problem?
> > > >
> > > > Regards
> > > > Bernhard
> > > _______________________________________________
> > > Users mailing list -- [email protected]
> > > To unsubscribe send an email to [email protected]
> > > Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> > > oVirt Code of Conduct:
> > > https://www.ovirt.org/community/about/community-guidelines/
> > > List Archives:
> > > https://lists.ovirt.org/archives/list/[email protected]/message/N6ODADRIIYRJPSSX23ITWLNQLX3ER3Q4/
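[Editor's note: for readers decoding the `-drive file=rbd:...` line in the qemu log above, the spec has the shape `rbd:<pool>/<image>[:opt=val:opt=val...]`, where `\:` and `\;` escape the colon and semicolon separators (so `mon_host=[mon0]\:6789\;[mon1]\:6789` is one value listing two monitors; `mon0`/`mon1` are the redacted host names from the log). QEMU parses this itself in its rbd block driver; the helper below is only an illustrative sketch of that escaping convention, not qemu's actual code.]

```python
def parse_rbd_spec(spec):
    """Split a qemu rbd drive spec "rbd:pool/image:opt=val:..." into
    (pool, image, options), honoring backslash-escaped ':' and ';'."""
    assert spec.startswith("rbd:")
    body = spec[len("rbd:"):]
    # Split on unescaped ':'; a backslash escapes the next character.
    parts, cur, i = [], [], 0
    while i < len(body):
        c = body[i]
        if c == "\\" and i + 1 < len(body):
            cur.append(body[i + 1])   # keep the escaped char literally
            i += 2
        elif c == ":":
            parts.append("".join(cur))  # end of one colon-separated field
            cur = []
            i += 1
        else:
            cur.append(c)
            i += 1
    parts.append("".join(cur))
    pool, image = parts[0].split("/", 1)
    opts = dict(p.split("=", 1) for p in parts[1:])
    return pool, image, opts

# The spec from the qemu log above (conf-file options after the ',' omitted):
pool, image, opts = parse_rbd_spec(
    r"rbd:rbd/volume-3bec499e-d0d0-45ef-86ad-2c187cdb2811"
    r":id=cinder:auth_supported=cephx\;none"
    r":mon_host=[mon0]\:6789\;[mon1]\:6789"
)
print(pool, image, opts["id"], opts["mon_host"])
```

This makes it easy to confirm which pool, cephx user (`id=cinder`), and monitor addresses qemu is actually trying to reach when the "error connecting: Connection timed out" appears.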
