Re: [ovirt-users] problem while moving/copying disks: vdsm low level image copy failed

2017-07-31 Thread Johan Bernhardsson
I added log snippets from when it fails, and details of how the volume
is set up, to the bug entry.
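For reference, the volume layout was dumped with something along these
lines (the volume names fs02/fs03 are assumed here from the mount paths
in the log; adjust to the actual volume names):

  gluster volume info fs02
  gluster volume status fs02 detail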
/Johan
On Mon, 2017-07-31 at 10:23 +0300, Benny Zlotnik wrote:
> I forgot to add that there is a bug for this issue [1].
> Please add your gluster mount and brick logs to the bug entry.
> 
> [1] - https://bugzilla.redhat.com/show_bug.cgi?id=1458846
> 
> On Sun, Jul 30, 2017 at 3:02 PM, Johan Bernhardsson wrote:
> > OS Version:
> > RHEL - 7 - 3.1611.el7.centos
> > OS Description:
> > CentOS Linux 7 (Core)
> > Kernel Version:
> > 3.10.0 - 514.16.1.el7.x86_64
> > KVM Version:
> > 2.6.0 - 28.el7_3.9.1
> > LIBVIRT Version:
> > libvirt-2.0.0-10.el7_3.9
> > VDSM Version:
> > vdsm-4.19.15-1.el7.centos
> > SPICE Version:
> > 0.12.4 - 20.el7_3
> > GlusterFS Version:
> > glusterfs-3.8.11-1.el7
> > CEPH Version:
> > librbd1-0.94.5-1.el7
> > qemu-img version 2.6.0 (qemu-kvm-ev-2.6.0-28.el7_3.9.1), Copyright
> > (c) 2004-2008 Fabrice Bellard
> > 
> > This is what I have on the hosts.
> > 
> > /Johan
> > 
> > On Sun, 2017-07-30 at 13:56 +0300, Benny Zlotnik wrote:
> > > Hi, 
> > > 
> > > Can you please provide the versions of vdsm, qemu and libvirt?
> > > 
> > > On Sun, Jul 30, 2017 at 1:01 PM, Johan Bernhardsson wrote:
> > > > Hello,
> > > > 
> > > > We get this error message while moving or copying some of the
> > > > disks on our main cluster running 4.1.2 on CentOS 7.
> > > > 
> > > > This is shown in the engine:
> > > > VDSM vbgkvm02 command HSMGetAllTasksStatusesVDS failed: low
> > > > level Image
> > > > copy failed
> > > > 
> > > > I can copy it inside the host, and I can use dd to copy it. I
> > > > haven't tried running qemu-img manually yet.
> > > > 
> > > > 
> > > > This is from vdsm.log on the host:
> > > > 2017-07-28 09:07:22,741+0200 ERROR (tasks/6) [root] Job
> > > > u'c82d4c53-
> > > > 3eb4-405e-a2d5-c4c77519360e' failed (jobs:217)
> > > > Traceback (most recent call last):
> > > >   File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line
> > > > 154, in
> > > > run
> > > > self._run()
> > > >   File "/usr/share/vdsm/storage/sdm/api/copy_data.py", line 88,
> > > > in _run
> > > > self._operation.wait_for_completion()
> > > >   File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line
> > > > 329, in
> > > > wait_for_completion
> > > > self.poll(timeout)
> > > >   File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line
> > > > 324, in
> > > > poll
> > > > self.error)
> > > > QImgError: cmd=['/usr/bin/taskset', '--cpu-list', '0-15',
> > > > '/usr/bin/nice', '-n', '19', '/usr/bin/ionice', '-c', '3',
> > > > '/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw',
> > > > u'/rhev/data-center/mnt/glusterSD/vbgsan02:_fs02/0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/750f4184-b852-4b00-94fc-476f3f5b93c7/3fe43487-3302-4b34-865a-07c5c6aedbf2',
> > > > '-O', 'raw',
> > > > u'/rhev/data-center/mnt/glusterSD/10.137.30.105:_fs03/5d47a297-a21f-4587-bb7c-dd00d52010d5/images/750f4184-b852-4b00-94fc-476f3f5b93c7/3fe43487-3302-4b34-865a-07c5c6aedbf2'],
> > > > ecode=1, stdout=, stderr=qemu-img: error while reading sector
> > > > 12197886: No data available, message=None
> > > > 
> > > > 
> > > > The storage domains are all based on gluster. The storage
> > > > domains that we see this on are configured as dispersed volumes.
> > > > 
> > > > I found a way to "fix" the problem: run dd if=/dev/vda
> > > > of=/dev/null bs=1M inside the virtual guest. After that we can
> > > > copy an image or use storage live migration.
> > > > 
> > > > Is this a gluster problem or a vdsm problem? Or could it be
> > > > something with qemu-img?
> > > > 
> > > > /Johan
> > > > ___
> > > > Users mailing list
> > > > Users@ovirt.org
> > > > http://lists.ovirt.org/mailman/listinfo/users
> > > > ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] problem while moving/copying disks: vdsm low level image copy failed

2017-07-31 Thread Benny Zlotnik
I forgot to add that there is a bug for this issue [1].
Please add your gluster mount and brick logs to the bug entry.

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1458846
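For reference, on the hypervisor the FUSE mount log is usually under
/var/log/glusterfs/ (named after the mount point), and the brick logs
live on the gluster servers under /var/log/glusterfs/bricks/. Something
like this should find them (exact file names will vary):

  # on the oVirt host (gluster client)
  ls /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log
  # on each gluster server
  ls /var/log/glusterfs/bricks/*.log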

On Sun, Jul 30, 2017 at 3:02 PM, Johan Bernhardsson  wrote:

> OS Version:
> RHEL - 7 - 3.1611.el7.centos
> OS Description:
> CentOS Linux 7 (Core)
> Kernel Version:
> 3.10.0 - 514.16.1.el7.x86_64
> KVM Version:
> 2.6.0 - 28.el7_3.9.1
> LIBVIRT Version:
> libvirt-2.0.0-10.el7_3.9
> VDSM Version:
> vdsm-4.19.15-1.el7.centos
> SPICE Version:
> 0.12.4 - 20.el7_3
> GlusterFS Version:
> glusterfs-3.8.11-1.el7
> CEPH Version:
> librbd1-0.94.5-1.el7
>
> qemu-img version 2.6.0 (qemu-kvm-ev-2.6.0-28.el7_3.9.1), Copyright (c)
> 2004-2008 Fabrice Bellard
>
> This is what I have on the hosts.
>
> /Johan
>
> On Sun, 2017-07-30 at 13:56 +0300, Benny Zlotnik wrote:
>
> Hi,
>
> Can you please provide the versions of vdsm, qemu and libvirt?
>
> On Sun, Jul 30, 2017 at 1:01 PM, Johan Bernhardsson wrote:
>
> Hello,
>
> We get this error message while moving or copying some of the disks on
> our main cluster running 4.1.2 on CentOS 7.
>
> This is shown in the engine:
> VDSM vbgkvm02 command HSMGetAllTasksStatusesVDS failed: low level Image
> copy failed
>
> I can copy it inside the host, and I can use dd to copy it. I haven't
> tried running qemu-img manually yet.
>
>
> This is from vdsm.log on the host:
> 2017-07-28 09:07:22,741+0200 ERROR (tasks/6) [root] Job u'c82d4c53-
> 3eb4-405e-a2d5-c4c77519360e' failed (jobs:217)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 154, in
> run
> self._run()
>   File "/usr/share/vdsm/storage/sdm/api/copy_data.py", line 88, in _run
> self._operation.wait_for_completion()
>   File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 329, in
> wait_for_completion
> self.poll(timeout)
>   File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 324, in
> poll
> self.error)
> QImgError: cmd=['/usr/bin/taskset', '--cpu-list', '0-15',
> '/usr/bin/nice', '-n', '19', '/usr/bin/ionice', '-c', '3',
> '/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw',
> u'/rhev/data-center/mnt/glusterSD/vbgsan02:_fs02/0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/750f4184-b852-4b00-94fc-476f3f5b93c7/3fe43487-3302-4b34-865a-07c5c6aedbf2',
> '-O', 'raw',
> u'/rhev/data-center/mnt/glusterSD/10.137.30.105:_fs03/5d47a297-a21f-4587-bb7c-dd00d52010d5/images/750f4184-b852-4b00-94fc-476f3f5b93c7/3fe43487-3302-4b34-865a-07c5c6aedbf2'],
> ecode=1, stdout=, stderr=qemu-img: error while reading sector 12197886:
> No data available, message=None
>
>
> The storage domains are all based on gluster. The storage domains that
> we see this on are configured as dispersed volumes.
>
> I found a way to "fix" the problem: run dd if=/dev/vda of=/dev/null
> bs=1M inside the virtual guest. After that we can copy an image or use
> storage live migration.
>
> Is this a gluster problem or a vdsm problem? Or could it be something
> with qemu-img?
>
> /Johan
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] problem while moving/copying disks: vdsm low level image copy failed

2017-07-30 Thread Johan Bernhardsson
OS Version:
RHEL - 7 - 3.1611.el7.centos
OS Description:
CentOS Linux 7 (Core)
Kernel Version:
3.10.0 - 514.16.1.el7.x86_64
KVM Version:
2.6.0 - 28.el7_3.9.1
LIBVIRT Version:
libvirt-2.0.0-10.el7_3.9
VDSM Version:
vdsm-4.19.15-1.el7.centos
SPICE Version:
0.12.4 - 20.el7_3
GlusterFS Version:
glusterfs-3.8.11-1.el7
CEPH Version:
librbd1-0.94.5-1.el7

qemu-img version 2.6.0 (qemu-kvm-ev-2.6.0-28.el7_3.9.1), Copyright (c)
2004-2008 Fabrice Bellard
This is what I have on the hosts.
/Johan
On Sun, 2017-07-30 at 13:56 +0300, Benny Zlotnik wrote:
> Hi, 
> 
> Can you please provide the versions of vdsm, qemu and libvirt?
> 
> On Sun, Jul 30, 2017 at 1:01 PM, Johan Bernhardsson wrote:
> > Hello,
> > 
> > We get this error message while moving or copying some of the disks
> > on our main cluster running 4.1.2 on CentOS 7.
> > 
> > This is shown in the engine:
> > VDSM vbgkvm02 command HSMGetAllTasksStatusesVDS failed: low level
> > Image
> > copy failed
> > 
> > I can copy it inside the host, and I can use dd to copy it. I haven't
> > tried running qemu-img manually yet.
> > 
> > 
> > This is from vdsm.log on the host:
> > 2017-07-28 09:07:22,741+0200 ERROR (tasks/6) [root] Job u'c82d4c53-
> > 3eb4-405e-a2d5-c4c77519360e' failed (jobs:217)
> > Traceback (most recent call last):
> >   File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 154,
> > in
> > run
> > self._run()
> >   File "/usr/share/vdsm/storage/sdm/api/copy_data.py", line 88, in
> > _run
> > self._operation.wait_for_completion()
> >   File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line
> > 329, in
> > wait_for_completion
> > self.poll(timeout)
> >   File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line
> > 324, in
> > poll
> > self.error)
> > QImgError: cmd=['/usr/bin/taskset', '--cpu-list', '0-15',
> > '/usr/bin/nice', '-n', '19', '/usr/bin/ionice', '-c', '3',
> > '/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw',
> > u'/rhev/data-center/mnt/glusterSD/vbgsan02:_fs02/0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/750f4184-b852-4b00-94fc-476f3f5b93c7/3fe43487-3302-4b34-865a-07c5c6aedbf2',
> > '-O', 'raw',
> > u'/rhev/data-center/mnt/glusterSD/10.137.30.105:_fs03/5d47a297-a21f-4587-bb7c-dd00d52010d5/images/750f4184-b852-4b00-94fc-476f3f5b93c7/3fe43487-3302-4b34-865a-07c5c6aedbf2'],
> > ecode=1, stdout=, stderr=qemu-img: error while reading sector
> > 12197886: No data available, message=None
> > 
> > 
> > The storage domains are all based on gluster. The storage domains
> > that we see this on are configured as dispersed volumes.
> > 
> > I found a way to "fix" the problem: run dd if=/dev/vda of=/dev/null
> > bs=1M inside the virtual guest. After that we can copy an image or
> > use storage live migration.
> > 
> > Is this a gluster problem or a vdsm problem? Or could it be something
> > with qemu-img?
> > 
> > /Johan
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> > ___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] problem while moving/copying disks: vdsm low level image copy failed

2017-07-30 Thread Benny Zlotnik
Hi,

Can you please provide the versions of vdsm, qemu and libvirt?
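For example, something like this on one of the hosts should be enough
(package names may differ slightly depending on the repos in use):

  rpm -qa | egrep -i 'vdsm|qemu|libvirt|glusterfs'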

On Sun, Jul 30, 2017 at 1:01 PM, Johan Bernhardsson  wrote:

> Hello,
>
> We get this error message while moving or copying some of the disks on
> our main cluster running 4.1.2 on CentOS 7.
>
> This is shown in the engine:
> VDSM vbgkvm02 command HSMGetAllTasksStatusesVDS failed: low level Image
> copy failed
>
> I can copy it inside the host, and I can use dd to copy it. I haven't
> tried running qemu-img manually yet.
>
>
> This is from vdsm.log on the host:
> 2017-07-28 09:07:22,741+0200 ERROR (tasks/6) [root] Job u'c82d4c53-
> 3eb4-405e-a2d5-c4c77519360e' failed (jobs:217)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 154, in
> run
> self._run()
>   File "/usr/share/vdsm/storage/sdm/api/copy_data.py", line 88, in _run
> self._operation.wait_for_completion()
>   File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 329, in
> wait_for_completion
> self.poll(timeout)
>   File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 324, in
> poll
> self.error)
> QImgError: cmd=['/usr/bin/taskset', '--cpu-list', '0-15',
> '/usr/bin/nice', '-n', '19', '/usr/bin/ionice', '-c', '3',
> '/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw',
> u'/rhev/data-center/mnt/glusterSD/vbgsan02:_fs02/0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/750f4184-b852-4b00-94fc-476f3f5b93c7/3fe43487-3302-4b34-865a-07c5c6aedbf2',
> '-O', 'raw',
> u'/rhev/data-center/mnt/glusterSD/10.137.30.105:_fs03/5d47a297-a21f-4587-bb7c-dd00d52010d5/images/750f4184-b852-4b00-94fc-476f3f5b93c7/3fe43487-3302-4b34-865a-07c5c6aedbf2'],
> ecode=1, stdout=, stderr=qemu-img: error while reading sector 12197886:
> No data available, message=None
>
>
> The storage domains are all based on gluster. The storage domains that
> we see this on are configured as dispersed volumes.
>
> I found a way to "fix" the problem: run dd if=/dev/vda of=/dev/null
> bs=1M inside the virtual guest. After that we can copy an image or use
> storage live migration.
>
> Is this a gluster problem or a vdsm problem? Or could it be something
> with qemu-img?
>
> /Johan
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] problem while moving/copying disks: vdsm low level image copy failed

2017-07-30 Thread Johan Bernhardsson
Hello,

We get this error message while moving or copying some of the disks on
our main cluster running 4.1.2 on CentOS 7.

This is shown in the engine:
VDSM vbgkvm02 command HSMGetAllTasksStatusesVDS failed: low level Image
copy failed

I can copy it inside the host, and I can use dd to copy it. I haven't
tried running qemu-img manually yet.
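If someone wants to try it by hand, it should be possible to run roughly
the same command vdsm uses, or just exercise the read side with dd in
O_DIRECT mode (qemu-img's '-t none -T none' means direct I/O). The paths
below are placeholders for the image path shown in the log:

  # read the source image with direct I/O, like qemu-img does with -T none
  dd if=/rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-uuid>/images/<img-uuid>/<vol-uuid> \
     of=/dev/null bs=1M iflag=direct

  # or re-run the convert itself into a scratch file
  qemu-img convert -p -t none -T none -f raw <source-image> -O raw /tmp/scratch.raw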


This is from vdsm.log on the host:
2017-07-28 09:07:22,741+0200 ERROR (tasks/6) [root] Job u'c82d4c53-
3eb4-405e-a2d5-c4c77519360e' failed (jobs:217)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/jobs.py", line 154, in
run
self._run()
  File "/usr/share/vdsm/storage/sdm/api/copy_data.py", line 88, in _run
self._operation.wait_for_completion()
  File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 329, in
wait_for_completion
self.poll(timeout)
  File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 324, in
poll
self.error)
QImgError: cmd=['/usr/bin/taskset', '--cpu-list', '0-15',
'/usr/bin/nice', '-n', '19', '/usr/bin/ionice', '-c', '3',
'/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw',
u'/rhev/data-center/mnt/glusterSD/vbgsan02:_fs02/0924ff77-ef51-435b-b90d-50bfbf2e8de7/images/750f4184-b852-4b00-94fc-476f3f5b93c7/3fe43487-3302-4b34-865a-07c5c6aedbf2',
'-O', 'raw',
u'/rhev/data-center/mnt/glusterSD/10.137.30.105:_fs03/5d47a297-a21f-4587-bb7c-dd00d52010d5/images/750f4184-b852-4b00-94fc-476f3f5b93c7/3fe43487-3302-4b34-865a-07c5c6aedbf2'],
ecode=1, stdout=, stderr=qemu-img: error while reading sector 12197886:
No data available, message=None


The storage domains are all based on gluster. The storage domains that
we see this on are configured as dispersed volumes.

I found a way to "fix" the problem: run dd if=/dev/vda of=/dev/null
bs=1M inside the virtual guest. After that we can copy an image or use
storage live migration.
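It may also be worth checking whether the dispersed volume reports
anything pending heal before and after that dd run, with something like
(volume name assumed from the mount path):

  gluster volume heal fs02 info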

Is this a gluster problem or a vdsm problem? Or could it be something
with qemu-img?

/Johan
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users