[ovirt-users] Re: Bad volume specification

2020-09-18 Thread Facundo Garat
Vojtech, thanks for this info, I will try that.

On Fri, 18 Sep 2020 at 6:31 AM Vojtech Juranek  wrote:

> [...]
[ovirt-users] Re: Bad volume specification

2020-09-18 Thread Vojtech Juranek
On Thursday 17 September 2020 16:07:16 CEST, Facundo Garat wrote:
> I don't think so. We have bigger LUNs assigned, so if that were the case we
> would have lost access to many LVs.
> 
> Is there a way to manually query the LV's content to find out if it's still
> there?

You can SSH to a host and check the LV:

lvs 55327311-e47c-46b5-b168-258c5924757b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f --config 'devices {filter=["a|.*|"]}'

and the path:

/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f

If the path exists, you can run qemu-img info on it.
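The two checks above can be sketched as one small script. The UUIDs are the ones from this thread, and the existence guard makes it harmless to paste on a machine that is not an oVirt host:

```shell
#!/bin/sh
# Sketch only: rebuild the path VDSM uses for a block-storage volume and,
# if it exists on this host, inspect it. UUIDs come from this thread.
SD_UUID=55327311-e47c-46b5-b168-258c5924757b
IMG_UUID=f5bd2e15-a1ab-4724-883a-988b4dc7985b
VOL_UUID=bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f
LV_PATH=/rhev/data-center/mnt/blockSD/$SD_UUID/images/$IMG_UUID/$VOL_UUID

if [ -e "$LV_PATH" ]; then
    # LV metadata as seen by LVM (the filter tells LVM to scan all devices)
    lvs "$SD_UUID/$VOL_UUID" --config 'devices {filter=["a|.*|"]}'
    # image header info; the LV must be active for this to work
    qemu-img info "$LV_PATH"
else
    echo "no such path: $LV_PATH"
fi
```

Note that on block storage the LV may exist but be inactive, in which case the /rhev path is absent while lvs can still see the LV.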

Looking further into the vdsm log, it seems that volume 
bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f
is not part of the disk image (f5bd2e15-a1ab-4724-883a-988b4dc7985b).

Unfortunately I have no idea how it got into such a state; maybe the engine 
DB is out of sync with storage?
You can check on the engine by connecting to the postgres DB "engine" and 
running

select image_group_id from images where 
image_guid='bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f';

to see which image this volume belongs to according to the engine DB.
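Run from the engine machine, this check could look like the sketch below. The sudo/psql invocation is an assumption (a standard local engine DB setup); adjust it to your installation:

```shell
#!/bin/sh
# Sketch: ask the engine DB which disk image this volume belongs to.
# The psql form assumes a default local "engine" DB -- adjust as needed.
QUERY="select image_group_id from images where image_guid='bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f';"

# On the engine host (not executed here):
#   sudo -u postgres psql -d engine -c "$QUERY"
echo "$QUERY"
```

If the returned image_group_id differs from the image directory the volume sits in on storage, the engine DB and storage really are out of sync.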

> On Thu, Sep 17, 2020 at 10:55 AM Vojtech Juranek  wrote:
> > [...]

[ovirt-users] Re: Bad volume specification

2020-09-17 Thread Facundo Garat
I don't think so. We have bigger LUNs assigned, so if that were the case we
would have lost access to many LVs.

Is there a way to manually query the LV's content to find out if it's still
there?

On Thu, Sep 17, 2020 at 10:55 AM Vojtech Juranek  wrote:

> [...]

[ovirt-users] Re: Bad volume specification

2020-09-17 Thread Vojtech Juranek
On Thursday 17 September 2020 14:30:12 CEST, Facundo Garat wrote:
> Hi Vojtech,
>   find the log attached.

Thanks. It fails because one of the image volumes 
(6058a880-9ee6-4c57-9db0-5946c6dab676) is not available/present on the 
storage. You can check manually whether such an LV exists, but it's not very 
likely. Didn't you remove the volume by accident?
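That manual check could be sketched like this. The storage-domain UUID is the one seen elsewhere in this thread and is an assumption here, since this message doesn't name the domain for this volume:

```shell
#!/bin/sh
# Sketch: check whether the volume's LV still exists in the storage-domain VG.
# SD_UUID is taken from elsewhere in this thread -- an assumption.
SD_UUID=55327311-e47c-46b5-b168-258c5924757b
VOL_UUID=6058a880-9ee6-4c57-9db0-5946c6dab676

if command -v lvs >/dev/null 2>&1; then
    # the filter tells LVM to scan all devices, as oVirt hosts usually
    # restrict scanning in lvm.conf
    lvs "$SD_UUID/$VOL_UUID" --config 'devices {filter=["a|.*|"]}' \
        2>/dev/null && echo "LV exists" || echo "LV missing"
fi
```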

> On Thu, Sep 17, 2020 at 4:27 AM Vojtech Juranek  wrote:
> > [...]

[ovirt-users] Re: Bad volume specification

2020-09-17 Thread Vojtech Juranek
Hi,
could you please also send us the relevant part of the vdsm log
(/var/log/vdsm/vdsm.log)?
Thanks
Vojta
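A quick way to pull the relevant part of vdsm.log is to grep for the correlation ID that engine.log shows for the failed command; vdsm typically logs it as the flow ID, though that propagation is an assumption here. The ID below is the one from the engine.log excerpt later in this thread:

```shell
#!/bin/sh
# Sketch: extract vdsm.log lines for the failed hot-plug, keyed on the
# engine correlation ID (usually logged by vdsm as flow_id).
FLOW_ID=dd72c8e8-cdbe-470f-8e32-b3d14b96f37a
grep -n "$FLOW_ID" /var/log/vdsm/vdsm.log 2>/dev/null | head -n 40
```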


> [...]

[ovirt-users] Re: Bad volume specification

2020-09-16 Thread Facundo Garat
The VM has one snapshot which I can't delete because it shows a similar
error. That doesn't allow me to attach the disks to another VM. This VM
will boot ok if the disks are deactivated.

Find the engine.log attached.

The steps associated with the engine log:

   - The VM is booted from CD with all disks deactivated
   - Try to attach all three disks (fail!)
   - Power off the VM
   - Activate all three disks
   - Try to delete the snapshot.

Thanks.

On Wed, Sep 16, 2020 at 9:35 AM Ahmad Khiet  wrote:

> Hi,
>
> Can you please attach the engine log? What steps did you take before
> this error appeared? Did you try to create a snapshot that failed before?
>
> [...]

[ovirt-users] Re: Bad volume specification

2020-09-16 Thread Ahmad Khiet
Hi,

Can you please attach the engine log? What steps did you take before
this error appeared? Did you try to create a snapshot that failed before?


On Wed, Sep 16, 2020 at 7:49 AM Strahil Nikolov via Users 
wrote:

> What happens if you create another VM and attach the disks to it?
> Does it boot properly?
>
> Best Regards,
> Strahil Nikolov
>
> On Wednesday, 16 September 2020 at 02:19:26 GMT+3, Facundo Garat <
> fga...@gmail.com> wrote:
>
> Hi all,
>  I'm having some issues with one VM. The VM won't start and it's showing
> problems with the virtual disks, so I started the VM without any disks and
> tried hot-adding the disk, and that fails too.
>
>  The servers are connected through FC; all the other VMs are working fine.
>
>   Any ideas?
>
> Thanks!!
>
> PS: The engine.log is showing this:
> 2020-09-15 20:10:37,926-03 INFO
>  [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (default
> task-168) [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[f5bd2e15-a1ab-4724-883a-988b4dc7985b=DISK]',
> sharedLocks='[71db02c2-df29-4552-8a7e-cb8bb429a2ac=VM]'}'
> 2020-09-15 20:10:38,082-03 INFO
>  [org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand]
> (EE-ManagedThreadFactory-engine-Thread-36528)
> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Running command:
> HotPlugDiskToVmCommand internal: false. Entities affected :  ID:
> 71db02c2-df29-4552-8a7e-cb8bb429a2ac Type: VMAction group
> CONFIGURE_VM_STORAGE with role type USER
> 2020-09-15 20:10:38,117-03 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-36528)
> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] START,
> HotPlugDiskVDSCommand(HostName = nodo2,
> HotPlugDiskVDSParameters:{hostId='1c24c269-76c3-468d-a7ce-d0332beb7aef',
> vmId='71db02c2-df29-4552-8a7e-cb8bb429a2ac',
> diskId='f5bd2e15-a1ab-4724-883a-988b4dc7985b', addressMap='null'}), log id:
> f57ee9e
> 2020-09-15 20:10:38,125-03 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-36528)
> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Disk hot-plug: <?xml version="1.0" encoding="UTF-8"?>
> <hotplug>
>   <devices>
>     <disk snapshot="no" type="block" device="disk">
>       <source dev="/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f"/>
>       <target dev="vda" bus="virtio"/>
>       <driver name="qemu" type="qcow2" cache="none"/>
>       <alias name="ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b"/>
>       <serial>f5bd2e15-a1ab-4724-883a-988b4dc7985b</serial>
>     </disk>
>   </devices>
>   <metadata xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
>     <ovirt-vm:vm>
>       <ovirt-vm:device devtype="disk" name="vda">
>         <ovirt-vm:poolID>0001-0001-0001-0001-0311</ovirt-vm:poolID>
>         <ovirt-vm:volumeID>bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f</ovirt-vm:volumeID>
>         <ovirt-vm:imageID>f5bd2e15-a1ab-4724-883a-988b4dc7985b</ovirt-vm:imageID>
>         <ovirt-vm:domainID>55327311-e47c-46b5-b168-258c5924757b</ovirt-vm:domainID>
>       </ovirt-vm:device>
>     </ovirt-vm:vm>
>   </metadata>
> </hotplug>
>
> 2020-09-15 20:10:38,289-03 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-36528)
> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Failed in 'HotPlugDiskVDS' method
> 2020-09-15 20:10:38,295-03 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-36528)
> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM nodo2 command HotPlugDiskVDS
> failed: General Exception: ("Bad volume specification {'device': 'disk',
> 'type': 'disk', 'diskType': 'block', 'specParams': {}, 'alias':
> 'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
> '55327311-e47c-46b5-b168-258c5924757b', 'imageID':
> 'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
> '0001-0001-0001-0001-0311', 'volumeID':
> 'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
> '/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
> 'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
> 'none', 'iface': 'virtio', 'name': 'vda', 'serial':
> 'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",)
> 2020-09-15 20:10:38,295-03 INFO
>  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand]
> (EE-ManagedThreadFactory-engine-Thread-36528)
> [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Command
> 'org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return
> value 'StatusOnlyReturn [status=Status [code=100, message=General
> Exception: ("Bad volume specification {'device': 'disk', 'type': 'disk',
> 'diskType': 'block', 'specParams': {}, 'alias':
> 'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID':
> '55327311-e47c-46b5-b168-258c5924757b', 'imageID':
> 'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID':
> '0001-0001-0001-0001-0311', 'volumeID':
> 'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path':
> '/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
> 'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache':
> 'none', 'iface': 'virtio', 'name': 'vda', 'serial':

[ovirt-users] Re: Bad volume specification

2020-09-15 Thread Strahil Nikolov via Users
What happens if you create another VM and attach the disks to it ?
Does it boot properly ?

Best Regards,
Strahil Nikolov






On Wednesday, 16 September 2020 at 02:19:26 GMT+3, Facundo Garat
 wrote:






Hi all, 
 I'm having some issues with one VM. The VM won't start and shows
problems with its virtual disks, so I started the VM without any disks
and tried to hot-add the disk, but that fails too.

 The servers are connected through FC; all the other VMs are working fine.

  Any ideas?

Thanks!!

PS: The engine.log is showing this:
2020-09-15 20:10:37,926-03 INFO  
[org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] (default 
task-168) [dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Lock Acquired to object 
'EngineLock:{exclusiveLocks='[f5bd2e15-a1ab-4724-883a-988b4dc7985b=DISK]', 
sharedLocks='[71db02c2-df29-4552-8a7e-cb8bb429a2ac=VM]'}'
2020-09-15 20:10:38,082-03 INFO  
[org.ovirt.engine.core.bll.storage.disk.HotPlugDiskToVmCommand] 
(EE-ManagedThreadFactory-engine-Thread-36528) 
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Running command: HotPlugDiskToVmCommand 
internal: false. Entities affected :  ID: 71db02c2-df29-4552-8a7e-cb8bb429a2ac 
Type: VMAction group CONFIGURE_VM_STORAGE with role type USER
2020-09-15 20:10:38,117-03 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-36528) 
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] START, HotPlugDiskVDSCommand(HostName = 
nodo2, HotPlugDiskVDSParameters:{hostId='1c24c269-76c3-468d-a7ce-d0332beb7aef', 
vmId='71db02c2-df29-4552-8a7e-cb8bb429a2ac', 
diskId='f5bd2e15-a1ab-4724-883a-988b4dc7985b', addressMap='null'}), log id: 
f57ee9e
2020-09-15 20:10:38,125-03 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-36528) 
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Disk hot-plug: <?xml version="1.0" encoding="UTF-8"?>
<hotplug>
  <devices>
    <disk snapshot="no" type="block" device="disk">
      <source dev="/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f"/>
      <target dev="vda" bus="virtio"/>
      <driver name="qemu" type="qcow2" cache="none"/>
      <alias name="ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b"/>
      <serial>f5bd2e15-a1ab-4724-883a-988b4dc7985b</serial>
    </disk>
  </devices>
  <metadata xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
    <ovirt-vm:vm>
      <ovirt-vm:device devtype="disk" name="vda">
        <ovirt-vm:poolID>0001-0001-0001-0001-0311</ovirt-vm:poolID>
        <ovirt-vm:volumeID>bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f</ovirt-vm:volumeID>
        <ovirt-vm:imageID>f5bd2e15-a1ab-4724-883a-988b4dc7985b</ovirt-vm:imageID>
        <ovirt-vm:domainID>55327311-e47c-46b5-b168-258c5924757b</ovirt-vm:domainID>
      </ovirt-vm:device>
    </ovirt-vm:vm>
  </metadata>
</hotplug>

2020-09-15 20:10:38,289-03 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-36528) 
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Failed in 'HotPlugDiskVDS' method
2020-09-15 20:10:38,295-03 ERROR 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engine-Thread-36528) 
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] EVENT_ID: 
VDS_BROKER_COMMAND_FAILURE(10,802), VDSM nodo2 command HotPlugDiskVDS failed: 
General Exception: ("Bad volume specification {'device': 'disk', 'type': 
'disk', 'diskType': 'block', 'specParams': {}, 'alias': 
'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'domainID': 
'55327311-e47c-46b5-b168-258c5924757b', 'imageID': 
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID': 
'0001-0001-0001-0001-0311', 'volumeID': 
'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path': 
'/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
 'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache': 'none', 
'iface': 'virtio', 'name': 'vda', 'serial': 
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",)
2020-09-15 20:10:38,295-03 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-36528) 
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Command 
'org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand' return value 
'StatusOnlyReturn [status=Status [code=100, message=General Exception: ("Bad 
volume specification {'device': 'disk', 'type': 'disk', 'diskType': 'block', 
'specParams': {}, 'alias': 'ua-f5bd2e15-a1ab-4724-883a-988b4dc7985b', 
'domainID': '55327311-e47c-46b5-b168-258c5924757b', 'imageID': 
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'poolID': 
'0001-0001-0001-0001-0311', 'volumeID': 
'bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f', 'path': 
'/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f',
 'discard': False, 'format': 'cow', 'propagateErrors': 'off', 'cache': 'none', 
'iface': 'virtio', 'name': 'vda', 'serial': 
'f5bd2e15-a1ab-4724-883a-988b4dc7985b', 'index': 0}",)]]'
2020-09-15 20:10:38,295-03 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-36528) 
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] HostName = nodo2
2020-09-15 20:10:38,295-03 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] 
(EE-ManagedThreadFactory-engine-Thread-36528) 
[dd72c8e8-cdbe-470f-8e32-b3d14b96f37a] Command 'HotPlugDiskVDSCommand(HostName 
= nodo2, 
HotPlugDiskVDSParameters:{hostId='1c24c269-76c3-468d
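
Before comparing the engine DB with storage, it is worth confirming on the
host whether the volume path from the error above actually resolves. A
minimal sketch (the `check_vol` helper name is ours, not part of oVirt or
VDSM):

```shell
# check_vol: hypothetical helper that reports whether a volume path resolves.
check_vol() {
    if [ -e "$1" ]; then
        echo "path exists: $1"
        # If qemu-img is available, inspect the volume header as well.
        command -v qemu-img >/dev/null 2>&1 && qemu-img info "$1"
    else
        echo "path missing: $1"
    fi
}

# The path below is taken verbatim from the HotPlugDiskVDS error above.
check_vol "/rhev/data-center/mnt/blockSD/55327311-e47c-46b5-b168-258c5924757b/images/f5bd2e15-a1ab-4724-883a-988b4dc7985b/bd714f21-8eed-43ee-a2d4-3d2ef1ee4c3f"
```

If the path is missing on block storage, the LV may merely be inactive
rather than deleted; running lvs with a permissive device filter (as
suggested elsewhere in this thread) distinguishes the two cases.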

[ovirt-users] Re: Bad volume specification

2018-05-28 Thread Bryan Sockel
Permissions all seem to be correct, the path is accessible, and I can run 
other VMs off the same storage domain.

Thank You,

From: "Oliver Riesener (oliver.riese...@hs-bremen.de)" 

To: Bryan Sockel 
Cc: "users@ovirt.org" 
Date: Sun, 27 May 2018 15:38:32 +0200
Subject: [ovirt-users] Re: Bad volume specification

Hi,

check what's wrong with your **PATH**:

permissions, lvm, device-mapper, NFS, iSCSI, etc.

Am 27.05.2018 um 15:28 schrieb Bryan Sockel :

 'path': 
'/rhev/data-center/9e7d643c-592d-11e8-82eb-005056b41d15/2b79768f-a329-4eab-81e0-120a81ac8906/images/ec7a3258-7a99-4813-aa7d-dceb727a1975/8f4ddee4-68b3-48e9-be27-0231557f5218'
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/G2CSVVM5BPSBMCS4VBC35IHNZUMQJW3D/


[ovirt-users] Re: Bad volume specification

2018-05-27 Thread Oliver Riesener
Hi,

check what's wrong with your **PATH**:

permissions, lvm, device-mapper, NFS, iSCSI, etc.

> Am 27.05.2018 um 15:28 schrieb Bryan Sockel :
> 
>  'path': 
> '/rhev/data-center/9e7d643c-592d-11e8-82eb-005056b41d15/2b79768f-a329-4eab-81e0-120a81ac8906/images/ec7a3258-7a99-4813-aa7d-dceb727a1975/8f4ddee4-68b3-48e9-be27-0231557f5218'
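
A rough sketch of the first two checks on that list (path resolution and
permissions), using the path from the quoted error; this is only an
illustration, not an oVirt tool:

```shell
# P: the volume path from the quoted error. On a healthy host, ls should show
# a readable file or symlink to a block device, typically owned by vdsm:kvm.
P=/rhev/data-center/9e7d643c-592d-11e8-82eb-005056b41d15/2b79768f-a329-4eab-81e0-120a81ac8906/images/ec7a3258-7a99-4813-aa7d-dceb727a1975/8f4ddee4-68b3-48e9-be27-0231557f5218
ls -ldL "$P" 2>/dev/null || echo "cannot resolve $P"
# For iSCSI/FC storage, also confirm the backing LV is active (permissive
# filter so LVM sees the multipath devices), e.g.:
#   lvs --config 'devices {filter=["a|.*|"]}'
```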
