[ovirt-users] Re: Managed Block Storage and Templates

2021-09-28 Thread Shantur Rathore
Possibly due to https://bugzilla.redhat.com/show_bug.cgi?id=2008533

On Fri, Sep 24, 2021 at 10:35 AM Shantur Rathore
 wrote:
>
> I tried with external Ceph with cinderlib and Synology iSCSI with cinderlib 
> both as Managed block storage
>
> On Fri, 24 Sep 2021, 09:51 Gianluca Cecchi,  wrote:
>>
>> On Wed, Sep 22, 2021 at 2:30 PM Shantur Rathore  
>> wrote:
>>>
>>> Hi all,
>>>
>>> Anyone tried using Templates with Managed Block Storage?
>>> I created a VM on MBS and then took a snapshot.
>>> This worked but as soon as I created a Template from snapshot, the
>>> template got created but there is no disk attached to the template.
>>>
>>> Anyone seeing something similar?
>>>
>>> Thanks
>>>
>>
>> Are you using an external ceph cluster? Or what other cinder volume driver 
>> have you configured for the MBS storage domain?


[ovirt-users] Re: Managed Block Storage issues

2021-09-28 Thread Shantur Rathore
For the 2nd issue, I created https://bugzilla.redhat.com/show_bug.cgi?id=2008533

I still need to test the rule

On Wed, Sep 22, 2021 at 11:59 AM Benny Zlotnik  wrote:
>
> I see the rule is created in the logs:
>
> MainProcess|jsonrpc/5::DEBUG::2021-09-22
> 10:39:37,504::supervdsm_server::95::SuperVdsm.ServerCallback::(wrapper)
> call add_managed_udev_rule with
> ('ed1a0e9f-4d30-4896-b965-534861cc0c02',
> '/dev/mapper/360014054b727813d1bc4d4cefdade7db') {}
> MainProcess|jsonrpc/5::DEBUG::2021-09-22
> 10:39:37,505::udev::124::SuperVdsm.ServerCallback::(add_managed_udev_rule)
> Creating rule 
> /etc/udev/rules.d/99-vdsm-managed_ed1a0e9f-4d30-4896-b965-534861cc0c02.rules:
> 'SYMLINK=="mapper/360014054b727813d1bc4d4cefdade7db",
> RUN+="/usr/bin/chown vdsm:qemu $env{DEVNAME}"\n'
>
> While we no longer test backends other than ceph, this used to work
> back when we started and it worked for NetApp. Perhaps this rule is
> incorrect, can you check this manually?
>
> regarding 2, can you please submit a bug?
>
> On Wed, Sep 22, 2021 at 1:03 PM Shantur Rathore
>  wrote:
> >
> > Hi all,
> >
> > I am trying to set up Managed block storage and have the following issues.
> >
> > My setup:
> > Latest oVirt Node NG : 4.4.8
> > Latest oVirt Engine : 4.4.8
> >
> > 1. Unable to copy to iSCSI based block storage
> >
> > I created an MBS domain with a Synology UC3200 as the backend (supported by
> > Cinderlib). It was created fine, but when I try to copy disks to it,
> > it fails.
> > Looking at the logs from the SPM, I found that "qemu-img" failed with an
> > error that it cannot open "/dev/mapper/xx": permission error.
> > I had a look through the code and, digging further, I saw that
> > a. Sometimes the /dev/mapper/ symlink isn't created (log attached)
> > b. The ownership of /dev/mapper/xx and /dev/dm-xx for the new
> > device always stays root:root
> >
> > I added a udev rule
> > ACTION=="add|change", ENV{DM_UUID}=="mpath-*", GROUP="qemu",
> > OWNER="vdsm", MODE="0660"
> >
> > and the disk copied correctly when /dev/mapper/x got created.
> >
> > 2. The copy progress in the UI finishes much earlier than the actual qemu-img process.
> > The UI shows the copy process completed successfully, but it's
> > actually still copying the image.
> > This happens for both Ceph and iSCSI based MBS.
> >
> > Is there any known workaround to get iSCSI MBS working?
> >
> > Kind regards,
> > Shantur
>


[ovirt-users] Re: Managed Block Storage and Templates

2021-09-24 Thread Shantur Rathore
I tried with external Ceph with cinderlib and Synology iSCSI with cinderlib
both as Managed block storage

On Fri, 24 Sep 2021, 09:51 Gianluca Cecchi, 
wrote:

> On Wed, Sep 22, 2021 at 2:30 PM Shantur Rathore 
> wrote:
>
>> Hi all,
>>
>> Anyone tried using Templates with Managed Block Storage?
>> I created a VM on MBS and then took a snapshot.
>> This worked but as soon as I created a Template from snapshot, the
>> template got created but there is no disk attached to the template.
>>
>> Anyone seeing something similar?
>>
>> Thanks
>>
>>
> Are you using an external ceph cluster? Or what other cinder volume driver
> have you configured for the MBS storage domain?
>


[ovirt-users] Re: Managed Block Storage and Templates

2021-09-24 Thread Gianluca Cecchi
On Wed, Sep 22, 2021 at 2:30 PM Shantur Rathore 
wrote:

> Hi all,
>
> Anyone tried using Templates with Managed Block Storage?
> I created a VM on MBS and then took a snapshot.
> This worked but as soon as I created a Template from snapshot, the
> template got created but there is no disk attached to the template.
>
> Anyone seeing something similar?
>
> Thanks
>
>
Are you using an external ceph cluster? Or what other cinder volume driver
have you configured for the MBS storage domain?


[ovirt-users] Re: Managed Block Storage and Templates

2021-09-24 Thread Benny Zlotnik
Can you submit a bug for this?

On Wed, Sep 22, 2021 at 3:31 PM Shantur Rathore
 wrote:
>
> Hi all,
>
> Anyone tried using Templates with Managed Block Storage?
> I created a VM on MBS and then took a snapshot.
> This worked but as soon as I created a Template from snapshot, the
> template got created but there is no disk attached to the template.
>
> Anyone seeing something similar?
>
> Thanks


[ovirt-users] Re: Managed Block Storage issues

2021-09-22 Thread Benny Zlotnik
I see the rule is created in the logs:

MainProcess|jsonrpc/5::DEBUG::2021-09-22
10:39:37,504::supervdsm_server::95::SuperVdsm.ServerCallback::(wrapper)
call add_managed_udev_rule with
('ed1a0e9f-4d30-4896-b965-534861cc0c02',
'/dev/mapper/360014054b727813d1bc4d4cefdade7db') {}
MainProcess|jsonrpc/5::DEBUG::2021-09-22
10:39:37,505::udev::124::SuperVdsm.ServerCallback::(add_managed_udev_rule)
Creating rule 
/etc/udev/rules.d/99-vdsm-managed_ed1a0e9f-4d30-4896-b965-534861cc0c02.rules:
'SYMLINK=="mapper/360014054b727813d1bc4d4cefdade7db",
RUN+="/usr/bin/chown vdsm:qemu $env{DEVNAME}"\n'

While we no longer test backends other than Ceph, this used to work
back when we started, and it worked for NetApp. Perhaps this rule is
incorrect; can you check it manually?
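Something along these lines should show whether the symlink and the ownership
end up the way the rule intends (the WWID is copied from the log above, the
rest is just an example):

    # reload the rules and re-trigger the multipath device so the RUN+= fires again
    udevadm control --reload-rules
    udevadm trigger /dev/mapper/360014054b727813d1bc4d4cefdade7db
    udevadm settle

    # the symlink should exist and the underlying dm device should be vdsm:qemu
    ls -l  /dev/mapper/360014054b727813d1bc4d4cefdade7db
    ls -lL /dev/mapper/360014054b727813d1bc4d4cefdade7db

    # dry-run udev processing for the device and see whether the rule adds the chown
    udevadm test "$(udevadm info -q path -n /dev/mapper/360014054b727813d1bc4d4cefdade7db)" 2>&1 | grep -i chown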

Regarding 2, can you please submit a bug?

On Wed, Sep 22, 2021 at 1:03 PM Shantur Rathore
 wrote:
>
> Hi all,
>
> I am trying to set up Managed block storage and have the following issues.
>
> My setup:
> Latest oVirt Node NG : 4.4.8
> Latest oVirt Engine : 4.4.8
>
> 1. Unable to copy to iSCSI based block storage
>
> I created an MBS domain with a Synology UC3200 as the backend (supported by
> Cinderlib). It was created fine, but when I try to copy disks to it,
> it fails.
> Looking at the logs from the SPM, I found that "qemu-img" failed with an
> error that it cannot open "/dev/mapper/xx": permission error.
> I had a look through the code and, digging further, I saw that
> a. Sometimes the /dev/mapper/ symlink isn't created (log attached)
> b. The ownership of /dev/mapper/xx and /dev/dm-xx for the new
> device always stays root:root
>
> I added a udev rule
> ACTION=="add|change", ENV{DM_UUID}=="mpath-*", GROUP="qemu",
> OWNER="vdsm", MODE="0660"
>
> and the disk copied correctly when /dev/mapper/x got created.
>
> 2. The copy progress in the UI finishes much earlier than the actual qemu-img process.
> The UI shows the copy process completed successfully, but it's
> actually still copying the image.
> This happens for both Ceph and iSCSI based MBS.
>
> Is there any known workaround to get iSCSI MBS working?
>
> Kind regards,
> Shantur


[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Matthias Leopold



On 22.01.21 at 12:01, Shantur Rathore wrote:

> Thanks Matthias,
>
> Ceph iSCSI is indeed supported but it introduces an overhead for running
> LIO gateways for iSCSI.
> CephFS works as a posix domain, if we could get a posix domain to work
> as a master domain then we could run a self-hosted engine on it.

Concerning this you should look at
https://bugzilla.redhat.com/show_bug.cgi?id=1577529.

> Ceph RBD ( rbd-nbd hopefully in future ) could be used with
> cinderlib and we have got a self-hosted infrastructure with Ceph.
>
> I am hopeful that when cinderlib integration is mature enough to be out
> of Tech preview, there will be a way to migrate old cinder disks to new
> cinderlib.
>
> PS: About your large deployment, go OpenStack or OpenNebula if you like.
> Proxmox clustering isn't very great, it doesn't have a single controller
> and uses coro-sync based clustering which isn't very great.
>
> Cheers,
> Shantur
>
> On Fri, Jan 22, 2021 at 10:36 AM Matthias Leopold wrote:
>
> > I can confirm that Ceph iSCSI can be used for master domain, we are
> > using it together with VM disks on Ceph via Cinder ("old style"). Recent
> > developments concerning Ceph in oVirt are disappointing for me, I think
> > I will have to look elsewhere (OpenStack, Proxmox) for our rather big
> > deployment. At least Nir Soffer's explanation for the move to cinderlib
> > in another thread (dated 20210121) shed some light on the background of
> > this decision.
> >
> > Matthias

...


[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Shantur Rathore
Thanks Matthias,

Ceph iSCSI is indeed supported but it introduces an overhead for running
LIO gateways for iSCSI.
CephFS works as a POSIX domain; if we could get a POSIX domain to work as a
master domain, then we could run a self-hosted engine on it.
Ceph RBD ( rbd-nbd hopefully in future ) could be used with cinderlib and
we have got a self-hosted infrastructure with Ceph.

I am hopeful that when cinderlib integration is mature enough to be out of
Tech preview, there will be a way to migrate old cinder disks to new
cinderlib.

PS: About your large deployment, go OpenStack or OpenNebula if you like.
Proxmox clustering isn't great: it doesn't have a single controller and it
relies on corosync-based clustering.

Cheers,
Shantur

On Fri, Jan 22, 2021 at 10:36 AM Matthias Leopold <
matthias.leop...@meduniwien.ac.at> wrote:

> I can confirm that Ceph iSCSI can be used for master domain, we are
> using it together with VM disks on Ceph via Cinder ("old style"). Recent
> developments concerning Ceph in oVirt are disappointing for me, I think
> I will have to look elsewhere (OpenStack, Proxmox) for our rather big
> deployment. At least Nir Soffer's explanation for the move to cinderlib
> in another thread (dated 20210121) shed some light on the background of
> this decision.
>
> Matthias
>
> On 19.01.21 at 12:57, Gianluca Cecchi wrote:
> > On Tue, Jan 19, 2021 at 12:20 PM Benny Zlotnik wrote:
> >
> >  >Thanks for pointing out the requirement for Master domain. In
> > theory, will I be able to satisfy the requirement with another iSCSI
> > or >maybe Ceph iSCSI as master domain?
> > It should work as ovirt sees it as a regular domain, cephFS will
> > probably work too
> >
> >
> > Ceph iSCSI gateway should be supported since 4.1, so I think I can use
> > it for configuring the master domain and still leveraging the same
> > overall storage environment provided by Ceph, correct?
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=1527061
> >
> > Gianluca
> >
>
> --
> Matthias Leopold
> IT Systems & Communications
> Medizinische Universität Wien
> Spitalgasse 23 / BT 88 / Ebene 00
> A-1090 Wien
> Tel: +43 1 40160-21241
> Fax: +43 1 40160-921200
>


[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Matthias Leopold
I can confirm that Ceph iSCSI can be used for master domain, we are 
using it together with VM disks on Ceph via Cinder ("old style"). Recent 
developments concerning Ceph in oVirt are disappointing for me, I think 
I will have to look elsewhere (OpenStack, Proxmox) for our rather big 
deployment. At least Nir Soffer's explanation for the move to cinderlib 
in another thread (dated 20210121) shed some light on the background of 
this decision.


Matthias

On 19.01.21 at 12:57, Gianluca Cecchi wrote:
On Tue, Jan 19, 2021 at 12:20 PM Benny Zlotnik wrote:


 >Thanks for pointing out the requirement for Master domain. In
theory, will I be able to satisfy the requirement with another iSCSI
or >maybe Ceph iSCSI as master domain?
It should work as ovirt sees it as a regular domain, cephFS will
probably work too


Ceph iSCSI gateway should be supported since 4.1, so I think I can use 
it for configuring the master domain and still leveraging the same 
overall storage environment provided by Ceph, correct?


https://bugzilla.redhat.com/show_bug.cgi?id=1527061

Gianluca




--
Matthias Leopold
IT Systems & Communications
Medizinische Universität Wien
Spitalgasse 23 / BT 88 / Ebene 00
A-1090 Wien
Tel: +43 1 40160-21241
Fax: +43 1 40160-921200


[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Shantur Rathore
Thanks Konstantin.

I do get that oVirt needs a master domain.
I just want to make a POSIX domain the master domain. I can see there is no
option in the UI for that, but I don't understand whether it is incompatible or
just not implemented.
If it is not implemented, then there might be a possibility of creating one
with manual steps.

Thanks

On Fri, Jan 22, 2021 at 10:21 AM Konstantin Shalygin  wrote:

> Shantur, this is oVirt. You always should make master domain. It’s enough
> some 1GB NFS on manager side.
>
>
> k
>
> On 22 Jan 2021, at 12:02, Shantur Rathore  wrote:
>
> Just a bump. Any ideas anyone?
>
>
>


[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Konstantin Shalygin
Shantur, this is oVirt. You always need to have a master domain. A 1 GB NFS
export on the manager side is enough.
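For example, something like this on the engine machine is enough (paths and
export options are only an example, 36:36 is the vdsm:kvm uid/gid that oVirt
expects, and you should restrict the export to your hosts' network in a real
setup):

    mkdir -p /exports/master
    chown 36:36 /exports/master
    chmod 0755 /exports/master
    echo '/exports/master *(rw,anonuid=36,anongid=36)' >> /etc/exports
    systemctl enable --now nfs-server
    exportfs -ra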


k

> On 22 Jan 2021, at 12:02, Shantur Rathore  wrote:
> 
> Just a bump. Any ideas anyone?



[ovirt-users] Re: Managed Block Storage and more

2021-01-22 Thread Shantur Rathore
Just a bump. Any ideas anyone?

On Wed, Jan 20, 2021 at 4:13 PM Shantur Rathore  wrote:

> So,
> after a quick dive into source code, I cannot see any mention of posix
> storage in hosted-engine code.
> I am not sure if there is a manual way of moving the locally created
> hosted-engine vm to POSIX storage and create a storage domain using API as
> it does for other types of domains while installing self-hosted engine.
>
> Regards,
> Shantur
>


[ovirt-users] Re: Managed Block Storage and more

2021-01-20 Thread Shantur Rathore
So,
after a quick dive into the source code, I cannot see any mention of POSIX
storage in the hosted-engine code.
I am not sure if there is a manual way of moving the locally created
hosted-engine VM to POSIX storage and creating a storage domain using the API,
as the installer does for other types of domains while deploying the self-hosted
engine.
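For the storage domain part, I would guess the rough shape is a plain REST call
like the one below (untested; all names, addresses and credentials are
placeholders). It is the hosted-engine side that I can't see a hook for:

    curl -k -u 'admin@internal:PASSWORD' \
      -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
      -d '<storage_domain>
            <name>cephfs_posix</name>
            <type>data</type>
            <storage>
              <type>posixfs</type>
              <address>ceph-mon1.example.com</address>
              <path>/</path>
              <vfs_type>ceph</vfs_type>
              <mount_options>name=admin,secretfile=/etc/ceph/admin.secret</mount_options>
            </storage>
            <host><name>ovirt-node1</name></host>
          </storage_domain>' \
      https://engine.example.com/ovirt-engine/api/storagedomains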

Regards,
Shantur


[ovirt-users] Re: Managed Block Storage and more

2021-01-20 Thread Shantur Rathore
>
> It should work as ovirt sees it as a regular domain, cephFS will
> probably work too


Just tried to setup Ceph hyperconverged

1. Installed oVirt NG 4.4.4 on a machine ( partitioned to leave space for
Ceph )
2. Installed CephAdm : https://docs.ceph.com/en/latest/cephadm/install/
3. Enabled EPEL and other required repos.
4. Bootstrapped ceph cluster
5. Created LV on the partitioned free space
6. Added OSD to ceph cluster
7. Added CephFS
8. Set min_size and size to 1 for osd pools to make it work with 1 OSD.
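For reference, steps 4 to 8 map to roughly these commands (hostnames, VG/LV and
pool names are placeholders; the pool names assume the cephfs.<fsname>.meta/.data
naming of recent releases, so check "ceph osd pool ls"; size=1 is for test
setups only):

    # 4. bootstrap a one-node cluster
    cephadm bootstrap --mon-ip 192.168.122.10

    # 5. carve an LV out of the free space (VG/LV names are placeholders)
    lvcreate -L 500G -n lv_osd0 vg_ceph

    # 6. add the LV as an OSD
    ceph orch daemon add osd ovirt-node1:/dev/vg_ceph/lv_osd0

    # 7. create CephFS
    ceph fs volume create cephfs

    # 8. let the pools go active with a single OSD (recent releases may also
    #    require: ceph config set global mon_allow_pool_size_one true)
    for pool in cephfs.cephfs.meta cephfs.cephfs.data; do
        ceph osd pool set "$pool" size 1 --yes-i-really-mean-it
        ceph osd pool set "$pool" min_size 1
    done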

All ready to deploy Self hosted engine from Cockpit

1. Started Self-Hosted engine deployment (not Hyperconverged)
2. Enter the details to Prepare-VM.
3. Prepare-VM successful.
4. Feeling excited, get the cephfs mount details ready.
5. Storage screen - There is no option to use POSIX storage for
Self-Hosted. Bummer.

Is there any way to work around this?
I am able to add this to another oVirt Engine.

[image: Screenshot 2021-01-20 at 12.19.55.png]

Thanks,
Shantur

On Tue, Jan 19, 2021 at 11:16 AM Benny Zlotnik  wrote:

> >Thanks for pointing out the requirement for Master domain. In theory,
> will I be able to satisfy the requirement with another iSCSI or >maybe Ceph
> iSCSI as master domain?
> It should work as ovirt sees it as a regular domain, cephFS will
> probably work too
>
> >So each node has
>
> >- oVirt Node NG / Centos
> >- Ceph cluster member
> >- iSCSI or Ceph iSCSI master domain
>
> >How practical is such a setup?
> Not sure, it could work, but it hasn't been tested and it's likely you
> are going to be the first to try it
>
>


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>Ceph iSCSI gateway should be supported since 4.1, so I think I can use it for 
>configuring the master domain and still leveraging the same overall storage 
>environment provided by Ceph, correct?

yes, it shouldn't be a problem


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Gianluca Cecchi
On Tue, Jan 19, 2021 at 12:20 PM Benny Zlotnik  wrote:

> >Thanks for pointing out the requirement for Master domain. In theory,
> will I be able to satisfy the requirement with another iSCSI or >maybe Ceph
> iSCSI as master domain?
> It should work as ovirt sees it as a regular domain, cephFS will
> probably work too
>

Ceph iSCSI gateway should be supported since 4.1, so I think I can use it
for configuring the master domain and still leveraging the same overall
storage environment provided by Ceph, correct?

https://bugzilla.redhat.com/show_bug.cgi?id=1527061

Gianluca


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>Thanks for pointing out the requirement for Master domain. In theory, will I 
>be able to satisfy the requirement with another iSCSI or >maybe Ceph iSCSI as 
>master domain?
It should work as ovirt sees it as a regular domain, cephFS will
probably work too

>So each node has

>- oVirt Node NG / Centos
>- Ceph cluster member
>- iSCSI or Ceph iSCSI master domain

>How practical is such a setup?
Not sure, it could work, but it hasn't been tested and it's likely you
are going to be the first to try it


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Sandro Bonazzola
On Tue, 19 Jan 2021 at 09:07, Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:

> On Tue, Jan 19, 2021 at 8:43 AM Benny Zlotnik  wrote:
>
>> Ceph support is available via Managed Block Storage (tech preview), it
>> cannot be used instead of gluster for hyperconverged setups.
>>
>>
> Just for clarification: when you say Managed Block Storage you mean
> cinderlib integration, correct?
> Is still this one below the correct reference page for 4.4?
>
> https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
>
> So are the manual steps still needed (and also repo config that seems
> against pike)?
> Or do you have an updated link for configuring cinderlib in 4.4?
>

The above-mentioned page was a feature development page and is not considered
end-user documentation.
Updated documentation is here:
https://ovirt.org/documentation/installing_ovirt_as_a_standalone_manager_with_local_databases/#Set_up_Cinderlib




>
> Moreover, it is not possible to use a pure Managed Block Storage setup
>> at all, there has to be at least one regular storage domain in a
>> datacenter
>>
>>
> Is this true only for Self Hosted Engine Environment or also if I have an
> external engine?
>
> Thanks,
> Gianluca
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin


> On 19 Jan 2021, at 13:39, Shantur Rathore  wrote:
> 
> I have tested all options but oVirt seems to tick most required boxes.
> 
> OpenStack : Too complex for use case
> Proxmox : Love Ceph support but very basic clustering support
> OpenNebula : Weird VM state machine.
> 
> Not sure if you know that rbd-nbd support is going to be implemented to 
> Cinderlib. I could understand why oVirt wants to support CinderLib and 
> deprecate Cinder support.

Yes, we loved oVirt for "that should work like this", before oVirt 4.4...
Now imagine: your current cluster ran with qemu-rbd and Cinder; now you
upgrade oVirt and can't do anything. You can't migrate, your images are in
another oVirt pool, and engine-setup can't migrate the current images to MBS.
All of this is in "feature preview", while the older integration is broken and
then abandoned.


Thanks,
k


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Shantur Rathore
@Konstantin Shalygin  :
>
> I recommend to look to OpenStack or some OpenNebula/Proxmox if you wan’t
> use Ceph Storage.

I have tested all the options, but oVirt seems to tick most of the required boxes.

OpenStack: too complex for the use case
Proxmox: love the Ceph support, but very basic clustering support
OpenNebula: weird VM state machine

Not sure if you know that rbd-nbd support is going to be implemented in
Cinderlib. I can understand why oVirt wants to support cinderlib and
deprecate the Cinder support.

@Strahil Nikolov 

> Most probably it will be easier if you stick with full-blown distro.

Yesterday, I was able to bring up a single-host, single-disk Ceph cluster on
oVirt Node NG 4.4.4 after enabling some repositories. Having said that, I
didn't try image-based upgrades of the host.
I read somewhere that RPMs are persisted between host upgrades in Node NG
now.
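If anyone wants to repeat this, the repositories and packages involved should be
roughly these (my best guess at the package names; it assumes the CentOS extras
repository is reachable from the node, and the release package should match the
Ceph version you want):

    dnf install -y centos-release-ceph-octopus epel-release
    dnf install -y cephadm ceph-common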

@Benny Zlotnik

> Moreover, it is not possible to use a pure Managed Block Storage setup
> at all, there has to be at least one regular storage domain in a
> datacenter

Thanks for pointing out the requirement for Master domain. In theory, will
I be able to satisfy the requirement with another iSCSI or maybe Ceph iSCSI
as master domain?

So each node has

- oVirt Node NG / Centos
- Ceph cluster member
- iSCSI or Ceph iSCSI master domain

How practical is such a setup?

Thanks,
Shantur

On Tue, Jan 19, 2021 at 9:39 AM Konstantin Shalygin  wrote:

> Yep, BZ is
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1539837
> https://bugzilla.redhat.com/show_bug.cgi?id=1904669
> https://bugzilla.redhat.com/show_bug.cgi?id=1905113
>
> Thanks,
> k
>
> On 19 Jan 2021, at 11:05, Gianluca Cecchi 
> wrote:
>
> perhaps a copy paste error about the bugzilla entries? They are the same
> number...
>
>
>


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin
Yep, the BZs are:

https://bugzilla.redhat.com/show_bug.cgi?id=1539837 

https://bugzilla.redhat.com/show_bug.cgi?id=1904669 

https://bugzilla.redhat.com/show_bug.cgi?id=1905113 


Thanks,
k

> On 19 Jan 2021, at 11:05, Gianluca Cecchi  wrote:
> 
> perhaps a copy paste error about the bugzilla entries? They are the same 
> number...



[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>Just for clarification: when you say Managed Block Storage you mean cinderlib 
>integration, >correct?
>Is still this one below the correct reference page for 4.4?
>https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
yes

>So are the manual steps still needed (and also repo config that seems against 
>pike)?
>Or do you have an updated link for configuring cinderlib in 4.4?
It is slightly outdated; I, and other users, have successfully used
Ussuri. I will update the feature page today.
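For reference, the Driver Options of an MBS domain backed by an external Ceph
cluster end up looking something like this (these are the standard Cinder RBD
driver options, the values are placeholders, and rbd_keyring_conf in particular
has changed between Cinder releases, so check the current docs):

    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_pool=ovirt-volumes
    rbd_user=ovirt
    rbd_keyring_conf=/etc/ceph/ceph.client.ovirt.keyring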

>Is this true only for Self Hosted Engine Environment or also if I have an 
>external engine?
External engine as well. The reason this is required is that only
regular domains can serve as master domains, which is required for a
host to get the SPM role.


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Gianluca Cecchi
On Tue, Jan 19, 2021 at 9:01 AM Konstantin Shalygin  wrote:

> Shantur, I recommend to look to OpenStack or some OpenNebula/Proxmox if
> you wan’t use Ceph Storage.
> Current storage team support in oVirt just can break something and do not
> work with this anymore, take a look what I talking about: in [1], [2], [3]
>
>
> k
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1899453
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1899453
> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1899453
>
>
>
>
Perhaps a copy-paste error with the bugzilla entries? They are all the same
number...


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Gianluca Cecchi
On Tue, Jan 19, 2021 at 8:43 AM Benny Zlotnik  wrote:

> Ceph support is available via Managed Block Storage (tech preview), it
> cannot be used instead of gluster for hyperconverged setups.
>
>
Just for clarification: when you say Managed Block Storage you mean the
cinderlib integration, correct?
Is the one below still the correct reference page for 4.4?
https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html

So are the manual steps still needed (and also the repo config, which seems
to be against Pike)?
Or do you have an updated link for configuring cinderlib in 4.4?

Moreover, it is not possible to use a pure Managed Block Storage setup
> at all, there has to be at least one regular storage domain in a
> datacenter
>
>
Is this true only for Self Hosted Engine Environment or also if I have an
external engine?

Thanks,
Gianluca


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin
Shantur, I recommend looking at OpenStack or OpenNebula/Proxmox if you
want to use Ceph storage.
With the current level of storage team support in oVirt, they can just break
something and then not work on it anymore; take a look at what I am talking
about in [1], [2], [3]


k

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1899453 

[2] https://bugzilla.redhat.com/show_bug.cgi?id=1899453 

[3] https://bugzilla.redhat.com/show_bug.cgi?id=1899453 




> On 19 Jan 2021, at 10:40, Benny Zlotnik  wrote:
> 
> Ceph support is available via Managed Block Storage (tech preview), it
> cannot be used instead of gluster for hyperconverged setups.
> 
> Moreover, it is not possible to use a pure Managed Block Storage setup
> at all, there has to be at least one regular storage domain in a
> datacenter



[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Benny Zlotnik
Ceph support is available via Managed Block Storage (tech preview); it
cannot be used instead of Gluster for hyperconverged setups.

Moreover, it is not possible to use a pure Managed Block Storage setup
at all, there has to be at least one regular storage domain in a
datacenter

On Mon, Jan 18, 2021 at 11:58 AM Shantur Rathore  wrote:
>
> Thanks Strahil for your reply.
>
> Sorry just to confirm,
>
> 1. Are you saying Ceph on oVirt Node NG isn't possible?
> 2. Would you know which devs would be best to ask about the recent Ceph 
> changes?
>
> Thanks,
> Shantur
>
> On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users  
> wrote:
>>
>> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>>
>> Hi Strahil,
>>
>> Thanks for your reply, I have 16 nodes for now but more on the way.
>>
>> The reason why Ceph appeals me over Gluster because of the following reasons.
>>
>> 1. I have more experience with Ceph than Gluster.
>>
>> That is a good reason to pick CEPH.
>>
>> 2. I heard in Managed Block Storage presentation that it leverages storage 
>> software to offload storage related tasks.
>> 3. Adding Gluster storage limits to 3 hosts at a time.
>>
>> Only if you wish the nodes to be both Storage and Compute. Yet, you can add 
>> as many as you wish as a compute node (won't be part of Gluster) and later 
>> you can add them to the Gluster TSP (this requires 3 nodes at a time).
>>
>> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No 
>> such limitation if I go via Ceph.
>>
>> Actually , it's about Red Hat support for RHHI and not for Gluster + oVirt. 
>> As both oVirt and Gluster ,that are used, are upstream projects, support is 
>> on best effort from the community.
>>
>> In my initial testing I was able to enable Centos repositories in Node Ng 
>> but if I remember correctly, there were some librbd versions present in Node 
>> Ng which clashed with the version I was trying to install.
>> Does Ceph hyperconverge still make sense?
>>
>> Yes it is. You got the knowledge to run the CEPH part, yet consider talking 
>> with some of the devs on the list - as there were some changes recently in 
>> oVirt's support for CEPH.
>>
>> Regards
>> Shantur
>>
>> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users  
>> wrote:
>>
>> Hi Shantur,
>>
>> the main question is how many nodes you have.
>> Ceph integration is still in development/experimental and it should be wise 
>> to consider Gluster also. It has a great integration and it's quite easy to 
>> work with).
>>
>>
>> There are users reporting using CEPH with their oVirt , but I can't tell how 
>> good it is.
>> I doubt that oVirt nodes come with CEPH components , so you most probably 
>> will need to use a full-blown distro. In general, using extra software on 
>> oVirt nodes is quite hard .
>>
>> With such setup, you will need much more nodes than a Gluster setup due to 
>> CEPH's requirements.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore wrote:
>>
>>
>>
>>
>>
>> Hi all,
>>
>> I am planning my new oVirt cluster on Apple hosts. These hosts can only have 
>> one disk which I plan to partition and use for hyper converged setup. As 
>> this is my first oVirt cluster I need help in understanding few bits.
>>
>> 1. Is Hyper converged setup possible with Ceph using cinderlib?
>> 2. Can this hyper converged setup be on oVirt Node Next hosts or only Centos?
>> 3. Can I install cinderlib on oVirt Node Next hosts?
>> 4. Are there any pit falls in such a setup?
>>
>>
>> Thanks for your help
>>
>> Regards,
>> Shantur
>>

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Sandro Bonazzola
On Mon, 18 Jan 2021 at 20:04, Strahil Nikolov <hunter86...@yahoo.com> wrote:

> Most probably it will be easier if you stick with full-blown distro.
>
> @Sandro Bonazzola can help with CEPH status.
>

Letting the storage team have a voice here :-)
+Tal Nisan, +Eyal Shenitzky, +Nir Soffer


>
> Best Regards,Strahil Nikolov
>
>
>
>
>
>
> On Monday, 18 January 2021, 11:44:32 GMT+2, Shantur Rathore <rathor...@gmail.com> wrote:
>
>
>
>
>
> Thanks Strahil for your reply.
>
> Sorry just to confirm,
>
> 1. Are you saying Ceph on oVirt Node NG isn't possible?
> 2. Would you know which devs would be best to ask about the recent Ceph
> changes?
>
> Thanks,
> Shantur
>
> On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users 
> wrote:
> > At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
> >> Hi Strahil,
> >>
> >> Thanks for your reply, I have 16 nodes for now but more on the way.
> >>
> >> The reason why Ceph appeals me over Gluster because of the following
> reasons.
> >>
> >> 1. I have more experience with Ceph than Gluster.
> > That is a good reason to pick CEPH.
> >> 2. I heard in Managed Block Storage presentation that it leverages
> storage software to offload storage related tasks.
> >> 3. Adding Gluster storage limits to 3 hosts at a time.
> > Only if you wish the nodes to be both Storage and Compute. Yet, you can
> add as many as you wish as a compute node (won't be part of Gluster) and
> later you can add them to the Gluster TSP (this requires 3 nodes at a time).
> >> 4. I read that there is a limit of maximum 12 hosts in Gluster setup.
> No such limitation if I go via Ceph.
> > Actually , it's about Red Hat support for RHHI and not for Gluster +
> oVirt. As both oVirt and Gluster ,that are used, are upstream projects,
> support is on best effort from the community.
> >> In my initial testing I was able to enable Centos repositories in Node
> Ng but if I remember correctly, there were some librbd versions present in
> Node Ng which clashed with the version I was trying to install.
> >> Does Ceph hyperconverge still make sense?
> > Yes it is. You got the knowledge to run the CEPH part, yet consider
> talking with some of the devs on the list - as there were some changes
> recently in oVirt's support for CEPH.
> >
> >> Regards
> >> Shantur
> >>
> >> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users <
> users@ovirt.org> wrote:
> >>> Hi Shantur,
> >>>
> >>> the main question is how many nodes you have.
> >>> Ceph integration is still in development/experimental and it should be
> wise to consider Gluster also. It has a great integration and it's quite
> easy to work with).
> >>>
> >>>
> >>> There are users reporting using CEPH with their oVirt , but I can't
> tell how good it is.
> >>> I doubt that oVirt nodes come with CEPH components , so you most
> probably will need to use a full-blown distro. In general, using extra
> software on oVirt nodes is quite hard .
> >>>
> >>> With such setup, you will need much more nodes than a Gluster setup
> due to CEPH's requirements.
> >>>
> >>> Best Regards,
> >>> Strahil Nikolov
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore <shantur.rath...@gmail.com> wrote:
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> Hi all,
> >>>
> >>> I am planning my new oVirt cluster on Apple hosts. These hosts can
> only have one disk which I plan to partition and use for hyper converged
> setup. As this is my first oVirt cluster I need help in understanding few
> bits.
> >>>
> >>> 1. Is Hyper converged setup possible with Ceph using cinderlib?
> >>> 2. Can this hyper converged setup be on oVirt Node Next hosts or only
> Centos?
> >>> 3. Can I install cinderlib on oVirt Node Next hosts?
> >>> 4. Are there any pit falls in such a setup?
> >>>
> >>>
> >>> Thanks for your help
> >>>
> >>> Regards,
> >>> Shantur
> >>>

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Konstantin Shalygin
Faster than fuse-rbd, not qemu.
The main issues are the kernel page cache and client upgrades: for example, on a
cluster with 700 OSDs and 1000 clients we need to update the client version for
new features. With the current oVirt implementation we need to update the kernel
and then reboot the host. With librbd we just need to update the package and
activate the host.


k

Sent from my iPhone

> On 18 Jan 2021, at 19:13, Shantur Rathore  wrote:
> 
> Thanks for pointing that out to me Konstantin.
> 
> I understand that it would use a kernel client instead of userland rbd lib.
> Isn't it better as I have seen kernel clients 20x faster than userland??
> 
> I am probably missing something important here, would you mind detailing that.


[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Strahil Nikolov via Users
Most probably it will be easier if you stick with a full-blown distro.

@Sandro Bonazzola can help with CEPH status.

Best Regards,
Strahil Nikolov






On Monday, 18 January 2021, 11:44:32 GMT+2, Shantur Rathore wrote:





Thanks Strahil for your reply.

Sorry just to confirm,

1. Are you saying Ceph on oVirt Node NG isn't possible?
2. Would you know which devs would be best to ask about the recent Ceph changes?

Thanks,
Shantur

On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users  
wrote:
> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>> Hi Strahil,
>> 
>> Thanks for your reply, I have 16 nodes for now but more on the way.
>> 
>> The reason why Ceph appeals me over Gluster because of the following reasons.
>> 
>> 1. I have more experience with Ceph than Gluster.
> That is a good reason to pick CEPH.
>> 2. I heard in Managed Block Storage presentation that it leverages storage 
>> software to offload storage related tasks. 
>> 3. Adding Gluster storage limits to 3 hosts at a time.
> Only if you wish the nodes to be both Storage and Compute. Yet, you can add 
> as many as you wish as a compute node (won't be part of Gluster) and later 
> you can add them to the Gluster TSP (this requires 3 nodes at a time).
>> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No 
>> such limitation if I go via Ceph.
> Actually , it's about Red Hat support for RHHI and not for Gluster + oVirt. 
> As both oVirt and Gluster ,that are used, are upstream projects, support is 
> on best effort from the community.
>> In my initial testing I was able to enable Centos repositories in Node Ng 
>> but if I remember correctly, there were some librbd versions present in Node 
>> Ng which clashed with the version I was trying to install.
>> Does Ceph hyperconverge still make sense?
> Yes it is. You got the knowledge to run the CEPH part, yet consider talking 
> with some of the devs on the list - as there were some changes recently in 
> oVirt's support for CEPH.
> 
>> Regards
>> Shantur
>> 
>> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users  
>> wrote:
>>> Hi Shantur,
>>> 
>>> the main question is how many nodes you have.
>>> Ceph integration is still in development/experimental and it should be wise 
>>> to consider Gluster also. It has a great integration and it's quite easy to 
>>> work with).
>>> 
>>> 
>>> There are users reporting using CEPH with their oVirt , but I can't tell 
>>> how good it is.
>>> I doubt that oVirt nodes come with CEPH components , so you most probably 
>>> will need to use a full-blown distro. In general, using extra software on 
>>> oVirt nodes is quite hard .
>>> 
>>> With such setup, you will need much more nodes than a Gluster setup due to 
>>> CEPH's requirements.
>>> 
>>> Best Regards,
>>> Strahil Nikolov
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore wrote:
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Hi all,
>>> 
>>> I am planning my new oVirt cluster on Apple hosts. These hosts can only 
>>> have one disk which I plan to partition and use for hyper converged setup. 
>>> As this is my first oVirt cluster I need help in understanding few bits.
>>> 
>>> 1. Is Hyper converged setup possible with Ceph using cinderlib?
>>> 2. Can this hyper converged setup be on oVirt Node Next hosts or only 
>>> Centos?
>>> 3. Can I install cinderlib on oVirt Node Next hosts?
>>> 4. Are there any pit falls in such a setup?
>>> 
>>> 
>>> Thanks for your help
>>> 
>>> Regards,
>>> Shantur
>>> 

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Shantur Rathore
Thanks for pointing that out to me, Konstantin.

I understand that it would use a kernel client instead of the userland rbd
library. Isn't that better, given that I have seen kernel clients be 20x faster
than userland?

I am probably missing something important here; would you mind detailing
that?

Regards,
Shantur


On Mon, Jan 18, 2021 at 3:27 PM Konstantin Shalygin  wrote:

> Beware about Ceph and oVirt Managed Block Storage, current integration is
> only possible with kernel, not with qemu-rbd.
>
>
> k
>
> Sent from my iPhone
>
> On 18 Jan 2021, at 13:00, Shantur Rathore  wrote:
>
> 
> Thanks Strahil for your reply.
>
> Sorry just to confirm,
>
> 1. Are you saying Ceph on oVirt Node NG isn't possible?
> 2. Would you know which devs would be best to ask about the recent Ceph
> changes?
>
> Thanks,
> Shantur
>
> On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users 
> wrote:
>
>> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>>
>> Hi Strahil,
>>
>> Thanks for your reply, I have 16 nodes for now but more on the way.
>>
>> The reason why Ceph appeals me over Gluster because of the following
>> reasons.
>>
>> 1. I have more experience with Ceph than Gluster.
>>
>> That is a good reason to pick CEPH.
>>
>> 2. I heard in Managed Block Storage presentation that it leverages
>> storage software to offload storage related tasks.
>> 3. Adding Gluster storage limits to 3 hosts at a time.
>>
>> Only if you wish the nodes to be both Storage and Compute. Yet, you can
>> add as many as you wish as a compute node (won't be part of Gluster) and
>> later you can add them to the Gluster TSP (this requires 3 nodes at a time).
>>
>> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No
>> such limitation if I go via Ceph.
>>
>> Actually , it's about Red Hat support for RHHI and not for Gluster +
>> oVirt. As both oVirt and Gluster ,that are used, are upstream projects,
>> support is on best effort from the community.
>>
>> In my initial testing I was able to enable Centos repositories in Node Ng
>> but if I remember correctly, there were some librbd versions present in
>> Node Ng which clashed with the version I was trying to install.
>> Does Ceph hyperconverge still make sense?
>>
>> Yes it is. You got the knowledge to run the CEPH part, yet consider
>> talking with some of the devs on the list - as there were some changes
>> recently in oVirt's support for CEPH.
>>
>> Regards
>> Shantur
>>
>> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users 
>> wrote:
>>
>> Hi Shantur,
>>
>> the main question is how many nodes you have.
>> Ceph integration is still in development/experimental and it should be
>> wise to consider Gluster also. It has a great integration and it's quite
>> easy to work with).
>>
>>
>> There are users reporting using CEPH with their oVirt , but I can't tell
>> how good it is.
>> I doubt that oVirt nodes come with CEPH components , so you most probably
>> will need to use a full-blown distro. In general, using extra software on
>> oVirt nodes is quite hard .
>>
>> With such setup, you will need much more nodes than a Gluster setup due
>> to CEPH's requirements.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore <shantur.rath...@gmail.com> wrote:
>>
>>
>>
>>
>>
>> Hi all,
>>
>> I am planning my new oVirt cluster on Apple hosts. These hosts can only
>> have one disk which I plan to partition and use for hyper converged setup.
>> As this is my first oVirt cluster I need help in understanding few bits.
>>
>> 1. Is Hyper converged setup possible with Ceph using cinderlib?
>> 2. Can this hyper converged setup be on oVirt Node Next hosts or only
>> Centos?
>> 3. Can I install cinderlib on oVirt Node Next hosts?
>> 4. Are there any pit falls in such a setup?
>>
>>
>> Thanks for your help
>>
>> Regards,
>> Shantur
>>

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Konstantin Shalygin
Beware with Ceph and oVirt Managed Block Storage: the current integration can only 
attach volumes through the kernel RBD client (krbd), not through qemu/librbd.
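
(A quick way to see this on a node, a sketch assuming the ceph-common/rbd
client tools are present there: krbd-attached volumes show up as kernel
mappings and as /dev/rbdN block devices.)

rbd showmapped
lsblk | grep rbd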


k

Sent from my iPhone

> On 18 Jan 2021, at 13:00, Shantur Rathore  wrote:
> 
> 
> Thanks Strahil for your reply.
> 
> Sorry just to confirm,
> 
> 1. Are you saying Ceph on oVirt Node NG isn't possible?
> 2. Would you know which devs would be best to ask about the recent Ceph 
> changes?
> 
> Thanks,
> Shantur
> 
>> On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users  
>> wrote:
>> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>>> Hi Strahil,
>>> 
>>> Thanks for your reply, I have 16 nodes for now but more on the way.
>>> 
>>> The reason why Ceph appeals me over Gluster because of the following 
>>> reasons.
>>> 
>>> 1. I have more experience with Ceph than Gluster.
>> That is a good reason to pick CEPH.
>>> 2. I heard in Managed Block Storage presentation that it leverages storage 
>>> software to offload storage related tasks. 
>>> 3. Adding Gluster storage limits to 3 hosts at a time.
>> Only if you wish the nodes to be both Storage and Compute. Yet, you can add 
>> as many as you wish as a compute node (won't be part of Gluster) and later 
>> you can add them to the Gluster TSP (this requires 3 nodes at a time).
>>> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No 
>>> such limitation if I go via Ceph.
>> Actually , it's about Red Hat support for RHHI and not for Gluster + oVirt. 
>> As both oVirt and Gluster ,that are used, are upstream projects, support is 
>> on best effort from the community.
>>> In my initial testing I was able to enable Centos repositories in Node Ng 
>>> but if I remember correctly, there were some librbd versions present in 
>>> Node Ng which clashed with the version I was trying to install.
>>> Does Ceph hyperconverge still make sense?
>> Yes it is. You got the knowledge to run the CEPH part, yet consider talking 
>> with some of the devs on the list - as there were some changes recently in 
>> oVirt's support for CEPH.
>> 
>>> Regards
>>> Shantur
>>> 
 On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users  
 wrote:
 Hi Shantur,
 
 the main question is how many nodes you have.
 Ceph integration is still in development/experimental and it should be 
 wise to consider Gluster also. It has a great integration and it's quite 
 easy to work with).
 
 
 There are users reporting using CEPH with their oVirt , but I can't tell 
 how good it is.
 I doubt that oVirt nodes come with CEPH components , so you most probably 
 will need to use a full-blown distro. In general, using extra software on 
 oVirt nodes is quite hard .
 
 With such setup, you will need much more nodes than a Gluster setup due to 
 CEPH's requirements.
 
 Best Regards,
 Strahil Nikolov
 
 
 
 
 
 
 On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore wrote: 
 
 
 
 
 
 Hi all,
 
 I am planning my new oVirt cluster on Apple hosts. These hosts can only 
 have one disk which I plan to partition and use for hyper converged setup. 
 As this is my first oVirt cluster I need help in understanding few bits.
 
 1. Is Hyper converged setup possible with Ceph using cinderlib?
 2. Can this hyper converged setup be on oVirt Node Next hosts or only 
 Centos?
 3. Can I install cinderlib on oVirt Node Next hosts?
 4. Are there any pit falls in such a setup?
 
 
 Thanks for your help
 
 Regards,
 Shantur
 
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/privacy-policy.html
 oVirt Code of Conduct: 
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives: 
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
 ___
 Users mailing list -- users@ovirt.org
 To unsubscribe send an email to users-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/privacy-policy.html
 oVirt Code of Conduct: 
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives: 
 https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct: 
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives: 
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4IBXGXZVXAIUDS2O675QAXZRTSULPD2S/
> ___
> Users mailing list 

[ovirt-users] Re: Managed Block Storage and more

2021-01-18 Thread Shantur Rathore
Thanks Strahil for your reply.

Sorry just to confirm,

1. Are you saying Ceph on oVirt Node NG isn't possible?
2. Would you know which devs would be best to ask about the recent Ceph
changes?

Thanks,
Shantur

On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users 
wrote:

> At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
>
> Hi Strahil,
>
> Thanks for your reply, I have 16 nodes for now but more on the way.
>
> The reason why Ceph appeals me over Gluster because of the following
> reasons.
>
> 1. I have more experience with Ceph than Gluster.
>
> That is a good reason to pick CEPH.
>
> 2. I heard in Managed Block Storage presentation that it leverages storage
> software to offload storage related tasks.
> 3. Adding Gluster storage limits to 3 hosts at a time.
>
> Only if you wish the nodes to be both Storage and Compute. Yet, you can
> add as many as you wish as a compute node (won't be part of Gluster) and
> later you can add them to the Gluster TSP (this requires 3 nodes at a time).
>
> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No
> such limitation if I go via Ceph.
>
> Actually , it's about Red Hat support for RHHI and not for Gluster +
> oVirt. As both oVirt and Gluster ,that are used, are upstream projects,
> support is on best effort from the community.
>
> In my initial testing I was able to enable Centos repositories in Node Ng
> but if I remember correctly, there were some librbd versions present in
> Node Ng which clashed with the version I was trying to install.
> Does Ceph hyperconverge still make sense?
>
> Yes it is. You got the knowledge to run the CEPH part, yet consider
> talking with some of the devs on the list - as there were some changes
> recently in oVirt's support for CEPH.
>
> Regards
> Shantur
>
> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users 
> wrote:
>
> Hi Shantur,
>
> the main question is how many nodes you have.
> Ceph integration is still in development/experimental and it should be
> wise to consider Gluster also. It has a great integration and it's quite
> easy to work with).
>
>
> There are users reporting using CEPH with their oVirt , but I can't tell
> how good it is.
> I doubt that oVirt nodes come with CEPH components , so you most probably
> will need to use a full-blown distro. In general, using extra software on
> oVirt nodes is quite hard .
>
> With such setup, you will need much more nodes than a Gluster setup due to
> CEPH's requirements.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore <
> shantur.rath...@gmail.com> wrote:
>
>
>
>
>
> Hi all,
>
> I am planning my new oVirt cluster on Apple hosts. These hosts can only
> have one disk which I plan to partition and use for hyper converged setup.
> As this is my first oVirt cluster I need help in understanding few bits.
>
> 1. Is Hyper converged setup possible with Ceph using cinderlib?
> 2. Can this hyper converged setup be on oVirt Node Next hosts or only
> Centos?
> 3. Can I install cinderlib on oVirt Node Next hosts?
> 4. Are there any pit falls in such a setup?
>
>
> Thanks for your help
>
> Regards,
> Shantur
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/4IBXGXZVXAIUDS2O675QAXZRTSULPD2S/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6WBVRC4GJTAIL3XYPJEEYGOBCCNZY4ZV/


[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Shantur Rathore
Hi Strahil,

Thanks for your reply, I have 16 nodes for now but more on the way.

Ceph appeals to me over Gluster for the following reasons.

1. I have more experience with Ceph than Gluster.
2. I heard in the Managed Block Storage presentation that it leverages the
storage software to offload storage-related tasks.
3. Adding Gluster storage is limited to 3 hosts at a time.
4. I read that there is a limit of a maximum of 12 hosts in a Gluster setup. No
such limitation if I go via Ceph.

In my initial testing I was able to enable CentOS repositories in Node NG,
but if I remember correctly, there were some librbd versions present in
Node NG which clashed with the version I was trying to install.

Does a Ceph hyperconverged setup still make sense?

Regards
Shantur

On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users 
wrote:

> Hi Shantur,
>
> the main question is how many nodes you have.
> Ceph integration is still in development/experimental and it should be
> wise to consider Gluster also. It has a great integration and it's quite
> easy to work with).
>
>
> There are users reporting using CEPH with their oVirt , but I can't tell
> how good it is.
> I doubt that oVirt nodes come with CEPH components , so you most probably
> will need to use a full-blown distro. In general, using extra software on
> oVirt nodes is quite hard .
>
> With such setup, you will need much more nodes than a Gluster setup due to
> CEPH's requirements.
>
> Best Regards,
> Strahil Nikolov
>
>
>
>
>
>
> On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore <
> shantur.rath...@gmail.com> wrote:
>
>
>
>
>
> Hi all,
>
> I am planning my new oVirt cluster on Apple hosts. These hosts can only
> have one disk which I plan to partition and use for hyper converged setup.
> As this is my first oVirt cluster I need help in understanding few bits.
>
> 1. Is Hyper converged setup possible with Ceph using cinderlib?
> 2. Can this hyper converged setup be on oVirt Node Next hosts or only
> Centos?
> 3. Can I install cinderlib on oVirt Node Next hosts?
> 4. Are there any pit falls in such a setup?
>
>
> Thanks for your help
>
> Regards,
> Shantur
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KQQR4PF32ALSD2HFOEW4KCC6HKFKZKLW/


[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Strahil Nikolov via Users
At 15:51 + on 17.01.2021 (Sun), Shantur Rathore wrote:
> Hi Strahil,
> Thanks for your reply, I have 16 nodes for now but more on the way.
> 
> The reason why Ceph appeals me over Gluster because of the following
> reasons.
> 
> 1. I have more experience with Ceph than Gluster.
That is a good reason to pick CEPH.
> 2. I heard in Managed Block Storage presentation that it leverages
> storage software to offload storage related tasks. 
> 3. Adding Gluster storage limits to 3 hosts at a time.
Only if you wish the nodes to be both Storage and Compute. Yet, you can
add as many as you wish as a compute node (won't be part of Gluster)
and later you can add them to the Gluster TSP (this requires 3 nodes at
a time).
> 4. I read that there is a limit of maximum 12 hosts in Gluster setup.
> No such limitation if I go via Ceph.
Actually, it's about Red Hat support for RHHI and not for Gluster +
oVirt. As both oVirt and Gluster, as used here, are upstream
projects, support is on a best-effort basis from the community.
> In my initial testing I was able to enable Centos repositories in
> Node Ng but if I remember correctly, there were some librbd versions
> present in Node Ng which clashed with the version I was trying to
> install.
> Does Ceph hyperconverge still make sense?
Yes it is. You got the knowledge to run the CEPH part, yet consider
talking with some of the devs on the list - as there were some changes
recently in oVirt's support for CEPH.
> Regards
> Shantur
> 
> On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users <
> users@ovirt.org> wrote:
> > Hi Shantur,
> > 
> > 
> > 
> > the main question is how many nodes you have.
> > 
> > Ceph integration is still in development/experimental and it should
> > be wise to consider Gluster also. It has a great integration and
> > it's quite easy to work with).
> > 
> > 
> > 
> > 
> > 
> > There are users reporting using CEPH with their oVirt , but I can't
> > tell how good it is.
> > 
> > I doubt that oVirt nodes come with CEPH components , so you most
> > probably will need to use a full-blown distro. In general, using
> > extra software on oVirt nodes is quite hard .
> > 
> > 
> > 
> > With such setup, you will need much more nodes than a Gluster setup
> > due to CEPH's requirements.
> > 
> > 
> > 
> > Best Regards,
> > 
> > Strahil Nikolov
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore <
> > shantur.rath...@gmail.com> wrote: 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > 
> > Hi all,
> > 
> > 
> > 
> > I am planning my new oVirt cluster on Apple hosts. These hosts can
> > only have one disk which I plan to partition and use for hyper
> > converged setup. As this is my first oVirt cluster I need help in
> > understanding few bits.
> > 
> > 
> > 
> > 1. Is Hyper converged setup possible with Ceph using cinderlib?
> > 
> > 2. Can this hyper converged setup be on oVirt Node Next hosts or
> > only Centos?
> > 
> > 3. Can I install cinderlib on oVirt Node Next hosts?
> > 
> > 4. Are there any pit falls in such a setup?
> > 
> > 
> > 
> > 
> > 
> > Thanks for your help
> > 
> > 
> > 
> > Regards,
> > 
> > Shantur
> > 
> > 
> > 
> > ___
> > 
> > Users mailing list -- users@ovirt.org
> > 
> > To unsubscribe send an email to users-le...@ovirt.org
> > 
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > 
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > 
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
> > 
> > ___
> > 
> > Users mailing list -- users@ovirt.org
> > 
> > To unsubscribe send an email to users-le...@ovirt.org
> > 
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > 
> > oVirt Code of Conduct: 
> > https://www.ovirt.org/community/about/community-guidelines/
> > 
> > List Archives: 
> > https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/
> > 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4IBXGXZVXAIUDS2O675QAXZRTSULPD2S/


[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Shantur Rathore
> Hi Strahil,
>
> Thanks for your reply, I have 16 nodes for now but more on the way.
>
> The reason why Ceph appeals me over Gluster because of the following
> reasons.
>
> 1. I have more experience with Ceph than Gluster.
> 2. I heard in Managed Block Storage presentation that it leverages storage
> software to offload storage related tasks.
> 3. Adding Gluster storage limits to 3 hosts at a time.
> 4. I read that there is a limit of maximum 12 hosts in Gluster setup. No
> such limitation if I go via Ceph.
>
> In my initial testing I was able to enable Centos repositories in Node Ng
> but if I remember correctly, there were some librbd versions present in
> Node Ng which clashed with the version I was trying to install.
>
> Does Ceph hyperconverge still make sense?
>
> Regards
> Shantur
>



On Sun, Jan 17, 2021, 9:58 AM Strahil Nikolov via Users 
> wrote:
>
>> Hi Shantur,
>>
>> the main question is how many nodes you have.
>> Ceph integration is still in development/experimental and it should be
>> wise to consider Gluster also. It has a great integration and it's quite
>> easy to work with).
>>
>>
>> There are users reporting using CEPH with their oVirt , but I can't tell
>> how good it is.
>> I doubt that oVirt nodes come with CEPH components , so you most probably
>> will need to use a full-blown distro. In general, using extra software on
>> oVirt nodes is quite hard .
>>
>> With such setup, you will need much more nodes than a Gluster setup due
>> to CEPH's requirements.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>>
>>
>>
>> On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore <
>> shantur.rath...@gmail.com> wrote:
>>
>>
>>
>>
>>
>> Hi all,
>>
>> I am planning my new oVirt cluster on Apple hosts. These hosts can only
>> have one disk which I plan to partition and use for hyper converged setup.
>> As this is my first oVirt cluster I need help in understanding few bits.
>>
>> 1. Is Hyper converged setup possible with Ceph using cinderlib?
>> 2. Can this hyper converged setup be on oVirt Node Next hosts or only
>> Centos?
>> 3. Can I install cinderlib on oVirt Node Next hosts?
>> 4. Are there any pit falls in such a setup?
>>
>>
>> Thanks for your help
>>
>> Regards,
>> Shantur
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/
>>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RQIUZPZHFAV3JXJM4DP3OYG6JYEK446Y/


[ovirt-users] Re: Managed Block Storage and more

2021-01-17 Thread Strahil Nikolov via Users
Hi Shantur,

the main question is how many nodes you have.
Ceph integration is still in development/experimental, so it would be wise to 
consider Gluster as well. It has great integration and it's quite easy to work 
with.


There are users reporting using Ceph with their oVirt, but I can't tell how 
good it is.
I doubt that oVirt nodes come with Ceph components, so you will most probably 
need to use a full-blown distro. In general, adding extra software to oVirt 
nodes is quite hard.

With such a setup, you will need many more nodes than a Gluster setup due to 
Ceph's requirements.

Best Regards,
Strahil Nikolov






On Sunday, 17 January 2021, 10:37:57 GMT+2, Shantur Rathore wrote: 





Hi all,

I am planning my new oVirt cluster on Apple hosts. These hosts can only have 
one disk, which I plan to partition and use for a hyperconverged setup. As this 
is my first oVirt cluster I need help in understanding a few bits.

1. Is a hyperconverged setup possible with Ceph using cinderlib?
2. Can this hyperconverged setup be on oVirt Node Next hosts or only CentOS?
3. Can I install cinderlib on oVirt Node Next hosts?
4. Are there any pitfalls in such a setup?


Thanks for your help

Regards,
Shantur

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TPQCJSJ3MQOEKWQBF5LF4B7HCVQXKWLX/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RVKIBASSQW7C66OBZ6OHQALFVRAEPMU7/


[ovirt-users] Re: Managed Block Storage/Ceph: Experiences from Catastrophic Hardware failure

2019-10-07 Thread Benny Zlotnik
We support it as part of the cinderlib integration (Managed Block
Storage); each rbd device is represented as a single oVirt disk when
used.
The integration is still in tech preview and still has a long way to
go, but any early feedback is highly appreciated.


On Mon, Oct 7, 2019 at 2:20 PM Strahil  wrote:
>
> Hi Dan,
>
> As CEPH support is quite new, we need  DEV clarification.
>
> Hi Sandro,
>
> Who can help to clarify if Ovirt supports direct RBD LUNs presented on the 
> VMs?
> Are there any limitations in the current solution ?
>
> Best Regards,
> Strahil Nikolov
>
> On Oct 7, 2019 13:54, Dan Poltawski  wrote:
> >
> > On Mon, 2019-10-07 at 01:56 +0300, Strahil Nikolov wrote:
> > > I'm not very sure that you are supposed to use the CEPH by giving
> > > each VM direct access.
> > >
> > > Have you considered using an iSCSI gateway as an entry point for your
> > > storage domain ? This way oVirt will have no issues dealing with the
> > > rbd locks.
> > >
> > > Of course, oVirt might be able to deal with RBD locks , but that can
> > > be confirmed/denied by the devs.
> >
> > Thanks for your response - regarding the locks point, I realised later
> > that this was my own incorrect permissions given to the client. The
> > ceph client was detecting the broken locks when mounting the rbd device
> > and unable to blacklist it. I addressed this by swithcign to the
> > permissions 'profile rbd'.
> >
> > Regarding iSCSI, we are using this for the hosted engine. However, I am
> > attracted to the idea of managing block devices with individual rbd
> > devices to facilate individual block device level snapshotting and I
> > assume performance will be better.
> >
> > thanks,
> >
> > Dan
> >
> > 
> >
> > The Networking People (TNP) Limited. Registered office: Network House, 
> > Caton Rd, Lancaster, LA1 3PE. Registered in England & Wales with company 
> > number: 07667393
> >
> > This email and any files transmitted with it are confidential and intended 
> > solely for the use of the individual or entity to whom they are addressed. 
> > If you have received this email in error please notify the system manager. 
> > This message contains confidential information and is intended only for the 
> > individual named. If you are not the named addressee you should not 
> > disseminate, distribute or copy this e-mail. Please notify the sender 
> > immediately by e-mail if you have received this e-mail by mistake and 
> > delete this e-mail from your system. If you are not the intended recipient 
> > you are notified that disclosing, copying, distributing or taking any 
> > action in reliance on the contents of this information is strictly 
> > prohibited.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/DLM6FRTGVDG232PQFHUA3IDOS5PT6WQ2/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AXXQ2KIW23Q62MYZYSXE4POVYS3JXX72/


[ovirt-users] Re: Managed Block Storage/Ceph: Experiences from Catastrophic Hardware failure

2019-10-07 Thread Strahil
Hi Dan,

As Ceph support is quite new, we need dev clarification.

Hi Sandro,

Who can help clarify whether oVirt supports direct RBD LUNs presented on the VMs?
Are there any limitations in the current solution?

Best Regards,
Strahil Nikolov

On Oct 7, 2019 13:54, Dan Poltawski  wrote:
>
> On Mon, 2019-10-07 at 01:56 +0300, Strahil Nikolov wrote: 
> > I'm not very sure that you are supposed to use the CEPH by giving 
> > each VM direct access. 
> > 
> > Have you considered using an iSCSI gateway as an entry point for your 
> > storage domain ? This way oVirt will have no issues dealing with the 
> > rbd locks. 
> > 
> > Of course, oVirt might be able to deal with RBD locks , but that can 
> > be confirmed/denied by the devs. 
>
> Thanks for your response - regarding the locks point, I realised later 
> that this was down to my own incorrect permissions given to the client. The 
> ceph client was detecting the broken locks when mounting the rbd device 
> but was unable to blacklist it. I addressed this by switching the client 
> to the 'profile rbd' permissions. 
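
(For reference, that cap change would look roughly like the sketch below; the
client name "client.ovirt" and the pool name "rbd" are placeholders, not the
real names from this setup. The rbd profile includes the mon caps that let a
client blacklist a dead peer and clean up its stale locks.)

ceph auth caps client.ovirt mon 'profile rbd' osd 'profile rbd pool=rbd'
ceph auth get client.ovirt    # verify the resulting caps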
>
> Regarding iSCSI, we are using this for the hosted engine. However, I am 
> attracted to the idea of managing block devices with individual rbd 
> devices to facilitate individual block-device-level snapshotting, and I 
> assume performance will be better. 
>
> thanks, 
>
> Dan 
>
>  
>
> The Networking People (TNP) Limited. Registered office: Network House, Caton 
> Rd, Lancaster, LA1 3PE. Registered in England & Wales with company number: 
> 07667393 
>
> This email and any files transmitted with it are confidential and intended 
> solely for the use of the individual or entity to whom they are addressed. If 
> you have received this email in error please notify the system manager. This 
> message contains confidential information and is intended only for the 
> individual named. If you are not the named addressee you should not 
> disseminate, distribute or copy this e-mail. Please notify the sender 
> immediately by e-mail if you have received this e-mail by mistake and delete 
> this e-mail from your system. If you are not the intended recipient you are 
> notified that disclosing, copying, distributing or taking any action in 
> reliance on the contents of this information is strictly prohibited. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DLM6FRTGVDG232PQFHUA3IDOS5PT6WQ2/


[ovirt-users] Re: Managed Block Storage/Ceph: Experiences from Catastrophic Hardware failure

2019-10-06 Thread Strahil Nikolov
On September 12, 2019 5:55:47 PM GMT+03:00, Dan Poltawski 
 wrote:
>Yesterday we had a catastrophic hardware failure with one of our nodes
>using ceph and the experimental cinderlib integration.
>
>Unfortunately the oVirt cluster didn't recover the situation well and it took
>some manual intervention to resolve. I thought I'd share what happened
>and how we resolved it in case there is any best practice to share/bugs
>which are worth creating to help others in a similar situation. We are
>early in our use of oVirt, so it's quite possible we have things
>incorrectly configured.
>
>Our setup: We have two nodes, hosted engine on iSCSI, about 40vms all
>using managed block storage mounting the rbd volumes directly. I hadn't
>configured power management (perhaps this is the fundamental problem).
>
>Yesterday a hardware fault caused one of the nodes to crash and stay
>down awaiting user input in POST screens, taking 20 vms with it.
>
>The hosted engine was fortunately on the 'good' node  and detected that
>the node had become unresponsive, but noted 'Host cannot be fenced
>automatically because power management for the host is disabled.'.
>
>At this point, knowing that one node was dead, I wanted to bring up the
>failed vms on the good node. However, the vms were appearing in an
>unknown state and I couldn't do any operations on them. It wasn't clear
>to me what the best course of action to do there would be. I am not
>sure if there is a way to mark the node as failed?
>
>In my urgency to try and resolve the situation I managed to get the
>failed node started back up. Shortly after it came up, the
>engine detected that all the vms were down; I put the failed host into
>maintenance mode and tried to start the failed vms.
>
>Unfortunately the failed vms did not start up cleanly - it turned out
>that they still had rbd locks preventing writing from the failed node.
>
>To finally get the vms to start, I then manually went through every
>vm's managed block, found the id, found the lock and removed it:
>rbd lock list rbd/volume-{id}
>rbd lock remove rbd/volume-{id} 'auto {lockid}' {lockername}
>
>Some overall thoughts I had:
>* I'm not sure what the best course of action is to notify the engine
>about a catastrophic hardware failure? If power management was
>configured, I suppose it would've removed the power and marked them all
>down?
>
>* Would ovirt have been able to deal with clearing the rbd locks, or
>did I miss a trick somewhere to resolve this situation with manually
>going through each device and clearing the lock?
>
>* Might it be possible for ovirt to detect when the rbd images are
>locked for writing and prevent launching?
>
>regards,
>
>Dan
>
>
>
>The Networking People (TNP) Limited. Registered office: Network House,
>Caton Rd, Lancaster, LA1 3PE. Registered in England & Wales with
>company number: 07667393
>
>This email and any files transmitted with it are confidential and
>intended solely for the use of the individual or entity to whom they
>are addressed. If you have received this email in error please notify
>the system manager. This message contains confidential information and
>is intended only for the individual named. If you are not the named
>addressee you should not disseminate, distribute or copy this e-mail.
>Please notify the sender immediately by e-mail if you have received
>this e-mail by mistake and delete this e-mail from your system. If you
>are not the intended recipient you are notified that disclosing,
>copying, distributing or taking any action in reliance on the contents
>of this information is strictly prohibited.
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZGGIT2KKBWCPXNB5JEQEA3KQP5ZBNXR/

Hi Dan,

Power management could help, as the engine will have a mechanism to recover the 
host.
In case a node is really down, you can mark it via "Confirm 'Host has been 
rebooted'". Be sure that the host is really down or you could cause a bigger 
problem.

I'm not very sure that you are supposed to use the CEPH by giving each VM 
direct access.

Have you considered using an iSCSI gateway as an entry point for your storage 
domain ? This way oVirt will have no issues dealing with the rbd locks.

Of course, oVirt might be able to deal with RBD locks , but that can be 
confirmed/denied by the devs.

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 

[ovirt-users] Re: Managed Block Storage: ceph detach_volume failing after migration

2019-09-25 Thread Nir Soffer
On Wed, Sep 25, 2019 at 8:02 PM Dan Poltawski 
wrote:

> Hi,
>
> On Wed, 2019-09-25 at 15:42 +0300, Amit Bawer wrote:
> > According to resolution of [1] it's a multipathd/udev configuration
> > issue. Could be worth to track this issue.
> >
> > [1] https://tracker.ceph.com/issues/12763
>
> Thanks, that certainly looks like a smoking gun to me, in the logs:
>
> Sep 25 12:27:45 mario multipathd: rbd29: add path (uevent)
> Sep 25 12:27:45 mario multipathd: rbd29: spurious uevent, path already
> in pathvec
> Sep 25 12:27:45 mario multipathd: rbd29: HDIO_GETGEO failed with 25
> Sep 25 12:27:45 mario multipathd: rbd29: failed to get path uid
> Sep 25 12:27:45 mario multipathd: uevent trigger error
>

Please file an oVirt bug. Vdsm manages the multipath configuration and I don't
think we have
a blacklist for rbd devices.

If this is the issue, you can fix this locally by installing a multipath
drop-in configuration:

# cat /etc/multipath.conf.d/rbd.conf
blacklist {
   devnode "^(rbd)[0-9]*"
}
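
(A sketch of applying it, assuming the standard multipathd systemd unit:
reload the daemon and check that the blacklist shows up in the effective
configuration.)

systemctl reload multipathd    # or: multipathd reconfigure
multipathd show config | grep -A3 blacklist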

Vdsm should include this configuration in /etc/multipath.conf that vdsm
manages.

Nir



>
>
> Dan
>
> >
> > On Wed, Sep 25, 2019 at 3:18 PM Dan Poltawski <
> > dan.poltaw...@tnp.net.uk> wrote:
> > > On ovirt 4.3.5 we are seeing various problems related to the rbd
> > > device staying mapped after a guest has been live migrated. This
> > > causes problems migrating the guest back, as well as rebooting the
> > > guest when it starts back up on the original host. The error
> > > returned is ‘nrbd: unmap failed: (16) Device or resource busy’.
> > > I’ve pasted the full vdsm log below.
> > >
> > > As far as I can tell this isn’t happening 100% of the time, and
> > > seems to be more prevalent on busy guests.
> > >
> > > (Not sure if I should create a bug for this, so thought I’d start
> > > here first)
> > >
> > > Thanks,
> > >
> > > Dan
> > >
> > >
> > > Sep 24 19:26:18 mario vdsm[5485]: ERROR FINISH detach_volume
> > > error=Managed Volume Helper failed.: ('Error executing helper:
> > > Command [\'/usr/libexec/vdsm/managedvolume-helper\', \'detach\']
> > > failed with rc=1 out=\'\' err=\'oslo.privsep.daemon: Running
> > > privsep helper: [\\\'sudo\\\', \\\'privsep-helper\\\', \\\'
> > > --privsep_context\\\', \\\'os_brick.privileged.default\\\', \\\'
> > > --privsep_sock_path\\\',
> > > \\\'/tmp/tmptQzb10/privsep.sock\\\']\\noslo.privsep.daemon: Spawned
> > > new privsep daemon via rootwrap\\noslo.privsep.daemon: privsep
> > > daemon starting\\noslo.privsep.daemon: privsep process running with
> > > uid/gid: 0/0\\noslo.privsep.daemon: privsep process running with
> > > capabilities (eff/prm/inh):
> > > CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon: privsep
> > > daemon running as pid 76076\\nTraceback (most recent call
> > > last):\\n  File "/usr/libexec/vdsm/managedvolume-helper", line 154,
> > > in \\nsys.exit(main(sys.argv[1:]))\\n  File
> > > "/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n
> > > args.command(args)\\n  File "/usr/libexec/vdsm/managedvolume-
> > > helper", line 149, in detach\\nignore_errors=False)\\n  File
> > > "/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line
> > > 121, in disconnect_volume\\nrun_as_root=True)\\n  File
> > > "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52,
> > > in _execute\\nresult = self.__execute(*args, **kwargs)\\n  File
> > > "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py",
> > > line 169, in execute\\nreturn execute_root(*cmd, **kwargs)\\n
> > > File "/usr/lib/python2.7/site-
> > > packages/oslo_privsep/priv_context.py",  line 241, in _wrap\\n
> > > return self.channel.remote_call(name, args, kwargs)\\n  File
> > > "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line
> > > 203, in remote_call\\nraise
> > > exc_type(*result[2])\\noslo_concurrency.processutils.ProcessExecuti
> > > onError: Unexpected error while running command.\\nCommand: rbd
> > > unmap /dev/rbd/rbd/volume-0e8c1056-45d6-4740-934d-eb07a9f73160 --
> > > conf /tmp/brickrbd_LCKezP --id ovirt --mon_host 172.16.10.13:3300
> > > --mon_host 172.16.10.14:3300 --mon_host 172.16.10.12:6789\\nExit
> > > code: 16\\nStdout: u\\\'\\\'\\nStderr: u\\\'rbd: sysfs write
> > > failednrbd: unmap failed: (16) Device or resource
> > > busyn\\\'\\n\'',)#012Traceback (most recent call last):#012
> > > File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line
> > > 124, in method#012ret = func(*args, **kwargs)#012  File
> > > "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1766, in
> > > detach_volume#012return
> > > managedvolume.detach_volume(vol_id)#012  File
> > > "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py",
> > > line 67, in wrapper#012return func(*args, **kwargs)#012  File
> > > "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py",
> > > line 135, in detach_volume#012run_helper("detach",
> > > vol_info)#012  File "/usr/lib/python2.7/site-
> > > 

[ovirt-users] Re: Managed Block Storage: ceph detach_volume failing after migration

2019-09-25 Thread Dan Poltawski
Hi,

On Wed, 2019-09-25 at 15:42 +0300, Amit Bawer wrote:
> According to resolution of [1] it's a multipathd/udev configuration
> issue. Could be worth to track this issue.
>
> [1] https://tracker.ceph.com/issues/12763

Thanks, that certainly looks like a smoking gun to me, in the logs:

Sep 25 12:27:45 mario multipathd: rbd29: add path (uevent)
Sep 25 12:27:45 mario multipathd: rbd29: spurious uevent, path already
in pathvec
Sep 25 12:27:45 mario multipathd: rbd29: HDIO_GETGEO failed with 25
Sep 25 12:27:45 mario multipathd: rbd29: failed to get path uid
Sep 25 12:27:45 mario multipathd: uevent trigger error


Dan

>
> On Wed, Sep 25, 2019 at 3:18 PM Dan Poltawski <
> dan.poltaw...@tnp.net.uk> wrote:
> > On ovirt 4.3.5 we are seeing various problems related to the rbd
> > device staying mapped after a guest has been live migrated. This
> > causes problems migrating the guest back, as well as rebooting the
> > guest when it starts back up on the original host. The error
> > returned is ‘nrbd: unmap failed: (16) Device or resource busy’.
> > I’ve pasted the full vdsm log below.
> >
> > As far as I can tell this isn’t happening 100% of the time, and
> > seems to be more prevalent on busy guests.
> >
> > (Not sure if I should create a bug for this, so thought I’d start
> > here first)
> >
> > Thanks,
> >
> > Dan
> >
> >
> > Sep 24 19:26:18 mario vdsm[5485]: ERROR FINISH detach_volume
> > error=Managed Volume Helper failed.: ('Error executing helper:
> > Command [\'/usr/libexec/vdsm/managedvolume-helper\', \'detach\']
> > failed with rc=1 out=\'\' err=\'oslo.privsep.daemon: Running
> > privsep helper: [\\\'sudo\\\', \\\'privsep-helper\\\', \\\'
> > --privsep_context\\\', \\\'os_brick.privileged.default\\\', \\\'
> > --privsep_sock_path\\\',
> > \\\'/tmp/tmptQzb10/privsep.sock\\\']\\noslo.privsep.daemon: Spawned
> > new privsep daemon via rootwrap\\noslo.privsep.daemon: privsep
> > daemon starting\\noslo.privsep.daemon: privsep process running with
> > uid/gid: 0/0\\noslo.privsep.daemon: privsep process running with
> > capabilities (eff/prm/inh):
> > CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon: privsep
> > daemon running as pid 76076\\nTraceback (most recent call
> > last):\\n  File "/usr/libexec/vdsm/managedvolume-helper", line 154,
> > in \\nsys.exit(main(sys.argv[1:]))\\n  File
> > "/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n
> > args.command(args)\\n  File "/usr/libexec/vdsm/managedvolume-
> > helper", line 149, in detach\\nignore_errors=False)\\n  File
> > "/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line
> > 121, in disconnect_volume\\nrun_as_root=True)\\n  File
> > "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52,
> > in _execute\\nresult = self.__execute(*args, **kwargs)\\n  File
> > "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py",
> > line 169, in execute\\nreturn execute_root(*cmd, **kwargs)\\n
> > File "/usr/lib/python2.7/site-
> > packages/oslo_privsep/priv_context.py",  line 241, in _wrap\\n
> > return self.channel.remote_call(name, args, kwargs)\\n  File
> > "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line
> > 203, in remote_call\\nraise
> > exc_type(*result[2])\\noslo_concurrency.processutils.ProcessExecuti
> > onError: Unexpected error while running command.\\nCommand: rbd
> > unmap /dev/rbd/rbd/volume-0e8c1056-45d6-4740-934d-eb07a9f73160 --
> > conf /tmp/brickrbd_LCKezP --id ovirt --mon_host 172.16.10.13:3300
> > --mon_host 172.16.10.14:3300 --mon_host 172.16.10.12:6789\\nExit
> > code: 16\\nStdout: u\\\'\\\'\\nStderr: u\\\'rbd: sysfs write
> > failednrbd: unmap failed: (16) Device or resource
> > busyn\\\'\\n\'',)#012Traceback (most recent call last):#012
> > File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line
> > 124, in method#012ret = func(*args, **kwargs)#012  File
> > "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1766, in
> > detach_volume#012return
> > managedvolume.detach_volume(vol_id)#012  File
> > "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py",
> > line 67, in wrapper#012return func(*args, **kwargs)#012  File
> > "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py",
> > line 135, in detach_volume#012run_helper("detach",
> > vol_info)#012  File "/usr/lib/python2.7/site-
> > packages/vdsm/storage/managedvolume.py", line 179, in
> > run_helper#012sub_cmd, cmd_input=cmd_input)#012  File
> > "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
> > 56, in __call__#012return callMethod()#012  File
> > "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line
> > 54, in #012**kwargs)#012  File "", line 2, in
> > managedvolume_run_helper#012  File
> > "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
> > _callmethod#012raise convert_to_error(kind,
> > result)#012ManagedVolumeHelperFailed: Managed Volume Helper
> > failed.: ('Error executing helper: 

[ovirt-users] Re: Managed Block Storage: ceph detach_volume failing after migration

2019-09-25 Thread Amit Bawer
According to the resolution of [1], it's a multipathd/udev configuration issue.
It could be worth tracking this issue.

[1] https://tracker.ceph.com/issues/12763

On Wed, Sep 25, 2019 at 3:18 PM Dan Poltawski 
wrote:

> On ovirt 4.3.5 we are seeing various problems related to the rbd device
> staying mapped after a guest has been live migrated. This causes problems
> migrating the guest back, as well as rebooting the guest when it starts
> back up on the original host. The error returned is ‘nrbd: unmap failed:
> (16) Device or resource busy’. I’ve pasted the full vdsm log below.
>
>
>
> As far as I can tell this isn’t happening 100% of the time, and seems to
> be more prevalent on busy guests.
>
>
>
> (Not sure if I should create a bug for this, so thought I’d start here
> first)
>
>
>
> Thanks,
>
>
>
> Dan
>
>
>
>
>
> Sep 24 19:26:18 mario vdsm[5485]: ERROR FINISH detach_volume error=Managed
> Volume Helper failed.: ('Error executing helper: Command
> [\'/usr/libexec/vdsm/managedvolume-helper\', \'detach\'] failed with rc=1
> out=\'\' err=\'oslo.privsep.daemon: Running privsep helper: [\\\'sudo\\\',
> \\\'privsep-helper\\\', \\\'--privsep_context\\\',
> \\\'os_brick.privileged.default\\\', \\\'--privsep_sock_path\\\',
> \\\'/tmp/tmptQzb10/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new
> privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon
> starting\\noslo.privsep.daemon: privsep process running with uid/gid:
> 0/0\\noslo.privsep.daemon: privsep process running with capabilities
> (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon:
> privsep daemon running as pid 76076\\nTraceback (most recent call
> last):\\n  File "/usr/libexec/vdsm/managedvolume-helper", line 154, in
> \\nsys.exit(main(sys.argv[1:]))\\n  File
> "/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n
> args.command(args)\\n  File "/usr/libexec/vdsm/managedvolume-helper", line
> 149, in detach\\nignore_errors=False)\\n  File
> "/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line 121, in
> disconnect_volume\\nrun_as_root=True)\\n  File
> "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in
> _execute\\nresult = self.__execute(*args, **kwargs)\\n  File
> "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line
> 169, in execute\\nreturn execute_root(*cmd, **kwargs)\\n  File
> "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 241,
> in _wrap\\nreturn self.channel.remote_call(name, args, kwargs)\\n  File
> "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 203, in
> remote_call\\nraise
> exc_type(*result[2])\\noslo_concurrency.processutils.ProcessExecutionError:
> Unexpected error while running command.\\nCommand: rbd unmap
> /dev/rbd/rbd/volume-0e8c1056-45d6-4740-934d-eb07a9f73160 --conf
> /tmp/brickrbd_LCKezP --id ovirt --mon_host 172.16.10.13:3300 --mon_host
> 172.16.10.14:3300 --mon_host 172.16.10.12:6789\\nExit code: 16\\nStdout:
> u\\\'\\\'\\nStderr: u\\\'rbd: sysfs write failednrbd: unmap failed:
> (16) Device or resource busyn\\\'\\n\'',)#012Traceback (most recent
> call last):#012  File
> "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 124, in
> method#012ret = func(*args, **kwargs)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1766, in
> detach_volume#012return managedvolume.detach_volume(vol_id)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py", line 67,
> in wrapper#012return func(*args, **kwargs)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py", line 135,
> in detach_volume#012run_helper("detach", vol_info)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py", line 179,
> in run_helper#012sub_cmd, cmd_input=cmd_input)#012  File
> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in
> __call__#012return callMethod()#012  File
> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in
> #012**kwargs)#012  File "", line 2, in
> managedvolume_run_helper#012  File
> "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
> _callmethod#012raise convert_to_error(kind,
> result)#012ManagedVolumeHelperFailed: Managed Volume Helper failed.:
> ('Error executing helper: Command
> [\'/usr/libexec/vdsm/managedvolume-helper\', \'detach\'] failed with rc=1
> out=\'\' err=\'oslo.privsep.daemon: Running privsep helper: [\\\'sudo\\\',
> \\\'privsep-helper\\\', \\\'--privsep_context\\\',
> \\\'os_brick.privileged.default\\\', \\\'--privsep_sock_path\\\',
> \\\'/tmp/tmptQzb10/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new
> privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon
> starting\\noslo.privsep.daemon: privsep process running with uid/gid:
> 0/0\\noslo.privsep.daemon: privsep process running with capabilities
> (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon:
> privsep 

[ovirt-users] Re: Managed Block Storage: ceph detach_volume failing after migration

2019-09-25 Thread Benny Zlotnik
This might be a bug, can you share the full vdsm and engine logs?
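
(A sketch of where to pull these from, assuming the default log locations:)

less /var/log/vdsm/vdsm.log              # on the host that ran the operation
less /var/log/ovirt-engine/engine.log    # on the engine machine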


On Wed, Sep 25, 2019 at 3:18 PM Dan Poltawski  wrote:
>
> On ovirt 4.3.5 we are seeing various problems related to the rbd device 
> staying mapped after a guest has been live migrated. This causes problems 
> migrating the guest back, as well as rebooting the guest when it starts back 
> up on the original host. The error returned is ‘nrbd: unmap failed: (16) 
> Device or resource busy’. I’ve pasted the full vdsm log below.
>
>
>
> As far as I can tell this isn’t happening 100% of the time, and seems to be 
> more prevalent on busy guests.
>
>
>
> (Not sure if I should create a bug for this, so thought I’d start here first)
>
>
>
> Thanks,
>
>
>
> Dan
>
>
>
>
>
> Sep 24 19:26:18 mario vdsm[5485]: ERROR FINISH detach_volume error=Managed 
> Volume Helper failed.: ('Error executing helper: Command 
> [\'/usr/libexec/vdsm/managedvolume-helper\', \'detach\'] failed with rc=1 
> out=\'\' err=\'oslo.privsep.daemon: Running privsep helper: [\\\'sudo\\\', 
> \\\'privsep-helper\\\', \\\'--privsep_context\\\', 
> \\\'os_brick.privileged.default\\\', \\\'--privsep_sock_path\\\', 
> \\\'/tmp/tmptQzb10/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new 
> privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon 
> starting\\noslo.privsep.daemon: privsep process running with uid/gid: 
> 0/0\\noslo.privsep.daemon: privsep process running with capabilities 
> (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon: 
> privsep daemon running as pid 76076\\nTraceback (most recent call last):\\n  
> File "/usr/libexec/vdsm/managedvolume-helper", line 154, in \\n
> sys.exit(main(sys.argv[1:]))\\n  File 
> "/usr/libexec/vdsm/managedvolume-helper", line 77, in main\\n
> args.command(args)\\n  File "/usr/libexec/vdsm/managedvolume-helper", line 
> 149, in detach\\nignore_errors=False)\\n  File 
> "/usr/lib/python2.7/site-packages/vdsm/storage/nos_brick.py", line 121, in 
> disconnect_volume\\nrun_as_root=True)\\n  File 
> "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in 
> _execute\\nresult = self.__execute(*args, **kwargs)\\n  File 
> "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line 169, 
> in execute\\nreturn execute_root(*cmd, **kwargs)\\n  File 
> "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 241, in 
> _wrap\\nreturn self.channel.remote_call(name, args, kwargs)\\n  File 
> "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 203, in 
> remote_call\\nraise 
> exc_type(*result[2])\\noslo_concurrency.processutils.ProcessExecutionError: 
> Unexpected error while running command.\\nCommand: rbd unmap 
> /dev/rbd/rbd/volume-0e8c1056-45d6-4740-934d-eb07a9f73160 --conf 
> /tmp/brickrbd_LCKezP --id ovirt --mon_host 172.16.10.13:3300 --mon_host 
> 172.16.10.14:3300 --mon_host 172.16.10.12:6789\\nExit code: 16\\nStdout: 
> u\\\'\\\'\\nStderr: u\\\'rbd: sysfs write failednrbd: unmap failed: (16) 
> Device or resource busyn\\\'\\n\'',)#012Traceback (most recent call 
> last):#012  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 
> 124, in method#012ret = func(*args, **kwargs)#012  File 
> "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1766, in 
> detach_volume#012return managedvolume.detach_volume(vol_id)#012  File 
> "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py", line 67, in 
> wrapper#012return func(*args, **kwargs)#012  File 
> "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py", line 135, 
> in detach_volume#012run_helper("detach", vol_info)#012  File 
> "/usr/lib/python2.7/site-packages/vdsm/storage/managedvolume.py", line 179, 
> in run_helper#012sub_cmd, cmd_input=cmd_input)#012  File 
> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 56, in 
> __call__#012return callMethod()#012  File 
> "/usr/lib/python2.7/site-packages/vdsm/common/supervdsm.py", line 54, in 
> #012**kwargs)#012  File "", line 2, in 
> managedvolume_run_helper#012  File 
> "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in 
> _callmethod#012raise convert_to_error(kind, 
> result)#012ManagedVolumeHelperFailed: Managed Volume Helper failed.: ('Error 
> executing helper: Command [\'/usr/libexec/vdsm/managedvolume-helper\', 
> \'detach\'] failed with rc=1 out=\'\' err=\'oslo.privsep.daemon: Running 
> privsep helper: [\\\'sudo\\\', \\\'privsep-helper\\\', 
> \\\'--privsep_context\\\', \\\'os_brick.privileged.default\\\', 
> \\\'--privsep_sock_path\\\', 
> \\\'/tmp/tmptQzb10/privsep.sock\\\']\\noslo.privsep.daemon: Spawned new 
> privsep daemon via rootwrap\\noslo.privsep.daemon: privsep daemon 
> starting\\noslo.privsep.daemon: privsep process running with uid/gid: 
> 0/0\\noslo.privsep.daemon: privsep process running with capabilities 
> (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none\\noslo.privsep.daemon: 
> privsep daemon running as 

[ovirt-users] Re: Managed Block Storage/Ceph: Experiences from Catastrophic Hardware failure

2019-09-15 Thread Benny Zlotnik
>* Would ovirt have been able to deal with clearing the rbd locks, or
did I miss a trick somewhere to resolve this situation with manually
going through each device and clearing the lock?

Unfortunately there is no trick on ovirt's side
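
(For anyone who has to do this cleanup by hand, the procedure described further
down can be wrapped in a small script. This is only a sketch: the pool name
"rbd" and the "volume-" prefix are assumptions about how the MBS volumes are
named, and the actual lock id/locker pair must be copied from the "rbd lock
list" output before removing anything.)

#!/bin/bash
# usage: ./list-stale-locks.sh <volume-id> [<volume-id> ...]
# Lists the locks on each volume; inspect them before removing anything,
# since removing a lock held by a live client can corrupt the image.
pool=rbd
for id in "$@"; do
    echo "== $pool/volume-$id =="
    rbd lock list "$pool/volume-$id"
    # for each stale lock shown above, remove it by hand, e.g.:
    # rbd lock remove "$pool/volume-$id" 'auto <lockid>' <lockername>
done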

>* Might it be possible for ovirt to detect when the rbd images are
locked for writing and prevent launching?

Since rbd paths are provided via cinderlib, a higher level interface, ovirt
does not have knowledge of implementation details like this

On Thu, Sep 12, 2019 at 11:27 PM Dan Poltawski 
wrote:

> Yesterday we had a catastrophic hardware failure with one of our nodes
> using ceph and the experimental cinderlib integration.
>
> Unfortunately the oVirt cluster didn't recover the situation well and it took
> some manual intervention to resolve. I thought I'd share what happened
> and how we resolved it in case there is any best practice to share/bugs
> which are worth creating to help others in a similar situation. We are
> early in our use of oVirt, so it's quite possible we have things
> incorrectly configured.
>
> Our setup: We have two nodes, hosted engine on iSCSI, about 40vms all
> using managed block storage mounting the rbd volumes directly. I hadn't
> configured power management (perhaps this is the fundamental problem).
>
> Yesterday a hardware fault caused one of the nodes to crash and stay
> down awaiting user input in POST screens, taking 20 vms with it.
>
> The hosted engine was fortunately on the 'good' node  and detected that
> the node had become unresponsive, but noted 'Host cannot be fenced
> automatically because power management for the host is disabled.'.
>
> At this point, knowing that one node was dead, I wanted to bring up the
> failed VMs on the good node. However, the VMs were appearing in an
> unknown state and I couldn't do any operations on them. It wasn't clear
> to me what the best course of action there would be. I am not
> sure if there is a way to mark the node as failed?
>
> In my urgency to try and resolve the situation I managed to get the
> failed node started back up. Shortly after it came up the
> engine detected that all the VMs were down, so I put the failed host into
> maintenance mode and tried to start the failed VMs.
>
> Unfortunately the failed VMs did not start up cleanly - it turned out
> that they still had rbd locks held by the failed node, preventing writes.
>
> To finally get the VMs to start I then manually went through every
> VM's managed block disk, found the id, found the lock and removed it:
> rbd lock list rbd/volume-{id}
> rbd lock remove rbd/volume-{id} 'auto {lockid}' {lockername}
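>
> (A rough sketch of that clean-up as a loop, assuming the 'rbd' pool and
> volume-{id} image naming shown above; the disk ids here are hypothetical
> and would come from the engine UI/API. 'rbd lock list' prints the locker
> (client.XXXX) and the two-word lock id ('auto <number>'), which are exactly
> the arguments 'rbd lock remove' wants:
>
> POOL=rbd
> DISK_IDS="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # hypothetical ids from the engine
> for id in $DISK_IDS; do
>   img="$POOL/volume-$id"
>   # lines starting with 'client.' hold: locker, then the two-word lock id
>   rbd lock list "$img" | awk '/^client\./ {print $1, $2" "$3}' | \
>   while read locker lockid; do
>     echo "Removing lock '$lockid' held by $locker on $img"
>     rbd lock remove "$img" "$lockid" "$locker"
>   done
> done )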
>
> Some overall thoughts I had:
> * I'm not sure what the best course of action is to notify the engine
> about a catastrophic hardware failure? If power management was
> configured, I suppose it would've removed the power and marked them all
> down?
>
> * Would ovirt have been able to deal with clearing the rbd locks, or
> did I miss a trick somewhere to resolve this situation with manually
> going through each device and clearing the lock?
>
> * Might it be possible for ovirt to detect when the rbd images are
> locked for writing and prevent launching?
>
> regards,
>
> Dan
>
> 
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZGGIT2KKBWCPXNB5JEQEA3KQP5ZBNXR/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FJCBWWTWYTHHID3KYL67KEK63H6F2HWT/


[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Dan Poltawski
On Tue, 2019-07-09 at 11:12 +0300, Benny Zlotnik wrote:
> VM live migration is supported and should work
> Can you add engine and cinderlib logs?


Sorry - looks like once again this was a misconfig by me on the ceph
side.

Is it possible to migrate existing vms to managed block storage? Also
is it possible to host the hosted engine on this storage?


Thanks Again for your help,

Dan

>
> On Tue, Jul 9, 2019 at 11:01 AM Dan Poltawski <
> dan.poltaw...@tnp.net.uk> wrote:
> > On Tue, 2019-07-09 at 08:00 +0100, Dan Poltawski wrote:
> > > I've now managed to succesfully create/mount/delete volumes!
> >
> > However, I'm seeing live migrations stay stuck. Is this supported?
> >
> > (gdb) py-list
> >  345client.conf_set('rados_osd_op_timeout',
> > timeout)
> >  346client.conf_set('rados_mon_op_timeout',
> > timeout)
> >  347client.conf_set('client_mount_timeout',
> > timeout)
> >  348
> >  349client.connect()
> > >350ioctx = client.open_ioctx(pool)
> >  351return client, ioctx
> >  352except self.rados.Error:
> >  353msg = _("Error connecting to ceph
> > cluster.")
> >  354LOG.exception(msg)
> >  355client.shutdown()
> >
> >
> > (gdb) py-bt
> > #15 Frame 0x3ea0e50, for file /usr/lib/python2.7/site-
> > packages/cinder/volume/drivers/rbd.py, line 350, in _do_conn
> > (pool='storage-ssd', remote=None, timeout=-1, name='ceph',
> > conf='/etc/ceph/ceph.conf', user='ovirt', client= > remote
> > 0x7fb1f4f83a60>)
> > ioctx = client.open_ioctx(pool)
> > #20 Frame 0x3ea4620, for file /usr/lib/python2.7/site-
> > packages/retrying.py, line 217, in call
> > (self= > 0x7fb1f4f23488>, _wait_exponential_max=1073741823,
> > _wait_incrementing_start=0, stop= > 0x7fb1f4f23578>,
> > _stop_max_attempt_number=5, _wait_incrementing_increment=100,
> > _wait_random_max=1000, _retry_on_result= > 0x7fb1f51da550>, _stop_max_delay=100, _wait_fixed=1000,
> > _wrap_exception=False, _wait_random_min=0,
> > _wait_exponential_multiplier=1, wait= > 0x7fb1f4f23500>) at remote 0x7fb1f4f1ae90>, fn= > 0x7fb1f4f23668>, args=(None, None, None), kwargs={},
> > start_time=1562658179214, attempt_number=1)
> > attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
> > #25 Frame 0x3e49d50, for file /usr/lib/python2.7/site-
> > packages/cinder/utils.py, line 818, in _wrapper (args=(None, None,
> > None), kwargs={}, r= > remote
> > 0x7fb1f4f23488>, _wait_exponential_max=1073741823,
> > _wait_incrementing_start=0, stop= > 0x7fb1f4f23578>,
> > _stop_max_attempt_number=5, _wait_incrementing_increment=100,
> > _wait_random_max=1000, _retry_on_result= > 0x7fb1f51da550>, _stop_max_delay=100, _wait_fixed=1000,
> > _wrap_exception=False, _wait_random_min=0,
> > _wait_exponential_multiplier=1, wait= > 0x7fb1f4f23500>) at remote 0x7fb1f4f1ae90>)
> > return r.call(f, *args, **kwargs)
> > #29 Frame 0x7fb1f4f9a810, for file /usr/lib/python2.7/site-
> > packages/cinder/volume/drivers/rbd.py, line 358, in
> > _connect_to_rados
> > (self= > 0x7fb20583e830>, _is_replication_enabled=False, _execute= > at
> > remote 0x7fb2041242a8>, _active_config={'name': 'ceph', 'conf':
> > '/etc/ceph/ceph.conf', 'user': 'ovirt'}, _active_backend_id=None,
> > _initialized=False, db= > 0x7fb203f8d520>, qos_specs_get= > 0x7fb1f677d460>, _lock= > _waiters=) at remote
> > 0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
> > 'inc_retry_interval': True, 'retry_interval': 1,
> > 'max_retry_interval':
> > 10}, _backend_mapping={'sqlalchemy': 'cinder.db.sqlalchemy.api'},
> > _backend_name='sqlalchemy', use_db_reconnect=False,
> > get_by_id=,
> > volume_type_get=) at
> > remote
> > 0x7fb2003aab10>, target_mapping={'tgtadm':
> > 'cinder.vol...(truncated)
> > return _do_conn(pool, remote, timeout)
> > #33 Frame 0x7fb1f4f5b220, for file /usr/lib/python2.7/site-
> > packages/cinder/volume/drivers/rbd.py, line 177, in __init__
> > (self= > at
> > remote 0x7fb20583e830>, _is_replication_enabled=False,
> > _execute=,
> > _active_config={'name':
> > 'ceph', 'conf': '/etc/ceph/ceph.conf', 'user': 'ovirt'},
> > _active_backend_id=None, _initialized=False,
> > db= > at remote 0x7fb203f8d520>, qos_specs_get= > 0x7fb1f677d460>, _lock= > _waiters=) at remote
> > 0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
> > 'inc_retry_interval': True, 'retry_interval': 1,
> > 'max_retry_interval':
> > 10}, _backend_mapping={'sqlalchemy': 'cinder.db.sqlalchemy.api'},
> > _backend_name='sqlalchemy', use_db_reconnect=False,
> > get_by_id=,
> > volume_type_get=) at
> > remote
> > 0x7fb2003aab10>, target_mapping={'tgtadm': ...(truncated)
> > self.cluster, self.ioctx = driver._connect_to_rados(pool)
> > #44 Frame 0x7fb1f4f9a620, for file /usr/lib/python2.7/site-
> > packages/cinder/volume/drivers/rbd.py, line 298, in
> > check_for_setup_error (self= > rbd= > remote 0x7fb20583e830>, 

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Benny Zlotnik
No problem :)

>Is it possible to migrate existing vms to managed block storage?
We do not have OVF support or anything like that for MBS domains, but you can
attach MBS disks to existing VMs.
Or do you mean moving/copying existing disks to an MBS domain? In that case
the answer is unfortunately no.
>Also is it possible to host the hosted engine on this storage?
Unfortunately no
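
For the first point - attaching an already-created MBS disk to an existing
VM - this can be done from the UI, or via the REST API. A rough sketch,
where the engine URL, credentials and the VM/disk UUIDs are hypothetical
placeholders:

$ curl -k -u 'admin@internal:PASSWORD' \
    -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
    -d '<disk_attachment>
          <bootable>false</bootable>
          <interface>virtio_scsi</interface>
          <active>true</active>
          <disk id="DISK_UUID"/>
        </disk_attachment>' \
    'https://engine.example.com/ovirt-engine/api/vms/VM_UUID/diskattachments'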

On Tue, Jul 9, 2019 at 4:57 PM Dan Poltawski 
wrote:

> On Tue, 2019-07-09 at 11:12 +0300, Benny Zlotnik wrote:
> > VM live migration is supported and should work
> > Can you add engine and cinderlib logs?
>
>
> Sorry - looks like once again this was a misconfig by me on the ceph
> side..
>
> Is it possible to migrate existing vms to managed block storage? Also
> is it possible to host the hosted engine on this storage?
>
>
> Thanks Again for your help,
>
> Dan
>
> >
> > On Tue, Jul 9, 2019 at 11:01 AM Dan Poltawski <
> > dan.poltaw...@tnp.net.uk> wrote:
> > > On Tue, 2019-07-09 at 08:00 +0100, Dan Poltawski wrote:
> > > > I've now managed to succesfully create/mount/delete volumes!
> > >
> > > However, I'm seeing live migrations stay stuck. Is this supported?
> > >
> > > (gdb) py-list
> > >  345client.conf_set('rados_osd_op_timeout',
> > > timeout)
> > >  346client.conf_set('rados_mon_op_timeout',
> > > timeout)
> > >  347client.conf_set('client_mount_timeout',
> > > timeout)
> > >  348
> > >  349client.connect()
> > > >350ioctx = client.open_ioctx(pool)
> > >  351return client, ioctx
> > >  352except self.rados.Error:
> > >  353msg = _("Error connecting to ceph
> > > cluster.")
> > >  354LOG.exception(msg)
> > >  355client.shutdown()
> > >
> > >
> > > (gdb) py-bt
> > > #15 Frame 0x3ea0e50, for file /usr/lib/python2.7/site-
> > > packages/cinder/volume/drivers/rbd.py, line 350, in _do_conn
> > > (pool='storage-ssd', remote=None, timeout=-1, name='ceph',
> > > conf='/etc/ceph/ceph.conf', user='ovirt', client= > > remote
> > > 0x7fb1f4f83a60>)
> > > ioctx = client.open_ioctx(pool)
> > > #20 Frame 0x3ea4620, for file /usr/lib/python2.7/site-
> > > packages/retrying.py, line 217, in call
> > > (self= > > 0x7fb1f4f23488>, _wait_exponential_max=1073741823,
> > > _wait_incrementing_start=0, stop= > > 0x7fb1f4f23578>,
> > > _stop_max_attempt_number=5, _wait_incrementing_increment=100,
> > > _wait_random_max=1000, _retry_on_result= > > 0x7fb1f51da550>, _stop_max_delay=100, _wait_fixed=1000,
> > > _wrap_exception=False, _wait_random_min=0,
> > > _wait_exponential_multiplier=1, wait= > > 0x7fb1f4f23500>) at remote 0x7fb1f4f1ae90>, fn= > > 0x7fb1f4f23668>, args=(None, None, None), kwargs={},
> > > start_time=1562658179214, attempt_number=1)
> > > attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
> > > #25 Frame 0x3e49d50, for file /usr/lib/python2.7/site-
> > > packages/cinder/utils.py, line 818, in _wrapper (args=(None, None,
> > > None), kwargs={}, r= > > remote
> > > 0x7fb1f4f23488>, _wait_exponential_max=1073741823,
> > > _wait_incrementing_start=0, stop= > > 0x7fb1f4f23578>,
> > > _stop_max_attempt_number=5, _wait_incrementing_increment=100,
> > > _wait_random_max=1000, _retry_on_result= > > 0x7fb1f51da550>, _stop_max_delay=100, _wait_fixed=1000,
> > > _wrap_exception=False, _wait_random_min=0,
> > > _wait_exponential_multiplier=1, wait= > > 0x7fb1f4f23500>) at remote 0x7fb1f4f1ae90>)
> > > return r.call(f, *args, **kwargs)
> > > #29 Frame 0x7fb1f4f9a810, for file /usr/lib/python2.7/site-
> > > packages/cinder/volume/drivers/rbd.py, line 358, in
> > > _connect_to_rados
> > > (self= > > 0x7fb20583e830>, _is_replication_enabled=False, _execute= > > at
> > > remote 0x7fb2041242a8>, _active_config={'name': 'ceph', 'conf':
> > > '/etc/ceph/ceph.conf', 'user': 'ovirt'}, _active_backend_id=None,
> > > _initialized=False, db= > > 0x7fb203f8d520>, qos_specs_get= > > 0x7fb1f677d460>, _lock= > > _waiters=) at remote
> > > 0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
> > > 'inc_retry_interval': True, 'retry_interval': 1,
> > > 'max_retry_interval':
> > > 10}, _backend_mapping={'sqlalchemy': 'cinder.db.sqlalchemy.api'},
> > > _backend_name='sqlalchemy', use_db_reconnect=False,
> > > get_by_id=,
> > > volume_type_get=) at
> > > remote
> > > 0x7fb2003aab10>, target_mapping={'tgtadm':
> > > 'cinder.vol...(truncated)
> > > return _do_conn(pool, remote, timeout)
> > > #33 Frame 0x7fb1f4f5b220, for file /usr/lib/python2.7/site-
> > > packages/cinder/volume/drivers/rbd.py, line 177, in __init__
> > > (self= > > at
> > > remote 0x7fb20583e830>, _is_replication_enabled=False,
> > > _execute=,
> > > _active_config={'name':
> > > 'ceph', 'conf': '/etc/ceph/ceph.conf', 'user': 'ovirt'},
> > > _active_backend_id=None, _initialized=False,
> > > db= > > at remote 0x7fb203f8d520>, qos_specs_get= > > 0x7fb1f677d460>, _lock= > > 

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Dan Poltawski
On Mon, 2019-07-08 at 18:53 +0300, Benny Zlotnik wrote:
> Can you try to create multiple ceph volumes manually via rbd from the
> engine machine, so we can simulate what cinderlib does without using
> it, this can be done
> $ rbd -c ceph.conf create /vol1 --size 100M
> $ rbd -c ceph.conf create /vol2 --size 100M

Thanks - I did this and it allowed me to get to the bottom of the
problem. Those commands were freezing, and after some investigation and
reconfiguration I think the problem was with connectivity to the
ceph OSDs, whereas connectivity to the MONs was 'working'. I've now
managed to successfully create/mount/delete volumes!
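
(For anyone else debugging a similar hang, a few quick checks that help
separate MON connectivity from OSD connectivity - a rough sketch using the
'ovirt' cephx user, config path and 'storage-ssd' pool seen elsewhere in
this thread, plus the default OSD port range:

$ ceph -s -c /etc/ceph/ceph.conf --id ovirt         # only needs to reach the MONs
$ ceph osd tree -c /etc/ceph/ceph.conf --id ovirt   # shows which OSDs are up/down
$ rados -c /etc/ceph/ceph.conf --id ovirt -p storage-ssd bench 5 write -b 4096 --no-cleanup
$ nc -zv <an-osd-host> 6800   # OSDs listen on 6800-7300/tcp by default )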

Dan



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MZSKZTETT6TDZ2E3YSIZ54KY4OZLPDU7/


[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Benny Zlotnik
VM live migration is supported and should work
Can you add engine and cinderlib logs?

On Tue, Jul 9, 2019 at 11:01 AM Dan Poltawski 
wrote:

> On Tue, 2019-07-09 at 08:00 +0100, Dan Poltawski wrote:
> > I've now managed to successfully create/mount/delete volumes!
>
> However, I'm seeing live migrations stay stuck. Is this supported?
>
> (gdb) py-list
>  345client.conf_set('rados_osd_op_timeout',
> timeout)
>  346client.conf_set('rados_mon_op_timeout',
> timeout)
>  347client.conf_set('client_mount_timeout',
> timeout)
>  348
>  349client.connect()
> >350ioctx = client.open_ioctx(pool)
>  351return client, ioctx
>  352except self.rados.Error:
>  353msg = _("Error connecting to ceph cluster.")
>  354LOG.exception(msg)
>  355client.shutdown()
>
>
> (gdb) py-bt
> #15 Frame 0x3ea0e50, for file /usr/lib/python2.7/site-
> packages/cinder/volume/drivers/rbd.py, line 350, in _do_conn
> (pool='storage-ssd', remote=None, timeout=-1, name='ceph',
> conf='/etc/ceph/ceph.conf', user='ovirt', client= 0x7fb1f4f83a60>)
> ioctx = client.open_ioctx(pool)
> #20 Frame 0x3ea4620, for file /usr/lib/python2.7/site-
> packages/retrying.py, line 217, in call
> (self= 0x7fb1f4f23488>, _wait_exponential_max=1073741823,
> _wait_incrementing_start=0, stop=,
> _stop_max_attempt_number=5, _wait_incrementing_increment=100,
> _wait_random_max=1000, _retry_on_result= 0x7fb1f51da550>, _stop_max_delay=100, _wait_fixed=1000,
> _wrap_exception=False, _wait_random_min=0,
> _wait_exponential_multiplier=1, wait= 0x7fb1f4f23500>) at remote 0x7fb1f4f1ae90>, fn= 0x7fb1f4f23668>, args=(None, None, None), kwargs={},
> start_time=1562658179214, attempt_number=1)
> attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
> #25 Frame 0x3e49d50, for file /usr/lib/python2.7/site-
> packages/cinder/utils.py, line 818, in _wrapper (args=(None, None,
> None), kwargs={}, r= 0x7fb1f4f23488>, _wait_exponential_max=1073741823,
> _wait_incrementing_start=0, stop=,
> _stop_max_attempt_number=5, _wait_incrementing_increment=100,
> _wait_random_max=1000, _retry_on_result= 0x7fb1f51da550>, _stop_max_delay=100, _wait_fixed=1000,
> _wrap_exception=False, _wait_random_min=0,
> _wait_exponential_multiplier=1, wait= 0x7fb1f4f23500>) at remote 0x7fb1f4f1ae90>)
> return r.call(f, *args, **kwargs)
> #29 Frame 0x7fb1f4f9a810, for file /usr/lib/python2.7/site-
> packages/cinder/volume/drivers/rbd.py, line 358, in _connect_to_rados
> (self= 0x7fb20583e830>, _is_replication_enabled=False, _execute= remote 0x7fb2041242a8>, _active_config={'name': 'ceph', 'conf':
> '/etc/ceph/ceph.conf', 'user': 'ovirt'}, _active_backend_id=None,
> _initialized=False, db= 0x7fb203f8d520>, qos_specs_get= 0x7fb1f677d460>, _lock= _waiters=) at remote
> 0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
> 'inc_retry_interval': True, 'retry_interval': 1, 'max_retry_interval':
> 10}, _backend_mapping={'sqlalchemy': 'cinder.db.sqlalchemy.api'},
> _backend_name='sqlalchemy', use_db_reconnect=False,
> get_by_id=,
> volume_type_get=) at remote
> 0x7fb2003aab10>, target_mapping={'tgtadm': 'cinder.vol...(truncated)
> return _do_conn(pool, remote, timeout)
> #33 Frame 0x7fb1f4f5b220, for file /usr/lib/python2.7/site-
> packages/cinder/volume/drivers/rbd.py, line 177, in __init__
> (self= remote 0x7fb20583e830>, _is_replication_enabled=False,
> _execute=, _active_config={'name':
> 'ceph', 'conf': '/etc/ceph/ceph.conf', 'user': 'ovirt'},
> _active_backend_id=None, _initialized=False, db= at remote 0x7fb203f8d520>, qos_specs_get= 0x7fb1f677d460>, _lock= _waiters=) at remote
> 0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
> 'inc_retry_interval': True, 'retry_interval': 1, 'max_retry_interval':
> 10}, _backend_mapping={'sqlalchemy': 'cinder.db.sqlalchemy.api'},
> _backend_name='sqlalchemy', use_db_reconnect=False,
> get_by_id=,
> volume_type_get=) at remote
> 0x7fb2003aab10>, target_mapping={'tgtadm': ...(truncated)
> self.cluster, self.ioctx = driver._connect_to_rados(pool)
> #44 Frame 0x7fb1f4f9a620, for file /usr/lib/python2.7/site-
> packages/cinder/volume/drivers/rbd.py, line 298, in
> check_for_setup_error (self= remote 0x7fb20583e830>, _is_replication_enabled=False,
> _execute=, _active_config={'name':
> 'ceph', 'conf': '/etc/ceph/ceph.conf', 'user': 'ovirt'},
> _active_backend_id=None, _initialized=False, db= at remote 0x7fb203f8d520>, qos_specs_get= 0x7fb1f677d460>, _lock= _waiters=) at remote
> 0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
> 'inc_retry_interval': True, 'retry_interval': 1, 'max_retry_interval':
> 10}, _backend_mapping={'sqlalchemy': 'cinder.db.sqlalchemy.api'},
> _backend_name='sqlalchemy', use_db_reconnect=False,
> get_by_id=,
> volume_type_get=) at remote
> 0x7fb2003aab10>, target_mapping={'tgtadm': 'cinder...(truncated)
> with 

[ovirt-users] Re: Managed Block Storage

2019-07-09 Thread Dan Poltawski
On Tue, 2019-07-09 at 08:00 +0100, Dan Poltawski wrote:
> I've now managed to successfully create/mount/delete volumes!

However, I'm seeing live migrations stay stuck. Is this supported?

(gdb) py-list
 345client.conf_set('rados_osd_op_timeout',
timeout)
 346client.conf_set('rados_mon_op_timeout',
timeout)
 347client.conf_set('client_mount_timeout',
timeout)
 348
 349client.connect()
>350ioctx = client.open_ioctx(pool)
 351return client, ioctx
 352except self.rados.Error:
 353msg = _("Error connecting to ceph cluster.")
 354LOG.exception(msg)
 355client.shutdown()


(gdb) py-bt
#15 Frame 0x3ea0e50, for file /usr/lib/python2.7/site-
packages/cinder/volume/drivers/rbd.py, line 350, in _do_conn
(pool='storage-ssd', remote=None, timeout=-1, name='ceph',
conf='/etc/ceph/ceph.conf', user='ovirt', client=)
ioctx = client.open_ioctx(pool)
#20 Frame 0x3ea4620, for file /usr/lib/python2.7/site-
packages/retrying.py, line 217, in call
(self=, _wait_exponential_max=1073741823,
_wait_incrementing_start=0, stop=,
_stop_max_attempt_number=5, _wait_incrementing_increment=100,
_wait_random_max=1000, _retry_on_result=, _stop_max_delay=100, _wait_fixed=1000,
_wrap_exception=False, _wait_random_min=0,
_wait_exponential_multiplier=1, wait=) at remote 0x7fb1f4f1ae90>, fn=, args=(None, None, None), kwargs={},
start_time=1562658179214, attempt_number=1)
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
#25 Frame 0x3e49d50, for file /usr/lib/python2.7/site-
packages/cinder/utils.py, line 818, in _wrapper (args=(None, None,
None), kwargs={}, r=, _wait_exponential_max=1073741823,
_wait_incrementing_start=0, stop=,
_stop_max_attempt_number=5, _wait_incrementing_increment=100,
_wait_random_max=1000, _retry_on_result=, _stop_max_delay=100, _wait_fixed=1000,
_wrap_exception=False, _wait_random_min=0,
_wait_exponential_multiplier=1, wait=) at remote 0x7fb1f4f1ae90>)
return r.call(f, *args, **kwargs)
#29 Frame 0x7fb1f4f9a810, for file /usr/lib/python2.7/site-
packages/cinder/volume/drivers/rbd.py, line 358, in _connect_to_rados
(self=, _is_replication_enabled=False, _execute=, _active_config={'name': 'ceph', 'conf':
'/etc/ceph/ceph.conf', 'user': 'ovirt'}, _active_backend_id=None,
_initialized=False, db=, qos_specs_get=, _lock=) at remote
0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
'inc_retry_interval': True, 'retry_interval': 1, 'max_retry_interval':
10}, _backend_mapping={'sqlalchemy': 'cinder.db.sqlalchemy.api'},
_backend_name='sqlalchemy', use_db_reconnect=False,
get_by_id=,
volume_type_get=) at remote
0x7fb2003aab10>, target_mapping={'tgtadm': 'cinder.vol...(truncated)
return _do_conn(pool, remote, timeout)
#33 Frame 0x7fb1f4f5b220, for file /usr/lib/python2.7/site-
packages/cinder/volume/drivers/rbd.py, line 177, in __init__
(self=, _is_replication_enabled=False,
_execute=, _active_config={'name':
'ceph', 'conf': '/etc/ceph/ceph.conf', 'user': 'ovirt'},
_active_backend_id=None, _initialized=False, db=, qos_specs_get=, _lock=) at remote
0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
'inc_retry_interval': True, 'retry_interval': 1, 'max_retry_interval':
10}, _backend_mapping={'sqlalchemy': 'cinder.db.sqlalchemy.api'},
_backend_name='sqlalchemy', use_db_reconnect=False,
get_by_id=,
volume_type_get=) at remote
0x7fb2003aab10>, target_mapping={'tgtadm': ...(truncated)
self.cluster, self.ioctx = driver._connect_to_rados(pool)
#44 Frame 0x7fb1f4f9a620, for file /usr/lib/python2.7/site-
packages/cinder/volume/drivers/rbd.py, line 298, in
check_for_setup_error (self=, _is_replication_enabled=False,
_execute=, _active_config={'name':
'ceph', 'conf': '/etc/ceph/ceph.conf', 'user': 'ovirt'},
_active_backend_id=None, _initialized=False, db=, qos_specs_get=, _lock=) at remote
0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
'inc_retry_interval': True, 'retry_interval': 1, 'max_retry_interval':
10}, _backend_mapping={'sqlalchemy': 'cinder.db.sqlalchemy.api'},
_backend_name='sqlalchemy', use_db_reconnect=False,
get_by_id=,
volume_type_get=) at remote
0x7fb2003aab10>, target_mapping={'tgtadm': 'cinder...(truncated)
with RADOSClient(self):
#48 Frame 0x3e5bef0, for file /usr/lib/python2.7/site-
packages/cinderlib/cinderlib.py, line 88, in __init__
(self=, _is_replication_enabled=False,
_execute=, _active_config={'name':
'ceph', 'conf': '/etc/ceph/ceph.conf', 'user': 'ovirt'},
_active_backend_id=None, _initialized=False, db=, qos_specs_get=, _lock=) at remote
0x7fb1f5205bd0>, _wrap_db_kwargs={'max_retries': 20,
'inc_retry_interval': True, 'retry_interval': 1, 'max_retry_interval':
10}, _backend_mapping={'sqlalchemy': 'cinder.db.sqlalchemy.api'},
_backend_name='sqlalchemy', use_db_reconnect=False,
get_by_id=,
volume_type_get=) at remote
0x7fb2003aab10>, target_mapping={'tgtadm':

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Dan Poltawski
Hi,

On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:
> Any chance you can setup gdb[1] so we can find out where it's stuck
> exactly?

Yes, absolutely - but I will need some assistance in getting GDB configured in
the engine as I am not very familiar with it - or how to enable the correct
repos to get the debug info.

$ gdb python 54654

[...]

Reading symbols from /lib64/libfreeblpriv3.so...Reading symbols from 
/lib64/libfreeblpriv3.so...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libfreeblpriv3.so
0x7fcf82256483 in epoll_wait () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install python-2.7.5-80.el7_6.x86_64
(gdb) pt-bt
Undefined command: "pt-bt".  Try "help".


> Also, which version of ovirt are you using?

Using 4.3.4

> Can you also check the ceph logs for anything suspicious?

I haven't seen anything so far, but it is an entirely reasonable possibility that
this is a ceph misconfiguration, as we are learning about both tools.


thanks,

Dan

>
>
> [1] - https://wiki.python.org/moin/DebuggingWithGdb
> $ gdb python 
> then `py-bt`
>
> On Thu, Jul 4, 2019 at 7:00 PM 
> mailto:dan.poltaw...@tnp.net.uk>> wrote:
> > > Can you provide logs? mainly engine.log and cinderlib.log
> > > (/var/log/ovirt-engine/cinderlib/cinderlib.log
> >
> >
> > If I create two volumes, the first one succeeds successfully, the
> > second one hangs. If I look in the processlist after creating the
> > second volume which doesn't succceed, I see the python ./cinderlib-
> > client.py create_volume [...] command still running.
> >
> > On the ceph side, I can see only the one rbd volume.
> >
> > Logs below:
> >
> >
> >
> > --- cinderlib.log --
> >
> > 2019-07-04 16:46:30,863 - cinderlib-client - INFO - Fetch backend
> > stats [b07698bb-1688-472f-841b-70a9d52a250d]
> > 2019-07-04 16:46:56,308 - cinderlib-client - INFO - Creating volume
> > '236285cc-ac01-4239-821c-4beadd66923f', with size '2' GB [0b0f0d6f-
> > cb20-440a-bacb-7f5ead2b4b4d]
> > 2019-07-04 16:47:21,671 - cinderlib-client - INFO - Creating volume
> > '84886485-554a-44ca-964c-9758b4a16aae', with size '2' GB [a793bfc9-
> > fc37-4711-a144-d74c100cc75b]
> >
> > --- engine.log ---
> >
> > 2019-07-04 16:46:54,062+01 INFO
> > [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default
> > task-22) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command:
> > AddDiskCommand internal: false. Entities affected :  ID: 31536d80-
> > ff45-496b-9820-15441d505924 Type: StorageAction group CREATE_DISK
> > with role type USER
> > 2019-07-04 16:46:54,150+01 INFO
> > [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBloc
> > kStorageDiskCommand] (EE-ManagedThreadFactory-commandCoordinator-
> > Thread-1) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command:
> > AddManagedBlockStorageDiskCommand internal: true.
> > 2019-07-04 16:46:56,863+01 INFO
> > [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
> > (EE-ManagedThreadFactory-commandCoordinator-Thread-1) [0b0f0d6f-
> > cb20-440a-bacb-7f5ead2b4b4d] cinderlib output:
> > 2019-07-04 16:46:56,912+01 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirect
> > or] (default task-22) [] EVENT_ID:
> > USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'test0' was
> > successfully added.
> > 2019-07-04 16:47:00,126+01 INFO
> > [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback
> > ] (EE-ManagedThreadFactory-engineScheduled-Thread-95) [0b0f0d6f-
> > cb20-440a-bacb-7f5ead2b4b4d] Command 'AddDisk' id: '15fe157d-7adb-
> > 4031-9e81-f51aa0b6528f' child commands '[d056397a-7ed9-4c01-b880-
> > dd518421a2c6]' executions were completed, status 'SUCCEEDED'
> > 2019-07-04 16:47:01,136+01 INFO
> > [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-
> > ManagedThreadFactory-engineScheduled-Thread-99) [0b0f0d6f-cb20-
> > 440a-bacb-7f5ead2b4b4d] Ending command
> > 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand'
> > successfully.
> > 2019-07-04 16:47:01,141+01 INFO
> > [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBloc
> > kStorageDiskCommand] (EE-ManagedThreadFactory-engineScheduled-
> > Thread-99) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Ending command
> > 'org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBloc
> > kStorageDiskCommand' successfully.
> > 2019-07-04 16:47:01,145+01 WARN
> > [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-
> > ManagedThreadFactory-engineScheduled-Thread-99) [] VM is null - no
> > unlocking
> > 2019-07-04 16:47:01,186+01 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirect
> > or] (EE-ManagedThreadFactory-engineScheduled-Thread-99) []
> > EVENT_ID: USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'test0'
> > was successfully added.
> > 2019-07-04 16:47:19,446+01 INFO
> > [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default
> > task-22) [a793bfc9-fc37-4711-a144-d74c100cc75b] Running command:
> > 

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Dan Poltawski
On Mon, 2019-07-08 at 16:49 +0300, Benny Zlotnik wrote:
> Not too useful unfortunately :\
> Can you try py-list instead of py-bt? Perhaps it will provide better
> results

(gdb) py-list
  57if get_errno(ex) != errno.EEXIST:
  58raise
  59return listener
  60
  61def do_poll(self, seconds):
 >62return self.poll.poll(seconds)
>


Thanks for your help,


Dan

> On Mon, Jul 8, 2019 at 4:41 PM Dan Poltawski <
> dan.poltaw...@tnp.net.uk> wrote:
> > On Mon, 2019-07-08 at 16:25 +0300, Benny Zlotnik wrote:
> > > Hi,
> > >
> > > You have a typo, it's py-bt and I just tried it myself, I only
> > had to
> > > install:
> > > $ yum install -y python-devel
> > > (in addition to the packages specified in the link)
> >
> > Thanks - this is what I get:
> >
> > #3 Frame 0x7f2046b59ad0, for file /usr/lib/python2.7/site-
> > packages/eventlet/hubs/epolls.py, line 62, in do_poll
> > (self= > 0x7f20661059b0>,
> > debug_exceptions=True, debug_blocking_resolution=1, modify= > in
> > method modify of select.epoll object at remote 0x7f2048455168>,
> > running=True, debug_blocking=False, listeners={'read': {20:
> >  > greenlet.greenlet
> > object at remote 0x7f2046878410>, spent=False,
> > greenlet=,
> > evtype='read',
> > mark_as_closed=,
> > tb= > method throw of greenlet.greenlet object at remote 0x7f2046878410>)
> > at
> > remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> > greenlet=, closed=[],
> > stopping=False, timers=[(,
> > , tpl=( > switch
> > of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> > called=F...(truncated)
> > return self.poll.poll(seconds)
> > #6 Frame 0x32fbf30, for file /usr/lib/python2.7/site-
> > packages/eventlet/hubs/poll.py, line 85, in wait
> > (self= > 0x7f20661059b0>,
> > debug_exceptions=True, debug_blocking_resolution=1, modify= > in
> > method modify of select.epoll object at remote 0x7f2048455168>,
> > running=True, debug_blocking=False, listeners={'read': {20:
> >  > greenlet.greenlet
> > object at remote 0x7f2046878410>, spent=False,
> > greenlet=,
> > evtype='read',
> > mark_as_closed=,
> > tb= > method throw of greenlet.greenlet object at remote 0x7f2046878410>)
> > at
> > remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> > greenlet=, closed=[],
> > stopping=False, timers=[(,
> > , tpl=( > switch
> > of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> > called=False) at r...(truncated)
> > presult = self.do_poll(seconds)
> > #10 Frame 0x7f2046afca00, for file /usr/lib/python2.7/site-
> > packages/eventlet/hubs/hub.py, line 346, in run
> > (self= > 0x7f20661059b0>,
> > debug_exceptions=True, debug_blocking_resolution=1, modify= > in
> > method modify of select.epoll object at remote 0x7f2048455168>,
> > running=True, debug_blocking=False, listeners={'read': {20:
> >  > greenlet.greenlet
> > object at remote 0x7f2046878410>, spent=False,
> > greenlet=,
> > evtype='read',
> > mark_as_closed=,
> > tb= > method throw of greenlet.greenlet object at remote 0x7f2046878410>)
> > at
> > remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> > greenlet=, closed=[],
> > stopping=False, timers=[(,
> > , tpl=( > switch
> > of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> > called=False) ...(truncated)
> > self.wait(sleep_time)
> >
> >
> >
> > >
> > > On Mon, Jul 8, 2019 at 2:40 PM Dan Poltawski <
> > > dan.poltaw...@tnp.net.uk> wrote:
> > > > Hi,
> > > >
> > > > On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:
> > > > > > Any chance you can setup gdb[1] so we can find out where
> > it's
> > > > > stuck
> > > > > > exactly?
> > > >
> > > > Yes, abolutely - but I will need some assistance in getting GDB
> > > > configured in the engine as I am not very familar with it - or
> > how
> > > > to enable the correct repos to get the debug info.
> > > >
> > > > $ gdb python 54654
> > > >
> > > > [...]
> > > >
> > > > Reading symbols from /lib64/libfreeblpriv3.so...Reading symbols
> > > > from /lib64/libfreeblpriv3.so...(no debugging symbols
> > > > found)...done.
> > > > (no debugging symbols found)...done.
> > > > Loaded symbols for /lib64/libfreeblpriv3.so
> > > > 0x7fcf82256483 in epoll_wait () from /lib64/libc.so.6
> > > > Missing separate debuginfos, use: debuginfo-install python-
> > 2.7.5-
> > > > 80.el7_6.x86_64
> > > > (gdb) pt-bt
> > > > Undefined command: "pt-bt".  Try "help".
> > > >
> > > >
> > > > > > Also, which version of ovirt are you using?
> > > >
> > > > Using 4.3.4
> > > >
> > > > > > Can you also check the ceph logs for anything suspicious?
> > > >
> > > > I haven't seen anything so far, but is an entirely resonable
> > > > possibility this is ceph misoconfiguraiton as we are learning
> > about
> > > > both tools.
> > > >
> > > >
> > > > thanks,
> > > >
> > > > Dan
> > > >
> > > > > >
> > > > > >
> > > > > > [1] - https://wiki.python.org/moin/DebuggingWithGdb
> > > > > > $ gdb python 
> > > > > > then `py-bt`
> > > > > >
> > > > > > 

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Dan Poltawski
On Mon, 2019-07-08 at 16:25 +0300, Benny Zlotnik wrote:
> Hi,
>
> You have a typo, it's py-bt and I just tried it myself, I only had to
> install:
> $ yum install -y python-devel
> (in addition to the packages specified in the link)

Thanks - this is what I get:

#3 Frame 0x7f2046b59ad0, for file /usr/lib/python2.7/site-
packages/eventlet/hubs/epolls.py, line 62, in do_poll
(self=,
debug_exceptions=True, debug_blocking_resolution=1, modify=,
running=True, debug_blocking=False, listeners={'read': {20:
, spent=False,
greenlet=, evtype='read',
mark_as_closed=, tb=) at
remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
greenlet=, closed=[],
stopping=False, timers=[(,
, tpl=(, (), {}),
called=F...(truncated)
return self.poll.poll(seconds)
#6 Frame 0x32fbf30, for file /usr/lib/python2.7/site-
packages/eventlet/hubs/poll.py, line 85, in wait
(self=,
debug_exceptions=True, debug_blocking_resolution=1, modify=,
running=True, debug_blocking=False, listeners={'read': {20:
, spent=False,
greenlet=, evtype='read',
mark_as_closed=, tb=) at
remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
greenlet=, closed=[],
stopping=False, timers=[(,
, tpl=(, (), {}),
called=False) at r...(truncated)
presult = self.do_poll(seconds)
#10 Frame 0x7f2046afca00, for file /usr/lib/python2.7/site-
packages/eventlet/hubs/hub.py, line 346, in run
(self=,
debug_exceptions=True, debug_blocking_resolution=1, modify=,
running=True, debug_blocking=False, listeners={'read': {20:
, spent=False,
greenlet=, evtype='read',
mark_as_closed=, tb=) at
remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
greenlet=, closed=[],
stopping=False, timers=[(,
, tpl=(, (), {}),
called=False) ...(truncated)
self.wait(sleep_time)



>
> On Mon, Jul 8, 2019 at 2:40 PM Dan Poltawski <
> dan.poltaw...@tnp.net.uk> wrote:
> > Hi,
> >
> > On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:
> > > > Any chance you can setup gdb[1] so we can find out where it's
> > > stuck
> > > > exactly?
> >
> > Yes, abolutely - but I will need some assistance in getting GDB
> > configured in the engine as I am not very familar with it - or how
> > to enable the correct repos to get the debug info.
> >
> > $ gdb python 54654
> >
> > [...]
> >
> > Reading symbols from /lib64/libfreeblpriv3.so...Reading symbols
> > from /lib64/libfreeblpriv3.so...(no debugging symbols
> > found)...done.
> > (no debugging symbols found)...done.
> > Loaded symbols for /lib64/libfreeblpriv3.so
> > 0x7fcf82256483 in epoll_wait () from /lib64/libc.so.6
> > Missing separate debuginfos, use: debuginfo-install python-2.7.5-
> > 80.el7_6.x86_64
> > (gdb) pt-bt
> > Undefined command: "pt-bt".  Try "help".
> >
> >
> > > > Also, which version of ovirt are you using?
> >
> > Using 4.3.4
> >
> > > > Can you also check the ceph logs for anything suspicious?
> >
> > I haven't seen anything so far, but is an entirely resonable
> > possibility this is ceph misoconfiguraiton as we are learning about
> > both tools.
> >
> >
> > thanks,
> >
> > Dan
> >
> > > >
> > > >
> > > > [1] - https://wiki.python.org/moin/DebuggingWithGdb
> > > > $ gdb python 
> > > > then `py-bt`
> > > >
> > > > On Thu, Jul 4, 2019 at 7:00 PM 
> > > wrote:
> > > > > > > Can you provide logs? mainly engine.log and cinderlib.log
> > > > > > > (/var/log/ovirt-engine/cinderlib/cinderlib.log
> > > > > >
> > > > > >
> > > > > > If I create two volumes, the first one succeeds
> > > > successfully, the
> > > > > > second one hangs. If I look in the processlist after
> > > > creating the
> > > > > > second volume which doesn't succceed, I see the python
> > > > ./cinderlib-
> > > > > > client.py create_volume [...] command still running.
> > > > > >
> > > > > > On the ceph side, I can see only the one rbd volume.
> > > > > >
> > > > > > Logs below:
> > > > > >
> > > > > >
> > > > > >
> > > > > > --- cinderlib.log --
> > > > > >
> > > > > > 2019-07-04 16:46:30,863 - cinderlib-client - INFO - Fetch
> > > > backend
> > > > > > stats [b07698bb-1688-472f-841b-70a9d52a250d]
> > > > > > 2019-07-04 16:46:56,308 - cinderlib-client - INFO -
> > > > Creating volume
> > > > > > '236285cc-ac01-4239-821c-4beadd66923f', with size '2' GB
> > > > [0b0f0d6f-
> > > > > > cb20-440a-bacb-7f5ead2b4b4d]
> > > > > > 2019-07-04 16:47:21,671 - cinderlib-client - INFO -
> > > > Creating volume
> > > > > > '84886485-554a-44ca-964c-9758b4a16aae', with size '2' GB
> > > > [a793bfc9-
> > > > > > fc37-4711-a144-d74c100cc75b]
> > > > > >
> > > > > > --- engine.log ---
> > > > > >
> > > > > > 2019-07-04 16:46:54,062+01 INFO
> > > > > > [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand]
> > > > (default
> > > > > > task-22) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running
> > > > command:
> > > > > > AddDiskCommand internal: false. Entities affected :  ID:
> > > > 31536d80-
> > > > > > ff45-496b-9820-15441d505924 Type: StorageAction group
> > > > CREATE_DISK
> > > > > > with role type USER
> > > > > > 2019-07-04 16:46:54,150+01 INFO
> > > > > >
> > > > 

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Dan Poltawski
Hi,

On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:
> Any chance you can setup gdb[1] so we can find out where it's stuck
> exactly?

Yes, absolutely - but I will need some assistance in getting GDB
configured in the engine as I am not very familiar with it - or how to
enable the correct repos to get the debug info.

$ gdb python 54654

[...]

Reading symbols from /lib64/libfreeblpriv3.so...Reading symbols from
/lib64/libfreeblpriv3.so...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Loaded symbols for /lib64/libfreeblpriv3.so
0x7fcf82256483 in epoll_wait () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install python-2.7.5-
80.el7_6.x86_64
(gdb) pt-bt
Undefined command: "pt-bt".  Try "help".


> Also, which version of ovirt are you using?

Using 4.3.4

> Can you also check the ceph logs for anything suspicious?

I haven't seen anything so far, but it is an entirely reasonable
possibility that this is a ceph misconfiguration, as we are learning about
both tools.


thanks,

Dan

>
>
> [1] - https://wiki.python.org/moin/DebuggingWithGdb
> $ gdb python 
> then `py-bt`
>
> On Thu, Jul 4, 2019 at 7:00 PM  wrote:
> > > Can you provide logs? mainly engine.log and cinderlib.log
> > > (/var/log/ovirt-engine/cinderlib/cinderlib.log
> >
> >
> > If I create two volumes, the first one succeeds successfully, the
> > second one hangs. If I look in the processlist after creating the
> > second volume which doesn't succceed, I see the python ./cinderlib-
> > client.py create_volume [...] command still running.
> >
> > On the ceph side, I can see only the one rbd volume.
> >
> > Logs below:
> >
> >
> >
> > --- cinderlib.log --
> >
> > 2019-07-04 16:46:30,863 - cinderlib-client - INFO - Fetch backend
> > stats [b07698bb-1688-472f-841b-70a9d52a250d]
> > 2019-07-04 16:46:56,308 - cinderlib-client - INFO - Creating volume
> > '236285cc-ac01-4239-821c-4beadd66923f', with size '2' GB [0b0f0d6f-
> > cb20-440a-bacb-7f5ead2b4b4d]
> > 2019-07-04 16:47:21,671 - cinderlib-client - INFO - Creating volume
> > '84886485-554a-44ca-964c-9758b4a16aae', with size '2' GB [a793bfc9-
> > fc37-4711-a144-d74c100cc75b]
> >
> > --- engine.log ---
> >
> > 2019-07-04 16:46:54,062+01 INFO
> > [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default
> > task-22) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command:
> > AddDiskCommand internal: false. Entities affected :  ID: 31536d80-
> > ff45-496b-9820-15441d505924 Type: StorageAction group CREATE_DISK
> > with role type USER
> > 2019-07-04 16:46:54,150+01 INFO
> > [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBloc
> > kStorageDiskCommand] (EE-ManagedThreadFactory-commandCoordinator-
> > Thread-1) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command:
> > AddManagedBlockStorageDiskCommand internal: true.
> > 2019-07-04 16:46:56,863+01 INFO
> > [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
> > (EE-ManagedThreadFactory-commandCoordinator-Thread-1) [0b0f0d6f-
> > cb20-440a-bacb-7f5ead2b4b4d] cinderlib output:
> > 2019-07-04 16:46:56,912+01 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirect
> > or] (default task-22) [] EVENT_ID:
> > USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'test0' was
> > successfully added.
> > 2019-07-04 16:47:00,126+01 INFO
> > [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback
> > ] (EE-ManagedThreadFactory-engineScheduled-Thread-95) [0b0f0d6f-
> > cb20-440a-bacb-7f5ead2b4b4d] Command 'AddDisk' id: '15fe157d-7adb-
> > 4031-9e81-f51aa0b6528f' child commands '[d056397a-7ed9-4c01-b880-
> > dd518421a2c6]' executions were completed, status 'SUCCEEDED'
> > 2019-07-04 16:47:01,136+01 INFO
> > [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-
> > ManagedThreadFactory-engineScheduled-Thread-99) [0b0f0d6f-cb20-
> > 440a-bacb-7f5ead2b4b4d] Ending command
> > 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand'
> > successfully.
> > 2019-07-04 16:47:01,141+01 INFO
> > [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBloc
> > kStorageDiskCommand] (EE-ManagedThreadFactory-engineScheduled-
> > Thread-99) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Ending command
> > 'org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBloc
> > kStorageDiskCommand' successfully.
> > 2019-07-04 16:47:01,145+01 WARN
> > [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-
> > ManagedThreadFactory-engineScheduled-Thread-99) [] VM is null - no
> > unlocking
> > 2019-07-04 16:47:01,186+01 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirect
> > or] (EE-ManagedThreadFactory-engineScheduled-Thread-99) []
> > EVENT_ID: USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'test0'
> > was successfully added.
> > 2019-07-04 16:47:19,446+01 INFO
> > [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default
> > task-22) [a793bfc9-fc37-4711-a144-d74c100cc75b] Running command:
> > AddDiskCommand internal: false. Entities 

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Benny Zlotnik
Can you try to create multiple ceph volumes manually via rbd from the
engine machine, so we can simulate what cinderlib does without using it?
This can be done with:
$ rbd -c ceph.conf create /vol1 --size 100M
$ rbd -c ceph.conf create /vol2 --size 100M
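
(A fuller, hypothetical invocation for reference - the pool name goes before
the slash. Here I'm assuming the 'storage-ssd' pool and the 'ovirt' cephx
user that show up in the tracebacks elsewhere in this thread:

$ rbd -c /etc/ceph/ceph.conf --id ovirt create storage-ssd/vol1 --size 100M
$ rbd -c /etc/ceph/ceph.conf --id ovirt create storage-ssd/vol2 --size 100M
$ rbd -c /etc/ceph/ceph.conf --id ovirt ls storage-ssd        # both should be listed
$ rbd -c /etc/ceph/ceph.conf --id ovirt rm storage-ssd/vol1   # clean up when done )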

On Mon, Jul 8, 2019 at 4:58 PM Dan Poltawski 
wrote:

> On Mon, 2019-07-08 at 16:49 +0300, Benny Zlotnik wrote:
> > Not too useful unfortunately :\
> > Can you try py-list instead of py-bt? Perhaps it will provide better
> > results
>
> (gdb) py-list
>   57if get_errno(ex) != errno.EEXIST:
>   58raise
>   59return listener
>   60
>   61def do_poll(self, seconds):
>  >62return self.poll.poll(seconds)
> >
>
>
> Thanks for you help,
>
>
> Dan
>
> > On Mon, Jul 8, 2019 at 4:41 PM Dan Poltawski <
> > dan.poltaw...@tnp.net.uk> wrote:
> > > On Mon, 2019-07-08 at 16:25 +0300, Benny Zlotnik wrote:
> > > > Hi,
> > > >
> > > > You have a typo, it's py-bt and I just tried it myself, I only
> > > had to
> > > > install:
> > > > $ yum install -y python-devel
> > > > (in addition to the packages specified in the link)
> > >
> > > Thanks - this is what I get:
> > >
> > > #3 Frame 0x7f2046b59ad0, for file /usr/lib/python2.7/site-
> > > packages/eventlet/hubs/epolls.py, line 62, in do_poll
> > > (self= > > 0x7f20661059b0>,
> > > debug_exceptions=True, debug_blocking_resolution=1, modify= > > in
> > > method modify of select.epoll object at remote 0x7f2048455168>,
> > > running=True, debug_blocking=False, listeners={'read': {20:
> > >  > > greenlet.greenlet
> > > object at remote 0x7f2046878410>, spent=False,
> > > greenlet=,
> > > evtype='read',
> > > mark_as_closed=,
> > > tb= > > method throw of greenlet.greenlet object at remote 0x7f2046878410>)
> > > at
> > > remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> > > greenlet=, closed=[],
> > > stopping=False, timers=[(,
> > > , tpl=( > > switch
> > > of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> > > called=F...(truncated)
> > > return self.poll.poll(seconds)
> > > #6 Frame 0x32fbf30, for file /usr/lib/python2.7/site-
> > > packages/eventlet/hubs/poll.py, line 85, in wait
> > > (self= > > 0x7f20661059b0>,
> > > debug_exceptions=True, debug_blocking_resolution=1, modify= > > in
> > > method modify of select.epoll object at remote 0x7f2048455168>,
> > > running=True, debug_blocking=False, listeners={'read': {20:
> > >  > > greenlet.greenlet
> > > object at remote 0x7f2046878410>, spent=False,
> > > greenlet=,
> > > evtype='read',
> > > mark_as_closed=,
> > > tb= > > method throw of greenlet.greenlet object at remote 0x7f2046878410>)
> > > at
> > > remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> > > greenlet=, closed=[],
> > > stopping=False, timers=[(,
> > > , tpl=( > > switch
> > > of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> > > called=False) at r...(truncated)
> > > presult = self.do_poll(seconds)
> > > #10 Frame 0x7f2046afca00, for file /usr/lib/python2.7/site-
> > > packages/eventlet/hubs/hub.py, line 346, in run
> > > (self= > > 0x7f20661059b0>,
> > > debug_exceptions=True, debug_blocking_resolution=1, modify= > > in
> > > method modify of select.epoll object at remote 0x7f2048455168>,
> > > running=True, debug_blocking=False, listeners={'read': {20:
> > >  > > greenlet.greenlet
> > > object at remote 0x7f2046878410>, spent=False,
> > > greenlet=,
> > > evtype='read',
> > > mark_as_closed=,
> > > tb= > > method throw of greenlet.greenlet object at remote 0x7f2046878410>)
> > > at
> > > remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> > > greenlet=, closed=[],
> > > stopping=False, timers=[(,
> > > , tpl=( > > switch
> > > of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> > > called=False) ...(truncated)
> > > self.wait(sleep_time)
> > >
> > >
> > >
> > > >
> > > > On Mon, Jul 8, 2019 at 2:40 PM Dan Poltawski <
> > > > dan.poltaw...@tnp.net.uk> wrote:
> > > > > Hi,
> > > > >
> > > > > On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:
> > > > > > > Any chance you can setup gdb[1] so we can find out where
> > > it's
> > > > > > stuck
> > > > > > > exactly?
> > > > >
> > > > > Yes, abolutely - but I will need some assistance in getting GDB
> > > > > configured in the engine as I am not very familar with it - or
> > > how
> > > > > to enable the correct repos to get the debug info.
> > > > >
> > > > > $ gdb python 54654
> > > > >
> > > > > [...]
> > > > >
> > > > > Reading symbols from /lib64/libfreeblpriv3.so...Reading symbols
> > > > > from /lib64/libfreeblpriv3.so...(no debugging symbols
> > > > > found)...done.
> > > > > (no debugging symbols found)...done.
> > > > > Loaded symbols for /lib64/libfreeblpriv3.so
> > > > > 0x7fcf82256483 in epoll_wait () from /lib64/libc.so.6
> > > > > Missing separate debuginfos, use: debuginfo-install python-
> > > 2.7.5-
> > > > > 80.el7_6.x86_64
> > > > > (gdb) pt-bt
> > > > > Undefined command: "pt-bt".  Try 

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Benny Zlotnik
Not too useful unfortunately :\
Can you try py-list instead of py-bt? Perhaps it will provide better results

On Mon, Jul 8, 2019 at 4:41 PM Dan Poltawski 
wrote:

> On Mon, 2019-07-08 at 16:25 +0300, Benny Zlotnik wrote:
> > Hi,
> >
> > You have a typo, it's py-bt and I just tried it myself, I only had to
> > install:
> > $ yum install -y python-devel
> > (in addition to the packages specified in the link)
>
> Thanks - this is what I get:
>
> #3 Frame 0x7f2046b59ad0, for file /usr/lib/python2.7/site-
> packages/eventlet/hubs/epolls.py, line 62, in do_poll
> (self=,
> debug_exceptions=True, debug_blocking_resolution=1, modify= method modify of select.epoll object at remote 0x7f2048455168>,
> running=True, debug_blocking=False, listeners={'read': {20:
>  object at remote 0x7f2046878410>, spent=False,
> greenlet=, evtype='read',
> mark_as_closed=, tb= method throw of greenlet.greenlet object at remote 0x7f2046878410>) at
> remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> greenlet=, closed=[],
> stopping=False, timers=[(,
> , tpl=( of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> called=F...(truncated)
> return self.poll.poll(seconds)
> #6 Frame 0x32fbf30, for file /usr/lib/python2.7/site-
> packages/eventlet/hubs/poll.py, line 85, in wait
> (self=,
> debug_exceptions=True, debug_blocking_resolution=1, modify= method modify of select.epoll object at remote 0x7f2048455168>,
> running=True, debug_blocking=False, listeners={'read': {20:
>  object at remote 0x7f2046878410>, spent=False,
> greenlet=, evtype='read',
> mark_as_closed=, tb= method throw of greenlet.greenlet object at remote 0x7f2046878410>) at
> remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> greenlet=, closed=[],
> stopping=False, timers=[(,
> , tpl=( of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> called=False) at r...(truncated)
> presult = self.do_poll(seconds)
> #10 Frame 0x7f2046afca00, for file /usr/lib/python2.7/site-
> packages/eventlet/hubs/hub.py, line 346, in run
> (self=,
> debug_exceptions=True, debug_blocking_resolution=1, modify= method modify of select.epoll object at remote 0x7f2048455168>,
> running=True, debug_blocking=False, listeners={'read': {20:
>  object at remote 0x7f2046878410>, spent=False,
> greenlet=, evtype='read',
> mark_as_closed=, tb= method throw of greenlet.greenlet object at remote 0x7f2046878410>) at
> remote 0x7f20468cda10>}, 'write': {}}, timers_canceled=0,
> greenlet=, closed=[],
> stopping=False, timers=[(,
> , tpl=( of greenlet.greenlet object at remote 0x7f2046934c30>, (), {}),
> called=False) ...(truncated)
> self.wait(sleep_time)
>
>
>
> >
> > On Mon, Jul 8, 2019 at 2:40 PM Dan Poltawski <
> > dan.poltaw...@tnp.net.uk> wrote:
> > > Hi,
> > >
> > > On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:
> > > > > Any chance you can setup gdb[1] so we can find out where it's
> > > > stuck
> > > > > exactly?
> > >
> > > Yes, abolutely - but I will need some assistance in getting GDB
> > > configured in the engine as I am not very familar with it - or how
> > > to enable the correct repos to get the debug info.
> > >
> > > $ gdb python 54654
> > >
> > > [...]
> > >
> > > Reading symbols from /lib64/libfreeblpriv3.so...Reading symbols
> > > from /lib64/libfreeblpriv3.so...(no debugging symbols
> > > found)...done.
> > > (no debugging symbols found)...done.
> > > Loaded symbols for /lib64/libfreeblpriv3.so
> > > 0x7fcf82256483 in epoll_wait () from /lib64/libc.so.6
> > > Missing separate debuginfos, use: debuginfo-install python-2.7.5-
> > > 80.el7_6.x86_64
> > > (gdb) pt-bt
> > > Undefined command: "pt-bt".  Try "help".
> > >
> > >
> > > > > Also, which version of ovirt are you using?
> > >
> > > Using 4.3.4
> > >
> > > > > Can you also check the ceph logs for anything suspicious?
> > >
> > > I haven't seen anything so far, but is an entirely resonable
> > > possibility this is ceph misoconfiguraiton as we are learning about
> > > both tools.
> > >
> > >
> > > thanks,
> > >
> > > Dan
> > >
> > > > >
> > > > >
> > > > > [1] - https://wiki.python.org/moin/DebuggingWithGdb
> > > > > $ gdb python 
> > > > > then `py-bt`
> > > > >
> > > > > On Thu, Jul 4, 2019 at 7:00 PM 
> > > > wrote:
> > > > > > > > Can you provide logs? mainly engine.log and cinderlib.log
> > > > > > > > (/var/log/ovirt-engine/cinderlib/cinderlib.log
> > > > > > >
> > > > > > >
> > > > > > > If I create two volumes, the first one succeeds
> > > > > successfully, the
> > > > > > > second one hangs. If I look in the processlist after
> > > > > creating the
> > > > > > > second volume which doesn't succceed, I see the python
> > > > > ./cinderlib-
> > > > > > > client.py create_volume [...] command still running.
> > > > > > >
> > > > > > > On the ceph side, I can see only the one rbd volume.
> > > > > > >
> > > > > > > Logs below:
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --- cinderlib.log --
> > > > > > >
> > > > > > > 2019-07-04 16:46:30,863 - 

[ovirt-users] Re: Managed Block Storage

2019-07-08 Thread Benny Zlotnik
Hi,

You have a typo, it's py-bt and I just tried it myself, I only had to
install:
$ yum install -y python-devel
(in addition to the packages specified in the link)
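
Putting the whole recipe together (a rough sketch; the debuginfo package
version is the one gdb suggested in your missing-debuginfo message, the pid
is whichever cinderlib-client.py process is hanging, and the CentOS
debuginfo repositories need to be reachable for debuginfo-install):

$ yum install -y gdb python-devel
$ debuginfo-install -y python-2.7.5-80.el7_6.x86_64
$ gdb python $(pgrep -o -f cinderlib-client.py)
(gdb) py-bt     # Python-level backtrace
(gdb) py-list   # Python source around the current frame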

On Mon, Jul 8, 2019 at 2:40 PM Dan Poltawski 
wrote:

> Hi,
>
> On Sun, 2019-07-07 at 09:31 +0300, Benny Zlotnik wrote:
>
> > Any chance you can setup gdb[1] so we can find out where it's stuck
> > exactly?
>
>
> Yes, abolutely - but I will need some assistance in getting GDB configured
> in the engine as I am not very familar with it - or how to enable the
> correct repos to get the debug info.
>
> $ gdb python 54654
>
> [...]
>
> Reading symbols from /lib64/libfreeblpriv3.so...Reading symbols from
> /lib64/libfreeblpriv3.so...(no debugging symbols found)...done.
> (no debugging symbols found)...done.
> Loaded symbols for /lib64/libfreeblpriv3.so
> 0x7fcf82256483 in epoll_wait () from /lib64/libc.so.6
> Missing separate debuginfos, use: debuginfo-install
> python-2.7.5-80.el7_6.x86_64
> (gdb) pt-bt
> Undefined command: "pt-bt".  Try "help".
>
>
> > Also, which version of ovirt are you using?
>
>
> Using 4.3.4
>
> > Can you also check the ceph logs for anything suspicious?
>
>
> I haven't seen anything so far, but is an entirely resonable possibility
> this is ceph misoconfiguraiton as we are learning about both tools.
>
>
> thanks,
>
> Dan
>
> >
> >
> > [1] - https://wiki.python.org/moin/DebuggingWithGdb
> > $ gdb python 
> > then `py-bt`
> >
> > On Thu, Jul 4, 2019 at 7:00 PM  wrote:
>
> > > > Can you provide logs? mainly engine.log and cinderlib.log
> > > > (/var/log/ovirt-engine/cinderlib/cinderlib.log
> > >
> > >
> > > If I create two volumes, the first one succeeds successfully, the
> > > second one hangs. If I look in the processlist after creating the
> > > second volume which doesn't succceed, I see the python ./cinderlib-
> > > client.py create_volume [...] command still running.
> > >
> > > On the ceph side, I can see only the one rbd volume.
> > >
> > > Logs below:
> > >
> > >
> > >
> > > --- cinderlib.log --
> > >
> > > 2019-07-04 16:46:30,863 - cinderlib-client - INFO - Fetch backend
> > > stats [b07698bb-1688-472f-841b-70a9d52a250d]
> > > 2019-07-04 16:46:56,308 - cinderlib-client - INFO - Creating volume
> > > '236285cc-ac01-4239-821c-4beadd66923f', with size '2' GB [0b0f0d6f-
> > > cb20-440a-bacb-7f5ead2b4b4d]
> > > 2019-07-04 16:47:21,671 - cinderlib-client - INFO - Creating volume
> > > '84886485-554a-44ca-964c-9758b4a16aae', with size '2' GB [a793bfc9-
> > > fc37-4711-a144-d74c100cc75b]
> > >
> > > --- engine.log ---
> > >
> > > 2019-07-04 16:46:54,062+01 INFO
> > > [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default
> > > task-22) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command:
> > > AddDiskCommand internal: false. Entities affected :  ID: 31536d80-
> > > ff45-496b-9820-15441d505924 Type: StorageAction group CREATE_DISK
> > > with role type USER
> > > 2019-07-04 16:46:54,150+01 INFO
> > > [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBloc
> > > kStorageDiskCommand] (EE-ManagedThreadFactory-commandCoordinator-
> > > Thread-1) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command:
> > > AddManagedBlockStorageDiskCommand internal: true.
> > > 2019-07-04 16:46:56,863+01 INFO
> > > [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
> > > (EE-ManagedThreadFactory-commandCoordinator-Thread-1) [0b0f0d6f-
> > > cb20-440a-bacb-7f5ead2b4b4d] cinderlib output:
> > > 2019-07-04 16:46:56,912+01 INFO
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirect
> > > or] (default task-22) [] EVENT_ID:
> > > USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'test0' was
> > > successfully added.
> > > 2019-07-04 16:47:00,126+01 INFO
> > > [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback
> > > ] (EE-ManagedThreadFactory-engineScheduled-Thread-95) [0b0f0d6f-
> > > cb20-440a-bacb-7f5ead2b4b4d] Command 'AddDisk' id: '15fe157d-7adb-
> > > 4031-9e81-f51aa0b6528f' child commands '[d056397a-7ed9-4c01-b880-
> > > dd518421a2c6]' executions were completed, status 'SUCCEEDED'
> > > 2019-07-04 16:47:01,136+01 INFO
> > > [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-
> > > ManagedThreadFactory-engineScheduled-Thread-99) [0b0f0d6f-cb20-
> > > 440a-bacb-7f5ead2b4b4d] Ending command
> > > 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand'
> > > successfully.
> > > 2019-07-04 16:47:01,141+01 INFO
> > > [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBloc
> > > kStorageDiskCommand] (EE-ManagedThreadFactory-engineScheduled-
> > > Thread-99) [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Ending command
> > > 'org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBloc
> > > kStorageDiskCommand' successfully.
> > > 2019-07-04 16:47:01,145+01 WARN
> > > [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-
> > > ManagedThreadFactory-engineScheduled-Thread-99) [] VM is null - no
> > > unlocking
> > > 2019-07-04 16:47:01,186+01 INFO
> > > 

[ovirt-users] Re: Managed Block Storage

2019-07-07 Thread Benny Zlotnik
Hi,

Any chance you can set up gdb [1] so we can find out where it's stuck exactly?
Also, which version of ovirt are you using?
Can you also check the ceph logs for anything suspicious?


[1] - https://wiki.python.org/moin/DebuggingWithGdb
$ gdb python <pid>
then `py-bt`

On Thu, Jul 4, 2019 at 7:00 PM  wrote:

> > Can you provide logs? mainly engine.log and cinderlib.log
> > (/var/log/ovirt-engine/cinderlib/cinderlib.log)
>
>
> If I create two volumes, the first one succeeds; the second
> one hangs. If I look in the processlist after creating the second volume
> which doesn't succeed, I see the python ./cinderlib-client.py
> create_volume [...] command still running.
>
> On the ceph side, I can see only the one rbd volume.
>
> Logs below:
>
>
>
> --- cinderlib.log --
>
> 2019-07-04 16:46:30,863 - cinderlib-client - INFO - Fetch backend stats
> [b07698bb-1688-472f-841b-70a9d52a250d]
> 2019-07-04 16:46:56,308 - cinderlib-client - INFO - Creating volume
> '236285cc-ac01-4239-821c-4beadd66923f', with size '2' GB
> [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d]
> 2019-07-04 16:47:21,671 - cinderlib-client - INFO - Creating volume
> '84886485-554a-44ca-964c-9758b4a16aae', with size '2' GB
> [a793bfc9-fc37-4711-a144-d74c100cc75b]
>
> --- engine.log ---
>
> 2019-07-04 16:46:54,062+01 INFO
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-22)
> [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command: AddDiskCommand
> internal: false. Entities affected :  ID:
> 31536d80-ff45-496b-9820-15441d505924 Type: StorageAction group CREATE_DISK
> with role type USER
> 2019-07-04 16:46:54,150+01 INFO
> [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand]
> (EE-ManagedThreadFactory-commandCoordinator-Thread-1)
> [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command:
> AddManagedBlockStorageDiskCommand internal: true.
> 2019-07-04 16:46:56,863+01 INFO
> [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor]
> (EE-ManagedThreadFactory-commandCoordinator-Thread-1)
> [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] cinderlib output:
> 2019-07-04 16:46:56,912+01 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-22) [] EVENT_ID: USER_ADD_DISK_FINISHED_SUCCESS(2,021), The
> disk 'test0' was successfully added.
> 2019-07-04 16:47:00,126+01 INFO
> [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback]
> (EE-ManagedThreadFactory-engineScheduled-Thread-95)
> [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Command 'AddDisk' id:
> '15fe157d-7adb-4031-9e81-f51aa0b6528f' child commands
> '[d056397a-7ed9-4c01-b880-dd518421a2c6]' executions were completed, status
> 'SUCCEEDED'
> 2019-07-04 16:47:01,136+01 INFO
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-99)
> [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Ending command
> 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' successfully.
> 2019-07-04 16:47:01,141+01 INFO
> [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-99)
> [0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Ending command
> 'org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand'
> successfully.
> 2019-07-04 16:47:01,145+01 WARN
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-99) [] VM is null - no
> unlocking
> 2019-07-04 16:47:01,186+01 INFO
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engineScheduled-Thread-99) [] EVENT_ID:
> USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'test0' was successfully
> added.
> 2019-07-04 16:47:19,446+01 INFO
> [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-22)
> [a793bfc9-fc37-4711-a144-d74c100cc75b] Running command: AddDiskCommand
> internal: false. Entities affected :  ID:
> 31536d80-ff45-496b-9820-15441d505924 Type: StorageAction group CREATE_DISK
> with role type USER
> 2019-07-04 16:47:19,464+01 INFO
> [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand]
> (EE-ManagedThreadFactory-commandCoordinator-Thread-2)
> [a793bfc9-fc37-4711-a144-d74c100cc75b] Running command:
> AddManagedBlockStorageDiskCommand internal: true.
> 2019-07-04 16:48:19,501+01 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'commandCoordinator' is using 1 threads out of 10, 1 threads waiting for
> tasks.
> 2019-07-04 16:48:19,501+01 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'default' is using 0 threads out of 1, 5 threads waiting for tasks.
> 2019-07-04 16:48:19,501+01 INFO
> [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> 

[ovirt-users] Re: Managed Block Storage

2019-07-04 Thread dan . poltawski
> Can you provide logs? mainly engine.log and cinderlib.log
> (/var/log/ovirt-engine/cinderlib/cinderlib.log)


If I create two volumes, the first one succeeds; the second one
hangs. If I look in the processlist after creating the second volume which
doesn't succeed, I see the python ./cinderlib-client.py create_volume [...]
command still running.

On the ceph side, I can see only the one rbd volume. 
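
For reference, the same check can be scripted with the python rados/rbd
bindings. This is only a sketch: the ceph.conf path and the pool name
'volumes' are assumptions and should match the driver options configured for
the storage domain.

import rados
import rbd

# Connect with the same ceph.conf/keyring that the cinderlib backend uses.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('volumes')   # pool name is an assumption
    try:
        # Print the names of all RBD images in the pool.
        print(rbd.RBD().list(ioctx))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()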

Logs below:



--- cinderlib.log -- 

2019-07-04 16:46:30,863 - cinderlib-client - INFO - Fetch backend stats 
[b07698bb-1688-472f-841b-70a9d52a250d]
2019-07-04 16:46:56,308 - cinderlib-client - INFO - Creating volume 
'236285cc-ac01-4239-821c-4beadd66923f', with size '2' GB 
[0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d]
2019-07-04 16:47:21,671 - cinderlib-client - INFO - Creating volume 
'84886485-554a-44ca-964c-9758b4a16aae', with size '2' GB 
[a793bfc9-fc37-4711-a144-d74c100cc75b]

--- engine.log ---

2019-07-04 16:46:54,062+01 INFO  
[org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-22) 
[0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command: AddDiskCommand 
internal: false. Entities affected :  ID: 31536d80-ff45-496b-9820-15441d505924 
Type: StorageAction group CREATE_DISK with role type USER
2019-07-04 16:46:54,150+01 INFO  
[org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand]
 (EE-ManagedThreadFactory-commandCoordinator-Thread-1) 
[0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Running command: 
AddManagedBlockStorageDiskCommand internal: true.
2019-07-04 16:46:56,863+01 INFO  
[org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] 
(EE-ManagedThreadFactory-commandCoordinator-Thread-1) 
[0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] cinderlib output: 
2019-07-04 16:46:56,912+01 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default 
task-22) [] EVENT_ID: USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'test0' 
was successfully added.
2019-07-04 16:47:00,126+01 INFO  
[org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] 
(EE-ManagedThreadFactory-engineScheduled-Thread-95) 
[0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Command 'AddDisk' id: 
'15fe157d-7adb-4031-9e81-f51aa0b6528f' child commands 
'[d056397a-7ed9-4c01-b880-dd518421a2c6]' executions were completed, status 
'SUCCEEDED'
2019-07-04 16:47:01,136+01 INFO  
[org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-99) 
[0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Ending command 
'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' successfully.
2019-07-04 16:47:01,141+01 INFO  
[org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand]
 (EE-ManagedThreadFactory-engineScheduled-Thread-99) 
[0b0f0d6f-cb20-440a-bacb-7f5ead2b4b4d] Ending command 
'org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand'
 successfully.
2019-07-04 16:47:01,145+01 WARN  
[org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-99) [] VM is null - no unlocking
2019-07-04 16:47:01,186+01 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-99) [] EVENT_ID: 
USER_ADD_DISK_FINISHED_SUCCESS(2,021), The disk 'test0' was successfully added.
2019-07-04 16:47:19,446+01 INFO  
[org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-22) 
[a793bfc9-fc37-4711-a144-d74c100cc75b] Running command: AddDiskCommand 
internal: false. Entities affected :  ID: 31536d80-ff45-496b-9820-15441d505924 
Type: StorageAction group CREATE_DISK with role type USER
2019-07-04 16:47:19,464+01 INFO  
[org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand]
 (EE-ManagedThreadFactory-commandCoordinator-Thread-2) 
[a793bfc9-fc37-4711-a144-d74c100cc75b] Running command: 
AddManagedBlockStorageDiskCommand internal: true.
2019-07-04 16:48:19,501+01 INFO  
[org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] 
(EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 
'commandCoordinator' is using 1 threads out of 10, 1 threads waiting for tasks.
2019-07-04 16:48:19,501+01 INFO  
[org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] 
(EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 
'default' is using 0 threads out of 1, 5 threads waiting for tasks.
2019-07-04 16:48:19,501+01 INFO  
[org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] 
(EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 
'engine' is using 0 threads out of 500, 9 threads waiting for tasks and 0 tasks 
in queue.
2019-07-04 16:48:19,501+01 INFO  
[org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService] 
(EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool 
'engineScheduled' is using 0 threads out of 100, 100 threads waiting for tasks.
2019-07-04 16:48:19,501+01 INFO  

[ovirt-users] Re: Managed Block Storage

2019-07-04 Thread Benny Zlotnik
On Thu, Jul 4, 2019 at 1:03 PM  wrote:

> I'm testing out the managed storage to connect to ceph and I have a few
> questions:

> * Would I be correct in assuming that the hosted engine VM needs
> connectivity to the storage and not just the underlying hosts themselves?
> It seems like the cinderlib client runs from the engine?

Yes, this is correct

> * Does the ceph config and keyring need to replicated onto each
> hypervisor/host?
>
No, see [1]; the keyring and ceph config need to be present only on the engine
machine.
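
To illustrate, this is roughly the path the engine drives through cinderlib,
and only the machine running it needs ceph.conf and the keyring. A minimal
sketch, assuming an RBD backend; the pool name, user and keyring path are
illustrative and must match the driver options of the storage domain:

import cinderlib as cl

cl.setup()  # initialize cinderlib; defaults are fine for a standalone test

rbd = cl.Backend(
    volume_backend_name='ceph',
    volume_driver='cinder.volume.drivers.rbd.RBDDriver',
    rbd_pool='volumes',                   # assumption
    rbd_user='cinder',                    # assumption
    rbd_ceph_conf='/etc/ceph/ceph.conf',  # read on the engine machine only
    rbd_keyring_conf='/etc/ceph/ceph.client.cinder.keyring',  # assumption
)

vol = rbd.create_volume(size=2)  # size in GB, as in the cinderlib.log excerpts
print(vol.id, vol.status)

If create_volume hangs here too, the problem is below oVirt, in cinderlib or
ceph; if it completes, the engine side is the place to look.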

> * I have managed to do one block operation so far (i've created a volume
> which is visible on the ceph side), but multiple other operations have
> failed and are 'running' in the engine task list. Is there any way I can
> increase debugging to see whats happening?
>
Can you provide logs? mainly engine.log and cinderlib.log
(/var/log/ovirt-engine/cinderlib/cinderlib.log)
[1] -
https://ovirt.org/develop/release-management/features/storage/cinderlib-integration.html


>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/N6WAPFHQLVCQZDM7ON74ZQUFNVSOAFA5/