[ovirt-users] oVirt and Ceph iSCSI: separating discovery auth / target auth ?

2019-04-15 Thread Matthias Leopold

Hi,

I'm trying to use the Ceph iSCSI gateway with oVirt.

According to my tests with oVirt 4.3.2
* you cannot separate iSCSI discovery auth and target auth
* you cannot use an iSCSI gateway that has no discovery auth, but uses 
CHAP for targets
This means I'm forced to use the same credentials for discovery auth and 
target auth.
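
For reference, the iSCSI initiator itself keeps the two separate; a minimal 
sketch of the relevant open-iscsi (iscsid.conf) settings, with placeholder 
credentials, would be:

   # discovery (SendTargets) CHAP - one set of credentials per portal
   discovery.sendtargets.auth.authmethod = CHAP
   discovery.sendtargets.auth.username = discovery_user
   discovery.sendtargets.auth.password = discovery_secret

   # session (per-target) CHAP - can differ per target
   node.session.auth.authmethod = CHAP
   node.session.auth.username = target_user
   node.session.auth.password = target_secret

So the limitation appears to be in how oVirt collects and passes credentials, 
not in the initiator itself.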


In the Ceph iSCSI gateway I can have multiple targets that use different 
credentials, but I can define discovery auth only once for the whole 
gateway (or have no discovery auth at all).


If all of this is correct and I want to use the Ceph iSCSI gateway with 
oVirt, then

* I have to use discovery auth
* the discovery auth credentials will give every other Ceph iSCSI 
gateway user access to the oVirt target


This is not a desirable situation.
Did I understand anything wrong? Are there other ways to solve this?

thx
matthias



Re: [ovirt-users] oVirt and Ceph

2016-06-27 Thread Alessandro De Salvo

Hi Nir,
yes indeed, we use the high-availability setup from oVirt for the 
Glance/Cinder VM, hosted on highly available gluster storage. For the DB 
we use an SSD-backed Percona cluster. The VM itself connects to the DB 
cluster via haproxy, so we should have full high availability.
The only problem with the VM is the first time you start the oVirt 
cluster, since you cannot start any VM using ceph volumes before the 
Glance/Cinder VM is up. That is easy to solve, though: even if you 
autostart all the machines, they will come up in the correct order.
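
For illustration, the haproxy front end for a Percona/Galera cluster like 
this might look roughly as follows (hypothetical hostnames and addresses, 
just a sketch rather than the actual config):

   listen galera
       bind 0.0.0.0:3306
       mode tcp
       balance leastconn
       option tcp-check
       server db1 192.168.1.11:3306 check
       server db2 192.168.1.12:3306 check
       server db3 192.168.1.13:3306 check backup

The keystone/glance/cinder services then point their database URLs at the 
haproxy address instead of at any single DB node.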

Cheers,

Alessandro

On 27/06/16 11:24, Nir Soffer wrote:

On Mon, Jun 27, 2016 at 12:02 PM, Alessandro De Salvo
 wrote:

Hi,
the cinder container has been broken for a while, since the kollaglue
changed the installation method upstream, AFAIK.
Also, it seems that even the latest ovirt 4.0 pulls down the "kilo" version
of openstack, so you will need to install your own if you need a more recent
one.
We are using a VM managed by ovirt itself for keystone/glance/cinder with
our ceph cluster, and it works quite well with the Mitaka version, which is
the latest one. The DB is hosted outside, so that even if we lose the VM we
don't lose the state, besides the performance benefits. The installation is
not using containers, but installs the services directly via
puppet/Foreman.
So far we are happily using ceph in this way. The only drawback of this
setup is that if the VM is not up we cannot start machines with ceph volumes
attached, but the running machines survive without problems even if the
cinder VM is down.

Thanks for the info Alessandro!

This seems like the best way to run cinder/ceph: using other storage for
these vms, so the cinder vm does not depend on the storage it is managing.

If you use highly available vms, ovirt will make sure they are up all the time,
and will migrate them to other hosts when needed.

Nir




Re: [ovirt-users] oVirt and Ceph

2016-06-27 Thread Nir Soffer
On Mon, Jun 27, 2016 at 12:02 PM, Alessandro De Salvo
 wrote:
> Hi,
> the cinder container has been broken for a while, since the kollaglue
> changed the installation method upstream, AFAIK.
> Also, it seems that even the latest ovirt 4.0 pulls down the "kilo" version
> of openstack, so you will need to install your own if you need a more recent
> one.
> We are using a VM managed by ovirt itself for keystone/glance/cinder with
> our ceph cluster, and it works quite well with the Mitaka version, which is
> the latest one. The DB is hosted outside, so that even if we lose the VM we
> don't lose the state, besides the performance benefits. The installation is
> not using containers, but installs the services directly via
> puppet/Foreman.
> So far we are happily using ceph in this way. The only drawback of this
> setup is that if the VM is not up we cannot start machines with ceph volumes
> attached, but the running machines survive without problems even if the
> cinder VM is down.

Thanks for the info Alessandro!

This seems like the best way to run cinder/ceph: using other storage for
these vms, so the cinder vm does not depend on the storage it is managing.

If you use highly available vms, ovirt will make sure they are up all the time,
and will migrate them to other hosts when needed.

Nir


Re: [ovirt-users] oVirt and Ceph

2016-06-27 Thread Alessandro De Salvo

Hi,
the cinder container has been broken for a while, since the kollaglue 
changed the installation method upstream, AFAIK.
Also, it seems that even the latest ovirt 4.0 pulls down the "kilo" 
version of openstack, so you will need to install your own if you need a 
more recent one.
We are using a VM managed by ovirt itself for keystone/glance/cinder 
with our ceph cluster, and it works quite well with the Mitaka version, 
which is the latest one. The DB is hosted outside, so that even if we 
lose the VM we don't lose the state, besides the performance benefits. 
The installation is not using containers, but installs the services 
directly via puppet/Foreman.
So far we are happily using ceph in this way. The only drawback of this 
setup is that if the VM is not up we cannot start machines with ceph 
volumes attached, but the running machines survive without problems 
even if the cinder VM is down.

Cheers,

Alessandro


On 27/06/16 09:37, Barak Korren wrote:

You may like to check this project providing production-ready openstack
containers:
https://github.com/openstack/kolla


Also, the oVirt installer can actually deploy these containers for you:

https://www.ovirt.org/develop/release-management/features/cinderglance-docker-integration/






Re: [ovirt-users] oVirt and Ceph

2016-06-27 Thread Barak Korren
>
> You may like to check this project providing production-ready openstack
> containers:
> https://github.com/openstack/kolla
>

Also, the oVirt installer can actually deploy these containers for you:

https://www.ovirt.org/develop/release-management/features/cinderglance-docker-integration/


-- 
Barak Korren
bkor...@redhat.com
RHEV-CI Team


Re: [ovirt-users] oVirt and Ceph

2016-06-27 Thread Nir Soffer
On Sun, Jun 26, 2016 at 11:49 AM, Nicolás  wrote:
> Hi Nir,
>
> On 25/06/16 at 22:57, Nir Soffer wrote:
>
> On Sat, Jun 25, 2016 at 11:47 PM, Nicolás  wrote:
>
> Hi,
>
> We're using Ceph along with an iSCSI gateway, so our storage domain is
> actually an iSCSI backend. So far, we have had zero issues with approx. 50
> high-IO VMs. Perhaps [1] might shed some light on how to set it up.
>
> Can you share more details on this setup and how you integrate with ovirt?
>
> For example, are you using ceph luns in regular iscsi storage domain, or
> attaching luns directly to vms?
>
>
> Fernando Frediani (responding to this thread) hit the nail on the head.
> Actually we have a 3-node Ceph infrastructure, so we created a few volumes
> on the Ceph side (RBD) and then exported them to iSCSI, so it's oVirt
> that creates the LVs on top; this way we don't need to attach LUNs
> directly.
>
> Once the volumes are exported on the iSCSI side, adding an iSCSI domain on
> oVirt is enough to make the whole thing work.
>
> As for experience, we have done a few tests and so far we've had zero
> issues:
>
> The main bottleneck is the iSCSI gateway interface bandwidth. In our case we
> have a balance-alb bond over two 1G network interfaces. Later we realized
> this kind of bonding is useless because MAC addresses won't change, so in
> practice only 1G will be used at most. Making some heavy tests (i.e.,
> powering on 50 VMs at a time) we've reached this threshold at specific
> points but it didn't affect performance significantly.
> Doing some additional heavy tests (powering on and off all VMs at a time),
> we've reached a maximum of approx. 1200 IOPS. In normal
> conditions we don't surpass 200 IOPS, even when these 50 VMs do lots of disk
> operations.
> We've also done some tolerance tests, like removing one or more disks from a
> Ceph node, reinserting them, suddenly shutting down one node, restoring it...
> The only problem we've experienced is slower access to the iSCSI backend,
> which results in a message in the oVirt manager warning about this:
> something like "Storage is taking too long to respond...", which lasted maybe
> 15-20 seconds. We got no VM pauses at any time, though, nor any significant
> issue.

This setup works, but you are not fully using ceph's potential.

You are actually using iscsi storage, so you are limited to 350 lvs per
storage domain (for performance reasons). You are also using ovirt thin
provisioning instead of ceph thin provisioning, so all your vms depend
on the spm to extend their disks when needed, and your vms may pause
from time to time if the spm cannot extend the disks fast enough.

When cloning disks (e.g. creating a vm from a template), you are copying the
data from ceph to the spm node, and back to ceph. With cinder/ceph,
this operation happens inside the ceph cluster and is much more efficient,
possibly not copying anything.
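
To illustrate what "possibly not copying anything" means on the ceph side,
a clone there can be a copy-on-write child of a protected snapshot rather
than a full copy - a sketch with placeholder pool/image names (this is the
standard rbd workflow, not something ovirt drives for you in the iscsi setup):

   rbd snap create volumes/template-disk@base       # snapshot the template image
   rbd snap protect volumes/template-disk@base      # clones require a protected snapshot
   rbd clone volumes/template-disk@base volumes/vm01-disk   # COW clone, no data copied up front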

Performance is limited by the iscsi gateway(s) - when using native ceph,
each vm talks directly to the osds that hold its data, so reads and writes
are spread across multiple hosts.

On the other hand, you are not limited by features missing from our current
ceph implementation (e.g. live storage migration, copying disks from other
storage domains, monitoring).

It would be interesting to compare cinder/ceph with your system. You
can install a vm with cinder and the rest of the components, add another
pool for cinder, and compare vms using native ceph and iscsi/ceph.

You may like to check this project providing production-ready openstack
containers:
https://github.com/openstack/kolla

Nir


Re: [ovirt-users] oVirt and Ceph

2016-06-26 Thread Kevin Hrpcek
Hello Charles,

The solution I came up with for this problem was to use RDO. I have
oVirt engine running on dedicated hardware. The best way to have oVirt
engine and RDO running on the same hardware is to build a VM on that
hardware with virt-manager or virsh, using the local disk as storage
(you could possibly replace the VM with docker, but I never explored
that option). I found this necessary because the oVirt engine and
RDO httpd configs didn't play well together. They could probably be made to
work on the same OS instance, but it was taking much more time than I
wanted to figure out how to make httpd serve both. Once the VM is up
and running, I set up the RDO repos on it and installed packstack. Use
packstack to generate an answers file, then go through the answers file and
set it up so that it only installs Cinder, Keystone, MariaDB, and RabbitMQ.
These are the only pieces of openstack necessary for cinder to work
correctly. Once it is installed you need to configure cinder and keystone
the way you want, since they only come with the admin tenant, user, project,
etc. I set up an ovirt user, tenant and project, and configured cinder to
use my ceph cluster/pool.
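
For anyone following the same route, the relevant pieces look roughly like
this - a hedged sketch, since the exact answer-file keys vary between RDO
releases and all names, pools and UUIDs below are placeholders:

   # generate and trim the packstack answers file
   packstack --gen-answer-file=cinder-only.txt
   # in cinder-only.txt, keep cinder and disable the rest, e.g.:
   #   CONFIG_CINDER_INSTALL=y
   #   CONFIG_GLANCE_INSTALL=n
   #   CONFIG_NOVA_INSTALL=n
   #   CONFIG_NEUTRON_INSTALL=n
   #   CONFIG_HORIZON_INSTALL=n
   #   CONFIG_SWIFT_INSTALL=n
   packstack --answer-file=cinder-only.txt

   # /etc/cinder/cinder.conf - point the volume backend at ceph
   [DEFAULT]
   enabled_backends = ceph
   [ceph]
   volume_driver = cinder.volume.drivers.rbd.RBDDriver
   rbd_pool = ovirt-volumes
   rbd_ceph_conf = /etc/ceph/ceph.conf
   rbd_user = cinder
   rbd_secret_uuid = <libvirt secret uuid registered on the hypervisors>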

It is much simpler to do than that long paragraph may make it seem at
first. I've also tested using CephFS as a POSIX storage domain in oVirt. It
works but in my experience there was at least a 25% performance decrease
over Cinder/RBD.

Kevin

On Fri, Jun 24, 2016 at 3:23 PM, Charles Gomes 
wrote:

> Hello
>
>
>
> I’ve been reading lots of material about implementing oVirt with Ceph,
> however all talk about using Cinder.
>
> Is there a way to get oVirt with Ceph without having to implement entire
> Openstack ?
>
> I’m currently using Foreman to deploy Ceph and KVM nodes, trying
> to minimize the amount of moving parts. I heard something about oVirt
> providing a managed Cinder appliance, has anyone seen this?
>
>
>


Re: [ovirt-users] oVirt and Ceph

2016-06-26 Thread Yaniv Dary
Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
8272306
Email: yd...@redhat.com
IRC : ydary


On Sun, Jun 26, 2016 at 11:49 AM, Nicolás  wrote:

> Hi Nir,
>
> On 25/06/16 at 22:57, Nir Soffer wrote:
>
> On Sat, Jun 25, 2016 at 11:47 PM, Nicolás  
>  wrote:
>
> Hi,
>
> We're using Ceph along with an iSCSI gateway, so our storage domain is
> actually an iSCSI backend. So far, we have had zero issues with approx. 50
> high-IO VMs. Perhaps [1] might shed some light on how to set it up.
>
> Can you share more details on this setup and how you integrate with ovirt?
>
> For example, are you using ceph luns in regular iscsi storage domain, or
> attaching luns directly to vms?
>
>
> Fernando Frediani (responding to this thread) hit the nail on the head.
> Actually we have a 3-node Ceph infrastructure, so we created a few volumes
> on the Ceph side (RBD) and then exported them to iSCSI, so it's oVirt
> that creates the LVs on top; this way we don't need to attach LUNs
> directly.
>
> Once the volumes are exported on the iSCSI side, adding an iSCSI domain on
> oVirt is enough to make the whole thing work.
>
> As for experience, we have done a few tests and so far we've had zero
> issues:
>
>  - The main bottleneck is the iSCSI gateway interface bandwidth. In our
>    case we have a balance-alb bond over two 1G network interfaces. Later we
>    realized this kind of bonding is useless because MAC addresses won't
>    change, so in practice only 1G will be used at most. Making some heavy
>    tests (i.e., powering on 50 VMs at a time) we've reached this threshold
>    at specific points but it didn't affect performance significantly.
>
>
Did you try using iSCSI bonding to allow the use of more than one path?
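(In practice, with the Ceph iSCSI gateway this usually means logging the
hosts into two or more gateway portals for the same target and letting
dm-multipath spread the I/O - a hedged sketch with placeholder IQNs and
addresses, not the setup described above:)

   iscsiadm -m discovery -t sendtargets -p 192.168.10.1
   iscsiadm -m node -T iqn.2016-06.com.example.gw:ovirt -p 192.168.10.1 --login
   iscsiadm -m node -T iqn.2016-06.com.example.gw:ovirt -p 192.168.10.2 --login
   multipath -ll   # both sessions should appear as paths of one mpath device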


>
>  - Doing some additional heavy tests (powering on and off all VMs at a
>    time), we've reached a maximum of approx. 1200 IOPS. In normal
>    conditions we don't surpass 200 IOPS, even when these 50 VMs do lots
>    of disk operations.
>  - We've also done some tolerance tests, like removing one or more
>    disks from a Ceph node, reinserting them, suddenly shutting down one
>    node, restoring it... The only problem we've experienced is slower
>    access to the iSCSI backend, which results in a message in the oVirt
>    manager warning about this: something like "Storage is taking too
>    long to respond...", which lasted maybe 15-20 seconds. We got no VM
>    pauses at any time, though, nor any significant issue.
>
> Did you try our dedicated cinder/ceph support and compare it with the ceph
> iscsi gateway?
>
>
> Actually no; in order to avoid deploying Cinder we implemented the
> gateway directly, as it looked easier to us.
>
> Nir
>
>
> Hope this helps.
>
> Regards.
>
>


Re: [ovirt-users] oVirt and Ceph

2016-06-26 Thread Nicolás

Hi Nir,

On 25/06/16 at 22:57, Nir Soffer wrote:

On Sat, Jun 25, 2016 at 11:47 PM, Nicolás  wrote:

Hi,

We're using Ceph along with an iSCSI gateway, so our storage domain is
actually an iSCSI backend. So far, we have had zero issues with approx. 50
high-IO VMs. Perhaps [1] might shed some light on how to set it up.

Can you share more details on this setup and how you integrate with ovirt?

For example, are you using ceph luns in regular iscsi storage domain, or
attaching luns directly to vms?


Fernando Frediani (responding to this thread) hit the nail on the head. 
Actually we have a 3-node Ceph infrastructure, so we created a few 
volumes on the Ceph side (RBD) and then exported them to iSCSI, so 
it's oVirt that creates the LVs on top; this way we don't need to 
attach LUNs directly.


Once the volumes are exported on the iSCSI side, adding an iSCSI domain 
on oVirt is enough to make the whole thing work.
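
To make the flow concrete, exporting an RBD image through a plain LIO iSCSI
target looks roughly like this - a hedged sketch with placeholder pool, image
and IQN names, using the stock rbd/targetcli tools rather than our exact
tooling:

   rbd create --size 2048000 ovirt-pool/ovirt-lun0   # ~2 TB volume carved out of the Ceph pool
   rbd map ovirt-pool/ovirt-lun0                     # appears as e.g. /dev/rbd0 on the gateway
   targetcli /backstores/block create name=ovirt-lun0 dev=/dev/rbd0
   targetcli /iscsi create iqn.2016-06.com.example.gw:ovirt
   targetcli /iscsi/iqn.2016-06.com.example.gw:ovirt/tpg1/luns create /backstores/block/ovirt-lun0
   targetcli /iscsi/iqn.2016-06.com.example.gw:ovirt/tpg1/acls create iqn.1994-05.com.redhat:ovirt-host1

oVirt then sees the exported LUN like any other iSCSI storage domain and
creates its own LVs on top of it.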


As for experience, we have done a few tests and so far we've had zero 
issues:


 * The main bottleneck is the iSCSI gateway interface bandwidth. In our
   case we have a balance-alb bond over two 1G network interfaces.
   Later we realized this kind of bonding is useless because MAC
   addresses won't change, so in practice only 1G will be used at most.
   Making some heavy tests (i.e., powering on 50 VMs at a time) we've
   reached this threshold at specific points but it didn't affect
   performance significantly.
 * Doing some additional heavy tests (powering on and off all VMs at a
   time), we've reached a maximum of approx. 1200 IOPS.
   In normal conditions we don't surpass 200 IOPS, even when these 50
   VMs do lots of disk operations.
 * We've also done some tolerance tests, like removing one or more
   disks from a Ceph node, reinserting them, suddenly shutting down one
   node, restoring it... The only problem we've experienced is slower
   access to the iSCSI backend, which results in a message in the oVirt
   manager warning about this: something like "Storage is taking too
   long to respond...", which lasted maybe 15-20 seconds. We got no VM
   pauses at any time, though, nor any significant issue.


Did you try our dedicated cinder/ceph support and compare it with the ceph
iscsi gateway?


Actually no; in order to avoid deploying Cinder we implemented the 
gateway directly, as it looked easier to us.



Nir


Hope this helps.

Regards.




Re: [ovirt-users] oVirt and Ceph

2016-06-25 Thread Fernando Frediani

This solution looks interesting.

If I understand correctly, you first build your Ceph pool, then export 
RBD images to an iSCSI target, which exposes them to oVirt, which in 
turn creates LVs on top of them?


Could you share more details about your experience? It looks like a way to 
get Ceph + oVirt without Cinder.


Thanks

Fernando

On 25/06/2016 17:47, Nicolás wrote:

Hi,

We're using Ceph along with an iSCSI gateway, so our storage domain is 
actually an iSCSI backend. So far, we have had zero issues with approx. 
50 high-IO VMs. Perhaps [1] might shed some light on how to set 
it up.


Regards.

[1]: 
https://www.suse.com/documentation/ses-2/book_storage_admin/data/cha_ceph_iscsi.html
On 24/6/2016 9:28 p.m., Charles Gomes  wrote:


Hello

I’ve been reading lots of material about implementing oVirt with
Ceph, however all talk about using Cinder.

Is there a way to get oVirt with Ceph without having to implement
entire Openstack ?

I’m currently using Foreman to deploy Ceph and KVM nodes,
trying to minimize the amount of moving parts. I heard something
about oVirt providing a managed Cinder appliance, has anyone seen this?





Re: [ovirt-users] oVirt and Ceph

2016-06-25 Thread Nir Soffer
On Sat, Jun 25, 2016 at 11:47 PM, Nicolás  wrote:
> Hi,
>
> We're using Ceph along with an iSCSI gateway, so our storage domain is
> actually an iSCSI backend. So far, we have had zero issues with approx. 50
> high-IO VMs. Perhaps [1] might shed some light on how to set it up.

Can you share more details on this setup and how you integrate with ovirt?

For example, are you using ceph luns in regular iscsi storage domain, or
attaching luns directly to vms?

Did you try our dedicated cinder/ceph support and compare it with the ceph
iscsi gateway?

Nir


Re: [ovirt-users] oVirt and Ceph

2016-06-25 Thread Nicolás
Hi,

We're using Ceph along with an iSCSI gateway, so our storage domain is actually an iSCSI backend. So far, we have had zero issues with approx. 50 high-IO VMs. Perhaps [1] might shed some light on how to set it up.

Regards.

[1]: https://www.suse.com/documentation/ses-2/book_storage_admin/data/cha_ceph_iscsi.html

On 24/6/2016 9:28 p.m., Charles Gomes  wrote:



Hello
 
I’ve been reading lots of material about implementing oVirt with Ceph, however all talk about using Cinder.

Is there a way to get oVirt with Ceph without having to implement entire Openstack ?

I’m currently using Foreman to deploy Ceph and KVM nodes, trying to minimize the amount of moving parts. I heard something about oVirt providing a managed Cinder appliance, has anyone seen this?
 





Re: [ovirt-users] oVirt and Ceph

2016-06-25 Thread Maor Lipchuk
Hi Charles,

Currently, oVirt communicates with Ceph only through Cinder.
If you want to avoid using Cinder, perhaps you can try CephFS and
mount it as a POSIX storage domain instead.
Regarding the Cinder appliance, it is not yet implemented, though we are
currently investigating this option.
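
(For reference, a CephFS POSIX-compliant FS storage domain is usually
defined with parameters along these lines - placeholder monitor address
and keyring, mirroring a plain kernel-client mount rather than anything
oVirt-specific:)

   Path:          mon1.example.com:6789:/
   VFS Type:      ceph
   Mount Options: name=ovirt,secretfile=/etc/ceph/ovirt.secret

   # equivalent manual mount on a host, for testing:
   mount -t ceph mon1.example.com:6789:/ /mnt/cephfs -o name=ovirt,secretfile=/etc/ceph/ovirt.secret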

Regards,
Maor

On Fri, Jun 24, 2016 at 11:23 PM, Charles Gomes 
wrote:

> Hello
>
>
>
> I’ve been reading lots of material about implementing oVirt with Ceph,
> however all talk about using Cinder.
>
> Is there a way to get oVirt with Ceph without having to implement entire
> Openstack ?
>
> I’m currently using Foreman to deploy Ceph and KVM nodes, trying
> to minimize the amount of moving parts. I heard something about oVirt
> providing a managed Cinder appliance, has anyone seen this?
>
>
>


[ovirt-users] oVirt and Ceph

2016-06-24 Thread Charles Gomes
Hello

I've been reading lots of material about implementing oVirt with Ceph, however 
all talk about using Cinder.
Is there a way to get oVirt with Ceph without having to implement entire 
Openstack ?
I'm currently using Foreman to deploy Ceph and KVM nodes, trying to 
minimize the amount of moving parts. I heard something about oVirt providing a 
managed Cinder appliance, has anyone seen this?
