I am not sure of its status -- it looks like it was part of oVirt 3.6 planning, 
but it was recently moved to 4.0 on the wiki.  There is a video walkthrough of 
the running integration from this past August [1].  You would only need to 
deploy Cinder and Keystone -- none of the other OpenStack bits.  It also 
appears that oVirt may have some work underway to containerize a small 
Cinder/Glance setup [2].  A rough sketch of the Cinder side follows the links 
below.

[1] https://www.youtube.com/watch?v=elEkGfjLITs
[2] http://www.ovirt.org/CinderGlance_Docker_Integration
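
For reference, the Cinder side of such a minimal deployment mostly comes down 
to pointing the RBD volume driver at the Ceph cluster. A minimal sketch of the 
relevant cinder.conf section -- pool, user, and secret UUID are placeholders, 
and the exact option set varies by release:

    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <uuid-of-the-libvirt-secret>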

-- 

Jason Dillaman 
Red Hat Ceph Storage Engineering 
dilla...@redhat.com 
http://www.redhat.com 


----- Original Message ----- 

> From: "Gaetan SLONGO" <gslo...@it-optics.com>
> To: "Hugo Slabbert" <h...@slabnet.com>
> Cc: ceph-users@lists.ceph.com, "Somnath Roy" <somnath....@sandisk.com>,
> "Jason Dillaman" <dilla...@redhat.com>
> Sent: Thursday, November 5, 2015 2:37:16 AM
> Subject: Re: [ceph-users] iSCSI over RDB is a good idea ?

> Thank you everybody for your interesting answers.

> I saw the Cinder integration in oVirt. Has anyone already done that? I
> don't know OpenStack (yet). Is it possible to deploy only the Cinder
> component, without a complete OpenStack setup?

> Thanks !

> ----- Original Message -----

> De: "Hugo Slabbert" <h...@slabnet.com>
> À: "Somnath Roy" <somnath....@sandisk.com>, "Jason Dillaman"
> <dilla...@redhat.com>, "Gaetan SLONGO" <gslo...@it-optics.com>
> Cc: ceph-users@lists.ceph.com
> Envoyé: Mercredi 4 Novembre 2015 23:30:56
> Objet: Re: RE: [ceph-users] iSCSI over RDB is a good idea ?

> > We are using SCST over RBD and not seeing much of a degradation... Need to
> > make sure you tune SCST properly and use multiple sessions.

> Sure. My post was not intended to say that iSCSI over RBD is *slow*, just
> that it scales differently than native RBD client access.

> If I have 10 OSD hosts with a 10G link each facing clients, provided the OSDs
> can saturate the 10G links, I have 100G of aggregate nominal throughput
> under ideal conditions. If I put an iSCSI target (or an active/passive pair
> of targets) in front of that to connect iSCSI initiators to RBD devices, my
> aggregate nominal throughput for iSCSI clients under ideal conditions is
> 10G.
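
> As a quick back-of-the-envelope sketch of that math (same hypothetical
> numbers as above), in Python:
>
>     # nominal aggregate throughput: native RBD clients vs. one iSCSI gateway
>     osd_hosts = 10
>     link_gbps = 10
>     native_rbd = osd_hosts * link_gbps  # clients talk to every OSD host directly
>     iscsi_path = 1 * link_gbps          # all initiator I/O funnels through the target NIC
>     print(native_rbd, iscsi_path)       # 100 vs. 10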

> If you don't flat-top that, then it should perform just fine and the only hit
> should be the slight (possibly insignificant, depending on hardware and
> layout) latency bump from the extra hop.

> Don't get me wrong: I'm not trying to knock iSCSI over RBD at all. It's a
> perfectly legitimate and solid setup for connecting RBD-unaware clients into
> RBD storage. My intention was just to point out the difference in
> architecture and that sizing of the target hosts is a consideration that's
> different from a pure RBD environment.

> Though, I suppose if network utilization at the targets becomes an issue at
> any point, you could scale out with additional targets and balance the iSCSI
> clients across them.
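> For what it's worth, a sketch of that scale-out from the initiator side,
> assuming two hypothetical target portals exporting the same RBD-backed LUN
> (plain open-iscsi commands; dm-multipath or client placement would do the
> actual balancing):
>
>     iscsiadm -m discovery -t sendtargets -p tgt1.example.com
>     iscsiadm -m discovery -t sendtargets -p tgt2.example.com
>     iscsiadm -m node -T iqn.2015-11.com.example:rbd -p tgt1.example.com --login
>     iscsiadm -m node -T iqn.2015-11.com.example:rbd -p tgt2.example.com --login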
> --
> Hugo
> h...@slabnet.com: email, xmpp/jabber
> also on Signal

> ---- From: Somnath Roy <somnath....@sandisk.com> -- Sent: 2015-11-04 13:48 ----

> > We are using SCST over RBD and not seeing much of a degradation... Need to
> > make sure you tune SCST properly and use multiple sessions.
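> >
> > On the initiator side, "multiple sessions" can be as simple as raising the
> > per-node session count in /etc/iscsi/iscsid.conf (example value only; tune
> > it to your workload):
> >
> >     node.session.nr_sessions = 4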
> >
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> > Hugo Slabbert
> > Sent: Wednesday, November 04, 2015 1:44 PM
> > To: Jason Dillaman; Gaetan SLONGO
> > Cc: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] iSCSI over RDB is a good idea ?
> >
> >> The disadvantage of the iSCSI design is that it adds an extra hop between
> >> your VMs and the backing Ceph cluster.
> >
> > ...and introduces a bottleneck. iSCSI initiators are "dumb" in comparison
> > to native ceph/rbd clients. Whereas native clients will talk to all the
> > relevant OSDs directly, iSCSI initiators will just talk to the target
> > (unless there is some awesome magic in the RBD/tgt integration that I'm
> > unaware of). So the targets and their connectivity are a bottleneck.
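> >
> > To illustrate the difference, a minimal sketch of a native librbd read in
> > Python (pool and image names are hypothetical): the client computes object
> > placement itself and talks straight to the OSDs, with no intermediary
> > target in the data path.
> >
> >     import rados, rbd
> >
> >     cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
> >     cluster.connect()
> >     ioctx = cluster.open_ioctx('rbd')    # pool
> >     image = rbd.Image(ioctx, 'myimage')  # librbd maps image offsets to objects client-side
> >     data = image.read(0, 4096)           # read goes directly to the owning OSD
> >     image.close()
> >     ioctx.close()
> >     cluster.shutdown()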
> >
> > --
> > Hugo
> > h...@slabnet.com: email, xmpp/jabber
> > also on Signal
> >
> >

> --

> www.it-optics.com
> 
> Gaëtan SLONGO | IT & Project Manager
> Boulevard Initialis, 28 - 7000 Mons, BELGIUM
> Company: +32 (0)65 84 23 85
> Direct:  +32 (0)65 32 85 88
> Fax:     +32 (0)65 84 66 76
> GPG Key: gslongo-gpg_key.asc
> 
