Strictly speaking, I don’t think this is the case anymore for Mitaka or later.  
Snapshotting a Nova instance does take more space because the image is 
flattened, but the dumb download-then-upload back into Ceph has been cut out.  
With careful attention paid to discard/TRIM, I believe you can maintain the 
thin-provisioning properties of RBD.  The workflow is explained here:  
https://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/
 
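For reference, a minimal sketch of the discard plumbing that workflow depends 
on.  These are the libvirt-driver options I'd expect on a Mitaka-era 
deployment, so verify the names against your release docs:

    # nova.conf on the compute nodes (assumption: libvirt driver)
    [libvirt]
    # pass guest discard/TRIM requests through to RBD
    hw_disk_discard = unmap

    # discard needs the virtio-scsi bus, set via image properties
    openstack image set \
        --property hw_scsi_model=virtio-scsi \
        --property hw_disk_bus=scsi \
        <image-uuid>

The guest still has to issue the trims (periodic fstrim, or mounting with 
-o discard) for freed blocks to be returned to the pool.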

> On Aug 1, 2017, at 11:14 AM, John Petrini <jpetr...@coredial.com> wrote:
> 
> Just my two cents here, but we started out using mostly ephemeral storage in 
> our builds, and looking back I wish we hadn't. Note that we're using Ceph as 
> a backend, so my response is tailored to Ceph's behavior.
> 
> The major pain point is snapshots. When you snapshot a Nova instance, an RBD 
> snapshot occurs; it is very quick and uses very little additional storage. 
> However, the snapshot is then copied into the images pool and, in the 
> process, is converted from a snapshot into a full-size image. This takes a 
> long time because you have to copy a lot of data, and it takes up a lot of 
> space. It also causes a great deal of IO on the storage and means you end up 
> with a bunch of "snapshot images" creating clutter. On the other hand, volume 
> snapshots are near instantaneous and come without the drawbacks I've mentioned.
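> 
> To make that concrete, a rough sketch with plain rbd commands (pool and 
> image names are made up; adjust to your deployment):
> 
>     # a native RBD snapshot is copy-on-write: fast and nearly free
>     rbd snap create vms/<instance-uuid>_disk@mysnap
>     rbd du vms/<instance-uuid>_disk    # snapshot adds almost no used space
> 
>     # the Nova snapshot path instead leaves a flattened, full-size image in
>     # the images pool, which is what costs the time, space, and IO
>     rbd du images/<snapshot-image-uuid>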
> 
> On the plus side for ephemeral storage, resizing the root disk works better. 
> As long as your image is configured properly, it's just a matter of 
> initiating a resize and letting the instance reboot to grow the root disk. 
> When using volumes as your root disk, you instead have to shut down the 
> instance, grow the volume, and boot it again.
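> 
> For reference, the volume-backed grow looks roughly like this with current 
> clients (a sketch; check the flags against your client versions):
> 
>     openstack server stop <server-id>
>     cinder extend <volume-id> <new-size-gb>
>     openstack server start <server-id>
> 
> and the filesystem still has to be grown inside the guest (growpart plus 
> resize2fs or xfs_growfs) unless cloud-init does it on boot.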
> 
> I hope this helps! If anyone on the list knows something I don't regarding 
> these issues, please chime in. I'd love to know if there's a better way.
> 
> Regards,
> John Petrini
> 
> 
> On Tue, Aug 1, 2017 at 10:50 AM, Kimball, Conrad <conrad.kimb...@boeing.com> wrote:
> In our process of standing up an OpenStack internal cloud we are facing the 
> question of ephemeral storage vs. Cinder volumes for instance root disks.
> 
>  
> 
> As I look at public clouds such as AWS and Azure, the norm is to use 
> persistent volumes for the root disk.  AWS started out with images booting 
> onto ephemeral disk, but soon afterward they released Elastic Block Storage, 
> and ever since, the clear trend has been toward EBS-backed instances; now 
> when I look at their quick-start list of 33 AMIs, all of them are EBS-backed.  
> And I’m not even sure one can have anything except persistent root disks in 
> Azure VMs.
> 
>  
> 
> Based on this and a number of other factors, I think we want our users’ 
> normal / default behavior to boot onto Cinder-backed volumes instead of onto 
> ephemeral storage.  But then I look at OpenStack, and its design point 
> appears to be booting images onto ephemeral storage; while it is possible to 
> boot an image onto a new volume, this is clumsy (I haven’t found a way to 
> make this the default behavior), and we are experiencing performance 
> problems (that, admittedly, we have not yet run to ground).
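> 
> To illustrate the clumsiness, this is what booting an image onto a new 
> volume looks like with the nova CLI (sizes and ids below are placeholders):
> 
>     nova boot --flavor m1.small \
>         --block-device source=image,id=<image-uuid>,dest=volume,size=20,bootindex=0,shutdown=remove \
>         --nic net-id=<network-uuid> \
>         my-instance
> 
> As far as I can tell there is no configuration option to make this the 
> default behavior, so it has to be wrapped in tooling.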
> 
>  
> 
> So …
> 
> - Are other operators routinely booting onto Cinder volumes instead of 
> ephemeral storage?
> 
> - What has been your experience with this; any advice?
> 
>  
> 
> Conrad Kimball
> 
> Associate Technical Fellow
> 
> Chief Architect, Enterprise Cloud Services
> 
> Application Infrastructure Services / Global IT Infrastructure / Information 
> Technology & Data Analytics
> 
> conrad.kimb...@boeing.com
> P.O. Box 3707, Mail Code 7M-TE
> 
> Seattle, WA  98124-2207
> 
> Bellevue 33-11 bldg, office 3A6-3.9
> 
> Mobile:  425-591-7802
>  

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
