Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-07 Thread Saverio Proto
Hello Conrad,

I jump late on the conversation because I was away from the mailing
lists last week.

We run OpenStack with both nova ephemeral root disks and cinder volume
boot disks. Both use a ceph rbd backend. It is the user who selects
"boot from volume" in Horizon when starting an instance.

Everything works in both cases, and there are pros and cons, as you can
see from the many answers you have received.

But if I have to give you a suggestion, I would choose only one way to
go and stick with it.

Having all this flexibility is great for us operators who understand
OpenStack internals, but it is a nightmare for OpenStack users.

First of all, looking at Horizon it is very difficult to tell what kind
of root disk is being used.
It is also difficult to understand the difference between a snapshot of
the nova instance and a snapshot of the cinder volume.
We have different snapshot procedures depending on the type of root
disk, and users always get confused.
When the root disk is a cinder volume, if you snapshot from the instance
page, you will get a 0 byte glance image connected to a cinder snapshot.

A user who has an instance with a disk should not have to know whether
the disk is managed by nova or cinder just to back up their data with a
snapshot.
Looking at cloud usability, I would say that mixing the two solutions
is not ideal. This probably explains the Amazon and Azure choices
you described earlier.

Cheers,

Saverio



2017-08-01 16:50 GMT+02:00 Kimball, Conrad :
> In our process of standing up an OpenStack internal cloud we are facing the
> question of ephemeral storage vs. Cinder volumes for instance root disks.
>
>
>
> As I look at public clouds such as AWS and Azure, the norm is to use
> persistent volumes for the root disk.  AWS started out with images booting
> onto ephemeral disk, but soon after they released Elastic Block Storage and
> ever since the clear trend has been to EBS-backed instances, and now when I
> look at their quick-start list of 33 AMIs, all of them are EBS-backed.  And
> I’m not even sure one can have anything except persistent root disks in
> Azure VMs.
>
>
>
> Based on this and a number of other factors I think we want our user normal
> / default behavior to boot onto Cinder-backed volumes instead of onto
> ephemeral storage.  But then I look at OpenStack and its design point
> appears to be booting images onto ephemeral storage, and while it is
> possible to boot an image onto a new volume this is clumsy (haven’t found a
> way to make this the default behavior) and we are experiencing performance
> problems (that admittedly we have not yet run to ground).
>
>
>
> So …
>
> · Are other operators routinely booting onto Cinder volumes instead
> of ephemeral storage?
>
> · What has been your experience with this; any advice?
>
>
>
> Conrad Kimball
>
> Associate Technical Fellow
>
> Chief Architect, Enterprise Cloud Services
>
> Application Infrastructure Services / Global IT Infrastructure / Information
> Technology & Data Analytics
>
> conrad.kimb...@boeing.com
>
> P.O. Box 3707, Mail Code 7M-TE
>
> Seattle, WA  98124-2207
>
> Bellevue 33-11 bldg, office 3A6-3.9
>
> Mobile:  425-591-7802
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-02 Thread George Mihaiescu
I totally agree with Jay; this is the best, cheapest and most scalable way to 
build a cloud environment with OpenStack.

We use local storage as the primary root disk source, which lets us make good 
use of the drive slots available in each compute node (six), and coupled with 
RAID10 this gives good I/O performance.

We also have a multi-petabyte Ceph cluster that we use to store large genomics 
files in object format, as well as a backend for Cinder volumes, but the primary 
use case for the Ceph cluster is not booting instances.

In this way, we have small failure domains, and if a VM does a lot of IO it 
only impacts a few other neighbours. The latency for writes is low, and we 
don't spend money (and drive slots) on SSD journals that improve write latency 
only until the Ceph journal needs to flush.

Speed of provisioning is not a concern because, with a small image library, 
most of the popular images are already cached on the compute nodes, and the 
time it takes for an instance to boot is just a small percentage of the total 
instance runtime (days or weeks).

The drawback is that maintenance requiring reboots needs to be scheduled in 
advance, but I would argue that booting from shared storage and having to 
orchestrate the live migration of 1000 instances across 100 compute nodes 
without performance impact on the workloads running there (some migrations 
could fail because of intense CPU or memory activity) is not very feasible 
either...

George 



> On Aug 1, 2017, at 11:59, Jay Pipes  wrote:
> 
>> On 08/01/2017 11:14 AM, John Petrini wrote:
>> Just my two cents here but we started out using mostly Ephemeral storage in 
>> our builds and looking back I wish we hadn't. Note we're using Ceph as a 
>> backend so my response is tailored towards Ceph's behavior.
>> The major pain point is snapshots. When you snapshot an nova volume an RBD 
>> snapshot occurs and is very quick and uses very little additional storage, 
>> however the snapshot is then copied into the images pool and in the process 
>> is converted from a snapshot to a full size image. This takes a long time 
>> because you have to copy a lot of data and it takes up a lot of space. It 
>> also causes a great deal of IO on the storage and means you end up with a 
>> bunch of "snapshot images" creating clutter. On the other hand volume 
>> snapshots are near instantaneous without the other drawbacks I've mentioned.
>> On the plus side for ephemeral storage; resizing the root disk of images 
>> works better. As long as your image is configured properly it's just a 
>> matter of initiating a resize and letting the instance reboot to grow the 
>> root disk. When using volumes as your root disk you instead have to shutdown 
>> the instance, grow the volume and boot.
>> I hope this help! If anyone on the list knows something I don't know 
>> regarding these issues please chime in. I'd love to know if there's a better 
>> way.
> 
> I'd just like to point out that the above is exactly the right way to think 
> about things.
> 
> Don't boot from volume (i.e. don't use a volume as your root disk).
> 
> Instead, separate the operating system from your application data. Put the 
> operating system on a small disk image (small == fast boot times), use a 
> config drive for injectable configuration and create Cinder volumes for your 
> application data.
> 
> Detach and attach the application data Cinder volume as needed to your server 
> instance. Make your life easier by not coupling application data and the 
> operating system together.
> 
> Best,
> -jay
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-02 Thread Van Leeuwen, Robert
>>> Mike Smith 
>>On the plus side, Cinder does allow you to do QOS to limit I/O, whereas I do 
>>not believe that’s an option with Nova ephemeral.

You can specify the IOPS limits in the flavor.
Drawbacks:
* You might end up with a lot of different flavors because of IOPS requirements
* Modifying an existing flavor won’t retroactively apply it to existing 
instances
   You can hack it directly in the database, but the instances will still either 
need to be rebooted or you need to run a lot of virsh commands.
   (not sure if this is any better for cinder)
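
For reference, setting such limits on a flavor looks roughly like this (the
flavor name and numbers below are only examples):

  openstack flavor create --vcpus 4 --ram 8192 --disk 40 m1.large.iolimited
  openstack flavor set m1.large.iolimited \
      --property quota:disk_read_iops_sec=500 \
      --property quota:disk_write_iops_sec=500 \
      --property quota:disk_total_bytes_sec=104857600

Instances booted with that flavor get the limits applied to their libvirt disk
devices; as noted above, changing the flavor afterwards does not touch
instances that already exist.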

>>> Mike Smith 
>> And, again depending on the Cinder solution employed, the disk I/O for this 
>> kind of setup can be significantly better than some other options including 
>> Nova ephemeral with a Ceph backend.
IMHO ceph performance specifically scales out very well (e.g. lots of 100 IOPS 
instances) but scaling up might be an issue (e.g. running a significant 
database with lots of sync writes doing 10K IOPS).
Even with an optimally tuned SSD/NVMe cluster it still might not be as fast as 
you would like it to be.

>>>Kimball, Conrad 
>> and while it is possible to boot an image onto a new volume this is clumsy
As mentioned you can make RBD the default backend for ephemeral so you no 
longer need to specify boot from volume.
Another option would be to use other automation tools to bring up your 
instances.
I recommend looking at e.g. terraform or some other way to automate deployments.
Running a single command to install a whole environment, and boot from volume 
where necessary, is really great and makes sure things are reproducible.
Our tech-savvy users like it, but if you have people who only really understand 
the web interface it might be a challenge ;)
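
For anyone looking for the knobs, making RBD the default ephemeral backend is a
nova.conf change on the compute nodes, roughly like this (pool and user names
are just examples for a typical Ceph setup):

  [libvirt]
  images_type = rbd
  images_rbd_pool = vms
  images_rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = <libvirt secret uuid>

With that in place, a plain "nova boot" from an image lands on Ceph without the
user having to ask for boot-from-volume.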

Some more points regarding ephemeral local storage:

Pros ephemeral local storage:
* No SPOF for your cloud (e.g. if a ceph software upgrade goes wrong the whole 
cloud will hang)
* Assuming SSDs: great performance
* Discourages pets, people will get used to instances being down for 
maintenance or unrecoverable due to hardware failure and will build and 
automate accordingly
* No volume storage to manage, assuming you will not offer it anyway

Cons ephemeral local storage:
* IMHO live migration with block migration is not really usable 
(the instance will behave a bit slowly for some time, and e.g. the whole Cassandra 
or Elasticsearch cluster performance will tank)
* No independent scaling of compute and space. E.g. with ephemeral you might 
have lots of disk left but no mem/cpu on the compute node or the other way 
around.
* Hardware failure will mean loss of that local data for at least a period of 
time, assuming it is recoverable at all. With enough compute nodes these will 
become weekly/daily events.
* Some pets (e.g. Jenkins boxes) are hard to get rid of even if you control the 
application landscape to a great degree.

I think that if you have a lot of “pets”, or other reasons why e.g. a 
server/rack/availability zone cannot go down for maintenance, you probably want 
to run from volume storage.
You get your data highly available and can do live-migrations for maintenance.
Note that you still have to do some manual work to boot instances somewhere 
else if a hypervisor goes down but that’s being worked on IIRC.


>>>Kimball, Conrad 
>> Bottom line:  it depends what you need, as both options work well and there 
>> are people doing both out there in the wild.
Totally agree.


Cheers,
Robert van Leeuwen
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread John Petrini
Thanks for the info. It might have something to do with the Ceph version then.
We're running hammer, and apparently the du option wasn't added until
Infernalis.
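
For anyone on Infernalis or later, checking the real usage looks something like
this (pool and image names as in the earlier examples in this thread):

  rbd du --pool images d5404709-cb86-4743-b3d5-1dc7fba836c1

which reports provisioned vs. actually used space, unlike "rbd info".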

John Petrini

On Tue, Aug 1, 2017 at 4:32 PM, Mike Lowe  wrote:

> Two things, first info does not show how much disk is used du does.
> Second, the semantics count, copy is different than clone and flatten.
> Clone and flatten which should happen if you have things working correctly
> is much faster than copy.  If you are using copy then you may be limited by
> the number of management ops in flight, this is a setting for more recent
> versions of ceph.  I don’t know if copy skips zero byte objects but clone
> and flatten certainly do.  You need to be sure that you have the proper
> settings in nova.conf for discard/unmap as well as using
> hw_scsi_model=virtio-scsi and hw_disk_bus=scsi in the image properties.
> Once discard is working and you have the qemu guest agent running in your
> instances you can force them to do a fstrim to reclaim space as an
> additional benefit.
>
> On Aug 1, 2017, at 3:50 PM, John Petrini  wrote:
>
> Maybe I'm just not understanding but when I create a nova snapshot the
> snapshot happens at RBD in the ephemeral pool and then it's copied to the
> images pool. This results in a full sized image rather than a snapshot with
> a reference to the parent.
>
> For example below is a snapshot of an ephemeral instance from our images
> pool. It's 80GB, the size of the instance, so rather than just capturing
> the state of the parent image I end up with a brand new image of the same
> size. It takes a long time to create this copy and causes high IO during
> the snapshot.
>
> rbd --pool images info d5404709-cb86-4743-b3d5-1dc7fba836c1
> rbd image 'd5404709-cb86-4743-b3d5-1dc7fba836c1':
> size 81920 MB in 20480 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.93cdd43ca5efa8
> format: 2
> features: layering, striping
> flags:
> stripe unit: 4096 kB
> stripe count: 1
>
>
> John Petrini
>
> On Tue, Aug 1, 2017 at 3:24 PM, Mike Lowe  wrote:
>
>> There is no upload if you use Ceph to back your glance (like you should),
>> the snapshot is cloned from the ephemeral pool into the the images pool,
>> then flatten is run as a background task.  Net result is that creating a
>> 120GB image vs 8GB is slightly faster on my cloud but not at all what I’d
>> call painful.
>>
>> Running nova image-create for a 8GB image:
>>
>> real 0m2.712s
>> user 0m0.761s
>> sys 0m0.225s
>>
>> Running nova image-create for a 128GB image:
>>
>> real 0m2.436s
>> user 0m0.774s
>> sys 0m0.225s
>>
>>
>>
>>
>> On Aug 1, 2017, at 3:07 PM, John Petrini  wrote:
>>
>> Yes from Mitaka onward the snapshot happens at the RBD level which is
>> fast. It's the flattening and uploading of the image to glance that's the
>> major pain point. Still it's worlds better than the qemu snapshots to the
>> local disk prior to Mitaka.
>>
>> John Petrini
>>
>> Platforms Engineer   //   CoreDial, LLC   //   coredial.com
>> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
>> P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetr...@coredial.com
>>
>> The information transmitted is intended only for the person or entity to
>> which it is addressed and may contain confidential and/or privileged
>> material. Any review, retransmission,  dissemination or other use of, or
>> taking of any action in reliance upon, this information by persons or
>> entities other than the intended recipient is prohibited. If you received
>> this in error, please contact the sender and delete the material from any
>> computer.
>>
>>
>>
>> On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe  wrote:
>>
>>> Strictly speaking I don’t think this is the case anymore for Mitaka or
>>> later.  Snapping nova does take more space as the image is flattened, but
>>> the dumb download then upload back into ceph has been cut out.  With
>>> careful attention paid to discard/TRIM I believe you can maintain the thin
>>> provisioning properties of RBD.  The workflow is explained here.
>>> https://www.sebastien-han.fr/blog/2015/10/05/openstack-nova
>>> -snapshots-on-ceph-rbd/
>>>
>>> On Aug 1, 2017, at 11:14 AM, John Petrini  wrote:
>>>
>>> Just my two cents here but we started out using mostly Ephemeral storage
>>> in our builds and looking back I wish we hadn't. Note we're using Ceph as a
>>> backend so my response is tailored towards Ceph's behavior.
>>>
>>> The major pain point is snapshots. When you snapshot an nova volume an
>>> RBD 

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Chris Friesen

On 08/01/2017 02:32 PM, Mike Lowe wrote:

Two things, first info does not show how much disk is used du does.  Second, the
semantics count, copy is different than clone and flatten.  Clone and flatten
which should happen if you have things working correctly is much faster than
copy.  If you are using copy then you may be limited by the number of management
ops in flight, this is a setting for more recent versions of ceph.  I don’t know
if copy skips zero byte objects but clone and flatten certainly do.  You need to
be sure that you have the proper settings in nova.conf for discard/unmap as well
as using hw_scsi_model=virtio-scsi and hw_disk_bus=scsi in the image properties.
  Once discard is working and you have the qemu guest agent running in your
instances you can force them to do a fstrim to reclaim space as an additional
benefit.



Just a heads-up...with virtio-scsi there is a bug where you cannot boot from 
volume and then attach another volume.


(The bug is 1702999, though it's possible the fix for 1686116 will address it, in 
which case it'd be fixed in Pike.)


Chris

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Mike Lowe
Two things: first, rbd info does not show how much disk is used; rbd du does.  
Second, the semantics matter: copy is different from clone and flatten.  Clone and 
flatten, which is what should happen if you have things working correctly, is much 
faster than copy.  If you are using copy then you may be limited by the number of 
management ops in flight, which is a setting in more recent versions of ceph.  
I don’t know if copy skips zero-byte objects, but clone and flatten certainly 
do.  You need to be sure that you have the proper settings in nova.conf for 
discard/unmap, as well as using hw_scsi_model=virtio-scsi and hw_disk_bus=scsi 
in the image properties.  Once discard is working and you have the qemu guest 
agent running in your instances, you can force them to do an fstrim to reclaim 
space as an additional benefit.
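
For anyone chasing the same settings, they look roughly like this (a sketch;
exact values depend on your deployment, and the image UUID is a placeholder):

  # nova.conf on the compute nodes
  [libvirt]
  hw_disk_discard = unmap

  # image properties
  openstack image set \
      --property hw_scsi_model=virtio-scsi \
      --property hw_disk_bus=scsi \
      --property hw_qemu_guest_agent=yes \
      <image-uuid>

With the guest agent running inside the instance, "virsh domfstrim <domain>" on
the hypervisor (or running fstrim in the guest) releases the freed blocks back
to RBD.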

> On Aug 1, 2017, at 3:50 PM, John Petrini  wrote:
> 
> Maybe I'm just not understanding but when I create a nova snapshot the 
> snapshot happens at RBD in the ephemeral pool and then it's copied to the 
> images pool. This results in a full sized image rather than a snapshot with a 
> reference to the parent.
> 
> For example below is a snapshot of an ephemeral instance from our images 
> pool. It's 80GB, the size of the instance, so rather than just capturing the 
> state of the parent image I end up with a brand new image of the same size. 
> It takes a long time to create this copy and causes high IO during the 
> snapshot.
> 
> rbd --pool images info d5404709-cb86-4743-b3d5-1dc7fba836c1
> rbd image 'd5404709-cb86-4743-b3d5-1dc7fba836c1':
>   size 81920 MB in 20480 objects
>   order 22 (4096 kB objects)
>   block_name_prefix: rbd_data.93cdd43ca5efa8
>   format: 2
>   features: layering, striping
>   flags: 
>   stripe unit: 4096 kB
>   stripe count: 1
> 
> 
> John Petrini
> 
> 
> On Tue, Aug 1, 2017 at 3:24 PM, Mike Lowe  > wrote:
> There is no upload if you use Ceph to back your glance (like you should), the 
> snapshot is cloned from the ephemeral pool into the the images pool, then 
> flatten is run as a background task.  Net result is that creating a 120GB 
> image vs 8GB is slightly faster on my cloud but not at all what I’d call 
> painful.
> 
> Running nova image-create for a 8GB image:
> 
> real  0m2.712s
> user  0m0.761s
> sys   0m0.225s
> 
> Running nova image-create for a 128GB image:
> 
> real  0m2.436s
> user  0m0.774s
> sys   0m0.225s
> 
> 
> 
> 
>> On Aug 1, 2017, at 3:07 PM, John Petrini > > wrote:
>> 
>> Yes from Mitaka onward the snapshot happens at the RBD level which is fast. 
>> It's the flattening and uploading of the image to glance that's the major 
>> pain point. Still it's worlds better than the qemu snapshots to the local 
>> disk prior to Mitaka.
>> 
>> John Petrini
>> 
>> Platforms Engineer   //   CoreDial, LLC   //   coredial.com
>> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
>> P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetr...@coredial.com
>> 
>> The information transmitted is intended only for the person or entity to 
>> which it is addressed and may contain confidential and/or privileged 
>> material. Any review, retransmission,  dissemination or other use of, or 
>> taking of any action in reliance upon, this information by persons or 
>> entities other than the intended recipient is prohibited. If you received 
>> this in error, please contact the sender and delete the material from any 
>> computer.
>> 
>>  
>> 
>> On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe > > wrote:
>> Strictly speaking I don’t think this is the case anymore for Mitaka or 
>> later.  Snapping nova does take more space as the image is flattened, but 
>> the dumb download then upload back into ceph has been cut out.  With careful 
>> attention paid to discard/TRIM I believe you can maintain the thin 
>> provisioning properties of RBD.  The workflow is explained here.  
>> https://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/
>>  
>> 
>> 
>>> On Aug 1, 2017, at 11:14 AM, John Petrini >> > wrote:
>>> 
>>> Just my two cents here but we started out using mostly Ephemeral storage in 
>>> our builds and looking back I wish we hadn't. Note we're using Ceph as a 
>>> backend so my response is tailored towards Ceph's behavior.
>>> 
>>> The major pain point is snapshots. When 

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Matt Riedemann

On 8/1/2017 10:47 AM, Sean McGinnis wrote:

Some sort of good news there. Starting with the Pike release, you will now
be able to extend an attached volume. As long as both Cinder and Nova are
at Pike or later, this should now be allowed.


And you're using the libvirt compute driver in Nova, and the volume type 
is iscsi or fibrechannel...


--

Thanks,

Matt

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread John Petrini
Maybe I'm just not understanding, but when I create a nova snapshot the
snapshot happens at the RBD level in the ephemeral pool and then it's copied to
the images pool. This results in a full-sized image rather than a snapshot with
a reference to the parent.

For example below is a snapshot of an ephemeral instance from our images
pool. It's 80GB, the size of the instance, so rather than just capturing
the state of the parent image I end up with a brand new image of the same
size. It takes a long time to create this copy and causes high IO during
the snapshot.

rbd --pool images info d5404709-cb86-4743-b3d5-1dc7fba836c1
rbd image 'd5404709-cb86-4743-b3d5-1dc7fba836c1':
size 81920 MB in 20480 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.93cdd43ca5efa8
format: 2
features: layering, striping
flags:
stripe unit: 4096 kB
stripe count: 1


John Petrini

On Tue, Aug 1, 2017 at 3:24 PM, Mike Lowe  wrote:

> There is no upload if you use Ceph to back your glance (like you should),
> the snapshot is cloned from the ephemeral pool into the the images pool,
> then flatten is run as a background task.  Net result is that creating a
> 120GB image vs 8GB is slightly faster on my cloud but not at all what I’d
> call painful.
>
> Running nova image-create for a 8GB image:
>
> real 0m2.712s
> user 0m0.761s
> sys 0m0.225s
>
> Running nova image-create for a 128GB image:
>
> real 0m2.436s
> user 0m0.774s
> sys 0m0.225s
>
>
>
>
> On Aug 1, 2017, at 3:07 PM, John Petrini  wrote:
>
> Yes from Mitaka onward the snapshot happens at the RBD level which is
> fast. It's the flattening and uploading of the image to glance that's the
> major pain point. Still it's worlds better than the qemu snapshots to the
> local disk prior to Mitaka.
>
> John Petrini
>
> Platforms Engineer   //   CoreDial, LLC   //   coredial.com
> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
> P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetr...@coredial.com
>
> The information transmitted is intended only for the person or entity to
> which it is addressed and may contain confidential and/or privileged
> material. Any review, retransmission,  dissemination or other use of, or
> taking of any action in reliance upon, this information by persons or
> entities other than the intended recipient is prohibited. If you received
> this in error, please contact the sender and delete the material from any
> computer.
>
>
>
> On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe  wrote:
>
>> Strictly speaking I don’t think this is the case anymore for Mitaka or
>> later.  Snapping nova does take more space as the image is flattened, but
>> the dumb download then upload back into ceph has been cut out.  With
>> careful attention paid to discard/TRIM I believe you can maintain the thin
>> provisioning properties of RBD.  The workflow is explained here.
>> https://www.sebastien-han.fr/blog/2015/10/05/openstack-nova
>> -snapshots-on-ceph-rbd/
>>
>> On Aug 1, 2017, at 11:14 AM, John Petrini  wrote:
>>
>> Just my two cents here but we started out using mostly Ephemeral storage
>> in our builds and looking back I wish we hadn't. Note we're using Ceph as a
>> backend so my response is tailored towards Ceph's behavior.
>>
>> The major pain point is snapshots. When you snapshot an nova volume an
>> RBD snapshot occurs and is very quick and uses very little additional
>> storage, however the snapshot is then copied into the images pool and in
>> the process is converted from a snapshot to a full size image. This takes a
>> long time because you have to copy a lot of data and it takes up a lot of
>> space. It also causes a great deal of IO on the storage and means you end
>> up with a bunch of "snapshot images" creating clutter. On the other hand
>> volume snapshots are near instantaneous without the other drawbacks I've
>> mentioned.
>>
>> On the plus side for ephemeral storage; resizing the root disk of images
>> works better. As long as your image is configured properly it's just a
>> matter of initiating a resize and letting the instance reboot to grow the
>> root disk. When using volumes as your root disk you instead have to
>> shutdown the instance, grow the volume and boot.
>>
>> I hope this help! If anyone on the list knows something I don't know
>> regarding these issues please chime in. I'd love to know if there's a
>> better way.
>>
>> Regards,
>>
>> John Petrini
>>
>> On Tue, Aug 1, 2017 at 10:50 AM, Kimball, Conrad <
>> conrad.kimb...@boeing.com> wrote:
>>
>>> In our process of standing up an OpenStack internal cloud we are 

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Mike Lowe
There is no upload if you use Ceph to back your glance (like you should); the 
snapshot is cloned from the ephemeral pool into the images pool, then 
flatten is run as a background task.  The net result is that creating a 120GB image 
vs an 8GB one is slightly faster on my cloud, but not at all what I’d call painful.

Running nova image-create for an 8GB image:

real    0m2.712s
user    0m0.761s
sys     0m0.225s

Running nova image-create for a 128GB image:

real    0m2.436s
user    0m0.774s
sys     0m0.225s




> On Aug 1, 2017, at 3:07 PM, John Petrini  wrote:
> 
> Yes from Mitaka onward the snapshot happens at the RBD level which is fast. 
> It's the flattening and uploading of the image to glance that's the major 
> pain point. Still it's worlds better than the qemu snapshots to the local 
> disk prior to Mitaka.
> 
> John Petrini
> 
> Platforms Engineer   //   CoreDial, LLC   //   coredial.com
> 751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
> P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetr...@coredial.com
> 
> The information transmitted is intended only for the person or entity to 
> which it is addressed and may contain confidential and/or privileged 
> material. Any review, retransmission,  dissemination or other use of, or 
> taking of any action in reliance upon, this information by persons or 
> entities other than the intended recipient is prohibited. If you received 
> this in error, please contact the sender and delete the material from any 
> computer.
> 
>  
> 
> 
> On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe  > wrote:
> Strictly speaking I don’t think this is the case anymore for Mitaka or later. 
>  Snapping nova does take more space as the image is flattened, but the dumb 
> download then upload back into ceph has been cut out.  With careful attention 
> paid to discard/TRIM I believe you can maintain the thin provisioning 
> properties of RBD.  The workflow is explained here.  
> https://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/
>  
> 
> 
>> On Aug 1, 2017, at 11:14 AM, John Petrini > > wrote:
>> 
>> Just my two cents here but we started out using mostly Ephemeral storage in 
>> our builds and looking back I wish we hadn't. Note we're using Ceph as a 
>> backend so my response is tailored towards Ceph's behavior.
>> 
>> The major pain point is snapshots. When you snapshot an nova volume an RBD 
>> snapshot occurs and is very quick and uses very little additional storage, 
>> however the snapshot is then copied into the images pool and in the process 
>> is converted from a snapshot to a full size image. This takes a long time 
>> because you have to copy a lot of data and it takes up a lot of space. It 
>> also causes a great deal of IO on the storage and means you end up with a 
>> bunch of "snapshot images" creating clutter. On the other hand volume 
>> snapshots are near instantaneous without the other drawbacks I've mentioned.
>> 
>> On the plus side for ephemeral storage; resizing the root disk of images 
>> works better. As long as your image is configured properly it's just a 
>> matter of initiating a resize and letting the instance reboot to grow the 
>> root disk. When using volumes as your root disk you instead have to shutdown 
>> the instance, grow the volume and boot.
>> 
>> I hope this help! If anyone on the list knows something I don't know 
>> regarding these issues please chime in. I'd love to know if there's a better 
>> way.
>> 
>> Regards,
>> John Petrini
>> 
>> 
>> On Tue, Aug 1, 2017 at 10:50 AM, Kimball, Conrad > > wrote:
>> In our process of standing up an OpenStack internal cloud we are facing the 
>> question of ephemeral storage vs. Cinder volumes for instance root disks.
>> 
>>  
>> 
>> As I look at public clouds such as AWS and Azure, the norm is to use 
>> persistent volumes for the root disk.  AWS started out with images booting 
>> onto ephemeral disk, but soon after they released Elastic Block Storage and 
>> ever since the clear trend has been to EBS-backed instances, and now when I 
>> look at their quick-start list of 33 AMIs, all of them are EBS-backed.  And 
>> I’m not even sure one can have anything except persistent root disks in 
>> Azure VMs.
>> 
>>  
>> 
>> Based on this and a number of other factors I think we want our user normal 
>> / default behavior to boot onto Cinder-backed volumes instead of onto 
>> ephemeral storage.  But then I 

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread John Petrini
Yes, from Mitaka onward the snapshot happens at the RBD level, which is fast.
It's the flattening and uploading of the image to glance that's the major
pain point. Still, it's worlds better than the qemu snapshots to the local
disk prior to Mitaka.

John Petrini

Platforms Engineer   //   CoreDial, LLC   //   coredial.com

751 Arbor Way, Hillcrest I, Suite 150, Blue Bell, PA 19422
P: 215.297.4400 x232   //   F: 215.297.4401   //   E: jpetr...@coredial.com


The information transmitted is intended only for the person or entity to
which it is addressed and may contain confidential and/or privileged
material. Any review, retransmission,  dissemination or other use of, or
taking of any action in reliance upon, this information by persons or
entities other than the intended recipient is prohibited. If you received
this in error, please contact the sender and delete the material from any
computer.



On Tue, Aug 1, 2017 at 2:53 PM, Mike Lowe  wrote:

> Strictly speaking I don’t think this is the case anymore for Mitaka or
> later.  Snapping nova does take more space as the image is flattened, but
> the dumb download then upload back into ceph has been cut out.  With
> careful attention paid to discard/TRIM I believe you can maintain the thin
> provisioning properties of RBD.  The workflow is explained here.
> https://www.sebastien-han.fr/blog/2015/10/05/openstack-
> nova-snapshots-on-ceph-rbd/
>
> On Aug 1, 2017, at 11:14 AM, John Petrini  wrote:
>
> Just my two cents here but we started out using mostly Ephemeral storage
> in our builds and looking back I wish we hadn't. Note we're using Ceph as a
> backend so my response is tailored towards Ceph's behavior.
>
> The major pain point is snapshots. When you snapshot an nova volume an RBD
> snapshot occurs and is very quick and uses very little additional storage,
> however the snapshot is then copied into the images pool and in the process
> is converted from a snapshot to a full size image. This takes a long time
> because you have to copy a lot of data and it takes up a lot of space. It
> also causes a great deal of IO on the storage and means you end up with a
> bunch of "snapshot images" creating clutter. On the other hand volume
> snapshots are near instantaneous without the other drawbacks I've mentioned.
>
> On the plus side for ephemeral storage; resizing the root disk of images
> works better. As long as your image is configured properly it's just a
> matter of initiating a resize and letting the instance reboot to grow the
> root disk. When using volumes as your root disk you instead have to
> shutdown the instance, grow the volume and boot.
>
> I hope this help! If anyone on the list knows something I don't know
> regarding these issues please chime in. I'd love to know if there's a
> better way.
>
> Regards,
>
> John Petrini
>
> On Tue, Aug 1, 2017 at 10:50 AM, Kimball, Conrad <
> conrad.kimb...@boeing.com> wrote:
>
>> In our process of standing up an OpenStack internal cloud we are facing
>> the question of ephemeral storage vs. Cinder volumes for instance root
>> disks.
>>
>>
>>
>> As I look at public clouds such as AWS and Azure, the norm is to use
>> persistent volumes for the root disk.  AWS started out with images booting
>> onto ephemeral disk, but soon after they released Elastic Block Storage and
>> ever since the clear trend has been to EBS-backed instances, and now when I
>> look at their quick-start list of 33 AMIs, all of them are EBS-backed.  And
>> I’m not even sure one can have anything except persistent root disks in
>> Azure VMs.
>>
>>
>>
>> Based on this and a number of other factors I think we want our user
>> normal / default behavior to boot onto Cinder-backed volumes instead of
>> onto ephemeral storage.  But then I look at OpenStack and its design point
>> appears to be booting images onto ephemeral storage, and while it is
>> possible to boot an image onto a new volume this is clumsy (haven’t found a
>> way to make this the default behavior) and we are experiencing performance
>> problems (that admittedly we have not yet run to ground).
>>
>>
>>
>> So …
>>
>> · Are other operators routinely booting onto Cinder volumes
>> instead of ephemeral storage?
>>
>> · What has been your experience with this; any advice?
>>
>>
>>
>> *Conrad Kimball*
>>
>> Associate Technical Fellow
>>
>> Chief Architect, Enterprise Cloud Services
>>
>> Application Infrastructure Services / Global IT Infrastructure /
>> Information Technology & Data Analytics
>>
>> conrad.kimb...@boeing.com
>>
>> P.O. Box 3707, Mail Code 7M-TE
>>
>> Seattle, WA  98124-2207
>>
>> Bellevue 33-11 bldg, office 3A6-3.9
>>
>> 

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Mike Lowe
Strictly speaking I don’t think this is the case anymore for Mitaka or later.  
Snapshotting in nova does take more space as the image is flattened, but the dumb 
download-then-upload back into ceph has been cut out.  With careful attention 
paid to discard/TRIM I believe you can maintain the thin provisioning 
properties of RBD.  The workflow is explained here:  
https://www.sebastien-han.fr/blog/2015/10/05/openstack-nova-snapshots-on-ceph-rbd/
 


> On Aug 1, 2017, at 11:14 AM, John Petrini  wrote:
> 
> Just my two cents here but we started out using mostly Ephemeral storage in 
> our builds and looking back I wish we hadn't. Note we're using Ceph as a 
> backend so my response is tailored towards Ceph's behavior.
> 
> The major pain point is snapshots. When you snapshot an nova volume an RBD 
> snapshot occurs and is very quick and uses very little additional storage, 
> however the snapshot is then copied into the images pool and in the process 
> is converted from a snapshot to a full size image. This takes a long time 
> because you have to copy a lot of data and it takes up a lot of space. It 
> also causes a great deal of IO on the storage and means you end up with a 
> bunch of "snapshot images" creating clutter. On the other hand volume 
> snapshots are near instantaneous without the other drawbacks I've mentioned.
> 
> On the plus side for ephemeral storage; resizing the root disk of images 
> works better. As long as your image is configured properly it's just a matter 
> of initiating a resize and letting the instance reboot to grow the root disk. 
> When using volumes as your root disk you instead have to shutdown the 
> instance, grow the volume and boot.
> 
> I hope this help! If anyone on the list knows something I don't know 
> regarding these issues please chime in. I'd love to know if there's a better 
> way.
> 
> Regards,
> John Petrini
> 
> 
> On Tue, Aug 1, 2017 at 10:50 AM, Kimball, Conrad  > wrote:
> In our process of standing up an OpenStack internal cloud we are facing the 
> question of ephemeral storage vs. Cinder volumes for instance root disks.
> 
>  
> 
> As I look at public clouds such as AWS and Azure, the norm is to use 
> persistent volumes for the root disk.  AWS started out with images booting 
> onto ephemeral disk, but soon after they released Elastic Block Storage and 
> ever since the clear trend has been to EBS-backed instances, and now when I 
> look at their quick-start list of 33 AMIs, all of them are EBS-backed.  And 
> I’m not even sure one can have anything except persistent root disks in Azure 
> VMs.
> 
>  
> 
> Based on this and a number of other factors I think we want our user normal / 
> default behavior to boot onto Cinder-backed volumes instead of onto ephemeral 
> storage.  But then I look at OpenStack and its design point appears to be 
> booting images onto ephemeral storage, and while it is possible to boot an 
> image onto a new volume this is clumsy (haven’t found a way to make this the 
> default behavior) and we are experiencing performance problems (that 
> admittedly we have not yet run to ground).
> 
>  
> 
> So …
> 
> · Are other operators routinely booting onto Cinder volumes instead 
> of ephemeral storage?
> 
> · What has been your experience with this; any advice?
> 
>  
> 
> Conrad Kimball
> 
> Associate Technical Fellow
> 
> Chief Architect, Enterprise Cloud Services
> 
> Application Infrastructure Services / Global IT Infrastructure / Information 
> Technology & Data Analytics
> 
> conrad.kimb...@boeing.com 
> P.O. Box 3707, Mail Code 7M-TE
> 
> Seattle, WA  98124-2207
> 
> Bellevue 33-11 bldg, office 3A6-3.9
> 
> Mobile:  425-591-7802 
>  
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators 
> 
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Sean McGinnis
> 
> >·What has been your experience with this; any advice?
> 
> It works fine.  With Horizon you can do it in one step (select the image but
> tell it to boot from volume) but with the CLI I think you need two steps
> (make the volume from the image, then boot from the volume).  The extra
> steps are a moot point if you are booting programmatically (from a custom
> script or something like heat).
> 

One thing to keep in mind when using Horizon for this - there's currently
no way in Horizon to specify the volume type you would like to use for
creating this boot volume. So it will always only use the default volume
type.

That may be fine if you only have one, but if you have multiple backends,
or multiple settings controlled by volume types, then you will probably
want to use the CLI method for creating your boot volumes.
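
A minimal sketch of that CLI path with an explicit volume type (names, sizes
and UUIDs below are placeholders):

  openstack volume create --image <image-uuid> --type <volume-type> \
      --size 40 web01-boot
  openstack server create --volume web01-boot --flavor m1.medium \
      --network <network> web01

Horizon's one-step path gives you much the same result, just without the
volume type choice.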

There has been some discussion about creating a Nova driver to just use
Cinder for ephemeral storage. There are some design challenges with how
to best implement that, but if operators are interested, it would be
great to hear that at the Forum and elsewhere so we can help raise the
priority of that between teams.

Sean

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Jay Pipes

On 08/01/2017 11:14 AM, John Petrini wrote:
Just my two cents here but we started out using mostly Ephemeral storage 
in our builds and looking back I wish we hadn't. Note we're using Ceph 
as a backend so my response is tailored towards Ceph's behavior.


The major pain point is snapshots. When you snapshot a nova volume, an 
RBD snapshot occurs and is very quick and uses very little additional 
storage, however the snapshot is then copied into the images pool and in 
the process is converted from a snapshot to a full size image. This 
takes a long time because you have to copy a lot of data and it takes up 
a lot of space. It also causes a great deal of IO on the storage and 
means you end up with a bunch of "snapshot images" creating clutter. On 
the other hand volume snapshots are near instantaneous without the other 
drawbacks I've mentioned.


On the plus side for ephemeral storage; resizing the root disk of images 
works better. As long as your image is configured properly it's just a 
matter of initiating a resize and letting the instance reboot to grow 
the root disk. When using volumes as your root disk you instead have to 
shutdown the instance, grow the volume and boot.


I hope this helps! If anyone on the list knows something I don't know 
regarding these issues please chime in. I'd love to know if there's a 
better way.


I'd just like to point out that the above is exactly the right way to 
think about things.


Don't boot from volume (i.e. don't use a volume as your root disk).

Instead, separate the operating system from your application data. Put 
the operating system on a small disk image (small == fast boot times), 
use a config drive for injectable configuration and create Cinder 
volumes for your application data.


Detach and attach the application data Cinder volume as needed to your 
server instance. Make your life easier by not coupling application data 
and the operating system together.
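
A minimal sketch of that flow with the openstack CLI (the server and volume
names are made up):

  openstack volume create --size 100 appdata
  openstack server add volume web01 appdata       # attach to the running instance
  openstack server remove volume web01 appdata    # detach before replacing web01

The instance itself stays disposable; only the appdata volume (and its Cinder
snapshots/backups) needs to survive.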


Best,
-jay

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Sean McGinnis
One other thing to think about - I think at least starting with the Mitaka
release, we added a feature called image volume cache. So if you create a
boot volume, the first time you do so it takes some time as the image is
pulled down and written to the backend volume.

With image volume cache enabled, that still happens on the first volume
creation of the image. But then any subsequent volume creations on that
backend for that image will be much, much faster.

This is something that needs to be configured. Details can be found here:

https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html
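
At a high level the relevant cinder.conf pieces look like this (a sketch; the
IDs and limits are placeholders, see the linked doc for details):

  [DEFAULT]
  cinder_internal_tenant_project_id = <project-uuid>
  cinder_internal_tenant_user_id = <user-uuid>

  [your-backend-section]
  image_volume_cache_enabled = True
  image_volume_cache_max_size_gb = 200
  image_volume_cache_max_count = 50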

Sean

On Tue, Aug 01, 2017 at 10:47:26AM -0500, Sean McGinnis wrote:
> On Tue, Aug 01, 2017 at 11:14:03AM -0400, John Petrini wrote:
> > 
> > On the plus side for ephemeral storage; resizing the root disk of images
> > works better. As long as your image is configured properly it's just a
> > matter of initiating a resize and letting the instance reboot to grow the
> > root disk. When using volumes as your root disk you instead have to
> > shutdown the instance, grow the volume and boot.
> > 
> 
> Some sort of good news there. Starting with the Pike release, you will now
> be able to extend an attached volume. As long as both Cinder and Nova are
> at Pike or later, this should now be allowed.
> 
> Sean
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Sean McGinnis
On Tue, Aug 01, 2017 at 11:14:03AM -0400, John Petrini wrote:
> 
> On the plus side for ephemeral storage; resizing the root disk of images
> works better. As long as your image is configured properly it's just a
> matter of initiating a resize and letting the instance reboot to grow the
> root disk. When using volumes as your root disk you instead have to
> shutdown the instance, grow the volume and boot.
> 

Some sort of good news there. Starting with the Pike release, you will now
be able to extend an attached volume. As long as both Cinder and Nova are
at Pike or later, this should now be allowed.
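
For reference, extending an in-use volume needs the 3.42 volume API
microversion or newer, so something along the lines of:

  cinder --os-volume-api-version 3.42 extend <volume-id> <new-size-gb>

(the plain "cinder extend" call still works for detached volumes on older APIs).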

Sean

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Mike Smith
At Overstock we do both, in different clouds.  Our preferred option is a Ceph 
backend for Nova ephemeral storage.  We like it because it is fast to boot and 
makes resize easy.  Our use case doesn’t require snapshots, nor do we have a 
need for keeping the data around if a server needs to be rebuilt.  It may not 
work for other people, but it works well for us.

In some of our other clouds, where we don’t have Ceph available, we do use 
Cinder volumes for booting VMs off of backend SAN services.  It works OK, but 
there are a few pain points in regard to disk resizing - it’s a bit of a 
cumbersome process compared to the experience with Nova ephemeral.  Depending on 
the solution used, creating the volume for boot can take much, much longer and 
that can be annoying.  On the plus side, Cinder does allow you to do QoS to 
limit I/O, whereas I do not believe that’s an option with Nova ephemeral.  And, 
again depending on the Cinder solution employed, the disk I/O for this kind of 
setup can be significantly better than some other options, including Nova 
ephemeral with a Ceph backend.
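
For anyone wanting to try the Cinder QoS route, it hangs off volume types,
roughly like this (names and numbers are made up):

  openstack volume qos create --consumer front-end \
      --property read_iops_sec=500 --property write_iops_sec=500 limited-io
  openstack volume qos associate limited-io <volume-type>

New volumes of that type then get the limits applied when they are attached.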

Bottom line:  it depends what you need, as both options work well and there are 
people doing both out there in the wild.

Good luck!


On Aug 1, 2017, at 9:14 AM, John Petrini 
> wrote:

Just my two cents here but we started out using mostly Ephemeral storage in our 
builds and looking back I wish we hadn't. Note we're using Ceph as a backend so 
my response is tailored towards Ceph's behavior.

The major pain point is snapshots. When you snapshot a nova volume, an RBD 
snapshot occurs and is very quick and uses very little additional storage, 
however the snapshot is then copied into the images pool and in the process is 
converted from a snapshot to a full size image. This takes a long time because 
you have to copy a lot of data and it takes up a lot of space. It also causes a 
great deal of IO on the storage and means you end up with a bunch of "snapshot 
images" creating clutter. On the other hand volume snapshots are near 
instantaneous without the other drawbacks I've mentioned.

On the plus side for ephemeral storage; resizing the root disk of images works 
better. As long as your image is configured properly it's just a matter of 
initiating a resize and letting the instance reboot to grow the root disk. When 
using volumes as your root disk you instead have to shutdown the instance, grow 
the volume and boot.

I hope this helps! If anyone on the list knows something I don't know regarding 
these issues please chime in. I'd love to know if there's a better way.

Regards,

John Petrini



On Tue, Aug 1, 2017 at 10:50 AM, Kimball, Conrad 
> wrote:
In our process of standing up an OpenStack internal cloud we are facing the 
question of ephemeral storage vs. Cinder volumes for instance root disks.

As I look at public clouds such as AWS and Azure, the norm is to use persistent 
volumes for the root disk.  AWS started out with images booting onto ephemeral 
disk, but soon after they released Elastic Block Storage and ever since the 
clear trend has been to EBS-backed instances, and now when I look at their 
quick-start list of 33 AMIs, all of them are EBS-backed.  And I’m not even sure 
one can have anything except persistent root disks in Azure VMs.

Based on this and a number of other factors I think we want our user normal / 
default behavior to boot onto Cinder-backed volumes instead of onto ephemeral 
storage.  But then I look at OpenStack and its design point appears to be 
booting images onto ephemeral storage, and while it is possible to boot an 
image onto a new volume this is clumsy (haven’t found a way to make this the 
default behavior) and we are experiencing performance problems (that admittedly 
we have not yet run to ground).

So …

• Are other operators routinely booting onto Cinder volumes instead of 
ephemeral storage?

• What has been your experience with this; any advice?

Conrad Kimball
Associate Technical Fellow
Chief Architect, Enterprise Cloud Services
Application Infrastructure Services / Global IT Infrastructure / Information 
Technology & Data Analytics
conrad.kimb...@boeing.com
P.O. Box 3707, Mail Code 7M-TE
Seattle, WA  98124-2207
Bellevue 33-11 bldg, office 3A6-3.9
Mobile:  425-591-7802


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators





Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Jonathan Proulx
Hi Conrad,

We boot to ephemeral disk by default, but our ephemeral disk is Ceph
RBD just like our cinder volumes.

Using Ceph for Cinder volume and Glance image storage, it is possible
to very quickly create new persistent volumes from Glance images
because on the backend it's just a CoW snapshot operation (even though
we use separate pools for ephemeral disks, persistent volumes, and
images). This is also what happens for ephemeral booting, which is much
faster than copying the image to local disk on the hypervisor first, so
we get quick starts and relatively easy live migrations (which we use
for maintenance like hypervisor reboots and reinstalls).

I don't know how to make it the "default", but ceph definitely makes
it faster. Other backends I've used basically mount the raw storage
volume, download the image, then 'dd' it into place, which is
painfully slow.
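
For reference, the bits that enable those CoW clones are glance on RBD plus
exposing the image location, roughly (glance-api.conf; pool and user names are
examples):

  [DEFAULT]
  show_image_direct_url = True

  [glance_store]
  stores = rbd
  default_store = rbd
  rbd_store_pool = images
  rbd_store_user = glance
  rbd_store_ceph_conf = /etc/ceph/ceph.conf

Without show_image_direct_url the consumers fall back to a full image copy
instead of a clone.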

As to why ephemeral rather than volume-backed by default: it's much
easier to boot many copies of the same thing and be sure they're the
same using ephemeral storage and images or snapshots.  Volume-backed
instances tend to drift.

That said, working in a research lab, many of my users go for the more
"pet"-like persistent VM workflow.  We just manage it with docs and
education, though there is always someone who misses the red flashing
"ephemeral means it gets deleted when you turn it off" sign and is
sad.

-Jon

On Tue, Aug 01, 2017 at 02:50:45PM +, Kimball, Conrad wrote:
:In our process of standing up an OpenStack internal cloud we are facing the 
question of ephemeral storage vs. Cinder volumes for instance root disks.
:
:As I look at public clouds such as AWS and Azure, the norm is to use 
persistent volumes for the root disk.  AWS started out with images booting onto 
ephemeral disk, but soon after they released Elastic Block Storage and ever 
since the clear trend has been to EBS-backed instances, and now when I look at 
their quick-start list of 33 AMIs, all of them are EBS-backed.  And I'm not 
even sure one can have anything except persistent root disks in Azure VMs.
:
:Based on this and a number of other factors I think we want our user normal / 
default behavior to boot onto Cinder-backed volumes instead of onto ephemeral 
storage.  But then I look at OpenStack and its design point appears to be 
booting images onto ephemeral storage, and while it is possible to boot an 
image onto a new volume this is clumsy (haven't found a way to make this the 
default behavior) and we are experiencing performance problems (that admittedly 
we have not yet run to ground).
:
:So ...
:
:* Are other operators routinely booting onto Cinder volumes instead of 
ephemeral storage?
:
:* What has been your experience with this; any advice?
:
:Conrad Kimball
:Associate Technical Fellow
:Chief Architect, Enterprise Cloud Services
:Application Infrastructure Services / Global IT Infrastructure / Information 
Technology & Data Analytics
:conrad.kimb...@boeing.com
:P.O. Box 3707, Mail Code 7M-TE
:Seattle, WA  98124-2207
:Bellevue 33-11 bldg, office 3A6-3.9
:Mobile:  425-591-7802
:

:___
:OpenStack-operators mailing list
:OpenStack-operators@lists.openstack.org
:http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


-- 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Chris Friesen

On 08/01/2017 08:50 AM, Kimball, Conrad wrote:


·Are other operators routinely booting onto Cinder volumes instead of ephemeral
storage?


It's up to the end-user, but yes.


·What has been your experience with this; any advice?


It works fine.  With Horizon you can do it in one step (select the image but 
tell it to boot from volume) but with the CLI I think you need two steps (make 
the volume from the image, then boot from the volume).  The extra steps are a 
moot point if you are booting programmatically (from a custom script or 
something like heat).
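
For what it's worth, the nova CLI can also collapse this into one call with a
block-device spec (a sketch from memory, so double-check the syntax; all values
are placeholders):

  nova boot --flavor m1.medium --nic net-id=<net-uuid> \
      --block-device source=image,id=<image-uuid>,dest=volume,size=40,bootindex=0,shutdown=preserve \
      myserver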


I think that generally speaking the default is to use ephemeral storage because 
it's:


a) cheaper
b) "cloudy" in that if anything goes wrong you just spin up another instance

On the other hand, booting from volume does allow for faster migrations since it 
avoids the need to transfer the boot disk contents as part of the migration.


Chris

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Kimball, Conrad
In our process of standing up an OpenStack internal cloud we are facing the 
question of ephemeral storage vs. Cinder volumes for instance root disks.

As I look at public clouds such as AWS and Azure, the norm is to use persistent 
volumes for the root disk.  AWS started out with images booting onto ephemeral 
disk, but soon after they released Elastic Block Storage and ever since the 
clear trend has been to EBS-backed instances, and now when I look at their 
quick-start list of 33 AMIs, all of them are EBS-backed.  And I'm not even sure 
one can have anything except persistent root disks in Azure VMs.

Based on this and a number of other factors I think we want our user normal / 
default behavior to boot onto Cinder-backed volumes instead of onto ephemeral 
storage.  But then I look at OpenStack and its design point appears to be 
booting images onto ephemeral storage, and while it is possible to boot an 
image onto a new volume this is clumsy (haven't found a way to make this the 
default behavior) and we are experiencing performance problems (that admittedly 
we have not yet run to ground).

So ...

* Are other operators routinely booting onto Cinder volumes instead of 
ephemeral storage?

* What has been your experience with this; any advice?

Conrad Kimball
Associate Technical Fellow
Chief Architect, Enterprise Cloud Services
Application Infrastructure Services / Global IT Infrastructure / Information 
Technology & Data Analytics
conrad.kimb...@boeing.com
P.O. Box 3707, Mail Code 7M-TE
Seattle, WA  98124-2207
Bellevue 33-11 bldg, office 3A6-3.9
Mobile:  425-591-7802

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators