Re: [Openstack-operators] custom build image is slow

2017-08-01 Thread Paras pradhan
How do we install the virtio drivers if they're missing? And how do I verify whether they're present in the CentOS cloud image? On Tue, Aug 1, 2017 at 12:19 PM, Abel Lopez wrote: > Your custom image is likely missing the virtIO drivers that the cloud > image has. > > Instead of running through
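A quick way to answer the second question, assuming libguestfs-tools is available on a build host (the image filename here is only illustrative), is to list the kernel module tree inside the image:

    # List kernel modules shipped in the image and look for virtio drivers
    virt-ls -a CentOS-7-x86_64-GenericCloud.qcow2 -R /usr/lib/modules | grep -i virtio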

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread John Petrini
Thanks for the info. Might have something to do with the Ceph version then. We're running Hammer and apparently the du option wasn't added until Infernalis. John Petrini On Tue, Aug 1, 2017 at 4:32 PM, Mike Lowe wrote: > Two things, first info does not show how much disk is
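For reference, on an Infernalis-or-later cluster the two commands answer different questions (pool and image names illustrative):

    rbd info images/<image-uuid>   # provisioned (virtual) size only
    rbd du images/<image-uuid>     # actual space consumed; not available on Hammer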

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Chris Friesen
On 08/01/2017 02:32 PM, Mike Lowe wrote: Two things: first, info does not show how much disk is used; du does. Second, the semantics count: copy is different from clone and flatten. Clone and flatten, which should happen if you have things working correctly, is much faster than copy. If you are

[Openstack-operators] OpenStack Operators mid-cycle meet up #1 2018 - venue selected!

2017-08-01 Thread Chris Morgan
Dear All, The ops meet ups team today confirmed the NTT proposal to host the 1st 2018 OpenStack Operators mid-cycle meet up in Tokyo on March 7th and 8th. As a reminder, that is as proposed here: https://etherpad.openstack.org/p/ops-meetup-venue-discuss-1st-2018 The vote was unanimous, this proposal

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Mike Lowe
Two things: first, info does not show how much disk is used; du does. Second, the semantics count: copy is different from clone and flatten. Clone and flatten, which should happen if you have things working correctly, is much faster than copy. If you are using copy then you may be limited by the

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Matt Riedemann
On 8/1/2017 10:47 AM, Sean McGinnis wrote: Some sort of good news there. Starting with the Pike release, you will now be able to extend an attached volume. As long as both Cinder and Nova are at Pike or later, this should now be allowed. And you're using the libvirt compute driver in Nova, and
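A sketch of the extend call once both sides are at Pike or later; extending an in-use volume is gated behind a volume API microversion, and the volume name and size here are illustrative:

    # 3.42 is the microversion that allows extending attached volumes
    cinder --os-volume-api-version 3.42 extend my-boot-volume 40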

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread John Petrini
Maybe I'm just not understanding, but when I create a nova snapshot the snapshot happens at the RBD level in the ephemeral pool and then it's copied to the images pool. This results in a full-sized image rather than a snapshot with a reference to the parent. For example, below is a snapshot of an ephemeral
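One way to check whether the resulting Glance image ended up as a full copy or a CoW clone, assuming a Ceph backend with an images pool (names illustrative): a clone shows a parent line pointing back at the ephemeral pool.

    rbd -p images info <snapshot-image-uuid> | grep parent   # present only for clones
    rbd -p images du <snapshot-image-uuid>                   # actual usage vs. provisioned size (Infernalis+)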

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Mike Lowe
There is no upload if you use Ceph to back your glance (like you should); the snapshot is cloned from the ephemeral pool into the images pool, then flatten is run as a background task. Net result is that creating a 120GB image vs 8GB is slightly faster on my cloud but not at all what I’d
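The flatten step can also be driven by hand against a cloned image if needed; a minimal sketch with an illustrative name:

    rbd flatten images/<snapshot-image-uuid>   # copies parent data up so the clone stands alone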

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread John Petrini
Yes, from Mitaka onward the snapshot happens at the RBD level, which is fast. It's the flattening and uploading of the image to glance that's the major pain point. Still, it's worlds better than the qemu snapshots to the local disk prior to Mitaka. John Petrini Platforms Engineer // CoreDial,

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Mike Lowe
Strictly speaking I don’t think this is the case anymore for Mitaka or later. Snapshotting in Nova does take more space as the image is flattened, but the dumb download-then-upload back into Ceph has been cut out. With careful attention paid to discard/TRIM I believe you can maintain the thin
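Keeping images thin generally means giving the guest a discard-capable bus and trimming inside it; a sketch under those assumptions (property and option names are the standard libvirt/Glance ones, everything else illustrative):

    # nova.conf on compute nodes:  [libvirt] hw_disk_discard = unmap
    openstack image set --property hw_scsi_model=virtio-scsi --property hw_disk_bus=scsi <image-uuid>
    fstrim -av   # run inside the guest to release unused blocks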

[Openstack-operators] [puppet] PTL wanted

2017-08-01 Thread Emilien Macchi
It's an unusual request but we need a new PTL for Queens. Alex Schultz and I have been leading Puppet OpenStack modules for some time now and it's time to rotate. We know you're out there consuming (and contributing) to the modules - if you want this project to survive, it's time to step-up and

Re: [Openstack-operators] custom build image is slow

2017-08-01 Thread Abel Lopez
Your custom image is likely missing the virtIO drivers that the cloud image has. Instead of running through the DVD installer, I'd suggest checking out diskimage-builder to make custom images for use on OpenStack. On Tue, Aug 1, 2017 at 10:16 AM Paras pradhan wrote: >
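A minimal diskimage-builder run for a CentOS 7 image might look like this (element names are the standard ones; the output name is illustrative):

    pip install diskimage-builder
    disk-image-create centos7 vm -o centos7-custom   # produces centos7-custom.qcow2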

Re: [Openstack-operators] custom build image is slow

2017-08-01 Thread Paras pradhan
Also, this is what I've noticed with the CentOS cloud image I downloaded. If I add a few packages (around a GB), the size goes up to 8GB from 1.3GB. Running dd if=/dev/zero of=temp;sync;rm temp failed due to the size of the disk on the cloud images, which is 10G. Zeroing failed with No space left on
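Before trying to reclaim space it's worth comparing the file's on-disk size with its virtual size (filename illustrative):

    qemu-img info centos-custom.qcow2   # compare "disk size" against "virtual size"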

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Sean McGinnis
> > > What has been your experience with this; any advice? > > It works fine. With Horizon you can do it in one step (select the image but > tell it to boot from volume) but with the CLI I think you need two steps > (make the volume from the image, then boot from the volume). The extra > steps
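The two-step CLI flow looks roughly like this (image, flavor, network, and names are illustrative):

    openstack volume create --image centos7 --size 20 --bootable root-vol
    openstack server create --volume root-vol --flavor m1.small --network private my-instance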

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Jay Pipes
On 08/01/2017 11:14 AM, John Petrini wrote: Just my two cents here but we started out using mostly Ephemeral storage in our builds and looking back I wish we hadn't. Note we're using Ceph as a backend so my response is tailored towards Ceph's behavior. The major pain point is snapshots. When

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Sean McGinnis
One other thing to think about - I think at least starting with the Mitaka release, we added a feature called image volume cache. So if you create a boot volume, the first time you do so it takes some time as the image is pulled down and written to the backend volume. With image volume cache
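The cache is enabled per backend in cinder.conf and needs an internal tenant; a sketch with illustrative values and backend section name:

    [DEFAULT]
    cinder_internal_tenant_project_id = <project-uuid>
    cinder_internal_tenant_user_id = <user-uuid>

    [rbd-backend]
    image_volume_cache_enabled = True
    image_volume_cache_max_size_gb = 200
    image_volume_cache_max_count = 50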

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Sean McGinnis
On Tue, Aug 01, 2017 at 11:14:03AM -0400, John Petrini wrote: > > On the plus side for ephemeral storage; resizing the root disk of images > works better. As long as your image is configured properly it's just a > matter of initiating a resize and letting the instance reboot to grow the > root
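The resize flow itself, roughly (flavor and server names illustrative; the guest still needs cloud-init/growpart or similar to actually grow the filesystem):

    nova resize my-instance m1.large
    nova resize-confirm my-instance   # once the instance is back up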

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Mike Smith
At Overstock we do both, in different clouds. Our preferred option is a Ceph backend for Nova ephemeral storage. We like it because it is fast to boot and makes resize easy. Our use case doesn’t require snapshots nor do we have a need for keeping the data around if a server needs to be

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Jonathan Proulx
Hi Conrad, We boot to ephemeral disk by default, but our ephemeral disk is Ceph RBD just like our cinder volumes. Using Ceph for Cinder volumes and Glance images storage, it is possible to very quickly create new persistent volumes from Glance images because on the backend it's just a CoW snapshot
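The fast CoW path depends on Glance exposing RBD image locations so Cinder and Nova can clone instead of copy; the relevant setting, per the standard layout:

    # glance-api.conf
    [DEFAULT]
    show_image_direct_url = True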

Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Chris Friesen
On 08/01/2017 08:50 AM, Kimball, Conrad wrote: - Are other operators routinely booting onto Cinder volumes instead of ephemeral storage? It's up to the end-user, but yes. - What has been your experience with this; any advice? It works fine. With Horizon you can do it in one step (select

[Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-01 Thread Kimball, Conrad
In our process of standing up an OpenStack internal cloud we are facing the question of ephemeral storage vs. Cinder volumes for instance root disks. As I look at public clouds such as AWS and Azure, the norm is to use persistent volumes for the root disk. AWS started out with images booting

Re: [Openstack-operators] custom build image is slow

2017-08-01 Thread Paras pradhan
I ran virt-sparsify. So before running virt-sparsify it was 6.0GB and after it is 1.3GB. Thanks Paras. On Tue, Aug 1, 2017 at 2:53 AM, Tomáš Vondra wrote: > Hi! > > How big are the actual image files? Because qcow2 is a sparse format, it > does not store zeroes. If the
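For reference, the sparsify step itself, assuming libguestfs-tools and enough temporary space on the host (filenames illustrative):

    virt-sparsify centos-custom.qcow2 centos-custom-sparse.qcow2
    qemu-img info centos-custom-sparse.qcow2   # confirm the on-disk size dropped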

[Openstack-operators] [FEMDC] IRC meeting Tomorrow 15:00 UTC

2017-08-01 Thread lebre . adrien
Dear all, As usual, the agenda is available at: https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2017 (line 991) Please feel free to add items. Best, Ad_rien_ PS: Paul-André will chair the meeting (I'm taking some holidays ;))

[Openstack-operators] [publiccloud-wg] Reminder meeting PublicCloudWorkingGroup

2017-08-01 Thread Tobias Rydberg
Hi everyone, Don't forget tomorrow's meeting for the PublicCloudWorkingGroup. A lot of important stuff to chat about =) 1400 UTC in IRC channel #openstack-meeting-3 Etherpad: https://etherpad.openstack.org/p/publiccloud-wg Regards, Tobias Rydberg

Re: [Openstack-operators] custom build image is slow

2017-08-01 Thread Tomáš Vondra
Hi! How big are the actual image files? Because qcow2 is a sparse format, it does not store zeroes. If the free space in one image is zeroed out, it will convert much faster. If that is the problem, use „dd if=/dev/zero of=temp;sync;rm temp“ or zerofree. Tomas From: Paras pradhan
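After zeroing (or running zerofree against the unmounted filesystem), re-converting the image drops the zeroed blocks; filenames and device node here are illustrative:

    qemu-img convert -O qcow2 original.qcow2 compacted.qcow2
    # zerofree alternative, against an unmounted ext filesystem exposed e.g. via qemu-nbd:
    # zerofree /dev/nbd0p1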