[Openstack-operators] [publiccloud] Improvement for Horizon performance

2017-12-20 Thread Fei Long Wang
Hi there,

I probably should have sent this note earlier, given the holiday season :)
We (Catalyst Cloud) have recently done some work on the performance
of Horizon, especially the instances panel, which other cloud
providers may benefit from. Here is the list:

1. Allow skipping API calls to Neutron in instance tables


2. Add a cache for get_microversion() calls against Nova (see the
sketch after this list)


3. Remove the quota check for the "Launch Instance" button
   NOTE: This one has been
reverted. The Horizon team's comment was that, although the change is
good, the behavior would not be consistent with the other panels.
Personally, I don't buy that argument; I would rather apply the same
change to all the other panels.

4. Add a whitelist for Nova extensions
   NOTE: This wasn't accepted by the
Horizon team, so we're going to keep it in our private repo.

5. Make the image name on the instances panel configurable
   NOTE: This wasn't accepted by the
Horizon team, so we're going to keep it in our private repo.
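
To illustrate the idea behind patch #2, here is a minimal sketch of
caching the microversion lookup (a plain HTTP query against Nova's
version document via the requests library; the function name and
endpoint handling are illustrative, not Horizon's actual code):

    import functools
    import requests

    @functools.lru_cache(maxsize=None)
    def get_microversion(nova_endpoint):
        # Query Nova's version document once per endpoint; lru_cache
        # makes every later call for the same endpoint free.
        resp = requests.get(nova_endpoint, timeout=5)
        resp.raise_for_status()
        # e.g. GET http://nova:8774/v2.1/ returns a "version" document
        # advertising the maximum supported microversion
        return resp.json()["version"]["version"]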

So far, we haven't seen many performance issues on the other panels,
because Horizon's performance problems mostly come down to API calls.
The instances panel makes a lot of API calls, and the number grows with
the number of instances listed. Previously, listing 20 instances per
page took 10s-12s on our cloud; with the above 5 patches, it drops to
3s-4s.

Hopefully the above information is helpful.


-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Migrating glance images to a new backend

2017-03-28 Thread Fei Long Wang
Hi Massimo,

Thanks for providing more information. As you can see from David's blog
and the script (https://github.com/dmsimard/migrate-glance-backend), the
trickiest part is keeping the current image ID; otherwise, all the
existing instances will fail to rebuild. The approach I'm suggesting
keeps the image ID and doesn't require creating another image. The steps
are as follows:

1. Download and re-upload (see the sketch after step 4)

1.1 Iterate over tenants and images, download the images, and convert
them from qcow2 to raw

1.2 Upload the images to RBD the same way the Glance RBD driver does:
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/rbd.py#L426

2. Create a new location for each image based on the location from
#1.2 (sketched below, after the notes). For this step, you will need
to enable show_multiple_locations=True. Note: the Glance team would
suggest disabling this after your migration. However, if you want to
use CoW, you may still need to keep it :(

3. Delete the old GlusterFS-based locations

4. All done
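
As a rough sketch of steps 1.1 and 1.2 above (assuming
python-glanceclient plus the qemu-img and rbd CLIs are available;
GLANCE_URL, TOKEN, and the 'images' pool are placeholders for your own
settings, and error handling and pagination are omitted):

    import subprocess
    from glanceclient import Client

    # GLANCE_URL and TOKEN are placeholders for your own auth settings
    glance = Client('2', endpoint=GLANCE_URL, token=TOKEN)

    for image in glance.images.list():
        qcow_path = '/tmp/%s.qcow2' % image['id']
        raw_path = '/tmp/%s.raw' % image['id']

        # 1.1 download, then convert qcow2 -> raw
        with open(qcow_path, 'wb') as f:
            for chunk in glance.images.data(image['id']):
                f.write(chunk)
        subprocess.check_call(['qemu-img', 'convert', '-f', 'qcow2',
                               '-O', 'raw', qcow_path, raw_path])

        # 1.2 import into the images pool, keeping the image id as the
        # RBD image name, just like the glance_store RBD driver does
        subprocess.check_call(['rbd', 'import', '--image-format', '2',
                               raw_path, 'images/%s' % image['id']])

        # Glance's CoW cloning expects a protected snapshot named 'snap'
        subprocess.check_call(['rbd', 'snap', 'create',
                               'images/%s@snap' % image['id']])
        subprocess.check_call(['rbd', 'snap', 'protect',
                               'images/%s@snap' % image['id']])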


*NOTE*:

1. For steps #2 and #3, you can follow this blog post on how to do it:
https://www.sebastien-han.fr/blog/2015/05/13/openstack-glance-use-multiple-location-for-an-image/

2. Steps 1 and 2 can be done before your downtime window.

3. Technically, you can keep both locations without deleting the old
one, or at least make the migration smoother, by using a location
strategy. In that case, you can set:

    stores=rbd,file

    location_strategy=store_type

    store_type_preference=rbd,file

That means if there are 2 locations, Glance will try the RBD location
first, then the filesystem location. See more info here:
https://github.com/openstack/glance/blob/master/etc/glance-api.conf#L4388
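
And a small, equally hedged sketch of steps #2 and #3, reusing the
glance client and an image_id from the loop in the sketch above; the
Ceph fsid (CEPH_FSID) and the old file path are placeholders for your
own values:

    # RBD location URLs look like rbd://<fsid>/<pool>/<image-id>/<snapshot>
    rbd_url = 'rbd://%s/images/%s/snap' % (CEPH_FSID, image_id)

    # step 2: register the new RBD location
    # (requires show_multiple_locations=True on the Glance API)
    glance.images.add_location(image_id, rbd_url, {})

    # step 3: once the RBD copy checks out, drop the old file-based
    # location (adjust the path to wherever your GlusterFS store is)
    old_url = 'file:///var/lib/glance/images/%s' % image_id
    glance.images.delete_locations(image_id, set([old_url]))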



On 29/03/17 02:02, Massimo Sgaravatto wrote:
> First of all, thanks for your help
>
> This is a private cloud which is right now using gluster as backend.
> Most of the images are private (i.e. usable only within the project),
> uploaded by the end-users.  
> Most of these images were saved in qcow2 format ... 
>
>
> The ceph cluster is still being benchmarked. I am testing the
> integration between ceph and openstack (and studying the migration) on
> a small openstack testbed.
>
>  Having the glance service running during the migration is not
> strictly needed, i.e. we can plan a scheduled downtime of the service 
>
> Thanks again, Massimo
>
>
> 2017-03-28 5:24 GMT+02:00 Fei Long Wang <feil...@catalyst.net.nz>:
>
> Hi Massimo,
>
> Though I don't have experience with this migration, as the glance
> RBD driver maintainer and the image service maintainer of our public
> cloud (Catalyst Cloud, based in NZ), I'm happy to provide some
> information. Before I say more, would you mind sharing some
> information about your environment?
>
> 1. Are you using Ceph's CoW (copy-on-write) cloning?
>
> 2. Are you using multiple locations?
>
> show_multiple_locations=True
>
> 3. Are you expecting to migrate all the images in a maintenance
> time window, or do you want to keep the glance service running for
> end users during the migration?
>
> 4. Is it a public cloud?
>
>
> On 25/03/17 04:55, Massimo Sgaravatto wrote:
>> Hi
>>
>> In our Mitaka cloud we are currently using Gluster as storage
>> backend for Glance and Cinder.
>> We are now starting the migration to ceph: the idea is then to
>> dismiss gluster when we are done.
>>
>> I have a question concerning Glance. 
>>
>> I have understood (or at least I hope so) how to add ceph as a
>> store backend for Glance so that new images will use ceph while
>> the previously created ones on the file backend will still be usable.
>>
>> My question is how I can migrate the images from the file backend
>> to ceph when I decide to dismiss the gluster based storage.
>>
>> The only documentation I found is this one:
>>
>> https://dmsimard.com/2015/07/18/migrating-glance-images-to-a-different-backend/
>>
>>
>> Could you please confirm that there aren't other better (simpler)
>> approaches for such an image migration?
>>
>> Thanks, Massimo
>>
>>
>
>   

Re: [Openstack-operators] Migrating glance images to a new backend

2017-03-27 Thread Fei Long Wang
Hi Massimo,

Though I don't have experience with this migration, as the glance RBD
driver maintainer and the image service maintainer of our public cloud
(Catalyst Cloud, based in NZ), I'm happy to provide some information.
Before I say more, would you mind sharing some information about your
environment?

1. Are you using Ceph's CoW (copy-on-write) cloning?

2. Are you using multiple locations?

show_multiple_locations=True

3. Are you expecting to migrate all the images in a maintenance time
window, or do you want to keep the glance service running for end users
during the migration?

4. Is it a public cloud?


On 25/03/17 04:55, Massimo Sgaravatto wrote:
> Hi
>
> In our Mitaka cloud we are currently using Gluster as storage backend
> for Glance and Cinder.
> We are now starting the migration to ceph: the idea is then to dismiss
> gluster when we are done.
>
> I have a question concerning Glance. 
>
> I have understood (or at least I hope so) how to add ceph as a store
> backend for Glance so that new images will use ceph while the
> previously created ones on the file backend will still be usable.
>
> My question is how I can migrate the images from the file backend to
> ceph when I decide to dismiss the gluster based storage.
>
> The only documentation I found is this one:
>
> https://dmsimard.com/2015/07/18/migrating-glance-images-to-a-different-backend/
>
>
> Could you please confirm that there aren't other better (simpler)
> approaches for such an image migration?
>
> Thanks, Massimo
>
>

-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators