Re: [Openstack-operators] Problems with AggregateMultiTenancyIsolation while migrating an instance

2018-05-31 Thread Massimo Sgaravatto
Thanks a lot!! On Wed, May 30, 2018 at 8:06 PM, Matt Riedemann wrote: > On 5/30/2018 9:41 AM, Matt Riedemann wrote: > >> Thanks for your patience in debugging this Massimo! I'll get a bug >> reported and patch posted to fix it. >> > > I'm tracking the problem with this bug: > >

Re: [Openstack-operators] Problems with AggregateMultiTenancyIsolation while migrating an instance

2018-05-30 Thread Massimo Sgaravatto
s, Massimo On Wed, May 30, 2018 at 1:01 AM, Matt Riedemann wrote: > On 5/29/2018 3:07 PM, Massimo Sgaravatto wrote: > >> The VM that I am trying to migrate was created when the Cloud was already >> running Ocata >> > > OK, I'd added the tenant_id variable in

Re: [Openstack-operators] Problems with AggregateMultiTenancyIsolation while migrating an instance

2018-05-29 Thread Massimo Sgaravatto
The VM that I am trying to migrate was created when the Cloud was already running Ocata. Cheers, Massimo On Tue, May 29, 2018 at 9:47 PM, Matt Riedemann wrote: > On 5/29/2018 12:44 PM, Jay Pipes wrote: > >> Either that, or the wrong project_id is being used when attempting to >> migrate? Maybe

[Openstack-operators] Problems with AggregateMultiTenancyIsolation while migrating an instance

2018-05-29 Thread Massimo Sgaravatto
I have a small testbed OpenStack cloud (running Ocata) where I am trying to debug a problem with Nova scheduling. In short: I see different behaviors when I create a new VM and when I try to migrate one. Since I want to partition the Cloud so that each project uses only certain compute nodes,
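
For context, a minimal sketch of the kind of setup being described, with hypothetical aggregate and host names (per the follow-ups above, the thread's outcome was that boot honored this setup while migration did not, due to a Nova bug):

    # nova.conf on the scheduler host (Ocata)
    [filter_scheduler]
    enabled_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateMultiTenancyIsolation

    # Tie a host aggregate to a single tenant
    openstack aggregate create agg-project1
    openstack aggregate add host agg-project1 compute-01
    openstack aggregate set --property filter_tenant_id=<PROJECT1_UUID> agg-project1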

Re: [Openstack-operators] Recipe to transfer the ownership of an instance

2018-04-23 Thread Massimo Sgaravatto
cumentation/migrating-resources/ > > the bad thing is that the new VM has a new IP address, so eventually > DNS records have to be updated by the users. > > Cheers, > > Saverio > > > 2018-04-23 10:17 GMT+02:00 Massimo Sgaravatto < > massimo.sgarava...@gmail.
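
The snapshot-based workaround referenced in this reply, sketched with hypothetical names and assuming a Glance recent enough to support "shared" image visibility (as the reply notes, the new VM gets a new IP address, so DNS records may need updating):

    # In the source project: snapshot the instance
    openstack server image create --name vm1-snap vm1
    # Share the snapshot image with the destination project
    openstack image set --shared vm1-snap
    openstack image add project vm1-snap <DEST_PROJECT_UUID>
    # In the destination project: accept the share and boot a new instance
    openstack image set --accept vm1-snap
    openstack server create --image vm1-snap --flavor <FLAVOR> vm1-new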

[Openstack-operators] Recipe to transfer the ownership of an instance

2018-04-23 Thread Massimo Sgaravatto
As far as I understand, there is no clean way to transfer the ownership of an instance from one user to another (the implementation of the blueprint https://blueprints.launchpad.net/nova/+spec/transfer-instance-ownership was abandoned). Is there at least a recipe (i.e. what needs to be

Re: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey

2018-04-20 Thread Massimo Sgaravatto
enabled_filters =
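
The answer is cut off above; for reference, the Ocata-era default filter list (illustrative only, not necessarily this respondent's actual setting) is roughly:

    [filter_scheduler]
    enabled_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter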

Re: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-06 Thread Massimo Sgaravatto
ing to do? > On Tue, 6 Feb 2018 at 06:45, Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > >> Hi >> >> I want to partition my OpenStack cloud so that: >> >> - Projects p1, p2, ..., pn can use only compute nodes C1, C2, ... Cx >> - Proje

[Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-05 Thread Massimo Sgaravatto
Hi I want to partition my OpenStack cloud so that: - Projects p1, p2, ..., pn can use only compute nodes C1, C2, ..., Cx - Projects pn+1, ..., pm can use only compute nodes Cx+1, ..., Cy. I read that CERN addressed this use case by implementing the ProjectsToAggregateFilter but, as far as I understand, this
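
A sketch of the stock approach under discussion, with hypothetical project IDs: AggregateMultiTenancyIsolation matches the request's project_id against the filter_tenant_id metadata of the host's aggregates, and the value may be a comma-separated list. The catch with many projects is that an aggregate metadata value is capped at 255 characters; since a host's values are merged across all of its aggregates, one possible workaround is a second aggregate over the same hosts carrying additional IDs:

    openstack aggregate create agg-group1
    openstack aggregate add host agg-group1 C1
    openstack aggregate set --property filter_tenant_id=<p1>,<p2>,<p3> agg-group1
    # Possible workaround for the 255-char cap: same hosts, extra aggregate, more IDs
    openstack aggregate create agg-group1-more
    openstack aggregate add host agg-group1-more C1
    openstack aggregate set --property filter_tenant_id=<p4>,<p5> agg-group1-more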

[Openstack-operators] Problems changing access list for flavors using the dashboard

2017-12-19 Thread Massimo Sgaravatto
Dear Operators, I have a Mitaka OpenStack installation, and in that deployment I am able to modify the access list of flavors using the dashboard without problems. I am even able to change a public flavor to a private one (specifying, via the dashboard, the list of projects allowed to use the
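
For comparison while debugging the dashboard, the same change from the CLI, with hypothetical names. Note that Nova flavor visibility cannot be changed in place, which is why Horizon (at least in that era) emulated the edit by deleting and re-creating the flavor:

    openstack flavor delete m1.custom
    openstack flavor create --vcpus 2 --ram 4096 --disk 40 --private m1.custom
    openstack flavor set --project <PROJECT_UUID> m1.custom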

Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-26 Thread Massimo Sgaravatto
erts here at CERN > I was referring to :) > > On 25 Sep 2017, at 10:41, Massimo Sgaravatto <massimo.sgarava...@gmail.com> > wrote: > > Just found that there is already this one: > > https://bugs.launchpad.net/horizon/+bug/1717342 > > 2017-09-25 10:28 GMT+02:00 Sa

Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-25 Thread Massimo Sgaravatto
>> If you are on IRC please join #openstack-horizon. > >> We should ask the Horizon PTL how to get this feature request into > >> implementation. > >> With the command line interface, can you already see the two different > >> quot

Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-25 Thread Massimo Sgaravatto
the CLI ? > > thank you > > Saverio > > > 2017-09-25 9:55 GMT+02:00 Massimo Sgaravatto <massimo.sgarava...@gmail.com > >: > > We are currently running Mitaka (preparing to update to Ocata). I see the > > same behavior on an Ocata based testbed > > >

Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-25 Thread Massimo Sgaravatto
Saverio > > 2017-09-25 9:13 GMT+02:00 Massimo Sgaravatto <massimo.sgarava...@gmail.com > >: > > Hi > > > > > > In our OpenStack cloud we have two backends for Cinder (exposed using two > > volume types), and we set different quotas for these two volume ty

[Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-25 Thread Massimo Sgaravatto
Hi In our OpenStack cloud we have two backends for Cinder (exposed as two volume types), and we set different quotas for these two volume types. The problem happens when a user, using the dashboard, tries to create a volume of a volume type whose project quota is already exhausted: - the
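
For reference, a sketch of how per-volume-type quotas are set and inspected from the CLI (type names are hypothetical); the per-type values show up as volumes_<type> / gigabytes_<type> rows in the quota listing:

    cinder quota-update --volumes 10 --gigabytes 500 --volume-type typeA <PROJECT_UUID>
    cinder quota-update --volumes 5  --gigabytes 100 --volume-type typeB <PROJECT_UUID>
    cinder quota-show <PROJECT_UUID>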

[Openstack-operators] Gluster storage for Cinder: migrating from Gluster to NFS driver

2017-07-01 Thread Massimo Sgaravatto
Hi I have an iSCSI storage system which provides Cinder block storage to an OpenStack cloud (running Mitaka) using the Gluster cinder driver. We are now preparing the update Mitaka --> Newton --> Ocata. Since the Cinder Gluster driver is no longer supported in Ocata, the idea is to expose that
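
A sketch of what the target configuration might look like, assuming the Gluster volume is exported over NFS (backend and host names are hypothetical):

    # cinder.conf
    [nfs-backend]
    volume_backend_name = nfs
    volume_driver = cinder.volume.drivers.nfs.NfsDriver
    nfs_shares_config = /etc/cinder/nfs_shares
    nfs_mount_point_base = /var/lib/cinder/mnt

    # /etc/cinder/nfs_shares
    glusterserver:/cinder-volumes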

Re: [Openstack-operators] Problems with ec2-service on Ocata

2017-06-08 Thread Massimo Sgaravatto
to be in error) Cheers, Massimo 2017-06-08 9:40 GMT+02:00 Massimo Sgaravatto <massimo.sgarava...@gmail.com>: > I am indeed using an HAProxy which also acts as an SSL proxy. > > And, indeed I have the same problem using the nova CLI: > > # nova list > ERROR (ConnectFailure): Unable

[Openstack-operators] Problems with ec2-service on Ocata

2017-06-07 Thread Massimo Sgaravatto
Hi We are trying to configure the ec2 service on an Ocata OpenStack installation. If I try euca-describe-images it works, but if I try to get the list of instances (euca-describe-instances) it fails. Looking at the log [*], it looks to me like it initially uses the correct nova endpoint:
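
A generic first diagnostic for this class of problem (not the resolution of this thread): check what the Keystone catalog actually hands out for the ec2 and compute services, since a proxy or SSL terminator has to appear consistently there:

    openstack endpoint list --service ec2
    openstack endpoint list --service compute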

[Openstack-operators] nova_cell0 database connection string

2017-05-26 Thread Massimo Sgaravatto
Hi I am reading the RDO installation guide for Ocata. In the nova section [*] it is explained how to create the nova_cell0 database, but I can't find how to set the relevant connection string in the nova configuration file. Any hints? Thanks, Massimo [*]
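
For reference: this is handled by nova-manage rather than nova.conf. map_cell0 accepts the connection explicitly, and if the option is omitted it derives the URL from [database]/connection by appending _cell0 to the database name. A sketch (credentials are hypothetical):

    nova-manage cell_v2 map_cell0 \
        --database_connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_cell0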

Re: [Openstack-operators] Mixed env for nova (ceph for some compute nodes, local disk for the rest): qcow2 or raw images ?

2017-04-05 Thread Massimo Sgaravatto
ll start a qcow2 version of the image, it will be > scheduled on your compute nodes with local disk and pull the qcow2 image > from Glance. > > Does it make sense? > > George > > On Wed, Apr 5, 2017 at 10:05 AM, Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > >

[Openstack-operators] Mixed env for nova (ceph for some compute nodes, local disk for the rest): qcow2 or raw images ?

2017-04-05 Thread Massimo Sgaravatto
Hi Currently in our Cloud we are using Gluster storage for Cinder and Glance. For Nova we are using a shared file system (implemented with Gluster) on part of the compute nodes; the rest of the compute nodes use their local disk. We are now planning the replacement of Gluster with Ceph. The
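
As the reply above explains, Ceph (rbd) effectively wants raw images, while local-disk nodes handle qcow2 fine. If a raw copy of an existing qcow2 image is needed, the conversion is straightforward (file and image names are hypothetical):

    qemu-img convert -f qcow2 -O raw image.qcow2 image.raw
    openstack image create --disk-format raw --container-format bare \
        --file image.raw myimage-raw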

Re: [Openstack-operators] Migrating glance images to a new backend

2017-03-28 Thread Massimo Sgaravatto
you want to keep the glance service running for end user during > the migration? > > 4. Is it a public cloud? > > > On 25/03/17 04:55, Massimo Sgaravatto wrote: > > Hi > > In our Mitaka cloud we are currently using Gluster as storage backend for > Glance and Cinder

[Openstack-operators] Migrating glance images to a new backend

2017-03-24 Thread Massimo Sgaravatto
Hi In our Mitaka cloud we are currently using Gluster as the storage backend for Glance and Cinder. We are now starting the migration to Ceph: the idea is to dismiss Gluster when we are done. I have a question concerning Glance. I have understood (or at least I hope so) how to add Ceph as
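
A sketch of the Glance side, assuming Mitaka-era option names and a hypothetical pool/user: the rbd store can be enabled alongside the existing file store while images are migrated:

    # glance-api.conf
    [glance_store]
    stores = file,rbd
    default_store = rbd
    filesystem_store_datadir = /var/lib/glance/images/
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf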

Re: [Openstack-operators] User_id Based Policy Enforcement

2017-01-15 Thread Massimo Sgaravatto
Maybe this is related to https://bugs.launchpad.net/nova/+bug/1539351 ? In our Mitaka installation we had to keep using the v2.0 API to be able to use user_id in the policy file ... I don't know if there are better solutions ... Cheers, Massimo 2017-01-15 8:44 GMT+01:00 Hamza Achi
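
For context, the kind of policy rule involved (the rule name is illustrative; per the thread, user_id-based rules like this were only honored through the v2.0 API):

    {
        "os_compute_api:servers:delete": "rule:admin_api or user_id:%(user_id)s"
    }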

Re: [Openstack-operators] How to tune scheduling for "Insufficient compute resources" (race conditions ?)

2016-12-01 Thread Massimo Sgaravatto
ts = 10 > > Cheers, > George > > On Wed, Nov 30, 2016 at 9:56 AM, Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > >> Hi all >> >> I have a problem with scheduling in our Mitaka Cloud, >> Basically when there are a lo

Re: [Openstack-operators] How to tune scheduling for "Insufficient compute resources" (race conditions ?)

2016-11-30 Thread Massimo Sgaravatto
me > compute node for different instances. > > Belmiro > > On Wed, Nov 30, 2016 at 3:56 PM, Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > >> Hi all >> >> I have a problem with scheduling in our Mitaka Cloud, >> Basically when ther

[Openstack-operators] How to tune scheduling for "Insufficient compute resources" (race conditions ?)

2016-11-30 Thread Massimo Sgaravatto
Hi all I have a problem with scheduling in our Mitaka Cloud. Basically, when there are a lot of requests for new instances, some of them fail with "Failed to compute_task_build_instances: Exceeded maximum number of retries", and the failures are due to "Insufficient compute resources: Free
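
The two knobs suggested in the replies above, as they appear in a Mitaka-era nova.conf (the values shown are illustrative):

    [DEFAULT]
    # pick randomly among the best N hosts instead of always the single best,
    # so concurrent requests don't all pile onto the same node
    scheduler_host_subset_size = 10
    # allow more reschedules when the claim races and fails on the chosen node
    scheduler_max_attempts = 10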

Re: [Openstack-operators] Missing Glance metadef_resource_types table

2016-11-17 Thread Massimo Sgaravatto
pez <alopg...@gmail.com>: > This is a manual step to load them. If your installation was complete, you > should have a bunch of json files in /etc/glance/metadefs. > You need to load them with glance-manage db_load_metadefs > > > On Nov 17, 2016, at 9:12 AM, Massimo Sgaravatto <
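
The load step named in this reply, spelled out (paths are the stock packaging defaults):

    ls /etc/glance/metadefs/*.json   # definitions shipped with the package
    glance-manage db_load_metadefs   # populate the metadef_* tables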