[Openstack-operators] Missing Glance metadef_resource_types table

2016-11-17 Thread Massimo Sgaravatto
Hi, we have just done the Kilo --> Liberty --> Mitaka update of our Cloud. We went through Liberty just for the database migrations. In the updated (Mitaka) installation, if I click "Update metadata" on an image, I am told: Error: Unable to retrieve the namespaces. In the glance api log

Re: [Openstack-operators] Missing Glance metadef_resource_types table

2016-11-17 Thread Massimo Sgaravatto
> This is a manual step to load them. If your installation was complete, you > should have a bunch of json files in /etc/glance/metadefs. > You need to load them with glance-manage db_load_metadefs > > > On Nov 17, 2016, at 9:12 AM, Massimo Sgaravatto < > massimo.sgara
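The command named in the reply is enough on its own; a minimal sketch, assuming the JSON definitions sit in the /etc/glance/metadefs path mentioned above:

    # Load the JSON metadata definitions into the Glance database
    # (run where glance-manage and its configuration are installed).
    glance-manage db_load_metadefs /etc/glance/metadefs
    # Then confirm the namespaces are visible through the API:
    glance md-namespace-list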

[Openstack-operators] How to tune scheduling for "Insufficient compute resources" (race conditions ?)

2016-11-30 Thread Massimo Sgaravatto
Hi all, I have a problem with scheduling in our Mitaka Cloud. Basically, when there are a lot of requests for new instances, some of them fail because "Failed to compute_task_build_instances: Exceeded maximum number of retries". And the failures are because "Insufficient compute resources: Free memo
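The knobs usually tuned for this kind of scheduler race are sketched below; the Mitaka-era option names and the crudini/systemd commands are assumptions to verify against your own deployment:

    # /etc/nova/nova.conf on the controller(s), Mitaka-era [DEFAULT] options:
    # pick randomly among the N best hosts instead of always stacking on the
    # single best one, and allow a few more scheduling retries.
    crudini --set /etc/nova/nova.conf DEFAULT scheduler_host_subset_size 5
    crudini --set /etc/nova/nova.conf DEFAULT scheduler_max_attempts 5
    systemctl restart openstack-nova-scheduler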

Re: [Openstack-operators] How to tune scheduling for "Insufficient compute resources" (race conditions ?)

2016-11-30 Thread Massimo Sgaravatto
me > compute node for different instances. > > Belmiro > > On Wed, Nov 30, 2016 at 3:56 PM, Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > >> Hi all >> >> I have a problem with scheduling in our Mitaka Cloud, >> Basically when ther

Re: [Openstack-operators] How to tune scheduling for "Insufficient compute resources" (race conditions ?)

2016-12-01 Thread Massimo Sgaravatto
eorge > > On Wed, Nov 30, 2016 at 9:56 AM, Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > >> Hi all >> >> I have a problem with scheduling in our Mitaka Cloud, >> Basically when there are a lot of requests for new instances, some of >>

Re: [Openstack-operators] User_id Based Policy Enforcement

2017-01-15 Thread Massimo Sgaravatto
Maybe this is related to https://bugs.launchpad.net/nova/+bug/1539351 ? In our Mitaka installation we had to keep using the v2.0 API to be able to use user_id in the policy file ... I don't know if there are better solutions ... Cheers, Massimo 2017-01-15 8:44 GMT+01:00 Hamza Achi : > Hello,

[Openstack-operators] Migrating glance images to a new backend

2017-03-24 Thread Massimo Sgaravatto
Hi, in our Mitaka cloud we are currently using Gluster as the storage backend for Glance and Cinder. We are now starting the migration to Ceph: the idea is to dismiss Gluster once we are done. I have a question concerning Glance. I have understood (or at least I hope so) how to add Ceph as stor

Re: [Openstack-operators] Migrating glance images to a new backend

2017-03-28 Thread Massimo Sgaravatto
ep the glance service running for end user during > the migration? > > 4. Is it a public cloud? > > > On 25/03/17 04:55, Massimo Sgaravatto wrote: > > Hi > > In our Mitaka cloud we are currently using Gluster as storage backend for > Glance and Cinder. > We are now sta

Re: [Openstack-operators] Migrating glance images to a new backend

2017-03-29 Thread Massimo Sgaravatto
egy. For this case, you can set: > > stores=rbd,file > > location_strategy=store_type > > store_type_preference=rbd,file > > That means if there are 2 locations, Glance will try to use the RBD > location first, then the filesystem location. See more info > http
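Expressed as glance-api.conf edits, the advice quoted above would look roughly like this; the section names and the use of crudini are assumptions for an Ocata-era Glance:

    crudini --set /etc/glance/glance-api.conf glance_store stores rbd,file
    crudini --set /etc/glance/glance-api.conf DEFAULT location_strategy store_type
    crudini --set /etc/glance/glance-api.conf store_type_location_strategy store_type_preference rbd,file
    systemctl restart openstack-glance-api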

[Openstack-operators] Mixed env for nova (ceph for some compute nodes, local disk for the rest): qcow2 or raw images ?

2017-04-05 Thread Massimo Sgaravatto
Hi, currently in our Cloud we are using Gluster storage for Cinder and Glance. For Nova we are using a shared file system (implemented with Gluster) for part of the compute nodes; the rest of the compute nodes use their local disk. We are now planning the replacement of Gluster with Ceph. The ide

Re: [Openstack-operators] Mixed env for nova (ceph for some compute nodes, local disk for the rest): qcow2 or raw images ?

2017-04-05 Thread Massimo Sgaravatto
e, it will be > scheduled on your compute nodes with local disk and pull the qcow2 image > from Glance. > > Does it make sense? > > George > > On Wed, Apr 5, 2017 at 10:05 AM, Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > >> Hi >> >&
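One common way to serve both kinds of compute nodes is to publish each image twice, raw for the Ceph-backed nodes and qcow2 for the local-disk ones; a sketch with placeholder file and image names:

    # Convert once, upload both formats.
    qemu-img convert -f qcow2 -O raw centos7.qcow2 centos7.raw
    openstack image create --disk-format raw --container-format bare \
        --file centos7.raw centos7-raw
    openstack image create --disk-format qcow2 --container-format bare \
        --file centos7.qcow2 centos7-qcow2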

[Openstack-operators] nova_cell0 database connection string

2017-05-26 Thread Massimo Sgaravatto
Hi I am reading the RDO installation guide for Ocata. In the nova section [*] it is explained how to create the nova_cell0 database, but I can't find how to set the relevant connection string in the nova configuration file. Any hints ? Thanks, Massimo [*] https://docs.openstack.org/ocata/install

Re: [Openstack-operators] nova_cell0 database connection string

2017-05-26 Thread Massimo Sgaravatto
mapping table entries in your nova_api db. > > > On May 26, 2017, at 10:56 AM, Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > > Hi > > I am reading the RDO installation guide for Ocata. In the nova section [*] > it is explained how to create the
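As the reply suggests, the cell0 connection string is stored in the cell_mappings table of the nova_api database rather than in nova.conf; a sketch with placeholder credentials:

    nova-manage cell_v2 map_cell0 \
        --database_connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_cell0
    # Verify the recorded mappings (shows the connection strings):
    nova-manage cell_v2 list_cells --verbose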

[Openstack-operators] Problems with ec2-service on Ocata

2017-06-07 Thread Massimo Sgaravatto
Hi, we are trying to configure the ec2-service on an Ocata OpenStack installation. If I try a euca-describe-images it works, but if I try to get the list of instances (euca-describe-instances) it fails. Looking at the log [*], it looks to me like it initially uses the correct nova endpoint: h

Re: [Openstack-operators] Problems with ec2-service on Ocata

2017-06-08 Thread Massimo Sgaravatto
be in error) Cheers, Massimo 2017-06-08 9:40 GMT+02:00 Massimo Sgaravatto : > I am indeed using a HAProxy which also acts as SSL proxy. > > And, indeed I have the same problem using the nova CLI: > > # nova list > ERROR (ConnectFailure): Unable to establish connection t
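A quick way to check which URLs the catalog is actually handing out to clients behind the SSL-terminating HAProxy (service name assumed to be nova):

    # Compare the public/internal/admin URLs with what the HAProxy exposes.
    openstack catalog show nova
    openstack endpoint list --service nova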

[Openstack-operators] Gluster storage for Cinder: migrating from Gluster to NFS driver

2017-07-01 Thread Massimo Sgaravatto
Hi, I have an iSCSI storage system which provides Cinder block storage to an OpenStack cloud (running Mitaka) using the Gluster cinder driver. We are now preparing the Mitaka --> Newton --> Ocata update. Since the Gluster cinder driver is no longer supported in Ocata, the idea is to expose that s
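A minimal sketch of the NFS side of such a migration; the backend name, export path and use of crudini are assumptions:

    # Define an NFS backend in /etc/cinder/cinder.conf
    crudini --set /etc/cinder/cinder.conf nfs volume_driver cinder.volume.drivers.nfs.NfsDriver
    crudini --set /etc/cinder/cinder.conf nfs volume_backend_name nfs
    crudini --set /etc/cinder/cinder.conf nfs nfs_shares_config /etc/cinder/nfs_shares
    crudini --set /etc/cinder/cinder.conf DEFAULT enabled_backends nfs
    # One "host:/export" line per share:
    echo "nfsserver.example.com:/cinder-volumes" > /etc/cinder/nfs_shares
    systemctl restart openstack-cinder-volume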

[Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-25 Thread Massimo Sgaravatto
Hi, in our OpenStack cloud we have two backends for Cinder (exposed through two volume types), and we set different quotas for these two volume types. The problem happens when a user, using the dashboard, tries to create a volume of a volume type whose project quota is already exhausted: - the repor

Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-25 Thread Massimo Sgaravatto
We are currently running Mitaka (preparing to update to Ocata). I see the same behavior on an Ocata-based testbed. Thanks, Massimo 2017-09-25 9:50 GMT+02:00 Saverio Proto : > Hello Massimo, > > what is your version of Openstack ?? > > thank you > > Saverio > > 2017-0

Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-25 Thread Massimo Sgaravatto
ou > > Saverio > > > 2017-09-25 9:55 GMT+02:00 Massimo Sgaravatto >: > > We are currently running Mitaka (preparing to update to Ocata). I see the > > same behavior on an Ocata based testbed > > > > Thanks, Massimo > > > > 2017-09-25 9:50 GMT+

Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-25 Thread Massimo Sgaravatto
>> We should ask the Horizon PTL how to get this feature request into > >> implementation. > >> > >> With the command line interface, can you already see the two different > >> quotas for the two different volume types ? Can you paste an example > >> o
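For reference, the per-volume-type quotas are already visible (and settable) from the command line; the project ID and type name below are placeholders:

    # Set a quota for one volume type, then inspect it: quota-show lists
    # per-type keys such as volumes_<type> and gigabytes_<type>.
    cinder quota-update --volumes 10 --gigabytes 500 --volume-type ceph $PROJECT_ID
    cinder quota-show $PROJECT_ID
    cinder quota-usage $PROJECT_ID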

Re: [Openstack-operators] Cinder quota per volume types in the dashboard

2017-09-26 Thread Massimo Sgaravatto
implementation (as we only have the user facing part, > not the admin panel). > > Cheers, > Arne > > > On 25 Sep 2017, at 10:46, Arne Wiebalck wrote: > > Ah, nice, wasn’t aware. Mateusz is one of the Horizon experts here at CERN > I was referring to :) > > On 25

[Openstack-operators] Problems changing the access list for flavors using the dashboard

2017-12-19 Thread Massimo Sgaravatto
Dear Operators, I have a Mitaka OpenStack installation and in that deployment I am able to modify the access list of flavors using the dashboard without problems. I am even able to change a public flavor into a private one (specifying, via the dashboard, the list of projects allowed to use the flav

[Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-05 Thread Massimo Sgaravatto
Hi, I want to partition my OpenStack cloud so that: - Projects p1, p2, ..., pn can use only compute nodes C1, C2, ..., Cx - Projects pn+1, ..., pm can use only compute nodes Cx+1, ..., Cy I read that CERN addressed this use case by implementing the ProjectsToAggregateFilter but, as far as I understand, this
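With the stock AggregateMultiTenancyIsolation filter this is driven by aggregate metadata; a sketch with placeholder aggregate, host and project IDs (note that an aggregate metadata value is limited to 255 characters, which is exactly what makes a long project list painful):

    # Group the compute nodes and pin projects to them via the
    # filter_tenant_id key read by AggregateMultiTenancyIsolation.
    openstack aggregate create agg-group1
    openstack aggregate add host agg-group1 compute-01
    openstack aggregate add host agg-group1 compute-02
    openstack aggregate set --property filter_tenant_id=${P1_ID},${P2_ID} agg-group1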

Re: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects

2018-02-06 Thread Massimo Sgaravatto
2018 at 06:45, Massimo Sgaravatto < > massimo.sgarava...@gmail.com> wrote: > >> Hi >> >> I want to partition my OpenStack cloud so that: >> >> - Projects p1, p2, ..., pn can use only compute nodes C1, C2, ..., Cx >> - Projects pn+1, ..., pm can use

Re: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey

2018-04-20 Thread Massimo Sgaravatto
enabled_filters = AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation,RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,AggregateRamFilter,AggregateCoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinit

[Openstack-operators] Recipe to transfer the ownership of an instance

2018-04-23 Thread Massimo Sgaravatto
As far as I understand there is no clean way to transfer the ownership of an instance from one user to another (the implementation of the blueprint https://blueprints.launchpad.net/nova/+spec/transfer-instance-ownership was abandoned). Is there at least a recipe (i.e. what needs to be chang

Re: [Openstack-operators] Recipe to transfer the ownership of an instance

2018-04-23 Thread Massimo Sgaravatto
es/ > > the bad thing is that the new VM has a new IP address, so eventually > DNS records have to be updated by the users. > > Cheers, > > Saverio > > > 2018-04-23 10:17 GMT+02:00 Massimo Sgaravatto < > massimo.sgarava...@gmail.com>: > > As far as I und
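The snapshot-based recipe referred to above boils down to something like the following; image, server and flavor names are placeholders, and the exact sharing flags depend on the Glance and client versions in use:

    # As the current owner: snapshot the instance and share the image.
    openstack server image create --name transfer-snap old-vm
    openstack image set --shared transfer-snap
    openstack image add project transfer-snap $DEST_PROJECT_ID
    # As a member of the destination project: accept the image and boot from it
    # (add whatever --nic / --key-name options you normally use).
    glance member-update $IMAGE_ID $DEST_PROJECT_ID accepted
    openstack server create --image transfer-snap --flavor m1.small new-vm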

[Openstack-operators] Problems with AggregateMultiTenancyIsolation while migrating an instance

2018-05-29 Thread Massimo Sgaravatto
I have a small testbed OpenStack cloud (running Ocata) where I am trying to debug a problem with Nova scheduling. In short: I see different behaviors when I create a new VM and when I try to migrate a VM. Since I want to partition the Cloud so that each project uses only certain compute nodes, I
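A quick way to cross-check the two things the filter compares during scheduling (server and aggregate names are placeholders; field names may vary slightly between client versions):

    # Project that owns the instance being migrated:
    openstack server show my-vm -c project_id -f value
    # Tenant IDs the target hosts' aggregate is restricted to:
    openstack aggregate show agg-group1 -c properties -f value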

Re: [Openstack-operators] Problems with AggregateMultiTenancyIsolation while migrating an instance

2018-05-29 Thread Massimo Sgaravatto
The VM that I am trying to migrate was created when the Cloud was already running Ocata. Cheers, Massimo On Tue, May 29, 2018 at 9:47 PM, Matt Riedemann wrote: > On 5/29/2018 12:44 PM, Jay Pipes wrote: > >> Either that, or the wrong project_id is being used when attempting to >> migrate? Maybe t

Re: [Openstack-operators] Problems with AggregateMultiTenancyIsolation while migrating an instance

2018-05-30 Thread Massimo Sgaravatto
s, Massimo On Wed, May 30, 2018 at 1:01 AM, Matt Riedemann wrote: > On 5/29/2018 3:07 PM, Massimo Sgaravatto wrote: > >> The VM that I am trying to migrate was created when the Cloud was already >> running Ocata >> > > OK, I'd added the tenant_id variabl

Re: [Openstack-operators] Problems with AggregateMultiTenancyIsolation while migrating an instance

2018-05-30 Thread Massimo Sgaravatto
Thanks a lot !! On Wed, May 30, 2018 at 8:06 PM, Matt Riedemann wrote: > On 5/30/2018 9:41 AM, Matt Riedemann wrote: > >> Thanks for your patience in debugging this Massimo! I'll get a bug >> reported and patch posted to fix it. >> > > I'm tracking the problem with this bug: > > https://bugs.lau