Hi
We have just completed the update Kilo --> Liberty --> Mitaka of our Cloud. We
went through Liberty only for the database migration.
In the updated (to Mitaka) installation, if I click "Update metadata" on a
image, I am told:
Error: Unable to retrieve the namespaces.
In the glance api log
> This is a manual step to load them. If your installation was complete, you
> should have a bunch of json files in /etc/glance/metadefs.
> You need to load them with glance-manage db_load_metadefs
>
> > On Nov 17, 2016, at 9:12 AM, Massimo Sgaravatto <
> massimo.sgara
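The loading step mentioned above can be sketched as a single command (the metadefs path may differ per distribution):

```shell
# Load the metadata definitions shipped as JSON files with the Glance
# packages into the Glance database (path may vary by distribution)
glance-manage db_load_metadefs /etc/glance/metadefs
```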
Hi all
I have a problem with scheduling in our Mitaka Cloud,
Basically, when there are a lot of requests for new instances, some of them
fail with "Failed to compute_task_build_instances: Exceeded maximum
number of retries". And the failures are because "Insufficient compute
resources: Free memory
> ... the same compute node for different instances.
>
> Belmiro
>
> On Wed, Nov 30, 2016 at 3:56 PM, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> wrote:
>
>> Hi all
>>
>> I have a problem with scheduling in our Mitaka Cloud,
>> Basically when there are a lot of requests for new instances, some of
George
>
> On Wed, Nov 30, 2016 at 9:56 AM, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> wrote:
>
>> Hi all
>>
>> I have a problem with scheduling in our Mitaka Cloud,
>> Basically when there are a lot of requests for new instances, some of
>>
Maybe this is relevant with:
https://bugs.launchpad.net/nova/+bug/1539351
?
In our Mitaka installation we had to keep using v2.0 API to be able to use
user_id in the policy file ...
I don't know if there are better solutions ...
Cheers, Massimo
2017-01-15 8:44 GMT+01:00 Hamza Achi:
> Hello,
Hi
In our Mitaka cloud we are currently using Gluster as storage backend for
Glance and Cinder.
We are now starting the migration to ceph: the idea is then to dismiss
gluster when we have done.
I have a question concerning Glance.
I have understood (or at least I hope so) how to add ceph as stor
> ... keep the glance service running for end users during
> the migration?
>
> 4. Is it a public cloud?
>
>
> On 25/03/17 04:55, Massimo Sgaravatto wrote:
>
> Hi
>
> In our Mitaka cloud we are currently using Gluster as storage backend for
> Glance and Cinder.
> We are now sta
> ... strategy. For this case, you can set:
>
> stores=rbd,file
>
> location_strategy=store_type
>
> store_type_preference=rbd,file
>
> That means if there are 2 locations, Glance will try to use the RBD
> location first, then filesystem location. See more info
> http
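Assembled into a glance-api.conf fragment, the settings quoted above would look roughly like the following (a sketch; the section placement is how I recall it for Mitaka-era Glance and should be checked against your version's config reference):

```ini
[DEFAULT]
# Pick locations by store type rather than in listed order
location_strategy = store_type

[glance_store]
# Enabled stores: RBD (Ceph) and local filesystem
stores = rbd,file

[store_type_location_strategy]
# Prefer the RBD location, fall back to the filesystem one
store_type_preference = rbd,file
```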
Hi
Currently in our Cloud we are using a gluster storage for cinder and glance.
For nova we are using a shared file system (implemented using gluster) for
part of the compute nodes; the rest of the compute nodes use the local disk.
We are now planning the replacement of gluster with ceph. The idea
e, it will be
> scheduled on your compute nodes with local disk and pull the qcow2 image
> from Glance.
>
> Does it make sense?
>
> George
>
> On Wed, Apr 5, 2017 at 10:05 AM, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> wrote:
>
>> Hi
>>
>>
Hi
I am reading the RDO installation guide for Ocata. In the nova section [*]
it explains how to create the nova_cell0 database, but I can't find how
to set the relevant connection string in the nova configuration file.
Any hints?
Thanks, Massimo
[*]
https://docs.openstack.org/ocata/install
mapping table entries in your nova_api db.
>
>
> On May 26, 2017, at 10:56 AM, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> wrote:
>
> Hi
>
> I am reading the RDO installation guide for Ocata. In the nova section [*]
> it is explained how to create the
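For the record, the cell0 mapping is normally created with nova-manage rather than by editing nova.conf by hand; the connection string ends up in the cell mapping table. A sketch (credentials and host are placeholders):

```shell
# Map cell0 to its database; the connection string is stored in the
# nova_api cell mapping, not in nova.conf (placeholders below)
nova-manage cell_v2 map_cell0 \
    --database_connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_cell0
```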
Hi
We are trying to configure the EC2 service on an Ocata OpenStack
installation.
euca-describe-images works, but getting the list of
instances (euca-describe-instances) fails.
Looking at the log [*], it looks to me like it initially uses the
correct nova endpoint:
h
be in
error)
Cheers, Massimo
2017-06-08 9:40 GMT+02:00 Massimo Sgaravatto:
> I am indeed using a HAProxy which also acts as SSL proxy.
>
> And, indeed I have the same problem using the nova CLI:
>
> # nova list
> ERROR (ConnectFailure): Unable to establish connection t
Hi
I have an iSCSI storage system which provides Cinder block storage to an
OpenStack cloud (running Mitaka) using the Gluster Cinder driver.
We are now preparing the update Mitaka --> Newton --> Ocata.
Since the Gluster Cinder driver is no longer supported in Ocata, the idea
is to expose that s
Hi
In our OpenStack cloud we have two backends for Cinder (exposed using two
volume types), and we set different quotas for these two volume types.
The problem happens when a user, using the dashboard, tries to create a
volume of a volume type for which the project quota is exhausted:
- the repor
We are currently running Mitaka (preparing to update to Ocata). I see the
same behavior on an Ocata based testbed
Thanks, Massimo
2017-09-25 9:50 GMT+02:00 Saverio Proto:
> Hello Massimo,
>
> What is your version of OpenStack?
>
> thank you
>
> Saverio
>
> 2017-0
> ... thank you
>
> Saverio
>
>
> 2017-09-25 9:55 GMT+02:00 Massimo Sgaravatto:
> > We are currently running Mitaka (preparing to update to Ocata). I see the
> > same behavior on an Ocata based testbed
> >
> > Thanks, Massimo
> >
> > 2017-09-25 9:50 GMT+
>> We should ask the Horizon PTL how to get this feature request into
> >> implementation.
> >>
> >> With the command line interface, can you already see the two different
> >> quotas for the two different volume types? Can you paste an example
> >> of
implementation (as we only have the user facing part,
> not the admin panel).
>
> Cheers,
> Arne
>
>
> On 25 Sep 2017, at 10:46, Arne Wiebalck wrote:
>
> Ah, nice, wasn’t aware. Mateusz is one of the Horizon experts here at CERN
> I was referring to :)
>
> On 25
Dear Operators
I have a Mitaka OpenStack installation, and in this deployment I am able to
modify the access list of flavors using the dashboard without problems. I
am even able to change a public flavor to a private one (specifying, using
the dashboard, the list of projects allowed to use the flavor).
Hi
I want to partition my OpenStack cloud so that:
- Projects p1, p2, ..., pn can use only compute nodes C1, C2, ..., Cx
- Projects pn+1, ..., pm can use only compute nodes Cx+1, ..., Cy
I read that CERN addressed this use case by implementing the
ProjectsToAggregateFilter but, as far as I understand, this
2018 à 06:45, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> a écrit :
>
>> Hi
>>
>> I want to partition my OpenStack cloud so that:
>>
>> - Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx
>> - Projects pn+1.. pm can use
enabled_filters =
AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation,RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,AggregateRamFilter,AggregateCoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
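With AggregateMultiTenancyIsolation in the filter list above, the project-to-compute-node partitioning can be sketched with host aggregates (project IDs and host names below are placeholders):

```shell
# Create an aggregate for the first group of compute nodes
openstack aggregate create group1
openstack aggregate add host group1 C1
openstack aggregate add host group1 C2

# Restrict the aggregate to a specific project; only instances of that
# project can then land on these hosts (placeholder project ID)
openstack aggregate set --property filter_tenant_id=PROJECT_P1_ID group1
```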
As far as I understand there is no clean way to transfer the ownership
of an instance from one user to another (the implementation of the
blueprint
https://blueprints.launchpad.net/nova/+spec/transfer-instance-ownership was
abandoned).
Is there at least a recipe (i.e. what needs to be changed
es/
>
> the bad thing is that the new VM has a new IP address, so eventually
> DNS records have to be updated by the users.
>
> Cheers,
>
> Saverio
>
>
> 2018-04-23 10:17 GMT+02:00 Massimo Sgaravatto <
> massimo.sgarava...@gmail.com>:
> > As far as I und
I have a small testbed OpenStack cloud (running Ocata) where I am trying to
debug a problem with Nova scheduling.
In short: I see different behaviors when I create a new VM and when I try
to migrate a VM.
Since I want to partition the Cloud so that each project uses only certain
compute nodes, I
The VM that I am trying to migrate was created when the Cloud was already
running Ocata
Cheers, Massimo
On Tue, May 29, 2018 at 9:47 PM, Matt Riedemann wrote:
> On 5/29/2018 12:44 PM, Jay Pipes wrote:
>
>> Either that, or the wrong project_id is being used when attempting to
>> migrate? Maybe t
Thanks, Massimo
On Wed, May 30, 2018 at 1:01 AM, Matt Riedemann wrote:
> On 5/29/2018 3:07 PM, Massimo Sgaravatto wrote:
>
>> The VM that I am trying to migrate was created when the Cloud was already
>> running Ocata
>>
>
> OK, I'd added the tenant_id variable
Thanks a lot !!
On Wed, May 30, 2018 at 8:06 PM, Matt Riedemann wrote:
> On 5/30/2018 9:41 AM, Matt Riedemann wrote:
>
>> Thanks for your patience in debugging this Massimo! I'll get a bug
>> reported and patch posted to fix it.
>>
>
> I'm tracking the problem with this bug:
>
> https://bugs.lau