Thanks a lot!!
On Wed, May 30, 2018 at 8:06 PM, Matt Riedemann wrote:
> On 5/30/2018 9:41 AM, Matt Riedemann wrote:
>
>> Thanks for your patience in debugging this Massimo! I'll get a bug
>> reported and patch posted to fix it.
>>
>
> I'm tracking the problem with this bug:
>
>
Cheers, Massimo
On Wed, May 30, 2018 at 1:01 AM, Matt Riedemann wrote:
> On 5/29/2018 3:07 PM, Massimo Sgaravatto wrote:
>
>> The VM that I am trying to migrate was created when the Cloud was already
>> running Ocata
>>
>
> OK, I'd added the tenant_id variable in
The VM that I am trying to migrate was created when the Cloud was already
running Ocata
Cheers, Massimo
On Tue, May 29, 2018 at 9:47 PM, Matt Riedemann wrote:
> On 5/29/2018 12:44 PM, Jay Pipes wrote:
>
>> Either that, or the wrong project_id is being used when attempting to
>> migrate? Maybe
I have a small testbed OpenStack cloud (running Ocata) where I am trying to
debug a problem with Nova scheduling.
In short: I see different behaviors when I create a new VM and when I try
to migrate a VM.
Since I want to partition the Cloud so that each project uses only certain
compute nodes,
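For reference, the standard way to get this kind of partitioning is a
host aggregate per group of projects plus the
AggregateMultiTenancyIsolation scheduler filter. A minimal nova.conf
sketch, assuming the Ocata-era option name (newer releases keep the
same list under [filter_scheduler]/enabled_filters):

  [DEFAULT]
  scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateMultiTenancyIsolation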
cumentation/migrating-resources/
>
> the bad thing is that the new VM has a new IP address, so eventually
> DNS records have to be updated by the users.
>
> Cheers,
>
> Saverio
>
>
> 2018-04-23 10:17 GMT+02:00 Massimo Sgaravatto <
> massimo.sgarava...@gmail.
As far as I understand there is no clean way to transfer the ownership
of an instance from one user to another (the implementation of the
blueprint
https://blueprints.launchpad.net/nova/+spec/transfer-instance-ownership was
abandoned).
Is there at least a recipe (i.e. what needs to be
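One workaround that avoids touching the database is to snapshot the
instance and share the resulting image with the target project, then
boot a new instance there. A rough sketch with placeholder names (note
this produces a copy with a new UUID and IP, not a true ownership
transfer):

  openstack server image create --name transfer-snap myserver
  openstack image add project transfer-snap TARGET_PROJECT_ID
  # as the target project, accept the shared image and boot from it:
  openstack image set --accept transfer-snap
  openstack server create --image transfer-snap --flavor m1.small \
      --network mynet myserver-copy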
enabled_filters =
ing to do?
> On Tue, Feb 6, 2018 at 06:45, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> wrote:
>
>> Hi
>>
>> I want to partition my OpenStack cloud so that:
>>
>> - Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx
>> - Proje
Hi
I want to partition my OpenStack cloud so that:
- Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx
- Projects pn+1.. pm can use only compute nodes Cx+1 ... Cy
I read that CERN addressed this use case by implementing the
ProjectsToAggregateFilter but, as far as I understand, this
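For reference, the in-tree way to express this is host aggregates with
the AggregateMultiTenancyIsolation scheduler filter. A rough sketch
(aggregate names, host names and project IDs below are placeholders):

  openstack aggregate create agg-projects-1
  openstack aggregate add host agg-projects-1 C1
  openstack aggregate add host agg-projects-1 C2
  openstack aggregate set --property filter_tenant_id=P1_PROJECT_ID agg-projects-1

With the filter enabled, hosts in an aggregate that carries
filter_tenant_id only accept instances from the listed project(s);
hosts outside any such aggregate remain usable by everybody.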
Dear Operators
I have a Mitaka OpenStack installation, and in that deployment I am able to
modify the access list of flavors using the dashboard without problems. I
am even able to change a public flavor into a private one (specifying, using
the dashboard, the list of projects allowed to use the
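For comparison, the same access list can be inspected and changed from
the CLI; the flavor and project names here are placeholders:

  openstack flavor show m1.custom -c name -c os-flavor-access:is_public
  openstack flavor set --project PROJECT_ID m1.custom     # grant access
  openstack flavor unset --project PROJECT_ID m1.custom   # revoke access

Note that --project only takes effect on flavors that are already
private.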
erts here at CERN
> I was referring to :)
>
> On 25 Sep 2017, at 10:41, Massimo Sgaravatto <massimo.sgarava...@gmail.com>
> wrote:
>
> Just found that there is already this one:
>
> https://bugs.launchpad.net/horizon/+bug/1717342
>
> 2017-09-25 10:28 GMT+02:00 Sa
> >> If you are on IRC please join #openstack-horizon.
> >>
> >> We should ask the Horizon PTL how to get this feature request into
> >> implementation.
> >>
> >> With the command line interface, can you already see the two different
> >> quot
the CLI ?
>
> thank you
>
> Saverio
>
>
> 2017-09-25 9:55 GMT+02:00 Massimo Sgaravatto <massimo.sgarava...@gmail.com
> >:
> > We are currently running Mitaka (preparing to update to Ocata). I see the
> > same behavior on an Ocata-based testbed
> >
>
Saverio
>
> 2017-09-25 9:13 GMT+02:00 Massimo Sgaravatto <massimo.sgarava...@gmail.com
> >:
> > Hi
> >
> >
> > In our OpenStack cloud we have two backends for Cinder (exposed using two
> > volume types), and we set different quotas for these two volume ty
Hi
In our OpenStack cloud we have two backends for Cinder (exposed using two
volume types), and we set different quotas for these two volume types.
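For reference, this is roughly how such per-type quotas are set with
the cinder CLI (type names and numbers below are made up):

  cinder quota-update --volumes 20 --gigabytes 1000 --volume-type ceph PROJECT_ID
  cinder quota-update --volumes 10 --gigabytes 500 --volume-type gluster PROJECT_ID
  cinder quota-show PROJECT_ID    # lists volumes_<type>, gigabytes_<type>, ...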
The problem happens when a user, using the dashboard, tries to create a
volume using a volume type for which the project quota is already exhausted:
- the
Hi
I have an iSCSI storage system which provides Cinder block storage to an
OpenStack cloud (running Mitaka) using the Gluster Cinder driver.
We are now preparing the update Mitaka --> Newton --> Ocata.
Since the Gluster Cinder driver is no longer supported in Ocata, the idea
is to expose that
to be in
error)
Cheers, Massimo
2017-06-08 9:40 GMT+02:00 Massimo Sgaravatto <massimo.sgarava...@gmail.com>:
> I am indeed using an HAProxy instance which also acts as an SSL proxy.
>
> And, indeed I have the same problem using the nova CLI:
>
> # nova list
> ERROR (ConnectFailure): Unable
Hi
We are trying to configure the ec2 service on an Ocata OpenStack
installation.
If I run euca-describe-images it works, but if I try to get the list of
instances (euca-describe-instances) it fails.
Looking at the log [*], it looks to me like it initially uses the
correct nova endpoint:
Hi
I am reading the RDO installation guide for Ocata. The nova section [*]
explains how to create the nova_cell0 database, but I can't find how
to set the relevant connection string in the nova configuration file.
Any hints?
Thanks, Massimo
[*]
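For what it's worth, in Ocata the cell0 connection string is normally
not written into nova.conf by hand; it is recorded in the API database
with nova-manage. A sketch with placeholder credentials:

  nova-manage cell_v2 map_cell0 \
      --database_connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_cell0
  nova-manage cell_v2 list_cells

If --database_connection is omitted, nova-manage derives the URL from
the [database]/connection option by appending _cell0 to the database
name.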
ll start a qcow2 version of the image, it will be
> scheduled on your compute nodes with local disk and pull the qcow2 image
> from Glance.
>
> Does it make sense?
>
> George
>
> On Wed, Apr 5, 2017 at 10:05 AM, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> wrote:
>
>
Hi
Currently in our cloud we are using Gluster storage for Cinder and Glance.
For Nova we are using a shared file system (implemented with Gluster) for
part of the compute nodes; the rest of the compute nodes use local disk.
We are now planning the replacement of Gluster with Ceph. The
you want to keep the glance service running for end user during
> the migration?
>
> 4. Is it a public cloud?
>
>
> On 25/03/17 04:55, Massimo Sgaravatto wrote:
>
> Hi
>
> In our Mitaka cloud we are currently using Gluster as storage backend for
> Glance and Cinder
Hi
In our Mitaka cloud we are currently using Gluster as storage backend for
Glance and Cinder.
We are now starting the migration to Ceph; the idea is to dismiss
Gluster once we are done.
I have a question concerning Glance.
I have understood (or at least I hope so) how to add ceph as
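For reference, a minimal glance-api.conf sketch for the Ceph side,
assuming a pool named "images" and a cephx user named "glance" (both
placeholders); this is just the standard rbd store configuration, not
anything migration-specific:

  [glance_store]
  stores = file,rbd
  default_store = rbd
  rbd_store_pool = images
  rbd_store_user = glance
  rbd_store_ceph_conf = /etc/ceph/ceph.conf

Keeping the old store in the stores list lets existing images remain
readable while new uploads land in Ceph.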
Maybe this is related to:
https://bugs.launchpad.net/nova/+bug/1539351
?
In our Mitaka installation we had to keep using the v2.0 API to be able to use
user_id in the policy file ...
I don't know if there are better solutions ...
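For context, the kind of policy.json rule involved looks like the
following; the exact policy keys depend on the nova release and on
which actions you want to restrict to the owning user:

  {
      "os_compute_api:servers:delete": "rule:admin_api or user_id:%(user_id)s"
  }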
Cheers, Massimo
2017-01-15 8:44 GMT+01:00 Hamza Achi
ts = 10
>
> Cheers,
> George
>
> On Wed, Nov 30, 2016 at 9:56 AM, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> wrote:
>
>> Hi all
>>
>> I have a problem with scheduling in our Mitaka Cloud,
>> Basically when there are a lo
me
> compute node for different instances.
>
> Belmiro
>
> On Wed, Nov 30, 2016 at 3:56 PM, Massimo Sgaravatto <
> massimo.sgarava...@gmail.com> wrote:
>
>> Hi all
>>
>> I have a problem with scheduling in our Mitaka Cloud,
>> Basically when ther
Hi all
I have a problem with scheduling in our Mitaka Cloud.
Basically, when there are a lot of requests for new instances, some of them
fail because "Failed to compute_task_build_instances: Exceeded maximum
number of retries". And the failures are because "Insufficient compute
resources: Free
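The nova.conf knobs that usually matter for this symptom in Mitaka are
the following (the values are only an illustration, not a
recommendation):

  [DEFAULT]
  scheduler_max_attempts = 10
  scheduler_host_subset_size = 5

A larger host subset spreads concurrent requests over several of the
best hosts instead of piling them all onto the same one, and more
attempts give a rescheduled instance a better chance of landing on a
host with free resources.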
pez <alopg...@gmail.com>:
> This is a manual step to load them. If your installation was complete, you
> should have a bunch of json files in /etc/glance/metadefs.
> You need to load them with glance-manage db_load_metadefs
>
> > On Nov 17, 2016, at 9:12 AM, Massimo Sgaravatto <
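For completeness, a concrete invocation of the command mentioned above,
assuming the default path:

  glance-manage db_load_metadefs /etc/glance/metadefs

The path argument is optional; without it glance-manage falls back to
its configured metadata definitions directory.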