Hello,
I will be leaving this mailing list in a few days.
I am going to a new job and I will not be involved with Openstack, at
least in the short term.
Still, it was great working with the Openstack community in the past few years.
If you need to reach me about any bug/patch/review that
Hello,
we route the Ceph storage network in the same fabric. We have not had
problems with that so far.
Cheers
Saverio
On Thu, Aug 16, 2018 at 10:43, Paul Browne
wrote:
>
> Hi operators,
>
> I had a quick question for those operators who use a routed topology for
> their
Hello Rambo,
you can find information about other deployments by reading the User Survey:
https://www.openstack.org/user-survey/survey-2018/landing
For blog posts with experiences from other operators, check out:
https://superuser.openstack.org/ and http://planet.openstack.org/
Cheers
Saverio
x:8774/compute
>
>
> Cheers,
> George
>
> On Tue, Aug 7, 2018 at 9:30 AM, Saverio Proto wrote:
>>
>> Hello Jimmy,
>>
>> thanks for your help. If I understand correctly the answer you linked,
>> that helps if you operate the cloud and you have acce
ck-is-installed/at
>
> Once you get the release number, you have to look it up here to match
> the release date: https://releases.openstack.org/
>
> I had to use this the other day when taking the COA.
>
> Cheers,
> Jimmy
>
> Saverio Proto wrote:
> > Hello,
happen very often.
>
> Can it be RabbitMQ? I'm not sure where to check.
>
> Thanks,
> Radu
>
> On Fri, 2018-06-15 at 17:11 +0200, Saverio Proto wrote:
>
> Hello Radu,
>
>
> yours looks more or less like a bug report. Did you check existing
>
> open bugs for neutron
Hello Radu,
yours looks more or less like a bug report. Did you check existing
open bugs for neutron? Also, what version of openstack are you
running?
How did you configure the enable_isolated_metadata and
enable_metadata_network options?
Saverio
2018-06-13 12:45 GMT+02:00 Radu Popescu | eMAG,
on redirect etc).
>
> What do you need to know?
>
> Le lun. 28 mai 2018 à 14:50, Saverio Proto a écrit :
>>
>> Hello Chris,
>>
>> I finally had the time to write about my deployment:
>>
>> https://cloudblog.switch.ch/2018/05/22/openstack-horizon-runs-o
Hello Chris,
I finally had the time to write about my deployment:
https://cloudblog.switch.ch/2018/05/22/openstack-horizon-runs-on-kubernetes-in-production-at-switch/
in this blog post I explain why I use the kubernetes nginx-ingress
instead of Openstack LBaaS.
Cheers,
Saverio
2018-03-15
u Popescu | eMAG, Technology wrote:
>
> Hi,
>
> actually, I didn't know about that option. I'll enable it right now.
> Testing is done every morning at about 4:00AM ..so I'll know tomorrow
> morning if it changed anything.
>
> Thanks,
> Radu
>
> On Tue, 2018-05-
Sorry, the email went out incomplete.
Read this:
https://cloudblog.switch.ch/2017/08/28/starting-1000-instances-on-switchengines/
Make sure that the Openstack rootwrap is configured to work in daemon mode.
Thank you
Saverio
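For reference, daemon mode is enabled in nova.conf with the use_rootwrap_daemon option (a sketch; double-check the option name against your release's configuration reference):

```ini
[DEFAULT]
# Keep a long-lived rootwrap daemon instead of spawning
# sudo + rootwrap for every privileged command.
use_rootwrap_daemon = True
```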
2018-05-22 15:29 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:
> H
Hello Radu,
do you have the Openstack rootwrap configured to work in daemon mode?
Please read this article:
2018-05-18 10:21 GMT+02:00 Radu Popescu | eMAG, Technology
:
> Hi,
>
> so, nova says the VM is ACTIVE and actually boots with no network. We are
> setting some
Hello Massimo,
what we suggest to our users is to migrate the volume and create a
new VM from that volume.
https://help.switch.ch/engines/documentation/migrating-resources/
The bad thing is that the new VM gets a new IP address, so the users
may have to update their DNS records.
It works for me in Newton.
Try it at your own risk :)
Cheers,
Saverio
2018-04-09 13:23 GMT+02:00 Anwar Durrani <durrani.an...@gmail.com>:
> No this is different one. should i try this one ? if it works ?
>
> On Mon, Apr 9, 2018 at 4:11 PM, Saverio Proto <ziopr...@gmail.com&g
Hello Ignazio,
it would be interesting to know how this works. For instance ports,
those are created by openvswitch on the compute nodes, where the
neutron agent takes care of security group enforcement (via
iptables or openvswitch rules).
the LBaaS is a namespace that lives where
My idea is that if the delete_on_termination flag is set to False, the
volume should never be deleted by Nova.
My 2 cents
Saverio
2018-03-14 15:10 GMT+01:00 Tim Bell :
> Matt,
>
> To add another scenario and make things even more difficult (sorry (), if the
> original volume has
plit('/templates/')" does not cause the trouble.
>
> Cheers,
> Mateusz
>
>> On 5 Feb 2018, at 14:44, Saverio Proto <saverio.pr...@switch.ch> wrote:
>>
>> Hello,
>>
>> I have tried to find a fix to this:
>>
>> https://ask.openstack
> If you’re willing to, I could share with you a way to get a FrankeinCloud
> using a Docker method with kolla to get a pike/queens/whatever cloud at the
> same time that your Ocata one.
I am interested in knowing more about this. If you have any link or
blog post, please share it :)
Thank you
is.
- Are we two operators hitting a corner case?
- Does no one else use Horizon with custom themes in production with a
version newer than Newton?
This is all food for your brainstorming about LTS, bugfix branches,
and release cycle changes.
Cheers,
Saverio
--
SWITCH
Saverio Proto, Peta Solutions
Hello !
thanks for accepting the patch :)
It looks like the best approach is always to send an email and have a
short discussion together when we are not sure about a patch.
thank you
Cheers,
Saverio
__
OpenStack Development
Horizon.
>
>> But merging a patch that changes a log file in Nova back to Newton was
>> OKAY few weeks ago.
> Could you provide a link to that one ?
sure, here it is:
https://review.openstack.org/#/q/If525313c63c4553abe8bea6f2bfaf75431ed18ea
Thank you
Saverio
--
SWITCH
Saveri
see this :
>>> https://docs.openstack.org/project-team-guide/stable-branches.html for
>>> current policies.
>>>
>>> On Wed, Nov 15, 2017 at 3:33 AM, Saverio Proto
>>> <saverio.pr...@switch.ch> wrote:
>>>>> Which stable policy does
Hello ops,
we have this working for Nova ephemeral images already, but Cinder did
not implement this spec:
https://specs.openstack.org/openstack/cinder-specs/specs/liberty/optimze-rbd-copy-volume-to-image.html
Is anyone carrying an unmerged patch that implements this spec ?
I could not believe
> 3.34.0 is a queens series release, which makes it more likely that more
> other dependencies would need to be updated. Even backporting the
> changes to the Ocata branch and releasing it from there would require
> updating several other libraries.
>
That is what I was fearing. Consider that
ch = logging.StreamHandler()
> ch.setLevel(logging.DEBUG)
>
> formatter = formatters.JSONFormatter()
> ch.setFormatter(formatter)
>
> LOG = logging.getLogger()
> LOG.setLevel(logging.DEBUG)
> LOG.addHandler(ch)
>
> ctx = context.RequestContext(request_id
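The quoted snippet above configures oslo.log's JSONFormatter and is truncated in the archive. As a rough stdlib-only sketch of the same idea (the class and the output field names here are illustrative assumptions, not oslo.log's actual schema), carrying a request id into JSON log records looks like this:

```python
import json
import logging


class RequestIdJSONFormatter(logging.Formatter):
    """Hypothetical stand-in for oslo.log's JSONFormatter: emit each
    record as a JSON object and keep the request id if one was
    attached to the record."""

    def format(self, record):
        payload = {
            "message": record.getMessage(),
            "levelname": record.levelname,
            "name": record.name,
            # a request_id can be attached per-record, e.g. via the
            # `extra` kwarg of the logging calls
            "request_id": getattr(record, "request_id", None),
        }
        return json.dumps(payload)


def make_json_logger(name="demo"):
    """Wire the formatter into a logger, mirroring the quoted setup."""
    ch = logging.StreamHandler()
    ch.setLevel(logging.DEBUG)
    ch.setFormatter(RequestIdJSONFormatter())
    log = logging.getLogger(name)
    log.setLevel(logging.DEBUG)
    log.addHandler(ch)
    return log
```

With a setup like this, any record that carries a request_id attribute keeps it in the JSON output, which is exactly what Kibana queries need.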
slo.log to a more
>>> recent (and supported), although to do so you would have to get the
>>> package separately or build your own and that may complicate your
>>> deployment.
>>>
>>> More recent versions of the JSON formatter change the structure of
>
want to wait to debug this
> until you hit the latest supported version you're planning to deploy,
> in case the problem is already fixed there.
>
> Doug
>
eye on
> oslo.log bugs at this point, so be realistic in when it might get looked at.
>
> On 01/18/2018 03:06 AM, Saverio Proto wrote:
>> Hello Sean,
>> after the brief chat we had on IRC, do you think I should open a bug
>> about this issue ?
>>
>> thank yo
Hello Sean,
after the brief chat we had on IRC, do you think I should open a bug
about this issue ?
thank you
Saverio
On 13.01.18 07:06, Saverio Proto wrote:
>> I don't think this is a configuration problem.
>>
>> Which version of the oslo.log library do you have installe
> I don't think this is a configuration problem.
>
> Which version of the oslo.log library do you have installed?
Hello Doug,
I use the Ubuntu packages, at the moment I have this version:
python-oslo.log 3.16.0-0ubuntu1~cloud0
thank you
Saverio
e same problem. Anyway, in my
Kibana I never saw a req-UUID whatsoever, so this looks like a problem
with all the openstack services.
Is it a problem with my logging configuration ?
thank you
Saverio
--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
Hello,
probably someone here is using stuff like Kibana to look at Openstack
logs. We are trying to use JSON logging here, and we are surprised
that the request-id is not printed in the JSON output.
I wrote this email to the devs:
Saverio
--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch
http://www.switch.ch/stories
it makes sense and it is very valuable !
thanks
Saverio
On Dec 19, 2017 at 4:59 PM, "Matt Riedemann" wrote:
> During discussion in the TC channel today [1], we got talking about how
> there is a perception that you must upgrade all of the services together
> for anything
Hello,
we have a recurring problem with our users.
An advanced user deletes the default security groups to create his
own, defining only ingress rules.
Because there is no egress rule, cloud-init will fail to open a
connection to the metadata service.
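As a hedged sketch of diagnosing this (the rule-dict shape mirrors what the Neutron API returns, but the exact fields and the helper names here are assumptions), one way to spot security groups that would break metadata access:

```python
def has_egress(rules):
    """Return True if any rule in the list allows egress traffic.
    `rules` is a list of dicts shaped roughly like Neutron API
    security group rules, e.g. {"direction": "egress", ...}."""
    return any(r.get("direction") == "egress" for r in rules)


def groups_blocking_metadata(security_groups):
    """Given a {group_name: rules} mapping, list the groups that have
    no egress rule at all -- an instance using only such groups
    cannot open a connection to 169.254.169.254."""
    return [name for name, rules in security_groups.items()
            if not has_egress(rules)]
```

Running this over the rules of a user's custom groups quickly shows whether the cloud-init failure is the missing-egress problem described above.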
The user will open
> Which stable policy does that patch violate? It's clearly a bug
> because the wrong information is being logged. I suppose it goes
> against the string freeze rule? Except that we've stopped translating
> log messages so maybe we don't need to worry about that in this case,
> since it isn't an
Hello,
here is an example of a trivial patch that is important for people who
do operations and have to troubleshoot stuff.
With the old Stable Release thinking, this patch would not be accepted
on old stable branches.
Let's see if this gets accepted back to stable/newton
The 1 year release cycle makes a lot of sense to me too. +1
Saverio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
Hello Christopher,
check out this:
https://ask.openstack.org/en/question/66918/how-to-delete-volume-with-available-status-and-attached-to/
Saverio
2017-10-16 20:45 GMT+02:00 Christopher Hull :
> Running Liberty.
> I'd like to be able to create new volumes from old ones.
Hello Blair,
I found this link in my browser history:
https://bugs.launchpad.net/ubuntu/+source/kvm/+bug/1583819
Is it the same messages that you are seeing in Xenial ?
Saverio
2017-10-12 23:26 GMT+02:00 Blair Bethwaite :
> Hi all,
>
> Has anyone seen guest
; Ah, nice, wasn’t aware. Mateusz is one of the Horizon experts here at CERN
>> I was referring to :)
>>
>> On 25 Sep 2017, at 10:41, Massimo Sgaravatto
>> <massimo.sgarava...@gmail.com> wrote:
>>
>> Just found that there is already this one:
>>
>>
Hello,
I agree this feature of injecting a new keypair is something of great
use. We are always dealing with users that can't access their VMs
anymore.
But AFAIU here we are talking about injecting a new key at REBUILD, so
it does not fit the scenario of a staff member that leaves!
We hardly
ased testbed
>
> Thanks, Massimo
>
> 2017-09-25 9:50 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:
>>
>> Hello Massimo,
>>
>> what is your version of Openstack ??
>>
>> thank you
>>
>> Saverio
>>
>> 2017-09-25 9:13 GMT
Hello Massimo,
what is your version of Openstack ??
thank you
Saverio
2017-09-25 9:13 GMT+02:00 Massimo Sgaravatto :
> Hi
>
>
> In our OpenStack cloud we have two backends for Cinder (exposed using two
> volume types), and we set different quotas for these two
> The actual fix for this is trivial:
>
> https://review.openstack.org/#/c/505771/
Why is the change called:
Ignore original retried hosts when live migrating
?
Isn't it implementing the opposite? Don't ignore?
thanks
Saverio
___
> checking http://169.254.169.254/2009-04-04/instance-id
> failed 1/20: up 188.93. request failed
> failed 2/20: up 191.21. request failed
> failed 3/20: up 193.36. request failed
> failed 4/20: up 195.54. request failed
> failed 5/20: up 197.68. request failed
> failed 6/20: up 199.83. request
Hello,
using:
openstack console log show
you can check if it is really the dhclient failing to get an address.
It is usually also good to have a look at the nova-compute.log file on
the compute node where the instance is scheduled.
Saverio
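A minimal sketch of scanning that console log output for the cloud-init metadata retry lines (the regex matches the "failed N/20" format shown in the quoted log above; the function name is hypothetical and the format may differ across cloud-init versions):

```python
import re


def metadata_failures(console_log):
    """Extract the retry counters from cloud-init metadata lines like
    'failed 3/20: up 193.36. request failed' in the output of
    `openstack console log show`."""
    pattern = re.compile(r"failed (\d+)/(\d+):")
    return [int(m.group(1)) for m in pattern.finditer(console_log)]


# Sample taken from the quoted console log in this thread.
sample = """checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 188.93. request failed
failed 2/20: up 191.21. request failed"""
```

A non-empty result means the instance came up but could not reach the metadata service, which points at DHCP or the metadata path rather than at Nova.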
2017-08-31 19:55 GMT+02:00 Divneet Singh
> The ucast-mac-remote table is filled with information that don't match
> your comments. In my environment, I have created only one neutron
> network, one l2gw instance and one l2gw connection. However, the mac
> reflected in that table corresponds to the dhcp port of the Neutron
> network (I've
instance
> will be paused and nova will wait for info from neutron that port is active
> (You should also check credentials config in neutron server)
> If You will set also vif_plugging_is_fatal=True then nova will put instance
> in ERROR state if port will not be active after timeout time.
>
e VM until port will be set to ACTIVE in
> Neutron.
>
--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch
http://
? It looks like a race condition where
nova boots the instance before the neutron port is really ready.
thank you for your feedback.
Saverio
--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...
Hello Conrad,
I am jumping into the conversation late because I was away from the
mailing lists last week.
We run Openstack with both nova ephemeral root disks and cinder volume
boot disks. Both use the ceph rbd backend. It is the user who flags
"boot from volume" in Horizon when starting an instance.
Hello John,
a common problem is packets being dropped when they pass from the
hypervisor to the instance. There is a bottleneck there.
Check the 'virsh dumpxml' of one of the instances that is dropping
packets. Look at the interface section; it should look like:
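The interface XML that followed was cut off in the archive. As a hedged sketch, the sample domain XML below shows roughly what such a section looks like (a virtio-model tap interface; device names are made up), together with a small helper to pull out the relevant attributes:

```python
import xml.etree.ElementTree as ET

# Hypothetical, abbreviated `virsh dumpxml` output; the real XML on
# your hypervisor will contain many more elements.
SAMPLE = """<domain type='kvm'>
  <devices>
    <interface type='bridge'>
      <target dev='tap1234'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>"""


def interface_models(domain_xml):
    """Return (target dev, model type) pairs for each <interface>
    element in a libvirt domain XML document."""
    root = ET.fromstring(domain_xml)
    out = []
    for iface in root.iter("interface"):
        dev = iface.find("target").get("dev")
        model = iface.find("model").get("type")
        out.append((dev, model))
    return out
```

If the model is not virtio, that alone can explain packet drops between the hypervisor and the instance under load.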
Kekane, Abhishek <abhishek.kek...@nttdata.com>:
> Hi Saverio,
>
> Thank you for reply.
>
> Currently we are using Ocata release for Openstack.
>
> Please let me know if you get any update.
>
> Thank you,
>
> Abhishek
>
> -----Original Message-
> From:
Hello Abhishek,
I am sorry, I don't have an answer to your question. I would have to
try everything myself to give an answer, because I have never
experienced the use case you describe.
I would also suggest specifying what version of Openstack you are
working with, because the behaviour can change a lot
Allison, I clicked on "Add Deployment" and got a 404 page (with a cat).
The URL I was redirected to is:
https://www.openstack.org/%7B$Controller.Link%7D%7B$CurrentStep.Template.Title%7D/add-entity
Saverio
2017-06-26 23:44 GMT+02:00 Allison Price :
> Hi everyone,
>
> If
Hello Ignazio,
do you mean the instance was booted from volume?
When it is booted from volume, this 0-byte glance image is created
together with a cinder snapshot.
I wrote about it for my openstack users here:
https://help.switch.ch/engines/documentation/backup-with-snapshots/
I think that is by
Hello,
I will try again. Is there any l2gw plugin user who wants to comment on my email?
thank you
Saverio
On 29/05/17 16:54, Saverio Proto wrote:
> Hello,
>
> I have a question about the l2gw. I did a deployment, I described the
> steps here:
> https://review.openstack.org/#/c/453209/
I patched this back in liberty.
What version of Openstack are you using ?
Saverio
2017-06-08 19:36 GMT+02:00 Grant Morley :
> Ignore that now all,
>
> Managed to fix it by restarting the l3-agent. Looks like it must have been
> cached in memory.
>
> Thanks,
>
> On
out
of the game.
Is anyone running this in production who can shed some light?
thanks
Saverio
--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch
h
> We use provider networks to essentially take neutron-l3 out of the equation.
> Generally they are shared on all compute hosts, but usually there aren't huge
> numbers of computes.
Hello,
we have a datacenter that is completely L3, routing to the host.
To implement the provider networks we are using
Is the instance scheduled to a hypervisor? Check this with:
openstack server show <uuid>
(admin credentials)
If yes, check nova-compute.log on the hypervisor; maybe you will find
some good information for debugging.
Saverio
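A sketch of automating that check (the OS-EXT-SRV-ATTR:host field is where Nova reports the scheduled hypervisor in the admin view; calling the CLI this way assumes python-openstackclient is installed and admin credentials are sourced):

```python
import json
import subprocess


def scheduled_host(server_uuid):
    """Ask the openstack CLI where an instance was scheduled.
    Requires python-openstackclient and a sourced admin openrc."""
    out = subprocess.check_output(
        ["openstack", "server", "show", server_uuid, "-f", "json"])
    return extract_host(json.loads(out))


def extract_host(server):
    """Pull the hypervisor hostname out of the server dict; it is
    empty/missing when the instance was never scheduled."""
    return server.get("OS-EXT-SRV-ATTR:host") or None
```

A None result means scheduling never placed the instance, so the next place to look is nova-scheduler rather than a compute node.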
On May 3, 2017 at 2:16 AM, "Steve Powell"
wrote:
[ml2]
h 1.5.0-1.el7 @3rdParty7
>
> Edgar
>
> On 5/2/17, 12:39 PM, "Saverio Proto" <ziopr...@gmail.com> wrote:
>
> Hello Edgar,
>
> what is the version of the openstack client ?
>
> did you export this?
>
> e
it work ?
thank you
Saverio
On 13/04/17 09:52, Rabi Mishra wrote:
> On Thu, Apr 13, 2017 at 1:04 PM, Saverio Proto <saverio.pr...@switch.ch
> <mailto:saverio.pr...@switch.ch>> wrote:
>
> Hello,
>
> I am looking at a strange change in default behavior in
Hello ops,
if anyone is interested, I have problems with Heat and the Newton upgrade.
I sent an email about this here:
http://lists.openstack.org/pipermail/openstack-dev/2017-April/115412.html
If anyone has already faced this issue, any help would be appreciated!
thank you
Saverio
possible this is a
regression bug ?
Thank you
Saverio
--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch
Hello Ops,
I got the mail about the Poll for OpenStack R Release Naming.
I am shocked that there are proposed names like Raspberry or Root!
Think about troubleshooting and searching on Google:
Openstack Raspberry "string of log file"
The words Raspberry and Root are anti-google words that will
Saverio
--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch
http://www.switch.ch/stories
of thinking that the neutron-l2gw-agent had
to run on the switch where the actual bridging happens.
thank you
Saverio
On 30/03/17 18:40, Armando M. wrote:
>
>
> On 30 March 2017 at 08:47, Saverio Proto <saverio.pr...@switch.ch
> <mailto:saverio.pr...@switch.ch>&g
the information
to make the whole thing work?
Saverio
On 30/03/17 18:40, Armando M. wrote:
>
>
> On 30 March 2017 at 08:47, Saverio Proto <saverio.pr...@switch.ch
> <mailto:saverio.pr...@switch.ch>> wrote:
>
> Hello,
>
> I am trying to use the neutron l2
the vtep openvswitch is not able
to talk to the compute nodes ?
thank you
Saverio
--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch
http://www.switch.ch/stories
l2gw_alembic_version;
I would strongly suggest having a common prefix like l2gw_ for all the
tables that belong to the same neutron plugin.
How can I figure out whether I missed a table without reading all the code?
Thank you
Saverio
--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich
Hello all,
we use rclone a lot, and we are happy with it.
The real problem, I would say, is that a lot of these tools use the
latest AWS4 signature.
AFAIK the radosgw with Ceph Jewel and Openstack keystone integration
supports only the AWS2 signature because of this bug:
Hello,
floating IPs are the real issue.
When using horizon it is very easy for users to allocate floating IPs,
but it is also very difficult to release them.
In our production cloud we had to change the default quota from 50 to 2.
We have to be very conservative with the floating IP quota because our
Saverio
--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch
http://www.switch.ch/stories
Hello Mike,
what version of openstack?
Is the instance booting from an ephemeral disk or from a cinder volume?
When you boot from volume, that will be the root disk of your
instance. The user could have clicked "Delete Volume on Instance
Delete", which can be selected when creating a new
Hello !
thank you for the great event. Mariano and all the people from Milano
did an excellent job.
Thanks to all of you who helped moderate the sessions and contribute
to the etherpads.
It was really a great event and I am looking forward to the next ones.
Cheers,
Saverio
Hello Edgar,
there was a discussion about this here:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-03-07-20.16.log.txt
we had several sponsor requests to make presentations. We agreed that as
long as the sponsor presentations are strongly technical everybody
> Can someone please put together a list of urls to all the sessions, there are
> so many urls flying around.
Hi,
you can find the list here:
https://etherpad.openstack.org/p/MIL-ops-meetup
Cheers
Saverio
___
OpenStack-operators mailing list
On 10/03/17 17:49, Michael Johnson wrote:
> Yes, folks have recently deployed the dashboard with success. I think you
> had that discussion on the IRC channel, so I won't repeat it here.
>
> Please note, the neutron-lbaas-dashboard does not support LBaaS v1, you must
> have LBaaS v2 deployed for
Hello there,
I am an Italian speaker.
It does not make any sense to have the log messages translated. I
think everything has already been said.
Saverio
2017-03-11 9:20 GMT+01:00 George Shuklin :
> Whole idea with log translation is half-backed anyway. About the half of
>
:
https://ask.openstack.org/en/question/96790/lbaasv2-dashboard-issues/
Is there anyone that has a working setup ?
Should I open a bug here?
https://bugs.launchpad.net/octavia/+filebug
Thanks
Saverio
On 09/03/17 16:19, Saverio Proto wrote:
> Hello,
>
> I managed to do the database
the old tables from LBaaSV1 be dropped ?
Please give me feedback so I can fix the code and submit a review.
thank you
Saverio
On 09/03/17 13:38, Saverio Proto wrote:
>> I would recommend experimenting with the database-migration-from-v1-to-v2.py
>> script and working with your ve
> I would recommend experimenting with the database-migration-from-v1-to-v2.py
> script and working with your vendor (if you are using a vendor load
> balancing engine) on a migration path.
Hello,
there is no vendor here to help us :)
I made a backup of the current DB.
I identified this folder
Hello,
I prepared the skeleton for the session about Upgrades, Patches and Packaging:
https://etherpad.openstack.org/p/MIL-ops-upgrades-patches-packaging
Please contribute to the etherpad with stuff like:
* patches that you are carrying in production
* patches that you don't manage to get
o answer your original question, the LBaaS v1 API as removed in the newton
> release of neutron-lbaas
> (https://docs.openstack.org/releasenotes/neutron-lbaas/newton.html).
>
> Michael
>
>
> -Original Message-
> From: Saverio Proto [mailto:saverio.pr...@switch.
nstack.org/releasenotes/neutron/ocata.html
thank you
Saverio
--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch
http://www.switch.
Hello there,
before doing the upgrade, while still on Mitaka (tag 13.1.3), I am
running the following command:
nova-manage db online_data_migrations
I get this output:
Option "verbose" from group "DEFAULT" is deprecated for removal. Its
value may be silently ignored in the future.
Option
Hello Kendall Nelson,
thanks for your email about StoryBoard. I did not understand: as
Openstack operators, are we supposed to open bugs on StoryBoard
instead of Launchpad? When does this start?
thank you
Saverio
2017-03-03 20:49 GMT+01:00 Kendall Nelson :
> Hello!
>
>
> | status       | active               |
> | tags         |                      |
> | updated_at   | 2016-12-07T16:01:45Z |
> | virtual_size | None                 |
> | visibility   | public               |
> +--------------+----------------------+
>
Can you share with us the output of:
openstack image show
for that image?
Saverio
2017-03-02 13:54 GMT+01:00 Grant Morley <gr...@absolutedevops.io>:
> Unfortunately not, I still get the same error.
>
> Grant
>
> On 02/03/17 12:54, Saverio Proto wrote:
>
> If you
If you pass the uuid of the image does it work ?
Saverio
2017-03-02 13:49 GMT+01:00 Grant Morley <gr...@absolutedevops.io>:
> Hi Saverio,
>
> We are running Mitaka - sorry forgot to mention that.
>
> Grant
>
> On 02/03/17 12:45, Saverio Proto wrote:
>
> What ve
What version of Openstack are we talking about ?
Saverio
2017-03-02 12:11 GMT+01:00 Grant Morley :
> Hi All,
>
> Not sure if anyone can help, but as of today we are unable to launch any
> instances and I have traced back the error to glance. Whenever I try and
> launch
Hello Satya,
I would file a bug on launchpad for this issue.
114 VMs is not much. Can you identify how to trigger the issue in
order to reproduce it, or does it just happen randomly?
When you say rebooting the network node, do you mean the server
running the neutron-server process?
what version and
Well,
I have no idea from this log file. Try making nova-compute more
verbose if you don't find anything in the logs.
Saverio
2017-02-20 7:50 GMT+01:00 Anwar Durrani <durrani.an...@gmail.com>:
>
> On Thu, Feb 16, 2017 at 1:44 PM, Saverio Proto <ziopr...@gmail.com> wrote:
>
The three new compute nodes that you added are empty, so most likely
the new instances are scheduled to those three (3 attempts) and
something goes wrong.
With admin rights do:
openstack server show <uuid>
This should give you the info about the compute node where the
instance was scheduled. Check