Re: [Openstack-operators] port state is UP when admin-state-up is False (neutron/+bug/1672629)

2017-08-07 Thread Kevin Benton
What backend are you using? That bug is about the port showing ACTIVE when
admin_state_up=False but it's still being disconnected from the dataplane.
If you are seeing dataplane traffic with admin_state_up=False, then that is
a separate bug.

Also, keep in mind that marking the port down will still not be reflected
inside of the VM via ifconfig or ethtool. It will always show active in
there. So even after we fix bug 1672629, you are going to see the port is
connected inside of the VM.
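
For anyone auditing a deployment for this, the mismatch described above can be checked mechanically. A minimal sketch, assuming only the standard `admin_state_up` and `status` fields on each port; in practice the port dictionaries would come from the Neutron API (e.g. `openstack port list --long -f json`), and the sample data below is made up:

```python
def find_inconsistent_ports(ports):
    """Return ports that are administratively down but still report ACTIVE,
    which is the symptom described in bug 1672629."""
    return [p for p in ports
            if not p.get("admin_state_up") and p.get("status") == "ACTIVE"]

ports = [
    {"id": "p1", "admin_state_up": False, "status": "ACTIVE"},  # bug symptom
    {"id": "p2", "admin_state_up": False, "status": "DOWN"},    # expected
    {"id": "p3", "admin_state_up": True,  "status": "ACTIVE"},  # normal
]
print([p["id"] for p in find_inconsistent_ports(ports)])  # ['p1']
```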

On Mon, Aug 7, 2017 at 5:21 AM, Volodymyr Litovka  wrote:

> Hi colleagues,
>
> am I the only one who cares about this case? -
> https://bugs.launchpad.net/neutron/+bug/1672629
>
> The problem is that when I set the port's admin_state_up to False, it stays
> UP on the VM, thus continuing to route statically configured networks (e.g.
> received via DHCP host_routes), send DHCP requests, etc.
>
> As people discovered, in Kilo everything was ok - "I have checked the
> behavior of admin_state_up of Kilo version, when port admin-state-up is set
> to False, the port status will be DOWN." - but at least in Ocata it is
> broken.
>
> Anybody facing this problem too? Any ideas on how to work around it?
>
> Thank you.
>
> --
> Volodymyr Litovka
>   "Vision without Execution is Hallucination." -- Thomas Edison
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] User Committee meeting cancelled for 8/7/2017

2017-08-07 Thread Shamail Tahir
Hi everyone,

There are no new agenda topics for the User Committee and therefore we are 
cancelling the meeting for today. We hope the Ops meetup in Mexico City later 
this week is a fun and productive event for everyone!

Thanks,
User Committee


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova]

2017-08-07 Thread Volodymyr Litovka
If you don't recreate the Neutron ports (i.e. you just destroy the VM, 
create it anew and attach the old ports), then you can distinguish 
between the interfaces by their MAC addresses and persist the mapping in 
udev rules. You can do this on first boot (e.g. in cloud-init's "runcmd" 
section), using information from the /sys/class/net directory.
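
The approach above can be sketched as follows. The MAC-to-name mapping at the bottom is made up for illustration, and the rule lines follow the standard udev persistent-net style (e.g. written to /etc/udev/rules.d/70-persistent-net.rules):

```python
import os

def read_mac_addresses(sys_net="/sys/class/net"):
    """Map interface name -> MAC address by reading sysfs."""
    macs = {}
    for iface in os.listdir(sys_net):
        if iface == "lo":
            continue
        with open(os.path.join(sys_net, iface, "address")) as f:
            macs[iface] = f.read().strip()
    return macs

def render_udev_rules(mac_to_name):
    """Render udev rule lines pinning each MAC address to a stable name."""
    lines = []
    for mac, name in sorted(mac_to_name.items()):
        lines.append(
            'SUBSYSTEM=="net", ACTION=="add", '
            'ATTR{address}=="%s", NAME="%s"' % (mac, name))
    return "\n".join(lines)

# Example: pin known Neutron port MACs to stable names (MACs are made up).
rules = render_udev_rules({
    "fa:16:3e:aa:bb:01": "eth0",
    "fa:16:3e:aa:bb:02": "eth1",
})
print(rules)
```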



On 7/31/17 9:14 PM, Morgenstern, Chad wrote:


Hi,

I am trying to programmatically rebuild a nova instance that has a 
persistent volume for its root device.


I am specifically trying to rebuild an instance that has multiple 
network interfaces and a floating ip.


I have observed that the order in which the network interfaces are 
attached matters; the floating IP attaches based on eth0.


How do I figure out which of the interfaces currently attached is 
associated with eth0?






--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] port state is UP when admin-state-up is False (neutron/+bug/1672629)

2017-08-07 Thread Volodymyr Litovka

Hi colleagues,

am I the only one who cares about this case? - 
https://bugs.launchpad.net/neutron/+bug/1672629


The problem is that when I set the port's admin_state_up to False, it 
stays UP on the VM, thus continuing to route statically configured 
networks (e.g. received via DHCP host_routes), send DHCP requests, etc.


As people discovered, in Kilo everything was ok - "I have checked the 
behavior of admin_state_up of Kilo version, when port admin-state-up is 
set to False, the port status will be DOWN." - but at least in Ocata it 
is broken.


Anybody facing this problem too? Any ideas on how to work around it?

Thank you.

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Experience with Cinder volumes as root disks?

2017-08-07 Thread Saverio Proto
Hello Conrad,

I am jumping into this conversation late because I was away from the
mailing lists last week.

We run OpenStack with both Nova ephemeral root disks and Cinder volume
boot disks. Both use the Ceph RBD backend. It is the user who flags
"boot from volume" in Horizon when starting an instance.

Everything works in both cases, and there are pros and cons, as you can
see from the many answers you have received.

But if I have to give you a suggestion, I would choose only one way to
go and stick with it.

Having all this flexibility is great for us operators, who understand
OpenStack internals, but it is a nightmare for OpenStack users.

First of all, looking at Horizon it is very difficult to tell what kind
of root volume is being used.
It is also difficult to understand that a snapshot of the Nova instance
and a snapshot of the Cinder volume are two different things.
We have different snapshot procedures depending on the type of root
disk, and users always get confused.
When the root disk is a Cinder volume and you snapshot from the instance
page, you get a 0-byte Glance image connected to a Cinder snapshot.

A user who has an instance with a disk should not have to understand
whether the disk is managed by Nova or Cinder just to back up their data
with a snapshot.
From a cloud usability standpoint, I would say that mixing the two
solutions is not the best. This probably explains the Amazon and Azure
choices you described earlier.
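
For scripting around this distinction, one reasonably reliable signal in the Nova API is that a volume-booted server reports an empty `image` field. A minimal sketch with made-up sample data:

```python
# Sketch: tell an ephemeral (image-backed) root disk from a Cinder-backed
# one using the Nova API view of a server. A volume-booted server reports
# an empty 'image' field and lists its root volume among attached volumes.
# Field names follow the Nova API; the sample servers below are made up.

def root_disk_kind(server):
    """Return 'volume' for boot-from-volume servers, else 'ephemeral'."""
    if not server.get("image"):  # empty string/None/{} => no Glance image
        return "volume"
    return "ephemeral"

ephemeral_vm = {"name": "vm1", "image": {"id": "6a0b..."}}
bfv_vm = {"name": "vm2", "image": "",
          "os-extended-volumes:volumes_attached": [{"id": "9c1d..."}]}
print(root_disk_kind(ephemeral_vm))  # ephemeral
print(root_disk_kind(bfv_vm))        # volume
```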

Cheers,

Saverio



2017-08-01 16:50 GMT+02:00 Kimball, Conrad :
> In our process of standing up an OpenStack internal cloud we are facing the
> question of ephemeral storage vs. Cinder volumes for instance root disks.
>
>
>
> As I look at public clouds such as AWS and Azure, the norm is to use
> persistent volumes for the root disk.  AWS started out with images booting
> onto ephemeral disk, but soon after they released Elastic Block Storage and
> ever since the clear trend has been to EBS-backed instances, and now when I
> look at their quick-start list of 33 AMIs, all of them are EBS-backed.  And
> I’m not even sure one can have anything except persistent root disks in
> Azure VMs.
>
>
>
> Based on this and a number of other factors, I think we want our users'
> normal/default behavior to be booting onto Cinder-backed volumes instead
> of onto ephemeral storage.  But then I look at OpenStack, whose design
> point appears to be booting images onto ephemeral storage; while it is
> possible to boot an image onto a new volume, this is clumsy (I haven't
> found a way to make this the default behavior) and we are experiencing
> performance problems (that admittedly we have not yet run to ground).
>
>
>
> So …
>
> · Are other operators routinely booting onto Cinder volumes instead
> of ephemeral storage?
>
> · What has been your experience with this; any advice?
>
>
>
> Conrad Kimball
>
> Associate Technical Fellow
>
> Chief Architect, Enterprise Cloud Services
>
> Application Infrastructure Services / Global IT Infrastructure / Information
> Technology & Data Analytics
>
> conrad.kimb...@boeing.com
>
> P.O. Box 3707, Mail Code 7M-TE
>
> Seattle, WA  98124-2207
>
> Bellevue 33-11 bldg, office 3A6-3.9
>
> Mobile:  425-591-7802
>
>
>
>
>
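
For reference on the "clumsy" part above: booting an image onto a new volume goes through a `block_device_mapping_v2` entry in the server-create request to Nova. A minimal sketch of the payload, with placeholder image and flavor IDs:

```python
# Sketch: the request body a client would send to Nova to boot a server
# from a new Cinder volume created out of a Glance image
# (block_device_mapping_v2). IDs are placeholders; the field names
# follow the Nova API.

def boot_from_volume_request(name, image_id, flavor_id, size_gb):
    return {
        "server": {
            "name": name,
            "flavorRef": flavor_id,
            "block_device_mapping_v2": [{
                "boot_index": 0,
                "uuid": image_id,            # source Glance image
                "source_type": "image",
                "destination_type": "volume",
                "volume_size": size_gb,
                "delete_on_termination": False,  # keep the root volume
            }],
        }
    }

req = boot_from_volume_request("demo-vm", "IMAGE_UUID", "FLAVOR_ID", 20)
print(req["server"]["block_device_mapping_v2"][0]["destination_type"])
```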

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators