Re: [Openstack-operators] [OSOps] [OpenStack-DefCore] Ansible work load test for interop patch set

2016-08-16 Thread Joseph Bajin
Sorry about that. I've been a little busy as of late, but I was finally able
to get around to taking a look.

I have a question about these.   What exactly is the Interop Challenge?
The OSOps repos are usually for code that can help Operators maintain and
run their cloud.   These don't necessarily look like what we normally see
submitted.

Can you expand on what the Interop Challenge is and whether it is something that
Operators would use?

Thanks

Joe

On Tue, Aug 16, 2016 at 3:02 PM, Shamail  wrote:

>
>
> > On Aug 16, 2016, at 1:44 PM, Christopher Aedo  wrote:
> >
> > Tong Li, I think the best place to ask for a look would be the
> > Operators mailing list
> > (http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> ).
> > I've cc'd that list here, though it looks like you've already got a +2
> > on it at least.
> +1
>
> I had contacted JJ earlier and he told me that the best person to contact
> would be Joseph Bajin (RaginBajin in IRC).  I've also added an OSOps tag to
> this message.
> >
> > -Christopher
> >
> >> On Tue, Aug 16, 2016 at 7:59 AM, Tong Li  wrote:
> >> The patch set has been submitted to github for a while; can someone
> >> please review the patch set here?
> >>
> >> https://review.openstack.org/#/c/354194/
> >>
> >> Thanks very much!
> >>
> >> Tong Li
> >> IBM Open Technology
> >> Building 501/B205
> >> liton...@us.ibm.com
> >>
> >>
> >> ___
> >> Defcore-committee mailing list
> >> defcore-commit...@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee
> >
> > ___
> > Defcore-committee mailing list
> > defcore-commit...@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] neutron flat network on existing bridge fails

2016-08-16 Thread James Denton
I don’t have the exact steps offhand, but you should be able to create a veth 
pair manually, attach one end to your existing bridge, and specify the other 
end in the bridge_mappings section. Make sure you bring both ends up with ‘ip 
link set <interface> up’ first. The veth pair will end up linking the bridges 
together.
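
(A minimal sketch of that approach, with made-up veth names - adjust the
interface and bridge names to your environment, and double-check which mapping
option your agent expects before relying on this:)

# create the veth pair and bring both ends up
ip link add veth-ext type veth peer name veth-agent
ip link set veth-ext up
ip link set veth-agent up
# attach one end to the pre-existing bridge
brctl addif br224 veth-ext
# then reference the other end (veth-agent) in the agent's mapping setting,
# as described above, and restart the linuxbridge agent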

Nova/Neutron should still handle putting the tap interface of the instance in 
the bridge, even if bridge_mappings is wrong or not working. The port’s device 
owner is compute:nova, which likely explains the message you saw in the log.

Hope that helps!

James


From: Paul Dekkers 
Date: Tuesday, August 16, 2016 at 12:18 PM
To: George Paraskevas , openstack-operators 

Subject: Re: [Openstack-operators] neutron flat network on existing bridge fails

Hi,

Thanks for your suggestion - I don't think that's it, because that gives me:

Unable to add br224 to brqf709c220-fd! Exception: Exit code: 1; Stdin: ; 
Stdout: ; Stderr: device br224 is a bridge device itself; can't enslave a 
bridge device to a bridge device.

and

Skip adding device tapf99013c6-6b to brqf709c220-fd. It is owned by 
compute:nova and thus added elsewhere. _add_tap_interface 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:473

while this tapf99013c6-6b now actually is enslaved to brqf709c220-fd, even with 
the wrong physical_interface_mappings setting. So I guess it's ignored because 
of the error, or some other code built that bridge (nova? as it's owned by 
compute:nova ? I had no idea a bridge/interface could have an owner...)
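
(For debugging, bridge membership can be confirmed with brctl, e.g.:)

brctl show
# or, for just the bridge from the log above:
brctl show brqf709c220-fd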

Regards,
Paul


On 16-08-16 17:39, George Paraskevas wrote:

Hello,
With Linux bridge, I believe you should use
physical_interface_mappings = provider:br224
Best regards
George

On Tue, 16 Aug 2016, 16:48 Paul Dekkers wrote:
Hi,

I'm using Ubuntu 16.04.1 with its Mitaka release, and neutron flat
networking with linuxbridge+ml2:

I'd like to attach my flat neutron network to an existing linuxbridge on
my system. This fails with an error like:

2016-08-16 15:07:00.711 7982 DEBUG
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent
[req-720266dc-fe8d-47c4-a7df-19fee7a8d679 - - - - -] Skip adding device
tapf99013c6-6b to br224. It is owned by compute:nova and thus added
elsewhere. _add_tap_interface
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:473

with the result that the tap interface is added to a newly created
bridge instead of the existing (br224) bridge.

I've set
bridge_mappings = provider:br224
in /etc/neutron/plugins/ml2/linuxbridge_agent.ini

(because if I use
physical_interface_mappings = provider:vlan224
the vlan224 interface is actually detached from the original bridge)
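
(For reference, a minimal sketch of how the two alternatives would sit in
linuxbridge_agent.ini; the [linux_bridge] section name is an assumption on my
part, so check it against the config file shipped with your Mitaka packages:)

[linux_bridge]
# option A: map the 'provider' physical network straight to the existing bridge
bridge_mappings = provider:br224
# option B: map to the VLAN interface and let the agent create its own bridge
#physical_interface_mappings = provider:vlan224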

I've created the bridge via /etc/network/interfaces:

auto br224
iface br224 inet manual
bridge_ports vlan224

auto vlan224
iface vlan224 inet manual
vlan_raw_device eno1

Reason for doing this is that I'd like to attach an lxc container to
this bridge (and only when neutron needs it the "brqf709c220-fd" (with
for me an unpredictable name) is setup, so I can't use that), and/or
like to have the machine itself use an interface on this network. (This
is also my reason for using flat networking, and not vlan.)

I've created the neutron networks with

neutron net-create default --shared --provider:physical_network provider
--provider:network_type flat

(I would repeat this with a different physical_network name if I need
more VLANs, instead of using the --provider:segmentation_id.)

Instance networking works when I let nova/neutron create the bridge
interfaces.

Any idea why neutron refuses to use the bridge_mappings and why it
creates a new interface?

Regards,
Paul

P.S. To me it feels like this is what people would need when setting up
a small single-network setup (both for instances and OpenStack), but
most examples use multiple networks anyway.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [User-committee] [scientific-wg] [UX] Quota Management Interviews

2016-08-16 Thread Danielle Mundle
[Apologies for cross-posting]


Hi everyone,


A new research initiative is starting on behalf of the OpenStack UX Project
Team and we could use some volunteer feedback from operators. We're
interested in learning more detail about *quota management* – specifically
the *pain points around managing quotas*, and *quotas at scale* – as it is
consistently raised as a challenge.



Interview sessions will be held on Google Hangouts throughout this week (of
8/15) and next week (of 8/22) and will last no longer than one hour (time
can be flexible). The session would involve you, a moderator (me), and
possibly a few notetakers/observers from the community who otherwise would
not take part in the conversation. The findings will be used to create user
stories for the community and be reported anonymously in aggregate.



If you are interested, please indicate any available times on this doodle
poll (http://doodle.com/poll/7tid473za2hpi6e7) and I will follow up by
sending a meeting invitation with the Google Hangout link required to
access the scheduled session.



Thanks in advance for all your help in supporting UX research in the
OpenStack community!


--Danielle
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [OpenStack-DefCore] [openstack-operators][interop-challenge] Ansible work load test for interop patch set

2016-08-16 Thread Tong Li

The patch set has been submitted to github for a while; can someone please
review the patch set at the link below?

https://review.openstack.org/#/c/354194/

Thanks very much!

Tong Li
IBM Open Technology
Building 501/B205
liton...@us.ibm.com
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [OSOps] [OpenStack-DefCore] Ansible work load test for interop patch set

2016-08-16 Thread Shamail


> On Aug 16, 2016, at 1:44 PM, Christopher Aedo  wrote:
> 
> Tong Li, I think the best place to ask for a look would be the
> Operators mailing list
> (http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators).
> I've cc'd that list here, though it looks like you've already got a +2
> on it at least.
+1

I had contacted JJ earlier and he told me that the best person to contact would 
be Joseph Bajin (RaginBajin in IRC).  I've also added an OSOps tag to this 
message.
> 
> -Christopher
> 
>> On Tue, Aug 16, 2016 at 7:59 AM, Tong Li  wrote:
>> The patch set has been submitted to github for a while; can someone please
>> review the patch set here?
>> 
>> https://review.openstack.org/#/c/354194/
>> 
>> Thanks very much!
>> 
>> Tong Li
>> IBM Open Technology
>> Building 501/B205
>> liton...@us.ibm.com
>> 
>> 
>> ___
>> Defcore-committee mailing list
>> defcore-commit...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee
> 
> ___
> Defcore-committee mailing list
> defcore-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [OpenStack-DefCore] Ansible work load test for interop patch set

2016-08-16 Thread Christopher Aedo
Tong Li, I think the best place to ask for a look would be the
Operators mailing list
(http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators).
I've cc'd that list here, though it looks like you've already got a +2
on it at least.

-Christopher

On Tue, Aug 16, 2016 at 7:59 AM, Tong Li  wrote:
> The patch set has been submitted to github for a while; can someone please
> review the patch set here?
>
> https://review.openstack.org/#/c/354194/
>
> Thanks very much!
>
> Tong Li
> IBM Open Technology
> Building 501/B205
> liton...@us.ibm.com
>
>
> ___
> Defcore-committee mailing list
> defcore-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific] Seeking users of Infiniband

2016-08-16 Thread Tim Randles

Hi Stig,

I have InfiniBand in my test/development cluster although I haven't yet 
used it.  Figuring out where it fits in relation to my overall project 
is one of my goals.  I would be very interested in hearing more about 
everyone's use cases and goals.


This summer I mentored a team of students who did a comparative study 
of IB performance across bare metal, VMs, and containers.  OpenStack was not 
a component of the study but the results still may be of interest. 
Please let me know and I can forward on their poster and presentation, 
as well as discuss the methodology and results.


Tim

On 08/16/2016 10:28 AM, Stig Telfer wrote:

Hi All -

I’m looking for some new data points on people’s experience of using Infiniband 
within a virtualised OpenStack configuration.

Is there anyone on the list who is doing this, and how well does it work?

Many thanks,
Stig


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] neutron flat network on existing bridge fails

2016-08-16 Thread Paul Dekkers
Hi,

Thanks for your suggestion - I don't think that's it, because that gives me:

Unable to add br224 to brqf709c220-fd! Exception: Exit code: 1; Stdin: ;
Stdout: ; Stderr: device br224 is a bridge device itself; can't enslave
a bridge device to a bridge device.

and

Skip adding device tapf99013c6-6b to brqf709c220-fd. It is owned by
compute:nova and thus added elsewhere. _add_tap_interface
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:473

while this tapf99013c6-6b now actually is enslaved to brqf709c220-fd,
even with the wrong physical_interface_mappings setting. So I guess it's
ignored because of the error, or some other code built that bridge
(nova? as it's owned by compute:nova ? I had no idea a bridge/interface
could have an owner...)

Regards,
Paul


On 16-08-16 17:39, George Paraskevas wrote:
>
> Hello,
> With Linux bridge, I believe you should use
> physical_interface_mappings = provider:br224
> Best regards
> George
>
>
> On Tue, 16 Aug 2016, 16:48 Paul Dekkers wrote:
>
> Hi,
>
> I'm using Ubuntu 16.04.1 with its Mitaka release, and neutron flat
> networking with linuxbridge+ml2:
>
> I'd like to attach my flat neutron network to an existing
> linuxbridge on
> my system. This fails with an error like:
>
> 2016-08-16 15:07:00.711 7982 DEBUG
> neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent
> [req-720266dc-fe8d-47c4-a7df-19fee7a8d679 - - - - -] Skip adding
> device
> tapf99013c6-6b to br224. It is owned by compute:nova and thus added
> elsewhere. _add_tap_interface
> 
> /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:473
>
> with the result that the tap interface is added to a newly created
> bridge instead of the existing (br224) bridge.
>
> I've set
> bridge_mappings = provider:br224
> in /etc/neutron/plugins/ml2/linuxbridge_agent.ini
>
> (because if I use
> physical_interface_mappings = provider:vlan224
> the vlan224 interface is actually detached from the original bridge)
>
> I've created the bridge via /etc/network/interfaces:
>
> auto br224
> iface br224 inet manual
> bridge_ports vlan224
>
> auto vlan224
> iface vlan224 inet manual
> vlan_raw_device eno1
>
> Reason for doing this is that I'd like to attach an lxc container to
> this bridge (and only when neutron needs it the "brqf709c220-fd" (with
> for me an unpredictable name) is setup, so I can't use that), and/or
> like to have the machine itself use an interface on this network.
> (This
> is also my reason for using flat networking, and not vlan.)
>
> I've created the neutron networks with
>
> neutron net-create default --shared --provider:physical_network
> provider
> --provider:network_type flat
>
> (I would repeat this with a different physical_network name if I need
> more VLANs, instead of using the --provider:segmentation_id.)
>
> Instance networking works when I let nova/neutron create the bridge
> interfaces.
>
> Any idea why neutron refuses to use the bridge_mappings and why it
> creates a new interface?
>
> Regards,
> Paul
>
> P.S. To me it feels like this is what people would need when
> setting up
> a small single-network setup (both for instances and OpenStack), but
> most examples use multiple networks anyway.
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific] Seeking users of Infiniband

2016-08-16 Thread Stig Telfer
Hi All - 

I’m looking for some new data points on people’s experience of using Infiniband 
within a virtualised OpenStack configuration.

Is there anyone on the list who is doing this, and how well does it work?

Many thanks,
Stig


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] NYC Ops MidCycle meetup

2016-08-16 Thread Edgar Magana
Hi,

I will be there and happy to moderate any session still in need of one.

Edgar

From: Chris Morgan 
Date: Tuesday, August 16, 2016 at 5:12 PM
To: OpenStack Operators 
Subject: [Openstack-operators] NYC Ops MidCycle meetup

Hello Everyone,

  The ops meetup team meeting today on IRC made some progress with finalising 
the itinerary and working out moderators for next week's meetup. We had to make 
do without Tom Fifield but some progress happened (see below for links).

You can see the itinerary progress on the google docs

https://docs.google.com/spreadsheets/d/1EUSYMs3GfglnD8yfFaAXWhLe0F5y9hCUKqCYe0Vp1oA/edit?pref=2=1#gid=1046944156

Apologies about the formatting, this will be cleaned up for the final itinerary.

I'd like to request a volunteer or volunteers to moderate the packaging 
discussions. We have two: one specifically around Ubuntu packages, for which 
some Canonical people will be present (we requested this since there was a 
specific request for Canonical to talk about how operators should deal with 
hotfixing before fixes are available from Canonical). The second session is for 
other distros. Please let me know if you would like to moderate either session!

Thanks!

Chris

Minutes: 
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2016/ops_meetup_team.2016-08-16-14.00.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2016/ops_meetup_team.2016-08-16-14.00.txt
Log: 
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2016/ops_meetup_team.2016-08-16-14.00.log.html

--
Chris Morgan
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] NYC Ops MidCycle meetup

2016-08-16 Thread Chris Morgan
Hello Everyone,

  The ops meetup team meeting today on IRC made some progress with
finalising the itinerary and working out moderators for next week's meetup.
We had to make do without Tom Fifield but some progress happened (see
below for links).

You can see the itinerary progress on the google docs

https://docs.google.com/spreadsheets/d/1EUSYMs3GfglnD8yfFaAXWhLe0F5y9hCUKqCYe0Vp1oA/edit?pref=2=1#gid=1046944156

Apologies about the formatting, this will be cleaned up for the final
itinerary.

I'd like to request a volunteer or volunteers to moderate the packaging
discussions. We have two: one specifically around Ubuntu packages, for
which some Canonical people will be present (we requested this since there
was a specific request for Canonical to talk about how operators should
deal with hotfixing before fixes are available from Canonical). The second
session is for other distros. Please let me know if you would like to
moderate either session!

Thanks!

Chris

Minutes:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2016/ops_meetup_team.2016-08-16-14.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2016/ops_meetup_team.2016-08-16-14.00.txt
Log:
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2016/ops_meetup_team.2016-08-16-14.00.log.html

-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific] CFP: HPCloudSys - The First Workshop on the Science of High-Performance Cloud Systems

2016-08-16 Thread Dmitry Duplyakin
On behalf of the organizers, I would like to encourage everyone to review
the following CFP and submit papers with the latest work on the relevant
topics. We will also greatly appreciate your help if you share information
about this brand-new venue among your colleagues who might be interested.
Please do not hesitate to contact me or other organizers if you have any
questions.

*The First Workshop on the Science of High-Performance Cloud Systems
(HPCloudSys) *
Co-located with CloudCom 2016. Luxembourg, December 12, 2016

Workshop page with the most up-to-date information:
http://2016.cloudcom.org/conf/workshops/hpcloudsys.html

*Scope*
The First Workshop on the Science of High-Performance Cloud Systems
(HPCloudSys) seeks to bring together researchers and practitioners who work
at the intersection of HPC and Cloud Computing to foster conversations
about the systems aspects of HPC computations in the cloud. While the cloud
promises cheap, efficient, and elastic compute cycles, there are a number
of challenges at a systems level to running efficient HPC computations in
the cloud. There has been a significant interest in recent years in science
and engineering research computing in cloud environments. Work on these
topics (some are listed below) is necessary in order to ensure effective
and efficient computation in the cloud. There also arise many
methodological challenges, in terms of the ways that environments are
built, computations are orchestrated, and data are collected, archived, and
shared. Our motivation in running this workshop is to foster more
conversation about clouds for research from the perspective of cloud
computing and the core recognized strengths of cloud systems, rather than
from a perspective of “HPC taken to the cloud.”

*Topics*
Topics of interest include (but are not limited to):
- Systems-level aspects of computing in the cloud, including:
  * Low-latency and low-jitter virtualization in the cloud
  * Storage and network support for HPC applications in the cloud,
including computation-aware resource provisioning
  * Technologies for capturing execution environments, such as containers
- Tools and methodologies for HPC in the cloud, including:
  * Orchestration of computation in the cloud
  * Dealing with variability of computations in the cloud
  * Collecting, processing, archiving, and cataloging of results
- Reproducibility and repeatability of science in the cloud

*Discussions at the Workshop*
The workshop will facilitate exchange of experience in the following areas:

- “HPC for the 99 Percent”, “Production Cloud”, “Deeply Programmable Cloud”
– What do these slogans actually mean at this point in time? Our overall
goal is to come up with a common understanding of the range of
state-of-the-art technologies that allow us to build systems that best satisfy
researchers’ needs.
- Scale versus Functionality – Where are the discussed systems in this
spectrum?
- Integration – How are these systems integrated with other campus and
national cyberinfrastructures?
- Return on Investment – Why should the NSF invest in clouds?
- Lessons Learned – What major design and technology transformations have
occurred since these systems have been proposed and deployed?

The workshop will feature two keynote speakers (TBD): one from academia,
and one from industry. There will also be brief presentations from cloud
facilities that are available to researchers, to help acquaint participants
with facilities and technologies that may benefit their work. The remainder
of the time will be used for presentation of accepted papers, with ample
time reserved for discussion. Attendance will be open to all (not just
authors).

*Important Dates*
Paper submission: September 2, 2016
Notification of acceptance: September 15, 2016
Camera-ready version:   September 21, 2016
Workshop date: December 12, 2016
CloudCom conference dates: December 12-15, 2016

*Submissions*
Submissions should be in the IEEE CS format (no longer than 5 pages).
Submissions must be original and should not have been published previously
or be under consideration for publication while being evaluated for this
workshop.

Accepted workshop papers will be published in a supplement of the
Proceedings of IEEE CloudCom 2016, and submitted to IEEE Xplore
(conditioned by the presentation at the workshop by the author).

Authors are invited to submit papers through the conference submission
system: https://easychair.org/conferences/?conf=hpcloudsys16

*Organizers*
Dmitry Duplyakin, University of Colorado
Robert Ricci, University of Utah / CloudLab
Craig Stewart, Indiana University / Jetstream

*Program Committee*
Jed Brown, University of Colorado, Boulder
Matthew Woitaszek, Walmart Global eCommerce
Daniel McDonald, Institute for Systems Biology
Caleb Phillips, National Renewable Energy Laboratory
Paul Ruth, Renaissance Computing Institute
Amy Apon, Clemson University
Hari Sundar, University of Utah
Paul Müller, Technische Universität 

Re: [Openstack-operators] neutron flat network on existing bridge fails

2016-08-16 Thread George Paraskevas
Hello,
With Linux bridge, I believe you should use
physical_interface_mappings = provider:br224
Best regards
George

On Tue, 16 Aug 2016, 16:48 Paul Dekkers,  wrote:

> Hi,
>
> I'm using Ubuntu 16.04.1 with its Mitaka release, and neutron flat
> networking with linuxbridge+ml2:
>
> I'd like to attach my flat neutron network to an existing linuxbridge on
> my system. This fails with an error like:
>
> 2016-08-16 15:07:00.711 7982 DEBUG
> neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent
> [req-720266dc-fe8d-47c4-a7df-19fee7a8d679 - - - - -] Skip adding device
> tapf99013c6-6b to br224. It is owned by compute:nova and thus added
> elsewhere. _add_tap_interface
>
> /usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:473
>
> with the result that the tap interface is added to a newly created
> bridge instead of the existing (br224) bridge.
>
> I've set
> bridge_mappings = provider:br224
> in /etc/neutron/plugins/ml2/linuxbridge_agent.ini
>
> (because if I use
> physical_interface_mappings = provider:vlan224
> the vlan224 interface is actually detached from the original bridge)
>
> I've created the bridge via /etc/network/interfaces:
>
> auto br224
> iface br224 inet manual
> bridge_ports vlan224
>
> auto vlan224
> iface vlan224 inet manual
> vlan_raw_device eno1
>
> Reason for doing this is that I'd like to attach an lxc container to
> this bridge (and only when neutron needs it the "brqf709c220-fd" (with
> for me an unpredictable name) is setup, so I can't use that), and/or
> like to have the machine itself use an interface on this network. (This
> is also my reason for using flat networking, and not vlan.)
>
> I've created the neutron networks with
>
> neutron net-create default --shared --provider:physical_network provider
> --provider:network_type flat
>
> (I would repeat this with a different physical_network name if I need
> more VLANs, instead of using the --provider:segmentation_id.)
>
> Instance networking works when I let nova/neutron create the bridge
> interfaces.
>
> Any idea why neutron refuses to use the bridge_mappings and why it
> creates a new interface?
>
> Regards,
> Paul
>
> P.S. To me it feels like this is what people would need when setting up
> a small single-network setup (both for instances and OpenStack), but
> most examples use multiple networks anyway.
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ops Meetups Team - Next Meeting Coordinates and News

2016-08-16 Thread Chris Morgan
This meeting starts now on #openstack-operators

On Thu, Aug 11, 2016 at 3:03 PM, Tom Fifield  wrote:

> Ops Meetups Team,
>
>
> We're on the countdown to NYC! Draft agenda will be on this list soon, and
> the search for moderators will commence. The meeting overview can be found
> at [3]).
>
>
> ==Next Meeting==
> Unless there is further discussion, the next meeting is at:
>
> ==> Tuesday, 16 of Aug at 1400 UTC[1] in #openstack-operators
>
> [2] will be kept up to date with information about the meeting time and
> agenda
>
>
> ===>  Due to travel, I won't be able to run the meeting. Can anyone
> volunteer to run it?
>
>
>
>
>
> Regards,
>
>
> Tom
>
>
> [1] http://www.timeanddate.com/worldclock/fixedtime.html?msg=Ops
> +Meetups+Team=20160816T22=241
>
> [2] https://wiki.openstack.org/wiki/Ops_Meetups_Team#Meeting_Information
>
> [3] http://eavesdrop.openstack.org/meetings/ops_meetups_team/201
> 6/ops_meetups_team.2016-08-09-14.00.html
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] neutron flat network on existing bridge fails

2016-08-16 Thread Paul Dekkers
Hi,

I'm using Ubuntu 16.04.1 with its Mitaka release, and neutron flat
networking with linuxbridge+ml2:

I'd like to attach my flat neutron network to an existing linuxbridge on
my system. This fails with an error like:

2016-08-16 15:07:00.711 7982 DEBUG
neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent
[req-720266dc-fe8d-47c4-a7df-19fee7a8d679 - - - - -] Skip adding device
tapf99013c6-6b to br224. It is owned by compute:nova and thus added
elsewhere. _add_tap_interface
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:473

with the result that the tap interface is added to a newly created
bridge instead of the existing (br224) bridge.

I've set
bridge_mappings = provider:br224
in /etc/neutron/plugins/ml2/linuxbridge_agent.ini

(because if I use
physical_interface_mappings = provider:vlan224
the vlan224 interface is actually detached from the original bridge)

I've created the bridge via /etc/network/interfaces:

auto br224
iface br224 inet manual
bridge_ports vlan224

auto vlan224
iface vlan224 inet manual
vlan_raw_device eno1

Reason for doing this is that I'd like to attach an lxc container to
this bridge (and only when neutron needs it the "brqf709c220-fd" (with
for me an unpredictable name) is setup, so I can't use that), and/or
like to have the machine itself use an interface on this network. (This
is also my reason for using flat networking, and not vlan.)

I've created the neutron networks with

neutron net-create default --shared --provider:physical_network provider
--provider:network_type flat

(I would repeat this with a different physical_network name if I need
more VLANs, instead of using the --provider:segmentation_id.)

Instance networking works when I let nova/neutron create the bridge
interfaces.

Any idea why neutron refuses to use the bridge_mappings and why it
creates a new interface?

Regards,
Paul

P.S. To me it feels like this is what people would need when setting up
a small single-network setup (both for instances and OpenStack), but
most examples use multiple networks anyway.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Liberty RabbitMQ and ZeroMQ

2016-08-16 Thread William Josefsson
Yes, thanks Mohammed. I got the same advice from Clint earlier, so
I'll probably stick to RabbitMQ for now. I have my RabbitMQ running on
my 3 controllers, set up per
http://docs.openstack.org/ha-guide/shared-messaging.html#configure-openstack-services-to-use-rabbit-ha-queues
Please let me know if there are any additional parameters I should
consider that would make the base deployment more robust to failure,
network disruptions, a single rabbitmq-server node going down, etc. Thanks,
Will
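
(As a non-authoritative starting point, here is a minimal sketch of the
oslo.messaging options commonly tuned for RabbitMQ HA around Liberty; the
controller host names are placeholders and the values are only examples to
validate in your own environment:)

[oslo_messaging_rabbit]
# list all three brokers so clients can fail over between them
rabbit_hosts = ctrl1:5672,ctrl2:5672,ctrl3:5672
# use mirrored queues (pair this with an ha-mode policy on the RabbitMQ side)
rabbit_ha_queues = true
# reconnect behaviour after a broker or network failure
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
# detect dead connections faster than TCP timeouts would
heartbeat_timeout_threshold = 60
heartbeat_rate = 2

The matching queue-mirroring policy from the HA guide looks like:

rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'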

On Mon, Aug 15, 2016 at 11:13 PM, Mohammed Naser  wrote:
> The history of the ZeroMQ driver has been quite weird, going through
> stages where it completely did not work at all, to receiving a few
> patches to make it work and it's risked being removed from the
> oslo.messaging package a few times.
>
> For a cluster at a smaller scale, I'd suggest sticking to RabbitMQ in
> order to avoid dealing with problems which you'll be stuck with.  I
> also don't think the CI at OpenStack does any testing for ZeroMQ.
>
> https://wiki.openstack.org/wiki/ZeroMQ
>
> On Mon, Aug 15, 2016 at 10:14 AM, William Josefsson
>  wrote:
>> Thanks Clint! OK, I will stick to RabbitMQ for now. Do you know any good
>> up-to-date guide for replacing RabbitMQ with ZeroMQ, or is the general
>> documentation at
>> http://docs.openstack.org/developer/oslo.messaging/zmq_driver.html
>> the best reference? Have you tried this?
>>
>> I'm also not sure if the ZeroMQ support is here to stay, or whether it
>> will be removed going forward. thx will
>>
>> On Mon, Aug 15, 2016 at 3:24 AM, Clint Byrum  wrote:
>>> Excerpts from William Josefsson's message of 2016-08-14 15:39:06 +0800:
 Hi everyone,

 I see advice in replacing RabbitMQ with ZeroMQ. I've been running 2
 clusters Liberty/CentOS7 with RabbitMQ now for while. The larger
 cluster consists of 3x Controllers and 4x Compute nodes. RabbitMQ is
 running is HA mode as per:
 http://docs.openstack.org/ha-guide/shared-messaging.html#configure-rabbitmq-for-ha-queues.

>>>
>>> For 7 real computers, RabbitMQ is actually a better choice. You get
>>> centralized management and the most battle-tested driver of all.
>>>
>>> ZeroMQ is meant to remove the bottleneck and SPOF of a RabbitMQ cluster
>>> from much larger systems by making the data path for messaging directly
>>> peer-to-peer, but it still needs a central matchmaker database. So at
>>> that scale, you're not really winning much by using it.
>>>
>>> I can't really speak to the answers for your problems that you've seen,
>>> but in general I'd expect Liberty and Mitaka on RabbitMQ to handle your
>>> cluster size without breaking a sweat. Have you reported the errors as
>>> bugs in oslo.messaging? That might be where to start:
>>>
>>> https://bugs.launchpad.net/oslo.messaging/+filebug
>>>
>>> ___
>>> OpenStack-operators mailing list
>>> OpenStack-operators@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
> --
> Mohammed Naser — vexxhost
> -
> D. 514-316-8872
> D. 800-910-1726 ext. 200
> E. mna...@vexxhost.com
> W. http://vexxhost.com

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [oslo] RabbitMQ queue TTL issues moving to Liberty

2016-08-16 Thread Jake Yip
Hi Matt,

We seem to be doing OK with 3.6.3. IIRC 3.6.2 was causing the stats db to
fall over every now and then, causing huge problems.

Regards,
Jake

Jake Yip,
DevOps Engineer,
Core Services, NeCTAR Research Cloud,
The University of Melbourne

On Tue, Aug 16, 2016 at 6:34 AM, Matt Fischer  wrote:

> Has anyone had any luck improving the statsdb issue by upgrading rabbit to
> 3.6.3 or newer? We're at 3.5.6 now and 3.6.2 has parallelized stats
> processing, then 3.6.3 has additional memory leak fixes for it. What we've
> been seeing is that we occasionally get slow & steady climbs of rabbit
> memory usage until the cluster falls over when it hits the memory limit.
> The last one occurred over 12 hours once we went back and looked at the
> charts.
>
> I'm hoping to try 3.6.5 but we have no way to repro this outside of
> production and even there short of bouncing neutron and all the agents over
> and over I'm not sure I could recreate it.
>
> Note - we already have the collect interval set to 30k, per recommendation
> from the Rabbit Ops talk in Tokyo, but no other optimizations for the
> statsdb. Some folks here are considering a cron job to bounce it every few
> hours.
>
>
> On Thu, Jul 28, 2016 at 9:10 AM, Kris G. Lindgren 
> wrote:
>
>> We also believe the change from auto-delete queues to 10-minute expiration
>> queues was the cause of our rabbit woes a month or so ago, where we had
>> rabbitmq servers filling their stats DB and consuming 20+ GB of RAM before
>> hitting the rabbitmq memory high watermark.  We were running for 6+ months
>> without issue under Kilo, and when we moved to Liberty rabbit consistently
>> started falling on its face.  We eventually turned down the stats
>> collection interval, but I would imagine that keeping stats around for 10
>> minutes for queues that were used for a single RPC message, when we are
>> passing 1500+ messages per second, wasn’t helping anything.  We haven’t
>> tried changing the timeout values to be lower to see if that made things
>> better, but we did identify this change as something that could contribute
>> to our rabbitmq issues.
>>
>>
>>
>>
>>
>> ___
>>
>> Kris Lindgren
>>
>> Senior Linux Systems Engineer
>>
>> GoDaddy
>>
>>
>>
>> *From: *Dmitry Mescheryakov 
>> *Date: *Thursday, July 28, 2016 at 6:17 AM
>> *To: *Sam Morrison 
>> *Cc: *OpenStack Operators 
>> *Subject: *Re: [Openstack-operators] [oslo] RabbitMQ queue TTL issues
>> moving to Liberty
>>
>>
>>
>>
>>
>>
>>
>> 2016-07-27 2:20 GMT+03:00 Sam Morrison :
>>
>>
>>
>> On 27 Jul 2016, at 4:05 AM, Dmitry Mescheryakov <
>> dmescherya...@mirantis.com> wrote:
>>
>>
>>
>>
>>
>>
>>
>> 2016-07-26 2:15 GMT+03:00 Sam Morrison :
>>
>> The queue TTL happens on reply queues and fanout queues. I don’t think it
>> should happen on fanout queues. They should auto delete. I can understand
>> the reason for having them on reply queues though so maybe that would be a
>> way to forward?
>>
>>
>>
>> Or am I missing something and it is needed on fanout queues too?
>>
>>
>>
>> I would say we do need fanout queues to expire for the very same reason
>> we want reply queues to expire instead of auto delete. In case of broken
>> connection, the expiration provides client time to reconnect and continue
>> consuming from the queue. In case of auto-delete queues, it was a frequent
>> case that RabbitMQ deleted the queue before client reconnects ... along
>> with all non-consumed messages in it.
>>
>>
>>
>> But in the case of fanout queues, if there is a broken connection can’t
>> the service just recreate the queue if it doesn’t exist? I guess that means
>> it needs to store the state of what the queue name is though?
>>
>>
>>
>> Yes, they could lose messages directed at them, but all the services I
>> know that consume on fanout queues have a resync functionality for this
>> very case.
>>
>>
>>
>> If the connection is broken will oslo messaging know how to connect to
>> the same queue again anyway? I would’ve thought it would handle the
>> disconnect and then reconnect, either with the same queue name or a new
>> queue all together?
>>
>>
>>
>> oslo.messaging handles reconnect perfectly - on connect it just
>> unconditionally declares the queue and starts consuming from it. If queue
>> already existed, the declaration operation will just be ignored by RabbitMQ.
>>
>>
>>
>> For your earlier point that services resync and hence messages lost in
>> fanout are not that important, I can't comment on that. But after some
>> thinking I do agree that having a big expiration time for fanouts is
>> inadequate for big deployments anyway. How about we split
>> rabbit_transient_queues_ttl into two parameters - one for reply queues
>> and one for fanout ones? In that case people concerned with messages piling
>>