Re: [openstack-dev] [TripleO][OVN] Switching the default network backend to ML2/OVN

2018-10-25 Thread Miguel Angel Ajo Pelayo
Daniel, thank you very much for the extensive and detailed email.

The plan looks good to me and makes sense; the OVS option will still be
tested and available when selected.



On Wed, Oct 24, 2018 at 4:41 PM Daniel Alvarez Sanchez 
wrote:

> Hi Stackers!
>
> The purpose of this email is to share with the community the intention
> of switching the default network backend in TripleO from ML2/OVS to
> ML2/OVN by changing the mechanism driver from openvswitch to ovn. This
> doesn’t mean that ML2/OVS will be dropped but users deploying
> OpenStack without explicitly specifying a network driver will get
> ML2/OVN by default.
>
> OVN in Short
> ============
>
> Open Virtual Network is managed under the OVS project, and was created
> by the original authors of OVS. It is an attempt to re-do the ML2/OVS
> control plane, using lessons learned throughout the years. It is
> intended to be used in projects such as OpenStack and Kubernetes.


Also oVirt / RHEV.


> OVN
> has a different architecture, moving us away from Python agents
> communicating with the Neutron API service via RabbitMQ to daemons
> written in C communicating via OpenFlow and OVSDB.
>
> OVN is built with a modern architecture that offers better foundations
> for a simpler and more performant solution. What does this mean? For
> example, at Red Hat we executed some preliminary testing during the
> Queens cycle and found significant CPU savings due to OVN not using
> RabbitMQ (CPU utilization during a Rally scenario using ML2/OVS [0] or
> ML2/OVN [1]). Also, we tested API performance and found out that most
> of the operations are significantly faster with ML2/OVN. Please see
> more details in the FAQ section.
>
> Here are a few useful links about OpenStack’s integration of OVN:
>
> * OpenStack Boston Summit talk on OVN [2]
> * OpenStack networking-ovn documentation [3]
> * OpenStack networking-ovn code repository [4]
>
> How?
> ====
>
> The goal is to merge this patch [5] during the Stein cycle which
> pursues the following actions:
>
> 1. Switch the default mechanism driver from openvswitch to ovn (a config
> sketch follows after this list).
> 2. Adapt all jobs so that they use ML2/OVN as the network backend.
> 3. Create legacy environment file for ML2/OVS to allow deployments based
> on it.
> 4. Flip scenario007 job from ML2/OVN to ML2/OVS so that we continue
> testing it.
> 5. Continue using ML2/OVS in the undercloud.
> 6. Ensure that updates/upgrades from ML2/OVS don’t break and don’t
> switch automatically to the new default. As some parity gaps exist
> right now, we don’t want to change the network backend automatically.
> Instead, if the user wants to migrate from ML2/OVS to ML2/OVN, we’ll
> provide an ansible based tool that will perform the operation.
> More info and code at [6].
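
For context, a minimal sketch of what step 1 looks like on the Neutron side,
in ml2_conf.ini terms (values illustrative; the TripleO patch sets the
equivalent Heat parameters):

    [ml2]
    mechanism_drivers = ovn
    tenant_network_types = geneve

    [ovn]
    ovn_nb_connection = tcp:192.0.2.5:6641
    ovn_sb_connection = tcp:192.0.2.5:6642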
>
> Reviews, comments and suggestions are really appreciated :)
>
>
> FAQ
> ===
>
> Can you talk about the advantages of OVN over ML2/OVS?
>
> ------------------------------------------------------
>
> If asked to describe the ML2/OVS control plane (OVS, L3, DHCP and
> metadata agents using the messaging bus to sync with the Neutron API
> service) one would not tend to use the term ‘simple’. There is liberal
> use of a smattering of Linux networking technologies such as:
> * iptables
> * network namespaces
> * ARP manipulation
> * Different forms of NAT
> * keepalived, radvd, haproxy, dnsmasq
> * Source-based routing,
> * … and of course OVS flows.
>
> OVN simplifies this to a single process running on compute nodes, and
> another process running on centralized nodes, communicating via OVSDB
> and OpenFlow, ultimately setting OVS flows.
>
> The simplified, new architecture allows us to re-do features like DVR
> and L3 HA in more efficient and elegant ways. For example, L3 HA
> failover is faster: It doesn’t use keepalived, rather OVN monitors
> neighbor tunnel endpoints. OVN supports enabling both DVR and L3 HA
> simultaneously, something we never supported with ML2/OVS.
>
> We also found out that not depending on RPC messages for agent
> communication brings a lot of benefits. In our experience, RabbitMQ
> sometimes becomes a bottleneck, and it can be very resource-intensive.
>
>
> What about the undercloud?
> --------------------------
>
> ML2/OVS will still be used in the undercloud, mainly because OVN has
> some limitations with regard to baremetal provisioning (keep reading
> about the parity gaps). We aim to convert the undercloud to ML2/OVN as
> soon as possible, to provide the operator a more consistent experience.
>
> It would be possible, however, to use the Neutron DHCP agent in the
> short term to work around this limitation; in the long term we intend
> to implement support for baremetal provisioning in the OVN built-in
> DHCP server.
>
>
> What about CI?
> --------------
>
> * networking-ovn has:
> * Devstack based Tempest (API, scenario from Tempest and Neutron
> Tempest plugin) against the latest released OVS 

Re: [openstack-dev] [Openstack-operators] [SIGS] Ops Tools SIG

2018-10-11 Thread Miguel Angel Ajo Pelayo
Adding the mailing lists back to your reply, thank you :)

I guess that +melvin.hills...@huawei.com can help us a bit with organizing
the SIG, but the first thing would be collecting a list of tools which
could be published under the umbrella of the SIG, starting with the ones
already in Osops.

Publishing documentation for those tools, and the catalog under
docs.openstack.org, is possibly the next step (or a parallel one).


On Wed, Oct 10, 2018 at 4:43 PM Rob McAllister  wrote:

> Hi Miguel,
>
> I would love to join this. What do I need to do?
>
> Sent from my iPhone
>
> On Oct 9, 2018, at 03:17, Miguel Angel Ajo Pelayo 
> wrote:
>
> Hello
>
> Yesterday, during the Oslo meeting, we discussed [6] the possibility of
> creating a new Special Interest Group [1][2] to provide a home and
> release means for operator-related tools [3][4][5].
>
> I continued the discussion with M.Hillsman later, and he made me aware
> of the operator working group and mailing list, which existed even before
> the SIGs.
>
> I believe it could be a very good idea to give life and more visibility
> to all those very useful tools (for example, I didn't know some of them
> existed ...).
>
>    Given this, I have two questions:
>
>    1) Do you know of more tools which could find a home under an Ops Tools
> SIG umbrella?
>
>2) Do you want to join us?
>
>
> Best regards and have a great day.
>
>
> [1] https://governance.openstack.org/sigs/
> [2] http://git.openstack.org/cgit/openstack/governance-sigs/tree/sigs.yaml
> [3] https://wiki.openstack.org/wiki/Osops
> [4] http://git.openstack.org/cgit/openstack/ospurge/tree/
> [5] http://git.openstack.org/cgit/openstack/os-log-merger/tree/
> [6]
> http://eavesdrop.openstack.org/meetings/oslo/2018/oslo.2018-10-08-15.00.log.html#l-130
>
>
>
> --
> Miguel Ángel Ajo
> OSP / Networking DFG, OVN Squad Engineering
>
>
>

-- 
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering


[openstack-dev] [SIGS] Ops Tools SIG

2018-10-09 Thread Miguel Angel Ajo Pelayo
Hello

Yesterday, during the Oslo meeting, we discussed [6] the possibility of
creating a new Special Interest Group [1][2] to provide a home and release
means for operator-related tools [3][4][5].

I continued the discussion with M.Hillsman later, and he made me aware
of the operator working group and mailing list, which existed even before
the SIGs.

I believe it could be a very good idea to give life and more visibility
to all those very useful tools (for example, I didn't know some of them
existed ...).

   Given this, I have two questions:

   1) Do you know of more tools which could find a home under an Ops Tools
SIG umbrella?

   2) Do you want to join us?


Best regards and have a great day.


[1] https://governance.openstack.org/sigs/
[2] http://git.openstack.org/cgit/openstack/governance-sigs/tree/sigs.yaml
[3] https://wiki.openstack.org/wiki/Osops
[4] http://git.openstack.org/cgit/openstack/ospurge/tree/
[5] http://git.openstack.org/cgit/openstack/os-log-merger/tree/
[6]
http://eavesdrop.openstack.org/meetings/oslo/2018/oslo.2018-10-08-15.00.log.html#l-130



-- 
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering


Re: [openstack-dev] Ryu integration with Openstack

2018-10-05 Thread Miguel Angel Ajo Pelayo
Have a look at the Dragonflow project; maybe it's similar to what you're
trying to accomplish.
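
For the custom-app route described below, a minimal standalone Ryu
application looks roughly like this (a sketch, assuming OpenFlow 1.3; names
are illustrative):

    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class CustomApp(app_manager.RyuApp):
        # Negotiate OpenFlow 1.3 with the switches that connect to us.
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def packet_in_handler(self, ev):
            # Log every packet-in; real logic (e.g. L2 learning) goes here.
            msg = ev.msg
            self.logger.info("packet-in on dpid=%s in_port=%s",
                             msg.datapath.id, msg.match['in_port'])

Run it with "ryu-manager custom_app.py" and point a bridge at it with
"ovs-vsctl set-controller br-int tcp:127.0.0.1:6633"; note that doing so
replaces the flows the neutron agent's built-in app would otherwise program.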

On Fri, Oct 5, 2018, 1:56 PM Niket Agrawal  wrote:

> Hi,
>
> Thanks for the help. I am trying to run a custom Ryu app from the nova
> compute node and have all the openvswitches connected to this new
> controller. However, to be able to run this new app, I have to first stop
> the existing neutron openvswitch agents in the same node as they run Ryu
> app (integrated in Openstack) by default. Ryu in Openstack provides basic
> functionalities like L2 switching but does not support launching a custom
> app at the same time.
> I'd like to have a single instance of Ryu controller control all the
> openvswtich instances rather than having openvswitch agents in each node
> managing the openvswitches separately. For this, I'll probably have to
> migrate the existing functionality provided by Ryu app to this new app of
> mine. Could you share some suggestions or are you aware of any previous
> work done towards this, that I can read about?
>
> Regards,
> Niket
>
> On Thu, Sep 27, 2018 at 9:21 AM Slawomir Kaplonski 
> wrote:
>
>> Hi,
>>
>> Code of app is in
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py
>> and classes for specific bridge types are in
>> https://github.com/openstack/neutron/tree/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native
>>
>> > Wiadomość napisana przez Niket Agrawal  w dniu
>> 27.09.2018, o godz. 00:08:
>> >
>> > Hi,
>> >
>> > Thanks for your reply. Is there a way to access the code that is
>> running in the app to see what is the logic implemented in the app?
>> >
>> > Regards,
>> > Niket
>> >
>> > On Wed, Sep 26, 2018 at 10:31 PM Slawomir Kaplonski <
>> skapl...@redhat.com> wrote:
>> > Hi,
>> >
>> > > Wiadomość napisana przez Niket Agrawal  w dniu
>> 26.09.2018, o godz. 18:11:
>> > >
>> > > Hello,
>> > >
>> > > I have a question regarding the Ryu integration in Openstack. By
>> default, the openvswitch bridges (br-int, br-tun and br-ex) are registered
>> to a controller running on 127.0.0.1 and port 6633. The output of ovs-vsctl
>> get-manager is ptcp:127.0.0.1:6640. This is noticed on the nova compute
>> node. However there is a different instance of the same Ryu controller
>> running on the neutron gateway as well and the three openvswitch bridges
>> (br-int, br-tun and br-ex) are registered to this instance of Ryu
>> controller. If I stop neutron-openvswitch agent on the nova compute node,
>> the bridges there are no longer connected to the controller, but the
>> bridges in the neutron gateway continue to remain connected to the
>> controller. Only when I stop the neutron openvswitch agent in the neutron
>> gateway as well, the bridges there get disconnected.
>> > >
>> > > I'm unable to find where in the Openstack code I can access this
>> implementation, because I intend to make a few tweaks to this architecture
>> which is present currently. Also, I'd like to know which app is the Ryu SDN
>> controller running by default at the moment. I feel the information in the
>> code can help me find it too.
>> >
>> > Ryu app is started by neutron-openvswitch-agent in:
>> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py#L34
>> > Is it what You are looking for?
>> >
>> > >
>> > > Regards,
>> > > Niket
>> > >
>> >
>> > —
>> > Slawek Kaplonski
>> > Senior software engineer
>> > Red Hat
>> >
>> >
>> >
>>
>> —
>> Slawek Kaplonski
>> Senior software engineer
>> Red Hat
>>
>>

Re: [openstack-dev] [neutron][stadium][networking] Seeking proposals for non-voting Stadium projects in Neutron check queue

2018-10-03 Thread Miguel Angel Ajo Pelayo
That's fantastic.

   I believe we could add some of the networking-ovn jobs; we need to
decide which ones would be most beneficial.
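
As a sketch, a non-voting job in Neutron's zuul v3 layout would look
something like this (job name illustrative):

    - project:
        check:
          jobs:
            - networking-ovn-tempest-dsvm-ovs-release:
                voting: false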

On Tue, Oct 2, 2018 at 10:02 AM  wrote:

> Hi Miguel, all,
>
> The initiative is very welcome and will help make it more efficient to
> develop in stadium projects.
>
> legacy-tempest-dsvm-networking-bgpvpn-bagpipe would be a candidate, for
> networking-bgpvpn and networking-bagpipe (it covers API and scenario
> tests for the BGPVPN API (networking-bgpvpn) and given that
> networking-bagpipe is used as reference driver, it exercises a large
> portion of networking-bagpipe as well).
>
> Having this one will help a lot.
>
> Thanks,
>
> -Thomas
>
>
> On 9/30/18 2:42 AM, Miguel Lavalle wrote:
> > Dear networking Stackers,
> >
> > During the recent PTG in Denver, we discussed measures to prevent
> > patches merged in the Neutron repo breaking Stadium and related
> > networking projects in general. We decided to implement the following:
> >
> > 1) For Stadium projects, we want to add non-voting jobs to the Neutron
> > check queue
> > 2) For non stadium projects, we are inviting them to add 3rd party CI
> jobs
> >
> > The next step is for each project to propose the jobs that they want
> > to run against Neutron patches.
> >
> > Best regards
> >
> > Miguel
> >
> >
>
>
>


-- 
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering


Re: [openstack-dev] [tripleo] [quickstart] [networking-ovn] No more overcloud_prep-containers.sh script

2018-10-03 Thread Miguel Angel Ajo Pelayo
Hi Jirka & Daniel, thanks for your answers... more inline.

On Wed, Oct 3, 2018 at 10:44 AM Jiří Stránský  wrote:

> On 03/10/2018 10:14, Miguel Angel Ajo Pelayo wrote:
> > Hi folks
> >
> >I was trying to deploy neutron with networking-ovn via
> tripleo-quickstart
> > scripts on master, and this config file [1]. It doesn't work, overcloud
> > deploy cries with:
> >
> > 1) trying to deploy ovn I end up with a 2018-10-02 17:48:12 | "2018-10-02
> > 17:47:51,864 DEBUG: 26691 -- Error: image
> > tripleomaster/centos-binary-ovn-controller:current-tripleo not found",
> >
> > it seems like the overcloud_prep-containers.sh is not there anymore (I
> > guess overcloud deploy handles it automatically now? but it fails to
> > generate the ovn containers for some reason)
> >
> > Also, if you look at [2] which are our ansible migration scripts to
> migrate
> > ml2/ovs to ml2/networking-ovn, you will see that we make use of
> > overcloud_prep-containers.sh , I guess that we will need to make sure [1]
> > works and we will get [2] for free.
>
> Hi Miguel,
>
> i'm not subject matter expert but here's some relevant info:
>
> * overcloud_prep-containers.sh is not a production thing, it's
> automation from TripleO Quickstart, which is not part of production
> deployments. We shouldn't depend on it in docs/automation for OVN
> migration.
>
Yes, I know, but based on the deployment details we have for
networking-ovn it should be enough; we will have to update those documents
with the new changes anyway, because surprisingly this change came in for
Rocky at the last minute. Why did we have such a last-minute change? :-/

I understand the value of simplifying workflows for cloud operators, but
when we make workflow changes at the last minute we make others' lives
harder (now I need to rework something I want to be available in Rocky:
the migration scripts/document).


> * For production envs, the image preparation steps used to be documented
> and performed manually. This is now changing in Rocky+, as Steve
> Baker integrated the image prep into the deployment itself. There are
> docs about the current method [3].
>

Oops, I see

openstack tripleo container image prepare default \
  --output-env-file containers-prepare-parameter.yaml

Always outputs neutron_driver: null


@Emilien Macchi, @Steve Baker: how can I make sure it provides "ovn",
for example?

I know I could manually change the file, but then, how would I run... " --
local-push-destination" ?
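
For reference, hand-editing the generated file along these lines should be
equivalent (a sketch; as far as I can tell, push_destination: true is what
--local-push-destination sets):

    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: true
        set:
          namespace: docker.io/tripleomaster
          name_prefix: centos-binary-
          tag: current-tripleo
          neutron_driver: ovn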


>
>
> * I hit similar issues with incorrect Neutron images being uploaded to
> undercloud registry, you can try deploying with this patch [4] which
> aims to fix that problem (also the depends-on patch is necessary).
>

Thanks a lot!

>
> Jirka
>
> > [1]
> https://github.com/openstack/networking-ovn/blob/master/tripleo/ovn.yml
> > [2]
> https://docs.openstack.org/networking-ovn/latest/install/migration.html
>
> [3]
> http://tripleo.org/install/advanced_deployment/container_image_prepare.html
> [4] https://review.openstack.org/#/c/604953/
>
>

-- 
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering


[openstack-dev] [tripleo] [quickstart] [networking-ovn] No more overcloud_prep-containers.sh script

2018-10-03 Thread Miguel Angel Ajo Pelayo
Hi folks

  I was trying to deploy neutron with networking-ovn via tripleo-quickstart
scripts on master, and this config file [1]. It doesn't work, overcloud
deploy cries with:

1) trying to deploy ovn I end up with a 2018-10-02 17:48:12 | "2018-10-02
17:47:51,864 DEBUG: 26691 -- Error: image
tripleomaster/centos-binary-ovn-controller:current-tripleo not found",

it seems like overcloud_prep-containers.sh is not there anymore (I guess
overcloud deploy handles it automatically now, but it fails to generate
the OVN containers for some reason).

Also, if you look at [2], our ansible migration scripts for moving from
ml2/ovs to ml2/networking-ovn, you will see that we make use of
overcloud_prep-containers.sh; I guess that once we make sure [1] works,
we will get [2] for free.



[1] https://github.com/openstack/networking-ovn/blob/master/tripleo/ovn.yml
[2] https://docs.openstack.org/networking-ovn/latest/install/migration.html
-- 
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering


Re: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed

2018-10-02 Thread Miguel Angel Ajo Pelayo
Thanks for the info Doug.

On Mon, Oct 1, 2018 at 6:25 PM Doug Hellmann  wrote:

> Miguel Angel Ajo Pelayo  writes:
>
> > Thank you for the guidance and ping Doug.
> >
> > Was this triggered by [1] ? or By the 1.1.0 tag pushed to gerrit?
>
> The release jobs are always triggered by the git tagging event. The
> patches in openstack/releases run a job that adds tags, but the patch
> you linked to hasn't been merged yet, so it looks like it was caused by
> pushing the tag manually.
>
> Doug
>


-- 
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering


Re: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed

2018-10-01 Thread Miguel Angel Ajo Pelayo
Oh, OK: the 1.1.0 tag didn't have a 'venv' environment in tox.ini, but master has had one since:

https://review.openstack.org/#/c/548618/7/tox.ini@37
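
For reference, the release-openstack-python job only needs a generic
passthrough environment; the one added there is essentially the standard
OpenStack pattern:

    [testenv:venv]
    commands = {posargs}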



On Mon, Oct 1, 2018 at 10:01 AM Miguel Angel Ajo Pelayo 
wrote:

> Thank you for the guidance and ping Doug.
>
> Was this triggered by [1] ? or By the 1.1.0 tag pushed to gerrit?
>
>
> I'm working to make os-log-merger part of the OpenStack governance
> projects, and to make sure we release it as a tarball.
>
> It's a small tool I've been using for years, making my life easier every
> time I've needed to debug complex scenarios. It's not a big project, but I
> hope the extra exposure will make developers' and admins' lives easier.
>
>
> Some projects use it as a way of aggregating logs [2] so that those
> can then be easily consumed by logstash/kibana.
>
>
> Best regards,
> Miguel Ángel Ajo
>
> [1] https://review.openstack.org/#/c/605641/
> [2]
> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/tests/contrib/post_test_hook.sh#n41
> [3]
> http://logs.openstack.org/58/605358/4/check/neutron-functional/18de376/logs/dsvm-functional-index.txt.gz
>
>
> On Fri, Sep 28, 2018 at 5:45 PM Doug Hellmann 
> wrote:
>
>> z...@openstack.org writes:
>>
>> > Build failed.
>> >
>> > - release-openstack-python
>> http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/
>> : FAILURE in 3m 57s
>> > - announce-release announce-release : SKIPPED
>> > - propose-update-constraints propose-update-constraints : SKIPPED
>>
>> The error here is
>>
>>   ERROR: unknown environment 'venv'
>>
>> It looks like os-log-merger is not set up for the
>> release-openstack-python job, which expects a specific tox setup.
>>
>>
>> http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/ara-report/result/7c6fd37c-82d8-48f7-b653-5bdba90cbc31/
>>
>>
>
>
> --
> Miguel Ángel Ajo
> OSP / Networking DFG, OVN Squad Engineering
>


-- 
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering


Re: [openstack-dev] [Release-job-failures] Release of openstack/os-log-merger failed

2018-10-01 Thread Miguel Angel Ajo Pelayo
Thank you for the guidance and ping Doug.

Was this triggered by [1] ? or By the 1.1.0 tag pushed to gerrit?


I'm working to make os-log-merger part of the OpenStack governance
projects, and to make sure we release it as a tarball.

It's a small tool I've been using for years, making my life easier every
time I've needed to debug complex scenarios. It's not a big project, but I
hope the extra exposure will make developers' and admins' lives easier.


Some projects use it as a way of aggregating logs [2] so that those
can then be easily consumed by logstash/kibana.


Best regards,
Miguel Ángel Ajo

[1] https://review.openstack.org/#/c/605641/
[2]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/tests/contrib/post_test_hook.sh#n41
[3]
http://logs.openstack.org/58/605358/4/check/neutron-functional/18de376/logs/dsvm-functional-index.txt.gz


On Fri, Sep 28, 2018 at 5:45 PM Doug Hellmann  wrote:

> z...@openstack.org writes:
>
> > Build failed.
> >
> > - release-openstack-python
> http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/
> : FAILURE in 3m 57s
> > - announce-release announce-release : SKIPPED
> > - propose-update-constraints propose-update-constraints : SKIPPED
>
> The error here is
>
>   ERROR: unknown environment 'venv'
>
> It looks like os-log-merger is not set up for the
> release-openstack-python job, which expects a specific tox setup.
>
>
> http://logs.openstack.org/d4/d445ff62676bc5b2753fba132a3894731a289fb9/release/release-openstack-python/629c35f/ara-report/result/7c6fd37c-82d8-48f7-b653-5bdba90cbc31/
>
>


-- 
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering


Re: [openstack-dev] [neutron] Core status

2018-09-20 Thread Miguel Angel Ajo Pelayo
Good luck Gary, thanks for all those years on Neutron! :)

Best regards,
Miguel Ángel

On Wed, Sep 19, 2018 at 9:32 PM Nate Johnston 
wrote:

> On Wed, Sep 19, 2018 at 06:19:44PM +, Gary Kotton wrote:
>
> > I have recently transitioned to a new role where I will be working on
> other parts of OpenStack. Sadly I do not have the necessary cycles to
> maintain my core responsibilities in the neutron community. Nonetheless I
> will continue to be involved.
>
> Thanks for everything you've done over the years, Gary.  I know I
> learned a lot from your reviews back when I was a wee baby Neutron
> developer.  Best of luck on what's next!
>
> Nate
>
>


-- 
Miguel Ángel Ajo
OSP / Networking DFG, OVN Squad Engineering


Re: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata

2018-04-09 Thread Miguel Angel Ajo Pelayo
I don't necessarily agree that rewriting tests is the solution here.

Maybe for some extreme cases that could be fine, but from the maintenance
point of view it doesn't sound very practical IMHO.

In some cases it can be just a parametrization of the tests as they are,
or simply accounting for a bit of extra headroom in quotas (when, of
course, the purpose of the specific test is not to verify quota
behaviour).
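
A rough sketch of that parametrization idea, tempest-style (treat the
helper names as illustrative):

    from tempest.lib import exceptions as lib_exc

    def test_create_port_when_quotas_is_full(self):
        tenant_id = self.client.tenant_id
        # Count whatever the backend pre-created (e.g. OVN's metadata
        # port) and set the quota relative to that, not a hardcoded 1.
        ports = self.admin_client.list_ports(tenant_id=tenant_id)['ports']
        self.admin_client.update_quotas(tenant_id, port=len(ports) + 1)
        self.create_port(self.network)       # this one fills the quota
        self.assertRaises(lib_exc.Conflict,  # the next one must be rejected
                          self.create_port, self.network)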



On Sun, Apr 8, 2018 at 3:52 PM Gary Kotton <gkot...@vmware.com> wrote:

> Hi,
>
> There are some tempest tests that check realization of resources on the
> networking platform and connectivity. Here things are challenging as each
> networking platform may be more restrictive than the upstream ML2 plugin.
> My thinking here is that we should leverage the tempest plugins for each
> networking platform and they can overwrite the problematic tests and
> address them as suitable for the specific plugin.
>
> Thanks
>
> Gary
>
>
>
> *From: *Miguel Angel Ajo Pelayo <majop...@redhat.com>
> *Reply-To: *OpenStack List <openstack-dev@lists.openstack.org>
> *Date: *Saturday, April 7, 2018 at 8:56 AM
> *To: *OpenStack List <openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [neutron] [OVN] Tempest API / Scenario
> tests and OVN metadata
>
>
>
> This issue isn't only for networking-ovn; please note that it happens with
> a few other vendor plugins (like NSX). At least, this is something we have
> found in downstream certifications.
>
>
>
> Cheers,
>
> On Sat, Apr 7, 2018, 12:36 AM Daniel Alvarez <dalva...@redhat.com> wrote:
>
>
>
> > On 6 Apr 2018, at 19:04, Sławek Kapłoński <sla...@kaplonski.pl> wrote:
> >
> > Hi,
> >
> > Another idea is to modify the test so that it will:
> > 1. Check how many ports are in tenant,
> > 2. Set quota to actual number of ports + 1 instead of hardcoded 1 as it
> is now,
> > 3. Try to add 2 ports - exactly as it is now,
> >
> Cool, I like this one :-)
> Good idea.
>
> > I think that this should be still backend agnostic and should fix this
> problem.
> >
> >> Wiadomość napisana przez Sławek Kapłoński <sla...@kaplonski.pl> w dniu
> 06.04.2018, o godz. 17:08:
> >>
> >> Hi,
> >>
> >> I don’t know how networking-ovn is working but I have one question.
> >>
> >>
> >>> Wiadomość napisana przez Daniel Alvarez Sanchez <dalva...@redhat.com>
> w dniu 06.04.2018, o godz. 15:30:
> >>>
> >>> Hi,
> >>>
> >>> Thanks Lucas for writing this down.
> >>>
> >>> On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes <
> lucasago...@gmail.com> wrote:
> >>> Hi,
> >>>
> >>> The tests below are failing in the tempest API / Scenario job that
> >>> runs in the networking-ovn gate (non-voting):
> >>>
> >>>
> neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full
> >>>
> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status
> >>>
> neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status
> >>>
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen
> >>>
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota
> >>>
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr
> >>>
> >>> Digging a bit into it I noticed that with the exception of the two
> >>> "test_router_interface_status" (ipv6 and ipv4) all other tests are
> >>> failing because the way metadata works in networking-ovn.
> >>>
> >>> Taking the "test_create_port_when_quotas_is_full" as an example. The
> >>> reason why it fails is that, when OVN metadata is enabled,
> >>> networking-ovn will create the metadata port at the moment a network is created
> >>> [0] and that will already fulfill the quota limit set by that test
> >>> [1].
> >>>
> >>> That port will also allocate an IP from the subnet which will cause
> >>> the rest of the tests to fail with a "No more IP addresses available
> >>> on network ..." error.
> >>>
> >>> With ML2/OVS we would run into the same Quota problem if DHCP would be
> >>> enabled for the created subnets. This means that if we modify the
> current tests
> >>> to 

Re: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata

2018-04-06 Thread Miguel Angel Ajo Pelayo
This issue isn't only for networking-ovn; please note that it happens with
a few other vendor plugins (like NSX). At least, this is something we have
found in downstream certifications.

Cheers,

On Sat, Apr 7, 2018, 12:36 AM Daniel Alvarez  wrote:

>
>
> > On 6 Apr 2018, at 19:04, Sławek Kapłoński  wrote:
> >
> > Hi,
> >
> > Another idea is to modify the test so that it will:
> > 1. Check how many ports are in tenant,
> > 2. Set quota to actual number of ports + 1 instead of hardcoded 1 as it
> is now,
> > 3. Try to add 2 ports - exactly as it is now,
> >
> Cool, I like this one :-)
> Good idea.
>
> > I think that this should be still backend agnostic and should fix this
> problem.
> >
> >> Wiadomość napisana przez Sławek Kapłoński  w dniu
> 06.04.2018, o godz. 17:08:
> >>
> >> Hi,
> >>
> >> I don’t know how networking-ovn is working but I have one question.
> >>
> >>
> >>> Wiadomość napisana przez Daniel Alvarez Sanchez 
> w dniu 06.04.2018, o godz. 15:30:
> >>>
> >>> Hi,
> >>>
> >>> Thanks Lucas for writing this down.
> >>>
> >>> On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes <
> lucasago...@gmail.com> wrote:
> >>> Hi,
> >>>
> >>> The tests below are failing in the tempest API / Scenario job that
> >>> runs in the networking-ovn gate (non-voting):
> >>>
> >>>
> neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full
> >>>
> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status
> >>>
> neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status
> >>>
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen
> >>>
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota
> >>>
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr
> >>>
> >>> Digging a bit into it I noticed that with the exception of the two
> >>> "test_router_interface_status" (ipv6 and ipv4) all other tests are
> >>> failing because the way metadata works in networking-ovn.
> >>>
> >>> Taking the "test_create_port_when_quotas_is_full" as an example. The
> >>> reason why it fails is that, when OVN metadata is enabled,
> >>> networking-ovn will create the metadata port at the moment a network is created
> >>> [0] and that will already fulfill the quota limit set by that test
> >>> [1].
> >>>
> >>> That port will also allocate an IP from the subnet which will cause
> >>> the rest of the tests to fail with a "No more IP addresses available
> >>> on network ..." error.
> >>>
> >>> With ML2/OVS we would run into the same Quota problem if DHCP would be
> >>> enabled for the created subnets. This means that if we modify the
> current tests
> >>> to enable DHCP on them and we account this extra port it would be
> valid for
> >>> all networking-ovn as well. Does it sound good or we still want to
> isolate quotas?
> >>
> >> If DHCP will be enabled for networking-ovn, will it use one more port
> also or not? If so then You will still have the same problem with DHCP as
> in ML2/OVS You will have one port created and for networking-ovn it will be
> 2 ports.
> >> If it’s not like that then I think that this solution, with some
> comment in test code why DHCP is enabled should be good IMO.
> >>
> >>>
> >>> This is not very trivial to fix because:
> >>>
> >>> 1. Tempest should be backend agnostic. So, adding a conditional in the
> >>> tempest test to check whether OVN is being used or not doesn't sound
> >>> correct.
> >>>
> >>> 2. Creating a port to be used by the metadata agent is a core part of
> >>> the design implementation for the metadata functionality [2]
> >>>
> >>> So, I'm sending this email to try to figure out what would be the best
> >>> approach to deal with this problem and start working towards having
> >>> that job to be voting in our gate. Here are some ideas:
> >>>
> >>> 1. Simple disable the tests that are affected by the metadata approach.
> >>>
> >>> 2. Disable metadata for the tempest API / Scenario tests (here's a
> >>> test patch doing it [3])
> >>>
> >>> IMHO, we don't want to do this as metadata is likely to be enabled in
> all the
> >>> clouds either using ML2/OVS or OVN so it's good to keep exercising
> >>> this part.
> >>>
> >>>
> >>> 3. Same as 1. but also create similar tempest tests specific for OVN
> >>> somewhere else (in the networking-ovn tree?!)
> >>>
> >>> As we discussed on IRC I'm keen on doing this instead of getting bits
> in
> >>> tempest to do different things depending on the backend used. Unless
> >>> we want to enable DHCP on the subnets that these tests create :)
> >>>
> >>>
> >>> What you think would be the best way to workaround this problem, any
> >>> other ideas ?
> >>>
> >>> As for the "test_router_interface_status" tests that are failing
> >>> independent of the 

Re: [openstack-dev] [neutron]Does neutron-server support the main backup redundancy?

2018-03-21 Thread Miguel Angel Ajo Pelayo
You can run as many as you want; generally HAProxy is used in front of
them to balance load across the neutron-server instances.

Also keep in mind that the DB backend is a single MySQL; you can
distribute that as well, with Galera.

That is the configuration you get by default when you deploy in HA with
RDO/TripleO or OSP/Director.
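
A minimal HAProxy stanza for three neutron-server instances could look
like this (addresses illustrative):

    listen neutron-api
        bind 192.0.2.100:9696
        balance roundrobin
        option httpchk GET /
        server controller-0 192.0.2.11:9696 check
        server controller-1 192.0.2.12:9696 check
        server controller-2 192.0.2.13:9696 check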

On Wed, Mar 21, 2018 at 3:34 AM Kevin Benton  wrote:

> You can run as many neutron server processes as you want in an
> active/active setup.
>
> On Tue, Mar 20, 2018, 18:35 Frank Wang  wrote:
>
>> Hi All,
>>  As far as I know, neutron-server can only be a single node. In order
>> to improve the reliability of the system, does it support main/backup
>> or active/active redundancy? Any comment would be appreciated.
>>
>> Thanks,
>>
>>
>>


Re: [openstack-dev] OpenStack Ansible Disk requirements [docs] [osa]

2018-03-16 Thread Miguel Angel Ajo Pelayo
Right, that's a little absurd. 1 TB? :-) I completely agree.

They could live with anything, but I'd try to estimate minimums across
distributions. For example, an RDO test deployment with containers looks
like:

(undercloud) [stack@undercloud ~]$ ssh heat-admin@192.168.24.8 "sudo df -h ; sudo free -h;"

Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/vda2        50G  7.4G    43G   15%  /
devtmpfs        2.9G     0   2.9G    0%  /dev
[...]
tmpfs           581M     0   581M    0%  /run/user/1000

              total   used   free   shared   buff/cache   available
Mem:           5.7G   1.1G   188M     2.4M         4.4G        4.1G
Swap:            0B     0B     0B

That looks rather lightweight. We need to consider logging space, etc.,
but I'd say 20 GB could be enough, not counting instance disks?



On Fri, Mar 16, 2018 at 9:39 AM Jean-Philippe Evrard <
jean-phili...@evrard.me> wrote:

> Hello,
>
> That's what it always was, but it was hidden in the pages. Now that I
> refactored the pages to be more visible, you spotted it :)
> Congratulations!
>
> More seriously, I'd like to remove that requirement, showing people
> can do whatever they like. It all depends on how/where they store
> images, ephemeral storage...
>
> Will commit a patch today.
>
> Best regards,
> Jean-Philippe Evrard
>
>
>
> On 15 March 2018 at 18:31, Gordon, Kent S
>  wrote:
> > Compute host disk requirements for Openstack Ansible seem high in the
> > documentation.
> >
> > I think I have used smaller compute hosts in the past.
> > Did something change in Queens?
> >
> >
> https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/overview-requirements.html
> >
> >
> > Compute hosts
> >
> > Disk space requirements depend on the total number of instances running
> on
> > each host and the amount of disk space allocated to each instance.
> >
> > Compute hosts must have a minimum of 1 TB of disk space available.
> >
> >
> >
> >
> > --
> > Kent S. Gordon
> > kent.gor...@verizonwireless.com  Work: 682-831-3601  Mobile: 817-905-6518
> >
> >


Re: [openstack-dev] [Neutron] Dublin PTG Summary

2018-03-13 Thread Miguel Angel Ajo Pelayo
Very good summary, thanks for leading the PTG and neutron so well. :)


On Mon, Mar 12, 2018 at 11:25 PM fumihiko kakuma 
wrote:

> Hi Miguel,
>
> > * As part of the neutron-lib effort, we have found networking projects
> that
> > are very inactive. Examples are networking-brocade (no updates since May
> of
> > 2016) and networking-ofagent (no updates since March of 2017). Miguel
> > Lavalle will contact these projects leads to ascertain their situation.
> If
> > they are indeed inactive, we will not support them as part of neutron-lib
> > updates and will also try to remove them from code search
>
> networking-ofagent has been removed in the Newton release.
> So it will not be necessary to support it as part of neutron-lib updates.
>
> Thanks
> kakuma.
>
>
> On Mon, 12 Mar 2018 13:45:27 -0500
> Miguel Lavalle  wrote:
>
> > Hi All!
> >
> > First of all, I want to thank you the team for the productive week we had
> > in Dublin. Following below is a high level summary of the discussions we
> > had. If there is something I left out, please reply to this email thread
> to
> > add it. However, if you want to continue the discussion on any of the
> > individual points summarized below, please start a new thread, so we
> don't
> > have a lot of conversations going on attached to this update.
> >
> > You can find the etherpad we used during the PTG meetings here:
> > https://etherpad.openstack.org/p/neutron-ptg-rocky
> >
> >
> > Retrospective
> > ==
> >
> > * The team missed one community goal in the Pike cycle
> > (https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html)
> > and one in the Queens cycle
> > (https://governance.openstack.org/tc/goals/queens/policy-in-code.html)
> >
> >- Akihiro Motoki will work on
> > https://governance.openstack.org/tc/goals/queens/policy-in-code.html
> > during Rocky
> >
> >   - We need volunteers to complete
> > https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html
> > and the two new goals for the Rocky cycle:
> > https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html
> > and https://governance.openstack.org/tc/goals/rocky/mox_removal.html.
> > Akihiro Motoki will lead the effort for mox removal
> >
> >   - We decided to add a section to our weekly meeting agenda where we are
> > going to track the progress towards catching up with the community goals
> > during the Rocky cycle
> >
> > * As part of the neutron-lib effort, we have found networking projects
> that
> > are very inactive. Examples are networking-brocade (no updates since May
> of
> > 2016) and networking-ofagent (no updates since March of 2017). Miguel
> > Lavalle will contact these projects leads to ascertain their situation.
> If
> > they are indeed inactive, we will not support them as part of neutron-lib
> > updates and will also try to remove them from code search
> >
> > * We will continue our efforts to recruit new contributors and develop
> core
> > reviewers. During the conversation on this topic, Nikolai de Figueiredo
> and
> > Pawel Suder announced that they will become active in Neutron. Both of
> > them, along with Hongbin Lu, indicated that are interested in working
> > towards becoming core reviewers.
> >
> > * The team went through the blueprints in the backlog. Here is the status
> > for those blueprints that are not discussed in other sections of this
> > summary:
> >
> >- Adopt oslo.versionedobjects for database interactions. This is a
> > continuing effort. The contact is Ihar Hrachyshka  (ihrachys).
> Contributors
> > are wanted. There is a weekly meeting led by Ihar where this topic is
> > covered: http://eavesdrop.openstack.org/#Neutron_Upgrades_Meeting
> >
> >- Enable adoption of an existing subnet into a subnetpool. The final
> > patch in the series to implement this feature is:
> > https://review.openstack.org/#/c/348080. Pawel Suder will drive this
> patch
> > to completion
> >
> >- Neutron in-tree API reference
> > (https://blueprints.launchpad.net/neutron/+spec/neutron-in-tree-api-ref).
> > There are two remaining TODOs to complete this blueprint:
> > https://bugs.launchpad.net/neutron/+bug/1752274 and
> > https://bugs.launchpad.net/neutron/+bug/1752275. We need volunteers for
> > these two work items
> >
> >- Add TCP/UDP port forwarding extension to L3. The spec was merged
> > recently:
> > https://specs.openstack.org/openstack/neutron-specs/specs/queens/port-forwarding.html.
> > Implementation effort is in progress:
> > https://review.openstack.org/#/c/533850/ and
> > https://review.openstack.org/#/c/535647/
> >
> >- Pure Python driven Linux network configuration (
> > https://bugs.launchpad.net/neutron/+bug/1492714). This effort has been
> > going on for several cycles gradually adopting pyroute2. Slawek Kaplonski
> > is continuing it with https://review.openstack.org/#/c/545355 and
> > https://review.openstack.org/#/c/548267
> >
> >
> > Port 

[openstack-dev] [neutron] Increased port revisions on port creation after object engine facade patch

2018-02-28 Thread Miguel Angel Ajo Pelayo
On Mon, Feb 12, 2018 at 2:21 PM Ihar Hrachyshka <ihrac...@redhat.com> wrote:

> I would check how many commits are issued. If it's still one, there is
> no issue as long as revision numbers are increasing. Otherwise, we can
> take a look. BTW why do we discuss it here and not in upstream?


Good point, I'm moving this to the openstack-dev list


> Ihar
>
> On Mon, Feb 12, 2018 at 12:37 AM, Miguel Angel Ajo Pelayo
> <majop...@redhat.com> wrote:
> > Hi folks :)
> >
> >    We were talking this morning about the change for the new engine
> > facade in neutron [1].
> >
> >    And we guess this could incur overhead on the DB layer: if we look
> > at the corresponding networking-ovn change [2], we detected that the
> > port revision increases by 3 on port creation.
> >
> >    We haven't looked at why, or how much overhead it adds to port
> > creation. It'd be great to verify that we don't incur much overhead,
> > or see if there is room for optimization.
> >
> >Best regards,
> >
> >
> >
> > [1]
> >
> https://github.com/openstack/neutron/commit/6f83466307fb21aee5bb596974644d457ae1fa60#diff-94eb611a8a3b29dbf8cd2aa2466a53b9R34
> > [2]
> >
> https://review.openstack.org/#/c/543166/3/networking_ovn/tests/functional/test_revision_numbers.py
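
A quick way to observe the behaviour from the API side (a sketch using
openstacksdk; assumes a clouds.yaml entry named "devstack-admin"):

    import openstack

    conn = openstack.connect(cloud='devstack-admin')
    net = conn.network.create_network(name='rev-test')
    port = conn.network.create_port(network_id=net.id)
    # A single create would ideally bump the revision once; the linked
    # functional test saw it land 3 higher after the engine facade change.
    print(port.revision_number)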
>


[openstack-dev] [neutron] [networking-ovn] Rocky PTG

2018-02-08 Thread Miguel Angel Ajo Pelayo
I have created an etherpad for networking-ovn at
https://etherpad.openstack.org/p/networking-ovn-ptg-rocky with some topics
I thought were relevant.

But please feel free to add anything you believe could be interesting,
and fill in attendance so it's easier to sync and meet. :)


Re: [openstack-dev] [neutron][networking-ovn] Stable branch maintainers for networking-ovn

2017-12-20 Thread Miguel Angel Ajo Pelayo
That may help, of course, but I guess it could also be capacity related.

On Wed, Dec 20, 2017 at 11:42 AM Takashi Yamamoto 
wrote:

> On Wed, Dec 20, 2017 at 7:18 PM, Lucas Alvares Gomes
>  wrote:
> > Hi,
> >
> >>> Hi all,
> >>>
> >>> Just sending this email to try to understand the model for stable
> branch maintenance in networking-ovn (potentially other neutron drivers
> too).
> >>>
> >>> Right now, only members of the ``neutron-stable-maint`` gerrit group
> are able to approve patches for the stable branches; this can cause some
> delays when fixing things (e.g [0]) because we don't have any member in
> that group that is also a ``networking-ovn-core`` member. So, sometimes we
> have to go around and ping people to take a look at the patches and it
> kinda sucks.
> >>
> >>
> >> We had a Gerrit dashboard that helped stable reviewers stay on top of
> things [1], but it looks like it doesn't seem to work anymore. My
> suggestion would be to look into that as the lack of visibility might be
> the source of the recent delay.
> >>
> >> [1]
> https://docs.openstack.org/neutron/latest/contributor/dashboards/index.html#gerrit-dashboards
> >
> > ++ indeed, lack of visibility is a problem as well.
>
> and lack of visibility of the fix of the dashboard? :-)
> https://review.openstack.org/#/c/479138/
>
> >
> >>> Is there any reason why things are set up in that way ?
> >>>
> >>> I was wondering if it would make sense to create a new group to help
> maintain the stable branches in networking-ovn. The new group could
> include some of the core members willing to do the work +
> ``neutron-stable-maint`` as a subgroup. Is that reasonable? What do you
> think about it?
> >>
> >>
> >> Rather than create yet another group(s), it makes sense to have an
> individual from each neutron project participate in the
> neutron-stable-maint team (whose admin rights I think are held by Ihar as
> neutron member), for those of whom have actually an interest in reviewing
> stable patches :)
> >>
> >
> > Having a member in the current group will help, if you are comfortable
> > with adding a new member to the current group that would be great.
> >
> > The reason why I was leaning towards having another group is because
> > of scope limitation. Members of the ``neutron-stable-maint`` group can
> > approve patches for all neutron-related projects stable branches. By
> > having a separated group, members would only be able to approve things
> > for a specific project.
> >
> > The new group would also have the ``neutron-stable-maint`` as a
> > sub-group to it , so the members of the original group would still
> > able approve things everywhere.
> >
> > Anyway, either ideas would help with the original problem, I'm good
> > with whatever approach people thinks is best.
> >
> > Cheers,
> > Lucas
> >
> >


Re: [openstack-dev] [neutron][networking-ovn] Stable branch maintainers for networking-ovn

2017-12-20 Thread Miguel Angel Ajo Pelayo
If we could have one member from networking-ovn on the neutron-stable-maint
team, that would be great. That means the member would have to be trusted
not to touch neutron patches without knowing what he's doing and, of
course, to follow the stable guidelines, which are absolutely important.
But I believe everybody takes the role seriously.

If that's not a reasonable solution, then I'd vote for the specific stable
maintainers instead. But we need something that helps us handle issues more
quickly and, at the same time, in a controlled manner.

Best,
Miguel Ángel.

On Tue, Dec 19, 2017 at 5:48 PM Armando M.  wrote:

> On 19 December 2017 at 08:21, Lucas Alvares Gomes 
> wrote:
>
>> Hi all,
>>
>> Just sending this email to try to understand the model for stable branch
>> maintenance in networking-ovn (potentially other neutron drivers too).
>>
>> Right now, only members of the ``neutron-stable-maint`` gerrit group are
>> able to approve patches for the stable branches; this can cause some delays
>> when fixing things (e.g [0]) because we don't have any member in that group
>> that is also a ``networking-ovn-core`` member. So, sometimes we have to go
>> around and ping people to take a look at the patches and it kinda sucks.
>>
>
> We had a Gerrit dashboard that helped stable reviewers stay on top of
> things [1], but it looks like it doesn't seem to work anymore. My
> suggestion would be to look into that as the lack of visibility might be
> the source of the recent delay.
>
> [1]
> https://docs.openstack.org/neutron/latest/contributor/dashboards/index.html#gerrit-dashboards
>
>
>> Is there any reason why things are set up in that way ?
>>
>> I was wondering if it would make sense to create a new group to help
>> maintain the stable branches in networking-ovn. The new group could
>> include some of the core members willing to do the work +
>> ``neutron-stable-maint`` as a subgroup. Is that reasonable? What do you
>> think about it?
>>
>
> Rather than create yet another group(s), it makes sense to have an
> individual from each neutron project participate in the
> neutron-stable-maint team (whose admin rights I think are held by Ihar as
> neutron member), for those of whom have actually an interest in reviewing
> stable patches :)
>
> HTH
> Armando
>
>
>> [0] https://review.openstack.org/#/c/523623/
>>
>> Cheers,
>> Lucas
>>


Re: [openstack-dev] [neutron] put router in vm rather than namespace

2017-12-04 Thread Miguel Angel Ajo Pelayo
That adds more latency; I believe some vendor plugins do it like that
(service VM).

Have you checked out networking-ovn? It's all done in OpenFlow, and you
get HA (A/P) for free without extra namespaces: just flows and BFD
monitoring.
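
For example, gateway failover is expressed as gateway-chassis priorities
on the router port (a sketch; names illustrative):

    # The highest-priority chassis is active; BFD detects tunnel endpoint
    # failure and the next-priority chassis takes over.
    ovn-nbctl lrp-set-gateway-chassis lrp-router1 chassis-node1 20
    ovn-nbctl lrp-set-gateway-chassis lrp-router1 chassis-node2 10
    ovn-nbctl lrp-get-gateway-chassis lrp-router1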

On Dec 4, 2017 4:22 PM, "Jaze Lee"  wrote:

> Hello,
> Can we put the router into a virtual machine rather than in a namespace?
> Then HA and devops would be more elegant: you could live-migrate it and
> use haproxy.
> A namespace cannot be live-migrated; it is bound to the host and loses
> flexibility.
> Or are there some talks or specs about this?
>
> --
> 谦谦君子
>


[openstack-dev] [neutron] [networking-ovn] Non voting jobs for networking-ovn on neutron.

2017-12-04 Thread Miguel Angel Ajo Pelayo
Hi Folks,
     I wanted to raise this topic; I have been wanting to do it for a long
time, but preferred to wait until the zuulv3 stuff was a little bit more
stable. Maybe now is a good time.

     We were thinking about the option of having a couple of non-voting jobs
on the neutron check queue for networking-ovn. It'd be great for us in terms
of traceability; we re-use a lot of the neutron unit test base classes etc.,
and sometimes we get hit by surprises.

     Sometimes other changes hit us on the neutron scenario tests.

     So it'd be great to have them if you believe it's a reasonable thing.

Best regards,
Miguel Ángel.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ovn] networking-ovn core team update

2017-12-01 Thread Miguel Angel Ajo Pelayo
Welcome Daniel! :)

On Fri, Dec 1, 2017 at 5:45 PM, Lucas Alvares Gomes 
wrote:

> Hi all,
>
> I would like to welcome Daniel Alvarez to the networking-ovn core team!
>
> Daniel has been contributing to the project for quite a while already
> and helping *a lot* with reviews and code.
>
> Welcome onboard man!
>
> Cheers,
> Lucas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Propose Slawek Kaplonski for Neutron core

2017-11-29 Thread Miguel Angel Ajo Pelayo
"+1" I know, I'm not active, but I care about neutron, and slaweq is a
great contributor.

On Nov 29, 2017 8:37 PM, "Ihar Hrachyshka"  wrote:

> YES, FINALLY.
>
> On Wed, Nov 29, 2017 at 11:29 AM, Kevin Benton  wrote:
> > +1! ... even though I haven't been around. :)
> >
> > On Wed, Nov 29, 2017 at 1:21 PM, Miguel Lavalle 
> wrote:
> >>
> >> Hi Neutron Team,
> >>
> >> I want to nominate Slawek Kaplonski (irc: slaweq) to Neutron core.
> Slawek
> >> has been an active contributor to the project since the Mitaka cycle.
> He has
> >> been instrumental in the development of the QoS capabilities in Neutron,
> >> becoming the lead of the sub-team focused on that set of features. More
> >> recently, he has collaborated in the implementation of OVO and is an
> active
> >> participant in the CI sub-team. His number of code reviews during the
> Queens
> >> cycle is on par with the leading core members of the team:
> >> http://stackalytics.com/?module=neutron
> >>
> >> In my opinion, his efforts are highly valuable to the team and we will
> be
> >> very lucky to have him as a fully voting core.
> >>
> >> I will keep this nomination open for a week as customary,
> >>
> >> Thank you,
> >>
> >> Miguel
> >>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ovn] networking-ovn-core updates

2017-10-10 Thread Miguel Angel Ajo Pelayo
Thank you very much :-)

On Tue, Oct 10, 2017 at 4:09 PM, Lucas Alvares Gomes Martins <
lmart...@redhat.com> wrote:

> Hi,
>
> On Tue, Oct 10, 2017 at 2:25 PM, Russell Bryant 
> wrote:
> > Hello, everyone.  I'd like to welcome two new members to the
> > networking-ovn-core team: Miguel Angel Ajo and Yamamoto Takashi.
> >
>
> Great additions, welcome on board Miguel and Yamamoto!
>
> Cheers,
> Lucas
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][oslo.messaging][femdc]Topic names for every resource type RPC endpoint

2017-09-25 Thread Miguel Angel Ajo Pelayo
Yeah, you could maybe segment messages by zones/cells, etc.

Or group resources into buckets, for example taking the last 2-3 bytes of
each object identifier.

But you may need to do the math on how that's going to work: the more
objects an agent needs to process, the more likely it is to receive
unnecessary objects in that case.

The ideal, anyway, is single-object subscription (as long as the client and
rabbit can handle such a scenario well).
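
For illustration, a minimal sketch of the bucketing idea above, in Python.
The topic naming scheme, bucket count, and helper name are made up for the
example; this is not an existing neutron API:

    import hashlib

    def bucket_topic(resource_type, resource_id, num_buckets=256):
        """Map a resource to one of num_buckets coarse-grained topics.

        Instead of one topic per resource id (too many receivers), derive
        a bucket from the id, so an agent subscribes only to the buckets
        covering the resources it actually hosts.
        """
        # Hashing keeps buckets evenly distributed even if the ids are not
        # uniformly random; for UUIDs, taking the last 2-3 bytes of the id
        # directly (as suggested above) would work just as well.
        digest = hashlib.sha256(resource_id.encode("utf-8")).digest()
        bucket = int.from_bytes(digest[-2:], "big") % num_buckets
        return "neutron-vo-%s-bucket-%03d" % (resource_type, bucket)

    print(bucket_topic("SecurityGroup", "8a2e9c4f-83d4-4718-97a6-4f74b4f54f21"))

The trade-off is exactly the one described above: fewer receivers per agent,
but each bucket topic may deliver updates for resources the agent doesn't
actually host.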


On Sun, Sep 24, 2017 at 11:45 PM, Matthieu Simonin <
matthieu.simo...@inria.fr> wrote:

> Thanks Miguel for your feedback.
>
> I'll definetely dig more into this.
> Having a lot of messages broadcasted to all the neutron agents is not
> something you want especially in the context of femdc[1].
>
> Best,
>
> Matt
>
> [1]: https://wiki.openstack.org/wiki/Fog_Edge_Massively_Distributed_Clouds
>
> ----- Mail original -
> > De: "Miguel Angel Ajo Pelayo" <majop...@redhat.com>
> > À: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> > Envoyé: Mercredi 20 Septembre 2017 11:15:12
> > Objet: Re: [openstack-dev] [neutron][oslo.messaging][femdc]Topic names
> for every resource type RPC endpoint
> >
> > I wrote those lines.
> >
> > At that time, I tried a couple of things: a publisher and a receiver at
> > that scale. It was the receiver side that crashed trying to subscribe;
> > the sender was completely fine.
> >
> > Sadly I didn't keep the test examples; I should have stored them on
> > GitHub or something. It shouldn't be hard to replicate, though, if you
> > follow the oslo_messaging docs.
> >
> >
> >
> > On Wed, Sep 20, 2017 at 9:58 AM, Matthieu Simonin <
> matthieu.simo...@inria.fr
> > > wrote:
> >
> > > Hello,
> > >
> > > In the Neutron docs about RPCs and Callbacks system, it is said[1] :
> > >
> > > "With the underlying oslo_messaging support for dynamic topics on the
> > > receiver
> > > we cannot implement a per “resource type + resource id” topic, rabbitmq
> > > seems
> > > to handle 1’s of topics without suffering, but creating 100’s of
> > > oslo_messaging receivers on different topics seems to crash."
> > >
> > > I wonder if this statement still holds for the new transports supported
> > > in oslo.messaging (e.g. Kafka, AMQP 1.0) or if it's more of a design
> > > limitation.
> > > I'm interested in any relevant docs/links/reviews on the "topic" :).
> > >
> > > Moreover, I'm curious to get an idea of how many different resources a
> > > Neutron agent would have to manage, and thus how many oslo_messaging
> > > receivers would be required (e.g. how many security groups a neutron
> > > agent has to manage?) - at least the order of magnitude.
> > >
> > > Best,
> > >
> > > Matt
> > >
> > >
> > >
> > > [1]: https://docs.openstack.org/neutron/latest/contributor/
> > > internals/rpc_callbacks.html#topic-names-for-every-
> > > resource-type-rpc-endpoint
> > >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] MTU native ovs firewall driver

2017-09-22 Thread Miguel Angel Ajo Pelayo
It could be that too TBH I'm not sure :)

On Fri, Sep 22, 2017 at 11:02 AM, Sławomir Kapłoński <sla...@kaplonski.pl>
wrote:

> Isn't OVS automatically setting the bridge MTU to the lowest value among
> the ports connected to that bridge?
>
>
> > Message written by Miguel Angel Ajo Pelayo <majop...@redhat.com>
> > on 22.09.2017 at 10:32:
> >
> > I believe that one of the problems is that if you set a certain MTU on
> > an OVS switch, newly connected ports will automatically be assigned that
> > MTU by the ovs-vswitchd daemon.
> >
> >
> >
> > On Wed, Sep 20, 2017 at 10:45 PM, Ian Wells <ijw.ubu...@cack.org.uk>
> wrote:
> > Since OVS is doing L2 forwarding, you should be fine setting the MTU to
> as high as you choose, which would probably be the segment_mtu in the
> config, since that's what it defines - the largest MTU that (from the
> Neutron API perspective) is usable and (from the OVS perspective) will be
> used in the system.  A 1500MTU Neutron network will work fine over a
> 9000MTU OVS switch.
> >
> > What won't work is sending a 1500MTU network to a 9000MTU router port.
> So if you're doing any L3 (where the packet arrives at an interface, rather
> than travels a segment) you need to consider those MTUs in light of the
> Neutron network they're attached to.
> > --
> > Ian.
> >
> > On 20 September 2017 at 09:58, Ihar Hrachyshka <ihrac...@redhat.com>
> wrote:
> > On Wed, Sep 20, 2017 at 9:33 AM, Ajay Kalambur (akalambu)
> > <akala...@cisco.com> wrote:
> > > So I was forced to explicitly set the MTU on br-int
> > > ovs-vsctl set int br-int mtu_request=9000
> > >
> > >
> > > Without this the tap device added to br-int would get MTU 1500
> > >
> > > Would this be something the ovs l2 agent can handle since it creates
> the bridge?
> >
> > Yes, I guess we could do that if it fixes your problem. The issue
> > stems from the fact that we use a single bridge for different networks
> > with different MTUs, and it does break some assumptions kernel folks
> > make about a switch (that all attached ports steer traffic in the same
> > l2 domain, which is not the case because of flows we set). You may
> > want to report a bug against Neutron and we can then see how to handle
> > that. It will probably not be as simple as setting the value to 9000
> > because different networks have different MTUs, and plugging those
> > mixed ports in the same bridge may trigger MTU updates on unrelated
> > tap devices. We will need to test how kernel behaves then.
> >
> > Also, you may be interested in reviewing an old openvswitch-dev@
> > thread that I once started here:
> > https://mail.openvswitch.org/pipermail/ovs-dev/2016-June/316733.html
> > Sadly, I never followed up with a test scenario that wouldn't involve
> > OpenStack, for OVS folks to follow up on, so it never moved anywhere.
> >
> > Cheers,
> > Ihar
> >
>
>
>
> —
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] MTU native ovs firewall driver

2017-09-22 Thread Miguel Angel Ajo Pelayo
I believe that one of the problems is that if you set a certain MTU on an
OVS switch, newly connected ports will automatically be assigned that MTU
by the ovs-vswitchd daemon.



On Wed, Sep 20, 2017 at 10:45 PM, Ian Wells  wrote:

> Since OVS is doing L2 forwarding, you should be fine setting the MTU to as
> high as you choose, which would probably be the segment_mtu in the config,
> since that's what it defines - the largest MTU that (from the Neutron API
> perspective) is usable and (from the OVS perspective) will be used in the
> system.  A 1500MTU Neutron network will work fine over a 9000MTU OVS switch.
>
> What won't work is sending a 1500MTU network to a 9000MTU router port.  So
> if you're doing any L3 (where the packet arrives at an interface, rather
> than travels a segment) you need to consider those MTUs in light of the
> Neutron network they're attached to.
> --
> Ian.
>
> On 20 September 2017 at 09:58, Ihar Hrachyshka 
> wrote:
>
>> On Wed, Sep 20, 2017 at 9:33 AM, Ajay Kalambur (akalambu)
>>  wrote:
>> > So I was forced to explicitly set the MTU on br-int
>> > ovs-vsctl set int br-int mtu_request=9000
>> >
>> >
>> > Without this the tap device added to br-int would get MTU 1500
>> >
>> > Would this be something the ovs l2 agent can handle since it creates
>> the bridge?
>>
>> Yes, I guess we could do that if it fixes your problem. The issue
>> stems from the fact that we use a single bridge for different networks
>> with different MTUs, and it does break some assumptions kernel folks
>> make about a switch (that all attached ports steer traffic in the same
>> l2 domain, which is not the case because of flows we set). You may
>> want to report a bug against Neutron and we can then see how to handle
>> that. It will probably not be as simple as setting the value to 9000
>> because different networks have different MTUs, and plugging those
>> mixed ports in the same bridge may trigger MTU updates on unrelated
>> tap devices. We will need to test how kernel behaves then.
>>
>> Also, you may be interested in reviewing an old openvswitch-dev@
>> thread that I once started here:
>> https://mail.openvswitch.org/pipermail/ovs-dev/2016-June/316733.html
>> Sadly, I never followed up with a test scenario that wouldn't involve
>> OpenStack, for OVS folks to follow up on, so it never moved anywhere.
>>
>> Cheers,
>> Ihar
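
As an aside, if the l2 agent were to handle this when it sets up the bridge,
the change could be as small as the following hypothetical sketch (not a
merged patch, and it ignores the mixed-MTU caveat Ihar raises above):

    from neutron.agent.common import ovs_lib

    def set_bridge_mtu(bridge_name='br-int', max_mtu=9000):
        """Ask ovs-vswitchd for a fixed MTU on the bridge's internal port.

        This mimics what the operator otherwise does by hand with
        'ovs-vsctl set int br-int mtu_request=9000'.
        """
        br = ovs_lib.OVSBridge(bridge_name)
        br.set_db_attribute('Interface', bridge_name, 'mtu_request', max_mtu)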
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] team ptg photos

2017-09-22 Thread Miguel Angel Ajo Pelayo
Thanks! :)

On Thu, Sep 21, 2017 at 3:16 AM, Kevin Benton  wrote:

> https://photos.app.goo.gl/Aqa51E2aVkv5b4ah1
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][oslo.messaging][femdc]Topic names for every resource type RPC endpoint

2017-09-20 Thread Miguel Angel Ajo Pelayo
I wrote those lines.

At that time, I tried a couple of things: a publisher and a receiver at that
scale. It was the receiver side that crashed trying to subscribe; the sender
was completely fine.

Sadly I didn't keep the test examples; I should have stored them on GitHub
or something. It shouldn't be hard to replicate, though, if you follow the
oslo_messaging docs.
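
For anyone who wants to reproduce that experiment, a rough sketch of such a
test with the oslo.messaging RPC API follows. This is an approximation, not
the original (lost) test: the transport URL, topic names and count, and the
executor choice are all placeholders:

    import oslo_messaging
    from oslo_config import cfg

    class Endpoint(object):
        def ping(self, ctxt, arg):
            return arg

    def start_receivers(n_topics):
        # One RPC server per topic; the observation back then was that the
        # receiver side fell over well before rabbitmq itself did.
        transport = oslo_messaging.get_rpc_transport(
            cfg.CONF, url='rabbit://guest:guest@localhost:5672/')
        servers = []
        for i in range(n_topics):
            target = oslo_messaging.Target(topic='perf-topic-%d' % i,
                                           server='receiver')
            server = oslo_messaging.get_rpc_server(
                transport, target, [Endpoint()], executor='threading')
            server.start()
            servers.append(server)
        return servers

    servers = start_receivers(1000)

Scaling n_topics up should show where the receiver side starts to break
down.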



On Wed, Sep 20, 2017 at 9:58 AM, Matthieu Simonin  wrote:

> Hello,
>
> In the Neutron docs about RPCs and Callbacks system, it is said[1] :
>
> "With the underlying oslo_messaging support for dynamic topics on the
> receiver
> we cannot implement a per “resource type + resource id” topic, rabbitmq
> seems
> to handle 1’s of topics without suffering, but creating 100’s of
> oslo_messaging receivers on different topics seems to crash."
>
> I wonder if this statement still holds for the new transports supported in
> oslo.messaging (e.g. Kafka, AMQP 1.0) or if it's more of a design limitation.
> I'm interested in any relevant docs/links/reviews on the "topic" :).
>
> Moreover, I'm curious to get an idea of how many different resources a
> Neutron agent would have to manage, and thus how many oslo_messaging
> receivers would be required (e.g. how many security groups a neutron agent
> has to manage?) - at least the order of magnitude.
>
> Best,
>
> Matt
>
>
>
> [1]: https://docs.openstack.org/neutron/latest/contributor/
> internals/rpc_callbacks.html#topic-names-for-every-
> resource-type-rpc-endpoint
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Denver Team Dinner

2017-09-13 Thread Miguel Angel Ajo Pelayo
+1! Thanks for organizing

On Wed, Sep 13, 2017 at 10:11 AM, Sandhya Dasu (sadasu) 
wrote:

> +1
>
> Thanks for organizing.
>
> On 9/13/17, 7:28 AM, "Thomas Morin"  wrote:
>
> +1
>
> -Thomas
>
>
> Takashi Yamamoto, 2017-09-13 03:05:
> > +1
> >
> > On Wed, Sep 13, 2017 at 2:56 AM, IWAMOTO Toshihiro
> >  wrote:
> > > +1
> > > thanks for organizing!
> > >
> > > On Wed, 13 Sep 2017 14:18:45 +0900,
> > > Brian Haley wrote:
> > > >
> > > > +1
> > > >
> > > > On 09/12/2017 10:44 PM, Ihar Hrachyshka wrote:
> > > > > +1
> > > > >
> > > > > On Tue, Sep 12, 2017 at 9:44 PM, Kevin Benton wrote:
> > > > > > +1
> > > > > >
> > > > > > On Tue, Sep 12, 2017 at 8:50 PM, Sławek Kapłoński
> > > > > > wrote:
> > > > > > >
> > > > > > > +1
> > > > > > >
> > > > > > > —
> > > > > > > Best regards
> > > > > > > Slawek Kaplonski
> > > > > > > sla...@kaplonski.pl
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > Message written by Miguel Lavalle on
> > > > > > > > 12.09.2017 at 17:23:
> > > > > > > >
> > > > > > > > Dear Neutrinos,
> > > > > > > >
> > > > > > > > Our social event will take place on Thursday September
> > > > > > > > 12th at 7:30pm.
> > > > > > > > The venue is going to be https://www.famousdaves.com/
> Denv
> > > > > > > > er-Stapleton. It is
> > > > > > > > located 0.4 miles from the Renaissance Hotel, which
> > > > > > > > translates to an easy 9
> > > > > > > > minutes walk.
> > > > > > > >
> > > > > > > > I have a reservation for a group of 30 people under my
> > > > > > > > name. Please
> > > > > > > > respond to this message with your attendance confirmation
> > > > > > > > by Wednesday
> > > > > > > > night, so I can get a more accurate head count.
> > > > > > > >
> > > > > > > > Looking forward to see y'all Thursday night
> > > > > > > >
> > > > > > > > Best regards
> > > > > > > >
> > > > > > > > Miguel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [neutron] - transitioning PTL role to Miguel Lavalle

2017-09-12 Thread Miguel Angel Ajo Pelayo
Kevin, thank you for all the effort and energy you have put into
openstack-neutron during the last few years. It's been great to have you on
the project.

On Mon, Sep 11, 2017 at 5:18 PM, Ihar Hrachyshka 
wrote:

> It's very sad news for the team, but I hope that Kevin will still be
> able to contribute, or at least stay in touch. I am sure Miguel will
> successfully lead the community in Queens and beyond. Thanks to Miguel
> for stepping in the ring. Thanks to Kevin for his leadership in recent
> cycles.
>
> King is dead, long live the King!
>
> Ihar
>
> On Fri, Sep 8, 2017 at 8:59 PM, Kevin Benton  wrote:
> > Hi everyone,
> >
> > Due to a change in my role at my employer, I no longer have time to be
> the
> > PTL of Neutron. Effective immediately, I will be transitioning the PTL
> role
> > to Miguel Lavalle.
> >
> > Miguel is an excellent leader in the community and has experience
> reviewing
> > patches as a core, reviewing feature requests and specs as a driver,
> > implementing cross-project features, handling docs, and on-boarding new
> > contributors.
> >
> > We will make the switch official next week at the PTG with the
> appropriate
> > patches to the governance repo.
> >
> > If anyone has any concerns with this transition plan, please reach out to
> > me, Thierry, or Doug Hellmann.
> >
> > Cheers,
> > Kevin Benton
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg][nova][neutron] modelling network capabilities and capacity in placement and nova neutron port binding negotiation.

2017-09-11 Thread Miguel Angel Ajo Pelayo
I'm also interested in this topic. :)

On Mon, Sep 11, 2017 at 11:12 AM, Jay Pipes  wrote:

> I'm interested in this. I get in to Denver this evening so if we can do
> this session tomorrow or later, that would be super.
>
> Best,
> -jay
>
>
> On 09/11/2017 01:11 PM, Mooney, Sean K wrote:
>
>> Hi everyone,
>>
>> I’m interested in set up a white boarding session at the ptg to discuss
>>
>> How to model network backend in placement and use that info as part of
>> scheduling
>>
>> This work would also intersect on the nova neutron port binding
>> negotiation
>>
>> Work that is also in flight so I think there is merit in combining both
>> topic into one
>>
>> Session.
>>
>> For several releases we have been discussing a negotiation protocol that
>> would allow nova/compute services to tell neutron what virtual and
>> physical interfaces a hypervisor can support, and then allow neutron to
>> select from that set the most appropriate vif type based on the
>> capabilities of the network backend deployed by the host.
>>
>> Extending that concept with the capabilities provided by placement and
>> traits will enable us to model the network capabilities of a specific
>> network backend in a scheduler-friendly way, without nova needing to
>> understand networking.
>>
>> To that end, if people are interested in having a whiteboarding session
>> to dig into this, let me know.
>>
>> Regards
>>
>> Seán
>>
>> --
>> Intel Shannon Limited
>> Registered in Ireland
>> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
>> Registered Number: 308263
>> Business address: Dromore House, East Park, Shannon, Co. Clare
>>
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - transitioning PTL role to Miguel Lavalle

2017-09-11 Thread Miguel Angel Ajo Pelayo
Big +1 for Miguel Lavalle from me. Miguel, thank you for taking this
responsibility on behalf of the Neutron/OpenStack community.

On Fri, Sep 8, 2017 at 8:59 PM, Kevin Benton  wrote:

> Hi everyone,
>
> Due to a change in my role at my employer, I no longer have time to be the
> PTL of Neutron. Effective immediately, I will be transitioning the PTL role
> to Miguel Lavalle.
>
> Miguel is an excellent leader in the community and has experience
> reviewing patches as a core, reviewing feature requests and specs as a
> driver, implementing cross-project features, handling docs, and on-boarding
> new contributors.
>
> We will make the switch official next week at the PTG with the appropriate
> patches to the governance repo.
>
> If anyone has any concerns with this transition plan, please reach out to
> me, Thierry, or Doug Hellmann.
>
> Cheers,
> Kevin Benton
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - PTG neutron attendee info

2017-09-11 Thread Miguel Angel Ajo Pelayo
Thank you Kevin & Miguel! ;)

On Thu, Sep 7, 2017 at 4:04 PM, Kevin Benton  wrote:

> Hello everyone,
>
> With the help of Miguel we have a tentative schedule in the PTG. Please
> check the etherpad and if there is anything missing you wanted to see
> discussed, please reach out to me or Miguel right away to get it added.
>
> Cheers,
> Kevin Benton
>
> On Thu, Jul 27, 2017 at 9:53 PM, Kevin Benton  wrote:
>
>> Hi all,
>>
>> If you are planning on attending the PTG and the Neutron sessions, please
>> add your name to the etherpad[1] so we can get a rough size estimate.
>>
>>
>> 1. https://etherpad.openstack.org/p/neutron-queens-ptg
>>
>> Cheers,
>> Kevin Benton
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] removing screen from devstack - RSN

2017-09-11 Thread Miguel Angel Ajo Pelayo
I wonder if it makes sense to provide a helper script to do what's
explained in the document.

So we could run: ~/devstack/tools/run_locally.sh n-sch.

If yes, I'll send the patch.

On Fri, Sep 8, 2017 at 3:00 PM, Eric Fried  wrote:

> Oh, are we talking about the logs produced by CI jobs?  I thought we
> were talking about your local devstack itself.  Because there, I
> don't think you should be seeing log files like this anymore.
> Logging is done via systemd and can be viewed via journalctl [1].
>
> The exceptions are things that run under apache, like horizon, keystone,
> and placement - their log files can be found wherever apache is set up
> to send 'em.  E.g. [2].
>
> As far as the names go, I *think* we've done away with 'q' as the
> neutron prefixy thing at this point.  On my (pike-ish) setup, the
> devstack neutron API service is quite appropriately called
> devstack@neutron-api.service.
>
> [1] https://docs.openstack.org/devstack/latest/systemd#querying-logs
> [2] http://paste.openstack.org/raw/620754/
>
>
> On 09/08/2017 03:49 PM, Sean Dague wrote:
> > I would love to. Those were mostly left because devstack-gate (and
> > related tooling like elasticsearch) is not branch aware, so things get
> > ugly on the conditionals for changing expected output files.
> >
> > That might be a good popup infra topic at PTG.
> >
> > On 09/08/2017 04:17 PM, John Villalovos wrote:
> >> Does this mean we can now get more user friendly names for the log
> files?
> >>
> >> Currently I see names like:
> >> screen-dstat.txt.gz
> >> screen-etcd.txt.gz
> >> screen-g-api.txt.gz
> >> screen-g-reg.txt.gz
> >> screen-ir-api.txt.gz
> >> screen-ir-cond.txt.gz
> >> screen-keystone.txt.gz
> >> screen-n-api-meta.txt.gz
> >> screen-n-api.txt.gz
> >> screen-n-cauth.txt.gz
> >> screen-n-cond.txt.gz
> >> screen-n-cpu.txt.gz
> >> screen-n-novnc.txt.gz
> >> screen-n-sch.txt.gz
> >> screen-peakmem_tracker.txt.gz
> >> screen-placement-api.txt.gz
> >> screen-q-agt.txt.gz
> >> screen-q-dhcp.txt.gz
> >> screen-q-l3.txt.gz
> >> screen-q-meta.txt.gz
> >> screen-q-metering.txt.gz
> >> screen-q-svc.txt.gz
> >> screen-s-account.txt.gz
> >> screen-s-container.txt.gz
> >> screen-s-object.txt.gz
> >> screen-s-proxy.txt.gz
> >>
> >> People new to OpenStack don't really know that 'q' means neutron.
> >>
> >>
> >>
> >> On Thu, Sep 7, 2017 at 5:45 AM, Sean Dague  >> > wrote:
> >>
> >> On 08/31/2017 06:27 AM, Sean Dague wrote:
> >> > The work that started last cycle to make devstack only have a single
> >> > execution mode, one that is the same between automated QA and local,
> >> > is nearing its completion.
> >> >
> >> > https://review.openstack.org/#/c/499186/
> >>  is the patch that will
> remove
> >> > screen from devstack (which was only left as a fall back for
> things like
> >> > grenade during Pike). Tests are currently passing on all the
> gating jobs
> >> > for it. And experimental looks mostly useful.
> >> >
> >> > The intent is to merge this in about a week (right before PTG).
> So, if
> >> > you have a complicated devstack plugin you think might be
> affected by
> >> > this (and were previously making jobs pretend to be grenade to
> keep
> >> > screen running), now is the time to run tests against this patch
> and see
> >> > where things stand.
> >>
> >> This patch is in the gate and now merging, and with it devstack now
> has
> >> a single run mode, using systemd units, which is the same between
> test
> >> and development.
> >>
> >> Thanks to everyone helping with the transition!
> >>
> >> -Sean
> >>
> >> --
> >> Sean Dague
> >> http://dague.net
> >>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [neutron] mod_wsgi support (pike bug?)

2017-09-05 Thread Miguel Angel Ajo Pelayo
As a note, in OSP we also include configuration directories and the like:

https://review.rdoproject.org/r/gitweb?p=openstack/neutron-distgit.git;a=blob;f=neutron-server.service;h=e68024cb9dc06e474b1ac9473bff93c3d892b4d6;hb=48a9d1aa77506d0c60d5bc448b7c5c303083aa68#l8

Config directories make things a bit more future-proof, and make it easy to
integrate with vendor plugins without the need to modify the service file.


On Tue, Sep 5, 2017 at 9:27 AM, Miguel Angel Ajo Pelayo <majop...@redhat.com
> wrote:

> Why do we need to put all the configuration in a single file?
>
> That would be a big, big change for deployers. It'd be great if we can
> think of an alternate solution (not sure how that's being handled for
> other services, though).
>
> Best regards,
> Miguel Ángel.
>
> On Mon, Sep 4, 2017 at 3:01 PM, Kevin Benton <ke...@benton.pub> wrote:
>
>> Yes, unfortunately I didn't make it back to the patch in time to adjust
>> devstack to dump all of the configuration into one file (instead of
>> /etc/neutron/neutron.conf /etc/neutron/plugins/ml2.conf etc). I did test
>> locally with my dev environment around the time that RPC server patch went
>> in, but there may have been a regression since then, given that it's not
>> tested, as Matt pointed out.
>>
>> It appears that puppet is still spreading the config files for the server
>> into multiple locations as well[1]. Does it inherit that logic from
>> devstack? Because that will need to be changed to push all of the relevant
>> server config into one conf.
>>
>> 1. http://logs.openstack.org/82/500182/3/check/gate-puppet-o
>> penstack-integration-4-scenario004-tempest-ubuntu-xenial/
>> 791523c/logs/etc/neutron/plugins/
>>
>> On Sun, Sep 3, 2017 at 12:03 PM, Mohammed Naser <mna...@vexxhost.com>
>> wrote:
>>
>>> On Sun, Sep 3, 2017 at 3:03 PM, Mohammed Naser <mna...@vexxhost.com>
>>> wrote:
>>> > On Sun, Sep 3, 2017 at 2:20 PM, Matthew Treinish <mtrein...@kortar.org>
>>> wrote:
>>> >> On Sun, Sep 03, 2017 at 01:47:24PM -0400, Mohammed Naser wrote:
>>> >>> Hi folks,
>>> >>>
>>> >>> I've attempted to enable mod_wsgi support in our dev environment with
>>> >>> Puppet however it results in a traceback.  I figured it was an
>>> >>> environment thing so I looked into moving the Puppet CI to test using
>>> >>> mod_wsgi and it resulted in the same error.
>>> >>>
>>> >>> http://logs.openstack.org/82/500182/3/check/gate-puppet-open
>>> stack-integration-4-scenario004-tempest-ubuntu-xenial/791523
>>> c/logs/apache/neutron_wsgi_error.txt.gz
>>> >>>
>>> >>> Would anyone from the Neutron team be able to give input on this?
>>> >>> We'd love to add gating for Neutron deployed by mod_wsgi which can
>>> >>> help find similar issues.
>>> >>>
>>> >>
>>> >> Neutron never got their wsgi support working in Devstack either. The
>>> patch
>>> >> adding that: https://review.openstack.org/#/c/439191/ never passed
>>> the gate and
>>> >> seems to have lost the attention of the author. The wsgi support in
>>> neutron
>>> >> probably doesn't work yet, and is definitely untested. IIRC, the
>>> issue they were
>>> >> hitting was loading the config files. [1] I don't think I saw any
>>> progress on it
>>> >> after that though.
>>> >>
>>> >> The TC goal doc [2] probably should say something about it never
>>> landing and
>>> >> missing pike.
>>> >>
>>> >
>>> > That would make sense.  The release notes also state that it does
>>> > offer the ability to run inside mod_wsgi which can be misleading to
>>> > deployers (that was the main reason I thought we can start testing
>>> > using it):
>>> >
>>> Sigh, hit send too early.  Here is the link:
>>>
>>> http://git.openstack.org/cgit/openstack/neutron/commit/?id=9
>>> 16bc96ee214078496b4b38e1c93f36f906ce840
>>> >
>>> >>
>>> >> -Matt Treinish
>>> >>
>>> >>
>>> >> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-June
>>> /117830.html
>>> >> [2] https://governance.openstack.org/tc/goals/pike/deploy-api-in
>>> -wsgi.html#neutron
>>> >>

Re: [openstack-dev] [neutron] mod_wsgi support (pike bug?)

2017-09-05 Thread Miguel Angel Ajo Pelayo
Why do we need to put all the configuration in a single file?

That would be a big, big change for deployers. It'd be great if we can think
of an alternate solution (not sure how that's being handled for other
services, though).

Best regards,
Miguel Ángel.

On Mon, Sep 4, 2017 at 3:01 PM, Kevin Benton  wrote:

> Yes, unfortunately I didn't make it back to the patch in time to adjust
> devstack to dump all of the configuration into one file (instead of
> /etc/neutron/neutron.conf /etc/neutron/plugins/ml2.conf etc). I did test
> locally with my dev environment around the time that RPC server patch went
> in, but there may have been a regression since then, given that it's not
> tested, as Matt pointed out.
>
> It appears that puppet is still spreading the config files for the server
> into multiple locations as well[1]. Does it inherit that logic from
> devstack? Because that will need to be changed to push all of the relevant
> server config into one conf.
>
> 1. http://logs.openstack.org/82/500182/3/check/gate-puppet-
> openstack-integration-4-scenario004-tempest-ubuntu-
> xenial/791523c/logs/etc/neutron/plugins/
>
> On Sun, Sep 3, 2017 at 12:03 PM, Mohammed Naser 
> wrote:
>
>> On Sun, Sep 3, 2017 at 3:03 PM, Mohammed Naser 
>> wrote:
>> > On Sun, Sep 3, 2017 at 2:20 PM, Matthew Treinish 
>> wrote:
>> >> On Sun, Sep 03, 2017 at 01:47:24PM -0400, Mohammed Naser wrote:
>> >>> Hi folks,
>> >>>
>> >>> I've attempted to enable mod_wsgi support in our dev environment with
>> >>> Puppet however it results in a traceback.  I figured it was an
>> >>> environment thing so I looked into moving the Puppet CI to test using
>> >>> mod_wsgi and it resulted in the same error.
>> >>>
>> >>> http://logs.openstack.org/82/500182/3/check/gate-puppet-open
>> stack-integration-4-scenario004-tempest-ubuntu-xenial/
>> 791523c/logs/apache/neutron_wsgi_error.txt.gz
>> >>>
>> >>> Would anyone from the Neutron team be able to give input on this?
>> >>> We'd love to add gating for Neutron deployed by mod_wsgi which can
>> >>> help find similar issues.
>> >>>
>> >>
>> >> Neutron never got their wsgi support working in Devstack either. The
>> patch
>> >> adding that: https://review.openstack.org/#/c/439191/ never passed
>> the gate and
>> >> seems to have lost the attention of the author. The wsgi support in
>> neutron
>> >> probably doesn't work yet, and is definitely untested. IIRC, the issue
>> they were
>> >> hitting was loading the config files. [1] I don't think I saw any
>> progress on it
>> >> after that though.
>> >>
>> >> The TC goal doc [2] probably should say something about it never
>> landing and
>> >> missing pike.
>> >>
>> >
>> > That would make sense.  The release notes also state that it does
>> > offer the ability to run inside mod_wsgi which can be misleading to
>> > deployers (that was the main reason I thought we can start testing
>> > using it):
>> >
>> Sigh, hit send too early.  Here is the link:
>>
>> http://git.openstack.org/cgit/openstack/neutron/commit/?id=9
>> 16bc96ee214078496b4b38e1c93f36f906ce840
>> >
>> >>
>> >> -Matt Treinish
>> >>
>> >>
>> >> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-June
>> /117830.html
>> >> [2] https://governance.openstack.org/tc/goals/pike/deploy-api-in
>> -wsgi.html#neutron
>> >>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][infra] Functional job failure rate at 100%

2017-08-10 Thread Miguel Angel Ajo Pelayo
Good (amazing) job folks. :)

On Aug 10, 2017 9:43, "Thierry Carrez"  wrote:

> Oh, that's good for us. Should still be fixed, if only so that we can
> test properly :)
>
> Kevin Benton wrote:
> > This is just the code simulating the conntrack entries that would be
> > created by real traffic in a production system, right?
> >
> > On Wed, Aug 9, 2017 at 11:46 AM, Jakub Libosvar  > > wrote:
> >
> > On 09/08/2017 18:23, Jeremy Stanley wrote:
> > > On 2017-08-09 15:29:04 +0200 (+0200), Jakub Libosvar wrote:
> > > [...]
> > >> Is it possible to switch used image for jenkins machines to use
> > >> back the older version? Any other ideas how to deal with the
> > >> kernel bug?
> > >
> > > Making our images use non-current kernel packages isn't trivial,
> but
> > > as Thierry points out in his reply this is not just a problem for
> > > our CI system. Basically Ubuntu has broken OpenStack (and probably
> a
> > > variety of other uses of conntrack) for a lot of people following
> > > kernel updates in 16.04 LTS so the fix needs to happen there
> > > regardless. Right now, basically, Ubuntu Xenial is not a good
> > > platform to be running OpenStack on until they get the kernel
> > > regression addressed.
> >
> > True. Fortunately, the impact is not as catastrophic for Neutron as it
> > might seem at first look. Not sure about the other projects, though.
> > Neutron doesn't create conntrack entries in production code - only in
> > testing. That said, agents should work just fine even with the
> > kernel bug.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovn] metadata agent implementation

2017-05-08 Thread Miguel Angel Ajo Pelayo
On Mon, May 8, 2017 at 2:48 AM, Michael Still  wrote:

> It would be interesting for this to be built in a way where other
> endpoints could be added to the list and have extra headers added to them.
>
> For example, we could end up with something quite similar to EC2 IAM if
> we could add headers on the way through for requests to OpenStack endpoints.
>
> Do you think the design your proposing will be extensible like that?
>


I believe we may focus on achieving parity with the neutron reference
implementation first; later on, what you're proposing would probably need
to be modelled on the neutron side.

Could you provide a practical example of how that would work anyway?


>
> Thanks,
> Michael
>
>
>
>
> On Fri, May 5, 2017 at 10:07 PM, Daniel Alvarez Sanchez <
> dalva...@redhat.com> wrote:
>
>> Hi folks,
>>
>> Now that it looks like the metadata proposal is more refined [0], I'd like
>> to get some feedback from you on the driver implementation.
>>
>> The ovn-metadata-agent in networking-ovn will be responsible for
>> creating the namespaces, spawning haproxies and so on. But also,
>> it must implement most of the "old" neutron-metadata-agent functionality
>> which listens on a UNIX socket and receives requests from haproxy,
>> adds some headers and forwards them to Nova. This means that we can
>> import/reuse big part of neutron code.
>>
Makes sense; this way you would avoid depending on an extra co-hosted
service, reducing deployment complexity.


> I wonder what you guys think about depending on neutron tree for the
>> agent implementation despite we can benefit from a lot of code reuse.
>> On the other hand, if we want to get rid of this dependency, we could
>> probably write the agent "from scratch" in C (what about having C
>> code in the networking-ovn repo?) and, at the same time, it should
>> buy us a performance boost (probably not very noticeable since it'll
>> respond to requests from local VMs involving a few lookups and
>> processing simple HTTP requests; talking to nova would take most
>> of the time and this only happens at boot time).
>>
>
I would try to keep that part in Python, like everything in the
networking-ovn repo. I remember that Jakub made lots of improvements in the
neutron-metadata-agent area by caching; I'd make sure we reuse that if it's
of use to us (not sure if we used it for nova communication or not).

The neutron metadata agent apparently has a get_ports RPC call [2] to the
neutron-server plugin. We don't want RPC calls but OVSDB to get that info;
I have vague proof of caching also being used for those requests [1], but
with OVSDB we have that for free.

I don't know; the agent is ~300 LOC, and it seems to me like a whole
re-write in Python (copying whatever is necessary) could be a reasonable
way to go, but I guess that going down that rabbit hole would tell you
better whether I'm wrong or whether it makes sense.
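
To make the discussion concrete, this is roughly the per-request behaviour
that would need re-implementing (or importing): the reference agent signs
the instance id with the shared secret and forwards the request to Nova. A
minimal sketch follows; the helper, URL, and secret values are hypothetical,
while the header names are the ones the reference implementation uses:

    import hashlib
    import hmac

    import requests  # any HTTP client would do

    NOVA_METADATA_URL = 'http://127.0.0.1:8775'  # nova metadata API
    SHARED_SECRET = b'...'  # metadata_proxy_shared_secret

    def forward_to_nova(path, instance_id, tenant_id):
        """Add the instance headers and proxy the request to Nova.

        Nova verifies X-Instance-ID-Signature against the same shared
        secret, so a VM cannot spoof another instance's metadata.
        """
        signature = hmac.new(SHARED_SECRET,
                             instance_id.encode('utf-8'),
                             hashlib.sha256).hexdigest()
        headers = {
            'X-Instance-ID': instance_id,
            'X-Tenant-ID': tenant_id,
            'X-Instance-ID-Signature': signature,
        }
        return requests.get(NOVA_METADATA_URL + path, headers=headers)

The interesting part for ovn-metadata-agent is how instance_id gets
resolved: instead of the get_ports RPC mentioned above, it would come from
an OVSDB lookup based on the request's source address and network.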


>
>> I would probably aim for a Python implementation
>>
> +1000


> reusing/importing
>> code from neutron tree but I'm not sure how we want to deal with
>> changes in neutron codebase (we're actually importing code now).
>> Looking forward to reading your thoughts :)
>>
>
I guess the neutron-ns-metadata haproxy spawning [3] can be reused from
neutron; I wonder if it would make sense to move that to neutron_lib? I
believe that's the key thing that can be reused.

If we don't reuse it, we need to maintain it in two places; if we reuse it,
we can be broken by changes in the neutron repo, but I'm sure we're flexible
enough to react to such changes.

Cheers! :D


>
>> Thanks,
>> Daniel
>>
>> [0] https://review.openstack.org/#/c/452811/
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Rackspace Australia
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] stepping down from neutron core team

2017-04-28 Thread Miguel Angel Ajo Pelayo
Hi everybody,

Some of you already know, but I wanted to make it official.

Recently I moved to work on the networking-ovn component, and on OVS/OVN
itself, and while I'll stick around and be available on IRC for any
questions, I'm no longer doing a good job with neutron reviews, so...

It's time to leave room for new reviewers.

It's always a pleasure to work with you folks.

Best regards,
Miguel Ángel Ajo.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][gate] functional job busted

2017-03-15 Thread Miguel Angel Ajo Pelayo
Thank you for the patches. I merged them, released 1.1.0 and proposed [1]

Cheers!,

[1] https://review.openstack.org/445884


On Wed, Mar 15, 2017 at 10:14 AM, Gorka Eguileor 
wrote:

> On 14/03, Ihar Hrachyshka wrote:
> > Hi all,
> >
> > the patch that started to produce a log index file for logstash [1] and
> > the patch that switched the metadata proxy to haproxy [2] landed, and
> > together they busted the functional job, because the latter produces log
> > messages with null-bytes inside, while os-log-merger is not resilient
> > against them.
> >
> > If the functional job were in the gate and not just in the check queue,
> > that would not happen.
> >
> > Attempts to fix the situation in multiple ways are at [3]. (For the
> > os-log-merger patches, we will need a new release and then bump the
> > version used in the gate, so short term the neutron patches seem more
> > viable.)
> >
> > I will need support from both the authors of os-log-merger and other
> > neutron members to unravel this. I am going offline in a moment, and
> > hope someone will take care of the patches up for review, and land
> > what's due.
> >
> > [1] https://review.openstack.org/#/c/442804/ [2]
> > https://review.openstack.org/#/c/431691/ [3]
> > https://review.openstack.org/#/q/topic:fix-os-log-merger-crash
> >
> > Thanks,
> > Ihar
>
> Hi Ihar,
>
> That is an unexpected case that never came up during our tests or usage,
> but it is indeed something the script should take into account.
>
> Thanks for the os-log-merger patches, I've reviewed them and they look
> good to me, so hopefully they'll land before you come back online.  ;-)
>
> Cheers,
> Gorka.
>
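
For reference, the class of crash discussed here is simple to guard against
when merging log lines; a minimal sketch of the kind of sanitizing involved
(illustrative only, not the actual os-log-merger patch):

    def sanitize(line):
        # Drop null bytes (e.g. from the haproxy-based metadata proxy
        # logs) so one corrupt line cannot abort the whole merge.
        return line.replace('\x00', '')

    with open('screen-q-meta.txt', errors='replace') as log:
        for raw_line in log:
            line = sanitize(raw_line)
            # ... timestamp parsing / merging would continue here ...
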
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stepping down from Neutron roles

2017-03-07 Thread Miguel Angel Ajo Pelayo
Nate, it was a pleasure working with you; you and your team made great
contributions to OpenStack and neutron. I'll be very happy if we ever have
the chance to work together again.
Best regards, and very good luck, my friend.

On Tue, Mar 7, 2017 at 4:55 AM, Kevin Benton  wrote:

> Hi Nate,
>
> Thanks for all of your contributions and good luck in your future
> endeavors! You're always welcome back. :)
>
>
> On Mar 6, 2017 13:15, "Nate Johnston"  wrote:
>
> All,
>
> I have been delaying this long enough... sadly, due to changes in
> direction I
> am no longer able to spend time working on OpenStack, and as such I need to
> resign my duties as Services lieutenant, and liaison to the Infra team.  My
> time in the Neutron and FWaaS community has been one of the most rewarding
> experiences of my career.  Thank you to everyone I met at the summits and
> who
> took the time to work with me on my contributions.  And thank you to the
> many
> of you who have become my personal friends.  If I see an opportunity in the
> future to return to OpenStack development I will jump on it in a hot
> minute.
> But until then, I'll be cheering you on from the sidelines.
>
> All the best,
>
> --N.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] br-int to use registers instead of ovsdb tags (was: Re: Enable arp_responder without l2pop)

2017-02-22 Thread Miguel Angel Ajo Pelayo
On Wed, Feb 22, 2017 at 1:53 PM, Thomas Morin 
wrote:

> Wed Feb 22 2017 11:13:18 GMT-0500 (EST), Anil Venkata:
>
>
> While relevant, I think this is not possible until br-int allows matching
> the network a packet belongs to (the ovsdb port tags don't let you do that
> until the packet leaves br-int with a NORMAL action).
>
>> Ajo has told me yesterday that the OVS firewall driver uses registers
>> precisely to do that. Making this generic (and not specific to the OVS
>> firewall driver) would be a prerequisite before you can add ARP responder
>> rules in br-int.
>>
>>
> [...] Spoke to Ajo on this. He said we can follow the above suggestion,
> i.e. do the same as the firewall driver does in br-int, or wait till the
> OVS flow extension is implemented (but this will take time due to lack of
> resources)
>
>
> I think using registers instead of ovsdb port tags should be seen as a
> common pre-requisite for both the ARP responder in br-int and the OVS flow
> extension work.
> So waiting for resources on the latter should not be seen as the problem...
> although you still need some resources to use registers in br-int...
>
>
Those port/net tagging parts were designed as some of the fixed stages of
the OpenFlow pipeline. If we wanted to pursue this, I feel we may need to
wait for the pipeline to eventually be ready.

An alternative option would be moving the port/net tagging to a common
place for the OVS firewall and the hybrid firewall. But I'm not sure how
complex that could be.
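
To make the register idea concrete, here is a small illustrative sketch of
the kind of flows involved, assuming (as the OVS firewall driver does with
its registers) that reg6 carries a local network identifier. The port
number, network id, and table numbers are made up:

    # With plain ovsdb port tags, br-int flows cannot match on the network
    # before the packet hits the NORMAL action. With a register stamped
    # early in the pipeline, later tables can match per network.
    flows = [
        # Stamp (made-up) network id 0x7 on packets entering from port 42...
        'table=0,priority=100,in_port=42,'
        'actions=load:0x7->NXM_NX_REG6[],resubmit(,60)',
        # ...so e.g. an ARP responder can be scoped to that network
        # (full ARP responder actions elided).
        'table=60,priority=90,arp,reg6=0x7,arp_tpa=10.0.0.1,actions=...',
    ]
    for flow in flows:
        print('ovs-ofctl add-flow br-int "%s"' % flow)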




> -Thomas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Alternative approaches for L3 HA

2017-02-22 Thread Miguel Angel Ajo Pelayo
I have updated the spreadsheet. In the case of RH/RDO we're using the same
architecture; in the case of HA, pacemaker is not taking care of those
services anymore since the HA-NG implementation.

We let systemd take care of restarting the services that die, and we worked
with the community to make sure that agents and services are robust in the
face of dependent-service failures (database, rabbitmq), so that they
reconnect and continue when those become available.

On Wed, Feb 22, 2017 at 11:28 AM, Adam Spiers  wrote:

> Kosnik, Lubosz  wrote:
> > About the success of RDO, we need to remember that this deployment
> > utilizes Pacemaker, and when I was working on this feature (I even spoke
> > with Assaf about it) this external application was doing everything to
> > make this solution work.
> > Pacemaker was responsible for checking external and internal
> > connectivity, detecting split brain, and electing the master; keepalived
> > was running, but Pacemaker was automatically killing all services and
> > moving FIPs.
> > Assaf - is there any change in this implementation in RDO? Or are you
> > still doing everything outside of Neutron?
> >
> > Because if RDO's success is built on Pacemaker, it means that yes,
> > Neutron needs some solution which will be available for more than RH
> > deployments.
>
> Agreed.
>
> With help from others, I have started an analysis of some of the
> different approaches to L3 HA:
>
> https://ethercalc.openstack.org/Pike-Neutron-L3-HA
>
> (although I take responsibility for all mistakes ;-)
>
> It would be great if someone from RH or RDO could provide information
> on how this RDO (and/or RH OSP?) solution based on Pacemaker +
> keepalived works - if so, I volunteer to:
>
>   - help populate column E of the above sheet so that we can
> understand if there are still remaining gaps in the solution, and
>
>   - document it (e.g. in the HA guide).  Even if this only ended up
> being considered as a shorter-term solution, I think it's still
> worth documenting so that it's another option available to
> everyone.
>
> Thanks!
>


Re: [openstack-dev] [neutron] - Neutron team social in Atlanta on Thursday

2017-02-20 Thread Miguel Angel Ajo Pelayo
+1 :-)

On Mon, Feb 20, 2017 at 9:16 AM, John Davidge 
wrote:

> +1
>
> On 2/20/17, 4:48 AM, "Carlos Gonçalves"  wrote:
>
> >+1
> >
> >On Mon, Feb 20, 2017 at 9:17 AM, Kevin Benton
> > wrote:
> >
> >No problem. Keep sending in RSVPs if you haven't already.
> >
> >On Mon, Feb 20, 2017 at 2:59 AM, Furukawa, Yushiro
> > wrote:
> >
> >
> >
> >+1
> >
> >Sorry for late, Kevin!!
> >
> >
> >  Yushiro Furukawa
> >
> >From: Kevin Benton [mailto:ke...@benton.pub]
> >
> >
> >
> >
> >Hi all,
> >
> >
> >I'm organizing a Neutron social event for Thursday evening in Atlanta
> >somewhere near the venue for dinner/drinks. If you're interested, please
> >reply to this email with a "+1" so I can get a general count for a
> >reservation.
> >
> >
> >
> >Cheers,
> >
> >Kevin Benton
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >


Re: [openstack-dev] [neutron] - Team photo

2017-02-20 Thread Miguel Angel Ajo Pelayo
Lol, ack :)

On Mon, Feb 20, 2017 at 2:37 AM, Kevin Benton  wrote:

> Clothes are strongly recommended as far as I understand it.
>
> On Mon, Feb 20, 2017 at 1:47 AM, Gary Kotton  wrote:
>
>> What is the dress code? :)
>>
>>
>>
>> *From: *"Das, Anindita" 
>> *Reply-To: *OpenStack List 
>> *Date: *Monday, February 20, 2017 at 5:16 AM
>> *To: *OpenStack List 
>> *Subject: *Re: [openstack-dev] [neutron] - Team photo
>>
>>
>>
>> +1
>>
>>
>>
>> *From: *Kevin Benton 
>> *Reply-To: *"OpenStack Development Mailing List (not for usage
>> questions)" 
>> *Date: *Friday, February 17, 2017 at 5:08 PM
>> *To: *"openstack-dev@lists.openstack.org" > .org>
>> *Subject: *[openstack-dev] [neutron] - Team photo
>>
>>
>>
>> Hello!
>>
>>
>>
>> Is everyone free Thursday at 11:20AM (right before lunch break) for 10
>> minutes for a group photo?
>>
>>
>>
>> Cheers,
>> Kevin Benton
>>


Re: [openstack-dev] [gate][neutron][infra] tempest jobs timing out due to general sluggishness of the node?

2017-02-10 Thread Miguel Angel Ajo Pelayo
I believe those are traces left by the reference implementation of cinder
setting a very high debug level on tgtd. I'm not sure if that's related or
the culprit at all (probably the culprit is a mix of things).

I wonder if we could disable such verbosity on tgtd, which is certainly
going to slow things down.

On Fri, Feb 10, 2017 at 9:07 AM, Antonio Ojea  wrote:

> I guess it's an infra issue, specifically related to the storage, or the
> network that provide the storage.
>
> If you look at the syslog file [1] , there are a lot of this entries:
>
> Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_task_tx_start(2024) no more data
> Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_task_tx_start(1996) found a task 71 131072 0 0
> Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: iscsi_data_rsp_build(1136) 131072 131072 0 26214471
> Feb 09 04:20:42 ubuntu-xenial-rax-ord-7193667 tgtd[8542]: tgtd: __cmd_done(1281) (nil) 0x2563000 0 131072
>
> grep tgtd syslog.txt.gz| wc
>   139602 1710808 15699432
>
> [1] http://logs.openstack.org/95/429095/2/check/gate-tempest-
> dsvm-neutron-dvr-ubuntu-xenial/35aa22f/logs/syslog.txt.gz
>
>
>
> On Fri, Feb 10, 2017 at 5:59 AM, Ihar Hrachyshka 
> wrote:
>
>> Hi all,
>>
>> I noticed lately a number of job failures in neutron gate that all
>> result in job timeouts. I describe
>> gate-tempest-dsvm-neutron-dvr-ubuntu-xenial job below, though I see
>> timeouts happening in other jobs too.
>>
>> The failure mode is all operations, ./stack.sh and each tempest test
>> take significantly more time (like 50% to 150% more, which results in
>> the job timeout being triggered). An example of what I mean can be found in [1].
>>
>> A good run usually takes ~20 minutes to stack up devstack; then ~40
>> minutes to pass full suite; a bad run usually takes ~30 minutes for
>> ./stack.sh; and then 1:20h+ until it is killed due to timeout.
>>
>> It affects different clouds (we see rax, internap, infracloud-vanilla,
>> ovh jobs affected; we haven't seen osic though). It can't be e.g. slow
>> pypi or apt mirrors because then we would see slowdown in ./stack.sh
>> phase only.
>>
>> We can't be sure that CPUs are the same, and devstack does not seem to
>> dump /proc/cpuinfo anywhere (in the end, it's all virtual, so not sure
>> if it would help anyway). Nor do we have a way to learn whether
>> slowness could be a result of adherence to RFC1149. ;)
>>
>> We discussed the matter in neutron channel [2] though couldn't figure
>> out the culprit, or where to go next. At this point we assume it's not
>> neutron's fault, and we hope others (infra?) may have suggestions on
>> where to look.
>>
>> [1] http://logs.openstack.org/95/429095/2/check/gate-tempest-dsv
>> m-neutron-dvr-ubuntu-xenial/35aa22f/console.html#_2017-02-
>> 09_04_47_12_874550
>> [2] http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/
>> %23openstack-neutron.2017-02-10.log.html#t2017-02-10T04:06:01
>>
>> Thanks,
>> Ihar
>>


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-06 Thread Miguel Angel Ajo Pelayo
Jeremy Stanley wrote:


> It's an option of last resort, I think. The next consistent flavor
> up in most of the providers donating resources is double the one
> we're using (which is a fairly typical pattern in public clouds). As
> aggregate memory constraints are our primary quota limit, this would
> effectively halve our current job capacity.

Properly coordinated with all the cloud providers, they could create
flavours which are private but available to our tenants, where 25-50%
more RAM would be just enough.

I agree that should probably be a last resort tool, and we should keep
looking for proper ways to find where we consume unnecessary RAM and make
sure that's properly freed up.

It could be interesting to coordinate such flavour creation in the meantime;
even if we don't use it now, we could eventually test it or put it to
work if we find ourselves trapped anytime later.
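
Just to illustrate what I mean, creating such a flavour boils down to
something like this (a sketch with python-novaclient; the flavour name,
sizes, and the session/tenant variables are made up placeholders):

    from novaclient import client

    # assumes an already-authenticated keystoneauth1 session
    nova = client.Client('2', session=keystone_session)

    # non-public flavour with ~25% more RAM than the usual 8GB one
    flavor = nova.flavors.create(name='ci-10gb', ram=10240, vcpus=8,
                                 disk=80, is_public=False)

    # grant access only to the CI tenants
    for tenant_id in ci_tenant_ids:
        nova.flavor_access.add_tenant_access(flavor, tenant_id)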


On Sun, Feb 5, 2017 at 8:37 PM, Matt Riedemann  wrote:

> On 2/5/2017 1:19 PM, Clint Byrum wrote:
>
>>
>> Also I wonder if there's ever been any serious consideration given to
>> switching to protobuf? Feels like one could make oslo.versionedobjects
>> a wrapper around protobuf relatively easily, but perhaps that's already
>> been explored in a forum that I wasn't paying attention to.
>>
>
> I've never heard of anyone attempting that.
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-03 Thread Miguel Angel Ajo Pelayo
On Fri, Feb 3, 2017 at 7:55 AM, IWAMOTO Toshihiro 
wrote:

> At Wed, 1 Feb 2017 16:24:54 -0800,
> Armando M. wrote:
> >
> > Hi,
> >
> > [TL;DR]: OpenStack services have steadily increased their memory
> > footprints. We need a concerted way to address the oom-kills experienced
> in
> > the openstack gate, as we may have reached a ceiling.
> >
> > Now the longer version:
> > 
> >
> > We have been experiencing some instability in the gate lately due to a
> > number of reasons. When everything adds up, this means it's rather
> > difficult to merge anything and knowing we're in feature freeze, that
> adds
> > to stress. One culprit was identified to be [1].
> >
> > We initially tried to increase the swappiness, but that didn't seem to
> > help. Then we have looked at the resident memory in use. When going back
> > over the past three releases we have noticed that the aggregated memory
> > footprint of some openstack projects has grown steadily. We have the
> > following:
>
> Not sure if it is due to memory shortage, VMs running CI jobs are
> experiencing sluggishness, which may be the cause of ovs related
> timeouts[1]. Tempest jobs run dstat to collect system info every
> second. When timeouts[1] happen, dstat outputs are also often missing
> for several seconds, which means a VM is having trouble scheduling
> both ovs related processes and the dstat process.
> Those ovs timeouts affect every project and happen much more often than the
> oom-kills.
>
> Some details are on the lp bug page[2].
>
> Correlation of such sluggishness and VM paging activities are not
> clear. I wonder if VM hosts are under high load or if increasing VM
> memory would help. Those VMs have no free ram for file cache and file
> pages are read again and again, leading to extra IO loads on VM hosts
> and adversely affecting other VMs on the same host.
>
>
Iwamoto, that makes a lot of sense to me.

That makes me think that increasing the available RAM per instance could be
beneficial, even if we'd be able to run fewer workloads simultaneously.
Compute hosts would see their pressure reduced (since they can accommodate
less workload), instances would run more smoothly, because they'd have more
room for caching and buffers, and we may also see the OOM issues alleviated.

BUT, if that's even a suitable approach for all those problems, which could
very well be inter-related, it still means that we should keep pursuing
the culprit of our memory footprint growth and taking counter
measures where reasonable.

Sometimes more RAM is just the cost of progress (new features, the ability
to do online upgrades, better synchronisation patterns based on caching,
etc...), and sometimes we'd be able to slash the memory usage by
converting some of our small repeated services into other things (I'm
thinking of the neutron-ns-metadata proxy being converted to haproxy or
nginx + a neat piece of config).

So, would it be realistic to bump the flavors' RAM to favor our stability in
the short term? (Considering that our clouds would be able to take less
workload, but the failure rate would also be lower, so rechecks would be
reduced.)




>
> [1] http://logstash.openstack.org/#dashboard/file/logstash.json?
> query=message%3A%5C%22no%20response%20to%20inactivity%20probe%5C%22
> [2] https://bugs.launchpad.net/neutron/+bug/1627106/comments/14
>
> --
> IWAMOTO Toshihiro
>


Re: [openstack-dev] [neutron] PTL nominations deadline and non-candidacy

2017-01-11 Thread Miguel Angel Ajo Pelayo
Armando, thank you very much for all the work you've done as PTL,
my best wishes, and happy to know that you'll be around!

Best regards,
Miguel Ángel.


On Wed, Jan 11, 2017 at 1:52 AM, joehuang  wrote:

> Sad to know that you will step down from Neutron PTL. Had several f2f talks
> with you, and got lots of valuable feedback from you. Thanks a lot!
>
> Best Regards
> Chaoyi Huang (joehuang)
> --
> *From:* Armando M. [arma...@gmail.com]
> *Sent:* 09 January 2017 22:11
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [neutron] PTL nominations deadline and
> non-candidacy
>
> Hi neutrinos,
>
> The PTL nomination week is fast approaching [0], and as you might have
> guessed by the subject of this email, I am not planning to run for Pike. If
> I look back at [1], I would like to think that I was able to exercise the
> influence on the goals I set out with my first self-nomination [2].
>
> That said, when it comes to a dynamic project like neutron one can never
> claim to be *done done* and for this reason, I will continue to be part of
> the neutron core team, and help the future PTL drive the next stage of the
> project's journey.
>
> I must admit, I don't write this email lightly, however I feel that it is
> now the right moment for me to step down, and give someone else the
> opportunity to grow in the amazing role of neutron PTL! I have certainly
> loved every minute of it!
>
> Cheers,
> Armando
>
> [0] https://releases.openstack.org/ocata/schedule.html
> [1] https://review.openstack.org/#/q/project:openstack/elect
> ion+owner:armando-migliaccio
> [2] https://review.openstack.org/#/c/223764/
>


Re: [openstack-dev] [neutron] proposing Ryan Tidwell and Nate Johnston as service LTs

2016-12-16 Thread Miguel Angel Ajo Pelayo
+1 Good work. :)

On Fri, Dec 16, 2016 at 11:59 AM, Rossella Sblendido 
wrote:

> +1
>
> On 12/16/2016 09:25 AM, Ihar Hrachyshka wrote:
> > Armando M.  wrote:
> >
> >> Hi neutrinos,
> >>
> >> I would like to propose Ryan and Nate as the go-to fellows for
> >> service-related patches.
> >>
> >> Both are core in their repos of focus, namely neutron-dynamic-routing
> >> and neutron-fwaas, and have a good understanding of the service
> >> framework, the agent framework and other bits and pieces. At this
> >> point, entrusting them with the responsibility is almost a formality.
> >
> > Great, +1.
> >


Re: [openstack-dev] [neutron] proposing Miguel Lavalle as neutron core and Brian Haley as L3 Lt

2016-12-16 Thread Miguel Angel Ajo Pelayo
+1 :)

On Fri, Dec 16, 2016 at 2:44 AM, Vasudevan, Swaminathan (PNB Roseville) <
swaminathan.vasude...@hpe.com> wrote:

> +1
>
>
>
> *From:* Armando M. [mailto:arma...@gmail.com]
> *Sent:* Thursday, December 15, 2016 3:15 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* [openstack-dev] [neutron] proposing Miguel Lavalle as neutron
> core and Brian Haley as L3 Lt
>
>
>
> Hi neutrinos,
>
>
>
> Miguel Lavalle has been driving the project forward consistently and
> reliably. I would like to propose him to be entrusted with +2/+A rights in
> the areas he's been most prolific, which are L3 and DHCP.
>
>
>
> At the same time, I'd like to propose Brian Haley as our next Chief L3
> Officer. Both of them have worked with Carl Baldwin extensively and that
> can only be a guarantee of quality.
>
>
>
> Cheers,
>
> Armando
>
>
>
> [1] https://review.openstack.org/#/c/411531/
>


Re: [openstack-dev] [Neutron] Stepping down from core

2016-12-02 Thread Miguel Angel Ajo Pelayo
It's been an absolute pleasure working with you on every single interaction.


Very good luck Henry,


On Fri, Dec 2, 2016 at 8:14 AM, Andreas Scheuring <
scheu...@linux.vnet.ibm.com> wrote:

> Henry, it was a pleasure working with you! Thanks!
> All the best for your further journey!
>
>
> --
> -
> Andreas
> IRC: andreas_s
>
>
>
> On Do, 2016-12-01 at 17:51 -0500, Henry Gessau wrote:
> > I've already communicated this in the neutron meeting and in some neutron
> > policy patches, but yesterday the PTL actually updated the gerrit ACLs
> so I
> > thought I'd drop a note here too.
> >
> > My work situation has changed and leaves me little time to keep up with
> my
> > duties as core reviewer, DB lieutenant, and drivers team member.
> >
> > Working with the diverse and very talented contributors to Neutron has
> been
> > the best experience of my career (which started before many of you were
> born).
> > Thank you all for making the team such a great community. Because of you
> the
> > project is thriving and will continue to be successful!
> >
> > I will still be around on IRC, contribute some small patches here and
> there,
> > and generally try to keep abreast of Neutron's progress. Don't hesitate
> to
> > ping me.
> >


Re: [openstack-dev] [Neutron] Stepping down from core

2016-11-18 Thread Miguel Angel Ajo Pelayo
Sad to see you go Carl,

   Thanks for so many years of hard work, as Brian said, OpenStack /
Neutron is better thanks to your contributions through the last years.

My best wishes for you.


On Fri, Nov 18, 2016 at 9:51 AM, Vikram Choudhary  wrote:

> It was really a good experience working with you Carl. Best of luck for
> your future endeavour!
>
> Thanks
> Vikram
>
> On Fri, Nov 18, 2016 at 12:38 PM, Trinath Somanchi <
> trinath.soman...@nxp.com> wrote:
>
>> Carl -
>>
>>
>>
>> You are an asset to Neutron community. Missing you as core is a hard
>> thing.
>>
>>
>>
>> I wish a grand U turn again at your work towards Neutron.
>>
>>
>>
>> Wishing you all the best.
>>
>>
>>
>> /Trinath
>>
>>
>>
>> *From:* Carl Baldwin [mailto:c...@ecbaldwin.net]
>> *Sent:* Friday, November 18, 2016 12:13 AM
>> *To:* OpenStack Development Mailing List > .org>
>> *Subject:* [openstack-dev] [Neutron] Stepping down from core
>>
>>
>>
>> Neutron (and Openstack),
>>
>>
>>
>> It is with regret that I report that my work situation has changed such
>> that I'm not able to keep up with my duties as a Neutron core reviewer, L3
>> lieutenant, and drivers team member. My participation has dropped off
>> considerably since Newton was released and I think it is fair to step down
>> and leave an opening for others to fill. There is no shortage of talent in
>> Neutron and Openstack and I know I'm leaving it in good hands.
>>
>>
>>
>> I will be more than happy to come back to full participation in Openstack
>> and Neutron in the future if things change again in that direction. This is
>> a great community and I've had a great time participating and learning with
>> you all.
>>
>>
>>
>> Well, I don't want to drag this out. I will still be around on IRC and
>> will be happy to help out where I am able. Feel free to ping me.
>>
>>
>>
>> Carl
>>


Re: [openstack-dev] [neutron] OVO support

2016-11-15 Thread Miguel Angel Ajo Pelayo
I could be wrong, but I suspect we're doing it this way to be able to make
changes to several objects atomically, and roll back the transaction if at
some point what we're trying to accomplish turns out not to be possible.

Thoughts?
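
Something along these lines, I mean (a minimal sketch assuming neutron's
enginefacade-based db api; the function and object names are made up):

    from neutron.db import api as db_api

    def create_port_with_binding(context, port_obj, binding_obj):
        # One transaction around both objects: if the second create()
        # fails, the first one is rolled back as well.
        with db_api.context_manager.writer.using(context):
            port_obj.create()
            binding_obj.create()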

On Tue, Nov 15, 2016 at 10:06 AM, Gary Kotton  wrote:

> Hi,
>
> It seems like a lot of the object work is being done under database
> transactions. My understanding is that the objects should take care of this
> internally.
>
> Any thoughts?
>
> Thanks
>
> Gary
>


Re: [openstack-dev] [Neutron] Neutron team social event in Barcelona

2016-10-17 Thread Miguel Angel Ajo Pelayo
I probably won't be able to go, but if you plan to hang out in any
other place around after/before dinner, maybe I'll join.

 Cheers & Enjoy! :)

On Mon, Oct 17, 2016 at 12:56 PM, Nate Johnston  wrote:
> I responded to Miguel privately, but I'll be there as well!
>
> --N.
>
> On Fri, Oct 14, 2016 at 01:30:57PM -0500, Miguel Lavalle wrote:
>>
>> Dear Neutrinos,
>>
>> I am organizing a social event for the team on Thursday 27th at 19:30.
>> After doing some Google research, I am proposing Raco de la Vila, which is
>> located in Poblenou: http://www.racodelavila.com/en/index.htm. The menu is
>> here: http://www.racodelavila.com/en/carta-racodelavila.htm
>>
>> It is easy to get there by subway from the Summit venue:
>> https://goo.gl/maps/HjaTEcBbDUR2. I made a reservation for 25 people under
>> 'Neutron' or "Miguel Lavalle". Please confirm your attendance so we can get
>> a final count.
>>
>> Here's some reviews:
>> https://www.tripadvisor.com/Restaurant_Review-g187497-d1682057-Reviews-Raco_De_La_Vila-Barcelona_Catalonia.html
>>
>> Cheers,


Re: [openstack-dev] [lbaas] [octavia] Proposing Lubosz Kosnik (diltram) as Octavia Core

2016-10-11 Thread Miguel Angel Ajo Pelayo
+1!, even if my vote does not count :-)

On Tue, Oct 11, 2016 at 12:00 AM, Eichberger, German
 wrote:
> +1 (even if it doesn’t matter)
>
>
>
> From: Stephen Balukoff 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Monday, October 10, 2016 at 4:39 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: Re: [openstack-dev] [lbaas] [octavia] Proposing Lubosz Kosnik
> (diltram) as Octavia Core
>
>
>
> I agree whole-heartedly with johnsom's assessment of diltram's
> contributions. +1 from me!
>
>
>
> On Mon, Oct 10, 2016 at 1:06 PM, Michael Johnson 
> wrote:
>
> Greetings Octavia and developer mailing list folks,
>
> I propose that we add Lubosz Kosnik (diltram) as an OpenStack Octavia
> core reviewer.
>
> His contributions [1] are in line with other cores and he has been an
> active member of our community.  He regularly attends our weekly
> meetings, contributes good code, and provides solid reviews.
>
> Overall I think Lubosz would make a great addition to the core review team.
>
> Current Octavia cores, please respond with +1/-1.
>
> Michael
>
> [1] http://stackalytics.com/report/contribution/octavia/90
>


Re: [openstack-dev] [neutron] proper cleanup of l3 resources (neutron-netns-cleanup)

2016-10-07 Thread Miguel Angel Ajo Pelayo
Hi Sergey!,

This was my point of view on a possible solution:

https://bugs.launchpad.net/neutron/+bug/1403455/comments/12

"""
After much thinking (and quite little doing) I believe the option "2"
I proposed is a rather reasonable one:

2) Before cleaning a namespace blindly in the end, identify any
network service in the namespace (via netstat), kill those processes
so they aren't orphaned, and then kill the namespace.

Any process should be safely killed that way, and if it's not, we can
complicate our lives and code with "1":
1) Use stevedore HookManager to let out-of-tree repos register netns
prefix declarations and netns cleaners,
so every piece of code (in-tree or out-of-tree) declares which
netns prefixes it uses, and provides a netns cleanup
hook to be called.

"""

Let me know what you think
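
In code, option "2" would boil down to something like this (a rough sketch
using 'ip netns pids' instead of netstat; purely illustrative):

    import subprocess

    def cleanup_namespace(ns_name):
        # List the PIDs of every process still running inside the
        # namespace, kill them so they aren't orphaned, then remove
        # the namespace itself.
        pids = subprocess.check_output(['ip', 'netns', 'pids', ns_name])
        for pid in pids.split():
            subprocess.call(['kill', '-9', pid])
        subprocess.call(['ip', 'netns', 'delete', ns_name])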

On Fri, Oct 7, 2016 at 2:15 PM, Sergey Belous  wrote:
> Hello everyone.
>
> I’m very interesting in this one
> https://bugs.launchpad.net/neutron/+bug/1403455
> Can anybody tell me, what is the current status of this bug? Is anybody
> working on it now?
> And as I can see, there are some options, that was discussed in comments to
> this bug and… did anybody decide which solution is the best?
>
>
> --
> Best Regards,
> Sergey Belous
>
>


Re: [openstack-dev] [octavia] Multi-node controller testing

2016-09-28 Thread Miguel Angel Ajo Pelayo
I just found this one created recently, and I will try to build on top of it:

https://review.openstack.org/#/c/371807/12



On Wed, Sep 28, 2016 at 1:52 PM, Miguel Angel Ajo Pelayo
<majop...@redhat.com> wrote:
> Refloating this thread.
>
> I posted this rfe/bug [1], and I'm planning to come up with an
> experimental job that triggers one of the basic neutron/lbaas tests
> with octavia.
>
> I wonder if even picking up the scenario one for now could make sense;
> it's not very stable at the moment, but maybe spreading the load of
> VM creations between two compute nodes could ease it?
>
> [1] https://bugs.launchpad.net/octavia/+bug/1628481
>
> On Thu, Aug 11, 2016 at 4:24 PM, Roman Vasilets <rvasil...@mirantis.com> 
> wrote:
>> Hi,
>>   "need to have something (tempest-plugin) to make sure that integration
>> works with nova & neutron" - It's easy to write scenarios that will test that
>> octavia works with nova and neutron
>>   "I guess rally is more suited to make sure that things work at scale, to
>> uncover any sort of race conditions (This would be specially beneficial in
>> multinode controllers)" - Rally is suitable for many kind of tests=)
>> Especially for testing at scale! If you have any question how to use Rally
>> feel free to ask Rally team!
>>
>> - Best regards, Roman Vasylets. Rally team member
>>
>> On Thu, Aug 11, 2016 at 11:46 AM, Miguel Angel Ajo Pelayo
>> <majop...@redhat.com> wrote:
>>>
>>> On Wed, Aug 10, 2016 at 9:51 PM, Stephen Balukoff <step...@balukoff.com>
>>> wrote:
>>> > Miguel--
>>> >
>>> > There have been a number of tempest patches in the review queue for a
>>> > long
>>> > time now, but I think the reason they're not getting attention is that
>>> > we
>>> > don't want to have to import a massive amount of tempest code into our
>>> > repository (which will become stale and need hot-fixing, as has happened
>>> > with neutron-lbaas on many occasions), and it appears tempest-lib
>>> > doesn't
>>> > yet support all the stuff we would need to do with it.
>>>
>>> I guess you mean [1]
>>>
>>>
>>> > People have suggested Rally, but so far nobody has come forth with code,
>>> > or
>>> > a strong desire to push it through.
>>>
>>> I guess rally is more suited to make sure that things work at scale,
>>> to uncover any sort of race conditions (this would be especially
>>> beneficial in multinode controllers).
>>>
>>> But I understand (I can be wrong) that we still need to have something
>>> (tempest-plugin) to make sure that integration works with nova &
>>> neutron. I'm going to check those patches to see what was the
>>> discussion and issues over there (I see this one [1] to start with,
>>> which is probably the most important)
>>>
>>> [1]
>>> https://review.openstack.org/#/q/status:open+project:openstack/octavia+branch:master+topic:octavia_basic_lb_scenario
>>>
>>> [2] https://review.openstack.org/#/c/172199/66..75/.testr.conf
>>>
>>>
>>> > Stephen
>>> >
>>> > On Tue, Aug 9, 2016 at 5:40 AM, Miguel Angel Ajo Pelayo
>>> > <majop...@redhat.com> wrote:
>>> >>
>>> >> On Mon, Aug 8, 2016 at 4:56 PM, Kosnik, Lubosz
>>> >> <lubosz.kos...@intel.com>
>>> >> wrote:
>>> >> > Great work with that multi-node setup Miguel.
>>> >>
>>> >> Thanks, I have to get my hands dirtier with octavia, it's just a tiny
>>> >> thing.
>>> >>
>>> >> > About that multinode Infra is supporting two nodes setup used
>>> >> > currently
>>> >> > by grenade jobs but in my opinion we don’t have any tests which can
>>> >> > cover
>>> >> > that type of testing. We’re still struggling with selecting proper
>>> >> > tool to
>>> >> > test Octavia from integration/functional perspective so probably it’s
>>> >> > too
>>> >> > early to make it happen.
>>> >>
>>> >>
>>> >> Well, any current tests we run should pass equally well in a multi
>> >> node controller, and that's the point: regardless of the
>> >> deployment architecture, the behaviour shall not change at all. We may
>>> >> not need any specific test.

Re: [openstack-dev] [octavia] Multi-node controller testing

2016-09-28 Thread Miguel Angel Ajo Pelayo
Refloating this thread.

I posted this rfe/bug [1], and I'm planning to come up with an
experimental job that triggers one of the basic neutron/lbaas tests
with octavia.

I wonder if even picking up the scenario one for now could make sense;
it's not very stable at the moment, but maybe spreading the load of
VM creations between two compute nodes could ease it?

[1] https://bugs.launchpad.net/octavia/+bug/1628481

On Thu, Aug 11, 2016 at 4:24 PM, Roman Vasilets <rvasil...@mirantis.com> wrote:
> Hi,
>   "need to have something (tempest-plugin) to make sure that integration
> works with nova & neutron" - It's easy to write scenarios that will test that
> octavia works with nova and neutron
>   "I guess rally is more suited to make sure that things work at scale, to
> uncover any sort of race conditions (This would be specially beneficial in
> multinode controllers)" - Rally is suitable for many kind of tests=)
> Especially for testing at scale! If you have any question how to use Rally
> feel free to ask Rally team!
>
> - Best regards, Roman Vasylets. Rally team member
>
> On Thu, Aug 11, 2016 at 11:46 AM, Miguel Angel Ajo Pelayo
> <majop...@redhat.com> wrote:
>>
>> On Wed, Aug 10, 2016 at 9:51 PM, Stephen Balukoff <step...@balukoff.com>
>> wrote:
>> > Miguel--
>> >
>> > There have been a number of tempest patches in the review queue for a
>> > long
>> > time now, but I think the reason they're not getting attention is that
>> > we
>> > don't want to have to import a massive amount of tempest code into our
>> > repository (which will become stale and need hot-fixing, as has happened
>> > with neutron-lbaas on many occasions), and it appears tempest-lib
>> > doesn't
>> > yet support all the stuff we would need to do with it.
>>
>> I guess you mean [1]
>>
>>
>> > People have suggested Rally, but so far nobody has come forth with code,
>> > or
>> > a strong desire to push it through.
>>
>> I guess rally is more suited to make sure that things work at scale,
>> to uncover any sort of race conditions (this would be especially
>> beneficial in multinode controllers).
>>
>> But I understand (I can be wrong) that we still need to have something
>> (tempest-plugin) to make sure that integration works with nova &
>> neutron. I'm going to check those patches to see what was the
>> discussion and issues over there (I see this one [1] to start with,
>> which is probably the most important)
>>
>> [1]
>> https://review.openstack.org/#/q/status:open+project:openstack/octavia+branch:master+topic:octavia_basic_lb_scenario
>>
>> [2] https://review.openstack.org/#/c/172199/66..75/.testr.conf
>>
>>
>> > Stephen
>> >
>> > On Tue, Aug 9, 2016 at 5:40 AM, Miguel Angel Ajo Pelayo
>> > <majop...@redhat.com> wrote:
>> >>
>> >> On Mon, Aug 8, 2016 at 4:56 PM, Kosnik, Lubosz
>> >> <lubosz.kos...@intel.com>
>> >> wrote:
>> >> > Great work with that multi-node setup Miguel.
>> >>
>> >> Thanks, I have to get my hands dirtier with octavia, it's just a tiny
>> >> thing.
>> >>
>> >> > About that multinode Infra is supporting two nodes setup used
>> >> > currently
>> >> > by grenade jobs but in my opinion we don’t have any tests which can
>> >> > cover
>> >> > that type of testing. We’re still struggling with selecting proper
>> >> > tool to
>> >> > test Octavia from integration/functional perspective so probably it’s
>> >> > too
>> >> > early to make it happen.
>> >>
>> >>
>> >> Well, any current tests we run should pass equally well in a multi
>> >> node controller, and that's the point: regardless of the
>> >> deployment architecture, the behaviour shall not change at all. We may
>> >> not need any specific test.
>> >>
>> >>
>> >> > Maybe it’s great start to finally make some decision about testing
>> >> > tools
>> >> > and there will be a lot of work for you after that also with setting
>> >> > up an
>> >> > infra multi-node job for that.
>> >>
>> >> I'm not fully aware of what are we running today for octavia, so if
>> >> you can give me some pointers about where are those jobs configured,
>> >> and what do they target, it could be a start, to provide feedback.

Re: [openstack-dev] dhcp 'Address already in use' errors when trying to start a dnsmasq

2016-09-27 Thread Miguel Angel Ajo Pelayo
Ack, and thanks for the summary, Ihar.

I will have a look at it tomorrow morning; please update this thread
with any progress.
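
By the way, the SO_REUSEADDR behaviour Ihar mentions below can be checked
outside of dnsmasq with a trivial snippet like this (just a sketch to
experiment with, unrelated to the actual fix; address and port are made up):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # With SO_REUSEADDR set, a second process should be able to bind
    # the address right after the first one is killed; if bind() still
    # fails with EADDRINUSE, something else keeps the socket alive.
    s.bind(('127.0.0.1', 5353))   # stand-in for the dnsmasq address
    s.listen(5)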



On Tue, Sep 27, 2016 at 8:22 PM, Ihar Hrachyshka  wrote:
> Hi all,
>
> so we started getting ‘Address already in use’ when trying to start dnsmasq
> after the previous instance of the process is killed with kill -9. Armando
> spotted it today in logs for: https://review.openstack.org/#/c/377626/ but
> as per logstash it seems like an error we saw before (the earliest I see is
> 9/20), f.e.:
>
> http://logs.openstack.org/26/377626/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b6953d4/logs/screen-q-dhcp.txt.gz
>
> Assuming I understand the flow of the failure, it runs as follows:
>
> - sync_state starts dnsmasq per network;
> - after agent lock is freed, some other notification event
> (port_update/subnet_update/...) triggers restart for one of the processes;
> - the restart is done not via reload_allocations (-SIGHUP) but through
> restart/disable (kill -9);
> - once the old dnsmasq is killed with -9, we attempt to start a new process
> with new config files generated and fail with: “dnsmasq: failed to create
> listening socket for 10.1.15.242: Address already in use”
> - surprisingly, after several failing attempts to start the process, it
> succeeds to start it after a bunch of seconds and runs fine.
>
> It looks like once we kill the process with -9, it may hold the socket
> resource for some time and may clash with the new process we try to spawn.
> It’s a bit weird because dnsmasq should have set REUSEADDR for the socket,
> so a new process should have started just fine.
>
> Lately, we landed several patches that touched reload logic for DHCP agent
> on notifications. Among those suspicious in the context are:
>
> - https://review.openstack.org/#/c/372595/ - note it requests ‘disable’ (-9)
> where it was using ‘reload_allocations’ (-SIGHUP) before, and it also does
> not unplug the port on lease release (maybe after we rip of the device, the
> address clash with the old dnsmasq state is gone even though the ’new’ port
> will use the same address?).
> - https://review.openstack.org/#/c/372236/6 - we were requesting
> reload_allocations in some cases before, and now we put the network into
> resync queue
>
> There were other related changes lately, you can check history of Kevin’s
> changes for the branch, it should capture most of them.
>
> I wonder whether we hit some long standing restart issue with dnsmasq here
> that was just never triggered before because we were not calling kill -9 so
> eagerly as we do now.
>
> Note: Jakub Libosvar validated that 'kill -9 && dnsmasq’ in loop does NOT
> result in the failure we see in gate logs.
>
> We need to understand what’s going with the failure, and come up with some
> plan for Newton. We either revert suspected patches as I believe Armando
> proposed before, but then it’s not clear until which point to do it; or we
> come up with some smart fix for that, that I don’t immediately grasp.
>
> I will be on vacation tomorrow, though I will check the email thread to see
> if we have a plan to act on. I really hope folks give the issue a priority
> since it seems like we buried ourselves under a pile of interleaved patches
> and now we don’t have a clear view of how to get out of the pile.
>
> Cheers,
> Ihar
>


Re: [openstack-dev] [Neutron] Adding ihrachys to the neutron-drivers team

2016-09-20 Thread Miguel Angel Ajo Pelayo
Congratulations, Ihar! Well deserved through hard work! :)

On Mon, Sep 19, 2016 at 8:03 PM, Brian Haley  wrote:
> Congrats Ihar!
>
> -Brian
>
>
> On 09/17/2016 12:40 PM, Armando M. wrote:
>>
>> Hi folks,
>>
>> I would like to propose Ihar to become a member of the Neutron drivers
>> team [1].
>>
>> Ihar wide knowledge of the Neutron codebase, and his longstanding duties
>> as
>> stable core, downstream package whisperer, release and oslo liaison (I am
>> sure I
>> am forgetting some other capacity he is in) is going to put him at great
>> comfort
>> in the newly appointed role, and help him grow and become wise even
>> further.
>>
>> Even though we have not been meeting regularly lately we will resume our
>> Thursday meetings soon [2], and having Ihar onboard by then will be highly
>> beneficial.
>>
>> Please, join me in welcome Ihar to the team.
>>
>> Cheers,
>> Armando
>>
>> [1]
>> http://docs.openstack.org/developer/neutron/policies/neutron-teams.html#drivers-team
>>
>> 
>> [2] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
>> 
>>
>>


Re: [openstack-dev] [api][doc][neutron] what releases should API reference support?

2016-09-06 Thread Miguel Angel Ajo Pelayo
Option 2 sounds reasonable to me too. :)

On Tue, Sep 6, 2016 at 2:39 PM, Akihiro Motoki  wrote:
> What releases should we support in API references?
> There are several options.
>
> 1. The latest stable release + master
> 2. All supported stable releases + master
> 3. more older releases too?
>
> Option 2 sounds reasonable to me.
>
> This question is raised in the neutron api-ref patch [1].
> This patch drops the API reference of LBaaS v1 which was dropped in
> Newton release.
> At least Newton is not released yet, so I think it is better to keep
> it until Newton is released.
>
> I would like to get a community consensus before moving this patch forward.
>
> Thanks,
> Akihiro
>
> [1] https://review.openstack.org/#/c/362838/
>


Re: [openstack-dev] [Neutron][Nova] Neutron mid-cycle summary report

2016-08-27 Thread Miguel Angel Ajo Pelayo
Hi Armando,

Thanks for the report, I'm adding some notes inline (OSC/SDK)

On Sat, Aug 27, 2016 at 2:13 AM, Armando M.  wrote:
> Hi Neutrinos,
>
> For those of you who couldn't join in person, please find a few notes below
> to capture some of the highlights of the event.
>
> I would like to thank everyone one who helped me put this report together,
> and everyone who helped make this mid-cycle a fruitful one.
>
> I would also like to thank IBM, and the individual organizers who made
> everything go smoothly. In particular Martin, who put up with our moody
> requests: thanks Martin!!
>
> Feel free to reach out/add if something is unclear, incorrect or incomplete.
>
> Cheers,
> Armando
>
> ~~~
>
> We touched on these topics (as initially proposed on
> https://etherpad.openstack.org/p/newton-neutron-midcycle-workitems)
>
> Keystone v3 and project-id adoption:
>
> dasm and amotoki have been working to making the Neutron server process
> project-id correctly [1]. Looking at the spec [2], we are half way through
> having completed the DB migration, being Keystone v3 complaint, and having
> updated the client bindings [3].
>
> [1] https://review.openstack.org/#/c/357977/
> [2] https://review.openstack.org/#/c/257362/
> [3] https://review.openstack.org/#/q/topic:bp/keystone-v3
>
> Neutron-lib:
>
> HenryG, dougwig and kevinbenton worked out a plan to get the common_db_mixin
> into neutron-lib. Because of the risk of regression, this is being deferred
> until Ocata opens up. However, simpler changes like the model_base move
> to lib were agreed on and merged.
> A plan to provide test support was discussed. The current strategy involves
> providing test base classes in lib (this reverses the stance conveyed in
> Austin). The usual steps involve making public the currently
> private classes, ensuring the lib's copies are up-to-date with core neutron,
> and deprecating the ones located in Neutron.
> rtheis and armax worked on having networking-ovn test periodically against
> neutron-lib [1,2,3].
>
> [1] https://review.openstack.org/#/c/357086/
> [2] https://review.openstack.org/#/c/359143/
> [3] https://review.openstack.org/#/c/357079/
>
> A tool (tools/migration_report.sh) helps project teams determine the level of
> dependency they have on Neutron. It should be improved to report the exact
> offending imports.
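
For illustration, reporting the exact offending imports could be as simple
as something like this sketch (not the actual tool), scanning a project
tree for imports of core neutron:

    import pathlib
    import re
    import sys

    # anything importing 'neutron' (but not 'neutron_lib') still
    # depends on core neutron and should eventually move to the lib
    OFFENDING = re.compile(r'^\s*(?:from|import)\s+neutron\b')

    for path in pathlib.Path(sys.argv[1]).rglob('*.py'):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if OFFENDING.match(line):
                print('%s:%d: %s' % (path, lineno, line.strip()))
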
> Right now neutron-lib 0.4.0 is released and available in
> global-requirements/upper-constraints.
>
> Objects and hitless upgrades:
>
> Ihar gave the team an overview and status update [1]
> There was a fruitful discussion that hopefully set the way forward for
> Ocata. The discussed plan was to start Ocata with the expectation that no
> new contract scripts are landing in Ocata, and to revisit the requirement
> later if for some reason we see any issue with applying the requirement in
> practice.
> Some work was done to deliver necessary objects for push-notifications.
> Patches up for review. Some review cycles were spent to work on landing
> patches moving model definitions under neutron/db/models
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-August/101838.html
>
> OSC transition:
>
> rtheis gave an update to the team on the state of the transition. Core
> resources commands are all available through OSC; QoS, Metering and *-aaS
> are still not converted.

QoS is being pushed up by Rodolfo in a series of bugs on SDK/OSC:
  
https://review.openstack.org/#/q/owner:rodolfo.alonso.hernandez%2540intel.com+status:open

Those are almost there.

> There is some confusion on how to tackle openstacksdk support. We discussed
> the future goal of python binding of Networking API. OSC uses OpenStack SDK
> for network commands and Neutron OSC plugin uses python bindings from
> python-neutronclient. An open question is which project developers who add
> new features should target: both, the OpenStack SDK, or python-neutronclient?
> There was
> no conclusion at the mid-cycle. It is not specific to neutron. Similar
> situation can happen for nova, cinder and other projects and we need to
> raise it to the community.
>
> Ocata is going to be the first release where the neutronclient CLI is
> officially deprecated. It may take us more than the usual two cycles to
> remove it altogether, but that's a signal to developer and users to
> seriously develop against OSC, and report bugs against OSC.
> Several pending contributions into osc-lib.
> An update is available on [1,2]
>
> [1] https://review.openstack.org/#/c/357844/
> [2] https://etherpad.openstack.org/p/osc-neutron-support
>
> Stability squash:
>
> armax was bug deputy for the week of the mid-cycle; nothing critical showed
> up in the gate; however, the pluggable ipam switch [1] merged, which might have
> some unexpected repercussions down the road.
> A number of bugs older than a year were made expirable [2].
> kevinbenton and armax devised a strategy and started working on [3] to
> ensure DB retriable errors are no longer handled 

Re: [openstack-dev] [neutron] Neutron Port MAC Address Uniqueness

2016-08-12 Thread Miguel Angel Ajo Pelayo
That was my feeling Moshe, thanks for checking.

Anil, which card and drivers are you using exactly?

You should probably contact your card vendor and check if they have a
fix for the issue, which seems more like a bug in their implementation
of the embedded switch, the card or the driver.

Best regards,
Miguel  Ángel.
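
For reference, the per-network (rather than global) uniqueness discussed
below is enforced in Neutron's port model with a constraint along these
lines (a from-memory sketch; the models_v2.py link quoted below is the
authoritative definition):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Port(Base):
        __tablename__ = 'ports'
        id = sa.Column(sa.String(36), primary_key=True)
        network_id = sa.Column(sa.String(36), nullable=False)
        mac_address = sa.Column(sa.String(32), nullable=False)
        # the MAC only has to be unique within its network, so the
        # same MAC can legitimately reappear on another network
        __table_args__ = (
            sa.UniqueConstraint('network_id', 'mac_address'),
        )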

On Thu, Aug 11, 2016 at 12:49 PM, Moshe Levi <mosh...@mellanox.com> wrote:
> Hi Anil,
>
>
> I tested it with a Mellanox NIC and it is working:
>
> 16: enp6s0d1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP 
> mode DEFAULT group default qlen 1000
> link/ether 00:02:c9:e9:c2:12 brd ff:ff:ff:ff:ff:ff
> vf 0 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 1 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 2 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 3 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 4 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
> vf 5 MAC fa:16:3e:0d:8c:a2, vlan 192, spoof checking on, link-state enable
> vf 6 MAC fa:16:3e:0d:8c:a2, vlan 190, spoof checking on, link-state enable
> vf 7 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
>
> I guess the problem is with the SR-IOV NIC/driver you are using; maybe you
> should contact them.
>
>
> -----Original Message-----
> From: Moshe Levi
> Sent: Wednesday, August 10, 2016 5:59 PM
> To: 'Miguel Angel Ajo Pelayo' <majop...@redhat.com>; OpenStack Development 
> Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
> Cc: Armando M. <arma...@gmail.com>
> Subject: RE: [openstack-dev] [neutron] Neutron Port MAC Address Uniqueness
>
> Miguel,
>
> I talked to our driver architect and according to him this is a vendor-specific
> implementation detail (according to him this should work with a Mellanox NIC). I need
> to verify that this is indeed working.
> I will update after I will prepare SR-IOV setup and try it myself.
>
>
> -----Original Message-----
> From: Miguel Angel Ajo Pelayo [mailto:majop...@redhat.com]
> Sent: Wednesday, August 10, 2016 12:04 PM
> To: OpenStack Development Mailing List (not for usage questions) 
> <openstack-dev@lists.openstack.org>
> Cc: Armando M. <arma...@gmail.com>; Moshe Levi <mosh...@mellanox.com>
> Subject: Re: [openstack-dev] [neutron] Neutron Port MAC Address Uniqueness
>
> @moshe, any insight on this?
>
> I guess that'd depend on the NIC's internal switch implementation and how the
> switch ARP tables are handled there (per network, or globally per switch).
>
> If that's the case for some SR-IOV vendors (or all), would it make sense to
> have a global option to create globally unique MAC addresses (within the same
> neutron deployment, of course)?
>
> On Wed, Aug 10, 2016 at 7:38 AM, huangdenghui <hdh_1...@163.com> wrote:
>> hi Armando
>> I think this feature causes problems in the SR-IOV scenario, since SR-IOV
>> NICs don't support two VFs having the same MAC, even when the ports belong
>> to different networks.
>>
>>
>> (Sent from NetEase Mail on mobile)
>>
>>
>> On 2016-08-10 04:55 , Armando M. Wrote:
>>
>>
>>
>> On 9 August 2016 at 13:53, Anil Rao <anil@gigamon.com> wrote:
>>>
>>> Is the MAC address of a Neutron port on a tenant virtual network
>>> globally unique or unique just within that particular tenant network?
>>
>>
>> The latter:
>>
>> https://github.com/openstack/neutron/blob/master/neutron/db/models_v2.
>> py#L139
>>
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Anil
>>>
>>>


Re: [openstack-dev] [octavia] Multi-node controller testing

2016-08-11 Thread Miguel Angel Ajo Pelayo
On Wed, Aug 10, 2016 at 9:51 PM, Stephen Balukoff <step...@balukoff.com> wrote:
> Miguel--
>
> There have been a number of tempest patches in the review queue for a long
> time now, but I think the reason they're not getting attention is that we
> don't want to have to import a massive amount of tempest code into our
> repository (which will become stale and need hot-fixing, as has happened
> with neutron-lbaas on many occasions), and it appears tempest-lib doesn't
> yet support all the stuff we would need to do with it.

I guess you mean [1]


> People have suggested Rally, but so far nobody has come forth with code, or
> a strong desire to push it through.

I guess rally is more suited to making sure that things work at scale and
to uncovering any sort of race conditions (this would be especially
beneficial with multi-node controllers).

But I understand (I may be wrong) that we still need to have something
(a tempest plugin) to make sure that the integration works with nova &
neutron. I'm going to check those patches to see what the discussion and
issues were over there (I see this one [1] to start with,
which is probably the most important).

[1] 
https://review.openstack.org/#/q/status:open+project:openstack/octavia+branch:master+topic:octavia_basic_lb_scenario

[2] https://review.openstack.org/#/c/172199/66..75/.testr.conf
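
(For context: the tempest plugin route mentioned above keeps the tests in the
project's own tree while tempest discovers them through an entry point, so
nothing has to be copied into tempest itself. A minimal sketch of what such a
plugin class looks like -- the class name and module paths are made up for
illustration, not actual octavia code:

    import os

    from tempest.test_discover import plugins

    class OctaviaTempestPlugin(plugins.TempestPlugin):
        def load_tests(self):
            # tell tempest where this plugin's tests live on disk
            base_path = os.path.split(
                os.path.dirname(os.path.abspath(__file__)))[0]
            test_dir = "octavia_tempest_tests/tests"
            full_test_dir = os.path.join(base_path, test_dir)
            return full_test_dir, base_path

        def register_opts(self, conf):
            # register plugin-specific config options here, if any
            pass

        def get_opt_lists(self):
            return []

The class is then advertised through a 'tempest.test_plugins' entry point in
setup.cfg.)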


> Stephen
>
> On Tue, Aug 9, 2016 at 5:40 AM, Miguel Angel Ajo Pelayo
> <majop...@redhat.com> wrote:
>>
>> On Mon, Aug 8, 2016 at 4:56 PM, Kosnik, Lubosz <lubosz.kos...@intel.com>
>> wrote:
>> > Great work with that multi-node setup Miguel.
>>
>> Thanks, I have to get my hands dirtier with octavia, it's just a tiny
>> thing.
>>
>> > About that multinode: infra supports a two-node setup, currently used
>> > by the grenade jobs, but in my opinion we don’t have any tests which can
>> > cover that type of testing. We’re still struggling with selecting the
>> > proper tool to test Octavia from an integration/functional perspective,
>> > so it’s probably too early to make it happen.
>>
>>
>> Well, any current tests we run should pass equally well on a multi-node
>> controller, and that's the point: regardless of the deployment
>> architecture, the behaviour should not change at all. We may not need
>> any specific test.
>>
>>
>> > Maybe it’s a great start to finally make a decision about testing tools,
>> > and there will be a lot of work for you after that, also with setting up
>> > an infra multi-node job for it.
>>
>> I'm not fully aware of what we are running today for octavia, so if
>> you can give me some pointers about where those jobs are configured
>> and what they target, that could be a start, to provide feedback.
>>
>> What are the current options/tools we're considering?
>>
>>
>> >
>> > Cheers,
>> > Lubosz Kosnik
>> > Cloud Software Engineer OSIC
>> > lubosz.kos...@intel.com
>> >
>> >> On Aug 8, 2016, at 7:04 AM, Miguel Angel Ajo Pelayo
>> >> <majop...@redhat.com> wrote:
>> >>
>> >> Recently, I sent a series of patches [1] to make it easier for
>> >> developers to deploy a multi node octavia controller with
>> >> n_controllers x [api, cw, hm, hk] with an haproxy in front of the API.
>> >>
>> >> Since this is the way the service is designed to work (with horizontal
>> >> scalability in mind), and we want to have a good guarantee that any
>> >> bug related to such configuration is found early, and addressed, I was
>> >> thinking that an extra job that runs a two node controller deployment
>> >> could be beneficial for the project.
>> >>
>> >>
>> >> If we all believe it makes sense, I would be willing to take on this
>> >> work but I'd probably need some pointers and light help, since I've
>> >> never dealt with setting up or modifying existing jobs.
>> >>
>> >> How does this sound?
>> >>
>> >>
>> >> [1]
>> >> https://review.openstack.org/#/q/status:merged+project:openstack/octavia+branch:master+topic:multinode-devstack
>> >>
>> >>

Re: [openstack-dev] [neutron] Neutron Port MAC Address Uniqueness

2016-08-10 Thread Miguel Angel Ajo Pelayo
@moshe, any insight on this?

I guess that'd depend on the NIC's internal switch implementation and
how the switch ARP tables are handled there (per network, or globally
per switch).

If that's the case for some SR-IOV vendors (or all), would it make
sense to have a global configuration option to create globally unique
MAC addresses (for the same Neutron deployment, of course)?
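
(For reference, the per-network scope comes from the Port model's composite
unique constraint on (network_id, mac_address). A minimal sketch of the
relevant bits, simplified from models_v2.py -- column details may vary by
release:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Port(Base):
        __tablename__ = 'ports'
        id = sa.Column(sa.String(36), primary_key=True)
        network_id = sa.Column(sa.String(36), nullable=False)
        mac_address = sa.Column(sa.String(32), nullable=False)
        __table_args__ = (
            # MAC uniqueness is only enforced per network, not globally
            sa.UniqueConstraint('network_id', 'mac_address',
                                name='uniq_ports0network_id0mac_address'),
        )

A deployment-wide option would essentially widen that constraint to the whole
ports table.)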

On Wed, Aug 10, 2016 at 7:38 AM, huangdenghui  wrote:
> hi Armando
> I think this feature causes problems in the SR-IOV scenario, since SR-IOV
> NICs don't support two VFs having the same MAC, even when the ports belong
> to different networks.
>
>
> Sent from NetEase Mail mobile edition
>
>
> On 2016-08-10 04:55 , Armando M. Wrote:
>
>
>
> On 9 August 2016 at 13:53, Anil Rao  wrote:
>>
>> Is the MAC address of a Neutron port on a tenant virtual network globally
>> unique or unique just within that particular tenant network?
>
>
> The latter:
>
> https://github.com/openstack/neutron/blob/master/neutron/db/models_v2.py#L139
>
>>
>>
>>
>> Thanks,
>>
>> Anil
>>
>>


Re: [openstack-dev] [octavia] Multi-node controller testing

2016-08-09 Thread Miguel Angel Ajo Pelayo
On Mon, Aug 8, 2016 at 4:56 PM, Kosnik, Lubosz <lubosz.kos...@intel.com> wrote:
> Great work with that multi-node setup Miguel.

Thanks, I have to get my hands dirtier with octavia, it's just a tiny thing.

> About that multinode: infra supports a two-node setup, currently used by the
> grenade jobs, but in my opinion we don’t have any tests which can cover that
> type of testing. We’re still struggling with selecting the proper tool to
> test Octavia from an integration/functional perspective, so it’s probably
> too early to make it happen.


Well, any current tests we run should pass equally well on a multi-node
controller, and that's the point: regardless of the deployment
architecture, the behaviour should not change at all. We may not need
any specific test.


> Maybe it’s a great start to finally make a decision about testing tools, and
> there will be a lot of work for you after that, also with setting up an infra
> multi-node job for it.

I'm not fully aware of what we are running today for octavia, so if
you can give me some pointers about where those jobs are configured
and what they target, that could be a start, to provide feedback.

What are the current options/tools we're considering?


>
> Cheers,
> Lubosz Kosnik
> Cloud Software Engineer OSIC
> lubosz.kos...@intel.com
>
>> On Aug 8, 2016, at 7:04 AM, Miguel Angel Ajo Pelayo <majop...@redhat.com> 
>> wrote:
>>
>> Recently, I sent a series of patches [1] to make it easier for
>> developers to deploy a multi node octavia controller with
>> n_controllers x [api, cw, hm, hk] with an haproxy in front of the API.
>>
>> Since this is the way the service is designed to work (with horizontal
>> scalability in mind), and we want to have a good guarantee that any
>> bug related to such configuration is found early, and addressed, I was
>> thinking that an extra job that runs a two node controller deployment
>> could be beneficial for the project.
>>
>>
>> If we all believe it makes sense, I would be willing to take on this
>> work but I'd probably need some pointers and light help, since I've
>> never dealt with setting up or modifying existing jobs.
>>
>> How does this sound?
>>
>>
>> [1] 
>> https://review.openstack.org/#/q/status:merged+project:openstack/octavia+branch:master+topic:multinode-devstack
>>


Re: [openstack-dev] [octavia] Multi-node controller testing

2016-08-09 Thread Miguel Angel Ajo Pelayo
Thank you!! :)

On Mon, Aug 8, 2016 at 5:49 PM, Michael Johnson <johnso...@gmail.com> wrote:
> Miguel,
>
> Thank you for your work here.  I would support an effort to setup a
> multi-node gate job.
>
> Michael
>
>
> On Mon, Aug 8, 2016 at 5:04 AM, Miguel Angel Ajo Pelayo
> <majop...@redhat.com> wrote:
>> Recently, I sent a series of patches [1] to make it easier for
>> developers to deploy a multi node octavia controller with
>> n_controllers x [api, cw, hm, hk] with an haproxy in front of the API.
>>
>> Since this is the way the service is designed to work (with horizontal
>> scalability in mind), and we want to have a good guarantee that any
>> bug related to such configuration is found early, and addressed, I was
>> thinking that an extra job that runs a two node controller deployment
>> could be beneficial for the project.
>>
>>
>> If we all believe it makes sense, I would be willing to take on this
>> work but I'd probably need some pointers and light help, since I've
>> never dealt with setting up or modifying existing jobs.
>>
>> How does this sound?
>>
>>
>> [1] 
>> https://review.openstack.org/#/q/status:merged+project:openstack/octavia+branch:master+topic:multinode-devstack
>>


Re: [openstack-dev] [infra][neutron] - best way to load 8021q kernel module into cirros

2016-08-09 Thread Miguel Angel Ajo Pelayo
Answers inline.

On Tue, Aug 9, 2016 at 8:08 AM, Antonio Ojea  wrote:
> What do you think about openwrt images?
>
> They are small, have documentation to build your custom images, have a
> packaging system and have tons of networking features (ipv6, vlans, ...) ,
> also seems that someone has done the work to adapt to openstack [1]
>
>
> [1] http://hackstack.org/x/blog/2014/08/17/openwrt-images-for-openstack/
>

At first glance, that could be a good idea: openwrt has a low memory
footprint and rich network capabilities (which could be a good thing
for testing).

For example, having things like netperf/iperf could be great for tools
like shaker, or for bandwidth shaping/policing tests.



>
> On Tue, Aug 9, 2016 at 1:56 AM, Ian Wienand  wrote:
>>
>> On 08/09/2016 02:10 AM, Jeremy Stanley wrote:
>>>
>>> I haven't personally tested the CirrOS build instructions, but have
>>> a feeling writing a diskimage-builder element wrapper for that
>>> wouldn't be particularly challenging.
>>
>>
>> I'm not exactly sure it fits that well into dib; it seems like
>> "bundle" has it mostly figured out.  I say that based on [1] where we
>> are discussing a similar thing for cirros images with watchdog
>> support.
>>
>> As mentioned we can easily build these and store them, and put them on
>> mirrors if we need them closer to nodes.  What I mentioned in [1] and
>> didn't particularly like is if the build of these images is totally
>> removed from where they're actually used (e.g. a custom script inside
>> a job in project-config, where basically anyone outside infra can't
>> easily replicate the build for a local test).  But if several projects
>> are building slightly different cusomised cirros images, it might be
>> worth consolidating.
>>

I agree with Ian here: we should be in control of how tiny test images
are built, so a project and a tuneable job to build those images
would be a fantastic idea IMO, if that's what you mean.


>> -i
>>
>> [1]
>> https://review.openstack.org/#/c/338167/2/tools/build-watchdog-images.sh
>>



[openstack-dev] [octavia] Multi-node controller testing

2016-08-08 Thread Miguel Angel Ajo Pelayo
Recently, I sent a series of patches [1] to make it easier for
developers to deploy a multi node octavia controller with
n_controllers x [api, cw, hm, hk] with an haproxy in front of the API.

Since this is the way the service is designed to work (with horizontal
scalability in mind), and we want to have a good guarantee that any
bug related to such configuration is found early, and addressed, I was
thinking that an extra job that runs a two node controller deployment
could be beneficial for the project.


If we all believe it makes sense, I would be willing to take on this
work but I'd probably need some pointers and light help, since I've
never dealt with setting up or modifying existing jobs.

How does this sound?


[1] 
https://review.openstack.org/#/q/status:merged+project:openstack/octavia+branch:master+topic:multinode-devstack



Re: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack security group driver with ovs-dpdk

2016-08-08 Thread Miguel Angel Ajo Pelayo
Awesome Sean!,

   Keep us posted!! :)


On Sat, Aug 6, 2016 at 8:16 PM, Mooney, Sean K  wrote:
> Hi, just a quick FYI.
>
> About 2 weeks ago I did some light testing with the conntrack security group
> driver and the newly merged userspace conntrack support in OVS.
>
> I can confirm that, at least from my initial smoke tests where I used netcat,
> ping and ssh to try and establish connections between two VMs, the conntrack
> security group driver appears to function correctly with the userspace
> connection tracker.
>
> We have not looked at any of the performance yet but, assuming it is at an
> acceptable level, I am planning to deprecate the learn action based driver in
> networking-ovs-dpdk and remove it once we have cut the stable newton branch.
>
> We hope to do some RFC 2544 throughput testing to evaluate the performance
> sometime mid-September.
>
> Assuming all goes well, I plan on enabling the conntrack based security group
> driver by default when the networking-ovs-dpdk devstack plugin is loaded. We
> will also evaluate enabling the security group tests in our third party CI to
> ensure it continues to function correctly with ovs-dpdk.
>
> Regards,
> Seán


Re: [openstack-dev] [infra][neutron] - best way to load 8021q kernel module into cirros

2016-08-08 Thread Miguel Angel Ajo Pelayo
The problem with the other projects' image builds is that they target
bigger systems, while cirros is an embedded-device-like image which
boots in a couple of seconds.

Couldn't we contribute to cirros to have such a module loaded by default [1]?

Or maybe it's time for OpenStack to build its own "cirros-like" image
with all the capabilities we may be missing for general tempest
testing (ipv6, vlan, etc.)?


[1] 
http://bazaar.launchpad.net/~cirros-dev/cirros/trunk/view/head:/bin/grab-kernels

On Sat, Aug 6, 2016 at 11:15 PM, Jeremy Stanley  wrote:
> On 2016-08-06 14:44:27 -0600 (-0600), Doug Wiegley wrote:
>> I would be tempted to make a custom image, and ask to put it on
>> our mirrors, or have nodepool manage the image building and
>> storing.
>
> Some projects (I think at least Ironic and Trove) have CI jobs to
> build custom virtual machine images they then boot under nova in
> DevStack using jobs. At the moment the image build jobs are
> uploading to tarballs.openstack.org and then test jobs are consuming
> them from there.
>
>> You can also likely just have the module on the local mirrors,
>> which would alleviate the random internet issue.
> [...]
>
> We've discussed this, and I think it makes sense. If we move our
> tarballs site into AFS, then we could serve its contents from our
> local AFS cache mirrors in each provider for improved performance.
> This may not work well for exceptionally large images due to the
> time it takes to pull them into the AFS cache over the Internet, but
> some experimentation with small and infrequently-updated custom disk
> images seems like it could prove worthwhile.
> --
> Jeremy Stanley
>


Re: [openstack-dev] [Neutron] Proposing Jakub Libosvar for testing core

2016-07-26 Thread Miguel Angel Ajo Pelayo
Ohhh, yikes, even though I'm late my vote would have been super +1!!


On Tue, Jul 26, 2016 at 5:04 PM, Jakub Libosvar  wrote:
> On 26/07/16 16:56, Assaf Muller wrote:
>>
>> We've hit critical mass from cores interested in the testing area.
>>
>> Welcome Jakub to the core reviewer team. May you enjoy staring at the
>> Gerrit interface and getting yelled at by people... It's a glamorous
>> life.
>
>
> Thanks everyone for support! I'll try to do my best :)
>
>
>>
>>
>>
>> On Mon, Jul 25, 2016 at 10:49 PM, Brian Haley  wrote:
>>>
>>> +1
>>>
>>> On 07/22/2016 04:12 AM, Oleg Bondarev wrote:


 +1

 On Fri, Jul 22, 2016 at 2:36 AM, Doug Wiegley wrote:

 +1

> On Jul 21, 2016, at 5:13 PM, Kevin Benton wrote:
>
> +1
>
> On Thu, Jul 21, 2016 at 2:41 PM, Carl Baldwin wrote:
>
> +1 from me
>
> On Thu, Jul 21, 2016 at 1:35 PM, Assaf Muller wrote:
>
> As Neutron's so called testing lieutenant I would like to
> propose
> Jakub Libosvar to be a core in the testing area.
>
> Jakub has demonstrated his inherent interest in the testing
> area over
> the last few years, his reviews are consistently insightful
> and his
> numbers [1] are in line with others and I know will improve
> if given
> the responsibilities of a core reviewer. Jakub is deeply
> involved with
> the project's testing infrastructures and CI systems.
>
> As a reminder, the expectations from cores are found here [2], and
> specifically for cores interested in helping shape Neutron's
> testing story:
>
> * Guide community members to craft a testing strategy for features [3]
> * Ensure Neutron's testing infrastructures are sufficiently
>   sophisticated to achieve the above.
> * Provide leadership when determining testing Do's & Don'ts [4].
>   What makes for an effective test?
> * Ensure the gate stays consistently green
>
> And more tactically we're looking at finishing the Tempest/Neutron
> tests dedup [5] and providing visual graphing for historical control
> and data plane performance results similar to [6].
>
> [1] http://stackalytics.com/report/contribution/neutron/90
> [2] http://docs.openstack.org/developer/neutron/policies/neutron-teams.html
> [3] http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron
> [4] https://assafmuller.com/2015/05/17/testing-lightning-talk/
> [5] https://etherpad.openstack.org/p/neutron-tempest-defork
> [6] https://www.youtube.com/watch?v=a0qlsH1hoKs=youtu.be=24m22s

Re: [openstack-dev] [neutron][qos][drivers] RFE about QoS extended validation

2016-07-15 Thread Miguel Angel Ajo Pelayo
Oh yikes, I was "hit by a plane" (a delay) plus huge jet lag, and
didn't make it to the meeting; I'll be there next week. Thank you.

On Tue, Jul 12, 2016 at 9:48 AM, Miguel Angel Ajo Pelayo
<majop...@redhat.com> wrote:
> I'd like to ask for some prioritization of this RFE [1], since it's blocking
> one of the already existing QoS RFEs (ingress bandwidth limiting),
> and we're trying to enhance the operator experience of the QoS service.
>
> It's been discussed in previous drivers meetings, and it seems to have
> some consensus after some tweaks, so discussion shall hopefully be short.
>
> We also have a related -not necessary- spec, just for reference if somebody
> wants to dig into the extended details [2], and even some initial
> implementation
> [3]
>
>
> Thank you for your time,
> Miguel Ángel Ajo
>
> [1] https://bugs.launchpad.net/neutron/+bug/1586056
> [2] https://review.openstack.org/#/c/323474/
> [3] https://review.openstack.org/#/c/319694/



[openstack-dev] [neutron][qos][drivers] RFE about QoS extended validation

2016-07-12 Thread Miguel Angel Ajo Pelayo
I'd like to ask for some prioritization of this RFE [1], since it's blocking
one of the already existing QoS RFEs (ingress bandwidth limiting),
and we're trying to enhance the operator experience of the QoS service.

It's been discussed in previous drivers meetings, and it seems to have
some consensus after some tweaks, so discussion shall hopefully be short.

We also have a related -not necessary- spec, just for reference if somebody
wants to dig into the extended details [2], and even some initial
implementation [3].


Thank you for your time,
Miguel Ángel Ajo

[1] https://bugs.launchpad.net/neutron/+bug/1586056
[2] https://review.openstack.org/#/c/323474/
[3] https://review.openstack.org/#/c/319694/



Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-05-18 Thread Miguel Angel Ajo Pelayo
Hey,

   Finally, we took over the channel for 1h, mostly because the time was
already agreed between many people in opposing timezones and it was a bit
late to cancel it.

   The first point was finding a suitable timeslot for a biweekly meeting
-for some time- and alternatively following up over email. We should not
take over the neutron channel for these meetings anymore; I'm sorry for
the inconvenience.

  Please find the summary here:

http://eavesdrop.openstack.org/meetings/network_common_flow_classifier/2016/network_common_flow_classifier.2016-05-17-17.02.html

On Tue, May 17, 2016 at 8:10 PM, Kevin Benton <ke...@benton.pub> wrote:

> Yeah, no meetings in #openstack-neutron please. It leaves us nowhere to
> discuss development stuff during that hour.
>
> On Tue, May 17, 2016 at 2:54 AM, Miguel Angel Ajo Pelayo <
> majop...@redhat.com> wrote:
>
>> I agree, let's try to find a timeslot that works.
>>
>> using #openstack-neutron with the meetbot works, but it's going to
>> generate a lot of noise.
>>
>> On Tue, May 17, 2016 at 11:47 AM, Ihar Hrachyshka <ihrac...@redhat.com>
>> wrote:
>>
>>>
>>> > On 16 May 2016, at 15:47, Takashi Yamamoto <yamam...@midokura.com>
>>> wrote:
>>> >
>>> > On Mon, May 16, 2016 at 10:25 PM, Takashi Yamamoto
>>> > <yamam...@midokura.com> wrote:
>>> >> hi,
>>> >>
>>> >> On Mon, May 16, 2016 at 9:00 PM, Ihar Hrachyshka <ihrac...@redhat.com>
>>> wrote:
>>> >>> +1 for earlier time. But also, have we booked any channel for the
>>> meeting? Hijacking #openstack-neutron may not work fine during such a busy
>>> (US) time. I suggest we propose a patch for
>>> https://github.com/openstack-infra/irc-meetings
>>> >>
>>> >> i agree and submitted a patch.
>>> >> https://review.openstack.org/#/c/316830/
>>> >
>>> > oops, unfortunately there seems no meeting channel free at the time
>>> slot.
>>>
>>> This should be solved either by changing the slot, or by getting a new
>>> channel registered for meetings. Using unregistered channels, especially
>>> during busy hours, is not effective, and is prone to overlaps for relevant
>>> meetings. The meetings will also not get a proper slot at
>>> eavesdrop.openstack.org.
>>>
>>> Ihar
>>>


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-05-17 Thread Miguel Angel Ajo Pelayo
I agree, let's try to find a timeslot that works.

using #openstack-neutron with the meetbot works, but it's going to generate
a lot of noise.

On Tue, May 17, 2016 at 11:47 AM, Ihar Hrachyshka 
wrote:

>
> > On 16 May 2016, at 15:47, Takashi Yamamoto 
> wrote:
> >
> > On Mon, May 16, 2016 at 10:25 PM, Takashi Yamamoto
> >  wrote:
> >> hi,
> >>
> >> On Mon, May 16, 2016 at 9:00 PM, Ihar Hrachyshka 
> wrote:
> >>> +1 for earlier time. But also, have we booked any channel for the
> meeting? Hijacking #openstack-neutron may not work fine during such a busy
> (US) time. I suggest we propose a patch for
> https://github.com/openstack-infra/irc-meetings
> >>
> >> i agree and submitted a patch.
> >> https://review.openstack.org/#/c/316830/
> >
> > oops, unfortunately there seems no meeting channel free at the time slot.
>
> This should be solved either by changing the slot, or by getting a new
> channel registered for meetings. Using unregistered channels, especially
> during busy hours, is not effective, and is prone to overlaps for relevant
> meetings. The meetings will also not get a proper slot at
> eavesdrop.openstack.org.
>
> Ihar


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-05-06 Thread Miguel Angel Ajo Pelayo
Sounds good,

   I started by opening a tiny RFE that may help with the organization
of flows inside the OVS agent, for interoperability of features (SFC,
TaaS, OVS firewall, and even port trunking with just openflow) [1] [2].


[1] https://bugs.launchpad.net/neutron/+bug/1577791
[2] http://paste.openstack.org/show/495967/


On Fri, May 6, 2016 at 12:35 AM, Cathy Zhang  wrote:
> Hi everyone,
>
> We had a discussion on the two topics during the summit. Here is the etherpad 
> link for the discussion.
> https://etherpad.openstack.org/p/Neutron-FC-OVSAgentExt-Austin-Summit
>
> We agreed to continue the discussion on Neutron channel on a weekly basis. It 
> seems UTC 1700 ~ UTC 1800 Tuesday is good for most people.
> Another option is UTC 1700 ~ UTC 1800 Friday.
>
> I will tentatively set the meeting time to UTC 1700 ~ UTC 1800 Tuesday. Hope 
> this time is good for all people who have interest and like to contribute to 
> this work. We plan to start the first meeting on May 17.
>
> Thanks,
> Cathy
>
>
> -Original Message-
> From: Cathy Zhang
> Sent: Thursday, April 21, 2016 11:43 AM
> To: Cathy Zhang; OpenStack Development Mailing List (not for usage 
> questions); Ihar Hrachyshka; Vikram Choudhary; Sean M. Collins; Haim Daniel; 
> Mathieu Rohon; Shaughnessy, David; Eichberger, German; Henry Fourie; 
> arma...@gmail.com; Miguel Angel Ajo; Reedip; Thierry Carrez
> Cc: Cathy Zhang
> Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
> Agent extension for Newton cycle
>
> Hi everyone,
>
> We have room 400 at 3:10pm on Thursday available for discussion of the two 
> topics.
> Another option is to use the common room with roundtables in "Salon C" during 
> Monday or Wednesday lunch time.
>
> Room 400 at 3:10pm is a closed room while the Salon C is a big open room 
> which can host 500 people.
>
> I am Ok with either option. Let me know if anyone has a strong preference.
>
> Thanks,
> Cathy
>
>
> -Original Message-
> From: Cathy Zhang
> Sent: Thursday, April 14, 2016 1:23 PM
> To: OpenStack Development Mailing List (not for usage questions); 'Ihar 
> Hrachyshka'; Vikram Choudhary; 'Sean M. Collins'; 'Haim Daniel'; 'Mathieu 
> Rohon'; 'Shaughnessy, David'; 'Eichberger, German'; Cathy Zhang; Henry 
> Fourie; 'arma...@gmail.com'
> Subject: RE: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
> Agent extension for Newton cycle
>
> Thanks for everyone's reply!
>
> Here is the summary based on the replies I received:
>
> 1.  We should have a meet-up for these two topics. The "to" list are the 
> people who have interest in these topics.
> I am thinking about around lunch time on Tuesday or Wednesday since some 
> of us will fly back on Friday morning/noon.
> If this time is OK with everyone, I will find a place and let you know 
> where and what time to meet.
>
> 2.  There is a bug opened for the QoS Flow Classifier 
> https://bugs.launchpad.net/neutron/+bug/1527671
> We can either change the bug title and modify the bug details or start with a 
> new one for the common FC which provides info on all requirements needed by 
> all relevant use cases. There is a bug opened for OVS agent extension 
> https://bugs.launchpad.net/neutron/+bug/1517903
>
> 3.  There is some very rough and preliminary ("ugly", as Sean put it :-))
> work on a common FC, https://github.com/openstack/neutron-classifier, which
> we can see how to leverage. There is also an SFC API spec which covers the FC
> API for SFC usage,
> https://github.com/openstack/networking-sfc/blob/master/doc/source/api.rst;
> the following is the CLI version of the Flow Classifier for your reference:
>
> neutron flow-classifier-create [-h]
> [--description <description>]
> [--protocol <protocol>]
> [--ethertype <ethertype>]
> [--source-port <min source protocol port>:<max source protocol port>]
> [--destination-port <min destination protocol port>:<max destination protocol port>]
> [--source-ip-prefix <source IP prefix>]
> [--destination-ip-prefix <destination IP prefix>]
> [--logical-source-port <neutron source port>]
> [--logical-destination-port <neutron destination port>]
> [--l7-parameters <L7 parameters>] FLOW-CLASSIFIER-NAME
>
> The corresponding code is here 
> https://github.com/openstack/networking-sfc/tree/master/networking_sfc/extensions
>
> 4.  We should come up with a formal Neutron spec for FC and another one for 
> OVS Agent extension and get everyone's review and approval. Here is the 
> etherpad catching our previous requirement discussion on OVS agent (Thanks 
> David for the link! I remember we had this discussion before) 
> https://etherpad.openstack.org/p/l2-agent-extensions-api-expansion
>
>
> More inline.
>
> Thanks,
> Cathy
>
>
> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> Sent: Thursday, April 14, 2016 3:34 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS 
> Agent extension for Newton cycle
>
> Cathy Zhang  wrote:
>
>> Hi everyone,

[openstack-dev] [neutron] [qos] gathering Friday 9:30

2016-04-28 Thread Miguel Angel Ajo Pelayo
Does the Governors Ballroom in the Hilton sound ok?

We can move to somewhere else if necessary.


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-27 Thread Miguel Angel Ajo Pelayo
Please add me to whatsapp or telegram if you use that : +34636522569
On 27/4/2016 12:50, majop...@redhat.com wrote:

> Trying to find you folks. I was late
> On 27/4/2016 12:04, "Paul Carver" wrote:
>
>> SFC team and anybody else dealing with flow selection/classification
>> (e.g. QoS),
>>
>> I just wanted to confirm that we're planning to meet in salon C today
>> (Wednesday) to get lunch but then possibly move to a quieter location to
>> discuss the common flow classifier ideas.
>>
>> On 4/21/2016 19:42, Cathy Zhang wrote:
>>
>>> I like Malini’s suggestion on meeting for a lunch to get to know each
>>> other, then continue on Thursday.
>>>
>>> So let’s meet at "Salon C" for lunch from 12:30pm~1:50pm on Wednesday
>>> and then continue the discussion at Room 400 at 3:10pm Thursday.
>>>
>>> Since Salon C is a big room, I will put a sign “Common Flow Classifier
>>> and OVS Agent Extension” on the table.
>>>
>>> I have created an etherpad for the discussion.
>>> https://etherpad.openstack.org/p/Neutron-FC-OVSAgentExt-Austin-Summit
>>>
>>>
>>


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-27 Thread Miguel Angel Ajo Pelayo
Trying to find you folks. I was late
On 27/4/2016 12:04, "Paul Carver" wrote:

> SFC team and anybody else dealing with flow selection/classification (e.g.
> QoS),
>
> I just wanted to confirm that we're planning to meet in salon C today
> (Wednesday) to get lunch but then possibly move to a quieter location to
> discuss the common flow classifier ideas.
>
> On 4/21/2016 19:42, Cathy Zhang wrote:
>
>> I like Malini’s suggestion on meeting for a lunch to get to know each
>> other, then continue on Thursday.
>>
>> So let’s meet at "Salon C" for lunch from 12:30pm~1:50pm on Wednesday
>> and then continue the discussion at Room 400 at 3:10pm Thursday.
>>
>> Since Salon C is a big room, I will put a sign “Common Flow Classifier
>> and OVS Agent Extension” on the table.
>>
>> I have created an etherpad for the discussion.
>> https://etherpad.openstack.org/p/Neutron-FC-OVSAgentExt-Austin-Summit
>>
>>
>


Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-21 Thread Miguel Angel Ajo Pelayo
On Thu, Apr 21, 2016 at 9:54 AM, Vikram Choudhary <viks...@gmail.com> wrote:
> AFAIK, there is a proposal about adding a 'priority' option to the existing
> flow classifier rule. This can ensure the rule ordering.
>
>

It's more complicated than that: there you're only considering flow
classifiers, while we need to make the full pipeline of (externally
pluggable) features work together.

> On Thu, Apr 21, 2016 at 12:58 PM, IWAMOTO Toshihiro <iwam...@valinux.co.jp>
> wrote:
>>
>> At Wed, 20 Apr 2016 14:12:07 +0200,
>> Miguel Angel Ajo Pelayo wrote:
>> >
>> > I think this is an interesting topic.
>> >
>> > What do you mean exactly by FC ? (feature chaining?)
>> >
>> > I believe we have three things to look at:  (sorry for the TL)
>> >
>> > 1) The generalization of traffic filters / traffic classifiers. Having
>> > common models, some sort of common API or common API structure
>> > available, and translators to convert those filters to iptables,
>> > openflow filters, etc..
>> >
>> > 2) The enhancement of extensiblity of agents via Extension API.
>> >
>> > 3) How we chain features in OpenFlow, which current approach of just
>> > inserting rules, renders into incompatible extensions. This becomes
>> > specially relevant for the new openvswitch firewall.
>> >
>> > 2 and 3 are interlinked, and a good mechanism to enhance (3) should be
>> > provided in (2).
>> >
>> > We need to resolve:
>> >
>> > a) The order of tables, and how openflow actions chain the
>> > different features in the pipeline.  Some naive thinking brings me
>> > into the idea that we need to identify different input/output stages
>> > of packet processing, and every feature/extension declares the point
>> > where it needs to be. And then when we have all features, every
>> > feature get's it's own table number, and the "next" action in
>> > pipeline.
>>
>> Can we create an API that allocates flow insertion points and table
>> numbers?  How can we ensure correct ordering of flows?

I believe that just an API to allocate flow insertion points and table
numbers wouldn't work, because you need to get the "next" hop in the
pipeline, and the next hop would not yet be resolved when you ask for
it (unless we return a mutable object).

The idea is that once all features are declared and inspected, we
have the next hops and table numbers for all features.

Also, another API for requesting openflow registers would be necessary,
as extensions consume registers for different purposes.
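
A rough sketch of what I mean -- PipelineRegistry/TableHandle are made-up
names for illustration, not an existing agent API:

    class TableHandle(object):
        """Mutable handle: table/next stay unset until all features register."""
        def __init__(self, name):
            self.name = name
            self.table = None
            self.next = None

    class PipelineRegistry(object):
        def __init__(self):
            self._handles = []

        def register(self, feature_name):
            # called by each feature/extension at load time
            handle = TableHandle(feature_name)
            self._handles.append(handle)
            return handle

        def resolve(self, first_table=0):
            # once every feature has declared itself, give each one a
            # table number and link it to its successor in the pipeline
            for i, handle in enumerate(self._handles):
                handle.table = first_table + i
                if i + 1 < len(self._handles):
                    handle.next = self._handles[i + 1]

    registry = PipelineRegistry()
    fw = registry.register('ovs-firewall')
    qos = registry.register('qos')
    registry.resolve(first_table=60)
    # fw.table == 60 and fw.next.table == 61, so a feature can emit e.g.
    # 'table=%d,...,actions=...,resubmit(,%d)' % (fw.table, fw.next.table)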

>> IMHO, it might be a time to collect low-level flow operation functions
>> into a single repository and test interoperability there.
>>

That may be something we must consider; I don't completely disagree.
But if we can find a way to solve the issue dynamically, that would
lead to quicker evolution, easy interoperability with out-of-tree
solutions, and cross-version compatibility.

>> > b) We need to have a way to request openflow registers to use in
>> > extensions, so one extension doesn't overwrite other's registers
>> >
>> >c) Those registers need to be given a logical names that other
>> > extensions can query for (for example "port_number", "local_zone",
>> > etc..) , and those standard registers should be filled in for all
>> > extensions at the input stage.
>> >
>> >and probably c,d,e,f,g,h what I didn't manage to think of.
>> >
>> > On Fri, Apr 15, 2016 at 11:13 PM, Cathy Zhang <cathy.h.zh...@huawei.com>
>> > wrote:
>> > > Hi Reedip,
>> > >
>> > >
>> > >
>> > > Sure will include you in the discussion. Let me know if there are
>> > > other
>> > > Tap-as-a-Service members who would like to join this initiative.
>> > >
>> > >
>> > >
>> > > Cathy
>> > >
>> > >
>> > >
>> > > From: reedip banerjee [mailto:reedi...@gmail.com]
>> > > Sent: Thursday, April 14, 2016 7:03 PM
>> > > To: OpenStack Development Mailing List (not for usage questions)
>> > > Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier
>> > > and
>> > > OVS Agent extension for Newton cycle
>> > >
>> > >
>> > >
>> > > Speaking on behalf of Tap-as-a-Service members, we would also be very
>> > > much
>> > > interested in the following initiative.

Re: [openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-20 Thread Miguel Angel Ajo Pelayo
Inline update.

On Mon, Apr 11, 2016 at 4:22 PM, Miguel Angel Ajo Pelayo
<majop...@redhat.com> wrote:
> On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes <jaypi...@gmail.com> wrote:
>> On 04/08/2016 09:17 AM, Miguel Angel Ajo Pelayo wrote:
[...]
>> Yes, Nova's conductor gathers information about the requested networks
>> *before* asking the scheduler where to place hosts:
>>
>> https://github.com/openstack/nova/blob/stable/mitaka/nova/conductor/manager.py#L362
>>
>>>  That would require identifying that the port has a "qos_policy_id"
>>> attached to it, and then, asking neutron for the specific QoS policy
>>>   [3], then look out for a minimum bandwidth rule (still to be defined),
>>> and extract the required bandwidth from it.
>>
>>
>> Yep, exactly correct.
>>
>>> That moves, again some of the responsibility to examine and
>>> understand external resources to nova.
>>
>>
>> Yep, it does. The alternative is more retries for placement decisions
>> because accurate decisions cannot be made until the compute node is already
>> selected and the claim happens on the compute node.
>>
>>>  Could it make sense to make that part pluggable via stevedore?, so
>>> we would provide something that takes the "resource id" (for a port in
>>> this case) and returns the requirements translated to resource classes
>>> (NIC_BW_KB in this case).
>>
>>
>> Not sure Stevedore makes sense in this context. Really, we want *less*
>> extensibility and *more* consistency. So, I would envision rather a system
>> where Nova would call to Neutron before scheduling when it has received a
>> port or network ID in the boot request and ask Neutron whether the port or
>> network has any resource constraints on it. Neutron would return a
>> standardized response containing each resource class and the amount
>> requested in a dictionary (or better yet, an os_vif.objects.* object,
>> serialized). Something like:
>>
>> {
>>   'resources': {
>>     '<port_uuid>': {
>>       'NIC_BW_KB': 2048,
>>       'IPV4_ADDRESS': 1
>>     }
>>   }
>> }
>>
>
> Oh, true, that's a great idea: having some API that translates a
> neutron resource to scheduling constraints. The external call will
> still be required, but the coupling issue is removed.
>
>


I had a talk yesterday with @iharchys, @dansmith, and @sbauzas about
this, and we believe the synthesis of resource usage / scheduling
constraints from neutron makes sense.

We should probably look into providing those details in a read-only
dictionary during port creation/update/show in general; that way, we
would not be adding an extra API call to neutron from the nova
scheduler to figure out any of those details. That extra optimization
is something we may need to discuss with the neutron community.
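
A rough illustration of the shape of it -- the 'resource_request' key is
hypothetical, just to show what a read-only attribute in the port view could
carry:

    port_view = {
        'id': '4a334c3e-...',
        'network_id': 'd3a21337-...',
        'qos_policy_id': '9d2db5c5-...',
        # read-only, synthesized by neutron from the attached QoS policy
        'resource_request': {
            'NIC_BW_KB': 2048,
        },
    }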



>> In the case of the NIC_BW_KB resource class, Nova's scheduler would look for
>> compute nodes that had a NIC with that amount of bandwidth still available.
>> In the case of the IPV4_ADDRESS resource class, Nova's scheduler would use
>> the generic-resource-pools interface to find a resource pool of IPV4_ADDRESS
>> resources (i.e. a Neutron routed network or subnet allocation pool) that has
>> available IP space for the request.
>>
>
> Not sure about the IPV4_ADDRESS part, because I still haven't looked at
> how they resolve routed networks with this new framework, but for
> other constraints it makes perfect sense to me.
>
>> Best,
>> -jay
>>
>>
>>> Best regards,
>>> Miguel Ángel Ajo
>>>
>>>
>>> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-February/086371.html
>>> [2] https://bugs.launchpad.net/neutron/+bug/1560963
>>> [3] http://developer.openstack.org/api-ref-networking-v2-ext.html#showPolicy



Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-20 Thread Miguel Angel Ajo Pelayo
I think this is an interesting topic.

What do you mean exactly by FC ? (feature chaining?)

I believe we have three things to look at:  (sorry for the TL)

1) The generalization of traffic filters / traffic classifiers: having
common models, some sort of common API or common API structure
available, and translators to convert those filters to iptables rules,
openflow filters, etc. (see the sketch after this message).

2) The enhancement of extensiblity of agents via Extension API.

3) How we chain features in OpenFlow, where the current approach of
just inserting rules renders extensions incompatible. This becomes
especially relevant for the new openvswitch firewall.

2 and 3 are interlinked, and a good mechanism to enhance (3) should be
provided in (2).

We need to resolve:

a) The order of tables, and how openflow actions chain the
different features in the pipeline.  Some naive thinking brings me
to the idea that we need to identify the different input/output stages
of packet processing, and every feature/extension declares the point
where it needs to be. Then, when we have all the features, every
feature gets its own table number and the "next" action in the
pipeline.

b) We need to have a way to request openflow registers to use in
extensions, so one extension doesn't overwrite another's registers.

   c) Those registers need to be given logical names that other
extensions can query for (for example "port_number", "local_zone",
etc.), and those standard registers should be filled in for all
extensions at the input stage.

   and probably d, e, f, g, h that I didn't manage to think of.
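
To make point 1 concrete, a minimal sketch of the common-model-plus-
translators idea (FlowClassifier and the translator functions are
illustrative names, not an existing API):

    class FlowClassifier(object):
        def __init__(self, protocol=None, source_ip_prefix=None,
                     destination_port=None):
            self.protocol = protocol
            self.source_ip_prefix = source_ip_prefix
            self.destination_port = destination_port

    def to_iptables_args(fc):
        # the common model rendered as iptables match arguments
        args = []
        if fc.protocol:
            args += ['-p', fc.protocol]
        if fc.source_ip_prefix:
            args += ['-s', fc.source_ip_prefix]
        if fc.destination_port:
            args += ['--dport', str(fc.destination_port)]
        return args

    def to_openflow_match(fc):
        # the same model rendered as an OpenFlow match string
        match = []
        if fc.protocol:
            match.append(fc.protocol)              # e.g. 'tcp', 'udp'
        if fc.source_ip_prefix:
            match.append('nw_src=%s' % fc.source_ip_prefix)
        if fc.destination_port:
            match.append('tp_dst=%s' % fc.destination_port)
        return ','.join(match)

Each backend (iptables firewall, OVS firewall, SFC, QoS) would then only
need its own translator, instead of every feature reinventing its own
classifier model.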

On Fri, Apr 15, 2016 at 11:13 PM, Cathy Zhang  wrote:
> Hi Reedip,
>
>
>
> Sure will include you in the discussion. Let me know if there are other
> Tap-as-a-Service members who would like to join this initiative.
>
>
>
> Cathy
>
>
>
> From: reedip banerjee [mailto:reedi...@gmail.com]
> Sent: Thursday, April 14, 2016 7:03 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and
> OVS Agent extension for Newton cycle
>
>
>
> Speaking on behalf of Tap-as-a-Service members, we would also be very much
> interested in the following initiative :)
>
>
>
> On Fri, Apr 15, 2016 at 5:14 AM, Ihar Hrachyshka 
> wrote:
>
> Cathy Zhang  wrote:
>
>
> I think there is no formal spec or anything, just some emails around there.
>
> That said, I don’t follow why it’s a requirement for SFC to switch to l2
> agent extension mechanism. Even today, with SFC maintaining its own agent,
> there are no clear guarantees for flow priorities that would avoid all
> possible conflicts.
>
> Cathy> There is no requirement for SFC to switch. My understanding is that
> the current L2 agent extension does not solve the conflicting entry issue if
> two features inject a table entry with the same priority. I think this new L2
> agent effort is trying to come up with a mechanism to resolve this issue. Of
> course if each feature (SFC or QoS) uses its own agent, then there is no
> coordination and no way to avoid conflicts.
>
>
> Sorry, I probably used misleading wording. I meant, why do we consider the
> semantic flow management support in l2 agent extension framework a
> *prerequisite* for SFC to switch to l2 agent extensions? The existing
> framework should already allow SFC to achieve what you have in the
> subproject tree implemented as a separate agent (essentially a fork of OVS
> agent). It will also set SFC to use standard extension mechanisms instead of
> hacky inheritance from OVS agent classes. So even without the strict
> semantic flow management, there is benefit for the subproject.
>
> With that in mind, I would split this job into 3 pieces:
> * first, adopt l2 agent extension mechanism for SFC functionality (dropping
> custom agent);
> * then, work on semantic flow management support in OVS agent API class [1];
> * once the feature emerges, switch SFC l2 agent extension to the new
> framework to manage SFC flows.
>
> I would at least prioritize the first point and target it to Newton-1. Other
> bullet points may take significant time to bake.
>
> [1]
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_agent_extension_api.py
>
>
>
> Ihar
>
>
>
>
>
> --
>
> Thanks and Regards,
> Reedip Banerjee
>
> IRC: reedip
>
>
>
>
>
>

Re: [openstack-dev] [neutron] work on Common Flow Classifier and OVS Agent extension for Newton cycle

2016-04-20 Thread Miguel Angel Ajo Pelayo
Sorry, I just saw that FC = flow classifier :-); I made it a
multi-purpose abbreviation now ;)

On Wed, Apr 20, 2016 at 2:12 PM, Miguel Angel Ajo Pelayo
<majop...@redhat.com> wrote:
> I think this is an interesting topic.
>
> What do you mean exactly by FC ? (feature chaining?)
>
> I believe we have three things to look at:  (sorry for the TL)
>
> 1) The generalization of traffic filters / traffic classifiers. Having
> common models, some sort of common API or common API structure
> available, and translators to convert those filters to iptables,
> openflow filters, etc..
>
> 2) The enhancement of extensiblity of agents via Extension API.
>
> 3) How we chain features in OpenFlow, which current approach of just
> inserting rules, renders into incompatible extensions. This becomes
> specially relevant for the new openvswitch firewall.
>
> 2 and 3 are interlinked, and a good mechanism to enhance (3) should be
> provided in (2).
>
> We need to resolve:
>
> a) The order of tables, and how openflow actions chain the
> different features in the pipeline.  Some naive thinking brings me
> into the idea that we need to identify different input/output stages
> of packet processing, and every feature/extension declares the point
> where it needs to be. And then when we have all features, every
> feature get's it's own table number, and the "next" action in
> pipeline.
>
> b) We need to have a way to request openflow registers to use in
> extensions, so one extension doesn't overwrite other's registers
>
>c) Those registers need to be given a logical names that other
> extensions can query for (for example "port_number", "local_zone",
> etc..) , and those standard registers should be filled in for all
> extensions at the input stage.
>
>and probably c,d,e,f,g,h what I didn't manage to think of.
>
> On Fri, Apr 15, 2016 at 11:13 PM, Cathy Zhang <cathy.h.zh...@huawei.com> 
> wrote:
>> Hi Reedip,
>>
>>
>>
>> Sure will include you in the discussion. Let me know if there are other
>> Tap-as-a-Service members who would like to join this initiative.
>>
>>
>>
>> Cathy
>>
>>
>>
>> From: reedip banerjee [mailto:reedi...@gmail.com]
>> Sent: Thursday, April 14, 2016 7:03 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [neutron] work on Common Flow Classifier and
>> OVS Agent extension for Newton cycle
>>
>>
>>
>> Speaking on behalf of Tap-as-a-Service members, we would also be very much
>> interested in the following initiative :)
>>
>>
>>
>> On Fri, Apr 15, 2016 at 5:14 AM, Ihar Hrachyshka <ihrac...@redhat.com>
>> wrote:
>>
>> Cathy Zhang <cathy.h.zh...@huawei.com> wrote:
>>
>>
>> I think there is no formal spec or anything, just some emails around there.
>>
>> That said, I don’t follow why it’s a requirement for SFC to switch to l2
>> agent extension mechanism. Even today, with SFC maintaining its own agent,
>> there are no clear guarantees for flow priorities that would avoid all
>> possible conflicts.
>>
>> Cathy> There is no requirement for SFC to switch. My understanding is that
>> the current L2 agent extension does not solve the conflicting entry issue if
>> two features inject a table entry with the same priority. I think this new
>> L2 agent effort is trying to come up with a mechanism to resolve this issue.
>> Of course if each feature (SFC or QoS) uses its own agent, then there is no
>> coordination and no way to avoid conflicts.
>>
>>
>> Sorry, I probably used misleading wording. I meant, why do we consider the
>> semantic flow management support in l2 agent extension framework a
>> *prerequisite* for SFC to switch to l2 agent extensions? The existing
>> framework should already allow SFC to achieve what you have in the
>> subproject tree implemented as a separate agent (essentially a fork of OVS
>> agent). It will also set SFC to use standard extension mechanisms instead of
>> hacky inheritance from OVS agent classes. So even without the strict
>> semantic flow management, there is benefit for the subproject.
>>
>> With that in mind, I would split this job into 3 pieces:
>> * first, adopt l2 agent extension mechanism for SFC functionality (dropping
>> custom agent);
>> * then, work on semantic flow management support in OVS agent API class [1];
>> * once the feature emerges, switch SFC l2 agent extension to the new
>> framework to manage SFC flows.
>>
>> I would at least prioritize the first point and target it to Newton-1. Other
>> bullet points may take significant time to bake.

Re: [openstack-dev] [Neutron] OVS flow modification performance

2016-04-15 Thread Miguel Angel Ajo Pelayo
On Fri, Apr 15, 2016 at 7:32 AM, IWAMOTO Toshihiro
<iwam...@valinux.co.jp> wrote:
> At Mon, 11 Apr 2016 14:42:59 +0200,
> Miguel Angel Ajo Pelayo wrote:
>>
>> On Mon, Apr 11, 2016 at 11:40 AM, IWAMOTO Toshihiro
>> <iwam...@valinux.co.jp> wrote:
>> > At Fri, 8 Apr 2016 12:21:21 +0200,
>> > Miguel Angel Ajo Pelayo wrote:
>> >>
>> >> Hi, good that you're looking at this,
>> >>
>> >>
>> >> You could create a lot of ports with this method [1] and a bit of extra
>> >> bash, without the extra expense of instance RAM.
>> >>
>> >>
>> >> [1]
>> >> http://www.ajo.es/post/89207996034/creating-a-network-interface-to-tenant-network-in
>> >>
>> >>
>> >> This effort is going to be still more relevant in the context of
>> >> openvswitch firewall. We still need to make sure it's tested with the
>> >> native interface, and eventually we will need flow bundling (like in
>> >> ovs-ofctl --bundle add-flows) where the whole 
>> >> addition/removal/modification
>> >> is sent to be executed atomically by the switch.
>> >
>> > Bad news is that ovs-firewall isn't currently using the native
>> > of_interface much.  I can add install_xxx methods to
>> > OpenFlowSwitchMixin classes so that ovs-firewall can use the native
>> > interface.
>> > Do you have a plan for implementing flow bundling or using conjunction?
>> >
>>
>> Adding Jakub to the thread,
>>
>> IMO, if the native interface is going to provide us with greater speed
>> for rule manipulation, we should look into it.
>>
>> We don't use bundling or conjunctions yet, but it's part of the plan.
>> Bundling will allow atomicity of operations with rules (switching
>> firewall rules, etc, as we have with iptables-save /
>> iptables-restore), and conjunctions will reduce the number of entries.
>> (No expansion of IP addresses for remote groups, no expansion of
>> security group rules per port, when several ports are on the same
>> security group on the same compute host).
>>
>> Do we have any metric of bare rule manipulation time (ms/rule, for example)?
>
> No bare numbers, but from a graph in the other mail I sent last week,
> bind_devices for 160 ports (IIRC, that amounts to 800 flows) takes
> 4.5 sec with of_interface=native and 8 sec with of_interface=ovs-ofctl,
> which means a native add-flow is roughly 4 ms faster than the other
> ((8 - 4.5) s / 800 flows).
>
> As the ovs firewall uses DeferredOVSBridge and has less exec
> overhead, I have no idea how much gain the native of_interface
> brings.
>
>> As a note, we're around 80 rules/port with IPv6 + IPv4 on the default
>> sec group plus a couple of rules.
>
> I booted 120 VMs on one network and the default security group
> generated 62k flows.  It seems using conjunction is the #1 item for
> performance.
>

Ouch, hello again, Cartesian product! Luckily we already know how to
optimize that; now we need to get our hands on it.
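
As a back-of-envelope illustration of the blow-up (the per-port rule
expansion factor below is an assumption, not a measured figure):

    # Without conjunction, a remote-group rule expands to one flow per
    # (port on the host) x (remote group member address).
    ports_on_host = 120   # VMs booted in the test above
    group_members = 120   # the default SG references itself as remote group
    expansion = 4         # assumed ethertype/direction multiplier

    without_conj = ports_on_host * group_members * expansion
    with_conj = (ports_on_host + group_members) * expansion

    print(without_conj)   # 57600 -- same order as the observed 62k flows
    print(with_conj)      # 960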

@iwamoto, thanks for trying it.



>
>
>>
>> >> On Thu, Apr 7, 2016 at 10:00 AM, IWAMOTO Toshihiro <iwam...@valinux.co.jp>
>> >> wrote:
>> >>
>> >> > At Thu, 07 Apr 2016 16:33:02 +0900,
>> >> > IWAMOTO Toshihiro wrote:
>> >> > >
>> >> > > At Mon, 18 Jan 2016 12:12:28 +0900,
>> >> > > IWAMOTO Toshihiro wrote:
>> >> > > >
>> >> > > > I'm sending out this mail to share the finding and discuss how to
>> >> > > > improve with those interested in neutron ovs performance.
>> >> > > >
>> >> > > > TL;DR: The native of_interface code, which has been merged recently
>> >> > > > and isn't default, seems to consume less CPU time but gives a mixed
>> >> > > > result.  I'm looking into this for improvement.
>> >> > >
>> >> > > I went on to look at implementation details of eventlet etc, but it
>> >> > > turned out to be fairly simple.  The OVS agent in the
>> >> > > of_interface=native mode waits for an OpenFlow connection from
>> >> > > ovs-vswitchd, which can take up to 5 seconds.
>> >> > >
>> >> > > Please look at the attached graph.
>> >> > > The x-axis is time from agent restarts, the y-axis is numbers of ports
>> >> > > processed (in treat_devices and bind_devices).  Each port is counted
>> >> > > twice; the first slope is tre

Re: [openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-11 Thread Miguel Angel Ajo Pelayo
On Mon, Apr 11, 2016 at 1:46 PM, Jay Pipes <jaypi...@gmail.com> wrote:
> Hi Miguel Angel, comments/answers inline :)
>
> On 04/08/2016 09:17 AM, Miguel Angel Ajo Pelayo wrote:
>>
>> Hi!,
>>
>> In the context of [1] (generic resource pools / scheduling in nova)
>> and [2] (minimum bandwidth guarantees -egress- in neutron), I had a talk
>> a few weeks ago with Jay Pipes,
>>
>> The idea was leveraging the generic resource pools and scheduling
>> mechanisms defined in [1] to find the right hosts and track the total
>> available bandwidth per host (and per host "physical network"),
>> something in neutron (still to be defined where) would notify the new
>> API about the total amount of "NIC_BW_KB" available on every host/physnet.
>
>
> Yes, what we discussed was making it initially per host, meaning the host
> would advertise a total aggregate bandwidth amount for all NICs that it uses
> for the data plane as a single amount.
>
> The other way to track this resource class (NIC_BW_KB) would be to make the
> NICs themselves be resource providers and then the scheduler could pick a
> specific NIC to bind the port to based on available NIC_BW_KB on a
> particular NIC.
>
> The former method makes things conceptually easier at the expense of
> introducing greater potential for retrying placement decisions (since the
> specific NIC to bind a port to wouldn't be known until the claim is made on
> the compute host). The latter method adds complexity to the filtering and
> scheduler in order to make more accurate placement decisions that would
> result in fewer retries.
>
>> That part is quite clear to me,
>>
>> From [1] I'm not sure which blueprint introduces the ability to
>> schedule based on the resource allocation/availability itself,
>> ("resource-providers-scheduler" seems more like an optimization to the
>> schedule/DB interaction, right?)
>
>
> Yes, you are correct about the above blueprint; it's only for moving the
> Python-side filters to be a DB query.
>
> The resource-providers-allocations blueprint:
>
> https://review.openstack.org/300177
>
> Is the one where we convert the various consumed resource amount fields to
> live in the single allocations table that may be queried for usage
> information.
>
> We aim to use the ComputeNode object as a facade that hides the migration of
> these data fields as much as possible so that the scheduler actually does
> not need to know that the schema has changed underneath it. Of course, this
> only works for *existing* resource classes, like vCPU, RAM, etc. It won't
> work for *new* resource classes like the discussed NET_BW_KB because,
> clearly, we don't have an existing field in the instance_extra or other
> tables that contain that usage amount and therefore can't use ComputeNode
> object as a facade over a non-existing piece of data.
>
> Eventually, the intent is to change the ComputeNode object to return a new
> AllocationList object that would contain all of the compute node's resources
> in a tabular format (mimicking the underlying allocations table):
>
> https://review.openstack.org/#/c/282442/20/nova/objects/resource_provider.py
>
> Once this is done, the scheduler can be fitted to query this AllocationList
> object to make resource usage and placement decisions in the Python-side
> filters.
>
> We are still debating on the resource-providers-scheduler-db-filters
> blueprint:
>
> https://review.openstack.org/#/c/300178/
>
> Whether to change the existing FilterScheduler or create a brand new
> scheduler driver. I could go either way, frankly. If we made a brand new
> scheduler driver, it would do a query against the compute_nodes table in the
> DB directly. The legacy FilterScheduler would manipulate the AllocationList
> object returned by the ComputeNode.allocations attribute. Either way we get
> to where we want to go: representing all quantitative resources in a
> standardized and consistent fashion.
>
>>  And, that brings me to another point: at the moment of filtering
>> hosts, nova, I guess, will have the neutron port information; it has to
>> somehow identify if the port is tied to a minimum bandwidth QoS policy.
>
>
> Yes, Nova's conductor gathers information about the requested networks
> *before* asking the scheduler where to place hosts:
>
> https://github.com/openstack/nova/blob/stable/mitaka/nova/conductor/manager.py#L362
>
>>  That would require identifying that the port has a "qos_policy_id"
>> attached to it, and then, asking neutron for the specific QoS policy
>>   [3], then look out for a minimum bandwidth rule (

Re: [openstack-dev] [Neutron] OVS flow modification performance

2016-04-11 Thread Miguel Angel Ajo Pelayo
On Mon, Apr 11, 2016 at 11:40 AM, IWAMOTO Toshihiro
<iwam...@valinux.co.jp> wrote:
> At Fri, 8 Apr 2016 12:21:21 +0200,
> Miguel Angel Ajo Pelayo wrote:
>>
>> Hi, good that you're looking at this,
>>
>>
>> You could create a lot of ports with this method [1] and a bit of extra
>> bash, without the extra expense of instance RAM.
>>
>>
>> [1]
>> http://www.ajo.es/post/89207996034/creating-a-network-interface-to-tenant-network-in
>>
>>
>> This effort is going to be still more relevant in the context of
>> openvswitch firewall. We still need to make sure it's tested with the
>> native interface, and eventually we will need flow bundling (like in
>> ovs-ofctl --bundle add-flows) where the whole addition/removal/modification
>> is sent to be executed atomically by the switch.
>
> Bad news is that ovs-firewall isn't currently using the native
> of_interface much.  I can add install_xxx methods to
> OpenFlowSwitchMixin classes so that ovs-firewall can use the native
> interface.
> Do you have a plan for implementing flow bundling or using conjunction?
>

Adding Jakub to the thread,

IMO, if the native interface is going to provide us with greater speed
for rule manipulation, we should look into it.

We don't use bundling or conjunctions yet, but it's part of the plan.
Bundling will allow atomicity of operations with rules (switching
firewall rules, etc, as we have with iptables-save /
iptables-restore), and conjunctions will reduce the number of entries.
(No expansion of IP addresses for remote groups, no expansion of
security group rules per port, when several ports are on the same
security group on the same compute host).

Do we have any metric of bare rule manipulation time (ms/rule, for example)?

As a note, we're around 80 rules/port with IPv6 + IPv4 on the default
sec group plus a couple of rules.






>> On Thu, Apr 7, 2016 at 10:00 AM, IWAMOTO Toshihiro <iwam...@valinux.co.jp>
>> wrote:
>>
>> > At Thu, 07 Apr 2016 16:33:02 +0900,
>> > IWAMOTO Toshihiro wrote:
>> > >
>> > > At Mon, 18 Jan 2016 12:12:28 +0900,
>> > > IWAMOTO Toshihiro wrote:
>> > > >
>> > > > I'm sending out this mail to share the finding and discuss how to
>> > > > improve with those interested in neutron ovs performance.
>> > > >
>> > > > TL;DR: The native of_interface code, which has been merged recently
>> > > > and isn't default, seems to consume less CPU time but gives a mixed
>> > > > result.  I'm looking into this for improvement.
>> > >
>> > > I went on to look at implementation details of eventlet etc, but it
>> > > turned out to be fairly simple.  The OVS agent in the
>> > > of_interface=native mode waits for an OpenFlow connection from
>> > > ovs-vswitchd, which can take up to 5 seconds.
>> > >
>> > > Please look at the attached graph.
>> > > The x-axis is time from agent restarts, the y-axis is numbers of ports
>> > > processed (in treat_devices and bind_devices).  Each port is counted
>> > > twice; the first slope is treat_devices and the second is
>> > > bind_devices.  The native of_interface needs some more time on
>> > > start-up, but bind_devices is about 2x faster.
>> > >
>> > > The data was collected with 160 VMs with the devstack default settings.
>> >
>> > And if you wonder how other services are doing meanwhile, here is a
>> > bonus chart.
>> >
>> > The ovs agent was restarted 3 times with of_interface=native, then 3
>> > times with of_interface=ovs-ofctl.
>> >
>> > As the test machine has 16 CPUs, 6.25% CPU usage can mean a single
>> > threaded process is CPU bound.
>> >
>> > Frankly, the OVS agent would have less room for improvement than
>> > other services.  Also, it might be fun to draw similar charts for
>> > other types of workloads.
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-11 Thread Miguel Angel Ajo Pelayo
On Sun, Apr 10, 2016 at 10:07 AM, Moshe Levi <mosh...@mellanox.com> wrote:

>
>
>
>
> *From:* Miguel Angel Ajo Pelayo [mailto:majop...@redhat.com]
> *Sent:* Friday, April 08, 2016 4:17 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* [openstack-dev] [neutron] [nova] scheduling bandwidth
> resources / NIC_BW_KB resource class
>
>
>
>
>
> Hi!,
>
>
>
>In the context of [1] (generic resource pools / scheduling in nova) and
> [2] (minimum bandwidth guarantees -egress- in neutron), I had a talk a few
> weeks ago with Jay Pipes,
>
>
>
>The idea was leveraging the generic resource pools and scheduling
> mechanisms defined in [1] to find the right hosts and track the total
> available bandwidth per host (and per host "physical network"), something
> in neutron (still to be defined where) would notify the new API about the
> total amount of "NIC_BW_KB" available on every host/physnet.
>



> I believe that NIC bandwidth can be taken from Libvirt see [4] and the
> only piece that is missing is to tell nova the mapping of physnet to
> network interface name. (In case of SR-IOV this is already known)
>
> I see bandwidth (speed) as one of many capabilities of a NIC; therefore I
> think we should take all of them in the same way, in this case from
> libvirt.  I was thinking of adding the NIC as a new resource to nova.
>

Yes, at the low level, that's one way to do it. We may need neutron agents
or plugins to collect such information, since in some cases one device
will be tied to one physical network, other devices will be tied to other
physical networks, or even several devices could be connected to the same
physnet. In some cases, connectivity depends on L3 tunnels, and in that
case bandwidth calculation is more complicated (depending on routes, etc.
-- I'm not even looking at that case yet).



>
>
> [4] -
>
>   <device>
>     <name>net_enp129s0_e4_1d_2d_2d_8c_41</name>
>     <path>/sys/devices/pci0000:80/0000:80:01.0/0000:81:00.0/net/enp129s0</path>
>     <parent>pci_0000_81_00_0</parent>
>     <capability type='net'>
>       <interface>enp129s0</interface>
>       <address>e4:1d:2d:2d:8c:41</address>
>       ...
>     </capability>
>   </device>
>
>
>
>That part is quite clear to me,
>
>
>
>From [1] I'm not sure which blueprint introduces the ability to
> schedule based on the resource allocation/availability itself,
> ("resource-providers-scheduler" seems more like an optimization to the
> schedule/DB interaction, right?)
>
> My understanding is that the resource provider blueprint is just a rough
> filter of compute nodes before passing them to the scheduler filters. The
> existing filters here [6] will do the accurate filtering of resources.
>
> see [5]
>
>
>
> [5] -
> http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2016-04-04.log.html#t2016-04-04T16:24:10
>
>
> [6] - http://docs.openstack.org/developer/nova/filter_scheduler.html
>
>
>

Thanks, yes, if those filters can operate on the generic resource pools,
then, great, we will just need to write the right filters.



> And, that brings me to another point: at the moment of filtering
> hosts, nova, I guess, will have the neutron port information; it has to
> somehow identify if the port is tied to a minimum bandwidth QoS policy.
>
>
>
> That would require identifying that the port has a "qos_policy_id"
> attached to it, then asking neutron for the specific QoS policy [3],
> then looking for a minimum bandwidth rule (still to be defined), and
> extracting the required bandwidth from it.
>
> I am not sure if that is the correct way to do it, but you could create a
> NIC bandwidth filter (or NIC capabilities filter) and in it implement the
> retrieval of QoS policy information using the neutron client.
>

That's my concern: that logic would have to live on the nova side, again,
and it's tightly coupled to the neutron models. I'd be glad to find a way
to uncouple nova from that as much as possible. And, even better, if we
could find a way to avoid the need for nova to retrieve policies as it
discovers ports.
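
For reference, the lookup we're discussing would go something like this
(a sketch: show_port() and show_qos_policy() exist in
python-neutronclient, but the minimum bandwidth rule is still to be
defined, so the 'minimum_bandwidth' type and 'min_kbps' field are
assumptions):

    from neutronclient.v2_0 import client as neutron_client

    # 'sess' is assumed to be an authenticated keystoneauth1 session,
    # and 'port_id' comes from the requested networks information.
    neutron = neutron_client.Client(session=sess)

    port = neutron.show_port(port_id)['port']
    required_kbps = None
    if port.get('qos_policy_id'):
        policy = neutron.show_qos_policy(port['qos_policy_id'])['policy']
        for rule in policy.get('rules', []):
            if rule.get('type') == 'minimum_bandwidth':  # assumed type
                required_kbps = rule.get('min_kbps')     # assumed field
                break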


>
>
> That moves, again, some of the responsibility to examine and understand
> external resources to nova.
>
>
>
> Could it make sense to make that part pluggable via stevedore, so we
> would provide something that takes the "resource id" (for a port in this
> case) and returns the requirements translated to resource classes
> (NIC_BW_KB in this case)?
>
>
>
>
>
> Best regards,
>
> Migue

[openstack-dev] [neutron] [nova] scheduling bandwidth resources / NIC_BW_KB resource class

2016-04-08 Thread Miguel Angel Ajo Pelayo
Hi!,

   In the context of [1] (generic resource pools / scheduling in nova) and
[2] (minimum bandwidth guarantees -egress- in neutron), I had a talk a few
weeks ago with Jay Pipes,

   The idea was leveraging the generic resource pools and scheduling
mechanisms defined in [1] to find the right hosts and track the total
available bandwidth per host (and per host "physical network"), something
in neutron (still to be defined where) would notify the new API about the
total amount of "NIC_BW_KB" available on every host/physnet.

   That part is quite clear to me,

   From [1] I'm not sure which blueprint introduces the ability to schedule
based on the resource allocation/availability itself,
("resource-providers-scheduler" seems more like an optimization to the
schedule/DB interaction, right?)

And, that brings me to another point: at the moment of filtering hosts,
nova, I guess, will have the neutron port information; it has to somehow
identify if the port is tied to a minimum bandwidth QoS policy.

That would require identifying that the port has a "qos_policy_id"
attached to it, then asking neutron for the specific QoS policy [3],
then looking for a minimum bandwidth rule (still to be defined), and
extracting the required bandwidth from it.

   That moves, again, some of the responsibility to examine and understand
external resources to nova.

Could it make sense to make that part pluggable via stevedore, so we
would provide something that takes the "resource id" (for a port in this
case) and returns the requirements translated to resource classes
(NIC_BW_KB in this case)?
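
To make that concrete, a minimal stevedore sketch (the namespace and
entry point names below are made up for illustration; this is not an
existing nova API):

    from stevedore import driver


    def requirements_for(resource_id, resource_type='neutron_port'):
        """Translate an external resource id into resource class amounts."""
        mgr = driver.DriverManager(
            namespace='nova.external_resource_translators',  # hypothetical
            name=resource_type,                              # hypothetical
            invoke_on_load=True,
        )
        # A neutron-provided translator would do the qos_policy lookup
        # and return something like {'NIC_BW_KB': 500000}.
        return mgr.driver.translate(resource_id)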


Best regards,
Miguel Ángel Ajo


[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-February/086371.html
[2] https://bugs.launchpad.net/neutron/+bug/1560963
[3] http://developer.openstack.org/api-ref-networking-v2-ext.html#showPolicy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS flow modification performance

2016-04-08 Thread Miguel Angel Ajo Pelayo
Hi, good that you're looking at this,


You could create a lot of ports with this method [1] and a bit of extra
bash, without the extra expense of instance RAM.


[1]
http://www.ajo.es/post/89207996034/creating-a-network-interface-to-tenant-network-in
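
In that spirit, the neutron side of [1] can be scripted with the client
alone (a sketch; the ovs-vsctl plugging from the post is left out, and
'sess' and 'net_id' are assumed to be defined):

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(session=sess)

    for i in range(160):
        neutron.create_port({'port': {
            'network_id': net_id,
            'name': 'perf-port-%d' % i,
            # admin-only: pin the port to the host whose agent we test
            'binding:host_id': 'compute-1',
        }})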


This effort is going to be even more relevant in the context of the
openvswitch firewall. We still need to make sure it's tested with the
native interface, and eventually we will need flow bundling (as in
ovs-ofctl --bundle add-flows), where the whole addition/removal/modification
is sent to be executed atomically by the switch.
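
For reference, what the bundled path buys us is essentially this (a
sketch shelling out to ovs-ofctl; the native interface would use
OpenFlow 1.4 bundle messages to the same effect):

    import subprocess
    import tempfile

    flows = [
        'table=0,priority=10,in_port=1,actions=resubmit(,1)',
        'table=1,priority=0,actions=drop',
    ]
    with tempfile.NamedTemporaryFile('w', suffix='.flows') as f:
        f.write('\n'.join(flows) + '\n')
        f.flush()
        # --bundle makes the switch commit the whole file as a single
        # atomic transaction instead of flow by flow.
        subprocess.check_call(
            ['ovs-ofctl', '--bundle', 'add-flows', 'br-int', f.name])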






On Thu, Apr 7, 2016 at 10:00 AM, IWAMOTO Toshihiro 
wrote:

> At Thu, 07 Apr 2016 16:33:02 +0900,
> IWAMOTO Toshihiro wrote:
> >
> > At Mon, 18 Jan 2016 12:12:28 +0900,
> > IWAMOTO Toshihiro wrote:
> > >
> > > I'm sending out this mail to share the finding and discuss how to
> > > improve with those interested in neutron ovs performance.
> > >
> > > TL;DR: The native of_interface code, which has been merged recently
> > > and isn't default, seems to consume less CPU time but gives a mixed
> > > result.  I'm looking into this for improvement.
> >
> > I went on to look at implementation details of eventlet etc, but it
> > turned out to be fairly simple.  The OVS agent in the
> > of_interface=native mode waits for an OpenFlow connection from
> > ovs-vswitchd, which can take up to 5 seconds.
> >
> > Please look at the attached graph.
> > The x-axis is time from agent restarts, the y-axis is numbers of ports
> > processed (in treat_devices and bind_devices).  Each port is counted
> > twice; the first slope is treat_devices and the second is
> > bind_devices.  The native of_interface needs some more time on
> > start-up, but bind_devices is about 2x faster.
> >
> > The data was collected with 160 VMs with the devstack default settings.
>
> And if you wonder how other services are doing meanwhile, here is a
> bonus chart.
>
> The ovs agent was restarted 3 times with of_interface=native, then 3
> times with of_interface=ovs-ofctl.
>
> As the test machine has 16 CPUs, 6.25% CPU usage can mean a single
> threaded process is CPU bound.
>
> Frankly, the OVS agent would have less room for improvement than
> other services.  Also, it might be fun to draw similar charts for
> other types of workloads.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Hirofumi Ichihara to Neutron Core Reviewer Team

2016-04-08 Thread Miguel Angel Ajo Pelayo
On Fri, Apr 8, 2016 at 11:28 AM, Ihar Hrachyshka 
wrote:

> Kevin Benton  wrote:
>
> I don't know if my vote counts in this area, but +1!
>>
>
> What the gentleman said ^, +1.


"me too ^" , +1 !




> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Routed networks / Generic Resource pools

2016-03-21 Thread Miguel Angel Ajo Pelayo
On Mon, Mar 21, 2016 at 3:17 PM, Jay Pipes <jaypi...@gmail.com> wrote:
> On 03/21/2016 06:22 AM, Miguel Angel Ajo Pelayo wrote:
>>
>> Hi,
>>
>> I was doing another pass on this spec, to see if we could leverage
>> it as-is for QoS / bandwidth tracking / bandwidth guarantees, and I
>> have a question [1]
>>
>> I guess I'm just missing some detail, but looking at the 2nd scenario,
>> why wouldn't availability zones allow the same exactly if we used one
>> availability zone per subnet?
>>
>>What's the advantage of modelling it via a generic resource pool?
>
>
> Hi Miguel,
>
> On the (Nova) scheduler side, we don't actually care whether Neutron uses
> availability zone or subnet pool to model the boundaries of a pool of some
> resource. The generic-resource-pools functionality being added to Nova (as
> the new placement API meant to become the split-out scheduler RESTful API)
> just sees a resource provider UUID and an inventory of some type of
> resource.

That means that we could also match a pool by the requirements of the
resources bound to the instance we're trying to deploy (e.g. disk space
(GB), bandwidth (NIC_KB)).

>
> In the case of Neutron QoS, the first thing to determine would be what is
> the resource type exactly? The resource type must be able to be represented
> with an integer amount of something. For QoS, I *think* the resource type
> would be "NIC_BANDWIDTH_KB" or something like that. Is that correct?

The resource could be NIC_BANDWIDTH_KB, yes. In a simplified case we
could care only about tenant network connectivity, but we can also have
provider networks bound to this, and they would be separate counts.

>This
> would represent the amount of total network bandwidth that a workload can
> consume on a particular compute node. Is that statement correct?

This would represent the amount of total network bandwidth a port could
consume (and by consume I mean: asking for a "min" bandwidth guarantee).

>
> Now, the second thing that would need to be determined is what resource
> boundary this resource type would have. I *think* it is the amount of
> bandwidth consumed on a set of compute nodes? Like, amount of bandwidth
> consumed within a rack?

No, what we're trying to model first is the maximum bandwidth available
on a compute node [+physnet combination].

(Please note this is coming from NFV / telco requirements.)
When they schedule VNFs, they want to be 100% sure the throughput a VNF
can provide is exactly what they asked for, and not less (because, for
example, you had 10Gb of throughput on a NIC, but you scheduled 3 VNFs
each supposed to push 5Gb).



> Or some similar segmentation of a network, like an
> aggregate, which is a generic grouping of compute nodes. If so, then the
> bandwidth resource would be considered a *shared* resource, shared among the
> compute nodes in the aggregate. And if this is the case, then
> generic-resource-pools are intended for *exactly* this type of scenario.

We could certainly use generic resource pools to model rack switches and
their bandwidth capabilities, but that would not address the paragraph
above; they are two independent levels of verification.



__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] Routed networks / Generic Resource pools

2016-03-21 Thread Miguel Angel Ajo Pelayo
Hi,

   I was doing another pass on this spec, to see if we could leverage
it as-is for QoS / bandwidth tracking / bandwidth guarantees, and I
have a question [1]

   I guess I'm just missing some detail, but looking at the 2nd scenario,
why wouldn't availability zones allow exactly the same if we used one
availability zone per subnet?

  What's the advantage of modelling it via a generic resource pool?


Best regards,
Miguel Ángel.

[1] 
https://review.openstack.org/#/c/253187/14/specs/newton/approved/generic-resource-pools.rst

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][release] Releasing python-neutronclient 4.1.2?

2016-03-09 Thread Miguel Angel Ajo Pelayo
On Wed, Mar 9, 2016 at 4:16 PM, Doug Hellmann  wrote:

> Excerpts from Armando M.'s message of 2016-03-08 15:43:05 -0700:
> > On 8 March 2016 at 15:07, Doug Hellmann  wrote:
> >
> > > Excerpts from Armando M.'s message of 2016-03-08 12:49:16 -0700:
> > > > Hi folks,
> > > >
> > > > There's a feature or two that are pending to be delivered in Mitaka
> > > [1,2],
> > > > and those involve changes to both the server and client sides.
> Ideally
> > > we'd
> > > > merge both sides in time for Mitaka RC and that implies that we
> would be
> > > > able to release a new version of the client including changes [3,4].
> This
> > > > is especially important since a new client release would be
> beneficial to
> > > > improving test coverage as needed by [5].
> > > >
> > > > Considering what we released already, and what the tip of master is
> for
> > > the
> > > > client [6], I can't see any side effect that a new neutronclient
> release
> > > > may introduce.
> > > >
> > > > Having said that, I am leaning towards the all-or-none approach, but
> the
> > > > 'all' approach is predicated on the fact that we are indeed allowed
> to
> > > > release a new client and touch the global requirements.
> > > >
> > > > What's the release team's recommendation? Based on it, we may want to
> > > > decide to defer these to as soon as N master opens up.
> > >
> > > I'm a bit reluctant to start touching the requirements lists for
> > > feature work. We do have some bug fixes in the pipeline that will
> > > require library releases, but those are for bugs not new features.
> > > We also have one or two libs where feature work needed to be extended,
> > > but none of those have dependencies outside of the project producing
> > > them.
> > >
> > > The main reason to require a client release is for some *other* project
> > > to take advantage of the new feature work. Is that planned?
> > >
> >
> > Thanks for the prompt reply. Neutron would be the only consumer of these
> > additions, and no other project has pending work to leverage these
> > capabilities.
>
> In that case, I don't think we want to make an exception. Although
> Neutron is the only user of this feature, I counted more than 50 other
> projects that have python-neutronclient in a requirements file, and
> that's a lot of potential for impact with a new release.
>
> It seems like the options are to wait for Newton to land both parts of
> the feature, or to land the server side during Mitaka and release a
> feature update to the client as soon as Newton development opens.
>
> Doug
>

Yes, if anyone wants more detail, we discussed that in the
QoS meeting today [1]. Thank you, Doug, for joining us.

I would like to ask for the inclusion of the server side, regardless
of the client bits. Fullstack would have to stay out, but I believe
the api-tests, unit tests, and functional tests included in the patch
will maintain the feature's stability.

Users would have the chance to make use of the feature via direct
API calls without the client, or by bumping to neutronclient 4.2.x when
that's available. Distros would be able to backport the neutronclient
patch at will.

I ask for it not only for the sake of the feature, which I believe is not
critical, but because Comcast and other related contributors have been
patient enough (5-6 cycles?), and have learned and collaborated the
upstream way to finally get this in, while helping with L2 agent
extensibility and other technical debt along the way. And because the
earlier the feature gets used, the earlier we can iron out any bugs the
feature comes with.


Best regards,
Miguel Ángel.

[1]
http://eavesdrop.openstack.org/meetings/neutron_qos/2016/neutron_qos.2016-03-09-14.03.log.html
 (around 15:37)


>
> >
> > >
> > > Doug
> > >
> > > >
> > > > Many thanks,
> > > > Armando
> > > >
> > > > [1] https://review.openstack.org/#/q/topic:bug/1468353
> > > > [2] https://review.openstack.org/#/q/topic:bug/1521783
> > > > [3] https://review.openstack.org/#/c/254280/
> > > > [4] https://review.openstack.org/#/c/288187/
> > > > [5] https://review.openstack.org/#/c/288392/
> > > > [6]
> > > >
> > >
> https://github.com/openstack/python-neutronclient/commit/8460b0dbb354a304a112be13c63cb933ebe1927a
> > >
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-26 Thread Miguel Angel Ajo Pelayo

> On 26 Feb 2016, at 02:38, Sean McGinnis  wrote:
> 
> On Thu, Feb 25, 2016 at 04:13:56PM +0800, Qiming Teng wrote:
>> Hi, All,
>> 
>> After reading through all the +1's and -1's, we realized how difficult
>> it is to come up with a proposal that makes everyone happy. When we are
>> discussing this proposal with some other contributors, we came up with a
>> proposal which is a little bit different. This idea could be very
>> impractical, very naive, given that we don't know much about the huge
>> efforts behind the scheduling, planning, coordination ... etc etc. So,
>> please treat this as a random thought.
>> 
>> Maybe we can still have the Summit and the Design Summit colocated, but
>> we can avoid the overlap that has been the source of many troubles. The
>> idea is to have both events scheduled by the end of a release cycle. For
>> example:
>> 
>> Week 1:
>>  Wednesday-Friday: 3 days Summit.
>>* Primarily an event for marketing, sales, CTOs, architects,
>>  operators, journalists, ...
>>* Contributors can decide whether they want to attend this.
>>  Saturday-Sunday:
>>* Social activities: contributors meet-up, hang outs ...
>> 
>> Week 2:
>>  Monday-Wednesday: 3 days Design Summit
>>* Primarily an event for developers.
>>* Operators can hold meetups during these days, or join project
>>  design summits.
>> 


A proposal like this one seems much more rational to me:

  * no need for two trips
  * no overlap of the summit/design (I end up running back and forth otherwise)

Otherwise, separating both parts of the summit increases the gap
between engineering and the final OpenStack users/ops. I couldn’t go
to summit-related events 4 times a year for family reasons. But I like
to have the opportunity to spend some time close to the user/op side
of things, to understand how people are using OpenStack, what they are
missing, and what we are doing well.


>> If you need to attend both events, you don't need two trips. Scheduling
>> both events by the end of a release cycle can help gather more
>> meaningful feedbacks, experiences or lessons from previous releases and
>> ensure a better plan for the coming release.
>> 
>> If you want to attend just the main Summit or only the Design Summit,
>> you can plan your trip accordingly.
>> 
>> Thoughts?

I really like it. Not sure how well it would work for others, or from
the organisational point of view.

>> 
>> - Qiming
>> 
> 
> This would eliminate the need for a second flight, and on net it would
> be less total time away than attending two separate events. I could
> see this working.
> 
> Sean
> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Neutron][QoS]horizon angular network QoS panel

2016-02-25 Thread Miguel Angel Ajo Pelayo
Hi Masco!,

   Thanks a lot for working on this. I’m not following the [Horizon] tag,
and I missed this. I’ve added the Neutron and QoS tags.

   I will give it a try as soon as I can. 

   Keep up the good work!,

Cheers,
Miguel Ángel.
> On 10 Feb 2016, at 13:04, masco  wrote:
> 
> 
> Hello All,
> 
> As most of you know, the 'QoS' feature was added in neutron during the
> Liberty release.
> It will be nice to have this feature in horizon, so I have added a 'network
> qos' panel for it in AngularJS.
> It would be very helpful if you could review these patches and help to
> land this feature in horizon.
> 
> gerrit links:
> 
> https://review.openstack.org/#/c/247997/ 
> 
> https://review.openstack.org/#/c/259022/11 
> 
> https://review.openstack.org/#/c/272928/4 
> 
> https://review.openstack.org/#/c/277743/3 
> 
> 
> 
> To set up a test env:
> here are some steps on how to enable QoS in neutron;
> they will help if you want to test it.
> 
> 
>   To enable QoS in devstack, please add the below two
>   lines to local.conf, then rebuild your stack (./stack.sh):
> 
>     enable_plugin neutron git://git.openstack.org/openstack/neutron
>     enable_service q-qos
> 
> Thanks,
> Masco.
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Being more aggressive with our defaults

2016-02-10 Thread Miguel Angel Ajo Pelayo

> On 09 Feb 2016, at 21:43, Sean M. Collins  wrote:
> 
> Kevin Benton wrote:
>> I agree with the mtu setting because there isn't much of a downside to
>> enabling it. However, the others do have reasons to be disabled.
>> 
>> csum - requires runtime detection of support for a feature and then auto
>> degradation for systems that don't support it. People were against those so
>> we have the whole sanity check framework instead. I wouldn't be opposed to
>> revisiting that decision, but it's definitely a blocker right now.
> 
> Agree - I think the work that can be done here is to do some
> self-discovery to see if the system supports it, and enable it.

The risk of doing such a thing, and this is why we stayed with sanity checks,
is that we slow down agent startup; it could be trivial at the start, but as we
keep piling up checks, it could become an excessive overhead.

We could cache the system discoveries, which are unlikely to change, but that
could bring other issues, like a change of hardware/network settings requiring
a cleanup of the “facts” cache.

Another approach could be making the sanity checks generate configuration file
additions or modifications on request.
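
For instance (a sketch; the probe below is hypothetical, and the
sanity checks don't generate configuration today):

    import configparser


    def probe_vxlan_csum():
        # Hypothetical probe; a real one would attempt the OVS
        # operation the way the existing sanity checks do.
        return False


    def write_detected_overrides(path='/etc/neutron/conf.d/detected.conf'):
        cfg = configparser.ConfigParser()
        cfg['AGENT'] = {'tunnel_csum': str(probe_vxlan_csum()).lower()}
        with open(path, 'w') as f:
            cfg.write(f)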

IMHO we should keep any setting which is an optimization OFF, and let the
administrator tune it up.

What do we want? A super performant neutron reference implementation that
doesn’t work for 40% (random number) of the deployers, or a neutron
reference implementation that works for all but can be tuned?



> 
>> dvr - doesn't work in non-VM cases (e.g. floating IP pointing to allowed
>> address pair or bare metal host) and consumes more public IPs than legacy
>> or HA.
> 
> Yes it does have tradeoffs currently. But I like to think back to
> Nova-Network. It was extremely common to run it in multi_host=True mode.
> 
> Despite the fact that the default is False.
> 
> https://github.com/openstack/nova/blob/da019e89976f9673c4f80575909dda3bab3e1a24/nova/network/rpcapi.py#L31
> 
> It's been a little while for me since I looked at nova-network (Essex,
> Folsom era) so things may have moved around a bit, but that's at least
> what I recall.
> 
> I'd like to see some grizzled Nova network veterans chime in, but at
> least from the operator standpoint the whole pain point for Neutron
> (which endangered Neutron's existence for a long time) was the fact that
> we didn't have an equivalent feature to multi_host - hence DVR being
> written.
> 
> So, even Nova may have a couple things turned off by default probably a
> majority of deployers have to consciously turn the knob for.
> 
>> l2pop - this one is weird because it's an ML2 driver. It makes no sense to
>> have it always enabled because an operator could be using an l2pop
>> incompatible backend. We also don't have a notion of a driver enabled by
>> default so if we did want to do it, it would take a bunch of changes to
>> ML2.
> 
> I think in this case, the point is - enable L2Pop for things where it
> really makes sense. Meaning if you are using a tunnel protocol for
> tenant networking, and you do not have something like vxlan multicast
> group configured. I don't think Open vSwitch supports it, so in that
> deployment model I think we can bet that it should be enabled.
> 
> Linux Bridge supports l2pop and vxlan multicast, so even in that case
> I'd say - enable l2pop but put good docs in to say "hey if you have
> multicast vxlan set up, switch it over to use that instead" 
> 
>> Whenever we have a knob, it usually stems from the fact that we don't use
>> runtime feature detection or the feature has a tradeoff that doesn't make
>> its use obvious in all cases.
> 
> Right, but I think we've been very cautious in the past, where we don't
> want to make any decision, so we just turn it all off and force
> operators to enable it. In some cases we've decided to do nothing and
> the result is forcing everyone to make the decision, where a high % of
> people end up making the same decision. Perhaps we can use the user
> survey and the ops meetups to find options where "80% of people use this 
> option
> and have to be proactive and enable it" - and think about turning them
> on by default.
> 
> It's not cut and dry, but maybe taking a stab at it will help us clarify
> which really options really are a toss up between on/off and which
> should be defaults.
> 
> 
> -- 
> Sean M. Collins
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >