Re: [openstack-dev] [stable][octavia] Backport patch adding new configuration options

2018-10-08 Thread Assaf Muller
On Mon, Oct 8, 2018 at 12:34 PM Matt Riedemann  wrote:
>
> On 10/8/2018 11:05 AM, Carlos Goncalves wrote:
> > The Octavia team merged a patch in master [1] that fixed an issue where
> > load balancers could be deleted whenever the queue_event_streamer driver is
> > enabled and RabbitMQ goes down [2].
> >
> > As this is a critical bug, we would like to backport it as far back as
> > possible. The question is whether these backports comply with the stable
> > policy, because the patch adds two new configuration options and deprecates one.
> > The patch was prepared so that the deprecated option, if set, takes
> > precedence over the other two.
> >
> > Reading the review guidelines [3], I only see "Incompatible config file
> > changes" as relevant, but the patch doesn't seem to go against that. We
> > had a patch that added a new config option backported to Queens that
> > raised some concern, so we'd like to be on the safe side this time ;-)
> >
> > We'd appreciate guidance as to whether such backports are acceptable or not.
> >
>
> Well, a few things:
>
> * I would have introduced the new config options as part of the bug fix,
> but deprecated the existing option in a follow-up change rather than in
> the same one. Then the new options, which do nothing by default (?),
> could be backported and the deprecation would remain on master.
>
> * The release note mentions the new options as a feature, but that's not
> really correct, is it? They exist to fix a bug, not so much to add new
> feature functionality.
>
> In general, as long as the new options don't introduce new behavior by
> default for existing configuration (as you said, the existing option
> takes precedence if set), and don't require configuration then it should
> be OK to backport those new options. But the sticky parts here are (1)
> deprecating an option on stable (we shouldn't do that) and (2) the
> release note mentioning a feature.

I would classify this as a critical bug fix. I think it's important to
fix the bug on stable branches, even for deployments that will get the
fix but not change their configuration options. How that's done with
respect to configuration options & backports is another matter, but I
do think that whatever approach is chosen should end up with the bug
fixed on stable branches without requiring operators to use new
options or otherwise make changes to their existing configuration
files.
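To illustrate the precedence point with a minimal oslo.config sketch (this is
not the actual Octavia patch, and the option names below are made up for the
example): the fix can register new options that change nothing by default,
while the pre-existing option keeps precedence whenever an operator has it
set. On master the old option would additionally carry
deprecated_for_removal=True; a stable backport would simply drop that flag.

    from oslo_config import cfg

    CONF = cfg.CONF

    _opts = [
        # Pre-existing option; on master it would also get
        # deprecated_for_removal=True, the stable backport would not.
        cfg.StrOpt('event_streamer_driver',
                   help='Existing driver option (illustrative name).'),
        # New options added by the fix; harmless unless explicitly set.
        cfg.BoolOpt('sync_provisioning_status', default=False,
                    help='New option (illustrative name), off by default.'),
        cfg.IntOpt('sync_interval', default=600,
                   help='New option (illustrative name), inert default.'),
    ]
    CONF.register_opts(_opts, group='health_manager')

    def pick_behavior(conf=CONF):
        # The old, deprecated option wins if the operator set it, so existing
        # configurations keep behaving exactly as before the backport.
        if conf.health_manager.event_streamer_driver:
            return conf.health_manager.event_streamer_driver
        if conf.health_manager.sync_provisioning_status:
            return 'sync'
        return 'noop'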

>
> What I'd probably do is (1) change the 'feature' release note to a
> 'fixes' release note on master and then (2) backport the change but (a)
> drop the deprecation and (b) fix the release note in the backport to not
> call out a feature (since it's not a feature I don't think?) - and just
> make it clear with a note in the backport commit message why the
> backport is different from the original change.
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Appointing Slawek Kaplonski to the Neutron Drivers team

2018-08-27 Thread Assaf Muller
On Mon, Aug 27, 2018 at 12:42 PM, Miguel Lavalle  wrote:
> Dear Neutron team,
>
> In order to help the Neutron Drivers team to perform its very important job
> of guiding the community to evolve the OpenStack Networking architecture to
> meet the needs of our current and future users [1], I have asked Slawek
> Kaplonski to join it. Over the past few years, he has gained very valuable
> experience with OpenStack Networking, both as a deployer and more recently
> working with one of our key packagers. He played a paramount role in
> implementing our QoS (Quality of Service) features, currently leading that
> sub-team. He also leads the CI sub-team, ensuring the prompt discovery
> and fixing of bugs in our software. On top of that, he is one of our most
> active reviewers, contributor of code to our reference implementation and
> fixer of bugs. I am very confident in Slawek making great contributions to
> the Neutron Drivers team.

Congratulations Slawek, I think you'll do a great job :)

>
> Best regards
>
> Miguel
>
> [1]
> https://docs.openstack.org/neutron/latest/contributor/policies/neutron-teams.html#drivers-team
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] neutron ipv6 radvd sends out link-local or nothing as def gw (L3 HA issue?)

2018-08-20 Thread Assaf Muller
On Mon, Aug 20, 2018 at 6:06 AM, Tobias Urdin  wrote:
> When I removed those ips and set accept_ra to 0 on the backup router:
>
> ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w
> net.ipv6.conf.qr-7fad6b1b-c9.accept_ra=0
> ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w
> net.ipv6.conf.qr-5be04815-68.accept_ra=0
> ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip a l
> ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip addr del
> ::0:1:f816:3eff:fe66:dea8/64 dev qr-7fad6b1b-c9
> ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a ip addr del
> ::0:1:f816:3eff:fec3:85bd/64 dev qr-5be04815-68
>
> And enabled ipv6 forwarding on the active router:
> ip netns exec qrouter-0775785e-a93a-4501-917b-be92ff03f36a sysctl -w
> net.ipv6.conf.all.forwarding=1
>
> It started working again. I think this is an issue when disabling a router,
> changing it to L3 HA and enabling it again, so possibly a bug?

Quite possibly. Are you able to find a minimal reproducer?
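If it helps, here is a rough sketch of the reproducer steps in Python using
python-neutronclient (assuming admin credentials; the auth URL, credentials
and router ID below are placeholders):

    from keystoneauth1 import identity, session
    from neutronclient.v2_0 import client

    auth = identity.Password(auth_url='http://controller:5000/v3',
                             username='admin', password='secret',
                             project_name='admin',
                             user_domain_name='Default',
                             project_domain_name='Default')
    neutron = client.Client(session=session.Session(auth=auth))

    router_id = '0775785e-a93a-4501-917b-be92ff03f36a'  # placeholder

    # 1. disable the router, 2. flip it to L3 HA, 3. re-enable it
    neutron.update_router(router_id, {'router': {'admin_state_up': False}})
    neutron.update_router(router_id, {'router': {'ha': True}})
    neutron.update_router(router_id, {'router': {'admin_state_up': True}})

    # Then inspect accept_ra and the SLAAC addresses on the qr- ports in the
    # backup router's namespace, as in the commands quoted below.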

>
> Best regards
> Tobias
>
>
> On 08/20/2018 11:58 AM, Tobias Urdin wrote:
>
> Continuing forward, these patches should've fixed that
> https://review.openstack.org/#/q/topic:bug/1667756+(status:open+OR+status:merged)
> I'm on Queens.
>
> The two inside interfaces on the backup router:
> [root@controller2 ~]# ip netns exec
> qrouter-0775785e-a93a-4501-917b-be92ff03f36a cat
> /proc/sys/net/ipv6/conf/qr-7fad6b1b-c9/accept_ra
> 1
> [root@controller2 ~]# ip netns exec
> qrouter-0775785e-a93a-4501-917b-be92ff03f36a cat
> /proc/sys/net/ipv6/conf/qr-5be04815-68/accept_ra
> 1
>
> Perhaps the accept_ra patches do not apply when enabling/disabling routers
> or when changing a normal router to an L3 HA router?
> Best regards
>
> On 08/20/2018 11:50 AM, Tobias Urdin wrote:
>
> Ok, so the issue here seems to be that I have an L3 HA router with SLAAC;
> both the active and the standby router will
> configure the SLAAC-obtained address, causing a conflict since both sides
> share the same MAC address.
>
> Is there any workaround for this? Should SLAAC even be enabled for
> interfaces on the standby router?
>
> Best regards
> Tobias
>
> On 08/20/2018 11:37 AM, Tobias Urdin wrote:
>
> Forgot [neutron] tag.
>
> On 08/20/2018 11:36 AM, Tobias Urdin wrote:
>
> Hello,
>
> Note: before reading, this router was a regular router but was then disabled,
> changed to ha=true so it's now an L3 HA router, and then enabled again.
> CC openstack-dev for help or feedback in case it's a possible bug.
>
> I've been testing around with IPv6 and overall the experience has been
> positive, but I've hit a weird issue that I cannot get my head around.
> This is a neutron L3 router with an outside interface that has an IPv4 and
> an IPv6 address from the provider network, one inside interface for IPv4 and
> one inside interface for IPv6.
>
> The instances for some reason get their default gateway as the IPv6
> link-local address (in fe80::/10) from the router with SLAAC and radvd.
>
> (. is the provider network, . is the inside network; they are masked,
> so don't pay attention to the numbers per se)
>
> interfaces inside router:
> 15: ha-9bde1bb1-bd:  mtu 1450 qdisc noqueue
> state UNKNOWN group default qlen 1000
> link/ether fa:16:3e:05:80:32 brd ff:ff:ff:ff:ff:ff
> inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-9bde1bb1-bd
>valid_lft forever preferred_lft forever
> inet 169.254.0.1/24 scope global ha-9bde1bb1-bd
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:fe05:8032/64 scope link
>valid_lft forever preferred_lft forever
> 19: qg-86e465f6-33:  mtu 1500 qdisc noqueue
> state UNKNOWN group default qlen 1000
> link/ether fa:16:3e:3b:8b:a5 brd ff:ff:ff:ff:ff:ff
> inet 1.2.3.4/22 scope global qg-86e465f6-33
>valid_lft forever preferred_lft forever
> inet6 :::f/64 scope global nodad
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:fe3b:8ba5/64 scope link nodad
>valid_lft forever preferred_lft forever
> 1168: qr-5be04815-68:  mtu 1450 qdisc
> noqueue state UNKNOWN group default qlen 1000
> link/ether fa:16:3e:c3:85:bd brd ff:ff:ff:ff:ff:ff
> inet 192.168.99.1/24 scope global qr-5be04815-68
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:fec3:85bd/64 scope link
>valid_lft forever preferred_lft forever
> 1169: qr-7fad6b1b-c9:  mtu 1450 qdisc
> noqueue state UNKNOWN group default qlen 1000
> link/ether fa:16:3e:66:de:a8 brd ff:ff:ff:ff:ff:ff
> inet6 ::0:1::1/64 scope global nodad
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:fe66:dea8/64 scope link
>valid_lft forever preferred_lft forever
>
> I get these error messages in dmesg on the network node:
> [581085.858869] IPv6: qr-5be04815-68: IPv6 duplicate address
> ::0:1:f816:3eff:fec3:85bd detected!
> [581085.997497] IPv6: qr-7fad6b1b-c9: IPv6 duplicate address
> 

Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-06-06 Thread Assaf Muller
On Tue, May 29, 2018 at 12:41 PM, Mathieu Gagné  wrote:
> Hi Julia,
>
> Thanks for the follow up on this topic.
>
> On Tue, May 29, 2018 at 6:55 AM, Julia Kreger
>  wrote:
>>
>> These things are not just frustrating, but also very inhibiting for
>> part time contributors such as students who may also be time limited.
>> Or an operator who noticed something that was clearly a bug and that
>> put forth a very minor fix and doesn't have the time to revise it over
>> and over.
>>
>
> What I found frustrating is receiving *only* nitpicks, addressing them
> to only receive more nitpicks (sometimes from the same reviewer) with
> no substantial review on the change itself afterward.
> I wouldn't mind addressing nitpicks if more substantial reviews were
> made in a timely fashion.

The behavior that I've tried to promote in communities I've partaken in
is: if your review is comprised solely of nits, either abandon it or
don't -1.

>
> --
> Mathieu
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [OVN] L3 traffic

2018-02-14 Thread Assaf Muller
On Tue, Feb 13, 2018 at 11:24 PM, Numan Siddique <nusid...@redhat.com> wrote:
>
>
> On Wed, Feb 14, 2018 at 4:19 AM, Assaf Muller <as...@redhat.com> wrote:
>>
>> I'm not aware of plans for OVN to support distributed SNAT, therefore
>> a network node will still be required for the foreseeable future.
>>
>> On Mon, Jan 15, 2018 at 2:18 AM, wenran xiao <wenranx...@gmail.com> wrote:
>> > Hey all,
>> > I have found that networking-ovn will support distributed floating IPs
>> >
>> > (https://docs.openstack.org/releasenotes/networking-ovn/unreleased.html).
>> > What about SNAT in the future? Will a network node still be needed or not?
>> > Any suggestions are welcome.
>
>
> OVN can select any node (or nodes if HA is enabled) to schedule a router on,
> as long as the node has the ovn-controller service running on it and
> ovn-bridge-mappings configured properly.
> So, if you have external connectivity on your compute nodes and you are fine
> with any of these compute nodes doing the centralized SNAT, you don't need
> to have a network node.

To be clear, that is at parity with ML2/OVS: you can install L3 agents
on any node with external connectivity, regardless of whether it's also a
compute node. Some deployment tools, like TripleO, support this.

>
> Thanks
> Numan
>
>> >
>> >
>> > Best regards
>> > Ran
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [OVN] L3 traffic

2018-02-13 Thread Assaf Muller
I'm not aware of plans for OVN to support distributed SNAT, therefore
a network node will still be required for the foreseeable future.

On Mon, Jan 15, 2018 at 2:18 AM, wenran xiao  wrote:
> Hey all,
> I have found that networking-ovn will support distributed floating IPs
> (https://docs.openstack.org/releasenotes/networking-ovn/unreleased.html).
> What about SNAT in the future? Will a network node still be needed or not?
> Any suggestions are welcome.
>
>
> Best regards
> Ran
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Generalized issues in the unit testing of ML2 mechanism drivers

2018-02-13 Thread Assaf Muller
On Wed, Dec 13, 2017 at 7:30 AM, Michel Peterson  wrote:
> Through my work in networking-odl I've found what I believe is an issue
> present in a majority of ML2 drivers. An issue I think needs awareness so
> each project can decide a course of action.
>
> The issue stems from the adopted practice of importing
> `neutron.tests.unit.plugins.ml2.test_plugin` and creating classes with noop
> operation to "inherit" tests for free [1]. The idea behind it is nice: you
> inherit >600 tests that cover several scenarios.
>
> There are several issues with adopting this pattern, two of which are
> paramount:
>
> 1. If the mechanism driver is not loaded correctly [2], the tests don't
> test the mechanism driver but still succeed, and therefore there is no
> indication that something is wrong with the code. In the case of
> networking-odl this wasn't discovered until last week, which means that for >1
> year this was uselessly adding PASSed tests.
>
> 2. It gives a false sense of reassurance. If the code of those tests is
> analyzed, it's possible to see that the code is mostly centered around
> testing neutron's REST endpoint rather than actually testing that the
> mechanism succeeds in the operation it was supposed to test. As a result,
> there is marginal added value in having those tests. To be clear,
> the hooks for the respective operations are called on the mechanism driver,
> but the result of the operation is not asserted.
>
> I would love to hear more voices around this, so feel free to comment.
>
> Regarding networking-odl the solution I propose is the following:
>   **First**, discard completely the change mentioned in the footnote #2.
>   **Second**, create a patch that completely removes the tests that follow
> this pattern.

An interesting exercise would be to add 'raise ValueError' type
exceptions in various ODL ML2 mech driver flows and see which tests
fail. Basically, if a test passes without the ODL mech driver loaded,
or with a faulty ODL mech driver, then you don't need to run that test
for networking-odl changes. I'd be hesitant to remove all tests
though; it's a good investment of time to figure out which tests are
valuable to you.
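As a first safety net, a small guard test along these lines would fail fast
whenever the mechanism driver silently fails to load. This is only a rough
sketch: the base class is the one used by the inheritance pattern described
above, and the 'opendaylight_v2' driver name is an assumption for the example.

    from neutron.tests.unit.plugins.ml2 import test_plugin
    from neutron_lib.plugins import directory


    class TestOdlMechDriverLoaded(test_plugin.Ml2PluginV2TestCase):
        # The same attribute the inherited test classes use to pick drivers.
        _mechanism_drivers = ['logger', 'opendaylight_v2']

        def test_mech_driver_actually_loaded(self):
            plugin = directory.get_plugin()
            loaded = [driver.name for driver in
                      plugin.mechanism_manager.ordered_mech_drivers]
            # If the entry point is broken, the ML2 plugin still comes up and
            # the inherited tests keep passing; this assertion catches it.
            self.assertIn('opendaylight_v2', loaded)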

>   **Third**, incorporate the neutron tempest plugin into the CI and rely on
> that for assuring coverage of the different scenarios.
>
> Also worth mentioning: when we discovered this issue in networking-odl we
> decided not to merge more patches until the patch set in footnote #2 was
> addressed. I think we can now decide to overrule that decision and proceed
> as usual.
>
>
>
> [1]: http://codesearch.openstack.org/?q=class%20.*\(.*TestMl2
> [2]: something that was happening in networking-odl and addressed by
> https://review.openstack.org/#/c/523934
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Stepping down from core

2017-12-21 Thread Assaf Muller
On Fri, Dec 15, 2017 at 2:01 PM, Armando M.  wrote:
> Hi neutrinos,
>
> To some of you this email may not come as a surprise.
>
> During the past few months my upstream community engagements have been more
> and more sporadic. While I tried hard to stay committed and fulfill my core
> responsibilities I feel like I failed to retain the level of quality and
> consistency that I would have liked ever since I stepped down from being the
> Neutron PTL back at the end of Ocata.
>
> I stated many times when talking to other core developers that being core is
> a duty rather than a privilege, and I personally feel it's way overdue
> for me to recognize on the mailing list that it's time to officially state
> my intention to step down due to other commitments.
>
> This does not mean that I will disappear tomorrow. I'll continue to be on
> neutron IRC channels, support the neutron team, be the release liaison
> for Queens, participate in meetings, and be open to providing feedback to
> anyone who thinks my opinion is still valuable, especially when dealing with
> the neutron quirks for which I might be (git) blamed :)

How weird. You're such a fixture in the Neutron community, I can't
imagine you not being there. I can't think of many people who made a
greater impact on both the code and the community than you. You'd
probably get a kick out of knowing that I point new team members to your
reviews as a positive example of deep, substantive review practices.
Even highly effective people benefit from some good amount of luck,
so, good luck to you Armando :)

>
> Cheers,
> Armando
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Propose Slawek Kaplonski for Neutron core

2017-12-13 Thread Assaf Muller
On Thu, Dec 7, 2017 at 2:55 AM, Sławomir Kapłoński  wrote:
> Hi,
>
> Thanks all of You for support. I'm very happy to be part of the team.

Congratulations Slawek, and job well done!

>
> —
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl
>
>
>> Message written by Miguel Lavalle on 06.12.2017 at 16:58:
>>
>> Hi Neutron Team,
>>
>> It has been a week since Slawek's nomination to Neutron core went out. We 
>> have received great positive feedback from the team and community members at 
>> large. As a consequence, I want to welcome Slawek to the core team and 
>> congratulate him on his hard work and many contributions to OpenStack 
>> Networking. Keep it up!
>>
>> Cheers
>>
>> Miguel
>>
>> On Mon, Dec 4, 2017 at 7:06 AM, Anna Taraday  
>> wrote:
>> +1 !
>> Well deserved!
>>
>> On Sun, Dec 3, 2017 at 1:03 PM Gary Kotton  wrote:
>> +1
>>
>>
>>
>> Welcome to the team!
>>
>>
>>
>> From: Miguel Lavalle 
>> Reply-To: OpenStack List 
>> Date: Wednesday, November 29, 2017 at 9:21 PM
>> To: OpenStack List 
>> Subject: [openstack-dev] [Neutron] Propose Slawek Kaplonski for Neutron core
>>
>>
>>
>> Hi Neutron Team,
>>
>>
>>
>> I want to nominate Slawek Kaplonski (irc: slaweq) to Neutron core. Slawek 
>> has been an active contributor to the project since the Mitaka cycle. He has 
>> been instrumental in the development of the QoS capabilities in Neutron, 
>> becoming the lead of the sub-team focused on that set of features. More 
>> recently, he has collaborated in the implementation of OVO and is an active 
>> participant in the CI sub-team. His number of code reviews during the Queens 
>> cycle is on par with the leading core members of the team: 
>> http://stackalytics.com/?module=neutron
>>
>>
>>
>> In my opinion, his efforts are highly valuable to the team and we will be 
>> very lucky to have him as a fully voting core.
>>
>>
>>
>> I will keep this nomination open for a week as customary,
>>
>>
>>
>> Thank you,
>>
>>
>>
>> Miguel
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> --
>> Regards,
>> Ann Taraday
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Networking-OVN] Unable to run OVN functional tests

2017-08-22 Thread Assaf Muller
On Sat, Aug 19, 2017 at 3:02 PM, Trinath Somanchi 
wrote:

> Hi-
>
>
>
> Using [1] I have set up a devstack environment.
>
>
>
> Now when I try the dsvm-functional tests, all tests fail.
>
>
>
> Please find the error log [2] and help me resolve this issue.
>
>
>
> Also, are there any Tempest tests possible for OVN?
>

The networking-ovn repo doesn't add new Tempest tests, but the Tempest
networking tests, as well as the Tempest tests in the Neutron tree, can be
run against a cloud using OVN.


>
>
> [1] https://docs.openstack.org/networking-ovn/latest/
> contributor/testing.html
>
> [2] http://paste.openstack.org/show/618837/
>
>
>
>
>
> Best Regards,
>
> */ Trinath Somanchi.*
>
>
> *Trinath Somanchi.*
>
> Hyderabad Software Development Center (HSDC), GSD , DN,
>
> NXP India Pvt Limited, 1st Floor, Block 3, DLF Cyber City, Gachibowli,
>
> Hyderabad, Telangana, 500032, India
>
>
>
> Email: trinath.soman...@nxp.com | Mobile: +91 9866235130 | Off: +91 4033504051
>
>
>
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][l3] Conntrackd in L3 HA ?

2017-04-14 Thread Assaf Muller
On Fri, Apr 14, 2017 at 3:16 AM, zhi  wrote:

> hi, all.
>
> I have a question about conntrackd in L3 HA. I read the document about L3 HA
> at [1] and I found "conntrack" in the L3 HA diagram. Does neutron L3 HA
> support conntrackd currently? If so, how
> can I find the conntrack info?
>

It does not.


>
>
> Many Thanks
> Zhi Chang
>
>
> [1]: https://wiki.openstack.org/wiki/Neutron/L3_High_
> Availability_VRRP#Conntrackd_template
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] pingtest vs tempest

2017-04-06 Thread Assaf Muller
On Wed, Apr 5, 2017 at 4:49 PM, Emilien Macchi  wrote:

> Greetings dear owls,
>
> I would like to bring back an old topic: running tempest in the gate.
>
> == Context
>
> Right now, the TripleO gate is running something called pingtest to
> validate that the OpenStack cloud is working. It's a Heat stack that
> deploys a Nova server, some volumes, a glance image, a neutron network
> and sometimes a little bit more.
> To deploy the pingtest, you obviously need Heat deployed in your overcloud.
>
> == Problems:
>
> Although pingtest has been very helpful over the last years:
> - easy to understand: it's a Heat template, like one an OpenStack user
> would write to deploy their apps.
> - fast: the stack takes a few minutes to be created and validated
>
> It has some limitations:
> - Limited to what Heat resources support (for example, some OpenStack
> resources can't be managed from Heat)
> - Impossible to run a dynamic workflow (a live migration test, for example)
>

Another limitation which is obvious but I think still worth mentioning is
that Tempest has much better coverage than the pingtest. Security groups,
for example, have been accidentally disabled in every TripleO version as of
late. That lets regressions such as [1] slip in. We wouldn't have that
problem if we switched to Tempest and selected an intelligent subset of
tests to run.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1411690


>
> == Solutions
>
> 1) Switch pingtest to a Tempest run of some specific tests, with feature
> parity with what we had with pingtest.
> For example, we could imagine running the scenarios that deploy a VM and
> boot from volume. It would test the same thing as pingtest (details
> can be discussed here).
> Each scenario would run more tests depending on the services that it
> runs (scenario001 is telemetry, so it would run some tempest tests for
> Ceilometer, Aodh, Gnocchi, etc).
> We should work at making the tempest run as short as possible, and as
> close as possible to what we have with pingtest.
>
> 2) Run custom scripts in the TripleO CI tooling, called from the pingtest
> (heat template), that would run some validation commands (API calls,
> etc).
> It has been investigated in the past but never implemented AFAIK.
>
> 3) ?
>
> I tried to make this text short and go straight to the point, please
> bring feedback now. I hope we can make progress on $topic during Pike,
> so we can increase our testing coverage and detect deployment issues
> sooner.
>
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Alternative approaches for L3 HA

2017-02-22 Thread Assaf Muller
On Wed, Feb 22, 2017 at 1:40 PM, Miguel Angel Ajo Pelayo
 wrote:
> I have updated the spreadsheet. In the case of RH/RDO we're using the same
> architecture;
> in the case of HA, pacemaker is not taking care of those anymore since the
> HA-NG implementation.
>
> We let systemd take care of restarting the services that die, and we worked
> with the community
> to make sure that agents and services are robust in case of failures of
> dependent services (database, rabbitmq),
> and to make sure they reconnect and continue when those become
> available.

Thanks Miguel, I added a little bit of info to the spreadsheet as well.

>
> On Wed, Feb 22, 2017 at 11:28 AM, Adam Spiers  wrote:
>>
>> Kosnik, Lubosz  wrote:
>> > About the success of RDO, we need to remember that this deployment utilizes
>> > Pacemaker, and when I was working on this feature (I even spoke with
>> > Assaf) this external application was doing everything to make this solution
>> > work.
>> > Pacemaker was responsible for checking external and internal
>> > connectivity, detecting split brain and electing the master; keepalived was
>> > running, but Pacemaker was automatically killing all services and moving the
>> > FIP.
>> > Assaf - is there any change in this implementation in RDO? Or you’re
>> > still doing everything outside of Neutron?
>> >
>> > Because if RDO's success is built on Pacemaker, it means that yes, Neutron
>> > needs some solution which will be available for more than RH deployments.
>>
>> Agreed.
>>
>> With help from others, I have started an analysis of some of the
>> different approaches to L3 HA:
>>
>> https://ethercalc.openstack.org/Pike-Neutron-L3-HA
>>
>> (although I take responsibility for all mistakes ;-)
>>
>> It would be great if someone from RH or RDO could provide information
>> on how this RDO (and/or RH OSP?) solution based on Pacemaker +
>> keepalived works - if so, I volunteer to:
>>
>>   - help populate column E of the above sheet so that we can
>> understand if there are still remaining gaps in the solution, and
>>
>>   - document it (e.g. in the HA guide).  Even if this only ended up
>> being considered as a shorter-term solution, I think it's still
>> worth documenting so that it's another option available to
>> everyone.
>>
>> Thanks!
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Alternative approaches for L3 HA

2017-02-14 Thread Assaf Muller
On Fri, Feb 10, 2017 at 12:27 PM, Anna Taraday
 wrote:
> Hello everyone!
>
> In Juno, the L3 HA feature based on Keepalived (VRRP) was implemented in Neutron.
> During the next cycles it was improved; we performed scale testing [1] to find
> weak places and tried to fix them. The only alternative to L3 HA with VRRP
> is router rescheduling performed by the Neutron server, but it is significantly
> slower and depends on the control plane.
>
> What issues did we experience with L3 HA VRRP?
>
> Bugs in Keepalived (bad versions) [2]
> Split brain [3]
> Complex structure (HA networks, HA interfaces) - which actually caused races
> that we were fixing during Liberty, Mitaka and Newton.
>
> None of this is critical, but it is a bad experience, and not everyone is
> ready (or wants) to use the Keepalived approach.
>
> I think we can make things more flexible. For example, we can allow users to
> use external services like etcd instead of Keepalived to synchronize the current
> HA state across agents. I've done several experiments and I've got failover
> times comparable to L3 HA with VRRP. Tooz [4] can be used to abstract away the
> concrete backend. For example, it can allow us to use Zookeeper, Redis and
> other backends to store HA state.
>
> What I want to propose?
>
> I want to bring up the idea that Neutron should have some general classes for L3
> HA which will allow the use of not only Keepalived but also other backends for
> HA state. This will at least make it easier to try some other approaches and
> compare them with the existing ones.
>
> Does this sound reasonable?

I understand that the intention is to add pluggability upstream so
that you could examine the viability of alternative solutions. I'd
advise instead to do the research locally, and if you find concrete
benefits to an alternative solution, come back, show your work and
have a discussion about it then. Merging extra complexity in the form
of a plug point without knowing if we're actually going to need it
seems risky.
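For what it's worth, a rough sketch of what such a local experiment could look
like with Tooz-based leader election. This assumes the chosen Tooz backend
supports group membership and leader election (etcd3 is used here only as an
example); the group and member names are placeholders, and this is not Neutron
code.

    import time

    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'etcd3://127.0.0.1:2379', b'l3-agent-on-host-1')
    coordinator.start()

    group = b'router-0775785e'  # e.g. one group per HA router
    try:
        coordinator.create_group(group).get()
    except coordination.GroupAlreadyExist:
        pass
    coordinator.join_group(group).get()

    def became_master(event):
        # An agent would transition the router to master here
        # (plug addresses, send gratuitous ARPs, and so on).
        print('elected master for %s' % event.group_id)

    coordinator.watch_elected_as_leader(group, became_master)

    while True:
        coordinator.heartbeat()
        coordinator.run_watchers()
        time.sleep(1)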

On another note, after years of work the stability issues have largely
been resolved and L3 HA is in a good state with modern releases of
OpenStack. It's not an authoritative solution in the sense that it
doesn't cover every possible failure mode, but it covers the major
ones and in that sense better than not having any form of HA, and as
you pointed out the existing alternatives are not in a better state.
The subtext in your email is that now L3 HA is technically where we
want it, but some users are resisting adoption because of bad PR or a
bad past experience, but not for technical reasons. If that is the
case, then perhaps some good PR would be a more cost effective
investment than investigating, implementing, stabilizing and
maintaining a different backend that will likely take at least a cycle
to get merged and another 1 to 2 cycles to iron out kinks. Would you
have a critical mass of developers ready to support a pluggable L3 HA
now and in the long term?

Finally, I can share that L3 HA has been the default in RDO-land for a
few cycles now and is being used widely and successfully, in some
cases at significant scale.

>
> [1] -
> http://docs.openstack.org/developer/performance-docs/test_results/neutron_features/index.html
> [2] - https://bugs.launchpad.net/neutron/+bug/1497272
> https://bugs.launchpad.net/neutron/+bug/1433172
> [3] - https://bugs.launchpad.net/neutron/+bug/1375625
> [4] - http://docs.openstack.org/developer/tooz/
>
>
>
>
> --
> Regards,
> Ann Taraday
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Dynamic Routing] Plans for scenario testing?

2016-12-06 Thread Assaf Muller
On Tue, Dec 6, 2016 at 5:44 PM, Tidwell, Ryan <ryan.tidw...@hpe.com> wrote:
> This is at the top of my list to look at.

Thanks Ryan, that's good to know. I think that gating on some tests
that demonstrate that the project works end to end is a signal of
maturity and will help adoption.

> I've been thinking a lot about how to implement some tests. For instance, do 
> we need to actually stand up a BGP peer of some sort to peer neutron with and 
> assert the announcements somehow? Or should we assume that Ryu works properly 
> and make sure we have solid coverage of the driver interface somehow. I'm 
> open to suggestions as to how to approach this.
>
> -Ryan
>
> -Original Message-
> From: Assaf Muller [mailto:as...@redhat.com]
> Sent: Tuesday, December 06, 2016 2:36 PM
> To: OpenStack Development Mailing List (not for usage questions) 
> <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [Neutron][Dynamic Routing] Plans for scenario 
> testing?
>
> Hi all,
>
> General query - Is there anyone in the Dynamic Routing community that is 
> planning on contributing a scenario test? As far as I could tell, none of the 
> current API tests would fail if, for example, the BGP agent was not running. 
> Please correct me if I'm wrong.
>
> Thank you.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Dynamic Routing] Plans for scenario testing?

2016-12-06 Thread Assaf Muller
Hi all,

General query - Is there anyone in the Dynamic Routing community that
is planning on contributing a scenario test? As far as I could tell,
none of the current API tests would fail if, for example, the BGP
agent was not running. Please correct me if I'm wrong.

Thank you.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Stepping down as testing lieutenant and from the core & drivers teams

2016-11-28 Thread Assaf Muller
Hi all,

For the past few months I've been gaining more responsibilities within
Red Hat. I have less time to dedicate to personal contribution, and
that has taken a considerable toll on my ability to perform my duties
upstream to the degree of effectiveness I am satisfied with. To that
end, I've decided that the right thing to do is to hand over my
testing lieutenant responsibilities to Jakub Libosvar, and to step
down from the core and drivers team.

This is a bittersweet moment. I'm enormously proud to see the
progress Neutron has made as a platform, a community and a reference
implementation over the last 3 years, and I'm thrilled to
have taken part in that. It's grown from an experiment to a ubiquitous
OpenStack project with a proven, robust and scalable
batteries-included implementation used at the largest retail,
insurance, banking, web and telco (and more!) companies in the world.
My focus will remain on OpenStack networking but my contributions will
be indirect, through my amazing team members. The people leading
Quantum when I joined are nearly all gone and luckily we've seen a
continuous influx of fresh talent ready to take over leadership
responsibilities. I'm confident the wheel will keep on spinning and
the project will continue down the right path.

I'll still be on IRC and I'll be working over the upcoming couple of
weeks to hand off any specific tasks I had going on. Have fun folks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stepping down from core

2016-11-18 Thread Assaf Muller
On Thu, Nov 17, 2016 at 1:42 PM, Carl Baldwin  wrote:
> Neutron (and Openstack),
>
> It is with regret that I report that my work situation has changed such that
> I'm not able to keep up with my duties as a Neutron core reviewer, L3
> lieutenant, and drivers team member. My participation has dropped off
> considerably since Newton was released and I think it is fair to step down
> and leave an opening for others to fill. There is no shortage of talent in
> Neutron and Openstack and I know I'm leaving it in good hands.
>
> I will be more than happy to come back to full participation in Openstack
> and Neutron in the future if things change again in that direction. This is
> a great community and I've had a great time participating and learning with
> you all.
>
> Well, I don't want to drag this out. I will still be around on IRC and will
> be happy to help out where I am able. Feel free to ping me.

I wish you happiness and fulfillment in your upcoming work.

This is a great loss to the Neutron community. The Routed Networks
work you led was a testament to your technical prowess and influence
that you've demonstrated consistently throughout the years. You've
made massive marks on Neutron and any project or community is lucky to
have you. Come back soon :)

>
> Carl
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Do we need to rfe to implement active-active router?

2016-11-16 Thread Assaf Muller
On Wed, Nov 16, 2016 at 10:42 AM, huangdenghui  wrote:
> hi
> Currently, neutron supports DVR routers and legacy routers. For high
> availability, there is an HA router in the reference implementation of both
> legacy mode and DVR mode. I am wondering whether an active-active router is
> needed in both modes?

Yes, an RFE would be required and likely a spec describing the high
level approach of the implementation.

>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Ocata End user and operator feedback recap

2016-11-08 Thread Assaf Muller
On Tue, Nov 8, 2016 at 12:57 AM, Oleg Bondarev  wrote:
>
>
> On Tue, Nov 8, 2016 at 12:22 AM, Carl Baldwin  wrote:
>>
>> Ocata End user and operator feedback recap
>> 
>>
>> The purpose of this session was to gather feedback from end users and
>> operators to help direct the efforts of developers during (at least) the
>> Ocata cycle. Feedback was captured on the etherpad [1]. There was a related
>> session in the operators' track [2] which may also be of interest.
>>
>> We began with a short discussion about client compatibility which was
>> deferred to the following session specifically about the client [3].
>>
>> There was a discussion about whether Neutron will implement cells like Nova
>> has. There is currently no plan. I have heard this come up for the past
>> couple of summits but there is little concrete evidence of the scale issues
>> being encountered by operators. If you are running Neutron at scale, we
>> could use more of your input.
>>
>> There was some discussion about an issue around moving a floating IP from
>> one instance to another when keepalived is in use. We needed more information
>> to debug this. It should be sending a gratuitous ARP. Another keepalived
>> issue was discussed: too many SIGHUPs within a period of time can cause a
>> failover, resulting in disruption. I do not know if a bug was filed for this
>> to follow up. If this was you, please follow up with a link to the bug
>> report.
>
>
> https://bugs.launchpad.net/neutron/+bug/1639315 looks related

That covers keepalived not sending GARPs on SIGHUP. This will be
resolved via a workaround in Neutron with a patch by Jakub Libosvar
(Linked from the bug report). The other issue of keepalived hanging
when receiving frequent consecutive SIGHUPs (when a floating IP is
added/removed) will also be resolved via a workaround, but I don't see
a bug report yet. This will be handled by John Schwarz.

>
>>
>>
>> There was a short discussion about Horizon support for security groups on
>> a port. A bug was filed for this.
>>
>> Finally, there was some discussion about enabling end users to create
>> Neutron L3 routers which are backed by hardware resources. There is no such
>> thing yet but Neutron does have a new concept in development called "L3
>> flavors". This would enable a driver (to be written) which would allow this
>> sort of thing.
>>
>> Carl Baldwin
>>
>> [1]
>> https://etherpad.openstack.org/p/ocata-neutron-end-user-operator-feedback
>> [2] https://etherpad.openstack.org/p/newton-neutron-pain-points
>> [3] https://etherpad.openstack.org/p/ocata-neutron-client
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] bug deputy report

2016-10-03 Thread Assaf Muller
On Mon, Oct 3, 2016 at 10:15 AM, Ihar Hrachyshka  wrote:
> Hi all,
>
> I was serving as a bug deputy for the last two weeks. (Well, I planned to
> serve for one week only, but then I forgot to set new deputies in the last
> meeting I chaired, so the 2nd week was my punishment for short memory.)
>
> There were several bugs that I did not know how to triage properly, so help
> with that is highly welcome.
>
> 1. https://bugs.launchpad.net/neutron/+bug/1625829 - instance shutdown
> breaks floating ip connectivity, probably a duplicate of
> https://bugs.launchpad.net/neutron/+bug/1549443. the immediate suspect is
> iptables rules ordering.
>
> 2. https://bugs.launchpad.net/neutron/+bug/1627480 - Kevin suspects that
> IPAM ip allocation code may succeed without allocating an IP address for a
> subnet requested. That one gave us some headache in late RC1-2 time. Would
> be great to see IPAM folks chime in to nail it down.
>
> 3. https://bugs.launchpad.net/neutron/+bug/1628044 - bgp listener not
> listening on mitaka. Someone should validate the bug in latest code.
>
> 4. https://bugs.launchpad.net/neutron/+bug/1628385 - router-port-list not
> showing gateway port. I assume it’s as designed because of special status of
> the gateway port, but I would like to confirm with L3 folks before setting
> to Opinion.
>
> 5. https://bugs.launchpad.net/neutron/+bug/1629097 - that’s a scary one:
> rootwrap processes are hanging after ovsdb-client dies, exhausting memory.
> OVS restart exits those processes. A suspicious patch is detected.

Added info here.

>
> 6. https://bugs.launchpad.net/neutron/+bug/1629539 - dvr not working with
> lbaasv1? Not sure if it’s a valid case now considering the fact we dropped
> lbaasv1, but lbaas folks should chime in.
>
> I hope we’ll get new deputies today that will start processing bugs starting
> from the time of writing the email.
>
> Cheers.
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-29 Thread Assaf Muller
On Thu, Sep 29, 2016 at 5:27 AM, milanisko k <vetri...@gmail.com> wrote:

>
>
> út 27. 9. 2016 v 20:12 odesílatel Assaf Muller <as...@redhat.com> napsal:
>
>> On Tue, Sep 27, 2016 at 2:05 PM, Assaf Muller <as...@redhat.com> wrote:
>>
>>>
>>>
>>> On Tue, Sep 27, 2016 at 12:18 PM, Timur Nurlygayanov <
>>> tnurlygaya...@mirantis.com> wrote:
>>>
>>>> Hi milan,
>>>>
>>>> we have measured the test coverage for OpenStack components with
>>>> coverage.py tool [1]. It is very easy tool and it allows measure the
>>>> coverage by lines of code and etc. (several metrics are available).
>>>>
>>>> [1] https://coverage.readthedocs.io/en/coverage-4.2/
>>>>
>>>
>>> coverage also supports aggregating results from multiple runs, so you
>>> can measure results from combinations such as:
>>>
>>>
>>
>>> 1) Unit tests
>>> 2) Functional tests
>>> 3) Integration tests
>>> 4) 1 + 2
>>> 5) 1 + 2 + 3
>>>
>>> To my eyes 3 and 4 make the most sense. Unit and functional tests are
>>> supposed to give you low level coverage, keeping in mind that 'functional
>>> tests' is an overloaded term and actually means something else in every
>>> community. Integration tests aren't about code coverage, they're about user
>>> facing flows, so it'd be interesting to measure coverage
>>> from integration tests,
>>>
>>
>> Sorry, replace integration with unit + functional.
>>
>>
>>> then comparing coverage coming from integration tests, and getting the
>>> set difference between the two: That's the area that needs more unit and
>>> functional tests.
>>>
>>
>> To reiterate:
>>
>> Run coverage from integration tests, let this be c
>> Run coverage from unit and functional tests, let this be c'
>>
>> Let diff = c \ c'
>>
>> 'diff' is where you're missing unit and functional tests coverage.
>>
>
> Assaf, the tool I linked is a monkey-patched coverage.py, but the collector
> stores the stats in Redis, which gives the same cumulative collection.
> Is there any interest/effort to collect coverage stats from selected jobs
> in CI, no matter the tool used?
>

Some projects already collect coverage stats on their post-merge queue:
http://logs.openstack.org/61/61af70a734b99e61e751cfb494ddc93a85eec394/post/nova-coverage-db-ubuntu-xenial/55210aa/

It's invoked with 'tox -e cover', which you define in your project's tox.ini
file; I imagine most projects, if not all, have it set up to gather coverage
from a unit test run.


>
>
>>
>>
>>>
>>>
>>>>
>>>> On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier <
>>>> jordan.pitt...@scality.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> On Tue, Sep 27, 2016 at 11:43 AM, milanisko k <vetri...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Dear Stackers,
>>>>>> I'd like to gather some overview on the $Sub: is there some
>>>>>> infrastructure in place to gather such stats? Are there any groups
>>>>>> interested in it? Any plans to establish such infrastructure?
>>>>>>
>>>>> I am working on such a tool with mixed results so far. Here's my
>>>>> approach taking let's say Nova as an example:
>>>>>
>>>>> 1) Print all the routes known to nova (available as a python-routes
>>>>> object:  nova.api.openstack.compute.APIRouterV21())
>>>>> 2) "Normalize" the Nova routes
>>>>> 3) Take the logs produced by Tempest during a tempest run (in
>>>>> logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
>>>>> 8774)
>>>>> 4) "Normalize" the tested-by-tempest Nova routes.
>>>>> 5) Compare the two sets of routes
>>>>> 6) 
>>>>> 7) Profit !!
>>>>>
>>>>> So the hard part is obviously the normalizing of the URLs. I am
>>>>> currently using a tons of regex :) That's not fun.
>>>>>
>>>>> I'll let you guys know if I have something to show.
>>>>>
>>>>> I think there's real interest on the topic (it comes up every year or
>>>>> so), but no definitive answer/tool.
>>>>>
>>>>> Cheers,
>>>>> Jordan
>>>>>

Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-27 Thread Assaf Muller
On Tue, Sep 27, 2016 at 2:05 PM, Assaf Muller <as...@redhat.com> wrote:

>
>
> On Tue, Sep 27, 2016 at 12:18 PM, Timur Nurlygayanov <
> tnurlygaya...@mirantis.com> wrote:
>
>> Hi milan,
>>
>> we have measured the test coverage for OpenStack components with
>> coverage.py tool [1]. It is very easy tool and it allows measure the
>> coverage by lines of code and etc. (several metrics are available).
>>
>> [1] https://coverage.readthedocs.io/en/coverage-4.2/
>>
>
> coverage also supports aggregating results from multiple runs, so you can
> measure results from combinations such as:
>
> 1) Unit tests
> 2) Functional tests
> 3) Integration tests
> 4) 1 + 2
> 5) 1 + 2 + 3
>
> To my eyes 3 and 4 make the most sense. Unit and functional tests are
> supposed to give you low level coverage, keeping in mind that 'functional
> tests' is an overloaded term and actually means something else in every
> community. Integration tests aren't about code coverage, they're about user
> facing flows, so it'd be interesting to measure coverage
> from integration tests,
>

Sorry, replace integration with unit + functional.


> then comparing coverage coming from integration tests, and getting the set
> difference between the two: That's the area that needs more unit and
> functional tests.
>

To reiterate:

Run coverage from integration tests, let this be c
Run coverage from unit and functional tests, let this be c'

Let diff = c \ c'

'diff' is where you're missing unit and functional tests coverage.
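A rough sketch of that computation, assuming the coverage.py 4.x CoverageData
API and that each run was saved to its own data file (the file names below are
placeholders):

    from coverage import CoverageData

    def covered_lines(path):
        data = CoverageData()
        data.read_file(path)
        # Build a set of (filename, line) pairs hit during that run.
        return {(f, line)
                for f in data.measured_files()
                for line in (data.lines(f) or [])}

    c = covered_lines('.coverage.integration')             # integration run
    c_prime = covered_lines('.coverage.unit_functional')   # unit + functional runs

    diff = c - c_prime
    # 'diff' is the code exercised only end to end: candidates for new
    # unit/functional tests.
    for filename, line in sorted(diff):
        print(filename, line)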


>
>
>>
>> On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier <
>> jordan.pitt...@scality.com> wrote:
>>
>>> Hi,
>>>
>>> On Tue, Sep 27, 2016 at 11:43 AM, milanisko k <vetri...@gmail.com>
>>> wrote:
>>>
>>>> Dear Stackers,
>>>> I'd like to gather some overview on the $Sub: is there some
>>>> infrastructure in place to gather such stats? Are there any groups
>>>> interested in it? Any plans to establish such infrastructure?
>>>>
>>> I am working on such a tool with mixed results so far. Here's my
>>> approach taking let's say Nova as an example:
>>>
>>> 1) Print all the routes known to nova (available as a python-routes
>>> object:  nova.api.openstack.compute.APIRouterV21())
>>> 2) "Normalize" the Nova routes
>>> 3) Take the logs produced by Tempest during a tempest run (in
>>> logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
>>> 8774)
>>> 4) "Normalize" the tested-by-tempest Nova routes.
>>> 5) Compare the two sets of routes
>>> 6) 
>>> 7) Profit !!
>>>
>>> So the hard part is obviously the normalizing of the URLs. I am
>>> currently using a tons of regex :) That's not fun.
>>>
>>> I'll let you guys know if I have something to show.
>>>
>>> I think there's real interest on the topic (it comes up every year or
>>> so), but no definitive answer/tool.
>>>
>>> Cheers,
>>> Jordan
>>>
>>>
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>>
>> Timur,
>> Senior QA Manager
>> OpenStack Projects
>> Mirantis Inc
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][infra][all] Measuring code coverage in integration tests

2016-09-27 Thread Assaf Muller
On Tue, Sep 27, 2016 at 12:18 PM, Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Hi milan,
>
> we have measured the test coverage for OpenStack components with
> coverage.py tool [1]. It is very easy tool and it allows measure the
> coverage by lines of code and etc. (several metrics are available).
>
> [1] https://coverage.readthedocs.io/en/coverage-4.2/
>

coverage also supports aggregating results from multiple runs, so you can
measure results from combinations such as:

1) Unit tests
2) Functional tests
3) Integration tests
4) 1 + 2
5) 1 + 2 + 3

To my eyes 3 and 4 make the most sense. Unit and functional tests are
supposed to give you low level coverage, keeping in mind that 'functional
tests' is an overloaded term and actually means something else in every
community. Integration tests aren't about code coverage, they're about user
facing flows, so it'd be interesting to measure coverage
from integration tests, then comparing coverage coming from integration
tests, and getting the set difference between the two: That's the area that
needs more unit and functional tests.
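
For instance, combination 4 could be produced by pointing each run at its own
data file (e.g. via the COVERAGE_FILE environment variable) and then merging
them into a single report; a rough sketch, with made-up file names:

```python
# Sketch only: merge several coverage.py data files into one report.
import coverage

cov = coverage.Coverage()
cov.combine(['.coverage.unit', '.coverage.functional'])
cov.save()
cov.report()  # combined line coverage for unit + functional tests
cov.html_report(directory='cover-unit-functional')
```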


>
> On Tue, Sep 27, 2016 at 1:06 PM, Jordan Pittier <
> jordan.pitt...@scality.com> wrote:
>
>> Hi,
>>
>> On Tue, Sep 27, 2016 at 11:43 AM, milanisko k  wrote:
>>
>>> Dear Stackers,
>>> I'd like to gather some overview on the $Sub: is there some
>>> infrastructure in place to gather such stats? Are there any groups
>>> interested in it? Any plans to establish such infrastructure?
>>>
>> I am working on such a tool with mixed results so far. Here's my approach
>> taking let's say Nova as an example:
>>
>> 1) Print all the routes known to nova (available as a python-routes
>> object:  nova.api.openstack.compute.APIRouterV21())
>> 2) "Normalize" the Nova routes
>> 3) Take the logs produced by Tempest during a tempest run (in
>> logs/tempest.txt.gz). Grep for what looks like a Nova URL (based on port
>> 8774)
>> 4) "Normalize" the tested-by-tempest Nova routes.
>> 5) Compare the two sets of routes
>> 6) 
>> 7) Profit !!
>>
>> So the hard part is obviously the normalizing of the URLs. I am currently
>> using a ton of regexes :) That's not fun.
>>
>> I'll let you guys know if I have something to show.
>>
>> I think there's real interest in the topic (it comes up every year or
>> so), but no definitive answer/tool.
>>
>> Cheers,
>> Jordan
>>
>>
>>
>>
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Timur,
> Senior QA Manager
> OpenStack Projects
> Mirantis Inc
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Adding ihrachys to the neutron-drivers team

2016-09-17 Thread Assaf Muller
Well deserved is an understatement! Ihar is consistently integral to
the Neutron project. Ihar has wide knowledge not only of Neutron but
of OpenStack workings and is a key factor in Neutron playing nice with
other projects. Ihar has shown consistent good judgement of priorities
and will no doubt strengthen the drivers team.

On Sat, Sep 17, 2016 at 1:19 PM, Dariusz Śmigiel
 wrote:
> Congrats Ihar. You deserve this!
>
> 2016-09-17 18:40 GMT+02:00 Armando M. :
>> Hi folks,
>>
>> I would like to propose Ihar to become a member of the Neutron drivers team
>> [1].
>>
>> Ihar's wide knowledge of the Neutron codebase, and his longstanding duties as
>> stable core, downstream package whisperer, release and oslo liaison (I am
>> sure I am forgetting some other capacity he is in) is going to put him at
>> great comfort in the newly appointed role, and help him grow and become wise
>> even further.
>>
>> Even though we have not been meeting regularly lately we will resume our
>> Thursday meetings soon [2], and having Ihar onboard by then will be highly
>> beneficial.
>>
>> Please, join me in welcoming Ihar to the team.
>>
>> Cheers,
>> Armando
>>
>> [1]
>> http://docs.openstack.org/developer/neutron/policies/neutron-teams.html#drivers-team
>> [2] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Darek "dasm" Śmigiel
>
> 
> Q: Why is this email five sentences or less?
> A: http://five.sentenc.es
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] neutronclient check queue is broken

2016-08-25 Thread Assaf Muller
On Thu, Aug 25, 2016 at 6:58 AM, Ihar Hrachyshka  wrote:
> Akihiro Motoki  wrote:
>
>> In the neutronclient check queue,
>> gate-neutronclient-test-dsvm-functional is broken now [1].
>> Please avoid issuing 'recheck'.
>>
>> [1] https://bugs.launchpad.net/python-neutronclient/+bug/1616749
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> The proposed fix (removal of tests for lbaasv1) made me wonder why we don’t
> gate on neutron stable branches in the client master branch. Isn’t it a test
> matrix gap that could allow a new client to introduce a regression that
> would break interactions with older clouds?

Absolutely. Feel free to send a project-config patch :)

>
> I see that some clients (nova) validate stable server branches against
> master client patches. Shouldn’t we do the same?
>
> Ihar
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are we ready?

2016-08-25 Thread Assaf Muller
On Thu, Aug 25, 2016 at 7:35 AM, Gary Kotton  wrote:
> Hi,
> At the moment the upgrade process from V1 to V2 is still not clear to me.
> The migration script https://review.openstack.org/#/c/289595/ has yet to be 
> approved. Does this support all drivers or is this just the default reference 
> implementation driver?

The migration script doesn't have a test, so we really have no idea if
it's going to work.

> Are there people still using V1?
> Thanks
> Gary
>
> On 8/25/16, 4:25 AM, "Doug Wiegley"  wrote:
>
>
> > On Mar 23, 2016, at 4:17 PM, Doug Wiegley 
>  wrote:
> >
> > Migration script has been submitted, v1 is not going anywhere from 
> stable/liberty or stable/mitaka, so it’s about to disappear from master.
> >
> > I’m thinking in this order:
> >
> > - remove jenkins jobs
> > - wait for heat to remove their jenkins jobs ([heat] added to this 
> thread, so they see this coming before the job breaks)
> > - remove q-lbaas from devstack, and any references to lbaas v1 in 
> devstack-gate or infra defaults.
> > - remove v1 code from neutron-lbaas
>
> FYI, all of the above have completed, and the final removal is in the 
> merge queue: https://review.openstack.org/#/c/286381/
>
> Mitaka will be the last stable branch with lbaas v1.
>
> Thanks,
> doug
>
> >
> > Since newton is now open for commits, this process is going to get 
> started.
> >
> > Thanks,
> > doug
> >
> >
> >
> >> On Mar 8, 2016, at 11:36 AM, Eichberger, German 
>  wrote:
> >>
> >> Yes, it’s Database only — though we changed the agent driver in the DB 
> from V1 to V2 — so if you bring up a V2 with that database it should 
> reschedule all your load balancers on the V2 agent driver.
> >>
> >> German
> >>
> >>
> >>
> >>
> >> On 3/8/16, 3:13 AM, "Samuel Bercovici"  wrote:
> >>
> >>> So this looks like only a database migration, right?
> >>>
> >>> -Original Message-
> >>> From: Eichberger, German [mailto:german.eichber...@hpe.com]
> >>> Sent: Tuesday, March 08, 2016 12:28 AM
> >>> To: OpenStack Development Mailing List (not for usage questions)
> >>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
> we ready?
> >>>
> >>> Ok, for what it’s worth we have contributed our migration script: 
> https://review.openstack.org/#/c/289595/ — please look at this as a starting 
> point and feel free to fix potential problems…
> >>>
> >>> Thanks,
> >>> German
> >>>
> >>>
> >>>
> >>>
> >>> On 3/7/16, 11:00 AM, "Samuel Bercovici"  wrote:
> >>>
>  As far as I recall, you can specify the VIP in creating the LB so 
> you will end up with same IPs.
> 
>  -Original Message-
>  From: Eichberger, German [mailto:german.eichber...@hpe.com]
>  Sent: Monday, March 07, 2016 8:30 PM
>  To: OpenStack Development Mailing List (not for usage questions)
>  Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
> we ready?
> 
>  Hi Sam,
> 
>  So if you have some 3rd party hardware you only need to change the
>  database (your steps 1-5) since the 3rd party hardware will just keep
>  load balancing…
> 
>  Now for Kevin’s case with the namespace driver:
>  You would need a 6th step to reschedule the loadbalancers with the 
> V2 namespace driver — which can be done.
> 
>  If we want to migrate to Octavia or (from one LB provider to 
> another) it might be better to use the following steps:
> 
>  1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health
>     Monitors, Members) into some JSON format file(s)
>  2. Delete LBaaS v1
>  3. Uninstall LBaaS v1
>  4. Install LBaaS v2
>  5. Transform the JSON format file into some scripts which recreate the
>     load balancers with your provider of choice
>  6. Run those scripts
> 
>  The problem I see is that we will probably end up with different VIPs
>  so the end user would need to change their IPs…
> 
>  Thanks,
>  German
> 
> 
> 
>  On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:
> 
> > As for a migration tool.
> > Due to model changes and deployment changes between LBaaS v1 and 
> LBaaS v2, I am in favor for the following process:
> >
> > 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health
> >    Monitors, Members) into some JSON format file(s)
> > 2. Delete LBaaS v1
> > 3. Uninstall LBaaS v1
> > 4. Install LBaaS v2
> > 5. Import the data from 1
> 
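
For illustration, the "download LBaaS v1 information into JSON" step quoted
above could be sketched roughly as follows. This is not the proposed migration
script; it assumes python-neutronclient with the LBaaS v1 extension methods
(list_vips/list_pools/list_members/list_health_monitors) available, and the
credentials are placeholders:

```python
# Sketch only: dump LBaaS v1 objects to a JSON file (step 1 above).
import json

from neutronclient.v2_0 import client


def dump_lbaas_v1(**auth_kwargs):
    neutron = client.Client(**auth_kwargs)
    data = {
        'vips': neutron.list_vips()['vips'],
        'pools': neutron.list_pools()['pools'],
        'members': neutron.list_members()['members'],
        'health_monitors':
            neutron.list_health_monitors()['health_monitors'],
    }
    with open('lbaas_v1_dump.json', 'w') as f:
        json.dump(data, f, indent=2)


if __name__ == '__main__':
    dump_lbaas_v1(username='admin', password='secret', tenant_name='admin',
                  auth_url='http://controller:5000/v2.0')
```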

Re: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack security group driver with ovs-dpdk

2016-08-15 Thread Assaf Muller
+ Jakub.

On Wed, Aug 10, 2016 at 9:54 AM,   wrote:
> Hi,
>> [Mooney, Sean K]
>> In ovs 2.5 only linux kernel conntrack was supported assuming you had a 4.x
>> kernel that supported it. that means that the feature was not available on
>> bsd,windows or with dpdk.
> Yup, I also thought about something like that.
> I think I was at-least-slightly misguided by
> http://docs.openstack.org/draft/networking-guide/adv-config-ovsfwdriver.html
> and there is currently a statement
> "The native OVS firewall implementation requires kernel and user space 
> support for conntrack, thus requiring minimum versions of the Linux kernel 
> and Open vSwitch. All cases require Open vSwitch version 2.5 or newer."

I agree, that statement is misleading.

>
> Do you agree that this is something to change? I think it is not OK to state 
> OVS 2.6 without that being released, but in case I am not confused, then:
> -OVS firewall driver with OVS that uses kernel datapath requires OVS 2.5 and 
> Linux kernel 4.3
> -OVS firewall driver with OVS that uses userspace datapath with DPDK (aka 
> ovs-dpdk  aka DPDK vhost-user aka netdev datapath) doesn't have a Linux 
> kernel prerequisite
> That is documented in table in " ### Q: Are all features available with all 
> datapaths?":
> http://openvswitch.org/support/dist-docs/FAQ.md.txt
> where currently 'Connection tracking' row says 'NO' for 'Userspace' - but 
> that's exactly what has been merged recently /to become feature of OVS 2.6
>
> Also when it comes to performance I came across
> http://openvswitch.org/pipermail/dev/2016-June/071982.html, but I would guess 
> that the devil could be in the exact flows/ct actions that will be present in 
> a real-life scenario.
>
>
> BR,
> Konstantin
>
>
>> -Original Message-
>> From: Mooney, Sean K [mailto:sean.k.moo...@intel.com]
>> Sent: Tuesday, August 09, 2016 2:29 PM
>> To: Volenbovskyi Kostiantyn, INI-ON-FIT-CXD-ELC
>> ; openstack-
>> d...@lists.openstack.org
>> Subject: RE: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack 
>> security
>> group driver with ovs-dpdk
>>
>>
>> > -Original Message-
>> > From: kostiantyn.volenbovs...@swisscom.com
>> > [mailto:kostiantyn.volenbovs...@swisscom.com]
>> > Sent: Tuesday, August 9, 2016 12:58 PM
>> > To: openstack-dev@lists.openstack.org; Mooney, Sean K
>> > 
>> > Subject: RE: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack
>> > security group driver with ovs-dpdk
>> >
>> > Hi,
>> > (sorry for using incorrect threading)
>> >
>> > > > About 2 weeks ago I did some light testing with the conntrack
>> > > > security group driver and the newly
>> > > >
>> > > > Merged upserspace conntrack support in ovs.
>> > > >
>> > By 'recently' - whether you mean patch v4
>> > http://openvswitch.org/pipermail/dev/2016-June/072700.html
>> > or you used OVS 2.5 itself (which I think includes v2 of the same
>> > patch series)?
>> [Mooney, Sean K] I used http://openvswitch.org/pipermail/dev/2016-
>> June/072700.html or specifically i used the following commit
>> https://github.com/openvswitch/ovs/commit/0c87efe4b5017de4c5ae99e7b9c3
>> 6e8a6e846669
>> which is just after userspace conntrack was merged,
>> >
>> > So in general - I am a bit confused about conntrack support in OVS.
>> >
>> > OVS 2.5 release notes http://openvswitch.org/pipermail/announce/2016-
>> > February/81.html state:
>> > "This release includes the highly anticipated support for connection
>> > tracking in the Linux kernel.  This feature makes it possible to
>> > implement stateful firewalls and will be the basis for future stateful
>> > features such as NAT and load-balancing.  Work is underway to bring
>> > connection tracking to the userspace datapath (used by DPDK) and the
>> > port to Hyper-V."  - in the way that 'work is underway' (=work is
>> > ongoing) means that at the time of the OVS 2.5 release the feature was not
>> > 'classified' as ready?
>> [Mooney, Sean K]
>> In ovs 2.5 only linux kernel conntrack was supported assuming you had a 4.x
>> kernel that supported it. that means that the feature was not available on
>> bsd,windows or with dpdk.
>>
>> In the upcoming ovs 2.6 release conntrack support has been added to the
>> Netdev datapath which is used with dpdk and on bsd. As far as I am aware
>> windows conntrack support is still Missing but I may be wrong.
>>
>> If you are interested the devstack local.conf I used to test that it 
>> functioned is
>> available here http://paste.openstack.org/show/552434/
>>
>> I used an OpenStack vm using the Ubuntu 16.04 and 2 e1000 interfaces to do 
>> the
>> testing.
>>
>>
>> >
>> >
>> > BR,
>> > Konstantin
>> >
>> >
>> >
>> > > On Sat, Aug 6, 2016 at 8:16 PM, Mooney, Sean K
>> > 
>> > > wrote:
>> > > > Hi just a quick fyi,
>> > > >
>> > > > About 2 weeks ago I did some light testing with the conntrack
>> > security
>> > > > group driver and the newly
>> > > >
>> 

Re: [openstack-dev] [Neutron] SFC stable/mitaka version

2016-07-28 Thread Assaf Muller
On Thu, Jul 28, 2016 at 3:02 PM, Cathy Zhang  wrote:
> Hi Ihar and all,
>
> Yes, we have been preparing for such a release. We will do one more round of 
> testing to make sure everything works fine, and then I will submit the 
> release request.
> There is a new patch on "stadium: adopt openstack/releases in subproject 
> release process" which is not Merged yet.
> Shall I follow this 
> http://docs.openstack.org/developer/neutron/stadium/sub_project_guidelines.html#sub-project-release-process
>  to submit the request?
> Do you have a good bug example for Neutron sub-project release request?
>
> BTW, a functional and tempest patch for networking-sfc has been uploaded and 
> it might take some time for the team to complete the review. The test is 
> non-voting. Do you think we should wait until this patch is merged or release 
> can be done without it?

The ideal is that any testing you're doing downstream or manually
should be happening upstream and via CI. If you feel the need to run
things one more time then that means that the upstream CI that is
running for SFC is insufficient. A secondary incentive is to boost
adoption - People tend to be attracted to stable projects with higher
quality testing. I would advise accelerating the functional and
tempest tests patches and releasing when your CI is in a better state.

>
> Thanks,
> Cathy
>
> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> Sent: Wednesday, July 27, 2016 1:24 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron] SFC stable/mitaka version
>
> Tony Breeds  wrote:
>
>> On Wed, Jul 06, 2016 at 12:40:48PM +, Gary Kotton wrote:
>>> Hi,
>>> Is anyone looking at creating a stable/mitaka version? What if
>>> someone want to use this for stable/mitaka?
>>
>> If that's a thing you need it's a matter of Armando asking the release
>> managers to create it.
>
> I only suggest Armando is not dragged into it, the release liaison (currently 
> me) should be able to handle the request if it comes from the core team for 
> the subproject.
>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Jakub Libosvar for testing core

2016-07-26 Thread Assaf Muller
We've hit critical mass from cores interested in the testing area.

Welcome Jakub to the core reviewer team. May you enjoy staring at the
Gerrit interface and getting yelled at by people... It's a glamorous
life.



On Mon, Jul 25, 2016 at 10:49 PM, Brian Haley <brian.ha...@hpe.com> wrote:
> +1
>
> On 07/22/2016 04:12 AM, Oleg Bondarev wrote:
>>
>> +1
>>
>> On Fri, Jul 22, 2016 at 2:36 AM, Doug Wiegley
>> <doug...@parksidesoftware.com
>> <mailto:doug...@parksidesoftware.com>> wrote:
>>
>> +1
>>
>>> On Jul 21, 2016, at 5:13 PM, Kevin Benton <ke...@benton.pub
>>> <mailto:ke...@benton.pub>> wrote:
>>>
>>> +1
>>>
>>> On Thu, Jul 21, 2016 at 2:41 PM, Carl Baldwin <c...@ecbaldwin.net
>>> <mailto:c...@ecbaldwin.net>> wrote:
>>>
>>> +1 from me
>>>
>>> On Thu, Jul 21, 2016 at 1:35 PM, Assaf Muller <as...@redhat.com
>>> <mailto:as...@redhat.com>> wrote:
>>>
>>> As Neutron's so called testing lieutenant I would like to
>>> propose
>>> Jakub Libosvar to be a core in the testing area.
>>>
>>> Jakub has demonstrated his inherent interest in the testing
>>> area over
>>> the last few years, his reviews are consistently insightful
>>> and his
>>> numbers [1] are in line with others and I know will improve
>>> if given
>>> the responsibilities of a core reviewer. Jakub is deeply
>>> involved with
>>> the project's testing infrastructures and CI systems.
>>>
>>> As a reminder the expectation from cores is found here [2],
>>> and
>>> specifically for cores interested in helping out shaping
>>> Neutron's
>>> testing story:
>>>
>>> * Guide community members to craft a testing strategy for
>>> features [3]
>>> * Ensure Neutron's testing infrastructures are sufficiently
>>> sophisticated to achieve the above.
>>> * Provide leadership when determining testing Do's & Don'ts
>>> [4]. What
>>> makes for an effective test?
>>> * Ensure the gate stays consistently green
>>>
>>> And more tactically we're looking at finishing the
>>> Tempest/Neutron
>>> tests dedup [5] and to provide visual graphing for historical
>>> control
>>> and data plane performance results similar to [6].
>>>
>>> [1] http://stackalytics.com/report/contribution/neutron/90
>>> [2]
>>>
>>> http://docs.openstack.org/developer/neutron/policies/neutron-teams.html
>>> [3]
>>>
>>> http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron
>>> [4]
>>> https://assafmuller.com/2015/05/17/testing-lightning-talk/
>>> [5] https://etherpad.openstack.org/p/neutron-tempest-defork
>>> [6]
>>>
>>> https://www.youtube.com/watch?v=a0qlsH1hoKs&feature=youtu.be&t=24m22s
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>
>>> <http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
>>>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>
>>> <http://openstack-dev-requ...@lists.openstack.org/?subject:unsubscribe>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>>>
>>> <mailto:openstack-dev-requ...@lists.openstack.org>?

[openstack-dev] [Neutron] Proposing Jakub Libosvar for testing core

2016-07-21 Thread Assaf Muller
As Neutron's so called testing lieutenant I would like to propose
Jakub Libosvar to be a core in the testing area.

Jakub has demonstrated his inherent interest in the testing area over
the last few years, his reviews are consistently insightful and his
numbers [1] are in line with others and I know will improve if given
the responsibilities of a core reviewer. Jakub is deeply involved with
the project's testing infrastructures and CI systems.

As a reminder the expectation from cores is found here [2], and
specifically for cores interested in helping out shaping Neutron's
testing story:

* Guide community members to craft a testing strategy for features [3]
* Ensure Neutron's testing infrastructures are sufficiently
sophisticated to achieve the above.
* Provide leadership when determining testing Do's & Don'ts [4]. What
makes for an effective test?
* Ensure the gate stays consistently green

And more tactically we're looking at finishing the Tempest/Neutron
tests dedup [5] and to provide visual graphing for historical control
and data plane performance results similar to [6].

[1] http://stackalytics.com/report/contribution/neutron/90
[2] http://docs.openstack.org/developer/neutron/policies/neutron-teams.html
[3] 
http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron
[4] https://assafmuller.com/2015/05/17/testing-lightning-talk/
[5] https://etherpad.openstack.org/p/neutron-tempest-defork
[6] https://www.youtube.com/watch?v=a0qlsH1hoKs&feature=youtu.be&t=24m22s

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas]HA for lbaas v2 agent

2016-06-23 Thread Assaf Muller
On Thu, Jun 23, 2016 at 3:43 PM, Akshay Kumar Sanghai
<akshaykumarsang...@gmail.com> wrote:
> Thanks Assaf.
> I have few questions for lbaas:
> -  if I run agents on multiple nodes, will the request be distributed by
> neutron-server?
> - Does neutron lbaas agent forward the request to octavia-api or the
> neutron-server?

The LBaaS v2 API has multiple implementations. One of which is based
off haproxy and namespaces, known as the agent based implementation.
Do you have neutron-lbaas-agent running on your network nodes? The
second implementation is called Octavia and is based off service VMs
instead of agents and namespaces. Octavia calls out to Nova to create
VMs and inside those VMs is an agent that talks back to Octavia, and
that creates an haproxy instance to perform the actual loadbalancing.
The answer to both of your questions depends on which of these two
implementations you're going with. There's a bunch of summit sessions
about Octavia you can look in to.

>
> Thanks
> Akshay
>
> On Thu, Jun 23, 2016 at 1:00 AM, Assaf Muller <as...@redhat.com> wrote:
>>
>> On Wed, Jun 22, 2016 at 3:17 PM, Akshay Kumar Sanghai
>> <akshaykumarsang...@gmail.com> wrote:
>> > Hi,
>> > I have a multinode openstack installation (3 controller, 3 network
>> > nodes,
>> > and some compute nodes).
>> > Like l3 agent, is high availability feature available for the lbaas v2
>> > agent?
>>
>> It is not. Nir Magnezi is working on a couple of patches to implement
>> a simplistic HA solution for LBaaS v2 with haproxy:
>> https://review.openstack.org/#/c/28/
>> https://review.openstack.org/#/c/327966/
>>
>> >
>> > Thanks
>> > Akshay
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas]HA for lbaas v2 agent

2016-06-22 Thread Assaf Muller
On Wed, Jun 22, 2016 at 3:17 PM, Akshay Kumar Sanghai
 wrote:
> Hi,
> I have a multinode openstack installation (3 controller, 3 network nodes,
> and some compute nodes).
> Like l3 agent, is high availability feature available for the lbaas v2
> agent?

It is not. Nir Magnezi is working on a couple of patches to implement
a simplistic HA solution for LBaaS v2 with haproxy:
https://review.openstack.org/#/c/28/
https://review.openstack.org/#/c/327966/

>
> Thanks
> Akshay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] L3HA problem

2016-06-22 Thread Assaf Muller
On Wed, Jun 22, 2016 at 12:02 PM, fabrice grelaud
<fabrice.grel...@u-bordeaux.fr> wrote:
>
> On 22 June 2016 at 17:35, fabrice grelaud <fabrice.grel...@u-bordeaux.fr>
> wrote:
>
>
> On 22 June 2016 at 15:45, Assaf Muller <as...@redhat.com> wrote:
>
> On Wed, Jun 22, 2016 at 9:24 AM, fabrice grelaud
> <fabrice.grel...@u-bordeaux.fr> wrote:
>
> Hi,
>
> we deployed our openstack infrastructure with your « exciting » project
> openstack-ansible (mitaka 13.1.2) but we have some problems with L3HA after
> create router.
>
> Our infra (closer to the doc):
> 3 controllers nodes (with bond0 (br-mgmt, br-storage), bond1 (br-vxlan,
> br-vlan))
> 2 compute nodes (same for network)
>
> We create an external network (vlan type), an internal network (vxlan type)
> and a router connected to both networks.
> And when we launch an instance (cirros), we can’t receive an ip on the vm.
>
> We have:
>
> root@p-osinfra03-utility-container-783041da:~# neutron
> l3-agent-list-hosting-router router-bim
> +--+---++---+--+
> | id   | host
> | admin_state_up | alive | ha_state |
> +--+---++---+--+
> | 3c7918e5-3ad6-4f82-a81b-700790e3c016 |
> p-osinfra01-neutron-agents-container-f1ab9c14 | True   | :-)   |
> active   |
> | f2bf385a-f210-4dbc-8d7d-4b7b845c09b0 |
> p-osinfra02-neutron-agents-container-48142ffe | True   | :-)   |
> active   |
> | 55350fac-16aa-488e-91fd-a7db38179c62 |
> p-osinfra03-neutron-agents-container-2f6557f0 | True   | :-)   |
> active   |
> +--+---++---+—+
>
> I know, i got a problem now because i should have :-) active, :-) standby,
> :-) standby… Snif...
>
> root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns
> qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6
> qdhcp-0ba266fb-15c4-4566-ae88-92d4c8fd2036
>
> root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns exec
> qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6 ip a sh
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
> default
>link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>inet 127.0.0.1/8 scope host lo
>   valid_lft forever preferred_lft forever
>inet6 ::1/128 scope host
>   valid_lft forever preferred_lft forever
> 2: ha-4a5f0287-91@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
> pfifo_fast state UP group default qlen 1000
>link/ether fa:16:3e:c2:67:a9 brd ff:ff:ff:ff:ff:ff
>inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-4a5f0287-91
>   valid_lft forever preferred_lft forever
>inet 169.254.0.1/24 scope global ha-4a5f0287-91
>   valid_lft forever preferred_lft forever
>inet6 fe80::f816:3eff:fec2:67a9/64 scope link
>   valid_lft forever preferred_lft forever
> 3: qr-44804d69-88@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
> pfifo_fast state UP group default qlen 1000
>link/ether fa:16:3e:a5:8c:f2 brd ff:ff:ff:ff:ff:ff
>inet 192.168.100.254/24 scope global qr-44804d69-88
>   valid_lft forever preferred_lft forever
>inet6 fe80::f816:3eff:fea5:8cf2/64 scope link
>   valid_lft forever preferred_lft forever
> 4: qg-c5c7378e-1d@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> pfifo_fast state UP group default qlen 1000
>link/ether fa:16:3e:b6:4c:97 brd ff:ff:ff:ff:ff:ff
>inet 147.210.240.11/23 scope global qg-c5c7378e-1d
>   valid_lft forever preferred_lft forever
>inet 147.210.240.12/32 scope global qg-c5c7378e-1d
>   valid_lft forever preferred_lft forever
>inet6 fe80::f816:3eff:feb6:4c97/64 scope link
>   valid_lft forever preferred_lft forever
>
> Same result on infra02 and infra03, qr and qg interfaces have the same ip,
> and ha interfaces the address 169.254.0.1.
>
> If we stop 2 neutron agent containers (p-osinfra02, p-osinfra03) and we
> restart the first (p-osinfra01), we can reboot the instance and we got an
> ip, a floating ip and we can access by ssh from internet to the vm. (Note:
> after few time, we loss our connectivity too).
>
> But if we restart the two containers, we got a ha_state to « standby » until
> the three become « active » and finally we have the problem again.
>
> The three routers on infra 01/02/03 are seen as master.
>
> If we ping from our instance to the router (internal network 192.168.100.4
> to 192.168.100.254) we can see some ARP R

Re: [openstack-dev] [openstack-ansible] L3HA problem

2016-06-22 Thread Assaf Muller
On Wed, Jun 22, 2016 at 9:24 AM, fabrice grelaud
 wrote:
> Hi,
>
> we deployed our openstack infrastructure with your « exciting » project
> openstack-ansible (mitaka 13.1.2) but we have some problems with L3HA after
> create router.
>
> Our infra (closer to the doc):
> 3 controllers nodes (with bond0 (br-mgmt, br-storage), bond1 (br-vxlan,
> br-vlan))
> 2 compute nodes (same for network)
>
> We create an external network (vlan type), an internal network (vxlan type)
> and a router connected to both networks.
> And when we launch an instance (cirros), we can’t receive an ip on the vm.
>
> We have:
>
> root@p-osinfra03-utility-container-783041da:~# neutron
> l3-agent-list-hosting-router router-bim
> +--+---++---+--+
> | id   | host
> | admin_state_up | alive | ha_state |
> +--+---++---+--+
> | 3c7918e5-3ad6-4f82-a81b-700790e3c016 |
> p-osinfra01-neutron-agents-container-f1ab9c14 | True   | :-)   |
> active   |
> | f2bf385a-f210-4dbc-8d7d-4b7b845c09b0 |
> p-osinfra02-neutron-agents-container-48142ffe | True   | :-)   |
> active   |
> | 55350fac-16aa-488e-91fd-a7db38179c62 |
> p-osinfra03-neutron-agents-container-2f6557f0 | True   | :-)   |
> active   |
> +--+---++---+—+
>
> I know, i got a problem now because i should have :-) active, :-) standby,
> :-) standby… Snif...
>
> root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns
> qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6
> qdhcp-0ba266fb-15c4-4566-ae88-92d4c8fd2036
>
> root@p-osinfra01-neutron-agents-container-f1ab9c14:~# ip netns exec
> qrouter-eeb2147a-5cc6-4b5e-b97c-07cfc141e8e6 ip a sh
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
> default
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: ha-4a5f0287-91@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
> pfifo_fast state UP group default qlen 1000
> link/ether fa:16:3e:c2:67:a9 brd ff:ff:ff:ff:ff:ff
> inet 169.254.192.1/18 brd 169.254.255.255 scope global ha-4a5f0287-91
>valid_lft forever preferred_lft forever
> inet 169.254.0.1/24 scope global ha-4a5f0287-91
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:fec2:67a9/64 scope link
>valid_lft forever preferred_lft forever
> 3: qr-44804d69-88@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc
> pfifo_fast state UP group default qlen 1000
> link/ether fa:16:3e:a5:8c:f2 brd ff:ff:ff:ff:ff:ff
> inet 192.168.100.254/24 scope global qr-44804d69-88
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:fea5:8cf2/64 scope link
>valid_lft forever preferred_lft forever
> 4: qg-c5c7378e-1d@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> pfifo_fast state UP group default qlen 1000
> link/ether fa:16:3e:b6:4c:97 brd ff:ff:ff:ff:ff:ff
> inet 147.210.240.11/23 scope global qg-c5c7378e-1d
>valid_lft forever preferred_lft forever
> inet 147.210.240.12/32 scope global qg-c5c7378e-1d
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:feb6:4c97/64 scope link
>valid_lft forever preferred_lft forever
>
> Same result on infra02 and infra03, qr and qg interfaces have the same ip,
> and ha interfaces the address 169.254.0.1.
>
> If we stop 2 neutron agent containers (p-osinfra02, p-osinfra03) and we
> restart the first (p-osinfra01), we can reboot the instance and we got an
> ip, a floating ip and we can access by ssh from internet to the vm. (Note:
> after few time, we loss our connectivity too).
>
> But if we restart the two containers, we got a ha_state to « standby » until
> the three become « active » and finally we have the problem again.
>
> The three routers on infra 01/02/03 are seen as master.
>
> If we ping from our instance to the router (internal network 192.168.100.4
> to 192.168.100.254) we can see some ARP Request
> ARP, Request who-has 192.168.100.254 tell 192.168.100.4, length 28
> ARP, Request who-has 192.168.100.254 tell 192.168.100.4, length 28
> ARP, Request who-has 192.168.100.254 tell 192.168.100.4, length 28
>
> And on the compute node we see all these frames on the various interfaces
> tap / vxlan-89 / br-vxlan / bond1.vxlanvlan / bond1 / em2 but nothing back.
>
> We also have on ha interface, on each router, the VRRP communication
> (heartbeat packets over a hidden project network that connects all ha
> routers (vxlan 70) ) . Priori as normal, each router thinks to be 

Re: [openstack-dev] [TripleO] Consolidating TripleO validations with Browbeat validations

2016-06-20 Thread Assaf Muller
On Mon, Jun 20, 2016 at 12:43 PM, Joe Talerico  wrote:
> On Mon, Jun 20, 2016 at 12:41 PM, Ihar Hrachyshka  wrote:
>>
>>> On 20 Jun 2016, at 18:37, Joe Talerico  wrote:
>>>
>>> Hello - It would seem there is a little bit of overlap with TripleO
>>> validations ( clapper validations ) and Browbeat *Checks*. I would
>>> like to see these two come together, and I wanted to get some feedback
>>> on this.
>>>
>>> For reference here are the Browbeat checks :
>>> https://github.com/openstack/browbeat/tree/master/ansible/check
>>>
>>> We check for common deployment mistakes, possible deployment
>>> performance issues and some bugs that could impact the scale and
>>> performance of your cloud... At the end we build a report of found
>>> issues with the cloud, like :
>>> https://github.com/openstack/browbeat/blob/master/ansible/check/browbeat-example-bug_report.log
>>>
>>> We eventually wanted to take these findings and push them to
>>> ElasticSearch as metadata for our result data (just so we would be
>>> aware of any BZs or possibly missed tuning).
>>>
>>> Anyhoo, I just would like to get feedback on consolidating these
>>> checks into TripleO Validations if that makes sense. If this does make
>>> sense, who could I work with to see that this happens?
>>
>> Sorry for hijacking the thread somewhat, but it seems that 
>> neutron-sanity-check would cover for some common deployment issues, if 
>> utilized by projects like browbeat. Has anyone considered the tool?
>>
>> http://docs.openstack.org/cli-reference/neutron-sanity-check.html
>>
>> If there are projects that are interested in integrating checks that are 
>> implemented by neutron community, we would be glad to give some guidance.
>>
>> Ihar
>
> Hey Ihar - the TripleO validations are using this :
> https://github.com/rthallisey/clapper/blob/0881300a815f8b801a38d117b8d01b42a00c7f7b/ansible-tests/validations/neutron-sanity-check.yaml

Oops, that's missing a bunch of configuration files. Here are the
configuration values it expects:
https://github.com/openstack/neutron/blob/master/neutron/cmd/sanity_check.py#L32

And here's how the tool uses them:
https://github.com/openstack/neutron/blob/master/neutron/cmd/sanity_check.py#L272

It runs specific checks according to configuration file values. Is
ipset enabled in the OVS agent configuration file? Great, let's check
that we can use it and report back if there are any errors.
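
The overall shape is roughly the following (a simplified sketch, not the
actual neutron.cmd.sanity_check code; the option, check and error strings are
illustrative):

```python
# Sketch only: a config-driven sanity check in the style described above.
from oslo_config import cfg

CONF = cfg.CONF
CONF.register_cli_opts([
    cfg.BoolOpt('ovs_vxlan', default=False, help='Check OVS VXLAN support'),
])


def check_ovs_vxlan():
    # A real check would talk to OVS and return True/False.
    return True


def main():
    CONF(project='neutron')  # parses --config-file arguments from the CLI
    failures = []
    if CONF.ovs_vxlan:  # only run the checks the deployment enabled
        if not check_ovs_vxlan():
            failures.append('Check for OVS VXLAN support failed')
    for failure in failures:
        print(failure)
    return 1 if failures else 0


if __name__ == '__main__':
    raise SystemExit(main())
```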

>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] our 3 voting processes in detail

2016-06-17 Thread Assaf Muller
On Fri, Jun 17, 2016 at 9:41 PM, Gerard Braad  wrote:
> Thanks Steve,
>
>
> Very useful. Would be great if for future reference we would only need
> to point people to a URL on the Wiki for instance... what do you
> think?

I would recommend in-tree policy .rst files instead of Wiki entries. We
do that in Neutron-land and the resulting HTML is published here:
http://docs.openstack.org/developer/neutron/policies/index.html

There is a higher cost to make changes, but they have to go through
the review process, and the content will survive as long as the .git
repo does.

>
> regards,
>
>
> Gerard
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stable/liberty breakage

2016-06-15 Thread Assaf Muller
On Wed, Jun 15, 2016 at 2:59 PM, Ihar Hrachyshka  wrote:
>
>> On 15 Jun 2016, at 20:18, Gary Kotton  wrote:
>>
>> Hi,
>> The following patch breaks stable liberty drivers - 
>> https://review.openstack.org/#/c/238745/
>> This means that plugins will need to be updated to support this.
>
> Would you mind sharing details about the breakage? It was assumed that the 
> patch backported includes the relevant compatibility bits to avoid any 
> breakage. If that’s not the case, we should definitely come up with a way to 
> get existing drivers unbroken again.
>
>> What do we do:
>>   • Revert – which could break people using latest stable/liberty
>>   • Have a requirement that Neutron plugins be updated when they use 
>> stable/liberty
>
> It may be either revert, or a new patch that would accommodate for your 
> broken driver. It depends on breakage details. So please share those.
>
>> This is really bad.
>
> Absolutely. The change was not expected to require any changes for 3party 
> drivers. If it happened, that’s our fault, and maybe we should land 
> something, or revert. We should not leave it broken for 3party drivers.

Check out an IRC conversation from earlier today:
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2016-06-15.log.html#t2016-06-15T18:23:42

Specifically this is the patch to fix VMware drivers:
https://review.openstack.org/#/c/231217/

The question is whether we can send a patch to stable/liberty to ensure
compatibility for non-ML2 plugins. Failing that, we'll need to revert.

>
>> Any suggestions.
>> Thanks
>> Gary
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-15 Thread Assaf Muller
On Wed, Jun 15, 2016 at 2:01 PM, Peters, Rawlin  wrote:
> On Tuesday, June 14, 2016 6:27 PM, Kevin Benton (ke...@benton.pub) wrote:
>> >which generates an arbitrary name
>>
>> I'm not a fan of this approach because it requires coordinated assumptions.
>> With the OVS hybrid plug strategy we have to make guesses on the agent side
>> about the presence of bridges with specific names that we never explicitly
>> requested and that we were never explicitly told about. So we end up with 
>> code
>> like [1] that is looking for a particular end of a veth pair it just hopes is
>> there so the rules have an effect.
>
> I don't think this should be viewed as a downside of Strategy 1 because, at
> least when we use patch port pairs, we can easily get the peer name from the
> port on br-int, then use the equivalent of "ovs-vsctl iface-to-br "
> to get the name of the bridge. If we allow supporting veth pairs to implement
> the subports, then getting the arbitrary trunk bridge/veth names isn't as
> trivial.
>
> This also brings up the question: do we even need to support veth pairs over
> patch port pairs anymore? Are there any distros out there that support
> openstack but not OVS patch ports?

I really doubt it. This stopped being an issue in Fedora/CentOS/RHEL
like ~18 months ago.
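
For what it's worth, the bridge lookup described above is cheap to do when
patch port pairs are used. A rough sketch (port names are made up; it assumes
ovs-vsctl is available and that the br-int side is an OVS patch port carrying
its peer in options:peer):

```python
# Sketch only: given a patch port on br-int, find the bridge on the other end.
import subprocess


def _ovs_vsctl(*args):
    out = subprocess.check_output(('ovs-vsctl',) + args)
    return out.decode().strip().strip('"')


def bridge_for_patch_port(patch_port_on_br_int):
    peer = _ovs_vsctl('get', 'Interface', patch_port_on_br_int, 'options:peer')
    return _ovs_vsctl('iface-to-br', peer)


print(bridge_for_patch_port('tpi-1a2b3c4d'))  # e.g. -> 'tbr-1a2b3c4d'
```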

>
>>
>> >it seems that the LinuxBridge implementation can simply use an L2 agent
>> >extension for creating the vlan interfaces for the subports
>>
>> LinuxBridge implementation is the same regardless of the strategy for OVS. 
>> The
>> whole reason we have to come up with these alternative approaches for OVS is
>> because we can't use the obvious architecture of letting it plug into the
>> integration bridge due to VLANs already being used for network isolation. I'm
>> not sure pushing complexity out to os-vif to deal with this is a great
>> long-term strategy.
>
> The complexity we'd be pushing out to os-vif is not much worse than the 
> current
> complexity of the hybrid_ovs strategy already in place today.
>
>>
>> >Also, we didn’t make the OVS agent monitor for new linux bridges in the
>> >hybrid_ovs strategy so that Neutron could be responsible for creating the 
>> >veth
>> >pair.
>>
>> Linux Bridges are outside of the domain of OVS and even its agent. The L2 
>> agent
>> doesn't actually do anything with the bridge itself, it just needs a veth
>> device it can put iptables rules on. That's in contrast to these new OVS
>> bridges that we will be managing rules for, creating additional patch ports,
>> etc.
>
> I wouldn't say linux bridges are totally outside of its domain because it 
> relies
> on them for security groups. Rather than relying on an arbitrary naming
> convention between Neutron and Nova, we could've implemented monitoring for 
> new
> linux bridges to create veth pairs and firewall rules on. I'm glad we didn't,
> because that logic is specific to that particular firewall driver, similar to
> how this trunk bridge monitoring would be specific to only vlan-aware-vms. I
> think the logic lives best within an L2 agent extension, outside of the core
> of the OVS agent.
>
>>
>> >Why shouldn't we use the tools that are already available to us?
>>
>> Because we're trying to build a house and all we have are paint brushes. :)
>
> To me it seems like we already have a house that just needs a little paint :)
>
>>
>>
>> 1.
>> https://github.com/openstack/neutron/blob/f78e5b4ec812cfcf5ab8b50fca62d1ae0dd7741d/neutron/agent/linux/iptables_firewall.py#L919-L923
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Assaf Muller
On Mon, Jun 13, 2016 at 4:35 AM, Daniel P. Berrange  wrote:
> On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
>> Hi,
>>
>> You may or may not be aware of the vlan-aware-vms effort [1] in
>> Neutron.  If not, there is a spec and a fair number of patches in
>> progress for this.  Essentially, the goal is to allow a VM to connect
>> to multiple Neutron networks by tagging traffic on a single port with
>> VLAN tags.
>>
>> This effort will have some effect on vif plugging because the datapath
>> will include some changes that will affect how vif plugging is done
>> today.
>>
>> The design proposal for trunk ports with OVS adds a new bridge for
>> each trunk port.  This bridge will demux the traffic and then connect
>> to br-int with patch ports for each of the networks.  Rawlin Peters
>> has some ideas for expanding the vif capability to include this
>> wiring.
>>
>> There is also a proposal for connecting to linux bridges by using
>> kernel vlan interfaces.
>>
>> This effort is pretty important to Neutron in the Newton timeframe.  I
>> wanted to send this out to start rounding up the reviewers and other
>> participants we need to see how we can start putting together a plan
>> for nova integration of this feature (via os-vif?).
>
> I've not taken a look at the proposal, but on the timing side of things
> it is really way to late to start this email thread asking for design
> input from os-vif or nova. We're way past the spec proposal deadline
> for Nova in the Newton cycle, so nothing is going to happen until the
> Ocata cycle no matter what Neutron want  in Newton. For os-vif our
> focus right now is exclusively on getting existing functionality ported
> over, and integrated into Nova in Newton. So again we're not really looking
> to spend time on further os-vif design work right now.
>
> In the Ocata cycle we'll be looking to integrate os-vif into Neutron to
> let it directly serialize VIF objects and send them over to Nova, instead
> of using the ad-hoc port-binding dicts.  From the Nova side, we're not
> likely to want to support any new functionality that affects port-binding
> data until after Neutron is converted to os-vif. So Ocata at the earliest,
> but probably more like P, unless the Neutron conversion to os-vif gets
> completed unexpectedly quickly.

In light of this feature being requested by the NFV, container and
baremetal communities, and that Neutron's os-vif integration work
hasn't begun, does it make sense to block Nova VIF work? Are we
comfortable, from a wider OpenStack perspective, waiting until
possibly the P release? I think it's our collective responsibility as
developers to find creative ways to meet deadlines, not serializing
work on features and letting processes block us.

>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Tempest unstable interfaces in plugins

2016-06-12 Thread Assaf Muller
On Sat, Jun 11, 2016 at 4:04 PM, Ken'ichi Ohmichi <ken1ohmi...@gmail.com> wrote:
> 2016-06-10 17:01 GMT-07:00 Assaf Muller <as...@redhat.com>:
>> On Fri, Jun 10, 2016 at 12:02 PM, Andrea Frittoli
>> <andrea.fritt...@gmail.com> wrote:
>>> Dear all,
>>>
>>> I'm working on making the client manager in Tempest a stable interface, so
>> that in future it may be used safely by plugins to easily gain access
>> to service clients [0].
>>>
>>> This work inevitably involves changing the current client manager (unstable)
>>> interface.
>>> Several tempest plugins in OpenStack already consume that interface (namely
>>> the manager.Manager class) [1], so my work is likely to break them.
>>>
>>> I would ask the people maintaining the plugins to be careful about using
>>> unstable interfaces, as they are likely to change, especially since we're
>>> working on converting them to stable.
>>>
>>> If you maintain a plugin (in OpenStack or outside of OpenStack) that is
>>> likely to be affected by my work, please keep an eye on my gerrit review
>>> [0], leave a comment there or ping me on IRC (andreaf), I would very much
>>> like to make sure the transition is as painless as possible for everyone.
>>
>> FWIW this doesn't seem to break Neutron:
>> https://review.openstack.org/#/c/328398/.
>>
>> I would appreciate it if changes are made in a backwards compatible
>> manner (Similar to this:
>> https://review.openstack.org/#/c/322492/13/tempest/common/waiters.py)
>> so that projects with Tempest plugins may adapt and not break voting
>> jobs. The reason projects are using interfaces outside of tempest.lib
>> is that that's all there is, and the alternative of copy/pasting in to
>> the repo isn't amazing.
>
> Yeah, copy/pasting of tempest code which is outside of tempest.lib is
> not amazing.
> However, that is a possible option to continue gate testing on each project.
> We did that to pass Ceilometer gate as a workaround[1], then
> we(QA-team) knew what lib code is necessary and are concentrating on
> making the code as tempest.lib.
> After finishing, we can remove the copy/pasting code from Ceilometer
> by using new tempest.lib code.
>
> During this work, I feel it is nice to add a new hacking rule to block
> importing the local tempest code from other projects.
> From viewpoints of outside of QA team, it would be difficult to know
> the stability of tempest code I guess.
> Then by adding a rule, most projects know that and it is nice to
> ignore it by understanding the stability.

I added a comment on the patch, but when I looked into this a couple
of months ago, Neutron, Ironic and Heat all imported
tempest.{|test|manager}.
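
Such a rule is just a small hacking/flake8 check; roughly (the rule code,
regex and messages are illustrative, not the actual patch below):

```python
# Sketch only: flag imports of tempest internals outside of tempest itself.
import re

_TEMPEST_INTERNAL_IMPORT = re.compile(
    r'^\s*(import\s+tempest\.(?!lib)|'
    r'from\s+tempest\.(?!lib)|'
    r'from\s+tempest\s+import\s+(?!lib\b))')


def check_no_tempest_internal_imports(logical_line, filename):
    """X123 - plugins should only import from tempest.lib."""
    if 'tempest/' in filename:
        return  # tempest itself is free to import its own internals
    if _TEMPEST_INTERNAL_IMPORT.match(logical_line):
        yield (0, 'X123: import only from tempest.lib in plugins')
```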

>
> The hacking rule patch is https://review.openstack.org/#/c/328651/
> And tempest itself needs to ignore that if merging the rule ;-) [2]
>
> Thanks
> Ken Ohmichi
> ---
> [1]: https://review.openstack.org/#/c/325727/
> [2]: https://review.openstack.org/#/c/328652/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Tempest unstable interfaces in plugins

2016-06-10 Thread Assaf Muller
On Fri, Jun 10, 2016 at 12:02 PM, Andrea Frittoli
 wrote:
> Dear all,
>
> I'm working on making the client manager in Tempest a stable interface, so
> that in future it may be used safely by plugins to easily gain access
> to service clients [0].
>
> This work inevitably involves changing the current client manager (unstable)
> interface.
> Several tempest plugins in OpenStack already consume that interface (namely
> the manager.Manager class) [1], so my work is likely to break them.
>
> I would ask the people maintaining the plugins to be careful about using
> unstable interfaces, as they are likely to change, especially since we're
> working on converting them to stable.
>
> If you maintain a plugin (in OpenStack or outside of OpenStack) that is
> likely to be affected by my work, please keep an eye on my gerrit review
> [0], leave a comment there or ping me on IRC (andreaf), I would very much
> like to make sure the transition is as painless as possible for everyone.

FWIW this doesn't seem to break Neutron:
https://review.openstack.org/#/c/328398/.

I would appreciate it if changes are made in a backwards compatible
manner (Similar to this:
https://review.openstack.org/#/c/322492/13/tempest/common/waiters.py)
so that projects with Tempest plugins may adapt and not break voting
jobs. The reason projects are using interfaces outside of tempest.lib
is that that's all there is, and the alternative of copy/pasting in to
the repo isn't amazing.

>
> andrea
>
> [0] https://review.openstack.org/#/c/326683/
> [1]
> http://codesearch.openstack.org/?q=from%20tempest%20import%20manager=nope==
>
> IRC: andreaf
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovn] OVN vs. OpenDayLight

2016-06-09 Thread Assaf Muller
On Thu, Jun 9, 2016 at 5:06 PM, Kyle Mestery <mest...@mestery.com> wrote:
> On Thu, Jun 9, 2016 at 2:11 PM, Assaf Muller <as...@redhat.com> wrote:
>> On Thu, Jun 9, 2016 at 1:48 PM, Ben Pfaff <b...@ovn.org> wrote:
>>> On Thu, Jun 09, 2016 at 10:28:31AM -0700, rezroo wrote:
>>>> I'm trying to reconcile differences and similarities between OVN and
>>>> OpenDayLight in my head. Can someone help me compare these two technologies
>>>> and explain if they solve the same problem, or if there are fundamental
>>>> differences between them?
>>>
>>> OVN implements network virtualization for clouds of VMs or containers or
>>> a mix.  Open Daylight is a platform for managing networks that can do
>>> anything you want.
>>
>> That is true, but when considering a Neutron backend for OpenStack
>> deployments, people choose a subset of OpenDaylight projects and the
>> end result is a solution that is comparable in scope and feature set.
>> There are objective differences in where the projects are in their
>> lifetime, the HA architecture, the project's consistency model between
>> the neutron-server process and the backend, the development velocity,
>> the community size and the release model.
>>
> Fundamentally, the main difference is that OVN does one thing: It does
> network virtualization. OpenDaylight _MAY_ do network virtualization,
> among other things, and it likely does network virtualization in many
> different ways. Like Ben said:
>
> "Open Daylight is a platform for managing networks that can do
> anything you want."

I agree, but I don't think that was what was asked or makes for an
interesting discussion. I think the obvious comparison is OVN to
ML2/ODL using the ovsdb ODL project.

>
> Thanks,
> Kyle
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovn] OVN vs. OpenDayLight

2016-06-09 Thread Assaf Muller
On Thu, Jun 9, 2016 at 1:48 PM, Ben Pfaff  wrote:
> On Thu, Jun 09, 2016 at 10:28:31AM -0700, rezroo wrote:
>> I'm trying to reconcile differences and similarities between OVN and
>> OpenDayLight in my head. Can someone help me compare these two technologies
>> and explain if they solve the same problem, or if there are fundamental
>> differences between them?
>
> OVN implements network virtualization for clouds of VMs or containers or
> a mix.  Open Daylight is a platform for managing networks that can do
> anything you want.

That is true, but when considering a Neutron backend for OpenStack
deployments, people choose a subset of OpenDaylight projects and the
end result is a solution that is comparable in scope and feature set.
There are objective differences in where the projects are in their
lifetime, the HA architecture, the project's consistency model between
the neutron-server process and the backend, the development velocity,
the community size and the release model.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][devstack] Does the openvswitch-agent need to be run along side the neutron-l3-agent?

2016-06-06 Thread Assaf Muller
On Mon, Jun 6, 2016 at 1:59 PM, Sean M. Collins  wrote:
> While reviewing https://review.openstack.org/#/c/292778/5 I think I
> might have found a bit of coupling between the neutron l2 agent and the
> l3 agent when it comes to DevStack.
>
> In the DevStack neutron guide - the "control node" currently
> does double duty as both an API server and also as a compute host.
>
> https://github.com/openstack-dev/devstack/blob/master/doc/source/guides/neutron.rst#devstack-configuration
>
> Extra compute nodes have a pretty short configuration
>
> https://github.com/openstack-dev/devstack/blob/master/doc/source/guides/neutron.rst#devstack-compute-configuration
>
> So, recently I poked at having a pure control node on the "devstack-1"
> host, by removing the q-agt and n-cpu entries from ENABLED_SERVICES,
> while leaving q-l3.
>
> It appears that the code in DevStack, relies on the presence of q-agt in
> order to create the integration bridge (br-int), so when the L3 agent
> comes up it complains because br-int hasn't been created.
>
> https://github.com/openstack-dev/devstack/blob/master/lib/neutron_plugins/ovs_base#L20
>
> Anyway, here's the fix.
>
> https://review.openstack.org/#/c/326063/

The L3 agent requires an L2 agent on the same host. It's not just
about creating the bridge, it's also about plugging the router/dhcp
ports correctly.

>
> --
> Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][DevStack] Do we still need the neutron-debug command?

2016-05-19 Thread Assaf Muller
On Thu, May 19, 2016 at 11:36 AM, Sean M. Collins  wrote:
> Ryan Moats and I chatted a week or so ago about neutron-debug in the
> context of DevStack.
>
> Ryan pushed a patch to see what it actually does[1].
>
> Currently, it's not clear what it is used for, and in some instances it
> seems to be more trouble than it is really worth. Instead of going
> through and disabling it[2] in every job - can we just delete it?

As far as I know Devstack is/was the only user. From my perspective,
I've never heard anyone using neutron-debug, reporting a bug against
it or asking any questions about it. I think it's reasonable to remove
it.

>
> [1]: https://review.openstack.org/#/c/314079/
> [2]: https://review.openstack.org/#/c/318739/
>
> --
> Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]Get one network's usage

2016-05-16 Thread Assaf Muller
On Mon, May 16, 2016 at 4:38 AM, zhi  wrote:
> hi, all
>
> Many times we want to get one network's usage. Like this command:
> "neutron get-network-usage". We can get how many ports were used in this
> network. Besides, we can get floating IP usage from the external network.
>
> Do we need this?
>
>  Hope for your reply. ;-)

Sounds similar to:
http://specs.openstack.org/openstack/neutron-specs/specs/mitaka/network-ip-availability-api.html
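
For a rough sense of what such an API exposes, here is a hypothetical
sketch of querying it over plain HTTP once something like that spec
lands. The endpoint path, the token handling and the payload keys are
assumptions based on the spec, not a tested client:

    # Hypothetical sketch; path and response keys are assumptions.
    import requests

    NEUTRON_ENDPOINT = 'http://controller:9696'
    TOKEN = 'replace-with-a-real-keystone-token'

    resp = requests.get(
        NEUTRON_ENDPOINT + '/v2.0/network-ip-availabilities',
        headers={'X-Auth-Token': TOKEN})
    resp.raise_for_status()

    for net in resp.json().get('network_ip_availabilities', []):
        # Each entry is expected to report per-network totals and usage.
        print('%s: %s/%s IPs used' % (net.get('network_name'),
                                      net.get('used_ips'),
                                      net.get('total_ips')))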

>
>
> Thanks
> Zhi Chang

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QA] Call to action - Neutron/Tempest API tests dedup

2016-05-14 Thread Assaf Muller
On Sat, May 14, 2016 at 8:03 AM, zhi <changzhi1...@gmail.com> wrote:
> hi, Muller.
>
> As you mentioned, will there be an individual Tempest plugin named
> "Neutron Tempest plugin" in the future?

It already exists :)

http://docs.openstack.org/developer/neutron/devref/development.environment.html#id3

The plugin was contributed by Daniel Mellado and we use it at the gate:
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/neutron.yaml#L34
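
For anyone curious what a Tempest plugin boils down to, here is a rough
sketch of the entry-point class Tempest discovers. The class name and
paths are placeholders rather than Neutron's actual plugin code, and it
assumes the class is advertised under the 'tempest.test_plugins'
setuptools entry point in setup.cfg:

    # Minimal sketch of a Tempest plugin class; names/paths are placeholders.
    import os

    from tempest.test_discover import plugins

    class MyTempestPlugin(plugins.TempestPlugin):
        def load_tests(self):
            # Tell Tempest where this plugin's tests live.
            base_path = os.path.split(os.path.dirname(
                os.path.abspath(__file__)))[0]
            return os.path.join(base_path, 'my_plugin/tests'), base_path

        def register_opts(self, conf):
            # Register plugin-specific config options here, if any.
            pass

        def get_opt_lists(self):
            # Return (group, options) pairs so config files can be generated.
            return []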

>
>
> Thanks
> Zhi Chang
>
> 2016-05-14 6:53 GMT+08:00 Assaf Muller <as...@redhat.com>:
>>
>> TL;DR: I'm looking for volunteers for tasks 1, 2 and 3 listed below.
>> Help would be hugely appreciated and %(local_drink)s will be bought in
>> Barcelona. I've posted example patches that demonstrate the idea.
>> Needless to say I'm here to provide reviews and to answer questions.
>> Additionally, most of the discussions have been within the Neutron
>> community and I'm looking for feedback from Tempest folks.
>>
>>
>> The context:
>> The Neutron community has been engaged in a long running effort to
>> move some of the networking API tests to the Neutron tree. We started
>> by copying the api/network directory tree, later keeping only the
>> tests, importing the test infrastructure itself from Tempest. We
>> continued by minimizing the imports we do from Tempest (Excluding
>> tempest.lib), and introduced a Neutron Tempest plugin.
>>
>> One issue that remains is that some of the tests are still found in
>> both repositories. This confuses contributors and wastes compute
>> resources. Since the tests run against stable/{liberty|mitaka} and
>> master, it should be safe to dedup. I proposed a line in the sand so
>> that 'core resources' remain to be tested in Tempest and more
>> 'advanced' APIs are tested in Neutron. The concept was agreed upon by
>> the Neutron and (Then) Tempest PTLs, and the specifics were discussed
>> and a consensus was found in patch [2]. Here is the resulting doc for
>> your viewing pleasure [5].
>>
>>
>> The work:
>> After I removed the API tests for core resources from the Neutron
>> tree, there remain three tasks to finish the de-dup:
>>
>> 1*) Remove tests for advanced APIs from Tempest. The full list of
>> tests that I propose be removed from Tempest is tracked here [1] (With
>> the rationale found at [2]), and an example patch may be found here
>> [4].
>> 2) Push tests for Neutron core resources that were added after the
>> fork from Tempest, then delete these from Neutron. This is also
>> tracked in [1], with example patches found here [6]. This is not a
>> strict cut/paste as the way Tempest and Neutron interact with clients
>> is slightly different. Fun!
>> 3) Sync tests for Neutron core resources that were updated after the
>> fork from Tempest. Test modifications include: Bug fixes for raceful
>> tests, py3 fixes, doc string typos and more. This is also tracked in
>> [1], with example patches found here [3].
>>
>> * I believe that as far as the Tempest test removal criteria found at
>> [7], this case falls under the first exception: 'The class of testing
>> has been decided to be outside the scope of tempest' and we may skip
>> the three prong rule for removal. Input welcome.
>>
>> [1] https://etherpad.openstack.org/p/neutron-tempest-defork
>> [2] https://review.openstack.org/#/c/280427/
>> [3] https://review.openstack.org/#/c/316280/ +
>> https://review.openstack.org/#/c/316283/
>> [4] https://review.openstack.org/#/c/316183/
>> [5]
>> docs.openstack.org/developer/neutron/devref/development.environment.html#api-tests
>> [6] https://review.openstack.org/#/c/316265/ +
>> https://review.openstack.org/#/c/316269/
>> [7] https://wiki.openstack.org/wiki/QA/Tempest-test-removal
>>
>> The work is tracked via:
>> * https://review.openstack.org/#/q/topic:bug/1552960
>> * https://bugs.launchpad.net/neutron/+bug/1552960
>> * https://etherpad.openstack.org/p/neutron-tempest-defork
>> * My head

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][QA] Call to action - Neutron/Tempest API tests dedup

2016-05-13 Thread Assaf Muller
TL;DR: I'm looking for volunteers for tasks 1, 2 and 3 listed below.
Help would be hugely appreciated and %(local_drink)s will be bought in
Barcelona. I've posted example patches that demonstrate the idea.
Needless to say I'm here to provide reviews and to answer questions.
Additionally, most of the discussions have been within the Neutron
community and I'm looking for feedback from Tempest folks.


The context:
The Neutron community has been engaged in a long running effort to
move some of the networking API tests to the Neutron tree. We started
by copying the api/network directory tree, later keeping only the
tests, importing the test infrastructure itself from Tempest. We
continued by minimizing the imports we do from Tempest (Excluding
tempest.lib), and introduced a Neutron Tempest plugin.

One issue that remains is that some of the tests are still found in
both repositories. This confuses contributors and wastes compute
resources. Since the tests run against stable/{liberty|mitaka} and
master, it should be safe to dedup. I proposed a line in the sand so
that 'core resources' remain to be tested in Tempest and more
'advanced' APIs are tested in Neutron. The concept was agreed upon by
the Neutron and (Then) Tempest PTLs, and the specifics were discussed
and a consensus was found in patch [2]. Here is the resulting doc for
your viewing pleasure [5].


The work:
After I removed the API tests for core resources from the Neutron
tree, there remain three tasks to finish the de-dup:

1*) Remove tests for advanced APIs from Tempest. The full list of
tests that I propose be removed from Tempest is tracked here [1] (With
the rationale found at [2]), and an example patch may be found here
[4].
2) Push tests for Neutron core resources that were added after the
fork from Tempest, then delete these from Neutron. This is also
tracked in [1], with example patches found here [6]. This is not a
strict cut/paste as the way Tempest and Neutron interact with clients
is slightly different. Fun!
3) Sync tests for Neutron core resources that were updated after the
fork from Tempest. Test modifications include: Bug fixes for raceful
tests, py3 fixes, doc string typos and more. This is also tracked in
[1], with example patches found here [3].

* I believe that as far as the Tempest test removal criteria found at
[7], this case falls under the first exception: 'The class of testing
has been decided to be outside the scope of tempest' and we may skip
the three prong rule for removal. Input welcome.

[1] https://etherpad.openstack.org/p/neutron-tempest-defork
[2] https://review.openstack.org/#/c/280427/
[3] https://review.openstack.org/#/c/316280/ +
https://review.openstack.org/#/c/316283/
[4] https://review.openstack.org/#/c/316183/
[5] 
docs.openstack.org/developer/neutron/devref/development.environment.html#api-tests
[6] https://review.openstack.org/#/c/316265/ +
https://review.openstack.org/#/c/316269/
[7] https://wiki.openstack.org/wiki/QA/Tempest-test-removal

The work is tracked via:
* https://review.openstack.org/#/q/topic:bug/1552960
* https://bugs.launchpad.net/neutron/+bug/1552960
* https://etherpad.openstack.org/p/neutron-tempest-defork
* My head

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Diagnostics & troubleshooting design summit summary and next steps

2016-05-09 Thread Assaf Muller
On Mon, May 9, 2016 at 1:28 PM, Boden Russell <boden...@gmail.com> wrote:
> Assaf, thanks for driving this session.
>
> As a newbie to the design sessions, I think presenting a brief "context"
> up-front is helpful. IMO the key word here is "brief" (5 min or less for
> example) and furthermore should not open the floor for digression given
> the short time-frame we have per session. Some of us will be experts on
> the topic, others of us will not have had time to do the proper research
> before heading into the session. This "intro" gets us all on the same page,
> or closer to it.
>
> Finally, it might be nice to collaborate on the intro content with the main
> players of the session topic. Not saying the intro needs to be reviewed,
> but typically it seems there are a few individuals with vested
> interest / knowledge in the space and it would be nice if they could work
> together on developing the content.

That's good input. For future reference I think you will find that if
you reach out to session chairs, no one will say no :)

>
>
> On 5/6/16 2:38 PM, Assaf Muller wrote:
>> It is my personal experience that unless I do my homework, design
>> summit discussions largely go over my head. I'd guess that most people
>> don't have time to research the topic of every design session they
>> intend to go to, so for the session I lead I decided to do the
>> unthinkable and present the context of the discussion [1] with a few
>> slides [2] (That part of the session took I think 3 minutes). I'd love
>> to get feedback particularly on that, if people found it useful we may
>> consider increasing adoption of that habit for the Barcelona design
>> summit.
>>
>> The goal for the session was to achieve consensus on the very high
>> level topics: Do we want to do Neutron diagnostics in-tree and via the
>> API. I believe that goal was achieved, and the answer to both
>> questions is 'yes'.
>>
>> Since there's been at least 4 RFEs submitted in this domain, the next
>> step is to try and converge on one and iterate on an API. For these
>> purposes we will be using Hynek's spec, under review here [3]. I was
>> approached by multiple people that are interested in assisting with
>> the implementation phase, please say so on the spec so that Hynek will
>> be able to add you as a contributor.
>>
>> I foresee a few contention points, chief of which is the abstraction
>> level of the API and how best to present diagnostics information in a
>> way that is plugin agnostic. The trick will be to find an API that is
>> not specific to the reference implementation while still providing a
>> great user experience to the vast majority of OpenStack users.
>>
>> A couple of projects in the domain were mentioned, specifically
>> Monasca and Steth. Contributors from these projects are highly
>> encouraged to review the spec.
>>
>> [1] https://etherpad.openstack.org/p/newton-neutron-troubleshooting
>> [2] 
>> https://docs.google.com/presentation/d/1IBVZ6defUwhql4PEmnhy3fl9qWEQVy4iv_IR6pzkFKw/edit?usp=sharing
>> [3] https://review.openstack.org/#/c/308973/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Diagnostics & troubleshooting design summit summary and next steps

2016-05-06 Thread Assaf Muller
It is my personal experience that unless I do my homework, design
summit discussions largely go over my head. I'd guess that most people
don't have time to research the topic of every design session they
intend to go to, so for the session I led I decided to do the
unthinkable and present the context of the discussion [1] with a few
slides [2] (That part of the session took I think 3 minutes). I'd love
to get feedback particularly on that, if people found it useful we may
consider increasing adoption of that habit for the Barcelona design
summit.

The goal for the session was to achieve consensus on the very high
level topics: Do we want to do Neutron diagnostics in-tree and via the
API. I believe that goal was achieved, and the answer to both
questions is 'yes'.

Since there's been at least 4 RFEs submitted in this domain, the next
step is to try and converge on one and iterate on an API. For these
purposes we will be using Hynek's spec, under review here [3]. I was
approached by multiple people that are interested in assisting with
the implementation phase, please say so on the spec so that Hynek will
be able to add you as a contributor.

I foresee a few contention points, chief of which is the abstraction
level of the API and how best to present diagnostics information in a
way that is plugin agnostic. The trick will be to find an API that is
not specific to the reference implementation while still providing a
great user experience to the vast majority of OpenStack users.

A couple of projects in the domain were mentioned, specifically
Monasca and Steth. Contributors from these projects are highly
encouraged to review the spec.

[1] https://etherpad.openstack.org/p/newton-neutron-troubleshooting
[2] 
https://docs.google.com/presentation/d/1IBVZ6defUwhql4PEmnhy3fl9qWEQVy4iv_IR6pzkFKw/edit?usp=sharing
[3] https://review.openstack.org/#/c/308973/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] - exception for stable/kilo DVR back-ports

2016-05-04 Thread Assaf Muller
On Wed, May 4, 2016 at 4:54 PM, Kevin Benton  wrote:
> Hello,
>
> I would like to propose a freeze exception for
> https://review.openstack.org/#/c/312253/ and
> https://review.openstack.org/#/c/312254/ . They address a bug in DVR that
> causes floating IPs to eventually break after an L3 agent has been
> restarted. It's a serious bug but it's very subtle because it takes a busy
> system and bad luck to trigger it.
>
> If we decide against the back-port a workaround could be to advise all
> distros/operators to call the namespace cleanup script every time the l3
> agent is restarted, which would prevent this issue, but at the cost of
> disrupting traffic on the agent restart.

That's not something I could seriously suggest to users, meaning that
said users will just cherry pick these patches anyway. Might as well
prevent the pain proactively and merge it to stable/kilo.

>
>
> Cheers,
> Kevin Benton

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][tc] Neutron stadium evolution from Austin

2016-05-02 Thread Assaf Muller
On Mon, May 2, 2016 at 3:18 PM, Gal Sagie  wrote:
> Maybe it can help if instead of trying to define criteria to which projects
> dont fit into
> the stadium, try to define in your spec what IT IS, and for what purpose its
> there.

Well said. This came up multiple times in the Gerrit discussion(s) a
couple of months back. The design summit discussion highlighted again
that people don't know what the stadium is (The usage of the word
'supported' or 'official' really demonstrates this. Supported by
who?).

The stadium is not a lot more than perception at this point. What I'd
like to do is realign the terminology we use to describe networking
projects.

Can we consider a stadium project as an 'assisted' project, and a
project outside of the stadium as an 'independent' project? I'd like
to avoid using terms that reflect poorly on projects outside of the
stadium, and reverse the situation: Let's refer to projects outside
the stadium using either a neutral or a positive word, and be
consistent about using that word in any public facing document. I
would normally avoid debating semantics, but since these days the
stadium is more about perception than anything else, I think we should
focus on semantics and explaining what the stadium exactly is.

Another thing we should ask ourselves is whether the stadium should exist at all.
The idea that the Neutron team can vouch for a project is insane to
me. Neutron cores cannot vouch for ODL, only ODL developers can vouch
for ODL. For my money, Neutron cores currently do not vouch for
anything other than Neutron anyway, so this is just about reflecting
reality, not performing any real changes. The only sticking point I
see (And here I definitely agree with Armando) from a governance point
of view is the ability to have control over OpenStack's networking API
(It's conceivable to have the people with control over OpenStack's
networking API to be different than the current group of Neutron
cores). If we're OK going forward with no centralized place to manage
networking projects, we're also OK with having no control over the
API, and the danger here is to allow new projects (Think SFC and
SFC-new 6 months later) that solve similar use cases to define their
own API. That seems counter productive to OpenStack's longevity. One
way to resolve this is to go forward with Armando's suggestion to have
a centralized location to discuss and approve new APIs. I'm not sure
we must enforce the API declarations via some technical mechanism, it
might be possible to have everything on paper instead and assume
people are generally decent. Note that control over the API and the
stadium are essentially two independent problems, it may be convenient
to tackle both under the stadium discussion, but it's not necessary.

>
>
> On Mon, May 2, 2016 at 8:53 PM, Kyle Mestery  wrote:
>>
>> On Mon, May 2, 2016 at 12:22 PM, Armando M.  wrote:
>> >
>> >
>> > On 30 April 2016 at 14:24, Fawad Khaliq  wrote:
>> >>
>> >> Hi folks,
>> >>
>> >> Hope everyone had a great summit in Austin and got back safe! :)
>> >>
>> >> At the design summit, we had a Neutron stadium evolution session, which
>> >> needs your immediate attention as it will impact many stakeholders of
>> >> Neutron.
>> >
>> >
>> > It's my intention to follow up with a formal spec submission to
>> > neutron-specs as soon as I recover from the trip. Then you'll have a
>> > more
>> > transparent place to voice your concern.
>> >
>> >>
>> >>
>> >> To summarize for everyone, our Neutron leadership made the following
>> >> proposal for the “greater-good” of Neutron to improve and reduce burden
>> >> on
>> >> the Neutron PTL and core team to avoid managing more Neutron drivers:
>> >
>> >
>> > It's not just about burden. It's about consistency first and foremost.
>> >
>> >>
>> >>
>> >> Quoting the etherpad [1]
>> >>
>> >> "No request for inclusion are accepted for projects focussed solely on
>> >> implementations and/or API extensions to non-open solutions."
>> >
>> >
>> > By the way, this was brought forward and discussed way before the
>> > Summit. In
>> > fact this is already implemented at the Neutron governance level [1].
>> >
>> >>
>> >> To summarize for everyone what this means is that all Neutron drivers,
>> >> which implement non open source networking backends are instantly out
>> >> of the
>> >> Neutron stadium and are marked as "unofficial/unsupported/remotely
>> >> affiliated" and rest are capable of being tagged as
>> >> "supported/official”.
>> >
>> >
>> > Totally false.
>> >
>> > All this means is that these projects do not show up in list [1] (minus
>> > [2],
>> > which I forgot): ie. these projects are the projects the Neutron team
>> > vouches for. Supportability is not a property tracked by this list. You,
>> > amongst many, should know that it takes a lot more than being part of a
>> > list
>> > to be considered a supported solution, and I am actually even surprised

Re: [openstack-dev] [Neutron][Infra] Post processing of gate hooks on job timeouts

2016-04-11 Thread Assaf Muller
On Mon, Apr 11, 2016 at 1:56 PM, Clark Boylan  wrote:
> On Mon, Apr 11, 2016, at 10:52 AM, Jakub Libosvar wrote:
>> On 04/11/2016 06:41 PM, Clark Boylan wrote:
>> > On Mon, Apr 11, 2016, at 03:07 AM, Jakub Libosvar wrote:
>> >> Hi,
>> >>
>> >> recently we hit an issue in Neutron with tests getting stuck [1]. As a
>> >> side effect we discovered logs are not collected properly which makes it
>> >> hard to find the root cause. The reason of missing logs is that we send
>> >> SIGKILL to whatever gate hook is running when we hit the global timeout
>> >> per gate job [2]. This gives no time to running process to perform any
>> >> post-processing. In post_gate_hook function in Neutron, we collect logs
>> >> from /tmp directory, compress them and move them to /opt/stack/logs to
>> >> make them exposed.
>> >>
>> >> I have in mind two solutions to which I'd like to get feedback before
>> >> sending patches.
>> >>
>> >> 1) In Neutron, we execute tests in post_gate_hook (dunno why). But even
>> >> if we would have moved test execution into gate_hook and tests get stuck
>> >> then the post_gate_hook won't be triggered [3]. So the solution I
>> >> propose here is to terminate gate_hook N minutes before global timeout
>> >> and still execute post_gate_hook (with timeout) as post-processing
>> >> routine.
>> >>
>> >> 2) Second proposal is to let timeout wrapped commands know they are
>> >> about to be killed. We can send let's say SIGTERM instead of SIGKILL and
>> >> after certain amount of time, send SIGKILL. Example: We send SIGTERM 3
>> >> minutes before global timeout, letting these 3 minutes to 'command' to
>> >> handle the SIGTERM signal.
>> >>
>> >>  timeout -s 15 -k 3 $((REMAINING_TIME-3))m bash -c "command"
>> >>
>> >> With the 2nd approach we can trap the signal that kills running test
>> >> suite and collects logs with same functions we currently have.
>> >>
>> >>
>> >> I would personally go with second option but I want to hear if anybody
>> >> has a better idea about post processing in gate jobs or if there is
>> >> already a tool we can use to collect logs.
>> >>
>> >> Thanks,
>> >> Kuba
>> >
>> > Devstack gate already does a "soft" timeout [0] then proceeds to cleanup
>> > (part of which is collecting logs) [1], then Jenkins does the "hard"
>> > timeout [2]. Why aren't we collecting the required log files as part of
>> > the existing cleanup?
>> This existing cleanup doesn't support hooks. Neutron tests produce a lot
>> of logs by default stored in /tmp/dsvm- so we need to compress
>> and move them to /opt/stack/logs in order to get them collected by [1].
>
> My suggestion would be to stop writing these log files to /tmp and
> instead write them to the log dir where they will be automagically
> compressed and collected.

Yeah that's what I'm doing here https://review.openstack.org/#/c/303594/.

>
>>
>> >
>> > [0]
>> > https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n569
>> > [1]
>> > https://git.openstack.org/cgit/openstack-infra/devstack-gate/tree/devstack-vm-gate-wrap.sh#n594
>> > [2]
>> > https://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n325
>> >
>> > Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Infra] Post processing of gate hooks on job timeouts

2016-04-11 Thread Assaf Muller
On Mon, Apr 11, 2016 at 9:39 AM, Morales, Victor
 wrote:
>
>
>
>
>
> On 4/11/16, 5:07 AM, "Jakub Libosvar"  wrote:
>
>>Hi,
>>
>>recently we hit an issue in Neutron with tests getting stuck [1]. As a
>>side effect we discovered logs are not collected properly which makes it
>>hard to find the root cause. The reason of missing logs is that we send
>>SIGKILL to whatever gate hook is running when we hit the global timeout
>>per gate job [2]. This gives no time to running process to perform any
>>post-processing. In post_gate_hook function in Neutron, we collect logs
>>from /tmp directory, compress them and move them to /opt/stack/logs to
>>make them exposed.
>>
>>I have in mind two solutions to which I'd like to get feedback before
>>sending patches.
>>
>>1) In Neutron, we execute tests in post_gate_hook (dunno why). But even
>>if we would have moved test execution into gate_hook and tests get stuck
>>then the post_gate_hook won't be triggered [3]. So the solution I
>>propose here is to terminate gate_hook N minutes before global timeout
>>and still execute post_gate_hook (with timeout) as post-processing routine.
>>
>>2) Second proposal is to let timeout wrapped commands know they are
>>about to be killed. We can send let's say SIGTERM instead of SIGKILL and
>>after certain amount of time, send SIGKILL. Example: We send SIGTERM 3
>>minutes before global timeout, letting these 3 minutes to 'command' to
>>handle the SIGTERM signal.
>>
>> timeout -s 15 -k 3 $((REMAINING_TIME-3))m bash -c "command"
>>
>>With the 2nd approach we can trap the signal that kills running test
>>suite and collects logs with same functions we currently have.
>>
>>
>>I would personally go with second option but I want to hear if anybody
>>has a better idea about post processing in gate jobs or if there is
>>already a tool we can use to collect logs.
>
> I also like the second option; it seems less aggressive and gives an
> opportunity to catch more information before killing processes.
> Ideally, timeouts are ultimatums for worst-case scenarios and should
> never be reached.

Kuba and I discussed this issue at length - I also think the 2nd
approach is reasonable, but I'd like to see what more Devstack-oriented
folks think.
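
To make option 2 concrete: the gate hooks themselves are shell, so any
real change would live there, but the trap-then-flush idea is easy to
sketch. Below is a hypothetical, minimal example of a wrapped process
that archives its logs into the collected directory when the outer
timeout sends SIGTERM, before the later SIGKILL arrives; the paths and
archive name are assumptions:

    # Hypothetical sketch of the SIGTERM-then-SIGKILL idea: use the grace
    # period to move logs somewhere the log collector will pick them up.
    import os
    import shutil
    import signal
    import sys
    import time

    LOG_SRC = '/tmp/dsvm-functional'   # where the run writes logs (assumed)
    LOG_DST = '/opt/stack/logs'        # directory published by the CI job

    def collect_logs_and_exit(signum, frame):
        # Invoked on SIGTERM: archive whatever we have, then exit non-zero.
        if os.path.isdir(LOG_SRC):
            shutil.make_archive(os.path.join(LOG_DST, 'dsvm-functional-logs'),
                                'gztar', LOG_SRC)
        sys.exit(1)

    signal.signal(signal.SIGTERM, collect_logs_and_exit)

    # Stand-in for the real test run; a stuck suite would sit here until
    # the outer 'timeout -s TERM -k ...' wrapper fires.
    while True:
        time.sleep(1)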

>
>>
>>Thanks,
>>Kuba
>>
>>
>>[1] https://bugs.launchpad.net/bugs/1567668
>>[2]
>>https://github.com/openstack-infra/devstack-gate/blob/master/functions.sh#L1151
>>[3]
>>https://github.com/openstack-infra/devstack-gate/blob/master/devstack-vm-gate-wrap.sh#L581
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][neutron] Eliminating the DevStack layer

2016-04-08 Thread Assaf Muller
On Fri, Apr 8, 2016 at 3:37 PM, Doug Wiegley
<doug...@parksidesoftware.com> wrote:
>
>> On Apr 8, 2016, at 1:28 PM, Sean M. Collins <s...@coreitpro.com> wrote:
>>
>> Assaf Muller wrote:
>>> I do want to say that ML2's "mechanism_drivers" option probably does
>>> not have a default for the same reason we do not have a default for
>>> the core_plugin value, we don't want to play favorites. From Neutron's
>>> point of view, ignoring the existence of Devstack and upstream CI, I
>>> think that makes sense.
>>>
>>
>> True, I do see your point.
>>
>> I do however think, that if you do pick the ML2 plugin as your
>> core_plugin, it should have some mechanism drivers enabled by default. You
>> shouldn't have to pick core_plugin, then be forced to pick
>> mechanism_drivers. I'd rather see some mechanism_drivers already
>> enabled, and if you have a difference in opinion, set mechanism_drivers
>> in your local.conf.
>
> I previously thought that a default there made no sense, but really, how is a 
> default core plugin of ml2 with a default mech of local going to hurt anyone?

I was playing devil's advocate. I'm fine with picking ML2 and OVS+LB.
You will face resistance from people that have an interest in having
the ML2 reference implementation gone.

>
> We had a big argument of whether to have a default DNS resolver… 8.8.8.8 
> leaks internal info to a third-party, hypervisor default potentially leaks 
> infrastructure details.  Not having a default there at least has some 
> security/privacy implications.
>
> There are likely things that we can start defaulting in a saner way.
>
> doug
>
>
>
>>
>> --
>> Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][neutron] Eliminating the DevStack layer

2016-04-08 Thread Assaf Muller
On Fri, Apr 8, 2016 at 11:57 AM, Sean M. Collins  wrote:
> Edgar Magana wrote:
>> This is a very solid plan. Maybe to fair on the current state of the 
>> devstack with neutron functionality, what will be the disadvantage(s) of 
>> this change from your perspective?
>>
>
> A user's local.conf will probably get a little bigger - and I think a
> lot of the issues about Neutron's inability to run out of the box will
> be exposed.
>
> I mean let's face it - Neutron, installed from source, with no
> configuration Does Not Work™. There are not enough settings that have
> defaults set, for it to actually run.
>
> This was made painfully obvious to me when I had to make new revisions
> to the Neutron DevStack refactor, where I had to add more inisets, in
> order for Neutron to finish stacking correctly.
>
> Did you know, for example, that we rely on DevStack[1] to set the list
> of mechanism_drivers? Without this, you'll get an empty mechanism_driver
> list and nothing will ever be wired up.

I don't want to detract from what you're saying Sean, and I largely
agree that we can be more opinionated in Neutron and rely less on
Devstack. I also never liked Devstack's "macros" and have always
preferred configuring everything myself via local.conf when that was
made an option, simply because I already know how to configure Neutron
and I didn't want to learn Devstack's options. I do want to say that
ML2's "mechanism_drivers" option probably does not have a default for
the same reason we do not have a default for the core_plugin value, we
don't want to play favorites. From Neutron's point of view, ignoring
the existence of Devstack and upstream CI, I think that makes sense.
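
For what it's worth, if the team ever did decide to ship a default, it
is a one-line oslo.config change. A sketch follows - 'mechanism_drivers'
is the real ML2 option name, but defaulting it to the in-tree open
source drivers here is purely an illustration, not something that has
been agreed on:

    # Sketch only: ML2 currently ships this option without a usable default;
    # the value chosen below is just an illustration.
    from oslo_config import cfg

    ml2_opts = [
        cfg.ListOpt('mechanism_drivers',
                    default=['openvswitch', 'linuxbridge'],
                    help='Ordered list of networking mechanism driver '
                         'entrypoints to be loaded from the '
                         'neutron.ml2.mechanism_drivers namespace.'),
    ]

    cfg.CONF.register_opts(ml2_opts, group='ml2')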

>
> I'm sure there is an argument that can be made about why there is no
> default for mechanism_drivers in ML2, since there are lots of options.
> But, I think that we can at least enable the ones that we have in
> Neutron's main tree. Packagers who make packages for each mechanism
> driver (LB, OVS, etc..) already had to handle things like
> mechanism_drivers in the Ml2 configuration already, so it shouldn't
> really impact them since we're only setting a default if nothing is set,
> and their packages should explicitly set it.
>
> [1]: 
> https://github.com/openstack-dev/devstack/blob/master/lib/neutron_plugins/ml2#L27
>
>
> --
> Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Hirofumi Ichihara to Neutron Core Reviewer Team

2016-04-08 Thread Assaf Muller
+1

On Fri, Apr 8, 2016 at 6:58 AM, Henry Gessau  wrote:
> +1, Hirofumi will make a great addition.
>
> Akihiro Motoki  wrote:
>> Hi Neutrinos,
>>
>> As the API Lieutenant of Neutron team,
>> I would like to propose Hirofumi Ichihara (irc: hichihara) as a member of
>> Neutron core reviewer team mainly focuing on the API/DB area.
>>
>> Hirofumi has been contributing neutron actively in the recent two
>> releases constantly.
>> He was involved in key features in API/DB areas in Mitaka such as
>> tagging support and network availability zones.
>> I believe his knowledge and involvement will be great addition to our team.
>> He have been reviewing constantly [1] and I expect he continue to work
>> for Newton or later.
>>
>> Existing API/DB core reviews (and other Neutron core reviewers),
>> please vote +1/-1 for the addition of Hirofumi to the team.
>>
>> Thanks!
>> Akihiro
>>
>>
>> [1] http://stackalytics.com/report/contribution/neutron/90
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] OVSDB native interface as default in gate jobs

2016-04-05 Thread Assaf Muller
On Tue, Apr 5, 2016 at 12:35 PM, Sean M. Collins  wrote:
> Russell Bryant wrote:
>> because they are related to two different command line utilities
>> (ovs-vsctl vs ovs-ofctl) that speak two different protocols (OVSDB vs
>> OpenFlow) that talk to two different daemons on the system (ovsdb-server vs
>> ovs-vswitchd) ?
>
> True, they influence two different daemons - but it's really two options
> that both have two settings:
>
> * "talk to it via the CLI tool"
> * "talk to it via a native interface"
>
> How likely is it to have one talking via native interface and the other
> via CLI?

The ovsdb native interface is a couple of cycles more mature than the
openflow one; I can see how some users would use one but not the other.

>
> Also, if the native interface is faster, I think we should consider
> making it the default.

Definitely. I'd prefer to deprecate and delete the cli interfaces and
keep only the native interfaces in the long run.

>
> --
> Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Adding amuller to the neutron-drivers team

2016-04-04 Thread Assaf Muller
On Fri, Apr 1, 2016 at 7:58 PM, Edgar Magana  wrote:
> Congratulations Assaf!
>
> From: "Armando M." 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: Thursday, March 31, 2016 at 5:48 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject: [openstack-dev] [Neutron] Adding amuller to the neutron-drivers
> team
>
> Hi folks,
>
> Assaf's tenacity is a great asset for the Neutron team at large. I believe
> that the drivers team would benefit from that tenacity, and therefore I
> would like to announce him to be a new member of the Neutron Drivers team
> [1].

Thank you everyone :)

I'm very excited to join the team! I'm bringing in a lot of energy and
hope to bring valuable contributions to the table.

>
> At the same time, I would like to thank mestery as he steps down. Mestery
> has been instrumental in many decisions taken by this team and for
> spearheading the creation of the very team back in the Juno days.
>
> As I mentioned in the past, having a propensity for attendance and a desire to
> review RFEs puts you on the right foot to join the group, whose members
> are rotated regularly so that everyone is given the opportunity to grow, and
> no-one burns out.
>
> The team [1] meets regularly on Thursdays [2], and anyone is welcome to
> attend.
>
> Please, join me in welcome Assaf to the team.
>
> Cheers,
> Armando
>
> [1]
> http://docs.openstack.org/developer/neutron/policies/neutron-teams.html#drivers-team
> [2] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [All][Neutron] Improving development and review velocity - Is it time for a drastic change?

2016-03-31 Thread Assaf Muller
Have you been negatively impacted by slow development and review
velocity? Read on.

OpenStack has had a slow review velocity for as long as I can
remember. This has a cascading effect where people take up multiple
tasks, so that they can work on something while the other is being
reviewed. This adds even more patches to ever growing queues. Features
miss releases and bugs never get fixed. Even worse, we turn away new
contributors due to an agonizing process.

In the Neutron community, we've tried a few things over the years.
Neutron's growing scope was identified and load balancing, VPN and
firewall as a service were split out to their own repositories.
Neutron core reviewers had less load, *aaS contributors could iterate
faster, it was a win win. Following that, Neutron plugins were split
off as well. Neutron core reviewers did not have the expertise or
access to specialized hardware of vendors anyway, vendors could
iterate faster, and everybody was happy. Finally, a specialization
system was created. Areas of the Neutron code base were determined and
a "Lieutenant" was chosen for each area. That lieutenant could then
nominate core reviewers, and those reviewers were then expected to +2
only within their area. This led to doubling the core team, and for my
money was a great success. Leading us to today.

Today, I think it's clear we still have a grave problem. Patches sit
idle for months, turning contributors away. I believe we've reached a
tipping point, and now is the time for out of the box thinking. I am
proposing two changes:

1) Changing what a core reviewer is. It is time to move to a system of
trust: Everyone has +2 rights to begin with, and the system
self-regulates by shaming offending individuals and eventually taking
away rights for repeated errors in judgement. I've proposed a Neutron
governance change here:

https://review.openstack.org/300271

2) Now, transform yourself six to twelve months in the future. We now
face a new problem. Patches are flying in. You're no longer working on
a dozen patches in parallel. You push up something, it is reviewed
promptly, and you move on to the next thing. Our next issue is then CI
run-time. The time it takes to test (Check queue), approve and test a
patch again (Gate queue) is simply too long. How do we cut this down?
Again, by using a proven open source methodology of trust. As
Neutron's testing lieutenant, I hereby propose that we remove the
tests. Why deal with a problem you can avoid in the first place? The
Neutron team has been putting out fires in the form of gate issues on
a weekly basis, double so late in to a release cycle. The gate has so
many false negatives, the tests are riddled with race conditions,
we've clearly failed to get testing right. Needless to say, my
proposal keeps pep8 in place. We all know how important a consistent
style is. I've proposed a patch that removes Neutron's tests here:

https://review.openstack.org/300272

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are we ready?

2016-03-24 Thread Assaf Muller
On Thu, Mar 24, 2016 at 1:48 AM, Takashi Yamamoto  wrote:
> On Thu, Mar 24, 2016 at 6:17 AM, Doug Wiegley
>  wrote:
>> Migration script has been submitted, v1 is not going anywhere from 
>> stable/liberty or stable/mitaka, so it’s about to disappear from master.
>>
>> I’m thinking in this order:
>>
>> - remove jenkins jobs
>> - wait for heat to remove their jenkins jobs ([heat] added to this thread, 
>> so they see this coming before the job breaks)
>
> magnum is relying on lbaasv1.  (with heat)

Is there anything blocking you from moving to v2?

>
>> - remove q-lbaas from devstack, and any references to lbaas v1 in 
>> devstack-gate or infra defaults.
>> - remove v1 code from neutron-lbaas
>>
>> Since newton is now open for commits, this process is going to get started.
>>
>> Thanks,
>> doug
>>
>>
>>
>>> On Mar 8, 2016, at 11:36 AM, Eichberger, German  
>>> wrote:
>>>
>>> Yes, it’s Database only — though we changed the agent driver in the DB from 
>>> V1 to V2 — so if you bring up a V2 with that database it should reschedule 
>>> all your load balancers on the V2 agent driver.
>>>
>>> German
>>>
>>>
>>>
>>>
>>> On 3/8/16, 3:13 AM, "Samuel Bercovici"  wrote:
>>>
 So this looks like only a database migration, right?

 -Original Message-
 From: Eichberger, German [mailto:german.eichber...@hpe.com]
 Sent: Tuesday, March 08, 2016 12:28 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
 weready?

 Ok, for what it’s worth we have contributed our migration script: 
 https://review.openstack.org/#/c/289595/ — please look at this as a 
 starting point and feel free to fix potential problems…

 Thanks,
 German




 On 3/7/16, 11:00 AM, "Samuel Bercovici"  wrote:

> As far as I recall, you can specify the VIP in creating the LB so you 
> will end up with same IPs.
>
> -Original Message-
> From: Eichberger, German [mailto:german.eichber...@hpe.com]
> Sent: Monday, March 07, 2016 8:30 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
> weready?
>
> Hi Sam,
>
> So if you have some 3rd party hardware you only need to change the
> database (your steps 1-5) since the 3rd party hardware will just keep
> load balancing…
>
> Now for Kevin’s case with the namespace driver:
> You would need a 6th step to reschedule the loadbalancers with the V2 
> namespace driver — which can be done.
>
> If we want to migrate to Octavia or (from one LB provider to another) it 
> might be better to use the following steps:
>
> 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, Health
> Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 3.
> Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON format
> file into some scripts which recreate the load balancers with your
> provider of choice —
>
> 6. Run those scripts
>
> The problem I see is that we will probably end up with different VIPs
> so the end user would need to change their IPs…
>
> Thanks,
> German
>
>
>
> On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:
>
>> As for a migration tool.
>> Due to model changes and deployment changes between LBaaS v1 and LBaaS 
>> v2, I am in favor for the following process:
>>
>> 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
>> Health Monitors , Members) into some JSON format file(s) 2. Delete LBaaS 
>> v1 3.
>> Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1 back
>> over LBaaS v2 (need to allow moving from falvor1-->flavor2, need to
>> make room to some custom modification for mapping between v1 and v2
>> models)
>>
>> What do you think?
>>
>> -Sam.
>>
>>
>>
>>
>> -Original Message-
>> From: Fox, Kevin M [mailto:kevin@pnnl.gov]
>> Sent: Friday, March 04, 2016 2:06 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
>> weready?
>>
>> Ok. Thanks for the info.
>>
>> Kevin
>> 
>> From: Brandon Logan [brandon.lo...@rackspace.com]
>> Sent: Thursday, March 03, 2016 2:42 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [Neutron][LBaaS]Removing LBaaS v1 - are 
>> weready?
>>
>> Just for clarity, V2 did not reuse tables, all the tables it uses are 
>> only for it.  The main problem is that v1 and 

Re: [openstack-dev] [neutron][stable] proactive backporting

2016-03-23 Thread Assaf Muller
On Wed, Mar 23, 2016 at 12:52 PM, Ihar Hrachyshka  wrote:
> Hey folks,
>
> some update on proactive backporting for neutron, and a call for action from
> subteam leaders.
>
> As you probably know, lately we started to backport a lot of bug fixes in
> latest stable branch (liberty atm) + became more systematic in getting High+
> bug fixes into older stable branch (kilo atm).
>
> I work on some tooling lately to get the process a bit more straight:
>
> https://review.openstack.org/#/q/project:openstack-infra/release-tools+owner:%22Ihar+Hrachyshka+%253Cihrachys%2540redhat.com%253E%22
>
> I am at the point where I can issue a single command and get the list of
> bugs fixed in master since previous check, with Wishlist bugs filtered out
> [since those are not applicable for backporting]. The pipeline looks like:
>
> ./bugs-fixed-since.py neutron  | ./lp-filter-bugs-by-importance.py
> --importance=Wishlist neutron | ./get-lp-links.py

Kudos on the new tooling; this will make at least part of the process easier.

>
> For Kilo, we probably also need to add another filter for Low impact bugs:
>
> ./lp-filter-bugs-by-importance.py --importance=Low neutron
>
> There are more ideas on how to automate the process (specifically, kilo
> backports should probably be postponed till Liberty patches land and be
> handled in a separate workflow pipeline since old-stable criteria are
> different; also, the pipeline should fully automate ‘easy' backport
> proposals, doing cherry-pick and PS upload for the caller).
>
> However we generate the list of backport candidates, in the end the bug list
> is briefly triaged and categorized and put into the etherpad:
>
> https://etherpad.openstack.org/p/stable-bug-candidates-from-master
>
> I backport some fixes that are easy to cherry-pick myself. (easy == with a
> press of a button in gerrit UI)
>
> Still, we have a lot of backport candidates that require special attention
> in the etherpad.
>
> I ask folks that cover specific topics in our community (f.e. Assaf for
> testing; Carl and Oleg for DVR/L3; John for IPAM; etc.) to look at the
> current list, book some patches for your subteams to backport, and make sure
> the fixes land in stable.
>
> Note that the process generates a lot of traffic on stable branches, and
> that’s why we want more frequent releases. We can’t achieve that on kilo
> since kilo stable is still in the integrated release mode, but starting from
> Liberty we should release more often. It’s on my todo to document release
> process in neutron devref.
>
> For your reference, it’s just a matter of calling inside openstack/releases
> repo:
>
> ./tools/new_release.sh liberty neutron bugfix
>
> FYI I just posted a new Liberty release patch at:
> https://review.openstack.org/296608
>
> Thanks for attention,

Ideally, proactive backporting will continue for a long time by being
self-sufficient, and that means we get buy-in from a sufficiently
large group of people in the Neutron community and obtain critical
mass. I think the incentive is there - assuming you take part in
delivering OpenStack based on a stable branch, you want that branch as
bug-free as possible so that you don't have to put out fires as people
report them; rather, you prevent issues before they happen. This is
much cheaper in the long run for everyone involved.

>
>
> Ihar Hrachyshka  wrote:
>
>> Ihar Hrachyshka  wrote:
>>
>>> Rossella Sblendido  wrote:
>>>
 Hi,

 thanks Ihar for the etherpad and for raising this point.
 .


 On 12/18/2015 06:18 PM, Ihar Hrachyshka wrote:
>
> Hi all,
>
> just wanted to note that the etherpad page [1] with backport candidates
> has a lot of work for those who have cycles for backporting relevant
> pieces to Liberty (and Kilo for High+ bugs), so please take some on
> your
> plate and propose backports, then clean up from the page. And please
> don’t hesitate to check the page for more worthy patches in the future.
>
> It can’t be a one man army if we want to run the initiative in long
> term.


 I completely agree, it can't be one man army.
 I was thinking that maybe we can be even more proactive.
 How about adding, as a requirement for a bug fix to be merged, that
 backports to the relevant branches be proposed? I think that could help
>>>
>>>
>>> I don’t think it will work. First, not everyone should be required to
>>> care about stable branches. It’s my belief that we should avoid formal
>>> requirements that mechanically offload burden from stable team to those who
>>> can’t possibly care less about master.
>>
>>
>> Of course I meant ‘about stable branches’.
>>
>> Ihar
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 

Re: [openstack-dev] [Neutron]Debug with Pycharm

2016-03-23 Thread Assaf Muller
On Tue, Mar 22, 2016 at 11:36 PM, Nguyen Hoai Nam  wrote:
> Hi everybody,
> Have you configured PyCharm to debug the Neutron project? I configured it
> but it's not working. If you have any archive, could you please share it
> with OpenStackers?

http://lists.openstack.org/pipermail/openstack-dev/2014-June/036988.html

>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-22 Thread Assaf Muller
On Tue, Mar 22, 2016 at 9:31 AM, Kevin Benton  wrote:
> Thanks for doing this. I dug into the test_volume_boot_pattern test to see
> what was going on.
>
> On the first boot, Nova called Neutron to create a port at 23:29:44 and it
> took 441ms to return the port to Nova.[1]
> Nova then plugged the interface for that port into OVS a little over 6
> seconds later at 23:29:50.[2]
> The Neutron agent attempted to process this on the iteration at 23:29:52
> [3]; however, it didn't get the ofport populated from the OVSDB monitor... a
> bug![4] The Neutron agent did catch it on the next iteration two seconds
> later on a retry and notified the Neutron server at 23:29:54.[5]

Good work as usual Kevin, just approved the fix to this bug.

> The Neutron server processed the port ACTIVE change in just under 80ms[6],
> but it did not dispatch the notification to Nova until 2 seconds later at
> 23:29:56 [7] due to the Nova notification batching mechanism[8].
>
> Total time between port create and boot is about 12 seconds. 6 in Nova and 6
> in Neutron.
>
> For the Neutron side, the bug fix should eliminate 2 seconds. We could
> probably make the Nova notifier batching mechanism a little more aggressive
> so it only batches up calls in a very short interval rather than making 2
> second buckets at all times. The remaining 2 seconds is just the agent
> processing loop interval, which can be tuned with a config but it should be
> okay if that's the only bottleneck.
>
> For Nova, we need to improve that 6 seconds after it has created the Neutron
> port before it has plugged it into the vswitch. I can see it makes some
> other calls to Neutron in this time to list security groups and floating
> IPs. Maybe this can be done asynchronously because I don't think they should
> block the initial VM boot step that plugs in the VIF.
>
> Completely unrelated to the boot process, the entire tempest run spent ~412
> seconds building and destroying Neutron resources in setup and teardown.[9]
> However, considering the number of tests executed, this seems reasonable so
> I'm not sure we need to work on optimizing that yet.
>
>
> Cheers,
> Kevin Benton
>
>
> 1.
> http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-svc.txt.gz#_2016-03-21_23_29_45_341
> 2.
> http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-n-cpu.txt.gz#_2016-03-21_23_29_50_629
> 3.
> http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-agt.txt.gz#_2016-03-21_23_29_52_216
> 4. https://bugs.launchpad.net/neutron/+bug/1560464
> 5.
> http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-agt.txt.gz#_2016-03-21_23_29_54_738
> 6.
> http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-svc.txt.gz#_2016-03-21_23_29_54_813
> 7.
> http://logs.openstack.org/87/295487/1/check/gate-tempest-dsvm-neutron-full/5022853/logs/screen-q-svc.txt.gz#_2016-03-21_23_29_56_782
> 8.
> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/notifiers/nova.py
> 9. egrep -R 'tearDown|setUp' tempest.txt.gz | grep 9696 | awk '{print
> $(NF)}' | ./fsum
>
> On Mon, Mar 21, 2016 at 5:09 PM, Clark Boylan  wrote:
>>
>> On Mon, Mar 21, 2016, at 01:23 PM, Sean Dague wrote:
>> > On 03/21/2016 04:09 PM, Clark Boylan wrote:
>> > > On Mon, Mar 21, 2016, at 11:49 AM, Clark Boylan wrote:
>> > >> On Mon, Mar 21, 2016, at 11:08 AM, Clark Boylan wrote:
>> > >>> On Mon, Mar 21, 2016, at 09:32 AM, Armando M. wrote:
>> >  Do you have a better insight into job runtimes vs jobs in other
>> >  projects?
>> >  Most of the time in the job runtime is actually spent setting the
>> >  infrastructure up, and I am not sure we can do anything about it,
>> >  unless we take this up with Infra.
>> > >>>
>> > >>> I haven't done a comparison yet but let's break down the runtime of a
>> > >>> recent successful neutron full run against neutron master [0].
>> > >>
>> > >> And now for some comparative data from the gate-tempest-dsvm-full job
>> > >> [0]. This job also ran against a master change that merged and ran in
>> > >> the same cloud and region as the neutron job.
>> > >>
>> > > snip
>> > >> Generally each step of this job was quicker. There were big
>> > >> differences
>> > >> in devstack and tempest run time though. Is devstack much slower to
>> > >> setup neutron when compared to nova net? For tempest it looks like we
>> > >> run ~1510 tests against neutron and only ~1269 against nova net. This
>> > >> may account for the large difference there. I also recall that we run
>> > >> ipv6 tempest tests against neutron deployments that were inefficient
>> > >> and
>> > >> booted 2 qemu VMs per test (not sure if that is still the case but
>> > >> illustrates that the tests themselves may not be very quick in the
>> >> neutron case).

Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Assaf Muller
On Mon, Mar 21, 2016 at 9:26 PM, Clark Boylan <cboy...@sapwetik.org> wrote:
> On Mon, Mar 21, 2016, at 06:15 PM, Assaf Muller wrote:
>> On Mon, Mar 21, 2016 at 8:09 PM, Clark Boylan <cboy...@sapwetik.org>
>> wrote:
>> > On Mon, Mar 21, 2016, at 01:23 PM, Sean Dague wrote:
>> >> On 03/21/2016 04:09 PM, Clark Boylan wrote:
>> >> > On Mon, Mar 21, 2016, at 11:49 AM, Clark Boylan wrote:
>> >> >> On Mon, Mar 21, 2016, at 11:08 AM, Clark Boylan wrote:
>> >> >>> On Mon, Mar 21, 2016, at 09:32 AM, Armando M. wrote:
>> >> >>>> Do you have a better insight into job runtimes vs jobs in other
>> >> >>>> projects?
>> >> >>>> Most of the time in the job runtime is actually spent setting the
>> >> >>>> infrastructure up, and I am not sure we can do anything about it,
>> >> >>>> unless we take this up with Infra.
>> >> >>>
>> >> >>> I haven't done a comparison yet but let's break down the runtime of a
>> >> >>> recent successful neutron full run against neutron master [0].
>> >> >>
>> >> >> And now for some comparative data from the gate-tempest-dsvm-full job
>> >> >> [0]. This job also ran against a master change that merged and ran in
>> >> >> the same cloud and region as the neutron job.
>> >> >>
>> >> > snip
>> >> >> Generally each step of this job was quicker. There were big differences
>> >> >> in devstack and tempest run time though. Is devstack much slower to
>> >> >> setup neutron when compared to nova net? For tempest it looks like we
>> >> >> run ~1510 tests against neutron and only ~1269 against nova net. This
>> >> >> may account for the large difference there. I also recall that we run
>> >> >> ipv6 tempest tests against neutron deployments that were inefficient 
>> >> >> and
>> >> >> booted 2 qemu VMs per test (not sure if that is still the case but
>> >> >> illustrates that the tests themselves may not be very quick in the
>> >> >> neutron case).
>> >> >
>> >> > Looking at the tempest slowest tests output for each of these jobs
>> >> > (neutron and nova net) some tests line up really well across jobs and
>> >> > others do not. In order to get a better handle on the runtime for
>> >> > individual tests I have pushed https://review.openstack.org/295487 which
>> >> > will run tempest serially reducing the competition for resources between
>> >> > tests.
>> >> >
>> >> > Hopefully the subunit logs generated by this change can provide more
>> >> > insight into where we are losing time during the tempest test runs.
>> >
>> > The results are in, we have gate-tempest-dsvm-full [0] and
>> > gate-tempest-dsvm-neutron-full [1] job results where tempest ran
>> > serially to reduce resource contention and provide accurateish per test
>> > timing data. Both of these jobs ran on the same cloud so should have
>> > comparable performance from the underlying VMs.
>> >
>> > gate-tempest-dsvm-full
>> > Time spent in job before tempest: 700 seconds
>> > Time spent running tempest: 2428
>> > Tempest tests run: 1269 (113 skipped)
>> >
>> > gate-tempest-dsvm-neutron-full
>> > Time spent in job before tempest: 789 seconds
>> > Time spent running tempest: 4407 seconds
>> > Tempest tests run: 1510 (76 skipped)
>> >
>> > All times above are wall time as recorded by Jenkins.
>> >
>> > We can also compare the 10 slowest tests in the non neutron job against
>> > their runtimes in the neutron job. (note this isn't a list of the top 10
>> > slowest tests in the neutron job because that job runs extra tests).
>> >
>> > nova net job
>> > tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern  85.232
>> > tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern  83.319
>> > tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_volume_backed_instance  50.338
>> > tempest.scenario.test_snapshot_pat

Re: [openstack-dev] [neutron] CI jobs take pretty long, can we improve that?

2016-03-21 Thread Assaf Muller
On Mon, Mar 21, 2016 at 8:09 PM, Clark Boylan  wrote:
> On Mon, Mar 21, 2016, at 01:23 PM, Sean Dague wrote:
>> On 03/21/2016 04:09 PM, Clark Boylan wrote:
>> > On Mon, Mar 21, 2016, at 11:49 AM, Clark Boylan wrote:
>> >> On Mon, Mar 21, 2016, at 11:08 AM, Clark Boylan wrote:
>> >>> On Mon, Mar 21, 2016, at 09:32 AM, Armando M. wrote:
>>  Do you have a better insight into job runtimes vs jobs in other
>>  projects?
>>  Most of the time in the job runtime is actually spent setting the
>>  infrastructure up, and I am not sure we can do anything about it, unless
>>  we take this up with Infra.
>> >>>
>> >>> I haven't done a comparison yet but let's break down the runtime of a
>> >>> recent successful neutron full run against neutron master [0].
>> >>
>> >> And now for some comparative data from the gate-tempest-dsvm-full job
>> >> [0]. This job also ran against a master change that merged and ran in
>> >> the same cloud and region as the neutron job.
>> >>
>> > snip
>> >> Generally each step of this job was quicker. There were big differences
>> >> in devstack and tempest run time though. Is devstack much slower to
>> >> setup neutron when compared to nova net? For tempest it looks like we
>> >> run ~1510 tests against neutron and only ~1269 against nova net. This
>> >> may account for the large difference there. I also recall that we run
>> >> ipv6 tempest tests against neutron deployments that were inefficient and
>> >> booted 2 qemu VMs per test (not sure if that is still the case but
>> >> illustrates that the tests themselves may not be very quick in the
>> >> neutron case).
>> >
>> > Looking at the tempest slowest tests output for each of these jobs
>> > (neutron and nova net) some tests line up really well across jobs and
>> > others do not. In order to get a better handle on the runtime for
>> > individual tests I have pushed https://review.openstack.org/295487 which
>> > will run tempest serially reducing the competition for resources between
>> > tests.
>> >
>> > Hopefully the subunit logs generated by this change can provide more
>> > insight into where we are losing time during the tempest test runs.
>
> The results are in, we have gate-tempest-dsvm-full [0] and
> gate-tempest-dsvm-neutron-full [1] job results where tempest ran
> serially to reduce resource contention and provide accurateish per test
> timing data. Both of these jobs ran on the same cloud so should have
> comparable performance from the underlying VMs.
>
> gate-tempest-dsvm-full
> Time spent in job before tempest: 700 seconds
> Time spent running tempest: 2428
> Tempest tests run: 1269 (113 skipped)
>
> gate-tempest-dsvm-neutron-full
> Time spent in job before tempest: 789 seconds
> Time spent running tempest: 4407 seconds
> Tempest tests run: 1510 (76 skipped)
>
> All times above are wall time as recorded by Jenkins.
>
> We can also compare the 10 slowest tests in the non neutron job against
> their runtimes in the neutron job. (note this isn't a list of the top 10
> slowest tests in the neutron job because that job runs extra tests).
>
> nova net job
> tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern  85.232
> tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern  83.319
> tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_volume_backed_instance  50.338
> tempest.scenario.test_snapshot_pattern.TestSnapshotPattern.test_snapshot_pattern  43.494
> tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario  40.225
> tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance  39.653
> tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV1Test.test_volume_backup_create_get_detailed_list_restore_delete  37.720
> tempest.api.volume.admin.test_volumes_backup.VolumesBackupsV2Test.test_volume_backup_create_get_detailed_list_restore_delete  36.355
> tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_confirm_from_stopped  27.375
> tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_luks  27.025
>
> neutron job
> tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern  110.345
> tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern  108.170
> tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_volume_backed_instance  63.852
> tempest.scenario.test_shelve_instance.TestShelveInstance.test_shelve_instance

Re: [openstack-dev] [QA][all] Propose to remove negative tests from Tempest

2016-03-19 Thread Assaf Muller
On Wed, Mar 16, 2016 at 10:41 PM, Jim Rollenhagen wrote:
> On Wed, Mar 16, 2016 at 06:20:11PM -0700, Ken'ichi Ohmichi wrote:
>> Hi
>>
>> I have one proposal[1] related to negative tests in Tempest, and I am
>> hoping for opinions before doing that.
>>
>> Now Tempest contains negative tests and sometimes patches are being
>> posted for adding more negative tests, but I'd like to propose
>> removing them from Tempest instead.
>>
>> Negative tests verify the surface of each component's REST API without
>> any integration between components. That doesn't seem like integration
>> testing, which is the scope of Tempest.
>> In addition, if we add negative tests to Tempest, we spend test run time
>> on other components' gates. For example, we are running negative tests
>> of Keystone and other components on the gate of Nova. That is
>> meaningless, so we need to avoid adding more negative tests to Tempest
>> now.
>>
>> If we want to add negative tests, a nice option is to implement these
>> tests in each component's repo with the Tempest plugin interface. We
>> can avoid running negative tests on other components' gates, and each
>> component team can decide which negative tests are valuable on its
>> gate.
>>
>> In the long term, all negative tests would be migrated into each
>> component's repo with the Tempest plugin interface. We would then run
>> only the valuable negative tests on each gate.
>
> So, positive tests in tempest, negative tests as a plugin.
>
> Is there any longer term goal to have all tests for all projects in a
> plugin for that project? Seems odd to separate them.

I'd love to see this idea explored further. What happens if Tempest
ends up without tests of its own, serving as a library for shared code
as well as a centralized place to run tests from via plugins?
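For what it's worth, wiring a project's tests into Tempest as a plugin is
already pretty lightweight. A minimal sketch (the project names are made up;
the plugin interface is the tempest.test_discover one from this era):

    import os

    from tempest.test_discover import plugins


    class MyProjectTempestPlugin(plugins.TempestPlugin):
        def load_tests(self):
            base_path = os.path.split(os.path.dirname(
                os.path.abspath(__file__)))[0]
            test_dir = "my_project/tests/tempest"
            return os.path.join(base_path, test_dir), base_path

        def register_opts(self, conf):
            pass  # project-specific config options would be registered here

        def get_opt_lists(self):
            return []

The class is then exposed through a 'tempest.test_plugins' entry point in the
project's setup.cfg.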

>
> // jim
>
>>
>> Any thoughts?
>>
>> Thanks
>> Ken Ohmichi
>>
>> ---
>> [1]: https://review.openstack.org/#/c/293197/
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [neutron-lib] adding a new flag to neutron-lib

2016-03-14 Thread Assaf Muller
On Mon, Mar 14, 2016 at 1:27 PM, Gary Kotton  wrote:
> Hi,
> It would be nice if we could get a few more reviews in neutron-lib.
> I think that we should maybe strive to cut a new version prior to the
> release.
> Thanks
> Gary
>
> On 3/14/16, 6:17 PM, "Venkata Anil"  wrote:
>
>>Hi All
>>
>>  I have added a new flag in neutron-lib
>>https://review.openstack.org/#/c/291641/3/neutron_lib/constants.py
>>  and wanted to use that flag in neutron's change
>>https://review.openstack.org/#/c/291651/
>>  How do I add the dependency?
>>
>>  I added the neutron-lib change to the neutron change's commit message as
>>"Depends-On:".
>>  Also added the change suggested in
>>https://wiki.openstack.org/wiki/Neutron/Lib as "Depends-On:" in the commit
>>message. But the neutron change still does not resolve the flag added in
>>neutron-lib.

On a somewhat related note, I'd love to pick the brains of the people
involved as to why a separate repo was chosen over placing the code
in the main Neutron repo (under, say, neutron.lib). It would mean
easier/faster movement of code into the lib, thus faster velocity and
adoption. As it is, it requires one patch to move something into
neutron_lib (with a debtcollector shim), releasing a new version of it to
PyPI, then another patch to use the code in Neutron. The Tempest
community moved from tempest_lib to tempest.lib very recently, but I
don't know all of the motivations behind that move; I can imagine the
reason I stated above was also on their minds.
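To make the overhead concrete, the first of those patches typically boils
down to something like this (a sketch only, with made-up names; the real
moves use debtcollector in a similar spirit):

    # In neutron, after the code itself has been copied to neutron-lib:
    # keep the old name importable, but emit a deprecation warning.
    from debtcollector import moves

    from neutron_lib import some_module

    some_helper = moves.moved_function(
        some_module.some_helper, 'some_helper', __name__,
        message='moved to neutron_lib.some_module')

Only after that lands, a new neutron-lib version is released to PyPI and the
requirements are bumped, can the second patch switch callers over.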

>>
>>Thanks
>>Anil
>>
>>
>>
>>
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][oslo]neutron agent not receiving callback

2016-03-09 Thread Assaf Muller
On Wed, Mar 9, 2016 at 9:40 AM, Ihar Hrachyshka  wrote:
> Vikash Kumar  wrote:
>
>>
>>
>> On Wed, Mar 9, 2016 at 3:42 PM, Vikash Kumar
>>  wrote:
>> I have written a sample neutron agent which subscribes to the AFTER_CREATE
>> event of the router resource. I have defined a sample method as a callback,
>> but the method never gets called.
>>
>> Also, in logs:
>>
>> 2016-03-09 01:36:08.220 7075 DEBUG neutron.callbacks.manager [-]
>> Subscribe:  router after_create
>> subscribe /opt/stack/neutron/neutron/callbacks/manager.py:41
>>
>>
>> which means the subscription is successful.
>>
>>    Do I need to enable anything in the config file to get that? Or am I
>> missing something?
>
>
> First, nothing oslo specific is discussed here, so [oslo] tag is probably
> redundant.
>
> Overall, I believe you are trying to rely on the wrong thing, one that won’t
> deliver for you: callbacks are internal to neutron-server, so events triggered
> by neutron-server will never reach any other process (like your agent).

The same callbacks mechanism is also used in the L3 agent, but as Ihar
said, events in one process (neutron-server) will not trigger
callbacks in another process (l3-agent). If that's what you want,
you'll need RPC.
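For completeness, this is roughly what an in-process subscription looks like
(a sketch based on the devref; the kwargs vary by notifier). The callback runs
in whichever process calls registry.notify, i.e. neutron-server in this case:

    from neutron.callbacks import events
    from neutron.callbacks import registry
    from neutron.callbacks import resources


    def router_created(resource, event, trigger, **kwargs):
        # This executes inside neutron-server, not in a separate agent.
        print('router created: %s' % kwargs)

    registry.subscribe(router_created, resources.ROUTER, events.AFTER_CREATE)

To react to the same event from your own agent process, you would expose an
RPC endpoint and have the server-side callback (or an existing notifier) call
it.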

>
> More info: http://docs.openstack.org/developer/neutron/devref/callbacks.html
>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mitaka, Xenial, OVS Firewall Driver, DPDK, VXLAN and Provider Networks

2016-02-28 Thread Assaf Muller
On Sat, Feb 27, 2016 at 6:55 PM, Martinx - ジェームズ wrote:
> Hey guys!
>
>  Next Ubuntu and Mitaka are promising something ultra mega cool!
>
>  Look at this!
>
> ---
> root@mitaka-1:~# apt install neutron-openvswitch-agent
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> The following additional packages will be installed:
>   dpdk libdpdk0 openvswitch-common openvswitch-switch
> ---
>
>  Xenial will bring DPDK 2.2, fully supported for 5 years!
>
>  However, I am curious about the following scenarios:
>
>  Will it be possible to use, at the same time (same Network and Compute nodes /
> Host Aggregate):
>
>  1- Regular OVS bridges without DPDK for VXLAN Networks, with
> OVS-Firewall-Driver and;
>
>  2- OVS powered by DPDK for Provider Networks only ( without any firewall,
> current case anyway, due to
> https://bugs.launchpad.net/neutron/+bug/1531205).

Currently, a host may run a single OVS agent, configured for either
regular OVS or OVS-DPDK. You cannot run both on a single host. You can
mix and match between different hosts though. It is something we
discussed a bit, but no concrete plans to change this at this time.

We could support this by allowing an OVS agent to support two
datapaths simultaneously by configuring two integration bridges, each
with its own type. We would add a DPDK VNIC type so Nova would plug
the VNIC into the correct bridge. Each integration bridge would have its
own bridge mappings (the kernel datapath integration bridge would be
connected to br-tun or to a VLAN bridge, and the DPDK datapath
integration bridge would be connected to its own set of VLAN provider
bridges). Another way to accomplish this use case is to start two OVS
agents on the same host, each configured appropriately, but we'd need
to make changes to ML2 to support this, perhaps differentiating between
the two agents via an agent_type and binding ports appropriately. Again,
we'd need a new VNIC type for DPDK ports.
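To be clear about what mixing per host looks like today, the difference is
essentially in each node's OVS agent configuration; roughly (option names as
I remember them from the Mitaka-era agent, so double-check before relying on
them):

    # Kernel OVS node - openvswitch_agent.ini
    [ovs]
    datapath_type = system
    bridge_mappings = physnet1:br-provider

    # OVS-DPDK node - openvswitch_agent.ini
    [ovs]
    datapath_type = netdev
    vhostuser_socket_dir = /var/run/openvswitch
    bridge_mappings = physnet1:br-provider

Both kinds of nodes can coexist in one cloud; the limitation is only that a
single host runs one agent with one datapath type.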

>
> ?
>
>  I have NFV Instances that are also DPDK L2 Bridges, running on KVM Guest /
> VirtIO, that are physically wired using Provider Networks (flat and VLANs).
>
>  So, for the Instance's vNICs (eth1 and eth2) that are used as a L2 bridge,
> I don't want any kind of ovs-firewall (I'm not affected by LP #1531205 on
> this case) and I want OVS+DPDK under it but, for SSH into the Instance to
> manage it (via its eth0), it is still using regular VXLAN with Security
> Groups - OVS-Firewall from now on (no need for DPDK under eth0 / VXLAN).
>
>  I'm curious about this especially because the Ubuntu OVS package makes use
> of Debian's Alternatives subsystem, and we need to choose one OVS (default)
> or another (with DPDK) via "update-alternatives"; so, will it be possible to
> select OVS with DPDK but use regular bridges with it as well (for VXLAN
> networks)?
>
>  If yes, how to create a VXLAN network with regular OVS and another
> FLAT/VLAN network with OVS+DPDK ?
>
>  Thanks in advance!
>
> Best,
> Thiago
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Check queue broken on master

2016-02-17 Thread Assaf Muller
On Wed, Feb 17, 2016 at 2:16 PM, Armando M.  wrote:
>
>
> On 17 February 2016 at 11:12, Armando M.  wrote:
>>
>> Hi folks,
>>
>> It looks like something slipped in and now we have persistent failures on
>> functional/fullstack jobs [1]. Has anyone triaged? I couldn't find anything
>> in [2].
>
>
> Looks like [1] fixed it. Thanks Assaf.

You mean thanks Jakub!

>
> Be safe outta there. It's a scary world.
>
>
> [1] https://bugs.launchpad.net/neutron/+bug/1546506
>
>>
>> The effect of this: we can't merge anything until this gets resolved.
>> Some might argue this is not necessarily a bad thing...
>>
>> Cheers,
>> Armando
>>
>> [1]
>> http://docs.openstack.org/developer/neutron/dashboards/check.dashboard.html
>> [2]
>> https://bugs.launchpad.net/neutron/+bugs?field.tag=gate-failure=-datecreated=0
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest]Run only multinode tests in multinode jobs

2016-02-16 Thread Assaf Muller
On Tue, Feb 16, 2016 at 2:52 PM, Matthew Treinish  wrote:
> On Tue, Feb 16, 2016 at 10:07:19AM +0100, Jordan Pittier wrote:
>> Hi list,
>> I understood we need to limit the number of tests and jobs that are run for
>> each Tempest patch because our resources are not unlimited.
>>
>> In Tempest, we have 5 multinode experimental jobs:
>>
>> experimental-tempest-dsvm-multinode-full-dibtest
>> gate-tempest-dsvm-multinode-full
>> gate-tempest-dsvm-multinode-live-migration
>> gate-tempest-dsvm-neutron-multinode-full
>> gate-tempest-dsvm-neutron-dvr-multinode-full
>>
>> These jobs largely overlap with the non-multinode jobs. What about tagging
>> (with a python decorator) each test that really requires multiple nodes and
>> only run those tests as part of the multinode jobs ?
>
> So I don't think this is wise. I'm fine with adding a tag (or more
> realistically a new decorator that sets the attr and bakes in the skip checks)
> to mark tests that require more than 1 node to work. But, limiting all the
> multinode jobs to just that set doesn't make too much sense to me. For most of
> those jobs you listed the point is to verify that things work the
> same at >1 node, not just features that require more than 1 node. (with likely
> the exception of the live-migration job which I assume just runs live 
> migration
> tests)
>
> What is probably a better question to ask is why we need 5 different 
> multi-node
> jobs in the tempest experimental queue? Tempest will always have a higher than
> average number of tempest-dsvm jobs running because so much of the code is 
> self
> verifying. But, do all of those jobs really improve our coverage of tempest
> code? Like what does the dibtest job buy us? Or why do we need 2 different
> types of neutron deployments running?

I can't speak for the other three jobs, but these two:
gate-tempest-dsvm-neutron-multinode-full
gate-tempest-dsvm-neutron-dvr-multinode-full

Are both in the check queue and are non-voting. Both have been hovering
around a 50% failure rate for a while now. Ihar and Sean (CC'd) are
working on the non-DVR job, solving issues around MTU.

>
> -Matt Treinish
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest]Run only multinode tests in multinode jobs

2016-02-16 Thread Assaf Muller
On Tue, Feb 16, 2016 at 12:26 PM, Clark Boylan  wrote:
> On Tue, Feb 16, 2016, at 01:07 AM, Jordan Pittier wrote:
>> Hi list,
>> I understood we need to limit the number of tests and jobs that are run
>> for
>> each Tempest patch because our resources are not unlimited.
>>
>> In Tempest, we have 5 multinode experimental jobs:
>>
>> experimental-tempest-dsvm-multinode-full-dibtest
>> gate-tempest-dsvm-multinode-full
>> gate-tempest-dsvm-multinode-live-migration
>> gate-tempest-dsvm-neutron-multinode-full
>> gate-tempest-dsvm-neutron-dvr-multinode-full
>>
>> These jobs largely overlap with the non-multinode jobs. What about
>> tagging
>> (with a python decorator) each test that really requires multiple nodes
>> and
>> only run those tests as part of the multinode jobs ?
>
> One of the goals I had was to hopefully replace the single node jobs
> with the multinode jobs because as you point out there is a lot of
> redundancy and 2 VMs < 3 VMs. One of the prerequisites for this to
> happen is to have an easy way to reproduce the multinode test envs using
> something like vagrant. I have been meaning to work on that this cycle
> but adding new cloud resources (and keeping existing resources happy)
> have taken priority.

These are not conflicting efforts, are they? We could attack it on
both fronts: send a patch that tags a dozen (?) or so tests with
'multinode' and run only those in the multinode jobs. You could accomplish
that almost immediately. In parallel, work on replacing the single-node
jobs with multinode ones (and then change their test regex from
'multinode' back to full).
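Something like this would be enough on the tagging side (a sketch only; it
leans on tempest's attr mechanism and the compute.min_compute_nodes option,
so treat the exact names as assumptions):

    import functools

    from tempest import config
    from tempest import test

    CONF = config.CONF


    def multinode(f):
        # Tag the test and skip it on single-compute deployments.
        @test.attr(type='multinode')
        @functools.wraps(f)
        def wrapper(self, *args, **kwargs):
            if CONF.compute.min_compute_nodes < 2:
                raise self.skipException('test requires >= 2 compute nodes')
            return f(self, *args, **kwargs)
        return wrapper

The multinode jobs could then select those tests by the attr tag in their
regex, while the full jobs keep running everything.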

>
> Clark
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-12 Thread Assaf Muller
On Fri, Feb 12, 2016 at 8:43 AM, Ihar Hrachyshka  wrote:
> Eoghan Glynn  wrote:
>
>>
>>
 [...]
   * much of the problem with the lavish parties is IMO related to the
 *exclusivity* of certain shindigs, as opposed to devs socializing at
 summit being inappropriate per se. In that vein, I think the cores
 party sends the wrong message and has run its course, while the TC
 dinner ... well, maybe Austin is the time to show some leadership
 on that? ;)
>>>
>>>
>>> Well, Tokyo was the time to show some leadership on that -- there was no
>>> "TC dinner" there :)
>>
>>
>> Excellent, that is/was indeed a positive step :)
>>
>> For the cores party, much as I enjoyed the First Nation cuisine in
>> Vancouver
>> or the performance art in Tokyo, IMO it's probably time to draw a line
>> under
>> that excess also, as it too projects a notion of exclusivity that runs
>> counter
>> to building a community.
>
>
> A lot of people I care about ignore the core reviewer party for those exact
> reasons: because it’s too elitist and divisive. I agree with them, and I
> ignore the party. I suggest everyone does the same.

I 'boycott' (Kind of a strong word since nobody cares in the first
place) the core party for the same reasons.

>
> Ihar
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Request to do stable point releases more often if there are going to be a lot of backports

2016-02-09 Thread Assaf Muller
On Tue, Feb 9, 2016 at 5:10 PM, Matt Riedemann wrote:
> While reviewing the neutron 7.0.2 stable/liberty point release, I noticed
> there were a lot of changes since 7.0.1. [1]
>
> There are 48 non-merge commits by my count.
>
> While there is no rule about how many backports should land before we cut a
> point release, it would be easier on reviewers of the release request if
> it was fewer than 48. :)
>
> I think the Neutron team is by far the most active in backporting changes to
> stable, which is good.

Side note: The reason for this is that we decided to adopt a proactive
backporting approach:
http://lists.openstack.org/pipermail/openstack-dev/2015-October/077236.html

I'm seeing positive results and I would encourage other projects to do the same.

> We might want to consider releasing more often though
> if the backport rate is going to be this high.
>
> I'd also be interested in hearing from deployers/operators (if any are
> reading this) to know how frequently they are picking up stable point
> releases, or if they are taking an approach of waiting to upgrade from kilo
> to liberty until there have at least been a few stable/liberty point
> releases across the projects.
>
> [1]
> http://logs.openstack.org/88/272688/2/check/gate-releases-tox-list-changes/aa8e270/console.html.gz
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Being more aggressive with our defaults

2016-02-08 Thread Assaf Muller
I'm generally sympathetic to what you're saying, and I agree that we
need to do something about disabled-by-default features, at the very
least on the testing front. Comments in-line.

On Mon, Feb 8, 2016 at 4:47 PM, Sean M. Collins  wrote:
> Hi,
>
> With the path_mtu issue - our default was to set path_mtu to zero, and
> do no calculation against the physical segment MTU and the overhead for
> the tunneling protocol that was selected for a tenant network. Which
> means the network would break.
>
> I am working on patches to change our behavior to set the MTU to 1500 by
> default[1], so that at least our out of the box experience is more
> sensible.
>
> This brings me to the csum feature of recent linux kernel versions and
> related network components.
>
> Patches:
>
> https://review.openstack.org/#/c/220744/
> https://review.openstack.org/#/c/261409/
>
> Bugs/RFEs:
>
> https://bugs.launchpad.net/neutron/+bug/1515069
> https://bugs.launchpad.net/neutron/+bug/1492111
>
> Basically, we see that enabling the csum feature creates the conditions
> where a 10gig link was able to be fully utilized[2] in one instance[3]. My
> thinking is - yes, I too would like to fully utilize the links that I
> paid good money for. Someone with more knowledge can correct me,
> but is there any reason not to enable this feature? If your hardware
> supports it, we should utilize it. If your hardware doesn't support it,
> then we shouldn't.
>
> tl;dr - why do we keep merging features that create more knobs that
> deployers and deployment tools need to keep turning? The fact that we
> merge features that are disabled by default means that they are not as
> thoroughly tested as features that are enabled by default.

That is partially a testing issue which fullstack is supposed to
solve. We can't afford to set up a job for every combination of
Neutron configuration values, not upstream and not in different
downstream CI environments. Fullstack can test different
configuration knobs quickly, and it's something that a developer can
do on their own without depending on infra changes. It's also easy to
run, and thus easy to iterate.

As for concrete actions, we do have a fullstack test that enables
l2pop and checks connectivity between two VMs on different nodes. It's
the only test that actually covers l2pop at the upstream gate!
It already caught a regression that Armando and I fixed a while ago.
As for DVR, I'm searching for someone to pick up the gauntlet and
contribute some L3 fullstack tests. I'd be more than happy to review
it! I even have an abandoned patch that gets the ball rolling (The
idea is to test L3 east/west, north/south with FIP and north/south
without FIP for all four router types: Legacy, HA, DVR and DVR HA. You
run the same test in four different configurations, fullstack is
basically purpose built for this).

>
> Neutron should have a lot of things enabled by default that improve
> performance (l2pop? path_mtu? dvr?), and, by itself, try to enable these
> features. If for some reason the hardware doesn't support one, log that
> it wasn't successful and then disable it.

I don't know if this is what you wanted to talk about (It feels more
like a side note to me, so I'm sorry if I'm about to hijack the
conversation!), but I think that if an admin sets a certain
configuration option, the software should respect it in a predictable
manner. If an agent tries to use a certain config knob and fails, it
should error out (Saying specifically what's wrong), and not disable
the option but keep on living, because that is surprising behavior,
and there's nothing telling the admin that the option he expects to be
on is actually off, until he notices it the hard way some time later.
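In code terms, I would much rather see something along these lines at agent
startup than a silent fallback (pure sketch, not actual Neutron code; the
option and probe names are made up):

    import sys


    def verify_tunnel_csum_support(bridge, conf):
        if not conf.AGENT.tunnel_csum:
            return
        if not bridge.supports_tunnel_csum():  # hypothetical capability probe
            sys.exit("tunnel_csum is enabled in the configuration, but the "
                     "local OVS/kernel does not support it. Disable the "
                     "option or upgrade OVS/the kernel.")

Predictable failure with a specific message beats silently running with a
different configuration than the one the operator asked for.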

>
> OK - that's it for me. Thanks for reading. I'll put on my asbestos
> undies now.
>
>
> [1]: https://review.openstack.org/#/c/276411/
> [2]: http://openvswitch.org/pipermail/dev/2015-August/059335.html
>
> [3]: Yes, it's only one data point
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-02-04 Thread Assaf Muller
On Thu, Feb 4, 2016 at 5:55 PM, Sean M. Collins <s...@coreitpro.com> wrote:
> On Thu, Feb 04, 2016 at 04:20:50AM EST, Assaf Muller wrote:
>> I understand you see 'Dragonflow being part of the Neutron stadium'
>> and 'Dragonflow having high visibility' as tied together. I'm curious,
>> from a practical perspective, how does being a part of the stadium
>> give Dragonflow visibility? If it were not a part of the stadium and
>> you had your own PTL etc, what specifically would change so that
>> Dragonflow would be less visible.
>
>> Currently I don't understand why
>> being a part of the stadium is good or bad for a networking project,
>> or why does it matter.
>
>
> I think the issue is of public perception.

That's what I was trying to point out. But it must be something other
than perception, otherwise we could remove the inclusion list
altogether. A project would not be in or out.

> As others have stated, the
> issue is the "in" vs. "out" problem. We had a similar situation
> with 3rd party CI, where we had a list of drivers that were "nice" and
> had CI running vs drivers that were "naughty" and didn't. Prior to the
> vendor decomposition effort, We had a multitude of drivers that were
> in-tree, with the public perception that drivers that were in Neutron's
> tree were "sanctioned" by the Neutron project.
>
> That may not have been the intention, but that's what I think happened.
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-02-04 Thread Assaf Muller
On Thu, Feb 4, 2016 at 8:33 AM, Gal Sagie  wrote:
> As i have commented on the patch i will also send this to the mailing list:
>
> I really don't see why Dragonflow is not part of this list, given the
> criteria you listed.
>
> Dragonflow is fully developed under Neutron/OpenStack, no other
> repositories. It is fully open source and already has a community of people
> contributing and interest from various different companies and OpenStack
> deployers. (I can prepare the list of active contributions and of interested
> parties) It also puts OpenStack Neutron APIs and use cases as first class
> citizens and working on being an integral part of OpenStack.
>
> I agree that OVN needs to be part of the list, but you brought up this
> criteria in regards to ODL, so: OVN like ODL is not only Neutron and
> OpenStack and is even running/being implemented on a whole different
> governance model and requirements to it.
>
> I think you also forgot to mention some other projects as well that are
> fully open source with a vibrant and diverse community, i will let them
> comment here by themselves.
>
> Frankly this approach disappoints me. I have honestly worked hard to make
> Dragonflow fully visible and add and support open discussion and follow the
> correct guidelines to work in a project. I think that Dragonflow community
> has already few members from various companies and this is only going to
> grow in the near future. (in addition to deployers that are considering it
> as a solution)  we also welcome anyone that wants to join and be part of the
> process to step in, we are very welcoming
>
> I also think that the correct way to do this is to actually add as reviewers
> all lieutenants of the projects you are now removing from Neutron big
> stadium and letting them comment.
>
> Gal.

I understand you see 'Dragonflow being part of the Neutron stadium'
and 'Dragonflow having high visibility' as tied together. I'm curious,
from a practical perspective, how does being a part of the stadium
give Dragonflow visibility? If it were not a part of the stadium and
you had your own PTL etc, what specifically would change so that
Dragonflow would be less visible. Currently I don't understand why
being a part of the stadium is good or bad for a networking project,
or why it matters. Looking at Russell's patch, it's concerned with
placing projects (e.g. ODL, OVN, Dragonflow) either in or out of the
stadium and the criteria for doing so; I'm just asking how you
(Gal) perceive the practical effect of that decision.

>
>
> On Wed, Feb 3, 2016 at 11:48 PM, Russell Bryant  wrote:
>>
>> On 11/30/2015 07:56 PM, Armando M. wrote:
>> > I would like to suggest that we evolve the structure of the Neutron
>> > governance, so that most of the deliverables that are now part of the
>> > Neutron stadium become standalone projects that are entirely
>> > self-governed (they have their own core/release teams, etc).
>>
>> After thinking over the discussion in this thread for a while, I have
>> started the following proposal to implement the stadium renovation that
>> Armando originally proposed in this thread.
>>
>> https://review.openstack.org/#/c/275888
>>
>> --
>> Russell Bryant
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> Best Regards ,
>
> The G.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2016-02-04 Thread Assaf Muller
On Thu, Feb 4, 2016 at 10:20 AM, Assaf Muller <amul...@redhat.com> wrote:
> On Thu, Feb 4, 2016 at 8:33 AM, Gal Sagie <gal.sa...@gmail.com> wrote:
>> As i have commented on the patch i will also send this to the mailing list:
>>
>> I really don't see why Dragonflow is not part of this list, given the
>> criteria you listed.
>>
>> Dragonflow is fully developed under Neutron/OpenStack, no other
>> repositories. It is fully open source and already has a community of people
>> contributing and interest from various different companies and OpenStack
>> deployers. (I can prepare the list of active contributions and of interested
>> parties) It also puts OpenStack Neutron APIs and use cases as first class
>> citizens and working on being an integral part of OpenStack.
>>
>> I agree that OVN needs to be part of the list, but you brought up this
>> criteria in regards to ODL, so: OVN like ODL is not only Neutron and
>> OpenStack and is even running/being implemented on a whole different
>> governance model and requirements to it.
>>
>> I think you also forgot to mention some other projects as well that are
>> fully open source with a vibrant and diverse community, i will let them
>> comment here by themselves.
>>
>> Frankly this approach disappoints me. I have honestly worked hard to make
>> Dragonflow fully visible and add and support open discussion and follow the
>> correct guidelines to work in a project. I think that Dragonflow community
>> has already few members from various companies and this is only going to
>> grow in the near future. (in addition to deployers that are considering it
>> as a solution)  we also welcome anyone that wants to join and be part of the
>> process to step in, we are very welcoming
>>
>> I also think that the correct way to do this is to actually add as reviewers
>> all lieutenants of the projects you are now removing from Neutron big
>> stadium and letting them comment.
>>
>> Gal.
>
> I understand you see 'Dragonflow being part of the Neutron stadium'
> and 'Dragonflow having high visibility' as tied together. I'm curious,
> from a practical perspective, how does being a part of the stadium
> give Dragonflow visibility? If it were not a part of the stadium and
> you had your own PTL etc, what specifically would change so that
> Dragonflow would be less visible. Currently I don't understand why
> being a part of the stadium is good or bad for a networking project,
> or why does it matter. Looking at Russell's patch, it's concerned with
> placing projects (e.g. ODL, OVN, Dragonflow) either in or out of the
> stadium and the criteria for doing so, I'm just asking how do you
> (Gal) perceive the practical effect of that decision.

Allow me to expand:
It seems to me like there is no significance to who is 'in or out'.
However, people, including potential customers, look at the list of
the Neutron stadium and deduce that project X is better than Y because
X is in but Y is out, and *that* in itself is the value of being in or
out, even though it has no meaning. Maybe we should explain what
exactly it means to be in or out. It's just a governance decision;
it doesn't reflect in any way on the quality or appeal of a project
(for example, some of the open source Neutron drivers outside the
stadium are much more mature, stable and feature-rich than other
drivers in the stadium).

>
>>
>>
>> On Wed, Feb 3, 2016 at 11:48 PM, Russell Bryant <rbry...@redhat.com> wrote:
>>>
>>> On 11/30/2015 07:56 PM, Armando M. wrote:
>>> > I would like to suggest that we evolve the structure of the Neutron
>>> > governance, so that most of the deliverables that are now part of the
>>> > Neutron stadium become standalone projects that are entirely
>>> > self-governed (they have their own core/release teams, etc).
>>>
>>> After thinking over the discussion in this thread for a while, I have
>>> started the following proposal to implement the stadium renovation that
>>> Armando originally proposed in this thread.
>>>
>>> https://review.openstack.org/#/c/275888
>>>
>>> --
>>> Russell Bryant
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> --
>> Best Regards ,
>>
>> The G.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Orphaned process cleanup

2016-01-27 Thread Assaf Muller
On Wed, Jan 27, 2016 at 4:52 PM, Sean M. Collins <s...@coreitpro.com> wrote:
> On Wed, Jan 27, 2016 at 04:24:00PM EST, Assaf Muller wrote:
>> On Wed, Jan 27, 2016 at 4:10 PM, Sean M. Collins <s...@coreitpro.com> wrote:
>> > Hi,
>> >
>> > I started poking a bit at https://bugs.launchpad.net/devstack/+bug/1535661
>> >
>> > We have radvd processes that the l3 agent launches, and if the l3 agent
>> > is terminated these radvd processes continue to run. I think we should
>> > probably terminate them when the l3 agent is terminated, like if we are
>> > in DevStack and doing an unstack.sh[1]. There's a fix on the DevStack
>> > side but I'm waffling a bit on if it's the right thing to do or not[2].
>> >
>> > The only concern I have is if there are situations where the l3 agent
>> > terminates, but we don't want data plane disruption. For example, if
>> > something goes wrong and the L3 agent dies, if the OS will be sending a
>> > SIGABRT (which my WIP patch doesn't catch[3] and radvd would continue to 
>> > run) or if a
>> > SIGTERM is issued, or worse, an OOM event occurs (I think that's a
>> > SIGTERM too?) and you get an outage.
>>
>> RDO systemd init script for the L3 agent will send a signal 15 when
>> 'systemctl restart neutron-l3-agent' is executed. I assume
>> Debian/Ubuntu do the same. It is imperative that agent restarts do not
>> cause data plane interruption. This has been the case for the L3 agent
>
> But wouldn't it really be wiser to use SIGHUP to communicate the intent
> to restart a process?

Maybe. I just checked, and on a Liberty-based RDO installation, sending
SIGHUP to an L3 agent doesn't actually do anything. Specifically, it
doesn't resync its routers (which restarting it with signal 15 does).

>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Orphaned process cleanup

2016-01-27 Thread Assaf Muller
On Wed, Jan 27, 2016 at 5:20 PM, Sean M. Collins <s...@coreitpro.com> wrote:
> On Wed, Jan 27, 2016 at 05:06:03PM EST, Assaf Muller wrote:
>> >> RDO systemd init script for the L3 agent will send a signal 15 when
>> >> 'systemctl restart neutron-l3-agent' is executed. I assume
>> >> Debian/Ubuntu do the same. It is imperative that agent restarts do not
>> >> cause data plane interruption. This has been the case for the L3 agent
>> >
>> > But wouldn't it really be wiser to use SIGHUP to communicate the intent
>> > to restart a process?
>>
>> Maybe. I just checked and on a Liberty based RDO installation, sending
>> SIGHUP to a L3 agent doesn't actually do anything. Specifically it
>> doesn't resync its routers (Which restarting it with signal 15 does).
>
> See, but there must be something that is starting the neutron l3 agent
> again, *after* sending it a SIGTERM (signal 15).

That's why I wrote 'restarting it with signal 15'.

> Then the l3 agent does
> a full resync since it's started back up, based on some state accounting
> done in what appears to be the plugin. Nothing about signal 15 actually
> does any restarting. It just terminates the process.

Yup. The point stands: there's a difference between signal 15 then start,
and a SIGHUP. Currently, Neutron agents don't resync after a SIGHUP
(and I wouldn't expect them to; I'd just expect a SIGHUP to reload
configuration). Restarting an agent shouldn't stop any agent-spawned
processes like radvd or keepalived, or perform any cleanup of its
resources (namespaces, etc.), just like you wouldn't want the OVS agent
to destroy bridges and ports, and you wouldn't want a restart of
nova-compute to interfere with its qemu-kvm processes.
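To make the distinction concrete, this is roughly the behavior I would expect
from any of our agents (a generic Python sketch, not Neutron code;
reload_config() is a hypothetical helper):

    import signal
    import sys


    def handle_sighup(signum, frame):
        # Reload configuration only; leave routers, namespaces, radvd,
        # keepalived and friends exactly as they are.
        reload_config()


    def handle_sigterm(signum, frame):
        # Shut the agent down cleanly, but do NOT tear down the data plane;
        # the next start will resync state with the server.
        sys.exit(0)


    signal.signal(signal.SIGHUP, handle_sighup)
    signal.signal(signal.SIGTERM, handle_sigterm)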

>
>> 2016-01-27 20:45:35.075 14651 INFO neutron.agent.l3.agent [-] Agent has just 
>> been revived. Doing a full sync.
>
> https://github.com/openstack/neutron/blob/ea8cafdfc0789bd01cf6b26adc6e5b7ee6b141d6/neutron/agent/l3/agent.py#L697
>
> https://github.com/openstack/neutron/blob/ea8cafdfc0789bd01cf6b26adc6e5b7ee6b141d6/neutron/agent/l3/agent.py#L679
>
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L3] Orphaned process cleanup

2016-01-27 Thread Assaf Muller
On Wed, Jan 27, 2016 at 4:10 PM, Sean M. Collins  wrote:
> Hi,
>
> I started poking a bit at https://bugs.launchpad.net/devstack/+bug/1535661
>
> We have radvd processes that the l3 agent launches, and if the l3 agent
> is terminated these radvd processes continue to run. I think we should
> probably terminate them when the l3 agent is terminated, like if we are
> in DevStack and doing an unstack.sh[1]. There's a fix on the DevStack
> side but I'm waffling a bit on if it's the right thing to do or not[2].
>
> The only concern I have is if there are situations where the l3 agent
> terminates, but we don't want data plane disruption. For example, if
> something goes wrong and the L3 agent dies, if the OS will be sending a
> SIGABRT (which my WIP patch doesn't catch[3] and radvd would continue to run) 
> or if a
> SIGTERM is issued, or worse, an OOM event occurs (I think thats a
> SIGTERM too?) and you get an outage.

RDO systemd init script for the L3 agent will send a signal 15 when
'systemctl restart neutron-l3-agent' is executed. I assume
Debian/Ubuntu do the same. It is imperative that agent restarts do not
cause data plane interruption. This has been the case for the L3 agent
for a while, and recently for the OVS agent. There's a difference
between an uninstallation (unstack.sh) and an agent restart/upgrade,
let's keep it that way :)

>
> [1]: 
> https://github.com/openstack-dev/devstack/blob/master/lib/neutron-legacy#L767
>
> [2]: https://review.openstack.org/269560
>
> [3]: https://review.openstack.org/273228
> --
> Sean M. Collins
>


Re: [openstack-dev] [neutron] questions about uni test

2016-01-25 Thread Assaf Muller
On Mon, Jan 25, 2016 at 10:05 AM, Gareth  wrote:
> Hey neutron guys
>
> What will happen if I remove this line[0]?
>
> When running unit test, do we create tables in real database?
>
> [0] 
> https://github.com/openstack/neutron/blob/master/neutron/tests/unit/testlib_api.py#L78

That line has nothing to do with what database backend you use (SQLite,
MySQL or whatever); it's used to clear the contents of the tables
so the next test starts with a clean DB.
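For the curious, here's an illustrative sketch of what "clearing the tables"
means in practice, using SQLAlchemy. This is not the actual testlib_api code,
and the 'networks' table below is made up for the example:

    import sqlalchemy as sa


    def clear_all_tables(engine, metadata):
        # Delete rows table by table; reversed sorted order respects FK dependencies.
        with engine.begin() as conn:
            for table in reversed(metadata.sorted_tables):
                conn.execute(table.delete())


    if __name__ == "__main__":
        engine = sa.create_engine("sqlite://")  # in-memory backend, just for the demo
        metadata = sa.MetaData()
        networks = sa.Table("networks", metadata,
                            sa.Column("id", sa.Integer, primary_key=True),
                            sa.Column("name", sa.String(64)))
        metadata.create_all(engine)
        with engine.begin() as conn:
            conn.execute(networks.insert(), [{"name": "net1"}])
        clear_all_tables(engine, metadata)  # tables survive, rows are gone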

>
> --
> Gareth (Kun Huang)
>
> Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
> OpenStack contributor, kun_huang@freenode
> My promise: if you find any spelling or grammar mistakes in my email
> from Mar 1 2013, notify me
> and I'll donate $1 or ¥1 to an open organization you specify.
>


Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-14 Thread Assaf Muller
On Thu, Jan 14, 2016 at 9:28 AM, Russell Bryant  wrote:
> On 01/13/2016 11:51 PM, Tony Breeds wrote:
>> The challenge for you guys is the kernel side of things but if I
>> understood correctly you can get the kenrel module from the ovs
>> source tree and just compile it against the stock ubuntu kernel
>> (assuming the kernel devel headers are available) is that right?
>
> It's kernel and userspace.  There's multiple current development
> efforts that involve changes to OpenStack, OVS userspace, and the
> appropriate datapath (OVS kernel module or DPDK).
>
> The consensus I'm picking up roughly is that for those working on the
> features, testing with source builds seems to be working fine.  It's
> just not something anyone wants to gate the main Neutron repo with.
> That seems quite reasonable.  If the features aren't in proper
> releases yet, I don't see gating as that important anyway.

I want to have voting tests for new features. For the past year the OVS agent
ARP responder feature has been without proper coverage, and now it's the
upcoming OVS firewall driver. As long as we compile from a specific OVS patch
(and not a moving target), I don't see much of a difference between gating on
OVS 2.0 and gating on, for example, the current tip of the OVS 2.5 branch
(but continuing to gate on that patch, so when the OVS 2.5 branch gets
backports we won't gate on those, and we'll be able to move to a new tip at
our own pace). As long as we pick a patch to compile against, run the
functional tests a few times and verify that it works, I think it's
reasonable. We've been gating against OVS 2.0 for the past few years, and
that to me seems unreasonable: we're gating against an OVS version nobody
is using in production anymore.

>
> --
> Russell Bryant
>


Re: [openstack-dev] [neutron] Testing Neutron with latest OVS

2016-01-13 Thread Assaf Muller
On Wed, Jan 13, 2016 at 4:50 AM, Jakub Libosvar  wrote:
> Hi all,
>
> recently I was working on firewall driver [1] that requires latest
> features in OVS, specifically conntrack support. In order to get the
> driver tested, we need to have the latest OVS kernel modules on machines
> running tests but AFAIK there is no stable "2.5 like" release of OVS yet.
>
> Facing the same problem OVN did in the past, I decided to try to steal
> their solution and apply it in our Neutron repo [2]. Sean and Matthew
> rightfully expressed worries about this approach on review of [2].
>
> So I'd like to bring this to a broader audience and ask for help or
> suggestion on how to test such Neutron features that require some newer
> versions.
>
> The first idea was to have an option to compile trunk ovs and use it in
> particular jobs like functional one. That would bring faster development
> of features going along with ovs features.

I think we should use a newer OVS version for the functional and fullstack
jobs at the very least. This will enable us to test the OVS firewall driver
you're working on, as well as the OVS ARP responder (which was merged a long
time ago, but lacks proper testing because it needs OVS 2.1+ and we gate
using OVS 2.0).

So, that's the problem. How we solve it is another matter - I was under the
impression that compiling it is our only option. Right now OVN is compiling
master; I think we should avoid that and compile a specific git hash instead,
so the gate won't break every time OVS breaks. Then, when OVS branches out
2.5, we can 'pin' on that. At that point we could perhaps switch to a
packaged OVS off some sort of experimental repo.
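Roughly what I have in mind, as a sketch only - this is not actual gate
tooling, and the repo URL, commit hash and configure flags below are
placeholders:

    import platform
    import subprocess

    OVS_REPO = "https://github.com/openvswitch/ovs.git"      # assumed mirror
    OVS_COMMIT = "0123456789abcdef0123456789abcdef01234567"  # placeholder pin


    def run(cmd, cwd=None):
        # Fail the job loudly if any step breaks.
        subprocess.check_call(cmd, cwd=cwd)


    def build_pinned_ovs(workdir="ovs"):
        run(["git", "clone", OVS_REPO, workdir])
        # Check out the pinned hash instead of a moving branch tip, so upstream
        # OVS commits can't break our gate unexpectedly.
        run(["git", "checkout", OVS_COMMIT], cwd=workdir)
        run(["./boot.sh"], cwd=workdir)
        run(["./configure",
             "--with-linux=/lib/modules/%s/build" % platform.release()],
            cwd=workdir)
        run(["make"], cwd=workdir)
        run(["sudo", "make", "install"], cwd=workdir)


    if __name__ == "__main__":
        build_pinned_ovs()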

>
> Please share your ideas :)
>
> Thanks!
> Kuba
>
> [1] https://review.openstack.org/#/c/249337
> [2] https://review.openstack.org/#/c/266423
>


Re: [openstack-dev] [OpenStack-dev][Neutron] Is there a tool of generating network topology?

2015-12-22 Thread Assaf Muller
Check it out:
https://github.com/jbenc/plotnetcfg

On Mon, Dec 21, 2015 at 11:22 PM, Gareth  wrote:
> Hi, networking guys,
>
> For example, we could use it to generate a png file or print ascii in
> command line. Is there something like this?
>
> --
> Gareth
>
> Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
> OpenStack contributor, kun_huang@freenode
> My promise: if you find any spelling or grammar mistakes in my email
> from Mar 1 2013, notify me
> and I'll donate $1 or ¥1 to an open organization you specify.
>


Re: [openstack-dev] [Neutron][QA] New testing guidelines

2015-12-16 Thread Assaf Muller
On Wed, Dec 16, 2015 at 2:32 PM, Boris Pavlovic <bo...@pavlovic.me> wrote:
> Assaf,
>
> We can as well add Rally testing for scale/performance/regression testing.

There's a mention of it in the doc, but not the rationale for using it, as
there is for the other testing frameworks. I'd appreciate it if a Rally dev
could send the patch and add me as a reviewer.

>
> Best regards,
> Boris Pavlovic
>
> On Wed, Dec 16, 2015 at 7:00 AM, Fawad Khaliq <fa...@plumgrid.com> wrote:
>>
>> Very useful information. Thanks, Assaf.
>>
>> Fawad Khaliq
>>
>>
>> On Thu, Dec 10, 2015 at 6:26 AM, Assaf Muller <amul...@redhat.com> wrote:
>>>
>>> Today we merged [1] which adds content to the Neutron testing guidelines:
>>>
>>> http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron
>>>
>>> The document details Neutron's different testing infrastructures:
>>> * Unit
>>> * Functional
>>> * Fullstack (Integration testing with services deployed by the testing
>>> infra itself)
>>> * In-tree Tempest
>>>
>>> The new documentation provides:
>>> * Examples
>>> * Do's and don'ts
>>> * Good and bad usage of mock
>>> * The anatomy of a good unit test
>>>
>>> And primarily the advantages and use cases for each testing framework.
>>>
>>> It's short - I encourage developers to go through it. Reviewers may
>>> use it as reference / link when testing anti-pattern pop up.
>>>
>>> Please send feedback on this thread or better yet in the form of a
>>> devref patch. Thank you!
>>>
>>>
>>> [1] https://review.openstack.org/#/c/245984/
>>>
>>>


Re: [openstack-dev] [neutron][taas] neutron ovs-agent deletes taas flows

2015-12-15 Thread Assaf Muller
SFC is going to hit this issue as well. Really, any out-of-tree
Neutron project that extends the OVS agent and expects things to work
:)

On Tue, Dec 15, 2015 at 9:30 AM, Ihar Hrachyshka  wrote:
> Soichi Shigeta  wrote:
>
>>
>>  Hi,
>>
>>   We find a problem that neutron ovs-agent deletes taas flows.
>>
>>   o) Problem description:
>>
>>  Background:
>>   At Liberty, a bug fix to drop only old flows was merged
>>   to Neutron.
>>   When ovs-agent is restarted, the cleanup logic drops flow
>>   entries unless they are stamped by agent_uuid (recorded as
>>   a cookie).
>>
>>   bug: #1383674
>>"Restarting neutron openvswitch agent causes network
>> hiccup by throwing away all flows"
>>https://bugs.launchpad.net/neutron/+bug/1383674
>>
>>   commit: 73673beacd75a2d9f51f15b284f1b458d32e992e (patch)
>>
>> https://git.openstack.org/cgit/openstack/neutron/commit/?id=73673beacd75a2d9f51f15b284f1b458d32e992e
>>
>>
>>  Problem:
>>   Cleanup will be done only once, but it seems not to work
>>   until port configuration is changed.
>>
>>   Therefore, taas flows will be deleted as follows:
>>1. Start a new compute node or restart an existing compute node.
>>2. Start taas agent on the compute node.
>>   --> taas agent creates flows
>>   (these flows are not stamped by using ovs-agent's uuid)
>>3. Deploy a vm on the compute node.
>>   --> 1. neutron changes port configuration
>>   2. subsequently, the cleanup logic is invoked
>>   3. ovs-agent drops taas flows
>>
>>  Specifically, following taas flows in br_tun are dropped:
>>  -
>>   table=35, priority=2,reg0=0x0 actions=resubmit(,36)
>>   table=35, priority=1,reg0=0x1 actions=resubmit(,36)
>>   table=35, priority=1,reg0=0x2 actions=resubmit(,37)
>>  -
>>
>>  log in q-agt.log
>>  -
>> neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.ofswitch
>> req-e5739280-7116-4802-b5ba-d6964b4c5557 Deleting flow
>> cookie=0x0, duration=434.59s, table=35, n_packets=0, n_bytes=0,
>> idle_age=434, priority=2,reg0=0x0 actions=resubmit(,36)
>> neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.ofswitch
>> req-e5739280-7116-4802-b5ba-d6964b4c5557 Deleting flow
>> cookie=0x0, duration=434.587s, table=35, n_packets=0, n_bytes=0,
>> idle_age=434, priority=1,reg0=0x1 actions=resubmit(,36)
>> neutron.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.ofswitch
>> req-e5739280-7116-4802-b5ba-d6964b4c5557 Deleting flow
>> cookie=0x0, duration=434.583s, table=35, n_packets=0, n_bytes=0,
>> idle_age=434, priority=1,reg0=0x2 actions=resubmit(,37)
>>  -
>>
>>
>>   o) Impact for TaaS:
>>
>>  Because flows in br_tun are dropped by the cleanup logic, mirrored
>>  packets will not send to a monitoring vm running on another host.
>>
>>  Note: Mirrored packets are sent in case of both source vm and
>>monitoring vm are running on the same host. (not affected by
>>flows in br_tun)
>>
>>
>>   o) How to reproduce:
>>
>>  1. Start a new compute node or restart an existing compute node.
>> (Actually, restarting ovs-agent is enough.)
>>  2. Start (or restart) taas agent on the compute node.
>>  3. Deploy a vm on the compute node.
>> --> The cleanup logic drops taas flows.
>>
>>
>>   o) Workaround:
>>
>>  After a vm is deployed on a (re)started compute node, restart taas
>>  agent before creating a tap-service or tap-flow.
>>  That is, create taas flows after cleanup has been done.
>>
>>  Note that cleanup will be done only once during an ovs-agent is
>>  running.
>>
>>
>>   o) An idea to fix:
>>
>>  1. Set "taas" stamp(*) to taas flows.
>>  2. Modify the cleanup logic in ovs-agent not to delete entries
>> stamped as "taas".
>>
>>  * Maybe a static string.
>>If we need to use a string which generated dynamically
>>(e.g. uuid), API to interact with ovs-agent is required.
>
>
> API proposal with some consideration for flow cleanup not dropping flows for
> external code is covered in the following email thread:
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081264.html
>
> I believe you would need to adopt the extensions API once it’s in, moving
> from setup with a separate agent for your feature to l2 agent extension for
> taas that will run inside OVS agent.
>
> Ihar
>
>


Re: [openstack-dev] neutron metadata-agent HA

2015-12-14 Thread Assaf Muller
If you're not running HA routers, you have a couple of options that
I'm aware of:

1) Use multiple L3 agents in A/A and enable
neutron.conf:allow_automatic_l3agent_failover, in which case you'd
enable the metadata agent on each node. There are pros and cons to this
approach vs. HA routers: significantly slower failover (hours instead
of seconds, depending on the number of routers) and reliance on the
control plane for a successful failover, but it's simpler with less room
for bugs. I recommend HA routers, but I'm biased.
2) Use Pacemaker or similar to manage a cluster (or clusters) of
network nodes in A/P, in which case all four Neutron agents (L2,
metadata, DHCP, L3) are enabled on only one machine in a cluster at a
time. This approach is fairly out of date at this point.

On Mon, Dec 14, 2015 at 12:33 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
> What about the case where your not running ha routers? Should you still run 
> more then one?
>
> Thanks,
> Kevin
> ________
> From: Assaf Muller [amul...@redhat.com]
> Sent: Saturday, December 12, 2015 12:44 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] neutron metadata-agent HA
>
> The neutron metadata agent is stateless. It takes requests from the
> metadata proxies running in the router namespaces and moves the
> requests on to the nova server. If you're using HA routers, start the
> neutron-metadata-agent on every machine the L3 agent runs, and just
> make sure that the metadata-agent is restarted in case it crashes and
> you're done. Nothing else you need to do.
>
> On Fri, Dec 11, 2015 at 3:24 PM, Fabrizio Soppelsa
> <fsoppe...@mirantis.com> wrote:
>>
>> On Dec 10, 2015, at 12:56 AM, Alvise Dorigo <alvise.dor...@pd.infn.it>
>> wrote:
>>
>> So my question is: is there any progress on this topic ? is there a way
>> (something like a cronjob script) to make the metadata-agent redundant
>> without involving the clustering software Pacemaker/Corosync ?
>>
>>
>> Reason for such a dirty solution instead of rely onto pacemaker?
>>
>> I’m not aware of such initiatives - just checked the blueprints in Neutron
>> and I found no relevant. I can suggest to file a proposal to the
>> correspondent launchpad page, by elaborating your idea.
>>
>> F.
>>


Re: [openstack-dev] neutron metadata-agent HA

2015-12-13 Thread Assaf Muller
The L3 agent monitors the metadata proxies it spawns and restarts them
automatically. You should be using an external tool to restart the
metadata *agent* in case that crashes.

On Sun, Dec 13, 2015 at 7:49 AM, Gary Kotton <gkot...@vmware.com> wrote:
>
>
> From: Eugene Nikanorov <enikano...@mirantis.com>
> Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
> Date: Sunday, December 13, 2015 at 12:09 PM
> To: OpenStack List <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] neutron metadata-agent HA
>
> It is as 'single' as active L3 router that is handling traffic at current
> point of time.
>
> [Gary] But if the l3 agent is up and running and the metadataproxy is not
> then all of the instances using that agent will not be able to get their
> metadata.
>
> On Sun, Dec 13, 2015 at 11:13 AM, Gary Kotton <gkot...@vmware.com> wrote:
>>
>>
>>
>>
>>
>>
>> On 12/12/15, 10:44 PM, "Assaf Muller" <amul...@redhat.com> wrote:
>>
>> >The neutron metadata agent is stateless. It takes requests from the
>> >metadata proxies running in the router namespaces and moves the
>> >requests on to the nova server. If you're using HA routers, start the
>> >neutron-metadata-agent on every machine the L3 agent runs, and just
>> >make sure that the metadata-agent is restarted in case it crashes and
>> >you're done.
>>
>> So does this mean that it could be the single point of failure?
>>
>> >Nothing else you need to do.
>> >
>> >On Fri, Dec 11, 2015 at 3:24 PM, Fabrizio Soppelsa
>> ><fsoppe...@mirantis.com> wrote:
>> >>
>> >> On Dec 10, 2015, at 12:56 AM, Alvise Dorigo <alvise.dor...@pd.infn.it>
>> >> wrote:
>> >>
>> >> So my question is: is there any progress on this topic ? is there a way
>> >> (something like a cronjob script) to make the metadata-agent redundant
>> >> without involving the clustering software Pacemaker/Corosync ?
>> >>
>> >>
>> >> Reason for such a dirty solution instead of rely onto pacemaker?
>> >>
>> >> I’m not aware of such initiatives - just checked the blueprints in
>> >> Neutron
>> >> and I found no relevant. I can suggest to file a proposal to the
>> >> correspondent launchpad page, by elaborating your idea.
>> >>
>> >> F.
>> >>
>> >>


Re: [openstack-dev] neutron metadata-agent HA

2015-12-12 Thread Assaf Muller
The neutron metadata agent is stateless. It takes requests from the
metadata proxies running in the router namespaces and moves the
requests on to the nova server. If you're using HA routers, start the
neutron-metadata-agent on every machine the L3 agent runs on, and just
make sure that the metadata-agent is restarted in case it crashes, and
you're done. Nothing else you need to do.
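To illustrate what "stateless" means here, a toy forwarder could look like
the sketch below. This is not the actual neutron-metadata-agent code; the
port numbers and header names are assumptions for the example - each request
is simply passed on to Nova with identifying headers, and nothing is stored
locally, so running one copy per node needs no coordination:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib import request as urlrequest

    NOVA_METADATA_URL = "http://127.0.0.1:8775"  # assumed Nova metadata endpoint


    class MetadataForwarder(BaseHTTPRequestHandler):
        def do_GET(self):
            # No local state: forward the request and pass through the headers
            # that identify the requesting instance.
            upstream = urlrequest.Request(NOVA_METADATA_URL + self.path)
            for header in ("X-Instance-ID", "X-Tenant-ID", "X-Instance-ID-Signature"):
                if header in self.headers:
                    upstream.add_header(header, self.headers[header])
            with urlrequest.urlopen(upstream) as resp:
                body = resp.read()
            self.send_response(resp.status)
            self.end_headers()
            self.wfile.write(body)


    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 9697), MetadataForwarder).serve_forever()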

On Fri, Dec 11, 2015 at 3:24 PM, Fabrizio Soppelsa
 wrote:
>
> On Dec 10, 2015, at 12:56 AM, Alvise Dorigo 
> wrote:
>
> So my question is: is there any progress on this topic ? is there a way
> (something like a cronjob script) to make the metadata-agent redundant
> without involving the clustering software Pacemaker/Corosync ?
>
>
> Reason for such a dirty solution instead of rely onto pacemaker?
>
> I’m not aware of such initiatives - just checked the blueprints in Neutron
> and I found no relevant. I can suggest to file a proposal to the
> correspondent launchpad page, by elaborating your idea.
>
> F.
>


[openstack-dev] [Neutron][QA] New testing guidelines

2015-12-09 Thread Assaf Muller
Today we merged [1] which adds content to the Neutron testing guidelines:
http://docs.openstack.org/developer/neutron/devref/development.environment.html#testing-neutron

The document details Neutron's different testing infrastructures:
* Unit
* Functional
* Fullstack (Integration testing with services deployed by the testing
infra itself)
* In-tree Tempest

The new documentation provides:
* Examples
* Do's and don'ts
* Good and bad usage of mock
* The anatomy of a good unit test

And primarily the advantages and use cases for each testing framework.

It's short - I encourage developers to go through it. Reviewers may
use it as a reference / link when testing anti-patterns pop up.

Please send feedback on this thread or better yet in the form of a
devref patch. Thank you!


[1] https://review.openstack.org/#/c/245984/



[openstack-dev] [Neutron] DVR + L3 HA + L2pop - Mapping out the work

2015-12-04 Thread Assaf Muller
There's a patch up for review to integrate DVR and L3 HA:
https://review.openstack.org/#/c/143169/

Let me outline all of the work that has to happen before that patch
would be useful:

In order for DVR + L3 HA to work in harmony, each feature would have
to be stable on its own. DVR has its share of problems, and this is
being tackled full on, with more folks joining the good fight soon. L3
HA also has its own problems:

* https://review.openstack.org/#/c/238122/
* https://review.openstack.org/#/c/230481/
* https://review.openstack.org/#/c/250040/

DVR requires l2pop, and l2pop on its own is also problematic
(Regardless if DVR or L3 HA is turned on). One notable issue is that
it screws up live migration:
https://bugs.launchpad.net/neutron/+bug/1443421.
I'd really like to see more focus on Vivek's patch that attempts to
resolve this issue:
https://review.openstack.org/#/c/175383/

Finally the way L3 HA integrates with l2pop is not something I would
recommend in production, as described here:
https://bugs.launchpad.net/neutron/+bug/1522980. If I cannot find an
owner for this work I will reach out to some folks soon.



Re: [openstack-dev] [Neutron] Call for review focus

2015-11-25 Thread Assaf Muller
On Mon, Nov 23, 2015 at 7:02 AM, Rossella Sblendido <rsblend...@suse.com> wrote:
>
>
> On 11/20/2015 03:54 AM, Armando M. wrote:
>>
>>
>>
>> On 19 November 2015 at 18:26, Assaf Muller <amul...@redhat.com
>> <mailto:amul...@redhat.com>> wrote:
>>
>> On Wed, Nov 18, 2015 at 9:14 PM, Armando M. <arma...@gmail.com
>> <mailto:arma...@gmail.com>> wrote:
>> > Hi Neutrites,
>> >
>> > We are nearly two weeks away from the end of Mitaka 1.
>> >
>> > I am writing this email to invite you to be mindful to what you
>> review,
>> > especially in the next couple of weeks. Whenever you have the time
>> to review
>> > code, please consider giving priority to the following:
>> >
>> > Patches that target blueprints targeted for Mitaka;
>> > Patches that target bugs that are either critical or high;
>> > Patches that target rfe-approved 'bugs';
>> > Patches that target specs that have followed the most current
>> submission
>> > process;
>>
>> Is it possible to create Gerrit dashboards for patches that answer
>> these
>> criteria, and then persist the links in Neutron's dashboards devref
>> page?
>> http://docs.openstack.org/developer/neutron/dashboards/index.html
>> That'd be super useful.
>>
>>
>> We should look into that, but to be perfectly honest I am not sure how
>> easy it would be, since we'd need to cross-reference content that lives
>> into gerrit as well as launchpad. Would that even be possible?
>
>
> To cross-reference we can use the bug ID or the blueprint name.
>
> I created a script that queries launchpad to get:
> 1) Bug number of the bugs tagged with approved-rfe
> 2) Bug number of the critical/high bugs
> 3) list of blueprints targeted for the current milestone (mitaka-1)
>
> With this info the script builds a .dash file that can be used by
> gerrit-dash-creator [2] to produce a dashboard url .
>
> The script prints also the queries that can be used in gerrit UI directly,
> e.g.:
> Critical/High Bugs
> (topic:bug/1399249 OR topic:bug/1399280 OR topic:bug/1443421 OR
> topic:bug/1453350 OR topic:bug/1462154 OR topic:bug/1478100 OR
> topic:bug/1490051 OR topic:bug/1491131 OR topic:bug/1498790 OR
> topic:bug/1505575 OR topic:bug/1505843 OR topic:bug/1513678 OR
> topic:bug/1513765 OR topic:bug/1514810)
>
>
> This is the dashboard I get right now [3]
>
> I tried in many ways to get Gerrit to filter patches if the commit message
> contains a bug ID. Something like:
>
> (message:"#1399249" OR message:"#1399280" OR message:"#1443421" OR
> message:"#1453350" OR message:"#1462154" OR message:"#1478100" OR
> message:"#1490051" OR message:"#1491131" OR message:"#1498790" OR
> message:"#1505575" OR message:"#1505843" OR message:"#1513678" OR
> message:"#1513765" OR message:"#1514810")
>
> but it doesn't work well, the result of the filter contains patches that
> have nothing to do with the bugs queried.
> That's why I had to filter using the topic.
>
> CAVEAT: To make the dashboard work, bug fixes must use the topic "bug/ID"
> and patches implementing a blueprint the topic "bp/name". If a patch is not
> following this convention it won't be showed in the dashboard, since the
> topic is used as filter. Most of us use this convention already anyway so I
> hope it's not too much of a burden.
>
> Feedback is appreciated :)

Rossella, this is exactly what I wanted :) Let's iterate on the patch
and merge it. We could then consider running the script automatically
on a daily basis and publishing the resulting URL in a nice
bookmarkable place.

>
> [1] https://review.openstack.org/248645
> [2] https://github.com/openstack/gerrit-dash-creator
> [3] https://goo.gl/sglSbp
>
>>
>> Btw, I was looking at the current blueprint assignments [1] for Mitaka:
>> there are some blueprints that still need assignee, approver and
>> drafter; we should close the gap. If there are volunteers, please reach
>> out to me.
>>
>> Thanks,
>> Armando
>>
>> [1] https://blueprints.launchpad.net/neutron/mitaka/+assignments
>>
>>
>> >
>> > Everything else should come later, no matter how easy or interesting
>> it is
>> > to review; remember that as a community we have the collective duty
>> to work
>> > towards a common (set of) target(s), as

Re: [openstack-dev] [Neutron] Call for review focus

2015-11-19 Thread Assaf Muller
On Wed, Nov 18, 2015 at 9:14 PM, Armando M.  wrote:
> Hi Neutrites,
>
> We are nearly two weeks away from the end of Mitaka 1.
>
> I am writing this email to invite you to be mindful to what you review,
> especially in the next couple of weeks. Whenever you have the time to review
> code, please consider giving priority to the following:
>
> Patches that target blueprints targeted for Mitaka;
> Patches that target bugs that are either critical or high;
> Patches that target rfe-approved 'bugs';
> Patches that target specs that have followed the most current submission
> process;

Is it possible to create Gerrit dashboards for patches that answer these
criteria, and then persist the links in Neutron's dashboards devref page?
http://docs.openstack.org/developer/neutron/dashboards/index.html
That'd be super useful.

>
> Everything else should come later, no matter how easy or interesting it is
> to review; remember that as a community we have the collective duty to work
> towards a common (set of) target(s), as being planned in collaboration with
> the Neutron Drivers team and the larger core team.
>
> I would invite submitters to ensure that the Launchpad resources
> (blueprints, and bug report) capture the most updated view in terms of
> patches etc. Work with your approver to help him/her be focussed where it
> matters most.
>
> Finally, we had plenty of discussions at the design summit, and some of
> those discussions will have to be followed up with actions (aka code in
> OpenStack lingo). Even though, we no longer have deadlines for feature
> submission, I strongly advise you not to leave it last minute. We can only
> handle so much work for any given release, and past experience tells us that
> we can easily hit a breaking point at around the ~30 blueprint mark.
>
> Once we reached it, it's likely we'll have to start pushing back work for
> Mitaka and allow us some slack; things are fluid as we all know, and the
> random gate breakage is always lurking round the corner! :)
>
> Happy hacking,
> Armando
>


Re: [openstack-dev] [stable][neutron] How we handle Kilo backports

2015-11-18 Thread Assaf Muller
On Wed, Nov 18, 2015 at 12:38 PM, Carl Baldwin  wrote:
> On Wed, Nov 18, 2015 at 9:44 AM, Ihar Hrachyshka  wrote:
>> Hi all,
>>
>> as per [1] I imply that all projects under stable-maint-core team
>> supervision must abide the stable policy [2] which limits the types of
>> backports for N-2 branches (now it’s stable/kilo) to "Only critical bugfixes
>> and security patches”. With that, I remind all stable core members about the
>> rule.
>>
>> Since we are limited to ‘critical bugfixes’ only, and since there is no
>> clear definition of what ‘critical’ means, I guess we should define it for
>> ourselves.
>>
>> In Neutron world, we usually use Critical importance for those bugs that
>> break gate. High is used for those bugs that have high impact production
>> wise. With that in mind, I suggest we define ‘critical’ bugfixes as Critical
>> + High in LP. Comments on that?
>
> I was wondering about this today too.  Ihar is correct about how we
> use Critical importance in launchpad for Neutron bugs.  The number of
> Critical neutron bugs is very small and most of them are not relevant
> to stable releases because they are targeted at gate breakage incurred
> by new development in master.
>
> I'll +1 that we should extend this to Critical + High in launchpad.
> Otherwise, we would severely limit our ability to backport important
> bug fixes to a stable release that is only 6 months old and many
> deployers are only beginning to turn their attention to it.

+1

In many ways stable/kilo is more important than stable/liberty today.

>
>> (My understanding is that we can also advocate for the change in the global
>> policy if we think the ‘critical only’ rule should be relaxed, but till then
>> it makes sense to stick to what policy says.)
>
> +1
>
> Carl
>


Re: [openstack-dev] Couldn't ping/ssh cirros instance using its floating ip

2015-11-09 Thread Assaf Muller
You will have a much better time on ask.openstack.org - it's a super
active Q&A site for questions exactly like this one. You posted your
question to a developers' mailing list, where we choose release names
and make other ultra important mission critical decisions.

On Mon, Nov 9, 2015 at 6:34 PM, Aishwarya Thangappa_Professional
 wrote:
> Hi there,
>
> In a fresh devstack(master branch) install,
>
> 1. I booted up a cirros instance and associated it with a floating ip.
> 2. Created a security group rule to allow tcp port 22 and associated it with
> the nova instance
> 3. From the qrouter namespace, I can ping both the private and fip address
> of the instance.
> 4. But, couldn’t ssh into the instance from the external network using its
> fip.
>
>
> neutron net-list
> +--+-+--+
> | id   | name| subnets
> |
> +--+-+--+
> | 376357b1-6abe-46c1-844b-548a051391d5 | public  |
> 41b86431-41d6-4503-8329-767f84bad4d5 172.24.4.0/24   |
> |  | |
> 79f0bf72-8c98-478b-a463-b6e3a101e6b7 2001:db8::/64   |
> | ebe713c9-5064-48ec-9094-e44e150d36ad | private |
> c7ebd45c-5a1f-4d97-a90e-b221f19c7177 10.0.0.0/24 |
> |  | |
> d7aac86f-0b2c-4dd4-88cf-246bfb58006e fd69:7a94:27b7::/64 |
> +--+-+—-+
>
> $ neutron router-list
> +--+-++-+---+
> | id   | name| external_gateway_info
> | distributed | ha|
> +--+-++-+---+
> | 46715086-3f9c-4fb1-91b4-b41da24baa2f | router1 | {"network_id":
> "376357b1-6abe-46c1-844b-548a051391d5", "enable_snat": true,
> "external_fixed_ips": [{"subnet_id": "41b86431-41d6-4503-8329-767f84bad4d5",
> "ip_address": "172.24.4.2"}, {"subnet_id":
> "79f0bf72-8c98-478b-a463-b6e3a101e6b7", "ip_address": "2001:db8::1"}]} |
> True| False |
> +--+-++-+---+
>
> $ neutron security-group-rule-list
> +--++---+---+---+-+
> | id   | security_group | direction |
> ethertype | protocol/port | remote  |
> +--++---+---+---+-+
> | 1cfb9a69-61e0-4df3-b04c-f9f9f4a54cc3 | default| egress| IPv4
> | any   | any |
> | 4afe5008-c192-4582-95c8-21b1f64ab2a5 | default| ingress   | IPv6
> | any   | default (group) |
> | 5ce1e34d-7b9d-41d8-9a15-94711824ae68 | secgroup1  | ingress   | IPv4
> | 22/tcp| any |
> | 6b3a8008-b446-4004-a72a-6ea2c9bbf375 | default| egress| IPv6
> | any   | any |
> | 7feb5969-5f9d-4525-93a3-a108db59f65b | default| egress| IPv6
> | any   | any |
> | 7ff6a82f-6c8c-4bb5-b893-d06272b0d69b | default| ingress   | IPv4
> | any   | default (group) |
> | 90f385c9-de19-4ede-b4ef-bf199537b49b | secgroup1  | egress| IPv6
> | any   | any |
> | c21ed80d-fbee-4db6-8518-60a1070aff20 | secgroup1  | egress| IPv4
> | 22/tcp| any |
> | c3d1f6ea-b7c4-47ea-ace3-f9b3b1bf8d25 | default| egress| IPv4
> | any   | any |
> | dc09a10a-37db-4a33-9abc-00798221254e | secgroup1  | egress| IPv4
> | any   | any |
> | df4d7930-6ce0-43c8-996f-ced126c7cba0 | default| ingress   | IPv4
> | any   | default (group) |
> | e0d84fea-e47c-48f6-a29b-d41231674256 | default| ingress   | IPv6
> | any   | default (group) |
> 

Re: [openstack-dev] [stable][infra][neutron] ZUUL_BRANCH not set for periodic stable jobs

2015-11-09 Thread Assaf Muller
On Mon, Nov 9, 2015 at 5:30 PM, Ihar Hrachyshka  wrote:
> Jeremy Stanley  wrote:
>
>> On 2015-11-09 17:31:00 +0100 (+0100), Ihar Hrachyshka wrote:
>> [...]
>>>
>>> From the failure log, I determined that the tests fail because they
>>> assume
>>> neutron/liberty code, but actually run against neutron/master (that does
>>> not
>>> have that neutron.plugins.embrane.* namespace because the plugin was
>>> removed
>>> in Mitaka).
>>>
>>> I then compared how we fetch neutron in gate and in periodic jobs, and I
>>> see
>>> that ZUUL branch is not set in the latter jobs.
>>
>> [...]
>>
>> Short answer is that the periodic trigger in Zuul is changeless and
>> thus branchless. It just wakes up at the specified time and starts a
>> list of jobs associated with that pipeline for any projects. This is
>> why the working periodic jobs have different names than their gerrit
>> triggered pipeline equivalents... they need to hard-code a branch
>> (usually as a JJB parameter).
>> --
>> Jeremy Stanley
>
>
> UPD: we discussed the matter with Jeremy in irc in more details and came up
> to agreement that the best way is actually modifying tools/tox_install.sh in
> neutron-lbaas (and other projects affected) to set ZUUL_BRANCH if not passed
> from Jenkins.
>
> For neutron-lbaas, I came up with the following patch:
> https://review.openstack.org/#/c/24/
>
> Once it’s validated and reviewed, I will propose some more for other
> projects (I believe at least vpnaas should be affected).
>
> Once it’s merged in master, I will propose backports for Liberty.

Great work Ihar, thanks for taking this on.

>
> Ihar


Re: [openstack-dev] [neutron][stable] proactive backporting

2015-10-16 Thread Assaf Muller
On Fri, Oct 16, 2015 at 5:32 PM, Kevin Benton  wrote:

> We can't put in code and just hope for testing later. The tests are even
> more important in back-ports because there could be unexpected differences
> in the stable branch that make the patch not work correctly.
>
> However, we do need to make sure that people aren't complaining about the
> testing methodology in the back-ports because it's too late for that.
>

I am quick to point out testing deficiencies, so I hope I wasn't too tired
and unknowingly did the unspeakable thing you speak of (I think people
can fail to notice the branch a patch is submitted against). Reviewing a
backport should be 90% about backport procedures: Does the commit message
contain everything it should? Does the patch meet the backport criteria? (Any
DB, RPC, configuration or API changes? Is it a new feature or a high-value /
low-risk bug fix? Will this somehow break users on upgrade?) A backport
should ideally be identical to its master counterpart, so I agree that
commenting on spelling mistakes, comments, code placement, design and the
like should be avoided. If that bothers you so much, send a patch to master
to fix the grave error you spotted!


>
> On Fri, Oct 16, 2015 at 2:23 PM, Fox, Kevin M  wrote:
>
>> It would also help if the process could split out bug fixes from unit
>> tests. I had a bug fix get stuck while the unit tests were bikesheded for a
>> while, and then the delay of not getting quickly backported to the stable
>> branches then kicked in. All the while I had to patch production clouds by
>> hand with a non upstreamed fix. I'm all for having unit tests to ensure the
>> bugs don't creep back in, but regression bugs should be fixed as quickly as
>> possible.
>>
>> Thanks,
>> Kevin
>> 
>> From: Edgar Magana [edgar.mag...@workday.com]
>> Sent: Friday, October 16, 2015 2:04 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [neutron][stable] proactive backporting
>>
>> + 2 and total support for it.
>>
>> Looking forward to reviewing the first set of them.
>>
>> Edgar
>>
>>
>>
>> On 10/16/15, 5:33 AM, "Ihar Hrachyshka"  wrote:
>>
>> >Hi all,
>> >
>> >I’d like to introduce a new initiative around stable branches for
>> neutron official projects (neutron, neutron-*aas, python-neutronclient)
>> that is intended to straighten our backporting process and make us more
>> proactive in fixing bugs in stable branches. ‘Proactive' meaning: don’t
>> wait until a known bug hits a user that consumes stable branches, but
>> backport fixes in advance quickly after they hit master.
>> >
>> >The idea is simple: every Fri I walk thru the new commits merged into
>> master since last check; produce lists of bugs that are mentioned in
>> Related-Bug/Closes-Bug; paste them into:
>> >
>> >https://etherpad.openstack.org/p/stable-bug-candidates-from-master
>> >
>> >Then I click thru the bug report links to determine whether it’s worth a
>> backport and briefly classify them. If I have cycles, I also request
>> backports where it’s easy (== a mere 'Cherry-Pick to' button click).
>> >
>> >After that, those interested in maintaining neutron stable branches can
>> take those bugs one by one and handle them, which means: checking where it
>> really applies for backport; creating backport reviews (solving conflicts,
>> making tests pass). After it’s up for review for all branches affected and
>> applicable, the bug is removed from the list.
>> >
>> >I started on that path two weeks ago, doing initial swipe thru all
>> commits starting from stable/liberty spin off. If enough participants join
>> the process, we may think of going back into git history to backport
>> interesting fixes from stable/liberty into stable/kilo.
>> >
>> >Don’t hesitate to ask about details of the process, and happy
>> backporting,
>> >
>> >Ihar
>
>
>
> --
> Kevin Benton
>