[openstack-dev] [ironic] Stepping down from Ironic core

2018-02-23 Thread Vasyl Saienko
Hey Ironic community!

Unfortunately I don't work on Ironic as much as I used to anymore, so I'm
stepping down from the core reviewer team.

So, thanks for everything everyone, it's been great to work with you
all for all these years!!!


Sincerely,
Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Nominating Hironori Shiina for ironic-core

2018-02-06 Thread Vasyl Saienko
+1

On Mon, Feb 5, 2018 at 8:12 PM, Julia Kreger 
wrote:

> I would like to nominate Hironori Shiina to ironic-core. He has been
> working in the ironic community for some time, and has been helping
> over the past several cycles with more complex features. He has
> demonstrated an understanding of Ironic's code base, mechanics, and
> overall community style. His review statistics are also extremely
> solid. I personally have a great deal of trust in his reviews.
>
> I believe he would make a great addition to our team.
>
> Thanks,
>
> -Julia
>


Re: [openstack-dev] [Ironic] Multi-tenant network with multiple port

2017-12-20 Thread Vasyl Saienko
Hello,

We use the following order when picking candidate ports/portgroups for
attaching VIFs:

   - For Ironic <= Ocata:
      1. portgroups
      2. ports with pxe_enabled=True
      3. any other ports

   - For Ironic >= Pike (ports have a new attribute, physical_network):
      1. portgroups with the physical_network field set
      2. ports with the physical_network field set
      3. portgroups without the physical_network field
      4. ports without the physical_network field
      5. ports with pxe_enabled=True
      6. other ports

In both cases, pxe_enabled ports are preferred over non-PXE ports when
attaching a tenant VIF.
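The Pike-era preference order can be sketched as a plain sort over
port/portgroup dicts. This is an illustrative re-implementation (names and
fields mirror the Ironic API objects, but it is not Ironic's actual code):

```python
def order_candidates(portgroups, ports):
    """Return VIF attachment candidates in Ironic >= Pike preference order."""
    def has_physnet(obj):
        return bool(obj.get('physical_network'))

    ordered = []
    ordered += [pg for pg in portgroups if has_physnet(pg)]      # 1. portgroups with physnet
    ordered += [p for p in ports if has_physnet(p)]              # 2. ports with physnet
    ordered += [pg for pg in portgroups if not has_physnet(pg)]  # 3. portgroups without physnet
    # 4-6: remaining ports; PXE-enabled ones are preferred over the rest
    rest = [p for p in ports if not has_physnet(p)]
    ordered += sorted(rest, key=lambda p: not p.get('pxe_enabled'))
    return ordered
```

The first candidate in the returned list is the one the VIF would be
attached to; see common.py (linked below) for the real selection logic.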
If you are using Ironic <= Ocata, you can configure a fake portgroup with a
single port, and Ironic will attach the tenant VIF to the portgroup (which
will actually be your second port). The drawback is that nova will do the
portgroup configuration on the instance via cloud-init.
Alternatively, with Ironic >= Pike, add the physical_network field to the
port you want the tenant network connected to, but not to the other ports.


https://github.com/openstack/ironic/blob/stable/pike/ironic/drivers/modules/network/common.py#L506
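For the Pike approach, setting the field per port might look like the
following (the UUID variable and physnet name are placeholders; I believe
the `--physical-network` option is available in the Pike-era client):

```shell
# Give only the tenant-network port a physical_network so it sorts ahead of
# the PXE port when Ironic picks where to attach the tenant VIF.
# $TENANT_PORT_UUID and physnet-tenant are placeholders.
openstack baremetal port set $TENANT_PORT_UUID --physical-network physnet-tenant
# Leave the pxe_enabled provisioning port without a physical_network.
```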

On Wed, Dec 20, 2017 at 3:10 PM, Hieu LE  wrote:

> Hello Ironic guys,
>
> In my lab environment, I have finished setting up the multi-tenant network
> environment for Ironic using networking-generic-switch (Cisco IOS device).
> The official Ironic doc only talked about one BM node with one port for
> provisioning and tenant network.
>
> My process here: I have created 2 ports, 01 port with pxe_enabled for
> provisioning network and remaining port with pxe disabled; then using nova
> boot with --nic option, hoping it can get network information via Neutron.
> But it failed.
>
> So my question here is:
> 1. Is this possible for enroll a node, then start provisioning it via one
> port and then configuring tenant network via another port?
> 2. Is my process correct, if not, could you provide some guides for the
> right way?
>
> Thanks,
> Hieu.
>
> --
>


[openstack-dev] [ironic] networking-generic-switch core team update

2017-11-28 Thread Vasyl Saienko
Hello Ironic!

Recently networking-generic-switch was added to the OpenStack baremetal
program [0], and I would like to announce the following n-g-s core team
changes:

 * add ironic-core to n-g-s-core, as is done for other ironic sub-projects
 * add mgoddard to the n-g-s-core team, as his reviews/commits are very useful



[0] https://review.openstack.org/#/c/521894/


[openstack-dev] [ironic][nova][grenade] Ironic CI grenade job degradation

2017-08-15 Thread Vasyl Saienko
Hello Community!

Recently, with the CI performance degradation, the ironic team has hit the
following problem. Fast automated cleaning is enabled on grenade jobs and
starts as soon as a nova instance is deleted.
We do not wait for cleaning to finish in the nova virt driver before marking
the instance as deleted [0]; as a result, new tests may start while ironic
is still cleaning nodes from previous tests.
Lately CI has become much slower, which leads to grenade job failures.

To fix this, we need to wait for cleaning to complete before starting new
tests/grenade phases (ironic resources should be available again after the
base smoke tests / resources destroy phase).
Since grenade cleans up resources in reverse order [1], there is no way on
the ironic side to wait for resources to become available again.

The possible options here are:

   1. Wait for resources to become available again in the ironic grenade
   plugin after the base smoke tests finish, before running the resources
   phase.
   2. Ensure that the ironic node is available again right after the destroy
   phase. Two options are available here:
      1. Modify the nova resources destroy phase [2] to honor the ironic
      case and wait for resources there.
      2. Add a new phase right after 'destroy'. (Previously there was a
      'force_destroy' phase which we tried to use [3].)

[0]
https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L1137
[1]
https://github.com/openstack-dev/grenade/blob/11dd94308ed5c25a8f28f86b03b20b251f0a05a1/inc/plugin#L111

[2]
https://github.com/openstack-dev/grenade/blob/11dd94308ed5c25a8f28f86b03b20b251f0a05a1/projects/60_nova/resources.sh#L142
[3] https://review.openstack.org/#/c/489410/
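Option 1 amounts to a polling loop in the grenade plugin. A rough sketch
(the timeout value is illustrative; the state names follow the Ironic state
machine, and the exact grep pattern is an assumption):

```shell
# Wait until no Ironic node is still cleaning/deleting before the next
# grenade phase starts. Timeout values are illustrative.
remaining=900
while [ "$remaining" -gt 0 ]; do
    busy=$(openstack baremetal node list -f value -c "Provisioning State" \
           | grep -cE 'clean|deleting' || true)
    if [ "$busy" -eq 0 ]; then
        break
    fi
    sleep 10
    remaining=$((remaining - 10))
done
if [ "$remaining" -le 0 ]; then
    echo "Timed out waiting for Ironic nodes to finish cleaning"
    exit 1
fi
```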


Re: [openstack-dev] [ironic] Goodbye ironic o/

2017-05-03 Thread Vasyl Saienko
Mario,

So sorry to hear that you won't be working on Ironic anymore :(
Good luck with your new assignments!

On Tue, May 2, 2017 at 7:10 AM, Ranganathan, Shobha <
shobha.ranganat...@intel.com> wrote:

> Hi Mario,
>
>
>
> Sorry to hear that you won’t be working on Ironic anymore!
>
> Best of luck on whatever you are doing next!
>
>
>
> Shobha
>
>
>
> *From:* John Villalovos [mailto:openstack@sodarock.com]
> *Sent:* Monday, May 1, 2017 9:14 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [ironic] Goodbye ironic o/
>
>
>
> Mario,
>
> So sorry you won't be working with us on Ironic anymore :( You have been
> an great part of Ironic and I'm glad I got to know you.
>
> Hopefully I will get to work with you again. Best of luck for the future!
>
> John
>
>
>
> On Fri, Apr 28, 2017 at 9:12 AM, Mario Villaplana <
> mario.villapl...@gmail.com> wrote:
>
> Hi ironic team,
>
>
>
> You may have noticed a decline in my upstream contributions the past few
> weeks. Unfortunately, I'm no longer being paid to work on ironic. It's
> unlikely that I'll be contributing enough to keep up with the project in my
> new job, too, so please do feel free to remove my core access.
>
>
>
> It's been great working with all of you. I've learned so much about open
> source, baremetal provisioning, Python, and more from all of you, and I
> will definitely miss it. I hope that we all get to work together again in
> the future someday.
>
>
>
> I am not sure that I'll be at the Forum during the day, but please do ping
> me for a weekend or evening hangout if you're attending. I'd love to show
> anyone who's interested around the Boston area if our schedules align.
>
>
>
> Also feel free to contact me via IRC/email/carrier pigeon with any
> questions about work in progress I had upstream.
>
>
>
> Good luck with the project, and thanks for everything!
>
>
>
> Best wishes,
>
> Mario Villaplana
>
>


Re: [openstack-dev] [ironic] [stable] ironic-stable-maint update proposal

2017-04-27 Thread Vasyl Saienko
On Thu, Apr 27, 2017 at 5:21 PM, Dmitry Tantsur  wrote:

> Hi all!
>
> I'd like to propose the following changes to the ironic-stable-maint group
> [0]:
>
> 1. Add Ruby Loo (rloo) to the group. Ruby does not need introduction in
> the Ironic community, she has been with the project for really long time
> and is well known for her high-quality and thorough reviews. She has been
> pretty active on stable branches as well [1].
>

++


> 2. Remove Jay Faulkner (sigh..) per his request at [2].
>
> 3. Remove Devananda (sigh again..) as he's no longer active on the project
> and was removed from ironic-core several months ago [3].
>

Sad to hear that folks are leaving the Ironic team, but these are things we
can't change...


>
> So for those on the team already, please reply with a +1 or -1 vote.
> I'll also need somebody to apply this change, as I don't have ACL for that.
>
> [0] https://review.openstack.org/#/admin/groups/950,members
> [1] https://review.openstack.org/#/q/project:openstack/ironic+NOT+branch:master+reviewer:%22Ruby+Loo+%253Cruby.loo%2540intel.com%253E%22
> [2] http://lists.openstack.org/pipermail/openstack-dev/2017-April/115968.html
> [3] http://lists.openstack.org/pipermail/openstack-dev/2017-February/112442.html
>


Re: [openstack-dev] [ironic] [neutron] Should the ironic-neutron meeting start back up for pike?

2017-03-10 Thread Vasyl Saienko
Michael thanks for raising this question.

I don't have a strong opinion here. We didn't spend much time on
network-related topics during the past several meetings. But now we have the
'networking-baremetal' repo under our control.
We will need time to talk about the design of our Neutron plugin, its
implementation, etc., so my feeling is that in 1-2 months we will definitely
need more time to talk about networking in Ironic than we do today.

On Thu, Mar 9, 2017 at 8:49 PM, Dmitry Tantsur  wrote:

> On 03/07/2017 08:44 PM, Michael Turek wrote:
>
>> Hey all,
>>
>> So at yesterday's ironic IRC meeting the question of whether or not the
>> ironic
>> neutron integration meeting should start back up. My understanding is
>> that this
>> meeting died down as it became more status oriented.
>>
>> I'm wondering if it'd be worthwhile to kick it off again as 4 of pike's
>> high
>> priority items are neutron integration focused.
>>
>> Personally it'd be a meeting I'd attend this cycle but I could understand
>> if
>> it's more trouble than it's worth.
>>
>
> I feel quite the same. I'd find it useful for me to learn from more
> knowledgeable folks, but I'd like to hear their opinion ;)
>
>
>
>> Thoughts?
>>
>> Thanks,
>> Mike
>>
>>


Re: [openstack-dev] [ironic] End-of-Ocata core team updates

2017-02-23 Thread Vasyl Saienko
Thanks everyone for this great opportunity; I will do my best in this new
position.
Congrats Mario, well deserved!

Devananda, I hope to see you back soon; it was a pleasure to work with you.

On Wed, Feb 22, 2017 at 10:33 AM, Jim Rollenhagen <j...@jimrollenhagen.com>
wrote:

> We've a clear majority here, I took the liberty of adding Mario and Vasyl
> while Dmitry is busy running our sessions today. Welcome to the team, y'all
> :)
>
>
> // jim
>
> On Fri, Feb 17, 2017 at 4:40 AM, Dmitry Tantsur <dtant...@redhat.com>
> wrote:
>
>> Hi all!
>>
>> I'd like to propose a few changes based on the recent contributor
>> activity.
>>
>> I have two candidates that look very good and pass the formal barrier of
>> 3 reviews a day on average [1].
>>
>> First, Vasyl Saienko (vsaienk0). I'm pretty confident in him, his stats
>> [2] are high, he's doing a lot of extremely useful work around networking
>> and CI.
>>
>> Second, Mario Villaplana (mariojv). His stats [3] are quite high too, he
>> has been doing some quality reviews for critical patches in the Ocata cycle.
>>
>> Active cores and interested contributors, please respond with your +-1 to
>> these suggestions.
>>
>> Unfortunately, there is one removal as well. Devananda, our team leader
>> for several cycles since the very beginning of the project, has not been
>> active on the project for some time [4]. I propose to (hopefully temporary)
>> remove him from the core team. Of course, when (look, I'm not even saying
>> "if"!) he comes back to active reviewing, I suggest we fast-forward him
>> back. Thanks for everything Deva, good luck with your current challenges!
>>
>> Thanks,
>> Dmitry
>>
>> [1] http://stackalytics.com/report/contribution/ironic-group/90
>> [2] http://stackalytics.com/?user_id=vsaienko=marks
>> [3] http://stackalytics.com/?user_id=mario-villaplana-j=marks
>> [4] http://stackalytics.com/?user_id=devananda=marks
>>


[openstack-dev] [ironic][neutron] PTG cross team session

2017-02-16 Thread Vasyl Saienko
Hello Ironic/Neutron teams,


The Ironic team would like to schedule a cross-project session with the
Neutron team on Mon - Tues, except for Tue 9:30 - 10:00.
The topics we would like to discuss have been added to:
https://etherpad.openstack.org/p/neutron-ptg-pike L151


Sincerely,
Vasyl Saienko


Re: [openstack-dev] [ironic][qa][grenade] Release blocked on grenade job not testing from newton

2017-02-10 Thread Vasyl Saienko
The root cause of the broken ironic grenade job is described in
https://bugs.launchpad.net/ironic/+bug/1663371
In short, during Ocata we removed the DEFAULT_IMAGE_NAME setting logic from
devstack [0]. As soon as stable/ocata was cut for devstack, the
DEFAULT_IMAGE_NAME variable was no longer visible in grenade, and the
default value in nova's resources.sh script was picked up [1].

It was fixed by [2], which sources ironic's vars (we now set
DEFAULT_IMAGE_NAME there) in grenade, making it available to all grenade
scripts.

The problem described earlier (we test upgrades from master to master) still
exists. It affects not only ironic but all projects that do not have the
latest stable branch (i.e. stable/ocata right now). As soon as they cut it,
everything returns to normal. But there is a short period of time when all
such projects test upgrades from master to master.
Related bug: [3].



[0]
https://github.com/openstack-dev/devstack/commit/d89b175321ac293454ad15caaee13c0ae46b0bd6
[1]
https://github.com/openstack-dev/grenade/blob/master/projects/60_nova/resources.sh#L31
[2] https://review.openstack.org/#/c/431369/
[3] https://bugs.launchpad.net/grenade/+bug/1663505
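The shape of the fix in [2] is to define the image name in a file that both
sides of the grenade run source, rather than relying on a devstack variable
that no longer exists on stable/ocata. A rough, illustrative sketch — the
file path and fallback value are assumptions, not the exact patch:

```shell
# Illustrative only: set the default in ironic's own grenade settings file
# (e.g. devstack/upgrade/settings, sourced by all grenade scripts), so the
# value exists regardless of which devstack branch is in use.
export DEFAULT_IMAGE_NAME=${DEFAULT_IMAGE_NAME:-cirros-0.3.4-x86_64-uec}
```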

On Thu, Feb 9, 2017 at 4:02 PM, Jim Rollenhagen 
wrote:

> On Thu, Feb 9, 2017 at 7:00 AM, Jim Rollenhagen 
> wrote:
>
>> Hey folks,
>>
>> Ironic plans to release Ocata this week, once we have a couple small
>> patches
>> and a release note cleanup landed.
>>
>> However, our grenade job is now testing master->master, best I can tell.
>> This
>> is pretty clearly due to this d-s-g commit:
>> https://github.com/openstack-infra/devstack-gate/commit/9c752b02fbd57c7021a7c9295bf4d68a0d1973a8
>>
>> Evidence:
>>
>> * it appears to be checking out a change on master into the old side:
>>   http://logs.openstack.org/44/354744/10/check/gate-grenade-dsvm-ironic-ubuntu-xenial/4b395ff/logs/grenade.sh.txt.gz#_2017-02-09_07_15_32_979
>>
>> * and somewhat coincidentally, our grenade job seems to be broken when
>> master
>>   (ocata) is on the old side, because we now select instance images in our
>>   devstack plugin:
>>   http://logs.openstack.org/44/354744/10/check/gate-grenade-dsvm-ironic-ubuntu-xenial/4b395ff/logs/grenade.sh.txt.gz#_2017-02-09_08_07_10_946
>>
>> So, we're currently blocking the ironic release on this, as obviously we
>> don't
>> want to release if we don't know upgrades work. As I see it, we have two
>> options:
>>
>> 1) Somehow fix devstack-gate and configure our jobs in project-config
>> such that
>> this job upgrades newton->master. I might need some help on navigating
>> this
>> one.
>>
>> 2) Make our grenade job non-voting for now, release 7.0.0 anyway, and
>> immediately make sure that the stable/ocata branch runs grenade as
>> expected and
>> passes. If it isn't passing, fix what we need to and cut 7.0.1 ASAP.
>>
>
> After talking to Doug and Sean on IRC, I think this is the best
> option. We don't necessarily need to make it non-voting if we
> can fix it quickly (Vasyl is working on this already).
>
> We still have a week to release from the Ocata branch if we need
> to get more things in. They'll just need to go through the backport
> process.
>
> // jim
>
>
>


Re: [openstack-dev] [neutron] Vlan aware VMs or trunking

2016-12-07 Thread Vasyl Saienko
On Wed, Dec 7, 2016 at 7:34 PM, Kevin Benton <ke...@benton.pub> wrote:

> >It work only when whole switch is aimed by single customer, it will not
> work when several customers sharing the same switch.
>
> Do you know what vendors have this limitation? I know the broadcom
> chipsets didn't prevent this (we allowed VLAN rewrites scoped to ports at
> Big Switch). If it's common to Cisco/Juniper then I guess we are stuck
> reflecting bad hardware in the API. :(
>

@Kevin
It looks like I was wrong; in the example I provided, I expected to
configure the VLAN mapping on the Gig0/1 uplink. It will not work in that
case, but if the VLAN mapping is configured on the ports where the baremetal
servers are plugged in (i.e. Fa0/1 - Fa0/5), it should work :)
I definitely need more practice with VLAN mapping...



>
> On Wed, Dec 7, 2016 at 9:22 AM, Vasyl Saienko <vsaie...@mirantis.com>
> wrote:
>
>>
>>
>> On Wed, Dec 7, 2016 at 7:12 PM, Kevin Benton <ke...@benton.pub> wrote:
>>
>>>
>>>
>>> On Wed, Dec 7, 2016 at 8:47 AM, Vasyl Saienko <vsaie...@mirantis.com>
>>> wrote:
>>>
>>>> @Armando: IMO the spec [0] is not about enablement of Trunks and
>>>> baremetal. This spec is rather about trying to make user request with any
>>>> network configuration (number of requested NICs) to be able successfully
>>>> deployed on ANY ironic node (even when number of hardware interfaces is
>>>> less than number of requested attached networks to instance) by implicitly
>>>> creating neutron trunks on the fly.
>>>>
>>>> I have  a concerns about it and left a comment [1]. The guaranteed
>>>> number of NICs on hardware server should be  available to user via nova
>>>> flavor information. User should decide if he needs a trunk or not only by
>>>> his own, as his image may even not support trunking. I suggest that
>>>> creating trunks implicitly (w/o user knowledge) shouldn't happen.
>>>>
>>>> Current trunks implementation in Neutron will work just fine with
>>>> baremetal case with one small addition:
>>>>
>>>> 1. segmentation_type and segmentation_id should not be API mandatory
>>>> fields at least for the case when provider segmentation is VLAN.
>>>>
>>>> 2. User still should know what segmentation_id was picked to configure
>>>> it on Instance side. (Not sure if it is done automatically via network
>>>> metadata at the moment.). So it should be inherited from network
>>>> provider:segmentation_id and visible to the user.
>>>>
>>>>
>>>> @Kevin: Having VLAN mapping support on the switch will not solve
>>>> problem described in scenario 3 when multiple users pick the same
>>>> segmentation_id for different networks and their instances were spawned to
>>>> baremetal nodes on the same switch.
>>>>
>>>> I don’t see other option than to control uniqueness of segmentation_id
>>>> on Neutron side for baremetal case.
>>>>
>>>
>>> Well unless there is a limitation in the switch hardware, VLAN mapping
>>> is scoped to each individual port so users can pick the same local
>>> segmentation_id. The point of the feature on switches is for when you have
>>> customers that specify their own VLANs and you need to map them to service
>>> provider VLANs (i.e. what is happening here).
>>>
>>
>> It only works when the whole switch is used by a single customer; it will
>> not work when several customers share the same switch.
>>
>>
>>>
>>>
>>>>
>>>> Reference:
>>>>
>>>> [0] https://review.openstack.org/#/c/277853/
>>>> [1] https://review.openstack.org/#/c/277853/10/specs/approved/VLAN-aware-baremetal-instances.rst@35
>>>>
>>>> On Wed, Dec 7, 2016 at 6:14 PM, Kevin Benton <ke...@benton.pub> wrote:
>>>>
>>>>> Just to be clear, in this case the switches don't support VLAN
>>>>> translation (e.g. [1])? Because that also solves the problem you are
>>>>> running into. This is the preferable path for bare metal because it avoids
>>>>> exposing provider details to users and doesn't tie you to VLANs on the
>>>>> backend.
>>>>>
>>>>> 1. http://ipcisco.com/vlan-mapping-vlan-translation-%E2%80%93-part-2/
>>>>>
>>>>> On Wed, Dec 7, 2016 at 7:49 AM, Armando M. <arma...@gmail.com> wrote:
>>>>>

Re: [openstack-dev] [neutron] Vlan aware VMs or trunking

2016-12-07 Thread Vasyl Saienko
On Wed, Dec 7, 2016 at 7:12 PM, Kevin Benton <ke...@benton.pub> wrote:

>
>
> On Wed, Dec 7, 2016 at 8:47 AM, Vasyl Saienko <vsaie...@mirantis.com>
> wrote:
>
>> @Armando: IMO the spec [0] is not about enablement of Trunks and
>> baremetal. This spec is rather about trying to make user request with any
>> network configuration (number of requested NICs) to be able successfully
>> deployed on ANY ironic node (even when number of hardware interfaces is
>> less than number of requested attached networks to instance) by implicitly
>> creating neutron trunks on the fly.
>>
>> I have  a concerns about it and left a comment [1]. The guaranteed number
>> of NICs on hardware server should be  available to user via nova flavor
>> information. User should decide if he needs a trunk or not only by his own,
>> as his image may even not support trunking. I suggest that creating trunks
>> implicitly (w/o user knowledge) shouldn't happen.
>>
>> Current trunks implementation in Neutron will work just fine with
>> baremetal case with one small addition:
>>
>> 1. segmentation_type and segmentation_id should not be API mandatory
>> fields at least for the case when provider segmentation is VLAN.
>>
>> 2. User still should know what segmentation_id was picked to configure it
>> on Instance side. (Not sure if it is done automatically via network
>> metadata at the moment.). So it should be inherited from network
>> provider:segmentation_id and visible to the user.
>>
>>
>> @Kevin: Having VLAN mapping support on the switch will not solve problem
>> described in scenario 3 when multiple users pick the same segmentation_id
>> for different networks and their instances were spawned to baremetal nodes
>> on the same switch.
>>
>> I don’t see other option than to control uniqueness of segmentation_id on
>> Neutron side for baremetal case.
>>
>
> Well unless there is a limitation in the switch hardware, VLAN mapping is
> scoped to each individual port so users can pick the same local
> segmentation_id. The point of the feature on switches is for when you have
> customers that specify their own VLANs and you need to map them to service
> provider VLANs (i.e. what is happening here).
>

It only works when the whole switch is used by a single customer; it will
not work when several customers share the same switch.


>
>
>>
>> Reference:
>>
>> [0] https://review.openstack.org/#/c/277853/
>> [1] https://review.openstack.org/#/c/277853/10/specs/approved/VLAN-aware-baremetal-instances.rst@35
>>
>> On Wed, Dec 7, 2016 at 6:14 PM, Kevin Benton <ke...@benton.pub> wrote:
>>
>>> Just to be clear, in this case the switches don't support VLAN
>>> translation (e.g. [1])? Because that also solves the problem you are
>>> running into. This is the preferable path for bare metal because it avoids
>>> exposing provider details to users and doesn't tie you to VLANs on the
>>> backend.
>>>
>>> 1. http://ipcisco.com/vlan-mapping-vlan-translation-%E2%80%93-part-2/
>>>
>>> On Wed, Dec 7, 2016 at 7:49 AM, Armando M. <arma...@gmail.com> wrote:
>>>
>>>>
>>>>
>>>> On 7 December 2016 at 04:02, Vasyl Saienko <vsaie...@mirantis.com>
>>>> wrote:
>>>>
>>>>> Armando, Kevin,
>>>>>
>>>>> Thanks for your comments.
>>>>>
>>>>> To be more clear we are trying to use neutron trunks implementation
>>>>> with baremetal servers (Ironic). Baremetal servers are plugged to Tor (Top
>>>>> of the Rack) switch. User images are spawned directly onto hardware.
>>>>>
>>>> Ironic uses Neutron ML2 drivers to plug baremetal servers to Neutron
>>>>> networks (it looks like changing vlan on the port to segmentation_id from
>>>>> Neutron network, scenario 1 in the attachment). Ironic works with VLAN
>>>>> segmentation only for now, but some vendors ML2 like arista allows to use
>>>>> VXLAN (in this case VXLAN to VLAN mapping is created on the switch.).
>>>>> Different users may have baremetal servers connected to the same ToR 
>>>>> switch.
>>>>>
>>>>> By trying to apply current neutron trunking model leads to the
>>>>> following errors:
>>>>>
>>>>> *Scenario 2: single user scenario, create VMs with trunk and non-trunk
>>>>> ports.*
>>>>>
>>>>>- User create two networks:
>>>>

Re: [openstack-dev] [neutron] Vlan aware VMs or trunking

2016-12-07 Thread Vasyl Saienko
@Armando: IMO the spec [0] is not about enabling trunks for baremetal.
Rather, this spec is about trying to make a user request with any network
configuration (any number of requested NICs) deployable on ANY ironic node
(even when the number of hardware interfaces is less than the number of
networks requested for the instance) by implicitly creating neutron trunks
on the fly.

I have concerns about this and left a comment [1]. The guaranteed number of
NICs on a hardware server should be available to the user via the nova
flavor information. The user should decide whether he needs a trunk on his
own, as his image may not even support trunking. I suggest that creating
trunks implicitly (without the user's knowledge) shouldn't happen.

The current trunks implementation in Neutron will work just fine for the
baremetal case with one small addition:

1. segmentation_type and segmentation_id should not be mandatory API fields,
at least for the case when the provider segmentation is VLAN.

2. The user still needs to know which segmentation_id was picked in order to
configure it on the instance side. (I'm not sure whether this is done
automatically via network metadata at the moment.) So it should be inherited
from the network's provider:segmentation_id and visible to the user.
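For reference, the existing trunk workflow these two points refer to looks
roughly like this with the OSC trunk commands (port/network/trunk names are
placeholders). Point 1 would make the segmentation-type/segmentation-id pair
optional; point 2 would default it from, and expose, the network's
provider:segmentation_id:

```shell
# Current Neutron trunk workflow, applied to a baremetal node.
# Names are placeholders; today the subport's segmentation fields are mandatory.
openstack port create --network net-1 port0          # parent port
openstack port create --network net-2 port1          # future subport
openstack network trunk create --parent-port port0 \
    --subport port=port1,segmentation-type=vlan,segmentation-id=300 trunk0
```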


@Kevin: Having VLAN mapping support on the switch will not solve the problem
described in scenario 3, where multiple users pick the same segmentation_id
for different networks and their instances are spawned on baremetal nodes
attached to the same switch.

I don't see any option other than controlling the uniqueness of
segmentation_id on the Neutron side for the baremetal case.

Reference:

[0] https://review.openstack.org/#/c/277853/
[1]
https://review.openstack.org/#/c/277853/10/specs/approved/VLAN-aware-baremetal-instances.rst@35

On Wed, Dec 7, 2016 at 6:14 PM, Kevin Benton <ke...@benton.pub> wrote:

> Just to be clear, in this case the switches don't support VLAN translation
> (e.g. [1])? Because that also solves the problem you are running into. This
> is the preferable path for bare metal because it avoids exposing provider
> details to users and doesn't tie you to VLANs on the backend.
>
> 1. http://ipcisco.com/vlan-mapping-vlan-translation-%E2%80%93-part-2/
>
> On Wed, Dec 7, 2016 at 7:49 AM, Armando M. <arma...@gmail.com> wrote:
>
>>
>>
>> On 7 December 2016 at 04:02, Vasyl Saienko <vsaie...@mirantis.com> wrote:
>>
>>> Armando, Kevin,
>>>
>>> Thanks for your comments.
>>>
>>> To be more clear we are trying to use neutron trunks implementation with
>>> baremetal servers (Ironic). Baremetal servers are plugged to Tor (Top of
>>> the Rack) switch. User images are spawned directly onto hardware.
>>>
>> Ironic uses Neutron ML2 drivers to plug baremetal servers to Neutron
>>> networks (it looks like changing vlan on the port to segmentation_id from
>>> Neutron network, scenario 1 in the attachment). Ironic works with VLAN
>>> segmentation only for now, but some vendors ML2 like arista allows to use
>>> VXLAN (in this case VXLAN to VLAN mapping is created on the switch.).
>>> Different users may have baremetal servers connected to the same ToR switch.
>>>
>>> By trying to apply current neutron trunking model leads to the following
>>> errors:
>>>
>>> *Scenario 2: single user scenario, create VMs with trunk and non-trunk
>>> ports.*
>>>
>>>- User create two networks:
>>>net-1: (provider:segmentation_id: 100)
>>>net-2: (provider:segmentation_id: 101)
>>>
>>>- User create 1 trunk port
>>>port0 - parent port in net-1
>>>port1 - subport in net-2 and define user segmentation_id: 300
>>>
>>>- User boot VMs:
>>>BM1: with trunk (connected to ToR Fa0/1)
>>>BM4: in net-2 (connected to ToR Fa0/4)
>>>
>>>- VLAN on the switch are configured as follow:
>>>Fa0/1 - trunk, native 100, allowed vlan 300
>>>Fa0/4 - access vlan 101
>>>
>>> *Problem:* BM1 has no access BM4 on net-2
>>>
>>>
>>> *Scenario 3: multiple user scenario, create VMs with trunk.*
>>>
>>>- User1 create two networks:
>>>net-1: (provider:segmentation_id: 100)
>>>net-2: (provider:segmentation_id: 101)
>>>
>>>- User2 create two networks:
>>>net-3: (provider:segmentation_id: 200)
>>>net-4: (provider:segmentation_id: 201)
>>>
>>>- User1 create 1 trunk port
>>>port0 - parent port in net-1
>>>port1 - subport in net-2 and define user segmentation_id: 300
>>>
>>>- User2 create 1 trunk port
>>>port0 - parent port in ne

Re: [openstack-dev] [neutron] Vlan aware VMs or trunking

2016-12-07 Thread Vasyl Saienko
Armando, Kevin,

Thanks for your comments.

To be clearer: we are trying to use the Neutron trunks implementation with
baremetal servers (Ironic). Baremetal servers are plugged into a ToR (Top of
the Rack) switch. User images are spawned directly onto hardware.
Ironic uses Neutron ML2 drivers to plug baremetal servers into Neutron
networks (this amounts to changing the VLAN on the switch port to the
segmentation_id of the Neutron network; scenario 1 in the attachment). Ironic
works with VLAN segmentation only for now, but some vendor ML2 drivers, such
as Arista's, allow using VXLAN (in this case a VXLAN-to-VLAN mapping is
created on the switch). Different users may have baremetal servers connected
to the same ToR switch.

Trying to apply the current Neutron trunking model leads to the following
problems:

*Scenario 2: single user scenario, create VMs with trunk and non-trunk
ports.*

   - User creates two networks:
   net-1: (provider:segmentation_id: 100)
   net-2: (provider:segmentation_id: 101)

   - User creates 1 trunk port:
   port0 - parent port in net-1
   port1 - subport in net-2, with user-defined segmentation_id: 300

   - User boots VMs:
   BM1: with trunk (connected to ToR Fa0/1)
   BM4: in net-2 (connected to ToR Fa0/4)

   - VLANs on the switch are configured as follows:
   Fa0/1 - trunk, native 100, allowed vlan 300
   Fa0/4 - access vlan 101

*Problem:* BM1 has no access BM4 on net-2


*Scenario 3: multiple user scenario, create VMs with trunk.*

   - User1 creates two networks:
   net-1: (provider:segmentation_id: 100)
   net-2: (provider:segmentation_id: 101)

   - User2 creates two networks:
   net-3: (provider:segmentation_id: 200)
   net-4: (provider:segmentation_id: 201)

   - User1 creates 1 trunk port:
   port0 - parent port in net-1
   port1 - subport in net-2, with user-defined segmentation_id: 300

   - User2 creates 1 trunk port:
   port0 - parent port in net-3
   port1 - subport in net-4, with user-defined segmentation_id: 300

   - User1 boots VM:
   BM1: with trunk (connected to ToR Fa0/1)

   - User2 boots VM:
   BM4: with trunk (connected to ToR Fa0/4)

   - VLANs on the switch are configured as follows:
   Fa0/1 - trunk, native 100, allowed vlan 300
   Fa0/4 - trunk, native 200, allowed vlan 300

*Problem:* User1's BM1 has access to User2's BM4 on net-2. There is a
conflict in the VLAN mapping: provider VLAN 101 should be mapped to user
VLAN 300, and provider VLAN 201 should also be mapped to VLAN 300.


Making segmentation_id on a trunk subport optional and inheriting it from
the subport network's segmentation_id solves such problems.
According to the original spec, both segmentation_type and segmentation_id
are optional [0].

Does Neutron/Nova place information about the user's VLANs onto the instance
via network metadata?

Reference:
[0]
https://review.openstack.org/#/c/308521/1/specs/newton/vlan-aware-vms.rst@118

Thanks in advance,
Vasyl Saienko

On Tue, Dec 6, 2016 at 7:08 PM, Armando M. <arma...@gmail.com> wrote:

>
>
> On 6 December 2016 at 08:49, Vasyl Saienko <vsaie...@mirantis.com> wrote:
>
>> Hello Neutron Community,
>>
>>
>> I've found that nice feature vlan-aware-vms was implemented in Newton [0].
>> However the usage of this feature for regular users is impossible, unless
>> I'm missing something.
>>
>> As I understood correctly it should work in the following way:
>>
>>1. It is possible to group neutron ports to trunks.
>>2. When trunk is created parent port should be defined:
>>Only one port can be parent.
>>segmentation of parent port is set as native or untagged vlan on the
>>trunk.
>>3. Other ports may be added as subports to existing trunk.
>>When subport is added to trunk *segmentation_type* and *segmentation_id
>>*should be specified.
>>segmentation_id of subport is set as allowed vlan on the trunk
>>
>> Non-admin user do not know anything about *segmentation_type* and
>> *segmentation_id.*
>>
>
> Segmentation type and ID are used to multiplex/demultiplex traffic in/out
> of the guest associated to a particular trunk. Aside the fact that the only
> supported type is VLAN at the moment (if ever), the IDs are user provided
> to uniquely identify the traffic coming in/out of the trunked networks so
> that it can reach the appropriate vlan interface within the guest. The
> documentation [1] is still in flight, but it clarifies this point.
>
> HTH
> Armando
>
> [1] https://review.openstack.org/#/c/361776
>
>
>> So it is strange that those fields are mandatory when subport is added to
>> trunk. Furthermore they may conflict with port's network segmentation_id
>> and type. Why we can't inherit segmentation_type and segmentation_id from
>> network settings of subport?
>>
>> References:
>> [0] https://blueprints.launchpa

[openstack-dev] [neutron] Vlan aware VMs or trunking

2016-12-06 Thread Vasyl Saienko
Hello Neutron Community,


I've found that the nice vlan-aware-vms feature was implemented in Newton [0].
However, using this feature as a regular user seems impossible, unless I'm
missing something.

As I understand it, it should work in the following way:

   1. It is possible to group Neutron ports into trunks.
   2. When a trunk is created, a parent port should be defined:
   Only one port can be the parent.
   The segmentation of the parent port is set as the native (untagged) VLAN
   on the trunk.
   3. Other ports may be added as subports to an existing trunk.
   When a subport is added to a trunk, *segmentation_type* and
   *segmentation_id* should be specified.
   The segmentation_id of the subport is set as an allowed VLAN on the trunk.
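Sketched as data, the workflow above maps a trunk onto a switch trunk-port
configuration roughly like this (invented helper, not real Neutron or
switch-driver code):

```python
# Sketch of the trunk-to-switch mapping described above.
# Invented helper; not real Neutron or switch-driver code.

def switch_trunk_config(parent_vlan, subport_vlans):
    return {
        "mode": "trunk",
        "native_vlan": parent_vlan,              # parent port: untagged traffic
        "allowed_vlans": sorted(subport_vlans),  # one VLAN per subport
    }

# A parent port on a VLAN 100 network plus one subport with
# segmentation_id 300 gives the familiar switch-side shape:
cfg = switch_trunk_config(100, [300])
assert cfg == {"mode": "trunk", "native_vlan": 100, "allowed_vlans": [300]}
```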

A non-admin user does not know anything about *segmentation_type* and
*segmentation_id*. So it is strange that those fields are mandatory when a
subport is added to a trunk. Furthermore, they may conflict with the port's
network segmentation_id and type. Why can't we inherit segmentation_type
and segmentation_id from the network settings of the subport?

References:
[0] https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
[1]
https://review.openstack.org/#/c/361776/15/doc/networking-guide/source/config-trunking.rst
[2] https://etherpad.openstack.org/p/trunk-api-dump-newton

Thanks in advance,
Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][devstack][devstack-gate][openstack-infra][networking-generic-switch] Ironic multinode job

2016-10-17 Thread Vasyl Saienko
Hello openstack-infra, community,

The Ironic team has a stable multinode job
<https://review.openstack.org/#/c/368173/> and wants to move on and add it
to the check/gate pipelines. At the moment we are blocked by devstack-gate
changes (3 patches in the list below).
We kindly ask for help reviewing/merging the infra part, which is a blocker
for the rest of the Ironic changes.

*devstack:*

   - "Use userrc_early for all nodes"
   https://review.openstack.org/350801/
   - "Drop SERVICE_HOST=127.0.0.1 from setup_localrc()"
   https://review.openstack.org/#/c/368870/


*devstack-gate:*

   - "Setup ssh-key on subnodes for Ironic"
   https://review.openstack.org/#/c/364830
   - "Update ENABLED_SERVICE on subnode with ironic"
   https://review.openstack.org/#/c/368611
   - "Update local.conf for ironic-multinode case"
   https://review.openstack.org/#/c/352790/


 *ironic:*

   - "Skip create_ovs_taps() for multitenancy case"
   https://review.openstack.org/#/c/382360
   - "Ignore required_services for multinode topology"
   https://review.openstack.org/#/c/352793
   - "Skip db configuration on subnodes"
   https://review.openstack.org/#/c/353303
   - "Fix setting custom IRONIC_VM_NETWORK_BRIDGE"
   https://review.openstack.org/#/c/365116/
   - "Update devstack provision net config for multihost"
   https://review.openstack.org/#/c/368644/
   - "Update ironic node names for multinode case"
   https://review.openstack.org/#/c/368645/
   - "Skip some steps for multinode case"
   https://review.openstack.org/#/c/368646/
   - "Add devstack setup_vxlan_network()"
   https://review.openstack.org/#/c/368647
   - "Update iptables rules and services IPs for multinode"
   https://review.openstack.org/#/c/368648/


"DO NOT MERGE: Testing multinode stuff" https://review.openstack.org/#/c/368173/

Sincerely,
Vasyl Saienko

On Tue, Sep 13, 2016 at 12:04 PM, Vasyl Saienko <vsaie...@mirantis.com>
wrote:

> Hello Community,
>
> I'm happy to announce that we got stable ironic multinode job. There are a
> lot of patches  (around 20) to different projects needed to be merged
> before we can move this job to check pipeline.  That is why I'm writing
> this email to openstack-dev. I'm kindly asking cores from the devstack,
> devstack-gate, networking-generic-switch, ironic to review related patches
> from the following list:
>
> *devstack:*
> "Fix common functions to work with V2" https://review.openstack.org/#/c/366922/
> "Drop SERVICE_HOST=127.0.0.1 from setup_localrc()"
> https://review.openstack.org/#/c/368870/
>
> *devstack-gate:*
> "Add c-vol,c-bak on subnode when c-api enabled"
> https://review.openstack.org/#/c/352909
> "Preparing multinode networking for Ironic"
> https://review.openstack.org/#/c/335981
> "Setup ssh-key on subnodes for Ironic"
> https://review.openstack.org/#/c/364830
> "Update ENABLED_SERVICE on subnode with ironic"
> https://review.openstack.org/#/c/368611
> "Update local.conf for ironic-multinode case"
> https://review.openstack.org/#/c/352790/
>
> *networking-generic-switch*:
> "Setup multinode avare config" https://review.openstack.org/#/c/364848/
>
> *ironic:*
> "Configure clean network to provision network"
> https://review.openstack.org/#/c/356632
> "Ignore required_services for multinode topology"
> https://review.openstack.org/#/c/352793
> "Source openrc on subnode in multinode topology"
> https://review.openstack.org/#/c/353302/
> "Skip db configuration on subnodes"
> https://review.openstack.org/#/c/353303
> "Fix setting custom IRONIC_VM_NETWORK_BRIDGE"
> https://review.openstack.org/#/c/365116/
> "Update devstack provision net config for multihost"
> https://review.openstack.org/#/c/368644/
> "Update ironic node names for multinode case"
> https://review.openstack.org/#/c/368645/

Re: [openstack-dev] [ironic] [infra] RFC: consolidating and extending Ironic CI jobs

2016-10-12 Thread Vasyl Saienko
On Wed, Oct 12, 2016 at 4:10 PM, Dmitry Tantsur <dtant...@redhat.com> wrote:

> On 10/12/2016 03:01 PM, Vasyl Saienko wrote:
>
>> Hello Dmitry,
>>
>> Thanks for raising this question. I think the problem is deeper. There
>> are a lot
>> of use-cases that are not covered by our CI like cleaning, adoption etc...
>>
>
> This is nice, but here I'm trying to solve a pretty specific problem: we
> can't reasonably add more jobs to even cover all supported partitioning
> scenarios.
>
>
>> The main problem is that we need to change ironic configuration to apply
>> specific use-case. Unfortunately tempest doesn't allow to change cloud
>> configuration during tests run.
>>
>>
>> Recently I've started working on PoC that should solve this problem [0].
>> The
>> main idea is to have ability to change ironic configuration during single
>> gate
>> job run, and launch the same tempest tests after each configuration
>> change.
>>
>> We can't change other components configuration as it will require
>> reinstalling
>> whole devstack, so launching flat network and multitenant network
>> scenarios is
>> not possible in single job.
>>
>>
>> For example:
>>
>> 1. Setup devstack with agent_ssh wholedisk ipxe configuration
>>
>> 2. Run tempest tests
>>
>> 3. Update localrc to use agent_ssh localboot image
>>
>
> For this particular example, my approach will be much, much faster, as all
> instances will be built in parallel.


On the gates we're using 7 VMs, and we never boot all 7 nodes in parallel;
I'm not sure how slow the environment will be in this case.




>
>> 4. Unstack ironic component only. Not whole devstack.
>>
>> 5. Install/configure ironic component only
>>
>> 6. Run tempest tests
>>
>> 7. Repeat steps 3-6 with other Ironic-only configuration change.
>>
>>
>> Running step 4,5 takes near 2-3 minutes.
>>
>>
>> Below is a non-exhaustive list of configuration choices we could try to
>> mix and match in a single tempest run to have maximal overall code
>> coverage in a single job:
>>
>>   *
>>
>> cleaning enabled / disabled
>>
>
> This is the only valid example, for other cases you don't need a devstack
> update.
>

There are other use cases, like portgroups, security groups, and boot from
volume, which will require configuration changes.


>
>>   *
>>
>> using pxe_* drivers / agent_* drivers
>>
>>   *
>>
>> using netboot / localboot
>>
>>   * using partitioned / wholedisk images
>>
>>
>>
>> [0] https://review.openstack.org/#/c/369021/
>>
>>
>>
>>
>> On Wed, Oct 12, 2016 at 3:01 PM, Dmitry Tantsur <dtant...@redhat.com
>> <mailto:dtant...@redhat.com>> wrote:
>>
>> Hi folks!
>>
>> I'd like to propose a plan on how to simultaneously extend the
>> coverage of
>> our jobs and reduce their number.
>>
>> Currently, we're running one instance per job. This was reasonable
>> when the
>> coreos-based IPA image was the default, but now with tinyipa we can
>> run up
>> to 7 instances (and actually do it in the grenade job). I suggest we
>> use 6
>> fake bm nodes to make a single CI job cover many scenarios.
>>
>> The jobs will be grouped based on driver (pxe_ipmitool and
>> agent_ipmitool)
>> to be more in sync with how 3rd party CI does it. A special
>> configuration
>> option will be used to enable multi-instance testing to avoid
>> breaking 3rd
>> party CI systems that are not ready for it.
>>
>> To ensure coverage, we'll only leave a required number of nodes
>> "available",
>> and deploy all instances in parallel.
>>
>> In the end, we'll have these jobs on ironic:
>> gate-tempest-ironic-pxe_ipmitool-tinyipa
>> gate-tempest-ironic-agent_ipmitool-tinyipa
>>
>> Each job will cover the following scenarious:
>> * partition images:
>> ** with local boot:
>> ** 1. msdos partition table and BIOS boot
>> ** 2. GPT partition table and BIOS boot
>> ** 3. GPT partition table and UEFI boot  <*>
>> ** with netboot:
>> ** 4. msdos partition table and BIOS boot <**>
>> * whole disk images:
>> * 5. with msdos partition table embedded and BIOS boot
>> * 6. with GPT partition table embedded and UEFI boot  <*>
>>
>>
Am I right that we need to increase number 

Re: [openstack-dev] [ironic] [infra] RFC: consolidating and extending Ironic CI jobs

2016-10-12 Thread Vasyl Saienko
Hello Dmitry,

Thanks for raising this question. I think the problem is deeper: there are a
lot of use cases that are not covered by our CI, like cleaning, adoption,
etc.

The main problem is that we need to change the Ironic configuration to
exercise a specific use case. Unfortunately, tempest doesn't allow changing
the cloud configuration during a test run.

Recently I've started working on a PoC that should solve this problem [0].
The main idea is to be able to change the Ironic configuration during a
single gate job run, and launch the same tempest tests after each
configuration change.

We can't change other components' configuration, as that would require
reinstalling the whole devstack, so running the flat network and multitenant
network scenarios is not possible in a single job.

For example:

1. Setup devstack with agent_ssh wholedisk ipxe configuration

2. Run tempest tests

3. Update localrc to use agent_ssh localboot image

4. Unstack ironic component only. Not whole devstack.

5. Install/configure ironic component only

6. Run tempest tests

7. Repeat steps 3-6 with other Ironic-only configuration change.

Running steps 4-5 takes about 2-3 minutes.

Below is a non-exhaustive list of configuration choices we could try to mix
and match in a single tempest run to have maximal overall code coverage in a
single job:

   - cleaning enabled / disabled
   - using pxe_* drivers / agent_* drivers
   - using netboot / localboot
   - using partitioned / wholedisk images



[0] https://review.openstack.org/#/c/369021/
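The mix-and-match idea amounts to generating a scenario matrix and rerunning
the same tests for each entry. A quick sketch (the option names are
placeholders, not real devstack or tempest settings):

```python
import itertools

# Illustrative scenario matrix for a single gate job; the option names
# are placeholders, not real devstack or tempest settings.
choices = {
    "cleaning": ["enabled", "disabled"],
    "driver": ["pxe_ipmitool", "agent_ipmitool"],
    "boot": ["netboot", "localboot"],
    "image": ["partitioned", "wholedisk"],
}

scenarios = [dict(zip(choices, combo))
             for combo in itertools.product(*choices.values())]

# 2 * 2 * 2 * 2 = 16 configurations covered by one job: for each entry,
# rewrite the Ironic config, restack only Ironic (~2-3 min), rerun tempest.
assert len(scenarios) == 16
assert scenarios[0] == {"cleaning": "enabled", "driver": "pxe_ipmitool",
                        "boot": "netboot", "image": "partitioned"}
```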




On Wed, Oct 12, 2016 at 3:01 PM, Dmitry Tantsur  wrote:

> Hi folks!
>
> I'd like to propose a plan on how to simultaneously extend the coverage of
> our jobs and reduce their number.
>
> Currently, we're running one instance per job. This was reasonable when
> the coreos-based IPA image was the default, but now with tinyipa we can run
> up to 7 instances (and actually do it in the grenade job). I suggest we use
> 6 fake bm nodes to make a single CI job cover many scenarios.
>
> The jobs will be grouped based on driver (pxe_ipmitool and agent_ipmitool)
> to be more in sync with how 3rd party CI does it. A special configuration
> option will be used to enable multi-instance testing to avoid breaking 3rd
> party CI systems that are not ready for it.
>
> To ensure coverage, we'll only leave a required number of nodes
> "available", and deploy all instances in parallel.
>
> In the end, we'll have these jobs on ironic:
> gate-tempest-ironic-pxe_ipmitool-tinyipa
> gate-tempest-ironic-agent_ipmitool-tinyipa
>
> Each job will cover the following scenarious:
> * partition images:
> ** with local boot:
> ** 1. msdos partition table and BIOS boot
> ** 2. GPT partition table and BIOS boot
> ** 3. GPT partition table and UEFI boot  <*>
> ** with netboot:
> ** 4. msdos partition table and BIOS boot <**>
> * whole disk images:
> * 5. with msdos partition table embedded and BIOS boot
> * 6. with GPT partition table embedded and UEFI boot  <*>
>
>  <*> - in the future, when we figure our UEFI testing
>  <**> - we're moving away from defaulting to netboot, hence only one
> scenario
>
> I suggest creating the jobs for Newton and Ocata, and starting with Xenial
> right away.
>
> Any comments, ideas and suggestions are welcome.
>


[openstack-dev] [ironic][neutron] ML2 plugin for Ironic needs

2016-10-10 Thread Vasyl Saienko
Hello Community,

The Ironic and Neutron projects have become even more closely integrated
with the multitenancy implementation in Ironic.

There are 2 bugs that require a separate ML2 driver specifically for
Ironic's needs:


   - Booting ironic instance, Neutron port remains in down state [0]
   - Ironic needs to synchronize port status change events with Neutron [1]


I was told (in a review comment) that keeping the code in the Neutron tree
is not the right approach [2]. Of course I agree that Neutron has powerful
and flexible support for out-of-tree ML2 drivers, and such functionality
should be a separate ML2 plugin.

So the question to consult the whole community on:

Do we need a new networking-ironic-* ML2 driver?

To fix [0] and [1] we can use the existing networking-generic-switch [3] ML2
driver [4,5]. It was designed specifically for the Ironic case, and it
already helps us test Ironic multitenancy on the gates.

My team would prefer supporting one driver rather than multiplying entities.

[0] https://bugs.launchpad.net/neutron/+bug/1599836

[1] https://bugs.launchpad.net/ironic/+bug/1304673

[2] https://bugs.launchpad.net/neutron/+bug/1610898

[3] https://github.com/openstack/networking-generic-switch

[4] https://review.openstack.org/357779

[5] https://review.openstack.org/357780
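For context on bug [0]: a Neutron port stays DOWN until some mechanism
driver binds it. The heart of the fix, whichever driver ends up hosting it,
is a bind_port that accepts vnic_type 'baremetal'. Below is a self-contained
sketch with a stubbed port context; the real driver would subclass Neutron's
MechanismDriver and use the real PortContext API, and this is NOT the actual
networking-generic-switch implementation:

```python
# Self-contained sketch; FakePortContext stands in for Neutron's
# PortContext, and the driver below is NOT the real
# networking-generic-switch implementation.

VNIC_BAREMETAL = "baremetal"

class FakePortContext:
    def __init__(self, vnic_type, segments):
        self.current = {"binding:vnic_type": vnic_type}
        self.segments_to_bind = segments
        self.bound = None

    def set_binding(self, segment_id, vif_type, vif_details, status):
        self.bound = (segment_id, vif_type, status)

class BaremetalMechDriverSketch:
    def bind_port(self, context):
        if context.current["binding:vnic_type"] != VNIC_BAREMETAL:
            return  # leave VM ports to the other mechanism drivers
        for segment in context.segments_to_bind:
            # Binding the first suitable segment moves the port to ACTIVE.
            context.set_binding(segment["id"], "other", {}, status="ACTIVE")
            return

ctx = FakePortContext(VNIC_BAREMETAL, [{"id": "seg-1"}])
BaremetalMechDriverSketch().bind_port(ctx)
assert ctx.bound == ("seg-1", "other", "ACTIVE")  # no longer stuck DOWN
```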


Sincerely,
Vasyl Saienko


[openstack-dev] [ironic][devstack][devstack-gate][networking-generic-switch] Ironic multinode job

2016-09-13 Thread Vasyl Saienko
Hello Community,

I'm happy to announce that we have a stable Ironic multinode job. There are
a lot of patches (around 20) to different projects that need to be merged
before we can move this job to the check pipeline. That is why I'm writing
this email to openstack-dev. I kindly ask cores from devstack,
devstack-gate, networking-generic-switch, and ironic to review the related
patches from the following list:

*devstack:*
"Fix common functions to work with V2" https://review.openstack.org/#/c/366922/
"Drop SERVICE_HOST=127.0.0.1 from setup_localrc()" https://review.openstack.org/#/c/368870/

*devstack-gate:*
"Add c-vol,c-bak on subnode when c-api enabled" https://review.openstack.org/#/c/352909
"Preparing multinode networking for Ironic" https://review.openstack.org/#/c/335981
"Setup ssh-key on subnodes for Ironic" https://review.openstack.org/#/c/364830
"Update ENABLED_SERVICE on subnode with ironic" https://review.openstack.org/#/c/368611
"Update local.conf for ironic-multinode case" https://review.openstack.org/#/c/352790/

*networking-generic-switch*:
"Setup multinode avare config" https://review.openstack.org/#/c/364848/

*ironic:*
"Configure clean network to provision network" https://review.openstack.org/#/c/356632
"Ignore required_services for multinode topology" https://review.openstack.org/#/c/352793
"Source openrc on subnode in multinode topology" https://review.openstack.org/#/c/353302/
"Skip db configuration on subnodes" https://review.openstack.org/#/c/353303
"Fix setting custom IRONIC_VM_NETWORK_BRIDGE" https://review.openstack.org/#/c/365116/
"Update devstack provision net config for multihost" https://review.openstack.org/#/c/368644/
"Update ironic node names for multinode case" https://review.openstack.org/#/c/368645/
"Skip some steps for multinode case" https://review.openstack.org/#/c/368646/
"Add devstack setup_vxlan_network()" https://review.openstack.org/#/c/368647
"Update iptables rules and services IPs for multinode" https://review.openstack.org/#/c/368648/
"Testing multinode stuff" https://review.openstack.org/#/c/368173/

Sincerely,
Vasyl Saienko


Re: [openstack-dev] [ironic] static Portgroup support.

2016-08-09 Thread Vasyl Saienko
Hello Ironic'ers!

We've recorded demos that show how the static portgroup support works at the moment:

Flat network scenario: https://youtu.be/vBlH0ie6Lm4
Multitenant network scenario: https://youtu.be/Kk5Cc_K1tV8

Sincerely,
Vasyl Saienko

On Tue, Jul 19, 2016 at 3:30 PM, Vasyl Saienko <vsaie...@mirantis.com>
wrote:

> Hello Community,
>
> Current portgroup scenario is not fully clear for me. The related spec [3]
> doesn't clearly describe it. And based on implementation [1] and [2] I
> guess it should work in the following fashion for node with 3 NICs, where
> eth1 and eth2 are members of Porgroup Po0/1
>
> Node network connection info:
> eth1 (aa:bb:cc:dd:ee:f1) <---> Gig0/1
> eth2 (aa:bb:cc:dd:ee:f2) <---> Gig0/2
> eth3 (aa:bb:cc:dd:ee:f3) <---> Gig0/3
>
> For FLAT network scenario:
> 1. Administrator enrol ironic node.
> 2. Administrator creates a 3 ports for each interface, and a portgroup
> that contains eth0 and eth1 ports.
> 3. The ports Gig0/1 and Gig0/2 are added to portgroup Po0/1 manually on
> the switch.
> 4. When user request to boot an instance, Nova randomly picks interface
> [2], it might be a portgroup or single NIC interface. Proposed change [1]
> do not allow to specify what exactly network type we would like to use
> single NIC or portgroup.
>
> For multitenancy case:
> All looks the same, in addition administrator adds local_link_connection
> information for each port (local_link_connection 'port_id' field is
> 'Gig0/1' for eth1, 'Gig0/2' for eth2 and 'Gig0/3' for eth3, ). Ironic send
> this information to Neutron who plugs ports to needed network.
>
> The same user-scenario is available at the moment without any changes to
> Nova or Ironic. The difference is that administrator creates one port for
> single interface eth3 with local_link_connection 'port_id'='Gig0/3',  and a
> port that is a logical representation of portgroup (eth1 and eth2) with
> local_link_connection 'port_id'='Po0/1'.
>
> Please let me know if I've missed something or misunderstood current
> portgroup scenario.
>
> Reference:
> [0] https://review.openstack.org/206163
> [1] https://review.openstack.org/332177
> [2] https://github.com/openstack/nova/blob/06c537fbe5bb4ac5a3012642c899df815872267c/nova/network/neutronv2/api.py#L270
> [3] https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ironic-ml2-integration.html
>


Re: [openstack-dev] [ironic] network_interface, defaults, and explicitness

2016-08-02 Thread Vasyl Saienko
The proposed approach is reasonable.
Just a small addition: I think in the long term it would be good to avoid
setting binding_host_id in the Ironic virt driver. We should force callers
to set it when 'binding_profile' is updated. It looks weird that we tell
Neutron to bind the port in one place and add the binding information in
another.
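For readers skimming the thread, the resolution order Jim describes below
can be sketched in plain Python (the arguments stand in for the node field,
the oslo.config option, and the DHCP provider setting):

```python
# Sketch of the effective network_interface resolution described below;
# simplified config handling, not Ironic's real code.

def effective_network_interface(node_iface, default_iface, dhcp_provider):
    if node_iface is not None:        # 1) node.network_interface
        return node_iface
    if default_iface is not None:     # 2) CONF.default_network_interface
        return default_iface
    # 3)/4) fall back based on the DHCP provider in use
    return "flat" if dhcp_provider == "neutron" else "noop"

assert effective_network_interface("neutron", "flat", "neutron") == "neutron"
assert effective_network_interface(None, "flat", "none") == "flat"
assert effective_network_interface(None, None, "neutron") == "flat"
assert effective_network_interface(None, None, "none") == "noop"
```

The proposal in the thread is to persist this resolved value at node-create
time, so the API never exposes the implicit fallback.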

On Mon, Aug 1, 2016 at 8:13 PM, Jim Rollenhagen 
wrote:

> On Mon, Aug 01, 2016 at 08:10:18AM -0400, Jim Rollenhagen wrote:
> > Hey all,
> >
> > Our nova patch for networking[0] got stuck for a bit, because Nova needs
> > to know which network interface is in use for the node, in order to
> > properly set up the port.
> >
> > The code landed for network_interface follows the following order for
> > what is actually used for the node:
> > 1) node.network_interface, if that is None:
> > 2) CONF.default_network_interface, if that isNone:
> > 3) flat, if using neutron DHCP
> > 4) noop, if not using neutron DHCP
> >
> > The API will return None for node.network_interface in the API (GET
> > /v1/nodes/uuid). This won't work for Nova, because Nova can't know what
> > CONF.default_network_interface is.
> >
> > I propose that if a network_interface is not sent in the node-create
> > call, we write whatever the current default is, so that it is always set
> > and not using an implicit value that could change.
> >
> > For nodes that exist before the upgrade, we do a database migration to
> > set network_interface to CONF.default_network_interface (or if that's
> > None, set to flat/noop depending on the DHCP provider).
> >
> > An alternative is to keep the existing behavior, but have the API return
> > whatever interface is actually being used. This keeps the implicit
> > behavior (which I don't think is good), and also doesn't provide a way
> > to find out from the API if the interface is actually set, or if it's
> > using the configurable default.
> >
> > I'm going to go ahead and execute on that plan now, do speak up if you
> > have major objections to it.
>
> By the way, the patch chain to do this is here:
>
> https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1608511
>
> // jim
>
>


Re: [openstack-dev] [ironic][neutron][nova] Sync port state changes.

2016-07-22 Thread Vasyl Saienko
Kevin, thanks for reply,

On Fri, Jul 22, 2016 at 11:50 AM, Kevin Benton <ke...@benton.pub> wrote:

> Hi,
>
> Once you solve the issue of getting the baremetal ports to transition to
> the ACTIVE state, a notification will automatically be emitted to Nova of
> 'network-vif-plugged' with the port ID. Will ironic not have access to that
> event via Nova?
>
To solve the issues of getting the baremetal ports to transition to the
ACTIVE state, we should do the following:

   1. Use FLAT network instead of VXLAN for Ironic gate jobs [3].
   2. On Nova side set vnic_type to baremetal for Ironic hypervisor [0].
   3. On Neutron side, perform fake 'baremetal' port binding [2] in case of
   FLAT network.

We need to receive notifications from Neutron directly in Ironic, because
Ironic creates ports on the provisioning network on its own.
Nova doesn't know anything about provisioning ports.

If not, Ironic could develop a service plugin that just listens for port
> update events and relays them to Ironic.
>
>
I have already prepared a PoC [4] for Neutron that allows sending
notifications to Ironic on the port_update event.
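In outline, the relay is just a publish/subscribe hook on port updates. A
stripped-down sketch (all names invented; the actual Neutron-side change is
in the PoC review referenced below):

```python
# Minimal callback-relay sketch; every name here is invented, see the
# PoC review linked below for the real Neutron-side implementation.

_subscribers = []

def subscribe(callback):
    """Ironic registers interest in port status transitions."""
    _subscribers.append(callback)

def on_port_update(port):
    """Called by the (hypothetical) Neutron hook after a port update."""
    for cb in _subscribers:
        cb(port["id"], port["status"])

seen = []
subscribe(lambda port_id, status: seen.append((port_id, status)))
on_port_update({"id": "p1", "status": "ACTIVE"})
assert seen == [("p1", "ACTIVE")]  # Ironic learns the port became ACTIVE
```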

Reference:
[0] https://review.openstack.org/339143
[1] https://review.openstack.org/339129
[3] https://review.openstack.org/340695
[4] https://review.openstack.org/345211


> On Tue, Jul 12, 2016 at 4:07 AM, Vasyl Saienko <vsaie...@mirantis.com>
> wrote:
>
>> Hello Community,
>>
>> I'm working to make Ironic be aware about  Neutron port state changes [0].
>> The issue consists of two parts:
>>
>>- Neutron ports for baremetal instances remain in DOWN state [1]. The
>>issue occurs because there is no mechanism driver that binds ports. To
>>solve it we need to create port with  vnic_type='baremetal' in Nova [2],
>>and bind in Neutron. New mechanism driver that supports baremetal 
>> vnic_type
>>is needed [3].
>>
>>- Sync Neutron events with Ironic. According to Neutron architecture
>>[4] mechanism drivers work synchronously. When the port is bound by ml2
>>mechanism driver it becomes ACTIVE. While updating dhcp information 
>> Neutron
>>uses dhcp agent, which is asynchronous call. I'm confused here, since
>>ACTIVE port status doesn't mean that it operates (dhcp agent may fail to
>>setup port). The issue was solved by [5]. So starting from [5] when ML2
>>uses new port status update flow, port update is always asynchronous
>>operation. And the most efficient way is to implement callback mechanism
>>between Neutron and Ironic is like it's done for Neutron/Nova.
>>
>>
>> Neutron/Nova/Ironic teams let me know your thoughts on this.
>>
>> Reference:
>> [0] https://bugs.launchpad.net/ironic/+bug/1304673
>> [1] https://bugs.launchpad.net/neutron/+bug/1599836
>> [2] https://review.openstack.org/339143
>> [3] https://review.openstack.org/#/c/339129/
>> [4]
>> https://www.packtpub.com/sites/default/files/Article-Images/B04751_01.png
>> [5]
>> https://github.com/openstack/neutron/commit/b672c26cb42ad3d9a17ed049b506b5622601e891
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] static Portgroup support.

2016-07-19 Thread Vasyl Saienko
Hello Community,

The current portgroup scenario is not fully clear to me, and the related spec
[3] doesn't describe it clearly. Based on the implementation [1] and [2], I
guess it should work in the following fashion for a node with 3 NICs, where
eth1 and eth2 are members of portgroup Po0/1:

Node network connection info:
eth1 (aa:bb:cc:dd:ee:f1) <---> Gig0/1
eth2 (aa:bb:cc:dd:ee:f2) <---> Gig0/2
eth3 (aa:bb:cc:dd:ee:f3) <---> Gig0/3

For FLAT network scenario:
1. The administrator enrolls the ironic node.
2. The administrator creates 3 ports, one for each interface, and a portgroup
that contains the eth1 and eth2 ports.
3. The ports Gig0/1 and Gig0/2 are added to portgroup Po0/1 manually on the
switch.
4. When a user requests to boot an instance, Nova randomly picks an interface
[2]; it might be a portgroup or a single-NIC interface. The proposed change
[1] does not allow specifying which attachment type we would like to use: a
single NIC or a portgroup.

For the multitenancy case:
Everything looks the same; in addition the administrator adds
local_link_connection information for each port (the local_link_connection
'port_id' field is 'Gig0/1' for eth1, 'Gig0/2' for eth2 and 'Gig0/3' for
eth3). Ironic sends this information to Neutron, which plugs the ports into
the needed network.

The same user scenario is available at the moment without any changes to Nova
or Ironic. The difference is that the administrator creates one port for the
single interface eth3 with local_link_connection 'port_id'='Gig0/3', and a
port that is a logical representation of the portgroup (eth1 and eth2) with
local_link_connection 'port_id'='Po0/1'.
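The two representations can be written down as enrollment data for comparison. This is illustrative only: field names follow Ironic's port objects, but the switch_info values and the shape of the portgroup reference are assumptions.

```python
# Representation A: three ports plus a portgroup grouping eth1/eth2.
portgroup = {"name": "Po0/1", "address": "aa:bb:cc:dd:ee:f1"}
ports = [
    {"address": "aa:bb:cc:dd:ee:f1",
     "local_link_connection": {"switch_info": "switch1", "port_id": "Gig0/1"},
     "portgroup": "Po0/1"},
    {"address": "aa:bb:cc:dd:ee:f2",
     "local_link_connection": {"switch_info": "switch1", "port_id": "Gig0/2"},
     "portgroup": "Po0/1"},
    {"address": "aa:bb:cc:dd:ee:f3",
     "local_link_connection": {"switch_info": "switch1", "port_id": "Gig0/3"},
     "portgroup": None},
]

# Representation B: the portgroup is a single logical port whose
# local_link_connection points at the switch-side port-channel.
logical_ports = [
    {"address": "aa:bb:cc:dd:ee:f1",
     "local_link_connection": {"switch_info": "switch1", "port_id": "Po0/1"}},
    {"address": "aa:bb:cc:dd:ee:f3",
     "local_link_connection": {"switch_info": "switch1", "port_id": "Gig0/3"}},
]

# In representation A, two of the three ports belong to Po0/1.
grouped = [p for p in ports if p["portgroup"] == "Po0/1"]
```

Representation B is what works today; representation A is what the patches under review propose.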

Please let me know if I've missed something or misunderstood the current
portgroup scenario.

Reference:
[0] https://review.openstack.org/206163
[1] https://review.openstack.org/332177
[2]
https://github.com/openstack/nova/blob/06c537fbe5bb4ac5a3012642c899df815872267c/nova/network/neutronv2/api.py#L270
[3]
https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/ironic-ml2-integration.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][neutron][nova] Sync port state changes.

2016-07-12 Thread Vasyl Saienko
Hello Community,

I'm working to make Ironic aware of Neutron port state changes [0].
The issue consists of two parts:

   - Neutron ports for baremetal instances remain in the DOWN state [1]. The
   issue occurs because there is no mechanism driver that binds the ports. To
   solve it we need to create the port with vnic_type='baremetal' in Nova [2]
   and bind it in Neutron. A new mechanism driver that supports the baremetal
   vnic_type is needed [3].

   - Sync Neutron events with Ironic. According to the Neutron architecture
   [4], mechanism drivers work synchronously. When a port is bound by an ML2
   mechanism driver it becomes ACTIVE. When updating DHCP information, Neutron
   uses the DHCP agent, which is an asynchronous call. I'm confused here,
   since an ACTIVE port status doesn't mean that the port operates (the DHCP
   agent may fail to set up the port). The issue was solved by [5]. So
   starting from [5], when ML2 uses the new port status update flow, port
   update is always an asynchronous operation. The most efficient way is to
   implement a callback mechanism between Neutron and Ironic, like it's done
   for Neutron/Nova.
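To make the first part concrete, here is roughly the port body Nova would send when creating a baremetal port. The binding:profile/local_link_information shape follows the ironic-ml2 integration spec; the UUID placeholders and switch values are made up for illustration.

```python
# Example create-port request body for a baremetal instance. The
# local_link_information entries tell the ML2 driver which physical
# switch port to configure; values here are placeholders.
port_body = {
    "port": {
        "network_id": "NETWORK-UUID",
        "binding:vnic_type": "baremetal",
        "binding:host_id": "IRONIC-NODE-UUID",
        "binding:profile": {
            "local_link_information": [
                {"switch_id": "aa:bb:cc:dd:ee:ff",
                 "port_id": "Gig0/1",
                 "switch_info": "switch1"},
            ],
        },
    },
}
```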


Neutron/Nova/Ironic teams let me know your thoughts on this.

Reference:
[0] https://bugs.launchpad.net/ironic/+bug/1304673
[1] https://bugs.launchpad.net/neutron/+bug/1599836
[2] https://review.openstack.org/339143
[3] https://review.openstack.org/#/c/339129/
[4]
https://www.packtpub.com/sites/default/files/Article-Images/B04751_01.png
[5]
https://github.com/openstack/neutron/commit/b672c26cb42ad3d9a17ed049b506b5622601e891
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] why do we need setting network driver per node?

2016-06-29 Thread Vasyl Saienko
Dmitry thanks for rising this question!

On Tue, Jun 28, 2016 at 6:32 PM, Dmitry Tantsur  wrote:

> Hi folks!
>
> I was reviewing https://review.openstack.org/317391 and realized I don't
> quite understand why we want to have node.network_interface. What's the
> real life use case for it?
>
> Do we expect some nodes to use Neutron, some - not?
>

Neutron already provides great flexibility via ML2 drivers. Multi-hardware
environments can be managed without any problems if Ironic uses Neutron, by
using a different ML2 driver for each hardware type.

With multitenancy there might be cases when a user doesn't want to use Neutron
and Neutron ML2 drivers. In this case only specifying network_interface per
node gives full flexibility in multi-vendor environments.


> Do we expect some nodes to benefit from network separation, some - not?
> There may be a use case here, but then we have to expose this field to Nova
> for scheduling, so that users can request a "secure" node or a "less
> secure" one. If we don't do that, Nova will pick at random, which makes the
> use case unclear again.
> If we do that, the whole work goes substantially beyond what we were
> trying to do initially: isolate tenants from the provisioning network and
> from each other.
>
> Flexibility it good, but the patches raise upgrade concerns, because it's
> unclear how to provide a good default for the new field. And anyway it
> makes the whole thing much more complex than it could be.
>
>
I vote for greater flexibility in the future, even if there might be some
difficulties during upgrades.


> Any hints are welcome.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][qa][ironic][nova] When Nova should mark instance as successfully deleted?

2016-05-27 Thread Vasyl Saienko
Lucas, Andrew

Thanks for fast response.

On Fri, May 27, 2016 at 4:53 PM, Andrew Laski  wrote:

>
>
> On Fri, May 27, 2016, at 09:25 AM, Lucas Alvares Gomes wrote:
> > Hi,
> >
> > Thanks for bringing this up Vasyl!
> >
> > > At the moment Nova with ironic virt_driver consider instance as
> deleted,
> > > while on Ironic side server goes to cleaning which can take a while. As
> > > result current implementation of Nova tempest tests doesn't work for
> case
> > > when Ironic is enabled.
>
> What is the actual failure? Is it a capacity issue because nodes do not
> become available again quickly enough?
>
>
The actual failure is that the tempest community doesn't want to accept
option 1.
https://review.openstack.org/315422/
And I'm not sure that it is the right way.

> >
> > > There are two possible options how to fix it:
> > >
> > >  Update Nova tempest test scenarios for Ironic case to wait when
> cleaning is
> > > finished and Ironic node goes to 'available' state.
> > >
> > > Mark instance as deleted in Nova only after cleaning is finished on
> Ironic
> > > side.
> > >
> > > I'm personally incline to 2 option. From user side successful instance
> > > termination means that no instance data is available any more, and
> nobody
> > > can access/restore that data. Current implementation breaks this rule.
> > > Instance is marked as successfully deleted while in fact it may be not
> > > cleaned, it may fail to clean and user will not know anything about it.
> > >

>
> > I don't really like option #2, cleaning can take several hours
> > depending on the configuration of the node. I think that it would be a
> > really bad experience if the user of the cloud had to wait a really
> > long time before his resources are available again once he deletes an
> > instance. The idea of marking the instance as deleted in Nova quickly
> > is aligned with our idea of making bare metal deployments
> > look-and-feel like VMs for the end user. And also (one of) the
> > reason(s) why we do have a separated state in Ironic for DELETING and
> > CLEANING.
>

The resources will be available only if there are other available baremetal
nodes in the cloud.
The user has no way to track the status of available resources without admin
access.


> I agree. From a user perspective once they've issued a delete their
> instance should be gone. Any delay in that actually happening is purely
> an internal implementation detail that they should not care about.
>
> >
> > I think we should go with #1, but instead of erasing the whole disk
> > for real maybe we should have a "fake" clean step that runs quickly
> > for tests purposes only?
> >
>

At the gates we just wait for bootstrap and the callback from the node when
cleaning starts. All heavy operations are postponed. We can disable
automated_clean, but that would mean it is not tested.
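For reference, automated cleaning is controlled by a single ironic.conf option (option name as it exists at the time of writing):

```ini
[conductor]
# Skip automated cleaning between tenants; only reasonable for CI/testing.
automated_clean = False
```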


> > Cheers,
> > Lucas
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest][qa][ironic][nova] When Nova should mark instance as successfully deleted?

2016-05-27 Thread Vasyl Saienko
Hello Community!


At the moment Nova with the ironic virt_driver considers an instance deleted,
while on the Ironic side the server goes to cleaning, which can take a while.
As a result, the current implementation of the Nova tempest tests doesn't
work when Ironic is enabled.

There are two possible options how to fix it:

   1. Update the Nova tempest test scenarios for the Ironic case to wait
   until cleaning is finished and the Ironic node goes to the 'available'
   state.

   2. Mark the instance as deleted in Nova only after cleaning is finished on
   the Ironic side.


I'm personally inclined to option 2. From the user's side, successful
instance termination means that no instance data is available any more and
nobody can access/restore that data. The current implementation breaks this
rule: the instance is marked as successfully deleted while in fact it may not
be cleaned; it may fail to clean and the user will not know anything about it.


Sincerely,

Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Cruft entries found in global-requirements.txt

2016-05-13 Thread Vasyl Saienko
netmiko is in networking-generic-switch requirements.txt
https://github.com/openstack/networking-generic-switch/blob/master/requirements.txt#L2

On Sat, May 7, 2016 at 5:35 AM, Haïkel  wrote:

> Started on removing some entries, I guess I have big cleanup to do RDO
> side.
>
> H.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova][neutron][qa] Injecting code into grenade resource create phase

2016-05-12 Thread Vasyl Saienko
Hello Jim,

Thanks for rising this question.

My personal feeling is that we don't need to tune tests that won't pass due
to design limitations.
It is an Ironic prerequisite to have network access from the control plane to
the server during provisioning.
Until the Ironic/Neutron integration is completed we may skip this test.

On Thu, May 12, 2016 at 12:53 AM, Jim Rollenhagen 
wrote:

> I've ran into a bit of a wedge working on ironic grenade tests.
>
> In a normal dsvm run, the ironic setup taps the control plane into the
> tenant network (as that's how it's currently intended to be deployed).
> That code is here[0].
>
> However, in a grenade run, during the resource create phase, a new
> network is created. This happens in the neutron bits[1], and is used to
> boot a server in the nova bits[2].
>
> Since the control plane can't communicate with the machine on that
> network, our ramdisk doesn't reach back to ironic after booting up, and
> provisioning fails[3][4].
>
> Curious if any grenade experts have thoughts on how we might be able to
> set up that tap in between the neutron and nova resource creation.
>
> One alternative I've considered is a method to have nova resource
> creation not boot an instance, and replicate that functionality in the
> ironic plugin, after we tap into that network.
>
> I'm sure there's other alternatives here that I haven't thought of;
> suggestions welcome. Thanks in advance. :)
>
> // jim
>
> [0]
> https://github.com/openstack/ironic/blob/95ff5badbdea0898d7877e651893916008561760/devstack/lib/ironic#L653
> [1]
> https://github.com/openstack-dev/grenade/blob/fce63f40d21abea926d343e9cddd620e3f03684a/projects/50_neutron/resources.sh#L34
> [2]
> https://github.com/openstack-dev/grenade/blob/fce63f40d21abea926d343e9cddd620e3f03684a/projects/60_nova/resources.sh#L79
> [3]
> http://logs.openstack.org/65/311865/3/experimental/gate-grenade-dsvm-ironic/e635dec/logs/grenade.sh.txt.gz#_2016-05-10_18_10_03_648
> [4]
> http://logs.openstack.org/65/311865/3/experimental/gate-grenade-dsvm-ironic/e635dec/logs/old/screen-ir-cond.txt.gz?level=WARNING
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic-staging-drivers] Tests at the gates

2016-04-21 Thread Vasyl Saienko
Hello Andreas,

Thanks for the comment, I didn't know about other-requirements.

There is a tool, 'bindep' [0], that can parse other-requirements.txt.
It is possible to mix python/system dependencies in a single
other-requirements.txt, but mixing packages from different distros is not
supported. Also it doesn't allow installing dependencies from source. I'm not
sure that it is what we need.

I would prefer to have a shell script that is maintained by the driver's
owner and provides complete freedom in configuration.
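For comparison, a bindep-style other-requirements.txt expresses per-distro packages with platform profiles. The package names below are purely illustrative, not actual driver dependencies:

```
# installed on every platform
ipmitool
# distro-specific spellings of the same dependency
libvirt-dev   [platform:dpkg]
libvirt-devel [platform:rpm]
```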


[0] https://github.com/openstack-infra/bindep

On Wed, Apr 20, 2016 at 9:38 PM, Andreas Jaeger <a...@suse.com> wrote:

> On 04/20/2016 04:57 PM, Vasyl Saienko wrote:
> > Hello Ironic-staging-drivers team,
> >
> > At the moment there is no tests for ironic-staging-drivers at the gates.
> > I think we need to have a simple test that install drivers with theirs
> > dependencies and ensures that ironic-conductor is able to start.
> > It may be performed in the following way. Each staging driver contain
> > two files:
> >
> >   * python-requirements.txt - file for python libraries
> >   * other-requirements.sh - script that will install all non-python
>
> The file other-requirements.txt is already one way to install additional
> packages, just use that one. Best ask on #openstack-infra for details,
>
> Andreas
>
> > driver requirements.
> >
> > During devstack installation phase for each driver we launch:
> >
> >   * pip install -r
> >
>  ironic-staging-drivers/ironic-staging-drivers/$driver/python-requirements.txt
> >   * bash
> >
>  ironic-staging-drivers/ironic-staging-drivers/$driver/other-requirements.sh
> >   * add drivers to enabled_driver list
> >
> > At the end ironic will try to register a node with some *_ssh driver. So
> > if it succeed it means that conductor with staging drivers has started
> > successfully.
> >
> > The devstack plugin is on review already:
> > https://review.openstack.org/#/c/299229/
> >
> > Sincerely,
> > Vasyl Saienko
> >
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic-staging-drivers] Tests at the gates

2016-04-20 Thread Vasyl Saienko
Hello Ironic-staging-drivers team,

At the moment there are no tests for ironic-staging-drivers at the gates.
I think we need to have a simple test that installs the drivers with their
dependencies and ensures that ironic-conductor is able to start.
It may be performed in the following way. Each staging driver contains two
files:

   - python-requirements.txt - file for python libraries
   - other-requirements.sh - script that will install all non-python driver
   requirements.

During devstack installation phase for each driver we launch:

   - pip install -r
   ironic-staging-drivers/ironic-staging-drivers/$driver/python-requirements.txt
   - bash
   ironic-staging-drivers/ironic-staging-drivers/$driver/other-requirements.sh
   - add drivers to enabled_driver list

At the end ironic will try to register a node with some *_ssh driver, so if
it succeeds it means that the conductor with the staging drivers has started
successfully.
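The devstack steps above could be sketched roughly as follows. The directory layout, the IRONIC_STAGING_DRIVERS_DIR variable, and the *_ssh naming are assumptions from this proposal, not existing devstack code:

```shell
# Per-driver install loop for the proposed devstack plugin (sketch).
IRONIC_STAGING_DRIVERS_DIR=${IRONIC_STAGING_DRIVERS_DIR:-/opt/stack/ironic-staging-drivers/ironic_staging_drivers}
ENABLED_DRIVERS="fake"
if [ -d "$IRONIC_STAGING_DRIVERS_DIR" ]; then
    for driver in "$IRONIC_STAGING_DRIVERS_DIR"/*/; do
        name=$(basename "$driver")
        # Python dependencies of the staging driver.
        [ -f "${driver}python-requirements.txt" ] && \
            pip install -r "${driver}python-requirements.txt"
        # Non-python dependencies, installed by the driver's own script.
        [ -f "${driver}other-requirements.sh" ] && \
            bash "${driver}other-requirements.sh"
        ENABLED_DRIVERS="$ENABLED_DRIVERS,${name}_ssh"
    done
fi
echo "enabled_drivers=$ENABLED_DRIVERS"
```

The resulting list would then be written into ironic.conf's enabled_drivers before starting the conductor.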

The devstack plugin is on review already:
https://review.openstack.org/#/c/299229/

Sincerely,
Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Neutron] Integration status

2016-04-20 Thread Vasyl Saienko
Hello Haomeng,

Do you want to test VLAN-aware instances that support trunk interfaces [0],
or try Ironic provisioning in a separate provisioning network?

The first case is not supported by networking-generic-switch at the moment
and requires some modification.
The second case is supported; we already have an experimental gate job
*'gate-tempest-dsvm-ironic-multitenant-network-nv'* and tempest tests [1]

[0] https://review.openstack.org/#/c/277853/
[1] https://review.openstack.org/#/c/269157

Sincerely,
Vasyl

On Mon, Apr 18, 2016 at 5:33 AM, Haomeng, Wang <wanghaom...@gmail.com>
wrote:

> Hi Vasy,
>
> I am interested with this ironic-neutron integration to support VLAN, so
> can you help to share some doc/guide/steps for me, and let me try also?
>
> Thanks
> Haomeng
>
>
>
> On Thu, Mar 31, 2016 at 9:20 PM, Jim Rollenhagen <j...@jimrollenhagen.com>
> wrote:
>
>> On Thu, Mar 31, 2016 at 03:37:53PM +0300, Vasyl Saienko wrote:
>> > Hello Community,
>> >
>> > I'm happy to announce that new experimental job
>> > 'ironic-multitenant-network' is stabilized and working. This job allows
>> to
>> > test Ironic multitenancy patches at the gates with help of
>> > networking-generic-switch [1]. Unfortunately depends-on doesn't work for
>> > python-ironicclient [2] since it is installed from pip. There is
>> workaround
>> > for it [3].
>>
>> This is so awesome. Amazing work here by everyone involved. \o/
>>
>> > The full list of patches is [4].
>> >  I'm kindly asking to review them as Ironic multitenancy is very
>> desirable
>> > feature by Ironic customers in Newton release.
>>
>> +1. I'd love to get this stuff in the ironic tree before the summit, and
>> have the Nova stuff at least ready to land by then.
>>
>> I'll be trying to review this today/tomorrow, hoping others can do the
>> same. :)
>>
>> // jim
>>
>> > [1] https://github.com/openstack/networking-generic-switch
>> > [2] https://review.openstack.org/206144/
>> > [3] https://review.openstack.org/296432/
>> > [4]
>> >
>> https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1526403
>> >
>> > Sincerely,
>> > Vasyl Saienko
>>
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][Neutron] Integration status

2016-03-31 Thread Vasyl Saienko
Hello Community,

I'm happy to announce that new experimental job
'ironic-multitenant-network' is stabilized and working. This job allows to
test Ironic multitenancy patches at the gates with help of
networking-generic-switch [1]. Unfortunately depends-on doesn't work for
python-ironicclient [2] since it is installed from pip. There is workaround
for it [3].

The full list of patches is [4].
 I'm kindly asking to review them as Ironic multitenancy is very desirable
feature by Ironic customers in Newton release.

[1] https://github.com/openstack/networking-generic-switch
[2] https://review.openstack.org/206144/
[3] https://review.openstack.org/296432/
[4]
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1526403

Sincerely,
Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tempest] [Devstack] Where to keep tempest configuration?

2016-03-18 Thread Vasyl Saienko
Hello Community,

We started using tempest/devstack plugins. They allow us to avoid bothering
other teams when project-specific changes need to be made. Tempest
configuration is still performed in devstack [0].
So I would like to raise the following questions:


   - Where should we keep project-specific tempest configuration? Example
   [1]
   - Where should we keep tempest configuration shared between projects?
   Example [2]

In my opinion, it would be good to move project-related tempest configuration
into the projects' repositories.
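One way to do that is for each project's devstack plugin to write its own tempest options during the test-config phase. The snippet below models the idea with a minimal stand-in for devstack's iniset helper (the real helper lives in devstack and handles existing sections properly; the option shown is just an example):

```shell
# Minimal stand-in for devstack's iniset: append [section] option = value.
TEMPEST_CONFIG=$(mktemp)
iniset() {
    printf '[%s]\n%s = %s\n' "$2" "$3" "$4" >> "$1"
}

# What a project plugin might run in its test-config phase:
iniset "$TEMPEST_CONFIG" baremetal driver_enabled True
cat "$TEMPEST_CONFIG"
```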

[0] https://github.com/openstack-dev/devstack/blob/master/lib/tempest
[1]
https://github.com/openstack-dev/devstack/blob/master/lib/tempest#L509-L513
[2]
https://github.com/openstack-dev/devstack/blob/master/lib/tempest#L514-L523

Thank you in advance,
Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][Neutron] Integration status

2015-12-17 Thread Vasyl Saienko
Hello Ironic/Neutron community,

The Ironic patches were stale and in merge conflict for a while. Yesterday I
rebased those patches and put them in a single chain. I have already replied
to/resolved some comments and will do the rest in the near future.

I'm happy to announce that it is possible to test the Ironic/Neutron
integration on devstack.
Devstack should be patched with [0]. A local.conf can be found here [1].

I'm kindly asking everyone to start actively reviewing the patches [2]. It
would be great to have this feature in Mitaka.


[0] https://review.openstack.org/256364
[1]
https://review.openstack.org/#/c/258596/3/devstack/doc/source/guides/ironic-neutron-integration.rst
[2]
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bp/ironic-ml2-integration
[3] https://etherpad.openstack.org/p/ironic-neutron-mid-cycle

Sincerely,
Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Neutron] Integration status

2015-12-17 Thread Vasyl Saienko
On Thu, Dec 17, 2015 at 12:23 PM, Vasyl Saienko <vsaie...@mirantis.com>
wrote:

> Hello Ironic/Neutron community,
>
> Ironic patches were stale and were in merge conflict during last time.
> Yesterday I've rebased those pathes and put them in single chain. I already
> replied/resolved some comments and will do it for the rest in nearest
> future.
>
> I'm happy to announce that it is possible to test Ironic/Neutron
> integration on devstack.
> Devstack should be patched with [0]. local.conf can be found here [1].
>
> I'm kindly asking to start actively reviewing patches [2]. It would be
> cool to have this feature in Mitaka.
>
>
> [0] https://review.openstack.org/256364
>
Correct patch to devstack is https://review.openstack.org/#/c/256294/

>
> [1]
> https://review.openstack.org/#/c/258596/3/devstack/doc/source/guides/ironic-neutron-integration.rst
> [2]
> https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bp/ironic-ml2-integration
> [3] https://etherpad.openstack.org/p/ironic-neutron-mid-cycle
>
> Sincerely,
> Vasyl Saienko
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic]Boot physical machine fails, says "PXE-E11 ARP Timeout"

2015-12-09 Thread Vasyl Saienko
Hello Zhi,

It seems that there is no connectivity between the HW server and the
gateway/TFTP server.
You can boot a live CD on it, assign the same IP manually, and check whether
you are able to ping 10.0.0.1.

Sincerely,
Vasyl Saienko

On Wed, Dec 9, 2015 at 3:59 PM, Zhi Chang <chang...@unitedstack.com> wrote:

> hi, all
> I treat a normal physical machine as a bare metal machine. The
> physical machine booted when I run "nova boot xxx" in command line. But
> there is an error happens. I upload a movie in youtube, link:
> https://www.youtube.com/watch?v=XZQCNsrkyMI=youtu.be. Could
> someone give me some advice?
>
> Thx
> Zhi Chang
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Neutron] Testing of Ironic/Neutron integration on devstack

2015-11-26 Thread Vasyl Saienko
Hello Kevin,

I've added some pictures that illustrate how it works with a HW switch and
with VMs on devstack.


On Wed, Nov 25, 2015 at 10:53 PM, Kevin Benton <blak...@gmail.com> wrote:

> This is cool. I didn't know you were working on an OVS driver for testing
> in CI as well. :)
>
> Does this work by getting the port wired into OVS so the agent recognizes
> it like a regular port so it can be put into VXLAN/VLAN or whatever the
> node is configured with? From what I can tell it looks like it's on a
> completely different bridge so they wouldn't have connectivity to the rest
> of the network.
>
The driver works with VLANs at the moment; I don't see any reason why it
wouldn't work with VXLAN.
Ironic VMs are created on devstack by [0]. They are not registered in
Nova/Neutron, so the neutron-ovs-agent doesn't know anything about them.
In a single-node devstack you can't launch regular Nova VM instances, since
compute_driver=ironic doesn't allow this. They would have connectivity to the
rest of the network via 'br-int'.

I have some POC code[1] for 'baremetal' support directly in the OVS agent
> so ports get treated just like VM ports. However, it requires upstream
> changes so if yours accomplishes the same thing without any upstream
> changes, that will be the best way to go.
>
>
In a real setup Neutron will plug the baremetal server into a specific
network via an ML2 driver.
We should keep the testing model as close as possible to the real Ironic
use-case scenario. That is why we should have an ML2 driver that allows us to
interact with OVS.
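The ML2-over-SSH idea can be sketched like this. Class and method names are illustrative rather than networking-generic-switch's real API, and FakeSwitch just records the CLI commands that would normally go over SSH (the real PoC uses netmiko for that):

```python
# Sketch of an ML2-style mechanism driver pushing VLAN config to a switch.
class FakeSwitch:
    """Records commands instead of sending them over SSH."""
    def __init__(self):
        self.sent = []

    def send(self, command):
        self.sent.append(command)


class GenericSwitchMechDriver:
    """Illustrative shape only; not the real ML2 driver interface."""
    def __init__(self, switch):
        self.switch = switch

    def bind_port(self, switch_port, vlan_id):
        # On a real switch these would be vendor CLI commands over SSH.
        self.switch.send("interface %s" % switch_port)
        self.switch.send("switchport access vlan %d" % vlan_id)


switch = FakeSwitch()
GenericSwitchMechDriver(switch).bind_port("Gig0/1", 100)
```

Swapping FakeSwitch for an SSH session against OVS (or real hardware) is what lets the same driver shape serve both CI and production.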


> Perhaps we can merge your approach (config via ssh) with mine (getting the
> 'baremetal' ports wired up for real connectivity) so we don't need upstream
> changes.
>
> 1. https://review.openstack.org/#/c/249265/
>
> Cheers,
> Kevin Benton
>
> On Wed, Nov 25, 2015 at 7:27 AM, Vasyl Saienko <vsaie...@mirantis.com>
> wrote:
>
>> Hello Community,
>>
>> As you know Ironic/Neutron integration is planned in Mitaka. And at the
>> moment we don't have any CI that will test it. Unfortunately we can't test
>> Ironic/Neutron integration on HW as we don't have it.
>> So probably the best way is to develop ML2 driver that will work with OVS.
>>
>> At the moment we have a PoC [1] of ML2 driver that works with Cisco and
>> OVS on linux.
>> Also we have some patches to devstack that allows to try Ironic/Neutron
>> integration on VM and real HW. And quick guide how to test it locally [0]
>>
>> https://review.openstack.org/#/c/247513/
>> https://review.openstack.org/#/c/248048/
>> https://review.openstack.org/#/c/249717/
>> https://review.openstack.org/#/c/248074/
>>
>> I'm interested in Neutron/Ironic integration. It would be great if we
>> have it in Mitaka.
>> I'm asking Community to check [0] and [1] and share your thoughts.
>>
>>  Also I would like to request a repo on openstack.org for [1]
>>
>>
>> [0]
>> https://github.com/jumpojoy/ironic-neutron/blob/master/devstack/examples/ironic-neutron-vm.md
>> [1] https://github.com/jumpojoy/generic_switch
>>
>> --
>> Sincerely
>> Vasyl Saienko
>>
>>
>>
>
>
> --
> Kevin Benton
>
>
>
[0]
https://github.com/openstack-dev/devstack/blob/master/tools/ironic/scripts/create-node
[1] https://review.openstack.org/#/c/249717

--
Sincerely
Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Neutron] Testing of Ironic/Neutron integration on devstack

2015-11-26 Thread Vasyl Saienko
Hi Sukhdev,

I didn't have a chance to be present at the previous meeting due to personal
reasons, but I will be at the next one.
It is important to keep CI testing as close as possible to the real Ironic
use-case scenario.

At the moment we don't have any test case that covers Ironic/Neutron
integration in Tempest.
I think now is a good time to discuss it. My vision of the Ironic/Neutron
test case is as follows:

1. Set up Devstack with 3 Ironic nodes
2. In project *demo*:

   - create a network 10.0.100.0/24
   - boot vm on it with fixed IP 10.0.100.10
   - boot vm2 on it with fixed IP 10.0.100.11

3. In project *alt_demo*:

   - create a network 10.0.100.0/24 with the same prefix as in project *demo*
   - boot vm on it with fixed IP 10.0.100.20

4. Wait for all instances to become active

5. Check that we *can't* ping *demo*'s *vm* from *alt_demo*'s *vm*

6. Check that we *can* access *vm2* from *vm* in project *demo*

7. Make sure that there are no packets with the MAC of *alt_demo*'s *vm* on
*demo*'s *vm* (can use tcpdump)
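The connectivity expectations in steps 5-7 can be sketched as a tiny executable model (instance names and the model itself are illustrative only; a real check would live in a Tempest scenario test exercising actual ping/SSH):

```python
# Toy model of the tenant-isolation invariant behind steps 5-7: both
# projects use the same CIDR (10.0.100.0/24), yet an instance must be
# reachable only from instances on its own project's network.
# This is a sketch, not a Tempest test.

instances = {
    ("demo", "vm"): "10.0.100.10",
    ("demo", "vm2"): "10.0.100.11",
    ("alt_demo", "vm"): "10.0.100.20",
}

def can_ping(src, dst):
    """Reachable only when both instances belong to the same project."""
    return src[0] == dst[0]

# Step 5: alt_demo's vm must NOT reach demo's vm.
assert not can_ping(("alt_demo", "vm"), ("demo", "vm"))
# Step 6: within project demo, vm must reach vm2.
assert can_ping(("demo", "vm"), ("demo", "vm2"))
print("isolation expectations hold")
```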
--
Sincerely
Vasyl Saienko

On Wed, Nov 25, 2015 at 11:06 PM, Sukhdev Kapur <sukhdevka...@gmail.com>
wrote:

> Hi Vasyl,
>
> This is great. Kevin and I were working on a similar thing. I just
> finished testing his patch and gave a +1.
> This is a missing (and needed) functionality for getting the
> Ironic/Neutron integration completed.
>
> As Kevin suggests, it will be best if we can combine these approaches and
> come up with the best solution.
>
> If you are available, please join us in our next weekly meeting at 8AM
> (pacific time) at #openstack-meeting-4.
> I am sure team will be excited to know about this solution and this will
> give an opportunity to make sure we cover all angles of this testing.
>
> Thanks
> -Sukhdev
>
>
> On Wed, Nov 25, 2015 at 7:27 AM, Vasyl Saienko <vsaie...@mirantis.com>
> wrote:
>
>> Hello Community,
>>
>> As you know, Ironic/Neutron integration is planned for Mitaka, and at the
>> moment we don't have any CI that will test it. Unfortunately we can't test
>> Ironic/Neutron integration on HW as we don't have any.
>> So probably the best way is to develop an ML2 driver that works with OVS.
>>
>> At the moment we have a PoC [1] of an ML2 driver that works with Cisco
>> switches and with OVS on Linux.
>> We also have some patches to devstack that allow trying Ironic/Neutron
>> integration on VMs and real HW, plus a quick guide on how to test it locally [0]
>>
>> https://review.openstack.org/#/c/247513/
>> https://review.openstack.org/#/c/248048/
>> https://review.openstack.org/#/c/249717/
>> https://review.openstack.org/#/c/248074/
>>
>> I'm interested in Neutron/Ironic integration; it would be great to have it
>> in Mitaka.
>> I'm asking the community to check [0] and [1] and share your thoughts.
>>
>> I would also like to request a repo on openstack.org for [1]
>>
>>
>> [0]
>> https://github.com/jumpojoy/ironic-neutron/blob/master/devstack/examples/ironic-neutron-vm.md
>> [1] https://github.com/jumpojoy/generic_switch
>>
>> --
>> Sincerely
>> Vasyl Saienko
>>
>>
>>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][Neutron] Testing of Ironic/Neutron integration on devstack

2015-11-25 Thread Vasyl Saienko
Hello Community,

As you know, Ironic/Neutron integration is planned for Mitaka, and at the
moment we don't have any CI that will test it. Unfortunately we can't test
Ironic/Neutron integration on HW as we don't have any.
So probably the best way is to develop an ML2 driver that works with OVS.

At the moment we have a PoC [1] of an ML2 driver that works with Cisco
switches and with OVS on Linux.
We also have some patches to devstack that allow trying Ironic/Neutron
integration on VMs and real HW, plus a quick guide on how to test it locally [0]

https://review.openstack.org/#/c/247513/
https://review.openstack.org/#/c/248048/
https://review.openstack.org/#/c/249717/
https://review.openstack.org/#/c/248074/

I'm interested in Neutron/Ironic integration; it would be great to have it
in Mitaka.
I'm asking the community to check [0] and [1] and share your thoughts.

I would also like to request a repo on openstack.org for [1]


[0]
https://github.com/jumpojoy/ironic-neutron/blob/master/devstack/examples/ironic-neutron-vm.md
[1] https://github.com/jumpojoy/generic_switch

--
Sincerely
Vasyl Saienko
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Nova][Neutron] Multi-tenancy support

2015-11-17 Thread Vasyl Saienko
Hello Sean, Jim,

Thanks for your comments.
I've created a separate repo: https://github.com/jumpojoy/generic_switch

--
Sincerely
Vasyl Saienko

On Tue, Nov 17, 2015 at 1:38 AM, Jim Rollenhagen <j...@jimrollenhagen.com>
wrote:

> On Mon, Nov 16, 2015 at 06:38:28PM +, Sean M. Collins wrote:
> > On Mon, Nov 16, 2015 at 12:47:13PM EST, Vasyl Saienko wrote:
> > > [0] https://github.com/jumpojoy/neutron
> >
> > The way you created the repository in GitHub, it is impossible to diff
> > it against master to see what you did.
> >
> > https://github.com/jumpojoy/neutron/compare
>
> FWIW, it appears to just be a single commit on top of Neutron from about
> a week ago.
>
> https://github.com/jumpojoy/neutron/commits/generic_switch
>
> https://github.com/jumpojoy/neutron/commit/96de518c30459c91c06789fcc6d17a5d29ed3adc
>
> This should totally be a separate repo with just the plugin, though.
>
> // jim
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Nova][Neutron] Multi-tenancy support

2015-11-16 Thread Vasyl Saienko
Hello Community,

There is a 'raw' version of the Generic ML2 driver [0]. It uses the
'Netmiko' [1] library. At the moment it supports Cisco Catalyst switches,
but it can easily be extended to any SSH-enabled switch.

[0] https://github.com/jumpojoy/neutron
[1] https://github.com/ktbyers/netmiko
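At its core, such a driver translates a Neutron port binding into switch CLI commands pushed over SSH with Netmiko (e.g. `netmiko.ConnectHandler(...)` followed by `send_config_set()`). A hedged sketch of the command-building side (Cisco IOS syntax; helper names are illustrative, not taken from [0]):

```python
# Sketch: turn "plug interface Gi0/1 into VLAN 101" into IOS config
# lines. In the real driver these would be sent over SSH, e.g.:
#   conn = netmiko.ConnectHandler(device_type="cisco_ios",
#                                 host=..., username=..., password=...)
#   conn.send_config_set(plug_port_commands("GigabitEthernet0/1", 101))
# Helper names below are illustrative, not taken from [0].

def plug_port_commands(interface, vlan_id):
    """Config lines that put an access port on the tenant's VLAN."""
    return [
        "interface %s" % interface,
        "switchport mode access",
        "switchport access vlan %d" % vlan_id,
    ]

def unplug_port_commands(interface):
    """Config lines that remove the access VLAN from the port."""
    return [
        "interface %s" % interface,
        "no switchport access vlan",
    ]

if __name__ == "__main__":
    print("\n".join(plug_port_commands("GigabitEthernet0/1", 101)))
```

Supporting another SSH-enabled switch then mostly means swapping the command templates and the Netmiko `device_type`.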

On Wed, Nov 11, 2015 at 3:15 PM, Sukhdev Kapur <sukhdevka...@gmail.com>
wrote:

> Hi Vasyl,
>
> I have not cross checked every patch in your list, but, the list looks
> about right.
> From the Ironic, Nova, and Neutron point of the code is pretty much in
> place with these patches.
>
> In this week's meeting we discussed the plan for merging these patches.
> Couple of things are holding us - namely the CI and documentation. We are
> working on getting the CI addressed so that automated testing can be kicked
> off, which will enable us to merge these patches (hopefully in M1).
> Documentation is also underway.
>
> As to the ML2 driver (which you are looking for), in order to make the CI work,
> we are considering couple of options - either write a canned ML2 driver to
> test this or enhance OVS driver to allow/accept/deal with new interface. We
> did not have full quorum in this week's meeting. Hopefully, we will have
> some concrete plans by the next week. But, this ML2 driver is being
> considered to deal with devstack/CI related testing only.
>
> In order to test the real world scenarios, you will need real HW and
> vendor ML2 driver. The only two vendors that I am aware of who has this
> working are HP and Arista. I do not know if HP is in a position to release
> it yet. Arista will take some time to release it, as we follow very strict
> quality control guidelines before releasing any software. I am only techie
> and do not control the release of software, but, my guess is, its release
> will be aligned with release of Mitaka.
>
> If you believe you can be good with a canned ML2 driver for devstack
> initially, that may become available much earlier.
> We meet every Monday at 1700 UTC (8am Pacific time) on
> #openstack-meeting-4. Feel free to drop by or join us - as this is one of
> the things we plan on discussing next Monday's meeting. This will give you
> a better feel.
>
> Hope this helps.
>
> -Sukhdev
> P.S. feel free to ping me on IRC (IRC handle: Sukhdev) on neutron or
> Ironic channels
>
>
> On Tue, Nov 10, 2015 at 3:05 AM, Vasyl Saienko <vsaie...@mirantis.com>
> wrote:
>
>> Hello community,
>>
>> I would like to start preliminary testing of the Ironic multi-tenant network
>> setup, which is supported by Neutron in Liberty according to [1]. I found
>> the following patches that are under review. A Neutron ML2 plugin is also
>> needed, but I can't find any plugin that supports multi-tenancy with Cisco
>> (Catalyst)/Arista switches. I would be grateful for any information on
>> the matter.
>>
>> *Ironic:*
>>
>> https://review.openstack.org/#/c/206232/
>>
>> https://review.openstack.org/#/c/206238/
>>
>> https://review.openstack.org/#/c/206243/
>>
>> https://review.openstack.org/#/c/206244/
>>
>> https://review.openstack.org/#/c/206245/
>>
>> https://review.openstack.org/#/c/139687/
>>
>> https://review.openstack.org/#/c/213262/
>> https://review.openstack.org/#/c/228496/
>>
>> *Nova:*
>>
>> https://review.openstack.org/#/c/186855/
>> https://review.openstack.org/#/c/194413/
>>
>> *python-ironicclient*:
>> https://review.openstack.org/#/c/206144
>>
>>
>> [1]
>> https://blueprints.launchpad.net/neutron/+spec/neutron-ironic-integration
>>
>>
>>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][Nova][Neutron] Multi-tenancy support

2015-11-10 Thread Vasyl Saienko
Hello community,

I would like to start preliminary testing of the Ironic multi-tenant network
setup, which is supported by Neutron in Liberty according to [1]. I found
the following patches that are under review. A Neutron ML2 plugin is also
needed, but I can't find any plugin that supports multi-tenancy with Cisco
(Catalyst)/Arista switches. I would be grateful for any information on the
matter.

*Ironic:*

https://review.openstack.org/#/c/206232/

https://review.openstack.org/#/c/206238/

https://review.openstack.org/#/c/206243/

https://review.openstack.org/#/c/206244/

https://review.openstack.org/#/c/206245/

https://review.openstack.org/#/c/139687/

https://review.openstack.org/#/c/213262/
https://review.openstack.org/#/c/228496/

*Nova:*

https://review.openstack.org/#/c/186855/
https://review.openstack.org/#/c/194413/

*python-ironicclient*:
https://review.openstack.org/#/c/206144


[1]
https://blueprints.launchpad.net/neutron/+spec/neutron-ironic-integration
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev