Re: [openstack-dev] [QA][Tempest] Use tempest-config for tempest-cli-improvements

2015-11-29 Thread Yair Fried
Hi,
I agree with Jordan.
We don't have to use the tool as part of the gate. Its target audience is
people, not CI systems; more specifically, new users.
However, we could add a gate (or a few) for the tool that makes sure a
proper conf file is generated. It doesn't have to run the tests, just
compare the output of the script to the conf generated by devstack.
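The gate check described above could be little more than flattening both INI files and diffing the option values; here is a minimal sketch of that idea (the pass/fail policy, and which options to ignore, are left open):

```python
import configparser

def flatten(parser):
    """Flatten a parsed config into {(section, option): value}."""
    return {(s, o): v for s in parser.sections() for o, v in parser.items(s)}

def diff_confs(generated_text, reference_text):
    """Return {(section, option): (generated, reference)} for every option
    whose value differs between two tempest.conf contents."""
    gen_parser = configparser.ConfigParser()
    gen_parser.read_string(generated_text)
    ref_parser = configparser.ConfigParser()
    ref_parser.read_string(reference_text)
    gen, ref = flatten(gen_parser), flatten(ref_parser)
    return {key: (gen.get(key), ref.get(key))
            for key in set(gen) | set(ref)
            if gen.get(key) != ref.get(key)}
```

A gate job would then fail whenever `diff_confs(tool_output, devstack_conf)` reports differences it cannot explain.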

Re Rally - I believe the best place for tempest configuration script is
within tempest. That said, if the Tempest community doesn't want this tool,
we'll have to settle for the Rally solution.

Regards
Yair

On Fri, Nov 27, 2015 at 11:31 AM, Jordan Pittier  wrote:

> Hi,
> I think this script is valuable to some users: Rally and Red Hat expressed
> their needs, they seem clear.
>
> This tool is far from bullet proof and if used blindly or in case of bugs,
> Tempest could be misconfigured. So, we could have this tool inside the
> Tempest repository (in the tools/) but not use it at all for the Gate.
>
> I am not sure I fully understand the resistance to this: if we don't use
> this config generator for the gate, what's the risk?
>
> Jordan
>
> On Fri, Nov 27, 2015 at 8:05 AM, Ken'ichi Ohmichi 
> wrote:
>
>> 2015-11-27 15:40 GMT+09:00 Daniel Mellado :
>> > I still think that even if there are some issues with the
>> > feature, such as skipping tests in the gate, the feature itself is still
>> > good - we just won't use it for the gates.
>> > Instead, it would be used as a wrapper for users interested in
>> > trying it against real clouds.
>> >
>> > Ken, do you really think a tempest user should know all tempest options?
>> > As you pointed out, there are quite a few of them, and even if users
>> > should at least know their environment, this script would set a minimum
>> > acceptable default. Do you think the PTL and Pre-PTL concerns that we
>> > spoke of would still apply to that scenario?
>>
>> If Tempest users run only part of Tempest's tests, they need to know
>> only the options which are used by those tests.
>> For example, current Tempest contains ironic API tests and the
>> corresponding options.
>> If users don't want to run these tests because the cloud doesn't support
>> the ironic API, they don't need to know/set up these options.
>> I feel users need to know the options used by the tests they want to run,
>> because they need to investigate the cause if they face a problem during
>> Tempest tests.
>>
>> Tempest options already have default values, but you need a script to
>> change them from those defaults.
>> Don't the default values work for your cloud at all?
>> If so, the defaults themselves should be improved.
>>
>> Thanks
>> Ken Ohmichi
>>
>> ---
>>
>> > Andrey, Yaroslav. Would you like to revisit the blueprint to adapt it to
>> > tempest-cli improvements? What do you think about this, Masayuki?
>> >
>> > Thanks for all your feedback! ;)
>> >
>> > El 27/11/15 a las 00:15, Andrey Kurilin escribió:
>> >
>> > Sorry for the wrong numbers. The bug fix for the issue with counters is
>> > merged. Correct numbers (latest result from rally's gate [1]):
>> >  - total number of executed tests: 1689
>> >  - success: 1155
>> >  - skipped: 534 (neutron,heat,sahara,ceilometer are disabled. [2] should
>> > enable them)
>> >  - failed: 0
>> >
>> > [1] -
>> >
>> http://logs.openstack.org/27/246627/11/gate/gate-rally-dsvm-verify-full/800bad0/rally-verify/7_verify_results_--html.html.gz
>> > [2] - https://review.openstack.org/#/c/250540/
>> >
>> > On Thu, Nov 26, 2015 at 3:23 PM, Yaroslav Lobankov <
>> yloban...@mirantis.com>
>> > wrote:
>> >>
>> >> Hello everyone,
>> >>
>> >> Yes, I am working on this now. We have some success already, but there
>> >> is a lot of work to do. Of course, some things don't work ideally. For
>> >> example, in [2] from the previous letter we have not 24 skipped tests
>> >> but actually many more. So we have a bug somewhere :)
>> >>
>> >> Regards,
>> >> Yaroslav Lobankov.
>> >>
>> >> On Thu, Nov 26, 2015 at 3:59 PM, Andrey Kurilin > >
>> >> wrote:
>> >>>
>> >>> Hi!
>> >>> Boris P. and I tried to push a spec [1] for an automated tempest config
>> >>> generator, but we did not succeed in merging it. IMO, the QA team
>> >>> doesn't want to have such a tool :(
>> >>>
>> >>> >However, there is a big concern:
>> >>> >If the script contain a bug and creates the configuration which makes
>> >>> >most tests skipped, we cannot do enough tests on the gate.
>> >>> >Tempest contains 1432 tests and difficult to detect which tests are
>> >>> >skipped as unexpected.
>> >>>
>> >>> Yaroslav Lobankov is working on an improvement to the tempest config
>> >>> generator in Rally. The last time we launched a full tempest run [2],
>> >>> we got 1154 successful tests and only 24 skipped. Also, there is a
>> >>> patch which adds an x-fail mechanism (based on subunit-filter): you can
>> >>> transmit a file with test names + 

Re: [openstack-dev] [rally] Rally boot tests fails with Error Forbidden: It is not allowed to create an interface on external networks

2015-10-20 Thread Yair Fried
On Tue, Oct 20, 2015 at 2:06 PM, Behzad Dastur 
wrote:

> I have a contrail/OpenStack cloud deployed on which I am trying to run
> some rally benchmarks. But I am having trouble getting the rally boot tests
> to run. It throws the "Error Forbidden: It is not allowed to create an
> interface on external network"
>
> It seems it is trying to create an interface on the external network,
> however in this case that operation is not required as the contrail plugin
> handles that.
>
What version of Rally are you using?
Could you please provide your task file? Looks like you are explicitly
telling rally to use your external network for the VMs.


> Is there a way to tell the rally scenario to avoid doing that? Simply
> put, the operations that need to happen are:
>
> 1. nova boot (create private network/ or use private network provided)
>
The "network" context should allow you to dynamically create the networks.
Also, all scenarios that boot an instance can propagate boot arguments even
if they aren't explicitly listed (for more details try "$ rally plugin info
<scenario name>"), so you should be able to pass "{networks: {uuid: <network
uuid>}}" to the scenario.
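For illustration, a boot task that relies on the "network" context (instead of wiring VMs to an external network) could look roughly like this; the option names come from this thread, while the flavor/image names and counts are only placeholders:

```json
{
    "NovaServers.boot_server": [
        {
            "args": {
                "flavor": {"name": "m1.tiny"},
                "image": {"name": "cirros-0.3.1-x86_64"},
                "auto_assign_nic": true
            },
            "runner": {"type": "constant", "times": 1, "concurrency": 1},
            "context": {
                "network": {"networks_per_tenant": 1},
                "users": {"tenants": 1, "users_per_tenant": 1}
            }
        }
    ]
}
```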

2. neutron floating ip create, and assign it to the port, e.g. (neutron
> floatingip-create --port-id <port-id> <ext-net-id>)
>
Only in VMTask AFAIK.


> Here is the error log:
>
> 2015-10-20 00:24:12.759 19075 INFO rally.plugins.openstack.context.keystone.users [-] Task 3000fcbd-2762-400d-920f-dfbfb667e7ec | Starting: Enter context: `users`
> 2015-10-20 00:24:14.711 19075 INFO rally.plugins.openstack.context.keystone.users [-] Task 3000fcbd-2762-400d-920f-dfbfb667e7ec | Completed: Enter context: `users`
> 2015-10-20 00:24:16.222 19264 INFO rally.task.runner [-] Task 3000fcbd-2762-400d-920f-dfbfb667e7ec | ITER: 0 START
> 2015-10-20 00:24:16.227 19264 INFO rally.task.runner [-] Task 3000fcbd-2762-400d-920f-dfbfb667e7ec | ITER: 1 START
> 2015-10-20 00:24:18.420 19264 INFO rally.task.runner [-] Task 3000fcbd-2762-400d-920f-dfbfb667e7ec | ITER: 0 END: Error Forbidden: It is not allowed to create an interface on external network 2de28d39-34f9-48c5-bbac-609e258b7aad (HTTP 403) (Request-ID: req-fe32bcf8-f624-4a2d-a083-7b6c5d1f24ab)
>
>
> regards,
> Behzad
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Rally boot tests fails with Error Forbidden: It is not allowed to create an interface on external networks

2015-10-20 Thread Yair Fried
On Tue, Oct 20, 2015 at 8:39 PM, Behzad Dastur <behzad.das...@gmail.com>
wrote:

> Hi Yair,
> The rally version I am using is 0.1.2
>
> > rally --version
> 0.1.2
>
> Also the task file is as shown below. Do you have an example of the
> "network" context to skip creating the interface on the external network?
>
Have you seen the plugin reference?
https://rally.readthedocs.org/en/latest/plugin/plugin_reference.html
It looks like there's also an existing_network context, but I'm unfamiliar
with it.

> vagrant@rally:~/rally$ more /vagrant/boot.json
>
> {% set flavor_name = flavor_name or "m1.tiny" %}
>
> {
>
> "NovaServers.boot_server": [
>
> {
>
> "args": {
>
> "flavor": {
>
> "name": "{{flavor_name}}"
>
> },
>
"auto_assign_nic": true,

> "image": {
>
> "name": "cirros-0.3.1-x86_64"
>
> },
>
> "use_floatingip": false
>
I think this should be true (or maybe even removed)

> },
>
> "runner": {
>
> "type": "constant",
>
> "times": 10,
>
> "concurrency": 2
>
> },
>
>     "context": {
>
"network": {"networks_per_tenant": 1},

> "users": {
>
> "tenants": 3,
>
> "users_per_tenant": 2
>
> }
>
> }
>
> }
>
> ]
>
> }
>
> regards,
> Behzad
>

Re: [openstack-dev] [Rally] Improve review process

2015-05-05 Thread Yair Fried
+1

- Original Message -
From: Boris Pavlovic bpavlo...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Tuesday, May 5, 2015 6:09:43 PM
Subject: Re: [openstack-dev] [Rally] Improve review process

Roman, 


Well done! This really simplifies life! 


Best regards, 
Boris Pavlovic 

On Tue, May 5, 2015 at 4:07 PM, Mikhail Dubov  mdu...@mirantis.com  wrote: 



Hi Roman, 

a truly great job. Very impressive and useful. Thanks a lot! 

Best regards, 
Mikhail Dubov 

Engineering OPS 
Mirantis, Inc. 
E-Mail: mdu...@mirantis.com 
Skype: msdubov 

On Tue, May 5, 2015 at 3:11 PM, Roman Vasilets  rvasil...@mirantis.com  
wrote: 





Hi, Rally Team. 

I have created a Rally Gerrit dashboard that organizes patches into groups: 
Critical for next release, Waiting for final approve, Bug fixes, Proposed 
specs, Important patches, Ready for review, Has -1 but passed tests. Please use 
the link http://goo.gl/iRxA5t for your convenience. The patch is here: 
https://review.openstack.org/#/c/179610/ It was made with gerrit-dash-creator. 

The first group holds the patches that need to merge for the nearest release. 
The content of the next three groups is obvious from their titles. Important 
patches are simply patches chosen (starred) by Boris Pavlovic or Mikhail Dubov. 
Ready for review means patches that are waiting for attention. And the last 
section holds patches with a -1 mark that nevertheless passed CI. 


Roman Vasilets, Mirantis Inc. 

Intern Software Engineer 



Re: [openstack-dev] [rally] Weekly meeting

2015-05-05 Thread Yair Fried
Thank you for moving it to a more reasonable time for me.

- Original Message -
From: Boris Pavlovic bo...@pavlovic.me
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org, carlos torres 
carlos.tor...@rackspace.com, yfr...@redhat.com, yingjun li 
yingjun...@kylin-cloud.com, Aleksandr Maretskiy amarets...@mirantis.com, 
Andrey Kurilin akuri...@mirantis.com, Mikhail Dubov msdu...@gmail.com, 
Oleg Anufriev oanufr...@mirantis.com, Roman Vasilets 
rvasil...@mirantis.com, Sergey Skripnick sskripn...@mirantis.com
Sent: Tuesday, May 5, 2015 6:28:52 PM
Subject: Re: [openstack-dev] [rally] Weekly meeting

+Rally team

Just to make sure that everybody saw this.

Best regards,
Boris Pavlovic

On Tue, May 5, 2015 at 6:19 PM, Mikhail Dubov mdu...@mirantis.com wrote:

 Hi everyone,

 sorry for the previous misleading message. We have decided to move our
 weekly meeting to Wednesdays at 14:00 UTC (IRC, *#openstack-meeting*). As
 said before, all the relevant information including the meeting agenda can
 be found on the wiki page https://wiki.openstack.org/wiki/Meetings/Rally
 .

 Best regards,
 Mikhail Dubov

 Engineering OPS
 Mirantis, Inc.
 E-Mail: mdu...@mirantis.com
 Skype: msdubov

 On Tue, May 5, 2015 at 6:10 PM, Mikhail Dubov mdu...@mirantis.com wrote:

 Hi everyone,

 let me remind you that today there is the weekly Rally meeting at 17:00
 UTC (IRC, *#openstack-meeting*).

 Starting from today, we will be posting our meeting agenda at the
 corresponding wiki page https://wiki.openstack.org/wiki/Meetings/Rally
 . Feel free to comment on the agenda / to propose new topics.

 Best regards,
 Mikhail Dubov

 Engineering OPS
 Mirantis, Inc.
 E-Mail: mdu...@mirantis.com
 Skype: msdubov

 On Tue, Apr 28, 2015 at 1:05 PM, Mikhail Dubov mdu...@mirantis.com
 wrote:

 Hi everyone,

 let me remind you that today there is the weekly Rally meeting at 17:00
 UTC (IRC, *#openstack-meeting*).

 Here is the agenda for today:

1. Rally QA week: tasks, assignees, progress
2. Upcoming Rally 0.0.4 release: progress on critical patches
3. Spec on refactoring scenario utils: review and discussion (
https://review.openstack.org/#/c/172831/)
4. Spec on in-tree functional tests: review and discussion (
https://review.openstack.org/#/c/166487/)
5. Free discussion

 The meeting is going to be chaired by Alexander Maretskiy.

 Feel free to comment on the agenda / to propose new topics.


 Best regards,
 Mikhail Dubov

 Engineering OPS
 Mirantis, Inc.
 E-Mail: mdu...@mirantis.com
 Skype: msdubov






Re: [openstack-dev] [opentack-dev][meetings] Proposing changes in Rally meetings

2015-04-18 Thread Yair Fried
- Original Message -

 From: Aleksandr Maretskiy amarets...@mirantis.com
 To: Andrey Kurilin akuri...@mirantis.com
 Cc: Boris Pavlovic bo...@pavlovic.me, OpenStack Development Mailing
 List openstack-dev@lists.openstack.org, yfr...@redhat.com, yingjun li
 yingjun...@kylin-cloud.com, Mikhail Dubov msdu...@gmail.com, Oleg
 Anufriev oanufr...@mirantis.com, Roman Vasilets
 rvasil...@mirantis.com, Sergey Skripnick sskripn...@mirantis.com
 Sent: Saturday, April 18, 2015 1:00:55 PM
 Subject: Re: [opentack-dev][meetings] Proposing changes in Rally meetings

 Agreed with everything, but I think it would be a bit better if we moved one
 meeting from Monday to another day (Andrey K. is right)

 On Fri, Apr 17, 2015 at 5:35 PM, Andrey Kurilin  akuri...@mirantis.com 
 wrote:

   - We should start making agenda for each meeting and publish it to Rally
   wiki
 

  +1
 

+1 

   * Second is release management meeting, where we are discussing
   priorities
   for
 
   current & next release. So the core team will know what to review first.
 

  It would be nice to post list of high priority patches to etherpad or
  google
  docs after each meeting
 

   - Move meetings from #openstack-meeting to #openstack-rally chat.
 

  doesn't matter for me:)
 

As long as the records are kept. 

   - We should adjust better time for current Rally team.
 

  yeah. Current time is not good:( +1 for 15:00 UTC
 

I'd like even earlier, but if it works for everyone else, I'll make the effort. 

   - Do meetings every Monday and Wednesday
 

  Monday?) Monday is a very hard day...
 

  On Fri, Apr 17, 2015 at 4:26 PM, Boris Pavlovic  bo...@pavlovic.me 
  wrote:
 

   Rally team,
  
 

   I would like to propose next changes in Rally meetings:
  
 

   - We should start making agenda for each meeting and publish it to Rally
   wiki
  
 

   - We should do 2 meeting per week:
  
 

   * First is regular meeting (like we have now) where we are discussing
   everything
  
 
   * Second is release management meeting, where we are discussing
   priorities
   for
  
 
    current & next release. So the core team will know what to review first.
  
 

Seems like the 2nd meeting is mainly for cores, so maybe we can set a 
better (earlier) time for it among a smaller group? 

   - Move meetings from #openstack-meeting to #openstack-rally chat.
  
 

   - We should adjust better time for current Rally team. Like at the moment
   it
   is too late
  
 
   for few of cores in Rally. it's 17:00 UTC and I would like to suggest to
   make
   at 15:00 UTC.
  
 

   - Do meetings every Monday and Wednesday
  
 

   Thoughts ?
  
 

   Best regards,
  
 
   Boris Pavlovic
  
 

  --
 
  Best regards,
 
  Andrey Kurilin.
 


Re: [openstack-dev] [qa] Lack of consistency in returning response from tempest clients

2014-08-31 Thread Yair Fried
Hi,
I'd rather not subclass dict directly.
For various reasons, adding extra attributes to a normal Python dict seems
prone to errors, since people will be expecting regular dicts; on the other
hand, if we want to expand it in the future, we might run into problems
playing with dict methods (such as update).

I suggest (roughly):

class ResponseBody(dict):
    def __init__(self, body=None, resp=None):
        self._data_dict = body or {}
        self.resp = resp

    def __getitem__(self, index):
        return self._data_dict[index]


Thus we can keep the previous dict interface, but protect the data and make
sure the object behaves exactly as we expect it to. If we want it to have more
dict attributes/methods, we can add them explicitly.
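Fleshed out and runnable, that suggestion might look like the following sketch (the `get`/`keys` delegations are illustrative additions showing the "add them explicitly" step, not part of the original proposal):

```python
class ResponseBody(dict):
    """Keeps the parsed body in a private dict and the raw response on an
    attribute, exposing dict behaviour explicitly, one method at a time."""

    def __init__(self, body=None, resp=None):
        super().__init__()
        self._data_dict = body or {}
        self.resp = resp

    def __getitem__(self, index):
        return self._data_dict[index]

    # Explicitly delegated dict behaviour, added one method at a time:
    def get(self, key, default=None):
        return self._data_dict.get(key, default)

    def keys(self):
        return self._data_dict.keys()


body = ResponseBody({'id': '42', 'status': 'ACTIVE'}, resp='raw-response')
# body['id'] and body.resp both work; note that inherited dict methods that
# are NOT delegated (len(), iteration) still see the empty base dict - the
# kind of surprise this thread is weighing against subclassing dict blindly.
```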


- Original Message -
From: Boris Pavlovic bpavlo...@mirantis.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Saturday, August 30, 2014 2:53:37 PM
Subject: Re: [openstack-dev] [qa] Lack of consistency in returning response 
from tempest clients

Sean, 




class ResponseBody(dict): 
    def __init__(self, body={}, resp=None): 
        self.update(body) 
        self.resp = resp 


Are you sure that you would like to have the default value {} for a method 
argument, and not something like: 


class ResponseBody(dict): 
    def __init__(self, body=None, resp=None): 
        body = body or {} 
        self.update(body) 
        self.resp = resp 

In your case you have a side effect. Take a look at: 
http://stackoverflow.com/questions/1132941/least-astonishment-in-python-the-mutable-default-argument
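For reference, the pitfall that link describes fits in a few lines; `append_item` is a made-up example, not Tempest code:

```python
def append_item(item, bucket=[]):
    """Anti-pattern: the default list is created once, at function
    definition time, so state leaks between unrelated calls."""
    bucket.append(item)
    return bucket

def append_item_safe(item, bucket=None):
    """The fix suggested above: create the container inside the call."""
    bucket = bucket or []
    bucket.append(item)
    return bucket
```

Calling `append_item('a')` and then `append_item('b')` returns the same shared list both times, which is exactly the "least astonishment" failure discussed in the link.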
 

Best regards, 
Boris Pavlovic 


On Sat, Aug 30, 2014 at 10:08 AM, GHANSHYAM MANN  ghanshyamm...@gmail.com  
wrote: 



+1. That will also be helpful for APIs coming up with microversions, like Nova's. 


On Fri, Aug 29, 2014 at 11:56 PM, Sean Dague  s...@dague.net  wrote: 


On 08/29/2014 10:19 AM, David Kranz wrote: 
 While reviewing patches for moving response checking to the clients, I 
 noticed that there are places where client methods do not return any value. 
 This is usually, but not always, a delete method. IMO, every rest client 
 method should return at least the response. Some services return just 
 the response for delete methods and others return (resp, body). Does any 
 one object to cleaning this up by just making all client methods return 
 resp, body? This is mostly a change to the clients. There were only a 
 few places where a non-delete method was returning just a body that was 
 used in test code. 

Yair and I were discussing this yesterday. As the response correctness 
checking is happening deeper in the code (and you are seeing more and 
more people assigning the response object to _ ) my feeling is Tempest 
clients should probably return a body object that's basically: 

class ResponseBody(dict): 
    def __init__(self, body={}, resp=None): 
        self.update(body) 
        self.resp = resp 

Then all the clients would have single return values, the body would be 
the default thing you were accessing (which is usually what you want), 
and the response object is accessible if needed to examine headers. 

-Sean 

-- 
Sean Dague 
http://dague.net 

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 



-- 
Thanks & Regards 
Ghanshyam Mann 




Re: [openstack-dev] [Neutron] [LBaaS] LBaaS v2 API syntax additions/changes

2014-08-28 Thread Yair Fried
I would like to add a question to John's list



- Original Message -
 From: John Schwarz jschw...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Tuesday, August 26, 2014 2:22:33 PM
 Subject: Re: [openstack-dev] [Neutron] [LBaaS] LBaaS v2 API syntax
 additions/changes
 
 
 
 On 08/25/2014 10:06 PM, Brandon Logan wrote:
 
  2. Therefor, there should be some configuration to specifically enable
  either version (not both) in case LBaaS is needed. In this case, the
  other version is disabled (ie. a REST query for non-active version
  should return a not activated error). Additionally, adding a
  'lb-version' command to return the version currently active seems like a
  good user-facing idea. We should see how this doesn't negatively effect
  the db migration process (for example, allowing read-only commands for
  both versions?)
  
  A /version endpoint can be added for both v1 and v2 extensions and
  service plugins.  If it doesn't already exist, it would be nice if
  neutron had an endpoint that would return the list of loaded extensions
  and their versions.
  
 There is 'neutron ext-list', but I'm not familiar enough with it or with
 the REST API to say if we can use that.
 
  3. Another decision that's needed to be made is the syntax for v2. As
  mentioned, the current new syntax is 'neutron lbaas-object-command'
  (against the old 'lb-object-action'), keeping in mind that once v1
  is deprecated, a syntax like 'lbv2-object-action' would be probably
  unwanted. Is 'lbaas-object-command' okay with everyone?
  
  That is the reason we with with lbaas because lbv2 looks ugly and we'd
  be stuck with it for the lifetime of v2, unless we did another migration
  back to lb for it.  Which seemed wrong to do, since then we'd have to
  accept both lbv2 and lb commands, and then deprecate lbv2.
  
  I assume this also means you are fine with the prefix in the API
  resource of /lbaas as well then?
  
 I don't mind, as long there is a similar mechanism which disables the
 non-active REST API commands. Does anyone disagree?
 
  4. If we are going for different API between versions, appropriate
  patches also need to be written for lbaas-related scripts and also
  Tempest, and their maintainers should probably be notified.
  
  Could you elaborate on this? I don't understand what you mean by
  different API between version.
  
 The intention was that the change of the user-facing API also forces
 changes on other levels - not only neutronclient needs to be modified
 accordingly, but also tempest system tests, horizon interface regarding
 LBaaS...


5. If we accept #3 and #4 to mean that the python-client API and CLI must be 
changed/updated, and so must the Tempest clients and tests, then what about 
other projects consuming the Neutron API? How will Heat and Ceilometer be 
affected by this change?

Yair


 


Re: [openstack-dev] [qa] The role of an abstract client in tempest

2014-07-28 Thread Yair Fried




- Original Message -
 From: David Kranz dkr...@redhat.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Friday, July 25, 2014 4:54:22 PM
 Subject: [openstack-dev] [qa] The role of an abstract client in tempest
 
 Even as a core contributor for several years, it has never been clear
 what the scope of these tests should be.
 As we move forward with the necessity of moving functional testing to
 projects, we need to answer this question for real, understanding that
 part of the mission for these tests now is validation of clouds.  Doing
 so is made difficult by the fact that the tempest api tests take a very
 opinionated view of how services are invoked. In particular, the tempest
 client is very low-level and at present the way a functional test is
 written depends on how and where it is going to run.
 
 In an ideal world, functional tests could execute in a variety of
 environments ranging from those that completely bypass wsgi layers and
 make project api calls directly, to running in a fully integrated real
 environment as the tempest tests currently do. The challenge is that
 there are mismatches between how the tempest client looks to test code
 and how doing object-model api calls looks. Most of this discrepancy is
 because many pieces of invoking a service are hard-coded into the tests
 rather than being abstracted in a client. Some examples are:
 
 1. Response validation
 2. json serialization/deserialization
 3. environment description (tempest.conf)
 4. Forced usage of addCleanUp
 
 Maru Newby and I have proposed changing the test code to use a more
 abstract client by defining the expected signature and functionality
 of methods on the client. Roughly, the methods would take positional
 arguments for pieces that go in the url part of a REST call, and kwargs
 for the json payload. The client would take care of these enumerated
 issues (if necessary) and return an attribute dict. The test code itself
 would then just be service calls and checks of returned data. Returned
 data would be inspected as resource.id instead of resource['id']. There
 is a strawman example of this for a few neutron apis here:
 https://review.openstack.org/#/c/106916/
 Doing this would have the twin advantages of eliminating the need for
 boilerplate code in tests and making it possible to run the tests in
 different environments. It would also allow the inclusion of project
 functional tests in more general validation scenarios.
 
 Since we are proposing to move parts of tempest into a stable library
 https://review.openstack.org/108858, we need to define the client in a
 way that meets all the needs outlined here before doing so. The actual
 work of defining the client in tempest and changing the code that uses
 it could largely be done one service at a time, in the tempest code,
 before being split out.
 
 What do folks think about this idea?

I agree.
I also believe that streamlining the clients should take (at least partial)
precedence over
https://blueprints.launchpad.net/tempest/+spec/tempest-client-scenarios.
I fear that in their current state, the tempest clients are not ready to be
used in integration testing, and that using them will push us backward on the
scenario front, since we will lose much of the reusable code.

However, I believe that a week of coordinated and concentrated effort should
get us to a good enough state to move forward on both the
tempest-client-scenarios bp and further enhancement of the tempest clients,
one at a time.

Yair
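The attribute-style access David describes (resource.id instead of resource['id']) can be sketched as a thin dict wrapper; this is only an illustration of the idea, not the code in the strawman review:

```python
class AttrDict(dict):
    """A dict whose keys are also readable as attributes.

    Keys that collide with real dict attributes (e.g. 'keys') must still be
    read with [] - one of the trade-offs of this approach.
    """

    def __getattr__(self, name):
        # __getattr__ is only called when normal attribute lookup fails,
        # so the ordinary dict methods keep working.
        try:
            value = self[name]
        except KeyError:
            raise AttributeError(name)
        # Wrap nested dicts so chained access (server.flavor.id) works too.
        return AttrDict(value) if isinstance(value, dict) else value


server = AttrDict({'id': 'uuid-1', 'flavor': {'id': '42'}})
# server.id and server['id'] are now interchangeable.
```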

 
   -David
 


Re: [openstack-dev] [QA] Questions about test policy for scenario test

2014-06-24 Thread Yair Fried
- Original Message -

 From: Fei Long Wang feil...@catalyst.net.nz
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org
 Cc: br...@catalyst.net.nz
 Sent: Tuesday, June 24, 2014 8:29:03 AM
 Subject: [openstack-dev] [QA] Questions about test policy for scenario test

 Greetings,

 We're leveraging the scenario test of Tempest to do the end-to-end
 functional test to make sure everything work great after upgrade,
 patching, etc. And We're happy to fill the gaps we found. However, I'm a
 little bit confused about the test policy from the scenario test
 perspective, especially comparing with the API test. IMHO, scenario test
 will cover some typical work flows of one specific service or mixed
 services, and it would be nice to make sure the function is really
 working instead of just checking the object status from OpenStack
 perspective. Is that correct?

 For example, live migration of Nova, it has been covered in API test of
 Tempest (see
 https://github.com/openstack/tempest/blob/master/tempest/api/compute/test_live_block_migration.py).
 But as you see, it just checks if the instance is Active or not instead
 of checking if the instance can be login/ssh successfully
Seems to me that what you want is to add a migration test to 
https://github.com/openstack/tempest/blob/master/tempest/scenario/test_network_advanced_server_ops.py
 
This scenario does exactly what you are looking for: 
1. check VM connectivity 
2. mess with the VM (reboot, resize, or in your case - migrate) 
3. check VM connectivity again 
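The three steps above can be sketched schematically; the callables here are stand-ins so the sketch is self-contained, not the real tempest scenario-manager API:

```python
def run_server_ops_scenario(boot, check_connectivity, operation):
    """Boot a VM, verify it, disrupt it, then verify it again."""
    server = boot()                          # 1. boot / wire up the VM
    assert check_connectivity(server), "VM unreachable after boot"
    operation(server)                        # 2. reboot, resize, or migrate
    assert check_connectivity(server), "VM unreachable after operation"
    return server
```

Adding live migration to the existing scenario would then amount to passing a migrate operation in step 2, with the before/after connectivity checks unchanged.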

 . Obviously,
 from a real-world view, we'd like to check whether it's really working. So
 the question is, should this be improved? If so, should the enhanced code
 be in the API tests, the scenario tests, or somewhere else? Thank you.

 --
 Cheers  Best regards,
 Fei Long Wang (王飞龙)
 --
 Senior Cloud Software Engineer
 Tel: +64-48032246
 Email: flw...@catalyst.net.nz
 Catalyst IT Limited
 Level 6, Catalyst House, 150 Willis Street, Wellington
 --



Re: [openstack-dev] [QA] Tempest Release Naming

2014-05-27 Thread Yair Fried
- Original Message -
 From: Sean Dague s...@dague.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Friday, May 23, 2014 3:40:16 AM
 Subject: Re: [openstack-dev] [QA] Tempest Release Naming
 
 On 05/22/2014 08:33 PM, Jeremy Stanley wrote:
  On 2014-05-22 18:33:34 -0400 (-0400), Matthew Treinish wrote:
  [...]
  I'd like to stick with one scheme and not decide to change it
  later on. I figured I should bring this out to a wider audience to
  see if there were other suggestions or opinions before I pushed
  out the tag, especially because the tags are primarily for the
  consumers of Tempest.
  
  I think as long as your release notes clearly indicate what you
  think can safely be tested with a given release of Tempest, it
  should be versioned just like any other piece of software (which is
  to say, with version numbers chosen by one or more humans to
  reflect
  the degree of improvement/breakage reflected within the release, a
  la Semantic Versioning).
 
  Given that we're not really going to have major / minor bumps at this
  point (more a continuous roll), I might argue that we should just go
  Firefox on it and bump the integer on every release.
 
 Tempest 1 is now, Tempest 2 is Julyish, Tempest 3 is at Juno, Tempest
 4
 is next winter, etc.
 
   -Sean

+1
Very practical, very simple, and clearly independent of OpenStack releases.

 
 --
 Sean Dague
 http://dague.net
 
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][QA]Request for input for Juno Design Summit, Atlanta

2014-04-28 Thread Yair Fried
Hi,
For everyone's convenience, I've added to the pad short descriptions of Network 
Scenarios that are currently in tree (or under review) that I am familiar with.
Feel free to add/edit


Regards
Yair
 

- Original Message -
 From: Miguel Lavalle mig...@mlavalle.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Tuesday, April 29, 2014 1:48:23 AM
 Subject: [openstack-dev] [Neutron][QA]Request for input for Juno Design   
 Summit, Atlanta
 
 
 
 Dear fellow Neutron'ers and QA'ers,
 
 During the Atlanta Design Summit we have been assigned 20 minutes (
 http://junodesignsummit.sched.org/event/48ccd60090740ae80b4d1811b9a61303#.U12EsqbwBPq
 ) to agree on the Tempest testing that will be developed for Neutron
 during the Juno cycle. In order to make the most out of those 20
 minutes, we want to start the conversation ahead of time, so, to the
 extent possible, we concentrate on reaching agreement during the
 Atlanta session. To get the conversation rolling, here's an initial
 list of topics where we, as a community, need to reach consensus:
 
 
 * Scenario testing. While during Icehouse we achieved a good
 level of community engagement and coverage in API testing,
 scenarios have received little attention, even though a few
 developers made great contributions. During Juno, we want to
 significantly expand this effort, along the following lines:
 
 
  * We are looking for ideas for new scenarios from anyone and
  everyone (dev, qa, automation, manual, users, etc). There
  are no bad ideas. We need ideas, not necessarily fully
  formed blueprints, though the latter would be even better.
  Don't let constraints (whitebox, multi-host, etc) keep
  you from proposing an idea. We will sort through them later.
  Ideas are our initial gap right now.
 * Creation of blueprints for the agreed upon scenarios, so
 potential contributors can volunteer to implement them and
 progress tracking can be accomplished.
  * Creation of a "how to" or primer wiki page on how to
  implement Neutron scenario tests
 * Documentation of scenario tests. While api tests are to a
 great extent self explanatory, scenarios are more complex
 and it's not easy for people other than the writers of a
 specific test to understand. We need to improve
 documentation.
 
 
 * One solution might be to assign scenario tests owners
 to keep them up to date and well documented
 *
 API tests. The challenge in this area seems to be in:
 
 
  * Closing the gaps that might have been left open during
  Icehouse
  * Adding new tests needed as a consequence of changes and
  evolution of the Neutron API
  * Defining an ongoing process to prevent API tests from becoming
  outdated or stale
 *
 Nova Networking - Neutron parity sub-project. Are there any specific
 needs in this sub-project that can be covered with Tempest based
 testing?
 * Other Neutron sub-projects. Are there specific needs of other
 Neutron sub-projects that can be covered with Tempest based
 testing?
 
 
 
 This is a list of topics meant to start the conversation on this
 subject. Please feel free to chime in, either in the mailing list or
 at this etherpad page
 https://etherpad.openstack.org/p/TempestAndNeutronJuno
 
 
 Thanks in advance for your input
 
 Miguel Lavalle
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tempest][qa] : backporting Test Security Groups Basic Ops in stable/havana ?

2014-04-14 Thread Yair Fried
Hi Julien
I've tried to backport it in the past, and it was rejected, as it needs quite a 
lot of work.

If you are able to convince everyone otherwise, I will gladly help you with the 
code.

However, I think the better solution would be to somehow create a gate that 
runs the scenarios in the master branch against Havana, because, AFAIK, most of 
those scenarios should work against it.

https://bugs.launchpad.net/tempest/+bug/1277040
https://review.openstack.org/#/q/status:abandoned+project:openstack/tempest+branch:stable/havana+topic:bug/1252620,n,z


- Original Message -
 From: LELOUP Julien julien.lel...@3ds.com
 To: openstack-dev@lists.openstack.org
 Sent: Wednesday, April 2, 2014 12:48:31 PM
 Subject: [openstack-dev] [Tempest][qa] : backporting Test Security Groups 
 Basic Ops in stable/havana ?
 
 Hi everyone,
 
 I'm interested in having the Tempest scenario "Test Security Groups
 Basic Ops" available in the stable/havana branch.
 This scenario is nice for acceptance tests and it's running fine with
 a Havana deployment.
 
 Can someone on the stable-maint team backport it from master to
 stable/havana? If it's OK for you, of course :)
 
 Best Regards,
 
 Julien LELOUP
 julien.lel...@3ds.com
 
 
 
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] The future of nosetests with Tempest

2014-03-06 Thread Yair Fried

- Original Message - 

 From: Alexei Kornienko alexei.kornie...@gmail.com
 To: openstack-dev@lists.openstack.org
 Sent: Friday, February 28, 2014 12:43:13 PM
 Subject: Re: [openstack-dev] [QA] The future of nosetests with
 Tempest

 Hi,

 Let me express my concerns on this topic:

  With some recent changes made to Tempest compatibility with
 
  nosetests is going away.
 
 I think that we should not drop nosetests support from Tempest or any
 other project. The problem with testrepository is that it's not
 providing any debugger support at all (and never will). It
 also has some issues with providing error traces in human-readable
 form, and it can be quite hard to find out what is actually broken.

 Because of this I think we should try to avoid any kind of test
 libraries that break compatibility with conventional test runners.

 Our tests should be able to run correctly with nosetests, testtools
 or a plain old unittest runner. If for some reason test libraries
 (like testscenarios) don't provide support for this, we should fix
 these libraries or avoid their usage.

+1
I have the same concern about debugging. It's an essential tool (for me, at 
least) in creating scenario tests. The more complex the test, the harder it is 
to rely on simple log-prints. 

 Regards,
 Alexei Kornienko

 On 02/27/2014 06:36 PM, Frittoli, Andrea (HP Cloud) wrote:

  This is another example of achieving the same result (exclusion from a
  list):
  https://git.openstack.org/cgit/openstack/tripleo-image-elements/tree/elements/tempest/tests2skip.py
  https://git.openstack.org/cgit/openstack/tripleo-image-elements/tree/elements/tempest/tests2skip.txt
 

  andrea
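
The tests2skip approach linked above is, at its core, filtering test ids against a list of regular expressions. A generic, hedged sketch of that idea (not the actual tripleo implementation):

```python
import re

def filter_tests(test_ids, skip_patterns):
    """Return the test ids that do not match any skip regex."""
    compiled = [re.compile(p) for p in skip_patterns]
    return [t for t in test_ids if not any(c.search(t) for c in compiled)]

tests = [
    "tempest.api.compute.test_live_block_migration",
    "tempest.scenario.test_network_basic_ops",
    "tempest.scenario.test_security_groups_basic_ops",
]
kept = filter_tests(tests, [r"security_groups"])
print(kept)
```

The filtered list can then be fed to whichever runner is in use, which is what makes this approach runner-agnostic.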
 

  -Original Message-
 
  From: Matthew Treinish [ mailto:mtrein...@kortar.org ]
 
  Sent: 27 February 2014 15:49
 
  To: OpenStack Development Mailing List (not for usage questions)
 
  Subject: Re: [openstack-dev] [QA] The future of nosetests with
  Tempest
 

  On Tue, Feb 25, 2014 at 07:46:23PM -0600, Matt Riedemann wrote:
   On 2/12/2014 1:57 PM, Matthew Treinish wrote:
    On Wed, Feb 12, 2014 at 11:32:39AM -0700, Matt Riedemann wrote:
     On 1/17/2014 8:34 AM, Matthew Treinish wrote:
      On Fri, Jan 17, 2014 at 08:32:19AM -0500, David Kranz wrote:
       On 01/16/2014 10:56 PM, Matthew Treinish wrote:
        Hi everyone,

        With some recent changes made to Tempest, compatibility with
        nosetests is going away. We've started using newer features
        that nose just doesn't support. One example of this is that
        we've started using testscenarios and we're planning to do
        this in more places moving forward.

        So at Icehouse-3 I'm planning to push the patch out to remove
        nosetests from the requirements list and all the workarounds
        and references to nose will be pulled out of the tree. Tempest
        will also start raising an unsupported exception when you try
        to run it with nose so that there isn't any confusion on this
        moving forward. We talked about doing this at summit briefly
        and I've brought it up a couple of times before, but I believe
        it is time to do this now. I feel for Tempest to move forward
        we need to do this now so that there isn't any ambiguity as we
        add even more features and new types of testing.

       I'm with you up to here.

        Now, this will have implications for people running Tempest
        with Python 2.6 since up until now we've set nosetests. There
        is a workaround for getting Tempest to run with Python 2.6 and
        testr, see: https://review.openstack.org/#/c/59007/1/README.rst
        but essentially this means that when nose is marked as
        unsupported on Tempest, Python 2.6 will also be unsupported by
        Tempest (which honestly it basically has been for a while now,
        we've just gone without making it official).

[openstack-dev] [Neutron] port forwarding from gateway to internal hosts

2014-02-18 Thread Yair Fried
Hi,
What's the status of this BP? - 
https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
will it be ready for I3?
The BP is approved but the patch is abandoned

Regards
Yair

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-20 Thread Yair Fried
I seem to be unable to convey my point using generalization, so I will give a 
specific example:
I would like to have "update dns server" as an additional network scenario. 
Currently I could add it to the existing module:

1. tests connectivity
2. re-associate floating ip
3. update dns server

In which case, failure to re-associate ip will prevent my test from running, 
even though these are completely unrelated scenarios, and (IMO) we would like 
to get feedback on both of them.

Another way is to copy the entire network_basic_ops module, remove 
"re-associate floating ip" and add "update dns server". For the obvious 
reasons, this also seems like the wrong way to go.

I am looking for an elegant way to share the code of these scenarios.
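
One elegant way to share the setup Yair describes, sketched with plain unittest: a common base class owns the (here faked) expensive setup, and each unrelated scenario - re-associating the floating IP, updating DNS - is its own test case, so a failure in one cannot prevent the other from running. All resource names below are invented for illustration; they are not real Tempest fixtures.

```python
import unittest

class NetworkOpsBase(unittest.TestCase):
    """Shared setup: stand-ins for creating network resources and a VM."""
    def setUp(self):
        self.resources = {"network": "net-1", "vm": "vm-1", "fip": "1.2.3.4"}

    def check_connectivity(self):
        self.assertIn("fip", self.resources)

class TestReassociateFloatingIp(NetworkOpsBase):
    def test_reassociate(self):
        self.check_connectivity()
        self.resources["fip"] = "5.6.7.8"   # pretend re-association
        self.check_connectivity()

class TestUpdateDns(NetworkOpsBase):
    def test_update_dns(self):
        self.check_connectivity()
        self.resources["dns"] = "8.8.8.8"   # pretend subnet DNS update
        self.check_connectivity()

if __name__ == "__main__":
    loader = unittest.TestLoader()
    suite = unittest.TestSuite([
        loader.loadTestsFromTestCase(TestReassociateFloatingIp),
        loader.loadTestsFromTestCase(TestUpdateDns),
    ])
    result = unittest.TestResult()
    suite.run(result)
    print(result.testsRun, result.wasSuccessful())  # 2 True
```

The shared code lives once in the base class, yet the runner reports the two scenarios independently, which is the feedback property being asked for.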

Yair


- Original Message -
From: Salvatore Orlando sorla...@nicira.com
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Monday, January 20, 2014 7:22:22 PM
Subject: Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down 
NetworkBasicOps to smaller test cases



Yair is probably referring to statistically independent tests, or whatever case 
for which the following is true (P(x) is the probability that a test succeeds): 


P(4|3|2|1) = P(4|1) * P(3|1) * P(2|1) 



This might apply to the tests we are adding to network_basic_ops scenario; 
however it is worth noting that: 


- in some cases the above relationship does not hold. For instance a public 
network connectivity test can hardly succeed if the private connectivity test 
failed (is that correct? I'm not sure anymore of anything these days!) 
- Sean correctly pointed out that splitting tests will cause repeated activities 
which will just make the test run longer without any additional benefit. 


On the other hand, I understand and share the feeling that we are adding too 
much to the same workflow. Would it make sense to identify a few conceptually 
independent workflows, identify one or more advanced network scenarios, and 
keep only internal + public connectivity checks in basic_ops? 


Salvatore 
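
Spelled out (reconstructing the notation, which the archive has mangled), the assumption is that the sub-scenarios are conditionally independent given the shared setup step 1:

```latex
P(2 \wedge 3 \wedge 4 \mid 1) \;=\; P(2 \mid 1)\, P(3 \mid 1)\, P(4 \mid 1)
```

That is exactly the case in which splitting one long chain into separate tests that each start from step 1 loses no information, while a dependency such as the public/private connectivity example above breaks the factorization.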



On 20 January 2014 09:23, Jay Pipes  jaypi...@gmail.com  wrote: 



On Sun, 2014-01-19 at 07:17 -0500, Yair Fried wrote: 
 OK, 
 but considering my pending patch (#3 and #4) 
 what about: 
 
 #1 -> #2 
 #1 -> #3 
 #1 -> #4 
 
 instead of 
 
 #1 -> #2 -> #3 -> #4 
 
 a failure in #2 will prevent #3 and #4 from running even though they are 
 completely unrelated 

Seems to me that the above is a logical fault. If a failure in #2 
prevents #3 or #4 from running, then by nature they are related to #2. 

-jay 







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-19 Thread Yair Fried


- Original Message -
 From: Sean Dague s...@dague.net
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Sunday, January 19, 2014 1:53:21 PM
 Subject: Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down 
 NetworkBasicOps to smaller test cases
 
 On 01/19/2014 02:06 AM, Yair Fried wrote:
  MT:Is your issue here that it's just called basic ops and you
  don't think that's
  reflective of what is being tested in that file anymore
  
  No.
  My issue is, that the current scenario is, in fact, at least 2
  separate scenarios:
  1. original basic_ops
  2. reassociate_floating_ip
  to which I would like to add (
  https://review.openstack.org/#/c/55146/ ):
  4. check external/internal connectivity
  3. update dns
  
  While #2 includes #1 as part of its setup, its failing shouldn't
  prevent #1 from passing. the obvious solution would be to create
  separate modules for each test case, but since they all share the
  same setup sequence, IMO, they should at least share code.
  Notice that in my patch, #2 still includes #1.
  
  Actually, the more network scenario we get, the more we will need
  to do something in that direction, since most of the scenarios
  will require the setup of a VM with a floating-ip to ssh into.
  
  So either we do this, or we move all of this code to
  scenario.manager which is also becoming very complicated
 
 If #2 is always supposed to work, then I don't actually understand
 why
 #1 being part of the test or not part of the test is really relevant.
 And being part of the same test saves substantial time.
 
  If you have tests that do:
   * A -> B -> C
   * A -> B -> D -> F
  
  There really isn't value in a test for A -> B *as long* as you have
  sufficient signposting to know in the fail logs that A -> B worked
  fine.
 
 And there are sufficient detriments in making it a separate test,
 because it's just adding time to the runs without actually testing
 anything different.

OK,
but considering my pending patch (#3 and #4)
what about:

#1 -> #2
#1 -> #3
#1 -> #4

instead of 

#1 -> #2 -> #3 -> #4

a failure in #2 will prevent #3 and #4 from running even though they are 
completely unrelated


 
   -Sean
 
  
  Yair
  
  - Original Message -
  From: Matthew Treinish mtrein...@kortar.org
  To: OpenStack Development Mailing List (not for usage questions)
  openstack-dev@lists.openstack.org
  Sent: Friday, January 17, 2014 6:17:55 AM
  Subject: Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break
  down NetworkBasicOps to smaller test cases
  
  On Wed, Jan 15, 2014 at 11:20:22AM -0500, Yair Fried wrote:
  Hi Guys
  As Maru pointed out - NetworkBasicOps scenario has grown out of
  proportion and is no longer basic ops.
  
   Is your issue here that it's just called basic ops and you don't
   think that's
   reflective of what is being tested in that file anymore? If that's
   the case
   then just change the name.
  
 
  So, I started breaking it down to smaller test cases that can fail
  independently.
  
   I'm not convinced this is needed. Some scenarios are going to be
   very involved
   and complex. Each scenario test is designed to simulate real use
   cases in the
   cloud, so obviously some of them will be fairly large. The solution
   for making
   debugging easier in these cases is to make sure that any failures
   have clear
   messages. Also make sure there is plenty of signposting debug log
   messages so
   when something goes wrong you know what state the test was in.
   
   If you split things up into smaller individual tests you'll most
   likely end up
   making tests that really aren't scenario tests. They'd be
   closer to API
   tests, just using the official clients, which really shouldn't be
   in the
   scenario tests.
  
 
  Those test cases share the same setup and tear-down code:
  1. create network resources (and verify them)
  2. create VM with floating IP.
 
  I see 2 options to manage these resources:
   1. Completely isolated - resources are created and cleaned using
   setUp() and tearDown() methods [1]. Moving cleanup to tearDown
   revealed this bug [2]. Apparently the previous way (with
   tearDownClass) wasn't as fast. This has the side effect of
   having expensive resources (i.e. VMs and floating IPs) created and
   discarded multiple times though they are unchanged.
 
  2. Shared Resources - Using the idea of (or actually using)
  Fixtures - use the same resources unless a test case fails, in
  which case resources are deleted and created by the next test
  case [3].
  
  If you're doing this and splitting things into smaller tests then
  it has to be
  option 1. Scenarios have to be isolated if there are resources
  shared between
  scenario tests that really is only one scenario and shouldn't be
  split. In fact
  I've been working on a change that fixes the scenario test
  tearDowns that has the
  side effect of enforcing this policy.
  
  Also just for the record we've tried doing option 2

Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-18 Thread Yair Fried
MT:Is your issue here that it's just called basic ops and you don't think 
that's
reflective of what is being tested in that file anymore

No.
My issue is, that the current scenario is, in fact, at least 2 separate 
scenarios:
1. original basic_ops
2. reassociate_floating_ip
to which I would like to add ( https://review.openstack.org/#/c/55146/ ):
4. check external/internal connectivity
3. update dns

While #2 includes #1 as part of its setup, its failing shouldn't prevent #1 
from passing. the obvious solution would be to create separate modules for each 
test case, but since they all share the same setup sequence, IMO, they should 
at least share code.
Notice that in my patch, #2 still includes #1.

Actually, the more network scenario we get, the more we will need to do 
something in that direction, since most of the scenarios will require the setup 
of a VM with a floating-ip to ssh into.

So either we do this, or we move all of this code to scenario.manager which is 
also becoming very complicated

Yair

- Original Message -
From: Matthew Treinish mtrein...@kortar.org
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Friday, January 17, 2014 6:17:55 AM
Subject: Re: [openstack-dev] [qa][Neutron][Tempest][Network] Break down 
NetworkBasicOps to smaller test cases

On Wed, Jan 15, 2014 at 11:20:22AM -0500, Yair Fried wrote:
 Hi Guys
 As Maru pointed out - NetworkBasicOps scenario has grown out of proportion 
 and is no longer basic ops.

Is your issue here that it's just called basic ops and you don't think that's
reflective of what is being tested in that file anymore? If that's the case
then just change the name.

 
 So, I started breaking it down to smaller test cases that can fail 
 independently.

I'm not convinced this is needed. Some scenarios are going to be very involved
and complex. Each scenario test is designed to simulate real use cases in the
cloud, so obviously some of them will be fairly large. The solution for making
debugging easier in these cases is to make sure that any failures have clear
messages. Also make sure there is plenty of signposting debug log messages so
when something goes wrong you know what state the test was in. 

If you split things up into smaller individual tests you'll most likely end up
making tests that really aren't scenario tests. They'd be closer to API
tests, just using the official clients, which really shouldn't be in the
scenario tests.

 
 Those test cases share the same setup and tear-down code:
 1. create network resources (and verify them)
 2. create VM with floating IP.
 
 I see 2 options to manage these resources:
 1. Completely isolated - resources are created and cleaned using setUp() and 
 tearDown() methods [1]. Moving cleanup to tearDown revealed this bug [2]. 
 Apparently the previous way (with tearDownClass) wasn't as fast. This has 
 the side effect of having expensive resources (i.e. VMs and floating IPs) 
 created and discarded multiple times though they are unchanged.
 
 2. Shared Resources - Using the idea of (or actually using) Fixtures - use 
 the same resources unless a test case fails, in which case resources are 
 deleted and created by the next test case [3].

If you're doing this and splitting things into smaller tests then it has to be
option 1. Scenarios have to be isolated if there are resources shared between
scenario tests that really is only one scenario and shouldn't be split. In fact
I've been working on a change that fixes the scenario test tearDowns that has 
the
side effect of enforcing this policy.

Also just for the record, we've tried doing option 2 in the past; for example
there used to be a tenant-reuse config option. The problem with doing that was
that it actually tends to cause more non-deterministic failures or to add a not
insignificant wait time to ensure the state is clean when you start the next
test, which is why we ended up pulling this out of tree. What ends up happening
is that you get leftover state from previous tests and the second test ends up
failing because things aren't in the clean state that the test case assumes. If
you look at some of the oneserver files in the API tests, that is the only place
we still do this in Tempest, and we've had many issues making that work
reliably. Those tests are in a relatively good place now, but those are much
simpler tests. Also, between each test, setUp has to check and ensure that the
shared server is in the proper state. If it's not, then the shared server has to
be rebuilt. This methodology would become far more involved for the scenario
tests, where you have to manage more than one shared resource.

 
 I would like to hear your opinions, and know if anyone has any thoughts or 
 ideas on which direction is best, and why.
 
 Once this is completed, we can move on to other scenarios as well
 
 Regards
 Yair
 
 [1] fully isolated - https://review.openstack.org/#/c/66879/
 [2] https://bugs.launchpad.net/nova

[openstack-dev] [qa][Neutron][Tempest][Network] Break down NetworkBasicOps to smaller test cases

2014-01-15 Thread Yair Fried
Hi Guys
As Maru pointed out - NetworkBasicOps scenario has grown out of proportion and 
is no longer basic ops.

So, I started breaking it down to smaller test cases that can fail 
independently.

Those test cases share the same setup and tear-down code:
1. create network resources (and verify them)
2. create VM with floating IP.

I see 2 options to manage these resources:
1. Completely isolated - resources are created and cleaned using setUp() and 
tearDown() methods [1]. Moving cleanup to tearDown revealed this bug [2]. 
Apparently the previous way (with tearDownClass) wasn't as fast. This has the 
side effect of having expensive resources (i.e. VMs and floating IPs) created and 
discarded multiple times though they are unchanged.

2. Shared Resources - Using the idea of (or actually using) Fixtures - use the 
same resources unless a test case fails, in which case resources are deleted 
and created by the next test case [3].
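
A hedged sketch of option 2's core mechanic: hand out the same (fake) resource set until a test marks it dirty, then rebuild on the next request. The class and field names are invented for illustration; this is not the code under review in [3].

```python
class SharedResources:
    """Reuse expensive resources across tests; rebuild only after a failure."""
    def __init__(self):
        self._res = None
        self._dirty = True
        self.builds = 0

    def get(self):
        if self._dirty:
            self.builds += 1                 # stands in for VM/floating-IP creation
            self._res = {"vm": f"vm-{self.builds}", "fip": "10.0.0.5"}
            self._dirty = False
        return self._res

    def mark_failed(self):
        self._dirty = True                   # next get() recreates everything

pool = SharedResources()
a = pool.get(); b = pool.get()
print(a is b, pool.builds)      # True 1  -- reused while clean
pool.mark_failed()
c = pool.get()
print(c is a, pool.builds)      # False 2 -- rebuilt after a failure
```

The trade-off Matthew raises later in the thread shows up here too: the `mark_failed` hook is only reliable if every test correctly detects dirtiness, otherwise leftover state leaks into the next test.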

I would like to hear your opinions, and know if anyone has any thoughts or 
ideas on which direction is best, and why.

Once this is completed, we can move on to other scenarios as well

Regards
Yair

[1] fully isolated - https://review.openstack.org/#/c/66879/
[2] https://bugs.launchpad.net/nova/+bug/1269407/+choose-affected-product
[3] shared resources - https://review.openstack.org/#/c/64622/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][FWaaS][Neutron]Firewall service disabled on gate

2013-12-29 Thread Yair Fried
Hi,
I'm trying to push a firewall API test [1] and I see it cannot run on the 
current gate.
I was told FWaaS is disabled since it broke the gate.
Does anyone know if this is still an issue?
If so - how do we overcome this?
I would like to do some work on this service (scenarios) and don't want to 
waste time if this is something that cannot be done right now.

Thank you
Yair

[1] https://review.openstack.org/64362

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Neutron] About paramiko's SSHException: Error reading SSH protocol banner

2013-12-26 Thread Yair Fried
This might be completely off, but with isolated creds a private network is 
created for each tenant, while the test already creates its own private tenant 
network, thereby changing the behavior from how it was intended, and from how 
it is in simple mode. Could this be related?
I have this patch addressing this - https://review.openstack.org/#/c/63886/
You could try and see if it makes any difference

Yair 

- Original Message -
From: Salvatore Orlando sorla...@nicira.com
To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
Sent: Friday, December 27, 2013 2:53:59 AM
Subject: [openstack-dev] [QA][Neutron] About paramiko's SSHException: Error 
reading SSH protocol banner



I put together all the patches which we prepared for making parallel testing 
work, and ran 'check experimental' a few times on the gate to see whether it 
worked or not. 


With parallel testing, the only really troubling issue is the scenario tests 
which require accessing a VM from a floating IP, and the new patches we've 
squashed together in [1] should address this issue. However, the result is that 
timeouts are still observed, but with a different message [2]. 
I'm not really familiar with it, and I've never observed it in local testing. I 
wonder if it just happens to be the usual problem about the target host not 
being reachable, or if it is something specific to paramiko. 


Any hint would be appreciated, since from the logs it appears everything is 
wired properly. 


Salvatore 


[1] https://review.openstack.org/#/c/57420/ 
[2] 
http://logs.openstack.org/20/57420/40/experimental/check-tempest-dsvm-neutron-isolated-parallel/a74bdc8/console.html#_2013-12-26_22_51_31_817
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra][qa][Tempest][Network] [Infra]Test for external connectivity

2013-12-25 Thread Yair Fried
Hi, 
I wanted to retrieve the gate's DNS server (or Google's 8.8.8.8) in order to 
check external connectivity, but I've been told that I'm running the risk of 
getting the Jenkins gate blacklisted by either DNS server. 
It was also brought to my attention that testing actual external connectivity 
is not really helpful, because my current test actually tests the external 
server's resolution ability and not the actual OpenStack instance's. 

So here's my question: 
Infra - suppose that I can get devstack to configure its own DNS server into 
tempest.conf (from resolv.conf), could I get the gate blacklisted for abusing 
the DNS (either local or Google's)? 
Neutron/Tempest/Maru - What is there to gain from pinging an external 
address/URL? Could pinging the public network's default GW be enough? Please 
take into account the fact that we now have the CrossTenant test checking L3 
routing. 

Yair 

- Original Message -

From: Yair Fried yfr...@redhat.com 
To: openstack-in...@lists.openstack.org 
Sent: Monday, December 23, 2013 5:57:44 PM 
Subject: [OpenStack-Infra] Fwd: [openstack-dev] 
[Openstack][qa][Tempest][Network] [Infra]Test for external connectivity 



- Forwarded Message - 
From: Yair Fried yfr...@redhat.com 
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org 
Sent: Sunday, December 22, 2013 10:00:52 PM 
Subject: Re: [openstack-dev] [Openstack][qa][Tempest][Network] [Infra]Test for 
external connectivity 

Hi Guys, 
So, I'm done with my patch: 
https://review.openstack.org/#/c/55146 
1. ping an external IP 
2. ping a URL to test the DNS 
3. check the VM's resolv.conf 
4. change the DNS server and recheck resolv.conf 
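
The four steps of the patch can be sketched as a small driver function; `ping`, `read_resolv_conf`, and `set_subnet_dns` are hypothetical hooks injected as callables so the sketch runs without a cloud - they are not real Tempest helpers.

```python
def check_dns_scenario(ping, read_resolv_conf, set_subnet_dns, new_dns="8.8.4.4"):
    """1) ping an IP, 2) ping a URL (exercises DNS), 3) read resolv.conf,
       4) change the subnet DNS and check that resolv.conf reflects it."""
    assert ping("8.8.8.8"), "no external connectivity"
    assert ping("example.org"), "DNS resolution failed"
    before = read_resolv_conf()
    set_subnet_dns(new_dns)
    after = read_resolv_conf()
    assert new_dns in after and after != before, "DNS update not visible in guest"
    return "PASS"

# Toy environment standing in for the guest VM.
state = {"resolv": ["192.0.2.1"]}
result = check_dns_scenario(
    ping=lambda target: True,
    read_resolv_conf=lambda: list(state["resolv"]),
    set_subnet_dns=lambda d: state["resolv"].append(d),
)
print(result)  # PASS
```

Keeping the environment behind injected callables is also what would let such a scenario run against the gate without depending on any particular external address.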

My issue now (same as before) is that this test cannot be executed (and 
therefore pass) on the gate unless the gate is configured for external 
connectivity. 

A. How do I get the neutron gate to allow its VMs external connectivity (i.e. 
ping 8.8.8.8)? 

Considering Jeremy's comment, I agree that depending on an external IP/URL 
introduces an unnecessary point of failure. However, IMO, using multiple 
addresses is not the best way to proceed; I'd rather test against a local 
node. Therefore: 

B. Can I get the neutron gate to enter its own (local) DNS server? A local 
URL? 
C. Is this a change I can push to some project by myself, or do I need someone 
to change this for me (infra?)? 

I would really like your input on this matter, as I am in the final stretch of 
this patch and cannot move any further by myself. 

Regards 
Yair Fried 

- Original Message -

From: Jeremy Stanley fu...@yuggoth.org 
To: openstack-dev@lists.openstack.org 
Sent: Thursday, November 21, 2013 12:17:52 AM 
Subject: Re: [openstack-dev] [Openstack][qa][Tempest][Network] Test for 
external connectivity 

On 2013-11-20 14:07:49 -0800 (-0800), Sean Dague wrote: 
 On 11/18/2013 02:41 AM, Yair Fried wrote: 
 [...] 
  2. add fields in tempest.conf for 
  * external connectivity = False/True 
  * external ip to test against (ie 8.8.8.8) 
 
 +1 for #2. In the gate we'll need to think about what that address 
 can / should be. It may be different between different AZs. At this 
 point I'd leave the rest of the options off the table until #2 is 
 working reliably. 
[...] 

Having gone down this path in the past, I suggest the test check for 
no fewer than three addresses, sending several probes to each, and 
be considered successful if at least one gets a response. 
-- 
Jeremy Stanley 
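Jeremy's suggested strategy can be sketched in a few lines of Python. This is an illustration of the idea only, not gate code: probe several addresses a few times each and treat the whole check as successful if any single probe gets a response, so that one unreachable or rate-limiting endpoint cannot fail the test on its own.

```python
# Minimal sketch of the multi-address probing strategy described above.
# `probe` is any callable that sends one probe (e.g. one ping) to an
# address and returns True on a response.

def any_endpoint_responds(addresses, probe, probes_each=3):
    if len(addresses) < 3:
        raise ValueError("use no fewer than three addresses")
    return any(probe(addr)
               for addr in addresses
               for _ in range(probes_each))
```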

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 


___ 
OpenStack-Infra mailing list 
openstack-in...@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra 



Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the FloatingIPChecker control point

2013-12-23 Thread Yair Fried


- Original Message -
 From: Brent Eagles beag...@redhat.com
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org
 Sent: Monday, December 23, 2013 10:48:50 PM
 Subject: Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the 
 FloatingIPChecker control point
 
 Salvatore Orlando wrote:
  Before starting this post I confess I did not read with the
  required level
  of attention all this thread, so I apologise for any repetition.
 
  I just wanted to point out that floating IPs in neutron are created
  asynchronously when using the l3 agent, and I think this is clear
  to
  everybody.
  So when the create floating IP call returns, this does not mean the
  floating IP has actually been wired, ie: IP configured on
  qg-interface and
  SNAT/DNAT rules added.
 
  Unfortunately, neutron lacks a concept of operational status for a
  floating
  IP which would tell clients, including nova (it acts as a client
  wrt nova
  api), when a floating IP is ready to be used. I started work in
  this
  direction, but it has been suspended now for a week. If anybody
  wants to
  take over and deems this a reasonable thing to do, it will be
  great.
 
 Unless somebody picks it up before I get from the break, I'd like to
 discuss this further with you.
 
  I think neutron tests checking connectivity might return more
  meaningful
  failure data if they would gather the status of the various
  components
  which might impact connectivity.
  These are:
  - The floating IP
  - The router internal interface
  - The VIF port
  - The DHCP agent
I wrote something addressing at least some of these points: 
https://review.openstack.org/#/c/55146/
 
 I agree wholeheartedly. In fact, I think if we are going to rely on
 timeouts for pass/fail we need to do more for post-mortem details.
 
  Collecting info about the latter is very important but a bit
  trickier. I
  discussed with Sean and Maru that it would be great for a starter,
  grep the
  console log to check whether the instance obtained an IP.
  Other things to consider would be:
  - adding an operational status to a subnet, which would express
  whether the
  DHCP agent is in sync with that subnet (this information won't make
  sense
  for subnets with dhcp disabled)
  - working on a 'debug' administrative API which could return, for
  instance,
  for each DHCP agent the list of configured networks and leases.
 
 Interesting!
 
  Regarding timeouts, I think it's fair for tempest to define a
  timeout and
  ask that everything from VM boot to Floating IP wiring completes
  within
  that timeout.
 
  Regards,
  Salvatore
 
 I would agree. It would be impossible to have reasonable automated
 testing otherwise.
 
 Cheers,
 
 Brent
 
 



[openstack-dev] [Tempest][qa] Adding tags to commit messages

2013-12-23 Thread Yair Fried
Hi,
Suggestion: Please consider tagging your Tempest commit messages the same way 
you do your mails on this mailing list.

Explanation: Since Tempest is a single project testing multiple OpenStack 
projects, we have a very diverse collection of patches as well as reviewers. 
Tagging our commit messages will allow us to classify patches and thus:
1. Allow reviewers to focus on patches related to their area of expertise
2. Track trends in patches - I think we all know that we lack Neutron 
testing, for example, but can we assess how many network-related patches are 
awaiting review?
3. Enable future automation for flagging interesting patches

You can usually tell all of this from reviewing the patch, but by then you've 
spent time on a patch you might not even be qualified to review.
I suggest we tag our patches with, to start with, the components we are looking 
to test and the type of test (scenario, api, ...), and that reviewers should -1 
untagged patches.

I think the tagging should be the 2nd line in the message:

==
Example commit message

[Neutron][Nova][Network][Scenario]

Explanation of how this scenario tests both Neutron and Nova
Network performance

Change-Id: XXX
===

I would like this to start immediately, but what do you guys think?



Re: [openstack-dev] [Openstack][qa][Tempest][Network] [Infra]Test for external connectivity

2013-12-22 Thread Yair Fried
Hi Guys,
So, I'm done with my patch:
https://review.openstack.org/#/c/55146
1. ping an external IP
2. ping a URL to test DNS
3. check the VM's resolv.conf
4. change the DNS server and recheck resolv.conf

My issue now (same as before) is that this test cannot be executed (and 
therefore cannot pass) on the gate unless the gate is configured for external 
connectivity.

A. How do I get the neutron gate to allow its VMs external connectivity (i.e. 
ping 8.8.8.8)?

Considering Jeremy's comment, I agree that depending on an external IP/URL 
introduces an unnecessary point of failure. However, IMO using multiple 
addresses is not the best way to proceed; I'd rather test against a local 
node. Therefore:

B. Can I get the neutron gate to enter its own (local) DNS server? A local URL?
C. Is this a change I can push to some project by myself, or do I need someone 
to change this for me (infra?)?

I would really like your input on this matter, as I am in the final stretch of 
this patch and cannot move any further by myself.

Regards
Yair Fried

- Original Message -
From: Jeremy Stanley fu...@yuggoth.org
To: openstack-dev@lists.openstack.org
Sent: Thursday, November 21, 2013 12:17:52 AM
Subject: Re: [openstack-dev] [Openstack][qa][Tempest][Network] Test for 
external connectivity

On 2013-11-20 14:07:49 -0800 (-0800), Sean Dague wrote:
 On 11/18/2013 02:41 AM, Yair Fried wrote:
 [...]
  2. add fields in tempest.conf for
   * external connectivity = False/True
   * external ip to test against (ie 8.8.8.8)
 
 +1 for #2. In the gate we'll need to think about what that address
 can / should be. It may be different between different AZs. At this
 point I'd leave the rest of the options off the table until #2 is
 working reliably.
[...]

Having gone down this path in the past, I suggest the test check for
no fewer than three addresses, sending several probes to each, and
be considered successful if at least one gets a response.
-- 
Jeremy Stanley




Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the FloatingIPChecker control point

2013-12-19 Thread Yair Fried
Hi Guys, 
I ran into this issue trying to incorporate this test into the 
cross_tenant_connectivity scenario: 
launching 2 VMs in different tenants. 
What I saw is that in the gate it fails half the time (the original test 
passes without issues) and ONLY on the 2nd VM (the first FLIP propagates fine). 
https://bugs.launchpad.net/nova/+bug/1262529 

I don't see this in: 
1. my local RHOS-Havana setup 
2. the cross_tenant_connectivity scenario without the control point (test 
passes without issues) 
3. test_network_basic_ops runs in the gate 

So here's my somewhat less experienced opinion: 
1. This happens due to stress (more than a single FLIP/VM) 
2. (as Brent said) Timeout intervals between polls are too short 
3. The FLIP is usually reachable long before it is seen in the nova DB (also 
from manual experience), so blocking the test until it reaches the nova DB 
doesn't make sense to me. If we could do this in a different thread, then 
maybe, but using a pass/fail criterion to test for a timing issue seems wrong. 
Especially since, as I understand it, the issue is not IF it reaches the nova 
DB, only WHEN. 

I would like to, at least, move this check from its place as a blocker to 
later in the test. Before that is done, I would like to know if anyone else 
has seen the same problems Brent describes prior to this patch being merged. 

Regarding Jay's scenario suggestion, I think this should not be a part of 
network_basic_ops, but rather a separate stress scenario creating multiple VMs 
and testing for FLIP associations and propagation time. 

Regards Yair 
(Also added my comments inline) 

- Original Message -

From: Jay Pipes jaypi...@gmail.com 
To: openstack-dev@lists.openstack.org 
Sent: Thursday, December 19, 2013 5:54:29 AM 
Subject: Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the 
FloatingIPChecker control point 

On 12/18/2013 10:21 PM, Brent Eagles wrote: 
 Hi, 
 
 Yair and I were discussing a change that I initiated and was 
 incorporated into the test_network_basic_ops test. It was intended as a 
 configuration control point for floating IP address assignments before 
 actually testing connectivity. The question we were discussing was 
 whether this check was a valid pass/fail criteria for tests like 
 test_network_basic_ops. 
 
 The initial motivation for the change was that test_network_basic_ops 
 had a less than 50/50 chance of passing in my local environment for 
 whatever reason. After looking at the test, it seemed ridiculous that it 
 should be failing. The problem is that more often than not the data that 
 was available in the logs all pointed to it being set up correctly but 
 the ping test for connectivity was timing out. From the logs it wasn't 
 clear that the test was failing because neutron did not do the right 
 thing, did not do it fast enough, or is something else happening? Of 
 course if I paused the test for a short bit between setup and the checks 
 to manually verify everything the checks always passed. So it's a timing 
 issue right? 
 

Did anyone else experience this issue? Locally or on the gate? 

 Two things: adding more timeout to a check is as appealing to me as 
 gargling glass AND I was less annoyed that the test was failing as I 
 was that it wasn't clear from reading logs what had gone wrong. I tried 
 to find an additional intermediate control point that would split 
 failure modes into two categories: neutron is too slow in setting things 
 up and neutron failed to set things up correctly. Granted it still is 
 adding timeout to the test, but if I could find a control point based on 
 settling so that if it passed, then there is a good chance that if the 
 next check failed it was because neutron actually screwed up what it was 
 trying to do. 
 
 Waiting until the query on the nova for the floating IP information 
 seemed a relatively reasonable, if imperfect, settling criteria before 
 attempting to connect to the VM. Testing to see if the floating IP 
 assignment gets to the nova instance details is a valid test and, 
 AFAICT, missing from the current tests. However, Yair has the reasonable 
 point that connectivity is often available long before the floating IP 
 appears in the nova results and that it could be considered invalid to 
 use non-network specific criteria as pass/fail for this test. 

But, Tempest is all about functional integration testing. Using a call 
to Nova's server details to determine whether a dependent call to 
Neutron succeeded (setting up the floating IP) is exactly what I think 
Tempest is all about. It's validating that the integration between Nova 
and Neutron is working as expected. 

So, I actually think the assertion on the floating IP address appearing 
(after some timeout/timeout-backoff) is entirely appropriate. 
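As a hypothetical illustration of that timeout-with-backoff assertion (not actual Tempest code), the poll might look like this: `get_addresses` stands in for a call that fetches the server's current addresses from the nova API, and `clock`/`sleep` are injectable so the logic can be exercised without real waiting.

```python
# Sketch of polling nova's server details until the floating IP appears,
# backing off between attempts. Returns False on timeout so the caller
# can raise with a meaningful failure message.
import time

def wait_for_floating_ip(get_addresses, floating_ip, timeout=60,
                         initial_delay=1, backoff=2,
                         clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + timeout
    delay = initial_delay
    while clock() < deadline:
        if floating_ip in get_addresses():
            return True
        # never sleep past the deadline
        sleep(min(delay, max(0.0, deadline - clock())))
        delay *= backoff
    # one last look before declaring failure
    return floating_ip in get_addresses()
```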

Blocking the connectivity check until the DB is updated doesn't make sense to 
me, since we know the FLIP is reachable well before the nova DB is updated 
(this is seen also in manual mode, not just by 
Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the FloatingIPChecker control point

2013-12-19 Thread Yair Fried
I would also like to point out that, since Brent used compute.build_timeout as 
the timeout value:
***It takes more time for the FLIP to appear in the nova DB than for a VM to build***

Yair

- Original Message -
From: Sean Dague s...@dague.net
To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Sent: Thursday, December 19, 2013 12:42:56 PM
Subject: Re: [openstack-dev] [neutron][qa] test_network_basic_ops and the 
FloatingIPChecker control point

On 12/19/2013 03:31 AM, Yair Fried wrote:
 Hi Guys,
 I run into this issue trying to incorporate this test into
 cross_tenant_connectivity scenario:
 launching 2 VMs in different tenants
 What I saw, is that in the gate it fails half the time (the original
 test passes without issues) and ONLY on the 2nd VM (the first FLIP
 propagates fine).
 https://bugs.launchpad.net/nova/+bug/1262529
 
 I don't see this in:
 1. my local RHOS-Havana setup
 2. the cross_tenant_connectivity scenario without the control point
 (test passes without issues)
 3. test_network_basic_ops runs in the gate
 
 So here's my somewhat less experienced opinion:
 1. this happens due to stress (more than a single FLIP/VM)
 2. (as Brent said) Timeout interval between polling are too short
 3. FLIP is usually reachable long before it is seen in the nova DB (also
 from manual experience), so blocking the test until it reaches the nova
 DB doesn't make sense for me. if we could do this in different thread,
 then maybe, but using a Pass/Fail criteria to test for a timing issue
 seems wrong. Especially since as I understand it, the issue is on IF it
 reaches nova DB, only WHEN.
 
 I would like to, at least, move this check from its place as a blocker
 to later in the test. Before this is done, I would like to know if
 anyone else has seen the same problems Brent describes prior to this
 patch being merged.

 Regarding Jay's scenario suggestion, I think this should not be a part
 of network_basic_ops, but rather a separate stress scenario creating
 multiple VMs and testing for FLIP associations and propagation time.

+1 there is no need to overload that one scenario. A dedicated one would
be fine.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net




Re: [openstack-dev] [Openstack][qa][Tempest][Network] Test for external connectivity

2013-11-28 Thread Yair Fried
Thanks for the input. I apologize for the delay.

1. A working patch is here - https://review.openstack.org/#/c/55146. Reviews 
will be much appreciated.
2. The default setting is external_connectivity=False, so the tempest gate 
doesn't check this. I wonder if we could somehow set the neutron gate to 
configure external access and enable this feature for all tests?
3. Tomoe: How can we test the internal network connectivity? -- I'm pinging 
the router and DHCP ports from the VM via SSH and a floating IP, since the l2 
and l3 agents might reside on different hosts.
4. Jeremy Stanley: test check for no fewer than three addresses -- Why?

- Original Message -
From: Jeremy Stanley fu...@yuggoth.org
To: openstack-dev@lists.openstack.org
Sent: Thursday, November 21, 2013 12:17:52 AM
Subject: Re: [openstack-dev] [Openstack][qa][Tempest][Network] Test for 
external connectivity

On 2013-11-20 14:07:49 -0800 (-0800), Sean Dague wrote:
 On 11/18/2013 02:41 AM, Yair Fried wrote:
 [...]
  2. add fields in tempest.conf for
   * external connectivity = False/True
   * external ip to test against (ie 8.8.8.8)
 
 +1 for #2. In the gate we'll need to think about what that address
 can / should be. It may be different between different AZs. At this
 point I'd leave the rest of the options off the table until #2 is
 working reliably.
[...]

Having gone down this path in the past, I suggest the test check for
no fewer than three addresses, sending several probes to each, and
be considered successful if at least one gets a response.
-- 
Jeremy Stanley



[openstack-dev] [Openstack][qa][Tempest][Network] Test for external connectivity

2013-11-18 Thread Yair Fried
I'm editing tempest/scenario/test_network_basic_ops.py for external 
connectivity, as per the TODO listed in its docstring.


The test cases ping against an external IP and URL to test connectivity and 
DNS respectively.
Since the default deployment (devstack gate) doesn't have external 
connectivity, I was thinking of one or all of the following:


1. test against the public network gateway
2. add fields in tempest.conf for
 * external connectivity = False/True
 * external IP to test against (e.g. 8.8.8.8)
3. Regarding DNS:
1. assume that external connectivity means DNS is also configured
2. actively configure DNS on subnet creation (based on tempest.conf)
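As a rough sketch, option #2 might look like this in tempest.conf. Note these option names are only the proposal above, illustrative and not existing Tempest options:

```ini
[network]
# Proposed (hypothetical) options -- names are illustrative only
external_connectivity = True
# external IP the test pings when external_connectivity is enabled
external_ip = 8.8.8.8
```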

Comments/suggestions will be much appreciated.

Regards
Yair Fried
