Re: [openstack-dev] [rally] KeyError: 'admin'

2014-09-18 Thread Daisuke Morita


Boris,

I am sorry for my late reply.
I retried the verify command after updating my Rally, and everything
worked well. Thanks for your quick bug fix.


Best regards,
Daisuke Morita

On 2014/09/09 6:48, Boris Pavlovic wrote:

Daisuke,


This patch, https://review.openstack.org/#/c/116766/, introduced this bug:
https://bugs.launchpad.net/rally/+bug/1366824
It was fixed by this commit: https://review.openstack.org/#/c/119790/

So update your Rally and try one more time. Everything should work now.


Best regards,
Boris Pavlovic

On Mon, Sep 8, 2014 at 4:05 PM, Daisuke Morita
morita.dais...@lab.ntt.co.jp wrote:


Thanks, Boris.

I tried 'rally-manage db recreate' before registering a deployment,
but nothing changed when running Tempest...

It is late in Japan, so I will try it tomorrow.


Best regards,
Daisuke

On 2014/09/08 20:38, Boris Pavlovic wrote:

Daisuke,

There are also changes in our DB models.

So running:

$ rally-manage db recreate

will also be required.


Best regards,
Boris Pavlovic



On Mon, Sep 8, 2014 at 3:24 PM, Mikhail Dubov
mdu...@mirantis.com wrote:

 Hi Daisuke,

 it seems your issue is connected to the change in the deployment
 configuration file format for existing clouds that we merged recently:
 https://review.openstack.org/#/c/116766/

 Please see the updated Wiki HowTo page that describes the new format:
 https://wiki.openstack.org/wiki/Rally/HowTo#Step_1._Deployment_initialization_.28use_existing_cloud.29
 In your case, you can just update the deployment configuration file
 and run 'rally deployment create' again. Everything should work then.



 Best regards,
 Mikhail Dubov

 Mirantis, Inc.
 E-Mail: mdu...@mirantis.com
 Skype: msdubov

 On Mon, Sep 8, 2014 at 3:16 PM, Daisuke Morita
 morita.dais...@lab.ntt.co.jp wrote:


 Hi, rally developers!

 Now, I am trying to use Rally against a devstack cluster on an AWS VM
 (all-in-one). I'm following a blog post:
 https://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/
 I successfully installed Devstack, Rally and Tempest. Then I ran
 Tempest with the 'rally verify start' command, but it failed with the
 following stacktrace.


 2014-09-08 10:57:57.803 17176 CRITICAL rally [-] KeyError: 'admin'
 2014-09-08 10:57:57.803 17176 TRACE rally Traceback (most recent call last):
 2014-09-08 10:57:57.803 17176 TRACE rally   File /usr/local/bin/rally, line 10, in module
 2014-09-08 10:57:57.803 17176 TRACE rally     sys.exit(main())
 2014-09-08 10:57:57.803 17176 TRACE rally   File /usr/local/lib/python2.7/dist-packages/rally/cmd/main.py, line 40, in main
 2014-09-08 10:57:57.803 17176 TRACE rally     return cliutils.run(sys.argv, categories)
 2014-09-08 10:57:57.803 17176 TRACE rally   File /usr/local/lib/python2.7/dist-packages/rally/cmd/cliutils.py, line 184, in run
 2014-09-08 10:57:57.803 17176 TRACE rally     ret = fn(*fn_args, **fn_kwargs)
 2014-09-08 10:57:57.803 17176 TRACE rally   File string, line 2, in start
 2014-09-08 10:57:57.803 17176 TRACE rally   File /usr/local/lib/python2.7/dist-packages/rally/cmd/envutils.py, line 64, in default_from_global
 2014-09-08 10:57:57.803 17176 TRACE rally     return f(*args, **kwargs)
 2014-09-08 10:57:57.803 17176 TRACE rally   File /usr/local/lib/python2.7/dist-packages/rally/cmd/commands/verify.py, line 59, in start
 2014-09-08 10:57:57.803 17176 TRACE rally

[openstack-dev] [rally] KeyError: 'admin'

2014-09-08 Thread Daisuke Morita

Hi, rally developers!

Now, I am trying to use Rally against a devstack cluster on an AWS VM
(all-in-one). I'm following a blog post:
https://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/
I successfully installed Devstack, Rally and Tempest. Then I ran
Tempest with the 'rally verify start' command, but it failed with the
following stacktrace.


2014-09-08 10:57:57.803 17176 CRITICAL rally [-] KeyError: 'admin'
2014-09-08 10:57:57.803 17176 TRACE rally Traceback (most recent call last):
2014-09-08 10:57:57.803 17176 TRACE rally   File /usr/local/bin/rally,
line 10, in module
2014-09-08 10:57:57.803 17176 TRACE rally sys.exit(main())
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/main.py, line 40, in main
2014-09-08 10:57:57.803 17176 TRACE rally return
cliutils.run(sys.argv, categories)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/cliutils.py, line
184, in run
2014-09-08 10:57:57.803 17176 TRACE rally ret = fn(*fn_args,
**fn_kwargs)
2014-09-08 10:57:57.803 17176 TRACE rally   File string, line 2, in
start
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/envutils.py, line 64,
in default_from_global
2014-09-08 10:57:57.803 17176 TRACE rally return f(*args, **kwargs)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/commands/verify.py,
line 59, in start
2014-09-08 10:57:57.803 17176 TRACE rally api.verify(deploy_id,
set_name, regex)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/orchestrator/api.py, line
153, in verify
2014-09-08 10:57:57.803 17176 TRACE rally
verifier.verify(set_name=set_name, regex=regex)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
line 247, in verify
2014-09-08 10:57:57.803 17176 TRACE rally
self._prepare_and_run(set_name, regex)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/utils.py, line 165, in
wrapper
2014-09-08 10:57:57.803 17176 TRACE rally result = f(self, *args,
**kwargs)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
line 146, in _prepare_and_run
2014-09-08 10:57:57.803 17176 TRACE rally self.generate_config_file()
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
line 89, in generate_config_file
2014-09-08 10:57:57.803 17176 TRACE rally
config.TempestConf(self.deploy_id).generate(self.config_file)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/config.py,
line 242, in generate
2014-09-08 10:57:57.803 17176 TRACE rally func()
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/config.py,
line 115, in _set_boto
2014-09-08 10:57:57.803 17176 TRACE rally
self.conf.set(section_name, 'ec2_url', self._get_url('ec2'))
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/config.py,
line 105, in _get_url
2014-09-08 10:57:57.803 17176 TRACE rally return
service['admin']['publicURL']
2014-09-08 10:57:57.803 17176 TRACE rally KeyError: 'admin'
2014-09-08 10:57:57.803 17176 TRACE rally


I tried to dig into the root cause of the above error, but I had no
idea where to look. The most suspect part may be the automatically
generated configuration file, but I did not find anything odd there.
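For what it's worth, the last frame of the traceback returns
service['admin']['publicURL'] from the keystone service catalog, so any
catalog entry without an 'admin' endpoint section triggers exactly this
error. A minimal sketch of the failure mode (the catalog dict below is
invented for illustration, not taken from a real deployment):

```python
# Sketch of the failing lookup in Rally's _get_url; the catalog entry
# below is invented for illustration.
def get_public_url(service):
    # Raises KeyError: 'admin' when the catalog entry has no
    # 'admin' endpoint section.
    return service['admin']['publicURL']

# An EC2 catalog entry that lacks the 'admin' section:
ec2_service = {'public': {'publicURL': 'http://192.0.2.10:8773/services/Cloud'}}

try:
    get_public_url(ec2_service)
except KeyError as exc:
    assert str(exc) == "'admin'"

# A defensive variant returns None instead of raising:
def get_public_url_safe(service):
    return (service.get('admin') or {}).get('publicURL')

assert get_public_url_safe(ec2_service) is None
```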

If possible, could you give me some hints on what to do?

Sorry for bothering you. Thanks in advance.



Best Regards,
Daisuke

-- 
Daisuke Morita morita.dais...@lab.ntt.co.jp
NTT Software Innovation Center, NTT Corporation


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] KeyError: 'admin'

2014-09-08 Thread Daisuke Morita


Thanks, Boris.

I tried 'rally-manage db recreate' before registering a deployment, but
nothing changed when running Tempest...


It is late in Japan, so I will try it tomorrow.


Best regards,
Daisuke

On 2014/09/08 20:38, Boris Pavlovic wrote:

Daisuke,

There are also changes in our DB models.

So running:

   $ rally-manage db recreate

will also be required.


Best regards,
Boris Pavlovic



On Mon, Sep 8, 2014 at 3:24 PM, Mikhail Dubov
mdu...@mirantis.com wrote:

Hi Daisuke,

it seems your issue is connected to the change in the deployment
configuration file format for existing clouds that we merged recently:
https://review.openstack.org/#/c/116766/

Please see the updated Wiki HowTo page that describes the new format:
https://wiki.openstack.org/wiki/Rally/HowTo#Step_1._Deployment_initialization_.28use_existing_cloud.29
In your case, you can just update the deployment configuration file and
run 'rally deployment create' again. Everything should work then.
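As a reference point, the post-change "ExistingCloud" format nests the
credentials under an "admin" key. The values below are placeholders and
the field set is a sketch, not the authoritative format (the wiki page
above is authoritative):

```python
import json

# Sketch of an "ExistingCloud" deployment config in the new format.
# All values are placeholders; consult the Rally wiki for the
# authoritative field list.
deployment_config = {
    "type": "ExistingCloud",
    "auth_url": "http://192.0.2.5:5000/v2.0/",
    "admin": {
        "username": "admin",
        "password": "secret",
        "tenant_name": "admin",
    },
}

# Written to a file, this is what would be passed to
# 'rally deployment create --filename existing.json --name existing'.
print(json.dumps(deployment_config, indent=2))
```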



Best regards,
Mikhail Dubov

Mirantis, Inc.
E-Mail: mdu...@mirantis.com
Skype: msdubov

On Mon, Sep 8, 2014 at 3:16 PM, Daisuke Morita
morita.dais...@lab.ntt.co.jp wrote:


Hi, rally developers!

Now, I am trying to use Rally against a devstack cluster on an AWS VM
(all-in-one). I'm following a blog post:
https://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/
I successfully installed Devstack, Rally and Tempest. Then I ran
Tempest with the 'rally verify start' command, but it failed with the
following stacktrace.


2014-09-08 10:57:57.803 17176 CRITICAL rally [-] KeyError: 'admin'
2014-09-08 10:57:57.803 17176 TRACE rally Traceback (most recent
call last):
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/bin/rally,
line 10, in module
2014-09-08 10:57:57.803 17176 TRACE rally sys.exit(main())
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/main.py, line
40, in main
2014-09-08 10:57:57.803 17176 TRACE rally return
cliutils.run(sys.argv, categories)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/cliutils.py, line
184, in run
2014-09-08 10:57:57.803 17176 TRACE rally ret = fn(*fn_args,
**fn_kwargs)
2014-09-08 10:57:57.803 17176 TRACE rally   File string,
line 2, in
start
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/envutils.py,
line 64,
in default_from_global
2014-09-08 10:57:57.803 17176 TRACE rally return f(*args,
**kwargs)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/cmd/commands/verify.py,
line 59, in start
2014-09-08 10:57:57.803 17176 TRACE rally api.verify(deploy_id,
set_name, regex)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/orchestrator/api.py,
line
153, in verify
2014-09-08 10:57:57.803 17176 TRACE rally
verifier.verify(set_name=set_name, regex=regex)
2014-09-08 10:57:57.803 17176 TRACE rally   File

/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
line 247, in verify
2014-09-08 10:57:57.803 17176 TRACE rally
self._prepare_and_run(set_name, regex)
2014-09-08 10:57:57.803 17176 TRACE rally   File
/usr/local/lib/python2.7/dist-packages/rally/utils.py, line
165, in
wrapper
2014-09-08 10:57:57.803 17176 TRACE rally result = f(self,
*args,
**kwargs)
2014-09-08 10:57:57.803 17176 TRACE rally   File

/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
line 146, in _prepare_and_run
2014-09-08 10:57:57.803 17176 TRACE rally
  self.generate_config_file()
2014-09-08 10:57:57.803 17176 TRACE rally   File

/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py,
line 89, in generate_config_file
2014-09-08 10:57:57.803 17176 TRACE rally
config.TempestConf(self.deploy_id).generate(self.config_file)
2014-09-08 10:57:57.803 17176 TRACE rally   File

/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/config.py,
line 242, in generate
2014-09-08 10:57:57.803 17176 TRACE rally func()
2014-09-08 10:57:57.803 17176 TRACE rally   File

/usr/local/lib/python2.7/dist-packages

[openstack-dev] [Swift] Can gatekeeper middleware be removed from pipeline?

2014-08-19 Thread Daisuke Morita

Hi,

Can the gatekeeper middleware be removed from the pipeline?
This does not mean that I want to use Swift without gatekeeper, because
that could be a security risk; I just want to make it clear whether it
is configurable or not.


Thanks,

-- 
Daisuke Morita morita.dais...@lab.ntt.co.jp
NTT Software Innovation Center, NTT Corporation




Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-04-20 Thread Daisuke Morita


(2014/04/17 4:22), Jaume Devesa wrote:

I thought that OpenStack just supports one release backwards; if we have
to support three versions, this is not useful.


In fact, I could not make sense of this statement. OpenStack has two
security-supported series and one series under development.

https://wiki.openstack.org/wiki/Releases

Therefore, I think Sean's proposal is reasonable. Tempest should be able
to test the two supported releases for administrators and the one
release under development for developers.




There are already ways to enable/disable modules in tempest to adapt to
each deployment needs. Just wanted to avoid more configuration options.




On 16 April 2014 21:14, David Kranz dkr...@redhat.com
mailto:dkr...@redhat.com wrote:

On 04/16/2014 11:48 AM, Jaume Devesa wrote:

Hi Sean,

from what I understood, we will need a new feature flag for each
new feature, and a feature flag (defaulting to false) for each
deprecated one. My concern is: since the goal is to make Tempest a
reliable tool for testing any installation, and 'tempest.conf' will not
be auto-generated by any tool the way devstack does it, wouldn't it be
too hard to prepare a tempest.conf file with so many feature flags to
enable and disable?

If we go down this route, and I think we should, we probably need to
accept that it will be hard for users to manually configure
tempest.conf. Tempest configuration would have to be done by
whatever installation technology was used, as devstack does, or by
auto-discovery. That implies that the presence of new features
should be discoverable through the api which is a good idea anyway.
Of course someone could configure it manually, but IMO that is not
desirable even where we are now.



Maybe I am simplifying too much, but wouldn't be enough with a
pair of functions decorators like

@new
@deprecated

Then, in tempest.conf it could be a flag to say which OpenStack
installation are you testing:

installation = [icehouse|juno]

If you choose Juno, tests with the @new decorator will be executed and
tests with @deprecated will be skipped.
If you choose Icehouse, tests with @new will be skipped, and tests with
@deprecated will be executed.

Am I missing some obvious case that makes this approach nonsense?
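A minimal sketch of how such decorators could work, with an invented
INSTALLATION constant standing in for the proposed tempest.conf option
(Tempest's real config machinery is not shown):

```python
import unittest

# Hypothetical release under test, standing in for a tempest.conf
# option like "installation = [icehouse|juno]".
INSTALLATION = 'icehouse'

def new(test):
    # Skip the test unless we are testing the development release.
    return unittest.skipIf(INSTALLATION != 'juno', 'API new in juno')(test)

def deprecated(test):
    # Skip the test unless we are testing the older stable release.
    return unittest.skipIf(INSTALLATION != 'icehouse', 'removed in juno')(test)

class ExampleTest(unittest.TestCase):
    @new
    def test_new_api(self):
        pass

    @deprecated
    def test_old_api(self):
        pass
```

With INSTALLATION = 'icehouse', test_new_api is skipped and test_old_api
runs; flipping it to 'juno' inverts the selection.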

There are two problems with this. First, some folks are chasing
master for their deployments and some do not deploy all the features
that are set up by devstack. In both cases, it is not possible to
identify what can be tested with a simple name that corresponds to a
stable release. Second, what happens when we move on to K? The
meaning of 'new' would have to change while retaining its old
meaning as well, which won't work. I think Sean spelled out the
important scenarios.

  -David



Regards,
jaume


On 14 April 2014 15:21, Sean Dague s...@dague.net wrote:

As we're coming up on the stable/icehouse release the QA team
is looking
pretty positive at no longer branching Tempest. The QA Spec
draft for
this is here -

http://docs-draft.openstack.org/77/86577/2/check/gate-qa-specs-docs/3f84796/doc/build/html/specs/branchless-tempest.html
and hopefully address a lot of the questions we've seen so far.

Additional comments are welcome on the review -
https://review.openstack.org/#/c/86577/
or as responses on this ML thread.

-Sean

--
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net
















--
Daisuke Morita morita.dais...@lab.ntt.co.jp
NTT Software Innovation Center, NTT Corporation




[openstack-dev] [Tempest] Which is the best way for skipping tests?

2013-12-04 Thread Daisuke Morita
Hi, everyone.

Which do you think is the best way of coding test skipping: raising
cls.skipException in the setUpClass method, or the skipIf annotation on
each test method?

This question came to me while reviewing
https://review.openstack.org/#/c/59759/ . I think the work itself is
great, and I hope the patch is merged into Tempest. I just want to focus
on coding style and the explicitness of test outputs.

If skipIf annotation is used, test output of Swift is as follows.

---
tempest.api.object_storage.test_account_quotas.AccountQuotasTest
test_admin_modify_quota[gate,smoke]
SKIP  1.15
test_upload_large_object[gate,negative,smoke]
SKIP  0.03
test_upload_valid_object[gate,smoke]
SKIP  0.03
test_user_modify_quota[gate,negative,smoke]
SKIP  0.03
tempest.api.object_storage.test_account_services.AccountTest
test_create_and_delete_account_metadata[gate,smoke]   OK
 0.32
test_list_account_metadata[gate,smoke]OK
 0.02
test_list_containers[gate,smoke]  OK
 0.02

...(SKIP)...

Ran 54 tests in 85.977s

OK
---


On the other hand, if cls.skipException is used, an output is changed as
follows.

---
setUpClass (tempest.api.object_storage.test_account_quotas
AccountQuotasTest)
SKIP  0.00
tempest.api.object_storage.test_account_services.AccountTest
test_create_and_delete_account_metadata[gate,smoke]   OK
 0.48
test_list_account_metadata[gate,smoke]OK
 0.02
test_list_containers[gate,smoke]  OK
 0.02

...(SKIP)...

Ran 49 tests in 81.475s

OK
---


I believe the output of the code using the skipIf annotation is better.
Since the coverage of tests is displayed more explicitly, it is easier
to find out which tests are really skipped.

I scanned the whole Tempest code base. There are 63 cls.skipException
statements and 24 skipIf annotations. Replacing them is not a trivial
task, but I think the most important thing for testing is to output a
consistent and accurate log.
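For reference, the two styles look roughly like this in plain unittest
(Tempest's base classes add more machinery, so this only sketches the
reporting difference; the flag and class names are invented):

```python
import unittest

QUOTAS_ENABLED = False  # stand-in for a tempest.conf feature flag

# Style 1: per-method skipIf decorator -- each skipped test is still
# reported individually in the output.
class AccountQuotasSkipIfTest(unittest.TestCase):
    @unittest.skipIf(not QUOTAS_ENABLED, 'account quotas disabled')
    def test_admin_modify_quota(self):
        pass

# Style 2: raise SkipTest in setUpClass -- only a single
# "setUpClass (...)" skip entry appears, hiding the individual
# test names.
class AccountQuotasSetUpClassTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        if not QUOTAS_ENABLED:
            raise unittest.SkipTest('account quotas disabled')

    def test_admin_modify_quota(self):
        pass
```

(Tempest raises cls.skipException rather than unittest.SkipTest, but the
reporting behavior is the same in spirit.)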


Am I missing something? Or has this kind of discussion already been
done in the past? If so, could you let me know?


Best Regards,

-- 
Daisuke Morita morita.dais...@lab.ntt.co.jp
NTT Software Innovation Center, NTT Corporation




Re: [openstack-dev] [Tempest] Which is the best way for skipping tests?

2013-12-04 Thread Daisuke Morita


Thanks for your suggestion, Brant. I experimented with several
approaches like that in Tempest; annotating testtools.skipIf above the
setUp() method can deliver a similar result.


How about this annotation approach, Joe? It can meet both requirements
we discussed: avoiding code duplication and accurately logging what is
skipped. The drawback is that it seems less intuitive than annotating
above the setUpClass method, but you do not need to spend any time
implementing an additional annotation.
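A sketch of that approach with plain unittest's skipIf (testtools.skipIf
behaves the same way for this purpose): decorating setUp() makes every
test method hit SkipTest during setup, so each one is still reported as
an individual skip. The flag and class names are invented:

```python
import unittest

FEATURE_ENABLED = False  # stand-in for a configuration flag

class FeatureTest(unittest.TestCase):
    # Decorating setUp skips every test method in the class, and each
    # one still shows up as its own SKIP entry in the output.
    @unittest.skipIf(not FEATURE_ENABLED, 'feature disabled')
    def setUp(self):
        super().setUp()

    def test_one(self):
        pass

    def test_two(self):
        pass
```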



Best Regards,
Daisuke Morita

(2013/12/05 8:59), Brant Knudson wrote:


In Keystone, we've got some tests that raise self.skipTest('...') in
the test class setUp() method (not setUpClass). My testing shows that if
there are several tests in the class, it shows all of those tests as
skipped (not just one skip). Does this do what you want?

Here's an example:
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/test_ipv6.py?id=73dbc00e6ac049f19d0069ecb07ca8ed75627dd5#n30

http://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/core.py?id=73dbc00e6ac049f19d0069ecb07ca8ed75627dd5#n500

  - Brant

On Wed, Dec 4, 2013 at 5:46 AM, Daisuke Morita
morita.dais...@lab.ntt.co.jp wrote:

Hi, everyone.

Which do you think is the best way of coding test skipping: raising
cls.skipException in the setUpClass method, or the skipIf annotation on
each test method?

This question came to me while reviewing
https://review.openstack.org/#/c/59759/ . I think the work itself is
great, and I hope the patch is merged into Tempest. I just want to focus
on coding style and the explicitness of test outputs.

If skipIf annotation is used, test output of Swift is as follows.

---
tempest.api.object_storage.test_account_quotas.AccountQuotasTest
 test_admin_modify_quota[gate,smoke]
SKIP  1.15
 test_upload_large_object[gate,negative,smoke]
SKIP  0.03
 test_upload_valid_object[gate,smoke]
SKIP  0.03
 test_user_modify_quota[gate,negative,smoke]
SKIP  0.03
tempest.api.object_storage.test_account_services.AccountTest
 test_create_and_delete_account_metadata[gate,smoke]
   OK
  0.32
 test_list_account_metadata[gate,smoke]
OK
  0.02
 test_list_containers[gate,smoke]
OK
  0.02

...(SKIP)...

Ran 54 tests in 85.977s

OK
---


On the other hand, if cls.skipException is used, an output is changed as
follows.

---
setUpClass (tempest.api.object_storage.test_account_quotas
 AccountQuotasTest)
SKIP  0.00
tempest.api.object_storage.test_account_services.AccountTest
 test_create_and_delete_account_metadata[gate,smoke]
   OK
  0.48
 test_list_account_metadata[gate,smoke]
OK
  0.02
 test_list_containers[gate,smoke]
OK
  0.02

...(SKIP)...

Ran 49 tests in 81.475s

OK
---


I believe the output of the code using the skipIf annotation is better.
Since the coverage of tests is displayed more explicitly, it is easier
to find out which tests are really skipped.

I scanned the whole Tempest code base. There are 63 cls.skipException
statements and 24 skipIf annotations. Replacing them is not a trivial
task, but I think the most important thing for testing is to output a
consistent and accurate log.


Am I missing something? Or has this kind of discussion already been
done in the past? If so, could you let me know?


Best Regards,

--
Daisuke Morita morita.dais...@lab.ntt.co.jp
NTT Software Innovation Center, NTT Corporation









--
Daisuke Morita morita.dais...@lab.ntt.co.jp
NTT Software Innovation Center, NTT Corporation




Re: [openstack-dev] [qa] moratorium on new negative tests in Tempest

2013-11-22 Thread Daisuke Morita


Great work!
The test cases in Tempest are now well-stocked, so it's a good time to
rearrange the design of the test code.


I checked the mailing lists, IRC logs and etherpads relating to this
topic. Let me leave my five thoughts below.


How to handle:
1. Data types (e.g., int, bool)
2. Specific value or format support (e.g., regular expressions)
3. Boundary value analysis (David mentioned this issue below)
4. Invalid values in non-Unicode encodings (Ken'ichi mentioned this in
his mail on Nov 13)
5. Errors that require complicated pre- or post-processing to reproduce



I suggest that issues 1-4 be considered in the scope of the new
framework. From the above sources, I sense a slight bias towards
invalid-value testing.


On the other hand, I think some tests will remain outside this framework.
As for Swift, the maximum total size of the HTTP headers sent for
metadata is 4096 bytes, but the maximum size of a metadata key is 128
bytes and the maximum size of a metadata value is 256 bytes. It might be
difficult to test the boundary value of the total HTTP headers with the
new framework. In such cases, is it OK to write a test case like the
current implementation?


Anyway, I never want to derail this work. I am looking forward to a
prototype :)



Best Regards,
Daisuke Morita


Excerpts from David Kranz's message of 2013-11-12 14:33:04 -0500:



I am working on this with Marc Koderer but we only just started and are
not quite ready. But since you asked now...

The problem with the current implementation of negative tests is that
each case is represented as code in a method and targets a particular
set of api arguments and expected result.
tests there is boilerplate code surrounding the real content which is
the actual arguments being passed and the value expected. That
boilerplate code has to be written correctly and reviewed. The general
form of the solution has to be worked out but basically would involve
expressing these tests declaratively, perhaps in a yaml file. In order
to do this we will need some kind of json schema for each api. The main
implementation around this is defining the yaml attributes that make it
easy to express the test cases, and somehow coming up with the json
schema for each api.

In addition, we would like to support fuzz testing where arguments
are, at least partially, randomly generated and the return values are
only examined for 4xx vs something else. This would be possible if we
had json schemas. The main work is to write a generator and methods for
creating bad values including boundary conditions for types with ranges.
I had thought a bit about this last year and poked around for an
existing framework. I didn't find anything that seemed to make the job
much easier but if any one knows of such a thing (python, hopefully)
please let me know.

The negative tests for each api would be some combination of
declaratively specified cases and auto-generated ones.
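As a rough illustration of the auto-generated side, a toy generator over
a hand-written schema fragment might look like this (the schema layout,
attribute names and parameter names are invented for illustration, not
the spec's format):

```python
# Toy parameter schema for a hypothetical API call; the real spec would
# derive this per-API, e.g. from wsme/pecan or a project-provided file.
schema = {
    'flavor_id': {'type': 'int', 'min': 1, 'max': 10},
    'name': {'type': 'str', 'maxlength': 255},
}

def bad_values(spec):
    # Yield out-of-range / wrong-type values for one parameter,
    # including the boundary conditions for ranged types.
    if spec['type'] == 'int':
        yield spec['min'] - 1                 # just below lower boundary
        yield spec['max'] + 1                 # just above upper boundary
        yield 'not-an-int'                    # wrong type
    elif spec['type'] == 'str':
        yield 'x' * (spec['maxlength'] + 1)   # one byte too long
        yield None                            # missing value

def generate_negative_cases(schema):
    # One negative case per bad value; all other parameters stay valid.
    valid = {'flavor_id': 1, 'name': 'ok'}
    for name, spec in schema.items():
        for bad in bad_values(spec):
            case = dict(valid)
            case[name] = bad
            yield case

cases = list(generate_negative_cases(schema))
# Each generated case would be sent to the API, checking only that the
# response is a 4xx rather than a specific error body.
```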

With regard to the json schema, there have been various attempts at this
in the past, including some ideas of how wsme/pecan will help, and it
might be helpful to have more project coordination. I can see a few options:

1. Tempest keeps its own json schema data
2. Each project keeps its own json schema in a way that supports
automated extraction
3. There are several use cases for json schema like this and it gets
stored in some openstacky place that is not in tempest

So that is the starting point. Comments and suggestions welcome! Marc
and I just started working on an etherpad
https://etherpad.openstack.org/p/bp_negative_tests  but any one is
welcome to contribute there.

   -David


--
Daisuke Morita morita.dais...@lab.ntt.co.jp
NTT Software Innovation Center, NTT Corporation

