Re: [Openstack] Are the Python APIs public or internal?

2013-03-01 Thread Daryl Walleck
I certainly use them daily. I actually use an interesting wrapper called 
Supernova (http://rackerhacker.github.com/supernova/) which allows for 
multi-user/multi-environment configurations. While the clients may not be as 
critical as the APIs themselves, they're something I certainly rely on heavily.
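For reference, supernova reads its environments from a ~/.supernova file; a
minimal sketch with two illustrative environments (names and credentials are
made up) looks like this:

[production]
OS_AUTH_URL = http://identity.prod.example.com:5000/v2.0
OS_USERNAME = daryl
OS_PASSWORD = secret
OS_TENANT_NAME = production-tenant

[staging]
OS_AUTH_URL = http://identity.staging.example.com:5000/v2.0
OS_USERNAME = daryl
OS_PASSWORD = secret
OS_TENANT_NAME = staging-tenant

You then prefix the usual novaclient arguments with the environment name,
e.g. supernova staging list.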

Daryl

From: openstack-bounces+daryl.walleck=rackspace@lists.launchpad.net 
[openstack-bounces+daryl.walleck=rackspace@lists.launchpad.net] on behalf 
of David Kranz [david.kr...@qrclab.com]
Sent: Friday, March 01, 2013 3:36 PM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Are the Python APIs public or internal?

The Tempest (QA) team certainly considers them to be public and we just started 
getting some contributions that are testing novaclient. In other work I am also 
a consumer of several of these APIs so I really hope they don't break.

 -David

On 3/1/2013 8:50 AM, Dolph Mathews wrote:
I believe they should certainly be treated as public APIs -- just like any
other library. I'd also treat them as stable if they've ever been included in a
versioned release. That said, I'm sure it would be easy to find examples of
methods and attributes within the library that are not intended to be consumed
externally, but perhaps either the naming convention or the documentation
doesn't sufficiently indicate that.

In keystoneclient, we're making backwards-incompatible changes in a new
subpackage (keystoneclient.v3) while maintaining compatibility in the common
client code. For example, you should always be able to initialize the client
with a tenant_id / tenant_name, even though the client will soon be using
project_id / project_name internally to reflect our revised lingo.
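
As a concrete illustration of that compatibility promise, here is a minimal
sketch of initializing the v2.0 client with the legacy tenant_name keyword
(the credentials and endpoint are hypothetical):

from keystoneclient.v2_0 import client

# tenant_name is the legacy keyword; it should keep working even as the
# client moves to project_name internally.
keystone = client.Client(username='admin',
                         password='secret',
                         tenant_name='demo',
                         auth_url='http://127.0.0.1:5000/v2.0')
print(keystone.tenants.list())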


-Dolph


On Thu, Feb 28, 2013 at 11:07 PM, Lorin Hochstein
lo...@nimbisservices.com wrote:
Here's an issue that came up in the operators doc sprint this week.

Let's say I wanted to write some Python scripts using the APIs exposed by the 
python-*client packages. As a concrete example, let's say I wrote a script that 
uses the keystone Python API that's exposed in the python-keystoneclient 
package:

https://github.com/lorin/openstack-ansible/blob/master/playbooks/keystone/files/keystone-init.py

Are these APIs "public" or "stable" in some meaningful way? (I.e., can I count
on this script still working across minor release upgrades?) Or should they be
treated like internal APIs that could be changed at any time in the future?
Or is this not defined at all?
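
Whatever the answer turns out to be, one defensive measure is to pin the
client libraries a script depends on in its requirements file, so a
backwards-incompatible release can't break it silently; the version ranges
below are purely illustrative:

python-keystoneclient>=0.2,<0.3
python-novaclient>=2.10,<2.11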

Lorin


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack-qa-team] Blueprint for Compute Admin API tests

2012-10-17 Thread Daryl Walleck
Hi everyone,

After our discussions earlier today about being more vocal about what's being
worked on, I wanted to raise awareness of what I'm currently working on, to
avoid duplication of effort. This is the first blueprint I've submitted, so if
there's any additional detail I'm missing, I'm always glad to get feedback.

https://blueprints.launchpad.net/tempest/+spec/add-compute-admin-tests

Daryl
-- 
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


[Openstack-qa-team] Question about licensing header

2012-10-10 Thread Daryl Walleck
While I was doing some assorted maintenance on the Nova tests tonight, I
noticed some inconsistencies in the license headers of some files. While most
attribute the work to OpenStack, LLC, I also see some where IBM is mentioned
instead. I'm guessing this might be a copy/paste error, or are individual
organizations supposed to be attributing themselves for their submissions?
-- 
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack-qa-team] Question about licensing header

2012-10-10 Thread Daryl Walleck
Gotcha, that makes sense. Thanks!

Daryl

From: annegen...@justwriteclick.com [annegen...@justwriteclick.com] on behalf 
of Anne Gentle [a...@openstack.org]
Sent: Wednesday, October 10, 2012 10:28 PM
To: Daryl Walleck
Cc: openstack-qa-team@lists.launchpad.net
Subject: Re: [Openstack-qa-team] Question about licensing header

Hi Daryl -

As I understand it (and I am not a lawyer), the Apache2 license
affords copyright assignment to the committer. It is correct to have a
header with a copyright from the company for which you worked at the
time of contribution. See
http://www.apache.org/licenses/LICENSE-2.0.html#apply.
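
For reference, the boilerplate from that appendix, as it typically appears at
the top of an OpenStack Python file (the copyright line names whichever
company applies):

# Copyright 2012 <Company Name>
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.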

For the Apache Software Foundation projects, Apache asks that they do
not add copyright statements to their header files. OpenStack projects
do not have such a policy statement that I know of.

Hope this helps!
Anne

On Wed, Oct 10, 2012 at 10:06 PM, Daryl Walleck
daryl.wall...@rackspace.com wrote:
 While I was doing some assorted maintenance on the Nova tests tonight, I
 noticed some inconsistencies in the license headers of some files. While most
 attribute the work to OpenStack, LLC, I also see some where IBM is mentioned
 instead. I'm guessing this might be a copy/paste error, or are individual
 organizations supposed to be attributing themselves for their submissions?

 --
 Mailing list: https://launchpad.net/~openstack-qa-team
 Post to : openstack-qa-team@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack-qa-team
 More help   : https://help.launchpad.net/ListHelp


-- 
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] [nova] Call for Help -- OpenStack API XML Support

2012-08-09 Thread Daryl Walleck
As part of my work on Tempest, I've created an alternate backend configuration 
to use XML requests/responses. This right now mostly covers Nova, but could 
easily be extended to test other projects as well. I hadn't pushed it yet 
because it seemed to be low priority, but I'd be more than glad to accelerate 
and get this committed.

Daryl

On Aug 9, 2012, at 8:14 PM, Doug Davis d...@us.ibm.com wrote:


Situations like this are always interesting to watch.  :-)

On the one hand it's open source, so if you care about something then put up
the resources to make it happen.
On the other hand, that doesn't mean that as a developer you get to ignore the 
bigger picture and only do 1/2 of the work because you don't care about the 
other 1/2.

Overall, I tend to agree with the attitude that as long as XML is officially 
supported then all code changes need to make sure they run through both the 
JSON and XML codepaths. And if this means twice the testcases then so be it.
People committing code shouldn't have a choice in this - it's either you do the
full job or your code is rejected.

Having said that, it is a valid question to ask whether we want to continue to 
support both JSON and XML going forward.  But, until that decision is formally 
made letting 1/2 of the APIs atrophy makes the entire community look bad and 
therefore should not be allowed to happen.

My vote: from now on, don't let any code change in unless it works for both.  I
suspect we'll either see the XML side come up to speed really quickly or it'll 
force an ugly vote.  But either way, this needs to be resolved before the next 
release.

thanks
-Doug

STSM |  Standards Architect  |  IBM Software Group
(919) 254-6905  |  IBM 444-6905  |  d...@us.ibm.com
The more I'm around some people, the more I like my dog.


George Reese george.re...@imaginary.com
08/09/2012 07:02 PM

Please respond to: OpenStack Development Mailing List
openstack-...@lists.openstack.org
To: OpenStack Development Mailing List openstack-...@lists.openstack.org
Cc: openstack@lists.launchpad.net
Subject: Re: [openstack-dev] [nova] Call for Help -- OpenStack API XML Support

And this is why I go off on the developer-oriented mentality of the OpenStack 
community.

The fact that there is no one in the OpenStack developer community writing XML 
stuff is not a reflection of the fact that there's no huge desire for XML.

It's in the spec for a reason: BECAUSE ENTERPRISES USE XML HEAVILY

OpenStack developers aren't that audience. They use JSON.

That the project can get to this point and not have tests for these things 
shows a flaw in the development processes, not some grand illustration of 
supply and demand.

Do I really have to point out that if the spec calls for JSON and XML, you 
should bloody well write integration tests to check for JSON and XML?

You don't write whatever happens to please you.

You know how I know all of this? I have an API that supports both XML and JSON. 
I personally prefer JSON. Most of my friends and colleagues prefer and use JSON.

Most of my customers use XML.

Thank $deity I actually write unit tests for each format.

-George

File under:
- statistics 101
- software development 101
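
A hedged sketch of the per-format coverage George is describing - one check
driven through both content types (the endpoint URL and the requests
dependency are assumptions, not how any particular suite did it):

import requests

BASE_URL = 'http://localhost:8774/v2/tenant-id'  # hypothetical endpoint

def check_list_servers(token, content_type):
    resp = requests.get(BASE_URL + '/servers',
                        headers={'X-Auth-Token': token,
                                 'Accept': content_type})
    # every change must pass through both codepaths, so both content
    # types get the same assertions
    assert resp.status_code == 200
    assert resp.headers['Content-Type'].startswith(content_type)

for fmt in ('application/json', 'application/xml'):
    check_list_servers('a-valid-token', fmt)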

On Aug 9, 2012, at 5:52 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:


On Aug 9, 2012, at 3:32 PM, George Reese george.re...@imaginary.com wrote:

Why aren't the integration tests both XML and JSON?

The simple answer is that no one has taken the time to write them. Our devstack
exercises use the python client bindings. Tempest has json clients but no xml
clients[1]. I think this demonstrates that there just isn't a huge desire for
xml. Users that I have chatted with just seem to care that the api works and
that they have good bindings.

I am definitely willing to be proven wrong on this point, but I'm secretly 
hoping everyone agrees with me. It is a lot of work to maintain three APIs (we 
are still maintaining EC2 as well) and keep them all functioning well, so if 
people are happy without OpenStack XML I would be perfectly content to 
deprecate it.

Vish

[1] https://github.com/openstack/tempest/tree/master/tempest/services/nova/xml

___
OpenStack-dev mailing list
openstack-...@lists.openstack.orgmailto:openstack-...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
George Reese (george.re...@imaginary.com)
t: @GeorgeReese   m: 

Re: [Openstack] [Openstack-qa-team] wait_for_server_status and Compute API

2012-06-18 Thread Daryl Walleck
I can verify that rescue is a non-race state. The transition is active to 
rescue on setting rescue, and rescue to active when leaving rescue.


 Original message 
Subject: Re: [Openstack-qa-team] wait_for_server_status and Compute API
From: Jay Pipes jaypi...@gmail.com
To: openstack-qa-t...@lists.launchpad.net, openstack@lists.launchpad.net


On 06/18/2012 12:01 PM, David Kranz wrote:
 There are a few tempest tests, and many in the old kong suite that is
 still there, that wait for a server status that is something other than
 ACTIVE or VERIFY_RESIZE. These other states, such as BUILD or REBOOT,
 are transient so I don't understand why it is correct for code to poll
 for those states. Am I missing something or do those tests have race
 condition bugs?

No, you are correct, and I have made some comments in recent code
reviews to that effect.

Here are all the task states:

https://github.com/openstack/nova/blob/master/nova/compute/task_states.py

Out of all those task states, I believe the only one safe to poll in a
wait loop is RESIZE_VERIFY. All the others are prone to state
transitions outside the control of the user.

For the VM states:

https://github.com/openstack/nova/blob/master/nova/compute/vm_states.py

I consider the following to be non-racy, quiescent states:

ACTIVE
DELETED
STOPPED
SHUTOFF
PAUSED
SUSPENDED
ERROR

I consider the following to be racy states that should not be tested for:

MIGRATING -- Instead, the final state should be checked for...
RESIZING -- Instead, the RESIZE_VERIFY and RESIZE_CONFIRM task states
should be checked

I have absolutely no idea what the state termination is for the
following VM states:

RESCUED -- is this a permanent state? Is this able to be queried for in
a consistent manner before it transitions to some further state?

SOFT_DELETE -- I have no clue what the purpose or queryability of this
state is, but would love to know...
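
Jay's taxonomy translates directly into a guard in the wait helper. A hedged
sketch (the client object and its get_server method are hypothetical stand-ins
for whatever the test framework provides):

import time

QUIESCENT = set(['ACTIVE', 'DELETED', 'STOPPED', 'SHUTOFF',
                 'PAUSED', 'SUSPENDED', 'ERROR'])

def wait_for_server_status(client, server_id, status, timeout=300, interval=3):
    # Refuse transient targets (BUILD, REBOOT, ...) up front.
    assert status in QUIESCENT, '%s is not a quiescent state' % status
    deadline = time.time() + timeout
    while time.time() < deadline:
        server = client.get_server(server_id)
        if server.status == status:
            return server
        if server.status == 'ERROR':
            raise RuntimeError('server %s went to ERROR' % server_id)
        time.sleep(interval)
    raise RuntimeError('server %s not %s after %s seconds'
                       % (server_id, status, timeout))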

Best,
-jay

--
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-t...@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack-qa-team] wait_for_server_status and Compute API

2012-06-18 Thread Daryl Walleck
I can verify that rescue is a non-race state. The transition is active to 
rescue on setting rescue, and rescue to active when leaving rescue.


 Original message 
Subject: Re: [Openstack-qa-team] wait_for_server_status and Compute API
From: Jay Pipes jaypi...@gmail.com
To: openstack-qa-team@lists.launchpad.net, openst...@lists.launchpad.net


On 06/18/2012 12:01 PM, David Kranz wrote:
 There are a few tempest tests, and many in the old kong suite that is
 still there, that wait for a server status that is something other than
 ACTIVE or VERIFY_RESIZE. These other states, such as BUILD or REBOOT,
 are transient so I don't understand why it is correct for code to poll
 for those states. Am I missing something or do those tests have race
 condition bugs?

No, you are correct, and I have made some comments in recent code
reviews to that effect.

Here are all the task states:

https://github.com/openstack/nova/blob/master/nova/compute/task_states.py

Out of all those task states, I believe the only one safe to poll in a
wait loop is RESIZE_VERIFY. All the others are prone to state
transitions outside the control of the user.

For the VM states:

https://github.com/openstack/nova/blob/master/nova/compute/vm_states.py

I consider the following to be non-racy, quiescent states:

ACTIVE
DELETED
STOPPED
SHUTOFF
PAUSED
SUSPENDED
ERROR

I consider the following to be racy states that should not be tested for:

MIGRATING -- Instead, the final state should be checked for...
RESIZING -- Instead, the RESIZE_VERIFY and RESIZE_CONFIRM task states
should be checked

I have absolutely no idea what the state termination is for the
following VM states:

RESCUED -- is this a permanent state? Is this able to be queried for in
a consistent manner before it transitions to some further state?

SOFT_DELETE -- I have no clue what the purpose or queryability of this
state is, but would love to know...

Best,
-jay

--
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp
-- 
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


[Openstack] Agenda for QA Status Meeting for 6/7

2012-06-06 Thread Daryl Walleck
The weekly QA Team meeting takes place at 17:00 UTC on IRC (#openstack-meeting 
on Freenode). We invite anyone interested in testing, quality assurance and 
performance engineering to attend the weekly meeting. 

The agenda for this week is as follows: 

* Status of Swift tests (Jose)
* Status of parallelization modifications (Daryl)
* Outstanding code reviews 
  -Review all outstanding code reviews in the queue: 
  -https://review.openstack.org/#/q/status:open+project:openstack/tempest,n,z 

* New items for discussion 
 - Discussion on David's resource management branch 
 - Further thoughts on test resource management (Sam)
 - Expectations of Tempest execution time

* Work in progress that will be landing soon 
 - Smoke testing base classes?
 - Swift tests
 - Instance level tests for the rest of the Compute API actions

* Open discussion 

Feel free to add to the agenda as needed.

Daryl
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack-qa-team] Agenda for QA Status Meeting for 6/7

2012-06-06 Thread Daryl Walleck
The weekly QA Team meeting takes place at 17:00 UTC on IRC (#openstack-meeting 
on Freenode). We invite anyone interested in testing, quality assurance and 
performance engineering to attend the weekly meeting. 

The agenda for this week is as follows: 

* Status of Swift tests (Jose)
* Status of parallelization modifications (Daryl)
* Outstanding code reviews 
  -Review all outstanding code reviews in the queue: 
  -https://review.openstack.org/#/q/status:open+project:openstack/tempest,n,z 

* New items for discussion 
 - Discussion on David's resource management branch 
 - Further thoughts on test resource management (Sam)
 - Expectations of Tempest execution time

* Work in progress that will be landing soon 
 - Smoke testing base classes?
 - Swift tests
 - Instance level tests for the rest of the Compute API actions

* Open discussion 

Feel free to add to the agenda as needed.

Daryl
-- 
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-08 Thread Daryl Walleck
Hi Tim,

I understand where you're coming from as well. I'm all for anything that gets
the job done right and keeps folks engaged. It's fair to say that most
positive/happy path tests could be developed using novaclient, and that there
may be workarounds for some of the data that is not readily available from
novaclient responses. That said, the truly nasty negative tests that are so
critical would be nearly impossible to write that way. I'm not comfortable with
having hard-coded HTTP requests inside of tests, as that can easily become a
maintainability nightmare.

That being said, I would much rather work this out than have Tempest splinter 
into separate efforts. I think the one thing we can all agree on is that to 
some degree, tests using novaclient are a necessity. I'm the team captain for 
the QA meeting this week, so I'll set the agenda around this topic so we can 
have a more in depth discussion.

Daryl

Sent from my iPad

On May 8, 2012, at 4:00 PM, Tim Simpson tim.simp...@rackspace.com wrote:

Hi Daryl,

I understand what you're trying to accomplish by creating a new client for the 
tests, but I'm not sure why the interface has to be completely different from 
the official set of Python client bindings. If it used the same interface 
everyone in the community could then benefit from the extra features you're 
adding to the Tempest client by swapping it in should the need arise. It would 
also be easier for Tempest newbs to create and contribute new tests.

I understand that in some cases the current interface is too nice, but wouldn't 
the existing one work fine for the majority of the tests? If so, why not just 
write a few extra methods to send HTTP requests directly (or for these cases, 
use http directly)?

Additionally, I've heard from some in Rackspace QA that the client allows them
to see the HTTP codes, which is more illustrative. I feel like this problem
could be solved with helper methods like this:

def assert_response(http_code, func, *args, **kwargs):
    try:
        result = func(*args, **kwargs)
        # the call succeeded, so the expected code must be the success code
        assert_equal(http_code, 200)
        return result
    except ClientException as ce:
        assert_equal(http_code, ce.code)

Then you'd write tests like this:

server = assert_response(200, servers.get, some_id)

You could of course have additional methods if the success case indicated a 
different HTTP code. If more than one HTTP code could possibly lead to the same 
return value then maybe that indicates the official bindings should be changed. 
In this case it would be another win, as Tempest writers would be pushing to 
ensure the Python client interface was as useful as possible.

Tim

From: openstack-bounces+tim.simpson=rackspace@lists.launchpad.net on behalf
of Daryl Walleck [daryl.wall...@rackspace.com]
Sent: Friday, May 04, 2012 12:03 AM
To: Maru Newby
Cc: Rick Lopez; openstack-qa-t...@lists.launchpad.net;
openstack@lists.launchpad.net
Subject: Re: [Openstack] [QA] Aligning smoke / acceptance / promotion 
test efforts

Perhaps it's just me, but if I were developing in a different language, I
would not want to use a command line tool to interact with my application. What
is the point of developing RESTful APIs if the primary client is not the API
itself, but these command line tools instead?

While it may appear that the approach I'm advocating is extra work, let me try
to explain the purpose of this approach. If these aren't clear, I'd be more than
glad to give some concrete examples of where these techniques have been useful.
A few benefits that come to mind are:


  *   Testing of XML implementations of the API. While this could be built into
the clients, I don't see many folks who would rush to that cause.
  *   Direct access to the response (see the sketch after this list). The
clients hide any header/response code information from the recipient, which can
be very important. For example, the response headers for Nova contain a token
that can be captured and used when troubleshooting issues.
  *   Avoiding the user-friendliness of the clients. While retries and
user-friendly exception handling are great for clients, they are not what I
want as a tester. While I do want a minimal layer of abstraction between my AUT
and my code so that I'm not making HTTP requests directly from my tests, from
what I've seen the clients can try harder to be helpful than I'd prefer.
  *   The ability to inject other metrics gathering into my clients for
troubleshooting/logging/information handling.
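
The second bullet is the crux of the thin-client argument. A hedged sketch of
what such a client method can look like, returning the raw response so status
and headers stay visible to the test (httplib2 is what early Tempest's
rest_client used; the function shape and URL are illustrative):

import httplib2

def get_server(base_url, token, server_id):
    http = httplib2.Http()
    resp, body = http.request('%s/servers/%s' % (base_url, server_id), 'GET',
                              headers={'X-Auth-Token': token,
                                       'Accept': 'application/json'})
    # resp carries the status code and every response header (including the
    # token mentioned above), so tests can assert on them directly.
    return resp, body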

While perhaps the idea is that only the smoke tests would use this approach,
I'm hesitant about the idea of developing tests using multiple

[Openstack] Agenda for QA Status Meeting for 5/10

2012-05-08 Thread Daryl Walleck
Weekly team meeting

The OpenStack QA Team holds public weekly meetings in #openstack-meeting, 
Thursday at 13:00 EST (17:00 UTC). Everyone interested in testing, quality 
assurance, performance engineering, etc, should attend!


Agenda for next meeting


* Review of last week's action items

(jaypipes) Get example smoke tests into Gerrit for review

* Blockers preventing work from moving forward

None

* Outstanding code reviews

Review all outstanding code reviews in the queue:

https://review.openstack.org/#/q/status:open+project:openstack/tempest,n,z

* New items for discussion (contact me with additional topics if needed)

1. (All) Feedback on Jay's Smoke Test Branch 
(https://review.openstack.org/#/c/7069/2)

2. (Jose) Update on Swift test development for Tempest

3. (Daryl) Thoughts on how to better document and convey functional test 
coverage

* Open discussion


Previous meetings

Previous meetings, with their notes and logs, can be found under
Meetings/QAMeetingLogs (http://wiki.openstack.org/Meetings/QAMeetingLogs).
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack-qa-team] [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-08 Thread Daryl Walleck
Hi Tim,

I understand where you're coming from as well. I'm all for anything that gets
the job done right and keeps folks engaged. It's fair to say that most
positive/happy path tests could be developed using novaclient, and that there
may be workarounds for some of the data that is not readily available from
novaclient responses. That said, the truly nasty negative tests that are so
critical would be nearly impossible to write that way. I'm not comfortable with
having hard-coded HTTP requests inside of tests, as that can easily become a
maintainability nightmare.

That being said, I would much rather work this out than have Tempest splinter 
into separate efforts. I think the one thing we can all agree on is that to 
some degree, tests using novaclient are a necessity. I'm the team captain for 
the QA meeting this week, so I'll set the agenda around this topic so we can 
have a more in depth discussion.

Daryl

Sent from my iPad

On May 8, 2012, at 4:00 PM, Tim Simpson tim.simp...@rackspace.com wrote:

Hi Daryl,

I understand what you're trying to accomplish by creating a new client for the 
tests, but I'm not sure why the interface has to be completely different from 
the official set of Python client bindings. If it used the same interface 
everyone in the community could then benefit from the extra features you're 
adding to the Tempest client by swapping it in should the need arise. It would 
also be easier for Tempest newbs to create and contribute new tests.

I understand that in some cases the current interface is too nice, but wouldn't 
the existing one work fine for the majority of the tests? If so, why not just 
write a few extra methods to send HTTP requests directly (or for these cases, 
use http directly)?

Additionally, I've heard from some in Rackspace QA that the client allows them
to see the HTTP codes, which is more illustrative. I feel like this problem
could be solved with helper methods like this:

def assert_response(http_code, func, *args, **kwargs):
    try:
        result = func(*args, **kwargs)
        # the call succeeded, so the expected code must be the success code
        assert_equal(http_code, 200)
        return result
    except ClientException as ce:
        assert_equal(http_code, ce.code)

Then you'd write tests like this:

server = assert_response(200, servers.get, some_id)

You could of course have additional methods if the success case indicated a 
different HTTP code. If more than one HTTP code could possibly lead to the same 
return value then maybe that indicates the official bindings should be changed. 
In this case it would be another win, as Tempest writers would be pushing to 
ensure the Python client interface was as useful as possible.

Tim

From: openstack-bounces+tim.simpson=rackspace@lists.launchpad.net on behalf
of Daryl Walleck [daryl.wall...@rackspace.com]
Sent: Friday, May 04, 2012 12:03 AM
To: Maru Newby
Cc: Rick Lopez; openstack-qa-team@lists.launchpad.net;
openst...@lists.launchpad.net
Subject: Re: [Openstack] [QA] Aligning smoke / acceptance / promotion 
test efforts

Perhaps it's just me, but if I were developing in a different language, I
would not want to use a command line tool to interact with my application. What
is the point of developing RESTful APIs if the primary client is not the API
itself, but these command line tools instead?

While it may appear that the approach I'm advocating is extra work, let me try
to explain the purpose of this approach. If these aren't clear, I'd be more than
glad to give some concrete examples of where these techniques have been useful.
A few benefits that come to mind are:


  *   Testing of XML implementations of the API. While this could be built into
the clients, I don't see many folks who would rush to that cause.
  *   Direct access to the response. The clients hide any header/response code
information from the recipient, which can be very important. For example, the
response headers for Nova contain a token that can be captured and used when
troubleshooting issues.
  *   Avoiding the user-friendliness of the clients. While retries and
user-friendly exception handling are great for clients, they are not what I
want as a tester. While I do want a minimal layer of abstraction between my AUT
and my code so that I'm not making HTTP requests directly from my tests, from
what I've seen the clients can try harder to be helpful than I'd prefer.
  *   The ability to inject other metrics gathering into my clients for
troubleshooting/logging/information handling.

While perhaps the idea is that only the smoke tests would use this approach,
I'm hesitant about the idea of developing tests using multiple

[Openstack-qa-team] Agenda for QA Status Meeting for 5/10

2012-05-08 Thread Daryl Walleck
Weekly team meeting

The OpenStack QA Team holds public weekly meetings in #openstack-meeting, 
Thursday at 13:00 EST (17:00 UTC). Everyone interested in testing, quality 
assurance, performance engineering, etc, should attend!


Agenda for next meeting


* Review of last week's action items

(jaypipes) Get example smoke tests into Gerrit for review

* Blockers preventing work from moving forward

None

* Outstanding code reviews

Review all outstanding code reviews in the queue:

https://review.openstack.org/#/q/status:open+project:openstack/tempest,n,z

* New items for discussion (contact me with additional topics if needed)

1. (All) Feedback on Jay's Smoke Test Branch 
(https://review.openstack.org/#/c/7069/2)

2. (Jose) Update on Swift test development for Tempest

3. (Daryl) Thoughts on how to better document and convey functional test 
coverage

* Open discussion


Previous meetings

Previous meetings, with their notes and logs, can be found under
Meetings/QAMeetingLogs (http://wiki.openstack.org/Meetings/QAMeetingLogs).
-- 
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [QA] Tempest Devstack/Jenkins gate job now passing

2012-05-04 Thread Daryl Walleck
This is great news!  Thanks Jay for all the hard work you've put into getting 
this up and running.

Daryl

On May 4, 2012, at 8:34 AM, Jay Pipes wrote:

 All,
 
 A momentous event has occurred. We now have all Tempest integration tests 
 passing against the devstack-deployed test environment spun up in the 
 dev-gate-tempest-devstack-vm:
 
 https://jenkins.openstack.org/job/dev-gate-tempest-devstack-vm/test/?width=800&height=600
 
 Tempest is executing 157 integration tests in about 7-8 minutes on most of 
 the service providers.
 
 We're running the job for every commit into all core projects (except Swift 
 right now, until we add the Swift integration tests into Tempest). We're 
 going to observe the job for stability today and tomorrow and then hopefully 
 by next week the QA team can recommend the job to be a full merge gate on the 
 OpenStack core projects.
 
 Best,
 -jay
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-03 Thread Daryl Walleck
So my first question is around this: is the claim that the client tools are
the default interface for the applications? While that works for coders in
python, what about people using other languages? Even then, there's no
guarantee that the clients in different languages are implemented in the same
way. Tempest was designed the way it was because, while it does use an
abstraction between the API and the tests, there is nothing to "assist" the
user by retrying and the like. While I think there's a place for writing tests
using the command line clients, to me that would be a smoke test of a client
and not so much a smoke test of the API.

Daryl

On May 3, 2012, at 12:01 PM, Jay Pipes wrote:

However, before this can happen, a number of improvements need to be made to 
Tempest. The issue with the smoke tests in Tempest is that they aren't really 
smoke tests. They do not use the default client tools (like novaclient, 
keystoneclient, etc) and are not annotated consistently.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-03 Thread Daryl Walleck
Perhaps it's just me, but if I were developing in a different language, I
would not want to use a command line tool to interact with my application. What
is the point of developing RESTful APIs if the primary client is not the API
itself, but these command line tools instead?

While it may appear that the approach I'm advocating is extra work, let me try
to explain the purpose of this approach. If these aren't clear, I'd be more than
glad to give some concrete examples of where these techniques have been useful.
A few benefits that come to mind are:


  *   Testing of XML implementations of the API. While this could be built into
the clients, I don't see many folks who would rush to that cause.
  *   Direct access to the response. The clients hide any header/response code
information from the recipient, which can be very important. For example, the
response headers for Nova contain a token that can be captured and used when
troubleshooting issues.
  *   Avoiding the user-friendliness of the clients. While retries and
user-friendly exception handling are great for clients, they are not what I
want as a tester. While I do want a minimal layer of abstraction between my AUT
and my code so that I'm not making HTTP requests directly from my tests, from
what I've seen the clients can try harder to be helpful than I'd prefer.
  *   The ability to inject other metrics gathering into my clients for
troubleshooting/logging/information handling.

While perhaps the idea is that only the smoke tests would use this approach,
I'm hesitant about the idea of developing tests using multiple approaches simply
for the sake of using the clients for certain tests. I'm assuming these were
things that were talked about during the CLI portions of the OpenStack summit,
which I wasn't able to attend. I wasn't aware of this, or even of some of the
new parallel testing efforts, which somehow did not come up during the QA track.
The purpose of Tempest in the first place was to unify the functional and
integration testing efforts for OpenStack projects, and I'm dedicated to doing
everything I can to make that happen. If everyone else is in agreement, I
certainly don't want to be the one standing in the way of the majority.
However, I just wanted to state my concerns before we take any further actions.

Daryl

On May 3, 2012, at 9:54 PM, Maru Newby wrote:

The rest api is the default interface, and the client tools target that 
interface.  Since the clients are cli more than python api, they can be used by 
any language that can use a shell.  What exactly does reimplementing the 
clients for the sake of testing accomplish?  Double the maintenance effort for 
the same result, imho.

Cheers,


Maru

On 2012-05-03, at 12:54 PM, Daryl Walleck wrote:

So my first question is around this: is the claim that the client tools are
the default interface for the applications? While that works for coders in
python, what about people using other languages? Even then, there's no
guarantee that the clients in different languages are implemented in the same
way. Tempest was designed the way it was because, while it does use an
abstraction between the API and the tests, there is nothing to "assist" the
user by retrying and the like. While I think there's a place for writing tests
using the command line clients, to me that would be a smoke test of a client
and not so much a smoke test of the API.

Daryl

On May 3, 2012, at 12:01 PM, Jay Pipes wrote:

However, before this can happen, a number of improvements need to be made to 
Tempest. The issue with the smoke tests in Tempest is that they aren't really 
smoke tests. They do not use the default client tools (like novaclient, 
keystoneclient, etc) and are not annotated consistently.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack-qa-team] [Openstack] [QA] Aligning smoke / acceptance / promotion test efforts

2012-05-03 Thread Daryl Walleck
Perhaps it's just me, but if I were developing in a different language, I
would not want to use a command line tool to interact with my application. What
is the point of developing RESTful APIs if the primary client is not the API
itself, but these command line tools instead?

While it may appear that the approach I'm advocating is extra work, let me try
to explain the purpose of this approach. If these aren't clear, I'd be more than
glad to give some concrete examples of where these techniques have been useful.
A few benefits that come to mind are:


  *   Testing of XML implementations of the API. While this could be built into
the clients, I don't see many folks who would rush to that cause.
  *   Direct access to the response. The clients hide any header/response code
information from the recipient, which can be very important. For example, the
response headers for Nova contain a token that can be captured and used when
troubleshooting issues.
  *   Avoiding the user-friendliness of the clients. While retries and
user-friendly exception handling are great for clients, they are not what I
want as a tester. While I do want a minimal layer of abstraction between my AUT
and my code so that I'm not making HTTP requests directly from my tests, from
what I've seen the clients can try harder to be helpful than I'd prefer.
  *   The ability to inject other metrics gathering into my clients for
troubleshooting/logging/information handling.

While perhaps the idea is that only the smoke tests would use this approach,
I'm hesitant about the idea of developing tests using multiple approaches simply
for the sake of using the clients for certain tests. I'm assuming these were
things that were talked about during the CLI portions of the OpenStack summit,
which I wasn't able to attend. I wasn't aware of this, or even of some of the
new parallel testing efforts, which somehow did not come up during the QA track.
The purpose of Tempest in the first place was to unify the functional and
integration testing efforts for OpenStack projects, and I'm dedicated to doing
everything I can to make that happen. If everyone else is in agreement, I
certainly don't want to be the one standing in the way of the majority.
However, I just wanted to state my concerns before we take any further actions.

Daryl

On May 3, 2012, at 9:54 PM, Maru Newby wrote:

The rest api is the default interface, and the client tools target that 
interface.  Since the clients are cli more than python api, they can be used by 
any language that can use a shell.  What exactly does reimplementing the 
clients for the sake of testing accomplish?  Double the maintenance effort for 
the same result, imho.

Cheers,


Maru

On 2012-05-03, at 12:54 PM, Daryl Walleck wrote:

So my first question is around this: is the claim that the client tools are
the default interface for the applications? While that works for coders in
python, what about people using other languages? Even then, there's no
guarantee that the clients in different languages are implemented in the same
way. Tempest was designed the way it was because, while it does use an
abstraction between the API and the tests, there is nothing to "assist" the
user by retrying and the like. While I think there's a place for writing tests
using the command line clients, to me that would be a smoke test of a client
and not so much a smoke test of the API.

Daryl

On May 3, 2012, at 12:01 PM, Jay Pipes wrote:

However, before this can happen, a number of improvements need to be made to 
Tempest. The issue with the smoke tests in Tempest is that they aren't really 
smoke tests. They do not use the default client tools (like novaclient, 
keystoneclient, etc) and are not annotated consistently.

___
Mailing list: https://launchpad.net/~openstack
Post to : openst...@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


-- 
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack-qa-team] Devstack dependent tests

2012-04-29 Thread Daryl Walleck
I think what you mention is a good test. However, I wouldn't be able to run it
in most of my test environments. Maybe there should be an invalidate-token
admin API? I could see functional reasons for wanting it, and it would still
allow you to write a test like this.

Daryl


-- Sent from my HP TouchPad

On Apr 29, 2012 11:27 PM, Karajgi, Rohit rohit.kara...@nttdata.com wrote:
Hi,

We are writing new tests for keystone, and some of these tests need to touch
the keystone database.
I really want to avoid this, but unfortunately there are no RESTful APIs
supported in stable/essex to do the job.

One example:
1.  Check that the get_tenants API fails for an expired token. There is no way
I can set the expiry date of a token using an admin RESTful API.

So currently I'm planning to use mysql client commands to set the expiry date.
All such tests will be tagged with the attr decorator using the name devstack,
so anyone who doesn't want to run them can run tempest with nosetests -a
kind!=devstack.
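
A minimal sketch of that tagging, using nose's attrib plugin (the test body is
elided):

from nose.plugins.attrib import attr

@attr(kind='devstack')
def test_get_tenants_fails_for_expired_token():
    # expire the token directly in the keystone database, then call
    # get_tenants and assert the request is rejected (details elided)
    pass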

Does it make sense to add such tests?

Regards,
Rohit


--
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp
-- 
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] running Tempest continuously in the openstack project?

2012-02-27 Thread Daryl Walleck
I'm actively looking into any issues. I have all these tests passing locally in
my environment, so the issues seem to be focused around people using devstack.
I've made some merge proposals that will take care of a few of the issues. One
issue that will certainly come up is that if you have rate limiting enabled,
most of the tests will fail. I've also included the link to the Tempest
documentation below, which should help make using it a bit clearer. I'm working
on updating this document as I use Devstack myself, so that I can either smooth
over or enumerate issues that may come up when running the tests. If you run
into anything, though, please file a report with the Tempest project and I'll
have a look.

http://qa.openstack.org/integration.html

https://launchpad.net/tempest

Daryl

On Feb 27, 2012, at 7:18 AM, Ionuț Arțăriși wrote:

 On 02/22/2012 07:17 PM, Jay Pipes wrote:
 On 02/22/2012 10:49 AM, James E. Blair wrote:
 Indeed, as soon as someone says I have Tempest working on a system
 configured by devstack with a repeatable process, here's how I did
 it... we'll start running it in Jenkins. But so far I've heard from
 the Tempest developers that it's not quite ready yet.
 
 That would be correct. Still work needed to get things working more
 consistently.
 
 
 How can I help with this effort? Who should I contact?
 
 Right now I'm having trouble getting the tests running against devstack (a
 configuration problem on my part, I suppose, but I blame it on the lack of
 documentation).
 
 -Ionuț
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] running Tempest continuously in the openstack project?

2012-02-27 Thread Daryl Walleck
This isn't really a bug in either project, but a configuration issue. The name
of the endpoint isn't static, so making it 'nova' or 'compute' may not always
be correct. If you check the name of your Compute service that's returned in
the Keystone auth response, you can find the name and configure Tempest by
setting the catalog property to it. To avoid having to configure even that, I
have a merge proposal in the queue to look at the type instead (which for Nova,
as far as I understand, should always be 'compute'), so there's one less
configuration item for people to deal with.
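
For the interim workaround, the setting looks something like this in
tempest.conf (the section name varied across early Tempest revisions, so treat
this as illustrative):

[nova]
catalog_name = compute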

Daryl

On Feb 27, 2012, at 9:49 AM, David Kranz wrote:

 There is still a bug in tempest and/or keystone. To run Tempest and devstack 
 you have to:
 
 1. Add catalog_name=compute to tempest.conf
 2. Change name to type in rest_client.py
 
 
 -David
 
 
 On 2/27/2012 8:18 AM, Ionuț Arțăriși wrote:
 On 02/22/2012 07:17 PM, Jay Pipes wrote:
 On 02/22/2012 10:49 AM, James E. Blair wrote:
 Indeed, as soon as someone says I have Tempest working on a system
 configured by devstack with a repeatable process, here's how I did
 it... we'll start running it in Jenkins. But so far I've heard from
 the Tempest developers that it's not quite ready yet.
 
 That would be correct. Still work needed to get things working more
 consistently.
 
 
 How can I help with this effort? Who should I contact?
 
 Right now I'm having trouble getting the tests running against devstack
 (a configuration problem on my part, I suppose, but I blame it on the lack of
 documentation).
 
 -Ionuț
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack-qa-team] Tempest resize tests

2012-02-24 Thread Daryl Walleck
I've heard KVM/libvirt support was added in Essex. I can confirm resize is
still working with XenServer, so the issue must be with those implementations.

Daryl

On Feb 24, 2012, at 2:58 PM, David Kranz wrote:

 I am not sure why the resize tests are failing but there is an error in the 
 compute log so I filed a bug:  https://bugs.launchpad.net/nova/+bug/940619
 
 -David
 
 -- 
 Mailing list: https://launchpad.net/~openstack-qa-team
 Post to : openstack-qa-team@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack-qa-team
 More help   : https://help.launchpad.net/ListHelp


-- 
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] running Tempest continuously in the openstack project?

2012-02-22 Thread Daryl Walleck
We're running Tempest as part of our continuous build. For the bug you're
referring to, David, there is a configuration setting in Tempest where the name
of the Nova/Compute service can be set, so I don't think it's fair to say that
Tempest is currently broken. However, I am working on a more comprehensive set
of documentation and auto-configuration tools to allow people to work with
Tempest more easily. Expect more very soon coming down the pipeline.

Daryl

From: openstack-bounces+daryl.walleck=rackspace@lists.launchpad.net 
[openstack-bounces+daryl.walleck=rackspace@lists.launchpad.net] on behalf 
of David Kranz [david.kr...@qrclab.com]
Sent: Wednesday, February 22, 2012 8:24 AM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] running Tempest continuously in the openstack  project?

There is currently a bug, https://bugs.launchpad.net/tempest/+bug/933845,
that will prevent tempest from working with keystone. That ticket
provides a workaround, but still not all of the tests are working at the
moment. I think the goal is for tempest to be actively run, but we
are not there yet...

  -David


On 2/22/2012 9:12 AM, Ionuț Arțăriși wrote:
 Hello,

 I'm trying to get started with Tempest and I would really like to be
 able to see a running setup somewhere. I'm getting a lot of tests
 failing and I don't know if it's because of my setup or because of
 bugs in Tempest or incompatibilities with the current code in the
 other components.

 I looked at Jenkins, but it seems there are only two tasks related to
 Tempest there, testing for git-merge and pep8. Does the OpenStack
 project actively run the full Tempest test suite anywhere?

 It would be great if anyone can offer a working configuration or at
 least say if they have Tempest running successfully against which
 versions of nova, glance, swift etc.

 Thanks,
 Ionuț

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [QA] Finding a new time for our weekly meeting?

2012-02-21 Thread Daryl Walleck
That time still works for me. Would we be starting that this week (tomorrow)?

Daryl

On Feb 21, 2012, at 1:45 PM, Jay Pipes wrote:

 Hi QA Team,
 
 Unfortunately, I have another meeting happening at the same time as our 
 weekly IRC meeting in #openstack-meeting, and it's a meeting I really need to 
 attend :(
 
 I'm hoping that the QA team would be open to moving our meeting to a 
 different time or day?
 
 Would anyone be opposed to moving the meeting to Thursdays at 17:00 UTC 
 (currently 12:00 EST/09:00 PST)?
 
 Please let me know. I appreciate your flexibility!
 
 Best,
 -jay
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack-qa-team] Tempest and functional test success definition

2012-02-05 Thread Daryl Walleck
Good question. We had a discussion awhile back on whether SSHing into an 
instance broke the line between black and white box testing. We had people on 
both sides, but the discussion was tabled for a bit while we dealt with other 
issues. It's definitely something I'd like to talk about again. I'm doing some 
SSHing in the dev branch that I work from and find it fairly useful.
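
A hedged sketch of that kind of check - SSH in and confirm an attached volume's
device node is visible (paramiko and the device path are assumptions):

import paramiko

def volume_visible(ip, keyfile, device='/dev/vdb'):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(ip, username='root', key_filename=keyfile)
    # 'ls <device>' exits 0 only if the device node actually exists
    stdin, stdout, stderr = ssh.exec_command('ls %s' % device)
    visible = stdout.channel.recv_exit_status() == 0
    ssh.close()
    return visible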

Daryl

On Jan 31, 2012, at 10:07 AM, David Kranz wrote:

 The discussion about Libvirt File Injection on the main list reminded me that
 I am not clear on what we mean when we say a functional test passes. For
 example, in one version of the stress test we are going to check in, the test
 assigns a floating ip and ssh's to the created server to make sure it is, by
 some definition, working. It looks like the Tempest test of server creation
 passes if the API response says it did. Is this adequate? Another example: if
 we are testing that a volume is attached to a server, don't we have to ssh to
 the server and make sure that the volume is accessible? Perhaps these aspects
 are tested by the nova unit tests, but at first glance it did not seem so.
 
 -David
 
 -- 
 Mailing list: https://launchpad.net/~openstack-qa-team
 Post to : openstack-qa-team@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack-qa-team
 More help   : https://help.launchpad.net/ListHelp


-- 
Mailing list: https://launchpad.net/~openstack-qa-team
Post to : openstack-qa-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-qa-team
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] describing APIs for OpenStack consumers

2011-10-25 Thread Daryl Walleck
Hi everyone,

This is just my opinion, but I've only found WADLs very useful when using
tool-based automation. They're a huge headache to read. The current dev guide
style of documentation has been far more helpful in developing automation.

Daryl

On Oct 25, 2011, at 3:24 PM, Anne Gentle wrote:

Hi all -

Would also love Swagger. Nati looked into it, and he thought it would require a
Python client generator, based on reading that "Client generators are currently
available for Scala, Java, Javascript, Ruby, PHP, and Actionscript 3." So in
the meantime the QA list and Nati suggested WADL as a starting point for
auto-generating simple API documentation, while also looking towards Swagger for
a way to document a public cloud like the Free Cloud. At the last OpenStack
hackathon in the Bay Area (California), Nati worked through a simple WADL
reader; he may be able to describe it better.

Hope that helps - sorry it's not more detailed than that, but I wanted to give
some background. It sounds like we all want similar outcomes, and the resources
for the tasks to get us to those outcomes are all we're lacking. QA Team, let
me know how the Docs Team can work with you here.

Anne
Anne Gentle
a...@openstack.org
my blog (http://justwriteclick.com/) | my book
(http://xmlpress.net/publications/conversation-community/) | LinkedIn
(http://www.linkedin.com/in/annegentle) | Delicious
(http://del.icio.us/annegentle) | Twitter (http://twitter.com/annegentle)
On Tue, Oct 25, 2011 at 2:41 PM, Joseph Heck he...@mac.com wrote:
I expect this is going to open a nasty can of worms... today we don't have a 
consistent way of describing the APIs for the various services. I saw Nati's 
bug (https://launchpad.net/bugs/881621), which implies that all the services 
should have a WADL somewhere describing the API.

I'm not a huge fan of WADL, but the only other thing I've found is swagger 
(http://swagger.wordnik.com/spec). I have been working towards creating a 
comprehensive OpenStack API documentation set that can be published 
as HTML, not unlike some of these:

   https://dev.twitter.com/docs/api
   http://developer.netflix.com/docs/REST_API_Reference
   http://code.google.com/p/bitly-api/wiki/ApiDocumentation#REST_API
   http://upcoming.yahoo.com/services/api/

To make this sort of web-page documentation effective, I think it's best to 
drive it from descriptions on each of the projects (if we can). I've checked 
with some friends who've done similar work, and learned that most of those API 
doc sets are maintained by hand - not generated from description files.

What do you all think about standardizing on WADL (or swagger) as a description 
of the API and generating comprehensive web-site-based API documentation from 
those description files? Does anyone have any other description formats that 
would work for this as an alternative?

(I admit I don't want to get into XML parsing hell, which is what it appears 
that WADL might lead to)

-joe




Re: [Openstack] [QA] openstack-integration-tests

2011-10-19 Thread Daryl Walleck
Hi Rohit,

I'm glad to see so much interest in getting testing done right. So here are my 
thoughts. As far as the nova client/euca-tools portion, I think we absolutely 
need a series of tests that validate that these bindings work correctly. As a 
nice side effect they do test their respective APIs, which is good. I think 
duplication of testing between these two bindings and even what I'm envisioning 
as the main test suite is necessary, as we have to verify at least at a high 
level that they work correctly.

My thought for our core testing is that it would be the tests that do not use 
language bindings. I think this is where the interesting architectural work can 
be done. Test framework is a very loose term that gets used a lot, but to me a 
framework includes:


  *   The test runner and its capabilities
  *   How the test code is structured to assure 
maintainability/flexibility/ease of code re-use
  *   Any utilities provided to extend or ease the ability to test

I think we all have a lot of good ideas about this; it's just a matter of 
consolidating them and choosing one direction to go forward with.
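
As a rough illustration of the kind of structure I mean (all names here are
invented, not taken from any existing suite), a shared base layer keeps the
individual tests small and the common plumbing in one place:

    # Illustrative sketch only: class and method names are invented.
    import time
    import unittest

    class BaseComputeTest(unittest.TestCase):
        """Shared fixture layer: config and clients built once, re-used."""

        @classmethod
        def setUpClass(cls):
            # A real suite would read config and build API clients here;
            # stubbed out since this is purely a structural sketch.
            cls.client = None

        def wait_for_status(self, get_status, wanted, timeout=300):
            # Common polling utility, kept here rather than copy-pasted
            # into every test that creates a server or volume.
            deadline = time.time() + timeout
            while time.time() < deadline:
                if get_status() == wanted:
                    return
                time.sleep(5)
            self.fail('Timed out waiting for status %s' % wanted)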

Daryl

On Oct 19, 2011, at 9:58 AM, Rohit Karajgi wrote:

Hello Stackers,

I was at the design summit sessions that were ‘all about QA’ and had shown my 
interest in supporting this effort. Sorry I could not be present at the first QA 
IRC meeting due to a vacation.
I had a chance to eavesdrop on the meeting log, and Nachi-san also shared his 
account of the outcome with me. Thanks Nachi!

Just a heads up to put some of my thoughts on the ML before today’s meeting.
I had a look at the various (7 and counting??) test frameworks out there for 
testing the OpenStack APIs.
Jay, Gabe and Tim put up a neat wiki 
(http://wiki.openstack.org/openstack-integration-test-suites) to compare many 
of these.

I looked at Lettuce (https://github.com/gabrielfalcao/lettuce) and felt it was 
quite effective. It’s incredibly easy to write tests once the wrappers over the 
application are set up. Easy as in: “Given a ttylinux image create a Server” is 
how a test scenario would be written, in natural language, in a typical .feature 
file (which is basically a list of test scenarios for a particular feature). It 
has nose support, and there’s some neat documentation 
(http://lettuce.it/index.html) too. I was just curious whether anyone has 
already tried out Lettuce with OpenStack? From the ODS, I think the Grid 
Dynamics guys already have their own implementation. It would be great if one 
of you guys could join the meeting and throw some light on how you’ve got it to 
work.
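
For what it’s worth, the plumbing behind a scenario like that is plain Python
step definitions. A hedged sketch (the step text and the fake create call are
invented for illustration):

    # Sketch of Lettuce step definitions; the regexes and the fake create
    # call are invented for illustration.
    from lettuce import step, world

    @step(u'a ttylinux image')
    def given_a_ttylinux_image(step):
        world.image = 'ttylinux'

    @step(u'create a Server')
    def create_a_server(step):
        # A real suite would call novaclient or raw HTTP here.
        world.server = {'name': 'lettuce-demo', 'image': world.image}
        assert world.server['image'] == 'ttylinux'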
Just for those who may be unaware, Soren’s branch openstack-integration-tests 
(https://github.com/openstack/openstack-integration-tests) is actually a merge 
of Kong and Stacktester.

The other point I wanted more clarity on was using both novaclient AND httplib2 
to make the API requests. Though wwkeyboard did mention issues regarding spec 
bug proliferation into the client, how can we best utilize this dual approach 
and avoid another round of duplicate test cases? Maybe we target novaclient 
first and then use httplib2 to fill in the gaps? After all, novaclient does 
call httplib2 internally.
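
To make the dual approach concrete, a hedged sketch (the endpoints, credentials,
and tenant id are placeholders, and the novaclient.v1_1 module path assumes a
client of roughly this vintage):

    # The same listing exercised two ways; all endpoints/credentials are fake.
    import httplib2
    from novaclient.v1_1 import client as nova_client

    # Through the bindings: exercises novaclient itself as well as the API.
    nova = nova_client.Client('user', 'password', 'tenant',
                              'http://keystone:5000/v2.0/')
    servers = nova.servers.list()

    # Raw httplib2: pins down the wire format (status code, body) directly.
    http = httplib2.Http()
    resp, body = http.request('http://nova:8774/v2/tenant_id/servers', 'GET',
                              headers={'X-Auth-Token': 'fake-token'})
    assert resp.status == 200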

I would like to team up with Gabe and others for the unified test runner task. 
Please count me in if you’re doing some division of labor there.

Thanks!
Rohit

(NTT)
From: openstack-bounces+rohit.karajgi=vertex.co...@lists.launchpad.net 
On Behalf Of Gabe Westmaas
Sent: Monday, October 10, 2011 9:22 PM
To: openstack@lists.launchpad.net
Subject: [Openstack] [QA] openstack-integration-tests

I'd like to try to summarize and propose at least one next step for the content 
of the openstack-integration-tests git repository.  Note that this is only 
about the actual tests themselves, and says absolutely nothing about any gating 
decisions made in other sessions.

First, there was widespread agreement that in order for an integration suite to 
be run in the openstack jenkins, it should be included in the community github 
repository.

Second, it was agreed that there is value in having tests in multiple 
languages, especially in the case where those tests add value beyond the base 
language. Examples of this may include testing using another set of bindings 
(and therefore testing the API), or using a testing framework that just takes a 
different approach to testing. Invalid examples include implementing the exact 
same test in another language simply because you don’t like python.

Third, it was agreed that there is value in testing using novaclient as well as 
httplib2. Similarly, there is value in testing both XML and JSON.

Fourth, for black box tests, any fixture setup that a suite of tests requires 
should be done via a script that is close to but