This is still a work in progress, so the assertions may not yet be fully 
developed for every assertable thing. There is also an important cost/benefit 
calculation in how deep we go with that (for example, I do not include 
assertions for the workarounds for Mitaka Tacker limitations). But I expect to 
beef it up further.

The quickest way to determine how things are done is to find the related 
assertion in the script; there will be a specific log entry that says what's 
happening in that step. If there's time, we can beef up the descriptions, but 
the general principle is to make it easy for someone who cares to see what's 
happening, and to avoid duplicating that description in the test header. If we 
want to develop support tools that pull these lines from tests to create some 
"test plan doc", we can use these log entries as part of that step-by-step 
description.

For example:

  echo "$0: $(date) verify vHello server is running"
  apt-get install -y curl
  if [[ $(curl $SERVER_URL | grep -c "Hello World") == 0 ]]; then fail; fi
  assert "models-vhello-001 (vHello VNF creation)" true
  assert "models-tacker-003 (VNF creation)" true
  assert "models-tacker-vnfd-002 (artifacts creation)" true
  assert "models-tacker-vnfd-003 (user_data creation)" true

The step above shows that the method of determining that the VNF is really 
"active" is to curl the web server URL and check for the expected response. 
The other assertions are implied to be true by the success of this operation: 
the artifact is the image upon which the server is running, and the user_data 
contained the code that installed the web server, so a successful curl 
confirms both were set up as expected.

The next step does the same thing, but looks for the UUID that would be 
present in the returned page if the user_data code successfully pulled the ID 
from the config drive (and thus the config drive was set up as expected):

  echo "$0: $(date) verify contents of config drive are included in web page"
  id=$(curl $SERVER_URL | awk "/uuid/ { print \$2 }")
  if [[ -z "$id" ]]; then fail; fi
  assert "models-tacker-vnfd-001 (config_drive creation)" true

(and actually now looking at this I see I need to change the “fail” lines to be 
“assert” lines also…)
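
For concreteness, a rough sketch of the kind of assert helper this implies 
(not the actual code in the Models repo; the log path is a placeholder):

  # Sketch only: record each assertion result against its ID so the results
  # can later be pushed to the test results database.
  RESULTS_LOG=/tmp/models_assertions.log

  assert() {
    if [[ "$2" == true ]]; then
      echo "$0: $(date) ASSERTION PASSED: $1" | tee -a $RESULTS_LOG
    else
      echo "$0: $(date) ASSERTION FAILED: $1" | tee -a $RESULTS_LOG
      exit 1
    fi
  }

With a helper like that, the bare "fail" in the curl checks above would become 
e.g. assert "models-vhello-001 (vHello VNF creation)" false in the failure 
branch.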

Thanks,
Bryan Sullivan | AT&T

From: David McBride [mailto:dmcbr...@linuxfoundation.org]
Sent: Friday, December 02, 2016 10:12 AM
To: SULLIVAN, BRYAN L <bs3...@att.com>
Cc: test...@lists.opnfv.org; opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] Feedback requested on test documentation 
example from Models

Bryan,

In general, I like this approach.

My only comment is that I think that the "Post-State" criteria should be more 
specific and detailed.  For example, "Tacker is installed and active in a 
docker container".  Specifically, how is this determined? OS commands? Docker 
commands? Something else?

Similarly, "the VNF is running and verified".  What does "verified" mean?  How, 
specifically, is this determined?

David

On Thu, Dec 1, 2016 at 6:56 PM, SULLIVAN, BRYAN L 
<bs3...@att.com> wrote:
Hi all,

In the test wg meetings I’ve mentioned the goals I have for optimizing the 
effort required to develop test documentation and coverage. In summary,

•         Tests should be self-documenting – no “test plan” should be needed 
beyond the entries in the test database and the comments in the tests

•         Tests should include specific (identified by an ID) test assertions, 
which provide all the information necessary to understand the steps of the 
test, beyond a general description. For example, for the test 
vHello_Tacker.sh<https://git.opnfv.org/models/tree/tests/vHello_Tacker.sh> see 
the header below.

•         The test assertions can be managed by some database if that's 
necessary and as effective as a simple flat file. For now, a flat file will 
do, and the assertions can be further described as needed on a wiki; see 
test-assertions<https://wiki.opnfv.org/display/models/test-assertions> on the 
Models wiki as an example. With the flat file approach we can use simple bash 
scripts to change (by sed etc.) the IDs as needed, e.g. as they get renamed, 
split, or merged, as typically happens as tests are developed (see the first 
sketch after this list).

•         Test coverage can be assessed by processing the set of test scripts 
to pull out the referenced assertions and comparing them to the test assertion 
database (see the second sketch after this list). Or we can develop the test 
coverage map by adding assertion pass/fail reports (for the discrete 
assertions, in addition to the overall test) to the test results database 
(recommended).
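
To illustrate the rename case from the flat-file bullet above (purely 
hypothetical IDs; tests/*.sh and assertions.txt are placeholder paths):

  # Hypothetical example: an assertion ID gets renamed, so update every test
  # script and the flat file that reference the old ID.
  old_id="models-example-001"
  new_id="models-example-001a"
  sed -i "s/$old_id/$new_id/g" tests/*.sh assertions.txt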
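
And a sketch of the coverage check from the last bullet (again illustrative 
only; it assumes the flat file lists one assertion ID per line):

  # Illustrative only: list the assertion IDs referenced by the tests and
  # compare them against the flat file of defined assertions.
  grep -ho 'models-[a-z-]*-[0-9]*' tests/*.sh | sort -u > referenced.txt
  sort -u assertions.txt > defined.txt
  echo "Assertions defined but not referenced by any test:"
  comm -13 referenced.txt defined.txt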

I’d like to get your feedback on this approach. The bottom line goal is that we 
have test documentation and coverage info with the least development and 
maintenance effort.

# What this is: Deployment test for the Tacker Hello World blueprint.
#
# Status: work in progress, planned for OPNFV Danube release.
#
# Use Case Description: A single-node simple python web server, connected to
# two internal networks (private and admin), and accessible via a floating IP.
# Based upon the OpenStack Tacker project's "tosca-vnfd-hello-world" blueprint,
# as extended for testing of more Tacker-supported features as of OpenStack
# Mitaka.
#
# Pre-State:
# models-joid-001 | models-apex-001 (installation of OPNFV system)
#
# Test Steps and Assertions:
# 1) bash vHello_Tacker.sh tacker-cli [setup|start|run|stop|clean]
#   models-tacker-001 (Tacker installation in a docker container on the jumphost)
#   models-nova-001 (Keypair creation)
# 2) bash vHello_Tacker.sh tacker-cli start
#   models-tacker-002 (VNFD creation)
#   models-tacker-003 (VNF creation)
#   models-tacker-vnfd-001 (config_drive creation)
#   models-tacker-vnfd-002 (artifacts creation)
#   models-tacker-vnfd-003 (user_data creation)
#   models-vhello-001 (vHello VNF creation)
# 3) bash vHello_Tacker.sh tacker-cli stop
#   models-tacker-004 (VNF deletion)
#   models-tacker-005 (VNFD deletion)
#   models-tacker-vnfd-004 (artifacts deletion)
# 4) bash vHello_Tacker.sh tacker-cli clean
#   TODO: add assertions
#
# Post-State:
# After step 1, Tacker is installed and active in a docker container, and the
# test blueprint etc. are prepared in a shared virtual folder /tmp/tacker.
# After step 2, the VNF is running and verified.
# After step 3, the VNF is deleted and the system is returned to the step 1
# post-state.
# After step 4, the system is returned to the test pre-state.
#
# Cleanup: bash vHello_Tacker.sh tacker-cli clean


Thanks,
Bryan Sullivan | AT&T


--
David McBride
Release Manager, OPNFV
Mobile: +1.805.276.8018
Email/Google Talk: dmcbr...@linuxfoundation.org
Skype: davidjmcbride1
IRC: dmcbride