Here are the answers to your questions...

1.       As we know, the ONAP Vendor Software Product (VSP) is a zip containing 
the required HOT template with MANIFEST.json. Does the VVP script validate this 
VSP as a whole, including the MANIFEST.json, or does it only validate the HOT 
templates inside the VSP?



[Trevor] For a VNF created from Heat, the VSP is an artifact created in SDC 
itself so we are not uploading a VSP nor verifying the VSP as a whole - only 
the Heat.  The Heat Orchestration Template package is a zip file with all files 
(Heat templates, environment files, and any supporting files/scripts) in the 
root directory of that zip file.  This is what is uploaded into SDC.  There is 
no MANIFEST.json file included per the Heat requirements.



2.       As VNFREQS defines a list of HOT template requirements with MUST and 
MUST NOT criteria,

a.       How can a given VSP be validated against only the MUST VNFREQS using 
https://github.com/onap/vvp-validation-scripts ?


[Trevor] The validation scripts already only validate MUST and MUST NOT 
requirements - not SHOULD and MAY.  If you're suggesting we need a way to 
filter further on testing just MUSTs and not MUST NOTs, please elaborate on why 
you think that is necessary and provide an example.  All the tests in the 
validation-scripts section must execute and pass for Heat to be considered 
valid.


b.      How can a given VSP be validated against a given set of VNFREQS using 
these validation scripts?

NOTE: This assumes the VVP scripts support the VSP as a whole.



[Trevor]  Please provide an example or use case of what you mean here.  The 
validation-scripts only validate the Heat requirements.  The validations 
already bypass checks that are not necessary based on conditions.  Why would 
you need further filtering of the execution?



3.       Which versions of the OpenStack HOT template (heat_template_version) 
do the VVP scripts support?



a.       [Trevor]  There are no restrictions placed on heat_template_version in 
the validation-scripts directly, as the version of OpenStack is somewhat 
operator-specific.  The validation-scripts support versions as early as 
2013-05-23, but we haven't extensively tested them against all versions.

I don't have any issue with you all writing a wrapper.  You can either take the 
output files we documented and convert the results to a compatible format or we 
can add an additional output format for consumption by Dovetail/OVP.  If you 
would like us to produce an additional output format, then I would need some 
additional detail on the format described in your email.  It's relatively 
trivial to create a new output format.

{
  "results" : [ {
    "error" : "SUCCESS"
  } ],
  "build_tag" : "CVC",
  "criteria" : "PASS"
}


*         In "results", is there one entry per validation?

*         In "results", what does it look like if there's a failure?

*         In "results", are there any restrictions on length or formatting of 
the error messages?  Some of the VVP information is multi-line.

*         What does "build_tag" represent?

*         What does "criteria" refer to?  Is it the overall result of the 
validation?  If so, what are the valid values?
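
As a rough illustration of the kind of wrapper discussed above, the snippet 
below sketches how the VVP failures output might be mapped onto this result 
format.  The "message" key on each failure entry is an assumption based on the 
failures-file mapping described later in this thread, not a confirmed schema.

```python
def to_cvc_result(failures, build_tag="CVC"):
    """Convert parsed VVP "failures" entries into the CVC result format.

    `failures` is the parsed JSON list from the VVP failures output file.
    The "message" key is assumed from the failures-file mapping in this
    thread.  An empty list means the Heat package passed validation.
    """
    if not failures:
        return {"results": [{"error": "SUCCESS"}],
                "build_tag": build_tag,
                "criteria": "PASS"}
    # One entry per violation, mirroring the sample format above
    return {"results": [{"error": f["message"]} for f in failures],
            "build_tag": build_tag,
            "criteria": "FAIL"}
```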

Thanks,
Trevor


From: Kanagaraj Manickam [mailto:kanagaraj.manic...@huawei.com]
Sent: Wednesday, January 16, 2019 3:10 AM
To: xudan (N) <xuda...@huawei.com>; LOVETT, TREVOR J <tl2...@att.com>; 
opnfv-tech-discuss@lists.opnfv.org
Cc: STARK, STEVEN <ss8...@att.com>; WRIGHT, STEVEN A <sw3...@att.com>; 
HALLAHAN, RYAN <rh1...@att.com>; WEINSTOCK, ALAN M <aw2...@att.com>; Gaoweitao 
(Victor, Cloudify Network OSDT) <victor....@huawei.com>; 
mok...@intracom-telecom.com; pkara...@intracom-telecom.com
Subject: RE: [opnfv-tech-discuss] [Dovetail] Example VVP Output per Action Item 
from Developer Event

Hi Trevor,

During the Casablanca release, in VTP, we introduced test cases for validating 
whether a given VNF CSAR file is compliant with ETSI SOL004, and they produce 
results with the following details:

1.       Passed/failed

2.       Error details.

Current result format is:
{
  "results" : [ {
    "error" : "SUCCESS"
  } ],
  "build_tag" : "CVC",
  "criteria" : "PASS"
}

So this would help to verify whether a given CSAR is compliant and, if not, to 
let the user know the errors in the CSAR. We are also enhancing this result 
with the following details, as required for CVC:

1.       VNF is TOSCA or HOT based

2.       VNF Template version

I believe a similar approach could be followed for validating the HOT-based VNF 
as well. In this regard, could you please help find answers to the following 
queries, which would help integrate the VVP scripts into VTP as a validation 
test case, as discussed in the Paris CVC meeting last week:

1.       As we know, the ONAP Vendor Software Product (VSP) is a zip containing 
the required HOT template with MANIFEST.json. Does the VVP script validate this 
VSP as a whole, including the MANIFEST.json, or does it only validate the HOT 
templates inside the VSP?

2.       As VNFREQS defines a list of HOT template requirements with MUST and 
MUST NOT criteria,

a.       How can a given VSP be validated against only the MUST VNFREQS using 
https://github.com/onap/vvp-validation-scripts ?

b.      How can a given VSP be validated against a given set of VNFREQS using 
these validation scripts?

NOTE: This assumes the VVP scripts support the VSP as a whole.

3.       Which versions of the OpenStack HOT template (heat_template_version) 
do the VVP scripts support?

I assume that we need to introduce a wrapper test case over the VVP scripts, 
which will run the VVP scripts and produce the result in the form required by 
the OVP portal/Dovetail. Please let me know your inputs.

Thank you.

Regards
Kanagaraj Manickam
Senior System Architect
P&S ONAP
Huawei Technologies India Pvt. Ltd.
Survey No. 37, Next to EPIP Area, Kundalahalli, Whitefield
Bengaluru-560066, Karnataka
Tel: + 91-80-49160700 ext 72410 Mob: 9945602938

________________________________
This e-mail and its attachments contain confidential information from HUAWEI, 
which
is intended only for the person or entity whose address is listed above. Any 
use of the
information contained herein in any way (including, but not limited to, total 
or partial
disclosure, reproduction, or dissemination) by persons other than the intended
recipient(s) is prohibited. If you receive this e-mail in error, please notify 
the sender by
phone or email immediately and delete it!

From: xudan (N)
Sent: 16 January 2019 12:14
To: LOVETT, TREVOR J <tl2...@att.com<mailto:tl2...@att.com>>; 
opnfv-tech-discuss@lists.opnfv.org<mailto:opnfv-tech-discuss@lists.opnfv.org>
Cc: STARK, STEVEN <ss8...@att.com<mailto:ss8...@att.com>>; WRIGHT, STEVEN A 
<sw3...@att.com<mailto:sw3...@att.com>>; HALLAHAN, RYAN 
<rh1...@att.com<mailto:rh1...@att.com>>; WEINSTOCK, ALAN M 
<aw2...@att.com<mailto:aw2...@att.com>>; Gaoweitao (Victor, Cloudify Network 
OSDT) <victor....@huawei.com<mailto:victor....@huawei.com>>; Kanagaraj Manickam 
<kanagaraj.manic...@huawei.com<mailto:kanagaraj.manic...@huawei.com>>; 
mok...@intracom-telecom.com<mailto:mok...@intracom-telecom.com>; 
pkara...@intracom-telecom.com<mailto:pkara...@intracom-telecom.com>
Subject: RE: [opnfv-tech-discuss] [Dovetail] Example VVP Output per Action Item 
from Developer Event

Hi all,

The results look very friendly for users to debug with, but it would be better 
if there were an explicit value showing whether the run is PASS or FAIL.
The results seem to come from running one or several test cases against several 
VNFs, but it is difficult to find the test case details (how many test cases 
there are and the result of each of them).
When doing the compliance tests, it always runs a set of chosen test cases 
against one single VNF, and it should report which test cases PASS and which 
FAIL (ideally with the failure reason for debugging).
Please take these into consideration when VTP integrates these test cases.

BR,
Dan Xu

From: 
opnfv-tech-discuss@lists.opnfv.org<mailto:opnfv-tech-discuss@lists.opnfv.org> 
[mailto:opnfv-tech-discuss@lists.opnfv.org] On Behalf Of LOVETT, TREVOR J
Sent: Friday, January 11, 2019 12:18 AM
To: 
opnfv-tech-discuss@lists.opnfv.org<mailto:opnfv-tech-discuss@lists.opnfv.org>
Cc: STARK, STEVEN <ss8...@att.com<mailto:ss8...@att.com>>; WRIGHT, STEVEN A 
<sw3...@att.com<mailto:sw3...@att.com>>; HALLAHAN, RYAN 
<rh1...@att.com<mailto:rh1...@att.com>>; WEINSTOCK, ALAN M 
<aw2...@att.com<mailto:aw2...@att.com>>; Gaoweitao (Victor, Cloudify Network 
OSDT) <victor....@huawei.com<mailto:victor....@huawei.com>>
Subject: [opnfv-tech-discuss] [Dovetail] Example VVP Output per Action Item 
from Developer Event

Per an action item out of the developer event in France, the ONAP VVP team is 
providing a sample of what the default output of the vvp/validation-scripts 
produce.  This can be examined by the Dovetail team to determine if it can be 
consumed for the purpose of the LFN certification of Heat templates.

The scripts can produce several different output formats, but I'm using the CSV 
output format as it's the most machine-readable.  There are also HTML and Excel 
output formats.  It's fairly trivial to add additional output formats, so if 
there's an existing format that Dovetail can consume, please provide the 
details and we can create that format if needed.

The sample_template_with_errors.zip contains the Heat template files that 
produced these reports.

Upon completion of a validation run, several files will be written to the 
outputs folder.  The two key files are:


*         failures - This file contains information on the failures in JSON 
format.  This file will not be present if no requirement violations are 
detected.

*         report.csv - Similar content to failures, but in CSV format, and it 
also includes some additional data in a header section.  This file will be 
present even if no violations are detected, but in that case it will have no 
rows in the error section of the report.

I've attached samples of both files to this email.  I've also attached a copy 
of what report.csv looks like when there are no errors (see report - 
SUCCESS.csv).

The format of the files is fairly straightforward, but here's some additional 
documentation of the fields in each file:

report.csv file


*         Header - The first 9 rows of the csv file represent a header section 
that provides the following information

o   Row 1 - Static report header: "Validation Failures"

o   Row 2  - Blank

o   Row 3 - Profile Selected: Always ONAP

o   Row 4 - Tool Version: Semantic version ID of the tool that produced the 
report

o   Row 5 - Report Generated At: Date and Time Stamp of when the report was 
generated (ex: 2018-12-18 12:34:19.064412 Eastern Standard Time)

o   Row 6 - Directory Validated: Shows the local, absolute directory that was 
scanned

o   Row 7 - Checksum: MD5 hash of all files in the directory that was scanned

o   Row 8 - Total Errors: The count of all errors encountered

o   Row 9 - Blank



*         Collection Failures - In the unlikely event there is a problem 
setting up the test suite to run, an optional section of the report will be 
written describing these errors.  There will be 2 header rows, and then 1 row 
for each setup failure encountered.  It is possible to have both collection 
failures and validation failures, as the validation-scripts will execute any 
tests that did not fail setup.  However, if this section exists it represents 
an unexpected error and an invalid run of the tool.  These issues should be 
referred to the VVP team for investigation.

o   Row 10 (if these errors are encountered): Start of Collection Failures will 
be denoted by a row containing "WARNING: The following unexpected errors..." in 
the first column of the row.

o   Row 11 (if these errors are encountered): Collection Failure heading 
columns: "Validation File", "Test", "Fixtures", "Error"

o   Row 12-N (if these errors are encountered): Rows for each error.  Each 
column is a string.


*         Validation Failures (Note the start row depends on whether or not 
collection failures are encountered)

o   Start of report  - Static Report Header denoting the start of the 
individual errors.  Always "Validation Failures"

o   Error Report Column Headings - "Input File", "Test", "Requirements", 
"Resolution Steps", "Error Message", "Raw Test Output"

o   Following the heading -  0 or more individual error rows for each violation 
found



Column: Input File
Description: List of files the error was detected in
Format: Semicolon-delimited list
Required/Optional: Required

Column: Test
Description: Name of the Python module that detected the error
Format: String
Required/Optional: Required

Column: Requirements
Description: The full requirement text that the test validated.  There can be 
multiple related requirements listed here
Format: Multi-line string
Required/Optional: Optional
Notes: Almost all tests are mapped to 1 or more requirements, but there may be 
a few that are not yet mapped

Column: Resolution Steps
Description: Optional information to aid in resolution
Format: Multi-line string
Required/Optional: Optional
Notes: Consider this deprecated

Column: Error Message
Description: Detailed error message produced by the validation test.  This 
should provide specific information on why the test failed
Format: Multi-line string
Required/Optional: Required

Column: Raw Test Output
Description: Full output from pytest.  This includes the test content itself 
and the detailed error message
Format: Multi-line string
Required/Optional: Required


failures file

This file is similar to the report file, but in JSON format.  There are a few 
notable differences beyond CSV vs. JSON:

*         The full requirement text is not available in this file - only the 
requirement IDs themselves.

*         Resolution Steps is also not available in this file

*         Collection failures are not included in this report

The following mapping shows how the JSON elements map to the columns above.


-          file -> Input File: Format is a JSON list

-          vnfrqts -> N/A: Format is a JSON list of the requirement IDs 
associated with the test

-          test -> N/A: Name of the test method

-          test_file -> Test: Name of the Python file containing the test

-          raw_output -> Raw Test Output

-          message -> Error Message
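
Based on that mapping, a small illustrative helper is sketched below.  The key 
names are taken from the mapping above; the overall entry shape, and the choice 
to carry the requirement IDs in a separate "Requirement IDs" field, are 
assumptions for illustration only.

```python
def failure_to_row(entry):
    """Map one failures-file JSON entry onto the report.csv column names.

    Key names follow the mapping in this thread; the full entry shape is
    an assumption.
    """
    files = entry.get("file", [])
    return {
        # "file" is a JSON list; report.csv uses a semicolon-delimited list
        "Input File": ";".join(files) if isinstance(files, list) else files,
        "Test": entry.get("test_file"),
        "Error Message": entry.get("message"),
        "Raw Test Output": entry.get("raw_output"),
        # requirement IDs have no dedicated report column; carried alongside
        "Requirement IDs": entry.get("vnfrqts", []),
    }
```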

Thanks,
Trevor

