[opnfv-tech-discuss] [dovetail] New dovetail.cvp.0.2.0 draft release notes

2017-07-07 Thread Wenjing Chu
Thanks to Xudan for preparing the notes below and for making many of the
contributions, including supporting early users.

As you may have noticed, we started preparing weekly tags for the draft
dovetail.cvp test suite a week ago; the latest, this week, is 0.2.0. The
"User Guide" document draft has also been updated to reflect the changes.
http://artifacts.opnfv.org/dovetail/review/34285/testing_user_userguide/index.html
.

If you are testing with dovetail or keeping an eye on its progress, please
refer to the latest software and information. We will continue to update on a
weekly basis going forward. Let us know if you have any feedback on any part
of the draft artifacts, software, or process.

Regards
Wenjing


dovetail.cvp.0.2.0 release notes:
---

1. docker images used in cvp.0.2.0

opnfv/dovetail:cvp.0.2.0 (commit ID
5ddf932dc28bcc47169c3091267d57ba5a99f9b2)

opnfv/yardstick:danube.3.0

opnfv/functest:cvp.0.2.0 (commit ID
3d03bbcfc45d00d4ce995d8aabff5808accb0687)

opnfv/testapi:cvp.0.2.0 (commit ID a7f82c093fab2ad19c7ffb0a81ec5756e91e73ae)

2. changes made since cvp.0.1.0

2.1 dovetail docker image changes

problems found and fixed:

1) https checking support for commercial SUTs

JIRA: DOVETAIL-456

2) sdnvpn path fix

JIRA: DOVETAIL-458

3) log improvements

JIRA: DOVETAIL-450

4) docker image tagged with cvp.0.2.0

JIRA: DOVETAIL-447

2.2 yardstick docker image changes, as related to dovetail

problems found and fixed for HA test cases in yardstick consumed by dovetail

1). Bugfix: Monitor command in tc019 may not show the real nova-api service
status

  JIRA:YARDSTICK-655

2). Bugfix: "monitor_multi" type monitor in HA test case cannot get
"max_recover_time"

  JIRA:YARDSTICK-657

3). Improvement: Terminate openstack service's process using "kill" command

  JIRA:YARDSTICK-659

4). Improvement: "monitor_process" type monitor pass criteria

  JIRA:YARDSTICK-660

5). Bugfix: test.dbf file not deleted after execution of Disk I/O Block
High Availability test

  JIRA:YARDSTICK-696

2.3 functest docker image changes related to dovetail

problems found and fixed for SDNVPN test cases in functest consumed by
dovetail

“Unknown test case or tier 'bgpvpn', or not supported by the given scenario
'bgpvpn'”

https://gerrit.opnfv.org/gerrit/#/c/35051/2

https://gerrit.opnfv.org/gerrit/#/c/36777/3

https://gerrit.opnfv.org/gerrit/#/c/36785/

2.4 testapi docker image changes related to dovetail

problems found and fixed:

user management and some bugfixes


3. wiki tracking testing activities


https://wiki.opnfv.org/display/dovetail/Running+history+for+the+dovetail+tool
(section 1)
___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] [dovetail] Dovetail documents for review

2017-07-07 Thread Cooper, Trevor
I have reworked the "Documents for Review" page to give an overall view of all
the planned documents, highlighting which are being reviewed (with gerrit links
and merge status). I hope this will make it easier for people to review the
documents and keep track of progress. Committers, please review this and
edit/comment to improve.

https://wiki.opnfv.org/display/dovetail/Dovetail+Documentation+for+Review

/Trevor


Re: [opnfv-tech-discuss] [Dovetail] sdnvpn test cases with dovetail.cvp.0.1.0 throw an error

2017-07-07 Thread Srikanth Vavilapalli
Thanks, Dan Xu, for the detailed response.

I changed the pod.yaml as you suggested and ran the HA test suite. I also
enabled the debug option while running the tests and saw the outage times
getting logged in the dovetail.log file.

Thanks
Srikanth

From: xudan (N) [mailto:xuda...@huawei.com]
Sent: Thursday, July 06, 2017 11:55 PM
To: Srikanth Vavilapalli ; 
opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] [Dovetail] sdnvpn test cases with 
dovetail.cvp.0.1.0 throw an error

Hi Srikanth,


  1.  About the pod.yaml file

  1)  I think it’s wrong. Switch to the user “stack” first, then source the
undercloud rc file (named stackrc in my env); the command “openstack server
list” will show the platform nodes. Mine is:

+--------------------------------------+--------------+--------+---------------------+----------------+
| ID                                   | Name         | Status | Networks            | Image Name     |
+--------------------------------------+--------------+--------+---------------------+----------------+
| 8e4380ea-b348-49b7-a5e4-39dc18e8994b | compute-0    | ACTIVE | ctlplane=192.0.2.15 | overcloud-full |
| 49a5f4f6-6b84-4ce3-9a69-654618511dfb | controller-2 | ACTIVE | ctlplane=192.0.2.10 | overcloud-full |
| adcef3e3-f456-4c01-987a-1d72182b3f00 | compute-1    | ACTIVE | ctlplane=192.0.2.17 | overcloud-full |
| f41552b9-a4e4-4445-bd07-e8b025bec9b6 | controller-0 | ACTIVE | ctlplane=192.0.2.14 | overcloud-full |
| f622a698-5fa0-41ff-a0e9-566e3a5e45b7 | controller-1 | ACTIVE | ctlplane=192.0.2.13 | overcloud-full |
+--------------------------------------+--------------+--------+---------------------+----------------+


  2)  Set ${DOVETAIL_HOME}/pre_config/pod.yaml as follows:

nodes:
- name: node1
  role: Controller
  ip: 192.0.2.14                   # node IP
  user: heat-admin                 # node login user
  key_filename: /root/.ssh/id_rsa  # ssh key file
- name: node2
  role: Controller
  ip: 192.0.2.13
  user: heat-admin
  key_filename: /root/.ssh/id_rsa
- name: node3
  role: Compute
  ip: 192.0.2.15
  user: heat-admin
  key_filename: /root/.ssh/id_rsa

The user “heat-admin” above is taken from the installation guide; check the
right value in your platform's installation documents. I can “ssh
heat-admin@192.0.2.14” to verify it.
If deployed in HA mode: since the current tests only cover process HA (not
node HA), you can list just the one Controller “node1” in pod.yaml and it
will work, i.e., if I delete the node2 and node3 info in the above file, it
also works.


  3)  Copy the user “stack” private key (the path is usually
/home/stack/.ssh/id_rsa) to $DOVETAIL_HOME/pre_config/id_rsa
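The pod.yaml and key preparation described above can be sketched as a short shell sequence; the DOVETAIL_HOME default and the existence guard are illustrative assumptions, not part of the dovetail tool itself:

```shell
# Sketch of the pre_config layout described above; adjust the paths to
# your environment. The guard avoids failing where no stack key exists.
export DOVETAIL_HOME="${DOVETAIL_HOME:-$PWD/dovetail}"
mkdir -p "$DOVETAIL_HOME/pre_config"
# pod.yaml with the node entries shown above goes here, e.g.:
# cp pod.yaml "$DOVETAIL_HOME/pre_config/pod.yaml"
if [ -f /home/stack/.ssh/id_rsa ]; then
    cp /home/stack/.ssh/id_rsa "$DOVETAIL_HOME/pre_config/id_rsa"
    chmod 600 "$DOVETAIL_HOME/pre_config/id_rsa"
fi
echo "pre_config ready: $DOVETAIL_HOME/pre_config"
```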



  2.  About the results data

  1)  In the result file, if the value of “sla_pass” is 1, the test passes;
otherwise it fails.

In vim on Linux, I suggest “:%!python -m json.tool” to pretty-print the
result file as JSON so it is easier to read.
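Outside vim, the same pretty-printing and pass/fail check can be done from the shell; the nested layout in the sample below is an assumption, since the thread does not show a full result file:

```shell
# "result.json" stands in for a results/dovetail.ha.tc*.out file; the
# real files are larger, but "sla_pass" is what decides pass/fail.
cat > result.json <<'EOF'
{"status": 1, "result": [{"benchmark": {"data": {"sla_pass": 1}}}]}
EOF
python3 -m json.tool result.json     # same effect as :%!python -m json.tool in vim
grep -c '"sla_pass": 1' result.json  # a count > 0 means the SLA check passed
```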


  2)  The outage time is not currently shown in the dovetail.ha.tc***.out
file.

If you run “dovetail run --testsuite proposed_tests --testarea ha -d” (“-d”
enables debug logging), you can find it in results/dovetail.log, for example:


2017-07-06 20:17:22,211 - container.Container - DEBUG - 2017-07-06 20:17:22,211 
yardstick.benchmark.scenarios.availability.monitor.basemonitor 
basemonitor.py:155 DEBUG the monitor result:{'total_time': 10.886008977890015, 
'outage_time': 1.1303958892822266, 'outage_count': 1, 'last_outage': 
1499372232.053993, 'first_outage': 1499372230.923597, 'total_count': 8}
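A quick way to pull the outage numbers out of that log from the shell; the sample line below is copied from the debug output above, and the grep pattern is only a sketch, not part of dovetail:

```shell
# "dovetail.log" stands in for results/dovetail.log.
cat > dovetail.log <<'EOF'
... DEBUG the monitor result:{'total_time': 10.886008977890015, 'outage_time': 1.1303958892822266, 'outage_count': 1}
EOF
# Extract every outage_time value reported by the HA monitors.
grep -o "'outage_time': [0-9.]*" dovetail.log | awk '{print $2}'
```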

Regards,
Dan Xu

From: Srikanth Vavilapalli [mailto:srikanth.vavilapa...@ericsson.com]
Sent: July 7, 2017 2:48
To: Srikanth Vavilapalli; xudan (N); opnfv-tech-discuss@lists.opnfv.org
Subject: RE: [opnfv-tech-discuss] [Dovetail] sdnvpn test cases with 
dovetail.cvp.0.1.0 throw an error

Hi

Just continuing my dovetail testing with the HA test suite; I have a few questions:


  1.  I created the pod.yaml file as shown below. Are these correct settings
for an apex-based OPNFV deployment?

root@r720-003 ~/dovetail/results $ cat ../pre_config/pod.yaml
nodes:
- name: node1
  role: Controller
  ip: 192.168.122.140              # <- apex undercloud IP
  user: stack                      # <- apex undercloud login user
  key_filename: /root/.ssh/id_rsa  # <- apex undercloud ssh key file


  2.  The test run has generated output files for each test case. What is the
way to interpret this output? Which fields in that log indicate the outage
time and recovery time? I am expecting some outages for these tests because
my backend OPNFV deployment is running in non-HA mode.

root@r720-003 ~/dovetail/results $ cat 

Re: [opnfv-tech-discuss] Multiple docker containers from one project

2017-07-07 Thread Beierl, Mark
Hello,

Having looked over the docker-hub build service, I also think this might be the 
better approach.  Less code for us to maintain, and the merge job from OPNFV 
Jenkins can use the web hook to remotely trigger the job on docker-hub.

Who has the opnfv credentials for docker-hub, and the credentials for the 
GitHub mirror that can set this up?  Is that the LF Helpdesk?

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

On Jul 7, 2017, at 11:01, Xuan Jia wrote:

+1 Using build service from docker-hub

On Thu, Jul 6, 2017 at 11:42 PM, Yujun Zhang (ZTE) wrote:
Has anybody considered using the build service from docker-hub [1]?

It supports multiple Dockerfiles from the same repository and is easy to
integrate with the OPNFV GitHub mirror.

[1]: https://docs.docker.com/docker-hub/builds/


On Thu, Jul 6, 2017 at 11:02 PM, Jose Lausuch wrote:
Hi Mark,

I would lean toward option 1); it sounds better than searching for a file. We
could define specific values of the DOCKERFILE var for each project.
/Jose


From: Beierl, Mark [mailto:mark.bei...@dell.com]
Sent: Thursday, July 06, 2017 16:18
To: opnfv-tech-discuss@lists.opnfv.org
Cc: Julien; Fatih Degirmenci; Jose Lausuch
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

Ideas:


  *   Change the DOCKERFILE parameter in the releng jjb so that it can accept a
comma-delimited list of Dockerfile names and paths.  The problem with this, of
course, is how do I default it to be different for StorPerf vs. Functest, etc.?
  *   Change opnfv-docker.sh to search for the named DOCKERFILE in all
subdirectories.  This should cover the .aarch64 and vanilla Dockerfile cases.

Please +1/-1 or propose other ideas, thanks!
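A rough sketch of the second idea, with a throwaway layout standing in for a real repo; the directory names and the echoed build command are illustrative, not the releng implementation:

```shell
# Fake repo layout with a vanilla and an .aarch64 Dockerfile.
mkdir -p demo/docker demo/reporting/docker
touch demo/docker/Dockerfile demo/reporting/docker/Dockerfile.aarch64
# Discover every Dockerfile under the tree instead of taking a single
# DOCKERFILE path, then show the build each one would trigger.
find demo -name 'Dockerfile*' | sort | while read -r df; do
    echo "would run: docker build -f $df -t opnfv/demo $(dirname "$df")"
done
```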

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

On Jun 24, 2017, at 04:05, Jose Lausuch wrote:

+1

No need for an additional repo; the logic can be in Releng.
Functest will probably move to different containers some time soon, so that is 
something we could also leverage.

-Jose-


On 23 Jun 2017, at 18:39, Julien wrote:

Agree,

If StorPerf can list some rules and examples, the current scripts can be
adapted for multiple docker image building, and other projects can use this
type of change. It is not worth adding a new repo just to build a new image.



On Wed, Jun 21, 2017 at 2:26 AM, Fatih Degirmenci wrote:
Hi Mark,

It is perfectly fine to have different build processes and/or number of 
artifacts for the projects from releng point of view.

Once you decide what to do for storperf, we can take a look and adapt docker 
build job/script to build storperf images, create additional repos on docker 
hub to push images and activate the builds when things are ready.

/Fatih

On 20 Jun 2017, at 19:18, Beierl, Mark wrote:
Hello,

I'd like to poll the various groups about ideas for how to handle this 
scenario.  I have interns working on breaking down services from StorPerf into 
different containers.  In one case, it will be a simple docker compose that is 
used to fire up existing containers from the repos, but the other case requires 
more thought.

We are creating a second container (storperf-reporting) that will need to be
built and pushed to hub.docker.com.  Right now the build process for docker
images lives in releng, and it only allows for one image to be built.  Should
I be requesting a second git repo in this case, or should we look at changing
the releng process to allow multiple docker images to be built?

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com


Re: [opnfv-tech-discuss] Multiple docker containers from one project

2017-07-07 Thread Alec Hothan (ahothan)

+1 for the central file managed by releng, with (infrequent) changes submitted
through gerrit (this is similar to how many openstack releng features work).
Overall, allowing projects to build more than one docker container is a nice
addition.

With regard to versioning, is there a document describing how container
versioning works? How do project owners tie a container image (with a given
Docker tag such as “danube.1.0”) to a particular git commit or tag? How often
are containers rebuilt, and can they be rebuilt on demand by the project owner?
For example, once Danube 3.0 is out, how can a project owner push out a newer
version of a container (for example, to fix a bug)?

Thanks

  Alec


From: Jose Lausuch
Date: Friday, July 7, 2017 at 1:14 AM
To: Chigang (Justin); Yujun Zhang (ZTE); Beierl, Mark;
opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

Hi Mark,

I wonder if that docker-list.txt should be located in each project’s repo... I
know you want to avoid having patches in Releng and to keep control in your
repo, but I think that is the purpose of Release Engineering :)

Also, that file is useless without the Releng mechanism, so if I clone StorPerf
and see that file there, I won’t understand its purpose, as it isn’t really
part of the storperf framework/code... it can’t work without OPNFV CI.

The idea is good, but we could try to move that logic to Releng. For example, 
something like this:
http://paste.openstack.org/raw/614708/
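The paste above is not reproduced here, but a central list plus a small loop in the docker script might look like this; the file name, the three-column format, and the project/image names are assumptions for illustration only:

```shell
# Hypothetical central list in releng: one line per image a project builds.
cat > docker-list.txt <<'EOF'
# project   dockerfile-path              image
storperf    docker/Dockerfile            opnfv/storperf
storperf    reporting/docker/Dockerfile  opnfv/storperf-reporting
EOF
# The docker script would then iterate over the current project's entries.
grep -v '^#' docker-list.txt | while read -r project path image; do
    echo "would run: docker build -f $path -t $image ."
done
```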

Projects do not add new Dockerfiles so often, so it won’t really be a burden to
put that file in a central place. Besides, it also gives visibility into what
all the projects are building.

For those Dockerfiles that are .patch files we have to pay special attention,
as they can’t be built as-is, but that logic can be handled in the docker
script.


/Jose



From: Chigang (Justin) [mailto:chig...@huawei.com]
Sent: Friday, July 07, 2017 09:02 AM
To: Yujun Zhang (ZTE); Jose Lausuch; Beierl, Mark;
opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

+1

We have used automated builds of docker images on a docker hub account [1],
but each GitHub repository builds just one docker image.
If multiple images are supported, it will be great.

Regards
Justin

https://hub.docker.com/search/?isAutomated=0=0=1=0=Compass4nfv=0


From: opnfv-tech-discuss-boun...@lists.opnfv.org
[mailto:opnfv-tech-discuss-boun...@lists.opnfv.org] on behalf of Yujun Zhang (ZTE)
Sent: July 6, 2017 23:42
To: Jose Lausuch; Beierl, Mark; opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

Has anybody considered using the build service from docker-hub [1]?

It supports multiple Dockerfiles from the same repository and is easy to
integrate with the OPNFV GitHub mirror.

[1]: https://docs.docker.com/docker-hub/builds/


On Thu, Jul 6, 2017 at 11:02 PM Jose Lausuch 
> wrote:
Hi Mark,

I would incline for option 1), it sounds better than searching for a file. We 
could define specific values of DOCKERFILE var for each project.

/Jose


From: Beierl, Mark [mailto:mark.bei...@dell.com]
Sent: Thursday, July 06, 2017 16:18 PM
To: 
opnfv-tech-discuss@lists.opnfv.org
Cc: Julien >; Fatih Degirmenci 
>; Jose 
Lausuch >
Subject: Re: [opnfv-tech-discuss] Multiple docker containers from one project

Ideas:


  *   Change the DOCKERFILE parameter in releng jjb so that it can accept a 
comma delimited list of Dockerfile names and paths.  Problem with this, of 
course, is how do I default it to be different for StorPerf vs. Functest, etc?
  *   Change the opnfv-docker.sh to search for the named DOCKERFILE in all 
subdirectories.  This should cover the .aarch64 and vanilla docker file cases.

Please +1/-1 or propose other ideas, thanks!

Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com

On Jun 24, 2017, at 04:05, Jose Lausuch 
> wrote:

+1

No need for an additional repo, the logic can be in Releng..
Functest will probably move to different containers some time soon, so that is 
something we could also leverage.

-Jose-


On 23 Jun 2017, at 18:39, 

Re: [opnfv-tech-discuss] Multiple docker containers from one project

2017-07-07 Thread Jose Lausuch
Hi Mark,

I wonder if that docker-list.txt should be located in each project’s repo... I
know you want to avoid having patches in Releng and to keep control in your
repo, but I think that is the purpose of Release Engineering :)

Also, that file is useless without the Releng mechanism, so if I clone StorPerf
and see that file there, I won’t understand its purpose, as it isn’t really
part of the storperf framework/code... it can’t work without OPNFV CI.

The idea is good, but we could try to move that logic to Releng. For example, 
something like this:
http://paste.openstack.org/raw/614708/

Projects do not add new Dockerfiles so often, so it won’t really be a burden to
put that file in a central place. Besides, it also gives visibility into what
all the projects are building.

For those Dockerfiles that are .patch files we have to pay special attention,
as they can’t be built as-is, but that logic can be handled in the docker
script.


/Jose




[opnfv-tech-discuss] 答复: Multiple docker containers from one project

2017-07-07 Thread Chigang (Justin)
+1

We have used automated builds of docker images on a docker hub account [1],
but each GitHub repository builds just one docker image.
If multiple images are supported, it will be great.

Regards
Justin

https://hub.docker.com/search/?isAutomated=0=0=1=0=Compass4nfv=0





Re: [opnfv-tech-discuss] [Dovetail] sdnvpn test cases with dovetail.cvp.0.1.0 throw an error

2017-07-07 Thread xudan (N)
Hi Srikanth,


1.   About the pod.yaml file

1)  I think it’s wrong. Switch to the user “stack” first, then source the
undercloud rc file (named stackrc in my env); the command “openstack server
list” will show the platform nodes. Mine is:
+--------------------------------------+--------------+--------+---------------------+----------------+
| ID                                   | Name         | Status | Networks            | Image Name     |
+--------------------------------------+--------------+--------+---------------------+----------------+
| 8e4380ea-b348-49b7-a5e4-39dc18e8994b | compute-0    | ACTIVE | ctlplane=192.0.2.15 | overcloud-full |
| 49a5f4f6-6b84-4ce3-9a69-654618511dfb | controller-2 | ACTIVE | ctlplane=192.0.2.10 | overcloud-full |
| adcef3e3-f456-4c01-987a-1d72182b3f00 | compute-1    | ACTIVE | ctlplane=192.0.2.17 | overcloud-full |
| f41552b9-a4e4-4445-bd07-e8b025bec9b6 | controller-0 | ACTIVE | ctlplane=192.0.2.14 | overcloud-full |
| f622a698-5fa0-41ff-a0e9-566e3a5e45b7 | controller-1 | ACTIVE | ctlplane=192.0.2.13 | overcloud-full |
+--------------------------------------+--------------+--------+---------------------+----------------+


2)  Set the ${DOVETAIL_HOME}/pre_config/pod.yaml as follows

nodes:
- name: node1
  role: Controller
  ip: 192.0.2.14                   # node IP
  user: heat-admin                 # node login user
  key_filename: /root/.ssh/id_rsa  # ssh key file
- name: node2
  role: Controller
  ip: 192.0.2.13
  user: heat-admin
  key_filename: /root/.ssh/id_rsa
- name: node3
  role: Compute
  ip: 192.0.2.15
  user: heat-admin
  key_filename: /root/.ssh/id_rsa

The user “heat-admin” above is taken from the installation guide; check the
right value in your platform's installation documents. I can “ssh
heat-admin@192.0.2.14” to verify it.
If deployed in HA mode: since the current tests only cover process HA (not
node HA), you can list just the one Controller “node1” in pod.yaml and it
will work, i.e., if I delete the node2 and node3 info in the above file, it
also works.


3)  Copy the user “stack” private key (the path usually is 
/home/stack/.ssh/id_rsa) to  $DOVETAIL_HOME/pre_config/id_rsa



2.   About the results data

1)  In the result file, if the value of “sla_pass” is 1, the test passes;
otherwise it fails.

In vim on Linux, I suggest “:%!python -m json.tool” to pretty-print the
result file as JSON so it is easier to read.


2)  The outage time is not currently shown in the dovetail.ha.tc***.out file.

If you run “dovetail run --testsuite proposed_tests --testarea ha -d” (“-d”
enables debug logging), you can find it in results/dovetail.log, for example:


2017-07-06 20:17:22,211 - container.Container - DEBUG - 2017-07-06 20:17:22,211 
yardstick.benchmark.scenarios.availability.monitor.basemonitor 
basemonitor.py:155 DEBUG the monitor result:{'total_time': 10.886008977890015, 
'outage_time': 1.1303958892822266, 'outage_count': 1, 'last_outage': 
1499372232.053993, 'first_outage': 1499372230.923597, 'total_count': 8}

Regards,
Dan Xu

From: Srikanth Vavilapalli [mailto:srikanth.vavilapa...@ericsson.com]
Sent: July 7, 2017 2:48
To: Srikanth Vavilapalli; xudan (N); opnfv-tech-discuss@lists.opnfv.org
Subject: RE: [opnfv-tech-discuss] [Dovetail] sdnvpn test cases with 
dovetail.cvp.0.1.0 throw an error

Hi

Just continuing my dovetail testing with the HA test suite; I have a few questions:


  1.  I created the pod.yaml file as shown below. Are these correct settings
for an apex-based OPNFV deployment?

root@r720-003 ~/dovetail/results $ cat ../pre_config/pod.yaml
nodes:
- name: node1
  role: Controller
  ip: 192.168.122.140              # <- apex undercloud IP
  user: stack                      # <- apex undercloud login user
  key_filename: /root/.ssh/id_rsa  # <- apex undercloud ssh key file


  2.  The test run has generated output files for each test case. What is the
way to interpret this output? Which fields in that log indicate the outage
time and recovery time? I am expecting some outages for these tests because
my backend OPNFV deployment is running in non-HA mode.

root@r720-003 ~/dovetail/results $ cat dovetail.ha.tc001.out
{"status": 1, "result": [{"context_cfg": {"nodes": {"node1": {"ip": 
"192.168.122.140", "key_filename": "/root/.ssh/id_rsa", "role": "Controller", 
"name": "node1.LF-785e43fe", "user": "stack"}}}, "scenario_cfg": {"task_id": 
"785e43fe-9ef0-4a0f-a6e2-f813a468915f", "runner": {"object": 
"yardstick.benchmark.scenarios.availability.serviceha.ServiceHA", "type": 
"Iteration", "output_filename": 
"/home/opnfv/yardstick/results/dovetail.ha.tc001.out", "iterations": 1, 
"runner_id": 55}, "tc": "opnfv_yardstick_tc019", "options": {"wait_time": 10, 
"attackers": 

Re: [opnfv-tech-discuss] Including externally licensed code

2017-07-07 Thread morgan.richomme

Hi,

chartjs is under MIT license so no compatibility issue with the default 
apache 2.0


I think both options are possible but if you clone it in your repo, I 
would suggest to create a 3rd-party directory to separate internal code 
from 3rd party code (e.g. 
https://git.opnfv.org/releng/tree/utils/test/reporting)


visualization js lib is also a good topic for the testing group... :)

/Morgan

On 07/07/2017 02:31, Beierl, Mark wrote:

Hello,

Quick question.  I have an intern project that is taking advantage of
chartjs.org code [1].  Should this be included in the docker container as a
git clone, or is it acceptable to clone and include it as part of the
storperf.git repo with the appropriate license?


[1] 
https://gerrit.opnfv.org/gerrit/#/c/37021/1/reporting/docker/src/static/js/Chart.min.js


Regards,
Mark

Mark Beierl
SW System Sr Principal Engineer
Dell EMC | Office of the CTO
mobile +1 613 314 8106
mark.bei...@dell.com






--
Morgan Richomme
Orange/ IMT/ OLN/ CNC/ NCA/ SINA

Network architect for innovative services
Future of the Network community member
Open source Orange community manager


tel. +33 (0) 296 072 106
mob. +33 (0) 637 753 326
morgan.richo...@orange.com


_


This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.
