[opnfv-tech-discuss] [FUEL] [Auto] Specific scenario for ONAP deployment

2018-12-03 Thread Klozik Martin
Hello Fuel team,

I'm a member of the OPNFV Auto team, and our project prepares a script for automatic 
deployment of ONAP on top of the OPNFV platform. Thus we would like to create a 
new specific scenario, which will help OPNFV end users deploy ONAP on both 
physical and virtual PODs.

As our project aims to be multi-platform (i.e. to support both Intel and Arm CPUs), 
we've decided to prepare scenario(s) for the MCP/Fuel installer.

Initial patch is available for review at: 
https://gerrit.opnfv.org/gerrit/#/c/64369/

The patch above adds simple definitions of two scenarios, os-nosdn-onap-noha and 
os-nosdn-onap-ha, which are derived from the generic 
os-nosdn-nofeature-[no]ha scenarios with the following modifications:
1) resources for the virtual OPNFV deployment are increased to be sufficient for 
an ONAP deployment
2) a new "onap" state (script) was created to check the available resources at the 
compute nodes and to pass them to the ONAP installation script from the (cloned) Auto 
repository; the referenced ONAP installation script is available for review at 
https://gerrit.opnfv.org/gerrit/#/c/64371/

The idea was to create a standalone "onap" state script, without any dependencies 
on the rest of the Fuel code, so that the script can be executed manually on an 
existing MCP/Fuel deployment (e.g. one installed by a different scenario).
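For illustration, a manual run could look like this (a hypothetical sketch; it 
assumes the script lands under mcp/config/states/ like the other Fuel state 
scripts, which is defined by the review above):

git clone https://gerrit.opnfv.org/gerrit/fuel
cd fuel
# path and invocation assumed, to be confirmed against the merged patch
sudo ./mcp/config/states/onap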

I would like to know your opinion on the following TODOs:
1) documentation of these new specific scenarios - I would like to document them 
in the Auto project documentation. If needed, we can create a simple note in the 
MCP/Fuel documentation with a link to the related section of the Auto project docs. 
Is this sufficient, or do you prefer a different approach?
2) MCP/Fuel hardcodes the disk size of the virtual POD servers (ctl, cmp, etc.). 
The size is hardcoded to 100G at mcp/scripts/lib_jump_deploy.sh (lines 259 and 
262). However, this is not enough space for instance storage for an ONAP deployment 
(storage is shared from the controller via an NFS share to the computes). The disk 
size should be configurable, ideally from the scenario definition file, so we can 
define a higher value for the "cinder" node in the os-nosdn-onap-* scenario files. 
Cristina pointed out that the YAML definition already supports disk definitions 
(https://github.com/opnfv/pharos/blob/master/labs/arm/virtual2.yaml), but the 
installer doesn't use them. Could you let us know how difficult it would be to 
support a configurable disk size, and whether we can be of any help during its 
implementation?
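To illustrate the second point, the kind of change we have in mind could look 
roughly like this in lib_jump_deploy.sh (a minimal sketch; VM_DISK_SIZE is a 
hypothetical override variable, and the qemu-img call is assumed to match the 
script's current image-sizing logic - the real patch would read the value from 
the scenario/PDF definition instead):

# hypothetical: derive the disk size per node instead of hardcoding 100G
node_disk_size="${VM_DISK_SIZE:-100G}"   # default stays at the current 100G
qemu-img resize "${image_dir}/${image}" "${node_disk_size}"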

Thank you,
Martin



Re: [opnfv-tech-discuss] [fuel] Fuel/MCP virtual deployment failure due to new redis 5.0 config file

2018-11-13 Thread Klozik Martin
Hi Michael,


thanks for your explanation; it's good to know that it is a known issue.


I'll use older Fuel commits until your patch is reverted.


Best Regards,

Martin


From: Michael Polenchuk
Sent: Tuesday 13 November 2018 13:45:21
To: Klozik Martin
Cc: TECH-DISCUSS OPNFV
Subject: Re: [fuel] Fuel/MCP virtual deployment failure due to new redis 5.0 
config file

Hi Martin,

The Redis v5.0 package has been removed from the repository recently,
so we're going to switch back to 3.0 in the reclass model, i.e. revert that patch.

On Tue, Nov 13, 2018 at 3:44 PM Klozik Martin 
<martin.klo...@tieto.com> wrote:

Hi Michael,


after the merge of your patch below, I'm observing errors during the Fuel/MCP 
virtual installation on LAAS (labs.opnfv.org) servers.


https://gerrit.opnfv.org/gerrit/#/c/64871/


The root cause is an incompatible Redis 5.0 configuration file, which is used to 
run the Redis 3.0 server deployed by the installer. If I use a commit from 
before your merge, the installation succeeds.


Steps to reproduce:

1) book an hpe-xx machine at labs.opnfv.org

2) start VPN and ssh into hpe-xx

3) start virtual deployment:

git clone https://gerrit.opnfv.org/gerrit/fuel

cd fuel

git checkout stable/gambia  # first noticed on the master branch, but stable is also 
affected

ci/deploy.sh -l ericsson -p virtual1 -s os-nosdn-nofeature-noha -D

I'm not sure whether only virtual deployments are affected, because I don't have a 
hardware POD to retest it.


Do you know why the Redis 5.0 configuration file is applied while the Redis 
binaries are kept at version 3.0?


Thank you,

Martin


--
  Michael Polenchuk
  Private Cloud / Mirantis Inc.


[opnfv-tech-discuss] [fuel] Fuel/MCP virtual deployment failure due to new redis 5.0 config file

2018-11-13 Thread Klozik Martin
Hi Michael,


after the merge of your patch below, I'm observing errors during the Fuel/MCP 
virtual installation on LAAS (labs.opnfv.org) servers.


https://gerrit.opnfv.org/gerrit/#/c/64871/


The root cause is an incompatible Redis 5.0 configuration file, which is used to 
run the Redis 3.0 server deployed by the installer. If I use a commit from 
before your merge, the installation succeeds.


Steps to reproduce:

1) book an hpe-xx machine at labs.opnfv.org

2) start VPN and ssh into hpe-xx

3) start virtual deployment:

git clone https://gerrit.opnfv.org/gerrit/fuel

cd fuel

git checkout stable/gambia  # first noticed on the master branch, but stable is also 
affected

ci/deploy.sh -l ericsson -p virtual1 -s os-nosdn-nofeature-noha -D

I'm not sure whether only virtual deployments are affected, because I don't have a 
hardware POD to retest it.


Do you know why the Redis 5.0 configuration file is applied while the Redis 
binaries are kept at version 3.0?
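For completeness, the mismatch can be confirmed directly on the affected node 
(a hypothetical check; the config path and the exact 5.0-only directives are 
assumptions on my side):

redis-server --version   # expected to report 3.0.x binaries
# directives that did not exist in 3.0 (e.g. the replica-* family introduced
# in 5.0) make the 3.0 daemon abort on startup with a bad-directive error
grep -nE '^(replica|always-show-logo)' /etc/redis/redis.conf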


Thank you,

Martin


[opnfv-tech-discuss] Auto engineering plan update

2018-11-01 Thread Klozik Martin
Hello Tina,


together with Paul, we have updated the engineering plan at wiki:


https://wiki.opnfv.org/display/AUTO/Engineering+Project+Plan


That is, we have updated the topics we are currently working on. On the other 
hand, there are parts of the plan which are outdated and do not reflect the 
findings and discussions we have had during the last two months.


Thus I would like to ask how you want to proceed with the plan update. 
Would you like to discuss it at the community meeting? Another option is that 
we prepare the plan changes and you review them through the wiki page 
history afterwards.


Please advise,

Martin


Re: [opnfv-tech-discuss] [Auto] huawei-pod12 ONAP status

2018-10-29 Thread Klozik Martin
adding Richard to the loop...


--Martin


From: opnfv-tech-discuss@lists.opnfv.org on behalf of Klozik Martin
Sent: Wednesday 24 October 2018 8:00
To: huangxiangyu; opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] [Auto] huawei-pod12 ONAP status


Hi Harry,


thank you for the hints, we will go ahead with the ONAP re-deployment as suggested.


Best Regards,

Martin


From: huangxiangyu
Sent: Wednesday 24 October 2018 5:01:37
To: Klozik Martin; opnfv-tech-discuss@lists.opnfv.org
Subject: RE: [Auto] huawei-pod12 ONAP status


Hi Martin



On host1 you can run the following commands to recreate the ONAP stack:

cd /root/onap/integration/test/ete

source ./labs/huawei-shanghai/onap-openrc

./scripts/deploy-onap.sh huawei-shanghai



These steps will get all the ONAP VMs spinning, but the scripts called inside 
them are pulled from the internet, which is where some of the problems lie.

Be aware that some Docker tag mismatches exist; you will need to manually 
check the install.log inside the VMs to fix them, and then call 
xx_vm_init.sh.



Regards

Harry



From: opnfv-tech-discuss@lists.opnfv.org 
[mailto:opnfv-tech-discuss@lists.opnfv.org] on behalf of Klozik Martin
Sent: October 23, 2018 14:16
To: huangxiangyu ; opnfv-tech-discuss@lists.opnfv.org
Subject: [opnfv-tech-discuss] [Auto] huawei-pod12 ONAP status



Hi Harry,

after the power-down of huawei-pod12 we are facing an issue with the ONAP 
installation. The ONAP VMs are reported as running by OpenStack, but all of them 
have failed to boot the kernel and are hanging in the initramfs.



Paul did some investigation and found out that it is possible to boot a 
VM manually, i.e. the kernel boots properly. He did some quick checks, 
and it is not clear to us why the VMs can't be booted properly by OpenStack.



Have you seen a similar issue in the past, e.g. on any other Huawei server after 
the recent power-down? We suspect that some OS configuration 
performed during installation was not persisted in the configuration files and 
was thus lost after the pod was powered down.



We would be grateful for any hints, so that we can bring the ONAP installation up and 
running again.

I'm also wondering if you have any notes about the OS and ONAP installation. We 
can access the bash history, but it is not clear whether all of the performed 
activities are really necessary. It would be great if you could document the steps 
required to have the OS deployed. I suppose that for documentation of the ONAP 
installation we have to ask Gary, am I right?



Thank you,

Martin


[opnfv-tech-discuss] [Auto] Sprint 23

2018-10-24 Thread Klozik Martin
Hi Tina,


please transfer all TODO and IN PROGRESS tickets from Sprint 22 to Sprint 23, 
and also add AUTO-87.


Thank you,

Martin


Re: [opnfv-tech-discuss] [Auto] huawei-pod12 ONAP status

2018-10-24 Thread Klozik Martin
Hi Harry,


thank you for the hints, we will go ahead with the ONAP re-deployment as suggested.


Best Regards,

Martin


From: huangxiangyu
Sent: Wednesday 24 October 2018 5:01:37
To: Klozik Martin; opnfv-tech-discuss@lists.opnfv.org
Subject: RE: [Auto] huawei-pod12 ONAP status


Hi Martin



On host1 you can run the following commands to recreate the ONAP stack:

cd /root/onap/integration/test/ete

source ./labs/huawei-shanghai/onap-openrc

./scripts/deploy-onap.sh huawei-shanghai



These steps will get all the ONAP VMs spinning, but the scripts called inside 
them are pulled from the internet, which is where some of the problems lie.

Be aware that some Docker tag mismatches exist; you will need to manually 
check the install.log inside the VMs to fix them, and then call 
xx_vm_init.sh.



Regards

Harry



From: opnfv-tech-discuss@lists.opnfv.org 
[mailto:opnfv-tech-discuss@lists.opnfv.org] on behalf of Klozik Martin
Sent: October 23, 2018 14:16
To: huangxiangyu ; opnfv-tech-discuss@lists.opnfv.org
Subject: [opnfv-tech-discuss] [Auto] huawei-pod12 ONAP status



Hi Harry,

after the power-down of huawei-pod12 we are facing an issue with the ONAP 
installation. The ONAP VMs are reported as running by OpenStack, but all of them 
have failed to boot the kernel and are hanging in the initramfs.



Paul did some investigation and found out that it is possible to boot a 
VM manually, i.e. the kernel boots properly. He did some quick checks, 
and it is not clear to us why the VMs can't be booted properly by OpenStack.



Have you seen a similar issue in the past, e.g. on any other Huawei server after 
the recent power-down? We suspect that some OS configuration 
performed during installation was not persisted in the configuration files and 
was thus lost after the pod was powered down.



We would be grateful for any hints, so that we can bring the ONAP installation up and 
running again.

I'm also wondering if you have any notes about the OS and ONAP installation. We 
can access the bash history, but it is not clear whether all of the performed 
activities are really necessary. It would be great if you could document the steps 
required to have the OS deployed. I suppose that for documentation of the ONAP 
installation we have to ask Gary, am I right?



Thank you,

Martin


[opnfv-tech-discuss] [Auto] huawei-pod12 ONAP status

2018-10-23 Thread Klozik Martin
Hi Harry,

after the power-down of huawei-pod12 we are facing an issue with the ONAP 
installation. The ONAP VMs are reported as running by OpenStack, but all of them 
have failed to boot the kernel and are hanging in the initramfs.

Paul did some investigation and found out that it is possible to boot a 
VM manually, i.e. the kernel boots properly. He did some quick checks, 
and it is not clear to us why the VMs can't be booted properly by OpenStack.

Have you seen a similar issue in the past, e.g. on any other Huawei server after 
the recent power-down? We suspect that some OS configuration 
performed during installation was not persisted in the configuration files and 
was thus lost after the pod was powered down.

We would be grateful for any hints, so that we can bring the ONAP installation up and 
running again.

I'm also wondering if you have any notes about the OS and ONAP installation. We 
can access the bash history, but it is not clear whether all of the performed 
activities are really necessary. It would be great if you could document the steps 
required to have the OS deployed. I suppose that for documentation of the ONAP 
installation we have to ask Gary, am I right?

Thank you,
Martin


Re: [opnfv-tech-discuss] [Auto] Meeting minutes 10/08/2018

2018-10-10 Thread Klozik Martin
Hi Tina,


we can add tickets into Sprint 22 ourselves. If you would like to do it 
during sprint creation, you could add the following tickets:


AUTO-70 Klozik

AUTO-84 Paul

AUTO-85 Richard


Best Regards,

Martin


From: Tina Tsou
Sent: Wednesday 10 October 2018 2:18
To: Klozik Martin; opnfv-tech-discuss@lists.opnfv.org
Subject: RE: [Auto] Meeting minutes 10/08/2018


Dear Martin et al,



Thank you for taking notes for this week.



Besides AUTO-44 (Paul-Ionut Vaduva), "Build ONAP components for arm64 platform", 
are there any other new or existing tickets we can include in Sprint 22?





Thank you,

Tina Tsou

Enterprise Architect

Arm

tina.t...@arm.com

+1 (408)931-3833



From: opnfv-tech-discuss@lists.opnfv.org  
On Behalf Of Klozik Martin
Sent: Tuesday, October 9, 2018 12:06 AM
To: opnfv-tech-discuss@lists.opnfv.org
Subject: [opnfv-tech-discuss] [Auto] Meeting minutes 10/08/2018



Hi Auto team,



I've published the meeting minutes from our call yesterday. Let me know in 
case I've missed or misunderstood some topic.



https://wiki.opnfv.org/pages/viewpage.action?pageId=29098709



Best Regards,

Martin



[opnfv-tech-discuss] [Auto] Meeting minutes 10/08/2018

2018-10-09 Thread Klozik Martin
Hi Auto team,


I've published the meeting minutes from our call yesterday. Let me know in 
case I've missed or misunderstood some topic.


https://wiki.opnfv.org/pages/viewpage.action?pageId=29098709


Best Regards,

Martin


[opnfv-tech-discuss] [Auto] Meeting minutes 2018/9/17

2018-09-19 Thread Klozik Martin
Hi Joe,


thanks for the perfect and detailed meeting minutes.


I have just one note to the milestones.


If I'm not mistaken, the outcome was that for the H release the Auto project 
doesn't have any strict plan for the number of TCs to be implemented. We will 
continue TC development on a best-effort basis with the (current) capacity 
of one developer.

I know that this is partly mentioned in the Testcases topics, but it's related to 
the discussion about milestones. Let's make sure we are on the same page, to avoid 
future misunderstandings.


Thanks again,

Martin



[opnfv-tech-discuss] [Auto] Jira clean & sweep

2018-09-18 Thread Klozik Martin
Dear Auto team members,


as part of the Gambia "stabilisation efforts", I did a clean-up in our Jira. 
Details can be found in the following ticket:


https://jira.opnfv.org/browse/AUTO-64


I would like to ask you to go through the tickets assigned to you and:

  *   close tickets which are done
  *   change the "Fix Version" in case a ticket is not related to Gambia but is 
set to 7.0.0 (e.g. set it to 8.0.0)

You can use the following dashboard to list all active tickets assigned to you 
(click on your name below the first pie chart).

https://jira.opnfv.org/secure/Dashboard.jspa?selectPageId=11600

Best Regards,
Martin


Re: [opnfv-tech-discuss] [compass4nfv][auto] Compass4nfv on LaaS x86 server

2018-08-02 Thread Klozik Martin
Hi Gerard,

I did a few checks today, with the following outcome:

* (HPE15) installation with VIRT_NUMBER=1 does not pass the "reboot_hosts" stage; 
it simply terminates after this message is shown. That's why I was not able to 
see anything else yesterday.
* (HPE15) installation with VIRT_NUMBER=2 went correctly (i.e. I see the 
"Installation Complete!" banner :-))
* I tried to retest it (with VIRT_NUMBER=2) on the "clean" server HPE16 and hit 
the same issue as you did (i.e. RuntimeError: OS installation timeout)

I'm wondering if it can be related to the OS state. Maybe we should first 
try an apt update & upgrade (perhaps even followed by a server reboot) 
before executing the Compass installation. I'll give it a try on another server.

One additional question: how did you "install" the opnfv-clean binary on your 
server? I used alien to create a deb package from 
http://artifacts.opnfv.org/apex/master/opnfv-apex-common-2.1-20160306.noarch.rpm.
 After its installation on HPE15, I was able to execute opnfv-clean, but I 
doubt that it works. So I'm wondering whether installing this package is enough, 
or whether the other two rpms/debs are also required (in that case I would expect 
such a dependency to be enforced by the package spec file...).
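For reference, the conversion was along these lines (the resulting .deb file 
name is from memory and may differ, as alien bumps the release number by default):

wget http://artifacts.opnfv.org/apex/master/opnfv-apex-common-2.1-20160306.noarch.rpm
sudo alien --to-deb opnfv-apex-common-2.1-20160306.noarch.rpm
sudo dpkg -i opnfv-apex-common_2.1-*_all.deb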

BTW, I made some small updates to your Compass install script. Maybe it's time to 
comment out one of the options, so it is directly usable for OPNFV deployment 
on a LaaS server. What do you think?

Cheers,
Martin

From: gerard.d...@wipro.com
Sent: Wednesday 1 August 2018 23:20
To: opnfv-tech-discuss@lists.opnfv.org
Cc: huangxiangyu; Klozik Martin
Subject: RE: [opnfv-tech-discuss] [compass4nfv][auto] Compass4nfv on LaaS x86 
server
    

Hi,
 
Thanks Martin and Harry for your feedback !
 
On hpe15, the log file doesn't end with the string "compass deploy success", so 
the process may not have completed correctly.
When you reach the point of "reboot_hosts do nothing", it takes quite a while 
(30-60 minutes?) to get to the conclusion.
Is it possible you stopped the process before it finished?
 
Also, I didn't see the setting of "VIRT_NUMBER" in the "deploy.sh" file in 
"/home/opnfv/compass4nfv", but then again it's possible you changed it 
afterwards.
 
One directory up ("/home/opnfv"), the file "deply.sh.log" ends in 
"launch_compass failed".
 
Out of curiosity, I tried option 1 on hpe15 (see the script in ~/auto), i.e. master 
branch, build+deploy, NOHA scenario, VIRT_NUMBER=2.
This one failed with the "get_installing_progress" error.
You can check the logs in /opt/opnfv-compass/compass4nfv and in ~/auto.
 
Best regards,
Gerard
 
 
 
 



From: Klozik Martin [mailto:martin.klo...@tieto.com]
Sent: Wednesday, August 1, 2018 4:50 AM
To: Gerard Damm (Product Engineering Service) ; 
opnfv-tech-discuss@lists.opnfv.org
Cc: huangxiangyu 
Subject: Re: [opnfv-tech-discuss] [compass4nfv][auto] Compass4nfv on LaaS x86 
server
   


Hi Gerard,
 
I tried to follow your installation procedure (i.e. point 1 below) on HPE15 
with the only difference being VIRT_NUMBER=1. I've not observed the same error 
as you, but some (probably not fatal) assertion error (search the log for 
client.py). Anyhow, the installation process seemed to finish somehow. I plan to 
have a more detailed look at the machine tomorrow. Feel free to have a look 
yourself; I'll forward you the appropriate credentials. The installation log 
(the output of deploy.sh) is available at /home/opnfv/compass4nfv/deploy.log
 
Have a nice day,
Martin
 
 
From: opnfv-tech-discuss@lists.opnfv.org on behalf of Gerard Damm
Sent: Wednesday 1 August 2018 2:06:43
To: opnfv-tech-discuss@lists.opnfv.org
Cc: huangxiangyu
Subject: Re: [opnfv-tech-discuss] [compass4nfv][auto] Compass4nfv on LaaS x86 
server

 
  

Thanks for pointing out that other possible issue.
 
The instructions I use as a reference:
https://docs.opnfv.org/en/latest/submodules/compass4nfv/docs/release/installation/index.html
 
 
My spelled-out version/script for these instructions (for the case of a virtual 
deployment on Ubuntu):
https://wiki.opnfv.org/display/AUTO/Script%3A+Compass4nfv 
 
 
I did 3 more attempts on hpe32, and unfortunately they also failed:
 
1) tarball 6.2, stable/fraser branch, noha scenario, set VIRT_NUMBER to 5, 
deploy.sh
2) quickstart.sh (i.e. in master branch, build.sh, ha scenario, deploy.sh)
3) master branch, build.sh, noha scenario, set VIRT_NUMBER to 5, deploy.sh
 
I got the get_ansible_print error twice, and a 
get_installing_progress error once.
(details below)
 
At this point, probably the most efficient next step would be for you to try it 
yourself on a LaaS server and write down exactly the sequence of commands you 
used, so as to find out the missing commands.
Then I'll update my notes, and the com

Re: [opnfv-tech-discuss] [compass4nfv][auto] Compass4nfv on LaaS x86 server

2018-08-01 Thread Klozik Martin
Hi Gerard,


I tried to follow your installation procedure (i.e. point 1 below) on HPE15 
with the only difference being VIRT_NUMBER=1. I've not observed the same error 
as you, but some (probably not fatal) assertion error (search the log for 
client.py). Anyhow, the installation process seemed to finish somehow. I plan to 
have a more detailed look at the machine tomorrow. Feel free to have a look 
yourself; I'll forward you the appropriate credentials. The installation log 
(the output of deploy.sh) is available at /home/opnfv/compass4nfv/deploy.log


Have a nice day,

Martin


From: opnfv-tech-discuss@lists.opnfv.org on behalf of Gerard Damm
Sent: Wednesday 1 August 2018 2:06:43
To: opnfv-tech-discuss@lists.opnfv.org
Cc: huangxiangyu
Subject: Re: [opnfv-tech-discuss] [compass4nfv][auto] Compass4nfv on LaaS x86 
server


Thanks for pointing out that other possible issue.



The instructions I use as a reference:

https://docs.opnfv.org/en/latest/submodules/compass4nfv/docs/release/installation/index.html



My spelled-out version/script for these instructions (for the case of a virtual 
deployment on Ubuntu):

https://wiki.opnfv.org/display/AUTO/Script%3A+Compass4nfv





I did 3 more attempts on hpe32, and unfortunately they also failed:



1) tarball 6.2, stable/fraser branch, noha scenario, set VIRT_NUMBER to 5, 
deploy.sh

2) quickstart.sh (i.e. in master branch, build.sh, ha scenario, deploy.sh)

3) master branch, build.sh, noha scenario, set VIRT_NUMBER to 5, deploy.sh



I got the get_ansible_print error twice, and a 
get_installing_progress error once.

(details below)



At this point, probably the most efficient next step would be for you to try it 
yourself on a LaaS server and write down exactly the sequence of commands you 
used, so as to find out the missing commands.

Then I'll update my notes, and the compass4nfv docs may also be updated.



Best regards,

Gerard









1) tarball 6.2, stable/fraser branch, noha scenario, set VIRT_NUMBER to 5, 
deploy.sh



downloaded the 6.2 tarball, checked out the stable/fraser branch, and added 
these lines in deploy.sh (with the noha scenario):

export VIRT_NUMBER=5

export VIRT_CPUS=4

export VIRT_MEM=16384

export VIRT_DISK=200G



error:



Traceback (most recent call last):
  File "/opt/opnfv-compass/compass4nfv/deploy/client.py", line 1127, in <module>
    main()
  File "/opt/opnfv-compass/compass4nfv/deploy/client.py", line 1122, in main
    deploy()
  File "/opt/opnfv-compass/compass4nfv/deploy/client.py", line 1076, in deploy
    ansible_print = client.get_ansible_print()
  File "/opt/opnfv-compass/compass4nfv/deploy/client.py", line 968, in get_ansible_print
    raise RuntimeError("OS installation timeout")
RuntimeError: OS installation timeout
+ RET=1
+ sleep 25
+ [[ 1 -eq 0 ]]
+ /bin/false
+ exit 1







2) quickstart.sh (i.e. in master branch, build.sh, ha scenario, deploy.sh)



error:



2018-07-31 21:57:59,756 p=130 u=root |  hostname: host2
2018-07-31 21:57:59,782 p=130 u=root |  host=compass-deck,url=/api/clusterhosts/2/state,body={"state": "ERROR"},headers={'Content-type': 'application/json', 'Accept': '*/*', 'X-Auth-Token': '$1$UohR2peC$xirMX8ctPjiZv5d1amTDf/'}
2018-07-31 21:57:59,817 p=130 u=root |  notify host status success!!! status=200, body={
    "severity": "INFO",
    "created_at": "2018-07-31 21:37:19",
    "updated_at": "2018-07-31 21:57:59",
    "id": 2,
    "state": "ERROR",
    "ready": false,
    "percentage": 0.0,
    "message": ""
}
2018-07-31 21:57:59,818 p=130 u=root |  hostname: host1
2018-07-31 21:57:59,845 p=130 u=root |  host=compass-deck,url=/api/clusterhosts/1/state,body={"state": "ERROR"},headers={'Content-type': 'application/json', 'Accept': '*/*', 'X-Auth-Token': '$1$F7YoEKlk$1/6TRpRf7crU2U6t8S0lE1'}
2018-07-31 21:57:59,892 p=130 u=root |  notify host status success!!! status=200, body={
    "severity": "INFO",
    "created_at": "2018-07-31 21:37:19",
    "updated_at": "2018-07-31 21:57:59",
    "id": 1,
    "state": "ERROR",
    "ready": false,
    "percentage": 0.0,
    "message": ""
}



Traceback (most recent call last):
  File "/opt/opnfv-compass/compass4nfv/deploy/client.py", line 1136, in <module>
    main()
  File "/opt/opnfv-compass/compass4nfv/deploy/client.py", line 1131, in main
    deploy()
  File "/opt/opnfv-compass/compass4nfv/deploy/client.py", line 1086, in deploy
    client.get_installing_progress(cluster_id, ansible_print)
  File "/opt/opnfv-compass/compass4nfv/deploy/client.py", line 1029, in get_installing_progress
    _get_installing_progress()
  File "/opt/opnfv-compass/compass4nfv/deploy/client.py", line 1015, in _get_installing_progress
    (cluster_id, status, cluster_state)
RuntimeError: ('get cluster %s state status %s: %s, error', (1, 200, {u'status': {u'completed_hosts': 0, u'total_hosts': 5, u'installing_hosts': 0, u'failed_hosts': 5}, u'severity': u'ERROR', u'created_at': u'2018-07-31 21:37:19', u'message': u'total 5,

Re: [opnfv-tech-discuss] [Auto] Jenkins for Auto

2018-06-12 Thread Klozik Martin
Hi Paul,


thanks for the explanation. It's good to know that the existing installation 
procedure can be reused for the Auto project.


Regards,

Martin



From: Paul Vaduva
Sent: Tuesday 12 June 2018 11:49
To: Klozik Martin
Cc: Tina Tsou; gerard.d...@wipro.com; Cristina Pauna; Joe Kidder; 
opnfv-tech-discuss@lists.opnfv.org
Subject: RE: [Auto] Jenkins for Auto


Hi Martin,



See my comments inline...



-----Original Message-----
From: Klozik Martin 
Sent: Tuesday, June 12, 2018 10:46 AM
To: Paul Vaduva 
Cc: Tina Tsou ; gerard.d...@wipro.com; Cristina Pauna 
; Joe Kidder ; 
opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [Auto] Jenkins for Auto



Hi Paul,



thanks for taking care of the initial yaml file definition at 
jjb/auto/auto.yaml. I've also noticed your +1 on my "ci/build-auto.sh" patch. 
It was acked by the other team members too, so we will go ahead with my proposal.



Let me try to summarize my understanding of your CI job configuration. Please 
let me know your comments (this applies to all points below).



Your idea is to utilize the Armband project for OPNFV installation via Fuel on an 
ARM POD. I'm not familiar with the details, but it seems that the current 
configuration (with PROJECT=armband) works only for the armband repository. Do you 
know if it can be modified/configured to properly install OPNFV at our pod 
while still utilizing the existing build framework?



[Paul Vaduva]

Yes, that's true, I made a job for deploying the Fuel armband version. If there 
will be other installers, we would probably need other CI jobs that will install 
them (compass, apex, etc.). I have already proposed a patch to extend the armband 
deploy job template to accept a project parameter.

https://gerrit.opnfv.org/gerrit/#/c/58415/



If not, we would probably need to do it "manually", e.g. introduce an 
"install_opnfv" function into ci/build-auto.sh, which will:

1) clone the armband repo as part of our CI job

2) modify WORKSPACE to point to the armband repo

3) execute ci/deploy.sh from armband (and other steps if required) to take care 
of the OPNFV installation

4) restore WORKSPACE to its original value before the ONAP deployment

[Paul Vaduva]

I thought about this as well (deploying OpenStack as part of the Auto CI job), but 
it seems to be much more effort to maintain the deploy procedure than the 
previous variant (just using the armband template for deploying their project). It 
seems to make more sense to keep the OpenStack deployment separated from the ONAP 
deployment.





Of course, we would have to update this logic in the future to support both ARM 
and x64 pods.



Best Regards,

Martin



From: Klozik Martin

Sent: Thursday 7 June 2018 9:57

To: Joe Kidder; Paul Vaduva

Cc: Tina Tsou; gerard.d...@wipro.com; Cristina Pauna

Subject: Re: Jenkins for Auto







Direct link to previously mentioned gerrit draft review:



https://gerrit.opnfv.org/gerrit/#/c/58307/



--Martin







From: Klozik Martin

Sent: Thursday 7 June 2018 9:48

To: Joe Kidder; Paul Vaduva

Cc: Tina Tsou; gerard.d...@wipro.com; Cristina Pauna

Subject: Re: Jenkins for Auto





Hi Paul,



just for the sake of discussion, I've prepared a draft patch of a possible CI 
script skeleton. In case we decide to go in this direction, it can be 
used as an initial version straight away and simply called from the Jenkins yaml 
file.

Maybe we can do a simple vote in Gerrit by adding +1 or -1.



Best Regards,

Martin



From: Klozik Martin

Sent: Thursday 7 June 2018 9:07:36

To: Joe Kidder; Paul Vaduva

Cc: Tina Tsou; gerard.d...@wipro.com; Cristina Pauna

Subject: Re: Jenkins for Auto







Thanks Joe for adding me in the loop.





Hi Paul,





I'm sorry, I was not aware of your activities. I've sent a few thoughts about 
a possible (initial) Auto CI configuration to the Auto mailing list. Could you 
please have a look at it? Based on this thread I can see that you're about to 
define a yaml file and prepare the initial body of the Auto jobs (daily/verify/merge). 
Let me know if I can be of any help.

In a nutshell, I was proposing the same concept as has been used by the vswitchperf 
project for some time and was later adopted by storperf too. It means defining 
only a minimalistic YAML file with the job frequency, allowed slaves, 
etc., and then invoking a script stored inside the Auto repo (e.g. 
ci/build-auto.sh) with the name of the job (e.g. ci/build-auto.sh verify). We 
can start with an empty body, so all jobs will always end with success if the slave 
(unh-pod1) is up and the auto repository clones smoothly. Later we can add the real 
stuff there, based on the progress of the platform installation scripts (for OPNFV 
and ONAP) and the Auto tests automation. My idea was to start with common 
functions (inside build-auto.sh) for platform installation (which can be shared 
among all jobs). Based on the ex

Re: [opnfv-tech-discuss] [Auto] Jenkins for Auto

2018-06-12 Thread Klozik Martin
Hi Paul,

thanks for taking care of the initial yaml file definition at 
jjb/auto/auto.yaml. I've also noticed your +1 on my "ci/build-auto.sh" patch. 
It was acked by the other team members too, so we will go ahead with my proposal.

Let me try to summarize my understanding of your CI job configuration. Please 
let me know your comments (this applies to all points below).

Your idea is to utilize the Armband project for OPNFV installation via Fuel on an 
ARM POD. I'm not familiar with the details, but it seems that the current 
configuration (with PROJECT=armband) works only for the armband repository. Do you 
know if it can be modified/configured to properly install OPNFV at our pod 
while still utilizing the existing build framework?

If not, we would probably need to do it "manually", e.g. introduce an 
"install_opnfv" function into ci/build-auto.sh, which will:
1) clone the armband repo as part of our CI job
2) modify WORKSPACE to point to the armband repo
3) execute ci/deploy.sh from armband (and other steps if required) to take care 
of the OPNFV installation
4) restore WORKSPACE to its original value before the ONAP deployment
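A minimal sketch of such a function (hypothetical; it assumes armband's 
ci/deploy.sh accepts the same -l/-p/-s parameters as fuel's deploy script shown 
elsewhere on this list, and LAB/POD/SCENARIO are placeholder variables):

install_opnfv() {
    local orig_workspace="${WORKSPACE}"
    git clone https://gerrit.opnfv.org/gerrit/armband                     # step 1
    export WORKSPACE="${PWD}/armband"                                     # step 2
    "${WORKSPACE}/ci/deploy.sh" -l "${LAB}" -p "${POD}" -s "${SCENARIO}"  # step 3
    export WORKSPACE="${orig_workspace}"                                  # step 4
}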

Of course, we would have to update this logic in the future to support both ARM 
and x64 pods.

Best Regards,
Martin

From: Klozik Martin
Sent: Thursday 7 June 2018 9:57
To: Joe Kidder; Paul Vaduva
Cc: Tina Tsou; gerard.d...@wipro.com; Cristina Pauna
Subject: Re: Jenkins for Auto
   


Direct link to previously mentioned gerrit draft review:

https://gerrit.opnfv.org/gerrit/#/c/58307/

--Martin



From: Klozik Martin
Sent: Thursday 7 June 2018 9:48
To: Joe Kidder; Paul Vaduva
Cc: Tina Tsou; gerard.d...@wipro.com; Cristina Pauna
Subject: Re: Jenkins for Auto
   

Hi Paul,

just for the sake of discussion, I've prepared a draft patch of a possible CI 
script skeleton. In case we decide to go in this direction, it can be 
used as an initial version straight away and simply called from the Jenkins yaml 
file.

Maybe we can do a simple vote in Gerrit by adding +1 or -1.

Best Regards,
Martin
  
From: Klozik Martin
Sent: Thursday 7 June 2018 9:07:36
To: Joe Kidder; Paul Vaduva
Cc: Tina Tsou; gerard.d...@wipro.com; Cristina Pauna
Subject: Re: Jenkins for Auto
   


Thanks Joe for adding me in the loop.


Hi Paul,


I'm sorry, I was not aware of your activities. I've sent a few thoughts about 
a possible (initial) Auto CI configuration to the Auto mailing list. Could you 
please have a look at it? Based on this thread I can see that you're about to 
define a yaml file and prepare the initial body of the Auto jobs (daily/verify/merge). 
Let me know if I can be of any help.

In a nutshell, I was proposing the same concept as has been used by the vswitchperf 
project for some time and was later adopted by storperf too. It means defining 
only a minimalistic YAML file with the job frequency, allowed slaves, 
etc., and then invoking a script stored inside the Auto repo (e.g. 
ci/build-auto.sh) with the name of the job (e.g. ci/build-auto.sh verify). We 
can start with an empty body, so all jobs will always end with success if the slave 
(unh-pod1) is up and the auto repository clones smoothly. Later we can add the real 
stuff there, based on the progress of the platform installation scripts (for OPNFV 
and ONAP) and the Auto tests automation. My idea was to start with common 
functions (inside build-auto.sh) for platform installation (which can be shared 
among all jobs). Based on the experience we can later split it into separate 
jobs to get some visibility of the stability of particular "sub-tasks" directly in 
the Jenkins job history.
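A minimal skeleton of that dispatch could look like this (a sketch; the bodies 
are intentionally empty, so every job passes as long as the slave is up and the 
clone worked):

#!/bin/bash
# ci/build-auto.sh - initial skeleton, real job bodies to be added later
case "$1" in
    daily|verify|merge)
        echo "auto: $1 job placeholder - nothing to do yet"
        exit 0
        ;;
    *)
        echo "Usage: $0 daily|verify|merge" >&2
        exit 1
        ;;
esac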


Best Regards,
Martin


BTW, could you please CC the Auto mailing list in future discussions, so the 
knowledge is spread among the team and properly archived by the mailing list? 
Thanks.



From: Joe Kidder
Sent: Wednesday 6 June 2018 13:11
To: Paul Vaduva
Cc: Tina Tsou; gerard.d...@wipro.com; Cristina Pauna; Klozik Martin
Subject: Re: Jenkins for Auto
   


Paul,
  Martin Klozik is also looking at CI. Just a heads-up to avoid collisions. 


Joe
On Jun 6, 2018, at 6:48 AM, Paul Vaduva  wrote:

 

Hi Tina,
 
I was on vacation last week; 
I will take a look at the Auto jobs now.
 
Best Regards
Paul
 


From: Tina Tsou 
Sent: Friday, June 1, 2018 10:50 PM
To: Paul Vaduva ; Joe Kidder 
Cc:  gerard.d...@wipro.com; Cristina Pauna 
Subject: RE: Jenkins for Auto
   
Dear Paul,
 
Are you waiting for either Joe or Gerard to do something, before you try an 
OPNFV install?

 
 
Thank you,
Tina Tsou
Enterprise Architect
Arm
tina.t...@arm.com
+1 (408)931-3833
  


From: Paul Vaduva 
Sent: Friday, May 18, 2018 9:49 AM
To: Joe Kidder 
Cc:  gerard.d...@wipro.com; Tina Tsou ; Cristina Pauna 

Subject: RE: Jenkins for Auto
   
Hi,
 
Due to some naming conflicts we had to rename arm-pod7 to unh-pod1:
https://build.opnfv.org/ci/computer/unh-pod1/
So the previous link is the correct one. 
I will start Monday on the next phase: deploying MCP.
 
Joe I saw a lot of di

Re: [opnfv-tech-discuss] [Auto] CI for Auto thoughts

2018-06-07 Thread Klozik Martin
Hi Joe,


thanks for the provided details.


--Martin


From: Joe Kidder
Sent: Wednesday 6 June 2018 12:28:46
To: Klozik Martin
Cc: gerard.d...@wipro.com; opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] [Auto] CI for Auto thoughts

Martin,
  If you mean the other nodes in the pod, there are credentials created by MCP 
when it installed OPNFV on the pod.

  For user “ubuntu”
  /var/lib/opnfv/mcp.rsa

  Also, I occasionally log into the console on nodes with user “opnfv” and 
password “opnfv_secret”.
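  For example, something like this should then work for the other nodes (node 
address is a placeholder):

  # log into a pod node with the MCP-generated key
  sudo ssh -i /var/lib/opnfv/mcp.rsa ubuntu@<node-ip>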

Joe

On Jun 6, 2018, at 6:12 AM, Klozik Martin 
<martin.klo...@tieto.com> wrote:


Hi Gerard and Joe,


thank you for your feedback and comments. It's a good idea to discuss it further 
during the next community meeting.


Just a few comments about current bare metal pod:

  *   the connection to Jenkins seems to be configured properly and the pod is 
reported as up at https://build.opnfv.org/ci/computer/unh-pod1/. Have you tried 
to reboot the jump host to verify that monit is started properly?
  *   I've enabled passwordless sudo for the jenkins user at the jump host (as 
Jenkins jobs are executed by this user, it might be required to install/update 
some packages, switch to different users, etc.)
  *   I've changed the jump host name from "localhost" to "unh-pod1-jump". As 
"unh-pod1" was chosen as the pod name in Jenkins, we should rename the pod nodes 
accordingly. However, I was not able to figure out the credentials/keys to access 
the other physical nodes - any hint?
  *   I was not able to find the UNH Lab listed among the community labs at 
https://wiki.opnfv.org/display/pharos/Community+Labs. Of course there are 
dedicated pages about LaaS support at the UNH Lab, but it would be good to follow 
current practice, i.e. to document bare-metal servers the same way as the other 
labs. If you agree, I can prepare an initial version.


Have a nice day,
Martin

From: gerard.d...@wipro.com
Sent: Tuesday 5 June 2018 18:50
To: Klozik Martin; opnfv-tech-discuss@lists.opnfv.org
Cc: Tina Tsou
Subject: RE: [Auto] CI for Auto thoughts


Thanks for your feedback !



In the same order of your 9 bullet points:



1) dedicated pod, 2nd pod, robustness:

That particular pod is reserved for Auto. It is bare-metal, with 6 separate 
machines.

There might indeed be a need in the future for a 2nd pod, but we're not there 
yet ;)

Can you check if the Jenkins slave on that pod is correctly configured and 
robust ?



2) job frequency, per branch

Yes, the master branch can be triggered daily (even if no repo change). In the 
early weeks/months, there should not be any overload on the pod.

Indeed, for the stable/ branch, it only needs to be triggered when 
there was a change. Maybe also do a weekly sanity/health check.



3) job definition in YAML file in releng repo and job content in project repo

The structure was inspired by the armband project, and I guess is the same for 
all projects ?

It is the same approach used for the documentation (doc file names centralized 
in opnfvdocs repo, doc files distributed in project repos). Centralized content 
is changed less frequently than distributed content.



4) conditional sequence of job groups

Yes, certainly, no point trying subsequent jobs in case one fails.

Just one comment, though: some jobs near the end of the pipeline will be 
performance measurements (test cases). They will need to "pass" from a software 
execution point of view, but even if the measured performance is "poor" 
(against some SLA target numbers for example), the pipeline still needs to 
continue. The result analysis phase afterwards will sort things out. In other 
words, quantitative "quality/performance failures" should not interrupt a 
pipeline like binary pass/fail "execution failures".



5) output filtering, full logs retention period

Sure, that can help. Specific Auto test cases will definitely have customized 
outputs in files or DBs (to capture metric values), not just standard outputs 
to the console.

Also, repetitive analysis could be designed, which could look systematically 
into the full logs (data mining, even ML/AI ultimately, to empirically 
fine-tune things like policies and thresholds)



6) sharing Verify and Merge

Agreed: it sounds to me like Merge is a special case of Verify (last Verify of 
a change/patch-implied sequence of Verifies).



7) storing artefacts into OPNFV

I'm not familiar with this, but I'd say the test case execution results could 
end up in some OPNFV store, maybe in Functest, or in this artifactory. For the 
installation jobs, some success/failure results may also be stored. The 
ONAP-specific results might be shared with ONAP.



8) OPNFV and ONAP versions as parameters

Yes, absolutely. And additional param

Re: [opnfv-tech-discuss] [Auto] CI for Auto thoughts

2018-06-06 Thread Klozik Martin
Hi Gerard and Joe,


thank you for your feedback and comments. It's a good idea to discuss it further 
during the next community meeting.


Just a few comments about current bare metal pod:

  *   the connection to Jenkins seems to be configured properly and the pod is 
reported as up at https://build.opnfv.org/ci/computer/unh-pod1/. Have you tried 
to reboot the jump host to verify that monit is started properly?
  *   I've enabled passwordless sudo for the jenkins user at the jump host (as 
Jenkins jobs are executed by this user, it might be required to install/update 
some packages, switch to different users, etc.)
  *   I've changed the jump host name from "localhost" to "unh-pod1-jump". As 
"unh-pod1" was chosen as the pod name in Jenkins, we should rename the pod nodes 
accordingly. However, I was not able to figure out the credentials/keys to access 
the other physical nodes - any hint?
  *   I was not able to find the UNH Lab listed among the community labs at 
https://wiki.opnfv.org/display/pharos/Community+Labs. Of course there are 
dedicated pages about LaaS support at the UNH Lab, but it would be good to follow 
current practice, i.e. to document bare-metal servers the same way as the other 
labs. If you agree, I can prepare an initial version.


Have a nice day,
Martin

From: gerard.d...@wipro.com
Sent: Tuesday 5 June 2018 18:50
To: Klozik Martin; opnfv-tech-discuss@lists.opnfv.org
Cc: Tina Tsou
Subject: RE: [Auto] CI for Auto thoughts


Thanks for your feedback !



In the same order of your 9 bullet points:



1) dedicated pod, 2nd pod, robustness:

That particular pod is reserved for Auto. It is bare-metal, with 6 separate 
machines.

There might indeed be a need in the future for a 2nd pod, but we're not there 
yet ;)

Can you check if the Jenkins slave on that pod is correctly configured and 
robust ?



2) job frequency, per branch

Yes, the master branch can be triggered daily (even if no repo change). In the 
early weeks/months, there should not be any overload on the pod.

Indeed, for the stable/ branch, it only needs to be triggered when 
there was a change. Maybe also do a weekly sanity/health check.



3) job definition in YAML file in releng repo and job content in project repo

The structure was inspired by the armband project, and I guess is the same for 
all projects ?

It is the same approach used for the documentation (doc file names centralized 
in opnfvdocs repo, doc files distributed in project repos). Centralized content 
is changed less frequently than distributed content.



4) conditional sequence of job groups

Yes, certainly, no point trying subsequent jobs in case one fails.

Just one comment, though: some jobs near the end of the pipeline will be 
performance measurements (test cases). They will need to "pass" from a software 
execution point of view, but even if the measured performance is "poor" 
(against some SLA target numbers for example), the pipeline still needs to 
continue. The result analysis phase afterwards will sort things out. In other 
words, quantitative "quality/performance failures" should not interrupt a 
pipeline like binary pass/fail "execution failures".



5) output filtering, full logs retention period

Sure, that can help. Specific Auto test cases will definitely have customized 
outputs in files or DBs (to capture metric values), not just standard outputs 
to the console.

Also, repetitive analysis could be designed, which could look systematically 
into the full logs (data mining, even ML/AI ultimately, to empirically 
fine-tune things like policies and thresholds)



6) sharing Verify and Merge

Agreed: it sounds to me like Merge is a special case of Verify (last Verify of 
a change/patch-implied sequence of Verifies).



7) storing artefacts into OPNFV

I'm not familiar with this, but I'd say the test case execution results could 
end up in some OPNFV store, maybe in Functest, or in this artifactory. For the 
installation jobs, some success/failure results may also be stored. The 
ONAP-specific results might be shared with ONAP.



8) OPNFV and ONAP versions as parameters

Yes, absolutely. And additional parameters would be CPU architectures, VNFs 
(and their versions too), clouds (start with OpenStack: various versions, then 
AWS+Azure+GCP+... and their versions), VM image versions, …. There could easily 
be way too many parameters, leading to combinatorial explosion: better order 
another 10 or 100 pods already ;)



9) OPNFV installer, ONAP deployment method as parameters

Yes, likewise: yet other parameters. I suppose the combination of parameters 
will have to be controlled and selected manually, not the full Cartesian 
product of all possibilities.



We’ll be debriefing this discussion during the next Auto weekly meeting (June 
11th):

https://wiki.opnfv.org/display/AUTO/Auto+Project+Meetings



Best regards,

Gerard





From: Klozik Martin [mailto:martin.klo..

[opnfv-tech-discuss] [Auto] CI for Auto thoughts

2018-06-04 Thread Klozik Martin
Hi Auto Team,


I've read the CI-related wiki page at 
https://wiki.opnfv.org/display/AUTO/CI+for+Auto and I would like to discuss 
a few thoughts with you, based on my experience with CI from the OPNFV vswitchperf 
project.


  *   I agree that a dedicated POD should be reserved for Auto CI. Based on 
the length of the job execution and the content of the VERIFY & MERGE jobs, it 
might be necessary to introduce an additional (2nd) POD in the future. I would 
prefer a bare-metal POD to a virtual one, to simplify the final setup and thus 
improve its robustness.
  *   Depending on the configuration, daily jobs can be triggered daily or "daily 
if there was any commit since the last CI run" (pollSCM trigger). The second 
setup can ease the load on the CI POD - useful in case the POD is shared among 
several projects or jobs, e.g. more compute power is left for the VERIFY/MERGE 
jobs. I suppose that in the case of Auto the frequency will be truly daily, to 
constantly verify OPNFV & ONAP (master branch) functionality even if the Auto 
repository was not modified. For a stable release it doesn't make sense to 
run it on a daily basis, as the OPNFV & ONAP versions will be fixed.
  *   I agree with the split of functionality between the job definition yaml file 
(in the releng repo) and the real job "body" inside the Auto repo. This will add 
flexibility to the job configuration, and it will also allow us to verify new 
changes as part of the verify job during the review process, before the changes are 
merged. For example, the VSPERF project uses a CI-related script, 
ci/build-vsperf.sh, which executes the daily, verify or merge job actions based 
on its parameter. This script is then called from the Jenkins yaml file 
(jjb/vswitchperf/vswitchperf.yaml) with the appropriate parameter.
  *   In the case of Auto CI, it might be useful to split the CI job into a set of 
dependent jobs, where a job is executed only if the previous 
job succeeded. This will simplify the analysis of job failures, as it will be 
visible at first sight whether a failure was caused by the Auto TCs or by the 
platform installers. The history of these jobs will give us a simple overview of 
the stability of the "sub-jobs".

E.g.
auto-daily-master
  |---> auto-install-opnfv
          |---> auto-install-onap
                  |---> auto-daily-tests

example 2:
auto-verify-master
  |---> auto-install-opnfv
          |---> auto-install-onap
                  |---> auto-verify
                          code validation... OK
                          doc validation  OK
                          sanity TC OK

  *   It might be useful to prepare some output filtering; otherwise it would 
be difficult to read the Jenkins job console log during the analysis of a CI run 
(especially a failure). That means suppressing the console output of successful 
steps or tests and printing only simplified results (as shown in the auto-verify 
example above). This approach expects that the full logs are either accessible 
(i.e. kept for a while) at the POD or dumped into the Jenkins job console output 
in case an error is detected.
  *   there are three basic job types used in OPNFV - DAILY, VERIFY (triggered 
by Gerrit for every pushed patch or its change) and MERGE (triggered by Gerrit 
after the merge of the patch). I suppose that in the case of Auto, the VERIFY & 
MERGE jobs can share their content.
  *   Do you plan to store any artifacts "generated" by the CI job in the OPNFV 
artifactory? For example, vswitchperf pushes results into the OPNFV results 
database (I've seen that Auto is also going to push to the results DB) and after 
that it stores the appropriate log files and the test report at artifacts.opnfv.org. 
Does it make sense to store the logs from the OPNFV/ONAP installation or from the 
execution of Auto TCs in the artifactory too?
  *   It might be useful to easily switch/test different OPNFV/ONAP versions. 
For example, a "master" CI script can import a versions.sh file where the 
user/tester can specify the OPNFV and ONAP branches or commit IDs to be used 
(see the sketch after this list).
  *   Analogously to the previous point, the OPNFV installer and the ONAP 
deployment method can also be specified in the (same?) file.
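To make the last two points concrete, a versions.sh sketch could look like this 
(all variable names and example values are hypothetical):

# versions.sh - central place to pin what the CI job deploys
export OPNFV_VERSION="master"        # branch or commit ID
export ONAP_VERSION="master"         # branch or commit ID
export OPNFV_INSTALLER="fuel"        # e.g. fuel, compass4nfv
export ONAP_DEPLOY_METHOD="heat"     # e.g. heat templates, OOM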

Please consider these thoughts as an "outsider's" point of view on Auto CI, as 
I'm new to the Auto project. It is quite possible that some of these points 
have already been discussed or even solved. In that case, please don't 
hesitate to point out the sources I should check. Based on the discussion, I'll 
take care of the Auto CI wiki page updates if needed.

Thank you,
Martin


Re: [opnfv-tech-discuss] [vsperf] Agenda for VSPERF weekly meeting - 30 May 2018 (ww67)

2018-05-30 Thread Klozik Martin
Hi all,


My Development Update follows:


VSPERF-579 (https://jira.opnfv.org/browse/VSPERF-579) - connections: vswitch and 
deployment redesign - quite a complex patch is waiting for review

VSPERF-580 (https://jira.opnfv.org/browse/VSPERF-580) - Change default names of 
OVS bridges to avoid collisions - patch is waiting for review


In both cases, details about patch testing are available in Jira.


Regards,

Martin




From: Rao, Sridhar
Sent: Wednesday 30 May 2018 7:57
To: opnfv-tech-discuss@lists.opnfv.org
Cc: 'ALFRED C MORTON (AL)'; 'Trevor Cooper'; 'Mars Toktonaliev (Nokia - 
US/Irving)'; 'Cian Ferriter'; Christian Trautman (ctrau...@redhat.com); Bill 
Michalowski (bmich...@redhat.com); Alec Hothan (ahothan); 
eddie.arr...@huawei.com; Jose Angel Lausuch; Julien Meunier; 
thomas.fai...@6wind.com; Klozik Martin; Elias Richard
Subject: [vsperf] Agenda for VSPERF weekly meeting - 30 May 2018 (ww67)


Agenda:

  1.  Development Update
  2.  Plugfest Discussion – If Any.



Meeting minutes

  *   WW66: 
http://ircbot.wl.linuxfoundation.org/meetings/opnfv-vswitchperf/2018/opnfv-vswitchperf.2018-05-23-15.01.html
  *   WW65: 
http://ircbot.wl.linuxfoundation.org/meetings/opnfv-vswitchperf/2018/opnfv-vswitchperf.2018-05-16-14.56.html
  *   WW64: 
http://ircbot.wl.linuxfoundation.org/meetings/opnfv-vswitchperf/2018/opnfv-vswitchperf.2018-05-09-15.05.html





Time: Wednesday PDT 8h00 (GMT -7) UTC 15h00.



IRC: freenode https://freenode.net/ channel: #opnfv-vswitchperf; 
http://webchat.freenode.net/?channels=opnfv-vswitchperf



Audio: https://zoom.us/j/2362828999

Or iPhone one-tap :
US: +16699006833,,2362828999# or +16465588656,,2362828999#
Or Telephone:
Dial(for higher quality, dial a number based on your current location):
US: +1 669 900 6833 or +1 646 558 8656 or +1 877 369 0926 (Toll Free) or +1 855 
880 1246 (Toll Free)
Meeting ID: 236 282 8999
International numbers available: 
https://zoom.us/zoomconference?m=Xn-Kas4jq2GuyfbCCKYpwvi6FpHEYX8n





Regards,

Sridhar K. N. Rao (Ph. D)

Architect – SDN/NFV

+91-9900088064






___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [VSPERF] CI failures

2018-05-15 Thread Klozik Martin
Hi All,


I had to downgrade the LT 4.4 kernel from the EPEL repo to 4.4.116 (from the 
recent 4.4.131). After the downgrade, all VSPERF tools compile fine and the 
openvswitch kernel module works as well. However, a quick phy2phy_cont test 
indicated a possible drop in performance. I've triggered a daily job to check 
the results.
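
For reference, the downgrade can look roughly like this (a sketch only; kernel 
packages are install-only, so the older build is installed side by side, and 
the exact package/grub entry strings are assumptions):

    # list the kernel-lt builds available in the configured repos
    yum --showduplicates list kernel-lt
    # install the older build next to the current one
    sudo yum install -y kernel-lt-4.4.116-1.el7.elrepo.x86_64
    # make it the default boot entry and reboot
    sudo grub2-set-default 'CentOS Linux (4.4.116-1.el7.elrepo.x86_64) 7 (Core)'
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg
    sudo reboot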


Regards,

Martin



From: opnfv-tech-discuss-boun...@lists.opnfv.org 
<opnfv-tech-discuss-boun...@lists.opnfv.org> on behalf of Klozik Martin 
<martin.klo...@tieto.com>
Sent: Friday, May 11, 2018 12:32
To: opnfv-tech-discuss@lists.opnfv.org
Subject: [opnfv-tech-discuss] [VSPERF] CI failures


Hi,


recent CI failures (VERIFY, MERGE and DAILY jobs) were caused by missing 
repositories. Our POD12 servers had not been upgraded for some time, and in 
the meantime the repositories for 7.3 were removed (moved to the vault repos).


I've updated node3 and node4 to CentOS 7.5 today and the kernel to 4.4.131 
(the recent kernel-lt from the EPEL repo). However, there are still some 
issues with vanilla OVS. It might be necessary either to downgrade the kernel 
a bit or to use a vanilla kernel without the RHT backports, which are 
sometimes incompatible with the DPDK library or the OVS kernel module.


I'll have a look early next week.


Regards,

Martin

___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


[opnfv-tech-discuss] [VSPERF] CI failures

2018-05-11 Thread Klozik Martin
Hi,


recent CI failures (VERIFY, MERGE and DAILY jobs) were caused by missing 
repositories. Our POD12 servers had not been upgraded for some time, and in 
the meantime the repositories for 7.3 were removed (moved to the vault repos).


I've updated node3 and node4 to CentOS 7.5 today and the kernel to 4.4.131 
(the recent kernel-lt from the EPEL repo). However, there are still some 
issues with vanilla OVS. It might be necessary either to downgrade the kernel 
a bit or to use a vanilla kernel without the RHT backports, which are 
sometimes incompatible with the DPDK library or the OVS kernel module.


I'll have a look early next week.


Regards,

Martin

___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss


Re: [opnfv-tech-discuss] [VSPERF] trex_vm_tput KeyError: 'scapy'

2018-05-09 Thread Klozik Martin
Hi Dilip,


have you modified any configuration file? If so, could you please send us a 
patch with your modifications (e.g. the output of "git diff", or of "git diff 
master" in case you've created your own branch)?


As Sridhar has mentioned, it can be caused by an outdated configuration or by 
an accidental redefinition of the TRAFFIC dictionary (e.g. by TRAFFIC = {...} 
inside 10_custom.conf). A few commands that collect this information are 
sketched below.
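
A sketch of those checks, to be run from the vswitchperf repository root (the 
grep patterns are just illustrative, not an official diagnostic):

    # capture local modifications against master for the mailing list
    git diff master > my_changes.patch
    # look for an accidental redefinition of the whole TRAFFIC dictionary
    grep -n "^TRAFFIC" conf/10_custom.conf
    # check whether the (newer) 'scapy' sub-dictionary is defined at all
    grep -n "'scapy'" conf/03_traffic.conf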


Best Regards,

Martin


From: opnfv-tech-discuss-boun...@lists.opnfv.org on behalf of Rao, Sridhar
Sent: Monday, May 7, 2018 7:37:57
To: dilip.d...@hpe.com; opnfv-tech-discuss@lists.opnfv.org
Subject: Re: [opnfv-tech-discuss] [VSPERF] trex_vm_tput KeyError: 'scapy'

Dilip,

Do you see the 'scapy' dictionary within the TRAFFIC dictionary in 
conf/03_traffic.conf? It looks like your traffic dictionary definition is old.

Regards,
Sridhar

-Original Message-
From: opnfv-tech-discuss-boun...@lists.opnfv.org 
 On Behalf Of Dilip Daya
Sent: Friday, May 04, 2018 8:08 PM
To: opnfv-tech-discuss@lists.opnfv.org
Subject: [opnfv-tech-discuss] [VSPERF] trex_vm_tput KeyError: 'scapy'

I'm hoping anyone can assist me with the following:

Environment:
* ProLiant DL380 Gen10
* RHEL 7.5 (3.10.0-862.el7.x86_64)
* VSPERF git clone dated Apr-30-2018
* vloop-vnf-ubuntu-16.04_trex_20180209.qcow2

Command-line:

$ ./vsperf --integration trex_vm_tput
...
...
[INFO ]  2018-05-03 18:29:35,645 : (trex) - T-Rex:  In Trex connect method...
PING 192.168.35.2 (192.168.35.2) 56(84) bytes of data.
64 bytes from 192.168.35.2: icmp_seq=1 ttl=64 time=0.213 ms

--- 192.168.35.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms
root@192.168.35.2's password:
/root/trex/scripts/t-rex-64
[INFO ]  2018-05-03 18:29:38,420 : (trex) - T-Rex: Trex host successfully found...
[INFO ]  2018-05-03 18:29:38,420 : (trex) - In Trex send_rfc2544_throughput method
[INFO ]  2018-05-03 18:29:38,421 : (trex) - T-Rex sending learning packets
[DEBUG]  2018-05-03 18:29:38,440 : (trex) - Starting traffic at 0.1 Gbps speed
[DEBUG]  2018-05-03 18:29:38,441 : (pidstat) - cmd : sudo pkill --signal 2 pidstat
...
...
[ERROR]  2018-05-03 18:38:55,107 : (root) - Failed to run test: trex_vm_tput 
Traceback (most recent call last):
  File "./vsperf", line 831, in main
test.run()
  File "/home/dilip/vsperf_src/vswitchperf/testcases/testcase.py", line 360, in 
run
if self.step_run():
  File "/home/dilip/vsperf_src/vswitchperf/testcases/testcase.py", line 864, in 
step_run
self._step_result[i] = test_method(*step_params)
  File "/home/dilip/vsperf_src/vswitchperf/core/traffic_controller_rfc2544.py", 
line 70, in send_traffic
traffic, tests=self._tests, duration=self._duration, 
lossrate=self._lossrate)
  File "/home/dilip/vsperf_src/vswitchperf/tools/pkt_gen/trex/trex.py", line 
613, in send_rfc2544_throughput
self.learning_packets(traffic)
  File "/home/dilip/vsperf_src/vswitchperf/tools/pkt_gen/trex/trex.py", line 
515, in learning_packets
disable_capture=True)
  File "/home/dilip/vsperf_src/vswitchperf/tools/pkt_gen/trex/trex.py", line 
386, in generate_traffic
packet_1, packet_2 = self.create_packets(traffic, ports_info)
  File "/home/dilip/vsperf_src/vswitchperf/tools/pkt_gen/trex/trex.py", line 
202, in create_packets
if traffic['scapy']['enabled']:
KeyError: 'scapy'
[INFO ]  2018-05-03 18:38:55,114 : (root) - Continuing with next test...


--
Thanks,
-Dilip Daya


___
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss



