Hi Gerard and Joe,

Thank you for your feedback and comments. It's a good idea to discuss this
further during the next community meeting.


Just a few comments about the current bare metal pod:

  *   The connection to Jenkins seems to be configured properly and the pod is
reported as up: https://build.opnfv.org/ci/computer/unh-pod1/ Have you tried
to reboot the jump host to verify that monit is started properly?
  *   I've enabled passwordless sudo for the jenkins user at the jump host (as
Jenkins jobs are executed by this user, it might be required to install/update
some packages, switch to different users, etc.)
  *   I've changed the jump host name from "localhost" to "unh-pod1-jump". As
"unh-pod1" was chosen as the pod name in Jenkins, we should rename the pod
nodes accordingly. However, I was not able to figure out the credentials/keys
to access the other physical nodes - any hint?
  *   I was not able to find the UNH Lab listed among the community labs at
https://wiki.opnfv.org/display/pharos/Community+Labs Of course there are
dedicated pages about LaaS support at the UNH Lab, but it would be good to
follow current practice, i.e. to document the bare metal servers in the same
way as the other labs. If you agree, I can prepare an initial version.


Have a nice day,
Martin
________________________________
From: gerard.d...@wipro.com <gerard.d...@wipro.com>
Sent: Tuesday, June 5, 2018 6:50 PM
To: Klozik Martin; opnfv-tech-discuss@lists.opnfv.org
Cc: Tina Tsou
Subject: RE: [Auto] CI for Auto thoughts


Thanks for your feedback!



In the same order as your 9 bullet points:



1) dedicated pod, 2nd pod, robustness:

That particular pod is reserved for Auto. It is bare-metal, with 6 separate 
machines.

There might indeed be a need in the future for a 2nd pod, but we're not there 
yet ;)

Can you check if the Jenkins slave on that pod is correctly configured and
robust?



2) job frequency, per branch

Yes, the master branch can be triggered daily (even if no repo change). In the 
early weeks/months, there should not be any overload on the pod.

Indeed, for the stable/<release> branch, it only needs to be triggered when 
there was a change. Maybe also do a weekly sanity/health check.



3) job definition in YAML file in releng repo and job content in project repo

The structure was inspired by the armband project, and I guess it is the same
for all projects?

It is the same approach used for the documentation (doc file names centralized 
in opnfvdocs repo, doc files distributed in project repos). Centralized content 
is changed less frequently than distributed content.
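This centralized/distributed split can be sketched as a thin dispatcher
script living in the project repo, invoked by the releng YAML with a single
argument. The path ci/build-auto.sh and the job bodies below are hypothetical,
modeled on VSPERF's ci/build-vsperf.sh:

```shell
#!/bin/sh
# Hypothetical ci/build-auto.sh: single entry point in the project repo,
# called from the releng JJB YAML with one argument selecting the job type.
# The YAML stays tiny and stable; the job content evolves in the project repo.

run_job() {
    case "$1" in
        daily)
            echo "running daily suite"
            # full OPNFV + ONAP install and daily test run would go here
            ;;
        verify|merge)
            echo "running $1 checks"
            # lint, doc build and sanity test cases would go here
            ;;
        *)
            echo "usage: build-auto.sh daily|verify|merge" >&2
            return 1
            ;;
    esac
}

# Jenkins would invoke, e.g.: ./ci/build-auto.sh verify
run_job "${1:-verify}"
```

The gain is the one Martin points out below: changes to the job body are
themselves reviewed and exercised by the verify job before merging.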



4) conditional sequence of job groups

Yes, certainly, no point trying subsequent jobs in case one fails.

Just one comment, though: some jobs near the end of the pipeline will be 
performance measurements (test cases). They will need to "pass" from a software 
execution point of view, but even if the measured performance is "poor" 
(against some SLA target numbers for example), the pipeline still needs to 
continue. The result analysis phase afterwards will sort things out. In other 
words, quantitative "quality/performance failures" should not interrupt a 
pipeline like binary pass/fail "execution failures".
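A minimal shell sketch of that rule, with invented job names, metric and SLA
target: execution failures abort the run (set -e), while a missed SLA target
is only recorded for the later analysis phase:

```shell
#!/bin/sh
# Sketch: execution failures stop the pipeline; quantitative SLA misses
# are remembered but do not interrupt it. All names and numbers are made up.

set -e   # any command that fails "in execution" aborts the pipeline

sla_ok=true

run_perf_test() {
    # Pretend the test executed successfully and measured 42 of some metric.
    measured=42
    target=100
    if [ "$measured" -lt "$target" ]; then
        echo "SLA MISS: measured=$measured target=$target"
        sla_ok=false    # recorded for the analysis phase, not fatal
    fi
}

echo "install-opnfv: OK"     # stands in for a real install step
echo "install-onap: OK"      # stands in for a real install step
run_perf_test                # returns 0 even on an SLA miss
echo "pipeline finished; sla_ok=$sla_ok"
```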



5) output filtering, full logs retention period

Sure, that can help. Specific Auto test cases will definitely have customized 
outputs in files or DBs (to capture metric values), not just standard outputs 
to the console.

Also, recurring analyses could be designed to look systematically into the
full logs (data mining, even ML/AI ultimately, to empirically fine-tune things
like policies and thresholds).
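One possible shape for such filtering, sketched with made-up step names: each
step's full output goes to a file, the console shows only a one-line verdict,
and the full log is printed only when the step fails:

```shell
#!/bin/sh
# Sketch of per-step output filtering for a CI console log.
# Step names and commands below are illustrative only.

run_step() {
    name=$1; shift
    log=$(mktemp)
    if "$@" >"$log" 2>&1; then
        echo "$name ... OK"               # success: one quiet line
    else
        echo "$name ... FAILED, full log follows:"
        cat "$log"                        # failure: dump everything
    fi
    rm -f "$log"
}

run_step "code validation" true                          # placeholder check
run_step "doc validation"  sh -c 'echo noisy output; true'
run_step "sanity TC"       sh -c 'echo detailed error; false'
```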



6) sharing Verify and Merge

Agreed: it sounds to me like Merge is a special case of Verify (the last
Verify in the sequence of Verifies implied by a change/patch).



7) storing artefacts into OPNFV

I'm not familiar with this, but I'd say the test case execution results could
end up in some OPNFV store, maybe in Functest, or in the artifactory. For the
installation jobs, some success/failure results may also be stored. The
ONAP-specific results might be shared with ONAP.



8) OPNFV and ONAP versions as parameters

Yes, absolutely. And additional parameters would be CPU architectures, VNFs
(and their versions too), clouds (starting with OpenStack in its various
versions, then AWS, Azure, GCP, ... and their versions), VM image versions,
and so on. There could easily be far too many parameters, leading to
combinatorial explosion: better order another 10 or 100 pods already ;)



9) OPNFV installer, ONAP deployment method as parameters

Yes, likewise: yet other parameters. I suppose the combination of parameters 
will have to be controlled and selected manually, not the full Cartesian 
product of all possibilities.



We’ll be debriefing this discussion during the next Auto weekly meeting (June 
11th):

https://wiki.opnfv.org/display/AUTO/Auto+Project+Meetings



Best regards,

Gerard





From: Klozik Martin [mailto:martin.klo...@tieto.com]
Sent: Monday, June 4, 2018 9:40 AM
To: opnfv-tech-discuss@lists.opnfv.org
Cc: Tina Tsou <tina.t...@arm.com>; Gerard Damm (Product Engineering Service) 
<gerard.d...@wipro.com>
Subject: [Auto] CI for Auto thoughts




Hi Auto Team,



I've read the CI-related wiki page at
https://wiki.opnfv.org/display/AUTO/CI+for+Auto and I would like to discuss
with you a few thoughts based on my experience with CI from the OPNFV
vswitchperf project.



  *   I agree that a dedicated POD should be reserved for Auto CI. Based on
the length of the job execution and the content of the VERIFY & MERGE jobs, it
might become necessary to introduce an additional (2nd) POD in the future. I
would prefer a bare metal POD to a virtual one, to simplify the final setup
and thus improve its robustness.
  *   Depending on the configuration, daily jobs can be triggered either daily
or "daily, if there was any commit since the last CI run" (pollSCM trigger).
The second setup can ease the load on the CI POD - useful in case the POD is
shared among several projects or jobs, e.g. more compute power is left for
VERIFY/MERGE jobs. I suppose that in the case of Auto the frequency will
really be daily, to constantly verify OPNFV & ONAP (master branch)
functionality even if the Auto repository was not modified. In the case of a
stable release it doesn't make sense to run it on a daily basis, as the OPNFV
& ONAP versions will be fixed.
  *   I agree with the split of functionality between the job definition YAML
file (in the releng repo) and the real job "body" inside the Auto repo. This
adds flexibility to the job configuration, and it also allows us to verify new
changes as part of the verify job during the review process, before the
changes are merged. For example, the VSPERF project uses a CI-related script,
ci/build-vsperf.sh, which executes the daily, verify or merge job actions
based on its parameter. This script is then called from the Jenkins YAML file
(jjb/vswitchperf/vswitchperf.yaml) with the appropriate parameter.
  *   In the case of Auto CI, it might be useful to split the CI job into a
set of dependent jobs, where each job is executed only if the previous one has
succeeded. This will simplify the analysis of job failures, as it will be
visible at first sight whether a failure was caused by Auto TCs or by the
platform installers. The history of these jobs will give us a simple overview
of the stability of the individual "sub-jobs".

E.g.
    auto-daily-master
        |---> auto-install-opnfv
             |----> auto-install-onap
                  |-----> auto-daily-tests

example 2:

      auto-verify-master
        |---> auto-install-opnfv
             |----> auto-install-onap
                  |-----> auto-verify
                         code validation ... OK
                         doc validation .... OK
                         sanity TC ......... OK

  *   It might be useful to prepare some output filtering. Otherwise it would
be difficult to read the Jenkins job console log when analysing a CI run
(especially a failure). That means suppressing the console output of
successful steps or tests and printing only simplified results (as shown in
the auto-verify example above). This approach expects that the full logs are
either accessible (i.e. kept for a while) at the POD or dumped into the
Jenkins job console output in case an error is detected.
  *   There are three basic job types used in OPNFV - DAILY, VERIFY (triggered
by gerrit for every pushed patch or change to it) and MERGE (triggered by
gerrit after the patch is merged). I suppose that in the case of Auto, the
VERIFY & MERGE jobs can share their content.
  *   Do you plan to store any artifacts "generated" by the CI jobs in the
OPNFV artifactory? For example, vswitchperf pushes results into the OPNFV
results database (I've seen that Auto is also going to push to the results DB)
and after that it stores the corresponding log files and the test report at
artifacts.opnfv.org. Does it make sense to store logs from the OPNFV/ONAP
installation or from the execution of Auto TCs in the artifactory too?
  *   It might be useful to be able to easily switch between and test
different OPNFV/ONAP versions. For example, a "master" CI script can import a
versions.sh file, where the user/tester can specify the OPNFV and ONAP
branches or commit IDs to be used.
  *   Analogously to the previous point, the OPNFV installer and the ONAP
deployment method could be specified in the (same?) file.
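The last two bullets could be sketched as a single sourced parameter file;
all variable names and defaults here are illustrative only:

```shell
#!/bin/sh
# Sketch of a hypothetical versions.sh: one sourced file pins the OPNFV/ONAP
# versions plus the installer and deployment method, so a tester can switch
# them without touching the CI scripts themselves.

# --- contents that would live in versions.sh -------------------------------
OPNFV_BRANCH=${OPNFV_BRANCH:-master}          # branch or commit ID
ONAP_BRANCH=${ONAP_BRANCH:-master}            # branch or commit ID
OPNFV_INSTALLER=${OPNFV_INSTALLER:-fuel}      # e.g. fuel, apex, compass
ONAP_DEPLOY_METHOD=${ONAP_DEPLOY_METHOD:-oom}
# ----------------------------------------------------------------------------

# A "master" CI script would then do:  . ./versions.sh
echo "deploying OPNFV $OPNFV_BRANCH with $OPNFV_INSTALLER"
echo "deploying ONAP $ONAP_BRANCH via $ONAP_DEPLOY_METHOD"
```

The ${VAR:-default} form lets Jenkins job parameters or the environment
override any of the pinned values for a one-off run.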



Please consider these thoughts as an "outsider's" point of view on Auto CI, as
I'm new to the Auto project. It is quite possible that some of these points
have already been discussed or even solved. In that case, please don't
hesitate to point me to the sources I should check. Based on the discussion,
I'll take care of the Auto CI wiki page updates, if needed.



Thank you,

Martin

_______________________________________________
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
