Hi,

Because we don't have enough resources to keep Jenkins logs, or for other
reasons (for example, they are no longer valuable), the Jenkins system may
delete some of the historical records. So someday you may find that the
record indicated by build_id no longer exists. In my opinion, it is OK if
it does not exist.
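
For illustration, a minimal sketch of how a reporting page could tolerate
such an expired record, assuming the standard Jenkins JSON API; the function
and its parameters are assumptions for illustration, not existing portal code:

    import requests

    def fetch_build(job_url, build_id):
        """Return the Jenkins build record, or None if it was deleted."""
        url = "{}/{}/api/json".format(job_url.rstrip("/"), build_id)
        resp = requests.get(url)
        if resp.status_code == 404:
            # Jenkins rotated this historical record out; treat it as expired.
            return None
        resp.raise_for_status()
        return resp.json()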

BR,
Julien

David McBride <dmcbr...@linuxfoundation.org> wrote on Thursday, April 26, 2018 at 11:53 AM:

> Hi Jack,
>
> Sorry for the slow response.  I've been busy with the Fraser release.
>
> Comments:
>
>    1. I didn't understand point 1 about the build_id.  I thought that was
>    the purpose of the Jenkins macro that Julien developed.  What am I missing?
>    2. I'd like to see Health Check broken out as a separate column from
>    Functest.
>    3. The installer drop-down just lists "Fuel".  We need to include both
>    Fuel@x86 and Fuel@aarch64.
>
> Thanks for your effort in pushing this forward.
>
> David
>
>
> On Sun, Apr 22, 2018 at 8:49 PM, Chenjiankun (JackChan) <
> chenjiank...@huawei.com> wrote:
>
>> Hi David,
>>
>>
>>
>> As discussed in the previous email, I have created a first version of the
>> scenario results table (we can call it gating reporting :) ), see:
>>
>>          http://116.66.187.136:9998/index.html
>>
>> For the patch, see:
>>
>>          https://gerrit.opnfv.org/gerrit/#/c/56179/
>>
>>
>>
>> For now, I have added four filters:
>>
>> 1. Scenario
>>
>> 2. Version
>>
>> 3. Installer
>>
>> 4. Iteration (we query the last 10 days of data; the default iteration is
>> 10, and if there are more than 10 records, only the latest 10 are shown;
>> see the sketch below)
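>>
>> As a rough illustration of the iteration filter, here is a minimal sketch
>> assuming a MongoDB "results" collection similar to the one behind the
>> testapi; the collection and field names are assumptions, not the actual
>> portal code:
>>
>>     from datetime import datetime, timedelta
>>
>>     import pymongo
>>
>>     def query_last_iterations(scenario, version, installer, iteration=10):
>>         """Return at most `iteration` records from the last 10 days."""
>>         client = pymongo.MongoClient("mongodb://localhost:27017/")
>>         results = client["test_results_collection"]["results"]
>>         since = datetime.utcnow() - timedelta(days=10)
>>         query = {"scenario": scenario,
>>                  "version": version,
>>                  "installer": installer,
>>                  "start_date": {"$gte": since.strftime("%Y-%m-%d")}}
>>         cursor = (results.find(query)
>>                   .sort("start_date", pymongo.DESCENDING)
>>                   .limit(iteration))  # cap at the requested count
>>         return list(cursor)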
>>
>>
>>
>> For now, we still have some problems to solve:
>>
>> 1. build_id (Jenkins ID): we don't have a unified ID that marks the whole
>> set of Jenkins jobs (deployment + functest + yardstick). For now, I use
>> the build_tag (e.g. jenkins-functest-fuel-baremetal-daily-master-131,
>> where the project "functest" and the installer "fuel" are embedded in the
>> build_tag) as the index, so we can only show the functest result (see the
>> parsing sketch after this list).
>>
>> 2. Jenkins job results: these depend on build_id, so we can't show all
>> results (deployment + functest + yardstick; only functest for now). In
>> addition, each project needs to upload its results (UNTRIGGERED, PASS,
>> FAIL).
>>
>> 3. Deployment results: for now, only Daisy uploads its deployment result.
>> We need to ask each installer to upload deployment results; only once
>> build_id is ready can we show all results.
>>
>> 4. Statistic results: these depend on the projects uploading their
>> results.
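>>
>> As a hypothetical illustration of point 1, the build_tag could be split
>> into its parts like this (the field layout is inferred from tags such as
>> the example above, not from a documented format):
>>
>>     import re
>>
>>     # Assumed layout:
>>     # jenkins-<project>-<installer>-<pod>-daily-<branch>-<number>
>>     BUILD_TAG_RE = re.compile(
>>         r"^jenkins-(?P<project>\w+)-(?P<installer>\w+)-(?P<pod>\w+)"
>>         r"-daily-(?P<branch>\w+)-(?P<number>\d+)$")
>>
>>     def parse_build_tag(build_tag):
>>         """Return the parts of a build_tag, or None if it does not match."""
>>         match = BUILD_TAG_RE.match(build_tag)
>>         return match.groupdict() if match else None
>>
>>     # parse_build_tag("jenkins-functest-fuel-baremetal-daily-master-131")
>>     # -> {'project': 'functest', 'installer': 'fuel', 'pod': 'baremetal',
>>     #     'branch': 'master', 'number': '131'}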
>>
>>
>>
>> This is the first version; we will add more functions step by step after
>> the above issues are solved.
>>
>> @All, if you have any suggestions, please let me know.
>>
>>
>>
>> BRs,
>>
>> Jack Chan
>>
>>
>>
>> *From:* chenjiankun
>> *Sent:* January 22, 2018, 17:02
>> *To:* 'David McBride'
>> *Cc:* TECH-DISCUSS OPNFV; tro...@redhat.com; Brattain, Ross B; Rao,
>> Sridhar; OLLIVIER Cédric IMT/OLN; mark.bei...@dell.com; Yuyang
>> (Gabriel); ALFRED C 'MORTON ' (acmor...@att.com); emma.l.fo...@intel.com;
>> Liyin (Ace); Wangwulin (Linda); georg.k...@ericsson.com; Serena Feng;
>> Julien
>> *Subject:* RE: [opnfv-tech-discuss][test-wg]Requirements for test
>> resources collection
>>
>>
>>
>> Thanks, David.
>>
>>
>>
>> According to your description, I have created a demo table as below (I
>> hope I have not misunderstood your meaning):
>>
>>
>>
>> scenario              | date       | Jenkins    | Version   | Installer | Deployment | Functest    | yardstick
>> ----------------------|------------|------------|-----------|-----------|------------|-------------|------------
>> os-nosdn-nofeature-ha | 2018-01-21 | Jenkins id | euphrates | compass   | pass       | pass        | pass
>> os-nosdn-nofeature-ha | 2018-01-21 | Jenkins id | euphrates | compass   | fail       | not trigger | not trigger
>> statistic             |            |            |           |           | 8/9/10     | 6/7/8       | 6/7/8
>>
>> (8/9/10 means pass: 8, triggered: 9, total: 10)
>>
>>
>>
>>
>>
>> The last line in the table body is the statistics information, and the
>> lines above are the detailed information (and they can be folded).
>>
>> The score 8/9/10 means pass/triggered/total: "total" is the number of runs
>> that should have run, and "triggered" is the number that actually ran.
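>>
>> As a small sketch of how that score could be computed (assuming each
>> record carries a "criteria" field with PASS / FAIL / UNTRIGGERED values;
>> the field name is an assumption, not the actual schema):
>>
>>     def score(records, total):
>>         """Build the pass/triggered/total string for one column."""
>>         triggered = sum(1 for r in records
>>                         if r["criteria"] != "UNTRIGGERED")
>>         passed = sum(1 for r in records if r["criteria"] == "PASS")
>>         return "{}/{}/{}".format(passed, triggered, total)
>>
>>     # score([{"criteria": "PASS"}] * 8 + [{"criteria": "FAIL"}], total=10)
>>     # -> "8/9/10"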
>>
>> Also, we can add three filters:
>>
>>
>>
>> If you select "compass" as the installer, then all data related to
>> compass will be shown.
>>
>> "Iterations" means the last x data points to be displayed.
>>
>>
>>
>> Does this table satisfy your requirements?
>>
>>
>>
>> BRs,
>>
>> Jack Chan
>>
>> *From:* David McBride [mailto:dmcbr...@linuxfoundation.org]
>> *Sent:* January 20, 2018, 3:07
>> *To:* chenjiankun
>> *Cc:* TECH-DISCUSS OPNFV; tro...@redhat.com; Brattain, Ross B; Rao,
>> Sridhar; OLLIVIER Cédric IMT/OLN; mark.bei...@dell.com; Yuyang
>> (Gabriel); ALFRED C 'MORTON ' (acmor...@att.com); emma.l.fo...@intel.com;
>> Liyin (Ace); Wangwulin (Linda); georg.k...@ericsson.com; Serena Feng;
>> Julien
>> *Subject:* Re: [opnfv-tech-discuss][test-wg]Requirements for test
>> resources collection
>>
>>
>>
>> +Serena, Julien
>>
>>
>>
>> Thanks, Jack.
>>
>>    1. Data reported per scenario (i.e., jenkins job, deployment,
>>    functest, yardstick, etc. displayed together for each scenario) instead of
>>    separate test silos.
>>    2. Include deployment results
>>    3. Include all Jenkins job results (failure to start, failure to
>>    complete, etc.)
>>    4. Clear date/time stamps for every data point
>>    5. Display the data above for the last x data points (e.g., 4, 5, 10?)
>>    6. Use an easy-to-understand, unified scoring method for all test
>>    frameworks.
>>
>> As I mentioned yesterday, Julien and Serena have been working on this as
>> well.  Julien has developed a macro
>> <https://gerrit.opnfv.org/gerrit/#/c/48515/> to enable consolidation of
>> all results per scenario. He intends to use the Daisy installer as a
>> platform to verify the macro, which can then be adapted to other
>> installers.
>>
>>
>>
>> In addition, Serena has agreed to help manage an intern who can assist
>> with the project.  I have an action to create an intern proposal for that
>> purpose.
>>
>>
>>
>> David
>>
>>
>>
>> On Fri, Jan 19, 2018 at 1:23 AM, chenjiankun <chenjiank...@huawei.com>
>> wrote:
>>
>> Hi,
>>
>>
>>
>> As we discussed in the last test working group weekly meeting, we want to
>> do test resources aggregation.
>>
>> We plan to offer a friendly new web portal which contains all the existing
>> test resources and more functions.
>>
>>
>>
>> I have a broad classification as below:
>>
>> 1. Data analysis
>>
>>    a) Reporting (existing, for release)
>>
>>    b) Bitergia (existing)
>>
>>    c) Grafana (existing, for detailed test results)
>>
>>    d) ... (maybe we can develop more tools to show our detailed test
>> results)
>>
>> 2. Test working group information (what information do you want to see
>> from the test working group? Test working group events? Events of each
>> project?)
>>
>> 3. Tools of each project (needs each project's members to complete)
>>
>> 4. ... (waiting for you to improve)
>>
>>
>>
>>
>>
>> This email aims at collecting requirements for test resources, so if you
>> have any ideas about the classification, existing tools (such as
>> reporting), or new functions you want, please do not hesitate to comment
>> here.
>>
>> As Gabriel said, he will create a new wiki page for test resources
>> collection, so you can also comment there.
>>
>>
>>
>> @David, @Tim, can you repeat your advice about reporting here? I will try
>> my best to implement it.
>>
>> @All, all requirements, advice, and comments are welcome~ :)
>>
>>
>>
>> BRs,
>>
>> Jack Chan
>>
>>
>>
>>
>>
>> --
>>
>> *David McBride*
>>
>> Release Manager, OPNFV
>>
>> Mobile: +1.805.276.8018
>>
>> Email/Google Talk: dmcbr...@linuxfoundation.org
>>
>> Skype: davidjmcbride1
>>
>> IRC: dmcbride
>>
>
>
>
> --
> *David McBride*
> Release Manager, OPNFV
> Mobile: +1.805.276.8018
> Email/Google Talk: dmcbr...@linuxfoundation.org
> Skype: davidjmcbride1
> IRC: dmcbride
>
_______________________________________________
opnfv-tech-discuss mailing list
opnfv-tech-discuss@lists.opnfv.org
https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
