Hi David & Jack,
Thanks for mentioning the task. An interface for collecting deployment
results is ready in TestAPI[1],
and the macro for pushing deployment results is also ready, thanks to
Julien[2]. I believe it will facilitate all the installers' work.
Currently, Julien is working on pushing Daisy results to TestAPI leveraging
that macro; I think it will be finished soon.
As for how to show all the information in a table, I suggest we take a
look at Jack's proposal first.
@Jack, a few comments:
1. What is the Jenkins ID column intended to show?
2. For a scenario-installer combination, some jobs will not run exactly
once a day (they may be triggered multiple times or run in multiple PODs).
In that case a simple pass/fail will be too vague, so to facilitate
support for data iteration, I would suggest
using 8/9/10 (8 passed, 9 triggered, 10 total) in each row and deleting
the final statistics line.
3. How about adding a healthcheck column (functest-healthcheck test cases),
to see whether the installer meets milestone 3.0?
[1]: https://gerrit.opnfv.org/gerrit/#/c/49895/
[2]: https://gerrit.opnfv.org/gerrit/#/c/48515/
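To make the 8/9/10 idea concrete, here is a minimal sketch of how such a
score string could be computed from a list of run outcomes. The function
name and input shape are illustrative only, not part of TestAPI:

```python
# Minimal sketch of the suggested pass/triggered/total scoring.
# "score" and the input format are hypothetical, not TestAPI's API.

def score(results, total):
    """results: outcomes of the runs that actually happened, e.g. ["PASS", "FAIL"];
    total: the number of runs that should have happened."""
    triggered = len(results)                              # runs that actually ran
    passed = sum(1 for r in results if r == "PASS")       # runs that passed
    return f"{passed}/{triggered}/{total}"

print(score(["PASS", "PASS", "FAIL"], 4))  # prints 2/3/4
```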
BRs
Serena
On Mon, Jan 22, 2018 at 5:02 PM chenjiankun <[email protected]> wrote:
> Thanks, David.
>
>
>
> According to your descriptions, I have created a demo table as below (I
> hope I have not misunderstood your meaning):
>
>
>
> | scenario              | date       | Jenkins    | Version   | Installer | Deployment | Functest    | yardstick   |
> | --------------------- | ---------- | ---------- | --------- | --------- | ---------- | ----------- | ----------- |
> | os-nosdn-nofeature-ha | 2018-01-21 | Jenkins id | euphrates | compass   | pass       | pass        | pass        |
> |                       | 2018-01-21 | Jenkins id | euphrates | compass   | fail       | not trigger | not trigger |
> | statistic             |            |            |           |           | 8/9/10 (pass: 8, triggered: 9, total: 10) | 6/7/8 | 6/7/8 |
>
>
> The last line in the table body is the statistics information, and the
> lines above it are the detailed information (which can be folded).
>
> The score 8/9/10 means pass/triggered/total. Total is the number of runs
> that should have happened; triggered is the number that actually ran.
>
> Also, we can add three filters:
>
>
>
> If you select compass as the installer, all data related to compass will
> be shown.
>
> Iterations means the last x data points to be displayed.
>
>
>
> Does this table satisfy your requirements?
>
>
>
> BRs,
>
> Jack Chan
>
> *From:* David McBride [mailto:[email protected]]
> *Sent:* January 20, 2018 3:07
> *To:* chenjiankun
> *Cc:* TECH-DISCUSS OPNFV; [email protected]; Brattain, Ross B; Rao,
> Sridhar; OLLIVIER Cédric IMT/OLN; [email protected]; Yuyang (Gabriel);
> ALFRED C 'MORTON ' ([email protected]); [email protected]; Liyin
> (Ace); Wangwulin (Linda); [email protected]; Serena Feng; Julien
> *Subject:* Re: [opnfv-tech-discuss][test-wg]Requirements for test resources
> collection
>
>
>
> +Serena, Julien
>
>
>
> Thanks, Jack.
>
> 1. Data reported per scenario (i.e., jenkins job, deployment,
> functest, yardstick, etc. displayed together for each scenario) instead of
> separate test silos.
> 2. Include deployment results
> 3. Include all Jenkins job results (failure to start, failure to
> complete, etc.)
> 4. Clear date/time stamps for every data point
> 5. Display the data above for the last x data points (e.g., 4, 5, 10 ?)
> 6. Use an easy-to-understand, unified scoring method for all test
> frameworks.
>
> As I mentioned yesterday, Julien and Serena have been working on this as
> well. Julien has developed a macro
> <https://gerrit.opnfv.org/gerrit/#/c/48515/> to enable consolidation of
> all results per scenario. He intends to use the Daisy installer as a
> platform to verify the macro, which can then be adapted to other installers.
>
>
>
> In addition, Serena has agreed to help manage an intern who can assist
> with the project. I have an action to create an intern proposal for that
> purpose.
>
>
>
> David
>
>
>
> On Fri, Jan 19, 2018 at 1:23 AM, chenjiankun <[email protected]>
> wrote:
>
> Hi,
>
>
>
> As we discussed in last week's test working group meeting, we want to do
> test resources aggregation.
>
> We plan to offer a new, friendly web portal which contains all existing
> test resources and more functions.
>
>
>
> I have a broad classification as below:
>
> 1. Data analysis
>
> a) Reporting (existing, for releases)
>
> b) Bitergia (existing)
>
> c) Grafana (existing, for detailed test results)
>
> d) …… (maybe we can develop more tools to show our detailed test
> results)
>
> 2. Test working group information (what information do you want to see
> from the test working group? Test working group events? Events of each project?)
>
> 3. Tools of each project (needs each project member to complete)
>
> 4. …… (waiting for you to improve)
>
>
>
>
>
> This email aims at collecting requirements for test resources, so if you
> have any ideas about the classification, existing tools (such as reporting),
> or new functions you want, please do not hesitate to comment here.
>
> As Gabriel said, he will create a new wiki page for test resources
> collection, so you can also comment there.
>
>
>
> @David, @Tim, can you repeat your advice about reporting here? I will try
> my best to implement it.
>
> @All, all requirements, advice, and comments are welcome~ :)
>
>
>
> BRs,
>
> Jack Chan
>
>
>
>
>
> --
>
> *David McBride*
>
> Release Manager, OPNFV
>
> Mobile: +1.805.276.8018
>
> Email/Google Talk: [email protected]
>
> Skype: davidjmcbride1
>
> IRC: dmcbride
> _______________________________________________
> opnfv-tech-discuss mailing list
> [email protected]
> https://lists.opnfv.org/mailman/listinfo/opnfv-tech-discuss
>