Re: Task Result Dashboards

2017-01-16 Thread Tim Flink
On Fri, 13 Jan 2017 13:58:25 +0100
Josef Skladanka  wrote:

> On Thu, Jan 12, 2017 at 7:42 AM, Tim Flink  wrote:
> 
> > The idea was to start with static site generation because it doesn't
> > require an application server, is easy to host and likely easier to
> > develop, at least initially.
> >
> I don't really have a strong preference either way, just wanted to say
> that the "initial development" time is the same for a web app and for
> statically generated pages - both do the same thing: take an input plus
> an output template and produce output. You can't really get around
> that from what I'm seeing here. A statically generated page equals
> cached data in the app, and for starters we can go on using just the
> stupidest of caches provided in Flask (even though it might well be
> cool and interesting to use some document store later on, but that's
> premature optimization now).

Honestly, I don't care a whole lot about how the dashboards are
implemented so long as they get done and they get done relatively
quickly.


> > >After brief discussion with jskladan, I understand that
> > > ResultsDB would be able to handle requests from a dynamic page.
> >
> > Sure but then someone would have to write and maintain it. The
> > things that drove me towards static site generation are:
> >  
> 
> Write and maintain what? I'm being sarcastic here, but this sounds
> like the code for statically generated pages will not have to be
> written and maintained... And once again - the actual code that does
> the actual thing will be the same, regardless of whether the output is
> a web page or an HTTP response.

I was thinking that a static site generator would work around the need
for auth and interface code to create new dashboards. We could just
have a git repo with yaml files and if someone wanted a new dashboard,
they could just submit a PR with the new yaml file.
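
For illustration, a minimal sketch of that generator loop, assuming a
dashboards/ directory of PR-submitted YAML files, a single Jinja2 template
and an output/ directory (all of these names are hypothetical):

    import glob
    import os

    import yaml
    from jinja2 import Environment, FileSystemLoader

    env = Environment(loader=FileSystemLoader("templates"))
    template = env.get_template("dashboard.html")   # hypothetical template

    os.makedirs("output", exist_ok=True)
    for path in glob.glob("dashboards/*.yml"):
        with open(path) as f:
            plan = yaml.safe_load(f)   # one PR-submitted dashboard description
        name = os.path.splitext(os.path.basename(path))[0]
        # results would be fetched from ResultsDB here and passed to the template
        html = template.render(plan=plan)
        with open(os.path.join("output", name + ".html"), "w") as out:
            out.write(html)

Adding a new dashboard would then just be a pull request that drops one more
YAML file into dashboards/.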

> >  
> > > * I'm not sure what exactly is meant by 'item tag' in the examples
> > > section.
> > >
> > > * Would the YAML configuration look something like this:
> > >
> >    url: link.to.resultsdbapi.org
> >    overview:
> >    - testplan:
> >      - name: LAMP
> >      - items:
> >        - mariadb
> >        - httpd
> >      - tasks:
> >        - and:
> >          - rpmlint
> >          - depcheck
> >        - or:
> >          - foo
> >          - bar
> >
> > I was thinking more of the example yaml that's in the git repo at
> > taskdash/mockups/yamlspec.yml [1] but I'm not really tied to it
> > strongly
> > - so long as it works and the format is easy enough to understand.
> >
> >  
> I guess I know where you were going with that example, but it is a bit
> lacking. For one, all it really allows for is a "hard and" relationship
> between the testcases in the testplan (dashboard, call it whatever you
> like), which might be enough, but given what was said here it will
> start being insufficient pretty fast. The other thing is that we
> really want to be able to do the "item selection" in some way. We
> sure could say "take all results for all these four testcases, and
> produce a line per item", but that is so broad that it IMO stops
> making sense anywhere beyond the "global" (read: applicable to all the
> items in ResultsDB) testplans.

This is meant as an initial direction, not a final resting place. I
fully expect that the functionality will continue to evolve if we adopt
the project.
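
For what it's worth, evaluating a nested and/or structure along the lines of
Lukas's example would not need much code; a rough sketch, assuming
ResultsDB-style outcomes and a plain {testcase: outcome} dict (both
assumptions):

    # evaluate a nested and/or testplan node against {testcase: outcome}
    def evaluate(node, outcomes):
        if isinstance(node, str):               # leaf: a single testcase name
            return outcomes.get(node) == "PASSED"
        if isinstance(node, dict):
            if "and" in node:
                return all(evaluate(child, outcomes) for child in node["and"])
            if "or" in node:
                return any(evaluate(child, outcomes) for child in node["or"])
        raise ValueError("unrecognized testplan node: %r" % node)

    # evaluate({"and": ["rpmlint", "depcheck", {"or": ["foo", "bar"]}]},
    #          {"rpmlint": "PASSED", "depcheck": "PASSED",
    #           "foo": "FAILED", "bar": "PASSED"})   -> True
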
> > >Is there going to be any additional grouping (for example,
> > > based on arch) or some kind of more precise outcome aggregation
> > > (only warn if part of testplan is failing, etc.)  
> >
> > Maybe but I think those features can be added later. Are you of the
> > mind that we need to take those things into account now?
> >
> >  
> I don't really think that they can. Take a simple "gating" dashboard
> for example. There is a pretty huge difference between "package
> passes if rpmlint, depcheck and abicheck pass on it" and "package
> passes if rpmlint, depcheck and abicheck pass for all the required
> arches". And I'm certain we want to be able to do the latter. Like it
> is not really "pass" when rpmlint passed on ARM, depcheck on x86_64
> and abicheck on i386, but all the other combinations failed.

Why can't all that be hardcoded for now, at least? The required checks
and arches don't change very often.
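
A hard-coded gate of the kind meant here could be as small as the following
sketch; the required sets and the shape of the result dicts are assumptions
for illustration, not anything ResultsDB-specific:

    REQUIRED_TESTCASES = {"rpmlint", "depcheck", "abicheck"}
    REQUIRED_ARCHES = {"x86_64", "i386", "armhfp"}

    def gating_outcome(results):
        # results: iterable of dicts with "testcase", "arch" and "outcome" keys
        passed = {(r["testcase"], r["arch"])
                  for r in results if r["outcome"] == "PASSED"}
        needed = {(t, a) for t in REQUIRED_TESTCASES for a in REQUIRED_ARCHES}
        return "PASSED" if needed <= passed else "FAILED"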

> It might seem like unnecessarily overcomplicating things, but I don't
> think that the dashboard-generating tool should make assumptions (like
> that grouping by arch is what you want to do) - it should be spelled
> out in the input format, so that as much of the black box is removed
> as possible. Will it take more time to write the input? Sure. Is it
> worth it? Absolutely.

Again, this wasn't intended as a final spec but as a starting point. If
at all possible, I want to have something which can be shown off as at
least a demo before devconf. With that in mind, I'd like to keep
everything as simple as possible for now.

>

Re: Task Result Dashboards

2017-01-13 Thread Josef Skladanka
On Thu, Jan 12, 2017 at 7:42 AM, Tim Flink  wrote:

> The idea was to start with static site generation because it doesn't
> require an application server, is easy to host and likely easier to
> develop, at least initially.
>
I don't really have a strong preference either way, just wanted to say
that the "initial development" time is the same for a web app and for
statically generated pages - both do the same thing: take an input plus an
output template and produce output. You can't really get around that from
what I'm seeing here. A statically generated page equals cached data in the
app, and for starters we can go on using just the stupidest of caches
provided in Flask (even though it might well be cool and interesting to use
some document store later on, but that's premature optimization now).
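
(As a side note, Flask itself doesn't ship a cache; the sketch below uses
the Flask-Caching extension's in-memory "simple" backend, and the route and
timeout are made up for illustration.)

    from flask import Flask
    from flask_caching import Cache

    app = Flask(__name__)
    cache = Cache(app, config={"CACHE_TYPE": "simple"})

    @app.route("/dashboard/<name>")
    @cache.cached(timeout=600)     # re-render at most once per 10 minutes
    def dashboard(name):
        # hypothetical: query ResultsDB and render the template here;
        # the cached decorator keeps repeated hits from re-querying
        return "<h1>dashboard: %s</h1>" % name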


> >After brief discussion with jskladan, I understand that ResultsDB
> > would be able to handle requests from a dynamic page.
>
> Sure but then someone would have to write and maintain it. The things
> that drove me towards static site generation are:
>

Write and maintain what? I'm being sarcastic here, but this sounds like the
code for statically generated pages will not have to be written and
maintained... And once again - the actual code that does the actual thing
will be the same, regardless of whether the output is a web page or an HTTP
response.

>
> > * I'm not sure what exactly is meant by 'item tag' in the examples
> > section.
> >
> > * Would the YAML configuration look something like this:
> >
> >    url: link.to.resultsdbapi.org
> >    overview:
> >    - testplan:
> >      - name: LAMP
> >      - items:
> >        - mariadb
> >        - httpd
> >      - tasks:
> >        - and:
> >          - rpmlint
> >          - depcheck
> >        - or:
> >          - foo
> >          - bar
>
> I was thinking more of the example yaml that's in the git repo at
> taskdash/mockups/yamlspec.yml [1] but I'm not really tied to it strongly
> - so long as it works and the format is easy enough to understand.
>
>
I guess I know where you were going with that example, but it is a bit
lacking. For one, all it really allows for is a "hard and" relationship
between the testcases in the testplan (dashboard, call it whatever you
like), which might be enough, but given what was said here it will start
being insufficient pretty fast. The other thing is that we really want to
be able to do the "item selection" in some way. We sure could say "take all
results for all these four testcases, and produce a line per item", but that
is so broad that it IMO stops making sense anywhere beyond the "global"
(read: applicable to all the items in ResultsDB) testplans.


> >Is there going to be any additional grouping (for example, based
> > on arch) or some kind of more precise outcome aggregation (only warn
> > if part of testplan is failing, etc.)
>
> Maybe but I think those features can be added later. Are you of the
> mind that we need to take those things into account now?
>
>
I don't really think that they can. Take a simple "gating" dashboard for
example. There is a pretty huge difference between "package passes if
rpmlint, depcheck and abicheck pass on it" and "package passes if rpmlint,
depcheck and abicheck pass for all the required arches". And I'm certain we
want to be able to do the latter. Like it is not really "pass" when rpmlint
passed on ARM, depcheck on x86_64 and abicheck on i386, but all the other
combinations failed.

It might seem like unnecessarily overcomplicating things, but I don't think
that the dashboard-generating tool should make assumptions (like that
grouping by arch is what you want to do) - it should be spelled out in the
input format, so that as much of the black box is removed as possible.
Will it take more time to write the input? Sure. Is it worth it? Absolutely.



> > * Are we going to generate the dashboard for the latest results only,
> > and/or some kind of summary over a given period of history?
>
> For now, the latest results. In my mind, we'd be running the dashboard
> creation on a cron job or in response to fedmsgs. At that point, we'd
> date the generated dashboards and keep a record of those without
> needing a lot more complexity
>

The question here is "what counts as the latest results"? Do we just take
now minus a month for the first run, and then "update" on top of that? I
would not necessarily have a problem with that, it's just that we most
definitely would want to capture _some_ timespan, and I think this is more
a question of "what timespan it is".
If we decide to go with "take the old state, apply updates on top of that",
then we will (I think) pretty fast arrive at a point where we mirror the
data from ResultsDB, just in a different format, stored in a document store
instead of a relational database. Not saying it's a bad or wrong thing to
do. I actually think it's a pretty good solution - better than querying
increasingly more data from ResultsDB anyway.
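
For comparison, pulling only a recent timespan on every run is a fairly
small query against ResultsDB; roughly like the sketch below, where the
deployment URL, the "since" filter and the dist.rpmlint testcase are
assumptions about the API rather than confirmed details:

    from datetime import datetime, timedelta

    import requests

    RESULTSDB = "https://taskotron.fedoraproject.org/resultsdb_api/api/v2.0"

    since = (datetime.utcnow() - timedelta(days=1)).isoformat()
    resp = requests.get(RESULTSDB + "/results",
                        params={"testcases": "dist.rpmlint", "since": since})
    resp.raise_for_status()
    for result in resp.json().get("data", []):
        item = result.get("data", {}).get("item")   # assumption about layout
        print(item, result.get("outcome"))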

Re: Task Result Dashboards

2017-01-12 Thread Matthew Miller
On Wed, Jan 11, 2017 at 11:42:32PM -0700, Tim Flink wrote:
> The motivation is to enable more complex visualizations of results. If
> we're interested in the current state of all ruby packages or all
> python packages it's not all that easy to see that at a glance with our
> current resultsdb interfaces.
> 
> I can also see having a dashboard for all critpath packages or all
> packages needed for building a livecd being a useful thing to render

Oh, I see. Cool. This seems like it could be very useful for Modularity
— one dashboard for each module.


-- 
Matthew Miller

Fedora Project Leader


Re: Task Result Dashboards

2017-01-11 Thread Tim Flink
On Wed, 11 Jan 2017 12:43:23 +0100
Lukas Brabec  wrote:

> Hi team,
> 
> I would like to open a discussion on the topic "Task Result Dashboards".
> I'm posting here in order to avoid the long off-the-grid discussions we
> had last time regarding the docker testing stuff.
> There is a tracking ticket in phab [1] that links to tflink's initial
> ideas [2].
> 
> * What is the motivation, what do we want to achieve with such
> dashboards and who is the 'non-technical audience'?

The motivation is to enable more complex visualizations of results. If
we're interested in the current state of all ruby packages or all
python packages it's not all that easy to see that at a glance with our
current resultsdb interfaces.

I can also see having a dashboard for all critpath packages or all
packages needed for building a livecd being a useful thing to render

> * Runnable once a day or once per hour at minimum; does this imply a
> static, periodically refreshed page? If so, what is the motivation for
> a static website?

The idea was to start with static site generation because it doesn't
require an application server, is easy to host and likely easier to
develop, at least initially.

>After brief discussion with jskladan, I understand that ResultsDB
> would be able to handle requests from a dynamic page.

Sure but then someone would have to write and maintain it. The things
that drove me towards static site generation are:

 - easier to make something quickly

 - enables easier dashboard contribution; we just need the description
   of what's being visualized.

> * I'm not sure what exactly is meant by 'item tag' in the examples
> section.
> 
> * Would the YAML configuration look something like this:
> 
>    url: link.to.resultsdbapi.org
>    overview:
>    - testplan:
>      - name: LAMP
>      - items:
>        - mariadb
>        - httpd
>      - tasks:
>        - and:
>          - rpmlint
>          - depcheck
>        - or:
>          - foo
>          - bar

I was thinking more of the example yaml that's in the git repo at
taskdash/mockups/yamlspec.yml [1] but I'm not really tied to it strongly
- so long as it works and the format is easy enough to understand.

>Is there going to be any additional grouping (for example, based
> on arch) or some kind of more precise outcome aggregation (only warn
> if part of testplan is failing, etc.)

Maybe but I think those features can be added later. Are you of the
mind that we need to take those things into account now?

> * Are we going to generate the dashboard for the latest results only,
> and/or some kind of summary over a given period of history?

For now, the latest results. In my mind, we'd be running the dashboard
creation on a cron job or in response to fedmsgs. At that point, we'd
date the generated dashboards and keep a record of those without
needing a lot more complexity
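
Either trigger is cheap to wire up; a rough sketch of both follows, where
the script path and the fedmsg topic suffix are assumptions:

    # cron variant - regenerate every hour:
    #   0 * * * *  /usr/bin/python /srv/taskdash/generate_dashboards.py
    #
    # fedmsg variant - regenerate whenever a new Taskotron result lands:
    import fedmsg

    def regenerate():
        pass  # placeholder for the actual static site generation step

    for name, endpoint, topic, msg in fedmsg.tail_messages():
        if topic.endswith("taskotron.result.new"):
            regenerate()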

> * How are Task dashboards related to Static dashboards [3]?

They're the same thing - a demonstration of me somehow forgetting that
there was a basic ticket when I wrote T738.

Thanks for taking this on, it'll be great to have some nicer status
displays of the kinds of checks running in Taskotron.

Tim


[1]
https://bitbucket.org/tflink/taskdash/src/67f370e62f163de94d8404331d13c404fbf1ec73/mockups/yamlspec.yml?at=master&fileviewer=file-view-default
> 
> [1] https://phab.qa.fedoraproject.org/T725
> [2] https://bitbucket.org/tflink/taskdash
> [3] https://phab.qa.fedoraproject.org/T738


Re: Task Result Dashboards

2017-01-11 Thread Matthew Miller
On Wed, Jan 11, 2017 at 12:43:23PM +0100, Lukas Brabec wrote:
> * What is the motivation, what do we want to achieve with such
> >dashboards and who is the 'non-technical audience'?

I guess that depends on what you mean by the technical audience.

Other than people who are steeped in QA day in and day out, I can see
several audiences:

* Volunteer packagers who have hit a failed check and are trying to
  dig out what it means.

* Users who are following a particular package or an issue they've
  encountered

* Anyone in the project who wants to have some task automated and have
  results that are shareable. I'm thinking right now of the planned
  Docs engine (where a git commit triggers a build or a translations
  update triggers a cache refresh, and then the build results are
  pushed to another git repo which backs a static web site)

* People not currently in the project who we want to impress and
  entice into becoming contributors with our awesome and easy-to-use
  tooling. 

* Me, trying to get an overview of the current state of Fedora activity
  :)


Is this what you are looking for?


-- 
Matthew Miller

Fedora Project Leader


Task Result Dashboards

2017-01-11 Thread Lukas Brabec
Hi team,

I would like to open a discussion on the topic "Task Result Dashboards". I'm
posting here in order to avoid the long off-the-grid discussions we had last
time regarding the docker testing stuff.
There is a tracking ticket in phab [1] that links to tflink's initial ideas [2].

* What is the motivation, what do we want to achieve with such dashboards and
   who is the 'non-technical audience'?

* Runnable once a day or once per hour at minimum; does this imply a static,
   periodically refreshed page? If so, what is the motivation for a static
   website?
   After brief discussion with jskladan, I understand that ResultsDB would be
   able to handle requests from a dynamic page.

* I'm not sure what exactly is meant by 'item tag' in the examples section.

* Would the YAML configuration look something like this:

   url: link.to.resultsdbapi.org
   overview:
   - testplan:
     - name: LAMP
     - items:
       - mariadb
       - httpd
     - tasks:
       - and:
         - rpmlint
         - depcheck
       - or:
         - foo
         - bar

   Is there going to be any additional grouping (for example, based on arch) or
   some kind of more precise outcome aggregation (only warn if part of testplan
   is failing, etc.)

* Are we going to generate the dashboard for the latest results only, and/or
   some kind of summary over a given period of history?

* How are Task dashboards related to Static dashboards [3]?



[1] https://phab.qa.fedoraproject.org/T725
[2] https://bitbucket.org/tflink/taskdash
[3] https://phab.qa.fedoraproject.org/T738