Re: Release validation NG: planning thoughts

2016-12-06 Thread Adam Williamson
On Mon, 2016-12-05 at 11:39 -0800, Adam Williamson wrote:
> 
> One interesting thought I had, though - should we store the *test
> cases* in the middleware 'validate/report' thing I've been describing
> here, or should we store them in ResultsDB?

Er, sorry, to be clear here: by 'store the test cases' I mean 'the
database entries for test cases'. *Not* the actual test case text
itself; I don't think that should be in the middleware system or in
resultsdb.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net


Re: Release validation NG: planning thoughts

2016-12-05 Thread Adam Williamson
On Mon, 2016-12-05 at 10:33 +0100, Josef Skladanka wrote:
> I kind of believe, that the "environment requirements" should be a part of
> the
> testplan - we should say that "testplan X needs testcase Y ran on Foo and
> Bar"
> in the testplan. Instead of listing all the different options in the
> testcase, and then
> just selecting "a version of it" in testplan.

Oh! Then yes, we absolutely agree. That's what I think too.

In my head, a 'test case' - to this system - is some kind of resource
locator pointing at what's expected to be instructions on testing
something, plus an id. And that's all it is. I absolutely agree that any
other metadata gets stored with the test plan, not with the test case.
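To make that concrete, something as minimal as this is all I have in mind
(a purely illustrative sketch; the field names and the example URL aren't a
proposed schema):

    # A 'test case' as this system sees it: a stable id plus a resource
    # locator pointing at the testing instructions, and nothing else.
    testcase = {
        "id": 42,  # stable numeric id
        "url": "https://fedoraproject.org/wiki/QA:Testcase_base_startup",
    }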

And yes, you were right in your assumption about what I meant by 'test
environment'.

One interesting thought I had, though - should we store the *test
cases* in the middleware 'validate/report' thing I've been describing
here, or should we store them in ResultsDB?

The 'test plan' stuff should clearly go in the middleware, I think. But
it's not so straightforward whether we keep the 'test cases' there or
in ResultsDB, especially if they're just super-simple 'here's a URL and
some identifiers for it' objects.

Oh, BTW, I definitely was thinking that we should cope with test cases
being moved around and having their human-friendly names changed, but
still being recognized as 'the same test case'.
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net


Re: Release validation NG: planning thoughts

2016-12-05 Thread Josef Skladanka
On Thu, Dec 1, 2016 at 6:04 PM, Adam Williamson wrote:

> On Thu, 2016-12-01 at 14:25 +0100, Josef Skladanka wrote:
> > On Wed, Nov 30, 2016 at 6:29 PM, Adam Williamson <adamw...@fedoraproject.org> wrote:
> > > On Wed, 2016-11-30 at 18:20 +0100, Josef Skladanka wrote:
> > > > I would try not to go the third way, because that is really prone to
> > > > errors IMO, and I'm not sure that "per context" is always right. So
> > > > for me, the "TCMS" part of the data should be:
> > > > 1) testcases (with required fields/types of the fields in the "result
> > > > response")
> > > > 2) testplans - which testcases, possibly organized into groups. Maybe
> > > > even dependencies + saying "I need testcase X to pass, Y can be pass
> > > > or warn, Z can be whatever when A passes, for the testplan to pass"
> > > >
> > > > But this is a fairly complex thing, to be honest, and it would be the
> > > > first and only usable TCMS in the world (from my point of view).
> > >
> > > I have rather different opinions, actually...but I'm not working on
> > > this right now and I'd rather have something concrete to discuss than
> > > just opinions :)
> >
> > We should obviously set goals properly, before diving into implementation
> > details :) I'm interested in what you have in mind, since I've been
> > thinking about this particular kind of thing for the last few years, and
> > it really depends on what you expect of the system.
>
> Well, the biggest point where I differ is that I think your 'third way'
> is kind of unavoidable. For all kinds of reasons.
>
> We re-use test cases between package update testing, Test Days, and
> release validation testing, for instance; some tests are more or less
> unique to some specific process, but certainly not all of them. The
> desired test environments may be significantly different in these
> different cases.
>

> We also have secondary arch teams using release validation processes
> similar to the primary arch process: they use many of the same test
> cases, but the desired test environments are of course not the same.
>
>
I think we actually agree, but I'm not sure, since I don't really know what
you mean by "test environment" and how it should
1) affect the data stored with the result
2) affect the testcase itself

I have a guess, and I base the rest of my response on it, but I'd rather
know than assume :)



> Of course, in a non-wiki based system you could plausibly argue that a
> test case could be stored along with *all* of its possible
> environments, and then the configuration for a specific test event
> could include the information as to which environments are relevant
> and/or required for that test event. But at that point I think you're
> rather splitting hairs...
>
> In my original vision of 'relval NG' the test environment wouldn't
> actually exist at all, BTW. I was hoping we could simply list test
> cases, and the user could choose the image they were testing, and the
> image would serve as the 'test environment'. But on second thought
> that's unsustainable as there are things like BIOS vs. UEFI where we
> may want to run the same test on the same image and consider it a
> different result. The only way we could stick to my original vision
> there would be to present 'same test, different environment' as another
> row in the UI, kinda like we do for 'two-dimensional test tables' in
> Wikitcms; it's not actually horrible UI, but I don't think we'd want to
> pretend in the backend that these were two completely different things. I
> mean, we could. Ultimately a 'test case' is going to be a database row
> with a URL and a numeric ID. We don't *have* to say the URL key is
> unique. ;)
>

I got a little lost here, but I think I understand what you are saying.
This is IMO one of the biggest pain points we have currently - the stuff
where we kind of consider "Testcase FOO" for BIOS and UEFI to be
the same, but different at the same time, and I think this is where the
TCMS should come into play, actually.

Because I believe that there is a fundamental difference between
1) the 'text' of the testcase (which says 'how to do it', basically)
2) the (what I think you call) environment - aka UEFI vs BIOS, 64bit vs
ARM, ...
3) the testplan

And this might be us saying the same things, but we can often end up in a
situation where we say stuff like "this test(case?) makes sense for BIOS
and UEFI, for x86_64 and ARM, for physical host and virtual machine, ..."
and sometimes it would make sense to store the 'variables of the env' with
the testcases, and sometimes in the testplan, and figuring out the split is
a difficult thing to do.

I kind of believe that the "environment requirements" should be a part of
the testplan - we should say "testplan X needs testcase Y run on Foo and
Bar" in the testplan, instead of listing all the different options in the
testcase and then just selecting "a version of it" in the testplan.
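A rough sketch of what that could look like, keeping the thread's own
placeholder names (X, Y, Foo, Bar); this is illustrative only, not a
settled format:

    # The testplan owns the environment requirements; the testcase stays a
    # plain "how to test it" document with no environment metadata.
    testplan_x = {
        "name": "testplan X",
        "requirements": [
            # "testplan X needs testcase Y run on Foo and Bar"
            {"testcase": "testcase Y", "environments": ["Foo", "Bar"]},
        ],
    }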

And this leads,

Re: Release validation NG: planning thoughts

2016-12-01 Thread Adam Williamson
On Thu, 2016-12-01 at 14:25 +0100, Josef Skladanka wrote:
> On Wed, Nov 30, 2016 at 6:29 PM, Adam Williamson wrote:
> > On Wed, 2016-11-30 at 18:20 +0100, Josef Skladanka wrote:
> > > I would try not to go the third way, because that is really prone to
> > > errors IMO, and I'm not sure that "per context" is always right. So for
> > > me, the "TCMS" part of the data should be:
> > > 1) testcases (with required fields/types of the fields in the "result
> > > response")
> > > 2) testplans - which testcases, possibly organized into groups. Maybe
> > > even dependencies + saying "I need testcase X to pass, Y can be pass or
> > > warn, Z can be whatever when A passes, for the testplan to pass"
> > >
> > > But this is a fairly complex thing, to be honest, and it would be the
> > > first and only usable TCMS in the world (from my point of view).
> >
> > I have rather different opinions, actually...but I'm not working on
> > this right now and I'd rather have something concrete to discuss than
> > just opinions :)
>
> We should obviously set goals properly, before diving into implementation
> details :) I'm interested in what you have in mind, since I've been
> thinking about this particular kind of thing for the last few years, and
> it really depends on what you expect of the system.

Well, the biggest point where I differ is that I think your 'third way'
is kind of unavoidable. For all kinds of reasons.

We re-use test cases between package update testing, Test Days, and
release validation testing, for instance; some tests are more or less
unique to some specific process, but certainly not all of them. The
desired test environments may be significantly different in these
different cases.

We also have secondary arch teams using release validation processes
similar to the primary arch process: they use many of the same test
cases, but the desired test environments are of course not the same.

Of course, in a non-wiki based system you could plausibly argue that a
test case could be stored along with *all* of its possible
environments, and then the configuration for a specific test event
could include the information as to which environments are relevant
and/or required for that test event. But at that point I think you're
rather splitting hairs...

In my original vision of 'relval NG' the test environment wouldn't
actually exist at all, BTW. I was hoping we could simply list test
cases, and the user could choose the image they were testing, and the
image would serve as the 'test environment'. But on second thought
that's unsustainable as there are things like BIOS vs. UEFI where we
may want to run the same test on the same image and consider it a
different result. The only way we could stick to my original vision
there would be to present 'same test, different environment' as another
row in the UI, kinda like we do for 'two-dimensional test tables' in
Wikitcms; it's not actually horrible UI, but I don't think we'd want to
pretend in the backend that these were two completely different things. I
mean, we could. Ultimately a 'test case' is going to be a database row
with a URL and a numeric ID. We don't *have* to say the URL key is
unique. ;)
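As a sketch (using SQLAlchemy purely for illustration, not as a proposed
schema), that row could be as small as this, with the URL column
deliberately left non-unique so the same page could back e.g. a BIOS and a
UEFI entry:

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class TestCase(Base):
        __tablename__ = "testcase"
        id = Column(Integer, primary_key=True)   # stable numeric ID
        url = Column(String, nullable=False)     # resource locator; intentionally NOT unique
        name = Column(String)                    # human-friendly name, free to change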

There are really all kinds of ways you can structure it, but I think
fundamentally they'd all boil down to the same inherent level of
complexity; some of them might be demonstrably worse than others
(like...sticking them all in wikicode and parsing wiki table syntax to
figure out when you have different 'test instances' for the same test
case! that sounds like a *really bad* way to do it!)

Er. I'm rambling, aren't I? One reason I actually tend to prefer just
sitting down and writing something to trying to plan it all out
comprehensively is that when I just sit here and try to think out
planning questions I get very long-winded and fuzzy and chase off down
all possible paths. Just writing a damn thing is usually quite quick
and crystallizes a lot of the questions wonderfully...
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net


Re: Release validation NG: planning thoughts

2016-11-30 Thread Adam Williamson
On Wed, 2016-11-30 at 18:20 +0100, Josef Skladanka wrote:
> I would try not to go the third way, because that is really prone to errors
> IMO, and I'm not sure that "per context" is always right. So for me, the
> "TCMS" part of the data should be:
> 1) testcases (with required fields/types of the fields in the "result
> response")
> 2) testplans - which testcases, possibly organized into groups. Maybe even
> dependencies + saying "I need testcase X to pass, Y can be pass or warn, Z
> can be whatever when A passes, for the testplan to pass"
> 
> But this is a fairly complex thing, to be honest, and it would be the first
> and only usable TCMS in the world (from my point of view).

I have rather different opinions, actually...but I'm not working on
this right now and I'd rather have something concrete to discuss than
just opinions :)
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net


Re: Release validation NG: planning thoughts

2016-11-30 Thread Josef Skladanka
On Wed, Nov 30, 2016 at 11:14 AM, Adam Williamson <adamw...@fedoraproject.org> wrote:

> On Wed, 2016-11-30 at 02:10 -0800, Adam Williamson wrote:
> > On Wed, 2016-11-30 at 09:38 +0100, Josef Skladanka wrote:
> > > So if this is what you wanted to do (data validation), it might be a
> > > good idea to have that submitter middleware.
> >
> > Yeah, that's really kind of the key 'job' of that layer. Remember,
> > we're dealing with *manual* testing here. We can't really just have a
> > webapp that forwards whatever the hell people manage to stuff through
> > its input fields into ResultsDB.
>
> I guess another way you could look at it is, this would be the layer
> where we actually define what kinds of manual test results we want to
> store in ResultsDB, and what the format for each type should be. I
> kinda like the idea that we could use the same middleware to do that
> job for various different frontends for submitting and viewing results,
> e.g. the webUI part of this project, a CLI app like relval, and a
> different webUI like testdays...
>
Yes, that IMO makes a lot of sense. Especially if we want to target
multiple "input tools". Then it might make sense to have what I was
discussing in the previous post (and what you have been, I think, talking
about) - a format (two of them, actually) that defines:
1) what testcases are relevant for X (where X is, say, Rawhide nightly
testing, Testday for translations, foobar)
2) required structure (fields, types of the fields) of the response

The question here is whether the "required structure" is better off "per
testcase" (i.e. "this testcase always requires these fields") or "per
context" (i.e. results for this "thing" always require these fields) or
even those combined ("this testcase, in this context, requires X, Y and Z,
but in this other context, it only needs FOOBAR").
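To make the two options concrete - field names and context names here are
invented for the example, not a proposal:

    # "per testcase": this testcase always requires these result fields
    required_per_testcase = {
        "testcase_base_startup": ["outcome", "compose", "image", "environment"],
    }

    # "per context": results for this "thing" always require these fields
    required_per_context = {
        "rawhide_nightly": ["outcome", "compose", "image"],
        "testday_translations": ["outcome", "language"],
    }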

I would try not to go the third way, because that is really prone to errors
IMO, and I'm not sure that "per context" is always right. So for me, the
"TCMS" part of the data should be:
1) testcases (with required fields/types of the fields in the "result
response")
2) testplans - which testcases, possibly organized into groups. Maybe even
dependencies + saying "I need testcase X to pass, Y can be pass or warn, Z
can be whatever when A passes, for the testplan to pass"

But this is a fairly complex thing, to be honest, and it would be the first
and only usable TCMS in the world (from my point of view).
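A rough illustration of how such a dependency rule could be expressed and
evaluated - the structure and outcome names are invented for the example,
just reusing the pass/warn wording from above:

    def testplan_passes(results):
        """results maps testcase name to outcome, e.g. {"X": "pass", "Y": "warn"}."""
        # "I need testcase X to pass"
        if results.get("X") != "pass":
            return False
        # "Y can be pass or warn"
        if results.get("Y") not in ("pass", "warn"):
            return False
        # "Z can be whatever when A passes" - only constrain Z if A did not pass
        if results.get("A") != "pass" and results.get("Z") != "pass":
            return False
        return True

    testplan_passes({"X": "pass", "Y": "warn", "A": "pass"})  # True; Z not needed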

Let's do it!


Re: Release validation NG: planning thoughts

2016-11-30 Thread Josef Skladanka
On Wed, Nov 30, 2016 at 11:10 AM, Adam Williamson <adamw...@fedoraproject.org> wrote:

> On Wed, 2016-11-30 at 09:38 +0100, Josef Skladanka wrote:
> > So if this is what you wanted to do (data validation), it might be a good
> > idea to have that submitter middleware.
>
> Yeah, that's really kind of the key 'job' of that layer. Remember,
> we're dealing with *manual* testing here. We can't really just have a
> webapp that forwards whatever the hell people manage to stuff through
> its input fields into ResultsDB.
>

I'm not sure I'm getting it right, but people will pass the data
through a "tool" (say a web app) which will provide fields to fill in, and
will most probably end up doing the data "sanitization" on its own. So the
"frontend" could store data directly in ResultsDB, since the frontend would
make the user fill in all the fields. I guess I know what you are getting at
("but this is exactly the double validation!") but it is IMHO actually
harder to have a "generic stupid frontend" that gets the "form schema" from
the middleware, shows the form, and blindly forwards data to the middleware,
showing errors back, than
1) having a separate app for that, which will know the validation rules
2) it being an actual frontend on the middleware, thus reusing the "check"
code internally


> Really there's two kinds of 'validation' going on, if you'd like to
> think of it that way: we need to tell the web UI 'these are the
> possible scenarios for which you should prompt users to input results
> at all'
>
Agreed


> (which for release validation is all the 'notice there's a new
> compose, combine it with the defined release validation test cases and
> expose all that info to the UI' work),

That is IMO a separate problem, but yeah.


> and we need to take the data the
> web UI generates from user input, make sure it actually matches up with
> the schema we decide on for storing the results before forwarding it to
> resultsdb, and tell the web UI there's a problem if it doesn't.
>
And this is what I have been discussing in the first part of the reply.


> That's how I see it, anyhow. Tell me if I seem way off. :)
> --
> Adam Williamson
> Fedora QA Community Monkey
> IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
> http://www.happyassassin.net


Re: Release validation NG: planning thoughts

2016-11-30 Thread Adam Williamson
On Wed, 2016-11-30 at 02:10 -0800, Adam Williamson wrote:
> On Wed, 2016-11-30 at 09:38 +0100, Josef Skladanka wrote:
> > So if this is what you wanted to do (data validation), it might be a good
> > idea to have that submitter middleware.
> 
> Yeah, that's really kind of the key 'job' of that layer. Remember,
> we're dealing with *manual* testing here. We can't really just have a
> webapp that forwards whatever the hell people manage to stuff through
> its input fields into ResultsDB.

I guess another way you could look at it is, this would be the layer
where we actually define what kinds of manual test results we want to
store in ResultsDB, and what the format for each type should be. I
kinda like the idea that we could use the same middleware to do that
job for various different frontends for submitting and viewing results,
e.g. the webUI part of this project, a CLI app like relval, and a
different webUI like testdays...
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net


Re: Release validation NG: planning thoughts

2016-11-30 Thread Adam Williamson
On Wed, 2016-11-30 at 09:38 +0100, Josef Skladanka wrote:
> So if this is what you wanted to do (data validation), it might be a good
> idea to have that submitter middleware.

Yeah, that's really kind of the key 'job' of that layer. Remember,
we're dealing with *manual* testing here. We can't really just have a
webapp that forwards whatever the hell people manage to stuff through
its input fields into ResultsDB.

Really there's two kinds of 'validation' going on, if you'd like to
think of it that way: we need to tell the web UI 'these are the
possible scenarios for which you should prompt users to input results
at all' (which for release validation is all the 'notice there's a new
compose, combine it with the defined release validation test cases and
expose all that info to the UI' work), and we need to take the data the
web UI generates from user input, make sure it actually matches up with
the schema we decide on for storing the results before forwarding it to
resultsdb, and tell the web UI there's a problem if it doesn't.
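As a sketch of that second kind of validation - all field names and allowed
values here are invented for the example, not a settled schema:

    REQUIRED_FIELDS = ("compose", "image", "testcase", "environment", "status", "user")

    def handle_submission(result, forward_to_resultsdb):
        """Check a submitted result against the configured schema, then forward it."""
        missing = [f for f in REQUIRED_FIELDS if f not in result]
        if missing:
            # tell the web UI there's a problem
            return {"error": "missing fields: %s" % ", ".join(missing)}
        if result["status"] not in ("pass", "fail", "warn"):
            return {"error": "unknown status %r" % result["status"]}
        forward_to_resultsdb(result)
        return {"ok": True}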

That's how I see it, anyhow. Tell me if I seem way off. :)
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net


Re: Release validation NG: planning thoughts

2016-11-30 Thread Josef Skladanka
On Tue, Nov 29, 2016 at 5:34 PM, Adam Williamson  wrote:

> On Tue, 2016-11-29 at 19:41 +0530, Kanika Murarka wrote:
> > 2. Keep a record of the number of validation tests done by a tester and
> > highlight it once they log in. A badge is being prepared for the number
> > of validation tests done by a contributor[1].
>
> Well, this information would kind of inevitably be collected at least
> in resultsdb and probably wind up in the transmitter component's DB
> too, depending on exactly how we set things up.
>

I think that this probably should be in ResultsDB - it's the actual stored
result data.
The transmitter component should IMO store the "semantics" (testplans,
stuff like that), and use the "raw" resultsdb data as a source to present a
meaningful view.
I'd say that, as a rule of thumb, replicating data in multiple places is a
sign of a design error.


Re: Release validation NG: planning thoughts

2016-11-30 Thread Josef Skladanka
On Mon, Nov 28, 2016 at 6:48 PM, Adam Williamson  wrote:

> On Mon, 2016-11-28 at 09:40 -0800, Adam Williamson wrote:
> > The validator/submitter component would be responsible for watching out
> > for new composes and keeping track of tests and 'test environments' (if
> > we keep that concept); it would have an API with endpoints you could
> > query for this kind of information in order to construct a result
> > submission, and for submitting results in some kind of defined form. On
> > receiving a result it would validate it according to some schemas that
> > admins of the system could configure (to ensure the report is for a
> > known compose, image, test and test environment, and do some checking
> > of stuff like the result status, user who submitted the result, comment
> > content, stuff like that). Then it'd forward the result to resultsdb.
>
> It occurs to me that it's possible resultsdb might be designed to do
> all this already, or it might make sense to amend resultsdb to do all
> or some of it; if that's the case, resultsdb folks, please do jump in
> and suggest it :)
>

That's what I thought when reading the proposal - the "Submitter" seems
like an unnecessary layer, to some extent - submitting stuff to resultsdb
is pretty easy.
What resultsdb is not doing now, though, is the data validation - let's say
you wanted to check that specific fields are set (on top of what resultsdb
requires, which basically is just testcase and outcome) - that can be done
in resultsdb (there is a diff with that functionality), but at the moment
only on a global level. So it might not necessarily make sense to set e.g.
'compose' as a required field for the whole resultsdb, since
testday-related results might not even have that.
So if this is what you wanted to do (data validation), it might be a good
idea to have that submitter middleware. Or (and I'm not sure it's the
better solution) I could try and make that configuration more granular, so
you could set the requirements e.g. per namespace, thus effectively
allowing setting the constraints even per testcase. But that would need
even more thought - should the constraints be inherited from the upper
layers? How about when all but one of the testcases in a namespace need to
have parameter X, but for that one it does not make sense? (Probably a
design error, but it needs to be thought through in the design phase.)
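Purely hypothetically, a more granular configuration could look something
like this - this is not existing ResultsDB functionality or syntax, just an
illustration of "requirements per namespace", with made-up namespace names:

    # Hypothetical per-namespace required-fields configuration.
    REQUIRED_EXTRA_DATA = {
        "relval.*": ["compose", "image", "environment"],   # release validation results
        "testdays.*": ["testday"],                         # testday results have no compose
    }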

So, even though ResultsDB could do that, it is borderline "too smart" for
it (I really want to keep any semantics out of ResultsDB). I'm not
necessarily against it (especially if we end up wanting that in more
places), but until now we have more or less worked with "the client that
submits data makes sure all required fields are set", i.e. "it's not
resultsdb's place to say what is or is not required for a specific
usecase". I'm not against the change, but at least for the first
implementation (of Release validation NG) I'd vote for the middleware
solution. We can add the data validation functionality to ResultsDB later
on, when we have a more concrete idea.

Makes sense?

Joza


Re: Release validation NG: planning thoughts

2016-11-29 Thread Adam Williamson
On Tue, 2016-11-29 at 19:41 +0530, Kanika Murarka wrote:
> Hey everyone,
> I have some thoughts for the project:-
> 
> 1. We can have a notification system, which gives notifications like:
> * 'There is a test day coming for this compose in 2 days'
> * 'A new compose has been added'
> Something to motivate and keep reminding testers about test days and new
> composes.

Yeah, this is certainly going to be needed, if only to replace
the Wikitcms event creation notification emails (these are sent by
'relvalconsumer', which is the fedmsg consumer bot that creates the
events).

> 2. Keep a record of the number of validation tests done by a tester and
> highlight it once they log in. A badge is being prepared for the number of
> validation tests done by a contributor[1].

Well, this information would kind of inevitably be collected at least
in resultsdb and probably wind up in the transmitter component's DB
too, depending on exactly how we set things up. For badge purposes,
we're *certainly* going to have this system firing off fedmsgs in all
directions, so the badges can be granted just based on the fedmsgs.
'User W reported an X for test Y on compose Z' (or similar) is a very
obvious fedmsg to emit.
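Something along these lines, roughly - the topic, modname and message
fields are just placeholders here, not an agreed fedmsg schema:

    import fedmsg

    # 'User W reported an X for test Y on compose Z'
    fedmsg.publish(
        topic="result.new",
        modname="relvalng",   # hypothetical module name
        msg={"user": "W", "result": "X", "testcase": "Y", "compose": "Z"},
    )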

> 3. Some way to show that testing for a particular compose is not required
> now, so testers can move on to newer composes.

We're talking about approximately this in the design ticket. My initial
design idea would *only* show images for the 'current' validation event
if you need to download an image for testing; I don't really see an
awful lot of point in offering older images for download. I suggested
offering events from the previous week or so for selection if you
already have an image downloaded, to prevent people having to download
new images all the time but also prevent us getting uselessly old
reports.

I'd see it as the validator/submitter component's job to keep track of
information about events/composes (however we conceive it), like when
they appeared, and the web UI's job to make decisions about which to
actually show people.

> 4. Also, we can add a 'sort by priority' option in the list of test images.

Yes, something like that, at least. The current system actually does
something more or less like this. The download tables on the wiki pages
are not randomly ordered, but ordered using a weighting provided by
fedfind which includes the importance of the image subvariant as a
factor:

https://pagure.io/fedora-qa/fedfind/blob/master/f/fedfind/helpers.py#_331

It currently penalizes ARM images quite heavily, which is not because
ARM isn't important, but a craven surrender to the practical realities
of wiki tables: they look a lot better if all the ARM disk images are
grouped together than if they're interspersed throughout the table. We
obviously have more freedom to avoid this issue in the design of the
new system.
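Not fedfind's actual code (see the helpers.py link above for that), but the
general idea is just a weight-based sort, something like this with made-up
weights:

    SUBVARIANT_WEIGHT = {"Workstation": 100, "Server": 90, "KDE": 50}
    ARCH_PENALTY = {"armhfp": -200}   # pushes ARM disk images to the bottom as a group

    def image_weight(image):
        return (SUBVARIANT_WEIGHT.get(image["subvariant"], 0)
                + ARCH_PENALTY.get(image["arch"], 0))

    images = [
        {"subvariant": "Minimal", "arch": "armhfp"},
        {"subvariant": "Workstation", "arch": "x86_64"},
        {"subvariant": "Server", "arch": "x86_64"},
    ]
    ordered = sorted(images, key=image_weight, reverse=True)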

Thanks for the thoughts!
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net


Re: Release validation NG: planning thoughts

2016-11-28 Thread Adam Williamson
On Mon, 2016-11-28 at 09:40 -0800, Adam Williamson wrote:
> The validator/submitter component would be responsible for watching out
> for new composes and keeping track of tests and 'test environments' (if
> we keep that concept); it would have an API with endpoints you could
> query for this kind of information in order to construct a result
> submission, and for submitting results in some kind of defined form. On
> receiving a result it would validate it according to some schemas that
> admins of the system could configure (to ensure the report is for a
> known compose, image, test and test environment, and do some checking
> of stuff like the result status, user who submitted the result, comment
> content, stuff like that). Then it'd forward the result to resultsdb.

It occurs to me that it's possible resultsdb might be designed to do
all this already, or it might make sense to amend resultsdb to do all
or some of it; if that's the case, resultsdb folks, please do jump in
and suggest it :)
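For what it's worth, a very rough sketch of the validate-then-forward flow
described above; the endpoint URL, field names and payload layout are
assumptions for illustration, not the actual design (check the real
ResultsDB API before copying any of this):

    import requests

    RESULTSDB_URL = "https://resultsdb.example.org/api/v2.0/results"  # assumed endpoint

    def submit_result(report):
        # validate against whatever schema the admins configured
        for field in ("compose", "image", "testcase", "environment", "outcome", "user"):
            if field not in report:
                raise ValueError("missing field: %s" % field)
        # forward the result to resultsdb
        resp = requests.post(RESULTSDB_URL, json={
            "testcase": report["testcase"],
            "outcome": report["outcome"],
            "data": {k: v for k, v in report.items()
                     if k not in ("testcase", "outcome")},
        })
        resp.raise_for_status()
        return resp.json()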
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net