Re: Release validation NG: planning thoughts

2016-12-05 Thread Adam Williamson
On Mon, 2016-12-05 at 10:33 +0100, Josef Skladanka wrote:
> I kind of believe that the "environment requirements" should be a part
> of the testplan - we should say that "testplan X needs testcase Y run
> on Foo and Bar" in the testplan. Instead of listing all the different
> options in the testcase, and then just selecting "a version of it" in
> the testplan.

Oh! Then yes, we absolutely agree. That's what I think too.

In my head, a 'test case' - to this system - is an id plus some kind of
resource locator pointing at what's expected to be instructions for
testing something. And that's all it is. I absolutely agree that we
don't store any other metadata with the test case; that belongs with
the test plan.

And yes, you were right in your assumption about what I meant by 'test
environment'.

One interesting thought I had, though - should we store the *test
cases* in the middleware 'validate/report' thing I've been describing
here, or should we store them in ResultsDB?

The 'test plan' stuff should clearly go in the middleware, I think. But
it's not so straightforward whether we keep the 'test cases' there or
in ResultsDB, especially if they're just super-simple 'here's a URL and
some identifiers for it' objects.

Oh, BTW, I definitely was thinking that we should cope with test cases
being moved around and having their human-friendly names changed, but
still being recognized as 'the same test case'.
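
To make that concrete, here is a minimal sketch of such a test case
record - assuming SQLAlchemy, with entirely hypothetical column names -
where the numeric id is the only stable identity, so the URL and the
human-friendly name can both change (and the URL need not be unique):

    # Minimal sketch (hypothetical names, assuming SQLAlchemy): the
    # numeric id is the only stable identity; url and name are both
    # mutable, and url is deliberately not unique (think BIOS vs UEFI
    # rows pointing at the same instructions).
    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class TestCase(Base):
        __tablename__ = 'testcase'

        id = Column(Integer, primary_key=True)  # stable identity
        url = Column(String, nullable=False)    # locator for the instructions
        name = Column(String, nullable=False)   # human-friendly, mutable

Anything that wants to recognize 'the same test case' across moves and
renames would key on id, never on url or name.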
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net


2016-12-05 Fedora QA Devel Meeting Minutes

2016-12-05 Thread Tim Flink
=
#fedora-meeting-1: fedora-qadevel
=

Minutes: https://meetbot.fedoraproject.org/fedora-meeting-1/2016-12-05/fedora-qadevel.2016-12-05-15.00.html
Minutes (text): https://meetbot.fedoraproject.org/fedora-meeting-1/2016-12-05/fedora-qadevel.2016-12-05-15.00.txt
Log: https://meetbot.fedoraproject.org/fedora-meeting-1/2016-12-05/fedora-qadevel.2016-12-05-15.00.log.html


Meeting summary
---
* Roll Call  (tflink, 15:00:47)

* Announcements and Information  (tflink, 15:06:06)
  * taskotron dev machines rebuilt to Fedora 24 - tflink, mkrizek
(tflink, 15:06:17)
  * resultsdb on dev migrated to 2.0 - jskladan, mkrizek  (jskladan,
15:07:33)

* Rebuilding Taskotron instances  (tflink, 15:11:15)

* qadevel changes  (tflink, 15:24:05)

* open floor  (tflink, 15:30:27)

Meeting ended at 15:32:45 UTC.




Action Items
---





Action Items, by person
---
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* tflink (45)
* mkrizek (14)
* jskladan (5)
* zodbot (4)
* kparal (1)




Generated by `MeetBot`_ 0.1.4

.. _`MeetBot`: http://wiki.debian.org/MeetBot




Re: ResultsDB 2.0 - DB migration on DEV

2016-12-05 Thread Tim Flink
On Fri, 25 Nov 2016 13:01:51 +0100
Josef Skladanka wrote:

> So, I have performed the migration on DEV - there were some problems
> with it running out of memory, so I had to tweak it a bit (please have
> a look at D1059, that is what I ended up using after hot-fixing on
> DEV).
> 
> There still is a slight problem, though - the migration of DEV took
> about 12 hours total, which is a bit unreasonable. Most of the time
> was spent in
> `alembic/versions/dbfab576c81_change_schema_to_v2_0_step_2.py` lines
> 84-93 in D1059. The code takes about 5 seconds to change 1k results.
> That would mean at least 15 hours of downtime on PROD, and that, I
> think, is unrealistic...
> 
> And since I don't know how to make it faster (tips are most
> welcome), I suggest that we archive most of the data in STG/PROD
> before we go forward with the migration. I'd make a complete backup,
> and delete all but the data from the last 3 months (or any other
> reasonable time span).
> 
> We can then populate an "archive" database, and migrate it on its own,
> should we decide it is worth it (I don't think it is).
> 
> What do you think?

While it would be nice not to lose all that old data (in the sense that
it would no longer be readily available), 15 hours of downtime does
seem a bit extreme.

Is there a way we could export the results as a json file or something
similar? If there is (or if it could be added without too much
trouble), we would have multiple options:

1. Dump the contents of the current db and do a partial offline
   migration, then finish it during the upgrade outage by
   export/importing the newest data, deleting the production db and
   importing the offline-upgraded db. If that still takes too long,
   create a second postgres db containing the offline upgrade, switch
   over during the outage and import the new results that arrived
   since the db was copied.

2. If the import/export process is fast enough, we might be able to do
   that instead of the in-place migration.
Thoughts on either of these options?

Tim
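
As a side note on the "tips are most welcome" part above: the usual
culprit in a data migration this slow is updating rows one at a time
through the ORM. A minimal sketch of the set-based alternative inside
an Alembic step - the table name, column name and transform here are
all hypothetical, the real migration lives in D1059:

    # Hypothetical sketch of a set-based Alembic data-migration step.
    # The table/column names and the transform are made up; the real
    # code is in dbfab576c81_change_schema_to_v2_0_step_2.py.
    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        conn = op.get_bind()
        # A single UPDATE lets postgres rewrite all rows server-side,
        # instead of fetching each result into Python, changing it and
        # flushing it back one by one - which is typically where time
        # like the ~5 s per 1k results goes.
        conn.execute(sa.text(
            "UPDATE result "
            "SET outcome = upper(outcome) "
            "WHERE outcome IS NOT NULL"
        ))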




Re: Release validation NG: planning thoughts

2016-12-05 Thread Josef Skladanka
On Thu, Dec 1, 2016 at 6:04 PM, Adam Williamson wrote:

> On Thu, 2016-12-01 at 14:25 +0100, Josef Skladanka wrote:
> > On Wed, Nov 30, 2016 at 6:29 PM, Adam Williamson
> > <adamw...@fedoraproject.org> wrote:
> > > On Wed, 2016-11-30 at 18:20 +0100, Josef Skladanka wrote:
> > > > I would try not to go the third way, because that is really
> > > > prone to errors IMO, and I'm not sure that "per context" is
> > > > always right. So for me, the "TCMS" part of the data should be:
> > > > 1) testcases (with required fields/types of the fields in the
> > > > "result response")
> > > > 2) testplans - which testcases, possibly organized into groups.
> > > > Maybe even dependencies + saying "I need testcase X to pass, Y
> > > > can be pass or warn, Z can be whatever when A passes, for the
> > > > testplan to pass"
> > > >
> > > > But this is a fairly complex thing, to be honest, and it would
> > > > be the first and only usable TCMS in the world (from my point
> > > > of view).
> > >
> > > I have rather different opinions, actually...but I'm not working
> > > on this right now and I'd rather have something concrete to
> > > discuss than just opinions :)
> >
> > We should obviously set goals properly, before diving into
> > implementation details :) I'm interested in what you have in mind,
> > since I've been thinking about this particular kind of thing for
> > the last few years, and it really depends on what you expect of the
> > system.
>
> Well, the biggest point where I differ is that I think your 'third
> way' is kind of unavoidable. For all kinds of reasons.
>
> We re-use test cases between package update testing, Test Days, and
> release validation testing, for instance; some tests are more or less
> unique to some specific process, but certainly not all of them. The
> desired test environments may be significantly different in these
> different cases.
>
> We also have secondary arch teams using release validation processes
> similar to the primary arch process: they use many of the same test
> cases, but the desired test environments are of course not the same.
>
I think we actually agree, but I'm not sure, since I don't really know
what you mean by "test environment" and how it should
1) affect the data stored with the result
2) affect the testcase itself

I have a guess, and I base the rest of my response on it, but I'd
rather know than assume :)



> Of course, in a non-wiki based system you could plausibly argue that a
> test case could be stored along with *all* of its possible
> environments, and then the configuration for a specific test event
> could include the information as to which environments are relevant
> and/or required for that test event. But at that point I think you're
> rather splitting hairs...
>
> In my original vision of 'relval NG' the test environment wouldn't
> actually exist at all, BTW. I was hoping we could simply list test
> cases, and the user could choose the image they were testing, and the
> image would serve as the 'test environment'. But on second thought
> that's unsustainable as there are things like BIOS vs. UEFI where we
> may want to run the same test on the same image and consider it a
> different result. The only way we could stick to my original vision
> there would be to present 'same test, different environment' as another
> row in the UI, kinda like we do for 'two-dimensional test tables' in
> Wikitcms; it's not actually horrible UI, but I don't think we'd want
> to pretend in the backend that these were two completely different
> test cases. I mean, we could. Ultimately a 'test case' is going to be
> a database row with a URL and a numeric ID. We don't *have* to say
> the URL key is unique. ;)
>

I got a little lost here, but I think I understand what you are saying.
This is IMO one of the biggest pain-points we currently have - the
stuff where we kind of consider "Testcase FOO" for BIOS and UEFI to be
the same, but different at the same time, and I think this is where
the TCMS should come into play, actually.

Because I believe that there is a fundamental difference between
1) the 'text' of the testcase (which basically says 'how to do it')
2) the (what I think you call) environment - aka UEFI vs BIOS, 64bit
vs ARM, ...
3) the testplan

And this might be us saying the same things, but we can often end up
in a situation where we say stuff like "this test(case?) makes sense
for BIOS and UEFI, for x86_64 and ARM, for physical host and virtual
machine, ..." and sometimes it would make sense to store the
'variables of the env' with the testcase, and sometimes in the
testplan, and figuring out the split is a difficult thing to do.

I kind of believe that the "environment requirements" should be a part
of the testplan - we should say that "testplan X needs testcase Y run
on Foo and Bar" in the testplan. Instead of listing all the different
options in the testcase, and then just selecting "a version of it" in
the testplan.
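
To make the split concrete, here is a minimal sketch of a testplan that
owns the environment requirements - every name and value in it (the
plan, the testcase identifiers, the environments, the outcomes) is
hypothetical, just to illustrate the shape:

    # Hypothetical sketch: the testplan, not the testcase, says which
    # environments each testcase must be run in and which outcomes are
    # acceptable. All names/values are made up for illustration.
    INSTALLATION_PLAN = {
        "name": "Release validation: Installation",
        "requirements": [
            {
                "testcase": "QA:Testcase_boot_default_install",
                # Same testcase, required once per environment.
                "environments": [
                    {"firmware": "BIOS", "arch": "x86_64"},
                    {"firmware": "UEFI", "arch": "x86_64"},
                    {"firmware": "UEFI", "arch": "aarch64"},
                ],
                "required_outcome": ["PASSED"],
            },
            {
                "testcase": "QA:Testcase_partitioning_custom",
                "environments": [{"firmware": "BIOS", "arch": "x86_64"}],
                # "pass or warn is fine", per the dependency idea above.
                "required_outcome": ["PASSED", "WARNED"],
            },
        ],
    }

    def plan_passes(plan, results):
        """results maps (testcase, frozenset(env.items())) -> outcome."""
        return all(
            results.get((req["testcase"], frozenset(env.items())))
            in req["required_outcome"]
            for req in plan["requirements"]
            for env in req["environments"]
        )

With this shape, the BIOS and UEFI runs of the same testcase are just
two required environments in the plan, not two different testcases.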