Re: Check out the Fedora Packager Dashboard!

2022-08-31 Thread Josef Skladanka
Björn,

I won't be addressing the comments one-by-one, as it mostly boils down
to "I don't like the UI/UX" (how I read it, at least). And I
absolutely understand and accept that. On the other hand, we (as in
the people who developed this) are, well, developers, and that about
sums up the UX optimizations :)

The current state makes sense for us, but we are very open to pull
requests and improvements. Or, if you don't feel like diving into
JavaScript, you can put together a feature request that documents, in
detail, the UI changes you'd like to see and the different icon
choices you'd make, and explains why you believe those are more
"universally understood" or better. I'm not promising everything will
be accepted, but once again, I know for a fact the UI could be better.

Thanks for taking the time to sum up your thoughts; we are looking
forward to seeing less "this sucks" and more "here's how I'd fix it"!

J.

On Fri, Aug 26, 2022 at 6:22 PM Björn Persson  wrote:
>
> Fabio Valentini wrote:
> > On Thu, Aug 25, 2022 at 11:43 AM Artur Frenszek-Iwicki
> >  wrote:
> > > I'll forget the meaning and the numbers will go back to being visual 
> > > clutter. It would be immensely helpful
> > > to have some symbolic icons next to the numbers, which would allow to 
> > > easily guess what each of them means.
> >
> > Sounds like you need to clear your browser cache or something, because
> > there *are* symbols next to these numbers:
> > https://decathorpe.fedorapeople.org/packager-dashboard.png
>
> I too can see the icons when I allow fontawesome.com, but few of them
> help with understanding the numbers. The beetle for "bugs" and the
> speech bubble for "comments" are pretty obvious, but I still have to
> point to all the others to find out what they mean, and even then many
> of them seem completely random. How does a lightning bolt symbolize
> updates? What's the connection between a shield and priority? A
> triangle, a circle and a square combine into "overrides"? There are two
> different line chart icons. How does one remember which is which? And a
> seatbelt apparently means "orphans" somehow.
>
> I assume that "PRs" stands for "pull requests". The icon for that is
> the word "git". That's better than a random unrelated picture, but if a
> picture is just text, then it should be actual text and not a picture.
> It's also somewhat inaccurate because pull requests aren't a Git thing
> but a concept that some web interfaces layer on top of Git.
>
> Rather than hiding the intelligible words in mouseover boxes, it would
> be better to write them directly on the screen instead of the icons. If
> there is some idea that the icons should be language-independent, then
> the beetle also fails. Software defects are not called insects in all
> languages.
>
> > > Similarly, at the top of the page, I get a banner that informs me about 
> > > FAS integration and says:
> > > > After linking the dashboard with your FAS through the settings menu...
> > > Which is all nice and dandy, but doing a Ctrl+F on the page for 
> > > "settings" gives exactly one match -
> > > that being the text in the banner. So there's no visible link to said 
> > > "settings menu" anywhere.
> > > How do I access it?
> >
> > The big "gear" icon (the almost universal symbol for "Settings") in
> > the top panel should be what you're looking for.
>
> The gear is called "Options", and beside it is an icon called
> "Customize dashboard". "Settings" could refer to either of those. It
> would be nice to have consistent terminology, but hey, we can always
> click on everything and explore.
>
> The gear icon is also misleading. It alludes to machinery in motion, so
> it suggests a menu of commands to do things, rather than options or
> settings. There is a wrench icon that would be a good symbol for
> settings, but that apparently means Koschei.
>
> Björn Persson


Re: orphaning Taskotron-related packages

2020-11-25 Thread Josef Skladanka
On Mon, Nov 23, 2020 at 7:11 PM Tim Flink  wrote:
>
> On Thu, 12 Nov 2020 18:25:17 +0100
> Kamil Paral  wrote:
>
> > Note: The email subject should have said "retiring" instead of
> > "orphaning". There is little reason to orphan them, retiring is the
> > right approach here. Perhaps except for mongoquery, somebody else
> > could be interested in maintaining that, so that one should be
> > orphaned instead.
>
> Orphaning python-mongoquery and retiring everything else makes sense to
> me.
>
> Tim


+1


Re: Fedora Packager Dashboard available for testing

2020-06-24 Thread Josef Skladanka
First of all, thanks for the feedback!

On Wed, Jun 24, 2020 at 10:28 AM Vít Ondruch  wrote:
> Would it be possible to change the "reset" to something like "set
> all/unset all". When I wanted to know what actually "orphaned" means, I
> had to click on every option, which is not very convenient.

We'll absolutely discuss it. Do you think that once Zbyszek's RFE is
implemented, this would be solved for you?

> Also, I am not sure about the "provenpackager" group. Should it be
> displayed? Should it do something?

We are already "discarding" some groups, seems like provenpackagers
just slipped the cut. if provenpackagers group owns no packages, I
think we could/should just cut it too. Frantisek/Lukas - WDYT?
>
> Actually the whole "groups" section is a bit confusing. I am not sure
> what the radio buttons do (I am aware about the hints, but they don't
> make the situation better understandable to me).

I honestly, absolutely agree it is not ideal. We have tried many
iterations of the hints, and this is by far the best we've come up
with.
The idea behind the filters is this:
 - "blind eye icon" - if selected, you won't see any packages owned by
the group, even if you are the "primary maintainer" (forgive my,
probably wrong, terminology)
 - "single person icon" - when picked, only packages "belonging to the
group" that you are the "owner"/"primary maintainer" are shown
 - "three people icon" - if this option is selected, packages from the
group are shown even if you don't own them

We'd be happy to hear any suggestions you might have on how to make this
more comprehensible.
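
To make the intended behaviour concrete, here is a minimal Python sketch of
the three filter modes. The field names ("group", "point_of_contact") and the
mode names are illustrative assumptions, not the actual dashboard data model.

HIDE, OWN_ONLY, ALL = "hide", "own_only", "all"   # blind eye / one person / three people

def visible(package, username, group_modes):
    """Decide whether a group-maintained package shows up for a user."""
    mode = group_modes.get(package["group"], ALL)
    if mode == HIDE:
        # "blind eye": never show packages of this group
        return False
    if mode == OWN_ONLY:
        # "single person": only if the user is the primary maintainer
        return package["point_of_contact"] == username
    # "three people": show all packages the group maintains
    return True

packages = [
    {"name": "foo", "group": "python-sig", "point_of_contact": "alice"},
    {"name": "bar", "group": "python-sig", "point_of_contact": "bob"},
]
print([p["name"] for p in packages
       if visible(p, "alice", {"python-sig": OWN_ONLY})])   # ['foo']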

Thanks, josef


Re: Fedora Packager Dashboard available for testing

2020-06-24 Thread Josef Skladanka
Iñaki,

looking at your dashboard overview, my guess would be that only one of
the packages has any bugs/updates/prs/... since we "know" you have six
packages (shown in the header), but only one is shown in the
dashboard. So according to our information, the other five packages
just don't need any attention.

If the data is incomplete, please report a bug showing what's missing
so we can investigate.
Thank you,

josef

On Wed, Jun 24, 2020 at 11:06 AM Iñaki Ucar  wrote:
>
> Congrats for the great work!
>
> One question. I don't have many packages, but I only see one of them. Why?
>
> Iñaki
>
> On Tue, 23 Jun 2020 at 18:35, Josef Skladanka  wrote:
> >
> > Hi,
> >
> > We'd like to announce public testing of the Packager Dashboard - a new
> > service for Fedora package maintainers aiming to provide all relevant
> > data: FTBFS/FTI status (from Bugzilla, Koschei and health checks),
> > orphan warnings, bugzillas, pull requests, active overrides and
> > updates - all in a single place, in an easy-to-read and easy-to-filter way.
> >
> > The Dashboard is now available: https://packager.fedorainfracloud.org/
> >
> > The Packager Dashboard leverages caching in the Oraculum backend to
> > significantly speed up loading times compared to querying all
> > the relevant resources separately. We, of course, can't cache the
> > entire Bugzilla, Pagure, Bodhi... so we only cache data for users who
> > visit Packager Dashboard at least once per 14 days. Please keep in
> > mind that the first load for a “new” user might take a while. Most of
> > the data sources are refreshed every hour.
> >
> > You can use the Dashboard for individual accounts as well as for FAS groups.
> >
> > We'd love to hear your feedback. Please keep in mind that this is
> > testing deployment - it's currently running on a server with very
> > limited resources and we're aiming for production deployment on
> > CommuniShift during this summer.
> >
> > Feel free to provide ideas or bug reports at
> > https://pagure.io/fedora-qa/packager_dashboard or simply send an email
> > reply to this thread with all kinds of feedback.
> >
> > I'd like to mention the other people who made this possible:
> >  - Miro Hrončok (churchyard) - Original idea
> > <https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/4I3LRNUGSOQYYBZK3JUYL3EX74SWDF2T/>
> > and ideas for data to display
> >  - František Zatloukal - Backend <https://pagure.io/fedora-qa/oraculum>
> >  - Lukáš Brabec - Frontend <https://pagure.io/fedora-qa/packager_dashboard>
> >
> > Josef
>
>
>
> --
> Iñaki Úcar


Re: Fedora Packager Dashboard available for testing

2020-06-24 Thread Josef Skladanka
On Wed, Jun 24, 2020 at 9:15 AM Zbigniew Jędrzejewski-Szmek
 wrote:
> RFE: would it be possible to make the icons in the header clickable
> (the part where there's the ladybug, zapf, blocks, wrench, etc), so that
> we'd get redirected to that list of issues (e.g. FTI bugs)?

It also dawned on me that you might have missed the filtering options
which are already present - they are a bit more cumbersome for the
specific use case you presented, but if you click on the little gear
icon on the right side of the header, you can already use the switches
to filter the view down to just the bugs/PRs/...

The RFE you proposed would just do that much faster, though!

Josef


Re: Fedora Packager Dashboard available for testing

2020-06-24 Thread Josef Skladanka
On Wed, Jun 24, 2020 at 9:15 AM Zbigniew Jędrzejewski-Szmek
 wrote:

> RFE: would it be possible to make the icons in the header clickable
> (the part where there's the ladybug, zapf, blocks, wrench, etc), so that
> we'd get redirected to that list of issues (e.g. FTI bugs)?
>
> Zbyszek


Zbyszek,

I quite like the idea, and I have created an RFE on your behalf in
Pagure: 
I think this should be fairly straightforward, but I'm of course
putting words in Lukas' mouth here :)

We're glad you like it otherwise!

j.


Re: Fedora Packager Dashboard available for testing

2020-06-24 Thread Josef Skladanka
On Wed, Jun 24, 2020 at 3:16 AM Bob Hepple  wrote:
>
> Nice!
>
> Is the ouroboros (the snake eating its own tail) supposed to animate
> all the time? To me, that kinda sorta means something is working - but
> what? Hovering over it gives no clue.
>
> Or should I just wait a bit longer for it to finish? (I already waited
> a few minutes)

Bob,
from what I've heard from Frantisek and Lukas, the teeny VPS has been
under constant load since yesterday, since we have _a lot_ of tasks in
the celery queues. It might well take some time to get your data synced.
IIRC the ouroboros is shown at the beginning, while we wait for the
information about your groups and packages to be downloaded from Pagure,
which was one of the really slow bits during our testing.
Once that is loaded, the remaining information is loaded and shown
progressively.

Frantisek/Lukas please correct me if I'm wrong about this.

J.


Fedora Packager Dashboard available for testing

2020-06-23 Thread Josef Skladanka
Hi,

We'd like to announce public testing of the Packager Dashboard - a new
service for Fedora package maintainers aiming to provide all relevant
data: FTBFS/FTI status (from Bugzilla, Koschei and health checks),
orphan warnings, bugzillas, pull requests, active overrides and
updates - all in a single place, in an easy-to-read and easy-to-filter way.

The Dashboard is now available: https://packager.fedorainfracloud.org/

The Packager Dashboard leverages caching in the Oraculum backend to
significantly speed up loading times compared to querying all
the relevant resources separately. We, of course, can't cache the
entire Bugzilla, Pagure, Bodhi... so we only cache data for users who
visit Packager Dashboard at least once per 14 days. Please keep in
mind that the first load for a “new” user might take a while. Most of
the data sources are refreshed every hour.
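
To illustrate the caching behaviour described above, here is a minimal sketch
of the policy (hourly refresh, users dropped after 14 days without a visit).
The class and method names are made up for illustration; the actual Oraculum
implementation differs.

from datetime import datetime, timedelta

REFRESH_INTERVAL = timedelta(hours=1)
USER_RETENTION = timedelta(days=14)

class DashboardCache:
    """Illustrative per-user cache, not the real Oraculum code."""

    def __init__(self):
        self._data = {}        # username -> (fetched_at, payload)
        self._last_seen = {}   # username -> last dashboard visit

    def get(self, username, fetch):
        now = datetime.utcnow()
        self._last_seen[username] = now
        cached = self._data.get(username)
        if cached is None or now - cached[0] > REFRESH_INTERVAL:
            # first visit, or stale data: query Bugzilla/Pagure/Bodhi/... again
            self._data[username] = (now, fetch(username))
        return self._data[username][1]

    def prune(self):
        # stop caching data for users who have not visited in 14 days
        cutoff = datetime.utcnow() - USER_RETENTION
        for user, seen in list(self._last_seen.items()):
            if seen < cutoff:
                self._data.pop(user, None)
                del self._last_seen[user]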

You can use the Dashboard for individual accounts as well as for FAS groups.

We'd love to hear your feedback. Please keep in mind that this is
testing deployment - it's currently running on a server with very
limited resources and we're aiming for production deployment on
CommuniShift during this summer.

Feel free to provide ideas or bug reports at
https://pagure.io/fedora-qa/packager_dashboard or simply send an email
reply to this thread with all kinds of feedback.

I'd like to mention the other people who made this possible:
 - Miro Hrončok (churchyard) - Original idea

and ideas for data to display
 - František Zatloukal - Backend 
 - Lukáš Brabec - Frontend 

Josef


Re: Change proposal discussion - Optimize SquashFS Size

2020-02-06 Thread Josef Skladanka
On Thu, Feb 6, 2020 at 3:06 PM Kevin Kofler  wrote:

> Hence, the remainder of your post is a strawman based on entirely
> fictional
> "statistics".
>
>
I'm glad you agree that your own argumentation is flawed, since it's based
entirely on fictional statistics :)

You not only presented no proof or numbers whatsoever to support claims like:
 - "at least 2000 users are affected by the slow download time"
 - "a few percent increase or decrease in download time mean hours of
difference in download time for those users"
but also admitted to having no way of knowing.

I respect your opinions, but please stop presenting them as facts, or
implying that they are based on hard data, when you have none to
back your claims.

I'd be more than happy to see your numbers, and be corrected.

Best, Josef

P.S. It would be nice if you applied the standards you require of others
to yourself too. It would make these interactions way more pleasant. But
hey, who am I to tell you what to do :)


Re: Change proposal discussion - Optimize SquashFS Size

2020-02-06 Thread Josef Skladanka
On Thu, Feb 6, 2020 at 12:01 AM Kevin Kofler  wrote:

> Oh, and to answer your other point:
>
> Lukas Brabec wrote:
> > It is pretty common for us in Fedora QA (well, I'm quite biased in this
> > case).
> > And no, we cannot compose our images, we have to test the exact same
> > images that will be shipped. We cannot test custom images and pretend
> > the results are going to be the same for official images.
>
> So you propose to optimize Fedora for your own internal use at the expense
> of thousands of users?
>
> Kevin Kofler
>

Assuming that your numbers are even accurate (which I have not seen any
proof of so far), your vaguely defined "thousands" (which semantically
implies fewer than 10k, and certainly fewer than 20k, especially since you
tend to use hyperbole and would surely have written "tens of thousands" if
it suited your case better) are _at worst_ less than 2%, and most probably
less than 1%, of _downloads_ [1].
Given that at least some users perform more than one installation per
downloaded image, it is quite probably an even smaller percentage of
_users_. But let's be generous and say it's 5% of users who really
struggle, and who currently spend, let's call it "tens of hours",
downloading a Fedora image (when "a few percent increase or decrease in
download time can mean hours of difference", it is quite obvious that the
base download time must be enormous to start with). If there are
optimizations to be performed at the project's scale, what would you say
has the bigger impact (see the rough arithmetic sketch below):
 a) an optimization targeting < 5% of users
 b) an optimization targeting > 95% of users
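
For the record, here is the rough arithmetic sketch referenced above. Every
number in it is an assumption picked for illustration (and picked generously),
not measured data.

total_downloads = 2_000_000      # order of magnitude, see [1]
slow_share = 0.05                # the generous 5% assumed above

slow = total_downloads * slow_share
fast = total_downloads - slow

saving_slow_min = 30             # assumed per-download saving for case a)
saving_fast_min = 2              # assumed per-download saving for case b)

print(f"a) targets {slow:,.0f} downloads, ~{slow * saving_slow_min / 60:,.0f} hours saved in total")
print(f"b) targets {fast:,.0f} downloads, ~{fast * saving_fast_min / 60:,.0f} hours saved in total")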

Looking forward to seeing your proof based on hard data,

Josef

[1]
https://www.pcworld.com/article/3038353/fedora-project-leader-matthew-miller-reveals-whats-in-store-for-fedora-in-2016.html


Re: [Test-Announce] Fedora 29 Final is GO

2018-10-27 Thread Josef Skladanka
On Fri, Oct 26, 2018 at 11:02 PM Christian Dersch 
wrote:

>
> Having that criterion " if there's an RC specific failure to output images
> that didn't cause nightlies to fail, then we have to find out what's going
> on." would be really nice.
>
> Greetings,
> Christian
>

Christian, I absolutely agree that having the Spins built is desirable,
and especially when they were "OK all the time before" it is extra weird
when a build is just plain canceled (just to make it clear, I did not have
anything to do with that, nor do I know who did).

The thing with having a criterion like that is that, with the current way
the compose process works, it would most probably mean a whole new compose
being created, thus basically invalidating the testing done on the blocking
images in that compose. There is also the question of "who's going to be
responsible for that investigation?" (as it seems to be in the domain of
the already pretty busy releng, not QA - correct me if I'm wrong) and "how
is that investigation going to be performed?" (release criteria IMO should
be clearly actionable).

The better way (I'd even go as far as saying the "correct" way, but I'm not
that well-versed in releng stuff) would be pushing not for the "easy way
out" (a criterion) but for the tooling to change in such a way that spins
can be built separately off of the "main" isos.

Just my $.02
J.


Re: status report

2018-06-10 Thread Josef Skladanka
Sorry, wrong list. I blame the heat!

On Sun, Jun 10, 2018 at 5:19 PM, Josef Skladanka 
wrote:

> = Highlights =
>
> * Participated in interviewing candidates to replace Petr
> * Deployed Vault on dev
>   * there still are some quirks with OIDC login, that I need to iron out,
> but the overall concept seems good for the usecase
> * Modified libtaskotron to allow grabbing secrets from the Vault <
> https://pagure.io/taskotron/libtaskotron/c/4f3d0d0b3be6f065cb5a578070220f
> a5f4a212f5?branch=develop>
> * Fixed buildmaster-configure steps to enable proper support for launching
> tasks from a non-mirrored repo (aka the "discover feature")
> * Deployed a task that builds docker images (resultsdb at the moment) in
> dev <https://pagure.io/taskotron/task-dockerbuild>
>   * The trigger seems to ignore the fedmsgs, but when triggered via
> jobrunner for a specific fedmsg, the whole process works fine
> * <https://taskotron-dev.fedoraproject.org/resultsdb/results/20658684>
> * <https://hub.docker.com/r/fedoraqa/resultsdb/tags/>
>


status report

2018-06-10 Thread Josef Skladanka
= Highlights =

* Participated in interviewing candidates to replace Petr
* Deployed Vault on dev
  * there still are some quirks with OIDC login, that I need to iron out,
but the overall concept seems good for the usecase
* Modified libtaskotron to allow grabbing secrets from the Vault (see the sketch after this list) <
https://pagure.io/taskotron/libtaskotron/c/4f3d0d0b3be6f065cb5a578070220fa5f4a212f5?branch=develop
>
* Fixed buildmaster-configure steps to enable proper support for launching
tasks from a non-mirrored repo (aka the "discover feature")
* Deployed a task that builds docker images (resultsdb at the moment) in
dev 
  * The trigger seems to ignore the fedmsgs, but when triggered via
jobrunner for a specific fedmsg, the whole process works fine
* 
* 
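
The sketch referenced in the Vault item above: a minimal example of reading a
secret with the hvac client, assuming token auth and a KV v2 secrets engine.
The Vault address and secret path are placeholders, and the OIDC login flow
(and the actual libtaskotron integration) is not shown.

import os

import hvac

client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "https://vault.example.com:8200"),
    token=os.environ["VAULT_TOKEN"],
)
# read a secret from a KV v2 engine; the path is a placeholder
response = client.secrets.kv.v2.read_secret_version(path="taskotron/dev")
secrets = response["data"]["data"]   # the actual key/value pairs
print(sorted(secrets))               # just list the available keys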


Re: Please review - Infra Ansible - move slaves from home to srv

2017-11-21 Thread Josef Skladanka
The raw diff was attached to the original email; I could have mentioned
that, I guess. /me was not able to make Gmail send unformatted/unwrapped
text.
Sorry for the inconvenience.

j.

On Mon, Nov 20, 2017 at 6:32 PM, Tim Flink <tfl...@redhat.com> wrote:

> On Mon, 20 Nov 2017 10:36:03 +0100
> Josef Skladanka <jskla...@redhat.com> wrote:
>
> > I'm not sure what is the best way to ask for review for a pagure-less
> > project, since we don't use Phabricator any more, so... let the
> > funmail begin:
>
> The wrapped diff is hard to read but it looks pretty good to me. I
> think that the patch should be applied in parts as we reimage the
> client-host machines but that's more of a nitpick :)
>
> Tim
>
> > diff --git a/inventory/host_vars/qa10.qa.fedoraproject.org
> > b/inventory/host_vars/qa10.qa.fedoraproject.org
> > index 297f614e3..d2119dc47 100644
> > --- a/inventory/host_vars/qa10.qa.fedoraproject.org
> > +++ b/inventory/host_vars/qa10.qa.fedoraproject.org
> > @@ -9,18 +9,18 @@ gw: 10.5.124.254
> >
> >  short_hostname: qa10.qa
> >  slaves:
> > -  - { user: "{{ short_hostname }}-1", home: "/home/{{ short_hostname
> > }}-1", dir: "/home/{{ short_hostname }}-1/slave" }
> > -  - { user: "{{ short_hostname }}-2", home: "/home/{{ short_hostname
> > }}-2", dir: "/home/{{ short_hostname }}-2/slave" }
> > -  - { user: "{{ short_hostname }}-3", home: "/home/{{ short_hostname
> > }}-3", dir: "/home/{{ short_hostname }}-3/slave" }
> > -  - { user: "{{ short_hostname }}-4", home: "/home/{{ short_hostname
> > }}-4", dir: "/home/{{ short_hostname }}-4/slave" }
> > -  - { user: "{{ short_hostname }}-5", home: "/home/{{ short_hostname
> > }}-5", dir: "/home/{{ short_hostname }}-5/slave" }
> > -  - { user: "{{ short_hostname }}-6", home: "/home/{{ short_hostname
> > }}-6", dir: "/home/{{ short_hostname }}-6/slave" }
> > -  - { user: "{{ short_hostname }}-7", home: "/home/{{ short_hostname
> > }}-7", dir: "/home/{{ short_hostname }}-7/slave" }
> > -  - { user: "{{ short_hostname }}-8", home: "/home/{{ short_hostname
> > }}-8", dir: "/home/{{ short_hostname }}-8/slave" }
> > -  - { user: "{{ short_hostname }}-9", home: "/home/{{ short_hostname
> > }}-9", dir: "/home/{{ short_hostname }}-9/slave" }
> > -  - { user: "{{ short_hostname }}-10", home: "/home/{{ short_hostname
> > }}-10", dir: "/home/{{ short_hostname }}-10/slave" }
> > -  - { user: "{{ short_hostname }}-11", home: "/home/{{ short_hostname
> > }}-11", dir: "/home/{{ short_hostname }}-11/slave" }
> > -  - { user: "{{ short_hostname }}-12", home: "/home/{{ short_hostname
> > }}-12", dir: "/home/{{ short_hostname }}-12/slave" }
> > -  - { user: "{{ short_hostname }}-13", home: "/home/{{ short_hostname
> > }}-13", dir: "/home/{{ short_hostname }}-13/slave" }
> > -  - { user: "{{ short_hostname }}-14", home: "/home/{{ short_hostname
> > }}-14", dir: "/home/{{ short_hostname }}-14/slave" }
> > -  - { user: "{{ short_hostname }}-15", home: "/home/{{ short_hostname
> > }}-15", dir: "/home/{{ short_hostname }}-15/slave" }
> > +  - { user: "{{ short_hostname }}-1", home: "/srv/buildslaves/{{
> > short_hostname }}-1", dir:
> > "/srv/buildslaves/{{ short_hostname }}-1/slave" }
> > +  - { user: "{{ short_hostname }}-2", home: "/srv/buildslaves/{{
> > short_hostname }}-2", dir:
> > "/srv/buildslaves/{{ short_hostname }}-2/slave" }
> > +  - { user: "{{ short_hostname }}-3", home: "/srv/buildslaves/{{
> > short_hostname }}-3", dir:
> > "/srv/buildslaves/{{ short_hostname }}-3/slave" }
> > +  - { user: "{{ short_hostname }}-4", home: "/srv/buildslaves/{{
> > short_hostname }}-4", dir:
> > "/srv/buildslaves/{{ short_hostname }}-4/slave" }
> > +  - { user: "{{ short_hostname }}-5", home: "/srv/buildslaves/{{
> > short_hostname }}-5", dir:
> > "/srv/buildslaves/{{ short_hostname }}-5/slave" }
> > +  - { user: "{{ short_hostname }}-6", home: "/srv/buildslaves/{{
> > short_hostname }}-6", dir:
> > "/srv/buildslaves/{{ short_hostname }}-6/slave" }
> > +  - 

Please review - Infra Ansible - move slaves from home to srv

2017-11-20 Thread Josef Skladanka
I'm not sure what is the best way to ask for review for a pagure-less
project, since we don't use Phabricator any more, so... let the funmail
begin:


diff --git a/inventory/host_vars/qa10.qa.fedoraproject.org
b/inventory/host_vars/qa10.qa.fedoraproject.org
index 297f614e3..d2119dc47 100644
--- a/inventory/host_vars/qa10.qa.fedoraproject.org
+++ b/inventory/host_vars/qa10.qa.fedoraproject.org
@@ -9,18 +9,18 @@ gw: 10.5.124.254

 short_hostname: qa10.qa
 slaves:
-  - { user: "{{ short_hostname }}-1", home: "/home/{{ short_hostname
}}-1", dir: "/home/{{ short_hostname }}-1/slave" }
-  - { user: "{{ short_hostname }}-2", home: "/home/{{ short_hostname
}}-2", dir: "/home/{{ short_hostname }}-2/slave" }
-  - { user: "{{ short_hostname }}-3", home: "/home/{{ short_hostname
}}-3", dir: "/home/{{ short_hostname }}-3/slave" }
-  - { user: "{{ short_hostname }}-4", home: "/home/{{ short_hostname
}}-4", dir: "/home/{{ short_hostname }}-4/slave" }
-  - { user: "{{ short_hostname }}-5", home: "/home/{{ short_hostname
}}-5", dir: "/home/{{ short_hostname }}-5/slave" }
-  - { user: "{{ short_hostname }}-6", home: "/home/{{ short_hostname
}}-6", dir: "/home/{{ short_hostname }}-6/slave" }
-  - { user: "{{ short_hostname }}-7", home: "/home/{{ short_hostname
}}-7", dir: "/home/{{ short_hostname }}-7/slave" }
-  - { user: "{{ short_hostname }}-8", home: "/home/{{ short_hostname
}}-8", dir: "/home/{{ short_hostname }}-8/slave" }
-  - { user: "{{ short_hostname }}-9", home: "/home/{{ short_hostname
}}-9", dir: "/home/{{ short_hostname }}-9/slave" }
-  - { user: "{{ short_hostname }}-10", home: "/home/{{ short_hostname
}}-10", dir: "/home/{{ short_hostname }}-10/slave" }
-  - { user: "{{ short_hostname }}-11", home: "/home/{{ short_hostname
}}-11", dir: "/home/{{ short_hostname }}-11/slave" }
-  - { user: "{{ short_hostname }}-12", home: "/home/{{ short_hostname
}}-12", dir: "/home/{{ short_hostname }}-12/slave" }
-  - { user: "{{ short_hostname }}-13", home: "/home/{{ short_hostname
}}-13", dir: "/home/{{ short_hostname }}-13/slave" }
-  - { user: "{{ short_hostname }}-14", home: "/home/{{ short_hostname
}}-14", dir: "/home/{{ short_hostname }}-14/slave" }
-  - { user: "{{ short_hostname }}-15", home: "/home/{{ short_hostname
}}-15", dir: "/home/{{ short_hostname }}-15/slave" }
+  - { user: "{{ short_hostname }}-1", home: "/srv/buildslaves/{{
short_hostname }}-1", dir: "/srv/buildslaves/{{ short_hostname }}-1/slave" }
+  - { user: "{{ short_hostname }}-2", home: "/srv/buildslaves/{{
short_hostname }}-2", dir: "/srv/buildslaves/{{ short_hostname }}-2/slave" }
+  - { user: "{{ short_hostname }}-3", home: "/srv/buildslaves/{{
short_hostname }}-3", dir: "/srv/buildslaves/{{ short_hostname }}-3/slave" }
+  - { user: "{{ short_hostname }}-4", home: "/srv/buildslaves/{{
short_hostname }}-4", dir: "/srv/buildslaves/{{ short_hostname }}-4/slave" }
+  - { user: "{{ short_hostname }}-5", home: "/srv/buildslaves/{{
short_hostname }}-5", dir: "/srv/buildslaves/{{ short_hostname }}-5/slave" }
+  - { user: "{{ short_hostname }}-6", home: "/srv/buildslaves/{{
short_hostname }}-6", dir: "/srv/buildslaves/{{ short_hostname }}-6/slave" }
+  - { user: "{{ short_hostname }}-7", home: "/srv/buildslaves/{{
short_hostname }}-7", dir: "/srv/buildslaves/{{ short_hostname }}-7/slave" }
+  - { user: "{{ short_hostname }}-8", home: "/srv/buildslaves/{{
short_hostname }}-8", dir: "/srv/buildslaves/{{ short_hostname }}-8/slave" }
+  - { user: "{{ short_hostname }}-9", home: "/srv/buildslaves/{{
short_hostname }}-9", dir: "/srv/buildslaves/{{ short_hostname }}-9/slave" }
+  - { user: "{{ short_hostname }}-10", home: "/srv/buildslaves/{{
short_hostname }}-10", dir: "/srv/buildslaves/{{ short_hostname
}}-10/slave" }
+  - { user: "{{ short_hostname }}-11", home: "/srv/buildslaves/{{
short_hostname }}-11", dir: "/srv/buildslaves/{{ short_hostname
}}-11/slave" }
+  - { user: "{{ short_hostname }}-12", home: "/srv/buildslaves/{{
short_hostname }}-12", dir: "/srv/buildslaves/{{ short_hostname
}}-12/slave" }
+  - { user: "{{ short_hostname }}-13", home: "/srv/buildslaves/{{
short_hostname }}-13", dir: "/srv/buildslaves/{{ short_hostname
}}-13/slave" }
+  - { user: "{{ short_hostname }}-14", home: "/srv/buildslaves/{{
short_hostname }}-14", dir: "/srv/buildslaves/{{ short_hostname
}}-14/slave" }
+  - { user: "{{ short_hostname }}-15", home: "/srv/buildslaves/{{
short_hostname }}-15", dir: "/srv/buildslaves/{{ short_hostname
}}-15/slave" }
diff --git a/inventory/host_vars/qa11.qa.fedoraproject.org
b/inventory/host_vars/qa11.qa.fedoraproject.org
index de99d2ba1..47c5b702d 100644
--- a/inventory/host_vars/qa11.qa.fedoraproject.org
+++ b/inventory/host_vars/qa11.qa.fedoraproject.org
@@ -9,18 +9,18 @@ gw: 10.5.124.254

 short_hostname: qa11
 slaves:
-  - { user: "{{ short_hostname }}-1", home: "/home/{{ short_hostname
}}-1", dir: "/home/{{ short_hostname }}-1/slave" }
-  - { user: "{{ short_hostname }}-2", home: "/home/{{ short_hostname
}}-2", dir: "/home/{{ 

Re: 2017-10-16 @ 14:00 UTC - Fedora QA Devel Meeting

2017-10-16 Thread Josef Skladanka
Looks like it will be just the two of us today, Tim - I don't have any
serious updates, but I'm all for doing it, if you deem it useful.

On Mon, Oct 16, 2017 at 6:36 AM, Tim Flink  wrote:

> # Fedora QA Devel Meeting
> # Date: 2017-10-16
> # Time: 14:00 UTC
> (https://fedoraproject.org/wiki/Infrastructure/UTCHowto)
> # Location: #fedora-meeting-1 on irc.freenode.net
>
>
> https://fedoraproject.org/wiki/QA:Qadevel-20171016
>
> If you have any additional topics, please reply to this thread or add
> them in the wiki doc.
>
> Tim
>
>
> Proposed Agenda
> ===
>
> Announcements and Information
> -
>   - Please list announcements or significant information items below so
> the meeting goes faster
>
> Tasking
> ---
>   - Does anyone need tasks to do?
>
> Potential Other Topics
> --
>
>   - deployment of ansiblize branches
>
> Open Floor
> --
>   - TBD
>


Re: Proposal to CANCEL: 2017-08-28 QA Devel Meeting

2017-08-28 Thread Josef Skladanka
ack

On Mon, Aug 28, 2017 at 5:58 AM, Tim Flink  wrote:

> There are more than one of us traveling to Flock on Monday and as such,
> I propose that we cancel the regularly scheduled QA Devel meeting.
>
> If there is some urgent topic to discuss, please reply to this thread
> and the meeting can happen if there is someone around who is willing to
> lead such meeting.
>
> Tim
>


Discontinuing Phabricator

2017-08-04 Thread Josef Skladanka
As you all probably know, we decided that keeping Phab up and running is
not the best use of our - rather limited, and shrinking - resources, so we
moved all our projects to Pagure. Yay!

As of now, all (relevant) tickets have been moved to Pagure, and we have the
Differential revisions archived as html snapshots here:
https://fedorapeople.org/groups/qa/phabarchive/differentials/phab.qa.fedoraproject.org/
(note that this is not the final version; once kparal gets around to updating it,
the "download raw diff" links will provide you with just that).

Links between tickets and ticket dependencies are hopefully moved too, as
are the references to the Differential revisions tied to each ticket - I
was able to manually check a few tickets, and "it was fine" (tm). In
Phabricator, a ticket could be part of multiple projects (like execdb +
resultsdb + libtaskotron) - we (kparal mostly) cleaned up quite a few of
those, but some still made sense to keep. Pagure cannot represent these,
so I ended up duplicating the tickets. Such tickets' first comment (or one
of the first few comments) says "This is a duplicate of ..." - meaning just
that it was part of several projects and the referenced ticket is the same
one.

This also means that, as of now, we won't be actively taking part in
maintaining or using Phabricator. We are still to decide on a reasonable
way to do code reviews, so any tips on the topic are more than welcome. If
you have some un-merged Differential revisions that you'd like to see
taken care of, please create a pull request and mention which specific
diff it relates to.

I'm sad to see this great tool go; hopefully we'll be able to make decent
use of Pagure.

Josef

P.S. if you feel brave enough, feel free to have a look at the junk-code
that made this possible at https://pagure.io/fedora-qa/phabarchive/
(disclaimer - the code should die in fire!)


Re: Proposal to CANCEL: 2017-07-03 QA Devel Meeting

2017-07-02 Thread Josef Skladanka
+1

On Sat, Jul 1, 2017 at 7:30 PM, Tim Flink  wrote:

> There are multiple holidays this week and I suspect that most folks
> (including me) won't be around for a QA Devel meeting so I propose that
> we cancel the regular meeting.
>
> If there is some urgent topic to discuss, reply to this thread and the
> meeting can happen but I won't be around and someone else would have to
> lead it.
>
> Tim
>


Re: Re-Scheduling Jobs for Taskotron as a User

2017-04-20 Thread Josef Skladanka
On Thu, Apr 20, 2017 at 12:07 AM, Adam Williamson <
adamw...@fedoraproject.org> wrote:

> OK, like I said, half-baked =) But wdyt?
>
>
Love it! (And I swear, it has nothing to do with the fact that I also
thought this would be a great way to solve it in a more generic manner.)


Re: Proposal to CANCEL: 2017-03-27 QA Devel Meeting

2017-03-27 Thread Josef Skladanka
OK

On Mon, Mar 27, 2017 at 5:30 AM, Tim Flink  wrote:

> I have a conflict during the normal QA Devel meeting this week so
> unless someone else wants to lead the meeting, I propose that we cancel
> it.
>
> Tim
>


Trigger changes - call for comments

2017-02-16 Thread Josef Skladanka
Hey, gang!

As with the ExecDB, I took some time to try and formalize what I think
should be done with the Trigger in the near-ish future.
Since it came to my attention that internal G-Docs cannot be accessed
outside of RH, this time it is shared from my personal account - hopefully
more people will be able to read and comment on the document.
Without further ado -
https://docs.google.com/document/d/1BEWdgm0jO4p5DsTO4vntwUumgLZGtU5lBaBX5av7MGQ/edit?usp=sharing

Thanks,
joza


Re: Taskotron CI in Taskotron

2017-02-15 Thread Josef Skladanka
On Wed, Feb 15, 2017 at 5:55 PM, Adam Williamson <adamw...@fedoraproject.org
> wrote:

> On Wed, 2017-02-15 at 12:59 +0100, Josef Skladanka wrote:
> > On Tue, Feb 14, 2017 at 8:51 PM, Adam Williamson <
> adamw...@fedoraproject.org
> > > wrote:
> > > Are you aware of fedmsg-dg-replay? It's a fairly easy way to 'replay'
> > > fedmsgs for testing. All you need (IIRC) is the fedmsg-relay service
> > > running on the same system, and you can run
> > >
> >
> > I am, but it has this bad quality of changing the topic, so we would need
> > to change the consumers' topics too, or make that configurable in some
> > way...
> > I'd rather do it the way I have it now - using the trigger's internal
> > replay functionality instead of doing unnecessary complicated changes
> just
> > for the sake of using it to test stuff once in a while.
>
> I was thinking that it's probably not that difficult to set up a
> testing fedmsg bus as a test fixture with some canned messages that can
> be replayed on request, but I haven't looked at doing it so I really
> don't know how much work it is. I wonder if fedmsg's test suite does
> it.
>
>
Ah, I did not get that on the first read, and now it is obvious even
from the previous email *facepalm*. Yeah, that would make sense, I guess.
We'll see about it; at the moment we have bigger fish to fry, but in the
end I'd like to have this stuff covered too.
Thanks for the good idea, though!

joza


Re: Taskotron CI in Taskotron

2017-02-15 Thread Josef Skladanka
On Tue, Feb 14, 2017 at 8:51 PM, Adam Williamson  wrote:

> Are you aware of fedmsg-dg-replay? It's a fairly easy way to 'replay'
> fedmsgs for testing. All you need (IIRC) is the fedmsg-relay service
> running on the same system, and you can run
>
I am, but it has this bad quality of changing the topic, so we would need
to change the consumers' topics too, or make that configurable in some
way...
I'd rather do it the way I have it now - using the trigger's internal
replay functionality - instead of making unnecessarily complicated changes
just for the sake of using it to test stuff once in a while.

J.


ExecDB rewrite - call for comments

2017-02-12 Thread Josef Skladanka
Hey gang!

With the incoming changes, I'd like to make ExecDB a bit more worthy of its
name, make it less tied to Buildbot than it is at the moment, and also
make some changes to what functionality it provides.

Please, comment!

Thanks, Joza

https://docs.google.com/a/redhat.com/document/d/1sOAn2WJ0-XAJu9ssckevS9-m2BM7DwyGbfUaSHx3Nq0/edit?usp=sharing


Re: Wiki page gardening

2017-02-09 Thread Josef Skladanka
Awesome, thanks!

On Fri, Feb 10, 2017 at 4:27 AM, Adam Williamson  wrote:

> Hi folks! I did a bit of light gardening on the Taskotron and ResultsDB
>  and a few other wiki pages today:
>
> * https://fedoraproject.org/wiki/Taskotron
> * https://fedoraproject.org/wiki/Taskotron_contribution_guide
>   (moved from User:Tflink/taskotron_contribution_guide)
> * https://fedoraproject.org/wiki/QA:Phabricator
>   (moved from QA/Phabricator)
> * https://fedoraproject.org/wiki/ResultsDB
> * https://fedoraproject.org/wiki/QA:Tools
>   (moved from QA/Tools)
>
> I guess most significantly, I tried to consolidate the 'how to
> contribute' instructions a bit to make it easier for people to find
> their way through. The main 'how to use arcanist' stuff is now in the
> Phabricator page, and you can use this anchor link to link to it:
>
> https://fedoraproject.org/wiki/QA:Phabricator#issues-diffs
>
> That content was moved from
> https://phab.qa.fedoraproject.org/w/contributing/ . I sprinkled links
> to it around a few other pages. The Taskotron_contribution_guide page
> links to that page for the generic instructions, and just includes
> Taskotron-specific stuff. Notably, I tried to include a comprehensive
> and up-to-date list of the Taskotron repositories on that page;
> hopefully that can be the sole place where such a list lives now (I
> removed the other incomplete and out of date lists I could find).
>
> I updated QA:Tools to link to a few more things, and removed various
> bits of out-of-date content to make the pages look less...sad. :)
>
> Please let me know about (or just fix) any problems you see :) Thanks!
> --
> Adam Williamson
> Fedora QA Community Monkey
> IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
> http://www.happyassassin.net


Re: Taskotron CI in Taskotron

2017-02-09 Thread Josef Skladanka
On Thu, Feb 9, 2017 at 5:58 PM, Matthew Miller <mat...@fedoraproject.org>
wrote:

> On Thu, Feb 09, 2017 at 03:29:13AM +0100, Josef Skladanka wrote:
> > I finally got some work done on the CI task for Taskotron in Taskotron.
> The
> > idea here is that after each commit (of a relevant project - trigger,
> > execdb, resultsdb, libtaskotron) to pagure, we will run the whole stack
> in
> > docker containers, and execute a known "phony" task, to see whether it
> all
> > goes fine.
>
> This is excellent. I'd love, eventually, to get to a point where we can
> run the checks _pre_ commit and gate on them. Is there a path from this
> to that?


Absolutely, that is the goal.

Generally speaking, we'd like to run tests on Pagure's PRs.
For taskotron specifically, we'll need to figure out some Phabricator
plugin that fires off a fedmsg (or calls some API, whatever) on a new
Differential request, but generally it is the same idea.

Joza


Taskotron CI in Taskotron

2017-02-08 Thread Josef Skladanka
Gang,

I finally got some work done on the CI task for Taskotron in Taskotron. The
idea here is that after each commit (of a relevant project - trigger,
execdb, resultsdb, libtaskotron) to pagure, we will run the whole stack in
docker containers, and execute a known "phony" task, to see whether it all
goes fine.

The approach I devised is that I'll build a 'testsuite' container based on
the Trigger, and instead of running the fedmsg hub, I'll just use the CLI
to "replay" what would happen on a known, predefined fedmsg.
The testsuite will then watch execdb and resultsdb to check whether
everything went fine.
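
For the "watch execdb and resultsdb" part, a minimal sketch of what the
testsuite could do: poll the ResultsDB REST API until a result for the phony
task appears. The URL, testcase name and item are placeholders, and it assumes
the v2.0 API layout.

import time

import requests

RESULTSDB = "http://localhost:5001/api/v2.0"   # placeholder URL

def wait_for_result(testcase, item, timeout=600, interval=10):
    """Poll ResultsDB until a result for (testcase, item) shows up."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(f"{RESULTSDB}/results",
                            params={"testcases": testcase, "item": item})
        resp.raise_for_status()
        data = resp.json().get("data", [])
        if data:
            return data[0]
        time.sleep(interval)
    raise RuntimeError(f"no result for {testcase} on {item} within {timeout}s")

result = wait_for_result("dist.dummy", "some-build-1.0-1.fc25")
print(result["outcome"])   # e.g. PASSED / FAILED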

It is not at all finished, but I started hacking on it here:
https://pagure.io/taskotron/task-taskotron-ci
I hope to finish it (to the point where it runs the phony task) by the end
of the week. At that point, I'd be glad for any actual, sensible task ideas
that ideally test as much of the capabilities of
libtaskotron/execdb/resultsdb as possible.

The only problem with this kind of testing is that we still don't really
have a good way to test the trigger, as it is tied to external events. My
idea here was that I could add something like a wiki edit consumer and
trigger tasks off of that, making the one "triggering" edit from inside the
testsuite. But as it's almost 4 am here, I'm not sure it is the best idea.
Once again, I'll be glad for any input/ideas/evil laughter.

Joza


Re: Libtaskotron - allow non-cli data input

2017-02-08 Thread Josef Skladanka
On Wed, Feb 8, 2017 at 7:39 PM, Kamil Paral  wrote:

> > I mentioned this in IRC but why not have a bit of both and allow input
> > as either a file or on the CLI. I don't think that json would be too
> > bad to type on the command line as an option for when you're running
> > something manually:
> >
> >   runtask sometask.yml -e "{'namespace':'someuser',\
> > 'module':'somemodule', 'commithash': 'abc123df980'}"
>
> I probably misunderstood you on IRC. In my older response here, I actually
> suggested something like this - having "--datafile data.json", which can
> also be used like "--datafile -" meaning stdin. You can then use "echo
>  | runtask --datafile - ". But your solution is probably
> easier to look at.
>

I honestly like the `--datafile [fname, -]` approach a lot. We could surely
name the param better, but that's about it. I like it better than
necessarily having a long cmdline, and you can still use "echo " if
you want a cmdline example, or "cat " for the common usage.
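
A minimal sketch of how the `--datafile [fname, -]` idea could look on the
runner side, using argparse and json. The parameter name and behaviour are
just the proposal discussed here, not the actual libtaskotron CLI.

import argparse
import json
import sys

def parse_args(argv=None):
    parser = argparse.ArgumentParser(prog="runtask")
    parser.add_argument("formula", help="task formula to run")
    parser.add_argument("--datafile",
                        help="JSON file with task input data, '-' reads stdin")
    return parser.parse_args(argv)

def load_task_data(args):
    if args.datafile is None:
        return {}
    if args.datafile == "-":
        return json.load(sys.stdin)
    with open(args.datafile) as datafile:
        return json.load(datafile)

args = parse_args()
data = load_task_data(args)
print(f"running {args.formula} with input keys: {sorted(data)}")

# usage (both assume the proposal above, not current behaviour):
#   runtask sometask.yml --datafile data.json
#   echo '{"namespace": "someuser", "module": "somemodule"}' | runtask sometask.yml --datafile -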



> > There would be some risk of running into the same problems we had with
> > AutoQA where depcheck commands were too long for bash to parse but
> > that's when I'd say "you need to use a file for that"
>
> Definitely.
>

And that's why I'd rather stay away from long cmdlines :)


>
> > > I'm a bit torn between providing as much useful data as we can when
> > > scheduling (because a) yaml formulas are very limited and you can't
> > > do stuff like string parsing/splitting b) might save you a lot of
> > > work/code to have this data presented to you right from the start),
> > > and the easy manual execution (when you need to gather and provide
> > > all that data manually). It's probably about finding the right
> > > balance. We can't avoid having structured multi-data input, I don't
> > > think.
> >
> > If we did something along the lines of allowing input on the CLI, we
> > could have both, no? We'd need to be clear on the precedence of file vs
> > CLI input but that seems to me like something that could solve the
> > issue of dealing with more complicated inputs without requiring users
> > to futz with a file when running tasks locally.
>
> That's not the worry I had. Creating a file or writing json to a command
> line is a bit more work than the current state, but not a problem. What I'm
> a bit afraid of is that we'll start adding many keyvals into the json just
> because it is useful or convenient. As an artificial example, let's say for
> a koji_build FOO we supply NVR, name, epoch, owner, build_id and
> build_timestamp. And if we receive all of that in the fedmsg (or from some
> koji query that we'll need to do anyway for some reason), it makes sense to
> pass that data, it's free for us and it's less work for the task (it
> doesn't have to do its own queries). However, running the task manually as
> a task developer (and I don't mean re-running an existing task on FOO by
> copy-pasting the existing data json from a log file, but running it on a
> fresh new koji build BAR) makes it much more difficult for the developer,
> because he needs to figure out (manually) all those values for BAR just to
> be able to run his task.
>

Even more extreme (deliberately, to illustrate the point) example would be
> to pass the whole koji buildinfo dict structure that you get when running
> koji.getBuild(). Which could be actually easier for the developer to
> emulate, because we could document a single command that retrieves exactly
> that. Unless we start adding additional data to it...
>
> So on one hand, I'd like to pass as much data as we have to make task
> formulas simpler, but on the other hand, I'm afraid task development
> (manual task execution, without having a trigger to get all this data by
> magic) will get harder. (I hope I managed to explain it better this time:))


As I mentioned in one of the other emails - the dev (while developing)
should really only need to provide the data that is relevant for the
task/formula. Why have a ton of stuff that you never use in the "testing
data"? It is unnecessary work, and it even makes things more error-prone
IMO. If I had a task that only needed NVR, name and build_timestamp, I'd
(while developing/testing) just pass a structure containing these.

Or do you think that is a bad idea? I sure can see how (e.g.) the resultsdb
directive could spit out warnings about missing data, but that is why we
have the different profiles - the resultsdb directive could fail in
production mode if data was missing (which probably means some serious
error), or just warn you in development mode.
If you wanted to "test it thoroughly" you'd better use some real data
anyway - and if we store the "input data structure" in the logs for the
tasks, then there is even a good source of those, should you want to
copy-paste it.
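
A minimal sketch of that "fail in production, warn in development" behaviour
for missing data; the required keys and the profile handling are illustrative,
not the actual resultsdb directive.

import logging

log = logging.getLogger(__name__)

REQUIRED_KEYS = ("item", "type", "outcome")   # illustrative set

def check_result_data(data, profile="production"):
    missing = [key for key in REQUIRED_KEYS if key not in data]
    if not missing:
        return
    message = f"result is missing fields: {', '.join(missing)}"
    if profile == "production":
        # in production, missing data most likely means a serious error
        raise ValueError(message)
    # in development mode, just point it out and keep going
    log.warning(message)

check_result_data({"item": "foo-1.0-1.fc26", "type": "koji_build"},
                  profile="development")   # warns about missing "outcome"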

I hope I understood what you meant.

joza

Re: Libtaskotron - allow non-cli data input

2017-02-08 Thread Josef Skladanka
On Wed, Feb 8, 2017 at 4:11 PM, Tim Flink  wrote:

> On Wed, 8 Feb 2017 08:26:30 -0500 (EST)
> Kamil Paral  wrote:
>
> I think another question is whether we want to keep assuming that the
> *user supplies the item* that is used as a UID in resultsdb. As you say,
> it seems a bit odd to require people to munge stuff together like
> "namespace/module#commithash" at the same time that it can be separated
> out into a dict-like data structure for easy access.
>
>
Emphasis mine. I think that we should not really be assuming that at all.
In most cases, the item should be provided by the trigger automagically,
and the same goes for the type. With what I'd like to see for the structured
input, the conventions module could/should take that data into account while
constructing the "default" results.
Keep in mind that one result can also have multiple "items" (as it can
have multiples of any extra data field), if that makes sense: one
"auto-provided" and a second, user-added one. That would make it both
consistent (the trigger-generated item) and flexible, if a different "item"
makes sense.

Would it make more sense to just pass in the dict and have semi-coded
> conventions for reporting to resultsdb based on the item_type which
> could be set during the task instead of requiring that to be known
> before task execution time?
>
> Something along the lines of enabling some common kinds of input for
> the resultsdb directive - module commit, dist-git rpm change, etc. so
> that you could specify the item_type to the resultsdb directive and it
> would know to look for certain bits to construct the UID item that's
> reported to resultsdb.
>

Yup, I think that setting some conventions, and making sure we keep the
same (or at least a very similar) set of metadata for the relevant type, is
key.
I mentioned this in the previous email, but in the past few days I have
been thinking about making the types a bit more general - the pretty
specific types we have now made sense when we first designed stuff and had
a very narrow usecase.
Now that we want to make the stack usable in stuff like Platform CI, I
think it would make sense to abstract a bit more, so we don't have
`koji_build`, `brew_build` and `copr_build`, which are essentially the same
but differ in minor details. We can specify those classes/details in
extradata, or we could even use multiple types - having the common set of
information guaranteed for all 'build' types, and adding other kinds of
data to `koji_build`, `brew_build` or `whatever_build` as needed.


> Using Kamil's example, assume that we have a task for a module and the
> following data is passed in:
>
>   {'namespace':'someuser', 'module':'httpd', 'commithash':'abc123df980'}
>
> Neither item nor type is specified on the CLI at execution time. The
> task executes using that input data and when it comes time to report to
> resultsdb:
>
>   - name: report results to resultsdb
> resultsdb:
>   results: ${some_task_output}
>   type: module
>
> By passing in that type of module, the directive would look through the
> input data and construct the "item" from input.namespace, input.module
> and input.commithash.
>
> I'm not sure if it makes more sense to have a set of "types" that the
> resultsdb directive understands natively or to actually require item
> but allow variable names in it along the lines of
>
>   "item":"${namespace}/${module}#${commithash}"
>

I'd rather have that in "conventions" than in the resultsdb directive, but
I guess it is essentially the same thing, once you think about it.
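
As a sketch of what such a convention could look like (a hypothetical
helper, not existing libtaskotron code):

```python
ITEM_TEMPLATES = {
    'module': '{namespace}/{module}#{commithash}',
    'koji_build': '{name}-{version}-{release}',
}

def default_item(item_type, data):
    """Build the canonical resultsdb item for a known type from the input."""
    try:
        return ITEM_TEMPLATES[item_type].format(**data)
    except KeyError:
        raise ValueError('cannot build item for type %r from %r'
                         % (item_type, sorted(data)))

# default_item('module', {'namespace': 'someuser', 'module': 'httpd',
#                         'commithash': 'abc123df980'})
# -> 'someuser/httpd#abc123df980'
```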


>
> > > My take on this is, that we will say which variables are provided
> > > by the trigger for each type. If a variable is missing, the
> > > formula/execution should just crash when it tries to access it.
> >
> > Sounds reasonable.
>
> +1 from me as well. Assume everything is there, crash if there's
> something requested that isn't available (missing data etc.)
>
>
yup, that's what I have in mind.


> > We'll probably end up having a mix of necessary and convenience
> > values in the inputdata. "name" is probably a convenience value here,
> > so that tasks don't have to parse if they need to use it in a certain
> > directive. "epoch" might be an important value for some test cases,
> > and let's say we learn the value in trigger during scheduling
> > investigation, so we decide to pass it down. But that information is
> > not that easy to get manually. If you know what to do, you'll open up
> > a particular koji page and see it. But you can also be clueless about
> > how to figure it out. The same goes for build_id, again can be
> > important, but also can be retrieved later, so more of a convenience
> > data (saving you from writing a koji query). This is just an example
> > for illustration, might not match real-world use cases.
>
> I mentioned this in IRC but why not have a bit of both and allow input
> as either a file or on the CLI. I don't think that json would be 

Re: Libtaskotron - allow non-cli data input

2017-02-08 Thread Josef Skladanka
On Wed, Feb 8, 2017 at 2:26 PM, Kamil Paral  wrote:

> This is what I meant - keeping item as is, but being able to pass another
> structure to the formula, which can then be used from it. I'd still like to
> keep the item to a single string, so it can be queried easily in the
> resultsdb. The item should still represent what was tested. It's just that
> I want to be able to pass arbitrary data to the formulae, without the need
> for ugly hacks like we have seen with the git commits lately.
>
>
> So, the question is now how much we want the `item` to uniquely identify
> the item under test. Currently we mostly do (rpmlint, rpmgrill) and
> sometimes don't (depcheck, because item is NVR, but the full ID is NEVRA,
> and we store arch in the results extradata section).
>
>
I still kind of believe that the `item` should be chosen with great respect
to what actually is the item under test, but it also really depends on what
you want to do with it later on. Note that the `item` is actually a
convention (yay, more water to adamw's "if we only had some awesome new
project" mill), and is not enforced in any way. I believe that there should
be firm rules (once again - conventions) on what the item is for each "well
known" item type, so you can kind-of assume that if you query for
`item=foo=koji_build` you are getting the results related to that
build.
As we were discussing privately with the item types (I'm not going to go
into much detail here, but for the rest of you guys - I'm contemplating
making the types more general, and using more of the 'metadata' to store
additional specifics - like replacing `type=koji_build` with `type=build,
source=koji`, or `type=build, source=brew` - on the high level, you know
that a package/build was tested, and you don't really care where it came
from, but you sometimes might care, and so there is the additional metadata
stored. We could even have more types stored for one result, or I don't
know... It's complicated), the idea behind item is that it should be a
reasonable value that carries the "what was tested" information, and you
will use the other "extra-data" fields to provide more details (like we
kind-of want to do with arch, but we don't really...). The reason for it to
be a "reasonable value" and not a "superset of all values that we have" is
to make the general querying a bit more straightforward.
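
For the record, the kind of query that convention is meant to enable would
look roughly like this (the base URL is illustrative and the response shape
is an assumption on my side - see the v2.0 API docs):

```python
import requests

RESULTSDB_URL = 'https://resultsdb.example.org/api/v2.0'  # illustrative

def results_for_build(nvr):
    """Fetch all results reported against a given koji_build item."""
    resp = requests.get(RESULTSDB_URL + '/results',
                        params={'item': nvr, 'type': 'koji_build'})
    resp.raise_for_status()
    return resp.json().get('data', [])
```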


> If we have structured input data, what happens to `item` for
> check_modulemd? Currently it is "namespace/module#commithash". Will it stay
> the same, and they'll just avoid parsing it because we'll also provide
> ${data.namespace}, ${data.module} and ${data.hash}? Or will the `item` be
> perhaps just "module" (and the rest will be stored as extradata)? What
> happens when we have a generic git_commit type, and the source can be an
> arbitrary service? Will have some convention to use item as
> "giturl#commithash"?
>
>
Once again - whatever makes sense as the item. For me that would be the
Repo/SHA combo, with server, repo, branch, and commit in extradata.
And it comes down to "storing as much relevant metadata as possible" once
again. The thing is that as long as stuff is predictable, it almost does
not matter what it is, and it once again points out how good an idea the
conventions stuff is. I believe that we are now storing much less metadata
in resultsdb than we should, and it is caused mostly by the fact that
 - we did not really need to use the results much so far
 - it is pretty hard to pass data into libtaskotron, and querying all the
services all the time to get the metadata is/was deemed a bad idea - why
do it ourselves, if the consumers can get it themselves. They know that it
is a koji_build, so they can query koji.

There is a fine balance to be struck, IMO, so we don't end up storing "all
the data" in resultsdb. But I believe that the stuff relevant for the
result consumption should be there.


Because the ideal way would be to store the whole item+data structure as
> item in resultsdb. But that's hard to query for humans, so we want a simple
> string as an identifier.
>

This, for me, is once again about being predictable. As I said above, I
still think that `item` should be a reasonable identifier, but not
necessarily a superset of all the info. That is what the extra data is for.
Talking about...


> But sometimes there can be a lot of data points which uniquely identify
> the thing under test only when you specify it all (for example what Dan
> wrote, sometimes the ID is the old NVR *plus* the new NVR). Will we want to
> somehow combine them into a single item value? We should give some
> directions how people should construct their items.
>
>
My gut feeling here would be storing the "new NVR" (the thing that actually
caused the test to be executed) as item, and adding 'old nvr' to extra
data. But I'm not that familiar with the specific usecase. To me, this
would make sense, because when you query for "this NVR related results"
you'd get the results 

Re: making test suites work the same way

2017-02-06 Thread Josef Skladanka
On Mon, Feb 6, 2017 at 1:35 PM, Kamil Paral  wrote:

>
> That's a good point. But do we have a good alternative here? If we depend
> on packages like that, I see only two options:
>
> a) ask the person to install pyfoo as an RPM (in readme)
> b) ask the person to install gcc and libfoo-devel as an RPM (in readme)
> and pyfoo will be then compiled and installed from pypi
>
> Approach a) is somewhat easier and does not require compilation stack and
> devel libraries. OTOH it requires using virtualenv with
> --system-site-packages, which means people get different results on
> different setups. That's exactly what I'm trying to eliminate (or at least
> reduce). E.g. https://phab.qa.fedoraproject.org/D where I can run the
> test suite from makefile and you can't, and it's quite difficult to figure
> out why.
>
>
With b) approach, you need compilation stack on the system. I don't think
> it's such a huge problem, because you're a developer after all. The
> advantage is that virtualenv can be created without --system-site-packages,
> which means locally installed libraries do not affect the execution/test
> suite results. Also, pyfoo is installed with exactly the right version,
> further reducing differences between setups. The only thing that can differ
> is the version of libfoo-devel, which can affect the behavior. But the
> likeliness of that happening is much smaller than having pyfoo of a
> different version or pulling any deps from the system site packages.
>
>
The reason why I want to recommend `make test` for running the test suite
> (at least in readme), is because in the makefile we can ensure that a clean
> virtualenv with correct properties is created, and only and exactly the
> right versions of deps from requirements.txt are installed. We can perform
> further necessary steps, like installing the project
> . That further increases
> reliability. Compare this to manually running `pytest`- a custom virtualenv
> must be active; it can be configured differently than recommended in
> readme, it can be out of date, or it can have more packages installed than
> needed; you might forget some necessary steps.
>
>
Sure, I am a devel, but not a C devel... As I told you in our other
conversation - I see what you are trying to accomplish, but for me the gain
does not even balance out the issues. With variant 'a', all you need to do
is make sure "these python packages are installed" to run the test suite.
I'd rather have something like `requirements_testing.txt` where all the
deps are spelled out in the proper versions, and use that as a base for
populating the virtualenv (I guess we could easily make do with the
requirements.py we have now). Either you have the right version in your
system (or in your own development virtualenv from which you are running
the tests), or the right version will be installed for you from pip.
Yes, we might get down to people having to install a bunch of header files,
and gcc, if for some reason their system is so different that they cannot
obtain the right version in any other way, but it will work most of the
time.



> Of course nothing prevents you from simply running the test suite using
> `pytest`. It's the same approach that Phab will do when submitting a patch.
> However, when some issues arises, I'd like all parties to be able to run
> `make test` and it should return the same result. That should be the most
> reliable method, and if it doesn't return the same thing, it means we have
> an important problem somewhere, and it's not just "a wrongly configured
> project on one dev machine".
>
So, I see these main use cases for `make test` and b) approach:
> * good a reliable default for newcomers, an approach that's the least
> likely to go wrong
> * determining the reason for failures that only one party sees and the
> other doesn't
> * `make test-ci` target, that will hopefully be used one day to perform
> daily/per-commit CI testing of our codebases. Again, using the most
> reliable method available.
>
>
Sure, nobody forces _me_ to do it this way, but I still fail to see the
overall general benefit. If a random _python web app_ project that I wanted
to submit a patch for wanted me to install gcc and tons of -devel libs, I'd
be heading for the door. We were talking about "accessibility" a lot with
Phab, and one of the arguments against it (not saying it was you in
particular) was that "it is complicated, and needs additional packages
installed". This is an even worse version of the same. At least to me.
On top of that - who is going to be syncing up the versions of said
packages between Fedora (our target) and the requirements.txt? What release
are we going to be using as the target? And is it even the right place and
way to do it?



> For some codebases this is not viable anyway, e.g. libtaskotron, because
> they depend on packages not available in pypi (koji) and thus need
> --system-site-packages. But e.g. resultsdb 

Libtaskotron - allow non-cli data input

2017-02-06 Thread Josef Skladanka
Chaps,

we have discussed this many times in the past, and as with the
type-restriction, I think this is actually the right time to get it done.

It sure ties to the fact that I'm trying to put
Taskotron-continuously-testing-Taskotron together - the idea here being
that on each commit to a devel branch of any of the Taskotron components,
we will spin up a testing instance of the whole stack, and run some
integration tests.

To do this, I added a new consumer to Trigger (
https://phab.qa.fedoraproject.org/D1110) that eats Pagure.io commits, and
spins up jobs based on that.
This means that I want to have the repo, branch and commit id as input for
the job, which currently requires yet-another-nasty-hack to pass the
combined data into the job
(https://phab.qa.fedoraproject.org/D1110#C16697NL18) so I can hack it apart
later on either in the formula or in the task itself.

It would be very helpful to be able to pass some structured data into the
task instead.

I kind of remember that we agreed on json/yaml. The possibilities were
either reading it from stdin or from a file. I don't really care that much
either way, but I would probably feel a bit better about having a cli param
to pass the filename there.

The formulas already provide a way to 'query' structured data via the
dot-format, so we could do with as little as passing some variable like
'task_data' that would contain the parsed json/yaml.
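
A minimal sketch of what I have in mind (the CLI param and the `task_data`
name are placeholders, nothing is implemented yet):

```python
import json
import yaml  # PyYAML

def load_task_data(path):
    """Parse the JSON/YAML file passed on the command line into a dict."""
    with open(path) as f:
        text = f.read()
    try:
        return json.loads(text)
    except ValueError:
        return yaml.safe_load(text)

# The runner would expose the returned dict to the formula as `task_data`,
# so it can be queried with the existing dot-format, e.g. ${task_data.repo},
# ${task_data.branch}, ${task_data.commit}.
```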

What do you think?

Joza
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Lift type-restriction on libtaskotron's cli + resultsdb directive

2017-02-06 Thread Josef Skladanka
Hey Gang,

this has been bugging me for quite a while now, and although I know why we
put the restrictions there back then, I'm not sure the benefits still
outweigh the problems.

Especially now, when we'll probably be getting some traction, I'd like to
propose removing the type-check completely. On top of that, we could have
some "known" types (koji_build, bodhi_update, compose, ...), and implement
a "spellcheck" - like a Hamming or Levenshtein distance - to catch typos,
and warn the user.
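
Something as small as this would do for the "spellcheck" (a sketch using
stdlib difflib as a stand-in for a proper Hamming/Levenshtein metric; the
names are illustrative):

```python
import difflib
import logging

KNOWN_TYPES = ['koji_build', 'bodhi_update', 'compose']
log = logging.getLogger(__name__)

def warn_on_typo(item_type):
    """Warn (instead of failing) when an unknown type looks like a typo."""
    if item_type in KNOWN_TYPES:
        return
    close = difflib.get_close_matches(item_type, KNOWN_TYPES, n=1)
    if close:
        log.warning("unknown item type %r - did you mean %r?",
                    item_type, close[0])
```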

All the Taskotron jobs are given the type programmatically now anyway, so
the worry about typos is IMO a bit lessened, and actual human users would
be warned in the logs.

(lib)Taskotron is pretty agnostic to what people are doing with it, and
this seems to be a leftover arbitrary limit that may have made sense, but
should probably be implemented in another part of the stack (like trigger)
that the users may be (in the future) directly interacting with.

Thoughts?

Joza
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: ResultsDB 2.0 - DB migration on DEV

2017-01-25 Thread Josef Skladanka
The estimate for the PROD migration to finish is about 24 hours from now.
STG was seamless, so I'm not expecting any trouble here either.

On Wed, Jan 25, 2017 at 10:47 AM, Josef Skladanka <jskla...@redhat.com>
wrote:

> STG is done (took about 15 hours), starting the archive migration for
> PROD, and I'll start figuring way to merge the data. Probably tomorrow.
>
> On Tue, Jan 24, 2017 at 5:49 PM, Josef Skladanka <jskla...@redhat.com>
> wrote:
>
>> So I started the data migration for the STG archives - should be done in
>> about 15 hours from now (running for cca six hours already) - estimated on
>> the number of results that were already converted.
>> If that goes well, I'll start the PROD archives migration tomorrow, and
>> start working on merging the archives with the "base".
>> If nothing goes sideways, we should have all the data in one place by the
>> end of this week.
>>
>> J.
>>
>
>
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Static dashboards PoC

2017-01-25 Thread Josef Skladanka
Folks,

lbrabec and I made the static dashboards happen, a sample can be seen here:
https://jskladan.fedorapeople.org/dashboards/

Note that these are all generated from a yaml config that defines the
packages/testcases + real resultsdb data. Not that the dashboards make much
sense, but it shows off what we can easily do.

One non-obvious feature is that next to the dashboard name in the left part
of the screen, there is a "dropdown" icon. Clicking on that will show you
the previous results of that dashboard. We only show the current ones for
each one to minimize the visual clutter, and it's what you care about most
of the time anyway.

J.
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: ResultsDB 2.0 - DB migration on DEV

2017-01-24 Thread Josef Skladanka
So I started the data migration for the STG archives - it should be done in
about 15 hours from now (it has been running for about six hours already) -
estimated from the number of results that were already converted.
If that goes well, I'll start the PROD archives migration tomorrow, and
start working on merging the archives with the "base".
If nothing goes sideways, we should have all the data in one place by the
end of this week.

J.
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: ResultsDB 2.0 - DB migration on DEV

2017-01-17 Thread Josef Skladanka
Just for the sake of logging things done, this is how pruning the PROD db
was done - it is conceptually the same as what was done for STG, so I'm not
adding the comments.

$ pg_dump -Fc resultsdb > resultsdb.dump
$ createdb -T template0 resultsdb_archive
$ pg_restore -d resultsdb_archive resultsdb.dump
$ psql resultsdb_archive
=# select id, job_id from result where submit_time<'2017-01-10' order by
submit_time desc limit 1;
    id    | job_id
----------+--------
 11604818 | 387701

=# select id, job_id from result where job_id > 387701 order by id limit 1;
    id    | job_id
----------+--------
 11604819 | 387702

=# delete from result_data where result_id >= 11604819;
=# delete from result where id >= 11604819;
=# delete from job where id >= 387702;

$ psql resultsdb

=# delete from result_data where result_id < 11604819;
=# delete from result where id < 11604819;
=# delete from job where id < 387702;
___
qa-devel mailing list -- qa-de...@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: Proposal: Migrating More Git Projects to Pagure

2017-01-14 Thread Josef Skladanka
On Fri, Jan 13, 2017 at 5:49 PM, Adam Williamson <adamw...@fedoraproject.org
> wrote:

> On Fri, 2017-01-13 at 14:16 +0100, Josef Skladanka wrote:
> > > I am personaly against issues/pull requests on Pagure - logging into
> Phab
> > is about as difficult as logging into Pagure, and I don't see the benefit
> > of "allowing people to do it, since it's possible" even balancing out the
> > problem of split environments.
> > But that's just me.
>
> The difficult thing with Phab isn't logging into it (any more), it's
> setting up the entire arcanist workflow you need to be able to submit
> diffs. People aren't going to do that for drive-bys
>

And the thing is - they do not need to set up arcanist and all that:
https://phab.qa.fedoraproject.org/differential/diff/create/
This is as simple as it gets - paste raw diff, or upload a file. This will
just create a new differential revision, no fuss.
While it's not github-ish, it is easy, and just works - what's the problem?

j.
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: New ExecDB

2017-01-12 Thread Josef Skladanka
There's not been a huge amount of effort put into this - I've had other
priorities ever since, but I can get back to it if you feel it's time
to do it. The only code working in that direction is here:
https://bitbucket.org/fedoraqa/execdb/branch/feature/pony where I basically
only started on removing the tight coupling between execdb and
buildbot, and then went on trying to figure out what's in this thread.

On Tue, Jan 10, 2017 at 6:57 AM, Tim Flink <tfl...@redhat.com> wrote:

> On Fri, 21 Oct 2016 13:16:04 +0200
> Josef Skladanka <jskla...@redhat.com> wrote:
>
> > So, after a long discussion, we arrived to this solution.
> >
> > We will clearly split up the "who to notify" part, and "should we
> > re-schedule" part of the proposal. The party to notify will be stored
> > in the `notify` field, with `taskotron, task, unknown` options.
> > Initially any crashes in `shell` or `python` directive, during
> > formula parsing, and when installing the packages specified in the
> > formula's environment will be sent to task maintainers, every other
> > crash to taskotron maintainer. That covers what I initially wanted
> > from the multiple crashed states.
> >
> > On top of that, we feel that having an information on "what went
> > wrong" is important, and we'd like to have as much detail as
> > possible, but on the other hand we don't want the re-scheduling logic
> > to be too complicated. We agreed on using a `cause` field, with
> > `minion, task, network, libtaskotron, unknown` options, and storing
> > any other details in a key-value store. We will likely just
> > re-schedule any crashed task anyway, at the beginning, but this
> > allows us to hoard some data, and make more informed decision later
> > on. On top of that, the `fatal` flag can be set, to say that it is
> > not necessary to reschedule, as the crash is unlikely to be fixed by
> > that.
> >
> > This allows us to keep the re-scheduling logic rather simple, and most
> > imporantly decoupled from the parts that just report what went wrong.
>
> How far did you end up getting on this?
>
> Tim
>
> ___
> qa-devel mailing list -- qa-devel@lists.fedoraproject.org
> To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org
>
>
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: Proposal to CANCEL: 2016-12-19 Fedora QA Devel Meeting

2016-12-20 Thread Josef Skladanka
+1, especially since all the relevant people here are on PTO.

On Mon, Dec 19, 2016 at 5:34 AM, Tim Flink  wrote:

> Most of the regular folks will be absent this week and I'm not aware of
> anything urgent to cover so I propose that we cancel the weekly Fedora
> QA devel meeting.
>
> If there are any topics that I'm forgetting about and/or you think
> should be brought up with the group, reply to this thread and we can
> un-cancel the meeting.
>
> Tim
>
> ___
> qa-devel mailing list -- qa-devel@lists.fedoraproject.org
> To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org
>
>
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: ResultsDB 2.0 - DB migration on DEV

2016-12-07 Thread Josef Skladanka
On Mon, Dec 5, 2016 at 4:25 PM, Tim Flink  wrote:

> Is there a way we could export the results as a json file or something
> similar? If there is (or if it could be added without too much
> trouble), we would have multiple options:
>

Sure, adding some kind of export should be doable


>
> 1. Dump the contents of the current db, do a partial offline migration
>and finish it during the upgrade outage by export/importing the
>newest data, deleting the production db and importing the offline
>upgraded db. If that still takes too long, create a second postgres
>db containing the offline upgrade, switchover during the outage and
>import the new results since the db was copied.
>
>
I slept just two hours, so this is a bit entangled for me. So - my initial
idea was that we
 - dump the database
 - delete most of the results
 - do the migration on the small data set

In parallel (or later on), we would
 - create a second database (let's call it 'archive')
 - import the un-migrated dump
 - remove the data that is already in the production db
 - run the lengthy migration

This way, we have minimal downtime, and the data are available in the
'archive' db.

With the archive db, we could either
1) dump the data and then import it into the prod db (again no downtime)
2) just spawn another resultsdb (archives.resultsdb?) instance that would
operate on top of the archives

I'd rather do the second, since it also has the benefit of being able to
offload old data to the 'archive' database (which would/could be 'slow by
definition'), while keeping the 'active' dataset small enough that it
could all be in memory for fast queries.

What do you think? I guess we wanted to do something pretty similar, I just
got lost a bit in what you wrote :)



> 2. If the import/export process is fast enough, might be able to do
>instead of the inplace migration
>

My gut feeling is that it would be pretty slow, but I have no relevant
experience.

Joza
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: Release validation NG: planning thoughts

2016-12-05 Thread Josef Skladanka
On Thu, Dec 1, 2016 at 6:04 PM, Adam Williamson <adamw...@fedoraproject.org>
wrote:

> On Thu, 2016-12-01 at 14:25 +0100, Josef Skladanka wrote:
> > On Wed, Nov 30, 2016 at 6:29 PM, Adam Williamson <
> adamw...@fedoraproject.org
> > > wrote:
> > > On Wed, 2016-11-30 at 18:20 +0100, Josef Skladanka wrote:
> > > > I would try not to go the third way, because that is really prone to
> > >
> > > erros
> > > > IMO, and I'm not sure that "per context" is always right. So for me,
> the
> > > > "TCMS" part of the data, should be:
> > > > 1) testcases (with required fields/types of the fields in the "result
> > > > response"
> > > > 2) testplans - which testcases, possibly organized into groups. Maybe
> > >
> > > even
> > > > dependencies + saying "I need testcase X to pass, Y can be pass or
> warn,
> > >
> > > Z
> > > > can be whatever when A passes, for the testplan to pass"
> > > >
> > > > But this is fairly complex thing, to be honest, and it would be the
> first
> > > > and only useable TCMS in the world (from my point of view).
> > >
> > > I have rather different opinions, actually...but I'm not working on
> > > this right now and I'd rather have something concrete to discuss than
> > > just opinions :)
> > >
> > > We should obviously set goals properly, before diving into
> implementation
> >
> > details :) I'm interested in what you have in mind, since I've been
> > thinking about this particular kind of thing for the last few years, and
> it
> > really depends on what you expect of the system.
>
> Well, the biggest point where I differ is that I think your 'third way'
> is kind of unavoidable. For all kinds of reasons.
>
> We re-use test cases between package update testing, Test Days, and
> release validation testing, for instance; some tests are more or less
> unique to some specific process, but certainly not all of them. The
> desired test environments may be significantly different in these
> different cases.
>

We also have secondary arch teams using release validation processes
> similar to the primary arch process: they use many of the same test
> cases, but the desired test environments are of course not the same.
>
>
I think we actually agree, but I'm not sure, since I don't really know what
you mean by "test environment" and how it should
1) affect the data stored with the result
2) affect the testcase itself

I have a guess, and I base the rest of my response on it, but I'd rather
know, than assume :)



> Of course, in a non-wiki based system you could plausibly argue that a
> test case could be stored along with *all* of its possible
> environments, and then the configuration for a specific test event
> could include the information as to which environments are relevant
> and/or required for that test event. But at that point I think you're
> rather splitting hairs...
>
> In my original vision of 'relval NG' the test environment wouldn't
> actually exist at all, BTW. I was hoping we could simply list test
> cases, and the user could choose the image they were testing, and the
> image would serve as the 'test environment'. But on second thought
> that's unsustainable as there are things like BIOS vs. UEFI where we
> may want to run the same test on the same image and consider it a
> different result. The only way we could stick to my original vision
> there would be to present 'same test, different environment' as another
> row in the UI, kinda like we do for 'two-dimensional test tables' in
> Wikitcms; it's not actually horrible UI, but I don't think we'd want to
> pretend in the backend that these were two completely different. I
> mean, we could. Ultimately a 'test case' is going to be a database row
> with a URL and a numeric ID. We don't *have* to say the URL key is
> unique. ;)
>

I got a little lost here, but I think I understand what you are saying.
This is IMO one of the biggest pain-points we have currently - the stuff
where we kind of consider "Testcase FOO" for BIOS and UEFI to be
the same, but different at the same time, and I think this is where the
TCMS should come in play, actually.

Because I believe that there is a fundamental difference between
1) the 'text' of the testcase (which says 'how to do it' basically)
2) the (what I think you call) environment - aka UEFI vs BIOS, 64bit vs
ARM, ...
3) the testplan

And this might be us saying the same things, but we often can end up in a
situation, where we say stuff like "this test(case?) makes sense for BIOS
and
UEFI, for x86_64 and ARM, f

Re: Release validation NG: planning thoughts

2016-12-01 Thread Josef Skladanka
On Wed, Nov 30, 2016 at 6:29 PM, Adam Williamson <adamw...@fedoraproject.org
> wrote:

> On Wed, 2016-11-30 at 18:20 +0100, Josef Skladanka wrote:
> > I would try not to go the third way, because that is really prone to
> erros
> > IMO, and I'm not sure that "per context" is always right. So for me, the
> > "TCMS" part of the data, should be:
> > 1) testcases (with required fields/types of the fields in the "result
> > response"
> > 2) testplans - which testcases, possibly organized into groups. Maybe
> even
> > dependencies + saying "I need testcase X to pass, Y can be pass or warn,
> Z
> > can be whatever when A passes, for the testplan to pass"
> >
> > But this is fairly complex thing, to be honest, and it would be the first
> > and only useable TCMS in the world (from my point of view).
>
> I have rather different opinions, actually...but I'm not working on
> this right now and I'd rather have something concrete to discuss than
> just opinions :)
>
> We should obviously set goals properly, before diving into implementation
details :) I'm interested in what you have in mind, since I've been
thinking about this particular kind of thing for the last few years, and it
really depends on what you expect of the system.
___
qa-devel mailing list -- qa-de...@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: Release validation NG: planning thoughts

2016-11-30 Thread Josef Skladanka
On Wed, Nov 30, 2016 at 11:10 AM, Adam Williamson <
adamw...@fedoraproject.org> wrote:

> On Wed, 2016-11-30 at 09:38 +0100, Josef Skladanka wrote:
> > So if this is what you wanted to do (data validation), it might be a good
> > idea to have that submitter middleware.
>
> Yeah, that's really kind of the key 'job' of that layer. Remember,
> we're dealing with *manual* testing here. We can't really just have a
> webapp that forwards whatever the hell people manage to stuff through
> its input fields into ResultsDB.
>

I'm not sure I'm getting it right, but people will pass the data
through a "tool" (say a web app) which will provide fields to fill in, and
will most probably end up doing the data "sanitization" on its own. So the
"frontend" could store data directly in ResultsDB, since the frontend would
make the user fill in all the fields. I guess I know what you are getting
at ("but this is exactly the double validation!") but it is IMHO actually
harder to have a "generic stupid frontend" that gets the "form schema" from
the middleware, shows the form, blindly forwards data to the middleware,
and shows errors back, than
1) having a separate app for that, that knows the validation rules
2) it being an actual frontend for the middleware, thus reusing the "check"
code internally


> R...we need to tell the web UI 'these are the
> possible scenarios for which you should prompt users to input results
> at all'
>
Agreed


> (which for release validation is all the 'notice there's a new
> compose, combine it with the defined release validation test cases and
> expose all that info to the UI' work),

That is IMO a separate problem, but yeah.


> and we need to take the data the
> web UI generates from user input, make sure it actually matches up with
> the schema we decide on for storing the results before forwarding it to
> resultsdb, and tell the web UI there's a problem if it doesn't.
>
And this is what I have been discussing in the first part of the reply.


> That's how I see it, anyhow. Tell me if I seem way off. :)
> --
> Adam Williamson
> Fedora QA Community Monkey
> IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
> http://www.happyassassin.net
> ___
> qa-devel mailing list -- qa-devel@lists.fedoraproject.org
> To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org
>
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: Release validation NG: planning thoughts

2016-11-30 Thread Josef Skladanka
On Tue, Nov 29, 2016 at 5:34 PM, Adam Williamson  wrote:

> On Tue, 2016-11-29 at 19:41 +0530, Kanika Murarka wrote:
> > 2. Keep a record of no. of validation test done by a tester and highlight
> > it once he login. A badge is being prepared for no. of validation testing
> > done by a contributor[1].
>
> Well, this information would kind of inevitably be collected at least
> in resultsdb and probably wind up in the transmitter component's DB
> too, depending on exactly how we set things up.
>

I think that this should probably be in ResultsDB - it's the actual stored
result data.
The transmitter component should IMO store the "semantics" (testplans,
stuff like that), and use the "raw" resultsdb data as a source to present
a meaningful view.
I'd say that, as a rule of thumb, replicating data in multiple places is a
sign of a design error.
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: Release validation NG: planning thoughts

2016-11-30 Thread Josef Skladanka
On Mon, Nov 28, 2016 at 6:48 PM, Adam Williamson  wrote:

> On Mon, 2016-11-28 at 09:40 -0800, Adam Williamson wrote:
> > The validator/submitter component would be responsible for watching out
> > for new composes and keeping track of tests and 'test environments' (if
> > we keep that concept); it would have an API with endpoints you could
> > query for this kind of information in order to construct a result
> > submission, and for submitting results in some kind of defined form. On
> > receiving a result it would validate it according to some schemas that
> > admins of the system could configure (to ensure the report is for a
> > known compose, image, test and test environment, and do some checking
> > of stuff like the result status, user who submitted the result, comment
> > content, stuff like that). Then it'd forward the result to resultsdb.
>
> It occurs to me that it's possible resultsdb might be designed to do
> all this already, or it might make sense to amend resultsdb to do all
> or some of it; if that's the case, resultsdb folks, please do jump in
> and suggest it :)
>

That's what I thought when reading the proposal - the "Submitter" seems
like an unnecessary layer, to some extent - submitting stuff to resultsdb
is pretty easy.
What resultsdb is not doing now, though, is the data validation - let's say
you wanted to check that specific fields are set (on top of what resultsdb
requires, which is basically just testcase and outcome) - that can be done
in resultsdb (there is a diff with that functionality), but at the moment
only on a global level. So it might not necessarily make sense to set e.g.
'compose' as a required field for the whole resultsdb, since
testday-related results might not even have that.
So if this is what you wanted to do (data validation), it might be a good
idea to have that submitter middleware. Or (and I'm not sure it's the
better solution) I could try and make that configuration more granular, so
you could set the requirements e.g. per namespace, thus effectively
allowing setting the constraints even per testcase. But that would need
even more thought - should the constraints be inherited from the upper
layers? How about when all but one testcase in a namespace needs to have
parameter X, but for that one it does not make sense? (Probably a design
error, but it needs to be thought through in the design phase.)

So, even though ResultsDB could do that, it is borderline "too smart" for
it (I really want to keep any semantics out of ResultsDB). I'm not
necessarily against it (especially if we end up wanting that in more
places), but until now we more or less worked with "the client that submits
data makes sure all required fields are set", i.e. "it's not resultsdb's
place to say what is or is not required for a specific usecase". I'm not
against the change, but at least for the first implementation (of the
Release validation NG) I'd vote for the middleware solution. We can add the
data validation functionality to ResultsDB later on, when we have a more
concrete idea.
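
Just to make the "more granular" option a bit more concrete (a purely
hypothetical shape - nothing like this exists in ResultsDB today):

```python
# required extra-data fields per testcase namespace prefix
REQUIRED_FIELDS = {
    'compose.': ['compose', 'image', 'test_environment'],
    'testday.': [],  # testday results carry no extra constraints
}

def missing_fields(testcase_name, data):
    """Return the required keys a submitted result is missing."""
    for prefix, required in REQUIRED_FIELDS.items():
        if testcase_name.startswith(prefix):
            return [key for key in required if key not in data]
    return []
```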

Makes sense?

Joza
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


ResultsDB 2.0 - DB migration on DEV

2016-11-25 Thread Josef Skladanka
So, I have performed the migration on DEV - there were some problems with
it going out of memory, so I had to tweak it a bit (please have a look at
D1059, that is what I ended up using by hot-fixing on DEV).

There still is a slight problem, though - the migration of DEV took about
12 hours total, which is a bit unreasonable. Most of the time was spent in
`alembic/versions/dbfab576c81_change_schema_to_v2_0_step_2.py` lines 84-93
in D1059. The code takes about 5 seconds to change 1k results. That would
mean at least 15 hours of downtime on PROD, and that, I think, is unreal...

And since I don't know how to make it faster (tips are most welcome), I
suggest that we archive most of the data in STG/PROD before we go forward
with the migration. I'd make a complete backup, and delete all but the
data from the last 3 months (or any other reasonable time span).

We can then populate an "archive" database, and migrate it on its own,
should we decide it is worth it (I don't think it is).

What do you think?

J.
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: Proposal to CANCEL: 2016-10-31 Fedora QA Devel Meeting

2016-10-31 Thread Josef Skladanka
+1 to cancel

On Mon, Oct 31, 2016 at 5:58 AM, Tim Flink  wrote:

> I'm not aware of any topics that need to be discussed/reviewed as a
> group this week, so I propose that we cancel the weekly Fedora QA devel
> meeting.
>
> If there are any topics that I'm forgetting about and/or you think
> should be brought up with the group, reply to this thread and we can
> un-cancel the meeting.
>
> Tim
>
> ___
> qa-devel mailing list -- qa-devel@lists.fedoraproject.org
> To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org
>
>
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: New ExecDB

2016-10-21 Thread Josef Skladanka
So, after a long discussion, we arrived at this solution.

We will clearly split up the "who to notify" part, and "should we
re-schedule" part of the proposal. The party to notify will be stored in
the `notify` field, with `taskotron, task, unknown` options. Initially any
crashes in `shell` or `python` directive, during formula parsing, and when
installing the packages specified in the formula's environment will be sent
to task maintainers, every other crash to taskotron maintainer. That covers
what I initially wanted from the multiple crashed states.

On top of that, we feel that having information on "what went wrong" is
important, and we'd like to have as much detail as possible, but on the
other hand we don't want the re-scheduling logic to be too complicated. We
agreed on using a `cause` field, with `minion, task, network, libtaskotron,
unknown` options, and storing any other details in a key-value store. We
will likely just re-schedule any crashed task anyway, at the beginning, but
this allows us to hoard some data, and make more informed decision later
on. On top of that, the `fatal` flag can be set, to say that it is not
necessary to reschedule, as the crash is unlikely to be fixed by that.

This allows us to keep the re-scheduling logic rather simple, and most
importantly decoupled from the parts that just report what went wrong.
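
In code, the agreed record could look roughly like this (the field values
come from this thread; the concrete ExecDB schema is still to be written):

```python
NOTIFY = ('taskotron', 'task', 'unknown')
CAUSE = ('minion', 'task', 'network', 'libtaskotron', 'unknown')

def crash_info(notify, cause, fatal=False, **details):
    """Describe a crash: who to notify, what went wrong, extra details."""
    assert notify in NOTIFY and cause in CAUSE
    return {'notify': notify, 'cause': cause, 'fatal': fatal,
            'details': details}  # key-value store for any extra specifics

# e.g. crash_info('task', 'task', directive='shell', exit_code=1)
```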
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: New ExecDB

2016-10-12 Thread Josef Skladanka
On Tue, Oct 11, 2016 at 1:14 PM, Kamil Paral  wrote:

> Proposal looks good to me, I don't have any strong objections.
>
> 1. If you don't like blame: UNIVERSE, why not use blame: TESTBENCH?
> 2. I think that having enum values in details in crash structure would be
> better, but I don't have strong opinion either way.
>
>
> For consistency checking, yes. But it's somewhat inflexible. If the need
> arises, I imagine the detail string can be in json format (or
> semicolon-separated keyvals or something) and we can store several useful
> properties in there, not just one.
>


I'd rather do the key-value thing as we do in ResultsDB than store plain
JSON. Yes, the new Postgres can do it (and can also search it to some
extent), but it is not almighty, and has its own problems.



> E.g. not only that Koji call failed, but what was its HTTP error code. Or
> not that dnf install failed, but also whether it was the infamous "no more
> mirror to try" error or a dependency error. I don't want to misuse that to
> store loads of data, but this could be useful to track specific issues we
> have hard times to track currently (e.g. our still existing depcheck issue,
> that happens only rarely and it's difficult for us to get a list of tasks
> affected by it). With this, we could add a flag "this is related to problem
> XYZ that we're trying to solve".
>
>
I probably understand what you want, but I'd rather have a specified set
of values which will/can be acted upon. Maybe change the structure to
`{state, blame, cause, details}`, where the `cause` is still an enum of
known values but `details` is freeform, strictly used for humans? So we
can have `CRASHED->THIRDPARTY->UNKNOWN->"text of the exception"` for
example, or `CRASHED->TASKOTRON->NETWORK->"dnf - no more mirrors to try"`.

I'd rather act on a known set of values than have code like:

if ('dnf' in detail and 'no more mirrors' in detail) or ('DNF' in
detail and 'could not connect' in detail)

In the end, it is almost the same, because there will be problems with
classifying the errors, and the more layers we add, the harder it gets -
that is the reason I initially only wanted to do the {state, blame} thing.
But I feel that this (just state and blame) is not enough information for
us to act upon - e.g. to decide when to automatically reschedule, and when
not - and I'm afraid that with the exploded complexity of the 'crashed
states' the code for handling the "should we reschedule" decisions will be
awful. Notifying the right party is fine (that is what blame gives us),
but this is IMO what we should focus on a bit.

Tim, do you have any comments?
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: Resultsdb v2.0 - API docs

2016-10-03 Thread Josef Skladanka
So, what's the decision? I know I can "guesstimate", but I'd like to see a
group consensus before I actually start coding.

On Thu, Sep 29, 2016 at 7:31 AM, Josef Skladanka <jskla...@redhat.com>
wrote:

>
>
> On Tue, Sep 27, 2016 at 6:06 PM, Kamil Paral <kpa...@redhat.com> wrote:
>
>> ...
>> What are the use cases? I can think of one - yesterday Adam mentioned he
>> would like to save manual test results into resultsdb (using a frontend).
>> That would have no ExecDB entry (no UUID). Is that a problem in the current
>> design? This also means we would probably not create a group for this
>> result - is that also OK?
>>
>
> Having no ExecDB entry is not a problem, although it provides global UUID
> for our execution, the UUID from ExecDB is not necessary at all for
> ResultsDB (or the manual-testing-frontend). The point of ExecDB's UUID is
> to be able to tie together the whole automated run from the point of
> Trigger to the ResultsDB. But ResultsDB can (and does, if used that way)
> create Group UUIDs on its own. So we could still create a groups for the
> manual tests - e.g. per build - if we wanted to, the groups are made to be
> more usable (and easier to use) than the old jobs. But we definitely could
> do without them, just selecting the right results would (IMHO) be a bit
> more complicated without the groups.
>
> The thing here (which I guess is not that obvious) is, that there are
> different kinds of UUIDS, and that you can generate "non-random" ones,
> based on namespace and name- this is what we're going to use in OpenQA, for
> example, where we struggled with the "old"design of ResultsDB (you needed
> to create the Job during trigger time, and then propagate the id, so it's
> available in the end, at report time). We are going to use something like
> `uuid.uuid3("OpenQA in Fedora", "Build Fedora-Rawhide-20160928.n.0")`
> (pseudocode to some extent), to create the same group UUID for the same
> build. This approach can be easily replicated anywhere, to provide
> canonical UUIDs, if needed.
>
> Hope that I was at least a bit on topic :)
>
> j.
>
>
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: Resultsdb v2.0 - API docs

2016-09-28 Thread Josef Skladanka
On Tue, Sep 27, 2016 at 6:06 PM, Kamil Paral  wrote:

> ...
> What are the use cases? I can think of one - yesterday Adam mentioned he
> would like to save manual test results into resultsdb (using a frontend).
> That would have no ExecDB entry (no UUID). Is that a problem in the current
> design? This also means we would probably not create a group for this
> result - is that also OK?
>

Having no ExecDB entry is not a problem - although it provides a global
UUID for our execution, the UUID from ExecDB is not necessary at all for
ResultsDB (or the manual-testing frontend). The point of ExecDB's UUID is
to be able to tie together the whole automated run from the point of
Trigger to ResultsDB. But ResultsDB can (and does, if used that way)
create Group UUIDs on its own. So we could still create groups for the
manual tests - e.g. per build - if we wanted to; the groups are made to be
more usable (and easier to use) than the old jobs. But we definitely could
do without them, just selecting the right results would (IMHO) be a bit
more complicated without the groups.

The thing here (which I guess is not that obvious) is that there are
different kinds of UUIDs, and that you can generate "non-random" ones,
based on namespace and name - this is what we're going to use in OpenQA,
for example, where we struggled with the "old" design of ResultsDB (you
needed to create the Job during trigger time, and then propagate the id,
so it's available in the end, at report time). We are going to use
something like
`uuid.uuid3("OpenQA in Fedora", "Build Fedora-Rawhide-20160928.n.0")`
(pseudocode to some extent) to create the same group UUID for the same
build. This approach can be easily replicated anywhere, to provide
canonical UUIDs, if needed.
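
A runnable variant of that pseudocode (uuid3 wants a UUID as the
namespace, so one extra step is needed; the names are illustrative):

```python
import uuid

# derive a stable namespace UUID once, then reuse it
OPENQA_NS = uuid.uuid5(uuid.NAMESPACE_DNS, 'openqa.fedoraproject.org')

def group_uuid(build):
    """Same build name -> same group UUID, no ID passing needed."""
    return uuid.uuid3(OPENQA_NS, 'Build %s' % build)

print(group_uuid('Fedora-Rawhide-20160928.n.0'))
```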

Hope that I was at least a bit on topic :)

j.
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: 2016-09-14 @ 14:00 UTC - QA Tools Video "Standup" Meeting

2016-09-22 Thread Josef Skladanka
I'd rather go with the option no. 1, but I don't really care that much
either way. So if one option suits you guys better, I'll comply.

J.

On Thu, Sep 22, 2016 at 9:59 AM, Martin Krizek  wrote:

> - Original Message -
> > From: "Tim Flink" 
> > To: qa-devel@lists.fedoraproject.org
> > Sent: Wednesday, September 14, 2016 4:59:49 PM
> > Subject: Re: 2016-09-14 @ 14:00 UTC -  QA Tools Video "Standup" Meeting
> >
> > 
> >
> > One of the topics that came up was how often to do these video
> > meetings. Having additional weekly meetings via video seems like
> > overkill but if there's an appropriate in-depth topic, meeting via
> > video to talk instead of type would be useful.
> >
> > The two options we came up with are:
> >
> > 1. Switch the first qadevel meeting of every month to be via video,
> >making sure that an agenda is sent out early enough for folks to be
> >prepared.
> >
> > 2. Pencil in a video meeting once or twice a month on the Wednesday
> >after qadevel meetings. Ask for video topics during the qadevel
> >meeting and on email. If there are enough topics suggested which
> >would benefit from talking instead of typing, meet on the following
> >Wednesday to discuss via video. If there is no need to meet via
> >video, skip it.
> >
> > I realize that I'm changing things up a little bit from what we were
> > talking about at the end of the meeting but I have a small concern
> > about option 1 - one of the issues that we had today is that folks
> > weren't prepared because we didn't set an agenda.
> >
> > If we switch one qadevel meeting per month  to video, how do we want to
> > handle setting the agenda early enough so that participants have enough
> > time to prepare?
> >
> > Any thoughts or preferences?
> >
>
> My vote would be for the option 2., mostly because the video meeting can
> be easily skipped if we don't have any topics to discuss and that
> setting an agenda would be done on a set time (Monday meeting). It seems
> to me that this could work well.
>
>
> Thanks,
> Martin
> ___
> qa-devel mailing list -- qa-devel@lists.fedoraproject.org
> To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org
>
___
qa-devel mailing list -- qa-devel@lists.fedoraproject.org
To unsubscribe send an email to qa-devel-le...@lists.fedoraproject.org


Re: RFR: New Dist-Git Task Storage Proposal

2016-09-14 Thread Josef Skladanka
On Tue, Sep 13, 2016 at 4:20 PM, Tim Flink  wrote:

> On Mon, 12 Sep 2016 14:44:27 -0600
> Tim Flink  wrote:
>
> > I wrote up a quick draft of the new dist-git task storage proposal
> > that was discussed in Brno after Flock.
> >
> > https://phab.qadevel.cloud.fedoraproject.org/w/taskotron/
> new_distgit_task_storage_proposal/
> >
> > Please review the document and either let me know (or fix in the wiki
> > page) things which aren't clear or bits that I forgot.
>
> I added more information to the wiki page about the default, or bare
> executable case which we discussed during/after flock.
>
> Tim
>

LGTM - I'd just add a link to documentation for the results.yaml format, if
we have any (if we don't then we'd better write one :D)
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/qa-devel@lists.fedoraproject.org


Re: Resultsdb v2.0 - API docs

2016-09-14 Thread Josef Skladanka
On Tue, Sep 13, 2016 at 8:19 PM, Randy Barlow 
wrote:

> Will the api/v1.0/ endpoint continue to function as-is for a while, to
> give integrators time to adjust to the new API? That would be ideal for
> Bodhi, so we can adjust our code to work with v2.0 after it is already in
> production. If not, we will need to coordinate bodhi and resultsdb releases
> at the same time.
>

Hey! There is a plan for the v1.0 endpoint to keep working, even though it
will be a bit limited in features, but from what I remember about Bodhi,
that will not affect it at all.
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/qa-devel@lists.fedoraproject.org


Re: Resultsdb v2.0 - API docs

2016-08-18 Thread Josef Skladanka
So, I have completed the first draft of the ResultsDB 2.0 API.
The documentation lives here: http://docs.resultsdb20.apiary.io/# and I'd
be glad if you could have a look at it.

The overall idea is still unchanged - ResultsDB should be a "dumb" results
store that knows next to nothing (if not nothing at all) about the
semantics/meaning of the data stored; those should be applied in the
consumer. This is why, for example, no result override is planned -
although it might make sense to override a known fail to pass for some
usecase (like gating), it might not be the right thing to do for some other
tool in the pipeline, thus the override needs to happen on the consumer
side.
What's not covered in detail is the auth model - I only reflected it by
acknowledging the probable future presence of some kind of auth in the POST
queries (reserved _auth parameter), but the actual implementation is not a
problem to solve today.

On top of that, I'd also like to know (and this is probably mostly a
question for Ralph) whether it makes sense to try and keep both the old and
new API up for some time. It should not be that complicated to do, I'd just
rather not spend too much time on it, as changing the consumers (bodhi, as
far as I know) is most probably much less time consuming than keeping the
old API running. At the moment, I will probably make it happen, but if we
agree it's not worth the time...

Feel free to post comments/feature requests/whatever - I'd love for this to
be stable (or at least a base for non-breaking changes) for at least the
next few years (lol I know, right...), so let's do it right :)

joza

On Mon, Aug 15, 2016 at 10:48 PM, Josef Skladanka <jskla...@redhat.com>
wrote:

> Hey gang,
>
> I spent most of today working on the new API docs for ResultsDB, making
> use of the even better Apiary.io tool.
>
> Before I put even more hours into it, please let me know, whether you
> think it's fine at all - I'm yet to find a better tool for describing APIs,
> so I'm definitely biased, but since it's the Documentation, it needs to
> also be useful.
>
> http://docs.resultsdb20.apiary.io/
>
> I am also trying to put more work towards documenting the attributes and
> the "usual" queries, so please try and think about this aspect of the docs
> too.
>
> Thanks, Joza
>
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/qa-devel@lists.fedoraproject.org


Resultsdb v2.0 - API docs

2016-08-15 Thread Josef Skladanka
Hey gang,

I spent most of today working on the new API docs for ResultsDB, making use
of the even better Apiary.io tool.

Before I put even more hours into it, please let me know whether you think
it's fine at all - I've yet to find a better tool for describing APIs, so
I'm definitely biased, but since it's the Documentation, it needs to also
be useful.

http://docs.resultsdb20.apiary.io/

I am also trying to put more work towards documenting the attributes and
the "usual" queries, so please try and think about this aspect of the docs
too.

Thanks, Joza
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/qa-devel@lists.fedoraproject.org


Re: Request for Testing: New Auth Method for Phabricator

2016-07-21 Thread Josef Skladanka
Linking the account worked for me just fine, although I stumbled upon
the Err 500 while trying to log in via persona (it worked on the second
try, though).
After logging out, and re-logging in via Ipsilon for the first and
third time, this is what I got:

Unhandled Exception ("HTTPFutureHTTPResponseStatus")
[HTTP/400] Bad Request
> I've been working on moving our phabricator instance off of persona
> before that system is turned off in a few months.
>
> I have an extension deployed in staging and I'd like it to see a bit
> more testing before looking into deploying it in production.
>
> https://phab.qa.stg.fedoraproject.org/
>
> To link your existing account (on staging, this won't work on the
> production instance yet) to the new auth method:
>
> 1. Click on the "user" button next to the search bar when logged in
>
> 2. Click on "manage" on the left hand side of the screen
>
> 3. Click on "edit settings" on the right hand side of the screen
>
> 4. Click on "External Accounts"
>
> 5. Click on "Ipsilon" under "Add External Account
>
> 6. Log in with your FAS credentials.
>
> Please let me know if you try this and are successful or if you run
> into problems. I haven't been able to reproduce the 500 issue with
> persona on stg but I suspect it's intermittant and will try again later
> to see if I can fix it enough to be somewhat reliable.
>
> Tim
>
> ___
> qa-devel mailing list
> qa-devel@lists.fedoraproject.org
> https://lists.fedoraproject.org/admin/lists/qa-devel@lists.fedoraproject.org
>
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://lists.fedoraproject.org/admin/lists/qa-devel@lists.fedoraproject.org


PoC of "configurable trigger"

2016-06-01 Thread Josef Skladanka
Source: https://bitbucket.org/fedoraqa/taskotron-trigger/branch/pony
Diff: https://phab.qadevel.cloud.fedoraproject.org/D872

This started as simple bike-shedding to make more sense in naming (so
everything is not named "Trigger"), but it went further :D

The main change here is what I call a "configurable trigger" - at the moment,
every time we want to add support for even the most basic new task (like the
package-specific task for docker), changes are needed in the trigger's source
code.

These changes add the concept of a "rules engine" that decides what tasks to
schedule, based on data extracted from the received fedmsg message and a
set of rules.

The rules are defined in YAML, in a format like this:
```
- do:
  - {tasks: [depcheck, upgradepath]}
  when: {message_type: KojiTagChanged}
- do:
  - {tasks: [dockerautotest]}
  when: {message_type: KojiBuildCompleted, name: docker}
- do:
  - {tasks: [abicheck]}
  when:
    message_type: KojiBuildCompleted
    name:
      $in: ${critpath_pkgs}
      $nin: ['docker'] # critpath excludes
```

The rules are split into two parts, `when` and `do`. The `when` clause is
a mongo query that gets evaluated against the dataset provided by the
FedMsg consumer. For example, the KojiBuildCompletedJobTrigger now
publishes this (values are fake, to make it more descriptive):

message_data = {
    "_msg": {...snipped...},
    "message_type": "KojiBuildCompleted",
    "item": "docker-1.9.1-6.git6ec29ef.fc23",
    "item_type": "koji_build",
    "name": "docker",
    "version": "1.9.1-6.git6ec29ef",
    "release": "fc23",
    "critpath_pkgs": [..., "docker", ...],
    "distgit_branch": "f23",
}

So, taking the rules and the data, and going from the top:

1. The first rule's `when` is `False`, as `message_type` is not `KojiTagChanged`.
2. The second rule is `True`, because both the `message_type` and the `name` in
   the `when` clause match the data.
3. The third rule does _not_ schedule anything: even though `docker` is in
   `critpath_pkgs`, it is also part of the critpath excludes list, so the
   rule is ignored.

The `when` clauses are in fact mongo queries, evaluated using a Python
library that implements them for querying plain Python objects.
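For illustration, a minimal sketch of what evaluating a `when` clause against
the consumer's data could look like - hand-rolled here rather than using the
actual library, and only handling equality, `$in` and `$nin`, which is enough
for the rules above:
```
def matches(when, data):
    # Return True if the mongo-style `when` clause matches `data`.
    # Only equality, $in and $nin are handled - enough for the examples above.
    for key, condition in when.items():
        value = data.get(key)
        if isinstance(condition, dict):
            if "$in" in condition and value not in condition["$in"]:
                return False
            if "$nin" in condition and value in condition["$nin"]:
                return False
        elif value != condition:
            return False
    return True


# The second rule from the example matches data like the fake message_data:
print(matches({"message_type": "KojiBuildCompleted", "name": "docker"},
              {"message_type": "KojiBuildCompleted", "name": "docker",
               "critpath_pkgs": ["docker"]}))   # -> True
```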

The rules engine then takes the `do` clauses of the 'passed' rules and
produces arguments for the `trigger_tasks()` calls. By default, `item` and
`item_type` are taken from the `message_data`, `arches` is set to
`config.valid_arches`, and then all the key/values from the `do`'s body are
added on top. This means that we can have a task that, for example, forces
an architecture different from the default:
```
- do:
  - {tasks: [awesome_arm_check], arches: [armhfp]}
  when: {message_type: KojiBuildCompleted}
```
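A sketch of how the `do` items might be turned into `trigger_tasks()` keyword
arguments under the defaults described above (`trigger_tasks()` and
`config.valid_arches` are the names used in this mail; the helper itself is
only illustrative):
```
def build_trigger_args(do_items, message_data, valid_arches):
    # Turn a matched rule's `do` items into kwargs for trigger_tasks().
    for do_item in do_items:
        kwargs = {
            "item": message_data["item"],           # defaults from the message
            "item_type": message_data["item_type"],
            "arches": list(valid_arches),           # config.valid_arches
        }
        kwargs.update(do_item)                      # the `do` body wins on conflict
        yield kwargs


# The arch-forcing rule above would yield something like:
# {'item': 'docker-1.9.1-6.git6ec29ef.fc23', 'item_type': 'koji_build',
#  'arches': ['armhfp'], 'tasks': ['awesome_arm_check']}
```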

The `do` clause can have multiple items in it, so something like this is
possible:
```
- do:
  - {tasks: [rpmlint]}
  - {tasks: [awesome_arm_check], arches: [armhfp]}
  when: {message_type: KojiBuildCompleted}
```

This triggers `rpmlint` on the default architectures, and `awesome_arm_check`
on `armhfp`, for each package built in Koji.

This means that when we want to trigger new (somewhat specific) tasks,
no changes are needed in the trigger's code, just in the configuration,
to alter the rules. If we come to the point where more functionality is
needed, then it obviously calls for changes in the underlying code, in order
to add more key/values to the data provided by the fedmsg consumer, or to
add more general functionality overall.

A good example of this is the dist-git style tasks problem. To solve it,
I have added a new command (`$discover`) to the `do` section, which crawls the
provided git repo/branch and schedules jobs for all the `runtask.yml`'s found:
```
- do:
  - {$discover: {repo: 'http://pkgs.fedoraproject.org/git/rpms-checks/${name}.git',
                 branch: '${distgit_branch}'}}
  when: {message_type: KojiBuildCompleted}
```
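A rough sketch of what the `$discover` handling could do - clone the given
repo/branch, find every `runtask.yml`, and schedule a job for each one found.
The function below is illustrative only, not the actual trigger code:
```
import os
import subprocess
import tempfile


def discover_tasks(repo_url, branch):
    # Yield directories (relative to the repo root) that contain a runtask.yml.
    with tempfile.TemporaryDirectory() as workdir:
        subprocess.check_call(["git", "clone", "--depth", "1",
                               "--branch", branch, repo_url, workdir])
        for root, _dirs, files in os.walk(workdir):
            if "runtask.yml" in files:
                yield os.path.relpath(root, workdir)
```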

In the bigger picture, this 'rules engine' functionality can be used to
build (for example) a web interface that allows creating/altering the rules
instead of changing the config file (the rules can as easily be taken from
a database as from the config file), or even to provide a per-user triggering
capability - we could add a piece of code that checks (selected) users'
Fedorapeople profile for a file containing rules in this format, and
then simply run the engine on those rules plus the data from fedmsg to decide
whether the user-defined tasks should be run.

It also somewhat reduces the tight bond between the trigger and fedmsg,
as the rules engine does not really care where the data (used to evaluate
the rules) comes from.

This is by no means final, but IMO it shows quite an interesting PoC/idea that
was not that complicated to implement, and that made the trigger a lot better
at what it can do.

Re: 2016-05-09 @ 14:00 UTC - Fedora QA Devel Meeting

2016-05-09 Thread Josef Skladanka
Won't be able to make it today, but I spent the last two weeks dealing
with base-image building.

I can use some tasks, if we have something urgent and doable.
Otherwise, I'd continue hacking on the trigger.

On Mon, May 9, 2016 at 6:02 AM, Tim Flink  wrote:
> # Fedora QA Devel Meeting
> # Date: 2016-05-09
> # Time: 14:00 UTC
> (https://fedoraproject.org/wiki/Infrastructure/UTCHowto)
> # Location: #fedora-meeting-1 on irc.freenode.net
>
> Please put announcements and information under the "Announcements and
> Information" section of the wiki page for this meeting:
>
> https://phab.qadevel.cloud.fedoraproject.org/w/meetings/20160509-fedoraqadevel/
>
> Tim
>
>
> Proposed Agenda
> ===
>
> Announcements and Information
> -
>   - Please list announcements or significant information items below so
> the meeting goes faster
>
> Tasking
> ---
>   - Does anyone need tasks to do?
>
> Potential Other Topics
> --
>
>   - Docker testing
>   - abi checking
>   - packaging
>
> Open Floor
> --
>   - TBD
>
> ___
> qa-devel mailing list
> qa-de...@lists.fedoraproject.org
> http://lists.fedoraproject.org/admin/lists/qa-de...@lists.fedoraproject.org
>
___
qa-devel mailing list
qa-de...@lists.fedoraproject.org
http://lists.fedoraproject.org/admin/lists/qa-de...@lists.fedoraproject.org


Re: Proposal to CANCEL: 2016-03-21 Fedora QA Devel Meeting

2016-03-21 Thread Josef Skladanka
ack

On Mon, Mar 21, 2016 at 6:47 AM, Tim Flink  wrote:

> I don't have any hugely important topics for the QA Devel meeting this
> week so instead of taking up 30-60 minutes of everyone's time this
> week, I propose that the meeting be canceled.
>
> If there is a topic that you would like to see discussed, reply to this
> thread with that topic and we can hold the meeting as it would have
> been scheduled.
>
> Otherwise, I'll sync up with folks about tasks during the week.
>
> Tim
>
> ___
> qa-devel mailing list
> qa-devel@lists.fedoraproject.org
> http://lists.fedoraproject.org/admin/lists/qa-devel@lists.fedoraproject.org
>
>
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
http://lists.fedoraproject.org/admin/lists/qa-devel@lists.fedoraproject.org


Testcase namespacing - adding structure to result reporting

2016-02-08 Thread Josef Skladanka
This is an initial take on stuff that was discussed in person during Tim's
stay in Brno. Sending to the list for additional discussion/fine-tuning.
 
= What =

Talking about rpmgrill-like checks, there will be a need for some kind of
structure to represent that a check is composed of multiple subchecks,
for example:

check - FAILED
  subcheck1 - PASSED
  subcheck2 - PASSED
  subcheck3 - FAILED
  subcheck4 - PASSED

!IMPORTANT: ResultsDB will not be responsible for computing the result value
for an "upper level" Result from the subchecks - this is the check's (check
developer's) responsibility.

This could (should?) be done on two levels:
* physically nesting the Results as such in the database structure
* namespacing Testcases

For a start, we decided to go with the simplistic approach of nesting the
Testcases via simple namespacing - thus allowing a frontend/query tool to
reconstruct the structure at least to some extent, e.g. by relying on the
premise that Results that are part of one Job can be converted to a tree-like
structure based on the Testcase namespacing, if needed.
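As a sketch of what such a reconstruction could look like on the consumer side
(not ResultsDB code, just an illustration of the premise):
```
def build_tree(testcase_names):
    # Rebuild a nested structure from dotted testcase names, e.g.
    # ["fedoraqa.rpmgrill", "fedoraqa.rpmgrill.manpages"]
    #   -> {"fedoraqa": {"rpmgrill": {"manpages": {}}}}
    tree = {}
    for name in testcase_names:
        node = tree
        for part in name.split("."):
            node = node.setdefault(part, {})
    return tree
```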


== Namespace structure ==

We'll be providing some top-level namespaces (list not yet final):
* app
* fedoraqa
* package
* scratch (?)

These will then be further split to provide a finer level of granularity,
e.g.:

app
  testdays
    powermanagement
      pm-suspendr
fedoraqa
  depcheck
  rpmgrill
package
  <pkgname>
    unit
    func

Everything below the top level will be 100% user defined. We might have
recommendations for specific namespaces (like package.<pkgname>), but we won't
be enforcing them.

The structure will be implemented (at least initially) just via the
Testcase.name attribute in the DB, using dots as a separator. Later on, we can
add support for wildcards in searches (e.g. app.testdays.*.pm-suspendr).
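For illustration, one way a frontend could support such wildcards on top of a
plain dotted Testcase.name column - a sketch only, where '*' stands for exactly
one namespace level:
```
import re


def namespace_match(pattern, testcase_name):
    # '*' matches a single namespace level (it does not cross dots), e.g.
    # "app.testdays.*.pm-suspendr" matches
    # "app.testdays.powermanagement.pm-suspendr".
    parts = [r"[^.]+" if p == "*" else re.escape(p) for p in pattern.split(".")]
    return re.fullmatch(r"\.".join(parts), testcase_name) is not None
```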

!IMPORTANT: the namespaces are not to be used to represent "additional data"
about the underlying result such as architecture, item under test, etc. 
This is what the Result's extra-data (ResultData) is there for.

NOTE: Although we do not encourage storing results at the finest granularity
"just because" (e.g. individual results of a unittest testsuite), we leave it
to the check developer's judgement. If there is a use case for it, let them do
it, we don't care, as long as the DB is not extremely overloaded.


== Authentication/Authorization ==

We'll be continuing with the "expect no malice" approach we have right now.
There will be just a simple limitation in libtaskotron:

check git clone
if cloned: only allow non-pkg namespace if __our__ repo
else: do whatever, don't care

in libtaskotron:
check the git checkout like listed above
have whitelisted namespace repos in config

!FIXME: the mechanism above is just copied from tflink's notes, I can't
remember the details :/
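For what it's worth, one possible reading of those notes as a libtaskotron-side
check - purely a sketch under the assumption that a config option lists the
repos allowed to report outside the package namespace; none of these names are
actual libtaskotron API:
```
# Hypothetical config value: repos whose tasks may report outside the
# package namespace.
WHITELISTED_NAMESPACE_REPOS = [
    "https://bitbucket.org/fedoraqa/task-depcheck.git",   # example entries
    "https://bitbucket.org/fedoraqa/task-rpmgrill.git",
]


def namespace_allowed(task_repo_url, namespace):
    # The package namespace is open to everybody; anything else requires the
    # task to come from one of our whitelisted repos.
    if namespace == "package":
        return True
    return task_repo_url in WHITELISTED_NAMESPACE_REPOS
```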


== TODOs ==

* Change our checks to use the fedoraqa namespace
* Implement repo checking in libtaskotron
* Write docs for how to report stuff to ResultsDB
* Come up with root nodes for namespaces
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
http://lists.fedoraproject.org/admin/lists/qa-devel@lists.fedoraproject.org


Re: 2015-07-27 @ 14:00 UTC - Fedora QA Devel Meeting

2015-07-27 Thread Josef Skladanka
I won't be able to make it to the meeting today, so please just CP these:

#topic jskladan's update
#info T414 is cursed - /me spent most of the week getting distracted by
OtherThings(tm)
#info Docker is broken (machines can't be linked) - BUG #1244124
#info when `git apply` is misbehaving, check CR/LF vs LF
#info gremlins in Tim's machine caused the WIP diff to be incomplete (found out
on Friday), /me will carry on either from the current state, or from the
complete patch, if Tim finds it
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/qa-devel


Re: Coding Style

2015-06-18 Thread Josef Skladanka
- Original Message -
 From: Kamil Paral kpa...@redhat.com

 Will we try to live with it in libtaskotron for a while, or should I create
 similar patches for all our projects right away?

I vote for doing it everywhere. I have already converted ExecDB using autopep8
(`autopep8 -r --max-line-length 99 --in-place -a -a ./`), as there is next to
no change in git-blame's output there.

For the other projects (including libtaskotron, once we merge the
disposable-clients branch), I suggest using a fake author
(`git commit --author="Auto PEP8 <qa-devel@lists.fedoraproject.org>"`)
for the initial autopep8 conversion commit, so one can then easily dig deeper
with git-blame, if needed.

Thoughts?
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/qa-devel


Re: Coding Style

2015-06-15 Thread Josef Skladanka
   I'm not picking on Josef here - I'm sure I've submitted code recently
   with lint errors, this was just the review I was looking at which
   triggered the idea:
   
   https://phab.qadevel.cloud.fedoraproject.org/D389


No worries, I'm not taking it personally. As I commented in D389 - the
non-compliant parts of the code were mostly in the spirit of the rest of the
code in the respective files (thus actually honoring PEP8 -
https://www.python.org/dev/peps/pep-0008/#a-foolish-consistency-is-the-hobgoblin-of-little-minds
). Not saying that it is the best, though.
 
   exceptions that we'd want, I'm proposing that we use strict PEP8 with
   almost no exceptions.

For me, strict PEP8 is next-to-unusable, and almost always leads to code like 
this:

+result = self.resultsdb.create_result(job_id=job_data['id'],
+  testcase_name=checkname,
+  outcome=detail.outcome,
+  summary=detail.summary
+  or None,
+  log_url=result_log_url,
+  item=detail.item,
+  type=detail.report_type,
+  **detail.keyvals
+  )

Hard to read, and heavily concentrated to the right edge of the 80-char mark.
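For contrast, the same call wrapped with a hanging indent under a ~99-character
limit - a sketch of the style being argued for here, not code from the review:
```
result = self.resultsdb.create_result(
    job_id=job_data['id'], testcase_name=checkname, outcome=detail.outcome,
    summary=detail.summary or None, log_url=result_log_url,
    item=detail.item, type=detail.report_type, **detail.keyvals)
```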

 ...
 In this case it would involve asking Josef to stop putting spaces between
 parameter keyvals

I actually did stop doing that quite some time ago :)

First of all, I'd suggest moving our codebase to strict PEP8 (or
as-strict-as-possible), so we can see how our code looks when it is PEP8
compliant.
For starters, we could just plain use autopep8 - 
https://pypi.python.org/pypi/autopep8/
How about that?

J.
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/qa-devel


Re: 2015-06-01 Fedora QA Devel Meeting Minutes

2015-06-02 Thread Josef Skladanka
   * tflink to pester jskladan

Sorry about that, /me misread some old "let's cancel the meeting" email...

ad Testdays:
  The Testday revamp is about half-done, as the process was interrupted by a
testing spree. I'm all for 'killing' the old cloud machine, and I think it can
be done ASAP.
  The new code will be ready long before the new cycle of Testdays, and it
should be deployed by Ansible, as was mentioned during the meeting.

ad git in Phab:
  I tend to agree with kparal - as long as it's quite easy to set up repos, I
do not really care where the repo is hosted - especially if Phab is able to
push to remote repos, thus keeping the (IMO) more visible Bitbucket repos up to
date.

ad meeting time:
  I'm OK with the current time, I'll just need to be more careful with marking 
emails as read on my phone *facepalm*

J.



___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/qa-devel


Re: openQA live image testing: ready for merge?

2015-03-12 Thread Josef Skladanka
Adam,

please set these up for review in Phabricator. I strongly suspect (given the
time that I have spent looking at the changes so far) that some discussion will
be required, and Phab is _the_ place to do it.
Also, please make sure to rebase your repos to their current state before
creating the Phab reviews.

For further development, I'd suggest creating an account on Bitbucket and
using the core repos - all the FedoraQA devs can write to the repos, and all
the admins can administer them. Once you have the account, I'll add you to the
Dev group; having a feature branch in the core repo seems better, given the
development workflows we currently adhere to.

Thanks,

joza
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/qa-devel


Re: openQA live image testing: ready for merge?

2015-03-12 Thread Josef Skladanka
Some preliminary feedback:

= openqa_fedora =

== _do_install_and_reboot.pm ==

Please delete the anaconda_install_finish needle, if it is unused.

anaconda_install_done needle: 
  * Why is only a part of the button selected?
  * What is the logic behind assert_and_click for multiple areas in one 
needle? Seems like the click is done on the last of the areas (judging from 
the contents of the needle) - is this _always_ true?

== main.pm ==

ad the contents of:
  _boot_to_default.pm
  _live_run_anaconda.pm
  _anaconda_select_language.pm

I'm absolutely for splitting this up a bit, but I'd rather have it done in a 
slightly different way:
  * rename _boot_to_default.pm to something along the lines of
_handle_bootloader.pm (/me is bad with names, but it really just handles the
grub options...)
  * merge _live_run_anaconda.pm and _anaconda_select_language.pm into one file, 
and call it something like _get_to_anaconda_main_hub.pm

This will keep the idea of having things split (so the "unless Kickstart"
clause is just in one place), and will join the pieces that IMHO should be
together anyway.


== Needle changes ==

=== anaconda_spoke_done.json ===

Why change the needle, and why in this particular manner? The change looks
unnecessary. If there is no particular reason for it, please revert to the
previous version.

=== bootloader_bios_live.json ===

The black area (the last match area in the needle) is IMHO quite useless - I
suspect it is a remnant of the original bootloader needle. If there isn't a
reason for having it there, please remove the area from the needle.

=== gdm.json ===

I'm not sure why you selected the particular bit of the screen, but it does not 
really make much sense to me. Why did you not select any of the more distinct 
areas of the gdm screen?

Also, I'd really like the needle files to be named as close to the tag
(i.e. graphical_login) as possible. I know that you probably made this with
other login managers in mind, but please use graphical_login_gdm as the name
of the file, instead of plain gdm.


= openqa_fedora_tools =

== conf_test_suites.py ==

I'm fairly certain that both default_install and package_set_minimal cover 
QA:Testcase_install_to_VirtIO.

== openqa_trigger.py ==

I really don't like the whole check_condition() thing. The name of the
function does not correspond to what it does, which is quite unpleasant
together with its side effects (scheduling the jobs and changing the value of
the jobs variable) and its use of variables from outside its scope.

Also, it seems that you forgot to actually fill the uni_done variable,
resulting in `if condition and image.arch not in uni_done:` being effectively
reduced to `if condition:`, and `if not all(arch in uni_done for arch in
arches):` reduced to `if True:`.

So please:
 * find a more appropriate name for check_condition()
 * pass all necessary variables in as arguments
 * make sure the uni_done variable is filled with the right data, and ideally
rename it to something more descriptive of its purpose.

I've spent an hour or so tackling it, so please consider this an example:
http://fpaste.org/197044/63062142/ - but note that I have not run the code (so
typos are probably present).
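As a very rough sketch of the shape being suggested (hypothetical names and a
made-up scheduling call, not the actual openqa_trigger.py code or the fpaste
content):
```
def schedule_jobs_for_image(client, image, arches, universal_done):
    # Everything the function needs comes in through its arguments, and its
    # only effect on the caller is the returned list plus the explicit update
    # of universal_done - no module-level state, no hidden side effects.
    new_jobs = []
    if image.arch in arches and image.arch not in universal_done:
        new_jobs.extend(client.schedule(image))   # hypothetical API call
        universal_done.add(image.arch)            # caller owns this set
    return new_jobs
```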




I hope I'm not being too harsh, it is most certainly not my intent to come
across that way,

J.
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/qa-devel


Re: Coding Style

2014-06-03 Thread Josef Skladanka
 But if there's a strong desire for more columns, I'll manage. Can't hinder
 the team, can I? :)

Also, we should mention that by default the maximum line length is set to 79,
not 80.

Let's just set it to 80 (as we already use it in the code), and forget about 
the heretic 100 idea :)
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/qa-devel


Re: Default invocation of pytest for qadevel projects

2014-03-06 Thread Josef Skladanka
 Any thoughts on which of those (if either) would be better?

I do not really mind either way, and do not have any strong preference. I'm
used to having the non-functional tests run by default, but I can easily adapt
to whatever we decide to do.

j.
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/qa-devel


Taskbot: TAP vs Subunit

2013-10-31 Thread Josef Skladanka
Tim (and of course the rest of the gang ;)),

During our chat with Tim, we agreed that we'd really like to use some 
standardized 'output format' for the tests in Taskbot, to be a bit more 
programming-language/results-store-implementation agnostic.
We knew about two options - TAP 
http://testanything.org/wiki/index.php/Main_Page and Subunit 
https://pypi.python.org/pypi/python-subunit.

= Subunit =

At least in Python, it is quite tightly coupled with unittest, both
ideologically and practically. I was unable to find a way to just produce a
Subunit stream without actually running a testsuite.

The format is (basically) just plain PASS/FAIL/INFO/..., and there is a
possibility to add some 'details' to results. It should also be possible to add
an attachment at the end of the stream, but no result-specific data can be
added (IMO).

Also, Subunit is now in the process of transitioning to a new implementation,
which should fix some issues with concurrency, add more result states, etc.,
but it will be binary, so human readability would suffer quite a bit.

I do not really feel that this is a good match for our needs (at least not
without significant hacking).

= TAP =

TAP is not unittest-specific, and it is a human-readable plaintext format.

It also has just PASS/FAIL logic, but it is possible to attach YAML 'metadata'
to any result (since TAP v13).

The real issue with TAP is Python support.
There is a TAP-consumer library created as an example for PyParsing
(http://pyparsing.wikispaces.com/file/detail/TAP.py), but it does not support
the v13 protocol, and is quite useless as such.
The TAP-producer library for Python (https://github.com/rjbs/pytap) also uses
the old version (i.e. no YAML extensions), and seems to be dead (2 years since
the last update). It is also quite badly written.


= Result =

Although neither choice is ideal, I feel that TAP would be the better choice,
even though it would mean implementing our own producer/parser.
Also, TAP is a really simplistic format, so creating TAP output should be quite
easy in any programming language.
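To illustrate that point, a minimal hand-rolled TAP v13 producer with the YAML
diagnostic block mentioned above (the field names in the YAML block are made up
for the example):
```
import yaml  # PyYAML, used only to format the diagnostic block


def emit_tap(results):
    # Print a TAP version 13 stream for a list of (name, ok, details) tuples.
    print("TAP version 13")
    print("1..%d" % len(results))
    for number, (name, ok, details) in enumerate(results, start=1):
        print("%s %d - %s" % ("ok" if ok else "not ok", number, name))
        if details:
            # TAP v13 allows a YAML block, indented and delimited by --- / ...
            print("  ---")
            for line in yaml.safe_dump(details, default_flow_style=False).splitlines():
                print("  " + line)
            print("  ...")


emit_tap([
    ("depcheck", True, {"item": "docker-1.9.1-6.git6ec29ef.fc23", "arch": "x86_64"}),
    ("upgradepath", False, {"item": "docker-1.9.1-6.git6ec29ef.fc23"}),
])
```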

It is possible that I have somehow utterly misunderstood the Subunit concept,
so it might be useful to contact some QA folks currently using it (I think Tim
mentioned something about OpenStack?), or to contact the developer directly.


J.
___
qa-devel mailing list
qa-de...@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/qa-devel


Re: Taskbot: TAP vs Subunit

2013-10-31 Thread Josef Skladanka
Lucas,

do you use any library for producing the TAP format? Also, do you have any TAP
parser, or do you just emit it? I was looking for something in Python, but all
I found was either outdated or incomplete.

Thanks,

Joza
___
qa-devel mailing list
qa-devel@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/qa-devel