On Fri, Sep 1, 2017 at 8:56 AM, Marek Hulán <[email protected]> wrote:
> Thanks Timo for your input. Please see my comment below in the text.
>
> On Thursday, 31 August 2017 at 23:08:34 CEST, Timo Goebel wrote:
>> Am 28.08.17 um 17:12 schrieb Marek Hulán:
>> > 1) codeclimate is red
>> >
>> > This can be ignored; we never agreed on using this as a hard metric for
>> > the PR. The motivation to introduce it was mainly to save the reviewer
>> > some time: we don't have to run it manually to get an indication of
>> > whether something introduces a lot of complexity [1]. From my experience
>> > it sometimes leads to worse code, since the author splits the logic into
>> > more methods to lower e.g. cyclomatic complexity, but it should be
>> > judged separately in every case.
>> +1
>> I like it as a suggestion, but sometimes it's just off and is better
>> ignored.
>>
>> > 2) foreman is red
>> >
>> > This can happen because of intermittent test failures. If the PR is
>> > clearly not causing new ones and the committer is aware of the error,
>> > the PR is merged with a comment like "tests unrelated". If we are not
>> > sure, we retrigger the run.
>> >
>> > If the Foreman develop branch is broken, we need to merge the PR that
>> > fixes it, so this is another exception. Usually we trigger the Jenkins
>> > job manually first to verify that the PR fixes the issue.
>>
>> +1
>> Yes, don't merge a PR with failing Foreman core tests.
>>
>> > 3) katello is red
>> >
>> > If Katello becomes red only for this PR, we analyze what causes it.
>> > Usually it means we changed some internal things that have an impact on
>> > Katello. In that case, we do our best to send a fixing PR to Katello, or
>> > we ping someone with better knowledge of the area. We don't merge the PR
>> > until it's resolved; then we usually merge both parts at the same time.
>>
>> I think this is totally unfair to all the "smaller" plugin maintainers,
>> and that's why I vote for removing the test completely, or keeping it
>> only to test our public APIs.
>>
>> I believe we should do the following: if a Foreman PR breaks some public
>> API, e.g. facets, and the Katello tests show that, my suggestion is to
>> fix the Foreman PR so it doesn't break the public API, and to add proper
>> deprecations if possible. If we change something inside Foreman - in the
>> past we changed the host multiple actions from GET to POST or introduced
>> strong parameters, for example - the contributor or maintainer should
>> send a mail to foreman-dev explaining what needs to be changed in
>> plugins. I think it's also a good idea to fix the example plugin or the
>> "How to create a plugin" wiki page if applicable.
>>
>> However, I think it's the plugin maintainer's responsibility to make sure
>> their plugin works with Foreman. Everything else doesn't scale. In the
>> past a lot of "my" plugins broke because of changes to Foreman core.
>> Nobody has cared to send a PR so far, but that's fine; I don't expect
>> anybody to. It's my job to test the plugin and fix it if it breaks.
>>
>> I think we should not block Foreman PRs because an additional parameter
>> was added to some internal method, just because Katello happens to
>> override that method. It just doesn't make any sense to me why we should
>> do that for Katello but not for all the other plugins out there. This is
>> not against Katello; it's just the only plugin tested with Foreman core
>> right now.
>>
>> Currently, we're developing a plugin to show system logfiles in Foreman.
>> That requires a complete ELK stack for development, and I would not
>> expect every developer to have that at hand. If we leave Katello in the
>> matrix, I think it would be totally fair to also add our new plugin to
>> Foreman's test matrix. But I wouldn't want to block Foreman development
>> just because some test in that plugin breaks.
>>
>> I know Katello is important for Red Hat and it's one of the larger
>> plugins, but that doesn't justify a special role in my opinion.
>
> I understand your feeling. A "justification" for me is that Katello is
> the largest plugin we have and is therefore much more prone to being
> broken by changes in Foreman. The more core code a plugin touches, the
> higher the chance that a new core PR breaks something. Also, I think
> testing Katello is a good way for core to find out what impact our
> changes have: we get early notice about something that could affect all
> plugins, and the PR author can consider whether there's a less breaking
> way of doing the change.
>
> Having said that, I can still understand that other plugin maintainers
> feel it's unfair. But instead of dropping Katello from the matrix, I
> think the opposite approach would make more sense: I'd like to see many
> more plugins tested. Plugin test suites are usually much smaller than the
> core one, so it shouldn't take too much computation time.

+1 - we should probably skip re-running the Foreman tests with the plugin
in this matrix, as that is usually the longest part of the test run for
plugins.

>
> In that case I think we'd need criteria for which plugins should be
> covered. Feel free to ask me to start a separate thread, but a few
> thoughts on what these could be: the plugin lives under the theforeman
> organization on GitHub, the plugin is packaged in foreman-packaging, the
> plugin has support in foreman-installer. It's funny that Katello does not
> technically meet any of these, but I think that's being worked on.
>
> Even if such a job wouldn't block the core PR from being merged, I as a
> plugin maintainer would at least see a warning that my plugin was broken.
> In the ideal case, the core PR author would consider keeping the change
> backwards compatible.

The X most used plugins could be a good criterion, which would give us some
insight into how changes in core influence the wider ecosystem.

-- Ivan

>
> --
> Marek
>
>>
>> > If it turns out there are more PRs failing with the same errors, we
>> > merge PRs if we're sure they don't introduce new Katello failures. At
>> > that point, we're not blocking merges until Katello is green again. (*)
>> > This is where the suggestion applies.
>> >
>> > 4) upgrade is red
>> >
>> > This is very similar to the Katello job: if there's some issue in the
>> > upgrade, we should not merge the PR. I remember, though, there was a
>> > time when we knew the job was broken, which falls under the "known to
>> > be broken" category.
>>
>> +1, if upgrade is broken the PR is broken.
>>
>> > 5) There's no 5; all the rest must be green. Sometimes the Hound
>> > service does not respond and remains in the "running" state; then it's
>> > retriggered by the reviewer. prprocessor and Travis must always be
>> > happy.
>>
>> Don't get me started on Hound. But it's the best we have right now, so
>> generally speaking, yes: linting is important. If there are lint
>> warnings, we shouldn't merge the PR.
>>
>> > Now the promised suggestion: when we see the katello/upgrade job is
>> > broken on multiple PRs, the first reviewer who spots it should notify
>> > the rest of the developers with a "from now on, we ignore tests XY"
>> > message. The ideal channel seems to be this list. This also helps
>> > Katello devs find out when it started and which commits first induced
>> > it.
>> >
>> > [1] https://groups.google.com/forum/#!topic/foreman-dev/p7ESagXwNwk
>> >
>> > Thanks for comments
>> >
>> > --
>> > Marek
>>
>> Timo
>
>
> --
> You received this message because you are subscribed to the Google Groups 
> "foreman-dev" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> For more options, visit https://groups.google.com/d/optout.