On Thu, Aug 31, 2017 at 5:08 PM, Timo Goebel <[email protected]> wrote:

> On 28.08.17 at 17:12, Marek Hulán wrote:
>
>> 1) codeclimate is red
>>
>> This can be ignored; we never agreed on using this as a hard metric for
>> the PR. The motivation for introducing it was mainly to save the reviewer
>> some time: we don't have to run it manually to get an indication of
>> whether something introduces a lot of complexity [1]. From my experience
>> it sometimes leads to worse code, since the author splits the logic into
>> more methods just to lower e.g. cyclomatic complexity, but that should be
>> judged separately in every case.
>>
> +1
> I like it as a suggestion, but sometimes it's just off and is better
> ignored.
>
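(As an aside, to make the point about cyclomatic complexity concrete, here
is a contrived Ruby sketch - all names are hypothetical - of a method a
metrics tool might flag, next to the kind of "fix" that only lowers the
number without making anything clearer:

    # Flagged for cyclomatic complexity, but readable in one glance:
    def os_family(os_name)
      case os_name
      when /debian|ubuntu/i then :debian
      when /centos|fedora/i then :redhat
      when /sles|opensuse/i then :suse
      else :unknown
      end
    end

    # The same mapping, reworked only to lower the metric; it now spreads
    # one decision across four trivial methods (this definition replaces
    # the one above if both are loaded):
    def os_family(os_name)
      debian_family(os_name) || redhat_family(os_name) ||
        suse_family(os_name) || :unknown
    end

    def debian_family(os_name)
      :debian if os_name =~ /debian|ubuntu/i
    end

    def redhat_family(os_name)
      :redhat if os_name =~ /centos|fedora/i
    end

    def suse_family(os_name)
      :suse if os_name =~ /sles|opensuse/i
    end

Whether the second version is an improvement is exactly the judgement call
a reviewer has to make case by case.)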
>>
>> 2) foreman is red
>>
>> This can happen because of intermittent test failures. If the PR is
>> clearly not causing new ones and the committer is aware of the error, the
>> PR is merged with a comment like "tests unrelated". If we are not sure, we
>> retrigger the run.
>>
>> If the Foreman develop branch is broken, we need to merge the PR that
>> fixes it, so this is another exception. Usually we trigger the Jenkins job
>> manually first to verify that the PR fixes the issue.
>>
> +1
> Yes, don't merge a PR with failing Foreman core tests.
>
>>
>> 3) katello is red
>>
>> If Katello becomes red only for this PR, we analyze what causes it.
>> Usually it means that we changed some internals that have an impact on
>> Katello. In such a case, we do our best to send a fixing PR to Katello, or
>> we ping someone with better knowledge in that area. We don't merge the PR
>> until it's resolved; then we usually merge both parts at the same time.
>>
> I think this is totally unfair to all the "smaller" plugin maintainers,
> and that's why I vote for removing the test completely or keeping it only
> to test our public APIs.
> I believe we should do the following:
> If the Foreman PR breaks some public API, e.g. facets, and the Katello
> tests show that, my suggestion is to fix the Foreman PR so it does not
> break the public API and to add proper deprecations where possible.
> If we change something inside Foreman - in the past we changed the host
> multiple actions from GET to POST or introduced strong parameters, for
> example - the contributor or maintainer should send a mail to foreman-dev
> explaining what needs to be changed in plugins. I think it's also a good
> idea to update the example plugin or the How to create a plugin wiki page
> if applicable.
> However, I think it's the plugin maintainer's responsibility to make sure
> their plugin works with Foreman. Everything else doesn't scale. In the
> past, a lot of "my" plugins broke because of changes to Foreman core.
> Nobody has cared to send a PR so far. But that's fine. I don't expect
> anybody to. It's my job to test the plugin and fix it if it breaks.
> I think we should not block Foreman PRs because an additional parameter
> was added to some internal method, just because Katello happens to
> override that method. It just doesn't make any sense to me why we should
> do that for Katello but not for all the other plugins out there. This is
> not against Katello; it's just the only plugin tested with Foreman core
> right now.
> Currently, we're developing a plugin to show system log files in Foreman.
> That requires a complete ELK stack for development. I would not expect
> every developer to have that at hand.
> If we leave Katello in the matrix, I think it would be totally fair to
> also add our new plugin to Foreman's test matrix. But I wouldn't want to
> block Foreman development just because some test in that plugin breaks.
> I know Katello is important for Red Hat and it's one of the larger
> plugins. But that doesn't justify a special role in my opinion.


I'm going to try to approach this from an unbiased position, so if I start
to lean a particular way, please forgive me up front and try to ignore the
leaning. I think it's important for us to look at the core reasoning behind
why this set of tests was added to Foreman PRs and agreed upon by Foreman
and Katello to begin with:

 1) Reduce developer inefficiency by preventing broken environments
 2) Reduce unplanned work in the form of drop-everything fixes

With Katello being a large plugin not only in scope but also in number of
developers, whenever a breakage occurred a large set of developers would end
up with broken environments. That required 1-2 developers to drop what they
were doing and fix the issue immediately, or face a growing backlog of PRs
followed by build breakages, resulting in a snowball effect. This has never
been an enjoyable position to be in, since it could come on unexpectedly.
Other plugins have suffered the same and have said as much.

I would argue that since Foreman provides the plugin ability, it has an
obligation not to break plugins. When this all started, there was a shallow
plugin API, and the way to "integrate" your plugin was through judicious use
of Rubyisms, not a public, standardized API (a rough sketch of that contrast
follows below). Times have changed and we should change with them. However,
I don't think we can change the way things work wholesale until we provide
the ability to do things properly. To that end, I think we have two other
conversations worth having and gathering data about:

 1) What does it mean to be a plugin?
 2) What are the ways plugins try to extend Foreman, and which of them are
not covered by a sustainable API?

To date, the best resource we have is [1]. And note, even [1] describes
methods of overriding any functionality found within Foreman (or another
plugin, for that matter). These could easily be their own email topics, or a
single topic moved to its own thread. I've also wondered whether plugins are
still the right way to go if we stop to look back at what our goals with
them are, but that's another, much larger discussion to be had.
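
As a rough sketch of the contrast I have in mind - the plugin name, the
host extension and the merged data below are all made up, and the
registration block is abbreviated from what [1] documents:

    # Rubyism style: reopen or prepend into core and override behaviour
    # directly. It works today, but it breaks silently whenever core renames
    # the method or changes what it returns.
    module ExamplePlugin
      module HostExtensions
        def info
          # merge some plugin-specific data into the ENC output
          super.deep_merge('parameters' => { 'example_plugin_role' => 'logserver' })
        end
      end
    end
    Host::Managed.prepend(ExamplePlugin::HostExtensions)

    # Declared style: the plugin states what it needs through the
    # registration DSL described in [1], so core knows which contract it has
    # to keep stable across releases.
    Foreman::Plugin.register :example_plugin do
      requires_foreman '>= 1.15'
      # menus, permissions, facets etc. declared here via the DSL
    end

The first style is what plugins fall back on for anything the DSL doesn't
cover, and that's exactly the surface that breaks without warning.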

I am happy to break out these topics to their own threads if people think
that would be prudent.

Eric

[1]
http://projects.theforeman.org/projects/foreman/wiki/How_to_Create_a_Plugin


>
>> If it turns out that more PRs are failing with the same errors, we merge
>> PRs if we're sure they don't introduce new Katello failures. At that
>> point, we're not blocking merges until Katello is green again. (*) this is
>> where the suggestion below applies
>>
>> 4) upgrade is red
>>
>> This is very similar to the Katello job: if there's some issue in the
>> upgrade, we should not merge the PR. I remember, though, that there was a
>> time when we knew the job was broken, which falls under the "known to be
>> broken" category.
>>
> +1, if upgrade is broken the PR is broken.
>
>>
>> 5) There's no 5; all the rest must be green. Sometimes the Hound service
>> does not respond and remains in the "running" state; then it's retriggered
>> by the reviewer. prprocessor and Travis must always be happy.
>>
> Don't get me started on Hound. But it's the best we have right now, so
> generally speaking: yes, linting is important. If there are lint warnings,
> we shouldn't merge the PR.
>
>>
>> Now the promised suggestion. When we see that the Katello/upgrade job is
>> broken on multiple PRs, the first reviewer who spots it should notify the
>> rest of the developers: "from now on, we ignore tests XY". The ideal
>> channel seems to be this list. This helps Katello devs find out when it
>> started and which commits first induced it.
>>
>> [1] https://groups.google.com/forum/#!topic/foreman-dev/p7ESagXwNwk
>>
>> Thanks for the comments
>>
>> --
>> Marek
>>
> Timo
>
>



-- 
Eric D. Helms
Red Hat Engineering
