On Tue, 6 Mar 2018 09:54:50 +0100 Stefan Schmidt <ste...@osg.samsung.com> said:

> Hello.
> 
> 
> On 03/06/2018 07:44 AM, Carsten Haitzler (The Rasterman) wrote:
> > tbh platforms are an issue. windows is a big one, as setting up a
> > cross-build environment is a fair bit of work, and testing needs a windows
> > vm, which costs money (you need licenses, unless you can scrounge one off
> > another pc you have/had). osx is worse in that you have to go buy apple
> > hardware to run osx to test.
> >
> > i think making it a requirement that every commit work on every platform is
> > not viable UNLESS:
> >
> > 1. a windows cross-compile env is available to developers (e.g. ssh into a
> > vm set up for this with clear documentation and scripts/tools to do the
> > builds. i have one set up for me at home).
> > 2. a vm with remote display that developers can use to run/test changes
> > easily.
> > 3. an actual osx box that developers can ssh into to compile, run, test
> > and remotely view/interact with, like the windows vm.
> > 4. same for freebsd etc.
> 
> We do not have this and I am pretty sure we never will (I can only hope the
> future proves me wrong). Maybe we should be more honest and state that any

then i can't see how we could make it a requirement for committing that
changes build/run there. when people report issues with specific
builds/platforms we address them as needed.

> platform we support (besides Linux on x86_64) has only been tested at some
> point in the past.

i actually cross-build for windows every few weeks or so, and for freebsd
about as often. i also build on rpi3 (32bit) regularly enough.

we haven't tested on every linux distro either... so should we only claim a few
distributions? i don't think we're being dishonest really. releases do get a
lot more testing to verify they work across os's. master may not get as much
until a release cycle.

> > if a platform is not EASILY accessible and able to be EASILY worked with,
> > then making it a requirement to pass a build/test on that platform is
> > silly.
> >
> > developers have to be able to do incremental builds. not a "wait 10 mins for
> > the next build cycle to happen then wait another 10 for the log to see if it
> > worked this time". that's fine for reports. it's not fine for actual
> > development.
> >
> > if this infra existed and worked well, THEN i think it might be sane to
> > start adding "it has to pass a build test". then deciding on what platforms
> > have to be supported is another step. this has to be pain-free or as close
> > as possible to that.
> 
> Good luck finding someone to set this all up and keep it working. Definitely
> not me. :-) Looking back at how badly the idea of having a windows vm, a mac
> and an arm device hooked up to Jenkins turned out, I simply gave up on that
> part.

well then i just can't see us ever making it a requirement that every single
commit builds across these os's, if that can't be automated and made available
to developers to actually look into and see what is working or not and why. :(

> > not to mention... the test suites need to actually be reliable. i just
> > found one of the ecore_file tests was grabbing a page from sf.net ... and
> > now sf.net is refusing to serve it anymore, thus test suites keep failing.
> > tests that are fragile like this should not be some gatekeeper as to
> > whether code goes in or not.
> >
> > if a test suite is to be a gatekeeper it has to be done right. that means
> > it has to work reliably on the build host and run very quickly. things
> > like testing network fetches must not rely on anything outside of that
> > vm/box/chroot etc. ... and we don't have that situation. this probably
> > needs to be fixed first and foremost. not to mention just dealing with
> > check and our tests to find what went wrong is a nightmare. finding the
> > test that went wrong in a sea of output ... is bad.
> >
> > so my take is this: first work on the steps needed to get the final
> > outcome. better test infra. easier to write tests. easier to run and find
> > just the test that failed and run it by itself easily etc. it should be as
> > simple as:
> >
> > make check
> > ...
> > FAIL: src/tests/some_test_binary
> >
> > and to test it i just copy & paste that binary/line and nothing more, and
> > i get exactly that one test that failed. i don't have to set env vars,
> > read src code to find the test name and so on. ... it currently is far
> > from this simple. ;(
> 
> Yes, our tests are not as reliable as they should be.
> Yes, they would need to run in a controlled env.
> Yes, we might need to look at alternatives to libcheck.

i'm just saying these need to not randomly reject commits from devs when the
issue has nothing to do with what the dev is doing. it can't become an
automated requirement unless it's reliably correct. :(
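
to illustrate the kind of hermetic setup i mean, here's a minimal sketch (in
python just to show the principle - the real ecore_file tests are C, and the
handler/page contents are made up): the test serves the page from a throwaway
local server, so it never depends on sf.net or anything else being up:

  import threading
  import urllib.request
  from http.server import BaseHTTPRequestHandler, HTTPServer

  class PageHandler(BaseHTTPRequestHandler):
      # serve a fixed page so the test never touches an external site
      def do_GET(self):
          body = b"hello from the local fixture"
          self.send_response(200)
          self.send_header("Content-Length", str(len(body)))
          self.end_headers()
          self.wfile.write(body)

      def log_message(self, *args):
          pass  # keep the test output clean

  def test_fetch_is_hermetic():
      # bind to port 0 so the OS picks a free port - no clashes on build hosts
      server = HTTPServer(("127.0.0.1", 0), PageHandler)
      port = server.server_address[1]
      threading.Thread(target=server.serve_forever, daemon=True).start()
      try:
          data = urllib.request.urlopen("http://127.0.0.1:%d/" % port).read()
          assert data == b"hello from the local fixture"
      finally:
          server.shutdown()

  if __name__ == "__main__":
      test_fetch_is_hermetic()
      print("PASS")

the same trick works in C with a socket bound to 127.0.0.1: the point is that
the test owns both ends of the connection.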

> But even with me agreeing to the three things above, the core question still
> remains open.
> 
> Is this developer community willing to accept a working test suite as a
> gatekeeper? I don't think this is the case.

i think it's best to make it an expectation that devs run make check and build
efl before a push, before we even consider making that the gatekeeper. we
aren't even there yet with enough tooling, let alone ready to talk about an
automated gatekeeper. just a reliable, easy to use and complete test suite
would be a big step forward. i think it's putting the cart before the horse to
talk about automated gatekeepers + jenkins ... etc. without getting these first
things right.
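
for devs who want to make that expectation automatic for themselves, a local
git hook does it - here's a rough, hypothetical sketch that could live in
.git/hooks/pre-push (opt-in, per developer; the exact make targets are
assumptions):

  #!/usr/bin/env python3
  # hypothetical pre-push hook: build and run the test suite, and abort the
  # push (by exiting non-zero) if either fails.
  import subprocess
  import sys

  def main():
      # incremental build first, then the tests; both must succeed
      for cmd in (["make"], ["make", "check"]):
          print("pre-push: running %s ..." % " ".join(cmd))
          if subprocess.run(cmd).returncode != 0:
              print("pre-push: %s failed - push aborted" % " ".join(cmd))
              return 1
      return 0

  if __name__ == "__main__":
      sys.exit(main())

git runs pre-push from the top of the working tree, so plain make targets are
enough; mark it executable and git refuses the push whenever make check fails.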

> My personal motivation to work on QA and CI has gone down to zero over the
> years. It just feels like a Sisyphean task to look at master again and again
> to work out why it is broken now. Dig through the commits, bisect them, point
> fingers and constantly poke people to get it fixed. All long after the
> problems have entered master.

what you want is gerrit. and i've seen how that works. i'm not a fan. it ends
up either:

1. it's ignored, and patches sit in review for weeks, months or years and then
vanish, or
2. it's gamed: because everything has to go through it, the process gets
minimized to reduce its impact, and people just vote things up without real
review etc.

if you automated the voting as a build check instead of humans, you'd need to
have a test build bot vote and do it FAST. that means you need enough infra
for a build per commit, and it has to be totally reliable - the build env and
the tests. is that really possible? then if you don't process commits in
strict order you end up with conflicts, and if some commits are rejected from
a push containing multiple commits, you have dependent commits stranded...
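
to make the strict-order problem concrete, the only consistent shape i can see
for such a bot is a strictly serial queue - roughly like this python sketch
(entirely hypothetical: the ref layout and landing flow are assumptions), and
serial is exactly what makes it slow:

  # rough sketch of a strictly-serial pre-merge queue: each candidate commit
  # is rebased onto the current tip, built and tested before it may land.
  import subprocess

  def run(*cmd):
      return subprocess.run(cmd).returncode == 0

  def try_to_land(candidate_ref):
      # strict order: always test the candidate on top of the tip right now
      if not run("git", "fetch", "origin", "master"):
          return False
      if not run("git", "fetch", "origin", candidate_ref):
          return False
      if not run("git", "checkout", "--detach", "FETCH_HEAD"):
          return False
      if not run("git", "rebase", "origin/master"):
          return False  # conflict -> reject; the dev has to rebase and resend
      if not (run("make") and run("make", "check")):
          return False  # build or test break -> reject the commit
      # fast-forward master to the tested result; this fails if anything
      # landed in between, which is why the queue must be strictly serial
      return run("git", "push", "origin", "HEAD:refs/heads/master")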

i don't see how this ends up better. it ends up worse i think.

> I am willing to admit that the approach I used to reach my goals might have
> been flawed and simply failed. Someone else might want to pick up the slack
> on it.

i really don't think we have a lot of break issues given our size and
complexity. not build breaks anyway. right now if jenkins detects a build
break... how does a developer know? it can take hours before it detects
something. should they sit hitting reload in the browser for the next hour
hoping one of the builds that runs contains their commit? we have a broken
feedback cycle. jenkins should mail the mailing list with "commit X broke
build: log etc. link here". if a build breaks, jenkins should go back through
commits until it finds a working one, then report the commit that broke it. at
least devs would get a notification. unless they sit staring at build.e.org
hitting reload, or someone tells them, a dev just has no idea a build has
broken.
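
that whole loop could be a small script around the jenkins job - here's a
rough sketch (the walk-back depth, mail host and addresses are all made up,
and it assumes "make" failing is the break signal):

  # sketch of the missing feedback loop: when the tip fails to build, walk
  # back one commit at a time until one builds, then mail the list naming
  # the first bad commit.
  import smtplib
  import subprocess
  from email.mime.text import MIMEText

  def builds(rev):
      subprocess.run(["git", "checkout", "--detach", rev])
      return subprocess.run(["make"]).returncode == 0

  def first_bad_commit(max_back=50):
      revs = subprocess.check_output(
          ["git", "rev-list", "--max-count=%d" % max_back, "HEAD"]
      ).decode().split()
      previous = None
      for rev in revs:          # newest first
          if builds(rev):
              return previous   # the failing commit right after a good one
          previous = rev
      return None               # no good commit found in range

  def mail_the_list(commit):
      msg = MIMEText("commit %s broke the build - jenkins log link here." % commit)
      msg["Subject"] = "[build break] %s" % commit
      msg["From"] = "buildbot@example.org"  # placeholder address
      msg["To"] = "enlightenment-devel@lists.sourceforge.net"
      with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA
          smtp.send_message(msg)

  if __name__ == "__main__":
      bad = first_bad_commit()
      if bad:
          mail_the_list(bad)

even something this crude closes the loop: the break lands on the list within
one build cycle instead of someone noticing days later.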

i think we've talked about this many times before... :)

-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
Carsten Haitzler - ras...@rasterman.com

