In order to allow decent pre-testing before merging to next, I had Jason 
write a shell script to talk to a Jenkins server, and set up a Jenkins server 
to accept such tests with decent turnaround. Little did I know how rigid 
Jenkins is about how it wants to do things; the final result was just too 
cumbersome, with websites and job numbers to deal with .... 

   I'd like us to try again, this time with just ssh and simple scripts (Jenkins 
is just not suitable). From the user's point of view, one would be in a git branch 
such as

~/Src/petsc (my-cool-branch=) 
$ ./bin/submittest options

Depending on the options and a configuration file, it would launch a series of tests 
on the local machine or on any other machine you have ssh keys for. It could 
even have an option like --merge-to-next where it merges the 
branch into next automatically before running the tests. The jobs would all be 
run (even on the local machine) off of separate clones, so they wouldn't interfere with 
your own work. By default it would run a few cases that usually mess things up 
(a complex, a quad-precision, a C++, and a 64-bit-integer build, etc.); this will catch most 
basic errors.
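
Very roughly, and purely as a sketch (none of this exists; the config-file
format, variable names, scratch paths, and make targets below are all made up
just to make the idea concrete), submittest could be something like:

#!/bin/bash
# bin/submittest -- hypothetical sketch only, not an existing script.
# Reads lines of the form "host arch-name configure-options..." from a config
# file; for each line, clones the current branch into a scratch directory on
# that host over ssh, configures and runs the tests there, and mails the log
# back.  Assumes mail(1) works on the remote machine and that the remote
# machine can reach the repository URL.
#
# example config line:  login.somehost.edu arch-complex --with-scalar-type=complex

BRANCH=$(git rev-parse --abbrev-ref HEAD)
REPO=$(git config remote.origin.url)
CONFIG=${SUBMITTEST_CONFIG:-$HOME/.submittestrc}
EMAIL=${SUBMITTEST_EMAIL:?set SUBMITTEST_EMAIL to your address}
PIDS=()

while read -r HOST ARCH OPTS; do
  [ -z "$HOST" ] && continue
  echo "submitting $BRANCH to $HOST as $ARCH"
  ssh -n "$HOST" "
    rm -rf /tmp/submittest-$ARCH &&
    git clone --branch $BRANCH $REPO /tmp/submittest-$ARCH &&
    cd /tmp/submittest-$ARCH &&
    { ./configure PETSC_ARCH=$ARCH $OPTS && make PETSC_ARCH=$ARCH all test ; } > test.log 2>&1
    STATUS=\$?
    mail -s \"[submittest] $BRANCH $ARCH on $HOST: exit \$STATUS\" $EMAIL < test.log
    exit \$STATUS
  " &
  PIDS+=($!)
done < "$CONFIG"
# how and when the results come back is the open question below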

The biggest API question I have is how to "get the results back to you". Should 
it email the results from each job as it finishes (or just the failures), or try to batch the 
results from all the jobs together? Is email even always possible from the crazy 
machine it is sshing to? Do we have to copy the results back from the remote machines 
and put them in some magic place you can read? How do we tell you they are done? A text 
message? Update some website?  I think I would be happy with an email on each 
failure (so I can start fixing stuff right away) and a final email once all the tests 
are done, so I know they have all finished. What 
about advanced things like killing tests you've started because you already 
got a failure, fixed it, and resubmitted? This could get complex; or not. I 
know, let's use XXX to do this.
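
If the answer ends up being "mail me each failure plus one final all-done
message", the tail end of the sketch above might look something like the
following (the per-job mail in the loop above could just as easily be sent
only when a job fails); again, this is only illustrative:

# hypothetical tail end of bin/submittest: per-job logs were already mailed
# from the remote machines; here we just reap the background jobs, count the
# failures, and send one final email so you know nothing is still running
FAILED=0
for PID in "${PIDS[@]}"; do
  wait "$PID" || FAILED=$((FAILED + 1))
done
mail -s "[submittest] $BRANCH: all jobs done, $FAILED failed" "$EMAIL" <<EOF
All test jobs for branch $BRANCH have finished.
Failed jobs: $FAILED
Individual logs were mailed as each job completed.
EOF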

   Barry


> On Jun 6, 2016, at 4:44 PM, Matthew Knepley <[email protected]> wrote:
> 
> On Mon, Jun 6, 2016 at 7:08 PM, Satish Balay <[email protected]> wrote:
> On Mon, 6 Jun 2016, Matthew Knepley wrote:
> 
> > On Mon, Jun 6, 2016 at 5:27 PM, Satish Balay <[email protected]> wrote:
> >
> > > Matt,
> > >
> > > Per integration workflow - all feature branches should be tested
> > > locally (and be complete) - before merged to next [next is for
> > > integration testing - not feature testing]
> > >
> > > You could have used (modified):
> > >
> > > config/examples/arch-linux-xsdk-dbg.py
> > > config/examples/arch-osx-xsdk-opt.py
> > >
> > > Yeah - we don't have automatic 'feature branch test before integration
> > > testing' workflow - so currently this has to be done manually.
> >
> >
> > I don't really have the OS/compiler options available to test things
> > exhaustively, so I am
> > using next for this. I think this is acceptable right now.
> 
> Well then the feature is not yet ready for next - and could have
> waited until it was ready.
> 
> I do not agree here. Just because some Git nerd thinks that 'next' should be
> that way does not mean it is what makes us most productive. I think we are way
> more productive using the nightly tests as a way to discover bugs. I cannot waste
> my personal time running a bunch of tests on my own slow laptop before pushing.
> 
>   Matt
>  
> If it was for someone to use - then they could have used the
> feature-branch..
> 
> The workflow requires the feature to be complete - and minimally
> tested - before merge to next.. We do occasionally take shortcuts -
> but sometimes this results in extended broken next...
> 
> Satish
> 
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
