Re: [petsc-dev] PETSc future starting as a new design layer that runs on top of PETSc 3?

2022-07-26 Thread Scott Kruger


I have to put in a good word for Fortran.   There are many parallels in
capability with modern C++, and it only requires disregarding every
implementation method you previously knew and mapping them onto keywords
that have different meanings in every other language.   The tooling is
comparable to C and C++ in many ways, and once the clang team debugs PGI's
fortran parser, flang will be every bit as good as clang (should occur
within the next funding cycle.  Or two).   For hiring young scientists to
work on future PETSc, we'd be able to tap into the surprisingly large pool
of STEM graduates who are unable to get jobs at AI startups.   We have a
prototype https://gitlab.com/NIMRODteam/nimrod-abstract that we'd love to
get feedback on.



On Tue, Jul 26, 2022 at 8:21 AM Jed Brown  wrote:

> I have to put in a good word for Rust. There are many parallels in
> capability with modern C++, but the compiler enforces many good practices
> (and guarantees safety), compiler error and warning messages are really
> useful, and the tooling is phenomenal on multiple fronts, from packaging to
> refactoring to documentation and testing. We have a prototype
> https://github.com/petsc/petsc-rs that we'd love to get feedback on.
>
> On Tue, Jul 26, 2022, at 8:07 AM, Jacob Faibussowitsch wrote:
>
> >  And for programmers today who program by googling, googling does not
> distinguish between good modern C++ solutions and crappy 15 year old
> solutions that still work but should not be used today.
>
> Sure, all jokes aside, the difference between old and modern C++ is, broadly
> speaking:
>
> 1. If you find yourself specifying the type, you are using old C++ ->
> always prefer auto, and duck typing
> 2. If you find yourself managing memory directly, you are using old C++ ->
> always prefer smart pointers
> 3. If you find yourself calling begin/end, acquire/release, do/undo
> function pairs you are using C -> prefer to wrap everything in RAII types.
>
> The ability to forgo types and write generic algorithms that only require
> *functionality* (and being able to assert this at compile-time) rather than
> a specific type name. Having an explicit memory ownership model that is
> enforced by construction via smart pointers. Making it impossible not to
> clean up after yourself via RAII.
>
> All “modern” C++ does is make the above more ergonomic and easier to do.
>
> Best regards,
>
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
>
> > On Jul 26, 2022, at 09:55, Barry Smith  wrote:
> >
> >
> >
> >> On Jul 26, 2022, at 9:43 AM, Jacob Faibussowitsch 
> wrote:
> >>
> >>> even more importantly we would need a huge amount of education as to
> what to use and what not to use otherwise our hacking habits will fill the
> source code with bad code.
> >>
> >> As long as you never type “new” and “delete” then you are using modern
> C++ :)
> >
> > As a joke, this is good, but the reality is that C++ has 30 years of
> accumulated junk, and I don't know of any automated way to prevent that
> junk from being used in PETSc; there is no C++ compiler flag
> --std-no-old-junk. And for programmers today who program by googling,
> googling does not distinguish between good modern C++ solutions and crappy
> 15 year old solutions that still work but should not be used today.
> >
> >>
> >>> Based on Jacob's contributions even "modern" C++ requires lots of
> macros.
> >>
> >> Not really. Most of the macros are in service of making C++-ish code
> work from C, and are used as a convenience. If I didn’t have to make the
> C++ callable from C, then we could remove many of the macros.
> >>
> >> Admittedly PetscCall() and friends would need to stay (unless we
> mandate C++23 https://en.cppreference.com/w/cpp/utility/basic_stacktrace)
> but now that they are uniform it would also not be difficult to factor them
> out again.
> >
> > PetscCall is because C does not have exceptions. Presumably, a modern
> C++ PETSc would use exceptions for all error handling so would not need
> PetscCall and friends at all? The stack on error would be handled in a
> modern C++ way.
> >>
> >> Best regards,
> >>
> >> Jacob Faibussowitsch
> >> (Jacob Fai - booss - oh - vitch)
> >>
> >>> On Jul 26, 2022, at 09:26, Barry Smith  wrote:
> >>>
> >>>
> >>> With C++ we would need good security guards on the MR who prevent use
> of the "bad old C++" paradigms and only allow use of proper modern
> techniques; even more importantly we would need a huge amount of education
> as to what to use and what not to use otherwise our hacking habits will
> fill the source code with bad code.
> >>>
> >>> Based on Jacob's contributions even "modern" C++ requires lots of
> macros. Macros are horrible because they make using automatic
> transformations on the source code (that utilize the language structure and
> are not just regular expression based) almost impossible. We've been doing
> some refactoring recently (mostly Jacob with PetscCall and now I am adding
> more variants of PetscCall) and 

Re: [petsc-dev] test harness requires case independent

2022-04-29 Thread Scott Kruger


https://gitlab.com/petsc/petsc/-/merge_requests/5189
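
Until that MR lands, one workaround is to lowercase the query value yourself
before handing it to make (a sketch; it assumes the `requires:` values in the
test blocks are spelled in lowercase, as in Barry's working example below):

```shell
# Hypothetical helper: normalize the query value so 'SuperLU_DIST' and
# 'superlu_dist' behave the same; tr does the lowering.
queryval=$(printf '%s' '*SuperLU_DIST*' | tr '[:upper:]' '[:lower:]')
echo "$queryval"   # *superlu_dist*
echo make -f gmakefile.test test query=requires queryval="$queryval"
```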

On 2022-04-27 14:33, Barry Smith did write:
> 
>   Scott,
> 
>   Could the test harness search for requires be made case independent?
> 
> $ make -f gmakefile.test test query='requires' queryval='*superlu_dist*'
> Using MAKEFLAGS: -- queryval=*superlu_dist* query=requires
>  CLINKER arch-superlu_dist-single/tests/ksp/ksp/tests/ex17
> TEST 
> arch-superlu_dist-single/tests/counts/ksp_ksp_tests-ex17_superlu_dist.counts
>  ok ksp_ksp_tests-ex17_superlu_dist
> 
> 
> $ make -f gmakefile.test test query='requires' queryval='*SuperLU_DIST*'
> Using MAKEFLAGS: -- queryval=*SuperLU_DIST* query=requires
> # No test results in  ./arch-superlu_dist-single/tests/counts
> 
> 
> I looked at query_test.py and could not see where the queryval comes from
> 
> Thanks
> 

-- 
Scott Kruger
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 466-3196
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Gitlab workflow discussion with GitLab developers

2022-01-21 Thread Scott Kruger
On 2022-01-20 21:40, Junchao Zhang did write:
> *  Email notification when one is mentioned or added as a reviewer

Like Barry, I get emails on these so I think your notification settings
are off.

> *  Color text in comment box
> *  Click a failed job, run the job with the *updated* branch

I doubt that they will ever allow this because it would get too
complicated, but there are improvements to the workflow that could be made.

Ideal workflow:
 - Automatically detects that this is a resubmit, and runs the last
   failed job first; i.e., if linux-cuda-double fails, run that job
   first, and then rerun the rest of the pipeline if it passes (so that
   we get a clean pipeline for the MR).

Current workflow (from Satish) which works but is a pain:
 - Launch pipeline.  Stop it.  Find job on web page and start it
   manually.  If passes, hit run on pipeline.

Less-ideal-but-improved workflow:
Based on what I've seen the team do with the `pages:` job (which I
learned about this week), this might work?

Add something like this to `.test`:

  only:
    variables:
      - $PETSC_RUN_JOB == $TEST_ARCH

So that could then launch a pipeline with:
PETSC_RUN_JOB = arch-ci-linux-cuda
except I'm pretty sure this won't work based on how those `$`'s are
interpreted.  Thoughts, Satish?
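
If something along those lines does work, the variable could be set at push
time rather than through the web UI: GitLab push options can inject CI
variables into the triggered pipeline (a sketch; assumes the project accepts
push options, and `PETSC_RUN_JOB` is the hypothetical variable from the
snippet above, not an existing PETSc convention):

```shell
# Hypothetical: select the CI job to run by setting a pipeline variable
# when pushing the branch; ci.variable is a standard GitLab push option.
job=arch-ci-linux-cuda
echo git push -o ci.variable="PETSC_RUN_JOB=$job" origin HEAD
```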

Other less-ideal-but-improved workflow:
I tried playing around with setting variables related to tags when you
launch a job; e.g., 
   PETSC_JOB_TAG = gpu:nvidia

where `gpu:nvidia` is a current tag.  I also tried to label jobs in
other ways, but I couldn't get it to work (the documentation made me
think we could do this).  That was a couple of years ago though, and
perhaps they have something like this working now.



> *  Allow one to reorder commits (e.g., the fix up commits generated from
> applying comments) and mark commits that should be fixed up
> *  Easily retarget a branch, e.g., from main to release (currently I have
> to checkout to local machine, do rebase, then push)

This is asking for a git GUI inside GitLab (GitKraken, gitk, lazygit, etc.).
No disagreement, but the workflow issues should take much higher priority IMO.

Scott

 
> --Junchao Zhang
> 
> 
> On Thu, Jan 20, 2022 at 7:05 PM Barry Smith  wrote:
> 
> >
> >   I got asked to go over some of my Gitlab workflow uses next week with
> > some Gitlab developers; they do this to understand how Gitlab is used, how
> > it can be improved etc.
> >
> >   If anyone has ideas on topics I should hit, let me know. I will hit them
> > on the brokenness of appropriate code-owners not being automatically added
> > to reviewers. And support for people outside of the PETSc group to set more
> > things when they make MRs. And being able to easily add non-PETSc folks as
> > reviewers.
> >
> >   Barry
> >
> >



Re: [petsc-dev] [DocTip!] #3 CI docs build and preview

2021-11-08 Thread Scott Kruger
On 2021-11-06 15:09, Patrick Sanan did write:

> Unfortunately, I don't know a simple way to reliably preview a single .rst
> file (as you might be used to from working with some Markdown-based tools).

rst2html is the standard Python docutils command.  The problem is that
Sphinx is really rst + (extensions for book-like documentation), so the
extensions won't be handled.  But that will be true for MyST as well.

Scott




Re: [petsc-dev] [DocTip!] #2: Aiming for self-updating docs

2021-11-08 Thread Scott Kruger
On 2021-11-06 11:12, Matthew Knepley did write:
> On Sat, Nov 6, 2021 at 10:34 AM Patrick Sanan 
> wrote:
> > I don't know what WEB is, but if you're saying that this is kinda clunky,
> > yes it definitely is - my only contention is that it's better than
> > copy-pasting code and output.  I'm not sure if there's an easier and/or
> > better way with Sphinx.
> >
> 
> WEB was the futuristic documentation idea of Don Knuth.
> 

It never caught on (for good reasons IMO), but it is important
historically and programmers should be aware of it:
https://en.wikipedia.org/wiki/Literate_programming

For those who love literate programming and fortran, PPPL developed this
in the 80's:
https://w3.pppl.gov/~krommes/fweb.html
and I believe it is still maintained.

I dealt with a code written with this tool.  Interesting, but I never
wanted to follow it myself.  

Scott



>   Thanks,
> 
>  Matt
> 
> 
> >
> >> Doing actual literate documentation of key tutorial programs would be a
> >> nice way of doing this, but I realise that's a lot more effort.
> >>
> > This is still a hope/plan to go into doc/tutorials - follow the deal.ii
> > model for a small number of key examples. Matt has done a couple of pages
> > there already, in this direction.
> >
> > Lawrence
> >
> >
> 
> -- 
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.cse.buffalo.edu/~knepley/>



Re: [petsc-dev] PETSCTEST_VALGRIND

2021-10-25 Thread Scott Kruger


Adding this as a feature would add some ugly code logic for something
that only affects 2 tests out of the thousands that we have.  

Instead, MR !4500 enables one to filter out that test.  

So:

make -f gmakefile test s='snes_tutorials-ex19_' i='\!*logviewmemory' VALGRIND=1

does every ex19 test *except* for the problematic test.


I might be able to figure out a better way of hard-coding runtime
behavior per test, but I think for now this offers a general-purpose
workaround.



If you still don't like that, you could just make a local modification
to `config/gmakegentest.py`, changing

    if self.petsc_arch.find('valgrind') >= 0:
      self.conf['PETSCTEST_VALGRIND']=1

to

    self.conf['PETSCTEST_VALGRIND']=1


This means that the `logviewmemory` test would never be run, but that
might be OK for your needs.

Scott

On 2021-10-25 09:22, Pierre Jolivet did write:
> Hello,
> I think I’m asking too much of the test harness, but is there a way to 
> automatically deactivate a test when running with VALGRIND=1 without having 
> to play around with PETSC_RUNNING_ON_VALGRIND in the code?
> I see the variable PETSCTEST_VALGRIND, but it seems rather static and set 
> depending on PETSC_ARCH.
> So even if there is src/snes/tutorials/ex19.c: requires: 
> defined(PETSC_USE_LOG) !defined(PETSCTEST_VALGRIND), 
> Running make -f gmakefile test s='snes*logviewmemory' VALGRIND=1 on my 
> machine (without valgrind in the PETSC_ARCH) gets the test going.
> 
> Thanks,
> Pierre



Re: [petsc-dev] Is this a bug in test generation?

2021-08-20 Thread Scott Kruger


Separate tests require separate names.  Requiring the suffix to be in
the tests is not a bug or a limitation, but rather a requirement.
This does require an `output_file:` at the testset level.

Scott

On 2021-08-20 12:57, Pierre Jolivet did write:
> That’s a long-standing issue that I personally bypass by setting the suffix 
> in the test instead of at the testset scope.
> But maybe that’s not working for you either?
> 
> Thanks,
> Pierre
> 
> > On 20 Aug 2021, at 12:43 PM, Stefano Zampini  
> > wrote:
> > 
> > Scott
> > 
> > This test is specified as
> > 
> >testset:
> >   suffix: expl
> >   nsize: {{1 2}}
> >   filter: grep -v "MPI processes" | grep -v " type:" | grep -v "Mat 
> > Object"
> >   args: -ksp_converged_reason -view_explicit_mat -pc_type none 
> > -ksp_type {{cg gmres}}
> >   test:
> > args: -mat_type aij
> >   test:
> > requires: hypre
> > args: -mat_type hypre
> > 
> > It triggers an error here https://gitlab.com/petsc/petsc/-/jobs/1519558357 
> > <https://gitlab.com/petsc/petsc/-/jobs/1519558357>, with a  CI 
> > configuration without hypre.
> > Other testset instances work fine if any “requires:" is both specified in 
> > the testset body and in the specific test.
> > Is this supposed to work?
> > 
> > Thanks
> > Stefano
> 



Re: [petsc-dev] Kokkos make error on Spock

2021-07-20 Thread Scott Kruger
Cray puts the runtime somewhere and then uses their library to put it
in.

I should have mentioned this earlier, but we have
config/examples/arch-olcf-frontier-hip-beta.py

for trying to keep up with Cray's default hip, but I think it's
different than what Junchao does for Kokkos which is why I didn't
mention it earlier.

Scott


On 2021-07-20 11:26, Mark Adams did write:
> OK, it looks like this flag caused the problem and does not seem to be
> necessary:
> 
> '--LDFLAGS=-L'+os.environ['ROCM_PATH']+'/lib -lhsa-runtime64',
> 
> On Tue, Jul 20, 2021 at 9:00 AM Mark Adams  wrote:
> 
> >
> >
> > On Mon, Jul 19, 2021 at 6:41 PM Scott Kruger  wrote:
> >
> >>
> >> Mark,
> >>
> >> On tulip, things with fortran went wonky when a `-fast` flag snuck into
> >> the flags (snuck in being me copying a previous configure file and not
> >> questioning the flags).  The reason is that for clang, `-fast` implies
> >> link time optimization (lto) for C/C++ code, but flang doesn't support
> >> lto so things got weird.  I suspect that gfortran does not either (but
> >> perhaps the real question is why not use flang?)
> >>
> >
> > It does work with GNU. I am doing this for an application and I want to
> > avoid dictating the program env.
> >
> > I'm (re)trying with -O0 just to check.
> >
> >
> >>
> >> I look at configure and I don't see anything in your flags that would
> >> trigger lto, but as Junchao says, it might be picking something up from
> >> Kokkos, so perhaps this is the issue.
> >>
> >>
> > The configure files that I sent did not have Kokkos. They were stripped
> > down.
> >
> > I have asked ORNL, but don't expect help.
> > The person (Matt) that is helping me for other things is very helpful so
> > perhaps he can come up with some ideas.
> > Otherwise I will tell the app that they need to use GNU for now.
> >
> > Thanks,
> > Mark
> >
> >
> >
> >> Scott
> >>
> >>
> >> On 2021-07-19 07:19, Mark Adams did write:
> >> > Thanks, but this happens w/o Kokkos.
> >> > I've stripped this down and attached good/bad logs without/with Fortran
> >> > bindings.
> >> > Hope this helps,
> >> > Thanks again,
> >> > Mark
> >> >
> >> > On Sun, Jul 18, 2021 at 12:00 PM Stefano Zampini <
> >> stefano.zamp...@gmail.com>
> >> > wrote:
> >> >
> >> > > This is probably kokkos pulling in the dependencies where compiling
> >> kokkos
> >> > > source within PETSc.
> >> > >
> >> > > Il Dom 18 Lug 2021, 16:29 Mark Adams  ha scritto:
> >> > >
> >> > >> Whoops, this error was just from not telling gfortran to allow long
> >> lines.
> >> > >>
> >> > >> Anway, I did find that the when fortran bindings are enabled this OMP
> >> > >> thing gets into the library. No idea how. I'll askm ORNL.
> >> > >>
> >> > >> 10:26 jczhang/fix-cray-mpicxx-includes/main=
> >> > >> /gpfs/alpine/csc314/scratch/adams/petsc2$ nm
> >> > >>
> >> /gpfs/alpine/csc314/scratch/adams/petsc2/arch-spock-opt-cray-kokkos/lib/libpetsc.so
> >> > >> |g offload
> >> > >>  U .omp_offloading.img_cache.cray_amdgcn-amd-amdhsa
> >> > >>  U .omp_offloading.img_size.cray_amdgcn-amd-amdhsa
> >> > >>  U .omp_offloading.img_start.cray_amdgcn-amd-amdhsa
> >> > >> 01d457b0 T vecgetoffloadmask_
> >> > >>
> >> > >>
> >> > >> On Sun, Jul 18, 2021 at 8:43 AM Mark Adams  wrote:
> >> > >>
> >> > >>> Ah, your test was not on Spock ... I have something working but
> >> this is
> >> > >>> strange.
> >> > >>>
> >> > >>> I switched to the GNU ProgEnv. and it passed the Fortran test in
> >> > >>> 'check', but this failed:
> >> > >>>
> >> > >>>
> >> > >>>
> >> > >>>
> >> > >>>
> >> > >>>
> >> > >>>
> >> > >>>
> >> > >>>
> >> > >>> *08:26 2 jczhang/fix-cray-mpicxx-includes/main=
> >> > >>> /gpfs/alpine/csc314/scratch/adams/petsc/src/snes/tutori

Re: [petsc-dev] Kokkos make error on Spock

2021-07-19 Thread Scott Kruger
, on non-Kokkos C tests, is fixed by
> >>>>>>>> turning the fortran bindings off:
> >>>>>>>>
> >>>>>>>> ld.lld: error:
> >>>>>>>> /gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray-kokkos/lib/libpetsc.so:
> >>>>>>>> undefined reference to 
> >>>>>>>> .omp_offloading.img_start.cray_amdgcn-amd-amdhsa
> >>>>>>>> [--no-allow-shlib-undefined]
> >>>>>>>> ld.lld: error:
> >>>>>>>> /gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray-kokkos/lib/libpetsc.so:
> >>>>>>>> undefined reference to 
> >>>>>>>> .omp_offloading.img_size.cray_amdgcn-amd-amdhsa
> >>>>>>>> [--no-allow-shlib-undefined]
> >>>>>>>> ld.lld: error:
> >>>>>>>> /gpfs/alpine/phy122/proj-shared/spock/petsc/current/arch-opt-cray-kokkos/lib/libpetsc.so:
> >>>>>>>> undefined

Re: [petsc-dev] Auto-testing Failure?

2021-06-03 Thread Scott Kruger


It's a type of special filter:

   test:
      filter: Error: true

If you look in `src/sys/tests` there are a couple examples of this.

The `Error:` enables the ignoring of the error code.

Scott


On 2021-06-03 09:54, Jacob Faibussowitsch did write:
> Hello All,
> 
> All of our unit tests look for some kind of positive result, but is there any 
> established way in the harness to test for failure of a particular kind?
> 
> Best regards,
> 
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
> 



Re: [petsc-dev] git worktree

2021-05-19 Thread Scott Kruger
On 2021-05-19 08:36, Patrick Sanan did write:
> Cool - I didn't know about this approach - If you still have your experiments 
> sitting around, can you put numbers on what kind of space savings are we 
> talking about vs the dumb approach (I have an independent clone for every 
> branch I'm interested in working on simultaneously)?

The basic difference is in the `.git` directory.  Currently a full clone
of petsc has a >300MB .git directory for a ~125MB source directory,
while a worktree has a <1MB .git file, even smaller than the lightweight
clone (by a negligible amount).

The ability to have `git branch` show not only all of your local
branches but which ones are in a separate directory likely seems like a
mild difference, but I find it the main feature I like as it helps
remind me of what I have no matter what directory I'm working in.  

Scott

P.S.  While looking at what was taking up space in the source tree, it
seems like we could save around 10% just by deleting the .eps files in
the tau documentation images directory.
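
For the curious, a quick way to put a number on that from the source root
(a sketch, not a PETSc tool):

```shell
# Sum the on-disk size of every .eps file under the current directory;
# prints 0 KB when there are none.
total_kb=$(find . -name '*.eps' -exec du -k {} + 2>/dev/null \
           | awk '{s+=$1} END {print s+0}')
echo "total .eps size: ${total_kb} KB"
```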

> 
> @Barry - thanks for the reminder about that script - even if I don't use it 
> regularly it's good to know it's there to raid in the future when I'm pushed 
> in desperation to start scripting things. 
> 
> Re the related shallow/"blobless" clone stuff I was posting about [1] - it's 
> fun and work-adjacent (hence in #random) to read about and good to be able to 
> pull out of your pocket when some pathological repo comes along, but the 
> boring truth is that because it's another syntax to remember (or script) and 
> there's a minor inconvenience in the usage (I don't like the way it behaves 
> when you need to fetch something missing, and there's no internet 
> connection), I'll likely never use the feature in my normal workflow. The 
> robustness, simplicity, and google-ability of the dumb way are too attractive!
> 
> 
> [1] 
> https://github.blog/2020-12-21-get-up-to-speed-with-partial-clone-and-shallow-clone/
>  
> <https://github.blog/2020-12-21-get-up-to-speed-with-partial-clone-and-shallow-clone/>
>  
> 
> 
> > Am 19.05.2021 um 05:54 schrieb Scott Kruger :
> > 
> > 
> > 
> > Ah.  I remember your email about it, and I even have it checked out.
> > I didn't get it at the time, but necessity is not only the mother of
> > invention, but of learning.
> > 
> > Scott
> > 
> > On 2021-05-18 18:39, Barry Smith did write:
> >> 
> >>  Scott,
> >> 
> >>My solution to working with multiple PETSc branches without the 
> >> agonizing pain is g...@gitlab.com:petsc/petscgitbash.git 
> >> 
> >>One could argue it is too particular to Barry's specific workflow but 
> >> perhaps it has ideas/code that can be stolen for others. It could also 
> >> potentially be done using the gitlab python bindings and thus remove the 
> >> direct use of the rest full interface.  I have been using it for about a 
> >> year and a half and probably for about six months it has been pretty 
> >> robust and stable. A reminder of its approach
> >> 
> >> #  An alias for git that manages working with multiple branches of PETSc 
> >> from the command line
> >> #This is specific to PETSc and not useful for any other respositories
> >> #
> >> #Replaces some actions that normally require cut-and-paste and/or 
> >> (manually) opening the browser to gitlab.com
> >> #
> >> #+ Sets the PETSC_ARCH based on the branch name
> >> #+ Preserves compiled code associated with the branch checked out when 
> >> changing branches
> >> #+ Updates lib/petsc/conf/petscvariables with the branch values so, 
> >> for example, you can compile in Emacs without knowing the PETSC_ARCH in 
> >> Emacs
> >> #+ Creates new branches with the name 
> >> ${PETSC_GIT_BRANCH_PREFIX}/DATE/yourspecificbranchname
> >> #+ Adds /release to branch name if created from release branch
> >> #+ Can checkout branches based on a partial branch name, if multiple 
> >> branches contain the string it lists the possibilites
> >> #+ Submits branches to pipeline testing from the command line
> >> #+ Checks the current branches latest pipeline test results (and 
> >> optionally opens the browser to the pipeline)
> >> #+ Opens new or current MR without cut and paste from the branches
> >> #
> >> #Oana suggested the idea to save waiting for code to recompile after 
> >> changing branches and the use of touch
> >> #to force code to not get recompiled unnecessarily. This inspired this 
> >> 

Re: [petsc-dev] git worktree

2021-05-18 Thread Scott Kruger



Ah.  I remember your email about it, and I even have it checked out.
I didn't get it at the time, but necessity is not only the mother of
invention, but of learning.

Scott

On 2021-05-18 18:39, Barry Smith did write:
> 
>   Scott,
> 
> My solution to working with multiple PETSc branches without the agonizing 
> pain is g...@gitlab.com:petsc/petscgitbash.git 
> 
> One could argue it is too particular to Barry's specific workflow but 
> perhaps it has ideas/code that can be stolen for others. It could also 
> potentially be done using the gitlab python bindings and thus remove the 
> direct use of the rest full interface.  I have been using it for about a year 
> and a half and probably for about six months it has been pretty robust and 
> stable. A reminder of its approach
> 
> #  An alias for git that manages working with multiple branches of PETSc from 
> the command line
> #This is specific to PETSc and not useful for any other respositories
> #
> #Replaces some actions that normally require cut-and-paste and/or 
> (manually) opening the browser to gitlab.com
> #
> #+ Sets the PETSC_ARCH based on the branch name
> #+ Preserves compiled code associated with the branch checked out when 
> changing branches
> #+ Updates lib/petsc/conf/petscvariables with the branch values so, for 
> example, you can compile in Emacs without knowing the PETSC_ARCH in Emacs
> #+ Creates new branches with the name 
> ${PETSC_GIT_BRANCH_PREFIX}/DATE/yourspecificbranchname
> #+ Adds /release to branch name if created from release branch
> #+ Can checkout branches based on a partial branch name, if multiple 
> branches contain the string it lists the possibilites
> #+ Submits branches to pipeline testing from the command line
> #+ Checks the current branches latest pipeline test results (and 
> optionally opens the browser to the pipeline)
> #+ Opens new or current MR without cut and paste from the branches
> #
> #Oana suggested the idea to save waiting for code to recompile after 
> changing branches and the use of touch
> #to force code to not get recompiled unnecessarily. This inspired this 
> script which then grew uncontrollably.
> #
> #Does NOT change the source code in any way, only touches the object files
> #
> #Does not currently have a mechanism for multiple PETSC_ARCH for a single 
> branch
> #
> #Requires git higher than 1.8  TODO: add a check for this
> #
> #  Usage:
> # git checkout partialname
> # git checkout -  check out the last 
> branch you were on
> # git checkout -b newbranchname [rootbranch] [message] adds 
> ${PETSC_GIT_BRANCH_PREFIX}, date, and /release (when needed) to new base 
> branch name
> # The message can contain 
> what the branch is for and who inspired it
> # git checkout -b newbranchname [main or release]
> # git pl   [partialname]  run a GitLab pipeline
> # git cpl  [-show] [partialname]  check on status of 
> pipeline
> # git mr [-f] [partialname]   open new or current MR 
> for current branch, -f allows MR without first submitting pipeline
> # git branch -D[D] [partialname]  deletes branch you may 
> be currently in, extra D deletes remote also
> # git rebase [partialname]pulls main or release 
> as appropriate and then rebases against it
> # git brancheslists branches in MR, 
> in MR as WIP, tested but not in MR and not merged in main with pipeline 
> results
> # git push [-f] [partialname] pushes branch
> # git fixup   commit changes and 
> rebase as fixup in the current branch and force push resul
> # git mrfixup rebases branch as fixup 
> to remove all commits applied by MR with Apply suggestion
> # git cherry newbranchname [release]  removes the most recent 
> commit from the current branch and puts it in a new branch off of main [or 
> release]
> # git pop go to previous branch, 
> before git checkout (like - except handles multiple branch changes in the 
> script)
> # git diffdo git diff HEAD~1
> #
> # cizappipeline   delete all the 
> blocked/manual MR pipelines (appears to only work for project owners?
> # cibuild  url [-show]login into the test 
> machine and build the PETSc version being tested
> #
> 

Re: [petsc-dev] git worktree

2021-05-18 Thread Scott Kruger



With later versions of git, `git branch` shows which branches are also
set up as worktrees, which I personally like quite a bit.   Also, thinking
through the worktree workflow has forced me to improve my own workflow
in terms of directory naming discipline (but that is probably just me
who had far too many clones around).

The lightweight clone is simpler, but I think that can be fixed.
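
A minimal sketch of the difference in a scratch repository (paths and the
branch name are made up):

```shell
# Scratch demo: a worktree shares the repository database instead of copying it.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "initial commit"
git worktree add ../repo-bugfix   # also creates branch 'repo-bugfix' from HEAD
[ -d .git ] && echo "main clone: .git is a full directory"
[ -f ../repo-bugfix/.git ] && echo "worktree: .git is a small pointer file"
git branch                        # newer git prefixes worktree branches with '+'
```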

Scott



On 2021-05-18 17:17, Jed Brown did write:
> I don't follow the advantage over lightweight clones, such as: 
> 
> $ git clone --branch=release --reference petsc gitlab:petsc/petsc 
> petsc-release
> Cloning into 'petsc-release'...
> remote: Enumerating objects: 261, done.
> remote: Counting objects: 100% (261/261), done.
> remote: Compressing objects: 100% (52/52), done.
> remote: Total 474 (delta 213), reused 239 (delta 207), pack-reused 213
> Receiving objects: 100% (474/474), 430.41 KiB | 1.96 MiB/s, done.
> Resolving deltas: 100% (296/296), completed with 110 local objects.
> Updating files: 100% (9721/9721), done.
> $ cd petsc-release/
> release= ~/petsc-release$ du -hs .git
> 2.3M.git
> 
> Scott Kruger  writes:
> 
> > Relatively recently, I learned about the git worktree feature and
> > attached my write-up of how I use it in petsc.   I have no idea whether
> > the response will be:
> >
> >This has been around since 2015 at least, and you're just now
> >finding out about it?  LOL!
> >
> > or:
> >
> >   I can't believe I never heard about it either!
> >
> >
> > Since Patrick recently talked about shallow clones with git on slack, I
> > suspect it's the latter (and I didn't hear about this feature from petsc
> > dev's which is where I typically gain all my git knowledge).  Basically,
> > if you have more than one clone of petsc on your drive, you'll be
> > interested in the worktree feature.
> >
> > The reason why the write-up is a bit long boils down to the fact that we
> > have the `/` in our branch names.  It makes things a bit more
> > complicated compared to my other projects (but is nice for the directory
> > structure).  I have not scripted away the complexity either -- I haven't
> > reached that level of annoyance.
> >
> > The reason why I just don't have the rst file as an MR, is because the
> > way I have it point to an existing branch seems cumbersome.  Perhaps a
> > git guru knows an easier way with some type of detached state or faster
> > way of getting the HEAD to point to the right sha in one go.  I'd be
> > very interested if someone knows a better method.
> >
> > Scott
> >
> >
> > -- 
> > Scott Kruger
> > Tech-X Corporation   kru...@txcorp.com
> > 5621 Arapahoe Ave, Suite A   Phone: (720) 466-3196
> > Boulder, CO 80303Fax:   (303) 448-7756
> >
> >
> > Working on multiple branches simultaneously
> > ===
> >
> > Our goal is to have a parallel structure of directories each with a 
> > different
> > branch.
> >
> > Let's start off with the basic structure::
> >
> > - ptroot
> >|- petsc  (main)
> >
> >
> > The petsc directory is the directory that comes from `git clone` and we 
> > have main as a general branch.  
> >
> > The simplest example is to do a quick bugfix in a separate worktree::
> >
> > git worktree add ../petsc-bugfix
> >
> > The output of this is::
> >
> > Preparing worktree (new branch 'petsc-bugfix')
> > Updating files: 100% (9829/9829), done.
> > HEAD is now at ...
> >
> > The directory is now this::
> >
> > - ptroot
> >|- petsc  (main)
> >|
> >|- petsc-bugfix  (petsc-bugfix)
> >
> > This is like a separate clone, but is more lightweight because it does not 
> > copy
> > over the `.git` directory (it has a `.git` file instead) and has advantages
> > because typing `git branch` shows information on all of the worktree's::
> >
> > * main
> > + petsc-bugfix
> >
> > where the `*` denotes the branch of the directory we are in and `+` denotes
> > other worktree branches (this appears to be a feature in newer versions of 
> > git).
> >
> >
> > The naming convention of a git branch in petsc is `developer/branch-name`; 
> > e.g.,
> > `scott/test-fix-reporting`.  The slash will introduce some wrinkles into the
> > normal worktree usage.   Let's try this::
> >
> > git worktree add ../scott/test-fix-reporting
> >
> > We now hav

[petsc-dev] git worktree

2021-05-18 Thread Scott Kruger


Relatively recently, I learned about the git worktree feature and
attached my write-up of how I use it in petsc.   I have no idea whether
the response will be:

   This has been around since 2015 at least, and you're just now
   finding out about it?  LOL!

or:

  I can't believe I never heard about it either!


Since Patrick recently talked about shallow clones with git on slack, I
suspect it's the latter (and I didn't hear about this feature from petsc
dev's which is where I typically gain all my git knowledge).  Basically,
if you have more than one clone of petsc on your drive, you'll be
interested in the worktree feature.

The reason the write-up is a bit long boils down to the fact that we
have the `/` in our branch names.  It makes things a bit more
complicated compared to my other projects (but is nice for the directory
structure).  I have not scripted away the complexity either -- I haven't
reached that level of annoyance.

The reason I don't just have the rst file as an MR is that the way I have it
point to an existing branch seems cumbersome.  Perhaps a
git guru knows an easier way with some type of detached state or faster
way of getting the HEAD to point to the right sha in one go.  I'd be
very interested if someone knows a better method.

Scott


-- 
Scott Kruger
Tech-X Corporation           kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 466-3196
Boulder, CO 80303            Fax:   (303) 448-7756


Working on multiple branches simultaneously
===========================================

Our goal is to have a parallel structure of directories, each with a different
branch.

Let's start off with the basic structure::

- ptroot
   |- petsc  (main)


The petsc directory is the one that comes from `git clone`, and main is checked
out as a general-purpose branch.

The simplest example is to do a quick bugfix in a separate worktree::

git worktree add ../petsc-bugfix

The output of this is::

Preparing worktree (new branch 'petsc-bugfix')
Updating files: 100% (9829/9829), done.
HEAD is now at ...

The directory is now this::

- ptroot
   |- petsc  (main)
   |
   |- petsc-bugfix  (petsc-bugfix)

This is like a separate clone, but more lightweight: it does not copy the
`.git` directory (it has a `.git` file instead), and typing `git branch` shows
information on all of the worktrees::

* main
+ petsc-bugfix

where the `*` denotes the branch of the directory we are in and `+` denotes
other worktree branches (this appears to be a feature in newer versions of git).
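The lightweight claim is easy to check in a throwaway repository (a sketch
assuming a reasonably recent git; `init -b` needs git >= 2.28, and the `demo`
names are made up for illustration):

```shell
# Throwaway demo: a worktree checkout gets a one-line .git *file* pointing back
# at the main repository, rather than a full copy of the .git directory.
tmp=$(mktemp -d)
git -C "$tmp" init -q -b main demo
git -C "$tmp/demo" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m init
git -C "$tmp/demo" worktree add -q ../demo-bugfix
cat "$tmp/demo-bugfix/.git"    # gitdir: .../demo/.git/worktrees/demo-bugfix
git -C "$tmp/demo" branch      # lists both main and demo-bugfix
```

The `gitdir:` line is the entire contents of the worktree's `.git` file.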


The naming convention of a git branch in petsc is `developer/branch-name`; e.g.,
`scott/test-fix-reporting`.  The slash will introduce some wrinkles into the
normal worktree usage.   Let's try this::

git worktree add ../scott/test-fix-reporting

We now have::

- ptroot
   |- petsc  (main)
   |
   |- petsc-bugfix  (petsc-bugfix)
   |
   |- scott
  |
  |- test-fix-reporting (test-fix-reporting)


which isn't *exactly* what we wanted.  Instead, we use the `-b` flag to get the
right branch name::

   git worktree add -b 'scott/test-fix-reporting' ../scott/test-fix-reporting
   cd ../scott/test-fix-reporting
   git branch --set-upstream-to=origin/scott/test-fix-reporting scott/test-fix-reporting

The last two steps avoid needing `--set-upstream` on the first `git push`; they
are not strictly necessary.

(Aside: `git worktree add` can take a third argument to give the branch name,
and many tutorials use that; however, that doesn't work with `/` in the name.
The documentation itself says that the argument is a `commit-ish`.  The `-b`
argument is needed for the PETSc naming convention.)

We now have::

- ptroot
   |- petsc  (main)
   |
   |- petsc-bugfix  (petsc-bugfix)
   |
   |- scott
  |
  |- test-fix-reporting (scott/test-fix-reporting)

which is what we wanted as `git branch` shows (again, assuming a newer version
of git)::

> git branch
+ main
+ petsc-bugfix
+ scott/test-fix-reporting

This provides a nicely organized structure.  
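Two housekeeping commands are worth knowing alongside this layout (hedged
sketch: `git worktree remove` needs git >= 2.17, and the repo/branch names
below are throwaway stand-ins, not real petsc checkouts):

```shell
# List every worktree (path, HEAD, branch), then remove one cleanly.
tmp=$(mktemp -d)
git -C "$tmp" init -q -b main petsc
git -C "$tmp/petsc" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m init
git -C "$tmp/petsc" worktree add -q -b 'scott/test-fix' ../scott/test-fix
git -C "$tmp/petsc" worktree list              # one line per worktree
git -C "$tmp/petsc" worktree remove ../scott/test-fix
git -C "$tmp/petsc" worktree list              # back to just the main checkout
```

Note that `worktree remove` deletes the checkout directory but keeps the branch.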

Tracking an existing remote branch
==================================

The above shows a worktree based on performing the equivalent of a
`git checkout -b` to start with a new branch.  Here, we show how to follow
an existing remote branch.
For the reasons given in the Aside above, our naming scheme makes this a bit
more complicated.  Here is what I have working::

   # Get version that matches remote branch
   git checkout barry/feature-pintogpu

   # Need to create worktree with different name at the remote branch to avoid conflicts
   git checkout -b temp

   # About to create a branch with the same name so delete
   git branch -D barry/feature-pintogpu
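For what it's worth, a candidate for the "easier way" Scott asks about in the
cover note is `git worktree add --track -b`, which checks out and tracks an
existing remote branch in one shot (this needs a reasonably recent git, 2.16+
if I recall correctly). A self-contained sketch, with throwaway repos standing
in for petsc and its origin:

```shell
# One-shot alternative: create a worktree that checks out an existing remote
# branch and sets up tracking, without the temp-branch dance.
tmp=$(mktemp -d)
git -C "$tmp" init -q -b main upstream
git -C "$tmp/upstream" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m init
git -C "$tmp/upstream" branch barry/feature-pintogpu
git clone -q "$tmp/upstream" "$tmp/petsc"
git -C "$tmp/petsc" worktree add -q --track -b barry/feature-pintogpu \
    ../barry/feature-pintogpu origin/barry/feature-pintogpu
git -C "$tmp/barry/feature-pintogpu" status -sb   # shows the tracking info
```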

Re: [petsc-dev] Listing failed tests

2021-05-17 Thread Scott Kruger
On 2021-05-17 16:20, Stefano Zampini did write:
> This was changed recently, I still don’t know why.

Developer 1:  This test output is too verbose.
Developer 2:  This test output is too terse.

I'll shoot for a Goldilocks MR.

Scott


> I also find very convenient to have the list of failing tests printed at the 
> end, instead of having to manually scroll up the output.
> For now, you can edit gmakefile.test and fix showreport variable
> 
>  showreport = "-s"
> 
> Which is now set only if you are running the entire testsuite, and not a 
> selected set of tests.
> 
> 
> 
> > On May 17, 2021, at 4:16 PM, Matthew Knepley  wrote:
> > 
> > I have looked at the test system documentation, but I cannot figure out how 
> > to make it list the failed tests at the end of a run. It used to do this by 
> > default.
> > 
> >   Thanks,
> > 
> > Matt
> > 
> > -- 
> > What most experimenters take for granted before they begin their 
> > experiments is infinitely more interesting than any results to which their 
> > experiments lead.
> > -- Norbert Wiener
> > 
> > https://www.cse.buffalo.edu/~knepley/
> 



Re: [petsc-dev] Strange issue with testsuite

2021-05-12 Thread Scott Kruger
It's something like this:

cd arch-debug-uni/tests/sys/tests/runex54_1_options_file-ex54options_1a_wrong/

cat  *.tmp   #  Does anything look fishy?
cat diff*.sh # see how diff test with filter is being setup by harness
./diff*.sh   # run the diff test


Hopefully you can see what the problem is in more detail.  These
error-catching tests are trickier to debug than the normal tests.

Scott

On 2021-05-12 16:48, Stefano Zampini did write:
> I'm currently trying to build PETSc on a NEC-SX. Apart from other issues,
> I'm facing the following issue with the testsuite.
> 
> [zampins@stork petsc]$ make test s='sys*ex54*1a*' V=1
> Using MAKEFLAGS: -- V=1 s=sys*ex54*1a*
> arch-debug-uni/tests/sys/tests/runex54_1_options_file-ex54options_1a_wrong.sh
>  -v
>  ok sys_tests-ex54_1_options_file-ex54options_1a_wrong #
> /home/zampins/src/petsc/lib/petsc/bin/petsc-mpiexec.uni  -n 1 ../ex54
> -options_left 0 -options_view -options_file ex54options_1a_wrong  2>&1 |
> cat > ex54_1_options_file-ex54options_1a_wrong.tmp
> not ok diff-sys_tests-ex54_1_options_file-ex54options_1a_wrong # Error
> code: 1
> # 1a2
> # > [0]PETSC ERROR:
> /home/zampins/src/petsc/arch-debug-uni/tests/sys/tests/runex54_1_options_file-ex54options_1a_wrong/../ex54
> on a arch-debug-uni named stork by zampins Wed May 12 16:45:46 2021
> 
> However, if I run the test directly, I think it produces the proper output
> to be filtered. How can I debug this?
> 
> [zampins@stork tests]$ ./ex54 -options_left 0 -options_view -options_file
> ex54options_1a_wrong
> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Invalid argument
> [0]PETSC ERROR: Unknown first token in options file ex54options_1a_wrong
> line 1: !
> [0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> [0]PETSC ERROR: Petsc Development GIT revision: v3.15.0-298-g2115278b7a
>  GIT Date: 2021-05-12 14:01:17 +0300
> [0]PETSC ERROR: ./ex54 on a arch-debug-uni named stork by zampins Wed May
> 12 16:46:21 2021
> [0]PETSC ERROR: Configure options --FC_LINKER_FLAGS="-Wl,-z -Wl,muldefs"
> --download-sowing-configure-arguments="CC=ncc CXX=nc++"
> --with-blaslapack-lib="[/opt/nec/ve/nlc/2.3.0/lib/liblapack.a,/opt/nec/ve/nlc/2.3.0/lib/libblas_sequential.a]"
> --with-cc=ncc --with-cxx=nc++ --with-debugging=1 --with-fc=nfort
> --with-mpi=0 --with-shared-ld=nld --with-shared-libraries=1
> PETSC_ARCH=arch-debug-uni
> [0]PETSC ERROR: #1 PetscOptionsInsertFilePetsc() at
> /home/zampins/src/petsc/src/sys/objects/options.c:537
> [0]PETSC ERROR: #2 PetscOptionsInsertFile() at
> /home/zampins/src/petsc/src/sys/objects/options.c:645
> [0]PETSC ERROR: #3 PetscOptionsInsertArgs() at
> /home/zampins/src/petsc/src/sys/objects/options.c:684
> [0]PETSC ERROR: #4 PetscOptionsInsert() at
> /home/zampins/src/petsc/src/sys/objects/options.c:907
> [0]PETSC ERROR: #5 PetscInitialize() at
> /home/zampins/src/petsc/src/sys/objects/pinit.c:1024
> 
> -- 
> Stefano



Re: [petsc-dev] empty space on left side of website pages

2021-04-26 Thread Scott Kruger



Rather than have us edit the CSS, perhaps just getting people to agree
to a different theme:
https://sphinx-themes.org/

I think alabaster, aiohttp, cloud_sptheme, ...  meet Barry's complaint.

There is a lot to like on the kotti_docs_theme for example although the
bar is on the right instead of the left.

Scott

On 2021-04-26 08:58, Patrick Sanan did write:
> As far as I know (which isn't very far, with web stuff), changing things on 
> that level requires somehow getting into CSS.
> 
> For instance, you can see what it looks like with other widths directly from 
> Firefox (fun, didn't know you could do this):
> - go to the page
> - hit F12
> - click around on the left to find the  that corresponds to the part you 
> care about
> - look in the middle column to find the piece of CSS that's controlling 
> things (here, something called .col-md-3)
> - edit the CSS - in attached screenshot I change the max width of that 
> sidebar to 5%.
> 
> But, I want to avoid having to do things on the level of CSS and HTML - I 
> think that should be done as a collective effort in maintaining the theme 
> (and Sphinx itself).
> If we really care enough about the width of that sidebar, we'll create a fork 
> of the theme, add a setting for it, and try to get it merged to the theme's 
> release branch.
> 
> 
> > > On Apr 23, 2021, at 11:12 PM, Barry Smith  wrote:
> > 
> > 
> > >Thanks. Even if we just leave it, is there a way to make it a little 
> > > "skinnier"? It seems very wide in my default browser.
> > 
> > 
> > 
> >> On Apr 23, 2021, at 1:08 PM, Patrick Sanan  wrote:
> >> 
> >> It is possible to put things there, as in this link which is both 
> >> documentation and example:
> >> https://pydata-sphinx-theme.readthedocs.io/en/latest/user_guide/sections.html#the-left-sidebar
> >> 
> >> Other projects using this theme have the mostly-empty left sidebar:
> >> https://numpy.org/doc/stable/
> >> https://jupyter.readthedocs.io/en/latest/
> >> 
> >> (They also have fancier landing pages, though, which we have been 
> >> discussing).
> >> 
> >> 
> >> It goes away on mobile devices or small windows, at least.
> >> 
> >> 
> >>> On Apr 23, 2021, at 7:21 PM, Barry Smith  wrote:
> >>> 
> >>> 
> >>>   There is a lot of empty space on the left side of the website pages; 
> >>> under the Search slot.  Does this empty left side need to be so large, 
> >>> seems to waste a lot of the screen?
> >>> 
> >>>   Barry
> >>> 
> >> 
> > 
> 



Re: [petsc-dev] commenting on/asking questions on documentation pages

2021-04-23 Thread Scott Kruger



I do not know much about these:
https://sphinx-comments.readthedocs.io/en/latest/

I'm still not sure a link to a gitlab "New Issue" wouldn't be superior
however.

Scott

On 2021-04-23 12:16, Barry Smith did write:
> 
>Maybe the "edit" code could be copied and modified to go to a new issue 
> window (and stick the URL into the issue) and then let people type. Could it 
> even be clever enough to stick the issue number into the displayed webpage as 
> clickable. Not perfect, but at least we would get notifications for each 
> comment. 
> 
>Then the page might have a sidebar like
> 
>   Edit this page
>   Comment, ask question on page
> 
>   Comment !2345
>   Comment !4555
> 
>Shouldn't Sphinx have such a beast? 
> 
> 
>Barry
> 
> 
> 
> > On Apr 23, 2021, at 8:50 AM, Patrick Sanan  wrote:
> > 
> > 
> > 
> >> On Apr 23, 2021, at 4:45 AM, Barry Smith  wrote:
> >> 
> >> 
> >>I can edit documentation pages directly from the page now, this is 
> >> totally awesome but I see no button to comment or ask questions on a page. 
> >> 
> >>I think every page should, by the edit button, have a "Comment, ask 
> >> questions" button that anyone can click on to make a comment or ask a 
> >> question about the page. It would be super fantastic if they could refer 
> >> to particular people in their comments but perhaps that is too difficult. 
> > 
> > 
> >> For example I am looking at 
> >> https://petsc.gitlab.io/-/petsc/-/jobs/1204309863/artifacts/public/overview/features.html
> >>  and I immediately want to ask 
> >> 
> >> Where is the TS solver table in the list of solver tables?
> >> 
> >> Barry
> >> 
> >> Note the pre-historic PETSc html manual pages which everyone despises 
> >> have a button in the upper right hand corner to report problems/ask 
> >> questions so what I am asking for is not unprecedented. Our old code uses 
> >> email which is not ideal but not ideal is better than not. Surely modern 
> >> systems like Sphinx have this support built in?
> >> 
> > 
> > I think the intended way to do this with our Sphinx template would be to 
> > add custom HTML templates, which can then be added to the sidebar.
> > https://pydata-sphinx-theme.readthedocs.io/en/latest/user_guide/sections.html#add-your-own-html-templates-to-theme-sections
> > 
> >  I'm worried that this involves too much scripting and customization, 
> > though. For example here's the way the "edit this page" link is done:
> > https://github.com/pydata/pydata-sphinx-theme/blob/master/pydata_sphinx_theme/_templates/edit-this-page.html
> > 
> > Doesn't seem too bad but it relies on a pretty big chunk of Python as well:
> > https://github.com/pydata/pydata-sphinx-theme/blob/master/pydata_sphinx_theme/__init__.py#L438
> > 
> > 
> > 
> > I'll open an issue on this, though, since it's entirely possible that 
> > someone else (or me, later) will think of a simple way to make this work, 
> > as it would indeed be a great feature.
> 



Re: [petsc-dev] -with-kokkos-cuda-arch=AMPERE80 nonsense

2021-04-07 Thread Scott Kruger
On 2021-04-06 14:44, Matthew Knepley did write:
> > > Does spack have some magic for this we could use?
> > >
> >
> > spack developed the archspec repo to abstract all of these issues:
> > https://github.com/archspec/archspec
> 
> 
> I do not love it. Besides the actual code (you can always complain about
> code),
> they do not really do any tests. They go look in a few places that data
> should be.
> We can do the same thing in probably 10x less code. It would be great to
> actually
> test the hardware to verify.
> 

My impression is that the current project is languishing because they
are focusing on the spack side right now.   But if this is the project
that is the ECP-anointed solution, then it has the best chance of
succeeding through sheer resources.   

The thing I like the best is that having a stand-alone project to handle
these issues is a real forehead-slapper (i.e., "why didn't I think of
that?!").  Todd Gamblin has stated that the goal is to allow vendors to
contribute because it will be in their interest to contribute.  This
should have been done years ago.

Regarding whether we could do better:  Now would actually be a good time
to contribute while the project is young, but I don't have the time
(like everyone else which is why this is a perennial problem).   It
would also be a good time to create a separate project if this one is
too annoying for folks.  In general, like spack, they have done a good
job on the interface so that part is important.

Scott




>   Thanks,
> 
>  Matt
> 
> 
> > This is a *great* idea and eventually BuildSystem should incorporate it as
> > the standard way of doing things; however, it is been focused mostly on
> > the CPU issues, and is still under active development (my understanding
> > is that the pulling it out of spack and getting those interop issues
> > sorted out is tangled up in how spack handles dependencies and
> > compilers).  It'd be nice if someone would go in and port the Kokkos gpu
> > mappings to archspec as there is some great knowledge on these mapping
> > buried in the Kokkos build system (not volunteering); i.e., translating
> > that webpage to some real code (even if it is in make) is valuable.
> >
> > TL;DR:  It's a known problem with currently no good solution AFAIK.
> > Waiting until archspec gets further along seems like the best solution.
> >
> > Scott
> >
> > P.S. ROCm has rocminfo which also doesn't solve the problem but is at
> > least sane.
> >
> 
> 
> -- 
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/



Re: [petsc-dev] -with-kokkos-cuda-arch=AMPERE80 nonsense

2021-04-06 Thread Scott Kruger


I wrote and sent this yesterday but am having some strange mailing issues.

On 2021-04-03 22:42, Barry Smith did write:
> 
>   It would be very nice to NOT require PETSc users to provide this flag, how 
> the heck will they know what it should be when we cannot automate it 
> ourselves? 
> 
>   Any ideas of how this can be determined based on the current system? NVIDIA 
> does not help since these "advertising" names don't seem to trivially map to 
> information you can get from a particular GPU when you logged into it. For 
> example nvidia-smi doesn't use these names directly. Is there some mapping 
> from nvidia-smi  to these names we could use? If we are serious about having 
> a non-trivial number of users utilizing GPUs, which we need to be for future, 
> we cannot have this absurd demands in our installation process. 

The mapping of the Nvidia card to the gencodes and cuda arch is one of
those annoyances that is so ridiculous it is hard to believe.
The best reference I have found is this:
https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/

To this end, the fact that Kokkos provides a mapping from colloquial
card name to gencode/arch is a real benefit and useful.  The problem is
that this mapping is buried in their build system and lacks
introspection.
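As a concrete illustration of how small the missing piece is, the
colloquial-name-to-arch mapping is just a lookup table. A sketch with a handful
of well-known entries; anything beyond these should be checked against the page
above, since the table here is deliberately incomplete and illustrative:

```shell
# Tiny colloquial-name -> CUDA arch lookup; entries are the widely documented
# compute capabilities (K80=3.7, P100=6.0, V100=7.0, A100=8.0, RTX 3090=8.6).
cuda_arch_of() {
  case "$1" in
    K80)        echo sm_37 ;;
    P100)       echo sm_60 ;;
    V100)       echo sm_70 ;;
    A100)       echo sm_80 ;;   # Kokkos spells this AMPERE80
    "RTX 3090") echo sm_86 ;;
    *)          echo unknown ;;
  esac
}
cuda_arch_of A100    # -> sm_80
```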

> 
>   Barry
> 
> Does spack have some magic for this we could use?
> 

spack developed the archspec repo to abstract all of these issues:
https://github.com/archspec/archspec

This is a *great* idea and eventually BuildSystem should incorporate it as
the standard way of doing things; however, it has been focused mostly on
the CPU issues, and is still under active development (my understanding
is that pulling it out of spack and getting those interop issues
sorted out is tangled up in how spack handles dependencies and
compilers).  It'd be nice if someone would go in and port the Kokkos gpu
mappings to archspec as there is some great knowledge on these mapping
buried in the Kokkos build system (not volunteering); i.e., translating
that webpage to some real code (even if it is in make) is valuable.

TL;DR:  It's a known problem with currently no good solution AFAIK.
Waiting until archspec gets further along seems like the best solution.

Scott

P.S. ROCm has rocminfo which also doesn't solve the problem but is at
least sane.


Re: [petsc-dev] reproducing crashes in the test harness

2021-03-30 Thread Scott Kruger



If you run the full test suite and have failures, you get something like
this:

#
#  To rerun failed tests:
# make -f gmakefile test test-fail=1


This was turned off for non-full-test cases (search and query cases) in
the MR that reduced test harness verbosity.  I think what you want is to
turn it back on for all cases with this message:

#
#  To rerun all failed tests in debugger:
# make -f gmakefile test test-fail=1 DEBUG=1


Is that correct?

This is a 2 line fix in `config/report_tests.py` if I understand this
correctly.

Scott

P.S.   Please don't forget the help target.  It's actually helpful:
   make -f gmakefile.test help

On 2021-03-29 23:25, Barry Smith did write:
> 
> # FAILED snes_tutorials-ex12_quad_hpddm_reuse_threshold 
> snes_tutorials-ex12_p4est_nc_singular_2d_hpddm snes_tutorials-ex56_hpddm 
> snes_tutorials-ex12_quad_hpddm_reuse_threshold_baij sys_tests-ex53_2 
> snes_tutorials-ex12_quad_hpddm_reuse_baij 
> snes_tutorials-ex12_quad_hpddm_reuse 
> snes_tutorials-ex12_p4est_singular_2d_hpddm 
> snes_tutorials-ex12_tri_parmetis_hpddm 
> snes_tutorials-ex12_quad_singular_hpddm sys_tests-ex26_1 sys_tests-ex26_2 
> snes_tutorials-ex12_tri_parmetis_hpddm_baij 
> snes_tutorials-ex12_tri_hpddm_reuse_baij snes_tutorials-ex12_tri_hpddm_reus
> 
> Scott,
> 
>   Any thoughts on how the test harness could tell the developer exactly how 
> to reproduce a problematic cases in the debugger without them digging around 
> in the code to check arguments etc.
> 
>   So for example "Run: mpiexec -n N ./xxx args -start_in_debugger" to 
> reproduce this problem? Then one could just cut and paste and be debugging 
> away.
> 
>   Thanks
> 
>   Barry
> 



Re: [petsc-dev] Test harness + PETSC_HAVE_DEFINED

2021-03-22 Thread Scott Kruger


It keys off of $PETSC_ARCH/include/petscconf.h, so it is a configure/build-time
setting, not a runtime one.

Scott



On 2021-03-22 21:40, Pierre Jolivet did write:
> 
> 
> > On 22 Mar 2021, at 9:24 PM, Pierre Jolivet  wrote:
> > 
> > Hello,
> > My make check is skipping tests which have a “requires: 
> > defined(PETSC_USE_SHARED_LIBRARIES)” with the message "SKIP 
> > PETSC_HAVE_DEFINED(PETSC_USE_SHARED_LIBRARIES) requirement not met" even 
> > though in configure.log I have:
> > 2021-03-22T16:17:43.1626452Z #define PETSC_USE_SHARED_LIBRARIES 1
> > Is this expected?
> 
> Sorry for the double send, I’m now realizing it should read define, not 
> defined.
> 
> > Here is an ever more puzzling behavior.
> > https://gitlab.com/petsc/petsc/-/jobs/1118286502/artifacts/browse/arch-ci-freebsd-pkgs-opt/tests/
> >  ok ksp_ksp_tests-ex6_3_skip_pipegcr # SKIP Null requirement not met: 
> > define(PETSC_USE_AVX512_KERNELS)
> >   #PIPEGCR generates nans on linux-knl
> >   test:
> > requires: !define(PETSC_USE_AVX512_KERNELS)
> > suffix: 3_skip_pipegcr
> 
> I’ve also realized that PETSC_USE_AVX512_KERNELS is defined on that worker, 
> which seems a little weird to me (is it defined for all workers, even those 
> which are not AVX512-capable?).
> 
> Thanks,
> Pierre
> 
> > Why is this test skipped (on a worker other than linux-knl)?
> > Thanks,
> > Pierre
> 



Re: [petsc-dev] 32 bit compilers and PETSc

2021-03-04 Thread Scott Kruger
On 2021-03-04 08:58, Satish Balay via petsc-dev did write:
> On Wed, 3 Mar 2021, Barry Smith wrote:
> > 
> >Can we make ./configure ban 32 bit compilers unless a special flag is 
> > used? And just have one CI test that uses 32 bit where we turn off examples 
> > that overflow 32 bits?
> 
> We could add another linux test where 32bit part is more obvious.
> 
> And we have "requires: defined(FLAG)" but not sure if we can check for 
> "PETSC_SIZEOF_VOID_P 8" this way. Perhaps we can add to configure:
> 
> requires: defined(PETSC_USING_64BIT_PTR)
> or
> requires: !defined(PETSC_USING_32BIT_PTR)

This sounds like a lot of work just to figure out which tests overflow,
and then clutter up the tests just to fix bad compilers.  For example,
does the "medium" test matrix cause overflow?  Perhaps we could just
turn off any external file reading?

Scott


> 
> Satish
> 
> ---
> diff --git a/config/BuildSystem/config/types.py 
> b/config/BuildSystem/config/types.py
> index 39eda33099..d35adae503 100644
> --- a/config/BuildSystem/config/types.py
> +++ b/config/BuildSystem/config/types.py
> @@ -268,6 +268,8 @@ char petsc_max_path_len[] = xstr(PETSC_MAX_PATH_LEN);
>   'enum': (4, 8),
>   'size_t': (8, 4)}.items():
>self.executeTest(self.checkSizeof, args=[t, sizes])
> +if self.sizes['void-p'] == 8:
> +  self.addDefine('USING_64BIT_PTR',1)
>  self.executeTest(self.checkVisibility)
>  self.executeTest(self.checkMaxPathLen)
>  return
> 
> 
> ./configure CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 --with-mpi=0 && make && 
> make check
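For a quick sanity check of what width a machine's userland defaults to (a
rough stand-in only; configure's actual test compiles a program with the
selected compiler and measures `sizeof(void *)` directly, which this does not
do):

```shell
# LONG_BIT matches the pointer width on the usual LP64/ILP32 platforms
# (not on LLP64 systems such as 64-bit Windows).
getconf LONG_BIT    # 64 on a 64-bit userland, 32 on a 32-bit one
```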



Re: [petsc-dev] Comparing binary output files in test harness

2021-02-19 Thread Scott Kruger



Could you file a gitlab issue on this?

Offhand, I think it'd be best to modify petscdiff to check the suffix
and then pass it off to exodiff.  This would be more easily extended to
other file types (e.g., h5diff).  
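The dispatch itself would be a few lines of shell. A hypothetical sketch: the
`select_diff_tool` name and the wrapper shape are made up; `exodiff` and
`h5diff` are the real tools, but how petscdiff would actually invoke them is
not shown here.

```shell
# Hypothetical suffix-based dispatch a petscdiff-style wrapper could perform.
select_diff_tool() {
  case "$1" in
    *.exo)       echo exodiff ;;
    *.h5|*.hdf5) echo h5diff ;;
    *)           echo diff ;;
  esac
}
select_diff_tool output1.exo    # -> exodiff
```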

Scott

On 2021-02-19 04:59, Blaise A Bourdin did write:
> Hi,
> 
> I would like to write better tests for my exodus I/O functions and compare 
> the binary files written to the drive instead of the output of the examples.
> For instance, would it be possible to do the following:
>   ex26 -i  -o output1.exo; mpirun -np 2 ex26 -i  
> -o output2.exo; exodiff output1.exo output2.exo
> And check the result of exodiff, or run exodiff between output1.exo or 
> output2.exo and a stored binary result?
> 
> Regards,
> Blaise
> 
> -- 
> A.K. & Shirley Barton Professor of  Mathematics
> Adjunct Professor of Mechanical Engineering
> Adjunct of the Center for Computation & Technology
> Louisiana State University, Lockett Hall Room 344, Baton Rouge, LA 70803, USA
> Tel. +1 (225) 578 1612, Fax  +1 (225) 578 4276 Web 
> http://www.math.lsu.edu/~bourdin
> 



Re: [petsc-dev] "Search" does not work in the testing system?

2021-01-29 Thread Scott Kruger



Run `config/gmakegentest.py` if your PETSC_DIR/PETSC_ARCH environment
variables are set; otherwise, use the --petsc-dir and --petsc-arch arguments.

This is likely TMI, but this script generates the $PETSC_ARCH/tests/testfiles
file that is used by gmakefile to understand the dependencies.  When you have
a new file, you need to let the make system know about that new dependency.

Scott


On 1/29/21 10:46 AM, Fande Kong wrote:

Thanks so much!

It worked well with existing examples. If I add a new example, this
does not work anymore. What extra step do I need to take?


Fande

On Wed, Jan 27, 2021 at 4:50 PM Scott Kruger  wrote:




You can change the 'test' target to 'print-test' to see the actual
targets you'll be testing.

You can also just change your search string to
src/snes/tutorials/ex1.c to grab all tests associated with ex1.c.

Scott

On 1/27/21 4:45 PM, Zhang, Hong via petsc-dev wrote:

make PETSC_DIR=/Users/kongf/projects/moose4/petsc 
PETSC_ARCH=arch-darwin-c-debug -f gmakefile test search='snes_tutorials-ex1_*'

or

make PETSC_DIR=/Users/kongf/projects/moose4/petsc 
PETSC_ARCH=arch-darwin-c-debug -f gmakefile test 
globsearch='snes_tutorials-ex1_*’

Hong (Mr.)


On Jan 27, 2021, at 5:21 PM, Fande Kong  wrote:

Hi All,

I want to run one particular SNES test using the following command-line:

"make PETSC_DIR=/Users/kongf/projects/moose4/petsc 
PETSC_ARCH=arch-darwin-c-debug -f gmakefile test search='snes_tutorials-ex1'"

I got the following output:

"Using MAKEFLAGS: search=snes_tutorials-ex1% PETSC_ARCH=arch-darwin-c-debug 
PETSC_DIR=/Users/kongf/projects/moose4/petsc"

But I did not see any useful test information.

Could you kindly let me know what I did wrong?

Thanks,

Fande





Re: [petsc-dev] "Search" does not work in the testing system?

2021-01-27 Thread Scott Kruger



You can change the 'test' target to 'print-test' to see the actual 
targets you'll be testing.


You can also just change your search string to src/snes/tutorials/ex1.c 
to grab all tests associated with ex1.c.


Scott

On 1/27/21 4:45 PM, Zhang, Hong via petsc-dev wrote:

make PETSC_DIR=/Users/kongf/projects/moose4/petsc 
PETSC_ARCH=arch-darwin-c-debug -f gmakefile test search='snes_tutorials-ex1_*'

or

make PETSC_DIR=/Users/kongf/projects/moose4/petsc 
PETSC_ARCH=arch-darwin-c-debug -f gmakefile test 
globsearch='snes_tutorials-ex1_*’

Hong (Mr.)


On Jan 27, 2021, at 5:21 PM, Fande Kong  wrote:

Hi All,

I want to run one particular SNES test using the following command-line:

"make PETSC_DIR=/Users/kongf/projects/moose4/petsc PETSC_ARCH=arch-darwin-c-debug -f 
gmakefile test search='snes_tutorials-ex1'"

I got the following output:

"Using MAKEFLAGS: search=snes_tutorials-ex1% PETSC_ARCH=arch-darwin-c-debug 
PETSC_DIR=/Users/kongf/projects/moose4/petsc"

But I did not see any useful test information.

Could you kindly let me know what I did wrong?

Thanks,

Fande


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 466-3196
Boulder, CO 80303            Fax:   (303) 448-7756



Re: [petsc-dev] bin/sh error while running tests

2021-01-20 Thread Scott Kruger



OK.  To fix Stefano's other complaint, I see now that it is best to move 
the logic to report_tests.py with a new flag.  This will fix all 
bashisms as well as Jed's complaint about the odd construction of 
having a null dependency.


Scott


On 1/20/21 7:40 AM, Satish Balay via petsc-dev wrote:

Probably best to stick with 'sh' - if bash were universal - we could add in 
this dependency..

diff --git a/config/BuildSystem/config/programs.py 
b/config/BuildSystem/config/programs.py
index f35baa2e0c..6a684310dd 100755
--- a/config/BuildSystem/config/programs.py
+++ b/config/BuildSystem/config/programs.py
@@ -70,7 +70,7 @@ class Configure(config.base.Configure):
  
def configurePrograms(self):

  '''Check for the programs needed to build and run PETSc'''
-self.getExecutable('sh',   getFullPath = 1, resultName = 'SHELL')
+self.getExecutable('bash',   getFullPath = 1, resultName = 'SHELL')
  if not hasattr(self, 'SHELL'): raise RuntimeError('Could not locate sh 
executable')
  self.getExecutable('sed',  getFullPath = 1)
  if not hasattr(self, 'sed'): raise RuntimeError('Could not locate sed 
executable')

Satish

On Wed, 20 Jan 2021, Stefano Zampini wrote:


as long as this gets fixed, I'm fine with any solution

Il giorno mer 20 gen 2021 alle ore 16:04 Pierre Jolivet 
ha scritto:


Sorry for the noise, I'm now just realizing that it is in fact exactly the
same bash-ism… (I prefer my “fix” though, but I guess the rest of Stefano’s
comment still holds true).

Thanks,
Pierre

On 20 Jan 2021, at 1:59 PM, Pierre Jolivet  wrote:



On 20 Jan 2021, at 12:11 PM, Stefano Zampini 
wrote:

This is an issue with the default shell used by the makefile. Below is my
fix. We should probably have a CI machine that checks for these
shell-related errors.


I second this. Just spent too much time finding this other bash-ism in
gmakefile.test…
https://gitlab.com/petsc/petsc/-/commit/e4b11943e93779206a0e5f2091646de2e86b10e3#551c4017403b9179c385d5600f43348b6288a751

2021-01-20T11:21:22.5942304Z /usr/bin/sh: 1: test: false: unexpected operator
2021-01-20T11:21:22.5981176Z make: *** [gmakefile.test:270: check-test-errors] 
Error 1

Thanks,
Pierre

diff --git a/gmakefile.test b/gmakefile.test
index c38e37f..ffd7bdb 100644
--- a/gmakefile.test
+++ b/gmakefile.test
@@ -379,10 +379,11 @@ starttime: pre-clean $(libpetscall)
 @$(eval STARTTIME := $(shell date +%s))

  report_tests: starttime $(TESTTARGETS)
+ifeq ($(showreport),true)
 @$(eval ENDTIME := $(shell date +%s))
-   -@if test ${showreport} == "true"; then
  elapsed_time=$$(($(ENDTIME)- $(STARTTIME))) && \
-   $(PYTHON) $(CONFIGDIR)/report_tests.py -m $(MAKE) -d
$(TESTDIR)/counts -t 5 -e $${elapsed_time};\
-fi
+   elapsed_time=$$(($(ENDTIME)- $(STARTTIME))) && \
+   $(PYTHON) $(CONFIGDIR)/report_tests.py -m $(MAKE) -d
$(TESTDIR)/counts -t 5 -e $${elapsed_time};
+endif
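For reference, the underlying portability issue can be shown in a couple of lines, independent of the makefile: '==' inside 'test' is a bash extension, while '=' is the POSIX string comparison.

```shell
#!/bin/sh
# dash and other POSIX shells reject 'test x == y' with
# "unexpected operator"; the portable spelling is a single '='.
showreport=false
if [ "$showreport" = "true" ]; then
  echo "report requested"
else
  echo "report skipped"
fi
```

(Moving the conditional into make itself, as in the diff above, sidesteps the shell entirely.)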

Il giorno mar 19 gen 2021 alle ore 20:41 Scott Kruger 
ha scritto:



I can't reproduce this with the latest master:

hip 1261: git pull
Already up to date.
hip 1262: make -f gmakefile.test test search='notatest'
Using MAKEFLAGS: -- search=notatest
hip 1263:



On 1/19/21 8:19 AM, Stefano Zampini wrote:

Just rebased over latest master and got this

zampins@vulture:~/Devel/petsc$ make -f gmakefile.test test
search='notatest'
Using MAKEFLAGS: -- search=notatest
/bin/sh: 1: test: false: unexpected operator

--
Stefano


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 466-3196
Boulder, CO 80303            Fax:   (303) 448-7756



--
Stefano








--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 466-3196
Boulder, CO 80303            Fax:   (303) 448-7756



Re: [petsc-dev] bin/sh error while running tests

2021-01-19 Thread Scott Kruger



I can't reproduce this with the latest master:

hip 1261: git pull
Already up to date.
hip 1262: make -f gmakefile.test test search='notatest'
Using MAKEFLAGS: -- search=notatest
hip 1263:



On 1/19/21 8:19 AM, Stefano Zampini wrote:

Just rebased over latest master and got this

zampins@vulture:~/Devel/petsc$ make -f gmakefile.test test 
search='notatest'

Using MAKEFLAGS: -- search=notatest
/bin/sh: 1: test: false: unexpected operator

--
Stefano


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 466-3196
Boulder, CO 80303            Fax:   (303) 448-7756



Re: [petsc-dev] Testing seems broken in master

2021-01-04 Thread Scott Kruger




Yes.  There are test cases where a cast is done, so -j is needed for those.

Scott


On 1/4/21 10:29 AM, Satish Balay wrote:

hm - indices are integers and it's not ignored (i.e. %d) - only the %f and %e 
diff is ignored (by default)

Satish

On Mon, 4 Jan 2021, Scott Kruger wrote:




Is this just the 3rd problem?

Regarding how you can end up with changes not being caught:
The default (going all the way back to the old harness) is to not check
numbers to avoid round-off errors giving false negatives (failures).
Of course, sometimes you *want* to check the numbers; e.g., for indices.  The
solution for this is to add:

  diff_args: -j
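For context, that line goes in the test block of the example source file; a hypothetical stanza (the suffix and args here are made up) would look like:

```c
/*TEST
  test:
    suffix: exact
    args: -view_indices
    diff_args: -j
TEST*/
```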

Scott

On 12/31/20 4:40 PM, Barry Smith wrote:

   I think I have it "fixed" now in the branch; once it passes the pipeline I
will shepherd it through the MR quickly. Sorry about this; even all our CI
testing can miss a great deal.

   Barry




On Dec 31, 2020, at 2:44 PM, Barry Smith  wrote:


   This is a different (3rd) problem. Funny it didn't bother anyone for two
months.

   Fix is in barry/2020-12-29/fix-petscdiff-bracket, but the pipeline keeps
failing: ts_tutorials_advection-diffusion-reaction-ex3_2 fails on different
machines with slightly different counts. I don't see how this change could
cause that! But it gets the old results on my machine. Very frustrating.
Barry


On Dec 31, 2020, at 1:02 PM, Matthew Knepley  wrote:

On Thu, Dec 31, 2020 at 1:48 PM Barry Smith  wrote:


   So the program's output changes and should no longer match that
 in the output/* file, yet the test harness does not error with a
 statement that the two outputs do not match?

    I noticed the gmakegentest.py is not being run before it runs
 the test? Does this mean it is just running all the old stuff
 which does match fine?

    Then either how petscdiff is called by the test harness has
 changed or petscdiff has changed and does not detect changes
 anymore

    BTW: I always use -f ./gmakefile.test test not just the gmakefile

    All the PETSc changes are trivial and can be seen with a
 simple diff, it is hard to believe they would cause this
 behavior but I guess they must.

    You can go to PETSC_ARCH/tests/snes/tests and run the ex13
 shell script directly.


It is the sed problem:

master *$:/PETSc3/petsc/petsc-dev$
/PETSc3/petsc/petsc-dev/lib/petsc/bin/petscdiff
/PETSc3/petsc/petsc-dev/src/snes/tests/output/ex13_bench.out
ex13_bench.tmp

sed: 1: "s/\033[1;31m//g": unbalanced brackets ([])
sed: 1: "s/\033[0;39m\033[0;49m//g": unbalanced brackets ([])
sed: 1: "s/\033[1;31m//g": unbalanced brackets ([])
sed: 1: "s/\033[0;39m\033[0;49m//g": unbalanced brackets ([])

The error was getting eaten.
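(For reference, BSD sed parses the unescaped '[' in "s/\033[1;31m//g" as the start of a bracket expression, hence "unbalanced brackets"; escaping it works on both GNU and BSD sed. A minimal sketch, not the actual petscdiff code:)

```shell
#!/bin/sh
# Strip ANSI color sequences portably: the '[' after ESC must be
# escaped so sed does not treat it as an unbalanced bracket expression.
ESC=$(printf '\033')
colored="${ESC}[1;31mFAILED${ESC}[0m"
printf '%s\n' "$colored" | sed "s/${ESC}\[[0-9;]*m//g"
```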

This is in current master. Is it fixed in a branch?

    Matt

   Barry



 On Dec 31, 2020, at 12:38 PM, Matthew Knepley wrote:

 I just pulled master, and simple alterations to tests do not
 produce a failure:

 master *$:/PETSc3/petsc/petsc-dev$ PETSC_ARCH=arch-master-debug
 make -f ./gmakefile test search="snes_tests-ex13_bench"
 TIMEOUT=5000 EXTRA_OPTIONS="-dm_
 refine 0"
 Using MAKEFLAGS: EXTRA_OPTIONS=-dm_refine 0 TIMEOUT=5000
 search=snes_tests-ex13_bench
         TEST
 arch-master-debug/tests/counts/snes_tests-ex13_bench.counts
  ok snes_tests-ex13_bench
  ok diff-snes_tests-ex13_bench

 I check that the runs produce different output when done manually.

 Scott and Barry, could this be related to changes to testing?

   Thanks,

      Matt

 --
 What most experimenters take for granted before they begin
 their experiments is infinitely more interesting than any
 results to which their experiments lead.
 -- Norbert Wiener

 https://www.cse.buffalo.edu/~knepley/



--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/




--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 466-3196
Boulder, CO 80303            Fax:   (303) 448-7756



Re: [petsc-dev] Testing seems broken in master

2021-01-04 Thread Scott Kruger




Is this just the 3rd problem?

Regarding how you can end up with changes not being caught:
The default (going all the way back to the old harness) is to not check 
numbers to avoid round-off errors giving false negatives (failures).
Of course, sometimes you *want* to check the numbers; e.g., for 
indices.  The solution for this is to add:


 diff_args: -j

Scott

On 12/31/20 4:40 PM, Barry Smith wrote:


  I think I have it "fixed" now in the branch; once it passes the 
pipeline I will shepherd it through the MR quickly. Sorry about this; 
even all our CI testing can miss a great deal.


  Barry



On Dec 31, 2020, at 2:44 PM, Barry Smith  wrote:



  This is a different (3rd) problem. Funny it didn't bother anyone 
for two months.


  Fix is in barry/2020-12-29/fix-petscdiff-bracket, but the pipeline 
keeps failing: ts_tutorials_advection-diffusion-reaction-ex3_2 fails 
on different machines with slightly different counts. I don't see how 
this change could cause that! But it gets the old results on my machine. 
Very frustrating.

Barry

On Dec 31, 2020, at 1:02 PM, Matthew Knepley  wrote:


On Thu, Dec 31, 2020 at 1:48 PM Barry Smith  wrote:



  So the program's output changes and should no longer match that
in the output/* file, yet the test harness does not error with a
statement that the two outputs do not match?

   I noticed the gmakegentest.py is not being run before it runs
the test? Does this mean it is just running all the old stuff
which does match fine?

   Then either how petscdiff is called by the test harness has
changed or petscdiff has changed and does not detect changes
anymore

   BTW: I always use -f ./gmakefile.test test not just the gmakefile

   All the PETSc changes are trivial and can be seen with a
simple diff, it is hard to believe they would cause this
behavior but I guess they must.

   You can go to PETSC_ARCH/tests/snes/tests and run the ex13
shell script directly.


It is the sed problem:

master *$:/PETSc3/petsc/petsc-dev$ 
/PETSc3/petsc/petsc-dev/lib/petsc/bin/petscdiff 
/PETSc3/petsc/petsc-dev/src/snes/tests/output/ex13_bench.out 
ex13_bench.tmp


sed: 1: "s/\033[1;31m//g": unbalanced brackets ([])
sed: 1: "s/\033[0;39m\033[0;49m//g": unbalanced brackets ([])
sed: 1: "s/\033[1;31m//g": unbalanced brackets ([])
sed: 1: "s/\033[0;39m\033[0;49m//g": unbalanced brackets ([])

The error was getting eaten.

This is in current master. Is it fixed in a branch?

   Matt

  Barry



On Dec 31, 2020, at 12:38 PM, Matthew Knepley wrote:

I just pulled master, and simple alterations to tests do not
produce a failure:

master *$:/PETSc3/petsc/petsc-dev$ PETSC_ARCH=arch-master-debug
make -f ./gmakefile test search="snes_tests-ex13_bench"
TIMEOUT=5000 EXTRA_OPTIONS="-dm_
refine 0"
Using MAKEFLAGS: EXTRA_OPTIONS=-dm_refine 0 TIMEOUT=5000
search=snes_tests-ex13_bench
        TEST
arch-master-debug/tests/counts/snes_tests-ex13_bench.counts
 ok snes_tests-ex13_bench
 ok diff-snes_tests-ex13_bench

I check that the runs produce different output when done manually.

Scott and Barry, could this be related to changes to testing?

  Thanks,

     Matt

-- 
What most experimenters take for granted before they begin

their experiments is infinitely more interesting than any
results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/





--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 







--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 466-3196
Boulder, CO 80303            Fax:   (303) 448-7756



Re: [petsc-dev] slight inconsistency with test harness?

2020-12-30 Thread Scott Kruger



https://gitlab.com/petsc/petsc/-/merge_requests/3525

Your complaint about the test harness verbosity when running a restricted 
set of tests is also fixed in this one (and this one is so nice 
I should have done it quite a while ago).


Scott


On 12/29/20 8:32 PM, Barry Smith wrote:


Scott,


I spent way too much time puzzling over why

[bsmith@p1 petsc]$ make -f gmakefile.test test search='src*ts*tests*ex26*'
Using MAKEFLAGS: -- search=src*ts*tests*ex26*
# No tests run
# No tests run
# No tests run

When I ran with help I noticed a seemingly slight inconsistency: it 
says for a directory you include the src, but for a specific example 
you do not include src?


Would it be possible to add support to the test harness so that if one puts 
in the src* for a specific example it still works? For silly people 
like me who type the full directory path and keep typing it over 
and over again even though it does not work. And to support not 
putting in the src/ for directories?


Thanks

Barry



Tests can be generated by searching with multiple methods
For general searching (using config/query_test.py):
  make -f gmakefile.test test search='sys*ex2*'
or the shortcut using s
  make -f gmakefile.test test s='sys*ex2*'
You can also use the full path to a file directory
  make -f gmakefile.test test s='src/sys/tests/'

To search for fields from the original test definitions:
  make -f gmakefile.test test query='requires' 
queryval='*MPI_PROCESS_SHARED_MEMORY*'

or the shortcut using q and qv
  make -f gmakefile.test test q='requires' 
qv='*MPI_PROCESS_SHARED_MEMORY*'

To filter results from other searches, use searchin
  make -f gmakefile.test test s='src/sys/tests/' searchin='*options*'

To re-run the last tests which failed:
  make -f gmakefile.test test test-fail='1'

To see which targets match a given pattern (useful for doing a 
specific target):

  make -f gmakefile.test print-test search=sys*

To build an executable, give full path to location:
  make -f "gmakefile.test" ${PETSC_ARCH}/tests/sys/tests/ex1
or make the test with NO_RM=1

Above is from: help-make help-targets help-test

[bsmith@p1 petsc]$ make -f gmakefile.test test search='src*ts*tests*ex26*'
Using MAKEFLAGS: -- search=src*ts*tests*ex26*
# No tests run
# No tests run
# No tests run
[bsmith@p1 petsc]$ make -f gmakefile.test printtest 
search='src*ts*tests*ex26*'

make: *** No rule to make target 'printtest'.  Stop.
[bsmith@p1 petsc]$ make -f gmakefile.test print_test 
search='src*ts*tests*ex26*'

make: *** No rule to make target 'print_test'.  Stop.
[bsmith@p1 petsc]$ make -f gmakefile.test print-test 
search='src*ts*tests*ex26*'


[bsmith@p1 petsc]$ gmake -f gmakefile.test print-test 
search='src*ts*tests*ex26*'


[bsmith@p1 petsc]$ gmake -f gmakefile.test test 
search='src*ts*tests*ex26*'

Using MAKEFLAGS: -- search=src*ts*tests*ex26*
# No tests run
# No tests run
# No tests run
[bsmith@p1 petsc]$ ls src/ts/tests/ex26
ex26   ex26.c
[bsmith@p1 petsc]$ ls src/ts/tests/ex26
ex26   ex26.c
[bsmith@p1 petsc]$ gmake -f gmakefile.test test search='ts*tests*ex26*'
Using MAKEFLAGS: -- search=ts*tests*ex26*
        CC arch-ci-linux-cuda-double/tests/ts/tests/ex26.o
  CLINKER arch-ci-linux-cuda-double/tests/ts/tests/ex26





--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 466-3196
Boulder, CO 80303            Fax:   (303) 448-7756



Re: [petsc-dev] Updated testing

2020-11-17 Thread Scott Kruger



Thanks.  Many thanks for the new Sphinx documentation, which I find
easier to write in as I develop, and for the tutorial series, which provided
really useful feedback on what devs wanted.

Scott


On 11/17/20 3:05 PM, Matthew Knepley wrote:
On Tue, Nov 17, 2020 at 12:00 PM Scott Kruger  wrote:





As a heads up, the test harness changed a fair amount in master
and release as a result
of MR   !3382 which includes all of the changes suggested at the
tutorial, as well as some
other issues that were resolved.

The latest documentation on the testing can be found here:
https://docs.petsc.org/en/master/developers/testing/


This is great documentation. It is helping me.

  Thanks,

      Matt


and as always:
make -f gmakefile.test help



-- 
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756



--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756



[petsc-dev] Updated testing

2020-11-17 Thread Scott Kruger




As a heads up, the test harness changed a fair amount in master and 
release as a result
of MR   !3382 which includes all of the changes suggested at the 
tutorial, as well as some

other issues that were resolved.

The latest documentation on the testing can be found here:
https://docs.petsc.org/en/master/developers/testing/

and as always:
make -f gmakefile.test help



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756



Re: [petsc-dev] Something is wrong with testing

2020-11-16 Thread Scott Kruger




It's doing the right thing:


From src/dm/impls/plex/tests/ex35.c:

 build:
    requires: !define(PETSC_USE_64BIT_INDICES) double !complex 
!define(PETSC_HAVE_VALGRIND)



So because you have PETSC_HAVE_VALGRIND defined, it's instructed to 
skip the build, and thus you have:


# SKIP Null requirement not met:

The language is confusing because it's like a double negative, but it's 
reporting the right thing.


Scott



On 11/16/20 10:37 AM, Matthew Knepley wrote:
On Mon, Nov 16, 2020 at 11:36 AM Satish Balay  wrote:


Works fine for me - so don't know whats different in your env -
that is triggerig this.


I had PETSC_HAVE_VALGRIND defined. When I remove it, everything works 
fine. Why is that killing the test system?


  Thanks,

     Matt

Satish
---


[balay@pj01 petsc]$ make test
globsearch="dm_impls_plex_tests-ex35_tet" TIMEOUT=5000
Using MAKEFLAGS: -- TIMEOUT=5000
globsearch=dm_impls_plex_tests-ex35_tet
          CC arch-linux-c-debug/tests/dm/impls/plex/tests/ex35.o
     CLINKER arch-linux-c-debug/tests/dm/impls/plex/tests/ex35
        TEST
arch-linux-c-debug/tests/counts/dm_impls_plex_tests-ex35_tet.counts
 ok dm_impls_plex_tests-ex35_tet
 ok diff-dm_impls_plex_tests-ex35_tet

# -
#   Summary
# -
# success 2/2 tests (100.0%)
# failed 0/2 tests (0.0%)
# todo 0/2 tests (0.0%)
# skip 0/2 tests (0.0%)
#
# Wall clock time for tests: 0 sec
# Approximate CPU time (not incl. build time): 0.05 sec
#
# Timing summary (actual test time / total CPU time):
#   dm_impls_plex_tests-ex35_tet: 0.05 sec / 0.05 sec
[balay@pj01 petsc]$ grep PETSC_HAVE_VALGRIND
arch-linux-c-debug/include/petscconf.h
[balay@pj01 petsc]$




On Mon, 16 Nov 2020, Matthew Knepley wrote:

> WIth the latest master I get
>
> knepley/feature-tetgen-labels $:/PETSc3/petsc/petsc-pylith$
> PETSC_ARCH=arch-master-debug make -f ./gmakefile test
> globsearch="dm_impls_plex_tests-ex35_tet" TIMEOUT=5000 EXTRA_O
> PTIONS=""
> Using MAKEFLAGS: EXTRA_OPTIONS= TIMEOUT=5000
> globsearch=dm_impls_plex_tests-ex35_tet
>         TEST
> arch-master-debug/tests/counts/dm_impls_plex_tests-ex35_tet.counts
>  ok dm_impls_plex_tests-ex35_tet # SKIP Null requirement not met:
> define(PETSC_HAVE_VALGRIND), Null requirement not met:
> define(PETSC_HAVE_VALGRIND)
>
> I cannot trace it through yet. I reconfigured and rebuilt, and I
still get
> this. Does anyone know what is happening?
>
> Is it connected to the latest valgrind thing?
>
>   Thanks,
>
>      Matt
>
>



--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756



Re: [petsc-dev] Test Harness Tutorial on October 29 at 2pm Central

2020-10-29 Thread Scott Kruger



For those that wish to follow along today:
https://docs.google.com/presentation/d/1q3mExIBdpfDyxDx-1yDfZGH0v6HRSfWzWtFryV3CNb4/edit#slide=id.p

On 10/28/20 8:27 AM, Munson, Todd via petsc-dev wrote:


Dear all,

A reminder that our next PETSc tutorial will be given by Scott Kruger, 
who will be talking about the test harness; it will take place on October 29 
at 2pm Central.  Please email Scott directly with any questions 
you would like him to address in the tutorial.  The information 
for the call is below.


Thanks, Todd.

To join the meeting on a computer or mobile phone: 
https://bluejeans.com/753281003?src=calendarLink


Phone Dial-in

+1.312.216.0325 (US (Chicago))

+1.408.740.7256 (US (San Jose))

+1.866.226.4650 (US Toll Free)

Global Numbers: https://www.bluejeans.com/premium-numbers

Meeting ID: 753 281 003

Room System

199.48.152.152 or bjn.vc

Meeting ID: 753 281 003

Want to test your video connection?

https://bluejeans.com/111



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756



Re: [petsc-dev] [XSDK-DEV] PR20 - Vote next xSDK telecon (Oct 1st) (fwd)

2020-09-23 Thread Scott Kruger



Sorry if I'm missing something, but my take is that this is mostly going 
to affect PETSc's Spack package file, which doesn't touch PETSc 
itself.  I'm struggling to see how this affects PETSc directly.


Scott



On 9/23/20 8:47 AM, Satish Balay via petsc-dev wrote:

FYI - this policy change would affect petsc - and we [petsc] would have to vote 
on it..

Too many issues here..

Satish

-- Forwarded message --
Date: Fri, 18 Sep 2020 16:38:06 +
From: "Hudson, Stephen Tobias P"
 <0d3a20d2ae38-dmarc-requ...@listserv.llnl.gov>
Reply-To: XSDK-DEV 
To: xsdk-...@listserv.llnl.gov
Subject: [XSDK-DEV] PR20 - Vote next xSDK telecon (Oct 1st)

I want to aim to vote on PR20 at the next xSDK telecon (Oct 1st).

PR20. 
https://github.com/xsdk-project/xsdk-community-policies/pull/20

Summary of changes:

M1. Modified to base installation around Spack and includes compliance with the 
Spack variant guidelines.

https://github.com/xsdk-project/xsdk-community-policies/blob/python-updates/installation_policies/xSDK_spack_variant_guidelines.md

Note that these guidelines state that when configuration options exist for 
index size, precision, shared libraries and build_type then they should be 
reflected in Spack variants.

The variant names/types listed are recommendations. Please think about whether 
you are happy with these variants for your code and make any comments either on 
the PR or on the xsdk slack page in the channel spack-variants-and-pr20 
(https://app.slack.com/client/T01AWKXU6F5/C01B3T17P7E)

The old installation polices have been combined into one document called 
pre_spack_install_policies.md
 and are no longer part of M1.

M16. The old M16 has been removed as this is incorporated into M1. A new M16 
requires a Debug build option.

If you have any questions or comments, and especially if I have overlooked any 
previous comments, please let me know either in PR or in the Slack channel 
referenced above.






--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756



Re: [petsc-dev] testharness rerun test based on error condition; GPU; gitlab issues still broken

2020-09-04 Thread Scott Kruger



On 9/4/20 11:12 AM, Satish Balay wrote:

The test harness prints:

# To rerun failed tests:
# /usr/bin/gmake -f gmakefile test test-fail=1

So perhaps we the CI can be changed to ignore result of 'make alltests' - and 
always run this [and then check the error code]


But this says that even if we have legitimate failures we should rerun, 
and only then worry about whether it is a real error code.


However - I'm not seeing error return here..

Satish
--

[balay@pj01 petsc.x]$ make test globsearch='*ksp*tests*ex49_*cg*'
Using MAKEFLAGS: -- globsearch=*ksp*tests*ex49_*cg*
 TEST arch-complex/tests/counts/ksp_ksp_tests-ex49_cg.counts
  ok ksp_ksp_tests-ex49_cg
not ok diff-ksp_ksp_tests-ex49_cg # Error code: 1
#   2d1
#   < extra text
 TEST arch-complex/tests/counts/ksp_ksp_tests-ex49_pipecg2.counts


This isn't a good example since it's a diff error.  It's not what Barry 
is referring to.


Scott


  ok ksp_ksp_tests-ex49_pipecg2+ksp_norm_type-preconditioned
  ok diff-ksp_ksp_tests-ex49_pipecg2+ksp_norm_type-preconditioned
  ok ksp_ksp_tests-ex49_pipecg2+ksp_norm_type-unpreconditioned
  ok diff-ksp_ksp_tests-ex49_pipecg2+ksp_norm_type-unpreconditioned
  ok ksp_ksp_tests-ex49_pipecg2+ksp_norm_type-natural
  ok diff-ksp_ksp_tests-ex49_pipecg2+ksp_norm_type-natural

# -
#   Summary
# -
# FAILED diff-ksp_ksp_tests-ex49_cg
# success 7/8 tests (87.5%)
# failed 1/8 tests (12.5%)
# todo 0/8 tests (0.0%)
# skip 0/8 tests (0.0%)
#
# Wall clock time for tests: 1 sec
# Approximate CPU time (not incl. build time): 0.19 sec
#
# To rerun failed tests:
# /usr/bin/gmake -f gmakefile test test-fail=1
#
# Timing summary (actual test time / total CPU time):
#   ksp_ksp_tests-ex49_pipecg2: 0.02 sec / 0.19 sec
#   ksp_ksp_tests-ex49_cg: 0.00 sec / 0.00 sec
[balay@pj01 petsc.x]$ echo $?
0
[balay@pj01 petsc.x]$ /usr/bin/gmake -f gmakefile test test-fail=1
Using MAKEFLAGS: -- test-fail=1
 TEST arch-complex/tests/counts/ksp_ksp_tests-ex49_cg.counts
  ok ksp_ksp_tests-ex49_cg
not ok diff-ksp_ksp_tests-ex49_cg # Error code: 1
#   2d1
#   < extra text

# -
#   Summary
# -
# FAILED diff-ksp_ksp_tests-ex49_cg
# success 1/2 tests (50.0%)
# failed 1/2 tests (50.0%)
# todo 0/2 tests (0.0%)
# skip 0/2 tests (0.0%)
#
# Wall clock time for tests: 0 sec
# Approximate CPU time (not incl. build time): 0.01 sec
#
# To rerun failed tests:
# /usr/bin/gmake -f gmakefile test test-fail=1
#
# Timing summary (actual test time / total CPU time):
#   ksp_ksp_tests-ex49_cg: 0.01 sec / 0.01 sec
[balay@pj01 petsc.x]$ echo $?
0
[balay@pj01 petsc.x]$



On Fri, 4 Sep 2020, Scott Kruger wrote:



That's a good idea, but I'll have to think about this a bit.   It seems
relatively straightforward, but I'd be doing this in bash so I'd like to come
up with an implementation that is not overly complicated.    Do you have a job
that has the issue offhand?

Scott


On 9/4/20 10:27 AM, Barry Smith wrote:

Scott,

 How difficult would it be for the test harness to run a failed test
  again if the return code has specific values, instead of erroring out?

  I am thinking in particular about GPUs, but it is general. If the GPU
  doesn't have the resources available, it will error out, crashing the
  entire job in the pipeline and requiring a retry from the GUI,
  wasting everyone's time.

  Seems in theory like it should be pretty straightforward but, of course,
  unforeseen issues can make it difficult. Just check the program's error
  code, and if it has certain values, run the program again, or wait a few
  seconds and then rerun it.

Barry


Issues are still broken hence here.




--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756



Re: [petsc-dev] testharness rerun test based on error condition; GPU; gitlab issues still broken

2020-09-04 Thread Scott Kruger



That's a good idea, but I'll have to think about this a bit.   It seems 
relatively straightforward, but I'd be doing this in bash so I'd like to 
come up with an implementation that is not overly complicated.    Do you 
have a job that has the issue offhand?
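Something like this minimal sketch is what I have in mind (the retryable exit code 77 is an arbitrary placeholder, and the sleep would be a few seconds in practice):

```shell
#!/bin/sh
# Rerun a command up to 3 times when it exits with a designated
# "resource unavailable" code, instead of failing the pipeline outright.
run_with_retry() {
  attempts=0
  while [ "$attempts" -lt 3 ]; do
    "$@"; rc=$?
    [ "$rc" -ne 77 ] && return "$rc"   # success, or a real failure
    attempts=$((attempts + 1))
    sleep 0   # would back off for a few seconds in practice
  done
  return "$rc"
}

run_with_retry sh -c 'exit 0' && echo "test passed"
```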


Scott


On 9/4/20 10:27 AM, Barry Smith wrote:

   Scott,

How difficult would it be for the test harness to run a failed test again 
if the return code has specific values, instead of erroring out?

I am thinking in particular about GPUs, but it is general. If the GPU 
doesn't have the resources available, it will error out, crashing the entire 
job in the pipeline and requiring a retry from the GUI, wasting everyone's 
time.

Seems in theory like it should be pretty straightforward but, of course, 
unforeseen issues can make it difficult. Just check the program's error code, 
and if it has certain values, run the program again, or wait a few seconds and 
then rerun it.

   Barry


Issues are still broken hence here.


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756



Re: [petsc-dev] Pause-for-approval Pipelines?

2020-08-27 Thread Scott Kruger



What's wrong with using the API to release the paused job instead of using it 
to start a fresh pipeline?

   Generally I like to pass the Pipeline before making a PR. So the test on 
creating a new MR is annoying. Yes after the initial MR I might be able to 
release the paused job in lieu of starting pipelines fresh. It would be nice to 
send some pushes that don't trigger a pipeline start at all because I know I 
don't need one. Maybe that is possible, I'll need to investigate.



I agree with this, but this would require keying off of labels rather 
than just MR.

The new `rules:` keyword is supposed to be more flexible, but from the docs,
I can't tell that changing a label can launch an pipeline:

https://docs.gitlab.com/ee/ci/yaml/#workflowrules

They do show examples of ignoring WIP, but it looks like that applies to 
commits?
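As a sketch of what the `workflow:` keyword can express (keys per the GitLab docs linked above; the draft-title regex is their documented example, not something we have tested against our setup):

```yaml
# Skip creating a pipeline for draft/WIP merge requests, run otherwise.
workflow:
  rules:
    - if: '$CI_MERGE_REQUEST_TITLE =~ /^(\[Draft\]|\(Draft\)|Draft:)/'
      when: never
    - when: always
```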


But going back to the original point, gitlab supports documentation only 
pipelines:

https://docs.gitlab.com/ee/development/pipelines.html

So, in the end, I can't tell if gitlab can do what we want or not.


Scott



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756



Re: [petsc-dev] Pause-for-approval Pipelines?

2020-08-27 Thread Scott Kruger



Does branch+master mean an automatic rebase?

Scott


On 8/27/20 10:59 AM, Satish Balay via petsc-dev wrote:

BTW: here are some reasons for using the MR pipeline instead of the web 
interface pipeline.

- it tests branch+master (more useful?) - instead of branch [web pipeline].
- you can skip the forced re-bases that were required when CI changed [i.e even 
if your branch is off old master - the latest CI settings from latest master 
will get used by MR pipeline]
- it enables testing of MRs from forks. [so the additional complexity of that 
workflow is now gone. Note: only developers can start these pipelines - from 
the pipeline tab on the MR web page]

And as mentioned - developers can ignore this, and continue to start pipelines 
from the web interface.

There is now some additional complexity in figuring out if the latest changes 
are tested [and by which pipeline, MR or web etc..] - but this part of the 
workflow should primarily affect integrator group.

Satish

On Thu, 27 Aug 2020, Satish Balay via petsc-dev wrote:


On Thu, 27 Aug 2020, Jacob Faibussowitsch wrote:


Why does one pipeline request spawn two separate pipelines now? Specifically 
one is a normal pipeline whereas the other includes some sort of manual 
approval button which “runs” indefinitely if you don’t either cancel it or 
approve it.

The 2 pipelines you see are
- readdocs pipeline
- merge-pipeline - auto starts - does the pre stage and pauses.


I think this was somewhat discussed in a previous MR 
(https://gitlab.com/petsc/petsc/-/merge_requests/3063 
) which indicates it is 
useful for doing a pipeline of the branch+destination but how is this different from 
the existing merge-train infrastructure that was already in place?

It's not a replacement for the merge train. [The merge train is a way to do the 
merge once the MR is tested and ready for merge.]

However, you can use this as a replacement for starting a new pipeline from the 
web interface https://gitlab.com/petsc/petsc/-/pipelines/new
[i.e. instead of starting a web-interface pipeline, you just go to the MR page's 
'pipeline' tab and hit continue].

Or you can ignore this and continue to use the web interface.



It is annoying to have to manually go in and cancel the phony pipeline every 
time (not to mention twice as many emails from gitlab notifying me the 
femtosecond these pipelines fail).

You shouldn't have to cancel the automatic MR pipeline. They should just stay 
paused.

And I don't remember getting e-mails from these stalled MR pipelines. Perhaps 
you got them because of pre-stage failures?

However if you have errors in pre stage tests - you might as well check and fix 
them.

The one that's causing the most trouble is the readdocs pipeline. It's probably 
best to disable it until its issues are resolved.

Satish


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756



Re: [petsc-dev] test harness failure make -f gmakefile.test test query='requires' queryval='kokkos'

2020-08-12 Thread Scott Kruger



Ah - gmakefile is not passing PETSC_DIR and PETSC_ARCH to 
query_tests.py like it should.
The immediate fix is to set your env variables, but I'll do a quick MR 
to fix this.
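The quick fix amounts to a guard along these lines (a sketch with a hypothetical helper name; the real query_tests.py gets these values from make, and the actual MR may differ):

```python
import os

# Sketch: fail with a readable message when PETSC_DIR/PETSC_ARCH are absent,
# instead of passing None to os.path.join (which raises the TypeError below).
def get_full_arch(environ):
    petsc_dir = environ.get('PETSC_DIR')
    petsc_arch = environ.get('PETSC_ARCH')
    if petsc_dir is None or petsc_arch is None:
        raise SystemExit('Set PETSC_DIR and PETSC_ARCH (or pass them via make)')
    return os.path.join(petsc_dir, petsc_arch)

print(get_full_arch({'PETSC_DIR': '/home/user/petsc', 'PETSC_ARCH': 'arch-debug'}))
# -> /home/user/petsc/arch-debug
```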


Scott


On 8/11/20 8:11 PM, Barry Smith wrote:


  Scott,

   Sometimes when I run the test below, the harness misbehaves. I 
cannot say what is different between when this happens and when it doesn't, 
because sometimes it also works.


  Thanks

   Barry




make -f gmakefile.test  test query='requires' queryval='kokkos'
Traceback (most recent call last):
File "config/query_tests.py", line 226, in <module>
  main()
File "config/query_tests.py", line 187, in main
  petsc_full_arch = os.path.join(petsc_dir, petsc_arch)
File "/usr/lib64/python3.7/posixpath.py", line 94, in join
  genericpath._check_arg_types('join', a, *p)
File "/usr/lib64/python3.7/genericpath.py", line 153, in _check_arg_types
  (funcname, s.__class__.__name__)) from None
TypeError: join() argument must be str or bytes, not 'NoneType'
Using MAKEFLAGS: -- queryval=kokkos query=requires
# No tests run
# No tests run
# No tests run




--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756



Re: [petsc-dev] REPLACE=1 not working for me

2020-07-10 Thread Scott Kruger




Looks like a problem in petscdiff between the filter output flag and the 
move flag.  I'll take a look.


Scott




On 7/10/20 9:15 AM, Mark Adams wrote:

REPLACE=1 is doing something funny:

11:12 knepley/feature-swarm-fortran *= ~/Codes/petsc$ make -f gmakefile 
test search='dm_impls_swarm_tutorials-ex1f90_0' REPLACE=1

Using MAKEFLAGS: REPLACE=1 search=dm_impls_swarm_tutorials-ex1f90_0
         TEST 
arch-macosx-gnu-g/tests/counts/dm_impls_swarm_tutorials-ex1f90_0.counts

  ok dm_impls_swarm_tutorials-ex1f90_0
not ok diff-dm_impls_swarm_tutorials-ex1f90_0 # Error code: 1
# 1c1
# < DM Object: Potential Grid 1 MPI processes
# ---
# > DM Object: 1 MPI processes
# 3d2
# < Potential Grid in 2 dimensions:
# 12c11
# < DM Object: Particle Grid 1 MPI processes
# ---
# > DM Object: 1 MPI processes
# mv'ing ex1f90_0.tmp.filter_tmp --> 
/Users/markadams/Codes/petsc/src/dm/impls/swarm/tutorials/output/ex1f90_0.out.filter_tmp
# -
#   Summary
# -
# FAILED diff-dm_impls_swarm_tutorials-ex1f90_0
# success 1/2 tests (50.0%)
# failed 1/2 tests (50.0%)
# todo 0/2 tests (0.0%)
# skip 0/2 tests (0.0%)
#
# Wall clock time for tests: 1 sec
# Approximate CPU time (not incl. build time): 0.92 sec
#
# To rerun failed tests:
#     /usr/bin/make -f gmakefile test test-fail=1
#
# Timing summary (actual test time / total CPU time):
#   dm_impls_swarm_tutorials-ex1f90_0: 0.68 sec / 0.92 sec
11:12 knepley/feature-swarm-fortran *= ~/Codes/petsc$ ll 
/Users/markadams/Codes/petsc/src/dm/impls/swarm/tutorials/output/

total 16
-rw-r--r--  1 markadams  staff  2208 Jul 10 08:42 ex1_0.out
-rw-r--r--  1 markadams  staff  1612 Jul 10 08:42 ex1f90_0.out
-rw-r--r--  1 markadams  staff     0 Jun 23 14:06 ex2_0.out


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] How do I see the gcov results?

2020-06-24 Thread Scott Kruger




For more detail, Stage 4 of the pipeline ("analyze-pipeline") has all of 
the gcov data, and you can download it from the right side after clicking 
"Download" on the "Job Artifacts" tab.  This is handled by the 
.gitlab-ci.yml file (search for gcov).
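For anyone digging in, the relevant kind of stanza looks roughly like this. This is a sketch from memory: the job name, script, paths, and retention below are illustrative, not the actual contents of PETSc's .gitlab-ci.yml.

```yaml
# Illustrative only -- see the real .gitlab-ci.yml (search for gcov).
analyze-pipeline:
  stage: analyze
  script:
    - lib/petsc/bin/maint/gcov.py --merge-branch   # hypothetical invocation
  artifacts:
    paths:
      - arch-ci-analyze-pipeline/gcov/
    expire_in: 1 week
```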


If someone knows how gcov outputs its data, and how to upgrade 
lib/petsc/bin/maint/gcov.py to read in the data as gitlab organizes it 
and then output the html/figures, then we'd have it done locally. 
(Uploading to the wiki or another gitlab display would require more work 
on the gitlab-ci.yml file.)


I spent quite a few hours on it, and got stuck.  It requires 
understanding gcov to a degree that was interfering with other priorities.


If someone has the knowledge or inclination, it's a good problem to solve.

Scott



On 6/24/20 2:39 PM, Satish Balay via petsc-dev wrote:

Its not yet setup in the current CI

Satish

On Wed, 24 Jun 2020, Matthew Knepley wrote:


Thanks,

Matt




--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] testing with Pipelines before making merge request

2020-06-23 Thread Scott Kruger




I see the value in this, but am somewhat ambivalent.

As a workflow, I did do the "Assign to me" and then when it was actually 
ready to merge, I changed the status to "Ready to merge" and assigned to 
Satish.  I liked this because the gitlab MR button in the upper right 
shows the ones assigned to you by default -- it's a nice To Do list.


Of course, with your petscgitbash, effectively you can see your MR To Do 
list from the command-line, and having a tidy MR list for PETSc itself 
is really nice.  I agree that PETSc's is a bit messy.


However, if I'm going to get a conflict, it's almost certainly going to 
be from Junchao and I like seeing his MR's just so I know which files 
he's working on.  (Click that MR button, clear the search with your 
name, change Author=@jczhang07, and it'll appear in your recent searches 
so easy-peasy). Yes, I can do a

    git branch -a | grep jczhang

but there are a lot of branches there, and there isn't as much 
explanation as you get with an MR.


Of course, I could try actually emailing or *gasp* talking to Junchao, 
but isn't the whole point of the internet to minimize contact with 
people?  ;-)


And yes, I can see it when the MR is ready, but one (admittedly 
generally positive) side-effect is going to be a shorter MR cycle.


Scott

P.S.  I await the perfect git command(s) that renders my whole argument 
moot and makes me feel like a git noob.
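In that spirit, here is one tentative candidate: `git for-each-ref` sorted by recency, with date and subject. It is demonstrated on a throwaway repo so the commands are self-contained; the branch name and commit are invented.

```shell
# Sketch: list a colleague's branches, newest first, with context.
# Set up a disposable repo purely for demonstration.
dir=$(mktemp -d)
cd "$dir"
git -c init.defaultBranch=main init -q repo
cd repo
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "work on plex tests"
git branch jczhang/feature-sf-fixes   # invented branch name

# The actual command of interest:
found=$(git for-each-ref --sort=-committerdate \
    --format='%(committerdate:short) %(refname:short) %(subject)' \
    refs/heads | grep jczhang)
echo "$found"
```

Against the real repo you would run it on `refs/remotes/origin` instead of `refs/heads`.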




On 6/23/20 6:21 PM, Barry Smith wrote:


   One can test a branch with Pipelines (and fix it) before making a merge 
request. GitLab is smart enough to remember that branch has passed the pipeline 
and not require another test just because you make a MR (unless of course you 
change something based on MR feedback).

   This can prevent some churn in merge request messages and constant pushes. 
Of course if one needs help in fixing a pipeline problem one is free to make a 
WIP merge request and ask for help there.

Barry



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] https://developer.nvidia.com/nccl

2020-06-17 Thread Scott Kruger





Here's a paper from a few years ago that uses NCCL to give a better 
mpi_bcast:


https://arxiv.org/pdf/1707.09414.pdf

But what's interesting is that they have this statement:

In general, NCCL integration with MPI runtimes might lead to very 
complicated designs. Thus, the proposed work is a step towards achieving 
similar or better performance without utilizing NCCL.


Scott

On 6/16/20 9:19 PM, Karl Rupp wrote:
 From a practical standpoint it seems to me that NCCL is an offering to 
a community that isn't used to MPI. It's categorized as 'Deep Learning 
Software' on the NVIDIA page ;-)


The section 'NCCL and MPI' has some interesting bits:
  https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/mpi.html

At the bottom of the page there is
  "Using NCCL to perform inter-GPU communication concurrently with 
CUDA-aware MPI may create deadlocks. (...) Using both MPI and NCCL to 
perform transfers between the same sets of CUDA devices concurrently is 
therefore not guaranteed to be safe."


While I'm impressed that NVIDIA even 'reinvents' MPI for their GPUs to 
serve the deep learning community, I don't think NCCL provides enough 
beyond MPI for PETSc.


Best regards,
Karli





On 6/17/20 4:13 AM, Junchao Zhang wrote:
It should be renamed NCL (NVIDIA Communications Library), as it adds 
point-to-point in addition to collectives. I am not sure whether to 
implement it in petsc, as no exascale machine uses NVIDIA GPUs.


--Junchao Zhang


On Tue, Jun 16, 2020 at 6:44 PM Matthew Knepley wrote:


    It would seem to make more sense to just reverse-engineering this as
    another MPI impl.

    Matt

    On Tue, Jun 16, 2020 at 6:22 PM Barry Smith <bsm...@petsc.dev> wrote:




    --
    What most experimenters take for granted before they begin their
    experiments is infinitely more interesting than any results to which
    their experiments lead.
    -- Norbert Wiener

    https://www.cse.buffalo.edu/~knepley/



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] "alt" versions of tests

2020-06-15 Thread Scott Kruger




This is more about how the reporting is done than about what's actually 
happening under the hood.  The way the test is formed is essentially:

   diff-test1 2> test.out || diff-test2 2> test.out

So diff-test1 output gets overwritten by diff-test2.

Let me see if I can fix it.
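One possible shape for the fix (a sketch only, not the actual petscdiff change): write each diff attempt to its own file so the second attempt cannot clobber the first, and only dump both on total failure. File names here are invented.

```shell
# Sketch: separate capture files per diff attempt.
workdir=$(mktemp -d)
cd "$workdir"
printf 'a\n' > expected.out        # primary expected output
printf 'b\n' > expected_alt.out    # alt expected output
printf 'b\n' > actual.out          # what the test actually produced

if diff expected.out actual.out > diff_primary.log 2>&1; then
  result="matched primary"
elif diff expected_alt.out actual.out > diff_alt.log 2>&1; then
  result="matched alt"
else
  result="failed both"
  echo "--- vs primary ---"; cat diff_primary.log
  echo "--- vs alt ---";     cat diff_alt.log
fi
echo "$result"
```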

Scott




On 6/15/20 9:51 AM, Mark Adams wrote:
src/ksp/ksp/tutorials/output/ex71_bddc_elast_both_approx_alt.out uses 
ML and src/ksp/ksp/tutorials/output/ex71_bddc_elast_both_approx.out uses 
GAMG.


The test seems to look at the alt file and not the normal one. I don't 
understand. I do get an error message ...


11:42 adams/cheby-spd-cg= ~/Codes/petsc-master$ make cleantest
/usr/bin/make  --no-print-directory -f gmakefile.test 
PETSC_ARCH=arch-macosx-gnu-g PETSC_DIR=/Users/markadams/Codes/petsc 
cleantest

/bin/rm -f -r ./arch-macosx-gnu-g/tests ./arch-macosx-gnu-g/tests/testfiles
11:42 adams/cheby-spd-cg= ~/Codes/petsc-master$ make -f gmakefile test 
search='ksp_ksp_tutorials-ex71_bddc_elast_both%' PETSC_DIR=$PWD
gmakefile.test:92: arch-macosx-gnu-g/tests/testfiles: No such file or directory
/System/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python 
/Users/markadams/Codes/petsc-master/config/gmakegentest.py 
--petsc-dir=/Users/markadams/Codes/petsc-master 
--petsc-arch=arch-macosx-gnu-g --testdir=./arch-macosx-gnu-g/tests
Using MAKEFLAGS: PETSC_DIR=/Users/markadams/Codes/petsc-master 
search=ksp_ksp_tutorials-ex71_bddc_elast_both%

           CC arch-macosx-gnu-g/tests/ksp/ksp/tutorials/ex71.o
      CLINKER arch-macosx-gnu-g/tests/ksp/ksp/tutorials/ex71
         TEST 
arch-macosx-gnu-g/tests/counts/ksp_ksp_tutorials-ex71_bddc_elast_both_approx.counts

  ok ksp_ksp_tutorials-ex71_bddc_elast_both_approx
not ok diff-ksp_ksp_tutorials-ex71_bddc_elast_both_approx # Error code: 1
# 1,13c1
# <   0 KSP Residual norm 1615.07
# <   1 KSP Residual norm 420.868
# <   2 KSP Residual norm 187.45
# <   3 KSP Residual norm 67.3919
# <   4 KSP Residual norm 21.3237
# <   5 KSP Residual norm 5.8091
# <   6 KSP Residual norm 1.0923
# <   7 KSP Residual norm 0.527464
# <   8 KSP Residual norm 0.380684
# <   9 KSP Residual norm 0.0354163
# <  10 KSP Residual norm 0.0237308
# <  11 KSP Residual norm 0.0121289
# < Linear solve converged due to CONVERGED_RTOL iterations 11
# ---
# > Linear solve converged due to CONVERGED_RTOL iterations 10
# 77,92d64
# <     PC Object: 1 MPI processes
# <       type: shell
# <         Nullspace corrected interior solve
# <         L:
# <           Mat Object: 1 MPI processes
# <             type: seqdense
# <             rows=144, cols=6
# <             total: nonzeros=864, allocated nonzeros=864
# <             total number of mallocs used during MatSetValues calls=0
# <         K:
# <           Mat Object: 1 MPI processes
# <             type: seqdense
# <             rows=144, cols=6
# <             total: nonzeros=864, allocated nonzeros=864
# <             total number of mallocs used during MatSetValues calls=0
# <         inner preconditioner:
# 94,95c66,67
# <             type: ml
# <               type is MULTIPLICATIVE, levels=3 cycles=v
# ---
# >       type: gamg
# >         type is MULTIPLICATIVE, levels=2 cycles=v
# 97a70,77

On Mon, Jun 15, 2020 at 10:15 AM Satish Balay wrote:


On Mon, 15 Jun 2020, Mark Adams wrote:

 > My pipeline is failing on ksp/ex71.c and it seems to be picking
up an "alt"
 > version of the output.

Hm - it does a diff with (basic, alt) files. If all diffs fail -
then it prints a diff from one of them.


 > I tried REPLACE=1 and both output files seemed to
 > change. What is going on with these "alt" output files?

I'm not sure how this works with alt files. I assumed it ignores alt
files - and updates the primary file.  Usually I would need a new alt
file - so I just move this over manually to a new one [and keep the
current files unchanged]

Note: To test the new alt file - one need to do 'make cleantest' and
rerun the test - otherwise the test harness does not know that it
should pick up the new alt file.

Satish



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] testset + test with only args

2020-04-22 Thread Scott Kruger




This is probably more detail than you want, but:

When you explicitly put in the suffix, you are telling the test harness 
to put each of those "subtests" into a separate script rather than a 
single script (multiple tests within a single script), so that's why 
it works.


For the single script case, I have this line in config/gmakegentest.py:

  if key in usedVars: continue  # Do not duplicate setting vars

Since you have a duplicated var, it gets skipped.

I'm not sure where this line comes from (`git blame` is confusing me).

I'll set up an MR and run a pipeline job to see if it causes problems.
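Reduced to a toy version (my simplification, not the actual gmakegentest.py bookkeeping), the behavior Pierre is seeing looks like this: the first subtest claims the `args` key, and every later subtest's `args` is silently dropped.

```python
# Toy model of the single-script variable collection in gmakegentest.py.
def collect_vars(subtests):
    used, script_vars = set(), []
    for sub in subtests:
        for key, val in sub.items():
            if key in used:
                continue  # Do not duplicate setting vars  <- the culprit
            used.add(key)
            script_vars.append((key, val))
    return script_vars

subs = [{'args': ''},
        {'args': '-ksp_type bcgs -pc_type {{hypre gamg}}'},
        {'args': '-ksp_type gmres -pc_type {{lu ilu}}'}]
print(collect_vars(subs))
# Only the first subtest's (empty) args survive; the lu/ilu variant is lost.
```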

Scott

On 4/22/20 3:04 AM, Pierre Jolivet wrote:

Hello,
Given a testset, what are the mandatory fields in the subsequent tests to have 
something functioning?
Here is an MWE showing that something is broken - for a reason I'm not sure I 
understand - if only the args field is provided.
$ git diff --unified=0 src/ksp/ksp/tutorials/ex1.c
diff --git a/src/ksp/ksp/tutorials/ex1.c b/src/ksp/ksp/tutorials/ex1.c
index 3b3e776a6d..59113dfd1a 100644
--- a/src/ksp/ksp/tutorials/ex1.c
+++ b/src/ksp/ksp/tutorials/ex1.c
@@ -187,0 +188,11 @@ int main(int argc,char **args)
+   testset:
+  suffix: 4
+  nsize: 1
+  args: -ksp_converged_reason -ksp_max_it 1000
+  test:
+ args:
+  test:
+ args: -ksp_type bcgs -pc_type {{hypre gamg}}
+  test:
+ args: -ksp_type gmres -pc_type {{lu ilu}}
+
$ make -f gmakefile test globsearch="ksp_ksp_tutorials-ex1_4*"
[..]
# FAILED diff-ksp_ksp_tutorials-ex1_4+a 
diff-ksp_ksp_tutorials-ex1_4+b+pc_type-hypre 
diff-ksp_ksp_tutorials-ex1_4+b+pc_type-gamg 
diff-ksp_ksp_tutorials-ex1_4+c+pc_type-hypre 
diff-ksp_ksp_tutorials-ex1_4+c+pc_type-gamg
[..]

It should be diff-ksp_ksp_tutorials-ex1_4+c+pc_type-lu 
diff-ksp_ksp_tutorials-ex1_4+c+pc_type-ilu, shouldn’t it?
The problem seems to go away if I explicitly put the suffixes myself, e.g., 
suffix: a, suffix: b, and suffix: c, cf. below, but I’d prefer to avoid having 
to do that.
# FAILED diff-ksp_ksp_tutorials-ex1_4_a 
diff-ksp_ksp_tutorials-ex1_4_c+pc_type-lu 
diff-ksp_ksp_tutorials-ex1_4_c+pc_type-ilu 
diff-ksp_ksp_tutorials-ex1_4_b+pc_type-hypre 
diff-ksp_ksp_tutorials-ex1_4_b+pc_type-gamg

Thanks,
Pierre



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Issue with make test and globsearch

2020-04-09 Thread Scott Kruger




Can you send me these files?

$PETSC_ARCH/tests/datatest.pkl
$PETSC_ARCH/tests/testfiles

Thanks,
Scott


On 4/9/20 10:47 AM, Stefano Zampini wrote:

[szampini@localhost petsc]$ make -f gmakefile.test test globsearch='mat_*'
Using MAKEFLAGS: -- globsearch=mat_*
make: *** No rule to make target 'mat_tests-ex1_1', needed by 
'report_tests'.  Stop.


--
Stefano


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Globsearch fails for me when running tests

2020-03-30 Thread Scott Kruger




Sorry for the delay.  Fixed in MR !2663.

It turns out that the *read* form, $(file <...), did not appear until
gmake 4.2, so this bit me even on an (admittedly old) Linux dev box.
To enable the widest range of usage, I kept a modified version
of the current $(shell ...) usage, but commented out the
gmake >=4.2 solution for people who want the truly scalable
solution (e.g., globsearch='*').  The current fix should work
until dm, or another package, gets to around 9K tests, which seems to
be where globsearch failed.

The new solution uses an upgraded `config/query_tests.py` script
so is similar to the query/queryval functionality.

Scott



On 3/28/20 10:34 AM, Satish Balay via petsc-dev wrote:

On Sat, 28 Mar 2020, Jed Brown wrote:


Matthew Knepley  writes:


IIRC, the $(file ...) function does not work with stock make in macOS.


You're right; that is a make 4.0 feature.  But developers who need
globsearch should have the ability to evade Apple's anti-GNU smear.



Do we really have to make things hard on Mac to use something I use every
day hundreds of times?


Are you still using make-3.81?  Do you also use an Apple Newton?

You can still use search and searchin.

Scott can implement globsearch to call Python that calls make print-test
(listing tests on stdout) and returns the result on stdout.  But it
can't pass the list all the tests on the command line.



Configure has been printing this warning for a very long time - but it's easily 
ignored:

  =============================================================================
  * WARNING: You have an older version of Gnu make, it will work,
    but may not support all the parallel testing options. You can install the
    latest Gnu make with your package manager, such as brew or macports, or use
    the --download-make option to get the latest Gnu make *
  =============================================================================

Satish



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Globsearch fails for me when running tests

2020-03-25 Thread Scott Kruger




Ugh -- this is ugly.

Can't we just tell users to either use the '%' syntax or recompile their 
linux kernel?


Just kidding.  I'll take a look.

Scott


On 3/25/20 3:48 PM, Jed Brown wrote:

Scott, you can't pass '$(alltesttargets)' on the command line like this.

   TESTTARGETS := $(shell $(PYTHON) -c"import sys,fnmatch,itertools; 
m=[fnmatch.filter(sys.argv[2].split(),p) for p in sys.argv[1].split()]; print(' 
'.join(list(itertools.chain.from_iterable(m))))" '$(globsearch)' '$(alltesttargets)')

For this feature, probably put them in an argsfile

   $(file >$(TESTDIR)/globsearch.args,$(alltesttargets))

and make your Python read from that file.  I don't know a way to pass it
on stdin.
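The consuming side of that argsfile might look like this in Python (a sketch; the helper name is invented, and a temporary file stands in for $(TESTDIR)/globsearch.args):

```python
import fnmatch
import itertools
import os
import tempfile

# Sketch: make writes all test targets into an argsfile, and Python globs
# against the file contents instead of receiving the (too long) target list
# on the command line.
def filter_targets(argsfile, patterns):
    with open(argsfile) as f:
        targets = f.read().split()
    matched = [fnmatch.filter(targets, p) for p in patterns.split()]
    return ' '.join(itertools.chain.from_iterable(matched))

# Hypothetical stand-in for $(TESTDIR)/globsearch.args:
with tempfile.NamedTemporaryFile('w', suffix='.args', delete=False) as f:
    f.write('dm_tests-ex1_1 dm_tests-ex2_1 ksp_tests-ex1_1')
    path = f.name
print(filter_targets(path, 'dm*'))
# -> dm_tests-ex1_1 dm_tests-ex2_1
os.remove(path)
```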

Scott Kruger  writes:


What platform?

On 3/25/20 3:20 PM, Stefano Zampini wrote:

This was working before..

[szampini@localhost petsc]$ make -f gmakefile.test test globsearch='dm*'
make: execvp: /usr/bin/sh: Argument list too long
Using MAKEFLAGS: -- globsearch=dm*
# No tests run
# No tests run
# No tests run

[szampini@localhost petsc]$ git branch
* knepley/feature-dm-remove-hybrid

[szampini@localhost petsc]$ make -v
GNU Make 4.2.1
Built for x86_64-redhat-linux-gnu
Copyright (C) 1988-2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
<http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Globsearch fails for me when running tests

2020-03-25 Thread Scott Kruger




What platform?

On 3/25/20 3:20 PM, Stefano Zampini wrote:

This was working before..

[szampini@localhost petsc]$ make -f gmakefile.test test globsearch='dm*'
make: execvp: /usr/bin/sh: Argument list too long
Using MAKEFLAGS: -- globsearch=dm*
# No tests run
# No tests run
# No tests run

[szampini@localhost petsc]$ git branch
* knepley/feature-dm-remove-hybrid

[szampini@localhost petsc]$ make -v
GNU Make 4.2.1
Built for x86_64-redhat-linux-gnu
Copyright (C) 1988-2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 


This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] ccache tips?

2020-01-17 Thread Scott Kruger



I didn't realize brew put the links in libexec.   I did it manually like 
in this tutorial:


https://software.intel.com/en-us/articles/accelerating-compilation-part-1-ccache

This tutorial discusses the size of the cache.  I made mine too small 
when I first set it up.


I like Jed's mpi method as the things I want cached the most are the 
parallel stuff -- I don't want to cache the serial externalbuilds.


Scott

On 1/17/20 10:23 AM, Balay, Satish via petsc-dev wrote:

I have ccache setup automatically on my linux box.

balay@sb /home/balay
$ which gcc
/usr/lib64/ccache/gcc

i.e. the easiest thing to do is update PATH. For example, on OSX [where it's 
not automatically set up] I have it installed via brew and:

export PATH=/usr/local/opt/ccache/libexec:$PATH

Satish



On Fri, 17 Jan 2020, Patrick Sanan wrote:


I'm shamefully not using ccache. How do I do it? Is it as simple as ./configure 
--with-cc="ccache gcc" --with-cxx="ccache g++"? Works on OS X and various 
Linuxes? Any known issue with external packages or otherwise?




--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Fortran equivalent + separate output with output_file

2020-01-13 Thread Scott Kruger




On 1/13/20 8:32 AM, Pierre Jolivet wrote:

Hello,
This is actually two separate questions, sorry.
1) I’m looking for the Fortran equivalent of the following, but I couldn’t get 
any help looking at the sources.
   ierr = PetscOptionsBegin(PETSC_COMM_WORLD,"","","");CHKERRQ(ierr);
   ierr = PetscOptionsFList("-mat_type","Matrix 
type","MatSetType",MatList,deft,type,256,);CHKERRQ(ierr);
   ierr = PetscOptionsEnd();CHKERRQ(ierr);
2) I have Fortran tests which share the same outputs as my C tests. I want to 
use the same output_file, but my test has a separate output parameter. Is there 
someway to generate output_file dynamically?
!   test:
!  suffix: foo
!  output_file: output/ex76_foo_bar-%D.out <— how to?
!  nsize: 4
!  args: -bar {{5 15}separate output}
If it’s not possible in Fortran, but possible in C, I can switch things around 
of course.


I don't understand the goal here.  Why don't you always know the name?

Scott

--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] How to replace floating point numbers in test outputs?

2020-01-05 Thread Scott Kruger




It's a bug.  Fixed in MR !2428

Scott


On 1/4/20 1:27 PM, Smith, Barry F. wrote:


   Yes - since differences in floating point are considered not a change, 
REPLACE, which only updates files with changes, won't update them.

   I don't understand the output below, looks identical to me, why is it a diff?

   Scott, Perhaps when DIFF_NUMBERS=1 is given the REPLACE should replace if 
the numbers are different?

Barry



On Jan 4, 2020, at 12:37 PM, Stefano Zampini  wrote:

I would like to overwrite floating point numbers in tests outputs. I remember 
in the past we could just pass REPLACE=1; it seems it is not possible with the 
current master.

I tried the command below (this is a test which produces different floating 
point values) with no success. Any suggestion on how we can do this?

$ make -f gmakefile.test test globsearch="mat*ex5_12_B" DIFF_NUMBERS=1 REPLACE=1
Using MAKEFLAGS: REPLACE=1 DIFF_NUMBERS=1 globsearch=mat*ex5_12_B
 TEST arch-debug/tests/counts/mat_tests-ex5_12_B.counts
  ok mat_tests-ex5_12_B
not ok diff-mat_tests-ex5_12_B # Error code: 1
#   42,43c42,43
#   < 0.
#   < 0.
#   ---
#   > 1.
#   > 1.
#   42,43c42,43
#   < 0.
#   < 0.
#   ---
#   > 1.
#   > 1.

# -
#   Summary
# -
# FAILED diff-mat_tests-ex5_12_B
# success 1/2 tests (50.0%)
# failed 1/2 tests (50.0%)
# todo 0/2 tests (0.0%)
# skip 0/2 tests (0.0%)
#
# Wall clock time for tests: 0 sec
# Approximate CPU time (not incl. build time): 0.05 sec
#
# To rerun failed tests:
# /usr/bin/make -f gmakefile test test-fail=1
#
# Timing summary (actual test time / total CPU time):
#   mat_tests-ex5_12_B: 0.03 sec / 0.05 sec






--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Valgrind problems

2019-12-08 Thread Scott Kruger





Once MR!2329 is merged in, all of these test harness annoyances should 
be fixed.  It wasn't a single issue, but rather a bunch of stdout/stderr 
and error code issues that were fixed in multiple MR's.


Scott


On 12/8/19 12:11 AM, Smith, Barry F. wrote:




On Dec 7, 2019, at 6:00 PM, Matthew Knepley  wrote:

Nope, you are right.

   Thanks,

 Matt

On Sat, Dec 7, 2019 at 6:19 PM Balay, Satish  wrote:
The fix for this is in 363424266cb675e6465b4c7dcb06a6ff8acf57d2

Do you have this commit in your branch - and still seeing issues?

Satish

On Sat, 7 Dec 2019, Patrick Sanan wrote:


I was actually wondering about this, as in some cases valgrind errors appear 
and sometimes they don't, but I didn't dig into it too deeply.

Here's my workaround, FWIW, which shows some output for that test on master.

I don't see any output when I just run the tests like this:

 VALGRIND=1 make -f gmakefile.test test 
globsearch="dm_impls_plex_tests-ex1_fluent_2"

But I do see something if I do this to find any non-empty .err files:

 find $PETSC_ARCH/tests -name *.err ! -size 0

And then I see these valgrind warnings after copy-pasting the path:


   This is really bad. Perhaps it is now fixed in master, but obviously it is 
crucial that all errors and valgrind errors are always visible in logging; 
otherwise we drive ourselves nuts chasing ghosts.

Barry







$ cat 
arch-master-extra-opt/tests/dm/impls/plex/examples/tests/runex1_fluent_2/runex1_fluent_2.err
==4990== Conditional jump or move depends on uninitialised value(s)
==4990==at 0x4C3705A: rawmemchr (in 
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==4990==by 0x7687351: _IO_str_init_static_internal (strops.c:41)
==4990==by 0x767878C: vsscanf (iovsscanf.c:40)
==4990==by 0x76721A3: sscanf (sscanf.c:32)
==4990==by 0x588147D: DMPlexCreateFluent_ReadSection (plexfluent.c:105)
==4990==by 0x5882E3A: DMPlexCreateFluent (plexfluent.c:246)
==4990==by 0x588492F: DMPlexCreateFluentFromFile (plexfluent.c:30)
==4990==by 0x577E2B7: DMPlexCreateFromFile (plexcreate.c:3254)
==4990==by 0x10BEDF: CreateMesh (ex1.c:170)
==4990==by 0x10A20B: main (ex1.c:430)
==4990==  Uninitialised value was created by a stack allocation
==4990==at 0x5881313: DMPlexCreateFluent_ReadSection (plexfluent.c:96)
==4990==
==4990== Use of uninitialised value of size 8
==4990==at 0x766276F: _IO_vfscanf (vfscanf.c:633)
==4990==by 0x767879C: vsscanf (iovsscanf.c:41)
==4990==by 0x76721A3: sscanf (sscanf.c:32)
==4990==by 0x588147D: DMPlexCreateFluent_ReadSection (plexfluent.c:105)
==4990==by 0x5882E3A: DMPlexCreateFluent (plexfluent.c:246)
==4990==by 0x588492F: DMPlexCreateFluentFromFile (plexfluent.c:30)
==4990==by 0x577E2B7: DMPlexCreateFromFile (plexcreate.c:3254)
==4990==by 0x10BEDF: CreateMesh (ex1.c:170)
==4990==by 0x10A20B: main (ex1.c:430)
==4990==  Uninitialised value was created by a stack allocation
==4990==at 0x5881313: DMPlexCreateFluent_ReadSection (plexfluent.c:96)
==4990==
==4990== Conditional jump or move depends on uninitialised value(s)
==4990==at 0x766277B: _IO_vfscanf (vfscanf.c:630)
==4990==by 0x767879C: vsscanf (iovsscanf.c:41)
==4990==by 0x76721A3: sscanf (sscanf.c:32)
==4990==by 0x588147D: DMPlexCreateFluent_ReadSection (plexfluent.c:105)
==4990==by 0x5882E3A: DMPlexCreateFluent (plexfluent.c:246)
==4990==by 0x588492F: DMPlexCreateFluentFromFile (plexfluent.c:30)
==4990==by 0x577E2B7: DMPlexCreateFromFile (plexcreate.c:3254)
==4990==by 0x10BEDF: CreateMesh (ex1.c:170)
==4990==by 0x10A20B: main (ex1.c:430)
==4990==  Uninitialised value was created by a stack allocation
==4990==at 0x5881313: DMPlexCreateFluent_ReadSection (plexfluent.c:96)
==4990==

Am 07.12.2019 um 21:45 schrieb Matthew Knepley :

I am trying to clean up valgrind errors. However this one

   dm_impls_plex_tests-ex1_fluent_2

is valgrind clean on my machine. Does anyone get it to output something?

   Thanks,

  Matt

--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 







--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/




--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


[petsc-dev] Gitlab notifications and labels

2019-11-14 Thread Scott Kruger via petsc-dev




In a conversation with Barry, he mentioned that we hadn't
really discussed label subscription on this mailing list,
despite the fact that this is perhaps a more useful control
of notifications than what is given in Settings.

Following up on this discussion, if folks go here:
  https://gitlab.com/petsc/petsc/-/labels
you can then subscribe to the labels that you want
notifications for.  This is potentially the best method
for subscribing to PETSc development notifications, and
better than just the *Participate* level, which I had advocated
for earlier.  I'll now advocate for *Participate* globally,
and label subscription locally.

Of course, this requires all folks who start an MR/Issue
to use labels appropriately.  But if you want a response,
you should expect to need labels to get it.

And of course, this requires accurate labels.  For example,
I notice that there is no label for folks developing
DM, even though that is, perhaps, the most active area of
development.  Do you require a new label?  (asks the
not-a-dm-dev).

Scott

P.S.  This means that I will stop using @person for GPU
discussions and just use the GPU label. Subscribe now GPU devs!



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] I think the test system is broken in master

2019-10-23 Thread Scott Kruger via petsc-dev




Thanks for the debugging, Satish -- I was very confused because I 
recently changed petscdiff, and I assumed I had re-bugged it.


On 10/23/19 3:10 PM, Balay, Satish wrote:

On Wed, 23 Oct 2019, Balay, Satish via petsc-dev wrote:


On Wed, 23 Oct 2019, Matthew Knepley via petsc-dev wrote:


On Wed, Oct 23, 2019 at 4:20 PM Balay, Satish  wrote:

I am not saying the master branch tests are failing. I am saying that
running the
test system with REPLACE=1 is no longer working correctly.


ok.


On Wed, 23 Oct 2019, Matthew Knepley via petsc-dev wrote:


I just rebased my branch on master, and now with REPLACE=1 I am getting

not ok diff-ksp_ksp_tutorials-ex70_fetidp # Error code: 1
#   1,2d0
#   <   DMSWARM_PIC: Using method CellDM->LocatePoints
#   <   DMSWARM_PIC: Using method CellDM->GetNeigbors
#   mv'ing ex70_fetidp.tmp -->


/var/folders/hk/tc1pd0g57l78rpt0lttc1sfhgn/T/petscdiff.XX.fjHMBkGr


the destination of the 'mv' is wrong. So likely a bug in REPLACE wrt testset
[as non-testset example is working fine for me]


Actually the issue is with filter_output. And since 'mv' is done by petscdiff - 
it doesn't really work here..

"filter_output: grep -v atomic" gives: diff-ksp_ksp_tutorials-ex70_fetidp.sh
grep -v atomic 
/home/balay/petsc/src/ksp/ksp/examples/tutorials/output/ex70_fetidp.out | 
/home/balay/petsc/lib/petsc/bin/petscdiff -m - ex70_fetidp.tmp > 
diff-runex70_fetidp.out 2> diff-runex70_fetidp.out

[without filter_output:]

/home/balay/petsc/lib/petsc/bin/petscdiff -m 
/home/balay/petsc/src/ksp/ksp/examples/tutorials/output/ex70_fetidp.out 
ex70_fetidp.tmp > diff-runex70_fetidp.out 2> diff-runex70_fetidp.out


Perhaps the "filter_output" codepath should be skipped when REPLACE=1 is specified 
[while invoking the diff]..


I don't understand how to fit this into the test harness.

Perhaps rather than trying to shoehorn that in, we could have a 
separate python script, `replace_test_output.py`, that does it.

It is more manual, but more straightforward; e.g.,

config/replace_test_output.py 
--testdir=$PETSC_ARCH/tests/ksp/ksp/examples/tutorials
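A minimal sketch of what such a helper could look like (the script name comes from the suggestion above; the directory layout, function name, and flags here are assumptions, not actual PETSc harness code). It copies each generated *.tmp file under a test directory over the matching expected-output file, sidestepping the filter_output path entirely:

```python
"""Hypothetical sketch of a replace_test_output.py helper."""
import glob
import os
import shutil

def replace_outputs(testdir, outputdir):
    """Copy <testdir>/*/<name>.tmp onto <outputdir>/<name>.out."""
    for tmp in sorted(glob.glob(os.path.join(testdir, '*', '*.tmp'))):
        base = os.path.splitext(os.path.basename(tmp))[0]
        dest = os.path.join(outputdir, base + '.out')
        shutil.copyfile(tmp, dest)
        # Mirror the harness's "mv'ing src --> dest" progress message.
        print("mv'ing %s --> %s" % (tmp, dest))
```

Because it runs after the fact, no filtering is involved: the raw .tmp simply becomes the new reference output.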



I always hated the filter_output feature...

Scott



Satish



[and not something that recently broke - so rebase with latest master
was not a factor - as v3.12 also has this breakage]

Scott might have to take a look at this..

Satish





--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Wrong "failed tests" command

2019-10-21 Thread Scott Kruger via petsc-dev




When we created the map of directory+test+variants to targets, we did not 
use enough delimiters to allow the inverse map to be determined, so 
creating the proper list of target names is an ill-posed problem.   The 
globsearch was a way of trying to just catch more of them, but obviously 
it still has a bug.  I'll look at it.
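To illustrate why the inverse map is ill-posed, here is a toy parser (not PETSc code) applied to one of the target names from the thread below. Both '_' and '-' appear inside option names and values, so any naive split misassigns tokens:

```python
"""Illustration of the delimiter ambiguity in flattened test names:
'-' separates keys from values, but option names like 'mat_block_size'
contain '_' internally, so the parse cannot be inverted reliably."""

def naive_split(target):
    # Guess: first '-' separates the suite from the rest, and '_'
    # separates variant tokens.  This is exactly the heuristic that
    # breaks without extra delimiters.
    suite, rest = target.split('-', 1)
    return suite, rest.split('_')

print(naive_split('mat_tests-ex37_nsize-2_mat_type-mpibaij_mat_block_size-1'))
# 'mat_type-mpibaij' is shredded into 'mat' and 'type-mpibaij': option
# names cannot be told apart from values.
```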


Scott


On 10/21/19 12:47 PM, Jed Brown wrote:

Yeah, it's missing the numeric block size.  The following works

/usr/bin/make -f gmakefile test globsearch='mat_tests-ex128_1 
mat_tests-ex37_nsize-2_mat_type-mpibaij_mat_block_size-1'

Also, globsearch can be replaced by search in this usage.

"Smith, Barry F. via petsc-dev"  writes:


   May need more work on the tester infrastructure?


On Oct 21, 2019, at 12:30 PM, Pierre Jolivet via petsc-dev 
 wrote:

Hello,
In this pipeline build log, https://gitlab.com/petsc/petsc/-/jobs/326525063, it 
shows that I can rerun failed tests using the following command:
/usr/bin/make -f gmakefile test globsearch='mat_tests-ex128_1 
mat_tests-ex37_nsize-2_mat_type-mpibaij_mat_block_size 
mat_tests-ex37_nsize-1_mat_type-mpibaij_mat_block_size mat_tests-ex128_2 
mat_tests-ex37_nsize-2_mat_type-baij_mat_block_size 
mat_tests-ex37_nsize-1_mat_type-baij_mat_block_size mat_tests-ex30_4 
mat_tests-ex37_nsize-2_mat_type-sbaij_mat_block_size mat_tests-ex18_* 
mat_tests-ex76_3 mat_tests-ex37_nsize-2_mat_type-mpisbaij_mat_block_size 
mat_tests-ex37_nsize-1_mat_type-sbaij_mat_block_size'

If used, this command does not run any of the mat_tests-ex37* tests.

Thanks,
Pierre


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] test harness: output of actually executed command for V=1 gone?

2019-10-02 Thread Scott Kruger via petsc-dev




In MR !2138 I have this target as show-fail  which I think is more 
descriptive.


config/report_tests.py -f
is what's done directly.

I made it such that one can copy and paste, but it might be too verbose.

Scott


On 9/20/19 8:53 PM, Jed Brown wrote:

"Smith, Barry F."  writes:


Satish and Barry:  Do we need the Error codes or can I revert to previous 
functionality?


   I think it is important to display the error codes.

   How about displaying at the bottom how to run the broken tests? You already 
show how to run them with the test harness, you could also print how to run 
them directly? Better then mixing it up with the TAP output?


How about a target for it?

make -f gmakefile show-test search=abcd

We already have print-test, which might more accurately be named ls-test.



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Mixing separate and shared ouputs

2019-10-02 Thread Scott Kruger via petsc-dev




Fixed in MR# 2138
https://gitlab.com/petsc/petsc/merge_requests/2138

Thanks for the report.

Scott


On 9/28/19 3:44 AM, Pierre Jolivet via petsc-dev wrote:

Hello,
If I put something like this in src/ksp/ksp/examples/tutorials/ex12.c
   args: -ksp_gmres_cgs_refinement_type refine_always -ksp_type {{cg 
gmres}separate output} -pc_type {{jacobi bjacobi lu}separate output}
I get
# success 9/13 tests (69.2%)

Now
   args: -ksp_gmres_cgs_refinement_type refine_always -ksp_type {{cg 
gmres}shared output} -pc_type {{jacobi bjacobi lu}shared output}
Still gives me
# success 9/13 tests (69.2%)

But
   args: -ksp_gmres_cgs_refinement_type refine_always -ksp_type {{cg 
gmres}shared output} -pc_type {{jacobi bjacobi lu}separate output}
Gives me
# success 6/7 tests (85.7%)

Is this the expected behavior?
Any easy way to get 13 tests as well?

Thanks,
Pierre
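For reference, each {{...}} loop contributes a factor to the Cartesian product of run cases; a toy sketch (assumed semantics, not the real generator) of why the cg/gmres × jacobi/bjacobi/lu block yields six argument combinations, with the remaining test counts above coming from how the harness pairs runs with diffs per output mode:

```python
"""Sketch of {{a b}} loop expansion into distinct run cases."""
import itertools

ksp_types = ['cg', 'gmres']             # {{cg gmres}}
pc_types = ['jacobi', 'bjacobi', 'lu']  # {{jacobi bjacobi lu}}

cases = ['-ksp_type %s -pc_type %s' % (k, p)
         for k, p in itertools.product(ksp_types, pc_types)]
print(len(cases))  # 6 distinct argument combinations
```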



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] TAP file and testing error

2019-09-26 Thread Scott Kruger via petsc-dev





My summary is that we can just do the easiest fix then and have 
test_tap.log and test_err.log


Scott


On 9/26/19 9:28 AM, Balay, Satish wrote:

On Thu, 26 Sep 2019, Scott Kruger via petsc-dev wrote:




On 9/26/19 12:45 AM, Stefano Zampini wrote:

You usually get the trailing slash when you're trying to be quick and
tab-complete the command :-)


Yes, I agree that the makefile should fix this.


and, so far, PETSc accepted this. We should either filter the variable in
the makefile, or change the filenames with their paths.
Scott, are the two PETSC_ARCH specifications really needed in the filename?
I mean first as a folder, then in the filename itself


Barry or Satish should confirm, but as I recall the idea is that if you are
collecting the log files into a dashboard, then the PETSC_ARCH labeling
scheme is best.


Well previously - for the dashboard - we did the rename during the copy [from 
build location to the dashboard].

If I remember correctly only one generated file had PETSC_ARCH encoded
- that is PETSC_ARCH/lib/petsc/config/reconfigure-PETSC_ARCH.py - so
that its easy to copy/save/reuse this script.

Its not clear to me what type of dashboard we can have in the future -
and how to get the logs there [beyond the current one at gitlab at
https://gitlab.com/petsc/petsc/pipelines]

Satish



Pierre Jolivet has a bug report in, and I have the show-test functionality
in, so I'll try to get a new MR out soon.

Scott




Il giorno gio 26 set 2019 alle ore 09:38 Matthew Knepley mailto:knep...@gmail.com>> ha scritto:

 On Wed, Sep 25, 2019 at 8:52 PM Stefano Zampini via petsc-dev
 mailto:petsc-dev@mcs.anl.gov>> wrote:

 If we specify a PETSC_ARCH with a trailing slash, the current
 testing makefile fails. Can this be fixed?


 PETSC_ARCH is a string name and not necessarily a directory. I think
 we should check and fail if it has a slash.

     Matt

 *zampins@vulture*:*~/Devel/petsc*$ make -f gmakefile.test test
 globsearch="*densecuda*" PETSC_ARCH=arch-gpu-double-unifmem/

 touch: cannot touch
 
'./arch-gpu-double-unifmem//tests/test_arch-gpu-double-unifmem/_tap.log':
 No such file or directory

 touch: cannot touch
 
'./arch-gpu-double-unifmem//tests/test_arch-gpu-double-unifmem/_err.log':
 No such file or directory--
 Stefano



 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which
 their experiments lead.
 -- Norbert Wiener

 https://www.cse.buffalo.edu/~knepley/
 <http://www.cse.buffalo.edu/~knepley/>



--
Stefano





--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] TAP file and testing error

2019-09-26 Thread Scott Kruger via petsc-dev




On 9/26/19 12:45 AM, Stefano Zampini wrote:
You usually get the trailing slash when you're trying to be quick and 
tab-complete the command :-)


Yes, I agree that the makefile should fix this.

and, so far, PETSc accepted this. We should either filter the variable 
in the makefile, or change the filenames with their paths.
Scott, are the two PETSC_ARCH specifications really needed in the 
filename? I mean first as a folder, then in the filename itself


Barry or Satish should confirm, but as I recall the idea is that if you 
are collecting the log files into a dashboard, then the PETSC_ARCH 
labeling scheme is best.


Pierre Jolivet has a bug report in, and I have the show-test 
functionality in, so I'll try to get a new MR out soon.


Scott




Il giorno gio 26 set 2019 alle ore 09:38 Matthew Knepley 
mailto:knep...@gmail.com>> ha scritto:


On Wed, Sep 25, 2019 at 8:52 PM Stefano Zampini via petsc-dev
mailto:petsc-dev@mcs.anl.gov>> wrote:

If we specify a PETSC_ARCH with a trailing slash, the current
testing makefile fails. Can this be fixed?


PETSC_ARCH is a string name and not necessarily a directory. I think
we should check and fail if it has a slash.

    Matt
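The check suggested here could be sketched in POSIX shell as follows (hypothetical, not the actual gmakefile logic): normalize PETSC_ARCH by stripping a trailing slash, and refuse any remaining slash, since PETSC_ARCH is a label rather than a path:

```shell
# Sketch of a PETSC_ARCH sanity check (assumed behavior, not PETSc code).
PETSC_ARCH="arch-gpu-double-unifmem/"
PETSC_ARCH="${PETSC_ARCH%/}"   # drop one trailing slash
case "$PETSC_ARCH" in
  */*) echo "error: PETSC_ARCH must not contain '/'" >&2; exit 1 ;;
esac
echo "$PETSC_ARCH"
```

In GNU make itself the equivalent strip would be something like `PETSC_ARCH := $(patsubst %/,%,$(PETSC_ARCH))` before any file paths are built from it.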

*zampins@vulture*:*~/Devel/petsc*$ make -f gmakefile.test test
globsearch="*densecuda*" PETSC_ARCH=arch-gpu-double-unifmem/

touch: cannot touch

'./arch-gpu-double-unifmem//tests/test_arch-gpu-double-unifmem/_tap.log':
No such file or directory

touch: cannot touch

'./arch-gpu-double-unifmem//tests/test_arch-gpu-double-unifmem/_err.log':
No such file or directory--
Stefano



-- 
What most experimenters take for granted before they begin their

experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/




--
Stefano


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] TAP file and testing error

2019-09-25 Thread Scott Kruger via petsc-dev




Can you try rerunning after removing the trailing slash from PETSC_ARCH?

Scott


On 9/25/19 1:51 PM, Stefano Zampini wrote:
If we specify a PETSC_ARCH with a trailing slash, the current testing 
makefile fails. Can this be fixed?


*zampins@vulture*:*~/Devel/petsc*$ make -f gmakefile.test test 
globsearch="*densecuda*" PETSC_ARCH=arch-gpu-double-unifmem/


touch: cannot touch 
'./arch-gpu-double-unifmem//tests/test_arch-gpu-double-unifmem/_tap.log': No 
such file or directory


touch: cannot touch 
'./arch-gpu-double-unifmem//tests/test_arch-gpu-double-unifmem/_err.log': No 
such file or directory--

Stefano


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] test harness: output of actually executed command for V=1 gone?

2019-09-20 Thread Scott Kruger via petsc-dev






On 9/20/19 2:49 PM, Jed Brown wrote:

Hapla  Vaclav via petsc-dev  writes:


On 20 Sep 2019, at 19:59, Scott Kruger 
mailto:kru...@txcorp.com>> wrote:


On 9/20/19 10:44 AM, Hapla Vaclav via petsc-dev wrote:
I used to copy the command actually run by the test harness, change to the 
example's directory, and paste the command (just changing .. to ., e.g. 
../ex1 to ./ex1).
Is this output gone? Bad news. I think there should definitely be an option to 
quickly reproduce the test run to work on failing tests.

I only modified the V=0 option to suppress the TAP 'ok' output.

I think you are referring to the 'not ok' now giving the error code instead of 
the cmd, which is now true regardless of V.  This was suggested by others.  I 
defer to the larger group on what's desired here.

Note that it is sometimes tedious to deduce the whole command line from the test 
declarations, for example because of multiple args: lines.

Personally, I recommend just cd'ing into the test directory and running the 
scripts by hand.

For example:
cd $PETSC_ARCH/tests/ksp/ksp/examples/tests/runex22
cat ksp_ksp_tests-ex22_1.sh
mpiexec  -n 1 ../ex22   > ex22_1.tmp 2> runex22.err

OK, this takes a bit more time but does the job.


That's yucky.  I think we should have an option to print the command(s)
that would be run, one line per expanded {{a b c}}, so we can copy-paste
into the terminal with only one step of indirection.
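A rough sketch of the expansion Jed describes (hypothetical helper, not harness code): split out each {{a b c}} loop from an args string and emit one complete command line per combination, ready to copy-paste:

```python
"""Expand {{a b c}} loop constructs into one line per combination."""
import itertools
import re

def expand_args(args):
    # re.split with a capture group interleaves literal text (even
    # indices) with loop bodies (odd indices).
    parts = re.split(r'\{\{([^}]*)\}\}', args)
    loops = [parts[i].split() for i in range(1, len(parts), 2)]
    for combo in itertools.product(*loops):
        it = iter(combo)
        yield ''.join(next(it) if i % 2 else p
                      for i, p in enumerate(parts))

for cmd in expand_args('-ksp_type {{cg gmres}} -pc_type {{jacobi lu}}'):
    print(cmd)
```

Prefixing each yielded line with the mpiexec/executable invocation would give the exact commands to paste into a terminal.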


Ugh.  I'm dealing with bash at this level - not python.

Satish and Barry:  Do we need the Error codes or can I revert to 
previous functionality?


Scott


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] test harness: output of actually executed command for V=1 gone?

2019-09-20 Thread Scott Kruger via petsc-dev






On 9/20/19 10:44 AM, Hapla Vaclav via petsc-dev wrote:

I used to copy the command actually run by the test harness, change to the 
example's directory, and paste the command (just changing .. to ., e.g. 
../ex1 to ./ex1).

Is this output gone? Bad news. I think there should definitely be an option to 
quickly reproduce the test run to work on failing tests.


I only modified the V=0 option to suppress the TAP 'ok' output.

I think you are referring to the 'not ok' now giving the error code 
instead of the cmd, which is now true regardless of V.  This was 
suggested by others.  I defer to the larger group on what's desired here.




Note that it is sometimes tedious to deduce the whole command line from the test 
declarations, for example because of multiple args: lines.


Personally, I recommend just cd'ing into the test directory and running 
the scripts by hand.


For example:
cd $PETSC_ARCH/tests/ksp/ksp/examples/tests/runex22
cat ksp_ksp_tests-ex22_1.sh
mpiexec  -n 1 ../ex22   > ex22_1.tmp 2> runex22.err

or

cd $PETSC_ARCH/tests/ksp/ksp/examples/tests
./runex22.sh
 ok ksp_ksp_tests-ex22_1
 ok diff-ksp_ksp_tests-ex22_1



Scott



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Gitlab notifications

2019-09-12 Thread Scott Kruger via petsc-dev




Here's what I did:

Settings -> Notifications -> developers + Participate

The default is "Global".  "Participate" is what I'm using now.

There is a "Custom", but it confuses me since it says you can
use it to match "Participate", but you can't do something like:
Email all new issues, but only show me the MRs I am mentioned
in or own.

Scott


On 9/12/19 7:39 AM, Balay, Satish via petsc-dev wrote:

On Thu, 12 Sep 2019, Jed Brown via petsc-dev wrote:


Matthew Knepley via petsc-dev  writes:


On Thu, Sep 12, 2019 at 9:05 AM Balay, Satish via petsc-dev <
petsc-dev@mcs.anl.gov> wrote:


When a new MR is created, approval rules default to 'Integration' and
'Team'

So everyone in the team probably receives emails on all MRs. Now that
we have CODEOWNERS setup - perhaps the Team should be removed?



Can you explain CODEOWNERS to me? I cannot find it on the GItlab site. I
want to see every MR.


https://docs.gitlab.com/ee/user/project/code_owners.html

We currently require approval from Integration (of which you are a
member) and a code owner (as specified in the file).

We used to have optional approvals from any other developer, but Satish
just removed that due to this notification thing, which I guess means
that any other developer (non-integrator, non-owner) should just comment
their approval if they find time to review.


Ah - forgot the primary purpose of having 'Team' in the 'approve' list.
Is there a way to disable notifications for team  - unless they participate?

Satish



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] args loop in testset

2019-06-26 Thread Scott Kruger via petsc-dev




Thanks for the report Jakub.

I have a backlog of fixes to put into a PR, but am about to leave for 
vacation so it'll probably be a couple of weeks.


Scott


On 6/26/19 5:16 AM, Jakub Kruzik via petsc-dev wrote:

Hello,

args loop in test in testset does not insert a space after the argument. 
E.g., changing


test:
   args: -bs {{1 2 3 4 5 6 7 8 9 10 11 12}} -pc_type cholesky

into:

testset:
test:
     args: -bs {{1 2 3 4 5 6 7 8 9 10 11 12}} -pc_type cholesky

in ksp/ksp/examples/tests/ex49.c

Gives error:
#   [0]PETSC ERROR: Argument out of range
#   [0]PETSC ERROR: Input string 1-pc_type has no integer value (do 
not include . in it)
#   [0]PETSC ERROR: Petsc Development GIT revision: 
v3.11.2-1093-g080cba1312  GIT Date: 2019-06-26 10:52:54 +0200
#   [0]PETSC ERROR: #1 PetscOptionsStringToInt() line 1946 in 
/home/jakub/devel/petsc/petsc/src/sys/objects/options.c
#   [0]PETSC ERROR: #2 PetscOptionsGetInt() line 2282 in 
/home/jakub/devel/petsc/petsc/src/sys/objects/options.c
#   [0]PETSC ERROR: #3 main() line 18 in 
/home/jakub/devel/petsc/petsc/src/ksp/ksp/examples/tests/ex49.c

#   [0]PETSC ERROR: PETSc Option Table entries:
#   [0]PETSC ERROR: -bs 1-pc_type


All the best,

Jakub
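The error table shows the loop value fused with the next option ("-bs 1-pc_type"). A toy reproduction of the presumed substitution bug (the generator internals here are assumptions; only the symptom comes from the report):

```python
"""Toy model of the testset args-loop bug: the substitution consumed
the space after the {{...}} construct, fusing the loop value with the
following option."""
args_template = '-bs {{loop}} -pc_type cholesky'

def substitute(template, value, keep_space=True):
    # The buggy path effectively replaced '{{loop}} ' (with the space),
    # while the correct path replaces only '{{loop}}'.
    pattern = '{{loop}}' if keep_space else '{{loop}} '
    return template.replace(pattern, value)

print(substitute(args_template, '1', keep_space=False))
# → '-bs 1-pc_type cholesky'  (fused, as in the error report)
print(substitute(args_template, '1'))
# → '-bs 1 -pc_type cholesky' (correct)
```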


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] better regular testing on accelerators

2019-06-12 Thread Scott Kruger via petsc-dev





I think trying to push logic of the type 'requires: cuda' or
'requires: !cuda' to be implicit rather than explicit is a bad idea,
if it comes to that.

Scott


On 6/12/19 9:03 AM, Smith, Barry F. via petsc-dev wrote:




On Jun 12, 2019, at 9:58 AM, Jed Brown  wrote:

Would it be sufficient to add the CUDA arguments to PETSC_OPTIONS when running 
the test suite on those machines?


   You mean -vec_type cuda -mat_type cuda -dm_vec_type cuda -dm_mat_type cuda ? 
We should definitely try it, it may break something we'll see. It will miss all 
the code that uses directly MatCreateAIJ() but then maybe we should change 
that code :-)


   Barry
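As a sketch, the environment-based approach could look like this (option names exactly as Barry lists them; whether every example tolerates them is precisely the open question):

```shell
# Inject accelerator types through the environment so the whole test
# suite picks them up without per-test args (sketch, not PETSc CI code).
PETSC_OPTIONS="-vec_type cuda -mat_type cuda -dm_vec_type cuda -dm_mat_type cuda"
export PETSC_OPTIONS
echo "$PETSC_OPTIONS"
# The suite would then be invoked as usual, e.g.:
#   make -f gmakefile test search='ksp*'
```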



"Smith, Barry F. via petsc-dev"  writes:


   In order to get better testing on the accelerators I think we need to 
abandon the -vec_type cuda approach scattered through a handful of examples and 
instead test ALL examples that are feasible automatically with the various 
accelerator options.  I think essentially any examples that use AIJ are 
feasible for testing (those that use BAIJ, SBAIJ, Ell are not) I am not sure if 
there is an automatic way to determine all of these cases. Labeling all such 
test cases manually would likely miss some and be out of date immediately.

   Any thoughts?

   Thanks

   Barry

Maybe we could short circuit the issue by having a mode of configuring or 
compiling or running where MatCreate_AIJ and VecCreate_ are bypassed directly 
to the accelerator cousins (this is not completely trivial because the cousins 
generally construct the basic ones and then convert themselves to the cousin).




--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] User(s) manual sections field in manual pages?

2019-06-12 Thread Scott Kruger via petsc-dev




So many projects use it (including the linux kernel, moving away from
bookdown, says wikipedia)


They switched from Docbook to rst.

   https://www.kernel.org/doc/html/latest/


Is that the default C autodoc extension, or hawkmoth?

https://hawkmoth.readthedocs.io/en/latest/extension.html




Sphinx supports a search dialog, but it would be a lot nicer if it would
autocomplete.


https://pypi.org/project/sphinxcontrib-lunrsearch/

Haven't tried it in any of my projects though.

Scott


P.S.  The default search is barely adequate.
There are extensions for elasticsearch and
another one called whoosh, but I haven't tried those
either (there used to be a plugin for sphinxsearch, but
googling for the sphinxsearch plugin for Sphinx, the
documentation package, seems impossible).

P.P.S.  I like rst and sphinx, if you can't tell, but it
does take some work to integrate them into a particular
workflow, and sometimes extensions beyond the builtins
are needed to get them to do what you want.

See these:
https://github.com/yoloseem/awesome-sphinxdoc
https://sphinxext-survey.readthedocs.io/en/latest/

For vim users, riv.vim is very nice.


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Fwd: [DL-interest] The missing piece in deep learning?

2019-05-17 Thread Scott Kruger via petsc-dev






On 5/16/19 4:57 PM, Mills, Richard Tran via petsc-dev wrote:
Interesting. I expect that Google will be convinced to rewrite 
Tensorflow entirely in Fortran now.


Seriously, I'm not sure what the motivation for this particular project 
is. Is it that people are tired of reading in their data using some nice 
Python tool and would rather be using old school fixed-format data 
routines in Fortran? =)



Ken Thompson, Turing Award Lecture:

In college, before video games, we would amuse ourselves by
posing programming exercises. One of the favorites was to
write the shortest self-reproducing program. Since this
is an exercise divorced from reality, the usual vehicle
was FORTRAN. Actually, FORTRAN was the language of
choice for the same reason that three-legged
races are popular.


Re: [petsc-dev] testing in parallel

2019-05-06 Thread Scott Kruger via petsc-dev




@bsmith -- this long message addresses your other messages as well.


Regarding why the different `'make -jXX` gives different number of
tests depending on the value of XX:

This issue here is how we handle failed tests.
The general paradigm is in the shell script is:


   petsc_testrun "${mpiexec} -n ${nsize} ${exec} ...

   res=$?

   if test $res = 0; then
  petsc_testrun "${diff_exe} ...
   else
  printf "ok ${label} # SKIP Command failed so no diff\n"
   fi



If we fail, then we skip the diff and don't record that test, since it is 
skipped (we don't report SKIPs or TODOs by default).
In other words, a successful invocation of a run will have 2 test 
(running and diffing), but a failure of the running will only have 1 
test (just the running, which will be a failure).


So the real question is:
  Why does the "-j20" case have more failures in running?

These are problems that Barry has been reporting both here and in
private email messages.

For my tests, these are the mat tests that fail with a make -j20:
mat_tests-ex23_10
mat_tests-ex23_2
mat_tests-ex23_3
mat_tests-ex23_4
mat_tests-ex23_5
mat_tests-ex23_9
mat_tests-ex23_vscat_default
mat_tests-ex23_vscat_sf
mat_tutorials-ex12_1


All of the mat_tests-ex23_* tests fail with timeouts.

mat_tutorials-ex12_1 fails without any stderr.  I don't know what's 
going on really.


Barry has reported hard crashes that don't occur when running the script 
by hand.  I assume that this is related to lack of resources when 
running in parallel, but it's speculative.



I am surprised at how consistent this seems to be -- the differences are 
pretty reproducible if I don't have much else going on with my laptop.

Perhaps this suggests a solution.

The fact that mat_tests-ex23_* is a problem could be predicted by
just looking at my 'make -j1' run and seeing which tests took the most
time:
# Timing summary (actual test time / total CPU time):
#   mat_tests-ex23_4: 13.29 sec / 15.08 sec
#   mat_tests-ex23_9: 12.20 sec / 14.19 sec
#   mat_tests-ex23_10: 9.72 sec / 11.36 sec
#   mat_tests-ex23_3: 7.22 sec / 8.33 sec
#   mat_tests-ex23_vscat_default: 7.06 sec / 8.22 sec

Predictably, when I coded up the dependencies, I listed them sequentially, 
and I assume that gmake's parallelization just does a queue-based 
task distribution.  That is, mat_tests-ex23_4 will be invoked 
immediately after mat_tests-ex23_3.  This means that gmake will run 
these expensive tests together.


A crude method of trying to ameliorate these problems would be to 
randomize the dependency list.  In this example, the goal would be to 
prevent multiple ex23 executables from being called at the same time.


Of course, a better method would be to use some type of our own 
round-robin distribution based on an expected "JFLAG" value.  That could 
perhaps be a flag passed to config/gmakegentest.py.


Comments welcome.

Scott








On 4/29/19 5:04 PM, Scott Kruger via petsc-dev wrote:



FYI -- I have reproduced all the problems but am still looking at it.

I thought perhaps it would be something about the globsearch's 
invocation of python, but it's not -- I get the same thing even with 
gmake's native filter (and in fact, it appears to be worse).


I'm getting something funny in the counts directory, which is where each 
individual run stores its output, but I need more testing to figure out 
what's going on.


Scott


On 4/22/19 11:00 PM, Jed Brown via petsc-dev wrote:

I don't know how this would happen and haven't noticed it myself.
Perhaps Scott can help investigate.  It would help to know which tests
run in each case.  To debug, I would make a dry-run or skip-all mode
that skips actually running the tests and just reports success (or
skip).

Stefano Zampini  writes:


The print-test target seems ok wrt race conditions

[szampini@localhost petsc]$ make -j1 -f gmakefile.test print-test  
globsearch="mat*" | wc

   1 538   11671
[szampini@localhost petsc]$ make -j20 -f gmakefile.test print-test  
globsearch="mat*" | wc

   1 538   11671

However, if I run the tests, I get two different outputs

[szampini@localhost petsc]$ make -j20 -f gmakefile.test test 
globsearch="mat*"

[..]
# -
#   Summary
# -
# success 1226/1312 tests (93.4%)
# failed 0/1312 tests (0.0%)
# todo 6/1312 tests (0.5%)
# skip 80/1312 tests (6.1%)

[szampini@localhost petsc]$ make -j20 -f gmakefile.test test 
globsearch="mat*"

[..]
# -
#   Summary
# -
# success 990/1073 tests (92.3%)
# failed 0/1073 tests (0.0%)
# todo 6/1073 tests (0.6%)
# skip 77/1073 tests (7.2%)


On Apr 22, 2019, at 8:12 PM, Jed Brown  wrote:

Stefano Zampini via petsc-dev  writes:


Scott,

I have noticed that make -j20 -f gmakefile.test test globsearch="mat*" does
not always run the same number of tests. How hard is it to fix this race
condition in the genera

Re: [petsc-dev] alternatives to alt files

2019-05-03 Thread Scott Kruger via petsc-dev




On 5/3/19 3:13 PM, Smith, Barry F. wrote:




On May 3, 2019, at 3:57 PM, Scott Kruger  wrote:



Sticking to the immediate issues and ignoring the other meta issues...

I think what you want could possibly be used to simplify the test harness if we 
push things down to the petscdiff level.  If we
have petscdiff detect the diff then it will automatically apply
the patches.  This would eliminate the "alt" files from the test
harness level.


   This could be fine. One could maybe even get away without using the patch 
tool but simply store the diffs that appear and compare the diff with the basic 
version against the stored diffs.


I'm not sure I understand this.

I was thinking of something like:

tests/output
/ex1.out
/ex1-1.patch
/ex1-2.patch

And then have petscdiff automatically cycle through the patches (by 
patching into the local directory to avoid polluting the original repo).


The "update" feature of petscdiff shouldn't create patches, but it would 
be nice to have it automate the patch management in some way to try to 
make it a bit easier to develop tests.
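The patch-cycling behavior under discussion could be sketched in shell like this (hypothetical helper, not the real petscdiff; the ex1.out / ex1-1.patch naming follows the layout above): diff against the base expected file, and on mismatch try each stored patch of the base file in turn:

```shell
# try_diff <expected.out> <actual>  -- sketch of patch-cycling compare.
try_diff() {
  base=$1; actual=$2
  # First compare against the unmodified expected output.
  diff -q "$base" "$actual" >/dev/null && return 0
  # Then try each stored patch variant of the expected output.
  for p in "${base%.out}"-*.patch; do
    [ -e "$p" ] || continue
    patched=$(mktemp)
    if patch -s -o "$patched" "$base" "$p" && \
       diff -q "$patched" "$actual" >/dev/null; then
      rm -f "$patched"; return 0
    fi
    rm -f "$patched"
  done
  return 1
}
```

Patching into a temporary file keeps the repository's output/ directory untouched, which matches the "avoid polluting the original repo" constraint.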


Scott





Of course, petscdiff is in bash and we've talked about replacing
it with a python version.  Matt has said he has a preliminary version
and I'd appreciate being able to use this as a starting point.

Scott


On 5/2/19 3:59 PM, Smith, Barry F. wrote:

Scott and PETSc folks,
  Using alt files for testing is painful. Whenever you add, for example, a 
new variable to be output in a viewer it changes the output files and you need 
to regenerate the alt files for all the test configurations. Even though the 
run behavior of the code hasn't changed.
 I'm looking for suggestions on how to handle this kind of alternative 
output in a nicer way (alternative output usually comes from different 
iterations counts due to different precision and often even different 
compilers).
 The idea I was thinking of was, instead of having "alt" files, to have "patch" 
files that contain just the patch to the original output file instead of a complete copy. Thus in 
some situations the patch file would still apply even if the original output file changed thus 
requiring much less manual work in updating alt files. Essentially the test harness would test 
against the output file, if that fails it would apply the first patch and compare again, try the 
second patch etc.
   Scott,
  What do you think? Should be an easy addition to the current model (no 
need to even remove the alt testing)? Would it also be possible to add a PATCH 
option to the test rule where it automatically added the new patch file? 
Perhaps all the patches for a test case could all be stored in the same file 
also so we don't need to manage patch_1.out patch_2.out etc? Each new patch 
would just get added to the file?
 Thoughts?
Barry


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756




--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] alternatives to alt files

2019-05-03 Thread Scott Kruger via petsc-dev




Sticking to the immediate issues and ignoring the other meta issues...

I think what you want could possibly be used to simplify the test 
harness if we push things down to the petscdiff level: if petscdiff 
detects a failing diff, it can automatically try applying the patches. 
This would eliminate the "alt" files from the test harness level.

Of course, petscdiff is in bash and we've talked about replacing
it with a python version.  Matt has said he has a preliminary version
and I'd appreciate being able to use this as a starting point.
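Barry's patch-cascade comparison could be sketched as follows — a minimal, self-contained Python sketch that uses difflib deltas as stand-ins for the stored patch files (a real petscdiff would presumably store unified diffs and shell out to diff/patch; the sample outputs here are made up):

```python
import difflib

# Hypothetical base and "alt" solver outputs (iteration count differs).
base = "iterations 5\nresidual norm 1.2e-09\n".splitlines(keepends=True)
alt = "iterations 6\nresidual norm 1.2e-09\n".splitlines(keepends=True)

# Store only a delta ("patch") instead of a full alt file.
delta = list(difflib.ndiff(base, alt))

def output_matches(actual, base, deltas):
    """Compare against the base output; on mismatch, try each patch in turn."""
    if actual == base:
        return True
    for d in deltas:
        # difflib.restore(d, 2) reconstructs the patched (alt) output.
        if actual == list(difflib.restore(d, 2)):
            return True
    return False

print(output_matches(alt, base, [delta]))           # True
print(output_matches(base, base, [delta]))          # True
print(output_matches(["bogus\n"], base, [delta]))   # False
```

Note that ndiff deltas embed context, so unlike real unified-diff patches they do not shrink the stored data; the sketch only illustrates the compare-then-try-patches cascade.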

Scott


On 5/2/19 3:59 PM, Smith, Barry F. wrote:


Scott and PETSc folks,

  Using alt files for testing is painful. Whenever you add, for example, a 
new variable to be output in a viewer, it changes the output files and you need 
to regenerate the alt files for all the test configurations, even though the 
run behavior of the code hasn't changed.

 I'm looking for suggestions on how to handle this kind of alternative 
output in a nicer way (alternative output usually comes from different 
iteration counts due to different precision and often even different 
compilers).

 An idea I was thinking of: instead of having "alt" files, we have "patch" 
files that contain just the patch to the original output file instead of a complete copy. Thus in 
some situations the patch file would still apply even if the original output file changed, 
requiring much less manual work in updating alt files. Essentially the test harness would test 
against the output file; if that fails, it would apply the first patch and compare again, then try 
the second patch, etc.

   Scott,

  What do you think? Should be an easy addition to the current model (no 
need to even remove the alt testing)? Would it also be possible to add a PATCH 
option to the test rule where it automatically added the new patch file? 
Perhaps all the patches for a test case could all be stored in the same file 
also so we don't need to manage patch_1.out patch_2.out etc? Each new patch 
would just get added to the file?

 Thoughts?

Barry



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] testing in parallel

2019-04-29 Thread Scott Kruger via petsc-dev




FYI -- I have reproduced all the problems but am still looking at it.

I thought perhaps it would be something about the globsearch's 
invocation of python, but it's not -- I get the same thing even with 
gmake's native filter (and in fact, it appears to be worse).


I'm getting something funny in the counts directory which is where each 
individual run stores its output, but I need more testing to figure out 
what's going on.


Scott


On 4/22/19 11:00 PM, Jed Brown via petsc-dev wrote:

I don't know how this would happen and haven't noticed it myself.
Perhaps Scott can help investigate.  It would help to know which tests
run in each case.  To debug, I would make a dry-run or skip-all mode
that skips actually running the tests and just reports success (or
skip).

Stefano Zampini  writes:


The print-test target seems ok wrt race conditions

[szampini@localhost petsc]$ make -j1 -f gmakefile.test print-test  
globsearch="mat*" | wc
   1 538   11671
[szampini@localhost petsc]$ make -j20 -f gmakefile.test print-test  
globsearch="mat*" | wc
   1 538   11671

However, if I run the tests, I get two different outputs

[szampini@localhost petsc]$ make -j20 -f gmakefile.test test globsearch="mat*"
[..]
# -
#   Summary
# -
# success 1226/1312 tests (93.4%)
# failed 0/1312 tests (0.0%)
# todo 6/1312 tests (0.5%)
# skip 80/1312 tests (6.1%)

[szampini@localhost petsc]$ make -j20 -f gmakefile.test test globsearch="mat*"
[..]
# -
#   Summary
# -
# success 990/1073 tests (92.3%)
# failed 0/1073 tests (0.0%)
# todo 6/1073 tests (0.6%)
# skip 77/1073 tests (7.2%)


On Apr 22, 2019, at 8:12 PM, Jed Brown  wrote:

Stefano Zampini via petsc-dev  writes:


Scott,

I have noticed that make -j20 -f gmakefile.test test globsearch="mat*" does
not always run the same number of tests. How hard is to fix this race
condition in the generation of the rules?


Can you reproduce with the print-test target?  These are just running
Python to create a list of targets, and should all take place before
executing rules.


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] need for HDF5 < 1.8.0 support?

2018-12-07 Thread Scott Kruger via petsc-dev




FWIW, spack only supports 1.8.10 and above which means that a 
significant fraction of the scientific software stack has moved on.


Scott

On 12/7/18 7:56 AM, Jed Brown via petsc-dev wrote:

"Smith, Barry F."  writes:


   A potential drawback is some users also use HDF5 directly in their code and 
may be using an older version (people are very slow to change).


The HDF5 developers were very deliberate about this.  You can still use
the old API in your own code while linking to the new library.  See
H5_USE_16_API.

https://support.hdfgroup.org/HDF5/doc/RM/APICompatMacros.html



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] tests with multiple loops

2018-11-15 Thread Scott Kruger via petsc-dev




Fixed in scott/fix-forloops.  Could you take a look and see if that 
works for you?


Thanks,
Scott


On 11/8/18 8:40 AM, Hapla Vaclav via petsc-dev wrote:

Assume the following test

   test:
 suffix: 4_tet_test_orient
 nsize: 2
 args: -dim 3 -distribute 0
 args: -rotate_interface_0 {{0 1 2 11 12 13}}
 args: -rotate_interface_1 {{0 1 2 11 12 13}}

I was thinking that it should produce all combinations of -rotate_interface_0 
and -rotate_interface_1, i.e. 6*6*2 = 72 tests including diffs.

But instead it produces only 22 tests for me. I guess it's wrong, isn't it?

Thanks

Vaclav

ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-0
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-0
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-1
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-1
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-2
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-2
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-11
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-11
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-12
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-12
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-13
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-13
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-1_rotate_interface_1-13
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-1_rotate_interface_1-13
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-2_rotate_interface_1-13
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-2_rotate_interface_1-13
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-11_rotate_interface_1-13
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-11_rotate_interface_1-13
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-12_rotate_interface_1-13
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-12_rotate_interface_1-13
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-13_rotate_interface_1-13
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-13_rotate_interface_1-13
# -
#   Summary
# -
# success 22/22 tests (100.0%)
# failed 0/22 tests (0.0%)
# todo 0/22 tests (0.0%)
# skip 0/22 tests (0.0%)
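For reference, the count Vaclav expects can be sanity-checked with a short script (the six values come from the {{...}} loops in the test block above):

```python
import itertools

rotate_vals = [0, 1, 2, 11, 12, 13]  # the six values in each {{...}} loop

# Every (-rotate_interface_0, -rotate_interface_1) combination
combos = list(itertools.product(rotate_vals, rotate_vals))
print(len(combos))      # 36 argument combinations
print(len(combos) * 2)  # 72 entries, counting each run plus its diff check
```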



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] tests with multiple loops

2018-11-12 Thread Scott Kruger via petsc-dev




Yes, that seems wrong.  I'll take a look.

Scott


On 11/8/18 8:40 AM, Hapla Vaclav via petsc-dev wrote:

Assume the following test

   test:
 suffix: 4_tet_test_orient
 nsize: 2
 args: -dim 3 -distribute 0
 args: -rotate_interface_0 {{0 1 2 11 12 13}}
 args: -rotate_interface_1 {{0 1 2 11 12 13}}

I was thinking that it should produce all combinations of -rotate_interface_0 
and -rotate_interface_1, i.e. 6*6*2 = 72 tests including diffs.

But instead it produces only 22 tests for me. I guess it's wrong, isn't it?

Thanks

Vaclav

ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-0
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-0
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-1
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-1
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-2
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-2
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-11
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-11
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-12
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-12
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-13
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-0_rotate_interface_1-13
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-1_rotate_interface_1-13
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-1_rotate_interface_1-13
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-2_rotate_interface_1-13
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-2_rotate_interface_1-13
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-11_rotate_interface_1-13
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-11_rotate_interface_1-13
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-12_rotate_interface_1-13
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-12_rotate_interface_1-13
ok 
dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-13_rotate_interface_1-13
ok 
diff-dm_impls_plex_tests-ex18_4_tet_test_orient_rotate_interface_0-13_rotate_interface_1-13
# -
#   Summary
# -
# success 22/22 tests (100.0%)
# failed 0/22 tests (0.0%)
# todo 0/22 tests (0.0%)
# skip 0/22 tests (0.0%)



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] tiny issues in test harness

2018-09-26 Thread Scott Kruger




Fix for #1 is merged into next.

See ecc1beb596a8093f7509ca38016ed30c93784193

For #2, I think it'll work if you use double quotes and egrep.
If you do this:
cd $PETSC_DIR/src/sys/examples/test
grep filter *.c

I think you'll see some examples of complicated filters that Barry got 
working.
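A hedged sketch of the double-quote form Scott suggests (file name and contents here are made up):

```shell
# Create a fake output file to filter
printf 'r = 1.0e-3\nx = 2\n' > out.txt

# Double quotes survive the harness's own single-quote wrapping;
# 'r =' would get stripped down to just: grep r
egrep "r =" out.txt
```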


Scott


On 8/14/18 4:05 AM, Jakub Kruzik wrote:

Hi all,

I started using the test harness in PERMON and found a couple of issues 
with it.


1) multiple "args:" keywords in "test:" in "testset:" are ignored except 
for the last "args:" keyword. See attached MWE and check it with


python2 ${PETSC_DIR}/config/testparse.py -t ex1.c -v 3
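The intended behavior — repeated args: keywords accumulating rather than overwriting — can be sketched like this (a toy stand-in for the testparse.py logic, not the actual implementation):

```python
def add_key(tdict, key, val):
    # Append to any existing value for this key rather than replacing it
    tdict[key] = (tdict.get(key, "") + " " + val).strip()

tdict = {}
add_key(tdict, "args", "-dim 3 -distribute 0")
add_key(tdict, "args", "-rotate_interface_0 11")
print(tdict["args"])  # -dim 3 -distribute 0 -rotate_interface_0 11
```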

2) single quotes can't be used in "filter:"  keyword. E.g. filter: grep 
'r =' generates


petsc_testrun ... 'grep 'r =''

resulting in filter being "grep r". Might be worth mentioning this in 
dev manual and/or putting guards into parsing/petsc_testrun.


Best,

Jakub



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Test output broken for test sets

2018-09-26 Thread Scott Kruger



Matt,

I'm a bit confused as to what you want to do here.

I put this block in ksp/ksp/examples/test/ex1.c just for testing.

I get:
> config/gmakegentest.py
Warning: 
/Users/kruger/ptroot/upstream/petsc/src/sys/examples/tests/output/ex1_6_tet.out 
not found.
Warning: 
/Users/kruger/ptroot/upstream/petsc/src/sys/examples/tests/output/ex1_6_hex.out 
not found.


which I think is what you are complaining about.

But if I change the faces argument to this:

  args: -use_generator -faces {{2,2,2  1,3,5  3,4,7}separate}

Then I get this:
> config/gmakegentest.py
Warning: 
/Users/kruger/ptroot/upstream/petsc/src/sys/examples/tests/output/ex1_faces-3__4__7_6_hex.out 
not found.
Warning: 
/Users/kruger/ptroot/upstream/petsc/src/sys/examples/tests/output/ex1_faces-1__3__5_6_hex.out 
not found.
Warning: 
/Users/kruger/ptroot/upstream/petsc/src/sys/examples/tests/output/ex1_faces-2__2__2_6_hex.out 
not found.
Warning: 
/Users/kruger/ptroot/upstream/petsc/src/sys/examples/tests/output/ex1_faces-3__4__7_6_tet.out 
not found.
Warning: 
/Users/kruger/ptroot/upstream/petsc/src/sys/examples/tests/output/ex1_faces-1__3__5_6_tet.out 
not found.
Warning: 
/Users/kruger/ptroot/upstream/petsc/src/sys/examples/tests/output/ex1_faces-2__2__2_6_tet.out 
not found.



Is this what you really want?

Scott

P.S.  Jakub:  Your issue is separate.  I'll have a fix pushed shortly.

On 9/24/18 7:26 AM, Matthew Knepley wrote:

At least on my machine, this does not work correctly

   testset:
     nsize: {{1 2 4}}
     args: -use_generator -faces {{2,2,2  1,3,5  3,4,7}}
     args: -interpolate -distribute -interpolate_after_distribute {{0 1}}
     args: -dm_plex_check_pointsf
     test:
       #TODO some combinations generate wrong SF
       suffix: 6_tet
       requires: ctetgen
       args: -cell_simplex 1 -dm_plex_generator ctetgen
     test:
       suffix: 6_hex
       args: -cell_simplex 0
If I give REPLACE=1, it makes a single output file 6_hex.out and 
6_tet.out for all the tests, with the output of the last test in it.


   Thanks,

      Matt

--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Test output broken for test sets

2018-09-26 Thread Scott Kruger




I'll take a look.
Scott


On 9/26/18 8:58 AM, Jed Brown wrote:

Scott, do you know how to fix this?

Jakub Kruzik  writes:


Related issues:

https://lists.mcs.anl.gov/pipermail/petsc-dev/2018-August/023448.html

Jakub


On 9/24/18 3:26 PM, Matthew Knepley wrote:

At least on my machine, this does not work correctly

   testset:
     nsize: {{1 2 4}}
     args: -use_generator -faces {{2,2,2  1,3,5  3,4,7}}
     args: -interpolate -distribute -interpolate_after_distribute {{0 1}}
     args: -dm_plex_check_pointsf
     test:
       #TODO some combinations generate wrong SF
       suffix: 6_tet
       requires: ctetgen
       args: -cell_simplex 1 -dm_plex_generator ctetgen
     test:
       suffix: 6_hex
       args: -cell_simplex 0
If I give REPLACE=1, it makes a single output file 6_hex.out and
6_tet.out for all the tests, with the output of the last test in it.

   Thanks,

      Matt

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] test harness hiccup

2018-09-08 Thread Scott Kruger




The line is this:
@$(RM) -rf $(TESTDIR)/counts $(TESTLOGFILE)

so I'm a bit confused how this could happen.

Are you on master?

Scott


On 9/8/18 4:06 PM, Smith, Barry F. wrote:


$ make alltests




rm: ./arch-simple/tests/counts: Directory not empty
make[2]: *** [pre-clean] Error 1





--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


[petsc-dev] Fwd: Re: [petsc-users] PETSc doesn't allow use of multithreaded MKL with MUMPS + fblaslapack?

2018-08-13 Thread Scott Kruger





 Forwarded Message 
Subject: Re: [petsc-users] PETSc doesn't allow use of multithreaded MKL 
with MUMPS + fblaslapack?

Date: Sun, 12 Aug 2018 14:20:42 -0500
From: Satish Balay 
Reply-To: petsc-users 
To: Appel, Thibaut 
CC: petsc-us...@mcs.anl.gov 

Hm - it's just a default - so you can always change the default value
to a more suitable one for your usage [i.e. use the --with-blaslapack-lib
option instead of the --with-blaslapack-dir option]

For regular petsc build - we think that sequential MKL is the best match.

For a build with C/Pardiso - I believe its best to use threaded MKL.

Wrt with-openmp=1 - if threaded MKL is preferable - I guess we could
change the default. But a default does not prevent one from using a
preferred blas [whatever that is]

Wrt fblaslapack - yes its not multi-threaded. But I believe openblas is
multi-threaded [so you could use --download-openblas as alternative]

The usual issue is - one cannot use threaded MKL [or any threaded
library] as a black box. They would have to always be aware of how
many mpi procs, and openmp threads are being used - and tweak these
parameters constantly. The default for OpenMPI is to use the whole
machine - i.e. it expects 1 mpi task per node. If one uses more mpi
tasks per node - and does not reduce threads per node - they get bad
performance. Hence we avoid using threaded MKL as a default.
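For example, the hand-tuning Satish describes might look like this (a hedged sketch; the node size and application name are hypothetical):

```shell
# Hypothetical 16-core node: keep (MPI ranks) x (OpenMP threads) <= cores
export OMP_NUM_THREADS=8
mpiexec -n 2 ./my_app
```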

Satish

 On Sun, 12 Aug 2018, Appel, Thibaut wrote:


Good afternoon,

I have an application code written in pure MPI but wanted to exploit 
multithreading in MUMPS (contained in calls to BLAS routines)
On a high-end parallel cluster I’m using, I’m linking with the Intel MKL 
library but it seems that PETSc won’t configure the way I want:

./configure […] —with-openmp=1 --with-pic=1 --with-cc=mpiicc --with-cxx=mpiicpc 
--with-fc=mpiifort --with-blaslapack-dir=${MKLROOT} 
--with-scalapack-lib="-L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 
-lmkl_blacs_intelmpi_lp64" --with-scalapack-include=${MKLROOT}/include 
--download-metis --download-parmetis --download-mumps

yields BLAS/LAPACK: -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread

while if I configure with cpardiso on top of the same flags

./configure […] —with-openmp=1 —with-pic=1 --with-cc=mpiicc --with-cxx=mpiicpc 
--with-fc=mpiifort --with-blaslapack-dir=${MKLROOT} 
--with-scalapack-lib="-L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 
-lmkl_blacs_intelmpi_lp64" --with-scalapack-include=${MKLROOT}/include 
--with-mkl_cpardiso-dir=${MKLROOT} --download-metis --download-parmetis --download-mumps

the configure script says
===
BLASLAPACK: Looking for Multithreaded MKL for C/Pardiso
===

and yields BLAS/LAPACK: -lmkl_intel_lp64 -lmkl_core -lmkl_intel_thread 
-lmkl_blacs_intelmpi_lp64 -liomp5 -ldl -lpthread

In other words, there is no current possibility of activating multithreaded 
BLAS with MUMPS in spite of the option —with-openmp=1, as libmkl_sequential is 
linked. Is it not possible to fix that and use libmkl_intel_thread by default?

On another smaller cluster, I do not have MKL and configure PETSc with BLAS 
downloaded with —download-fblaslapack, which is not multithreaded.
Could you confirm I would need to link with a multithreaded BLAS library I 
downloaded myself and use —with-openmp=1? Would it be `recognized` by the MUMPS 
installed by PETSc?

Thanks for your support,


Thibaut





Re: [petsc-dev] Test parsers is parsing *.c~ files

2018-03-15 Thread Scott Kruger



Yes, as you surmise Barry, I am a vim user.

Based on earlier comments, I have this in config/gmakegentest.py:

# Ignore emacs and other temporary files
if exfile.startswith("."): continue
if exfile.startswith("#"): continue

I assume that this:
if exfile.endswith("~"): continue

will work.   Do folks see any problems?
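Putting the three checks together (note the method name is str.endswith), a quick self-contained sketch:

```python
def is_editor_crumb(exfile):
    # emacs lock/autosave files, emacs backups, and vim/emacs '~' backups
    return exfile.startswith(".") or exfile.startswith("#") or exfile.endswith("~")

files = ["ex1.c", "ex1.c~", "#ex1.c#", ".#ex1.c"]
print([f for f in files if not is_editor_crumb(f)])  # ['ex1.c']
```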

Scott



On 3/14/18 10:05 PM, Smith, Barry F. wrote:



   This drives me nuts also; obviously Scott does not use Emacs or he would 
have made sure long ago that the test harness would not trip over the crumbs of 
Emacs.

Matt, you'll need to share the exact forms of the crumbs that are 
confusing the test harness.

Barry


On Mar 14, 2018, at 11:03 PM, Matthew Knepley  wrote:

This took me forever to find. The parser is parsing emacs backups and 
overwriting some tests.

Matt

--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/




--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] [petsc-users] Using PETSC with an openMP program

2018-03-02 Thread Scott Kruger



On 3/2/18 12:44 PM, Matthew Knepley wrote:
On Fri, Mar 2, 2018 at 2:39 PM, Jed Brown wrote:


Matthew Knepley writes:

 > That is not the same as printing unused arguments. Michael's Pythia
 > does this correctly, but it is even less simple.

You want it to accept the unused arguments and just print them without
error, or some more subtle relationship among dependent options?


Yes, I do. I consider PETSc to have the correct functionality. The open 
world
assumption is a good one, as long as you report that no one accepted 
that option.


https://docs.python.org/3/library/argparse.html#partial-parsing

Requires Python > 2.7
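For reference, argparse's partial parsing looks like this (the option names below are made up for illustration):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--ksp_type")

# parse_known_args accepts unrecognized options and hands them back,
# so a caller can report "no one accepted these" instead of erroring.
opts, unknown = parser.parse_known_args(["--ksp_type", "gmres", "--bogus", "1"])
print(opts.ksp_type)  # gmres
print(unknown)        # ['--bogus', '1']
```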





   Matt

   We're
here in a thread about not silently accepting options that *don't
exist anywhere*.




--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] new test harness in PETSc

2018-01-25 Thread Scott Kruger



On 1/25/18 4:35 AM, Patrick Sanan wrote:


This following is what I would try, given my limited knowledge of how 
things work and after a quick scan of the docs. Is this as intended or 
is there an easier way?

You pretty much have it.



--

I make some seemingly-innocuous changes to the code. (In this case I 
literally change nothing in master.)


I want to run all the tests to make sure I didn't break something that I 
don't understand, so I look at the User Manual and copy and run this 
command:


    make -f gmakefile.test test


This will be hooked up to the main makefile test target at some point.
Right now,
make allgtest
is more silent for example, but you can pick out the errors.
make allgtest-tap
is equivalent to above (you see progress, but you have to pick out
the errors that occurred in 0.1% of tests).



I go away for a while and come back and see that some (3) tests have 
failed, and that the harness gives me instructions on how to re-run them.


     # -
     #   Summary
     # -
     # FAILED ts_tutorials-ex11_adv_2d_quad_0 
diff-sys_classes_viewer_tests-ex4_4 ts_tutorials-ex11_adv_2d_quad_1

     # success 3051/3915 tests (77.9%)
     # failed 3/3915 tests (0.1%)
     # todo 91/3915 tests (2.3%)
     # skip 770/3915 tests (19.7%)
     #
     # To rerun failed tests:
     #     /opt/local/bin/gmake -f gmakefile test 
search='ts_tutorials-runex11_adv_2d_quad_0 
sys_classes_viewer_tests-runex4_4 ts_tutorials-runex11_adv_2d_quad'


I want to take a look at the output so I can see what's happening - I 
see that I'm given instructions on how to get more verbose output, so I 
run one of the tests again with V=1 :


     /opt/local/bin/gmake V=1 -f gmakefile test 
search='ts_tutorials-runex11_adv_2d_quad_0'


This tells me that this is failing because of a timeout

     #Exceeded timeout limit of 60 s

I poke around in the dev manual (note that "make -f gmakefile help" and 
"make -f gmakefile.test help" say nothing about TIMEOUT as promised)


Oops.  Yes, a bug.
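For the record, the invocation discussed here appears to be of the form below (assumed syntax; that TIMEOUT is in seconds is inferred from the "Exceeded timeout limit of 60 s" message):

```shell
make -f gmakefile.test test TIMEOUT=120 search='ts_tutorials-runex11_adv_2d_quad_0'
```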



  If I look in the runex4_4 directory, I find 
sys_classes_viewer_tests-ex4_4.sh which gives me something to copy and 
paste and use my actual knowledge/reasoning to fix:


   
  /Users/patrick/petsc-master/arch-darwin-master-double-debug/bin/mpiexec  -n 1 ../ex4 -myviewer ascii:ex4a1.tmp::append


(I'm not going to try to debug this failure further, but I'm confident 
I'd be able to given the information I have now)


This is pretty close to what I want, which is a way to "back out of the 
test harness" to be able to debug the failed test myself. I was hoping 
that running this from the PETSc root directory would more directly tell 
me that:


   
  arch-darwin-master-double-debug/tests/src/sys/classes/viewer/examples/tests/runex4_4.sh  -v



The output is meant to be user facing (in which case they
do not want to be told how to debug -- they just want to
report it), and to be run for nightlies -- in which case
we want quite a bit of terseness.

I think the fact that you figured it out on your own
is a success!  You did the workflow I use and yes, the
first time is annoying, but then it's pretty straightforward.

A few comments:
The additional step of poking around the runex4_4 directory
is to avoid output file name clashes.

The  runex4_4.sh script has a -h command to show other options,
especially the ability to automatically update the
output files.

NO_RM=1 is my favorite option to gmakefile.test, but
I have shared builds.  Somewhere we have a change that turns this on
if shared builds are there so debugging is faster.

Scott









2018-01-25 5:47 GMT+01:00 Smith, Barry F. <bsm...@mcs.anl.gov>:



    PETSc developers,

      We have completed moving all PETSc examples over from the old
test system (where tests were written in the makefile) to a new
system, provided by Scott Kruger, where the test rules are written
in bottom of the source file of the example. Directions for usage
and adding new tests can be found in the PETSc developers manual
http://www.mcs.anl.gov/petsc/petsc-dev/docs/developers.pdf chapter 7.

   Barry





--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] test harness loops with different outputs

2018-01-24 Thread Scott Kruger



On 1/24/18 6:50 AM, Vaclav Hapla wrote:




24. 1. 2018 v 14:45, Vaclav Hapla :

How should I specify output files for {{...}} with different outputs, please?


Oh I see, {{...}separateoutput} is meant literally. But in that case 
typesetting it in italics is confusing.


It's clearer in the examples listed below.   I couldn't get lstinline to 
work for this syntax so used $ ... $.  If someone knows how to do it, 
please feel free to fix.


Scott



Vaclav



I have consulted the developers manual but it's not clear to me still. And the 
example listings, page 34-45, seem to be broken.

BTW in 7.1.2, I think there should be the space instead of the comma in the 
listing.

Thanks

Vaclav




--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Our pull request work flow is terrible and horrible

2018-01-15 Thread Scott Kruger



On 1/12/18 9:53 PM, Jed Brown wrote:

"Smith, Barry F."  writes:


Sadly you cannot reply to previous comments for Github PRs, there
is just a mass of unorganized previous comments. If this is fixed
then Github becomes more desirable looking.


I think it was an intentional choice to try to keep discussions on
topic, while fully threaded discussions tend to fragment into
sub-discussions.  Supporting that fragmentation is something email does
quite well.  FWIW, GitLab has a two-level threading concept (instead of
arbitrarily deep nesting).

I usually review commits and use line comments for specific issues.
Github does that fine.  That general discussion of issue is not threaded
is mildly annoying sometimes, but in my experience not actually an
issue.



I find it super annoying myself if you have more than
1 person comment on a PR.  You are constantly having
to use @person - #comment to keep the conversations
about issues straight.

It is definitely not a show stopper, but I have loved
BitBucket's threaded conversations personally.

Scott



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Our pull request work flow is terrible and horrible

2018-01-11 Thread Scott Kruger



On 1/11/18 10:40 AM, Patrick Sanan wrote:
One idea is to impose a stricter guideline that things on the bitbucket 
PR page are things that everyone is actively trying to merge. That way, 
maintainers can just look at the bottom of the list to see what's 
lagging, instead of having to to work up the list and try to remember 
which of the PRs are WIP or proposals or experiments or even abandoned 
ideas.


This probably requires an itchier trigger finger on declining PRs which 
need substantial work.


A related point is that (as happened with the last PR I made), if a big 
edit is performed after the original PR is made or even approved, then 
it's not always clear "whose court" the PR is in. Maybe it's better to 
just make a new PR in this situation. I'm not sure if bitbucket allows 
you to decline your own PR (I fear not) - that would make this easier.


You can.



My own suggestion is to hook bin/maint/exampleslog.py up to the
nightly runs such that the output is on the main testing
page and even, perhaps, an automated email to the people
who are "responsible".   I am unclear how the nightly
workflow works -- if there is a description somewhere, I
can try hooking it up myself.

The goal is to distribute the responsibility for fixing
errors in master/next testing, and to make it easier to
see what the problems are (without parsing two dozen log
files).

Scott



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] limiting tests to avoid unneeded ones

2018-01-11 Thread Scott Kruger




On 1/11/18 5:05 AM, Jed Brown wrote:
You'd need to know that 'master' was actually clean for that commit with 
your configuration on your machine. If an automated system, where would 
that information be stored?


If not automated, just run the git diff and then make test the desired 
packages.



The dependency in the makefile is on petsc lib for every test.
To test just the desired packages, you'd have to either a) add a
search='ts%' to the gmakefile line, b) have the testing use separate
libs for the build/test cycle, c) add new functionality to make
it easier.

Scott

--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756


Re: [petsc-dev] Experiments in test timing with new harness

2018-01-08 Thread Scott Kruger



This was essentially master.

Yes, these kinds of experiments will need to be redone as more tests get 
moved over.


Scott


On 1/5/18 11:11 PM, Smith, Barry F. wrote:


   What branch? Master has only a small percentage of the examples in the new 
test harness.



On Jan 5, 2018, at 10:11 PM, Scott Kruger <kru...@txcorp.com> wrote:




On my laptop:

rm -rf $PETSC_ARCH/tests
time make -f gmakefile.test test NO_RM=1

Results were this:
603.458u 191.569s 20:29.22 64.6% 6030+0k 3297+2284io 224278pf+0w

Immediately redo (NO_RM=1 => no rebuilding of executables):
  time make -f gmakefile.test test NO_RM=1
Results were this:
280.566u 85.166s 15:54.69 38.3%Time0+0k 331+2653io 2135pf+0w


(320 seconds shorter, or 53% shorter)


Here are some tests with TIMEOUTS:
TIMEOUT=10
Removes (for me, see below):
snes_tutorials-ex56_hypre sys_tutorials-ex3f_1

Shortens it to:
243.836u 91.350s 5:30.93 101.2% 0+0k 1295+1546io 7502pf+0w


TIMEOUT=1
Removes:
snes_tutorials-ex13_2d_q3_0 snes_tutorials-ex13_3d_q3_0 snes_tutorials-ex56_0 
sys_tutorials-ex3f sys_tutorials-ex3 snes_tutorials-ex13_3d_q2_0 
ts_tutorials-ex18_p2p1_xyper_ref snes_tutorials-ex56_hypre ts_tests-ex4_2 
ts_tutorials-ex26_2 ts_tests-ex4_4

Shortens it to:
214.718u 98.313s 5:02.38 103.5% 0+0k 597+1569io 7593pf+0w

(65 seconds shorter than 60 sec timeout; or ~23% shorter)


The upshot is that improving test times means focusing
on compile time.  While we cannot
get rid of the compile time of the individual files,
it is possible that having a single package executable,
as Jed has advocated, could save a significant amount of
time.   If we estimate that link time is 50% of
the compile time, and that we can have a 50% savings by
linking a single executable instead of multiple, then 25%
time saving would occur with a single executable build.
I have no idea if this is the right estimate, but it
does motivate the single executable idea as the best
method of improving the test timing, in addition to
the disk space savings.




Having tests take a "long time" is currently, for this
configuration, not a big issue; however, it is important
to note that while I have several packages, I certainly
do not have exodus, for example.


Here are my configuration options:
  '--download-mpich=1',
  '--with-fc=/opt/homebrew/bin/gfortran',
  '--with-x=0',
  '--with-cxx-dialect=C++11',
  '--with-clanguage=C++',
  '--with-debugging=0',
  '--download-cmake=1',
  '--download-hdf5',
  '--download-netcdf',
  '--download-hypre=1',
  '--download-metis=1',
  '--download-parmetis=1',
  '--download-superlu_dist=1',
  '--download-scalapack',
  '--download-ptscotch',
  '--download-mumps',
  '--download-sowing=1',
  '--with-shared-libraries=1',
  '--DATAFILESPATH=/Users/kruger/petsc/datafiles'


Scott


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303Fax:   (303) 448-7756




--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756


[petsc-dev] Experiments in test timing with new harness

2018-01-05 Thread Scott Kruger




On my laptop:

rm -rf $PETSC_ARCH/tests
time make -f gmakefile.test test NO_RM=1

Results were this:
603.458u 191.569s 20:29.22 64.6% 6030+0k 3297+2284io 224278pf+0w

Immediately redo (NO_RM=1 => no rebuilding of executables):
  time make -f gmakefile.test test NO_RM=1
Results were this:
280.566u 85.166s 15:54.69 38.3% 0+0k 331+2653io 2135pf+0w


(user time ~320 seconds shorter, or 53%)


Here are some tests with TIMEOUTS:
TIMEOUT=10
Removes (for me, see below):
snes_tutorials-ex56_hypre sys_tutorials-ex3f_1

Shortens it to:
243.836u 91.350s 5:30.93 101.2% 0+0k 1295+1546io 7502pf+0w


TIMEOUT=1
Removes:
snes_tutorials-ex13_2d_q3_0 snes_tutorials-ex13_3d_q3_0 
snes_tutorials-ex56_0 sys_tutorials-ex3f sys_tutorials-ex3 
snes_tutorials-ex13_3d_q2_0 ts_tutorials-ex18_p2p1_xyper_ref 
snes_tutorials-ex56_hypre ts_tests-ex4_2 ts_tutorials-ex26_2 ts_tests-ex4_4


Shortens it to:
214.718u 98.313s 5:02.38 103.5% 0+0k 597+1569io 7593pf+0w

(65 seconds shorter than 60 sec timeout; or ~23% shorter)
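For reference, the two csh-style time lines above can be compared mechanically. A small sketch, assuming the first u-suffixed token is the user-CPU seconds (it compares the TIMEOUT=10 and TIMEOUT=1 runs shown here, not the unshown default-timeout run):

```python
def user_seconds(time_line):
    """Extract user-CPU seconds from a csh-style time line, e.g. '214.718u ...'."""
    for tok in time_line.split():
        if tok.endswith('u'):
            return float(tok[:-1])
    raise ValueError("no user-time field found")

t10 = user_seconds("243.836u 91.350s 5:30.93 101.2% 0+0k 1295+1546io 7502pf+0w")
t1  = user_seconds("214.718u 98.313s 5:02.38 103.5% 0+0k 597+1569io 7593pf+0w")
print(f"TIMEOUT=1 saves {t10 - t1:.1f}s of user time ({(t10 - t1)/t10:.0%})")
# -> TIMEOUT=1 saves 29.1s of user time (12%)
```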


The upshot is that improving test times means focusing
on build time.  While we cannot get rid of the compile
time of the individual files, having a single executable
per package, as Jed has advocated, could save a
significant amount of time.  If we estimate that linking
is 50% of the build time, and that linking one executable
instead of many saves 50% of that, then a single-executable
build would save about 25% overall.  I have no idea whether
these estimates are right, but they do motivate the
single-executable idea as the best way to improve the
test timing, in addition to the disk-space savings.
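The 25% figure above is just the product of the two assumed fractions; spelling it out:

```python
# Back-of-the-envelope estimate (assumed numbers, not measurements):
# if linking is 50% of build time and a single combined executable
# halves the link cost, the overall build time drops by 0.5 * 0.5 = 25%.
link_fraction = 0.5      # assumed fraction of build time spent linking
link_savings  = 0.5      # assumed savings from linking one executable
overall_savings = link_fraction * link_savings
print(f"estimated overall build-time savings: {overall_savings:.0%}")
# -> estimated overall build-time savings: 25%
```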




Having tests take a "long time" is currently, for this
configuration, not a big issue; however, it is important
to note that while I have several packages, I certainly
do not have exodus, for example.


Here are my configuration options:
  '--download-mpich=1',
  '--with-fc=/opt/homebrew/bin/gfortran',
  '--with-x=0',
  '--with-cxx-dialect=C++11',
  '--with-clanguage=C++',
  '--with-debugging=0',
  '--download-cmake=1',
  '--download-hdf5',
  '--download-netcdf',
  '--download-hypre=1',
  '--download-metis=1',
  '--download-parmetis=1',
  '--download-superlu_dist=1',
  '--download-scalapack',
  '--download-ptscotch',
  '--download-mumps',
  '--download-sowing=1',
  '--with-shared-libraries=1',
  '--DATAFILESPATH=/Users/kruger/petsc/datafiles'


Scott


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756


Re: [petsc-dev] remove temporary output with new test-harness

2017-11-29 Thread Scott Kruger



On 11/29/17 10:25 AM, Matthew Knepley wrote:
On Wed, Nov 29, 2017 at 11:23 AM, Scott Kruger <kru...@txcorp.com> wrote:




The generated scripts automatically run in a subdirectory to avoid
name conflicts.  For example, runex1.sh creates a runex1 directory
relative
to its location and then runs the actual test in there.  These
temporary directories are not removed automatically by the test
harness as it hasn't been a problem yet.


Okay, then it would be nice to give a list of files to be removed after 
the test.


Why?  It's all in $PETSC_ARCH/tests, which is meant to be ephemeral in
the same way that $PETSC_ARCH/objs is.


If it turns out that disk space is an issue (as it is for the
executables themselves), then we can easily have the scripts remove
the temporary directory after the test is run, but we still don't
need all of the code for figuring out which files were generated
(and a tip of the hat to Jed, who had the idea of running in
the subdirectories to avoid name conflicts to begin with).
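The pattern being defended here -- each test runs in its own scratch subdirectory, so cleanup is one directory removal rather than per-file bookkeeping -- can be sketched in Python (an illustration under stated assumptions, not the generated run*.sh scripts):

```python
import os
import shutil
import subprocess
import sys
import tempfile

def run_in_subdir(name, cmd, base):
    """Run cmd inside base/<name> so its output files stay isolated."""
    workdir = os.path.join(base, name)
    os.makedirs(workdir, exist_ok=True)
    subprocess.run(cmd, cwd=workdir, check=True)
    return workdir

base = tempfile.mkdtemp()
# Two "tests" that both write vector.dat -- no collision, each gets its own dir.
run_in_subdir("runex1", [sys.executable, "-c", "open('vector.dat','w').write('a')"], base)
run_in_subdir("runex2", [sys.executable, "-c", "open('vector.dat','w').write('b')"], base)
contents = sorted(os.listdir(base))
print(contents)                 # -> ['runex1', 'runex2']
shutil.rmtree(base)             # cleanup is a single directory removal
```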


Scott




   Thanks,

      Matt

Scott


On 11/29/17 9:43 AM, Matthew Knepley wrote:

On Wed, Nov 29, 2017 at 10:38 AM, Scott Kruger <kru...@txcorp.com> wrote:

     rm $PETSC_ARCH/tests//


     I assume you really mean something else, specifically
related to the
     generated test script, but there isn't enough information
to answer
     that.

     Or more specifically, the test language was not written to
generate
     scripts that have an `rm` command in them, so perhaps
explaining why
     an extension would be a good idea.]


I think what Stefano means is the following:

    I write a test for the HDF5 Viewer. It creates a file called
"vector.dat". I want it removed at the end of the test.

I think the answer is, the test directory is blown away, so as
long as the file is in that temp dir, you are fine.

    Thanks,

       Matt

     Scott


     On 11/29/17 6:02 AM, Stefano Zampini wrote:

         How can I remove a file created by the executable I'm
         testing with the new test harness?

         --         Stefano


     --     Tech-X Corporation           kru...@txcorp.com
     5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
     Boulder, CO 80303            Fax:   (303) 448-7756




-- 
What most experimenters take for granted before they begin their

experiments is infinitely more interesting than any results to
which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


-- 
Tech-X Corporation           kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756




--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756


Re: [petsc-dev] remove temporary output with new test-harness

2017-11-29 Thread Scott Kruger



The generated scripts automatically run in a subdirectory to avoid name 
conflicts.  For example, runex1.sh creates a runex1 directory relative
to its location and then runs the actual test in there.  These temporary 
directories are not removed automatically by the test harness as it 
hasn't been a problem yet.


Scott


On 11/29/17 9:43 AM, Matthew Knepley wrote:
On Wed, Nov 29, 2017 at 10:38 AM, Scott Kruger <kru...@txcorp.com> wrote:


rm $PETSC_ARCH/tests//


I assume you really mean something else, specifically related to the
generated test script, but there isn't enough information to answer
that.

Or more specifically, the test language was not written to generate
scripts that have an `rm` command in them, so perhaps explaining why
an extension would be a good idea.


I think what Stefano means is the following:

   I write a test for the HDF5 Viewer. It creates a file called 
"vector.dat". I want it removed at the end of the test.


I think the answer is, the test directory is blown away, so as long as 
the file is in that temp dir, you are fine.


   Thanks,

      Matt

Scott


On 11/29/17 6:02 AM, Stefano Zampini wrote:

How can I remove a file created by the executable I'm testing
with the new test harness?

-- 
Stefano



-- 
Tech-X Corporation           kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756




--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756


Re: [petsc-dev] remove temporary output with new test-harness

2017-11-29 Thread Scott Kruger




rm $PETSC_ARCH/tests//


I assume you really mean something else, specifically related to the 
generated test script, but there isn't enough information to answer that.


Or more specifically, the test language was not written to generate 
scripts that have an `rm` command in them, so perhaps explaining why an 
extension would be a good idea.


Scott


On 11/29/17 6:02 AM, Stefano Zampini wrote:
How can I remove a file created by the executable I'm testing with the 
new test harness?


--
Stefano


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756


Re: [petsc-dev] test harness chokes

2017-11-21 Thread Scott Kruger



PR here:
https://bitbucket.org/petsc/petsc/pull-requests/805/bug-fix-for-empty-tests-default/diff



On 11/20/17 2:11 PM, Smith, Barry F. wrote:


on

/*TEST

  test:

  test:
 suffix: 2
 nsize: 2

TEST*/

with

$ ./config/gmakegentest.py
KeyError: 0



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756


Re: [petsc-dev] test harness with tests that depend on multiple source files

2017-11-21 Thread Scott Kruger



On 11/20/17 8:59 PM, Smith, Barry F. wrote:




On Nov 20, 2017, at 9:37 PM, Jed Brown <j...@jedbrown.org> wrote:

Scott Kruger <kru...@txcorp.com> writes:


depends keyword:

From:
   dm/examples/tutorials/ex13f90.F90

!/*TEST
!
!   build:
!  requires: !complex
!  depends:  ex13f90aux.F90
!
!TEST*/


How would anyone know that "requires" refers to PETSc configuration
while "depends" refers to files?  Why not something like
"requires_file"?


   And hence requires_config ?



My own view is that `requires` is meant
for a *user* interested in that particular test,

whereas

`depends` is something a *developer*, who
presumably understands the concept of build
dependency trees, would specify while
developing the test,

so the two keywords do not need to share
a naming convention.

But happy to change these if that is what the
consensus is.

Scott

P.S.  I'll note that the literal answer to Jed's
original question of RTFM does work and has been
there for over a year:

gabrielle 44: git blame developers.tex | grep depends:
29921a8f05c src/docs/tex/manual/developers.tex (Scott Kruger 
2016-09-28 09:56:26 -0500 1662) \item[{depends:




--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756


Re: [petsc-dev] test harness with tests that depend on multiple source files

2017-11-20 Thread Scott Kruger



depends keyword:

From:
  dm/examples/tutorials/ex13f90.F90

!/*TEST
!
!   build:
!  requires: !complex
!  depends:  ex13f90aux.F90
!
!TEST*/



Scott


On 11/19/17 10:48 AM, Smith, Barry F. wrote:


   Scott,

I asked you this before and you responded but I forgot and lost your 
response.

For tests that rely on multiple source files can the test harness handle 
it? How? For example src/vec/vec/examples/tutorials/ex21f90.F90

Is there a list of all keywords that are searched for when generating the 
tests from the source code?

Thanks

Barry



--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756


Re: [petsc-dev] nightlybuilds (next vs next-tmp)

2017-11-16 Thread Scott Kruger




   ./config/gmakegentest.py --petsc-arch=arch-master-debug

I have encountered a strange behavior, perhaps only on my machine, where 
if I do not run


   PETSC_ARCH=arch-master-debug 
./config/gmakegentest.py --petsc-arch=arch-master-debug


then it does not update correctly.



I don't know what's going on.

The key lines are:

parser.add_option('--petsc-arch', help='Set PETSC_ARCH different 
from environment', default=os.environ.get('PETSC_ARCH'))


...

main(petsc_arch=opts.petsc_arch, output=opts.output, ...)




So certainly your second case is redundant.  The first problem
indicates a problem with config/gmakegen.py since I inherit from
that.  I'm at SC right now -- I can take a look later.
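The default-from-environment idiom in the key lines above can be reproduced in isolation with argparse (a sketch, not the actual gmakegentest.py parser), showing that the --petsc-arch flag wins when given and the environment variable is only a fallback:

```python
import argparse
import os

def build_parser():
    p = argparse.ArgumentParser()
    # Same idiom as gmakegentest.py: the default is read from the
    # environment at the time the parser is constructed.
    p.add_argument('--petsc-arch',
                   default=os.environ.get('PETSC_ARCH'),
                   help='Set PETSC_ARCH different from environment')
    return p

os.environ['PETSC_ARCH'] = 'arch-master-debug'
from_env = build_parser().parse_args([]).petsc_arch
explicit = build_parser().parse_args(['--petsc-arch', 'arch-opt']).petsc_arch
print(from_env, explicit)
# -> arch-master-debug arch-opt
```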


Scott





   Thanks,

      Matt

this parses all the examples and sets up the scripts that are run to
do the testing. Then use, for example,

   make -f gmakefile test globsearch='*heat*'

to run all tests that have heat in the example name or path. Or you
can do


   make -f gmakefile test globsearch='dm*'

to run all tests in the dm directories. Sometimes you need a little
trial and error to get the globsearch right to run your example and
not others.

You will get a little frustrated the first couple times you do it,
just bug us and we'll help you get past the stumbling blocks.

   Barry









 >
 >   Matt
 >
 > --Richard
 >
 >
 >
 > >
 > >    Matt
 > >
 > > All logs record time. And Karl's script summarizes those times
on the
 > > dashboard. For eg:
 > >
 > >
http://ftp.mcs.anl.gov/pub/petsc/nightlylogs/archive/2017/11/11/maint.html

 > >
 > > If you want to do some analysis on those times - you can grab the
 > > [historical] logs and run the required analysis.
 > >
 > > Satish
 > >
 > >
 > >
 > > --
 > > What most experimenters take for granted before they begin
their experiments is infinitely more interesting than any results to
which their experiments lead.
 > > -- Norbert Wiener
 > >
 > > https://www.cse.buffalo.edu/~knepley/

 >
 >
 >
 >
 >
 > --
 > What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
 > -- Norbert Wiener
 >
 > https://www.cse.buffalo.edu/~knepley/





--
What most experimenters take for granted before they begin their 
experiments is infinitely more interesting than any results to which 
their experiments lead.

-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756


Re: [petsc-dev] nightlybuilds (next vs next-tmp)

2017-11-16 Thread Scott Kruger



On 11/15/17 8:48 PM, Smith, Barry F. wrote:







For those of us who have no idea how to do this, could someone please give me a 
pointer or two on where to look for an example or two or some documentation? I 
should probably be spending a few minutes a day converting some examples, but I 
don't know how or where to start.

There is a manual chapter on the test system, but for cut & paste semantics, 
you can look at SNES which has a lot of converted examples.
Basically, you take each test entry from the makefile, and move it into the 
source file itself.


   Richard,

Scott wrote a tool to semi-automatically do it from the makefile but sadly the tool is currently broken (it had no nightly testing) and like most python code is undebuggable. 



I think it's still useful just to generate some boilerplate to get
you going, but yes, you should not expect it to work.  And it's
not just that it wasn't tested; it's that a lot of requested
features were close to impossible to translate without me
learning more about NLP :-)

For example:
cd src/ksp/ksp/examples/tutorials
$PETSC_DIR/bin/maint/convertExamples.py -s $PWD

There will be a bunch of new_ex* files.  You can then copy
over those tests and start editing (start with test -> testset).
It will at least get your editing going, but a lot of tests
can/should be merged, the for-loop syntax is wrong, etc.

src/ksp/ksp/examples/tutorials/ex10.c
is what I used in my own development of requested
features, so it is also an exemplar.


 Anyway, after you have put a test in the example source code manually, 
as Matt says, you run from PETSC_DIR


 ./config/gmakegentest.py

this parses all the examples and sets up the scripts that are run to do the 
testing. Then use, for example,

   make -f gmakefile test globsearch='*heat*'

to run all tests that have heat in the example name or path. Or you can do


   make -f gmakefile test globsearch='dm*'

to run all tests in the dm directories. Sometimes you need a little trial and 
error to get the globsearch right to run your example and not others.



I usually do this:

make -f gmakefile print-test globsearch='dm*'

Then, when the tests printed out match the search I want, I edit the 
command line to change print-test to test.




You will get a little frustrated the first couple times you do it, just bug us 
and we'll help you get past the stumbling blocks.



After editing a file, it's good to test the parsing to make
sure you do not have errors (it's YAML format -- easy to get
something wrong).

For example, in ksp/ksp/examples/tutorials:
$PETSC_DIR/config/testparse.py -t ex10.c -v1

testparse.py is the parser used by gmakegentest.py
so if it passes without error, at least you have that
part correct.
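A toy version of the extraction step testparse.py performs (illustrative only; the real parser lives in config/testparse.py and handles much more):

```python
import re

source = """\
int main(void) { return 0; }

/*TEST

  test:
    suffix: 2
    nsize: 2

TEST*/
"""

def extract_test_block(text):
    """Return the text between /*TEST and TEST*/, or None if absent."""
    m = re.search(r"/\*TEST(.*?)TEST\*/", text, re.DOTALL)
    return m.group(1) if m else None

block = extract_test_block(source)
# Crude sanity check that each non-empty line looks like YAML "key: value".
keys = [line.split(':')[0].strip() for line in block.splitlines()
        if line.strip()]
print(keys)
# -> ['test', 'suffix', 'nsize']
```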


Scott





   Matt
  
--Richard






Matt

All logs record time. And Karl's script summarizes those times on the
dashboard. For eg:

http://ftp.mcs.anl.gov/pub/petsc/nightlylogs/archive/2017/11/11/maint.html

If you want to do some analysis on those times - you can grab the
[historical] logs and run the required analysis.

Satish



--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/






--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/




--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756


Re: [petsc-dev] broken nightlybuilds (next vs next-tmp)

2017-11-13 Thread Scott Kruger



On 11/11/17 11:47 AM, Jed Brown wrote:

The way I see it - a broken next [where folks can't easily figure out
who or which commit is responsible for the breakages] - doesn't help
much..


The fundamental problem here is that we aren't accurate enough at
placing blame and getting the appropriate person to fix it.  It doesn't
help that we are a distributed team and have plenty of our own
obligations.  I can't fix something while I'm teaching class or meeting
with students, for example.  But we should all be able to get to it
within a day, either to withdraw the branch from 'next' or to actually
fix it.

I think a lot of our noise in 'next' is "stupid shit", like compilation
failing on some architecture.  Automating a very limited test suite
running on PRs within minutes should help a lot to deal with that.  More
subtle interaction problems can and should continue to be dealt with via
'next'.


I agree with both points here, which is why I created
bin/maint/exampleslog.py, which is now in master but not
hooked up in runhtml.py or linked from the nightlies.

The idea is that it assigns responsibility to the last
committer of a given example.  While this is obviously not
perfect, if someone edited an example recently, they are in
the best position to take a quick look at what the problem
is (hence it assigns responsibility and not blame, since you
don't know where the blame lies).

This would distribute responsibility rather than having a
single person do everything, and it doesn't require
a huge paradigm shift.
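The grouping step this describes can be sketched with hypothetical data (the real exampleslog.py derives the last committer of each failing example from git):

```python
from collections import defaultdict

def group_by_committer(failures):
    """Map each last-committer to the list of examples they touched last."""
    by_committer = defaultdict(list)
    for example, committer in failures:
        by_committer[committer].append(example)
    return dict(by_committer)

# Hypothetical nightly failures paired with the last committer of each file.
failures = [
    ("src/snes/tutorials/ex56.c", "alice"),
    ("src/ts/tutorials/ex26.c",   "bob"),
    ("src/snes/tutorials/ex13.c", "alice"),
]

report = group_by_committer(failures)
for committer, examples in sorted(report.items()):
    print(f"{committer}: {', '.join(examples)}")
```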

Scott




--
Tech-X Corporation   kru...@txcorp.com
5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
Boulder, CO 80303            Fax:   (303) 448-7756

