[petsc-dev] IS block size meaning

2018-08-14 Thread Vaclav Hapla
Hello

I would like to move the discussion from the PR
https://bitbucket.org/petsc/petsc/pull-requests/1076/fix-issetblocksize_general
because it's quite a general concern.

It's about the meaning of block size for IS.

Barry thinks that IS block size is something completely different from Vec
block size:
Blocked IS "consists of continuous runs of entries where the first entry of a
run is divisible by the bs, so for example: for a block size of 2 the IS 0 1 4 5
10 11 is a block IS. 0 1 3 5 10 11 does not have a block size. Another way of
looking at it is that an ISGeneral with a given blocksize can be converted to an
ISCreateBlock() while other ISs are not (except with the trivial block size of
1)."

Barry originally wanted to enforce this property for ISGENERAL. But currently it
is only checked in ISSetBlockSize_General(), only under PETSC_DEBUG, and only if
the indices have already been set (see the PR description and diff); the
documentation does not make it obvious either. There is otherwise nothing
preventing one from breaking the "block-wise contiguity" requirement.
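
For concreteness, a minimal sketch of the two routes for the example above
(bs = 2, entries 0 1 4 5 10 11, i.e. blocks starting at 0, 4, 10); purely
illustrative, error checking omitted:

  IS             isg, isb;
  const PetscInt idx[]  = {0, 1, 4, 5, 10, 11}; /* block-wise contiguous for bs = 2 */
  const PetscInt bidx[] = {0, 2, 5};            /* the same set as block indices    */

  ierr = ISCreateGeneral(PETSC_COMM_SELF, 6, idx, PETSC_COPY_VALUES, &isg);CHKERRQ(ierr);
  ierr = ISSetBlockSize(isg, 2);CHKERRQ(ierr);  /* the property is only checked in debug mode */
  ierr = ISCreateBlock(PETSC_COMM_SELF, 2, 3, bidx, PETSC_COPY_VALUES, &isb);CHKERRQ(ierr);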

At least Matt and the other contributors to HDF5 I/O for IS and DMPlex
(including me) understood "block size" the same way Vec does, without that
additional requirement. In this context, an IS is used just as an "integer
vector". For instance, in the dataset /viz/topology/cells in the DMPlex HDF5
I/O, each block is an element and each entry is a vertex index.
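
To illustrate this "integer vector" usage: two triangles sharing an edge could
be stored as a single IS with bs = 3 (one block per cell, entries are vertex
numbers), even though the values are clearly not block-wise contiguous. A
minimal sketch, error checking omitted:

  IS             cells;
  const PetscInt conn[] = {0, 1, 2,   1, 2, 3}; /* cell 0 and cell 1 share vertices 1 and 2 */

  ierr = ISCreateGeneral(PETSC_COMM_SELF, 6, conn, PETSC_COPY_VALUES, &cells);CHKERRQ(ierr);
  ierr = ISSetBlockSize(cells, 3);CHKERRQ(ierr); /* bs = 3: one block per triangle */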

My opinion is that if anybody wants an IS with Barry's original meaning of
"blocked", they can just use ISCreateBlock(), which really enforces block-wise
contiguity through its API and data structure. There could also be an additional
standalone function ISCheckBlockwiseContiguous() with the same checking code as
is currently in ISSetBlockSize_General(), as sketched below.
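
A hedged sketch of what such a (currently non-existent) function could look
like - the name, signature, and semantics are only a proposal:

  PetscErrorCode ISCheckBlockwiseContiguous(IS is, PetscInt bs, PetscBool *flg)
  {
    PetscInt        n, i, j;
    const PetscInt *idx;
    PetscErrorCode  ierr;

    PetscFunctionBegin;
    *flg = PETSC_TRUE;
    ierr = ISGetLocalSize(is, &n);CHKERRQ(ierr);
    if (n % bs) {*flg = PETSC_FALSE; PetscFunctionReturn(0);}
    ierr = ISGetIndices(is, &idx);CHKERRQ(ierr);
    for (i = 0; i < n && *flg; i += bs) {
      if (idx[i] % bs) *flg = PETSC_FALSE;              /* run must start at a multiple of bs */
      for (j = 1; j < bs && *flg; j++) {
        if (idx[i+j] != idx[i] + j) *flg = PETSC_FALSE; /* run must be contiguous */
      }
    }
    ierr = ISRestoreIndices(is, &idx);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }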

On the other hand, if I think about an alternative, it could perhaps be
PetscSection. But it would take quite some manpower to implement PetscSection
I/O and employ it in DMPlex I/O, and I'm not sure the outcome would be worth it
for users.

Opinions?

Vaclav

Re: [petsc-dev] GAMG and custom MatMults in smoothers

2018-07-02 Thread Vaclav Hapla
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

uÝ                     1 1.0 8.0012e+00 8.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 56  0  0  0  0  56  0  0  0  0     0


Memory usage is given in bytes:

Object Type  Creations   Destructions Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Viewer     1              0            0     0.

Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 0.000205183
Average time for zero size MPI_Send(): 0.000130266
#PETSc Option Table entries:
-log_view
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 
sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --with-cc="/usr/bin/xcrun -sdk macosx10.13 clang" 
--with-matlab=1 --with-matlab-arch=maci64 --with-matlab-engine 
--with-matlab-socket=0 --with-hdf5-dir=/opt/miniconda3/envs/salvus 
--download-mpich --with-valgrind --with-x=0 
PETSC_ARCH=arch-mac-mpich-matlab-hdf5
-
Libraries compiled on 2018-06-19 13:50:29 on vhapla-macbook-pro 
Machine characteristics: Darwin-17.6.0-x86_64-i386-64bit
Using PETSc directory: /Users/vhapla/petsc1
Using PETSc arch: arch-mac-mpich-matlab-hdf5
-

Using C compiler: /Users/vhapla/petsc1/arch-mac-mpich-matlab-hdf5/bin/mpicc
-Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas 
-fstack-protector -Qunused-arguments -fvisibility=hidden -g3  
-

Using include paths: -I/Users/vhapla/petsc1/include 
-I/Users/vhapla/petsc1/arch-mac-mpich-matlab-hdf5/include 
-I/opt/miniconda3/envs/salvus/include 
-I/Applications/MATLAB_R2017b.app/extern/include -I/usr/local/include
-

Using C linker: /Users/vhapla/petsc1/arch-mac-mpich-matlab-hdf5/bin/mpicc
Using libraries: -Wl,-rpath,/Users/vhapla/petsc1/arch-mac-mpich-matlab-hdf5/lib 
-L/Users/vhapla/petsc1/arch-mac-mpich-matlab-hdf5/lib -lpetsc 
-Wl,-rpath,/opt/miniconda3/envs/salvus/lib -L/opt/miniconda3/envs/salvus/lib 
-L/Applications/MATLAB_R2017b.app/bin/maci64 
-L/Applications/MATLAB_R2017b.app/extern/lib/maci64 -llapack -lblas -lhdf5_hl 
-lhdf5 -leng -lmex -lmx -lmat -lut -licudata -licui18n -licuuc -lstdc++ -ldl
-



      ##########################################################
      #                                                        #
      #                       WARNING!!!                       #
      #                                                        #
      #   This code was compiled with a debugging option,      #
      #   To get timing results run ./configure                #
      #   using --with-debugging=no, the performance will      #
      #   be generally two or three times faster.              #
      #                                                        #
      ##########################################################




log.c
Description: Binary data
2. 7. 2018 v 17:57, Stefano Zampini <stefano.zamp...@gmail.com>:
The intent was not to show you a deadlock; was to provide you evidence that the
PetscLogEvent model is meant to have the same event created collectively
(hopefully at class registration) on all processes. See also PetscLogViewDefault.

2018-07-02 18:45 GMT+03:00 Vaclav Hapla <vaclav.ha...@erdw.ethz.ch>:
Thanks for the MWE. But it behaves exactly as I would anticipate, see log. I
think that naming the event differently for each rank is not good but
fortunately PETSc just ignores this for rank > 0. Important is the event id
which is the same. I correctly get that the time spent in this event is slightly
more than 8 s and the ratio between the shortest and the longest time among all
ranks is 8.0.

Where's the deadlock? Note that PetscLogEventBegin/End and PetscStageLogPush/Pop
are not collective!

And thanks for another argument for better specificity of the log view. If the
same event would be nested at several levels, the obtained total time wouldn't
say anything useful.

Thanks
Vaclav

2. 7. 2018 v 17:20, Stefano Zampini <stefano.zamp...@gmail.com>:
I don't see why this should lead to deadlock? With current class-wise events you
can already have many simultaneous instances of

Re: [petsc-dev] GAMG and custom MatMults in smoothers

2018-07-02 Thread Vaclav Hapla
Thanks for the MWE. But it behaves exactly as I would anticipate, see log. I
think that naming the event differently for each rank is not good but
fortunately PETSc just ignores this for rank > 0. Important is the event id
which is the same. I correctly get that the time spent in this event is slightly
more than 8 s and the ratio between the shortest and the longest time among all
ranks is 8.0.

Where's the deadlock? Note that PetscLogEventBegin/End and PetscStageLogPush/Pop
are not collective!

And thanks for another argument for better specificity of the log view. If the
same event would be nested at several levels, the obtained total time wouldn't
say anything useful.

Thanks
Vaclav
*** WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document ***


---------------------------------------------- PETSc Performance Summary: ----------------------------------------------



      ##########################################################
      #                                                        #
      #                       WARNING!!!                       #
      #                                                        #
      #   This code was compiled with a debugging option,      #
      #   To get timing results run ./configure                #
      #   using --with-debugging=no, the performance will      #
      #   be generally two or three times faster.              #
      #                                                        #
      ##########################################################


./ex on a arch-mac-mpich-matlab-hdf5 named vhapla-macbook-pro with 8 
processors, by vhapla Mon Jul  2 17:27:41 2018
Using Petsc Development GIT revision: v3.9.2-600-g2e10669da5  GIT Date: 
2018-06-15 17:49:51 +0200

 Max   Max/MinAvg  Total 
Time (sec):   8.012e+00  1.6   8.012e+00
Objects:  1.000e+00  1.0   1.000e+00
Flop: 0.000e+00  0.0   0.000e+00  0.000e+00
Flop/sec:0.000e+00  0.0   0.000e+00  0.000e+00
Memory:   3.819e+04  1.0  3.055e+05
MPI Messages: 0.000e+00  0.0   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00  0.0   0.000e+00  0.000e+00
MPI Reductions:   1.200e+01  1.0

Flop counting convention: 1 flop = 1 real number operation of type 
(multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N 
flop
and VecAXPY() for complex vectors of length N --> 
8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    counts   %Total     Avg        %Total    counts   %Total
 0:      Main Stage: 8.0115e+00 100.0%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00       0.0%  4.000e+00  33.3%


See the 'Profiling' chapter of the users' manual for details on interpreting 
output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and 
PetscLogStagePop().
  %T - percent time in this phase %F - percent flop in this phase
  %M - percent messages in this phase %L - percent message lengths in 
this phase
  %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all 
processors)



      ##########################################################
      #                                                        #
      #                       WARNING!!!                       #
      #                                                        #
      #   This code was compiled with a debugging option,      #
      #   To get timing results run ./configure                #
      #   using --with-debugging=no, the performance will      #
      #   be generally two or three times faster.              #
      #                                                        #
      ##########################################################


EventCount  

Re: [petsc-dev] GAMG and custom MatMults in smoothers

2018-07-02 Thread Vaclav Hapla


> 2. 7. 2018 v 16:38, Matthew Knepley :
> 
> On Mon, Jul 2, 2018 at 9:33 AM Vaclav Hapla  <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
> 
> 
>> 2. 7. 2018 v 15:50, Matthew Knepley > <mailto:knep...@gmail.com>>:
>> 
>> On Mon, Jul 2, 2018 at 8:28 AM Vaclav Hapla > <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
>>> 2. 7. 2018 v 15:05, Matthew Knepley >> <mailto:knep...@gmail.com>>:
>>> 
>>> On Mon, Jul 2, 2018 at 7:54 AM Vaclav Hapla >> <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
>>> 
>>> 
>>>> 2. 7. 2018 v 14:48, Matthew Knepley >>> <mailto:knep...@gmail.com>>:
>>>> 
>>>> On Mon, Jul 2, 2018 at 3:48 AM Vaclav Hapla >>> <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
>>>> Barry wrote:
>>>>>   This could get ugly real fast, for example, for vector operations, 
>>>>> there may be dozens of named vectors and each one gets its own logging? 
>>>>> You'd have to make sure that only the objects you care about get named, 
>>>>> is that possible?
>>>>> 
>>>>>I don't know if there is a good solution within the PETSc logging 
>>>>> infrastructure to get what you want but maybe what you propose is the 
>>>>> best possible.
>>>> 
>>>> As I suggest, this behavior would be only triggered by a specific option.
>>>> 
>>>> I think there are actually 4 strings which could be used as an event name 
>>>> suffix in log view:
>>>> 1) name
>>>> 2) prefix
>>>> 3) type
>>>> 4) custom string (set by something like PetscObjectSetLogViewSuffix)
>>>> I think the best would be to let user choose by offering 
>>>> -log_view_by_{name,prefix,type,suffix}.
>>>> 
>>>> For example, with -log_view_by_prefix, you could readily distinguish 
>>>> PCTelescope outer and inner apply, because you would see a separate 
>>>> "PCApply (telescope_)" event.
>>>> With -log_view_by_type, you would see PCApply (telescope).
>>>> 
>>>> I think this would be useful because the current class-wide events like 
>>>> MatMult or PCApply aggregate very different operations from which some are 
>>>> for free and some form hotspots.
>>>> 
>>>> 
>>>> Stefano wrote:
>>>>> The issue with this sort of “dynamic” logging is that now PETSc requires 
>>>>> PetscLogEvent created during the registration of the class, so that all 
>>>>> the ranks in PETSC_COMM_WORLD have the same events registered.
>>>>> What you propose is not generally supported for this specific reason.
>>>>> 
>>>>> Your “log_name” may work if users register their own classes (with their 
>>>>> own LogEvents created properly), and currently we don’t have support 
>>>>> (maybe I’m wrong) to add an “InitializePackage” method for the users’ 
>>>>> registered classes.
>>>> 
>>>> 
>>>> I don't agree. What I suggest is basically an ability to allow 
>>>> automatically created object-wise events, so it _can't_ be managed during 
>>>> the class registration. In presence of respective option, the event would 
>>>> be created during PetscLogEventBegin by taking the class-wide event's 
>>>> name, concatenating the suffix and registering a new event. The event id 
>>>> would be stored in the PetscObject structure.
>>>> 
>>>> 
>>>> Matt wrote:
>>>>> As people have pointed out, this would not work well for Events. However, 
>>>>> this is exactly what stages are for.
>>>>> Use separate stages for the different types of MatMult. I did this, for 
>>>>> example, when looking at performance
>>>>> on different MG levels.
>>>> 
>>>> Yes, performance on different MG levels is a nice use case. I don't 
>>>> understand how you inject stages into MatMults. To me it's exactly the 
>>>> same problem as with events - you have to define MatMult_custom where you 
>>>> take the original mult and wrap into PetscStageLogPush/Pop and then use 
>>>> MatSetOperation to redefine MatMult. Or do you mean something more elegant?
>>>> 
>>>> You could do that, but usually I think of stages as being structural. I 
>>>> think for your example I would push/pop the stage
>>>> i

Re: [petsc-dev] GAMG and custom MatMults in smoothers

2018-07-02 Thread Vaclav Hapla


> 2. 7. 2018 v 15:50, Matthew Knepley :
> 
> On Mon, Jul 2, 2018 at 8:28 AM Vaclav Hapla  <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
>> 2. 7. 2018 v 15:05, Matthew Knepley > <mailto:knep...@gmail.com>>:
>> 
>> On Mon, Jul 2, 2018 at 7:54 AM Vaclav Hapla > <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
>> 
>> 
>>> 2. 7. 2018 v 14:48, Matthew Knepley >> <mailto:knep...@gmail.com>>:
>>> 
>>> On Mon, Jul 2, 2018 at 3:48 AM Vaclav Hapla >> <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
>>> Barry wrote:
>>>>   This could get ugly real fast, for example, for vector operations, there 
>>>> may be dozens of named vectors and each one gets its own logging? You'd 
>>>> have to make sure that only the objects you care about get named, is that 
>>>> possible?
>>>> 
>>>>I don't know if there is a good solution within the PETSc logging 
>>>> infrastructure to get what you want but maybe what you propose is the best 
>>>> possible.
>>> 
>>> As I suggest, this behavior would be only triggered by a specific option.
>>> 
>>> I think there are actually 4 strings which could be used as an event name 
>>> suffix in log view:
>>> 1) name
>>> 2) prefix
>>> 3) type
>>> 4) custom string (set by something like PetscObjectSetLogViewSuffix)
>>> I think the best would be to let user choose by offering 
>>> -log_view_by_{name,prefix,type,suffix}.
>>> 
>>> For example, with -log_view_by_prefix, you could readily distinguish 
>>> PCTelescope outer and inner apply, because you would see a separate 
>>> "PCApply (telescope_)" event.
>>> With -log_view_by_type, you would see PCApply (telescope).
>>> 
>>> I think this would be useful because the current class-wide events like 
>>> MatMult or PCApply aggregate very different operations from which some are 
>>> for free and some form hotspots.
>>> 
>>> 
>>> Stefano wrote:
>>>> The issue with this sort of “dynamic” logging is that now PETSc requires 
>>>> PetscLogEvent created during the registration of the class, so that all 
>>>> the ranks in PETSC_COMM_WORLD have the same events registered.
>>>> What you propose is not generally supported for this specific reason.
>>>> 
>>>> Your “log_name” may work if users register their own classes (with their 
>>>> own LogEvents created properly), and currently we don’t have support 
>>>> (maybe I’m wrong) to add an “InitializePackage” method for the users’ 
>>>> registered classes.
>>> 
>>> 
>>> I don't agree. What I suggest is basically an ability to allow 
>>> automatically created object-wise events, so it _can't_ be managed during 
>>> the class registration. In presence of respective option, the event would 
>>> be created during PetscLogEventBegin by taking the class-wide event's name, 
>>> concatenating the suffix and registering a new event. The event id would be 
>>> stored in the PetscObject structure.
>>> 
>>> 
>>> Matt wrote:
>>>> As people have pointed out, this would not work well for Events. However, 
>>>> this is exactly what stages are for.
>>>> Use separate stages for the different types of MatMult. I did this, for 
>>>> example, when looking at performance
>>>> on different MG levels.
>>> 
>>> Yes, performance on different MG levels is a nice use case. I don't 
>>> understand how you inject stages into MatMults. To me it's exactly the same 
>>> problem as with events - you have to define MatMult_custom where you take 
>>> the original mult and wrap into PetscStageLogPush/Pop and then use 
>>> MatSetOperation to redefine MatMult. Or do you mean something more elegant?
>>> 
>>> You could do that, but usually I think of stages as being structural. I 
>>> think for your example I would push/pop the stage
>>> inside your Mat operation wrapper (I don't see why you need another one), 
>>> and this behavior could be controlled with
>>> another option so you could turn it off.
>> 
>> I meant hierarchies of typically Mats or PCs, where you don't define any 
>> custom operations but compose together existing types (which should be 
>> promoted I believe). So no "my" wrapper. As I wrote below:
>> 
>>>>>  Think e.g. of having additive MATCOMPOSITE wrapping multiplicative 
>>>&

Re: [petsc-dev] GAMG and custom MatMults in smoothers

2018-07-02 Thread Vaclav Hapla


> 2. 7. 2018 v 15:20, Stefano Zampini :
> 
> 
> 
> 2018-07-02 11:48 GMT+03:00 Vaclav Hapla  <mailto:vaclav.ha...@erdw.ethz.ch>>:
> Barry wrote:
>>   This could get ugly real fast, for example, for vector operations, there 
>> may be dozens of named vectors and each one gets its own logging? You'd have 
>> to make sure that only the objects you care about get named, is that 
>> possible?
>> 
>>I don't know if there is a good solution within the PETSc logging 
>> infrastructure to get what you want but maybe what you propose is the best 
>> possible.
> 
> As I suggest, this behavior would be only triggered by a specific option.
> 
> I think there are actually 4 strings which could be used as an event name 
> suffix in log view:
> 1) name
> 2) prefix
> 3) type
> 4) custom string (set by something like PetscObjectSetLogViewSuffix)
> I think the best would be to let user choose by offering 
> -log_view_by_{name,prefix,type,suffix}.
> 
> For example, with -log_view_by_prefix, you could readily distinguish 
> PCTelescope outer and inner apply, because you would see a separate "PCApply 
> (telescope_)" event.
> With -log_view_by_type, you would see PCApply (telescope).
> 
> I think this would be useful because the current class-wide events like 
> MatMult or PCApply aggregate very different operations from which some are 
> for free and some form hotspots.
> 
> 
> Stefano wrote:
>> The issue with this sort of “dynamic” logging is that now PETSc requires 
>> PetscLogEvent created during the registration of the class, so that all the 
>> ranks in PETSC_COMM_WORLD have the same events registered.
>> What you propose is not generally supported for this specific reason.
>> 
>> Your “log_name” may work if users register their own classes (with their own 
>> LogEvents created properly), and currently we don’t have support (maybe I’m 
>> wrong) to add an “InitializePackage” method for the users’ registered 
>> classes.
> 
> 
> I don't agree. What I suggest is basically an ability to allow automatically 
> created object-wise events, so it _can't_ be managed during the class 
> registration. In presence of respective option, the event would be created 
> during PetscLogEventBegin by taking the class-wide event's name, 
> concatenating the suffix and registering a new event. The event id would be 
> stored in the PetscObject structure.
> 
> 
> As I said before, now PETSc event model assumes the events are created at 
> class registration.

I don't think there's such an assumption. You can register an event wherever you
want. There's nothing special about the class-wise events other than that they
are registered during *InitializePackage (which can, however, be called much
later than PetscInitialize). They are otherwise by no means tied to that class.

There are even examples which do that:
http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Profiling/PetscLogEventRegister.html
 
<http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Profiling/PetscLogEventRegister.html>

> I'm not saying your dynamic thing cannot be done; I'm saying it will likely 
> deadlock

I don't see why this should lead to deadlock? With current class-wise events 
you can already have many simultaneous instances of the same event at once.

Thanks

Vaclav

>  
> 
> Matt wrote:
>> As people have pointed out, this would not work well for Events. However, 
>> this is exactly what stages are for.
>> Use separate stages for the different types of MatMult. I did this, for 
>> example, when looking at performance
>> on different MG levels.
> 
> Yes, performance on different MG levels is a nice use case. I don't 
> understand how you inject stages into MatMults. To me it's exactly the same 
> problem as with events - you have to define MatMult_custom where you take the 
> original mult and wrap into PetscStageLogPush/Pop and then use 
> MatSetOperation to redefine MatMult. Or do you mean something more elegant?
> 
> Thanks
> 
> Vaclav
> 
> 
> 
>> 29. 6. 2018 v 22:42, Smith, Barry F. > <mailto:bsm...@mcs.anl.gov>>:
>> 
>> 
>> 
>>> On Jun 29, 2018, at 9:33 AM, Vaclav Hapla >> <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
>>> 
>>> 
>>> 
>>>> 22. 6. 2018 v 17:47, Smith, Barry F. >>> <mailto:bsm...@mcs.anl.gov>>:
>>>> 
>>>> 
>>>> 
>>>>> On Jun 22, 2018, at 5:43 AM, Pierre Jolivet >>>> <mailto:pierre.joli...@enseeiht.fr>> wrote:
>>>>> 
>>>>> Hello,
>>>>> I’m solving a system using a MATSHELL and PCG

Re: [petsc-dev] GAMG and custom MatMults in smoothers

2018-07-02 Thread Vaclav Hapla


> 2. 7. 2018 v 15:05, Matthew Knepley :
> 
> On Mon, Jul 2, 2018 at 7:54 AM Vaclav Hapla  <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
> 
> 
>> 2. 7. 2018 v 14:48, Matthew Knepley > <mailto:knep...@gmail.com>>:
>> 
>> On Mon, Jul 2, 2018 at 3:48 AM Vaclav Hapla > <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
>> Barry wrote:
>>>   This could get ugly real fast, for example, for vector operations, there 
>>> may be dozens of named vectors and each one gets its own logging? You'd 
>>> have to make sure that only the objects you care about get named, is that 
>>> possible?
>>> 
>>>I don't know if there is a good solution within the PETSc logging 
>>> infrastructure to get what you want but maybe what you propose is the best 
>>> possible.
>> 
>> As I suggest, this behavior would be only triggered by a specific option.
>> 
>> I think there are actually 4 strings which could be used as an event name 
>> suffix in log view:
>> 1) name
>> 2) prefix
>> 3) type
>> 4) custom string (set by something like PetscObjectSetLogViewSuffix)
>> I think the best would be to let user choose by offering 
>> -log_view_by_{name,prefix,type,suffix}.
>> 
>> For example, with -log_view_by_prefix, you could readily distinguish 
>> PCTelescope outer and inner apply, because you would see a separate "PCApply 
>> (telescope_)" event.
>> With -log_view_by_type, you would see PCApply (telescope).
>> 
>> I think this would be useful because the current class-wide events like 
>> MatMult or PCApply aggregate very different operations from which some are 
>> for free and some form hotspots.
>> 
>> 
>> Stefano wrote:
>>> The issue with this sort of “dynamic” logging is that now PETSc requires 
>>> PetscLogEvent created during the registration of the class, so that all the 
>>> ranks in PETSC_COMM_WORLD have the same events registered.
>>> What you propose is not generally supported for this specific reason.
>>> 
>>> Your “log_name” may work if users register their own classes (with their 
>>> own LogEvents created properly), and currently we don’t have support (maybe 
>>> I’m wrong) to add an “InitializePackage” method for the users’ registered 
>>> classes.
>> 
>> 
>> I don't agree. What I suggest is basically an ability to allow automatically 
>> created object-wise events, so it _can't_ be managed during the class 
>> registration. In presence of respective option, the event would be created 
>> during PetscLogEventBegin by taking the class-wide event's name, 
>> concatenating the suffix and registering a new event. The event id would be 
>> stored in the PetscObject structure.
>> 
>> 
>> Matt wrote:
>>> As people have pointed out, this would not work well for Events. However, 
>>> this is exactly what stages are for.
>>> Use separate stages for the different types of MatMult. I did this, for 
>>> example, when looking at performance
>>> on different MG levels.
>> 
>> Yes, performance on different MG levels is a nice use case. I don't 
>> understand how you inject stages into MatMults. To me it's exactly the same 
>> problem as with events - you have to define MatMult_custom where you take 
>> the original mult and wrap into PetscStageLogPush/Pop and then use 
>> MatSetOperation to redefine MatMult. Or do you mean something more elegant?
>> 
>> You could do that, but usually I think of stages as being structural. I 
>> think for your example I would push/pop the stage
>> inside your Mat operation wrapper (I don't see why you need another one), 
>> and this behavior could be controlled with
>> another option so you could turn it off.
> 
> I meant hierarchies of typically Mats or PCs, where you don't define any 
> custom operations but compose together existing types (which should be 
> promoted I believe). So no "my" wrapper. As I wrote below:
> 
>>>>  Think e.g. of having additive MATCOMPOSITE wrapping multiplicative 
>>>> MATCOMPOSITE wrapping MATTRANSPOSE wrapping MATAIJ. You want to measure 
>>>> this MATAIJ instance's MatMult separately but you surely don't want to 
>>>> rewrite implementation of MatMult_Transpose or force yourself to use 
>>>> MATSHELL just to hang the events on MatMult*.
> 
> 
> Its not enough to make separate stages for additive MC, multiplicative MC, 
> and MT? If you want stages for every single
> combination created dynamically, you can push another st

Re: [petsc-dev] GAMG and custom MatMults in smoothers

2018-07-02 Thread Vaclav Hapla


> 2. 7. 2018 v 14:48, Matthew Knepley :
> 
> On Mon, Jul 2, 2018 at 3:48 AM Vaclav Hapla  <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
> Barry wrote:
>>   This could get ugly real fast, for example, for vector operations, there 
>> may be dozens of named vectors and each one gets its own logging? You'd have 
>> to make sure that only the objects you care about get named, is that 
>> possible?
>> 
>>I don't know if there is a good solution within the PETSc logging 
>> infrastructure to get what you want but maybe what you propose is the best 
>> possible.
> 
> As I suggest, this behavior would be only triggered by a specific option.
> 
> I think there are actually 4 strings which could be used as an event name 
> suffix in log view:
> 1) name
> 2) prefix
> 3) type
> 4) custom string (set by something like PetscObjectSetLogViewSuffix)
> I think the best would be to let user choose by offering 
> -log_view_by_{name,prefix,type,suffix}.
> 
> For example, with -log_view_by_prefix, you could readily distinguish 
> PCTelescope outer and inner apply, because you would see a separate "PCApply 
> (telescope_)" event.
> With -log_view_by_type, you would see PCApply (telescope).
> 
> I think this would be useful because the current class-wide events like 
> MatMult or PCApply aggregate very different operations from which some are 
> for free and some form hotspots.
> 
> 
> Stefano wrote:
>> The issue with this sort of “dynamic” logging is that now PETSc requires 
>> PetscLogEvent created during the registration of the class, so that all the 
>> ranks in PETSC_COMM_WORLD have the same events registered.
>> What you propose is not generally supported for this specific reason.
>> 
>> Your “log_name” may work if users register their own classes (with their own 
>> LogEvents created properly), and currently we don’t have support (maybe I’m 
>> wrong) to add an “InitializePackage” method for the users’ registered 
>> classes.
> 
> 
> I don't agree. What I suggest is basically an ability to allow automatically 
> created object-wise events, so it _can't_ be managed during the class 
> registration. In presence of respective option, the event would be created 
> during PetscLogEventBegin by taking the class-wide event's name, 
> concatenating the suffix and registering a new event. The event id would be 
> stored in the PetscObject structure.
> 
> 
> Matt wrote:
>> As people have pointed out, this would not work well for Events. However, 
>> this is exactly what stages are for.
>> Use separate stages for the different types of MatMult. I did this, for 
>> example, when looking at performance
>> on different MG levels.
> 
> Yes, performance on different MG levels is a nice use case. I don't 
> understand how you inject stages into MatMults. To me it's exactly the same 
> problem as with events - you have to define MatMult_custom where you take the 
> original mult and wrap into PetscStageLogPush/Pop and then use 
> MatSetOperation to redefine MatMult. Or do you mean something more elegant?
> 
> You could do that, but usually I think of stages as being structural. I think 
> for your example I would push/pop the stage
> inside your Mat operation wrapper (I don't see why you need another one), and 
> this behavior could be controlled with
> another option so you could turn it off.

I meant hierarchies of typically Mats or PCs, where you don't define any custom 
operations but compose together existing types (which should be promoted I 
believe). So no "my" wrapper. As I wrote below:

>>>  Think e.g. of having additive MATCOMPOSITE wrapping multiplicative 
>>> MATCOMPOSITE wrapping MATTRANSPOSE wrapping MATAIJ. You want to measure 
>>> this MATAIJ instance's MatMult separately but you surely don't want to 
>>> rewrite implementation of MatMult_Transpose or force yourself to use 
>>> MATSHELL just to hang the events on MatMult*.

Thanks

Vaclav


> 
>   Matt
>  
> Thanks
> 
> Vaclav
> 
> 
> 
>> 29. 6. 2018 v 22:42, Smith, Barry F. > <mailto:bsm...@mcs.anl.gov>>:
>> 
>> 
>> 
>>> On Jun 29, 2018, at 9:33 AM, Vaclav Hapla >> <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
>>> 
>>> 
>>> 
>>>> 22. 6. 2018 v 17:47, Smith, Barry F. >>> <mailto:bsm...@mcs.anl.gov>>:
>>>> 
>>>> 
>>>> 
>>>>> On Jun 22, 2018, at 5:43 AM, Pierre Jolivet >>>> <mailto:pierre.joli...@enseeiht.fr>> wrote:
>>>>> 
>>>>> Hello,
>>>>> I’m solving a system us

Re: [petsc-dev] GAMG and custom MatMults in smoothers

2018-07-02 Thread Vaclav Hapla
Barry wrote:
>   This could get ugly real fast, for example, for vector operations, there 
> may be dozens of named vectors and each one gets its own logging? You'd have 
> to make sure that only the objects you care about get named, is that possible?
> 
>I don't know if there is a good solution within the PETSc logging 
> infrastructure to get what you want but maybe what you propose is the best 
> possible.

As I suggest, this behavior would only be triggered by a specific option.

I think there are actually 4 strings which could be used as an event name 
suffix in log view:
1) name
2) prefix
3) type
4) custom string (set by something like PetscObjectSetLogViewSuffix)
I think the best would be to let the user choose by offering 
-log_view_by_{name,prefix,type,suffix}.

For example, with -log_view_by_prefix, you could readily distinguish 
PCTelescope outer and inner apply, because you would see a separate "PCApply 
(telescope_)" event.
With -log_view_by_type, you would see PCApply (telescope).

I think this would be useful because the current class-wide events like MatMult
or PCApply aggregate very different operations, some of which are essentially
free while others are hotspots.


Stefano wrote:
> The issue with this sort of “dynamic” logging is that now PETSc requires 
> PetscLogEvent created during the registration of the class, so that all the 
> ranks in PETSC_COMM_WORLD have the same events registered.
> What you propose is not generally supported for this specific reason.
> 
> Your “log_name” may work if users register their own classes (with their own 
> LogEvents created properly), and currently we don’t have support (maybe I’m 
> wrong) to add an “InitializePackage” method for the users’ registered classes.


I don't agree. What I suggest is basically the ability to create object-wise
events automatically, so it _can't_ be managed during class registration. In the
presence of the respective option, the event would be created during
PetscLogEventBegin by taking the class-wide event's name, concatenating the
suffix, and registering a new event. The event id would be stored in the
PetscObject structure.
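
Purely to illustrate what I have in mind, a hypothetical sketch (none of this
exists in PETSc; the function name, the "<event> (<prefix>)" naming, and caching
the id via composed data are all made up):

  static PetscInt PerObjectEventKey = -1;   /* key for caching the per-object event id */

  static PetscErrorCode PetscLogEventBeginByPrefix_Sketch(const char classEventName[], PetscObject obj)
  {
    PetscInt       id;
    PetscBool      found;
    PetscErrorCode ierr;

    PetscFunctionBegin;
    if (PerObjectEventKey == -1) {ierr = PetscObjectComposedDataRegister(&PerObjectEventKey);CHKERRQ(ierr);}
    ierr = PetscObjectComposedDataGetInt(obj, PerObjectEventKey, id, found);CHKERRQ(ierr);
    if (!found) {                           /* lazily register "<event> (<prefix>)" on first use */
      const char   *prefix;
      char          name[256];
      PetscClassId  classid;
      PetscLogEvent event;

      ierr = PetscObjectGetOptionsPrefix(obj, &prefix);CHKERRQ(ierr);
      ierr = PetscSNPrintf(name, sizeof(name), "%s (%s)", classEventName, prefix ? prefix : "");CHKERRQ(ierr);
      ierr = PetscObjectGetClassId(obj, &classid);CHKERRQ(ierr);
      ierr = PetscLogEventRegister(name, classid, &event);CHKERRQ(ierr);
      id   = (PetscInt)event;
      ierr = PetscObjectComposedDataSetInt(obj, PerObjectEventKey, id);CHKERRQ(ierr);
    }
    ierr = PetscLogEventBegin((PetscLogEvent)id, obj, 0, 0, 0);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

(The lazy, per-rank registration is exactly the part Stefano's collectivity
concern is about.)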


Matt wrote:
> As people have pointed out, this would not work well for Events. However, 
> this is exactly what stages are for.
> Use separate stages for the different types of MatMult. I did this, for 
> example, when looking at performance
> on different MG levels.

Yes, performance on different MG levels is a nice use case. I don't understand
how you inject stages into MatMults. To me it's exactly the same problem as with
events - you have to define a MatMult_custom where you take the original mult,
wrap it in PetscStageLogPush/Pop, and then use MatSetOperation to redefine
MatMult. Or do you mean something more elegant?
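
Concretely, the kind of wrapper I mean might look like the following rough
sketch (not tested; I use the public PetscLogStagePush/Pop here and fetch the
original multiplication with MatGetOperation()):

  static PetscErrorCode (*MatMult_Original)(Mat, Vec, Vec);
  static PetscLogStage  SmootherMultStage;

  static PetscErrorCode MatMult_Custom(Mat A, Vec x, Vec y)
  {
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = PetscLogStagePush(SmootherMultStage);CHKERRQ(ierr);
    ierr = (*MatMult_Original)(A, x, y);CHKERRQ(ierr);
    ierr = PetscLogStagePop();CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

  /* setup, on all ranks:
     ierr = PetscLogStageRegister("SmootherMatMult", &SmootherMultStage);CHKERRQ(ierr);
     ierr = MatGetOperation(A, MATOP_MULT, (void (**)(void))&MatMult_Original);CHKERRQ(ierr);
     ierr = MatSetOperation(A, MATOP_MULT, (void (*)(void))MatMult_Custom);CHKERRQ(ierr);
  */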

Thanks

Vaclav



> 29. 6. 2018 v 22:42, Smith, Barry F. :
> 
> 
> 
>> On Jun 29, 2018, at 9:33 AM, Vaclav Hapla  wrote:
>> 
>> 
>> 
>>> 22. 6. 2018 v 17:47, Smith, Barry F. :
>>> 
>>> 
>>> 
>>>> On Jun 22, 2018, at 5:43 AM, Pierre Jolivet  
>>>> wrote:
>>>> 
>>>> Hello,
>>>> I’m solving a system using a MATSHELL and PCGAMG.
>>>> The MPIAIJ Mat I’m giving to GAMG has a specific structure (inherited from 
>>>> the MATSHELL) I’d like to exploit during the solution phase when the 
>>>> smoother on the finest level is doing MatMults.
>>>> 
>>>> Is there some way to:
>>>> 1) decouple in -log_view the time spent in the MATSHELL MatMult and in the 
>>>> smoothers MatMult
>>> 
>>> You can register a new event and then inside your MATSHELL MatMult() call 
>>> PetscLogEventBegin/End on your new event.
>>> 
>>>  Note that the MatMult() like will still contain the time for your MatShell 
>>> mult so you will need to subtract it off to get the time for your non-shell 
>>> matmults.
>> 
>> In PERMON, we sometimes have quite complicated hierarchy of wrapped matrices 
>> and want to measure MatMult{,Transpose,Add,TransposeAdd} separately for 
>> particular ones. Think e.g. of having additive MATCOMPOSITE wrapping 
>> multiplicative MATCOMPOSITE wrapping MATTRANSPOSE wrapping MATAIJ. You want 
>> to measure this MATAIJ instance's MatMult separately but you surely don't 
>> want to rewrite implementation of MatMult_Transpose or force yourself to use 
>> MATSHELL just to hang the events on MatMult*.
>> 
>> We had a special wrapper type just adding some prefix to the events for the 
>> given object but this is not nice. What about adding a functionality to 
>> PetscLogEventBegin/End that would distinguish based on the first 
>> PetscObject's name or option prefix? O

Re: [petsc-dev] GAMG and custom MatMults in smoothers

2018-06-29 Thread Vaclav Hapla



> 22. 6. 2018 v 17:47, Smith, Barry F. :
> 
> 
> 
>> On Jun 22, 2018, at 5:43 AM, Pierre Jolivet  
>> wrote:
>> 
>> Hello,
>> I’m solving a system using a MATSHELL and PCGAMG.
>> The MPIAIJ Mat I’m giving to GAMG has a specific structure (inherited from 
>> the MATSHELL) I’d like to exploit during the solution phase when the 
>> smoother on the finest level is doing MatMults.
>> 
>> Is there some way to:
>> 1) decouple in -log_view the time spent in the MATSHELL MatMult and in the 
>> smoothers MatMult
> 
>   You can register a new event and then inside your MATSHELL MatMult() call 
> PetscLogEventBegin/End on your new event.
> 
>Note that the MatMult() like will still contain the time for your MatShell 
> mult so you will need to subtract it off to get the time for your non-shell 
> matmults.

In PERMON, we sometimes have a quite complicated hierarchy of wrapped matrices
and want to measure MatMult{,Transpose,Add,TransposeAdd} separately for
particular ones. Think e.g. of an additive MATCOMPOSITE wrapping a
multiplicative MATCOMPOSITE wrapping a MATTRANSPOSE wrapping a MATAIJ. You want
to measure this MATAIJ instance's MatMult separately, but you surely don't want
to rewrite the implementation of MatMult_Transpose or force yourself to use
MATSHELL just to hang the events on MatMult*.
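
(For completeness, Barry's suggestion above would look roughly like the sketch
below. It is fine for a single MATSHELL, but it illustrates exactly the
per-matrix boilerplate we would like to avoid; the names are illustrative.)

  static PetscLogEvent USER_MatMult_Shell;

  static PetscErrorCode MyShellMult(Mat A, Vec x, Vec y)
  {
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = PetscLogEventBegin(USER_MatMult_Shell, A, x, y, 0);CHKERRQ(ierr);
    /* ... the actual shell matrix-vector product ... */
    ierr = PetscLogEventEnd(USER_MatMult_Shell, A, x, y, 0);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

  /* setup, on all ranks:
     ierr = PetscLogEventRegister("MatMultShell", MAT_CLASSID, &USER_MatMult_Shell);CHKERRQ(ierr);
     ierr = MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MyShellMult);CHKERRQ(ierr);
  */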

We had a special wrapper type that just added a prefix to the events for the
given object, but this is not nice. What about adding functionality to
PetscLogEventBegin/End that would distinguish based on the first PetscObject's
name or options prefix? Optionally, of course, so as not to break anyone relying
on the current behavior - e.g. under something like -log_view_by_name. To me
it's quite an elegant solution working for any PetscObject and any event.

I can do that if I get some upvotes.

Vaclav

> 
>> 2) hardwire a specific MatMult implementation for the smoother on the finest 
>> level
> 
>   In the latest release you do MatSetOperation() to override the normal 
> matrix vector product with anything else you want. 
> 
>> 
>> Thanks in advance,
>> Pierre
>> 
>> PS : here is what I have right now,
>> MatMult  118 1.0 1.0740e+02 1.6 1.04e+13 1.6 1.7e+06 6.1e+05 
>> 0.0e+00 47100 90 98  0  47100 90 98  0 81953703
>> […]
>> PCSetUp2 1.0 8.6513e+00 1.0 1.01e+09 1.7 2.6e+05 4.0e+05 
>> 1.8e+02  5  0 14 10 66   5  0 14 10 68 94598
>> PCApply   14 1.0 8.0373e+01 1.1 9.06e+12 1.6 1.3e+06 6.0e+05 
>> 2.1e+01 45 87 72 78  8  45 87 72 78  8 95365211 // I’m guessing a lot of 
>> time here is being wasted in doing inefficient MatMults on the finest level 
>> but this is only speculation
>> 
>> Same code with -pc_type none -ksp_max_it 13,
>> MatMult   14 1.0 1.2936e+01 1.7 1.35e+12 1.6 2.0e+05 6.1e+05 
>> 0.0e+00 15100 78 93  0  15100 78 93  0 88202079
>> 
>> The grid itself is rather simple (two levels, extremely aggressive 
>> coarsening),
>>   type is MULTIPLICATIVE, levels=2 cycles=v
>>   KSP Object: (mg_coarse_) 1024 MPI processes
>> linear system matrix = precond matrix:
>> Mat Object: 1024 MPI processes
>>   type: mpiaij
>>   rows=775, cols=775
>>   total: nonzeros=1793, allocated nonzeros=1793
>> 
>> linear system matrix followed by preconditioner matrix:
>> Mat Object: 1024 MPI processes
>>   type: shell
>>   rows=1369307136, cols=1369307136
>> Mat Object: 1024 MPI processes
>>   type: mpiaij
>>   rows=1369307136, cols=1369307136
>>   total: nonzeros=19896719360, allocated nonzeros=19896719360
> 



[petsc-dev] hang while saving non-distributed Exodus mesh

2018-05-24 Thread Vaclav Hapla
Hello

I wanted to load an Exodus mesh and immediately save it into HDF5 (i.e. use 
PETSc just for the conversion) without distributing (needed for reproducing 
another issue). See 
https://bitbucket.org/petsc/petsc/branch/haplav/fix-dmplex-immediate-exodus-hdf5-save.

But then there's a problem with labels created by the reader (e.g. "Cell 
Sets"). Only rank 0 has them. This in turn leads to the HDF5 writer hanging in 
DMPlexWriteLabels_HDF5_Static.

I think this should be fixed somehow, and I will gladly do that. But my question
is: what should be fixed? Should DMPlexCreateExodus produce consistent labels (I
mean the number of labels and their names being the same on all ranks of the
DM's communicator), or should DMPlexWriteLabels_HDF5_Static somehow cope with
inconsistent labels?
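
If the first option is preferred, a minimal sketch of making just the label
*names* consistent could look like this (the helper name is made up; only the
names are synchronized, the label contents stay whatever they are on each rank):

  static PetscErrorCode DMPlexMakeLabelNamesConsistent_Sketch(DM dm)
  {
    MPI_Comm       comm;
    PetscMPIInt    rank;
    PetscInt       n = 0, i;
    char           name[PETSC_MAX_PATH_LEN];
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = PetscObjectGetComm((PetscObject)dm, &comm);CHKERRQ(ierr);
    ierr = MPI_Comm_rank(comm, &rank);CHKERRQ(ierr);
    if (!rank) {ierr = DMGetNumLabels(dm, &n);CHKERRQ(ierr);}
    ierr = MPI_Bcast(&n, 1, MPIU_INT, 0, comm);CHKERRQ(ierr);
    for (i = 0; i < n; i++) {
      if (!rank) {
        const char *lname;
        ierr = DMGetLabelName(dm, i, &lname);CHKERRQ(ierr);
        ierr = PetscStrncpy(name, lname, sizeof(name));CHKERRQ(ierr);
      }
      ierr = MPI_Bcast(name, (PetscMPIInt)sizeof(name), MPI_CHAR, 0, comm);CHKERRQ(ierr);
      if (rank) {
        PetscBool has;
        ierr = DMHasLabel(dm, name, &has);CHKERRQ(ierr);
        if (!has) {ierr = DMCreateLabel(dm, name);CHKERRQ(ierr);}  /* empty label, same name */
      }
    }
    PetscFunctionReturn(0);
  }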

In the longer term it would be nice to enforce labels always being consistent
(e.g. DMSetLabelValue being collective) ...

Thanks for opinions,
Vaclav

Re: [petsc-dev] calling petsc from a mex file

2018-05-10 Thread Vaclav Hapla
I confirm the same problem. But I also don't know how to fix this systematically
in the buildsystem. I found hard-coded -L options here
  https://bitbucket.org/petsc/petsc/src/9e21ac129fc054b3abeb584f186c61369f6d442a/config/BuildSystem/config/packages/MatlabEngine.py#lines-39
but I doubt I can do the same with -Wl,-rpath.

Vaclav

> 9. 5. 2018 v 18:49, Munson, Todd :
> 
> Note: I compiled the latest Petsc using --with-matlab and changed to the
> src/snes/examples/tutorials directory and did a "make ex5".  The
> compilation works fine, but the "-Wl,-rpath" for the matlab
> libraries is missing and I have to set my DYLD_LIBRARY_PATH
> manually.  I don't know where to fix this in the build
> system.



[petsc-dev] variable 'length' is uninitialized in src/sys/fileio/mprint.c

2018-05-03 Thread Vaclav Hapla
Barry,
this is obviously wrong in current master:

14416c0e507 src/sys/fileio/mprint.c 791) size_t length;
cb500232d0b src/sys/fileio/mprint.c 792) char   buff[length];

It results in
  warning: variable 'length' is uninitialized when used here [-Wuninitialized]
when the PETSC_HAVE_MATLAB_ENGINE macro is defined.

Vaclav

[petsc-dev] Chaco from --download-chaco vs --download-trilinos (Re: broken stuff in next for many days)

2018-04-23 Thread Vaclav Hapla
I can confirm --download-chaco vs --download-trilinos give different chaco 
partitioning - I can reproduce on my machine. So I made some investigation.

PETSC_HAVE_CHACO_INT_ASSIGNMENT is set only for --download-trilinos. 
This macro makes difference only at 4 places:

At src/mat/partition/impls/chaco/chaco.c:8 and 
src/dm/impls/plex/plexpartition.c:1192:
#if defined(PETSC_HAVE_CHACO_INT_ASSIGNMENT)
#include <chaco.h>
#else
/* Older versions of Chaco do not have an include file */
PETSC_EXTERN int interface(int nvtxs, int *start, int *adjacency, int *vwgts,
 float *ewgts, float *x, float *y, float *z, char 
*outassignname,
 char *outfilename, short *assignment, int architecture, 
int ndims_tot,
 int mesh_dims[3], double *goal, int global_method, int 
local_method,
 int rqi_flag, int vmax, int ndims, double eigtol, long 
seed);
#endif

At src/mat/partition/impls/chaco/chaco.c:68 and 
src/dm/impls/plex/plexpartition.c:1227:
#if defined(PETSC_HAVE_CHACO_INT_ASSIGNMENT)
  int   *assignment;
#else
  short *assignment;  
#endif

So it seems PETSc is not responsible for the different outputs.

In Trilinos, Chaco is shipped within the Seacas package. The notice at 
http://gsjaardema.github.io/seacas/#chaco 
 says
"The short *assignment argument to the interface function has been changed to 
int *assignment to permit decompositions with more than 32,768 processors."
They also say
"There are also now a CHACO_VERSION_MAJOR, CHACO_VERSION_MINOR, 
CHACO_VERSION_PATCH defines in chaco.h."
Looking there, the version is 3.0.0.

PETSc's configure --download-chaco gets this one: 
http://ftp.mcs.anl.gov/pub/petsc/externalpackages/Chaco-2.2-p2.tar.gz 

It actually includes a git repo. There we can see it's the original author's 
version + some updates by Barry and Satish, mainly to avoid some name clashes.
The original version is probably this one: 
https://www3.cs.stonybrook.edu/~algorith/implement/chaco/distrib/Chaco-2.2.tar.gz
 

Looking at the contents of the tarball, these sources date back to 2000.
I tried comparing these two versions but there are too many changes in
formatting to identify crucial changes easily. But definitely these two
versions differ and can give different results.

The main intent of my tests in src/dm/impls/plex/examples/tutorials/ex5.c is
testing HDF5 I/O with different formats, and I just needed to somehow distribute
the sequential mesh read from the Exodus file. So I will replace chaco with
simple there, as sketched below. But it's still a question whether we want to do
something about the confusing situation with Chaco.
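
For the record, the swap in the test amounts to something like the following
sketch (in ex5.c it would most likely just be the -petscpartitioner_type simple
test option):

  PetscPartitioner part;
  DM               dmDist = NULL;

  ierr = DMPlexGetPartitioner(dm, &part);CHKERRQ(ierr);
  ierr = PetscPartitionerSetType(part, PETSCPARTITIONERSIMPLE);CHKERRQ(ierr);
  ierr = DMPlexDistribute(dm, 0, NULL, &dmDist);CHKERRQ(ierr);
  if (dmDist) {ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmDist;}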

Vaclav

> 22. 4. 2018 v 17:56, Satish Balay :
> 
> On Sun, 22 Apr 2018, Smith, Barry F. wrote:
> 
>>> It would be good to figure out exactly where the difference comes form
>>> [wrt chaco from trilinos vs separate install of chaco - or some other
>>> interaction from trilinos]
>> 
>>   Different random numbers being generated?
> 
> Its trilinos vs non-trilinos build of packages - so I don't think its 
> 'different random number generation' issue.
> 
> Trilinos install does:
> 
>if self.libraries.check(self.dlib, "interface"):
>  self.addDefine('HAVE_CHACO',1)
>  self.addDefine('HAVE_CHACO_INT_ASSIGNMENT',1)
>if self.libraries.check(self.dlib, "ML_Set_PrintLevel"):
>  self.addDefine('HAVE_ML',1)
>if self.libraries.check(self.dlib, "Zoltan_LB_Partition"):
>  self.addDefine('HAVE_ZOLTAN',1)
>if self.libraries.check(self.dlib, "ex_close"):
>  self.addDefine('HAVE_EXODUSII',1)
> 
> 
> For one there is different chaco code in petsc based on this flag
> HAVE_CHACO_INT_ASSIGNMENT. Its not clear to me if this is causing the
> difference.
> 
> --download-chaco is using 'Chaco-2.2-p2.tar.gz'. This matches the
> latest chaco tarball from their website [Chaco-2.2.tar.gz]. So I do
> not know if chaco packaged by trilinos has changes that cause this
> difference [clearly it added some change that resulted in us using the
> flag HAVE_CHACO_INT_ASSIGNMENT]
> 
> And then there is also exodus.. --download-exodus uses the latest
> snapshot from https://github.com/gsjaardema/seacas/
> 
> Satish



Re: [petsc-dev] Matlab interface issue

2018-04-18 Thread Vaclav Hapla
Barry, yes, it does what is needed.

However, I found one more small problem:
make check was throwing
dyld: Library not loaded: @rpath/libicudata.56.dylib
  Referenced from: /Users/vhapla/petsc1/src/snes/examples/tutorials/ex19
  Reason: image not found

I fixed it locally just by adding
-Wl,-rpath,/Applications/MATLAB_R2017b.app/bin/maci64 
-Wl,-rpath,/Applications/MATLAB_R2017b.app/extern/lib/maci64
to MATLAB_ENGINE_LIB, PETSC_EXTERNAL_LIB_BASIC and PETSC_WITH_EXTERNAL_LIB
in $PETSC_ARCH/lib/petsc/conf/petscvariables.

Could you please fix configure this way?

Thanks
Vaclav

> 17. 4. 2018 v 23:26, Smith, Barry F. <bsm...@mcs.anl.gov>:
> 
> 
>   Please try the branch barry/feature-matlab-socket/maint  and 
> --with-matlab-socket=0 and let me know if it works.
> 
>   Barry
> 
> 
>> On Apr 17, 2018, at 3:16 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch> wrote:
>> 
>> OK. But definitely it's tricky that you try to inject all PETSc compiler 
>> options to the mex wrapper which is by documentation only tested with Xcode 
>> and Intel and it's always set up just for one compiler (see mex -setup).
>> 
>> Hence, I would at least vote for config option to turn off this socket mex 
>> stuff to bypass potential problems if one doesn't need it. Like 
>> --with-matlab-socket=0 or so which would just deactivate the matlabbin 
>> target. Or behave like that if --with-matlab-engine=1 and --with-matlab=0 
>> (this currently fails with "Did not find package MATLAB needed by 
>> MatlabEngine" which is confusing anyway as it's not a standalone product).
>> 
>> Thanks,
>> Vaclav
>> 
>>> 16. 4. 2018 v 20:30, Smith, Barry F. <bsm...@mcs.anl.gov>:
>>> 
>>> Hmm, I tried on my Mac and had the same problem. But then I upgraded from 
>>> Matlab 2017A to 2018A and boom the problem went away and it was able to 
>>> build the socket mex files without a problem.
>>> 
>>> So my conclusion is that we shouldn't change anything in the PETSc 
>>> makefiles?
>>> 
>>>   Barry
>>> 
>>> 
>>>> On Apr 16, 2018, at 6:41 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch> 
>>>> wrote:
>>>> 
>>>> Hello
>>>> 
>>>> I need to compile PETSc with MATLAB interface on Mac OS.
>>>> 
>>>> First thing is that Mex supports only Xcode or Intel compilers, not GCC. 
>>>> So I compiled PETSc with Xcode, using options
>>>> --with-cc="/usr/bin/xcrun -sdk macosx10.13 clang" --with-matlab 
>>>> --with-matlab-arch=maci64 --with-matlab-engine --with-mpi=0
>>>> 
>>>> But the target "matlabcodes" (in 
>>>> src/sys/classes/viewer/impls/socket/matlab/makefile) was failing with
>>>> Undefined symbols for architecture x86_64:
>>>> "_main", referenced from:
>>>>implicit entry/start for main executable
>>>> ld: symbol(s) not found for architecture x86_64
>>>> 
>>>> After some investigation I found out the problem lies in specifying 
>>>> environment variables to the mex compiler. Mex has some predefined values 
>>>> of these variables and fails if we override them. According to MATLAB 
>>>> documentation, it should be possible to just append what we need, e.g.
>>>> LDFLAGS='${LDFLAGS} ${PETSC_EXTERNAL_LIB_BASIC}'
>>>> but this apparently does not work as excepted, some predefined options are 
>>>> removed this way. I was debugging that by adding -v to ${MATLAB_MEX} calls 
>>>> in the makefile. It seems the injected LDFLAGS is interpreted _after_ 
>>>> something more is added to LDFLAGS by MEX (stupid).
>>>> 
>>>> What works for me is specifying the options directly do ${MATLAB_MEX}. 
>>>> This is also tricky, though, as ${MATLAB_MEX} does not accept all options, 
>>>> e.g. -Wl,-rpath. So I ended up with replacing
>>>> GCC='${CC}' CC='${PCC}' CFLAGS='${COPTFLAGS} ${CC_FLAGS} ${CCPPFLAGS}' 
>>>> LDFLAGS='${PETSC_EXTERNAL_LIB_BASIC}'
>>>> by
>>>> ${PETSC_CC_INCLUDES} -L${PETSC_DIR}/${PETSC_ARCH}/lib ${PETSC_LIB_BASIC}
>>>> in src/sys/classes/viewer/impls/socket/matlab/makefile. See the attached 
>>>> patch. This at least compiled.
>>>> 
>>>> 
>>>> Do you think this could break anything? Do you have some better idea?
>>>> 
>>>> BTW It would be better if the matlabcodes target could be turned off by 
>>>> configure, if one is interested only in using MATLAB Engine and wants to 
>>>> avoid problems with the mex compiler.
>>>> 
>>>> Thanks,
>>>> Vaclav
>>>> 
>>> 
>> 
> 



Re: [petsc-dev] Matlab interface issue

2018-04-17 Thread Vaclav Hapla
OK. But it's definitely tricky that you try to inject all the PETSc compiler
options into the mex wrapper, which according to its documentation is only
tested with Xcode and Intel compilers and is always set up for just one compiler
(see mex -setup).

Hence, I would at least vote for a configure option that turns off this socket
mex stuff, to bypass potential problems if one doesn't need it - something like
--with-matlab-socket=0, which would just deactivate the matlabbin target. Or
behave like that if --with-matlab-engine=1 and --with-matlab=0 (this currently
fails with "Did not find package MATLAB needed by MatlabEngine", which is
confusing anyway as it's not a standalone product).

Thanks,
Vaclav

> 16. 4. 2018 v 20:30, Smith, Barry F. <bsm...@mcs.anl.gov>:
> 
> Hmm, I tried on my Mac and had the same problem. But then I upgraded from 
> Matlab 2017A to 2018A and boom the problem went away and it was able to build 
> the socket mex files without a problem.
> 
> So my conclusion is that we shouldn't change anything in the PETSc makefiles?
> 
>Barry
> 
> 
>> On Apr 16, 2018, at 6:41 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch> wrote:
>> 
>> Hello
>> 
>> I need to compile PETSc with MATLAB interface on Mac OS.
>> 
>> First thing is that Mex supports only Xcode or Intel compilers, not GCC. So 
>> I compiled PETSc with Xcode, using options
>>  --with-cc="/usr/bin/xcrun -sdk macosx10.13 clang" --with-matlab 
>> --with-matlab-arch=maci64 --with-matlab-engine --with-mpi=0
>> 
>> But the target "matlabcodes" (in 
>> src/sys/classes/viewer/impls/socket/matlab/makefile) was failing with
>> Undefined symbols for architecture x86_64:
>>  "_main", referenced from:
>> implicit entry/start for main executable
>> ld: symbol(s) not found for architecture x86_64
>> 
>> After some investigation I found out the problem lies in specifying 
>> environment variables to the mex compiler. Mex has some predefined values of 
>> these variables and fails if we override them. According to MATLAB 
>> documentation, it should be possible to just append what we need, e.g.
>>  LDFLAGS='${LDFLAGS} ${PETSC_EXTERNAL_LIB_BASIC}'
>> but this apparently does not work as excepted, some predefined options are 
>> removed this way. I was debugging that by adding -v to ${MATLAB_MEX} calls 
>> in the makefile. It seems the injected LDFLAGS is interpreted _after_ 
>> something more is added to LDFLAGS by MEX (stupid).
>> 
>> What works for me is specifying the options directly do ${MATLAB_MEX}. This 
>> is also tricky, though, as ${MATLAB_MEX} does not accept all options, e.g. 
>> -Wl,-rpath. So I ended up with replacing
>>  GCC='${CC}' CC='${PCC}' CFLAGS='${COPTFLAGS} ${CC_FLAGS} ${CCPPFLAGS}' 
>> LDFLAGS='${PETSC_EXTERNAL_LIB_BASIC}'
>> by
>>  ${PETSC_CC_INCLUDES} -L${PETSC_DIR}/${PETSC_ARCH}/lib ${PETSC_LIB_BASIC}
>> in src/sys/classes/viewer/impls/socket/matlab/makefile. See the attached 
>> patch. This at least compiled.
>> 
>> 
>> Do you think this could break anything? Do you have some better idea?
>> 
>> BTW It would be better if the matlabcodes target could be turned off by 
>> configure, if one is interested only in using MATLAB Engine and wants to 
>> avoid problems with the mex compiler.
>> 
>> Thanks,
>> Vaclav
>> 
> 



[petsc-dev] Matlab interface issue

2018-04-16 Thread Vaclav Hapla
Hello

I need to compile PETSc with MATLAB interface on Mac OS.

First thing is that Mex supports only Xcode or Intel compilers, not GCC. So I 
compiled PETSc with Xcode, using options
  --with-cc="/usr/bin/xcrun -sdk macosx10.13 clang" --with-matlab 
--with-matlab-arch=maci64 --with-matlab-engine --with-mpi=0

But the target "matlabcodes" (in 
src/sys/classes/viewer/impls/socket/matlab/makefile) was failing with
Undefined symbols for architecture x86_64:
  "_main", referenced from:
 implicit entry/start for main executable
ld: symbol(s) not found for architecture x86_64

After some investigation I found out that the problem lies in specifying
environment variables to the mex compiler. Mex has some predefined values for
these variables and fails if we override them. According to the MATLAB
documentation, it should be possible to just append what we need, e.g.
  LDFLAGS='${LDFLAGS} ${PETSC_EXTERNAL_LIB_BASIC}'
but this apparently does not work as expected; some predefined options are
removed this way. I was debugging that by adding -v to the ${MATLAB_MEX} calls
in the makefile. It seems the injected LDFLAGS is interpreted _after_ something
more is added to LDFLAGS by MEX (stupid).

What works for me is specifying the options directly to ${MATLAB_MEX}. This is
also tricky, though, as ${MATLAB_MEX} does not accept all options, e.g.
-Wl,-rpath. So I ended up replacing
  GCC='${CC}' CC='${PCC}' CFLAGS='${COPTFLAGS} ${CC_FLAGS} ${CCPPFLAGS}' 
LDFLAGS='${PETSC_EXTERNAL_LIB_BASIC}'
with
  ${PETSC_CC_INCLUDES} -L${PETSC_DIR}/${PETSC_ARCH}/lib ${PETSC_LIB_BASIC}
in src/sys/classes/viewer/impls/socket/matlab/makefile. See the attached patch.
This at least compiled.
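
Roughly, the resulting mex invocation amounts to the following (a sketch only;
the real matlabcodes target builds several sources, "sopen.c" here is just one
of them, and the attached patch is authoritative):

  ${MATLAB_MEX} ${PETSC_CC_INCLUDES} -L${PETSC_DIR}/${PETSC_ARCH}/lib ${PETSC_LIB_BASIC} sopen.c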


patch
Description: Binary data


Do you think this could break anything? Do you have some better idea?

BTW It would be better if the matlabcodes target could be turned off by 
configure, if one is interested only in using MATLAB Engine and wants to avoid 
problems with the mex compiler.

Thanks,
Vaclav

[petsc-dev] plexexodusii.c incompatible with complex - not covered?

2018-04-12 Thread Vaclav Hapla
At master, configured with
  --download-exodusii --download-hdf5 --download-netcdf --download-pnetcdf 
--with-scalar-type=complex
I get a bunch of compiler warnings
  warning: passing argument 2 of ‘VecGetArrayRead’ from incompatible pointer 
type
  warning: passing argument 2 of ‘VecRestoreArrayRead’ from incompatible 
pointer type
in
  src/dm/impls/plex/plexexodusii.c.

Is this configuration not covered by nightly tests?


I want to fix it anyway. Do you agree with the following scheme?

  Vec v;
  const PetscScalar *arr;
  PetscReal *arr_real;
#if defined(PETSC_USE_COMPLEX)
  PetscInt i;
#endif

...
  ierr = VecGetArrayRead(v, &arr);CHKERRQ(ierr);
#if defined(PETSC_USE_COMPLEX)
  ierr = PetscMalloc1(size, &arr_real);CHKERRQ(ierr);
  for (i = 0; i < size; ++i) {
    arr_real[i] = PetscRealPart(arr[i]);
#if defined(PETSC_USE_DEBUG)
    if (PetscImaginaryPart(arr[i])) {
      SETERRQ(PETSC_COMM_SELF, PETSC_ERR_SUP, "Vector contains complex numbers but only real vectors are currently supported.");
    }
#endif
  }
#else
  arr_real = (PetscReal*)arr;
#endif

...
  ierr = VecRestoreArrayRead(v, &arr);CHKERRQ(ierr);
#if defined(PETSC_USE_COMPLEX)
  ierr = PetscFree(arr_real);CHKERRQ(ierr);
#endif


Thanks,
Vaclav

[petsc-dev] make check fortran warnings for complex single 64idx

2018-04-11 Thread Vaclav Hapla
Hello

I just compiled current master with
'--with-precision=single',
'--with-scalar-type=complex',
'--with-64-bit-indices',

configure and make work, but 'make check' throws some warnings regarding 
src/snes/examples/tutorials/ex5f.F90, see below.

mpifort --version
says
GNU Fortran (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609

I'm not much into Fortran - can you see what's going wrong?

Vaclav



C/C++ example src/snes/examples/tutorials/ex19 run successfully with 2 MPI 
processes
***Error detected during compile or link!***
See http://www.mcs.anl.gov/petsc/documentation/faq.html
/home/vhapla/petsc/src/snes/examples/tutorials ex5f
*
mpifort -c -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g
-I/home/vhapla/petsc/include 
-I/home/vhapla/petsc/arch-linux-gcc-complex-single-64idx/include 
-I/home/vhapla/include -o ex5f.o ex5f.F90
ex5f.F90:289:17:

  temp = (min(j-1,my-j))*hy
 1
Warning: Possible change of value in conversion from INTEGER(8) to REAL(4) at 
(1) [-Wconversion]
ex5f.F90:294:43:

   x(i,j) = temp1 * sqrt(min(min(i-1,mx-i)*hx,(temp)))
   1
Warning: Possible change of value in conversion from INTEGER(8) to REAL(4) at 
(1) [-Wconversion]
ex5f.h:29:21:

   common /params/ lambda,mx,my
 1
Warning: Padding of 4 bytes required before ‘mx’ in COMMON ‘params’ at (1); 
reorder elements or use -fno-align-commons
ex5f.h:29:21:

   common /params/ lambda,mx,my
 1
Warning: Padding of 4 bytes required before ‘mx’ in COMMON ‘params’ at (1); 
reorder elements or use -fno-align-commons
ex5f.h:29:21:

   common /params/ lambda,mx,my
 1
Warning: Padding of 4 bytes required before ‘mx’ in COMMON ‘params’ at (1); 
reorder elements or use -fno-align-commons
ex5f.h:29:21:

   common /params/ lambda,mx,my
 1
Warning: Padding of 4 bytes required before ‘mx’ in COMMON ‘params’ at (1); 
reorder elements or use -fno-align-commons
ex5f.h:29:21:

   common /params/ lambda,mx,my
 1
Warning: Padding of 4 bytes required before ‘mx’ in COMMON ‘params’ at (1); 
reorder elements or use -fno-align-commons



[petsc-dev] MatLoad/VecLoad for a file with multiple objects

2018-03-23 Thread Vaclav Hapla
Hello

I'm quite surprised that there can be multiple objects in a single binary file, 
and that they must be loaded in the exact order in which they appear in the file. This behavior 
is not documented in MatLoad or VecLoad.

For instance, there can be a matrix A, RHS b and initial guess x0 stored in one 
file in this order, but you have to call
  MatLoad(A,viewer); VecLoad(b,viewer); VecLoad(x0,viewer);
in the same order.
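
For example (a minimal sketch; the file name is hypothetical), this is the only 
order that works for such a file:

  PetscViewer viewer;
  Mat         A;
  Vec         b, x0;

  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "system.petsc", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatLoad(A, viewer);CHKERRQ(ierr);   /* the matrix was written first, so it must be read first */
  ierr = VecCreate(PETSC_COMM_WORLD, &b);CHKERRQ(ierr);
  ierr = VecLoad(b, viewer);CHKERRQ(ierr);
  ierr = VecCreate(PETSC_COMM_WORLD, &x0);CHKERRQ(ierr);
  ierr = VecLoad(x0, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);

Swapping any two of the Load calls fails, because the viewer simply reads the 
objects sequentially from the file.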

If you e.g. call VecLoad first, it fails with "Not a vector next in file". 
I don't find this message very informative: it says the next object is not a 
vector, but it doesn't say what is actually there (in this case, a matrix).

I can make a PR documenting this, maybe also improving the error message, if 
you otherwise find this behavior OK.

Vaclav

[petsc-dev] change in Manual Pages generation?

2018-03-08 Thread Vaclav Hapla
I have noticed that Level and Location are generated in a different way in maint 
and master, compare e.g.
http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/IS/PetscSectionCopy.html
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/IS/PetscSectionCopy.html

Is it a deliberate change?
I find the original way better.

Vaclav

Re: [petsc-dev] Handling pull requests in a better way

2018-03-01 Thread Vaclav Hapla


> 1. 3. 2018 v 14:04, Patrick Sanan :
> 
> Maybe it would also help to add more explicit instructions to the wiki 
> (https://bitbucket.org/petsc/petsc/wiki/Home#markdown-header-contributing-to-petsc
>  
> )
>  on how to construct a branch that is likely to get through the integration 
> steps quickly.
> 
> I'd suggest adding language about these (and volunteer to write it), even if 
> some might be obvious:
> - Adding tests for whatever you submit
> - Testing with configurations other than the usual double/real/C setup 
> (complex, single) 
> - Making the PR as small/atomic as possible (can your PR be 2 or more 
> separate PRs?)

Yes, I think it's quite common that one would like to factor out one or more 
smaller bugfixes and/or improvements from a bigger PR. But some of them may 
depend on some others, and definitely the original big PR depends on all of 
them. Perhaps it would be nice if there were some documented way of specifying 
dependencies between PRs to ensure a proper order of merging (I think 
Bitbucket has no such feature?).

Thanks,
Vaclav

> - Running through valgrind (using --download-mpich) before submitting
> 
> 2018-03-01 12:33 GMT+01:00 Karl Rupp  >:
> Dear PETSc folks,
> 
> I think we can do a better job when it comes to handling pull requests (PRs). 
> We have several PRs piling up, which after some time (imho) get merged 
> relatively carelessly instead of reaping the full benefits of a thorough 
> review.
> 
> In order to improve the integration of pull requests, I propose to nominate a 
> PR integrator, who is a-priori responsible for *all* incoming PRs. The PR 
> integrator is free to delegate a particular PR integration to someone with 
> the relevant domain-specific knowledge (e.g. Matt for DMPlex-related things) 
> by appropriate comments on Bitbucket. In case of delays, the PR integrator is 
> also responsible for issuing reminders over time (like Barry has done in the 
> past).
> 
> The idea is to make daily progress with the PRs. One integration step per day 
> (e.g. testing or merging to next) is presumably enough to handle the load, 
> whereas things get messy if we let things pile up. Automated testing may help 
> a bit in the future, but it doesn't release us from properly reviewing the 
> contributed code. 
>  
> Any objections to my PR integrator proposal? Any volunteers? ;-)
> If nobody else wants to be the highly esteemed PR integrator, I can do it. ;-)
> 
> Best regards,
> Karli
> 



Re: [petsc-dev] test harness loops with different outputs

2018-01-26 Thread Vaclav Hapla


> 26. 1. 2018 v 14:58, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch>:
> 
> 
> 
>> 26. 1. 2018 v 14:08, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
>> <mailto:vaclav.ha...@erdw.ethz.ch>>:
>> 
>> 
>> 
>>> 26. 1. 2018 v 14:03, Patrick Sanan <patrick.sa...@gmail.com 
>>> <mailto:patrick.sa...@gmail.com>>:
>>> 
>>> There is probably a better way but what's worked for me here is to make 
>>> sure I have pdflatex on my system (sudo apt install texlive, or similar), 
>>> then "make reconfigure", which seems to set PDFLATEX as required, and then 
>>> something like
>>> 
>>>  cd src/docs/tex/manual
>>>  make manual.pdf LOC=$PETSC_DIR
>>>  cd ../../../docs
>>>  # manual.pdf should be there
>> 
>> Thanks, Patrick. It would be definitely a better way. But I have installed 
>> PETSc into Linux with texlive installed and pdflatex command in PATH, and 
>> still the PDFLATEX variable hasn't got set.
> 
> There must be something broken in my system as I also get a bunch of
>   /bin/sh: 2: -map: not found
> errors for
>   make alldoc LOC=${PETSC_DIR}
> which means also MAPNAMES is not set.
> I started from scratch in different PETSc clone and it's the same. 
> Interesting.

The variables were not set because sowing was not installed. It didn't occur to 
me, as sowing is not directly related to the PDF manual processing. Damn.

> 
> Vaclav
> 
>> 
>> Vaclav
>> 
>>> 
>>> 2018-01-26 14:00 GMT+01:00 Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
>>> <mailto:vaclav.ha...@erdw.ethz.ch>>:
>>> 
>>> 
>>> > 25. 1. 2018 v 9:33, Smith, Barry F. <bsm...@mcs.anl.gov 
>>> > <mailto:bsm...@mcs.anl.gov>>:
>>> >
>>> >
>>> >  src/docs/tex/manual/developers.tex there is also a make file that 
>>> > generates the PDF.
>>> 
>>> I'm trying to build the manual. Apparently the PDFLATEX variable is not 
>>> set. So I set it externally and it works - is it supposed to be done this 
>>> way? Would you mind adding the PDFLATEX variable into 
>>> ${PETSC_DIR}/lib/petsc/conf/variables? Perhaps with assigned using ?= so 
>>> that it could be overridden?
>>> 
>>> Vaclav
>>> 
>>> >
>>> >  Barry
>>> >
>>> > The document should provide information about the location of its source.
>>> >
>>> >
>>> >
>>> >> On Jan 25, 2018, at 2:19 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
>>> >> <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
>>> >>
>>> >>
>>> >>
>>> >>> 24. 1. 2018 v 22:55, Scott Kruger <kru...@txcorp.com 
>>> >>> <mailto:kru...@txcorp.com>>:
>>> >>>
>>> >>>
>>> >>>
>>> >>> On 1/24/18 6:50 AM, Vaclav Hapla wrote:
>>> >>>>> 24. 1. 2018 v 14:45, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
>>> >>>>> <mailto:vaclav.ha...@erdw.ethz.ch>>:
>>> >>>>>
>>> >>>>> How should I specify output files for {{...}} with different outputs, 
>>> >>>>> please?
>>> >>>> Oh I see, {{...}separateoutput} is meant literally. But in that case 
>>> >>>> typesetting it in italics is confusing.
>>> >>>
>>> >>> It's clearer in the examples listed below.   I couldn't get lstinline 
>>> >>> to work for this syntax so used $ ... $.  If someone knows how to do 
>>> >>> it, please feel free to fix.
>>> >>
>>> >> Where are this manual's latex sources?
>>> >>
>>> >> Vaclav
>>> >>
>>> >>>
>>> >>> Scott
>>> >>>
>>> >>>> Vaclav
>>> >>>>>
>>> >>>>> I have consulted the developers manual but it's not clear to me 
>>> >>>>> still. And the example listings, page 34-45, seem to be broken.
>>> >>>>>
>>> >>>>> BTW in 7.1.2, I think there should be the space instead of the comma 
>>> >>>>> in the listing.
>>> >>>>>
>>> >>>>> Thanks
>>> >>>>>
>>> >>>>> Vaclav
>>> >>>
>>> >>> --
>>> >>> Tech-X Corporation   kru...@txcorp.com 
>>> >>> <mailto:kru...@txcorp.com>
>>> >>> 5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841 
>>> >>> <tel:%28720%29%20974-1841>
>>> >>> Boulder, CO 80303Fax:   (303) 448-7756 
>>> >>> <tel:%28303%29%20448-7756>
>>> >>
>>> >



Re: [petsc-dev] test harness loops with different outputs

2018-01-26 Thread Vaclav Hapla


> 26. 1. 2018 v 14:08, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch>:
> 
> 
> 
>> 26. 1. 2018 v 14:03, Patrick Sanan <patrick.sa...@gmail.com 
>> <mailto:patrick.sa...@gmail.com>>:
>> 
>> There is probably a better way but what's worked for me here is to make sure 
>> I have pdflatex on my system (sudo apt install texlive, or similar), then 
>> "make reconfigure", which seems to set PDFLATEX as required, and then 
>> something like
>> 
>>  cd src/docs/tex/manual
>>  make manual.pdf LOC=$PETSC_DIR
>>  cd ../../../docs
>>  # manual.pdf should be there
> 
> Thanks, Patrick. It would be definitely a better way. But I have installed 
> PETSc into Linux with texlive installed and pdflatex command in PATH, and 
> still the PDFLATEX variable hasn't got set.

There must be something broken in my system as I also get a bunch of
  /bin/sh: 2: -map: not found
errors for
  make alldoc LOC=${PETSC_DIR}
which means also MAPNAMES is not set.
I started from scratch in different PETSc clone and it's the same. Interesting.

Vaclav

> 
> Vaclav
> 
>> 
>> 2018-01-26 14:00 GMT+01:00 Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
>> <mailto:vaclav.ha...@erdw.ethz.ch>>:
>> 
>> 
>> > 25. 1. 2018 v 9:33, Smith, Barry F. <bsm...@mcs.anl.gov 
>> > <mailto:bsm...@mcs.anl.gov>>:
>> >
>> >
>> >  src/docs/tex/manual/developers.tex there is also a make file that 
>> > generates the PDF.
>> 
>> I'm trying to build the manual. Apparently the PDFLATEX variable is not set. 
>> So I set it externally and it works - is it supposed to be done this way? 
>> Would you mind adding the PDFLATEX variable into 
>> ${PETSC_DIR}/lib/petsc/conf/variables? Perhaps with assigned using ?= so 
>> that it could be overridden?
>> 
>> Vaclav
>> 
>> >
>> >  Barry
>> >
>> > The document should provide information about the location of its source.
>> >
>> >
>> >
>> >> On Jan 25, 2018, at 2:19 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
>> >> <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
>> >>
>> >>
>> >>
>> >>> 24. 1. 2018 v 22:55, Scott Kruger <kru...@txcorp.com 
>> >>> <mailto:kru...@txcorp.com>>:
>> >>>
>> >>>
>> >>>
>> >>> On 1/24/18 6:50 AM, Vaclav Hapla wrote:
>> >>>>> 24. 1. 2018 v 14:45, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
>> >>>>> <mailto:vaclav.ha...@erdw.ethz.ch>>:
>> >>>>>
>> >>>>> How should I specify output files for {{...}} with different outputs, 
>> >>>>> please?
>> >>>> Oh I see, {{...}separateoutput} is meant literally. But in that case 
>> >>>> typesetting it in italics is confusing.
>> >>>
>> >>> It's clearer in the examples listed below.   I couldn't get lstinline to 
>> >>> work for this syntax so used $ ... $.  If someone knows how to do it, 
>> >>> please feel free to fix.
>> >>
>> >> Where are this manual's latex sources?
>> >>
>> >> Vaclav
>> >>
>> >>>
>> >>> Scott
>> >>>
>> >>>> Vaclav
>> >>>>>
>> >>>>> I have consulted the developers manual but it's not clear to me still. 
>> >>>>> And the example listings, page 34-45, seem to be broken.
>> >>>>>
>> >>>>> BTW in 7.1.2, I think there should be the space instead of the comma 
>> >>>>> in the listing.
>> >>>>>
>> >>>>> Thanks
>> >>>>>
>> >>>>> Vaclav
>> >>>
>> >>> --
>> >>> Tech-X Corporation   kru...@txcorp.com 
>> >>> <mailto:kru...@txcorp.com>
>> >>> 5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841 
>> >>> <tel:%28720%29%20974-1841>
>> >>> Boulder, CO 80303Fax:   (303) 448-7756 
>> >>> <tel:%28303%29%20448-7756>
>> >>
>> >
>> 
>> 
> 



Re: [petsc-dev] test harness loops with different outputs

2018-01-26 Thread Vaclav Hapla


> 26. 1. 2018 v 14:03, Patrick Sanan <patrick.sa...@gmail.com>:
> 
> There is probably a better way but what's worked for me here is to make sure 
> I have pdflatex on my system (sudo apt install texlive, or similar), then 
> "make reconfigure", which seems to set PDFLATEX as required, and then 
> something like
> 
>  cd src/docs/tex/manual
>  make manual.pdf LOC=$PETSC_DIR
>  cd ../../../docs
>  # manual.pdf should be there

Thanks, Patrick. That would definitely be a better way. But I have installed 
PETSc on Linux with texlive installed and the pdflatex command in PATH, and still 
the PDFLATEX variable hasn't been set.

Vaclav

> 
> 2018-01-26 14:00 GMT+01:00 Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
> <mailto:vaclav.ha...@erdw.ethz.ch>>:
> 
> 
> > 25. 1. 2018 v 9:33, Smith, Barry F. <bsm...@mcs.anl.gov 
> > <mailto:bsm...@mcs.anl.gov>>:
> >
> >
> >  src/docs/tex/manual/developers.tex there is also a make file that 
> > generates the PDF.
> 
> I'm trying to build the manual. Apparently the PDFLATEX variable is not set. 
> So I set it externally and it works - is it supposed to be done this way? 
> Would you mind adding the PDFLATEX variable into 
> ${PETSC_DIR}/lib/petsc/conf/variables? Perhaps with assigned using ?= so that 
> it could be overridden?
> 
> Vaclav
> 
> >
> >  Barry
> >
> > The document should provide information about the location of its source.
> >
> >
> >
> >> On Jan 25, 2018, at 2:19 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
> >> <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
> >>
> >>
> >>
> >>> 24. 1. 2018 v 22:55, Scott Kruger <kru...@txcorp.com 
> >>> <mailto:kru...@txcorp.com>>:
> >>>
> >>>
> >>>
> >>> On 1/24/18 6:50 AM, Vaclav Hapla wrote:
> >>>>> 24. 1. 2018 v 14:45, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
> >>>>> <mailto:vaclav.ha...@erdw.ethz.ch>>:
> >>>>>
> >>>>> How should I specify output files for {{...}} with different outputs, 
> >>>>> please?
> >>>> Oh I see, {{...}separateoutput} is meant literally. But in that case 
> >>>> typesetting it in italics is confusing.
> >>>
> >>> It's clearer in the examples listed below.   I couldn't get lstinline to 
> >>> work for this syntax so used $ ... $.  If someone knows how to do it, 
> >>> please feel free to fix.
> >>
> >> Where are this manual's latex sources?
> >>
> >> Vaclav
> >>
> >>>
> >>> Scott
> >>>
> >>>> Vaclav
> >>>>>
> >>>>> I have consulted the developers manual but it's not clear to me still. 
> >>>>> And the example listings, page 34-45, seem to be broken.
> >>>>>
> >>>>> BTW in 7.1.2, I think there should be the space instead of the comma in 
> >>>>> the listing.
> >>>>>
> >>>>> Thanks
> >>>>>
> >>>>> Vaclav
> >>>
> >>> --
> >>> Tech-X Corporation   kru...@txcorp.com 
> >>> <mailto:kru...@txcorp.com>
> >>> 5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841 
> >>> <tel:%28720%29%20974-1841>
> >>> Boulder, CO 80303Fax:   (303) 448-7756 
> >>> <tel:%28303%29%20448-7756>
> >>
> >
> 
> 



Re: [petsc-dev] test harness loops with different outputs

2018-01-26 Thread Vaclav Hapla


> 25. 1. 2018 v 9:33, Smith, Barry F. <bsm...@mcs.anl.gov>:
> 
> 
>  src/docs/tex/manual/developers.tex there is also a make file that generates 
> the PDF.

I'm trying to build the manual. Apparently the PDFLATEX variable is not set. So 
I set it externally and it works - is it supposed to be done this way? Would 
you mind adding the PDFLATEX variable into 
${PETSC_DIR}/lib/petsc/conf/variables? Perhaps assigned using ?= so that 
it could be overridden?

Vaclav

> 
>  Barry
> 
> The document should provide information about the location of its source.
> 
> 
> 
>> On Jan 25, 2018, at 2:19 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch> wrote:
>> 
>> 
>> 
>>> 24. 1. 2018 v 22:55, Scott Kruger <kru...@txcorp.com>:
>>> 
>>> 
>>> 
>>> On 1/24/18 6:50 AM, Vaclav Hapla wrote:
>>>>> 24. 1. 2018 v 14:45, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch>:
>>>>> 
>>>>> How should I specify output files for {{...}} with different outputs, 
>>>>> please?
>>>> Oh I see, {{...}separateoutput} is meant literally. But in that case 
>>>> typesetting it in italics is confusing.
>>> 
>>> It's clearer in the examples listed below.   I couldn't get lstinline to 
>>> work for this syntax so used $ ... $.  If someone knows how to do it, 
>>> please feel free to fix.
>> 
>> Where are this manual's latex sources?
>> 
>> Vaclav
>> 
>>> 
>>> Scott
>>> 
>>>> Vaclav
>>>>> 
>>>>> I have consulted the developers manual but it's not clear to me still. 
>>>>> And the example listings, page 34-45, seem to be broken.
>>>>> 
>>>>> BTW in 7.1.2, I think there should be the space instead of the comma in 
>>>>> the listing.
>>>>> 
>>>>> Thanks
>>>>> 
>>>>> Vaclav
>>> 
>>> -- 
>>> Tech-X Corporation   kru...@txcorp.com
>>> 5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
>>> Boulder, CO 80303Fax:   (303) 448-7756
>> 
> 



Re: [petsc-dev] test harness loops with different outputs

2018-01-25 Thread Vaclav Hapla


> 24. 1. 2018 v 22:55, Scott Kruger <kru...@txcorp.com>:
> 
> 
> 
> On 1/24/18 6:50 AM, Vaclav Hapla wrote:
>>> 24. 1. 2018 v 14:45, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch>:
>>> 
>>> How should I specify output files for {{...}} with different outputs, 
>>> please?
>> Oh I see, {{...}separateoutput} is meant literally. But in that case 
>> typesetting it in italics is confusing.
> 
> It's clearer in the examples listed below.   I couldn't get lstinline to work 
> for this syntax so used $ ... $.  If someone knows how to do it, please feel 
> free to fix.

Where are this manual's latex sources?

Vaclav

> 
> Scott
> 
>> Vaclav
>>> 
>>> I have consulted the developers manual but it's not clear to me still. And 
>>> the example listings, page 34-45, seem to be broken.
>>> 
>>> BTW in 7.1.2, I think there should be the space instead of the comma in the 
>>> listing.
>>> 
>>> Thanks
>>> 
>>> Vaclav
> 
> -- 
> Tech-X Corporation   kru...@txcorp.com
> 5621 Arapahoe Ave, Suite A   Phone: (720) 974-1841
> Boulder, CO 80303Fax:   (303) 448-7756



Re: [petsc-dev] test harness loops with different outputs

2018-01-24 Thread Vaclav Hapla


> 24. 1. 2018 v 14:45, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch>:
> 
> How should I specify output files for {{...}} with different outputs, please?

Oh I see, {{...}separateoutput} is meant literally. But in that case 
typesetting it in italics is confusing.

Vaclav

> 
> I have consulted the developers manual but it's not clear to me still. And 
> the example listings, page 34-45, seem to be broken.
> 
> BTW in 7.1.2, I think there should be the space instead of the comma in the 
> listing.
> 
> Thanks
> 
> Vaclav



[petsc-dev] test harness loops with different outputs

2018-01-24 Thread Vaclav Hapla
How should I specify output files for {{...}} with different outputs, please?

I have consulted the developers manual but it's still not clear to me. And the 
example listings, pages 34-45, seem to be broken.

BTW in 7.1.2, I think there should be a space instead of the comma in the 
listing.

Thanks

Vaclav

Re: [petsc-dev] Plex XDMF/HDF5 I/O

2018-01-24 Thread Vaclav Hapla


> 17. 1. 2018 v 13:04, Matthew Knepley <knep...@gmail.com>:
> 
> On Wed, Jan 17, 2018 at 6:49 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
> <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
> For DMPlex and PETSCVIEWERHDF5, DMView produces a HDF5 file consisting 
> basically of these 3 parts:
> 1) vertices (HDF5 group /geometry)
> 2) "native" Plex topology as a serialization of the Plex graph
> 3) traditional topology as connectivity of vertices, XDMF-compatible (HDF5 
> group /viz/topology)
> So the topology is stored redundantly in two different ways 2 and 3.
> 
> I'm now a member of the team developing the Salvus application, and we would 
> like to use XDMF-compatible HDF5 as our native format. I would be glad if 
> PETSc took care of this completely instead of coding it just for Salvus.
> 
> Looking how it is now in PETSc, I'm tempted to implement a new separate 
> Viewer for reading and writing purely this HDF5 flavor, i.e. parts 1 and 3. 
> It also looks to me easier for parallel I/O and support by external tools 
> like https://github.com/nschloe/meshio <https://github.com/nschloe/meshio>. 
> The writer code can be copy-pasted from plexhdf5.c (function 
> DMPlexWriteTopology_Vertices_HDF5_Static) and the reader can be implemented 
> relatively easily using DMPlexCreateFromCellListParallel (see preview at 
> https://bitbucket.org/haplav/petsc/commits/4719ef6a33f5f9a4a1b8c81c7a6e6ed0426097f1?at=haplav/feature-dmplex-distributed-hdf5-read
>  
> <https://bitbucket.org/haplav/petsc/commits/4719ef6a33f5f9a4a1b8c81c7a6e6ed0426097f1?at=haplav/feature-dmplex-distributed-hdf5-read>).
> 
> It would of course not support any possible DMPlex instance, but what is 
> supported by XDMF forms a sensible subset for practical use. The current HDF5 
> could be kept as a native PETSc format allowing storing any possible DMPlex. 
> But I would then remove part 3 from this format, getting rid of 
> "Visualization topology currently only supports..." errors currently thrown 
> by DMView.
> 
> Here is my original argument for storing both. HDF5 is intended to be 
> extensible in this way, since its has a directory structure. Keeping a single 
> file
> is much easier than two different files if you want both sets of information, 
> and there is much shared between the two. I would have suggested making
> it easier to select which part gets output using formats.

There's currently only one format, PETSC_VIEWER_HDF5_VIZ. Do you mean having 
more values like PETSC_VIEWER_HDF5_VIZ (just the XDMF data), 
PETSC_VIEWER_HDF5_PLEX (just the serialized Plex) and PETSC_VIEWER_HDF5_ALL 
(the two combined)?

Maybe PETSC_VIEWER_HDF5_XDMF would be clearer than PETSC_VIEWER_HDF5_VIZ as it 
would contain exactly the data needed for XDMF in binary HDF5 form.
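
For illustration, a minimal user-side sketch of how the selection could look 
(PETSC_VIEWER_HDF5_XDMF / _PLEX / _ALL are just the hypothetical names from this 
discussion; only PETSC_VIEWER_HDF5_VIZ exists today, and the file name is made up):

  PetscViewer viewer;

  ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "mesh.h5", FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
  ierr = PetscViewerPushFormat(viewer, PETSC_VIEWER_HDF5_VIZ);CHKERRQ(ierr); /* or the proposed PETSC_VIEWER_HDF5_XDMF */
  ierr = DMView(dm, viewer);CHKERRQ(ierr);
  ierr = PetscViewerPopFormat(viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);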

Thanks
Vaclav

>  
> There's still a question how to store non-homogenous meshes (with different 
> cell shapes). I can see two possible ways:
> 1) Store separate "sections" of a mesh, each section with homogenous cell 
> shape. Matt: "This might mean that you need to permute the numbering to group 
> cells of the same shape, which is already making me feel tired."
> 2) XDMF allows "mixed" topology 
> (http://www.xdmf.org/index.php/XDMF_Model_and_Format#Arbitrary 
> <http://www.xdmf.org/index.php/XDMF_Model_and_Format#Arbitrary>) where cell 
> shape is specified per cell. The problem is that each row representing a cell 
> would have different number of values. This is not nice for current 
> ISView/VecView with fixed block size.
> It will also require generalising DMPlexCreateFromCellListParallel().
> 
>   2) Still sounds easier to me.
>  
> Later on I would consider generating the XDMF XML file directly by this 
> Viewer without having to call the python script 
> $PETSC_DIR/bin/petsc_gen_xdmf.py afterwards.
> 
> We did this in Pylith originally, and eventually threw it away. Its much less 
> flexible and harder to maintain.
> 
>   Thanks,
> 
> Matt
>  
> Any comments on this?
> 
> Thanks
> 
> Vaclav
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>



[petsc-dev] Plex XDMF/HDF5 I/O

2018-01-17 Thread Vaclav Hapla
For DMPlex and PETSCVIEWERHDF5, DMView produces an HDF5 file consisting 
basically of these 3 parts:
1) vertices (HDF5 group /geometry)
2) "native" Plex topology as a serialization of the Plex graph
3) traditional topology as connectivity of vertices, XDMF-compatible (HDF5 
group /viz/topology)
So the topology is stored redundantly in two different ways 2 and 3.

I'm now a member of the team developing the Salvus application, and we would 
like to use XDMF-compatible HDF5 as our native format. I would be glad if PETSc 
took care of this completely instead of coding it just for Salvus.

Looking at how it is now in PETSc, I'm tempted to implement a new separate Viewer 
for reading and writing purely this HDF5 flavor, i.e. parts 1 and 3. It also 
looks to me easier for parallel I/O and support by external tools like 
https://github.com/nschloe/meshio. The writer code can be copy-pasted from 
plexhdf5.c (function DMPlexWriteTopology_Vertices_HDF5_Static) and the reader 
can be implemented relatively easily using DMPlexCreateFromCellListParallel 
(see preview at 
https://bitbucket.org/haplav/petsc/commits/4719ef6a33f5f9a4a1b8c81c7a6e6ed0426097f1?at=haplav/feature-dmplex-distributed-hdf5-read).

It would of course not support any possible DMPlex instance, but what is 
supported by XDMF forms a sensible subset for practical use. The current HDF5 
could be kept as a native PETSc format allowing storing any possible DMPlex. 
But I would then remove part 3 from this format, getting rid of "Visualization 
topology currently only supports..." errors currently thrown by DMView.

There's still a question of how to store non-homogeneous meshes (with different 
cell shapes). I can see two possible ways:
1) Store separate "sections" of a mesh, each section with homogeneous cell 
shape. Matt: "This might mean that you need to permute the numbering to group 
cells of the same shape, which is already making me feel tired."
2) XDMF allows "mixed" topology 
(http://www.xdmf.org/index.php/XDMF_Model_and_Format#Arbitrary) where cell 
shape is specified per cell. The problem is that each row representing a cell 
would have a different number of values. This is not nice for the current 
ISView/VecView with fixed block size.
It will also require generalising DMPlexCreateFromCellListParallel().

Later on I would consider generating the XDMF XML file directly by this Viewer 
without having to call the python script $PETSC_DIR/bin/petsc_gen_xdmf.py 
afterwards.

Any comments on this?

Thanks

Vaclav

Re: [petsc-dev] proposed minor PetscPartitioner changes

2017-12-07 Thread Vaclav Hapla


> 9. 11. 2017 v 12:53, Matthew Knepley :
> 
> I think I need to create a proof-of-concept. I would start by employing 
> MatPartitioning in PetscPartitionerPartition with anything outside of this 
> function untouched for now (as already suggested in #192), if you agree.
> 
> Or what about implementing a special temporary PetscPartitioner 
> implementation wrapping MatPartitioning?
> PETSCPARTITIONERMATPARTITIONING sounds crazy, though :) But could be a good 
> starting point.
> 
> This is probably easier, and allows nice regression testing, namely run all 
> tests using EXTRA_OPTIONS="-petscpartitioner_type matpartitioning". I think
> that should be alright for correctness. There are some parallel 
> redistribution tests in Plex.
> 
> We will need at least one performance regression. Look at how I do it here:
> 
>   
> https://bitbucket.org/petsc/petsc/src/312beb00c9b3e1e8ec8fac64a948a1af779da02f/src/dm/impls/plex/examples/tests/ex9.c?at=master=file-view-default
> 
> You can make custom events, directly access the times, and compare. You could 
> run the two versions
> and check for degradation for a sequence of meshes in parallel.
> 
>   Thanks,
> 
>  Matt

I have made some progress in this, see 
https://bitbucket.org/haplav/petsc/branch/haplav/feature-petscpartitionermatpartitioning
There's a new test ex23 which shows basically that
  -petscpartitioner_type parmetis
and
  -petscpartitioner_type matpartitioning -mat_partitioning_type parmetis
give exactly the same results.

I have not created a PR yet since the regression testing you propose fails in 
some cases. When running
  make -f gmakefile.test test EXTRA_OPTIONS="-petscpartitioner_type 
matpartitioning -options_left 0" search='dm_impls_plex_tests%'
it seems the majority of tests pass, but e.g. dm_impls_plex_tests-ex7_7 fails since 
its reference output has been saved for -petscpartitioner_type simple. How to 
deal with that, please?

As can be seen in my PetscPartitionerPartition_MatPartitioning, the needed 
manipulations with IS are slightly more complicated than what I inferred from 
Jed's comments. But still, just the existing IS methods suffice. See 
bitbucket.org/haplav/petsc/src/ab7d5f43fd87d1d57b51b6e7ff8de0ef3e904673/src/dm/impls/plex/petscpartmatpart.c?at=haplav%2Ffeature-petscpartitionermatpartitioning#petscpartmatpart.c-92

Vaclav

Re: [petsc-dev] DMPlexCreate* and DMPlexInterpolate

2017-12-05 Thread Vaclav Hapla


> 5. 12. 2017 v 12:29, Matthew Knepley <knep...@gmail.com>:
> 
> On Mon, Dec 4, 2017 at 1:06 PM, Matthew Knepley <knep...@gmail.com 
> <mailto:knep...@gmail.com>> wrote:
> On Fri, Dec 1, 2017 at 11:10 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
> <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
> Hello
> 
> I noticed DMPlexCreateFromFile ignores the interpolate flag for HDF5. It's 
> the only format for which PetscViewer API is used for loading, and this API 
> has no means to specify interpolation. I guess there should be at least
> 
> if (interpolate) {
>   DM idm = NULL;
> 
>   ierr = DMPlexInterpolate(*dm, &idm);CHKERRQ(ierr);
>   ierr = DMPlexCopyCoordinates(*dm, idm);CHKERRQ(ierr);
>   ierr = DMCopyLabels(*dm, idm);CHKERRQ(ierr);
>   ierr = DMDestroy(dm);CHKERRQ(ierr);
>   *dm = idm;
> }
> 
> in DMPlexCreateFromFile after the DMLoad call.
> 
> Yes.
>  
> This brings me to more general thing to discuss. The behavior of different 
> mesh constructors for interpolate=PETSC_TRUE is not unified. The most 
> elaborate behavior is in DMPlexCreateDoublet which after DMPlexInterpolate 
> also copies the DM name, coordinates and labels. So I suggest all this stuff 
> is done in DMPlexInterpolate.
> 
> Yes.
>  
> BTW I think it's kind of weird the output DM passed to DMPlexInterpolate must 
> be nullified - cf. MatReuse.
> 
> Yes. My original motivation turned out to be not important. I will get rid of 
> it.
> 
> I will make all changes in a branch and push to next.
> 
> It is here.
> 
>   
> https://bitbucket.org/petsc/petsc/branch/knepley/fix-plex-interpolation-consistency
>  
> <https://bitbucket.org/petsc/petsc/branch/knepley/fix-plex-interpolation-consistency>
> 
>Matt

It looks good, thanks!

Vaclav

>  
>   Thanks,
> 
>  Matt
>  
> 
> Vaclav
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>


[petsc-dev] DMPlexCreate* and DMPlexInterpolate

2017-12-01 Thread Vaclav Hapla
Hello

I noticed DMPlexCreateFromFile ignores the interpolate flag for HDF5. It's the 
only format for which PetscViewer API is used for loading, and this API has no 
means to specify interpolation. I guess there should be at least

if (interpolate) {
  DM idm = NULL;

  ierr = DMPlexInterpolate(*dm, &idm);CHKERRQ(ierr);
  ierr = DMPlexCopyCoordinates(*dm, idm);CHKERRQ(ierr);
  ierr = DMCopyLabels(*dm, idm);CHKERRQ(ierr);
  ierr = DMDestroy(dm);CHKERRQ(ierr);
  *dm = idm;
}

in DMPlexCreateFromFile after the DMLoad call.

This brings me to a more general thing to discuss. The behavior of the different mesh 
constructors for interpolate=PETSC_TRUE is not unified. The most elaborate 
behavior is in DMPlexCreateDoublet, which after DMPlexInterpolate also copies 
the DM name, coordinates and labels. So I suggest all this stuff be done in 
DMPlexInterpolate.
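
Just to illustrate, a minimal caller-side sketch of the unified behavior I mean 
(InterpolateAndCopy is a hypothetical helper of mine, not PETSc API):

static PetscErrorCode InterpolateAndCopy(DM *dm)
{
  DM             idm = NULL;
  const char    *name;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = DMPlexInterpolate(*dm, &idm);CHKERRQ(ierr);
  /* what DMPlexCreateDoublet currently does by hand after interpolating */
  ierr = PetscObjectGetName((PetscObject) *dm, &name);CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject) idm, name);CHKERRQ(ierr);
  ierr = DMPlexCopyCoordinates(*dm, idm);CHKERRQ(ierr);
  ierr = DMCopyLabels(*dm, idm);CHKERRQ(ierr);
  ierr = DMDestroy(dm);CHKERRQ(ierr);
  *dm = idm;
  PetscFunctionReturn(0);
}

If DMPlexInterpolate itself did the copying, the helper would collapse to just 
the interpolation call and the destroy/swap at the end.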

BTW I think it's kind of weird that the output DM passed to DMPlexInterpolate must 
be nullified beforehand - cf. MatReuse.

Vaclav

[petsc-dev] DMPlex test ex18 and DMPlexCreateFromDAG

2017-11-23 Thread Vaclav Hapla
Hello

The DMPlex example src/dm/impls/plex/examples/tests/ex18.c fails with 
-cell_simplex 0:

[0]PETSC ERROR: Null argument, when expecting valid pointer
[0]PETSC ERROR: Null Object: Parameter # 1
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for 
trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.8.2-360-gb71d430  GIT Date: 
2017-11-23 09:02:30 -0600
[0]PETSC ERROR: ./ex18 on a arch-linux-gcc-salvus named salvus-vm by haplav Thu 
Nov 23 17:03:16 2017
[0]PETSC ERROR: Configure options --download-chaco --download-exodusii 
--download-hdf5 --download-med --download-metis --download-netcdf 
--download-parmetis --download-ptscotch --download-triangle --download-ctetgen 
--with-cc=mpicc --with-cxx=mpicxx --with-eigen-dir=/usr --with-fc=mpifort 
--with-python=1 --with-shared-libraries=1 
--with-valgrind-dir=/opt/valgrind/valgrind-3.13 PETSC_ARCH=arch-linux-gcc-salvus
[0]PETSC ERROR: #1 DMGetDimension() line 4048 in 
/scratch/petsc-dev/src/dm/interface/dm.c
[0]PETSC ERROR: #2 DMPlexCreateFromDAG() line 2704 in 
/scratch/petsc-dev/src/dm/impls/plex/plexcreate.c
[0]PETSC ERROR: #3 CreateQuad_2D() line 302 in 
/scratch/petsc-dev/src/dm/impls/plex/examples/tests/ex18.c
[0]PETSC ERROR: #4 CreateMesh() line 425 in 
/scratch/petsc-dev/src/dm/impls/plex/examples/tests/ex18.c
[0]PETSC ERROR: #5 main() line 464 in 
/scratch/petsc-dev/src/dm/impls/plex/examples/tests/ex18.c


The problem is that the DM is never created in this case. It could be fixed by 
adding 
  ierr = DMCreate(comm, dm);CHKERRQ(ierr);
  ierr = DMSetType(*dm, DMPLEX);CHKERRQ(ierr);
to the beginning of CreateQuad_2D and CreateHex_3D in ex18.c.

However, I would rather suggest to
1) change the DM argument of these functions to DM* to unify them with 
CreateSimplex_2D and CreateSimplex_3D,
2) alter DMPlexCreateFromDAG so that the DM is strictly an output parameter. I don't 
understand why it couldn't create the DMPlex on its own - see the manual page: 
"dm - The empty DM object, usually from DMCreate() and DMSetDimension()".

I can fix that as suggested above if you agree.

Vaclav

Re: [petsc-dev] proposed minor PetscPartitioner changes

2017-11-10 Thread Vaclav Hapla


> 10. 11. 2017 v 22:34, Jed Brown <j...@jedbrown.org>:
> 
> Vaclav Hapla <vaclav.ha...@erdw.ethz.ch> writes:
> 
>>>> Yes. Mat can represent any graph in several different ways -
>>>> e.g. Laplacian, adjacency, incidence, oriented incidence matrix. The
>>>> graph could be also represented in other way like a list of vertices
>>>> and edges. 
>>> 
>>> Also known as COO format for a matrix.
>> 
>> You could represent by COO any of the matrices above. I don't understand how 
>> it relates to listing vertices and edges of a graph.
> 
> COO is literally a list of edges.  

For the adjacency matrix of a weighted graph this is true.

> Square matrices and graphs are the
> same thing.
> 
>>> 
>>>> MatPartitioning picks just one representation as an input - the
>>>> adjacency matrix. But I mean the picked representation does not
>>>> matter, and the result is not a partitioning of any matrix, but
>>>> partitioning of the graph. The graph is the underlying concept. This
>>>> is why I don't consider the Mat prefix optimal.
>>> 
>>> Matrix and graph are equivalent concepts.  
>> 
>> I think that's an overly strong statement. It sounds like a matrix and a 
>> graph are bijectively interchangeable things which is certainly not true:
>> 1) As I mentioned, there are multiple ways of representing a graph by a 
>> matrix, and for a given graphs these matrix representations don't even have 
>> the same dimensions.
>> 2) Even if you stick to the adjacency matrix (which you apparently do), it's 
>> still not bijective with the graph - the adjacency matrix is a square 
>> symmetric Boolean matrix. You can just say that the _sparse pattern_ of the 
>> _symmetric_ matrix is bijective with the respective graph. This is a by far 
>> weaker statement.
> 
> Graphs can have edge weights.  

Weighted graphs are already an extension. But OK.

> A symmetric matrix is an undirected
> graph.  A nonsymmetric matrix is a directed graph.

OK. And non-square matrices are graphs?
Matrices definitely "are" linear operators. Are graphs linear operators?

> 
>> Why you then have a special matrix type MATMPIADJ which is value-less 
> 
> MatCreateMPIAdj literally has an argument named "values".

This argument is optional the same way as graphs are optionally weighted. But 
OK.

> 
>> and automatically symmetric if all matrices "are" graphs?
> 
> It is not "automatically symmetric" -- you have to declare that with
> MatSetOption as documented in the man page.

Fair. But it is automatically square at least.

>  A partitioner for an
> undirected graph may require that the input is symmetric, much like how
> CG will fail if the input is not SPD.



Re: [petsc-dev] proposed minor PetscPartitioner changes

2017-11-10 Thread Vaclav Hapla


> 10. 11. 2017 v 14:56, Jed Brown <j...@jedbrown.org>:
> 
> Vaclav Hapla <vaclav.ha...@erdw.ethz.ch <mailto:vaclav.ha...@erdw.ethz.ch>> 
> writes:
> 
>>> 10. 11. 2017 v 5:09, Smith, Barry F. <bsm...@mcs.anl.gov>:
>>> 
>>> 
>>> 
>>>> On Nov 8, 2017, at 3:52 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch> wrote:
>>>> 
>>>> 
>>>>> 8. 11. 2017 v 9:06, Lisandro Dalcin <dalc...@gmail.com>:
>>>>> 
>>>>> On 8 November 2017 at 05:51, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:
>>>>>> 
>>>>>>> On Nov 7, 2017, at 1:33 AM, Lisandro Dalcin <dalc...@gmail.com> wrote:
>>>>>>> 
>>>>>>> The only concern I have about PetscPartitioner is that the API depends
>>>>>>> on DM (PetscPartitionerPartition_ routines). Maybe
>>>>>>> PetscPartitioner should eventually move to became more agnostic, and
>>>>>>> that way it can be used to partition matrices and meshes.
>>>>>> 
>>>>>> This is certainly a serious flaw if PetscPartitioner is intended as THE 
>>>>>> API to use for partitioning. If it is not intended as THE API for 
>>>>>> partitioning then that is also a problem, because why have multiple APIs 
>>>>>> for what is essentially one set of abstractions.
>>>>>> 
>>>>> 
>>>>> Note however that things looks easy to refactor. I'll try to team up
>>>>> with Matt to improve things.
>>>> 
>>>> Wait, now we are at the beginning again. I actually wanted to do some 
>>>> refactoring of PetscPartitioner, starting with few cosmetic changes to 
>>>> make it better settable from options. But Barry kept me back of any edits 
>>>> since he think it's anti-systematic to keep two independent classes doing 
>>>> essentially the same. And I agree with that to be honest. It's strange to 
>>>> have two ParMetis, two Scotch and two whatever interfaces.
>>> 
>>> Strange is not the word, f***up is the word
>>> 
>>>> The only thing I don't like on MatPartitioning is its name as it's not 
>>>> just for Mat Partitioning :-)
>>>> 
>>>> There are from my point of view multiple issues with PetscPartitioner. 
>>>> Let's look e.g. at PetscPartitionerPartition. It takes as arguments both 
>>>> PetscPartitioner and DM. This DM must be in fact DMPlex which is not 
>>>> checked so it will probably crash somewhere deep in the stack once the 
>>>> first DMPlex specific line is reached. Then there are two output arguments 
>>>> PetscSection partSection and IS *partition. The first must be created 
>>>> beforehand while the second is created inside. And I guess they must keep 
>>>> the same basic information just in two different forms.
>>>> 
>>>> Actually the original problem I wanted to solve is that 
>>>> src/dm/impls/plex/examples/tutorials/ex5.c fails with partitioner set to 
>>>> PETSCPARTITIONERPARMETIS for certain numbers of processes, see below. Let 
>>>> me start with pull request altering ex5.c so that partitioner type can be 
>>>> set from options properly and this bug can be reproduced easily.
>>>> [ 0] ***ASSERTION failed on line 176 of file 
>>>> /scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:
>>>>  j == nnbrs
>>>> [ 2] ***ASSERTION failed on line 176 of file 
>>>> /scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:
>>>>  j == nnbrs
>>>> ex5: 
>>>> /scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:176:
>>>>  libparmetis__CommSetup: Assertion `j == nnbrs' failed.
>>>> ex5: 
>>>> /scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:176:
>>>>  libparmetis__CommSetup: Assertion `j == nnbrs' failed.
>>>> 
>>>> I'm wondering whether the MatPartitioning interface has the same problem. 
>>>> But anyhow I mean it's maybe time to decide about the two interfaces 
>>>> before chasing all these PetscPartitioner issues.
>>>> 
>>>> I propose
>>>> - to rename Mat Partitioning to just Partitioning/-er or take over the 
>>>> name PetscPartitioner,
>>> 
>>> Why? What is wrong wit

[petsc-dev] PR #800

2017-11-10 Thread Vaclav Hapla
Hi Matt

I would like to redirect the changes in PR #800 "DMPlex ex5 set partitioner 
from options" into maint so that we have a test base for fixing the 
PETSCPARTITIONERPARMETIS bug below, mentioned in the other thread. I feel this 
should be fixed anyway, independently of the proposed 
PETSCPARTITIONERMATPARTITIONING which will take some time.

So if you agree, I will decline the current PR #800 and prepare a new one targeted 
at maint.

src/dm/impls/plex/examples/tutorials/ex5.c fails with partitioner set to 
PETSCPARTITIONERPARMETIS for certain numbers of processes
[ 0] ***ASSERTION failed on line 176 of file 
/scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:
 j == nnbrs
[ 2] ***ASSERTION failed on line 176 of file 
/scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:
 j == nnbrs
ex5: 
/scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:176:
 libparmetis__CommSetup: Assertion `j == nnbrs' failed.
ex5: 
/scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:176:
 libparmetis__CommSetup: Assertion `j == nnbrs' failed.

Thanks

Vaclav

Re: [petsc-dev] proposed minor PetscPartitioner changes

2017-11-10 Thread Vaclav Hapla

> 10. 11. 2017 v 5:09, Smith, Barry F. <bsm...@mcs.anl.gov>:
> 
> 
> 
>> On Nov 8, 2017, at 3:52 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch> wrote:
>> 
>> 
>>> 8. 11. 2017 v 9:06, Lisandro Dalcin <dalc...@gmail.com>:
>>> 
>>> On 8 November 2017 at 05:51, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:
>>>> 
>>>>> On Nov 7, 2017, at 1:33 AM, Lisandro Dalcin <dalc...@gmail.com> wrote:
>>>>> 
>>>>> The only concern I have about PetscPartitioner is that the API depends
>>>>> on DM (PetscPartitionerPartition_ routines). Maybe
>>>>> PetscPartitioner should eventually move to became more agnostic, and
>>>>> that way it can be used to partition matrices and meshes.
>>>> 
>>>> This is certainly a serious flaw if PetscPartitioner is intended as THE 
>>>> API to use for partitioning. If it is not intended as THE API for 
>>>> partitioning then that is also a problem, because why have multiple APIs 
>>>> for what is essentially one set of abstractions.
>>>> 
>>> 
>>> Note however that things looks easy to refactor. I'll try to team up
>>> with Matt to improve things.
>> 
>> Wait, now we are at the beginning again. I actually wanted to do some 
>> refactoring of PetscPartitioner, starting with few cosmetic changes to make 
>> it better settable from options. But Barry kept me back of any edits since 
>> he think it's anti-systematic to keep two independent classes doing 
>> essentially the same. And I agree with that to be honest. It's strange to 
>> have two ParMetis, two Scotch and two whatever interfaces.
> 
>  Strange is not the word, f***up is the word
> 
>> The only thing I don't like on MatPartitioning is its name as it's not just 
>> for Mat Partitioning :-)
>> 
>> There are from my point of view multiple issues with PetscPartitioner. Let's 
>> look e.g. at PetscPartitionerPartition. It takes as arguments both 
>> PetscPartitioner and DM. This DM must be in fact DMPlex which is not checked 
>> so it will probably crash somewhere deep in the stack once the first DMPlex 
>> specific line is reached. Then there are two output arguments PetscSection 
>> partSection and IS *partition. The first must be created beforehand while 
>> the second is created inside. And I guess they must keep the same basic 
>> information just in two different forms.
>> 
>> Actually the original problem I wanted to solve is that 
>> src/dm/impls/plex/examples/tutorials/ex5.c fails with partitioner set to 
>> PETSCPARTITIONERPARMETIS for certain numbers of processes, see below. Let me 
>> start with pull request altering ex5.c so that partitioner type can be set 
>> from options properly and this bug can be reproduced easily.
>> [ 0] ***ASSERTION failed on line 176 of file 
>> /scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:
>>  j == nnbrs
>> [ 2] ***ASSERTION failed on line 176 of file 
>> /scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:
>>  j == nnbrs
>> ex5: 
>> /scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:176:
>>  libparmetis__CommSetup: Assertion `j == nnbrs' failed.
>> ex5: 
>> /scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:176:
>>  libparmetis__CommSetup: Assertion `j == nnbrs' failed.
>> 
>> I'm wondering whether the MatPartitioning interface has the same problem. 
>> But anyhow I mean it's maybe time to decide about the two interfaces before 
>> chasing all these PetscPartitioner issues.
>> 
>> I propose
>> - to rename Mat Partitioning to just Partitioning/-er or take over the name 
>> PetscPartitioner,
> 
>  Why? What is wrong with Mat? Mat can represent any graph and graphs are 
> always what partitioning packages actually partition. I don't see a reason 
> for a different name. MatPartitioner does not, nor should it, directly 
> partition meshes etc, those can/should be done by; the proper massaging of 
> data, the creation of a MatPartitioner object, calling the partitioner and 
> then using the result of the partitioner to build whatever appropriate data 
> structures are needed for the mesh partitioner.

Yes. Mat can represent any graph in several different ways - e.g. Laplacian, 
adjacency, incidence, oriented incidence matrix. The graph could also be 
represented in other ways, like a list of vertices and edges. MatPartitioning 
picks just one representation as an input - 

Re: [petsc-dev] DMPlexCreateMedFromFile broken

2017-11-09 Thread Vaclav Hapla
> 7. 11. 2017 v 19:51, Matthew Knepley <knep...@gmail.com>:
> 
> On Tue, Nov 7, 2017 at 11:44 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch> wrote:
> Nico Schlömer just introduced MED support into his meshio tool, based on my 
> issue request. When I convert blockcylinder-50.exo directly to the MED format 
> with meshio, the result can be loaded into GMSH and looks reasonable. But 
> DMPlexCreateMedFromFile refuses to open it with the following error:
> 
> _MEDmeshAdvancedRd236.c [285] : Erreur de valeur invalide du filtre
> _MEDmeshAdvancedRd236.c [285] :
> _MEDmeshAdvancedRd236.c [286] : geotype = 0
> _MEDmeshAdvancedRd236.c [286] : meshname = "mesh0"
> _MEDmeshAdvancedRd236.c [286] : _datagroupname2 = "NOE"
> _MEDmeshAdvancedRd236.c [287] : (*_filter).storagemode = 2
> [0]PETSC ERROR: #1 DMPlexCreateMedFromFile() line 86 in /scratch/petsc-dev/src/dm/impls/plex/plexmed.c
> [0]PETSC ERROR: #2 DMPlexCreateFromFile() line 2815 in /scratch/petsc-dev/src/dm/impls/plex/plexcreate.c
> [0]PETSC ERROR: #3 main() line 38 in /scratch/petsc-dev/src/dm/impls/plex/examples/tutorials/ex5.c
> 
> Can you see whether it's a bug in DMPlexCreateMedFromFile or the MED file is broken?
> 
> It appears to be an MED incompatibility
> https://bitbucket.org/petsc/petsc/src/17bd883d72f40a596f2d89b5afda5a233b621464/src/dm/impls/plex/plexmed.c?at=master=file-view-default#plexmed.c-86
> We should ask Nico about it.

I tried to load the MED file from meshio (attached) with the MED-supplied test 
$PETSC_ARCH/externalpackages/med-3.2.0/tests/c/test5.c, and it didn't complain. 
I compared it with DMPlexCreateMedFromFile, replaced the MED_COMPACT_STMODE flag 
by MED_GLOBAL_PFLMODE in the MEDfilterBlockOfEntityCr calls (patch attached), and 
it started to work magically. But I have no idea what the meaning of this flag 
is - I found no documentation of the MED library.

Note also that your knepley/fix-plex-med-orientation fix is still needed, so I 
mean this should be applied generally - Nico's meshio is independent of GMSH.

Vaclav

blockcylinder-50.med
Description: Binary data


plexmed.patch
Description: Binary data
>   Thanks,
> 
>      Matt
> 
> Thanks
> Vaclav
> 
> 5. 11. 2017 v 18:48, Matthew Knepley <knep...@gmail.com>:
> 
> On Thu, Nov 2, 2017 at 12:08 PM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch> wrote:
> It seems that DMPlexCreateMedFromFile leaves out some mesh elements. I found it 
> out when investigating why ParMetis redistribution crashes.
> 
> I attach the datafile $PETSC_DIR/share/petsc/datafiles/meshes/blockcylinder-50.exo 
> converted to GMSH and MED format.
> The conversion EXO to GMSH was done by meshio (github.com/nschloe/meshio), and 
> GMSH to MED by GMSH itself.
> 
> I did:
>   cd $PETSC_DIR/src/dm/impls/plex/examples/tutorials
>   make ex5
>   FILE=blockcylinder-50.exo && ./ex5 -filename $FILE -dm_view hdf5:$FILE.h5 && $PETSC_DIR/bin/petsc_gen_xdmf.py $FILE.h5
>   FILE=blockcylinder-50.msh && ./ex5 -filename $FILE -dm_view hdf5:$FILE.h5 && $PETSC_DIR/bin/petsc_gen_xdmf.py $FILE.h5
>   FILE=blockcylinder-50.med && ./ex5 -filename $FILE -dm_view hdf5:$FILE.h5 && $PETSC_DIR/bin/petsc_gen_xdmf.py $FILE.h5
> 
> While the output from blockcylinder-50.exo and blockcylinder-50.msh looks OK, 
> that from blockcylinder-50.med is corrupted.
> I also attach my HDF5/XDMF outputs and screenshots from ParaView.
> 
> It appears that MED, at least coming from GMsh, inverts hexes just like GMsh 
> does. I have fixed this in branch
> 
>   knepley/fix-plex-med-orientation
> 
> If you run your test in this, everything should be fine.
> 
> Michael, can you check whether this is a general fix, or it only applied to MED 
> from GMsh?
> 
>   Thanks,
> 
>      Matt
> 
> Vaclav
> 
> -- 
> What most experimenters take for granted before they begin their experiments is 
> infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/

Re: [petsc-dev] proposed minor PetscPartitioner changes

2017-11-09 Thread Vaclav Hapla
Thanks for the discussion. I think we have made some progress.

> 8. 11. 2017 v 20:42, Matthew Knepley :
> 
> On Wed, Nov 8, 2017 at 2:27 PM, Jed Brown  > wrote:
> Matthew Knepley > writes:
> 
> > On Wed, Nov 8, 2017 at 1:49 PM, Jed Brown  > > wrote:
> >
> >> Matthew Knepley > writes:
> >>
> >> >> > No, this is the right structure.
> >> >>
> >> >> Oh come on.  You're defending a quadratic algorithm.
> >> >>
> >> >>   ierr = ParMETIS_V3_PartKway(vtxdist, xadj, adjncy, vwgt, adjwgt,
> >> >> , , , , tpwgts, ubvec, options, ,
> >> >> assignment, );
> >> >>   // ...
> >> >>   for (p = 0, i = 0; p < nparts; ++p) {
> >> >> for (v = 0; v < nvtxs; ++v) {
> >> >>   if (assignment[v] == p) points[i++] = v;
> >> >> }
> >> >>   }
> >> >>
> >> >> MatPartitioningApply creates an IS with "assignment" and can be
> >> >> converted to a global numbering with ISPartitioningToNumbering.  You
> >> >> could as well have an ISPartitioningToSectionAndIS() that produces your
> >> >> representation, preferably without this silly quadratic algorithm.
> >> >>
> >> >
> >> > Time it. Tell me if it matters. Telling me it matters in the long run is
> >> > metaphysics.

Jed replied with exactly what I needed to know - whether there's an easy 
conversion from the natural IS output of MatPartitioningApply into the 
IS+PetscSection that DMPlexDistribute needs. He showed there actually is one, and 
it won't have worse performance (a theoretical performance benefit). Moreover, 
what you do now in every PetscPartitioner implementation (all those "Convert 
to PetscSection+IS" blocks) would be done in just one place (a design benefit). 
In this context, the request for timing was slightly unfair, I think.
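
To make that point concrete for myself, here is a minimal sketch (the helper name 
and error handling are mine, not existing PETSc API) of turning the assignment 
array into the PetscSection+IS pair in a single pass:

static PetscErrorCode AssignmentToSectionAndIS(MPI_Comm comm, PetscInt nparts, PetscInt nvtxs,
                                               const PetscInt assignment[],
                                               PetscSection partSection, IS *partition)
{
  PetscInt      *points, *offsets, p, v;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscSectionSetChart(partSection, 0, nparts);CHKERRQ(ierr);
  for (v = 0; v < nvtxs; ++v) {ierr = PetscSectionAddDof(partSection, assignment[v], 1);CHKERRQ(ierr);}
  ierr = PetscSectionSetUp(partSection);CHKERRQ(ierr);
  ierr = PetscMalloc1(nvtxs, &points);CHKERRQ(ierr);
  ierr = PetscMalloc1(nparts, &offsets);CHKERRQ(ierr);
  for (p = 0; p < nparts; ++p) {ierr = PetscSectionGetOffset(partSection, p, &offsets[p]);CHKERRQ(ierr);}
  /* one pass over the vertices instead of one pass per part */
  for (v = 0; v < nvtxs; ++v) points[offsets[assignment[v]]++] = v;
  ierr = PetscFree(offsets);CHKERRQ(ierr);
  ierr = ISCreateGeneral(comm, nvtxs, points, PETSC_OWN_POINTER, partition);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}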

> >>
> >> I realize ParMETIS isn't scalable and that if you have a modest number
> >> of parts and only a million or so elements per rank, the cost of what
> >> you do here will be acceptable for most uses.
> >>
> >> But you didn't refute my point that ISPartitioningToSectionAndIS can
> >> produce the representation you want.
> >
> >
> > I do not think its an either or thing. Many equivalent interfaces are
> > possible,
> > so I should have told Vaclav "I think this is the right one", but I thought
> > that
> > was implicit in me being the responder, and none of us thinking that there
> > is
> > a true "right" in interface design.
> >
> >
> >> The IS you're creating is similar
> >> to the inverse of the Numbering (which is a permutation).  You are the
> >> one that replaced a scalable algorithm that has existed for a long time
> >> and uses types correctly with PetscPartitioner which has some ugly
> >> warts, duplicates a lot of code, and isn't a viable replacement for
> >> MatPartitioning.
> >
> >
> > 1) MatPartitioning is not a replacement for what I do. According to you,
> > that
> > should end the argument right there.
> >
> > 2) Deep inside MatPartitioning, it must split things up as I do in order to
> > pack
> > stuff to be sent. I think it should be exposed as I have done.
> 
> Pack what stuff to be sent?  PetscPartitionerPartition() takes the
> arguments to ParMETIS directly as arrays.  To use MatPartitioning, you
> pass those same arrays to MatCreateMPIAdj which literally just holds the
> pointers.  Then MatPartitioningApply passes those on to ParMETIS or
> whichever partitioning package.  PetscPartitionerPartition_*
> implementations depends on DM strictly as a way to define the weights.
> That isn't more general or better; it's more restrictive.
> 
> This sounds like deliberate misunderstanding. We are not talking about the 
> input to
> ParMetis, but the output. The biggest shortcoming of ParMetis (and indeed all 
> mesh
> partitioners) is that they tell you what must move, but do not move it. This 
> is the value
> add of Plex, it moves the mesh (including all connectivity) and the field 
> data according
> to the partition. In order to do this communication, you always want the 
> partition segmented
> by rank, as it is in the Section/IS output.

I think we are talking here about both input and output!

What PetscPartitionerPartition actually does is get the adjacency 
information from the DM using
DMPlexCreatePartitionerGraph(DM dm, PetscInt height, PetscInt *numVertices, 
PetscInt **offsets, PetscInt **adjacency, IS *globalNumbering)
and these raw arrays are then passed to
(*part->ops->partition)(part, dm, size, numVertices, start, adjacency, 
partSection, partition);

I don't like this for a couple of reasons:
1) the type-specific function has completely different arguments than the 
interface function - this is slightly non-standard, and if you talk about 
exposing, there should be an interface function with these arguments
2) on the other hand you derive IS+PetscSection output from the "raw" 
partitioner output inside each 

Re: [petsc-dev] proposed minor PetscPartitioner changes

2017-11-08 Thread Vaclav Hapla

> 8. 11. 2017 v 15:59, Matthew Knepley <knep...@gmail.com>:
> 
> On Wed, Nov 8, 2017 at 9:14 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
> <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
> 
>> 8. 11. 2017 v 14:52, Jed Brown <j...@jedbrown.org 
>> <mailto:j...@jedbrown.org>>:
>> 
>> Matthew Knepley <knep...@gmail.com <mailto:knep...@gmail.com>> writes:
>> 
>>> On Wed, Nov 8, 2017 at 4:52 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
>>> <mailto:vaclav.ha...@erdw.ethz.ch>>
>>> wrote:
>>> 
>>>> 
>>>>> 8. 11. 2017 v 9:06, Lisandro Dalcin <dalc...@gmail.com 
>>>>> <mailto:dalc...@gmail.com>>:
>>>>> 
>>>>> On 8 November 2017 at 05:51, Smith, Barry F. <bsm...@mcs.anl.gov 
>>>>> <mailto:bsm...@mcs.anl.gov>> wrote:
>>>>>> 
>>>>>>> On Nov 7, 2017, at 1:33 AM, Lisandro Dalcin <dalc...@gmail.com 
>>>>>>> <mailto:dalc...@gmail.com>> wrote:
>>>>>>> 
>>>>>>> The only concern I have about PetscPartitioner is that the API depends
>>>>>>> on DM (PetscPartitionerPartition_ routines). Maybe
>>>>>>> PetscPartitioner should eventually move to become more agnostic, and
>>>>>>> that way it can be used to partition matrices and meshes.
>>>>>> 
>>>>>>  This is certainly a serious flaw if PetscPartitioner is intended as
>>>> THE API to use for partitioning. If it is not intended as THE API for
>>>> partitioning then that is also a problem, because why have multiple APIs
>>>> for what is essentially one set of abstractions.
>>>>>> 
>>>>> 
>>>>> Note however that things look easy to refactor. I'll try to team up
>>>>> with Matt to improve things.
>>>> 
>>>> Wait, now we are at the beginning again. I actually wanted to do some
>>>> refactoring of PetscPartitioner, starting with a few cosmetic changes to make
>>>> it better settable from options. But Barry held me back from any edits since
>>>> he thinks it's anti-systematic to keep two independent classes doing
>>>> essentially the same thing. And I agree with that, to be honest. It's strange
>>>> to have two ParMetis, two Scotch and two whatever interfaces. The only thing I
>>>> don't like about MatPartitioning is its name, as it's not just for Mat
>>>> Partitioning :-)
>>>> 
>>>> From my point of view, there are multiple issues with PetscPartitioner.
>>>> Let's look e.g. at PetscPartitionerPartition. It takes as arguments both a
>>>> PetscPartitioner and a DM. This DM must in fact be a DMPlex, which is not
>>>> checked, so it will probably crash somewhere deep in the stack once the
>>>> first DMPlex-specific line is reached. Then there are two output arguments,
>>>> PetscSection partSection and IS *partition. The first must be created
>>>> beforehand while the second is created inside. And I guess they must keep
>>>> the same basic information, just in two different forms.
>>>> 
>>> 
>>> This is wrong.
> 
> Matt, do you mean what I wrote above is not true (then I'm going to prove 
> that it's true), or the opposite that the PetscPartitionerPartition design is 
> actually not good :-)
> 
> I meant "they must keep the same basic information just in two different 
> forms" is wrong (I had to leave for work). The Section gives offsets for
> each partition, and the IS stores the actual points being sent.

OK, now I see. And do you think the Section could be somehow computed from just 
the IS?
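
(To make sure I read the pair correctly: a small sketch of mine of how I would 
consume it, assuming the Section is indexed by target rank - this is my own 
code, not from PETSc:)

const PetscInt *points;
PetscInt        r, rStart, rEnd, off, dof;

PetscSectionGetChart(partSection, &rStart, &rEnd);
ISGetIndices(partition, &points);
for (r = rStart; r < rEnd; ++r) {
  PetscSectionGetDof(partSection, r, &dof);     /* number of points going to rank r */
  PetscSectionGetOffset(partSection, r, &off);  /* where rank r's run starts in the IS */
  /* points[off .. off+dof-1] are the points assigned to rank r */
}
ISRestoreIndices(partition, &points);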

Thanks

Vaclav

> 
>Matt
>  
> Vaclav
> 
>> 
>> Dude, this isn't how we interact here.  If there is a technical
>> reason why what Vaclav, Lisandro, and Barry want to do cannot work, you
>> should explain that.  Just casting doubt without working toward a
>> solution is not okay.
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>



Re: [petsc-dev] proposed minor PetscPartitioner changes

2017-11-08 Thread Vaclav Hapla

> 8. 11. 2017 v 14:52, Jed Brown <j...@jedbrown.org>:
> 
> Matthew Knepley <knep...@gmail.com <mailto:knep...@gmail.com>> writes:
> 
>> On Wed, Nov 8, 2017 at 4:52 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch>
>> wrote:
>> 
>>> 
>>>> 8. 11. 2017 v 9:06, Lisandro Dalcin <dalc...@gmail.com>:
>>>> 
>>>> On 8 November 2017 at 05:51, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:
>>>>> 
>>>>>> On Nov 7, 2017, at 1:33 AM, Lisandro Dalcin <dalc...@gmail.com> wrote:
>>>>>> 
>>>>>> The only concern I have about PetscPartitioner is that the API depends
>>>>>> on DM (PetscPartitionerPartition_ routines). Maybe
>>>>>> PetscPartitioner should eventually move to become more agnostic, and
>>>>>> that way it can be used to partition matrices and meshes.
>>>>> 
>>>>>  This is certainly a serious flaw if PetscPartitioner is intended as
>>> THE API to use for partitioning. If it is not intended as THE API for
>>> partitioning then that is also a problem, because why have multiple APIs
>>> for what is essentially one set of abstractions.
>>>>> 
>>>> 
>>>> Note however that things look easy to refactor. I'll try to team up
>>>> with Matt to improve things.
>>> 
>>> Wait, now we are at the beginning again. I actually wanted to do some
>>> refactoring of PetscPartitioner, starting with a few cosmetic changes to make
>>> it better settable from options. But Barry held me back from any edits since
>>> he thinks it's anti-systematic to keep two independent classes doing
>>> essentially the same thing. And I agree with that, to be honest. It's strange
>>> to have two ParMetis, two Scotch and two whatever interfaces. The only thing I
>>> don't like about MatPartitioning is its name, as it's not just for Mat
>>> Partitioning :-)
>>> 
>>> From my point of view, there are multiple issues with PetscPartitioner.
>>> Let's look e.g. at PetscPartitionerPartition. It takes as arguments both a
>>> PetscPartitioner and a DM. This DM must in fact be a DMPlex, which is not
>>> checked, so it will probably crash somewhere deep in the stack once the
>>> first DMPlex-specific line is reached. Then there are two output arguments,
>>> PetscSection partSection and IS *partition. The first must be created
>>> beforehand while the second is created inside. And I guess they must keep
>>> the same basic information, just in two different forms.
>>> 
>> 
>> This is wrong.

Matt, do you mean what I wrote above is not true (then I'm going to prove that 
it's true), or the opposite that the PetscPartitionerPartition design is 
actually not good :-)

Vaclav

> 
> Dude, this isn't how we interact here.  If there is a technical
> reason why what Vaclav, Lisandro, and Barry want to do cannot work, you
> should explain that.  Just casting doubt without working toward a
> solution is not okay.



Re: [petsc-dev] proposed minor PetscPartitioner changes

2017-11-08 Thread Vaclav Hapla

> 8. 11. 2017 v 9:06, Lisandro Dalcin :
> 
> On 8 November 2017 at 05:51, Smith, Barry F.  wrote:
>> 
>>> On Nov 7, 2017, at 1:33 AM, Lisandro Dalcin  wrote:
>>> 
>>> The only concern I have about PetscPartitioner is that the API depends
>>> on DM (PetscPartitionerPartition_ routines). Maybe
>>> PetscPartitioner should eventually move to become more agnostic, and
>>> that way it can be used to partition matrices and meshes.
>> 
>>   This is certainly a serious flaw if PetscPartitioner is intended as THE 
>> API to use for partitioning. If it is not intended as THE API for 
>> partitioning then that is also a problem, because why have multiple APIs for 
>> what is essentially one set of abstractions.
>> 
> 
> Note however that things look easy to refactor. I'll try to team up
> with Matt to improve things.

Wait, now we are at the beginning again. I actually wanted to do some 
refactoring of PetscPartitioner, starting with a few cosmetic changes to make it 
better settable from options. But Barry held me back from any edits since he 
thinks it's anti-systematic to keep two independent classes doing essentially 
the same thing. And I agree with that, to be honest. It's strange to have two 
ParMetis, two Scotch and two whatever interfaces. The only thing I don't like 
about MatPartitioning is its name, as it's not just for Mat Partitioning :-)

From my point of view, there are multiple issues with PetscPartitioner. Let's 
look e.g. at PetscPartitionerPartition. It takes as arguments both a 
PetscPartitioner and a DM. This DM must in fact be a DMPlex, which is not 
checked, so it will probably crash somewhere deep in the stack once the first 
DMPlex-specific line is reached. Then there are two output arguments, 
PetscSection partSection and IS *partition. The first must be created 
beforehand while the second is created inside. And I guess they must keep the 
same basic information, just in two different forms.
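
To be explicit about the calling convention as I read it from the source (a 
sketch only; error checking omitted):

PetscSection partSection;
IS           partition;

PetscSectionCreate(PetscObjectComm((PetscObject) dm), &partSection); /* created beforehand */
PetscPartitionerPartition(part, dm, partSection, &partition);        /* IS created inside */
/* ... use partSection/partition ... */
ISDestroy(&partition);
PetscSectionDestroy(&partSection);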

Actually, the original problem I wanted to solve is that 
src/dm/impls/plex/examples/tutorials/ex5.c fails with the partitioner set to 
PETSCPARTITIONERPARMETIS for certain numbers of processes, see below. Let me 
start with a pull request altering ex5.c so that the partitioner type can be 
set from options properly and this bug can be reproduced easily.
[ 0] ***ASSERTION failed on line 176 of file 
/scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:
 j == nnbrs
[ 2] ***ASSERTION failed on line 176 of file 
/scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:
 j == nnbrs
ex5: 
/scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:176:
 libparmetis__CommSetup: Assertion `j == nnbrs' failed.
ex5: 
/scratch/petsc-dev/arch-linux-gcc-salvus/externalpackages/git.parmetis/libparmetis/comm.c:176:
 libparmetis__CommSetup: Assertion `j == nnbrs' failed.

I'm wondering whether the MatPartitioning interface has the same problem. But 
anyhow, I think it's time to decide between the two interfaces before chasing 
all these PetscPartitioner issues.

I propose
- to rename MatPartitioning to just Partitioning/-er, or to have it take over 
the name PetscPartitioner,
- that what is now PetscPartitioner becomes MeshPartitioning/-er and serves as 
a sort of adapter from DMPlex meshes to general graphs, so that the more 
general class above can be employed.

Note that I created an issue (#192) as Barry suggested, but there's no 
feedback yet.

Note that my top-level intention is to contribute some more distributed mesh 
loaders - currently there's only one, for the Salome/MED format, which is 
rather non-standard and not well documented.

Thanks

Vaclav

> 
> 
> -- 
> Lisandro Dalcin
> 
> Research Scientist
> Computer, Electrical and Mathematical Sciences & Engineering (CEMSE)
> Extreme Computing Research Center (ECRC)
> King Abdullah University of Science and Technology (KAUST)
> http://ecrc.kaust.edu.sa/
> 
> 4700 King Abdullah University of Science and Technology
> al-Khawarizmi Bldg (Bldg 1), Office # 0109
> Thuwal 23955-6900, Kingdom of Saudi Arabia
> http://www.kaust.edu.sa
> 
> Office Phone: +966 12 808-0459



Re: [petsc-dev] DMPlexCreateMedFromFile broken

2017-11-07 Thread Vaclav Hapla
Nico Schlömer just introduced MED support into his meshio tool, based on my 
feature request. When I convert blockcylinder-50.exo directly to the MED format 
with meshio, the result can be loaded into GMSH and looks reasonable. But 
DMPlexCreateMedFromFile refuses to open it with the following error:

_MEDmeshAdvancedRd236.c [285] : Erreur de valeur invalide du filtre 
_MEDmeshAdvancedRd236.c [285] : 
_MEDmeshAdvancedRd236.c [286] : geotype = 0
_MEDmeshAdvancedRd236.c [286] : meshname = "mesh0"
_MEDmeshAdvancedRd236.c [286] : _datagroupname2 = "NOE"
_MEDmeshAdvancedRd236.c [287] : (*_filter).storagemode = 2
[0]PETSC ERROR: #1 DMPlexCreateMedFromFile() line 86 in 
/scratch/petsc-dev/src/dm/impls/plex/plexmed.c
[0]PETSC ERROR: #2 DMPlexCreateFromFile() line 2815 in 
/scratch/petsc-dev/src/dm/impls/plex/plexcreate.c
[0]PETSC ERROR: #3 main() line 38 in 
/scratch/petsc-dev/src/dm/impls/plex/examples/tutorials/ex5.c

Can you see whether it's a bug in DMPlexCreateMedFromFile or the MED file is 
broken?

Thanks

Vaclav


> 5. 11. 2017 v 18:48, Matthew Knepley <knep...@gmail.com>:
> 
> On Thu, Nov 2, 2017 at 12:08 PM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
> <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
> It seems that DMPlexCreateMedFromFile leaves out some mesh elements. I found 
> it out when investigating why ParMetis redistribution crashes.
> 
> I attach the datafile 
> $PETSC_DIR/share/petsc/datafiles/meshes/blockcylinder-50.exo converted to 
> GMSH and MED format.
> The conversion EXO to GMSH was done by meshio (github.com/nschloe/meshio 
> <http://github.com/nschloe/meshio>), and GMSH to MED by GMSH itself.
> 
> I did:
> 
> cd $PETSC_DIR/src/dm/impls/plex/examples/tutorials
> make ex5
> FILE=blockcylinder-50.exo && ./ex5 -filename $FILE -dm_view hdf5:$FILE.h5 && 
> $PETSC_DIR/bin/petsc_gen_xdmf.py $FILE.h5
> FILE=blockcylinder-50.msh && ./ex5 -filename $FILE -dm_view hdf5:$FILE.h5 && 
> $PETSC_DIR/bin/petsc_gen_xdmf.py $FILE.h5
> FILE=blockcylinder-50.med && ./ex5 -filename $FILE -dm_view hdf5:$FILE.h5 && 
> $PETSC_DIR/bin/petsc_gen_xdmf.py $FILE.h5
> 
> While the output from blockcylinder-50.exo and blockcylinder-50.msh looks OK, 
> that from blockcylinder-50.med is corrupted.
> 
> I also attach my HDF5/XDMF outputs and screenshots from ParaView.
> 
> It appears that MED, at least coming from GMsh, inverts hexes just like GMsh 
> does. I have fixed this in branch
> 
>   knepley/fix-plex-med-orientation
> 
> If you run your test in this, everything should be fine.
> 
> Michael, can you check whether this is a general fix, or it only applied to 
> MED from GMsh?
> 
>   Thanks,
> 
>  Matt
>  
> 
> Vaclav
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>


Re: [petsc-dev] proposed minor PetscPartitioner changes

2017-11-06 Thread Vaclav Hapla

> 6. 11. 2017 v 14:37, Smith, Barry F. <bsm...@mcs.anl.gov>:
> 
> 
> 
>> On Nov 6, 2017, at 7:30 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch> wrote:
>> 
>> 
>>> 6. 11. 2017 v 14:27, Matthew Knepley <knep...@gmail.com>:
>>> 
>>> On Mon, Nov 6, 2017 at 8:24 AM, Smith, Barry F. <bsm...@mcs.anl.gov> wrote:
>>> 
>>>   Vaclav,
>>> 
>>>  Actually you should not just do this! PETSc already has a full class 
>>> for managing partitioning (that Matt ignored for no good reason)
>>> 
>>> Obviously, a good reason existed. All sorts of horrible Mat-specific crap 
>>> was in these. It was impossible to use for mesh partitioning. In addition,
>>> when partitioning meshes, you need extra things (like calculation of the 
>>> dual graph), which is not used in the matrix case.
>> 
>> This is what I thought; mesh and matrix partitioning are quite different
>> tasks to me, although they typically use the same low-level libraries.
> 
>  Vaclav,
> 
>   MatPartitioning is not just for partitioning Mats, it is for partitioning 
> graphs. Please look at the code, you'll see Matt duplicated tons of it in 
> doing his own "mesh partitioning" stuff and all that code duplication is a 
> sign that they really are not "quite a different task".
> 
>  Barry

OK, I looked at it a little bit and I created a new issue #192. I will continue 
studying it more deeply.

Vaclav


> 
>> 
>> Vaclav
>> 
>>> 
>>>  Matt
>>> 
>>> 
>>> see MatPartitioningCreate(). Please look at all the functionality before 
>>> doing anything.
>>> 
>>> Any refactorization you do needs to combine, simplify, and cleanup the 
>>> two interfaces to create one simpler one. And please let us know your 
>>> design ideas (with for example an issue on bitbucket) before you go ahead 
>>> and write a lot of code we may end up not liking.
>>> 
>>> Thanks for looking at this,
>>> 
>>>   Barry
>>> 
>>> 
>>>> On Nov 6, 2017, at 7:09 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch> wrote:
>>>> 
>>>> Hello
>>>> 
>>>> The whole PetscPartitioner class sources are part of 
>>>> src/dm/impls/plex/plexpartition.c, mixed together with some DMPlex* 
>>>> functions.
>>>> If you don't mind, I would move the PetscPartitioner* stuff into the 
>>>> separate file petscpartitioner.c
>>>> (in future, it could be even moved to a separate directory).
>>>> 
>>>> I would also like to
>>>> * add PetscPartitioner{Add,Set,Get}OptionsPrefix,
>>>> * create the partitioner lazily in DMPlexGetPartitioner, followed by 
>>>> PetscObjectIncrementTabLevel, PetscLogObjectParent and 
>>>> PetscPartitionerSetOptionsPrefix calls,
>>>> * edit src/dm/impls/plex/examples/tutorials/ex5.c so that it calls 
>>>> DMSetFromOptions right after DMPlexCreateFromFile, in order to make the 
>>>> partitioner changeable from options.
>>>> 
>>>> Do you have anything against?
>>>> 
>>>> Vaclav
>>> 
>>> 
>>> 
>>> 
>>> -- 
>>> What most experimenters take for granted before they begin their 
>>> experiments is infinitely more interesting than any results to which their 
>>> experiments lead.
>>> -- Norbert Wiener
>>> 
>>> https://www.cse.buffalo.edu/~knepley/



Re: [petsc-dev] proposed minor PetscPartitioner changes

2017-11-06 Thread Vaclav Hapla

> 6. 11. 2017 v 14:27, Matthew Knepley <knep...@gmail.com>:
> 
> On Mon, Nov 6, 2017 at 8:24 AM, Smith, Barry F. <bsm...@mcs.anl.gov 
> <mailto:bsm...@mcs.anl.gov>> wrote:
> 
>Vaclav,
> 
>   Actually you should not just do this! PETSc already has a full class 
> for managing partitioning (that Matt ignored for no good reason)
> 
> Obviously, a good reason existed. All sorts of horrible Mat-specific crap was 
> in these. It was impossible to use for mesh partitioning. In addition,
> when partitioning meshes, you need extra things (like calculation of the dual 
> graph), which is not used in the matrix case.

This is what I thought; mesh and matrix partitioning are quite different tasks 
to me, although they typically use the same low-level libraries.

Vaclav

> 
>   Matt
> 
>  
> see MatPartitioningCreate(). Please look at all the functionality before 
> doing anything.
> 
>  Any refactorization you do needs to combine, simplify, and cleanup the 
> two interfaces to create one simpler one. And please let us know your design 
> ideas (with for example an issue on bitbucket) before you go ahead and write 
> a lot of code we may end up not liking.
> 
>  Thanks for looking at this,
> 
>Barry
> 
> 
> > On Nov 6, 2017, at 7:09 AM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
> > <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
> >
> > Hello
> >
> > The whole PetscPartitioner class sources are part of 
> > src/dm/impls/plex/plexpartition.c, mixed together with some DMPlex* 
> > functions.
> > If you don't mind, I would move the PetscPartitioner* stuff into the 
> > separate file petscpartitioner.c
> > (in future, it could be even moved to a separate directory).
> >
> > I would also like to
> > * add PetscPartitioner{Add,Set,Get}OptionsPrefix,
> > * create the partitioner lazily in DMPlexGetPartitioner, followed by 
> > PetscObjectIncrementTabLevel, PetscLogObjectParent and 
> > PetscPartitionerSetOptionsPrefix calls,
> > * edit src/dm/impls/plex/examples/tutorials/ex5.c so that it calls 
> > DMSetFromOptions right after DMPlexCreateFromFile, in order to make the 
> > partitioner changeable from options.
> >
> > Do you have anything against?
> >
> > Vaclav
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>


[petsc-dev] proposed minor PetscPartitioner changes

2017-11-06 Thread Vaclav Hapla
Hello

The sources of the whole PetscPartitioner class are part of 
src/dm/impls/plex/plexpartition.c, mixed together with some DMPlex* functions.
If you don't mind, I would move the PetscPartitioner* stuff into a separate 
file, petscpartitioner.c (in the future, it could even be moved to a separate 
directory).

I would also like to
* add PetscPartitioner{Add,Set,Get}OptionsPrefix,
* create the partitioner lazily in DMPlexGetPartitioner, followed by 
PetscObjectIncrementTabLevel, PetscLogObjectParent and 
PetscPartitionerSetOptionsPrefix calls,
* edit src/dm/impls/plex/examples/tutorials/ex5.c so that it calls 
DMSetFromOptions right after DMPlexCreateFromFile, in order to make the 
partitioner changeable from options.
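
For the second point, the lazy creation I have in mind would look roughly like 
this (a sketch under my assumptions about the internal DM_Plex struct; error 
checking omitted, and PetscPartitionerSetOptionsPrefix is one of the proposed 
new calls, so it does not exist yet):

PetscErrorCode DMPlexGetPartitioner(DM dm, PetscPartitioner *part)
{
  DM_Plex *mesh = (DM_Plex *) dm->data;   /* internal struct, assumed */

  if (!mesh->partitioner) {
    PetscPartitionerCreate(PetscObjectComm((PetscObject) dm), &mesh->partitioner);
    PetscObjectIncrementTabLevel((PetscObject) mesh->partitioner, (PetscObject) dm, 1);
    PetscLogObjectParent((PetscObject) dm, (PetscObject) mesh->partitioner);
    /* proposed new call, not existing yet:
       PetscPartitionerSetOptionsPrefix(mesh->partitioner, ((PetscObject) dm)->prefix); */
  }
  *part = mesh->partitioner;
  return 0;
}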

Do you have anything against this?

Vaclav

Re: [petsc-dev] DMPlexCreateMedFromFile broken

2017-11-06 Thread Vaclav Hapla

> 5. 11. 2017 v 18:48, Matthew Knepley <knep...@gmail.com>:
> 
> On Thu, Nov 2, 2017 at 12:08 PM, Vaclav Hapla <vaclav.ha...@erdw.ethz.ch 
> <mailto:vaclav.ha...@erdw.ethz.ch>> wrote:
> It seems that DMPlexCreateMedFromFile leaves out some mesh elements. I found 
> it out when investigating why ParMetis redistribution crashes.
> 
> I attach the datafile 
> $PETSC_DIR/share/petsc/datafiles/meshes/blockcylinder-50.exo converted to 
> GMSH and MED format.
> The conversion EXO to GMSH was done by meshio (github.com/nschloe/meshio 
> <http://github.com/nschloe/meshio>), and GMSH to MED by GMSH itself.
> 
> I did:
> 
> cd $PETSC_DIR/src/dm/impls/plex/examples/tutorials
> make ex5
> FILE=blockcylinder-50.exo && ./ex5 -filename $FILE -dm_view hdf5:$FILE.h5 && 
> $PETSC_DIR/bin/petsc_gen_xdmf.py $FILE.h5
> FILE=blockcylinder-50.msh && ./ex5 -filename $FILE -dm_view hdf5:$FILE.h5 && 
> $PETSC_DIR/bin/petsc_gen_xdmf.py $FILE.h5
> FILE=blockcylinder-50.med && ./ex5 -filename $FILE -dm_view hdf5:$FILE.h5 && 
> $PETSC_DIR/bin/petsc_gen_xdmf.py $FILE.h5
> 
> While the output from blockcylinder-50.exo and blockcylinder-50.msh looks OK, 
> that from blockcylinder-50.med is corrupted.
> 
> I also attach my HDF5/XDMF outputs and screenshots from ParaView.
> 
> It appears that MED, at least coming from GMsh, inverts hexes just like GMsh 
> does. I have fixed this in branch
> 
>   knepley/fix-plex-med-orientation
> 
> If you run your test in this, everything should be fine.

Thanks a lot, Matt! It works for me. However, it hasn't fixed the problem with 
ParMetis - I will start another thread.

Vaclav

> 
> Michael, can you check whether this is a general fix, or it only applied to 
> MED from GMsh?
> 
>   Thanks,
> 
>  Matt
>  
> 
> Vaclav
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/ <http://www.caam.rice.edu/~mk51/>


[petsc-dev] tiny bug in DMPlexDistribute

2017-10-13 Thread Vaclav Hapla
Hello

In DMPlexDistribute, when it is run on 1 process and sf != NULL, the output *sf 
is left undefined. This in turn leads to a crash in PetscSFDestroy, e.g. in 
src/dm/impls/plex/examples/tutorials/ex5.c:45.
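
A minimal guard along these lines would already avoid the crash (a sketch of 
the idea only, not the content of the PR; the shape of the serial early-return 
path is my assumption):

/* inside DMPlexDistribute(), on the single-process early-return path */
if (size == 1) {
  if (sf) *sf = NULL;    /* make sure the output SF is always defined */
  *dmParallel = NULL;    /* existing serial behaviour, as I understand it */
  PetscFunctionReturn(0);
}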

I created the corresponding pull request a minute ago. It is my first PR - 
should I add a reviewer there?

Vaclav Hapla

[petsc-dev] MatCopy to zero preallocated matrix

2012-03-02 Thread Vaclav Hapla
OK, but:
1. I don't understand what the use case for MatCopy is.
2. I think it's not good that I can't do this without reallocations, even 
though I have a priori information about where the new nonzeros will lie.
3. The documentation of MatAssemblyBegin/End should definitely mention the 
zero filter.

But OK, 2. is not a problem in my case :-)
Thank you,
Vaclav
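
(For the record, the MatSetOption call Matt refers to below is presumably 
something like the following - the exact option name is my guess:)

MatSetOption(Kreg, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE);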


On 03/02/2012 12:18 AM, Matthew Knepley wrote:
 On Thu, Mar 1, 2012 at 5:14 PM, Vaclav Hapla vaclav.hapla at vsb.cz 
 mailto:vaclav.hapla at vsb.cz wrote:

 This is what I used before for a long time until now when I
 updated PETSc and it started to complain "New nonzero caused a
 malloc!" when I wanted to add


 You can turn off this message using MatSetOption(). We put it there so 
 people know when allocation is being done since it
 can be slow.

   Thanks,

 Matt

 some new nonzero outside of the original nnz-pattern. MatDuplicate
 is always retaining the pattern, isn't it?
 Vaclav

 On 03/02/2012 12:06 AM, Matthew Knepley wrote:
 On Thu, Mar 1, 2012 at 5:03 PM, Vaclav Hapla vaclav.hapla at vsb.cz
 mailto:vaclav.hapla at vsb.cz wrote:

 Dear PETSc team,
 why I am not able to do this:

  ...
  MatCreate(PETSC_COMM_SELF, &Kreg);
  MatSetSizes(Kreg, m, n, m, n);
  MatSetType(Kreg, MATSEQAIJ);
  MatSeqAIJSetPreallocation(Kreg, 0, nnz);
  MatAssemblyBegin(Kreg, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(Kreg, MAT_FINAL_ASSEMBLY);
  PetscFree(nnz);
  {
    PetscInt nz_Kreg;
    MatGetRow(Kreg, 0, &nz_Kreg, PETSC_IGNORE, PETSC_IGNORE);
    PetscPrintf(PETSC_COMM_SELF, "nnz %d  Kreg %d\n", nnz[0], nz_Kreg);  // prints nnz 11  Kreg 0 !!!
  }
  MatCopy(K_loc, Kreg, DIFFERENT_NONZERO_PATTERN); // fails: New nonzero at (0,0) caused a malloc!


 I think you want

 
 http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatDuplicate.html

 which does it in one line.

Matt


 PETSc complains about new nonzero - not surprisingly because
 MatAssemblyBegin/End filters zeros as I understand.
 But when I comment out MatAssemblyBegin/End, MatCopy
 complains that it is only for assembled matrices.

 I think that to call MatSetValues on all allocated nonzeros
 just to make them survive MatAssemblyBegin/End or to replace
 MatCopy call by loop over raw array is both quite awkward.
 Maybe one should be able to turn the filter in
 MatAssemblyBegin/End off. Or is there any other way out?

 BTW, what I need is to make a copy Kreg of given matrix K_loc
 but with few additional preallocated positions which are
 filled later. Is there some other convenient solution?

 Cheers,
 Vaclav Hapla




 -- 
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to
 which their experiments lead.
 -- Norbert Wiener


 -- 

 Vaclav Hapla
 Research assistant
 SPOMECH project http://spomech.vsb.cz/
 Centre of Excellence IT4Innovations http://www.it4i.eu/

 tel.: (+420) 59 732 6291 tel:%28%2B420%29%2059%20732%206291
 VSB-Technical University of Ostrava
 17.listopadu 15/2172
 708 33 Ostrava-Poruba
 Czech Republic




 -- 
 What most experimenters take for granted before they begin their 
 experiments is infinitely more interesting than any results to which 
 their experiments lead.
 -- Norbert Wiener


-- 

Vaclav Hapla
Research assistant
SPOMECH project http://spomech.vsb.cz/
Centre of Excellence IT4Innovations http://www.it4i.eu/

tel.: (+420) 59 732 6291
VSB-Technical University of Ostrava
17.listopadu 15/2172
708 33 Ostrava-Poruba
Czech Republic



[petsc-dev] MatDestroy call in MatMerge

2012-02-26 Thread Vaclav Hapla
Dear PETSc folks,
I feel that it is quite incorrect that MatMerge calls MatDestroy(inmat), since 
this corrupts garbage collection - inmat is destroyed, but the handle is 
nullified only inside MatMerge. It remains unchanged outside, so a subsequent 
MatDestroy call fails. I think that there is no reason for this destroy; if I 
am right, MatMerge does not corrupt inmat, so it can be reused and then 
destroyed manually.
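
To illustrate the problem (the MatMerge signature here is written from memory, 
so treat it as an assumption):

Mat seqmat, mpimat;
/* ... create and assemble seqmat, choose the local row count n ... */
MatMerge(PETSC_COMM_WORLD, seqmat, n, MAT_INITIAL_MATRIX, &mpimat);
/* seqmat has already been destroyed inside MatMerge, but the caller's
   handle is untouched, so this subsequent call fails: */
MatDestroy(&seqmat);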

The second question is: what is the principal difference between 
MatMerge and MatMerge_SeqsToMPI? I am a bit confused.
And finally, I have one note about the MatMerge_SeqsToMPI manual page: it shows 
the synopsis of MatMerge_SeqsToMPINumeric instead, because the manual-page 
comment is located above the MatMerge_SeqsToMPINumeric function in the source.

Thank you in advance for your answer.
Regards,
Vaclav Hapla