[petsc-dev] moab in nightlybuild has errors..

2013-10-20 Thread Satish Balay
from 
ftp://ftp.mcs.anl.gov/pub/petsc/nightlylogs/archive/2013/10/20/configure_next_arch-freebsd-cxx-pkgs-opt_wii.log

>

Downloading MOAB
  Downloading 
http://ftp.mcs.anl.gov/pub/fathom/moab-nightly.tar.gz to 
/usr/home/balay/petsc.clone-2/externalpackages/_d_moab-nightly.tar.gz


libtool: compile:  
/usr/home/balay/petsc.clone-2/arch-freebsd-cxx-pkgs-opt/bin/mpicxx 
-DHAVE_CONFIG_H -I. -I../.. -I../../src/moab -I../../src/parallel -I../../src 
-I/usr/home/balay/petsc.clone-2/arch-freebsd-cxx-pkgs-opt/include 
-DTEMPLATE_SPECIALIZATION -DTEMPLATE_FUNC_SPECIALIZATION -DHAVE_VSNPRINTF 
-D_FILE_OFFSET_BITS=64 -DHAVE_IEEEFP_H -DUSE_MPI -DNETCDF_FILE -DIS_BUILDING_MB 
-I.. -I./.. -I./../parallel -DUNORDERED_MAP_NS=std::tr1 
-DHAVE_UNORDERED_MAP=tr1/unordered_map -DHAVE_UNORDERED_SET=tr1/unordered_set 
-Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O -fPIC -Wall 
-pipe -pedantic -Wno-long-long -Wextra -Wcast-align -Wpointer-arith -Wformat 
-Wformat-security -Wshadow -Wunused-parameter -MT NCHelperMPAS.lo -MD -MP -MF 
.deps/NCHelperMPAS.Tpo -c NCHelperMPAS.cpp  -fPIC -DPIC -o .libs/NCHelperMPAS.o
*** [NCHelperMPAS.lo] Error code 1
Stop in /usr/home/balay/petsc.clone-2/externalpackages/moab-4.7.0pre/src/io.
*** [all-recursive] Error code 1
Stop in /usr/home/balay/petsc.clone-2/externalpackages/moab-4.7.0pre/src/io.
*** [all-recursive] Error code 1
Stop in /usr/home/balay/petsc.clone-2/externalpackages/moab-4.7.0pre/src.
*** [all] Error code 1
Stop in /usr/home/balay/petsc.clone-2/externalpackages/moab-4.7.0pre/src.
*** [all-recursive] Error code 1
Stop in /usr/home/balay/petsc.clone-2/externalpackages/moab-4.7.0pre.
*** [all] Error code 1
Stop in /usr/home/balay/petsc.clone-2/externalpackages/moab-4.7.0pre.
ReadTetGen.cpp: In member function 'moab::ErrorCode 
moab::ReadTetGen::read_elem_file(moab::EntityType, std::istream&, const 
std::vector >&, moab::Range&)':
ReadTetGen.cpp:371: warning: 'have_group_id' may be used uninitialized in this 
function
ReadTetGen.cpp:371: warning: 'node_per_elem' may be used uninitialized in this 
function
NCHelperMPAS.cpp: In member function 'virtual moab::ErrorCode 
moab::NCHelperMPAS::create_mesh(moab::Range&)':
NCHelperMPAS.cpp:312: error: 'MBZoltan' was not declared in this scope
NCHelperMPAS.cpp:312: error: 'mbZTool' was not declared in this scope
NCHelperMPAS.cpp:312: error: expected type-specifier before 'MBZoltan'
NCHelperMPAS.cpp:312: error: expected `;' before 'MBZoltan'
NCHelperMPAS.cpp:316: error: 'nc_get_vara_double_all' was not declared in this 
scope
NCHelperMPAS.cpp:317: error: 'nc_get_vara_double_all' was not declared in this 
scope
NCHelperMPAS.cpp:318: error: 'nc_get_vara_double_all' was not declared in this 
scope

<


[petsc-dev] nightlybuild with mpiuni

2013-10-20 Thread Satish Balay
Presumably this code gets nulled out for np=1 - so the following would work?

Satish

<

diff --git a/include/mpiuni/mpi.h b/include/mpiuni/mpi.h
index 24d4a22..4ea3f2d 100644
--- a/include/mpiuni/mpi.h
+++ b/include/mpiuni/mpi.h
@@ -201,6 +201,13 @@ typedef int MPI_Op;
 #define MPI_MAX   0
 #define MPI_MIN   0
 #define MPI_REPLACE   0
+#define MPI_PROD  0
+#define MPI_LAND  0
+#define MPI_BAND  0
+#define MPI_LOR   0
+#define MPI_BOR   0
+#define MPI_LXOR  0
+#define MPI_BXOR  0
 #define MPI_ANY_TAG (-1)
 #define MPI_DATATYPE_NULL 0
 #define MPI_PACKED0
 #define MPI_PACKED    0
balay@mockingbird /home/balay/petsc (next)

>.


src/vec/is/sf/impls/basic/sfbasic.c: In function ‘PetscSFBasicPackGetUnpackOp’:
src/vec/is/sf/impls/basic/sfbasic.c:594:18: error: ‘MPI_PROD’ undeclared (first 
use in this function)
   else if (op == MPI_PROD) *UnpackOp = link->UnpackMult;
  ^
src/vec/is/sf/impls/basic/sfbasic.c:594:18: note: each undeclared identifier is 
reported only once for each function it appears in
src/vec/is/sf/impls/basic/sfbasic.c:597:18: error: ‘MPI_LAND’ undeclared (first 
use in this function)
   else if (op == MPI_LAND) *UnpackOp = link->UnpackLAND;
  ^
src/vec/is/sf/impls/basic/sfbasic.c:598:18: error: ‘MPI_BAND’ undeclared (first 
use in this function)
   else if (op == MPI_BAND) *UnpackOp = link->UnpackBAND;
  ^
src/vec/is/sf/impls/basic/sfbasic.c:599:18: error: ‘MPI_LOR’ undeclared (first 
use in this function)
   else if (op == MPI_LOR) *UnpackOp = link->UnpackLOR;
  ^
src/vec/is/sf/impls/basic/sfbasic.c:600:18: error: ‘MPI_BOR’ undeclared (first 
use in this function)
   else if (op == MPI_BOR) *UnpackOp = link->UnpackBOR;
  ^
src/vec/is/sf/impls/basic/sfbasic.c:601:18: error: ‘MPI_LXOR’ undeclared (first 
use in this function)
   else if (op == MPI_LXOR) *UnpackOp = link->UnpackLXOR;
  ^
src/vec/is/sf/impls/basic/sfbasic.c:602:18: error: ‘MPI_BXOR’ undeclared (first 
use in this function)
   else if (op == MPI_BXOR) *UnpackOp = link->UnpackBXOR;
  ^
src/vec/is/sf/impls/basic/sfbasic.c: In function 
‘PetscSFBasicPackGetFetchAndOp’:
src/vec/is/sf/impls/basic/sfbasic.c:620:18: error: ‘MPI_PROD’ undeclared (first 
use in this function)
   else if (op == MPI_PROD)   *FetchAndOp = link->FetchAndMult;
  ^
src/vec/is/sf/impls/basic/sfbasic.c:621:18: error: ‘MPI_LAND’ undeclared (first 
use in this function)
   else if (op == MPI_LAND)   *FetchAndOp = link->FetchAndLAND;
  ^
src/vec/is/sf/impls/basic/sfbasic.c:622:18: error: ‘MPI_BAND’ undeclared (first 
use in this function)
   else if (op == MPI_BAND)   *FetchAndOp = link->FetchAndBAND;
  ^
src/vec/is/sf/impls/basic/sfbasic.c:623:18: error: ‘MPI_LOR’ undeclared (first 
use in this function)
   else if (op == MPI_LOR)*FetchAndOp = link->FetchAndLOR;
  ^
src/vec/is/sf/impls/basic/sfbasic.c:624:18: error: ‘MPI_BOR’ undeclared (first 
use in this function)
   else if (op == MPI_BOR)*FetchAndOp = link->FetchAndBOR;
  ^
src/vec/is/sf/impls/basic/sfbasic.c:625:18: error: ‘MPI_LXOR’ undeclared (first 
use in this function)
   else if (op == MPI_LXOR)   *FetchAndOp = link->FetchAndLXOR;
  ^
src/vec/is/sf/impls/basic/sfbasic.c:626:18: error: ‘MPI_BXOR’ undeclared (first 
use in this function)
   else if (op == MPI_BXOR)   *FetchAndOp = link->FetchAndBXOR;
  ^
  CC arch-linux-uni/obj/src/vec/is/is/impls/stride/stride.o
make[2]: *** [arch-linux-uni/obj/src/vec/is/sf/impls/basic/sfbasic.o] Error 1
make[2]: *** Waiting for unfinished jobs
  CC arch-linux-uni/obj/src/vec/is/is/impls/stride/ftn-auto/stridef.o
make[2]: Leaving directory `/sandbox/petsc/petsc.clone-2'
make[1]: *** [gnumake] Error 2
make[1]: Leaving directory `/sandbox/petsc/petsc.clone-2'
**ERROR**
  Error during compile, check arch-linux-uni/conf/make.log
  Send it and arch-linux-uni/conf/configure.log to petsc-ma...@mcs.anl.gov



Re: [petsc-dev] moab in nightlybuild has errors..

2013-10-20 Thread Vijay S. Mahadevan
Satish,

Iulian just fixed the problems on MOAB master and so further builds
should run cleanly. Is there a way to subscribe to buildbot failures
relating only to MOAB in PETSc? If that is the case, you can add
moab-...@mcs.anl.gov to that list.

Thanks,
Vijay

On Sun, Oct 20, 2013 at 10:17 AM, Satish Balay  wrote:
> [quoted nightly-build log trimmed; see the original message above]


Re: [petsc-dev] Compiling Petsc with Intel mpi safe thread library

2013-10-20 Thread Barry Smith

   You will need to have mkl-cpardiso.py create and run a simple test 
program to make sure that the MPI supports funneled; see for example

http://www.mcs.anl.gov/research/projects/mpi/mpi-standard/mpi-report-2.0/node165.htm

Based on my reading this can only be determined by actually running an MPI 
program and checking the provided flag to see if the MPI is providing the 
funneled level.

  unfortunately in the batch world this is a problem …..


  Barry




On Oct 19, 2013, at 11:13 PM, Jose David Bermeol  wrote:

> many thanks, I didn't know that using -mt_mpi half way wouldn't work. 
> 
> Yes, cpardiso needs this flag; without it, it compiles, but when I run it I 
> get the following error:
> MPI_THREAD_FUNNELED level is not supported!
> Exit...
> 
> So is there a way to check in mkl-cpardiso.py that I'm using the flag -mt_mpi?
> 
> Thanks
> - Original Message -
> From: "Satish Balay" 
> To: "Jose David Bermeol" 
> Cc: petsc-dev@mcs.anl.gov
> Sent: Sunday, October 20, 2013 12:04:19 AM
> Subject: Re: [petsc-dev] Compiling Petsc with Intel mpi safe thread library
> 
> sounds like you are inserting -mt_mpi half way through the configure
> process via mkl-cpardiso.py.
> 
> This won't work.
> 
> is mkl-cpardiso tied to [thread safe variant of] intel mpi?
> i.e it won't work with regular intel-mpi or other MPI like mpich?
> 
> To use thread safe mpi - you specify thread safe mpi compilers to
> petsc configure.
> 
> i.e something like:
> 
> --with-cc='mpicc -mt_mpi' or --with-cc='mpicc' CFLAGS='-mt_mpi'.
> 
> Satish
> 
> On Sat, 19 Oct 2013, Jose David Bermeol wrote:
> 
>> Hi, I'm working again with the solver mkl-cpardiso. I did my implementation 
>> and the configuration works well; the problem is during compilation. The 
>> linking flags petsc is using are the following:
>> 
>>-Wl,-rpath,/home/jbermeol/software/test/arch-linux2-c-opt/lib 
>> -L/home/jbermeol/software/test/arch-linux2-c-opt/lib  -lpetsc 
>> -Wl,--start-group -L/apps/rhel6/intel/composer_xe_2013.3.163/mkl/lib/intel64 
>> -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -Wl,--end-group -lpthread -lm 
>> -liomp5 -mt_mpi -Wl,--start-group 
>> -L/home/jbermeol/testPetscSolvers/intel_mkl_cpardiso/lib/intel64 
>> -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -Wl,--end-group 
>> -lcpardiso_lp64 -lcpardiso_mpi_lp64 -lpthread -lm -liomp5 -lifcore 
>> -Wl,-rpath,/apps/rhel6/intel/composer_xe_2013.3.163/mkl/lib/intel64 
>> -L/apps/rhel6/intel/composer_xe_2013.3.163/mkl/lib/intel64 -lmkl_intel_lp64 
>> -lmkl_sequential -lmkl_core -lpthread -lm -lX11 -lpthread 
>> -Wl,-rpath,/apps/rhel6/intel/impi/4.1.0.030/intel64/lib 
>> -L/apps/rhel6/intel/impi/4.1.0.030/intel64/lib 
>> -Wl,-rpath,/apps/rhel6/intel/composer_xe_2013.3.163/compiler/lib/intel64 
>> -L/apps/rhel6/intel/composer_xe_2013.3.163/compiler/lib/intel64 
>> -Wl,-rpath,/apps/rhel6/intel/compos
> 
> er_xe_2013.3.163/ipp/lib/intel64 
> -L/apps/rhel6/intel/composer_xe_2013.3.163/ipp/lib/intel64 
> -Wl,-rpath,/apps/rhel6/intel/composer_xe_2013.3.163/tbb/lib/intel64 
> -L/apps/rhel6/intel/composer_xe_2013.3.163/tbb/lib/intel64 
> -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7 
> -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 
> -Wl,-rpath,/home/jbermeol/software/test/-Xlinker 
> -Wl,-rpath,/opt/intel/mpi-rt/4.1 -lifport -lifcore -lm -lm -lmpigc4 -ldl 
> -lmpigf -lmpi -lmpigi -lrt -lpthread -limf -lsvml -lirng -lipgo -ldecimal 
> -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s -ldl
>> 
>> The problem is with the flag "-mt_mpi": this flag tells the Intel 
>> compiler to link with the library libmpi_mt.so (this is a thread safe 
>> library for Intel MPI). However petsc adds the flag "-lmpi", which links with 
>> libmpi.so. Because of this I'm getting the following error:
>>   ld: MPIR_Thread: TLS definition in 
>> /apps/rhel6/intel/impi/4.1.0.030/intel64/lib/libmpi_mt.so section .tbss 
>> mismatches non-TLS definition in 
>> /apps/rhel6/intel/impi/4.1.0.030/intel64/lib/libmpi.so section .bss
>> 
>> So the first question would be: where is this "-lmpi" library flag added?
>> Second, how can I compile using the thread safe MPI library from Intel?
>> Is there a way to set a dependency such that the solver cpardiso is not 
>> installed if petsc is compiled without this thread safe library?
>> I don't know yet how to check if petsc has been configured with mkl blas/lapack
>> 
>> Thanks
>> 



Re: [petsc-dev] moab in nightlybuild has errors..

2013-10-20 Thread Satish Balay
On Sun, 20 Oct 2013, Vijay S. Mahadevan wrote:

> Satish,
> 
> Iulian just fixed the problems on MOAB master and so further builds
> should run cleanly. 

great!

> Is there a way to subscribe to buildbot failures
> relating only to MOAB in PETSc? If that is the case, you can add
> moab-...@mcs.anl.gov to that list.

petsc builds are not with buildbot. Currently we look at the logs
manually to determine the issues.

Satish

> 
> Thanks,
> Vijay
> 
> On Sun, Oct 20, 2013 at 10:17 AM, Satish Balay  wrote:
> > [quoted nightly-build log trimmed; see the original message above]
> 



Re: [petsc-dev] Push restrictions on branches

2013-10-20 Thread Jed Brown
Jed Brown  writes:

> Bitbucket added support to restrict push access based on branch names
> (glob matching).  For example, that would allow us to have a smaller
> group of people with access to merge to 'maint' or 'master'.
>
> Is this a feature we should start using in petsc.git?

I set this restriction on maint*, master, and next.  The "integrators"
group currently contains Barry, Hong, Karl, Matt, Peter, Satish, and
myself.  Everyone else with write access is still able to push to any
other branches.  We can amend or cancel this at any time.

> One tangible difference from the current model is that it would let us give
> more people push access to named branches, which then allows an
> integrator to patch up a branch for an open pull request.  (When a PR
> comes from a fork instead of an in-repo branch, we can't push to their
> repository so we can't update the PR.  This sometimes leads to tedious
> fine-tuning of trivial details in the PR comments.)
>
>
> Admins can see the branch list here:
>
> https://bitbucket.org/petsc/petsc/admin/branches

