On Feb 4, 2013, at 8:02 PM, Satish Balay <balay at mcs.anl.gov> wrote:

> On Mon, 4 Feb 2013, Barry Smith wrote:
> 
> 
> If PETSC_ARCH is not specified by the user, configure creates one and
> uses it. It looks ugly for a prefix build, but then, making one build
> better means compromising somewhere else.
> 
> options:
> 
> - don't use PETSC_ARCH in the module file name [but then rely on users
> to rename it when they copy it over to the appropriate location].
> - don't use PETSC_ARCH in the module name for prefix installs only.
> But this becomes inconsistent.
> - provide a configure option for users to specify a name for this modulefile.
> [default always uses PETSC_ARCH]
> - [others?]

   What about ${PETSC_VERSION}${PETSC_ARCH}?

The user has to rename/move the module files for prefix installs, so maybe we 
don't need to give it any particular name, just petsc-module?

Yes, it is ugly without the hyphen, but at least without PETSC_ARCH it is 
consistent.

The concept of a module file seems silly with PETSC_DIR/PETSC_ARCH installs; 
should we even make them in that case?



BTW: Can you check what the PETSc module files look like at NERSC?



> 
> BTW: I'm not sure if 'lib' is the correct location to stash module/pkgconfig 
> files.
> Autoconf install might provide options like --prefix --module-prefix 
> --pkgconfig-prefix?

Barrys-MacBook-Pro:petsc-dev barrysmith$ ls /usr/lib/pkgconfig/
apr-1.pc         apr-util-1.pc    libcrypto.pc     libedit.pc       libiodbc.pc
libpcre.pc       libpcreposix.pc  libssl.pc        openssl.pc
Barrys-MacBook-Pro:petsc-dev barrysmith$ ls /usr/local/lib/pkgconfig/
PETSc.pc         libpcre.pc       libpcrecpp.pc    libpcreposix.pc  libpng.pc
libpng15.pc      libtiff-4.pc     tesseract.pc     valgrind.pc

bsmith at login2:~$ ls /usr/lib/pkgconfig/
apr-1.pc       dbus-python.pc  fontutil.pc  glew.pc                ibus-table.pc       mpich2-c.pc    nautilus-sendto.pc  printproto.pc           python2.pc     sawfish.pc
apr-util-1.pc  fftw3f.pc       fuse.pc      gnome-system-tools.pc  libquvi-scripts.pc  mpich2-cxx.pc  netcdf.pc           pygtksourceview-2.0.pc  python-dbg.pc  scrnsaverproto.pc
cln.pc         fftw3l.pc       ginac.pc     gsl.pc                 libR.pc             mpich2-f77.pc  notify-python.pc    python-2.7-dbg.pc       python.pc      valgrind.pc
cppunit.pc     fftw3.pc        ginn.pc      gts.pc                 mcabber.pc          mpich2-f90.pc  pm-utils.pc         python-2.7.pc           rep-gtk.pc     xorg-wacom.pc
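
For reference, a .pc file is just a handful of variables plus metadata; a 
minimal PETSc.pc might look something like this (a sketch only, assuming a 
/usr/local prefix; the real file is generated by configure and carries the 
full compiler and link lines):

   prefix=/usr/local
   libdir=${prefix}/lib
   includedir=${prefix}/include

   Name: PETSc
   Description: Portable, Extensible Toolkit for Scientific Computation
   Version: 3.3
   Cflags: -I${includedir}
   Libs: -L${libdir} -lpetsc

A consumer then just runs "pkg-config --cflags --libs PETSc".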

> 


> Satish
> 
>> 
>>   Thanks
>> 
>>    Barry
>> 
>> 
>> Begin forwarded message:
>> 
>>> From: Philip Papadopoulos <philip.papadopoulos at gmail.com>
>>> Subject: Re: [petsc-maint #149715] Making a PetSc Roll and Rocks Module
>>> Date: January 24, 2013 9:48:36 AM CST
>>> To: "Schneider, Barry I." <bschneid at nsf.gov>
>>> Cc: Barry Smith <bsmith at mcs.anl.gov>
>>> 
>>> Dear Barry^2,
>>> 
>>> The major observation about the build process is that building the 
>>> software so that it can live in a standard OS package is more work than 
>>> it should be. Here is the standard "meme".
>>> Suppose you have three directories:
>>> sw-src
>>> tmproot/sw-install
>>> sw-install
>>> 
>>> sw-src: the source directory tree for building the sw package (e.g. 
>>> petsc-3.3-p5)
>>> sw-install: the directory where one wants sw installed on the final 
>>> system. We use something like 
>>> /opt/petsc/compiler/mpi/interconnect/petsc-version/petsc-arch
>>> 
>>> The tmproot/sw-install directory is an artifact of the way many people 
>>> build software for packaging. Instead of installing into sw-install 
>>> directly, you install into tmproot/sw-install.  Then you direct the 
>>> package manager to grab all files in tmproot/sw-install to put in the 
>>> package.  The package itself strips off the leading tmproot directory; in 
>>> other words, the package labels all files as being under sw-install/.
>>> 
>>> So the build issue that I ran into is that the build directory and/or the 
>>> tmproot directory path becomes embedded into a large number of files 
>>> (include files, mod files, etc). To get around this,
>>> I did a bind mount of (tmproot/sw-install --> /sw-install) and told petsc 
>>> to install into /sw-install.  I consider that to be a "hack", but petsc 
>>> isn't the only software that I've run into that requires this
>>> mounting bit of legerdemain. 
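>>> 
>>> (Concretely, the workaround was along these lines; the paths here are 
>>> illustrative:
>>> 
>>>    mkdir -p /sw-install
>>>    mount --bind /tmp/tmproot/sw-install /sw-install
>>>    ./configure --prefix=/sw-install && make && make install
>>> 
>>> so every path recorded by the install says /sw-install rather than the 
>>> tmproot staging path.)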
>>> 
>>> Many software packages support a dest= or DESTDIR variable for "make 
>>> install" that performs the "installation" into a tmproot directory but 
>>> leaves no trace of tmproot in any of the installed configuration files. 
>>> (http://git.rocksclusters.org/cgi-bin/gitweb.cgi?p=core/python/.git;a=blob;f=src/python-2/Makefile;h=ced6cc1c6eb6a4f70dd0f3e1b0ccd1ac1e40f989;hb=HEAD)
>>> shows a Makefile for python that supports this.
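>>> 
>>> The idiom is roughly the following (a sketch, not the actual python 
>>> Makefile linked above):
>>> 
>>>    PREFIX  = /sw-install
>>>    DESTDIR =
>>> 
>>>    install:
>>>            cp -r build/* $(DESTDIR)$(PREFIX)
>>> 
>>> Everything is configured against $(PREFIX), but "make install 
>>> DESTDIR=/tmp/tmproot" physically copies the files into the staging tree, 
>>> so nothing installed ever records the tmproot path.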
>>> 
>>> Ultimately we create packages of software to simplify a number of things, 
>>> including updating installed software on many machines. 
>>> 
>>> The other "issue" is not really an issue, but a request. When petsc is 
>>> built it creates a petscvariables (or similarly named) file with lots of 
>>> environment variables.  It would be terrific if it could also create an 
>>> environment modules file with this same information.  Users often want 
>>> different versions/different configs of the same software, and environment 
>>> modules is now standard in Redhat/CentOS distributions.
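>>> 
>>> For instance, a generated modulefile could look something like this (a 
>>> rough sketch; the install path and module layout are hypothetical):
>>> 
>>>    #%Module1.0
>>>    set             base             /opt/petsc/intel/openmpi/ib/3.3-p5
>>>    setenv          PETSC_DIR        $base
>>>    prepend-path    PATH             $base/bin
>>>    prepend-path    LD_LIBRARY_PATH  $base/lib
>>> 
>>> A user would then simply "module load petsc" alongside the matching 
>>> compiler and MPI modules.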
>>> 
>>> Hope this helps.
>>> 
>>> 
>>> On Thu, Jan 24, 2013 at 2:44 AM, Schneider, Barry I. <bschneid at nsf.gov> 
>>> wrote:
>>> Barry,
>>> I am sending this to Phil Papadopoulos and he can make much more precise 
>>> recommendations.  Glad you have thick skins.  Being at NSF requires the 
>>> same physiology.
>>> 
>>> -----Original Message-----
>>> From: Barry Smith [mailto:bsmith at mcs.anl.gov]
>>> Sent: Wednesday, January 23, 2013 10:19 PM
>>> To: Schneider, Barry I.
>>> Subject: Re: [petsc-maint #149715] Making a PetSc Roll and Rocks Module
>>> 
>>> 
>>>   Barry,
>>> 
>>>   By far the most common email we get is regarding installation, so we are 
>>> always trying to understand how to improve that process. If we can 
>>> eliminate just 20 percent of installation problems that would save us a 
>>> great deal of work, so yes, any critique/suggestions are greatly 
>>> appreciated, and after all these years we have thick skins so can survive 
>>> even the harshest suggestions.
>>> 
>>>   Barry
>>> 
>>> On Jan 23, 2013, at 6:13 PM, "Schneider, Barry I." <bschneid at nsf.gov> 
>>> wrote:
>>> 
>>>> Barry,
>>>> First, thanks for getting back to me so quickly.   I view this as a way to 
>>>> make things easier for everyone for a library that is really invaluable to 
>>>> many of us.  Phil P. is a pro at making rolls and modules for the Rocks 
>>>> distribution of CentOS that is widely used in the NSF cyberinfrastructure 
>>>> and by others.  I am both a program manager for the XSEDE project and a 
>>>> large user as well so I appreciate the ability to use the library in its 
>>>> most transparent fashion.  After Phil did his thing we tested it and it 
>>>> worked perfectly on our cluster.  Basically, you load the module and any 
>>>> associated modules such as the Intel compiler and appropriate MPI module 
>>>> and then you just use it.  No screwing around with building and the rest 
>>>> of it.  That's the good news.  Of course, you need to make a roll for 
>>>> every specific combination of compiler and MPI but I believe what Phil has 
>>>> learned makes that pretty straightforward to do.  While I cannot speak for 
>>>> Phil, I would expect that anything he learned would be available to you 
>>>> folks if that's what you wanted.  The next step for us will be to 
>>>> incorporate what we did in the Rocks distribution, which will eventually 
>>>> propagate to lots of users.  Eventually, there will be an XSEDE 
>>>> distribution, which will be used by an even larger number of sites.  We 
>>>> are talking about many thousands of users.  So, I guess what I am really 
>>>> asking is whether you are interested enough in what Phil learned to 
>>>> perhaps modify the current PetSc so that it is easier for users to 
>>>> install, or to make available the technology for folks to make their own 
>>>> rolls.  Of course, you could decide that is not on your agenda and that's 
>>>> fine.  But if you can capitalize on our experience and your great 
>>>> software, that would be wonderful and perhaps offer users alternatives to 
>>>> the current way of doing things that would make life easier.
>>>> 
>>>> **************************************************
>>>> Dr Barry I. Schneider
>>>> Program Director for Cyberinfrastructure
>>>> Past Chair, APS Division of Computational Physics
>>>> Physics Division
>>>> National Science Foundation
>>>> 4201 Wilson Blvd.
>>>> Arlington, VA 22230
>>>> Phone:(703) 292-7383
>>>> FAX:(703) 292-9078
>>>> Cell:(703) 589-2528
>>>> **************************************************
>>>> 
>>>> 
>>>> -----Original Message-----
>>>> From: Barry Smith [mailto:bsmith at mcs.anl.gov]
>>>> Sent: Wednesday, January 23, 2013 5:56 PM
>>>> To: petsc-maint at mcs.anl.gov; Schneider, Barry I.
>>>> Subject: Re: [petsc-maint #149715] Making a PetSc Roll and Rocks
>>>> Module
>>>> 
>>>> 
>>>>  Barry,
>>>> 
>>>>    Thanks for the feedback. We actually desire to support "system" 
>>>> installs just as easily as "home directory" installs, though our 
>>>> documentation emphasizes "home directory" installs since we feel that is 
>>>> most practical for more users. We'd be interested in more specifics on the 
>>>> difficulties and any suggestions there may be to make the process 
>>>> (especially for multiple compilers/mpis) easier. That is, what could we 
>>>> have done differently to make it easier for you?
>>>> 
>>>>   Thanks
>>>> 
>>>>   Barry
>>>> 
>>>> On Jan 23, 2013, at 1:59 PM, "Schneider, Barry I." <bschneid at nsf.gov> 
>>>> wrote:
>>>> 
>>>>> I thought I might pass on the following to the folks maintaining PetSc.  
>>>>> Recently we had the occasion to make a Rocks roll and module to be used 
>>>>> on a local cluster here at NSF.  Phil Papadopoulos, the developer of 
>>>>> Rocks at SDSC, took the time to help us do that, not simply because we 
>>>>> wanted it but also because he wanted to know how to do it and distribute 
>>>>> it to a much larger set of users of NSF resources and also on our 
>>>>> NSF-supported platforms.  Here are his comments.  If you feel there are 
>>>>> things that could be made easier, that would be great.  It also could be 
>>>>> useful to you directly.
>>>>> 
>>>>> BTW, Petsc is kind of a bear to package -- they really expect you to 
>>>>> build it in your home directory :-).
>>>>> I took the time to make the roll pretty flexible in terms of 
>>>>> compiler/mpi/network support to build various versions.
>>>>> This was modeled after other Triton rolls so that it is consistent with 
>>>>> other software -- it likely becomes part of the standard SDSC software 
>>>>> stack, so this was a good thing to do all the way around.
>>>>> 
>>>>> I'm about ready to add this to our code repository; it will show up on 
>>>>> git.rocksclusters.org tomorrow morning.
>>>>> 
>>>>> 
>>>>> **************************************************
>>>>> Dr Barry I. Schneider
>>>>> Program Director for Cyberinfrastructure
>>>>> Past Chair, APS Division of Computational Physics
>>>>> Physics Division
>>>>> National Science Foundation
>>>>> 4201 Wilson Blvd.
>>>>> Arlington, VA 22230
>>>>> Phone:(703) 292-7383
>>>>> FAX:(703) 292-9078
>>>>> Cell:(703) 589-2528
>>>>> **************************************************
>>>>> 
>>>>> 
>>>> 
>>> 
>>> 
>>> 
>>> 
>>> -- 
>>> Philip Papadopoulos, PhD
>>> University of California, San Diego
>>> 858-822-3628 (Ofc)
>>> 619-331-2990 (Fax)
>> 
>> 
