On Sat 24/07/10  3:39 PM , David Cournapeau <courn...@gmail.com> wrote:

> On Sun, Jul 25, 2010 at 4:23 AM, Jonathan Tu  wrote:
> >
> > On Jul 24, 2010, at 3:08 PM, Benjamin Root wrote:
> >
> > On Fri, Jul 23, 2010 at 10:10 AM, Jonathan Tu  wrote:
> >>
> >> Hi,
> >> I am trying to install Numpy on a Linux cluster running RHEL4.  I
> >> installed a local copy of Python 2.7 because RHEL4 uses Python 2.3.4
> for
> >> various internal functionalities.  I downloaded the Numpy source code
> using
> >> svn co http://svn.scipy.org/svn/numpy/trunk [1] numpy
> >> and then I tried to build using
> >> python setup.py build
> >> This resulted in the following error:
> >> gcc: numpy/linalg/lapack_litemodule.c
> >> gcc: numpy/linalg/python_xerbla.c
> >> /usr/bin/g77 -g -Wall -g -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/ATLAS -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lg2c -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so
> >> /usr/bin/ld: /usr/lib64/ATLAS/liblapack.a(dgeev.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
> >> /usr/lib64/ATLAS/liblapack.a: could not read symbols: Bad value
> >> collect2: ld returned 1 exit status
> >> /usr/bin/ld: /usr/lib64/ATLAS/liblapack.a(dgeev.o): relocation R_X86_64_32 against `a local symbol' can not be used when making a shared object; recompile with -fPIC
> >> /usr/lib64/ATLAS/liblapack.a: could not read symbols: Bad value
> >> collect2: ld returned 1 exit status
> >> error: Command "/usr/bin/g77 -g -Wall -g -Wall -shared build/temp.linux-x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-x86_64-2.7/numpy/linalg/python_xerbla.o -L/usr/lib64/ATLAS -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas -latlas -lg2c -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so" failed with exit status 1
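The "recompile with -fPIC" hint in the linker output above points at the root cause: the static /usr/lib64/ATLAS/liblapack.a was built as non-position-independent code, so its objects cannot be linked into the shared lapack_lite.so. A quick first check (a sketch; the path comes from the error message and will vary per system) is whether shared ATLAS libraries exist at all:

```shell
# Look for shared (.so) ATLAS/LAPACK builds that the linker could use in
# place of the non-PIC static archive.  The path is the one named in the
# error message above; adjust it for your system.
ls /usr/lib64/ATLAS/*.so 2>/dev/null || echo "no shared ATLAS libraries found"
```

If only .a archives are present, ATLAS itself has to be rebuilt with -fPIC before numpy can link against it; otherwise numpy will still build, falling back to its bundled (slower) lapack_lite routines.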
> >>
> >> Full details of the output are attached in stdout.txt and stderr.txt.
> >> I thought maybe it was a compiler error, so I tried
> >> python setup.py build -fcompiler=gnu
> >> but this resulted in errors as well (stdout_2.txt, stderr_2.txt).
> >> I just noticed that on both attempts, it complains that it can't find
> >> a Fortran 90 compiler.  I'm not sure if I have the right compiler available.
> >> On this cluster I have the following modules:
> >> ------------ /usr/share/Modules/modulefiles ------------
> >> dot         module-cvs  module-info modules     null        use.own
> >> ------------ /usr/local/share/Modules/modulefiles ------------
> >> mpich/gcc/1.2.7p1/64           openmpi/gcc-ib/1.2.3/64
> >> mpich/intel/1.2.7dmcrp1/64     openmpi/gcc-ib/1.2.5/64
> >> mpich/intel/1.2.7p1/64         openmpi/intel/1.2.3/64
> >> mpich/pgi-7.1/1.2.7p1/64       openmpi/intel-11.0/1.2.8/64
> >> mpich-debug/gcc/1.2.7p1/64     openmpi/intel-9.1/1.2.8/64
> >> mpich-debug/intel/1.2.7p1/64   openmpi/intel-ib/1.1.5/64
> >> mpich-debug/pgi-7.1/1.2.7p1/64 openmpi/intel-ib/1.2.3/64
> >> mvapich/gcc/0.9.9/64           openmpi/intel-ib/1.2.5/64
> >> mvapich/pgi-7.1/0.9.9/64       openmpi/pgi-7.0/1.2.3/64
> >> openmpi/gcc/1.2.8/64           openmpi/pgi-7.1/1.2.5/64
> >> openmpi/gcc/1.3.0/64           openmpi/pgi-7.1/1.2.8/64
> >> openmpi/gcc-ib/1.1.5/64        openmpi/pgi-8.0/1.2.8/64
> >> ------------ /opt/share/Modules/modulefiles ------------
> >> intel/10.0/64/C/10.0.026       intel/9.1/64/default
> >> intel/10.0/64/Fortran/10.0.026 intel-mkl/10/64
> >> intel/10.0/64/Iidb/10.0.026    intel-mkl/10.1/64
> >> intel/10.0/64/default          intel-mkl/9/32
> >> intel/11.1/64/11.1.038         intel-mkl/9/64
> >> intel/11.1/64/11.1.072         pgi/7.0/64
> >> intel/9.1/64/C/9.1.045         pgi/7.1/64
> >> intel/9.1/64/Fortran/9.1.040   pgi/8.0/64
> >> intel/9.1/64/Iidb/9.1.045
> >> If anyone has any ideas, they would be greatly appreciated!  I am new to
> >> Linux and am unsure how to fix this problem.
> >>
> >> Jonathan Tu
> >>
> >
> > Jonathan,
> >
> > Looking at your error logs, I suspect that the issue is that the ATLAS
> > libraries that were installed on your system were probably built using
> > f90, and then your f77-built objects can't link against ATLAS.  Do you
> > have admin access to this machine?  If possible, try using the yum
> > package manager to install f90 (its availability depends on whatever
> > RHEL license you have, though...).
> >
> > If you can install f90, I would then remove the numpy build directory
> > and try building it again.
> >
> > I hope this helps,
> > Ben Root
> > _______________________________________________
> > NumPy-Discussion mailing list
> > NumPy-Discussion@scipy.org
> > http://mail.scipy.org/mailman/listinfo/numpy-discussion
> >
> >
> > Hi Ben,
> > I managed to complete my installation.  It required setting up the
> > configure file to reference my local installation of Python, rather than
> > the default /usr/bin/python.  However, I would like to see whether or not
> > I am using the optimized ATLAS/LAPACK libraries for my matrix computations.
> 
> Run ldd on the extensions concerned: numpy/linalg/lapack_lite.so and
> numpy/core/_dotblas.so
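Concretely, that check looks like this (a sketch; the extension paths depend on where numpy was installed, so the snippet asks Python for them rather than hard-coding a location):

```shell
# Find the installed numpy package, then inspect the shared-library
# dependencies of the linalg extension.  ldd lines naming liblapack,
# libatlas, etc. mean the optimized libraries were linked in; their
# absence means the bundled lapack_lite fallback is in use.
NUMPY_DIR=$(python -c "import numpy, os; print(os.path.dirname(numpy.__file__))")
ldd "$NUMPY_DIR"/linalg/lapack_lite*.so
```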
> 
> David


I am unable to find the files lapack_lite.so or _dotblas.so.  I used the locate
command to look for them.  Does this mean my Numpy installation was not built
against LAPACK at all?  If so, how would I fix this?  I'm sorry for the very
basic questions, but I am fairly new to Linux and Python.
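One caveat before concluding anything from `locate`: it reads a database that updatedb may not have refreshed since the install, so a missing entry is not conclusive. Asking Python where numpy actually lives and searching that tree is more reliable (a sketch):

```shell
# Search the installed numpy tree for the compiled extensions directly.
# lapack_lite*.so should exist in any working install (it is built with
# or without ATLAS); _dotblas*.so appears only when a CBLAS was found.
NUMPY_DIR=$(python -c "import numpy, os; print(os.path.dirname(numpy.__file__))")
find "$NUMPY_DIR" -name 'lapack_lite*.so' -o -name '_dotblas*.so'
```

If lapack_lite*.so is genuinely absent, the compiled extensions were never installed at all, e.g. because the build and install steps ran under different Python interpreters.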



Jonathan Tu

