Re: [Numpy-discussion] Integers to integer powers, let's make a decision

2016-06-13 Thread V. Armando Solé

On 11/06/2016 02:28, Allan Haldane wrote:


So as an extra twist in this discussion, this means numpy actually
*does* return a float value for an integer power in a few cases:

 >>> type( np.uint64(2) ** np.int8(3) )
 numpy.float64



Shouldn't that example wrap up the discussion? I see that behaviour for 
any integer power of an np.uint64, so if something was going to be 
broken, it already is.


We were given the choice between:

1 - Integers to negative integer powers raise an error.
2 - Integers to integer powers always results in floats.

and we were never given the choice to adapt the returned type to the 
result. Assuming that option is not possible, option 2 is certainly 
better than option 1 (why refuse to perform a clearly defined 
operation?) *and* returning a float is already the behaviour for 
integer powers of np.uint64.
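The promotion rule involved can be checked directly. A small sketch (hedged: the second half describes numpy releases since 1.12, where option 1 was ultimately adopted for negative exponents):

```python
import numpy as np

# uint64 mixed with any signed integer type promotes to float64,
# which is why an integer power can already return a float:
assert np.result_type(np.uint64, np.int8) == np.float64

# Since numpy 1.12 (option 1 from this discussion), integers raised
# to negative integer powers raise an error instead:
try:
    np.arange(1, 4) ** -1
    negative_power_raises = False
except ValueError:
    negative_power_raises = True
```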


Armando



___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
https://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [JOB ANNOUNCEMENT] Software Developer permanent position available at ESRF, France

2014-02-20 Thread V. Armando Solé

  
  
Sorry, the link I sent you is in French. This is the English version.

EUROPEAN SYNCHROTRON RADIATION FACILITY
INSTALLATION EUROPEENNE DE RAYONNEMENT SYNCHROTRON
  
The ESRF is a multinational research institute, situated in Grenoble, France, and financed by 20 countries, mostly European. It operates a powerful synchrotron X-ray source with some 30 beamlines (instruments) covering a wide range of scientific research in fields such as biology and medicine, chemistry, earth and environmental sciences, materials and surface science, and physics. The ESRF employs about 600 staff and is organized as a French société civile.

Within the Instrumentation Services and Development Division, the Software Group is now seeking to recruit a:

Software Developer (m/f)
permanent contract
  THE FUNCTION
  
The ESRF is in the process of a major upgrade of the accelerator source and of several beamlines. In particular, the Upgrade Programme has created a heavy demand for data visualisation and analysis due to the massive data flow coming from the new detectors. The next generation of experiments will rely on both advanced parallelised algorithms for data analysis and high performance tools for data visualization.

You will join the Data Analysis Unit in the Software Group of the ISDD and will develop software for data analysis and visualization. You will be expected to:

- develop and maintain software and graphical user interfaces for visualizing scientific data
- help develop a long term strategy for the visualization and analysis of data (online and offline)
- contribute to the general effort of adapting existing software and developing new solutions for data analysis

You will need to be able to understand data analysis requirements and propose working solutions.
  
QUALIFICATIONS AND EXPERIENCE

The candidate should have a higher university degree (Master, MSc, DESS, Diplom, Diploma, Ingeniería Superior, Licenciatura, Laurea or equivalent) in Computer Science, Mathematics, Physics, Chemistry, Bioinformatics, Engineering or related areas. Applicants must have at least 3 years of experience in scientific programming in the fields of data analysis and visualisation.

The candidate must have good knowledge of OpenGL and the OpenGL Shading Language or similar visualisation libraries. Experience in data analysis, especially of large datasets, is highly desirable, particularly using e.g. OpenCL or CUDA. Knowledge of one high level programming language (Python, Matlab, ...), a high-level graphics library (VTK, ...) and one low level language (C, C++, ...) will be considered assets, in addition to competence in using development tools for compilation, distribution and code management. Proven contributions to open source projects will also be appreciated.

The successful candidate should be able to work independently as well as in multidisciplinary teams. Good English communication and presentation skills are required.

Further information on the post can be obtained from Andy Götz (andy.g...@esrf.fr) and/or Claudio Ferrero (ferr...@esrf.fr).

Ref. 8173 - Deadline for returning application forms: 01/04/2014
  

  



Re: [Numpy-discussion] Import error while freezing with cxfreeze

2013-04-10 Thread V. Armando Solé

Hello,

On 10/04/2013 11:13, Anand Gadiyar wrote:

On Friday, April 5, 2013, Anand Gadiyar wrote:


Hi all,

I have a small program that uses numpy and scipy. I ran into a
couple of errors while trying to use cxfreeze to create a
windows executable.

I'm running Windows 7 x64, Python 2.7.3 64-bit, Numpy 1.7.1rc1
64-bit, Scipy-0.11.0 64-bit, all binary installs from
http://www.lfd.uci.edu/~gohlke/pythonlibs/





If you intend to use that binary for yourself, please forget this message.

As far as I know, if you intend to distribute that binary *and* you use 
the numpy version built with MKL support, you need an MKL license from 
Intel.


Best regards,

Armando


Re: [Numpy-discussion] Windows, blas, atlas and dlls

2013-02-18 Thread V. Armando Solé
Hi Sergio,

I faced a similar problem one year ago. I solved it writing a C function 
receiving a pointer to the relevant linear algebra routine I needed.

Numpy does not offer direct access to the underlying library 
functions, but scipy does:

from scipy.linalg.blas import fblas
dgemm = fblas.dgemm._cpointer
sgemm = fblas.sgemm._cpointer

So I wrote a small extension receiving the data to operate on and the 
relevant pointer.

The drawback of this approach is the dependency on scipy, but it works nicely.

Armando

On 18/02/2013 16:38, Sergio Callegari wrote:
 Hi,

 I have a project that includes a cython script which in turn does some direct
 access to a couple of cblas functions. This is necessary, since some matrix
 multiplications need to be done inside a tight loop that gets called thousands
 of times. Speedup wrt calling scipy.linalg.blas.cblas routines is 10x to 20x.

 Now, all this is very nice on linux where the setup script can assure that the
 cython code gets linked with the atlas dynamic library, which is the same
 library that numpy and scipy link to on this platform.

 However, I now have trouble in providing easy ways to use my project in
 windows. All the free windows distros for scientific python that I have
 looked at (python(x,y) and winpython) seem to repackage the windows version of
 numpy/scipy as it is built in the numpy/scipy development sites. These appear
 to statically link atlas inside some pyd files.  So I get no atlas to link
 against, and I have to ship an additional pre-built atlas with my project.

 All this seems somehow inconvenient.

 In the end, when my code runs, due to static linking I get 3 replicas of 2
 slightly different atlas libs in memory: one coming with _dotblas.pyd in numpy,
 another one with cblas.pyd or fblas.pyd in scipy, and the last one the one
 shipped with my code.

 Would it be possible to have a win distro of scipy which provides some
 pre built atlas dlls, and to have numpy and scipy dynamically link to them?
 This would save memory and also provide a decent blas to link to for things
 done in cython. But I believe there must be some problem since the scipy site
 says

 IMPORTANT: NumPy and SciPy in Windows can currently only make use of CBLAS and
 LAPACK as static libraries - DLLs are not supported.
 Can someone please explain why or link to an explanation?

 Unfortunately, not having a good, prebuilt and cheap blas implementation in
 windows is really striking me as a severe limitation, since you lose the
 ability to prototype in python/scipy and then move the major bottlenecks to
 C or Cython to achieve speed.

 Many thanks in advance!







Re: [Numpy-discussion] (2012) Accessing LAPACK and BLAS from the numpy C API

2012-03-07 Thread V. Armando Solé

On 06/03/2012 20:57, Sturla Molden wrote:

On 05.03.2012 14:26, V. Armando Solé wrote:


In 2009 there was a thread in this mailing list concerning the access to
BLAS from C extension modules.

If I have properly understood the thread:

http://mail.scipy.org/pipermail/numpy-discussion/2009-November/046567.html

the answer by then was that those functions were not exposed (only f2py
functions).

I just wanted to know if the situation has changed since 2009 because it
is not uncommon that to optimize some operations one has to sooner or
later access BLAS functions that are already wrapped in numpy (either
from ATLAS, from the Intel MKL, ...)

Why do you want to do this? It does not make your life easier to use
NumPy or SciPy's Python wrappers from C. Just use BLAS directly from C
instead.

Wow! It certainly makes my life much, much easier. I can compile and 
distribute my python extension *even without having ATLAS, BLAS or MKL 
installed*.
Please note I am not using the python wrappers from C. That would make 
no sense. I am using the underlying libraries supplied with python from C.


I had already used the information Robert Kern provided on the 2009 
thread and obtained the PyCObject as:


from scipy.linalg.blas import fblas
dgemm = fblas.dgemm._cpointer
sgemm = fblas.sgemm._cpointer

but I did not find a way to obtain those pointers from numpy. That was 
the goal of my post. My extension needs SciPy installed just to fetch 
the pointer. It would be very nice to have a way to get similar 
information from numpy.


I have made a test on a Debian machine with BLAS installed but no 
ATLAS: the extension is slow but working.
Then the system maintainer installed ATLAS: the extension flies. 
So one can distribute a python extension that works on its own but that 
can take advantage of any optimized library the end user might have installed.


Your point of view is valid if one is not going to distribute the 
extension module, but I *have to* distribute the module for Linux and for 
Windows. Having a proper fortran compiler for 64-bit Windows compatible 
with python is already an issue. If I have to distribute my own ATLAS or 
MKL then it gets even worse. All those issues are solved just by using 
the pointer to the function.


Concerning licenses, if the end user has the right to use MKL, then he 
has the right to use it via my extension. It is not me who is using MKL.


Armando
PS. The only issue I see with the whole approach is safety because the 
extension might be used to call some nasty function.





[Numpy-discussion] (2012) Accessing LAPACK and BLAS from the numpy C API

2012-03-05 Thread V. Armando Solé
Hello,

In 2009 there was a thread in this mailing list concerning the access to 
BLAS from C extension modules.

If I have properly understood the thread:

http://mail.scipy.org/pipermail/numpy-discussion/2009-November/046567.html

the answer by then was that those functions were not exposed (only f2py 
functions).

I just wanted to know if the situation has changed since 2009 because it 
is not uncommon that to optimize some operations one has to sooner or 
later access BLAS functions that are already wrapped in numpy (either 
from ATLAS, from the Intel MKL, ...)

Thanks for your time,

Armando





Re: [Numpy-discussion] Where is arrayobject.h?

2012-02-21 Thread V. Armando Solé
On 21/02/2012 19:26, Neal Becker wrote:
 What is the correct way to find the installed location of arrayobject.h?

 On fedora, I had been using:
 (via scons):

 import distutils.sysconfig
 PYTHONINC = distutils.sysconfig.get_python_inc()
 PYTHONLIB = distutils.sysconfig.get_python_lib(1)

 NUMPYINC = PYTHONLIB + '/numpy/core/include'

 But on ubuntu, this fails.  It seems numpy was installed into
 /usr/local/lib/..., while PYTHONLIB expands to 
 /usr/lib/python2.7/dist-packages.

 Is there a universal method?



I use:

import numpy
numpy.get_include()

If that is universal I cannot tell.
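It is in fact the documented, install-location-independent way. A small sketch (the Extension name in the comment is made up):

```python
import os
import numpy

inc = numpy.get_include()
# arrayobject.h ships inside the directory numpy.get_include() reports,
# so a build script can pass that directory straight to the compiler:
header = os.path.join(inc, "numpy", "arrayobject.h")
assert os.path.isfile(header)

# In a setup.py this would typically appear as (names here are hypothetical):
#   Extension("mymodule", ["mymodule.c"], include_dirs=[numpy.get_include()])
```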

Armando




Re: [Numpy-discussion] abs for max negative integers - desired behavior?

2011-10-12 Thread V. Armando Solé
From a pure user perspective, I would not expect the abs function to 
return a negative number. Returning +127 plus a warning the first time 
that happens seems to me a good compromise.

Armando

On 12/10/2011 09:46, David Cournapeau wrote:
 On Tue, Oct 11, 2011 at 8:16 PM, Charles R Harris
 charlesr.har...@gmail.com  wrote:

 On Tue, Oct 11, 2011 at 12:23 PM, Matthew Brett matthew.br...@gmail.com wrote:
 Hi,

 I recently ran into this:

 In [68]: arr = np.array(-128, np.int8)

 In [69]: arr
 Out[69]: array(-128, dtype=int8)

 In [70]: np.abs(arr)
 Out[70]: -128

 This has come up for discussion before, but no consensus was ever reached.
 One solution is for abs to return an unsigned type, but then combining that
 with signed type of the same number of bits will cause both to be cast to
 higher precision. IIRC, matlab was said to return +127 as abs(-128), which,
 if true, is quite curious.
 In C, abs(INT_MIN) is undefined, so both 127 and -128 work :)

 David





Re: [Numpy-discussion] abs for max negative integers - desired behavior?

2011-10-12 Thread V. Armando Solé
On 12/10/2011 10:46, David Cournapeau wrote:
 On Wed, Oct 12, 2011 at 9:18 AM, V. Armando Solé wrote:
   From a pure user perspective, I would not expect the abs function to
 return a negative number. Returning +127 plus a warning the first time
 that happens seems to me a good compromise.
 I guess the question is what's the common context to use small
 integers in the first place. If it is to save memory, then upcasting
 may not be the best solution. I may be wrong, but if you decide to use
 those types in the first place, you need to know about overflows. Abs
 is just one of them (dividing by -1 is another, although this one
 actually raises an exception).

 Detecting it may be costly, but this would need benchmarking.

 That being said, without context, I don't find 127 a better solution than 
 -128.

Well that choice is just based on getting the closest positive number to 
the true value (128). The context can be anything, for instance you 
could be using a look up table based on the result of an integer 
operation ...

In terms of cost, it would imply evaluating the cost of something like:

a = abs(x);
if (a < 0) { a -= MIN_INT; }
return a;

Basically it is the cost of evaluating an if condition, since the 
content of the block (with or without warning) will not be executed very 
often.
I find that even raising an exception is better than returning a 
negative number as the result of the abs function.

Anyways, I have just tested numpy.array([129], dtype=numpy.int8) and I 
have got the array as [-127] when I was expecting a sort of unsafe cast 
error/warning. I guess I will just stop here. In any case, I am very 
grateful to the mailing list and the original poster for exposing this 
behavior so that I can keep it in mind.
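In current numpy releases the wraparound is still observable (an assumption that this has not changed; note, though, that constructing np.array([129], dtype=numpy.int8) directly now raises an OverflowError instead of wrapping). Widening the dtype first is the usual way around it:

```python
import numpy as np

a = np.array(-128, dtype=np.int8)
# Negating the most negative int8 overflows back to itself:
assert np.abs(a) == -128

# Widening the dtype before taking abs avoids the overflow:
assert np.abs(a.astype(np.int16)) == 128

# Out-of-range casts wrap the same way (129 becomes -127):
assert np.array([129]).astype(np.int8)[0] == -127
```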

Best regards,

Armando






Re: [Numpy-discussion] f2py : NotImplementedError: Only MS compiler supported with gfortran on win64

2011-09-08 Thread V. Armando Solé
Have you tried to install Visual Studio 2008 Express edition (plus the 
windows SDK to be able to compile 64 bit code)?

Armando

On 08/09/2011 13:56, Jim Vickroy wrote:
 Hello All, I'm attempting to create a python wrapper, for a Fortran
 subroutine, using f2py.

 My system details are:

 >>> sys.version
 '2.6.4 (r264:75708, Oct 26 2009, 08:23:19) [MSC v.1500 32 bit (Intel)]'
 >>> sys.getwindowsversion()
 (5, 1, 2600, 2, 'Service Pack 3')
 >>> scipy.__version__
 '0.7.1'
 >>> numpy.__version__
 '1.4.0'
 C:\> gfortran -dumpversion
 4.7.0


 C:\Python26\Lib\site-packages\numpy\f2py> f2py.py -c --help-fcompiler
 Traceback (most recent call last):
 File "C:\Python26\Scripts\f2py.py", line 24, in <module>
   main()
 File "C:\Python26\lib\site-packages\numpy\f2py\f2py2e.py", line 557, in main
   run_compile()
 File "C:\Python26\lib\site-packages\numpy\f2py\f2py2e.py", line 543, in run_compile
   setup(ext_modules = [ext])
 File "C:\Python26\lib\site-packages\numpy\distutils\core.py", line 186, in setup
   return old_setup(**new_attr)
 File "C:\Python26\lib\distutils\core.py", line 138, in setup
   ok = dist.parse_command_line()
 File "C:\Python26\lib\distutils\dist.py", line 460, in parse_command_line
   args = self._parse_command_opts(parser, args)
 File "C:\Python26\lib\distutils\dist.py", line 574, in _parse_command_opts
   func()
 File "C:\Python26\lib\site-packages\numpy\distutils\command\config_compiler.py", line 13, in show_fortran_compilers
   show_fcompilers(dist)
 File "C:\Python26\lib\site-packages\numpy\distutils\fcompiler\__init__.py", line 855, in show_fcompilers
   c.customize(dist)
 File "C:\Python26\lib\site-packages\numpy\distutils\fcompiler\__init__.py", line 525, in customize
   self.set_libraries(self.get_libraries())
 File "C:\Python26\lib\site-packages\numpy\distutils\fcompiler\gnu.py", line 306, in get_libraries
   raise NotImplementedError("Only MS compiler supported with gfortran on win64")
 NotImplementedError: Only MS compiler supported with gfortran on win64


 Could someone help me to resolve this?

 Thanks, -- jv





Re: [Numpy-discussion] f2py : NotImplementedError: Only MS compiler supported with gfortran on win64

2011-09-08 Thread V. Armando Solé
On 08/09/2011 16:16, Jim Vickroy wrote:
 On 9/8/2011 6:09 AM, V. Armando Solé wrote:
 Have you tried to install Visual Studio 2008 Express edition (plus the
 windows SDK to be able to compile 64 bit code)?

 Armando
 Armando, Visual Studio 2008 Professional is installed on the computer
 as well as Intel Visual Fortran Composer XE 2011.

 f2py was not finding the Intel compiler (f2py -c --help-fcompiler) so I
 tried gfortran.

 The Win64 reference, in the Exception, is puzzling to me since this is
 a 32-bit computer.


Oh! I totally misunderstood the situation. I thought the problem was the 
missing compiler.

All I do with python and the intel fortran compiler is compile 
numpy. Just in case it helps you, I set my environment from the console 
by running a bat file with the following content (I am on 64-bit but you 
could easily tailor it to your needs):

"C:\Program Files\Microsoft SDKs\Windows\v7.0\Setup\WindowsSdkVer.exe" -version:v7.0
call "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars64.bat"
call "C:\Program Files (x86)\Intel\ComposerXE-2011\bin\ipsxe-comp-vars.bat" intel64 vs2008shell
rem call "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin\vcvars64"
rem call "C:\Program Files\Microsoft SDKs\Windows\v7.0\Bin\setenv.cmd" /x64 /Release
set PATH=C:\Python27;C:\Python27\Scripts;%PATH%
set PATH=C:\Program Files (x86)\Intel\ComposerXE-2011\redist\intel64\mkl;C:\Program Files (x86)\Intel\ComposerXE-2011\mkl\lib\intel64;%PATH%

Perhaps that helps you to set up a working environment. All I can tell 
you is that with that environment, if I run "python f2py.py -c 
--help-fcompiler" it finds the intel compiler.

Good luck,

Armando




Re: [Numpy-discussion] Ternary plots anywhere?

2010-07-06 Thread V. Armando Solé
Hi Ariel,

Ariel Rokem wrote:
 Hi Armando,

 Here's something in that direction:

 http://nature.berkeley.edu/~chlewis/Sourcecode.html 

 Hope that helps - Ariel

It really helps. It looks more complete than the only thing I had found 
(http://focacciaman.blogspot.com/2008/05/ternary-plotting-in-python-take-2.html)

Thanks a lot,

Armando




[Numpy-discussion] Ternary plots anywhere?

2010-07-02 Thread V. Armando Solé
Dear all,

Perhaps this is a bit off topic for the mailing list, but this is 
probably the only mailing list that is common to users of all python 
plotting packages.

I am trying to find a python implementation of ternary/triangular plots:

http://en.wikipedia.org/wiki/Ternary_plot

but I have been unsuccessful. Is there any on-going project around?
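Lacking a ready-made package, the heart of a ternary plot is just the barycentric-to-Cartesian mapping, which any plotting library can then draw. A minimal sketch (the triangle placement chosen here is one common convention, not taken from any particular package):

```python
import numpy as np

def ternary_to_xy(abc):
    """Map rows of (a, b, c) compositions into an equilateral triangle
    with corners A=(0, 0), B=(1, 0) and C=(0.5, sqrt(3)/2)."""
    abc = np.asarray(abc, dtype=float)
    a, b, c = (abc / abc.sum(axis=-1, keepdims=True)).T  # normalize rows
    x = b + 0.5 * c
    y = (np.sqrt(3) / 2.0) * c
    return np.column_stack([x, y])

# Pure compositions land on the triangle corners:
corners = ternary_to_xy([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```

matplotlib (or any 2-D plotter) can then scatter these x, y pairs over a drawn triangle.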

Thanks for your time.

Best regards,

Armando



Re: [Numpy-discussion] numpy.load raising IOError but EOFError expected

2010-07-01 Thread V. Armando Solé
Ruben Salvador wrote:
 Great! Thanks for all your answers!

 I actually have the files created as .npy (appending a new array each 
 time). I know it's weird, and it's not its intended use. But, for 
 whatsoever reasons, I came to use that. No turning back now. 

 Fortunately, I am able to read the files correctly, so weird as it is, 
 at least it works. Repeating the tests would be very time 
 consuming. I'll just try the different options mentioned for the 
 following tests. 

 Anyway, I think this is a quite common situation: tests running for a 
 long time, producing results at very different times (not 
 necessarily huge amounts of data, it could be just a single 
 float or array), and repeated a lot of times. That makes it 
 absolutely necessary to have numpyish functions/filetypes to APPEND 
 the freshly produced data each time they become available. Having to 
 load a .npz file, adding the new data and saving again wastes 
 unnecessary resources. Having a single file for each run of the test, 
 though possible, for me complicates the post-processing, 
 while increasing the time to copy these files (many small files tend 
 to take longer to copy than one single bigger file). Why not just a 
 modified .npy filetype/function with a header indicating it's hosting 
 more than one array?


Well, at our lab we are collecting images and saving them into HDF5 
files. Since the files are self-describing, it is quite convenient. You 
can decide whether you want the images as individual arrays or stacked into a 
bigger one because you know it when you open the file. You can keep 
adding items at any time because HDF5 does not force you to specify the 
final size of the array, and you can access it like any numpy array 
without needing to load the whole array into memory or being limited 
by memory on 32-bit machines. I am currently working with a 100 GB array 
on a 32-bit machine without problems.

Really, I would give HDF5 a try. In our case we are using h5py, but the 
latest release candidate of PyTables seems to have the same numpy-like 
functionality.
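Incidentally, the append-as-you-go pattern Ruben describes does work with plain .npy records, because np.save and np.load operate positionally on file objects. A small sketch (using an in-memory buffer to stand in for a file opened in binary append mode; not an argument against HDF5):

```python
import io
import numpy as np

buf = io.BytesIO()  # stands in for a real file opened in binary append mode
for i in range(3):
    np.save(buf, np.arange(i + 1))  # each call appends one self-contained record

buf.seek(0)
arrays = []
end = len(buf.getvalue())
while buf.tell() < end:
    arrays.append(np.load(buf))  # each call reads exactly one record

assert [a.tolist() for a in arrays] == [[0], [0, 1], [0, 1, 2]]
```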

Armando



Re: [Numpy-discussion] Simple problem. Is it possible without a loop?

2010-06-10 Thread V. Armando Solé
Hi Bruce,

In the context of the actual problem, I have a long series of 
non-equidistant and irregularly spaced float numbers, and I have to take 
values between given limits with the constraint of keeping a minimal 
separation. Option 2 just misses the first value of the input array if 
it is within the limits, but for my purposes (performing a fit with a 
given function) that is acceptable. I said it seems to be quite close to 
what I need because it misses the first point, which gives equivalent 
but not exactly the same solutions.

By the way, thanks for the % hint. That should make the .astype(int) 
disappear and make the expression look nicer.

Armando

Bruce Southey wrote:
 On 06/09/2010 10:24 AM, Vicente Sole wrote:
 Well, a loop or list comprehension seems like a good choice to me. It is
 much more obvious at the expense of two LOCs. Did you profile the two
 possibilities and are they actually performance-critical?

 cheers

   
 The second is between 8 and 10 times faster on my machine.

 import numpy
 import time
 x0 = numpy.arange(10000.)
 niter = 2000   # I expect between 1 and 10


 def option1(x, delta=0.2):
     y = [x[0]]
     for value in x:
         if (value - y[-1]) > delta:
             y.append(value)
     return numpy.array(y)

 def option2(x, delta=0.2):
     y = numpy.cumsum((x[1:]-x[:-1])/delta).astype(numpy.int)
     i1 = numpy.nonzero(y[1:] > y[:-1])
     return numpy.take(x, i1)


 t0 = time.time()
 for i in range(niter):
     t = option1(x0)
 print "Elapsed = ", time.time() - t0
 t0 = time.time()
 for i in range(niter):
     t = option2(x0)
 print "Elapsed = ", time.time() - t0

   
 For integer arguments for delta, I don't see any difference between 
 using option1 and using the '%' operator.
 >>> (x0[(x0*10)%2==0]-option1(x0)).sum()
 0.0

 Also option2 gives a different result than option1, so these are not 
 equivalent functions. You can see that from the shapes:
 >>> option2(x0).shape
 (1, 9998)
 >>> option1(x0).shape
 (10000,)
 >>> ((option1(x0)[:9998])-option2(x0)).sum()
 0.0

 So, allowing for the shape difference, option2 is the same as most of 
 the output from option1, but it is still smaller than option1.

 Probably the main reason for the speed difference is that option2 is 
 virtually pure numpy (and hence done in C) and option1 is using a lot 
 of array lookups that are always slow. So keep it in numpy as much as 
 possible.


 Bruce
 

   




[Numpy-discussion] Simple problem. Is it possible without a loop?

2010-06-09 Thread V. Armando Solé
Hello,

I am trying to solve a simple problem that becomes complex if I try to 
avoid looping.

Let's say I have a 1D array, x, where x[i] <= x[i+1]

Given a certain value delta, I would like to get a subset of x, named y, 
where (y[i+1] - y[i]) >= delta

In a non-optimized and trivial way, the operation I would like to do is:

y = [x[0]]
for value in x:
    if (value - y[-1]) > delta:
        y.append(value)
y = numpy.array(y)

Any hint?

Best regards,

Armando



Re: [Numpy-discussion] Simple problem. Is it possible without a loop?

2010-06-09 Thread V. Armando Solé
Well, this seems to be quite close to what I need

y = numpy.cumsum((x[1:]-x[:-1])/delta).astype(numpy.int)
i1 = numpy.nonzero(y[1:] > y[:-1])
y = numpy.take(x, i1)

Sorry for the time taken!

Best regards,

Armando
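With a delta that divides the spacing exactly, the trick gives, for example (an illustration only; as noted elsewhere in the thread, with non-commensurate spacing the first qualifying point can be dropped):

```python
import numpy as np

x = np.arange(10.0)   # sorted input
delta = 2.0

# Integer part of the cumulative gap, then keep points where it increments
y = np.cumsum((x[1:] - x[:-1]) / delta).astype(int)
i1 = np.nonzero(y[1:] > y[:-1])
subset = np.take(x, i1)

# Consecutive kept values are spaced by at least delta
assert subset.ravel().tolist() == [0.0, 2.0, 4.0, 6.0]
```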

V. Armando Solé wrote:
 Hello,

 I am trying to solve a simple problem that becomes complex if I try to 
 avoid looping.

 Let's say I have a 1D array, x, where x[i] <= x[i+1]

 Given a certain value delta, I would like to get a subset of x, named y, 
 where (y[i+1] - y[i]) >= delta

 In a non-optimized and trivial way, the operation I would like to do is:

 y = [x[0]]
 for value in x:
     if (value - y[-1]) > delta:
         y.append(value)
 y = numpy.array(y)

 Any hint?

 Best regards,

 Armando


   




Re: [Numpy-discussion] Simple problem. Is it possible without a loop?

2010-06-09 Thread V. Armando Solé
That was my first thought, but that only lets me skip one point 
in x, not more than one.

  >>> x = numpy.arange(10.)
  >>> delta = 3
  >>> print x[(x[1:] - x[:-1]) >= delta]
[]

instead of the requested [0, 4, 8]

Armando

Francesc Alted wrote:
 A Wednesday 09 June 2010 10:00:50 V. Armando Solé escrigué:
   
 Well, this seems to be quite close to what I need

 y = numpy.cumsum((x[1:]-x[:-1])/delta).astype(numpy.int)
  i1 = numpy.nonzero(y[1:] > y[:-1])
 y = numpy.take(x, i1)
 

 Perhaps this is a bit shorter:

  y = x[(x[1:] - x[:-1]) >= delta]

   




Re: [Numpy-discussion] Simple problem. Is it possible without a loop?

2010-06-09 Thread V. Armando Solé
Francesc Alted wrote:
 Yeah, damn you! ;-)
   

I think you still have room for improvement ;-)





Re: [Numpy-discussion] Simple problem. Is it possible without a loop?

2010-06-09 Thread V. Armando Solé
Hi Josef,

I do not need regular spacing of the original data. I only need the data 
to be sorted, and that I get with a previous numpy call. Then the 
algorithm using the cumsum does the trick without an explicit loop.

Armando




Re: [Numpy-discussion] MemoryError with dot(A, A.T) where A is 800MB on 32-bit Vista

2010-06-09 Thread V. Armando Solé
greg whittier wrote:
 When I run

 import numpy as np

 a = np.ones((40000, 5000), dtype=np.float32)
 c = np.dot(a, a.T)

 produces a MemoryError on the 32-bit Enthought Python Distribution
 on 32-bit Vista.  I understand this has to do with the 2GB limit with
 32-bit python and the fact numpy wants a contiguous chunk of memory
 for an array.  When I look at the memory use in the task manager
 though, it looks like it's trying to allocate enough for two
 40000x5000 arrays.  I guess it's explicitly forming a.T.  Is there a
 way to avoid this?  I tried

 c = scipy.lib.blas.fblas.dgemm(1.0, a, a, trans_b=1)

 but I get the same result.  It appears to be using a lot of extra
 memory.  Isn't this just a wrapper to the blas library that passes a
 pointer to the memory location of a?  Why does it seem to be
 generating the transpose?  Is there a way to do A*A.T without two
 copies of A?
   
In such cases I create a matrix of zeros with the final size and I fill 
it with a loop of dot products of smaller chunks of the original a matrix.

The MDP package also does something similar.
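The block-by-block scheme can be sketched as follows (a minimal illustration of the idea, not Armando's actual code; the chunk size is arbitrary):

```python
import numpy as np

def chunked_gram(a, chunk=256):
    """Compute a @ a.T by filling the (n, n) result one block at a time,
    so only small temporaries are ever allocated."""
    n = a.shape[0]
    out = np.zeros((n, n), dtype=a.dtype)
    for i in range(0, n, chunk):
        for j in range(0, n, chunk):
            out[i:i + chunk, j:j + chunk] = np.dot(a[i:i + chunk], a[j:j + chunk].T)
    return out

# Agrees with the direct product on a small example
a = np.random.rand(500, 40).astype(np.float32)
assert np.allclose(chunked_gram(a), np.dot(a, a.T), atol=1e-4)
```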

Armando






Re: [Numpy-discussion] min bug

2009-11-16 Thread V. Armando Solé
Sebastian Berg wrote:
 Known issue, I think someone posted about it a while ago too. The numpy
 min is array aware, and it expects an array. The second argument is the
 axis, which in the case of a single number doesn't matter.

 On Tue, 2009-11-17 at 07:07 +, Chris wrote:
   
 I'm pretty sure this shouldn't happen:

 In [1]: from numpy import min

 In [2]: min(5000, 4)
 Out[2]: 5000
 

I think I have to agree with the original poster.

It would be more correct to raise an exception because the axis is beyond 
the number of axes than to return a confusing result.
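For the record, a small illustration of the gotcha: the second positional argument of numpy's min is the axis, not a second value, which is why np.min(5000, 4) quietly returned 5000 here (newer numpy versions do reject the out-of-range axis). The array-aware alternatives behave as one would hope:

```python
import numpy as np

# Elementwise minimum of two values/arrays: np.minimum.
assert np.minimum(5000, 4) == 4

# Smallest of several plain Python numbers: the builtin min
# (shadowed in the original post by "from numpy import min").
assert min(5000, 4) == 4
```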

Armando



Re: [Numpy-discussion] Failed installation on Windows XP

2009-11-12 Thread V. Armando Solé
Hola,

I am not an expert, but I had a similar issue with a program of main 
that I could trace to not having installed VS9 runtime libraries in the 
target computer:

http://www.microsoft.com/downloads/details.aspx?FamilyID=9b2da534-3e03-4391-8a4d-074b9f2bc1bf&displaylang=en

Perhaps you can give a shot at it.

Armando

José María García Pérez wrote:
 Good morning,
 I used to have Python 2.6, Numpy-1.3.0, ... on Windows 2000 in the 
 computer at work (which has the typical clampdown policies). I have 
 been updated to Windows XP SP3 and now the same installation files 
 fail to install.

 My configuration is:

 * Windows XP Serivice Pack 3
 * Processor: T7300 (Intel Core 2 Duo)
 * Python 2.6.4 r264:75708

 The installation of: numpy-1.3.0-win32-superpack-python2.6.exe fails 
 saying:
Executing numpy installer failed

 When I press Show Details:
Output folder: C:\DOCUME~1\user1\LOCALS~1\Temp
Install dir for actual installers is C:\DOCUME~1\user1\LOCALS~1\Temp
Target CPU handles SSE2
Target CPU handles SSE3
native install (arch value: native)
Install SSE 3
Extract: numpy-1.3.0-sse3.exe... 100%
Execute: C:\DOCUME~1\user1\LOCALS~1\Temp\numpy-1.3.0-sse3.exe
Completed

 I suspect it might be a lack of a library. Opening 
 C:\DOCUME~1\user1\LOCALS~1\Temp\numpy-1.3.0-sse3.exe with 
 Dependency Walker shows:
 Error: The Side-by-Side configuration information for c:\documents 
 and settings\user1\local settings\temp\NUMPY-1.3.0-SSE3.EXE contains 
 and settings\user1\local settings\temp\NUMPY-1.3.0-SSE3.EXE contains 
 errors. No se pudo iniciar la aplicación porque su configuración es 
 incorrecta. Reinstalar la aplicación puede solucionar el problema (14001).
 Warning: At least one module has an unresolved import due to a missing 
 export function in a delay-load dependent module.

 The Spanish bit says: "The application couldn't be started because the 
 configuration is not right. Reinstalling the application can resolve 
 the issue (14001)."

 Do you have any clue about how can i resolve the issue?

 Cheers,
 José M.


 





Re: [Numpy-discussion] Resize Method for Numpy Array

2009-09-24 Thread V. Armando Solé
Alice Invernizzi wrote:
  
 Dear all,

 I have a Hamletic doubt concerning the numpy array data type.
 A generally learned rule concerning array usage in other high-level
 programming languages is that array data types are homogeneous datasets
 of fixed dimension.

 Therefore, it is not clear to me why in numpy the size of an array can be
 changed (either with the 'returning-value' resize() function or with
 the 'in-place' array method resize()).
 More in detail, while the existence of the first function
 ('returning-value') might make sense in array computing operations, the
 existence of the 'in-place' method really makes no sense to me.

 Would you please be so kind as to give some explanation for the existence
 of the resize operator for numpy arrays? If the array size can be changed,
 what are the real advantages of using numpy arrays instead of list objects?
 Thanks in advance

Just to keep into the same line.

import numpy
a=numpy.arange(100.)
a.shape = 10, 10
b = a * 1 # just to get a copy
b.shape = 5, 2, 5, 5
b = (b.sum(axis=3)).sum(axis=1)

In that way, on b I have a binned image of a.

I would expect a.resize(5, 5) to have given something similar 
(perhaps there is already something to do the binning). In fact 
a.resize(5, 5) is much closer to a crop than to a resize. I think the 
resize name is misleading and it should be called crop, but that is just 
my view.

Armando




Re: [Numpy-discussion] Resize Method for Numpy Array

2009-09-24 Thread V. Armando Solé
V. Armando Solé wrote:

Sorry, there was a bug in the sent code. It should be:

 import numpy
 a=numpy.arange(100.)
 a.shape = 10, 10
 b = a * 1 # just to get a copy
 b.shape = 5, 2, 5, 2
 b = (b.sum(axis=3)).sum(axis=1)

 In that way, on b I have a binned image of a.
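The corrected reshape-and-sum trick above can be wrapped into a small general binning helper. This is a sketch under the assumption that the bin factors divide the shape evenly; `bin2d` is a name made up here, not an existing numpy function:

```python
import numpy as np

def bin2d(a, by, bx):
    """Bin a 2-D array by integer factors (by, bx) using the
    reshape + sum trick; assumes by and bx divide the shape evenly."""
    ny, nx = a.shape
    return a.reshape(ny // by, by, nx // bx, bx).sum(axis=3).sum(axis=1)

a = np.arange(100.).reshape(10, 10)
b = bin2d(a, 2, 2)
assert b.shape == (5, 5)
assert b[0, 0] == 0 + 1 + 10 + 11   # sum of the top-left 2x2 block
```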




[Numpy-discussion] Dot product performance on python 2.6 (windows)

2009-09-11 Thread V. Armando Solé
Hello,

I have found performance problems under windows when using python 2.6
In my case, they seem to be related to the dot product.

The following simple script:

import numpy
import time
a=numpy.arange(1000000.)
a.shape=1000,1000
t0=time.time()
b=numpy.dot(a.T,a)
print "Elapsed time = ", time.time() - t0

reports an Elapsed time of 1.4 seconds under python 2.5 and 15 seconds 
under python 2.6

Same version of numpy, same machine, official numpy installers for 
windows (both with nosse flag)

Are some libraries missing in the windows superpack for python 2.6?

Perhaps the reported problem is already known, but I did not find any 
information about it.

Best regards,

Armando



Re: [Numpy-discussion] Dot product performance on python 2.6 (windows)

2009-09-11 Thread V. Armando Solé
David Cournapeau wrote:
 V. Armando Solé wrote:
   
 Hello,

 I have found performance problems under windows when using python 2.6
 In my case, they seem to be related to the dot product.

 The following simple script:

 import numpy
 import time
 a=numpy.arange(1000000.)
 a.shape=1000,1000
 t0=time.time()
 b=numpy.dot(a.T,a)
 print "Elapsed time = ", time.time() - t0

 reports an Elapsed time of 1.4 seconds under python 2.5 and 15 seconds 
 under python 2.6

 Same version of numpy, same machine, official numpy installers for 
 windows (both with nosse flag)
   
 

 Could you confirm this by pasting the output of numpy.show_config() in
 both versions 
The output of:

python -c "import numpy; import sys; print sys.executable; numpy.show_config()" > python26.txt

and

python -c "import numpy; import sys; print sys.executable; numpy.show_config()" > python25.txt

are identical except for the first line:

diff python25.txt python26.txt
1c1
< C:\Python25\python.exe
---
> C:\Python26\python.exe

I paste the python26.txt because the other one is the same except for 
the first line:

C:\Python26\python.exe
blas_info:
libraries = ['blas']
library_dirs = ['C:\\local\\lib\\yop\\nosse']
language = f77

lapack_info:
libraries = ['lapack']
library_dirs = ['C:\\local\\lib\\yop\\nosse']
language = f77

atlas_threads_info:
  NOT AVAILABLE

blas_opt_info:
libraries = ['blas']
library_dirs = ['C:\\local\\lib\\yop\\nosse']
language = f77
define_macros = [('NO_ATLAS_INFO', 1)]

atlas_blas_threads_info:
  NOT AVAILABLE

lapack_opt_info:
libraries = ['lapack', 'blas']
library_dirs = ['C:\\local\\lib\\yop\\nosse']
language = f77
define_macros = [('NO_ATLAS_INFO', 1)]

atlas_info:
  NOT AVAILABLE

lapack_mkl_info:
  NOT AVAILABLE

blas_mkl_info:
  NOT AVAILABLE

atlas_blas_info:
  NOT AVAILABLE

mkl_info:
  NOT AVAILABLE

Any hint?

Armando




Re: [Numpy-discussion] Dot product performance on python 2.6 (windows)

2009-09-11 Thread V. Armando Solé
Hello,

It seems to point towards a packaging problem.

In python 2.5, I can do:

>>> import numpy.core._dotblas as dotblas
>>> dotblas.__file__

and I get:

C:\\Python25\\lib\\site-packages\\numpy\\core\\_dotblas.pyd

In python 2.6:

>>> import numpy.core._dotblas as dotblas
...
ImportError: No module named _dotblas

and, of course, I cannot find the _dotblas.pyd file in the relevant 
directories.

Best regards,

Armando



Re: [Numpy-discussion] Dot product performance on python 2.6 (windows)

2009-09-11 Thread V. Armando Solé
Sturla Molden wrote:
 V. Armando Solé skrev:
   
 In python 2.6:

  import numpy.core._dotblas as dotblas
 ...
 ImportError: No module named _dotblas
   
 

>>> import numpy.core._dotblas as dotblas
>>> dotblas.__file__
 'C:\\Python26\\lib\\site-packages\\numpy\\core\\_dotblas.pyd'
   

That's because you have installed either the sse2 or the sse3 versions.

As I said in my post, the problem affects the nosse version.

_dotblas.pyd is missing in the nosse version and that is a problem 
unless one forgets about supporting Pentium III and Socket A processors 
when developing code.

Armando



Re: [Numpy-discussion] Dot product performance on python 2.6 (windows)

2009-09-11 Thread V. Armando Solé
David Cournapeau wrote:
 V. Armando Solé wrote:
   
 Hello,

 It seems to point towards a packaging problem.

 In python 2.5, I can do:

 import numpy.core._dotblas as dotblas
 dotblas.__file__

 and I get:

 C:\\Python25\\lib\\site-packages\\numpy\\core\\_dotblas.pyd
   
 

 That's where the error lies: if you install with nosse, you should not
 get _dotblas.pyd at all. 
Why? The nosse for python 2.5 has _dotblas.pyd

Is it impossible to get it compiled under python 2.6 without using sse2 
or sse3?

If so, it should be written somewhere in the release notes.
 I will look into the packaging problem - could you open an issue on
 numpy trac, so that I don't forget about it ?
   
OK. I'll try to do it.

Best regards,

Armando



[Numpy-discussion] How to concatenate two arrays without duplicating memory?

2009-09-02 Thread V. Armando Solé
Hello,

Let's say we have two arrays A and B of shapes (1, 2000) and (1, 
4000).

If I do C=numpy.concatenate((A, B), axis=1), I get a new array of 
dimension (1, 6000) with duplication of memory.

I am looking for a way to have a non contiguous array C in which the 
left (1, 2000) elements point to A and the right (1, 4000) 
elements point to B. 

Any hint will be appreciated.

Thanks,

Armando




Re: [Numpy-discussion] How to concatenate two arrays without duplicating memory?

2009-09-02 Thread V. Armando Solé
Gael Varoquaux wrote:
 You cannot in the numpy memory model. The numpy memory model defines an
 array as something that has regular strides to jump from an element to
 the next one.
   
I expected problems in the suggested case (concatenating columns) but I 
did not expect the problem to be so severe as to affect the case of row 
concatenation.

I guess I am still considering a 2D array as an array of pointers and 
that does not apply to numpy arrays.

Thanks for the info.

Armando



Re: [Numpy-discussion] How to concatenate two arrayswithout duplicating memory?

2009-09-02 Thread V. Armando Solé
Citi, Luca wrote:
 As Gaël pointed out you cannot create A, B and then C
 as the concatenation of A and B without duplicating
 the vectors.
   
 But you can still re-link A to the left elements
 and B to the right ones afterwards by using views into C.
   

Thanks for the hint. In my case the A array is already present and the 
contents of the B array can be read from disk.

At least I have two workarounds making use of your suggested solution of 
re-linking:

- create the C array, copy the contents of A into it and read the contents 
of B directly into C, with duplication of the memory of A for some time.

- save the array A to disk, create the array C, read the contents of A 
and B into it and re-link A and B: no duplication, but ugly.
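The re-linking idea can be sketched as follows: allocate C first, then make A and B views into its two halves, so filling the views fills C directly and no copy is ever needed afterwards (the shapes and fill values below are made up for the example):

```python
import numpy as np

# Allocate the full array first, then re-link A and B as views into it.
c = np.empty((1, 6000))
a = c[:, :2000]      # view, shares memory with c
b = c[:, 2000:]      # view, shares memory with c

a[...] = 1.0         # e.g. copy the existing data directly into c
b[...] = 2.0         # e.g. read from disk directly into c

assert a.base is c and b.base is c   # both are views, no duplication
assert c[0, 0] == 1.0 and c[0, 5999] == 2.0
```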

Thanks,

Armando




Re: [Numpy-discussion] ANN: HDF5 for Python (h5py) 1.2

2009-06-23 Thread V. Armando Solé
Dear Andrew,

I have succeeded on generating a win32 binary installer for python 2.6.

Running Dependency Walker on it shows there are no dependencies on 
libraries other than those of VS2008.

I have tried to send it to you directly but I am not sure if your mail 
address accepts attachments.

Please let me know if you are interested on getting the installer.

Armando




Re: [Numpy-discussion] Solving a memory leak in a numpy extension; PyArray_ContiguousFromObject

2009-04-20 Thread V. Armando Solé
Dan S wrote:
 But as you can see, my C code doesn't perform any malloc() or
 suchlike, so I'm stumped.

 I'd be grateful for any further thoughts
Could it be your memory leak is in:

return PyFloat_FromDouble(3.1415927); // temporary


You are creating a python float object from something. What if you  
return Py_None instead of your float?:

Py_INCREF(Py_None);
return Py_None;

I do not know if it will help you but I guess it falls in the "any 
further thought" category  :-)

Best regards,

Armando



Re: [Numpy-discussion] How to force a particular windows numpy installation?

2009-03-19 Thread V. Armando Solé
Francesc Alted wrote:
 Arch option for windows binary
 ~~

 Automatic arch detection can now be bypassed from the command line for
 the superpack installer:

 numpy-1.3.0-superpack-win32.exe /arch=nosse

 will install a numpy which works on any x86, even if the running 
 computer supports the SSE instruction set.
   

Thanks a lot /  Moltes gràcies,

Armando



