[Numpy-discussion] missing from contributor list?

2016-11-02 Thread Sturla Molden
Why am I missing from the contributor list here? https://github.com/numpy/numpy/blob/master/numpy/_build_utils/src/apple_sgemv_fix.c Sturla ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org

Re: [Numpy-discussion] Accelerate or OpenBLAS for numpy / scipy wheels?

2016-07-01 Thread Sturla Molden
Ralf Gommers wrote: > Thanks Sturla, interesting details as always. You didn't state your > preference by the way, do you have one? I use Accelerate because it is the easiest for me to use when building SciPy. But that is from a developer's perspective. As you know,

Re: [Numpy-discussion] Accelerate or OpenBLAS for numpy / scipy wheels?

2016-07-01 Thread Sturla Molden
On 29/06/16 21:55, Nathaniel Smith wrote: Speed is important, but it's far from the only consideration, especially since differences between the top tier libraries are usually rather small. It is not even the most important consideration. I would say that correctness matters most. Everything

Re: [Numpy-discussion] Accelerate or OpenBLAS for numpy / scipy wheels?

2016-07-01 Thread Sturla Molden
On 29/06/16 21:55, Nathaniel Smith wrote: Accelerate is closed, so when we hit bugs then there's often nothing we can do except file a bug with apple and hope that it gets fixed within a year or two. This isn't hypothetical -- we've hit cases where accelerate gave wrong answers. Numpy actually

Re: [Numpy-discussion] Accelerate or OpenBLAS for numpy / scipy wheels?

2016-06-29 Thread Sturla Molden
Ralf Gommers wrote: > For most routines performance seems to be comparable, and both are much > better than ATLAS. When there's a significant difference, I have the > impression that OpenBLAS is more often the slower one (example: >

Re: [Numpy-discussion] Integers to integer powers, let's make a decision

2016-06-05 Thread Sturla Molden
Charles R Harris wrote: >1. Integers to negative integer powers raise an error. >2. Integers to integer powers always results in floats. 2
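For context, option 1 is the behavior NumPy ultimately adopted (in 1.12): a short sketch of both sides of the rule.

```python
import numpy as np

# Option 1 is what NumPy adopted: an integer array raised to a negative
# integer power raises, since the exact result is not an integer.
try:
    np.arange(1, 4) ** -1
    raised = False
except ValueError:
    raised = True

# Casting either operand to float gives the fractional result instead.
frac = np.arange(1, 4) ** -1.0   # array([1.0, 0.5, 0.333...])
```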

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-31 Thread Sturla Molden
Joseph Martinot-Lagarde wrote: > The problem with FFTW is that its license is more restrictive (GPL), and > because of this may not be suitable everywhere numpy.fft is. A lot of us use NumPy linked with MKL or Accelerate, both of which have some really nifty FFTs. And

Re: [Numpy-discussion] Changing FFT cache to a bounded LRU cache

2016-05-31 Thread Sturla Molden
Lion Krischer wrote: > I added a slightly more comprehensive benchmark to the PR. Please have a > look. It tests the total time for 100 FFTs with and without cache. It is > over 30 percent faster with cache which is totally worth it in my > opinion as repeated FFTs of

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-24 Thread Sturla Molden
Antoine Pitrou wrote: > When writing C code to interact with buffer-providing objects, you > usually don't bother with memoryviews at all. You just use a Py_buffer > structure. I was talking about "typed memoryviews" which is a Cython abstraction for a Py_buffer struct. I

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-17 Thread Sturla Molden
Matěj Týč wrote: > Does it mean > that if you pass the numpy array to the child process using Queue, no > significant amount of data will flow through it? This is what my shared memory arrays do. > Or I shouldn't pass it > using Queue at all and just rely on inheritance?

Re: [Numpy-discussion] Proposal: numpy.random.random_seed

2016-05-17 Thread Sturla Molden
Stephan Hoyer wrote: > I have recently encountered several use cases for randomly generate random > number seeds: > > 1. When writing a library of stochastic functions that take a seed as an > input argument, and some of these functions call multiple other such > stochastic
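The modern NumPy answer to this use case, added well after this thread, is `SeedSequence` spawning: derive independent child seeds from one parent so each stochastic function gets its own stream without seed collisions. A brief sketch:

```python
import numpy as np

# One parent seed, three statistically independent child streams.
parent = np.random.SeedSequence(12345)
children = parent.spawn(3)
streams = [np.random.default_rng(child) for child in children]
samples = [rng.standard_normal(4) for rng in streams]
```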

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-17 Thread Sturla Molden
Matěj Týč wrote: > - Parallel processing of HUGE data, and This is mainly a Windows problem, as copy-on-write fork() will solve this on any other platform. I am more in favor of asking Microsoft to fix their broken OS. Also observe that the usefulness of shared memory
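Since this thread, the stdlib has grown an explicit alternative to fork-based sharing that also works on Windows: `multiprocessing.shared_memory` (Python 3.8+). A minimal single-process sketch:

```python
import numpy as np
from multiprocessing import shared_memory

# A named block of shared memory backing a NumPy array. Another process
# could attach to it by name instead of receiving a pickled copy.
shm = shared_memory.SharedMemory(create=True, size=10 * 8)
a = np.ndarray((10,), dtype=np.float64, buffer=shm.buf)
a[:] = np.arange(10.0)
total = float(a.sum())   # any attached process would see the same data
del a                    # drop the view before releasing the block
shm.close()
shm.unlink()
```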

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-13 Thread Sturla Molden
Feng Yu wrote: > Also, did you checkout http://zeromq.org/blog:zero-copy ? > ZeroMQ is a dependency of Jupyter, so it is quite available. ZeroMQ is great, but it lacks some crucial features. In particular it does not support IPC on Windows. Ideally one should e.g. use

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-13 Thread Sturla Molden
Niki Spahiev wrote: > Apparently next Win10 will have fork as part of bash integration. It is Interix/SUA rebranded "Subsystem for Linux". It remains to be seen how long it will stay this time. Also a Python built for this subsystem will not run on the Win32 subsystem,

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-12 Thread Sturla Molden
Feng Yu wrote: > In most (half?) situations the result can be directly write back via > preallocated shared array before works are spawned. Then there is no > need to pass data back with named segments. You can work around it in various ways, this being one of them.

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-12 Thread Sturla Molden
Niki Spahiev wrote: > Apparently next Win10 will have fork as part of bash integration. That would be great. The lack of fork on Windows is very annoying. Sturla

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-12 Thread Sturla Molden
Antoine Pitrou wrote: > Can you define "expensive"? Slow enough to cause complaints on the Cython mailing list. > You're assuming this is the cost of "buffer acquisition", while most > likely it's the cost of creating the memoryview object itself. Constructing a typed

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-12 Thread Sturla Molden
Allan Haldane wrote: > You probably already know this, but I just wanted to note that the > mpi4py module has worked around pickle too. They discuss how they > efficiently transfer numpy arrays in mpi messages here: >

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-11 Thread Sturla Molden
Feng Yu wrote: > 1. If we are talking about shared memory and copy-on-write > inheritance, then we are using 'fork'. Not available on Windows. On Unix it only allows one-way communication, from parent to child. > 2. Pickling of inherited shared memory array can be done

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-11 Thread Sturla Molden
Joe Kington wrote: > You're far better off just > communicating between processes as opposed to using shared memory. Yes.

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-11 Thread Sturla Molden
Benjamin Root wrote: > Oftentimes, if one needs to share numpy arrays for multiprocessing, I would > imagine that it is because the array is huge, right? That is a case for shared memory, but what I was talking about is more common than this. In order for processes to

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-11 Thread Sturla Molden
Allan Haldane wrote: > That's interesting. I've also used multiprocessing with numpy and didn't > realize that. Is this true in python3 too? I am not sure. As you have noticed, pickle is faster by two orders of magnitude on Python 3. But several microseconds is also a

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-11 Thread Sturla Molden
Elliot Hallmark wrote: > Strula, this sounds brilliant! To be clear, you're talking about > serializing the numpy array and reconstructing it in a way that's faster > than pickle? Yes. We know the binary format of NumPy arrays. We don't need to invoke the machinery of
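The idea can be sketched as follows (an illustration, not Sturla's actual code): since a NumPy array is fully described by its dtype, shape and raw bytes, a minimal wire format can skip most of pickle's machinery.

```python
import numpy as np

def pack(a):
    # dtype string, shape and raw bytes fully describe a C-contiguous array
    return a.dtype.str, a.shape, np.ascontiguousarray(a).tobytes()

def unpack(dtype, shape, buf):
    # rebuild the array from the three pieces, without invoking pickle
    return np.frombuffer(buf, dtype=dtype).reshape(shape)

a = np.arange(12.0).reshape(3, 4)
b = unpack(*pack(a))   # round-trips exactly
```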

Re: [Numpy-discussion] Numpy arrays shareable among related processes (PR #7533)

2016-05-11 Thread Sturla Molden
are not doing the users a favor by encouraging the use of shared memory arrays. They help with nothing. Sturla Molden Matěj Týč <matej@gmail.com> wrote: > Dear Numpy developers, > I propose a pull request https://github.com/numpy/numpy/pull/7533 that > features numpy arrays tha

Re: [Numpy-discussion] Make as_strided result writeonly

2016-01-25 Thread Sturla Molden
On 23/01/16 22:25, Sebastian Berg wrote: Do you agree with this, or would it be a major inconvenience? I think any user of as_strided should be considered a power user. This is an inherently dangerous function that can easily segfault the process. Anyone who uses as_strided should be
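A sketch of why as_strided is a power-user tool: nothing checks that the requested shape and strides stay inside the buffer, and a writable result aliases the original memory.

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

a = np.arange(10)
s = a.strides[0]
# Overlapping sliding windows over `a`: a view, no copy is made.
w = as_strided(a, shape=(8, 3), strides=(s, s))

# The danger: each element of `a` is aliased by up to three windows, and
# shape/strides that walk off the end of the buffer would go unchecked.
a[1] = 99
# w[0, 1] and w[1, 0] are now both 99 -- one write, visible in two rows.
```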

Re: [Numpy-discussion] Make as_strided result writeonly

2016-01-25 Thread Sturla Molden
On 25/01/16 18:06, Sebastian Berg wrote: That said, I guess I could agree with you in the regard that there are so many *other* awful ways to use as_strided, that maybe it really is just so bad, that improving one thing doesn't actually help anyway ;). That is roughly my position on this,

Re: [Numpy-discussion] Support of @= in numpy?

2015-12-27 Thread Sturla Molden
Charles R Harris wrote: > In any case, we support the `@` operator in 1.10, but not the `@=` > operator. The `@=` operator is tricky to have true inplace matrix > multiplication, as not only are elements on the left overwritten, but the > dimensions need to be

Re: [Numpy-discussion] performance solving system of equations in numpy and MATLAB

2015-12-17 Thread Sturla Molden
On 17/12/15 12:06, Francesc Alted wrote: Pretty good. I did not know that OpenBLAS was so close in performance to MKL. MKL, OpenBLAS and Accelerate are very close in performance, except for level-1 BLAS where Accelerate and MKL are better than OpenBLAS. MKL requires the number of threads

Re: [Numpy-discussion] performance solving system of equations in numpy and MATLAB

2015-12-17 Thread Sturla Molden
On 16/12/15 20:47, Derek Homeier wrote: Getting around 30 s wall time here on a not so recent 4-core iMac, so that would seem to fit (iirc Accelerate should actually largely be using the same machine code as MKL). Yes, the same kernels, but not the same threadpool. Accelerate uses the GCD,

Re: [Numpy-discussion] Fast vectorized arithmetic with ~32 significant digits under Numpy

2015-12-12 Thread Sturla Molden
"Thomas Baruchel" wrote: > While this is obviously the most relevant answer for many users because > it will allow them to use Numpy arrays exactly > as they would have used them with native types, the wrong thing is that > from some point of view "true" vectorization > will be

Re: [Numpy-discussion] Memory mapping and NPZ files

2015-12-11 Thread Sturla Molden
Mathieu Dubois wrote: > The point is precisely that, you can't do memory mapping with Npz files > (while it works with Npy files). The operating system can memory map any file. But as npz-files are compressed, you will need to uncompress the contents in your
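Concretely (a sketch using a temporary directory): a plain .npy file memory-maps fine, while a member of a compressed .npz must be decompressed into an ordinary in-memory array.

```python
import os
import tempfile

import numpy as np

a = np.arange(1000.0)
with tempfile.TemporaryDirectory() as d:
    npy = os.path.join(d, "a.npy")
    np.save(npy, a)
    m = np.load(npy, mmap_mode="r")      # a true memory map: np.memmap
    is_npy_mapped = isinstance(m, np.memmap)
    val = float(m[999])
    del m                                 # release the map before cleanup

    npz = os.path.join(d, "a.npz")
    np.savez_compressed(npz, a=a)
    with np.load(npz) as z:
        b = z["a"]                        # decompressed on access
    is_npz_mapped = isinstance(b, np.memmap)
```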

Re: [Numpy-discussion] Memory mapping and NPZ files

2015-12-10 Thread Sturla Molden
Mathieu Dubois wrote: > Does it make sense? No. Memory mapping should just memory map, not do all sorts of crap. Sturla

Re: [Numpy-discussion] When to stop supporting Python 2.6?

2015-12-07 Thread Sturla Molden
Charles R Harris wrote: > As a strawman proposal, how about dropping moving to 2.7 and 3.4 minimum > supported version next fall, say around numpy 1.12 or 1.13 depending on how > the releases go. > > I would like to hear from the scipy folks first. Personally I would

Re: [Numpy-discussion] Where is Jaime?

2015-12-07 Thread Sturla Molden
Charles R Harris wrote: > The cash economy is nothing to sniff at ;) It is big in NYC and other > places with high taxes and bureaucratic meddling. Cash was one of the great > inventions. Yeah, there is a Sicilian New Yorker called "Gambino" who has been advertising

Re: [Numpy-discussion] future of f2py and Fortran90+

2015-12-04 Thread Sturla Molden
On 03/12/15 22:38, Eric Firing wrote: Right, but for each function that requires writing two wrappers, one in Fortran and a second one in cython. Yes, you need two wrappers for each function, one in Cython and one in Fortran 2003. That is what fwrap is supposed to automate, but it has been

Re: [Numpy-discussion] f2py, numpy.distutils and multiple Fortran source files

2015-12-04 Thread Sturla Molden
On 03/12/15 22:07, David Verelst wrote: Can this workflow be incorporated into |setuptools|/|numpy.distutils|? Something along the lines as: Take a look at what SciPy does. https://github.com/scipy/scipy/blob/81c096001974f0b5efe29ec83b54f725cc681540/scipy/fftpack/setup.py Multiple Fortran

Re: [Numpy-discussion] Compilation problems npy_float64

2015-11-07 Thread Sturla Molden
Johan wrote: > Hello, I searched the forum, but couldn't find a post related to my > problem. I am installing scipy via pip in cygwin environment I think I introduced this error when moving a global variable from the Cython module to a C++ module. The name collision

Re: [Numpy-discussion] isfortran compatibility in numpy 1.10.

2015-11-01 Thread Sturla Molden
Charles R Harris wrote: >1. Return `a.flags.f_contiguous`. This differs for 1-D arrays, but is >most consistent with the name isfortran. If the idea is to determine if an array can safely be passed to Fortran, this is the correct one. >2. Return
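The difference between the two proposals only shows for 1-D arrays, which are contiguous in both senses; a quick check:

```python
import numpy as np

f = np.zeros((2, 3), order='F')
assert f.flags.f_contiguous and np.isfortran(f)

v = np.zeros(5)                  # 1-D: C- and Fortran-contiguous at once
assert v.flags.c_contiguous and v.flags.f_contiguous
assert not np.isfortran(v)       # isfortran() says False, the flag says True
# So for "can I safely pass this to Fortran?", flags.f_contiguous is the test.
```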

Re: [Numpy-discussion] Let's move forward with the current governance document.

2015-10-05 Thread Sturla Molden
Nathaniel Smith wrote: > Are you planning to go around vetoing things I don't consider myself qualified. > for ridiculous reasons and causing havoc? That would be impolite. > And if not, then who is it that you're worried about? I am not sure :) I just envisioned a Roman

Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic polynomial solver

2015-10-05 Thread Sturla Molden
On 02/10/15 13:05, Daπid wrote: Have you tried asking Python-dev for help with this? Hopefully they would have some weight there. It seems both GCC dev and Apple (for GCD and Accelerate) have taken a similar stance on this. There is a tiny set of functions the POSIX standard demands should

Re: [Numpy-discussion] Let's move forward with the current governance document.

2015-10-05 Thread Sturla Molden
Nathaniel Smith wrote: > Thanks Chuck! It looks like it's just wording tweaks / clarifications > at this point, so nothing we need to discuss in detail on the list. If > anyone wants to watch the sausage being made, then the link is above > :-), and we'll continue the discussion

Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic polynomial solver

2015-10-02 Thread Sturla Molden
Juha Jeronen wrote: > Mm. I've quite often run MPI locally (it's nice for multicore scientific > computing on Python), but I had no idea that OpenMP had cluster > implementations. Thanks for the tip. Intel has been selling one, I think there are others too. OpenMP has a

Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic polynomial solver

2015-10-02 Thread Sturla Molden
Sturla Molden <sturla.mol...@gmail.com> wrote: > OpenMP has a flush pragma for synchronizing shared variables. This means > that OpenMP is not restricted to shared memory hardware. A "pragma omp > flush" can just as well invoke some IPC mechanism, even network > commun

Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic polynomial solver

2015-10-02 Thread Sturla Molden
Sturla Molden <sturla.mol...@gmail.com> wrote: > Cython actually requires that there is a shared address space, and it > invokes something that strictly speaking has undefined behavior under the > OpenMP standard. So thus, a prange block in Cython is expected to work > cor

Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic polynomial solver

2015-09-30 Thread Sturla Molden
On 30/09/15 18:20, Chris Barker wrote: We'd need a run-time check. We need to amend the compiler classes in numpy.distutils with OpenMP relevant information (compiler flags and libraries). The OpenMP support libraries must be statically linked. Sturla

Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic polynomial solver

2015-09-30 Thread Sturla Molden
On 30/09/15 18:20, Chris Barker wrote: python threads with nogil? This is often the easiest, particularly if we construct a fork-safe threadpool. Sturla

Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic polynomial solver

2015-09-30 Thread Sturla Molden
On 01/10/15 02:20, Juha Jeronen wrote: Then again, the matter is further complicated by considering codes that run on a single machine, versus codes that run on a cluster. Threads being local to each node in a cluster, You can run MPI programs on a single machine and you get OpenMP

Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic polynomial solver

2015-09-30 Thread Sturla Molden
On 30/09/15 11:27, Daπid wrote: Is there a nice way to ship both versions? After all, most implementations of BLAS and friends do spawn OpenMP threads, so I don't think it would be outrageous to take advantage of it in more places; Some do, others don't. ACML uses OpenMP. GotoBLAS uses

Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic polynomial solver

2015-09-30 Thread Sturla Molden
On 30/09/15 18:20, Nathaniel Smith wrote: This is incorrect -- the only common implementation of BLAS that uses *OpenMP* threads is OpenBLAS, MKL and ACML also use OpenMP. Sturla

Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic polynomial solver

2015-09-30 Thread Sturla Molden
On 30/09/15 15:57, Nathan Goldbaum wrote: Basically, just try to compile a simple OpenMP test program in a subprocess. If that succeeds, then great, we can add -fopenmp as a compilation flag. If not, don't do that. Unless you use GCC on Linux, it will be more complex than that. I.e. do you

Re: [Numpy-discussion] Cython-based OpenMP-accelerated quartic polynomial solver

2015-09-30 Thread Sturla Molden
On 01/10/15 02:32, Juha Jeronen wrote: Sounds good. Out of curiosity, are there any standard fork-safe threadpools, or would this imply rolling our own? You have to roll your own. Basically use pthread_atfork to register a callback that shuts down the threadpool before a fork and another
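The same callback dance is available from Python via `os.register_at_fork` (POSIX, Python 3.7+); a sketch with a stdlib executor standing in for a hypothetical custom pool:

```python
import os
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=2)

def _shutdown():            # runs in the parent just before fork()
    pool.shutdown(wait=True)

def _restart():             # runs in parent and child just after fork()
    global pool
    pool = ThreadPoolExecutor(max_workers=2)

if hasattr(os, "register_at_fork"):     # POSIX only
    os.register_at_fork(before=_shutdown,
                        after_in_parent=_restart,
                        after_in_child=_restart)

result = pool.submit(lambda: 6 * 7).result()
```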

Re: [Numpy-discussion] Governance model request

2015-09-22 Thread Sturla Molden
On 20/09/15 20:20, Travis Oliphant wrote: 1 - define a BDFL for the council. I would nominate chuck Harris 2 - limit the council to 3 people. I would nominate chuck, nathaniel, and pauli. 3 - add me as a permanent member of the steering council. I have stayed out of this governance

Re: [Numpy-discussion] Governance model request

2015-09-22 Thread Sturla Molden
On 22/09/15 14:31, Perry Greenfield wrote: I’ve also stayed out of this until now. I’m surprised and disheartened at the amount of suspicion and distrust directed towards Travis. I have no idea where this distrust comes from. Nobody has invested so much of their time in NumPy. Without

Re: [Numpy-discussion] [Python-ideas] Should our default random number generator be secure?

2015-09-20 Thread Sturla Molden
On 20/09/15 21:48, Sturla Molden wrote: This is where a small subset of C++ would be handy. Making an uint128_t class with overloaded operators is a nobrainer. :-) Meh... The C++ version of PCG already has this.

Re: [Numpy-discussion] [Python-ideas] Should our default random number generator be secure?

2015-09-20 Thread Sturla Molden
On 19/09/15 18:06, Robert Kern wrote: That said, we'd probably end up doing a significant amount of rewriting so that we will have a C implementation of software-uint128 arithmetic. This is where a small subset of C++ would be handy. Making an uint128_t class with overloaded operators is a

Re: [Numpy-discussion] [Python-ideas] Should our default random number generator be secure?

2015-09-18 Thread Sturla Molden
On 14/09/15 10:34, Antoine Pitrou wrote: Currently we don't provide those APIs on the GPU, since MT is much too costly there. If Numpy wanted to switch to a different generator, and if Numba wanted to remain compatible with Numpy, one of the PCG functions would be an excellent choice (also for

Re: [Numpy-discussion] [Python-ideas] Should our default random number generator be secure?

2015-09-18 Thread Sturla Molden
On 14/09/15 10:26, Robert Kern wrote: I want fast, multiple independent streams on my current hardware first, and PCG gives that to me. DCMT is good for that as well. It should be possible to implement a pluggable design of NumPy's mtrand. Basically call a function pointer instead of

Re: [Numpy-discussion] [Python-ideas] Should our default random number generator be secure?

2015-09-18 Thread Sturla Molden
On 14/09/15 10:34, Antoine Pitrou wrote: If Numpy wanted to switch to a different generator, and if Numba wanted to remain compatible with Numpy, one of the PCG functions would be an excellent choice (also for CPU performance, incidentally). Is Apache license ok in NumPy? (Not sure, thus

Re: [Numpy-discussion] NPY_DOUBLE not declared

2015-08-17 Thread Sturla Molden
Why not do as it says instead? #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION /* Or: NPY_NO_DEPRECATED_API NPY_API_VERSION */ #include <Python.h> #include <numpy/arrayobject.h> Sturla On 17/08/15 13:11, Florian Lindner wrote: Hello, I try to convert a piece of C code to the new NumPy

Re: [Numpy-discussion] Change default order to Fortran order

2015-08-03 Thread Sturla Molden
Matthew Brett matthew.br...@gmail.com wrote: Sure, but to avoid confusion, maybe move the discussion of image indexing order to another thread? I think this thread is about memory layout, which is a different issue. It is actually a bit convoluted and not completely orthogonal. Memory

Re: [Numpy-discussion] Change default order to Fortran order

2015-08-03 Thread Sturla Molden
Juan Nunez-Iglesias jni.s...@gmail.com wrote: The short version is that you'll save yourself a lot of pain by starting to think of your images as (plane, row, column) instead of (x, y, z). There are several things to consider here. 1. The vertices in computer graphics (OpenGL) are (x,y,z).

Re: [Numpy-discussion] Proposal: Deprecate np.int, np.float, etc.?

2015-08-03 Thread Sturla Molden
On 03/08/15 18:25, Chris Barker wrote: 2) The vagaries of the standard C types: int, long, etc (spelled np.intc, which is an int32 on my machine, anyway) [NOTE: is there a C long dtype? I can't find it at the moment...] There is, it is called np.int. This just illustrates the problem...

Re: [Numpy-discussion] Proposal: Deprecate np.int, np.float, etc.?

2015-08-03 Thread Sturla Molden
On 03/08/15 20:51, Chris Barker wrote: well, IIUC, np.int is the python integer type, which is a C long in all the implementations of cPython that I know about -- but is that a guarantee? In the future as well? It is a Python int on Python 2. On Python 3 dtype=np.int means the

Re: [Numpy-discussion] Change default order to Fortran order

2015-08-02 Thread Sturla Molden
On 02/08/15 15:55, Kang Wang wrote: Can anyone provide any insight/help? There is no default order. There was before, but now all operators control the order of their return arrays from the order of their input array. The only thing that makes C order default is the keyword argument to
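A quick demonstration of this order propagation (behavior of recent NumPy):

```python
import numpy as np

x = np.zeros((2, 3), order='F')
y = x + 1                 # elementwise ops follow the layout of their input
assert y.flags.f_contiguous

z = np.zeros((2, 3))      # the order kwarg's default, 'C', is the only
assert z.flags.c_contiguous   # place a "default order" still lives
```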

Re: [Numpy-discussion] Proposal: Deprecate np.int, np.float, etc.?

2015-08-02 Thread Sturla Molden
On 31/07/15 09:38, Julian Taylor wrote: A long is only machine word wide on posix, in windows it's not. Actually it is the opposite. A pointer is 64 bit on AMD64, but the native integer and pointer offset is only 32 bit. But it does not matter because it is int that should be machine word

Re: [Numpy-discussion] Change default order to Fortran order

2015-08-02 Thread Sturla Molden
On 02/08/15 22:28, Bryan Van de Ven wrote: And to eliminate the order kwarg, use functools.partial to patch the zeros function (or any others, as needed): This will probably break code that depends on NumPy, like SciPy and scikit-image. But if NumPy is all that matters, sure go ahead and
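The partial approach as a project-local alias, rather than patching the np.zeros attribute itself (the patching is what risks breaking SciPy and scikit-image):

```python
import functools

import numpy as np

# Same call surface as np.zeros, Fortran order baked in -- and np.zeros
# itself is left untouched for downstream libraries.
fzeros = functools.partial(np.zeros, order='F')
a = fzeros((2, 3), dtype=np.int32)
```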

Re: [Numpy-discussion] Change default order to Fortran order

2015-08-02 Thread Sturla Molden
On 02/08/15 22:14, Kang Wang wrote: Thank you all for replying! I did a quick test, using python 2.6.6, and the original numpy package on my Linux computer without any change. == x = np.zeros((2,3), dtype=np.int32, order='F'); print x.strides; y = x + 1; print y.strides ==

Re: [Numpy-discussion] Proposal: Deprecate np.int, np.float, etc.?

2015-08-01 Thread Sturla Molden
Chris Barker - NOAA Federal chris.bar...@noaa.gov wrote: Which is part of the problem with C -- if two types happen to be the same, the compiler is perfectly happy. That int and long int be the same is not more problematic than int and signed int be the same. Sturla

Re: [Numpy-discussion] Proposal: Deprecate np.int, np.float, etc.?

2015-07-31 Thread Sturla Molden
Chris Barker - NOAA Federal chris.bar...@noaa.gov wrote: Turns out I was passing in numpy arrays that I had typed as np.int. It worked OK two years ago when I was testing only on 32 bit pythons, but today I got a bunch of failed tests on 64 bit OS-X -- a np.int is now a C long! It has always

Re: [Numpy-discussion] Proposal: Deprecate np.int, np.float, etc.?

2015-07-31 Thread Sturla Molden
Chris Barker chris.bar...@noaa.gov wrote: What about Fortan -- I've been out of that loop for ages -- does semi-modern Fortran use well defined integer types? Modern Fortran is completely sane. INTEGER without kind number (Fortran 77) is the fastest integer on the CPU. On AMD64 that is 32

Re: [Numpy-discussion] Proposal: Deprecate np.int, np.float, etc.?

2015-07-27 Thread Sturla Molden
Chris Barker chris.bar...@noaa.gov wrote: we get away with np.float, because every OS/compiler that gets any regular use has np.float == a c double, which is always 64 bit. Not if we are passing an array of np.float to a C routine that expects float*, e.g. in OpenGL, BLAS or LAPACK. That will

Re: [Numpy-discussion] Proposal: Deprecate np.int, np.float, etc.?

2015-07-27 Thread Sturla Molden
Chris Barker chris.bar...@noaa.gov wrote: 32 bits on all (most) 32 bit platforms 64 bits on 64 bit Linux and OS-X 32 bits on 64 bit Windows (also if compiled by cygwin??) sizeof(long) is 8 on 64-bit Cygwin. This is to make sure it is inconsistent with MSVC and MinGW-w64, and make sure there

Re: [Numpy-discussion] Shared memory check on in-place modification.

2015-07-27 Thread Sturla Molden
On 27/07/15 22:10, Anton Akhmerov wrote: Hi everyone, I have encountered an initially rather confusing problem in a piece of code that attempted to symmetrize a matrix: `h += h.T` The problem of course appears due to `h.T` being a view of `h`, and some elements being overwritten during the
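The classic aliasing pitfall, sketched below. Note that since NumPy 1.13 the ufunc machinery detects this overlap and inserts temporary copies automatically, so the explicit copy is belt-and-braces on modern versions:

```python
import numpy as np

h = np.arange(9.0).reshape(3, 3)
expected = h + h.T        # out-of-place: always correct
h += h.T.copy()           # explicit copy sidesteps the h.T-is-a-view-of-h overlap
# h is now symmetric and equal to `expected`
```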

Re: [Numpy-discussion] Proposal: Deprecate np.int, np.float, etc.?

2015-07-24 Thread Sturla Molden
Julian Taylor jtaylor.deb...@googlemail.com wrote: I don't see the issue. They are just aliases so how is np.float worse than just float? I have burned my fingers on it. Since np.double is a C double I assumed np.float is a C float. It is not. np.int has the same problem by being a C long.

Re: [Numpy-discussion] Rationale for integer promotion rules

2015-07-17 Thread Sturla Molden
Matthew Brett matthew.br...@gmail.com wrote: Furthermore, adding int64 and uint64 returns float64. This is a grievous kluge, on the grounds that no-one is really sure *what* to do in this case. It doesn't seem unreasonable to me: casting int64 to uint64 or uint64 to int64 could lead
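The kluge in question is directly observable:

```python
import numpy as np

# Neither int64 nor uint64 can represent the other's full range, so NumPy
# promotes the mix all the way to float64 (exact only up to 2**53).
assert np.result_type(np.int64, np.uint64) == np.float64
r = np.array([-1], dtype=np.int64) + np.array([1], dtype=np.uint64)
# r has dtype float64 and value 0.0
```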

Re: [Numpy-discussion] future of f2py and Fortran90+

2015-07-14 Thread Sturla Molden
Eric Firing efir...@hawaii.edu wrote: I'm curious: has anyone been looking into what it would take to enable f2py to handle modern Fortran in general? And into prospects for getting such an effort funded? No need. Use Cython and Fortran 2003 ISO C bindings. That is the only portable way to

Re: [Numpy-discussion] floats for indexing, reshape - too strict ?

2015-07-02 Thread Sturla Molden
Antoine Pitrou solip...@pitrou.net wrote: I don't think relaxing type checking here makes any good. I agree. NumPy should do the same as Python in this case. Sturla

Re: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements.

2015-06-21 Thread Sturla Molden
Charles R Harris charlesr.har...@gmail.com wrote: Ralf, I cannot compile Scipy 0.13.3 on my system, it seems to fail here _decomp_update.pyx:60:0: 'cython_blas.pxd' not found Do you have a clean SciPy 0.13.3 source tree? cython_blas.pxd was introduced in 0.16, and should not be in 0.13 at all.

Re: [Numpy-discussion] I can't tell if Numpy is configured properly with show_config()

2015-06-19 Thread Sturla Molden
Elliot Hallmark permafact...@gmail.com wrote: And I can't help but wonder if there is further configuration I need to make numpy faster, or if this is just a difference between our machines Try to build NumPy with Intel MKL or OpenBLAS instead. ATLAS is only efficient on the host computer

Re: [Numpy-discussion] Clarification sought on Scipy Numpy version requirements.

2015-06-19 Thread Sturla Molden
Charles R Harris charlesr.har...@gmail.com wrote: I'm looking to change some numpy deprecations into errors as well as remove some deprecated functions. The problem I see is that SciPy claims to support Numpy >= 1.5 and Numpy 1.5 is really, really, old. So the question is, does support mean

Re: [Numpy-discussion] How to limit cross correlation window width in Numpy?

2015-06-18 Thread Sturla Molden
Mansour Moufid mansourmou...@gmail.com wrote: The cross-correlation of two arrays of lengths m and n is of length m + n - 1, where m is usually much larger than n. He is thinking about the situation where m == n and m is much larger than maxlag. Truncating the input arrays would also throw

Re: [Numpy-discussion] Python 3 and isinstance(np.int64(42), int)

2015-06-18 Thread Sturla Molden
Nathaniel Smith n...@pobox.com wrote: In py3, 'int' is an arbitrary width integer bignum, like py2 'long', which is fundamentally different from int32 and int64 in both semantics and implementation. Only when stored in an ndarray. An array scalar object does not need to care about the exact

Re: [Numpy-discussion] How to limit cross correlation window width in Numpy?

2015-06-17 Thread Sturla Molden
On 17/06/15 04:38, Honi Sanders wrote: I have now implemented this functionality in numpy.correlate() and numpy.convolve(). https://github.com/bringingheavendown/numpy. The files that were edited are: numpy/core/src/multiarray/multiarraymodule.c numpy/core/numeric.py

Re: [Numpy-discussion] Aternative to PyArray_SetBaseObject in NumPy 1.6?

2015-06-16 Thread Sturla Molden
Eric Moore e...@redtetrahedron.org wrote: You have to do it by hand in numpy 1.6. For example see https://github.com/scipy/scipy/blob/master/scipy/signal/lfilter.c.src#L285-L292 Thank you :) Sturla

[Numpy-discussion] Aternative to PyArray_SetBaseObject in NumPy 1.6?

2015-06-14 Thread Sturla Molden
What would be the best alternative to PyArray_SetBaseObject in NumPy 1.6? Purpose: Keep alive an object owning data passed to PyArray_SimpleNewFromData. Sturla

Re: [Numpy-discussion] changing ValueError to KeyError for bad field access

2015-06-12 Thread Sturla Molden
On 12/06/15 19:46, Allan Haldane wrote: a = np.ones(3, dtype=[('a', 'f4'), ('b', 'f4')]) a['c'] KeyError: 'c' Any opinions? Sounds good to me. But it will probably break someone's code. Sturla ___ NumPy-Discussion mailing list
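For reference, the behavior under discussion looks like this; the except clause below catches both exception types, so the sketch works on NumPy versions before and after the proposed ValueError-to-KeyError change.

```python
import numpy as np

a = np.ones(3, dtype=[('a', 'f4'), ('b', 'f4')])
print(a['a'])  # valid field access: [1. 1. 1.]

# A bad field name raised ValueError in older NumPy; the proposal is to
# raise KeyError instead, matching dict-style lookup semantics.
try:
    a['c']
except (KeyError, ValueError) as exc:
    print(type(exc).__name__)
```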

Re: [Numpy-discussion] Verify your sourceforge windows installer downloads

2015-05-28 Thread Sturla Molden
Pauli Virtanen p...@iki.fi wrote: Is it possible to host them on github? I think there's an option to add release notes and (apparently) to upload binaries if you go to the Releases section --- there's one for each tag. And then Sourceforge will put up tainted installers for the benefit of

Re: [Numpy-discussion] Verify your sourceforge windows installer downloads

2015-05-28 Thread Sturla Molden
Julian Taylor jtaylor.deb...@googlemail.com wrote: It has been reported that sourceforge has taken over the gimp unofficial windows downloader page and temporarily bundled the installer with unauthorized adware: https://plus.google.com/+gimp/posts/cxhB1PScFpe WTF?

Re: [Numpy-discussion] Verify your sourceforge windows installer downloads

2015-05-28 Thread Sturla Molden
David Cournapeau courn...@gmail.com wrote: IMO, this really begs the question on whether we still want to use sourceforge at all. At this point I just don't trust the service at all anymore. Here is their lame excuse:

Re: [Numpy-discussion] Backwards-incompatible improvements to numpy.random.RandomState

2015-05-24 Thread Sturla Molden
On 24/05/15 17:13, Anne Archibald wrote: Do we want a deprecation-like approach, so that eventually people who want replicability will specify versions, and everyone else gets bug fixes and improvements? This would presumably take several major versions, but it might avoid people getting

Re: [Numpy-discussion] Backwards-incompatible improvements to numpy.random.RandomState

2015-05-24 Thread Sturla Molden
On 24/05/15 20:04, Nathaniel Smith wrote: I'm not sure what you're envisioning as needing a deprecation cycle? The neat thing about random is that we already have a way for users to say that they want replicability -- the use of an explicit seed -- No, this is not sufficient for random
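The explicit-seed mechanism Nathaniel refers to looks like the sketch below. Note Sturla's caveat: this guarantees an identical stream only within a single NumPy version, not across releases that change the underlying algorithms.

```python
import numpy as np

# Two RandomState instances with the same explicit seed produce
# bit-for-bit identical streams (on the same NumPy version).
rs1 = np.random.RandomState(12345)
rs2 = np.random.RandomState(12345)
a = rs1.standard_normal(5)
b = rs2.standard_normal(5)
print(np.array_equal(a, b))  # True

# An unseeded instance is initialized from OS entropy and is not
# reproducible between runs.
rs3 = np.random.RandomState()
```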

Re: [Numpy-discussion] Backwards-incompatible improvements to numpy.random.RandomState

2015-05-24 Thread Sturla Molden
On 24/05/15 10:22, Antony Lee wrote: Comments, and help for writing tests (in particular to make sure backwards compatibility is maintained) are welcome. I have one comment, and that is what makes random numbers so special? This applies to the rest of NumPy too, fixing a bug can sometimes

Re: [Numpy-discussion] binary wheels for numpy?

2015-05-18 Thread Sturla Molden
On 18/05/15 21:57, Chris Barker wrote: On Sun, May 17, 2015 at 9:23 PM, Matthew Brett matthew.br...@gmail.com mailto:matthew.br...@gmail.com wrote: I believe OpenBLAS does run-time selection too. very cool! then an excellent option if we can get it to work (make that you can get it to

Re: [Numpy-discussion] binary wheels for numpy?

2015-05-18 Thread Sturla Molden
On 18/05/15 06:09, Chris Barker wrote: IIUC, The Intel libs have the great advantage of run-time selection of hardware specific code -- yes? So they would both work and give high performance on most machines (all?). OpenBLAS can also be built for dynamic architecture with hardware

Re: [Numpy-discussion] binary wheels for numpy?

2015-05-17 Thread Sturla Molden
Matthew Brett matthew.br...@gmail.com wrote: Yes, unfortunately we can't put MKL binaries on pypi because of the MKL license - see I believe we can, because we asked Intel for permission. From what I heard the response was positive. But it doesn't mean we should. :-) Sturla

Re: [Numpy-discussion] binary wheels for numpy?

2015-05-17 Thread Sturla Molden
On 17/05/15 20:54, Ralf Gommers wrote: I suspect; OpenBLAS seems like the way to go (?). I think OpenBLAS is currently the most promising candidate to replace ATLAS. But we need to build OpenBLAS with MinGW gcc, due to ATT syntax in the assembly code. I am not sure if the old toolchain is

Re: [Numpy-discussion] Automatic number of bins for numpy histograms

2015-04-18 Thread Sturla Molden
Jaime Fernández del Río jaime.f...@gmail.com wrote: I think we have an explicit rule against C++, although I may be wrong. Currently there is Python, C and Cython in NumPy. SciPy also has C++ and Fortran code. Sturla ___ NumPy-Discussion mailing

Re: [Numpy-discussion] OS X wheels: speed versus multiprocessing

2015-04-06 Thread Sturla Molden
On 07/04/15 01:49, Nathaniel Smith wrote: Any opinions, objections? Accelerate does not break multiprocessing, quite the opposite. The bug is in multiprocessing and has been fixed in Python 3.4. My vote would nevertheless be for OpenBLAS if we can use it without producing test failures in
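The Python 3.4 fix mentioned here is the selectable start method: choosing "spawn" instead of the default fork avoids the hang that occurs when a fork-unsafe BLAS such as Accelerate has already been initialized in the parent process. A minimal sketch (the worker function and sizes are illustrative):

```python
import multiprocessing as mp
import numpy as np

def dot_sum(n):
    # Force a BLAS call in the worker process.
    a = np.ones((n, n))
    return float(np.dot(a, a).sum())

if __name__ == "__main__":
    # Python 3.4+: "spawn" starts fresh interpreters instead of forking,
    # sidestepping fork-safety problems in threaded BLAS libraries.
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as pool:
        print(pool.map(dot_sum, [4, 8]))  # [64.0, 512.0]
```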

Re: [Numpy-discussion] IDE's for numpy development?

2015-04-06 Thread Sturla Molden
On 06/04/15 20:33, Suzen, Mehmet wrote: Hi Chuck, Spyder is good if you are coming from the Matlab world. http://spyder-ide.blogspot.co.uk/ I don't think it supports C. But maybe you are after Eclipse. Spyder supports C. Sturla ___

Re: [Numpy-discussion] OS X wheels: speed versus multiprocessing

2015-04-06 Thread Sturla Molden
On 07/04/15 02:41, Nathaniel Smith wrote: Sure, but in some cases accelerate reduces speed by a factor of infinity by hanging, and OpenBLAS may or may not give wrong answers (but quickly!) since apparently they don't do regression tests, so we have to pick our poison. OpenBLAS is safer on
