Re: [Numpy-discussion] migration of all scipy.org mailing lists

2017-03-23 Thread Neal Becker
Ralf Gommers wrote:

> On Thu, Mar 23, 2017 at 12:18 AM, Neal Becker <ndbeck...@gmail.com> wrote:
> 
>> Has anyone taken care of notifying gmane about this?
>>
> 
> We will have to update this info in quite a few places after the move is
> done. Including Gmane, although that site hasn't been working for half a
> year, so it is pretty low on the priority list.
> 
> Ralf

I'm reading/writing to you via gmane, so I think it is working :)



Re: [Numpy-discussion] migration of all scipy.org mailing lists

2017-03-22 Thread Neal Becker
Has anyone taken care of notifying gmane about this?

Ralf Gommers wrote:

> Hi all,
> 
> The server for the scipy.org mailing list is in very bad shape, so we (led
> by Didrik Pinte) are planning to complete the migration of active mailing
> lists to the python.org infrastructure and to decommission the lists that
> seem dormant/obsolete.
> 
> The scipy-user mailing list was already moved to python.org a month or two
> ago, and that migration went smoothly.
> 
> These are the lists we plan to migrate:
> 
> astropy
> ipython-dev
> ipython-user
> numpy-discussion
> numpy-svn
> scipy-dev
> scipy-organizers
> scipy-svn
> 
> And these we plan to retire:
> 
> Announce
> APUG
> Ipython-tickets
> Mailman
> numpy-refactor
> numpy-refactor-git
> numpy-tickets
> Pyxg
> scipy-tickets
> NiPy-devel
> 
> There will be a short period (<24 hours) where messages to the list may be
> refused, with an informative message as to why. The mailing list address
> will change from listn...@scipy.org to listn...@python.org
> 
> This will happen asap, likely within a day or two. So two requests:
> 1.  If you see any issue with this plan, please reply and keep Didrik and
> myself on Cc (we are not subscribed to all lists).
> 2. If you see this message on a numpy/scipy list, but not on another list
> (could be due to a moderation queue) then please forward this message
> again to that other list.
> 
> Thanks,
> Ralf




Re: [Numpy-discussion] NumPy 1.12.0 release

2017-01-18 Thread Neal Becker
Matthew Brett wrote:

> On Tue, Jan 17, 2017 at 3:47 PM, Neal Becker <ndbeck...@gmail.com> wrote:
>> Matthew Brett wrote:
>>
>>> Hi,
>>>
>>> On Tue, Jan 17, 2017 at 5:56 AM, Neal Becker <ndbeck...@gmail.com>
>>> wrote:
>>>> Charles R Harris wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> I'm pleased to announce the NumPy 1.12.0 release. This release
>>>>> supports Python 2.7 and 3.4-3.6. Wheels for all supported Python
> >>>>> versions may be downloaded from PyPI
>>>>> <https://pypi.python.org/pypi?%3Aaction=pkg_edit=numpy>, the
>>>>> tarball and zip files may be downloaded from Github
>>>>> <https://github.com/numpy/numpy/releases/tag/v1.12.0>. The release
> >>>>> notes and file hashes may also be found at Github
>>>>> <https://github.com/numpy/numpy/releases/tag/v1.12.0> .
>>>>>
>>>>> NumPy 1.12.0rc 2 is the result of 418 pull requests submitted by 139
>>>>> contributors and comprises a large number of fixes and improvements.
>>>>> Among
> >>>>> the many improvements it is difficult to pick out just a few as
>>>>> standing above the others, but the following may be of particular
>>>>> interest or indicate areas likely to have future consequences.
>>>>>
>>>>> * Order of operations in ``np.einsum`` can now be optimized for large
>>>>> speed improvements.
>>>>> * New ``signature`` argument to ``np.vectorize`` for vectorizing with
>>>>> core dimensions.
>>>>> * The ``keepdims`` argument was added to many functions.
>>>>> * New context manager for testing warnings
>>>>> * Support for BLIS in numpy.distutils
>>>>> * Much improved support for PyPy (not yet finished)
>>>>>
>>>>> Enjoy,
>>>>>
>>>>> Chuck
>>>>
>>>> I've installed via pip3 on linux x86_64, which gives me a wheel.  My
>>>> question is, am I losing significant performance choosing this
>>>> pre-built
>>>> binary vs. compiling myself?  For example, my processor might have some
>>>> more features than the base version used to build wheels.
>>>
>>> I guess you are thinking about using this built wheel on some other
>>> machine?   You'd have to be lucky for that to work; the wheel depends
>>> on the symbols it found at build time, which may not exist in the same
>>> places on your other machine.
>>>
>>> If it does work, the speed will primarily depend on your BLAS library.
>>>
>>> The pypi wheels should be pretty fast; they are built with OpenBLAS,
>>> which is at or near top of range for speed, across a range of
>>> platforms.
>>>
>>> Cheers,
>>>
>>> Matthew
>>
>> I installed using pip3 install, and it installed a wheel package.  I did
>> not
>> build it - aren't wheels already compiled packages?  So isn't it built
>> for the common denominator architecture, not necessarily as fast as one I
>> built
>> myself on my own machine?  My question is, on x86_64, is this potential
>> difference large enough to bother with not using precompiled wheel
>> packages?
> 
> Ah - my guess is that you'd be hard pressed to make a numpy that is as
> fast as the precompiled wheel.   The OpenBLAS library included in
> numpy selects the routines for your CPU at run-time, so they will
> generally be fast on your CPU.   You might be able to get equivalent
> or even better performance with an ATLAS BLAS library recompiled on
> your exact machine, but that's quite a serious investment of time to
> get working, and you'd have to benchmark to find if you were really
> doing any better.
> 
> Cheers,
> 
> Matthew

OK, so at least for BLAS things should be pretty well optimized.
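A quick way to check what a given install actually links against, and to get
a rough timing, is something like the following (an illustrative sketch, not
a rigorous benchmark):

import numpy as np
import timeit

np.__config__.show()   # prints the BLAS/LAPACK numpy was built against

a = np.random.rand(2000, 2000)
t = timeit.timeit(lambda: np.dot(a, a), number=3)
print("3 matrix products (2000x2000): %.2f s" % t)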



Re: [Numpy-discussion] [SciPy-Dev] NumPy 1.12.0 release

2017-01-18 Thread Neal Becker
Nathaniel Smith wrote:

> On Tue, Jan 17, 2017 at 3:47 PM, Neal Becker <ndbeck...@gmail.com> wrote:
>> Matthew Brett wrote:
>>
>>> Hi,
>>>
>>> On Tue, Jan 17, 2017 at 5:56 AM, Neal Becker <ndbeck...@gmail.com>
>>> wrote:
>>>> Charles R Harris wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> I'm pleased to announce the NumPy 1.12.0 release. This release
>>>>> supports Python 2.7 and 3.4-3.6. Wheels for all supported Python
> >>>>> versions may be downloaded from PyPI
>>>>> <https://pypi.python.org/pypi?%3Aaction=pkg_edit=numpy>, the
>>>>> tarball and zip files may be downloaded from Github
>>>>> <https://github.com/numpy/numpy/releases/tag/v1.12.0>. The release
> >>>>> notes and file hashes may also be found at Github
>>>>> <https://github.com/numpy/numpy/releases/tag/v1.12.0> .
>>>>>
>>>>> NumPy 1.12.0rc 2 is the result of 418 pull requests submitted by 139
>>>>> contributors and comprises a large number of fixes and improvements.
>>>>> Among
> >>>>> the many improvements it is difficult to pick out just a few as
>>>>> standing above the others, but the following may be of particular
>>>>> interest or indicate areas likely to have future consequences.
>>>>>
>>>>> * Order of operations in ``np.einsum`` can now be optimized for large
>>>>> speed improvements.
>>>>> * New ``signature`` argument to ``np.vectorize`` for vectorizing with
>>>>> core dimensions.
>>>>> * The ``keepdims`` argument was added to many functions.
>>>>> * New context manager for testing warnings
>>>>> * Support for BLIS in numpy.distutils
>>>>> * Much improved support for PyPy (not yet finished)
>>>>>
>>>>> Enjoy,
>>>>>
>>>>> Chuck
>>>>
>>>> I've installed via pip3 on linux x86_64, which gives me a wheel.  My
>>>> question is, am I losing significant performance choosing this
>>>> pre-built
>>>> binary vs. compiling myself?  For example, my processor might have some
>>>> more features than the base version used to build wheels.
>>>
>>> I guess you are thinking about using this built wheel on some other
>>> machine?   You'd have to be lucky for that to work; the wheel depends
>>> on the symbols it found at build time, which may not exist in the same
>>> places on your other machine.
>>>
>>> If it does work, the speed will primarily depend on your BLAS library.
>>>
>>> The pypi wheels should be pretty fast; they are built with OpenBLAS,
>>> which is at or near top of range for speed, across a range of
>>> platforms.
>>>
>>> Cheers,
>>>
>>> Matthew
>>
>> I installed using pip3 install, and it installed a wheel package.  I did
>> not
>> build it - aren't wheels already compiled packages?  So isn't it built
>> for the common denominator architecture, not necessarily as fast as one I
>> built
>> myself on my own machine?  My question is, on x86_64, is this potential
>> difference large enough to bother with not using precompiled wheel
>> packages?
> 
> Ultimately, it's going to depend on all sorts of things, including
> most importantly your actual code. Like most speed questions, the only
> real way to know is to try it and measure the difference.
> 
> The wheels do ship with a fast BLAS (OpenBLAS configured to
> automatically adapt to your CPU at runtime), so the performance will
> at least be reasonable. Possible improvements would include using a
> different and somehow better BLAS (MKL might be faster in some cases),
> tweaking your compiler options to take advantage of whatever SIMD ISAs
> your particular CPU supports (numpy's build system doesn't do this
> automatically but in principle you could do it by hand -- were you
> bothering before? does it even make a difference in practice? I
> dunno), and using a new compiler (the linux wheels use a somewhat
> ancient version of gcc for Reasons; newer compilers are better at
> optimizing -- how much does it matter? again I dunno).
> 
> Basically: if you want to experiment and report back then I think we'd
> all be interested to hear; OTOH if you aren't feeling particularly
> curious/ambitious then I wouldn't worry about it :-).
> 
> -n
> 

Yes, I always add -march=native, which should pick up whatever SIMD is 
available.  So my question was primarily if I should bother.  Thanks for the 
detailed answer.



Re: [Numpy-discussion] NumPy 1.12.0 release

2017-01-17 Thread Neal Becker
Charles R Harris wrote:

> Hi All,
> 
> I'm pleased to announce the NumPy 1.12.0 release. This release supports
> Python 2.7 and 3.4-3.6. Wheels for all supported Python versions may be
> downloaded from PyPI, the tarball and zip files may be downloaded from
> Github. The release notes and file hashes may also be found at Github.
> 
> NumPy 1.12.0rc 2 is the result of 418 pull requests submitted by 139
> contributors and comprises a large number of fixes and improvements. Among
> the many improvements it is difficult to pick out just a few as standing
> above the others, but the following may be of particular interest or
> indicate areas likely to have future consequences.
> 
> * Order of operations in ``np.einsum`` can now be optimized for large
> speed improvements.
> * New ``signature`` argument to ``np.vectorize`` for vectorizing with core
> dimensions.
> * The ``keepdims`` argument was added to many functions.
> * New context manager for testing warnings
> * Support for BLIS in numpy.distutils
> * Much improved support for PyPy (not yet finished)
> 
> Enjoy,
> 
> Chuck

I've installed via pip3 on linux x86_64, which gives me a wheel.  My 
question is, am I losing significant performance choosing this pre-built 
binary vs. compiling myself?  For example, my processor might have some more 
features than the base version used to build wheels.



Re: [Numpy-discussion] array comprehension

2016-11-04 Thread Neal Becker
Francesc Alted wrote:

> 2016-11-04 14:36 GMT+01:00 Neal Becker <ndbeck...@gmail.com>:
> 
>> Francesc Alted wrote:
>>
>> > 2016-11-04 13:06 GMT+01:00 Neal Becker <ndbeck...@gmail.com>:
>> >
>> >> I find I often write:
>> >> np.array ([some list comprehension])
>> >>
>> >> mainly because list comprehensions are just so sweet.
>> >>
>> >> But I imagine this isn't particularly efficient.
>> >>
>> >
>> > Right.  Using a generator and np.fromiter() will avoid the creation of
>> the
>> > intermediate list.  Something like:
>> >
> >> > np.fromiter((i for i in range(x)), dtype=int)  # use xrange for Python 2
>> >
>> >
>> Does this generalize to >1 dimensions?
>>
> 
> A reshape() is not enough?  What do you want to do exactly?
> 

I was thinking about:
x = np.array([[L1] L2]), where L1 and L2 take the form of list comprehensions
(an inner and an outer one), as a means to create a 2-D array (in this example).
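For the record, the 2-D case can be done without the intermediate list of
lists by flattening the double comprehension into a single generator and
reshaping; a minimal sketch (assumes the row length is known up front):

import numpy as np

rows, cols = 3, 4
gen = (i * cols + j for i in range(rows) for j in range(cols))
x = np.fromiter(gen, dtype=int, count=rows * cols).reshape(rows, cols)
print(x)   # same as np.array([[i * cols + j for j in range(cols)] for i in range(rows)])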



Re: [Numpy-discussion] array comprehension

2016-11-04 Thread Neal Becker
Francesc Alted wrote:

> 2016-11-04 13:06 GMT+01:00 Neal Becker <ndbeck...@gmail.com>:
> 
>> I find I often write:
>> np.array ([some list comprehension])
>>
>> mainly because list comprehensions are just so sweet.
>>
>> But I imagine this isn't particularly efficient.
>>
> 
> Right.  Using a generator and np.fromiter() will avoid the creation of the
> intermediate list.  Something like:
> 
> np.fromiter((i for i in range(x)), dtype=int)  # use xrange for Python 2
> 
> 
Does this generalize to >1 dimensions?



[Numpy-discussion] array comprehension

2016-11-04 Thread Neal Becker
I find I often write:
np.array ([some list comprehension])

mainly because list comprehensions are just so sweet.

But I imagine this isn't particularly efficient.

I wonder if numpy has a "better" way, and if not, maybe it would be a nice 
addition?



Re: [Numpy-discussion] update on mailing list issues

2016-10-04 Thread Neal Becker
Ralf Gommers wrote:

> Hi all,
> 
> We've had a number of issues with the reliability of the mailman setup
> that powers the mailing lists for NumPy, SciPy and several other projects.
> To address that we'll start migrating to the python.org provided
> infrastructure, which should be much more reliable.
> 
> The full set of lists is here: https://mail.scipy.org/mailman/listinfo.
> Looks like we have to migrate at least:
> AstroPy
> IPython-dev
> IPython-user
> NumPy-Discussion
> SciPy-Dev
> SciPy-User
> SciPy-organisers
> 
> Some of the other ones that are not clearly obsolete but have almost zero
> activity (APUG, Nipy-devel) we'll have to contact the owners. *-tickets
> may be useful to archive. The other ones will just be cleaned up, unless
> someone indicates that there's a reason to keep them around.
> 
> And a pre-emptive thanks to Didrik and Enthought for taking on the task of
> migrating the archives and user details.
> 
> Cheers,
> Ralf

Someone will need to update the gmane nntp/mail gateway then, I suppose?



Re: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python

2016-09-01 Thread Neal Becker
Jason Newton wrote:

> I just wanted to follow up on the C++ side of OP email - Cython has quite
> a
> few difficulties working with C++ code at the moment.  It's really more of
> a C solution most of the time and you must split things up into a mostly C
> call interface (that is the C code Cython can call) and limit
> exposure/complications with templates and complex C++11+ constructs. 
> This may change in the longer term but in the near, that is the state.
> 
> I used to use Boost.Python but I'm getting my feet wet with Pybind (which
> is basically the same api but works more as you expect it to with it's
> signature/type plumbing  (including std::shared_ptr islanding), with some
> other C++11 based improvements, and is header only + submodule friendly!).
> I also remembered ndarray thanks to Neal's post but I haven't figured out
> how to leverage it better than pybind, at the moment.  I'd be interested
> to see ndarray gain support for pybind interoperability...
> 
> -Jason
> 
> On Wed, Aug 31, 2016 at 1:08 PM, David Morris  wrote:
> 
>> On Wed, Aug 31, 2016 at 2:28 PM, Michael Bieri  wrote:
>>
>>> Hi all
>>>
>>> There are several ways on how to use C/C++ code from Python with NumPy,
>>> as given in http://docs.scipy.org/doc/numpy/user/c-info.html .
>>> Furthermore, there's at least pybind11.
>>>
>>> I'm not quite sure which approach is state-of-the-art as of 2016. How
>>> would you do it if you had to make a C/C++ library available in Python
>>> right now?
>>>
>>> In my case, I have a C library with some scientific functions on
>>> matrices and vectors. You will typically call a few functions to
>>> configure the computation, then hand over some pointers to existing
>>> buffers containing vector data, then start the computation, and finally
>>> read back the data. The library also can use MPI to parallelize.
>>>
>>
>> I have been delighted with Cython for this purpose.  Great integration
>> with NumPy (you can access numpy arrays directly as C arrays), very
>> python like syntax and amazing performance.
>>
>> Good luck,
>>
>> David
>>

pybind11 looks very nice.  My problem is that the numpy API exposed by 
pybind11 is fairly weak at this point, as far as I can see from the docs.  
ndarray exposes a lot of functionality through the Array object, including 
convenient indexing and slicing.  AFAICT, the interface in pybind11 is 
pretty low level - just pointers.

There is also some functionality exposed by pybind11 using eigen.  
Personally, I find eigen rather baroque, and only use it when I see no 
alternative.
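For a plain C library like the one described above, numpy.ctypeslib is
another lightweight option; a sketch, assuming a hypothetical shared library
libmylib.so that exports void scale(double *x, size_t n, double a):

import ctypes
import numpy as np
import numpy.ctypeslib as npct

lib = npct.load_library("libmylib", ".")   # hypothetical library name/path
vec = npct.ndpointer(dtype=np.float64, ndim=1, flags="C_CONTIGUOUS")
lib.scale.argtypes = [vec, ctypes.c_size_t, ctypes.c_double]
lib.scale.restype = None

x = np.arange(5.0)
lib.scale(x, x.size, 2.0)   # the C code operates on the numpy buffer in place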



Re: [Numpy-discussion] State-of-the-art to use a C/C++ library from Python

2016-08-31 Thread Neal Becker
Michael Bieri wrote:

> Hi all
> 
> There are several ways on how to use C/C++ code from Python with NumPy, as
> given in http://docs.scipy.org/doc/numpy/user/c-info.html . Furthermore,
> there's at least pybind11.
> 
> I'm not quite sure which approach is state-of-the-art as of 2016. How
> would you do it if you had to make a C/C++ library available in Python
> right now?
> 
> In my case, I have a C library with some scientific functions on matrices
> and vectors. You will typically call a few functions to configure the
> computation, then hand over some pointers to existing buffers containing
> vector data, then start the computation, and finally read back the data.
> The library also can use MPI to parallelize.
> 
> Best regards,
> Michael

I prefer ndarray:
https://github.com/ndarray/ndarray



Re: [Numpy-discussion] mtrand.c update 1.11 breaks my crappy code

2016-04-06 Thread Neal Becker
Nathaniel Smith wrote:

> On Apr 6, 2016 06:31, "Robert Kern" <robert.k...@gmail.com> wrote:
>>
>> On Wed, Apr 6, 2016 at 2:18 PM, Neal Becker <ndbeck...@gmail.com> wrote:
>> >
>> > I have C++ code that tries to share the mtrand state.  It unfortunately
>> > depends on the layout of RandomState which used to be:
>> >
>> > struct __pyx_obj_6mtrand_RandomState {
>> >   PyObject_HEAD
>> >   rk_state *internal_state;
>> >   PyObject *lock;
>> > };
>> >
>> > But with 1.11 it's:
>> > struct __pyx_obj_6mtrand_RandomState {
>> >   PyObject_HEAD
>> >   struct __pyx_vtabstruct_6mtrand_RandomState *__pyx_vtab;
>> >   rk_state *internal_state;
>> >   PyObject *lock;
>> >   PyObject *state_address;
>> > };
>> >
>> > So
>> > 1. Why the change?
>> > 2. How can I write portable code?
>>
>> There is no C API to RandomState at this time, stable, portable or
> otherwise. It's all private implementation detail. If you would like a
> stable and portable C API for RandomState, you will need to contribute one
> using PyCapsules to expose the underlying rk_state* pointer.
>>
>> https://docs.python.org/2.7/c-api/capsule.html
> 
> I'm very wary about the idea of exposing the rk_state pointer at all. We
> could have a C API to random but my strong preference would be for
> something that only exposes opaque function calls that take a RandomState
> and return some random numbers, and getting even this right in a clean and
> maintainable way isn't trivial.
> 
> Obviously another option is to call one of the python methods to get an
> ndarray and read out its memory contents. If you can do this in a batch
> (fetching a bunch of numbers for each call) to amortize the additional
> overhead of going through python, then it might work fine. (Python
> overhead is not actually that much -- mostly just having to do a handful
> of extra allocations.)
> 
> Or, possibly the best option, one could use one of the many fine C random
> libraries inside your code, and if you need your code to be deterministic
> given a RandomState you could derive your state initialization from a
> single call to some RandomState method.
> 
> -n

I prefer to use a single instance of a RandomState so that there are 
guarantees about the independence of streams generated from python random 
functions, and from my c++ code.  True, there are simpler approaches - but 
I'm a purist.

Yes, if there were an API to use mkl random functions from a RandomState
object, that would solve my problem.  Or even if there were an API to get an
internal_state pointer from a RandomState object.
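For reference, a sketch of the batching idea Nathaniel describes (the class
and names here are illustrative, not an existing numpy API):

import numpy as np

class buffered_uniform (object):
    # draw from a shared RandomState in large blocks so the per-call
    # Python overhead is amortized
    def __init__ (self, rs, blocksize=65536):
        self.rs = rs
        self.blocksize = blocksize
        self.buf = rs.random_sample(blocksize)
        self.pos = 0

    def next (self):
        if self.pos == self.buf.shape[0]:
            self.buf = self.rs.random_sample(self.blocksize)
            self.pos = 0
        v = self.buf[self.pos]
        self.pos += 1
        return v

rs = np.random.RandomState(0)
u = buffered_uniform(rs)
print(u.next())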



Re: [Numpy-discussion] mtrand.c update 1.11 breaks my crappy code

2016-04-06 Thread Neal Becker
Neal Becker wrote:

> Robert Kern wrote:
> 
>> On Wed, Apr 6, 2016 at 2:18 PM, Neal Becker <ndbeck...@gmail.com> wrote:
>>>
>>> I have C++ code that tries to share the mtrand state.  It unfortunately
>>> depends on the layout of RandomState which used to be:
>>>
>>> struct __pyx_obj_6mtrand_RandomState {
>>>   PyObject_HEAD
>>>   rk_state *internal_state;
>>>   PyObject *lock;
>>> };
>>>
>>> But with 1.11 it's:
>>> struct __pyx_obj_6mtrand_RandomState {
>>>   PyObject_HEAD
>>>   struct __pyx_vtabstruct_6mtrand_RandomState *__pyx_vtab;
>>>   rk_state *internal_state;
>>>   PyObject *lock;
>>>   PyObject *state_address;
>>> };
>>>
>>> So
>>> 1. Why the change?
>>> 2. How can I write portable code?
>> 
>> There is no C API to RandomState at this time, stable, portable or
>> otherwise. It's all private implementation detail. If you would like a
>> stable and portable C API for RandomState, you will need to contribute
>> one using PyCapsules to expose the underlying rk_state* pointer.
>> 
>> https://docs.python.org/2.7/c-api/capsule.html
>> 
>> --
>> Robert Kern
> 
> I don't see how pycapsule helps here.  What I need is, my C++ code
> receives
> a RandomState object.  I need to call e.g., rk_random, passing the pointer
> to rk_state - code looks like this:
> 
> RandomState* r = (RandomState*)(rs.ptr());
> //result_type buffer;
> //rk_fill ((void*)&buffer, sizeof(buffer), r->internal_state);
> if (sizeof(result_type) == sizeof (uint64_t))
>   return rk_ulong (r->internal_state);
> else if (sizeof(result_type) == sizeof (uint32_t))
>   return rk_random (r->internal_state);

Nevermind, I see it's described here:
https://docs.python.org/2.7/extending/extending.html#using-capsules




Re: [Numpy-discussion] mtrand.c update 1.11 breaks my crappy code

2016-04-06 Thread Neal Becker
Robert Kern wrote:

> On Wed, Apr 6, 2016 at 2:18 PM, Neal Becker <ndbeck...@gmail.com> wrote:
>>
>> I have C++ code that tries to share the mtrand state.  It unfortunately
>> depends on the layout of RandomState which used to be:
>>
>> struct __pyx_obj_6mtrand_RandomState {
>>   PyObject_HEAD
>>   rk_state *internal_state;
>>   PyObject *lock;
>> };
>>
>> But with 1.11 it's:
>> struct __pyx_obj_6mtrand_RandomState {
>>   PyObject_HEAD
>>   struct __pyx_vtabstruct_6mtrand_RandomState *__pyx_vtab;
>>   rk_state *internal_state;
>>   PyObject *lock;
>>   PyObject *state_address;
>> };
>>
>> So
>> 1. Why the change?
>> 2. How can I write portable code?
> 
> There is no C API to RandomState at this time, stable, portable or
> otherwise. It's all private implementation detail. If you would like a
> stable and portable C API for RandomState, you will need to contribute one
> using PyCapsules to expose the underlying rk_state* pointer.
> 
> https://docs.python.org/2.7/c-api/capsule.html
> 
> --
> Robert Kern

I don't see how pycapsule helps here.  What I need is, my C++ code receives 
a RandomState object.  I need to call e.g., rk_random, passing the pointer 
to rk_state - code looks like this:

RandomState* r = (RandomState*)(rs.ptr());
//result_type buffer;
//rk_fill ((void*)&buffer, sizeof(buffer), r->internal_state);
if (sizeof(result_type) == sizeof (uint64_t))
  return rk_ulong (r->internal_state);
else if (sizeof(result_type) == sizeof (uint32_t))
  return rk_random (r->internal_state);



[Numpy-discussion] mtrand.c update 1.11 breaks my crappy code

2016-04-06 Thread Neal Becker
I have C++ code that tries to share the mtrand state.  It unfortunately 
depends on the layout of RandomState which used to be:

struct __pyx_obj_6mtrand_RandomState {
  PyObject_HEAD
  rk_state *internal_state;
  PyObject *lock;
};

But with 1.11 it's:
struct __pyx_obj_6mtrand_RandomState {
  PyObject_HEAD
  struct __pyx_vtabstruct_6mtrand_RandomState *__pyx_vtab;
  rk_state *internal_state;
  PyObject *lock;
  PyObject *state_address;
};

So
1. Why the change?
2. How can I write portable code?



[Numpy-discussion] tracemalloc + numpy?

2016-03-08 Thread Neal Becker
I'm trying tracemalloc to find memory usage.  Will numpy array memory usage 
be counted by tracemalloc?  (Doesn't seem to)
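A quick way to check empirically (whether the allocation shows up depends on
whether numpy routes its allocations through Python's traced allocator in the
version at hand):

import tracemalloc
import numpy as np

tracemalloc.start()
before = tracemalloc.take_snapshot()
a = np.ones((1000, 1000))   # ~8 MB of array data
after = tracemalloc.take_snapshot()
for stat in after.compare_to(before, 'lineno')[:3]:
    print(stat)             # the 8 MB appears here only if numpy's malloc is traced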



[Numpy-discussion] julia - Multidimensional algorithms and iteration

2016-02-02 Thread Neal Becker
http://julialang.org/blog/2016/02/iteration/



[Numpy-discussion] what would you expect A[none] to do?

2015-12-31 Thread Neal Becker
In my case, what it does is:

A.shape = (5760,)
A[None].shape -> (1, 5760)

In my case, use of None here is just a mistake.  But why would you want this 
to be accepted at all, and how should it be interpreted?



Re: [Numpy-discussion] what would you expect A[none] to do?

2015-12-31 Thread Neal Becker
Neal Becker wrote:

> In my case, what it does is:
> 
> A.shape = (5760,)
> A[None].shape -> (1, 5760)
> 
> In my case, use of None here is just a mistake.  But why would you want
> this to be accepted at all, and how should it be interpreted?

Actually, in my particular case, if it just acted as a noop, returning the 
original array, that would have been perfect.  No idea if that's a good 
result in general.
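For reference, None in an index is np.newaxis: it inserts a new length-1
axis rather than being rejected:

import numpy as np

A = np.zeros(5760)
print(A[None].shape)        # (1, 5760): None inserts a new axis up front
print(A[np.newaxis].shape)  # (1, 5760): np.newaxis is just an alias for None
print(A[:, None].shape)     # (5760, 1): the new axis can go anywhere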



[Numpy-discussion] reshaping array question

2015-11-17 Thread Neal Becker
I have an array of shape
(7, 24, 2, 1024)

I'd like an array of
(7, 24, 2048)

such that the elements on the last dimension are interleaving the elements 
from the 3rd dimension

[0,0,0,0] -> [0,0,0]
[0,0,1,0] -> [0,0,1]
[0,0,0,1] -> [0,0,2]
[0,0,1,1] -> [0,0,3]
...

What might be the simplest way to do this?


A different question, suppose I just want to stack them

[0,0,0,0] -> [0,0,0]
[0,0,0,1] -> [0,0,1]
[0,0,0,2] -> [0,0,2]
...
[0,0,1,0] -> [0,0,1024]
[0,0,1,1] -> [0,0,1025]
[0,0,1,2] -> [0,0,1026]
...





Re: [Numpy-discussion] reshaping array question

2015-11-17 Thread Neal Becker
Robert Kern wrote:

> On Tue, Nov 17, 2015 at 3:48 PM, Neal Becker <ndbeck...@gmail.com> wrote:
>>
>> I have an array of shape
>> (7, 24, 2, 1024)
>>
>> I'd like an array of
>> (7, 24, 2048)
>>
>> such that the elements on the last dimension are interleaving the
>> elements from the 3rd dimension
>>
>> [0,0,0,0] -> [0,0,0]
>> [0,0,1,0] -> [0,0,1]
>> [0,0,0,1] -> [0,0,2]
>> [0,0,1,1] -> [0,0,3]
>> ...
>>
>> What might be the simplest way to do this?
> 
> np.transpose(A, (-2, -1)).reshape(A.shape[:-2] + (-1,))

I get an error on that 1st transpose:

here, 'A' is 'fftouts'

print (fftouts.shape)
print (np.transpose (fftouts, (-2,-1)).shape)

(4, 24, 2, 1024)  <<< fftouts.shape prints this
Traceback (most recent call last):
  File "test_uw2.py", line 194, in 
run_line (sys.argv)
  File "test_uw2.py", line 190, in run_line
run (opt)
  File "test_uw2.py", line 103, in run
print (np.transpose (fftouts, (-2,-1)).shape)
  File "/home/nbecker/.local/lib/python2.7/site-
packages/numpy/core/fromnumeric.py", line 551, in transpose
return transpose(axes)
ValueError: axes don't match array

> 
>> 
>> A different question, suppose I just want to stack them
>>
>> [0,0,0,0] -> [0,0,0]
>> [0,0,0,1] -> [0,0,1]
>> [0,0,0,2] -> [0,0,2]
>> ...
>> [0,0,1,0] -> [0,0,1024]
>> [0,0,1,1] -> [0,0,1025]
>> [0,0,1,2] -> [0,0,1026]
>> ...
> 
> A.reshape(A.shape[:-2] + (-1,))
> 
> --
> Robert Kern
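For reference, np.transpose needs a full axes tuple (hence the error above),
while swapping just the last two axes does what was asked; a sketch:

import numpy as np

A = np.arange(7 * 24 * 2 * 1024).reshape(7, 24, 2, 1024)

interleaved = A.swapaxes(-2, -1).reshape(A.shape[:-2] + (-1,))
print(interleaved.shape)      # (7, 24, 2048)
print(interleaved[0, 0, :4])  # A[0,0,0,0], A[0,0,1,0], A[0,0,0,1], A[0,0,1,1]

stacked = A.reshape(A.shape[:-2] + (-1,))   # the plain stacking variant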




Re: [Numpy-discussion] Numpy 1.10.0 release

2015-10-06 Thread Neal Becker
lots of warning with openblas

 python setup.py build
Running from numpy source directory.
/usr/lib64/python2.7/distutils/dist.py:267: UserWarning: Unknown 
distribution option: 'test_suite'
  warnings.warn(msg)
blas_opt_info:
blas_mkl_info:
  libraries mkl,vml,guide not found in ['/usr/local/lib64', 
'/usr/local/lib', '/usr/lib64', '/usr/lib', '/usr/lib/']
  NOT AVAILABLE

openblas_info:
/home/nbecker/numpy-1.10.0/numpy/distutils/system_info.py:635: UserWarning: 
Specified path  is invalid.
  warnings.warn('Specified path %s is invalid.' % d)
  libraries openblas not found in []
Runtime library openblas was not found. Ignoring
  FOUND:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/lib64']
language = c
define_macros = [('HAVE_CBLAS', None)]

  FOUND:
libraries = ['openblas', 'openblas']
extra_compile_args = ['-march=native -O3']
library_dirs = ['/usr/lib64']
language = c
define_macros = [('HAVE_CBLAS', None)]
...
cc /tmp/tmpoQq7Hu/tmp/tmpoQq7Hu/source.o -L/usr/lib64 -lopenblas -o 
/tmp/tmpoQq7Hu/a.out
  libraries openblas not found in []
Runtime library openblas was not found. Ignoring
  FOUND:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/lib64']
language = c
define_macros = [('HAVE_CBLAS', None)]

  FOUND:
libraries = ['openblas', 'openblas']
extra_compile_args = ['-march=native -O3']
library_dirs = ['/usr/lib64']
language = c
define_macros = [('HAVE_CBLAS', None)]
...

But I think openblas was used, despite the warnings, because later on I see 
-lopenblas in the link step.



Re: [Numpy-discussion] Numpy 1.10.0 release

2015-10-06 Thread Neal Becker
1 test failure:

FAIL: test_blasdot.test_blasdot_used
--
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
  File "/home/nbecker/.local/lib/python2.7/site-
packages/numpy/testing/decorators.py", line 146, in skipper_func
return f(*args, **kwargs)
  File "/home/nbecker/.local/lib/python2.7/site-
packages/numpy/core/tests/test_blasdot.py", line 31, in test_blasdot_used
assert_(dot is _dotblas.dot)
  File "/home/nbecker/.local/lib/python2.7/site-
packages/numpy/testing/utils.py", line 53, in assert_
raise AssertionError(smsg)
AssertionError




Re: [Numpy-discussion] Numpy 1.10.0 release

2015-10-06 Thread Neal Becker
Sebastian Berg wrote:

> On Di, 2015-10-06 at 07:53 -0400, Neal Becker wrote:
>> 1 test failure:
>> 
>> FAIL: test_blasdot.test_blasdot_used
>> --
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in
>>   runTest
>> self.test(*self.arg)
>>   File "/home/nbecker/.local/lib/python2.7/site-
>> packages/numpy/testing/decorators.py", line 146, in skipper_func
>> return f(*args, **kwargs)
>>   File "/home/nbecker/.local/lib/python2.7/site-
>> packages/numpy/core/tests/test_blasdot.py", line 31, in test_blasdot_used
>> assert_(dot is _dotblas.dot)
>>   File "/home/nbecker/.local/lib/python2.7/site-
>> packages/numpy/testing/utils.py", line 53, in assert_
>> raise AssertionError(smsg)
>> AssertionError
>> 
> 
> My first guess would be, that it sounds like you got some old test files
> flying around. Can you try cleaning up everything and reinstall? It can
> happen that old installed test files survive the new version.
> 
Yes, I rm'd the old ~/.local/lib/python2.7/site-packages/numpy, reinstalled, 
and now no test failure:

Ran 5955 tests in 30.778s

OK (KNOWNFAIL=3, SKIP=1)



Re: [Numpy-discussion] Defining a white noise process using numpy

2015-08-27 Thread Neal Becker
Daniel Bliss wrote:

> Hi all,
> 
> Can anyone give me some advice for translating this equation into code
> using numpy?
> 
> eta(t) = lim(dt -> 0) N(0, 1/sqrt(dt)),
> 
> where N(a, b) is a Gaussian random variable of mean a and variance b**2.
> 
> This is a heuristic definition of a white noise process.
> 
> Thanks,
> Dan

You want noise with infinite variance?  That doesn't make sense.
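The usual discrete-time reading of that heuristic, for what it's worth: on a
grid with step dt, draw independent Gaussians with standard deviation
1/sqrt(dt), so each integrated increment eta*dt is N(0, sqrt(dt)) in the
notation above. A minimal sketch:

import numpy as np

rs = np.random.RandomState(0)
dt = 1e-3
eta = rs.normal(0.0, 1.0 / np.sqrt(dt), size=1000)  # white-noise samples
walk = np.cumsum(eta * dt)   # increments have std sqrt(dt): a Wiener path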



[Numpy-discussion] strange casting rules

2015-07-29 Thread Neal Becker
np.uint64(-1)+0
Out[36]: 1.8446744073709552e+19

I often work on signal processing requiring bit-exact integral arithmetic.  
Promoting to float is not helpful - I don't understand the logic of the 
above example.
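Concretely, under these value-based casting rules a uint64 scalar mixed with
a Python int promotes to float64, while staying within uint64 keeps the
arithmetic bit-exact:

import numpy as np

x = np.uint64(-1)            # 18446744073709551615
print(x + 0)                 # float64: 1.8446744073709552e+19
print(x + np.uint64(0))      # uint64: 18446744073709551615, bit-exact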



[Numpy-discussion] stats functions with weight support

2015-07-29 Thread Neal Becker
The histogram function supports a weights option, but most others (e.g., 
percentile) do not.

For my problem, I have a trace of the amounts of time (floating point) that 
my machine under test is in each of N states.  I'd like to produce 
histograms, kde, maybe nice pics with seaborn.

I can use weights option to histogram, but cannot do any fancier ops, 
because they don't accept a weights= input.  For that matter, it would even 
perhaps be nicer if they accepted a histogram input directly.

I looked at seaborn distplot, but it doesn't accept weights and doesn't 
accept a histogram.  I thought maybe I could modify it, but it uses 
functions like percentile, which also doesn't accept either.
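In the absence of weights support, a weighted percentile can be rolled by
hand; a minimal sketch using one simple cumulative-weight definition (not a
numpy API):

import numpy as np

def weighted_percentile(values, weights, q):
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cw = np.cumsum(w)                      # total weight seen so far
    return np.interp(q / 100.0 * cw[-1], cw, v)

print(weighted_percentile([1.0, 2.0, 3.0], [1.0, 1.0, 2.0], 50))  # 2.0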



[Numpy-discussion] not inheriting from ndarray

2015-07-14 Thread Neal Becker
I wanted the functionality of an array that accumulates my results; it starts
at zero size and resizes as needed.  New results are added using

accumulated += new_array

A simple implementation of this is here:

https://gist.github.com/2ab48e25fd460990d045.git

I have 2 questions:

1. Is this a reasonable approach?

2. I overloaded += to do 1) resize 2) call ndarray += operator
Is there some way to just overload all arithmetic operators similarly all at 
the same time?  (I don't need them for the current work, I'm just curious).
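The gist isn't reproduced here, but the idea is roughly this (a minimal
sketch, 1-D only):

import numpy as np

class accumulator (object):
    def __init__ (self):
        self.arr = np.zeros(0)

    def __iadd__ (self, new):
        new = np.asarray(new)
        if new.shape[0] > self.arr.shape[0]:   # grow, keeping old data
            grown = np.zeros(new.shape[0])
            grown[:self.arr.shape[0]] = self.arr
            self.arr = grown
        self.arr[:new.shape[0]] += new
        return self

acc = accumulator()
acc += np.ones(3)
acc += np.ones(5)
print(acc.arr)   # [2. 2. 2. 1. 1.]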



Re: [Numpy-discussion] not inheriting from ndarray

2015-07-14 Thread Neal Becker
Neal Becker wrote:

 I wanted the functionality of an array that accumulates my results; it
 starts at zero size and resizes as needed.  New results are added using
 
 accumulated += new_array
 
 A simple implementation of this is here:
 
 https://gist.github.com/2ab48e25fd460990d045.git
 
 I have 2 questions:
 
 1. Is this a reasonable approach?
 
 2. I overloaded += to do 1) resize 2) call ndarray += operator
 Is there some way to just overload all arithmetic operators similarly all
 at
 the same time?  (I don't need them for the current work, I'm just
 curious).

Well one thing, due to (what IMO is) a bug in pickle,
https://bugs.python.org/issue5370

you need to modify; this seems to work:

def __getattr__(self, name):
    # delegate all operations not specifically overridden to the base array
    if 'arr' in self.__dict__:  # this allows unpickling to avoid infinite recursion
        return getattr (self.arr, name)
    raise AttributeError(name)




[Numpy-discussion] dimension independent copy of corner of array

2015-07-13 Thread Neal Becker
I want to copy an array to the corner of a new array.  What is a dimension 
independent way to say:

newarr[:i0,:i1,...] = oldarray

where (i0,i1...) is oldarray.shape?



Re: [Numpy-discussion] dimension independent copy of corner of array

2015-07-13 Thread Neal Becker
Robert Kern wrote:

 newarr[tuple(slice(0, i) for i in oldarray.shape)] = oldarray
 
 On Mon, Jul 13, 2015 at 12:34 PM, Neal Becker ndbeck...@gmail.com wrote:
 
 I want to copy an array to the corner of a new array.  What is a
 dimension independent way to say:

 newarr[:i0,:i1,...] = oldarray

 where (i0,i1...) is oldarray.shape?

Thanks.  I'm making my own class that represents a resizeable array that 
works by preserving the lower corner of the old data, because I believe that 
ndarray arr.resize does not actually work this way.  I'm only interested in 
resizing to a larger size.  Is this correct?
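A corner-preserving resize can be spelled with that same slice-tuple trick;
a sketch, assuming the new shape is at least as large in every dimension:

import numpy as np

def grow(oldarray, newshape):
    newarr = np.zeros(newshape, dtype=oldarray.dtype)
    newarr[tuple(slice(0, i) for i in oldarray.shape)] = oldarray
    return newarr

a = np.arange(6).reshape(2, 3)
b = grow(a, (4, 5))   # old data lands in b[:2, :3], zeros elsewhere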



Re: [Numpy-discussion] floats for indexing, reshape - too strict ?

2015-07-02 Thread Neal Becker
josef.p...@gmail.com wrote:

 On Wed, Jul 1, 2015 at 10:32 AM, Sebastian Berg
 sebast...@sipsolutions.net wrote:
 
 On Mi, 2015-07-01 at 10:05 -0400, josef.p...@gmail.com wrote:
  About the deprecation warning for using another type than integers, in
  ones, reshape, indexing and so on:
 
 
  Wouldn't it be nicer to accept floats that are equal to an integer?
 

I'd be concerned that checking each index for exactness would be costly.
I'm also concerned that using floats for an index is frequently a mistake 
and that a warning is what I want.



[Numpy-discussion] python is cool

2015-05-12 Thread Neal Becker
In order to make sure all my random number generators have good 
independence, it is a good practice to use a single shared instance (because 
it is already known to have good properties).  A less-desirable alternative 
is to used rng's seeded with different starting states - in this case the 
independence properties are not generally known.

So I have some fairly deeply nested data structures (classes) that somewhere 
contain a reference to a RandomState object.

I need to be able to clone these data structures, producing new independent 
copies, but I want the RandomState part to be the shared, singleton rs 
object.

In python, no problem:

---
from numpy.random import RandomState

class shared_random_state (RandomState):
    def __init__ (self, rs):
        RandomState.__init__(self, rs)

    def __deepcopy__ (self, memo):
        return self
---

Now I can copy.deepcopy the data structures, but the randomstate part is 
shared.  I just use

rs = shared_random_state (random.RandomState(0))

and provide this rs to all my other objects.  Pretty nice!
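A quick demonstration of the sharing (noise_source here is just a stand-in
for one of those nested structures):

import copy
from numpy.random import RandomState

class shared_random_state (RandomState):
    def __deepcopy__ (self, memo):
        return self

class noise_source (object):
    def __init__ (self, rs):
        self.rs = rs

rs = shared_random_state (0)
n1 = noise_source (rs)
n2 = copy.deepcopy (n1)
print(n2.rs is n1.rs)   # True: the RandomState is shared, not copied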

-- 
Those who fail to understand recursion are doomed to repeat it



Re: [Numpy-discussion] python is cool

2015-05-12 Thread Neal Becker
Roland Schulz wrote:

 Hi,
 
  I think the best way to solve this issue is to not use a state at all. It is
 fast, reproducible even in parallel (if wanted), and doesn't suffer from
 the shared issue. Would be nice if numpy provided such a stateless RNG as
 implemented in Random123: www.deshawresearch.com/resources_random123.html
 
 Roland

That is interesting.  I think np.random needs to be refactored, so it can 
accept a pluggable rng - then we could switch the underlying rng.



[Numpy-discussion] random.RandomState and deepcopy

2015-03-13 Thread Neal Becker
It is common that, to guarantee good statistical independence between various 
random generators, a singleton instance of an RNG is shared between them.

So I typically have various random generator objects, which (sometimes 
several levels objects deep) embed an instance of RandomState.

Now I have a requirement to copy a generator object (without knowing exactly 
what that generator object is).

My solution is to use deepcopy on the top-level object.  But I need to 
overload __deepcopy__ on the singleton RandomState object.

Unfortunately, RandomState doesn't allow customization of __deepcopy__ (or 
anything else).  And it has no __dict__.

My solution is:

class shared_random_state (object):
    def __init__ (self, rs):
        self.rs = rs

    def __getattr__ (self, attr):
        return getattr (self.rs, attr)

    def __deepcopy__ (self, memo):
        return self

An example usage:

rs = shared_random_state (RandomState(0))

from exponential import exponential

e = exponential (rs, 1)

where exponential is:

class exponential (object):
    def __init__ (self, rs, mu):
        self.rs = rs
        self.mu = mu

    def __call__ (self, size=None):
        if size is None:
            return self.rs.exponential (self.mu, 1)[0]
        else:
            return self.rs.exponential (self.mu, size)

    def __repr__ (self):
        return 'exp(%s)' % self.mu

I wonder if anyone has any other suggestions?  Personally, I would prefer if 
numpy provided a more direct solution to this.  Either by providing for 
overloading RandomState deepcopy, or by making the copy behavior switchable 
with a flag to the constructor.



-- 
Those who fail to understand recursion are doomed to repeat it



Re: [Numpy-discussion] random.RandomState and deepcopy

2015-03-13 Thread Neal Becker
Robert Kern wrote:

 On Fri, Mar 13, 2015 at 5:34 PM, Neal Becker ndbeck...@gmail.com wrote:

  It is common that, to guarantee good statistical independence between
 various
 random generators, a singleton instance of an RNG is shared between them.

 So I typically have various random generator objects, which (sometimes
 several levels objects deep) embed an instance of RandomState.

 Now I have a requirement to copy a generator object (without knowing
 exactly
 what that generator object is).
 
 Or rather, you want the generator object to *avoid* copies by returning
 itself when a copy is requested of it.
 
 My solution is to use deepcopy on the top-level object.  But I need to
 overload __deepcopy__ on the singleton RandomState object.

 Unfortunately, RandomState doesn't allow customization of __deepcopy__
 (or
 anything else).  And it has no __dict__.
 
 You can always subclass RandomState to override its __deepcopy__.
 
 --
 Robert Kern

Yes, I think I prefer this:

from numpy.random import RandomState

class shared_random_state (RandomState):
    def __init__ (self, rs):
        RandomState.__init__(self, rs)

    def __deepcopy__ (self, memo):
        return self

Although, that means I have to use it like this:

rs = shared_random_state (0)

where I really would prefer (for aesthetic reasons):

rs = shared_random_state (RandomState(0))

but I don't know how to do that if shared_random_state inherits from 
RandomState.
 




-- 
Those who fail to understand recursion are doomed to repeat it



Re: [Numpy-discussion] Introductory mail and GSoc Project Vector math library integration

2015-03-12 Thread Neal Becker
Ralf Gommers wrote:

 On Wed, Mar 11, 2015 at 11:20 PM, Dp Docs sdpa...@gmail.com wrote:
 


 On Thu, Mar 12, 2015 at 2:01 AM, Daπid davidmen...@gmail.com wrote:
 
  On 11 March 2015 at 16:51, Dp Docs sdpa...@gmail.com wrote:
  On Wed, Mar 11, 2015 at 7:52 PM, Sturla Molden
  sturla.mol...@gmail.com
 wrote:
  
   There are at least two ways to proceed here. One is to only use
   vector math when strides and alignment allow it.
  I didn't got it. can you explain in detail?
 
 
  One example, you can create a numpy 2D array using only the odd columns
 of a matrix.
 
  odd_matrix = full_matrix[::2, ::2]
 
  This is just a view of the original data, so you save the time and the
 memory of making a copy. The drawback is that you trash memory locality,
 as the elements are not contiguous in memory. If the memory is guaranteed
 to be contiguous, a compiler can apply extra optimisations, and this is
 what vector libraries usually assume. What I think Sturla is suggesting
 with "when strides and alignment allow it" is to use the fast version if
 the array is contiguous, and fall back to the present implementation
 otherwise. Another would be to make an optimally aligned copy, but that
 could eat up whatever time we save from using the faster library, and
 other problems.
 
  The difficulty with Numpy's strides is that they allow so many ways of
 manipulating the data... (alternating elements, transpositions, different
 precisions...).
 
 
  I think the actual problem is not choosing which library to integrate,
 it is how to integrate these libraries. As I have seen the code base and
 been told, the current implementation uses the C math library. Can we just
 use the current implementation and, wherever it calls C math functions,
 replace those calls with the faster library functions? Then we have to
 modify the NumPy library (which usually gets imported for math operations)
 using some if/else conditions: first try the faster one, and if it is not
 available, fall back to the default one.
 
 
  At the moment, we are linking to whichever LAPACK is available at
 compile time, so no need for a runtime check. I guess it could (should?)
 be the same.
 I didn't understand this. I was asking: let's say I have chosen one faster
 library; now I need to integrate it in *some way* without changing the
 default functionality, so that when NumPy is imported ("from numpy import
 *") it can access the integrated library's functions as well as the
 default library's functions. What should that *some way* be? Even at
 compile time, it needs to decide which function it is going to use, right?

 
 Indeed, it should probably work similar to how BLAS/LAPACK functions are
 treated now. So you can support multiple libraries in numpy (pick only one
 to start with of course), but at compile time you'd pick the one to use.
 Then that library gets always called under the hood, i.e. no new public
 functions/objects in numpy but only improved performance of existing ones.
 
  It has been discussed above about integrating the MKL libraries, but when
  MKL is not available on the hardware architecture, will the above library
  serve as the default library? If yes, then the integration method discussed
  above may be the required one for this project, right? Can you please tell
  me a bit more or provide some link related to that? Does the availability
  of these faster libraries depend on the hardware architecture etc., or on
  the availability of hardware resources in a system? Because if it is the
  latter, this newly integrated library will support operations some of the
  time but not always?

 
 Not HW resources I'd think. Looking at http://www.yeppp.info, it supports
 all commonly used cpus/instruction sets.
 As long as the accuracy of the library is OK this should not be noticeable
 to users except for the difference in performance.
 
 
 I believe it's the first one, but it is better to clear up any confusion.
 For example, suppose "availability of hardware" means the latter: say
 library A needs hardware A1 for its support, and A1 is busy; then it will
 not be able to support the operation. Meanwhile, library B needs hardware
 of type B1, which is not busy, so it will support these operations. What I
 want to say is: if "availability of a faster library" means availability
 of hardware resources in a system at the particular time we want to do an
 operation, then it is totally unpredictable; availability of those
 resources will be random, and even worse, if a bit of time passes between
 compiling and running and the hardware resource has been allocated to
 another process in the meantime, it would be very problematic to use these
 operations. So this leads me to think that "availability of a library"
 means whether the hardware architecture supports that library. Since there
 are many kinds of hardware architecture, and it is not the case that one
 library supports all 

[Numpy-discussion] unpacking data values into array of bits

2015-02-12 Thread Neal Becker
I need to transmit some data values.  These values will be float and long 
values.  I need them encoded into a string of bits.

The only way I found so far to do this seems rather roundabout:


np.unpackbits (np.array (memoryview(struct.pack ('d', pi))))
Out[45]: 
array([0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0,
   0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0,
   0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0], dtype=uint8)

(which I'm not certain is correct)

Also, I don't know how to reverse this process 

-- 
-- Those who don't understand recursion are doomed to repeat it



Re: [Numpy-discussion] unpacking data values into array of bits

2015-02-12 Thread Neal Becker
Robert Kern wrote:

 On Thu, Feb 12, 2015 at 2:21 PM, Neal Becker ndbeck...@gmail.com wrote:

 I need to transmit some data values.  These values will be float and long
 values.  I need them encoded into a string of bits.

 The only way I found so far to do this seems rather roundabout:


 np.unpackbits (np.array (memoryview(struct.pack ('d', pi))))
 Out[45]:
 array([0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1,
 0,
0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0,
 0,
0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0], dtype=uint8)

 (which I'm not certain is correct)

 Also, I don't know how to reverse this process
 
 You already had your string ready for transmission with `struct.pack('d',
 pi)`.
 
 --
 Robert Kern

my transmitter wants an np array of bits, not a string

-- 
-- Those who don't understand recursion are doomed to repeat it



Re: [Numpy-discussion] unpacking data values into array of bits

2015-02-12 Thread Neal Becker
Robert Kern wrote:

 On Thu, Feb 12, 2015 at 3:22 PM, Neal Becker ndbeck...@gmail.com wrote:

 Robert Kern wrote:

  On Thu, Feb 12, 2015 at 3:00 PM, Neal Becker ndbeck...@gmail.com
 wrote:
 
  Robert Kern wrote:
 
   On Thu, Feb 12, 2015 at 2:21 PM, Neal Becker ndbeck...@gmail.com
  wrote:
  
   I need to transmit some data values.  These values will be float and
  long
   values.  I need them encoded into a string of bits.
  
   The only way I found so far to do this seems rather roundabout:
  
  
   np.unpackbits (np.array (memoryview(struct.pack ('d', pi))))
   Out[45]:
   array([0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0,
 0,
  1,
   0,
  0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1,
 0,
  0,
   0,
  0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0],
  dtype=uint8)
  
   (which I'm not certain is correct)
  
   Also, I don't know how to reverse this process
  
   You already had your string ready for transmission with
  `struct.pack('d',
   pi)`.
  
   --
   Robert Kern
 
  my transmitter wants an np array of bits, not a string
 
  Can you provide any details on what your transmitter is?
 
  --

 My transmitter is c++ code that accepts as input a numpy array of
 np.int32.
 Each element of that array has value 0 or 1.
 
 Ah, great. That makes sense, then.
 
 def tobeckerbits(x):
     return np.unpackbits(np.frombuffer(np.asarray(x),
                          dtype=np.uint8)).astype(np.int32)
 
 def frombeckerbits(bits, dtype):
     return np.frombuffer(np.packbits(bits), dtype=dtype)[0]
 
 --
 Robert Kern

Nice!  Also seems to work for arrays of values:

def tobeckerbits(x):
    return np.unpackbits(np.frombuffer(np.asarray(x),
                         dtype=np.uint8)).astype(np.int32)

def frombeckerbits(bits, dtype):
    return np.frombuffer(np.packbits(bits), dtype=dtype)  # leaving off the [0]

x = tobeckerbits (2.7)
y = frombeckerbits (x, float)

x2 = tobeckerbits (np.array ((1.1, 2.2)))
y2 = frombeckerbits (x2, float)

-- 
-- Those who don't understand recursion are doomed to repeat it



Re: [Numpy-discussion] Views of a different dtype

2015-02-02 Thread Neal Becker
I find it useful to be able to view a simple 1D contiguous array of complex as
float (alternating real and imag), and also to do the reverse.
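For a contiguous 1-D array this is just a view, in both directions:

import numpy as np

z = np.array([1 + 2j, 3 + 4j])   # complex128, contiguous
f = z.view(np.float64)           # [1., 2., 3., 4.]: real/imag interleaved
z2 = f.view(np.complex128)       # back again; all three share one buffer
print(f, z2)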



[Numpy-discussion] question np.partition

2015-01-29 Thread Neal Becker
It sounds like np.partition could be used to answer the question:
give me the highest K elements in a vector.

Is this a correct interpretation?  Something like partial sort, but returned 
elements are unsorted.

I could really make some use of this, but in my case it is a list of objects I 
need to sort on a particular key.  Is this algorithm available in general 
python 
code (not specific to numpy arrays)?
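That interpretation is right; and for plain Python objects with a key, the
stdlib has an equivalent in heapq:

import heapq
import numpy as np

a = np.array([5, 1, 9, 3, 7, 2])
k = 3
print(np.partition(a, -k)[-k:])   # the 3 largest, in unspecified order

items = [('a', 5), ('b', 1), ('c', 9), ('d', 3), ('e', 7)]
print(heapq.nlargest(3, items, key=lambda t: t[1]))  # [('c', 9), ('e', 7), ('a', 5)]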

-- 
-- Those who don't understand recursion are doomed to repeat it



[Numpy-discussion] simple reduction question

2014-12-24 Thread Neal Becker
What would be the most efficient way to compute:

c[j] = \sum_i (a[i] * b[i,j])

where a[i] is a 1-d vector, b[i,j] is a 2-d array?

This seems to be one way:

import numpy as np
a = np.arange (3)
b = np.arange (12).reshape (3,4)
c = np.dot (a, b).sum()

but np.dot returns a vector, which then needs further reduction.  Don't know if 
there's a better way.

-- 
-- Those who don't understand recursion are doomed to repeat it



Re: [Numpy-discussion] simple reduction question

2014-12-24 Thread Neal Becker
Nathaniel Smith wrote:

 On Wed, Dec 24, 2014 at 3:25 PM, Neal Becker ndbeck...@gmail.com wrote:
 What would be the most efficient way to compute:

 c[j] = \sum_i (a[i] * b[i,j])

 where a[i] is a 1-d vector, b[i,j] is a 2-d array?
 
 I think this formula is just np.dot(a, b). Did you mean c = \sum_j
 \sum_i (a[i] * b[i, j])?
 
 This seems to be one way:

 import numpy as np
 a = np.arange (3)
 b = np.arange (12).reshape (3,4)
 c = np.dot (a, b).sum()

 but np.dot returns a vector, which then needs further reduction.  Don't know
 if there's a better way.

 --
Sorry, I was a bit confused there.  Actually, c = np.dot(a, b) was just what I 
needed.
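
(For reference, the same contraction spelled with einsum -- a sketch:)

import numpy as np

a = np.arange (3)
b = np.arange (12).reshape (3,4)

c = np.dot (a, b)                  # c[j] = \sum_i (a[i] * b[i,j])
c2 = np.einsum ('i,ij->j', a, b)   # the same contraction, written out
assert np.allclose (c, c2)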



[Numpy-discussion] multi-dimensional c++ proposal

2014-10-27 Thread Neal Becker
The multi-dimensional c++ stuff is interesting (about time!)

http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2014/n3851.pdf

-- 
-- Those who don't understand recursion are doomed to repeat it



[Numpy-discussion] array indexing question

2014-10-14 Thread Neal Becker
I'm using np.nonzero to construct the tuple:
(array([0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2]), array([1, 3, 5, 7, 2, 3, 6, 7, 4, 
5, 6, 7]))

Now what I want is the 2-D index array:

[1,3,5,7,
2,3,6,7,
4,5,6,7]

Any ideas?
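
One possible sketch, assuming every row index occurs equally often (4 times
here), so the column indices reshape directly:

import numpy as np

rows = np.array ((0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2))
cols = np.array ((1, 3, 5, 7, 2, 3, 6, 7, 4, 5, 6, 7))

idx = cols.reshape (len (np.unique (rows)), -1)
# array([[1, 3, 5, 7],
#        [2, 3, 6, 7],
#        [4, 5, 6, 7]])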

-- 
-- Those who don't understand recursion are doomed to repeat it



[Numpy-discussion] why does u.resize return None?

2014-09-11 Thread Neal Becker
It would be useful if u.resize returned the new array, so it could be used for 
chaining operations
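
(The in-place method follows the list.sort convention of returning None; the
free function np.resize does return a new array -- a sketch:)

import numpy as np

u = np.arange (6)
u.resize (2, 3)                        # in-place, returns None (requires no other references)

v = np.resize (np.arange (6), (2, 3))  # the function form returns a new array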

-- 
-- Those who don't understand recursion are doomed to repeat it



Re: [Numpy-discussion] why does u.resize return None?

2014-09-11 Thread Neal Becker
https://github.com/numpy/numpy/issues/5064

Eelco Hoogendoorn wrote:

 agreed; I never saw the logic in returning none either.
 
 On Thu, Sep 11, 2014 at 4:27 PM, Neal Becker ndbeck...@gmail.com wrote:
 
 It would be useful if u.resize returned the new array, so it could be used
 for
 chaining operations

 --
 -- Those who don't understand recursion are doomed to repeat it


-- 
-- Those who don't understand recursion are doomed to repeat it



Re: [Numpy-discussion] SFMT (faster mersenne twister)

2014-09-05 Thread Neal Becker
Robert Kern wrote:

 On Thu, Sep 4, 2014 at 12:32 PM, Neal Becker ndbeck...@gmail.com wrote:
 http://www.math.sci.hiroshima-u.ac.jp/~%20m-mat/MT/SFMT/index.html
 
 What would you like to say about it?
 

If it is faster (and at least as good), maybe we'd like to adopt it to replace 
that used for mtrand

-- 
-- Those who don't understand recursion are doomed to repeat it



Re: [Numpy-discussion] SFMT (faster mersenne twister)

2014-09-05 Thread Neal Becker
Robert Kern wrote:

 On Fri, Sep 5, 2014 at 12:05 PM, Neal Becker ndbeck...@gmail.com wrote:
 Robert Kern wrote:

 On Thu, Sep 4, 2014 at 12:32 PM, Neal Becker ndbeck...@gmail.com wrote:
 http://www.math.sci.hiroshima-u.ac.jp/~%20m-mat/MT/SFMT/index.html

 What would you like to say about it?


 If it is faster (and at least as good), maybe we'd like to adopt it to
 replace that used for mtrand
 
 It's a variant of the standard MT rather than just an implementation
 of it, so we can't just drop it in. You will need to build the
 infrastructure to support multiple PRNGs first (or rather, build the
 infrastructure to reuse the non-uniform distribution code with
 multiple core PRNGs).
 

You mean it's not backward compatible because it won't generate exactly the 
same 
sequence of output for a given seed, and therefore we wouldn't want to make 
that 
change?

I think it's somewhat debatable whether generating a different sequence of 
random numbers counts as breaking backward compatibility. 

-- 
-- Those who don't understand recursion are doomed to repeat it



[Numpy-discussion] SFMT (faster mersenne twister)

2014-09-04 Thread Neal Becker
http://www.math.sci.hiroshima-u.ac.jp/~%20m-mat/MT/SFMT/index.html

-- 
-- Those who don't understand recursion are doomed to repeat it



Re: [Numpy-discussion] ANN: NumPy 1.9.0 release candidate 1 available

2014-08-29 Thread Neal Becker
How do I run tests?

python setup.py --help-commands claims 'test' is a command, but doesn't seem to 
work:

python setup.py test
Running from numpy source directory.
/usr/lib64/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution 
option: 'test_suite'
  warnings.warn(msg)
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
   or: setup.py --help [cmd1 cmd2 ...]
   or: setup.py --help-commands
   or: setup.py cmd --help


-- Those who don't understand recursion are doomed to repeat it



Re: [Numpy-discussion] ANN: NumPy 1.9.0 release candidate 1 available

2014-08-29 Thread Neal Becker
OK, it's fixed by doing:

rm -rf ~/.local/lib/python2.7/site-packages/numpy*
python setup.py install --user

I guess something was not cleaned out from previous packages



Re: [Numpy-discussion] fftw supported?

2014-06-02 Thread Neal Becker
Sebastian Berg wrote:

 On Mo, 2014-06-02 at 07:27 -0400, Neal Becker wrote:
 I just d/l numpy-1.8.1 and try to build.  I uncomment:
 
 [fftw]
 libraries = fftw3
 
 This is fedora 20.  fftw3 (and devel) is installed as fftw.
 
 I see nothing written to stderr during the build that has any reference to
 fftw.
 
 
 I don't know the details, but this is not supported currently. It did
 work for some old versions of numpy I think.
 
 - Sebastian
 

If fftw is not supported anymore, then the comments should be removed from 
site.cfg.example




Re: [Numpy-discussion] ANN: Pandas 0.14.0 released

2014-05-30 Thread Neal Becker
pip install --user --up pandas
Downloading/unpacking pandas from 
https://pypi.python.org/packages/source/p/pandas/pandas-0.14.0.tar.gz#md5=b775987c0ceebcc8d5ace4a1241c967a
...

Downloading/unpacking numpy>=1.6.1 from 
https://pypi.python.org/packages/source/n/numpy/numpy-1.8.1.tar.gz#md5=be95babe263bfa3428363d6db5b64678
 
(from pandas)
  Downloading numpy-1.8.1.tar.gz (3.8MB): 3.8MB downloaded
  Running setup.py egg_info for package numpy
Running from numpy source directory.

warning: no files found matching 'tools/py3tool.py'
warning: no files found matching '*' under directory 'doc/f2py'
warning: no previously-included files matching '*.pyc' found anywhere in 
distribution
warning: no previously-included files matching '*.pyo' found anywhere in 
distribution
warning: no previously-included files matching '*.pyd' found anywhere in 
distribution
Downloading/unpacking six from 
https://pypi.python.org/packages/source/s/six/six-1.6.1.tar.gz#md5=07d606ac08595d795bf926cc9985674f
 
(from python-dateutil-pandas)
  Downloading six-1.6.1.tar.gz
  Running setup.py egg_info for package six

no previously-included directories found matching 'documentation/_build'
Installing collected packages: pandas, pytz, numpy, six


What?  I already have numpy-1.8.0 installed (also have six, pytz).



Re: [Numpy-discussion] Inverse function of numpy.polyval()

2014-05-20 Thread Neal Becker
Yuxiang Wang wrote:

 Dear all,
 
 I was wondering is there a convenient inverse function of
 np.polyval(), where I give the y value and it solves for x?
 
 I know one way I could do this is:
 
 import numpy as np
 
 # Set up the question
 p = np.array([1, 1, -10])
 y = 100
 
 # Solve
 p_temp = p
 p_temp[-1] -= y
 x = np.roots(p_temp)
 
 However my guess is most would agree on that this code has low
 readability. Any suggestions?
 
 Thanks!
 
 -Shawn
 
 

Did you get the polynomial from polyfit?  In that case just swap x-y
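
A sketch of that suggestion, assuming the relation is monotonic over the
sampled range:

import numpy as np

x = np.linspace (0, 10, 50)
y = x**2 + x - 10

p_inv = np.polyfit (y, x, 3)          # fit x as a function of y
x_at_100 = np.polyval (p_inv, 100)    # approximate x where y == 100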




[Numpy-discussion] Use of PyViennaCL on multi-core?

2014-05-19 Thread Neal Becker
Typically, I have multiple CPU cores running in 'trivial parallel' mode - each
running an independent point of a monte-carlo simulation.

Could multiple processes multiplex use of a single GPU, using PyViennaCL? 



[Numpy-discussion] incremental histogram

2014-05-07 Thread Neal Becker
I needed a histogram that is built incrementally.  My need is for 1D only.

The idea is to not require storage of all the data (assume it could be too 
large).

This is a naive implementation, perhaps someone could suggest something better.

,----[ /home/nbecker/sigproc.ndarray/histogram3.py ]
| import numpy as np
| 
| class histogram (object):
|     def __init__ (self, nbins):
|         self.nbins = nbins
|         self.centers = []
|         self.counts = []
|     def __iadd__ (self, x):
|         self.counts, edges = np.histogram (
|             np.concatenate ((x, self.centers)),
|             weights = np.concatenate ((np.ones (len(x)), self.counts)),
|             bins=self.nbins)
| 
|         self.centers = 0.5 * (edges[:-1] + edges[1:])
|         return self
| 
| 
| if __name__ == '__main__':
|     h = histogram (100)
|     h += np.arange (10)
|     print h.centers, h.counts
|     h += np.arange (10)
|     print h.centers, h.counts
|     h += np.arange (20)
|     print h.centers, h.counts
`----



[Numpy-discussion] should rint return int?

2014-04-28 Thread Neal Becker
I notice rint returns float.  Shouldn't it return int?

Would be useful when float is no longer acceptable as an index.  I think
conversion to an index using rint is a common idiom.



Re: [Numpy-discussion] should rint return int?

2014-04-28 Thread Neal Becker
Robert Kern wrote:

 On Mon, Apr 28, 2014 at 6:36 PM, Neal Becker ndbeck...@gmail.com wrote:
 I notice rint returns float.  Shouldn't it return int?

 Would be useful when float is no longer acceptable as an index.  I think
 conversion to an index using rint is a common idiom.
 
 C's rint() does not:
 
   http://linux.die.net/man/3/rint
 
 This is because there are many integers that are representable as
 floats/doubles/long doubles that are well outside of the range of any
 C integer type, e.g. 1e20.
 
 Python 3's round() can return a Python int because Python ints are
 unbounded. Ours aren't.
 
 That said, typically the first thing anyone does with the result of
 rounding is to coerce it to a native int dtype without any checking.
 It would not be terrible to have a function that rounds, then coerces
 to int but checks for overflow and passes that through the numpy error
 mechanism to be controlled. But it shouldn't be called rint(), which
 is intended to be as thin a wrapper over the C function as possible.
 

Well I'd spell it nint, and it works like:

def nint (x):
  return int (x + 0.5) if x >= 0 else int (x - 0.5)
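
A vectorized sketch of the same rounding (half away from zero), with the
integer coercion made explicit:

import numpy as np

def nint_array (x):
    x = np.asarray (x)
    return np.where (x >= 0, x + 0.5, x - 0.5).astype (np.int64)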



Re: [Numpy-discussion] numerical gradient, Jacobian, and Hessian

2014-04-23 Thread Neal Becker
alex wrote:

 On Mon, Apr 21, 2014 at 3:13 AM, Eelco Hoogendoorn
 hoogendoorn.ee...@gmail.com wrote:
 As far as I can tell, [Theano] is actually the only tensor/ndarray aware
 differentiator out there
 
 And AlgoPy, a tensor/ndarray aware arbitrary order automatic
 differentiator (https://pythonhosted.org/algopy/)

I noticed julia seems to have a package



[Numpy-discussion] mtrand normal sigma = 0 too restrictive

2014-04-03 Thread Neal Becker
Traceback (most recent call last):
  File "./test_inroute_frame.py", line 1694, in <module>
    run_line (sys.argv)
  File "./test_inroute_frame.py", line 1690, in run_line
    return run (opt, cmdline)
  File "./test_inroute_frame.py", line 1115, in run
    burst.tr (xbits, freq=freqs[i]+burst.freq_offset, tau=burst.time_offset, phase=burst.phase)
  File "/home/nbecker/hn-inroute-fixed/transmitter.py", line 191, in __call__
    self.channel_out, self.complex_channel_gain = self.channel (mix_out)
  File "./test_inroute_frame.py", line 105, in __call__
    ampl = 10**(0.05*self.pwr_gen())
  File "./test_inroute_frame.py", line 148, in __call__
    pwr = self.gen()
  File "./test_inroute_frame.py", line 124, in __call__
    x = self.gen()
  File "/home/nbecker/sigproc.ndarray/normal.py", line 11, in __call__
    return self.rs.normal (self.mean, self.std, size)
  File "mtrand.pyx", line 1479, in mtrand.RandomState.normal 
(numpy/random/mtrand/mtrand.c:9359)
ValueError: scale <= 0

I believe this restriction is too restrictive; the check should only reject
scale < 0.

There is nothing wrong with scale == 0 as far as I know.  It's a convenient way
to turn off the noise in my simulation.
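
In the meantime, a workaround sketch (normal_or_constant is my name for it,
not numpy's):

import numpy as np

def normal_or_constant (rs, mean, std, size=None):
    if std == 0:
        return np.full (size if size is not None else (), mean)   # noise turned off
    return rs.normal (mean, std, size)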



Re: [Numpy-discussion] How security holes happen

2014-03-03 Thread Neal Becker
 Todd toddr...@gmail.com Wrote in message:
 

use modern programming languages with well designed exception handling
-- 
Android NewsGroup Reader
http://www.piaohong.tk/newsgroup



[Numpy-discussion] FYI: libflatarray

2014-02-13 Thread Neal Becker
I thought this was interesting:

http://www.libgeodecomp.org/libflatarray.html



[Numpy-discussion] another interesting high performance vector lib (yeppp)

2014-01-27 Thread Neal Becker
http://www.yeppp.info/



Re: [Numpy-discussion] adding fused multiply and add to numpy

2014-01-09 Thread Neal Becker
Charles R Harris wrote:

 On Wed, Jan 8, 2014 at 2:39 PM, Julian Taylor jtaylor.deb...@googlemail.com
 wrote:
 
...
 
 Another function that could be useful is a |a|**2 function, abs2 perhaps.
 
 Chuck

I use mag_sqr all the time.  It should be much faster for complex, if computed
via:

x.real**2 + x.imag**2

avoiding the sqrt of abs.
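
A sketch of such a helper:

import numpy as np

def mag_sqr (x):
    x = np.asarray (x)
    return x.real**2 + x.imag**2   # |x|**2 without any sqrt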



[Numpy-discussion] an indexing question

2014-01-08 Thread Neal Becker
I have a 1d vector d.  I want compute the means of subsets of this vector.
The subsets are selected by looking at another vector s or same shape as d.

This can be done as:

[np.mean (d[s == i]) for i in range (size)]

But I think this could be done directly with numpy addressing, without resorting
to list comprehension?
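
One trick (a sketch, using np.bincount's weights):

import numpy as np

size = 4
d = np.random.rand (100)
s = np.random.randint (0, size, 100)

counts = np.bincount (s, minlength=size)
means = np.bincount (s, weights=d, minlength=size) / np.maximum (counts, 1)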



Re: [Numpy-discussion] proposal: min, max of complex should give warning

2013-12-31 Thread Neal Becker
Cera, Tim wrote:

 I don't work with complex numbers, but just sampling what others do:
 
 
 Python: no ordering, results in TypeError
 
 Matlab: sorts by magnitude
 http://www.mathworks.com/help/matlab/ref/sort.html
 
 R: sorts first by real, then by imaginary
 http://stat.ethz.ch/R-manual/R-patched/library/base/html/sort.html
 
 Numpy: sorts first by real, then by imaginary (the documentation link
 below calls this sort 'lexicographical' which I don't think is
 correct)
 http://docs.scipy.org/doc/numpy/reference/generated/numpy.sort.html
 
 
 I would think that the Matlab sort might be more useful, but easy
 enough by using the absolute value.
 
 I think what Numpy does is normal enough to not justify a warning, but
 leave this to others because as I pointed out in the beginning I don't
 work with complex numbers.
 
 Kindest regards,
 Tim

But I'm not proposing to change numpy's result, which I'm sure would raise many 
objections.  I'm just asking to give a warning, because I think in most cases
this is actually a mistake on the user's part.  Just like the warning currently
given when complex data are truncated to real part.



Re: [Numpy-discussion] proposal: min, max of complex should give warning

2013-12-31 Thread Neal Becker
Ralf Gommers wrote:

 On Tue, Dec 31, 2013 at 4:52 PM, Neal Becker ndbeck...@gmail.com wrote:
 
 Cera, Tim wrote:

  I don't work with complex numbers, but just sampling what others do:
 
 
  Python: no ordering, results in TypeError
 
  Matlab: sorts by magnitude
  http://www.mathworks.com/help/matlab/ref/sort.html
 
  R: sorts first by real, then by imaginary
  http://stat.ethz.ch/R-manual/R-patched/library/base/html/sort.html
 
  Numpy: sorts first by real, then by imaginary (the documentation link
  below calls this sort 'lexicographical' which I don't think is
  correct)
  http://docs.scipy.org/doc/numpy/reference/generated/numpy.sort.html
 
 
  I would think that the Matlab sort might be more useful, but easy
  enough by using the absolute value.
 
  I think what Numpy does is normal enough to not justify a warning, but
  leave this to others because as I pointed out in the beginning I don't
  work with complex numbers.
 
  Kindest regards,
  Tim

 But I'm not proposing to change numpy's result, which I'm sure would raise
 many
 objections.  I'm just asking to give a warning, because I think in most
 cases
 this is actually a mistake on the user's part.  Just like the warning
 currently
 given when complex data are truncated to real part.

 
 Keep in mind that warnings can be highly annoying. If you're a user who
 uses this functionality regularly (and you know what you're doing), then
 you're going to be very unhappy to have to wrap each function call in:
 olderr = np.seterr(all='ignore')
 max(...)
 np.seterr(**olderr)
 or in:
 with warnings.catch_warnings():
 warnings.filterwarnings('ignore', ...)
 max(...)
 
 The actual behavior isn't documented now it looks like, so that should be
 done. In the Notes section of max/min probably.
 
 As for your proposal, it would be good to know if adding a warning would
 actually catch any bugs. For the truncation warning it caught several in
 scipy and other libs IIRC.
 
 Ralf

I tripped over it yesterday, which is what prompted my suggestion.



[Numpy-discussion] proposal: min, max of complex should give warning

2013-12-30 Thread Neal Becker
I propose the following change:  min, max applied to complex should
give a warning.

The rationale is, when the user applies min or max to complex, it's probably
a mistake.



[Numpy-discussion] nasty bug in 1.8.0??

2013-12-02 Thread Neal Becker
This is np 1.8.0 on fedora x86_64:

In [5]: x =np.array ((1,))

In [6]: x.shape
Out[6]: (1,)

In [7]: x.strides
Out[7]: (9223372036854775807,)



Re: [Numpy-discussion] nasty bug in 1.8.0??

2013-12-02 Thread Neal Becker
I built using:

CFLAGS='-march=native -O3' NPY_RELAXED_STRIDES_CHECKING=1 python3 setup.py 
install --user


Daπid wrote:

 I get:
 
 In [4]: x.strides
 Out[4]: (8,)
 
 Same architecture and OS, Numpy installed via Pip on Python 2.7.5.
 
 
 On 2 December 2013 20:08, Neal Becker ndbeck...@gmail.com wrote:
 
 This is np 1.8.0 on fedora x86_64:

 In [5]: x =np.array ((1,))

 In [6]: x.shape
 Out[6]: (1,)

 In [7]: x.strides
 Out[7]: (9223372036854775807,)






Re: [Numpy-discussion] nasty bug in 1.8.0??

2013-12-02 Thread Neal Becker
I don't think that behavior is acceptable.

Frédéric Bastien wrote:

 It is the NPY_RELAXED_STRIDES_CHECKING=1 flag that caused this.
 
 Fred
 
 On Mon, Dec 2, 2013 at 2:18 PM, Neal Becker ndbeck...@gmail.com wrote:
 I built using:

 CFLAGS='-march=native -O3' NPY_RELAXED_STRIDES_CHECKING=1 python3 setup.py
 install --user


 Daπid wrote:

 I get:

 In [4]: x.strides
 Out[4]: (8,)

 Same architecture and OS, Numpy installed via Pip on Python 2.7.5.


 On 2 December 2013 20:08, Neal Becker ndbeck...@gmail.com wrote:

 This is np 1.8.0 on fedora x86_64:

 In [5]: x =np.array ((1,))

 In [6]: x.shape
 Out[6]: (1,)

 In [7]: x.strides
 Out[7]: (9223372036854775807,)



Re: [Numpy-discussion] nasty bug in 1.8.0??

2013-12-02 Thread Neal Becker
The software I'm using, which is 

https://github.com/ndarray/ndarray

does depend on this.  Am I the only one who thinks that this
behavior is not desirable?

Frédéric Bastien wrote:

 Just don't compile with NPY_RELAXED_STRIDES_CHECKING to have the old
 behavior I think (which is not always the same strides depending on
 how it was created, I don't know if they changed that or not).
 
 Do someone else recall the detail of this?
 
 Fred
 
 p.s. I didn't do this or ask for it. But this helps test that your
 software doesn't depend on the strides when a shape is 1.
 
 On Mon, Dec 2, 2013 at 2:35 PM, Neal Becker ndbeck...@gmail.com wrote:
 I don't think that behavior is acceptable.

 Frédéric Bastien wrote:

 It is the NPY_RELAXED_STRIDES_CHECKING=1 flag that caused this.

 Fred

 On Mon, Dec 2, 2013 at 2:18 PM, Neal Becker ndbeck...@gmail.com wrote:
 I built using:

 CFLAGS='-march=native -O3' NPY_RELAXED_STRIDES_CHECKING=1 python3 setup.py
 install --user


 Daπid wrote:

 I get:

 In [4]: x.strides
 Out[4]: (8,)

 Same architecture and OS, Numpy installed via Pip on Python 2.7.5.


 On 2 December 2013 20:08, Neal Becker ndbeck...@gmail.com wrote:

 This is np 1.8.0 on fedora x86_64:

 In [5]: x =np.array ((1,))

 In [6]: x.shape
 Out[6]: (1,)

 In [7]: x.strides
 Out[7]: (9223372036854775807,)



Re: [Numpy-discussion] nasty bug in 1.8.0??

2013-12-02 Thread Neal Becker
Jim Bosch wrote:

 If your arrays are contiguous, you don't really need the strides (use the
 itemsize instead). How is ndarray broken by this?
 
 ndarray is broken by this change because it expects the stride to be a
 multiple of the itemsize (I think; I'm just looking at code here, as I
 haven't had time to build NumPy 1.8 yet to test this); it has a slightly
 more restricted model for what data can look like than NumPy has, and it's
 easier to always just look at the stride for all sizes rather than
 special-case for size=1.  I think that means the bug is ndarray's (indeed,
 it's probably the kind of bug this new behavior was intended to catch, as I
 should be handling the case of non-itemsize-multiple strides more
 gracefully even when size > 1), and I'm working on a fix for it there now.
 
 Thanks, Neil, for bringing this to my attention, and to all the NumPy dev's
 for help in explaining what's going on.
 
 Jim

The problem I encountered, is that canonical generic c++ code looks like:

template <typename in_t>
void F (in_t in) {
  int size = boost::size (in);
...

This fails when in is nd::Array<T,1,0>.  In that case, the iterator is 
strided_iterator.  And here, I find (via gdb), that stride==0.

The failure occurs here:

StridedIterator.h:

template <typename U>
int distance_to(StridedIterator<U> const & other) const {
    return std::distance(_data, other._data) / _stride;
}

How it happens that stride==0, and how to fix it, I don't know.




[Numpy-discussion] numpy.savetxt to string?

2013-11-06 Thread Neal Becker
According to doc, savetxt only allows a file name.  I'm surprised it doesn't 
allow a file-like object.  How can I format text into a string?  I would like 
savetxt to accept StringIO for this.
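
A sketch, assuming a numpy version whose savetxt accepts any object with a
write method:

import io
import numpy as np

buf = io.StringIO ()
np.savetxt (buf, np.arange (6).reshape (2, 3), fmt='%d')
text = buf.getvalue ()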



Re: [Numpy-discussion] strange behavior of += with object array

2013-11-01 Thread Neal Becker
Robert Kern wrote:

 On Thu, Oct 31, 2013 at 11:22 PM, Neal Becker ndbeck...@gmail.com wrote:

 import numpy as np
 #from accumulator import stat2nd_double

 ## Just to make this really clear, I'm making a dummy
 ## class here that overloads +=
 class stat2nd_double (object):
     def __iadd__ (self, x):
         return self

 m = np.empty ((2,3), dtype=object)
 m[:,:] = stat2nd_double()

 m[0,0] += 1.0   # no error here

 m += np.ones ((2,3))  # but this gives an error

 Traceback (most recent call last):
   File "test_stat.py", line 13, in <module>
     m += np.ones ((2,3))
 TypeError: unsupported operand type(s) for +: 'stat2nd_double' and 'float'
 
 Yeah, numpy doesn't pass down the __iadd__() to the underlying objects.
 object arrays are the only dtype that could implement __iadd__() at that
 level, so it has never been an operation added to the generic numeric ops
 system. Look in numpy/core/src/multiarray/numeric.c for more details. It
 might be possible to implement a special case for object arrays in
 array_inplace_add() and the rest.
 
 --
 Robert Kern

Is it worth filing an enhancement request?



Re: [Numpy-discussion] strange behavior of += with object array

2013-11-01 Thread Neal Becker
Robert Kern wrote:

 On Thu, Oct 31, 2013 at 11:22 PM, Neal Becker ndbeck...@gmail.com wrote:

 import numpy as np
 #from accumulator import stat2nd_double

 ## Just to make this really clear, I'm making a dummy
 ## class here that overloads +=
 class stat2nd_double (object):
     def __iadd__ (self, x):
         return self

 m = np.empty ((2,3), dtype=object)
 m[:,:] = stat2nd_double()

 m[0,0] += 1.0   # no error here

 m += np.ones ((2,3))  # but this gives an error

 Traceback (most recent call last):
   File "test_stat.py", line 13, in <module>
     m += np.ones ((2,3))
 TypeError: unsupported operand type(s) for +: 'stat2nd_double' and 'float'
 
 Yeah, numpy doesn't pass down the __iadd__() to the underlying objects.
 object arrays are the only dtype that could implement __iadd__() at that
 level, so it has never been an operation added to the generic numeric ops
 system. Look in numpy/core/src/multiarray/numeric.c for more details. It
 might be possible to implement a special case for object arrays in
 array_inplace_add() and the rest.
 
 --
 Robert Kern

What is a suggested workaround?

The best I could think of is:
np.vectorize (lambda s,x: s.__iadd__(x)) (m, np.ones ((2,3)))

where m is my matrix of objects and np.ones ((2,3)) is the array to += each
element.





Re: [Numpy-discussion] ANN: NumPy 1.8.0 release.

2013-10-31 Thread Neal Becker
Thanks for the release!

I am having a hard time finding the build instructions.  Could you please add 
this to the announcement?



Re: [Numpy-discussion] ANN: NumPy 1.8.0 release.

2013-10-31 Thread Neal Becker
Charles R Harris wrote:

 On Thu, Oct 31, 2013 at 6:58 AM, Neal Becker ndbeck...@gmail.com wrote:
 
 Thanks for the release!

 I am having a hard time finding the build instructions.  Could you please
 add
 this to the announcement?

 
 What sort of build instructions are you looking for?
 
 Chuck

How to build from source, what are some settings for site.cfg.  I did get this 
figured out (wanted to try out openblas), but it could be a small barrier to 
new users.



[Numpy-discussion] strange behavior of += with object array

2013-10-31 Thread Neal Becker
import numpy as np
#from accumulator import stat2nd_double 


## Just to make this really clear, I'm making a dummy
## class here that overloads +=
class stat2nd_double (object):
    def __iadd__ (self, x):
        return self

m = np.empty ((2,3), dtype=object)
m[:,:] = stat2nd_double()

m[0,0] += 1.0   # no error here

m += np.ones ((2,3))  # but this gives an error

Traceback (most recent call last):
  File "test_stat.py", line 13, in <module>
    m += np.ones ((2,3))
TypeError: unsupported operand type(s) for +: 'stat2nd_double' and 'float'




[Numpy-discussion] is np vector a sequence?

2013-10-28 Thread Neal Becker
isinstance (np.zeros (10), collections.Sequence)
Out[36]: False

That's unfortunate.
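
One possible workaround (my assumption, not an official recommendation) is to
register ndarray as a virtual subclass of the ABC:

import collections.abc
import numpy as np

collections.abc.Sequence.register (np.ndarray)
print (isinstance (np.zeros (10), collections.abc.Sequence))   # True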



Re: [Numpy-discussion] Valid algorithm for generating a 3D Wiener Process?

2013-09-25 Thread Neal Becker
David Goldsmith wrote:

 Is this a valid algorithm for generating a 3D Wiener process?  (When I
 graph the results, they certainly look like potential Brownian motion
 tracks.)
 
  def Wiener3D(incr, N):
      r = incr*(R.randint(3, size=(N,))-1)
      r[0] = 0
      r = r.cumsum()
      t = 2*np.pi*incr*(R.randint(3, size=(N,))-1)
      t[0] = 0
      t = t.cumsum()
      p = np.pi*incr*(R.randint(3, size=(N,))-1)
      p[0] = 0
      p = p.cumsum()
      x = r*np.cos(t)*np.sin(p)
      y = r*np.sin(t)*np.sin(p)
      z = r*np.cos(p)
      return np.array((x,y,z)).T
 
 Thanks!
 
 DG

Not the kind of Wiener process I learned of.  This would be the integral of 
white noise.  Here you have used:

1. discrete increments
2. spherical coordinates
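
A minimal sketch of that construction -- the discrete integral of white noise,
i.e. cumulative sums of independent Gaussian increments with std sqrt(dt):

import numpy as np

def wiener3d (dt, n, rng=None):
    rng = np.random.default_rng () if rng is None else rng
    steps = rng.normal (scale=np.sqrt (dt), size=(n, 3))
    steps[0] = 0.0                 # start at the origin
    return np.cumsum (steps, axis=0)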



[Numpy-discussion] openblas?

2013-09-23 Thread Neal Becker
Does numpy/scipy support building with openblas for blas,lapack instead of 
atlas?



[Numpy-discussion] my code (python version)

2013-09-10 Thread Neal Becker
Here's code I use for basic 2d histogram:

import numpy as np

def nint (x):
    if x >= 0:
        return int (x + 0.5)
    else:
        return int (x - 0.5)

class histogram2d (object):
    def __init__ (self, min, max, delta, clip=True):
        self.min = min
        self.max = max
        self.delta = delta
        self.clip = clip
        self.n_buckets = int ((max - min)/delta + 1)
        self.buckets = np.zeros ((self.n_buckets, self.n_buckets), dtype=int)

    def apply (self, x):
        if x > self.max:
            if self.clip:
                return self.max
            else:
                raise RuntimeError
        elif x < self.min:
            if self.clip:
                return self.min
            else:
                raise RuntimeError
        else:
            return x

    def __iadd__ (self, z):
        if hasattr (z, '__len__'):
            for e in z:
                self += e
            return self
        else:
            x_index = nint ((self.apply (z.real) - self.min) / self.delta)
            y_index = nint ((self.apply (z.imag) - self.min) / self.delta)
            self.buckets[x_index, y_index] += 1
            return self
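
Example usage (a sketch):

h = histogram2d (-1.0, 1.0, 0.1)
h += np.array ((0.2 + 0.3j, -0.5 - 0.1j, 0.2 + 0.3j))
print h.buckets.sum ()   # 3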




Re: [Numpy-discussion] ANN: 1.8.0b2 release.

2013-09-09 Thread Neal Becker
Charles R Harris wrote:

 Hi all,
 
 I'm happy to announce the second beta release of Numpy 1.8.0. This release
 should solve the Windows problems encountered in the first beta. Many
 thanks to Christolph Gohlke and Julian Taylor for their hard work in
 getting those issues settled.
 
 It would be good if folks running OS X could try out this release and
 report any issues on the numpy-dev mailing list. Unfortunately the files
 still need to be installed from source as dmg files are not avalable at
 this time.
 
 Source tarballs and release notes can be found at
 https://sourceforge.net/projects/numpy/files/NumPy/1.8.0b2/. The Windows
 and OS X installers will follow when the infrastructure issues are dealt
 with.
 
 Chuck

Fedora 19 linux x86_64
mkl Package ID: l_mkl_11.0.3.163
MKL ERROR: Parameter 4 was incorrect on entry to DGETRF.
...
FAIL: test_linalg.test_xerbla
--
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/nbecker/.local/lib/python2.7/site-packages/numpy/testing/decorators.py", line 146, in skipper_func
    return f(*args, **kwargs)
  File "/home/nbecker/.local/lib/python2.7/site-packages/numpy/linalg/tests/test_linalg.py", line 925, in test_xerbla
    assert_(False)
  File "/home/nbecker/.local/lib/python2.7/site-packages/numpy/testing/utils.py", line 44, in assert_
    raise AssertionError(msg)
AssertionError

--
Ran 5271 tests in 56.843s

FAILED (KNOWNFAIL=5, SKIP=13, failures=1)
<nose.result.TextTestResult run=5271 errors=0 failures=1>
 



Re: [Numpy-discussion] numpy 1.8.0b1 mkl test_xerbla failure

2013-09-06 Thread Neal Becker
Charles R Harris wrote:

 On Thu, Sep 5, 2013 at 5:34 AM, Neal Becker ndbeck...@gmail.com wrote:
 
 Just want to make sure this post had been noted:

 Neal Becker wrote:

  Built on fedora linux 19 x86_64 using mkl:
 
  build OK using:
  env ATLAS=/usr/lib64 FFTW=/usr/lib64 BLAS=/usr/lib64
 LAPACK=/usr/lib64
  CFLAGS=-mtune=native -march=native -O3 LDFLAGS=-Wl,-
  rpath=/opt/intel/mkl/lib/intel64 python setup.py build
 
  and attached site.cfg:
 
  ==
  FAIL: test_linalg.test_xerbla
  --
  Traceback (most recent call last):
File /usr/lib/python2.7/site-packages/nose/case.py, line 197, in
 runTest
  self.test(*self.arg)
File /home/nbecker/.local/lib/python2.7/site-
  packages/numpy/testing/decorators.py, line 146, in skipper_func
  return f(*args, **kwargs)
File /home/nbecker/.local/lib/python2.7/site-
  packages/numpy/linalg/tests/test_linalg.py, line 925, in test_xerbla
  assert_(False)
File /home/nbecker/.local/lib/python2.7/site-
  packages/numpy/testing/utils.py, line 44, in assert_
  raise AssertionError(msg)
  AssertionError
 
  --
  Ran 5271 tests in 57.567s
 
  FAILED (KNOWNFAIL=5, SKIP=13, failures=1)
  <nose.result.TextTestResult run=5271 errors=0 failures=1>



 What version of MKL is this? The bug doesn't show in Christolph's compiles
 with MKL on windows, so it might be an MKL bug. Is it repeatable?
 

seems to be 2013.3.163



[Numpy-discussion] numpy 1.8.0b1 mkl test_xerbla failure

2013-09-05 Thread Neal Becker
Just want to make sure this post had been noted:

Neal Becker wrote:

 Built on fedora linux 19 x86_64 using mkl:
 
 build OK using:
 env ATLAS=/usr/lib64 FFTW=/usr/lib64 BLAS=/usr/lib64 LAPACK=/usr/lib64
 CFLAGS=-mtune=native -march=native -O3 LDFLAGS=-Wl,-
 rpath=/opt/intel/mkl/lib/intel64 python setup.py build
 
 and attached site.cfg:
 
 ==
 FAIL: test_linalg.test_xerbla
 --
 Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
     self.test(*self.arg)
   File "/home/nbecker/.local/lib/python2.7/site-packages/numpy/testing/decorators.py", line 146, in skipper_func
     return f(*args, **kwargs)
   File "/home/nbecker/.local/lib/python2.7/site-packages/numpy/linalg/tests/test_linalg.py", line 925, in test_xerbla
     assert_(False)
   File "/home/nbecker/.local/lib/python2.7/site-packages/numpy/testing/utils.py", line 44, in assert_
     raise AssertionError(msg)
 AssertionError
 
 --
 Ran 5271 tests in 57.567s
 
 FAILED (KNOWNFAIL=5, SKIP=13, failures=1)
 <nose.result.TextTestResult run=5271 errors=0 failures=1>




Re: [Numpy-discussion] ANN: Scipy 0.13.0 beta 1 release

2013-09-04 Thread Neal Becker
Failed building on fedora 19 x86_64 using atlas:

creating build/temp.linux-x86_64-2.7/numpy/linalg
creating build/temp.linux-x86_64-2.7/numpy/linalg/lapack_lite
compile options: '-DATLAS_INFO=\3.8.4\ -I/usr/include -Inumpy/core/include -
Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy -Inumpy/core/src/private -
Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -
Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -
Inumpy/core/include -I/usr/include/python2.7 -c'
gcc: numpy/linalg/lapack_litemodule.c
gcc: numpy/linalg/lapack_lite/python_xerbla.c
/usr/bin/gfortran -Wall -Wl,-rpath=/opt/intel/mkl/lib/intel64 build/temp.linux-
x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-
x86_64-2.7/numpy/linalg/lapack_lite/python_xerbla.o -L/usr/lib64/atlas -
L/usr/lib64 -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas 
-latlas 
-lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so
/usr/lib/gcc/x86_64-redhat-linux/4.8.1/../../../../lib64/crt1.o: In function 
`_start':
(.text+0x20): undefined reference to `main'
collect2: error: ld returned 1 exit status
/usr/lib/gcc/x86_64-redhat-linux/4.8.1/../../../../lib64/crt1.o: In function 
`_start':
(.text+0x20): undefined reference to `main'
collect2: error: ld returned 1 exit status

Build command was:
env ATLAS=/usr/lib64 FFTW=/usr/lib64 BLAS=/usr/lib64 LAPACK=/usr/lib64 
CFLAGS=-mtune=native -march=native -O3 LDFLAGS=-Wl,-
rpath=/opt/intel/mkl/lib/intel64 python setup.py build

attached site.cfg# This file provides configuration information about non-Python dependencies for
# numpy.distutils-using packages. Create a file like this called site.cfg next
# to your package's setup.py file and fill in the appropriate sections. Not all
# packages will use all sections so you should leave out sections that your
# package does not use.

# To assist automatic installation like easy_install, the user's home directory
# will also be checked for the file ~/.numpy-site.cfg .

# The format of the file is that of the standard library's ConfigParser module.
#
#   http://www.python.org/doc/current/lib/module-ConfigParser.html
#
# Each section defines settings that apply to one particular dependency. Some of
# the settings are general and apply to nearly any section and are defined here.
# Settings specific to a particular section will be defined near their section.
#
#   libraries
#   Comma-separated list of library names to add to compile the extension
#   with. Note that these should be just the names, not the filenames. For
#   example, the file libfoo.so would become simply foo.
#   libraries = lapack,f77blas,cblas,atlas
#
#   library_dirs
#   List of directories to add to the library search path when compiling
#   extensions with this dependency. Use the character given by os.pathsep
#   to separate the items in the list. Note that this character is known to
#   vary on some unix-like systems; if a colon does not work, try a comma.
#   This also applies to include_dirs and src_dirs (see below).
#   On UN*X-type systems (OS X, most BSD and Linux systems):
#   library_dirs = /usr/lib:/usr/local/lib
#   On Windows:
#   library_dirs = c:\mingw\lib,c:\atlas\lib
#   On some BSD and Linux systems:
#   library_dirs = /usr/lib,/usr/local/lib
#
#   include_dirs
#   List of directories to add to the header file earch path.
#   include_dirs = /usr/include:/usr/local/include
#
#   src_dirs 
#   List of directories that contain extracted source code for the
#   dependency. For some dependencies, numpy.distutils will be able to build
#   them from source if binaries cannot be found. The FORTRAN BLAS and
#   LAPACK libraries are one example. However, most dependencies are more
#   complicated and require actual installation that you need to do
#   yourself.
#   src_dirs = /home/rkern/src/BLAS_SRC:/home/rkern/src/LAPACK_SRC
#
#   search_static_first
#   Boolean (one of (0, false, no, off) for False or (1, true, yes, on) for
#   True) to tell numpy.distutils to prefer static libraries (.a) over
#   shared libraries (.so). It is turned off by default.
#   search_static_first = false

# Defaults
# 
# The settings given here will apply to all other sections if not overridden.
# This is a good place to add general library and include directories like
# /usr/local/{lib,include}
#
[DEFAULT]
library_dirs = /usr/lib64
include_dirs = /usr/include

# Optimized BLAS and LAPACK
# -
# Use the blas_opt and lapack_opt sections to give any settings that are
# required to link against your chosen BLAS and LAPACK, including the regular
# FORTRAN reference BLAS and also ATLAS. Some other sections still exist for
# linking against certain optimized libraries (e.g. [atlas], [lapack_atlas]),
# however, they are now deprecated and should not be used.
#
# These are typical 

Re: [Numpy-discussion] ANN: Numpy 1.8.0 beta 1 release

2013-09-04 Thread Neal Becker
Built on fedora linux 19 x86_64 using mkl:

build OK using:
env ATLAS=/usr/lib64 FFTW=/usr/lib64 BLAS=/usr/lib64 LAPACK=/usr/lib64 
CFLAGS=-mtune=native -march=native -O3 LDFLAGS=-Wl,-
rpath=/opt/intel/mkl/lib/intel64 python setup.py build

and attached site.cfg:

==
FAIL: test_linalg.test_xerbla
--
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/nbecker/.local/lib/python2.7/site-packages/numpy/testing/decorators.py", line 146, in skipper_func
    return f(*args, **kwargs)
  File "/home/nbecker/.local/lib/python2.7/site-packages/numpy/linalg/tests/test_linalg.py", line 925, in test_xerbla
    assert_(False)
  File "/home/nbecker/.local/lib/python2.7/site-packages/numpy/testing/utils.py", line 44, in assert_
    raise AssertionError(msg)
AssertionError

--
Ran 5271 tests in 57.567s

FAILED (KNOWNFAIL=5, SKIP=13, failures=1)
<nose.result.TextTestResult run=5271 errors=0 failures=1>

# This file provides configuration information about non-Python dependencies for
# numpy.distutils-using packages. Create a file like this called site.cfg next
# to your package's setup.py file and fill in the appropriate sections. Not all
# packages will use all sections so you should leave out sections that your
# package does not use.

# To assist automatic installation like easy_install, the user's home directory
# will also be checked for the file ~/.numpy-site.cfg .

# The format of the file is that of the standard library's ConfigParser module.
#
#   http://www.python.org/doc/current/lib/module-ConfigParser.html
#
# Each section defines settings that apply to one particular dependency. Some of
# the settings are general and apply to nearly any section and are defined here.
# Settings specific to a particular section will be defined near their section.
#
#   libraries
#   Comma-separated list of library names to add to compile the extension
#   with. Note that these should be just the names, not the filenames. For
#   example, the file libfoo.so would become simply foo.
#   libraries = lapack,f77blas,cblas,atlas
#
#   library_dirs
#   List of directories to add to the library search path when compiling
#   extensions with this dependency. Use the character given by os.pathsep
#   to separate the items in the list. Note that this character is known to
#   vary on some unix-like systems; if a colon does not work, try a comma.
#   This also applies to include_dirs and src_dirs (see below).
#   On UN*X-type systems (OS X, most BSD and Linux systems):
#   library_dirs = /usr/lib:/usr/local/lib
#   On Windows:
#   library_dirs = c:\mingw\lib,c:\atlas\lib
#   On some BSD and Linux systems:
#   library_dirs = /usr/lib,/usr/local/lib
#
#   include_dirs
#   List of directories to add to the header file earch path.
#   include_dirs = /usr/include:/usr/local/include
#
#   src_dirs 
#   List of directories that contain extracted source code for the
#   dependency. For some dependencies, numpy.distutils will be able to build
#   them from source if binaries cannot be found. The FORTRAN BLAS and
#   LAPACK libraries are one example. However, most dependencies are more
#   complicated and require actual installation that you need to do
#   yourself.
#   src_dirs = /home/rkern/src/BLAS_SRC:/home/rkern/src/LAPACK_SRC
#
#   search_static_first
#   Boolean (one of (0, false, no, off) for False or (1, true, yes, on) for
#   True) to tell numpy.distutils to prefer static libraries (.a) over
#   shared libraries (.so). It is turned off by default.
#   search_static_first = false

# Defaults
# 
# The settings given here will apply to all other sections if not overridden.
# This is a good place to add general library and include directories like
# /usr/local/{lib,include}
#
[DEFAULT]
library_dirs = /usr/lib64
include_dirs = /usr/include

# Optimized BLAS and LAPACK
# -
# Use the blas_opt and lapack_opt sections to give any settings that are
# required to link against your chosen BLAS and LAPACK, including the regular
# FORTRAN reference BLAS and also ATLAS. Some other sections still exist for
# linking against certain optimized libraries (e.g. [atlas], [lapack_atlas]),
# however, they are now deprecated and should not be used.
#
# These are typical configurations for ATLAS (assuming that the library and
# include directories have already been set in [DEFAULT]; the include directory
# is important for the BLAS C interface):
#
#[blas_opt]
#libraries = f77blas, cblas, atlas
#
#[lapack_opt]
#libraries = lapack, f77blas, cblas, atlas
#
# If your ATLAS was compiled with pthreads, the 

Re: [Numpy-discussion] ANN: Scipy 0.13.0 beta 1 release

2013-09-04 Thread Neal Becker
David Cournapeau wrote:

 On Wed, Sep 4, 2013 at 1:00 PM, Neal Becker ndbeck...@gmail.com wrote:
 
 Failed building on fedora 19 x86_64 using atlas:

 creating build/temp.linux-x86_64-2.7/numpy/linalg
 creating build/temp.linux-x86_64-2.7/numpy/linalg/lapack_lite
 compile options: '-DATLAS_INFO=\3.8.4\ -I/usr/include
 -Inumpy/core/include -
 Ibuild/src.linux-x86_64-2.7/numpy/core/include/numpy
 -Inumpy/core/src/private -
 Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -
 Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort
 -
 Inumpy/core/include -I/usr/include/python2.7 -c'
 gcc: numpy/linalg/lapack_litemodule.c
 gcc: numpy/linalg/lapack_lite/python_xerbla.c
 /usr/bin/gfortran -Wall -Wl,-rpath=/opt/intel/mkl/lib/intel64
 build/temp.linux-
 x86_64-2.7/numpy/linalg/lapack_litemodule.o build/temp.linux-
 x86_64-2.7/numpy/linalg/lapack_lite/python_xerbla.o -L/usr/lib64/atlas -
 L/usr/lib64 -Lbuild/temp.linux-x86_64-2.7 -llapack -lptf77blas -lptcblas
 -latlas
 -lpython2.7 -lgfortran -o
 build/lib.linux-x86_64-2.7/numpy/linalg/lapack_lite.so
 /usr/lib/gcc/x86_64-redhat-linux/4.8.1/../../../../lib64/crt1.o: In
 function
 `_start':
 (.text+0x20): undefined reference to `main'
 collect2: error: ld returned 1 exit status
 /usr/lib/gcc/x86_64-redhat-linux/4.8.1/../../../../lib64/crt1.o: In
 function
 `_start':
 (.text+0x20): undefined reference to `main'
 collect2: error: ld returned 1 exit status

 Build command was:
 env ATLAS=/usr/lib64 FFTW=/usr/lib64 BLAS=/usr/lib64 LAPACK=/usr/lib64
 CFLAGS=-mtune=native -march=native -O3 LDFLAGS=-Wl,-
 rpath=/opt/intel/mkl/lib/intel64 python setup.py build

 
 This command never worked: you need to add the -shared flag to LDFLAGS (and
 you may want to remove rpath to MKL if you use ATLAS).
 
 David
 

OK, building with -shared (and removing rpath) works.  numpy.test('full') 
reports no unexpected failures.






[Numpy-discussion] lots of warnings with python3

2013-08-28 Thread Neal Becker
I tried running python2 -3 on some code, and found numpy
produces a lot of warnings.

Many like:
python -3 -c 'import numpy'
...
/usr/lib64/python2.7/site-packages/numpy/lib/polynomial.py:928: 
DeprecationWarning: Overriding __eq__ blocks inheritance of __hash__ in 3.x

But also:
/usr/lib64/python2.7/site-packages/numpy/lib/shape_base.py:838: 
DeprecationWarning: classic int division
  n /= max(dim_in,1)



[Numpy-discussion] forwarded article (embracing tensors)

2013-05-30 Thread Neal Becker
I thought the topic of this article might be of interest here:

https://groups.google.com/forum/?fromgroups#!topic/julia-dev/GAdcYzmibyo



[Numpy-discussion] another indexing question

2013-05-20 Thread Neal Becker
I have a system that transmits signals for an alphabet of M symbols
over an additive Gaussian noise channel.  The receiver has a
1-d array of complex received values.  I'd like to find the means
of the received values according to the symbol that was transmitted.

So transmit symbol indexes might be:

x = [0, 1, 2, 1, 3, ...]

and receive output might be:

y = [(1+1j), (1-1j), ...]

Suppose the alphabet was M=4.  Then I'd like to get an array of means

m[0...3] that correspond to the values of y for each of the corresponding
values of x.

I can't think of a better way than manually using loops.  Any tricks here?
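
One trick (a sketch): np.bincount only takes real weights, so split the
complex values into real and imaginary parts:

import numpy as np

M = 4
x = np.array ((0, 1, 2, 1, 3, 0))                      # transmitted symbol indexes
y = np.array ((1+1j, 1-1j, -1+1j, 1-1j, -1-1j, 1+1j))  # received values

counts = np.bincount (x, minlength=M)
sums = (np.bincount (x, weights=y.real, minlength=M)
        + 1j * np.bincount (x, weights=y.imag, minlength=M))
m = sums / np.maximum (counts, 1)                      # mean per symbol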



Re: [Numpy-discussion] RuntimeWarning: divide by zero encountered in log

2013-05-17 Thread Neal Becker
Nathaniel Smith wrote:

 On 16 May 2013 19:48, Jonathan Helmus jjhel...@gmail.com wrote:

 On 05/16/2013 01:42 PM, Neal Becker wrote:
  Is there a way to get a traceback instead of just printing the
  line that triggered the error?
 
 Neal,

  Look at the numpy.seterr function.  You can use it to change how
 floating-point errors are handled, including raising a
 FloatingPointError with a traceback as opposed to printing a
 RuntimeWarning.

 Example

 $ cat foo.py
 import numpy as np

 np.seterr(divide='raise')

 a = np.array([1,1,1], dtype='float32')
 a / 0

 $ python foo.py
 Traceback (most recent call last):
   File "test.py", line 6, in <module>
     a / 0
 FloatingPointError: divide by zero encountered in divide
 
 You also have the option of using Python's general ability to customize how
 any warning is handled - see the 'warnings' module and -W switch.
 
 If you just want a traceback printed without an exception then I think you
 can do that with np.seterr too (using np.seterrcall).
 
 -n

I tried this:

import traceback

np.seterrcall (lambda a,b: traceback.print_stack)
np.seterr (all='call')
np.seterrcall (lambda a,b: traceback.print_stack)

but it doesn't seem to do anything, I still see numpy warning as before.
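
(One visible bug, as far as I can tell: the lambda returns
traceback.print_stack without calling it, so the handler would print nothing.
A sketch that does print a traceback:)

import traceback
import numpy as np

def on_err (err_type, flag):
    traceback.print_stack ()       # actually call it

np.seterr (all='call')
np.seterrcall (on_err)

np.log (np.zeros (3))              # handler runs instead of the warning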



Re: [Numpy-discussion] somewhat less stupid problem with 0-d arrays

2013-05-11 Thread Neal Becker
Sebastian Berg wrote:

 On Fri, 2013-05-10 at 19:57 -0400, Neal Becker wrote:
 It would be convenient if in arithmetic 0-d arrays were just ignored - it
 would seem to me to be convenient in generic code where a degenerate array is
 treated as nothing
 
 
 Small naming detail. A 0-d array is an array with exactly one element
 and no dimensions, i.e. np.array(0), and behaves mostly like a scalar.
 What you have is an empty array with no elements.
 
 np.zeros ((0,0)) + np.ones ((2,2))
 ---------------------------------------------------------------------------
 ValueError                                Traceback (most recent call last)
 <ipython-input-17-27af0e0bbc6f> in <module>()
 ----> 1 np.zeros ((0,0)) + np.ones ((2,2))
 
 ValueError: operands could not be broadcast together with shapes (0,0) (2,2)
 
 
 
 I am not sure in what general code you need that, it seems weird to me,
 since np.zeros((N, N)) + np.ones((2,2)) would also only work if N=1. And
 if N=1, it looks like it might be a reduction result.
 Empty arrays *do* support most reductions (making them not empty, like
 summing them gives 0). And they do broadcast under the normal
 broadcasting rules, such that np.zeros((0,0)) + np.zeros((10,1,1)) gives
 np.zeros((10,0,0)).  For the most part, they are not a special case and
 just work the same as non-empty arrays, which seems right to me.
 
 - Sebastian
 

OK, my code looks like this:

results = np.dot (a, b) + np.dot (c, d)

I have a case where I want to basically turn off that second dot product, and 
I thought if c and d were 0-size it should have that effect.
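
(It can have that effect if the contracted dimension is the empty one, rather
than the output dimensions -- a sketch:)

import numpy as np

a = np.ones ((2, 3)); b = np.ones ((3, 2))
c = np.ones ((2, 0)); d = np.ones ((0, 2))   # empty inner dimension

results = np.dot (a, b) + np.dot (c, d)      # np.dot (c, d) == zeros ((2, 2))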



[Numpy-discussion] 0-dim arrays inconsistency

2013-05-10 Thread Neal Becker
np.array ((0,0))
Out[10]: array([0, 0])   ok, it's 2 dimensional

In [11]: np.array ((0,0)).shape
Out[11]: (2,)   except, it isn't




Re: [Numpy-discussion] 0-dim arrays inconsistency

2013-05-10 Thread Neal Becker
Neal Becker wrote:

 np.array ((0,0))
 Out[10]: array([0, 0])   ok, it's 2 dimensional
 
 In [11]: np.array ((0,0)).shape
 Out[11]: (2,)   except, it isn't

Sorry for the stupid question - please ignore



[Numpy-discussion] somewhat less stupid problem with 0-d arrays

2013-05-10 Thread Neal Becker
It would be convenient if in arithmetic 0-d arrays were just ignored - it would
seem to me to be convenient in generic code where a degenerate array is treated
as nothing

np.zeros ((0,0)) + np.ones ((2,2))
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-17-27af0e0bbc6f> in <module>()
----> 1 np.zeros ((0,0)) + np.ones ((2,2))

ValueError: operands could not be broadcast together with shapes (0,0) (2,2)





[Numpy-discussion] what do I get if I build with MKL?

2013-04-19 Thread Neal Becker
What sorts of functions take advantage of MKL?

Linear Algebra (equation solving)?

Something like dot product?

exp, log, trig of matrix?

basic numpy arithmetic? (add matrixes)
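
(A side note, not an answer from the thread: np.show_config() reports which
BLAS/LAPACK the build actually linked against.)

import numpy as np
np.show_config ()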


