[Numpy-discussion] ArrayList object

2014-01-04 Thread Nicolas Rougier

Hi all,

I've been coding an ArrayList object based on a regular numpy array. This
object allows one to dynamically append/insert/delete/access items. I found it
quite convenient since it lets you manipulate an array as if it were a list
with elements of different sizes but with the same underlying type (= array dtype).

# Creation from a nested list
L = ArrayList([ [0], [1,2], [3,4,5], [6,7,8,9] ])

# Creation from an array + common item size
L = ArrayList(np.ones(1000), 3)

# Empty list
L = ArrayList(dtype=int)

# Creation from an array + individual item sizes
L = ArrayList(np.ones(10), 1+np.arange(4))

# Access to elements:
print L[0], L[1], L[2], L[3]
[0] [1 2] [3 4 5] [6 7 8 9]

# Operations on elements
L[:2] += 1
print L.data
[1 2 3 3 4 5 6 7 8 9]
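
As a rough illustration of how such a container can be laid out (an editorial
sketch with a hypothetical FlatList class, not the implementation from the
repository linked below): all elements live in one flat typed buffer, and
per-item offsets let indexing return views into that buffer.

import numpy as np

class FlatList(object):
    # Hypothetical sketch: every element is stored in one flat typed
    # buffer, and _starts[i] marks where item i begins, so indexing
    # simply returns views into that buffer.
    def __init__(self, nested, dtype=None):
        items = [np.asarray(item, dtype=dtype) for item in nested]
        self._data = (np.concatenate(items) if items
                      else np.empty(0, dtype=dtype))
        sizes = [len(item) for item in items]
        self._starts = np.concatenate(([0], np.cumsum(sizes))).astype(int)

    def __len__(self):
        return len(self._starts) - 1

    def _bounds(self, key):
        # Map an integer or a contiguous slice of items to a range
        # inside the flat buffer.
        if isinstance(key, slice):
            start, stop, step = key.indices(len(self))
            assert step == 1
            return self._starts[start], self._starts[stop]
        return self._starts[key], self._starts[key + 1]

    def __getitem__(self, key):
        lo, hi = self._bounds(key)
        return self._data[lo:hi]

    def __setitem__(self, key, value):
        lo, hi = self._bounds(key)
        self._data[lo:hi] = value

    @property
    def data(self):
        return self._data

With this layout, L = FlatList([[0], [1, 2], [3, 4, 5], [6, 7, 8, 9]]);
L[:2] += 1; print(L.data) reproduces the output above, and append/insert/delete
amount to rewriting a slice of the buffer and adjusting the offsets.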


Source code is available from: https://github.com/rougier/array-list

I wonder if there is any interest in having such an object within core numpy
(np.list?).


Nicolas







Re: [Numpy-discussion] proposal: min, max of complex should give warning

2014-01-04 Thread Ralf Gommers
On Tue, Dec 31, 2013 at 5:45 PM, Neal Becker ndbeck...@gmail.com wrote:

 Ralf Gommers wrote:

  On Tue, Dec 31, 2013 at 4:52 PM, Neal Becker ndbeck...@gmail.com
 wrote:
 
  Cera, Tim wrote:
 
   I don't work with complex numbers, but just sampling what others do:
  
  
   Python: no ordering, results in TypeError
  
   Matlab: sorts by magnitude
   http://www.mathworks.com/help/matlab/ref/sort.html
  
   R: sorts first by real, then by imaginary
   http://stat.ethz.ch/R-manual/R-patched/library/base/html/sort.html
  
   Numpy: sorts first by real, then by imaginary (the documentation link
   below calls this sort 'lexicographical' which I don't think is
   correct)
   http://docs.scipy.org/doc/numpy/reference/generated/numpy.sort.html
  
  
   I would think that the Matlab sort might be more useful, but easy
   enough by using the absolute value.
  
   I think what Numpy does is normal enough to not justify a warning, but I'll
   leave this to others because, as I pointed out at the beginning, I don't
   work with complex numbers.
  
   Kindest regards,
   Tim
 
  But I'm not proposing to change numpy's result, which I'm sure would raise
  many objections. I'm just asking to give a warning, because I think in most
  cases this is actually a mistake on the user's part. Just like the warning
  currently given when complex data are truncated to real part.
 
 
  Keep in mind that warnings can be highly annoying. If you're a user who
  uses this functionality regularly (and you know what you're doing), then
  you're going to be very unhappy to have to wrap each function call in:
  olderr = np.seterr(all='ignore')
  max(...)
  np.seterr(**olderr)
  or in:
  with warnings.catch_warnings():
      warnings.filterwarnings('ignore', ...)
      max(...)
 
  The actual behavior isn't documented now, it looks like, so that should be
  done, probably in the Notes section of max/min.
 
  As for your proposal, it would be good to know if adding a warning would
  actually catch any bugs. For the truncation warning it caught several in
  scipy and other libs IIRC.
 
  Ralf

 I tripped over it yesterday, which is what prompted my suggestion.


That I had guessed. I meant: can you try to add this warning and then see
if it catches any bugs or displays any incorrect warnings for scipy and
some scikits?

Ralf
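
An illustrative aside (editorial, not part of the original messages): numpy
orders complex values by real part first and imaginary part second, so max()
does not pick the value with the largest magnitude; the Matlab-style behavior
has to be requested explicitly.

import numpy as np

z = np.array([1 - 1j, 1 + 1j, 0 + 5j])

# Ordering is by real part first, then imaginary part:
print(np.sort(z))               # [0.+5.j  1.-1.j  1.+1.j]
print(np.max(z))                # (1+1j), not the element with largest magnitude

# Magnitude-based "max", as in Matlab's sort, must be asked for explicitly:
print(z[np.argmax(np.abs(z))])  # 5j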


Re: [Numpy-discussion] C99 compatible complex number tests fail

2014-01-04 Thread Ralf Gommers
On Mon, Dec 23, 2013 at 12:14 AM, Matti Picus matti.pi...@gmail.com wrote:

 Hi. I started to port the stdlib cmath C99 compatible complex number
 tests to numpy, after noticing that numpy seems to have different
 complex number routines than cmath. The work is available on a
 retest_complex branch of numpy
 https://github.com/mattip/numpy/tree/retest_complex
 The tests can be run by pulling the branch (no need to rebuild numpy)
 and running

 python path-to-branch/numpy/core/tests/test_umath_complex.py > test.log 2>&1

 So far it is just a couple of commits that run the tests on numpy; I
 did not dive into modifying the math routines. If I did the work
 correctly, failures point to some differences, most due to edge cases
 with inf and nan, but there are a number of failures due to different
 finite values (for some small definition of different).
 I guess my first question is: did I do the tests properly?


They work fine; however, you did it in a nonstandard way, which makes the
output hard to read. Some comments:
- the assert_* functions expect actual as first input and desired next,
while you have them reversed.
- it would be good to split those tests into multiple cases, for example
one per function to be tested.
- you shouldn't print anything, just let it fail. If you want to see each
individual failure, use generator tests.
- the cmathtestcases.txt file is a little nonstandard, but it should be OK to
keep it like that.

Assuming I did, the next question is: are the inconsistencies
 intentional, i.e. are they that way in order to be compatible with
 Matlab or some other non-C99-conformant library?


The implementation should conform to IEEE 754.


 For instance, a comparison between the implementation of cmath's sqrt
 and numpy's sqrt shows that numpy does not check for subnormals.


I suspect no handling for denormals was done on purpose, since that should
have a significant performance penalty. I'm not sure about other
differences, probably just following a different reference.
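
(Aside, editorial: subnormals, also called denormals, are the gradual-underflow
values between zero and the smallest positive normal double; a quick
illustration follows.)

import numpy as np

tiny = np.finfo(np.float64).tiny   # smallest positive normal double, ~2.2e-308
sub = tiny / 4                     # a subnormal (denormal) value
print(0 < sub < tiny)              # True: nonzero, yet below the normal range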

And I am probably mistaken since I am new to the generator methods of numpy,
 but could it be that trigonometric functions like acos and acosh are
 generated in umath/funcs.inc.src, using a very different algorithm than
 cmathmodule.c?


You're not mistaken.


 Would there be interest in a pull request that changed the routines to
 be more compatible with results from cmath?


I don't think compatibility with cmath should be a goal, but if you find
differences where cmath has a more accurate or faster implementation, then
a PR to adopt the cmath algorithm would be very welcome.

Ralf
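
To make the review comments above more concrete, here is an editorial sketch
(hypothetical sqrt test cases, not taken from the branch) of a per-function
split using a nose-style generator test, with the actual-first/desired-second
argument order that numpy.testing's assert_* functions expect:

import numpy as np
from numpy.testing import assert_allclose

def _check_sqrt(z, expected):
    # actual result first, desired value second
    assert_allclose(np.sqrt(np.complex128(z)), expected, rtol=1e-12)

def test_sqrt():
    # nose-style generator test: each yielded case passes or fails
    # individually, so nothing needs to be printed
    cases = [(4 + 0j, 2 + 0j),
             (-1 + 0j, 0 + 1j),
             (0 + 2j, 1 + 1j)]
    for z, expected in cases:
        yield _check_sqrt, z, expected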


Re: [Numpy-discussion] C99 compatible complex number tests fail

2014-01-04 Thread Eric Moore
On Saturday, January 4, 2014, Ralf Gommers wrote:

 [quoted text trimmed; it repeats Ralf Gommers' reply above]

Have you seen https://github.com/numpy/numpy/pull/3010? It adds C99-compatible
complex functions and tests, with build-time checks of whether the
system-provided functions can pass our tests.

I should have some time to get back to it soon, but some more eyes, tests,
and input would be good, especially since it's not clear to me whether all of
the changes will be accepted.

Eric