Re: [Numpy-discussion] Test error with ATLAS, Windows 64 bit

2014-04-23 Thread Matthew Brett
Hi,

On Wed, Apr 23, 2014 at 2:27 PM, Julian Taylor
 wrote:
> On 23.04.2014 21:25, Matthew Brett wrote:
>> Hi,
>>
>> On Tue, Apr 15, 2014 at 12:34 AM, Julian Taylor
>>  wrote:
>>> On Tue, Apr 15, 2014 at 4:30 AM, Matthew Brett  
>>> wrote:

 It looks as though mingw-w64 is at fault, and I was confused (still
 am) because of the different behavior with double and a constant:

#include <stdio.h>
#include <math.h>

 int main() {
 double z, i = -0.0;
 printf("With double %f=%f, with constant=%f\n",
i, expm1(i), expm1(-0.));
 }

 gives:

 With double -0.00=0.00, with constant=-0.00

 That was ugly to track down.

 What is the right way to work round this (using the numpy version
 instead of the system version I suppose)?

>>>
>>> The right way is to file a bug at mingw and get it fixed at the source.
>>
>> http://sourceforge.net/p/mingw-w64/code/6594/
>
> Great, thanks for reporting it.
>
>>
>>> Additionally, as this time npymath seems to be better (that's 3 bugs in
>>> npymath vs 1 in mingw on my scoreboard) one could use the mingw
>>> preprocessor define instead of HAVE_EXPM1 to select this function from
>>> npymath.
>>
>> Sorry to be slow - could you unpack what you mean in this paragraph?
>> Is there a way to force use of the numpy version of the function using
>> environment variables or similar?
>>
>
> I mean something along the lines of the attached patch; maybe you can try it.
> If it works I can file a numpy PR.

Thanks for the patch - I see what you mean now.

I wonder if there should be some mechanism by which different
compilers can set these?  What was the issue with MSVC for example -
does it need something like this too?

Cheers,

Matthew
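
The contract at stake here - C99 Annex F requires expm1(±0) to return ±0
with the sign preserved - can be checked from Python against both the
platform libm and numpy's own implementation. A minimal sketch (not from
the thread):

```python
import math

import numpy as np

# C99 Annex F (F.10.6.5) requires expm1(+/-0) to return +/-0 with the sign
# bit preserved; the mingw-w64 bug discussed above returned +0.0 for
# expm1(-0.0), which is what made it so ugly to track down.
for fn in (math.expm1, np.expm1):
    z = fn(-0.0)
    assert z == 0.0                       # magnitude is still zero
    # copysign exposes the sign bit that the == comparison above hides
    assert math.copysign(1.0, z) == -1.0
```
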
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Dates and times and Datetime64 (again)

2014-04-23 Thread Sankarshan Mudkavi
Thank you very much, I will incorporate it!

I've been quite busy for the past few weeks, but I should be much freer after
next week and can pick up on this (fixing the code and actually implementing
things).

Cheers,
Sankarshan

On Apr 23, 2014, at 5:58 PM, Chris Barker  wrote:

> On Wed, Mar 19, 2014 at 7:07 PM, Sankarshan Mudkavi  
> wrote:
> 
> I've written a rather rudimentary NEP (lacking in technical details, which I 
> will hopefully add after some further discussion and receiving 
> clarification/help on this thread).
> 
> Please let me know how to proceed and what you think should be added to the 
> current proposal (attached to this mail).
> 
> Here is a rendered version of the same:
> https://github.com/Sankarshan-Mudkavi/numpy/blob/Enhance-datetime64/doc/neps/datetime-improvement-proposal.rst
> 
> I've done a bit of copy-editing, and added some more from this discussion. 
> See the pull request on GitHub.
> 
> There are a fair number of rough edges, but I think we have a consensus among 
> the small group of folks that participated in this discussion anyway, so now 
> "all" we need is someone to actually fix the code.
> 
> If someone steps up, then we should also go in and add a bunch of unit tests, 
> as discussed in this thread.
> 
> -CHB
> 
>  
> 
> -- 
> 
> Christopher Barker, Ph.D.
> Oceanographer
> 
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115   (206) 526-6317   main reception
> 
> chris.bar...@noaa.gov

-- 
Sankarshan Mudkavi
Undergraduate in Physics, University of Waterloo
www.smudkavi.com










Re: [Numpy-discussion] Dates and times and Datetime64 (again)

2014-04-23 Thread Chris Barker
On Wed, Mar 19, 2014 at 7:07 PM, Sankarshan Mudkavi
wrote:

>
> I've written a rather rudimentary NEP (lacking in technical details, which
> I will hopefully add after some further discussion and receiving
> clarification/help on this thread).
>
> Please let me know how to proceed and what you think should be added to
> the current proposal (attached to this mail).
>
> Here is a rendered version of the same:
>
> https://github.com/Sankarshan-Mudkavi/numpy/blob/Enhance-datetime64/doc/neps/datetime-improvement-proposal.rst
>

I've done a bit of copy-editing, and added some more from this discussion.
See the pull request on GitHub.

There are a fair number of rough edges, but I think we have a consensus
among the small group of folks that participated in this discussion anyway,
so now "all" we need is someone to actually fix the code.

If someone steps up, then we should also go in and add a bunch of unit
tests, as discussed in this thread.

-CHB



-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] Test error with ATLAS, Windows 64 bit

2014-04-23 Thread Julian Taylor
On 23.04.2014 21:25, Matthew Brett wrote:
> Hi,
> 
> On Tue, Apr 15, 2014 at 12:34 AM, Julian Taylor
>  wrote:
>> On Tue, Apr 15, 2014 at 4:30 AM, Matthew Brett  
>> wrote:
>>>
>>> It looks as though mingw-w64 is at fault, and I was confused (still
>>> am) because of the different behavior with double and a constant:
>>>
>>> #include <stdio.h>
>>> #include <math.h>
>>>
>>> int main() {
>>> double z, i = -0.0;
>>> printf("With double %f=%f, with constant=%f\n",
>>>i, expm1(i), expm1(-0.));
>>> }
>>>
>>> gives:
>>>
>>> With double -0.00=0.00, with constant=-0.00
>>>
>>> That was ugly to track down.
>>>
>>> What is the right way to work round this (using the numpy version
>>> instead of the system version I suppose)?
>>>
>>
>> The right way is to file a bug at mingw and get it fixed at the source.
> 
> http://sourceforge.net/p/mingw-w64/code/6594/

Great, thanks for reporting it.

> 
>> Additionally, as this time npymath seems to be better (that's 3 bugs in
>> npymath vs 1 in mingw on my scoreboard) one could use the mingw
>> preprocessor define instead of HAVE_EXPM1 to select this function from
>> npymath.
> 
> Sorry to be slow - could you unpack what you mean in this paragraph?
> Is there a way to force use of the numpy version of the function using
> environment variables or similar?
> 

I mean something along the lines of the attached patch; maybe you can try it.
If it works I can file a numpy PR.
diff --git a/numpy/core/src/npymath/npy_math.c.src b/numpy/core/src/npymath/npy_math.c.src
index 1ca7033..e371061 100644
--- a/numpy/core/src/npymath/npy_math.c.src
+++ b/numpy/core/src/npymath/npy_math.c.src
@@ -62,7 +62,9 @@
  */
 
 /* Original code by Konrad Hinsen.  */
-#ifndef HAVE_EXPM1
+#if !defined(HAVE_EXPM1) || defined(__MINGW32__)
+/* http://sourceforge.net/p/mingw-w64/code/6594/ */
+#define NPY_USE_OWN_EXPM1 1
 double npy_expm1(double x)
 {
 if (npy_isinf(x) && x > 0) {
@@ -80,6 +82,8 @@ double npy_expm1(double x)
 }
 }
 }
+#else
+#define NPY_USE_OWN_EXPM1 0
 #endif
 
 #ifndef HAVE_LOG1P
@@ -335,12 +339,13 @@ double npy_log2(double x)
  * log,exp,expm1,asin,acos,atan,asinh,acosh,atanh,log1p,exp2,log2#
  * #KIND = SIN,COS,TAN,SINH,COSH,TANH,FABS,FLOOR,CEIL,RINT,TRUNC,SQRT,LOG10,
  * LOG,EXP,EXPM1,ASIN,ACOS,ATAN,ASINH,ACOSH,ATANH,LOG1P,EXP2,LOG2#
+ * #skip = 0*15,NPY_USE_OWN_EXPM1,0*9#
  */
 
 #ifdef @kind@@c@
 #undef @kind@@c@
 #endif
-#ifndef HAVE_@KIND@@C@
+#if !defined HAVE_@KIND@@C@ || @skip@
 @type@ npy_@kind@@c@(@type@ x)
 {
 return (@type@) npy_@kind@((double)x);
@@ -394,8 +399,9 @@ double npy_log2(double x)
  * log,exp,expm1,asin,acos,atan,asinh,acosh,atanh,log1p,exp2,log2#
  * #KIND = SIN,COS,TAN,SINH,COSH,TANH,FABS,FLOOR,CEIL,RINT,TRUNC,SQRT,LOG10,
  * LOG,EXP,EXPM1,ASIN,ACOS,ATAN,ASINH,ACOSH,ATANH,LOG1P,EXP2,LOG2#
+ * #skip = 0*15,NPY_USE_OWN_EXPM1,0*9#
  */
-#ifdef HAVE_@KIND@@C@
+#if defined HAVE_@KIND@@C@ && !@skip@
 @type@ npy_@kind@@c@(@type@ x)
 {
 return @kind@@c@(x);


Re: [Numpy-discussion] 1.9.x branch

2014-04-23 Thread Chris Barker
On Tue, Apr 22, 2014 at 2:35 PM, Charles R Harris  wrote:

> *Datetime timezone handling broken in 1.7.x*
>
> I don't think there is time to get this done for 1.9.0 and it needs to be
> pushed off to 1.10.0.
>
>

Darn! That's what we said for 1.8.

However, Sankarshan Mudkavi has written up a NEP, and this is really a
matter of ripping code out, rather than adding much. Someone familiar with
the code should be able to whip this out pretty easily (I think we've
abandoned, for now, any hope of "proper" TZ handling).

(
https://github.com/Sankarshan-Mudkavi/numpy/blob/Enhance-datetime64/doc/neps/datetime-improvement-proposal.rst
)

Datetime64 is really pretty broken this way -- the sooner it gets cleaned
up the better.


The trick is that as simple as it may be, someone still needs to do it.

I, for one, am not familiar with the code, and have pathetic C skills
anyway, so it would not be an efficient use of my time to try to do it.

But I will go and edit the NEP and write test cases!

-Chris
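
For the record, the behaviour the NEP asks for - and which numpy eventually
shipped in 1.11 - is a timezone-naive datetime64. A quick sketch of what
that looks like on a recent numpy:

```python
import numpy as np

# Timezone-naive datetime64, as proposed in the NEP: no local-timezone
# conversion is applied on parsing, and no UTC offset is printed on output.
# (numpy 1.7-1.10 instead treated bare datetimes as local time.)
d = np.datetime64('2014-04-23T12:00')
assert str(d) == '2014-04-23T12:00'
```
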


Re: [Numpy-discussion] Test error with ATLAS, Windows 64 bit

2014-04-23 Thread Matthew Brett
Hi,

On Tue, Apr 15, 2014 at 12:34 AM, Julian Taylor
 wrote:
> On Tue, Apr 15, 2014 at 4:30 AM, Matthew Brett  
> wrote:
>>
>> It looks as though mingw-w64 is at fault, and I was confused (still
>> am) because of the different behavior with double and a constant:
>>
>> #include <stdio.h>
>> #include <math.h>
>>
>> int main() {
>> double z, i = -0.0;
>> printf("With double %f=%f, with constant=%f\n",
>>i, expm1(i), expm1(-0.));
>> }
>>
>> gives:
>>
>> With double -0.00=0.00, with constant=-0.00
>>
>> That was ugly to track down.
>>
>> What is the right way to work round this (using the numpy version
>> instead of the system version I suppose)?
>>
>
> The right way is to file a bug at mingw and get it fixed at the source.

http://sourceforge.net/p/mingw-w64/code/6594/

> Additionally, as this time npymath seems to be better (that's 3 bugs in
> npymath vs 1 in mingw on my scoreboard) one could use the mingw
> preprocessor define instead of HAVE_EXPM1 to select this function from
> npymath.

Sorry to be slow - could you unpack what you mean in this paragraph?
Is there a way to force use of the numpy version of the function using
environment variables or similar?

Cheers,

Matthew


Re: [Numpy-discussion] Slightly off-topic - accuracy of C exp function?

2014-04-23 Thread Matthew Brett
Hi,

On Wed, Apr 23, 2014 at 1:43 AM, Nathaniel Smith  wrote:
> On Wed, Apr 23, 2014 at 6:22 AM, Matthew Brett  
> wrote:
>> Hi,
>>
>> I'm exploring Mingw-w64 for numpy building, and I've found it gives a
>> slightly different answer for 'exp' than - say - gcc on OSX.
>>
>> The difference is of the order of the eps value for the output number
>> (2 * eps for a result of ~2.0).
>>
>> Is accuracy somewhere specified for C functions like exp?  Or is
>> accuracy left as an implementation detail for the C library author?
>
> C99 says (sec 5.2.4.2.2) that "The accuracy of the floating point
> operations ... and of the library functions in <math.h> and
> <complex.h> that return floating point results is implementation
> defined. The implementation may state that the accuracy is unknown."
> (This last sentence is basically saying that with regard to some
> higher up clauses that required all conforming implementations to
> document this stuff, saying "eh, who knows" counts as documenting it.
> Hooray for standards!)
>
> Presumably the accuracy in this case is a function of the C library
> anyway, not the compiler?

The Mingw-w64 implementation is in assembly:

http://sourceforge.net/p/mingw-w64/code/HEAD/tree/trunk/mingw-w64-crt/math/exp.def.h

> Numpy has its own implementations for a
> bunch of the math functions, and it's been unclear in the past whether
> numpy or the libc implementations were better in any particular case.

I only investigated this particular value, in which case it looked as
though the OSX value was closer to the exact value (via sympy.mpmath)
- by ~1 unit in the last place.  This was causing a divergence in the
Powell optimization path and therefore a single scipy test failure.  I
haven't investigated further - was wondering what investigation I
should do, beyond running the numpy / scipy test suites.
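
One way to quantify "closer by ~1 unit in the last place" without pulling in
sympy: compare the libm result against a high-precision reference and express
the difference in ulps. A sketch using the stdlib decimal module (the helper
name and test point are illustrative, not from the thread; math.ulp needs
Python 3.9+):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50  # plenty of guard digits for a double-precision check

def exp_err_ulps(x):
    """Error of math.exp(x) in units in the last place (ulp)."""
    libm = math.exp(x)                 # whatever this platform's libm gives
    ref = Decimal(x).exp()             # high-precision reference value
    one_ulp = Decimal(math.ulp(libm))  # spacing of doubles at this magnitude
    return float(abs(Decimal(libm) - ref) / one_ulp)

# exp(0.7) ~ 2.01, the ~2.0 range discussed above; a good libm stays well
# under 1 ulp, and a ~1 ulp gap between platforms is exactly the kind of
# divergence that moved the Powell optimization path.
err = exp_err_ulps(0.7)
assert 0.0 <= err < 2.0
```
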

Cheers,

Matthew


[Numpy-discussion] ANN: DistArray 0.2 -- first public development release

2014-04-23 Thread Kurt Smith
GitHub repo: https://github.com/enthought/distarray

Documentation: http://distarray.readthedocs.org

License: Three-clause BSD

Python versions: 2.7 and 3.3

OS support: *nix and Mac OS X

DistArray aims to bring the strengths of NumPy to data-parallel
high-performance computing.  It provides distributed multi-dimensional
NumPy-like arrays and distributed ufuncs, distributed IO capabilities, and
can integrate with external distributed libraries, like Trilinos.
 DistArray works with NumPy and builds on top of it in a flexible and
natural way.

Brian Granger started DistArray as a NASA-funded SBIR project in 2008.
 Enthought picked it up as part of a DOE Phase II SBIR [0] to provide a
generally useful distributed array package.  It builds on IPython,
IPython.parallel, NumPy, MPI, and interfaces with the Trilinos suite of
distributed HPC solvers (via PyTrilinos) [1].

DistArray:

- has a client-engine (or master-worker) process design -- data resides on
  the worker processes, commands are initiated from the master;
- allows full control over what is executed on the worker processes and
  integrates transparently with the master process;
- allows direct communication between workers, bypassing the master process
  for scalability;
- integrates with IPython.parallel for interactive creation and
  exploration of distributed data;
- supports distributed ufuncs (currently without broadcasting);
- builds on and leverages MPI via MPI4Py in a transparent and
  user-friendly way;
- supports NumPy-like structured multidimensional arrays;
- has basic support for unstructured arrays;
- supports user-controllable array distributions across workers (block,
  cyclic, block-cyclic, and unstructured) on a per-axis basis;
- has a straightforward API to control how an array is distributed;
- has basic plotting support for visualization of array distributions;
- separates the array's distribution from the array's data -- useful for
  slicing, reductions, redistribution, broadcasting, all of which will be
  implemented in coming releases;
- implements distributed random arrays;
- supports `.npy`-like flat-file IO and hdf5 parallel IO (via h5py),
  leveraging MPI-based IO parallelism in an easy-to-use and transparent
  way; and
- supports the distributed array protocol [2], which allows independently
  developed parallel libraries to share distributed arrays without copying,
  analogous to the PEP-3118 new buffer protocol.
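
To make the block / cyclic distinction concrete, here is a hypothetical
helper (illustrative only, not DistArray API) mapping a global index to the
worker that owns it under each scheme:

```python
def owner_block(i, n, p):
    # block: the n indices are split into p contiguous chunks of ceil(n / p)
    chunk = -(-n // p)
    return i // chunk

def owner_cyclic(i, n, p):
    # cyclic: indices are dealt round-robin across the p workers
    return i % p

# 10 elements distributed over 3 workers
n, p = 10, 3
blocks = [owner_block(i, n, p) for i in range(n)]
cycles = [owner_cyclic(i, n, p) for i in range(n)]
assert blocks == [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]
assert cycles == [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
```

Block-cyclic is the composition of the two: fixed-size blocks dealt
round-robin instead of single indices.
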


This is the first public development release.  DistArray is not ready for
real-world use, but we want to get input from the larger scientific-Python
community to help drive its development.  The API is changing rapidly and
we are adding many new features on a fast timescale.  For that reason,
DistArray is currently implemented in pure Python for maximal flexibility.
 Performance improvements are coming.

The 0.2 release's goals are to provide the components necessary to support
upcoming features that are non-trivial to implement in a distributed
environment.

Planned features for upcoming releases:

- Distributed reductions
- Distributed slicing
- Distributed broadcasting
- Distributed fancy indexing
- Re-distribution methods
- Integration with Trilinos [1] and other packages [3] that are compatible
  with the distributed array protocol [2]
- Lazy evaluation and deferred computation for latency hiding
- Out-of-core computations
- Extensive examples, tutorials, documentation
- Support for distributed sorting and other non-trivial distributed
  algorithms
- MPI-only communication for non-interactive deployment on clusters and
  supercomputers
- End-user control over communication, temporary array creation, and other
  performance aspects of distributed computations



[0] http://www.sbir.gov/sbirsearch/detail/410257

[1] http://trilinos.org/

[2] http://distributed-array-protocol.readthedocs.org/en/rel-0.10.0/

[3] http://www.mcs.anl.gov/petsc/


-- 
Kurt W. Smith, Ph.D. 
Enthought, Inc.   | 512.536.1057


Re: [Numpy-discussion] numerical gradient, Jacobian, and Hessian

2014-04-23 Thread Neal Becker
alex wrote:

> On Mon, Apr 21, 2014 at 3:13 AM, Eelco Hoogendoorn
>  wrote:
>> As far as I can tell, [Theano] is actually the only tensor/ndarray aware
>> differentiator out there
> 
> And AlgoPy, a tensor/ndarray aware arbitrary order automatic
> differentiator (https://pythonhosted.org/algopy/)

I noticed Julia seems to have a package.



Re: [Numpy-discussion] 1.9.x branch

2014-04-23 Thread josef.pktd
On Wed, Apr 23, 2014 at 5:32 AM, Sebastian Berg
 wrote:
> On Di, 2014-04-22 at 15:35 -0600, Charles R Harris wrote:
>> Hi All,
>>
>>
>> I'd like to branch 1.9.x at the end of the month. There are a couple
>> of reasons for the timing. First, we have a lot of new stuff in the
>> development branch. Second, there is work ongoing in masked arrays
>> that I'd like to keep out of the release so that it has more time to
>> settle. Third, it's past time ;)
>
> Sounds good.
>
>> There are currently a number of 1.9.0 blockers, which can be seen
>> here.
>>
>> Datetime timezone handling broken in 1.7.x
>>
>> I don't think there is time to get this done for 1.9.0 and it needs to
>> be pushed off to 1.10.0.
>>
>> Return multiple field selection as ro view
>>
>> I have a branch for this, but because the returned type differs from a
>> copy by alignment spacing there was a test failure. Merging that
>> branch might cause some incompatibilities.
>>
>
> I am a bit worried here that comparisons might make trouble.
>
>> Object array creation new conversion to int
>>
>>
>> This one needs a decision. Julian, Sebastian, thoughts?
>>
>
> Maybe for all to consider this is about what happens for object arrays
> if you do things like:
>
> # Array cast to object array (np.array(arr) would be identical):
> a = np.arange(10).astype(object)
> # Array passed into new array creation (not just *one* array):
> b = np.array([np.arange(10)], dtype=object)
> # Numerical array is assigned to object array:
> c = np.empty(10, dtype=object)
> c[...] = np.arange(10)
>
> Before this change, the result was:
> type(a[0]) is int
> type(b[0,0]) is np.int_  # Note the numpy type
> type(c[0]) is int
>
> After this change, they are all `int`. Though note that the numpy type
> is preserved for example for long double. On the one hand preserving the
> numpy type might be nice, but on the other hand we don't care much about
> the dtypes of scalars and in practice the python types are probably more
> often wanted.

what if I don't like python?

>>> np.int_(0)**(-1)
inf
>>> 0**-1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: 0.0 cannot be raised to a negative power


>>> type(np.arange(5)[0])
<type 'numpy.int32'>
>>> np.arange(5)[0]**-1
inf

>>> type(np.arange(5)[0].item())
<type 'int'>
>>> np.arange(5)[0].item()**-1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: 0.0 cannot be raised to a negative power

>>> np.__version__
'1.6.1'


I remember struggling through this (avoiding python operations) quite
a bit in my early bugfixes to scipy.stats.distributions.

(IIRC I ended up avoiding most ints.)
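
The contrast can be pinned down in a few lines (a sketch; numpy's side of it
has since changed - modern numpy raises ValueError rather than returning inf):

```python
import numpy as np

# Python ints refuse a zero base with a negative exponent outright...
try:
    0 ** -1
    py_raised = False
except ZeroDivisionError:
    py_raised = True
assert py_raised

# ...while numpy's behaviour varies by version: 1.6 silently returned inf,
# and modern numpy raises ValueError ("Integers to negative integer powers
# are not allowed"), so code mixing the two scalar types has to care.
try:
    np_outcome = np.int64(0) ** -1      # inf on numpy 1.6
except (ValueError, ZeroDivisionError) as exc:
    np_outcome = type(exc).__name__     # exception name on modern numpy
```
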

Josef

>
> Since I just realized that things are safe (float128 does not cast to
> float after all), I changed my mind and am tempted to keep the new
> behaviour. That is, if it does not create any problems (there was some
> issue in scipy, not sure how bad).
>
> - Sebastian
>
>> Median of np.matrix is broken
>>
>>
>> Not sure what the status of this one is.
>>
>> 1.8 deprecations: Follow-up ticket
>>
>>
>> Things that might need to be removed.
>>
>> ERROR: test_big_arrays (test_io.TestSavezLoad) on OS X + Python 3.3
>>
>>
>> I believe this one was fixed. For general problems reading/writing big
>> files on OS X, I believe they were fixed in Mavericks and I'm inclined
>> to recommend an OS upgrade rather than work to chunk all the io.
>>
>> Deprecate NPY_CHAR
>> This one is waiting on a fix from Pearu to make f2py use numpy
>> strings. I think we will need to do this ourselves if we want to carry
>> through the deprecation. In any case it probably needs to be pushed
>> off to 1.10.
>>
>> 1.7 deprecations: Follow-up ticket
>> Some of these changes have been made, ro diagonal view for instance,
>> some still remain.
>>
>>
>>
>> Additions, updates, and thoughts welcome.
>>
>>
>> Chuck
>>
>>
>>
>>
>>
>>


Re: [Numpy-discussion] 1.9.x branch

2014-04-23 Thread Sebastian Berg
On Di, 2014-04-22 at 15:35 -0600, Charles R Harris wrote:
> Hi All,
> 
> 
> I'd like to branch 1.9.x at the end of the month. There are a couple
> of reasons for the timing. First, we have a lot of new stuff in the
> development branch. Second, there is work ongoing in masked arrays
> that I'd like to keep out of the release so that it has more time to
> settle. Third, it's past time ;)

Sounds good.

> There are currently a number of 1.9.0 blockers, which can be seen
> here. 
> 
> Datetime timezone handling broken in 1.7.x
> 
> I don't think there is time to get this done for 1.9.0 and it needs to
> be pushed off to 1.10.0. 
> 
> Return multiple field selection as ro view
> 
> I have a branch for this, but because the returned type differs from a
> copy by alignment spacing there was a test failure. Merging that
> branch might cause some incompatibilities.
> 

I am a bit worried here that comparisons might make trouble.

> Object array creation new conversion to int
> 
> 
> This one needs a decision. Julian, Sebastian, thoughts?
> 

Maybe for all to consider this is about what happens for object arrays
if you do things like:

# Array cast to object array (np.array(arr) would be identical):
a = np.arange(10).astype(object)
# Array passed into new array creation (not just *one* array):
b = np.array([np.arange(10)], dtype=object)
# Numerical array is assigned to object array:
c = np.empty(10, dtype=object)
c[...] = np.arange(10)

Before this change, the result was:
type(a[0]) is int
type(b[0,0]) is np.int_  # Note the numpy type
type(c[0]) is int

After this change, they are all `int`. Though note that the numpy type
is preserved for example for long double. On the one hand preserving the
numpy type might be nice, but on the other hand we don't care much about
the dtypes of scalars and in practice the python types are probably more
often wanted.
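
The three cases above are runnable as written; under the post-change rules
(which were ultimately kept) the elements come back as plain Python ints:

```python
import numpy as np

# (a) array cast to object array - elements become plain Python ints
a = np.arange(10).astype(object)
assert type(a[0]) is int

# (b) array passed into new array creation - a (1, 10) object array
b = np.array([np.arange(10)], dtype=object)
assert b.shape == (1, 10)

# (c) numerical array assigned into an object array
c = np.empty(10, dtype=object)
c[...] = np.arange(10)
assert isinstance(c[0], (int, np.integer))
```
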

Since I just realized that things are safe (float128 does not cast to
float after all), I changed my mind and am tempted to keep the new
behaviour. That is, if it does not create any problems (there was some
issue in scipy, not sure how bad).

- Sebastian

> Median of np.matrix is broken
> 
> 
> Not sure what the status of this one is.
> 
> 1.8 deprecations: Follow-up ticket
> 
> 
> Things that might need to be removed.
> 
> ERROR: test_big_arrays (test_io.TestSavezLoad) on OS X + Python 3.3
> 
> 
> I believe this one was fixed. For general problems reading/writing big
> files on OS X, I believe they were fixed in Mavericks and I'm inclined
> to recommend an OS upgrade rather than work to chunk all the io.
> 
> Deprecate NPY_CHAR
> This one is waiting on a fix from Pearu to make f2py use numpy
> strings. I think we will need to do this ourselves if we want to carry
> through the deprecation. In any case it probably needs to be pushed
> off to 1.10.
> 
> 1.7 deprecations: Follow-up ticket
> Some of these changes have been made, ro diagonal view for instance,
> some still remain.
> 
> 
> 
> Additions, updates, and thoughts welcome.
> 
> 
> Chuck
> 
> 
> 
> 
> 
> 


Re: [Numpy-discussion] Slightly off-topic - accuracy of C exp function?

2014-04-23 Thread David Cournapeau
On Wed, Apr 23, 2014 at 9:43 AM, Nathaniel Smith  wrote:

> On Wed, Apr 23, 2014 at 6:22 AM, Matthew Brett 
> wrote:
> > Hi,
> >
> > I'm exploring Mingw-w64 for numpy building, and I've found it gives a
> > slightly different answer for 'exp' than - say - gcc on OSX.
> >
> > The difference is of the order of the eps value for the output number
> > (2 * eps for a result of ~2.0).
> >
> > Is accuracy somewhere specified for C functions like exp?  Or is
> > accuracy left as an implementation detail for the C library author?
>
> C99 says (sec 5.2.4.2.2) that "The accuracy of the floating point
> operations ... and of the library functions in <math.h> and
> <complex.h> that return floating point results is implementation
> defined. The implementation may state that the accuracy is unknown."
> (This last sentence is basically saying that with regard to some
> higher up clauses that required all conforming implementations to
> document this stuff, saying "eh, who knows" counts as documenting it.
> Hooray for standards!)
>
> Presumably the accuracy in this case is a function of the C library
> anyway, not the compiler? Numpy has its own implementations for a
> bunch of the math functions, and it's been unclear in the past whether
> numpy or the libc implementations were better in any particular case.
>

In the case of the MS runtime, at least version 9 (as shipped in VS 2008), our
implementation is likely to be better (most of the code was taken from the
Sun math library when the license allowed it).

David

>
> -n
>
> --
> Nathaniel J. Smith
> Postdoctoral researcher - Informatics - University of Edinburgh
> http://vorpus.org


Re: [Numpy-discussion] Slightly off-topic - accuracy of C exp function?

2014-04-23 Thread Nathaniel Smith
On Wed, Apr 23, 2014 at 6:22 AM, Matthew Brett  wrote:
> Hi,
>
> I'm exploring Mingw-w64 for numpy building, and I've found it gives a
> slightly different answer for 'exp' than - say - gcc on OSX.
>
> The difference is of the order of the eps value for the output number
> (2 * eps for a result of ~2.0).
>
> Is accuracy somewhere specified for C functions like exp?  Or is
> accuracy left as an implementation detail for the C library author?

C99 says (sec 5.2.4.2.2) that "The accuracy of the floating point
operations ... and of the library functions in <math.h> and
<complex.h> that return floating point results is implementation
defined. The implementation may state that the accuracy is unknown."
(This last sentence is basically saying that with regard to some
higher up clauses that required all conforming implementations to
document this stuff, saying "eh, who knows" counts as documenting it.
Hooray for standards!)

Presumably the accuracy in this case is a function of the C library
anyway, not the compiler? Numpy has its own implementations for a
bunch of the math functions, and it's been unclear in the past whether
numpy or the libc implementations were better in any particular case.

-n

-- 
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org