Re: [Numpy-discussion] appveyor CI

2015-03-05 Thread Denis-Alexander Engemann
Same for MNE-Python:

https://github.com/mne-tools/mne-python/blob/master/appveyor.yml

Denis



Re: [Numpy-discussion] appveyor CI

2015-03-05 Thread Stefan van der Walt
Hi Chuck

On 2015-03-05 10:09:08, Charles R Harris 
charlesr.har...@gmail.com wrote:
 Anyone familiar with appveyor http://www.appveyor.com/? Is 
 this something we could use to test/build numpy on windows 
 machines? It is free for open source.

We already use this for scikit-image, and you are welcome to grab 
the setup here:

https://github.com/scikit-image/scikit-image/blob/master/appveyor.yml

GitHub now also supports multiple status reporting out of the box: 

https://github.com/blog/1935-see-results-from-all-pull-request-status-checks

Stéfan
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] appveyor CI

2015-03-05 Thread Charles R Harris
On Thu, Mar 5, 2015 at 12:42 PM, Stefan van der Walt stef...@berkeley.edu
wrote:

 Hi Chuck

 On 2015-03-05 10:09:08, Charles R Harris
 charlesr.har...@gmail.com wrote:
  Anyone familiar with appveyor http://www.appveyor.com/? Is
  this something we could use to test/build numpy on windows
  machines? It is free for open source.

 We already use this for scikit-image, and you are welcome to grab
 the setup here:

 https://github.com/scikit-image/scikit-image/blob/master/appveyor.yml

 GitHub now also supports multiple status reporting out of the box:


 https://github.com/blog/1935-see-results-from-all-pull-request-status-checks


Thanks. Anything tricky about setting up an appveyor account?

Chuck


Re: [Numpy-discussion] numpy pickling problem - python 2 vs. python 3

2015-03-05 Thread Ryan Nelson
This works if run from Py3. Don't know if it will *always* work. From that
GH discussion you linked, it sounds like that is a bit of a hack.
##
"""Illustrate problem with pytables data - python 2 to python 3."""

from __future__ import print_function

import sys
import numpy as np
import tables as tb
import pickle as pkl


def main():
    """Run the example."""
    print("np.__version__ =", np.__version__)
    check_on_same_version = False

    arr1 = np.linspace(0.0, 5.0, 6)
    arr2 = np.linspace(0.0, 10.0, 11)
    data = [arr1, arr2]

    # Only generate on python 2.X or check on the same python version:
    if sys.version < "3.0" or check_on_same_version:
        fpt = tb.open_file("tstdat.h5", mode="w")
        fpt.set_node_attr(fpt.root, "list_of_arrays", data)
        fpt.close()

    # Load the saved file:
    fpt = tb.open_file("tstdat.h5", mode="r")
    result = fpt.get_node_attr("/", "list_of_arrays")
    fpt.close()
    print("Loaded:", pkl.loads(result, encoding="latin1"))

main()
###
However, I would consider defining some sort of v2 of your HDF file format,
which converts all of the lists of arrays to CArrays or EArrays in the HDF
file. (https://pytables.github.io/usersguide/libref/homogenous_storage.html)
Otherwise, what is the advantage of using HDF files over just plain
shelves?... Just a thought.
Ryan
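[Editor's note] The encoding quirk at work here can be seen without HDF5 at all. The byte string below is hard-coded and is equivalent to what Python 2's pickle.dumps('\xf0', protocol=2) emits (the optional memo opcode is omitted); this is a minimal sketch of why encoding="latin1" unblocks the load:

```python
import pickle

# A Python 2 protocol-2 pickle of the byte string '\xf0':
# PROTO 2, SHORT_BINSTRING of length 1 carrying a non-ASCII byte, STOP.
py2_pickle = b'\x80\x02U\x01\xf0.'

try:
    # Python 3 decodes BINSTRING data as ASCII by default -> fails.
    pickle.loads(py2_pickle)
except UnicodeDecodeError as err:
    print("default load fails:", err)

# latin1 maps every byte 0-255 to a code point, so the load succeeds:
print(pickle.loads(py2_pickle, encoding="latin1"))

# encoding="bytes" keeps Python 2 str objects as bytes instead:
print(pickle.loads(py2_pickle, encoding="bytes"))
```

The same reasoning applies to the pickled numpy arrays stored as node attributes: their raw data contains arbitrary bytes, which the default ASCII decoding cannot handle.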

On Thu, Mar 5, 2015 at 2:52 AM, Arnd Baecker arnd.baec...@web.de wrote:

 Dear all,

 when preparing the transition of our repositories from python 2
 to python 3, I encountered a problem loading pytables (.h5) files
 generated using python 2.
 I suspect that it is caused by a problem with pickling numpy arrays
 under python 3:

 The code appended at the end of this mail works
 fine on either python 2.7 or python 3.4, however,
 generating the data on python 2 and trying to load
 them on python 3 gives some strange string
 ( b'(lp1\ncnumpy.core.multiarray\n_reconstruct\np2\n(cnumpy\nndarray ...)
 instead of
 [array([ 0.,  1.,  2.,  3.,  4.,  5.]),
  array([ 0.,  1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.])]

 The problem sounds very similar to the one reported here
https://github.com/numpy/numpy/issues/4879
 which was fixed with numpy 1.9.

 I tried different versions/combinations of numpy (including 1.9.2)
 and always end up with the above result.
 Also I tried to reduce the problem down to the level of pure numpy
 and pickle (as in the above bug report):

import numpy as np
import pickle
arr1 = np.linspace(0.0, 1.0, 2)
arr2 = np.linspace(0.0, 2.0, 3)
data = [arr1, arr2]

p = pickle.dumps(data)
print(pickle.loads(p))
p

 Using the resulting string for p as input string
 (with a b prefix added at the beginning) under python 3 gives
UnicodeDecodeError: 'ascii' codec can't decode
byte 0xf0 in position 14: ordinal not in range(128)


 Can someone reproduce the problem with pytables?
 Is there maybe a work-around?
 (And no: I can't re-generate the old data files - it's
 hundreds of .h5 files ... ;-).

 Many thanks, best, Arnd


 ##
 """Illustrate problem with pytables data - python 2 to python 3."""

 from __future__ import print_function

 import sys
 import numpy as np
 import tables as tb


 def main():
     """Run the example."""
     print("np.__version__ =", np.__version__)
     check_on_same_version = False

     arr1 = np.linspace(0.0, 5.0, 6)
     arr2 = np.linspace(0.0, 10.0, 11)
     data = [arr1, arr2]

     # Only generate on python 2.X or check on the same python version:
     if sys.version < "3.0" or check_on_same_version:
         fpt = tb.open_file("tstdat.h5", mode="w")
         fpt.set_node_attr(fpt.root, "list_of_arrays", data)
         fpt.close()

     # Load the saved file:
     fpt = tb.open_file("tstdat.h5", mode="r")
     result = fpt.get_node_attr("/", "list_of_arrays")
     fpt.close()
     print("Loaded:", result)

 main()






Re: [Numpy-discussion] numpy pickling problem - python 2 vs. python 3

2015-03-05 Thread Arnd Baecker

On Thu, 5 Mar 2015, Ryan Nelson wrote:


This works if run from Py3. Don't know if it will *always* work. From that GH 
discussion you linked, it sounds
like that is a bit of a hack.


Great - based on your code I could modify my loader routine so that
on python 3 it can load the files generated on python 2. Many thanks!

Still I would have thought that this should work out of the box,
i.e. without the pickle.loads trick?

[... code ...]


However, I would consider defining some sort of v2 of your HDF file format, 
which converts all of the lists of
arrays to CArrays or EArrays in the HDF file.
(https://pytables.github.io/usersguide/libref/homogenous_storage.html) 
Otherwise, what is the advantage of using
HDF files over just plain shelves?... Just a thought.


Thanks for the suggestion - in our usage scenario,
lists of arrays are a border case, and only small parts of the data in the
files have this. The larger arrays are written directly.

So at this point I don't mind if the lists of arrays
are written in the current way (as long as things load fine).

For our applications the main benefit of using HDF files is
the possibility to easily look into them (e.g. using vitables)
- so this means that I don't use the more advanced features
of HDF at this point... ;-).

Again many thanks for the prompt reply and solution!

Best, Arnd




Re: [Numpy-discussion] appveyor CI

2015-03-05 Thread Robert McGibbon
I develop on linux and osx, and I haven't experienced any Appveyor problems
related to line endings, so I assume it's normalized somehow.

-Robert
On Mar 5, 2015 5:08 PM, Charles R Harris charlesr.har...@gmail.com
wrote:



 On Thu, Mar 5, 2015 at 5:38 PM, Robert McGibbon rmcgi...@gmail.com
 wrote:

 From my experience, it's pretty easy, assuming you're prepared to pick up
 some powershell.
 Some useful resources are

  - Olivier Grisel's example.
 https://github.com/ogrisel/python-appveyor-demo
  - I made a similar example, using conda.
 https://github.com/rmcgibbo/python-appveyor-conda-example

 One problem is that appveyor is often quite slow compared to TravisCI, so
 this can be a little annoying. But it's better than nothing.


 Do line endings in the scripts matter?

 Chuck




[Numpy-discussion] Adding keyword to asarray and asanyarray.

2015-03-05 Thread Charles R Harris
Hi All,

This is apropos gh-5634 https://github.com/numpy/numpy/pull/5634, a PR
adding a precision keyword to asarray and asanyarray. The PR description is

 The precision keyword differs from the current dtype keyword in the
 following way.

- It specifies a minimum precision. If the precision of the input is
greater than the specified precision, the input precision is preserved.
    - Complex types are preserved. A specified floating precision applies
    to the dtypes of the real and imaginary parts separately.

 For example, both complex128 and float64 dtypes have the
 same precision and an array of dtype float64 will be unchanged if the
 specified precision is float32.

 Ideally the precision keyword would be pushed down into the array
 constructor so that the resulting dtype could be determined before the
 array is constructed, but that would require adding new functions as the
 current constructors are part of the API and cannot have their
 signatures changed.

The name of the keyword is open to discussion, as well as its acceptable
values. And of course, anything else that might come to mind ;)

Thoughts?

Chuck


Re: [Numpy-discussion] Adding keyword to asarray and asanyarray.

2015-03-05 Thread Benjamin Root
dare I say... datetime64/timedelta64 support?

::ducks::

Ben Root



Re: [Numpy-discussion] Adding keyword to asarray and asanyarray.

2015-03-05 Thread Charles R Harris
On Thu, Mar 5, 2015 at 10:04 AM, Chris Barker chris.bar...@noaa.gov wrote:

 On Thu, Mar 5, 2015 at 8:42 AM, Benjamin Root ben.r...@ou.edu wrote:

 dare I say... datetime64/timedelta64 support?


 well, the precision of those is 64 bits, yes? so if you asked for less
 than that, you'd still get a dt64. If you asked for 64 bits, you'd get it,
 if you asked for datetime128  -- what would you get???

 a 128 bit integer? or an Exception, because there is no 128bit datetime
 dtype.

 But I think this is the same problem with any dtype -- if you ask for a
 precision that doesn't exist, you're going to get an error.

 Is there a more detailed description of the proposed feature anywhere? Do
 you specify a dtype as a precision? or just the precision, and let the
 dtype figure it out for itself, i.e.:

 precision=64

 would give you a float64 if the passed in array was a float type, but an
 int64 if the passed in array was an int type, or a uint64 if the passed in
 array was an unsigned int type, etc.

 But in the end, I wonder about the use case. I generally use asarray one
 of two ways:

 Without a dtype -- to simply make sure I've got an ndarray of SOME dtype.

 or

 With a dtype - because I really care about the dtype -- usually because I
 need to pass it on to C code or something.

 I don't think I'd ever need at least some precision, but not care if I got
 more than that...


The main use that I want to cover is that float64 and complex128 have the
same precision, and it would be good if either were acceptable.  Also, one
might be happy with either float32 or float64, not just one of the two. Another
intent is to make the fewest possible copies. The determination of the
resulting type is made using the result_type function.

Chuck
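[Editor's note] A rough sketch of the behaviour described above, built on np.result_type. This is an illustration of the proposal only, not the code in gh-5634, and the function name asarray_min_precision is made up:

```python
import numpy as np

def asarray_min_precision(a, precision):
    # Hypothetical sketch of the proposed keyword: coerce `a` to an
    # ndarray whose dtype has at least `precision`, preserving higher
    # input precision and complex types, copying only when needed.
    a = np.asarray(a)
    # result_type combines the input dtype with the requested minimum:
    dtype = np.result_type(a.dtype, precision)
    return a.astype(dtype, copy=False)

a32 = np.ones(3, dtype=np.float32)
a64 = np.ones(3, dtype=np.float64)
c64 = np.ones(3, dtype=np.complex64)

print(asarray_min_precision(a32, np.float64).dtype)  # float64 (upcast)
print(asarray_min_precision(a64, np.float32).dtype)  # float64 (higher precision kept)
print(asarray_min_precision(c64, np.float64).dtype)  # complex128 (complex preserved)
```

Note how the complex input is widened to complex128 rather than cast to a float, matching the "complex types are preserved" point in the PR description.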


[Numpy-discussion] appveyor CI

2015-03-05 Thread Charles R Harris
Anyone familiar with appveyor http://www.appveyor.com/? Is this something
we could use to test/build numpy on windows machines? It is free for open
source.

Chuck


[Numpy-discussion] ANN: BayesPy 0.3 released

2015-03-05 Thread Jaakko Luttinen
Dear all,

I am pleased to announce that BayesPy 0.3 has been released.

BayesPy provides tools for variational Bayesian inference. The user can
easily construct conjugate exponential family models from nodes and run
approximate posterior inference. BayesPy aims to be efficient and
flexible enough for experts but also accessible for casual users.

---

This release adds several state-of-the-art VB features. Below is a list
of significant new features in this release:

* Gradient-based optimization of the nodes by using either the Euclidean
or Riemannian/natural gradient. This enables, for instance, the
Riemannian conjugate gradient method.

* Collapsed variational inference to improve the speed of learning.

* Stochastic variational inference to improve scalability.

* Pattern search to improve the speed of learning.

* Deterministic annealing to improve robustness against initializations.

* Gaussian Markov chains can use input signals.

More details about the new features can be found here:
http://www.bayespy.org/user_guide/advanced.html

--

PyPI: https://pypi.python.org/pypi/bayespy/0.3

Git repository: https://github.com/bayespy/bayespy

Documentation: http://www.bayespy.org/

Best regards,
Jaakko



Re: [Numpy-discussion] Adding keyword to asarray and asanyarray.

2015-03-05 Thread Chris Barker
On Thu, Mar 5, 2015 at 8:42 AM, Benjamin Root ben.r...@ou.edu wrote:

 dare I say... datetime64/timedelta64 support?


well, the precision of those is 64 bits, yes? so if you asked for less than
that, you'd still get a dt64. If you asked for 64 bits, you'd get it, if
you asked for datetime128  -- what would you get???

a 128 bit integer? or an Exception, because there is no 128bit datetime
dtype.

But I think this is the same problem with any dtype -- if you ask for a
precision that doesn't exist, you're going to get an error.

Is there a more detailed description of the proposed feature anywhere? Do
you specify a dtype as a precision? or just the precision, and let the
dtype figure it out for itself, i.e.:

precision=64

would give you a float64 if the passed in array was a float type, but an
int64 if the passed in array was an int type, or a uint64 if the passed in
array was an unsigned int type, etc.

But in the end, I wonder about the use case. I generally use asarray one of
two ways:

Without a dtype -- to simply make sure I've got an ndarray of SOME dtype.

or

With a dtype - because I really care about the dtype -- usually because I
need to pass it on to C code or something.

I don't think I'd ever need at least some precision, but not care if I got
more than that...

-Chris


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] Adding keyword to asarray and asanyarray.

2015-03-05 Thread Benjamin Root
On Thu, Mar 5, 2015 at 12:04 PM, Chris Barker chris.bar...@noaa.gov wrote:

 well, the precision of those is 64 bits, yes? so if you asked for less
 than that, you'd still get a dt64. If you asked for 64 bits, you'd get it,
 if you asked for datetime128  -- what would you get???

 a 128 bit integer? or an Exception, because there is no 128bit datetime
 dtype.



I was more thinking of datetime64/timedelta64's ability to specify the time
units.

Ben Root


[Numpy-discussion] ANN: scikit-image 0.11

2015-03-05 Thread Stefan van der Walt
Announcement: scikit-image 0.11.0
=

We're happy to announce the release of scikit-image v0.11.0!

scikit-image is an image processing toolbox for SciPy that includes algorithms
for segmentation, geometric transformations, color space manipulation,
analysis, filtering, morphology, feature detection, and more.

For more information, examples, and documentation, please visit our website:

http://scikit-image.org

Highlights
--
For this release, we merged over 200 pull requests with bug fixes,
cleanups, improved documentation and new features.  Highlights
include:

- Region Adjacency Graphs
  - Color distance RAGs (#1031)
  - Threshold Cut on RAGs (#1031)
  - Similarity RAGs (#1080)
  - Normalized Cut on RAGs (#1080)
  - RAG drawing (#1087)
  - Hierarchical merging (#1100)
- Sub-pixel shift registration (#1066)
- Non-local means denoising (#874)
- Sliding window histogram (#1127)
- More illuminants in color conversion (#1130)
- Handling of CMYK images (#1360)
- `stop_probability` for RANSAC (#1176)
- Li thresholding (#1376)
- Signed edge operators (#1240)
- Full ndarray support for `peak_local_max` (#1355)
- Improve conditioning of geometric transformations (#1319)
- Standardize handling of multi-image files (#1200)
- Ellipse structuring element (#1298)
- Multi-line drawing tool (#1065), line handle style (#1179)
- Point in polygon testing (#1123)
- Rotation around a specified center (#1168)
- Add `shape` option to drawing functions (#1222)
- Faster regionprops (#1351)
- `skimage.future` package (#1365)
- More robust I/O module (#1189)

API Changes
---
- The ``skimage.filter`` subpackage has been renamed to ``skimage.filters``.
- Some edge detectors returned values greater than 1; their results are now
  appropriately scaled by a factor of ``sqrt(2)``.

Contributors to this release

(Listed alphabetically by last name)

- Fedor Baart
- Vighnesh Birodkar
- François Boulogne
- Nelson Brown
- Alexey Buzmakov
- Julien Coste
- Phil Elson
- Adam Feuer
- Jim Fienup
- Geoffrey French
- Emmanuelle Gouillart
- Charles Harris
- Jonathan Helmus
- Alexander Iacchetta
- Ivana Kajić
- Kevin Keraudren
- Almar Klein
- Gregory R. Lee
- Jeremy Metz
- Stuart Mumford
- Damian Nadales
- Pablo Márquez Neila
- Juan Nunez-Iglesias
- Rebecca Roisin
- Jasper St. Pierre
- Jacopo Sabbatini
- Michael Sarahan
- Salvatore Scaramuzzino
- Phil Schaf
- Johannes Schönberger
- Tim Seifert
- Arve Seljebu
- Steven Silvester
- Julian Taylor
- Matěj Týč
- Alexey Umnov
- Pratap Vardhan
- Stefan van der Walt
- Joshua Warner
- Tony S Yu


[Numpy-discussion] ufuncs now take a tuple of arrays as 'out' kwarg

2015-03-05 Thread Jaime Fernández del Río
Hi all,

There is a PR, ready to be merged, that adds the possibility of passing a
tuple of arrays in the 'out' kwarg to ufuncs with multiple outputs:

https://github.com/numpy/numpy/pull/5621

The new functionality is as follows:

* If the ufunc has a single output, then the 'out' kwarg can either be a
single array (or None) like today, or a tuple holding a single array (or
None).

* If the ufunc has more than one output, then the 'out' kwarg must be a
tuple with one array (or None) per output argument. The old behavior, where
only the first output could be specified, is now deprecated, will raise a
deprecation warning, and potentially be changed to an error in the future.

* In both cases, positional and keyword output arguments are incompatible.
This has been made a little more strict, as the following is valid in <=
1.9.x but will now raise an error:

np.add(2, 2, None, out=arr)

There seemed to be a reasonable amount of agreement on the goodness of this
change from the discussions on github, but I wanted to inform the larger
audience, in case there are any addressable concerns.
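[Editor's note] A short usage sketch of the tuple form described above, using np.modf (a ufunc with two outputs) and np.negative (a single-output ufunc):

```python
import numpy as np

x = np.array([1.5, 2.25, -3.75])
frac = np.empty_like(x)
whole = np.empty_like(x)

# np.modf has two outputs: pass one array (or None) per output.
f, w = np.modf(x, out=(frac, whole))
print(f is frac, w is whole)   # both results landed in our arrays
print(frac)                    # fractional parts
print(whole)                   # integral parts

# A single-output ufunc now also accepts a 1-tuple:
res = np.empty_like(x)
np.negative(x, out=(res,))
print(res)
```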

Jaime

-- 
(\__/)
( O.o)
(  ) Este es Conejo. Copia a Conejo en tu firma y ayúdale en sus planes
de dominación mundial.


Re: [Numpy-discussion] appveyor CI

2015-03-05 Thread Robert McGibbon
From my experience, it's pretty easy, assuming you're prepared to pick up
some powershell.
Some useful resources are

 - Olivier Grisel's example.
https://github.com/ogrisel/python-appveyor-demo
 - I made a similar example, using conda.
https://github.com/rmcgibbo/python-appveyor-conda-example

One problem is that appveyor is often quite slow compared to TravisCI, so
this can be a little annoying. But it's better than nothing.
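[Editor's note] For the archives, a minimal appveyor.yml in the spirit of those linked examples might look like the sketch below. Everything here is a placeholder - the package name, Python paths, and test command need adapting; see the linked repositories for real, working configurations:

```yaml
# Sketch only: adapt paths, versions, and the test command to your project.
environment:
  matrix:
    - PYTHON: "C:\\Python27"
    - PYTHON: "C:\\Python34"

install:
  # Use the matrix Python to install build/test dependencies.
  - "%PYTHON%\\python.exe -m pip install --upgrade pip"
  - "%PYTHON%\\python.exe -m pip install nose"
  - "%PYTHON%\\python.exe setup.py build_ext --inplace"

build: false  # no MSBuild project; everything happens in install/test_script

test_script:
  - "%PYTHON%\\python.exe -m nose mypackage"
```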

-Robert


Re: [Numpy-discussion] appveyor CI

2015-03-05 Thread Charles R Harris
On Thu, Mar 5, 2015 at 5:38 PM, Robert McGibbon rmcgi...@gmail.com wrote:

 From my experience, it's pretty easy, assuming you're prepared to pick up
 some powershell.
 Some useful resources are

  - Olivier Grisel's example.
 https://github.com/ogrisel/python-appveyor-demo
  - I made a similar example, using conda.
 https://github.com/rmcgibbo/python-appveyor-conda-example

 One problem is that appveyor is often quite slow compared to TravisCI, so
 this can be a little annoying. But it's better than nothing.


Do line endings in the scripts matter?

Chuck


Re: [Numpy-discussion] Adding keyword to asarray and asanyarray.

2015-03-05 Thread josef.pktd
On Thu, Mar 5, 2015 at 12:33 PM, Charles R Harris
charlesr.har...@gmail.com wrote:


 On Thu, Mar 5, 2015 at 10:04 AM, Chris Barker chris.bar...@noaa.gov wrote:

 On Thu, Mar 5, 2015 at 8:42 AM, Benjamin Root ben.r...@ou.edu wrote:

 dare I say... datetime64/timedelta64 support?


 well, the precision of those is 64 bits, yes? so if you asked for less
 than that, you'd still get a dt64. If you asked for 64 bits, you'd get it,
 if you asked for datetime128  -- what would you get???

 a 128 bit integer? or an Exception, because there is no 128bit datetime
 dtype.

 But I think this is the same problem with any dtype -- if you ask for a
 precision that doesn't exist, you're going to get an error.

 Is there a more detailed description of the proposed feature anywhere? Do
 you specify a dtype as a precision? or just the precision, and let the dtype
 figure it out for itself, i.e.:

 precision=64

 would give you a float64 if the passed in array was a float type, but an
 int64 if the passed in array was an int type, or a uint64 if the passed in
 array was an unsigned int type, etc.

 But in the end, I wonder about the use case. I generally use asarray one
 of two ways:

 Without a dtype -- to simply make sure I've got an ndarray of SOME dtype.

 or

 With a dtype - because I really care about the dtype -- usually because I
 need to pass it on to C code or something.

 I don't think I'd ever need at least some precision, but not care if I got
 more than that...


 The main use that I want to cover is that float64 and complex128 have the
 same precision, and it would be good if either were acceptable.  Also, one
 might be happy with either float32 or float64, not just one of the two. Another
 intent is to make the fewest possible copies. The determination of the
 resulting type is made using the result_type function.


How does this work for object arrays, or datetime?

Can I specify at least float32 or float64, and it raises an exception
if it cannot be converted?

The problem we have in statsmodels is that pandas frequently uses
object arrays and it messes up patsy or statsmodels if it's not
explicitly converted.

Josef





 Chuck

