[Numpy-discussion] ZeroRank memmap behavior?

2012-09-21 Thread Wim Bakker
I'm deeply puzzled by the recently changed behavior of zero-rank memmaps. I
think this change happened from version 1.6.0 to 1.6.1, which I'm currently
using.

>>> import numpy as np

Create a zero-rank memmap.

>>> x = np.memmap(filename='/tmp/m', dtype=float, mode='w+', shape=())

Give it a value:

>>> x[...] = 22
>>> x
memmap(22.0)

So far so good. But now:

>>> b = (x + x) / 1.5
>>> b
memmap(29.333333333333332)

WTF? Why is the result of this calculation a memmap?

It even thinks that it's still linked to the file, but it's not:

>>> b.filename
'/tmp/m'

If I try this with arrays then I don't get this weird behavior:

>>> a = np.array(2, dtype=float)

>>> (a + a) / 2.5
1.6000000000000001

which gives me a Python float, not a zero-rank array.

Why does the memmap behave like that? Why do I get a memmap even
though it's not connected to any file?

Regards,

Wim


Re: [Numpy-discussion] ZeroRank memmap behavior?

2012-09-21 Thread Sebastian Berg
Hey,

this is indirectly related (I think it might fix many of these memmap
oddities though?)...

Why does the memmap object not implement:

def __array_wrap__(self, obj):
    if self is obj:
        return obj
    return np.array(obj, copy=False, subok=False)

That way, if a ufunc gets only memmap inputs, the result will not be a
memmap; but if the ufunc has an output parameter, then self is obj,
which means the result is not recast. The problem is that subclasses
automatically inherit an __array_wrap__ method that sets the result to
the subclass's type (which is not generally wanted for memmaps). I may
make a PR for this...
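
A minimal sketch of the mechanism (using a plain ndarray subclass
rather than memmap itself, since a memmap needs a backing file):

import numpy as np

# Subclasses inherit ndarray.__array_wrap__, which re-wraps ufunc
# results in the subclass type -- this is why (x + x) / 1.5 above
# comes back as a memmap.
class MyArray(np.ndarray):
    pass

a = np.arange(3.0).view(MyArray)
print(type(a + a))          # MyArray, via the inherited __array_wrap__

# With the override proposed above, results would be demoted to a
# plain ndarray instead:
class Demoting(np.ndarray):
    def __array_wrap__(self, obj):
        if self is obj:     # explicit out= target: leave it untouched
            return obj
        return np.array(obj, copy=False, subok=False)

b = np.arange(3.0).view(Demoting)
print(type(b + b))          # plain numpy.ndarray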

Regards,

Sebastian


On Fri, 2012-09-21 at 14:59 +0200, Wim Bakker wrote:
> [...]


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-21 Thread Chris Barker
On Thu, Sep 20, 2012 at 2:48 PM, Nathaniel Smith n...@pobox.com wrote:
> because a += b
> really should be the same as a = a + b.

I don't think that's the case -- the in-place operators should be (and
are) more than syntactic sugar -- they have a different meaning and
use (in fact, I think they shouldn't work at all for immutables, but I
guess the common increment-a-counter use was too good to pass up).

in the numpy case:

a = a + b

means make a new array, from the result of adding a and b

whereas:

a += b

means change a in place by adding b to it

In the first case, I'd expect the type of the result to be determined
by both a and b -- casting rules.

In the second case, a should certainly not be a different object, and
should not have a new data buffer, and therefore should not change type.

Whereas in the general case, with:

a = b + c

there is no assumption that a is the same type as either b or c, and it
is certainly not the same object.
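
To make that concrete (a quick sketch with plain float arrays):

import numpy as np

a = np.arange(3, dtype=float)
b = np.ones(3)

orig = a
a = a + b            # new object, new data buffer
print(a is orig)     # False

a = orig
a += b               # mutated in place: same object, same buffer
print(a is orig)     # True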

-Chris

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR             (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-21 Thread Nathaniel Smith
On 21 Sep 2012 17:31, Chris Barker chris.bar...@noaa.gov wrote:

> [...]

You're right of course. What I meant is that
  a += b
should produce the same result as
  a[...] = a + b

If we change the casting rule for the first one but not the second, though,
then these will produce different results if a is integer and b is float:
the first will produce an error, while the second will succeed, silently
discarding fractional parts.
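
A short sketch of that divergence (with 1.6-era casting on the
assignment path):

import numpy as np

a = np.arange(3)                 # integer dtype
b = np.array([0.5, 0.5, 0.5])    # float

a[...] = a + b                   # float result cast back to int on assignment
print(a)                         # [0 1 2] -- fractional parts silently dropped

# whereas under the tightened rule, a += b would raise instead of
# silently truncating.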

-n


Re: [Numpy-discussion] Fwd: Package: scipy-0.11.0-0.1.rc2.fc18 Tag: f18-updates-candidate Status: failed Built by: orion

2012-09-21 Thread Ondřej Čertík
Hi Orion,

On Thu, Sep 20, 2012 at 2:56 PM, Orion Poplawski or...@cora.nwra.com wrote:
> This is a plea for some help.  We've been having trouble getting scipy to
> pass all of the tests in the Fedora 18 build with python 3.3 (although it
> seems to build okay in Fedora 19).  Below are the logs of the build.  There
> appears to be some kind of memory corruption that manifests itself a little
> differently on 32-bit vs. 64-bit.  I really have no idea myself how to
> pursue debugging this, though I'm happy to provide any more needed
> information.

Thanks for testing the latest beta2 release.

> Task 4509077 on buildvm-35.phx2.fedoraproject.org
> Task Type: buildArch (scipy-0.11.0-0.1.rc2.fc18.src.rpm, i686)
> logs:
>   http://koji.fedoraproject.org/koji/getfile?taskID=4509077&name=build.log

This link has the following stacktrace:

/lib/libpython3.3m.so.1.0(PyMem_Free+0x1c)[0xbf044c]
/usr/lib/python3.3/site-packages/numpy/core/multiarray.cpython-33m.so(+0x4d52b)[0x42252b]
/usr/lib/python3.3/site-packages/numpy/core/multiarray.cpython-33m.so(+0xcb7c5)[0x4a07c5]
/usr/lib/python3.3/site-packages/numpy/core/multiarray.cpython-33m.so(+0xcbc5e)[0x4a0c5e]

Which indeed looks like it's in NumPy. Would you be able to obtain a full stacktrace?

There have certainly been segfaults in Python 3.3 with NumPy, but we've
fixed all the ones we could reproduce. That doesn't mean there couldn't be
more. If you could nail it down a little bit more, that would be
great. I'll help once I can reproduce it somehow.

Ondrej


[Numpy-discussion] specifying numpy as dependency in your project, install_requires

2012-09-21 Thread Ralf Gommers
Hi,

An issue I keep running into is that packages use:
install_requires = ["numpy"]
or
install_requires = ['numpy >= 1.6']

in their setup.py. This simply doesn't work a lot of the time. I actually
filed a bug against patsy for that (https://github.com/pydata/patsy/issues/5),
but Nathaniel is right that it would be better to bring it up on this list.

The problem is that if you use pip, it doesn't detect numpy (may work
better if you had installed numpy with setuptools) and tries to
automatically install or upgrade numpy. That won't work if users don't have
the right compiler. Just as bad would be that it does work, and the user
didn't want to upgrade for whatever reason.

This isn't just my problem; at Wes' pandas tutorial at EuroScipy I saw
other people have the exact same problem. My recommendation would be to not
use install_requires for numpy, but simply do something like this in
setup.py:

try:
    import numpy
except ImportError:
    raise ImportError("my_package requires numpy")

or

try:
    from numpy.version import short_version as npversion
except ImportError:
    raise ImportError("my_package requires numpy")
if npversion < '1.6':
    raise ImportError("Numpy version is %s; required is version >= 1.6"
                      % npversion)
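
(One caveat with the plain string comparison above: it happens to work
for versions like '1.6' vs '1.8', but would misorder '1.10'. A sketch
of a more robust variant, using distutils' LooseVersion:)

from distutils.version import LooseVersion

try:
    import numpy
except ImportError:
    raise ImportError("my_package requires numpy")

# LooseVersion compares version components numerically, so
# '1.10' > '1.6' comes out right where string comparison would not.
if LooseVersion(numpy.version.short_version) < LooseVersion('1.6'):
    raise ImportError("Numpy version is %s; required is version >= 1.6"
                      % numpy.version.short_version)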

Any objections, better ideas? Is there a good place to put it in the numpy
docs somewhere?

Ralf


Re: [Numpy-discussion] specifying numpy as dependency in your project, install_requires

2012-09-21 Thread Travis Oliphant

On Sep 21, 2012, at 3:13 PM, Ralf Gommers wrote:

> [...]
>
> Any objections, better ideas? Is there a good place to put it in the numpy
> docs somewhere?

I agree. I would recommend against using install_requires.

-Travis




Re: [Numpy-discussion] specifying numpy as dependency in your project, install_requires

2012-09-21 Thread Benjamin Root
On Fri, Sep 21, 2012 at 4:19 PM, Travis Oliphant tra...@continuum.io wrote:


> On Sep 21, 2012, at 3:13 PM, Ralf Gommers wrote:
>
> [...]
>
> I agree. I would recommend against using install_requires.
>
> -Travis


Why?  I have personally never had an issue with this.  The only way I could
imagine that this wouldn't work is if numpy was installed via some other
means and there wasn't an entry in the easy-install.pth (or whatever
equivalent pip uses).  If pip is having a problem detecting numpy, then
that is a bug that needs fixing somewhere.

As for packages getting updated unintentionally, easy_install and pip both
require an argument to upgrade any existing packages (I think -U), so I am
not sure how you are running into such a situation.

I have found install_requires to be a powerful feature in my setup.py
scripts, and I have seen no reason to discourage it.  Perhaps I am the only
one?

Ben Root


Re: [Numpy-discussion] specifying numpy as dependency in your project, install_requires

2012-09-21 Thread Andreas Hilboll
On Fri, 21 Sep 2012 22:37:13 CEST, Benjamin Root wrote:


> [...]
>
> As for packages getting updated unintentionally, easy_install and pip
> both require an argument to upgrade any existing packages (I think
> -U), so I am not sure how you are running into such a situation.

Quite easily, actually. I ran into pip wanting to upgrade numpy when I 
was installing/upgrading a package depending on numpy. Problem is, -U 
upgrades both the package you explicitly select *and* its dependencies. 
I know there's some way around this, but it's not obvious -- at least 
not for users.

Cheers, Andreas.



Re: [Numpy-discussion] specifying numpy as dependency in your project, install_requires

2012-09-21 Thread Ralf Gommers
On Fri, Sep 21, 2012 at 10:37 PM, Benjamin Root ben.r...@ou.edu wrote:



> [...]
>
> Why?  I have personally never had an issue with this.  The only way I
> could imagine that this wouldn't work is if numpy was installed via some
> other means and there wasn't an entry in the easy-install.pth (or whatever
> equivalent pip uses).


Eh, just installing numpy with "python setup.py install" uses plain
distutils, not setuptools. So there indeed isn't an entry in
easy-install.pth. Which some consider a feature :)


> If pip is having a problem detecting numpy, then that is a bug that needs
> fixing somewhere.


Sure. But who's going to do that?


> As for packages getting updated unintentionally, easy_install and pip both
> require an argument to upgrade any existing packages (I think -U), so I am
> not sure how you are running into such a situation.


No, if the version detection fails pip will happily upgrade my 1.8.0-dev
to 1.6.2.


> I have found install_requires to be a powerful feature in my setup.py
> scripts, and I have seen no reason to discourage it.  Perhaps I am the only
> one?


I'm sure you're not the only one. But it's still severely broken.

Ralf


Re: [Numpy-discussion] specifying numpy as dependency in your project, install_requires

2012-09-21 Thread Frédéric Bastien
On Fri, Sep 21, 2012 at 4:37 PM, Benjamin Root ben.r...@ou.edu wrote:


> [...]
>
> As for packages getting updated unintentionally, easy_install and pip both
> require an argument to upgrade any existing packages (I think -U), so I am
> not sure how you are running into such a situation.

If a user uses that option, it will also try to update NumPy. This is
bad default behavior. The workaround is to pass -U together with
--no-deps so that the dependencies don't get updated. People don't want
to update numpy when they update another package such as Theano.

> I have found install_requires to be a powerful feature in my setup.py
> scripts, and I have seen no reason to discourage it.  Perhaps I am the only
> one?

What about this: if numpy is already installed and recent enough, don't
put it in install_requires; if it isn't there, add it? It would still
fail if no C compiler is available, but maybe it wouldn't try to update
numpy at the same time?
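
A rough sketch of that idea (a hypothetical setup.py fragment; the
version threshold is just a placeholder):

from distutils.version import LooseVersion

install_requires = []
try:
    import numpy
    # numpy present: only require it if it's too old
    if LooseVersion(numpy.version.short_version) < LooseVersion('1.6'):
        install_requires.append('numpy >= 1.6')
except ImportError:
    # numpy absent: let setuptools/pip try to install it
    install_requires.append('numpy >= 1.6')

# then: setup(..., install_requires=install_requires, ...)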

Fred


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-21 Thread Chris Barker
On Fri, Sep 21, 2012 at 10:03 AM, Nathaniel Smith n...@pobox.com wrote:

> You're right of course. What I meant is that
>   a += b
> should produce the same result as
>   a[...] = a + b
>
> If we change the casting rule for the first one but not the second, though,
> then these will produce different results if a is integer and b is float:

I certainly agree that we would want that; however, numpy still needs
to deal with Python semantics, which means that while (at the numpy
level) we can control what "a[...] =" means, and we can control what
"a + b" produces, we can't change what "a + b" means depending on the
context of the left-hand side.

That means we need to do the casting at the assignment stage, which I
guess is your point -- so:

a_int += a_float

should do the addition with the regular casting rules, then cast to
an int after doing that.

I'm not sure about the implementation details.

Oh, and:

a += b

should be the same as

a[...] = a + b

should be the same as

np.add(a, b, out=a)

though I'm not sure what the story is with that at this point.
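
All three spellings, side by side (plain float arrays, so no casting
questions arise):

import numpy as np

a1 = np.arange(3.0)
a2 = a1.copy()
a3 = a1.copy()
b = np.ones(3)

a1 += b
a2[...] = a2 + b
np.add(a3, b, out=a3)

print(np.array_equal(a1, a2) and np.array_equal(a2, a3))   # True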

-Chris




-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR             (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] specifying numpy as dependency in your project, install_requires

2012-09-21 Thread Nathaniel Smith
On Fri, Sep 21, 2012 at 9:42 PM, Ralf Gommers ralf.gomm...@gmail.com wrote:
> Eh, just installing numpy with "python setup.py install" uses plain
> distutils, not setuptools. So there indeed isn't an entry in
> easy-install.pth. Which some consider a feature :)

I don't think this is correct. To be clear on the technical issue:
what's going on is that when pip sees install_requires=[numpy], it
needs to check whether you already have the distribution called
numpy installed. It turns out that in the wonderful world of python
packaging, distributions are not quite the same as packages, so it
can't do this by searching PYTHONPATH for a numpy directory. What it
does is search PYTHONPATH for a file named
numpy-<version-number>.egg-info [1]. This isn't *quite* as dumb as it
seems, because in practice there really isn't a 1-to-1 mapping between
source distributions and installed packages, but it's... pretty dumb.
Anyway. The problem is that Ralf installed numpy by doing an in-place
build in his source tree, and then adding his source tree to his
PYTHONPATH. But, he didn't put a .egg-info on his PYTHONPATH, so pip
couldn't tell that numpy was installed, and did something dumb.

So the question is, how do we get a .egg-info? For the specific case
Ralf ran into, I'm pretty sure the solution is just that if you're
clever enough to do an in-place build and add it to your PYTHONPATH,
you should be clever enough to also run 'python setupegg.py egg_info'
which will create a .egg-info to go with your in-place build and
everything will be fine.
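
A sketch of what the detection amounts to, via setuptools'
pkg_resources (the machinery pip leaned on at the time):

import pkg_resources

try:
    dist = pkg_resources.get_distribution("numpy")
    print(dist.project_name, dist.version, dist.location)
except pkg_resources.DistributionNotFound:
    # Can happen even when 'import numpy' works -- e.g. an in-place
    # build on PYTHONPATH with no matching .egg-info.
    print("no numpy distribution metadata found")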

The question is whether there are any other situations where this can
break. I'm not aware of any. Contrary to what's claimed in the bit I
quoted above, I just ran a plain vanilla 'python setup.py install' on
numpy inside a virtualenv, and I ended up with a .egg-info installed.
I'm pretty sure plain old distutils installs .egg-infos these days
too. In that bug report Ralf says there's some problem with
virtualenvs, but I'm not sure what (I use virtualenvs extensively and
have never run into anything). Can anyone elaborate?

[1] or several other variants, see some PEP or another for the tedious details.

-n

P.S.: yeah the thing where pip decides to upgrade the world is REALLY
OBNOXIOUS. It also appears to be on the list to be fixed in the next
release or the next release+1, so I guess there's hope?:
https://github.com/pypa/pip/pull/571


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-21 Thread Nathaniel Smith
On Fri, Sep 21, 2012 at 10:04 PM, Chris Barker chris.bar...@noaa.gov wrote:
> On Fri, Sep 21, 2012 at 10:03 AM, Nathaniel Smith n...@pobox.com wrote:
>
> [...]
>
> That means we need to do the casting at the assignment stage, which I
> guess is your point -- so:
>
> a_int += a_float
>
> should do the addition with the regular casting rules, then cast to
> an int after doing that.

Yes, that seems to be what happens.

In [1]: a = np.arange(3)

In [2]: a *= 1.5

In [3]: a
Out[3]: array([0, 1, 3])

But still, the question is, can and should we tighten up the
assignment casting rules to same_kind or similar?
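
For concreteness, a sketch of what same_kind would mean here (behavior
as of the 1.7 betas under discussion):

import numpy as np

a = np.arange(3)
try:
    np.add(a, 1.5, out=a, casting='same_kind')
except TypeError as e:
    print("refused:", e)        # float64 -> int64 is not a same-kind cast

np.add(a, 1.5, out=a, casting='unsafe')   # explicit opt-in still works
print(a)                                   # [1 2 3]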

-n


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-21 Thread Eric Firing
On 2012/09/21 12:20 PM, Nathaniel Smith wrote:
> [...]
>
> But still, the question is, can and should we tighten up the
> assignment casting rules to same_kind or similar?

An example of where tighter casting seems undesirable is the case of 
functions that return integer values with floating point dtype, such as 
rint().  It seems natural to do something like

In [1]: ind = np.empty((3,), dtype=int)

In [2]: rint(np.arange(3, dtype=float) / 3, out=ind)
Out[2]: array([0, 0, 1])

where one is generating integer indices based on some manipulation of 
floating point numbers.  This works in 1.6 but fails in 1.7.

Eric



Re: [Numpy-discussion] specifying numpy as dependency in your project, install_requires

2012-09-21 Thread josef.pktd
On Fri, Sep 21, 2012 at 5:39 PM, Nathaniel Smith n...@pobox.com wrote:
> [...]
>
> P.S.: yeah the thing where pip decides to upgrade the world is REALLY
> OBNOXIOUS. It also appears to be on the list to be fixed in the next
> release or the next release+1, so I guess there's hope?:
> https://github.com/pypa/pip/pull/571

In statsmodels we moved to the check that Ralf proposes, and no install_requires.

When I'm easy_installing a package I always need to watch out for the
package trying to upgrade numpy.
I just had to hit Ctrl-C several times when the install_requires of
pandas tried to update my numpy version.

Josef



Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-21 Thread Charles R Harris
On Fri, Sep 21, 2012 at 5:51 PM, Eric Firing efir...@hawaii.edu wrote:

> [...]
>
> An example of where tighter casting seems undesirable is the case of
> functions that return integer values with floating point dtype, such as
> rint().  It seems natural to do something like
>
> In [1]: ind = np.empty((3,), dtype=int)
>
> In [2]: rint(np.arange(3, dtype=float) / 3, out=ind)
> Out[2]: array([0, 0, 1])
>
> where one is generating integer indices based on some manipulation of
> floating point numbers.  This works in 1.6 but fails in 1.7.


In [16]: rint(arange(3, dtype=float)/3, out=ind, casting='unsafe')
Out[16]: array([0, 0, 1])

I'm not sure how to make this backward compatible though.

Chuck