Re: [Numpy-discussion] Numpy-vendor vcvarsall.bat problem.

2015-08-07 Thread Charles R Harris
On Fri, Aug 7, 2015 at 3:33 AM, David Cournapeau courn...@gmail.com wrote:

 Which command exactly did you run to get that error? Normally, the code
 in msvc9compiler should not be called if you call setup.py with the
 mingw compiler, as expected by distutils.


I'm running numpy-vendor, which is running Wine inside Ubuntu inside a VM.
The relevant commands are:

run("rm -rf ../local")
run("paver sdist")
run("python setup.py install --prefix ../local")
run("paver pdf")
run("paver bdist_superpack -p 3.4")
run("paver bdist_superpack -p 3.3")
run("paver bdist_superpack -p 2.7")
run("paver write_release_and_log")
run("paver bdist_wininst_simple -p 2.7")
run("paver bdist_wininst_simple -p 3.3")
run("paver bdist_wininst_simple -p 3.4")

None of which look suspicious. I think we may have changed something in
numpy/distutils, possibly as part of
https://github.com/numpy/numpy/pull/6152

Chuck


Re: [Numpy-discussion] Numpy-vendor vcvarsall.bat problem.

2015-08-07 Thread Jaime Fernández del Río
On Fri, Aug 7, 2015 at 2:33 AM, David Cournapeau courn...@gmail.com wrote:

 Which command exactly did you run to get that error? Normally, the code
 in msvc9compiler should not be called if you call setup.py with the
 mingw compiler, as expected by distutils.


FWIW, the incantation that works for me to compile numpy on Windows with
mingw is:

python setup.py config --compiler=mingw32 build --compiler=mingw32 install

but I am not sure I have ever tried it with Python 3.

I think my source for this was:

http://nipy.sourceforge.net/nipy/devel/devel/install/windows_scipy_build.html
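
For what it's worth, the same compiler selection can usually be put into a
setup.cfg next to setup.py so it does not have to be repeated on every
command; a minimal sketch, assuming the standard distutils [build]/[config]
sections (not something I have verified against the current tree):

    [build]
    compiler = mingw32

    [config]
    compiler = mingw32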

Jaime



 On Fri, Aug 7, 2015 at 12:19 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Thu, Aug 6, 2015 at 5:11 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Thu, Aug 6, 2015 at 4:22 PM, David Cournapeau courn...@gmail.com
 wrote:

 Sorry if that's obvious, but do you have Visual Studio 2010 installed?

 On Thu, Aug 6, 2015 at 11:17 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:

 Anyone know how to fix this? I've run into it before and never got it
 figured out.

 [192.168.121.189:22] out:   File
 "C:\Python34\lib\distutils\msvc9compiler.py", line 259, in query_vcvarsall
 [192.168.121.189:22] out:
 [192.168.121.189:22] out: raise DistutilsPlatformError("Unable to
 find vcvarsall.bat")
 [192.168.121.189:22] out:
 [192.168.121.189:22] out: distutils.errors.DistutilsPlatformError:
 Unable to find vcvarsall.bat

 Chuck



 I'm running numpy-vendor, which is running Wine. I think it is all mingw
 with a few installed DLLs. The error is coming from the Python distutils
 as part of `has_cblas`.


 It's not impossible that we have changed the build somewhere along the
 line.

 Chuck






-- 
(\__/)
( O.o)
(> <) This is Bunny. Copy Bunny into your signature and help him with his
plans for world domination.


Re: [Numpy-discussion] numpy-vendor cythonize problem

2015-08-07 Thread Ralf Gommers
On Fri, Aug 7, 2015 at 2:44 AM, Charles R Harris charlesr.har...@gmail.com
wrote:

 I note that the current numpy-vendor fails to cythonize in Windows builds.
 Cython is installed, but I assume it also needs to be installed in each of
 the Python versions in Wine. Because the need to cythonize was already
 present in 1.9, I assume that the problem has been solved, but the solution
 is not present in numpy-vendor in the numpy repos.


It's easy to work around by running cythonize in the Linux env you're using
numpy-vendor in; then the files don't need to be generated in the Windows
build. I think I've done that before.
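
For concreteness, a sketch of that workaround, assuming the cythonize helper
still lives at tools/cythonize.py in the numpy repo:

    # on the Linux side, before starting the Windows/Wine builds
    cd numpy
    python tools/cythonize.py   # may need the target package path as an argument
    # the pre-generated C files then go into the sdist, so the Windows build
    # never has to invoke Cython under Wine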

You only need one Windows Cython installed if the cython script is found.
If not, you go to this except clause which indeed needs a Cython for every
Python version: https://github.com/numpy/numpy/commit/dd220014373f

A change similar to the f2py fix in
https://github.com/numpy/numpy/commit/dd220014373f will likely fix it.

Ralf


Re: [Numpy-discussion] Shared memory check on in-place modification.

2015-08-07 Thread srean
Wait, when assignments and slicing mix, wasn't the behavior supposed to be
equivalent to copying the RHS to a temporary and then assigning using the
temporary? Is that a false memory, or has the behavior changed? As long
as the behavior is well defined and succinct it should be OK.


On Tuesday, July 28, 2015, Sebastian Berg sebast...@sipsolutions.net
wrote:


 On Mon Jul 27 22:51:52 2015 GMT+0200, Sturla Molden wrote:
  On 27/07/15 22:10, Anton Akhmerov wrote:
   Hi everyone,
  
   I have encountered an initially rather confusing problem in a piece of
   code that attempted to symmetrize a matrix: `h += h.T`
   The problem of course appears due to `h.T` being a view of `h`, and
   some elements being overwritten during the __iadd__ call.
 

 I think the typical proposal is to raise a warning. Note there is
 np.may_share_memory. But the logic to give the warning is possibly not
 quite easy, since this is OK to use sometimes. If someone figures it out
 (mostly) I would be very happy to see such warnings.


  Here is another example
 
  >>> a = np.ones(10)
  >>> a[1:] += a[:-1]
  >>> a
  array([ 1.,  2.,  3.,  2.,  3.,  2.,  3.,  2.,  3.,  2.])
 
  I am not sure I totally dislike this behavior. If it could be made
  consistent it could be used to vectorize recursive algorithms. In the case
  above I would prefer the output to be:
 
  array([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.,  10.])
 
  It does not happen because we do not enforce that the result of one
  operation is stored before the next two operands are read. The only way
  to speed up recursive equations today is to use compiled code.
 
 
  Sturla
 
 



Re: [Numpy-discussion] Shared memory check on in-place modification.

2015-08-07 Thread Sebastian Berg
On Fr, 2015-08-07 at 13:14 +0530, srean wrote:
 Wait, when assignments and slicing mix, wasn't the behavior supposed to
 be equivalent to copying the RHS to a temporary and then assigning
 using the temporary? Is that a false memory, or has the behavior
 changed? As long as the behavior is well defined and succinct it
 should be OK.
 

No, NumPy has never done that as far as I know. And since SIMD
instructions etc. make this even less predictable (you used to be able
to abuse the in-place logic, but usually the same can be done with
ufunc.accumulate, so it was a bad idea anyway), you have to avoid it.

Pauli is currently working on implementing the logic needed to find out
whether such a copy is necessary [1], which is very cool indeed. So I think
it is likely we will have such copy logic in NumPy 1.11.

- Sebastian


[1] See https://github.com/numpy/numpy/pull/6166; it is not an easy
problem.
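
To make the hazard concrete, here is a small sketch (a hypothetical session,
not taken from the thread) showing the overlap check and the safe spellings:

    import numpy as np

    h = np.arange(16.0).reshape(4, 4)
    np.may_share_memory(h, h.T)   # True: the view overlaps the array being written
    h += h.T                      # elements of h.T may be read after they have
                                  # already been overwritten, so the result is not
                                  # reliably the symmetrized matrix

    # safe spellings make the temporary explicit (either one, on a fresh h):
    h = np.arange(16.0).reshape(4, 4)
    sym = h + h.T                 # binary op writes into a freshly allocated array
    sym2 = h + h.T.copy()         # or copy the right-hand side before using it

    # the a[1:] += a[:-1] example from the thread has the same aliasing issue;
    # the cumulative result wanted there is what ufunc.accumulate gives:
    np.add.accumulate(np.ones(10))   # array([ 1., 2., ..., 10.])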


 On Tuesday, July 28, 2015, Sebastian Berg sebast...@sipsolutions.net
 wrote:
 
 
 
 On Mon Jul 27 22:51:52 2015 GMT+0200, Sturla Molden wrote:
  On 27/07/15 22:10, Anton Akhmerov wrote:
   Hi everyone,
  
   I have encountered an initially rather confusing problem
 in a piece of
   code that attempted to symmetrize a matrix: `h += h.T`
   The problem of course appears due to `h.T` being a view of
 `h`, and
   some elements being overwritten during the __iadd__ call.
 
 
  I think the typical proposal is to raise a warning. Note there
  is np.may_share_memory. But the logic to give the warning is
  possibly not quite easy, since this is OK to use sometimes. If
  someone figures it out (mostly) I would be very happy to see
  such warnings.
 
 
  Here is another example
 
   >>> a = np.ones(10)
   >>> a[1:] += a[:-1]
   >>> a
   array([ 1.,  2.,  3.,  2.,  3.,  2.,  3.,  2.,  3.,  2.])
 
  I am not sure I totally dislike this behavior. If it could
 be made
  consistent it could be used to vectorize recursive algorithms.
 In the case
  above I would prefer the output to be:
 
  array([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.,  10.])
 
  It does not happen because we do not enforce that the result
 of one
  operation is stored before the next two operands are read.
 The only way
  to speed up recursive equations today is to use compiled
 code.
 
 
  Sturla
 
 





Re: [Numpy-discussion] Numpy-vendor vcvarsall.bat problem.

2015-08-07 Thread David Cournapeau
Which command exactly did you run to get that error? Normally, the code
in msvc9compiler should not be called if you call setup.py with the
mingw compiler, as expected by distutils.

On Fri, Aug 7, 2015 at 12:19 AM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Thu, Aug 6, 2015 at 5:11 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Thu, Aug 6, 2015 at 4:22 PM, David Cournapeau courn...@gmail.com
 wrote:

 Sorry if that's obvious, but do you have Visual Studio 2010 installed?

 On Thu, Aug 6, 2015 at 11:17 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:

 Anyone know how to fix this? I've run into it before and never got it
 figured out.

 [192.168.121.189:22] out:   File
 "C:\Python34\lib\distutils\msvc9compiler.py", line 259, in query_vcvarsall
 [192.168.121.189:22] out:
 [192.168.121.189:22] out: raise DistutilsPlatformError("Unable to
 find vcvarsall.bat")
 [192.168.121.189:22] out:
 [192.168.121.189:22] out: distutils.errors.DistutilsPlatformError:
 Unable to find vcvarsall.bat

 Chuck



 I'm running numpy-vendor, which is running Wine. I think it is all mingw
 with a few installed DLLs. The error is coming from the Python distutils
 as part of `has_cblas`.


 It's not impossible that we have changed the build somewhere along the
 line.

 Chuck




Re: [Numpy-discussion] Shared memory check on in-place modification.

2015-08-07 Thread srean
I got misled by (extrapolated erroneously from) this description of
temporaries in the documentation:

http://docs.scipy.org/doc/numpy/user/basics.indexing.html#assigning-values-to-indexed-arrays

"... x[np.array([1, 1, 3, 1])] += 1 ... a new array is extracted from the
original (as a temporary) containing the values at 1, 1, 3, 1, then the value
1 is added to the temporary, and then the temporary is assigned back to the
original array. Thus the value of the array at x[1]+1 is assigned to x[1]
three times, rather than being incremented 3 times."

It is talking about a slightly different scenario, of course: there the
temporary corresponds to the LHS. Anyhow, as long as the behavior is defined
rigorously it should not be a problem. Now, I vaguely remember abusing
ufuncs and aliasing in interactive sessions for some weird cumsum-like
operations (I plead bashfully guilty).
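
For that fancy-indexing case, numpy does provide an unbuffered alternative,
ufunc.at (available since 1.8, if I remember right), which gives the
increment-three-times behavior; a small sketch with made-up values:

    import numpy as np

    x = np.zeros(4, dtype=int)
    x[np.array([1, 1, 3, 1])] += 1
    # x is now array([0, 1, 0, 1]): the temporary is written back once,
    # so x[1] ends up incremented a single time

    x = np.zeros(4, dtype=int)
    np.add.at(x, [1, 1, 3, 1], 1)
    # x is now array([0, 3, 0, 1]): add.at applies the operation unbuffered,
    # so repeated indices accumulate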


On Fri, Aug 7, 2015 at 1:38 PM, Sebastian Berg sebast...@sipsolutions.net
wrote:

 On Fr, 2015-08-07 at 13:14 +0530, srean wrote:
  Wait, when assignments and slicing mix, wasn't the behavior supposed to
  be equivalent to copying the RHS to a temporary and then assigning
  using the temporary? Is that a false memory, or has the behavior
  changed? As long as the behavior is well defined and succinct it
  should be OK.
 

 No, NumPy has never done that as far as I know. And since SIMD
 instructions etc. make this even less predictable (you used to be able
 to abuse the in-place logic, but usually the same can be done with
 ufunc.accumulate, so it was a bad idea anyway), you have to avoid it.

 Pauli is currently working on implementing the logic needed to find out
 whether such a copy is necessary [1], which is very cool indeed. So I think
 it is likely we will have such copy logic in NumPy 1.11.

 - Sebastian


 [1] See https://github.com/numpy/numpy/pull/6166; it is not an easy
 problem.


  On Tuesday, July 28, 2015, Sebastian Berg sebast...@sipsolutions.net
  wrote:
 
 
 
  On Mon Jul 27 22:51:52 2015 GMT+0200, Sturla Molden wrote:
   On 27/07/15 22:10, Anton Akhmerov wrote:
Hi everyone,
   
I have encountered an initially rather confusing problem
  in a piece of
code that attempted to symmetrize a matrix: `h += h.T`
The problem of course appears due to `h.T` being a view of
  `h`, and
some elements being overwritten during the __iadd__ call.
  
 
   I think the typical proposal is to raise a warning. Note there
   is np.may_share_memory. But the logic to give the warning is
   possibly not quite easy, since this is OK to use sometimes. If
   someone figures it out (mostly) I would be very happy to see
   such warnings.
 
 
   Here is another example
  
    >>> a = np.ones(10)
    >>> a[1:] += a[:-1]
    >>> a
    array([ 1.,  2.,  3.,  2.,  3.,  2.,  3.,  2.,  3.,  2.])
  
   I am not sure I totally dislike this behavior. If it could
  be made
   consistent it could be used to vectorize recursive algorithms.
  In the case
   above I would prefer the output to be:
  
   array([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9.,  10.])
  
   It does not happen because we do not enforce that the result
  of one
   operation is stored before the next two operands are read.
  The only way
   to speed up recursive equations today is to use compiled
  code.
  
  
   Sturla
  
  


Re: [Numpy-discussion] Numpy-vendor vcvarsall.bat problem.

2015-08-07 Thread Charles R Harris
On Fri, Aug 7, 2015 at 9:36 AM, Charles R Harris charlesr.har...@gmail.com
wrote:

 So the problem comes from the has_cblas function

 def has_cblas(self):
     # primitive cblas check by looking for the header
     res = False
     c = distutils.ccompiler.new_compiler()
     tmpdir = tempfile.mkdtemp()
     s = "#include <cblas.h>"
     src = os.path.join(tmpdir, 'source.c')
     try:
         with open(src, 'wt') as f:
             f.write(s)
         try:
             c.compile([src], output_dir=tmpdir,
                       include_dirs=self.get_include_dirs())
             res = True
         except distutils.ccompiler.CompileError:
             res = False
     finally:
         shutil.rmtree(tmpdir)
     return res

 The problem is the test compile, which does not use the mingw compiler but
 falls back to the default compiler found by Python's distutils. Not sure
 what the fix is.


See #6175: https://github.com/numpy/numpy/pull/6175

Chuck


Re: [Numpy-discussion] Numpy-vendor vcvarsall.bat problem.

2015-08-07 Thread Charles R Harris
On Fri, Aug 7, 2015 at 8:02 AM, Charles R Harris charlesr.har...@gmail.com
wrote:



 On Fri, Aug 7, 2015 at 3:33 AM, David Cournapeau courn...@gmail.com
 wrote:

 Which command exactly did you run to get that error? Normally, the code
 in msvc9compiler should not be called if you call setup.py with the
 mingw compiler, as expected by distutils.


 I'm running numpy-vendor, which is running Wine inside Ubuntu inside a VM.
 The relevant commands are:

 run("rm -rf ../local")
 run("paver sdist")
 run("python setup.py install --prefix ../local")
 run("paver pdf")
 run("paver bdist_superpack -p 3.4")
 run("paver bdist_superpack -p 3.3")
 run("paver bdist_superpack -p 2.7")
 run("paver write_release_and_log")
 run("paver bdist_wininst_simple -p 2.7")
 run("paver bdist_wininst_simple -p 3.3")
 run("paver bdist_wininst_simple -p 3.4")

 None of which look suspicious. I think we may have changed something in
 numpy/distutils, possibly as part of
 https://github.com/numpy/numpy/pull/6152


Actually, it looks like commit b6d0263239926e8b14ebc26a0d7b9469fa7866d4.
Hmm, strange.

Chuck


Re: [Numpy-discussion] Numpy-vendor vcvarsall.bat problem.

2015-08-07 Thread Charles R Harris
So the problem comes from the has_cblas function

def has_cblas(self):
    # primitive cblas check by looking for the header
    res = False
    c = distutils.ccompiler.new_compiler()
    tmpdir = tempfile.mkdtemp()
    s = "#include <cblas.h>"
    src = os.path.join(tmpdir, 'source.c')
    try:
        with open(src, 'wt') as f:
            f.write(s)
        try:
            c.compile([src], output_dir=tmpdir,
                      include_dirs=self.get_include_dirs())
            res = True
        except distutils.ccompiler.CompileError:
            res = False
    finally:
        shutil.rmtree(tmpdir)
    return res

The problem is the test compile, which does not use the mingw compiler but
falls back to the default compiler found by Python's distutils. Not sure what
the fix is.
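
One possible direction, purely as a sketch (not claiming this is what the
eventual fix does), would be to make the probe honour whatever compiler was
requested instead of the platform default:

    # sketch: build the probe compiler with the same toolchain the rest of
    # the build was asked to use, rather than calling new_compiler() bare
    c = distutils.ccompiler.new_compiler(compiler='mingw32')
    # more generally, thread the --compiler value given on the command line
    # through to this call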

Chuck


Re: [Numpy-discussion] Numpy-vendor vcvarsall.bat problem.

2015-08-07 Thread Charles R Harris
On Fri, Aug 7, 2015 at 8:16 AM, Charles R Harris charlesr.har...@gmail.com
wrote:



 On Fri, Aug 7, 2015 at 8:02 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Fri, Aug 7, 2015 at 3:33 AM, David Cournapeau courn...@gmail.com
 wrote:

  Which command exactly did you run to get that error? Normally, the
  code in msvc9compiler should not be called if you call setup.py with
  the mingw compiler, as expected by distutils.


 I'm running numpy-vendor, which is running Wine inside Ubuntu inside a VM.
 The relevant commands are:

 run("rm -rf ../local")
 run("paver sdist")
 run("python setup.py install --prefix ../local")
 run("paver pdf")
 run("paver bdist_superpack -p 3.4")
 run("paver bdist_superpack -p 3.3")
 run("paver bdist_superpack -p 2.7")
 run("paver write_release_and_log")
 run("paver bdist_wininst_simple -p 2.7")
 run("paver bdist_wininst_simple -p 3.3")
 run("paver bdist_wininst_simple -p 3.4")

  None of which look suspicious. I think we may have changed something in
  numpy/distutils, possibly as part of
  https://github.com/numpy/numpy/pull/6152


 Actually, it looks like commit b6d0263239926e8b14ebc26a0d7b9469fa7866d4.
 Hmm, strange.


OK, that just leads to an earlier cythonize error because random.pyx
changed, so that's not the root cause.

Chuck