Re: [Numpy-discussion] numpy ndarray questions

2009-01-27 Thread Sturla Molden
On 1/27/2009 6:03 AM, Jochen wrote:

 BTW memmap arrays have the same problem: if I create a memmap array
 and later do something like
 a = a + 1
 all later changes will not be written to the file.

= is Python's rebinding operator.

a = a + 1 rebinds a to a different object.

As for ndarrays, I'd like to point out the difference between

a[:] = a + 1

and

a = a + 1
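
A minimal demonstration of that difference (using a plain array in place
of the memmap):

```python
import numpy as np

a = np.zeros(3)
b = a            # a second name bound to the same array
a = a + 1        # rebinds 'a' to a brand-new array; b is untouched

c = np.zeros(3)
d = c
c[:] = c + 1     # writes into the existing buffer; d sees the change
```

After this, b still holds zeros, while d holds ones, because c and d are
the same object.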



S.M.




___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy ndarray questions

2009-01-27 Thread Sturla Molden
On 1/27/2009 1:26 AM, Jochen wrote:

 a = fftw3.AlignedArray(1024,complex)
 
 a = a+1

= used this way is not assignment, it is name binding.

It is easy to use functions like fftw_malloc with NumPy:


import ctypes
import numpy

fftw_malloc = ctypes.cdll.fftw.fftw_malloc
fftw_malloc.argtypes = [ctypes.c_ulong,]
fftw_malloc.restype = ctypes.c_ulong

def aligned_array(N, dtype):
    d = dtype()
    address = fftw_malloc(N * d.nbytes) # restype = ctypes.c_ulong
    if (address = 0): raise MemoryError, 'fftw_malloc returned NULL'
    class Dummy(object): pass
    d = Dummy()
    d.__array_interface__ = {
        'data' : (address, False),
        'typestr' : dtype.str,
        'descr' : dtype.descr,
        'shape' : (N,),
        'strides' : None,   # None means a contiguous C layout
        'version' : 3
    }
    return numpy.asarray(d)


If you have to check for a particular alignment before calling fftw, 
that is trivial as well:


def is_aligned(array, boundary):
 address = array.__array_interface__['data'][0]
 return not(address % boundary)




 there a way that I get a different object type?  Or even better is there
 a way to prevent operations like a=a+1 or make them automatically
 in-place operations?

a = a + 1 # rebinds the name 'a' to another array

a[:] = a + 1 # fills a with the result of a + 1


This has to do with Python syntax, not NumPy per se. You cannot overload 
the behaviour of Python's name binding operator (=).


Sturla Molden





Re: [Numpy-discussion] numpy ndarray questions

2009-01-27 Thread Sturla Molden
On 1/27/2009 12:37 PM, Sturla Molden wrote:

  address = fftw_malloc(N * d.nbytes) # restype = ctypes.c_ulong
  if (address = 0): ...

should read:

  if (address == 0): raise MemoryError, 'fftw_malloc returned NULL'


Sorry for the typo.


S.M.



Re: [Numpy-discussion] numpy ndarray questions

2009-01-27 Thread Sturla Molden
On 1/27/2009 12:37 PM, Sturla Molden wrote:

 It is easy to use functions like fftw_malloc with NumPy:

Besides this, if I were to write a wrapper for FFTW in Python, I would 
consider wrapping FFTW's Fortran interface with f2py.

It is probably safer, as well as faster, than using ctypes. It would 
also allow the FFTW library to be linked statically to the Python 
extension, avoiding DLL hell.

S.M.



[Numpy-discussion] Slicing and structured arrays question

2009-01-27 Thread Hanno Klemm

Hi, 
I have the following question, that I could not find an answer to in the
example list, or by googling:

I have a record array with dtype such as:
dtype([('times', 'f8'), ('sensors', '|S8'), ('prop1', 'f8'), ('prop2',
'f8'), ('prop3', 'f8'), ('prop4', 'f8')])

I would now like to calculate the mean and median for each of the
properties 'props'. Is there a way to do this similarly to a conventional
array with

a[:,2:].mean(axis=0)

or do I have to use a loop over the names of the properties? 

Thanks in advance,
Hanno







-- 
Hanno Klemm
kl...@phys.ethz.ch




Re: [Numpy-discussion] numpy ndarray questions

2009-01-27 Thread Sturla Molden
On 1/27/2009 12:37 PM, Sturla Molden wrote:

 import ctypes
 import numpy
 
 fftw_malloc = ctypes.cdll.fftw.fftw_malloc
 fftw_malloc.argtypes = [ctypes.c_ulong,]
 fftw_malloc.restype = ctypes.c_ulong
 
 def aligned_array(N, dtype):
  d = dtype()
  address = fftw_malloc(N * d.nbytes) # restype = ctypes.c_ulong
  if (address = 0): raise MemoryError, 'fftw_malloc returned NULL'
  class Dummy(object): pass
  d = Dummy()
  d.__array_interface__ = {
  'data' = (address, False),
  'typestr' : dtype.str,
  'descr' : dtype.descr,
  'shape' : shape,
  'strides' : strides,
  'version' : 3
  }
  return numpy.asarray(d)


Or if you just want to use numpy, aligning to a 16 byte boundary
can be done like this:


def aligned_array(N, dtype):
    d = dtype()
    tmp = numpy.zeros(N * d.nbytes + 16, dtype=numpy.uint8)
    address = tmp.__array_interface__['data'][0]
    offset = (16 - address % 16) % 16
    return tmp[offset:offset + N * d.nbytes].view(dtype=dtype)
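
A quick sanity check (restated here as a self-contained snippet, using
numpy.dtype(...).itemsize so it also accepts plain Python types):

```python
import numpy

def aligned_array(N, dtype):
    # Over-allocate by 16 bytes, then slice at the first 16-byte boundary.
    nbytes = N * numpy.dtype(dtype).itemsize
    tmp = numpy.zeros(nbytes + 16, dtype=numpy.uint8)
    address = tmp.__array_interface__['data'][0]
    offset = (16 - address % 16) % 16
    return tmp[offset:offset + nbytes].view(dtype=dtype)

a = aligned_array(1024, numpy.complex128)
```

The returned array's data pointer is always on a 16-byte boundary, and
the big uint8 buffer stays alive as the view's base.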


S.M.


[Numpy-discussion] PyArray_Zeros

2009-01-27 Thread Hanni Ali
Hi,

I have been having trouble with the PyArray_Zeros/PyArray_ZEROS functions. I
cannot seem to create an array using these functions.

resultArray = PyArray_ZEROS(otherArray->nd, otherArray->dimensions,
NPY_DOUBLE, 0);

I would have thought this would have created an array the same shape as the
otherArray, just filled with zeros...

But I seem to get an error.

Am I doing something obviously wrong?

Hanni


Re: [Numpy-discussion] Slicing and structured arrays question

2009-01-27 Thread Travis Oliphant
Hanno Klemm wrote:
 Hi, 
 I have the following question, that I could not find an answer to in the
 example list, or by googling:

 I have a record array with dtype such as:
 dtype([('times', 'f8'), ('sensors', '|S8'), ('prop1', 'f8'), ('prop2',
 'f8'), ('prop3', 'f8'), ('prop4', 'f8')])

 I would now like to calculate the mean and median for each of the
 properties 'props'. Is there a way to do this similarly to a conventional
 array with

 a[:,2:].mean(axis=0)

 or do I have to use a loop over the names of the properties? 
   
You must first view the array as a float array in order to calculate the 
mean:

This should work:

b = a.view(float)
b.shape = (-1,6)
b[:,2:].mean(axis=0)
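
With made-up sample values for the dtype in question, the whole recipe
runs like this (the view only works because every field, including the
8-byte string, has the same itemsize):

```python
import numpy as np

# Made-up sample data with the dtype from the question
dt = np.dtype([('times', 'f8'), ('sensors', '|S8'), ('prop1', 'f8'),
               ('prop2', 'f8'), ('prop3', 'f8'), ('prop4', 'f8')])
a = np.array([(0.0, b'sens1', 1.0, 2.0, 3.0, 4.0),
              (1.0, b'sens2', 3.0, 4.0, 5.0, 6.0)], dtype=dt)

b = a.view(np.float64)  # reinterpret each 48-byte record as 6 doubles
b.shape = (-1, 6)       # column 1 (the sensor bytes) is garbage as floats
means = b[:, 2:].mean(axis=0)
```

Columns 2 onward are the four 'prop' fields, so `means` holds their
per-property means.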


-Travis



Re: [Numpy-discussion] numpy ndarray questions

2009-01-27 Thread Jochen
On Tue, 2009-01-27 at 12:37 +0100, Sturla Molden wrote:
 On 1/27/2009 1:26 AM, Jochen wrote:
 
  a = fftw3.AlignedArray(1024,complex)
  
  a = a+1
 
 = used this way is not assignment, it is name binding.
 
 It is easy to use function's like fftw_malloc with NumPy:
 
 
 import ctypes
 import numpy
 
 fftw_malloc = ctypes.cdll.fftw.fftw_malloc
 fftw_malloc.argtypes = [ctypes.c_ulong,]
 fftw_malloc.restype = ctypes.c_ulong
 
 def aligned_array(N, dtype):
  d = dtype()
  address = fftw_malloc(N * d.nbytes) # restype = ctypes.c_ulong
  if (address = 0): raise MemoryError, 'fftw_malloc returned NULL'
  class Dummy(object): pass
  d = Dummy()
  d.__array_interface__ = {
  'data' = (address, False),
  'typestr' : dtype.str,
  'descr' : dtype.descr,
  'shape' : shape,
  'strides' : strides,
  'version' : 3
  }
  return numpy.asarray(d)
 

I actually do it slightly different, because I want to free the memory
using fftw_free. So I'm subclassing ndarray and in the __new__ function
of the subclass I create a buffer object from fftw_malloc using
PyBuffer_FromReadWriteMemory and pass that to ndarray.__new__. In
__del__ I check if the .base is a buffer object and do an fftw_free.
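
A rough, hypothetical sketch of that subclass approach, with libc's
malloc/free standing in for fftw_malloc/fftw_free (the class name and
the _malloc_address attribute are made up for illustration):

```python
import ctypes
import numpy as np

# libc's malloc/free stand in for fftw_malloc/fftw_free here.
libc = ctypes.CDLL(None)
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

class MallocArray(np.ndarray):
    def __new__(cls, shape, dtype=float):
        nbytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
        address = libc.malloc(nbytes)
        if not address:
            raise MemoryError("malloc returned NULL")
        # Wrap the raw memory in a buffer object and hand it to ndarray
        buf = (ctypes.c_char * nbytes).from_address(address)
        obj = np.ndarray.__new__(cls, shape, dtype=dtype, buffer=buf)
        obj._malloc_address = address  # only the owning array gets this
        return obj

    def __del__(self):
        # Views never acquire _malloc_address, so only the owner frees
        address = self.__dict__.pop('_malloc_address', None)
        if address:
            libc.free(address)

a = MallocArray((8,), dtype=np.complex128)
a[:] = 1j
```

The key point is the same as Jochen's: tie the lifetime of the malloc'ed
block to the owning array, while views share the memory via .base.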

 
 If you have to check for a particular alignment before calling fftw, 
 that is trivial as well:
 
 
 def is_aligned(array, boundary):
  address = array.__array_interface__['data'][0]
  return not(address % boundary)
 
 

Yes I knew that. BTW, do you know of any systems which need alignment to
boundaries other than 16 bytes for SIMD operations?

 
 
  there a way that I get a different object type?  Or even better is there
  a way to prevent operations like a=a+1 or make them automatically
  in-place operations?
 
 a = a + 1 # rebinds the name 'a' to another array
 
 a[:] = a + 1 # fills a with the result of a + 1

I knew about this one, this is what I have been doing.
 
 This has to do with Python syntax, not NumPy per se. You cannot overload 
 the behaviour of Python's name binding operator (=).
 
Yes I actually thought about it some more yesterday when going home and
realised that this would really not be possible. I guess I just have to
document it clearly so that people don't run into it.

Cheers
Jochen
 
 Sturla Molden
 
 
 


Re: [Numpy-discussion] numpy ndarray questions

2009-01-27 Thread Jochen
On Tue, 2009-01-27 at 14:16 +0100, Sturla Molden wrote:
 On 1/27/2009 12:37 PM, Sturla Molden wrote:
 
  import ctypes
  import numpy
  
  fftw_malloc = ctypes.cdll.fftw.fftw_malloc
  fftw_malloc.argtypes = [ctypes.c_ulong,]
  fftw_malloc.restype = ctypes.c_ulong
  
  def aligned_array(N, dtype):
   d = dtype()
   address = fftw_malloc(N * d.nbytes) # restype = ctypes.c_ulong
   if (address = 0): raise MemoryError, 'fftw_malloc returned NULL'
   class Dummy(object): pass
   d = Dummy()
   d.__array_interface__ = {
   'data' = (address, False),
   'typestr' : dtype.str,
   'descr' : dtype.descr,
   'shape' : shape,
   'strides' : strides,
   'version' : 3
   }
   return numpy.asarray(d)
 
 
 Or if you just want to use numpy, aligning to a 16 byte boundary
 can be done like this:
 
 
 def aligned_array(N, dtype):
  d = dtype()
  tmp = numpy.array(N * d.nbytes + 16, dtype=numpy.uint8)
  address = tmp.__array_interface__['data'][0]
  offset = (16 - address % 16) % 16
  return tmp[offset:offset+N].view(dtype=dtype)
 
 
 S.M.
 
Ah, I didn't think about doing it in python, cool thanks.


[Numpy-discussion] recfunctions.stack_arrays

2009-01-27 Thread Ryan May
Pierre (or anyone else who cares to chime in),

I'm using stack_arrays to combine data from two different files into a single
array.  In one of these files, the data from one entire record comes back
missing, which, thanks to your recent change, ends up having a boolean dtype.
There is actual data for this same field in the 2nd file, so it ends up having
the dtype of float64.  When I try to combine the two arrays, I end up with the
following traceback:

data = stack_arrays((old_data, data))
  File "/home/rmay/.local/lib64/python2.5/site-packages/metpy/cbook.py", line 260, in stack_arrays
    output = ma.masked_all((np.sum(nrecords),), newdescr)
  File "/home/rmay/.local/lib64/python2.5/site-packages/numpy/ma/extras.py", line 79, in masked_all
    a = masked_array(np.empty(shape, dtype),
ValueError: two fields with the same name

Which is unsurprising.  Do you think there is any reasonable way to get
stack_arrays() to find a common dtype for fields with the same name?  Or another
suggestion on how to approach this?  If you think coercing one/both of the 
fields
to a common dtype is the way to go, just point me to a function that could 
figure
out the dtype and I'll try to put together a patch.

Thanks,

Ryan

P.S.  Thanks so much for your work on putting those utility functions in
recfunctions.py  It makes it so much easier to have these functions available in
the library itself rather than needing to reinvent the wheel over and over.

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] recfunctions.stack_arrays

2009-01-27 Thread Pierre GM
[Some background: we're talking about numpy.lib.recfunctions, a set of  
functions to manipulate structured arrays]

Ryan,
If the two files have the same structure, you can use that fact and  
specify the dtype of the output directly with the dtype parameter of  
mafromtxt. That way, you're sure that the two arrays will have the  
same dtype. If you don't know the structure beforehand, you could try  
to load one array and use its dtype as input of mafromtxt to load the  
second one.
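
A sketch of that idea, using np.genfromtxt (same dtype keyword as
mafromtxt, minus the masking) and hypothetical file contents:

```python
import numpy as np
from io import StringIO

# Two hypothetical files with identical columns
file1 = StringIO(u"1.0 10.0\n2.0 20.0\n")
file2 = StringIO(u"3.0 30.0\n4.0 40.0\n")

dt = [('times', float), ('prop1', float)]
a = np.genfromtxt(file1, dtype=dt)
b = np.genfromtxt(file2, dtype=a.dtype)  # reuse the first array's dtype
```

Because both arrays share the same dtype, stacking them cannot hit a
field-type mismatch.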
Now, we could also try to modify stack_arrays so that it would take  
the largest dtype when several fields have the same name. I'm not  
completely satisfied by this approach, as it makes dtype conversions  
under the hood. Maybe we could provide the functionality as an option  
(w/ a forced_conversion boolean input parameter) ?
I'm a bit surprised by the error message you get. If I try:

  a = ma.array([(1,2,3)], mask=[(0,1,0)], dtype=[('a',int),  
('b',bool), ('c',float)])
  b = ma.array([(4, 5, 6)], dtype=[('a', int), ('b', float), ('c',  
float)])
  test = np.stack_arrays((a, b))

I get a TypeError instead (the field 'b' hasn't the same type in a and  
b). Now, I get the 'two fields w/ the same name' when I use  
np.merge_arrays (with the flatten option). Could you send a small  
example ?


 P.S.  Thanks so much for your work on putting those utility  
 functions in
 recfunctions.py  It makes it so much easier to have these functions  
 available in
 the library itself rather than needing to reinvent the wheel over  
 and over.

Indeed. Note that most of the job had been done by John Hunter and the  
matplotlib developer in their matplotlib.mlab module, so you should  
thank them and not me. I just cleaned up some of the functions.


[Numpy-discussion] make latex in numpy/doc failed

2009-01-27 Thread Nils Wagner
Hi all,

a make latex in numpy/doc failed with

...

Intersphinx hit: PyObject 
http://docs.python.org/dev/c-api/structures.html
writing... Sphinx error:
too many nesting section levels for LaTeX, at heading: 
numpy.ma.MaskedArray.__lt__
make: *** [latex] Fehler 1
  

I am using Sphinx v0.5.1
BTW, make html works fine here.

Nils


Re: [Numpy-discussion] numpy ndarray questions

2009-01-27 Thread Sturla Molden
 On Tue, 2009-01-27 at 14:16 +0100, Sturla Molden wrote:

 def aligned_array(N, dtype):
  d = dtype()
  tmp = numpy.array(N * d.nbytes + 16, dtype=numpy.uint8)
  address = tmp.__array_interface__['data'][0]
  offset = (16 - address % 16) % 16
  return tmp[offset:offset+N].view(dtype=dtype)

 Ah, I didn't think about doing it in python, cool thanks.


Doing it from Python means you don't have to worry about manually
deallocating the array afterwards.

It seems the issue of 16-byte alignment has to do with efficient data
alignment for SIMD instructions (SSE, MMX, etc). So this is not just an
FFTW issue.

I would just put a check for 16-byte alignment in the wrapper, and raise
an exception (e.g. MemoryError) if the array is not aligned properly.
Raising an exception will inform the user of the problem. I would not
attempt to make a local copy if the array is erroneously aligned. That is
my personal preference.
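
A sketch of such a check (require_aligned is a hypothetical name, not
part of any wrapper):

```python
import numpy as np

def require_aligned(array, boundary=16):
    # Raise instead of silently copying when the data buffer is not
    # aligned on a `boundary`-byte address.
    address = array.__array_interface__['data'][0]
    if address % boundary:
        raise MemoryError("array data not aligned to a %d-byte boundary"
                          % boundary)
    return array
```

The caller then learns immediately that the array was allocated the
wrong way, instead of paying for a hidden copy.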

Sturla Molden




Re: [Numpy-discussion] recfunctions.stack_arrays

2009-01-27 Thread Pierre GM

On Jan 27, 2009, at 4:23 PM, Ryan May wrote:


 I definitely wouldn't advocate magic by default, but I think it  
 would be nice to
 be able to get the functionality if one wanted to.

OK. Put on the TODO list.


 There is one problem I
 noticed, however.  I found common_type and lib.mintypecode, but both  
 raise errors
 when trying to find a dtype to match both bool and float.  I don't  
 know if
 there's another function somewhere that would work for what I want.

I'm not familiar with these functions, I'll check that.
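
For reference, np.promote_types (added to NumPy in releases after this
thread) does handle the bool/float case that common_type rejects:

```python
import numpy as np

# Smallest dtype that can safely hold both a bool and a float64
common = np.promote_types(np.bool_, np.float64)
print(common)  # float64
```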

 Apparently, I get my error as a result of my use of titles in the  
 dtype to store
 an alternate name for the field.  (If you're not familiar with  
 titles, they're
 nice because you can get fields by either name, so for the following  
 example,
 a['a'] and a['A'] both return array([1]).)  The following version of  
 your case
 gives me the ValueError:

Ah OK. You found a bug. There's a frustrating feature of dtypes:  
dtype.names doesn't always match [_[0] for _ in dtype.descr].


 As a side question, do you have some local mods to your numpy SVN so  
 that some of
 the functions in recfunctions are available in numpy's top level?

Probably. I used the develop option of setuptools to install numpy on  
a virtual environment.

 On mine, I
 can't get to them except by importing them from  
 numpy.lib.recfunctions.  I don't
 see any mention of recfunctions in lib/__init__.py.


Well, till some problems are ironed out, I'm not really in favor of  
advertising them too much...



[Numpy-discussion] Building on WinXP 64-bit, Intel Compilers

2009-01-27 Thread Michael Colonno
   Hi ~

   I'm trying to build numpy (hopefully eventually scipy with the same
setup) with the Intel compilers (and Intel MKL) on the WinXP 64-bit
platform. Finding / linking to the Intel MKL seems to be successful (see
below) but I have an issue with the settings defined somewhere in the
various setup scripts (can't find where). Per the output below, the Intel
compilers on Windows are looking for .obj object files rather than the
Linux-style .o files. I'd also like to get rid of the /L and -L flags (no
longer supported in Intel C++ v. 11.0 it seems) but this just throws a
warning and does not seem to cause any problems. Can anyone point me to the
python file(s) I need to edit to modify the .o object file setting to .obj?
(The file _configtest.obj is created.) Once operational, I'll pass along
all of my config info for anyone else building in this environment.

   Thanks!
   ~Mike C.


Output from build:

python setup.py config --compiler=intel --fcompiler=intelem install

Running from numpy source directory.
Forcing DISTUTILS_USE_SDK=1
F2PY Version 2_5972
blas_opt_info:
blas_mkl_info:
  FOUND:
libraries = ['mkl_em64t', 'mkl_dll']
library_dirs = ['C:\\Program Files (x86)\\Intel\\Compiler\\11.0\\
061\\cpp\\m
kl\\em64t\\lib']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['C:\\Program Files
(x86)\\Intel\\Compiler\\11.0\\061\\cpp\\m
kl\\include']

  FOUND:
libraries = ['mkl_em64t', 'mkl_dll']
library_dirs = ['C:\\Program Files
(x86)\\Intel\\Compiler\\11.0\\061\\cpp\\m
kl\\em64t\\lib']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['C:\\Program Files
(x86)\\Intel\\Compiler\\11.0\\061\\cpp\\m
kl\\include']

lapack_opt_info:
lapack_mkl_info:
mkl_info:
  FOUND:
libraries = ['mkl_em64t', 'mkl_dll']
library_dirs = ['C:\\Program Files
(x86)\\Intel\\Compiler\\11.0\\061\\cpp\\m
kl\\em64t\\lib']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['C:\\Program Files
(x86)\\Intel\\Compiler\\11.0\\061\\cpp\\m
kl\\include']

  FOUND:
libraries = ['mkl_lapack', 'mkl_em64t', 'mkl_dll']
library_dirs = ['C:\\Program Files
(x86)\\Intel\\Compiler\\11.0\\061\\cpp\\m
kl\\em64t\\lib']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['C:\\Program Files
(x86)\\Intel\\Compiler\\11.0\\061\\cpp\\m
kl\\include']

  FOUND:
libraries = ['mkl_lapack', 'mkl_em64t', 'mkl_dll']
library_dirs = ['C:\\Program Files
(x86)\\Intel\\Compiler\\11.0\\061\\cpp\\m
kl\\em64t\\lib']
define_macros = [('SCIPY_MKL_H', None)]
include_dirs = ['C:\\Program Files
(x86)\\Intel\\Compiler\\11.0\\061\\cpp\\m
kl\\include']

running config
running install
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands --compiler
opti
ons
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands --fcompiler
opt
ions
running build_src
building py_modules sources
creating build
creating build\src.win32-2.5
creating build\src.win32-2.5\numpy
creating build\src.win32-2.5\numpy\distutils
building extension numpy.core.multiarray sources
creating build\src.win32-2.5\numpy\core
Generating build\src.win32-2.5\numpy\core\include/numpy\config.h
Could not locate executable icc
Could not locate executable ecc
Ignoring "MSVCCompiler instance has no attribute '_MSVCCompiler__root'"
(I think it is msvccompiler.py bug)
customize IntelEM64TFCompiler
Found executable C:\Program Files
(x86)\Intel\Compiler\11.0\061\fortran\Bin\inte
l64\ifort.exe
Found executable C:\Program Files
(x86)\Intel\Compiler\11.0\061\fortran\Bin\inte
l64\ifort.exe
C compiler: icl

compile options: '-IC:\Python25\include -Inumpy\core\src
-Inumpy\core\include -I
C:\Python25\include -IC:\Python25\PC -c'
icl: _configtest.c
Found executable C:\Program Files
(x86)\Intel\Compiler\11.0\061\cpp\Bin\intel64\
icl.exe
icl _configtest.o -LC:\Python25\lib -LC:\ -LC:\Python25\libs -o _configtest
Intel(R) C++ Intel(R) 64 Compiler Professional for applications running on
Intel
(R) 64, Version 11.0Build 20080930 Package ID: w_cproc_p_11.0.061
Copyright (C) 1985-2008 Intel Corporation.  All rights reserved.
icl: command line warning #10161: unrecognized source type '_configtest.o';
obje
ct file assumed
icl: command line warning #10006: ignoring unknown option
'/LC:\Python25\lib'
icl: command line warning #10006: ignoring unknown option '/LC:\'
icl: command line warning #10006: ignoring unknown option
'/LC:\Python25\libs'

ipo: warning #11009: file format not recognized for _configtest.o
Microsoft (R) Incremental Linker Version 9.00.30729.01
Copyright (C) Microsoft Corporation.  All rights reserved.

-out:_configtest.exe
_configtest.o
LINK : fatal error LNK1181: cannot open input file '_configtest.o'
failure.
removing: _configtest.c _configtest.o
Traceback (most recent call last):
  File "setup.py", line 96, in <module>
    setup_package()
  File "setup.py", line 89, in setup_package
    configuration=configuration )
  File "C:\Documents and

Re: [Numpy-discussion] Building on WinXP 64-bit, Intel Compilers

2009-01-27 Thread Michael Colonno
   Thanks for your response. I manually edited one of the python files
(ccompiler.py I think) to change icc.exe to icl.exe. (This is a trick I used
to use to get F2PY to compile on Windows platforms.) Since icl is a drop-in
replacement for the visual studio compiler / linker, I'd like to edit the
python files configuring this (msvc) but I could not find anything(?) If you
could point me towards the config files(s) for the visual studio compiler
(I'm assuming are configured for the Windows file extensions already) I
could likely make some headway.

   Thanks,
   ~Mike C.

On Tue, Jan 27, 2009 at 6:39 PM, David Cournapeau 
da...@ar.media.kyoto-u.ac.jp wrote:

 Michael Colonno wrote:
 Hi ~
 
 I'm trying to build numpy (hopefully eventually scipy with the same
  setup) with the Intel compilers (and Intel MKL) on the WinXP 64-bit
  platform. Finding / linking to the Intel MKL seems to be successful
  (see below)

 Unfortunately, at this stage, it does not say much about linking:
 distutils look for files, and do not do any sanity check beyond that.

  but I have an issue with the settings defined somewhere in the various
  setup scripts (can't find where). Per the output below, the Intel
  compilers on Windows are looking for .obj object files rather than
  the Linux-style .o files.

 I think the problem is simply that intel support in numpy for the C
 compiler is limited to unix. At least, a quick look at the sources did
not give much information about Windows support: --compiler=intel does
 call for the unix version (icc, and this is called first as you can see
 in your log). I am actually puzzled: where is the icl.exe coming from ?
 grep icl gives nothing in numpy, and nothing in python - did you by any
 chance build python itself with the Intel compiler ?

 cheers,

 David