Re: [Numpy-discussion] Nonblocking Plots with Matplotlib

2007-03-15 Thread Andrew Straw
Bill Baxter wrote:
 On 3/15/07, Bill Baxter [EMAIL PROTECTED] wrote:
   
 Thanks, Sebastian.  I'll take a look at Pyro.  Hadn't heard of it.
 I'm using just xmlrpclib with pickle right now.
 

 I took a look at Pyro -- it looks nice.
 The only thing I couldn't find, though, is how to decouple the wx GUI on
 the server side from the Pyro remote call handler.  Both wx and Pyro
 want to run a main loop.

 With the XML-RPC, I could use twisted and its wxreactor class.  That
 does all the necessary magic under the hood to run both loops.
 Basically all you have to do to make it  work is:

class MainApp(wx.App, twisted.web.xmlrpc.XMLRPC):
 ...

 twisted.internet.wxreactor.install()
 app = MainApp()
 twisted.internet.reactor.registerWxApp(app)
 twisted.internet.reactor.run()

 And then you're good to go.  reactor.run() takes care of both main
 loops somehow.

 Do you know of any good examples showing how to do that sort of thing
 with Pyro?  It must be possible.  I mean it's the exact same sort of
 thing you'd need if you're writing a simple GUI internet chat program.
  My googling has turned up nothing, though.

It is possible to do this with Pyro.

I think Pyro can auto-background (my terminology not theirs) if you 
allow it to do threading. Otherwise, you could handle requests by 1) 
putting Pyro in a thread you manage or 2) having a GUI timer call Pyro's 
daemon.handleRequests(...) fairly often.

But I hear nothing but good things about Twisted, and your code works, 
so I say find some bigger fish to fry (especially since I can see 
potential issues with each of the above options...).
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] arrayrange

2007-03-15 Thread Miguel Oliveira, Jr.
Hi Eike!

Yes, I did subscribe! Thanks once again for your reply... I'm sorry  
but I don't know exactly what you mean... Do you think I should replace
t = arange(0, durSecs, 1.0/SRate)
by
t = linspace(0, durSecs, durSecs*SRate)?

That won't work...

Maybe I am missing something?... The resulting file should be an AIFF  
sine wave...
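For reference, here is a runnable sketch of the two spellings side by side (the sample rate and duration are made-up values; note that linspace's num argument is an integer count, and endpoint=False matches arange's half-open interval):

```python
import numpy as np

SRate = 8000       # assumed sample rate in Hz
durSecs = 0.5      # assumed duration in seconds
n = int(durSecs * SRate)

t_arange = np.arange(n) / SRate                         # step-based grid
t_linspace = np.linspace(0, durSecs, n, endpoint=False) # count-based grid

# Both grids agree, and either can drive the sine for the AIFF data
assert np.allclose(t_arange, t_linspace)
wave = np.sin(2 * np.pi * 440.0 * t_arange)             # 440 Hz test tone
```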

Best,

Miguel



On 14 Mar 2007, at 17:55, Eike Welk wrote:

 I would use something like this:
 t = linspace(0, durSecs, durSecs*SRate)

 Do you know the 'Numpy Example List'
 http://www.scipy.org/Numpy_Example_List

 Regards Eike.

 PS: Ah, you did subscribe.




[Numpy-discussion] Possible bug in PyArray_RemoveSmallest?

2007-03-15 Thread Albert Strasheim
Hello all

I was poking around in the NumPy internals and I came across the 
following code in PyArray_RemoveSmallest in arrayobject.c:

intp sumstrides[NPY_MAXDIMS];

...

for (i = 0; i < multi->nd; i++) {
    sumstrides[i] = 0;
    for (j = 0; j < multi->numiter; j++) {
        sumstrides[i] = multi->iters[j]->strides[i];
    }
}

This might be a red herring, but from the name of the variable 
(sumstrides) and the code (iterating over a bunch of strides) I'm 
guessing the author might have intended to write:

sumstrides[i] += multi->iters[j]->strides[i];
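A tiny Python mirror of the loop shows the practical difference (the stride values here are invented):

```python
# Invented per-iterator strides for a 2-D case: three iterators, nd == 2
iters_strides = [[8, 4], [16, 4], [24, 4]]
nd = 2

assign = [0] * nd   # what the current '=' code computes
accum = [0] * nd    # what '+=' would compute
for i in range(nd):
    for strides in iters_strides:
        assign[i] = strides[i]   # '=' keeps only the last iterator's stride
        accum[i] += strides[i]   # '+=' actually sums over all iterators

# assign == [24, 4] (last iterator only); accum == [48, 12] (true sums)
```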

Cheers,

Albert


Re: [Numpy-discussion] correct way to specify type in array definition

2007-03-15 Thread Francesc Altet
On Thu, 15 Mar 2007 at 06:01 -0400, Brian Blais wrote:
 Hello,
 
 Can someone tell me what the preferred way is to specify the type of an array?
 I want
 it to be a float array, no matter what is given (say, integers).  I can do:
 
 a=numpy.array([1,2,3],numpy.dtype('float'))
 
 or
 
 a=numpy.array([1,2,3],type(1.0))
 
 or perhaps many others.  Is there a way that is recommended?

Well, this depends on your preferences, I guess, but I like to be
explicit, so I normally use:

a=numpy.array([1,2,3], numpy.float64)

but, if you are a bit lazy to type, the next is just fine as well:

a=numpy.array([1,2,3], 'f8')

Cheers,

-- 
Francesc Altet|  Be careful about using the following code --
Carabos Coop. V.  |  I've only proven that it works, 
www.carabos.com   |  I haven't tested it. -- Donald Knuth



Re: [Numpy-discussion] [Matplotlib-users] Nonblocking Plots with Matplotlib

2007-03-15 Thread Andrew Straw
Bill, very cool. Also, thanks for showing me how Twisted can be used 
like Pyro, more-or-less, I think. (If I understand your code from my 1 
minute perusal.)

On Mac OS X, there's one issue I don't have time to follow any further: 
sys.executable points to  
/Library/Frameworks/Python.framework/Versions/2.4/Resources/Python.app/Contents/MacOS/Python
whereas /Library/Frameworks/Python.framework/Versions/Current/bin/python 
is the file actually on my path. For some reason, when I run the latter 
ezplot is found, when the former, it is not. Thus, your auto-spawning of 
a plotserver instance fails on my installation.

Other than that, the example you gave works as advertised and looks 
great. (Ohh, those anti-aliased lines look better and better the more I 
suffer through my colleagues' aliased plots...)

Bill Baxter wrote:
 Howdy Folks,

 I was missing the good ole days of using Matlab back at the Uni when I
 could debug my code, stop at breakpoints and plot various data without
 fear of blocking the interpreter process.

 Using ipython -pylab is what has been suggested to me in the past,
 but the problem is I don't do my debugging from ipython.  I have a
 very nice IDE that works very well, and it has a lovely interactive
 debugging prompt that I can use to probe my code when stopped at a
 breakpoint.  It's great except I can't really use matplotlib for
 debugging there because it causes things to freeze up.

 So I've come up with a decent (though not perfect) solution for
 quickie interactive plots which is to run matplotlib in a separate
 process.  I call the result 'ezplot'.  The first alpha version of
 this is now available at the Cheeseshop.  (I made an egg too, so if
 you have setuptools you can do easy_install ezplot.)

 The basic usage is like so:

  In [1]: import ezplot
  In [2]: p = ezplot.Plotter()
  In [3]: p.plot([1,2,3],[1,4,9],marker='o')
  Connecting to server... waiting...
  connected to plotserver 0.1.0a1 on http://localhost:8397
  Out[3]: True
  In [4]: from numpy import *
  In [5]: x = linspace(-5,5,20)
  In [13]: p.clf()
  Out[13]: True
  In [14]: p.plot(x, x*x*log(x*x+0.01))

 (Imagine lovely plots popping up on your screen as these commands are typed.)

 The only return values you get back are True (success...probably) or
 False (failure...for sure).  So no fancy plot object manipulation is
 possible.  But you can do basic plots no problem.

 The nice part is that this (unlike ipython's built-in -pylab threading
 mojo) should work just as well from wherever you're using python.
 Whether it's ipython (no -pylab) or Idle, or a plain MS-DOS console,
 or WingIDE's debug probe, or SPE, or a PyCrust shell or whatever.  It
 doesn't matter because all the client is doing is packing up data and
 shipping over a socket.  All the GUI plotting mojo happens in a
 completely separate process.

 There are plenty of ways this could be made better, but for me, for
 now, this probably does pretty much all I need, so it's back to Real
 Work.  But if anyone is interested in making improvements to this, let
 me know.

 Here's a short list of things that could be improved:
 * Right now I assume use of the wxAGG backend for matplotlib.  Don't
 know how much work it would be to support other back ends (or how to
 go about it, really).   wxAGG is what I always use.
 * Returning more error/exception info from the server would be nice
 * Returning full fledged proxy plot objects would be nice too, but I
 suspect that's a huge effort
 * SOAP may be better for this than xmlrpclib but I just couldn't get
 it to work (SOAPpy + Twisted).
 * A little more safety would be nice.  Anyone know how to make a
 Twisted xmlrpc server not accept connections from anywhere except
 localhost?
 * There's a little glitch in that the spawned plot server dies with
 the parent that created it.  Maybe there's a flag to subprocess.Popen
 to fix that?
 * Sometimes when you click on Exit Server, if there are plot windows
 open it hangs while shutting down.


 Only tested on Win32 but there's nothing much platform specific in there.

 Give it a try and let me know what you think!

 --bb




[Numpy-discussion] Broadcasting for hstack, vstack etc

2007-03-15 Thread Bill Baxter
I just had a need to append a column of 1's to an array, and given how
big numpy is on broadcasting I thought this might work:

   column_stack((m1,m2, 1))

But it doesn't.
Is there any reason why that couldn't or shouldn't be made to work?
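A workaround in the meantime is to expand the scalar by hand; a sketch with a made-up 3x2 array:

```python
import numpy as np

m1 = np.arange(6.0).reshape(3, 2)   # made-up example data

# column_stack won't broadcast the scalar 1, but this is close:
out = np.column_stack((m1, np.ones(len(m1))))

assert out.shape == (3, 3)
assert (out[:, -1] == 1.0).all()    # appended column of ones
```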

--bb


Re: [Numpy-discussion] arrayrange

2007-03-15 Thread Robert Kern
Miguel Oliveira, Jr. wrote:
 Hi Eike!
 
 Yes, I did subscribe! Thanks once again for your reply... I'm sorry  
 but I don't know exactly what you mean... Do you think I should replace
 t = arange(0, durSecs, 1.0/SRate)
 by
 t = linspace(0, durSecs, durSecs*SRate)?
 
 That won't work...

*How* does it not work?

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Problem compiling numpy with Python 2.5 on Powerbook

2007-03-15 Thread Robert Kern
Roberto De Almeida wrote:
 Hi, Steve.
 
 On 3/14/07, Steve Lianoglou [EMAIL PROTECTED] wrote:
 I'm not sure what the problem is exactly, but is it weird that
 there's something to do w/ 'i686' when you're running on a powerbook
 being that the pbook is PowerPC?
 
 I managed to compile numpy by first compiling Python 2.5 as a ppc
 build only -- the bug occurs when I try to compile numpy using the
 universal Python build from python.org. I think it's trying to create
 a universal build of numpy, that's why you see the
 
 gcc -arch ppc -arch i386
 
 line on the log, ie, both ppc and i386 architectures. IIRC I don't
 have the universal SDK installed, so that could be the problem.

That almost certainly is the problem.

-- 
Robert Kern



Re: [Numpy-discussion] zoom FFT with numpy? (Nadav Horesh)

2007-03-15 Thread Ray S
Hi Nadav,
 A long time ago I translated a free code of chirp z transform (zoom 
fft)
 into python.

Thanks, I'll try it out.

I did, however, read about the differences before:
 From Numerix http://www.numerix-dsp.com/zoomfft.html:
One common question is : Is the zoom FFT the same as the chirp 
z-transform.

The answer is : Absolutely not. The FFT calculates the FFT at N 
equally spaced points around the unit circle in the z-plane, the 
chirp z-transform modifies the locations of these points along a 
contour that can lie anywhere on the z-plane. In contrast, the 
zoom-FFT uses digital down conversion techniques to localise the 
standard FFT to a narrow band of frequencies that are centered on a 
higher frequency. The chirp z-transform is often used to analyze 
signals such as speech, that have certain frequency domain 
characteristics. The zoom-FFT is used to reduce the sample rate 
required when analysing narrowband signals, e.g. in HF 
communications.

I just saw http://www-gatago.com/comp/dsp/34830442.html, which was good 
reading too.

It will be interesting, and the code is appreciated!
Also, czt.c might be particularly fast if compiled with the Intel FFT 
lib and weave.blitz().
Again, the goal is increased f resolution within a known small band 
for the ~same CPU cycles...
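For reference, a Bluestein-style chirp z-transform is fairly compact in pure NumPy. This is an illustrative sketch (it is not the czt.c code referred to above):

```python
import numpy as np

def czt(x, m, w, a=1.0):
    """Evaluate the chirp z-transform of x at z_k = a * w**-k, k = 0..m-1
    (Bluestein's algorithm). With w = exp(-2j*pi/n), a = 1, m = n this
    reduces to the ordinary DFT."""
    n = len(x)
    k = np.arange(max(m, n))
    chirp = w ** (k ** 2 / 2.0)                   # w**(k^2/2)
    nfft = 2 ** int(np.ceil(np.log2(n + m - 1)))  # avoid circular aliasing
    xp = np.asarray(x) * a ** -np.arange(n) * chirp[:n]
    v = np.zeros(nfft, dtype=complex)
    v[:m] = 1.0 / chirp[:m]                       # w**(-k^2/2), k = 0..m-1
    v[nfft - n + 1:] = 1.0 / chirp[n - 1:0:-1]    # ...and k = -(n-1)..-1
    # Fast convolution, then undo the chirp factor
    return np.fft.ifft(np.fft.fft(xp, nfft) * np.fft.fft(v))[:m] * chirp[:m]
```

Zooming the band [f0, f0 + m*df] at sample rate fs would then amount to czt(x, m, w=np.exp(-2j*np.pi*df/fs), a=np.exp(2j*np.pi*f0/fs)).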

Ray



Re: [Numpy-discussion] zoom FFT with numpy?

2007-03-15 Thread Warren Focke


On Wed, 14 Mar 2007, Charles R Harris wrote:

 On 3/14/07, Ray Schumacher [EMAIL PROTECTED] wrote:
 
  What I had been doing is a 2048 N full real_FFT with a Hann window, and
  further analyzing the side lobe/bin energy (via linear interp) to try to
  more precisely determine the f within the peak's bin. (How legitimately
  valuable that is I'm not sure... IANAM)


 That's usually fine. You might want to zero fill to get more samples through
 the band. It would help if you gave the sample frequency in Hz too. Anyway,
 unless time is important, I would just zero fill by a factor of 4-8 and
 transform. You can get the same effect with a chirp-z transform, but again
 this depends on how much programming work you want to do. If you just
 have a couple of lines in the band that you want to locate you could also
 try maximum entropy methods.

Or do

1) explicitly multiply by the transform matrix but only use a few rows.

but evaluate the transform matrix on a frequency grid finer than the
1/(2048*tsamp) that is actually independent.  Then fit sin(f)/f to the
power spectrum.  Either of these should give better interpolation than
linear.  I've seen this done (and pass peer review) to determine pulsar
frequencies.  I also remain unconvinced that interpolation provides a
better result, but that can be determined by analyzing fake data with a
known frequency.

If you're trying to determine the significance of the result, the fit
should somehow take into account the fact that the interpolated data
points are not real degrees of freedom.  But your power estimates are
already not independent since you've applied a Hann window.  Probably
should also fit to the line response of a Hann window rather than
sin(f)/f.  Plus something (flat?) for the noise.  You could determine the
real # of degrees of freedom by simulating the procedure many times (with
noise) and fitting a chisquare function to the distribution of fit
chisquare values that you see.  This would also give an empirical estimate
of how well you were determining the frequency, which is probably better
than mucking about with degrees of freedom, anyway.

w



Re: [Numpy-discussion] zoom FFT with numpy? (Nadav Horesh)

2007-03-15 Thread Charles R Harris

On 3/15/07, Ray S [EMAIL PROTECTED] wrote:


Hi Nadav,
A long time ago I translated a free code of chirp z transform (zoom
fft)
into python.

Thanks, I'll try it out.

I did, however, read about the differences before:
From Numerix http://www.numerix-dsp.com/zoomfft.html:
One common question is : Is the zoom FFT the same as the chirp
z-transform.

The answer is : Absolutely not. The FFT calculates the FFT at N
equally spaced points around the unit circle in the z-plane, the
chirp z-transform modifies the locations of these points along a
contour that can lie anywhere on the z-plane. In contrast, the
zoom-FFT uses digital down conversion techniques to localise the
standard FFT to a narrow band of frequencies that are centered on a
higher frequency.



The points in the chirp z-transform can be densely spaced in the band you
are interested in, so it is probably closer to what you want to do. The same
effect can be obtained by zero filling, but that is probably not as
efficient unless you down sample first. Essentially, you will end up with
sinc interpolated points in the spectrum and you can just search for the
max.
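The zero-fill-and-search idea above can be sketched in a few lines (the sample rate and test frequency are invented for illustration):

```python
import numpy as np

fs, n = 1000.0, 2048
f_true = 123.4                                   # invented test frequency
x = np.sin(2 * np.pi * f_true * np.arange(n) / fs) * np.hanning(n)

# Plain FFT: bins are fs/n ~ 0.49 Hz apart
f_coarse = np.fft.rfftfreq(n, 1 / fs)[np.argmax(np.abs(np.fft.rfft(x)))]

# Zero fill 8x: bins are fs/(8n) ~ 0.06 Hz apart; just search for the max
m = 8 * n
f_fine = np.fft.rfftfreq(m, 1 / fs)[np.argmax(np.abs(np.fft.rfft(x, m)))]

# The interpolated spectrum localises the peak more finely
assert abs(f_fine - f_true) <= abs(f_coarse - f_true)
```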

BTW, did the s/n you gave refer to the signal or to its transform?

Chuck


Re: [Numpy-discussion] zoom FFT with numpy?

2007-03-15 Thread Charles R Harris

On 3/15/07, Warren Focke [EMAIL PROTECTED] wrote:




On Wed, 14 Mar 2007, Charles R Harris wrote:

 On 3/14/07, Ray Schumacher [EMAIL PROTECTED] wrote:
 
  What I had been doing is a 2048 N full real_FFT with a Hann window, and
  further analyzing the side lobe/bin energy (via linear interp) to try to
  more precisely determine the f within the peak's bin. (How legitimately
  valuable that is I'm not sure... IANAM)


 That's usually fine. You might want to zero fill to get more samples through
 the band. It would help if you gave the sample frequency in Hz too. Anyway,
 unless time is important, I would just zero fill by a factor of 4-8 and
 transform. You can get the same effect with a chirp-z transform, but again
 this depends on how much programming work you want to do. If you just
 have a couple of lines in the band that you want to locate you could also
 try maximum entropy methods.

Or do

1) explicitly multiply by the transform matrix but only use a few rows.

but evaluate the transform matrix on a frequency grid finer than the
1/(2048*tsamp) that is actually independent.  Then fit sin(f)/f to the
power spectrum.



You can actually zero fill by a factor of two, then build optimum least
squares interpolators for bandlimited signals using a reasonable number of
samples around each frequency interval. The result can be fitted to single
precision accuracy with a 9th-degree polynomial and an ordinary zero solver
used on the derivative. Works well, I did this some 20 years ago as part of
a package for Fourier spectroscopy, but it is probably more work than
warranted for the present case.

Either of these should give better interpolation than
linear.  I've seen this done (and pass peer review) to determine pulsar
frequencies.  I also remain unconvinced that interpolation provides a
better result, but that can be determined by analyzing fake data with a
known frequency.

If you're trying to determine the significance of the result, the fit
should somehow take into account the fact that the interpolated data
points are not real degrees of freedom.  But your power estimates are
already not independent since you've applied a Hann window.  Probably
should also fit to the line response of a Hann window rather than
sin(f)/f.



Sinc interpolation will work fine for the windowed spectrum as it contains
the same range of frequencies as the original. Where you can gain something
is explicitly interpolating the unwindowed spectrum with the Hann, or
stronger, window. Because the window functions fall off much faster than the
sinc you don't need to use so many points in the convolution.

Chuck


Re: [Numpy-discussion] correct way to specify type in array definition

2007-03-15 Thread David M. Cooke
On Thu, Mar 15, 2007 at 11:23:36AM +0100, Francesc Altet wrote:
 On Thu, 15 Mar 2007 at 06:01 -0400, Brian Blais wrote:
  Hello,
  
  Can someone tell me what the preferred way is to specify the type of an array?
   I want
  it to be a float array, no matter what is given (say, integers).  I can do:
  
  a=numpy.array([1,2,3],numpy.dtype('float'))
  
  or
  
  a=numpy.array([1,2,3],type(1.0))
  
  or perhaps many others.  Is there a way that is recommended?
 
 Well, this depends on your preferences, I guess, but I like to be
 explicit, so I normally use:
 
 a=numpy.array([1,2,3], numpy.float64)
 
 but, if you are a bit lazy to type, the next is just fine as well:
 
 a=numpy.array([1,2,3], 'f8')
 

I just do

a = numpy.array([1,2,3], dtype=float)

The Python types int, float, and bool translate to numpy.int_,
numpy.double, and numpy.bool (i.e., the C equivalents of the Python
types; note that int_ is a C long).
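A quick runnable check of those equivalences (using the explicit numpy.int_ / numpy.bool_ spellings):

```python
import numpy as np

# The three spellings from this thread produce identical dtypes:
a = np.array([1, 2, 3], dtype=float)
assert a.dtype == np.dtype('f8')
assert a.dtype == np.float64

# Python scalar types map to their C equivalents:
assert np.dtype(float) == np.double
assert np.dtype(int) == np.int_        # a C long
assert np.dtype(bool) == np.bool_
```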

-- 
||\/|
/--\
|David M. Cooke  http://arbutus.physics.mcmaster.ca/dmc/
|[EMAIL PROTECTED]


Re: [Numpy-discussion] nd_image.affine_transform edge effects

2007-03-15 Thread James Turner
Hi Stefan,

Thanks for the suggestions!

 Is this related to
 
 http://projects.scipy.org/scipy/scipy/ticket/213
 
 in any way?

As far as I can see, the problems look different, but thanks for
the example of how to document this. I did confirm that your example
exhibits the same behaviour under numarray, in case that is useful.

 Code snippets to illustrate the problem would be welcome.

OK. I have had a go at producing a code snippet. I apologize that
this is based on numarray rather than numpy, since I'm using STScI
Python, but I think it should be very easy to convert if you have
numpy instead.

What I am doing is to transform overlapping input images onto a
common, larger grid and co-adding them. Although I'm using
affine_transform on 3D data from FITS images, the issue can be illustrated
using a simple 1D translation of a single 2D test array. The input
values are just [4., 3., 2., 1.] in each row. With a translation of
-0.1, the values should therefore be something like
[X, 3.1, 2.1, 1.1, X, X], where the Xs represent points outside the
original data range. What I actually get, however, is roughly
[X, 3.1, 2.1, 1.0, 1.9, X]. The 5th value of 1.9 contaminates the
co-added data in the final output array. Now I'm looking at this
element-by-element, I suppose the bad value of 1.9 is just a result
of extrapolating in order to preserve the original number of data
points, isn't it? Sorry I wasn't clear on that in my original post
-- but surely a blank value (as specified by cval) would be better?

I suppose I could work around this by blanking out the extrapolated
column after doing the affine_transform. I could calculate which is
the column to blank out based on the sense of the offset and the
input array dimensions. It seems pretty messy and inefficient though.
Another idea is to split the translation into integer and fractional
parts, keep the input and output array dimensions the same initially
and then copy the output into a larger array with integer offsets.
That is messy to keep track of though. Maybe a parameter could
instead be added to affine_transform that tells it to shrink the
number of elements instead of extrapolating? I'd be a bit out of my
depth trying to implement that though, even if the authors agree...
(maybe in a few months though).

Can anyone comment on whether this problem should be considered a
bug, or whether it's intended behaviour that I should work around?

The code snippet follows below. Thanks for your patience with
someone who isn't accustomed to posting questions like this
routinely :-).

James.

-

import numarray as N
import numarray.nd_image as ndi

# Create a 2D test pattern:
I = N.zeros((2,4),N.Float32)
I[:,:] = N.arange(4.0, 0.0, -1.0)

# Transformation parameters for a simple translation in 1D:
trmatrix = N.array([[1,0],[0,1]])
troffset = (0.0, -0.1)

# Apply the offset to the test pattern:
I_off1 = ndi.affine_transform(I, trmatrix, troffset, order=3, mode='constant',
  cval=-1.0, output_shape=(2,6))

I_off2 = ndi.affine_transform(I, trmatrix, troffset, order=3, mode='constant',
  cval=-1.0, output_shape=(2,6), prefilter=False)

# Compare the data before and after interpolation:
print I
print I_off1
print I_off2
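For anyone on numpy rather than numarray, a rough equivalent of the snippet above using scipy.ndimage (a sketch; I haven't compared its output against the numarray version):

```python
import numpy as np
from scipy import ndimage as ndi

# Same 2D test pattern: [4., 3., 2., 1.] in each row
I = np.zeros((2, 4), np.float32)
I[:, :] = np.arange(4.0, 0.0, -1.0)

# Transformation parameters for a simple translation in 1D
trmatrix = np.array([[1.0, 0.0], [0.0, 1.0]])
troffset = (0.0, -0.1)

I_off1 = ndi.affine_transform(I, trmatrix, troffset, order=3,
                              mode='constant', cval=-1.0,
                              output_shape=(2, 6))
```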



[Numpy-discussion] numpy.distutils, windows dll vs lib

2007-03-15 Thread David Cournapeau
Hi,

recently, I got some problems detecting a dynamic library (dll) with 
numpy.distutils. Basically, I have a package which uses a class derived 
from system_info from numpy.distutils to detect a dll I use through ctypes.
If only the dll is present, my numpy.distutils.system_info derived 
class does not find the library; if the .lib is present too, then it is 
detected. Why is that? Can I modify my class so that detecting the dll is 
enough? I don't know how Windows dynamic linking and dynamically loaded 
libraries work, and I am kind of confused by this (I thought .lib was 
.a, and .dll was .so, and that symbols were not exported by default on 
Windows contrary to Unix, but the difference seems more subtle than 
this).

cheers,

David


Re: [Numpy-discussion] numpy.distutils, windows dll vs lib

2007-03-15 Thread David Cournapeau
Robert Kern wrote:
 David Cournapeau wrote:
 Hi,

 recently, I got some problems detecting a dynamic library (dll) with 
 numpy.distutils. Basically, I have a package which uses a class derived 
 from system_info from numpy.distutils to detect a dll I use through ctypes.
 If only the dll is present, my numpy.distutils.system_info derived 
 class does not find the library; if the .lib is present too, then it is 
 detected. Why is that? Can I modify my class so that detecting the dll is 
 enough? I don't know how Windows dynamic linking and dynamically loaded 
 libraries work, and I am kind of confused by this (I thought .lib was 
 .a, and .dll was .so, and that symbols were not exported by default on 
 windows contrary to Unix, but the difference seems more subtle than 
 this).

 Generally, you don't use .dll files to link against. With MSVC, you need a 
 .lib
 file corresponding to your target .dll file which has the symbols. mingw and
 cygwin use similar .a files. This information is coded in the
 system_info.library_extensions() method, which you can override if you need
 something different. Of course, since you don't mention what methods on
 system_info that you are using, I can't quite be sure this will satisfy your 
 needs.
Sorry for the lack of details; I did this a long time ago, and the 
problem popped up for some people using my package on Windows (which I 
don't use at all) just recently.

I don't use the library for linking: I only need to be able to load it 
dynamically through ctypes. What I did is simply overriding the 
calc_info method, in which I try to detect both library and header 
files. For the library, I do the following:

# Look for the shared library
sndfile_libs = self.get_libs('sndfile_libs', self.libname)
lib_dirs = self.get_lib_dirs()
for i in lib_dirs:
    tmp = self.check_libs(i, sndfile_libs)
    if tmp is not None:
        info = tmp
        break
else:
    return

When I look at the system_info.check_libs code, it looks like it is 
trying to look for any extension, and the first found is returned... But 
this is not what I get, and I am not sure what I am doing wrong.

cheers,

David


Re: [Numpy-discussion] numpy.distutils, windows dll vs lib

2007-03-15 Thread Robert Kern
David Cournapeau wrote:

 I don't use the library for linking: I only need to be able to load it 
 dynamically through ctypes. What I did is simply overriding the 
 calc_info method, in which I try to detect both library and header 
 files. For the library, I do the following:
 
 # Look for the shared library
 sndfile_libs = self.get_libs('sndfile_libs', self.libname)
 lib_dirs = self.get_lib_dirs()
 for i in lib_dirs:
     tmp = self.check_libs(i, sndfile_libs)
     if tmp is not None:
         info = tmp
         break
 else:
     return
 
 When I look at the system_info.check_libs code, it looks like it is 
 trying to look for any extension, and the first found is returned... But 
 this is not what I get, and I am not sure what I am doing wrong.

Well, since the first line of that method is this:

  exts = self.library_extensions()

I conclude that if you were to override that method to return only ['.dll'],
then your code will work.

-- 
Robert Kern
