Re: [Numpy-discussion] question about the documentation of linalg.solve

2008-11-20 Thread Gael Varoquaux
On Thu, Nov 20, 2008 at 07:58:52AM +0200, Scott Sinclair wrote:
 A Notes section giving an overview of the algorithm has been added to
 the docstring http://docs.scipy.org/numpy/docs/numpy.linalg.linalg.solve/.

I thank you very much for doing this, and I reckon many users should be
grateful. This is the way forward to making numpy rock.

Gaël
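[For readers finding this thread in the archives: a minimal example of the routine whose docstring was updated, with values chosen purely for illustration.]

```python
import numpy as np

# Solve the linear system A x = b; linalg.solve wraps LAPACK's gesv
# routine (LU factorization with partial pivoting).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = np.linalg.solve(A, b)
print(x)  # the exact solution is x = 2, y = 3
```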
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] glibc memory corruption when running numpy.test()

2008-11-20 Thread Hoyt Koepke
Hi,

Sorry; my first message wasn't under 40 KB with the attachments, so
here's the same message but with the log files at
http://www.stat.washington.edu/~hoytak/logs.tar.bz2.


 Which ones ?

Sorry; ATLAS = 3.9.4 and lapack=3.2.  I'll give 3.8.2 a shot per your advice.

 You should not do that, it won't work as you would expect. It is a good
 rule to assume that you should never set the *FLAGS variable unless you
 really know what you are doing.

Fair enough.  In my case I was having some issues with 32 bit and 64
bit mismatches (I think that fftw defaulted to 32 bit), so I set the
flags variables.  I also wanted to get the extra few percent of
performance by using the tuning flags.  I'll back up a bit before
playing with them again, though.

 First, can you try without any blas/lapack (Do BLAS=None LAPACK=None
 ATLAS=None python setup.py ) ?

This now works in the sense that it doesn't hang.  I still get a
number of test failures, however (build + test logs attached).

Thanks a lot for the help!

--Hoyt



+ Hoyt Koepke
+ University of Washington Department of Statistics
+ http://www.stat.washington.edu/~hoytak/
+ [EMAIL PROTECTED]
++


Re: [Numpy-discussion] glibc memory corruption when running numpy.test()

2008-11-20 Thread David Cournapeau
On Thu, 2008-11-20 at 00:26 -0800, Hoyt Koepke wrote:
 Hi,
 
 Sorry; my first message wasn't under 40 KB with the attachments, so
 here's the same message but with the log files at
 http://www.stat.washington.edu/~hoytak/logs.tar.bz2.
 
 
  Which ones ?
 
 Sorry; ATLAS = 3.9.4 and lapack=3.2.  I'll give 3.8.2 a shot per your advice.

Sorry, I meant: which problems did you get when trying to build numpy
with those ? Lapack 3.2 is really recent, and seems to use a new BLAS,
which is likely not supported by ATLAS. 

But to be fair, that won't explain most of the failures you get.

 
  You should not do that, it won't work as you would expect. It is a good
  rule to assume that you should never set the *FLAGS variable unless you
  really know what you are doing.
 
 Fair enough.  In my case I was having some issues with 32 bit and 64
 bit mismatches (I think that fftw defaulted to 32 bit), so I set the
 flags variables.  I also wanted to get the extra few percent of
 performance by using the tuning flags.  I'll back up a bit before
 playing with them again, though.

I honestly don't think those flags matter much in the case of
numpy/scipy. In particular, using SSE and co automatically is simply
impossible in numpy's case, since the C code is very generic (non-aligned,
non-contiguous items) and the compiler has no way to know at compile
time which cases are contiguous.
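[David's point can be checked from Python: contiguity is a runtime property of each individual array, not something knowable at compile time. A small illustration, not from the original mail:]

```python
import numpy as np

a = np.arange(12).reshape(3, 4)   # one C-contiguous block of memory
col = a[:, 1]                     # strided view: elements 4 ints apart

# Both share memory, but only one is contiguous; generic ufunc loops
# must handle either case through strides known only at runtime.
print(a.flags['C_CONTIGUOUS'])    # True
print(col.flags['C_CONTIGUOUS'])  # False
```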

FFTW support has been removed in recent scipy, so this won't be a
problem anymore :)


 This now works in the sense that it doesn't hang.  I still get a
 number of test failures, however (build + test logs attached).

Those errors seem linked to the flags you have been using. Some errors
are really strange (4 vs 8 byte types), but I don't see how they could be
explained by a mismatch of 32 vs 64 bit machine code (to the best of my
knowledge, you can't mix 32 and 64 bit machine code in one binary).
Maybe a compiler bug when using the -march flag. 

Please try building numpy w/o BLAS/LAPACK and w/o compiler flags first, to
test that the bare configuration does work, and that the problems are
not due to some bugs in your toolchain/OS/etc... The test suite should
run without any failure in this case; then, we can work on the
BLAS/LAPACK thing.

cheers,

David



Re: [Numpy-discussion] glibc memory corruption when running numpy.test()

2008-11-20 Thread Hoyt Koepke
Hi,

 I honestly don't think those flags matter much in the case of
 numpy/scipy. In particular, using SSE and co automatically is simply
 impossible in numpy's case, since the C code is very generic (non-aligned,
 non-contiguous items) and the compiler has no way to know at compile
 time which cases are contiguous.

Good to know.  I'll try to suppress my desire to optimize and not care
about them :-).

 Those errors seem linked to the flags you have been using. Some errors
 are really strange (4 vs 8 byte types), but I don't see how they could be
 explained by a mismatch of 32 vs 64 bit machine code (to the best of my
 knowledge, you can't mix 32 and 64 bit machine code in one binary).
 Maybe a compiler bug when using the -march flag.

 Please try building numpy w/o BLAS/LAPACK and w/o compiler flags first, to
 test that the bare configuration does work, and that the problems are
 not due to some bugs in your toolchain/OS/etc... The test suite should
 run without any failure in this case; then, we can work on the
 BLAS/LAPACK thing.

I believe the logs I attached (or rather linked to) don't involve
atlas or lapack or any compiler flags.  I agree that they are strange
and I may have something weird floating around.  It's getting late
here, so I'll double check everything in the morning and may try to
run gcc's test suite to verify that isn't the problem.

Thanks again!

--Hoyt

 cheers,

 David







Re: [Numpy-discussion] glibc memory corruption when running numpy.test()

2008-11-20 Thread David Cournapeau
On Thu, Nov 20, 2008 at 6:14 PM, Hoyt Koepke [EMAIL PROTECTED] wrote:


 I believe the logs I attached (or rather linked to) don't involve
 atlas or lapack or any compiler flags.

Ah, yes, sorry, I missed the build.log one. The only thing which
surprises me a bit is the size of long double (I have never seen it be
16 bytes on Linux, but in theory, it should not matter as long as
the detected size is correct; I don't have a 64 bit machine handy ATM,
will check at home).

I must say I don't have any more ideas on what could cause this mess.
Did you clean the install directory and the build directory before
building?

David


Re: [Numpy-discussion] glibc memory corruption when running numpy.test()

2008-11-20 Thread Charles R Harris
On Thu, Nov 20, 2008 at 2:29 AM, David Cournapeau [EMAIL PROTECTED]wrote:

 On Thu, Nov 20, 2008 at 6:14 PM, Hoyt Koepke [EMAIL PROTECTED] wrote:

 
  I believe the logs I attached (or rather linked to) don't involve
  atlas or lapack or any compiler flags.

 Ah, yes, sorry, I missed the build.log one. The only thing which
 surprises me a bit is the size of long double (I have never seen it to
 be 16 bytes on Linux, but in theory, it should not matter as long as


I believe that's normal on 64 bit machines - long doubles are padded out
to the natural word size. Thus the 80-bit type goes to 64 + 32 bits on 32
bit machines, and 64 + 64 bits on 64 bit machines.
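[Chuck's point can be verified directly; the value is platform-dependent, so this is only a sketch run on whatever machine is at hand.]

```python
import numpy as np

# x87 extended precision holds 80 bits of actual data; the in-memory
# size is padded for alignment: typically 12 bytes on 32-bit Linux and
# 16 bytes on 64-bit Linux (and 8 where long double == double).
size = np.dtype(np.longdouble).itemsize
print(size)
```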

Chuck


Re: [Numpy-discussion] ANN: numpy.i - added managed deallocation to ARGOUTVIEW_ARRAY1 (ARGOUTVIEWM_ARRAY1)

2008-11-20 Thread Egor Zindy
Christopher Barker wrote:
 thanks! good stuff.
 It would be great if you could put that in the numpy (scipy?) wiki 
 though, so more folks will find it.

 -Chris
   
Hello Chris,

No problem, you are absolutely right; this is where the documents will
eventually have to end up for maximum visibility. There is already a bit
of numpy + SWIG in the cookbook, but that could well have been written
before numpy.i:

http://www.scipy.org/Cookbook/SWIG_and_NumPy

For added exposure, there is also the numpy.i document written by Bill 
Spotz. That could do with more examples (a separate document, maybe). The 
lack of examples is what prompted me to write my wiki in the first place!

I've also updated my other document with a more credible ARGOUTVIEW 
example. The part about numpy+SWIG+MinGW is now dwarfed by the body of 
numpy.i examples (not necessarily a good thing).

Plus, all the examples are ARRAY1; people have also asked for some ARRAY2 
/ ARRAY3 examples and FORTRAN arrays (which I'm afraid I don't know 
anything about).

Here are the two wikis so far:
http://code.google.com/p/ezwidgets/wiki/NumpySWIGMinGW
http://code.google.com/p/ezwidgets/wiki/NumpyManagedMemory

Still looking for a good name for my argout arrays with managed 
deallocation... After writing my ARGOUTVIEW example, I am not even sure 
my addition to numpy.i should be called a view anymore.

How does ARGOUTMAD_ARRAY sound? (for Managed Allocation / Deallocation) :-)

Regards,
Egor


Re: [Numpy-discussion] linalg.norm missing an 'axis' kwarg?!

2008-11-20 Thread Hans Meine
On Thursday 20 November 2008 11:11:14 Hans Meine wrote:
 I have a 2D matrix comprising a sequence of vectors, and I want to compute
 the norm of each vector.  np.linalg.norm seems to be the best bet, but it
 does not support axis.  Wouldn't this be a nice feature?

Here's a basic implementation.  The docstring and tests are not updated yet; 
I also wonder whether axis should be the first argument, but that could 
create compatibility problems.

Ciao,
  Hans
Index: numpy/linalg/linalg.py
===
--- numpy/linalg/linalg.py	(revision 6085)
+++ numpy/linalg/linalg.py	(working copy)
@@ -1324,7 +1324,7 @@
 st = s[:min(n, m)].copy().astype(_realType(result_t))
 return wrap(x), wrap(resids), results['rank'], st
 
-def norm(x, ord=None):
+def norm(x, ord=None, axis=None):
 
 Matrix or vector norm.
 
@@ -1365,20 +1365,24 @@
 
 x = asarray(x)
 nd = len(x.shape)
-if ord is None: # check the default case first and handle it immediately
+if axis is not None:
+nd = 1
+if ord is None:
+ord = 2
+elif ord is None: # check the default case first and handle it immediately
 return sqrt(add.reduce((x.conj() * x).ravel().real))
 
-if nd == 1:
+if nd == 1 or axis is not None:
 if ord == Inf:
-return abs(x).max()
+return abs(x).max(axis)
 elif ord == -Inf:
-return abs(x).min()
+return abs(x).min(axis)
 elif ord == 1:
-return abs(x).sum() # special case for speedup
+return abs(x).sum(axis) # special case for speedup
 elif ord == 2:
-return sqrt(((x.conj()*x).real).sum()) # special case for speedup
+return sqrt(((x.conj()*x).real).sum(axis)) # special case for speedup
 else:
-return ((abs(x)**ord).sum())**(1.0/ord)
+return ((abs(x)**ord).sum(axis))**(1.0/ord)
 elif nd == 2:
 if ord == 2:
 return svd(x, compute_uv=0).max()
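[Until such a patch is merged, the common 2-norm-per-row case from the diff above can be spelled directly with existing primitives:]

```python
import numpy as np

x = np.array([[3.0, 4.0],
              [5.0, 12.0]])
# Euclidean norm of each row: the axis-aware form of the nd == 1 branch
# in the patch, written out by hand.
row_norms = np.sqrt((x.conj() * x).real.sum(axis=1))
print(row_norms)  # [ 5. 13.]
```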


Re: [Numpy-discussion] Reduced row echelon form

2008-11-20 Thread Robert Young
Excellent, thank you all for your input. I don't actually have a specific
problem that I need it for; I just wanted to be able to work through some
book examples. I'll take a look at Sage and Sympy.

Thanks
Rob

On Wed, Nov 19, 2008 at 10:14 AM, Stéfan van der Walt [EMAIL PROTECTED]wrote:

 Hi Robert,

 2008/11/18 Robert Young [EMAIL PROTECTED]:
  Is there a method in NumPy that reduces a matrix to its reduced row
  echelon form? I'm brand new to both NumPy and linear algebra, and I'm
  not quite sure where to look.

 I use the Sympy package.  It is small, easy to install, runs on pure
 Python, and gets the job done:

 >>> import numpy as np
 >>> x = np.random.random((3,3))
 >>> import sympy
 >>> sympy.Matrix(x).rref()
 ([1, 0, 0]
 [0, 1, 0]
 [0, 0, 1], [0, 1, 2])

 If you are interested, I can also provide you with a version that runs
 under pure NumPy, using the LU-decomposition.

 Cheers
 Stéfan
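[For the curious, the pure-NumPy version Stéfan alludes to might look something like the following Gauss-Jordan sketch. This is my own illustration, not Stéfan's code, and a textbook algorithm rather than a numerically robust library routine.]

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduced row echelon form by Gauss-Jordan elimination.

    Returns (R, pivot_columns).  Uses partial pivoting for stability.
    """
    R = np.array(A, dtype=float)
    rows, cols = R.shape
    pivots = []
    r = 0
    for c in range(cols):
        if r == rows:
            break
        # pick the largest remaining entry in column c as the pivot
        p = r + np.argmax(np.abs(R[r:, c]))
        if abs(R[p, c]) < tol:
            continue                      # no pivot in this column
        R[[r, p]] = R[[p, r]]             # swap pivot row into place
        R[r] /= R[r, c]                   # normalize the pivot row
        mask = np.arange(rows) != r
        R[mask] -= np.outer(R[mask, c], R[r])  # eliminate the column
        pivots.append(c)
        r += 1
    return R, pivots
```

For an invertible matrix this reduces to the identity, matching the sympy output shown above.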



Re: [Numpy-discussion] question about the documentation of linalg.solve

2008-11-20 Thread Alan G Isaac
On 11/20/2008 12:58 AM Scott Sinclair apparently wrote:
 A Notes section giving an overview of the algorithm has been added to
 the docstring http://docs.scipy.org/numpy/docs/numpy.linalg.linalg.solve/.


You beat me to it.
(I was awaiting editing privileges,
which I just received.)
Thanks!
Alan Isaac



[Numpy-discussion] Priority rules between 0d array and np.scalar

2008-11-20 Thread Pierre GM
All,
That time of the month again: could anybody (and I'm thinking about you  
in particular, Travis O.) explain to me what the priority rules are  
between a 0d ndarray and a np.scalar?

OK, I understand there are no real rules. However, the bug I was  
describing in a previous thread 
(www.mail-archive.com/numpy-discussion@scipy.org/msg13235.html) is still 
around:

When multiplying/adding a np.scalar and ma.masked, the result varies  
depending on the order of the arguments as well as on their dtype.  
(Keep in mind that ma.masked is a 0d ndarray subclass of value 0 and  
dtype np.float64, with an __array_priority__ of 15.)

ma.masked * np.float32(1) = ma.masked
np.float32(1) * ma.masked = ma.masked
ma.masked * np.float64(1) = ma.masked
np.float64(1) * ma.masked = 0

My understanding is that for the first 2 operations, ma.masked takes  
over because it has the higher dtype. In that case, we use the rules  
defined in MaskedArray for multiplication (either __mul__ or  
__array_wrap__).

For the 3rd and 4th operations, the two arguments have the same dtype  
and it looks like we're switching to a different priority rule.
I would have expected ma.masked to take over in both cases, because a  
MaskedArray has a higher __array_priority__ than a ndarray or a  
np.scalar. That's not the case: the fact that ma.masked is a subclass  
of ndarray is not recognized...

I hope I didn't lose anybody in my description. A ticket has recently  
been filed about the same issue:
http://scipy.org/scipy/numpy/ticket/826

Looking forward to hearing from y'all
P.







[Numpy-discussion] Numpy 1.2.2 ?

2008-11-20 Thread Pierre GM
All,
I've recently introduced some little fixes in the SVN version of  
numpy.ma.core.
Is there any plan for a 1.2.2 release, or will we directly switch to  
1.3.0? Do I need to backport these fixes to the 1.2.x branch?
Thanks a lot in advance
P.


Re: [Numpy-discussion] question about the documentation of linalg.solve

2008-11-20 Thread jh
On Thu, Nov 20, 2008 at 07:58:52AM +0200, Scott Sinclair wrote:
 A Notes section giving an overview of the algorithm has been added to
 the docstring http://docs.scipy.org/numpy/docs/numpy.linalg.linalg.solve/.

Doc goals: We would like each function and class to have docs that
compare favorably to those of all our competitors, and some (notably
Matlab) have very good docs.  For our effort, this means (at the very
least):

- readable by a user one level below the likely user of the item
  (i.e., they can read the doc and at least learn the type of use it
  might be for, so that in the future they know where to go)
- complete with regard to both inputs/outputs and methodology
- referenced to the literature, particularly in cases where the
  methods employed impose limitations for certain cases
- both simple examples and some that show more complex cases,
  particularly if the item is designed to work with other routines

There was a big push over the summer, and a large number of people
pitched in, plowing through the list of undocumented functions and
writing.  However, many of the functions that remain are not amenable
to this approach because they require specialist attention to document
methodology that not everyone is familiar with.  This will be a
dominant issue when we start documenting scipy.

So (everyone), if you identify a routine in your specialty that
requires a doc, please either hop over to docs.scipy.org and start
writing, or post a message on [EMAIL PROTECTED] asking to team up
with a writer.  For convenience, the doc wiki contains links to the
sources so you can easily look at the functions you are working on.
Even simply adding something in the Notes section about the method (as
was done in this case), putting in a reference, or giving a
non-trivial example will provide material for other writers to flesh
out a full doc for the routine.

Thanks everyone for your help!

--jh--


Re: [Numpy-discussion] unpickle

2008-11-20 Thread Frank Lagor

 This, and your previous question, are mostly off-topic for
 numpy-discussion. You may want to ask such questions in the future on
 more general Python mailing lists.

  http://www.python.org/community/lists/

 --
 Robert Kern


Yes, of course.  Sorry for the spam.  The numpy list is just so helpful :)

No problem -- I'll use the python list for this stuff.

-Frank


[Numpy-discussion] contiguous regions

2008-11-20 Thread John Hunter
I frequently want to break a 1D array into regions above and below
some threshold, identifying all such subslices where the contiguous
elements are above the threshold.  I have two related implementations
below to illustrate what I am after.  The first crossings is rather
naive in that it doesn't handle the case where an element is equal to
the threshold (assuming zero for the threshold in the examples below).
 The second is correct (I think) but is pure python.  Has anyone got a
nifty numpy solution for this?

import numpy as np
import matplotlib.pyplot as plt
t = np.arange(0.0123, 2, 0.05)
s = np.sin(2*np.pi*t)

def crossings(x):
    """
    return a list of (above, ind0, ind1).  ind0 and ind1 are regions
    such that the slice x[ind0:ind1] > 0 when above is True and
    x[ind0:ind1] < 0 when above is False
    """
    N = len(x)
    crossings = x[:-1]*x[1:] < 0
    ind = np.nonzero(crossings)[0]+1
    lastind = 0
    data = []
    for i in range(len(ind)):
        above = x[lastind] > 0
        thisind = ind[i]
        data.append((above, lastind, thisind))
        lastind = thisind

    # put the one-past-the-end index in if not already there
    if len(data) and data[-1][2] != N:
        data.append((not data[-1][0], thisind, N))
    return data

def contiguous_regions(mask):
    """
    return a list of (ind0, ind1) such that mask[ind0:ind1].all() is
    True and we cover all such regions
    """
    in_region = None
    boundaries = []
    for i, val in enumerate(mask):
        if in_region is None and val:
            in_region = i
        elif in_region is not None and not val:
            boundaries.append((in_region, i))
            in_region = None

    if in_region is not None:
        boundaries.append((in_region, i+1))
    return boundaries



fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title('using crossings')

ax.plot(t, s, 'o')
ax.axhline(0)


for above, ind0, ind1 in crossings(s):
    if above: color = 'green'
    else: color = 'red'
    tslice = t[ind0:ind1]
    ax.axvspan(tslice[0], tslice[-1], facecolor=color, alpha=0.5)

fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title('using contiguous regions')
ax.plot(t, s, 'o')
ax.axhline(0)

for ind0, ind1 in contiguous_regions(s > 0):
    tslice = t[ind0:ind1]
    ax.axvspan(tslice[0], tslice[-1], facecolor='green', alpha=0.5)

for ind0, ind1 in contiguous_regions(s < 0):
    tslice = t[ind0:ind1]
    ax.axvspan(tslice[0], tslice[-1], facecolor='red', alpha=0.5)


plt.show()
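[A fully vectorized sketch of contiguous_regions, using diff/nonzero in the same spirit as Gregor's reply below; one possible idiom among several, not claiming to be the canonical one:]

```python
import numpy as np

def contiguous_regions_np(mask):
    """Return a list of (ind0, ind1) with mask[ind0:ind1].all() True,
    covering every maximal run of True values."""
    mask = np.asarray(mask, dtype=bool)
    # Pad with False so every run has both a rising and a falling edge,
    # even when the sequence starts or ends above the threshold.
    d = np.diff(np.concatenate(([False], mask, [False])).astype(np.int8))
    starts = np.nonzero(d == 1)[0]    # False -> True transitions
    stops = np.nonzero(d == -1)[0]    # True -> False transitions
    return list(zip(starts, stops))
```

contiguous_regions_np([0, 1, 1, 0, 1]) gives [(1, 3), (4, 5)], matching the pure-python version above.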


Re: [Numpy-discussion] contiguous regions

2008-11-20 Thread Gregor Thalhammer
John Hunter schrieb:
 I frequently want to break a 1D array into regions above and below
 some threshold, identifying all such subslices where the contiguous
 elements are above the threshold.  I have two related implementations
 below to illustrate what I am after.  The first crossings is rather
 naive in that it doesn't handle the case where an element is equal to
 the threshold (assuming zero for the threshold in the examples below).
  The second is correct (I think) but is pure python.  Has anyone got a
 nifty numpy solution for this?

 import numpy as np
 import matplotlib.pyplot as plt
 t = np.arange(0.0123, 2, 0.05)
 s = np.sin(2*np.pi*t)

   
Here is my proposal; it needs some polishing:

mask = (s > 0).astype(np.int8)
d = np.diff(mask)
idx, = d.nonzero()
# now handle the cases where s is above the threshold at the beginning
# or end of the sequence
if d[idx[0]] == -1:
    idx = np.r_[0, idx]
if d[idx[-1]] == 1:
    idx = np.r_[idx, len(s)]
idx.shape = (-1, 2)

Gregor



Re: [Numpy-discussion] can't build numpy 1.2.0 under python 2.6 (windows-amd64) using VS9

2008-11-20 Thread Hanni Ali
Hi All,

I have reached the point where I really need to get some sort of
optimised/accelerated BLAS/LAPACK for Windows 64, so I have been trying a
few different things out to see whether I can get anything usable. Today I
stumbled across this:

http://icl.cs.utk.edu/lapack-for-windows/index.html

Has anyone used this before? I plan on seeing where it takes me in the
morning, so I will report back if I get it working with numpy.

Regards,

Hanni


2008/10/12 Michael Abshoff [EMAIL PROTECTED]

 David Cournapeau wrote:

  Michael Abshoff wrote:

 Hi David,

  Sure, but there isn't even a 32 bit gcc out there that can produce 64
  bit PE binaries (aside from the MinGW fork, which AFAIK does not work
  particularly well and allegedly has issues with the cleanliness of some
  of its code, which is allegedly why the official MinGW people
  will not touch the code base).
 
  The biggest problem is that officially, there is still no gcc 4 release
  for mingw. I saw a gcc 4 section in cygwin, though, so maybe it is about
  to be released. There is no support at all for 64 bits PE in the 3 serie.

 Yes, you are correct and I was wrong. I just checked out the mingw-64
 project and there has been a lot of activity the last couple month,
 including a patch to build pthread-win32 in 64 bit mode.

  I think binutils officially supports 64 bits PE (I can build a linux
  hosted binutils for 64 bits PE with x86_64-pc-mingw32 as a target, and
  it seems to work: disassembling and co). gcc 4 can work, too (you can
  build a bootstrap C compiler which targets windows 64 bits, IIRC). The
  biggest problem AFAICS is the runtime (mingw64, which is indeed legally
  murky).

 I would really like to find the actual reason *why* the legal status of
 the 64 bit MinGW port is murky (to my knowledge it has to do with taking
 code from the MS Platform toolkit, but that is conjecture), so I guess
 I will do the obvious thing and ask on the MinGW list :)

  Ok, that is a concern I usually do not have since I tend to build my own
  Python :).
 
  I would say that if you can build python by yourself on windows, you can
  certainly build numpy by yourself :) It took me quite a time to be able
  to build python on windows by myself from scratch.

 Sure, I do see your point.

 Accidentally someone posted about

http://debian-interix.net/

 on the sage-windows list today. It offers a gcc 4.2 toolchain and AFAIK
 there is at least a patch set for ATLAS to make it work on Interix.

  cheers,
 
  David

 Cheers,

 Michael





[Numpy-discussion] numpy.loadtxt requires seek()?

2008-11-20 Thread Ryan May
Hi,

Does anyone know why numpy.loadtxt(), in checking the validity of a
filehandle, checks for the seek() method, which appears to have no
bearing on whether an object will work?

I'm trying to use loadtxt() directly with the file-like object returned
by urllib2.urlopen().  If I change the check for 'seek' to one for
'readline', using the urlopen object works without a hitch.

As far as I can tell, all the filehandle object needs to meet is:

1) Have a readline() method so that loadtxt can skip the first N lines
and read the first line of data

2) Be compatible with itertools.chain() (should be any iterable)

At a minimum, I'd ask to change the check for 'seek' to one for 'readline'.

On a bit deeper thought, it would seem that loadtxt would work with any
iterable that returns individual lines.  I'd like then to change the
calls to readline() to just getting the next object from the iterable
(iter.next() ?) and change the check for a file-like object to just a
check for an iterable.  In fact, we could use the iter() builtin to
convert whatever got passed.  That would give automatically a next()
method and would raise a TypeError if it's incompatible.
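[The proposal in the last paragraph can be sketched as follows; parse_lines is a hypothetical helper illustrating the iter()/next() pattern, not the actual loadtxt code:]

```python
def parse_lines(source, skiprows=1):
    """Accept a file handle or any iterable of lines, as proposed above."""
    it = iter(source)        # raises TypeError for non-iterables, as desired
    for _ in range(skiprows):
        next(it)             # replaces the fh.readline() calls
    # parse whitespace-separated floats from the remaining lines
    return [[float(tok) for tok in line.split()] for line in it]
```

The same call then works equally for open files, generators, and plain lists of strings.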

Thoughts?  I'm willing to write up the patch for either approach.
Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] linalg.norm missing an 'axis' kwarg?!

2008-11-20 Thread Hans Meine
On Thursday 20 November 2008, Alan G Isaac wrote:
 On 11/20/2008 5:11 AM Hans Meine apparently wrote:
  I have a 2D matrix comprising a sequence of vectors, and I want to
  compute the norm of each vector.  np.linalg.norm seems to be the best
  bet, but it does not support axis.  Wouldn't this be a nice feature?

 Of possible use until then:
 http://docs.scipy.org/doc/numpy/reference/generated/numpy.apply_along_axis.html
  

Thanks for the hint; yes, as you can see, I have already patched norm() in the
meantime.

BTW: Wow, this is an exceptionally nice doc page, sphinx + scipy's doc system
really rocks! :-)

Ciao, /  /.o.
 /--/ ..o
/  / ANS  ooo




Re: [Numpy-discussion] numpy.loadtxt requires seek()?

2008-11-20 Thread Ryan May
Stéfan van der Walt wrote:
 2008/11/20 Ryan May [EMAIL PROTECTED]:
 Does anyone know why numpy.loadtxt(), in checking the validity of a
 filehandle, checks for the seek() method, which appears to have no
 bearing on whether an object will work?
 
 I think this is simply a naive mistake on my part.  I was looking for
 a way to identify files; your patch would be welcome.

I've attached a simple patch that changes the check for seek() to a
check for readline().  I'll punt on my idea of just using iterators,
since that seems like slightly greater complexity for no gain. (I'm not
sure how many people end up with data in a list of strings and wish they
could pass that to loadtxt).

While you're at it, would you commit my patch to add support for bzipped
files as well (attached)?

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma
Index: numpy/lib/io.py
===
--- numpy/lib/io.py (revision 5953)
+++ numpy/lib/io.py (working copy)
@@ -253,8 +253,8 @@
 Parameters
 --
 fname : file or string
-File or filename to read.  If the filename extension is ``.gz``,
-the file is first decompressed.
+File or filename to read.  If the filename extension is ``.gz`` or
+``.bz2``, the file is first decompressed.
 dtype : data-type
 Data type of the resulting array.  If this is a record data-type,
 the resulting array will be 1-dimensional, and each row will be
@@ -320,6 +320,9 @@
 if fname.endswith('.gz'):
 import gzip
 fh = gzip.open(fname)
+elif fname.endswith('.bz2'):
+import bz2
+fh = bz2.BZ2File(fname)
 else:
 fh = file(fname)
 elif hasattr(fname, 'seek'):
Index: numpy/lib/io.py
===
--- numpy/lib/io.py (revision 6085)
+++ numpy/lib/io.py (working copy)
@@ -333,7 +333,7 @@
 fh = gzip.open(fname)
 else:
 fh = file(fname)
-elif hasattr(fname, 'seek'):
+elif hasattr(fname, 'readline'):
 fh = fname
 else:
 raise ValueError('fname must be a string or file handle')
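[With the check relaxed to readline(), any readline-bearing object works; here io.StringIO stands in for the urllib2.urlopen handle as a quick sanity check (illustration only, not part of the patch):]

```python
import io
import numpy as np

# A file-like object: it has readline(), just like the urlopen handle.
fh = io.StringIO("# header\n1 2\n3 4\n")
data = np.loadtxt(fh, skiprows=1)
print(data.shape)  # (2, 2)
```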


[Numpy-discussion] numpy.ma.allclose bug

2008-11-20 Thread Charles سمير Doutriaux
The following shows a bug in numpy.ma.allclose:

import numpy
import numpy.ma

a = numpy.arange(100)
b=numpy.reshape(a,(10,10))
print b
c=numpy.ma.masked_greater(b,98)
print c.count()
numpy.ma.allclose(b,1)
numpy.ma.allclose(c,1)


Since c is masked, the last call fails.

I think it should pass, returning either False or True; I'm not sure
what it should return in the case where all the elements are equal to 1
(except, of course, the masked one).

Note that the following works:

numpy.ma.allclose(c,numpy.ma.ones(c.shape))

So I'm good for now


Thanks,

C

