[Numpy-discussion] Anyone with Core i7 and Ubuntu 10.04?

2010-11-08 Thread Ian Goodfellow
I'm wondering if anyone here has successfully built numpy with ATLAS
and a Core i7 CPU on Ubuntu 10.04. If so, I could really use your
help. I've been trying since August (see my earlier messages to this
list) to get numpy running at full speed on my machine with no luck.
The Ubuntu packages don't seem very fast, and numpy won't use the
version of ATLAS that I compiled. It's pretty sad; anything that
involves a lot of BLAS calls runs slower on this 2.8 GHz Core i7 than
on an older 2.66 GHz Core 2 Quad I use at work.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Trac unaware of github move

2010-11-08 Thread Sébastien Barthélemy
On Sun, 7 Nov 2010, Ralf Gommers wrote:
 That will require renaming those files in the source tree from *.txt
 to *.rst, otherwise there's no way to have github render them
 properly. Unless I missed something. Would that be fine?

I think a *.rst.txt extension would also be recognized by github.

Note that the docutils FAQ advises against using .rst as a file
extension:
http://docutils.sourceforge.net/FAQ.html#what-s-the-standard-filename-extension-for-a-restructuredtext-file

Cheers

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] LapackError:non-native byte order

2010-11-08 Thread LittleBigBrain
Hi everyone,
In my system '<' is the native byte-order, but unless I change the
byte-order label to '=', it won't work in the linalg sub-module,
although it works OK in others. I am not sure whether this is expected
behavior or a bug?
>>> import sys
>>> sys.byteorder
'little'
>>> a.dtype.byteorder
'<'
>>> b.dtype.byteorder
'<'
>>> c=a*b
>>> c.dtype.byteorder
'='
>>> d=npy.linalg.solve(a, c)

Traceback (most recent call last):
  File "<pyshell#20>", line 1, in <module>
    d=npy.linalg.solve(a, c)
  File "C:\Python27\lib\site-packages\numpy\linalg\linalg.py", line 326, in solve
    results = lapack_routine(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0)
LapackError: Parameter a has non-native byte order in lapack_lite.dgesv
>>> cc=c.newbyteorder('<')
>>> cc.dtype.byteorder
'<'
>>> d=npy.linalg.solve(a, cc)

Traceback (most recent call last):
  File "<pyshell#46>", line 1, in <module>
    d=npy.linalg.solve(a, cc)
  File "C:\Python27\lib\site-packages\numpy\linalg\linalg.py", line 326, in solve
    results = lapack_routine(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0)
LapackError: Parameter a has non-native byte order in lapack_lite.dgesv
>>> d=npy.linalg.solve(a.newbyteorder('='), c)
>>> d.shape
(2000L, 1000L)

Thanks,

LittleBigBrain
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] LapackError:non-native byte order

2010-11-08 Thread Pauli Virtanen
ma, 2010-11-08 kello 18:56 +0100, LittleBigBrain kirjoitti:
 In my system '<' is the native byte-order, but unless I change the
 byte-order label to '=', it won't work in the linalg sub-module,
 although it works OK in others. I am not sure whether this is expected
 behavior or a bug?
 >>> import sys
 >>> sys.byteorder
 'little'
 >>> a.dtype.byteorder
 '<'
 >>> b.dtype.byteorder
 '<'

The error is here: it's not possible to create such dtypes via any Numpy
methods -- the '<' (or '>') is always normalized to '='. Numpy and
several other modules consequently assume this normalization.
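
A minimal sketch of that normalization, assuming a little-endian machine:

import numpy as np

# dtype construction normalizes an explicit native-order spec to '=':
print(np.dtype('<f8').byteorder)    # '='
print(np.ones(3).dtype.byteorder)   # '='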

Where do `a` and `b` come from?

-- 
Pauli Virtanen

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] LapackError:non-native byte order

2010-11-08 Thread Pauli Virtanen
Mon, 08 Nov 2010 19:31:31 +0100, Pauli Virtanen wrote:

 ma, 2010-11-08 kello 18:56 +0100, LittleBigBrain kirjoitti:
 In my system '<' is the native byte-order, but unless I change the
 byte-order label to '=', it won't work in the linalg sub-module,
 although it works OK in others. I am not sure whether this is expected
 behavior or a bug?
 >>> import sys
 >>> sys.byteorder
 'little'
 >>> a.dtype.byteorder
 '<'
 >>> b.dtype.byteorder
 '<'
 
 The error is here: it's not possible to create such dtypes via any Numpy
 methods -- the '<' (or '>') is always normalized to '='. Numpy and
 several other modules consequently assume this normalization.
 
 Where do `a` and `b` come from?

Ok, `x.newbyteorder('<')` seems to do this. Now I'm unsure how things are 
supposed to work.
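
For illustration, a minimal sketch of the oddity, assuming a little-endian
machine:

import numpy as np

x = np.ones(3)
print(x.dtype.byteorder)   # '=' -- the normalized native marker
y = x.newbyteorder('<')    # same data, but an explicit '<' dtype
print(y.dtype.byteorder)   # '<' here -- newbyteorder skips the normalization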

-- 
Pauli Virtanen

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] OFFTOPIC: simple databases

2010-11-08 Thread Renato Fabbri
Dear All,

I want to find simple databases, e.g. a 5-dimensional one with more than 30
samples.

I am having a difficult time with this.

Where do you get them?

all the best,
rf

-- 
GNU/Linux User #479299
skype: fabbri.renato
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] LapackError:non-native byte order

2010-11-08 Thread Matthew Brett
Hi,

On Mon, Nov 8, 2010 at 10:34 AM, Pauli Virtanen p...@iki.fi wrote:
 Mon, 08 Nov 2010 19:31:31 +0100, Pauli Virtanen wrote:

 ma, 2010-11-08 kello 18:56 +0100, LittleBigBrain kirjoitti:
 In my system '<' is the native byte-order, but unless I change the
 byte-order label to '=', it won't work in the linalg sub-module,
 although it works OK in others. I am not sure whether this is expected
 behavior or a bug?
 >>> import sys
 >>> sys.byteorder
 'little'
 >>> a.dtype.byteorder
 '<'
 >>> b.dtype.byteorder
 '<'

 The error is here: it's not possible to create such dtypes via any Numpy
 methods -- the '<' (or '>') is always normalized to '='. Numpy and
 several other modules consequently assume this normalization.

 Where do `a` and `b` come from?

 Ok, `x.newbyteorder('<')` seems to do this. Now I'm unsure how things are
 supposed to work.

Yes - it is puzzling that ``x.newbyteorder('<')`` makes arrays that
are confusing to numpy.  If numpy generally always normalizes the
system byte order to '=', then should that not also be true of
``newbyteorder``?

See you,

Matthew
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] OFFTOPIC: simple databases

2010-11-08 Thread Pauli Virtanen
Mon, 08 Nov 2010 17:00:34 -0200, Renato Fabbri wrote:
[clip: offtopic]

Please post this on the scipy-user list instead, it's more suitable for 
misc questions.

-- 
Pauli Virtanen

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Solving Ax = b: inverse vs cholesky factorization

2010-11-08 Thread Joon



Hi,

I was wondering when it is better to store the Cholesky factor and use it
to solve Ax = b, instead of storing the inverse of A. (A is a symmetric,
positive-definite matrix.)

Even in the repeated case, if I have the inverse of A (invA) stored, then
I can solve Ax = b_i, i = 1, ..., n, by x = dot(invA, b_i). Is
dot(invA, b_i) slower than cho_solve(cho_factor, b_i)?

I heard calculating the inverse is not recommended, but my understanding
is that numpy.linalg.inv actually solves Ax = I instead of literally
calculating the inverse of A. It would be great if I can get some
intuition about this.

Thank you,
Joon
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Solving Ax = b: inverse vs cholesky factorization

2010-11-08 Thread Pauli Virtanen
Mon, 08 Nov 2010 13:17:11 -0600, Joon wrote:
 I was wondering when it is better to store cholesky factor and use it to
 solve Ax = b, instead of storing the inverse of A. (A is a symmetric,
 positive-definite matrix.)
 
 Even in the repeated case, if I have the inverse of A (invA) stored,
 then I can solve Ax = b_i, i = 1, ... , n, by x = dot(invA, b_i). Is
 dot(invA, b_i) slower than cho_solve(cho_factor, b_i)?

Not necessarily slower, but it contains more numerical error.

http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/

 I heard calculating the inverse is not recommended, but my understanding
 is that numpy.linalg.inv actually solves Ax = I instead of literally
 calculating the inverse of A. It would be great if I can get some
 intuition about this.

That's the same thing as computing the inverse matrix. 

-- 
Pauli Virtanen

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Developmental version numbering with git

2010-11-08 Thread Bruce Southey
Hi,
Since the change to git, the numpy version in setup.py is '2.0.0.dev' 
regardless, because the prior numbering was determined by svn.

Is there a plan to add some numbering system to the numpy developmental 
version?

Regardless of the answer, the 'numpy/numpy/version.py' will need to be 
changed because of the reference to the svn naming.

Bruce
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Solving Ax = b: inverse vs cholesky factorization

2010-11-08 Thread Joon



On Mon, 08 Nov 2010 13:23:46 -0600, Pauli Virtanen p...@iki.fi wrote:

 Mon, 08 Nov 2010 13:17:11 -0600, Joon wrote:
  I was wondering when it is better to store the Cholesky factor and use
  it to solve Ax = b, instead of storing the inverse of A. (A is a
  symmetric, positive-definite matrix.) Even in the repeated case, if I
  have the inverse of A (invA) stored, then I can solve Ax = b_i,
  i = 1, ..., n, by x = dot(invA, b_i). Is dot(invA, b_i) slower than
  cho_solve(cho_factor, b_i)?

 Not necessarily slower, but it contains more numerical error.

 http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/

  I heard calculating the inverse is not recommended, but my
  understanding is that numpy.linalg.inv actually solves Ax = I instead
  of literally calculating the inverse of A. It would be great if I can
  get some intuition about this.

 That's the same thing as computing the inverse matrix.

Oh I see. So I guess in the invA = solve(A, I) and then x = dot(invA, b)
case, there are more places where numerical errors occur than in the
x = solve(A, b) case.

Thank you,
Joon
-- 
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Solving Ax = b: inverse vs cholesky factorization

2010-11-08 Thread Joon
On Mon, 08 Nov 2010 13:23:46 -0600, Pauli Virtanen p...@iki.fi wrote:

 Mon, 08 Nov 2010 13:17:11 -0600, Joon wrote:
 I was wondering when it is better to store cholesky factor and use it to
 solve Ax = b, instead of storing the inverse of A. (A is a symmetric,
 positive-definite matrix.)

 Even in the repeated case, if I have the inverse of A (invA) stored,
 then I can solve Ax = b_i, i = 1, ... , n, by x = dot(invA, b_i). Is
 dot(invA, b_i) slower than cho_solve(cho_factor, b_i)?

 Not necessarily slower, but it contains more numerical error.

 http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/

 I heard calculating the inverse is not recommended, but my understanding
 is that numpy.linalg.inv actually solves Ax = I instead of literally
 calculating the inverse of A. It would be great if I can get some
 intuition about this.

 That's the same thing as computing the inverse matrix.


Another question is, is it better to do cho_solve(cho_factor(A), b) than  
solve(A, b)?

Thank you,
Joon
--
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Solving Ax = b: inverse vs cholesky factorization

2010-11-08 Thread Bruce Southey
On 11/08/2010 01:38 PM, Joon wrote:
 On Mon, 08 Nov 2010 13:23:46 -0600, Pauli Virtanen p...@iki.fi wrote:

  Mon, 08 Nov 2010 13:17:11 -0600, Joon wrote:
  I was wondering when it is better to store cholesky factor and use 
 it to
  solve Ax = b, instead of storing the inverse of A. (A is a symmetric,
  positive-definite matrix.)
 
  Even in the repeated case, if I have the inverse of A (invA) stored,
  then I can solve Ax = b_i, i = 1, ... , n, by x = dot(invA, b_i). Is
  dot(invA, b_i) slower than cho_solve(cho_factor, b_i)?
 
  Not necessarily slower, but it contains more numerical error.
 
  http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/
 
  I heard calculating the inverse is not recommended, but my 
 understanding
  is that numpy.linalg.inv actually solves Ax = I instead of literally
  calculating the inverse of A. It would be great if I can get some
  intuition about this.
 
  That's the same thing as computing the inverse matrix.
 

 Oh I see. So I guess in the invA = solve(A, I) and then x = dot(invA, b) 
 case, there are more places where numerical errors occur than in the 
 x = solve(A, b) case.

 Thank you,
 Joon



Numpy uses SVD to get the (pseudo) inverse, which is usually very 
accurate.

There are a lot of misconceptions involved, but ultimately it comes down 
to two options:
If you actually need the inverse (e.g. for standard errors), then 
everything else is rather moot.
If you are just solving a system, then there are numerical solvers that 
beat the inverse-based approaches in both speed and accuracy.

Bruce


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Catching and dealing with floating point errors

2010-11-08 Thread Skipper Seabold
I am doing some optimizations on random samples.  In a small number of
cases, the objective is not well-defined for a given sample (it's not
possible to tell beforehand and hopefully won't happen much in
practice).  What is the most numpythonic way to handle this?  It
doesn't look like I can use np.seterrcall in this case (without
ignoring its actual intent).  Here's a toy example of the method I
have come up with.

import numpy as np

def reset_seterr(d):
    """
    Helper function to reset FP error-handling to user's original settings
    """
    for action in [i+'='+"'"+d[i]+"'" for i in d]:
        exec(action)
    np.seterr(over=over, divide=divide, invalid=invalid, under=under)

def log_random_sample(X):
    """
    Toy example to catch a FP error, re-sample, and return objective
    """
    d = np.seterr()     # get original values to reset
    np.seterr('raise')  # set to raise on fp error in order to catch
    try:
        ret = np.log(X)
        reset_seterr(d)
        return ret
    except:
        lb,ub = -1,1  # includes bad domain to test recursion
        X = np.random.uniform(lb,ub)
        reset_seterr(d)
        return log_random_sample(X)

lb,ub = 0,0
orig_setting = np.seterr()
X = np.random.uniform(lb,ub)
log_random_sample(X)
assert(orig_setting == np.seterr())

This seems to work, but I'm not sure it's as transparent as it could
be.  If it is, then maybe it will be useful to others.

Skipper
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Solving Ax = b: inverse vs cholesky factorization

2010-11-08 Thread Pauli Virtanen
On Mon, 08 Nov 2010 14:06:03 -0600, Bruce Southey wrote:
[clip]
 Numpy uses SVD to get the (pseudo) inverse, which is usually very
 accurate at getting (pseudo) inverse.

numpy.linalg.inv does

solve(a, identity(a.shape[0], dtype=a.dtype))

It doesn't use xGETRI since that's not included in lapack_lite.

numpy.linalg.pinv OTOH does use SVD, but that's probably more costly.
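
For illustration, a quick check of that equivalence (a sketch, using an
arbitrary well-conditioned test matrix):

import numpy as np

a = np.random.rand(4, 4) + 4 * np.eye(4)  # well-conditioned test matrix
via_solve = np.linalg.solve(a, np.identity(a.shape[0], dtype=a.dtype))
assert np.allclose(via_solve, np.linalg.inv(a))   # inv is the solve-based construction
assert np.allclose(via_solve, np.linalg.pinv(a))  # pinv (SVD) agrees for an invertible a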

-- 
Pauli Virtanen

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Catching and dealing with floating point errors

2010-11-08 Thread Skipper Seabold
On Mon, Nov 8, 2010 at 3:14 PM, Skipper Seabold jsseab...@gmail.com wrote:
 I am doing some optimizations on random samples.  In a small number of
 cases, the objective is not well-defined for a given sample (it's not
 possible to tell beforehand and hopefully won't happen much in
 practice).  What is the most numpythonic way to handle this?  It
 doesn't look like I can use np.seterrcall in this case (without
 ignoring its actual intent).  Here's a toy example of the method I
 have come up with.

 import numpy as np

 def reset_seterr(d):
     """
     Helper function to reset FP error-handling to user's original settings
     """
     for action in [i+'='+"'"+d[i]+"'" for i in d]:
         exec(action)
     np.seterr(over=over, divide=divide, invalid=invalid, under=under)


It just occurred to me that this is unsafe.  Better options for
resetting seterr?

 def log_random_sample(X):
     """
     Toy example to catch a FP error, re-sample, and return objective
     """
    d = np.seterr() # get original values to reset
    np.seterr('raise') # set to raise on fp error in order to catch
    try:
        ret = np.log(X)
        reset_seterr(d)
        return ret
    except:
        lb,ub = -1,1  # includes bad domain to test recursion
        X = np.random.uniform(lb,ub)
        reset_seterr(d)
        return log_random_sample(X)

 lb,ub = 0,0
 orig_setting = np.seterr()
 X = np.random.uniform(lb,ub)
 log_random_sample(X)
 assert(orig_setting == np.seterr())

 This seems to work, but I'm not sure it's as transparent as it could
 be.  If it is, then maybe it will be useful to others.

 Skipper

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Catching and dealing with floating point errors

2010-11-08 Thread Bruce Southey
On 11/08/2010 02:17 PM, Skipper Seabold wrote:
 On Mon, Nov 8, 2010 at 3:14 PM, Skipper Seaboldjsseab...@gmail.com  wrote:
 I am doing some optimizations on random samples.  In a small number of
 cases, the objective is not well-defined for a given sample (it's not
 possible to tell beforehand and hopefully won't happen much in
 practice).  What is the most numpythonic way to handle this?  It
 doesn't look like I can use np.seterrcall in this case (without
 ignoring its actual intent).  Here's a toy example of the method I
 have come up with.

 import numpy as np

 def reset_seterr(d):
     """
     Helper function to reset FP error-handling to user's original settings
     """
     for action in [i+'='+"'"+d[i]+"'" for i in d]:
         exec(action)
     np.seterr(over=over, divide=divide, invalid=invalid, under=under)

 It just occurred to me that this is unsafe.  Better options for
 resetting seterr?

 def log_random_sample(X):
     """
     Toy example to catch a FP error, re-sample, and return objective
     """
     d = np.seterr() # get original values to reset
     np.seterr('raise') # set to raise on fp error in order to catch
     try:
         ret = np.log(X)
         reset_seterr(d)
         return ret
     except:
         lb,ub = -1,1  # includes bad domain to test recursion
         X = np.random.uniform(lb,ub)
         reset_seterr(d)
         return log_random_sample(X)

 lb,ub = 0,0
 orig_setting = np.seterr()
 X = np.random.uniform(lb,ub)
 log_random_sample(X)
 assert(orig_setting == np.seterr())

 This seems to work, but I'm not sure it's as transparent as it could
 be.  If it is, then maybe it will be useful to others.

 Skipper

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
What do you mean by 'floating point error'?
For example, log of zero is not what I would consider a 'floating point 
error'.

In this case, if you are after a log distribution, then you should be 
ensuring that the lower bound to the np.random.uniform() is always 
greater than zero. That is, if lb <= 0 then you *know* you have a 
problem at the very start.


Bruce


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Catching and dealing with floating point errors

2010-11-08 Thread Warren Weckesser
On Mon, Nov 8, 2010 at 2:17 PM, Skipper Seabold jsseab...@gmail.com wrote:

 On Mon, Nov 8, 2010 at 3:14 PM, Skipper Seabold jsseab...@gmail.com
 wrote:
  I am doing some optimizations on random samples.  In a small number of
  cases, the objective is not well-defined for a given sample (it's not
  possible to tell beforehand and hopefully won't happen much in
  practice).  What is the most numpythonic way to handle this?  It
  doesn't look like I can use np.seterrcall in this case (without
  ignoring its actual intent).  Here's a toy example of the method I
  have come up with.
 
  import numpy as np
 
  def reset_seterr(d):
      """
      Helper function to reset FP error-handling to user's original settings
      """
      for action in [i+'='+"'"+d[i]+"'" for i in d]:
          exec(action)
      np.seterr(over=over, divide=divide, invalid=invalid, under=under)
 

 It just occurred to me that this is unsafe.  Better options for
 resetting seterr?



Hey Skipper,

I don't understand why you need your helper function.  Why not just pass the
saved dictionary back to seterr()?  E.g.

saved = np.seterr('raise')
try:
    # Do something dangerous...
    result = whatever...
except Exception:
    # Handle the problems...
    result = better result...
np.seterr(**saved)
return result


Warren





  def log_random_sample(X):
      """
      Toy example to catch a FP error, re-sample, and return objective
      """
      d = np.seterr() # get original values to reset
      np.seterr('raise') # set to raise on fp error in order to catch
      try:
          ret = np.log(X)
          reset_seterr(d)
          return ret
      except:
          lb,ub = -1,1  # includes bad domain to test recursion
          X = np.random.uniform(lb,ub)
          reset_seterr(d)
          return log_random_sample(X)
 
  lb,ub = 0,0
  orig_setting = np.seterr()
  X = np.random.uniform(lb,ub)
  log_random_sample(X)
  assert(orig_setting == np.seterr())
 
  This seems to work, but I'm not sure it's as transparent as it could
  be.  If it is, then maybe it will be useful to others.
 
  Skipper
 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Catching and dealing with floating point errors

2010-11-08 Thread Skipper Seabold
On Mon, Nov 8, 2010 at 3:42 PM, Bruce Southey bsout...@gmail.com wrote:
 On 11/08/2010 02:17 PM, Skipper Seabold wrote:
 On Mon, Nov 8, 2010 at 3:14 PM, Skipper Seaboldjsseab...@gmail.com  wrote:
 I am doing some optimizations on random samples.  In a small number of
 cases, the objective is not well-defined for a given sample (it's not
 possible to tell beforehand and hopefully won't happen much in
 practice).  What is the most numpythonic way to handle this?  It
 doesn't look like I can use np.seterrcall in this case (without
 ignoring its actual intent).  Here's a toy example of the method I
 have come up with.

 import numpy as np

 def reset_seterr(d):
     """
     Helper function to reset FP error-handling to user's original settings
     """
     for action in [i+'='+"'"+d[i]+"'" for i in d]:
         exec(action)
     np.seterr(over=over, divide=divide, invalid=invalid, under=under)

 It just occurred to me that this is unsafe.  Better options for
 resetting seterr?

 def log_random_sample(X):
     """
     Toy example to catch a FP error, re-sample, and return objective
     """
     d = np.seterr() # get original values to reset
     np.seterr('raise') # set to raise on fp error in order to catch
     try:
         ret = np.log(X)
         reset_seterr(d)
         return ret
     except:
         lb,ub = -1,1  # includes bad domain to test recursion
         X = np.random.uniform(lb,ub)
         reset_seterr(d)
         return log_random_sample(X)

 lb,ub = 0,0
 orig_setting = np.seterr()
 X = np.random.uniform(lb,ub)
 log_random_sample(X)
 assert(orig_setting == np.seterr())

 This seems to work, but I'm not sure it's as transparent as it could
 be.  If it is, then maybe it will be useful to others.

 Skipper

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
 What do you mean by 'floating point error'?
 For example, log of zero is not what I would consider a 'floating point
 error'.

 In this case, if you are after a log distribution, then you should be
 ensuring that the lower bound to the np.random.uniform() is always
 greater than zero. That is, if lb <= 0 then you *know* you have a
 problem at the very start.



Just a toy example to get a similar error.  I call x = 0 on purpose here.

 Bruce


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Catching and dealing with floating point errors

2010-11-08 Thread Skipper Seabold
On Mon, Nov 8, 2010 at 3:45 PM, Warren Weckesser
warren.weckes...@enthought.com wrote:


 On Mon, Nov 8, 2010 at 2:17 PM, Skipper Seabold jsseab...@gmail.com wrote:

 On Mon, Nov 8, 2010 at 3:14 PM, Skipper Seabold jsseab...@gmail.com
 wrote:
  I am doing some optimizations on random samples.  In a small number of
  cases, the objective is not well-defined for a given sample (it's not
  possible to tell beforehand and hopefully won't happen much in
  practice).  What is the most numpythonic way to handle this?  It
  doesn't look like I can use np.seterrcall in this case (without
  ignoring its actual intent).  Here's a toy example of the method I
  have come up with.
 
  import numpy as np
 
  def reset_seterr(d):
      """
      Helper function to reset FP error-handling to user's original settings
      """
      for action in [i+'='+"'"+d[i]+"'" for i in d]:
          exec(action)
      np.seterr(over=over, divide=divide, invalid=invalid, under=under)
 

 It just occurred to me that this is unsafe.  Better options for
 resetting seterr?


 Hey Skipper,

 I don't understand why you need your helper function.  Why not just pass the
 saved dictionary back to seterr()?  E.g.

 saved = np.seterr('raise')
 try:
     # Do something dangerous...
     result = whatever...
 except Exception:
     # Handle the problems...
     result = better result...
 np.seterr(**saved)
 return result


Ha.  I knew I was forgetting something.  Thanks.


 Warren




  def log_random_sample(X):
      """
      Toy example to catch a FP error, re-sample, and return objective
      """
     d = np.seterr() # get original values to reset
     np.seterr('raise') # set to raise on fp error in order to catch
     try:
         ret = np.log(X)
         reset_seterr(d)
         return ret
     except:
         lb,ub = -1,1  # includes bad domain to test recursion
         X = np.random.uniform(lb,ub)
         reset_seterr(d)
         return log_random_sample(X)
 
  lb,ub = 0,0
  orig_setting = np.seterr()
  X = np.random.uniform(lb,ub)
  log_random_sample(X)
  assert(orig_setting == np.seterr())
 
  This seems to work, but I'm not sure it's as transparent as it could
  be.  If it is, then maybe it will be useful to others.
 
  Skipper
 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy.genfromtxt converters issue

2010-11-08 Thread Damien Moore
Pierre GM pgmdevlist at gmail.com writes:
 On Nov 6, 2010, at 2:22 PM, Damien Moore wrote:
 
  Hi List,
  
  I'm trying to import csv data as a numpy array using genfromtxt. 
 
[...]
 Please open a ticket so that I don't forget about it. Thx in advance!
 

The ticket is here: http://projects.scipy.org/numpy/ticket/1665


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Catching and dealing with floating point errors

2010-11-08 Thread Bruce Southey
On 11/08/2010 02:52 PM, Skipper Seabold wrote:
 On Mon, Nov 8, 2010 at 3:42 PM, Bruce Southeybsout...@gmail.com  wrote:
 On 11/08/2010 02:17 PM, Skipper Seabold wrote:
 On Mon, Nov 8, 2010 at 3:14 PM, Skipper Seaboldjsseab...@gmail.com
 wrote:
 I am doing some optimizations on random samples.  In a small number of
 cases, the objective is not well-defined for a given sample (it's not
 possible to tell beforehand and hopefully won't happen much in
 practice).  What is the most numpythonic way to handle this?  It
 doesn't look like I can use np.seterrcall in this case (without
 ignoring its actual intent).  Here's a toy example of the method I
 have come up with.

 import numpy as np

 def reset_seterr(d):
     """
     Helper function to reset FP error-handling to user's original settings
     """
     for action in [i+'='+"'"+d[i]+"'" for i in d]:
         exec(action)
     np.seterr(over=over, divide=divide, invalid=invalid, under=under)

 It just occurred to me that this is unsafe.  Better options for
 resetting seterr?

 def log_random_sample(X):
     """
     Toy example to catch a FP error, re-sample, and return objective
     """
     d = np.seterr() # get original values to reset
     np.seterr('raise') # set to raise on fp error in order to catch
     try:
         ret = np.log(X)
         reset_seterr(d)
         return ret
     except:
         lb,ub = -1,1  # includes bad domain to test recursion
         X = np.random.uniform(lb,ub)
         reset_seterr(d)
         return log_random_sample(X)

 lb,ub = 0,0
 orig_setting = np.seterr()
 X = np.random.uniform(lb,ub)
 log_random_sample(X)
 assert(orig_setting == np.seterr())

 This seems to work, but I'm not sure it's as transparent as it could
 be.  If it is, then maybe it will be useful to others.

 Skipper

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
 What do you mean by 'floating point error'?
 For example, log of zero is not what I would consider a 'floating point
 error'.

 In this case, if you are after a log distribution, then you should be
 ensuring that the lower bound to the np.random.uniform() is always
 greater than zero. That is, if lb <= 0 then you *know* you have a
 problem at the very start.


 Just a toy example to get a similar error.  I call x = 0 on purpose here.


I was aware of that.

Messing about with warnings is not what I consider Pythonic, because you 
should be fixing the source of the problem. In this case, your sampling 
must be greater than zero. If you are sampling from a distribution, then 
that should be built into the call; otherwise your samples will not be 
from the requested distribution.


Bruce
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Catching and dealing with floating point errors

2010-11-08 Thread Skipper Seabold
On Mon, Nov 8, 2010 at 4:04 PM, Bruce Southey bsout...@gmail.com wrote:
 On 11/08/2010 02:52 PM, Skipper Seabold wrote:
 On Mon, Nov 8, 2010 at 3:42 PM, Bruce Southeybsout...@gmail.com  wrote:
 On 11/08/2010 02:17 PM, Skipper Seabold wrote:
 On Mon, Nov 8, 2010 at 3:14 PM, Skipper Seaboldjsseab...@gmail.com    
 wrote:
 I am doing some optimizations on random samples.  In a small number of
 cases, the objective is not well-defined for a given sample (it's not
 possible to tell beforehand and hopefully won't happen much in
 practice).  What is the most numpythonic way to handle this?  It
 doesn't look like I can use np.seterrcall in this case (without
 ignoring its actual intent).  Here's a toy example of the method I
 have come up with.

 import numpy as np

 def reset_seterr(d):
      """
      Helper function to reset FP error-handling to user's original settings
      """
      for action in [i+'='+"'"+d[i]+"'" for i in d]:
          exec(action)
      np.seterr(over=over, divide=divide, invalid=invalid, under=under)

 It just occurred to me that this is unsafe.  Better options for
 resetting seterr?

 def log_random_sample(X):
      """
      Toy example to catch a FP error, re-sample, and return objective
      """
      d = np.seterr() # get original values to reset
      np.seterr('raise') # set to raise on fp error in order to catch
      try:
          ret = np.log(X)
          reset_seterr(d)
          return ret
      except:
          lb,ub = -1,1  # includes bad domain to test recursion
          X = np.random.uniform(lb,ub)
          reset_seterr(d)
          return log_random_sample(X)

 lb,ub = 0,0
 orig_setting = np.seterr()
 X = np.random.uniform(lb,ub)
 log_random_sample(X)
 assert(orig_setting == np.seterr())

 This seems to work, but I'm not sure it's as transparent as it could
 be.  If it is, then maybe it will be useful to others.

 Skipper

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
 What do you mean by 'floating point error'?
 For example, log of zero is not what I would consider a 'floating point
 error'.

 In this case, if you are after a log distribution, then you should be
 ensuring that the lower bound to the np.random.uniform() is always
 greater than zero. That is, if lb <= 0 then you *know* you have a
 problem at the very start.


 Just a toy example to get a similar error.  I call x = 0 on purpose here.


 I was aware of that.


Ah, ok.  I don't mean to be short, just busy.

 Messing about with warnings is not what I consider Pythonic, because you
 should be fixing the source of the problem. In this case, your sampling
 must be greater than zero. If you are sampling from a distribution, then
 that should be built into the call; otherwise your samples will not be
 from the requested distribution.


Basically, it looks like a small sample issue with an estimator.  I'm
not sure about the theory yet (or the underlying numerical issue), but
I've confirmed that the solution also breaks down using several
different solvers with a constrained version of the primal in GAMS to
ensure that it's not just a domain error or numerical
underflow/overflow.  So at this point I just want to catch the warning
and resample.  Am going to explore the bad cases further at a later
time.

Skipper
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Developmental version numbering with git

2010-11-08 Thread Matthew Brett
Hi,

 Since the change to git, the numpy version in setup.py is '2.0.0.dev'
 regardless, because the prior numbering was determined by svn.

 Is there a plan to add some numbering system to the numpy developmental
 version?

 Regardless of the answer, the 'numpy/numpy/version.py' will need to be
 changed because of the reference to the svn naming.

In case it's useful, we (nipy) went for a scheme where the version
number stays as '2.0.0.dev', but we keep a record of which git commit
hash we are on - described here:

http://web.archiveorange.com/archive/v/AW2a1CzoOZtfBfNav9hd
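
A minimal sketch of the idea (a hypothetical helper, not the actual nipy
code):

import subprocess

def git_revision():
    # Ask git for the current commit hash; degrade gracefully outside a repo.
    try:
        proc = subprocess.Popen(['git', 'rev-parse', 'HEAD'],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, _ = proc.communicate()
        return out.strip() or 'unknown'
    except OSError:
        return 'unknown'

version = '2.0.0.dev'
full_version = version + '-' + git_revision()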

I can post more details of the implementation if it's of any interest,

Best,

Matthew
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Catching and dealing with floating point errors

2010-11-08 Thread Warren Weckesser
On Mon, Nov 8, 2010 at 2:52 PM, Skipper Seabold jsseab...@gmail.com wrote:

 On Mon, Nov 8, 2010 at 3:45 PM, Warren Weckesser
 warren.weckes...@enthought.com wrote:
 
 
  On Mon, Nov 8, 2010 at 2:17 PM, Skipper Seabold jsseab...@gmail.com
 wrote:
 
  On Mon, Nov 8, 2010 at 3:14 PM, Skipper Seabold jsseab...@gmail.com
  wrote:
   I am doing some optimizations on random samples.  In a small number of
   cases, the objective is not well-defined for a given sample (it's not
   possible to tell beforehand and hopefully won't happen much in
   practice).  What is the most numpythonic way to handle this?  It
   doesn't look like I can use np.seterrcall in this case (without
   ignoring its actual intent).  Here's a toy example of the method I
   have come up with.
  
   import numpy as np
  
   def reset_seterr(d):
       """
       Helper function to reset FP error-handling to user's original settings
       """
       for action in [i+'='+"'"+d[i]+"'" for i in d]:
           exec(action)
       np.seterr(over=over, divide=divide, invalid=invalid, under=under)
  
 
  It just occurred to me that this is unsafe.  Better options for
  resetting seterr?
 
 
  Hey Skipper,
 
  I don't understand why you need your helper function.  Why not just pass
 the
  saved dictionary back to seterr()?  E.g.
 
  saved = np.seterr('raise')
  try:
      # Do something dangerous...
      result = whatever...
  except Exception:
      # Handle the problems...
      result = better result...
  np.seterr(**saved)
  return result
 

 Ha.  I knew I was forgetting something.  Thanks.


Your question reminded me to file an enhancement request that I've been
meaning to suggest for a while:
http://projects.scipy.org/numpy/ticket/1667


Warren
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Solving Ax = b: inverse vs cholesky factorization

2010-11-08 Thread Nathaniel Smith
On Mon, Nov 8, 2010 at 12:00 PM, Joon groups.and.li...@gmail.com wrote:
 Another question is, is it better to do cho_solve(cho_factor(A), b) than
 solve(A, b)?

If A is symmetric positive definite, then using the cholesky
decomposition should be somewhat faster than using a more general
solver. (Because, basically, the cholesky decomposition routine
knows that your matrix is symmetric, so it only has to look at
half of it, while a generic solver routine has to look at your whole
matrix regardless). And indeed, that seems to be the case in numpy:

In [18]: A = np.random.normal(size=(500, 500))
In [19]: A = np.dot(A, A.T)
In [20]: b = np.random.normal(size=(500, 1))

In [21]: %timeit solve(A, b)
1 loops, best of 3: 147 ms per loop

In [22]: %timeit cho_solve(cho_factor(A), b)
10 loops, best of 3: 92.6 ms per loop

Also of note -- going via the inverse is much slower:

In [23]: %timeit dot(inv(A), b)
1 loops, best of 3: 419 ms per loop

(I didn't check what happens if you have to solve the same set of
equations many times with A fixed and b varying, but I would still use
cholesky for that case. Also, note that for solve(), cho_solve(),
etc., b can be a matrix, which lets you solve for many different b
vectors simultaneously.)
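
For the A-fixed, b-varying case, a sketch of factoring once and reusing the
factor (assuming scipy.linalg, which is where cho_factor/cho_solve live):

import numpy as np
from scipy.linalg import cho_factor, cho_solve

A = np.random.normal(size=(500, 500))
A = np.dot(A, A.T)                          # symmetric positive definite
bs = [np.random.normal(size=(500, 1)) for _ in range(10)]

c_and_low = cho_factor(A)                   # O(n^3) factorization, paid once
xs = [cho_solve(c_and_low, b) for b in bs]  # each solve is only O(n^2)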

-- Nathaniel
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Solving Ax = b: inverse vs cholesky factorization

2010-11-08 Thread Anne Archibald
On 8 November 2010 14:38, Joon groups.and.li...@gmail.com wrote:

 Oh I see. So I guess in the invA = solve(A, I) and then x = dot(invA, b) case,
 there are more places where numerical errors occur than in the x = solve(A,
 b) case.

That's the heart of the matter, but one can be more specific. You can
think of a matrix by how it acts on vectors. Taking the inverse
amounts to solving Ax=b for all the standard basis vectors
(0,...,0,1,0,...,0); multiplying by the inverse amounts to expressing
your vector in terms of these, finding where they go, and adding them
together. But it can happen that when you break your vector up like
that, the images of the components are large but almost cancel. This
sort of near-cancellation amplifies numerical errors tremendously. In
comparison, solving directly, if you're using a stable algorithm, is
able to avoid ever constructing these nearly-cancelling combinations
explicitly.

The standard reason for trying to construct an inverse is that you
want to solve equations for many vectors with the same matrix. But
most solution methods are implemented as a matrix factorization
followed by a single cheap operation, so if this is your goal, it's
better to simply keep the matrix factorization around.
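
A sketch of keeping the factorization around for a general (not necessarily
symmetric) matrix, assuming scipy.linalg is available:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.random.normal(size=(300, 300))  # a general square matrix
lu_and_piv = lu_factor(A)              # the expensive O(n^3) step, done once
for _ in range(5):
    b = np.random.normal(size=300)
    x = lu_solve(lu_and_piv, b)        # cheap O(n^2) solve per right-hand side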

Anne
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] updated 1.5.1 release schedule

2010-11-08 Thread Russell E. Owen
In article 
aanlktimfgckbg8cprygukcvwvqzxqycykgexvx_=8...@mail.gmail.com,
 Ralf Gommers ralf.gomm...@googlemail.com wrote:

 On Mon, Nov 8, 2010 at 5:16 AM, Vincent Davis vinc...@vincentdavis.net 
 wrote:
 
 
  On Sun, Nov 7, 2010 at 1:51 AM, Ralf Gommers ralf.gomm...@googlemail.com
  wrote:
 
  Hi,
 
  Since we weren't able to stick to the original schedule for 1.5.1,
  here's a short note about the current status. There are two changes
  that need to go in before RC2:
  https://github.com/numpy/numpy/pull/11
  https://github.com/numpy/numpy/pull/9
  If no one has time for a review I'll commit those to 1.5.x by Tuesday
  and tag RC2. Final release should be one week after that, unless an
  RC3 is necessary.
 
  Since we will have 2 different dmgs for python2.7 (osx10.3 and osx10.5),
  I don't think there is any check in the installer to make sure the right
  python2.7 version is present when installing. The installer only checks
  that python2.7 is present. I think a check should be added. Unless I am
  missing something or there are other suggestions, I would like to get
  this into 1.5.1rc2. I am not sure of the best way to make this check,
  but I think I can come up with a solution. It would also need a useful
  error message.
  Vincent
 
 To let the user know if there's a mismatch may be helpful, but we
 shouldn't prevent installation. In many cases mixing installers will
 just work. If you have a patch it's welcome, but I think it's not
 critical for this release.

I am strongly in favor of such a check, and of refusing to install on a 
mismatched version of Python.

I am concerned about hidden problems that emerge later -- for 
instance, when a user bundles an application and somebody else tries to 
use it.

That said, I don't know how to perform such a test.

-- Russell

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Solving Ax = b: inverse vs cholesky factorization

2010-11-08 Thread Joon
Thanks, Nathaniel. Your reply was very helpful.

-Joon

On Mon, 08 Nov 2010 15:47:22 -0600, Nathaniel Smith n...@pobox.com wrote:

 On Mon, Nov 8, 2010 at 12:00 PM, Joon groups.and.li...@gmail.com wrote:
 Another question is, is it better to do cho_solve(cho_factor(A), b) than
 solve(A, b)?

 If A is symmetric positive definite, then using the cholesky
 decomposition should be somewhat faster than using a more general
 solver. (Because, basically, the cholesky decomposition routine
 knows that your matrix is symmetric, so it only has to look at
 half of it, while a generic solver routine has to look at your whole
 matrix regardless). And indeed, that seems to be the case in numpy:

 In [18]: A = np.random.normal(size=(500, 500))
 In [19]: A = np.dot(A, A.T)
 In [20]: b = np.random.normal(size=(500, 1))

 In [21]: %timeit solve(A, b)
 1 loops, best of 3: 147 ms per loop

 In [22]: %timeit cho_solve(cho_factor(A), b)
 10 loops, best of 3: 92.6 ms per loop

 Also of note -- going via the inverse is much slower:

 In [23]: %timeit dot(inv(A), b)
 1 loops, best of 3: 419 ms per loop

 (I didn't check what happens if you have to solve the same set of
 equations many times with A fixed and b varying, but I would still use
 cholesky for that case. Also, note that for solve(), cho_solve(),
 etc., b can be a matrix, which lets you solve for many different b
 vectors simultaneously.)

 -- Nathaniel
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


--
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] LapackError:non-native byte order

2010-11-08 Thread braingateway
Matthew Brett wrote:
 Hi,

 On Mon, Nov 8, 2010 at 10:34 AM, Pauli Virtanen p...@iki.fi wrote:
  Mon, 08 Nov 2010 19:31:31 +0100, Pauli Virtanen wrote:
   ma, 2010-11-08 kello 18:56 +0100, LittleBigBrain kirjoitti:
    In my system '<' is the native byte-order, but unless I change the
    byte-order label to '=', it won't work in the linalg sub-module,
    although it works OK in others. I am not sure whether this is
    expected behavior or a bug?
    >>> import sys
    >>> sys.byteorder
    'little'
    >>> a.dtype.byteorder
    '<'
    >>> b.dtype.byteorder
    '<'
   The error is here: it's not possible to create such dtypes via any
   Numpy methods -- the '<' (or '>') is always normalized to '='. Numpy
   and several other modules consequently assume this normalization.

   Where do `a` and `b` come from?

  Ok, `x.newbyteorder('<')` seems to do this. Now I'm unsure how things
  are supposed to work.

 Yes - it is puzzling that ``x.newbyteorder('<')`` makes arrays that
 are confusing to numpy.  If numpy generally always normalizes the
 system byte order to '=', then should that not also be true of
 ``newbyteorder``?

 See you,

 Matthew

I agree, and it is not documented, or at least hard to find. I think most
users will assume '<' is the same as '=' on a normal little-endian system.

Thanks for the reply,

LittleBigBrain

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Anyone with Core i7 and Ubuntu 10.04?

2010-11-08 Thread David
Hi Ian,

On 11/08/2010 11:18 PM, Ian Goodfellow wrote:
 I'm wondering if anyone here has successfully built numpy with ATLAS
 and a Core i7 CPU on Ubuntu 10.04. If so, I could really use your
 help. I've been trying since August (see my earlier messages to this
 list) to get numpy running at full speed on my machine with no luck.

Please tell us what error you got - saying that something did not 
work is really not useful for helping you. You need to say exactly what 
fails, and which steps you followed before that failure.

 The Ubuntu packages don't seem very fast, and numpy won't use the
 version of ATLAS that I compiled. It's pretty sad; anything that
 involves a lot of BLAS calls runs slower on this 2.8 GHz Core i7 than
 on an older 2.66 GHz Core 2 Quad I use at work.

One simple solution is to upgrade to Ubuntu 10.10, which finally has a 
working atlas package, thanks to the work of the Debian packagers. There 
is even a version compiled for the i7.

cheers,

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Anyone with Core i7 and Ubuntu 10.04?

2010-11-08 Thread David Warde-Farley
On 2010-11-08, at 8:52 PM, David wrote:

 Please tell us what error you got - saying that something did not 
 working is really not useful to help you. You need to say exactly what 
 fails, and which steps you followed before that failure.

I think what he means is that it's very slow: there's no discernible error, 
but dot-multiplies don't seem to be using BLAS.
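
One rough way to check, as an illustrative sketch: np.show_config() reports
which BLAS/LAPACK numpy was built against, and timing a large matrix product
makes the difference obvious.

import time
import numpy as np

np.show_config()                 # which BLAS/LAPACK numpy found at build time
a = np.random.randn(1000, 1000)
t0 = time.time()
np.dot(a, a)
print(time.time() - t0)  # roughly 0.1 s with a tuned BLAS; many times slower without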

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Anyone with Core i7 and Ubuntu 10.04?

2010-11-08 Thread Wes McKinney
On Mon, Nov 8, 2010 at 11:33 PM, David Warde-Farley
warde...@iro.umontreal.ca wrote:
 On 2010-11-08, at 8:52 PM, David wrote:

 Please tell us what error you got - saying that something did not
 working is really not useful to help you. You need to say exactly what
 fails, and which steps you followed before that failure.

 I think what he means is that it's very slow, there's no discernable error 
 but dot-multiplies don't seem to be using BLAS.

 David
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


Somewhat related topic: anyone know the status of EPD (Enthought
distribution) releases on i7 processors as far as this goes?
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Catching and dealing with floating point errors

2010-11-08 Thread Warren Weckesser
On Mon, Nov 8, 2010 at 3:20 PM, Warren Weckesser 
warren.weckes...@enthought.com wrote:



 On Mon, Nov 8, 2010 at 2:52 PM, Skipper Seabold jsseab...@gmail.comwrote:

 On Mon, Nov 8, 2010 at 3:45 PM, Warren Weckesser
 warren.weckes...@enthought.com wrote:
 
 
  On Mon, Nov 8, 2010 at 2:17 PM, Skipper Seabold jsseab...@gmail.com
 wrote:
 
  On Mon, Nov 8, 2010 at 3:14 PM, Skipper Seabold jsseab...@gmail.com
  wrote:
   I am doing some optimizations on random samples.  In a small number
 of
   cases, the objective is not well-defined for a given sample (it's not
   possible to tell beforehand and hopefully won't happen much in
   practice).  What is the most numpythonic way to handle this?  It
   doesn't look like I can use np.seterrcall in this case (without
   ignoring its actual intent).  Here's a toy example of the method I
   have come up with.
  
   import numpy as np
  
   def reset_seterr(d):
       """
       Helper function to reset FP error-handling to user's original settings
       """
       for action in [i+'='+"'"+d[i]+"'" for i in d]:
           exec(action)
       np.seterr(over=over, divide=divide, invalid=invalid, under=under)
  
 
  It just occurred to me that this is unsafe.  Better options for
  resetting seterr?
 
 
  Hey Skipper,
 
  I don't understand why you need your helper function.  Why not just pass
 the
  saved dictionary back to seterr()?  E.g.
 
  saved = np.seterr('raise')
  try:
      # Do something dangerous...
      result = whatever...
  except Exception:
      # Handle the problems...
      result = better result...
  np.seterr(**saved)
  return result
 

 Ha.  I knew I was forgetting something.  Thanks.


 Your question reminded me to file an enhancement request that I've been
 meaning to suggest for a while:
 http://projects.scipy.org/numpy/ticket/1667



I just discovered that a context manager for the error settings already
exists: numpy.errstate.  So a nicer way to write that code is:

with np.errstate(all='raise'):
    try:
        # Do something dangerous...
        result = whatever...
    except Exception:
        # Handle the problems...
        result = better result...
return result


Warren
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion