[Numpy-discussion] sorting ndarray

2008-04-04 Thread harryos
i have a 1 dim numpy array
D=array( [[ 3. ,  2. ,  1.  , 4. ,  5.  , 1.5,  2.2]] )
i need to get this sorted in descending order and then access the
elements .
D.sort() will make D as [[ 1.   1.5  2.   2.2  3.   4.   5. ]]
how will i reverse it?
or is there a simpler way?
harry
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] sorting ndarray

2008-04-04 Thread Eric Firing
harryos wrote:
 i have a 1 dim numpy array
 D=array( [[ 3. ,  2. ,  1.  , 4. ,  5.  , 1.5,  2.2]] )

This is a 2-D array, not a 1-D array.


 i need to get this sorted in descending order and then access the
 elements .
 D.sort() will make D as [[ 1.   1.5  2.   2.2  3.   4.   5. ]]
 how will i reverse it?

In [8]:D[:,::-1]
Out[8]:array([[ 5. ,  4. ,  3. ,  2.2,  2. ,  1.5,  1. ]])
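
If the array really were 1-D, the same idea would look like this (a quick
sketch, typed without testing):

import numpy as np

d = np.array([3., 2., 1., 4., 5., 1.5, 2.2])   # a genuinely 1-D array
d.sort()                   # ascending, in place
descending = d[::-1]       # reversed view, no copy

# or, leaving d untouched:
descending = np.sort(d)[::-1]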



 or is there a simpler way?
 harry
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] multiply array

2008-04-04 Thread wilson
hello
i have two arrays
#of shape (1,6)
eval=array([[3.,3.2,1.,1.1,5.,0.5]])

#of shape (6,8)
egimgs=array([
 [3.,2.,1.,4.,5.,1.5,2.5,1.1],
 [1.1,3.,.5,.2,.1,4.3,3.2,1.2],
 [4.,3.,2.,6.,1.,4.,5.1,2.4],
 [3.2,1.3,2.2,4.4,1.1,2.1,3.3,2.4],
 [.4,.2,.1,.5,.3,.2,1.2,4.2],
 [5.2,2.1,4.3,5.5,2.2,1.1,1.4,3.2]
 ]
)

i need to multiply the first row of egimgs with the first element of
eval , second row of egimgs with second element of eval ..etc ...and
so on..finally i should get a result array of same shape as egimgs.
how should i proceed ? using loops seems inefficient. Can someone
suggest a better way?

thanks
W
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Final push for NumPy 1.0.5 (I need your help!)

2008-04-04 Thread Jarrod Millman
On Wed, Apr 2, 2008 at 9:36 PM, Jarrod Millman [EMAIL PROTECTED] wrote:
  Again, if you have any time to help close tickets or improve
  documentation, please take the time over the next few days to do so.
  And thank you to everyone who has been working to get this release
  ready!

Since I sent my email last night another 5+ tickets have been closed.
If we keep going at this rate, we should be able to release 1.0.5 next
Friday (4/11) with every ticket closed.  Specifically, thanks to
Travis Oliphant, David Huard, and Stefan van der Walt for bug
squashing.

If anyone has some time, could you please check David's fix for the
ticket "loadtxt fails with record arrays":
http://projects.scipy.org/scipy/numpy/ticket/623
His fix looks correct to me (and even includes test!!); if someone
else can confirm this, David or I will be able to close this ticket.
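
For a quick manual check, something along these lines should exercise the
fix (just a sketch of the idea; the test attached to the ticket is the
authoritative one):

import numpy as np
from StringIO import StringIO

f = StringIO("1 2\n3 4")
a = np.loadtxt(f, dtype=[('x', np.int32), ('y', np.int32)])
print a['x'], a['y']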

There are now only 21 remaining open tickets.  Please take a look over
the open tickets and see if there is anything you can quickly close:
http://scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone=1.0.5

Also please take a look at the current release notes for 1.0.5 and let
me know if you see anything missing or incorrect:
http://scipy.org/scipy/numpy/milestone/1.0.5

This release promises to be a very polished and stable release, which
will allow us to quickly move on to developing the new 1.1 series.

Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] multiply array

2008-04-04 Thread wilson

 #of shape (1,6)
 eval=array([[3.,3.2,1.,1.1,5.,0.5]])


eval.shape=(-1,)

please note the correction.. i need to multiply the first row of egimgs with
3.0, the second row with 3.2, ... the last (sixth) row with 0.5. For that
purpose i made the above into a 1 dimensional array.
A for loop seems inefficient in the case of big arrays
W
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] NumPy/SciPy sprint in Berkeley next week

2008-04-04 Thread Jarrod Millman
Hello,

I have just organized a little NumPy/SciPy mini-sprint for next week.
David Cournapeau is visiting me for a week and several other
developers (Eric Jones, Robert Kern, Peter Wang, Jonathan Taylor, and
Karl Young) will be stopping by during the week to work with the
Berkeley team (Fernando Perez, Chris Burns, Tom Waite, and I).  There
may be a few others who will join us as well.

I am still working on a preliminary list of topics that I hope for us
to work on and I will send it out to the list before Monday.  Among
other things, I will be trying to push NumPy 1.0.5 out the door.

I will send out another announcement as the date draws near, but I
hope that some of you will be able to join us next week at
irc.freenode.net (channel scipy).  Please consider next week an
official Bug/Doc week.  Once we get NumPy 1.0.5 released, I will start
focusing on getting SciPy 0.7.0 released--which means we need to start
squashing SciPy bugs as relentlessly as we have been squashing NumPy's
bugs during the last month.

Thanks to everyone who is working so hard to make NumPy/SciPy the best
foundation for scientific and numerical computing.

Cheers,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Efficient reading of binary data

2008-04-04 Thread Sebastian Haase
On Fri, Apr 4, 2008 at 2:14 AM, Nicolas Bigaouette
[EMAIL PROTECTED] wrote:
 2008/4/3, Robert Kern [EMAIL PROTECTED]:

  On Thu, Apr 3, 2008 at 6:53 PM, Nicolas Bigaouette
  [EMAIL PROTECTED] wrote:
   Thanx for the fast response Robert ;)
  
   I changed my code to use the slice:
E = data[6::9]
It is indeed faster and uses less memory. Great.
  
   Thanx for the endianness! I knew there was something like this ;) I
   suspect that, in '>f8', 'f' means float and 8 means 8 bytes?
 
 
  Yes, and the '>' means big-endian. '<' is little-endian, and '=' is
  native-endian.

 I just tested it with a big-endian machine; it does indeed work great :)


   From some benchmarks, I see that the slowest thing is disk access. It
 can
   slow the displaying of data from around 1sec (when data is in os cache
 or
   buffer) to 8sec.
  
   So the next step would be to only read the needed data from the binary
   file... Is it possible to read from a file with a slice? So instead of:
  
   data = numpy.fromfile(file=f, dtype=float_dtype, count=9*Stot)
   E = data[6::9]
   maybe something like:
   E = numpy.fromfile(file=f, dtype=float_dtype, count=9*Stot, slice=6::9)
 
 
  Instead of reading using fromfile(), you can try memory-mapping the array.
 
from numpy import memmap
E = memmap(f, dtype=float_dtype, mode='r')[6::9]
 
  That may or may not help. At least, it should decrease the latency
  before you start pulling out frames.
 
 
 It did not work out of the box (memmap() takes a filename and not a file
 handle), but anyway, it's getting late.

Hi,
Coincidentally, I'm trying to do exactly the same thing right now.

What is the best way of memmapping into a file that is already open !?

I have to read some text (header info) off the beginning of the file
before I know where the data actually starts.
I could of course get the position at that point (f.tell()), close
the file, and reopen using memmap.
However this doesn't sound optimal to me 

Any hints ?
Could numpy's memmap be changed to also accept file-objects, or is there
a rule that memmap always has to have access to the entire file ?
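
Right now the best I can come up with is to note the position, close, and
reopen by name (a rough sketch, with 'data.bin' standing in for my real file
and assuming memmap's offset argument does what its name suggests):

import numpy as N

f = open('data.bin', 'rb')
header = f.readline()        # whatever text header precedes the data
data_start = f.tell()
f.close()

E = N.memmap('data.bin', dtype='>f8', mode='r', offset=data_start)[6::9]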


Thanks,
Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Efficient reading of binary data

2008-04-04 Thread Sebastian Haase
On Fri, Apr 4, 2008 at 11:33 AM, Jarrod Millman [EMAIL PROTECTED] wrote:
 On Fri, Apr 4, 2008 at 1:50 AM, Sebastian Haase [EMAIL PROTECTED] wrote:
Hi,
    Coincidentally, I'm trying to do exactly the same thing right now.
  
What is the best way of memmapping into a file that is already open !?
  
I have to read some text (header info) off the beginning of the file
before I know where the data actually starts.
  I could of course get the position at that point (f.tell()), close
  the file, and reopen using memmap.
However this doesn't sound optimal to me 
  
Any hints ?
  Could numpy's memmap be changed to also accept file-objects, or is there
  a rule that memmap always has to have access to the entire file ?

  I am getting a little tired, so this may be incorrect.  But I believe
  Stefan modified memmaps to allow them to be created from file-like
  object:  http://projects.scipy.org/scipy/numpy/changeset/4856

  Are you running a released version of NumPy or the trunk?  If you
  aren't using the trunk, could you give it a try?  It would be good to
  have it tested before the 1.0.5 release.

Hi  Jarrod,

Thanks for the reply. Indeed I'm running only:
>>> N.__version__
'1.0.4.dev4312'

I hope I find time to try the new feature.
To clarify: if the file is already open, and the current position
(f.tell()) is somewhere in the middle,
would the memmap see the file from there ?  Could a normal file
access and a concurrent memmap into that same file step on each
other's feet ?

Thanks,
Sebastian Haase
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] problem with float64's str()

2008-04-04 Thread Stéfan van der Walt
Hi Will

On 03/04/2008, Will Lee [EMAIL PROTECTED] wrote:
 I seem to have problem with floating point printing with the latest numpy,
 python 2.5.2, gcc 4.1.4, and 64-bit linux:

 In [24]: print str(0.0012)
 0.0012

 In [25]: a = numpy.array([0.0012])

  In [26]: print str(a[0])
 0.0011999

 In [27]: print numpy.__version__
 1.0.5.dev4950

 It seems like the str() behavior for float64 in the latest numpy is
 different from Python's default behavior (and previous numpy's behavior).


  As I have numerous doc tests, this seems to make many of them fail.
 Should the float64's str() behavior be consistent with what's described in
 http://docs.python.org/tut/node16.html?  Is there any way
 to get around it so I can get to the default Python behavior?

As a workaround, you can simply remove 'str' from your print statement:

In [17]: print x[0]
0.0012

I am not sure why str(x[0]) is equivalent to repr(0.0012).
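
A related trick for places where you really do need an explicit str(), for
example inside a formatted message, is to go through a Python float first
(a sketch, not tested against the development build you are running):

In [18]: str(float(x[0]))
Out[18]: '0.0012'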

Regards
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Efficient reading of binary data

2008-04-04 Thread Jarrod Millman
On Fri, Apr 4, 2008 at 1:50 AM, Sebastian Haase [EMAIL PROTECTED] wrote:
  Hi,
  Coincidentally, I'm trying to do exactly the same thing right now.

  What is the best way of memmapping into a file that is already open !?

  I have to read some text (header info) off the beginning of the file
  before I know where the data actually starts.
  I could of course get the position at that point (f.tell()), close
  the file, and reopen using memmap.
  However this doesn't sound optimal to me 

  Any hints ?
  Could numpy's memmap be changed to also accept file-objects, or is there
  a rule that memmap always has to have access to the entire file ?

I am getting a little tired, so this may be incorrect.  But I believe
Stefan modified memmaps to allow them to be created from file-like
object:  http://projects.scipy.org/scipy/numpy/changeset/4856

Are you running a released version of NumPy or the trunk?  If you
aren't using the trunk, could you give it a try?  It would be good to
have it tested before the 1.0.5 release.

Cheers,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] multiply array

2008-04-04 Thread Nadav Horesh

result = (egimgs.T * eval.flat).T

or, in place

E = egimgs.T
E *= eval.flat

(egimgs would be updated)
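
A broadcasting variant of the same thing, avoiding the transposes (a sketch
along the same lines):

result = egimgs * eval.reshape(-1, 1)   # column vector scales each row

or, in place:

egimgs *= eval.reshape(-1, 1)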

  Nadav.

-----Original Message-----
From: [EMAIL PROTECTED] on behalf of wilson
Sent: Fri 04-April-08 08:58
To: numpy-discussion@scipy.org
Subject: [Numpy-discussion] multiply array
 
hello
i have two arrays
#of shape (1,6)
eval=array([[3.,3.2,1.,1.1,5.,0.5]])

#of shape (6,8)
egimgs=array([
 [3.,2.,1.,4.,5.,1.5,2.5,1.1],
 [1.1,3.,.5,.2,.1,4.3,3.2,1.2],
 [4.,3.,2.,6.,1.,4.,5.1,2.4],
 [3.2,1.3,2.2,4.4,1.1,2.1,3.3,2.4],
 [.4,.2,.1,.5,.3,.2,1.2,4.2],
 [5.2,2.1,4.3,5.5,2.2,1.1,1.4,3.2]
 ]
)

i need to multiply the first row of egimgs with the first element of
eval , second row of egimgs with second element of eval ..etc ...and
so on..finally i should get a result array of same shape as egimgs.
how should i proceed ? using loops seems inefficient. Can someone
suggest a better way?

thanks
W
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Final push for NumPy 1.0.5 (I need your help!)

2008-04-04 Thread Anne Archibald
On 04/04/2008, Jarrod Millman [EMAIL PROTECTED] wrote:

 Since I sent my email last night another 5+ tickets have been closed.
  If we keep going at this rate, we should be able to release 1.0.5 next
  Friday (4/11) with every ticket closed.  Specifically, thanks to
  Travis Oliphant, David Huard, and Stefan van der Walt for bug
  squashing.

  If anyone has some time, could you please check David's fix for the
  ticket "loadtxt fails with record arrays":
  http://projects.scipy.org/scipy/numpy/ticket/623
  His fix looks correct to me (and even includes test!!); if someone
  else can confirm this, David or I will be able to close this ticket.

  There are now only 21 remaining open tickets.  Please take a look over

 the open tickets and see if there is anything you can quickly close:
  
 http://scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone=1.0.5

I've submitted patches for two (matrix_power and condition number
functions) but I don't think I can actually manipulate the bug status
in any way.

Anne
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Travis E. Oliphant

Hi all,

Last night I put together some simple financial functions based on the 
basic ones available in Excel (and on a financial calculator).  It seems 
to me that NumPy ought to have these basic functions.

There may be some disagreement about what to call them and what the 
interface should be.  I've stuck with the Excel standard names and 
functionality because most of the people that will use these in the 
future, I expect will have seen them from Excel and it minimizes the 
impedance mismatch to have them be similarly named.   The interface is 
also similar to the IMSL libraries. 

However, if clearly better interfaces can be discovered, then we could 
change it.   For now, the functions are not imported into the numpy 
namespace but live in

numpy.lib.financial

I could see a future scipy module containing much, much more.
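
To give a flavour of the intended usage (the exact names and argument order
below are illustrative only, following the Excel convention, and may still
change):

from numpy.lib import financial

# monthly payment on a 20-year loan of 100,000 at 7% annual interest,
# assuming an Excel-style pmt(rate, nper, pv) signature
payment = financial.pmt(0.07 / 12, 20 * 12, 100000)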

Comments and improvement suggestions welcome.   We are a week away from 
release of NumPy 1.0.5, and hopefully we can agree before then. 

Best regards,


-Travis O.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] problem with float64's str()

2008-04-04 Thread Dag Sverre Seljebotn
Bruce Southey wrote:
 Hi,
 This topic has come up many times and the only problem is the lack of 
 understanding how computers store numbers and computer numerical precision.

 The NumPy output is consistent with Python on my x86_64 linux system 
 with Python 2.5.1:
   a=0.0012
   a
 0.0011999
   
Wasn't this discussion about the repr vs. str functions?

>>> repr(0.0012)
'0.0011999'
>>> str(0.0012)
'0.0012'



Dag Sverre
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Final push for NumPy 1.0.5 (I need your help!)

2008-04-04 Thread Travis E. Oliphant
Anne Archibald wrote:
 On 04/04/2008, Jarrod Millman [EMAIL PROTECTED] wrote:

   
 Since I sent my email last night another 5+ tickets have been closed.
  If we keep going at this rate, we should be able to release 1.0.5 next
  Friday (4/11) with every ticket closed.  Specifically, thanks to
  Travis Oliphant, David Huard, and Stefan van der Walt for bug
  squashing.

  If anyone has some time, could you please check David's fix for the
  ticket "loadtxt fails with record arrays":
  http://projects.scipy.org/scipy/numpy/ticket/623
  His fix looks correct to me (and even includes test!!); if someone
  else can confirm this, David or I will be able to close this ticket.

  There are now only 21 remaining open tickets.  Please take a look over

 the open tickets and see if there is anything you can quickly close:
  
 http://scipy.org/scipy/numpy/query?status=new&status=assigned&status=reopened&milestone=1.0.5
 

 I've submitted patches for two (matrix_power and condition number
 functions) but I don't think I can actually manipulate the bug status
 in any way.
   

Hey Anne,

Do you currently have SVN access?   Would you like it?  

I think the SciPy/NumPy sprint would be a good time to clean-up the 
committers list and add new people interested in helping.  

-Travis


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Sebastian Haase
Hi Travis,
This sounds of course very interesting, but could you elaborate on the
reasoning why this should not rather be only in SciPy !?
I thought many people think that numpy was already too crowded and
should concentrate mostly on being a basic array handling facility.

I'm sure you have a good reason for putting these into numpy. Do you
have a list of the new functions ? Wiki page ?

And once more, thanks for all your great work on numpy. I'm now even
trying to make a career out of using numpy for microscopy image
analysis.

 - Sebastian Haase


On Fri, Apr 4, 2008 at 3:49 PM, Travis E. Oliphant
[EMAIL PROTECTED] wrote:

  Hi all,

  Last night I put together some simple financial functions based on the
  basic ones available in Excel (and on a financial calculator).  It seems
  to me that NumPy ought to have these basic functions.

  There may be some disagreement about what to call them and what the
  interface should be.  I've stuck with the Excel standard names and
  functionality because most of the people that will use these in the
  future, I expect will have seen them from Excel and it minimizes the
  impedance mismatch to have them be similarly named.   The interface is
  also similar to the IMSL libraries.

  However, if clearly better interfaces can be discovered, then we could
  change it.   For now, the functions are not imported into the numpy
  namespace but live in

  numpy.lib.financial

  I could see a future scipy module containing much, much more.

  Comments and improvement suggestions welcome.   We are a week away from
  release of NumPy 1.0.5, and hopefully we can agree before then.

  Best regards,


  -Travis O.
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Gael Varoquaux
On Fri, Apr 04, 2008 at 03:58:39PM +0200, Sebastian Haase wrote:
 This sounds of course very interesting, but could you elaborate on the
 reasoning why this should not rather be only in SciPy !?
 I thought many people think that numpy was already too crowded and
 should concentrate mostly on being a basic array handling facility.

+1. Or in a separate package if you don't want to enforce a heavy
C/fortran dependency like with scipy.

Cheers,

Gaël
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Travis E. Oliphant
Sebastian Haase wrote:
 Hi Travis,
 This sounds of course very interesting, but could you elaborate on the
 reasoning why this should not rather be only in SciPy !?
 I thought many people think that numpy was already too crowded and
 should concentrate mostly on being a basic array handling facility.
   

This is a valid concern and I'm interested in hearing feedback. 

There are only two reasons that I can think of right now to keep them in 
NumPy instead of moving them to SciPy. 

1) These are basic functions and a scipy toolkit would contain much more.
2) These are widely used and would make NumPy attractive to a wider 
audience who don't want to install all of SciPy just to get
these functions.

NumPy already contains functions that make it equivalent to a basic 
scientific calculator, should it not also contain the functions that 
make it equivalent to the same calculator when placed in financial mode? 

On the other hand,  package distribution is getting better, and having a 
more modular approach is useful.  I could go both ways on this one.

-Travis O.




___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Travis E. Oliphant
Sebastian Haase wrote:
 Hi Travis,
 This sounds of course very interesting, but could you elaborate on the
 reasoning why this should not rather be only in SciPy !?
 I thought many people think that numpy was already too crowded and
 should concentrate mostly on being a basic array handling facility.

   
I just thought of one more thing that is probably swaying that 
mysterious gut feel :

NumPy is on the laptop in the OLPC project.   I want to give those kids 
access to simple financial calculations for learning about 
time-preference for money and the value of saving.

-Travis O.


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] problem with float64's str()

2008-04-04 Thread Bruce Southey
Hi,
Note that at least under Python 2.5.1:
>>> a=0.0012
>>> a
0.0011999
>>> str(a)
'0.0012'
>>> repr(a)
'0.0011999'

 From Python docs, repr(): 'Return a string containing a printable 
representation of an object' and str(): 'Return a nice string 
representation of the object'. In this case the object is a Python 
float that approximates the base-10 number 0.0012.  This is not 
equivalent to converting the base-10 number 0.0012 to a string, because 
computer numbers are base 2. Thus, repr() converts a Python object into 
a string but says nothing about the numerical precision, whereas str() tries 
to do something about making a 'nice string'.

The only consistent way to get 0.0012 (or any number that can not be 
represented exactly) is to use a Python object that stores 0.0012 
exactly. For example, the Decimal module was introduced in Python 2.4:
http://www.python.org/doc/2.5/lib/module-decimal.html

>>> from decimal import Decimal
>>> str(Decimal('0.0012'))
'0.0012'
>>> float(Decimal('0.0012'))
0.0011999

Also, see PEP 3141 'A Type Hierarchy for Numbers' 
(http://www.python.org/dev/peps/pep-3141/).

Regards
Bruce


Dag Sverre Seljebotn wrote:
 Bruce Southey wrote:
   
 Hi,
 This topic has come up many times and the only problem is the lack of 
 understanding how computers store numbers and computer numerical precision.

 The NumPy output is consistent with Python on my x86_64 linux system 
 with Python 2.5.1:
   a=0.0012
   a
 0.0011999
   
 
 Wasn't this discussion about the repr vs. str functions?

   repr(0.0012)
 '0.0011999'
   str(0.0012)
 '0.0012'



 Dag Sverre
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] problem with float64's str()

2008-04-04 Thread Will Lee
I understand the implication for the floating point comparison and the need
for allclose.  However, I think in a doctest context, this behavior makes
the doc much harder to read.  For example, if you have this in your doctest:

def doSomething(a):
    '''
    >>> print doSomething(0.0011)[0]
    0.0012
    '''
    return numpy.array([a]) + 0.0001

In the current numpy, you'll have to write:

def doSomething(a):
    '''
    >>> print doSomething(0.0011)[0]
    0.0011999
    '''
    return numpy.array([a]) + 0.0001

Using allclose, you'll need to write it like:

def doSomething(a):
    '''
    >>> print numpy.allclose(doSomething(0.0011)[0], 0.0012)
    True
    '''
    return numpy.array([a]) + 0.0001

I don't think either case is ideal.  You can also imagine if you're
printing out a string with a float in it (like 'You got 0.0012 back'), then
you can't really use allclose at all.

Also, it's somewhat odd that the behavior for str is different for a
numpy.float64 and python's float, since otherwise they're mostly the same.

Will

On Fri, Apr 4, 2008 at 8:52 AM, Dag Sverre Seljebotn 
[EMAIL PROTECTED] wrote:

 Bruce Southey wrote:
  Hi,
  This topic has come up many times and the only problem is the lack of
  understanding how computers store numbers and computer numerical
 precision.
 
  The NumPy output is consistent with Python on my x86_64 linux system
  with Python 2.5.1:
a=0.0012
a
  0.0011999
 
 Wasn't this discussion about the repr vs. str functions?

   repr(0.0012)
 '0.0011999'
   str(0.0012)
 '0.0012'



 Dag Sverre
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Francesc Altet
On Friday 04 April 2008, Travis E. Oliphant wrote:
 Sebastian Haase wrote:
  Hi Travis,
  This sounds of course very interesting, but could you elaborate on
  the reasoning why this should not rather be only in SciPy !? I
  thought many people think that numpy was already too crowded and
  should concentrate mostly on being a basic array handling facility.

 I just thought of one more thing that is probably swaying that
 mysterious gut feel :

 NumPy is on the laptop in the OLPC project.   I want to give those
 kids access to simple financial calculations for learning about
 time-preference for money and the value of saving.

Yeah :)  +1 for including these in NumPy.

-- 
0,0   Francesc Altet     http://www.carabos.com/
V   V   Cárabos Coop. V.   Enjoy Data
 -
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Gael Varoquaux
On Fri, Apr 04, 2008 at 09:11:37AM -0500, Travis E. Oliphant wrote:
 There are only two reasons that I can think of right now to keep them in 
 NumPy instead of moving them to SciPy. 

 1) These are basic functions and a scipy toolkit would contain much more.
 2) These are widely used and would make NumPy attractive to a wider 
 audience who don't want to install all of SciPy just to get
 these functions.

 NumPy already contains functions that make it equivalent to a basic 
 scientific calculator, should it not also contain the functions that 
 make it equivalent to the same calculator when placed in financial mode? 

My concern is consistency. It is already pretty hard to define what goes
in scipy and what goes in numpy, and I am not even mentioning code lying
around in pylab. 

I really think numpy should be as thin as possible, so that you can
really say that it is only an array manipulation package. This will also
make it easier to sell as a core package for developers who do not care
about calculator features.

My 2 cents,

Gaël
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Joe Harrington
+1 for simple financial functions in numpy, and congrats that it's on
OLPC!  If we have an FFT in numpy, we should have an internal rate of
return.  Anyone with investments needs that, and that's more people
than those needing an FFT.

I agree that Excel will bring in the most familiarity, but their names
are not always rational.  Please don't propagate irrational names.
Consider looking at what they're called in Matlab and IDL, as code
conversion/familiarity from those communities counts as well.  Maybe
for each function take the most rational name and arg order from those
three sources, with strong preference for Excel unless there is a
clear better way to do it.

--jh--
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Alan G Isaac
On Fri, 4 Apr 2008, Gael Varoquaux apparently wrote:
  I really think numpy should be as thin as possible, so 
  that you can really say that it is only an array 
  manipulation package. This will also make it easier to 
  sell as a core package for developers who do not care 
  about calculator features. 

I'm a user rather than a developer, but I wonder:
is this true?

1. Even as a user, I agree that what I really want from 
NumPy is a core array manipulation package (including 
matrices).  BUT as long as this is the core of NumPy,
will a developer care if other features are available?

2. Even if the answer to 1. is yes, could the 
build/installation process include an option not to 
build/install anything but the core array functionality?

3. It seems to me that pushing things out into SciPy remains 
a problem: a basic NumPy is easy to build on any platform, 
but SciPy still seems to generate many questions.

4. One reason this keeps coming up is that the NumPy/SciPy 
split is rather too crude.  If the split were instead 
something like NumPy/SciPyBasic/SciPyFull/SciPyFull+Kits 
where SciPyBasic contained only pure Python code (no 
extensions), perhaps the desired location would be more 
obvious and some of this recurrent discussion would go away.

fwiw,
Alan Isaac



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Angus McMorland
-1 for any functions added to numpy.

As only an end-user, I realize I have little right to a say in these
sorts of issues, but for whatever it may be worth, I strongly agree
with Gael's viewpoint. We should be aiming towards modular systems for
function distribution, and now that it seems that these are being
gradually worked out (scikits?), I think we should avoid adding
anything to numpy, which should rather be kept to a bare minimum: just
the necessaries for array creation and manipulation. Everything else
should go in the add-on modules which can be installed as required.

This has the benefit that the numpy package stays well-defined and
contained, meaning that end-users know exactly what to expect as
available on a given system. Instead of wondering "Where do I find
functions for x? I know numpy has some things. Maybe it's in there or
maybe somewhere else," I would always know that in order to get
functions for x I would install the correct, usefully named, module.
This seems like the path of least surprise, and the cleanest interface.

I agree it's great that numpy is on the OLPC, and would like to see it
accompanied there by a Basic Functions module containing, for
example, these financial functions, which certainly sound useful...
but not for everyone.

On 04/04/2008, Joe Harrington [EMAIL PROTECTED] wrote:
 +1 for simple financial functions in numpy, and congrats that it's on
  OLPC!  If we have an FFT in numpy, we should have an internal rate of
  return.  Anyone with investments needs that, and that's more people
  than those needing an FFT.

  I agree that Excel will bring in the most familiarity, but their names
  are not always rational.  Please don't propagate irrational names.
  Consider looking at what they're called in Matlab and IDL, as code
  conversion/familiarity from those communities counts as well.  Maybe
  for each function take the most rational name and arg order from those
  three sources, with strong preference for Excel unless there is a
  clear better way to do it.


  --jh--

 ___
  Numpy-discussion mailing list
  Numpy-discussion@scipy.org
  http://projects.scipy.org/mailman/listinfo/numpy-discussion



-- 
AJC McMorland, PhD candidate
Physiology, University of Auckland

Post-doctoral research fellow
Neurobiology, University of Pittsburgh
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Joris De Ridder

On 04 Apr 2008, at 16:11, Travis E. Oliphant wrote:

 snip
 There are only two reasons that I can think of right now to keep  
 them in
 NumPy instead of moving them to SciPy.

 1) These are basic functions and a scipy toolkit would contain  
 much more.

Isn't this something you want to avoid? Functionality in two different  
packages: a small kit of functions in NumPy, and (eventually) another  
large toolkit in scipy. One package only, would be more convenient I  
think.

I agree with Gaël that it's not really consistent with the NumPy/SciPy  
philosophy either. :-).
So, I would prefer to see this nice functionality in SciPy rather than  
in NumPy.

Joris


Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Charles R Harris
On Fri, Apr 4, 2008 at 10:17 AM, Joris De Ridder 
[EMAIL PROTECTED] wrote:


 On 04 Apr 2008, at 16:11, Travis E. Oliphant wrote:

  snip
  There are only two reasons that I can think of right now to keep
  them in
  NumPy instead of moving them to SciPy.
 
  1) These are basic functions and a scipy toolkit would contain
  much more.

 Isn't this something you want to avoid? Functionality in two different
 packages: a small kit of functions in NumPy, and (eventually) another
 large toolkit in scipy. One package only, would be more convenient I
 think.

 I agree with Gaël that it's not really consistent with the NumPy/SciPy
 philosophy either. :-).
 So, I would prefer to see this nice functionality in SciPy rather than
 in NumPy.


Agree. I also think that the idea of basic, pure python extensions is a good
one. There is a lot of useful functionality that can be made available
without adding Fortran packages to the mix. These packages could even be
included as part of numpy but they should remain in a separate namespace.

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] loading data with gaps

2008-04-04 Thread David Huard
Hi Tim,

Look at the thread posted a couple of weeks ago named: loadtxt and missing
values

I'm guessing you'll find answers to your questions, if not, don't hesitate
to ask.
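
In the meantime, here is a rough sketch of one way to get from the string
array to something you can compute with (the array/mean/filled names below
assume the usual maskedarray interface):

import numpy
import maskedarray

raw = numpy.loadtxt('./loadtxt_test.csv', dtype=str, delimiter=',')
empty = (raw == '')

numeric = raw.copy()
numeric[empty] = '0'                 # placeholder so the cast to float works
numeric = numeric.astype(float)

data = maskedarray.array(numeric, mask=empty)
print data.mean(axis=0)              # reductions skip the masked cells
plain = data.filled(-999.)           # back to a plain ndarray if needed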

David


2008/4/3, Tim Michelsen [EMAIL PROTECTED]:

 Hello!

 How can I load a data file (e.g. CSV, DAT) in ASCII which has some gaps?

 The file has been saved from a spreadsheet program which leaves
 cells with no data empty:


 1,23.
 2,13.
 3,
 4,34.

 Would this code be correct:
 ### test_loadtxt.py ###
 import numpy
 import maskedarray

 # load data which has empty 'cells' as being saved from spreadsheet:
 # 1,23.
 # 2,13.
 # 3,
 # 4,34.
 data = numpy.loadtxt('./loadtxt_test.csv',dtype=str,delimiter=',')


 # create a masked array with all no data ('', empty cells from CSV) masked
 my_masked_array = maskedarray.masked_equal(data,'')
 ##

 * How can I change the data type of my maskedarray (my_masked_array) to
 a type that allows me to perform calculations?

 * Would you do this task differently or more efficiently?

 * What possibilities do I have to estimate/interpolate the masked values?
 An example would be nice.

 * How do I convert maskedarray (my_masked_array) to an array without
 masked values?

 Thanks in advance for your help,
 Tim Michelsen

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Final push for NumPy 1.0.5 (I need your help!)

2008-04-04 Thread Anne Archibald
On 04/04/2008, Travis E. Oliphant [EMAIL PROTECTED] wrote:

 Hey Anne,

  Do you currently have SVN access?   Would you like it?

  I think the SciPy/NumPy sprint would be a good time to clean-up the
  committers list and add new people interested in helping.

I don't have SVN access. I'd be happy (and careful!) to have it, but I
should warn you that I won't have time to do serious, regular
development on scipy/numpy; I do hope to be able to write a little
code here and there, though, and it would be handy to be able to add
it directly instead of sending patches into the ether.

Anne
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] numpy unittest failure on Mac PPC and Solaris 10

2008-04-04 Thread Christopher Hanley
Hi,

We are seeing the following error on both Solaris and Mac PPC when 
running the numpy unittests:

.F...
==
FAIL: test_record (numpy.lib.tests.test_io.Testloadtxt)
--
Traceback (most recent call last):
   File /usr/stsci/pyssgdev/2.5.1/numpy/lib/tests/test_io.py, line 42, 
in test_record
 assert_array_equal(x, a)
   File /usr/stsci/pyssgdev/2.5.1/numpy/testing/utils.py, line 225, in 
assert_array_equal
 verbose=verbose, header='Arrays are not equal')
   File /usr/stsci/pyssgdev/2.5.1/numpy/testing/utils.py, line 217, in 
assert_array_compare
 assert cond, msg
AssertionError:
Arrays are not equal

(mismatch 100.0%)
  x: array([(1, 2), (3, 4)],
   dtype=[('x', 'i4'), ('y', 'i4')])
  y: array([(1, 2), (3, 4)],
   dtype=[('x', 'i4'), ('y', 'i4')])

--
Ran 837 tests in 4.281s

FAILED (failures=1)


Given the platforms this error occurs on I am guessing this is a big vs. 
little-endian problem.

Chris

-- 
Christopher Hanley
Systems Software Engineer
Space Telescope Science Institute
3700 San Martin Drive
Baltimore MD, 21218
(410) 338-4338
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] sorting ndarray

2008-04-04 Thread Nils Wagner
On Thu, 3 Apr 2008 23:02:24 -0700 (PDT)
  harryos [EMAIL PROTECTED] wrote:
 i have a 1 dim numpy array
 D=array( [[ 3. ,  2. ,  1.  , 4. ,  5.  , 1.5,  2.2]] )
 i need to get this sorted in descending order and then 
access the
 elements .
 D.sort() will make D as [[ 1.   1.5  2.   2.2  3.   4. 
  5. ]]
 how will i reverse it?
 or is there a simpler way?
 harry
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 
>>> D
array([[ 3. ,  2. ,  1. ,  4. ,  5. ,  1.5,  2.2]])
>>> fliplr(sort(D))
array([[ 5. ,  4. ,  3. ,  2.2,  2. ,  1.5,  1. ]])

Nils
  
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] packaging scipy (was Re: Simple financial functions for NumPy)

2008-04-04 Thread Joe Harrington
Every once in a while the issue of how to split things into packages
comes up.  In '04, I think, we had such a discussion regarding scipy
(with Numeric as its base at the time).  One idea was a
core-plus-many-modules approach.  We could then have metapackages that
just consisted of dependencies and would draw in the packages that
were needed for a given application.  A user would install just one
metapackage (one of which would install every package), or could roll
their own.

Then Travis took as much of scipy as was reasonable to maintain at
once, took a chunk of numarray, and made numpy.  Numpy is now a bit
more than what I would have thought of as core functionality.

One might consider trimming Travis's collection and moving many of the
functions out to add-ons in scipy.  The question is where to draw the
line in the sand with regard to where a given function belongs, and
the problem is that the line is very hard to draw in a way that
pleases many people and mortally offends very few.  Certainly lots of
content areas would straddle the divide and require both a package in
scipy and representation in the core.

I would think of core functionality as what is now the ndarray class
(including all slicing and broadcasting); things to make and index
arrays like array, zeros, ones, where and some truth tests; the most
elementary math functions, such as add, subtract, multiply, divide,
power, sin, cos, tan, asin, acos, atan, log, log10; sum, mean, median,
standard deviation; the constants pi, e, NaN, Inf, -Inf; a few simple
file I/O functions including loadtxt, and not a whole lot else.  This
is an incomplete list but you get the idea.  Everything else would be
in optional packages, possiby broken down by topic.

Now ask yourself, what would you add to this to make your core?  Would
you take anything out?  And the kicker: do you really think much of
the community would agree with any one of our lists, including mine?

Almost all such very-light collections would require users to load
additional packages.  For example, complex things like FFTs and random
numbers would not belong in the core outlined above, nor would linear
algebra, masked arrays, fitting, random numbers, most stats, or
financial functions.

There is one division that does make sense, and that would be to
distribute ONLY ndarray as the core, and have NO basic math functions,
etc., in the core.  Then you have to load something, a lot of things,
to make it useful, but you don't have the question of what should be
in which package.  But ndarray is already a stand-alone item.

At that point you have to ask, if the core is so small that *everyone*
has to load an add-on, what's the point of making the division?  You
can argue that it's easier maintenance-wise, but I'm not certain that
having many packages to build, test, and distribute is easier.  Travis
already made a decision based on maintenance, and it seems to be
working.

That brings us to the motivation for dividing in the first place.  I
think people like the idea because we're all scientists and we like to
categorize things.  We like neat little packages.  But we're not
thinking about the implications for the code we'd actually write.
Wouldn't you rather do:

import numpy as N

...
c = (N.sin(b) + N.exp(d)) / N.mean(g)

rather than:

import numpy  as N
import numpy.math as N.M
import numpy.trig as N.T
import numpy.stat as N.S

...
c = (N.T.sin(b) + N.M.exp(d)) / N.S.mean(g)

?

In the latter example, you start N.whatevering yourself to death, and
it is harder to learn because you have to remember what little
container each function is in and what you happened to name the
container when you loaded it.  Code is also harder to read as the
N.whatevers distract from the sense of the equation.  Lines start to
lengthen.  Sure, you can do:

from whatever import functionlist

to pull the functions into your top-level namespace, but do you really
want to, in effect, declare every function you ever call?  The whole
point of array languages is to get rid of mechanics like function
declarations so you can focus on programming, not housekeeping.

As I've emphasized before, finding the function you want is a
documentation problem, not a packaging problem.  We're working on
that.  Anne's function index on the web site is an excellent start,
and there will be much more done this summer.

Though I didn't initially agree with it, I now think Travis's line in
the sand is a pretty good one.  Numpy is enough so many people don't
have to go to scipy most of the time.  It's being maintained well and
released reasonably often.  The problems of the rest of scipy are not
holding back the core anymore.  Installing today at a tiny 10 MB,
numpy could easily stand to grow by adding small functions that are
broadly used, without making it unwieldy for even the constrained
space of OLPC.

Let's think good and hard before we introduce more divisions into the
namespace.

--jh--

Re: [Numpy-discussion] problem with float64's str()

2008-04-04 Thread Robert Kern
On Fri, Apr 4, 2008 at 9:56 AM, Will Lee [EMAIL PROTECTED] wrote:
 I understand the implication for the floating point comparison and the need
 for allclose.  However, I think in a doctest context, this behavior makes
 the doc much harder to read.

Tabling the issue of the fact that we changed behavior for a moment,
this is a fundamental problem with using doctests as unit tests for
numerical code. The floating point results that you get *will* be
different on different machines, but the code will still be correct.
Using allclose() and similar techniques are the best tools available
(although they still suck). Relying on visual representations of these
results is simply an untenable strategy. Note that the string
representation of NaNs and Infs are completely different across
platforms.

That said, str(float_numpy_scalar) really should have the same rules
as str(some_python_float).

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
 -- Umberto Eco
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] problem with float64's str()

2008-04-04 Thread Travis E. Oliphant
Robert Kern wrote:
 On Fri, Apr 4, 2008 at 9:56 AM, Will Lee [EMAIL PROTECTED] wrote:
   
 I understand the implication for the floating point comparison and the need
 for allclose.  However, I think in a doctest context, this behavior makes
 the doc much harder to read.
 

 Tabling the issue of the fact that we changed behavior for a moment,
 this is a fundamental problem with using doctests as unit tests for
 numerical code. The floating point results that you get *will* be
 different on different machines, but the code will still be correct.
 Using allclose() and similar techniques are the best tools available
 (although they still suck). Relying on visual representations of these
 results is simply an untenable strategy. Note that the string
 representation of NaNs and Infs are completely different across
 platforms.

   
Well said, Robert.

 That said, str(float_numpy_scalar) really should have the same rules
 as str(some_python_float).

   
+1

-Travis




___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] packaging scipy (was Re: Simple financial functions for NumPy)

2008-04-04 Thread Alan Isaac
On Fri, 04 Apr 2008, Joe Harrington wrote:
 Wouldn't you rather do:
 import numpy as N 
 ... 
 c = (N.sin(b) + N.exp(d)) / N.mean(g)

 rather than:

 import numpy  as N 
 import numpy.math as N.M 
 import numpy.trig as N.T 
 import numpy.stat as N.S 
 ... 
 c = (N.T.sin(b) + N.M.exp(d)) / N.S.mean(g)


I try to think of my students in such an environment.
Frightening.
Alan Isaac



___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] packaging scipy (was Re: Simple financial functions for NumPy)

2008-04-04 Thread Gael Varoquaux
On Fri, Apr 04, 2008 at 04:29:03PM -0400, Alan Isaac wrote:
  import numpy  as N 
  import numpy.math as N.M 
  import numpy.trig as N.T 
  import numpy.stat as N.S 
  ... 
  c = (N.T.sin(b) + N.M.exp(d)) / N.S.mean(g)


 I try to think of my students in such an environment.
 Frightening.

+1 (and s/students/colleagues).

Gaël
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] problem with float64's str()

2008-04-04 Thread Timothy Hochberg
On Fri, Apr 4, 2008 at 12:47 PM, Robert Kern [EMAIL PROTECTED] wrote:

 On Fri, Apr 4, 2008 at 9:56 AM, Will Lee [EMAIL PROTECTED] wrote:
  I understand the implication for the floating point comparison and the
 need
  for allclose.  However, I think in a doctest context, this behavior
 makes
  the doc much harder to read.

 Tabling the issue of the fact that we changed behavior for a moment,
 this is a fundamental problem with using doctests as unit tests for
 numerical code. The floating point results that you get *will* be
 different on different machines, but the code will still be correct.
 Using allclose() and similar techniques are the best tools available
 (although they still suck). Relying on visual representations of these
 results is simply an untenable strategy.


That is sometimes, but not always the case. Why? Because most of the time
that one ends up with simple values, one is starting with arbitrary floating
point values and doing at most simple operations on them. Thus a strategy
that helps many of my unit tests look better and function reliably is to
choose values that can be represented exactly in floating point. If the
original value here had been 0.00125 rather than .0012, there would be no
problem here. Well, almost: you are still vulnerable to the rules for zero
padding and whatnot getting changed and so forth, but in general it's more
reliable and prettier.

Of course this isn't always a solution. But I've found it's helpful for a
lot of cases.

Note that the string
 representation of NaNs and Infs are completely different across
 platforms.

 That said, str(float_numpy_scalar) really should have the same rules
 as str(some_python_float).


+1






 --
 Robert Kern

 I have come to believe that the whole world is an enigma, a harmless
 enigma that is made terrible by our own mad attempt to interpret it as
 though it had an underlying truth.
  -- Umberto Eco
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion




-- 
[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Alexander Michael
On Fri, Apr 4, 2008 at 9:49 AM, Travis E. Oliphant
[EMAIL PROTECTED] wrote:
  However, if clearly better interfaces can be discovered, then we could
  change it.   For now, the functions are not imported into the numpy
  namespace but live in

  numpy.lib.financial

  I could see a future scipy module containing much, much more.

  Comments and improvement suggestions welcome.   We are a week away from
  release of NumPy 1.0.5, and hopefully we can agree before then.

I'm generally in agreement with other opinions about keeping numpy
lightweight even though I think these functions are useful and should
be widely distributed with numpy. I've struggled with the various
masked array implementations being worlds unto their own, falling down
unexpectedly when mixed with other numpy functions, so keeping a
narrow focus seems beneficial (as in its clear that I shouldn't expect
A and B to work necessarily together). Nevertheless, I like getting a
lot of utility from each package as it seems cognitive load is
proportional to the number of packages required-- especially when the
packages are compiled. Perhaps, as others have suggested, there should
be some sort of pure-python numpy library package (a NumPyLib, if you
will) that sits between numpy and scipy? I'm a numpy user but not a
scipy user (I guess from an attempt to decrease the cognitive load of
yet another compiled python package), so I'm speaking from that
perspective. I also wouldn't be opposed to (for NumPy 4 :) breaking
out the core ndarray class and basic linalg (solve, svd, eig, etc.) as
NDArray and putting everything else into logically separated but
independent NumKits. A blessed collection of which are together taken
and distributed as NumPy. Anything depending on one ore more NumKits
would go into a SciKit, with a blessed collection distributed together
as SciPy. Has this basic distribution architecture already been
proposed? I've heard hints of something along these lines. If so, then
the new financial functions should go into numpy.lib, where
everything will later be broken out into a NumKit. Hmm. I've just
argued myself in a circle... :O

Regards,
Alex
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Efficient reading of binary data

2008-04-04 Thread Christopher Barker
Nicolas Bigaouette wrote:
 So the next step would be to only read the needed data from the binary 
 file...

You've gotten some suggestions, but another option is to use file.seek()
to get to where your data is, and numpy.fromfile() from there.
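
Something along these lines (a sketch; float_dtype and 9*Stot are from your
earlier messages, and data_offset is wherever your header ends):

import numpy

f = open(filename, 'rb')
f.seek(data_offset)
data = numpy.fromfile(f, dtype=float_dtype, count=9*Stot)
E = data[6::9]
f.close()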

-CHB


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Problem with numpy.linalg.eig?

2008-04-04 Thread Michael McNeil Forbes
On 16 Nov 2007, at 1:46 AM, Michael McNeil Forbes wrote:
 On 15 Nov 2007, at 8:23 PM, David Cournapeau wrote:

 Could you try without atlas ? Also, how did you configure atlas when
 building it ? It seems that atlas is definitely part of the problem
 (everybody having the problem does use atlas), and that it involves
 Core
 2 duo.

 David

 It seems to work fine without ATLAS, but then again, it is a somewhat
 random error.  I will let some code run tonight and see if I detect
 anything.

Just an update.  I am still having this problem, along with some  
additional problems where occasionally even dot returns nan's.  I  
have confirmed that without ATLAS everything seems to be fine, and  
that the problem still remains with newer versions of ATLAS, Python,  
gcc etc.

ATLAS was configured with

../configure --prefix=${BASE}/apps/${ATLAS}_${SUFFIX} \
  --with-netlib-lapack=${BASE}/src/${LAPACK}_${SUFFIX}/lapack_LINUX.a \
  -A Core2Duo64SSE3 \
  --cflags=-fPIC \
  -Fa alg -fPIC

and it passed all the tests.

The problem still exists with ATLAS version 3.8.1, gcc 4.3.0, and  
recent versions of numpy.

>>> sys.version
'2.5.2 (r252:60911, Mar 29 2008, 02:55:47) \n[GCC 4.3.0]'
>>> numpy.version.version
'1.0.5.dev4915'

I have managed to extract a matrix that causes this failure  
repeatedly once every two or four times eigh is called, so hopefully  
I should be able to run gdb and track down the problem...
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Anne Archibald
On 04/04/2008, Alan G Isaac [EMAIL PROTECTED] wrote:
 On Fri, 4 Apr 2008, Gael Varoquaux apparently wrote:
    I really think numpy should be as thin as possible, so
    that you can really say that it is only an array
    manipulation package. This will also make it easier to
    sell as a core package for developers who do not care
    about calculator features.


 I'm a user rather than a developer, but I wonder:
  is this true?

  1. Even as a user, I agree that what I really want from
  NumPy is a core array manipulation package (including
  matrices).  BUT as long as this is the core of NumPy,
  will a developer care if other features are available?

  2. Even if the answer to 1. is yes, could the
  build/installation process include an option not to
  build/install anything but the core array functionality?

  3. It seems to me that pushing things out into SciPy remains
  a problem: a basic NumPy is easy to build on any platform,
  but SciPy still seems to generate many questions.

  4. One reason this keeps coming up is that the NumPy/SciPy
  split is rather too crude.  If the split were instead
  something like NumPy/SciPyBasic/SciPyFull/SciPyFull+Kits
  where SciPyBasic contained only pure Python code (no
  extensions), perhaps the desired location would be more
  obvious and some of this recurrent discussion would go away.

It seems to me that there are two separate issues people are talking
about when they talk about packaging:

* How should functions be arranged in the namespaces? numpy.foo(),
scipy.foo(), numpy.lib.financial.foo(), scikits.foo(),
numkitfull.foo()?

* Which code should be distributed together? Should scipy require
separate downloading and compilation from numpy?

The two questions are not completely independent - it would be
horribly confusing to have the set of functions available in a given
namespace depend on which packages you had installed - but for the
most part it's not a problem to have several toplevel namespaces in
one package (python's library is the biggest example of this I know
of).

For the first question, there's definitely a question about how much
should be done with namespaces and how much with documentation. The
second is a different story.

Personally, I would prefer if numpy and scipy were distributed
together, preferably with matplotlib. Then anybody who used numpy
would have available all the scipy tools and all the plotting tools; I
think it would cut down on wheel reinvention and make application
development easier. Teachers would not need to restrict themselves to
using only functions built into numpy for fear that their students
might not have scipy installed - how many students have learned to
save their arrays in unportable binary formats because their teacher
didn't want them to have to install scipy?

I realize that this poses technical problems. For me installing scipy
is just a matter of clicking on a checkbox and installing a 30 MB
package, but I realize that some platforms make this much more
difficult. If we can't just bundle the two, fine. But I think it is
mad to consider subdividing further if we don't have to.

I think python's success is due in part to its batteries included
library. The fact that you can just write a short python script with
no extra dependencies that can download files from the Web, parse XML,
manage subprocesses, and save persistent objects makes development
much faster than if you had to forever decide between adding
dependencies and reinventing the wheel. I think numpy and scipy should
follow the same principle, of coming batteries included.

So in this specific case, yes, I think the financial functions should
absolutely be included; whether they should be included in scipy or
numpy is less important to me because I think everyone should install
both packages.

Anne
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Simple financial functions for NumPy

2008-04-04 Thread Timothy Hochberg
On Fri, Apr 4, 2008 at 3:31 PM, Anne Archibald [EMAIL PROTECTED]
wrote:

 On 04/04/2008, Alan G Isaac [EMAIL PROTECTED] wrote:
  On Fri, 4 Apr 2008, Gael Varoquaux apparently wrote:
    I really think numpy should be as thin as possible, so
that you can really say that it is only an array
manipulation package. This will also make it easier to
    sell as a core package for developers who do not care
about calculator features.
 
 
  I'm a user rather than a developer, but I wonder:
   is this true?
 
   1. Even as a user, I agree that what I really want from
   NumPy is a core array manipulation package (including
   matrices).  BUT as long as this is the core of NumPy,
   will a developer care if other features are available?
 
   2. Even if the answer to 1. is yes, could the
   build/installation process include an option not to
   build/install anything but the core array functionality?
 
   3. It seems to me that pushing things out into SciPy remains
   a problem: a basic NumPy is easy to build on any platform,
   but SciPy still seems to generate many questions.
 
    4. One reason this keeps coming up is that the NumPy/SciPy
   split is rather too crude.  If the split were instead
   something like NumPy/SciPyBasic/SciPyFull/SciPyFull+Kits
   where SciPyBasic contained only pure Python code (no
   extensions), perhaps the desired location would be more
   obvious and some of this recurrent discussion would go away.

 It seems to me that there are two separate issues people are talking
 about when they talk about packaging:

 * How should functions be arranged in the namespaces? numpy.foo(),
 scipy.foo(), numpy.lib.financial.foo(), scikits.foo(),
 numkitfull.foo()?

 * Which code should be distributed together? Should scipy require
 separate downloading and compilation from numpy?

 The two questions are not completely independent - it would be
 horribly confusing to have the set of functions available in a given
 namespace depend on which packages you had installed - but for the
 most part it's not a problem to have several toplevel namespaces in
 one package (python's library is the biggest example of this I know
 of).

 For the first question, there's definitely a question about how much
 should be done with namespaces and how much with documentation. The
 second is a different story.

 Personally, I would prefer if numpy and scipy were distributed
 together, preferably with matplotlib. Then anybody who used numpy
 would have available all the scipy tools and all the plotting tools; I
 think it would cut down on wheel reinvention and make application
 development easier. Teachers would not need to restrict themselves to
 using only functions built into numpy for fear that their students
 might not have scipy installed - how many students have learned to
 save their arrays in unportable binary formats because their teacher
 didn't want them to have to install scipy?

 I realize that this poses technical problems. For me installing scipy
 is just a matter of clicking on a checkbox and installing a 30 MB
 package, but I realize that some platforms make this much more
 difficult. If we can't just bundle the two, fine. But I think it is
 mad to consider subdividing further if we don't have to.


If these were tightly tied together, for instance in one big DLL, this
would be unpleasant for me. I still have people downloading stuff over 56k
modems, and adding an extra 30 MB to the already somewhat bloated numpy
distribution would make their lives more tedious than they already are.

 I think python's success is due in part to its batteries included

 library. The fact that you can just write a short python script with
 no extra dependencies that can download files from the Web, parse XML,
 manage subprocesses, and save persistent objects makes development
 much faster than if you had to forever decide between adding
 dependencies and reinventing the wheel. I think numpy and scipy should
 follow the same principle, of coming batteries included.


One thing they try to do in Python proper is think a lot more before adding
stuff to the standard library. Generally packages need to exist separately
for some period of time to prove their general utility and to stabilize
before they get accepted.  Particularly in the core, but in the library as
well, they make an effort to choose a compact set of primitive operations
without a lot of duplication (the old "There should be one -- and preferably
only one -- obvious way to do it"). The numpy community has, particularly of
late, been rather quick to add things that seem like they *might* be useful.

One of the advantages of having multiple namespaces would have been to
enforce a certain amount of discipline on what went into numpy, since it
would've been easier to look at and evaluate a few dozen functions that
might have comprised some subpackage rather than, let's say, five hundred or
so.

I suspect it's too late now; numpy has chosen the path of matlab and