Re: [Numpy-discussion] genloadtxt : last call

2008-12-09 Thread Jarrod Millman
On Fri, Dec 5, 2008 at 3:59 PM, Pierre GM [EMAIL PROTECTED] wrote:
 All,
 Here's the latest version of genloadtxt, with some recent corrections. With
 just a couple of tweaks, we end up with some decent speed: it's still
 slower than np.loadtxt, but only by about 15% according to the test at the
 end of the package.

 And so, now what? Should I put the module in numpy.lib.io? Elsewhere?

Thanks for working on this.  I think that having simple, easy-to-use,
flexible, and fast IO code is extremely important; so I really
appreciate this work.

I have a few general comments about the IO code and where I would like
to see it going:

Where should IO code go?


From the user's perspective, I would like all the NumPy IO code to be
in the same place in NumPy; and all the SciPy IO code to be in the
same place in SciPy.  So, for instance, the user shouldn't get
`mloadtxt` from `numpy.ma.io`.  Another way of saying this is that in
IPython, I should be able to see all NumPy IO functions by
tab-completing once.

Slightly less important to me is that I would like to be able to do:
  from numpy import io as npio
  from scipy import io as spio

What is the difference between NumPy and SciPy IO?


It was decided last year that numpy io should provide simple, generic,
core io functionality, while scipy io would provide more domain- or
application-specific io code (e.g., Matlab IO, WAV IO, etc.).  My
vision for scipy io, which I know isn't shared, is for it to be more
or less all-inclusive (e.g., all image, sound, and data formats).
(That is a different discussion; I just wanted to make it clear where
I stand.)

For numpy io, it should include:
 - generic helper routines for data io (i.e., datasource, etc.)
 - a standard, supported binary format (i.e., npy/npz)
 - generic ascii file support (i.e, loadtxt, etc.)

What about AstroAsciiData?
-

I sent an email asking about AstroAsciiData last week.  The only
response I got was from Manuel Metz, saying that he was switching to
AstroAsciiData since it did exactly what he needed.  In my mind, I
would prefer that numpy io had the best ascii data handling.  So I
wonder if it would make sense to incorporate AstroAsciiData?

As far as I know, it is pure Python with a BSD license.  Maybe the
authors would be willing to help integrate the code and continue
maintaining it in numpy.  If others are supportive of this general
approach, I would be happy to approach them.  It is possible that we
won't want all their functionality, but it would be good to avoid
duplicating effort.

I realize that this may not be persuasive to everyone, but I really
feel that IO code is special and that it is an area where numpy/scipy
should devote some effort to consolidating the community around some
standard packages and approaches.

What about data source?

On a related note, I wanted to point out datasource.  Data source is a
file interface for handling local and remote data files:
http://projects.scipy.org/scipy/numpy/browser/trunk/numpy/lib/_datasource.py

It was originally developed by Jonathan Taylor and then modified by
Brian Hawthorne and Chris Burns.  It is fairly well-documented and
tested, so it would be easier to take a look at it than for me to
re-explain it here.  The basic idea is to have a drop-in replacement
for file handling, which would abstract away whether the file was
remote or local, compressed or not, etc.  The hope was that it would
allow us to simplify support for remote file access and handling
compressed files by merely using a datasource instead of a filename:
  def loadtxt(fname 
vs.
  def loadtxt(datasource 
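
The idea can be sketched in plain Python.  This is an illustrative
stand-in, not the actual _datasource API; the helper name
`open_datasource` and its behavior are assumptions for the sketch:

```python
import gzip
import io
from urllib.request import urlopen

def open_datasource(path):
    """Illustrative stand-in for the datasource idea: open a local or
    remote, compressed or plain file behind one uniform call."""
    if path.startswith(('http://', 'https://', 'ftp://')):
        raw = urlopen(path).read()       # remote file
    else:
        with open(path, 'rb') as f:      # local file
            raw = f.read()
    if path.endswith('.gz'):
        raw = gzip.decompress(raw)       # transparent decompression
    return io.StringIO(raw.decode())

# A loadtxt-style consumer then never cares where the bytes came from:
#   data = np.loadtxt(open_datasource('http://example.com/table.txt.gz'))
```

The real datasource module additionally caches remote files to a local
destination directory, which this sketch omits.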

I would appreciate hearing whether this seems doable or useful.
Should we remove datasource?  Start using it more?  Does it need to be
slightly or dramatically improved/overhauled?  Renamed `datafile` or
paired with a `datadestination`?  Support
versioning/checksumming/provenance tracking (a tad ambitious;))?  Is
anyone interested in picking up where we left off and improving it?

Thoughts? Suggestions?

Documentation
-

The main reason that I am so interested in the IO code is that it
seems like it is one of the first areas that users will look.  ("I
have heard about this Python for scientific programming thing and I
wonder what all the fuss is about?  Let me try NumPy; this seems
pretty good.  Now let's see how to load in some of my data.")

I just took a quick look through the documentation and I couldn't find
any in the User Guide and this is the main IO page in the reference
manual:
  http://docs.scipy.org/doc/numpy/reference/routines.io.html

I would like to see a section on data IO in the user guide and have a
more prominent mention of IO code in the reference manual (i.e.,
http://docs.scipy.org/doc/numpy/reference/io.html ?).

Unfortunately, I don't have time to help out; but since it looks like

Re: [Numpy-discussion] Line of best fit!

2008-12-09 Thread James
Hi,

Thanks for all your help so far!

Right i think it would be easier to just show you the chart i have so far;

--
import numpy as np
import matplotlib.pyplot as plt

plt.plot([4,8,12,16,20,24], [0.008,0.016,0.021,0.038,0.062,0.116], 'bo')

plt.xlabel('F (Number of washers)')
plt.ylabel('v^2/r (ms^-2)')
plt.title('Circular Motion')
plt.axis([2,26,0,0.120])

plt.show()



Very basic, I know. All I wish to do is add a line of best fit based on 
that data. In the examples there seem to be far more variables; do I 
need to split my data up, etc.?

Thanks

Scott Sinclair wrote:
 2008/12/9 Angus McMorland [EMAIL PROTECTED]:
 Hi James,

 2008/12/8 James [EMAIL PROTECTED]:
 
 I have a very simple plot, and the lines join point to point, however i
 would like to add a line of best fit now onto the chart, i am really new
 to python etc, and didnt really understand those links!

 Can anyone help me :)
   
 It sounds like the second link, about linear regression, is a good
 place to start, and I've made a very simple example based on that:

 ---
 import numpy as np
 import matplotlib.pyplot as plt

 x = np.linspace(0, 10, 11) #1
 data_y = np.random.normal(size=x.shape, loc=x, scale=2.5) #2
 plt.plot(x, data_y, 'bo') #3

 coefs = np.lib.polyfit(x, data_y, 1) #4
 fit_y = np.lib.polyval(coefs, x) #5
 plt.plot(x, fit_y, 'b--') #6
 
 

 James, you'll want to add an extra line to the above code snippet so
 that Matplotlib displays the plot:

 plt.show()

 Cheers,
 Scott
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion

   



Re: [Numpy-discussion] Line of best fit!

2008-12-09 Thread Lane Brooks
James wrote:
 Hi,

 Thanks for all your help so far!

 Right i think it would be easier to just show you the chart i have so far;

 --
 import numpy as np
 import matplotlib.pyplot as plt

 plt.plot([4,8,12,16,20,24], [0.008,0.016,0.021,0.038,0.062,0.116], 'bo')

 plt.xlabel('F (Number of washers)')
 plt.ylabel('v^2/r (ms^-2)')
 plt.title('Circular Motion')
 plt.axis([2,26,0,0.120])

 plt.show()

 

 Very basic, I know. All I wish to do is add a line of best fit based on 
 that data. In the examples there seem to be far more variables; do I 
 need to split my data up, etc.?
   

Here is how I would do it:

import numpy as np
import matplotlib.pyplot as plt

x = np.array([4,8,12,16,20,24])
y = np.array([0.008,0.016,0.021,0.038,0.062,0.116])

m = np.polyfit(x, y, 1)

yfit = np.polyval(m, x)

plt.plot(x, y, 'bo', x, yfit, 'k')

plt.xlabel('F (Number of washers)')
plt.ylabel('v^2/r (ms^-2)')
plt.title('Circular Motion')
plt.axis([2,26,0,0.120])

plt.text(5, 0.06, 'Slope=%f' % m[0])
plt.text(5, 0.05, 'Offset=%f' % m[1])
plt.show()



[Numpy-discussion] Please help prepare the SciPy 0.7 release notes

2008-12-09 Thread Jarrod Millman
We are almost ready for SciPy 0.7.0rc1 (we just need to sort out the
Numerical Recipes issues and I haven't had time to look through them
yet).  So I wanted to ask once more for help with preparing the
release notes:
http://projects.scipy.org/scipy/scipy/browser/trunk/doc/release/0.7.0-notes.rst

There have been numerous improvements and changes.  As always, I would
appreciate any feedback about mistakes or omissions.  It would also be
nice to know how many tests were in the last release and how many
there are now.  Highlighting major bug fixes or pointing out known
issues would be very useful.

I would also like to ask if anyone would be interested in stepping
forward to work on something like Andrew Kuchling's "What's New in
Python": http://docs.python.org/whatsnew/2.6.html

This would be a great area to contribute.  The release notes provide
visibility for our developers' immense contributions of time and
effort.  They help provide an atmosphere of momentum, maturity, and
excitement to a project.  It is also a great service to users who
haven't been following the trunk closely, as well as to other
developers who have missed what is happening in other areas of the
code.  It also becomes a nice historical artifact for the future.

It would be great if someone wanted to contribute in this way.
Ideally, I would like to have someone who would be interested in doing
this for several releases of scipy and numpy.  Such a person could develop
a standard template for this and write some scripts to gather specific
statistics (e.g., how many lines of code have changed, how many unit
tests were added, what is the test coverage, what is the docstring
coverage, who were the top contributors, who has increased their code
contributions the most, how many new developers, etc.)

Just a thought.  I figure it won't happen if I don't ask.

Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/


[Numpy-discussion] Importance of order when summing values in an array

2008-12-09 Thread Hanni Ali
Hi All,

I have encountered a puzzling issue and I am not certain if this is a
mistake of my own doing or not. Would someone kindly just look over this
issue to make sure I'm not doing something very silly.

So, why would the sum of an array have a different value depending on the
order I select the indices of the array?

>>> vector[[39, 46, 49, 50, 6, 9, 12, 14, 15, 17, 21]].sum()
8933281.8757099733
>>> vector[[6, 9, 12, 14, 15, 17, 21, 39, 46, 49, 50]].sum()
8933281.8757099714
>>> sum(vector[[39, 46, 49, 50, 6, 9, 12, 14, 15, 17, 21]])
8933281.8757099733
>>> sum(vector[[6, 9, 12, 14, 15, 17, 21, 39, 46, 49, 50]])
8933281.8757099714
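
[A minimal self-contained illustration of the effect being asked about,
with hypothetical values rather than the vector above.  Python's
built-in sum() adds strictly left to right (numpy's .sum() uses
pairwise summation, so its exact discrepancy differs), which makes the
order dependence easy to see:]

```python
# One large value plus ten small ones (hypothetical data).
v = [1e16] + [1.0] * 10

asc = sum(sorted(v))                 # small terms accumulate first: exact
desc = sum(sorted(v, reverse=True))  # each 1.0 is rounded away after 1e16
print(asc - desc)                    # 10.0: the two orders disagree
```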

Any thoughts?

Cheers,

Hanni


Re: [Numpy-discussion] Importance of order when summing values in an array

2008-12-09 Thread Nadav Horesh
The highest accuracy is obtained when you sum the series in ascending order, 
and the lowest accuracy in descending order. In between you might get a 
variety of rounding errors.

  Nadav. 

-Original Message-
From: [EMAIL PROTECTED] on behalf of Hanni Ali
Sent: Tue 09-Dec-08 16:07
To: Discussion of Numerical Python
Subject: [Numpy-discussion] Importance of order when summing values in an array
 
Hi All,

I have encountered a puzzling issue and I am not certain if this is a
mistake of my own doing or not. Would someone kindly just look over this
issue to make sure I'm not doing something very silly.

So, why would the sum of an array have a different value depending on the
order I select the indices of the array?

>>> vector[[39, 46, 49, 50, 6, 9, 12, 14, 15, 17, 21]].sum()
8933281.8757099733
>>> vector[[6, 9, 12, 14, 15, 17, 21, 39, 46, 49, 50]].sum()
8933281.8757099714
>>> sum(vector[[39, 46, 49, 50, 6, 9, 12, 14, 15, 17, 21]])
8933281.8757099733
>>> sum(vector[[6, 9, 12, 14, 15, 17, 21, 39, 46, 49, 50]])
8933281.8757099714

Any thoughts?

Cheers,

Hanni



Re: [Numpy-discussion] Line of best fit!

2008-12-09 Thread Alan G Isaac
On 12/8/2008 3:32 PM James apparently wrote:
 I have a very simple plot, and the lines join point to point, however i 
 would like to add a line of best fit now onto the chart, i am really new 
 to python etc, and didnt really understand those links!

See the `slope_intercept` method of the OLS class
at http://code.google.com/p/econpy/source/browse/trunk/pytrix/ls.py

Cheers,
Alan Isaac



Re: [Numpy-discussion] Importance of order when summing values in an array

2008-12-09 Thread Hanni Ali
Thank you Nadav.

2008/12/9 Nadav Horesh [EMAIL PROTECTED]

 The highest accuracy is obtained when you sum the series in ascending order,
 and the lowest accuracy in descending order. In between you might get a
 variety of rounding errors.

  Nadav.

 -Original Message-
 From: [EMAIL PROTECTED] on behalf of Hanni Ali
 Sent: Tue 09-Dec-08 16:07
 To: Discussion of Numerical Python
 Subject: [Numpy-discussion] Importance of order when summing values in an array

 Hi All,

 I have encountered a puzzling issue and I am not certain if this is a
 mistake of my own doing or not. Would someone kindly just look over this
 issue to make sure I'm not doing something very silly.

 So, why would the sum of an array have a different value depending on the
 order I select the indices of the array?

  >>> vector[[39, 46, 49, 50, 6, 9, 12, 14, 15, 17, 21]].sum()
 8933281.8757099733
  >>> vector[[6, 9, 12, 14, 15, 17, 21, 39, 46, 49, 50]].sum()
 8933281.8757099714
  >>> sum(vector[[39, 46, 49, 50, 6, 9, 12, 14, 15, 17, 21]])
 8933281.8757099733
  >>> sum(vector[[6, 9, 12, 14, 15, 17, 21, 39, 46, 49, 50]])
 8933281.8757099714

 Any thoughts?

 Cheers,

 Hanni


 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion




Re: [Numpy-discussion] Importance of order when summing values in an array

2008-12-09 Thread Bruce Southey
Nadav Horesh wrote:
 The highest accuracy is obtained when you sum the series in ascending order, and 
 the lowest accuracy in descending order. In between you might get a 
 variety of rounding errors.

   Nadav. 

 -Original Message-
 From: [EMAIL PROTECTED] on behalf of Hanni Ali
 Sent: Tue 09-Dec-08 16:07
 To: Discussion of Numerical Python
 Subject: [Numpy-discussion] Importance of order when summing values in an array
  
 Hi All,

 I have encountered a puzzling issue and I am not certain if this is a
 mistake of my own doing or not. Would someone kindly just look over this
 issue to make sure I'm not doing something very silly.

 So, why would the sum of an array have a different value depending on the
 order I select the indices of the array?

   
 >>> vector[[39, 46, 49, 50, 6, 9, 12, 14, 15, 17, 21]].sum()
 8933281.8757099733
 >>> vector[[6, 9, 12, 14, 15, 17, 21, 39, 46, 49, 50]].sum()
 8933281.8757099714
 >>> sum(vector[[39, 46, 49, 50, 6, 9, 12, 14, 15, 17, 21]])
 8933281.8757099733
 >>> sum(vector[[6, 9, 12, 14, 15, 17, 21, 39, 46, 49, 50]])
 8933281.8757099714

 Any thoughts?

 Cheers,

 Hanni

   
 

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   
Also, increase the numerical precision, as that may depend on your 
platform, especially given the input values above are ints. Numpy has 
float128 and int64, which will minimize rounding error.

Bruce


Re: [Numpy-discussion] Importance of order when summing values in an array

2008-12-09 Thread Hanni Ali
Hi Bruce,

Ahh, but I would have thought the precision for the array operation would be
the same no matter which values I wish to sum? The array is in float64 in
all cases.

I would not have thought altering the type of the integer values would make
any difference, as these indices are all below 5 million.

Perhaps I have misunderstood your suggestion could you expand.

Cheers,

Hanni


Also, increase the numerical precision as that may depend on your
 platform especially given the input values above are ints. Numpy has
 float128 and int64 that will minimize rounding error.

 Bruce
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion



Re: [Numpy-discussion] Importance of order when summing values in an array

2008-12-09 Thread Bruce Southey
Hanni Ali wrote:
 Hi Bruce,

 Ahh, but I would have thought the precision for the array operation 
 would be the same no matter which values I wish to sum? The array is 
 in float64 in all cases.

 I would not have thought altering the type of the integer values would 
 make any difference, as these indices are all below 5 million.

 Perhaps I have misunderstood your suggestion could you expand.

 Cheers,

 Hanni


 Also, increase the numerical precision as that may depend on your
 platform especially given the input values above are ints. Numpy has
 float128 and int64 that will minimize rounding error.

 Bruce
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org mailto:Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion


 

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
   
Hi,
The main issue is the number of significant digits that you have, which 
is not the number of decimals in your case. So while the numerical 
difference in the results is on the order of 1.86e-09, the actual 
difference starts at the 15th significant place. This is expected given 
the number of significant digits of a 64-bit number (15-16). With higher 
precision like float128 you should get about 34 significant digits, 
depending on the accuracy of all steps (i.e., the numbers must be stored 
as float128 and the summations done in float128 precision).

Note there is a secondary issue of converting numbers between different 
types, as well as the binary representation of decimal numbers. Also, 
rather than just simple summing, there are alternative algorithms, like 
the Kahan summation algorithm, that can minimize errors.
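
[For reference, a minimal sketch of the Kahan algorithm mentioned
above; the test values are hypothetical, not the original poster's
data:]

```python
def kahan_sum(values):
    """Compensated (Kahan) summation: carry the rounding error of each
    addition forward instead of discarding it."""
    total = 0.0
    c = 0.0                      # running compensation
    for x in values:
        y = x - c                # correct the incoming term
        t = total + y
        c = (t - total) - y      # what the addition just lost
        total = t
    return total

# Hypothetical worst case for a naive left-to-right sum: the ten 1.0s
# vanish entirely, while Kahan recovers most of them.
v = [1e16] + [1.0] * 10
print(sum(v))        # naive: 1e+16, the small terms are gone
print(kahan_sum(v))  # much closer to the exact 1e16 + 10
```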

Bruce






[Numpy-discussion] numpy build error on Solaris, No module named _md5

2008-12-09 Thread Gong, Shawn (Contractor)
hi list,

I tried to build numpy 1.2.1 on Solaris 9 with gcc 3.4.6

when I typed "python setup.py build", I got an error from hashlib.py:
  File "/home/sgong/dev181/dist/lib/python2.5/hashlib.py", line 133, in
<module>
    md5 = __get_builtin_constructor('md5')
  File "/home/sgong/dev181/dist/lib/python2.5/hashlib.py", line 60, in
__get_builtin_constructor
    import _md5
ImportError: No module named _md5

I then tried python 2.6.1 instead of 2.5.2, but got the same error.
I did not get the error while building on Linux. But I performed steps
on Linux:
1) copy *.a Atlas libraries to my local_install/atlas/
2) ranlib *.a
3) created a site.cfg
Do I need to do the same on Solaris?

Any help is appreciated.

thanks,
Shawn





Re: [Numpy-discussion] numpy build error on Solaris, No module named _md5

2008-12-09 Thread Matthieu Brucher
Hi,

Does:

 import md5

work? If it doesn't, it's a packaging problem. md5 must be available.

Matthieu

2008/12/9 Gong, Shawn (Contractor) [EMAIL PROTECTED]:
 hi list,

 I tried to build numpy 1.2.1 on Solaris 9 with gcc 3.4.6

 when I typed python setup.py build, I got error from hashlib.py

   File /home/sgong/dev181/dist/lib/python2.5/hashlib.py, line 133, in
 module

 md5 = __get_builtin_constructor('md5')

   File /home/sgong/dev181/dist/lib/python2.5/hashlib.py, line 60, in
 __get_builtin_constructor

 import _md5

 ImportError: No module named _md5

 I then tried python 2.6.1 instead of 2.5.2, but got the same error.

 I did not get the error while building on Linux. But I performed steps on
 Linux:

 1) copy *.a Atlas libraries to my local_install/atlas/

 2) ranlib *.a

 3) created a site.cfg

 Do I need to do the same on Solaris?

 Any help is appreciated.

 thanks,

 Shawn

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion





-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] numpy build error on Solaris, No module named _md5

2008-12-09 Thread David Cournapeau
On Wed, Dec 10, 2008 at 1:00 AM, Gong, Shawn (Contractor)
[EMAIL PROTECTED] wrote:
 hi list,

 Do I need to do the same on Solaris?

This has nothing to do with ATLAS. You did not build Python correctly,
or the Python you are using is not built correctly. _md5 is a module
from Python, not from numpy.
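
[On current Pythons one can check directly which hash constructors a
build provides; hashlib.algorithms_available is Python 3.2+, so the 2.5
build discussed here predates it, but the idea is the same:]

```python
import hashlib

# List the digests this interpreter build actually supports.  A Python
# built without its own _md5 module and without OpenSSL would be
# missing 'md5' here, which is what the ImportError above reflects.
for name in ('md5', 'sha1', 'sha256'):
    status = 'available' if name in hashlib.algorithms_available else 'missing'
    print(name, status)
```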

cheers,

David
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] One Solution to: What to use to read and write numpy arrays to a file?

2008-12-09 Thread Lou Pecora
I found one solution that's pretty simple for easy reading and writing of a 
numpy array to/from a file (see my original message below).  Just use the 
method tolist().

e.g. a complex 2 x 2 array

arr=array([[1.0,3.0-7j],[55.2+4.0j,-95.34]])
ls=arr.tolist()

Then use the repr/eval pairing to write and later read the list from the 
file, and then convert the list that was read in back to an array:

ls_str = fp.readline()
ls_in = eval(ls_str)
arr_in = array(ls_in)  # arr_in is the same as arr

Seems to work well.  Any comments?
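
[The round trip can be sketched end-to-end, using a StringIO in place
of a real file; the variable names follow the snippet above:]

```python
import io
from numpy import array

arr = array([[1.0, 3.0 - 7j], [55.2 + 4.0j, -95.34]])

# Write: array -> nested list -> repr, one line in the "file".
fp = io.StringIO()
fp.write(repr(arr.tolist()) + '\n')

# Read: one line -> eval back into a list -> array again.
fp.seek(0)
arr_in = array(eval(fp.readline()))
print((arr_in == arr).all())  # True: values survive the text round trip
```

[It works because Python's repr of floats and complex numbers is
precise enough to round-trip through eval; np.save/np.load is the more
robust choice for large arrays.]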

-- Lou Pecora,   my views are my own.


--- On Tue, 12/9/08, Lou Pecora wrote:

In looking for simple ways to read and write data (in a text readable format) 
to and from a file and later restoring the actual data when reading back in, 
I've found that numpy arrays don't seem to play well with repr and eval. 

E.g. to write some data (mixed types) to a file I can do this (fp is an open 
file),

  thedata = [3.0, -4.9+2.0j, 'another string']
  repvars = repr(thedata) + '\n'
  fp.write(repvars)

Then to read it back and restore the data each to its original type,

strvars = fp.readline()
sonofdata = eval(strvars)

which gives back the original data list.

BUT when I try this with numpy arrays in the data list I find that repr of an 
array adds extra end-of-lines and that messes up the simple restoration of the 
data using eval.  

Am I missing something simple?  I know I've seen people recommend ways to save 
arrays to files, but I'm wondering what is the most straight-forward?  I really 
like the simple, pythonic approach of the repr - eval pairing.




  


Re: [Numpy-discussion] numpy build error on Solaris, No module named _md5

2008-12-09 Thread Gong, Shawn (Contractor)
hi Matthieu,

"import md5" doesn't work. I got:

>>> import md5
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/sgong/dev181/dist.org/lib/python2.5/md5.py", line 6, in
<module>
    from hashlib import md5
  File "/home/sgong/dev181/dist.org/lib/python2.5/hashlib.py", line 133,
in <module>
    md5 = __get_builtin_constructor('md5')
  File "/home/sgong/dev181/dist.org/lib/python2.5/hashlib.py", line 60,
in __get_builtin_constructor
    import _md5
ImportError: No module named _md5


But I followed the same steps to build python 2.5.2 as on Linux:
config
make clean
make
make -i install  (because there is an older python 2.5.1 on my
/usr/local/bin/)


thanks,
Shawn


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Matthieu
Brucher
Sent: Tuesday, December 09, 2008 11:45 AM
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] numpy build error on Solaris, No module
named _md5

Hi,

Does:

 import md5

work? If it doesn't, it's a packaging problem. md5 must be available.

Matthieu

2008/12/9 Gong, Shawn (Contractor) [EMAIL PROTECTED]:
 hi list,

 I tried to build numpy 1.2.1 on Solaris 9 with gcc 3.4.6

 when I typed python setup.py build, I got error from hashlib.py

   File /home/sgong/dev181/dist/lib/python2.5/hashlib.py, line 133,
in
 module

 md5 = __get_builtin_constructor('md5')

   File /home/sgong/dev181/dist/lib/python2.5/hashlib.py, line 60, in
 __get_builtin_constructor

 import _md5

 ImportError: No module named _md5

 I then tried python 2.6.1 instead of 2.5.2, but got the same error.

 I did not get the error while building on Linux. But I performed steps
on
 Linux:

 1) copy *.a Atlas libraries to my local_install/atlas/

 2) ranlib *.a

 3) created a site.cfg

 Do I need to do the same on Solaris?

 Any help is appreciated.

 thanks,

 Shawn

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion





-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy build error on Solaris, No module named _md5

2008-12-09 Thread Matthieu Brucher
You should ask on a general Python list, as it's a Python problem, not
a numpy one ;)

Matthieu

PS: look at the log from when you built Python; there must be a mention
of the md5 module not being built.

2008/12/9 Gong, Shawn (Contractor) [EMAIL PROTECTED]:
 hi Matthieu,

 import md5 doesn't work. I got:

 import md5
 Traceback (most recent call last):
  File stdin, line 1, in module
  File /home/sgong/dev181/dist.org/lib/python2.5/md5.py, line 6, in
 module
from hashlib import md5
  File /home/sgong/dev181/dist.org/lib/python2.5/hashlib.py, line 133,
 in module
md5 = __get_builtin_constructor('md5')
  File /home/sgong/dev181/dist.org/lib/python2.5/hashlib.py, line 60,
 in __get_builtin_constructor
import _md5
 ImportError: No module named _md5


 But I followed the same steps to build python 2.5.2 as on Linux:
 config
 make clean
 make
 make -i install  (because there is an older python 2.5.1 on my
 /usr/local/bin/)


 thanks,
 Shawn


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Matthieu
 Brucher
 Sent: Tuesday, December 09, 2008 11:45 AM
 To: Discussion of Numerical Python
 Subject: Re: [Numpy-discussion] numpy build error on Solaris, No module
 named _md5

 Hi,

 Does:

 import md5

 work? If it doesn't, it's a packaging problem. md5 must be available.

 Matthieu

 2008/12/9 Gong, Shawn (Contractor) [EMAIL PROTECTED]:
 hi list,

 I tried to build numpy 1.2.1 on Solaris 9 with gcc 3.4.6

 when I typed python setup.py build, I got error from hashlib.py

   File /home/sgong/dev181/dist/lib/python2.5/hashlib.py, line 133,
 in
 module

 md5 = __get_builtin_constructor('md5')

   File /home/sgong/dev181/dist/lib/python2.5/hashlib.py, line 60, in
 __get_builtin_constructor

 import _md5

 ImportError: No module named _md5

 I then tried python 2.6.1 instead of 2.5.2, but got the same error.

 I did not get the error while building on Linux. But I performed steps
 on
 Linux:

 1) copy *.a Atlas libraries to my local_install/atlas/

 2) ranlib *.a

 3) created a site.cfg

 Do I need to do the same on Solaris?

 Any help is appreciated.

 thanks,

 Shawn

 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion





 --
 Information System Engineer, Ph.D.
 Website: http://matthieu-brucher.developpez.com/
 Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
 LinkedIn: http://www.linkedin.com/in/matthieubrucher
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Website: http://matthieu-brucher.developpez.com/
Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92
LinkedIn: http://www.linkedin.com/in/matthieubrucher


Re: [Numpy-discussion] numpy build error on Solaris, No module named _md5

2008-12-09 Thread Michael Abshoff
Gong, Shawn (Contractor) wrote:
 hi list,

Hi Shawn,

 I tried to build numpy 1.2.1 on Solaris 9 with gcc 3.4.6
 
 when I typed “python setup.py build”, I got an error from hashlib.py
 
   File /home/sgong/dev181/dist/lib/python2.5/hashlib.py, line 133, in 
 module
 
 md5 = __get_builtin_constructor('md5')
 
   File /home/sgong/dev181/dist/lib/python2.5/hashlib.py, line 60, in 
 __get_builtin_constructor
 
 import _md5
 
 ImportError: No module named _md5
 
 I then tried python 2.6.1 instead of 2.5.2, but got the same error.
 
 I did not get the error while building on Linux. But I performed steps 
 on Linux:
 
 1) copy *.a Atlas libraries to my local_install/atlas/
 
 2) ranlib *.a
 
 3) created a site.cfg
 
 Do I need to do the same on Solaris?
 
 Any help is appreciated.

This is a pure Python issue and has nothing to do with numpy. When 
Python was built for that install, it either did not have access to 
OpenSSL or the Sun crypto libs, or you are missing some bits that need 
to be installed on Solaris. Did you build that Python on your own, or 
where did it come from?

 thanks,
 
 Shawn
 

Cheers,

Michael
 
 
 
 
 ___
 Numpy-discussion mailing list
 Numpy-discussion@scipy.org
 http://projects.scipy.org/mailman/listinfo/numpy-discussion



Re: [Numpy-discussion] genloadtxt : last call

2008-12-09 Thread Christopher Barker
Jarrod Millman wrote:

From the user's perspective, I would like all the NumPy IO code to be
 in the same place in NumPy; and all the SciPy IO code to be in the
 same place in SciPy.

+1

  So I
 wonder if it would make sense to incorporate AstroAsciiData?

Doesn't it overlap a lot with genloadtxt? If so, that's a bit confusing 
to new users.

 3. What about data source?

 Should we remove datasource?  Start using it more?

Start using it more -- it sounds very handy.

  Does it need to be
 slightly or dramatically improved/overhauled?

no comment here - I have no idea.

 Documentation
 -
  Let me try NumPy; this seems
 pretty good.  Now let's see how to load in some of my data)

totally key -- I have a colleague who has used Matlab a fair bit in the 
past and who is starting a new project -- he asked me what to use. I, of 
course, suggested python+numpy+scipy. His first question was -- can I 
load data in from Excel?


One more comment -- for fast reading of lots of ascii data, fromfile() 
needs some help -- I wish I had more time for it -- maybe some day.

-Chris



-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR             (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

[EMAIL PROTECTED]


Re: [Numpy-discussion] genloadtxt : last call

2008-12-09 Thread Pierre GM

On Dec 9, 2008, at 12:59 PM, Christopher Barker wrote:

 Jarrod Millman wrote:

 From the user's perspective, I would like all the NumPy IO code to  
 be
 in the same place in NumPy; and all the SciPy IO code to be in the
 same place in SciPy.

 +1

So, no problem w/ importing numpy.ma and numpy.records in numpy.lib.io ?




 So I
 wonder if it would make sense to incorporate AstroAsciiData?

 Doesn't it overlap a lot with genloadtxt? If so, that's a bit  
 confusing
 to new users.

From the little I browsed, do we need it? We could get the same thing  
with record arrays...


 3. What about data source?

 Should we remove datasource?  Start using it more?

 start using it more -- it sounds very handy.

Didn't know it was around. I'll adapt genloadtxt to use it.

 Documentation
 -
 Let me try NumPy; this seems
 pretty good.  Now let's see how to load in some of my data)

 totally key -- I have a colleague that has used Matlab a fair bit in
 the past that is starting a new project -- he asked me what to use. I, of
 course, suggested python+numpy+scipy. His first question was -- can I
 load data in from excel?

So that would go in scipy.io ?



 One more comment -- for fast reading of lots of ascii data, fromfile()
 needs some help -- I wish I had more time for it -- maybe some day.

I'm afraid you'd have to count me out on this one: I don't speak C  
(yet), and don't foresee learning it soon enough to be of any help...


[Numpy-discussion] Excluding index in numpy like negative index in R?

2008-12-09 Thread Bab Tei
Hi
I can exclude a list of items by using a negative index in R (R-project), i.e. 
myarray[-excludeindex]. As negative indexing in numpy (and Python) behaves 
differently, how can I exclude a list of items in numpy?
Regards, Teimourpour


  


[Numpy-discussion] Support for sparse matrix in Distance function (and clustering)?

2008-12-09 Thread Bab Tei
Hi
Does the distance function in the spatial package support sparse matrices?
regards


  


Re: [Numpy-discussion] Importance of order when summing values in an array

2008-12-09 Thread Robert Kern
On Tue, Dec 9, 2008 at 09:51, Nadav Horesh [EMAIL PROTECTED] wrote:
 As much as I know float128 are in fact 80 bits (64 mantissa + 16 exponent) so 
 the precision is 18-19 digits (not 34)

float128 should be 128 bits wide. If it's not on your platform, please
let us know as that is a bug in your build.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Excluding index in numpy like negative index in R?

2008-12-09 Thread Joshua Lippai
You can make a mask array in numpy to prune out items from an array
that you don't want, denoting indices you want to keep with 1's and
those you don't want to keep with 0's. For instance,

a = np.array([1,3,45,67,123])
mask = np.array([0,1,1,0,1],dtype=np.bool)
anew = a[mask]

will set anew equal to array([3, 45, 123])
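If the starting point is a list of indices to drop rather than a ready-made mask, one way (a sketch, not the only idiom; the names are just illustrative) is to invert a mask built from those indices:

```python
import numpy as np

# Build the boolean mask from a list of indices to exclude, which is
# closer in spirit to R's myarray[-excludeindex].
a = np.array([1, 3, 45, 67, 123])
excludeindex = [0, 3]
mask = np.ones(a.shape, dtype=bool)  # start with everything kept
mask[excludeindex] = False           # switch off the unwanted indices
print(a[mask].tolist())  # [3, 45, 123]
```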


Josh

On Tue, Dec 9, 2008 at 12:25 PM, Bab Tei [EMAIL PROTECTED] wrote:
 Hi
 I can exclude a list of items by using a negative index in R (R-project), i.e. 
 myarray[-excludeindex]. As negative indexing in numpy (and Python) behaves 
 differently, how can I exclude a list of items in numpy?
 Regards, Teimourpour





Re: [Numpy-discussion] Excluding index in numpy like negative index in R?

2008-12-09 Thread Keith Goodman
On Tue, Dec 9, 2008 at 12:25 PM, Bab Tei [EMAIL PROTECTED] wrote:

 I can exclude a list of items by using a negative index in R (R-project), i.e. 
 myarray[-excludeindex]. As negative indexing in numpy (and Python) behaves 
 differently, how can I exclude a list of items in numpy?

Here's a painful way to do it:

>>> x = np.array([0,1,2,3,4])
>>> excludeindex = [1,3]
>>> idx = list(set(range(len(x))) - set(excludeindex))
>>> x[idx]
array([0, 2, 4])

To make it more painful, you might want to sort idx.

But if excludeindex is a boolean mask (True/False), then just use x[~excludeindex].
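For completeness, a less painful sketch using np.delete, which removes entries by index and is probably the closest one-liner to R's behavior:

```python
import numpy as np

# np.delete returns a new array with the entries at the given indices
# removed; the original array is left untouched.
x = np.array([0, 1, 2, 3, 4])
excludeindex = [1, 3]
result = np.delete(x, excludeindex)
print(result.tolist())  # [0, 2, 4]
```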


Re: [Numpy-discussion] Support for sparse matrix in Distance function (and clustering)?

2008-12-09 Thread Damian Eads
Hi,

Can you be more specific? Do you need sparse matrices to represent
observation vectors because they are sparse? Or do you need sparse
matrices to represent distance matrices because most vectors you are
clustering are similar while a few are dissimilar?

The clustering code is written mostly in C and does not support sparse
matrices. However, this should not matter because most of the
clustering code does not look at the raw observation vectors
themselves, just the distances passed as a distance matrix.
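For instance, a sketch assuming dense observation vectors (scipy.spatial.distance is the module in question):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Three 2-D observation vectors.
X = np.array([[0.0, 0.0],
              [3.0, 4.0],
              [0.0, 1.0]])

# pdist computes the condensed distance matrix (one entry per pair of
# observations); this is what the clustering routines consume.
d = pdist(X)
# squareform expands it to the full symmetric matrix, if needed.
D = squareform(d)
print(d)  # distances for pairs (0,1), (0,2), (1,2)
```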

Damian

On Tue, Dec 9, 2008 at 1:28 PM, Bab Tei [EMAIL PROTECTED] wrote:
 Hi
 Does the distance function in the spatial package support sparse matrices?
 regards







-- 
-
Damian Eads                              Ph.D. Student
Jack Baskin School of Engineering, UCSC  E2-489
1156 High Street                         Machine Learning Lab
Santa Cruz, CA 95064                     http://www.soe.ucsc.edu/~eads


[Numpy-discussion] Numscons issues: numpy.core.umath_tests not built, built-in ld detection, MAIN__ not being set-up

2008-12-09 Thread Peter Norton
I've got a few issues that I hope won't be overwhelming on one message:

(1) Because of some issues in the past in building numpy with
numscons, the numpy.core.umath_tests don't get built with
numpy+numscons (at least not as of svn version 6128).

$ python -c 'import numpy; print numpy.__version__; import
numpy.core.umath_tests'
1.3.0.dev6139
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named umath_tests

What needs to be done to get this module incorporated into the numscons build?

(2) I've found that in numscons-0.9.4, the detection of the correct
linker assumes that if gcc is in use, the linker is gnu ld. However,
on solaris this isn't the recommended toolchain, so it's typical to
build gcc with gnu as and the solaris /usr/ccs/bin/ld under the hood.
What this means is that when setting a run_path in the binary (which
we need to do) the linker flags are set to -Wl,-rpath=<library>.
However, this isn't valid for the Solaris ld. It needs -R<libname>, or
-Wl,-R<libname>. I'm pretty sure that on Solaris trying to link a
library with -Wl,-rpath= and looking for an error should be enough to
determine the correct format for the linker.

(3) Numscons tries to check for the need for a MAIN__ function when
linking with gfortran. However, any libraries built with numscons come
out with an unsatisfied dependency on MAIN__. The log looks like this
in build/scons/numpy/linalg/config.log looks like this:

scons: Configure: Checking if gfortran needs dummy main -
scons: Configure: build/scons/numpy/linalg/sconf/conftest_0.c is up to date.
scons: Configure: The original builder output was:
  |build/scons/numpy/linalg/sconf/conftest_0.c -
  |  |
  |  |int dummy() { return 0; }
  |  |
  |
scons: Configure: build/scons/numpy/linalg/sconf/conftest_0.o is up to date.
scons: Configure: The original builder output was:
  |gcc -o build/scons/numpy/linalg/sconf/conftest_0.o -c -O3 -m64 -g
-fPIC -DPIC build/scons/numpy/linalg/sconf/conftest_0.c
  |
scons: Configure: Building build/scons/numpy/linalg/sconf/conftest_0
failed in a previous run and all its sources are up to date.
scons: Configure: The original builder output was:
  |gfortran -o build/scons/numpy/linalg/sconf/conftest_0 -O3 -g
-L/usr/local/lib/gcc-4.3.1/amd64 -Wl,-R/usr/local/lib/gcc-4.3.1/amd64
-L/usr/local/amd64/python/lib -Wl,-R/usr/local/amd64/python/lib -L.
-lgcc_s build/scons/numpy/linalg/sconf/conftest_0.o
  |

It then goes on to discover that it needs main:

scons: Configure: build/scons/numpy/linalg/sconf/conftest_1 is up to date.
scons: Configure: The original builder output was:
  |gfortran -o build/scons/numpy/linalg/sconf/conftest_1 -O3 -g
-L/usr/local/lib/gcc-4.3.1/amd64 -Wl,-R/usr/local/lib/gcc-4.3.1/amd64
-L/usr/local/amd64/python/lib -Wl,-R/usr/local/amd64/python/lib -L.
-lgcc_s build/scons/numpy/linalg/sconf/conftest_1.o
  |
scons: Configure: (cached) MAIN__.


Doesn't this clearly indicate that a dummy main is needed?  I'm
working around this with a silly library that just has the MAIN__
symbol in it, but I'd love to do without that.

Thanks,

Peter


[Numpy-discussion] How to unitize a array in numpy?

2008-12-09 Thread Grissiom
Hi all,

Nice to meet you all. I am a newbie in numpy. Is there any function that
could unitize an array?
Thanks in advance.

-- 
Cheers,
Grissiom


Re: [Numpy-discussion] How to unitize a array in numpy?

2008-12-09 Thread Robert Kern
On Tue, Dec 9, 2008 at 20:24, Grissiom [EMAIL PROTECTED] wrote:
 Hi all,

 Nice to meet you all. I am a newbie in numpy. Is there any function that
 could unitize an array?

What do you mean by unitize?

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] How to unitize a array in numpy?

2008-12-09 Thread Robert Kern
On Tue, Dec 9, 2008 at 20:24, Grissiom [EMAIL PROTECTED] wrote:
 Hi all,

 Nice to meet you all. I am a newbie in numpy. Is there any function that
 could unitize an array?

If you mean like the Mathematica function Unitize[] defined here:

  http://reference.wolfram.com/mathematica/ref/Unitize.html

Then .astype(bool) is probably sufficient.
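A minimal sketch of that Unitize-like mapping (zero stays zero, any nonzero value becomes one):

```python
import numpy as np

# Unitize-style step function: 0 -> 0, nonzero -> 1.
a = np.array([0.0, 2.0, 0.0, -3.5])
unitized = a.astype(bool).astype(int)
print(unitized.tolist())  # [0, 1, 0, 1]
```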

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] how do I delete unused matrix to save the memory?

2008-12-09 Thread Vagabond_Aero
I have the same problem.  I tried the del command below, but found that it
removes the names of the ndarrays from memory, but does not free up the
memory on my XP system (python 2.5.2, numpy 1.2.1).  Regular python objects
release their memory when I use the del command, but it looks like the
ndarray objects do not.

On Mon, Dec 8, 2008 at 22:00, Travis Vaught [EMAIL PROTECTED] wrote:

 Try:

 del(myvariable)

 Travis

 On Dec 8, 2008, at 7:15 PM, frank wang [EMAIL PROTECTED] wrote:

 Hi,

 I have a program with some variables that consume a lot of memory. The first
 time I run it, it is fine. The second time I run it, I get a MemoryError.
 If I close ipython and reopen it again, then I can run the program once.
 I am looking for a command to delete an intermediate variable once it is
 not used, to save memory, like matlab's clear command.

 Thanks

 Frank




Re: [Numpy-discussion] how do I delete unused matrix to save the memory?

2008-12-09 Thread Robert Kern
On Tue, Dec 9, 2008 at 20:40, Vagabond_Aero [EMAIL PROTECTED] wrote:
 I have the same problem.  I tried the del command below, but found that it
 removes the names of the ndarrays from memory, but does not free up the
 memory on my XP system (python 2.5.2, numpy 1.2.1).  Regular python objects
 release their memory when I use the del command, but it looks like the
 ndarray objects do not.

It's not guaranteed that the regular Python objects return memory to
the OS, either. The memory should be reused when Python allocates new
memory, though, so I suspect that this is not the problem that Frank
is seeing.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Numscons issues: numpy.core.umath_tests not built, built-in ld detection, MAIN__ not being set-up

2008-12-09 Thread Charles R Harris
On Tue, Dec 9, 2008 at 4:50 PM, Peter Norton 
[EMAIL PROTECTED] wrote:

 I've got a few issues that I hope won't be overwhelming on one message:

 (1) Because of some issues in the past in building numpy with
 numscons, the numpy.core.umath_tests don't get built with
 numpy+numscons (at least not as of svn version 6128).

 $ python -c 'import numpy; print numpy.__version__; import
 numpy.core.umath_tests'
 1.3.0.dev6139
 Traceback (most recent call last):
  File "<string>", line 1, in <module>
 ImportError: No module named umath_tests

 What needs to be done to get this module incorporated into the numscons
 build?


It's commented out of the usual setup.py file as well, because of
blas/lapack linkage problems that need to be fixed; I was working on other
things. It's probably time to fix it.

Chuck


Re: [Numpy-discussion] How to unitize a array in numpy?

2008-12-09 Thread Grissiom
On Wed, Dec 10, 2008 at 10:36, Robert Kern [EMAIL PROTECTED] wrote:

 On Tue, Dec 9, 2008 at 20:24, Grissiom [EMAIL PROTECTED] wrote:
  Hi all,
 
  Nice to meet you all. I am a newbie in numpy. Is there any function that
  could unitize an array?

 If you mean like the Mathematica function Unitize[] defined here:

  http://reference.wolfram.com/mathematica/ref/Unitize.html

 Then .astype(bool) is probably sufficient.

 --
 Robert Kern


I'm sorry for my poor English. I mean a function that could return a unit
vector which has the same direction as the original one. Thanks.

-- 
Cheers,
Grissiom


Re: [Numpy-discussion] Importance of order when summing values in an array

2008-12-09 Thread Charles R Harris
On Tue, Dec 9, 2008 at 1:40 PM, Robert Kern [EMAIL PROTECTED] wrote:

 On Tue, Dec 9, 2008 at 09:51, Nadav Horesh [EMAIL PROTECTED] wrote:
  As much as I know float128 are in fact 80 bits (64 mantissa + 16
 exponent) so the precision is 18-19 digits (not 34)

 float128 should be 128 bits wide. If it's not on your platform, please
 let us know as that is a bug in your build.


I think he means the actual precision is the ieee extended precision, the
number just happens to be stored into larger chunks of memory for alignment
purposes.

Chuck


Re: [Numpy-discussion] how do I delete unused matrix to save the memory?

2008-12-09 Thread Robert Kern
On Mon, Dec 8, 2008 at 19:15, frank wang [EMAIL PROTECTED] wrote:
 Hi,

 I have a program with some variables that consume a lot of memory. The first time
 I run it, it is fine. The second time I run it, I get a MemoryError. If I
 close ipython and reopen it again, then I can run the program once. I am
 looking for a command to delete an intermediate variable once it is not
 used, to save memory, like matlab's clear command.

How are you running this program? Be aware that IPython may be holding
on to objects and preventing them from being deallocated. For example:

In [7]: !cat memtest.py
class A(object):
    def __del__(self):
        print 'Deleting %r' % self


a = A()

In [8]: %run memtest.py

In [9]: %run memtest.py

In [10]: %run memtest.py

In [11]: del a

In [12]:
Do you really want to exit ([y]/n)?

$ python memtest.py
Deleting <__main__.A object at 0x915ab0>


You can remove some of these references with %reset and maybe a
gc.collect() for good measure.


In [1]: %run memtest

In [2]: %run memtest

In [3]: %run memtest

In [4]: %reset
Once deleted, variables cannot be recovered. Proceed (y/[n])?  y
Deleting <__main__.A object at 0xf3e950>
Deleting <__main__.A object at 0xf3e6d0>
Deleting <__main__.A object at 0xf3e930>
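As a small self-contained illustration of why gc.collect() can help (the reference cycle here is deliberately contrived; plain reference counting cannot free it):

```python
import gc

class Node(object):
    pass

# Build a reference cycle: neither object's refcount ever drops to zero,
# so only the cyclic garbage collector can reclaim them.
a = Node()
b = Node()
a.other = b
b.other = a
del a, b

# gc.collect() runs a full collection and returns the number of
# unreachable objects it found.
collected = gc.collect()
print(collected > 0)  # True
```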

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] How to unitize a array in numpy?

2008-12-09 Thread Robert Kern
On Tue, Dec 9, 2008 at 20:56, Grissiom [EMAIL PROTECTED] wrote:
 On Wed, Dec 10, 2008 at 10:36, Robert Kern [EMAIL PROTECTED] wrote:

 On Tue, Dec 9, 2008 at 20:24, Grissiom [EMAIL PROTECTED] wrote:
  Hi all,
 
  Nice to meet you all. I am a newbie in numpy. Is there any function that
  could unitize an array?

 If you mean like the Mathematica function Unitize[] defined here:

  http://reference.wolfram.com/mathematica/ref/Unitize.html

 Then .astype(bool) is probably sufficient.

 --
 Robert Kern

 I'm sorry for my poor English. I mean a function that could return a unit
 vector which has the same direction as the original one. Thanks.

v / numpy.linalg.norm(v)
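Wrapped in a tiny helper, with a guard I would add for the zero vector (whose norm is 0, so the division would otherwise produce NaNs):

```python
import numpy as np

def unitize(v):
    """Return the unit vector with the same direction as v."""
    n = np.linalg.norm(v)
    if n == 0:
        raise ValueError("cannot unitize the zero vector")
    return v / n

u = unitize(np.array([3.0, 4.0]))
print(u.tolist())  # [0.6, 0.8]
```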

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] How to unitize a array in numpy?

2008-12-09 Thread Grissiom
On Wed, Dec 10, 2008 at 11:04, Robert Kern [EMAIL PROTECTED] wrote:

 v / numpy.linalg.norm(v)


Thanks a lot ~;)

-- 
Cheers,
Grissiom


Re: [Numpy-discussion] Importance of order when summing values in an array

2008-12-09 Thread Robert Kern
On Tue, Dec 9, 2008 at 21:01, Charles R Harris
[EMAIL PROTECTED] wrote:


 On Tue, Dec 9, 2008 at 1:40 PM, Robert Kern [EMAIL PROTECTED] wrote:

 On Tue, Dec 9, 2008 at 09:51, Nadav Horesh [EMAIL PROTECTED] wrote:
  As much as I know float128 are in fact 80 bits (64 mantissa + 16
  exponent) so the precision is 18-19 digits (not 34)

 float128 should be 128 bits wide. If it's not on your platform, please
 let us know as that is a bug in your build.

 I think he means the actual precision is the ieee extended precision, the
 number just happens to be stored into larger chunks of memory for alignment
 purposes.

Ah, that's good to know. Yes, float128 on my Intel Mac behaves this way.

In [12]: f = finfo(float128)

In [13]: f.nmant
Out[13]: 63

In [14]: f.nexp
Out[14]: 15

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Importance of order when summing values in an array

2008-12-09 Thread Charles R Harris
On Tue, Dec 9, 2008 at 8:10 PM, Robert Kern [EMAIL PROTECTED] wrote:

 On Tue, Dec 9, 2008 at 21:01, Charles R Harris
 [EMAIL PROTECTED] wrote:
 
 
  On Tue, Dec 9, 2008 at 1:40 PM, Robert Kern [EMAIL PROTECTED]
 wrote:
 
  On Tue, Dec 9, 2008 at 09:51, Nadav Horesh [EMAIL PROTECTED]
 wrote:
   As much as I know float128 are in fact 80 bits (64 mantissa + 16
   exponent) so the precision is 18-19 digits (not 34)
 
  float128 should be 128 bits wide. If it's not on your platform, please
  let us know as that is a bug in your build.
 
  I think he means the actual precision is the ieee extended precision, the
  number just happens to be stored into larger chunks of memory for
 alignment
  purposes.

 Ah, that's good to know. Yes, float128 on my Intel Mac behaves this way.

 In [12]: f = finfo(float128)

 In [13]: f.nmant
 Out[13]: 63

 In [14]: f.nexp
 Out[14]: 15


Yep. That's the reason I worry a bit about what will happen when IEEE quad
precision comes out; it really is 128 bits wide and the normal identifiers
won't account for the difference. I expect C will just call them long
doubles and they will get the 'g' letter code just like extended precision
does now.

Chuck


Re: [Numpy-discussion] how do I delete unused matrix to save the memory?

2008-12-09 Thread Scott Sinclair
 2008/12/10 Robert Kern [EMAIL PROTECTED]:
 On Mon, Dec 8, 2008 at 19:15, frank wang [EMAIL PROTECTED] wrote:
 Hi,

 I have a program with some variables that consume a lot of memory. The first time
 I run it, it is fine. The second time I run it, I get a MemoryError. If I
 close ipython and reopen it again, then I can run the program once. I am
 looking for a command to delete an intermediate variable once it is not
 used, to save memory, like matlab's clear command.

 How are you running this program? Be aware that IPython may be holding
 on to objects and preventing them from being deallocated. For example:

 In [7]: !cat memtest.py
 class A(object):
     def __del__(self):
         print 'Deleting %r' % self


 a = A()

 In [8]: %run memtest.py

 In [9]: %run memtest.py

 In [10]: %run memtest.py

 In [11]: del a

 In [12]:
 Do you really want to exit ([y]/n)?

 $ python memtest.py
 Deleting <__main__.A object at 0x915ab0>


 You can remove some of these references with %reset and maybe a
 gc.collect() for good measure.

Of course, if you don't need to have access to the variables created
in your program from the IPython session, you can run the program in a
separate python process:

In [1]: !python memtest.py
Deleting <__main__.A object at 0xb7da5ccc>

In [2]: !python memtest.py
Deleting <__main__.A object at 0xb7e5fccc>

Cheers,
Scott


Re: [Numpy-discussion] Numscons issues: numpy.core.umath_tests not built, built-in ld detection, MAIN__ not being set-up

2008-12-09 Thread David Cournapeau
Peter Norton wrote:
 I've got a few issues that I hope won't be overwhelming on one message:

 (1) Because of some issues in the past in building numpy with
 numscons, the numpy.core.umath_tests don't get built with
 numpy+numscons (at least not as of svn version 6128).

 $ python -c 'import numpy; print numpy.__version__; import
 numpy.core.umath_tests'
 1.3.0.dev6139
 Traceback (most recent call last):
   File "<string>", line 1, in <module>
 ImportError: No module named umath_tests

 What needs to be done to get this module incorporated into the numscons build?

You should not need this module; it is not built in the normal build
of numpy either. Did you do a clean build (rm -rf build and removing the
install directory first)? It was enabled before but is commented out ATM.


 (2) I've found that in numscons-0.9.4, the detection of the correct
 linker assumes that if gcc is in use, the linker is gnu ld. However,
 on solaris this isn't the recommended toolchain, so it's typical to
 build gcc with gnu as and the solaris /usr/ccs/bin/ld under the hood.
 What this means is that when setting a run_path in the binary (which
 we need to do) the linker flags are set to -Wl,-rpath=<library>.
 However, this isn't valid for the Solaris ld. It needs -R<libname>, or
 -Wl,-R<libname>. I'm pretty sure that on Solaris trying to link a
 library with -Wl,-rpath= and looking for an error should be enough to
 determine the correct format for the linker.

Scons and hence numscons indeed assume that the linker is the same as
the compiler by default. It would be possible to avoid this by detecting
the linker at runtime, to bypass scons tools choice, like I do for C,
C++ and Fortran compilers. The whole scons tools sub-system is
unfortunately very limited ATM, so there is a lot of manual work to do
(that's actually what most of the code in numscons/core is for).


 (3) Numscons tries to check for the need for a MAIN__ function when
 linking with gfortran. However, any libraries built with numscons come
 out with an unsatisfied dependency on MAIN__. The log looks like this
 in build/scons/numpy/linalg/config.log looks like this:

It may be linked to the sun linker problem above. Actually, the dummy
main detection is not used at all for the building -  it is necessary to
detect name mangling used by the fortran compiler, but that's it. I
assumed that a dummy main was never needed for shared libraries, but
that assumption may well be ill founded.

I never had problems related to this on open solaris, with both native
and gcc toolchains, so I am willing to investiage first whether it is
linked to the sun linker problem or not.

Unfortunately, I won't have the time to work on this in the next few
months because of my PhD thesis; the sun linker problem can be fixed by
following a strategy similar to compilers, in
numscons/core/initialization.py. You first need to add a detection
scheme for the linker in compiler_detection.py.

David


Re: [Numpy-discussion] genloadtxt : last call

2008-12-09 Thread Gael Varoquaux
On Tue, Dec 09, 2008 at 01:34:29AM -0800, Jarrod Millman wrote:
 It was decided last year that numpy io should provide simple, generic,
 core io functionality.  While scipy io would provide more domain- or
 application-specific io code (e.g., Matlab IO, WAV IO, etc.)  My
 vision for scipy io, which I know isn't shared, is to be more or less
 aiming to be all inclusive (e.g., all image, sound, and data formats).
  (That is a different discussion; just wanted it to be clear where I
 stand.)

Can we get Matthew Brett's nifti reader in there? Please! Pretty please.
That way I can do neuroimaging without compiled code outside of a
standard scientific Python install.

Gaël


[Numpy-discussion] Some numpy statistics

2008-12-09 Thread Charles R Harris
Hi All,

I bumped into this while searching for something else:
http://www.ohloh.net/p/numpy/analyses/latest

Chuck


Re: [Numpy-discussion] Some numpy statistics

2008-12-09 Thread robert . kern
On Wed, Dec 10, 2008 at 01:49, Charles R Harris
[EMAIL PROTECTED] wrote:
 Hi All,

 I bumped into this while searching for something else:
 http://www.ohloh.net/p/numpy/analyses/latest

-14 lines of Javascript?

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Importance of order when summing values in an array

2008-12-09 Thread Nadav Horesh
float128 are 16 bytes wide but have the structure of x87 80-bits + extra 6 
bytes for alignment:
From http://lwn.net/2001/features/OLS/pdf/pdf/x86-64.pdf:
... The x87 stack with 80-bit precision is only used for long double.

And:

>>> e47 = float128(1e-47)
>>> e30 = float128(1e-30)
>>> e50 = float128(1e-50)
>>> (e30-e50) == e30
True
>>> (e30-e47) == e30
False
 

This shows that float128 has no more than 19 digits of precision.

  Nadav.

-----Original Message-----
From: [EMAIL PROTECTED] on behalf of Robert Kern
Sent: Tue 09-December-2008 22:40
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Importance of order when summing values in an array
 
On Tue, Dec 9, 2008 at 09:51, Nadav Horesh [EMAIL PROTECTED] wrote:
 As much as I know float128 are in fact 80 bits (64 mantissa + 16 exponent) so 
 the precision is 18-19 digits (not 34)

float128 should be 128 bits wide. If it's not on your platform, please
let us know as that is a bug in your build.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco