Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Vincent Schut
Wayne Watson wrote:
 I have a list that already has the frequencies from 0 to 255. However, 
 I'd like to make a histogram  that has say 32 bins whose ranges are 0-7, 
 8-15, ... 248-255. Is it possible?
 
Wayne,

you might find the 'numpy example list with doc' webpage quite 
informative... http://www.scipy.org/Numpy_Example_List_With_Doc (give it 
some time to load, it's pretty large...)
For new users (I was one once...) it usually takes some time to find the 
usual suspects in numpy/scipy help and docs... This one page has really 
become invaluable for me.

It gives you the docstrings for numpy functions, often including some 
example code.

If you check out the histogram() function, you'll see it takes a 'bins=' 
argument:

bins : int or sequence of scalars, optional
 If `bins` is an int, it defines the number of equal-width
 bins in the given range (10, by default). If `bins` is a sequence,
 it defines the bin edges, including the rightmost edge, allowing
 for non-uniform bin widths.

So, if your bins are known, you can pass them to numpy.histogram, either 
as a number of bins (if they are of equal width), if necessary combined with 
the 'range=' parameter to specify the range to divide into equal bins, or as 
bin edges (e.g. in your case (0, 8, 16, ..., 256), or 
numpy.linspace(0, 256, 33), which will give you these edges nicely).

If you don't specify the 'range=' parameter, it will check the min and 
max from your input data and use that as lower and upper bounds.
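
For example (an untested sketch; 'data' here just stands in for your raw 
pixel values):

import numpy as np

data = np.random.randint(0, 256, size=10000)   # stand-in for your pixel values

# 32 equal-width bins via explicit edges 0, 8, 16, ..., 256
counts, edges = np.histogram(data, bins=np.linspace(0, 256, 33))

# equivalent: give the number of bins plus the range to divide up
counts2, edges2 = np.histogram(data, bins=32, range=(0, 256))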

Good luck learning numpy! :)

Vincent.



Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Pauli Virtanen
to, 2009-11-26 kello 17:37 -0700, Charles R Harris kirjoitti:
[clip]
 I'm not clear on your recommendation here, is it that we should use
 bytes, with unicode converted to UTF8?

The point is that I don't think we can just decide to use Unicode or
Bytes in all places where PyString was used earlier. Which one it will
be should depend on the use. Users will expect that e.g. array([1,2,3],
dtype='f4') still works, and that they don't have to do array([1,2,3],
dtype=b'f4').

To summarize the use cases I've run across so far:

1) For 'S' dtype, I believe we use Bytes for the raw data and the
   interface.

   Maybe we want to introduce a separate bytes dtype that's an alias
   for 'S'?

2) The field names:

a = array([], dtype=[('a', int)])
a = array([], dtype=[(b'a', int)])

This is somewhat of an internal issue. We need to decide whether we
internally coerce input to Unicode or Bytes. Or whether we allow for
both Unicode and Bytes (but preserving previous semantics in this case
requires extra work, due to semantic changes in PyDict).

Currently, there's some code in Numpy to allow for Unicode field names,
but it's not been coherently implemented in all places, so e.g. direct
creation of dtypes with unicode field names fails.

This has also implications on field titles, as also those are stored in
the fields dict.

3) Format strings

a = array([], dtype=b'i4')

I don't think it makes sense to handle format strings in Unicode
internally -- they should always be coerced to bytes. This will make
things easier at many points, since it will be enough to do

PyBytes_AS_STRING(str)

to get the char* pointer, rather than having to encode to UTF-8 first.
The same goes for all other similar uses of strings, e.g. protocol
descriptors. User input should just be coerced to ASCII on input, I
believe (see the sketch below).

The problem here is that preserving repr() in this case requires some
extra work. But maybe that has to be done.
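
For illustration, the coercion I have in mind is roughly the following
(a Python-level sketch only, assuming Python 3 str/bytes semantics, not
the actual C code; the helper name is made up):

def coerce_format(spec):
    # accept both str and bytes from users, store bytes internally;
    # format strings are ASCII by construction, so reject anything else
    if isinstance(spec, str):
        return spec.encode('ascii')
    return bytes(spec)

coerce_format('i4')    # b'i4'
coerce_format(b'i4')   # b'i4'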

 Will that support arrays that have been pickled and such?

Are the pickles backward compatible between Python 2 and 3 at all?
I think using Bytes for format strings will be backward-compatible.

Field names are then a bit more difficult. Actually, we'll probably just
have to coerce them to either Bytes or Unicode internally, since we'll
need to do that on unpickling if we want to be backward-compatible.

 Or will we just have a minimum of code to fix up?

I think we will need in any case to replace all use of PyString in Numpy
by PyBytes or PyUnicode, depending on context, and #define PyString
PyBytes for Python 2.

This seems to be the easiest way to make sure we have fixed all points
that need fixing.

Currently, 193 of 800 numpy.core tests don't pass, and this seems
largely due to Bytes vs. Unicode issues.

 And could you expand on the changes that repr() might undergo?

The main thing is that

dtype('i4')
dtype([('a', 'i4')])

may become

dtype(b'i4')
dtype([(b'a', b'i4')])

Of course, we can write and #ifdef separate repr formatting code for
Py3, but this is a bit of extra work.

 Mind, I think using bytes sounds best, but I haven't looked into the
 whole strings part of the transition and don't have an informed
 opinion on the matter.

***

By the way, should I commit this stuff (after factoring the commits to
logical chunks) to SVN?

It does not break anything for Python 2, at least as far as the test
suite is concerned.

Pauli




Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread David Cournapeau
Pauli Virtanen wrote:
 By the way, should I commit this stuff (after factoring the commits to
 logical chunks) to SVN?
   

I would prefer getting at least one py3 buildbot before doing anything
significant,

cheers,

David


Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Pauli Virtanen
pe, 2009-11-27 kello 18:30 +0900, David Cournapeau kirjoitti:
 Pauli Virtanen wrote:
  By the way, should I commit this stuff (after factoring the commits to
  logical chunks) to SVN?

 I would prefer getting at least one py3 buildbot before doing anything
 significant,

I can add it to mine:
http://buildbot.scipy.org/builders/Linux_x86_Ubuntu/builds/279/steps/shell_1/logs/stdio

It already does 2.4, 2.5 and 2.6.

Pauli




Re: [Numpy-discussion] Installing numpy under cygwin

2009-11-27 Thread Olivia Cheronet
Hi,

I have tried removing my entire numpy directory and building it again 
from a newly downloaded source (numpy-1.3.0.tar.gz), but it has made no 
difference. I still get the output below.

Thank you for the suggestions,

Olivia
...
...
...
creating build/temp.cygwin-1.5.25-i686-2.5
creating build/temp.cygwin-1.5.25-i686-2.5/build
creating build/temp.cygwin-1.5.25-i686-2.5/build/src.cygwin-1.5.25-i686-2.5
creating 
build/temp.cygwin-1.5.25-i686-2.5/build/src.cygwin-1.5.25-i686-2.5/numpy
creating 
build/temp.cygwin-1.5.25-i686-2.5/build/src.cygwin-1.5.25-i686-2.5/numpy/core
creating 
build/temp.cygwin-1.5.25-i686-2.5/build/src.cygwin-1.5.25-i686-2.5/numpy/core/src
compile options: '-Inumpy/core/include 
-Ibuild/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy -Inumpy/core/src 
-Inumpy/core/include -I/usr/include/python2.5 -c'
gcc: build/src.cygwin-1.5.25-i686-2.5/numpy/core/src/npy_math.c
numpy/core/src/npy_math.c.src:186: error: parse error before '/' token
numpy/core/src/npy_math.c.src:186: error: parse error before '/' token
error: Command gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall 
-Wstrict-prototypes -Inumpy/core/include 
-Ibuild/src.cygwin-1.5.25-i686-2.5/numpy/core/include/numpy -Inumpy/core/src 
-Inumpy/core/include -I/usr/include/python2.5 -c 
build/src.cygwin-1.5.25-i686-2.5/numpy/core/src/npy_math.c -o 
build/temp.cygwin-1.5.25-i686-2.5/build/src.cygwin-1.5.25-i686-2.5/numpy/core/src/npy_math.o
 failed with exit status 1



- Original Message 
 
 I have just tested a fresh svn checkout, and could built numpy
 correctly on cygwin. I would suggest you update your sources, and
 build from scratch (i.e. remove the entire build directory and start
 from scratch).
 
 David



  


Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Francesc Alted
A Friday 27 November 2009 10:47:53 Pauli Virtanen escrigué:
 1) For 'S' dtype, I believe we use Bytes for the raw data and the
interface.
 
Maybe we want to introduce a separate bytes dtype that's an alias
for 'S'?

Yeah.  As regular strings in Python 3 are Unicode, I think that introducing a 
separate bytes dtype would help with the transition.  Meanwhile, the following 
should still work:

In [2]: s = np.array(['asa'], dtype='S10')

In [3]: s[0]
Out[3]: 'asa'  # will become b'asa' in Python 3

In [4]: s.dtype.itemsize
Out[4]: 10 # still 1-byte per element

Also, I suppose that there will be issues with the current Unicode support in 
NumPy:

In [5]: u = np.array(['asa'], dtype='U10')

In [6]: u[0]
Out[6]: u'asa'  # will become 'asa' in Python 3

In [7]: u.dtype.itemsize
Out[7]: 40  # not sure about the size in Python 3

For example, if it is true that internal strings in Python 3 are UTF-8 Unicode 
(as René seems to suggest), I suppose that the internal conversions from 
2-byte or 4-byte representations (depending on how the Python interpreter has 
been compiled) in the NumPy Unicode dtype to the new Python string would have 
to be reworked (perhaps you have dealt with that already).

Cheers,

-- 
Francesc Alted


[Numpy-discussion] New behavior for correlate w.r.t. swapped input: deprecate current default for 1.4.0 ?

2009-11-27 Thread David Cournapeau
Hi,

The function correlate can now implement what is generally accepted
as the definition of correlation, but you need to request this behavior
ATM (by default, it still does the wrong thing). Is it ok to deprecate
the default? I.e. the behavior stays exactly the same for now, but we warn
users about the change, and in 1.5.0 the default becomes the new behavior.
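
For reference, here is a rough sketch of the difference for complex input
(the generally accepted definition conjugates the second argument; the
array values below are made up):

import numpy as np

a = np.array([1+1j, 2+0j, 3-1j])
v = np.array([0+1j, 1+0j])

# generally accepted definition: c[k] = sum_n a[n+k] * conj(v[n]),
# i.e. a convolution with the reversed conjugate of v
accepted = np.convolve(a, np.conj(v)[::-1], mode='full')

# omitting the conjugation (roughly what the current default computes
# for complex input) gives a different answer
no_conj = np.convolve(a, v[::-1], mode='full')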

cheers,

David


Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Pauli Virtanen
pe, 2009-11-27 kello 11:17 +0100, Francesc Alted kirjoitti:
 A Friday 27 November 2009 10:47:53 Pauli Virtanen escrigué:
  1) For 'S' dtype, I believe we use Bytes for the raw data and the
 interface.
  
 Maybe we want to introduce a separate bytes dtype that's an alias
 for 'S'?
 
 Yeah.  As regular strings in Python 3 are Unicode, I think that introducing 
 separate bytes dtype would help doing the transition.  Meanwhile, the next 
 should still work:
 
 In [2]: s = np.array(['asa'], dtype='S10')
 
 In [3]: s[0]
 Out[3]: 'asa'  # will become b'asa' in Python 3
 
 In [4]: s.dtype.itemsize
 Out[4]: 10 # still 1-byte per element

Yes. But now I wonder, should

array(['foo'], str)
array(['foo'])

be of dtype 'S' or 'U' in Python 3? I think I'm leaning towards 'U',
which will mean some code breakage -- but there's probably no
avoiding it.
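
That is, one would then expect something like this on Python 3 (a sketch,
with the resulting dtypes shown as comments):

import numpy as np

np.array(['foo']).dtype    # 'U3' if plain strings map to the unicode dtype
np.array([b'foo']).dtype   # 'S3' -- explicit bytes would stay as 'S'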

[clip]
 Also, I suppose that there will be issues with the current Unicode support in 
 NumPy:
 
 In [5]: u = np.array(['asa'], dtype='U10')
 
 In [6]: u[0]
 Out[6]: u'asa'  # will become 'asa' in Python 3
 
 In [7]: u.dtype.itemsize
 Out[7]: 40  # not sure about the size in Python 3

I suspect the Unicode stuff will keep working without major changes,
except maybe dropping the u in repr. It is difficult to believe the
CPython guys would have significantly changed the current Unicode
implementation, if they didn't bother changing the names of the
functions :)

 For example, if it is true that internal strings in Python 3 and Unicode 
 UTF-8 
 (as René seems to suggest), I suppose that the internal conversions from 2-
 bytes or 4-bytes (depending on how the Python interpreter has been compiled) 
 in NumPy Unicode dtype to the new Python string should have to be reworked 
 (perhaps you have dealt with that already).

I don't think they are internally UTF-8:
http://docs.python.org/3.1/c-api/unicode.html

Python’s default builds use a 16-bit type for Py_UNICODE and store
Unicode values internally as UCS2.

-- 
Pauli Virtanen




Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Francesc Alted
A Friday 27 November 2009 11:27:00 Pauli Virtanen escrigué:
 Yes. But now I wonder, should
 
   array(['foo'], str)
   array(['foo'])
 
 be of dtype 'S' or 'U' in Python 3? I think I'm leaning towards 'U',
 which will mean unavoidable code breakage -- there's probably no
 avoiding it.

Mmh, you are right.  Yes, this seems to be difficult to solve.  Well, I'm 
changing my mind and think that both 'str' and 'S' should stand for Unicode in 
NumPy for Python 3.  If people are aware of the change in Python 3, they 
should be expecting the same change to happen in NumPy too, I guess.  Then, I 
suppose that a new 'bytes' dtype that replaces the existing string one would be 
absolutely necessary.

  Also, I suppose that there will be issues with the current Unicode
  support in NumPy:
 
  In [5]: u = np.array(['asa'], dtype='U10')
 
  In [6]: u[0]
  Out[6]: u'asa'  # will become 'asa' in Python 3
 
  In [7]: u.dtype.itemsize
  Out[7]: 40  # not sure about the size in Python 3
 
 I suspect the Unicode stuff will keep working without major changes,
 except maybe dropping the u in repr. It is difficult to believe the
 CPython guys would have significantly changed the current Unicode
 implementation, if they didn't bother changing the names of the
 functions :)
 
  For example, if it is true that internal strings in Python 3 and Unicode
  UTF-8 (as René seems to suggest), I suppose that the internal conversions
  from 2- bytes or 4-bytes (depending on how the Python interpreter has
  been compiled) in NumPy Unicode dtype to the new Python string should
  have to be reworked (perhaps you have dealt with that already).
 
 I don't think they are internally UTF-8:
 http://docs.python.org/3.1/c-api/unicode.html
 
 Python’s default builds use a 16-bit type for Py_UNICODE and store
 Unicode values internally as UCS2.

Ah!  No changes for that matter.  Much better then.

-- 
Francesc Alted


Re: [Numpy-discussion] Installing numpy under cygwin

2009-11-27 Thread David Cournapeau
On Fri, Nov 27, 2009 at 7:11 PM, Olivia Cheronet
cheronetoli...@yahoo.com wrote:
 Hi,

 I have tried to remove my entire numpy directory and starting to build it 
 again from a newly downloaded source (numpy-1.3.0.tar.gz), but it has made no 
 difference.

Please update to the trunk - I can see the error as well for 1.3.0,
and the trunk does build correctly on cygwin. I don't understand where
the error is coming from in 1.3.0; it almost looks like a cpp bug.

cheers,

David


Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread René Dudfield
On Fri, Nov 27, 2009 at 11:50 AM, Francesc Alted fal...@pytables.org wrote:
 A Friday 27 November 2009 11:27:00 Pauli Virtanen escrigué:
 Yes. But now I wonder, should

       array(['foo'], str)
       array(['foo'])

 be of dtype 'S' or 'U' in Python 3? I think I'm leaning towards 'U',
 which will mean unavoidable code breakage -- there's probably no
 avoiding it.

 Mmh, you are right.  Yes, this seems to be difficult to solve.  Well, I'm
 changing my mind and think that both 'str' and 'S' should stand for Unicode in
 NumPy for Python 3.  If people is aware of the change for Python 3, they
 should be expecting the same change happening in NumPy too, I guess.  Then, I
 suppose that a new dtype bytes that replaces the existing string would be
 absolutely necessary.

  Also, I suppose that there will be issues with the current Unicode
  support in NumPy:
 
  In [5]: u = np.array(['asa'], dtype='U10')
 
  In [6]: u[0]
  Out[6]: u'asa'  # will become 'asa' in Python 3
 
  In [7]: u.dtype.itemsize
  Out[7]: 40      # not sure about the size in Python 3

 I suspect the Unicode stuff will keep working without major changes,
 except maybe dropping the u in repr. It is difficult to believe the
 CPython guys would have significantly changed the current Unicode
 implementation, if they didn't bother changing the names of the
 functions :)

  For example, if it is true that internal strings in Python 3 and Unicode
  UTF-8 (as René seems to suggest), I suppose that the internal conversions
  from 2- bytes or 4-bytes (depending on how the Python interpreter has
  been compiled) in NumPy Unicode dtype to the new Python string should
  have to be reworked (perhaps you have dealt with that already).

 I don't think they are internally UTF-8:
 http://docs.python.org/3.1/c-api/unicode.html

 Python’s default builds use a 16-bit type for Py_UNICODE and store
 Unicode values internally as UCS2.

 Ah!  No changes for that matter.  Much better then.



Hello,


in py3...

>>> 'Hello\u0020World !'.encode()
b'Hello World !'
>>> 'Äpfel'.encode('utf-8')
b'\xc3\x84pfel'
>>> 'Äpfel'.encode()
b'\xc3\x84pfel'

The default encoding does appear to be utf-8 in py3.

Although it is compiled with something different, and stores it internally 
as something different, that is, UCS2 or UCS4.

I imagine dtype 'S' and 'U' need more clarification, as they seem to miss
the concept of encodings.  Currently, 'S' appears to mean 8-bit
characters with no encoding, and 'U' appears to mean 16-bit characters with
no encoding?  Or are some sort of default encodings assumed?

2to3/3to2 fixers will probably have to be written for users' code
here... whatever is decided.  At least warnings should be generated,
I'm guessing.


btw, in my numpy tree there is a unicode_() alias to str in py3, and
to unicode in py2 (inside the compat.py file).  This helped us in many
cases with compatible string code in the pygame port.  This allows you
to create unicode strings on both platforms with the same code.



cheers,


Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Pauli Virtanen
pe, 2009-11-27 kello 13:23 +0100, René Dudfield kirjoitti:
[clip]
 I imagine dtype 'S' and 'U' need more clarification.  As it misses the
 concept of encodings it seems?  Currently, S appears to mean 8bit
 characters no encoding, and U appears to mean 16bit characters no
 encoding?  Or are some sort of default encodings assumed?

Currently in Numpy on Python 2, 'S' is the same as Python 3 bytes, and 'U'
is the same as Python 3 unicode, probably in the same internal representation
(need to check). Neither is associated with encoding info.

We need probably to change the meaning of 'S', as Francesc noted, and
add a separate bytes dtype.

 2to3/3to2 fixers will probably have to be written for users code
 here... whatever is decided.  At least warnings should be generated
 I'm guessing.

Possibly. Does 2to3 support plugins? If yes, it could be possible to
write one.

 btw, in my numpy tree there is a unicode_() alias to str in py3, and
 to unicode in py2 (inside the compat.py file).  This helped us in many
 cases with compatible string code in the pygame port.  This allows you
 to create unicode strings on both platforms with the same code.

Yes, I saw that. The name unicode_ is however already taken by the Numpy
scalar type, so we need to think of a different name for it. asstring,
maybe.
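
Something along these lines in compat.py, I mean (a minimal sketch):

import sys

if sys.version_info[0] >= 3:
    asstring = str
else:
    asstring = unicode   # only exists on Python 2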

Btw, do you want to rebase your distutils changes on top of my tree? I
tried yours out quickly, but there were some issues there that prevented
distutils from working. (Also, you can use absolute imports both for
Python 2 and 3 -- there's probably no need to use relative imports.)

Pauli




Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Francesc Alted
A Friday 27 November 2009 13:23:10 René Dudfield escrigué:
  I don't think they are internally UTF-8:
  http://docs.python.org/3.1/c-api/unicode.html
 
  Python’s default builds use a 16-bit type for Py_UNICODE and store
  Unicode values internally as UCS2.
 
  Ah!  No changes for that matter.  Much better then.
 
 Hello,
 
 
 in py3...
 
 >>> 'Hello\u0020World !'.encode()
 b'Hello World !'
 >>> 'Äpfel'.encode('utf-8')
 b'\xc3\x84pfel'
 >>> 'Äpfel'.encode()
 b'\xc3\x84pfel'
 
 The default encoding does appear to be utf-8 in py3.
 
 Although it is compiled with something different, and stores it as
 something different, that is UCS2 or UCS4.

OK.  One thing is the default encoding for Unicode, and another is how 
Python keeps Unicode internally.  And internally Python 3 still uses UCS2 
or UCS4, i.e. the same thing as in Python 2, so no worries here.

 I imagine dtype 'S' and 'U' need more clarification.  As it misses the
 concept of encodings it seems?  Currently, S appears to mean 8bit
 characters no encoding, and U appears to mean 16bit characters no
 encoding?  Or are some sort of default encodings assumed?
[clip]

You only need an encoding if you are going to represent Unicode strings with 
other types (for example bytes).  Currently, NumPy can transparently 
import/export native Python Unicode strings (UCS2 or UCS4) into its own 
Unicode type (always UCS4).  So, we don't have to worry here either.
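
A quick check (sketch):

import numpy as np

u = np.array([u'asa'], dtype='U10')
u.dtype.itemsize    # 40 -> 4 bytes (UCS4) per character in NumPy's 'U' type

s = np.array(['asa'], dtype='S10')
s.dtype.itemsize    # 10 -> raw bytes, one per character, no encoding attached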

 btw, in my numpy tree there is a unicode_() alias to str in py3, and
 to unicode in py2 (inside the compat.py file).  This helped us in many
 cases with compatible string code in the pygame port.  This allows you
 to create unicode strings on both platforms with the same code.

Correct.  But, in addition, we are going to need a new 'bytes' dtype for NumPy 
for Python 3, right?

-- 
Francesc Alted


Re: [Numpy-discussion] New behavior for correlate w.r.t. swapped input: deprecate current default for 1.4.0 ?

2009-11-27 Thread Charles R Harris
On Fri, Nov 27, 2009 at 3:04 AM, David Cournapeau 
da...@ar.media.kyoto-u.ac.jp wrote:

 Hi,

The function correlate can now implement what is generally accepted
 as the definition of correlation, but you need to request this behavior
 ATM (by default, it still does the wrong thing). Is it ok to deprecate
 the default ? I.e. the behavior is exactly the same, but we warn users
 about the change, and in 1.5.0, the default is to use the new behavior.


What exactly are the differences? IIRC, there is a change for complex
arguments and what looks like a bugfix for real arguments. Is there a way
to warn *only* when the behavior will be different?

Chuck


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Wayne Watson
Thanks. That sounds like it should help a lot. Finding meaningful 
examples anywhere hasn't been easy. I thought I'd look through Amazon 
for books on Python and scientific uses. I found almost all were written 
by authors outside the US, and none seemed to talk about items like 
matplotlib. Ezdraw or something like that was often cited. I'm 
definitely in a learning stage, and much of what I need is in graphics 
to support some data analysis that I'm doing.

Glad to hear it can gather bins into groups. It would be very 
disappointing if such a mechanism did not exist. In the distant past, 
I've all too often had to write my own histogram programs for this, in 
FORTRAN, etc.  My data is from a 640x480 collection of b/w pixels, which 
a processor has binned from 0-255, so I don't want to repeat doing a 
histogram on 307K data points.

Vincent Schut wrote:
 Wayne Watson wrote:
   
 I have a list that already has the frequencies from 0 to 255. However, 
 I'd like to make a histogram  that has say 32 bins whose ranges are 0-7, 
 8-15, ... 248-255. Is it possible?

 
 Wayne,

 you might find the 'numpy example list with doc' webpage quite 
 informative... http://www.scipy.org/Numpy_Example_List_With_Doc (give it 
 some time to load, it's pretty large...)
 For new users (I was one once...) it usually takes some time to find the 
 usual suspects in numpy/scipy help and docs... This one page has really 
 become unvaluable for me.

 It gives you the docstrings for numpy functions, often including some 
 example code.

 If you check out the histogram() function, you'll see it takes a 'bins=' 
 argument:

 bins : int or sequence of scalars, optional
  If `bins` is an int, it defines the number of equal-width
  bins in the given range (10, by default). If `bins` is a sequence,
  it defines the bin edges, including the rightmost edge, allowing
  for non-uniform bin widths.

 So, if your bins are known, you can pass it to numpy.histogram, either 
 as number-of-bins (if equal width), if necessary combined with the 
 'range=' parameter to specify the range to divide into equal bins, or as 
 bin edges (e.g. in your case: (0, 8, 16, ... 256) or 
 numpy.linspace(0,256,33) which will give you this range nicely.

 If you don't specify the 'range=' parameter, it will check the min and 
 max from your input data and use that as lower and upper bounds.

 Good luck learning numpy! :)

 Vincent.


   

-- 
   Wayne Watson (Watson Adventures, Prop., Nevada City, CA)

 (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
  Obz Site:  39° 15' 7 N, 121° 2' 32 W, 2700 feet  

   350 350 350 350 350 350 350 350 350 350
 Make the number famous. See 350.org
The major event has passed, but keep the number alive.
 
Web Page: www.speckledwithstars.net/



[Numpy-discussion] Is anyone knowledgeable about dll deployment on windows ?

2009-11-27 Thread Eloi Gaudry
David,

I know this discussion first took place months ago, but I'd like to know 
whether or not you found a solution to the SxS assemblies issues using 
MSVC 9.0.

In case you haven't (the binaries package for windows is built using 
mingw), I'd like to know whether it would be possible to relocate all numpy 
*.pyd files from their original location (i.e. site-packages/numpy/core/, 
etc.) to the main executable's directory (i.e. next to python.exe).
AFAIK, this would eventually solve the redistribution issue we are 
facing with python extensions built under the SxS policy. Indeed, both 
Microsoft.VC90.CRT.manifest and msvcr90.dll files are located next to 
the python.exe executable in the standard distribution 
(http://www.python.org/ftp/python/2.6.4/python-2.6.4.msi). This way, all 
numpy *.pyd extensions would be able to use these files (considering that 
python and numpy are built using the same revision of the CRT library). 
IIRC, the SxS look-up sequence starts with the dirname of the 
executable... (i.e. ./python.exe/.)

Regards,
Eloi


-- 


Eloi Gaudry

Free Field Technologies
Axis Park Louvain-la-Neuve
Rue Emile Francqui, 1
B-1435 Mont-Saint Guibert
BELGIUM

Company Phone: +32 10 487 959
Company Fax:   +32 10 454 626



Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread René Dudfield
On Fri, Nov 27, 2009 at 1:41 PM, Pauli Virtanen p...@iki.fi wrote:
 2to3/3to2 fixers will probably have to be written for users code
 here... whatever is decided.  At least warnings should be generated
 I'm guessing.

 Possibly. Does 2to3 support plugins? If yes, it could be possible to
 write one.

You can put them in here:
[lib_dir]lib2to3/fixes/fix_*.py

I'm not sure about how to use custom ones without just copying them
in... need to research that.

There's no documentation about how to write custom ones here:
http://docs.python.org/library/2to3.html

You can pass lib2to3 a package to try import fixers from.  However I'm
not sure how to make that appear from the command line, other than
copying the fixer into place.  I guess the numpy setup script could
copy the fixer into place.




 btw, in my numpy tree there is a unicode_() alias to str in py3, and
 to unicode in py2 (inside the compat.py file).  This helped us in many
 cases with compatible string code in the pygame port.  This allows you
 to create unicode strings on both platforms with the same code.

 Yes, I saw that. The name unicode_ is however already taken by the Numpy
 scalar type, so we need to think of a different name for it. asstring,
 maybe.

something like numpy.compat.unicode_ ?


 Btw, do you want to rebase your distutils changes on top of my tree? I
 tried yours out quickly, but there were some issues there that prevented
 distutils from working. (Also, you can use absolute imports both for
 Python 2 and 3 -- there's probably no need to use relative imports.)

        Pauli


hey,

yeah I definitely would :)   I don't have much time for the next week
or so though.


cu,


Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread René Dudfield
On Fri, Nov 27, 2009 at 1:49 PM, Francesc Alted fal...@pytables.org wrote:
 Correct.  But, in addition, we are going to need a new 'bytes' dtype for NumPy
 for Python 3, right?

I think so.  However, I think S is probably closest to bytes... and
maybe S can be reused for bytes... I'm not sure though.

Also, what will a bytes dtype mean within a py2 program context?  Does
it matter if the bytes dtype just fails somehow if used in a py2
program?

cheers,


Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread René Dudfield
On Fri, Nov 27, 2009 at 3:07 PM, René Dudfield ren...@gmail.com wrote:

 hey,

 yeah I definitely would :)   I don't have much time for the next week
 or so though.

btw, feel free to just copy whatever you like from there into your tree.

cheers,


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread josef . pktd
On Fri, Nov 27, 2009 at 8:43 AM, Wayne Watson
sierra_mtnv...@sbcglobal.net wrote:
 Thanks. That sounds like it should help a lot. Finding meaningful
 examples anywhere hasn't been easy. I thought I'd look through Amazon
 for books on Python and scientific uses. I found almost all were written
 by authors outside the US, and none seemed to talk about items like
 matplotlib. Ezdraw or something like that was often cited. I'm
 definitely in a learning stage, and much of what I need is in graphics
 to support some data analysis that I'm doing.

 Glad to hear it can gather bins into groups. It would be very
 disappointing if such a mechanism did not  exist. In the distant past,
 I've all too often had to write my own histogram programs for this,
 FORTRAN, etc.  My data is from a 640x480 collection of b/w pixels, which
 a processor has binned from 0-255, so I don't want repeat doing a
 histogram on 307K data points.

 Vincent Schut wrote:
 Wayne Watson wrote:

 I have a list that already has the frequencies from 0 to 255. However,
 I'd like to make a histogram  that has say 32 bins whose ranges are 0-7,
 8-15, ... 248-255. Is it possible?


 Wayne,

 you might find the 'numpy example list with doc' webpage quite
 informative... http://www.scipy.org/Numpy_Example_List_With_Doc (give it
 some time to load, it's pretty large...)
 For new users (I was one once...) it usually takes some time to find the
 usual suspects in numpy/scipy help and docs... This one page has really
 become unvaluable for me.

 It gives you the docstrings for numpy functions, often including some
 example code.

Numpy_Example_List_With_Doc is for an older version of numpy and hasn't
been kept up to date. So if your results don't match up, then the function
might have changed and the official docs will have the current description.

from numpy import *
is not recommended anymore; it messes up the global namespace too much.

Besides the example list, I found
http://scipy.org/Numpy_Functions_by_Category very helpful, because it
gives a better overview of which functions do similar things. In the
current docs, the See Also sections can now be used in a similar way.

Good luck,

Josef


 If you check out the histogram() function, you'll see it takes a 'bins='
 argument:

 bins : int or sequence of scalars, optional
      If `bins` is an int, it defines the number of equal-width
      bins in the given range (10, by default). If `bins` is a sequence,
      it defines the bin edges, including the rightmost edge, allowing
      for non-uniform bin widths.

 So, if your bins are known, you can pass it to numpy.histogram, either
 as number-of-bins (if equal width), if necessary combined with the
 'range=' parameter to specify the range to divide into equal bins, or as
 bin edges (e.g. in your case: (0, 8, 16, ... 256) or
 numpy.linspace(0,256,33) which will give you this range nicely.

 If you don't specify the 'range=' parameter, it will check the min and
 max from your input data and use that as lower and upper bounds.

 Good luck learning numpy! :)

 Vincent.




 --
           Wayne Watson (Watson Adventures, Prop., Nevada City, CA)

             (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
              Obz Site:  39° 15' 7 N, 121° 2' 32 W, 2700 feet

                   350 350 350 350 350 350 350 350 350 350
                     Make the number famous. See 350.org
            The major event has passed, but keep the number alive.

                    Web Page: www.speckledwithstars.net/




Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Francesc Alted
A Friday 27 November 2009 15:09:00 René Dudfield escrigué:
 On Fri, Nov 27, 2009 at 1:49 PM, Francesc Alted fal...@pytables.org wrote:
  Correct.  But, in addition, we are going to need a new 'bytes' dtype for
  NumPy for Python 3, right?
 
 I think so.  However, I think S is probably closest to bytes... and
 maybe S can be reused for bytes... I'm not sure though.

That could be a good idea because it would ensure compatibility with 
existing NumPy scripts (i.e. old 'string' dtypes are mapped to 'bytes', as 
they should be).  The only thing that I don't like is that 'S' seems to be 
the initial letter for 'string', which is actually 'unicode' in Python 3 :-/
But, for the sake of compatibility, we can probably live with that.

 Also, what will a bytes dtype mean within a py2 program context?  Does
 it matter if the bytes dtype just fails somehow if used in a py2
 program?

Mmh, I'm of the opinion that the new 'bytes' type should be available only 
with NumPy for Python 3.  Would that be possible?

-- 
Francesc Alted


Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Pauli Virtanen
pe, 2009-11-27 kello 16:33 +0100, Francesc Alted kirjoitti:
 A Friday 27 November 2009 15:09:00 René Dudfield escrigué:
  On Fri, Nov 27, 2009 at 1:49 PM, Francesc Alted fal...@pytables.org wrote:
   Correct.  But, in addition, we are going to need a new 'bytes' dtype for
   NumPy for Python 3, right?
  
  I think so.  However, I think S is probably closest to bytes... and
  maybe S can be reused for bytes... I'm not sure though.
 
 That could be a good idea because that would ensure compatibility with 
 existing NumPy scripts (i.e. old 'string' dtypes are mapped to 'bytes', as it 
 should).  The only thing that I don't like is that that 'S' seems to be the 
 initial letter for 'string', which is actually 'unicode' in Python 3 :-/
 But, for the sake of compatibility, we can probably live with that.

Well, we can deprecate 'S' (i.e. never show it in repr, always only 'B'
or 'U').

  Also, what will a bytes dtype mean within a py2 program context?  Does
  it matter if the bytes dtype just fails somehow if used in a py2
  program?
 
 Mmh, I'm of the opinion that the new 'bytes' type should be available only 
 with NumPy for Python 3.  Would that be possible?

I don't see a problem in making a bytes_ scalar type available for
Python2. In fact, it would be useful for making upgrading to Py3 easier.

-- 
Pauli Virtanen




Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Francesc Alted
A Friday 27 November 2009 16:41:04 Pauli Virtanen escrigué:
   I think so.  However, I think S is probably closest to bytes... and
   maybe S can be reused for bytes... I'm not sure though.
 
  That could be a good idea because that would ensure compatibility with
  existing NumPy scripts (i.e. old 'string' dtypes are mapped to 'bytes',
  as it should).  The only thing that I don't like is that that 'S' seems
  to be the initial letter for 'string', which is actually 'unicode' in
  Python 3 :-/ But, for the sake of compatibility, we can probably live
  with that.
 
 Well, we can deprecate 'S' (ie. never show it in repr, always only 'B'
 or 'U').

Well, deprecating 'S' seems a sensible option too.  But why only avoid 
showing it in repr?  Why not issue a DeprecationWarning too?

   Also, what will a bytes dtype mean within a py2 program context?  Does
   it matter if the bytes dtype just fails somehow if used in a py2
   program?
 
  Mmh, I'm of the opinion that the new 'bytes' type should be available
  only with NumPy for Python 3.  Would that be possible?
 
 I don't see a problem in making a bytes_ scalar type available for
 Python2. In fact, it would be useful for making upgrading to Py3 easier.

I think introducing a bytes_ scalar dtype can be somewhat confusing for Python 
2 users.  But if the 'S' typecode is to be deprecated also for NumPy for 
Python 2, then it makes perfect sense to introduce bytes_ there too.

-- 
Francesc Alted


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Wayne Watson
It's good to have some extra references for NumPy.

Actually, it looks like exercising histogram in NumPy has gotten me past 
the difficulties with hist in matplotlib. Is there a matplotlib or 
Pylab mailing list? It uses hist, which looks very much like histogram but 
has some parameters that I need to understand better.

-- 
   Wayne Watson (Watson Adventures, Prop., Nevada City, CA)

 (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
  Obz Site:  39° 15' 7 N, 121° 2' 32 W, 2700 feet  

   350 350 350 350 350 350 350 350 350 350
 Make the number famous. See 350.org
The major event has passed, but keep the number alive.
 
Web Page: www.speckledwithstars.net/



Re: [Numpy-discussion] Is anyone knowledgeable about dll deployment on windows ?

2009-11-27 Thread David Cournapeau
On Fri, Nov 27, 2009 at 11:01 PM, Eloi Gaudry e...@fft.be wrote:
 David,

 I know this discussion first took place months ago, but I'd like to know
 whether or not you find a solution to the SxS assemblies issues using
 MSVC9.0.

Which issue exactly are you talking about? The one when python is installed
for a single user (when installed for all users, the C runtime is installed
in the SxS)?


 In case you haven't (the binaries package for windows is built using
 mingw), I'd like to know if this would be possible to relocate all numpy
 *.pyd from their original location (i.e. site-packages/numpy/core/,
 etc.) to the main executable one (i.e. python.exe).

This sounds like a very bad idea: if packages start doing that, there
will quickly be clashes between extensions.

You would need to describe the exact scenario which is failing: how
python is installed, how you build numpy, what is not working with
which message, etc...

David


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Skipper Seabold
On Fri, Nov 27, 2009 at 12:14 PM, Wayne Watson
sierra_mtnv...@sbcglobal.net wrote:
 It's good to have some extra references for NumPy.

 Actually, it looks like exercising histogram in NunPy has gotten me past
 the difficulties with hist in matplotlib. I Is there a matplotlib or
 Pylab mailing list. It uses hist and looks very much like histogram, but
 has some parameters that I need to understand better. .


I don't know if this has come up yet in your questions, but matplotlib
has a mailing list that can be found here:
http://matplotlib.sourceforge.net/

There are also *numerous* examples that I have found indispensable in
getting over the initial learning curve if you click on examples from
their sourceforge docs.

It is my (quite possibly incorrect) understanding that
matplotlib.pylab has many of the same functions as numpy (and
leverages numpy in most(?) instances when possible).  I think that is
why it is recommended that numpy users who only wish to use the
plotting functionality of matplotlib do

from matplotlib import pyplot as plt  # or whatever

In any case, have a look through the examples in the matplotlib docs.
There may also be more examples installed with matplotlib itself.  I
don't know if they're all in the online docs.

Skipper


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Christopher Barker
Wayne Watson wrote:
 Yes, I'm just beginning to deal with the contents of NumPy, SciLab, and 
 SciPy. They all have seemed part of one another, but I think I see how 
 they've divided up the game. 

For the record:

I know this is a bit confusing, particularly for someone used to an 
integrated package like Matlab, etc., but there is a lot of power and 
flexibility gained by the divisions:

Python: is a general-purpose, extensible programming language

Numpy: is a package of classes, functions, etc. that provides 
facilities for numeric computation -- primarily an n-d array class and 
the utilities to use it.

Matplotlib (MPL): is a plotting package, built on top of numpy -- it was 
originally designed to somewhat mimic the plotting interface of Matlab. 
MPL is the most commonly used plotting package for numpy, but by no 
means the only one.

Pylab: Is a package that integrates matplotlib and numpy and an 
assortment of other utilities into one namespace, making it more like 
Matlab -- personally, I think you should avoid using it: it makes it a 
bit easier to type code, but harder to know where the heck what you are 
doing is coming from.

SciPy: Is a broad collection of assorted utilities that facilitate 
scientific computing, built on Numpy -- it is also sometimes used as an 
umbrella term for anything connected to scientific computing with Python 
(i.e. the SciPy conferences)


These distinctions are a bit confusing (particularly MPL-numpy), because 
MPL includes a number of utility functions that combine computation and 
plotting: like hist, which both computes a histogram and plots it as a 
bar chart in one call -- it's a convenient way to perform a common 
operation, but it does blur the lines a bit!
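
For instance (a rough sketch; the data here is made up):

import numpy as np
from matplotlib import pyplot as plt

data = np.random.randn(1000)

# numpy: computation only
counts, edges = np.histogram(data, bins=20)
plt.bar(edges[:-1], counts, width=np.diff(edges))

# matplotlib: computes the histogram *and* plots it in one call
plt.hist(data, bins=20)
plt.show()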

By the way -- there is also potentially a bit of confusion as to how MPL 
uses/interacts with the command line and GUI toolkits. This is because 
MPL can be used with a number of different GUI front-ends (or none), and 
they tend to take over control from the command line. Which brings us to:

iPython: an enhanced python interactive interpreter command line system. 
It adds many nice features that make using python in interactive mode 
nicer. In particular, it adds a --pylab mode that helps it play well 
with MPL. You won't regret using it!


 I thought I'd look through Amazon 
 for books on Python and scientific uses. I found almost all were written 
 by authors outside the US, and none seemed to talk about items like 
 matplotlib.

FWIW, a book about MPL has just been published -- I don't know any more 
about it, but I'm sure google will tell you.

 Is there a matplotlib or Pylab mailing list?

There certainly is:

https://lists.sourceforge.net/lists/listinfo/matplotlib-users

And yes, that is the place for such questions.


HTH,

-Chris



-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Skipper Seabold
On Fri, Nov 27, 2009 at 12:41 PM, Christopher Barker
chris.bar...@noaa.gov wrote:
 Wayne Watson wrote:
 Yes, I'm just beginning to deal with the contents of NumPy, SciLab, and
 SciPy. They all have seemed part of one another, but I think I see how
 they've divided up the game.

 For the record:

 I know this is a bit confusing, particularly for someone used to an
 integrated package like Matlab, etc, but there is a lot of power an
 flexibility gained by the divisions:

 Python: is a general-purpose, extensible programming language

 Numpy: is a set of package of classes, functions, etc that provide
 facilities for numeric computation -- primarily a n-d array class and
 the utilities to use it.

 Matplotlib (MPL): is a plotting package, built on top of numpy -- it was
 originally designed to somewhat mimic the plotting interface of Matlab.
 MPL is the most commonly used plotting package for numpy, but by no
 means the only one.

 Pylab: Is a package that integrates matplotlib and numpy and an
 assortment of other utilities into one namespace, making it more like
 Matlab -- personally, I think you should avoid using it, it makes it a
 bit easier to type code, but harder to know where the heck what you are
 doing is coming from.

 SciPy: Is a broad collection of assorted utilities that facilitate
 scientific computing, built on Numpy -- it is also sometimes used as an
 umbrella term for anything connected to scientific computing with Python
 (i.e. the SciPy conferences)


 These distinctions are a bit confusing (particularly MPL-numpy), because
 MPL includes a number of utility functions that combine computation and
 plotting: like hist, which both computes a histogram, and plots it as
 bar chart in one call -- it's a convenient way to perform a common
 operation, but it does blur the lines a bit!

 By the way -- there is also potentially a bit of confusion as to how MPL
 uses/interacts with the command line and GUI toolkits. This is because
 MPL can be used with a number of different GUI front-ends (or none), and
 they tend to take over control from the command line. Which brings up to:

 iPython: an enhanced python interactive interpreter command line system.
 It adds many nice features that make using python in interactive mode
 nicer. IN particularly, it adds a --pylab mode that helps it play well
 with MPL. You won't regret using it!


 I thought I'd look through Amazon
 for books on Python and scientific uses. I found almost all were written
 by authors outside the US, and none seemed to talk about items like
 matplotlib.

 FWIW, a book about MPL has just been published -- I don't know any more
 about it, but I'm sure google will tell you.

 Is there a matplotlib or Pylab mailing list?

 There certainly is:

 https://lists.sourceforge.net/lists/listinfo/matplotlib-users

 And yes, that is the place for such questions.


 HTH,

 -Chris


Well put, Chris.  It took me a long time to get my head around these
distinctions, and then only after others pointed out my errors in
understanding.  This kind of info might be useful to other newcomers
somewhere...  http://www.scipy.org/History_of_SciPy?  Thoughts on
posting this on the wiki here?

Skipper


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread josef . pktd
On Fri, Nov 27, 2009 at 12:57 PM, Skipper Seabold jsseab...@gmail.com wrote:
 On Fri, Nov 27, 2009 at 12:41 PM, Christopher Barker
 chris.bar...@noaa.gov wrote:
 Wayne Watson wrote:
 Yes, I'm just beginning to deal with the contents of NumPy, SciLab, and
 SciPy. They all have seemed part of one another, but I think I see how
 they've divided up the game.

 For the record:

 I know this is a bit confusing, particularly for someone used to an
 integrated package like Matlab, etc, but there is a lot of power an
 flexibility gained by the divisions:

 Python: is a general-purpose, extensible programming language

 Numpy: is a set of package of classes, functions, etc that provide
 facilities for numeric computation -- primarily a n-d array class and
 the utilities to use it.

 Matplotlib (MPL): is a plotting package, built on top of numpy -- it was
 originally designed to somewhat mimic the plotting interface of Matlab.
 MPL is the most commonly used plotting package for numpy, but by no
 means the only one.

 Pylab: Is a package that integrates matplotlib and numpy and an
 assortment of other utilities into one namespace, making it more like
 Matlab -- personally, I think you should avoid using it, it makes it a
 bit easier to type code, but harder to know where the heck what you are
 doing is coming from.

 SciPy: Is a broad collection of assorted utilities that facilitate
 scientific computing, built on Numpy -- it is also sometimes used as an
 umbrella term for anything connected to scientific computing with Python
 (i.e. the SciPy conferences)


 These distinctions are a bit confusing (particularly MPL-numpy), because
 MPL includes a number of utility functions that combine computation and
 plotting: like hist, which both computes a histogram, and plots it as
 bar chart in one call -- it's a convenient way to perform a common
 operation, but it does blur the lines a bit!

 By the way -- there is also potentially a bit of confusion as to how MPL
 uses/interacts with the command line and GUI toolkits. This is because
 MPL can be used with a number of different GUI front-ends (or none), and
 they tend to take over control from the command line. Which brings up to:

 iPython: an enhanced python interactive interpreter command line system.
 It adds many nice features that make using python in interactive mode
 nicer. IN particularly, it adds a --pylab mode that helps it play well
 with MPL. You won't regret using it!


 I thought I'd look through Amazon
 for books on Python and scientific uses. I found almost all were written
 by authors outside the US, and none seemed to talk about items like
 matplotlib.

 FWIW, a book about MPL has just been published -- I don't know any more
 about it, but I'm sure google will tell you.

 Is there a matplotlib or Pylab mailing list?

 There certainly is:

 https://lists.sourceforge.net/lists/listinfo/matplotlib-users

 And yes, that is the place for such questions.


 HTH,

 -Chris


 Well put, Chris.  It took me a long time get my head around these
 distinctions, and then only when others pointed out my errors in
 understanding.  This kind of info might be useful to other newcomers
 somewhere...  http://www.scipy.org/History_of_SciPy?  Thoughts on
 posting this on the wiki here?

I also agree. It will improve with the newly redesigned website for scipy.org.
However, I cannot find the link right now for the development version of
the new website.

Josef


 Skipper



Re: [Numpy-discussion] matrix inverse

2009-11-27 Thread Ralf Gommers
On Thu, Nov 26, 2009 at 5:55 PM, josef.p...@gmail.com wrote:

 On Thu, Nov 26, 2009 at 11:49 AM,  josef.p...@gmail.com wrote:
  On Thu, Nov 26, 2009 at 11:34 AM,  josef.p...@gmail.com wrote:
  why is
 http://docs.scipy.org/numpy/docs/numpy.matrixlib.defmatrix.matrix.I/
  classified as unimportant?   .T ?
 
  I was looking at http://projects.scipy.org/numpy/ticket/1093 and
  didn't find any docs for the matrix inverse.
 
  Ok looking some more, I found
  http://docs.scipy.org/numpy/docs/numpy.matrixlib.defmatrix.matrix.getI/
 
  I thought it might be problem with generating docs for attributes, but
  the docstring for .I seems editable.
 
  Aren't A.I and A.T the more common use than A.getI, ..?

 in defmatrix.py:

T = property(getT, None, doc="transpose")
A = property(getA, None, doc="base array")
A1 = property(getA1, None, doc="1-d base array")
H = property(getH, None, doc="hermitian (conjugate) transpose")
I = property(getI, None, doc="inverse")

 could take the docstrings from the get? function instead ?


That seems like a good idea. All that is needed is to delete the doc
arguments for those five properties.

Because those docstrings were marked unimportant, I think using the get?
docs was intended anyway.

Cheers,
Ralf


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Ralf Gommers
On Fri, Nov 27, 2009 at 7:15 PM, josef.p...@gmail.com wrote:

  Well put, Chris.  It took me a long time get my head around these
  distinctions, and then only when others pointed out my errors in
  understanding.  This kind of info might be useful to other newcomers
  somewhere...  http://www.scipy.org/History_of_SciPy?  Thoughts on
  posting this on the wiki here?


+1


 I also agree. It will improve with the newly redesigned website for
 scipy.org
 However, I cannot find the link right now for the development version of
 the new website.


http://new.scipy.org/

Ralf


Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Christopher Barker

 The point is that I don't think we can just decide to use Unicode or
 Bytes in all places where PyString was used earlier.

Agreed.

I think it's helpful to remember the origins of all this:


IMHO, there are two distinct types of data that Python2 strings support:

1) text: this is the traditional string.
2) bytes: raw bytes -- they could represent anything.

This, of course, is what the py3k string and bytes types are all about.

However, when python started, it just so happened that text was 
represented by an array of unsigned single byte integers, so there 
really was no point in having a bytes type, as a string would work 
just as well.

Enter unicode:

Now we have multiple ways of representing text internally, but want a 
single interface to that -- one that looks and acts like a sequence of 
characters to user's code. The result is that the unicode type was 
introduced.

In a way, unicode strings are a bit like arrays: they have an encoding 
associated with them (like a dtype in numpy). You can represent a given 
bit of text in multiple different arrangements of bytes, but they are all 
supposed to mean the same thing and, if you know the encoding, you can 
convert between them. This is kind of like how one can represent 5 in 
any of many dtypes: uint8, int16, int32, float32, float64, etc. Not every 
value representable in one dtype can be converted to all other dtypes, but 
many can. Just like encodings.

Anyway, all this brings me to think about the use of strings in numpy in 
this way: if it is meant to be a human-readable piece of text, it should 
be a unicode object. If not, then it is bytes.

So: fromstring and the like should, of course, work with bytes (though 
maybe buffers really...)

 Which one it will
 be should depend on the use. Users will expect that eg. array([1,2,3],
 dtype='f4') still works, and they don't have to do e.g. array([1,2,3],
 dtype=b'f4').

Personally, I try to use np.float32 instead, anyway, but I digress. In 
this case, the type code is supposed to be a human-readable bit of 
text -- it should be a unicode object (convertible to ascii for 
interfacing with C...)

If we used b'f4', it would confuse things, as it couldn't be printed 
right. Also: would the actual bytes involved potentially change 
depending on what encoding was used for the literal? i.e. if the code 
was written in utf16, would that byte string be 4 bytes long?

 To summarize the use cases I've ran across so far:
 
 1) For 'S' dtype, I believe we use Bytes for the raw data and the
interface.

I don't think so here. 'S' is usually used to store human-readable 
strings; I'd certainly expect to be able to do:

s_array = np.array(['this', 'that'], dtype='S10')

And I'd expect it to work with non-literals that were unicode strings, 
i.e. human readable text. In fact, it's pretty rare that I'd ever want 
bytes here. So I'd see 'S' mapped to 'U' here.

Francesc Alted wrote:
 the next  should still work:
 
 In [2]: s = np.array(['asa'], dtype="S10")
 
 In [3]: s[0]
 Out[3]: 'asa'  # will become b'asa' in Python 3

I don't like that -- I put in a string, and get a bytes object back?

 In [4]: s.dtype.itemsize
 Out[4]: 10 # still 1-byte per element

But what if the strings passed in aren't representable in one byte 
per character? Do we define 'S' as only supporting ANSI-only strings? 
What encoding?
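
To make the itemsize question concrete, here is a small illustration of what is at stake (just the sizes; how non-ASCII input should be handled is exactly the open question):

import numpy as np

# 'S10' reserves 10 bytes per element; 'U10' reserves 10 characters,
# stored internally as UCS-4, i.e. 40 bytes per element.
print(np.dtype('S10').itemsize)   # 10
print(np.dtype('U10').itemsize)   # 40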

Pauli Virtanen wrote:
 'U'
 is same as Python 3 unicode and probably in same internal representation
 (need to check). Neither is associated with encoding info.

Isn't it? I thought the encoding was always the same internally? so it 
is known?

Francesc Alted wrote:
 That could be a good idea because that would ensure compatibility with 
 existing NumPy scripts (i.e. old 'string' dtypes are mapped to 'bytes', as it 
 should).

What do you mean by compatible? It would mean a lot of user code would 
have to change with the 2-to-3 transition.

 The only thing that I don't like is that 'S' seems to be the 
 initial letter for 'string', which is actually 'unicode' in Python 3 :-/
 But, for the sake of compatibility, we can probably live with that.

I suppose we could at least deprecate it.

 Also, what will a bytes dtype mean within a py2 program context?  Does
 it matter if the bytes dtype just fails somehow if used in a py2
 program?

well, it should work in 2.6 anyway.

Maybe we want to introduce a separate bytes dtype that's an alias
for 'S'?

What do we need bytes for? does it support anything that np.uint8 
doesn't?


 2) The field names:
 
   a = array([], dtype=[('a', int)])
   a = array([], dtype=[(b'a', int)])
 
 This is somewhat of an internal issue. We need to decide whether we
 internally coerce input to Unicode or Bytes.

Unicode is clear to me here -- it really should match what Python does 
for variable names -- that is unicode in py3k, no?

 3) Format strings
 
   a = array([], dtype=b'i4')
 
 I don't think it makes sense to handle format strings in Unicode
 internally -- they should always be coerced to bytes.

This should be fine -- we control what is a valid format string, and 
thus they can always be ASCII-safe.

Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Christopher Barker
josef.p...@gmail.com wrote:
 On Fri, Nov 27, 2009 at 12:57 PM, Skipper Seabold jsseab...@gmail.com wrote:

  This kind of info might be useful to other newcomers
 somewhere...  http://www.scipy.org/History_of_SciPy?  Thoughts on
 posting this on the wiki here?
 
 I also agree. It will improve with the newly redesigned website for scipy.org
 However, I cannot find the link right now for the development version of
 the new website.

Feel free to crib whatever you want from my post for that -- or suggest 
a place for me to put it, and I'll do it. I'm just not sure where it 
should go at this point.

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Wayne Watson
Lots of good suggestions. I'll pull them into a document for further 
reference.

-- 
   Wayne Watson (Watson Adventures, Prop., Nevada City, CA)

 (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
  Obz Site:  39° 15' 7 N, 121° 2' 32 W, 2700 feet  

   350 350 350 350 350 350 350 350 350 350
 Make the number famous. See 350.org
The major event has passed, but keep the number alive.
 
Web Page: www.speckledwithstars.net/

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Sebastian
Did you try using the parameter range?
I do something like this.
regards

ax = fig.add_subplot(1,1,1)
 pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
 n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)),
 range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
 normed=1, facecolor='y', alpha=0.5)
 ax.set_xlabel(r'\Large$ \rm{values}$')
 ax.set_ylabel(r'\Large Delatavalue/Value')


 gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)

 gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
 pylab.plot(gausx,gaus, color='red', lw=2)
 ax.set_xlim(-1.5, 1.5)
 ax.grid(True)


On Fri, Nov 27, 2009 at 4:38 PM, Christopher Barker
chris.bar...@noaa.govwrote:

 josef.p...@gmail.com wrote:
  On Fri, Nov 27, 2009 at 12:57 PM, Skipper Seabold jsseab...@gmail.com
 wrote:

   This kind of info might be useful to other newcomers
  somewhere...  http://www.scipy.org/History_of_SciPy?  Thoughts on
  posting this on the wiki here?
 
  I also agree. It will improve with the newly redesigned website for
 scipy.org
  However, I cannot find the link right now for the development version of
  the new website.

 Feel free to crib whatever you want from my post for that -- or suggest
 a place for me to put it, and I'll do it. I'm just not sure where it
 should go at this point.

 -Chris


 --
 Christopher Barker, Ph.D.
 Oceanographer

 Emergency Response Division
 NOAA/NOS/ORR(206) 526-6959   voice
 7600 Sand Point Way NE   (206) 526-6329   fax
 Seattle, WA  98115   (206) 526-6317   main reception

 chris.bar...@noaa.gov
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Pauli Virtanen
Fri, 2009-11-27 at 10:36 -0800, Christopher Barker wrote:
[clip]
  Which one it will
  be should depend on the use. Users will expect that eg. array([1,2,3],
  dtype='f4') still works, and they don't have to do e.g. array([1,2,3],
  dtype=b'f4').
 
 Personally, I try to use np.float32 instead, anyway, but I digress. In 
 this case, the type code is supposed to be a human-readable bit of 
 text -- it should be a unicode object (convertible to ascii for 
 interfacing with C...)

Yes, this would solve the repr() issue easily. Now that I look more
closely, the format strings are not actually used anywhere else than in
the descriptor user interface, so from an implementation POV Unicode is
not any harder.

[clip]
 Pauli Virtanen wrote:
  'U'
  is same as Python 3 unicode and probably in same internal representation
  (need to check). Neither is associated with encoding info.
 
 Isn't it? I thought the encoding was always the same internally? so it 
 is known?

Yes, so it needs not be associated with a separate piece of encoding
info.

[clip]
 Maybe we want to introduce a separate bytes dtype that's an alias
 for 'S'?
 
 What do we need bytes for? does it support anything that np.uint8 
 doesn't?

It has a string representation, but that's probably it.

Actually, in Python 3, when you index a bytes object, you get integers
back, so just aliasing bytes_ = uint8 and making sure array() handles
bytes objects appropriately would be more or less consistent.
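
A quick Python 3 illustration of that consistency (a sketch of the argument, not of anything NumPy currently does with bytes input):

import numpy as np

b = b'abc'
print(b[0])                            # 97 -- indexing bytes gives ints
a = np.frombuffer(b, dtype=np.uint8)
print(a[0])                            # 97 as well, as a uint8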

  2) The field names:
  
  a = array([], dtype=[('a', int)])
  a = array([], dtype=[(b'a', int)])
  
  This is somewhat of an internal issue. We need to decide whether we
  internally coerce input to Unicode or Bytes.
 
 Unicode is clear to me here -- it really should match what Python does 
 for variable names -- that is unicode in py3k, no?

Yep, let's follow Python. So Unicode and only Unicode it is.

***

Ok, thanks for the feedback. The right answers seem to be:

1) Unicode works as it is now, and Python3 strings are Unicode.

   Bytes objects are coerced to uint8 by array(). We don't do implicit
   conversions between Bytes and Unicode.

   The 'S' dtype character will be deprecated, never appear in repr(),
   and its usage will result in a warning.

2) Field names are unicode always.

   Some backward compatibility needs to be added in pickling, and
   maybe the npy file format needs a fixed encoding.

3) Dtype strings are an user interface detail, and will be Unicode.

-- 
Pauli Virtanen


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Anne Archibald
2009/11/27 Christopher Barker chris.bar...@noaa.gov:

 The point is that I don't think we can just decide to use Unicode or
 Bytes in all places where PyString was used earlier.

 Agreed.

I only half agree. It seems to me that for almost all situations where
PyString was used, the right data type is a python3 string (which is
unicode). I realize there may be some few cases where it is
appropriate to use bytes, but I think there needs to be a compelling
reason for each one.

 In a way, unicode strings are a bit like arrays: they have an encoding
 associated with them (like a dtype in numpy). You can represent a given
 bit of text in multiple different arangements of bytes, but they are all
 supposed to mean the same thing and, if you know the encoding, you can
 convert between them. This is kind of like how one can represent 5 in
 any of many dtypes: uint8, int16, int32, float32, float64, etc. Not any
 value represented by one dtype can be converted to all other dtypes, but
 many can. Just like encodings.

This is incorrect. Unicode objects do not have default encodings or
multiple internal representations (within a single python interpreter,
at least). Unicode objects use a 2- or 4-byte internal representation,
but this is almost invisible to the user. Encodings only
become relevant when you want to convert a unicode object to a byte
stream. It is usually an error to store text in a byte stream (for it
to make sense you must provide some mechanism to specify the
encoding).
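
A short Python 3 example of that point (encoding only enters the picture at the text/bytes boundary):

s = 'naïve'                  # a str: a sequence of characters, no encoding attached
utf8 = s.encode('utf-8')     # explicit encoding -> bytes
latin = s.encode('latin-1')  # a different byte representation of the same text
assert utf8.decode('utf-8') == latin.decode('latin-1') == s
print(len(s), len(utf8), len(latin))   # 5 characters, 6 bytes, 5 bytes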

 Anyway, all this brings me to think about the use of strings in numpy in
 this way: if it is meant to be a human-readable piece of text, it should
 be a unicode object. If not, then it is bytes.

 So: fromstring and the like should, of course, work with bytes (though
 maybe buffers really...)

I think if you're going to call it fromstring, it should convert from
strings (i.e. unicode strings). But really, I think it makes more
sense to rename it frombytes() and have it convert bytes objects. One
could then have
def fromstring(s, encoding='utf-8'):
    return frombytes(s.encode(encoding))
as a shortcut. Maybe ASCII makes more sense as a default encoding. But
really, think about where the user's going to get the string: most of
the time it's coming from a disk file or a network stream, so it will
be a byte string already, so they should use frombytes.
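
A hedged sketch of how that pair could behave, using np.frombuffer to stand in for the proposed frombytes (neither frombytes nor this fromstring wrapper exists in NumPy; they are just illustrations of the idea):

import numpy as np

def frombytes(b, dtype=np.uint8):
    # stand-in for the proposed frombytes(); np.frombuffer already
    # accepts a bytes object directly
    return np.frombuffer(b, dtype=dtype)

def fromstring(s, encoding='utf-8', dtype=np.uint8):
    # text goes through an explicit encoding step before touching raw bytes
    return frombytes(s.encode(encoding), dtype=dtype)

print(fromstring('abc'))   # [97 98 99]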

 To summarize the use cases I've ran across so far:

 1) For 'S' dtype, I believe we use Bytes for the raw data and the
    interface.

 I don't think so here. 'S' is usually used to store human-readable
 strings, I'd certainly expect to be able to do:

 s_array = np.array(['this', 'that'], dtype='S10')

 And I'd expect it to work with non-literals that were unicode strings,
 i.e. human readable text. In fact, it's pretty rare that I'd ever want
 bytes here. So I'd see 'S' mapped to 'U' here.

+1

 Francesc Alted wrote:
 the next  should still work:

 In [2]: s = np.array(['asa'], dtype="S10")

 In [3]: s[0]
 Out[3]: 'asa'  # will become b'asa' in Python 3

 I don't like that -- I put in a string, and get a bytes object back?

I agree.

 In [4]: s.dtype.itemsize
 Out[4]: 10     # still 1-byte per element

 But what if the strings passed in aren't representable in one byte
 per character? Do we define 'S' as only supporting ANSI-only strings?
 What encoding?

Itemsize will change. That's fine.

 3) Format strings

       a = array([], dtype=b'i4')

 I don't think it makes sense to handle format strings in Unicode
 internally -- they should always be coerced to bytes.

 This should be fine -- we control what is a valid format string, and
 thus they can always be ASCII-safe.

I have to disagree. Why should we force the user to use bytes? The
format strings are just that, strings, and we should be able to supply
python strings to them. Keep in mind that coercing strings to bytes
requires extra information, namely the encoding. If you want to
emulate python2's value-dependent coercion - raise an exception only
if non-ASCII is present - keep in mind that python3 is specifically
removing that behaviour because of the problems it caused.


Anne
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Christopher Barker
Anne Archibald wrote:

 I don't think it makes sense to handle format strings in Unicode
 internally -- they should always be coerced to bytes.
 This should be fine -- we control what is a valid format string, and
 thus they can always be ASCII-safe.
 
 I have to disagree. Why should we force the user to use bytes?

One of us mis-understood that -- I THINK the idea was that internally 
numpy would use bytes (for easy conversion to/from char*), but they 
would get converted, so the user could pass in unicode strings (or 
bytes). I guess the question remains as to what you'd get when you 
printed a format string.

  Keep in mind that coercing strings to bytes
 requires extra information, namely the encoding.

but that is built-in to the unicode object.

I think the idea is that a format string is ALWAYS ASCII -- if there are 
any other characters in there, it's an invalid format anyway.

Unless I mis-understand what a format string is. I think it's a string 
you use to represent a custom dtype -- is that right?

-Chris



-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR(206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] another numpy/ATLAS problem

2009-11-27 Thread David Warde-Farley
On 26-Nov-09, at 8:56 PM, Charles R Harris wrote:

 I never had luck with the netlib-lapack-tarfile=file
 option, it didn't build lapack correctly. Try doing them separately.

I just tried that.

-Ss flapack path/to/lapack_LINUX.a didn't seem to build a proper  
LAPACK (it was missing all sorts of stuff and was only about 800kb),  
and so I unarchived the liblapack.a that ATLAS spit out and added  
those files to the one built by Netlib's makefile. The result is still  
the same. I'm wondering if this is maybe a gfortran version issue,  
somehow.

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Bytes vs. Unicode in Python3

2009-11-27 Thread Dag Sverre Seljebotn
Francesc Alted wrote:
 On Friday 27 November 2009 16:41:04, Pauli Virtanen wrote:
 I think so.  However, I think S is probably closest to bytes... and
 maybe S can be reused for bytes... I'm not sure though.
 That could be a good idea because that would ensure compatibility with
 existing NumPy scripts (i.e. old 'string' dtypes are mapped to 'bytes',
 as it should).  The only thing that I don't like is that 'S' seems
 to be the initial letter for 'string', which is actually 'unicode' in
 Python 3 :-/ But, for the sake of compatibility, we can probably live
 with that.
 Well, we can deprecate 'S' (ie. never show it in repr, always only 'B'
 or 'U').
 
 Well, deprecating 'S' seems a sensible option too.  But why only avoid 
 showing it in repr?  Why not issue a DeprecationWarning too?

One thing to keep in mind here is that PEP 3118 actually defines a 
standard dtype format string, which is (mostly) incompatible with 
NumPy's. It should probably be supported as well when PEP 3118 is 
implemented.

Just something to keep in the back of one's mind when discussing this. 
For instance one could, instead of inventing something new, adopt the 
characters PEP 3118 uses (if there isn't a conflict):

  - b: Raw byte
  - c: ucs-1 encoding (latin 1, one byte)
  - u: ucs-2 encoding, two bytes
  - w: ucs-4 encoding, four bytes

Long-term I hope the NumPy-specific format string will be deprecated, so 
that repr prints out the PEP 3118 format string, etc. But I'm aware that 
API breakage shouldn't happen when porting to Python 3.
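
For comparison, the two kinds of format string can be inspected side by side (a sketch; it assumes a NumPy recent enough to export the PEP 3118 buffer interface, which, as noted above, had not yet been implemented at the time of this thread):

import numpy as np

a = np.zeros(3, dtype=np.int32)
print(a.dtype.str)            # NumPy's own format string, e.g. '<i4'
print(memoryview(a).format)   # PEP 3118 format string, e.g. 'i'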

-- 
Dag Sverre
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Wayne Watson
I tried this and it put ranges on y from 0 to 0.45 and x from 5 to 50.

import numpy as np
import pylab

v = np.array([20, 15,10,30, 50, 30, 20, 25, 10])
#Plot a normalized histogram
print np.linspace(0,50,10)
pylab.hist(v, normed=1, bins=np.linspace(0,9,10), range=(0,100))
pylab.show()

I added the two imports. I got a 'fig' error on the first line.
import pylab
import numpy

Shouldn't there be a pylab.show() in there?

ax = fig.add_subplot(1,1,1)
pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)), 
range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
 
normed=1, facecolor='y', alpha=0.5)
ax.set_xlabel(r'\Large$ \rm{values}$')
ax.set_ylabel(r'\Large Delatavalue/Value')

gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)
gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
pylab.plot(gausx,gaus, color='red', lw=2)
ax.set_xlim(-1.5, 1.5)
ax.grid(True)

Sebastian wrote:
 Did you try using the parameter range?
 I do something like this.
 regards

 ax = fig.add_subplot(1,1,1)
 pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
 n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)),
 
 range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
 normed=1, facecolor='y', alpha=0.5)
 ax.set_xlabel(r'\Large$ \rm{values}$')
 ax.set_ylabel(r'\Large Delatavalue/Value')

 
 gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)
 
 gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
 pylab.plot(gausx,gaus, color='red', lw=2)
 ax.set_xlim(-1.5, 1.5)
 ax.grid(True)


 On Fri, Nov 27, 2009 at 4:38 PM, Christopher Barker 
 chris.bar...@noaa.gov mailto:chris.bar...@noaa.gov wrote:

 josef.p...@gmail.com mailto:josef.p...@gmail.com wrote:
  On Fri, Nov 27, 2009 at 12:57 PM, Skipper Seabold
 jsseab...@gmail.com mailto:jsseab...@gmail.com wrote:

   This kind of info might be useful to other newcomers
  somewhere...  http://www.scipy.org/History_of_SciPy?  Thoughts on
  posting this on the wiki here?
 
  I also agree. It will improve with the newly redesigned website
 for scipy.org http://scipy.org
  However, I cannot find the link right now for the development
 version of
  the new website.

 Feel free to crib whatever you want from my post for that -- or
 suggest
 a place for me to put it, and I'll do it. I'm just not sure where it
 should go at this point.

 -Chris


 --
 Christopher Barker, Ph.D.
 Oceanographer

 Emergency Response Division
 NOAA/NOS/ORR(206) 526-6959   voice
 7600 Sand Point Way NE   (206) 526-6329   fax
 Seattle, WA  98115   (206) 526-6317   main reception

 chris.bar...@noaa.gov mailto:chris.bar...@noaa.gov
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org mailto:NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


 

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
   

-- 
   Wayne Watson (Watson Adventures, Prop., Nevada City, CA)

 (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
  Obz Site:  39° 15' 7 N, 121° 2' 32 W, 2700 feet  

   350 350 350 350 350 350 350 350 350 350
 Make the number famous. See 350.org
The major event has passed, but keep the number alive.
 
Web Page: www.speckledwithstars.net/

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] another numpy/ATLAS problem

2009-11-27 Thread David Warde-Farley
On 27-Nov-09, at 5:02 PM, Charles R Harris wrote:

 What version of ATLAS?

3.9.17, the latest development branch version. At some point in the  
changelog the author mentions he removed other methods of building with  
LAPACK, leaving only the tarfile one. I'm giving the entire build  
another try with gfortran-4.2 rather than gfortran-4.4, since I  
already had a problem on another Ubuntu 9.10 machine with gfortran 4.4  
and that exact same function, except it wasn't an unresolved symbol  
issue, rather it was an infinite loop:


http://mail.scipy.org/pipermail/numpy-discussion/2009-November/046373.html

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] another numpy/ATLAS problem

2009-11-27 Thread Charles R Harris
On Fri, Nov 27, 2009 at 4:18 PM, David Warde-Farley d...@cs.toronto.eduwrote:

 On 27-Nov-09, at 5:02 PM, Charles R Harris wrote:

  What version of ATLAS?

 3.9.17, the latest development branch version. At some point in the


3.9.12 segfaulted on me while running, so I haven't bothered with versions
after that. Why not try the stable version 3.8.3?

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] another numpy/ATLAS problem

2009-11-27 Thread David Cournapeau
On Sat, Nov 28, 2009 at 8:29 AM, Charles R Harris
charlesr.har...@gmail.com wrote:


 On Fri, Nov 27, 2009 at 4:18 PM, David Warde-Farley d...@cs.toronto.edu
 wrote:

 On 27-Nov-09, at 5:02 PM, Charles R Harris wrote:

  What version of ATLAS?

 3.9.17, the latest development branch version. At some point in the

 3.9.12 segfaulted on me while running, so I haven't bothered with versions
 after that. Why not try the stable version 3.8.3?

I guess because of the updated threading support.

I think the solution is to simply bypass the atlas mechanism to build
lapack, and do it manually.

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Sebastian
Hi Chris, yeah, there should be. Try the following:
import numpy
import matplotlib.pyplot as pylab
regards

On Fri, Nov 27, 2009 at 8:47 PM, Wayne Watson
sierra_mtnv...@sbcglobal.netwrote:

 I tried this and it put ranges on y from 0 to 0.45 and x from 5 to 50.

 import numpy as np
 import pylab

 v = np.array([20, 15,10,30, 50, 30, 20, 25, 10])
 #Plot a normalized histogram
 print np.linspace(0,50,10)
 pylab.hist(v, normed=1, bins=np.linspace(0,9,10), range=(0,100))
 pylab.show()

 I  added the two imports. I got a fig error on the first line.
 import pylab
 import numpy

 Shouldn't there by a pylab.Show in there?

 ax = fig.add_subplot(1,1,1)
 pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
 n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)),

 range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
 normed=1, facecolor='y', alpha=0.5)
 ax.set_xlabel(r'\Large$ \rm{values}$')
 ax.set_ylabel(r'\Large Delatavalue/Value')


 gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)

 gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
 pylab.plot(gausx,gaus, color='red', lw=2)
 ax.set_xlim(-1.5, 1.5)
 ax.grid(True)

 Sebastian wrote:
  Did you try using the parameter range?
  I do something like this.
  regards
 
  ax = fig.add_subplot(1,1,1)
  pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
  n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)),
 
 range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
  normed=1, facecolor='y', alpha=0.5)
  ax.set_xlabel(r'\Large$ \rm{values}$')
  ax.set_ylabel(r'\Large Delatavalue/Value')
 
 
 gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)
 
 gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
  pylab.plot(gausx,gaus, color='red', lw=2)
  ax.set_xlim(-1.5, 1.5)
  ax.grid(True)
 
 
  On Fri, Nov 27, 2009 at 4:38 PM, Christopher Barker
  chris.bar...@noaa.gov mailto:chris.bar...@noaa.gov wrote:
 
  josef.p...@gmail.com mailto:josef.p...@gmail.com wrote:
   On Fri, Nov 27, 2009 at 12:57 PM, Skipper Seabold
  jsseab...@gmail.com mailto:jsseab...@gmail.com wrote:
 
This kind of info might be useful to other newcomers
   somewhere...  http://www.scipy.org/History_of_SciPy?  Thoughts
 on
   posting this on the wiki here?
  
   I also agree. It will improve with the newly redesigned website
  for scipy.org http://scipy.org
   However, I cannot find the link right now for the development
  version of
   the new website.
 
  Feel free to crib whatever you want from my post for that -- or
  suggest
  a place for me to put it, and I'll do it. I'm just not sure where it
  should go at this point.
 
  -Chris
 
 
  --
  Christopher Barker, Ph.D.
  Oceanographer
 
  Emergency Response Division
  NOAA/NOS/ORR(206) 526-6959   voice
  7600 Sand Point Way NE   (206) 526-6329   fax
  Seattle, WA  98115   (206) 526-6317   main reception
 
  chris.bar...@noaa.gov mailto:chris.bar...@noaa.gov
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org mailto:NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
  
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 

 --
   Wayne Watson (Watson Adventures, Prop., Nevada City, CA)

 (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
  Obz Site:  39° 15' 7 N, 121° 2' 32 W, 2700 feet

   350 350 350 350 350 350 350 350 350 350
 Make the number famous. See 350.org
The major event has passed, but keep the number alive.

Web Page: www.speckledwithstars.net/

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread josef . pktd
On Fri, Nov 27, 2009 at 9:05 PM, Sebastian seb...@gmail.com wrote:
 Hi Chris, yeah there should, try the following:
 import numpy
 import matplotlib.pyplot as pylab
 regards

 On Fri, Nov 27, 2009 at 8:47 PM, Wayne Watson sierra_mtnv...@sbcglobal.net
 wrote:

 I tried this and it put ranges on y from 0 to 0.45 and x from 5 to 50.

 import numpy as np
 import pylab

 v = np.array([20, 15,10,30, 50, 30, 20, 25, 10])
 #Plot a normalized histogram
 print np.linspace(0,50,10)
 pylab.hist(v, normed=1, bins=np.linspace(0,9,10), range=(0,100))
 pylab.show()

 I  added the two imports. I got a fig error on the first line.
 import pylab
 import numpy

 Shouldn't there by a pylab.Show in there?


you need to create a figure before you can use it:

fig = pylab.figure()

Josef

 ax = fig.add_subplot(1,1,1)
 pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
 n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)),

 range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
 normed=1, facecolor='y', alpha=0.5)
 ax.set_xlabel(r'\Large$ \rm{values}$')
 ax.set_ylabel(r'\Large Delatavalue/Value')


 gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)

 gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
 pylab.plot(gausx,gaus, color='red', lw=2)
 ax.set_xlim(-1.5, 1.5)
 ax.grid(True)

 Sebastian wrote:
  Did you try using the parameter range?
  I do something like this.
  regards
 
      ax = fig.add_subplot(1,1,1)
      pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
      n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)),
 
  range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
      normed=1, facecolor='y', alpha=0.5)
      ax.set_xlabel(r'\Large$ \rm{values}$')
      ax.set_ylabel(r'\Large Delatavalue/Value')
 
 
  gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)
 
  gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
      pylab.plot(gausx,gaus, color='red', lw=2)
      ax.set_xlim(-1.5, 1.5)
      ax.grid(True)
 
 
  On Fri, Nov 27, 2009 at 4:38 PM, Christopher Barker
  chris.bar...@noaa.gov mailto:chris.bar...@noaa.gov wrote:
 
      josef.p...@gmail.com mailto:josef.p...@gmail.com wrote:
       On Fri, Nov 27, 2009 at 12:57 PM, Skipper Seabold
      jsseab...@gmail.com mailto:jsseab...@gmail.com wrote:
 
        This kind of info might be useful to other newcomers
       somewhere...  http://www.scipy.org/History_of_SciPy?  Thoughts
  on
       posting this on the wiki here?
      
       I also agree. It will improve with the newly redesigned website
      for scipy.org http://scipy.org
       However, I cannot find the link right now for the development
      version of
       the new website.
 
      Feel free to crib whatever you want from my post for that -- or
      suggest
      a place for me to put it, and I'll do it. I'm just not sure where it
      should go at this point.
 
      -Chris
 
 
      --
      Christopher Barker, Ph.D.
      Oceanographer
 
      Emergency Response Division
      NOAA/NOS/ORR            (206) 526-6959   voice
      7600 Sand Point Way NE   (206) 526-6329   fax
      Seattle, WA  98115       (206) 526-6317   main reception
 
      chris.bar...@noaa.gov mailto:chris.bar...@noaa.gov
      ___
      NumPy-Discussion mailing list
      NumPy-Discussion@scipy.org mailto:NumPy-Discussion@scipy.org
      http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
  
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 

 --
           Wayne Watson (Watson Adventures, Prop., Nevada City, CA)

             (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
              Obz Site:  39° 15' 7 N, 121° 2' 32 W, 2700 feet

                   350 350 350 350 350 350 350 350 350 350
                     Make the number famous. See 350.org
            The major event has passed, but keep the number alive.

                    Web Page: www.speckledwithstars.net/

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org

Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread Wayne Watson
Joseph,
That got it past the fig problem, but there is yet another one: value is 
not defined on the very long line:
range = ...
Wayne

josef.p...@gmail.com wrote:
 On Fri, Nov 27, 2009 at 9:05 PM, Sebastian seb...@gmail.com wrote:
   
 ...
 you need to create a figure, before you can use it

 fig = pylab.figure()

 Josef

   
 ax = fig.add_subplot(1,1,1)
 pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
 n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)),

 range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
 normed=1, facecolor='y', alpha=0.5)
 ax.set_xlabel(r'\Large$ \rm{values}$')
 ax.set_ylabel(r'\Large Delatavalue/Value')


 gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)

 gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
 pylab.plot(gausx,gaus, color='red', lw=2)
 ax.set_xlim(-1.5, 1.5)
 ax.grid(True)

 Sebastian wrote:
   
 Did you try using the parameter range?
 I do something like this.
 regards

 ax = fig.add_subplot(1,1,1)
 pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
 n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)),

 range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
 normed=1, facecolor='y', alpha=0.5)
 ax.set_xlabel(r'\Large$ \rm{values}$')
 ax.set_ylabel(r'\Large Delatavalue/Value')


 gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)

 gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
 pylab.plot(gausx,gaus, color='red', lw=2)
 ax.set_xlim(-1.5, 1.5)
 ax.grid(True)


 On Fri, Nov 27, 2009 at 4:38 PM, Christopher Barker
 chris.bar...@noaa.gov mailto:chris.bar...@noaa.gov wrote:

 josef.p...@gmail.com mailto:josef.p...@gmail.com wrote:
  On Fri, Nov 27, 2009 at 12:57 PM, Skipper Seabold
 jsseab...@gmail.com mailto:jsseab...@gmail.com wrote:

   This kind of info might be useful to other newcomers
  somewhere...  http://www.scipy.org/History_of_SciPy?  Thoughts
 on
  posting this on the wiki here?
 
  I also agree. It will improve with the newly redesigned website
 for scipy.org http://scipy.org
  However, I cannot find the link right now for the development
 version of
  the new website.

 Feel free to crib whatever you want from my post for that -- or
 suggest
 a place for me to put it, and I'll do it. I'm just not sure where it
 should go at this point.

 -Chris


 --
 Christopher Barker, Ph.D.
 Oceanographer

 Emergency Response Division
 NOAA/NOS/ORR(206) 526-6959   voice
 7600 Sand Point Way NE   (206) 526-6329   fax
 Seattle, WA  98115   (206) 526-6317   main reception

 chris.bar...@noaa.gov mailto:chris.bar...@noaa.gov
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org mailto:NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


 

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

 
 --
   Wayne Watson (Watson Adventures, Prop., Nevada City, CA)

 (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
  Obz Site:  39° 15' 7 N, 121° 2' 32 W, 2700 feet

   350 350 350 350 350 350 350 350 350 350
 Make the number famous. See 350.org
The major event has passed, but keep the number alive.

Web Page: www.speckledwithstars.net/

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion
   
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

   

-- 
   Wayne Watson (Watson Adventures, Prop., Nevada City, CA)

 (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
  Obz Site:  39° 15' 7 N, 121° 2' 32 W, 2700 feet  

   350 350 350 350 350 350 350 350 350 350
 Make the number famous. See 350.org
The major event has passed, but keep the number alive.
 
Web Page: 

[Numpy-discussion] Computing Simple Statistics When Only the Frequency Distribution is Known

2009-11-27 Thread Wayne Watson
How do I compute avg, std dev, min, max and other simple stats if I only 
know the frequency distribution?

-- 
   Wayne Watson (Watson Adventures, Prop., Nevada City, CA)

 (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
  Obz Site:  39° 15' 7 N, 121° 2' 32 W, 2700 feet  

   350 350 350 350 350 350 350 350 350 350
 Make the number famous. See 350.org
The major event has passed, but keep the number alive.
 
Web Page: www.speckledwithstars.net/

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Producing a Histogram When Bins Are Known

2009-11-27 Thread josef . pktd
On Fri, Nov 27, 2009 at 9:44 PM, Wayne Watson
sierra_mtnv...@sbcglobal.net wrote:
 Joseph,
 That got it by the fig problem but there is yet another one. value is
 not defined on the very long line:
 range = ...
    Wayne

(values is the data array, ... no idea about scientificstat.standardDeviation)

Sebastian's example is only part of a larger script that defines many
of the variables and functions that are used.

If you are not yet familiar with these examples, maybe look at the
self-contained examples in the matplotlib docs. At least that's what I
do when I only have a rough idea about what graph I want to do but
don't know how to do it with matplotlib. I usually just copy a
likely-looking candidate and change it until it (almost) produces what I
want.
For example look at histogram examples in

http://matplotlib.sourceforge.net/examples/index.html
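
For instance, a minimal self-contained script in that spirit, using the fixed 0-255 range and 32 bins from earlier in this thread (a sketch, not taken from the matplotlib gallery):

import numpy as np
import matplotlib.pyplot as plt

data = np.random.randint(0, 256, size=1000)   # stand-in for 8-bit pixel values
edges = np.linspace(0, 256, 33)               # 32 equal-width bins: 0-7, 8-15, ..., 248-255

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.hist(data, bins=edges, facecolor='y', alpha=0.5)
ax.set_xlabel('pixel value')
ax.set_ylabel('count')
plt.show()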

Josef


 josef.p...@gmail.com wrote:
 On Fri, Nov 27, 2009 at 9:05 PM, Sebastian seb...@gmail.com wrote:

 ...
 you need to create a figure, before you can use it

 fig = pylab.figure()

 Josef


 ax = fig.add_subplot(1,1,1)
 pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
 n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)),

 range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
 normed=1, facecolor='y', alpha=0.5)
 ax.set_xlabel(r'\Large$ \rm{values}$')
 ax.set_ylabel(r'\Large Delatavalue/Value')


 gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)

 gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
 pylab.plot(gausx,gaus, color='red', lw=2)
 ax.set_xlim(-1.5, 1.5)
 ax.grid(True)

 Sebastian wrote:

 Did you try using the parameter range?
 I do something like this.
 regards

     ax = fig.add_subplot(1,1,1)
     pylab.title(r'\Large  BCG NO radio distribution $ \rm{TITLE}$')
     n, bins, patches = pylab.hist(values, bins=math.sqrt(len(values)),

 range=(numpy.mean(values)-3*scientificstat.standardDeviation(values),numpy.mean(values)+3*scientificstat.standardDeviation(values)),
     normed=1, facecolor='y', alpha=0.5)
     ax.set_xlabel(r'\Large$ \rm{values}$')
     ax.set_ylabel(r'\Large Delatavalue/Value')


 gausx=numpy.arange(numpy.mean(Value)-3*scientificstat.standardDeviation(Value),numpy.mean(Value)+3*scientificstat.standardDeviation(bpty_plt),0.1)

 gaus=normpdf(gausx,numpy.mean(Value),scientificstat.standardDeviation(Value))
     pylab.plot(gausx,gaus, color='red', lw=2)
     ax.set_xlim(-1.5, 1.5)
     ax.grid(True)


 On Fri, Nov 27, 2009 at 4:38 PM, Christopher Barker
 chris.bar...@noaa.gov mailto:chris.bar...@noaa.gov wrote:

     josef.p...@gmail.com mailto:josef.p...@gmail.com wrote:
      On Fri, Nov 27, 2009 at 12:57 PM, Skipper Seabold
     jsseab...@gmail.com mailto:jsseab...@gmail.com wrote:

       This kind of info might be useful to other newcomers
      somewhere...  http://www.scipy.org/History_of_SciPy?  Thoughts
 on
      posting this on the wiki here?
     
      I also agree. It will improve with the newly redesigned website
     for scipy.org http://scipy.org
      However, I cannot find the link right now for the development
     version of
      the new website.

     Feel free to crib whatever you want from my post for that -- or
     suggest
     a place for me to put it, and I'll do it. I'm just not sure where it
     should go at this point.

     -Chris


     --
     Christopher Barker, Ph.D.
     Oceanographer

     Emergency Response Division
     NOAA/NOS/ORR            (206) 526-6959   voice
     7600 Sand Point Way NE   (206) 526-6329   fax
     Seattle, WA  98115       (206) 526-6317   main reception

     chris.bar...@noaa.gov mailto:chris.bar...@noaa.gov
     ___
     NumPy-Discussion mailing list
     NumPy-Discussion@scipy.org mailto:NumPy-Discussion@scipy.org
     http://mail.scipy.org/mailman/listinfo/numpy-discussion


 

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


 --
           Wayne Watson (Watson Adventures, Prop., Nevada City, CA)

             (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
              Obz Site:  39° 15' 7 N, 121° 2' 32 W, 2700 feet

                   350 350 350 350 350 350 350 350 350 350
                     Make the number famous. See 350.org
            The major event has passed, but keep the number alive.

                    Web Page: www.speckledwithstars.net/

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

 ___
 NumPy-Discussion mailing 

Re: [Numpy-discussion] Computing Simple Statistics When Only the Frequency Distribution is Known

2009-11-27 Thread josef . pktd
On Fri, Nov 27, 2009 at 9:47 PM, Wayne Watson
sierra_mtnv...@sbcglobal.net wrote:
 How do I compute avg, std dev, min, max and other simple stats if I only
 know the frequency distribution?

If you are willing to assign to all observations in a bin the value at
the bin midpoint, then you could do it with weights in the statistics
calculations. However, numpy.average is, I think, the only statistic
that takes weights. min and max are independent of the weights, but std
and var need to be calculated indirectly.

If you need more stats with weights, then the attachment in
http://projects.scipy.org/scipy/ticket/604  is a good start.
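
A minimal sketch of the weighted approach for the 8-bit case (here each bin is a single value 0..255, so no midpoint approximation is even needed; std/var come indirectly from weighted moments):

import numpy as np

x = np.arange(256)                        # the possible values
freq = np.random.randint(0, 50, size=256) # stand-in for the measured counts

mean = np.average(x, weights=freq)
var = np.average((x - mean) ** 2, weights=freq)   # population variance
std = np.sqrt(var)
mn = x[freq > 0].min()   # assumes at least one nonzero count
mx = x[freq > 0].max()
print(mean, std, mn, mx)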

Josef



 --
           Wayne Watson (Watson Adventures, Prop., Nevada City, CA)

             (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
              Obz Site:  39° 15' 7 N, 121° 2' 32 W, 2700 feet

                   350 350 350 350 350 350 350 350 350 350
                     Make the number famous. See 350.org
            The major event has passed, but keep the number alive.

                    Web Page: www.speckledwithstars.net/

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Computing Simple Statistics When Only the Frequency Distribution is Known

2009-11-27 Thread Wayne Watson
I actually wrote my own several days ago. When I began getting myself 
more familiar with numpy, I was hoping there would be an easy to use 
version in it for this frequency approach. If not, then I'll just stick 
with what I have. It seems something like this should be common.

A simple way to do it with the present capabilities would be to unwind 
the frequencies. For example, given frequencies [2,1,3] for some corresponding 
set of x, say [1,2,3], produce [1, 1, 2, 3, 3, 3]. I have no idea if numpy 
does anything like that, but, if so, the typical mean, std, ... could be 
used. In my case, it's sort of pointless. It would produce an array of 
307,200 items from the 256 frequencies (for x = 0, 1, 2, ..., 255), and 
unwinding it in software would just slow down the computations. The 
sub-processor hardware already produced the 256 frequencies.

Basically, this amounts to having a pdf, and values of x. 
Mathematically, the statistics are produced directly from it.
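
For what it's worth, that unwinding is a one-liner with np.repeat (shown on the toy example above; as noted, for 307,200 samples it is cheaper to work from the weights directly):

import numpy as np

x = np.array([1, 2, 3])
freq = np.array([2, 1, 3])
unwound = np.repeat(x, freq)      # -> [1 1 2 3 3 3]
print(unwound.mean(), unwound.std(), unwound.min(), unwound.max())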

josef.p...@gmail.com wrote:
 On Fri, Nov 27, 2009 at 9:47 PM, Wayne Watson
 sierra_mtnv...@sbcglobal.net wrote:
   
 How do I compute avg, std dev, min, max and other simple stats if I only
 know the frequency distribution?
 

 If you are willing to assign to all observations in a bin the value at
 the bin midpoint, then you could do it with weights in the statistics
 calculations. However, numpy.average is, I think, the only statistic
 that takes weights. min max are independent of weight, but std and var
 need to be calculated indirectly.

 If you need more stats with weights, then the attachment in
 http://projects.scipy.org/scipy/ticket/604  is a good start.

 Josef


   
 --
   Wayne Watson (Watson Adventures, Prop., Nevada City, CA)

 (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
  Obz Site:  39° 15' 7 N, 121° 2' 32 W, 2700 feet

   350 350 350 350 350 350 350 350 350 350
 Make the number famous. See 350.org
The major event has passed, but keep the number alive.

Web Page: www.speckledwithstars.net/

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

 
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

   

-- 
   Wayne Watson (Watson Adventures, Prop., Nevada City, CA)

 (121.015 Deg. W, 39.262 Deg. N) GMT-8 hr std. time)
  Obz Site:  39° 15' 7 N, 121° 2' 32 W, 2700 feet  

   350 350 350 350 350 350 350 350 350 350
 Make the number famous. See 350.org
The major event has passed, but keep the number alive.
 
Web Page: www.speckledwithstars.net/

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] another numpy/ATLAS problem

2009-11-27 Thread David Warde-Farley
On 27-Nov-09, at 7:49 PM, David Cournapeau wrote:

 I guess because of the updated threading support.

Right you are; the 3.9 series is rather faster I find, at least in  
parallel.

 I think the solution is to simply bypass the atlas mechanism to build
 lapack, and do it manually.


Tried that... see below:

 and so I unarchived the liblapack.a that ATLAS spit out and added
 those files to the one built by Netlib's makefile. The result is still
 the same.

I'm gonna try what Chuck suggested and go back a few versions at least.

David
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion