Re: [Numpy-discussion] Numpy unexpected (for me) behaviour

2009-01-23 Thread V. Armando Sole
At 01:44 23/01/2009 -0600, Robert Kern wrote:
It is an inevitable consequence of several features interacting
together. Basically, Python expands a[b] += 1 into this:

   c = a[b]
   d = c.__iadd__(1)
   a[b] = d

Basically, the array c doesn't know that it was created by indexing a,
so it can't do the accumulation you want.

Well, "inevitable" would not be the word I would use. I would have expected 
different behaviour between a[b] = a[b] + 1 and a[b] += 1.

In the second case Python (or numpy) does not need to generate an 
intermediate array and could perform the operation in place.
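To spell out what currently happens when b contains repeated indices (arrays 
made up purely for illustration):

    import numpy as np

    a = np.zeros(3, dtype=int)
    b = np.array([0, 0, 1])   # index 0 appears twice

    a[b] += 1                 # the expansion above: get, add 1, assign back
    print(a)                  # [1 1 0] -- the two writes to a[0] collide

    # an explicit loop does accumulate, at Python-loop speed
    a = np.zeros(3, dtype=int)
    for i in b:
        a[i] += 1
    print(a)                  # [2 1 0]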


  Is there a way I can achieve the first result
  without a for loop? In my application the difference is a factor of 10 in
  execution time (1000 seconds instead of 100 ...)

In [6]: bincount?
Type:   builtin_function_or_method
Base Class: <type 'builtin_function_or_method'>
String Form: <built-in function bincount>
Namespace:  Interactive
Docstring:
 bincount(x,weights=None)

 Return the number of occurrences of each value in x.

 x must be a list of non-negative integers.  The output, b[i],
 represents the number of times that i is found in x.  If weights
 is specified, every occurrence of i at a position p contributes
 weights[p] instead of 1.

 See also: histogram, digitize, unique.

Indeed, what I am doing is very close to histogramming.

Unfortunately, what I have to add is not just 1 but an arbitrary value. I have a set of 
scattered points (x,y,z) and values corresponding to those points. My goal 
is to get a regular grid in which each voxel holds the sum of the values of the 
points falling in it. I guess I will have to write a tiny C extension. I 
had expected the += syntax would trigger the in-place operation.
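On second thought, the weights argument quoted above might do exactly this 
accumulation if I first convert the (x,y,z) points to flat voxel indices. A 
rough sketch (grid dimensions and data invented purely for illustration):

    import numpy as np

    # hypothetical scattered points, already digitized to integer voxel
    # indices (ix, iy, iz), with one value per point
    nx, ny, nz = 4, 5, 6
    npts = 100
    ix = np.random.randint(0, nx, size=npts)
    iy = np.random.randint(0, ny, size=npts)
    iz = np.random.randint(0, nz, size=npts)
    values = np.random.rand(npts)

    # flat index of each point's voxel in a C-ordered (nx, ny, nz) grid
    flat = (ix * ny + iy) * nz + iz

    # sum the values of the points falling in each voxel
    sums = np.bincount(flat, weights=values)
    grid = np.zeros(nx * ny * nz)
    grid[:len(sums)] = sums   # bincount stops at the largest index seen
    grid = grid.reshape(nx, ny, nz)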

Best regards,

Armando 




Re: [Numpy-discussion] python numpy code many times slower than c++

2009-01-23 Thread Robert Kern
On Thu, Jan 22, 2009 at 17:09, Wes McKinney <wesmck...@gmail.com> wrote:
 Windows XP, Pentium D, Python 2.5.2

I can replicate the negative numbers on my Windows VM. I'll take a look at it.

Wrote profile results to foo.py.lprof
Timer unit: 4.17601e-010 s

File: foo.py
Function: f at line 1
Total time: -3.02963 s

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     1                                           @profile
     2                                           def f():
     3       101  -1456737621  -1456.7     20.1      for i in xrange(100):
     4       100  -1540435131  -1540.4     21.2          1+1
     5       100  -1522306067  -1522.3     21.0          1+1
     6       100  -1177199444  -1177.2     16.2          1+1
     7       100  -1558164209  -1558.2     21.5          1+1


-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] coding style: citations

2009-01-23 Thread Tim Michelsen
Hello Allan, Stefan and others,
did you already come to a conclusion regarding this cite topic?

Did you try to run the bibtex extension for Sphinx?

If so, please update the documentation guidelines.

Regards,
Timmie



Re: [Numpy-discussion] coding style: citations

2009-01-23 Thread Stéfan van der Walt
Hi Tim

2009/1/22 Tim Michelsen <timmichel...@gmx-topmail.de>:
 did you already come to a conclusion regarding this cite topic?

 Did you try to run the bibtex extension for Sphinx?

I haven't tried it.  One difficulty is that each docstring needs to be
self-contained, i.e., it must include its own references.  If you want
to maintain a central reference list, this list must either be
generated from the docstrings or, alternatively, the docstrings must
be built from the list.  I don't think the latter is an option really
(since it would require massive merges whenever a reference is
updated).

What it comes down to, then, is that there is work to be done on the
documentation editor before we modify the standard.  Unfortunately, I
am flat broke when it comes to free time, but hopefully some other
people have more time to invest in the issue.

Regards,
Stéfan


Re: [Numpy-discussion] strange multiplication behavior with numpy.float64 and ndarray subclass

2009-01-23 Thread Darren Dale
On Wed, Jan 21, 2009 at 1:07 PM, Darren Dale <dsdal...@gmail.com> wrote:



 On Wed, Jan 21, 2009 at 12:26 PM, Pierre GM <pgmdevl...@gmail.com> wrote:

  I don't understand why __array_priority__ is not being respected
  here. Ticket #826 lists the component as numpy.ma, but it seems the
  problem is in numpy.core. I think the severity of the ticket should
  be increased, but I wasn't able to view the ticket; I keep getting an
  internal server error.

 Ticket #826 bumped.


 Just an additional bit of context. I'm working on a subclass that handles
 physical quantities, and issue 826 causes a quantity to be converted to a
 dimensionless magnitude.


I wonder if this issue is appearing in other places as well. Many of the
ndarray methods work without modification on my Quantity subclass, but the
methods that produce scalars do not. For instance, __getitem__ yields a
dimensionless number when called with an integer index, but it yields
another Quantity when called with a slice, so I have to reimplement
__getitem__ so that it yields a Quantity for single indices. tolist, min, max,
and mean behave the same way. Is there an ndarray attribute I should be using to
tell the superclass what the desired type is (aside from
__array_priority__)?
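To make it concrete, the kind of reimplementation I mean looks roughly like
this (a stripped-down sketch, not the actual Quantity class):

    import numpy as np

    class Quantity(np.ndarray):
        __array_priority__ = 21.0

        def __new__(cls, data, units=''):
            obj = np.asarray(data).view(cls)
            obj.units = units
            return obj

        def __array_finalize__(self, obj):
            self.units = getattr(obj, 'units', '')

        def __getitem__(self, key):
            out = np.ndarray.__getitem__(self, key)
            # an integer index returns a plain numpy scalar, so re-wrap
            # it to keep the units attached
            if not isinstance(out, Quantity):
                out = Quantity(out, self.units)
            return out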

Thanks,
Darren


Re: [Numpy-discussion] coding style: citations

2009-01-23 Thread Alan G Isaac
Tim Michelsen wrote:
 did you already come to a conclusion regarding this cite topic?
 Did you try to run the bibtex extension for Sphinx?
 If so, please update the documentation guidelines.


I hope we reached agreement that the documentation
should use reST citations and not reST footnotes.
You can use bib4txt to process *all* the modules Sphinx will
process, extract the citation references, and construct
a references section.  I explained why it is undesirable to use
`+` or `:` in the citation references.
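To make the intended markup concrete, a citation in a docstring would look
something like the following sketch (the function and the reference entry
are placeholders, not real entries):

    def some_function(x):
        """
        One-line summary of a hypothetical function.

        Notes
        -----
        The algorithm follows the approach described in [Smith2008]_.

        References
        ----------
        .. [Smith2008] A. Author, "Title of the paper", Some Journal,
           2008.  (placeholder entry, purely illustrative)
        """
        return x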

I am not currently a Sphinx user.
What I can offer is to make useful changes
to bibstuff (including bib4txt) if these
are specified.

Alan Isaac


[Numpy-discussion] Pattern for reading non-simple binary files

2009-01-23 Thread Ryan May
Hi,

I'm trying to read in data from a binary-formatted file. I have the data
format (available at:
http://www1.ncdc.noaa.gov/pub/data/documentlibrary/tddoc/td7000.pdf if you're
really curious), but it's not what I would consider simple, with a lot of
different blocks and messages, some that are optional and some that have
different formats depending on the data type.  My question is: has anyone dealt
with data like this using numpy?  Have you found a good pattern for how to
construct a numpy dtype dynamically to decode the different parts of the file
appropriately as you go along?

Any insight would be appreciated.

Ryan

-- 
Ryan May
Graduate Research Assistant
School of Meteorology
University of Oklahoma


Re: [Numpy-discussion] Pattern for reading non-simple binary files

2009-01-23 Thread Robert Kern
On Fri, Jan 23, 2009 at 15:31, Ryan May <rma...@gmail.com> wrote:
 Hi,

 I'm trying to read in data from a binary-formatted file. I have the data
 format (available at:
 http://www1.ncdc.noaa.gov/pub/data/documentlibrary/tddoc/td7000.pdf if you're
 really curious), but it's not what I would consider simple, with a lot of
 different blocks and messages, some that are optional and some that have
 different formats depending on the data type.  My question is: has anyone dealt
 with data like this using numpy?

Yes!

 Have you found a good pattern for how to
 construct a numpy dtype dynamically to decode the different parts of the file
 appropriately as you go along?

I use mmap and create numpy arrays for each block using the ndarray
constructor with the appropriate offset parameter. There isn't much of
a pattern for constructing the dtypes except to use constructor
functions.
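Schematically, it comes down to something like this (the file name, header
layout, and field names below are invented just to show the mechanics):

    import mmap
    import numpy as np

    f = open('data.bin', 'rb')
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

    # decode a fixed header first
    header_dtype = np.dtype([('block_id', '<i2'), ('nrecords', '<i4')])
    header = np.ndarray(shape=(), dtype=header_dtype, buffer=m, offset=0)

    # then build the dtype for the next block from what the header says
    n = int(header['nrecords'])
    record_dtype = np.dtype([('time', '<i4'), ('value', '<f4')])
    records = np.ndarray(shape=(n,), dtype=record_dtype, buffer=m,
                         offset=header_dtype.itemsize)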

Good luck!

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco


Re: [Numpy-discussion] Pattern for reading non-simple binary files

2009-01-23 Thread Eric Firing
Ryan May wrote:
 Hi,
 
 I'm trying to read in data from a binary-formatted file. I have the data
 format (available at:
 http://www1.ncdc.noaa.gov/pub/data/documentlibrary/tddoc/td7000.pdf if you're
 really curious), but it's not what I would consider simple, with a lot of
 different blocks and messages, some that are optional and some that have
 different formats depending on the data type.  My question is: has anyone dealt
 with data like this using numpy?  Have you found a good pattern for how to
 construct a numpy dtype dynamically to decode the different parts of the file
 appropriately as you go along?
 
 Any insight would be appreciated.
 
 Ryan
 

Ryan,

http://currents.soest.hawaii.edu/hg/hgwebdir.cgi/pycurrents/file/d7c5c9aac32d/adcp/rdiraw.py#l1

This gives an example of reading several related and rather complex 
binary file types generated by (oceanographic) acoustic Doppler current 
profilers.  I have not looked at the format you are dealing with, so I 
don't know if the methods I used are applicable to your case.

Eric


Re: [Numpy-discussion] failure

2009-01-23 Thread Jarrod Millman
On Wed, Jan 21, 2009 at 10:53 AM, Gideon Simpson <simp...@math.toronto.edu> wrote:
 ======================================================================
 FAIL: test_umath.TestComplexFunctions.test_against_cmath
 ----------------------------------------------------------------------
 Traceback (most recent call last):
   File "/usr/local/nonsystem/simpson/lib/python2.5/site-packages/nose/case.py", line 182, in runTest
     self.test(*self.arg)
   File "/usr/local/nonsystem/simpson/lib/python2.5/site-packages/numpy/core/tests/test_umath.py", line 268, in test_against_cmath
     assert abs(a - b) < atol, "%s %s: %s; cmath: %s" % (fname, p, a, b)
 AssertionError: arcsinh -2j: (-1.31695789692-1.57079632679j); cmath:
 (1.31695789692-1.57079632679j)

 ----------------------------------------------------------------------
 Ran 1740 tests in 9.839s

 FAILED (KNOWNFAIL=1, failures=1)
 <nose.result.TextTestResult run=1740 errors=0 failures=1>

 How would you recommend I troubleshoot this?  How seriously should I
 take it?

 This is with a fresh Python 2.5.4 installation too.

I think it is a problem with cmath and should probably be marked as a
known failure:
http://bugs.python.org/issue1381
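A quick way to see the disagreement on an affected build is to compare the
two libraries directly (on a build with a correct cmath the two values agree):

    import cmath
    import numpy as np

    z = -2j
    print('numpy arcsinh(-2j): %s' % np.arcsinh(z))
    print('cmath  asinh(-2j): %s' % cmath.asinh(z))
    # numpy computes arcsinh(-2j) = -1.3170 - 1.5708j; on affected builds
    # cmath returns the real part with the wrong sign, which is what the
    # test above catches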