Re: [Numpy-discussion] 1.2 tasks

2008-08-05 Thread Vincent Schut
David Huard wrote:
 
 
 On Mon, Aug 4, 2008 at 1:45 PM, Jarrod Millman [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:

 snip

Question: Should histogram raise a warning by default (new=True) to warn 
users that the behaviour has changed? Or warn only if new=False, to remind 
users that the old behaviour will be deprecated in 1.3? I think that users 
would rather be annoyed by warnings than surprised by an unexpected 
change, but repeated warnings can become a nuisance.
 
 To minimize annoyance, we could also offer three possibilities:
   
 new=None (default) : Equivalent to True, print warning about change.
 new=True : Don't print warning.
 new=False : Print warning about future deprecation.
 
 So those who have already set new=True don't get warnings, and all 
 others are warned. Feedback ?
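A minimal sketch of how such a three-valued `new` argument could drive the warnings (names and messages hypothetical, not the actual numpy implementation):

```python
import warnings

def histogram_sketch(data, new=None):
    # Hypothetical dispatch for the proposed three-valued `new` argument.
    if new is None:
        # Default: behave like new=True, but tell the user things changed.
        warnings.warn("histogram semantics have changed; pass new=True "
                      "to silence this message", Warning)
        new = True
    elif new is False:
        warnings.warn("the old histogram semantics will be deprecated "
                      "in 1.3", DeprecationWarning)
    return "new semantics" if new else "old semantics"
```

Users who explicitly pass new=True get no warning; everyone else is told once per call what is going on.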

As a regular user of histogram I say: please warn! Your proposal above 
seems OK to me. I have histogram in a lot of old (and sometimes 
long-running) code of mine, and I certainly would prefer to be warned.

Vincent.

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Toward a numpy build with warning: handling unused variable

2008-08-05 Thread David Cournapeau
On Tue, Aug 5, 2008 at 1:29 AM, Lisandro Dalcin [EMAIL PROTECTED] wrote:
 David, I second your approach. Furthermore, look how SWIG handles
 this, it is very similar to your proposal. The difference is that SWIG
 uses SWIGUNUSED for some autogenerated functions. Furthermore, it
 seems the SWIG developers protected the generated code taking into
 account GCC versions ;-) and the C++ case, and also Intel compilers.

Yes, I would have been surprised if most compilers did not have a way
to deal with that. I did not know icc handled __attribute__.

Charles, are you still strongly against this, since it is in the vein
of what you suggested (tagging the argument)?

cheers,

David


[Numpy-discussion] Bilateral filter

2008-08-05 Thread Nadav Horesh
Attached here is my cython implementation of the bilateral filter, which is
my first cython program. I would ask for the following:


 1. Is there any way to speed up the code just by cosmetic
modifications?
 2. In particular I use the unportable gcc function __builtin_abs:
Is there any way to access this in a portable way?
 3. I would like to donate this code to scipy or any other suitable
project. What can/should I do to realise it?


Remarks:

The code contains 3 end-user routines:

 1. A pure python implementation: Easy to read and modify --- it can
be cut out into a python source code.
 2. A straightforward cython implementation: About 4 times as fast
as the python implementation.
 3. An optimised cython implementation earning another factor of 2
in speed (depends on the parameters used).

I see this as research-grade code that will evolve in the near
future, as I am working on a related project, and hopefully following
your comments.

  Nadav
'''
  A cython implementation of the plain (and slow) algorithm for bilateral
  filtering.

  Copyright 2008, Nadav Horesh
  nadavh at visionsense.com 
'''

from numpy.numarray.nd_image import generic_filter
import numpy as np


cdef extern from "arrayobject.h":
    ctypedef int intp
    ctypedef extern class numpy.ndarray [object PyArrayObject]:
        cdef char *data
        cdef int nd
        cdef intp *dimensions
        cdef intp *strides
        cdef int flags

cdef extern from "math.h":
    double exp(double x)

## gcc specific (?)
cdef extern:
    int __builtin_abs(int x)

cdef int GAUSS_SAMP = 32
cdef int GAUSS_IDX_MAX = GAUSS_SAMP - 1

class _Bilat_fcn(object):
    '''
    The class provides the bilateral filter function to be called by
    generic_filter.
    Initialization parameters:
      spat_sig:    The sigma of the spatial Gaussian filter
      inten_sig:   The sigma of the gray-levels Gaussian filter
      filter_size: (int) The size of the spatial convolution kernel. If
                   not set, it is set to ~ 4*spat_sig.
    '''
    def __init__(self, spat_sig, inten_sig, filter_size=None):
        if filter_size is not None and filter_size >= 2:
            self.xy_size = int(filter_size)
        else:
            self.xy_size = int(round(spat_sig*4))
        # Make the filter size odd
        self.xy_size += 1 - self.xy_size % 2
        x = np.arange(self.xy_size, dtype=float)
        x = (x - x.mean())**2
        # xy_ker: Spatial convolution kernel
        self.xy_ker = np.exp(-np.add.outer(x, x)/(2*spat_sig**2)).ravel()
        self.xy_ker /= self.xy_ker.sum()
        self.inten_sig = 2 * inten_sig**2
        self.index = self.xy_size**2 // 2

        ## An initialization of a LUT instead of a Gaussian function call
        ## (for the fc_filter method)

        x = np.linspace(0, 3.0, GAUSS_SAMP)
        self.gauss_lut = np.exp(-x**2/2)
        self.gauss_lut[-1] = 0.0  # Nullify all distant deltas
        self.x_quant = 3*inten_sig / GAUSS_IDX_MAX


    ##
    ## Filtering functions
    ##

    def __call__(self, data):
        'An unoptimized (pure python) implementation'
        weight = (np.exp(-(data - data[self.index])**2 / self.inten_sig) *
                  self.xy_ker)
        return np.dot(data, weight) / weight.sum()

    def cfilter(self, ndarray data):
        'An optimized implementation'
        cdef ndarray kernel = self.xy_ker
        cdef double sigma   = self.inten_sig
        cdef double weight_i, weight, result, centre, dat_i
        cdef double *pdata = <double *>data.data
        cdef double *pker  = <double *>kernel.data
        cdef int i, dim = data.dimensions[0]
        centre = pdata[self.index]

        weight = 0.0
        result = 0.0

        for i from 0 <= i < dim:
            dat_i = pdata[i]
            weight_i = exp(-(dat_i - centre)**2 / sigma) * pker[i]
            weight += weight_i
            result += dat_i * weight_i
        return result / weight


    def fc_filter(self, ndarray data):
        'Use further optimisation by replacing exp function calls by a LUT'
        cdef ndarray kernel = self.xy_ker
        cdef ndarray gauss_lut_arr = self.gauss_lut

        cdef double sigma   = self.inten_sig
        cdef double weight_i, weight, result, centre, dat_i
        cdef double *pdata = <double *>data.data
        cdef double *pker  = <double *>kernel.data
        cdef int i, dim = data.dimensions[0]
        cdef int exp_i                  # Entry index for the LUT
        cdef double x_quant = self.x_quant
        cdef double *gauss_lut = <double *>gauss_lut_arr.data

        centre = pdata[self.index]

        weight = 0.0
        result = 0.0

        for i from 0 <= i < dim:
            dat_i = pdata[i]
            exp_i = __builtin_abs(int((dat_i - centre) / x_quant))
            if exp_i > GAUSS_IDX_MAX:
                exp_i = GAUSS_IDX_MAX
            weight_i = gauss_lut[exp_i] * pker[i]
            weight += weight_i
            result += dat_i * weight_i
        return result / weight


#
# Filtering 
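A rough usage sketch (assuming the module compiles under the name `bilateral`): create `fcn = bilateral._Bilat_fcn(spat_sig, inten_sig)` and pass one of its methods to `generic_filter(image, fcn.fc_filter, size=fcn.xy_size)`. The weighting in `__call__` can also be reproduced standalone in plain numpy for sanity checks:

```python
import numpy as np

def bilateral_window(data, ker, index, inten_sig):
    # Mirrors _Bilat_fcn.__call__: spatial kernel times intensity
    # Gaussian, normalised over the flattened window.
    weight = np.exp(-(data - data[index])**2 / inten_sig) * ker
    return np.dot(data, weight) / weight.sum()

# With a small intensity sigma the outlier 100.0 gets almost no weight,
# so the result stays near the central value 1.0.
data = np.array([1.0, 1.0, 100.0])
ker = np.ones(3) / 3.0
bilateral_window(data, ker, 0, 2 * 1.0**2)
```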

Re: [Numpy-discussion] running numpy tests

2008-08-05 Thread Alan McIntyre
At the moment, bench() doesn't work.  That's something I'll try to
look at this week, but from Friday until the 15th I'm going to be
driving most of the time and may not get as much done as I'd like.

On 8/5/08, Andrew Dalke [EMAIL PROTECTED] wrote:
 On Aug 5, 2008, at 4:19 AM, Robert Kern wrote:
 def test(...):
 ...
 test.__test__ = False

 That did it - thanks!

 Does "import numpy; numpy.bench()" work for anyone?  When I try it I get


 [josiah:~] dalke% python -c 'import numpy; numpy.bench()'

 --
 Ran 0 tests in 0.003s

 OK

 I can go ahead and remove those if they don't work for anyone.

   Andrew
   [EMAIL PROTECTED]




Re: [Numpy-discussion] Toward a numpy build with warning: handling unused variable

2008-08-05 Thread Lisandro Dalcin
Of course, you could also call GCC like this '-Wall
-Wno-unused-parameter'. Then you will only get warnings about unused
functions and local variables.

On Tue, Aug 5, 2008 at 9:49 AM, Charles R Harris
[EMAIL PROTECTED] wrote:


 On Tue, Aug 5, 2008 at 4:28 AM, David Cournapeau [EMAIL PROTECTED] wrote:

 On Tue, Aug 5, 2008 at 1:29 AM, Lisandro Dalcin [EMAIL PROTECTED] wrote:
  David, I second your approach. Furthermore, look how SWIG handles
  this, it is very similar to your proposal. The difference is that SWIG
  uses SWIGUNUSED for some autogenerated functions. Furthermore, it
  seems the SWIG developers protected the generated code taking into
  account GCC versions ;-) and the C++ case, and also Intel compilers.

 Yes, I would have been surprised if most compilers did not have a way
 to deal with that. I did not know icc handled __attribute__.

 Charles, are you still strongly against this, since it is in the vein
 of what you suggested (tagging the argument)?



 I think tagging the argument is the way to go.

 Chuck






-- 
Lisandro Dalcín
---
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594


Re: [Numpy-discussion] 1.2 tasks

2008-08-05 Thread David Huard
On Tue, Aug 5, 2008 at 4:04 AM, Vincent Schut [EMAIL PROTECTED] wrote:

 David Huard wrote:
 
 
  On Mon, Aug 4, 2008 at 1:45 PM, Jarrod Millman [EMAIL PROTECTED]
  mailto:[EMAIL PROTECTED] wrote:

  snip

  Question: Should histogram raise a warning by default (new=True) to warn
  users that the behaviour has changed? Or warn only if new=False, to
  remind users that the old behaviour will be deprecated in 1.3? I think
  that users would rather be annoyed by warnings than surprised by an
  unexpected change, but repeated warnings can become a nuisance.
 
  To minimize annoyance, we could also offer three possibilities:
 
  new=None (default) : Equivalent to True, print warning about change.
  new=True : Don't print warning.
  new=False : Print warning about future deprecation.
 
  So those who have already set new=True don't get warnings, and all
  others are warned. Feedback ?

 As a regular user of histogram I say: please warn! Your proposal above
 seems OK to me. I have histogram in a lot of old (and sometimes
 long-running) code of mine, and I certainly would prefer to be warned.

 Vincent.


Thanks for the feedback. Here is what will be printed:

If new=False

The original semantics of histogram is scheduled to be
deprecated in NumPy 1.3. The new semantics fixes
long-standing issues with outlier handling. The main
changes concern
1. the definition of the bin edges,
now including the rightmost edge, and
2. the handling of upper outliers,
now ignored rather than tallied in the rightmost bin.

Please read the docstring for more information.



If new=None  (default)

The semantics of histogram has been modified in
the current release to fix long-standing issues with
outlier handling. The main changes concern
1. the definition of the bin edges,
   now including the rightmost edge, and
2. the handling of upper outliers, now ignored rather
   than tallied in the rightmost bin.
The previous behaviour is still accessible using
`new=False`, but is scheduled to be deprecated in the
next release (1.3).

*This warning will not be printed in the 1.3 release.*

Please read the docstring for more information.
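The new semantics described above can be checked directly (this matches current numpy behaviour): the rightmost edge is inclusive, and upper outliers are dropped rather than tallied:

```python
import numpy as np

# Bins are [0, 1), [1, 2), [2, 3]; the rightmost edge is inclusive and
# the upper outlier 4.0 is ignored instead of landing in the last bin.
counts, edges = np.histogram([0.5, 2.5, 3.0, 4.0], bins=[0, 1, 2, 3])
counts
# -> array([1, 0, 2])
```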


I modified the docstring to put the emphasis on the new semantics,
adapted the tests and updated the ticket.

David







Re: [Numpy-discussion] running numpy tests

2008-08-05 Thread Andrew Dalke
On Aug 5, 2008, at 3:53 PM, Alan McIntyre wrote:
 At the moment, bench() doesn't work.  That's something I'll try to
 look at this week, but from Friday until the 15th I'm going to be
 driving most of the time and may not get as much done as I'd like.

Thanks for the confirmation.

The import speedup patch I just submitted keeps the 'bench' definitions
there (including one that was missing).  But instead of defining it
as a bound method, I used functions that import the testing submodule
and construct/call the right objects.

It should behave the same.  I think.

Andrew
[EMAIL PROTECTED]




Re: [Numpy-discussion] member1d and unique elements

2008-08-05 Thread Robert Cimrman
Greg Novak wrote:
 I have two arrays of integers, and would like to know _where_ they
 have elements in common, not just _which_ elements are in common.
 This is because the entries in the integer array are aligned with
 other arrays.  This seems very close to what member1d advertises as
 its function.  However, member1d says that it expects arrays with only
 unique elements.
 
 First of all, my desired operation is well-posed:  I'd like f(ar1,
 ar2) to return something in the shape of ar1 with True if the value at
 that position appears anywhere in ar2 (regardless of duplication) and
 False otherwise.
 
 So I looked at the code and have two questions:
 1) What is this code trying to achieve?
 aux = perm[ii+1]
 perm[ii+1] = perm[ii]
 perm[ii] = aux
 
 Here perm is the stable argsort of the two concatenated arguments:
 perm = concatenate((ar1, ar2)).argsort(kind='mergesort').
 arr is the array of combined inputs in sorted order:
 arr = concatenate((ar1, ar2))[perm]
 and ii is a list of indices into arr where the value of arr is equal
 to the next value in the array (arr[ii] == arr[ii+1]) _and_ arr[ii]
 came from the _second_ input (ar2).
 
 Now, this last bit (looking for elements of arr that are equal and
 both came from the second array) is clearly trying to deal with
 duplication, which is why I'm interested...
 
 So, the code snippet is trying to swap perm[ii+1] with perm[ii], but I
 don't see why.  Furthermore, there are funny results if a value is
 duplicated three times, not just twice -- perm is no longer a
 permutation vector.  Eg, member1d([1], [2,2,2]) results in perm=[0,1,2,3]
 and ii=[1,2] before the above snippet, and the above snippet makes
 perm into [0,2,3,2]
 
 I've commented those three lines, and I've never seen any changes to
 the output of member1d.  The new value of perm is used to compute the
 expression: perm.argsort(kind='mergesort')[:len( ar1 )], but the
 changes to that expression as a result of the above three lines are
 always at the high end of the array, which is sliced off by the last
 [:len(ar1)].
 
 Finally, my second question is:
 2) Does anyone have a test case where member1d fails as a result of
 duplicates in the input?  So far I haven't found any, with the above
 lines commented or not.
 
 Upon reflection and review of the changelog, another theory occurs to
 me: member1d did not originally use a stable sort.  What I've written
 above for interpretation of the value ii (indicates duplication within
 ar2) is true for a stable sort, but for an unstable sort the same
 condition has the interpretation that ii holds the values where the
 sorting algorithm swapped the order of equal values unstably.  Then
 the code snippet in question 1) looks like an attempt to swap those
 values in the permutation array to make the sort stable again.  The
 attempt would fail if there was duplication in either array.
 
 So, I would propose deleting those three lines (since they seem to be
 a non-functional relic) and declaring in the docstring that member1d
 doesn't require unique elements.
 
 Also, if this is correct, then the function simplifies considerably
 since several values don't need to be computed anymore:
 
 def setmember1d( ar1, ar2 ):
     ar = nm.concatenate( (ar1, ar2) )
     perm = ar.argsort(kind='mergesort')
     aux = ar[perm]
     flag = nm.concatenate( (aux[1:] == aux[:-1], [False]) )
     indx = perm.argsort(kind='mergesort')[:len( ar1 )]
     return flag[indx]
 
 Corrections to the above are welcome since I'm going to start using
 member1d without regard for uniqueness, and I'd like to know if I'm
 making a big mistake...

Hi Greg,

I do not have much time to investigate it in detail right now, but it 
does not work for repeated entries in ar1:

In [14]: nm.setmember1d( [1,2,3,2], [1, 3] )
Out[14]: array([ True,  True,  True, False], dtype=bool)
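A brute-force reference implementation (a sketch, not part of numpy) makes such cases easy to check:

```python
import numpy as np

def setmember1d_ref(ar1, ar2):
    # Brute force: True where the ar1 element appears anywhere in ar2,
    # regardless of duplication in either array.
    s = set(ar2)
    return np.array([x in s for x in ar1], dtype=bool)

setmember1d_ref([1, 2, 3, 2], [1, 3])
# -> array([ True, False,  True, False])
```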

thanks for trying to enhance arraysetops!
r.


Re: [Numpy-discussion] Bilateral filter

2008-08-05 Thread Zachary Pincus
 Attached here my cython implementation of the bilateral filter,  
 which is my first cython program. I would ask for the following:

Thanks for the code! I know it will be of use to me. (Do you place any  
particular license on it?)

Zach


On Aug 5, 2008, at 9:38 AM, Nadav Horesh wrote:

 Attached here my cython implementation of the bilateral filter,  
 which is my first cython program. I would ask for the following:

   • Is there any way to speed up the code just by cosmetic  
 modifications?
   • In particular I use the unportable gcc function __builtin_abs: Is  
 there any way to access this in a portable way?
   • I would like to donate this code to scipy or any other suitable  
 project. What can/should I do to realise it?

 Remarks:

 The code contains 3 end-user routines:
   • A pure python implementation: Easy to read and modify --- it can  
 be cut out into a python source code.
   • A straight forward cython implementation: About 4 times as fast  
 as the python implementation.
   • An optimised cython implementation earning another factor of 2 in  
 speed (depends on the parameters used).
 I see this code as a research grade that would evolve in the near  
 future, as I am working on a related project, and hopefully  
 following your comments.

   Nadav
 bilateral.pyx


[Numpy-discussion] 64bit issue?

2008-08-05 Thread Charles Doutriaux
Hello, I'm running into a strange error on a 64bit machine.
I tracked it down to the line below returning a NULL pointer; any idea
why this is? I tried both numpy 1.1.1 and what is in trunk; numpy.test()
passes for both.

Ok here's the uname of the machine and the offending line:

Linux quartic.txcorp.com 2.6.20-1.2320.fc5 #1 SMP Tue Jun 12 18:50:49 
EDT 2007 x86_64 x86_64 x86_64 GNU/Linux

  array = (PyArrayObject *)PyArray_SimpleNew(d, dims, self->type);

where d is 3 and dims: 120,46,72
self->type is 11

it seems to pass with d=1, but i'm not 100% positive.

Any idea on why it could be failing?

Thanks,

C.

PS: just in case here are numpy.test results:
>>> import numpy
>>> numpy.test()
Numpy is installed in 
/home/facets/doutriaux1/cdat/latest/lib/python2.5/site-packages/numpy
Numpy version 1.1.1
Python version 2.5.2 (r252:60911, Aug  4 2008, 13:47:12) [GCC 4.1.1 
20070105 (Red Hat 4.1.1-51)]
  Found 3/3 tests for numpy.core.tests.test_memmap
  Found 145/145 tests for numpy.core.tests.test_regression
  Found 12/12 tests for numpy.core.tests.test_records
  Found 3/3 tests for numpy.core.tests.test_errstate
  Found 72/72 tests for numpy.core.tests.test_numeric
  Found 36/36 tests for numpy.core.tests.test_numerictypes
  Found 290/290 tests for numpy.core.tests.test_multiarray
  Found 18/18 tests for numpy.core.tests.test_defmatrix
  Found 63/63 tests for numpy.core.tests.test_unicode
  Found 16/16 tests for numpy.core.tests.test_umath
  Found 7/7 tests for numpy.core.tests.test_scalarmath
  Found 2/2 tests for numpy.core.tests.test_ufunc
  Found 5/5 tests for numpy.distutils.tests.test_misc_util
  Found 4/4 tests for numpy.distutils.tests.test_fcompiler_gnu
  Found 2/2 tests for numpy.fft.tests.test_fftpack
  Found 3/3 tests for numpy.fft.tests.test_helper
  Found 15/15 tests for numpy.lib.tests.test_twodim_base
  Found 1/1 tests for numpy.lib.tests.test_machar
  Found 1/1 tests for numpy.lib.tests.test_regression
  Found 43/43 tests for numpy.lib.tests.test_type_check
  Found 1/1 tests for numpy.lib.tests.test_ufunclike
  Found 15/15 tests for numpy.lib.tests.test_io
  Found 25/25 tests for numpy.lib.tests.test__datasource
  Found 10/10 tests for numpy.lib.tests.test_arraysetops
  Found 1/1 tests for numpy.lib.tests.test_financial
  Found 4/4 tests for numpy.lib.tests.test_polynomial
  Found 6/6 tests for numpy.lib.tests.test_index_tricks
  Found 49/49 tests for numpy.lib.tests.test_shape_base
  Found 55/55 tests for numpy.lib.tests.test_function_base
  Found 5/5 tests for numpy.lib.tests.test_getlimits
  Found 3/3 tests for numpy.linalg.tests.test_regression
  Found 89/89 tests for numpy.linalg.tests.test_linalg
  Found 96/96 tests for numpy.ma.tests.test_core
  Found 19/19 tests for numpy.ma.tests.test_mrecords
  Found 15/15 tests for numpy.ma.tests.test_extras
  Found 4/4 tests for numpy.ma.tests.test_subclassing
  Found 36/36 tests for numpy.ma.tests.test_old_ma
  Found 1/1 tests for numpy.oldnumeric.tests.test_oldnumeric
  Found 7/7 tests for numpy.tests.test_random
  Found 16/16 tests for numpy.testing.tests.test_utils
  Found 6/6 tests for numpy.tests.test_ctypeslib
..
 
..
--
Ran 1292 tests in 1.237s

OK
unittest._TextTestResult run=1292 errors=0 failures=0


and:
>>> import numpy
>>> numpy.test()
Running unit tests for numpy
NumPy version 1.2.0.dev5611
NumPy is installed in 
/home/facets/doutriaux1/cdat/latest/lib/python2.5/site-packages/numpy
Python version 2.5.2 (r252:60911, Aug  4 2008, 13:47:12) [GCC 4.1.1 
20070105 (Red Hat 4.1.1-51)]
nose version 0.10.3

Re: [Numpy-discussion] Bilateral filter

2008-08-05 Thread Jarrod Millman
On Tue, Aug 5, 2008 at 9:29 AM, Zachary Pincus [EMAIL PROTECTED] wrote:
 Attached here my cython implementation of the bilateral filter,
 which is my first cython program. I would ask for the following:

 Thanks for the code! I know it will be of use to me. (Do you place any
 particular license on it?)

It would be great if you would use the new BSD license.  If you want
it included in SciPy, you will need to use that license anyway.

Thanks,

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/


Re: [Numpy-discussion] 1.2 tasks

2008-08-05 Thread Stéfan van der Walt
2008/8/5 Jarrod Millman [EMAIL PROTECTED]:
 If new=None  (default)

Could you put in a check for new=True, and suppress those messages?  A
user that knows about the changes wouldn't want to see anything.

Regards
Stéfan


Re: [Numpy-discussion] 1.2 tasks

2008-08-05 Thread Jarrod Millman
On Tue, Aug 5, 2008 at 10:24 AM, Stéfan van der Walt [EMAIL PROTECTED] wrote:
 Could you put in a check for new=True, and suppress those messages?  A
 user that knows about the changes wouldn't want to see anything.

Yes, that is already available.  Maybe the warning message for
'new=None' should mention this, though.

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/


Re: [Numpy-discussion] member1d and unique elements

2008-08-05 Thread Greg Novak
Argh.  I could swear that yesterday I typed test cases just like the
one you provide, and it behaved correctly.  Nevertheless, it clearly
fails in spite of my memory, so attached is a version which I believe
gives the correct behavior.

Greg

On Tue, Aug 5, 2008 at 9:00 AM, Robert Cimrman [EMAIL PROTECTED] wrote:
 I do not have much time to investigate it in detail right now, but it
 does not work for repeated entries in ar1:

 In [14]: nm.setmember1d( [1,2,3,2], [1, 3] )
 Out[14]: array([ True,  True,  True, False], dtype=bool)
def setmember1d( ar1, ar2, handle_dupes=True ):
    '''Return a boolean array of the shape of ar1, containing True where
    the elements of ar1 are in ar2 and False otherwise.

    If handle_dupes is true, allow for the possibility that ar1 or ar2
    each contain duplicate values.  If you are sure that each array
    contains only unique elements, you can set handle_dupes to False
    for faster execution.

    Use unique1d() to generate arrays with only unique elements to use as
    inputs to this function.

    :Parameters:
      - `ar1` : array
      - `ar2` : array
      - `handle_dupes` : boolean

    :Returns:
      - `mask` : bool array
        The values ar1[mask] are in ar2.

    :See also:
      numpy.lib.arraysetops has a number of other functions for performing
      set operations on arrays.
    '''
    # We need this to be a stable sort, so always use 'mergesort' here. The
    # values from the first array should always come before the values from
    # the second array.
    ar = nm.concatenate( (ar1, ar2) )
    order = ar.argsort(kind='mergesort')
    sar = ar[order]
    equal_adj = (sar[1:] == sar[:-1])
    flag = nm.concatenate( (equal_adj, [False]) )

    if handle_dupes:
        # If there is duplication, then being equal to your next
        # higher neighbor in the sorted array is not sufficient
        # to establish that your value exists in ar2 -- it may have
        # come from ar1.  A complication is that this is
        # transitive: setmember1d([2,2], [2]) must recognize _both_
        # 2's in ar1 as appearing in ar2, so neither is it sufficient
        # to test if you're equal to your neighbor and your neighbor
        # came from ar2.  Initially mask is 0 for values from ar1 and
        # 1 for values from ar2.  If an entry is equal to the next
        # higher neighbor and mask is 1 for the higher neighbor, then
        # mask is set to 1 for the lower neighbor also.  At the end,
        # mask is 1 if the value of the entry appears in ar2.
        zlike = nm.zeros_like
        mask = nm.concatenate( (zlike( ar1 ), zlike( ar2 ) + 1) )

        smask = mask[order]
        prev_smask = zlike(smask) - 1
        while not (prev_smask == smask).all():
            prev_smask[:] = smask
            smask[nm.where(equal_adj & smask[1:])[0]] = 1
        flag *= smask

    indx = order.argsort(kind='mergesort')[:len( ar1 )]
    return flag[indx]
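For testing, the function above can be condensed into a self-contained form (a sketch; `nm` is numpy) and checked against the problem cases from this thread:

```python
import numpy as nm

def setmember1d(ar1, ar2, handle_dupes=True):
    # Condensed copy of the attached function, for self-contained testing.
    ar = nm.concatenate((ar1, ar2))
    order = ar.argsort(kind='mergesort')
    sar = ar[order]
    equal_adj = (sar[1:] == sar[:-1])
    flag = nm.concatenate((equal_adj, [False]))
    if handle_dupes:
        # 0 marks values from ar1, 1 marks values from ar2; propagate the
        # ar2 mark down runs of equal values until a fixed point.
        mask = nm.concatenate((nm.zeros_like(ar1), nm.zeros_like(ar2) + 1))
        smask = mask[order]
        prev_smask = nm.zeros_like(smask) - 1
        while not (prev_smask == smask).all():
            prev_smask[:] = smask
            smask[nm.where(equal_adj & smask[1:])[0]] = 1
        flag = (flag * smask).astype(bool)
    indx = order.argsort(kind='mergesort')[:len(ar1)]
    return flag[indx]
```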


Re: [Numpy-discussion] Bilateral filter

2008-08-05 Thread Nadav Horesh
As far as I know, the python community is bound to BSD-style licences. I
was hoping that the code would be included in scipy, after I attach the
required declaration (I do not know which), and maybe with some adjustments
to the coding style.

In short, for the next few days I am waiting for comments and responses
that would enable me to donate the code to an existing project (scipy?
pyvision?). Meanwhile I'll improve the code a bit. If you can't wait, and
are not willing to use the code without a licence statement, I'll repost
the code with the scipy licence declaration.

  Nadav.


-----Original Message-----
From: [EMAIL PROTECTED] on behalf of Zachary Pincus
Sent: Tue 05-Aug-08 19:29
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] Bilateral filter
 
 Attached here my cython implementation of the bilateral filter,  
 which is my first cython program. I would ask for the following:

Thanks for the code! I know it will be of use to me. (Do you place any  
particular license on it?)

Zach


On Aug 5, 2008, at 9:38 AM, Nadav Horesh wrote:

 Attached here my cython implementation of the bilateral filter,  
 which is my first cython program. I would ask for the following:

   • Is there any way to speed up the code just by cosmetic  
 modifications?
   • In particular I use the unportable gcc function __builtin_abs: Is  
 there any way to access this in a portable way?
   • I would like to donate this code to scipy or any other suitable  
 project. What can/should I do to realise it?

 Remarks:

 The code contains 3 end-user routines:
   • A pure python implementation: Easy to read and modify --- it can  
 be cut out into a python source code.
   • A straight forward cython implementation: About 4 times as fast  
 as the python implementation.
   • An optimised cython implementation earning another factor of 2 in  
 speed (depends on the parameters used).
 I see this code as a research grade that would evolve in the near  
 future, as I am working on a related project, and hopefully  
 following your comments.

   Nadav
 bilateral.pyx



Re: [Numpy-discussion] 1.2 tasks

2008-08-05 Thread David Huard
On Tue, Aug 5, 2008 at 1:18 PM, Jarrod Millman [EMAIL PROTECTED] wrote:

 On Tue, Aug 5, 2008 at 8:48 AM, David Huard [EMAIL PROTECTED] wrote:
  Thanks for the feedback. Here is what will be printed:
 
  If new=False
 
  The original semantics of histogram is scheduled to be
  deprecated in NumPy 1.3. The new semantics fixes
  long-standing issues with outlier handling. The main
  changes concern
  1. the definition of the bin edges,
  now including the rightmost edge, and
  2. the handling of upper outliers,
  now ignored rather than tallied in the rightmost bin.
 
  Please read the docstring for more information.
 
 
 
  If new=None  (default)
 
  The semantics of histogram has been modified in
  the current release to fix long-standing issues with
  outlier handling. The main changes concern
  1. the definition of the bin edges,
 now including the rightmost edge, and
  2. the handling of upper outliers, now ignored rather
 than tallied in the rightmost bin.
  The previous behaviour is still accessible using
  `new=False`, but is scheduled to be deprecated in the
  next release (1.3).
 
  *This warning will not be printed in the 1.3 release.*
 
  Please read the docstring for more information.

 Thanks for taking care of this.  I thought that we were going to
 remove the new parameter in the 1.3 release.  Is that still the plan?
 If so, shouldn't the warning state "will be removed in the next minor
 release (1.3)" rather than "is scheduled to be deprecated in the next
 release (1.3)"?  In my mind the old behavior is deprecated in this
 release (1.2).


The roadmap that I propose is the following:

1.1 we warn about upcoming change, (new=False)
1.2 we make that change, (new=None) + warnings
1.3 we deprecate the old behaviour (new=True), no warnings.
1.4 remove the old behavior and remove the new keyword.

It's pretty much the roadmap laid out in the related ticket, which I wrote
following discussions on the ML.

This leaves plenty of time for people to make their changes, and my guess
is that a lot of people will appreciate this, given that you were asked
to delay the changes to histogram.


 The 1.2 release will be longer lived (~6 months) than the 1.1 release
 and I anticipate several bugfix releases (1.2.1, 1.2.2, 1.2.3, etc).
 So I think it is reasonable to just remove the old behavior in the 1.3
 release.


 --
 Jarrod Millman
 Computational Infrastructure for Research Labs
 10 Giannini Hall, UC Berkeley
 phone: 510.643.4014
 http://cirl.berkeley.edu/


Re: [Numpy-discussion] 1.2 tasks

2008-08-05 Thread David Huard
On Tue, Aug 5, 2008 at 1:36 PM, Jarrod Millman [EMAIL PROTECTED] wrote:

 On Tue, Aug 5, 2008 at 10:24 AM, Stéfan van der Walt [EMAIL PROTECTED]
 wrote:
  Could you put in a check for new=True, and suppress those messages?  A
  user that knows about the changes wouldn't want to see anything.

 Yes, that is already available.  Maybe the warning message for
 'new=None' should mention this, though.


Done



 --
 Jarrod Millman
 Computational Infrastructure for Research Labs
 10 Giannini Hall, UC Berkeley
 phone: 510.643.4014
 http://cirl.berkeley.edu/

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 64bit issue?

2008-08-05 Thread Charles R Harris
On Tue, Aug 5, 2008 at 12:20 PM, Charles Doutriaux [EMAIL PROTECTED]wrote:

 Hello I'm running into some strange error on a 64bit machine,
 I tracked it down to this line returning a NULL pointer, any idea why is
 this?
 I tried both numpy 1.1.1 and what's in trunk; numpy.test() passes for both

 Ok here's the uname of the machine and the offending line:

 Linux quartic.txcorp.com 2.6.20-1.2320.fc5 #1 SMP Tue Jun 12 18:50:49
 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux

  array = (PyArrayObject *)PyArray_SimpleNew(d, dims, self->type);

 where d is 3 and dims: 120,46,72
 self->type is 11

 it seems to pass with d=1, but I'm not 100% positive.

 Any idea on why it could be failing?


What is the type of dims? Is there also a problem with a 32 bit OS?

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] import numpy is slow

2008-08-05 Thread Christopher Barker
Robert Kern wrote:

 
 It's still pretty bad, though. I do recommend running Disk Repair like Bill 
 did.

I did that, and it found and did nothing -- I suspect it ran when I 
re-booted -- it did take a while to reboot.

However, this is pretty consistently what I'm getting now:

$ time python -c "import numpy"

real0m0.728s
user0m0.327s
sys 0m0.398s

Which is apparently pretty slow. Robert gets:

$ time python -c "import numpy"
python -c "import numpy"  0.18s user 0.46s system 88% cpu 0.716 total

Is that on a similar machine??? Are you running Universal binaries? 
Would that make any difference? I wouldn't think so, I'm just grasping 
at straws here.

This is a Dual 1.8GHz G5 desktop, running OS-X 10.4.11, Python 2.5.2 
(python.org build), numpy 1.1.1 (from binary on sourceforge)

I just tried this on a colleague's machine that is identical, and got 
about 0.4 seconds real -- so faster than mine, but still slow.

This still feels blazingly fast to me, as I was getting something like 
7+ seconds!

thanks for all the help,

-Chris


-- 
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR             (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

[EMAIL PROTECTED]
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 64bit issue?

2008-08-05 Thread Charles Doutriaux
Hi Chuck, works great on 32bit

  int *dims;
  dims = (int *)malloc(self->nd*sizeof(int));

and self->nd is 3

C.

Charles R Harris wrote:


 On Tue, Aug 5, 2008 at 12:20 PM, Charles Doutriaux 
 [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 Hello I'm running into some strange error on a 64bit machine,
 I tracked it down to this line returning a NULL pointer, any idea
 why is
 this?
 I tried both numpy 1.1.1 and what's in trunk; numpy.test() passes for
 both

 Ok here's the uname of the machine and the offending line:

 Linux quartic.txcorp.com http://quartic.txcorp.com/
 2.6.20-1.2320.fc5 #1 SMP Tue Jun 12 18:50:49
 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux

  array = (PyArrayObject *)PyArray_SimpleNew(d, dims, self->type);

 where d is 3 and dims: 120,46,72
 self->type is 11

 it seems to pass with d=1, but I'm not 100% positive.

 Any idea on why it could be failing?

  
 What is the type of dims? Is there also a problem with a 32 bit OS?
  
 Chuck
 


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 64bit issue?

2008-08-05 Thread Charles R Harris
On Tue, Aug 5, 2008 at 1:14 PM, Charles Doutriaux [EMAIL PROTECTED]wrote:

 Hi chuck, works great on 32bit

  int *dims;
  dims = (int *)malloc(self->nd*sizeof(int));

 and self->nd is 3


Should be

npy_intp *dims;

npy_intp will be 32 bits/ 64 bits depending on the architecture, ints tend
to always be 32 bits.
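This is visible from Python too: np.intp is the Python-side name for npy_intp, and its width follows the platform's pointer size, while a C int generally stays at 4 bytes. A quick check (assuming CPython with ctypes available):

```python
import ctypes
import numpy as np

# np.intp mirrors npy_intp: 4 bytes on 32-bit platforms, 8 on 64-bit,
# always matching the pointer size -- whereas a C int is typically 4
# bytes on both. That mismatch is exactly what bites when an int* dims
# array is handed to PyArray_SimpleNew on x86_64.
print(np.dtype(np.intp).itemsize == ctypes.sizeof(ctypes.c_void_p))  # True
print(ctypes.sizeof(ctypes.c_int))  # 4 on common platforms
```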

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 64bit issue?

2008-08-05 Thread Charles Doutriaux
Wow! It did it!

Are there other little tricks like this one I should track?

Thanks a lot! It would have taken me days to track this one!

C.


Charles R Harris wrote:


 On Tue, Aug 5, 2008 at 1:14 PM, Charles Doutriaux [EMAIL PROTECTED] 
 mailto:[EMAIL PROTECTED] wrote:

 Hi chuck, works great on 32bit

  int *dims;
  dims = (int *)malloc(self->nd*sizeof(int));

 and self->nd is 3

  
 Should be
  
 npy_intp *dims;
  
 npy_intp will be 32 bits/ 64 bits depending on the architecture, ints 
 tend to always be 32 bits.
  
 Chuck
 


___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 1.2 tasks

2008-08-05 Thread Jarrod Millman
On Tue, Aug 5, 2008 at 10:58 AM, David Huard [EMAIL PROTECTED] wrote:
 The roadmap that I propose is the following:

 1.1 we warn about upcoming change, (new=False)
 1.2 we make that change, (new=None) + warnings
 1.3 we deprecate the old behaviour (new=True), no warnings.
 1.4 remove the old behavior and remove the new keyword.

 It's pretty much the roadmap exposed in the related ticket that I wrote
 following discussions on the ML.

 This leaves plenty of time for people to make their changes, and my guess
 is that a lot of people will appreciate this, given that you were asked to
 delay
 the changes to histogram.

Sounds good.  Thanks for the clarification.

-- 
Jarrod Millman
Computational Infrastructure for Research Labs
10 Giannini Hall, UC Berkeley
phone: 510.643.4014
http://cirl.berkeley.edu/
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Bilateral filter

2008-08-05 Thread Alan G Isaac
On Tue, 5 Aug 2008, Nadav Horesh apparently wrote:
 As much as I know, the Python community is bound to the 
 BSD-style licence. I was hoping that the code would be 
 included in scipy, after I would attach the required 
 declaration (I do not know which), and maybe with some 
 adjustment to the coding style. 

 In short, for the next few days I am waiting for comments 
 and responses that would enable me to donate the code to 
 an existing project (scipy? pyvision?). Meanwhile I'll 
 improve the code a bit. If you can't wait, and are not willing 
 to use the code without a licence statement, I'll repost 
 the code with the scipy's licence declaration. 


I had a little trouble understanding what you meant above.
So just to be clear...

If you offer the code now under a BSD license, that
will be acceptable to SciPy.  Additionally, I cannot
imagine a project (including GPL projects) that would
find the BSD license a problem.  That is one of the
great things about the BSD and MIT licenses!

Also, just to be clear, if you release as BSD and
later wish to also release your code under another
license, you can do so.

Finally, anyone would be foolish to use the code
without your permission, and once you give your
permission you are licensing the code, so you
may as well be clear from the start.

IANAL,
Alan Isaac

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] 64bit issue?

2008-08-05 Thread Charles R Harris
On Tue, Aug 5, 2008 at 1:45 PM, Charles Doutriaux [EMAIL PROTECTED]wrote:

 Wow! It did it!

 Is there other little tricks like this one I should track?


The main change from the deprecated compatibility functions to the new
functions is the replacement of

int *dims;

by

npy_intp *dims;

Chuck
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Error creating an array

2008-08-05 Thread Sameer DCosta
I'm having a little trouble creating a numpy array with a specific
dtype. I can create the array b with dtype=int, but not with the dtype
I want. Any idea what I am doing wrong here?

In [450]: import numpy as np

In [451]: print np.__version__
1.2.0.dev5243

In [452]: dtype=np.dtype([('spam', 'i4',1),('ham', 'i4',1)] )

In [453]: data = [ [1,2], [3,4], [5,6] ]

In [454]: b = np.array( data, dtype=dtype)
---
TypeError Traceback (most recent call last)

/home/titan/sameer/tmp/test.py
 1
 2
 3
 4
 5

TypeError: expected a readable buffer object

In [455]: b = np.array( data, dtype=int)

In [456]: print b
[[1 2]
 [3 4]
 [5 6]]


Thanks.
Sameer
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Error creating an array

2008-08-05 Thread Travis E. Oliphant
Sameer DCosta wrote:
 I'm having a little trouble creating a numpy array with a specific
 dtype. I can create the array b with dtype=int, but not with the dtype
 I want. Any idea what I am doing wrong here?
   
You must use tuples for the individual records when constructing 
arrays with the array command.

Thus,

data = [(1,2), (3,4), (5,6)]

will work.
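A quick check with the tuples in place (minimal sketch of the fix):

```python
import numpy as np

dt = np.dtype([('spam', 'i4'), ('ham', 'i4')])

# Tuples map their items onto the fields of a single record;
# inner lists are not interpreted that way.
data = [(1, 2), (3, 4), (5, 6)]
b = np.array(data, dtype=dt)
print(b['spam'])  # [1 3 5]
print(b['ham'])   # [2 4 6]
```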

-Travis

___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Error creating an array

2008-08-05 Thread Sameer DCosta
On Tue, Aug 5, 2008 at 2:35 PM, Travis E. Oliphant
[EMAIL PROTECTED] wrote:

 You must uses tuples for the individual records when constructing
 arrays with the array command.

Thanks Travis. Is there a reason why numpy insists on tuples?
Anyway, moving on, this brings me to the real problem I wanted to
post. I can do an attribute lookup on an element of a recarray which
has been created by fromrecords; however, I cannot do attribute lookups
on ndarrays that have been viewed as recarrays. Any ideas on how to
proceed?

In [469]: import numpy as np

In [470]: print np.__version__
1.2.0.dev5243

In [471]: dtype=np.dtype([('spam', 'i4'),('ham', 'i4')] )

In [472]: data = [ (1,2), (3,4), (5,6) ]

In [473]: n = np.rec.fromrecords(data, dtype=dtype)

In [474]: print "n[0].spam=", n[0].spam
n[0].spam= 1

In [475]: b = np.array( data, dtype=dtype)

In [476]: b = b.view(np.rec.recarray)

In [477]: print "b[0].spam=", b[0].spam
b[0].spam=---
AttributeErrorTraceback (most recent call last)

/home/titan/sameer/tmp/test.py
 1
  2
  3
  4
  5

AttributeError: 'numpy.void' object has no attribute 'spam'

In [478]: type(n[0])
Out[478]: <class 'numpy.core.records.record'>

In [479]: type(b[0])
Out[479]: <type 'numpy.void'>
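For what it's worth, dictionary-style field access works on plain structured arrays and their element scalars without any recarray view, so it sidesteps the numpy.void attribute problem (a workaround sketch, not an explanation of the record/void discrepancy):

```python
import numpy as np

dt = np.dtype([('spam', 'i4'), ('ham', 'i4')])
b = np.array([(1, 2), (3, 4), (5, 6)], dtype=dt)

# String-key lookup works on both the array and its scalars.
print(b[0]['spam'])  # 1
print(b['ham'])      # [2 4 6]
```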
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-discussion