Re: [Numpy-discussion] [RFC] should we argue for a matrix power operator, @@?

2014-03-16 Thread Fabrice Silva
On Saturday 15 March 2014 at 04:32, Nathaniel Smith wrote:
 Hi all,
 
 Here's the second thread for discussion about Guido's concerns about
 PEP 465. The issue here is that PEP 465 as currently written proposes
 two new operators, @ for matrix multiplication and @@ for matrix power
 (analogous to * and **):
   http://legacy.python.org/dev/peps/pep-0465/

Another use case is tensor contraction.
Matrix multiplication is a particular case of tensor contraction for
matrices seen as 2nd-order tensors:
(A @ B)_{ij} = A_{ik} B_{kj}
using Einstein summation notation.

@@ might also be used for the double contraction frequently used in
continuum mechanics. For example, the relation between strain and stress
(2nd-order tensors) involves the elasticity tensor (a 4th-order one)
through the double contraction:
S_{ij} = C_{ijkl} E_{kl}
which could simply be computed as S = C @@ E, the variables S, E and C
being instances of whatever class represents tensors.
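For the record, both contractions are already expressible with np.einsum; a minimal sketch (the 3x3x3x3 tensor C here is random data standing in for an elasticity tensor, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((3, 3, 3, 3))  # stand-in for a 4th-order elasticity tensor
E = rng.standard_normal((3, 3))        # stand-in for a 2nd-order strain tensor
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Matrix multiplication as a single contraction: (A @ B)_ij = A_ik B_kj
AB = np.einsum('ik,kj->ij', A, B)

# Double contraction: S_ij = C_ijkl E_kl
S = np.einsum('ijkl,kl->ij', C, E)
```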

My two cents

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] savetxt trouble

2013-02-20 Thread Fabrice Silva
On Wednesday 20 February 2013 at 13:35, Robert Kern wrote:
 On Wed, Feb 20, 2013 at 1:25 PM, Neal Becker ndbeck...@gmail.com wrote:
  I tried to save a vector as a csv, but it didn't work.
 
  The vector is:
  a[0,0]
  array([-0.70710678-0.70710678j,  0.70710678+0.70710678j,
  0.70710678-0.70710678j,  0.70710678+0.70710678j,
 ...
   np.savetxt ('test.out', a[0,0], delimiter=',')
 
  The saved txt file says:
   (-7.071067811865540120e-01+-7.071067811865411334e-01j)
   (7.071067811865535679e-01+7.071067811865415775e-01j)
   (7.071067811865422437e-01+-7.071067811865529018e-01j)
   (7.071067811865520136e-01+7.071067811865431318e-01j)
   ...
 
 What were you expecting? A single row? savetxt() always writes out
 len(arr) rows. Reshape your vector into a (1,N) array if you want a
 single row.

The formatting of the imaginary part seems broken: note the '+-'
sequence preceding negative imaginary parts.
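A sketch of a workaround, pending a fix of the complex formatting: store the real and imaginary parts as separate float columns (the data below is illustrative, and an in-memory buffer replaces the file):

```python
import io

import numpy as np

a = np.array([-0.70710678 - 0.70710678j, 0.70710678 + 0.70710678j])

# View the complex vector as interleaved (real, imag) float columns and
# write one row; this sidesteps the ambiguous '+-' sign entirely.
buf = io.StringIO()
np.savetxt(buf, a.view(np.float64).reshape(1, -1), delimiter=',')
text = buf.getvalue()
```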



Re: [Numpy-discussion] 3D array problem in Python

2012-12-30 Thread Fabrice Silva
On Sunday 30 December 2012 at 16:47 +0400, Happyman wrote:
  Actually
 These two functions namely F1 and F2 are really exponential and Bessel 
 functions respectively. But I can not change its analytic form..
 
 I mean is there way to get more quickly the result?
 Let's say above mentioned two functions, both of them, one function but the 
 dimension I showed should not be changed..
 
 
 What do you think here whether the problem is with 3 dimension or???
 Thanks in advance for your answer!

The question was: do F1 and F2 change depending on i, j or k? The
answer seems to be yes.
I don't think any improvement from vectorisation is possible while using
integrate.quad or similar: this function performs adaptive meshing of
the integration interval. Singular points and/or discontinuities may
arise at different locations for the different integrands, so
vectorisation could even slow the computation down!
You may want to have a look at Romberg's method, which supports
vector-valued integrands.
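As an illustration of the trade-off, a fixed-sample rule such as scipy.integrate.fixed_quad can evaluate a whole family of integrands on shared nodes, whereas adaptive quad must loop; the exponential integrand below is just an example, not the poster's actual functions:

```python
import numpy as np
from scipy.integrate import quad, fixed_quad

k = np.arange(1, 6)  # one integrand per parameter value

# Adaptive quad: one scalar integrand at a time.
ref = np.array([quad(lambda x, kk=kk: np.exp(-kk * x), 0, 1)[0] for kk in k])

# Fixed Gauss-Legendre rule: all integrands evaluated on the same nodes,
# trading adaptivity for vectorisation (fine for smooth integrands only).
val, _ = fixed_quad(lambda x: np.exp(-np.outer(k, x)), 0, 1, n=20)
```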



[Numpy-discussion] Adding solvers to scipy.integrate [Was: A step toward merging odeint and ode]

2012-08-21 Thread Fabrice Silva
On Monday 20 August 2012 at 22:04 +0200, Ralf Gommers wrote:
 https://github.com/FabricioS/scipy/commit/f867f2b8133d3f6ea47d449bd760a77a7c90394e

 This is probably not worth the cost for existing users imho. It is a
 backwards compatibility break that doesn't really add anything except
 for some consistency (right?).

Hi Ralf,
Ok concerning this point.


In addition, I have been looking into suggesting additional solvers,
essentially simpler schemes, which would allow switching easily between
sophisticated solvers (lsode, vode, cvode) and basic schemes (Euler,
Crank-Nicolson, etc.).

I came across some code from the University of Montana's Computer
Science department:
http://wiki.cs.umt.edu/classes/cs477/index.php/Creating_ODE_Solver_Objects 
and asked Jesse Johnson (the instructor responsible for that class) what
the license for that code is. Here is his answer:

Any thing that you find on those pages, you may use. However,
I'm not sure how to go about officially giving the code a
particular license. Can I add a license to the wiki, stating
that it applies to all the code therein?

PS It is fantastic you're doing this. I've often thought that
scipy.ode could use some improvements.

He is cc'ed on this mail; could anyone concerned with scipy's license
requirements, and code licensing more generally, answer him?

Regards

-- 
Fabrice Silva



Re: [Numpy-discussion] A step toward merging odeint and ode

2012-08-16 Thread Fabrice Silva


On Wednesday 15 August 2012 at 20:54 +0200, Ralf Gommers wrote:
 I was mixing it up a bit, but yes: the _odepack extension and the C
 source for it. Not necessary to do that at once I guess, but wrapping
 the same function twice is once too many.
 
 And forgot in my first email: nice PR, looks good to me.

OK then, you can find two commits:

The first one removes the _odepack extension (and the related
multipack.h, __odepack.h and _odepackmodule.c files), replacing it with
Python counterparts in the odeint function itself.
https://github.com/FabricioS/scipy/commit/02e8a4856f29f4ad438fef2c86a41b266d6a9e6c
 

The second one suggests reverting the callback argument convention:
ydot = f(y, t, ...)
to ode's one:
ydot = f(t, y, ...)
This one would raise backward-compatibility issues, but aligns the
ordering with the convention defined at LLNL when ODEPACK was designed.
https://github.com/FabricioS/scipy/commit/f867f2b8133d3f6ea47d449bd760a77a7c90394e
 

-- 
Fabrice Silva



Re: [Numpy-discussion] A step toward merging odeint and ode

2012-08-15 Thread Fabrice Silva
On Tuesday 14 August 2012 at 21:21 +0200, Ralf Gommers wrote:
 On Sun, Aug 12, 2012, Fabrice Silva si...@lma.cnrs-mrs.fr wrote:
 I made a pull request [1] to integrate the LSODA solver that
 is used in odeint into the modular scipy.integrate.ode generic
 class. In a similar way as for vode, it just wraps the already
 present lsoda.f file (see .pyf file) and exposes it within an
 IntegratorBase subclass adjusting the coefficients before
 calling lsoda.
 
 Does that mean that odeint can be made a wrapper around lsoda and that
 the odepack static extension can be completely removed?

Hi Ralf,
The pull request makes it possible to run the integration through the
object-oriented interface ode with the same solver as the odeint
interface uses, i.e. lsoda, extending the set of integrators available
to the object-oriented interface.

As I understand the scipy.integrate architecture, we currently build:
* the odepack library, which holds all the Fortran sources required by
lsoda and vode at least;
* the _odepack extension, which defines the _odepack module needed by
odeint.

The latter could then be removed, with odeint becoming a wrapper around
the f2py-wrapped lsoda function. I suppose you are talking about the
_odepack extension, am I wrong?

-- 
Fabrice Silva



[Numpy-discussion] A step toward merging odeint and ode

2012-08-12 Thread Fabrice Silva
I made a pull request [1] to integrate the LSODA solver that is used in
odeint into the modular scipy.integrate.ode generic class. In a similar
way as for vode, it just wraps the already present lsoda.f file
(see .pyf file) and exposes it within an IntegratorBase subclass
adjusting the coefficients before calling lsoda.

Note that lsoda provides automatic switching between stiff and
non-stiff methods, a feature not present in the available vode
integrator.

Final note: the tests pass!
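Once the pull request is merged, using the solver through the object-oriented interface looks like the following sketch (note ode's f(t, y) argument order; the decay equation is just an example):

```python
import numpy as np
from scipy.integrate import ode

# Exponential decay y' = -y, y(0) = 1, solved with the lsoda integrator
# through the object-oriented interface.
solver = ode(lambda t, y: -y)
solver.set_integrator('lsoda')
solver.set_initial_value([1.0], 0.0)
y1 = solver.integrate(1.0)  # y at t = 1, close to exp(-1)
```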

Regards,

[1] https://github.com/scipy/scipy/pull/273 
-- 
Fabrice Silva



Re: [Numpy-discussion] Second try: possible bug in assignment to complex array

2012-08-10 Thread Fabrice Silva
On Friday 10 August 2012, Dave Hirschfeld wrote:
 Mark Bakker markbak at gmail.com writes:
  I think there is a problem with assigning a 1D complex array of length one
  to a position in another complex array.
  Example:
  a = ones(1,'D')
  b = ones(1,'D')
  a[0] = b
  ---
  TypeError Traceback (most recent call last)
  ipython-input-37-0c4fc6d780e3 in module()
   1 a[0] = b
  TypeError: can't convert complex to float
  
 
 I can't help unfortunately, but I can confirm that I also see the problem
 on Win32 Python 2.7.3, numpy 1.6.2.
 As a workaround it appears that slicing works:

Same on debian (unstable), Python 2.7, numpy 1.6.2
In [5]: a[0] = b
TypeError: can't convert complex to float
In [6]: a[0] = b[0]

Other workarounds: asscalar and squeeze
In [7]: a[0] = np.asscalar(b)
In [8]: a[0] = b.squeeze()
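For reference, a condensed sketch of the workarounds (np.asscalar has since been removed from numpy; item() is the modern spelling):

```python
import numpy as np

a = np.ones(1, dtype=complex)
b = np.ones(1, dtype=complex)

# a[0] = b raised "TypeError: can't convert complex to float" on numpy 1.6.2.
# Workarounds reported in the thread:
a[0] = b[0]         # scalar indexing
a[0] = b.item()     # modern replacement for np.asscalar(b)
a[0] = b.squeeze()  # drop the length-1 dimension
```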


-- 
Fabrice Silva



Re: [Numpy-discussion] [numpy.testing] re-import when using coverage

2012-05-17 Thread Fabrice Silva
On Wednesday 16 May 2012 at 22:58 +0200, Ralf Gommers wrote:

 Both coverage=True and coverage=False work with your attached package.
 But it seems you attached an old version, because test_a.py doesn't
 include the actual test. obj = a.b.mycls() in test_a.py executes
 fine, so it may be a problem with the way you wrote the test case.

Sorry, I forgot to update the archive...
Here is a new one that leads (on my machine) to the failure with
coverage.


-- 
Fabrice Silva


mypackage.tar
Description: Unix tar archive


Re: [Numpy-discussion] [numpy.testing] re-import when using coverage

2012-05-17 Thread Fabrice Silva
Nautilus and file-roller are *** on me...
I hope this one is good.
Thanks for being patient :)

Best regards

-- 
Fabrice Silva


mypackage.tar
Description: Unix tar archive


Re: [Numpy-discussion] [numpy.testing] re-import when using coverage

2012-05-17 Thread Fabrice Silva
On Thursday 17 May 2012 at 11:16 +0200, Ralf Gommers wrote:
 On Thu, May 17, 2012 at 10:48 AM, Fabrice Silva si...@lma.cnrs-mrs.frwrote:
 
  Nautilus and file-roller are *** on me...
  I hope this one is good.
  Thanks for being patient :)
 
 
 That was the right tar file. The issue was the one Josef was pointing too
 (cover-inclusive), which has already been fixed in
 https://github.com/numpy/numpy/commit/bfaaefe52.
 
 That option indeed reimports everything, so that a.b is b is False.

That was indeed the issue.
Thank you again (and Josef too)!

-- 
Fabrice Silva si...@lma.cnrs-mrs.fr
LMA UPR CNRS 7051



[Numpy-discussion] [numpy.testing] re-import when using coverage

2012-05-16 Thread Fabrice Silva
Hi,
I am running into trouble when using numpy.testing with coverage. A
minimal example package is attached to this email. Unpack and run:

$ python -c "import mypackage; mypackage.test(verbose=10, coverage=False)"
$ python -c "import mypackage; mypackage.test(verbose=10, coverage=True)"

Some explanations:
This package contains two module files, one of them (b.py) defining a
class (mycls), the other one (a.py) importing the first module. I then
add a simple test file that instantiates the class through the other
module (a.py) and checks the type of the resulting object against
b.mycls.
Without coverage, everything is OK. With coverage, I get an
(unexpected?) reload of the modules, leading to a mismatch of types...

The real case is somewhat more complicated, and I would prefer to keep
the instantiation through a.py. Is there a way to solve the problem?

Best regards

-- 
Fabrice Silva si...@lma.cnrs-mrs.fr
LMA UPR CNRS 7051


mypackage.tar
Description: Unix tar archive


Re: [Numpy-discussion] [numpy.testing] re-import when using coverage

2012-05-16 Thread Fabrice Silva
 maybe it's this
 http://thread.gmane.org/gmane.comp.python.numeric.general/49785/focus=49787
Thanks for your reply. I'm not sure it is the same issue; I'll have a
deeper look at that thread...

 It helped in my case, and I don't have a problem running the tests
 with your mypackage (using my patched numpy)

What I get:
$ python -c "import mypackage; mypackage.test(coverage=False, verbose=10)"
Reading a.py file...
Id(mycls in b) =  168022116
Running unit tests for mypackage
NumPy version 1.6.2rc1
NumPy is installed in /usr/lib/pymodules/python2.7/numpy
Python version 2.7.3rc2 (default, Apr 22 2012, 22:35:38) [GCC 4.6.3]
nose version 1.1.2
[...]
test_a.test_instantition ... Id(type(self)) =  168022116
Id(type(obj)) =  168022116
ok

--
Ran 1 test in 0.002s

OK
$ python -c "import mypackage; mypackage.test(coverage=True, verbose=10)"
Reading a.py file...
Id(mycls in b) =  143573092
Running unit tests for mypackage
NumPy version 1.6.2rc1
NumPy is installed in /usr/lib/pymodules/python2.7/numpy
Python version 2.7.3rc2 (default, Apr 22 2012, 22:35:38) [GCC 4.6.3]
nose version 1.1.2
[...]
Reading a.py file...
Id(mycls in b) =  150640036
test_a.test_instantition ... 
Id(type(self)) =  150640036
Id(type(obj)) =  150640036
FAIL

==
FAIL: test_a.test_instantition
--
Traceback (most recent call last):
  File /usr/lib/python2.7/dist-packages/nose/case.py, line 197, in 
runTest
self.test(*self.arg)
  File /tmp/cover/mypackage/tests/test_a.py, line 13, in 
test_instantition
assert a.b.is_mycls(obj)
AssertionError

[coverage results]
Ran 1 test in 0.006s

FAILED (failures=1)



-- 
Fabrice Silva si...@lma.cnrs-mrs.fr
LMA UPR CNRS 7051



Re: [Numpy-discussion] Owndata flag

2011-12-16 Thread Fabrice Silva
On Thursday 15 December 2011 at 18:09 +0100, Gregor Thalhammer wrote:

 There is an excellent blog entry from Travis Oliphant, that describes
 how to create a ndarray from existing data without copy:
 http://blog.enthought.com/?p=62
 The created array does not actually own the data, but its base
 attribute points to an object, which frees the memory if the numpy
 array gets deallocated. I guess this is the behavior you want to
 achieve. 
 Here is a cython implementation (for a uint8 array)

Even better: the addendum!
http://blog.enthought.com/python/numpy/simplified-creation-of-numpy-arrays-from-pre-allocated-memory/

Within Cython:

cimport numpy
numpy.set_array_base(my_ndarray,
                     PyCObject_FromVoidPtr(pointer_to_Cobj,
                                           some_destructor))

Seems OK.
Any objections to that?
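A pure-Python sketch of the same idea, with a ctypes array standing in for memory allocated by a C library (purely illustrative): the resulting ndarray does not own its data, and its base object keeps the real owner alive:

```python
import ctypes

import numpy as np

# Externally allocated memory (here a ctypes array as a stand-in for a
# buffer returned by a C library).
buf = (ctypes.c_double * 4)(1.0, 2.0, 3.0, 4.0)

# Wrap it without copying: the ndarray borrows the memory, and its base
# keeps the owner alive for as long as the array exists.
arr = np.frombuffer(buf, dtype=np.float64)
```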
-- 
Fabrice Silva




Re: [Numpy-discussion] Owndata flag

2011-12-16 Thread Fabrice Silva
On Friday 16 December 2011 at 15:33 +0100, Gregor Thalhammer wrote:
  Even better: the addendum!
  http://blog.enthought.com/python/numpy/simplified-creation-of-numpy-arrays-from-pre-allocated-memory/
  
  Within cython:
  cimport numpy
  numpy.set_array_base(my_ndarray,  PyCObject_FromVoidPtr(pointer_to_Cobj, 
  some_destructor))
  
  Seems OK.
  Any objections about that ?
 
 This is ok, but CObject is deprecated as of Python 3.1, so it's not portable 
 to Python 3.2.

My guess is then that the PyCapsule object is the way to go...

-- 
Fabrice Silva



[Numpy-discussion] Owndata flag

2011-12-15 Thread Fabrice Silva
How can one arbitrarily make an ndarray own its data?

More explicitly, I have a temporary home-made C structure that holds a
pointer to an array. I prepare (using Cython) a numpy.ndarray using the
PyArray_NewFromDescr function. I can delete my temporary C structure
without freeing the memory holding the array, but I would like the
numpy.ndarray to become the owner of the data.

How do I do such a thing?
-- 
Fabrice Silva



[Numpy-discussion] dtype and pep3118

2011-11-23 Thread Fabrice Silva
Hi folks,
how should I specify a PEP 3118 buffer format that would be understood
by numpy as the following dtype:

dtype = [('t11', '|f8'), ('t22', '|f8'), ('t33', '|f8'),
         ('t23', '|f8'), ('t13', '|f8'), ('t12', '|f8')]

so that I can manipulate a structured array and its fields?

I tried strings like
T{d:t11: d:t22: d:t33: d:t23: d:t13: d:t12:}:Tensor2d:
d:t11: d:t22: d:t33: d:t23: d:t13: d:t12:

without success.
-- 
Fabrice Silva



Re: [Numpy-discussion] dtype and pep3118

2011-11-23 Thread Fabrice Silva
On Wednesday 23 November 2011 at 15:52 +0100, Pauli Virtanen wrote:
 
  dtype = [('t11', '|f8'), ('t22', '|f8'), ('t33', '|f8'),
 ...  ('t23', '|f8'), ('t13', '|f8'), ('t12', '|f8')]
  x = np.zeros([1], dtype=dtype)
  memoryview(x).format
 'T{d:t11:d:t22:d:t33:d:t23:d:t13:d:t12:}'
  np.array(memoryview(x))
 array([(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)],
   dtype=[('t11', 'f8'), ('t22', 'f8'), ('t33', 'f8'), ('t23',
 'f8'), ('t13', 'f8'), ('t12', 'f8')])

Thanks!

-- 
Fabrice Silva



Re: [Numpy-discussion] yet another indexing question

2011-10-14 Thread Fabrice Silva
On Friday 14 October 2011 at 08:04 -0400, Neal Becker wrote:
 suppose I have:
 
 In [10]: u
 Out[10]: 
 array([[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]])
 
 And I have a vector v:
  v = np.array ((0,1,0,1,0))
 
 I want to form an output vector which selects items from u where v is the 
 index 
 of the row of u to be selected.
 
 In the above example, I want:
 
 w = [0,6,2,8,4]
 
 I can't seem to find a syntax that does this.
 
 Now, more importantly, I need the result to be a reference to the original 
 array 
 (not a copy), because I'm going to use it on the LHS of an assignment.  Is 
 this 
 possible?

What about np.where?
http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html

w = np.where(v, u[1], u[0])

If you want more than two options (more than two rows in u), np.choose
may be more appropriate:
http://docs.scipy.org/doc/numpy/reference/generated/numpy.choose.html
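Concerning the LHS part of the question: fancy indexing with a column-index vector performs the same selection, and on the left-hand side it assigns back into the original array, although the RHS expression itself still yields a copy, not a view. A small sketch:

```python
import numpy as np

u = np.arange(10).reshape(2, 5)
v = np.array([0, 1, 0, 1, 0])

# Selection with np.where, as suggested above:
w = np.where(v, u[1], u[0])

# The same selection with fancy indexing; on the left-hand side this
# assigns back into u (but reading u[v, cols] returns a copy):
cols = np.arange(u.shape[1])
w2 = u[v, cols]
u[v, cols] = -1
```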
-- 
Fabrice Silva



Re: [Numpy-discussion] simulate AR

2011-10-14 Thread Fabrice Silva
On Friday 14 October 2011 at 10:49 -0400, josef.p...@gmail.com wrote:
 On Fri, Oct 14, 2011 at 10:24 AM, Alan G Isaac alan.is...@gmail.com wrote:
  As a simple example, if I have y0 and a white noise series e,
  what is the best way to produces a series y such that y[t] = 0.9*y[t-1] + 
  e[t]
  for t=1,2,...?
 
  1. How can I best simulate an autoregressive process using NumPy?
 
  2. With SciPy, it looks like I could do this as
  e[0] = y0
  signal.lfilter((1,),(1,-0.9),e)
  Am I overlooking similar (or substitute) functionality in NumPy?
 
 I don't think so. At least I didn't find anything in numpy for this.
 An MA process would be a convolution, but for simulating AR I only
 found signal.lfilter. (unless numpy has gained extra features that I
 don't have in 1.5)
 
 Except, I think it's possible to do it with fft, if you want to
 fft-inverse-convolve (?)
 But simulating an ARMA with fft was much slower than lfilter in my
 short experimentation with it.

For a speed comparison between lfilter, convolve, etc., see:
http://www.scipy.org/Cookbook/ApplyFIRFilter
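For completeness, the lfilter call from Alan's message, checked against the explicit AR(1) recursion (the random driving noise is purely illustrative):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
e = rng.standard_normal(1000)

# AR(1) process y[t] = 0.9*y[t-1] + e[t], simulated as an IIR filter.
y = lfilter([1.0], [1.0, -0.9], e)

# The same recursion written out explicitly (much slower in pure Python).
y_loop = np.empty_like(e)
y_loop[0] = e[0]
for t in range(1, len(e)):
    y_loop[t] = 0.9 * y_loop[t - 1] + e[t]
```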

-- 
Fabrice Silva



Re: [Numpy-discussion] help translating matlab to numpy

2011-08-14 Thread Fabrice Silva
On Sunday 14 August 2011 at 12:43 -0500, a...@ajackson.org wrote:
 I'm translating some code from Matlab to numpy, and struggling a bit
 since I have very little knowledge of Matlab.
 
 My question is this - the arg function in Matlab (which seems to be 
 deprecated,
 they don't show it in their current documentation) is exactly equivalent to
 what in Numpy? I know it is angle(x, deg=1) to get degrees instead of radians,
 but what is the output range for the Matlab function -pi to pi, or 0 to 2*pi ?

Can you tell which toolbox your arg function comes from? Using help (or
which) for instance... It could help!
-- 
Fabrice



Re: [Numpy-discussion] Fill a particular value in the place of number satisfying certain condition by another number in an array.

2011-08-01 Thread Fabrice Silva
On Monday 1 August 2011 at 15:01 +0530, dileep kunjaai wrote:
 Dear sir,
 How can we fill a particular value in the place of number satisfying
 certain condition by another number in an array.
 A contain some negative value i want to change the negative numbers to
 '0'. I used 'masked_where', command but I failed.

Does np.clip fulfill your requirements?
http://docs.scipy.org/doc/numpy/reference/generated/numpy.clip.html

Be aware that it needs an upper limit (which can be np.inf).

Another option:
A[A < 0] = 0.
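Both options in a few lines, for the record:

```python
import numpy as np

A = np.array([-2.0, -0.5, 0.3, 1.7])

# np.clip needs both bounds; np.inf leaves the upper end open.
clipped = np.clip(A, 0.0, np.inf)

# Boolean-mask assignment modifies the array in place instead.
A[A < 0] = 0.0
```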

-- 
Fabrice Silva



Re: [Numpy-discussion] get a dtypes' range?

2011-06-17 Thread Fabrice Silva
On Friday 17 June 2011 at 10:38 -0700, Christopher Barker wrote:
 Hi all,
 
 I'm wondering if there is a way to get the range of values a given dtype 
 can hold?
 
 essentially, I'm looking for something like sys.maxint, for for whatever 
 numpy dtype I have n hand (and eps for floating point types).
 
 I was hoping there would be something like
 
 a_dtype.max_value
 a_dtype.min_value
 
 etc.
 
 In this case, I have a uint16, so I can hard code it, but it would be 
 nice to be able to be write the code in a more generic fashion.

http://docs.scipy.org/doc/numpy/reference/routines.dtype.html#data-type-information
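Concretely, np.iinfo and np.finfo from that page cover the request:

```python
import numpy as np

# Integer dtypes: representable range.
ii = np.iinfo(np.uint16)      # ii.min == 0, ii.max == 65535

# Floating dtypes: range and machine epsilon.
ff = np.finfo(np.float64)     # ff.max, ff.min, ff.eps
```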

-- 
Fabrice Silva



Re: [Numpy-discussion] [ANN]: OpenPiv. A python package for PIV image analysis

2011-04-25 Thread Fabrice Silva
On Sunday 24 April 2011 at 16:53 +0200, Davide wrote:
 Hi all,
 I am pleased to announce the re-birth of OpenPiv, a python package for 
 the analysis of PIV images. PIV is a well established optical 
 measurement technique for fluid flows. The package has the goal of 
 provide an implementation of state-of-the-art processing algorithms, and 
 compete with commercial closed-source software packages.
 OpenPiv is in alpha state and several thing has to be done. However, the 
 code is already usable and a pure python implementation of the standard 
 cross-correlation algorithm is already available.
 OpenPiv needs developers and contributors, helping in with code and 
 documentation. If you can provide some, and if you are interested in 
 becoming a core developer, please drop us an email at openpiv-develop at 
 lists dot sourceforge dot net. A draft website can be found at 
 www.openpiv.net/python.
 
 Thanks for your attention,
 
 Davide Lasagna

Hi Davide,
Investigations in fluid mechanics led me, a few months ago, to build a
wrapper around the I/O functions of Davis (LaVision) [1], a commercial
solution used to handle the complete procedure from camera calibration
to postprocessing of velocity fields.
I had to postprocess previous measurements stored in the Davis format
for a deeper analysis than their software allows, so I needed a tool
able to read data from their (almost documented) format.
The solutions I found by googling were not satisfying at the time. For
example, PIVMAT [2] only worked for 2D fields, while we were dealing
with 3D flows and stereoscopic PIV. After speaking with its author, it
appeared that it would require too much work to support the third
component in a decent way.

The ctypes-based wrappers are available here:
https://launchpad.net/libim7 
https://github.com/FabricioS/libim7 
It is a rather dumb piece of code, but it allows extracting images or
velocity fields from binary files (extensions im7, imx and vc7),
avoiding some of the traps in the extraction of the data.

I would be happy if you find this tool useful within OpenPIV. It would
also be possible to replace the ctypes dependency with Cython...

Best regards


[1] http://www.lavision.de/en/products/davis.php
[2] http://www.fast.u-psud.fr/pivmat/
-- 
Fabrice Silva



Re: [Numpy-discussion] How to import input data to make ndarray for batch processing?

2010-11-18 Thread Fabrice Silva
On Thursday 18 November 2010 at 20:19 +0530, Venkat wrote:
 I have many files to convert like this. All of them are having file
 names like 0, 1, 2, 500. with out any extension.
 Actually, I renamed actual files so that I can import them in Matlab
 for batch processing. Since Matlab also new for me, I thought I will
 try Numpy first. 
 Can any body help me in writing the script to do this for making batch
 processing.

One point that others did not answer is the 'batch' part. If your files
are named sequentially, you can template the argument you pass to the
loader function.
For example, if you load with numpy.loadtxt your data stored in files
named 'mydata0', 'mydata1', ..., 'mydata511', your batch processing may
look like this:

for ind in xrange(512):
    filename = 'mydata%d' % ind
    data = numpy.loadtxt(filename, ...)
    # ... your processing on a single file

with an adapted range of indices (see the xrange doc), string formatting
(see the string doc) and arguments to the loader function.
-- 
Fabrice Silva



Re: [Numpy-discussion] portable doctests despite floating points numbers

2010-10-26 Thread Fabrice Silva
Another solution
http://code.activestate.com/recipes/577124-approximately-equal/ 

-- 
Fabrice Silva



Re: [Numpy-discussion] polynomial fromroots

2010-10-10 Thread Fabrice Silva
On Sunday 10 October 2010 at 14:19 -0400, josef.p...@gmail.com wrote:
 Good, this confirms the differences in convention z, or 1/z (and why I
 never remember if the roots are supposed to be inside or outside the
 unit circle)

Some tricks:
* In the z-transform, a 1-sample delay (in the time domain) corresponds
to multiplication by z**(-1):

H(z)            h[k]      present (current instant)
H(z)*z**(-N)    h[k-N]    past (N samples backward)
H(z)*z**(+N)    h[k+N]    future (N samples forward)

* In linear filter theory (or perhaps generally speaking), you may focus
on causal filters, i.e. filters whose current output depends on past
samples. You then build a difference equation as a polynomial in
z**(-1).

* Consider the following transform between z and w (complex):
z = exp(+jwt) for some arbitrary t > 0.
  z inside the unit circle associates with a positive imaginary part of
  w; t -> exp(jwt) is then a decaying (stable) function of time.
  z outside the unit circle leads to an unstable function.

That's why you should prefer the roots of the z**(-1) polynomial (taken
in terms of z) to lie inside the unit circle.
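A small numerical illustration of the last point (the 0.9 coefficient is arbitrary):

```python
import numpy as np

# H(z) = 1 / (1 - 0.9*z**-1); in positive powers of z the denominator is
# z - 0.9, so the pole sits at z = 0.9, inside the unit circle.
pole = np.roots([1.0, -0.9])[0]
stable = abs(pole) < 1

# The impulse response h[k] = 0.9**k indeed decays with time.
h = 0.9 ** np.arange(50)
```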

-- 
Fabrice Silva



Re: [Numpy-discussion] Array concatenation performance

2010-07-15 Thread Fabrice Silva
On Thursday 15 July 2010 at 16:05 +0100, John Porter wrote:
 You're right - I screwed up the timing for the one that works...
 It does seem to be faster.
 
 I've always just built arrays using nx.array([]) in the past though
 and was surprised that it performs so badly.

Can anyone provide an explanation of (or a pointer to) such
differences?
Thanks



Re: [Numpy-discussion] Another reality check

2010-07-12 Thread Fabrice Silva
On Monday 12 July 2010 at 18:14 +1000, Jochen Schröder wrote:
 On 07/12/2010 12:36 PM, David Goldsmith wrote:
  On Sun, Jul 11, 2010 at 6:18 PM, David Goldsmith
  d.l.goldsm...@gmail.com mailto:d.l.goldsm...@gmail.com wrote:
 
  In numpy.fft we find the following:
 
  Then A[1:n/2] contains the positive-frequency terms, and A[n/2+1:]
  contains the negative-frequency terms, in order of decreasingly
  negative frequency.
 
  Just want to confirm that decreasingly negative frequency means
  ..., A[n-2] = A_(-2), A[n-1] = A_(-1), as implied by our definition
  (attached).
 
  DG
 
 
  And while I have your attention :-)
 
  For an odd number of input points, A[(n-1)/2] contains the largest
  positive frequency, while A[(n+1)/2] contains the largest [in absolute
  value] negative frequency.  Are these not also termed Nyquist
  frequencies?  If not, would it be incorrect to characterize them as the
  largest realizable frequencies (in the sense that the data contain no
  information about any higher frequencies)?
 
  DG
 
 I would find the term the largest realizable frequency quite 
 confusing. Realizing is a too ambiguous term IMO. It's the largest 
 possible frequency contained in the array, so Nyquist frequency would be 
 correct IMO.

Denoting Fs the sampling frequency (Fs/2 the Nyquist frequency):

For even n:
A[n/2-1] stores frequency Fs/2 - Fs/n, i.e. the Nyquist frequency less a
small quantity.
A[n/2] stores frequency Fs/2, i.e. exactly the Nyquist frequency.
A[n/2+1] stores frequency -Fs/2 + Fs/n, i.e. the Nyquist frequency less
a small quantity, on the negative-frequency side.

For odd n:
A[(n-1)/2] stores frequency Fs/2 - Fs/(2n), and A[(n+1)/2] the opposite
negative frequency. But please note that the transform does not compute
the content at the exact Nyquist frequency! That justifies the careful
wording 'largest realizable frequency'.

Note that the equation for the inverse DFT should read m = 0...n-1, not
n = 0...n-1...
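np.fft.fftfreq makes this frequency layout explicit for both cases:

```python
import numpy as np

Fs = 8.0

# Even n: A[n/2] holds exactly +/-Fs/2 (the -4.0 entry below).
f_even = np.fft.fftfreq(8, d=1.0 / Fs)    # [0, 1, 2, 3, -4, -3, -2, -1]

# Odd n: the grid stops short of +/-Fs/2 by Fs/(2n).
n = 7
f_odd = np.fft.fftfreq(n, d=1.0 / Fs)
largest = f_odd.max()                     # equals Fs/2 - Fs/(2n)
```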




Re: [Numpy-discussion] Nyquist frequency in numpy.fft docstring

2010-07-11 Thread Fabrice Silva
On Sunday 11 July 2010 at 16:13 -0700, David Goldsmith wrote:
 Hi!  I'm a little confused: in the docstring for numpy.fft we find the
 following:
 
 For an even number of input points, A[n/2] represents both positive
 and negative Nyquist frequency... 
 
 but according to http://en.wikipedia.org/wiki/Nyquist_frequency (I
 know, I know, I've bad mouthed Wikipedia in the past, but that's in a
 different context):
 
 The Nyquist frequency...is half the sampling frequency of a discrete
 signal processing system...The Nyquist frequency should not be
 confused with the Nyquist rate, which is the lower bound of the
 sampling frequency that satisfies the Nyquist sampling criterion for a
 given signal or family of signals...Nyquist rate, as commonly used
 with respect to sampling, is a property of a continuous-time signal,
 not of a system, whereas Nyquist frequency is a property of a
 discrete-time system, not of a signal.
 
 Yet earlier in numpy.fft's docstring we find:
 
 ...the discretized input to the transform is customarily referred to
 as a signal...
 
 Should we be using Nyquist rate instead of Nyquist frequency, and
 if not, why not?

To go further, the Nyquist frequency (like the sampling frequency) is in
fact a property of a sampling system. When dealing with the FFT, we are
handling the *output of such a system* (the sampled signal). Calling
A[n/2] the Nyquist frequency is therefore adequate.

The Nyquist rate is something you must care about *before* the
analog-digital conversion, considering the spectral content of the
continuous time signal.

My 2 pesos,
Fabricio



Re: [Numpy-discussion] OT? Distutils extension with shared libs

2010-07-07 Thread Fabrice Silva
Thanks for your answers.

 Three solutions:
  - ask your users to build the software and install zlib by
 themselves. On windows, I am afraid it means you concretely limit your
 userbase to practically 0.
  - build zlib as part of the build process, and keep zlib internally.
  - include a copy of the zlib library (the binary) in the tarball.

 You cannot build a library loadable with ctypes with distutils nor
 numpy.distutils. You need to implement it in distutils, or copy the
 code from one of the project which implemented it

Ok, the simplest may then be to build _im7.dll with make or scons and
include it as install_data in the python package... I was astonished
that the process that works on linux (building the shared object called
by ctypes) does not work on windows!

By the way, do you have any example of a project implementing it in
distutils?





[Numpy-discussion] OT? Distutils extension with shared libs

2010-07-06 Thread Fabrice Silva
I know it is not directly related to numpy (even if it uses
numpy.distutils), but I ask you folks how do you deal with code
depending on other libs.

In the libIM7 project ( https://launchpad.net/libim7 ), I wrap code from
a device manufacturer with ctypes in order to read Particle Image
Velocimetry (PIV) files stored by their software (formats im7 and vc7).

There is a dependency on zlib which is easy to solve on linux
(installing the zlib-dev package in debian). But as I want to use it
also on windows (sharing the commercial dongle amongst various
colleagues is an uncomfortable solution), I am trying to configure the
setup.py both for windows and linux. But I am new to development on
windows...

My questions are then:
- how do you deal with dependencies in distutils?
- what do you need to build against zlib (or another lib) in windows
using distutils ?

Thanks

Fabricio



[Numpy-discussion] Numpy, swig and docstring

2010-06-13 Thread Fabrice Silva
Hi folks,
I am trying to wrap a library with swig, distutils and numpy, and am
facing troubles. In fact, the swig documentation mentions that it is
possible to specify a module docstring in the %module directive:

%module(docstring="This is the example module's docstring") example

where example is the name of the module to be generated.
When using numpy.distutils, I get the following error:

building extension _example sources
error: mismatch of extension names: example.i provides None but expected
'example'

the build_src command tries to get the module name by using a regular
expression (see numpy/distutils/command/build_src.py)

_swig_module_name_match = re.compile(
    r'\s*%module\s*(.*\(\s*package\s*=\s*"(?P<package>[\w_]+)".*\)|)'
    r'\s*(?P<name>[\w_]+)', re.I).match

which in fact does not cope with the docstring option.
Is there a simple workaround or should I manually compile (which is
possible in this simple project) ?

Thanks



Re: [Numpy-discussion] Some help on matlab to numpy translation

2010-03-13 Thread Fabrice Silva
Le samedi 13 mars 2010 à 10:20 +0100, Nicolas Rougier a écrit :
 Hello,
 I'm trying to translate a small matlab program for the simulation in a
 2D flow in a channel past a cylinder and since I do not have matlab
 access, I would like to know if someone can help me, especially on
 array indexing. The matlab source code is available at:
 http://www.lbmethod.org/openlb/lb.examples.html and below is what I've
 done so far in my translation effort. 
 
 In the matlab code, there is a ux array of shape (1,lx,ly) and I do
 not understand syntax: ux(:,1,col) with col = [2:(ly-1)]. If
 someone knows, that would help me a lot...


As ux's shape is (1,lx,ly), ux(:,1,col) is equal to ux(1,1,col), which
is a vector with the elements [ux(1,1,2), ... ux(1,1,ly-1)].
Using : just after the reshape seems a little bit silly...
-- 
Fabrice Silva
LMA UPR CNRS 7051



Re: [Numpy-discussion] Some help on matlab to numpy translation

2010-03-13 Thread Fabrice Silva
  As ux's shape is (1,lx,ly), ux(:,1,col) is equal to ux(1,1,col), which
  is a vector with the elements [ux(1,1,2), ... ux(1,1,ly-1)].
  Using : just after the reshape seems a little bit silly...
 
 Except that python uses 0-based indexing and does not include the last
 number in a slice, while Matlab uses 1-based indexing and includes the
 last number, so really:
 ux(:,1,col)
 becomes:
 ux(0, 0, col) # or ux(:, 0, col)
 
 And if col is
 col = [2:(ly-1)]
 This needs to be:
 col = np.arange(1, ly - 1)

You are right about the 0 or 1 based indexing argument, but I was
speaking matlab language as visible in the symbols used for indexing
( () and not [] )... :)
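Putting the two answers together, a small numpy sketch (with made-up lx
and ly):

```python
import numpy as np

lx, ly = 4, 6
ux = np.arange(lx * ly, dtype=float).reshape(1, lx, ly)

# MATLAB: col = 2:(ly-1)   ->  0-based and end-exclusive in numpy
col = np.arange(1, ly - 1)

# MATLAB: ux(:, 1, col)    ->  keep the slice, shift the fixed index to 0
sub = ux[:, 0, col]
print(sub.shape)            # (1, 4), i.e. (1, ly - 2)

# Plain slicing gives the same values, without building the index vector:
assert np.array_equal(sub, ux[:, 0, 1:ly - 1])
```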

-- 
Fabrice Silva
LMA UPR CNRS 7051



[Numpy-discussion] f2py: variable number of arguments of variable lengths

2010-02-17 Thread Fabrice Silva
I previously coded a fortran function that needs a variable number of
scalar arguments. This number is not known at compile time, but at call
time. So I used to pass them within a vector, passing also the length of
this vector

  subroutine systeme(inc,t,nm,Dinc,sn)
C
C  evaluate the derivative of vector x at time t
C  with complex modes (sn). Used for the calculation
C  of auto-oscillations in resonator-valve coupled system.
C
  integer nm,np,ny,ind 
  double precision inc(1:2*nm+2), Dinc(1:2*nm+2)
  complex*16 sn(1:nm)
  
Cf2py double precision, intent(in) :: t
Cf2py integer, intent(in), optional :: nm
Cf2py double precision, intent(in), dimension(2*nm+2) :: inc
Cf2py double precision, intent(out), dimension(2*nm+2) :: Dinc
Cf2py complex, intent(in), dimension(nm) :: sn


I now want to pass, not nm float values, but nm arrays of variable
lengths. I expect to pass the following objects:
- nm: number of arrays
- L : a 1d-array (dimension nm) containing the lengths of each array
- np: the sum of lengths
- X : a 1d-array (dimension np) containing the concatenated arrays.

Does anyone have an alternative to this suggestion? Any tip or example?
Regards

-- 
Fabrice Silva
LMA UPR CNRS 7051



Re: [Numpy-discussion] f2py: variable number of arguments of variable lengths

2010-02-17 Thread Fabrice Silva
Le mercredi 17 février 2010 à 15:43 -0600, Robert Kern a écrit :
 On Wed, Feb 17, 2010 at 15:29, Fabrice Silva si...@lma.cnrs-mrs.fr wrote:
  I previously coded a fortran function that needs a variable number of
  scalar arguments. This number is not known at compile time, but at call
  time. So I used to pass them within a vector, passing also the length of
  this vector
 
   subroutine systeme(inc,t,nm,Dinc,sn)
 C
 C  evaluate the derivative of vector x at time t
 C  with complex modes (sn). Used for the calculation
 C  of auto-oscillations in resonator-valve coupled system.
 C
   integer nm,np,ny,ind
   double precision inc(1:2*nm+2), Dinc(1:2*nm+2)
   complex*16 sn(1:nm)
 
 Cf2py double precision, intent(in) :: t
 Cf2py integer, intent(in), optional :: nm
 Cf2py double precision, intent(in), dimension(2*nm+2) :: inc
 Cf2py double precision, intent(out), dimension(2*nm+2) :: Dinc
 Cf2py complex, intent(in), dimension(nm) :: sn
 
 
  I now want to pass, not nm float values, but nm arrays of variable
  lengths. I expect to pass the following objects :
  - nm: number of arrays
  - L : a 1d-array (dimension nm) containing the lengths of each array
  - np: the sum of lengths
  - X : a 1d-array (dimension np) containing the concatenated arrays.
 
 Yeah, that's pretty much what you would have to do.

What about the next step: a variable number of arguments that are
2d-arrays with different shapes ?

-- 
Fabrice Silva
LMA UPR CNRS 7051



Re: [Numpy-discussion] f2py: variable number of arguments of variable lengths

2010-02-17 Thread Fabrice Silva
Le mercredi 17 février 2010 à 16:21 -0600, Robert Kern a écrit :
  What about the next step: a variable number of arguments that are
  2d-arrays with different shapes ?
 
 - nm: number of arrays
 - ncols : a 1d-array (dimension nm) containing the number of columns
 in each array
 - nrows : a 1d-array (dimension nm) containing the number of rows in each 
 array
 - np: the sum of array sizes [(ncols * nrows).sum() in numpy terms]
 - X : a 1d-array (dimension np) containing the concatenated arrays.
 

I guess I will need to be careful when building the arrays from X.
Thanks!
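On the Python side, the flat layout Robert describes (and the careful
rebuilding) can be sketched as below. The helper names pack/unpack are
hypothetical; the Fortran routine would receive nm, nrows, ncols and X:

```python
import numpy as np

def pack(arrays):
    # Flatten a list of 2-d arrays into nm, nrows, ncols and one flat X
    nrows = np.array([a.shape[0] for a in arrays])
    ncols = np.array([a.shape[1] for a in arrays])
    X = np.concatenate([a.ravel() for a in arrays])
    return len(arrays), nrows, ncols, X

def unpack(nm, nrows, ncols, X):
    # Rebuild the original 2-d arrays from the flat layout
    out, start = [], 0
    for i in range(nm):
        size = nrows[i] * ncols[i]
        out.append(X[start:start + size].reshape(nrows[i], ncols[i]))
        start += size
    return out

arrays = [np.arange(6.0).reshape(2, 3), np.ones((3, 1))]
nm, nrows, ncols, X = pack(arrays)
rebuilt = unpack(nm, nrows, ncols, X)
```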

-- 
Fabrice Silva
LMA UPR CNRS 7051



Re: [Numpy-discussion] boolean arrays

2009-11-26 Thread Fabrice Silva
Le jeudi 26 novembre 2009 à 18:26 +0200, Nadav Horesh a écrit :
 It is obvious to me that True+False == True,. Why do you think it should
 be False?
 
I can understand that it is not obvious that '+' stands for logical
'or', and '*' for logical 'and'...
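A quick check of that mapping on numpy boolean arrays:

```python
import numpy as np

a = np.array([True, True, False, False])
b = np.array([True, False, True, False])

# '+' behaves as logical OR on booleans (so True + False == True)...
print(a + b)    # [ True  True  True False]
# ...and '*' behaves as logical AND
print(a * b)    # [ True False False False]

assert np.array_equal(a + b, a | b)
assert np.array_equal(a * b, a & b)
```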


-- 
Fabrice Silva si...@lma.cnrs-mrs.fr
LMA UPR CNRS 7051



Re: [Numpy-discussion] boolean arrays

2009-11-26 Thread Fabrice Silva
Le jeudi 26 novembre 2009 à 14:44 +0100, Gael Varoquaux a écrit :
 On Thu, Nov 26, 2009 at 02:43:14PM +0100, Fabrice Silva wrote:
  Le jeudi 26 novembre 2009 à 18:26 +0200, Nadav Horesh a écrit :
   It is obvious to me that True+False == True,. Why do you think it should
   be False?
 
  I would understand it is not obvious that '+' stands for logical 'or',
  and '*' for logical 'and'...
 
 In Boolean algebra, this is the common convention. The reason being that
 only 'or' can correspond to the additive law of an algebra: its null
 element is absorbing for 'and'.
 
 In other words, if you map '+' and '*' to the opposite, you'll get
 surprising behaviors.

I fully agree with you. My point was to complete Nadav's comment with
potentially missing information, trying to figure out why Nils expected
False...

-- 
Fabrice Silva si...@lma.cnrs-mrs.fr
LMA UPR CNRS 7051



Re: [Numpy-discussion] The problem with zero dimesnsional array

2009-09-29 Thread Fabrice Silva
Le mardi 29 septembre 2009 à 02:32 +0530, yogesh karpate a écrit :
 Dear All,
   I'm facing a big problem in the following. The code
 snippet is as follows
  % Compute the area
 indicator###
 for kT in range(leftbound,rightbound):
   #  Here leftbound and rightbound are both indexing arrays
 cutlevel = sum(s[(kT-ptwin):(kT+ptwin)],0)/(ptwin*2+1)
 corsig = s[(kT-swin+1):kT]-cutlevel
 areavalue1 =sum((corsig),0)
 #print areavalue.size
  print leftbound, rightbound
  Tval=areavalue1[leftbound:rightbound]
 Everything works fine till areavalue1, then whenever I try to access
 the Tval=areavalue1[leftbound:rightbound]
 it says IndexError: invalid index to scalar variable..
 When I try to access areavalue1[0] it gives me the entire array, but for
 areavalue1[2:8] it gives the same error.
 Thanx in advance..
 Regards

Could you please check the shape of areavalue :
 print type(areavalue)
 print areavalue.shape
Make sure areavalue is not a list or a tuple with the array you are
interested in as the first element. If that is the case, first extract
the array or replace areavalue by areavalue[0] in your snippet.

-- 
Fabrice Silva si...@lma.cnrs-mrs.fr
LMA UPR CNRS 7051

PS: you need not email your request personally. I saw your message
on the mailing list, but I had no time to answer...



Re: [Numpy-discussion] The problem with arrays

2009-09-22 Thread Fabrice Silva
Le mardi 22 septembre 2009 à 17:42 +0530, yogesh karpate a écrit :
 I just tried your idea but the result is the same. It didn't help.
 
 2009/9/22 Nadav Horesh nad...@visionsense.com
 A quick answer with going into the details of your code:
 
 try
  plt.plot(R_time,R_amp,'go',hold=1)
 (one line before the last)
 
  Nadav

You may separate the computation part and the plotting one by storing
your results in R_time and R_amp arrays (length-a1 arrays).

Concerning the plotting issue: are you sure the points you want to be
displayed aren't already? Print the values within the loop:
 print (R_amp, R_time)
to check your values.
You may also inspect your graphs to see how many lines they have :
 plt.gca().get_children()
or 
 plt.gca().get_lines()
might help


-- 
Fabrice Silva si...@lma.cnrs-mrs.fr
LMA UPR CNRS 7051



Re: [Numpy-discussion] The problem with arrays

2009-09-22 Thread Fabrice Silva
Le mardi 22 septembre 2009 à 23:00 +0530, yogesh karpate a écrit :

 This is the main thing. When I try to store it in an array like
 R_time=array([R_t[0][i]]), it just stores the final value in that
 array when the loop ends. I can't get out of this for loop. I really
 have this small problem. I really need help on this guys.
 
 for i in range(a1):
 data_temp=(bpf[left[0][i]:
 right[0][i]])# left is an array and right is also an array
 maxloc=data_temp.argmax()   #taking indices of
 max. value of data segment
 maxval=data_temp[maxloc] 
 minloc=data_temp.argmin()
 minval=data_temp[minloc]
 maxloc = maxloc-1+left # add offset of present
 location
 minloc = minloc-1+left # add offset of present
 location
 R_index = maxloc
 R_t = t[maxloc]
 R_amp = array([maxval])
 S_amp = minval#%%% Assuming the S-wave is the lowest
 #%%% amp in the given window
 #S_t = t[minloc]
 R_time=array([R_t[0][i]])
 plt.plot(R_time,R_amp,'go');
 plt.show()

Two options :
- you define an empty list before the loop
 R_time = []
  and you append the computed value while looping
 for i:
 ...
 R_time.append(t[maxloc])

- or you define a preallocated array before the loop
 R_time = np.empty(a1)
  and fill it with the computed values
 for i:
 ...
 R_time[i] = t[maxloc]


Same thing with R_amp. After looping, whichever solution you choose,
you can plot the whole set of (time, value) tuples:
 plt.plot(R_time, R_amp)
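Both options in runnable form, with made-up indices standing in for the
peak-detection loop (t and maxlocs below are hypothetical data):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100)
a1 = 5
maxlocs = [3, 21, 47, 60, 88]        # made-up peak locations

# Option 1: grow a list inside the loop
R_time_list = []
for i in range(a1):
    R_time_list.append(t[maxlocs[i]])

# Option 2: preallocate an array and fill it
R_time = np.empty(a1)
for i in range(a1):
    R_time[i] = t[maxlocs[i]]

# Afterwards a single call plots every point at once:
# plt.plot(R_time, R_amp, 'go'); plt.show()
```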

-- 
Fabrice Silva si...@lma.cnrs-mrs.fr
LMA UPR CNRS 7051



Re: [Numpy-discussion] roots and high-order polynomial

2009-07-08 Thread Fabrice Silva
Le lundi 06 juillet 2009 à 17:57 +0200, Fabrice Silva a écrit :
 Le lundi 06 juillet 2009 à 17:13 +0200, Nils Wagner a écrit :
  IIRC, the coefficients of your polynomial are complex.
  So, you cannot guarantee that the roots are complex 
  conjugate pairs.
 
 Correct! If the construction is done with X1 and X1* treated separately,
 the coefficients appear to be complex. But if their processing is done
 simultaneously, the combination gives real coefficients. Let me modify
 this point and come back to tell you if it is the breakpoint...

Alright, the coefficients are now all real (I still wonder how it passed
the check tests I have).

You can find attached a simple test. Looking at the evolution of a.r, it
seems there really is a problem with the representation of high-order
polynomials. 'a' is initialized with its roots, but a.r is far from equal
to the input roots...
-- 
Fabrice Silva
Laboratory of Mechanics and Acoustics - CNRS
31 chemin Joseph Aiguier, 13402 Marseille, France.
import matplotlib.pyplot as plt
import numpy as np
import numpy.lib.polynomial as pol

plt.figure()
S1 = plt.subplot(111)

for n in np.arange(50):
a = pol.poly1d(np.arange(n)*1.,r=True)
print n,np.all(np.diff(a.r)==1), a.r
b = pol.poly1d(a.c, r=False)
sol = b.r
S1.plot(sol, n*np.ones_like(sol), 'r+')
plt.xlabel('roots')
plt.ylabel('order of polynomial')
plt.show()

attachment: test_roots.png


Re: [Numpy-discussion] roots and high-order polynomial

2009-07-06 Thread Fabrice Silva
Le vendredi 03 juillet 2009 à 10:00 -0600, Charles R Harris a écrit :

 What do you mean by erratic? Are the computed roots different from
 known roots? The connection between polynomial coefficients and
 polynomial values becomes somewhat vague when the polynomial degree
 becomes large, it is numerically ill conditioned.

For an illustration of what I mean by 'erratic', see
http://fsilva.perso.ec-marseille.fr/visible/script_als_nbmodes.png or
http://fsilva.perso.ec-marseille.fr/visible/script_als_nbmodes.pdf
with the real part of the roots on the left and the imaginary part on
the right:
- for low orders (<26), a pair of complex conjugate roots appears when
the order of the polynomial is increased (with a step equal to 2). As
expected in my physical application, their real parts are negative.
- for higher orders (>=26), there is no more 'hermitian
symmetry' (complex conjugate solutions), and the real parts become
positive...

The computation of the coefficients of the polynomial is off topic; I
already checked it and there are no errors.
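The ill-conditioning Charles mentions is easy to reproduce with a
Wilkinson-style experiment (a generic sketch, not the actual polynomial
from the application): start from exact integer roots, go through the
coefficients, and measure how far the recovered roots drift.

```python
import numpy as np

errs = {}
for n in (10, 20, 25):
    true_roots = np.arange(1, n + 1, dtype=float)
    coeffs = np.poly(true_roots)           # coefficients from the exact roots
    recovered = np.sort(np.roots(coeffs))  # back through the companion matrix
    errs[n] = np.max(np.abs(recovered - true_roots))
    print(n, errs[n])                      # the error grows quickly with n
```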

-- 
Fabrice Silva
Laboratory of Mechanics and Acoustics - CNRS
31 chemin Joseph Aiguier, 13402 Marseille, France.



Re: [Numpy-discussion] roots and high-order polynomial

2009-07-06 Thread Fabrice Silva
Le lundi 06 juillet 2009 à 08:16 -0600, Charles R Harris a écrit :

 Double precision breaks down at about degree 25  if things are well
 scaled, so that is suspicious in itself. Also, the companion matrix
 isn't Hermitean so that property of the roots isn't preserved by the
 algorithm. If it were Hermitian then eigh would be used instead of
 eig. That said, there are other ways of computing polynomial roots
 that might preserve the Hermitean property, but I don't know that that
 would solve your problem. 

I think there is a misunderstanding: I was referring to the fact that the
solutions had to be real or come in complex conjugate pairs, and not that
the companion matrix would be a hermitian matrix (which it isn't, due to
its construction).
Something I forgot to mention is that the roots are complex
eigenfrequencies of a physical system, the real and imaginary parts
expressing the damping and the frequency, respectively. If a positive
frequency is a solution, then the opposite negative one is a solution
too, but with the «same-signed» damping.



 The problem is floating point round off error in representing the
 coefficients. Even if you know the coefficients exactly they can't
 generally be represented exactly in double precision. Any
 computational roundoff error just adds to that. If the coefficients
 were all integers I would have more confidence in the no error claim.
 
 Where do the coefficients come from? Perhaps there are options there.

Here is the construction:
the coefficients are obtained from a modal analysis of a subsystem of a
bigger system. A quantity called impedance, depending on a variable X, is
the result of the combination of several terms:
Z(X) = C1/(X-X1)+C1*/(X-X1*)+...+CN/(X-XN)+CN*/(X-XN*)
where * denotes the complex conjugate.
I have to get the solutions X of an equation Ze(X)=N(X)/D(X), where N and
D are very low-order polynomials (orders 1 and 2). An alternative to
using iterative root finding (for example with scipy.optimize.fsolve) is
to expand the equation as a polynomial of order close to 2N.
I do understand this approach might seem non-obvious, but it is rather
efficient for a low value of N (<10)...
-- 
Fabrice Silva
Laboratory of Mechanics and Acoustics - CNRS
31 chemin Joseph Aiguier, 13402 Marseille, France.



Re: [Numpy-discussion] roots and high-order polynomial

2009-07-06 Thread Fabrice Silva
Le lundi 06 juillet 2009 à 17:13 +0200, Nils Wagner a écrit :
 IIRC, the coefficients of your polynomial are complex.
 So, you cannot guarantee that the roots are complex 
 conjugate pairs.

Correct! If the construction is done with X1 and X1* treated separately,
the coefficients appear to be complex. But if their processing is done
simultaneously, the combination gives real coefficients. Let me modify
this point and come back to tell you if it is the breakpoint...
-- 
Fabrice Silva
Laboratory of Mechanics and Acoustics - CNRS
31 chemin Joseph Aiguier, 13402 Marseille, France.



Re: [Numpy-discussion] roots and high-order polynomial

2009-07-06 Thread Fabrice Silva
Le lundi 06 juillet 2009 à 17:57 +0200, Fabrice Silva a écrit :
 Le lundi 06 juillet 2009 à 17:13 +0200, Nils Wagner a écrit :
  IIRC, the coefficients of your polynomial are complex.
  So, you cannot guarantee that the roots are complex 
  conjugate pairs.
 
 Correct! If the construction is done with X1 and X1* treated separately,
 the coefficients appear to be complex. But if their processing is done
 simultaneously, the combination gives real coefficients. Let me modify
 this point and come back to tell you if it is the breakpoint...

Alright, the coefficients are now all real (I still wonder how it passed
the check tests I have).

You can find attached a simple test. Looking at the evolution of a.r, it
seems there really is a problem with the representation of high-order
polynomials. 'a' is initialized with its roots, but a.r is far from equal
to the input roots...
-- 
Fabrice Silva
Laboratory of Mechanics and Acoustics - CNRS
31 chemin Joseph Aiguier, 13402 Marseille, France.
import matplotlib.pyplot as plt
import numpy as np
import numpy.lib.polynomial as pol

plt.figure()
S1 = plt.subplot(111)

for n in np.arange(50):
a = pol.poly1d(np.arange(n)*1.,r=True)
print n,np.all(np.diff(a.r)==1), a.r
b = pol.poly1d(a.c, r=False)
sol = b.r
S1.plot(sol, n*np.ones_like(sol), 'r+')
plt.xlabel('roots')
plt.ylabel('order of polynomial')
plt.show()

attachment: teste_roots.png


[Numpy-discussion] roots and high-order polynomial

2009-07-03 Thread Fabrice Silva
Hello
Has anyone looked at the behaviour of the (polynomial) roots function
for high-order polynomials ? I have an application which internally
searches for the roots of a polynomial. It works nicely for order less
than 20, and then has an erratic behaviour for higher values...

I looked into the source and noticed that roots is based on the
eigenvalues of the companion matrix. For high orders, the latter is
rather sparse. Would it improve anything to compute the eigenvalues
using sparse solvers?

-- 
Fabrice Silva
Laboratory of Mechanics and Acoustics - CNRS
31 chemin Joseph Aiguier, 13402 Marseille, France.



Re: [Numpy-discussion] roots and high-order polynomial

2009-07-03 Thread Fabrice Silva
Le vendredi 03 juillet 2009 à 11:52 +0200, Nils Wagner a écrit :
 You will need multiprecision arithmetic in that case.
 It's an ill-conditioned problem.

I should have said that the solutions are of the same order of
magnitude, so that the ratio between the lowest and the highest absolute
values of the eigenvalues is rather close to one.
Does that help?

Looking in the debian repository, I found python-gmpy, an interface to
GMP for Python. But how could it be used to compute the eigenvalues of
the companion matrix? I will look in the doc...

-- 
Fabrice Silva si...@lma.cnrs-mrs.fr
LMA UPR CNRS 7051



Re: [Numpy-discussion] roots and high-order polynomial

2009-07-03 Thread Fabrice Silva
Le vendredi 03 juillet 2009 à 14:43 +0200, Nils Wagner a écrit :
 Just curious - Can you provide us with the coefficients of 
 your polynomial ?



Working case :
Polynomial.c = 
[ -1.34100085e+57 +0.e+00j  -2.28806781e+55 +0.e+00j
  -4.34808480e+54 -3.27208577e+36j  -2.44499178e+52 -7.45966050e+34j
  -4.84319247e+51 -1.13783265e+34j   1.84249681e+47 +1.01074670e+33j
  -2.27761972e+48 +2.06997398e+31j   5.35043855e+45 +1.50239041e+30j
  -4.40346465e+44 +3.49745869e+27j   1.16055383e+42 -6.38037062e+25j
  -2.79370528e+40 -1.60550605e+23j   4.72851679e+37 +7.68429228e+20j
  -2.51340384e+35 +2.92105561e+17j]

A non-working case: a pretty large range of values!
Polynomial.c = 
[ -1.34100085e+219 +0.e+000j  -3.87620512e+217 +0.e+000j
  -1.59644076e+218 -4.96044532e+200j  -4.36379116e+216 -1.78047353e+201j
  -8.66515071e+216 -7.57270655e+199j  -2.22899258e+215 +1.20695742e+200j
  -2.84415205e+215 +2.04437917e+198j  -6.84491536e+213 +4.11962630e+198j
  -6.31415522e+213 +8.60411394e+196j  -1.41164416e+212 +6.40341637e+195j
  -1.00484766e+212 +1.05851522e+194j  -2.06870108e+210 -6.90610706e+194j
  -1.18557253e+210 -7.04790806e+192j  -2.22304125e+208 +8.69684716e+191j
  -1.05816505e+208 +4.10351383e+190j  -1.78206407e+206 -3.01765476e+190j
  -7.22785804e+205 -3.75229199e+188j  -1.07357199e+204 +1.60517046e+188j
  -3.79912531e+203 +5.72547583e+186j  -4.85746209e+201 -6.62989846e+185j
  -1.53768394e+201 +6.29641862e+183j  -1.63653463e+199 -3.94069765e+183j
  -4.77535548e+198 +2.70460428e+180j  -4.02975379e+196 -8.22561006e+180j
  -1.12909884e+196 -3.28170925e+178j  -7.00288278e+193 +1.87782740e+178j
  -2.00766909e+193 +5.47118767e+176j  -7.99891319e+190 +5.56488189e+175j
  -2.63703112e+190 +1.09163655e+174j  -4.91486942e+187 +6.37859777e+172j
  -2.49451367e+187 +1.07180124e+171j   1.17174241e+183 +4.42130463e+169j
  -1.63881690e+184 -1.39641308e+167j   2.63207649e+181 -5.52568751e+165j
  -7.08593327e+180 +8.81583002e+163j   1.88293454e+178 +2.44846832e+162j
  -1.85373357e+177 +1.80169585e+160j   5.67531810e+174 +8.34051533e+158j
  -2.54119737e+173 +6.83930275e+156j   6.95713759e+170 -3.36080695e+154j
  -1.36763548e+169 -4.25101484e+151j   2.31484033e+166 +2.65804116e+149j
  -1.19894847e+164 +1.50542119e+146j]

-- 
Fabrice Silva si...@lma.cnrs-mrs.fr
LMA UPR CNRS 7051



Re: [Numpy-discussion] Recurrence relationships

2009-05-06 Thread Fabrice Silva
Le mercredi 06 mai 2009 à 10:21 -0400, josef.p...@gmail.com a écrit :
 On Wed, May 6, 2009 at 10:00 AM, Talbot, Gerry gerry.tal...@amd.com wrote:
  Sorry, I guess I wasn't clear, I meant:
 
 for n in xrange(1,N):
   y[n] = A*x[n] + B*y[n-1]
 
  So y[n-1] is the result from the previous loop iteration.
 
 
 I was using scipy.signal for this but I have to look up what I did
 exactly. I think either signal.correlate or using signal.lti.
 
 Josef

Isn't that what scipy.signal.lfilter does?
y = scipy.signal.lfilter([A], [1, -B], x)

You should be careful with initial conditions...
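A sketch of that equivalence, assuming zero initial conditions
(y[-1] = 0) and made-up A, B and input data; the scipy comparison is
guarded in case scipy is not installed:

```python
import numpy as np

def recurrence(A, B, x, y0=0.0):
    # Direct loop: y[n] = A*x[n] + B*y[n-1]
    y = np.empty(len(x))
    prev = y0
    for n, xn in enumerate(x):
        prev = A * xn + B * prev
        y[n] = prev
    return y

A, B = 0.5, 0.9
x = np.sin(np.arange(20.0))
y = recurrence(A, B, x)

try:
    from scipy.signal import lfilter
    # lfilter([A], [1, -B], x) implements the same recurrence
    # with zero initial state
    assert np.allclose(y, lfilter([A], [1.0, -B], x))
except ImportError:
    pass
```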
-- 
Fabrice Silva si...@lma.cnrs-mrs.fr
LMA UPR CNRS 7051



Re: [Numpy-discussion] Initialize numpy array with other numpy arrays

2009-02-20 Thread Fabrice Silva
 On Thu, Feb 19, 2009 at 17:03, Frank Peacock fr...@gis4weather.com wrote:
  img[ngridn,ngride]=(ncolour[0],ncolour[1],ncolour[2])

Le jeudi 19 février 2009 à 18:24 -0600, Robert Kern a écrit :
 for i in range(3):
 img[ngridn,ngride,i] = ncolour[i]

Is it not possible to simply use
img[ngridn, ngride, :] = ncolour[:]
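Yes — with index arrays on the first two axes, broadcasting assigns the
colour triple to every selected pixel in one statement (a sketch with
made-up grids and colour):

```python
import numpy as np

img = np.zeros((5, 5, 3), dtype=np.uint8)
ngridn = np.array([0, 2, 4])                       # made-up row indices
ngride = np.array([1, 3, 0])                       # made-up column indices
ncolour = np.array([255, 128, 0], dtype=np.uint8)

# One broadcast assignment instead of a loop over the channels:
img[ngridn, ngride, :] = ncolour

print(img[2, 3])    # [255 128   0]
```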

-- 
Fabricio




Re: [Numpy-discussion] trouble subclassing ndarray

2008-12-03 Thread Fabrice Silva
Le mercredi 03 décembre 2008, Sébastien Barthélemy a écrit :
 Hello,
Hi Sebastien!

 I'm trying to write a small library of differential geometry, and I
 have some trouble subclassing ndarray.
 I'd like an HomogeneousMatrix class that subclasse ndarray and
 overloads some methods, such as inv().
 Here is my first try, the inv() function and the inv_v1() method work
 as expected, but the inv_v2() and inv_v3() methods do not change the
 object at all. Can somebody explain me what is happening here ?
 
 import numpy as np
 def inv(H):
 
 inverse of an homogeneous matrix
 
 R = H[0:3,0:3]
 p = H[0:3,3:4]
 return np.vstack( (np.hstack((R.T,-np.dot(R.T,p))), [0,0,0,1]))
 
 class HomogeneousMatrix(np.ndarray):
 def __new__(subtype, data=np.eye(4)):
 subarr = np.array(data)
 if htr.ishomogeneousmatrix(subarr):
 return subarr.view(subtype)
 else:
 raise ValueError
 def inv_v1(self):
 self[0:4,0:4] = htr.inv(self)
 def inv_v2(self):
 data = htr.inv(self)
 self = HomogeneousMatrix(data)
 def inv_v3(self):
 self = htr.inv(self)

There is something I missed: what is htr? I guess htr.inv is the inv
function defined before the class.
Another point: it seems weird to me that, in the class methods inv_v2
and inv_v3, you 'unref' the previous instance of HomogeneousMatrix and
bind the 'self' name to a new instance... In inv_v1, you just modify
the coefficients of the homogeneous matrix with the coefficients of
htr.inv(self).
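The rebinding issue can be seen without numpy at all (a minimal sketch
with a toy list subclass):

```python
class Box(list):
    def set_inplace(self, values):
        # Slice assignment mutates the object itself (like inv_v1)
        self[:] = values

    def set_rebind(self, values):
        # Rebinds only the local name 'self'; the caller's object is
        # untouched (this is what inv_v2 and inv_v3 effectively do)
        self = Box(values)

b = Box([1, 2, 3])
b.set_rebind([9, 9, 9])
print(b)          # still [1, 2, 3]
b.set_inplace([9, 9, 9])
print(b)          # now [9, 9, 9]
```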

-- 
Fabrice Silva [EMAIL PROTECTED]
LMA UPR CNRS 7051 - équipe S2M



Re: [Numpy-discussion] error importing a f2py compiled module.

2008-11-30 Thread Fabrice Silva
Le dimanche 30 novembre 2008 à 14:47 +0900, David Cournapeau a écrit :
 Fabrice Silva wrote:
 
  A way of solving this issue was to move the shared object file to
  another directory. But I want to figure out what is happening exactly.
  Googling a lot indicates that selinux would be the cause of this
  issue... Has anyone a suggestion?
 Disabling selinux ?

It might be an acceptable answer, but I found another one:
f2py-compiled modules seem not to work on ext3 file systems mounted
without the exec option. I only needed to add the 'exec' option
in /etc/fstab (debian).
-- 
Fabrice Silva [EMAIL PROTECTED]
LMA UPR CNRS 7051 - équipe S2M



Re: [Numpy-discussion] 2D phase unwrapping

2008-11-29 Thread Fabrice Silva
Le mercredi 26 novembre 2008 à 20:15 -0800, Jarrod Millman a écrit :
  One of my colleagues has been using 2D and 3D phase unwrapping code
  from Munther Gdeisat from GERI:  
  https://cirl.berkeley.edu/trac/browser/bic/trunk/recon-tools/src  
  
 https://cirl.berkeley.edu/trac/browser/bic/trunk/recon-tools/root/recon/punwrap
  This code is very high quality and replicating it from scratch would be
  a fairly daunting task. [...] If anyone is interested in picking this
  up and going through the effort of incorporating this code in scipy I
  would be happy to help resolve any remaining licensing issues.  I also
  may be able to devote some programming resources to helping out, if
  someone else volunteers to do the majority of the work.

So what is expected now? What has to be done in order to include it in
scipy?
-- 
Fabrice Silva



Re: [Numpy-discussion] error importing a f2py compiled module.

2008-11-29 Thread Fabrice Silva
Hi all,
I am facing this old problem again :

Fabrice Silva a écrit :
 Dear all
 I've tried to run f2py on a fortran file which used to be usable from
 python some months ago.
 Following command line are applied with success (no errors raised) :
 f2py -m modulename -c --f90exec=gnu95 tmpo.f
 
 When importing in Python with import modulename, I have an
 ImportError:
 Traceback (most recent call last):
   File Solveur.py, line 44, in module
 import modulename as Modele
 ImportError: modulename.so: failed to map segment from shared
 object: Operation not permitted

A way of solving this issue was to move the shared object file to
another directory. But I want to figure out what is happening exactly.
Googling a lot indicates that selinux would be the cause of this
issue...
Has anyone a suggestion?
-- 
Fabrice Silva [EMAIL PROTECTED]
LMA UPR CNRS 7051 - équipe S2M



Re: [Numpy-discussion] 2D phase unwrapping

2008-11-29 Thread Fabrice Silva
Le samedi 29 novembre 2008 à 21:12 +0200, Nadav Horesh a écrit :
  From my point of view: If I need it, I'll produce a python binding
 to GERI's code. My schedule is too tight to start this project if not
 strictly necessary. I have an idea for a simple code that might work
 for smooth images, which I might try in a few days. I am working on
 code to calculate the derivative of the phase, avoiding the unwrap.
 I'll release the code on request under a BSD licence.

Did you look at the punwrap code (links provided by Jarrod)
   https://cirl.berkeley.edu/trac/browser/bic/trunk/recon-tools/src  
   
  https://cirl.berkeley.edu/trac/browser/bic/trunk/recon-tools/root/recon/punwrap

There are already python bindings to the 2d and 3d unwrap code. I was
asking what has to be done to include that in scipy or a scikit...
Having a tight schedule too, it seems to me that this task would take
less time than writing a binding to the GERI code.
-- 
Fabrice Silva [EMAIL PROTECTED]
LMA UPR CNRS 7051 - équipe S2M



Re: [Numpy-discussion] 2D phase unwrapping

2008-11-26 Thread Fabrice Silva
Le mercredi 26 novembre 2008 à 09:17 +0200, Nadav Horesh a écrit :
 Is there a 2D phase unwrapping for python?
 I read a presentation by GERI (http://www.ljmu.ac.uk/GERI) that their code is 
 implemented in scipy, but I could not find it.

I had the same problem a couple of days ago! Playing with the unwrap
function and the axis argument, I still did not manage to get rid of
these *** lines!

The kind of results I got is available at:
http://fsilva.perso.ec-marseille.fr/visible/tmp/
- tmp00.png : no unwrapping at all
- tmp10.png : unwrapping along the vertical axis
- tmp11.png : unwrapping along the vertical axis and then unwrapping the
first line and applying the 2pi gaps to all lines...
- tmp20.png : unwrapping along the horizontal axis
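A small sketch of what np.unwrap does (and does not do) per axis; the synthetic phase here is hypothetical, not the data behind those plots:

```python
import numpy as np

# synthetic 2-D phase: a linear ramp along the horizontal axis
true_phase = np.tile(np.linspace(0.0, 8.0 * np.pi, 200), (50, 1))
wrapped = np.angle(np.exp(1j * true_phase))   # folded into (-pi, pi]

# np.unwrap only removes 2*pi jumps along ONE axis at a time, so
# unwrapping row by row can still leave 2*pi offsets between rows
unwrapped = np.unwrap(wrapped, axis=1)
```

On this simple ramp a single pass recovers the phase, but when rows start at different wrapped values the per-row 2*pi offsets remain, which is exactly the kind of line artifact a true 2-D unwrapper is meant to handle.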

-- 
Fabrice Silva



Re: [Numpy-discussion] question about the documentation of linalg.solve

2008-11-19 Thread Fabrice Silva
Le mercredi 19 novembre 2008 à 14:27 -0500, Alan G Isaac a écrit :
 So my question is not just what is the algorithm
 but also, what is the documentation goal?

Concerning the algorithm (only):
in Joshua's answer, you might have seen that solve is a wrapper around
the LAPACK *gesv routines (z* or d*, depending on the input type).

http://www.netlib.org/lapack/complex16/zgesv.f
and
http://www.netlib.org/lapack/double/dgesv.f
mention an LU decomposition with partial pivoting and row interchanges.
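For illustration, the same factorization can be carried out explicitly through scipy.linalg (a sketch on a made-up 2x2 system):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

# np.linalg.solve calls LAPACK *gesv: LU with partial pivoting
x_solve = np.linalg.solve(A, b)

# the same LU factorization, exposed explicitly
lu, piv = lu_factor(A)
x_lu = lu_solve((lu, piv), b)
```

Both paths go through the same pivoted-LU machinery, so the two solutions agree to machine precision.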
-- 
Fabricio



Re: [Numpy-discussion] any interest in including a second-ordergradient?

2008-10-28 Thread Fabrice Silva
Le mardi 28 octobre 2008 à 15:28 -0600, Andrew Hawryluk a écrit :
 I agree that the gradient functions should be combined, especially
 considering how much redundant code would be added by keeping them
 separate. Here is one possible implementation, but I don't like the
 signature yet as it departs from the current behaviour. At the risk of
 demonstrating my ignorance, is there no way to place the named
 parameter (order) after the variable-length parameter (dx) in Python?

  Stéfan van der Walt
  We should discuss different options for the implementation.  The
  namespace is fairly cluttered, and it may be that we want to implement
  gradient3 some time in the future as well.  Maybe something like
  gradient(f, 1, 2, 3, order=2)
  would work -- then we can combine gradient and gradient2 (and
  gradient3). What do you think?

  Andrew Hawryluk:
   We wrote a simple variation on the gradient() function to calculate
   the second derivatives. Would there be any interest in including a
   gradient2() in numpy?

Are there parts of the code that could be shared to calculate both the
gradient and the second derivative (isn't it called the Hessian, at
least in the N-d case)?
If a common function would speed up the computation of the gradient and
the Hessian with a single call to a new gradients() function, it is
worth it...
If the intent is just to reduce the total length of the file containing
the gradient and gradient2 functions, I do not understand why the
existing code should be modified. Why not create a new function
hessian() with the same signature as gradient?
-- 
Fabrice Silva



Re: [Numpy-discussion] efficient way to do this?

2008-09-22 Thread Fabrice Silva
Le lundi 22 septembre 2008 à 09:41 -0500, John Hunter a écrit :
 I have an array of indices into a larger array where some condition
 is satisfied.  I want to create a larger set of indices which *mark*
 all the indices following the condition over some Nmark length
 window.

A = np.random.rand(N)
What about that:
marked = (A < 0.01)

 In the real use case, there will be significant auto-correlation among
 the places where the condition is satisfied.  Eg, if it is satisfied
 at some index, it is likely that it will be satisfied for many of its
 neighbors.  Eg, the real case looks more like
 
 y = np.sin(2*np.pi*np.linspace(0, 2, N))
 
 ind = np.nonzero(y > 0.95)[0]
 marked2 = np.zeros(N, bool)
 for i in ind:
 marked2[i:i+Nmark] = True

I do not understand what you expect the code to do here...
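That said, if the goal is only to avoid the explicit loop, the quoted marking loop can be reproduced with a convolution of the boolean mask (a sketch, with arbitrary N and Nmark):

```python
import numpy as np

N, Nmark = 1000, 10
y = np.sin(2 * np.pi * np.linspace(0, 2, N))
cond = y > 0.95
ind = np.nonzero(cond)[0]

# loop version, as in the quoted snippet
marked_loop = np.zeros(N, bool)
for i in ind:
    marked_loop[i:i + Nmark] = True

# vectorized: convolving the boolean mask with a length-Nmark window
# marks every index within Nmark samples after a True entry
marked_vec = np.convolve(cond, np.ones(Nmark), mode='full')[:N] > 0
```

The convolution spreads each True entry forward over Nmark samples, which is exactly what the slice assignment in the loop does.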

-- 
Fabrice Silva [EMAIL PROTECTED]
LMA UPR CNRS 7051



Re: [Numpy-discussion] error importing a f2py compiled module.

2008-07-14 Thread Fabrice Silva
Le lundi 23 juin 2008 à 14:10 +0200, Fabrice Silva a écrit :
  I don't have any idea what is causing this import error. Try
  the instructions above; maybe it is due to some compiled object
  conflicts.
 The only posts on mailing lists I've read mention security policies
 (SElinux) and /tmp execution limitations...

Another point is that the working directory was created by a
subversion checkout command but has proper permissions
drwxr-xr-x 4 fab fab 4096 jui 14 13:29 lib
as do the fortran and the shared object files:
-rwxr-xr-x 1 fab fab  6753 jui  9 14:14 systeme.f
-rwxr-xr-x 1 fab fab 85746 jui 14 13:21 systeme.so

Moving these files to another directory (and adding the latter to the
path) suppresses the import problem...
-- 
Fabrice Silva [EMAIL PROTECTED]
LMA UPR CNRS 7051 - équipe S2M



Re: [Numpy-discussion] multiply array

2008-04-10 Thread Fabrice Silva

Le vendredi 04 avril 2008 à 00:28 -0700, wilson a écrit :
  #of shape (1,6)
  eval=array([[3.,3.2,1.,1.1,5.,0.5]])
 
 
 eval.shape=(-1,)
 
 please note the correction.. I need to multiply the first row of egimgs by
 3.0, the second row by 3.2, ..., the last (sixth) row by 0.5. For that
 purpose I made the above into a 1-dimensional array.
 A for loop seems inefficient in the case of big arrays.
 W

Hi
I suggest you three ways:
* using matrices :
mat(eval*identity(eval.shape[1]))*mat(egimgs)

creates first a diagonal matrix with eval coefficients on main
diagonal, then multiply the matrices diagonal and egimgs
* using a temporary matrix with repeated eval coefficients
eval.T.repeat(egimgs.shape[1],axis=1)*egimgs

repeats array eval as many times as required to have two arrays
of the same size that can be multiplied.

and maybe the most efficient one, which does not require creating a
temporary array of egimgs' shape:
eval.T*egimgs
or, for an in-place update of egimgs:
egimgs *= eval.T

provided the length of the 1-d array matches one of the dimensions
of the 2-d array, you can multiply them directly.
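A sketch comparing the three variants on a made-up egimgs (eval is renamed ev here to avoid shadowing the builtin, and kept with its original (1, 6) shape so that ev.T is a (6, 1) column):

```python
import numpy as np

ev = np.array([[3.0, 3.2, 1.0, 1.1, 5.0, 0.5]])   # shape (1, 6)
egimgs = np.arange(18.0).reshape(6, 3)            # six rows to scale

# variant 1: diagonal-matrix product
via_diag = np.dot(np.diag(ev.ravel()), egimgs)

# variant 2: explicit repetition of the coefficients
via_repeat = ev.T.repeat(egimgs.shape[1], axis=1) * egimgs

# variant 3: broadcasting a (6, 1) column against a (6, 3) array
# scales each row without any temporary of egimgs' full shape
via_bcast = ev.T * egimgs
```

All three give the same result; the broadcast form is the most direct and also works in place via `egimgs *= ev.T`.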


-- 
Fabrice Silva, PhD student
LMA UPR CNRS 7051 - équipe S2M



[Numpy-discussion] [f2py] Troubles building a module

2008-02-09 Thread Fabrice Silva
Reading the tutorial
http://scipy.org/Cookbook/Theoretical_Ecology/Hastings_and_Powell
I've tried to run the provided code.

But when compiling the fortran module with the command line given in
the tutorial, I got the following traceback (you can see it with syntax
highlighting at http://paste.debian.net/48759 ):

[EMAIL PROTECTED]:/tmp$ f2py -c -m hastings hastings.f90
running build
running config_cc
unifing config_cc, config, build_clib, build_ext, build commands 
--compiler options
running config_fc
unifing config_fc, config, build_clib, build_ext, build commands 
--fcompiler options
running build_src
building extension hastings sources
f2py options: []
f2py: /tmp/tmptfNK80/src.linux-i686-2.4/hastingsmodule.c
creating /tmp/tmptfNK80
creating /tmp/tmptfNK80/src.linux-i686-2.4
Reading fortran codes...
Reading file 'hastings.f90' (format:free)
Post-processing...
Block: hastings
Block: model
Block: fweb
Post-processing (stage 2)...
Block: hastings
Block: unknown_interface
Block: model
Block: fweb
Building modules...
Building module hastings...
Constructing F90 module support for model...
  Variables: a1 a2 b1 b2 d2 d1
Constructing wrapper function model.fweb...
  yprime = fweb(y,t)
Wrote C/API module hastings to file 
/tmp/tmptfNK80/src.linux-i686-2.4/hastingsmodule.c
Traceback (most recent call last):
  File /usr/bin/f2py, line 26, in ?
main()
  File /var/lib/python-support/python2.4/numpy/f2py/f2py2e.py, line 
558, in main
run_compile()
  File /var/lib/python-support/python2.4/numpy/f2py/f2py2e.py, line 
545, in run_compile
setup(ext_modules = [ext])
  File /var/lib/python-support/python2.4/numpy/distutils/core.py, 
line 176, in setup
return old_setup(**new_attr)
  File /usr/lib/python2.4/distutils/core.py, line 149, in setup
dist.run_commands()
  File /usr/lib/python2.4/distutils/dist.py, line 946, in run_commands
self.run_command(cmd)
  File /usr/lib/python2.4/distutils/dist.py, line 966, in run_command
cmd_obj.run()
  File /usr/lib/python2.4/distutils/command/build.py, line 113, in run
self.run_command(cmd_name)
  File /usr/lib/python2.4/distutils/cmd.py, line 333, in run_command
self.distribution.run_command(command)
  File /usr/lib/python2.4/distutils/dist.py, line 966, in run_command
cmd_obj.run()
  File 
/var/lib/python-support/python2.4/numpy/distutils/command/build_src.py, line 
130, in run
self.build_sources()
  File 
/var/lib/python-support/python2.4/numpy/distutils/command/build_src.py, line 
147, in build_sources
self.build_extension_sources(ext)
  File 
/var/lib/python-support/python2.4/numpy/distutils/command/build_src.py, line 
256, in build_extension_sources
sources = self.f2py_sources(sources, ext)
  File 
/var/lib/python-support/python2.4/numpy/distutils/command/build_src.py, line 
511, in f2py_sources
numpy.f2py.run_main(f2py_options + ['--lower',
  File /var/lib/python-support/python2.4/numpy/f2py/f2py2e.py, line 
367, in run_main
ret=buildmodules(postlist)
  File /var/lib/python-support/python2.4/numpy/f2py/f2py2e.py, line 
319, in buildmodules
dict_append(ret[mnames[i]],rules.buildmodule(modules[i],um))
  File /var/lib/python-support/python2.4/numpy/f2py/rules.py, line 
1222, in buildmodule
for l in '\n\n'.join(funcwrappers2)+'\n'.split('\n'):
TypeError: cannot concatenate 'str' and 'list' objects
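For the record, the TypeError at the bottom comes from operator precedence in that rules.py line: the .split('\n') method call binds tighter than +, so the expression tries to concatenate a str with a list. A minimal reproduction (funcwrappers2 is just a stand-in list here):

```python
funcwrappers2 = ['subroutine a', 'subroutine b']  # stand-in contents

# buggy form: '\n'.split('\n') evaluates first, yielding ['', ''],
# then str + list raises TypeError
try:
    bad = '\n\n'.join(funcwrappers2) + '\n'.split('\n')
except TypeError:
    bad = None

# intended form: concatenate first, then split into lines
good = ('\n\n'.join(funcwrappers2) + '\n').split('\n')
```

Parenthesizing the concatenation before splitting is the intended behaviour of that line.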

