Re: [Numpy-discussion] One question about the numpy.linalg.eig() routine

2012-04-03 Thread Hongbin Zhang

Hej Val,

Thank you very much for your replies.

Yes, I know that both sets of eigenvectors are correct and that they are
related to each other by unitary transformations (unitary matrices).

Actually, what I am trying to do is to evaluate the Berry phase, which is
closely related to the gauge chosen. It is okay to apply an arbitrary
phase to the eigenvectors, but to get a meaningful physical quantity
the phase must be chosen consistently across all the eigenvectors.

To my understanding, if I run both Fortran and Python on the same computer,
they should produce the same phase (that is, the arbitrary phase is
computer-dependent). Maybe some additional rotations are performed in
Python, but shouldn't this be documented somewhere in the man page?

I will try to fix this by performing an additional rotation to make the
diagonal elements real, and check whether this solves the problem.

Thank you all again, and of course more insightful suggestions are welcome.

Regards,

Hongbin



Ad hoc, ad loc and quid pro quo
   --- Jeremy Hilary Boob

Date: Mon, 2 Apr 2012 22:19:55 -0500
From: kalat...@gmail.com
To: numpy-discussion@scipy.org
Subject: Re: [Numpy-discussion] One question about the numpy.linalg.eig()   
routine

BTW this extra degree of freedom can be used to rotate the eigenvectors along
the unit circle (multiplication by exp(j*phi)). To those of physical
inclination it should be reminiscent of gauge fixing (the vector potential in EM/QM).

These rotations can be used to make one (any) non-zero component of each
eigenvector a positive real number.
Finally to the point: it seems that numpy.linalg.eig uses these rotations to
turn the diagonal elements of the eigenvector matrix into positive real
numbers; that's why the numpy solution looks neat.
Val
PS Probably nobody cares to know, but the phase factor I gave in my 1st email
should be negated:
0.99887305445887753+0.047461785427773337j
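Val's recipe can be sketched in a few lines of NumPy. This is a minimal sketch that fixes the phase of the first component of each eigenvector (it assumes that component is non-zero, which holds for the matrix in this thread):

```python
import numpy as np

# The 2x2 Hermitian matrix from this thread.
A = np.array([[0.6 + 0.0j, -1.97537668 - 0.09386068j],
              [-1.97537668 + 0.09386068j, -0.6 + 0.0j]])

w, v = np.linalg.eig(A)  # eigenvectors are the *columns* of v

# Gauge fix: rotate each column by exp(-j*phi) so that its first
# component becomes a positive real number.
phases = v[0, :] / np.abs(v[0, :])
v_fixed = v / phases  # divides column k by its unit phase factor

# Still a valid eigendecomposition, but now in a reproducible gauge.
assert np.allclose(np.dot(A, v_fixed), w * v_fixed)
```

Since each column is only multiplied by a factor of absolute value 1, the eigenvalue equation is untouched; only the gauge changes.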

On Mon, Apr 2, 2012 at 8:53 PM, Matthew Brett matthew.br...@gmail.com wrote:

Hi,



On Mon, Apr 2, 2012 at 5:38 PM, Val Kalatsky kalat...@gmail.com wrote:

 Both results are correct.
 There are 2 factors that make the results look different:
 1) The order: the 2nd eigenvector of the numpy solution corresponds to the
 1st eigenvector of your solution,
 note that the vectors are written in columns.
 2) The phase: an eigenvector can be multiplied by an arbitrary phase factor
 with absolute value = 1.
 As you can see this factor is -1 for the 2nd eigenvector
 and -0.99887305445887753-0.047461785427773337j for the other one.



Thanks for this answer; for my own benefit:

Definition: A . v = L . v  where A is the input matrix, L is an
eigenvalue of A and v is an eigenvector of A.

http://en.wikipedia.org/wiki/Eigendecomposition_of_a_matrix

In [63]: A = [[0.6+0.0j,
-1.97537668-0.09386068j],[-1.97537668+0.09386068j, -0.6+0.0j]]

In [64]: L, v = np.linalg.eig(A)

In [66]: np.allclose(np.dot(A, v), L * v)
Out[66]: True

Best,

Matthew

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion






Re: [Numpy-discussion] Style for pad implementation in 'pad' namespace or functions under np.lib

2012-04-03 Thread Nathaniel Smith
On Mon, Apr 2, 2012 at 7:14 PM, Tim Cera t...@cerazone.net wrote:

 I think the suggestion is pad(a, 5, mode='mean'), which would be
 consistent with common numpy signatures. The mode keyword should probably
 have a default, something commonly used. I'd suggest 'mean', Nathaniel
 suggests 'zero', I think either would be fine.

 I can't type fast enough.  :-)  I should say that I can't type faster than
 Travis since he has already responded

 Currently that '5' in the example above is the keyword argument 'pad_width'
 which defaults to 1.  So really the only argument then is 'a'?  Everything
 else is keywords?  I missed that in the discussion and I am not sure that it
 is a good idea. In fact as I am typing this I am thinking that we should
 have pad_width as an argument.  I hate to rely on this, because it tends to
 get overused, but 'Explicit is better than implicit.'

 'pad(a)' would carry a lot of implicit baggage that would mean it would be
 very difficult to figure out what was going on if reading someone else's
 code.  Someone unfamiliar with the pad routine must consult the
 documentation to figure out what 'pad(a)' meant whereas pad(a, 'mean', 1),
 regardless of the order of the arguments, would actually read pretty well.

 I defer to a 'consensus' - whatever that might mean, but I am actually
 thinking that the input array, mode/method, and the pad_width should be
 arguments.  The order of the arguments  - I don't care.

 I realize that this thread is around 26 messages long now, but if everyone
 who is interested in this could weigh in one more time about this one issue.
  To minimize discussion on the list, you can add a comment to the pull
 request at https://github.com/numpy/numpy/pull/242

I guess I'll say
  def pad(arr, width, mode=constant, **kwargs):
Or, if we don't want to have a default argument for mode (and maybe we
don't -- my suggestion of giving it a default was partly based on the
assumption that it was pretty obvious what the default should be!),
then I'm indifferent between
  def pad(arr, width, mode, **kwargs):
  def pad(arr, mode, width, **kwargs):

I definitely don't think width should have a default.
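For reference, the call style being debated here maps onto the signature NumPy ended up shipping, `np.pad(array, pad_width, mode, ...)`; a small sketch:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])

# width as a positional argument, mode as a keyword -- the
# pad(arr, width, mode=...) style discussed above.
padded = np.pad(a, 2, mode='mean')  # pad 2 values per side with the mean
print(padded)  # [2. 2. 1. 2. 3. 2. 2.]
```

With `mode='mean'` and no `stat_length`, the padding value is the mean of the whole vector (here 2.0), so the call reads well even without consulting the docs.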

-- Nathaniel
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] One question about the numpy.linalg.eig() routine

2012-04-03 Thread Hongbin Zhang

Dear all,

Though it might sound strange, the eigenvectors of my 2x2 matrix come out
rather differently when it is diagonalized in a loop over many other similar
matrices; for instance:

matrix:
[[ 0.6000+0.j -1.97537668-0.09386068j]
 [-1.97537668+0.09386068j -0.6000+0.j]]
eigenvals:
[-2.06662112  2.06662112]
eigenvects:
[[ 0.59568071+0.j  0.80231613-0.03812232j]
 [-0.80322132+0.j  0.59500941-0.02827207j]]

In this case, the elements in the first column of the eigenvector matrix are
real. In the Fortran code, such a transformation can easily be done by dividing
all the elements in the i-th row by EV_{i1}/abs(EV_{i1}), where EV_{i1} denotes
the first element of the i-th row. The same can be done column-wise if that is
what is intended.

In this way, at least for the moment, I get the same eigenvectors for the
same complex matrix from Python and Fortran. I do not know whether this is
the proper solution, but I hope it will work.

Cheers,

Hongbin 



Ad hoc, ad loc and quid pro quo
   --- Jeremy Hilary Boob

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion 
  


[Numpy-discussion] creating/working NumPy-ndarrays in C++

2012-04-03 Thread Holger Herrlich

Hi, I plan to migrate the core classes of an application from Python to C++
using SWIG, while keeping the user interface in Python. I also plan to
keep using NumPy's ndarrays.

The application's core classes will create the ndarrays and do the
calculations; the user interface (Python) finally receives them. C++ OOP
features will be deployed.

What general ways are there to work with NumPy ndarrays in C++? I know of
boost.python so far.

Regards Holger

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] YouTrack testbed

2012-04-03 Thread Maggie Mari
On 4/1/12 6:02 AM, Ralf Gommers wrote:
 The interface looks good, but to get a feeling for how this would 
 really work out I think admin rights are necessary. Then we can try 
 out the command window (mass editing of issues), the rest API, etc. 
 Could you send those out off-list?

Hi Ralf,

I have added you to the admin group.  Let me know if you have any 
trouble.  Who else should be added?

Thanks,

Maggie
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion



Re: [Numpy-discussion] creating/working NumPy-ndarrays in C++

2012-04-03 Thread Chris Barker
On Tue, Apr 3, 2012 at 6:06 AM, Holger Herrlich

 Hi, I plan to migrate core classes of an application from Python to C++
 using SWIG,

if you're using SWIG, you may want the numpy.i SWIG interface files,
they can be handy.

but I probably wouldn't use SWIG, unless:

  - you are already a SWIG master
  - you are wrapping a substantial library that will use a lot of the
same constructs (i.e. can re-use the same *.i files a lot)
  - you want to use SWIG to wrap the same library for multiple languages.


 The application's core classes will create the ndarrays and make
 calculations. The user interface (Python) finally receives it. C++ OOP
 features will be deployed.

 What general ways to work with NumPy ndarrays in C++ are here?

I'd take a good look at Cython -- while not all that mature for C++,
it does support the basics, and makes the transition between C/C++ and
python very smooth -- and handles ndarrays out of the box.

If your code only needs to be driven by Python (and not used as a C++
lib on its own), I'd tend to:

 - create your ndarrays in Python or Cython.
 - write your C++ to work with bare pointers -- i.e. C arrays.

(also take a look at the new Cython memory views -- they may be your best bet.)
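The bare-pointers approach above can be sketched from the Python side with ctypes; the C++ library and function names in the comment are hypothetical:

```python
import ctypes
import numpy as np

# Create the array on the Python side; ensure it is contiguous so a
# bare (double*, size) pair describes it completely.
a = np.ascontiguousarray(np.arange(5, dtype=np.float64))

ptr = a.ctypes.data_as(ctypes.POINTER(ctypes.c_double))
n = a.size

# A real binding would then look something like (hypothetical names):
#   lib = ctypes.CDLL("./libmykernel.so")
#   lib.scale_inplace(ptr, ctypes.c_size_t(n), ctypes.c_double(2.0))

# The pointer aliases the ndarray's memory, so the C++ side sees and
# mutates the same buffer:
ptr[0] = 42.0
print(a[0])  # 42.0
```

The key point is that memory ownership stays with the ndarray; the C++ kernel only borrows the pointer for the duration of the call.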

It would be nice to have a clean C++ wrapper around ndarrays, but that
doesn't exist yet (is there a good reason for that?)

you could also probably get one of the C++ array libs to work well, if
it shares a memory model with ndarray (which is likely, at least in
special cases):

  - Blitz++
  - ublas
  - others???

 I know of
 boost.python so far.

I've never used boost.python, but it's always seemed to me to be kind
of heavy weight and not all that well maintained [1]

 -- but don't take my word for it!

(there are boost arrays that may be useful)

-Chris

[1] from what seems to be the most recent docs:

http://www.boost.org/doc/libs/1_49_0/libs/python/doc/v2/numeric.html


Provides access to the array types of Numerical Python's Numeric and
NumArray modules.


The days of Numeric and Numarray are long gone! It may only be the docs
that are out of date, but...


-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/ORR            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

chris.bar...@noaa.gov
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] creating/working NumPy-ndarrays in C++

2012-04-03 Thread Jim Bosch
On 04/03/2012 12:48 PM, Chris Barker wrote:
 On Tue, Apr 3, 2012 at 6:06 AM, Holger Herrlich

snip
 I know of
 boost.python so far.

 I've never used boost.python, but it's always seemed to me to be kind
 of heavy weight and not all that well maintained [1]

   -- but don't take my word for it!

 (there are boost arrays that may be useful)

 -Chris

 [1] from what seems to be the most recent docs:

 http://www.boost.org/doc/libs/1_49_0/libs/python/doc/v2/numeric.html

 
 Provides access to the array types of Numerical Python's Numeric and
 NumArray modules.
 

 The days of Numeric and Numarray are long gone! It may only be the docs
 that are out of date, but...



I'm a big fan of Boost.Python, and I'd strongly recommend it over SWIG 
if you have anything remotely complex in your C++ (though I don't know 
much about Cython, and it may well be better in this case).

That said, it's also very true that Boost.Python hasn't seen much in the 
way of active development aside from bug and compatibility fixes for 
years.  So the Numeric interface in the Boost.Python main library is 
indeed way out of date, and not very useful.

But there is a very nice extension library in the Boost Sandbox:

https://svn.boost.org/svn/boost/sandbox/numpy/

or (equivalently) on GitHub:

https://github.com/ndarray/Boost.NumPy

Disclosure: I'm the main author.  And while we've put a lot of effort 
into making this work well and I use it quite a bit myself, it's not 
nearly as battle-tested (especially on non-Unix platforms) as many of 
the alternatives.

Good luck!

Jim
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] creating/working NumPy-ndarrays in C++

2012-04-03 Thread Erin Sheldon
Excerpts from Holger Herrlich's message of Tue Apr 03 09:06:09 -0400 2012:
 
 Hi, I plan to migrate core classes of an application from Python to C++
 using SWIG, while still the user interface being Python. I also plan to
 further use NumPy's ndarrays.
 
 The application's core classes will create the ndarrays and make
 calculations. The user interface (Python) finally receives it. C++ OOP
 features will be deployed.
 
 What general ways to work with NumPy ndarrays in C++ are here? I know of
 boost.python so far.

Hi Holger -

I put together some header-only classes for this back when I used to do
a lot of C++ and numpy.  

They are part of the esutil package but you could actually just pull
them out and use them http://code.google.com/p/esutil/

The first is a template class for numpy arrays which can create and
import arrays and keeps track of the reference counts

http://code.google.com/p/esutil/source/browse/trunk/esutil/include/NumpyVector.h

The second is similar but for void* vectors so the type can be
determined at runtime

http://code.google.com/p/esutil/source/browse/trunk/esutil/include/NumpyVoidVector.h

There is also one for record arrays

http://code.google.com/p/esutil/source/browse/trunk/esutil/include/NumpyRecords.h

Hope these are useful or can give you some ideas.
-e
-- 
Erin Scott Sheldon
Brookhaven National Laboratory
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] One question about the numpy.linalg.eig() routine

2012-04-03 Thread Val Kalatsky
Interesting. I happen to know a little bit about Berry's phase
http://keck.ucsf.edu/~kalatsky/publications/PRL1998_BerryPhaseForLargeSpins.pdf
http://keck.ucsf.edu/~kalatsky/publications/PRA1999_SpectraOfLargeSpins-General.pdf
The latter one knocks out all point groups.
Probably you want to do something different; I cared about eigenvalues only
(BTW my Hamiltonians were carefully crafted).
Cheers
Val

PS I doubt anybody on this list cares to hear more about Berry's phase;
we should take this discussion off-line.


2012/4/3 Hongbin Zhang hongbin_zhan...@hotmail.com



___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Trac configuration tweak

2012-04-03 Thread Ralf Gommers
On Mon, Apr 2, 2012 at 11:35 PM, Travis Oliphant tra...@continuum.iowrote:

 Sorry,  I saw the cross-posting to the NumPy list and wondered if we were
 on the same page.

 I don't know of any plans to migrate SciPy Trac at this time:  perhaps
 later.

 At this time maybe, but I was assuming that if the Numpy migration works
out well, Scipy would follow. Same for other supporting tools like a CI
server.

Ralf
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] [EXTERNAL] creating/working NumPy-ndarrays in C++

2012-04-03 Thread Bill Spotz
Holger,

SWIG can read C or C++ header files and use them to generate wrapper interfaces
for a long list of scripting languages.  It sounds to me like you want to go
the other direction -- i.e. you have code prototyped in Python and you want
to convert core kernels to C++, perhaps to improve efficiency?  Do I have that
right?  If so, then SWIG is not your tool.

If efficiency is what you are after, then Cython could work really well.  You
would start with your existing Python code and rename appropriate files to
become first-draft Cython code -- it should compile right out of the box.  You
could then start adding efficiencies (typed method arguments, for example).
The end result would be Cython, though, not C++.

If C++ is a requirement, it sounds like Jim's numpy extension to boost.python 
might be your best bet.  My biggest issue with boost is the heavy templating 
resulting in nearly indecipherable compiler error messages.

-Bill


** Bill Spotz  **
** Sandia National Laboratories  Voice: (505)845-0170  **
** P.O. Box 5800 Fax:   (505)284-0154  **
** Albuquerque, NM 87185-0370Email: wfsp...@sandia.gov **


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] numpy.sum(..., keepdims=False)

2012-04-03 Thread Frédéric Bastien
Hi,

Someone told me that on this page there is a new parameter to
numpy.sum: keepdims=False

http://docs.scipy.org/doc/numpy/reference/generated/numpy.sum.html

Currently the docs for this page don't build correctly. Can someone fix this?

He gave me the link to the Google cache that showed it, but the Google
cache has since been replaced by a newer version.

I would like to add this parameter to Theano, so my question is: will
the interface change, or is it stable?

The new parameter makes the output have the same number of
dimensions as the input, with length one on the summed
dimensions given by the axis parameter.
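A minimal illustration of the behavior described above, as I understand the documented interface:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

s = np.sum(a, axis=1)                 # shape (2,): summed axis dropped
k = np.sum(a, axis=1, keepdims=True)  # shape (2, 1): summed axis kept, length 1

print(k.shape)  # (2, 1)
# Because the dimensions are kept, the result broadcasts directly
# against the input, e.g. for row-wise normalization:
print(a / k)
```

This broadcasting property is the main motivation for keepdims: without it one has to reinsert the axis by hand (e.g. `s[:, np.newaxis]`).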

thanks

Frédéric
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] YouTrack testbed

2012-04-03 Thread Ralf Gommers
On Tue, Apr 3, 2012 at 4:32 PM, Maggie Mari maggie.m...@continuum.iowrote:

 On 4/1/12 6:02 AM, Ralf Gommers wrote:
  The interface looks good, but to get a feeling for how this would
  really work out I think admin rights are necessary. Then we can try
  out the command window (mass editing of issues), the rest API, etc.
  Could you send those out off-list?

 Hi Ralf,

 I have added you to the admin group.  Let me know if you have any
 trouble.  Who else should be added?


Thanks Maggie.

Here some first impressions.

The good:
- It's responsive!
- It remembers my preferences (view type, # of issues per page, etc.)
- Editing multiple issues with the command window is easy.
- Search and filter functionality is powerful

The bad:
- Multiple projects are supported, but issues are then really mixed. The
way this works doesn't look very useful for combined admin of numpy/scipy
trackers.
- I haven't found a way yet to make versions and subsystems appear in the
one-line issue overview.
- Fixed issues are still shown by default. There are several open issues
filed against youtrack about this, with no reasonable answers.
- Plain text attachments (.txt, .diff, .patch) can't be viewed, only
downloaded.
- No direct VCS integration, only via Teamcity (not set up, so can't
evaluate).
- No useful default views as in Trac
(http://projects.scipy.org/scipy/report).

Overall, I have to say that I'm not convinced yet.

Ralf
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] creating/working NumPy-ndarrays in C++

2012-04-03 Thread Michael Droettboom

On 04/03/2012 12:48 PM, Chris Barker wrote:
It would be nice to have a clean C++ wrapper around ndarrays, but that 
doesn't exist yet (is there a good reason for that?)

Check out:

http://code.google.com/p/numpy-boost/

Mike
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] creating/working NumPy-ndarrays in C++

2012-04-03 Thread srean
This makes me ask something that I always wanted to know: why is weave
not the preferred or encouraged way?

Is it because no developer has an interest in maintaining it, or is it too
onerous to maintain? I do not know enough of its internals to guess
an answer. I think it would be fair to say that weave has languished a
bit over the years.

What I like about weave is that even when I drop into C++ mode I
can pretty much use the same numpy-ish syntax, with no overhead of
calling back into the numpy C functions. From the SourceForge forum it
seems the new Blitz++ is quite competitive with Intel Fortran in SIMD
vectorization as well, which does sound attractive.

I would be delighted if development on weave catches up again.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion