Re: [Numpy-discussion] svd error checking vs. speed

2014-02-17 Thread Jason Grout
On 2/15/14 3:37 PM, alex wrote:
 The proposal would be to make np.linalg.svd more like scipy.linalg.svd
 with respect to input checking.  The argument against the change is
 raw speed; if you know that you will never feed non-finite input to
 svd, then np.linalg.svd is a bit faster than scipy.linalg.svd.  An
 argument for the change could be to avoid issues reported on github
 like crashes, hangs, spurious non-convergence exceptions, etc. from
 the undefined behavior of svd of non-finite input.

For what my vote is worth, -1.  I thought this was pretty much the 
designed difference between the scipy and numpy linalg routines.  Scipy 
does the checking, and numpy provides the raw speed.  Maybe this is 
better resolved as a note in the documentation for numpy about the 
assumptions for the input and a reference to the scipy implementation?

That said, I don't extensively use the linalg.svd routine in practice, 
so I defer to those that use it.

Thanks,

Jason

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Announcing Bokeh 0.2: interactive web plotting for Python

2013-10-24 Thread Jason Grout
On 10/23/13 6:00 PM, Peter Wang wrote:
 Hi everyone,

 I'm excited to announce the v0.2 release of Bokeh, an interactive web
 plotting library for Python.  The long-term vision for Bokeh is to
 provide rich interactivity, using the full power of Javascript and
 Canvas, to Python users who don't need to write any JS or learn
 the DOM.

 The full blog post announcement is here:
 http://continuum.io/blog/bokeh02

 The project website (with interactive gallery) is at:
 http://bokeh.pydata.org

 And the Git repo is:
 https://github.com/ContinuumIO/bokeh

This looks really cool.  I was checking out how easy it would be to 
embed in the Sage Cell Server [1].  I briefly looked at the code, and it 
appears that the IPython notebook mode does not use nodejs, redis, 
gevent, etc.?  Is that right?

Thanks,

Jason

[1] https://sagecell.sagemath.org





Re: [Numpy-discussion] Announcing Bokeh 0.2: interactive web plotting for Python

2013-10-24 Thread Jason Grout
On 10/24/13 6:35 AM, Jason Grout wrote:
 This looks really cool.  I was checking out how easy it would be to
 embed in the Sage Cell Server [1].  I briefly looked at the code, and it
 appears that the IPython notebook mode does not use nodejs, redis,
 gevent, etc.?  Is that right?

Or maybe the better way to phrase it is: what are the absolute minimum 
dependencies if all I want to do is to display in the IPython notebook?

Thanks,

Jason



Re: [Numpy-discussion] Announcing Bokeh 0.2: interactive web plotting for Python

2013-10-24 Thread Jason Grout
On 10/23/13 6:00 PM, Peter Wang wrote:

 The project website (with interactive gallery) is at:
 http://bokeh.pydata.org

Just a suggestion: could you put the source below each gallery image, 
like matplotlib does in their gallery?  I see lots of pretty plots, but 
I have to go digging in github or somewhere to see how you made these 
plots.  Since Bokeh is (at least partly) about making beautiful plots 
easy, showing off the source code is half of the story.

Thanks,

Jason

--
Jason Grout



Re: [Numpy-discussion] Announcing Bokeh 0.2: interactive web plotting for Python

2013-10-24 Thread Jason Grout
On 10/24/13 9:47 AM, Peter Wang wrote:
 You will still have interactivity *within* plots inside a single
 Notebook, but they will not drive events back to the server side.  Also,
 if your data is large, then the notebook will also get pretty big.  (We
 will be working on more efficient encodings in a future release.)


It would be really cool if you could hook into the new IPython comm 
infrastructure to push events back to the server in IPython (this is not 
quite merged yet, but probably ready for experimentation like this). 
The comm infrastructure basically opens up a communication channel 
between objects on the server and the browser.  Messages get sent over 
the normal IPython channels.  The server and browser objects just use 
either send() or an on_message() handler.  See 
https://github.com/ipython/ipython/pull/4195
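For readers unfamiliar with the pattern, here is a toy model of the idea in plain Python; it mimics only the send()/on_message() shape and is not the actual IPython Comm API:

```python
class ToyComm:
    """Toy model of the comm pattern: two endpoints, send() on one side
    triggers the on_message() handler on the other.  Illustrative only;
    not the real IPython Comm API."""

    def __init__(self):
        self.peer = None      # endpoint on the other side of the channel
        self.handler = None   # callback for incoming messages

    def connect(self, other):
        self.peer, other.peer = other, self

    def on_message(self, callback):
        self.handler = callback

    def send(self, msg):
        # In IPython the message would travel over the kernel's existing
        # channels; here it is delivered to the peer's handler directly.
        if self.peer is not None and self.peer.handler is not None:
            self.peer.handler(msg)

server, browser = ToyComm(), ToyComm()
server.connect(browser)
received = []
browser.on_message(received.append)
server.send({"event": "update", "value": 42})
print(received)  # -> [{'event': 'update', 'value': 42}]
```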

Here's a very simple example of the Comm implementation working with 
matplotlib images in the Sage Cell server (which is built on top of the 
IPython infrastructure):  http://sagecell.sagemath.org/?q=fyjgmk (I'd 
love to see a bokeh version of this sort of thing :).

FYI, here is the javascript code we use for the above example: 
https://github.com/sagemath/sagecell/blob/master/static/compute_server.js#L768 
and the python code is at 
https://github.com/sagemath/sagecell/blob/master/graphics.py#L399

Thanks,

Jason



Re: [Numpy-discussion] Announcing Bokeh 0.2: interactive web plotting for Python

2013-10-24 Thread Jason Grout
On 10/24/13 1:42 PM, Peter Wang wrote:
 On Thu, Oct 24, 2013 at 10:11 AM, Jason Grout
 jason-s...@creativetrax.com wrote:

 It would be really cool if you could hook into the new IPython comm
 infrastructure to push events back to the server in IPython (this is not
 quite merged yet, but probably ready for experimentation like this).
 The comm infrastructure basically opens up a communication channel
 between objects on the server and the browser.  Messages get sent over
 the normal IPython channels.  The server and browser objects just use
 either send() or an on_message() handler.  See
 https://github.com/ipython/ipython/pull/4195


 Yeah, I think we should definitely look into integrating with this
 mechanism for when we are embedded in a Notebook.  However, we always
 want the underlying infrastructure to be independent of IPython
 Notebook, because we want people to be able to build analytical
 applications on top of these components.

That makes a lot of sense.  And looking at the code, it looks like you 
are cleanly separating out the session objects controlling communication 
from the plot machinery.  That will hopefully make it much easier to 
have different transports for the communication.



 Here's a very simple example of the Comm implementation working with
 matplotlib images in the Sage Cell server (which is built on top of the
 IPython infrastructure): http://sagecell.sagemath.org/?q=fyjgmk (I'd
 love to see a bokeh version of this sort of thing :).


 This is interesting, and introducing widgets is already on the roadmap,
 tentatively v0.4.  When running against a plot server, Bokeh plots
 already push selections back to the server side.  (That's how the linked
 brushing in e.g. this example works:
 https://www.wakari.io/sharing/bundle/pwang/cars)

 Our immediate short-term priorities for 0.3 are improving the layout
 mechanism, incorporating large data processing into the plot server, and
 investigating basic interop with Matplotlib objects.



Great to hear.


Jason



Re: [Numpy-discussion] Proposal of new function: iteraxis()

2013-04-26 Thread Jason Grout
On 4/26/13 6:33 AM, Robert Kern wrote:
 In any case, whether you put the documentation in the rollaxis()
 docstring or in one of the indexing/iteration sections, or
 (preferably) both, I strongly encourage you to do that first and see
 how it goes before adding a new alias.

+1 (for what it's worth) to being conservative with API changes as a 
first resort.

Jason



Re: [Numpy-discussion] Issues to fix for 1.7.0rc2.

2013-02-05 Thread Jason Grout
On 2/4/13 12:04 AM, Ondřej Čertík wrote:
 Hi,

 Here are the last open issues for 1.7, there are 9 of them:

 https://github.com/numpy/numpy/issues?milestone=3&sort=updated&state=open


Here's something we noticed while working on getting 1.7rc1 into Sage 
with one of our doctests.  With numpy 1.5.1 (we skipped 1.6.x because of 
backwards compatibility issues...):

import numpy as np
print np.array([None, None]).any()

prints False, but with 1.7rc1 we get None.

For comparison, in Python 2.6.1 and 3.3.0:

>>> any([None, None])
False
>>> print None or None
None

Was this change between 1.5.1 and 1.7 intentional?
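For what it's worth, a workaround that gives the Python-like answer on both versions is to make the boolean intent explicit before reducing (a sketch, not a claim about which behavior is correct):

```python
import numpy as np

a = np.array([None, None])   # object dtype, so the reduction behaves
                             # like "None or None" rather than a boolean test

# Casting to bool first gives the Python-like answer regardless of version:
print(bool(a.any()))         # -> False
print(a.astype(bool).any())  # -> False
```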

Thanks,

Jason



Re: [Numpy-discussion] Issues to fix for 1.7.0rc2.

2013-02-05 Thread Jason Grout
On 2/6/13 12:46 AM, Charles R Harris wrote:
 if we decide to do so

I should mention that we don't really depend on either behavior (we 
probably should have a better doctest testing for an array of None 
values anyway), but we noticed the oddity and thought we ought to 
mention it.  So it doesn't matter to us which way the decision goes.

Thanks,

Jason



Re: [Numpy-discussion] DARPA funding for Blaze and passing the NumPy torch

2012-12-17 Thread Jason Grout
On 12/16/12 11:07 PM, Travis Oliphant wrote:
 Hello all,

 There is a lot happening in my life right now and I am spread quite
 thin among the various projects that I take an interest in. In
 particular, I am thrilled to publicly announce on this list that
 Continuum Analytics has received DARPA funding (to the tune of at
 least $3 million) for Blaze, Numba, and Bokeh which we are writing to
 take NumPy, SciPy, and visualization into the domain of very large
 data sets.  This is part of the XDATA program, and I will be taking
 an active role in it.  You can read more about Blaze here:
 http://blaze.pydata.org.   You can read more about XDATA here:
 http://www.darpa.mil/Our_Work/I2O/Programs/XDATA.aspx


This is awesome.  As with the recent IPython grant, it would be great if 
you guys got some good publicity from this.  For example, I see an 
article up on Hacker News about blaze, but there doesn't seem to be a 
mention of big funding.  Has someone written a press release?  Has 
someone submitted the grant news to Hacker News or Slashdot, where you 
might attract attention and mindshare?

Thanks,

Jason



[Numpy-discussion] IPython receives $1.15 million from Alfred P. Sloan Foundation

2012-12-12 Thread Jason Grout
Hi everyone,

Just FYI, IPython just received $1.15 million in funding from the Alfred 
P. Sloan Foundation to support development over the next 2 years. 
Fernando talks more about this in his post to the IPython mailing list:

http://mail.scipy.org/pipermail/ipython-dev/2012-December/010799.html

It's great to see a significant open-source python project that many of 
us use on a day-to-day basis get such great funding!

Thanks,

Jason

--
Jason Grout



Re: [Numpy-discussion] [matplotlib-devel] GitHub attachments

2012-12-07 Thread Jason Grout
On 12/7/12 3:21 PM, Nathaniel Smith wrote:
 Oh, never mind, I guess they *only* allow image files. So, uh, no test
 data files, but if we have any lolcats in the trac attachments, we can
 migrate those.



It looks like what they do is just automatically upload it to their own 
cloud, and then substitute in their standard markup for embedding 
images.  So it's just replacing "upload your file to somewhere" with 
"we'll upload it automatically to our own cloud".

That said, it is really important and very nice that they're doing this!

Thanks,

Jason




[Numpy-discussion] matrix norm

2012-10-22 Thread Jason Grout
I'm curious why scipy/numpy defaults to calculating the Frobenius norm 
for matrices [1], when Matlab, Octave, and Mathematica all default to 
calculating the induced 2-norm [2].  Is it solely because the Frobenius 
norm is easier to calculate, or is there some other good mathematical 
reason for doing things differently?

Thanks,

Jason

[1] 
http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.norm.html

[2]

* Matlab (http://www.mathworks.com/help/matlab/ref/norm.html).
* Octave (http://www.network-theory.co.uk/docs/octave3/octave_198.html).
* Mathematica (http://reference.wolfram.com/mathematica/ref/Norm.html)


Re: [Numpy-discussion] matrix norm

2012-10-22 Thread Jason Grout
On 10/22/12 10:56 AM, Charles R Harris wrote:


 On Mon, Oct 22, 2012 at 9:44 AM, Jason Grout
 jason-s...@creativetrax.com wrote:

 I'm curious why scipy/numpy defaults to calculating the Frobenius norm
 for matrices [1], when Matlab, Octave, and Mathematica all default to
 calculating the induced 2-norm [2].  Is it solely because the Frobenius
 norm is easier to calculate, or is there some other good mathematical
 reason for doing things differently?


 Looks to me like Matlab, Octave, and Mathematica all default to the
 Frobenius norm .


Am I not reading their docs correctly?

* Matlab (http://www.mathworks.com/help/matlab/ref/norm.html).

"n = norm(X) is the same as n = norm(X,2)" (and "n = norm(X,2) returns 
the 2-norm of X").

* Octave (http://www.network-theory.co.uk/docs/octave3/octave_198.html).

"Compute the p-norm of the matrix a. If the second argument is missing, 
p = 2 is assumed."

* Mathematica (http://reference.wolfram.com/mathematica/ref/Norm.html)

"For matrices, Norm[m] gives the maximum singular value of m."

Thanks,

Jason



Re: [Numpy-discussion] matrix norm

2012-10-22 Thread Jason Grout
On 10/22/12 11:08 AM, Charles R Harris wrote:


 The 2-norm and the Frobenius norm are the same thing.

For vectors, but I was talking about matrices and induced p-norms (sorry 
for not being clear).  Warren pointed out that the spectral norm (the 
induced 2-norm) is used in Octave as the default.  Does someone have 
matlab to test their implementation?  The fact that matlab has a 
separate command for the Frobenius norm indicates that they also may be 
using the spectral norm for the default matrix norm.
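For anyone following along, the two norms are easy to tell apart numerically in numpy itself (here `ord=2` is the induced/spectral norm):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

fro = np.linalg.norm(A)       # numpy's default: Frobenius, sqrt(sum |a_ij|^2)
spec = np.linalg.norm(A, 2)   # induced 2-norm: largest singular value of A
print(fro)   # sqrt(3) ~ 1.732
print(spec)  # (1 + sqrt(5))/2 ~ 1.618, the golden ratio
```

For this matrix the Frobenius norm is strictly larger, which is enough to show the defaults disagree.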

Thanks,

Jason



Re: [Numpy-discussion] matrix norm

2012-10-22 Thread Jason Grout
On 10/22/12 10:44 AM, Jason Grout wrote:
 I'm curious why scipy/numpy defaults to calculating the Frobenius norm
 for matrices [1], when Matlab, Octave, and Mathematica all default to
 calculating the induced 2-norm [2].  Is it solely because the Frobenius
 norm is easier to calculate, or is there some other good mathematical
 reason for doing things differently?

I think we've established that the other software mentioned does indeed 
use the spectral norm by default.  I'm still curious: what was the 
reason for breaking with the norm (pun intended :)?  Any chance that in 
a (probably far distant) future release, the norm default could be 
changed to conform with matlab/octave/mathematica's view of the world? 
It's not a huge deal to me now that I know to watch out for it, but it 
did just bite me and a student a few times.

Thanks,

Jason



Re: [Numpy-discussion] matrix norm

2012-10-22 Thread Jason Grout
On 10/22/12 3:03 PM, Pauli Virtanen wrote:
 Jason Grout jason-sage at creativetrax.com writes:
 [clip]
 I think we've established that the other software mentioned does indeed
 use the spectral norm by default.  I'm still curious: what was the
 reason for breaking with the norm (pun intended :)?  Any chance that in
 a (probably far distant) future release, the norm default could be
 changed to conform with matlab/octave/mathematica's view of the world?
 It's not a huge deal to me now that I know to watch out for it, but it
 did just bite me and a student a few times.

 The trail leads to here:
 http://projects.scipy.org/numpy/attachment/ticket/36/numpy-6-norm-change-default.diff

 Seems like the chances of learning the reason why this change was done
 are pretty slim.

Thanks for tracking this down.  It looks like the default used to be the 
spectral norm.  I guess there still is a question of if it is possible 
to change it back at this point.

FYI, in Sage, we were following the numpy convention (default  = 
Frobenius norm) for numerical matrices, but we are now switching the 
default to be the spectral norm [1].

Thanks,

Jason

[1] http://trac.sagemath.org/sage_trac/ticket/13643



Re: [Numpy-discussion] matrix norm

2012-10-22 Thread Jason Grout
On 10/22/12 3:08 PM, Robert Kern wrote:

 http://mail.scipy.org/pipermail/numpy-discussion/2006-March/019194.html


Ah, so it was basically speed that was the issue.

I won't push this further.  I'll just note that I was confused for a 
bit, and I probably won't be the last person confused about it, given 
the conventions in the other software packages.

Jason



Re: [Numpy-discussion] Access to SPARC 64

2012-08-30 Thread Jason Grout
On 8/30/12 10:10 PM, Ondřej Čertík wrote:
 Hi,

 Does anyone have a SPARC 64 machine that I could have an access to, so
 that I can try to reproduce and fix the following issue?

 http://projects.scipy.org/numpy/ticket/2076

 That would be greatly appreciated, as it is currently marked as a
 blocker for 1.7.0.


You might ask on sage-devel.  They were just talking about SPARC 
machines there the other day.

Thanks,

Jason




Re: [Numpy-discussion] Meta: help, devel and stackoverflow

2012-06-30 Thread Jason Grout
On 6/30/12 12:10 PM, Fernando Perez wrote:
 On Sat, Jun 30, 2012 at 9:51 AM, Matthew Brett matthew.br...@gmail.com
 wrote:
 As a matter of interest - do y'all hang out much on stackexchange?  I
 notice that I often go to stackexchange for a good answer, but it
 doesn't seem that good for - discussion.  Or maybe it's just I'm not
 used to it.

 I'm in the same boat as you, but this discussion has made me much more
 interested in starting to use it

I'm curious: do you mean using stackexchange.com itself, or using 
http://scicomp.stackexchange.com/ specifically?


Thanks,

Jason


Re: [Numpy-discussion] Meta: help, devel and stackoverflow

2012-06-30 Thread Jason Grout
On 6/30/12 4:23 PM, srean wrote:
 Indeed they are, but it still needs active pulling on behalf of
 those who would want to answer questions and even then a question can
 sink deep in the well. Deeper than what one typically monitors.
 Sometimes question are not appropriately tagged. Sometimes it is not
 obvious what the tag should be, or  which tag is being monitored by
 the persons who might have the answer.

 Could be less of a problem for us given that its a more focused group
 and the predefined tags are not split too fine.

 I think the main issue is that SO requires more active engagement than
 a mailing list because checking for new mail has become something that
 almost everyone does by default anyway.

 Not saying SO is bad, I have benefited greatly from it, but this
 issues should be kept in mind.

You can subscribe to be notified by email whenever a question is posted 
to a certain tag.  So then it is no different than a mailing list as far 
as push/pull.  As far as mistagging---that is no different than posting 
to the wrong mailing list, so I don't see how that is an extra problem. 
In fact, since it's easy to switch the tags, it's easier than a 
mailing list to shuttle a question to the right mailing list/tag.

Thanks,

Jason

--
Jason Grout


Re: [Numpy-discussion] Meta: help, devel and stackoverflow

2012-06-28 Thread Jason Grout
On 6/28/12 2:46 PM, Cera, Tim wrote:
 That is really funny.  Looking through the posts, there wasn't any spam
 (could have been deleted), but it wasn't used as much as I would think.
   Have to attract people who answer questions.  Early on the
 registration seemed to be a problem.

 Solace, the software behind ask.scipy.org looks
 pretty nice, EXCEPT that the last commit was in 2009.  On the other hand
 it could be that it has reached perfection.  :-)

I'll just note that askbot.org provides a nice platform for 
ask.sagemath.org (last commit to askbot was yesterday :).  I think it's 
as easy as 'pip install askbot' [1]

Jason


[1] http://askbot.org/doc/install.html


Re: [Numpy-discussion] NumPy 1.7 release delays

2012-06-26 Thread Jason Grout
On 6/26/12 2:48 PM, Ralf Gommers wrote:
 Unfortunately, I think there are issues we are just now seeing with
 code that was released in 1.6.x, and there are many people who have
 not moved forward to 1.6.x yet.


 Some examples would be nice.


I'll bite.  Here's an issue that prevents Sage from upgrading to 1.6.2 
from 1.5.1: https://github.com/numpy/numpy/issues/291

People are actively working on it (Thanks!  Travis commented 13 hours 
ago about the root of the problem, I think).

Thanks,

Jason


Re: [Numpy-discussion] Created NumPy 1.7.x branch

2012-06-26 Thread Jason Grout
On 6/26/12 3:06 PM, Dag Sverre Seljebotn wrote:
 Something the Sage project does very well is meeting often in person

Another thing we have that has improved the mailing list climate is a 
sage-flame list [1] that serves as a venting release valve for anyone 
to post *anything* at all.  There have been multiple occasions where we 
called on people to move their discussion to sage-flame, and overall 
it's worked very nicely.  Having a public forum to argue things out 
seems to help, and my guess is that most of us may peek at it every now 
and then for kicks and giggles.

Thanks,

Jason

[1] https://groups.google.com/forum/?fromgroups#!forum/sage-flame


Re: [Numpy-discussion] Created NumPy 1.7.x branch

2012-06-26 Thread Jason Grout
On 6/26/12 3:31 PM, Travis Oliphant wrote:

 Thank you for the reminder.   I was already called out for not stopping.   
 Thanks, Dag.   A flame-list might indeed be a good idea at this point if 
 there is further need for clearing the air


Also, having it set up before it is needed is part of the solution. 
Setting it up in the heat of the moment can just further inflame 
feelings.  You put a pressure valve in at the start, instead of waiting 
for a hole to blow in the side :).

Sort of like all the governance discussions about setting up a decision 
procedure before having to face a huge decision.

Jason


Re: [Numpy-discussion] Issue Tracking

2012-05-01 Thread Jason Grout
On 5/1/12 3:19 PM, Fernando Perez wrote:
 But if you do decide to go with GHI, it should be based on what the
 system is like*today*, not on the hope that it will get better.
 About a month ago they broke label filtering by turning multi-label
 filters into an OR operation, which effectively rendered the labels
 completely useless.  Despite reporting it multiple times via their
 support tracker AND speaking in person at someone from GH, it still
 took well over a month or two to fix.  For something so simple and so
 essential, I consider that to be atrociously bad response.  So don't
 go for GHI on the hope it will get a lot better soon, b/c their recent
 record doesn't support a hopeful viewpoint.

This example indicates that basing  your decision on what it is like 
*today* may not be valid either.  You'd hope that they won't do 
something really silly, but you can't change it if they do, and you 
can't just keep running the old version of issues to avoid problems 
since you don't have control over that either.

Anyway, like everyone else has said, Ralf, Pauli, et. al. are really the 
ones to vote in this.  Given Fernando's responses, I realize why GHI 
still works for us---our small project has me and 2-4 students, and we 
all pretty much meet each week to triage issues together, and there are 
only about 40 open issues.  It's a simple enough project that we need 
*something*, but we don't need to spend our time setting up complicated 
infrastructure.  I do wish we could use Google Code issues with Github 
pull requests, though :).

Thanks,

Jason



Re: [Numpy-discussion] Issue Tracking

2012-05-01 Thread Jason Grout
On 5/1/12 7:24 PM, Charles R Harris wrote:
 I would agree that a good search facility is essential, and not
 keyword/tag based. I've found some trac tickets with google on occasion,
 although not by initial intent.

I use google to search the sage trac these days, using a shortcut to 
limit search results to the Sage trac site.

To do this in Chrome, go to Preferences, then Basics, then Manage Search 
Engines.  Down at the bottom, I fill in the three fields for a new 
search engine:

Name: trac

Keyword: t

URL: http://www.google.com/#q=site:trac.sagemath.org+%s

Then whenever I want to search trac, I just type t  (t space) in the 
URL bar of Chrome, then type whatever I'm searching for.  Google almost 
always pulls up the right ticket in the top few hits.  And it's way 
faster than the trac search.

Thanks,

Jason


Re: [Numpy-discussion] What is consensus anyway

2012-04-25 Thread Jason Grout
On 4/25/12 8:11 PM, Travis Oliphant wrote:

 On Apr 25, 2012, at 7:18 PM, josef.p...@gmail.com wrote:


 Except for the big changes like NA and datetime, I think the debate is
 pretty boring.
 The main problem that I see for discussing technical issues is whether
 there are many
 developers really interested in commenting on code and coding.
 I think it mostly comes down to the discussion on tickets or pull requests.

 This is a very insightful comment.   Github has been a great thing for both 
 NumPy and SciPy.   However, it has changed the community feel for many 
 because these pull request discussions don't happen on this list.

 You have to comment on a pull request to get notified of future comments or 
 changes.  The process is actually pretty nice, but it does mean you can't 
 just hang out watching this list.  You have to look at the pull requests and 
 get involved there.

 It would be nice if every pull request created a message to this list.  Is 
 that even possible?

Sure.  Github has a pretty extensive hook system that can notify (via 
hitting a URL) about lots of events.

https://github.com/blog/964-all-of-the-hooks

http://developer.github.com/v3/repos/hooks/

I haven't actually used it (just read the docs), so I may be mistaken...
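Just to sketch the shape of it (untested; the bridge URL and event choice are placeholders, and the payload layout follows the v3 docs linked above):

```python
import json

# Hypothetical hook configuration: ask GitHub to POST pull_request events
# to a bridge service that forwards them to the mailing list.  The URL is
# a placeholder, not a real service.
hook = {
    "name": "web",
    "active": True,
    "events": ["pull_request"],
    "config": {
        "url": "https://example.org/gh-to-mailman",
        "content_type": "json",
    },
}

# Registering it would be a POST of this body to
# /repos/:owner/:repo/hooks with suitable credentials (not shown).
body = json.dumps(hook)
```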

Thanks,

Jason


Re: [Numpy-discussion] What is consensus anyway

2012-04-25 Thread Jason Grout
On 4/25/12 11:08 PM, Puneeth Chaganti wrote:
 On Thu, Apr 26, 2012 at 6:41 AM, Travis Oliphant tra...@continuum.io wrote:
 [snip]

 It would be nice if every pull request created a message to this list.  Is 
 that even possible?

 That is definitely possible and shouldn't be too hard to do, like
 Jason said.  But that can potentially cause some confusion, with some
 of the discussion starting off in the mailing list, and some of the
 discussion happening on the pull-request itself.  Are my concerns
 justified?

It wouldn't be too hard to have mailing list replies sent back to the 
pull request as comments (again, using the github API).  Already, if 
you're on a ticket, you can just reply to a comment email and the reply 
is put as a comment in the pull request.

Jason




[Numpy-discussion] Nature Editorial: Science Software should be Open Source

2012-02-27 Thread Jason Grout
Jan Groenewald posted this link to the Sage development list, and I 
thought people here would be interested (and I figured people on the 
matplotlib, scipy, and ipython lists would see it here too):

http://www.nature.com/nature/journal/v482/n7386/full/nature10836.html

Thanks,

Jason


Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-17 Thread Jason Grout
On 2/17/12 9:07 PM, Sturla Molden wrote:

 Den 18. feb. 2012 kl. 01:58 skrev Charles R Harris
 charlesr.har...@gmail.com:



 On Fri, Feb 17, 2012 at 4:44 PM, David Cournapeau courn...@gmail.com wrote:

 I don't think c++ has any significant advantage over c for high
 performance libraries. I am not convinced by the number of people
 argument either: it is not my experience that c++ is easier to
 maintain in a open source context, where the level of people is
 far from consistent. I doubt many people did not contribute to
 numpy because it is in C instead of C++. While this is somewhat
 subjective, there are reasons that C is much more common than C++
 in that context.


 I think C++ offers much better tools than C for the sort of things in
 Numpy. The compiler will take care of lots of things that now have to
 be hand crafted and I wouldn't be surprised to see the code size
 shrink by a significant factor.

 The C++11 standard is fantastic. There are automatic data types,
 closures, reference counting, weak references, an improved STL with
 datatypes that map almost 1:1 against any built-in Python type, a sane
 threading API, regex, etc. Even the PRNG is Mersenne Twister by standard.
 With C++11 it is finally possible to write C++ (almost) like Python.
 On the downside, C++ takes a long time to learn, most C++ textbooks
 teach bad programming habits from beginning to end, and C++
 becomes inherently dangerous if you write C++ like C. Many also abuse
 C++ as a bloatware generator. Templates can also be abused to write
 code that is impossible to debug. While it in theory could be better, C
 is a much smaller language. Personally I prefer C++ to C, but I am not
 convinced it will be better for NumPy.

 I agree about Cython. It is nice for writing a Python interface for C,
 but get messy and unclean when used for anything else. It also has too
 much focus on adding all sorts of new features instead of correctness
 and stability. I don't trust it to generate bug-free code anymore.

For what it's worth, Cython supports C++ now.  I'm sure there are people 
on this list that know much better than me the extent of this support, 
so I will let them chime in, but here are some docs on it:

http://docs.cython.org/src/userguide/wrapping_CPlusPlus.html

If you have specific examples of new features trumping correctness and 
stability, I'm sure the Cython devel list would love to hear about it. 
They seem to be pretty concerned about stability and correctness to me, 
though I admit I don't follow the list extremely deeply.

I don't trust any automated tool to generate bug-free code.  I don't 
even trust myself to generate bug-free code :).

Jason
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-17 Thread Jason Grout
On 2/17/12 9:54 PM, Sturla Molden wrote:
 We would have to write a C++ programming tutorial that is based on Pyton 
 knowledge instead of C knowledge.

I personally would love such a thing.  It's been a while since I did 
anything nontrivial on my own in C++.

Jason



Re: [Numpy-discussion] Proposed Roadmap Overview

2012-02-17 Thread Jason Grout
On 2/17/12 10:10 PM, Sturla Molden wrote:

 Sure. They just keep adding features at the expense of stability. No
 focus or sense of direction. Focus on a small feature set, make it
 right, then don't add to it. That is the root of the successes of C,
 Python and Java. NumPy needs a stable compiler that doesn't make
 mistakes everywhere. You cannot trust that to Cython.

Again, if you have specific examples of stability being sacrificed, I'm 
sure the Cython list would like to hear about it.  Your statements, 
as-is, are raising huge FUD flags for me.

Anyways, I've said enough on this, and we've seen enough problems in 
discussions on this list already.  Many people in the numpy community 
know Cython well enough to judge these things for themselves.

Thanks,

Jason


Re: [Numpy-discussion] Numpy governance update

2012-02-16 Thread Jason Grout
On 2/15/12 6:27 PM, Dag Sverre Seljebotn wrote:
 But in the very end, when agreement can't
 be reached by other means, the developers are the one making the calls.
 (This is simply a consequence that they are the only ones who can
 credibly threaten to fork the project.)

Interesting point.  I hope I'm not pitching a log onto the fire here, 
but in numpy's case, there are very many capable developers on other 
projects who depend on numpy who could credibly threaten a fork if they 
felt numpy was drastically going wrong.

Jason


Re: [Numpy-discussion] Numpy governance update

2012-02-16 Thread Jason Grout
On 2/16/12 6:23 AM, Francesc Alted wrote:
 On Feb 16, 2012, at 12:15 PM, Jason Grout wrote:

 On 2/15/12 6:27 PM, Dag Sverre Seljebotn wrote:
 But in the very end, when agreement can't be reached by other
 means, the developers are the one making the calls. (This is
 simply a consequence that they are the only ones who can credibly
 threaten to fork the project.)

 Interesting point.  I hope I'm not pitching a log onto the fire
 here, but in numpy's case, there are very many capable developers
 on other projects who depend on numpy who could credibly threaten a
 fork if they felt numpy was drastically going wrong.

 Jason, that there are capable developers out there who are able to fork
 NumPy (or any other project, for that matter) is a given.  The point
 Dag was signaling is that this threat is more probable to happen
 *inside* the community.

Sure.  Given numpy's status as a fundamental building block of many 
systems, though, if there was a perceived problem by downstream, it's 
more liable to be forked than most other projects that aren't so close 
to the headwaters.


 And you pointed out an important aspect too by saying if they felt
 numpy was drastically going wrong.  It gives me the impression that
 some people are very frightened that something really bad would
 happen, well before it happens.  While I agree that this is
 *possible*, I'd also advocate giving Travis the benefit of the doubt.
 I'm convinced he (and Continuum as a whole) is making things happen
 that will benefit the entire NumPy community; but in case something
 gets really wrong and catastrophic, it is always a relief to know
 that things can be reverted in the pure open source tradition (by
 either doing a fork, creating a new foundation, or even better,
 proposing a new way to do things).  What does not sound reasonable
 to me is to allow fear to block Continuum's efforts for making a better
 NumPy.  I think it is better to relax a bit, see how things are
 going, and then judge by looking at the *results*.

I'm really happy about Continuum.  I agree with Mark that numpy 
certainly could use a few more core developers.  I've not decided on how 
much structure I feel numpy governance needs (nor do I think it's 
particularly important for me to decide how I feel at this point on the 
subject).

Jason


Re: [Numpy-discussion] Numpy governance update

2012-02-16 Thread Jason Grout
On 2/16/12 8:06 AM, Scott Sinclair wrote:
 On 16 February 2012 15:08, Thomas Kluyver tak...@gmail.com wrote:
 It strikes me that the effort everyone's put into this thread could
 have by now designed some way to resolve disputes. ;-)

 This is not intended to downplay the concerns raised in this thread,
 but I can't help myself.

 I propose the following (tongue-in-cheek) patch against the current
 numpy master branch.

 https://github.com/scottza/numpy/compare/constitution

 If this gets enough interest, I'll consider submitting a real pull request 
 ;-)

Time to start submitting lots of 1-line commits and typo fixes to pad my 
karma :).

Jason


Re: [Numpy-discussion] Issue Tracking

2012-02-14 Thread Jason Grout
Jeroen's reply about the Sage buildbot is below:

 Jeroen, do we have an
  automatic buildbot system for Sage?

Depends on what you mean by automatic.  We have the buildbot setup
at http://build.sagemath.org/sage/waterfall which builds automatically
but I still have to change versions by hand and start the builders by
hand (in theory, this could be automated but in practice, this is not so
easy).

Thanks,

Jason


Re: [Numpy-discussion] Numpy governance update - was: Updated differences between 1.5.1 to 1.6.1

2012-02-14 Thread Jason Grout
On 2/14/12 7:17 PM, Travis Oliphant wrote:
   * Fund Open Source Projects in Science (currently NumPy, SciPy, 
 IPython, and Matplotlib are first-tier with a whole host of second-tier 
 projects that could received funding)
   * through grants

So, for example, would the Foundation apply to mentor Google Summer of 
Code projects?

Jason


Re: [Numpy-discussion] Migrating issues to GitHub

2012-02-11 Thread Jason Grout
On 2/11/12 1:44 PM, Travis Oliphant wrote:
 How do people feel about moving the issue tracking for NumPy to Github?
 It looks like they have improved their issue tracking quite a bit and
 the workflow and integration with commits looks quite good from what I
 can see.

 Here is one tool I saw that might help in the migration:
 https://github.com/trustmaster/trac2github

 Are there others?

Are there any good github trac plugins?  For example:

http://davglass.lighthouseapp.com/projects/21212/home

http://trac-hacks.org/wiki/GitPlugin (git, not github, but still maybe 
useful)

Thanks,

Jason




Re: [Numpy-discussion] Migrating issues to GitHub

2012-02-11 Thread Jason Grout
On 2/11/12 3:12 PM, Eric Firing wrote:
 A third problem is that the entire style of presentation is poorly
 designed from a use standpoint, in comparison to the sourceforge tracker
 which mpl used previously.  The github tracker appears to have been
 designed by a graphics person, not a software maintainer.  The
 information density in the issue list is very low; it is impossible to
 scan a large number of issues at once; there doesn't seem to be any
 useful sorting and selection mechanism.

I agree.  Also, another thing that is really frustrating is that the 
issue number does not appear in the listing.  So when you are trying to 
refer to another issue, and try finding it by scanning the list, you 
have to mouse over the title and extract the information mentally from 
the url.

Google code issues are much, much better in these regards.

Jason



Re: [Numpy-discussion] consensus

2011-10-29 Thread Jason Grout
On 10/29/11 5:02 PM, Olivier Delalleau wrote:
 I haven't been following the discussion closely, but wouldn't it be instead:
 a.mask[0:2] = True?

 It's something that I actually find a bit difficult to get right in the
 current numpy.ma implementation: I would find it more
 intuitive to have True for valid data, and False for invalid / missing
 / ... I realize how the implementation makes sense (and is appropriate
 given that the name is mask), but I just thought I'd point this out...
 even if it's just me ;)


Just a thought: what if this also worked:

a.mask[0:2]=np.NA

as a synonym for a.mask[0:2]=True?

Would that be less confusing, and/or would it be less powerful or 
extensible in important ways?
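For comparison, today's numpy.ma spells this with True (or the ma.masked 
constant) marking the invalid entries; a minimal sketch with made-up data:

```python
import numpy.ma as ma

a = ma.array([1.0, 2.0, 3.0, 4.0])
a.mask = [True, True, False, False]   # True marks an entry as invalid/ignored
print(a)                              # [-- -- 3.0 4.0]
assert a.count() == 2                 # two entries remain valid

# equivalent spelling: assign the ma.masked constant into a slice
b = ma.array([1.0, 2.0, 3.0, 4.0])
b[0:2] = ma.masked
assert (a.mask == b.mask).all()
```

The proposed a.mask[0:2] = np.NA spelling does not exist; the sketch only 
shows the convention it would be a synonym for.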

Thanks,

Jason Grout



Re: [Numpy-discussion] consensus

2011-10-28 Thread Jason Grout
On 10/28/11 10:38 PM, Benjamin Root wrote:
 I might like the idea of an NA that always means absent (and destroys,
 even through views), and an MA (or some other name) that always means
 ignore (and has the masking behavior with views).

I should point out that if I'm dictating code to someone (e.g., 
teaching, or helping someone verbally), it's going to be hard to 
distinguish between the verbal sounds of NA and MA.

And from a lurker (me), thanks for the discussion.  I find it very 
interesting to read.

Thanks,

Jason Grout


Re: [Numpy-discussion] A Foundation for the support of NumPy and SciPy

2011-10-05 Thread Jason Grout
On 10/4/11 6:36 PM, Nathaniel Smith wrote:
 TL;DR: When it comes to legal matters: starting your own non-profit is
 to joining an existing umbrella non-profit as CVS is to git. (And in
 fact git is also a SF Conservancy member.)

Good point.  William has a Sage Foundation set up through University of 
Washington, and UW (IIRC) handles all of these details.  I think it has 
worked out well (though, of course, William is the one to ask).

Jason


Re: [Numpy-discussion] dtype repr change?

2011-07-29 Thread Jason Grout
On 7/28/11 4:21 PM, Matthew Brett wrote:
 Hi,

 Do you know if doctests supports any sort of manual intervention, like
 a plugin system?

 Actually, I was going to ask you that question :)

 But yes, there's the NumpyDoctest nose plugin, for example.  Using it
 does mean you have to customize nose somehow - in numpy's case by
 using the 'numpy.test()' machinery.  Sympy I believe has a fair amount
 of machinery to work with doctests, but I haven't looked at that yet,


Sage also has a fair amount of machinery dealing with doctests.  Almost 
all of Sage's testing is done in doctests (covering 85.4% of the Sage 
library, which is 27833 functions).  All doctests must pass before a 
release, and any new functions must have doctests.  We do also have some 
unit tests, and there is sentiment that we should have more unit tests, 
but the requirement right now is only for doctests.
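As a generic illustration of the style (not Sage's actual machinery), a 
doctested function keeps its examples in the docstring, where the doctest 
runner checks them:

```python
def cumulative_sum(values):
    """Return the running totals of *values*.

    >>> cumulative_sum([1, 2, 3])
    [1, 3, 6]
    >>> cumulative_sum([])
    []
    """
    totals, running = [], 0
    for v in values:
        running += v
        totals.append(running)
    return totals

if __name__ == "__main__":
    import doctest
    doctest.testmod()   # silent when every docstring example passes
```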

Jason

--
Jason Grout


Re: [Numpy-discussion] missing data discussion round 2

2011-06-28 Thread Jason Grout
On 6/28/11 5:20 PM, Matthew Brett wrote:
 Hi,

 On Tue, Jun 28, 2011 at 4:06 PM, Nathaniel Smith n...@pobox.com wrote:
 ...
 (You might think, what difference does it make if you *can* unmask an
 item? Us missing data folks could just ignore this feature. But:
 whatever we end up implementing is something that I will have to
 explain over and over to different people, most of them not
 particularly sophisticated programmers. And there's just no sensible
 way to explain this idea that if you store some particular value, then
 it replaces the old value, but if you store NA, then the old value is
 still there.

 Ouch - yes.  No question, that is difficult to explain.   Well, I
 think the explanation might go like this:

 Ah, yes, well, that's because in fact numpy records missing values by
 using a 'mask'.   So when you say `a[3] = np.NA', what you mean is,
 'a._mask = np.ones(a.shape, np.dtype(bool); a._mask[3] = False`

 Is that fair?

Maybe instead of np.NA, we could say np.IGNORE, which sort of conveys 
the idea that the entry is still there, but we're just ignoring it.  Of 
course, that goes against common convention, but it might be easier to 
explain.
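The behavior under discussion (the old value surviving under the mask) is 
exactly what numpy.ma's default soft mask does today; a sketch with made-up 
data:

```python
import numpy.ma as ma

a = ma.array([10, 20, 30])
a[1] = ma.masked          # "ignore" entry 1; the stored 20 is untouched
assert a.sum() == 40      # masked entries are skipped
a.mask[1] = False         # unmask: the original value is still there
assert a[1] == 20
```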

Thanks,

Jason



Re: [Numpy-discussion] Zero row in SVD's unitary matrix on some Mac's

2011-04-28 Thread Jason Grout
On 4/26/11 3:18 PM, Jason Grout wrote:
 On 4/26/11 11:49 AM, Pauli Virtanen wrote:
 But apparently either there's a bug, or the LAPACK man page needs to
 be understood as you say.

 I've posted a question to the Lapack forum:

 http://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=2&t=2402

We've got a reply now:

http://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=2&t=2402

Pauli, you're right that the statement should be parsed as:

If (JOBZ = 'A') or (JOBZ = 'O' and M >= N), VT contains the N-by-N 
unitary matrix V**H;

so you're right that the function is written correctly in the case we 
are bringing up (but still has an error in assuming too much when JOBZ 
is not 'A').

We'll work on providing an example with the Accelerate framework and the 
zero row.

Thanks,

Jason



Re: [Numpy-discussion] Zero row in SVD's unitary matrix on some Mac's

2011-04-26 Thread Jason Grout
On 4/25/11 12:57 PM, Pauli Virtanen wrote:
 On Mon, 25 Apr 2011 10:16:13 -0700, Rob Beezer wrote:
 [clip]
 Many more details and complete transcripts are at:
 http://trac.sagemath.org/sage_trac/ticket/11248

 Any thoughts or advice to help us understand this would be greatly
 appreciated.

 The Numpy routine is a very thin wrapper of LAPACK's ZGESDD, and probably
 cannot have any bugs of this kind, so the problem is most likely with the
 LAPACK and BLAS libraries you use. You will probably be able to reproduce
 the problem also with an equivalent Fortran/C snippet calling LAPACK
 directly.

 Problems like this in BLAS/LAPACK are somewhat difficult to track. You
 could try switching to another BLAS library (or, if you use ATLAS,
 compile it differently) and checking if the problem disappears.


I was just looking up the documentation for ZGESDD and noticed that the 
value we have for rwork in the numpy call [1] does not match the Lapack 
docs.  This was changed in Lapack 3.2.2 [2].  I've submitted a pull request:

https://github.com/numpy/numpy/pull/78

but I have not tested the change.

I doubt this fix will fix the problem on this thread, but it makes sense 
to make the change anyway.

Thanks,

Jason

[1] https://github.com/numpy/numpy/blob/master/numpy/linalg/linalg.py#L1300

[2] http://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=2&t=1779, or 
http://www.netlib.org/lapack/lapack-3.2.2.html (bug 0046)



Re: [Numpy-discussion] Zero row in SVD's unitary matrix on some Mac's

2011-04-26 Thread Jason Grout
On 4/25/11 12:57 PM, Pauli Virtanen wrote:
 The Numpy routine is a very thin wrapper of LAPACK's ZGESDD, and probably
 cannot have any bugs of this kind,

As noted in my other message, I've been digging through the ZGESDD docs 
to understand it better.  Here is the doc string for what becomes the 
V^T matrix:

*  VT  (output) COMPLEX*16 array, dimension (LDVT,N)
*  If JOBZ = 'A' or JOBZ = 'O' and M >= N, VT contains the
*  N-by-N unitary matrix V**H;
*  if JOBZ = 'S', VT contains the first min(M,N) rows of
*  V**H (the right singular vectors, stored rowwise);
*  if JOBZ = 'O' and M  N, or JOBZ = 'N', VT is not referenced.

Notice that VT is supposed to contain the N-by-N V**H if M >= N (i.e., 
at least as many rows as columns).  In our problem cases, we have M < N. 
I looked at the numpy linalg code and it doesn't seem to check whether 
M < N before returning the V matrix.  Is this a problem?
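For reference, the default full_matrices=True call does hand back the full 
4x4 V**H for a 3x4 input; whether that matrix is actually unitary is the 
question at issue on the affected builds. A sketch (the random test matrix 
is illustrative, not the one from the bug report):

```python
import numpy as np

rng = np.random.RandomState(0)
a = rng.randn(3, 4) + 1j * rng.randn(3, 4)   # M = 3 < N = 4, like the problem cases
u, s, vt = np.linalg.svd(a)                  # full_matrices=True is the default
print(u.shape, vt.shape)                     # (3, 3) (4, 4): the full V**H
# the question at issue: is that 4x4 V**H actually unitary?
print(np.allclose(vt @ vt.conj().T, np.eye(4)))
```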

When I run an example C program that links to the OSX Accelerate 
framework, I get a V matrix that has a zero row at the bottom.  I got 
these results by:

1. Download the example found at 
http://software.intel.com/sites/products/documentation/hpc/mkl/lapack/mkl_lapack_examples/zgesdd_ex.c.htm

2. Change the zgesdd call to have first argument A (so it mimics the 
default numpy call)

3. Change

print_matrix( "Right singular vectors (stored rowwise)", m, n, vt, ldvt );

to

print_matrix( "Right singular vectors (stored rowwise)", n, n, vt, ldvt );

(this is to print out the full n-by-n matrix, instead of just the first 
m rows)

4. Compile the program with gcc zgesdd_ex.c -framework Accelerate

5. Run the program: ./a.out

And indeed, I get a 0 row as the last row of the V**H matrix

However, when I do the SVD of this same matrix using numpy (which I 
*think* uses the Accelerate framework the way we compile it in Sage), I 
get a full V matrix:

In [13]: I=1j
In [14]: a=numpy.asarray([( -5.40+ I*  7.40), (  6.00+ I*  6.38), ( 
9.91+ I*  0.16), ( -5.28+ I* -4.16),   (  1.09+ I*  1.55), (  2.60+ I* 
0.07), (  3.98+ I* -5.26), (  2.03+ I*  1.11),   (  9.88+ I*  1.91), ( 
4.92+ I*  6.31), ( -2.11+ I*  7.39), ( -9.81+ I* 
-8.98)],dtype=complex).reshape(3,4)

In [15]: numpy.linalg.svd(a)
Out[15]:
(array([[ 0.54742764+0.j,  0.76302168+0.j, 
-0.34368721+0.j],
[-0.03507684-0.15148438j,  0.27097680-0.22637514j,
  0.54572628-0.74386208j],
[ 0.81299016+0.12325614j, -0.52311095-0.13956616j,
  0.13357577-0.1135282j ]]),
  array([ 21.75519279,  16.59545017,   3.97327576]),
  array([[ 0.23160531+0.20669796j,  0.36590896+0.3864613j ,
  0.24259328+0.32833854j, -0.56133932-0.37233547j],
[-0.57911906+0.40329699j,  0.10921398+0.17242422j,
  0.59673801-0.27492812j,  0.15998810+0.05510835j],
[ 0.60420072+0.12337134j, -0.18988750+0.29722068j,
  0.39210635+0.19697635j,  0.45451433+0.31015037j],
[-0.08335334+0.13557583j,  0.69622499-0.25685378j,
 -0.15350399+0.43075518j,  0.46295826+0.02290113j]]))

That seems odd.  I must be missing something.  It also seems odd that 
numpy returns V as a full 4x4 matrix even when M < N.

Thanks,

Jason


Re: [Numpy-discussion] Zero row in SVD's unitary matrix on some Mac's

2011-04-26 Thread Jason Grout
On 4/26/11 11:07 AM, Jason Grout wrote:
 And indeed, I get a 0 row as the last row of the V**H matrix

I just double-checked things one last time and saw that I actually 
hadn't changed the first argument of zgesdd to A in the program that I 
actually ran.  So with this change, I get a nonzero last row of the V**H 
matrix from the C call to zgesdd.  So everything is consistent between 
the C call to zgesdd and the numpy svd call.

So now my remaining question is: if the Lapack docs only say that V**H 
is the full n-by-n matrix if M >= N, why is numpy returning it even if M < N?

Thanks,

Jason



Re: [Numpy-discussion] Zero row in SVD's unitary matrix on some Mac's

2011-04-26 Thread Jason Grout
On 4/26/11 11:12 AM, Jason Grout wrote:
 On 4/26/11 11:07 AM, Jason Grout wrote:
 And indeed, I get a 0 row as the last row of the V**H matrix

 I just double-checked things one last time and saw that I actually
 hadn't changed the first argument of zgesdd to A in the program that I
 actually ran.  So with this change, I get a nonzero last row of the V**H
 matrix from the C call to zgesdd.  So everything is consistent between
 the C call to zgesdd and the numpy svd call.

 So now my remaining question is: if the Lapack docs only say that V**H
 is the full n-by-n matrix if M >= N, why is numpy returning it even if M < N?

One more post talking to myself...

I notice that the zgesvd routine docs guarantee that the V returned is 
unitary, regardless of the size of A.  So this might be another argument 
for calling zgesvd instead of zgesdd.

Thanks,

Jason



Re: [Numpy-discussion] Zero row in SVD's unitary matrix on some Mac's

2011-04-26 Thread Jason Grout
On 4/26/11 11:22 AM, Jason Grout wrote:
 On 4/26/11 11:12 AM, Jason Grout wrote:
 On 4/26/11 11:07 AM, Jason Grout wrote:
 And indeed, I get a 0 row as the last row of the V**H matrix

 I just double-checked things one last time and saw that I actually
 hadn't changed the first argument of zgesdd to A in the program that I
 actually ran.  So with this change, I get a nonzero last row of the V**H
 matrix from the C call to zgesdd.  So everything is consistent between
 the C call to zgesdd and the numpy svd call.

  So now my remaining question is: if the Lapack docs only say that V**H
  is the full n-by-n matrix if M >= N, why is numpy returning it even if M < N?

 One more post talking to myself...

 I notice that the zgesvd routine docs guarantee that the V returned is
 unitary, regardless of the size of A.  So this might be another argument
 for calling zgesvd instead of zgesdd.


Okay, just one more data point.  Our people that are seeing the problem 
with numpy returning a non-unitary V also see a non-unitary V being 
returned by the test C call to zgesdd.  In other words, it really 
appears that zgesdd follows the Lapack docs, and if rows < columns, the 
returned V is not necessarily unitary, but may contain a zero row.  This 
makes numpy's assumptions in using zgesdd false.
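Until the underlying library is fixed, one workaround is a defensive check 
on the returned factor. This is only a sketch (svd_checked is a made-up 
helper, not anything numpy provides):

```python
import numpy as np

def svd_checked(a):
    """np.linalg.svd plus a defensive unitarity check on V**H.

    Guards against the zero-row V**H reported on the affected
    Accelerate builds when a has fewer rows than columns.
    """
    u, s, vt = np.linalg.svd(a)
    n = vt.shape[0]
    if not np.allclose(vt @ vt.conj().T, np.eye(n)):
        raise np.linalg.LinAlgError("svd returned a non-unitary V**H")
    return u, s, vt

# passes on a correct LAPACK, raises on one exhibiting the bug
u, s, vt = svd_checked(np.arange(12, dtype=float).reshape(3, 4))
```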

You can see this report at 
http://trac.sagemath.org/sage_trac/ticket/11248#comment:25

Thanks,

Jason



Re: [Numpy-discussion] Zero row in SVD's unitary matrix on some Mac's

2011-04-26 Thread Jason Grout
On 4/26/11 11:49 AM, Pauli Virtanen wrote:
 Tue, 26 Apr 2011 11:36:19 -0500, Jason Grout wrote:
 [clip]
 Okay, just one more data point.  Our people that are seeing the problem
 with numpy returning a non-unitary V also see a non-unitary V being
 returned by the test C call to zgesdd.  In other words, it really
  appears that zgesdd follows the Lapack docs, and if rows < columns, the
 returned V is not necessarily unitary, but may contain a zero row.  This
 makes numpy's assumptions in using zgesdd false.

 You can see this report at
 http://trac.sagemath.org/sage_trac/ticket/11248#comment:25

  What LAPACK promises is not very clear. Earlier on in the man page:

  JOBZ    (input) CHARACTER*1
          Specifies options for computing all or part of the matrix U:
          = 'A':  all M columns of U and all N rows of V**H are returned
                  in the arrays U and VT;
          = 'S':  the first min(M,N) columns of U and the first min(M,N)
                  rows of V**H are returned in the arrays U and VT;
          = 'O':  If M >= N, the first N columns of U are overwritten in
                  the array A and all rows of V**H are returned in the
                  array VT; otherwise, all columns of U are returned in
                  the array U and the first M rows of V**H are
                  overwritten in the array A;
          = 'N':  no columns of U or rows of V**H are computed.

 Looks to me the statement should be parsed as:

    return_n_rows = (jobz == 'A') or (jobz == 'O' and m >= n)

 So the current usage should be OK. (At least, as long as jobz == 'A',
 in the other cases, I don't think the wrapper does the right thing.)

 But apparently either there's a bug, or the LAPACK man page needs to
 be understood as you say.


Ah, you're right that it makes sense to parse their statement that way 
too, so I'm not so sure what Lapack really is saying anymore either.  If 
it's parsed the way you propose (which makes sense given the JOBZ 
description), I think it points to a bug in the Accelerate Lapack on the 
affected platforms, as we get the same zero row when we call the 
function directly from C, without numpy, python, or Sage in the middle.

The updated rwork calculation makes no difference with a 3x4 matrix 
(both the old calculation and the new calculation give 66 in the 3x4 
case), so I don't think that is affecting anything.

Jason


Re: [Numpy-discussion] Zero row in SVD's unitary matrix on some Mac's

2011-04-26 Thread Jason Grout
On 4/26/11 11:49 AM, Pauli Virtanen wrote:
 But apparently either there's a bug, or the LAPACK man page needs to
 be understood as you say.

I've posted a question to the Lapack forum:

http://icl.cs.utk.edu/lapack-forum/viewtopic.php?f=2&t=2402

Thanks,

Jason



Re: [Numpy-discussion] Zero row in SVD's unitary matrix on some Mac's

2011-04-26 Thread Jason Grout
On 4/26/11 3:29 PM, Pauli Virtanen wrote:
 On Tue, 26 Apr 2011 14:52:52 -0500, Jason Grout wrote:
 [clip]
 The updated rwork calculation makes no difference with a 3x4 matrix
 (both the old calculation and the new calculation give 66 in the 3x4
 case), so I don't think that is affecting anything.

 Actually, there *is* a difference for the 3*4 matrix:

   old = 3*3*5 + 5*3 = 60
   new = 3*3*5 + 7*3 = 66

 The old calculation had 5 instead of 7 in the formula --- I don't know if
  it was written according to an older version of the documentation, or if
  it was simply a bug.


I was talking about our C example program, based on the example from 
Intel [1].  Intel already had the 3*3*5+7*3 calculation; that's what I 
called the old calculation.  The new calculation I was referring to 
was the min(M,N)*max(5*min(M,N)+7,2*max(M,N)+2*min(M,N)+1) calculation 
from Lapack 3.2.2.
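The three workspace formulas can be compared directly for this case; a 
quick check (the variable names are mine):

```python
# rwork sizes for zgesdd on a 3x4 complex matrix (M=3, N=4)
M, N = 3, 4
mn, mx = min(M, N), max(M, N)

old_numpy = 5 * mn * mn + 5 * mn                        # the pre-fix numpy formula
intel_example = 5 * mn * mn + 7 * mn                    # the Intel example's formula
lapack_322 = mn * max(5 * mn + 7, 2 * mx + 2 * mn + 1)  # LAPACK 3.2.2's requirement
print(old_numpy, intel_example, lapack_322)             # 60 66 66
```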

Thanks,

Jason


[1] 
http://software.intel.com/sites/products/documentation/hpc/mkl/lapack/mkl_lapack_examples/zgesdd_ex.c.htm


Re: [Numpy-discussion] What Requires C and what is just python

2011-03-21 Thread Jason Grout
On 3/20/11 11:54 AM, Lou Pecora wrote:
 I'll add my $0.02 here.  Someone mentioned SAGE.  I can say that on the Mac 
 the
 sage package seems to install very easily and reliably.   I've done 4
 installations on Macs 10.4 to 10.6.  You can do them with one command line.
   They take a few hours, but all have gone flawlessly.  The installation 
 contains
 a LOT of python stuff (including all the packages mentioned here) and you use 
 it
 just like any other installation except you need to point to the sage folder.
   There are examples in the documentation.
   -- Lou Pecora,   my views are my own.

On a mac, you can also download the drag-and-drop Mac App.  If you 
download the *-app.dmg, you can just drag and drop the app to your 
applications folder.  Launch Sage by double-clicking on the app (it 
pulls up your webbrowser with the notebook frontend).  You can also 
launch terminal sessions with Sage, use Sage from the command line, etc.

Or you can compile from source, which is I believe what the above 
instructions are about.

Jason



[Numpy-discussion] numpy.linalg.svd documentation

2011-01-29 Thread Jason Grout
The SVD documentation seems a bit misleading.  It says:

Factors the matrix a as u * np.diag(s) * v, where u and v are unitary 
and s is a 1-d array of a's singular values.

However, that is only true (i.e., you just have to do np.diag(s) to get 
S) in general if full_matrices is False, which is not the default. 
Otherwise, you have to do something like the first example in the docstring.

I'm not sure what the right fix is here.  Changing the default for 
full_matrices seems too drastic.  But then having u*np.diag(s)*v in the 
first line doesn't work if you have a rectangular matrix.  Perhaps the 
first line could be changed to:

Factors the matrix a as u * S * v, where u and v are unitary and S is a 
matrix with shape (a.shape[0], a.shape[1]) with np.diag(S)=s, where s is 
a 1-d array of a‘s singular values.

It sounds more confusing that way, but at least it's correct.

Maybe even better would be to add a shape option to np.diag, and then 
just make the first line of the svd docstring say 
u*np.diag(s,shape=(a.shape[0],a.shape[1]))*v
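In the meantime, the docstring's intent for the full_matrices=True case can 
be written out explicitly; a sketch of the rectangular S construction:

```python
import numpy as np

a = np.arange(12.0).reshape(3, 4)
u, s, v = np.linalg.svd(a)        # full_matrices=True: u is 3x3, v is 4x4
S = np.zeros(a.shape)             # the (a.shape[0], a.shape[1]) "diag" of s
S[:s.size, :s.size] = np.diag(s)
assert np.allclose(u @ S @ v, a)  # now u * S * v really reconstructs a
```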


Jason


Re: [Numpy-discussion] Development workflow

2010-10-13 Thread Jason Grout
On 10/12/10 10:33 AM, Charles R Harris wrote:
 On Tue, Oct 12, 2010 at 9:22 AM, Pierre GM pgmdevl...@gmail.com wrote:


 On Oct 12, 2010, at 5:10 PM, David Cournapeau wrote:

 On Tue, Oct 12, 2010 at 6:06 PM, Pierre GM pgmdevl...@gmail.com wrote:
 All,
 All my sincere apologies for the mess I caused... The changes I wanted
 to commit were quite minimal (just a few lines in a test), but I obviously
 included some stuff I didn't want to...

 No worries. Unless someone beats me to it, I will revert your changes
 tomorrow morning (in 6-7 hours), so that you can reapply them as you
 intended.

 I'm going to try again in a couple of hours. Looks like I need to specifically
 exclude the files created by `python setup.py develop`. Would it be worth
 adding those files to the numpy/.gitignore? I just put some in my
 .git/info/exclude.


 What is curious is how the project files got added in the first place ;) But
 Jarrod committed an improved .gitignore file that should help. I always run
 git status before committing anything unless I am going to specify the files
 to commit.


I'll add that I find git gui invaluable, especially while I'm learning 
the multitude of git options.  I can quickly see at a glance all 
changes, stage/unstage lines, hunks, or files (by right-clicking on the 
line/hunk, or clicking on the icon next to the file), and double-check 
that everything is the way I want it.  I can also check a box to amend 
the last commit (and see exactly what that last commit was).
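For those staying on the command line, roughly the same review loop looks 
like this (a throwaway-repo sketch, safe to replay anywhere):

```shell
# throwaway repo so the commands have something to act on
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email "you@example.com" && git config user.name "You"
echo "x = 1" > mod.py
git add mod.py && git commit -q -m "initial"

echo "y = 2" >> mod.py
git status --short              # see the modified file before staging anything
git add mod.py                  # or: git add -p, to stage hunk by hunk
git diff --cached               # final look at exactly what will be committed
git commit -q -m "add y"
git commit --amend -q -m "add y, with a better message"   # fix the last commit
```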

Thanks,

Jason


Re: [Numpy-discussion] additions to random: innovative names vs. algorithm specification

2010-07-30 Thread Jason Grout
On 7/29/10 1:37 PM, Robert Kern wrote:

 (Book of Guido, 7:42)

That's intriguing.  Do you happen to have a link to it?

Thanks,

Jason



Re: [Numpy-discussion] Disabling Extended Precision in NumPy (like -ffloat-store)

2010-04-25 Thread Jason Grout
On 04/21/2010 09:47 AM, Adrien Guillon wrote:
 Hello all,

 I've recently started to use NumPy to prototype some numerical
 algorithms, which will eventually find their way to a GPU (where I
 want to limit myself to single-precision operations for performance
 reasons).  I have recently switched to the use of the single type in
 NumPy to ensure I use single-precision floating point operations.

 My understanding, however, is that Intel processors may use extended
 precision for some operations anyways unless this is explicitly
 disabled, which is done with gcc via the -ffloat-store option.
 Since I am prototyping algorithms for a different processor
 architecture, where the extended precision registers simply do not
 exist, I would really like to force NumPy to limit itself to using
 single-precision operations throughout the calculation (no extended
 precision in registers).

 How can I do this?



If you want to avoid any assumptions the hardware might make that you 
aren't aware of, you might look at software libraries which emulate 
single precision numbers.  For example, in Sage [1], we have a floating 
point type that is a python object wrapped around an MPFR [2] number 
(look at the RealField class or email sage-support for details).  You 
might also look at minifloat or related libraries.  For 
example, a Google search turns up 
http://groups.google.com/group/comp.lang.python/browse_thread/thread/ba2b69a8fd8d45f6.
The cool thing is that numpy arrays can use these objects as 
naturally as other python objects.
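
As a concrete sketch of the software-emulation idea, the standard library's 
decimal module can stand in for a reduced-precision type (an approximation 
only -- Decimal is base 10, while IEEE binary32 is base 2, so a 7-digit 
context just roughly mirrors single precision), and a numpy object array 
applies its arithmetic elementwise:

```python
import numpy as np
from decimal import Decimal, getcontext

# Round every Decimal operation to 7 significant digits, roughly the
# decimal precision of binary32; no hidden extended-precision registers.
getcontext().prec = 7

print(Decimal(1) / Decimal(3))  # 0.3333333

# Object arrays apply the Python operators elementwise, so the emulated
# type works inside ordinary array expressions.
arr = np.array([Decimal("1.5"), Decimal("2.5")], dtype=object)
print(arr + arr)  # [Decimal('3.0') Decimal('5.0')]
```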

Thanks,

Jason

[1] http://www.sagemath.org
[2] http://www.mpfr.org



