Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py
Hi Pearu

2008/5/20 Pearu Peterson [EMAIL PROTECTED]: (CC: numpy-discussion because of other reactions on the subject.)

On Tue, May 20, 2008 1:26 am, Robert Kern wrote: Is this an important bugfix? If not, can you hold off until 1.1.0 is released?

The patch fixes a long-standing and unreported bug in f2py - I think the bug was introduced when Python defined the min and max functions. I learned about the bug when reading a manuscript about f2py. Such bugs should not end up in a paper demonstrating f2py's inability to process certain features, as if it had not been designed to do so. So, I'd consider the bugfix important. On the other hand, the patch does not affect numpy users who do not use f2py in any way. So, it is not important for numpy users in general.

Many f2py users currently get their version via NumPy, I assume.

Hmm, I also thought that the trunk is open for development, even though r5198 is only fixing a bug (and I do not plan to develop f2py in numpy further, just fix bugs and maintain it). If the release process is going to take weeks and lock the trunk, maybe the release candidates should live in a separate branch?

If the patch a) fixes an important bug and b) has unit tests to ensure it does what it is supposed to, then I'd be +1 for applying it. It looks like there are some tests included; to what degree do they cover the bugfix, and do we have tests to make sure that f2py still functions correctly?

I'd like to make sure I understood Jarrod's message from earlier this week:

1) Release candidate branch is tagged -- development continues on trunk
2) Release candidate is tested
3) Bug-fixes are back-ported to the release candidate as necessary
4) Release is made

Another version I've seen starts with:

1) Release candidate branch is tagged -- no one touches trunk except for bug-fixes

Which is it? I want to know where the docstring changes should go.
Regards Stéfan ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test
On Mon, May 19, 2008 at 1:42 PM, Robert Kern [EMAIL PROTECTED] wrote: And now fixed on the trunk. I believe that's the correct place to fix bugs for 1.1.0 at this time.

Yes, the trunk is where fixes should go.

Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/
Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py
Pearu Peterson wrote: So I beg to be flexible with f2py related commits for now.

Why not create a branch for those changes, and apply only critical bug fixes to the trunk?

cheers, David
[Numpy-discussion] Current state of the trunk and release
On Fri, May 16, 2008 at 12:20 AM, Jarrod Millman [EMAIL PROTECTED] wrote: I believe that we have now addressed everything that was holding up the 1.1.0 release, so I will be tagging the 1.1.0rc1 in about 12 hours. Please be extremely conservative and careful about any commits you make to the trunk until we officially release 1.1.0 (now may be a good time to spend some effort on SciPy). Once I tag the release candidate I will ask both David and Chris to create Windows and Mac binaries. I will give everyone a few days to test the release candidate and binaries thoroughly. If everything looks good, the release candidate will become the official release. Once I tag 1.1.0, I will open the trunk for 1.1.1 development. Any development for 1.2 will have to occur on a new branch. I also plan to spend some time once 1.1.0 is released discussing with the community what we want included in 1.2.

Since there seems to be some confusion about what should be happening and where, I wanted to clarify the situation. There is currently no branch for 1.1.x; the trunk is still officially the 1.1.x branch. I have tagged a 1.1.0rc1 off the trunk for testing. A development freeze is in effect on the trunk for a few days. I plan to release 1.1.0 officially by this Friday unless something ugly shows up. If you need to work on NumPy, feel free to create a branch: http://projects.scipy.org/scipy/numpy/wiki/MakingBranches

I know this is frustrating for some of you, but please just bear with me for a few days and help me try to get a stable 1.1.0 out as fast as possible. The only things that should be committed to the trunk now are trivial bug-fixes to specific issues found by the release candidate.
A good example of the kind of change I intended for the trunk right now is Robert Kern's fix to two tests that were incorrectly assuming little-endianness: http://projects.scipy.org/scipy/numpy/changeset/5196 In fact, this is exactly the type of thing I was hoping we might uncover by creating binaries for the release candidate.

So if you want to help get NumPy 1.1.0 out, please test the release candidate as well as the release candidate binaries. Also please use restraint in making new commits. I don't have time to police every commit, so I am asking everyone to just use their best judgment. Please be patient; the release will be out very soon.

Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/
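The changeset itself isn't reproduced in this thread, but the general pattern of an endianness assumption in a test is easy to sketch. This is a hypothetical illustration, not Robert's actual fix:

```python
import struct
import sys

value = 1

# Fragile: "=" packs in *native* byte order, so comparing against a
# little-endian byte string only passes on little-endian hosts.
native = struct.pack("=I", value)

# Portable option 1: request a fixed byte order explicitly with "<".
assert struct.pack("<I", value) == b"\x01\x00\x00\x00"

# Portable option 2: make the expectation depend on sys.byteorder.
expected = (b"\x01\x00\x00\x00" if sys.byteorder == "little"
            else b"\x00\x00\x00\x01")
assert native == expected
print("endianness-agnostic checks passed")
```

The same idea applies to numpy tests: spell out the byte order (e.g. a `'<f8'` dtype) or derive the expected bytes from the host's byte order, rather than hard-coding little-endian results.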
Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py
David Cournapeau wrote: Pearu Peterson wrote: So I beg to be flexible with f2py related commits for now. Why not create a branch for those changes, and apply only critical bug fixes to the trunk?

How do you define a critical bug? Critical to whom? f2py changes are never critical to numpy users who do not use f2py. I have stated before that I am not developing numpy.f2py any further. This also means that any changes to f2py should be essentially bug fixes. Creating a branch for bug fixes is a waste of time, imho. If somebody is willing to maintain the branch, that is, periodically sync the branch with the trunk and vice versa, then I don't mind.

Pearu
Re: [Numpy-discussion] Current state of the trunk and release
Jarrod Millman wrote: I know this is frustrating for some of you, but please just bear with me for a few days and help me try to get a stable 1.1.0 out as fast as possible. The only things that should be committed to the trunk now should be trivial bug-fixes to specific issues found by the release candidate.

Ok, that should have been made more explicit before, I think, because it was not obvious at all. I did a few commits: should I revert them?

It is too late for this release, but why didn't you create a 1.1.0 branch? That way, only the release manager has to do the work :) More seriously, I do think it is more logical to branch the trunk for a release, especially in the svn trunk/branches/tags model, and I think we should follow this model for the next releases.

cheers, David
Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py
Pearu Peterson wrote: f2py changes are never critical to numpy users who do not use f2py.

No, but they are to scipy users if f2py cannot build scipy.

I have stated before that I am not developing numpy.f2py any further. This also means that any changes to f2py should be essentially bug fixes. Creating a branch for bug fixes is a waste of time, imho.

I was speaking about creating a branch for the unit-test changes you were talking about, that is, things which could potentially break a lot of configurations. Is the new f2py available for users? If yes, you should tell f2py users to use it, and not care at all about numpy.f2py anymore, except for critical bugs. Maintaining two versions of the same software is always a PITA, so if you don't want to spend time on it, just focus on the new f2py (as long as numpy.f2py can build scipy, of course).

cheers, David
Re: [Numpy-discussion] Current state of the trunk and release
On Tue, May 20, 2008 at 12:50 AM, David Cournapeau [EMAIL PROTECTED] wrote: Ok, that should have been made more explicit before, I think, because it was not obvious at all. I did a few commits: should I revert them?

Sorry I wasn't more clear; I think we could have avoided a lot of confusion if I had done a better job explaining what I meant. Anyway, I don't think there is a need to revert any of the changes that have gone in already. I took a quick look at every change that has been made over the last few days and I didn't see anything that I believe will break anything. If anyone notices something that I have overlooked which should be removed, please let me know ASAP. Also, let's be extremely careful about committing to the trunk over the next two to three days.

I didn't branch because when I branched for 1.1.x two weeks ago: http://projects.scipy.org/scipy/numpy/changeset/5134 I ended up having to delete it a week later: http://projects.scipy.org/scipy/numpy/changeset/5163 So, I tried something different this time. I actually would prefer to follow the conventional svn trunk/branches/tags model--as long as everyone else is willing to follow it. After all, I actually voted for it before I voted against it. I will send out an email momentarily proposing something closer to this. Please respond to my next email and not this one.

Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/
[Numpy-discussion] Branching 1.1.x and starting 1.2.x development
Hello everyone,

In response to some of the recent discussion (and confusion) surrounding my plan for the trunk and 1.2 development over the next month or two, I propose the following:

1. Stay with the original plan until Thursday, May 22nd. That is, all commits to the trunk should be extremely trivial bug-fixes for issues that specifically arise from the release candidate. (I think that there is no need to revert any changes that have happened on the trunk up to this point, but let's be more conservative going forward over the next 2 days.)

2. On Thursday, May 22nd, I will create a 1.1.x branch off the trunk. I will tag 1.1.0 off the branch. The trunk will become open for development of the 1.2.x series.

Commits to the 1.1.x branch should be only relatively trivial bug-fixes. Ideally, 1.1.1 will be practically identical to 1.1.0 with only a handful of important bug-fixes. More specifically, this means that:

1. There will be no new features added to the 1.1.x series.
2. There will be only very minor documentation fixes to the 1.1.x series. Only if the documentation has something so incorrect that it is considered a bug will it be updated. In particular, most of Stefan's work will go into 1.2.x.
3. There will be only very trivial bug-fixes in the 1.1.x series. I would expect no more than, say, 10 bug-fixes in a given micro/maintenance release. This is just a rule of thumb; but given our current development size, I would prefer to see most of our effort focused on the 1.2.x series. Given that 1.2.0 will be released by the end of August, this shouldn't cause any major problems for our users. Moreover, it will help ensure that if they upgrade from 1.1.x to 1.1.x+1, there will be almost no chance that their code will break.

Commits to the trunk (1.2.x) should follow these rules:

1. Documentation fixes are allowed and strongly encouraged.
2. Bug-fixes are strongly encouraged.
3. Do not break backwards compatibility.
4. New features are permissible.
5. New tests are highly desirable.
6. If you add a new feature, it must have tests.
7. If you fix a bug, it must have tests.

If you want to break a rule, don't. If you feel you absolutely have to, please don't--but feel free to send an email to the list explaining your problem.

In addition to these rules for 1.2 development, let me remind everyone that we have already agreed that 1.2 will:

1. Use the nose testing framework.
2. Require Python 2.4 or greater. This means we have built-in decorators, set objects, generators, etc.
3. Contain some planned changes to median and histogram.

I hope this is more clear than the numerous partial emails I sent out before. Again, let me apologize for the earlier confusion. Please let me know if this is a workable plan for you. In particular, let me know if there is some aspect of this that you simply refuse to agree to, at least in principle. Also, at this point let's focus on the overall picture: namely, that (1) the trunk is currently in a relatively hard freeze and that (2) I will create a 1.1.x branch on Thursday and open the trunk for 1.2 development. The other details should be viewed as explanatory narrative that can be refined later. Unless there are major objections to this proposal, I will take some time later this week to make this information available on the wiki.

Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
2008/5/20 Jarrod Millman [EMAIL PROTECTED]: In response to some of the recent discussion (and confusion) surrounding my plan for the trunk and 1.2 development over the next month or two, I propose the following:

Thank you for the clarification, Jarrod. Your plan is sound, I'm on board.

Regards Stéfan
Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py
On Tue, May 20, 2008 12:03 pm, David Cournapeau wrote: Pearu Peterson wrote: f2py changes are never critical to numpy users who do not use f2py. No, but they are to scipy users if f2py cannot build scipy.

Well, I know pretty well what f2py features scipy uses and what could break the scipy build. So, don't worry about that.

I have stated before that I am not developing numpy.f2py any further. This also means that any changes to f2py should be essentially bug fixes. Creating a branch for bug fixes is a waste of time, imho. I was speaking about creating a branch for the unit-test changes you were talking about, that is, things which could potentially break a lot of configurations.

A branch for the unit-test changes is of course reasonable.

Is the new f2py available for users? If yes, ...

No, it is far from being usable now. The numpy.f2py and g3 f2py are completely different software. The changeset was fixing a bug in numpy.f2py; it has nothing to do with g3 f2py.

amazing-how-paranoiac-is-todays-numpy/scipy-development'ly yours, Pearu
Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py
On Mon, May 19, 2008 at 10:29 PM, Pearu Peterson [EMAIL PROTECTED] wrote: On Tue, May 20, 2008 1:26 am, Robert Kern wrote: Is this an important bugfix? If not, can you hold off until 1.1.0 is released? The patch fixes a long-standing and unreported bug in f2py - I think the bug was introduced when Python defined the min and max functions. I learned about the bug when reading a manuscript about f2py. Such bugs should not end up in a paper demonstrating f2py's inability to process certain features, as if it had not been designed to do so. So, I'd consider the bugfix important.

I have been struggling to get a stable release out since February, and every time I think that the release is almost ready, some piece of code changes that requires me to delay. While overall the code has continuously improved over this period, I think it is time to get these improvements to our users. That said, I am willing to leave this change on the trunk, but please refrain from making any more changes until we release 1.1.0. I know it can be frustrating, but, I believe, this is the first time I have asked the community not to make commits to the trunk since I started handling releases almost a year ago. The freeze has only been in effect since Saturday and will last less than one week in total. I would have preferred it if you could have made this change during any one of the other 51 weeks of the year.

Hmm, I also thought that the trunk is open for development, even though r5198 is only fixing a bug (and I do not plan to develop f2py in numpy further, just fix bugs and maintain it). If the release process is going to take weeks and lock the trunk, maybe the release candidates should live in a separate branch?

Sorry for the confusion. I had asked that everyone be extremely conservative and careful about any commits made to the trunk until we officially release 1.1.0, which is still pretty much the rule of thumb.
I have been keeping the 1.1.0 milestone page up to date with regard to my planned release date, but I should have highlighted the date in my email. The main reason that this is happening on the trunk is that about two weeks ago I created a 1.1.x branch, but I didn't think all the bug-fixes for the 1.1.0 release were being made to the branch. The branch and the trunk got out of sync enough that it was difficult for me to merge the fixes on the trunk into the branch, so I deleted the branch and declared the trunk to again be where 1.1.x development occurred.

I fully intend to release 1.1.0 by the end of the week. I also intend to create a 1.1.x maintenance branch at that point, so the trunk will be open for 1.2 development. As long as you are only going to be adding bug-fixes to numpy.f2py, I think that you should be able to use the trunk for this purpose once I create the 1.1.x branch.

Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/
Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py
On Tue, May 20, 2008 at 1:00 AM, Pearu Peterson [EMAIL PROTECTED] wrote: How do you define a critical bug? Critical to whom?

I know that the definition of a critical bug is somewhat ill-defined, but I think that a long-existing and unreported bug probably wouldn't fall into the category of critical bug.

-- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
On Tue, May 20, 2008 12:59 pm, Jarrod Millman wrote: Commits to the trunk (1.2.x) should follow these rules: ... 7. If you fix a bug, it must have tests. ... In particular, let me know if there is some aspect of this that you simply refuse to agree to, at least in principle.

Since you asked, I have a problem with rule 7 when applying it to packages like numpy.distutils and numpy.f2py, for instance. Do you realize that there exist bugs/features for which unit tests cannot be written, even in principle?

An example: say a compiler vendor changes a flag in the new version of the compiler, so that numpy.distutils is not able to detect the compiler, or uses wrong flags for the new compiler when compiling sources. Often the required fix is trivial to find and apply, and just by reading the code one can easily verify that the patch does not break anything. However, writing a unit test covering such a change would mean that one needs to ship the corresponding compiler in the unit test directory. This is nonsense, of course. I can find other similar examples that have needed attention and changes to numpy.distutils and numpy.f2py in the past, and I know that a few more are coming up.

Pearu
Re: [Numpy-discussion] [Numpy-svn] r5198 - trunk/numpy/f2py
On Tue, May 20, 2008 1:36 pm, Jarrod Millman wrote: That said, I am willing to leave this change on the trunk, but please refrain from making any more changes until we release 1.1.0. ...

Please, go ahead. I'll not commit non-critical changes until the trunk is open again.

Pearu
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
Pearu Peterson wrote: Since you asked, I have a problem with the rule 7 when applying it to packages like numpy.distutils and numpy.f2py, for instance.

Although Jarrod did not mention it, I think everybody agrees that distutils is not really testable, by its nature and design. But that's even more of a reason to be careful when changing it just before a release. I don't see why f2py would not be testable by nature, though. A big part of it is parsing Fortran, right?

Often, the required fix is trivial to find and apply, also just reading the code one can easily verify that the patch does not break anything.

IMHO, the most useful aspect of unit tests is not testing that the fix works, but being sure that if it breaks again, it will be detected (regression testing). This is especially important when refactoring.

David
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
2008/5/20 Pearu Peterson [EMAIL PROTECTED]: 7. If you fix a bug, it must have tests. Since you asked, I have a problem with the rule 7 when applying it to packages like numpy.distutils and numpy.f2py, for instance. Do you realize that there exist bugs/features for which unit tests cannot be written, even in principle? ...

My earlier message, regarding your patch to trunk, was maybe not clearly phrased. I didn't want to start a unit-testing discussion, and was simply trying to say that if we apply patches now, we should make sure they work. You did that, so I was happy. I've been the main transgressor in applying new changes to trunk; sorry about the misunderstanding, Jarrod.

As for your comment above: yes, writing unit tests is hard. As you mentioned, sometimes changes are trivial, difficult to test, and you can see that they work. If the code were already exercised in the test suite, I would be less worried about such trivial changes. Then, at least, we would know that the code is executed every time the test suite runs, so if a person forgot to close a parenthesis or a string quote, it would be picked up. The case you describe above is exceptional (but it does occur much more frequently in f2py and distutils). Still, I would not say that those tests are impossible to write, just that they require thinking outside the box.
For example, it would be quite feasible to have a set of small Python scripts that pretend to be compilers, to assist in asserting that the flags we think we pass are the ones that reach the compiler. Similarly, a fake directory tree can be created to help verify that `site.cfg` is correctly parsed and applied. You are right: some factors are out of our control, but we need to test every piece of functionality that isn't.

Regards Stéfan
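Stéfan's fake-compiler idea can be sketched roughly as follows. This is a hypothetical illustration, not part of numpy.distutils; the script and `invoke_fake_compiler` are made-up names:

```python
import os
import subprocess
import sys
import tempfile

# A stand-in "compiler": a tiny script that appends its argument list
# to a log file and exits 0, so a test can inspect the flags it was given.
FAKE_COMPILER = """\
import sys
with open(sys.argv[1], "a") as log:
    log.write(" ".join(sys.argv[2:]) + "\\n")
"""

def invoke_fake_compiler(flags):
    # Write the fake compiler to disk and call it the way a build tool would.
    with tempfile.TemporaryDirectory() as tmp:
        script = os.path.join(tmp, "fakecc.py")
        log = os.path.join(tmp, "flags.log")
        with open(script, "w") as f:
            f.write(FAKE_COMPILER)
        subprocess.check_call([sys.executable, script, log] + flags)
        with open(log) as f:
            return f.read().strip()

# Assert that the flags we think we pass are the ones that arrive.
assert invoke_fake_compiler(["-O2", "-fPIC"]) == "-O2 -fPIC"
print("fake compiler received the expected flags")
```

In a real test suite, the build machinery under test would be pointed at the fake compiler's path instead of a real `gcc`, and the log would reveal exactly which flags the detection code emitted.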
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
On Tue, May 20, 2008 at 3:47 AM, Pearu Peterson [EMAIL PROTECTED] wrote: Since you asked, I have a problem with rule 7 when applying it to packages like numpy.distutils and numpy.f2py, for instance.

I obviously knew this would be controversial. Personally, I would prefer that we boldly state that tests are required. I know that this rule will be broken occasionally and may not even always make sense. We could change the language to something like "if you fix a bug, there should be tests." Saying "must" means that you're breaking a rule when you don't. Importantly, I am not proposing that we have some new enforcement mechanism; I am happy to leave this to Stefan and whoever else wants to join him. (Thanks, Stefan--you have taken on a tough job!)

However, let's not worry too much about this at this point. Let's get 1.1.0 out. I think everyone agrees that unit tests are a good idea; some are just more passionate about them than others. We need to figure out how best to increase the number of tests, but we are doing a great job currently: NumPy 1.0.4 had around 686 tests, and the trunk now has roughly 996 tests. For now, let's officially consider rules 6 and 7 to have question marks at the end of them. Once 1.1.0 is out and we have started developing 1.2, we can start this conversation again.
Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/
Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008
Pauli Virtanen wrote: Hi,

On Sun, 2008-05-18 at 07:16 -0600, Steven H. Rogers wrote: Joe Harrington wrote: NUMPY/SCIPY DOCUMENTATION MARATHON 2008 ... 5. Write a new help function that optionally produces ASCII or points the user's PDF or HTML reader to the right page (either local or global). I can work on this. Fernando suggested this at the IPython sprint in Boulder last year, so I've given it some thought and started a wiki page: http://ipython.scipy.org/moin/Developer_Zone/SearchDocs

In Numpy SVN/1.1 there is a function lookfor that searches the docstrings for a substring (no stemming etc. is done). A similar %lookfor magic command was accepted into IPython0 as an extension, ipy_lookfor.py. Improvements to these would surely be appreciated. I think that Sphinx also supports searching, so the generated HTML docs [1] are searchable, as is the generated PDF output.

Pauli

.. [1] http://mentat.za.net/numpy/refguide/ So far, this preview contains only docs for ndarray, though.

Thanks Pauli. Looking at these. # Steve
Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008
2008/5/20 Steven H. Rogers [EMAIL PROTECTED]: .. [1] http://mentat.za.net/numpy/refguide/ So far, this preview contains only docs for ndarray, though.

The reference guide has been updated to contain the entire numpy. Once we've applied indexing tags to functions, those will be sorted in a more coherent manner. Also, the math role and directive, i.e. :math:`\int_0^\infty` and .. math:: \int_0^\infty, now render correctly. This is achieved using MathML in the XHTML files (so you need to install a MathML plugin if you use Internet Explorer). For an example, see bartlett (use the index to find it; quicksearch is currently broken).

Regards Stéfan
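For reference, the two forms Stéfan mentions look roughly like this inside a docstring or reST page (a minimal sketch; the integral is just an illustration):

```rst
Inline math uses the role: the value is :math:`\int_0^\infty e^{-x}\,dx = 1`.

Displayed math uses the directive:

.. math::

   \int_0^\infty e^{-x}\,dx = 1
```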
Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008
On Tue, 20 May 2008, Stéfan van der Walt apparently wrote: Also, the math role and directive, i.e. :math:`\int_0^\infty` and .. math:: \int_0^\infty now render correctly.

Is this being done with Jens's writers? If not, I'd like to know how.

Thank you, Alan Isaac

PS There is currently active discussion among the docutils developers about moving the math role and directive into docutils. Discovered issues could be usefully shared right now!
Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008
Hi Alan

Yes, the one discussed in this thread: http://groups.google.com/group/sphinx-dev/browse_thread/thread/ef74352b9f196002/0e257bc8c116f73f

I've only had to make one change so far, to parse '*' as in 'A^*' (patch attached). Unfortunately, the author chose incomprehensible variable names like 'mo', 'mi', and 'mn', so I'm not sure I fixed it the right way. I'd be very glad if these directives became part of docutils -- could you monitor that situation for us?

Regards Stéfan
[Numpy-discussion] data exchange format
I want to store data in a way that can be read by a C or Matlab program. Not too much data, not too complicated: a dozen or so floats, a few integers, a few strings, and a (3, x) numpy array, where x is typically around 500. I was about to create my own format for storage when it occurred to me that I might want to use XML or some other standard format. Like JSON, perhaps. Can anyone comment, especially relating to numpy implementation issues, or offer suggestions? Thanks, Gary ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
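For the simple mixed payload described above, plain JSON is one portable option (C and Matlab both have JSON parsers available as libraries). A minimal sketch -- the field names here are invented for illustration, not from the thread:

```python
import json
import numpy as np

# Hypothetical record matching the description: a few scalars,
# a string, and an (x, 3) float array.
data = {
    "gain": 1.25,
    "n_samples": 500,
    "label": "run-01",
    # ndarrays are not JSON-serializable directly; tolist() converts
    # to nested Python lists, which serialize cleanly.
    "points": np.random.rand(500, 3).tolist(),
}

with open("data.json", "w") as f:
    json.dump(data, f)

# Round-trip back to a numpy array:
with open("data.json") as f:
    loaded = json.load(f)
pts = np.array(loaded["points"])
assert pts.shape == (500, 3)
```

The trade-off versus HDF5 is that JSON stores numbers as text, so files are larger and floats may lose exact bit patterns unless care is taken.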
[Numpy-discussion] building issue in windows
Hi. I have mingw and Visual Studio installed on my computer. I am following the building instructions posted in [1]. I explicitly tell setup.py to use mingw by passing the argument --compiler=mingw32. However, setuptools is using Visual Studio anyway. Has anyone encountered this problem? I have put the mingw bin directory in my PATH, to no avail. Is there extra configuration needed for a system with both compilers installed? Thank you for your help. Igor [1] http://www.scipy.org/Installing_SciPy/Windows ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] data exchange format
PyTables is an efficient way of doing it (http://www.pytables.org). You essentially write data to an HDF5 file, which is portable and can be read in Matlab or in a C program (using the HDF5 library). Gabriel On Tue, 2008-05-20 at 09:32 -0400, Gary Pajer wrote: I want to store data in a way that can be read by a C or Matlab program. Not too much data, not too complicated: a dozen or so floats, a few integers, a few strings, and a (3, x) numpy array where typically 500 x 3. I was about to create my own format for storage when it occurred to me that I might want to use XML or some other standard format. Like JSON, perhaps. Can anyone comment, esp relating to numpy implementation issues, or offer suggestions? Thanks, Gary ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test
Hi all, On May 19, 2008, at 3:39 PM, Christopher Burns wrote: I've built a Mac binary for the 1.1 release candidate. Mac users, please test it from: https://cirl.berkeley.edu/numpy/numpy-1.1.0rc1-py2.5-macosx10.5.dmg This is for the MacPython installed from python.org. From System Profiler --- Hardware Overview: Model Name: MacBook Pro Model Identifier:MacBookPro3,1 Processor Name: Intel Core 2 Duo Running 10.5.2 Uneventful installation, tests as follows --- np.test() Numpy is installed in /Library/Frameworks/Python.framework/Versions/ 2.5/lib/python2.5/site-packages/numpy Numpy version 1.1.0rc1 Python version 2.5.2 (r252:60911, Feb 22 2008, 07:57:53) [GCC 4.0.1 (Apple Computer, Inc. build 5363)] --- skipping details --- Ran 1004 tests in 1.939s OK unittest._TextTestResult run=1004 errors=0 failures=0 - Thanks to all for the hard work. Bob Pyle ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] no mail
Can I please be removed from this mailing list ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] no mail
Can I please be removed from this mailing list There is a link at the bottom of every email for the numpy discussion list: http://projects.scipy.org/mailman/listinfo/numpy-discussion At the bottom of that page, you'll find what you're looking for, where it says To unsubscribe from Numpy-discussion ... -steve ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] first recarray steps
Hi, I'm trying to get into recarrays. Unfortunately the documentation is a bit on the short side... Let's say I have an RGB image of arbitrary size, as a normal ndarray (that's what my image reading lib gives me). Thus shape is (3,ysize,xsize), dtype = int8. How would I convert/view this as a recarray of shape (ysize, xsize) with the first dimension split up into 'r', 'g', 'b' fields? No need for 'x' and 'y' fields. I tried creating a numpy dtype {names: ('r','g','b'), formats: (numpy.int8,)*3}, but when I try raw_img.view(rgb_dtype) I get: ValueError: new type not compatible with array. Now this probably should not be too difficult, but I just don't see it... Thanks, Vincent. ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] first recarray steps
Hi Vincent 2008/5/20 Vincent Schut [EMAIL PROTECTED]: Hi, I'm trying to get into recarrays. Unfortunately the documentation is a bit on the short side... Let's say I have an RGB image of arbitrary size, as a normal ndarray (that's what my image reading lib gives me). Thus shape is (3,ysize,xsize), dtype = int8. How would I convert/view this as a recarray of shape (ysize, xsize) with the first dimension split up into 'r', 'g', 'b' fields? No need for 'x' and 'y' fields. First, you need to flatten the array so you have one (r,g,b) element per row. Say you have x with shape (3, 4, 4): x = x.T.reshape((-1,3)) Then you can view it with your new dtype: dt = np.dtype([('r',np.int8),('g',np.int8),('b',np.int8)]) x = x.view(dt) Then you must reshape it back to your original pixel arrangement: x = x.reshape((4,4)).T Or you can do it all in one go: x.T.reshape((-1,x.shape[0])).view(dt).reshape(x.shape[1:]).T Maybe someone else can come up with an easier way. Cheers Stéfan ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
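Putting those steps together, here is a self-contained sketch. It uses a hypothetical 3x4x4 test image (not from the thread), transpose(1, 2, 0) instead of .T to avoid swapping x and y, and np.ascontiguousarray to force the copy that the dtype view needs; exact view semantics may vary slightly between numpy versions:

```python
import numpy as np

# Hypothetical band-interleaved image, shape (3, ysize, xsize) as in the question
img = np.arange(48, dtype=np.uint8).reshape(3, 4, 4)

dt = np.dtype([('r', np.uint8), ('g', np.uint8), ('b', np.uint8)])

# Move the band axis to the end, make the buffer contiguous (this copies),
# view each run of three bytes as one (r, g, b) record, and reshape the
# result back to (ysize, xsize).
rec = np.ascontiguousarray(img.transpose(1, 2, 0)).view(dt).reshape(img.shape[1:])

# Each named field is now one of the original bands:
assert (rec['r'] == img[0]).all()
```

Note that because the transpose is not contiguous, the copy is unavoidable here; the recarray does not share memory with the original (3, ysize, xsize) buffer.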
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
On Tue, May 20, 2008 at 6:11 AM, Jarrod Millman [EMAIL PROTECTED] wrote: On Tue, May 20, 2008 at 3:47 AM, Pearu Peterson [EMAIL PROTECTED] wrote: On Tue, May 20, 2008 12:59 pm, Jarrod Millman wrote: Commits to the trunk (1.2.x) should follow these rules: 1. Documentation fixes are allowed and strongly encouraged. 2. Bug-fixes are strongly encouraged. 3. Do not break backwards compatibility. 4. New features are permissible. 5. New tests are highly desirable. 6. If you add a new feature, it must have tests. 7. If you fix a bug, it must have tests. If you want to break a rule, don't. If you feel you absolutely have to, please don't, but feel free to send an email to the list explaining your problem. ... In particular, let me know if there is some aspect of this that you simply refuse to agree to, at least in principle. Since you asked, I have a problem with rule 7 when applying it to packages like numpy.distutils and numpy.f2py, for instance. I obviously knew this would be controversial. Personally, I would prefer it if we boldly state that tests are required. I know that this rule will be broken occasionally and may not even always make sense. We could change the language to something like if you fix a bug, there should be tests. Saying you must means that you're breaking a rule when you don't. Importantly, I am not proposing that we have some new enforcement mechanism; I am happy to leave this to Stefan and whoever else wants to join him. (Thanks Stefan--you have taken on a tough job!) However, let's not worry too much about this at this point. Let's get 1.1.0 out. I think everyone agrees that unit tests are a good idea; some are more passionate than others. We need to figure out how best to increase the number of tests, but we are doing a great job currently. NumPy 1.0.4 had around 686 tests and the trunk now has roughly 996 tests. For now, let's officially consider rules 6 and 7 to have question marks at the end of them. 
Once 1.1.0 is out and we have started developing 1.2, we can start this conversation again. Two of the buildbots are showing problems, probably installation related, but it would be nice to see all green before the release. Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
On Tue, May 20, 2008 at 8:37 AM, Charles R Harris [EMAIL PROTECTED] wrote: Two of the buildbots are showing problems, probably installation related, but it would be nice to see all green before the release. Absolutely. Thanks for catching this. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] first recarray steps
Vincent Schut wrote: Let's say I have an RGB image of arbitrary size, as a normal ndarray (that's what my image reading lib gives me). Thus shape is (3,ysize,xsize), dtype = int8. How would I convert/view this as a recarray of shape (ysize, xsize) with the first dimension split up into 'r', 'g', 'b' fields? No need for 'x' and 'y' fields. Take a look in this list for a thread entitled recarray fun about a month ago -- you'll find some more discussion of approaches. Also, if your image data is RGB, it's usually a (width, height, 3) array: rgbrgbrgbrgb... in memory. If you have a (3, width, height) array, then that's rrr... Some image libs may give you that, I'm not sure. Also, you probably want a uint8 dtype, giving you 0-255 for each byte. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/ORR (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception [EMAIL PROTECTED] ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] question on NumPy NaN
Original Message Subject: question on NumPy NaN Date: Tue, 20 May 2008 18:03:00 +0200 From: Vasileios Gkinis [EMAIL PROTECTED] To: numpy-discussion@scipy.org Hi all, I have a question concerning nan in NumPy. Let's say I have an array of sample measurements a = array((2,4,nan)) in NumPy calculating the mean of the elements in array a looks like: a = array((2,4,nan)) a array([ 2., 4., NaN]) mean(a) nan What if I simply don't want nan to propagate and get something that would look like: a = array((2,4,nan)) a array([ 2., 4., NaN]) mean(a) 3. Cheers Vasilis -- Vasileios Gkinis, PhD Student Centre for Ice and Climate Niels Bohr Institute Juliane Maries Vej 30, room 321 DK-2100 Copenhagen Denmark Office: +45 35325913 [EMAIL PROTECTED] ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] question on NumPy NaN
2008/5/20 Vasileios Gkinis [EMAIL PROTECTED]: I have a question concerning nan in NumPy. Let's say I have an array of sample measurements a = array((2,4,nan)) in NumPy calculating the mean of the elements in array a looks like: a = array((2,4,nan)) a array([ 2., 4., NaN]) mean(a) nan What if I simply don't want nan to propagate and get something that would look like: a = array((2,4,nan)) a array([ 2., 4., NaN]) mean(a) 3. For more elaborate handling of missing data, look into masked arrays, in numpy.ma. They are designed to deal with exactly this sort of thing. Anne ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
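A short sketch of the masked-array suggestion, using the sample array from the question:

```python
import numpy as np
import numpy.ma as ma

a = np.array([2.0, 4.0, np.nan])

# masked_invalid masks NaNs and infs; reductions like mean() then
# simply skip the masked entries instead of propagating NaN.
am = ma.masked_invalid(a)
print(am.mean())  # -> 3.0
```

The mask travels with the array, so subsequent arithmetic on am also ignores the invalid entries.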
Re: [Numpy-discussion] data exchange format
On Tue, May 20, 2008 at 10:26 AM, Gabriel J.L. Beckers [EMAIL PROTECTED] wrote: PyTables is an efficient way of doing it (http://www.pytables.org). You essentially write data to a HDF5 file, which is portable and can be read in Matlab or in a C program (using the HDF5 library). Gabriel I thought about that. It seems to have much more than I need, so I wonder if it's got more overhead / less speed / more complex API than I need. But big isn't necessarily bad, but it might be. Is pytables overkill? On Tue, 2008-05-20 at 09:32 -0400, Gary Pajer wrote: I want to store data in a way that can be read by a C or Matlab program. Not too much data, not too complicated: a dozen or so floats, a few integers, a few strings, and a (3, x) numpy array where typically 500 x 3. I was about to create my own format for storage when it occurred to me that I might want to use XML or some other standard format. Like JSON, perhaps. Can anyone comment, esp relating to numpy implementation issues, or offer suggestions? Thanks, Gary ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
On Tue, May 20, 2008 at 9:48 AM, Jarrod Millman [EMAIL PROTECTED] wrote: On Tue, May 20, 2008 at 8:37 AM, Charles R Harris [EMAIL PROTECTED] wrote: Two of the buildbots are showing problems, probably installation related, but it would be nice to see all green before the release. Absolutely. Thanks for catching this. It would be good if we could find a PPC to add to the buildbot in order to catch endianness problems. SPARC might also do for this. Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
2008/5/20 Charles R Harris [EMAIL PROTECTED]: Two of the buildbots are showing problems, probably installation related, but it would be nice to see all green before the release. Thanks, fixed in SVN. Regards Stéfan ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] question on NumPy NaN
On Tue, May 20, 2008 at 9:11 AM, Anne Archibald [EMAIL PROTECTED] wrote: 2008/5/20 Vasileios Gkinis [EMAIL PROTECTED]: I have a question concerning nan in NumPy. Lets say i have an array of sample measurements a = array((2,4,nan)) in NumPy calculating the mean of the elements in array a looks like: a = array((2,4,nan)) a array([ 2., 4., NaN]) mean(a) nan What if i simply dont want nan to propagate and get something that would look like: a = array((2,4,nan)) a array([ 2., 4., NaN]) mean(a) 3. For more elaborate handling of missing data, look into masked arrays, in numpy.ma. They are designed to deal with exactly this sort of thing. Or np.nansum(a) / np.isfinite(a).sum() A nanmean would be nice to have in numpy. ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
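The one-liner above, wrapped as the nanmean helper the message wishes for (later numpy releases did grow np.nanmean; this sketch just mirrors the suggestion in the thread):

```python
import numpy as np

def nanmean(x):
    # nansum treats NaN as zero; divide by the count of finite entries
    # to get the mean over the valid data only.
    return np.nansum(x) / np.isfinite(x).sum()

a = np.array([2.0, 4.0, np.nan])
print(nanmean(a))  # -> 3.0
```

Note that this raises a ZeroDivisionError-style warning (division by zero) if every entry is NaN, which a production version would want to handle.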
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
2008/5/20 Charles R Harris [EMAIL PROTECTED]: On Tue, May 20, 2008 at 9:48 AM, Jarrod Millman [EMAIL PROTECTED] wrote: On Tue, May 20, 2008 at 8:37 AM, Charles R Harris [EMAIL PROTECTED] wrote: Two of the buildbots are showing problems, probably installation related, but it would be nice to see all green before the release. Absolutely. Thanks for catching this. It would be good if we could find a PPC to add to the buildbot in order to catch endianess problems. SPARC might also do for this. Absolutely! If anybody has access to such a machine and is willing to let a build-slave run on it, please let me know and I'll send you the necessary configuration files. Regards Stéfan ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
Stéfan van der Walt wrote: 2008/5/20 Charles R Harris [EMAIL PROTECTED]: On Tue, May 20, 2008 at 9:48 AM, Jarrod Millman [EMAIL PROTECTED] wrote: On Tue, May 20, 2008 at 8:37 AM, Charles R Harris [EMAIL PROTECTED] wrote: Two of the buildbots are showing problems, probably installation related, but it would be nice to see all green before the release. Absolutely. Thanks for catching this. It would be good if we could find a PPC to add to the buildbot in order to catch endianess problems. SPARC might also do for this. Absolutely! If anybody has access to such a machine and is willing to let a build-slave run on it, please let me know and I'll send you the necessary configuration files. Hi Stefan, I got access to Solaris 9/Sparc and Solaris 10/Sparc and am certainly willing to help out. Cheers, Michael ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
On Tue, May 20, 2008 at 10:30 AM, Stéfan van der Walt [EMAIL PROTECTED] wrote: 2008/5/20 Charles R Harris [EMAIL PROTECTED]: On Tue, May 20, 2008 at 9:48 AM, Jarrod Millman [EMAIL PROTECTED] wrote: On Tue, May 20, 2008 at 8:37 AM, Charles R Harris [EMAIL PROTECTED] wrote: Two of the buildbots are showing problems, probably installation related, but it would be nice to see all green before the release. Absolutely. Thanks for catching this. It would be good if we could find a PPC to add to the buildbot in order to catch endianess problems. SPARC might also do for this. Absolutely! If anybody has access to such a machine and is willing to let a build-slave run on it, please let me know and I'll send you the necessary configuration files. The current Mac machine seems to have a configuration problem. If anyone out there has an SGI machine we could use that too. I heart the buildbot. Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] data exchange format
On Tue, May 20, 2008 at 10:11 AM, Gary Pajer [EMAIL PROTECTED] wrote: On Tue, May 20, 2008 at 10:26 AM, Gabriel J.L. Beckers [EMAIL PROTECTED] wrote: PyTables is an efficient way of doing it (http://www.pytables.org). You essentially write data to a HDF5 file, which is portable and can be read in Matlab or in a C program (using the HDF5 library). Gabriel I thought about that. It seems to have much more than I need, so I wonder if it's got more overhead / less speed / more complex API than I need. But big isn't necessarily bad, but it might be. Is pytables overkill? PyTables is a nice bit of software and is worth getting familiar with if you want portable data. It will solve issues of endianness, annotation, and data organization, which can all be important, especially if your data sits around for a while and you forget exactly what's in it. Both Matlab and IDL support reading HDF5 files. Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] numpy.save bug on solaris x86 w/ nans and objects
I have a record array w/ dates (O4) and floats. If some of these floats are NaN, np.save crashes (on my solaris platform but not on a linux machine I tested on). Here is the code that produces the bug: In [1]: pwd Out[1]: '/home/titan/johnh/python/svn/matplotlib/matplotlib/examples/data' In [2]: import matplotlib.mlab as mlab In [3]: import numpy as np In [4]: r = mlab.csv2rec('aapl.csv') In [5]: r.dtype Out[5]: dtype([('date', '|O4'), ('open', 'f8'), ('high', 'f8'), ('low', 'f8'), ('close', 'f8'), ('volume', 'i4'), ('adj_close', 'f8')]) In [6]: r.close[100:] = np.nan In [7]: r.close Out[7]: array([ 124.63, 127.46, 129.4 , ..., NaN, NaN, NaN]) In [8]: np.save('mydata.npy', r) Traceback (most recent call last): File ipython console, line 1, in ? File /home/titan/johnh/dev/lib/python2.4/site-packages/numpy/lib/io.py, line 158, in save format.write_array(fid, arr) File /home/titan/johnh/dev/lib/python2.4/site-packages/numpy/lib/format.py, line 272, in write_array cPickle.dump(array, fp, protocol=2) SystemError: frexp() result out of range In [9]: np.__version__ Out[9]: '1.2.0.dev5136' In [10]: !uname -a SunOS flag 5.10 Generic_118855-15 i86pc i386 i86pc ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test
Reminder to please test the installer. We already discovered a couple endian bugs on PPC, which is good, but we'd like to verify the release candidate on several more machines before the 1.1.0 tag on Thursday. It only takes a few minutes and you get the added bonus of having a current install of numpy. :) Thank you, Chris On Tue, May 20, 2008 at 7:27 AM, Robert Pyle [EMAIL PROTECTED] wrote: Hi all, On May 19, 2008, at 3:39 PM, Christopher Burns wrote: I've built a Mac binary for the 1.1 release candidate. Mac users, please test it from: https://cirl.berkeley.edu/numpy/numpy-1.1.0rc1-py2.5-macosx10.5.dmg This is for the MacPython installed from python.org. From System Profiler --- Hardware Overview: Model Name: MacBook Pro Model Identifier:MacBookPro3,1 Processor Name: Intel Core 2 Duo Running 10.5.2 Uneventful installation, tests as follows --- np.test() Numpy is installed in /Library/Frameworks/Python.framework/Versions/ 2.5/lib/python2.5/site-packages/numpy Numpy version 1.1.0rc1 Python version 2.5.2 (r252:60911, Feb 22 2008, 07:57:53) [GCC 4.0.1 (Apple Computer, Inc. build 5363)] --- skipping details --- Ran 1004 tests in 1.939s OK unittest._TextTestResult run=1004 errors=0 failures=0 - Thanks to all for the hard work. Bob Pyle ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion -- Christopher Burns Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] data exchange format
I am not exactly an expert on data storage, but I use PyTables a lot for all kinds of scientific data sets and am very happy with it. Indeed it has many advanced capabilities, so it may seem overkill at first glance. But for simple tasks such as the one you describe the API is simple; indeed I also use it for small data sets because it is such a quick way of storing data portably. Regarding speed and overhead: I don't know in general what the penalties or gains are for very small files. On my system an empty file is 1032 bytes, and if I fill it with an array of 3 by 3 random float64's it is 723080. Not so bad. Just try it out yourself:

import numpy, tables
ta = numpy.random.random((3,3))
f = tables.openFile('test.h5', 'w')
f.createArray('/', 'testarray', ta)
f.close()

With most real data the file size can be smaller because you have the option of enabling compression. But I must admit that I haven't tried reading HDF5 in Matlab or C (and never will); I know it is possible, but I don't know how difficult it is. Cheers, Gabriel On Tue, 2008-05-20 at 12:11 -0400, Gary Pajer wrote: On Tue, May 20, 2008 at 10:26 AM, Gabriel J.L. Beckers [EMAIL PROTECTED] wrote: PyTables is an efficient way of doing it (http://www.pytables.org). You essentially write data to a HDF5 file, which is portable and can be read in Matlab or in a C program (using the HDF5 library). Gabriel I thought about that. It seems to have much more than I need, so I wonder if it's got more overhead / less speed / more complex API than I need. But big isn't necessarily bad, but it might be. Is pytables overkill? On Tue, 2008-05-20 at 09:32 -0400, Gary Pajer wrote: I want to store data in a way that can be read by a C or Matlab program. Not too much data, not too complicated: a dozen or so floats, a few integers, a few strings, and a (3, x) numpy array where typically 500 x 3. I was about to create my own format for storage when it occurred to me that I might want to use XML or some other standard format. 
Like JSON, perhaps. Can anyone comment, esp relating to numpy implementation issues, or offer suggestions? Thanks, Gary ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
2008/5/20 Charles R Harris [EMAIL PROTECTED]: The current Mac machine seems to have a configuration problem. If anyone out there has an SGI machine we could use that too. I heart the buildbot. I mailed Barry, he'll reset it for us. I'm glad the buildbot makes you happy, Charles :) Regards Stéfan ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] numpy.save bug on solaris x86 w/ nans and objects
On Tue, May 20, 2008 at 10:57 AM, John Hunter [EMAIL PROTECTED] wrote: I have a record array w/ dates (O4) and floats. If some of these floats are NaN, np.save crashes (on my solaris platform but not on a linux machine I tested on). Here is the code that produces the bug: In [1]: pwd Out[1]: '/home/titan/johnh/python/svn/matplotlib/matplotlib/examples/data' In [2]: import matplotlib.mlab as mlab In [3]: import numpy as np In [4]: r = mlab.csv2rec('aapl.csv') In [5]: r.dtype Out[5]: dtype([('date', '|O4'), ('open', 'f8'), ('high', 'f8'), ('low', 'f8'), ('close', 'f8'), ('volume', 'i4'), ('adj_close', 'f8')]) In [6]: r.close[100:] = np.nan In [7]: r.close Out[7]: array([ 124.63, 127.46, 129.4 , ..., NaN, NaN, NaN]) In [8]: np.save('mydata.npy', r) Traceback (most recent call last): File ipython console, line 1, in ? File /home/titan/johnh/dev/lib/python2.4/site-packages/numpy/lib/io.py, line 158, in save format.write_array(fid, arr) File /home/titan/johnh/dev/lib/python2.4/site-packages/numpy/lib/format.py, line 272, in write_array cPickle.dump(array, fp, protocol=2) SystemError: frexp() result out of range In [9]: np.__version__ Out[9]: '1.2.0.dev5136' In [10]: !uname -a SunOS flag 5.10 Generic_118855-15 i86pc i386 i86pc Looks like we need to add a test for this before release. But I'm off to work. Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] data exchange format
On May 20, 2008, at 6:11 PM, Gary Pajer wrote: I thought about that. It seems to have much more than I need, so I wonder if it's got more overhead / less speed / more complex API than I need. But big isn't necessarily bad, but it might be. Is pytables overkill? I use netCDF (netCDF4, which is built on the HDF5 libraries). NetCDF is good for large, gridded datasets, where the grid does not change in time. For my datasets (numerical ocean models), this format is perfect. HDF is more general and flexible, but a bit more complex. Take a look at the netcdf4-python.googlecode.com package, if you are interested. -Rob ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008
I would like to help, but it's not clear to me exactly how to do that from the wiki. What are the steps? -Rob ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] numpy.save bug on solaris x86 w/ nans and objects
On Tue, May 20, 2008 at 12:13 PM, Charles R Harris [EMAIL PROTECTED] wrote: Looks like we need to add a test for this before release. But I'm off to work. Here's a simpler example in case you want to wrap it in a test harness:

import datetime
import numpy as np

r = np.rec.fromarrays([
    [datetime.date(2007,1,1), datetime.date(2007,1,2), datetime.date(2007,1,2)],
    [.1, .2, np.nan],
], names='date,value')
np.save('mytest.npy', r)

___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008
Hi Rob Which of the instructions are not clear? We'd like to make this as accessible as possible. In order to start editing, you need to complete step 5, which is to register on the wiki and send us your UserName. Regards Stéfan 2008/5/20 Rob Hetland [EMAIL PROTECTED]: I would like to help, but it's not clear to me exactly how to do that from the wiki. What are the steps? -Rob ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
Charles R Harris wrote: Absolutely! If anybody has access to such a machine and is willing to let a build-slave run on it, please let me know and I'll send you the necessary configuration files. The current Mac machine seems to have a configuration problem. I've got a PPC Mac that I may be able to use -- do any particular ports need to be open? We've got pretty draconian firewall rules. -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/ORR (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception [EMAIL PROTECTED] ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] dimension alignment
Hi all, just a simple question regarding the alignment of dimensions: given a 3d array a = numpy.array([[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]],[[13,14,15],[16,17,18]],[[19,20,21],[22,23,24]]]) a.shape returns (4,2,3), so I assume the first digit is the 3rd dimension, the second is the 2nd dim, and the third is the first. How is the data aligned in memory now? According to the strides it should be 1,2,3,4,5,6,7,8,9,10,... right? If I had an array of more dimensions, the first digit returned by shape should always be the highest dim. Feel free to confirm / correct my assumptions. Best, Thomas ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008
Joe Harrington wrote: NUMPY/SCIPY DOCUMENTATION MARATHON 2008 On the wiki it says: Writers should be fluent in English In case someone is working on the dynamic docstring magic, is this a good moment to mention internationalisation and world domination in the same sentence? -Jon ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] dimension alignment
On Tue, May 20, 2008 at 11:47 AM, Thomas Hrabe [EMAIL PROTECTED] wrote: Hi all, just a simple question regarding the alignment of dimensions: given a 3d array a = numpy.array([[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]],[[13,14,15],[16,17,18]],[[19,20,21],[22,23,24]]]) a.shape returns (4,2,3) so I assume the first digit is the 3rd dimension, second is 2nd dim and third is the first. Only if you count from the right. I would call the first digit the first dimension. how is the data aligned in memory now? according to the strides it should be 1,2,3,4,5,6,7,8,9,10,... right? Like a C array: contiguous, with the rightmost dimension varying fastest. if I had an array of more dimensions, the first digit returned by shape should always be the highest dim. Yes, although "first" is less ambiguous than "highest". Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test
Powerbook G4 with 10.5.2 and Activestate Python 2.5.1.1, no problems beyond the two endian test failures Cheers Tommy On May 20, 2008, at 12:57 PM, Christopher Burns wrote: Reminder to please test the installer. We already discovered a couple endian bugs on PPC, which is good, but we'd like to verify the release candidate on several more machines before the 1.1.0 tag on Thursday. It only takes a few minutes and you get the added bonus of having a current install of numpy. :) Thank you, Chris ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] dimension aligment
2008/5/20 Thomas Hrabe [EMAIL PROTECTED]: given a 3d array a = numpy.array([[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]],[[13,14,15],[16,17,18]],[[19,20,21],[22,23,24]]]) a.shape returns (4,2,3) so I assume the first digit is the 3rd dimension, second is 2nd dim and third is the first. how is the data aligned in memory now? according to the strides it should be 1,2,3,4,5,6,7,8,9,10,... right? if I had an array of more dimensions, the first digit returned by shape should always be the highest dim. You are basically right, but this is a surprisingly subtle issue for numpy. A numpy array is basically a block of memory and some description. One piece of that description is the type of data it contains (i.e., how to interpret each chunk of memory) for example int32, float64, etc. Another is the sizes of all the various dimensions. A third piece, which makes many of the things numpy does possible, is the strides. The way numpy works is that basically it translates A[i,j,k] into a lookup of the item in the memory block at position i*strides[0]+j*strides[1]+k*strides[2] This means, if you have an array A and you want every second element (A[::2]), all numpy needs to do is hand you back a new array pointing to the same data block, but with strides[0] doubled. Similarly if you want to transpose a two-dimensional array, all it needs to do is exchange strides[0] and strides[1]; no data need be moved. This means, though, that if you are handed a numpy array, the elements can be arranged in memory in quite a complicated fashion. Sometimes this is no problem - you can always use the strides to find it all. But sometimes you need the data arranged in a particular way. numpy defines two particular ways: C contiguous and FORTRAN contiguous. C contiguous arrays are what you describe, and they're what numpy produces by default; they are arranged so that the rightmost index has the smallest stride. 
FORTRAN contiguous arrays are arranged the other way around; the leftmost index has the smallest stride. (This is how FORTRAN arrays are arranged in memory.) There is also a special case: the reshape() function changes the shape of the array. It has an order argument that describes not how the elements are arranged in memory but how you want to think of the elements as arranged in memory for the reshape operation. Anne ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
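Anne's description can be verified with a short sketch in plain numpy: the byte-offset formula i*strides[0]+j*strides[1]+k*strides[2], the doubled stride of A[::2], the stride swap of a transpose, and the FORTRAN layout are all visible through the `.strides` attribute:

```python
import numpy as np

a = np.zeros((4, 2, 3), dtype=np.int32)  # C contiguous by default
itemsize = a.itemsize                    # 4 bytes per int32

# Strides in elements: A[i, j, k] lives at offset i*6 + j*3 + k*1.
print([s // itemsize for s in a.strides])   # [6, 3, 1]

# Every-second-row view: same memory block, first stride doubled.
b = a[::2]
print(b.strides[0] == 2 * a.strides[0])     # True

# Transposing a 2-D array just swaps the strides; no data moves.
m = np.arange(6).reshape(2, 3)
print(m.T.strides == m.strides[::-1])       # True

# FORTRAN contiguous: the leftmost index has the smallest stride.
f = np.asfortranarray(a)
print([s // itemsize for s in f.strides])   # [1, 4, 8]
```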
Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008
On May 20, 2008, at 7:30 PM, Stéfan van der Walt wrote: ...and send us your UserName. This is the part I skipped over... I registered, and wondered why everything was not editable. -Rob Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008
On May 20, 2008, at 7:30 PM, Stéfan van der Walt wrote: and send us your UserName. Oh, and my username is RobHetland -Rob Rob Hetland, Associate Professor Dept. of Oceanography, Texas A&M University http://pong.tamu.edu/~rob phone: 979-458-0096, fax: 979-845-6331 ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test
Great! I'm glad to see we have several PPC's in testing also. On Tue, May 20, 2008 at 10:42 AM, Christopher Barker [EMAIL PROTECTED] wrote: Christopher Burns wrote: Reminder to please test the installer. Dual G5 PPC mac, OS-X 10.4.11 python2.5 from python.org We already discovered a couple endian bugs on PPC, Got those same bugs, otherwise no issues. -Chris ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test
Hey Tommy, Does ActiveState install python in the same location as python.org? [EMAIL PROTECTED] 10:35:05 $ which python /Library/Frameworks/Python.framework/Versions/Current/bin/python On Tue, May 20, 2008 at 11:04 AM, Tommy Grav [EMAIL PROTECTED] wrote: Powerbook G4 with 10.5.2 and Activestate Python 2.5.1.1, no problems beyond the two endian test failures Cheers Tommy ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test
Yes it does put python in that location as it should ;o) Cheers Tommy On May 20, 2008, at 2:20 PM, Christopher Burns wrote: Hey Tommy, Does ActiveState install python in the same location as python.org? [EMAIL PROTECTED] 10:35:05 $ which python /Library/Frameworks/Python.framework/Versions/Current/bin/python On Tue, May 20, 2008 at 11:04 AM, Tommy Grav [EMAIL PROTECTED] wrote: Powerbook G4 with 10.5.2 and Activestate Python 2.5.1.1, no problems beyond the two endian test failures Cheers Tommy ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test
Good to know. Thanks! On Tue, May 20, 2008 at 11:28 AM, Tommy Grav [EMAIL PROTECTED] wrote: Yes it does put python in that location as it should ;o) Cheers Tommy On May 20, 2008, at 2:20 PM, Christopher Burns wrote: Hey Tommy, Does ActiveState install python in the same location as python.org? [EMAIL PROTECTED] 10:35:05 $ which python /Library/Frameworks/Python.framework/Versions/Current/bin/python On Tue, May 20, 2008 at 11:04 AM, Tommy Grav [EMAIL PROTECTED] wrote: Powerbook G4 with 10.5.2 and Activestate Python 2.5.1.1, no problems beyond the two endian test failures Cheers Tommy -- Christopher Burns Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] 1.1.0rc1, Win32 Installer: please test it
Hi, No installation or test errors on my AMD Athlon XP 2100 running Win XP and Python 2.5 Bruce On Mon, May 19, 2008 at 9:15 PM, David Cournapeau [EMAIL PROTECTED] wrote: Hi, Sorry for the delay, but it is now ready: numpy superpack installers for numpy 1.1.0rc1: http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy-1.1.0rc1-win32-superpack-python2.5.exe http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy-1.1.0rc1-win32-superpack-python2.4.exe (Python 2.4 binaries are not there yet). This binary should work on any (32 bits) CPU on windows, and in particular should solve the recurring problem of segfault/hangs on older CPU with previous binary releases. I used a fairly heavy compression scheme (lzma), because it cut the size ~ 30 %. If it is a problem, please let me know, cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] embedded PyArray_FromDimsAndData Segmentation Fault
After all, I figured out how to create a numpy array in C with the help below. If called from C, import_array() (actually _import_array()) successfully creates all the instances needed for the array. However, once I call this function from another environment such as Matlab, PyObject *numpy = PyImport_ImportModule(numpy.core.multiarray); in __import_array() returns NULL, because numpy.core.multiarray is not found? Do you think it might depend on the path settings? As I said, the code works fine from plain C but it's odd from within Matlab. Best, Thomas -Original Message- From: [EMAIL PROTECTED] on behalf of Robert Kern Sent: Wed 5/14/2008 5:12 PM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] embedded PyArray_FromDimsAndData Segmentation Fault On Wed, May 14, 2008 at 6:40 PM, Thomas Hrabe [EMAIL PROTECTED] wrote: I didn't know a person could write a stand-alone program using NumPy this way (can you?) Well, this is possible when you embed python and use the simple objects such as ints, strings, Why should it be impossible to do it for numpy then? numpy exposes its API as a pointer to an array which contains function pointers. import_array() imports the extension module, accesses the PyCObject that contains this pointer, and sets a global pointer appropriately. There are #define macros to emulate the functions by dereferencing the appropriate element of the array and calling it with the given macro arguments. The reason you get the error about returning nothing when the return type of main() is declared int is because this macro is only intended to work inside of an initmodule() function of an extension module, whose return type is void. import_array() includes error handling logic and will return if there is an error. You get the segfault without import_array() because all of the functions you try to call are trying to dereference an array which has not been initialized. 
My plan is to send multidimensional arrays from C to python and to apply some python specific functions to them. Well, first you need to call Py_Initialize() to start the VM. Otherwise, you can't import numpy to begin with. I guess you could write a void load_numpy(void) function which just exists to call import_array(). Just be sure to check the exception state appropriately after it returns. But for the most part, it's much better to drive your C code using Python than the other way around. -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008
On Tue, 2008-05-20 at 20:06 +0200, Rob Hetland wrote: On May 20, 2008, at 7:30 PM, Stéfan van der Walt wrote: and send us your UserName. Oh, and my username is RobHetland You're in now. Regards, Pauli ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
Can we close ticket 605, Incorrect Behavior of Numpy Histogram? http://projects.scipy.org/scipy/numpy/ticket/605 Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
On Tue, May 20, 2008 at 12:10 PM, Charles R Harris [EMAIL PROTECTED] wrote: Can we close ticket 605, Incorrect Behavior of Numpy Histogram? http://projects.scipy.org/scipy/numpy/ticket/605 Yes, but we need to create a new ticket for 1.2 detailing the planned changes. If no one else gets to it, I will do so tonight. -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] Squeak, squeak. Trac mailing list still broken.
Robert, The dead mailer is a PITA and becoming a major bottleneck to bugfixing/development. So I am putting on my Nomex suit and wading into the fires of your irritation to raise the issue once more. Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Quick Question about Optimization
The time has now been shaved down to ~9 seconds with this suggestion from the original 13-14s, with the inclusion of Eric Firing's suggestions. This is without scipy.weave, which at the moment I can't get to work for all lines, and when I just replace one of them successfully it seems to run more slowly, I assume because it is converting data back and forth. There's a nontrivial constant time cost to using scipy.weave.blitz, but it isn't copying the data. Thus it will slow you down on smaller arrays, but you'll notice quite a speed-up on much larger ones. I should have mentioned that earlier -- I assumed your arrays were really large. Are there any major pitfalls to be aware of? It sounds like if I do: f = a[n,:] I get a reference, but if I did something like g = a[n,:]*2 I would get a copy. Well, if you do f = a[n, :], you would get a view, another object that shares the data in memory with a but is a separate object. If anyone has any clues on why scipy.weave is blowing up (http://pastebin.com/m79699c04) using the code I attached, I wouldn't mind knowing. Most of the times I've attempted using weave, I've been a little baffled about why things aren't working. I also don't have a sense for whether all numpy functions should be weavable or if it's just general array operations that can be put through weave. Sorry I didn't get back to you earlier on this -- I was a bit busy yesterday. It looks like weave.blitz isn't working on your second line because you're not explicitly putting slices in some of the dimensions. In numpy v[0:2] works for 1, 2, 3, or 4 dimensions, but for a 2d array in blitz you have to use v[0:2,:], and for 3d v[0:2,:,:]. It's a bit more picky. I think that's the problem with your second line -- try replacing v[:] with v[0,:] and theta[1-curidx] with theta[1-curidx, :]. (I may have missed some others.) weave.blitz is currently limited to just array operations... it doesn't really support the numpy functions. 
Hope that helps a little -- Hoyt -- +++ Hoyt Koepke UBC Department of Computer Science http://www.cs.ubc.ca/~hoytak/ [EMAIL PROTECTED] +++ ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
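weave.blitz may not be installed everywhere, but Hoyt's slicing point is easy to illustrate in plain numpy: the fully explicit slices that blitz insists on are equivalent to numpy's implicit ones, so spelling them out never changes the result. (The name `theta` here just echoes the thread; the values are placeholders.)

```python
import numpy as np

theta = np.arange(12.0).reshape(2, 6)

# numpy lets trailing axes be implicit...
implicit = theta[0:2]
# ...but weave.blitz needs every axis spelled out:
explicit = theta[0:2, :]
print(np.array_equal(implicit, explicit))   # True

# Likewise a single row must be written v[0, :] rather than v[0]
# before handing the expression to blitz.
print(np.array_equal(theta[0, :], theta[0]))  # True
```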
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
On Tue, May 20, 2008 at 1:16 PM, Jarrod Millman [EMAIL PROTECTED] wrote: On Tue, May 20, 2008 at 12:10 PM, Charles R Harris [EMAIL PROTECTED] wrote: Can we close ticket 605, Incorrect Behavior of Numpy Histogram? http://projects.scipy.org/scipy/numpy/ticket/605 Yes, but we need to create a new ticket for 1.2 detailing the planned changes. If no one else gets to it, I will do so tonight. How about ticket 770? Did Robert's endianness fixes cover that? Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
On Tue, May 20, 2008 at 2:26 PM, Charles R Harris [EMAIL PROTECTED] wrote: How about ticket 770? Did Roberts endianess fixes cover that? Yes. -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] 1.1.0rc1, Win32 Installer: please test it
David Cournapeau david at ar.media.kyoto-u.ac.jp writes: Hi, Sorry for the delay, but it is now ready: numpy superpack installers for numpy 1.1.0rc1: http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy-1.1.0rc1-win32-superpack-python2.5.exe http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy-1.1.0rc1-win32-superpack-python2.4.exe (Python 2.4 binaries are not there yet). This binary should work on any (32 bits) CPU on windows, and in particular should solve the recurring problem of segfault/hangs on older CPU with previous binary releases. I used a fairly heavy compression scheme (lzma), because it cut the size ~ 30 %. If it is a problem, please let me know, cheers, David installed fine and all tests ran successfully on my machine. Machine specs: - Vista 32 bit - Python 2.5 - Intel Core 2 duo 2ghz ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] embedded PyArray_FromDimsAndData Segmentation Fault
That's what PyErr_Print() prints. Python is initialised, for sure: Traceback (most recent call last): File /usr/global/python32/lib/python2.4/site-packages/numpy/__init__.py, line 34, in ? import testing File /usr/global/python32/lib/python2.4/site-packages/numpy/testing/__init__.py, line 3, in ? from numpytest import * File /usr/global/python32/lib/python2.4/site-packages/numpy/testing/numpytest.py, line 8, in ? import unittest File /usr/global/python32/lib/python2.4/unittest.py, line 51, in ? import time ImportError: /usr/global/python32/lib/python2.4/lib-dynload/time.so: undefined symbol: PyExc_IOError why does it work in C, but not in C started from within Matlab? -Original Message- From: [EMAIL PROTECTED] on behalf of Robert Kern Sent: Tue 5/20/2008 12:23 PM To: Discussion of Numerical Python Subject: Re: [Numpy-discussion] embedded PyArray_FromDimsAndData Segmentation Fault On Tue, May 20, 2008 at 2:01 PM, Thomas Hrabe [EMAIL PROTECTED] wrote: After all, I figured how to create an numpy in C with the help below. If called in C, import_array() but actually _import_array() successfully creates all the instances needed for the array. However, once I call this function from another environment such as Matlab, PyObject *numpy = PyImport_ImportModule(numpy.core.multiarray); in __import_array() returns NULL, because numpy.core.multiarray is not found? Something like that. Call PyErr_Print() to display the full traceback so you can find out what the actual problem is. Do you think it might depend on the path settings? You've called Py_Initialize() before you do anything else with Py* functions, right? -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. 
-- Umberto Eco ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] embedded PyArray_FromDimsAndData Segmentation Fault
On Tue, May 20, 2008 at 3:36 PM, Thomas Hrabe [EMAIL PROTECTED] wrote: Thats what PyErr_Print() prints. Python is initialised, for sure Traceback (most recent call last): File /usr/global/python32/lib/python2.4/site-packages/numpy/__init__.py, lin e 34, in ? import testing File /usr/global/python32/lib/python2.4/site-packages/numpy/testing/__init__. py, line 3, in ? from numpytest import * File /usr/global/python32/lib/python2.4/site-packages/numpy/testing/numpytest .py, line 8, in ? import unittest File /usr/global/python32/lib/python2.4/unittest.py, line 51, in ? import time ImportError: /usr/global/python32/lib/python2.4/lib-dynload/time.so: undefined s ymbol: PyExc_IOErro why does it work in C and not in C started within Matlab? It depends on how you linked everything. Presumably, you linked in libpython24 for the program but not the MEX. -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] ANN: NumPy/SciPy Documentation Marathon 2008
2008/5/20 Jonathan Wright [EMAIL PROTECTED]: Joe Harrington wrote: NUMPY/SCIPY DOCUMENTATION MARATHON 2008 On the wiki it says: Writers should be fluent in English In case someone is working on the dynamic docstring magic, is this a good moment to mention internationalisation and world domination in the same sentence? I think we'll stick to English for now (I don't think I have the motivation to do an Afrikaans translation!). As for internationali(s/z)ation, we'll see who writes the most docstrings. In a fortuitous twist of events, I find myself able to read American as well :) Cheers Stéfan ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Squeak, squeak. Trac mailing list still broken.
Hey Chuck, I was able to create a ticket just now and received a notification email about it. Can you try modifying or updating a ticket and see if you get notifications? -Peter On May 20, 2008, at 2:19 PM, Charles R Harris wrote: Robert, The dead mailer is a PITA and becoming a major bottleneck to bugfixing/development. So I am putting on my Romex suit and wading into the fires of your irritation to raise the issue once more. Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
[Numpy-discussion] URGENT: Re: 1.1.0rc1, Win32 Installer: please test it
On Mon, May 19, 2008 at 7:15 PM, David Cournapeau [EMAIL PROTECTED] wrote: Sorry for the delay, but it is now ready: numpy superpack installers for numpy 1.1.0rc1: http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy-1.1.0rc1-win32-superpack-python2.5.exe http://www.ar.media.kyoto-u.ac.jp/members/david/archives/numpy-1.1.0rc1-win32-superpack-python2.4.exe (Python 2.4 binaries are not there yet). This binary should work on any (32 bits) CPU on windows, and in particular should solve the recurring problem of segfault/hangs on older CPU with previous binary releases. Hello, Please test the Windows binaries. So far I have only seen two testers. I can't tag the release until I know that our binary installers work on a wide variety of Windows machines. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] 1.1.0rc1 OSX Installer - please test
On Mon, May 19, 2008 at 12:39 PM, Christopher Burns [EMAIL PROTECTED] wrote: I've built a Mac binary for the 1.1 release candidate. Mac users, please test it from: https://cirl.berkeley.edu/numpy/numpy-1.1.0rc1-py2.5-macosx10.5.dmg This is for the MacPython installed from python.org. Hello, Please test the Mac binaries. I can't tag the release until I know that our binary installers work on a wide variety of Mac machines. Thanks, -- Jarrod Millman Computational Infrastructure for Research Labs 10 Giannini Hall, UC Berkeley phone: 510.643.4014 http://cirl.berkeley.edu/ ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Quick Question about Optimization
Well, if you do f = a[n, :], you would get a view, another object that shares the data in memory with a but is a separate object. OK, so it is a new object, with the properties of the slice it references, but if I write anything to it, it will consistently go back to the same spot in the original array. In general, if I work on that, and don't do something that allocates a new set of memory locations for that, it will reference the same memory location. If I do: a = np.zeros((20,30)) b = a[2,:] b += 1 # will add 1 to the original slice b.resize will fail... b = np.zeros((1,30)) # allocates new memory and disconnects the view The appropriate way to zero out the original memory locations would be to do something like b *= 0? Is there any way to force a writeback to the original view so long as the dimensions of what is being assigned to b are the same as the original? Or, is there a way to, say, enable a warning if I'm dropping a view? Sorry I didn't get back to you earlier on this -- I was a bit busy yesterday. It looks like weave.blitz isn't working on your second line because you're not explicitly putting slices in some of the dimensions. In numpy v[0:2] works for 1, 2, 3, or 4 dimensions, but for a 2d array in blitz you have to use v[0:2,:], and for 3d v[0:2,:,:]. It's a bit more picky. I think that's the problem with your second line -- try replacing v[:] with v[0,:] and theta[1-curidx] with theta[1-curidx, :]. (I may have missed some others.) OK, that seems to do it. I still actually get better performance (subsequent runs after compilation) with the straight numpy code. Strangely, I'm also getting that the flip/flop method is running a bit slower than having the separate prev_ variables. aff_input is rather large (~2000x14000), but the state vectors are only 14000 (or that x2 w/ flipflopping for some), each. Is there slowdown maybe because it is doing those 3 lines of blitz operations then doing a bunch of python numpy? 
Either way, It seems like I've got pretty good performance as well as a handle on using weave.blitz in the future. -jsnyder -- James Snyder Biomedical Engineering Northwestern University [EMAIL PROTECTED] PGP: http://fanplastic.org/key.txt ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Quick Question about Optimization
James Snyder wrote: Well, if you do f = a[n, :], you would get a view, another object that shares the data in memory with a but is a separate object. OK, so it is a new object, with the properties of the slice it references, but if I write anything to it, it will consistently go back to the same spot in the original array. In general, if I work on that, and don't do something that allocates a new set of memory locations for that, it will reference the same memory location. If I do: a = np.zeros((20,30)) b = a[2,:] b += 1 # will add 1 to the original slice b.resize will fail... b = np.zeros((1,30)) # allocates new memory and disconnects the view The appropriate way to zero out the original memory locations would be to do something like b *= 0? or, from fastest to slowest: b.fill(0) b.flat = 0 b[:] = 0 Eric ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
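A small sketch tying this subthread together (the variable names are illustrative): in-place operations on a view write through to the original array, while rebinding the name allocates fresh memory and silently drops the view, which is the "dropping a view" pitfall James asked about.

```python
import numpy as np

a = np.ones((20, 30))
b = a[2, :]        # a view: writes to b land in a

b.fill(0)          # fastest in-place zeroing
print(a[2].any())  # False: the original row was zeroed

b += 1             # in-place ops also write through the view
print(a[2, 0])     # 1.0

b = np.zeros(30)   # rebinding the *name* allocates new memory;
print(a[2, 0])     # 1.0 -- the original array is untouched
```

There is no built-in warning for the rebinding case; the only signal is that subsequent writes to `b` stop showing up in `a`.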
Re: [Numpy-discussion] URGENT: Re: 1.1.0rc1, Win32 Installer: please test it
On Tue, 20 May 2008, Jarrod Millman apparently wrote: Please test the Windows binaries. What tests would you like? :: np.test(level=1) Numpy is installed in C:\Python25\lib\site-packages\numpy Numpy version 1.1.0rc1 Python version 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.13 ... OK unittest._TextTestResult run=1004 errors=0 failures=0 This is on an old machine (Pentium 3 (no SSE 2)) running Win 2000. Cheers, Alan ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
Curious bug on Stefan's Ubuntu build client: ImportError: /usr/lib/atlas/libblas.so.3gf: undefined symbol: _gfortran_st_write_done make[1]: *** [test] Error 1 Anyone know what that is about? Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Shouldn't numpy.core modules avoid calling numpy.lib ?
On Tue, May 20, 2008 at 8:14 PM, David Cournapeau [EMAIL PROTECTED] wrote: Hi, I noticed that some functions in numpy.core call functions in numpy.lib. Shouldn't this be avoided as much as possible, to avoid potential circular import, dependencies, etc... ? Probably not. But numpy/lib looks like a basement closet to me, anyway. What functions are getting called? Chuck ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
Hi Chuck, Charles R Harris wrote: Curious bug on Stefan's Ubuntu build client: ImportError: /usr/lib/atlas/libblas.so.3gf: undefined symbol: _gfortran_st_write_done make[1]: *** [test] Error 1 Anyone know what that is about? Chuck You need to link -lgfortran since the blas you use was compiled with gfortran. Cheers, Michael ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
Charles R Harris wrote: Curious bug on Stefan's Ubuntu build client: ImportError: /usr/lib/atlas/libblas.so.3gf: undefined symbol: _gfortran_st_write_done make[1]: *** [test] Error 1 Unfortunately not curious and very well known: I bet the client is configured to build with g77; libblas.so.3gf uses the gfortran ABI (which is incompatible with the g77 ABI). Any library with 3gf is the gfortran ABI. Two possible solutions: - forget about gfortran, and install atlas3* (g77 ABI) instead of libatlas* (gfortran ABI) - use gfortran in the build The first one is safer: I cannot find any concrete information on the Ubuntu side of things, but I believe g77 is still the default ABI. (Debian) Lenny uses gfortran: http://wiki.debian.org/GfortranTransition David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] URGENT: Re: 1.1.0rc1, Win32 Installer: please test it
Alan G Isaac wrote: What tests would you like? :: np.test(level=1) Numpy is installed in C:\Python25\lib\site-packages\numpy Numpy version 1.1.0rc1 Python version 2.5.1 (r251:54863, Apr 18 2007, 08:51:08) [MSC v.13 ... OK unittest._TextTestResult run=1004 errors=0 failures=0 This is on an old machine (Pentium 3 (no SSE 2)) running Win 2000. Joy, no more crashing on machines without SSE2 :) That's exactly the point of this 'new' installer. cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Shouldn't numpy.core modules avoid calling numpy.lib ?
Charles R Harris wrote: On Tue, May 20, 2008 at 8:14 PM, David Cournapeau [EMAIL PROTECTED] wrote: Hi, I noticed that some functions in numpy.core call functions in numpy.lib. Shouldn't this be avoided as much as possible, to avoid potential circular import, dependencies, etc... ? Probably not. Probably not avoided, or should probably not be called? But numpy/lib looks like a basement closet to me, anyway. What functions are getting called? I can see at least one: numpy.lib.issubtype in core/defmatrix.py, called once. I am trying to see why importing numpy is slow, and those circular imports make the thing difficult to understand (or maybe I am just too new to dtrace to understand how to use it effectively here). cheers, David ___ Numpy-discussion mailing list Numpy-discussion@scipy.org http://projects.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
On Tue, May 20, 2008 at 8:32 PM, David Cournapeau <[EMAIL PROTECTED]> wrote:
> Unfortunately it is neither curious nor unknown: I bet the client is
> configured to build with g77, while libblas.so.3gf uses the gfortran ABI
> (which is incompatible with the g77 ABI). [...]
> (Debian) Lenny uses gfortran: http://wiki.debian.org/GfortranTransition

Ah... Is this going to be a problem with Debian installs? I just ran Stefan's buildbot to see if it was working yet.

Chuck
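[Editor's note] One way to check which Fortran ABI a BLAS binary was built against is to scan its dynamic symbol table for `_gfortran_*` symbols, the same symbols that appear in the traceback above. A sketch (the library path is the one from the error message and is only an example; the helper name is made up):

```python
# Guess the Fortran ABI of a shared library by looking for _gfortran_*
# symbols with `nm -D`. A g77-built library exports none of them.
import subprocess

def fortran_abi(libpath):
    try:
        out = subprocess.run(["nm", "-D", libpath],
                             capture_output=True, text=True,
                             check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return "unknown (could not read symbols)"
    return "gfortran" if "_gfortran_" in out else "g77 (or non-gfortran)"

print(fortran_abi("/usr/lib/atlas/libblas.so.3gf"))
```

If this reports "gfortran" while numpy was built with g77 (or vice versa), you get exactly the `undefined symbol: _gfortran_st_write_done` failure seen on the build client.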
Re: [Numpy-discussion] Shouldn't numpy.core modules avoid calling numpy.lib ?
On Tue, May 20, 2008 at 8:39 PM, David Cournapeau <[EMAIL PROTECTED]> wrote:
> Charles R Harris wrote:
>> Probably not.
>
> Probably not avoided, or probably should not be called?

Just that it looks cleaner to me if stuff in numpy/core doesn't call special libraries. I don't know that it is a problem. Maybe some utility functions should be moved into core?

Chuck
Re: [Numpy-discussion] Branching 1.1.x and starting 1.2.x development
Charles R Harris wrote:
> Ah... Is this going to be a problem with Debian installs?

No, I meant that Lenny has a clear rule for the transition, whereas I could not find such information for Ubuntu. Since the transition happened during the Hardy development time-frame, I don't know what Ubuntu did. Maybe I am missing something, but I found the distribution of two ABIs at the same time extremely annoying in Ubuntu (typically, Debian has a package liblapack_pic, very useful for building a shared ATLAS yourself, but contrary to all the other Fortran libraries, which were renamed for the gfortran transition, this one got the gfortran ABI without a name change).

cheers,

David
Re: [Numpy-discussion] question on NumPy NaN
On Tue, May 20, 2008 at 6:12 PM, David Cournapeau <[EMAIL PROTECTED]> wrote:
> Keith Goodman wrote:
>> Or np.nansum(a) / np.isfinite(a).sum()
>> A nanmean would be nice to have in numpy.
>
> nanmean, nanstd and nanmedian are available in scipy, though.

Thanks for pointing that out. Studying nanmedian, which is twice as fast as my for-loop implementation, taught me about compress and apply_along_axis.

    import numpy.matlib as mp
    from numpy.matlib import where

    timeit x[0, where(x.A > 0.5)[1]]
    1 loops, best of 3: 60.8 µs per loop
    timeit x.compress(x.A.ravel() > 0.5)
    1 loops, best of 3: 44.5 µs per loop

Am I missing something obvious, or is the 'sort' unnecessary in _nanmedian? Perhaps it is left over from a time when _nanmedian did not call median.

    def _nanmedian(arr1d):  # This only works on 1d arrays
        """Private function for rank 1 arrays. Compute the median ignoring NaN.

        :Parameters:
            arr1d : rank 1 ndarray
                input array
        :Results:
            m : float
                the median.
        """
        cond = 1 - np.isnan(arr1d)
        x = np.sort(np.compress(cond, arr1d, axis=-1))
        if x.size == 0:
            return np.nan
        return median(x)
Re: [Numpy-discussion] URGENT: Re: 1.1.0rc1, Win32 Installer: please test it
Testing the windows installers.

On an 8-year-old Windows 2000 machine with an Intel processor (which one?) without SSE, installed numpy-1.1.0rc1-nosse.exe:

* with Python 2.4.3 (no ctypes)
  - numpy.test(): 1001 tests OK, no errors or failures
  - numpy.test(all=True): 1271 tests, errors=12, failures=1
  No crash; some previous numpy builds crashed Python.
* with Python 2.5.1
  - numpy.test(): 1004 tests OK, no errors or failures
  - numpy.test(all=True): 1277 tests, errors=12, failures=1

On Windows XP on a notebook with an Intel Pentium M with SSE2, installed numpy-1.1.0rc1-sse2.exe:

{{{
>>> import numpy
>>> numpy.test()
Numpy is installed in C:\Programs\Python25\lib\site-packages\numpy
Numpy version 1.1.0rc1
Python version 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)]
...
Ran 1004 tests in 1.469s
OK
unittest._TextTestResult run=1004 errors=0 failures=0
}}}

Running all tests:

{{{
Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.test(all=True)
Numpy is installed in C:\Programs\Python25\lib\site-packages\numpy
Numpy version 1.1.0rc1
Python version 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)]
...
Ran 1277 tests in 5.890s
FAILED (failures=1, errors=12)
unittest._TextTestResult run=1277 errors=12 failures=1
}}}

If I run numpy.test() or numpy.test(level=1) again, it does not go back to the original smaller number (1004) of tests; instead it still picks up some of the tests with errors:

{{{
>>> numpy.test(level=1)
Numpy is installed in C:\Programs\Python25\lib\site-packages\numpy
Numpy version 1.1.0rc1
Python version 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)]
Ran 1021 tests in 1.500s
FAILED (errors=10)
unittest._TextTestResult run=1021 errors=10 failures=0
}}}

Most errors are similar to (as in an open ticket):

    File "C:\Programs\Python25\Lib\site-packages\numpy\ma\tests\test_mrecords.py", line 142, in test_set_mask
        assert_equal(mbase._fieldmask.tolist(),
    RuntimeError: array_item not returning smaller-dimensional array

So, the installer works well on these two Windows computers.

Josef
Re: [Numpy-discussion] URGENT: Re: 1.1.0rc1, Win32 Installer: please test it
joep wrote:
> Testing the windows installers
> On an 8-year-old Windows 2000 machine [...] installed numpy-1.1.0rc1-nosse.exe:
> * with Python 2.4.3 (no ctypes)
>   - numpy.test(): 1001 tests OK, no errors or failures

Typo: I missed the option all=True:

  - numpy.test(all=True): 1271 tests, errors=12, failures=1

No crash; some previous numpy builds crashed Python.

> * with Python 2.5.1
>   - numpy.test(): 1004 tests OK, no errors or failures

Typo: I missed the option all=True:

  - numpy.test(all=True): 1277 tests, errors=12, failures=1