[issue11650] Faulty RESTART/EINTR handling in Parser/myreadline.c
Michael Hudson m...@users.sourceforge.net added the comment: To be clear, I have no idea why the patch for issue 960406 removed the continue from my_fgets. It may have been simply a mistake. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue11650 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue1020188] Use Py_CLEAR where necessary to avoid crashes
Michael Hudson m...@users.sourceforge.net added the comment: I think it makes sense to close this; if problems remain they should be reported in more targeted tickets. -- status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1020188 ___
[issue1173475] __slots__ for subclasses of variable length types
Michael Hudson m...@users.sourceforge.net added the comment: Well, I can think of some counters to that -- surely it's _more_ confusing if slots only works some of the time? -- but realistically I'm not going to work on this any further. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue1173475 ___
[issue6598] calling email.utils.make_msgid frequently has a non-trivial probability of generating colliding ids
New submission from Michael Hudson m...@users.sourceforge.net: If you call email.utils.make_msgid a number of times within the same second, the uniqueness of the results depends on random.randrange(100000) returning different values each time. A little mathematics shows that you don't have to call make_msgid *that* often to get the same message id twice: if you call it 'n' times, the probability of a collision is approximately 1 - math.exp(-n*(n-1)/200000.0), and for n == 100, that's about 5%. For n == 1000, it's over 99%. These numbers are borne out by experiment:

>>> def collisions(n):
...     msgids = [make_msgid() for i in range(n)]
...     return len(msgids) - len(set(msgids))
...
>>> sum((collisions(100) > 0) for i in range(1000))
49
>>> sum((collisions(1000) > 0) for i in range(1000))
991

I think probably having a counter in addition to the randomness would be a good fix for the problem, though I guess then you have to worry about thread safety. -- components: Library (Lib) messages: 91073 nosy: mwh severity: normal status: open title: calling email.utils.make_msgid frequently has a non-trivial probability of generating colliding ids type: behavior versions: 3rd party, Python 2.4, Python 2.5, Python 2.6, Python 2.7, Python 3.0, Python 3.1, Python 3.2 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6598 ___
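The approximation quoted above is the standard birthday-problem bound. A quick sketch, assuming (as the formula's denominator of 200000 = 2 * 100000 implies) that the per-second randomness is drawn from a space of 100000 equally likely values:

```python
import math

def collision_probability(n, space=100000):
    """Birthday-problem approximation: probability that n draws from
    `space` equally likely values contain at least one repeat."""
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * space))

print(collision_probability(100))   # about 5%
print(collision_probability(1000))  # over 99%
```

The two printed values match the 5% and 99% figures quoted in the report.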
[issue6598] calling email.utils.make_msgid frequently has a non-trivial probability of generating colliding ids
Michael Hudson m...@users.sourceforge.net added the comment: A higher resolution timer would also help, of course. (Thanks to James Knight for the prod). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6598 ___
[issue672115] Assignment to __bases__ of direct object subclasses
Michael Hudson [EMAIL PROTECTED] added the comment: Another 3 and a bit years on *wink* I still think my comment http://bugs.python.org/msg14169 is the crux of the issue. It's even relevant to your 'class object(object): pass' hack! I'm not at all likely to work on this any time soon myself. ___ Python tracker [EMAIL PROTECTED] http://bugs.python.org/issue672115 ___
Vilnius/Post EuroPython PyPy Sprint 12-14th of July
The PyPy team is sprinting at EuroPython again and we invite you to participate in our 3 day long sprint at the conference hotel - Reval Hotel Lietuva. If you plan to attend the sprint we recommend you listen to the PyPy technical talks (`EuroPython schedule`_) during the conference, since they will give you a good overview of the status of development. On the morning of the first sprint day (12th) we will also have a tutorial session for those new to PyPy development. As 3 days is relatively short for a PyPy sprint we suggest travelling back home on the 15th if possible (but it is OK to attend fewer than 3 days too).

Goals and topics of the sprint
------------------------------

There are many possible and interesting sprint topics to work on - here we list some possible task areas:

* completing the missing Python 2.5 features and support
* writing or porting more extension modules (e.g. zlib is missing)
* identifying slow areas of PyPy through benchmarking and working on improvements, possibly moving app-level parts of the Python interpreter to interp-level if useful
* some parts of PyPy are in need of refactoring; we may spend some time on those, for example:

  - rctypes and the extension compiler need some rethinking
  - support for LLVM 2.0 for the llvm backend
  - ...

* some JIT improvement work
* porting the stackless transform to ootypesystem
* other interesting stuff that you would like to work on ...;-)

Registration
------------

If you'd like to come, please subscribe to the `pypy-sprint mailing list`_, drop a note about your interests and post any questions. More organisational information will be sent to that list.
Please register by adding yourself on the following list (via svn): http://codespeak.net/svn/pypy/extradoc/sprintinfo/post-ep2007/people.txt or on the pypy-sprint mailing list if you do not yet have check-in rights: http://codespeak.net/mailman/listinfo/pypy-sprint

Preparation (if you feel it is needed)
--------------------------------------

* read the `getting-started`_ pages on http://codespeak.net/pypy
* for inspiration, overview and technical status you are welcome to read `the technical reports available and other relevant documentation`_
* please direct any technical and/or development oriented questions to pypy-dev at codespeak.net and any sprint organizing/logistical questions to pypy-sprint at codespeak.net
* if you need information about the conference, potential hotels, directions etc. we recommend looking at http://www.europython.org

We are looking forward to meeting you at the Vilnius Post EuroPython PyPy sprint!

The PyPy team

.. _getting-started: http://codespeak.net/pypy/dist/pypy/doc/getting-started.html
.. _`pypy-sprint mailing list`: http://codespeak.net/mailman/listinfo/pypy-sprint
.. _`the technical reports available and other relevant documentation`: http://codespeak.net/pypy/dist/pypy/doc/index.html
.. _`EuroPython schedule`: http://europython.org/timetable

-- http://mail.python.org/mailman/listinfo/python-list
EuroPython 2007: Call for Proposals
Book Monday 9th July to Wednesday 11th July 2007 in your calendar! EuroPython 2007, the European Python and Zope Conference, will be held in Vilnius, Lithuania. Last year's conference was a great success, featuring a variety of tracks, amazing lightning talks and inspiring keynotes. With your participation, we want to make EuroPython 2007, the sixth EuroPython, even more successful than the previous five.

Talks, Papers and Themes
------------------------

This year we have decided to borrow a few good ideas from PyCon, one of which is to move away from the 'track' structure. Instead, speakers are invited to submit presentations about anything they have done that they think would be of interest to the Python community. We will then arrange them into related groups and schedule them in the space available. In the past, EuroPython participants have found the following themes to be of interest:

* Science
* Python Language and Libraries
* Web Related Technologies
* Education
* Games
* Agile Methodologies and Testing
* Social Skills

In addition to talks, we will also accept full paper submissions about any of the above themes. The Call for Refereed Papers will be posted shortly. The deadline for talk proposals is Friday 18th May at midnight (24:00 CEST, Central European Summer Time, UTC+2).

Other ways to participate
-------------------------

Apart from giving talks, there are plenty of other ways to participate in the conference. Just attending and talking to people you find here can be satisfying enough, but there are three other kinds of activity you may wish to plan for: Lightning Talks, Open Space and Sprints. Lightning Talks are very short talks that give you just enough time to introduce a topic or project, Open Space is an area reserved for informal discussions, and Sprints are focused gatherings for developers interested in particular projects.
For more information please see the following pages:

* Lightning Talks: http://www.europython.org/sections/events/lightning_talks
* Open Space: http://www.europython.org/sections/events/open_space
* Sprints: http://www.europython.org/sections/sprints_and_wiki

Your Contribution
-----------------

To propose a talk or a paper, go to...

* http://www.europython.org/submit

For more general information on the conference, please visit...

* http://www.europython.org/

Looking forward to seeing what you fine folk have been up to,

The EuroPython Team

-- http://mail.python.org/mailman/listinfo/python-list
Last chance to join the Summer of PyPy!
Hopefully by now you have heard of the Summer of PyPy, our program for funding the expenses of attending a sprint for students. If not, you've just read the essence of the idea :-) However, the PyPy EU funding period is drawing to an end and there is now only one sprint left where we can sponsor the travel costs of interested students within our program. This sprint will probably take place in Leysin, Switzerland from 8th-14th of January 2007. So, as explained in more detail at: http://codespeak.net/pypy/dist/pypy/doc/summer-of-pypy.html we would encourage any interested students to submit a proposal in the next month or so. If you're stuck for ideas, you can find some at http://codespeak.net/pypy/dist/pypy/doc/project-ideas.html but please do not feel limited in any way by this list! Cheers, mwh ... and the PyPy team -- This is an off-the-top-of-the-head-and-not-quite-sober suggestion, so is probably technically laughable. I'll see how embarassed I feel tomorrow morning.-- Patrick Gosling, ucam.comp.misc -- http://mail.python.org/mailman/listinfo/python-announce-list Support the Python Software Foundation: http://www.python.org/psf/donations.html
Ireland PyPy sprint 21st-27th August 2006
The next PyPy sprint will happen in the nice city of Limerick in Ireland from 21st till 27th August. (Most people intend to arrive on 20th August.) The main focus of the sprint will be on JIT compiler work, various optimization work, porting extension modules, and infrastructure work like a build tool for PyPy or extended (distributed) testing. It's also open to new topics. If you are a student, consider participating in `Summer of PyPy`_ in order to get funding for your travel and accommodation. The sprint is being hosted by the University of Limerick (http://www.ul.ie/) and is arranged in co-operation with people from our sister project Calibre (www.calibre.ie). Our contacts at the University are Pär Ågerfalk and Eoin Oconchuir.

.. _`Summer of PyPy`: http://codespeak.net/pypy/dist/pypy/doc/summer-of-pypy

First day: introduction and workshop (possible to attend only this day)
-----------------------------------------------------------------------

During the first day (21st of August) there will be talks on various subjects related to PyPy:

* a tutorial and technical introduction to the PyPy codebase (suited for people interested in getting an overview of PyPy's architecture and/or contributing to PyPy)
* a workshop covering more in-depth technical aspects of PyPy and what PyPy can do for you. The workshop will also cover methodology, aiming at explaining the pros and cons of sprint-driven development. (suited for sprint attendants, students, staff and other interested parties from/around the University and the local area)

The tutorial will be part of the sprint introduction - the workshop will take place if there is enough interest raised before the 21st of August from people planning to attend. You are of course welcome to attend just for this first day of the sprint.

If you want to come
-------------------

If you'd like to come, please subscribe to the `pypy-sprint mailing list`_, drop a note about your interests and post any questions. More organisational information will be sent to that list.
We'll keep a list of `people`_ which we'll update (and which you can update yourself if you have codespeak commit rights).

.. _`Calibre`: http://www.calibre.ie

A small disclaimer: there might be people visiting the sprint in order to do research on how open source communities work, organize and communicate. This research might be done via filming, observing or interviewing. But of course you will be able to opt out of being filmed at the sprint.

Logistics
---------

NOTE: you need a UK style of power adapter (220V). The sprint will be held in the Computer Science Building, room CSG-025, University of Limerick (no 7 on http://www.ul.ie/main/places/campus.shtml). Bus 308 from Limerick city will take you to no 30 (approx.). See http://www.ul.ie/main/places/travel.shtml for more on how to get to UL. We will have access to the sprint facilities from 09:00-19:00 every day (it might be even later than 19:00). Monday-Wednesday and Friday-Sunday are sprint days; Thursday is likely a break day. Food on campus varies in price and quality ;-) : from ca 4 EUR to 7-8 EUR for a lunch. There are of course a lot more food alternatives in downtown Limerick.

Nearest Airports
----------------

Shannon Airport (SNN) is the nearest airport (Ryanair flies there) - you may check out more information about flights to/from the airport at http://www.shannonairport.com/index.html There are buses from there to downtown Limerick, and buses from Limerick to the UL campus. Taxis are about 35 EUR.

Accommodation
-------------

There is a website for campus accommodation at http://www.ul.ie/conference/accommodation.htm. The rate should be 49 EUR for Bed and Breakfast. If you are interested in booking campus accommodation, please contact deborah.tudge at ul ie and make reference to the PyPy workshop and sprint. Please try to book as soon as possible.
As an off-campus accommodation alternative you can also try: Castletroy Lodge and Castletroy Inn (Bed and Breakfast) Dublin Road (15 to 20 mins walk to UL) Tel: +353 61 338385 / +353 61 331167 .. _`pypy-sprint mailing list`: http://codespeak.net/mailman/listinfo/pypy-sprint .. _`people`: people.html -- Remember - if all you have is an axe, every problem looks like hours of fun.-- Frossie -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html -- http://mail.python.org/mailman/listinfo/python-announce-list Support the Python Software Foundation: http://www.python.org/psf/donations.html
pypy-0.9.0: stackless, new extension compiler
support from numerous people. Please feel free to give feedback and raise questions. contact points: http://codespeak.net/pypy/dist/pypy/doc/contact.html have fun, the pypy team, (Armin Rigo, Samuele Pedroni, Holger Krekel, Christian Tismer, Carl Friedrich Bolz, Michael Hudson, and many others: http://codespeak.net/pypy/dist/pypy/doc/contributor.html) PyPy development and activities happen as an open source project and with the support of a consortium partially funded by a two year European Union IST research grant. The full partners of that consortium are: Heinrich-Heine University (Germany), AB Strakt (Sweden) merlinux GmbH (Germany), tismerysoft GmbH (Germany) Logilab Paris (France), DFKI GmbH (Germany) ChangeMaker (Sweden), Impara (Germany) -- Monte Carlo sampling is no way to understand code. -- Gordon McMillan, comp.lang.python -- http://mail.python.org/mailman/listinfo/python-list
Post-PyCon PyPy Sprint: February 27th - March 2nd 2006
The next PyPy sprint is scheduled to take place right after PyCon 2006 in Dallas, Texas, USA. We hope to see lots of newcomers at this sprint, so we'll give friendly introductions. Note that during the PyCon conference we are giving PyPy talks which serve well as preparation.

Goals and topics of the sprint
------------------------------

While attendees of the sprint are of course welcome to work on what they wish, we offer these ideas:

- Work on an 'rctypes' module aiming at letting us use a ctypes implementation of an extension module from the compiled pypy-c.
- Writing ctypes implementations of modules to be used by the above tool.
- Experimenting with different garbage collection strategies.
- Implementing Python 2.5 features in PyPy.
- Implementation of constraint solvers and integration of dataflow variables into PyPy.
- Implement new features and improve the 'py' lib and py.test which are heavily used by PyPy (doctests/test selection/...).
- Generally experiment with PyPy -- for example, play with transparent distribution of objects or coroutines and stackless features at application level.
- Have fun!

Location
--------

The sprint will be held wherever the PyCon sprints end up being held, which is to say somewhere within the Dallas/Addison Marriott Quorum hotel. For more information see the PyCon 06 sprint pages:

- http://us.pycon.org/TX2006/Sprinting
- http://wiki.python.org/moin/PyCon2006/Sprints

Exact times
-----------

The PyPy sprint will run from Monday February 27th until Thursday March 2nd 2006. Hours will be from 10:00 until people have had enough.

Registration, etc.
------------------

If you know before the conference that you definitely want to attend our sprint, please subscribe to the `PyPy sprint mailing list`_, introduce yourself and post a note that you want to come. Feel free to ask any questions or make suggestions there! There is a separate `PyCon 06 people`_ page tracking who is already planning to come.
If you have commit rights on codespeak then you can add yourself by modifying a checkout of http://codespeak.net/svn/pypy/extradoc/sprintinfo/pycon06/people.txt

.. _`PyPy sprint mailing list`: http://codespeak.net/mailman/listinfo/pypy-sprint
.. _`PyCon 06 people`: http://codespeak.net/pypy/extradoc/sprintinfo/pycon06/people.txt

-- M-x psych[TAB][RETURN] -- try it -- http://mail.python.org/mailman/listinfo/python-list
This Week in PyPy 2
Introduction
============

This is the second of what will hopefully be many summaries of what's been going on in the world of PyPy in the last week. I'd still like to remind people, when something worth summarizing happens, to recommend it for This Week in PyPy as mentioned on:

    http://codespeak.net/pypy/dist/pypy/doc/weekly/

where you can also find old summaries. There were about 100 commits to the pypy section of codespeak's repository this week.

pypy-c py.py
============

Over the weekend (while I was being blown around Wales by the remnants of hurricane Wilma) Armin and a few others worked on trying to get a translated pypy-c to run the interpreted py.py. This resulted in fixing a welter of small differences between CPython and pypy-c, though at the end of it all we are still left in the dark by incomprehensible geninterplevel crashes caused by subtle differences between the most internal types of CPython and pypy-c.

Multiple Spaces
===============

In one of the reports we're currently writing for the end of phase 1 EU review:

    http://codespeak.net/pypy/dist/pypy/doc/draft-low-level-encapsulation.html

we made this claim:

    The situation of multiple interpreters is thus handled automatically: if there is only one space instance, it is regarded as a pre-constructed constant and the space object pointer (though not its non-constant contents) disappears from the produced source, i.e. both from function arguments and local variables and from instance fields. If there are two or more such instances, a 'space' attribute will be automatically added to all application objects (or more precisely, it will not be removed by the translation process), the best of both worlds.

And then we tried to do it, and had to tune the claim down because it doesn't work. This is because the StdObjSpace class has a 'specialized method' -- a different version of the wrap() method is generated for each type it is seen to be called with.
This causes problems when there are genuine StdObjSpace instances in the translated pypy because of limitations in our tools. We looked at these limitations and decided that it was time to rewrite the world again, leading in to the next section...

SomePBC-refactoring
===================

One of the more unusual annotations produced by PyPy's annotator is that of 'SomePBC', short for 'SomePrebuiltConstant'. This annotation means that a variable contains a reference to some object that existed before the annotation process began (key example: functions). Up until now, the annotation has actually explicitly included which prebuilt constants a variable might refer to, which seems like the obvious thing to do. Unfortunately, not all things that we'd like to annotate as a prebuilt constant actually exist as unique CPython objects -- in particular, the point of specializing a function is that it becomes many functions in the translated result. Also, for 'external' functions, i.e. those not written in RPython, we want to be able to supply annotations for the input and exit args even if there is no corresponding CPython function at all. The chosen solution is to have the SomePBC annotation refer not to a CPython object but to a more abstracted 'Description' of this object. In some sense, this isn't a very large change, but it affects most files in the annotation directory and a fair fraction of those under rpython/ and translator/. We're also cleaning up some other mess while we're there and breaking everything anyway.

Draft-Dynamic-...
=================

It's not linked from anywhere on the website (yet...) but the report that will become Deliverable 05.1:

    http://codespeak.net/pypy/dist/pypy/doc/draft-dynamic-language-translation.html

has been reviewed and re-reviewed in the last couple of weeks and is definitely required reading for anyone who has an interest in the more theoretical side of PyPy.

Gtbg Sprint in December
=======================

Hopefully very soon, we'll announce the next PyPy sprint... stay tuned!
Cheers, mwh -- I'm a little confused. That's because you're Australian! So all the blood flows to your head, away from the organ most normal guys think with. -- Mark Hammond Tim Peters, comp.lang.python -- http://mail.python.org/mailman/listinfo/python-list
Re: visit_decref: Assertion `gc-gc.gc_refs != 0' failed.
alexLIGO [EMAIL PROTECTED] writes:

> Hi, I got this error when trying to execute the following python
> command with in a C module: Py_BuildValue

You get that error immediately on calling that function?

> Do anyone have any idea what this error is about?

You've probably got your refcounting wrong somewhere.

Cheers, mwh -- You can lead an idiot to knowledge but you cannot make him think. You can, however, rectally insert the information, printed on stone tablets, using a sharpened poker.-- Nicolai -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html -- http://mail.python.org/mailman/listinfo/python-list
Re: OpenSource documentation problems
Adriaan Renting [EMAIL PROTECTED] writes:

> The good commercial docs are better because there it is understood how
> important this is.

Also, they are probably written by people who are trained technical writers, which has to help at least a bit... writing good documentation is hard. Whether the Python documentation is good or bad depends on what you're comparing it to. It's probably not as good, say, as Apple's documentation for Cocoa, but it could certainly be much, much worse.

Cheers, mwh -- Enlightenment is probably antithetical to impatience. -- Erik Naggum, comp.lang.lisp -- http://mail.python.org/mailman/listinfo/python-list
Re: To the python-list moderator
This is probably a fairly bad way of contacting the python-list admins...

Terry Reedy [EMAIL PROTECTED] writes:

> For a couple of years, I have been reading and posting to python-list
> and c.l.p via gmane.news.org's gmane.comp.python.general group. Today
> I got this from 'python-list-bounces', which I presume is a 'machine'
> rather than a 'human' address.
> ---
> Your mail to 'Python-list' with the subject
>     Re: how to join two Dictionary together?
> Is being held until the list moderator can review it for approval.
> The reason it is being held: Message has a suspicious header
> Either the message will get posted to the list, or you will receive
> notification of the moderator's decision.
> ---
> Since I had nothing to do with the headers, the problem is between
> gmane's sending (perhaps when responding to a message from a
> particular site) and your review. I hope this can be fixed.

It's probably been flagged as UNSURE by spambayes.

Cheers, mwh -- You owe The Oracle a TV with an 'intelligence' control - I've tried 'brightness' but that didn't work. -- Internet Oracularity #1192-01 -- http://mail.python.org/mailman/listinfo/python-list
Re: Release of PyPy 0.7.0
Michael Sparks [EMAIL PROTECTED] writes:

> Valentino Volonghi aka Dialtone wrote:
>> Michael Sparks [EMAIL PROTECTED] wrote:
>>> Would it be useful for people to start trying out their modules/code
>>> to see if they work with this release, and whether they can likewise
>>> be translated using the C/LLVM backends, or would you say this is
>>> too early? (I'm more thinking in terms of it providing real world
>>> usecases in the hope of finding things that don't work - rather than
>>> anything else)
>> This is not how it works.
> I beg to differ - it is how it can work (just not the default or
> currently recommended).
>> The chance of any random module you have written being rpython is
>> more or less zero, so it's not _that_ interesting for you to try to
>> compile them with PyPy. You can also use the translate_pypy.py script
>> to try out several smaller programs, e.g. a slightly changed version
>> of Pystone:
>>
>>     cd pypy/translator/goal
>>     python translate_pypy.py targetrpystone
>
> Which is pretty cool of course. For those of interest, running pystone
> with the pypy compiled native binary has the following results for
> pystones on my machine:
>
> [EMAIL PROTECTED]:~/pypy-0.7.0/pypy/translator/goal> ./pypy-c
> debug: entry point starting
> debug: argv -> ./pypy-c
> debug: importing code
> debug: calling code.interact()
> Python 2.4.1 (pypy 0.7.0 build) on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> (InteractiveConsole)
> >>> from test import pystone
> >>> pystone.main(1000)
> Pystone(1.1) time for 1000 passes = 13.97
> This machine benchmarks at 71.582 pystones/second
>
> The same results for CPython:
>
> [EMAIL PROTECTED]:~/pypy-0.7.0/pypy/translator/goal> python
> Python 2.4 (#1, Mar 22 2005, 21:42:42)
> [GCC 3.3.5 20050117 (prerelease) (SUSE Linux)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> from test import pystone
> >>> pystone.main()
> Pystone(1.1) time for 50000 passes = 1.58
> This machine benchmarks at 31645.6 pystones/second
>
> Obviously therefore anyone seeking to translate their existing code
> from python to an executable directly using pypy would not be doing it
> for performance reasons (again, something I'm aware of watching the
> updates come out and having run svn checkouts at previous times).

No, you're still operating at the wrong level here (very easily done). This is the _translated PyPy_ interpreting pystone. If you run a _translated pystone_ you'll (hopefully) get a different, faster answer. In expected order of execution speed:

    interpreted pypy interpreting pystone
    translated pypy interpreting pystone
    cpython interpreting pystone
    translated pystone

> Anyway, whether it's sensible or not I'm going to play with
> translating some of my modules :)

Whatever floats your boat :)

Cheers, mwh -- Ability to type on a computer terminal is no guarantee of sanity, intelligence, or common sense. -- Gene Spafford's Axiom #2 of Usenet -- http://mail.python.org/mailman/listinfo/python-list
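As a sanity check on the transcripts above: pystones/second is just passes divided by elapsed seconds, and the CPython rate of 31645.6 pystones/second corresponds to pystone's standard run of 50000 passes in 1.58 seconds. A quick recomputation:

```python
def pystones_per_second(passes, seconds):
    # rate = passes / elapsed time, as pystone itself computes it
    return passes / seconds

# figures from the transcripts above
pypy_rate = pystones_per_second(1000, 13.97)     # ~71.6 for pypy-c
cpython_rate = pystones_per_second(50000, 1.58)  # ~31645.6 for CPython 2.4
print(cpython_rate / pypy_rate)  # pypy-c 0.7 is a few hundred times slower
```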
Re: Yielding a chain of values
Talin [EMAIL PROTECTED] writes: I'm finding that a lot of places within my code, I want to return the output of a generator from another generator. Currently the only method I know of to do this is to explicitly loop over the results from the inner generator, and yield each one: for x in inner(): yield x I was wondering if there was a more efficient and concise way to do this. And if there isn't, Greenlets, perhaps? (for which, see google). Cheers, mwh -- LINTILLA: You could take some evening classes. ARTHUR: What, here? LINTILLA: Yes, I've got a bottle of them. Little pink ones. -- The Hitch-Hikers Guide to the Galaxy, Episode 12 -- http://mail.python.org/mailman/listinfo/python-list
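For illustration, here is the explicit-loop delegation described in the post next to an itertools.chain spelling of the same thing (later Pythons added `yield from` for exactly this case); the generator names are made up:

```python
import itertools

def inner():
    yield 1
    yield 2

def outer():
    yield 0
    # explicit delegation, as described above
    for x in inner():
        yield x

def outer_chained():
    # itertools.chain flattens the streams without the explicit loop
    return itertools.chain([0], inner())

assert list(outer()) == [0, 1, 2]
assert list(outer_chained()) == [0, 1, 2]
```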
Re: micro-python - is it possible?
Magnus Lycka [EMAIL PROTECTED] writes: Evil Bastard wrote: Hi, Has anyone done any serious work on producing a subset of python's language definition that would suit it to a tiny microcontroller environment? Isn't pypy meant to support different backends with different requirements and constraints using the same basic language? Yup, not part of the project that I'm involved in, but it's part of the plan. Cheers, mwh -- 59. In English every word can be verbed. Would that it were so in our programming languages. -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html -- http://mail.python.org/mailman/listinfo/python-list
Re: Precise timings ?
Madhusudan Singh [EMAIL PROTECTED] writes: Hi I am using time.clock() to get the current time of the processor in seconds. For my application, I need really high resolution but currently seem to be limited to 0.01 second. Is there a way to specify the resolution (say 1-10 microseconds) ? Not in standard Python. My processor is a 1.4 MHz Intel processor. Mhz? :) Surely, it should be able to report times a few (or at least 10) microseconds apart. It's probably architecture and operating system dependent. The pentium has a timestamp counter that counts clock cycles and the PowerPC has a similar (but slower) counter. You may need to write a little C or asm. Cheers, mwh -- All of us in here are guilty as hell. Justified, yes, but guilty. and that, your honour, was when I killed him Well, I see. That's all right then -- Devin Rubia, asr -- http://mail.python.org/mailman/listinfo/python-list
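One way to see the granularity the poster is running into is to measure it directly: spin until the clock visibly ticks over and record the smallest observed step. A minimal sketch (the function name is made up):

```python
import time

def clock_resolution(clock, samples=20):
    # smallest nonzero difference observable between successive readings
    smallest = None
    for _ in range(samples):
        t0 = clock()
        t1 = clock()
        while t1 == t0:        # spin until the clock ticks over
            t1 = clock()
        d = t1 - t0
        if smallest is None or d < smallest:
            smallest = d
    return smallest

assert clock_resolution(time.time) > 0
```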
Re: Bug in slice type
[EMAIL PROTECTED] writes: Michael Hudson wrote: Bryan Olson writes: In some sense; it certainly does what I intended it to do. [...] I'm not going to change the behaviour. The docs probably aren't especially clear, though. The docs and the behavior contradict: [...] these are the /start/ and /stop/ indices and the /step/ or stride length of the slice [emphasis added]. I'm fine with your favored behavior. What do we do next to get the doc fixed? I guess one of us comes up with some less misleading words. It's not totally obvious to me what to do, seeing as the returned values *are* indices in a sense, just not the sense in which they are used in Python. Any ideas? Cheers, mwh -- First of all, email me your AOL password as a security measure. You may find that won't be able to connect to the 'net for a while. This is normal. The next thing to do is turn your computer upside down and shake it to reboot it. -- Darren Tucker, asr -- http://mail.python.org/mailman/listinfo/python-list
Re: [Python-Dev] implementation of copy standard lib
Simon Brunning [EMAIL PROTECTED] writes: I think that copy is very rarely used. I don't think I've ever imported it. Or is it just me? Not really. I've used it once that I can recall, to copy a kind of generic default value, something like: def value(self, v, default): if hasattr(source, v): return getattr(source, v) else: return copy.copy(default) (except not quite, there would probably be better ways to write exactly that). Cheers, mwh -- washort glyph: you're evil, too glyph washort: I try washort not the good kind of evil washort the other kind -- from Twisted.Quotes -- http://mail.python.org/mailman/listinfo/python-list
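A self-contained variant of that pattern: a copy of a mutable default is handed out so callers can mutate their value without corrupting the shared default (the class and names here are hypothetical):

```python
import copy

class Config:
    pass

def value(source, name, default):
    # prefer the attribute if it exists, otherwise hand out a fresh
    # copy of the default so the shared default object stays pristine
    if hasattr(source, name):
        return getattr(source, name)
    return copy.copy(default)

default_tags = []
cfg = Config()
tags = value(cfg, "tags", default_tags)
tags.append("x")
assert tags == ["x"]
assert default_tags == []   # the shared default was not mutated
```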
Re: extending: new type instance
BranoZ [EMAIL PROTECTED] writes: I'm writing my own (list-like) type in C. It is implementing a Sequence Protocol. In 'sq_slice' method I would like to return a new instance of my class/type. How do I create (and initialize) an instance of a given PyTypeObject MyType ? I have tried to provide (PyObject *)MyType as 'class' argument to: PyInstance_New(PyObject *class, PyObject *arg, PyObject *kw) It failed the PyClass_Check. I have also found a couple of PyObject_New functions which accept PyTypeObject as an argument. They look very low-level and obviously don't call __init__. Should I do it myself manually ? PyObject_New is the usual way, although there are others -- MyType.tp_new, PyObject_Call ... Cheers, mwh -- 81. In computing, turning the obvious into the useful is a living definition of the word frustration. -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html -- http://mail.python.org/mailman/listinfo/python-list
Re: Urgent: Embedding Python problems - advice sought
[EMAIL PROTECTED] writes: Hi, I am embedding Python into a multi-threaded C++ application running on Solaris and need urgent clarification on the embedding architecture and its correct usage (as I am experiencing weird behaviors). What version of Python are you using? Can anyone clarify: - if Python correctly supports multiple sub-interpreters (Py_NewInterpreter) ? It's supposed to but it's not often used or tested and can get a bit flaky. - if Python correctly supports multiple thread states per sub-interpreter (PyThreadState_New) ? There are bugs in 2.3.5 and 2.4.1 in this area (they are fixed in CVS -- I hope -- and will be in 2.4.2). and the real question: - what is the rationale for choosing one of: [a] one sub-interpreter with many thread states This is the best tested and understood (it's what the core Python interpreter does, after all). [b] many sub-interpreters with one thread state each [c] many sub-interpreters with many thread states each These are probably somewhat broken in recent Pythons, I'm afraid. Can you try CVS? Cheers, mwh -- ARTHUR: Yes. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying Beware of the Leopard. -- The Hitch-Hikers Guide to the Galaxy, Episode 1 -- http://mail.python.org/mailman/listinfo/python-list
Re: Urgent: Embedding Python problems - advice sought
[EMAIL PROTECTED] writes: Does anyone have advice on other groups, sites etc that has knowledge of this subject ? I've just replied to your original post, having not seen it the first time around. Cheers, mwh -- Nafai w00t w00t w00t w00t! Nafai I don't understand all of the code, but it works! Nafai I guess I should check it in. -- from Twisted.Quotes -- http://mail.python.org/mailman/listinfo/python-list
Re: __del__ pattern?
[EMAIL PROTECTED] writes: Chris Curvey wrote: I need to ensure that there is only one instance of my python class on my machine at a given time. (Not within an interpreter -- that would just be a singleton -- but on the machine.) These instances are created and destroyed, but there can be only one at a time. So when my class is instantiated, I create a little lock file, and I have a __del__ method that deletes the lock file. Unfortunately, there seem to be some circumstances where my lock file is not getting deleted. Then all the jobs that need that special class start queueing up requests, and I get phone calls in the middle of the night. For a reasonably portable solution, leave the lock file open. On most systems, you cannot delete an open file, Uh, you can on unix -- what else did you have in mind for most systems? Cheers, mwh -- Well, yes. I don't think I'd put something like penchant for anal play and able to wield a buttplug in a CV unless it was relevant to the gig being applied for... -- Matt McLeod, asr -- http://mail.python.org/mailman/listinfo/python-list
Re: __del__ pattern?
Chris Curvey [EMAIL PROTECTED] writes: I need to ensure that there is only one instance of my python class on my machine at a given time. I recommend modifying your requirements such that you ensure that there is only one active instance of your class at any one time (or something like that), and then use try:finally: blocks to ensure your locks get removed. Is there a better pattern to follow than using a __del__ method? I just need to be absolutely, positively sure of two things: 1) There is only one instance of my special class on the machine at a time. 2) If my special class is destroyed for any reason, I need to be able to create another instance of the class. As another poster mentioned, you also need to work out what you're going to do if your process gets killed in a way that doesn't allow finally blocks to run (this doesn't have much to do with Python). Cheers, mwh -- The above comment may be extremely inflamatory. For your protection, it has been rot13'd twice. -- the signature of JWhitlock on slashdot -- http://mail.python.org/mailman/listinfo/python-list
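A sketch of the try/finally shape suggested here, using os.O_EXCL so that creating the lock file is atomic (the path and class name are made up; as discussed above, a process killed without running finally blocks will still leave the file behind):

```python
import os
import tempfile

LOCKFILE = os.path.join(tempfile.gettempdir(), "myapp.lock")  # hypothetical path

class MachineLock:
    def acquire(self):
        # O_CREAT | O_EXCL fails if the file already exists: atomic on POSIX,
        # so two processes cannot both believe they hold the lock
        self.fd = os.open(LOCKFILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)

    def release(self):
        os.close(self.fd)
        os.unlink(LOCKFILE)

lock = MachineLock()
lock.acquire()
try:
    pass  # ... work that requires the single active instance ...
finally:
    lock.release()

assert not os.path.exists(LOCKFILE)
```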
Re: Bug in slice type
Bryan Olson [EMAIL PROTECTED] writes: The Python slice type has one method 'indices', and reportedly: This method takes a single integer argument /length/ and computes information about the extended slice that the slice object would describe if applied to a sequence of length items. It returns a tuple of three integers; respectively these are the /start/ and /stop/ indices and the /step/ or stride length of the slice. Missing or out-of-bounds indices are handled in a manner consistent with regular slices. http://docs.python.org/ref/types.html It behaves incorrectly In some sense; it certainly does what I intended it to do. when step is negative and the slice includes the 0 index.

    class BuggerAll:
        def __init__(self, somelist):
            self.sequence = somelist[:]
        def __getitem__(self, key):
            if isinstance(key, slice):
                start, stop, step = key.indices(len(self.sequence))
                # print 'Slice says start, stop, step are:', start, stop, step
                return self.sequence[start : stop : step]

But if that's what you want to do with the slice object, just write

    start, stop, step = key.start, key.stop, key.step
    return self.sequence[start : stop : step]

or even

    return self.sequence[key]

What the values returned from indices are for is to pass to the range() function, more or less. They're not intended to be interpreted in the way things passed to __getitem__ are. (Well, _actually_ the main motivation for writing .indices() was to use it in unittests...)

    print range(10)[None : None : -2]
    print BuggerAll(range(10))[None : None : -2]

The above prints:

    [9, 7, 5, 3, 1]
    []

Un-commenting the print statement in __getitem__ shows: Slice says start, stop, step are: 9 -1 -2 The slice object seems to think that -1 is a valid exclusive bound, It is, when you're doing arithmetic, which is what the client code to PySlice_GetIndicesEx() which in turn is what indices() is a thin wrapper of, does but when using it to actually slice, Python interprets negative numbers as an offset from the high end of the sequence.
Good start-stop-step values are (9, None, -2), or (9, -11, -2), or (-1, -11, -2). The latter two have the advantage of being consistent with the documented behavior of returning three integers. I'm not going to change the behaviour. The docs probably aren't especially clear, though. Cheers, mwh -- (ps: don't feed the lawyers: they just lose their fear of humans) -- Peter Wood, comp.lang.lisp -- http://mail.python.org/mailman/listinfo/python-list
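The distinction under discussion can be seen directly: the triple from indices() is meant for feeding to range(), not back into a slice expression, where -1 gets reinterpreted as "last element":

```python
s = slice(None, None, -2)
start, stop, step = s.indices(10)
assert (start, stop, step) == (9, -1, -2)

# as arithmetic bounds (what indices() promises), this is right:
assert list(range(start, stop, step)) == [9, 7, 5, 3, 1]

# but re-slicing reinterprets -1 as "one before the end", hence the empty result:
assert list(range(10))[start:stop:step] == []

# passing the slice object straight through avoids the problem entirely:
assert list(range(10))[s] == [9, 7, 5, 3, 1]
```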
Re: Memory leak in PyImport_ReloadModule - URGENT
[EMAIL PROTECTED] writes: Having recently upgraded to Python 2.4, I am having a large memory leak with the following code built with VC++ 6.0: PyObject *pName, *pModule; Py_Initialize(); pName = PyString_FromString(argv[1]); pModule = PyImport_Import(pName); Py_DECREF(pName); PyObject* pModule2 = PyImport_ReloadModule(pModule); Py_DECREF(pModule2); Py_DECREF(pModule); Py_Finalize(); return 0; Given that the builtin function reload() does more or less the same thing, it seems likely that there's something odd about your embedding that is making the difference. Does it make a difference which module you reload? I notice that you're using VC++ 6.0. Is your Python built with VC6 too? (The python.org distribution is built with 7 -- or 7.1, I forget which). Help! You might want to file a bug report. Cheers, mwh -- Like most people, I don't always agree with the BDFL (especially when he wants to change things I've just written about in very large books), ... -- Mark Lutz, http://python.oreilly.com/news/python_0501.html -- http://mail.python.org/mailman/listinfo/python-list
Re: signals (again)
bill [EMAIL PROTECTED] writes: I see this (or similar) question occasionally looking back through the archive, but haven't yet seen a definitive answer, so I'm going to ask it again. Consider the following: while True: do_something_to_files_in_directory(fd) fcntl(fd, F_NOTFIY, DN_CREATE) signal.pause() How do you deal with the signal that occurs after the fcntl and before the pause? I don't think you can, sorry. Cheers, mwh -- Get out your salt shakers folks, this one's going to take more than one grain. -- Ator in an Ars Technica news item -- http://mail.python.org/mailman/listinfo/python-list
Re: Syntax error after upgrading to Python 2.4
Reinhold Birkenfeld [EMAIL PROTECTED] writes: Michael Hudson wrote: [EMAIL PROTECTED] writes: On Sat, Aug 06, 2005 at 05:15:22PM -0400, Terry Reedy wrote: In any case letting developers add new features is part of the price of getting unpaid bug fixes for free software. But note that PSF does not make you to upgrade. Here is the current list of possible downloads. [a mere 8 versions] Oh, don't give such a short list! Here's what I found on the python.org ftp site: [...] And then there's CVS... Which doesn't build for the really early versions. I think python1.0.1.tar.gz is as old as it's easy to get. Can we assume that the 0.9.1 version Guido posted to alt.sources does build? Dunno! Google Groups for Python 0.9.1 group:alt.sources. Oh good grief, Python 0.9.1 part 01/21, I'm much too lazy to sort all that out today... still, would be nice if someone did; in ftp:[EMAIL PROTECTED]:/pub/python/src/README we find: Older sources = If you find an older Python release (e.g. 0.9.8), we're interested in getting a copy! [EMAIL PROTECTED] Cheers, mwh -- teratorn I must be missing something. It is not possible to be this stupid. Yhg1s you don't meet a lot of actual people, do you? -- http://mail.python.org/mailman/listinfo/python-list
Re: Python -- (just) a successful experiment?
Robert Kern [EMAIL PROTECTED] writes: What I'm trying to say is that posting to c.l.py is absolutely ineffective in achieving that goal. Code attracts people that like to code. Tedious, repetitive c.l.py threads attract people that like to write tedious, repetitive c.l.py threads. +1 QOTW, though I may be too late (I've become rather temporally confused when it comes to clpy). Cheers, mwh -- Monte Carlo sampling is no way to understand code. -- Gordon McMillan, comp.lang.python -- http://mail.python.org/mailman/listinfo/python-list
Re: Syntax error after upgrading to Python 2.4
[EMAIL PROTECTED] writes: On Sat, Aug 06, 2005 at 05:15:22PM -0400, Terry Reedy wrote: In any case letting developers add new features is part of the price of getting unpaid bug fixes for free software. But note that PSF does not make you to upgrade. Here is the current list of possible downloads. [a mere 8 versions] Oh, don't give such a short list! Here's what I found on the python.org ftp site: [...] And then there's CVS... Which doesn't build for the really early versions. I think python1.0.1.tar.gz is as old as it's easy to get. Cheers, mwh -- Ignoring the rules in the FAQ: 1 slice in spleen and prevention of immediate medical care. -- Mark C. Langston, asr -- http://mail.python.org/mailman/listinfo/python-list
Re: Decline and fall of scripting languages ?
Donn Cave [EMAIL PROTECTED] writes: On the contrary, there are a couple. Ghc is probably the leading implementation these days, and by any reasonable measure, it is serious. Objective CAML is indeed not a pure functional language. *cough* unsafePerformIO *cough* Cheers, mwh -- MAN: How can I tell that the past isn't a fiction designed to account for the discrepancy between my immediate physical sensations and my state of mind? -- The Hitch-Hikers Guide to the Galaxy, Episode 12 -- http://mail.python.org/mailman/listinfo/python-list
Re: Art of Unit Testing
Terry Reedy [EMAIL PROTECTED] writes: Paul Rubin http://phr.cx@NOSPAM.invalid wrote in message news:[EMAIL PROTECTED] I knew there was some other one before unittest came along but I thought unittest was supposed to replace the older stuff. I believe unittest was an alternative rather than replacement for doctest. Around the time pyunit got added to the stdlib (as unittest) there were some other candidates (one written by AMK et al at the MEMS Exchange -- Sancho or something like that?), and pyunit got chosen by the python-dev cabal, for reasons I don't recall now. It's probably in the archives. What's the preferred one, Pythonically speaking? py.test was written, apparently, by pypy folks to replace unittest for pypy testing. That is a teeny bit inaccurate -- it's mostly Holger Krekel's work, though his work on pypy was quite a lot of the inspiration. Armin Rigo helped a lot, the other PyPy people less so, on average. To me, it is more Pythonic in spirit, and I plan to try it for an upcoming TDD project. It's very cool, indeed. Cheers, mwh -- Darn! I've only got 10 minutes left to get falling-down drunk! I suppose I'll have to smoke crack instead now. -- Tim Peters is checking things in on 2002-12-31 -- http://mail.python.org/mailman/listinfo/python-list
Re: Py: a very dangerous language
Benjamin Niemann [EMAIL PROTECTED] writes: Luis M. Gonzalez wrote: This is great! It's absolutely useless, like a real therapist, but it's free! Never heard of Eliza? Even Emacs has it built in (Menu Help - Emacs Psychiatrist). M-x psytab return Cheers, mwh -- Gullible editorial staff continues to post links to any and all articles that vaguely criticize Linux in any way. -- Reason #4 for quitting slashdot today, from http://www.cs.washington.edu/homes/klee/misc/slashdot.html -- http://mail.python.org/mailman/listinfo/python-list
Re: Is this Pythonic?
[EMAIL PROTECTED] (phil hunt) writes: Suppose I'm writing an abstract superclass which will have some concrete subclasses. I want to signal in my code that the subclasses will implement certain methods. Is this a Pythonic way of doing what I have in mind:

    class Foo: # abstract superclass
        def bar(self):
            raise Exception, "Implemented by subclass"
        def baz(self):
            raise Exception, "Implemented by subclass"

    class Concrete(Foo):
        def bar(self):
            #...actual implementation...
        def baz(self):
            #...actual implementation...

Well, I guess you know this, but if Foo contains no implementation at all, why inherit from it? It would (possibly) be more Pythonic to define an interface instead, or just use duck typing. Cheers, mwh -- nonono, while we're making wild conjectures about the behavior of completely irrelevant tasks, we must not also make serious mistakes, or the data might suddenly become statistically valid. -- Erik Naggum, comp.lang.lisp -- http://mail.python.org/mailman/listinfo/python-list
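A common spelling of the same intent uses the built-in NotImplementedError instead of a bare Exception (later Pythons also offer the abc module for enforced abstractness); a minimal sketch:

```python
class Foo:
    # abstract superclass: subclasses are expected to override these
    def bar(self):
        raise NotImplementedError("bar must be implemented by a subclass")

class Concrete(Foo):
    def bar(self):
        return 42

assert Concrete().bar() == 42
try:
    Foo().bar()
except NotImplementedError:
    pass
else:
    raise AssertionError("expected NotImplementedError")
```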
Re: Is this Pythonic?
[EMAIL PROTECTED] (phil hunt) writes: It would (possibly) be more Pythonic to define an interface instead, Does Python have the concept of an interface? When was that added? It doesn't have one included, but there are at least two implementations, zope.interface and PyProtocols (I'm sure google will find you more on both of these). -- ... with these conditions cam the realisation that ... nothing turned a perfectly normal healthy individual into a great political or military leader better than irreversible brain damage. -- The Hitch-Hikers Guide to the Galaxy, Episode 11 -- http://mail.python.org/mailman/listinfo/python-list
Re: Changing interpreter's default output/error streams
Ira [EMAIL PROTECTED] writes: OK let me rephrase, the standard error stream (and if I'm not mistaken also the one that PyErr_Print() writes to) is the python object sys.stderr. Now say I'd go ahead and write the following in python... Ah, OK, I think you're mistaken, and PyErr_Print prints to the C level FILE* stderr (I agree my first post was confusing on this point, sorry about that...). Cheers, mwh -- Acapnotic jemfinch: What's to parse? A numeric code, perhaps a chicken, and some arguments -- from Twisted.Quotes -- http://mail.python.org/mailman/listinfo/python-list
Re: Changing interpreter's default output/error streams
Robert Kern [EMAIL PROTECTED] writes: Michael Hudson wrote: Ira [EMAIL PROTECTED] writes: OK let me rephrase, the standard error stream (and if I'm not mistaken also the one that PyErr_Print() writes to) is the python object sys.stderr. Now say I'd go ahead and write the following in python... Ah, OK, I think you're mistaken, and PyErr_Print prints to the C level FILE* stderr (I agree my first post was confusing on this point, sorry about that...). No, it doesn't. It grabs the appropriate object from sys.stderr. Ah, you're right, I somehow ended up reading PySys_WriteStderr... Cheers, mwh -- The only problem with Microsoft is they just have no taste. -- Steve Jobs, (From _Triumph of the Nerds_ PBS special) and quoted by Aahz on comp.lang.python -- http://mail.python.org/mailman/listinfo/python-list
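At the Python level, the distinction matters because anything routed through the sys.stderr object can be intercepted by rebinding it. A sketch using traceback.print_exc, which writes to sys.stderr by default, the same object PyErr_Print consults:

```python
import io
import sys
import traceback

buf = io.StringIO()
old_stderr = sys.stderr
sys.stderr = buf             # capture anything written via the Python-level object
try:
    try:
        1 / 0
    except ZeroDivisionError:
        traceback.print_exc()  # goes to sys.stderr, i.e. our buffer
finally:
    sys.stderr = old_stderr    # always restore the real stream

assert "ZeroDivisionError" in buf.getvalue()
```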
Re: shelve: writing out updates?!
Robert Kern [EMAIL PROTECTED] writes: The Documentation Wiki could then be used as a basis for the official documentation that comes with each new release. Does this idea make some sense? Or are there hidden pitfalls? Yes! Someone actually has to do it! I think someone is working on this as part of the summer of code program (but am not sure about that...). Cheers, mwh -- QNX... the OS that walks like a duck, quacks like a duck, but is, in fact, a platypus. ... the adventures of porting duck software to the platypus were avoidable this time.-- Chris Klein, asr -- http://mail.python.org/mailman/listinfo/python-list
Re: Asking the user a question and giving him a default answer he can edit
levander [EMAIL PROTECTED] writes: Basically, I've got a bunch of questions to ask a user, the vast majority of which, the answer will only vary by the last few characters. What I'd like to do is every time the user is asked a question, give him the default answer as just whatever he answered last time. But, I want him to be able to edit this default answer. And, the edited answer is what I want to operate on inside my program. Something like this?

     /  def f():
     |..     readline.set_startup_hook(lambda : readline.insert_text('aaa'))
     |..     return raw_input()
     \__

Basically, I want to give the user a line editor, with a default value already populated. Or you could use my pyrepl package (see google for that). Cheers, mwh -- In case you're not a computer person, I should probably point out that Real Soon Now is a technical term meaning sometime before the heat-death of the universe, maybe. -- Scott Fahlman [EMAIL PROTECTED] -- http://mail.python.org/mailman/listinfo/python-list
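Wrapped up as a helper, with the hook cleared afterwards so later prompts start empty (the function name is made up, and this needs a real terminal to exercise interactively):

```python
import readline

def ask(prompt, default):
    # pre-load the line editor with an editable default answer
    readline.set_startup_hook(lambda: readline.insert_text(default))
    try:
        return input(prompt)          # raw_input() in Python 2
    finally:
        readline.set_startup_hook(None)  # don't leak the hook into later prompts

assert callable(ask)
```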
Re: Changing interpreter's default output/error streams
Ira [EMAIL PROTECTED] writes: Using an embedded interpreter, how do I change its default output streams (specifically the one used by PyErr_Print() which I'm guessing is the default error stream)? It looks as though it writes to stderr unconditionally. But most of the reasons for ending up in PyErr_Print can be intercepted at a higher level (I think -- I mean sys.excepthook and co here). Cheers, mwh -- ARTHUR: Yes. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying Beware of the Leopard. -- The Hitch-Hikers Guide to the Galaxy, Episode 1 -- http://mail.python.org/mailman/listinfo/python-list
Re: Ten Essential Development Practices
Steve Holden [EMAIL PROTECTED] writes: If I can point out the obvious, the output from import this *is* headed The Zen of Python, so clearly it isn't intended to be universal in its applicability. It's also mistitled there, given that it was originally posted as '19 Pythonic Theses' and nailed to, erm, something. Cheers, mwh -- Remember - if all you have is an axe, every problem looks like hours of fun. -- Frossie -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html -- http://mail.python.org/mailman/listinfo/python-list
Re: baffling error-handling problem
Chris Fonnesbeck [EMAIL PROTECTED] writes: I thought I knew how to do error handling in python, but apparently I don't. I have a bunch of code to calculate statistical likelihoods, and use error handling to catch invalid parameters. For example, for the [...] bernoulli distribution, I have: I have no idea how this can happen, given how I have coded this. Anyone see what I must be missing? Is it possible you have two classes called LikelihoodError? One in __main__, one in some_module_of_yours, maybe. Cheers, mwh -- bruce how are the jails in israel? itamar well, the one I was in was pretty nice -- from Twisted.Quotes -- http://mail.python.org/mailman/listinfo/python-list
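The failure mode suggested here is easy to reproduce: two class statements with the same name create two distinct classes, and an except clause matches by class identity/inheritance, not by name:

```python
class LikelihoodError(Exception):
    pass

OldLikelihoodError = LikelihoodError    # keep a handle on the first class

class LikelihoodError(Exception):       # e.g. the module defining it was re-executed
    pass

caught_by_name = False
try:
    raise OldLikelihoodError("invalid parameter")
except LikelihoodError:                 # this name now refers to the *second* class
    caught_by_name = True
except Exception:
    pass

assert not caught_by_name   # the handler never matched the old class
```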
Re: What license to choose for Python programs? (PSF License vs. GPL/LGPL)
[EMAIL PROTECTED] (Volker Grabsch) writes: Hi! I noticed that many packages in the PyPI are using the PSF License. Does this have a special reason? Lots of people are misguided, maybe. Anyway, you want to be reading this: http://wiki.python.org/moin/PythonSoftwareFoundationLicenseFaq Cheers, mwh -- Roll on a game of competetive offence-taking. -- Dan Sheppard, ucam.chat -- http://mail.python.org/mailman/listinfo/python-list
Re: goto
[EMAIL PROTECTED] writes: what is the equivalent of C languages' goto statement in python? You really shouldn't use goto. Fortunately you can't. Steven Of course you can :-) Steven You can write your own Python interpreter, in Python, and add a Steven goto to it. Maybe easier would be to write a Python assembler (there's probably already one out there) and just write to Python's virtual machine... The blockstack gets in the way. Really, I think Richie's goto module is about as good as it can get without vm surgery (apart from the performance, I'd guess). Cheers, mwh -- Reading Slashdot can [...] often be worse than useless, especially to young and budding programmers: it can give you exactly the wrong idea about the technical issues it raises. -- http://www.cs.washington.edu/homes/klee/misc/slashdot.html#reasons -- http://mail.python.org/mailman/listinfo/python-list
Re: Hash functions
Steven D'Aprano [EMAIL PROTECTED] writes: Do people often use hash() on built-in types? Only implicitly. What do you find it useful for? Dictionaries :) How about on custom classes? Same here. Can anyone give me some good tips or hints for writing and using hash functions in Python? Well, the usual tip for writing them is, don't, unless you need to. If you implement __eq__, then you need to, so it's fairly common to just hash a tuple containing the things that are considered by the __eq__ method. Something like:

    class C(object):
        def __init__(self, a, b, c):
            self.a = a
            self.b = b
            self.c = c
        def __eq__(self, other):
            return self.a == other.a and self.b == other.b
        def __hash__(self):
            return hash((self.a, self.b))

Cheers, mwh -- I'm a keen cyclist and I stop at red lights. Those who don't need hitting with a great big slapping machine. -- Colin Davidson, cam.misc -- http://mail.python.org/mailman/listinfo/python-list
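Exercising that class shows why __hash__ must agree with __eq__: objects that compare equal must hash equal, so they land in the same dict bucket and lookup works:

```python
class C(object):
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
    def __eq__(self, other):
        return self.a == other.a and self.b == other.b
    def __hash__(self):
        return hash((self.a, self.b))

d = {C(1, 2, 3): "value"}
# c differs, but __eq__ and __hash__ both ignore it, so the lookup succeeds
assert C(1, 2, 3) == C(1, 2, 99)
assert d[C(1, 2, 99)] == "value"
```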
Re: Are there any decent python memory profilers available?
[EMAIL PROTECTED] writes: I have a rather large python application (uses around 40MB of memory to start) that gradually chews up memory over many hours. I've done a little googling around, but it looks like I'm faced with prowling through the gc.get_objects() myself. I need a tool to identify where the memory is going. It might even be a leak in a DLL, so maybe a pure python profiler isn't the best, although it would certainly help localize the problem. One is being written as part of Google's Summer Of Code program. Cheers, mwh -- Java sucks. [...] Java on TV set top boxes will suck so hard it might well inhale people from off their sofa until their heads get wedged in the card slots. --- Jon Rabone, ucam.chat -- http://mail.python.org/mailman/listinfo/python-list
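The prowling through gc.get_objects() the poster mentions can start with a crude census of live objects by type; diffing two snapshots taken over time points at what is accumulating. A sketch, with a simulated leak:

```python
import gc
from collections import Counter

def census():
    # count live objects tracked by the collector, keyed by type name
    return Counter(type(o).__name__ for o in gc.get_objects())

before = census()
leak = [[i] for i in range(1000)]   # simulate something holding on to objects
after = census()

growth = after - before             # only positive differences survive
assert growth["list"] >= 1000       # the diff points straight at the leaking type
```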
Re: Why does python break IEEE 754 for 1.0/0.0 and 0.0/0.0?
Grant Edwards [EMAIL PROTECTED] writes: I've read over and over that Python leaves floating point issues up to the underlying platform. Please read the conversation Tim and I are having in the Re: math.nroot [was Re: A brief question.] elsewhere in this same newsgroup. Cheers, mwh -- Also, does the simple algorithm you used in Cyclops have a name? Not officially, but it answers to hey, dumb-ass! -- Neil Schemenauer and Tim Peters, 23 Feb 2001 -- http://mail.python.org/mailman/listinfo/python-list
Re: math.nroot [was Re: A brief question.]
Tim Peters [EMAIL PROTECTED] writes: [Tim] Ah, but as I've said before, virtually all C compilers on 754 boxes support _some_ way to get at this stuff. This includes gcc before C99 and fenv.h -- if the platforms represented in fpectlmodule.c were happy to use gcc, they all could have used the older gcc spellings (which are in fpectlmodule.c, BTW, under the __GLIBC__ #ifdef). [Michael] Um, well, no, not really. The stuff under __GLIBC__ unsurprisingly applies to platforms using the GNU project's implementation of the C library, and GCC is used on many more platforms than just that (e.g. OS X, FreeBSD). Good point taken: pairings of C compilers and C runtime libraries are somewhat fluid. So if all the platforms represented in fpectlmodule.c were happy to use glibc, they all could have used the older glibc spellings. Apparently the people who cared enough on those platforms to contribute code to fpectlmodule.c did not want to use glibc, though. It may not have been possible for them, after a little googling. It seems that while glibc is theoretically portable to systems other than linux, in practice it ain't. In the end, I still don't know why there would be a reason to hope that an endless variety of other libms would standardize on the C99 spellings. Point. But as you said in the post I replied to, soon (maybe even now) there won't be an endless variety of other libms to worry about. ... Even given that, the glibc section looks mighty Intel specific to me (I don't see why 0x1372 should have any x-architecture meaning). Why not? I don't know whether glibc ever did this, but Microsoft's spelling of this stuff used to, on Alphas (when MS compilers still supported Alphas), pick apart the bits and rearrange them into the bits needed for the Alpha's FPU control registers. Well, I considered that but decided that it was far too insane.
Maybe that was insufficient cynicism :) In any case, glibc's docs today only mention the C99 (and C99-like, for setting traps) interfaces, and I can't be arsed to go through old docs to see if _FPU_SETCW or __setfpucw were ever documented and if so what they were documented to do. One thing GCC doesn't yet support, it turns out, is the #pragma STDC FENV_ACCESS ON gumpf, which means the optimiser is all too willing to reorder

    feclearexcept(FE_ALL_EXCEPT);
    r = x * y;
    fe = fetestexcept(FE_ALL_EXCEPT);

into

    feclearexcept(FE_ALL_EXCEPT);
    fe = fetestexcept(FE_ALL_EXCEPT);
    r = x * y;

Argh! Declaring r 'volatile' made it work. Oh, sigh. One of the lovely ironies in all this is that CPython _could_ make for an excellent 754 environment, precisely because it does such WYSIWYG code generation. One of my motivations here (other than the instantly discountable one of aesthetic purity :) is to make Python a good system for prototyping numeric codes. Optimizing-compiler writers hate hidden side effects, and every fp operation in 754 is swimming in them -- but Python couldn't care much less. Thing is, I don't see how this particular stuff should be that hard to implement -- if you have an optimizing compiler that can move code around, presumably you have some way of saying what can't be moved past what else. But you're not going to catch me diving into GCC's sources :) (In other news, passing -frounding-math to GCC may also have the desired effect, but I haven't tested this). Anyway, you're rediscovering the primary reason you have to pass a double lvalue to the PyFPE_END_PROTECT protect macro. PyFPE_END_PROTECT(v) expands to an expression including the subexpression PyFPE_dummy((v)) where PyFPE_dummy() is an extern that ignores its double* argument. The point is that this dance prevents C optimizers from moving the code that computes v below the code generated for PyFPE_END_PROTECT(v).
Since v is usually used soon after in the routine, it also discourages the optimizer from moving code up above the PyFPE_END_PROTECT(v) (unless the C compiler does cross-file analysis, it has to assume that PyFPE_dummy(&(v)) may change the value of v). These tricks may be useful here too -- fighting C compilers to the death is part of this game, alas.

I did read those comments. Maybe passing the address of the result variable to the routine that checks the flags and decides whether to raise an exception would be a good hack (and, yes, writing that function in another file so GCC doesn't bloody well inline it and then rearrange all my code).

PyFPE_END_PROTECT() incorporates an even stranger trick, and I wonder how gcc deals with it. The Pentium architecture made an agonizing (for users who care) choice: if you have a particular FP trap enabled (let's say overflow), and you do an fp operation that overflows, the trap doesn't actually fire until the _next_ fp operation (of any kind) occurs. You can honest-to-God have, e.g., an overflowing fp add on an
Re: math.nroot [was Re: A brief question.]
Tim Peters [EMAIL PROTECTED] writes:

[Michael Hudson] I doubt anyone else is reading this by now, so I've trimmed quotes fairly ruthlessly :)

Damn -- there goes my best hope at learning how large a message gmail can handle before blowing up wink. OK, I'll cut even more.

Heh.

[Michael] Can't we use the stuff defined in Appendix F and header fenv.h of C99 to help here? I know this stuff is somewhat optional, but it's available AFAICT on the platforms I actually use (doesn't mean it works, of course).

[Tim] It's an entirely optional part of C99.

Hmm, is fenv.h optional? I'm not finding those words. I know Appendix F is.

fenv.h is required, but the standard is carefully worded so that fenv.h may not be of any actual use. For example, a conforming implementation can define FE_ALL_EXCEPT as 0 (meaning it doesn't define _any_ of the (optional!) signal-name macros: FE_DIVBYZERO, etc). That in turn makes feclearexcept() (and so on) pretty much useless -- you couldn't specify any flags.

Makes sense.

The most important example of a compiler that doesn't support any of that stuff is Microsoft's, although they have their own MS-specific ways to spell most of it.

OK, *that's* a serious issue. If you had to guess, do you think it likely that MS would ship fenv.h in the next iteration of VC++?

Sadly not. If they wanted to do that, they had plenty of time to do so before VC 7.1 was released (C99 ain't exactly new anymore). As it says on http://en.wikipedia.org/wiki/C_programming_language, MS and Borland (among others) appear to have no interest in C99. In part I expect this is because C doesn't pay their bills nearly so much as C++ does, and C99 isn't a standard from the C++ world.

This also makes sense, in a slightly depressing way.

In what way does C99's fenv.h fail? Is it just insufficiently available, or is there some conceptual lack?

Just that it's not universally supported. Look at fpectlmodule.c for a sample of the wildly different ways it _is_ spelled across some platforms.
C'mon, fpectlmodule.c is _old_. Maybe I'm stupidly optimistic, but perhaps in the last near-decade things have got a little better here.

Ah, but as I've said before, virtually all C compilers on 754 boxes support _some_ way to get at this stuff. This includes gcc before C99 and fenv.h -- if the platforms represented in fpectlmodule.c were happy to use gcc, they all could have used the older gcc spellings (which are in fpectlmodule.c, BTW, under the __GLIBC__ #ifdef).

Um, well, no, not really. The stuff under __GLIBC__ unsurprisingly applies to platforms using the GNU project's implementation of the C library, and GCC is used on many more platforms than just that (e.g. OS X, FreeBSD). This is all part of the "what exactly are you claiming supports 754, again?" game, I guess. Even given that, the glibc section looks mighty Intel specific to me (I don't see why 0x1372 should have any x-architecture meaning).

Now that GCC supports, or aims to support, or will one day support C99 I think you're right in that any GCC-using code can use the same spelling. One thing GCC doesn't yet support, it turns out, is the #pragma STDC FENV_ACCESS ON gumpf, which means the optimiser is all too willing to reorder

    feclearexcept(FE_ALL_EXCEPT);
    r = x * y;
    fe = fetestexcept(FE_ALL_EXCEPT);

into

    feclearexcept(FE_ALL_EXCEPT);
    fe = fetestexcept(FE_ALL_EXCEPT);
    r = x * y;

Argh! Declaring r 'volatile' made it work.

But they didn't, so they're using minority compilers. I used to write compilers for a living, but I don't think this is an inside secret anymore wink: there are a lot fewer C compiler writers than there used to be, and a lot fewer companies spending a lot less money on developing C compilers than there used to be.

Indeed. Also, fewer architectures and fewer C libraries.

As with other parts of C99, I'd be in favor of following its lead, and defining Py_ versions of the relevant macros and functions.

Makes sense!
A maze of #ifdefs could work too, provided we defined a PyWhatever_XYZ API to hide platform spelling details.

Hopefully it wouldn't be that bad a maze; frankly GCC & MSVC++ cover more than all the cases I care about. I'd be happy to settle for just those two at the start.

As with threading too, Python has suffered from trying to support dozens of unreasonable platforms, confined to the tiny subset of abilities common to all of them. If, e.g., HP-UX wants a good Python thread or fp story, let HP contribute some work for a change. I think we have enough volunteers to work out good gcc and MSVC stories -- although I expect libm to be an everlasting headache.

Well, yes. I think a 'thin wrapper' approach like some of the os module stuff makes sense here.

Cheers, mwh -- I've reinvented the idea of variables and types as in a programming language, something I do on every
Re: math.nroot [was Re: A brief question.]
I doubt anyone else is reading this by now, so I've trimmed quotes fairly ruthlessly :)

Tim Peters [EMAIL PROTECTED] writes:

Actually, I think I'm confused about when Underflow is signalled -- is it when a denormalized result is about to be returned or when a genuine zero is about to be returned?

Underflow in 754 is involved -- indeed, the definition is different depending on whether the underflow trap is or is not enabled(!).

!

Sure, but we already have a conforming implementation of 854 with settable traps and flags and rounding modes and all that jazz.

No, we don't, but I assume you're talking about the decimal module.

Uh, yes. Apologies for the lack of precision.

If so, the decimal module enables traps on overflow, invalid operation, and divide-by-0 by default. A conforming implementation would have to disable them by default. Apart from that difference in defaults, the decimal module does intend to conform fully to the proposed decimal FP standard.

Right, that's what I meant.

Maybe we should just implement floats in Python.

Certainly the easiest way to get 754 semantics across boxes! Been there, done that, BTW -- it's mondo slow.

No doubt.

(In the meantime can we just kill fpectl, please?)

Has it been marked as deprecated yet (entered into the PEP for deprecated modules, raises deprecation warnings, etc)?

I don't know. IMO it should become deprecated, but I don't have time to push that. A bit of googling suggests that more people pass --with-fpectl to configure than I expected, but I doubt more than 1% of those actually use the features thus provided (of course, this is a guess).

I expect 1% is way high. Before we stopped building fpectl by default, Guido asked and heard back that there were no known users even at LLNL anymore (the organization that contributed the code).

Interesting.

You're seeing native HW fp behavior then.

But anyway, shouldn't we try to raise exceptions in these cases?
Note that the only cases you could have been talking about here were the plain * and / examples above.

Ah, OK, this distinction passed me by.

Why doesn't Python already supply a fully 754-conforming arithmetic on 754 boxes? It's got almost everything to do with implementation headaches, and very little to do with what users care about. Because all the C facilities are a x-platform mess, the difference between calling and not calling libm can be the difference between using the platform libm or Python needing to write its own libm. For example, there's no guarantee that math.sqrt(-1) will raise ValueError in Python, because Python currently relies on the platform libm sqrt to detect _and report_ errors. The C standards don't require much of anything there.

Can't we use the stuff defined in Appendix F and header fenv.h of C99 to help here? I know this stuff is somewhat optional, but it's available AFAICT on the platforms I actually use (doesn't mean it works, of course).

It's an entirely optional part of C99.

Hmm, is fenv.h optional? I'm not finding those words. I know Appendix F is.

Python doesn't require C99.

Sure. But it would be possible to, say, detect C99 floating point facilities at ./configure time and use them if available.

The most important example of a compiler that doesn't support any of that stuff is Microsoft's, although they have their own MS-specific ways to spell most of it.

OK, *that's* a serious issue. If you had to guess, do you think it likely that MS would ship fenv.h in the next iteration of VC++?

I'm thinking something like this:

    fexcept_t flags;
    feclearexcept(FE_ALL_EXCEPT);
    /* stuff, e.g. r = exp(PyFloat_AS_DOUBLE(x)) */
    fegetexceptflag(&flags, FE_ALL_EXCEPT);
    /* inspect flags to see if any of the flags we're
       currently trapping are set */

Assuming the platform libm sets 754 flags appropriately, that's a fine way to proceed on platforms that also support that specific spelling. It even seems to work, on darwin/ppc (so, with GCC) at least. ...
Well, you can at least be pretty sure that an infinite result is the result of an overflow condition, I guess.

There are at least two other causes: some cases of divide-by-0 (like 1/0 returns +Inf), and non-exceptional production of an infinite result from infinite operands (like sqrt(+Inf) should return +Inf, and there's nothing exceptional about that).

Yeah, but I think those can be dealt with (if we really wanted to).

They certainly could be.

The more I think about it, the less I think detecting stuff this way is sane. BTW, since there's so little the HW can help us with here in reality (since there's no portable way to get at it),

In what way does C99's fenv.h fail? Is it just insufficiently available, or is there some conceptual lack?

Just that it's not universally supported. Look at fpectlmodule.c for a sample of the wildly different
Re: math.nroot [was Re: A brief question.]
Tim Peters [EMAIL PROTECTED] writes:

[Tim Peters] All Python behavior in the presence of infinities, NaNs, and signed zeroes is a platform-dependent accident, mostly inherited from that all C89 behavior in the presence of infinities, NaNs, and signed zeroes is a platform-dependent crapshoot.

[Michael Hudson] As you may have noticed by now, I'd kind of like to stop you saying this :) -- at least on platforms where doubles are good old-fashioned 754 8-byte values.

[Tim] Nope, I hadn't noticed! I'll stop saying it when it stops being true, though wink. Note that since there's not even an alpha out for 2.5 yet, none of the good stuff you did in CVS counts for users yet.

[Michael] Well, obviously. OTOH, there's nothing I CAN do that will be useful for users until 2.5 actually comes out.

Sure. I was explaining why I keep saying what you say you don't want me to say: until 2.5 actually comes out, what purpose would it serve to stop warning people that 754 special-value behavior is a x-platform crapshoot? Much of it (albeit less so) will remain a crapshoot after 2.5 comes out too.

Well, OK, I phrased my first post badly. Let me try again: I want to make this situation better, as you may have noticed.

But first, I'm going to whinge a bit, and lay out some stuff that Tim at least already knows (and maybe get some stuff wrong, we'll see). Floating point standards lay out a number of conditions: Overflow (number too large in magnitude to represent), Underflow (non-zero number too small in magnitude to represent), Subnormal (non-zero number too small in magnitude to represent in a normalized way), ...

The 754 standard has five of them: underflow, overflow, invalid operation, inexact, and divide by 0 (which should be understood more generally as a singularity; e.g., divide-by-0 is also appropriate for log(0)).

OK, the decimal standard has more, which confused me for a bit (presumably it has more because it doesn't normalize after each operation).
The conditions in IBM's decimal standard map, many-to-one, on to a smaller collection of signals in that standard. It has 8 signals: the 5 I named above from 754, plus clamped, rounded, and subnormal. Distinctions are excruciatingly subtle; e.g., rounded and inexact would be the same thing in 754, but, as you suggest, in the decimal standard a result can be exact yet also rounded (if it rounds away one or more trailing zeroes), due to the unnormalized model.

Right, yes, that last one confused me for a while. Why doesn't 754 have subnormal? Actually, I think I'm confused about when Underflow is signalled -- is it when a denormalized result is about to be returned or when a genuine zero is about to be returned?

For each condition, it should (at some level) be possible to trap each condition, or continue in some standard-mandated way (e.g. return 0 for Underflow).

754 requires that, yes.

While ignoring the issue of allowing the user to control this, I do wish sometimes that Python would make up its mind about what it does for each condition.

Guido and I agreed long ago that Python should, by default, raise an exception on overflow, invalid operation, and divide by 0, and should not, by default, raise an exception on underflow or inexact.

And, I'll add, should not on rounded, clamped and subnormal too. Sure, but we already have a conforming implementation of 854 with settable traps and flags and rounding modes and all that jazz. Maybe we should just implement floats in Python.

Such defaults favor non-expert use. Experts may or may not be happy with them, so Python should also allow changing the set.

Later :)

That's a problem, though. 754 subsets are barely an improvement over what Python does today:

Well, my contention is that the consistent application of one particular 754 subset would be an improvement. Maybe I'm wrong!

(In the meantime can we just kill fpectl, please?)
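The decimal module referred to throughout this exchange really does expose the whole signal model being discussed: per-context trap enable bits plus sticky flags, with a standard result returned when the trap is disabled. A quick illustration using the modern decimal API:

```python
# The signal machinery discussed above, as exposed by the decimal module:
# each condition either raises (trap enabled) or sets a sticky flag and
# returns the standard-mandated result (trap disabled).
from decimal import Context, Decimal, DivisionByZero

ctx = Context()
ctx.traps[DivisionByZero] = False           # continue: standard result + flag
result = ctx.divide(Decimal(1), Decimal(0))
print(result)                               # Infinity
print(bool(ctx.flags[DivisionByZero]))      # True -- the sticky flag was set

ctx2 = Context()                            # the module's default *does* trap
try:
    ctx2.divide(Decimal(1), Decimal(0))
    trapped = False
except DivisionByZero:
    trapped = True
print(trapped)                              # True
```

The trap-on-by-default behaviour shown for `ctx2` is exactly the "difference in defaults" from a conforming implementation that Tim describes.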
Has it been marked as deprecated yet (entered into the PEP for deprecated modules, raises deprecation warnings, etc)?

I don't know. IMO it should become deprecated, but I don't have time to push that. A bit of googling suggests that more people pass --with-fpectl to configure than I expected, but I doubt more than 1% of those actually use the features thus provided (of course, this is a guess).

There are a bunch of conditions which we shouldn't and don't trap by default -- Underflow for example. For the conditions that probably should result in an exception, there are inconsistencies galore:

    inf = 1e300 * 1e300   # -> Overflow, no exception
    nan = inf/inf         # -> InvalidOperation, no exception

Meaning you're running on a 754 platform whose C runtime arranged to disable the overflow and invalid operation traps.

Isn't that the standard-mandated start up environment?

The 754 standard mandates non-stop mode (all traps disabled
Re: PPC floating equality vs. byte compilation
Terry Reedy [EMAIL PROTECTED] writes:

Tim Peters [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]

[Donn Cave] I ran into a phenomenon that seemed odd to me, while testing a build of Python 2.4.1 on BeOS 5.04, on PowerPC 603e. test_builtin.py, for example, fails a couple of tests with errors claiming that apparently identical floating point values aren't equal. But it only does that when imported, and only when the .pyc file already exists. Not if I execute it directly (python test_builtin.py), or if I delete the .pyc file before importing it and running test_main().

This is a known problem with marshalling INFs and/or NANs.

I hope you've also read all the bits and pieces where Tim says whatever happens to INFs and NANs is a platform dependent crapshoot. We don't test platform dependent crapshoots in test_builtin (or at least, I hope not!).

*This* has supposedly been fixed for 2.5.

Actually, it's likely that Donn's failure has been fixed for Python 2.5 as well, at least if Tim's guess is correct, because the C string-float routines aren't involved in loading .pycs any more.

It would be most helpful to open a bug report, with the output from failing tests. And assign to Tim.

That's mean! :)

In general, this can happen if the platform C string-float routines are so poor that eval(repr(x)) != x ... The ultimate cause is most likely in the platform C library's string-float routines (sprintf, strtod, that kind of thing).

It would also be helpful if you could do some tests in plain C (no Python) testing, for instance, the same values that failed.

Hardly anyone else can ;-).

If you confirm a problem with the C library, you can close the report after opening, leaving it as a note for anyone else working with that platform.

I agree with this bit!

Cheers, mwh -- 112. Computer Science is embarrassed by the computer. -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html -- http://mail.python.org/mailman/listinfo/python-list
Re: pyo contains absolute paths
David Siroky [EMAIL PROTECTED] writes: Hi! When I compile my python files with python -OO into pyo files then they still contain absolute paths of the source files which is undesirable for me. How can I deal with that? Are you trying to save space? In 2.4 and later each code object will contain the same copy of the absolute path, so you can't save that much space. There are probably ways to make .pycs that have a path of , if you really want (see py_compile in the stdlib). Cheers, mwh -- I located the link but haven't bothered to re-read the article, preferring to post nonsense to usenet before checking my facts. -- Ben Wolfson, comp.lang.python -- http://mail.python.org/mailman/listinfo/python-list
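The py_compile route hinted at above hinges on its `dfile` argument: the "purported" filename that gets recorded in the compiled code objects instead of the real path. This sketch uses the modern Python 3 py_compile API (the 2.4-era module took the same `dfile` parameter); the file names and the `"<stripped>"` placeholder are made up for illustration:

```python
# Keeping an absolute source path out of a .pyc, as suggested above:
# py_compile.compile()'s 'dfile' argument is the filename recorded in
# the compiled code objects (their co_filename), replacing the real path.
import os
import py_compile
import tempfile

src = os.path.join(tempfile.mkdtemp(), "example.py")
with open(src, "w") as f:
    f.write("def answer():\n    return 42\n")

pyc = src + "c"
py_compile.compile(src, cfile=pyc, dfile="<stripped>")

with open(pyc, "rb") as f:
    data = f.read()
print(b"<stripped>" in data)    # the purported name is what got recorded
print(src.encode() in data)     # the real absolute path should be absent
```

The cost is that tracebacks from the compiled module will show the purported name rather than a real, openable path.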
Re: frozenset question
Will McGugan [EMAIL PROTECTED] writes: Qiangning Hong wrote: On 7/6/05, Will McGugan [EMAIL PROTECTED] wrote: Hi, Are there any benefits in using a frozenset over a set, other than it being immutable? A frozenset can be used as a key of a dict: Thanks, but I meant to imply that. I was wondering if frozenset was faster or more efficient in some way. No, the 'usable as a dict key' is the main motivation for frozenset's existence. Cheers, mwh -- The bottom tier is what a certain class of wanker would call business objects ... -- Greg Ward, 9 Dec 1999 -- http://mail.python.org/mailman/listinfo/python-list
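The dict-key point from the reply above, in a few lines; the `routes` mapping is an invented example:

```python
# A frozenset is hashable, so it can key a dict; a plain set cannot.
routes = {frozenset({"A", "B"}): 100}    # undirected edge -> cost
print(routes[frozenset({"B", "A"})])     # 100: element order is irrelevant

try:
    bad = {set(["A", "B"]): 100}         # plain set: mutable, hence unhashable
except TypeError:
    print("TypeError: unhashable type")
```

The same property is what lets frozensets be members of other sets.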
Re: math.nroot [was Re: A brief question.]
Tim Peters [EMAIL PROTECTED] writes:

All Python behavior in the presence of infinities, NaNs, and signed zeroes is a platform-dependent accident, mostly inherited from that all C89 behavior in the presence of infinities, NaNs, and signed zeroes is a platform-dependent crapshoot.

As you may have noticed by now, I'd kind of like to stop you saying this :) -- at least on platforms where doubles are good old-fashioned 754 8-byte values.

But first, I'm going to whinge a bit, and lay out some stuff that Tim at least already knows (and maybe get some stuff wrong, we'll see). Floating point standards lay out a number of conditions: Overflow (number too large in magnitude to represent), Underflow (non-zero number too small in magnitude to represent), Subnormal (non-zero number too small in magnitude to represent in a normalized way), ...

For each condition, it should (at some level) be possible to trap each condition, or continue in some standard-mandated way (e.g. return 0 for Underflow). While ignoring the issue of allowing the user to control this, I do wish sometimes that Python would make up its mind about what it does for each condition. There are a bunch of conditions which we shouldn't and don't trap by default -- Underflow for example. For the conditions that probably should result in an exception, there are inconsistencies galore:

    >>> inf = 1e300 * 1e300   # -> Overflow, no exception
    >>> nan = inf/inf         # -> InvalidOperation, no exception
    >>> pow(1e100, 100)       # -> Overflow, exception
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    OverflowError: (34, 'Numerical result out of range')
    >>> math.sqrt(-1)         # -> InvalidOperation, exception
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    ValueError: math domain error

At least we're fairly consistent on DivisionByZero...
If we're going to trap Overflow consistently, we really need a way of getting the special values reliably -- which is what pep 754 is about, and its implementation may actually work more reliably in 2.5 since my recent work... On the issue of platforms that start up processes with traps enabled, I think the correct solution is to find the incantation to turn them off again and use that in Py_Initialize(), though that might upset embedders. Cheers, mwh -- Yosomono rasterman is the millionth monkey -- from Twisted.Quotes -- http://mail.python.org/mailman/listinfo/python-list
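The inconsistencies catalogued in this post are easy to reproduce; the snippet below shows them on a typical 754 box running CPython (as the thread keeps stressing, none of this is guaranteed across platforms):

```python
# The "inconsistencies galore" from the post, observed directly.
import math

inf = 1e300 * 1e300            # Overflow condition: silently returns inf
nan = inf / inf                # Invalid operation: silently returns nan
print(math.isinf(inf))         # True
print(nan != nan)              # True: a NaN never compares equal to itself

try:
    pow(1e100, 100)            # Overflow again, but *this* spelling raises
    pow_raised = False
except OverflowError:
    pow_raised = True

try:
    math.sqrt(-1)              # invalid operation via libm: raises
    sqrt_raised = False
except ValueError:
    sqrt_raised = True

print(pow_raised, sqrt_raised) # True True
```

Same underlying 754 conditions, four different surface behaviours, which is precisely the complaint.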
Re: pickle broken: can't handle NaN or Infinity under win32
Terry Reedy [EMAIL PROTECTED] writes: Grant Edwards [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] I'm working on it. I should have said it's trivial if you have access to the platforms to be supported. I've tested a fix that supports pickle streams generated under Win32 and glibc. That's using the native string representation of a NaN or Inf. A perhaps simpler approach would be to define a string representation for Python to use for NaN and Inf. Just because something isn't defined by the C standard doesn't mean it can't be defined by Python. I believe that changes have been made to marshal/unmarshal in 2.5 CVS with respect to NAN/INF to eliminate annoying/surprising behavior differences between corresponding .py and .pyc files. Perhaps these revisions would be relevant to pickle changes. If you use a binary protocol for pickle, yes. Cheers, mwh -- Java sucks. [...] Java on TV set top boxes will suck so hard it might well inhale people from off their sofa until their heads get wedged in the card slots. --- Jon Rabone, ucam.chat -- http://mail.python.org/mailman/listinfo/python-list
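The "binary protocol" answer can be demonstrated directly: binary pickles store doubles as raw IEEE-754 bytes rather than going through the platform's string-to-float routines, so the special values survive the round trip. A sketch in modern Python (where producing an inf and a NaN no longer needs platform tricks):

```python
# Binary pickle protocols serialize floats as 8 IEEE-754 bytes, so NaN and
# Inf round-trip even where repr()/float() of them would be unportable.
import math
import pickle

inf = float("inf")
nan = inf - inf      # a NaN, produced arithmetically rather than by parsing

for value in (inf, -inf):
    assert pickle.loads(pickle.dumps(value, protocol=2)) == value

restored_nan = pickle.loads(pickle.dumps(nan, protocol=2))
print(math.isnan(restored_nan))   # True
```

This mirrors the marshal change mentioned in the post: sidestep the C library's string representations entirely.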
Re: math.nroot [was Re: A brief question.]
Tim Peters [EMAIL PROTECTED] writes:

[Tim Peters] All Python behavior in the presence of infinities, NaNs, and signed zeroes is a platform-dependent accident, mostly inherited from that all C89 behavior in the presence of infinities, NaNs, and signed zeroes is a platform-dependent crapshoot.

[Michael Hudson] As you may have noticed by now, I'd kind of like to stop you saying this :) -- at least on platforms where doubles are good old-fashioned 754 8-byte values.

Nope, I hadn't noticed! I'll stop saying it when it stops being true, though wink. Note that since there's not even an alpha out for 2.5 yet, none of the good stuff you did in CVS counts for users yet.

Well, obviously. OTOH, there's nothing I CAN do that will be useful for users until 2.5 actually comes out.

But first, I'm going to whinge a bit, and lay out some stuff that Tim at least already knows (and maybe get some stuff wrong, we'll see). Floating point standards lay out a number of conditions: Overflow (number too large in magnitude to represent), Underflow (non-zero number too small in magnitude to represent), Subnormal (non-zero number too small in magnitude to represent in a normalized way), ...

The 754 standard has five of them: underflow, overflow, invalid operation, inexact, and divide by 0 (which should be understood more generally as a singularity; e.g., divide-by-0 is also appropriate for log(0)).

OK, the decimal standard has more, which confused me for a bit (presumably it has more because it doesn't normalize after each operation).

For each condition, it should (at some level) be possible to trap each condition, or continue in some standard-mandated way (e.g. return 0 for Underflow).

754 requires that, yes.

While ignoring the issue of allowing the user to control this, I do wish sometimes that Python would make up its mind about what it does for each condition.
Guido and I agreed long ago that Python should, by default, raise an exception on overflow, invalid operation, and divide by 0, and should not, by default, raise an exception on underflow or inexact.

OK.

Such defaults favor non-expert use. Experts may or may not be happy with them, so Python should also allow changing the set.

Later :) (In the meantime can we just kill fpectl, please?)

There are a bunch of conditions which we shouldn't and don't trap by default -- Underflow for example. For the conditions that probably should result in an exception, there are inconsistencies galore:

    inf = 1e300 * 1e300   # -> Overflow, no exception
    nan = inf/inf         # -> InvalidOperation, no exception

Meaning you're running on a 754 platform whose C runtime arranged to disable the overflow and invalid operation traps.

Isn't that the standard-mandated start up environment?

You're seeing native HW fp behavior then.

But anyway, shouldn't we try to raise exceptions in these cases? I don't think it's a particularly good idea to try to utilize the fp hardware's ability to do this at this stage, btw, but to add some kind of check after each operation.

    >>> pow(1e100, 100)   # -> Overflow, exception
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    OverflowError: (34, 'Numerical result out of range')
    >>> math.sqrt(-1)     # -> InvalidOperation, exception
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    ValueError: math domain error

Unlike the first two examples, these call libm functions.

And the user cares about this why?

Then it's a x-platform crapshoot whether and when the libm functions set errno to ERANGE or EDOM, and somewhat of a mystery whether it's better to reproduce what the native libm considers to be an error, or try to give the same results across platforms. Python makes a weak attempt at the latter.

Well, you can at least be pretty sure that an infinite result is the result of an overflow condition, I guess.

At least we're fairly consistent on DivisionByZero...
When it's a division by 0, yes. It's cheap and easy to test for that. However, many expert uses strongly favor getting back an infinity then instead, so it's not good that Python doesn't support a choice about x/0.

Indeed. But I'd rather work on non-settable predictability first.

If we're going to trap Overflow consistently, we really need a way of getting the special values reliably -- which is what pep 754 is about, and its implementation may actually work more reliably in 2.5 since my recent work...

I don't know what you have in mind.

Well, the reason I headbutted into this stuff again recently was writing (grotty) string to float parsing code for PyPy. If you write (where 'e' is the exponent parsed from, say '1e309'):

    while e > 0:
        result *= 10
        e -= 1

you get an infinite result in the large e case. If instead you write the seemingly much more sensible:

    result *= pow(10, e)

you don't, you get an overflow error instead. This makes me sad. Whether my
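The asymmetry in that last example is reproducible today, and comes down to *where* the overflow happens: multiplying a float by a small int just saturates to the IEEE infinity, while converting one gigantic pow(10, e) integer to float raises. The function names below are mine, wrapping the two spellings from the post:

```python
# The two spellings from the post, side by side.
import math

def scale_by_loop(result, e):
    while e > 0:
        result *= 10      # float * small int: overflow silently yields inf
        e -= 1
    return result

def scale_by_pow(result, e):
    return result * pow(10, e)   # float * huge int: conversion raises

print(math.isinf(scale_by_loop(1.0, 309)))   # True: drifted up into inf

try:
    scale_by_pow(1.0, 309)
    overflowed = False
except OverflowError:                        # int too large to become a float
    overflowed = True
print(overflowed)                            # True
```

So the "seemingly much more sensible" one-shot version is the one that trips the exception, exactly as described.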
Re: Information about Python Codyng Projects Ideas
M1st0 [EMAIL PROTECTED] writes: I hope that here is the right place for this kind of discussion. There's a new mailing list [EMAIL PROTECTED] which is probably more appropriate for specifics, but this list is probably OK for general discussion. Cheers, mwh -- Solaris: Shire horse that dreams of being a race horse, blissfully unaware that its owners don't quite know whether to put it out to grass, to stud, or to the knackers yard. -- Jim's pedigree of operating systems, asr -- http://mail.python.org/mailman/listinfo/python-list
Re: What's the use of changing func_name?
Robert Kern [EMAIL PROTECTED] writes:

could ildg wrote: Thank you for your help. I know the function g is changed after setting the func_name. But I still can't call function g by using f(), when I try to do this, error will occur:

    >>> g.func_name = 'f'
    >>> print g
    <function f at 0x00B2CEB0>
    >>> f()
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
    NameError: name 'f' is not defined

Since the name of g is changed into f, why can't I call it by using f()? Should I call it using f through other ways? Please tell me. Thanks~

Others have answered this particular question, but you're probably still wondering what is the use of changing .func_name if it doesn't also change the name by which you call it. The answer is that there are tools that use the .func_name attribute for various purposes. For example, a documentation generating tool might look at the .func_name attribute to make the proper documentation. Actually, that's probably *the* biggest use case because I can't think of any more significant ones.

Error messages!

Cheers, mwh -- There are two kinds of large software systems: those that evolved from small systems and those that don't work. -- Seen on slashdot.org, then quoted by amk -- http://mail.python.org/mailman/listinfo/python-list
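The distinction in this thread, in modern Python 3 spelling (`__name__` rather than `func_name`): rebinding the attribute changes what introspecting tools and error messages see, never how you call the function. The `msg` construction is an invented stand-in for the kind of consumer the replies have in mind.

```python
# Rebinding a function's name attribute changes metadata, not bindings.
def g():
    return 42

g.__name__ = "f"           # Python 3 spelling of the old func_name

print(g.__name__)          # 'f': tools that read the attribute see the new name
print(g())                 # 42: calling still goes through the binding 'g'

try:
    f()                    # rebinding the attribute created no name 'f'
except NameError:
    print("NameError")

# the kind of consumer the replies have in mind (docs tools, error messages):
msg = "problem in %s()" % g.__name__
print(msg)                 # problem in f()
```

This is also why decorators that wrap a function copy `__name__` across: so the reported name matches the function the user wrote.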
Re: Simple thread-safe counter?
Paul Rubin http://[EMAIL PROTECTED] writes: Tim Peters [EMAIL PROTECTED] writes: The GIL is your friend here: import itertools f = itertools.count().next Thanks, I was hoping something like this would work but was not sure I could rely on it. A similar thing can be done with xrange. But either way sucks if you call it often enough to exceed the size of a Python short int (platform C long). The obvious way with an explicit mutex doesn't have that problem. Xrange, of course :). I don't need to exceed the size of a short int, so either of these should work fine. I wonder what measures the Pypy implementers will take (if any) to make sure these things keep working, but for now I won't worry about it. Well, for now we don't support threads. Easy! Cheers, mwh (no, really, this is for the future) -- Two things I learned for sure during a particularly intense acid trip in my own lost youth: (1) everything is a trivial special case of something else; and, (2) death is a bunch of blue spheres. -- Tim Peters, 1 May 1998 -- http://mail.python.org/mailman/listinfo/python-list
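The GIL-backed counter from this exchange, exercised with real threads: in GIL-ful CPython each call on the count iterator is a single atomic C-level step, so concurrent callers never receive the same value (and the "exceeds a C long" caveat is gone in Python 3, where ints are unbounded). Variable names are mine.

```python
# Tim's GIL-based counter, hammered from several threads at once.
import itertools
import threading

counter = itertools.count()       # .next in Python 2; __next__ / next() now
chunks = []                       # list.append is itself atomic under the GIL
grab = counter.__next__

def worker(n):
    chunks.append([grab() for _ in range(n)])

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

values = [v for chunk in chunks for v in chunk]
print(len(values) == len(set(values)))   # True: no value handed out twice
```

An explicit-mutex version would work everywhere, but this is the "GIL is your friend" shortcut the post describes.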
Re: unittest vs py.test?
Roy Smith [EMAIL PROTECTED] writes: In article [EMAIL PROTECTED], [EMAIL PROTECTED] (Bengt Richter) Is there a package that is accessible without svn? That seems to be its weak point right now. Fortunately, you can get pre-built svn clients for many platforms (http://subversion.tigris.org/project_packages.html#binary-packages), and from there you just have to run a single command (svn get URL). wget -r should work fine! Cheers, mwh -- All obscurity will buy you is time enough to contract venereal diseases. -- Tim Peters, python-dev -- http://mail.python.org/mailman/listinfo/python-list
Re: unittest vs py.test?
Peter Hansen [EMAIL PROTECTED] writes: Raymond Hettinger wrote: [Peter Hansen] This is pretty, but I *want* my tests to be contained in separate functions or methods. In py.test, those would read: def test1(): assert a == b def test2(): raises(Error, func, args) Enclosing classes are optional. So basically py.test skips the import statement, near as I can tell, at the cost of requiring a utility to be installed in the PATH. Where was all that weight that unittest supposedly has? For PyPy we wanted to do some things that the designers of unittest obviously hadn't expected[1], such as formatting tracebacks differently. This was pretty tedious to do[2], involving things like accessing __variables, defining subclasses of certain classes, defining subclasses of other classes so the previous subclasses would actually get used, etc. I've not programmed in Java, but I imagine this is what it feels like all the time... (Not to knock unittest too much, we did manage to get the customizations we needed done, but it wasn't fun). Cheers, mwh [1] this in itself is hardly a criticism: there are many special things about PyPy. [2] *this* is the criticism. -- Well, you pretty much need Microsoft stuff to get misbehaviours bad enough to actually tear the time-space continuum. Luckily for you, MS Internet Explorer is available for Solaris. -- Calle Dybedahl, alt.sysadmin.recovery -- http://mail.python.org/mailman/listinfo/python-list
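The weight difference being argued about, side by side: the plain-function style is what py.test collects by naming convention, shown next to the unittest spelling of the same check (run here programmatically so the contrast is self-contained).

```python
# py.test style vs. unittest style for the same trivial check.
import unittest

# py.test style: a bare function with a bare assert, no imports required.
def test_add():
    assert 1 + 1 == 2

# unittest style: class scaffolding, self, and the assert* method family.
class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

# drive the unittest version without a test runner on the command line
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())   # True
```

The customization pain described in the post (subclassing runners, loaders, and results just to change traceback formatting) lives in exactly those extra unittest layers.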
Re: decorators ?
Skip Montanaro [EMAIL PROTECTED] writes: Jacek Anything you can do with decorators, you could do before (with Jacek the exception of rebinding the __name__ of functions). And while that feature was added because we realized it would be nice if the decorated function could have the same name as the original function, it seems like that change could stand on its own merits. Indeed. I'd been meaning to do it for at least a year... Cheers, mwh -- Never meddle in the affairs of NT. It is slow to boot and quick to crash. -- Stephen Harris -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html -- http://mail.python.org/mailman/listinfo/python-list