Re: [Python-Dev] Store timestamps as decimal.Decimal objects
2012/2/1 Nick Coghlan:
> The secret to future-proofing such an API while only using integers
> lies in making the decimal exponent part of the conversion function
> signature:
>
> def from_components(integer, fraction=0, exponent=-9):
>     return Decimal(integer) + Decimal(fraction) * Decimal((0, (1,), exponent))

The fractional part is not necessarily related to a power of 10. An earlier version of my patch also used powers of 10, but it didn't work (it lost precision) for QueryPerformanceCounter() and was more complex than the new version. The NTP timestamp uses a fraction of 2**32. QueryPerformanceCounter() (used by time.clock() on Windows) uses the CPU frequency.

We may need more information when adding new timestamp formats later. If we expose the "internal structure" used to compute any timestamp format, we cannot change the internal structure later without breaking (one more time) the API. My patch uses the format (seconds: int, floatpart: int, divisor: int). For example, I hesitate to add a field to specify the start of the timestamp: undefined for time.wallclock(), time.clock(), and time.clock_gettime(time.CLOCK_MONOTONIC), Epoch for other timestamps.

My patch is similar to your idea except that everything is done internally, so as not to expose internal structures, and it doesn't touch the decimal or datetime modules. It would be surprising to add a timestamp-related method to the Decimal class.

> This strategy would have negligible performance impact

There is no such performance issue: time.time() performance is exactly the same using my patch. Depending on the requested format, the performance may be better or worse. But even for Decimal, I think that the creation of a Decimal is really "fast" (I should provide numbers :-)).

Victor
___
Python-Dev mailing list
[email protected]
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
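Nick's from_components sketch quoted above can be fleshed out into runnable form; the second/nanosecond values below are illustrative and not taken from either patch:

```python
from decimal import Decimal

def from_components(integer, fraction=0, exponent=-9):
    # Decimal((0, (1,), exponent)) builds the scale factor 10**exponent
    # exactly, without passing through a binary float.
    return Decimal(integer) + Decimal(fraction) * Decimal((0, (1,), exponent))

# 1328054400 seconds plus 123456789 nanoseconds, represented exactly
ts = from_components(1328054400, 123456789)
print(ts)  # 1328054400.123456789
```

Because every intermediate value is a Decimal, no precision is lost for power-of-ten fractions; Victor's objection is about divisors like 2**32 or a CPU frequency, which this signature does not express directly.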
Re: [Python-Dev] Store timestamps as decimal.Decimal objects
On Wed, Feb 1, 2012 at 6:03 PM, Victor Stinner wrote:
> 2012/2/1 Nick Coghlan:
>> The secret to future-proofing such an API while only using integers
>> lies in making the decimal exponent part of the conversion function
>> signature:
>>
>> def from_components(integer, fraction=0, exponent=-9):
>>     return Decimal(integer) + Decimal(fraction) * Decimal((0, (1,), exponent))
>
> The fractional part is not necessarily related to a power of 10. An
> earlier version of my patch also used powers of 10, but it didn't work
> (it lost precision) for QueryPerformanceCounter() and was more complex
> than the new version. The NTP timestamp uses a fraction of 2**32.
> QueryPerformanceCounter() (used by time.clock() on Windows) uses the
> CPU frequency.

If a callback protocol is used at all, there's no reason those details need to be exposed to the callbacks. Just choose an appropriate exponent based on the precision of the underlying API call.

> We may need more information when adding new timestamp formats
> later. If we expose the "internal structure" used to compute any
> timestamp format, we cannot change the internal structure later
> without breaking (one more time) the API.

You're assuming we're ever going to want timestamps that are something more than just a number. That's a *huge* leap (much bigger than increasing the precision, which is the problem we're dealing with now). With arbitrary length integers available, "integer, fraction, exponent" lets you express numbers to whatever precision you like, just as decimal.Decimal does (more on that below).

> My patch is similar to your idea except that everything is done
> internally to not have to expose internal structures, and it doesn't
> touch the decimal or datetime modules. It would be surprising to add a
> timestamp-related method to the Decimal class.

No, you wouldn't add a timestamp-specific method to the Decimal class - you'd add one that let you easily construct a decimal from a fixed-point representation (i.e.
integer + fraction*10**exponent).

>> This strategy would have negligible performance impact
>
> There is no such performance issue: time.time() performance is exactly
> the same using my patch. Depending on the requested format, the
> performance may be better or worse. But even for Decimal, I think that
> the creation of a Decimal is really "fast" (I should provide numbers
> :-)).

But this gets us to my final question. Given that Decimal supports arbitrary precision, *why* increase the complexity of the underlying API by supporting *other* output types? If you're not going to support arbitrary callbacks, why not just have a "high precision" flag to request Decimal instances and be done with it? datetime, timedelta and so forth would be able to get everything they needed from the Decimal value.

As I said in my last message, both a 3-tuple (integer, fraction, exponent) based callback protocol effectively supporting arbitrary output types and a boolean flag to request Decimal values make sense to me, and I could argue in favour of either of them. However, I don't understand the value you see in this odd middle ground of "instead of picking 1 arbitrary precision timestamp representation, whether an integer triple or decimal.Decimal, we're going to offer a few different ones and make you decide which one of them you actually want every time you call the API". That's seriously ducking our responsibilities as language developers - it's our job to make that call, not each user's.

Given the way the discussion has gone, my preference is actually shifting strongly towards just returning decimal.Decimal instances when high precision timestamps are requested via a boolean flag. The flag isn't pretty, but it works, and the extra flexibility of a "type" parameter or a callback protocol doesn't really buy us anything once we have an output type that supports arbitrary precision.
FWIW, I did a quick survey of what other languages seem to offer in terms of high resolution time interfaces:

- Perl appears to have Time::HiRes (it seems to use floats in the API though, so I'm not sure how that works in practice)
- C# (and the CLR) don't appear to care about POSIX and just offer 100 nanosecond resolution in their DateTime libraries
- Java appears to have System.nanoTime(), no idea what they do for filesystem times

However, I don't know enough about how the APIs in those languages work to do sensible searches. It doesn't appear to be a cleanly solved problem anywhere, though.

Cheers,
Nick.

--
Nick Coghlan | [email protected] | Brisbane, Australia
Re: [Python-Dev] Store timestamps as decimal.Decimal objects
On Wed, 1 Feb 2012 14:08:34 +1000 Nick Coghlan wrote:
> On Wed, Feb 1, 2012 at 12:35 PM, Antoine Pitrou wrote:
>> It strikes me as inelegant to have to do so much typing for something
>> as simple as getting the current time. We should approach the
>> simplicity of ``time.time(format='decimal')`` or
>> ``time.decimal_time()``.
>
> Getting the current time is simple (you can already do it), getting
> access to high precision time without performance regressions or
> backwards incompatibilities or excessive code duplication is hard.

The implementation of it might be hard, the API doesn't have to be. You can even use a callback system under the hood, you just don't have to *expose* that complication to the user.

> There's a very simple rule in large scale software development:
> coupling is bad and you should do everything you can to minimise it.

The question is: is coupling worse than exposing horrible APIs? ;) If Decimal were a core object as float is, we wouldn't have this discussion, because returning a Decimal would be considered "natural".

> Victor's approach throws that out the window by requiring that time
> and os know about every possible output format for time values.

Victor's proposal is maximalist in that it proposes several different output formats. Decimal is probably enough for real use cases, though.

> For example, it would become *trivial* to write Alexander's suggested
> "hirestime" module that always returned decimal.Decimal objects:

Right, but that's not even a plausible request. Nobody wants to write a separate time module just to have a different return type.

Regards

Antoine.
Re: [Python-Dev] Store timestamps as decimal.Decimal objects
On Wed, Feb 1, 2012 at 9:08 PM, Antoine Pitrou wrote:
> Right, but that's not even a plausible request. Nobody wants to write a
> separate time module just to have a different return type.

I can definitely see someone doing "import hirestime as time" to avoid having to pass a flag everywhere, though. I don't think that should be the way *we* expose the functionality - I just think it's a possible end user technique we should keep in mind when assessing the alternatives.

As I said in my last reply to Victor though, I'm definitely coming around to the point of view that supporting more than just Decimal is overgeneralising to the detriment of the API design. As you say, if decimal objects were a builtin type, we wouldn't even be considering alternative high precision representations - the only discussion would be about the details of the API for *requesting* high resolution timestamps (and while boolean flags are ugly, I'm not sure there's anything else that will satisfy backwards compatibility constraints).

Cheers,
Nick.

--
Nick Coghlan | [email protected] | Brisbane, Australia
Re: [Python-Dev] Store timestamps as decimal.Decimal objects
> If a callback protocol is used at all, there's no reason those details
> need to be exposed to the callbacks. Just choose an appropriate
> exponent based on the precision of the underlying API call.

If the clock divisor cannot be written as a power of 10, you lose precision, just because your format requires a power of 10. Using (seconds, floatpart, divisor) you don't lose any bits. The conversion function using this tuple can choose how to use these numbers and do its best to optimize the precision (e.g. choose how to round the division).

By the way, my patch uses a dummy integer division (floatpart / divisor). I hesitate to round to the closest integer. For example, 19//10=1, whereas 2 would be a better answer. A possibility is to use (floatpart + (divisor/2)) / divisor.

>> We may need more information when adding new timestamp formats
>> later. If we expose the "internal structure" used to compute any
>> timestamp format, we cannot change the internal structure later
>> without breaking (one more time) the API.
>
> You're assuming we're ever going to want timestamps that are something
> more than just a number. That's a *huge* leap (much bigger than
> increasing the precision, which is the problem we're dealing with
> now).

I tried to design an API supporting future timestamp formats. For time methods, it is maybe not useful to produce a datetime object directly. But for os.stat(), it is just *practical* to get a high-level object directly.

We may add a new float128 type later, and it would be nice to be able to get a timestamp directly as a float128, without having to break the API one more time. Getting a timestamp as a Decimal to convert it to float128 is not optimal. That's why I don't like adding a boolean flag.

It doesn't mean that we should add datetime.datetime or datetime.timedelta right now.
It can be done later, or never :-)

> No, you wouldn't add a timestamp-specific method to the Decimal class
> - you'd add one that let you easily construct a decimal from a fixed
> point representation (i.e. integer + fraction*10**exponent)

Only if you use (intpart, floatpart, exponent). Would this function be useful for something else than timestamps?

> But this gets us to my final question. Given that Decimal supports
> arbitrary precision, *why* increase the complexity of the underlying
> API by supporting *other* output types?

We need to support at least 3 formats: int, float and a high-precision format (e.g. Decimal), to keep backward compatibility.

> datetime, timedelta and so forth would be able to get everything
> they needed from the Decimal value.

Yes. Getting timestamps directly as datetime or timedelta is maybe overkill.

datetime gives more information than a raw number (int, float or Decimal): you don't have to care about the start date of the timestamp. Internally, it would help to support Windows timestamps (number of 100 ns since 1601-01-01), even if we may have to convert the Windows timestamp to an Epoch timestamp if the user requests a number instead of a datetime object (for backward compatibility?).

Victor
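The rounding question Victor raises above - truncating 19/10 to 1 versus rounding it to the closer 2 - comes down to biasing the numerator by half the divisor before the integer division. A minimal sketch (the function names are illustrative, not from his patch):

```python
def truncate_div(floatpart, divisor):
    # what the patch currently does: plain floor division
    return floatpart // divisor

def round_half_div(floatpart, divisor):
    # Victor's alternative: add divisor/2 first, so the result
    # rounds to the nearest integer instead of truncating
    return (floatpart + divisor // 2) // divisor

print(truncate_div(19, 10), round_half_div(19, 10))  # 1 2
```

For a fraction of 19/10 of a unit, truncation loses almost half a unit, while the biased division is never off by more than half a unit.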
Re: [Python-Dev] Store timestamps as decimal.Decimal objects
On Wed, Feb 1, 2012 at 9:40 PM, Victor Stinner wrote:
>> If a callback protocol is used at all, there's no reason those details
>> need to be exposed to the callbacks. Just choose an appropriate
>> exponent based on the precision of the underlying API call.
>
> If the clock divisor cannot be written as a power of 10, you lose
> precision, just because your format requires a power of 10. Using
> (seconds, floatpart, divisor) you don't lose any bits. The conversion
> function using this tuple can choose how to use these numbers and do
> its best to optimize the precision (e.g. choose how to round the
> division).
>
> By the way, my patch uses a dummy integer division (floatpart /
> divisor). I hesitate to round to the closest integer. For example,
> 19//10=1, whereas 2 would be a better answer. A possibility is to use
> (floatpart + (divisor/2)) / divisor.

If you would lose precision, make the decimal exponent (and hence fractional part) larger. You have exactly the same problem when converting to decimal, and the solution is the same (i.e. use as many significant digits as you need to preserve the underlying precision).

> I tried to design an API supporting future timestamp formats. For time
> methods, it is maybe not useful to produce a datetime object directly.
> But for os.stat(), it is just *practical* to get a high-level object
> directly.
>
> We may add a new float128 type later, and it would be nice to be able
> to get a timestamp directly as a float128, without having to break the
> API one more time. Getting a timestamp as a Decimal to convert it to
> float128 is not optimal. That's why I don't like adding a boolean
> flag.

Introducing API complexity now for entirely theoretical future needs is a classic case of YAGNI (You Ain't Gonna Need It). Besides, float128 is a bad example - such a type could just be returned directly where we return float64 now.
(The only reason we can't do that with Decimal is because we deliberately don't allow implicit conversion of float values to Decimal values in binary operations.)

>> But this gets us to my final question. Given that Decimal supports
>> arbitrary precision, *why* increase the complexity of the underlying
>> API by supporting *other* output types?
>
> We need to support at least 3 formats: int, float and a high-precision
> format (e.g. Decimal), to keep backward compatibility.

int and float are already supported today, and a process global switch works for that (since they're numerically interoperable). A per-call setting is only needed for Decimal due to its deliberate lack of implicit interoperability with binary floats.

>> datetime, timedelta and so forth would be able to get everything
>> they needed from the Decimal value.
>
> Yes. Getting timestamps directly as datetime or timedelta is maybe overkill.
>
> datetime gives more information than a raw number (int, float or
> Decimal): you don't have to care about the start date of the timestamp.

That's a higher level concern though - not something the timestamp APIs themselves should be worrying about.

Cheers,
Nick.

--
Nick Coghlan | [email protected] | Brisbane, Australia
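Nick's point - that a power-of-ten representation only loses precision if you skimp on digits - can be checked with the NTP-style 2**32 divisor Victor mentions. The helper below is a sketch for illustration, not part of either proposal; 2**-32 happens to need exactly 23 significant decimal digits, so the default precision of 28 is enough:

```python
from decimal import Decimal, localcontext

def fraction_to_decimal(floatpart, divisor, digits=28):
    # With enough significant digits, a base-2 fraction converts to
    # decimal exactly, since 2 divides 10.
    with localcontext() as ctx:
        ctx.prec = digits
        return Decimal(floatpart) / Decimal(divisor)

frac = fraction_to_decimal(1, 2**32)   # exactly 2**-32
```

Multiplying back by 2**32 recovers exactly 1, confirming that nothing was lost in the conversion.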
Re: [Python-Dev] Python 3 optimizations, continued, continued again...
> How many times did you regenerate this code until you got it right?

Well, honestly, I changed the code generator to "pack" the new optimized instruction derivatives densely into the available opcodes, so that I can make optimal use of what's there. Thus I only generated the code twice for this patch.

> And how do you know that you really got it so right that it was the
> last time ever that you needed your generator for it?

I am positive that I am going to need my code generator in the future, as I have several ideas to increase performance even more. As I have mentioned before, my quickening based inline caching technique is very simple, and if it were to crash, chances are that some of the inline-cache miss guards don't capture all scenarios, i.e., are non-exhaustive. The regression tests run, so do the official benchmarks plus the computer language benchmarks game. In addition, this has been my line of research since 2009, so I have extensive experience with it, too.

> What if the C structure of any of those "several types" ever changes?

Since I optimize interpreter instructions, any change that affects their implementation requires changing the optimized instructions, too. Having the code generator ready for such things would certainly be a good idea (probably also for generating the default interpreter dispatch loop), since you could also add your own "profile" for your application/domain to re-use the remaining 30+ instruction opcodes. The direct answer is that I would need to re-generate the driver file, which is basically a gdb dump plus an Emacs macro (please note that I have not needed to do that since working with ~3.0b1).

I will add a list of the types I use for specializing to the patch section on the "additional resources" page of my homepage (including a fixed patch of what Georg brought to my attention).
--stefan
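For readers unfamiliar with the technique Stefan describes, quickening-based inline caching can be sketched as a toy interpreter loop in which a generic instruction rewrites itself into a type-specialized derivative after observing its operands. This is purely illustrative and bears no resemblance to the actual generated C code:

```python
def run(code, stack):
    # code: list of mutable [opcode, arg] cells, so instructions can
    # rewrite ("quicken") themselves in place during execution
    for instr in code:
        op = instr[0]
        if op == "LOAD_CONST":
            stack.append(instr[1])
        elif op == "BINARY_ADD":
            b, a = stack.pop(), stack.pop()
            if type(a) is int and type(b) is int:
                # quicken: next execution takes the specialized path
                instr[0] = "BINARY_ADD_INT"
            stack.append(a + b)
        elif op == "BINARY_ADD_INT":
            # specialized derivative: in the real implementation this
            # skips the generic type dispatch entirely
            b, a = stack.pop(), stack.pop()
            if type(a) is not int or type(b) is not int:
                instr[0] = "BINARY_ADD"   # inline-cache miss: de-optimize
            stack.append(a + b)
    return stack

code = [["LOAD_CONST", 1], ["LOAD_CONST", 2], ["BINARY_ADD", None]]
stack = run(code, [])
```

After one pass, the generic BINARY_ADD has replaced itself with BINARY_ADD_INT; the miss guard falls back to the generic instruction if a non-int ever shows up, which is the non-exhaustive-guard failure mode Stefan mentions.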
Re: [Python-Dev] Python 3 optimizations, continued, continued again...
Message written by stefan brunthaler on 1 Feb 2012 at 16:55:
>> And how do you know that you really got it so right that it was the
>> last time ever that you needed your generator for it?
>
> I am positive that I am going to need my code generator in the future,
> as I have several ideas to increase performance even more.

Hello, Stefan. First let me thank you for your interest in improving the interpreter. We appreciate and encourage efforts to make it perform better.

But let me put this straight: as an open-source project, we are hesitant to accept changes which depend on closed software. Even if your optimization techniques would result in performance a hundred times better than what is currently achieved, we would still be wary to accept them.

Please note that this is not because of lack of trust or, worse, greed for your code. We need to make sure that under no circumstances our codebase is in danger because something important was left out along the way. Maintenance of generated code is yet another nuisance that had better be strongly justified.

--
Best regards, Łukasz Langa
Senior Systems Architecture Engineer
IT Infrastructure Department
Grupa Allegro Sp. z o.o.
Re: [Python-Dev] Python 3 optimizations, continued, continued again...
> But let me put this straight: as an open-source project, we are hesitant
> to accept changes which depend on closed software. Even if your
> optimization techniques would result in performance a hundred times
> better than what is currently achieved, we would still be wary to accept
> them.
>
> Please note that this is not because of lack of trust or, worse, greed
> for your code. We need to make sure that under no circumstances our
> codebase is in danger because something important was left out along
> the way.

I am positive that the code generator does not depend on any closed-source components; I just use Mako for storing the C code templates that I generate -- everything else I wrote myself. Of course, I'll give the code generator to pydev, too, if necessary. However, I need to strip it down so that it does not do all the other stuff that you don't need. I just wanted to give you the implementation now, since Benjamin said that he wants to see real code and results first. If you want to integrate the inca-optimization, I am going to start working on this asap.

> Maintenance of generated code is yet another nuisance that had better
> be strongly justified.

I agree, but the nice thing is that the technique is very simple: only if you changed a significant part of the interpreter's implementation would you need to change the optimized derivatives, too. If one generates the default interpreter implementation, too, then one gets the optimizations almost for free. For maintenance reasons I chose to use a template-based system, too, since this gives you a direct correspondence between the actual code and what's generated, without interfering with the code generator at all.

--stefan
Re: [Python-Dev] A new dictionary implementation
On 30/01/12 00:30:14, Steven D'Aprano wrote:
>> Mark Shannon wrote:
>>> Antoine Pitrou wrote:
[..]
>> Antoine is right. It is a reorganisation of the dict, plus a couple of
>> changes to typeobject.c and object.c to ensure that instance
>> dictionaries do indeed share keys arrays.
>
> I don't quite follow how that could work.
>
> If I have this:
>
> class C:
>     pass
>
> a = C()
> b = C()
>
> a.spam = 1
> b.ham = 2
>
> how can a.__dict__ and b.__dict__ share key arrays? I've tried reading
> the source, but I'm afraid I don't understand it well enough to make
> sense of it.

They can't.

But then, your class is atypical. Usually, classes initialize all the attributes of their instances in the __init__ method, perhaps like so:

class D:
    def __init__(self, ham=None, spam=None):
        self.ham = ham
        self.spam = spam

As long as you follow the common practice of not adding any attributes after the object has been initialized, your instances can share their keys array. Mark's patch will do that. You'll still be allowed to have different attributes per instance, but if you do that, then the patch doesn't buy you much.

-- HansM
Re: [Python-Dev] Store timestamps as decimal.Decimal objects
On Jan 31, 2012 11:08 PM, "Nick Coghlan" wrote:
> PJE is quite right that using a new named protocol rather than a
> callback with a particular signature could also work, but I don't see
> a lot of advantages in doing so.

The advantage is that it fits your brain better. That is, you don't have to remember another symbol besides the type you wanted. (There's probably fewer keystrokes involved, too.)
Re: [Python-Dev] Python 3 optimizations, continued, continued again...
Let's make one thing clear. The Python core developers need to be able to reproduce your results from scratch, and that means access to the templates, code generators, inputs, and everything else you used. (Of course for stuff you didn't write that's already open source, all we need is a pointer to the open source project and the exact version/configuration you used, plus any local mods you made.)

I understand that you're hesitant to just dump your current mess, and you want to clean it up before you show it to us. That's fine. But until you're ready to show it, we're not going to integrate any of your work into CPython, even though some of us (maybe Benjamin) may be interested in kicking its tires. And remember, it doesn't need to be perfect (in fact perfectionism is probably a bad idea here). But it does need to be open source. Every single bit of it. (And no GPL, please.)

--Guido

2012/2/1 stefan brunthaler:
>> But let me put this straight: as an open-source project, we are
>> hesitant to accept changes which depend on closed software. Even if
>> your optimization techniques would result in performance a hundred
>> times better than what is currently achieved, we would still be wary
>> to accept them.
>>
>> Please note that this is not because of lack of trust or, worse, greed
>> for your code. We need to make sure that under no circumstances our
>> codebase is in danger because something important was left out along
>> the way.
>
> I am positive that the code generator does not depend on any closed
> source components, I just use Mako for storing the C code templates
> that I generate -- everything else I wrote myself.
> Of course, I'll give the code generator to pydev, too, if necessary.
> However, I need to strip it down, so that it does not do all the other
> stuff that you don't need. I just wanted to give you the
> implementation now, since Benjamin said that he wants to see real code
> and results first.
> If you want to integrate the inca-optimization, I am going to start
> working on this asap.
>
>> Maintenance of generated code is yet another nuisance that had better
>> be strongly justified.
>
> I agree, but the nice thing is that the technique is very simple: only
> if you changed a significant part of the interpreter's implementation
> would you need to change the optimized derivatives, too. If one
> generates the default interpreter implementation, too, then one gets
> the optimizations almost for free. For maintenance reasons I chose to
> use a template-based system, too, since this gives you a direct
> correspondence between the actual code and what's generated, without
> interfering with the code generator at all.
>
> --stefan

--
--Guido van Rossum (python.org/~guido)
Re: [Python-Dev] A new dictionary implementation
On Wed, Feb 1, 2012 at 9:13 AM, Hans Mulder wrote:
> On 30/01/12 00:30:14, Steven D'Aprano wrote:
>>> Mark Shannon wrote:
>>>> Antoine Pitrou wrote:
> [..]
>>> Antoine is right. It is a reorganisation of the dict, plus a couple of
>>> changes to typeobject.c and object.c to ensure that instance
>>> dictionaries do indeed share keys arrays.
>>
>> I don't quite follow how that could work.
>>
>> If I have this:
>>
>> class C:
>>     pass
>>
>> a = C()
>> b = C()
>>
>> a.spam = 1
>> b.ham = 2
>>
>> how can a.__dict__ and b.__dict__ share key arrays? I've tried reading
>> the source, but I'm afraid I don't understand it well enough to make
>> sense of it.
>
> They can't.
>
> But then, your class is atypical. Usually, classes initialize all the
> attributes of their instances in the __init__ method, perhaps like so:
>
> class D:
>     def __init__(self, ham=None, spam=None):
>         self.ham = ham
>         self.spam = spam
>
> As long as you follow the common practice of not adding any attributes
> after the object has been initialized, your instances can share their
> keys array. Mark's patch will do that.
>
> You'll still be allowed to have different attributes per instance, but
> if you do that, then the patch doesn't buy you much.

Hey, I like this! It's a subtle encouragement for developers to initialize all their instance variables in their __init__ or __new__ method, with a (modest) performance improvement for a carrot. (Though I have to admit I have no idea how you do it. Wouldn't the set of dict keys be different while __init__ is in the middle of setting the instance variables?)

Another question: a common pattern is to use (immutable) class variables as default values for instance variables, and only set the instance variables once they need to be different. Does such a class benefit from your improvement?
> -- HansM

--
--Guido van Rossum (python.org/~guido)
Re: [Python-Dev] PEP 409 - final?
Hm... Reading this draft, I like the idea of using "raise X from None", but I still have one quibble. It seems the from clause sets __cause__, and __cause__ can indicate three things: (1) print __cause__ (explicitly set), (2) print __context__ (default), (3) print neither (raise X from None). For (1), __cause__ must of course be an exception object. The PEP currently proposes to use two special values: False for (2), None for (3). To me, this has a pretty strong code smell, and I don't want this pattern to be enshrined in a PEP as an example for all to follow. (And I also don't like "do as I say, don't do as I do." :-)

Can we think of a different special value to distinguish between (2) and (3)? Ideally one that doesn't change the nice "from None" idiom, which I actually like as a way to spell this.

Sorry that life isn't easier,

--Guido

On Tue, Jan 31, 2012 at 9:14 PM, Nick Coghlan wrote:
> On Wed, Feb 1, 2012 at 1:57 PM, Ethan Furman wrote:
>> I haven't seen any further discussion here or in the bug tracker.
>> Below is the latest version of this PEP, now with a section on
>> Language Details.
>>
>> Who makes the final call on this? Any idea how long that will take?
>> (Not that I'm antsy, or anything... ;)
>
> Guido still has the final say on PEP approvals as BDFL - it's just
> that sometimes he'll tap someone else and say "Your call!" (thus
> making them a BDFOP - Benevolent Dictator for One PEP).
>
> FWIW, I'm personally +1 on the latest version of this.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | [email protected] | Brisbane, Australia
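The user-facing idiom under debate looks like this in practice. (The behavior shown is that of the semantics as eventually implemented in Python 3.3, where suppression is recorded on the exception via __suppress_context__ rather than the False sentinel Guido objects to; the example function itself is hypothetical.)

```python
def get_port(config):
    try:
        return int(config["port"])
    except KeyError:
        # the implicit KeyError context would only confuse callers here,
        # so suppress it from the displayed traceback
        raise ValueError("no port configured") from None

try:
    get_port({})
except ValueError as exc:
    caught = exc
```

Note that the original KeyError is still reachable via __context__ for debugging; "from None" only suppresses it from the default traceback display.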
Re: [Python-Dev] A new dictionary implementation
Guido van Rossum writes:
> Hey, I like this! It's a subtle encouragement for developers to
> initialize all their instance variables in their __init__ or __new__
> method, with a (modest) performance improvement for a carrot. (Though
> I have to admit I have no idea how you do it. Wouldn't the set of dict
> keys be different while __init__ is in the middle of setting the
> instance variables?)
>
> Another question: a common pattern is to use (immutable) class
> variables as default values for instance variables, and only set the
> instance variables once they need to be different. Does such a class
> benefit from your improvement?

While I absolutely cannot speak to this implementation, traditionally this type of approach is referred to as maps, and was pioneered in SELF, originally presented at OOPSLA '89: http://dl.acm.org/citation.cfm?id=74884 . PyPy also uses these maps to back its objects, although from what I've read the implementation looks nothing like the one proposed for CPython; you can read about that here: http://bit.ly/zwlOkV , and if you're really excited about this you can read our implementation here: https://bitbucket.org/pypy/pypy/src/default/pypy/objspace/std/mapdict.py .

Alex
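The "maps" technique Alex mentions can be sketched in pure Python. This is a toy illustration of shared key tables - not Mark's patch or PyPy's mapdict - and it also answers Guido's question about __init__: the map simply transitions through intermediate states as attributes are added, and instances that add attributes in the same order converge on the same map:

```python
class Map:
    # A shared table mapping attribute names to slots in a per-instance
    # values list. Adding a new name transitions to a cached child Map,
    # so instances initialized identically end up sharing one Map.
    def __init__(self, index=None):
        self.index = index or {}
        self.transitions = {}

    def with_key(self, name):
        if name not in self.transitions:
            child = dict(self.index)
            child[name] = len(child)
            self.transitions[name] = Map(child)
        return self.transitions[name]

class Point:
    _root = Map()

    def __init__(self, x, y):
        object.__setattr__(self, "_map", Point._root)
        object.__setattr__(self, "_values", [])
        self.x = x
        self.y = y

    def __setattr__(self, name, value):
        slot = self._map.index.get(name)
        if slot is None:
            # new attribute: transition to the (shared) child map
            object.__setattr__(self, "_map", self._map.with_key(name))
            self._values.append(value)
        else:
            self._values[slot] = value

    def __getattr__(self, name):
        try:
            return self._values[self._map.index[name]]
        except KeyError:
            raise AttributeError(name) from None

a = Point(1, 2)
b = Point(3, 4)
```

Here `a` and `b` share one Map (one keys table) and each carries only its compact values list; an attribute added after initialization makes that one instance diverge onto its own map, which is exactly the "doesn't buy you much" case Hans describes.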
Re: [Python-Dev] Python 3 optimizations, continued, continued again...
On Wed, Feb 1, 2012 at 09:46, Guido van Rossum wrote: > Let's make one thing clear. The Python core developers need to be able > to reproduce your results from scratch, and that means access to the > templates, code generators, inputs, and everything else you used. (Of > course for stuff you didn't write that's already open source, all we > need is a pointer to the open source project and the exact > version/configuration you used, plus any local mods you made.) > > I understand that you're hesitant to just dump your current mess, and > you want to clean it up before you show it to us. That's fine. But > until you're ready to show it, we're not going to integrate any of > your work into CPython, even though some of us (maybe Benjamin) may be > interested in kicking its tires. And remember, it doesn't need to be > perfect (in fact perfectionism is probably a bad idea here). But it > does need to be open source. Every single bit of it. (And no GPL, > please.) > I understand all of these issues. Currently, it's not really a mess, but it is much more complicated than it needs to be for only supporting the inca optimization. I don't know what the time frame for a possible integration is (my guess is that it'd be safe anyway to disable it, like the threaded code support was handled.) As for the license: I really don't care about that at all; the only thing nice to have would be a pointer to my home page and/or the corresponding research, but that's about all on my wish list. --stefan
Re: [Python-Dev] Python 3 optimizations, continued, continued again...
On Feb 1, 2012, at 12:46 PM, Guido van Rossum wrote:
> I understand that you're hesitant to just dump your current mess, and
> you want to clean it up before you show it to us. That's fine. (...) And
> remember, it doesn't need to be
> perfect (in fact perfectionism is probably a bad idea here).
Just as a general point of advice to open source contributors, I'd suggest
erring on the side of the latter rather than the former suggestion here: dump
your current mess, along with the relevant caveats ("it's a mess, much of it is
irrelevant") so that other developers can help you clean it up, rather than
putting the entire burden of the cleanup on yourself. Experience has taught me
that most people who hold back work because it needs cleanup eventually run out
of steam and their work never gets integrated and maintained.
-glyph
Re: [Python-Dev] PEP 409 - final?
Guido van Rossum wrote: Hm... Reading this draft, I like the idea of using "raise X from None", but I still have one quibble. It seems the from clause sets __cause__, and __cause__ can indicate three things: (1) print __cause__ (explicitly set), (2) print __context__ (default), (3) print neither (raise X from None). For (1), __cause__ must of course be a traceback object. Actually, for (1) __cause__ is an exception object, not a traceback. The PEP currently proposes to use two special values: False for (2), None for (3). To me, this has a pretty strong code smell, and I don't want this pattern to be enshrined in a PEP as an example for all to follow. (And I also don't like "do as I say, don't do as I do." :-) My apologies for my ignorance, but is the code smell because both False and None evaluate to bool(False)? I suppose we could use True for (2) to indicate that __context__ should be printed, leaving None for (3)... but having __context__ at None and __cause__ at True could certainly be confusing (the default case when no chaining is in effect). Can we think of a different special value to distinguish between (2) and (3)? Ideally one that doesn't change the nice "from None" idiom, which I actually like as a way to spell this. How about this:

Exception Life Cycle

Stage 1 - brand new exception
-----------------------------

    raise ValueError()

* __context__ is None
* __cause__ is None

Stage 2 - exception caught, exception raised
--------------------------------------------

    try:
        raise ValueError()
    except Exception:
        raise CustomError()

* __context__ is previous exception
* __cause__ is True

Stage 3 - exception raised from [exception | None]
--------------------------------------------------

    try:
        raise ValueError()
    except Exception:
        raise CustomError() from [OtherException | None]

* __context__ is previous exception
* __cause__ is [OtherException | None]

Sorry that life isn't easier, Where would be the fun without the challenge?
~Ethan~
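For reference, the stages above can be checked against a real interpreter. Note that this reflects the semantics that eventually shipped (PEP 409 as amended by PEP 415, which added `__suppress_context__` instead of a special value in `__cause__`), so the `__cause__` values below differ from the proposal in this thread:

```python
# Stage 1 - brand new exception
try:
    raise ValueError("fresh")
except ValueError as e:
    assert e.__context__ is None
    assert e.__cause__ is None

# Stage 2 - exception raised while handling another (implicit chaining)
try:
    try:
        raise ValueError("original")
    except Exception:
        raise KeyError("secondary")
except KeyError as e:
    assert isinstance(e.__context__, ValueError)  # previous exception kept
    assert e.__cause__ is None                    # no explicit "from"

# Stage 3 - explicit "raise ... from None" suppresses the display
try:
    try:
        raise ValueError("original")
    except Exception:
        raise KeyError("secondary") from None
except KeyError as e:
    assert isinstance(e.__context__, ValueError)  # context still recorded
    assert e.__cause__ is None
    assert e.__suppress_context__                 # but not displayed
```

In other words, the final design sidestepped the True/False/None debate by moving the "print the chain or not" decision into a separate boolean flag.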
Re: [Python-Dev] PEP 409 - final?
On Wed, Feb 1, 2012 at 10:48 AM, Ethan Furman wrote: > Guido van Rossum wrote: >> >> Hm... Reading this draft, I like the idea of using "raise X from >> None", but I still have one quibble. It seems the from clause sets >> __cause__, and __cause__ can indicate three things: (1) print >> __cause__ (explicitly set), (2) print __context__ (default), (3) print >> neither (raise X from None). For (1), __cause__ must of course be a >> traceback object. > > > Actually, for (1) __cause__ is an exception object, not a traceback. Ah, sorry. I'm not as detail-oriented as I was. :-) >> The PEP currently proposes to use two special >> values: False for (2), None for (3). To me, this has a pretty strong >> code smell, and I don't want this pattern to be enshrined in a PEP as >> an example for all to follow. (And I also don't like "do as I say, >> don't do as I do." :-) > > > My apologies for my ignorance, but is the code smell because both False and > None evaluate to bool(False)? That's part of it, but the other part is that the type of __context__ is now truly dynamic. I often *think* of variables as having some static type, e.g. "integer" or "Foo instance", and for most Foo instances I consider None an acceptable value (since that's how pointer types work in most static languages). But the type of __context__ you're proposing is now a union of exception and bool, except that the bool can only be False. > I suppose we could use True for (2) to > indicate that __context__ should be printed, leaving None for (3)... but > having __context__ at None and __cause__ at True could certainly be > confusing (the default case when no chaining is in effect). It seems you really need a marker object. I'd be fine with using some other opaque marker -- IMO that's much better than using False but disallowing True. >> Can we think of a different special value to distinguish between (2) >> and (3)? 
Ideally one that doesn't change the nice "from None" idiom, >> which I actually like as a way to spell this. > > > How about this: > > > Exception Life Cycle > > > > Stage 1 - brand new exception > - > > raise ValueError() > > * __context__ is None > * __cause__ is None > > > Stage 2 - exception caught, exception raised > > > try: > raise ValueError() > except Exception: > raise CustomError() > > * __context__ is previous exception > * __cause__ is True > > > Stage 3 - exception raised from [exception | None] > -- > > try: > raise ValueError() > except Exception: > raise CustomError() from [OtherException | None] > > * __context__ is previous exception > * __cause__ is [OtherException | None] No, this has the same code smell for me. See above. >> Sorry that life isn't easier, > > > Where would be the fun without the challenge? +1 :-) -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] A new dictionary implementation
Hey, I like this! It's a subtle encouragement for developers to initialize all their instance variables in their __init__ or __new__ method, with a (modest) performance improvement for a carrot. (Though I have to admit I have no idea how you do it. Wouldn't the set of dict keys be different while __init__ is in the middle of setting the instance variables?) The "type's attribute set" will be a superset of the instance's, for a shared key set. Initializing the first instance grows the key set, which is put into the type. Subsequent instances start out with the key set as a candidate, and have all values set to NULL in the dict values set. As long as you are only setting attributes that are in the shared key set, the values just get set. When it encounters a key not in the shared key set, the dict dissociates itself from the shared key set. Another question: a common pattern is to use (immutable) class variables as default values for instance variables, and only set the instance variables once they need to be different. Does such a class benefit from your improvement? It depends. IIUC, if the first instance happens to get this attribute set, it ends up in the shared key set, and subsequent instances may have a NULL value for the key. I'm unsure how *exactly* the key set gets frozen. You cannot allow resizing the key set once it is shared, as you would have to find all instances with the same key set and resize their values. It would be possible (IIUC) to add more keys to the shared key set if that doesn't cause a resize, but I'm not sure whether the patch does that. Regards, Martin ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
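Martin's description of shared key sets (the set grows while the first instance is initialized, later instances start with NULL value slots, and a dict dissociates itself when it sees an unknown key) can be modelled in a few lines of pure Python. Everything below -- the class names, the `frozen` flag, the dissociation strategy -- is an illustrative guess at the behaviour he describes, not the actual CPython patch:

```python
class KeySet:
    """A shared, append-only list of attribute names (illustrative model)."""
    def __init__(self):
        self.keys = []
        self.frozen = False  # once shared, no more growth in this sketch

class MapDict:
    """Instance storage sharing a KeySet until an unknown key appears."""
    def __init__(self, shared):
        self.shared = shared
        self.values = [None] * len(shared.keys)  # NULL slots for known keys
        self.own = None  # becomes a plain dict once dissociated

    def __setitem__(self, key, value):
        if self.own is not None:
            self.own[key] = value
        elif key in self.shared.keys:
            self.values[self.shared.keys.index(key)] = value
        elif not self.shared.frozen:
            # the first instance grows the shared key set
            self.shared.keys.append(key)
            self.values.append(value)
        else:
            # unknown key on a frozen set: fall back to a private dict
            self.own = dict(zip(self.shared.keys, self.values))
            self.own[key] = value

    def __getitem__(self, key):
        if self.own is not None:
            return self.own[key]
        return self.values[self.shared.keys.index(key)]

ks = KeySet()
a = MapDict(ks)
a["x"] = 1          # grows the shared key set
ks.frozen = True    # e.g. after the first __init__ completes
b = MapDict(ks)
b["x"] = 2          # fits the shared keys: stored in the values array
b["y"] = 3          # unknown key: b dissociates, a is unaffected
```

The memory win comes from `b` storing only a values array plus a pointer to the shared keys, instead of a full hash table per instance.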
Re: [Python-Dev] Switching to Visual Studio 2010
On Thu, Jan 26, 2012 at 12:54:31PM -0800, [email protected] wrote: > > Is this considered a new feature that has to be in by the first beta? > > I'm hoping to have it completed much sooner than that so we can get > > mileage on it, but is there a cutoff for changing the compiler? > > At some point, I'll start doing this myself if it hasn't been done by > then, and I would certainly want the build process adjusted (with > all buildbots updated) before beta 1. I... I think I might have already done this, inadvertently. I needed an x64 VS2010 debug build of Subversion/APR*/Python a few weeks ago -- forgetting the fact that we're still on VS2008. By the time I got to building Python, I'd already coerced everything else to use VS2010, so I just bit the bullet and coerced Python to use it too, including updating all the buildbot scripts and relevant externals to use VS2010, too. Things that immediately come to mind as potentially being useful: * Three new buildbot scripts: - build-amd64-vs10.bat - clean-amd64-vs10.bat - external-amd64-vs10.bat * Updates to externals/(tcl|tk)-8.5.9.x so that they both build with VS2010. This was a tad fiddly. I ended up creating makefile.vs10 from win/makefile.vc and encapsulating the changes there, then calling that from the buildbot *-vs10.bat scripts. I had to change win/rules.vc, too. * A few other things I can't remember off the top of my head. So, I guess my question is, is that work useful? Based on Martin's original list, it seems to check a few boxes. Brian, what are your plans? Are you going to continue working in hg.python.org/sandbox/vs2010port then merge everything over when ready? I have some time available to work on this for the next three weeks or so and would like to help out. Regards, Trent. ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Switching to Visual Studio 2010
On Sun, Jan 29, 2012 at 14:23, Trent Nelson wrote: > Brian, what are your plans? Are you going to continue working in > hg.python.org/sandbox/vs2010port then merge everything over when > ready? I have some time available to work on this for the next > three weeks or so and would like to help out. Yep, I'm working out of that repo, and any help you can provide would be great. I need to go back over Martin's checklist to find out what I've actually done in terms of moving old stuff around and whatnot, but the basic gist is that it builds and passes most of the test suite save for 5-6 modules IIRC. ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Python 3 optimizations, continued, continued again...
On Wed, Feb 1, 2012 at 11:08 AM, stefan brunthaler wrote: > On Wed, Feb 1, 2012 at 09:46, Guido van Rossum wrote: >> Let's make one thing clear. The Python core developers need to be able >> to reproduce your results from scratch, and that means access to the >> templates, code generators, inputs, and everything else you used. (Of >> course for stuff you didn't write that's already open source, all we >> need is a pointer to the open source project and the exact >> version/configuration you used, plus any local mods you made.) >> >> I understand that you're hesitant to just dump your current mess, and >> you want to clean it up before you show it to us. That's fine. But >> until you're ready to show it, we're not going to integrate any of >> your work into CPython, even though some of us (maybe Benjamin) may be >> interested in kicking its tires. And remember, it doesn't need to be >> perfect (in fact perfectionism is probably a bad idea here). But it >> does need to be open source. Every single bit of it. (And no GPL, >> please.) >> > I understand all of these issues. Currently, it's not really a mess, > but much more complicated as it needs to be for only supporting the > inca optimization. I don't know what the time frame for a possible > integration is (my guess is that it'd be safe anyways to disable it, > like the threaded code support was handled.) It won't be integrated until you have published your mess. > As for the license: I really don't care about that at all, the only > thing nice to have would be to have a pointer to my home page and/or > the corresponding research, but that's about all on my wish list. Please don't try to enforce that in the license. That usually backfires. Use Apache 2, which is what the PSF prefers. -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 409 - final?
On 2/1/2012 3:07 PM, Guido van Rossum wrote: On Wed, Feb 1, 2012 at 10:48 AM, Ethan Furman wrote: Guido van Rossum wrote: Hm... Reading this draft, I like the idea of using "raise X from None", but I still have one quibble. It seems the from clause sets __cause__, and __cause__ can indicate three things: (1) print __cause__ (explicitly set), (2) print __context__ (default), (3) print neither (raise X from None). For (1), __cause__ must of course be a traceback object. Actually, for (1) __cause__ is an exception object, not a traceback. Ah, sorry. I'm not as detail-oriented as I was. :-) The PEP currently proposes to use two special values: False for (2), None for (3). To me, this has a pretty strong code smell, and I don't want this pattern to be enshrined in a PEP as an example for all to follow. (And I also don't like "do as I say, don't do as I do." :-) My apologies for my ignorance, but is the code smell because both False and None evaluate to bool(False)? That's part of it, but the other part is that the type of __context__ is now truly dynamic. I often *think* of variables as having some static type, e.g. "integer" or "Foo instance", and for most Foo instances I consider None an acceptable value (since that's how pointer types work in most static languages). But the type of __context__ you're proposing is now a union of exception and bool, except that the bool can only be False. It sounds like you are asking for a special class __NoException__(BaseException) to use as the marker. -- Terry Jan Reedy ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 409 - final?
Not a bad idea. On Wed, Feb 1, 2012 at 12:53 PM, Terry Reedy wrote: > On 2/1/2012 3:07 PM, Guido van Rossum wrote: >> >> On Wed, Feb 1, 2012 at 10:48 AM, Ethan Furman wrote: >>> >>> Guido van Rossum wrote: Hm... Reading this draft, I like the idea of using "raise X from None", but I still have one quibble. It seems the from clause sets __cause__, and __cause__ can indicate three things: (1) print __cause__ (explicitly set), (2) print __context__ (default), (3) print neither (raise X from None). For (1), __cause__ must of course be a traceback object. >>> >>> >>> >>> Actually, for (1) __cause__ is an exception object, not a traceback. >> >> >> Ah, sorry. I'm not as detail-oriented as I was. :-) >> The PEP currently proposes to use two special values: False for (2), None for (3). To me, this has a pretty strong code smell, and I don't want this pattern to be enshrined in a PEP as an example for all to follow. (And I also don't like "do as I say, don't do as I do." :-) >>> >>> >>> >>> My apologies for my ignorance, but is the code smell because both False >>> and >>> None evaluate to bool(False)? >> >> >> That's part of it, but the other part is that the type of __context__ >> is now truly dynamic. I often *think* of variables as having some >> static type, e.g. "integer" or "Foo instance", and for most Foo >> instances I consider None an acceptable value (since that's how >> pointer types work in most static languages). But the type of >> __context__ you're proposing is now a union of exception and bool, >> except that the bool can only be False. > > > It sounds like you are asking for a special class > __NoException__(BaseException) to use as the marker. 
> > -- > Terry Jan Reedy > > > ___ > Python-Dev mailing list > [email protected] > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 409 - final?
Guido van Rossum wrote: On Wed, Feb 1, 2012 at 10:48 AM, Ethan Furman wrote: My apologies for my ignorance, but is the code smell because both False and None evaluate to bool(False)? That's part of it, but the other part is that the type of __context__ is now truly dynamic. I often *think* of variables as having some static type, e.g. "integer" or "Foo instance", and for most Foo instances I consider None an acceptable value (since that's how pointer types work in most static languages). But the type of __context__ you're proposing is now a union of exception and bool, except that the bool can only be False. It seems you really need a marker object. I'd be fine with using some other opaque marker -- IMO that's much better than using False but disallowing True. So for __cause__ we need three values: 1) Not set special value (prints __context__ if present) 2) Some exception (print instead of __context__) 3) Ignore __context__ special value (and stop following the __context__ chain) For (3) we're hoping for None, for (2) we have an actual exception, and for (1) -- hmmm. It seems like a stretch, but we could do (looking at both __context__ and __cause__):

                  __context__    __cause__

   raise          None           False [1]

   reraise        previous       True [2]

   reraise from   previous       None [3] | exception

[1] False means non-chained exception
[2] True means chained exception
[3] None means chained exception, but by default we do not print nor follow the chain

The downside to this is that effectively either False and True mean the same thing, i.e. try to follow the __context__ chain, or False and None mean the same thing, i.e. don't bother trying to follow the __context__ chain because it either doesn't exist or is being suppressed. Feels like a bunch of complexity for marginal value. As you were saying, some other object to replace both False and True in the above table would be ideal.
~Ethan~
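Ethan's table collapses into a single decision function. The helper below is only a sketch of the *proposed* True/False/None encoding from this thread, not of anything Python actually shipped, and the function name is invented for illustration:

```python
def chain_to_print(cause, context):
    """Return the exception to display alongside the current one, or None.

    Models the proposed encoding: an exception = explicit "raise ... from
    exc"; True = follow __context__; False = non-chained exception;
    None = chaining suppressed via "raise ... from None".
    """
    if isinstance(cause, BaseException):
        return cause            # raise X from OtherException
    if cause is True:
        return context          # implicit chaining: show __context__
    return None                 # False (no chain) or None (suppressed)

prev = ValueError("original")
assert chain_to_print(False, None) is None      # plain raise
assert chain_to_print(True, prev) is prev       # reraise
assert chain_to_print(None, prev) is None       # raise ... from None
explicit = KeyError("explicit")
assert chain_to_print(explicit, prev) is explicit
```

The last branch makes the code smell concrete: False and None are indistinguishable to the display logic, which is exactly the redundancy Ethan points out.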
Re: [Python-Dev] PEP 409 - final?
On Wed, Feb 1, 2012 at 12:55 PM, Ethan Furman wrote: > Guido van Rossum wrote: >> >> On Wed, Feb 1, 2012 at 10:48 AM, Ethan Furman wrote: >>> >>> My apologies for my ignorance, but is the code smell because both False >>> and >>> None evaluate to bool(False)? >> >> >> That's part of it, but the other part is that the type of __context__ >> is now truly dynamic. I often *think* of variables as having some >> static type, e.g. "integer" or "Foo instance", and for most Foo >> instances I consider None an acceptable value (since that's how >> pointer types work in most static languages). But the type of >> __context__ you're proposing is now a union of exception and bool, >> except that the bool can only be False. >> >> It seems you really need a marker object. I'd be fine with using some >> other opaque marker -- IMO that's much better than using False but >> disallowing True. > > > So for __cause__ we need three values: > > 1) Not set special value (prints __context__ if present) > > 2) Some exception (print instead of __context__) > > 3) Ignore __context__ special value (and stop following the > __context__ chain) > > For (3) we're hoping for None, for (2) we have an actual exception, and for > (1) -- hmmm. > > It seems like a stretch, but we could do (looking at both __context__ and > __cause__): > > __context__ __cause__ > > raise None False [1] > > reraise previous True [2] > > reraise from previous None [3] | exception > > [1] False means non-chained exception > [2] True means chained exception > [3] None means chained exception, but by default we do not print > nor follow the chain > > The downside to this is that effectively either False and True mean the same > thing, i.e. try to follow the __context__ chain, or False and None mean the > same thing, i.e. don't bother trying to follow the __context__ chain because > it either doesn't exist or is being suppressed. > > Feels like a bunch of complexity for marginal value. 
As you were saying, > some other object to replace both False and True in the above table would be > ideal. So what did you think of Terry Reedy's idea of using a special exception class? -- --Guido van Rossum (python.org/~guido) ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Switching to Visual Studio 2010
On Tue, Jan 17, 2012 at 9:43 PM, "Martin v. Löwis" wrote: ... > P.S. Here is my personal list of requirements and non-requirements: ... > - must generate binaries that run on Windows XP I recently read about Firefox switching to VS2010 and therefore needing to drop support for Windows 2000, XP RTM (no service pack) and XP SP1. Indeed, [1] confirms that the VS2010 runtime (it's not clear if the C one, the C++ one or both) needs XP SP2 or higher. Just thought I'd share this so that an informed decision can be made, in my opinion it would be ok for Python 3.3 to drop everything prior to XP SP2. Maybe not very relevant, but [2] has some mention of statistics for Firefox usage on systems prior to XP SP2. [1] http://connect.microsoft.com/VisualStudio/feedback/details/526821/executables-built-with-visual-c-2010-do-not-run-on-windows-xp-prior-to-sp2 [2] http://weblogs.mozillazine.org/asa/archives/2012/01/end_of_firefox_win2k.html ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Switching to Visual Studio 2010
On Wed, Feb 1, 2012 at 15:37, Catalin Iacob wrote: > On Tue, Jan 17, 2012 at 9:43 PM, "Martin v. Löwis" wrote: > ... >> P.S. Here is my personal list of requirements and non-requirements: > ... >> - must generate binaries that run on Windows XP > > I recently read about Firefox switching to VS2010 and therefore > needing to drop support for Windows 2000, XP RTM (no service pack) and > XP SP1. Indeed, [1] confirms that the VS2010 runtime (it's not clear > if the C one, the C++ one or both) needs XP SP2 or higher. > > Just thought I'd share this so that an informed decision can be made, > in my opinion it would be ok for Python 3.3 to drop everything prior > to XP SP2. > > Maybe not very relevant, but [2] has some mention of statistics for > Firefox usage on systems prior to XP SP2. > > [1] > http://connect.microsoft.com/VisualStudio/feedback/details/526821/executables-built-with-visual-c-2010-do-not-run-on-windows-xp-prior-to-sp2 > [2] > http://weblogs.mozillazine.org/asa/archives/2012/01/end_of_firefox_win2k.html We already started moving forward with dropping Windows 2000 prior to this coming up. http://mail.python.org/pipermail/python-dev/2011-May/59.html was the discussion (which links an older discussion) and PEP-11 (http://www.python.org/dev/peps/pep-0011/) was updated accordingly. ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Switching to Visual Studio 2010
On Wed, Feb 1, 2012 at 15:41, Brian Curtin wrote: > On Wed, Feb 1, 2012 at 15:37, Catalin Iacob wrote: >> On Tue, Jan 17, 2012 at 9:43 PM, "Martin v. Löwis" >> wrote: >> ... >>> P.S. Here is my personal list of requirements and non-requirements: >> ... >>> - must generate binaries that run on Windows XP >> >> I recently read about Firefox switching to VS2010 and therefore >> needing to drop support for Windows 2000, XP RTM (no service pack) and >> XP SP1. Indeed, [1] confirms that the VS2010 runtime (it's not clear >> if the C one, the C++ one or both) needs XP SP2 or higher. >> >> Just thought I'd share this so that an informed decision can be made, >> in my opinion it would be ok for Python 3.3 to drop everything prior >> to XP SP2. >> >> Maybe not very relevant, but [2] has some mention of statistics for >> Firefox usage on systems prior to XP SP2. >> >> [1] >> http://connect.microsoft.com/VisualStudio/feedback/details/526821/executables-built-with-visual-c-2010-do-not-run-on-windows-xp-prior-to-sp2 >> [2] >> http://weblogs.mozillazine.org/asa/archives/2012/01/end_of_firefox_win2k.html > > We already started moving forward with dropping Windows 2000 prior to > this coming up. > http://mail.python.org/pipermail/python-dev/2011-May/59.html was > the discussion (which links an older discussion) and PEP-11 > (http://www.python.org/dev/peps/pep-0011/) was updated accordingly. Sorry, hit send too soon... Anyway, I can't imagine many of our users (and their users) are still using pre-SP2. It was released in 2004 and was superseded by SP3 and two entire OS releases. I don't know of a reliable way of figuring out whether or not pre-SP2 is a measurable demographic for us, but I can't imagine it's enough to make us hold up the move for another ~2 years. ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 409 - final?
raise from None seems pretty "in band". A NoException class could have many other uses and leaves no confusion about intent. ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Switching to Visual Studio 2010
On Sun, Jan 29, 2012 at 12:23:14PM -0800, Trent Nelson wrote: > * Updates to externals/(tcl|tk)-8.5.9.x so that they both build with > VS2010. Before I go updating tcl/tk, any thoughts on bumping our support to the latest revision, 8.5.11? I guess the same question applies to all the externals, actually (zlib, openssl, sqlite, bsddb, etc). In the past we've typically bumped up our support to the latest version prior to beta, then stuck with that for the release's life, right? Semi-related note: is svn.python.org/externals still the primary repo for externals? (I can't see a similarly named hg repo.) Trent. ___ Python-Dev mailing list [email protected] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 409 - final?
Terry Reedy wrote: > It sounds like you are asking for a special class > __NoException__(BaseException) to use as the marker. Guido van Rossum wrote: So what did you think of Terry Reedy's idea of using a special exception class? Our table would then look like:

                  __context__    __cause__

   raise          None           __NoException__

   reraise        previous       __NoException__

   reraise from   previous       None | exception

It is certainly simpler than trying to force the use of both True and False. :) The purist side of me thinks it's still slightly awkward; the practical side recognizes that there probably is not a perfect solution and thinks this is workable, and is willing to deal with the slight awkwardness to get 'from None' up and running. :) The main reason for the effort in keeping the previous exception in __context__ instead of just clobbering it is for custom error handlers, yes? So here is a brief comparison of the two: def complete_traceback(): Actually, I got about three lines into that and realized that whatever __cause__ is set to is completely irrelevant for that function: if *__context__* is not None, follow the chain; the only relevance __cause__ has is deciding when it should print: when it is a (valid) exception. And how do we know if it's valid?

    # True, False, None
    if isinstance(exc.__cause__, BaseException):
        print_exc(exc)

or

    if exc.__cause__ not in (True, False, None):
        print_exc(exc)

vs

    # None, __NoException__ (forced to be an instance)
    if (exc.__cause__ is not None
            and not isinstance(exc.__cause__, __NoException__)):
        print_exc(exc)

or

    # None, __NoException__ (forced to stay a class)
    if exc.__cause__ not in (None, __NoException__):
        print_exc(exc)

Having gone through all that, I'm equally willing to go either way (True/False/None or __NoException__). Implementation questions for the __NoException__ route: 1) Do we want double underscores, or just a single one? I'm thinking double to mark it as special as opposed to private.
2) This is a new exception class -- do we want to store the class itself in __context__, or its instance? If its class, should we somehow disallow instantiation of it? 3) Should it be an exception, or just inherit from object? Is it worth worrying about somebody trying to raise it, or raise from it? 4) Is the name '__NoException__' confusing? ~Ethan~
Re: [Python-Dev] PEP 409 - final?
On 2 February 2012 11:18, Ethan Furman wrote: > Implementation questions for the __NoException__ route: > > 1) Do we want double underscores, or just a single one? > > I'm thinking double to mark it as special as opposed > to private. > Double and exposed allows someone to explicitly set the __cause__ to __NoException__ on an existing exception. > 2) This is a new exception class -- do we want to store the > class itself in __context__, or its instance? If its > class, should we somehow disallow instantiation of it? > > 3) Should it be an exception, or just inherit from object? > Is it worth worrying about somebody trying to raise it, or > raise from it? > If it's not actually an exception, we get prevention of instantiation for free. My feeling is just make it a singleton object. > 4) Is the name '__NoException__' confusing? Seems perfectly expressive to me so long as it can't itself be raised. Tim Delaney
Re: [Python-Dev] PEP 409 - final?
On Thu, Feb 2, 2012 at 10:44 AM, Tim Delaney wrote:
>> 3) Should it be an exception, or just inherit from object?
>>    Is it worth worrying about somebody trying to raise it, or
>>    raise from it?
>
> If it's not actually an exception, we get prevention of instantiation for
> free. My feeling is just make it a singleton object.

Yeah, a new Ellipsis/None style singleton probably makes more sense than an exception instance.

Cheers,
Nick.

--
Nick Coghlan | [email protected] | Brisbane, Australia
Re: [Python-Dev] PEP 409 - final?
On 2/1/2012 7:49 PM, Nick Coghlan wrote:
> On Thu, Feb 2, 2012 at 10:44 AM, Tim Delaney wrote:
>>> 3) Should it be an exception, or just inherit from object?
>>>    Is it worth worrying about somebody trying to raise it, or
>>>    raise from it?
>>
>> If it's not actually an exception, we get prevention of instantiation for
>> free. My feeling is just make it a singleton object.
>
> Yeah, a new Ellipsis/None style singleton probably makes more sense
> than an exception instance.

But now we're adding a new singleton, unrelated to exceptions (other than its name) because we don't want to use an existing singleton (False). Maybe the name difference is good enough justification.

Eric.
[Python-Dev] PEP: New timestamp formats
Even if I am not really convinced that a PEP helps to design an API, here is a draft of a PEP to add new timestamp formats to Python 3.3. Don't see the draft as a final proposition, it is just a document supposed to help the discussion :-)

---

PEP: xxx
Title: New timestamp formats
Version: $Revision$
Last-Modified: $Date$
Author: Victor Stinner
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 01-February-2012
Python-Version: 3.3


Abstract
========

Python 3.3 introduced functions supporting nanosecond resolutions. Python 3.3
only supports int or float to store timestamps, but these types cannot be used
to store a timestamp with a nanosecond resolution.


Motivation
==========

Python 2.3 introduced float timestamps to support subsecond resolutions;
os.stat() uses float timestamps by default since Python 2.5. Python 3.3
introduced functions supporting nanosecond resolutions:

* os.stat()
* os.utimensat()
* os.futimens()
* time.clock_gettime()
* time.clock_getres()
* time.wallclock() (reuses time.clock_gettime(time.CLOCK_MONOTONIC))

The problem is that 64-bit floats are unable to store nanoseconds (10^-9)
for timestamps bigger than 2^24 seconds (194 days 4 hours: 1970-07-14 for an
Epoch timestamp) without losing precision.

.. note::
   A 64-bit float starts to lose precision at microsecond (10^-6) resolution
   for timestamps bigger than 2^33 seconds (272 years: 2242-03-16 for an
   Epoch timestamp).


Timestamp formats
=================

Choose a new format for nanosecond resolution
---------------------------------------------

To support nanosecond resolution, four formats were considered:

* 128 bits float
* decimal.Decimal
* datetime.datetime
* tuple of integers

Criteria
''''''''

It should be possible to do arithmetic, for example::

    t1 = time.time()
    # ...
    t2 = time.time()
    dt = t2 - t1

Two timestamps should be comparable (t2 > t1). The format should have a
resolution of at least 1 nanosecond (without losing precision). It is better
if the format can have an arbitrary resolution.
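The 2^24-second limit stated in the Motivation is easy to verify interactively. A quick illustrative check (not part of the PEP draft): the spacing between adjacent doubles near 2^24 is 2^(24-52) ≈ 3.7 ns, so adding one nanosecond there is rounded away, while at 2^23 it still survives:

```python
# Spacing (ulp) between adjacent IEEE 754 doubles near magnitude 2**24:
ulp_at_2_24 = 2.0 ** (24 - 52)   # ~3.7e-9 seconds

big = 2.0 ** 24                  # ~194 days as an Epoch timestamp
small = 2.0 ** 23                # ~97 days

print(big + 1e-9 == big)         # True: the nanosecond is lost
print(small + 1e-9 == small)     # False: still representable here
```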
128 bits float
--------------

Add a new IEEE 754-2008 quad-precision float type. The IEEE 754-2008 quad
precision float has 1 sign bit, 15 bits of exponent and 112 bits of mantissa.

The 128 bits float is supported by GCC (4.3), Clang and ICC. The problem is
that Visual C++ 2008 doesn't support it. Python must be portable and so cannot
rely on a type only available on some platforms. Another example: GCC 4.3 does
not support __float128 in 32-bit mode on x86 (but GCC 4.4 does). Intel CPUs
have an FPU supporting 80-bit floats, but not using SSE instructions. Other
CPU vendors don't support this float size.

There is also a license issue: GCC uses the MPFR library, which is distributed
under the GNU LGPL license. This license is incompatible with the Python
Software License.

datetime.datetime
-----------------

datetime.datetime only supports microsecond resolution, but can be enhanced
to support nanosecond. datetime.datetime has issues:

- there is no easy way to convert it into "seconds since the epoch"
- any broken-down time has issues of time stamp ordering in the duplicate
  hour of switching from DST to normal time
- time zone support is flaky-to-nonexistent in the datetime module

decimal.Decimal
---------------

The decimal module is implemented in Python and is not really fast. Using
Decimal by default would cause a bootstrap issue because the module is
implemented in Python.

Decimal can store a timestamp with any resolution, not only nanosecond; the
resolution is configurable at runtime. Decimal objects support all arithmetic
operations and are compatible with int and float.

The decimal module is slow, but there is a C reimplementation of the decimal
module which is almost ready for inclusion.

tuple
-----

Various kinds of tuples have been proposed.
All propositions only use integers:

* a) (sec, nsec): C timespec structure, useful for os.futimens() for example
* b) (sec, floatpart, exponent): value = sec + floatpart * 10**exponent
* c) (sec, floatpart, divisor): value = sec + floatpart / divisor

Format (a) only supports nanosecond resolution. Formats (a) and (b) may lose
precision if the clock divisor is not a power of 10. Format (c) should be
enough for most cases.

Creating a tuple of integers is fast. Arithmetic operations cannot be done
directly on tuples: t2 - t1 doesn't work, for example.

Final formats
-------------

The PEP proposes to provide 5 different timestamp formats:

* numbers:

  * int
  * float
  * decimal.Decimal
  * datetime.timedelta

* broken-down time:

  * datetime.datetime


API design
==========

Change the default result type
------------------------------

Python 2.3 introduced os.stat_float_times(). The problem is that this flag is
global, and so may break libraries if the application changes the type.
Changing the default result type would break backward compatibility.

Callback and creating a new module to convert timestamps
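Format (c)'s advantage over (a) and (b) can be illustrated with a QueryPerformanceCounter-style clock whose frequency is not a power of ten. This sketch is not from the patch itself; the function name and the 3579545 Hz frequency are illustrative values only, with `fractions.Fraction` used to show that the (sec, floatpart, divisor) triple is exact:

```python
from fractions import Fraction

def from_components(sec, floatpart, divisor):
    # Format (c): value = sec + floatpart / divisor, kept exact.
    return sec + Fraction(floatpart, divisor)

# A tick count from a clock with a non-power-of-ten frequency
# (3579545 Hz is just an example value here).
ts = from_components(0, 12345, 3579545)

# The value is exact; precision is lost only if/when the caller
# converts to float at the very end.
approx = float(ts)
```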
Re: [Python-Dev] PEP 409 - final?
On Thu, Feb 2, 2012 at 11:01 AM, Eric V. Smith wrote:
> On 2/1/2012 7:49 PM, Nick Coghlan wrote:
>> On Thu, Feb 2, 2012 at 10:44 AM, Tim Delaney wrote:
>>>> 3) Should it be an exception, or just inherit from object?
>>>>    Is it worth worrying about somebody trying to raise it, or
>>>>    raise from it?
>>>
>>> If it's not actually an exception, we get prevention of instantiation for
>>> free. My feeling is just make it a singleton object.
>>
>> Yeah, a new Ellipsis/None style singleton probably makes more sense
>> than an exception instance.
>
> But now we're adding a new singleton, unrelated to exceptions (other
> than its name) because we don't want to use an existing singleton (False).
>
> Maybe the name difference is good enough justification.

That's exactly the thought process that led me to endorse the idea of using False as the "not set" marker in the first place. With None being stolen to mean "no cause, and don't print the context either", the choices become:

- set some *other* exception attribute to indicate whether or not to print the context
- use an existing singleton like False to mean "not set, use the context"
- add a new singleton specifically to mean "not set, use the context"
- use a new exception type to mean "not set, use the context"

Hmm, after writing up that list, the idea of using "__cause__ is Ellipsis" (or even "__cause__ is ...") to mean "use __context__ instead" occurs to me. After all, "..." has the right connotations of "fill this in from somewhere else", and since we really just want a known sentinel object that isn't None and isn't a meaningful type like the boolean singletons...

Regards,
Nick.

--
Nick Coghlan | [email protected] | Brisbane, Australia
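The three-way semantics Nick describes for `__cause__` can be mocked up without touching real exception attributes. The helper below is purely illustrative (the name and the cause/context pairing are made up for this sketch, not the eventual implementation):

```python
def exception_to_display(cause, context):
    """Pick what a traceback printer would show as the causing exception.

    Ellipsis: nothing was set explicitly -> fall back to the context.
    None:     'raise ... from None'      -> suppress the context too.
    other:    'raise ... from exc'       -> show the explicit cause.
    """
    if cause is Ellipsis:
        return context
    return cause

ctx = KeyError('earlier failure')

shown_default = exception_to_display(Ellipsis, ctx)  # -> ctx
shown_none = exception_to_display(None, ctx)         # -> None (suppressed)
```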
Re: [Python-Dev] PEP 409 - final?
On 2 February 2012 12:43, Nick Coghlan wrote:
> Hmm, after writing up that list, the idea of using "__cause__ is
> Ellipsis" (or even "__cause__ is ...") to mean "use __context__
> instead" occurs to me. After all, "..." has the right connotations of
> "fill this in from somewhere else", and since we really just want a
> known sentinel object that isn't None and isn't a meaningful type like
> the boolean singletons...

It's cute yet seems appropriate ... I quite like it.

Tim Delaney
Re: [Python-Dev] PEP: New timestamp formats
On Thu, Feb 2, 2012 at 11:03 AM, Victor Stinner wrote:
> Even if I am not really convinced that a PEP helps to design an API,
> here is a draft of a PEP to add new timestamp formats to Python 3.3.
> Don't see the draft as a final proposition, it is just a document
> supposed to help the discussion :-)

Helping keep a discussion on track (and avoiding rehashing old ground) is precisely why the PEP process exists. Thanks for writing this up :)

> ---
>
> PEP: xxx
> Title: New timestamp formats
> Version: $Revision$
> Last-Modified: $Date$
> Author: Victor Stinner
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 01-February-2012
> Python-Version: 3.3
>
> Abstract
> ========
>
> Python 3.3 introduced functions supporting nanosecond resolutions. Python 3.3
> only supports int or float to store timestamps, but these types cannot be
> used to store a timestamp with a nanosecond resolution.
>
> Motivation
> ==========
>
> Python 2.3 introduced float timestamps to support subsecond resolutions;
> os.stat() uses float timestamps by default since Python 2.5. Python 3.3
> introduced functions supporting nanosecond resolutions:
>
> * os.stat()
> * os.utimensat()
> * os.futimens()
> * time.clock_gettime()
> * time.clock_getres()
> * time.wallclock() (reuses time.clock_gettime(time.CLOCK_MONOTONIC))
>
> The problem is that 64-bit floats are unable to store nanoseconds (10^-9)
> for timestamps bigger than 2^24 seconds (194 days 4 hours: 1970-07-14 for an
> Epoch timestamp) without losing precision.
>
> .. note::
>    A 64-bit float starts to lose precision at microsecond (10^-6) resolution
>    for timestamps bigger than 2^33 seconds (272 years: 2242-03-16 for an
>    Epoch timestamp).
>
> Timestamp formats
> =================
>
> Choose a new format for nanosecond resolution
> ---------------------------------------------
>
> To support nanosecond resolution, four formats were considered:
>
> * 128 bits float
> * decimal.Decimal
> * datetime.datetime
> * tuple of integers

I'd add datetime.timedelta to this list.
It's exactly what timestamps are, after all - the difference between the current time and the relevant epoch value.

> Various kinds of tuples have been proposed. All propositions only use
> integers:
>
> * a) (sec, nsec): C timespec structure, useful for os.futimens() for example
> * b) (sec, floatpart, exponent): value = sec + floatpart * 10**exponent
> * c) (sec, floatpart, divisor): value = sec + floatpart / divisor
>
> Format (a) only supports nanosecond resolution.
>
> Formats (a) and (b) may lose precision if the clock divisor is not a
> power of 10.
>
> Format (c) should be enough for most cases.

Format (b) only loses precision if the exponent chosen for a given value is too small relative to the precision of the underlying timer (it's the same as using decimal.Decimal in that respect). The problem with (a) is that it simply cannot represent times with greater than nanosecond precision. Since we have the opportunity, we may as well deal with the precision question once and for all.

Alternatively, you could return a 4-tuple that specifies the base in addition to the exponent.

> Callback and creating a new module to convert timestamps
>
> Use a callback taking integers to create a timestamp. Example with float:
>
>     def timestamp_to_float(seconds, floatpart, divisor):
>         return seconds + floatpart / divisor
>
> The time module can provide some builtin converters, and other modules, like
> datetime, can provide their own converters. Users can define their own types.
>
> An alternative is to add a new module for all functions converting
> timestamps.
>
> The problem is that we have to design the API of the callback and we cannot
> change it later. We may need more information for future needs later.
I'd be more specific here - either of the 3-tuple options already presented in the PEP, or the 4-tuple option I mentioned above, would be suitable as the signature of an arbitrary precision callback API that assumes timestamps are always expressed as "seconds since a particular epoch value".

Such an API could only become limiting if timestamps ever become something other than "the difference in time between right now and the relevant epoch value", and that's a sufficiently esoteric possibility that it really doesn't seem worthwhile to take it into account. The past problems with timestamp APIs have all related to increases in precision, not timestamps being redefined as something radically different.

The PEP should also mention PJE's suggestion of creating a new named protocol specifically for the purpose (with a signature based on one of the proposed tuple formats), such that you could simply write:

    time.time() # output=float by default
    time.time(output=float)
    time.time(output=int)
    time.time(output=fractions.Fraction)
    time.time(output=decimal.Decimal)
    time.time(o
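The `output=` protocol Nick sketches above can be prototyped in a few lines. This is an illustrative mock-up only -- the converter name and its dispatch rules are assumptions, not the API under discussion -- showing how one (sec, floatpart, divisor) triple can feed every requested output type:

```python
from decimal import Decimal
from fractions import Fraction

def convert_timestamp(output, seconds, floatpart, divisor):
    """Build `seconds + floatpart / divisor` in the requested output type."""
    if output is int:
        return seconds                      # truncate the fractional part
    if output is float:
        return seconds + floatpart / divisor
    # Exact types (Fraction, Decimal) do the division in their own
    # arithmetic, so no precision is lost to an intermediate float.
    return output(seconds) + output(floatpart) / output(divisor)

print(convert_timestamp(Fraction, 10, 1, 3))    # 31/3, exactly
print(convert_timestamp(int, 10, 1, 3))         # 10
```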
Re: [Python-Dev] PEP 409 - final?
Tim Delaney wrote:
> On 2 February 2012 12:43, Nick Coghlan wrote:
>> Hmm, after writing up that list, the idea of using "__cause__ is
>> Ellipsis" (or even "__cause__ is ...") to mean "use __context__
>> instead" occurs to me. After all, "..." has the right connotations of
>> "fill this in from somewhere else", and since we really just want a
>> known sentinel object that isn't None and isn't a meaningful type like
>> the boolean singletons...
>
> It's cute yet seems appropriate ... I quite like it.

I find it very amusing, yet also appropriate -- I'm happy with it.

~Ethan~
