Re: [Python-Dev] PEP 383: Non-decodable Bytes in System Character Interfaces
How about another str-like type, a sequence of char-or-bytes? It could be called strbytes or stringwithinvalidcharacters. It would support whatever subset of str functionality makes sense / is easy to implement, plus a to_escaped_str() method (that does the escaping the PEP talks about) for people who want to use regexes or other str-only stuff. Here is a description by example:

    os.listdir('.') -> [strbytes('normal_file'), strbytes('bad', 128, 'file')]
    strbytes('a')[0] -> strbytes('a')
    strbytes('bad', 128, 'file')[3] -> strbytes(128)
    strbytes('bad', 128, 'file').to_escaped_str() -> 'bad?128file'

Having a separate type is cleaner than a "str that isn't exactly what it represents". And making the escaping an explicit (but rarely needed) step would be less surprising for users. Anyway, I don't know a whole lot about this issue, so there may be an obvious reason this is a bad idea.

On Wed, Apr 22, 2009 at 6:50 AM, "Martin v. Löwis" wrote:
> I'm proposing the following PEP for inclusion into Python 3.1.
> Please comment.
>
> Regards,
> Martin
>
> PEP: 383
> Title: Non-decodable Bytes in System Character Interfaces
> Version: $Revision: 71793 $
> Last-Modified: $Date: 2009-04-22 08:42:06 +0200 (Mi, 22. Apr 2009) $
> Author: Martin v. Löwis
> Status: Draft
> Type: Standards Track
> Content-Type: text/x-rst
> Created: 22-Apr-2009
> Python-Version: 3.1
> Post-History:
>
> Abstract
> ========
>
> File names, environment variables, and command line arguments are defined as being character data in POSIX; the C APIs however allow passing arbitrary bytes - whether these conform to a certain encoding or not. This PEP proposes a means of dealing with such irregularities by embedding the bytes in character strings in such a way that allows recreation of the original byte string.
>
> Rationale
> =========
>
> The C char type is a data type that is commonly used to represent both character data and bytes. Certain POSIX interfaces are specified and widely understood as operating on character data; however, the system call interfaces make no assumption on the encoding of these data, and pass them on as-is. With Python 3, character strings use a Unicode-based internal representation, making it difficult to ignore the encoding of byte strings in the same way that the C interfaces can ignore the encoding.
>
> On the other hand, Microsoft Windows NT has corrected the original design limitation of Unix, and made it explicit in its system interfaces that these data (file names, environment variables, command line arguments) are indeed character data, by providing a Unicode-based API (keeping a C-char-based one for backwards compatibility).
>
> For Python 3, one proposed solution is to provide two sets of APIs: a byte-oriented one, and a character-oriented one, where the character-oriented one would be limited to not being able to represent all data accurately. Unfortunately, for Windows, the situation would be exactly the opposite: the byte-oriented interface cannot represent all data; only the character-oriented API can. As a consequence, libraries and applications that want to support all user data in a cross-platform manner have to accept a mish-mash of bytes and characters exactly in the way that caused endless troubles for Python 2.x.
>
> With this PEP, a uniform treatment of these data as characters becomes possible.
> The uniformity is achieved by using specific encoding algorithms, meaning that the data can be converted back to bytes on POSIX systems only if the same encoding is used.
>
> Specification
> =============
>
> On Windows, Python uses the wide character APIs to access character-oriented APIs, allowing direct conversion of the environmental data to Python str objects.
>
> On POSIX systems, Python currently applies the locale's encoding to convert the byte data to Unicode. If the locale's encoding is UTF-8, it can represent the full set of Unicode characters; otherwise, only a subset is representable. In the latter case, using private-use characters to represent these bytes would be an option. For UTF-8, doing so would create an ambiguity, as the private-use characters may regularly occur in the input also.
>
> To convert non-decodable bytes, a new error handler "python-escape" is introduced, which decodes non-decodable bytes into a private-use character U+F01xx, which is believed to not conflict with private-use characters that currently exist in Python codecs.
>
> The error handler interface is extended to allow the encode error handler to return byte strings immediately, in addition to returning Unicode strings which then get encoded again.
>
> If the locale's encoding is UTF-8, the file system encoding is set to a new encoding "utf-8b". The UTF-8b codec decodes non-decodable bytes (which must be >= 0x80) into half surrogate codes U+DC80..U+DCFF.
>
> Discussion
> ==========
>
> While providing a uniform API to non-decodable bytes,
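For concreteness, here is a minimal sketch of the strbytes type proposed above -- hypothetical code, not anything from the PEP; it just mirrors the behavior of the examples given (integer indexing yields a strbytes, and escaping is an explicit, lossy step):

    # Hypothetical sketch only; names and behavior follow the examples above.
    class strbytes:
        def __init__(self, *parts):
            # Each part is a str fragment or an int byte value (0..255)
            # standing in for an undecodable byte.
            self._parts = []
            for p in parts:
                if isinstance(p, str):
                    self._parts.extend(p)    # one character per slot
                else:
                    self._parts.append(p)    # raw byte kept as int

        def __len__(self):
            return len(self._parts)

        def __getitem__(self, i):
            # Integer indexing only, to keep the sketch short.
            return strbytes(self._parts[i])

        def to_escaped_str(self, escape='?'):
            # Explicit escaping: strbytes('bad', 128, 'file') -> 'bad?128file'
            return ''.join(p if isinstance(p, str) else escape + str(p)
                           for p in self._parts)

(For the record: the PEP's utf-8b idea is what eventually shipped in Python 3.1 under the name "surrogateescape", an error handler that round-trips undecodable bytes through U+DC80..U+DCFF, e.g. b'bad\x80file'.decode('utf-8', 'surrogateescape') == 'bad\udc80file'.)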
Re: [Python-Dev] PEP 553 V2 - builtin breakpoint() (was Re: PEP 553: Built-in debug())
Would that not be a security concern, if you can get Python to execute arbitrary code just by setting an environment variable?

On Thu, Sep 7, 2017 at 10:47 PM, Barry Warsaw wrote:
> On Sep 7, 2017, at 19:34, Nick Coghlan wrote:
>
> > Now that you put it that way, it occurs to me that CI environments
> > could set "PYTHONBREAKPOINTHOOK=sys:exit" to make breakpoint() an
> > immediate failure rather than halting the CI run waiting for input
> > that will never arrive.
>
> You better watch out Nick. You’re starting to sway me on adding the
> environment variable.
>
> -Barry
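For context (added here; not part of the exchange above): the accepted version of PEP 553 in Python 3.7 spells the variable PYTHONBREAKPOINT, and sys.breakpointhook() consults it on every call, so e.g. a CI environment can turn breakpoint() into a no-op:

```python
# Python 3.7+: sys.breakpointhook() reads PYTHONBREAKPOINT at each call.
#   PYTHONBREAKPOINT=0          -> breakpoint() does nothing
#   PYTHONBREAKPOINT=pkg.func   -> import pkg and call pkg.func()
import os

os.environ["PYTHONBREAKPOINT"] = "0"   # e.g. exported by a CI runner
breakpoint()                            # no-op under this setting
print("still running")
```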
[Python-Dev] Re: PEP 647 (type guards) -- final call for comments
Here's another suggestion:

PEP 593 introduced the `Annotated` type annotation. This could be used to annotate a TypeGuard like this:

`def is_str_list(val: List[object]) -> Annotated[bool, TypeGuard(List[str])]`

Note that I used ( ) instead of [ ] for the TypeGuard, as it is no longer a type. This should fulfill all four requirements, but is a lot more verbose and therefore also longer.

It would also be extensible for other annotations. For the most extensible approach, both `-> TypeGuard(...)` and `-> Annotated[bool, TypeGuard(...)]` could be allowed, which would open the path for future non-type annotations, which could be used regardless of whether the code is type-annotated.

--
Adrian

On February 14, 2021 2:20:14 PM GMT+01:00, Steven D'Aprano wrote:
>On Sat, Feb 13, 2021 at 07:48:10PM -, Eric Traut wrote:
>
>> I think it's a reasonable criticism that it's not obvious that a function annotated with a return type of `TypeGuard[x]` should return a bool.
>[...]
>> As Guido said, it's something that a developer can easily look up if they are confused about what it means.
>
>Yes, developers can use Bing and Google :-)
>
>But it's not the fact that people have to look it up. It's the fact that they need to know that this return annotation is not what it seems, but a special magic value that needs to be looked up.
>
>That's my objection: we're overloading the return annotation to be something other than the return annotation, but only for this one special value. (So far.) If you don't already know that it is special, you won't know that you need to look it up to learn that it's special.
>
>> I'm open to alternative formulations that meet the following requirements:
>>
>> 1. It must be possible to express the type guard within the function signature. In other words, the implementation should not need to be present. This is important for compatibility with type stubs and to guarantee consistent behaviors between type checkers.
>
>When you say "implementation", do you mean the body of the function?
>
>Why is this a hard requirement? Stub files can contain function bodies, usually `...` by convention, but alternatives are often useful, such as docstrings, `raise NotImplementedError()` etc.
>
>https://mypy.readthedocs.io/en/stable/stubs.html
>
>I don't think that the need to support stub files implies that the type guard must be in the function signature. Have I missed something?
>
>> 2. It must be possible to annotate the input parameter types _and_ the resulting (narrowed) type. It's not sufficient to annotate just one or the other.
>
>Naturally :-)
>
>That's the whole point of a type guard, I agree that this is a truly hard requirement.
>
>> 3. It must be possible for a type checker to determine when narrowing can be applied and when it cannot. This implies the need for a bool response.
>
>Do you mean a bool return type? Sorry Eric, sometimes the terminology you use is not familiar to me and I have to guess what you mean.
>
>> 4. It should not require changes to the grammar because that would prevent this from being adopted in most code bases for many years.
>
>Fair enough.
>
>> Mark, none of your suggestions meet these requirements.
>
>Mark's suggestion to use a variable annotation in the body meets requirements 2, 3, and 4. As I state above, I don't think that requirement 1 needs to be a genuinely hard requirement: stub files can include function bodies.
>
>To be technically precise, stub functions **must** include function bodies.
>It's just that by convention we typically use `...` as the body.
>
>> Gregory, one of your suggestions meets these requirements:
>>
>> ```python
>> def is_str_list(val: Constrains[List[object]:List[str]]) -> bool:
>>     ...
>> ```
>
>That still misleadingly tells the reader (or naive code analysis software) that parameter val is of type
>
>    Constrains[List[object]:List[str]]
>
>whatever that object is, rather than what it *actually* is, namely `List[object]`. I dislike code that misleads the reader.
>
>> As for choosing the name of the annotation
>[...]
>> `TypeGuard` is the term that is used in other languages to describe this notion, so it seems reasonable to me to adopt this term
>
>Okay, this reasoning makes sense to me. Whether spelled as a decorator or an annotation, using TypeGuard is reasonable.
>
>> Steven, you said you'd like to explore a d
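For reference (added for comparison; not part of the message above): the spelling PEP 647 ultimately adopted in Python 3.10 keeps the special return annotation, with the bool return value implied:

```python
from typing import List, TypeGuard  # typing.TypeGuard is new in Python 3.10

def is_str_list(val: List[object]) -> TypeGuard[List[str]]:
    # A True result lets the checker narrow val to List[str] at the call site.
    return all(isinstance(x, str) for x in val)
```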
[Python-Dev] Re: Request for comments on final version of PEP 653 (Precise Semantics for Pattern Matching)
Hi Mark,

I also wanted to give some feedback on this. While most of the discussion so far has been about the matching of the pattern itself, I think it should also be considered what happens in the block below. Consider this code:

```
m = ...
match m:
    case [a, b, c] as l:
        # what can we safely do with l?
```

or in terms of the type system: What is the most specific type that we can know l to be?

With PEP 634 you can be sure that l is a sequence and that its length is 3. With PEP 653 this is currently not explicitly defined. Judging from the pseudo code we can only assume that l is an iterable (because we use it in an unpacking assignment) and that its length is 3, which greatly reduces the operations that can be safely done on l.

For mapping matches with PEP 634 we can assume that l is a mapping. With PEP 653 all we can assume is that it has a .get method that takes two parameters, which is even more restrictive, as we can't even be sure if we can use len(), .keys, ... or iterate over it.

This also makes it a lot harder for static type checkers to check match statements, because instead of checking against an existing type they now have to hard-code all the guarantees made by the match statement or not narrow the type at all.

Additionally, consider this typed example:

```
m: Mapping[str, int] = ...
match m:
    case {'version': v}:
        pass
```

With PEP 634 we can statically check that v is an int. With PEP 653 there is no such guarantee.

Therefore I would strongly be in favor of having sequence and mapping patterns only match certain types instead of relying on dunder attributes. If implementing all of Sequence is really too much work just to be matched by a sequence pattern, as PEP 653 claims, then maybe a more general type could be chosen instead.

I don't have any objections against the other parts of the PEP.

Adrian Freund

On 3/27/21 2:37 PM, Mark Shannon wrote:
> Hi everyone,
>
> As the 3.10 beta is not so far away, I've cut PEP 653 down to the minimum needed for 3.10. The extensions will have to wait for 3.11.
>
> The essence of the PEP is now that:
>
> 1. The semantics of pattern matching, although basically unchanged, are more precisely defined.
>
> 2. The __match_kind__ special attribute will be used to determine which patterns to match, rather than relying on the collections.abc module.
>
> Everything else has been removed or deferred.
>
> The PEP now has only the slightest changes to semantics, which should be undetectable in normal use. For those corner cases where there is a difference, it is to make pattern matching more robust. E.g. with PEP 653, pattern matching will work in the collections.abc module. With PEP 634 it does not.
>
> As always, all thoughts and comments are welcome.
>
> Cheers,
> Mark.
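A runnable illustration of the narrowing question above (Python 3.10+; the concrete value is made up for this example):

```python
from typing import Mapping

m: Mapping[str, int] = {"version": 3}

match m:
    case {"version": v}:
        # Under PEP 634 a checker may treat v as an int here, because mapping
        # patterns only match actual mappings; the thread asks whether
        # PEP 653's dunder-based protocol can give the same guarantee.
        print(v + 1)
```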
[Python-Dev] Re: Typing syntax and ecosystem
I see multiple problems with including a type checker as part of the standard library:

First of all, this would require defining precise type checking semantics and not making breaking changes to them. Currently some parts of type checking are not precisely defined and are done differently by different type checkers. Take this example:

    if condition:
        a = 1
    else:
        a = "foo"
    reveal_type(a)

Mypy raises an error at the second assignment and infers a as int. Pyright on the other hand doesn't report errors and infers a as Union[int, str]. Both approaches are equally valid and have their advantages and drawbacks.

My second concern would be the development speed. Python type checking is still relatively young and is developing at a vastly different rate from the rest of the language. While cpython currently has a 1 year release cycle, mypy had 5 releases (excluding minor releases) in the last year. This development speed difference can also be seen in the use of typing_extensions to backport typing features to older python versions. GitHub search shows 16,500 python results for "from typing_extensions" (excluding forks). Being tied to the cpython release cycle would probably significantly hinder the development of a type checker.

I agree that a delay between a python release and mypy (and other type checkers) supporting it isn't optimal, but it can probably be solved much more easily: by having more developers put in work to keep it up to date. This work would still need to be done, even for a type checker that's part of the standard library. The only difference would be that changes would then cause tests in cpython to fail instead of just in mypy.

In the future, when development on type checkers has slowed, adding a type checker to the standard library might be useful, but in my opinion it would currently do more harm than good.

Adrian Freund

On April 13, 2021 11:55:05 PM GMT+02:00, Luciano Ramalho wrote:
>Hugh was unfortunate in presenting the problem, but I agree that we should commit all the way to supporting type hints, and that means bundling a type checker as part of the standard library and distribution.
>
>There is always a delay after a Python release before Mypy catches up to it—and that's the type checker hosted in the python organization on github.
>
>I believe this is an unfortunate state of affairs for many users. I am not aware of any other optionally typed language that underwent core changes to support type annotations and yet does not bundle a type checker.
>
>Cheers,
>
>Luciano
>
>On Mon, Apr 12, 2021 at 7:01 AM Hugh Fisher wrote:
>>
>> > Message: 1
>> > Date: Sun, 11 Apr 2021 13:31:12 -0700
>> > From: Barry Warsaw
>> > Subject: [Python-Dev] Re: PEP 647 Accepted
>>
>> > This is something the SC has been musing about, but as it’s not a fully formed idea, I’m a little hesitant to bring it up. That said, it’s somewhat relevant: We wonder if it may be time to in a sense separate the typing syntax from Python’s regular syntax. TypeGuards are a case where if typing had more flexibility to adopt syntax that wasn’t strictly legal “normal” Python, maybe something more intuitive could have been proposed. I wonder if the typing-sig has discussed this possibility (in the future, of course)?
>>
>> [ munch ]
>>
>> > Agreed. It’s interesting that PEP 593 proposes a different approach to enriching the typing system.
>> > Typing itself is becoming a little ecosystem of its own, and given that many Python users are still not fully embracing typing, maybe continuing to tie the typing syntax to Python syntax is starting to strain.
>>
>> I would really like to see either "Typed Python" become a different programming language, or progress to building type checking into the CPython implementation itself. (Python 4 seems to me the obvious release.) The current halfway approach is confusing and slightly ridiculous.
>>
>> The first, a separate programming language, would be like RATFOR and CFront in the past and TypeScript today. Typed Python can have whatever syntax the designers want because it doesn't have to be compatible with Python, just as TypeScript is not constrained by JavaScript. A type checker translates the original Typed Python source into "dynamic" or "classic" Python for execution. (Maybe into .pyc instead of .py?)
>>
>> This would mean no overhead for type checking in CPython itself. No need
[Python-Dev] Re: PEP 563 and 649: The Great Compromise
I think there is a point to be made for requiring a function call to resolve annotations, in regard to the ongoing discussion about relaxing the annotation syntax (https://mail.python.org/archives/list/python-dev@python.org/message/2F5PVC5MOWMGFVOX6FUQOUC7EJEEXFN3/).

Type annotations are still a fast moving topic compared to python as a whole. Should the annotation syntax be relaxed and annotations be stored as strings, then requiring a function call to resolve annotations would allow third party libraries, be it typing_extensions or something else, to backport new type annotation syntax by offering their own version of "get_annotated_values". Typing features are already regularly backported using typing_extensions, and this could not be done for new typing syntax unless annotations are stored as strings and resolved by a function.

Note: Obviously new typing syntax couldn't be backported to versions before the typing syntax was relaxed, unless explicitly wrapped in a string, but I would imagine that if we see a relaxed annotation syntax we might see new typing syntax every now and then after that.

Adrian Freund

On April 18, 2021 6:49:59 PM GMT+02:00, Larry Hastings wrote:
>On 4/18/21 9:10 AM, Damian Shaw wrote:
>> Hi Larry, all, I was thinking also of a compromise but a slightly different approach:
>>
>> Store annotations as a subclass of string but with the required frames attached to evaluate them as though they were in their local context. Then have a function "get_annotation_values" that knows how to evaluate these string subclasses with the attached frames.
>>
>> This would allow those who use runtime annotations to access local scope like PEP 649, and allow those who use static type checking to relax the syntax (as long as they don't try and evaluate the syntax at runtime) as per PEP 563.
>
>Something akin to this was proposed and discarded during the discussion of PEP 563, although the idea there was to still use actual Python bytecode instead of strings:
>
>https://www.python.org/dev/peps/pep-0563/#keeping-the-ability-to-use-function-local-state-when-defining-annotations
>
>It was rejected because it would be too expensive in terms of resources. PEP 649's approach uses significantly fewer resources, which is one of the reasons it seems viable.
>
>Also, I don't see the benefit of requiring a function like "get_annotation_values" to see the actual values. This would force library code that examined annotations to change; I think it's better that we preserve the behavior that "o.__annotations__" are real values.
>
>Cheers,
>
>//arry/
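A sketch of what "resolving annotations via a function call" already looks like today under PEP 563's stringified annotations (Python 3.9+ shown; typing.get_type_hints plays the role of the "get_annotated_values" function mentioned above):

```python
from __future__ import annotations  # PEP 563: annotations stored as strings
import typing

def greet(name: str, times: int) -> list[str]:
    return [f"hello {name}"] * times

print(greet.__annotations__)
# {'name': 'str', 'times': 'int', 'return': 'list[str]'} -- plain strings
print(typing.get_type_hints(greet))
# {'name': <class 'str'>, 'times': <class 'int'>, 'return': list[str]}
```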
[Python-Dev] Re: Keeping Python a Duck Typed Language.
According to PEP 484, all missing annotations in checked functions should be handled as Any. Any is compatible with all types.

I think from a technical standpoint it should be possible to infer protocols for arguments for most functions, but there are some edge cases where this would not be possible, making it impractical to make this the default behavior. Having an annotation to make a type checker infer a protocol would be interesting though.

For example:

    def f(x: int): ...
    def g(x: str): ...

    def main(t):
        if t[0] == 'version':
            f(t[1])
        elif t[0] == 'name':
            g(t[1])

You could statically type t as Union[Tuple[Literal['version'], int], Tuple[Literal['name'], str]], but inferring a Protocol for this would be either very hard or even impossible, especially with even more complex conditions.

Adrian Freund

On April 22, 2021 1:04:11 PM GMT+02:00, Paul Moore wrote:
>On Thu, 22 Apr 2021 at 11:21, Paul Moore wrote:
>>
>> On Thu, 22 Apr 2021 at 11:06, Chris Angelico wrote:
>> >
>> > Someone will likely correct me if this is inaccurate, but my understanding is that that's exactly what you get if you just don't give a type hint. The point of type hints is to give more information to the type checker when it's unable to simply infer from usage and context.
>>
>> Hmm, I sort of wondered about that as I wrote it. But in which case, what's the problem here? My understanding was that people were concerned that static typing was somehow in conflict with duck typing, but if the static checkers enforce the inferred duck type on untyped arguments, then that doesn't seem to be the case. Having said that, I thought that untyped arguments were treated as if they had a type of "Any", which means "don't type check".
>
>Looks like it doesn't:
>
>> cat .\test.py
>def example(f) -> None:
>    f.close()
>
>import sys
>example(12)
>example(sys.stdin)
>PS 12:00 00:00.009 C:\Work\Scratch\typing
>> mypy .\test.py
>Success: no issues found in 1 source file
>
>What I was after was something that gave an error on the first call, but not on the second. Compare this:
>
>> cat .\test.py
>from typing import Protocol
>
>class X(Protocol):
>    def close(self): ...
>
>def example(f: X) -> None:
>    f.close()
>
>import sys
>example(12)
>example(sys.stdin)
>PS 12:03 00:00.015 C:\Work\Scratch\typing
>> mypy .\test.py
>test.py:10: error: Argument 1 to "example" has incompatible type "int"; expected "X"
>Found 1 error in 1 file (checked 1 source file)
>
>Paul
[Python-Dev] Re: Keeping Python a Duck Typed Language.
On April 22, 2021 3:15:27 PM GMT+02:00, Paul Moore wrote:
>On Thu, 22 Apr 2021 at 13:23, Adrian Freund wrote:
>>
>> According to PEP 484 all missing annotations in checked functions should be handled as Any. Any is compatible with all types.
>
>Yep, that's what I understood to be the case.
>
>> I think from a technical standpoint it should be possible to infer protocols for arguments for most functions, but there are some edge cases where this would not be possible, making it impractical to make this the default behavior. Having an annotation to make a type checker infer a protocol would be interesting though.
>
>Absolutely, I see no problem with "use duck typing for this argument" being opt-in.
>
>> For example:
>>
>> def f(x: int): ...
>> def g(x: str): ...
>>
>> def main(t):
>>     if t[0] == 'version':
>>         f(t[1])
>>     elif t[0] == 'name':
>>         g(t[1])
>>
>> You could statically type t as Union[Tuple[Literal['version'], int], Tuple[Literal['name'], str]], but inferring a Protocol for this would be either very hard or even impossible, especially with even more complex conditions.
>
>Yes, but that's inferred static typing which is *not* what I was proposing.

I think I understood what you were proposing, but my example might have been less than ideal. Sorry for that. I mixed some static types in there to simplify it. The union wasn't meant as what it should infer but was meant as a comparison to what we would do currently, with static, nominal typing.

Let me try again without static types.

    def file(x):
        print(x.read())     # x has to have .read(): object

    def string(x):
        print(str(x))       # x has to have .__str__(self): object

    def main(t):
        if t[0] == 'file':
            file(t[1])
        elif t[0] == 'string':
            string(t[1])

Here we can infer that t has to have a __getitem__(self, idx: int), but we can't infer its return type.

>I was suggesting that the checker could easily infer that t must have a __getitem__ method, and nothing more. So the protocol to infer is
>
>class TypeOfT(Protocol):
>    def __getitem__(self, idx): ...
>
>It would be nice to go one step further and infer
>
>class TypeOfT(Protocol):
>    def __getitem__(self, idx: int): ...
>
>but that's *absolutely* as far as I'd want to go. Note in particular that I don't want to constrain the return value

The problem is that this isn't enough to have a type safe program. You need to also constrain the return type to make sure the returned value can be safely passed to other functions. If you don't do this, large parts of your codebase will either need explicit annotations or will be unchecked.

>- we've no way to know what type it might have in the general case. IMO, inferring anything else would over-constrain t - there's nothing in the available information, for example, that says t must be a tuple, or a list, or that t[3] should have any particular type, or anything like that.

You can infer the return type of a function by looking at all the returns it contains and inferring the types of the returned expressions. That isn't too hard, and pytype for example already does it. You can infer the return type a protocol function should have by looking at all the places its results are used. If you have inferred return types, then constraining return types using inferred protocols would be practical in my opinion.

>My instinct is that working out that t needs to have a __getitem__ that takes an int is pretty straightforward, as all you have to do is look at where t is used in the function.
>Four places, all followed by [] with a literal integer in the brackets. That's it. I fully appreciate that writing *code* to do that can be a lot harder than it looks, but that's an implementation question, not a matter of whether it's reasonable as a proposal in theory.
>
>This feels like *precisely* where there seems to be a failure of communication between the static typing and the duck typing worlds. I have no idea what I said that would make you think that I wanted anything like that Union type you quoted above. And yet obviously, you somehow got that message from what I did say.

Like I said above, the Union was just meant as an example of what we would do with static, nominal typing, not what we want with duck typing. Sorry for the misunderstanding.

>Anyway, as I said this is just an interesting idea as far as I'm concerned. I've no actual need for it right now, so I'm happy to leave it to the mypy developers whether they want to do anything with it.
>
>Paul
[Python-Dev] Re: Keeping Python a Duck Typed Language.
On 4/22/21 5:00 PM, Paul Moore wrote:
> On Thu, 22 Apr 2021 at 15:22, Adrian Freund wrote:
>> On April 22, 2021 3:15:27 PM GMT+02:00, Paul Moore wrote:
>>> but that's *absolutely* as far as I'd want to go. Note in particular that I don't want to constrain the return value
>> The problem is that this isn't enough to have a type safe program. You need to also constrain the return type to make sure the returned value can be safely passed to other functions.
> But I don't want a type safe program. At least not in an absolute sense. All I want is for mypy to catch the occasional error I make where I pass the wrong parameter. For me, that's the "gradual" in "gradual typing" - it's not a lifestyle, just a convenience. You seem to be implying that it's "all or nothing".

I don't think that inferring the required return type breaks gradual typing, but it is required for people who want type safety. If I understand correctly, your concern with inferring return types for inferred protocols is that it might be too restrictive and prevent gradual typing. Here are some examples to show how gradual typing would still work. If you have any concrete examples where inferring the return type would break gradual typing, let me know and I'll have a look at them.

    def foo(x: DuckType):
        # x has to have a .bar(self) method.
        # The return type of which is inferred as Any, as it isn't used.
        x.bar()

    def bar(x):
        x.bar()

    def foo(x: DuckType):
        # x has to have a .read(self) method.
        # The return type of which is inferred as Any, as the parameter to bar isn't typed.
        bar(x.read())

Contrast that with

    def bar(x: DuckType):
        # x has to have a .bar(self) method.
        # The return type of which is inferred as Any.
        x.bar()

    def foo(x: DuckType):
        # x has to have a .read(self) method that returns something with a .bar(self) method.
        # If we don't infer the return type, our call to bar() might be unsafe despite both foo and bar being typed.
        bar(x.read())

> I repeat, all I'm proposing is that
>
> def f(x: int): ...
> def g(x: str): ...
>
> def main(t: DuckTyped) -> None:
>     if t[0] == 'version':
>         f(t[1])
>     elif t[0] == 'name':
>         g(t[1])
>
> gets interpreted *exactly* the same as if I'd written
>
> class TType(Protocol):
>     def __getitem__(self, int): ...
>
> def f(x: int): ...
> def g(x: str): ...
>
> def main(t: TType) -> None:
>     if t[0] == 'version':
>         f(t[1])
>     elif t[0] == 'name':
>         g(t[1])
>
> How can you claim that the second example requires that "large parts of your codebase will either need explicit annotations or will be unchecked"? And if the second example doesn't require that, nor does the first because it's equivalent.

Both examples don't check the calls to f and g, despite f and g both being typed functions and being called from typed functions. In a real codebase this will lead to a lot more instances of this happening. It would happen every time you do anything with something returned from a method on an inferred protocol.

> Honestly, this conversation is just reinforcing my suspicion that people invested in type annotations have a blind spot when it comes to dealing with people and use cases that don't need to go "all in" with typing :-(

I don't think this is all in or nothing. You can infer return types of inferred protocols and still use gradual typing. It's just that not inferring return types causes problems for both full and gradual typing.
Adrian Freund
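For readers who want to run the contrast above through a checker, here is the explicit version written out with typing.Protocol (the protocol names are hypothetical, introduced just for this sketch):

```python
from typing import Protocol

class HasBar(Protocol):
    def bar(self) -> object: ...

class HasRead(Protocol):
    # The return type is part of the contract; without it, the call
    # chain in foo() below could not be checked.
    def read(self) -> HasBar: ...

def bar(x: HasBar) -> None:
    x.bar()

def foo(x: HasRead) -> None:
    bar(x.read())  # safe only because read() is declared to return a HasBar
```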
[Python-Dev] Re: python3.10rc2 compilation on android/termux/clang12.0.1 fails
As you are using termux, it might be worth checking out the build arguments and patches termux uses to build their own version of python (currently 3.9.7): https://github.com/termux/termux-packages/tree/master/packages/python

I'm not sure if this will be enough to build python3.10 or if additional patches and build arguments are needed though.

Adrian

On 9/15/21 19:06, Sandeep Gupta wrote:

I am trying to compile Python3.10rc2 on a rather unusual platform (termux on android). The gcc version is listed below:

~ $ g++ -v
clang version 12.0.1
Target: aarch64-unknown-linux-android24
Thread model: posix
InstalledDir: /data/data/com.termux/files/usr/bin

I get following warnings and errors:

Python/pytime.c:398:10: warning: implicit conversion from 'long' to 'double' changes value from 9223372036854775807 to 9223372036854775808 [-Wimplicit-const-int-float-conversion]
    if (!_Py_InIntegralTypeRange(_PyTime_t, d)) {

Python/bootstrap_hash.c:141:17: error: implicit declaration of function 'getrandom' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
        n = getrandom(dest, n, flags);
            ^
Python/bootstrap_hash.c:145:17: error: implicit declaration of function 'getrandom' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
        n = getrandom(dest, n, flags);

Not sure if this is a limitation of the platform or the Python codebase needs fixes.

Thanks
-S
[Python-Dev] Re: python3.10rc2 compilation on android/termux/clang12.0.1 fails
I looked a bit more into this and managed to run python 3.10.0rc2 on android in termux. No additional patches were needed, but the build process isn't straightforward.

Some packages don't build on-device and have to be built on a computer. Python is one of them (https://github.com/termux/termux-packages/issues/4157 - the issue is closed because making all packages build on-device is no longer a goal of termux).

Follow https://github.com/termux/termux-packages/wiki/Build-environment to set up your termux build environment. You can use Docker, Vagrant, Ubuntu or Arch Linux. I used docker.

Next, build and install python3.10.0rc2 *on the build machine*. You need python3.10 installed to cross-compile python3.10.

After that, go back to the termux packages and change packages/python/build.sh. All you need to change is the version number, url and hash. Here's a patch anyway:

diff --git a/packages/python/build.sh b/packages/python/build.sh
index c36bff5e5..d9fd86d02 100644
--- a/packages/python/build.sh
+++ b/packages/python/build.sh
@@ -2,10 +2,11 @@ TERMUX_PKG_HOMEPAGE=https://python.org/
 TERMUX_PKG_DESCRIPTION="Python 3 programming language intended to enable clear programs"
 TERMUX_PKG_LICENSE="PythonPL"
 TERMUX_PKG_MAINTAINER="@termux"
-_MAJOR_VERSION=3.9
-TERMUX_PKG_VERSION=${_MAJOR_VERSION}.7
-TERMUX_PKG_SRCURL=https://www.python.org/ftp/python/${TERMUX_PKG_VERSION}/Python-${TERMUX_PKG_VERSION}.tar.xz
-TERMUX_PKG_SHA256=f8145616e68c00041d1a6399b76387390388f8359581abc24432bb969b5e3c57
+_MAJOR_VERSION=3.10
+_MINOR_VERSION=.0
+TERMUX_PKG_VERSION=${_MAJOR_VERSION}${_MINOR_VERSION}rc2
+TERMUX_PKG_SRCURL=https://www.python.org/ftp/python/${_MAJOR_VERSION}${_MINOR_VERSION}/Python-${TERMUX_PKG_VERSION}.tar.xz
+TERMUX_PKG_SHA256=e75b56088548b7b9ad1f2571e6f5a2315e4808cb6b5fbe8288502afc802b2f24
 TERMUX_PKG_DEPENDS="gdbm, libandroid-support, libbz2, libcrypt, libffi, liblzma, libsqlite, ncurses, ncurses-ui-libs, openssl, readline, zlib"
 TERMUX_PKG_RECOMMENDS="clang, make, pkg-config"
 TERMUX_PKG_SUGGESTS="python-tkinter"

Finally, just run "./build-package.sh -i -f python" and send "output/python*.deb" to your phone, where you can install it using dpkg -i.

Adrian

On 9/16/21 14:27, Adrian Freund wrote:

As you are using termux it might be worth checking out the build arguments and patches termux uses to build their own version of python (Currently 3.9.7): https://github.com/termux/termux-packages/tree/master/packages/python

I'm not sure if this will be enough to build python3.10 or if additional patches and build arguments are needed though.

Adrian

On 9/15/21 19:06, Sandeep Gupta wrote:

I am trying to compile Python3.10rc2 on rather unusual platform (termux on android).
The gcc version is listed below:

~ $ g++ -v
clang version 12.0.1
Target: aarch64-unknown-linux-android24
Thread model: posix
InstalledDir: /data/data/com.termux/files/usr/bin

I get following warnings and errors:

Python/pytime.c:398:10: warning: implicit conversion from 'long' to 'double' changes value from 9223372036854775807 to 9223372036854775808 [-Wimplicit-const-int-float-conversion]
    if (!_Py_InIntegralTypeRange(_PyTime_t, d)) {

Python/bootstrap_hash.c:141:17: error: implicit declaration of function 'getrandom' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
        n = getrandom(dest, n, flags);
            ^
Python/bootstrap_hash.c:145:17: error: implicit declaration of function 'getrandom' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
        n = getrandom(dest, n, flags);

Not sure if this is a limitation of the platform or the Python codebase needs fixes.

Thanks
-S
Re: [Python-Dev] Enumeration items: `type(EnumClass.item) is EnumClass` ?
On Apr 29, 2013, at 7:32 AM, Ethan Furman wrote:
> On 04/28/2013 01:02 PM, Guido van Rossum wrote:
>> On Sun, Apr 28, 2013 at 12:32 PM, Ethan Furman wrote:
>> - should enum items be of the type of the Enum class? (i.e. type(SPRING) is Seasons)
>
> IMO Yes.

This decision seems natural to me, so I wrote an enumeration library some time ago that uses a simple metaclass to achieve "type(Season.spring) is Season": https://github.com/sampsyo/beets/blob/master/beets/util/enumeration.py

The module has other warts but perhaps it can be helpful anyway.

Cheers,
Adrian
[Python-Dev] Re: Python 3.9.11
It looks like the 3.9.11 commit was done 14 hours ago: https://github.com/python/cpython/commit/0f0c55c9f0f4c358d759470702821c471fcd1bc8

It hasn't been tagged and uploaded to the website yet. I'm not involved in the release process, but I'd imagine that it will show up on the website soon. 3.9.10 was released approximately one day after the commit was made.

Regards,
Adrian Freund

On 3/15/22 18:43, Prasad, PCRaghavendra wrote:

Hi Team,

Can someone please let us know the release date of Python 3.9.11 (with libexpat 2.4.8 security issues fixed)?

In the python.org releases it was mentioned as 14-march-2022, but still, I couldn't see the bin/source code.

Can someone help with this?

Thanks,
Raghavendra
Re: [Python-Dev] Mercurial conversion repositories
On 2011-02-25 17:12, Barry Warsaw wrote:
> On Feb 25, 2011, at 01:50 AM, Raymond Hettinger wrote:
>>
>> On Feb 25, 2011, at 12:09 AM, Martin v. Löwis wrote:
>>
>>> I think I would have liked the strategy of the PEP better (i.e. create clones for feature branches, rather than putting all in a single repository).
>>
>> In my brief tests, the single repository has been easy to work with. If they were separate, it would complicate backporting patches and merges. So, I'm happy with how George and Benjamin put this together.
>
> The way I work with the Subversion branches is to have all the active branches checked out into separate directories under a common parent, e.g.
>
> ~/projects/python/py26
> ~/projects/python/py27
> ~/projects/python/trunk
> ~/projects/python/py31
> ~/projects/python/py32
> ~/projects/python/py3k
>
> This makes it very easy to just cd, svn up, make distclean, configure, make to test things. How can I do this with the hg clone when all the branches are in the single repository, but more or less hidden? After doing the 'hg clone' operation specified by Antoine, I'm left with a single cpython directory containing (iiuc) the contents of the 'default' branch.
>
> I'm sure I'm not the only one who works this way with Subversion. IWBN to cover this in the devguide (or is it there and I missed it?).

I know (almost) nothing about developing Python (this is my first post to this list after lurking for quite a while now), but as a regular Mercurial contributor, I think the following could be useful for you:

First, get an initial clone (let's name it 'master') over the wire using [1]:

  $ hg clone -U ssh://h...@hg.python.org/cpython master

Then create a hardlinked clone [2] for working in each branch, specifying the branch to check out using option -u:

  $ hg clone master py26 -u 2.6
  updating to branch 2.6
  NNN files updated, 0 files merged, 0 files removed, 0 files unresolved

  $ hg clone master py27 -u 2.7
  updating to branch 2.7
  NNN files updated, 0 files merged, 0 files removed, 0 files unresolved

  $ hg clone master trunk -u trunk
  updating to branch trunk
  NNN files updated, 0 files merged, 0 files removed, 0 files unresolved

  $ hg clone master py31 -u 3.1
  updating to branch 3.1
  NNN files updated, 0 files merged, 0 files removed, 0 files unresolved

  $ hg clone master py32 -u 3.2
  updating to branch 3.2
  NNN files updated, 0 files merged, 0 files removed, 0 files unresolved

This will be fast and save space, as these local 'branch clones' will share diskspace inside .hg/store by using hardlinks, and you need to do the initial slow clone over the wire only once.

Note that each of these branch clones will initially have your local master repo as the default path [3,4]. If you'd like to have the default push/pull path point to ssh://h...@hg.python.org/cpython instead, you'd want to edit the [paths] section in the .hg/hgrc file in each of the branch repos. But of course you can also leave the default paths as they are and synchronize via the master repo (e.g. pull new changesets into master first, and then pull into the specific branch repo).

[1] http://selenic.com/repo/hg/help/clone
[2] http://mercurial.selenic.com/wiki/HardlinkedClones
[3] http://www.selenic.com/mercurial/hgrc.5.html#paths
[4] http://selenic.com/repo/hg/help/urls
Re: [Python-Dev] Mercurial conversion repositories
On 2011-02-26 22:06, Barry Warsaw wrote:
> On Feb 26, 2011, at 02:05 PM, R. David Murray wrote:
>
>> On Sat, 26 Feb 2011 13:08:47 -0500, Barry Warsaw wrote:
>>> $ cd py27 # now I want to synchronize
>>> $ hg pull -u ssh://h...@hg.python.org/cpython
>>>
>>> but I'm not going to remember that url every time. It wouldn't be so bad if Mercurial remembered the pull URL for me, as (you guessed it :) Bazaar does.
>>
>> How does setting it in the hgrc differ from "remembering" it?
>
> It's different because you don't use a familiar interface to set it (i.e. hg). You have to know to hack a file to make it work. That's not awesome user interface. ;)

You'd have to take this up with Mercurial's BDFL Matt. He is a strong advocate for teaching users to learn to edit their .hg/hgrc files. And he's quite firm on not wanting to have Mercurial touch .hg/hgrc -- with the single exception being to write an initial .hg/hgrc on 'hg clone', containing the default path with the location from where the repo was cloned.

Regarding Bazaar: FWIW, I periodically retried the speed of 'bzr check' - and always gave up again looking at bzr due to the horrible slowness of that command. If I have to use a DVCS, I want to be able to check the integrity of my clones in reasonable time. I do it with a cron job on our internal server here, and I expect it to have finished checking all our repos when I get to my desk in the morning and look into my email inbox, reading the daily email with the result of the verify runs.

After all, we do have everything secured with hashes, so we can use them, don't we?

>> I've never been comfortable with the bzr --remember option because I'm never sure what it is remembering. Much easier for me to see it in a config file. But, then, that's how my brain works, and other people's will work differently.
>
> It's easy to tell what it remembers because it's exactly what you told it to remember ;). But I guess you're talking about push and pull automatically remembering the location when none was previously set. I love that feature.
>
> And of course, bzr 'remembers' by setting a value in a config file, which of course you *could* hack if you wanted to. It's just that you don't normally have to open your editor and remember which value in which config file you have to manually modify to set the push and pull locations. I think that's a win, but YMMV. :)
>
> Oh, and 'bzr info' always tells you what the push and pull locations are.

You can use 'hg paths' for that: see http://selenic.com/repo/hg/help/paths or 'hg help paths' on the command line.

>> I find bazaar's model confusing, and hg's intuitive, just like Éric. And consider that I learned bazaar before mercurial. To me, it makes perfect sense that in a DVCS the "unit" is a directory containing a repository and a working copy, and that the repository is *the* repository. That is, it has everything related to the project in it, just like the master SVN repository does (plus, since it is a DVCS, whatever I've committed locally but not pushed to the master). To have a repository that only has some of the stuff in it is, IMO, confusing. I advocated for having all the Python history in one repo partly for that reason.
>
> I would feel better about Mercurial's if the repo were not intimately tied with a default working tree (yes, I know -U). In a sense, that's what Bazaar's shared repositories are: a place where all your history goes.
> In Bazaar's model though, it's not tied to a specific working tree, and it's hidden in a dot-directory.
>
> It's still kind of beside the point - this is the way Mercurial works, and I don't really mean this thread to be an in-depth comparison between the two.

I'm quite surprised indeed to read that much about Bazaar in this thread here :)
Re: [Python-Dev] hg extensions was Mercurial conversion repositories
On 2011-02-27 00:13, Dj Gilcrease wrote:
> Branch Management
> bookmarks
> http://mercurial.selenic.com/wiki/BookmarksExtension

Bookmarks will be in Mercurial core for Mercurial 1.8, which will be released in a few days (March 1st). So, with 1.8 it's no longer needed to enable this extension in the configuration -- the feature will be built-in.
Re: [Python-Dev] Mercurial conversion repositories
On 2011-02-27 01:50, Barry Warsaw wrote:
> On Feb 26, 2011, at 11:45 PM, Adrian Buehlmann wrote:
>
>> You'd have to take this up with Mercurial's BDFL Matt. He is a strong advocate for teaching users to learn to edit their .hg/hgrc files.
>
> Well, I guess it's doubtful I'd change his mind then. :)

Yep.

>> Regarding Bazaar: FWIW, I periodically retried the speed of 'bzr check' - and always gave up again looking at bzr due to the horrible slowness of that command. If I have to use a DVCS I want to be able to check the integrity of my clones in reasonable time. I do it with a cron job on our internal server here and I expect it to have finished checking all our repos when I get to my desk in the morning and look into my email inbox, reading the daily email with the result of the verify runs.
>>
>> After all, we do have everything secured with hashes, so we can use them, don't we?
>
> Do you know how thorough 'bzr check' is? I don't, but then I've never used it or felt the need to. ;)

That's quite amazing. If I talk with people about that, it often turns out that they don't check the integrity of their repos.

Well, hg verify *is* thorough and fast enough. That's good enough for me. And being slow is not sufficient to earn my trust.

FWIW, be aware that Mercurial does not do integrity checks on normal operations, so chances are you will be able to use a repo that fails verify for quite a while -- without even noticing it. For example, you can remove *some* file X inside .hg/store/data and continue to add history to that repo without any sign of errors, as long as the file X isn't used during the operations you do.
Re: [Python-Dev] of branches and heads
On 2011-02-26 23:26, Greg Ewing wrote:
> From: Antoine Pitrou
>> - a "branch" usually means a "named branch": a set of changesets bearing the same label (e.g. "default"); that label is freely chosen by the committer at any point, and enforces no topological characteristic
>
> There are *some* topological restrictions, because hg won't let you assign a branch name that's been used before to a node unless one of its parents has that name. So you can't create two disconnected subgraphs whose nodes have the same branch name.

That's not completely correct. You *can* do that. Mercurial by default assumes you're probably in error if you are trying to create such disconnected branch name subgraphs, but you can convince it that it's really what you want by doing:

  hg branch --force

Example (the glog command requires the graphlog extension enabled [1]):

  $ hg init a
  $ cd a
  $ echo foo > bla
  $ hg ci -Am1
  adding bla
  $ hg branch b1
  marked working directory as branch b1
  $ hg ci -m2
  $ hg branch default
  abort: a branch of the same name already exists
  (use 'hg update' to switch to it)
  $ hg branch --force default
  marked working directory as branch default
  $ hg ci -m3
  created new head
  $ hg glog --template "{rev}, {branch}\n"
  @  2, default
  |
  o  1, b1
  |
  o  0, default

[1] http://mercurial.selenic.com/wiki/GraphlogExtension
Re: [Python-Dev] devguide (hg_transition): Advertise hg import over patch.
On 2011-02-27 16:35, Scott Dial wrote:
> On 2/27/2011 10:18 AM, Antoine Pitrou wrote:
>> Well, chances are TortoiseHG comes with an UI to apply patches (TortoiseSVN had one), so the command-line instructions may be of little use to them.
>
> I don't believe TortoiseHG has such a feature (or I can't find it), although if you have TortoiseSVN, you can still use that as a patch tool.

TortoiseHg can import patches just fine.

FWIW, we are very close to releasing TortoiseHg 2.0 (due March 1st), which ported the current Gtk based TortoiseHg to Qt (although, it was more like a rewrite :-).

For the old Gtk TortoiseHg, see the online docs here: http://tortoisehg.bitbucket.org/manual/1.1/patches.html#import-patches

Homepage for the Qt port: https://bitbucket.org/tortoisehg/thg/wiki/Home

For people on Windows, we have beta installers for the new Qt based TortoiseHg at: https://bitbucket.org/tortoisehg/thg/downloads

Feedback is welcome on thg-...@googlegroups.com or tortoisehg-disc...@lists.sourceforge.net (we moved the development list to google groups).
Re: [Python-Dev] devguide (hg_transition): Advertise hg import over patch.
On 2011-02-27 23:21, Neil Hodgson wrote:
> Adrian Buehlmann:
>
>> FWIW, we are very close to releasing TortoiseHg 2.0 (due March 1st), which ported the current Gtk based TortoiseHg to Qt (although, it was more like a rewrite :-).
>
> I hope this is going to be fast.

Here, the Workbench window [1] starts in under 2s (Windows 7 x64 on Intel Core2 Quad), as installed with the x64 msi (which installs true 64 bit exe's, including 64 bit command line hg).

There's quite a lot of demand loading behind the scenes, so it's fast even for repos with many changesets.

[1] http://tortoisehg.bitbucket.org/manual/2.0/workbench.html (brand new first manual version by Steve was just uploaded a few minutes ago :)
Re: [Python-Dev] hg pull failed
On 2011-03-06 20:09, "Martin v. Löwis" wrote:
>>> So, when I cloned, I should have done something like this:
>>>
>>> hg clone http://hg.python.org/cpython
>>> hg clone cpython 3.2
>>> hg clone 3.2 3.1
>>> hg clone cpython 2.7
>>> hg clone 2.7 2.6
>>> hg clone 2.6 2.5
>>> hg clone 2.5 2.4
>>>
>>> instead of cloning everything from cpython, right?
>>
>> You can still change the "default" entries in .hg/hgrc to point to your desired default location. That's the *only* thing that changes depending on where you clone from.
>
> What I often do is to add another line (below default=, or in ~/.hgrc), so I can do
>
> hg pull # pulls from my local copy
> hg push pydotorg # pushes directly into the remote directory

Not sure if it fits the specific case you mention here, but Mercurial has a reserved path alias name "default-push" with special meaning:

'hg push' pushes to

(1) the path defined as default-push under [paths] in .hg/hgrc
(2) if default-push is not defined, to the default path
(3) if neither is defined, the command aborts with an error message

'hg pull' always pulls from the default path (default-push doesn't matter for pull). (Same for the outgoing/incoming commands.)

http://selenic.com/repo/hg/help/paths
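For illustration, here is what such a .hg/hgrc could look like -- an assumed setup, not taken from the thread (the mirror path is made up, and the user name in the ssh URL is munged in the archive, so it is left as-is): pulls come from a local mirror, pushes go straight to the server.

  [paths]
  default = /home/me/mirrors/cpython
  default-push = ssh://h...@hg.python.org/cpython

With this in place, 'hg pull' reads from the mirror and 'hg push' writes to hg.python.org, matching the lookup order described above.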
Re: [Python-Dev] hg diff
On 2011-03-08 09:38, "Martin v. Löwis" wrote: >> However, as Michael points out, you can have your tools generate the >> patch. For example, it shouldn't be too hard to add a dynamic patch >> generator to Roundup (although I haven't thought about the UI or the >> CPU burden). > > For Mercurial, that's more difficult than you might expect. There is "hg > incoming -p", but it has the nasty problem that it may produce > multiple patches for a single file. I didn't follow/understand closely/completely what your problems is, but I wouldn't be surprised if mercurial's 'incoming' command has its limitations (it's most likely intentional, since remote inspection is limited on purpose and frowned upon by design). In general, you have to have all the changesets in a local repo to enjoy the full power of mercurial's history inspection tools. Maybe the following trick could be interesting for you: If you don't want to do an outright pull from a (possibly dubious) remote repo into your precious local repo yet, you can instead "superimpose" a separate overlay bundle file on your local repo. Example (initial step): $ cd cpython $ hg -q incoming --bundle in.hg 68322:8947c47a9fef 68323:a7e0cff05597 68324:88bbc574cfb0 68325:c43d685e1533 68326:a69ef22b60e3 68327:770d45d22a40 Now, you have a mercurial bundle file named "in.hg" which contains all these incoming changes, but -- of course -- the changes haven't yet been added to the repo. The interesting thing is, you can now superimpose this bundle on your repo, which has the effect that the aggregate is treated as if the changes had already been pulled. Continuing my example, let's now specify the bundle "in.hg" as an *overlay* by using the option -R/--repository [1]: $ hg -R in.hg log -r tip changeset: 68327:770d45d22a40 branch: 2.7 tag: tip parent: 68321:d9cc58f93d72 user:Benjamin Peterson <...> date:Mon Mar 07 22:50:37 2011 -0600 summary: transform izip_longest #11424 The fun thing with overlay bundles is: you have the full power of all mercurial history inspection commands as if the changesets had already been added to your repo. As an added extra bonus, you can later unbundle the bundle into your repo without another network round trip -- assuming you are pleased with what you've seen coming in: $ hg unbundle in.hg adding changesets adding manifests adding file changes added 6 changesets with 6 changes to 3 files (run 'hg update' to get a working copy) BTW, we regularly use overlay bundles under the hood of TortoiseHg. [1] 'hg help -v' says: global options: -R --repository REPOrepository root directory or name of overlay bundle file ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] hg diff
On 2011-03-08 10:53, Adrian Buehlmann wrote:
> On 2011-03-08 09:38, "Martin v. Löwis" wrote:
>>> However, as Michael points out, you can have your tools generate the
>>> patch. For example, it shouldn't be too hard to add a dynamic patch
>>> generator to Roundup (although I haven't thought about the UI or the
>>> CPU burden).
>>
>> For Mercurial, that's more difficult than you might expect. There is "hg
>> incoming -p", but it has the nasty problem that it may produce
>> multiple patches for a single file.
>
> I didn't completely follow what your problem is, but I wouldn't be
> surprised if Mercurial's 'incoming' command has its limitations (that's
> most likely intentional, since remote inspection is limited on purpose
> and frowned upon by design).
>
> In general, you have to have all the changesets in a local repo to enjoy
> the full power of Mercurial's history inspection tools.
>
> Maybe the following trick could be interesting for you: if you don't
> want to do an outright pull from a (possibly dubious) remote repo into
> your precious local repo yet, you can instead "superimpose" a separate
> overlay bundle file on your local repo.

OOPS. I failed to notice that this has already been proposed in the thread "combined hg incoming patch". Sorry for the noise.
Re: [Python-Dev] I am now lost - committed, pulled, merged, what is "collapse"?
On 2011-03-21 14:40, R. David Murray wrote:
> On Mon, 21 Mar 2011 18:33:00 +0900, "Stephen J. Turnbull" wrote:
>> R. David Murray writes:
>>> On Mon, 21 Mar 2011 14:07:46 +0900, "Stephen J. Turnbull" wrote:
>>>> No, at best the DVCS workflow forces the developer on a branch to
>>>> merge and test the revisions that will actually be added to the
>>>> repository, and perhaps notice system-level anomalies before pushing.
>>>
>>> hg does not force the developer to test, it only forces the merge.
>>
>> I didn't say any VCS forces the test; I said that the workflow can (in
>> the best case). That's also inaccurate, of course. I should have
>> said "require", not "force".
>
> The workflow in svn "can" "require" this same thing: before committing,
> you do an svn up and run the test suite.

But with svn you have to redo the test after the commit *if* someone else committed just before you in the meantime, thereby changing the preconditions "behind your back" and creating a different state of the tree compared to the one you ran your test on.

With a DVCS, you can't push in that situation. At least not without creating a new head (which would require --force in Mercurial).
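To make that concrete, a sketch of what Mercurial does in that situation (the exact wording varies by Mercurial version, and the URL is just an example):

$ hg push
pushing to ssh://hg@hg.python.org/cpython
searching for changes
abort: push creates new remote heads on branch 'default'!
(you should pull and merge or use push -f to force)

So the stale push is rejected outright instead of being silently combined with the other developer's commit.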
Re: [Python-Dev] Hg: inter-branch workflow
On 2011-03-22 11:19, John Arbash Meinel wrote:
> On 3/21/2011 9:19 PM, Barry Warsaw wrote:
>> On Mar 21, 2011, at 11:56 AM, Daniel Stutzbach wrote:
>>
>>> Keeping the repository clean makes it easier to use a bisection search to
>>> hunt down the introduction of a bug. If every developer's intermediate
>>> commits make it into the main repository, it's hard to go back to an older
>>> revision to test something, because many of the older revisions will be
>>> broken in some way.
>>
>> So maybe this gets at my earlier question about rebase being cultural
>> vs. technology, and the response about bzr having a strong sense of mainline
>> where hg doesn't.
>>
>> I don't use the bzr-bisect plugin too much, but I think by default it only
>> follows commits on the main line, unless a bisect point is identified within
>> a merge (i.e. side) line. So again, those merged intermediate changes are
>> mostly ignored until they're needed.
>>
>> -Barry
>
> Bazaar is, to my knowledge, the only DVCS that has a "mainline"
> emphasis. Which shows up in quite a few areas. The defaults for 'log',
> having branch-stable revnos [1], and the 'bzr checkout' model for
> managing a mainline.

FWIW, Mercurial's "mainline" is the branch with the name 'default'. This branch name is reserved, and it implies that the head with the highest revision number on that branch will be checked out on 'hg clone'. Which is why it makes sense to have something sensible on the default branch of Mercurial repositories -- or unsuspecting people may be surprised when they clone ("After cloning, the files I have seem to be from a very old state of the project, WTF?").

What's more, 'hg log' suppresses printing the line 'branch: default' for changesets on that branch, and commits after 'hg init' go onto the default branch if 'hg branch <name>' hasn't been used. So the initial branch name of a fresh working copy is 'default'.
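A quick shell sketch of that behaviour (the repo name is made up, and the log output is abbreviated):

$ hg init demo && cd demo
$ hg branch                  # a fresh working copy starts on 'default'
default
$ echo a > a
$ hg commit -Am 'first commit'
$ hg log                     # note: no 'branch:' line is shown for the default branch
changeset:   0:...
tag:         tip
user:        ...
summary:     first commit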
[Python-Dev] running/stepping python backwards
This may seem like an odd question, but I’m intrigued by the idea of using Python as a data definition language with “undo” support. If I were to try and instrument the Python interpreter to be able to step backwards, would that be an unduly difficult or inefficient thing to do? (Please reply to me directly.)
[Python-Dev] file open in python interpreter
Hi all, I am trying to find out which part of the code in the Python interpreter opens the .py file and parses it. In particular, I am trying to find the file open command in that file. I grepped, and I just want to make sure this is it: /Python-2.6/Parser/pgenmain.c

I am intending to take a hash measurement of the .py file just before I open it to run the script. Is the above file the right place to call for the measurement, before the file open function?

Thank you
- adrian
Re: [Python-Dev] file open in python interpreter
Hi all (Benjamin and Nick, thank you!), I have another question, about permissions for the Python interpreter. In my earlier post, I said I want to measure the Python script before it is parsed. What happens is that when I write the measurement of that script file to another file I have (call it 'measurereq'), I get a Permission denied error.

I have modified other programs to do the same measurement (e.g., I modified Bash to measure .sh files) and it works fine, so I suspect it has something to do with the Python files. That is, I'm thinking the Permission denied error is not because I'm unable to write to my 'measurereq' file, but because I'm unable to measure (read) the Python file. Any clues? How can I get around this?

Thanks
- adrian

On Mon, Nov 3, 2008 at 9:57 PM, Nick Coghlan <[EMAIL PROTECTED]> wrote:
> Benjamin Peterson wrote:
>> On Mon, Nov 3, 2008 at 7:25 AM, Benjamin Peterson
>> <[EMAIL PROTECTED]> wrote:
>>> On Mon, Nov 3, 2008 at 1:04 AM, adrian golding <[EMAIL PROTECTED]> wrote:
>>>> Hi all, I am trying to find out which part of the code in the Python
>>>> interpreter opens the .py file and parses it. In particular, I am
>>>> trying to find the file open command in that file. I grepped, and I just
>>>> want to make sure this is it: /Python-2.6/Parser/pgenmain.c
>>>> I am intending to take a hash measurement of the .py file just before I
>>>> open it to run the script. Is the above file the right place to call for
>>>> the measurement, before the file open function?
>>>
>>> You want Parser/tokenizer.c.
>>
>> Sorry, that's not correct. Opening of modules happens in
>> Python/import.c. There's also a case in Modules/main.c.
>
> And some indirect ones from runpy.py (via pkgutil) if you use the -m
> switch, or are executing a zipfile or directory.
>
> But for the specific case of an exact filename being provided on the
> command line, main.c is the one the original poster will want to
> look at (line 567 to be exact).
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia
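One generic way to pin down which open actually fails is to trace the system calls. A debugging sketch, assuming Linux and an illustrative script name:

$ strace -f -e trace=open python myscript.py 2>&1 | grep EACCES

This shows every open() that returned EACCES (Permission denied), so you can tell whether the failure is the read of the .py file or the write to 'measurereq'.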
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
On 2/22/21 12:30 PM, Victor Stinner wrote:
> On Mon, Feb 22, 2021 at 8:19 AM wrote:
>> There are zero technical reasons for what you are planning here.
>
> Multiple core developers explained how it's a maintenance burden. It
> has been explained in multiple different ways.

Well, that doesn't mean these statements are correct. Please don't assume that the people you are talking to are inexperienced developers; we aren't. Downstream distribution maintainers certainly have enough experience with project maintenance to be able to assess whether your claims are valid or not.

>> You are inflating a few lines of autoconf into a "platform support", so you
>> have a reason to justify adding multiple lines of extra autoconf code to
>> make life for downstream distributions harder.
>
> "Making life harder" sounds to me like oh, maybe supporting one
> additional platform is not free and comes with a cost. This cost is
> something called the "maintenance burden".

Please explain to me how guarding some platforms with *additional* lines of autoconf helps to reduce the maintenance burden for the upstream project.

> My question is if Python wants to pay this cost, or if we want to
> transfer the maintenance burden to people who actually care about
> these legacy platforms and architectures.
>
> Your position is: Python must pay this price. My position is: Python
> should not.

No, my position is that such changes should have valid technical reasons, which is simply not the case here. You're not helping your point if you base your arguments on incorrect technical assumptions.

> Honestly, if it's just a few lines, it will be trivial for you to
> maintain a downstream patch and I'm not sure why we even need this
> conversation. If it's more than a few lines, well, again, we come back
> to the problem of the real maintenance burden.

This argument goes both ways. The code we are talking about here is just a few lines of autoconf which are hardly touched during normal development work. And the architecture mapping you have in [1] is probably not even needed (CC @jrtc27).

>> The thing is you made assumptions about how downstream distributions use
>> Python without doing some research first ("16-bit m68k-linux").
>
> I'm talking about 16-bit memory alignment which causes SIGBUS if it's
> not respected on m68k. For example, unicodeobject.c requires special
> code just for this arch:
>
> /*
>  * Issue #17237: m68k is a bit different from most architectures in
>  * that objects do not use "natural alignment" - for example, int and
>  * long are only aligned at 2-byte boundaries. Therefore the assert()
>  * won't work; also, tests have shown that skipping the "optimised
>  * version" will even speed up m68k.
>  */
> #if !defined(__m68k__)
> (...)
>
> Such an issue is hard to guess when you write code, and you usually only
> spot it while actually running the code on such an architecture.

This is the only place in the code where there is an extra section for m68k that I could find. And the bug was fixed by Andreas Schwab [2], so by another downstream maintainer, which was my point earlier in the discussion. We downstreams care about the platform support, hence we keep it working.

Thanks,
Adrian

[1] https://github.com/python/cpython/blob/63298930fb531ba2bb4f23bc3b915dbf1e17e9e1/configure.ac#L724
[2] https://github.com/python/cpython/commit/8b0e98426dd0e1fde93715256413bc707759db6f

--
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer - glaub...@debian.org
`. `'   Freie Universitaet Berlin - glaub...@physik.fu-berlin.de
  `-    GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913
[Python-Dev] Re: Move support of legacy platforms/architectures outside Python
Hello!

On 2/22/21 12:30 PM, Victor Stinner wrote:
>> The thing is you made assumptions about how downstream distributions use
>> Python without doing some research first ("16-bit m68k-linux").
>
> I'm talking about 16-bit memory alignment which causes SIGBUS if it's
> not respected on m68k. For example, unicodeobject.c requires special
> code just for this arch:
>
> /*
>  * Issue #17237: m68k is a bit different from most architectures in
>  * that objects do not use "natural alignment" - for example, int and
>  * long are only aligned at 2-byte boundaries. Therefore the assert()
>  * won't work; also, tests have shown that skipping the "optimised
>  * version" will even speed up m68k.
>  */
> #if !defined(__m68k__)
> (...)
>
> Such an issue is hard to guess when you write code, and you usually only
> spot it while actually running the code on such an architecture.

Just as a heads-up: there is now a PR by Jessica Clarke [1] which gets rid of this architecture-specific #ifdef. I think this is a good approach, as it removes one of your points of complaint. I have already verified that these changes don't break on 32-bit PowerPC, 64-bit SPARC and, of course, m68k.

Thanks,
Adrian

[1] https://github.com/python/cpython/pull/24624

--
 .''`.  John Paul Adrian Glaubitz
: :' :  Debian Developer - glaub...@debian.org
`. `'   Freie Universitaet Berlin - glaub...@physik.fu-berlin.de
  `-    GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913