Eric Fahlgren wrote:
I think you're missing something here, since it seems clear to me that
indeed the arguments are evaluated prior to the function call.
I think the OP may be confusing "evaluating the function" with
"calling the function".
If the function being called is determined by
Guido van Rossum wrote:
The source for sleep() isn't very helpful -- e.g. @coroutine is mostly a
backwards compatibility thing.
So how are you supposed to write that *without* using @coroutine?
--
Greg
___
Python-Dev mailing list
Chris Angelico wrote:
I'm not sure there's any distinction between a "point" and a "vector
from the origin to a point".
They transform differently. For example, translation affects
a point, but makes no difference to a vector.
There are two ways of dealing with that. One is to use vectors
to
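[One standard way to make the point/vector distinction concrete -- my sketch, not code from the thread -- is homogeneous coordinates: a point carries w=1 and is moved by a translation, while a vector carries w=0 and is unaffected by the same transformation.]

```python
# Homogeneous-coordinate sketch (illustration only): the trailing
# component w distinguishes points (w=1) from vectors (w=0).
def translate(entity, dx, dy):
    x, y, w = entity
    # the translation is scaled by w, so vectors (w=0) are untouched
    return (x + dx * w, y + dy * w, w)

point = (1.0, 2.0, 1)    # a location in space
vector = (1.0, 2.0, 0)   # a displacement

assert translate(point, 5, 5) == (6.0, 7.0, 1)
assert translate(vector, 5, 5) == (1.0, 2.0, 0)
```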
Nick Coghlan wrote:
Perhaps the check could be:
(type(lhs) == type(rhs) or fields(lhs) == fields(rhs)) and all(individual fields match)
I think the types should *always* have to match, or at least
one should be a subclass of the other. Consider:
@dataclass
class Point3d:
    x: float
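[For reference, the strict approach is what dataclasses eventually shipped with: the generated __eq__ compares field tuples only when the classes match exactly, and otherwise returns NotImplemented -- not even a subclass compares equal.]

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

@dataclass
class Point3d:
    x: float
    y: float
    z: float

# generated __eq__ requires other.__class__ is self.__class__
assert Point(1.0, 2.0) == Point(1.0, 2.0)
assert Point(1.0, 2.0) != Point3d(1.0, 2.0, 0.0)
```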
Serhiy Storchaka wrote:
Ivan explained that this function should be a rough equivalent to
def f():
    t = [(yield i) for i in range(3)]
    return (x for x in t)
This is a *rough* equivalent. There are differences in details.
The details would seem to be overwhelmingly important,
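[Editor's note: `yield` inside a comprehension became a SyntaxError in Python 3.8, so to experiment with the semantics under discussion you have to write the expansion by hand. A sketch of the hand-expanded form of Ivan's rough equivalent:]

```python
def f():
    # hand-expanded form of: t = [(yield i) for i in range(3)]
    t = []
    for i in range(3):
        t.append((yield i))
    # the "rough equivalent" then wraps the eager snapshot in a lazy genexp
    return (x for x in t)

g = f()
assert next(g) == 0
assert g.send('a') == 1
assert g.send('b') == 2
try:
    g.send('c')
except StopIteration as stop:
    # the return value travels on StopIteration, as usual for generators
    assert list(stop.value) == ['a', 'b', 'c']
```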
Nick Coghlan wrote:
def example():
    comp1 = yield from [(yield x) for x in ('1st', '2nd')]
    comp2 = yield from [(yield x) for x in ('3rd', '4th')]
    return comp1, comp2
If the implicit "yield from" idea seems too magical, then the other
direction we could go is to
Guido van Rossum wrote:
the extra scope is now part of the language definition.
It can't be removed as a "bug fix".
Does anyone actually rely on the scope-ness of comprehensions
in any way other than the fact that it prevents local variable
leakage?
If not, probably nobody would notice if it
Guido van Rossum wrote:
The debugger does stop at each iteration. It does see a local named ".0"
I suppose there currently is no way for the debugger to map the variable
names to what they are named in the source, right?
If the hidden local were named "a.0" where "a" is the original
name,
Serhiy Storchaka wrote:
Ivan explained that
this function should be a rough equivalent to
def f():
    t = [(yield i) for i in range(3)]
    return (x for x in t)
This seems useless to me. It turns a lazy iterator
into an eager one, which is a gross violation of the
author's intent in
Paul Moore wrote:
has anyone confirmed
why a function scope was considered necessary at the time of the
original implementation, but it's apparently not now?
At the time I got the impression that nobody wanted to
spend the time necessary to design and implement a subscope
mechanism. What's
Ivan Levkivskyi wrote:
"People sometimes want to refactor for-loops containing `yield` into a
comprehension
By the way, do we have any real-life examples of people wanting to
do this? It might help us decide what the semantics should be.
Ivan Levkivskyi wrote:
On 23 November 2017 at 05:44, Greg Ewing <greg.ew...@canterbury.ac.nz> wrote:
def g():
    return ((yield i) for i in range(10))
I think this code should be just equivalent to this code
def g():
Ivan Levkivskyi wrote:
"People sometimes want to refactor for-loops containing `yield` into a
comprehension but that doesn't work (particularly because of the hidden
function scope) - lets make it a SyntaxError"
Personally I'd be fine with removing the implicit function
scope from
Ivan Levkivskyi wrote:
The key idea is that neither comprehensions nor generator expressions
should create a function scope surrounding the `expr`
I don't see how you can avoid an implicit function scope in
the case of a generator expression, though. And I can't see
how to make yield in a
Paul Moore wrote:
3. List comprehensions are the same as list(the equivalent generator
expression).
I don't think that's ever been quite true -- there have
always been odd cases such as what happens if you
raise StopIteration in list(generator_expression).
To my mind, these equivalences have
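[Editor's note: one concrete way the equivalence breaks, demonstrable on Python 3.7+ where PEP 479 is the default: a StopIteration raised in a list comprehension escapes unchanged, but inside a generator expression it is converted to RuntimeError.]

```python
def boom():
    raise StopIteration

# in a list comprehension the exception propagates as-is
try:
    [boom() for x in [1]]
except StopIteration:
    comp_exc = StopIteration

# in list(generator expression), PEP 479 turns it into RuntimeError
# instead of silently truncating the result as older Pythons did
try:
    list(boom() for x in [1])
except RuntimeError:
    genexp_exc = RuntimeError

assert comp_exc is StopIteration
assert genexp_exc is RuntimeError
```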
Paul Moore wrote:
At the moment, I know I tend to treat Python semantics as I always
did, but with an implicit proviso, "unless async is involved, when I
can't assume any of my intuitions apply". That's *not* a good
situation to be in.
It's disappointing that PEP 3152 was rejected, because I
Ivan Levkivskyi wrote:
while `g = list((yield i) for i in range(3))` is defined as this code:
def __gen():
    for i in range(3):
        yield (yield i)
g = list(__gen())
Since this is almost certainly not what was intended, I think that
'yield' inside a generator expression
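[Spelling out just how surprising Ivan's desugaring is: list() drives the generator with plain next() calls, which send None, so the loop's own values interleave with the sent Nones.]

```python
def __gen():
    for i in range(3):
        yield (yield i)

# each loop iteration yields i first, then yields whatever was sent
# back in (None, when driven by next()/list())
g = list(__gen())
assert g == [0, None, 1, None, 2, None]
```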
Ethan Furman wrote:
The second way is fairly similar, but instead of replacing the entire
sys.modules entry, its class is updated to be the class just created --
something like sys.modules['mymod'].__class__ = MyNewClass .
If the recent suggestion to replace the global namespace
dict with the
Guido van Rossum wrote:
But Python's syntax changes in nearly every release.
The changes are almost always additions, so there's no
reason why the AST can't remain backwards compatible.
the AST level ... elides many details
(such as whitespace and parentheses).
That's okay, because the AST
Guido van Rossum wrote:
The PEP answers that clearly (under Implementation):
> If an annotation was already a string, this string is preserved
> verbatim.
This bothers me, because it means the transformation from
what you write in the source and the object you get at
run time is not
Ethan Furman wrote:
Contrariwise, "annotation_strings" sounds like a different type of
annotation -- they are now being stored as strings, instead of something
else.
How about "annotations_as_strings"?
Tres Seaver wrote:
IIUC, that would be as expected: you would see the warnings when running
your test suite exercising that imported code (which should run with all
warnings enabled), but not when running the app.
But then what benefit is there in turning on deprecation
warnings automatically
Guido van Rossum wrote:
I did not assume totally opaque -- but code objects are not very
introspection friendly (and they have no strong compatibility guarantees).
If I understand the proposal correctly, there wouldn't be any
point in trying to introspect the lambdas/thunks/whatever.
They're
On 8 November 2017 at 19:21, Antoine Pitrou wrote:
The idea that __main__ scripts should
get special treatment here is entirely gratuitous.
When I'm writing an app in Python, very often my __main__ is
just a stub that imports the actual functionality from another
module
Guido van Rossum wrote:
From this I understand that when using e.g. findall() it forces
successive matches to be adjacent.
Seems to me this would be better addressed using an option
to findall() rather than being part of the regex. That would
avoid the issue of where to keep the state.
Tim Peters wrote:
In that case, it's because Python
_does_ mutate the objects' refcount members under the covers, and so
the OS ends up making fresh copies of the memory anyway.
Has anyone ever considered addressing that by moving the
refcounts out of the objects and keeping them somewhere
Elvis Pranskevichus wrote:
By default, generators reference an empty LogicalContext object that is
allocated once (like the None object). We can do that because LCs are
immutable.
Ah, I see. That wasn't clear from the implementation, where
gen.__logical_context__ =
There are a couple of things in the PEP I'm confused about:
1) Under "Generators" it says:
once set in the generator, the context variable is guaranteed
not to change between iterations;
This suggests that you're not allowed to set() a given
context variable more than once in a given
There is one thing I misunderstood. Since generators and
coroutines are almost exactly the same underneath, I had
thought that the automatic logical_context creation for
generators was also going to apply to coroutines, but
from reading the PEP again it seems that's not the case.
Somehow I missed
Yury Selivanov wrote:
I understand what Koos is
talking about, but I really don't follow you. Using the
"with-statements to be skipped" language is very confusing and doesn't
help to understand you.
If I understand correctly, instead of using a context
manager, your fractions example could be
Yury Selivanov wrote:
def foo():
    var = ContextVar()
    var.set(1)

for _ in range(10**6): foo()
If 'var' is strongly referenced, we would have a bunch of them.
Erk. This is not how I envisaged context vars would be
used. What I thought you would do is this:
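[Greg's own example is cut off in the archive. For what it's worth, the pattern the contextvars documentation eventually recommended (PEP 567) -- which may or may not be what he had in mind -- creates the variable once at module level:]

```python
import contextvars

# one ContextVar object for the life of the program, created at import
var = contextvars.ContextVar('var', default=0)

def foo():
    var.set(1)   # reuses the module-level variable; no allocation here

for _ in range(10**6):
    foo()        # a million calls, still just one ContextVar

assert var.get() == 1
```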
Yury Selivanov wrote:
The PEP gives you a Task Local Storage, where Task is:
1. your single-threaded code
2. a generator
3. an async task
If you correctly use context managers, PEP 550 works intuitively and
similar to how one would think that threading.local() should work.
My version works
Yury Selivanov wrote:
1. So essentially this means that we will have one "local context" per
context manager storing one value.
I can't see that being a major problem. Context vars will
(I hope!) be very rare things, and needing to change a
bunch of them in one function ought to be rarer
Yury Selivanov wrote:
I still think that giving Python programmers one strong rule: "context
mutation is always isolated in generators" makes it easier to reason
about the EC and write maintainable code.
Whereas I think it makes code *harder* to reason about,
because to take advantage of it
Guido van Rossum wrote:
Yeah, so my claim this is simply a non-problem, and you've pretty much
just proved that by failing to come up with pointers to actual code that
would suffer from this. Clearly you're not aware of any such code.
In response I'd ask Yury to come up with examples of real
Yury Selivanov wrote:
It would be great if you or Greg could show a couple of real-world
examples showing the "issue" (with the current PEP 550
APIs/semantics).
Here's one way that refactoring could trip you up.
Start with this:
async def foo():
    calculate_something()
    # in a
Guido van Rossum wrote:
This feels like a very abstract argument. I have a feeling that context
state propagating out of a call is used relatively rarely -- it must
work for cases where you refactor something that changes context inline
into a utility function (e.g. decimal.setcontext()), but
Nathaniel Smith wrote:
The implementation strategy changed radically between v1
and v2 because of considerations around generator (not coroutine)
semantics. I'm not sure what more it can do to dispel these feelings
:-).
I can't say the changes have dispelled any feelings on my part.
The
Nathaniel Smith wrote:
Literally the first motivating example at the beginning of the PEP
('def fractions ...') involves only generators, not coroutines, and
only works correctly if generators get special handling. (In fact, I'd
be curious to see how Greg's {push,pop}_local_storage could handle
Ivan Levkivskyi wrote:
Normal generators fall out from this "scheme", and it looks like their
behavior is determined by the fact that coroutines are implemented as
generators.
This is what I disagree with. Generators don't implement
coroutines, they implement *parts* of coroutines.
We want
Yury Selivanov wrote:
Greg, have you seen this new section:
https://www.python.org/dev/peps/pep-0550/#should-yield-from-leak-context-changes
That section seems to be addressing the idea of a generator
behaving differently depending on whether you use yield-from
on it.
I never suggested that,
Yury Selivanov wrote:
Question: how to write a context manager with contextvar.new?
var = new_context_var()
class CM:
def __enter__(self):
var.new(42)
with CM():
print(var.get() or 'None')
My understanding is that the above code will print "None", because
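[Editor's note: in the contextvars API that eventually shipped (PEP 567, superseding the draft `new()` above), a context manager is written with the set/reset token pattern, which avoids this problem:]

```python
import contextvars

var = contextvars.ContextVar('var', default=None)

class CM:
    def __enter__(self):
        # set() returns a Token recording the previous value
        self._token = var.set(42)
    def __exit__(self, *exc):
        var.reset(self._token)

with CM():
    assert var.get() == 42   # visible inside the with block
assert var.get() is None     # restored on exit
```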
Chris Angelico wrote:
This particular example is safe, because the arguments get passed
individually - so 'args' has one reference, plus there's one more for
the actual function call
However, that's also true when you use the += operator,
so if the optimisation is to trigger at all in any
Yury Selivanov wrote:
BTW we already have mechanisms to always propagate context to the
caller -- just use threading.local() or a global variable.
But then you don't have a way to *not* propagate the
context change when you don't want to.
Here's my suggestion: Make an explicit distinction
Yury Selivanov wrote:
While we want "yield from" to have semantics close to a function call,
That's not what I said! I said that "yield from foo()" should
have semantics close to a function call. If you separate the
"yield from" from the "foo()", then of course you can get
different
Yury Selivanov wrote:
Consider the following generator:
def gen():
    with decimal.context(...):
        yield
We don't want gen's context to leak to the outer scope
That's understandable, but fixing that problem shouldn't come
at the expense of breaking the ability to
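[Editor's note: the leak Yury describes is easy to demonstrate with decimal's thread-local context, since a generator suspended inside a with block leaves its context change visible to the caller until it is resumed or closed. A sketch (assumes the default precision of 28 has not been changed):]

```python
import decimal

def gen():
    with decimal.localcontext() as ctx:
        ctx.prec = 5
        yield decimal.Decimal(1) / 7

g = gen()
value = next(g)             # g is now suspended inside the with block
assert value == decimal.Decimal('0.14286')

# while g is suspended, its context change "leaks" into the caller:
assert decimal.getcontext().prec == 5

g.close()                   # unwinds the with block
assert decimal.getcontext().prec == 28
```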
Yury Selivanov wrote:
I'm saying that the following should not work:
def nested_gen():
    set_some_context()
    yield

def gen():
    # some_context is not set
    yield from nested_gen()
    # use some_context ???
And I'm saying it *should* work, otherwise it breaks
Yury Selivanov wrote:
On Mon, Aug 28, 2017 at 1:33 PM, Stefan Krah wrote:
[..]
* Context managers like decimal contexts, numpy.errstate, and
warnings.catch_warnings.
The decimal context works like this:
1) There is a default context template (user settable).
2)
Barry Warsaw wrote:
I actually
think Python’s scoping rules are fairly easy to grasp,
The problem is that the word "scope", as generally used in
relation to programming languages, has to do with visibility
of names. A variable is "in scope" at a particular point in the
code if you can access
Barry Warsaw wrote:
This is my problem with using "Context" for this PEP. Although I can't
keep up with all names being thrown around,
Not sure whether it helps, but a similar concept in
some Scheme dialects is called a "fluid binding".
e.g. Section 5.4 of
Guido van Rossum wrote:
Perhaps the latter can be
shortened to just ContextStack (since the Foo part can probably be
guessed from context. :-)
-0.9, if I saw something called ContextStack turn up in a traceback
I wouldn't necessarily jump to the conclusion that it was a stack
of FooContexts
Ethan Furman wrote:
So I like ExecutionContext for the stack of
WhateverWeCallTheOtherContext contexts. But what do we call it?
How about ExecutionContextFrame, by analogy with stack/stack frame.
Yury Selivanov wrote:
I can certainly see how "ContextFrame" can be correct if we think
about "frame" as a generic term, but in Python, people will
inadvertently think about a connection with frame objects/stacks.
Calling it ExecutionContextFrame rather than just ContextFrame
would make it
Barry Warsaw wrote:
namedtuple is great and clever, but it’s also a bit clunky. It has a weird
signature and requires a made up type name.
Maybe a metaclass could be used to make something
like this possible:
class Foo(NamedTuple, fields = 'x,y,z'):
...
Then the name is explicit
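[A sketch of how such a metaclass might work -- all names here (NamedTupleMeta, NamedTupleBase) are hypothetical, invented for illustration. The metaclass intercepts the class statement's keyword argument and hands back a namedtuple class instead of building a new class body:]

```python
from collections import namedtuple

class NamedTupleMeta(type):
    # when a 'fields' keyword is supplied, substitute a namedtuple class
    def __new__(mcls, name, bases, ns, fields=None):
        if fields is None:
            return super().__new__(mcls, name, bases, ns)
        return namedtuple(name, fields.replace(',', ' '))

class NamedTupleBase(metaclass=NamedTupleMeta):
    pass

class Foo(NamedTupleBase, fields='x,y,z'):
    pass

p = Foo(1, 2, 3)
assert p.x == 1 and p._fields == ('x', 'y', 'z')
```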
Seems like a good idea to tighten it up.
If a style guide is going to say "you can either do X or
not do X", it might as well not mention X at all. :-)
Serhiy Storchaka wrote:
In the early ages of C, structures didn't create namespaces, and member names
were globals.
That would certainly explain the origins of it, but I'm
pretty sure it wasn't the case by the time Python was
invented. So Guido must have liked it for other reasons.
Steven D'Aprano wrote:
What does "tp" stand for? Type something, I guess.
I think it's just short for "type". There's an old tradition
in C of giving member names a short prefix reminiscent of
the type they belong to. Not sure why, maybe someone thought
it helped readability.
David Wilson wrote:
They're referred to as slots throughout typeobject.c
That's probably where he got the term from. But it really refers
to C-level fields in the type object. Magic methods that don't
correspond to C-level type fields are not called slots.
M.-A. Lemburg wrote:
In my role as PSF TM committee member, it's often painful to have to
tell community members that they cannot use e.g. really nice looking
variants of the Python logo for their projects. Let's not add more
pain.
But it's always within the PSF's power to give that community
When testing things like this, as well as testing whether it speeds
up your target cases, remember to check that it doesn't slow everything
else down due to the increased size of the eval code pushing something
out of instruction cache or some such effect.
Oleg Broytman wrote:
On Sat, Mar 18, 2017 at 10:27:58AM -0500, Ryan Gonzalez wrote:
exec -a would seem to end up setting argv[0] on the CPython interpreter
itself, which I don't think is the desired effect...
That's exactly what OP asked -- how to change that?
Maybe
Armin Rigo wrote:
The theoretical kind of regexp is about giving a "yes/no" answer,
whereas the concrete "re" or "regexp" modules gives a match object,
which lets you ask for the subgroups' location, for example.
Another issue is that the theoretical engine has no notion of
greedy/non-greedy
Elliot Gorokhovsky wrote:
I ran the
benchmark a couple of times and the numbers seem to exactly line up
something like one in five times; perhaps not that crazy considering
they're executing nearly the same code?
Could this be a result of a time being measured in seconds
somewhere and then
Random832 wrote:
Or is this a special kind of lock that you
can "assert isn't locked" without locking it for yourself, and
INCREF/DECREF does so?
I don't think that would work. It might be unlocked at
the moment you test it, but someone might lock it between
then and the following
Nathaniel Smith wrote:
IIRC to handle
this gilectomy adds per-object mutexes that you have to hold whenever
you're mucking around with that object's internals.
What counts as "mucking around with the object's internals",
though?
If I do the C equivalent of:
x = somedict[5]
On Tue, Oct 11, 2016 at 4:03 AM, Paul Moore wrote:
If you have an object that forever owns a reference to
another object, it's safe to return borrowed references.
Even if there are such cases, it seems we're going to
have to be a lot more careful about identifying them.
Larry Hastings wrote:
In contrast, the "borrowed" reference returned by PyWeakRef_GetObject()
seems to be "borrowed" from some unspecified entity.
It's a delocalised quantum reference, borrowing a little
bit from all strong references in existence. :-)
Ben Leslie wrote:
But the idea of transmitting these offsets outside of a running
process is not something that I had anticipated. It got me thinking:
is there a guarantee that these opaque values returned from tell() are
stable across different versions of Python?
Are they even guaranteed to
MRAB wrote:
On 2016-09-13 07:57, Mark Lawrence via Python-Dev wrote:
"tables the idea" has the US meaning of close it down, not the UK
meaning of open it up? :)
A better phrase would've been "shelves the idea". There's even a module
in Python called "shelve", which makes it Pythonic. :-)
Mark Shannon wrote:
Unless of course, others may have a different idea of what the "type of
a variable" means.
To me, it means that for all assignments `var = expr`
the type of `expr` must be a subtype of the variable,
and for all uses of var, the type of the use is the same as the
On Sun, Sep 04, 2016 at 12:31:26PM +0100, Mark Shannon wrote:
As defined in PEP 526, I think that type
annotations become a hindrance to type inference.
In Haskell-like languages, type annotations have no
ability to influence whether types can be inferred.
The compiler infers a type for
Nick Coghlan wrote:
For synchronous code, that's a relatively easy burden to push back
onto the programmer - assuming fair thread scheduling, a with
statement can reliably ensure prompt resource cleanup.
That assurance goes out the window as soon as you explicitly pause
code execution
Ethan Furman wrote:
The problem with only having `bchr` is that it doesn't help with
`bytearray`; the problem with not having `bchr` is who wants to write
`bytes.fromord`?
If we called it 'bytes.fnord' (From Numeric Ordinal)
people would want to write it just for the fun factor.
Chris Angelico wrote:
Forcing people to write 1.0 just to be compatible with 1.5 will cause
a lot of annoyance.
Indeed, this would be unacceptable IMO.
The checker could have a type 'smallint' that it
considers promotable to float. But that wouldn't
avoid the problem entirely, because e.g.
Nikolaus Rath wrote:
I think EOFError conveys more information. UnpicklingError can mean a
lot of things, EOFError tells you the precise problem: pickle expected
more data, but there was nothing left.
I think EOFError should be used for EOF between pickles,
but UnpicklingError should be used
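[Editor's note: "EOF between pickles" is in fact how pickle already behaves for a cleanly exhausted stream, which makes the sentinel loop natural. A small demonstration:]

```python
import io
import pickle

stream = io.BytesIO()
pickle.dump('a', stream)
pickle.dump('b', stream)
stream.seek(0)

items = []
while True:
    try:
        items.append(pickle.load(stream))
    except EOFError:      # clean end-of-stream between two pickles
        break

assert items == ['a', 'b']
```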
Nick Coghlan wrote:
Something that isn't currently defined in PEP 520 ... is where
descriptors implicitly defined via __slots__ will appear relative to
other attributes.
In the place where the __slots__ attribute appears?
Steven D'Aprano wrote:
I'm
satisfied that the choice made by Python is the right choice, and that
it meets the spirit (if, arguably, not the letter) of the RFC.
IMO it meets the letter (if you read it a certain way)
but *not* the spirit.
Simon Cross wrote:
If we only support one, I would prefer it to be bytes since (bytes ->
bytes -> unicode) seems like less overhead and slightly conceptually
clearer than (bytes -> unicode -> bytes),
Whereas bytes -> unicode, followed if needed by unicode -> bytes,
seems conceptually clearer
Stephen J. Turnbull wrote:
The RFC is unclear on this point, but I read it as specifying the
ASCII coded character set, not the ASCII repertoire of (abstract)
characters.
Well, I think you've misread it. Or at least there is a
more general reading possible that is entirely consistent
with the
Stephen J. Turnbull wrote:
it does refer to *encoded* characters as the output of
the encoding process:
> The encoding process represents 24-bit groups of input bits
> as output strings of 4 encoded characters.
The "encoding" being referred to there is the encoding
from input bytes
R. David Murray wrote:
The fundamental purpose of the base64 encoding is to take a series
of arbitrary bytes and reversibly turn them into another series of
bytes in which the eighth bit is not significant.
No, it's not. If that were its only purpose, it would be
called base128, and the RFC
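[Editor's note: Greg's point in numbers: three input bytes (24 bits) become four output characters of six bits each, drawn from a 64-character ASCII alphabet -- hence "base64", not "base128".]

```python
import base64

# 0x00 0xFF 0x10 -> 6-bit groups 0, 15, 60, 16 -> 'A', 'P', '8', 'Q'
encoded = base64.b64encode(b'\x00\xff\x10')
assert encoded == b'AP8Q'

# 3 bytes always encode to exactly 4 characters (no padding needed)
assert len(base64.b64encode(b'abc')) == 4
```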
Joao S. O. Bueno wrote:
The arguments about compactness and what is most likely to happen
next apply (transmission through a binary network protocol),
I'm not convinced that this is what is most likely to
happen next *in a Python program*. How many people
implement their own binary network
Steven D'Aprano wrote:
- Linux /dev/urandom doesn't block, but it might return predictable,
poor-quality pseudo-random bytes (i.e. a potential exploit);
- Other OSes may block for potentially many minutes (i.e. a
potential DOS).
It's even possible that it could block *forever*.
There
On Jun 8, 2016 4:04 PM, "Neil Schemenauer" wrote:
>
> I've temporarily named it "Pragmatic Python". I'd like a better
> name if someone can suggest one. Maybe something like Perverted,
> Debauched or Impure Python.
Python Two and Three Quarters.
Steven D'Aprano wrote:
That can't be right. How can you reduce memory usage by more than one
hundred percent? That would mean you have saved more memory than was
originally used and are now using a negative amount of memory.
It emails an order for more RAM to Amazon, who send out
a robot
Steven D'Aprano wrote:
TypeAlias? Because A is an alias for int?
That suggests it's just another name for the same type,
but it's not. It's a distinct type as far as the static
type checker is concerned.
Guido van Rossum wrote:
Also -- the most important thing. :-) What to call these things? We're
pretty much settled on the semantics and how to create them (A =
NewType('A', int)) but what should we call types like A when we're
talking about them? "New types" sounds awkward.
Fake types? Virtual
Paul Moore wrote:
On 13 May 2016 at 17:57, Ethan Furman wrote:
1) What is a wallet garden?
I assumed he meant "walled garden"
Works either way -- you'd want a wall around your wallet
garden to stop people stealing your wallets.
Guido van Rossum wrote:
We could also consider this a general weakness of the "alternative
constructors are class methods" pattern. If instead these alternative
constructors were folded into the main constructor (e.g. via special
keyword args) it would be altogether clearer what a subclass
Brett Cannon wrote:
There's all sorts of weird stuff going on in that import, like having a
dot in the `from` part of the import instead of doing `from .py2exe
import mf as modulefinder`.
If I did that, it would try to import mf from the
py2exe submodule rather than the global one.
In
It seems that 2to3 is a bit simplistic when it comes to
translating import statements. I have a module GUI.py2exe
containing:
import py2exe.mf as modulefinder
2to3 translates this into:
from . import py2exe.mf as modulefinder
which is a syntax error.
It looks like 2to3 is getting
Chris Barker - NOAA Federal wrote:
Why in the world do the os.path functions need to work with Path
objects?
So that applications using path objects can pass them
to library code that uses os.path to manipulate them.
I'm confused about what a bytes path IS -- is it encoded?
It's a
Jon Ribbens wrote:
So far it looks like blocking "_*" and the frame object attributes
appears to be sufficient.
Even if your sandbox as it currently exists is secure, it's
only an extremely restricted subset. You seem to be assuming
that if your technique works so far, then it can be extended
Ethan Furman wrote:
# after new protocol with bytes/str support
def zingar(a_path):
    a_path = fspath(a_path)
    if not isinstance(a_path, (bytes, str)):
        raise TypeError('bytes or str required')
    ...
I think that one would be just
def zingar(a_path):
    a_path
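[Editor's note: Greg's simplification is exactly how os.fspath turned out to behave -- it raises TypeError itself for anything that is not str, bytes, or os.PathLike, so the explicit isinstance check is redundant:]

```python
import os
import pathlib

assert os.fspath('a/b') == 'a/b'
assert os.fspath(b'a/b') == b'a/b'
assert os.fspath(pathlib.PurePosixPath('a/b')) == 'a/b'

# non-path objects are rejected by os.fspath itself
try:
    os.fspath(123)
except TypeError:
    rejected = True
assert rejected
```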
Nick Coghlan wrote:
Similar to my proposal for dealing with DirEntry.path being a
bytes-like object, I'd like to suggest raising TypeError in __fspath__
if the request is nonsensical for the currently running system - *nix
systems can *manipulate* Windows paths (and vice-versa), but actually
On 9 April 2016 at 23:02, R. David Murray wrote:
That is, a 'filename' is the identifier we've assigned to this thing
pointed to by an inode in linux, but an os path is a text representation
of the path from the root filename to a specified filename. That is,
the path
Eric Snow wrote:
All this matters because it impacts the value returned from
__ospath__(). Should it return the string representation of the path
for the current OS or some standardized representation?
What standardized representation? I'm not aware of such
a thing.
I'd expect
the former.
Nick Coghlan wrote:
We want to be able to readily use the protocol helper in builtin
modules like os and low level Python modules like os.path, which means
we want it to be much lower down in the import hierarchy than pathlib.
Also, it's more general than that. It works on any
object that
Brett Cannon wrote:
Depends if you use `/` or `\` as your path separator
Or whether your pathnames look entirely different, e.g VMS:
device:[topdir.subdir.subsubdir]filename.ext;version
Pathnames are very much OS-dependent in both syntax *and* semantics.
Even the main two in use today
Chris Angelico wrote:
-1 for __os_path__, unless it's reasonable to justify it as "most of
the standard library uses Path objects, but os.path uses strings, so
before you pass a Path to anything in os.path, you call path.ospath()
on it, which calls __os_path__()".
A less roundabout