>
> So if A is capable of looking up "B.C.D", but it's also capable of
> looking up "B" and following the chain... what happens when you
> reassign something? What does this line of code do?
>

A is capable of looking up B.C.D in one step if you use `getattr`, because
the way A forwards attribute lookups is roughly equivalent to this
(copy-pasted from an earlier post):

class NamespaceProxy:
    ...
    def __getattr__(self, name):
        return getattr(self.__parent_scope__,
                       f"{self.__namespace_name__}.{name}")

    def __setattr__(self, name, value):
        setattr(self.__parent_scope__,
                f"{self.__namespace_name__}.{name}", value)


So it would concatenate its own name ("A"), a dot, and the name being
looked up ("B.C.D"), and then serve up globals()['A.B.C.D'], which is the
key under which the value of D is stored. It can also follow the chain
because:

>>> globals()['A']
<namespace object <A> of <module '__main__' (built-in)>>

>>> globals()['A.B']
<namespace object <A.B> of <module '__main__' (built-in)>>

>>> globals()['A.B.C']
<namespace object <A.B.C> of <module '__main__' (built-in)>>

These also exist in the module globals() (they were set there under those
names by the `namespace` statement). A chained lookup just involves looking
these namespace objects up one by one in globals().
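
If it helps, here's a rough, runnable simulation of that forwarding in
today's Python. To be clear: the NamespaceProxy bookkeeping and the dotted
globals() keys below are just stand-ins I've written for what the
compiler/runtime would do for real under this proposal.

import sys

class NamespaceProxy:
    def __init__(self, parent_scope, namespace_name):
        # Bypass our own __setattr__ for the two bookkeeping attributes.
        object.__setattr__(self, "__parent_scope__", parent_scope)
        object.__setattr__(self, "__namespace_name__", namespace_name)

    def __getattr__(self, name):
        return getattr(self.__parent_scope__,
                       f"{self.__namespace_name__}.{name}")

    def __setattr__(self, name, value):
        setattr(self.__parent_scope__,
                f"{self.__namespace_name__}.{name}", value)

_mod = sys.modules[__name__]

# Roughly what `namespace A: namespace B: namespace C: D = 42` would leave
# behind in the module globals() (42 is just an arbitrary example value):
globals()["A"] = NamespaceProxy(_mod, "A")
globals()["A.B"] = NamespaceProxy(_mod, "A.B")
globals()["A.B.C"] = NamespaceProxy(_mod, "A.B.C")
globals()["A.B.C.D"] = 42

print(getattr(A, "B.C.D"))  # one step: reads globals()["A.B.C.D"]
print(A.B.C.D)              # chained: "A.B" -> "A.B.C" -> "A.B.C.D"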

> >>> A.B.C.D = "spam"
>
> Does it change what getattr(A, "B.C.D") returns?
>


Yes, it would, because it is no different to just explicitly setting:

globals()['A.B.C.D'] = "spam"
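
Continuing the runnable sketch from above (hypothetical plumbing only), the
assignment goes through the `A.B.C` proxy's __setattr__ and lands on that
same key:

A.B.C.D = "spam"             # proxy does setattr(module, "A.B.C.D", "spam")
print(globals()["A.B.C.D"])  # 'spam'
print(getattr(A, "B.C.D"))   # 'spam' -- same key, so the answer changes too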


> That's not why ideas get dismissed out of hand. Onus is not on the
> status quo to prove itself; onus is on the proposal to show its value.
> If you want your idea to be taken seriously, you have to demonstrate
> that it would give some sort of real improvement.
>

I get this, I do. And you're not wrong.

I guess what I'm trying to say is that there's a lot to be said for being
kind. I've seen several instances of people coming here with a new idea and
sort-of cringed sympathetically at the kind of response they've gotten. I
can just imagine someone relatively new to Python getting really puppy-dog
excited because they think they've got something worthwhile to contribute
(without understanding why it may not work) and jumping through a few hoops
to get onto this list only to be practically yelled out of the room by
cranky old developers. What a horribly demoralizing experience that must
be! There are some people who genuinely act like it is a personal attack on
them when someone else comes along to discuss their shiny new idea on a
list intended for... discussing potential new language ideas. I don't think
that reflects on our community very well to newcomers. It costs us nothing
to be kind, even if you *do* have to say something another person might not
like to hear. There are ways to soften it and be informational rather than
rude. Especially in a medium like this where tone doesn't carry and it's
easy to misconstrue something as harsher than the person meant it.

Even little things like your smiley face earlier:

> Precisely. :)
>

Go a shockingly long way towards conveying that we're on the same page
having a discussion to try and improve Python together, rather than an
ego-fuelled shouting match.

> That's not the same, though. Binary operators are NOT syntactic sugar
> for method calls - they do a number of checks that allow for reflected
> methods and such.
>

Fair enough, I just used the first example that came to mind, which in this
case was wrong. My point was more about syntactic sugar in general.


> To give an example:
>
>     def spam():
>         return "spam spam spam!"
>
>     def eggs():
>         return spam()
>
>     namespace Shop:
>         def spam():
>             return "There's not much call for spam here."
>         def eggs():
>             return spam()
>
>     print(eggs())
>     # should print "spam spam spam!"
>     print(Shop.eggs())
>     # should print "There's not much call for spam here."
>


I'm guessing this was a typo and you meant to type:

    print(spam())
    # should print "spam spam spam!"
    print(Shop.spam())
    # should print "There's not much call for spam here."

Because if you did, then this is precisely how it *would* work under this
proposal. :)

In the module globals() you would have the first version of `spam` under
the key 'spam', and the second one under 'Shop.spam', which is the object
that the attribute lookup `Shop.spam` would return.
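
In terms of the simulation earlier in this post (reusing that hypothetical
NamespaceProxy and the `_mod` module reference), the module would end up
looking roughly like this:

def spam():
    return "spam spam spam!"

def _shop_spam():
    return "There's not much call for spam here."

globals()["Shop"] = NamespaceProxy(_mod, "Shop")
globals()["Shop.spam"] = _shop_spam

print(spam())        # spam spam spam!  (plain global lookup of 'spam')
print(Shop.spam())   # There's not much call for spam here.
                     # (Shop forwards to globals()["Shop.spam"])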

> If we have a namespace concept, it should actually be a namespace, not
> an weird compiler directive to bind names in the surrounding global
> scope.
>

This is an implementation detail that the end-user would basically not need
to worry about except when iterating dynamically over the globals()/class
__dict__/object __dict__. But that's the entire point: `namespace` is
intended to build namespaces within some scope, not to create whole new
scopes. If you wanted a new scope you could create a new
class/object/module.

> For example, you could rewrite this:
> >
> > class Legs:
> >   def __init__(self, left, right):
> >       self.left, self.right = left, right
> >
> >
> > class Biped:
> >     def __init__(self):
> >         self.legs = Legs(left=LeftLeg(), right=RightLeg())
> >
> >
> > As this:
> >
> > class Biped:
> >     def __init__(self):
> >         namespace self.legs:
> >             left, right = LeftLeg(), RightLeg()
>
>
> Oh, I hope that's not what you consider a good use-case! For starters,
> the "before" with two classes seems to be a total misuse of classes.
> `Legs` is a do-nothing class, and `self.legs` seems to be adding an
> unnecessary level of indirection that has no functional or conceptual
> benefit.
>
> I hope that the purpose of "namespace" is not to encourage people to
> write bad code like the above more easily.
>

See, I disagree on this point.

I agree that the 'before' code is not at all ideal. But it is what you
would have to do currently if you wanted to namespace out your Leg objects
with statically analyzable names (instead of using a SimpleNamespace, or a
dict, or similar).

Sure, in this toy example the benefit is minor. There are only two
attributes in `Biped` and they are both legs. You could get away with
writing them as `self.left_leg` and `self.right_leg`. But I would argue
that this violates DRY principles somewhat (just like the example of
pseudo-namespaced methods using underscores that Paul mentioned earlier)
and feels hacky. What if the project requirements later changed and you
needed to change the '_leg' suffix to '_ambulatory_appendage'? For two
attributes it's not so bad, but what if you've got an `Octoped` class with
eight distinct and uniquely identifiable legs? At some point in the process
of refactoring eight separate attributes you would probably realize that
your code is repeating itself, which is one of the very first things
programming students are taught to avoid.

I would argue that this is better:

class Octoped:
    def __init__(self):
        self.foo = "bar"

        namespace self.leg:
            namespace left:
                anterior = LeftLeg()
                posterior = LeftLeg()
                ...  # other anatomical terms
            namespace right:
                anterior = RightLeg()
                ...  # same as above

And now you have neatly namespaced legs, accessible like this:

octoped = Octoped()
octoped.leg.left.posterior
octoped.leg.right.anterior
octoped.foo
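
To spell out what I think the proposal implies here: namespace blocks bind
flat dotted keys in the nearest real scope, which for `namespace self.leg:`
is the instance itself. A hand-written stand-in in today's Python (with
dummy Leg classes, and leaving out the intermediate namespace objects the
real thing would presumably also store):

class LeftLeg: pass
class RightLeg: pass

class Octoped:
    def __init__(self):
        self.foo = "bar"
        # what the nested namespace blocks would bind, written by hand:
        self.__dict__["leg.left.anterior"] = LeftLeg()
        self.__dict__["leg.left.posterior"] = LeftLeg()
        self.__dict__["leg.right.anterior"] = RightLeg()

octoped = Octoped()
print(sorted(octoped.__dict__))
# ['foo', 'leg.left.anterior', 'leg.left.posterior', 'leg.right.anterior']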

Creating lots of unnecessary classes *just for namespacing*, as we
currently have to resort to, isn't ideal.

What you're calling 'an unnecessary level of indirection' quickly becomes
a big boost to clarity (both in the written code and when using said code
in an IDE) in more complex situations. Ask yourself why library authors
namespace out the API of their library across multiple modules/packages.
It's the same reason.


> Surely that's just a limitation of the *specific* tools. There is no
> reason why they couldn't be upgraded to understand SimpleNamespace.
>

See, I would argue that the reason static analysis tools don't give
autocompletions for SimpleNamespace is fundamentally the same reason they
don't give them for dicts: those are mappings that are *intended* to change
over the course of their lifetime and have keys added/popped/altered (for
example, in argparse). So attempting to statically analyze them would be of
limited use at best, and misleading at worst.

This is different to attributes, which are considered to be more
'permanent' (for lack of a better word). When you use an attribute you
haven't assigned to elsewhere, most linters will give you an 'undefined
reference' warning, for instance.
they might also warn you if you try to assign to an attribute that you
haven't declared within the constructor. I don't think it would be
appropriate to strictly statically check SimpleNamespace any more than it
would be to try to do it with dictionary keys. Different tools for
different jobs.


> If I have understood you, that means that things will break when you do:
>
>     Z = A
>     del A
>     Z.B.C  # NameError name 'A.B' is not defined
>
> Objects should not rely on their parents keeping the name they were
> originally defined under.
>

This wouldn't happen. It would work as expected. You would still have a
reference to `namespace A` as normal, only bound to the name `Z` rather
than `A`. `del A` only removes the key 'A' from globals(); the 'A.B' and
'A.B.C' entries that the namespace forwards its lookups to are untouched,
so `Z.B.C` still resolves. I don't see anything in the semantics of `del`
that would cause this to behave in an unexpected way.

I guess you might be confused that something being set on Z later on, like:

>>> Z.foo = "bar"

is saved under `globals()['A.foo']` rather than `globals()['Z.foo']`, but if
you follow the namespace object referenced by `Z` back to its source you
would see that it's because it was originally declared as `namespace A`.

Basically, a namespace acquires its name during the namespace block (taking
into account any other namespace blocks it is nested under), and that name
doesn't change when you pass the object around as a reference.
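
Again in terms of the earlier runnable sketch (hypothetical plumbing only),
your example would play out like this:

Z = A
del A                       # only removes globals()["A"]; "A.B" etc. survive

Z.B.C                       # no NameError: "A.B" and "A.B.C" are still there
print(Z.B.C.D)              # and the leaf value is still reachable
Z.foo = "bar"               # forwards to setattr(module, "A.foo", "bar")
print(globals()["A.foo"])   # 'bar' -- the proxy's baked-in name is still "A"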


> That's no different from the situation today:
>
>     obj = spam.eggs.cheese.aardvark.hovercraft
>     obj.eels  # only one lookup needed
>
Yes, exactly. I've been trying to explain that attribute lookup for
namespaces uses *the same* semantics as normal attribute lookup (it just
looks things up on its parent scope rather than 'owning' the attributes
itself). It doesn't require any special-cased semantics.

> Can I just say that referencing `vars(sys.modules[__name__])` *really*
> works against the clarity of your examples?
>
> Are there situations where that couldn't be written as
>
>     globals()["constants.NAMESPACED_CONSTANT"]
>
> instead?
>

You're right. I forgot that globals() is literally just a reference to the
module's `__dict__`, not some magical accessor object for module attributes.
I'll use `globals()` from now on, starting with this post. Thank you :)

> And remind me, what's `Truevars`?
>

I think when quoting me, your editor must've moved the `True` onto the
start of the following line; the original version is:

vars(sys.modules[__name__])["constants.NAMESPACED_CONSTANT"] = True
vars(sys.modules[__name__])["constants.inner.ANOTHER_CONSTANT"] = "hi"

Although, as you've pointed out, it is better written as:

globals()["constants.NAMESPACED_CONSTANT"] =
Trueglobals()["constants.inner.ANOTHER_CONSTANT"] = "hi"



On Wed, May 5, 2021 at 7:11 AM Steven D'Aprano <st...@pearwood.info> wrote:

> My comments follow, interleaved with Matt's.
>
>
> On Mon, May 03, 2021 at 11:30:51PM +0100, Matt del Valle wrote:
>
> > But you've pretty much perfectly identified the benefits here, I'll just
> > elaborate on them a bit.
> >
> > - the indentation visually separates blocks of conceptually-grouped
> > attributes/methods in the actual code (a gain in clarity when code is
> read)
>
> Indeed, that is something I often miss: a way to conceptually group
> named functions, classes and variables which is lighter weight than
> separating them into a new file.
>
> But you don't need a new keyword for that. A new keyword would be nice,
> but grouping alone may not be sufficient to justify a keyword.
>
>
> > - the dot notation you use to invoke such methods improves the experience
> > for library consumers by giving a small amount conceptually-linked
> > autocompletions at each namespace step within a class with a large API,
> > rather getting a huge flat list.
>
> We don't need a new keyword for people to separate names with dots.
>
> Although I agree with your position regarding nested APIs, *to a point*,
> I should mention that, for what it is worth, it goes against the Zen:
>
> Flat is better than nested.
>
>
> [...]
> > - you can put functions inside a namespace block, which would become
> > methods if you had put them in a class block
>
> This is a feature which I have *really* missed.
>
>
> > - you don't have the same (in some cases extremely unintuitive)
> > scoping/variable binding rules that you do within a class block (see the
> > link in my doc). It's all just module scope.
>
> On the other hand, I don't think I like this. What I would expect is
> that namespaces ought to be a separate scope.
>
> To give an example:
>
>     def spam():
>         return "spam spam spam!"
>
>     def eggs():
>         return spam()
>
>     namespace Shop:
>         def spam():
>             return "There's not much call for spam here."
>         def eggs():
>             return spam()
>
>     print(eggs())
>     # should print "spam spam spam!"
>     print(Shop.eggs())
>     # should print "There's not much call for spam here."
>
>
> If we have a namespace concept, it should actually be a namespace, not
> an weird compiler directive to bind names in the surrounding global
> scope.
>
>
> > - it mode clearly indicates intent (you don't want a whole new class,
> just
> > a new namespace)
>
> Indeed.
>
>
> > When using the namespaces within a method (using self):
> >
> > - It allows you to namespace out your instance attributes without needing
> > to create intermediate objects (an improvement to the memory footprint,
> and
> > less do-nothing classes to clutter up your codebase)
> >
> > - While the above point of space complexity will not alway be relevant I
> > think the more salient point is that creating intermediate objects for
> > namespacing is often cognitively more effort than it's worth. And humans
> > are lazy creatures by nature. So I feel like having an easy and intuitive
> > way of doing it would have a positive effect on people's usage patterns.
> > It's one of those things where you likely wouldn't appreciate the
> benefits
> > until you'd actually gotten to play around with it a bit in the wild.
>
>
> I'm not entirely sure what this means.
>
>
> > For example, you could rewrite this:
> >
> > class Legs:
> >   def __init__(self, left, right):
> >       self.left, self.right = left, right
> >
> >
> > class Biped:
> >     def __init__(self):
> >         self.legs = Legs(left=LeftLeg(), right=RightLeg())
> >
> >
> > As this:
> >
> > class Biped:
> >     def __init__(self):
> >         namespace self.legs:
> >             left, right = LeftLeg(), RightLeg()
>
>
> Oh, I hope that's not what you consider a good use-case! For starters,
> the "before" with two classes seems to be a total misuse of classes.
> `Legs` is a do-nothing class, and `self.legs` seems to be adding an
> unnecessary level of indirection that has no functional or conceptual
> benefit.
>
> I hope that the purpose of "namespace" is not to encourage people to
> write bad code like the above more easily.
>
>
> > And sure, the benefit for a single instance of this is small. But across
> a
> > large codebase it adds up. It completely takes away the tradeoff between
> > having neatly namespaced code where it makes sense to do so and writing a
> > lot of needless intermediate classes.
> >
> > SimpleNamespace does not help you here as much as you would think because
> > it cannot be understood by static code analysis tools when invoked like
> > this:
> >
> > class Biped:
> >     def __init__(self):
> >         self.legs = SimpleNamespace(left=LeftLeg(), right=RightLeg())
>
> Surely that's just a limitation of the *specific* tools. There is no
> reason why they couldn't be upgraded to understand SimpleNamespace.
>
>
> [...]
> > > 2. If __dict__ contains "B.C" and "B", then presumably the interpreter
> > > would need to try combinations against the outer __dict__ as well as
> B. Is
> > > the namespace proxy you've mentioned intended to prevent further
> lookup in
> > > the "B" attribute?
> > >
> >
> > The namespace proxy must know its fully-qualified name all the way up to
> > its parent scope (this is the bit that would require some magic in the
> > python implementation), so it only needs to forward on a single attribute
> > lookup to its parent scope. It does not need to perform several
> > intermediate lookups on all of its parent namespaces.
> >
> > So in the case of:
> >
> > namespace A:
> >     namespace B:
> >         C = True
> >
> >
> > >>>A.B
> > <namespace object <A.B> of <module '__main__' (built-in)>>
> >
> >
> > Note that namespace B 'knows' that its name is 'A.B', not just 'B'
>
> If I have understood you, that means that things will break when you do:
>
>     Z = A
>     del A
>     Z.B.C  # NameError name 'A.B' is not defined
>
> Objects should not rely on their parents keeping the name they were
> originally defined under.
>
>
>
> [...]
> > Traversing all the way through A.B.C does involve 2 intermediate lookups
> > (looking up 'A.B' on the parent scope from namespace A, then looking up
> > 'A.B.C' on the parent scope from namespace A.B). But once you have a
> > reference to a deeply nested namespace, looking up any value on it is
> only
> > a single lookup step.
>
> That's no different from the situation today:
>
>     obj = spam.eggs.cheese.aardvark.hovercraft
>     obj.eels  # only one lookup needed
>
>
> > > 3. Can namespaces be nested? If so, will their attributed they always
> > > resolve to flat set of attributes in the encapsulating class?
> > >
> >
> > Yes, namespaces can be nested arbitrarily, and they will always set their
> > attributes in the nearest real scope (module/class/locals). There's an
> > example of this early on in the doc:
> >
> > namespace constants:
> >     NAMESPACED_CONSTANT = True
> >
> >     namespace inner:
> >         ANOTHER_CONSTANT = "hi"
> >
> > Which is like:
> >
> > vars(sys.modules[__name__])["constants.NAMESPACED_CONSTANT"] =
> > Truevars(sys.modules[__name__])["constants.inner.ANOTHER_CONSTANT"] =
> > "hi"
>
> Can I just say that referencing `vars(sys.modules[__name__])` *really*
> works against the clarity of your examples?
>
> Are there situations where that couldn't be written as
>
>     globals()["constants.NAMESPACED_CONSTANT"]
>
> instead?
>
> And remind me, what's `Truevars`?
>
>
>
> --
> Steve
