On 05/15/18 17:55, Jeroen Demeyer wrote:
On 2018-05-15 18:36, Petr Viktorin wrote:
Naturally, large-scale
changes have less of a chance there.

Does it really matter that much how large the change is? I think you are focusing too much on the change instead of the end result.

As I said in my previous post, I could certainly make less disruptive changes. But would that really be better? (If you think that the answer is "yes" here, I honestly want to know).

Yes, I believe it is better.
The larger a change is, the harder it is to understand, meaning that fewer people can meaningfully join the conversation, think about how it interacts with their own use cases, and notice (and think through) any unpleasant details.
Less disruptive changes tend to have a better backwards compatibility story.
A less intertwined change makes it easier to revert just a single part, in case that becomes necessary.

I could make the code less different from what we have today, but at the cost of added complexity. Building on top of the existing code is like building on a bad foundation: the higher you build, the messier it gets. Instead, I propose a solid new foundation. Of course, that requires more work to build, but once it is built, the finished building looks a lot better.

To continue the analogy: the tenants have been customizing their apartments inside that building, possibly depending on structural details that we might think should be hidden from them. And they expect to continue living there while the foundation is being swapped under them :)

With such a "finished product" PEP, it's hard to see if some of the
various problems could be solved in a better way -- faster, more
maintainable, or less disruptive.

By "faster", do you mean runtime speed? I'm pretty confident that we won't lose anything there.

As I argued above, my PEP might very well make things "more maintainable", but this is of course very subjective. And "less disruptive" was never a goal for this PEP.

It's also harder from a psychological point of view: you obviously
already put in a lot of good work, and it's harder to let that work go if
an even better solution is found.

I hope that this won't be my psychology. As a developer, I prefer to focus on problems rather than on solutions: I don't want to push a particular solution, I want to fix a particular problem. If an even better solution is accepted, I will be a very happy man.

What I would hate is that this PEP gets rejected because some people claim that the problem can be solved in a better way, but without actually suggesting such a better way.

Mark Shannon has an upcoming PEP with an alternative to some of the issues. (Not all of them – but less intertwined is better, all else being equal.)

Is a branching class hierarchy, with quite a few new flags for
feature selection, the kind of simplicity we want?

Maybe yes, because it *concentrates* all the complexity in one small place. Currently, we have several independent classes (builtin_function_or_method, method_descriptor, function, method) which all require various forms of special casing in the interpreter, with some code duplication. With my PEP, this all goes away and instead we need to understand just one class, namely base_function.
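To make the status quo concrete, here is a small sketch against current CPython showing the four independent callable classes mentioned above, by name:

```python
def f():
    pass

class C:
    def m(self):
        pass

# Four independent callable classes, each special-cased in the interpreter:
print(type(len).__name__)        # builtin_function_or_method
print(type(str.upper).__name__)  # method_descriptor
print(type(f).__name__)          # function
print(type(C().m).__name__)      # method
```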

Would it be possible to first decouple things, reducing the complexity,
and then tackle the individual problems?

What do you mean with "decouple things"? Can you be more concrete?

Currently, the "outside" of a function (how it looks when introspected) is tied to the "inside" (what happens internally when it's called). That's what I'd like to see decoupled. Can we better enable pydoc/IPython developers to tackle introspection problems without wading deep into the internals and call optimizations?

The class hierarchy still makes it hard to decouple the introspection
side (how functions look on the outside) from the calling mechanism (how
the calling works internally).

Any class that wants to benefit from fast function calls can inherit from base_function. It can add whatever attributes it wants and it can choose to implement documentation and/or introspection in whatever way it wants. It can choose to not care about that at all. That looks very decoupled to me.

But, it still has to inherit from base_function to "look like a function". Can we remove that limitation in favor of duck typing?
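As a sketch of the limitation (the `Adder` class below is hypothetical, not from the PEP): a perfectly usable callable that doesn't inherit from any function type fails the type check, even though duck typing accepts it.

```python
import inspect

class Adder:
    """A hypothetical callable that quacks like a function."""
    __name__ = "adder"

    def __call__(self, a, b):
        return a + b

add = Adder()
print(add(1, 2))                # 3 -- calls just fine
print(callable(add))            # True -- duck typing is satisfied
print(inspect.isfunction(add))  # False -- the type check rejects it
```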

Starting from an idea and ironing out the details lets you (and,
since you published your results, everyone else) figure out the tricky
details. But ultimately it's exploring one path of doing things – it
doesn't necessarily lead to the best way of doing something.

So far I haven't seen any other proposals...

That's a good question. Maybe inspect.isfunction() serves too many use
cases to be useful. Cython functions should behave like "def" functions
in some cases, and like built-in functions in others.

From the outside, i.e. the user's point of view, I want them to behave like Python functions. Whether it's implemented in C or Python should just be an implementation detail. Of course there are attributes like __code__ which dive into implementation details, so there you will see the difference.
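For example, under current CPython the implementation difference leaks through exactly such attributes:

```python
def f():
    pass

# A "def" function carries implementation details such as __code__;
# a built-in function implemented in C does not:
print(hasattr(f, "__code__"))    # True
print(hasattr(len, "__code__"))  # False
```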

Before we change how inspect.isfunction ultimately behaves,
I'd like to make its purpose clearer (and try to check how that meshes
with the current use cases).

The problem is that this is not easy to do. You could search CPython for occurrences of inspect.isfunction() and you could search your favorite Python projects. This will give you some indication, but I'm not sure whether that will be representative.

From what I can tell, inspect.isfunction() is mainly used as a guard for attribute access: it implies, for example, that a __globals__ attribute exists.
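A small sketch of that guard pattern: inspect.isfunction() is effectively an isinstance check against types.FunctionType, so a True result guarantees function-only attributes such as __globals__.

```python
import inspect
import types

def f():
    pass

# isfunction() is a type check, so it doubles as an attribute guard:
assert inspect.isfunction(f) == isinstance(f, types.FunctionType)
if inspect.isfunction(f):
    print(f.__globals__ is not None)  # True -- safe to access

# Built-ins fail the check and indeed lack __globals__:
print(inspect.isfunction(len))      # False
print(hasattr(len, "__globals__"))  # False
```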

That's unfortunate -- I don't see "is a function" and "has __globals__" as related. Can we provide better tools for people who currently need to rely on this?

And it's used by documentation tools to decide that an object should be documented as a Python function whose signature can be extracted using inspect.signature().

I think inspect.signature should never call "inspect.isfunction" for objects with a (lazily computed) __signature__ attribute. If that's not the case, let's fix that. Special handling for functions is reasonable because the signature object and lazy attributes are much higher-level than functions. But, IMO, anything that's not low-level (i.e. not a real "def" function or a method) can and should use __signature__.
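A sketch of that mechanism as it already works today (the CFunctionLike class is hypothetical): inspect.signature() prefers an object's __signature__ attribute, which can be computed lazily via a property, so no isfunction() check is needed.

```python
import inspect

class CFunctionLike:
    """Hypothetical non-"def" callable that advertises its own signature."""

    def __call__(self, x, y=0):
        return x + y

    @property
    def __signature__(self):
        # Computed lazily, on first access by inspect.signature()
        params = [
            inspect.Parameter("x", inspect.Parameter.POSITIONAL_OR_KEYWORD),
            inspect.Parameter("y", inspect.Parameter.POSITIONAL_OR_KEYWORD,
                              default=0),
        ]
        return inspect.Signature(params)

func = CFunctionLike()
print(inspect.isfunction(func))  # False -- not a "def" function
print(inspect.signature(func))   # (x, y=0) -- taken from __signature__
```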



_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev