On Sun, Dec 30, 2018 at 11:07:19AM -0500, Avi Gross wrote:
> Steve,
> 
> I had the same thoughts and many more when I played with these ideas 
> last night.

Pity that one of those thoughts wasn't "I shouldn't suggest a bad 
solution on a mailing list populated by beginners who won't recognise 
how bad it is". That would have saved both you and me a lot of time.

> I thought I stated clearly that this was an EXPLORATION 

And I explored the problems with the design.

[...]
> Consider a NORMAL function with no external gimmicks. Say it 
> accepts an argument (positional or by name does not matter) and 
> processes it by checking the argument type, the specific values 
> against a range expected, and so on. It then unilaterally makes 
> changes. If it expects a single value of type integer, and gets a 
> floating point number, it may round or truncate it back to an integer.
[...]

Sure. But in general, such a design applies a known transformation of 
input argument to value actually used:

- any number -> ignore any fraction part;
- any string -> ignore leading and trailing whitespace;
- any string -> treat as case-insensitive;
- list of values -> sorted list of values;

which is not the same as replacing out-of-band values with some default 
value. In each of the above examples, there is a known, direct 
connection between the caller's argument and the value used:

- the argument, ignoring any fraction part;
- the argument, ignoring leading and trailing whitespace;
- the argument, ignoring case differences;
- the argument, ignoring the original order of items;

rather than:

- throw away the argument, and use an unrelated default value.
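
To make the distinction concrete, here is a minimal sketch (the 
function names are invented for illustration, they are not from this 
thread):

    def lookup(name):
        # Known transformation: the value used is still the caller's
        # argument, just with whitespace and case differences ignored.
        key = name.strip().lower()
        return key

    def set_width(width):
        # Out-of-band replacement: the argument is thrown away and an
        # unrelated default silently takes its place.
        if width < 1:
            width = 80    # the caller never asked for 80
        return width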

The closest analogue would be clamping values to a given range, so 
that out-of-band values are truncated to the nearest bound:

    clamp(999, min=1, max=10)
    => returns 10

but even that should be used with care. And as I said, it is acceptable 
to use a dedicated sentinel value like None or Ellipsis.
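
(Python has no built-in clamp(); a minimal sketch of the function used 
above might be:)

    def clamp(value, min, max):
        # The parameter names shadow the built-ins min() and max(), but
        # only inside this function, and only to match the call above.
        if value < min:
            return min
        if value > max:
            return max
        return value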

But using otherwise valid-looking, but out of range, values as a trigger 
for the default should be used with care, if at all.
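
The sentinel approach looks something like this (a sketch, assuming an 
empty list is the default we want):

    def process(items=None):
        # None is an explicit "give me the default" signal from the
        # caller, not a valid-looking value that happens to be out of
        # range.
        if items is None:
            items = []
        return items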

Some designs are better (more likely to be useful, less likely to lead 
to bugs and surprises) than others.


> If your choice is in some sense wrong, or just not convenient, it 
> replaces it with another choice. 

That would very likely be a poor or dangerous design. If your argument 
is "wrong", that's an error, and errors should not in general pass 
silently.
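
The set_width sketch from earlier, re-written to fail loudly rather 
than silently substituting a value the caller never asked for:

    def set_width(width):
        if width < 1:
            raise ValueError(
                "width must be a positive integer, got %r" % (width,))
        return width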


[...]
> Again, discussing a different scenario. Would you agree that kind of 
> design does happen and sometimes is seen as a smart thing but also as 
> full of pitfalls even sometimes for the wary?

Sure, it does happen, often badly. 


> In that spirit, any design like the ones I played with is equally 
> scary. Even worse, I have a QUESTION. Let me remind readers of the 
> initial idea behind this. You have a function in which you want to 
> communicate with the Python rules that it has a positional argument 
> that will be switched to a default. But the current rules are that the 
> only way to make something optional is by making it a keyword 
> parameter and declare the default right there.

No, that's not the current rules. Positional values can have default 
values too. You just can't skip positional values: there's no way of 
saying "I'm not supplying argument 1, but I am supplying argument 2" 
using only positional arguments:

    function(a, b)  # provide argument 1 and 2

    function(a)     # only provide argument 1

    function(b)     # still argument 1, not argument 2
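
Here is a sketch showing that positional parameters can take defaults, 
but only trailing ones can be omitted:

    def function(a, b=2):
        return (a, b)

    function(1, 5)   # a=1, b=5
    function(1)      # a=1, b falls back to 2
    # but no positional call can mean "use the default for a, set b to 5"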

[...]
> The problem is if you want a non-key word to be used as a default and 
> then also want to gather up additional positional arguments and so on. 
> There is no way designed to do that and possibly there should not be.

I'm open to suggestions for good designs that allow more flexible 
calling conventions, but as Python exists now, you can't skip over 
positional arguments except at the end.
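
For example, a defaulted parameter and *args can co-exist, but the 
defaulted parameter is always filled first (a quick sketch):

    def f(a, b=2, *args):
        return (a, b, args)

    f(1)            # (1, 2, ())
    f(1, 5, 6, 7)   # (1, 5, (6, 7)) -- b is consumed before *args,
                    # so there is no way to skip b and still pass extras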

> So what I am trying to understand is this. When someone types in a 
> function invocation and it is evaluated, when does the programmer get 
> a chance to intervene?

Aside from compile-time errors, any exception can be caught and 
processed *by the caller*:

    try:
        function(a, b)
    except TypeError as e:
        ...

Python also allows you to customize the way exceptions are printed:

https://docs.python.org/3/library/sys.html#sys.excepthook
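
For instance, a sketch that replaces the default traceback display with 
a one-line message (it applies to the whole program, so use it 
sparingly):

    import sys

    def brief_excepthook(exc_type, exc_value, exc_traceback):
        # Called instead of the default handler for any uncaught
        # exception.
        print("%s: %s" % (exc_type.__name__, exc_value), file=sys.stderr)

    sys.excepthook = brief_excepthook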

But the callee -- in this case, "function()" -- doesn't get a chance to 
customize the TypeError generated by the interpreter when you pass the 
wrong number of arguments, or an invalid keyword argument.

(Aside: it is unfortunate that TypeError gets used for both actual type 
errors, and mismatched arguments/parameter errors.)

[...]
> Is there a way to create a function and set it up so you have some 
> control over the error message?

Not easily.

You could, I suppose, take over parameter parsing yourself. Write your 
function like this:

def function(*args, **kwargs):
    ...

and then do all the parameter parsing yourself. Then you can provide any 
error messages you like.
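
A sketch of what that might look like (the parameter names are invented 
for illustration):

    def function(*args, **kwargs):
        # Do the interpreter's argument checking by hand, so the error
        # messages can say whatever we like.
        if len(args) != 2:
            raise TypeError("function() needs exactly two positional "
                            "arguments (a and b), got %d" % len(args))
        if kwargs:
            raise TypeError("function() got unexpected keyword "
                            "argument(s): " + ", ".join(sorted(kwargs)))
        a, b = args
        return (a, b)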

But that's a lot of trouble for not much benefit. I have, however, done 
it myself in order to support keyword-only arguments in Python 2. It's a 
real PITA.


-- 
Steve