I'm trying to keep an open mind, but I am perplexed about something in Python that strikes me as a poor design.
py> def func(a, b):
py>     print a, b
py> func(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: func() takes exactly 2 arguments (1 given)

Why is the exception raised by passing the wrong number of arguments a TypeError instead of a more specific exception?

I'm asking because of a practical problem I had. I have written some test suites for a module, and wanted a test to ensure that the functions were accepting the correct number of arguments, e.g. if the specs say they take three arguments, that they actually do fail as advertised if you pass the wrong number. That should be simple stuff to do, except that I have to distinguish between TypeErrors raised because of wrong argument counts and TypeErrors raised inside the function.

To add an extra layer of complication, the error string from the TypeError differs according to how many parameters were expected and how many were supplied, e.g.:

func() takes exactly 2 arguments (1 given)
func() takes at least 2 arguments (1 given)
func() takes at most 1 argument (2 given)

etc. I worked around this problem by predicting what error message to expect given N expected arguments and M supplied arguments. Yuck: this is a messy, inelegant, ugly hack :-( Thank goodness that functions are first-class objects that support introspection :-)

So, I'm wondering if there is a good reason why TypeError is raised instead of (say) ArgumentError, or if it is just a wart of the language for historical reasons?

-- 
Steven.
-- 
http://mail.python.org/mailman/listinfo/python-list
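A sketch of the introspection route alluded to above: in modern Python, `inspect.signature` lets a test check whether a call would bind, without calling the function and without parsing TypeError messages. This is a minimal illustration, not the original poster's code; the function under test (`takes_three`) is hypothetical:

```python
import inspect

def takes_three(a, b, c):
    # Hypothetical function standing in for the module under test.
    return (a, b, c)

def accepts_arg_count(func, n):
    """Return True if calling func with n positional arguments would
    bind successfully -- determined by introspection alone."""
    sig = inspect.signature(func)
    try:
        # bind() raises TypeError for a bad argument count, but never
        # executes the function body, so it cannot be confused with a
        # TypeError raised inside the function itself.
        sig.bind(*range(n))
        return True
    except TypeError:
        return False

print(accepts_arg_count(takes_three, 3))  # True
print(accepts_arg_count(takes_three, 2))  # False
```

Because the binding check never runs the body, the test no longer needs to predict which "takes exactly/at least/at most N arguments" message the interpreter will produce.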