Github user zero323 commented on the issue:
https://github.com/apache/spark/pull/16537
For me it is all about the bigger picture. I've been working with Python
for quite a while now (probably too long for my own good) and I am used to
two things:
- A language that is relatively forgiving when it comes to types. I am more
used to thinking in terms of abstract base classes than concrete types (see
the sketch after this list).
- A language that communicates failures in a clear way. If there is a problem
with incompatible types or interfaces, I expect clear feedback, and of course
an interactive debugger on top of that if something goes particularly wrong.
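
To illustrate the first point, a minimal sketch in plain Python (the
function name is mine, purely illustrative): checking against an abstract
base class accepts any conforming numeric type, not just a concrete one.

```python
import numbers

def scale(x):
    # Accepts int, float, Fraction, registered NumPy scalars, etc. --
    # anything registered as numbers.Real -- rather than demanding float.
    if not isinstance(x, numbers.Real):
        raise TypeError("expected a real number, got %s" % type(x).__name__)
    return 2 * x

scale(1)    # OK: int is a Real
scale(0.5)  # OK: float is a Real
scale("1")  # TypeError, raised immediately with a clear message
```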
PySpark is not there yet. We get Py4J exceptions (albeit much improved in
2.x), we get runtime exceptions with huge JVM tracebacks even when it would
be possible to fail fast (on the driver), and finally we get silent errors
(like returning `int` from a UDF with declared type `float`).
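
A minimal sketch of that last failure mode (names are mine): the UDF declares
`DoubleType` but returns a Python `int`, and instead of raising, the
mismatched values silently come back as null.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.getOrCreate()

# Declared return type is DoubleType, but the function returns a Python int.
as_double = udf(lambda x: x + 1, DoubleType())

# No error is raised; every value in the result is silently null.
spark.range(3).select(as_double("id").alias("v")).show()
```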
It is not always possible or practical to avoid these failures, but I
believe that in cases where:
- We have very strict requirements regarding types.
- The cost of checking is low (O(1), not for example O(N)).
- We fail early and prevent an expensive failure in the middle of a
pipeline.

it is a good idea to be proactive (a sketch of what I mean follows below).
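
A minimal sketch of the kind of cheap, driver-side check I have in mind (the
helper name and message are illustrative, not from this PR):

```python
def _require_type(name, value, expected):
    """O(1) check that fails on the driver, before any job is launched."""
    if not isinstance(value, expected):
        raise TypeError(
            "Parameter %s should be %s, got %s"
            % (name, expected.__name__, type(value).__name__)
        )

_require_type("threshold", 0.5, float)    # passes
_require_type("threshold", "0.5", float)  # raises TypeError immediately
```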