Github user BryanCutler commented on the issue:
https://github.com/apache/spark/pull/21939
Wow, thanks @shaneknapp for helping to get this worked out! I think your
plan to move to Python 3.5 sounds great, but it does make me a bit nervous
making a change like this at a critical time before the code freeze. I was
thinking that maybe there is a temporary way to get this tested without
completely dropping Python 3.4. Would it be possible to add an additional job
to Jenkins that runs in an environment with python 3.5 and pyarrow 0.10.0 and
leave the existing tests as is for now? It would increase the testing time, but
we could limit it by just running the pyspark-sql module, like:
`python/run-tests --python-executable=python3.5 --modules=pyspark-sql`. So we
would be testing with python3.5/pyarrow0.10.0 and python3.4/pyarrow0.8.0
(which is passing) for now, then after the code freeze we could discuss more
about dropping Python 3.4 and executing your plan. What do you think about
this? I'm not sure if it's straightforward to add a job like this or if it
will send you to a further level of dependency hell - so if that's the case
then it's probably not a good idea.
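
The interim job could be sketched roughly like this (the sanity-check step and the worker layout are assumptions for illustration, not from the actual Jenkins config; only the `run-tests` invocation comes from the comment above):

```shell
# Hypothetical extra Jenkins build step (environment details are assumptions):
# run only the pyspark-sql module against Python 3.5 / pyarrow 0.10.0,
# leaving the existing Python 3.4 / pyarrow 0.8.0 jobs unchanged.

# sanity-check that the worker's Python 3.5 env has the expected pyarrow
python3.5 -c "import pyarrow; print(pyarrow.__version__)"

# limit the run to the pyspark-sql module to keep the added test time down
python/run-tests --python-executable=python3.5 --modules=pyspark-sql
```

This keeps the new coverage isolated to one module, so a failure there points clearly at the python3.5/pyarrow 0.10.0 combination rather than the existing jobs.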
---