Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/20625#discussion_r168709505
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -2000,10 +2001,12 @@ def toPandas(self):
                     return _check_dataframe_localize_timestamps(pdf, timezone)
                 else:
                     return pd.DataFrame.from_records([], columns=self.columns)
-            except ImportError as e:
-                msg = "note: pyarrow must be installed and available on calling Python process " \
-                      "if using spark.sql.execution.arrow.enabled=true"
-                raise ImportError("%s\n%s" % (_exception_message(e), msg))
+            except Exception as e:
+                msg = (
+                    "Note: toPandas attempted Arrow optimization because "
+                    "'spark.sql.execution.arrow.enabled' is set to true. Please set it to false "
+                    "to disable this.")
--- End diff ---
hmm, this says why it's trying Arrow and how to turn it off, but it doesn't
say why I would have to turn it off. Perhaps say something like "pyarrow is not
found" (if that is the cause and we know it)?
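
Just to illustrate the idea (this is not the change under review): the except
branches could split out the missing-pyarrow case so the message names the
cause. The wording, the choice to re-raise rather than fall back, and the
`pyspark.util` import path are assumptions here; `_exception_message` is the
helper the removed lines already use.

```python
from pyspark.util import _exception_message

try:
    import pyarrow  # the Arrow path needs this at runtime
    # ... Arrow-based conversion would go here, as in the diff above ...
except ImportError as e:
    # pyarrow (or one of its dependencies) is missing: say so explicitly.
    msg = ("toPandas attempted Arrow optimization because "
           "'spark.sql.execution.arrow.enabled' is set to true, but "
           "pyarrow could not be imported: %s. Install pyarrow, or set "
           "the conf to false to disable this." % _exception_message(e))
    raise ImportError(msg)
except Exception as e:
    # Some other failure while converting via Arrow.
    msg = ("toPandas attempted Arrow optimization because "
           "'spark.sql.execution.arrow.enabled' is set to true. "
           "Set it to false to disable this. Underlying error: %s"
           % _exception_message(e))
    raise RuntimeError(msg)
```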
---