[
https://issues.apache.org/jira/browse/SPARK-34803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hyukjin Kwon resolved SPARK-34803.
----------------------------------
Fix Version/s: 3.1.2
               3.2.0
Resolution: Fixed
Issue resolved by pull request 31902
[https://github.com/apache/spark/pull/31902]
> Util methods requiring certain versions of Pandas & PyArrow don't pass
> through the raised ImportError
> -----------------------------------------------------------------------------------------------------
>
> Key: SPARK-34803
> URL: https://issues.apache.org/jira/browse/SPARK-34803
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 3.1.1
> Reporter: John Hany
> Assignee: John Hany
> Priority: Major
> Fix For: 3.2.0, 3.1.2
>
>
> When checking that we can import either {{pandas}} or {{pyarrow}}, we
> catch any {{ImportError}} and raise an error declaring the minimum version
> of the respective package that's required to be in the Python environment.
> We don't, however, pass through the {{ImportError}} that might have been
> raised by the package itself. Take {{pandas}} as an example: when we call
> {{import pandas}}, pandas itself might be in the environment, but it can
> still raise an {{ImportError}}
> [https://github.com/pandas-dev/pandas/blob/0.24.x/pandas/compat/__init__.py#L438]
> if another package it requires isn't there. This error wouldn't be passed
> through, and we'd end up with a misleading message stating that {{pandas}}
> isn't in the environment, when in fact it is but something else prevents
> us from importing it.
> I believe this can be improved by chaining the exceptions (see the sketch
> below) and am happy to contribute that fix.
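>
> A minimal sketch of the idea, assuming an illustrative helper name and
> version string (the actual PySpark utility and minimum version may differ):
> {code:python}
> def require_minimum_pandas_version():
>     """Raise ImportError if a usable pandas is not importable."""
>     minimum_pandas_version = "0.23.2"  # illustrative minimum, an assumption
>     try:
>         import pandas  # noqa: F401
>     except ImportError as error:
>         raise ImportError(
>             "Pandas >= %s must be installed; however, "
>             "it was not found." % minimum_pandas_version
>         ) from error  # chain the original cause instead of swallowing it
> {code}
> With {{raise ... from error}}, the traceback prints the original
> {{ImportError}} as the direct cause, so a pandas that is installed but
> fails to import no longer looks like a missing package.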