GitHub user srowen commented on the issue:
https://github.com/apache/spark/pull/18949
Previously, the concern was that this workaround was still necessary
because the runtime environment might provide a different snappy version from
the one Spark uses. I thought that was because Hadoop provides it, but I'm not
sure that's true: I can't see a dependency on it in Hadoop, and `mvn
dependency:tree` shows snappy-java as a compile-scope dependency at 1.1.2.6
across all of Spark. We don't explicitly pick it up from the environment.
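For anyone who wants to double-check the runtime side rather than the build,
here is a minimal sketch (not from this PR; the object name is just
illustrative) that prints the snappy-java version actually loaded and which
jar it came from, using snappy-java's public `Snappy.getNativeLibraryVersion`:

```scala
import org.xerial.snappy.Snappy

object SnappyVersionCheck {
  def main(args: Array[String]): Unit = {
    // Version of the native snappy library bundled with the snappy-java
    // jar that the JVM actually loaded.
    println(s"snappy-java native version: ${Snappy.getNativeLibraryVersion}")
    // Which jar on the classpath supplied the Snappy class; if Hadoop or
    // the environment injected its own copy, it would show up here.
    // (getCodeSource can be null for bootstrap-loaded classes, not here.)
    val location = classOf[Snappy].getProtectionDomain.getCodeSource.getLocation
    println(s"Snappy class loaded from: $location")
  }
}
```

If something in the environment were shading in a different copy, the
reported location would point outside Spark's own jars. For the build-time
view, `mvn dependency:tree -Dincludes=org.xerial.snappy` narrows the tree to
just this artifact.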
I think this is likely OK, but I wonder if I'm missing something.
@viirya, since you commented on it, any thoughts?