Github user srowen commented on the issue:
https://github.com/apache/spark/pull/15733
I added some warnings. Examples:
```
$ ./bin/spark-shell
...
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use
setLogLevel(newLevel).
16/11/02 20:22:49 WARN SparkContext: Support for Java 7 is deprecated as of
Spark 2.0.0
16/11/02 20:22:50 WARN NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
...
$ ./bin/pyspark
...
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use
setLogLevel(newLevel).
16/11/02 20:26:51 WARN NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
/Users/srowen/Documents/spark/python/pyspark/context.py:192: UserWarning:
Support for Python 2.6 is deprecated as of Spark 2.0.0
warnings.warn("Support for Python 2.6 is deprecated as of Spark 2.0.0")
...
```
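For context, the Java 7 warning above comes from a JVM-version check at startup. The snippet below is only a minimal standalone sketch of what such a check could look like; the property name and the version-string match are assumptions for illustration, not the exact code in this PR:
```scala
object JavaVersionCheck {
  def main(args: Array[String]): Unit = {
    // java.version reports strings like "1.7.0_80" on Java 7 and "1.8.0_112" on Java 8
    val javaVersion = System.getProperty("java.version")
    if (javaVersion != null && javaVersion.startsWith("1.7")) {
      // SparkContext would route this through logWarning; stderr keeps the sketch standalone
      Console.err.println(
        "WARN SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0")
    }
  }
}
```
The Python 2.6 warning visible in the pyspark output is the analogous check on the Python side, emitted via `warnings.warn`.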
I found an API for Hadoop version info but it's labeled "private" and
"unstable", so wasn't sure whether it's worth it to access it just to warn.