GitHub user a-roberts commented on the issue:
https://github.com/apache/spark/pull/14961
Thanks, so are we saying Netty 4.0.29 can't be upgraded to 4.0.41 without
breaking changes? That's not even a minor version bump, only a maintenance-level change...
On branch-1.6 with the Netty change applied I see 8,477 tests run with two
failures (the flaky network events test and the DateTimeUtilsSuite to-UTC-timestamp
test, both unrelated to this change).
Against master I see 11,148 tests, again with unrelated failures
(furtherRequestsDelay, the Hive metastore warehouse dir, executor allocation
manager basic functionality, and the ReplSuite clone-and-clean line object test).
I'm using two Maven commands, one to build and one to run the tests:
```
mvn -T 1C ${R_PROFILE} -Pyarn -Phadoop-${HADOOP_VERSION} -Phive \
  -Phive-thriftserver -DskipTests -Dscala-$SCALA_VERSION clean package
```
```
mvn -Pyarn -Phadoop-${HADOOP_VERSION} -Phive -Phive-thriftserver \
  -Dscala-$SCALA_VERSION -Dtest.exclude.tags=org.apache.spark.tags.DockerTest \
  ${TESTS_RUN_OPTIONS} -fn test
```
In this case the profiles used are **Hadoop 2.6** with Scala 2.10 on
branch-1.6 and **Hadoop 2.7** with Scala 2.11 on branch-2.0, with no additional
test run options. So I think this comes down to the Hadoop version we build
against, because in the community job I see the following (note the **Hadoop 2.3** profile):
```
-Phadoop-2.3 -Phive -Pyarn -Pmesos -Phive-thriftserver -Pkinesis-asl
-Dtest.exclude.tags=org.apache.spark.tags.ExtendedHiveTest,org.apache.spark.tags.ExtendedYarnTest
test
```
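For comparison with that job, here is roughly what my two commands expand to for the branch-2.0 run, building against Hadoop 2.7 rather than 2.3 (a sketch with the values above substituted; I'm assuming `R_PROFILE` is empty for this run, and `TESTS_RUN_OPTIONS` is empty since no additional test run options were passed):
```
# Expanded branch-2.0 build (R_PROFILE assumed empty here)
mvn -T 1C -Pyarn -Phadoop-2.7 -Phive -Phive-thriftserver \
  -DskipTests -Dscala-2.11 clean package

# Expanded branch-2.0 test run (TESTS_RUN_OPTIONS empty, Docker tests excluded)
mvn -Pyarn -Phadoop-2.7 -Phive -Phive-thriftserver -Dscala-2.11 \
  -Dtest.exclude.tags=org.apache.spark.tags.DockerTest -fn test
```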
If that's the case then surely we shouldn't upgrade Netty until we either
drop support for Hadoop 2.3 and below (perhaps the problem shows up on 2.4 as
well) or make the necessary changes in the Spark codebase to address the issues
seen in the above jobs that use Hadoop 2.3.