cshannon commented on issue #5139: URL: https://github.com/apache/accumulo/issues/5139#issuecomment-2523535430
I tested this out with main (4.0.0-SNAPSHOT), Uno, and accumulo-testing, and it seems to be working. Based on the Hadoop Jira ticket there are some incompatibilities with libraries such as Guice 4.x, but adding the extra Java arguments worked in my testing, and I did not see any of the MapReduce failures noted in Slack. I made the following changes to get things working:

1. Bumped the required target JDK to 17 for Accumulo and built a new 4.0.0-SNAPSHOT off main.
2. Set up a new Accumulo install with Uno using 4.0.0-SNAPSHOT.
3. Modified hadoop.sh to no longer skip YARN when using JDK 17, and to start it up.
4. Started the instance and verified that the same errors we saw before appeared in the resource manager and node manager logs, as shown in the Slack [chat](https://the-asf.slack.com/archives/CERNB8NDC/p1712870215519739).
5. Added the following to yarn-env.sh:
   ```
   export YARN_RESOURCEMANAGER_OPTS="--add-opens java.base/java.lang=ALL-UNNAMED"
   export YARN_NODEMANAGER_OPTS="--add-opens java.base/java.lang=ALL-UNNAMED"
   ```
6. Restarted Uno and verified that all the errors were gone from the Hadoop logs.
7. Bumped the required JDK to 17 for the accumulo-testing project, updated its Accumulo dependency to 4.0.0-SNAPSHOT, and rebuilt the project.
8. Ran the continuous ingest test, ingested data for a few minutes, and then ran the verify test, which submits a MapReduce job; it completed successfully. I repeated this a few times.
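For anyone who wants to reproduce the underlying failure outside of YARN: the errors in step 4 come from JDK 17's strong encapsulation, which blocks deep reflection into `java.base` unless the JVM is started with `--add-opens`. A minimal sketch of that behavior (the `ReflectDemo` class is a hypothetical stand-in for the Guice-style reflection that breaks, not code from Hadoop or Accumulo):

```shell
# Write a tiny program that does deep reflection into java.base/java.lang.
cat > /tmp/ReflectDemo.java <<'EOF'
import java.lang.reflect.Field;

public class ReflectDemo {
    public static void main(String[] args) throws Exception {
        // On JDK 17 this throws InaccessibleObjectException unless the
        // JVM was started with --add-opens java.base/java.lang=ALL-UNNAMED.
        Field name = ClassLoader.class.getDeclaredField("name");
        name.setAccessible(true);
        System.out.println("reflective access OK");
    }
}
EOF

if command -v java >/dev/null 2>&1; then
  # Without the flag: prints an InaccessibleObjectException stack trace.
  java /tmp/ReflectDemo.java 2>&1 | tail -n 1
  # With the flag (the same one added to yarn-env.sh): succeeds.
  java --add-opens java.base/java.lang=ALL-UNNAMED /tmp/ReflectDemo.java
fi
```

This mirrors why the resource manager and node manager started cleanly once the same flag was passed through `YARN_RESOURCEMANAGER_OPTS` and `YARN_NODEMANAGER_OPTS`.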

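The yarn-env.sh edit in step 5 can also be scripted so it is idempotent, which helps when re-provisioning Uno installs. A minimal sketch; the target path here is a scratch location for demonstration, so point `YARN_ENV` at your real Hadoop conf dir:

```shell
# Demo path; on a real install use "$HADOOP_CONF_DIR/yarn-env.sh".
YARN_ENV="/tmp/demo-hadoop-conf/yarn-env.sh"
mkdir -p "$(dirname "$YARN_ENV")"
touch "$YARN_ENV"

for var in YARN_RESOURCEMANAGER_OPTS YARN_NODEMANAGER_OPTS; do
  line="export ${var}=\"--add-opens java.base/java.lang=ALL-UNNAMED\""
  # grep -qF: fixed-string match so the quotes compare literally;
  # only append the export if it is not already present.
  grep -qF "$line" "$YARN_ENV" || printf '%s\n' "$line" >> "$YARN_ENV"
done
```

Running it a second time leaves the file unchanged, so it is safe to fold into a setup script.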