Hi all,

I did some initial hands-on exploration to see what it would take to
compile Hive with JDK 25.

*Compilation:*

To compile with *JDK 25* (class file major version 69), the following
upgrades are required:

   - *datanucleus-core*: 6.0.10 → 6.0.11 (includes ASM 9.8 for Java 25
   bytecode support)
   - *maven-shade-plugin*: 3.6.0 → 3.6.1 (fixes shading phase failures)
   - *Error Prone*: updated to the latest version

With these changes, compilation succeeds, though multiple warnings remain
(only the errors actually blocking compilation were addressed).
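For reference, the bumps above amount to something like the following in the
root pom.xml (the property names here are illustrative and may not match the
exact ones Hive uses):

```xml
<properties>
  <!-- DataNucleus 6.0.11 bundles ASM 9.8, which can read Java 25 (major 69) bytecode -->
  <datanucleus-core.version>6.0.11</datanucleus-core.version>
  <!-- maven-shade-plugin 3.6.1 fixes the failures seen during the shade phase -->
  <maven.shade.plugin.version>3.6.1</maven.shade.plugin.version>
</properties>
```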

*Testing:*

I ran TestDriver.java
<https://github.com/apache/hive/blob/master/ql/src/test/org/apache/hadoop/hive/ql/TestDriver.java>
and encountered the following runtime failure:

java.lang.UnsupportedOperationException: getSubject is not supported

at java.base/javax.security.auth.Subject.getSubject(Subject.java:277)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:588)
at org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:294)
at org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:262)
at org.apache.hadoop.conf.Configuration.addResource(Configuration.java:999)
at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:6494)
at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:6438)
at org.apache.hadoop.hive.conf.HiveConfForTest.<init>(HiveConfForTest.java:42)
at org.apache.hadoop.hive.ql.TestDriver.beforeTest(TestDriver.java:38)

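For anyone who wants to reproduce this outside of Hive, here is a minimal
standalone probe of the same API (no Hive or Hadoop involved). On JDK 23+ the
call throws by default, and on JDK 24+ it throws unconditionally; on older
JDKs it returns the context's Subject (typically null):

```java
import javax.security.auth.Subject;
import java.security.AccessController;

public class SubjectProbe {
    @SuppressWarnings("removal")
    public static void main(String[] args) {
        try {
            // Same call Hadoop's UserGroupInformation.getCurrentUser() makes.
            // Throws UnsupportedOperationException on JDK 23+ by default,
            // and always on JDK 24+ per JEP 486.
            Subject s = Subject.getSubject(AccessController.getContext());
            System.out.println("getSubject returned: " + s);
        } catch (UnsupportedOperationException e) {
            System.out.println("getSubject threw: " + e.getMessage());
        }
    }
}
```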

*Root Cause:*
Hadoop 3.4.2 relies on *javax.security.auth.Subject* APIs that were
terminally deprecated along with the Security Manager. Subject.getSubject
throws *UnsupportedOperationException* by default starting in JDK 23, and
with *JEP 486: Permanently Disable the Security Manager* (delivered in JDK
24) it throws unconditionally; the -Djava.security.manager=allow escape
hatch is also no longer accepted at startup.

🔴 *Critical Blocker:* Hadoop Incompatibility

Specifically, Hadoop 3.4.2 uses the following affected APIs in
UserGroupInformation:
   - Subject.getSubject(context)
   <https://github.com/apache/hadoop/blob/603cd61a56d884baca0f0ee91462f42721d2dd9d/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L588>
   in getCurrentUser()
   - Subject.doAs(subject, action)
   <https://github.com/apache/hadoop/blob/603cd61a56d884baca0f0ee91462f42721d2dd9d/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L1930>
   in the doAs() methods
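For context, the JDK's intended replacements (available since Java 18) are
Subject.current() and Subject.callAs(), which is, as I understand it, the
direction the Hadoop fix takes. A standalone sketch of the before/after
mapping:

```java
import javax.security.auth.Subject;

public class SubjectMigration {
    public static void main(String[] args) {
        Subject subject = new Subject();
        // Subject.callAs(subject, action) (Java 18+) replaces
        // Subject.doAs(subject, action); inside the action,
        // Subject.current() replaces
        // Subject.getSubject(AccessController.getContext()).
        String result = Subject.callAs(subject, () ->
                Subject.current() == subject ? "bound subject" : "other subject");
        System.out.println(result);
    }
}
```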

*Impact:*
This breaks at runtime during basic configuration initialization, not just
in tests but likely in any Hive operation that initializes HiveConf.

*Solution:*
The fix is targeted for Hadoop 3.4.3 and 3.5.0, neither of which has been
released yet.
HADOOP JIRA: https://issues.apache.org/jira/browse/HADOOP-19212

We cannot move Hive to JDK 25 until a Hadoop release including this fix is
available.

Happy to create a JIRA to track this and address it once Hadoop has a
release that includes this fix.

- Kokila

On Mon, Jan 19, 2026 at 3:45 PM lisoda <[email protected]> wrote:

> Actually, I think as long as hadoopClient supports JDK 25+, we can just
> reuse the compatibility layer we built for the older Hadoop versions—it
> should work just as well.
>
>
> ---- Replied Message ----
> From Ayush Saxena<[email protected]> <[email protected]>
> Date 01/19/2026 17:19
> To dev<[email protected]> <[email protected]>
> Cc
> Subject [DISCUSS] Thoughts on JDK 25 (LTS) adoption for Hive
> Hi folks,
>
> As we know, JDK 25 has been released and is now the latest LTS. I
> wanted to start a discussion on whether and when it makes sense for
> Hive to start chasing it.
>
> From what I’ve read so far, the release looks solid and there are
> generally positive signals around it, which makes it an interesting
> option to consider. I also had a few offline discussions last week,
> and a common sentiment was that it might be a bit early to move
> aggressively. Our last major shift was to JDK 21, and much of the
> Hadoop ecosystem isn’t moving at the same pace. Given that, an
> immediate jump may be ambitious.
>
> One possible middle ground could be to acknowledge JDK 25 as a target,
> but not aim for it in the very next release—perhaps instead in the one
> after that, once the ecosystem has had more time to catch up.
>
> I haven’t done any hands-on validation yet, so I can’t comment
> concretely on what might break or the level of effort involved. That
> said, from some initial looking around, a potential prerequisite could
> be moving to Hadoop 3.5.0+ (or beyond), which in itself could be a
> blocker. On top of that, there’s the usual question of how third-party
> dependencies—and our own code—would behave.
>
> Would be good to hear what others think: whether this is something we
> should start planning for now, or keep on the radar and revisit after
> some more ecosystem movement.
>
> -Ayush
>
