Abhishek Chennaka has posted comments on this change. ( 
http://gerrit.cloudera.org:8080/22374 )

Change subject: Upgrade Java dependencies
......................................................................


Patch Set 21:

(3 comments)

http://gerrit.cloudera.org:8080/#/c/22374/20/java/kudu-hive/build.gradle
File java/kudu-hive/build.gradle:

http://gerrit.cloudera.org:8080/#/c/22374/20/java/kudu-hive/build.gradle@37
PS20, Line 37:
             : shadowJar {
             :   dependencies {
             :     exclude(dependency("log4j::.*"))
             :     exclude(dependency("org.apache.hadoop::.*"))
             :     exclude(dependency("org.apache.hive::.*"))
             :     exclude(dependency("org.apache.hbase::.*"))
             :     exclude(dependency("org.apache.hbase.thirdparty::.*"))
             :     exclude(dependency("junit::.*"))
             :     exclude(dependency("javax.servlet.jsp::.*"))
             :
> It's really hard to gauge how much effort this would take in my opinion. Th
I think I figured out the issue here:
Q. Why is the test failing?
During the test run, hadoop-common:3.1.0 is being used, and the issue is 
encountered when loading the filesystem classes [1]. In that version the 
DistributedFileSystem class didn't implement BatchListingOperations [2], 
unlike in 3.4.1 [3], and 3.4.1 is not available at runtime.

Q. How did we get hadoop-common-3.1.0 in the first place?
It came in as a transitive dependency of hive-metastore:3.1.2.

Q. Why is the test failing with hadoop-common:3.4.1 and not with 
hadoop-common:3.3.1?
While both versions have DistributedFileSystem implementing 
BatchListingOperations, the FileSystem class from hadoop-common:3.3.1 was 
available as a transitive dependency via spark-core_2.12:3.2.4 -> 
hadoop-client-api:3.3.1.

Q. What is the best way forward?
While I do not think we use the FileSystem class from hadoop-common in the 
kudu-hive plugin, and the chances of the right class being present in the 
Hive Metastore server (where the plugin is used) are pretty high, we might 
be better off declaring it as an implementation dependency to be safe.
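
As a rough sketch, declaring it as an implementation dependency in 
java/kudu-hive/build.gradle could look like the following (the exact version 
string, variable names, and whether any exclusions are needed are assumptions, 
not part of this change):

```groovy
dependencies {
  // Hypothetical sketch: pin hadoop-common on the runtime classpath
  // explicitly instead of relying on the transitive copy pulled in by
  // hive-metastore, so the FileSystem classes seen at runtime match the
  // ones we compiled against.
  implementation("org.apache.hadoop:hadoop-common:3.4.1") {
    // May need exclusions so the shaded jar doesn't re-introduce
    // dependencies already filtered out by the shadowJar block above.
    exclude group: "log4j"
  }
}
```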

[1] 
https://github.com/apache/hadoop/blame/e9aa1789c2c75fd20600d2a0d11774161319af2a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3264
[2]
https://github.com/apache/hadoop/blob/branch-3.1.0/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L131
[3] 
https://github.com/apache/hadoop/blob/626b227094027ed08883af97a0734d2db7863864/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L148


http://gerrit.cloudera.org:8080/#/c/22374/21/java/kudu-hive/build.gradle
File java/kudu-hive/build.gradle:

PS21:
> Comparing the base version and PS21, it seems there were no actual modifica
I was testing the run after clearing the Gradle cache on the build machines, 
hence this change. I have since figured out the root cause of the test 
failure; I will post it under the other comment thread we have running.


http://gerrit.cloudera.org:8080/#/c/22374/21/java/kudu-test-utils/build.gradle
File java/kudu-test-utils/build.gradle:

http://gerrit.cloudera.org:8080/#/c/22374/21/java/kudu-test-utils/build.gradle@75
PS21, Line 75:   exclude "META-INF/versions/9/module-info.class"
> Is this still necessary given there is a broader
Good spot, that is not needed anymore.



--
To view, visit http://gerrit.cloudera.org:8080/22374
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: comment
Gerrit-Change-Id: Id1b43e3cc8228e94fbbd3085933cd62bf089e23d
Gerrit-Change-Number: 22374
Gerrit-PatchSet: 21
Gerrit-Owner: Abhishek Chennaka <[email protected]>
Gerrit-Reviewer: Abhishek Chennaka <[email protected]>
Gerrit-Reviewer: Alexey Serbin <[email protected]>
Gerrit-Reviewer: Attila Bukor <[email protected]>
Gerrit-Reviewer: Kudu Jenkins (120)
Gerrit-Reviewer: Zoltan Chovan <[email protected]>
Gerrit-Comment-Date: Tue, 01 Apr 2025 06:06:50 +0000
Gerrit-HasComments: Yes
