alberttwong opened a new issue, #11851:
URL: https://github.com/apache/hudi/issues/11851

   **Describe the problem you faced**
   
   hudi-sync needs to be upgraded to AWS SDK V2 to avoid the AWS SDK V1 deprecation warning message
   
   **To Reproduce**
   
   ```
   root@spark:/spark-3.4.3-bin-hadoop3/bin# /opt/hudi/hudi-sync/hudi-hive-sync/run_sync_tool.sh \
   > --metastore-uris 'thrift://hive-metastore:9083' \
   > --partitioned-by dt \
   > --base-path 's3a://warehouse/stock_ticks_mor' \
   > --database default \
   > --table stock_ticks_mor \
   > --sync-mode hms \
   > --partition-value-extractor org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor
   ls: cannot access '/opt/hudi/hudi-sync/hudi-hive-sync/../../packaging/hudi-hive-sync-bundle/target/hudi-hive-sync-*.jar': No such file or directory
   setting hadoop conf dir
   Running Command : java -cp /hive/lib/hive-metastore-2.3.10.jar::/hive/lib/hive-service-2.3.10.jar::/hive/lib/hive-exec-2.3.10.jar::/hive/lib/hive-jdbc-2.3.10.jar:/hive/lib/hive-jdbc-handler-2.3.10.jar::/hive/lib/jackson-annotations-2.12.0.jar:/hive/lib/jackson-core-2.12.0.jar:/hive/lib/jackson-core-asl-1.9.13.jar:/hive/lib/jackson-databind-2.12.0.jar:/hive/lib/jackson-dataformat-smile-2.12.0.jar:/hive/lib/jackson-datatype-guava-2.12.0.jar:/hive/lib/jackson-datatype-joda-2.12.0.jar:/hive/lib/jackson-jaxrs-1.9.13.jar:/hive/lib/jackson-jaxrs-base-2.12.0.jar:/hive/lib/jackson-jaxrs-json-provider-2.12.0.jar:/hive/lib/jackson-jaxrs-smile-provider-2.12.0.jar:/hive/lib/jackson-mapper-asl-1.9.13.jar:/hive/lib/jackson-module-jaxb-annotations-2.12.0.jar:/hive/lib/jackson-module-scala_2.11-2.12.0.jar:/hive/lib/jackson-xc-1.9.13.jar::/hadoop/share/hadoop/common/*:/hadoop/share/hadoop/mapreduce/*:/hadoop/share/hadoop/hdfs/*:/hadoop/share/hadoop/common/lib/*:/hadoop/share/hadoop/hdfs/lib/*:/hadoop/etc/hadoop: org.apache.hudi.hive.HiveSyncTool --metastore-uris thrift://hive-metastore:9083 --partitioned-by dt --base-path s3a://warehouse/stock_ticks_mor --database default --table stock_ticks_mor --sync-mode hms --partition-value-extractor org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor
   SLF4J: Class path contains multiple SLF4J bindings.
   SLF4J: Found binding in [jar:file:/root/.ivy2/jars/org.slf4j_slf4j-reload4j-1.7.33.jar!/org/slf4j/impl/StaticLoggerBinder.class]
   SLF4J: Found binding in [jar:file:/hadoop-2.10.2/share/hadoop/common/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
   SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
   SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
   2024-08-28 21:00:01,262 INFO  [main] conf.HiveConf (HiveConf.java:findConfigFile(187)) - Found configuration file null
   WARNING: An illegal reflective access operation has occurred
   WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/root/.ivy2/jars/org.apache.hadoop_hadoop-auth-2.10.2.jar) to method sun.security.krb5.Config.getInstance()
   WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
   WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
   WARNING: All illegal access operations will be denied in a future release
   2024-08-28 21:00:01,473 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   2024-08-28 21:00:01,587 WARN  [main] util.VersionInfoUtils (VersionInfoUtils.java:printDeprecationAnnouncement(85)) - The AWS SDK for Java 1.x entered maintenance mode starting July 31, 2024 and will reach end of support on December 31, 2025. For more information, see https://aws.amazon.com/blogs/developer/the-aws-sdk-for-java-1-x-is-in-maintenance-mode-effective-july-31-2024/
   You can print where on the file system the AWS SDK for Java 1.x core runtime is located by setting the AWS_JAVA_V1_PRINT_LOCATION environment variable or aws.java.v1.printLocation system property to 'true'.
   This message can be disabled by setting the AWS_JAVA_V1_DISABLE_DEPRECATION_ANNOUNCEMENT environment variable or aws.java.v1.disableDeprecationAnnouncement system property to 'true'.
   The AWS SDK for Java 1.x is being used here:
   at java.base/java.lang.Thread.getStackTrace(Unknown Source)
   at com.amazonaws.util.VersionInfoUtils.printDeprecationAnnouncement(VersionInfoUtils.java:81)
   at com.amazonaws.util.VersionInfoUtils.<clinit>(VersionInfoUtils.java:59)
   at com.amazonaws.internal.EC2ResourceFetcher.<clinit>(EC2ResourceFetcher.java:44)
   at com.amazonaws.auth.InstanceMetadataServiceCredentialsFetcher.<init>(InstanceMetadataServiceCredentialsFetcher.java:38)
   at com.amazonaws.auth.InstanceProfileCredentialsProvider.<init>(InstanceProfileCredentialsProvider.java:111)
   at com.amazonaws.auth.InstanceProfileCredentialsProvider.<init>(InstanceProfileCredentialsProvider.java:91)
   at com.amazonaws.auth.InstanceProfileCredentialsProvider.<init>(InstanceProfileCredentialsProvider.java:75)
   at com.amazonaws.auth.InstanceProfileCredentialsProvider.<clinit>(InstanceProfileCredentialsProvider.java:58)
   at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:384)
   at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:51)
   at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:229)
   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3261)
   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
   at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3310)
   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3278)
   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:475)
   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
   at org.apache.hudi.hadoop.fs.HadoopFSUtils.getFs(HadoopFSUtils.java:116)
   at org.apache.hudi.hadoop.fs.HadoopFSUtils.getFs(HadoopFSUtils.java:109)
   at org.apache.hudi.storage.hadoop.HoodieHadoopStorage.<init>(HoodieHadoopStorage.java:63)
   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
   at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
   at java.base/java.lang.reflect.Constructor.newInstance(Unknown Source)
   at org.apache.hudi.common.util.ReflectionUtils.loadClass(ReflectionUtils.java:73)
   at org.apache.hudi.storage.HoodieStorageUtils.getStorage(HoodieStorageUtils.java:41)
   at org.apache.hudi.common.table.HoodieTableMetaClient.getStorage(HoodieTableMetaClient.java:309)
   at org.apache.hudi.common.table.HoodieTableMetaClient.access$000(HoodieTableMetaClient.java:81)
   at org.apache.hudi.common.table.HoodieTableMetaClient$Builder.build(HoodieTableMetaClient.java:770)
   at org.apache.hudi.sync.common.HoodieSyncClient.<init>(HoodieSyncClient.java:68)
   at org.apache.hudi.hive.HoodieHiveSyncClient.<init>(HoodieHiveSyncClient.java:84)
   at org.apache.hudi.hive.HiveSyncTool.initSyncClient(HiveSyncTool.java:125)
   at org.apache.hudi.hive.HiveSyncTool.<init>(HiveSyncTool.java:119)
   at org.apache.hudi.hive.HiveSyncTool.main(HiveSyncTool.java:480)
   ```
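   The deprecation announcement in the log above names two switches for locating and silencing the SDK v1 message. Until hudi-sync itself moves to AWS SDK V2, a stopgap is to export the environment variables quoted verbatim in the warning before invoking the sync tool. A minimal sketch (only the variable names come from the log; the echo is just for illustration):
   
   ```shell
   # Print where the AWS SDK for Java 1.x core runtime jar lives, which
   # helps identify the dependency that needs upgrading (variable name
   # taken from the warning text above).
   export AWS_JAVA_V1_PRINT_LOCATION=true
   
   # Or suppress the announcement entirely; this only hides the message,
   # the SDK v1 dependency itself remains on the classpath.
   export AWS_JAVA_V1_DISABLE_DEPRECATION_ANNOUNCEMENT=true
   
   # Then run the sync tool as before, e.g.:
   # /opt/hudi/hudi-sync/hudi-hive-sync/run_sync_tool.sh --sync-mode hms ...
   echo "AWS_JAVA_V1_DISABLE_DEPRECATION_ANNOUNCEMENT=$AWS_JAVA_V1_DISABLE_DEPRECATION_ANNOUNCEMENT"
   ```
   
   The same log also mentions equivalent JVM system properties (`aws.java.v1.printLocation`, `aws.java.v1.disableDeprecationAnnouncement`), which could instead be passed via `-D` flags on the `java -cp ...` command shown above.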
   
   
   **Expected behavior**
   
   hudi-sync is upgraded to AWS SDK V2 so that the deprecation warning is no longer emitted.
   
   **Environment Description**
   
   * Hudi version :
   
   * Spark version :
   
   * Hive version :
   
   * Hadoop version :
   
   * Storage (HDFS/S3/GCS..) :
   
   * Running on Docker? (yes/no) :
   
   
   **Additional context**
   
   Add any other context about the problem here.
   
   **Stacktrace**
   
   See the stacktrace included in the **To Reproduce** section above.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to