[ https://issues.apache.org/jira/browse/FLINK-30745 ]
Dheeraj Panangat deleted comment on FLINK-30745:
------------------------------------------
was (Author: JIRAUSER297631):
Hi [~surendralilhore] ,
After configuring these properties, checkpointing works, but the hudi-flink
integration fails.
Both read the same properties but expect different classes: when I supply the
shaded classes, Hudi does not work because it expects unshaded Hadoop class
instances.
I think providing the shaded Hadoop classes in *core-default-shaded.xml* for
checkpointing and the unshaded Hadoop classes in *core-site.xml* for Hudi should
ideally work.
Please let me know if my understanding is correct.
Also, I feel that if we are using shaded classes, then we should perhaps read
differently named properties rather than the original Hadoop properties,
e.g. shaded.fs.azure.account.*
Can you please take a look?
Thanks,
Dheeraj Panangat.
> Check-pointing with Azure Data Lake Storage
> -------------------------------------------
>
> Key: FLINK-30745
> URL: https://issues.apache.org/jira/browse/FLINK-30745
> Project: Flink
> Issue Type: Bug
> Components: Connectors / FileSystem
> Affects Versions: 1.15.2, 1.14.6
> Reporter: Dheeraj Panangat
> Priority: Major
>
> Hi,
> While checkpointing to Azure Blob Storage using Flink, we get the following
> error:
> {code:java}
> Caused by: Configuration property <accountname>.dfs.core.windows.net not
> found.
> at
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getStorageAccountKey(AbfsConfiguration.java:372)
> at
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(AzureBlobFileSystemStore.java:1133)
> at
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.<init>(AzureBlobFileSystemStore.java:174)
> at
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:110)
> {code}
> We have also provided the following configurations in core-site.xml:
> {code}
> fs.hdfs.impl
> fs.abfs.impl -> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem
> fs.file.impl
> fs.azure.account.auth.type
> fs.azure.account.oauth.provider.type
> fs.azure.account.oauth2.client.id
> fs.azure.account.oauth2.client.secret
> fs.azure.account.oauth2.client.endpoint
> fs.azure.createRemoteFileSystemDuringInitialization -> true {code}
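> For reference, these keys would sit in core-site.xml roughly as below. This is
> a minimal sketch only: the OAuth values are placeholders, and
> ClientCredsTokenProvider is just one common provider choice, not necessarily
> the one used here.
> {code:xml}
> <configuration>
>   <!-- ABFS filesystem implementation (value as given above) -->
>   <property>
>     <name>fs.abfs.impl</name>
>     <value>org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem</value>
>   </property>
>   <!-- OAuth client-credentials auth; provider class is an assumption -->
>   <property>
>     <name>fs.azure.account.auth.type</name>
>     <value>OAuth</value>
>   </property>
>   <property>
>     <name>fs.azure.account.oauth.provider.type</name>
>     <value>org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider</value>
>   </property>
>   <!-- Placeholder credentials -->
>   <property>
>     <name>fs.azure.account.oauth2.client.id</name>
>     <value>CLIENT_ID</value>
>   </property>
>   <property>
>     <name>fs.azure.account.oauth2.client.secret</name>
>     <value>CLIENT_SECRET</value>
>   </property>
>   <property>
>     <name>fs.azure.account.oauth2.client.endpoint</name>
>     <value>https://login.microsoftonline.com/TENANT_ID/oauth2/token</value>
>   </property>
>   <property>
>     <name>fs.azure.createRemoteFileSystemDuringInitialization</name>
>     <value>true</value>
>   </property>
> </configuration>
> {code}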
> On debugging, we found that Flink reads from core-default-shaded.xml, but even
> when the properties are specified there, the default configs are not loaded
> and we get a different exception:
> {code:java}
> Caused by: Unable to load key provider class.
> at
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getTokenProvider(AbfsConfiguration.java:540)
> at
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(AzureBlobFileSystemStore.java:1136)
> at
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.<init>(AzureBlobFileSystemStore.java:174)
> at
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:110)
> {code}
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)