[ 
https://issues.apache.org/jira/browse/FLINK-30128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17643653#comment-17643653
 ] 

ramkrishna.s.vasudevan edited comment on FLINK-30128 at 12/6/22 4:36 AM:
-------------------------------------------------------------------------

While trying to add tests for the Azure FS, it seems some of the IT tests, 
e.g. AzureFileSystemBehaviorITCase, are already not running in CI. The ones 
that do run are AzureBlobStorageFSFactoryTest and AzureDataLakeStoreGen2FSFactoryTest. 
Any ideas on how we should add those tests here?
For reference:
{code}
Dec 05 17:06:36 [INFO] Running 
org.apache.flink.fs.azurefs.AzureFileSystemBehaviorITCase
Dec 05 17:06:37 [INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time 
elapsed: 0.792 s - in org.apache.flink.fs.azurefs.AzureFileSystemBehaviorITCase
Dec 05 17:06:37 [INFO] 
Dec 05 17:06:37 [INFO] Results:
Dec 05 17:06:37 [INFO] 
Dec 05 17:06:37 [INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
Dec 05 17:06:37 [INFO] 
Dec 05 17:06:37 [INFO] 
Dec 05 17:06:37 [INFO] --- japicmp-maven-plugin:0.17.1.1_m325:cmp (default) @ 
flink-azure-fs-hadoop ---
Dec 05 17:06:37 [INFO] Skipping execution because parameter 'skip' was set to 
true.
{code}
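The "Tests run: 0" line usually means every method in the IT case was skipped by a credentials assumption rather than executed. A minimal sketch of that gating pattern, assuming a hypothetical AZURE_ACCOUNT / AZURE_ACCESS_KEY pair (not Flink's actual configuration keys):

```java
// Sketch of the credentials gate commonly used by object-store IT cases:
// when the required secrets are absent from the CI environment, the whole
// test class assumes itself away and Surefire reports "Tests run: 0".
public final class CredentialGate {

    /** True only when both credential values are present and non-empty. */
    public static boolean credentialsAvailable(String account, String key) {
        return account != null && !account.isEmpty()
                && key != null && !key.isEmpty();
    }

    public static void main(String[] args) {
        // AZURE_ACCOUNT / AZURE_ACCESS_KEY are illustrative names only.
        boolean available = credentialsAvailable(
                System.getenv("AZURE_ACCOUNT"), System.getenv("AZURE_ACCESS_KEY"));
        // In a JUnit test this check would sit behind Assume.assumeTrue(...),
        // which marks the class as skipped instead of failing it.
        System.out.println(available ? "running IT case" : "skipping IT case");
    }
}
```

So making these tests run in CI would mean wiring real credentials into the build environment, not just adding test methods.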


was (Author: ram_krish):
While trying to add tests for the Azure FS, it seems some of the IT tests, 
e.g. AzureFileSystemBehaviorITCase, are already not running in CI. The ones 
that do run are AzureBlobStorageFSFactoryTest and AzureDataLakeStoreGen2FSFactoryTest. 
Any ideas on how we should add those tests here?

> Introduce Azure Data Lake Gen2 APIs in the Hadoop Recoverable path
> ------------------------------------------------------------------
>
>                 Key: FLINK-30128
>                 URL: https://issues.apache.org/jira/browse/FLINK-30128
>             Project: Flink
>          Issue Type: Sub-task
>    Affects Versions: 1.13.1
>            Reporter: ramkrishna.s.vasudevan
>            Priority: Major
>         Attachments: Flink_ABFS_support_1.pdf
>
>
> Currently the HadoopRecoverableWriter assumes that the underlying FS is 
> Hadoop, so it checks for DistributedFileSystem. It also tries to do a 
> truncate and to ensure the lease is recovered before the 'rename' operation 
> is done.
> In the Azure Data Lake Gen2 world, the driver does not support the truncate 
> and lease recovery APIs. Instead, we should be able to fetch the last 
> committed size and, if it matches, go ahead with the rename. Will be back 
> with more details here.
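The recovery path proposed in the description could be sketched roughly as below. This is a local-filesystem illustration of the idea only (verify the staged file's length against the last committed size, then promote it with a rename); the class and method names are hypothetical and not Flink's HadoopRecoverableWriter API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch: instead of truncate + lease recovery (unsupported by
// the ADLS Gen2 driver), verify that the staged file's length matches the
// size recorded at the last commit, then promote it with a rename.
public final class SizeCheckedCommit {

    /**
     * Renames {@code staged} to {@code target} only if the staged file's
     * on-disk length equals the recorded committed size.
     *
     * @return true if the rename was performed, false on a size mismatch
     */
    public static boolean commitIfSizeMatches(Path staged, Path target, long committedSize)
            throws IOException {
        if (Files.size(staged) != committedSize) {
            // Mismatch: the in-progress write did not complete cleanly,
            // so recovery cannot simply promote this file.
            return false;
        }
        Files.move(staged, target, StandardCopyOption.ATOMIC_MOVE);
        return true;
    }
}
```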



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
