[ https://issues.apache.org/jira/browse/FLINK-13758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917639#comment-16917639 ]

luoguohao commented on FLINK-13758:
-----------------------------------

[~Zentol] Thanks for the reply. In the current implementation, HDFS files are 
not stored in the BlobServer; they are retrieved directly from HDFS when the 
task is initialized on the TaskManager. So if we want to store all the files 
in the BlobServer, the scope of the code changes would be much wider. And in 
my opinion, as long as the user chooses an HDFS file as a DistributedCache 
file, it should be the user's responsibility to make sure that the file is 
accessible from the cluster, not the cluster's.

> failed to submit JobGraph when registered hdfs file in DistributedCache 
> ------------------------------------------------------------------------
>
>                 Key: FLINK-13758
>                 URL: https://issues.apache.org/jira/browse/FLINK-13758
>             Project: Flink
>          Issue Type: Bug
>          Components: Command Line Client
>    Affects Versions: 1.6.3, 1.6.4, 1.7.2, 1.8.0, 1.8.1
>            Reporter: luoguohao
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> when using HDFS files for the DistributedCache, submitting the JobGraph 
> fails, and exception stack traces appear in the log file after a while; but 
> if the DistributedCache file is a local file, everything works fine.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)
