[ 
https://issues.apache.org/jira/browse/FLINK-13758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909628#comment-16909628
 ] 

luoguohao commented on FLINK-13758:
-----------------------------------

After reading the code, I found that every file registered in the 
DistributedCache is uploaded to the JobManager. In fact, only local files 
need to be uploaded so they can be added to the BlobServer; other file types, 
such as HDFS files, do not need this step, since only the file path has to be 
stored on the JobManager. When the Task initializes on the TaskManager, an 
HDFS file in the DistributedCache is retrieved directly from HDFS, not from 
the BlobServer.
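
Below is a minimal sketch (plain Java DataStream API) of the two registration 
paths the comment distinguishes; the local path, HDFS URI, and cache names are 
made-up examples, not taken from the issue.

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.io.File;

public class DistributedCacheExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Local file: the client has to upload it so it ends up in the BlobServer.
        env.registerCachedFile("/tmp/local-dict.txt", "localDict");

        // HDFS file: per the comment, only the path needs to be stored in the
        // JobGraph; the TaskManager can read it straight from HDFS at task start.
        env.registerCachedFile("hdfs:///user/flink/dict.txt", "hdfsDict");

        env.fromElements("a", "b", "c")
           .map(new RichMapFunction<String, String>() {
               @Override
               public String map(String value) throws Exception {
                   // Both entries are resolved the same way at runtime.
                   File dict = getRuntimeContext().getDistributedCache().getFile("hdfsDict");
                   return value + ":" + dict.length();
               }
           })
           .print();

        env.execute("distributed cache example");
    }
}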

> failed to submit JobGraph when registered hdfs file in DistributedCache 
> ------------------------------------------------------------------------
>
>                 Key: FLINK-13758
>                 URL: https://issues.apache.org/jira/browse/FLINK-13758
>             Project: Flink
>          Issue Type: Bug
>          Components: Command Line Client
>    Affects Versions: 1.6.3, 1.6.4, 1.7.2, 1.8.0, 1.8.1
>            Reporter: luoguohao
>            Priority: Major
>
> When using HDFS files for the DistributedCache, submitting the JobGraph 
> fails, and exception stack traces appear in the log file after a while. If 
> the DistributedCache file is a local file, everything works fine.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
