[ 
https://issues.apache.org/jira/browse/FLINK-14619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16986006#comment-16986006
 ] 

Gary Yao edited comment on FLINK-14619 at 12/2/19 12:11 PM:
------------------------------------------------------------

[~yuliang] Are you setting ulimit -n to 650000? What operating system are you 
using? How many file descriptors are open by the Flink process(es)? Unless 
Flink is leaking file descriptors, this issue does not seem to be Flink related.
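A quick way to answer both questions on a Linux host is sketched below. This is an assumption about the deployment (Linux with /proc, Flink started with `org.apache.flink` on the JVM command line); adjust the pgrep pattern to match your TaskManager processes.

```shell
#!/bin/sh
# Show the per-process open-file limit for the current shell/session.
ulimit -n

# Count file descriptors currently open by each Flink JVM.
# NOTE: the pattern 'org.apache.flink' is an assumption about how the
# JVMs were launched; adapt it to your setup (e.g. YarnTaskExecutorRunner).
for pid in $(pgrep -f 'org.apache.flink'); do
    echo "$pid: $(ls /proc/$pid/fd 2>/dev/null | wc -l) open fds"
done
```

Note that `ulimit -n` must be checked in the environment that actually launches the TaskManager (e.g. the YARN NodeManager's limits), not just in an interactive login shell.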


was (Author: gjy):
[~yuliang] Are you setting ulimit -n to 650000? What operating system are you 
using? How many file descriptors are open by the Flink process(es)?

> Failed to fetch BLOB
> --------------------
>
>                 Key: FLINK-14619
>                 URL: https://issues.apache.org/jira/browse/FLINK-14619
>             Project: Flink
>          Issue Type: Bug
>          Components: Runtime / Coordination
>    Affects Versions: 1.9.1
>            Reporter: liang yu
>            Priority: Major
>
> java.io.IOException: Failed to fetch BLOB 
> e78e9574da4f5e4bdbc8de9678ebfb36/p-650534cd619de1069630141f1dcc9876d6ce2ce0-ee11ae52caa20ff81909708a783fd596
>  from xxxx/xxxx:xxxx and store it under 
> /hadoop/yarn/local/usercache/hdfs/appcache/application_1570784539965_0165/blobStore-79420f3a-6a83-40d4-8058-f01686a1ced8/incoming/temp-00000072
>      at 
> org.apache.flink.runtime.blob.BlobClient.downloadFromBlobServer(BlobClient.java:169)
>      at 
> org.apache.flink.runtime.blob.AbstractBlobCache.getFileInternal(AbstractBlobCache.java:181)
>      at 
> org.apache.flink.runtime.blob.PermanentBlobCache.getFile(PermanentBlobCache.java:202)
>      at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager.registerTask(BlobLibraryCacheManager.java:120)
>      at 
> org.apache.flink.runtime.taskmanager.Task.createUserCodeClassloader(Task.java:915)
>      at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:595)
>      at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
>      at java.lang.Thread.run(Thread.java:748)
>  Caused by: java.io.IOException: Could not connect to BlobServer at address 
> xxxx/xxxx:xxxx 
>      at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:100)
>      at 
> org.apache.flink.runtime.blob.BlobClient.downloadFromBlobServer(BlobClient.java:143)
>      ... 7 more
>  Caused by: java.net.SocketException: 打开的文件过多 (Too many open files)
>      at java.net.Socket.createImpl(Socket.java:460)
>      at java.net.Socket.connect(Socket.java:587)
>      at org.apache.flink.runtime.blob.BlobClient.<init>(BlobClient.java:95)
>      ... 8 more
>   
>  I set the server's ulimit to 650000, but this error still occurs. The 
> monitoring shows only 50000 file descriptors open on the server when the 
> error happens. Can anyone tell me why?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
