Hi,

Could you take a jstack of the JobMaster and see where exactly it is stuck?
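
A minimal sketch of how that thread dump could be captured. The entrypoint class name below assumes a standalone session cluster, and the sample `jps -l` output is hypothetical; on a real machine you would run `jps -l` directly and need a JDK on the PATH for `jstack`:

```shell
# Hypothetical `jps -l` output; on the actual cluster host, run `jps -l` itself.
jps_output="12345 org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint
23456 org.apache.flink.runtime.taskexecutor.TaskManagerRunner"

# Extract the JobManager PID (the JobMaster runs inside this process).
JM_PID=$(echo "$jps_output" | grep StandaloneSessionClusterEntrypoint | awk '{print $1}')
echo "$JM_PID"   # → 12345

# Then dump all thread stacks to a file for inspection:
# jstack "$JM_PID" > jobmanager-jstack.txt
```

Looking for threads stuck in file-listing code (e.g. HDFS `listStatus` calls) in the dump would tell us where the submission is blocked.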

On Sat, Sep 5, 2020 at 11:11 PM Peihui He <peihu...@gmail.com> wrote:

> Hi, all
>
> After several days of testing, I found that the situation above occurs when the HDFS directory contains many files; in my case the file count is close to 2k.
> A simple test shows that with only one or two files, the job is submitted quickly and the Flink session web UI does not feel stuck either.
>
> Is there a good way to solve this?
>
> Best Wishes.
>
> Peihui He <peihu...@gmail.com> wrote on Friday, Sep 4, 2020 at 6:25 PM:
>
>> Hi, all
>>
>> When partitions are specified, this problem can no longer be worked around via the path either.
>>
>> CREATE TABLE MyUserTable (
>>   column_name1 INT,
>>   column_name2 STRING,
>>   dt STRING
>> ) PARTITIONED BY (dt) WITH (
>>   'connector' = 'filesystem',           -- required: specify the connector
>>   'path' = 'file:///path/to/whatever',  -- required: path to a directory
>>   'format' = 'json'                     -- required: file format
>> );
>>
>>
>> select * from MyUserTable limit 10;
>>
>> The job gets stuck at one point and never progresses.
>> [image: image.png]
>>
>> How should this be solved?
>>
>> Peihui He <peihu...@gmail.com> wrote on Friday, Sep 4, 2020 at 6:02 PM:
>>
>>> Hi, all
>>> When I create a table with the Flink SQL client:
>>>
>>> CREATE TABLE MyUserTable (
>>>   column_name1 INT,
>>>   column_name2 STRING
>>> ) PARTITIONED BY (part_name1, part_name2) WITH (
>>>   'connector' = 'filesystem',           -- required: specify the connector
>>>   'path' = 'file:///path/to/whatever',  -- required: path to a directory
>>>   'format' = 'json'                     -- required: file format
>>> );
>>>
>>> When the path has an extra trailing "/", e.g. 'path' = 'file:///path/to/whatever/',
>>> the SQL client is very slow to submit the job and eventually fails with:
>>>
>>> Caused by: org.apache.flink.runtime.rest.util.RestClientException:
>>> [Internal server error., <Exception on server side:
>>> org.apache.flink.runtime.client.DuplicateJobSubmissionException: Job has already been submitted.
>>>   at org.apache.flink.runtime.dispatcher.Dispatcher.submitJob(Dispatcher.java:280)
>>>   at sun.reflect.GeneratedMethodAccessor127.invoke(Unknown Source)
>>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>   at java.lang.reflect.Method.invoke(Method.java:498)
>>>   at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)
>>>   at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:199)
>>>   at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>>>   at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
>>>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>>>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>>>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>>>   at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>>>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>>>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>>>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>>>   at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
>>>   at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
>>>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
>>>   at akka.actor.ActorCell.invoke(ActorCell.scala:561)
>>>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
>>>   at akka.dispatch.Mailbox.run(Mailbox.scala:225)
>>>   at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
>>>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>>   at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>>   at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>>   at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>> End of exception on server side>]
>>>   at org.apache.flink.runtime.rest.RestClient.parseResponse(RestClient.java:390)
>>>   at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$3(RestClient.java:374)
>>>   at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:952)
>>>   at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
>>>
>>>
>>> The Flink session cluster job page is mostly unreachable and only loads after a long time. In the end I could see that the job had in fact been submitted successfully.
>>>
>>> Has anyone run into this situation before?
>>>
>>> Best Wishes.
>>>
>>>
>>>
>>

-- 
Best, Jingsong Lee
