I hit this type of exception when trying to build and test Flink on a
"small machine". I worked around it in the test by increasing the Akka timeout.

https://github.com/stefanobortoli/flink/blob/FLINK-1827/flink-tests/src/test/java/org/apache/flink/test/checkpointing/EventTimeAllWindowCheckpointingITCase.java

It happened only on my machine (a VirtualBox VM I use for development), but
not on Flavio's. Is it possible that under load the JobManager
slows down a bit too much?
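
For what it's worth, the 10000 ms in the stack trace matches Flink's default ask timeout, so an alternative to patching the test is raising it cluster-wide in flink-conf.yaml (the 60 s value below is just an example I picked, not a recommendation):

```yaml
# flink-conf.yaml -- raise the Akka ask timeout used for RPC round-trips,
# including the TaskManager's request for the next input split
akka.ask.timeout: 60 s
```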

Regards,
Stefano

2016-04-27 17:50 GMT+02:00 Flavio Pompermaier <pomperma...@okkam.it>:

> A precursor of the modified connector (we started it a long time ago).
> However, the idea is the same: I compute the input splits and then fetch the
> data split by split (similar to what happens in FLINK-3750 -
> https://github.com/apache/flink/pull/1941 )
>
> Best,
> Flavio
>
> On Wed, Apr 27, 2016 at 5:38 PM, Chesnay Schepler <ches...@apache.org>
> wrote:
>
>> Are you using your modified connector or the currently available one?
>>
>>
>> On 27.04.2016 17:35, Flavio Pompermaier wrote:
>>
>> Hi to all,
>> I'm running a Flink Job on a JDBC datasource and I obtain the following
>> exception:
>>
>> java.lang.RuntimeException: Requesting the next InputSplit failed.
>> at
>> org.apache.flink.runtime.taskmanager.TaskInputSplitProvider.getNextInputSplit(TaskInputSplitProvider.java:91)
>> at
>> org.apache.flink.runtime.operators.DataSourceTask$1.hasNext(DataSourceTask.java:342)
>> at
>> org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:137)
>> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
>> at java.lang.Thread.run(Thread.java:745)
>> Caused by: java.util.concurrent.TimeoutException: Futures timed out after
>> [10000 milliseconds]
>> at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>> at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>> at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>> at
>> scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>> at scala.concurrent.Await$.result(package.scala:107)
>> at scala.concurrent.Await.result(package.scala)
>> at
>> org.apache.flink.runtime.taskmanager.TaskInputSplitProvider.getNextInputSplit(TaskInputSplitProvider.java:71)
>> ... 4 more
>>
>> What could be the cause? Is it because reading the whole DataSource
>> cannot take more than 10000 milliseconds?
>>
>> Best,
>> Flavio
>>
>>
>>
>
>
