[ https://issues.apache.org/jira/browse/FLINK-2770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Till Rohrmann closed FLINK-2770.
--------------------------------
    Resolution: Duplicate

> KafkaITCase.testConcurrentProducerConsumerTopology fails
> --------------------------------------------------------
>
>                 Key: FLINK-2770
>                 URL: https://issues.apache.org/jira/browse/FLINK-2770
>             Project: Flink
>          Issue Type: Bug
>    Affects Versions: 0.10
>            Reporter: Matthias J. Sax
>            Priority: Critical
>             Fix For: 0.10
>
>
> https://travis-ci.org/mjsax/flink/jobs/82308003
> {noformat}
> Running org.apache.flink.streaming.connectors.kafka.KafkaITCase
> 09/26/2015 17:52:50   Job execution switched to status RUNNING.
> 09/26/2015 17:52:50   Source: Custom Source -> Sink: Unnamed(1/1) switched to SCHEDULED
> 09/26/2015 17:52:50   Source: Custom Source -> Sink: Unnamed(1/1) switched to DEPLOYING
> 09/26/2015 17:52:50   Source: Custom Source -> Sink: Unnamed(1/1) switched to RUNNING
> 09/26/2015 17:52:50   Source: Custom Source -> Sink: Unnamed(1/1) switched to FINISHED
> 09/26/2015 17:52:50   Job execution switched to status FINISHED.
> 09/26/2015 17:52:50   Job execution switched to status RUNNING.
> 09/26/2015 17:52:50   Source: Custom Source -> Map -> Flat Map(1/1) switched to SCHEDULED
> 09/26/2015 17:52:50   Source: Custom Source -> Map -> Flat Map(1/1) switched to DEPLOYING
> 09/26/2015 17:52:50   Source: Custom Source -> Map -> Flat Map(1/1) switched to RUNNING
> 09/26/2015 17:52:51   Source: Custom Source -> Map -> Flat Map(1/1) switched to FAILED
> java.lang.Exception: Could not forward element to next operator
>       at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher.run(LegacyFetcher.java:242)
>       at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.run(FlinkKafkaConsumer.java:382)
>       at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:57)
>       at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:57)
>       at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:198)
>       at org.apache.flink.runtime.taskmanager.Task.run(Task.java:580)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Could not forward element to next operator
>       at org.apache.flink.streaming.runtime.tasks.OutputHandler$CopyingChainingOutput.collect(OutputHandler.java:332)
>       at org.apache.flink.streaming.runtime.tasks.OutputHandler$CopyingChainingOutput.collect(OutputHandler.java:316)
>       at org.apache.flink.streaming.runtime.io.CollectorWrapper.collect(CollectorWrapper.java:50)
>       at org.apache.flink.streaming.runtime.io.CollectorWrapper.collect(CollectorWrapper.java:30)
>       at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$SourceOutput.collect(SourceStreamTask.java:106)
>       at org.apache.flink.streaming.api.operators.StreamSource$NonTimestampContext.collect(StreamSource.java:92)
>       at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.run(LegacyFetcher.java:449)
> Caused by: java.lang.RuntimeException: Could not forward element to next operator
>       at org.apache.flink.streaming.runtime.tasks.OutputHandler$CopyingChainingOutput.collect(OutputHandler.java:332)
>       at org.apache.flink.streaming.runtime.tasks.OutputHandler$CopyingChainingOutput.collect(OutputHandler.java:316)
>       at org.apache.flink.streaming.runtime.io.CollectorWrapper.collect(CollectorWrapper.java:50)
>       at org.apache.flink.streaming.runtime.io.CollectorWrapper.collect(CollectorWrapper.java:30)
>       at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:37)
>       at org.apache.flink.streaming.runtime.tasks.OutputHandler$CopyingChainingOutput.collect(OutputHandler.java:329)
>       ... 6 more
> Caused by: org.apache.flink.streaming.connectors.kafka.testutils.SuccessException
>       at org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase$7.flatMap(KafkaConsumerTestBase.java:931)
>       at org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase$7.flatMap(KafkaConsumerTestBase.java:911)
>       at org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:47)
>       at org.apache.flink.streaming.runtime.tasks.OutputHandler$CopyingChainingOutput.collect(OutputHandler.java:329)
>       ... 11 more
> 09/26/2015 17:52:51   Job execution switched to status FAILING.
> 09/26/2015 17:52:51   Job execution switched to status FAILED.
> Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 80.981 sec <<< FAILURE! - in org.apache.flink.streaming.connectors.kafka.KafkaITCase
> testConcurrentProducerConsumerTopology(org.apache.flink.streaming.connectors.kafka.KafkaITCase)  Time elapsed: 0.682 sec  <<< ERROR!
> org.apache.flink.client.program.ProgramInvocationException: The program execution failed: Job execution failed.
>       at org.apache.flink.client.program.Client.runBlocking(Client.java:407)
>       at org.apache.flink.streaming.api.environment.RemoteStreamEnvironment.executeRemotely(RemoteStreamEnvironment.java:131)
>       at org.apache.flink.streaming.api.environment.RemoteStreamEnvironment.execute(RemoteStreamEnvironment.java:96)
>       at org.apache.flink.streaming.api.environment.RemoteStreamEnvironment.execute(RemoteStreamEnvironment.java:37)
>       at org.apache.flink.streaming.connectors.kafka.KafkaTestBase.tryExecutePropagateExceptions(KafkaTestBase.java:321)
>       at org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runSimpleConcurrentProducerConsumerTopology(KafkaConsumerTestBase.java:349)
>       at org.apache.flink.streaming.connectors.kafka.KafkaITCase.testConcurrentProducerConsumerTopology(KafkaITCase.java:50)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>       at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1.applyOrElse(JobManager.scala:442)
>       at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
>       at org.apache.flink.runtime.testingUtils.TestingJobManager$$anonfun$handleTestingMessage$1.applyOrElse(TestingJobManager.scala:285)
>       at scala.PartialFunction$OrElse.apply(PartialFunction.scala:167)
>       at org.apache.flink.runtime.LeaderSessionMessageFilter$$anonfun$receive$1.applyOrElse(LeaderSessionMessageFilter.scala:36)
>       at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
>       at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>       at org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>       at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>       at org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>       at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>       at org.apache.flink.runtime.jobmanager.JobManager.aroundReceive(JobManager.scala:107)
>       at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>       at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>       at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:254)
>       at akka.dispatch.Mailbox.run(Mailbox.scala:221)
>       at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
>       at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>       at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>       at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>       at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.Exception: Unable to get last offset for topic concurrentProducerConsumerTopic_cfbda11d-09d6-4ce5-8981-3109110075b4 and partitions [FetchPartition {partition=1, offset=-915623761776}].
> Exception for partition 1: kafka.common.UnknownException
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
>       at java.lang.Class.newInstance(Class.java:438)
>       at kafka.common.ErrorMapping$.exceptionFor(ErrorMapping.scala:86)
>       at kafka.common.ErrorMapping.exceptionFor(ErrorMapping.scala)
>       at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.getLastOffset(LegacyFetcher.java:521)
>       at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.run(LegacyFetcher.java:370)
>       at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher.run(LegacyFetcher.java:242)
>       at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.run(FlinkKafkaConsumer.java:382)
>       at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:57)
>       at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:57)
>       at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:198)
>       at org.apache.flink.runtime.taskmanager.Task.run(Task.java:580)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: Unable to get last offset for topic concurrentProducerConsumerTopic_cfbda11d-09d6-4ce5-8981-3109110075b4 and partitions [FetchPartition {partition=1, offset=-915623761776}].
> Exception for partition 1: kafka.common.UnknownException
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
>       at java.lang.Class.newInstance(Class.java:438)
>       at kafka.common.ErrorMapping$.exceptionFor(ErrorMapping.scala:86)
>       at kafka.common.ErrorMapping.exceptionFor(ErrorMapping.scala)
>       at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.getLastOffset(LegacyFetcher.java:521)
>       at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.run(LegacyFetcher.java:370)
>       at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.getLastOffset(LegacyFetcher.java:524)
>       at org.apache.flink.streaming.connectors.kafka.internals.LegacyFetcher$SimpleConsumerThread.run(LegacyFetcher.java:370)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
