Re: Re: Re: spark2.1 kafka0.10

2017-06-22 Thread lk_spark
Thank you, Kumar. I will try it later.

2017-06-22 

lk_spark 




Re: Re: spark2.1 kafka0.10

2017-06-22 Thread Pralabh Kumar
It looks like the replicas for the partition are failing. If you have more
brokers, can you try increasing the number of replicas, just to make sure at
least one leader is always available?
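A hedged sketch of raising the replication factor with the stock Kafka admin tool; the broker IDs, replica layout, and ZooKeeper address below are assumptions, not values from this thread:

```shell
# increase-replication.json: assign each partition of "pages" to three
# replicas instead of two (broker IDs 1-3 are placeholders).
cat > increase-replication.json <<'EOF'
{"version": 1,
 "partitions": [
   {"topic": "pages", "partition": 0, "replicas": [1, 2, 3]},
   {"topic": "pages", "partition": 1, "replicas": [2, 3, 1]},
   {"topic": "pages", "partition": 2, "replicas": [3, 1, 2]},
   {"topic": "pages", "partition": 3, "replicas": [1, 3, 2]},
   {"topic": "pages", "partition": 4, "replicas": [2, 1, 3]}
 ]}
EOF

# Apply the reassignment (ZooKeeper address is a placeholder).
bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --reassignment-json-file increase-replication.json --execute

# Check that the reassignment has completed.
bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --reassignment-json-file increase-replication.json --verify
```

This requires a running Kafka 0.10 cluster; the tool copies the partition data to the new replicas, so run it during a quiet period on large topics.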


Re: Re: spark2.1 kafka0.10

2017-06-21 Thread lk_spark
Each topic has 5 partitions and 2 replicas.
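For reference, a topic with that layout could be created with the stock admin tool; the ZooKeeper address and topic name are assumptions:

```shell
# Create a topic with 5 partitions and 2 replicas per partition
# (ZooKeeper address is a placeholder).
bin/kafka-topics.sh --zookeeper zk1:2181 --create --topic pages \
  --partitions 5 --replication-factor 2
```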

2017-06-22 

lk_spark 




Re: spark2.1 kafka0.10

2017-06-21 Thread Pralabh Kumar
How many replicas do you have for this topic?
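One way to check, assuming shell access to the Kafka installation (the ZooKeeper address and topic name are placeholders):

```shell
# Show partition count, replication factor, current leader, replica
# list, and in-sync replicas (Isr) for the topic.
bin/kafka-topics.sh --zookeeper zk1:2181 --describe --topic pages
```

A replica missing from the Isr column would indicate a failed or lagging replica.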


Re: spark2.1 kafka0.10

2017-06-21 Thread lk_spark
java.lang.IllegalStateException: No current assignment for partition pages-2
 at org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:264)
 at org.apache.kafka.clients.consumer.internals.SubscriptionState.needOffsetReset(SubscriptionState.java:336)
 at org.apache.kafka.clients.consumer.KafkaConsumer.seekToEnd(KafkaConsumer.java:1236)
 at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.latestOffsets(DirectKafkaInputDStream.scala:197)
 at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.compute(DirectKafkaInputDStream.scala:214)
 at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
 at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
 at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
 at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
 at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
 at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
 at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:335)
 at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:333)
 at scala.Option.orElse(Option.scala:289)
 at org.apache.spark.streaming.dstream.DStream.getOrCompute(DStream.scala:330)
 at org.apache.spark.streaming.dstream.ForEachDStream.generateJob(ForEachDStream.scala:48)
 at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:117)
 at org.apache.spark.streaming.DStreamGraph$$anonfun$1.apply(DStreamGraph.scala:116)
 at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
 at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
 at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
 at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
 at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
 at org.apache.spark.streaming.DStreamGraph.generateJobs(DStreamGraph.scala:116)
 at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:249)
 at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:247)
 at scala.util.Try$.apply(Try.scala:192)
 at org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:247)
 at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:183)
 at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:89)
 at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:88)
 at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)


2017-06-22 

lk_spark 




spark2.1 kafka0.10

2017-06-21 Thread lk_spark
Hi all:
When I run the streaming application for a few minutes, I get this error:

17/06/22 10:34:56 INFO ConsumerCoordinator: Revoking previously assigned 
partitions [comment-0, profile-1, profile-3, cwb-3, bizs-1, cwb-1, 
weibocomment-0, bizs-2, pages-0, bizs-4, pages-2, weibo-0, pages-4, weibo-4, 
clicks-1, comment-1, weibo-2, clicks-3, weibocomment-4, weibocomment-2, 
profile-0, profile-2, profile-4, cwb-4, cwb-2, cwb-0, bizs-0, bizs-3, pages-1, 
weibo-1, pages-3, clicks-2, weibo-3, clicks-4, comment-2, weibocomment-3, 
clicks-0, weibocomment-1] for group youedata1
17/06/22 10:34:56 INFO AbstractCoordinator: (Re-)joining group youedata1
17/06/22 10:34:56 INFO AbstractCoordinator: Successfully joined group youedata1 
with generation 3
17/06/22 10:34:56 INFO ConsumerCoordinator: Setting newly assigned partitions 
[comment-0, profile-1, profile-3, cwb-3, bizs-1, cwb-1, weibocomment-0, bizs-2, 
bizs-4, pages-4, weibo-4, clicks-1, comment-1, clicks-3, weibocomment-4, 
weibocomment-2, profile-0, profile-2, profile-4, cwb-4, cwb-2, cwb-0, bizs-0, 
bizs-3, pages-3, clicks-2, weibo-3, clicks-4, comment-2, weibocomment-3, 
clicks-0, weibocomment-1] for group youedata1
17/06/22 10:34:56 ERROR JobScheduler: Error generating jobs for time 
1498098896000 ms
java.lang.IllegalStateException: No current assignment for partition pages-2

I don't know why.

2017-06-22


lk_spark
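For context, a minimal Spark 2.1 / spark-streaming-kafka-0-10 setup matching the topics and consumer group in the log might look like the sketch below. The broker address, batch interval, and output action are assumptions, not details from this thread. Note that the log shows group youedata1 rebalancing and losing pages-2; the kafka-0-10 direct stream expects to be the only member of its group, so a second application or stream sharing the same group.id is a common trigger for this exception.

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

object StreamApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("spark21-kafka010")
    val ssc  = new StreamingContext(conf, Seconds(5)) // batch interval is an assumption

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "broker1:9092", // placeholder address
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      // Each direct stream should use a group.id no other consumer shares;
      // another member joining the same group forces the rebalance seen above.
      "group.id"           -> "youedata1",
      "auto.offset.reset"  -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    // Topic names taken from the partition list in the log.
    val topics = Array("pages", "weibo", "clicks", "comment",
                       "profile", "cwb", "bizs", "weibocomment")

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](topics, kafkaParams)
    )

    // Placeholder action; the real application's processing goes here.
    stream.foreachRDD { rdd => println(s"batch size: ${rdd.count()}") }

    ssc.start()
    ssc.awaitTermination()
  }
}
```

This is a sketch for a running Kafka 0.10 cluster, not a standalone program; it must be submitted with spark-submit against brokers that host these topics.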