Good call, probably worth back-porting; I'll try to do that. I don't
think it blocks a release, but it would be good to get into the next
RC, if there is one.

On Wed, Jun 22, 2016 at 11:38 AM, Pete Robbins <robbin...@gmail.com> wrote:
> This has failed on our 1.6 stream builds regularly.
> (https://issues.apache.org/jira/browse/SPARK-6005) looks fixed in 2.0?
>
> On Wed, 22 Jun 2016 at 11:15 Sean Owen <so...@cloudera.com> wrote:
>>
>> Oops, one more in the "does anybody else see this" department:
>>
>> - offset recovery *** FAILED ***
>>   recoveredOffsetRanges.forall(((or: (org.apache.spark.streaming.Time, Array[org.apache.spark.streaming.kafka.OffsetRange])) =>
>>     earlierOffsetRangesAsSets.contains(scala.Tuple2.apply[org.apache.spark.streaming.Time, scala.collection.immutable.Set[org.apache.spark.streaming.kafka.OffsetRange]](or._1,
>>       scala.this.Predef.refArrayOps[org.apache.spark.streaming.kafka.OffsetRange](or._2).toSet[org.apache.spark.streaming.kafka.OffsetRange]))))
>>   was false: Recovered ranges are not the same as the ones generated
>>   (DirectKafkaStreamSuite.scala:301)
>>
>> This fails consistently for me too, in the Kafka integration
>> code. Not timezone-related, I think.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org