This has failed on our 1.6 stream builds regularly
(https://issues.apache.org/jira/browse/SPARK-6005). Looks fixed in 2.0?

On Wed, 22 Jun 2016 at 11:15 Sean Owen <so...@cloudera.com> wrote:

> Oops, one more in the "does anybody else see this" department:
>
> - offset recovery *** FAILED ***
>   recoveredOffsetRanges.forall(((or: (org.apache.spark.streaming.Time,
> Array[org.apache.spark.streaming.kafka.OffsetRange])) =>
>
> earlierOffsetRangesAsSets.contains(scala.Tuple2.apply[org.apache.spark.streaming.Time,
>
> scala.collection.immutable.Set[org.apache.spark.streaming.kafka.OffsetRange]](or._1,
>
> scala.this.Predef.refArrayOps[org.apache.spark.streaming.kafka.OffsetRange](or._2).toSet[org.apache.spark.streaming.kafka.OffsetRange]))))
> was false Recovered ranges are not the same as the ones generated
> (DirectKafkaStreamSuite.scala:301)
>
> This actually fails consistently for me too in the Kafka integration
> code. Not timezone related, I think.
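For context on what that `forall` is asserting: the macro-expanded output above boils down to "every recovered (batch time, offset ranges) pair, with the ranges viewed as a Set, appears among the pre-restart ranges". A rough standalone sketch with hypothetical stand-in types (not the actual spark-streaming-kafka classes):

```scala
// Hypothetical stand-ins for the real spark-streaming-kafka types,
// just to illustrate the shape of the failing check.
case class OffsetRange(topic: String, partition: Int, from: Long, until: Long)
case class Time(ms: Long)

val earlier: Seq[(Time, Array[OffsetRange])] =
  Seq(Time(1000L) -> Array(OffsetRange("t", 0, 0L, 10L)))
val recovered: Seq[(Time, Array[OffsetRange])] =
  Seq(Time(1000L) -> Array(OffsetRange("t", 0, 0L, 10L)))

// Arrays compare by reference in Scala, so both sides are converted to
// Sets before comparing; that is why the assertion calls .toSet.
val earlierAsSets = earlier.map { case (t, rs) => (t, rs.toSet) }.toSet
val ok = recovered.forall { case (t, rs) => earlierAsSets.contains((t, rs.toSet)) }
println(ok)
```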
>
> On Wed, Jun 22, 2016 at 9:02 AM, Sean Owen <so...@cloudera.com> wrote:
> > I'm fairly convinced this error and others that appear
> > timestamp-related are an environment problem. This test and method have been
> > present for several Spark versions, without change. I reviewed the
> > logic and it seems sound, explicitly setting the time zone correctly.
> > I am not sure why it behaves differently on this machine.
> >
> > I'd give a +1 to this release if nobody else is seeing errors like
> > this. The sigs, hashes, other tests pass for me.
> >
> > On Tue, Jun 21, 2016 at 6:49 PM, Sean Owen <so...@cloudera.com> wrote:
> >> UIUtilsSuite:
> >> - formatBatchTime *** FAILED ***
> >>   "2015/05/14 [14]:04:40" did not equal "2015/05/14 [21]:04:40"
> >> (UIUtilsSuite.scala:73)
>
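The 14:04 vs 21:04 mismatch above is exactly a 7-hour offset, i.e. US Pacific daylight time vs UTC, which fits an environment / default-time-zone difference rather than a logic bug. A minimal sketch of the mechanism (hypothetical code, not the actual UIUtils implementation):

```scala
// SimpleDateFormat uses the JVM default time zone unless one is set
// explicitly, so the same epoch millis can format differently per machine.
import java.text.SimpleDateFormat
import java.util.{Date, TimeZone}

val fmt = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss")
val batchTime = 1431637480000L // 2015/05/14 21:04:40 UTC

// Output here depends on the machine's default time zone
// (21:04:40 in UTC, 14:04:40 in US Pacific daylight time):
println(fmt.format(new Date(batchTime)))

// Pinning the zone makes the result deterministic:
fmt.setTimeZone(TimeZone.getTimeZone("UTC"))
println(fmt.format(new Date(batchTime))) // 2015/05/14 21:04:40
```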
