+1

On Wed, Jun 22, 2016 at 1:07 PM, Kousuke Saruta <[email protected]> wrote:

> +1 (non-binding)
>
> On 2016/06/23 4:53, Reynold Xin wrote:
>
> +1 myself
>
> On Wed, Jun 22, 2016 at 12:19 PM, Sean McNamara <[email protected]> wrote:
>
>> +1
>>
>> On Jun 22, 2016, at 1:14 PM, Michael Armbrust <[email protected]> wrote:
>>
>> +1
>>
>> On Wed, Jun 22, 2016 at 11:33 AM, Jonathan Kelly <[email protected]> wrote:
>>
>>> +1
>>>
>>> On Wed, Jun 22, 2016 at 10:41 AM Tim Hunter <[email protected]> wrote:
>>>
>>>> +1 This release passes all tests on the graphframes and tensorframes
>>>> packages.
>>>>
>>>> On Wed, Jun 22, 2016 at 7:19 AM, Cody Koeninger <[email protected]> wrote:
>>>>
>>>>> If we're considering backporting changes for the 0.8 kafka
>>>>> integration, I am sure there are people who would like to get
>>>>>
>>>>> https://issues.apache.org/jira/browse/SPARK-10963
>>>>>
>>>>> into 1.6.x as well
>>>>>
>>>>> On Wed, Jun 22, 2016 at 7:41 AM, Sean Owen <[email protected]> wrote:
>>>>> > Good call, probably worth back-porting, I'll try to do that. I don't
>>>>> > think it blocks a release, but would be good to get into a next RC if
>>>>> > any.
>>>>> >
>>>>> > On Wed, Jun 22, 2016 at 11:38 AM, Pete Robbins <[email protected]> wrote:
>>>>> >> This has failed on our 1.6 stream builds regularly.
>>>>> >> (https://issues.apache.org/jira/browse/SPARK-6005) looks fixed in 2.0?
>>>>> >>
>>>>> >> On Wed, 22 Jun 2016 at 11:15 Sean Owen <[email protected]> wrote:
>>>>> >>>
>>>>> >>> Oops, one more in the "does anybody else see this" department:
>>>>> >>>
>>>>> >>> - offset recovery *** FAILED ***
>>>>> >>>   recoveredOffsetRanges.forall(((or: (org.apache.spark.streaming.Time,
>>>>> >>>   Array[org.apache.spark.streaming.kafka.OffsetRange])) =>
>>>>> >>>   earlierOffsetRangesAsSets.contains(scala.Tuple2.apply[org.apache.spark.streaming.Time,
>>>>> >>>   scala.collection.immutable.Set[org.apache.spark.streaming.kafka.OffsetRange]](or._1,
>>>>> >>>   scala.this.Predef.refArrayOps[org.apache.spark.streaming.kafka.OffsetRange](or._2).toSet[org.apache.spark.streaming.kafka.OffsetRange]))))
>>>>> >>>   was false Recovered ranges are not the same as the ones generated
>>>>> >>>   (DirectKafkaStreamSuite.scala:301)
>>>>> >>>
>>>>> >>> This actually fails consistently for me too in the Kafka integration
>>>>> >>> code. Not timezone related, I think.

--
Sameer Agarwal
Software Engineer | Databricks Inc.
http://cs.berkeley.edu/~sameerag
