Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-23 Thread Reynold Xin
Maciej, let's fix SPARK-13283. It won't block 1.6.2, though.

On Thu, Jun 23, 2016 at 5:45 AM, Maciej Bryński  wrote:

> -1
>
> I need SPARK-13283 to be solved.
>
> Regards,
> Maciek Bryński
>
> 2016-06-23 0:13 GMT+02:00 Krishna Sankar :
>
>> +1 (non-binding, of course)
>>
>> 1. Compiled on OS X 10.10 (Yosemite) OK. Total time: 37:11 min
>>  mvn clean package -Pyarn -Phadoop-2.6 -DskipTests
>> 2. Tested pyspark, mllib (IPython 4.0)
>> 2.0 Spark version is 1.6.2
>> 2.1. statistics (min,max,mean,Pearson,Spearman) OK
>> 2.2. Linear/Ridge/Lasso Regression OK
>> 2.3. Decision Tree, Naive Bayes OK
>> 2.4. KMeans OK
>>Center And Scale OK
>> 2.5. RDD operations OK
>>   State of the Union Texts - MapReduce, Filter,sortByKey (word count)
>> 2.6. Recommendation (Movielens medium dataset ~1 M ratings) OK
>>Model evaluation/optimization (rank, numIter, lambda) with
>> itertools OK
>> 3. Scala - MLlib
>> 3.1. statistics (min,max,mean,Pearson,Spearman) OK
>> 3.2. LinearRegressionWithSGD OK
>> 3.3. Decision Tree OK
>> 3.4. KMeans OK
>> 3.5. Recommendation (Movielens medium dataset ~1 M ratings) OK
>> 3.6. saveAsParquetFile OK
>> 3.7. Read and verify the 3.6 save (above) - sqlContext.parquetFile,
>> registerTempTable, sql OK
>> 3.8. result = sqlContext.sql("SELECT
>> OrderDetails.OrderID,ShipCountry,UnitPrice,Qty,Discount FROM Orders INNER
>> JOIN OrderDetails ON Orders.OrderID = OrderDetails.OrderID") OK
>> 4.0. Spark SQL from Python OK
>> 4.1. result = sqlContext.sql("SELECT * from people WHERE State = 'WA'") OK
>> 5.0. Packages
>> 5.1. com.databricks.spark.csv - read/write OK (--packages
>> com.databricks:spark-csv_2.10:1.4.0)
>> 6.0. DataFrames
>> 6.1. cast,dtypes OK
>> 6.2. groupBy,avg,crosstab,corr,isNull,na.drop OK
>> 6.3. All joins,sql,set operations,udf OK
>> 7.0. GraphX/Scala
>> 7.1. Create Graph (small and bigger dataset) OK
>> 7.2. Structure APIs - OK
>> 7.3. Social Network/Community APIs - OK
>> 7.4. Algorithms (PageRank of 2 datasets, aggregateMessages() ) OK
>>
>> Cheers & Good Work, Folks
>> 
>>
>> On Sun, Jun 19, 2016 at 9:24 PM, Reynold Xin  wrote:
>>
>>> Please vote on releasing the following candidate as Apache Spark version
>>> 1.6.2. The vote is open until Wednesday, June 22, 2016 at 22:00 PDT and
>>> passes if a majority of at least 3 +1 PMC votes are cast.
>>>
>>> [ ] +1 Release this package as Apache Spark 1.6.2
>>> [ ] -1 Do not release this package because ...
>>>
>>>
>>> The tag to be voted on is v1.6.2-rc2
>>> (54b1121f351f056d6b67d2bb4efe0d553c0f7482)
>>>
>>> The release files, including signatures, digests, etc. can be found at:
>>> http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-bin/
>>>
>>> Release artifacts are signed with the following key:
>>> https://people.apache.org/keys/committer/pwendell.asc
>>>
>>> The staging repository for this release can be found at:
>>> https://repository.apache.org/content/repositories/orgapachespark-1186/
>>>
>>> The documentation corresponding to this release can be found at:
>>> http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-docs/
>>>
>>>
>>> ===
>>> == How can I help test this release? ==
>>> ===
>>> If you are a Spark user, you can help us test this release by taking an
>>> existing Spark workload and running on this release candidate, then
>>> reporting any regressions from 1.6.1.
>>>
>>> ===
>>> == What justifies a -1 vote for this release? ==
>>> ===
>>> This is a maintenance release in the 1.6.x series.  Bugs already present
>>> in 1.6.1, missing features, or bugs related to new features will not
>>> necessarily block this release.
>>>
>>>
>>>
>>>
>>
>
>
> --
> Maciek Bryński
>
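
For anyone verifying the artifacts themselves, here is a minimal sketch of the signature check implied by the links above; the tarball name is illustrative, so adjust it to whatever is actually in the RC directory:

  curl https://people.apache.org/keys/committer/pwendell.asc | gpg --import
  wget http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-bin/spark-1.6.2-bin-hadoop2.6.tgz
  wget http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-bin/spark-1.6.2-bin-hadoop2.6.tgz.asc
  gpg --verify spark-1.6.2-bin-hadoop2.6.tgz.asc spark-1.6.2-bin-hadoop2.6.tgz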


Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-23 Thread vaquar khan
+1 (non-binding)

Regards,
Vaquar khan
On 23 Jun 2016 07:50, "Sean Owen"  wrote:

> I don't think that qualifies as a blocker; not even clear it's a
> regression. Even non-binding votes here should focus on whether this
> is OK to release as a maintenance update to 1.6.1.
>
> On Thu, Jun 23, 2016 at 1:45 PM, Maciej Bryński  wrote:
> > -1
> >
> > I need SPARK-13283 to be solved.
> >
> > Regards,
> > Maciek Bryński
> >
>
>
>


Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-23 Thread Sean Owen
I don't think that qualifies as a blocker; not even clear it's a
regression. Even non-binding votes here should focus on whether this
is OK to release as a maintenance update to 1.6.1.

On Thu, Jun 23, 2016 at 1:45 PM, Maciej Bryński  wrote:
> -1
>
> I need SPARK-13283 to be solved.
>
> Regards,
> Maciek Bryński
>




Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-23 Thread Maciej Bryński
-1

I need SPARK-13283 to be solved.

Regards,
Maciek Bryński

2016-06-23 0:13 GMT+02:00 Krishna Sankar :

> +1 (non-binding, of course)
>
> 1. Compiled on OS X 10.10 (Yosemite) OK. Total time: 37:11 min
>  mvn clean package -Pyarn -Phadoop-2.6 -DskipTests
> 2. Tested pyspark, mllib (IPython 4.0)
> 2.0 Spark version is 1.6.2
> 2.1. statistics (min,max,mean,Pearson,Spearman) OK
> 2.2. Linear/Ridge/Lasso Regression OK
> 2.3. Decision Tree, Naive Bayes OK
> 2.4. KMeans OK
>Center And Scale OK
> 2.5. RDD operations OK
>   State of the Union Texts - MapReduce, Filter,sortByKey (word count)
> 2.6. Recommendation (Movielens medium dataset ~1 M ratings) OK
>Model evaluation/optimization (rank, numIter, lambda) with
> itertools OK
> 3. Scala - MLlib
> 3.1. statistics (min,max,mean,Pearson,Spearman) OK
> 3.2. LinearRegressionWithSGD OK
> 3.3. Decision Tree OK
> 3.4. KMeans OK
> 3.5. Recommendation (Movielens medium dataset ~1 M ratings) OK
> 3.6. saveAsParquetFile OK
> 3.7. Read and verify the 3.6 save (above) - sqlContext.parquetFile,
> registerTempTable, sql OK
> 3.8. result = sqlContext.sql("SELECT
> OrderDetails.OrderID,ShipCountry,UnitPrice,Qty,Discount FROM Orders INNER
> JOIN OrderDetails ON Orders.OrderID = OrderDetails.OrderID") OK
> 4.0. Spark SQL from Python OK
> 4.1. result = sqlContext.sql("SELECT * from people WHERE State = 'WA'") OK
> 5.0. Packages
> 5.1. com.databricks.spark.csv - read/write OK (--packages
> com.databricks:spark-csv_2.10:1.4.0)
> 6.0. DataFrames
> 6.1. cast,dtypes OK
> 6.2. groupBy,avg,crosstab,corr,isNull,na.drop OK
> 6.3. All joins,sql,set operations,udf OK
> 7.0. GraphX/Scala
> 7.1. Create Graph (small and bigger dataset) OK
> 7.2. Structure APIs - OK
> 7.3. Social Network/Community APIs - OK
> 7.4. Algorithms (PageRank of 2 datasets, aggregateMessages() ) OK
>
> Cheers & Good Work, Folks
> 
>
> On Sun, Jun 19, 2016 at 9:24 PM, Reynold Xin  wrote:
>
>> Please vote on releasing the following candidate as Apache Spark version
>> 1.6.2. The vote is open until Wednesday, June 22, 2016 at 22:00 PDT and
>> passes if a majority of at least 3 +1 PMC votes are cast.
>>
>> [ ] +1 Release this package as Apache Spark 1.6.2
>> [ ] -1 Do not release this package because ...
>>
>>
>> The tag to be voted on is v1.6.2-rc2
>> (54b1121f351f056d6b67d2bb4efe0d553c0f7482)
>>
>> The release files, including signatures, digests, etc. can be found at:
>> http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-bin/
>>
>> Release artifacts are signed with the following key:
>> https://people.apache.org/keys/committer/pwendell.asc
>>
>> The staging repository for this release can be found at:
>> https://repository.apache.org/content/repositories/orgapachespark-1186/
>>
>> The documentation corresponding to this release can be found at:
>> http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-docs/
>>
>>
>> ===
>> == How can I help test this release? ==
>> ===
>> If you are a Spark user, you can help us test this release by taking an
>> existing Spark workload and running on this release candidate, then
>> reporting any regressions from 1.6.1.
>>
>> ===
>> == What justifies a -1 vote for this release? ==
>> ===
>> This is a maintenance release in the 1.6.x series.  Bugs already present
>> in 1.6.1, missing features, or bugs related to new features will not
>> necessarily block this release.
>>
>>
>>
>>
>


-- 
Maciek Bryński
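
As a concrete illustration of the kind of MLlib smoke test in Krishna's checklist above (roughly items 3.1 and 3.4), here is a minimal, self-contained Scala sketch against the Spark 1.6 MLlib API; the toy data and cluster count are made up:

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.mllib.clustering.KMeans
  import org.apache.spark.mllib.linalg.Vectors
  import org.apache.spark.mllib.stat.Statistics

  object Rc2SmokeTest {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(
        new SparkConf().setAppName("rc2-smoke").setMaster("local[2]"))
      val data = sc.parallelize(Seq(
        Vectors.dense(1.0, 2.0), Vectors.dense(3.0, 4.0), Vectors.dense(5.0, 6.0)))
      // 3.1: basic column statistics
      val summary = Statistics.colStats(data)
      println(s"min=${summary.min} max=${summary.max} mean=${summary.mean}")
      // 3.4: KMeans trains and yields the requested number of centers
      val model = KMeans.train(data, 2, 10)
      assert(model.clusterCenters.length == 2)
      sc.stop()
    }
  }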


Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-22 Thread Sameer Agarwal
+1

On Wed, Jun 22, 2016 at 1:07 PM, Kousuke Saruta 
wrote:

> +1 (non-binding)
>
> On 2016/06/23 4:53, Reynold Xin wrote:
>
> +1 myself
>
>
> On Wed, Jun 22, 2016 at 12:19 PM, Sean McNamara <
> sean.mcnam...@webtrends.com> wrote:
>
>> +1
>>
>> On Jun 22, 2016, at 1:14 PM, Michael Armbrust 
>> wrote:
>>
>> +1
>>
>> On Wed, Jun 22, 2016 at 11:33 AM, Jonathan Kelly 
>> wrote:
>>
>>> +1
>>>
>>> On Wed, Jun 22, 2016 at 10:41 AM Tim Hunter < 
>>> timhun...@databricks.com> wrote:
>>>
 +1 This release passes all tests on the graphframes and tensorframes
 packages.

 On Wed, Jun 22, 2016 at 7:19 AM, Cody Koeninger < 
 c...@koeninger.org> wrote:

> If we're considering backporting changes for the 0.8 kafka
> integration, I am sure there are people who would like to get
>
> https://issues.apache.org/jira/browse/SPARK-10963
>
> into 1.6.x as well
>
> On Wed, Jun 22, 2016 at 7:41 AM, Sean Owen < 
> so...@cloudera.com> wrote:
> > Good call, probably worth back-porting, I'll try to do that. I don't
> > think it blocks a release, but would be good to get into a next RC if
> > any.
> >
> > On Wed, Jun 22, 2016 at 11:38 AM, Pete Robbins <
> robbin...@gmail.com> wrote:
> >> This has failed on our 1.6 stream builds regularly.
> >> ( 
> https://issues.apache.org/jira/browse/SPARK-6005) looks fixed in 2.0?
> >>
> >> On Wed, 22 Jun 2016 at 11:15 Sean Owen < 
> so...@cloudera.com> wrote:
> >>>
> >>> Oops, one more in the "does anybody else see this" department:
> >>>
> >>> - offset recovery *** FAILED ***
> >>>   recoveredOffsetRanges.forall(((or:
> (org.apache.spark.streaming.Time,
> >>> Array[org.apache.spark.streaming.kafka.OffsetRange])) =>
> >>>
> >>>
> earlierOffsetRangesAsSets.contains(scala.Tuple2.apply[org.apache.spark.streaming.Time,
> >>>
> >>>
> scala.collection.immutable.Set[org.apache.spark.streaming.kafka.OffsetRange]](or._1,
> >>>
> >>>
> scala.this.Predef.refArrayOps[org.apache.spark.streaming.kafka.OffsetRange](or._2).toSet[org.apache.spark.streaming.kafka.OffsetRange]
> >>> was false Recovered ranges are not the same as the ones generated
> >>> (DirectKafkaStreamSuite.scala:301)
> >>>
> >>> This actually fails consistently for me too in the Kafka
> integration
> >>> code. Not timezone related, I think.
> >
>
>

>>
>>
>
>


-- 
Sameer Agarwal
Software Engineer | Databricks Inc.
http://cs.berkeley.edu/~sameerag


Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-22 Thread Kousuke Saruta

+1 (non-binding)


On 2016/06/23 4:53, Reynold Xin wrote:

+1 myself


On Wed, Jun 22, 2016 at 12:19 PM, Sean McNamara wrote:


+1


On Jun 22, 2016, at 1:14 PM, Michael Armbrust wrote:

+1

On Wed, Jun 22, 2016 at 11:33 AM, Jonathan Kelly wrote:

+1

On Wed, Jun 22, 2016 at 10:41 AM Tim Hunter wrote:

+1 This release passes all tests on the graphframes and
tensorframes packages.

On Wed, Jun 22, 2016 at 7:19 AM, Cody Koeninger wrote:

If we're considering backporting changes for the 0.8
kafka
integration, I am sure there are people who would
like to get

https://issues.apache.org/jira/browse/SPARK-10963

into 1.6.x as well

On Wed, Jun 22, 2016 at 7:41 AM, Sean Owen wrote:
> Good call, probably worth back-porting, I'll try to
do that. I don't
> think it blocks a release, but would be good to get
into a next RC if
> any.
>
> On Wed, Jun 22, 2016 at 11:38 AM, Pete Robbins wrote:
>> This has failed on our 1.6 stream builds regularly.
>> (https://issues.apache.org/jira/browse/SPARK-6005)
looks fixed in 2.0?
>>
>> On Wed, 22 Jun 2016 at 11:15 Sean Owen wrote:
>>>
>>> Oops, one more in the "does anybody else see
this" department:
>>>
>>> - offset recovery *** FAILED ***
>>>  recoveredOffsetRanges.forall(((or:
(org.apache.spark.streaming.Time,
>>>
Array[org.apache.spark.streaming.kafka.OffsetRange])) =>
>>>
>>>

earlierOffsetRangesAsSets.contains(scala.Tuple2.apply[org.apache.spark.streaming.Time,
>>>
>>>

scala.collection.immutable.Set[org.apache.spark.streaming.kafka.OffsetRange]](or._1,
>>>
>>>

scala.this.Predef.refArrayOps[org.apache.spark.streaming.kafka.OffsetRange](or._2).toSet[org.apache.spark.streaming.kafka.OffsetRange]
>>> was false Recovered ranges are not the same as
the ones generated
>>> (DirectKafkaStreamSuite.scala:301)
>>>
>>> This actually fails consistently for me too in
the Kafka integration
>>> code. Not timezone related, I think.
>
>











Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-22 Thread Sean McNamara
+1

On Jun 22, 2016, at 1:14 PM, Michael Armbrust wrote:

+1

On Wed, Jun 22, 2016 at 11:33 AM, Jonathan Kelly wrote:
+1

On Wed, Jun 22, 2016 at 10:41 AM Tim Hunter wrote:
+1 This release passes all tests on the graphframes and tensorframes packages.

On Wed, Jun 22, 2016 at 7:19 AM, Cody Koeninger wrote:
If we're considering backporting changes for the 0.8 kafka
integration, I am sure there are people who would like to get

https://issues.apache.org/jira/browse/SPARK-10963

into 1.6.x as well

On Wed, Jun 22, 2016 at 7:41 AM, Sean Owen wrote:
> Good call, probably worth back-porting, I'll try to do that. I don't
> think it blocks a release, but would be good to get into a next RC if
> any.
>
> On Wed, Jun 22, 2016 at 11:38 AM, Pete Robbins wrote:
>> This has failed on our 1.6 stream builds regularly.
>> (https://issues.apache.org/jira/browse/SPARK-6005) looks fixed in 2.0?
>>
>> On Wed, 22 Jun 2016 at 11:15 Sean Owen wrote:
>>>
>>> Oops, one more in the "does anybody else see this" department:
>>>
>>> - offset recovery *** FAILED ***
>>>   recoveredOffsetRanges.forall(((or: (org.apache.spark.streaming.Time,
>>> Array[org.apache.spark.streaming.kafka.OffsetRange])) =>
>>>
>>> earlierOffsetRangesAsSets.contains(scala.Tuple2.apply[org.apache.spark.streaming.Time,
>>>
>>> scala.collection.immutable.Set[org.apache.spark.streaming.kafka.OffsetRange]](or._1,
>>>
>>> scala.this.Predef.refArrayOps[org.apache.spark.streaming.kafka.OffsetRange](or._2).toSet[org.apache.spark.streaming.kafka.OffsetRange]
>>> was false Recovered ranges are not the same as the ones generated
>>> (DirectKafkaStreamSuite.scala:301)
>>>
>>> This actually fails consistently for me too in the Kafka integration
>>> code. Not timezone related, I think.
>






Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-22 Thread Michael Armbrust
+1

On Wed, Jun 22, 2016 at 11:33 AM, Jonathan Kelly 
wrote:

> +1
>
> On Wed, Jun 22, 2016 at 10:41 AM Tim Hunter 
> wrote:
>
>> +1 This release passes all tests on the graphframes and tensorframes
>> packages.
>>
>> On Wed, Jun 22, 2016 at 7:19 AM, Cody Koeninger 
>> wrote:
>>
>>> If we're considering backporting changes for the 0.8 kafka
>>> integration, I am sure there are people who would like to get
>>>
>>> https://issues.apache.org/jira/browse/SPARK-10963
>>>
>>> into 1.6.x as well
>>>
>>> On Wed, Jun 22, 2016 at 7:41 AM, Sean Owen  wrote:
>>> > Good call, probably worth back-porting, I'll try to do that. I don't
>>> > think it blocks a release, but would be good to get into a next RC if
>>> > any.
>>> >
>>> > On Wed, Jun 22, 2016 at 11:38 AM, Pete Robbins 
>>> wrote:
>>> >> This has failed on our 1.6 stream builds regularly.
>>> >> (https://issues.apache.org/jira/browse/SPARK-6005) looks fixed in
>>> 2.0?
>>> >>
>>> >> On Wed, 22 Jun 2016 at 11:15 Sean Owen  wrote:
>>> >>>
>>> >>> Oops, one more in the "does anybody else see this" department:
>>> >>>
>>> >>> - offset recovery *** FAILED ***
>>> >>>   recoveredOffsetRanges.forall(((or:
>>> (org.apache.spark.streaming.Time,
>>> >>> Array[org.apache.spark.streaming.kafka.OffsetRange])) =>
>>> >>>
>>> >>>
>>> earlierOffsetRangesAsSets.contains(scala.Tuple2.apply[org.apache.spark.streaming.Time,
>>> >>>
>>> >>>
>>> scala.collection.immutable.Set[org.apache.spark.streaming.kafka.OffsetRange]](or._1,
>>> >>>
>>> >>>
>>> scala.this.Predef.refArrayOps[org.apache.spark.streaming.kafka.OffsetRange](or._2).toSet[org.apache.spark.streaming.kafka.OffsetRange]
>>> >>> was false Recovered ranges are not the same as the ones generated
>>> >>> (DirectKafkaStreamSuite.scala:301)
>>> >>>
>>> >>> This actually fails consistently for me too in the Kafka integration
>>> >>> code. Not timezone related, I think.
>>> >
>>>
>>>
>>


Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-22 Thread Jonathan Kelly
+1

On Wed, Jun 22, 2016 at 10:41 AM Tim Hunter 
wrote:

> +1 This release passes all tests on the graphframes and tensorframes
> packages.
>
> On Wed, Jun 22, 2016 at 7:19 AM, Cody Koeninger 
> wrote:
>
>> If we're considering backporting changes for the 0.8 kafka
>> integration, I am sure there are people who would like to get
>>
>> https://issues.apache.org/jira/browse/SPARK-10963
>>
>> into 1.6.x as well
>>
>> On Wed, Jun 22, 2016 at 7:41 AM, Sean Owen  wrote:
>> > Good call, probably worth back-porting, I'll try to do that. I don't
>> > think it blocks a release, but would be good to get into a next RC if
>> > any.
>> >
>> > On Wed, Jun 22, 2016 at 11:38 AM, Pete Robbins 
>> wrote:
>> >> This has failed on our 1.6 stream builds regularly.
>> >> (https://issues.apache.org/jira/browse/SPARK-6005) looks fixed in 2.0?
>> >>
>> >> On Wed, 22 Jun 2016 at 11:15 Sean Owen  wrote:
>> >>>
>> >>> Oops, one more in the "does anybody else see this" department:
>> >>>
>> >>> - offset recovery *** FAILED ***
>> >>>   recoveredOffsetRanges.forall(((or: (org.apache.spark.streaming.Time,
>> >>> Array[org.apache.spark.streaming.kafka.OffsetRange])) =>
>> >>>
>> >>>
>> earlierOffsetRangesAsSets.contains(scala.Tuple2.apply[org.apache.spark.streaming.Time,
>> >>>
>> >>>
>> scala.collection.immutable.Set[org.apache.spark.streaming.kafka.OffsetRange]](or._1,
>> >>>
>> >>>
>> scala.this.Predef.refArrayOps[org.apache.spark.streaming.kafka.OffsetRange](or._2).toSet[org.apache.spark.streaming.kafka.OffsetRange]
>> >>> was false Recovered ranges are not the same as the ones generated
>> >>> (DirectKafkaStreamSuite.scala:301)
>> >>>
>> >>> This actually fails consistently for me too in the Kafka integration
>> >>> code. Not timezone related, I think.
>> >
>>
>>
>


Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-22 Thread Tim Hunter
+1 This release passes all tests on the graphframes and tensorframes
packages.

On Wed, Jun 22, 2016 at 7:19 AM, Cody Koeninger  wrote:

> If we're considering backporting changes for the 0.8 kafka
> integration, I am sure there are people who would like to get
>
> https://issues.apache.org/jira/browse/SPARK-10963
>
> into 1.6.x as well
>
> On Wed, Jun 22, 2016 at 7:41 AM, Sean Owen  wrote:
> > Good call, probably worth back-porting, I'll try to do that. I don't
> > think it blocks a release, but would be good to get into a next RC if
> > any.
> >
> > On Wed, Jun 22, 2016 at 11:38 AM, Pete Robbins 
> wrote:
> >> This has failed on our 1.6 stream builds regularly.
> >> (https://issues.apache.org/jira/browse/SPARK-6005) looks fixed in 2.0?
> >>
> >> On Wed, 22 Jun 2016 at 11:15 Sean Owen  wrote:
> >>>
> >>> Oops, one more in the "does anybody else see this" department:
> >>>
> >>> - offset recovery *** FAILED ***
> >>>   recoveredOffsetRanges.forall(((or: (org.apache.spark.streaming.Time,
> >>> Array[org.apache.spark.streaming.kafka.OffsetRange])) =>
> >>>
> >>>
> earlierOffsetRangesAsSets.contains(scala.Tuple2.apply[org.apache.spark.streaming.Time,
> >>>
> >>>
> scala.collection.immutable.Set[org.apache.spark.streaming.kafka.OffsetRange]](or._1,
> >>>
> >>>
> scala.this.Predef.refArrayOps[org.apache.spark.streaming.kafka.OffsetRange](or._2).toSet[org.apache.spark.streaming.kafka.OffsetRange]
> >>> was false Recovered ranges are not the same as the ones generated
> >>> (DirectKafkaStreamSuite.scala:301)
> >>>
> >>> This actually fails consistently for me too in the Kafka integration
> >>> code. Not timezone related, I think.
> >
>
>


Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-22 Thread Sean Owen
Good call, probably worth back-porting, I'll try to do that. I don't
think it blocks a release, but would be good to get into a next RC if
any.

On Wed, Jun 22, 2016 at 11:38 AM, Pete Robbins  wrote:
> This has failed on our 1.6 stream builds regularly.
> (https://issues.apache.org/jira/browse/SPARK-6005) looks fixed in 2.0?
>
> On Wed, 22 Jun 2016 at 11:15 Sean Owen  wrote:
>>
>> Oops, one more in the "does anybody else see this" department:
>>
>> - offset recovery *** FAILED ***
>>   recoveredOffsetRanges.forall(((or: (org.apache.spark.streaming.Time,
>> Array[org.apache.spark.streaming.kafka.OffsetRange])) =>
>>
>> earlierOffsetRangesAsSets.contains(scala.Tuple2.apply[org.apache.spark.streaming.Time,
>>
>> scala.collection.immutable.Set[org.apache.spark.streaming.kafka.OffsetRange]](or._1,
>>
>> scala.this.Predef.refArrayOps[org.apache.spark.streaming.kafka.OffsetRange](or._2).toSet[org.apache.spark.streaming.kafka.OffsetRange]
>> was false Recovered ranges are not the same as the ones generated
>> (DirectKafkaStreamSuite.scala:301)
>>
>> This actually fails consistently for me too in the Kafka integration
>> code. Not timezone related, I think.




Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-22 Thread Pete Robbins
This has failed on our 1.6 stream builds regularly. (
https://issues.apache.org/jira/browse/SPARK-6005) looks fixed in 2.0?

On Wed, 22 Jun 2016 at 11:15 Sean Owen  wrote:

> Oops, one more in the "does anybody else see this" department:
>
> - offset recovery *** FAILED ***
>   recoveredOffsetRanges.forall(((or: (org.apache.spark.streaming.Time,
> Array[org.apache.spark.streaming.kafka.OffsetRange])) =>
>
> earlierOffsetRangesAsSets.contains(scala.Tuple2.apply[org.apache.spark.streaming.Time,
>
> scala.collection.immutable.Set[org.apache.spark.streaming.kafka.OffsetRange]](or._1,
>
> scala.this.Predef.refArrayOps[org.apache.spark.streaming.kafka.OffsetRange](or._2).toSet[org.apache.spark.streaming.kafka.OffsetRange]
> was false Recovered ranges are not the same as the ones generated
> (DirectKafkaStreamSuite.scala:301)
>
> This actually fails consistently for me too in the Kafka integration
> code. Not timezone related, I think.
>
> On Wed, Jun 22, 2016 at 9:02 AM, Sean Owen  wrote:
> > I'm fairly convinced this error and others that appear timestamp
> > related are an environment problem. This test and method have been
> > present for several Spark versions, without change. I reviewed the
> > logic and it seems sound, explicitly setting the time zone correctly.
> > I am not sure why it behaves differently on this machine.
> >
> > I'd give a +1 to this release if nobody else is seeing errors like
> > this. The sigs, hashes, other tests pass for me.
> >
> > On Tue, Jun 21, 2016 at 6:49 PM, Sean Owen  wrote:
> >> UIUtilsSuite:
> >> - formatBatchTime *** FAILED ***
> >>   "2015/05/14 [14]:04:40" did not equal "2015/05/14 [21]:04:40"
> >> (UIUtilsSuite.scala:73)
>
>
>


Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-22 Thread Sean Owen
Oops, one more in the "does anybody else see this" department:

- offset recovery *** FAILED ***
  recoveredOffsetRanges.forall(((or: (org.apache.spark.streaming.Time,
Array[org.apache.spark.streaming.kafka.OffsetRange])) =>
earlierOffsetRangesAsSets.contains(scala.Tuple2.apply[org.apache.spark.streaming.Time,
scala.collection.immutable.Set[org.apache.spark.streaming.kafka.OffsetRange]](or._1,
scala.this.Predef.refArrayOps[org.apache.spark.streaming.kafka.OffsetRange](or._2).toSet[org.apache.spark.streaming.kafka.OffsetRange]
was false Recovered ranges are not the same as the ones generated
(DirectKafkaStreamSuite.scala:301)

This actually fails consistently for me too in the Kafka integration
code. Not timezone related, I think.
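
For readability, the desugared predicate in that failure boils down to roughly this (a sketch of the check, not the suite's exact source):

  // Every recovered (batch time, offset ranges) pair should match one of the
  // sets generated before recovery, ignoring ordering within a batch.
  recoveredOffsetRanges.forall { case (time, ranges) =>
    earlierOffsetRangesAsSets.contains((time, ranges.toSet))
  }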

On Wed, Jun 22, 2016 at 9:02 AM, Sean Owen  wrote:
> I'm fairly convinced this error and others that appear timestamp
> related are an environment problem. This test and method have been
> present for several Spark versions, without change. I reviewed the
> logic and it seems sound, explicitly setting the time zone correctly.
> I am not sure why it behaves differently on this machine.
>
> I'd give a +1 to this release if nobody else is seeing errors like
> this. The sigs, hashes, other tests pass for me.
>
> On Tue, Jun 21, 2016 at 6:49 PM, Sean Owen  wrote:
>> UIUtilsSuite:
>> - formatBatchTime *** FAILED ***
>>   "2015/05/14 [14]:04:40" did not equal "2015/05/14 [21]:04:40"
>> (UIUtilsSuite.scala:73)




Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-22 Thread Sean Owen
I'm fairly convinced this error and others that appear timestamp
related are an environment problem. This test and method have been
present for several Spark versions, without change. I reviewed the
logic and it seems sound, explicitly setting the time zone correctly.
I am not sure why it behaves differently on this machine.

I'd give a +1 to this release if nobody else is seeing errors like
this. The sigs, hashes, other tests pass for me.

On Tue, Jun 21, 2016 at 6:49 PM, Sean Owen  wrote:
> UIUtilsSuite:
> - formatBatchTime *** FAILED ***
>   "2015/05/14 [14]:04:40" did not equal "2015/05/14 [21]:04:40"
> (UIUtilsSuite.scala:73)
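
A standalone Scala illustration of why that comparison is timezone-sensitive (a sketch, not the actual UIUtils code; the epoch value is chosen to reproduce the output above):

  import java.text.SimpleDateFormat
  import java.util.{Date, TimeZone}

  object TzDemo {
    def main(args: Array[String]): Unit = {
      val millis = 1431637480000L // 2015-05-14 21:04:40 UTC
      val fmt = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss")
      fmt.setTimeZone(TimeZone.getTimeZone("America/Los_Angeles"))
      println(fmt.format(new Date(millis))) // 2015/05/14 14:04:40 everywhere
      fmt.setTimeZone(TimeZone.getDefault)
      println(fmt.format(new Date(millis))) // 2015/05/14 21:04:40 on a UTC machine
    }
  }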




Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-21 Thread Shixiong(Ryan) Zhu
Hey Pete,

I just pushed your PR to branch-1.6. As it's not a blocker, it may or may
not be in 1.6.2, depending on whether there will be another RC.

On Tue, Jun 21, 2016 at 1:36 PM, Pete Robbins  wrote:

> It breaks Spark running on machines with fewer than 3 cores/threads, which
> may be rare and is arguably an edge case.
>
> Personally, I like to fix known bugs, and the fact that there are other
> blocking methods in event loops actually makes it worse not to fix the
> ones you know about.
>
> Probably not a blocker to release, though; that's your call.
>
> Cheers,
>
> On Tue, Jun 21, 2016 at 6:40 PM Shixiong(Ryan) Zhu <
> shixi...@databricks.com> wrote:
>
>> Hey Pete,
>>
>> I didn't backport it to 1.6 because it just affects tests in most cases.
>> I'm sure we also have other places calling blocking methods in the event
>> loops, so similar issues are still there even after applying this patch.
>> Hence, I don't think it's a blocker for 1.6.2.
>>
>> On Tue, Jun 21, 2016 at 2:57 AM, Pete Robbins 
>> wrote:
>>
>>> The PR (https://github.com/apache/spark/pull/13055) to fix
>>> https://issues.apache.org/jira/browse/SPARK-15262 was applied to 1.6.2;
>>> however, that fix caused another issue,
>>> https://issues.apache.org/jira/browse/SPARK-15606, whose fix (
>>> https://github.com/apache/spark/pull/13355) has not been backported to
>>> the 1.6 branch, so I'm now seeing the same failure in 1.6.2.
>>>
>>> Cheers,
>>>
>>> On Mon, 20 Jun 2016 at 05:25 Reynold Xin  wrote:
>>>
 Please vote on releasing the following candidate as Apache Spark
 version 1.6.2. The vote is open until Wednesday, June 22, 2016 at 22:00 PDT
 and passes if a majority of at least 3 +1 PMC votes are cast.

 [ ] +1 Release this package as Apache Spark 1.6.2
 [ ] -1 Do not release this package because ...


 The tag to be voted on is v1.6.2-rc2
 (54b1121f351f056d6b67d2bb4efe0d553c0f7482)

 The release files, including signatures, digests, etc. can be found at:
 http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-bin/

 Release artifacts are signed with the following key:
 https://people.apache.org/keys/committer/pwendell.asc

 The staging repository for this release can be found at:
 https://repository.apache.org/content/repositories/orgapachespark-1186/

 The documentation corresponding to this release can be found at:
 http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-docs/


 ===
 == How can I help test this release? ==
 ===
 If you are a Spark user, you can help us test this release by taking an
 existing Spark workload and running on this release candidate, then
 reporting any regressions from 1.6.1.

 ===
 == What justifies a -1 vote for this release? ==
 ===
 This is a maintenance release in the 1.6.x series.  Bugs already
 present in 1.6.1, missing features, or bugs related to new features will
 not necessarily block this release.




>>


Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-21 Thread Pete Robbins
It breaks Spark running on machines with fewer than 3 cores/threads, which
may be rare and is arguably an edge case.

Personally, I like to fix known bugs, and the fact that there are other
blocking methods in event loops actually makes it worse not to fix the
ones you know about.

Probably not a blocker to release, though; that's your call.

Cheers,
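
For context, the hazard under discussion looks roughly like this in miniature (a generic sketch, not Spark's actual EventLoop code): a handler that blocks on the only loop thread starves the very event that would unblock it.

  import java.util.concurrent.{Executors, LinkedBlockingQueue, TimeUnit}

  object BlockingLoopDemo {
    def main(args: Array[String]): Unit = {
      val loop = Executors.newSingleThreadExecutor() // stand-in for a one-thread event loop
      val replies = new LinkedBlockingQueue[String]()
      loop.submit(new Runnable {
        def run(): Unit = {
          // BAD: blocks the only loop thread; the task that would produce the
          // reply is queued behind us, so this poll can only time out.
          val reply = replies.poll(2, TimeUnit.SECONDS)
          println(s"handler got: $reply") // prints "handler got: null"
        }
      })
      loop.submit(new Runnable { def run(): Unit = replies.put("pong") })
      loop.shutdown()
      loop.awaitTermination(5, TimeUnit.SECONDS)
    }
  }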

On Tue, Jun 21, 2016 at 6:40 PM Shixiong(Ryan) Zhu 
wrote:

> Hey Pete,
>
> I didn't backport it to 1.6 because it just affects tests in most cases.
> I'm sure we also have other places calling blocking methods in the event
> loops, so similar issues are still there even after applying this patch.
> Hence, I don't think it's a blocker for 1.6.2.
>
> On Tue, Jun 21, 2016 at 2:57 AM, Pete Robbins  wrote:
>
>> The PR (https://github.com/apache/spark/pull/13055) to fix
>> https://issues.apache.org/jira/browse/SPARK-15262 was applied to 1.6.2;
>> however, that fix caused another issue,
>> https://issues.apache.org/jira/browse/SPARK-15606, whose fix (
>> https://github.com/apache/spark/pull/13355) has not been backported to
>> the 1.6 branch, so I'm now seeing the same failure in 1.6.2.
>>
>> Cheers,
>>
>> On Mon, 20 Jun 2016 at 05:25 Reynold Xin  wrote:
>>
>>> Please vote on releasing the following candidate as Apache Spark version
>>> 1.6.2. The vote is open until Wednesday, June 22, 2016 at 22:00 PDT and
>>> passes if a majority of at least 3 +1 PMC votes are cast.
>>>
>>> [ ] +1 Release this package as Apache Spark 1.6.2
>>> [ ] -1 Do not release this package because ...
>>>
>>>
>>> The tag to be voted on is v1.6.2-rc2
>>> (54b1121f351f056d6b67d2bb4efe0d553c0f7482)
>>>
>>> The release files, including signatures, digests, etc. can be found at:
>>> http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-bin/
>>>
>>> Release artifacts are signed with the following key:
>>> https://people.apache.org/keys/committer/pwendell.asc
>>>
>>> The staging repository for this release can be found at:
>>> https://repository.apache.org/content/repositories/orgapachespark-1186/
>>>
>>> The documentation corresponding to this release can be found at:
>>> http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-docs/
>>>
>>>
>>> ===
>>> == How can I help test this release? ==
>>> ===
>>> If you are a Spark user, you can help us test this release by taking an
>>> existing Spark workload and running on this release candidate, then
>>> reporting any regressions from 1.6.1.
>>>
>>> ===
>>> == What justifies a -1 vote for this release? ==
>>> ===
>>> This is a maintenance release in the 1.6.x series.  Bugs already present
>>> in 1.6.1, missing features, or bugs related to new features will not
>>> necessarily block this release.
>>>
>>>
>>>
>>>
>


Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-21 Thread Sean Owen
Nice one; yeah, indeed I was doing an incremental build. Not a blocker.
I'll have a look into the others, though I suspect they're problems
with tests rather than production code.

On Tue, Jun 21, 2016 at 6:53 PM, Marcelo Vanzin  wrote:
> On Tue, Jun 21, 2016 at 10:49 AM, Sean Owen  wrote:
>> I'm getting some errors building on Ubuntu 16 + Java 7. First is one
>> that may just be down to a Scala bug:
>>
>> [ERROR] bad symbolic reference. A signature in WebUI.class refers to
>> term eclipse
>> in package org which is not available.
>
> This is probably https://issues.apache.org/jira/browse/SPARK-13780. It
> should only affect incremental builds ("mvn -rf ..." or "mvn -pl
> ..."), not clean builds. Not sure about the other ones.
>
> --
> Marcelo




Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-21 Thread Marcelo Vanzin
On Tue, Jun 21, 2016 at 10:49 AM, Sean Owen  wrote:
> I'm getting some errors building on Ubuntu 16 + Java 7. First is one
> that may just be down to a Scala bug:
>
> [ERROR] bad symbolic reference. A signature in WebUI.class refers to
> term eclipse
> in package org which is not available.

This is probably https://issues.apache.org/jira/browse/SPARK-13780. It
should only affect incremental builds ("mvn -rf ..." or "mvn -pl
..."), not clean builds. Not sure about the other ones.

-- 
Marcelo




Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-21 Thread Sean Owen
I'm getting some errors building on Ubuntu 16 + Java 7. First is one
that may just be down to a Scala bug:

[ERROR] bad symbolic reference. A signature in WebUI.class refers to
term eclipse
in package org which is not available.
It may be completely missing from the current classpath, or the version on
the classpath might be incompatible with the version used when
compiling WebUI.class.
[ERROR] bad symbolic reference. A signature in WebUI.class refers to term jetty
in value org.eclipse which is not available.
It may be completely missing from the current classpath, or the version on
the classpath might be incompatible with the version used when
compiling WebUI.class.

But I'm seeing some consistent timezone-related failures, from core:

UIUtilsSuite:
- formatBatchTime *** FAILED ***
  "2015/05/14 [14]:04:40" did not equal "2015/05/14 [21]:04:40"
(UIUtilsSuite.scala:73)


and several from Spark SQL, like:


- udf_unix_timestamp *** FAILED ***
  Results do not match for udf_unix_timestamp:
  == Parsed Logical Plan ==
  'Project [unresolvedalias(2009-03-20
11:30:01),unresolvedalias('unix_timestamp(2009-03-20 11:30:01))]
  +- 'UnresolvedRelation `oneline`, None

  == Analyzed Logical Plan ==
  _c0: string, _c1: bigint
  Project [2009-03-20 11:30:01 AS _c0#122914,unixtimestamp(2009-03-20
11:30:01,yyyy-MM-dd HH:mm:ss) AS _c1#122915L]
  +- MetastoreRelation default, oneline, None

  == Optimized Logical Plan ==
  Project [2009-03-20 11:30:01 AS _c0#122914,1237548601 AS _c1#122915L]
  +- MetastoreRelation default, oneline, None

  == Physical Plan ==
  Project [2009-03-20 11:30:01 AS _c0#122914,1237548601 AS _c1#122915L]
  +- HiveTableScan MetastoreRelation default, oneline, None
  _c0 _c1
  !== HIVE - 1 row(s) ==   == CATALYST - 1 row(s) ==
  !2009-03-20 11:30:01 1237573801   2009-03-20 11:30:01 1237548601
(HiveComparisonTest.scala:458)


I'll start looking into them. These could be real, if possibly minor,
bugs, because I presume most of the testing happens on machines in the
PDT timezone instead of UTC? UTC, at least, is the timezone of the
machine I'm testing on.
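
For what it's worth, the two epoch values in the udf_unix_timestamp failure differ by exactly the Pacific offset, which supports the timezone theory:

  1237573801 - 1237548601 = 25200 s = 7 h (UTC-7, i.e. PDT)
  1237548601 is "2009-03-20 11:30:01" parsed as UTC; 1237573801 is the
  same wall-clock time parsed as PDT.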

On Mon, Jun 20, 2016 at 5:24 AM, Reynold Xin  wrote:
> Please vote on releasing the following candidate as Apache Spark version
> 1.6.2. The vote is open until Wednesday, June 22, 2016 at 22:00 PDT and
> passes if a majority of at least 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Spark 1.6.2
> [ ] -1 Do not release this package because ...
>
>
> The tag to be voted on is v1.6.2-rc2
> (54b1121f351f056d6b67d2bb4efe0d553c0f7482)
>
> The release files, including signatures, digests, etc. can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-bin/
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/pwendell.asc
>
> The staging repository for this release can be found at:
> https://repository.apache.org/content/repositories/orgapachespark-1186/
>
> The documentation corresponding to this release can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-docs/
>
>
> ===
> == How can I help test this release? ==
> ===
> If you are a Spark user, you can help us test this release by taking an
> existing Spark workload and running on this release candidate, then
> reporting any regressions from 1.6.1.
>
> ===
> == What justifies a -1 vote for this release? ==
> ===
> This is a maintenance release in the 1.6.x series.  Bugs already present in
> 1.6.1, missing features, or bugs related to new features will not
> necessarily block this release.
>
>
>




Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-21 Thread Shixiong(Ryan) Zhu
Hey Pete,

I didn't backport it to 1.6 because it just affects tests in most cases.
I'm sure we also have other places calling blocking methods in the event
loops, so similar issues are still there even after applying this patch.
Hence, I don't think it's a blocker for 1.6.2.

On Tue, Jun 21, 2016 at 2:57 AM, Pete Robbins  wrote:

> The PR (https://github.com/apache/spark/pull/13055) to fix
> https://issues.apache.org/jira/browse/SPARK-15262 was applied to 1.6.2;
> however, that fix caused another issue,
> https://issues.apache.org/jira/browse/SPARK-15606, whose fix (
> https://github.com/apache/spark/pull/13355) has not been backported to
> the 1.6 branch, so I'm now seeing the same failure in 1.6.2.
>
> Cheers,
>
> On Mon, 20 Jun 2016 at 05:25 Reynold Xin  wrote:
>
>> Please vote on releasing the following candidate as Apache Spark version
>> 1.6.2. The vote is open until Wednesday, June 22, 2016 at 22:00 PDT and
>> passes if a majority of at least 3 +1 PMC votes are cast.
>>
>> [ ] +1 Release this package as Apache Spark 1.6.2
>> [ ] -1 Do not release this package because ...
>>
>>
>> The tag to be voted on is v1.6.2-rc2
>> (54b1121f351f056d6b67d2bb4efe0d553c0f7482)
>>
>> The release files, including signatures, digests, etc. can be found at:
>> http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-bin/
>>
>> Release artifacts are signed with the following key:
>> https://people.apache.org/keys/committer/pwendell.asc
>>
>> The staging repository for this release can be found at:
>> https://repository.apache.org/content/repositories/orgapachespark-1186/
>>
>> The documentation corresponding to this release can be found at:
>> http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-docs/
>>
>>
>> ===
>> == How can I help test this release? ==
>> ===
>> If you are a Spark user, you can help us test this release by taking an
>> existing Spark workload and running on this release candidate, then
>> reporting any regressions from 1.6.1.
>>
>> ===
>> == What justifies a -1 vote for this release? ==
>> ===
>> This is a maintenance release in the 1.6.x series.  Bugs already present
>> in 1.6.1, missing features, or bugs related to new features will not
>> necessarily block this release.
>>
>>
>>
>>


Re: [VOTE] Release Apache Spark 1.6.2 (RC2)

2016-06-21 Thread Pete Robbins
The PR (https://github.com/apache/spark/pull/13055) to fix
https://issues.apache.org/jira/browse/SPARK-15262 was applied to 1.6.2;
however, that fix caused another issue,
https://issues.apache.org/jira/browse/SPARK-15606, whose fix (
https://github.com/apache/spark/pull/13355) has not been backported to the
1.6 branch, so I'm now seeing the same failure in 1.6.2.

Cheers,
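
For reference, the missing backport would amount to something like this on a committer's checkout (the commit reference is a placeholder for whatever https://github.com/apache/spark/pull/13355 merged as):

  git checkout branch-1.6
  git cherry-pick <commit-of-pr-13355>   # resolve conflicts, then re-run the affected suite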

On Mon, 20 Jun 2016 at 05:25 Reynold Xin  wrote:

> Please vote on releasing the following candidate as Apache Spark version
> 1.6.2. The vote is open until Wednesday, June 22, 2016 at 22:00 PDT and
> passes if a majority of at least 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Spark 1.6.2
> [ ] -1 Do not release this package because ...
>
>
> The tag to be voted on is v1.6.2-rc2
> (54b1121f351f056d6b67d2bb4efe0d553c0f7482)
>
> The release files, including signatures, digests, etc. can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-bin/
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/pwendell.asc
>
> The staging repository for this release can be found at:
> https://repository.apache.org/content/repositories/orgapachespark-1186/
>
> The documentation corresponding to this release can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-1.6.2-rc2-docs/
>
>
> ===
> == How can I help test this release? ==
> ===
> If you are a Spark user, you can help us test this release by taking an
> existing Spark workload and running on this release candidate, then
> reporting any regressions from 1.6.1.
>
> ===
> == What justifies a -1 vote for this release? ==
> ===
> This is a maintenance release in the 1.6.x series.  Bugs already present
> in 1.6.1, missing features, or bugs related to new features will not
> necessarily block this release.
>
>
>
>