Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-03 Thread Patrick Wendell
I'm cancelling this release in favor of RC4. Happy voting!

On Tue, Sep 2, 2014 at 9:55 PM, Patrick Wendell pwend...@gmail.com wrote:
 Thanks everyone for voting on this. Two minor issues (one a
 blocker) were found that warrant cutting a new RC. For those who voted
 +1 on this release, I'd encourage you to +1 rc4 when it comes out
 unless you have been testing issues specific to the EC2 scripts. This
 will move the release process along.

 SPARK-3332 - Issue with tagging in EC2 scripts
 SPARK-3358 - Issue with regression for m3.XX instances

 - Patrick

 On Tue, Sep 2, 2014 at 6:55 PM, Nicholas Chammas
 nicholas.cham...@gmail.com wrote:
 In light of the discussion on SPARK-, I'll revoke my -1 vote. The
 issue does not appear to be serious.


 On Sun, Aug 31, 2014 at 5:14 PM, Nicholas Chammas
 nicholas.cham...@gmail.com wrote:

 -1: I believe I've found a regression from 1.0.2. The report is captured
 in SPARK-.


 On Sat, Aug 30, 2014 at 6:07 PM, Patrick Wendell pwend...@gmail.com
 wrote:

 Please vote on releasing the following candidate as Apache Spark version
 1.1.0!

 The tag to be voted on is v1.1.0-rc3 (commit b2d0493b):

 https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=b2d0493b223c5f98a593bb6d7372706cc02bebad

 The release files, including signatures, digests, etc. can be found at:
 http://people.apache.org/~pwendell/spark-1.1.0-rc3/

 Release artifacts are signed with the following key:
 https://people.apache.org/keys/committer/pwendell.asc

 The staging repository for this release can be found at:
 https://repository.apache.org/content/repositories/orgapachespark-1030/

 The documentation corresponding to this release can be found at:
 http://people.apache.org/~pwendell/spark-1.1.0-rc3-docs/

 Please vote on releasing this package as Apache Spark 1.1.0!

 The vote is open until Tuesday, September 02, at 23:07 UTC and passes if
 a majority of at least 3 +1 PMC votes are cast.

 [ ] +1 Release this package as Apache Spark 1.1.0
 [ ] -1 Do not release this package because ...

 To learn more about Apache Spark, please see
 http://spark.apache.org/

 == Regressions fixed since RC1 ==
 - Build issue for SQL support:
 https://issues.apache.org/jira/browse/SPARK-3234
 - EC2 script version bump to 1.1.0.

 == What justifies a -1 vote for this release? ==
 This vote is happening very late into the QA period compared with
 previous votes, so -1 votes should only occur for significant
 regressions from 1.0.2. Bugs already present in 1.0.X will not block
 this release.

 == What default changes should I be aware of? ==
 1. The default value of spark.io.compression.codec is now "snappy"
 -- Old behavior can be restored by switching to "lzf"

 2. PySpark now performs external spilling during aggregations.
 -- Old behavior can be restored by setting spark.shuffle.spill to
 false.
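
For anyone who wants the old defaults back, here is a minimal sketch of a conf/spark-defaults.conf fragment (assuming the standard properties-file mechanism; the property names and values are taken directly from the notes above):

```properties
# Hypothetical fragment restoring the pre-1.1.0 defaults described above
spark.io.compression.codec   lzf
spark.shuffle.spill          false
```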

 -
 To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
 For additional commands, e-mail: dev-h...@spark.apache.org







Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-02 Thread Will Benton
Zongheng pointed out in my SPARK-3329 PR 
(https://github.com/apache/spark/pull/2220) that Aaron had already fixed this 
issue but that it had gotten inadvertently clobbered by another patch.  I don't 
know how the project handles this kind of problem, but I've rewritten my 
SPARK-3329 branch to cherry-pick Aaron's fix (also fixing a merge conflict and 
handling a test case that it didn't cover).

The other spurious, ordering-related test-suite failures I've seen were in
DESCRIBE FUNCTION EXTENDED for functions with lists of synonyms (e.g.
STDDEV). I can't reproduce those now but will take another look later this
week.



best,
wb

- Original Message -
 From: Sean Owen so...@cloudera.com
 To: Will Benton wi...@redhat.com
 Cc: Patrick Wendell pwend...@gmail.com, dev@spark.apache.org
 Sent: Sunday, August 31, 2014 12:18:42 PM
 Subject: Re: [VOTE] Release Apache Spark 1.1.0 (RC3)
 
 Fantastic. As it happens, I just fixed up Mahout's tests for Java 8
 and observed a lot of the same type of failure.
 
 I'm about to submit PRs for the two issues I identified. AFAICT these
 3 then cover the failures I mentioned:
 
 https://issues.apache.org/jira/browse/SPARK-3329
 https://issues.apache.org/jira/browse/SPARK-3330
 https://issues.apache.org/jira/browse/SPARK-3331
 
 I'd argue that none necessarily block a release, since they just
 represent a problem with test-only code in Java 8, with the test-only
 context of Jenkins and multiple profiles, and with a trivial
 configuration in a style check for Python. Should be fixed but none
 indicate a bug in the release.
 
 On Sun, Aug 31, 2014 at 6:11 PM, Will Benton wi...@redhat.com wrote:
  - Original Message -
 
  dev/run-tests fails two tests (1 Hive, 1 Kafka Streaming) for me
  locally on 1.1.0-rc3. Does anyone else see that? It may be my env.
  Although I still see the Hive failure on Debian too:
 
  [info] - SET commands semantics for a HiveContext *** FAILED ***
  [info]   Expected Array(spark.sql.key.usedfortestonly=test.val.0,
  spark.sql.key.usedfortestonlyspark.sql.key.usedfortestonly=test.val.0test.val.0),
  but got
  Array(spark.sql.key.usedfortestonlyspark.sql.key.usedfortestonly=test.val.0test.val.0,
  spark.sql.key.usedfortestonly=test.val.0) (HiveQuerySuite.scala:541)
 
  I've seen this error before.  (In particular, I've seen it on my OS X
  machine using Oracle JDK 8 but not on Fedora using OpenJDK.)  I've also
  seen similar errors in topic branches (but not on master) that seem to
  indicate that tests depend on sets of pairs arriving from Hive in a
  particular order; it seems that this isn't a safe assumption.
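
A minimal sketch (not the actual SPARK-3329 patch) of what an order-insensitive check can look like, using illustrative key=value pairs:

```python
# Sketch: compare key=value pairs as a multiset rather than an ordered
# array, since rows coming back from Hive have no guaranteed order.
# The pairs below are illustrative, not the real test data.
from collections import Counter

expected = ["spark.sql.key.usedfortestonly=test.val.0",
            "spark.sql.otherkey=test.val.1"]
got = list(reversed(expected))  # same pairs, different arrival order

assert expected != got                    # naive ordered comparison is brittle
assert Counter(expected) == Counter(got)  # multiset comparison is order-proof
```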
 
  I just submitted a (trivial) PR to fix this spurious failure:
  https://github.com/apache/spark/pull/2220
 
 
  best,
  wb
 



Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-02 Thread Will Benton
+1

Tested Scala/MLlib apps on Fedora 20 (OpenJDK 7) and OS X 10.9 (Oracle JDK 8).


best,
wb


- Original Message -
 From: Patrick Wendell pwend...@gmail.com
 To: dev@spark.apache.org
 Sent: Saturday, August 30, 2014 5:07:52 PM
 Subject: [VOTE] Release Apache Spark 1.1.0 (RC3)
 



Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-02 Thread Cheng Lian
+1

   - Tested Thrift server and SQL CLI locally on OSX 10.9.
   - Checked datanucleus dependencies in distribution tarball built by
   make-distribution.sh without SPARK_HIVE defined.

On Tue, Sep 2, 2014 at 2:30 PM, Will Benton wi...@redhat.com wrote:

 +1

 Tested Scala/MLlib apps on Fedora 20 (OpenJDK 7) and OS X 10.9 (Oracle JDK
 8).


 best,
 wb






Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-02 Thread Reynold Xin
+1


On Tue, Sep 2, 2014 at 3:08 PM, Cheng Lian lian.cs@gmail.com wrote:

 +1

- Tested Thrift server and SQL CLI locally on OSX 10.9.
- Checked datanucleus dependencies in distribution tarball built by
make-distribution.sh without SPARK_HIVE defined.



Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-02 Thread Kan Zhang
+1

Verified PySpark InputFormat/OutputFormat examples.


On Tue, Sep 2, 2014 at 4:10 PM, Reynold Xin r...@databricks.com wrote:

 +1


 



Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-02 Thread Michael Armbrust
+1


On Tue, Sep 2, 2014 at 5:18 PM, Matei Zaharia matei.zaha...@gmail.com
wrote:

 +1

 Tested on Mac OS X.

 Matei




Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-02 Thread Denny Lee
+1  Tested on Mac OSX, Thrift Server, SparkSQL


On September 2, 2014 at 17:29:29, Michael Armbrust (mich...@databricks.com) 
wrote:

+1  


  


RE: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-02 Thread Sean McNamara
+1

From: Patrick Wendell [pwend...@gmail.com]
Sent: Saturday, August 30, 2014 4:08 PM
To: dev@spark.apache.org
Subject: [VOTE] Release Apache Spark 1.1.0 (RC3)




RE: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-02 Thread Jeremy Freeman
+1



--
View this message in context: 
http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Release-Apache-Spark-1-1-0-RC3-tp8147p8211.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.




Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-02 Thread Paolo Platter
+1
Tested on HDP 2.1 Sandbox, Thrift Server with Simba Shark ODBC

Paolo

From: Jeremy Freeman (freeman.jer...@gmail.com)
Sent: Wednesday, September 3, 2014, 02:34
To: d...@spark.incubator.apache.org

+1







Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-02 Thread Nicholas Chammas
In light of the discussion on SPARK-, I'll revoke my -1 vote. The
issue does not appear to be serious.


On Sun, Aug 31, 2014 at 5:14 PM, Nicholas Chammas 
nicholas.cham...@gmail.com wrote:

 -1: I believe I've found a regression from 1.0.2. The report is captured
 in SPARK- https://issues.apache.org/jira/browse/SPARK-.


 On Sat, Aug 30, 2014 at 6:07 PM, Patrick Wendell pwend...@gmail.com
 wrote:




Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-02 Thread Patrick Wendell
Thanks everyone for voting on this. Two minor issues (one a
blocker) were found that warrant cutting a new RC. For those who voted
+1 on this release, I'd encourage you to +1 rc4 when it comes out
unless you have been testing issues specific to the EC2 scripts. This
will move the release process along.

SPARK-3332 - Issue with tagging in EC2 scripts
SPARK-3358 - Issue with regression for m3.XX instances

- Patrick

On Tue, Sep 2, 2014 at 6:55 PM, Nicholas Chammas
nicholas.cham...@gmail.com wrote:
 In light of the discussion on SPARK-, I'll revoke my -1 vote. The
 issue does not appear to be serious.


 On Sun, Aug 31, 2014 at 5:14 PM, Nicholas Chammas
 nicholas.cham...@gmail.com wrote:

 -1: I believe I've found a regression from 1.0.2. The report is captured
 in SPARK-.


 On Sat, Aug 30, 2014 at 6:07 PM, Patrick Wendell pwend...@gmail.com
 wrote:

 Please vote on releasing the following candidate as Apache Spark version
 1.1.0!

 The tag to be voted on is v1.1.0-rc3 (commit b2d0493b):

 https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=b2d0493b223c5f98a593bb6d7372706cc02bebad

 The release files, including signatures, digests, etc. can be found at:
 http://people.apache.org/~pwendell/spark-1.1.0-rc3/

 Release artifacts are signed with the following key:
 https://people.apache.org/keys/committer/pwendell.asc

 The staging repository for this release can be found at:
 https://repository.apache.org/content/repositories/orgapachespark-1030/

 The documentation corresponding to this release can be found at:
 http://people.apache.org/~pwendell/spark-1.1.0-rc3-docs/

 Please vote on releasing this package as Apache Spark 1.1.0!

 The vote is open until Tuesday, September 02, at 23:07 UTC and passes if
 a majority of at least 3 +1 PMC votes are cast.

 [ ] +1 Release this package as Apache Spark 1.1.0
 [ ] -1 Do not release this package because ...

 To learn more about Apache Spark, please see
 http://spark.apache.org/

 == Regressions fixed since RC1 ==
 - Build issue for SQL support:
 https://issues.apache.org/jira/browse/SPARK-3234
 - EC2 script version bump to 1.1.0.

 == What justifies a -1 vote for this release? ==
 This vote is happening very late into the QA period compared with
 previous votes, so -1 votes should only occur for significant
 regressions from 1.0.2. Bugs already present in 1.0.X will not block
 this release.

 == What default changes should I be aware of? ==
 1. The default value of spark.io.compression.codec is now snappy
 -- Old behavior can be restored by switching to lzf

 2. PySpark now performs external spilling during aggregations.
 -- Old behavior can be restored by setting spark.shuffle.spill to
 false.





-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-01 Thread Prashant Sharma
An easier and quicker way to build Spark is

sbt/sbt assembly/assembly

Prashant Sharma
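For context, a hedged sketch of such targeted sbt invocations (the `core/package` and `projects` commands are assumptions about the Spark 1.1 sbt layout, not taken from this thread):

```shell
# Build the full assembly jar (the command quoted in this thread):
sbt/sbt assembly/assembly

# Possibly quicker while iterating: package a single subproject, skipping
# examples/external (subproject names are assumptions -- confirm with the
# next command):
sbt/sbt core/package

# List the sbt subprojects actually defined in your checkout:
sbt/sbt projects
```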




On Mon, Sep 1, 2014 at 8:40 PM, Nicholas Chammas nicholas.cham...@gmail.com
 wrote:

 If this is not a confirmed regression from 1.0.2, I think it's better to
 report it in a separate thread or JIRA.

 I believe serious regressions are generally the only reason to block a new
 release. Otherwise, if this is an old issue, it should be handled
 separately.

 On Monday, September 1, 2014, chutiumteng@gmail.com wrote:

  I didn't try it with 1.0.2.
 
  It always takes too long to build the Spark assembly jars... more than 20 min.
 
  [info] Packaging
 
 
 /mnt/some-nfs/common/spark/assembly/target/scala-2.10/spark-assembly-1.1.0-SNAPSHOT-hadoop1.0.3-mapr-3.0.3.jar
  ...
  [info] Packaging
 
 
 /mnt/some-nfs/common/spark/examples/target/scala-2.10/spark-examples-1.1.0-SNAPSHOT-hadoop1.0.3-mapr-3.0.3.jar
  ...
  [info] Done packaging.
  [info] Done packaging.
  [success] Total time: 1582 s, completed Sep 1, 2014 1:39:21 PM
 
  Is there an easy way to exclude some modules such as spark/examples or
  spark/external?
 
 
 
  --
  View this message in context:
 
 http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Release-Apache-Spark-1-1-0-RC3-tp8147p8163.html
  Sent from the Apache Spark Developers List mailing list archive at
  Nabble.com.
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
  For additional commands, e-mail: dev-h...@spark.apache.org
 
 



Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-09-01 Thread Andrew Or
+1. Tested all the basic applications under both deploy modes (where
applicable) in the following environments:

- locally on OSX 10.9
- locally on Windows 8.1
- standalone cluster
- yarn cluster built with Hadoop 2.4

On this front I have observed no regressions, and verified that
standalone-cluster mode is now fixed.



2014-09-01 9:27 GMT-07:00 Prashant Sharma scrapco...@gmail.com:

 An easier and quicker way to build Spark is

 sbt/sbt assembly/assembly

 Prashant Sharma




 On Mon, Sep 1, 2014 at 8:40 PM, Nicholas Chammas 
 nicholas.cham...@gmail.com
  wrote:

  If this is not a confirmed regression from 1.0.2, I think it's better to
  report it in a separate thread or JIRA.
 
  I believe serious regressions are generally the only reason to block a
 new
  release. Otherwise, if this is an old issue, it should be handled
  separately.
 
  On Monday, September 1, 2014, chutiumteng@gmail.com wrote:
 
   I didn't try it with 1.0.2.
  
   It always takes too long to build the Spark assembly jars... more than
  20 min.
  
   [info] Packaging
  
  
 
 /mnt/some-nfs/common/spark/assembly/target/scala-2.10/spark-assembly-1.1.0-SNAPSHOT-hadoop1.0.3-mapr-3.0.3.jar
   ...
   [info] Packaging
  
  
 
 /mnt/some-nfs/common/spark/examples/target/scala-2.10/spark-examples-1.1.0-SNAPSHOT-hadoop1.0.3-mapr-3.0.3.jar
   ...
   [info] Done packaging.
   [info] Done packaging.
   [success] Total time: 1582 s, completed Sep 1, 2014 1:39:21 PM
  
   Is there an easy way to exclude some modules such as spark/examples or
    spark/external?
  
  
  
   --
   View this message in context:
  
 
 http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Release-Apache-Spark-1-1-0-RC3-tp8147p8163.html
   Sent from the Apache Spark Developers List mailing list archive at
   Nabble.com.
  
   -
   To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
   For additional commands, e-mail: dev-h...@spark.apache.org
  
  
 



Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-08-31 Thread Sean Owen
All the signatures are correct. The licensing all looks fine. The
source builds fine.

Now, let me ask about unit tests, since I had a more detailed look,
which I should have done before.


dev/run-tests fails two tests (1 Hive, 1 Kafka Streaming) for me
locally on 1.1.0-rc3. Does anyone else see that? It may be my env.
Although I still see the Hive failure on Debian too:

[info] - SET commands semantics for a HiveContext *** FAILED ***
[info]   Expected Array(spark.sql.key.usedfortestonly=test.val.0,
spark.sql.key.usedfortestonlyspark.sql.key.usedfortestonly=test.val.0test.val.0),
but got 
Array(spark.sql.key.usedfortestonlyspark.sql.key.usedfortestonly=test.val.0test.val.0,
spark.sql.key.usedfortestonly=test.val.0) (HiveQuerySuite.scala:541)


Python lint checks fail for files in python/build/py4j. These aren't
Spark files and are only present in this location in the release. The
check should simply be updated later to ignore this. Not a blocker.


Evidently, the SBT tests pass, usually, in master:
https://amplab.cs.berkeley.edu/jenkins/view/Spark/job/SparkPullRequestBuilder/
But Maven tests have not passed in master for a long time:
https://amplab.cs.berkeley.edu/jenkins/view/Spark/

I can reproduce this with Maven for 1.1.0-rc3. It feels funny to ship
with a repeatable Maven build failure, since Maven is the build of
record for release. Whatever is being tested is probably OK since SBT
passes, so it need not block release. I'll look for a fix as well.

A simple sbt test always fails for me, and that just may be because
the build is now only meaningful with further configuration. SBT tests
are mostly passing if not consistently for all profiles:
https://amplab.cs.berkeley.edu/jenkins/view/Spark/  These also sort of
feel funny, although nothing seems like an outright blocker.

I guess I'll add a non-binding +0 -- none of these is necessarily a
blocker, but they add up to a bit of an iffy feeling about the state of
tests in the context of a release.

On Sat, Aug 30, 2014 at 11:07 PM, Patrick Wendell pwend...@gmail.com wrote:
 Please vote on releasing the following candidate as Apache Spark version 
 1.1.0!

 The tag to be voted on is v1.1.0-rc3 (commit b2d0493b):
 https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=b2d0493b223c5f98a593bb6d7372706cc02bebad

 The release files, including signatures, digests, etc. can be found at:
 http://people.apache.org/~pwendell/spark-1.1.0-rc3/

 Release artifacts are signed with the following key:
 https://people.apache.org/keys/committer/pwendell.asc

 The staging repository for this release can be found at:
 https://repository.apache.org/content/repositories/orgapachespark-1030/

 The documentation corresponding to this release can be found at:
 http://people.apache.org/~pwendell/spark-1.1.0-rc3-docs/

 Please vote on releasing this package as Apache Spark 1.1.0!

 The vote is open until Tuesday, September 02, at 23:07 UTC and passes if
 a majority of at least 3 +1 PMC votes are cast.

 [ ] +1 Release this package as Apache Spark 1.1.0
 [ ] -1 Do not release this package because ...

 To learn more about Apache Spark, please see
 http://spark.apache.org/

 == Regressions fixed since RC1 ==
 - Build issue for SQL support: 
 https://issues.apache.org/jira/browse/SPARK-3234
 - EC2 script version bump to 1.1.0.

 == What justifies a -1 vote for this release? ==
 This vote is happening very late into the QA period compared with
 previous votes, so -1 votes should only occur for significant
 regressions from 1.0.2. Bugs already present in 1.0.X will not block
 this release.

 == What default changes should I be aware of? ==
 1. The default value of spark.io.compression.codec is now snappy
 -- Old behavior can be restored by switching to lzf

 2. PySpark now performs external spilling during aggregations.
 -- Old behavior can be restored by setting spark.shuffle.spill to false.



-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-08-31 Thread Will Benton
- Original Message -

 dev/run-tests fails two tests (1 Hive, 1 Kafka Streaming) for me
 locally on 1.1.0-rc3. Does anyone else see that? It may be my env.
 Although I still see the Hive failure on Debian too:
 
 [info] - SET commands semantics for a HiveContext *** FAILED ***
 [info]   Expected Array(spark.sql.key.usedfortestonly=test.val.0,
 spark.sql.key.usedfortestonlyspark.sql.key.usedfortestonly=test.val.0test.val.0),
 but got
 Array(spark.sql.key.usedfortestonlyspark.sql.key.usedfortestonly=test.val.0test.val.0,
 spark.sql.key.usedfortestonly=test.val.0) (HiveQuerySuite.scala:541)

I've seen this error before.  (In particular, I've seen it on my OS X machine 
using Oracle JDK 8 but not on Fedora using OpenJDK.)  I've also seen similar 
errors in topic branches (but not on master) that seem to indicate that tests 
depend on sets of pairs arriving from Hive in a particular order; it seems that 
this isn't a safe assumption.
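The ordering pitfall can be shown with a minimal sketch (plain Python using the literal pairs from the failure message; this is an illustration, not the actual HiveQuerySuite code):

```python
# The two key=value pairs from the reported failure.
expected = [
    "spark.sql.key.usedfortestonly=test.val.0",
    "spark.sql.key.usedfortestonlyspark.sql.key.usedfortestonly="
    "test.val.0test.val.0",
]

# A different JVM or JDK may return the same pairs in a different order.
got = list(reversed(expected))

# An order-sensitive comparison fails spuriously:
assert got != expected

# An order-insensitive comparison accepts any permutation of the pairs:
assert sorted(got) == sorted(expected)
```

Making the suite compare sorted (or set-valued) results removes the dependence on Hive's iteration order.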

I just submitted a (trivial) PR to fix this spurious failure:  
https://github.com/apache/spark/pull/2220


best,
wb

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-08-31 Thread Sean Owen
Fantastic. As it happens, I just fixed up Mahout's tests for Java 8
and observed a lot of the same type of failure.

I'm about to submit PRs for the two issues I identified. AFAICT these
3 then cover the failures I mentioned:

https://issues.apache.org/jira/browse/SPARK-3329
https://issues.apache.org/jira/browse/SPARK-3330
https://issues.apache.org/jira/browse/SPARK-3331

I'd argue that none necessarily block a release, since they just
represent a problem with test-only code in Java 8, with the test-only
context of Jenkins and multiple profiles, and with a trivial
configuration in a style check for Python. Should be fixed but none
indicate a bug in the release.

On Sun, Aug 31, 2014 at 6:11 PM, Will Benton wi...@redhat.com wrote:
 - Original Message -

 dev/run-tests fails two tests (1 Hive, 1 Kafka Streaming) for me
 locally on 1.1.0-rc3. Does anyone else see that? It may be my env.
 Although I still see the Hive failure on Debian too:

 [info] - SET commands semantics for a HiveContext *** FAILED ***
 [info]   Expected Array(spark.sql.key.usedfortestonly=test.val.0,
 spark.sql.key.usedfortestonlyspark.sql.key.usedfortestonly=test.val.0test.val.0),
 but got
 Array(spark.sql.key.usedfortestonlyspark.sql.key.usedfortestonly=test.val.0test.val.0,
 spark.sql.key.usedfortestonly=test.val.0) (HiveQuerySuite.scala:541)

 I've seen this error before.  (In particular, I've seen it on my OS X machine 
 using Oracle JDK 8 but not on Fedora using OpenJDK.)  I've also seen similar 
 errors in topic branches (but not on master) that seem to indicate that tests 
 depend on sets of pairs arriving from Hive in a particular order; it seems 
 that this isn't a safe assumption.

 I just submitted a (trivial) PR to fix this spurious failure:  
 https://github.com/apache/spark/pull/2220


 best,
 wb

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-08-31 Thread Patrick Wendell
For my part I'm +1 on this, though Sean it would be great separately
to fix the test environment.

For those who voted on rc2, this is almost identical, so feel free to
+1 unless you think there are issues with the two minor bug fixes.

On Sun, Aug 31, 2014 at 10:18 AM, Sean Owen so...@cloudera.com wrote:
 Fantastic. As it happens, I just fixed up Mahout's tests for Java 8
 and observed a lot of the same type of failure.

 I'm about to submit PRs for the two issues I identified. AFAICT these
 3 then cover the failures I mentioned:

 https://issues.apache.org/jira/browse/SPARK-3329
 https://issues.apache.org/jira/browse/SPARK-3330
 https://issues.apache.org/jira/browse/SPARK-3331

 I'd argue that none necessarily block a release, since they just
 represent a problem with test-only code in Java 8, with the test-only
 context of Jenkins and multiple profiles, and with a trivial
 configuration in a style check for Python. Should be fixed but none
 indicate a bug in the release.

 On Sun, Aug 31, 2014 at 6:11 PM, Will Benton wi...@redhat.com wrote:
 - Original Message -

 dev/run-tests fails two tests (1 Hive, 1 Kafka Streaming) for me
 locally on 1.1.0-rc3. Does anyone else see that? It may be my env.
 Although I still see the Hive failure on Debian too:

 [info] - SET commands semantics for a HiveContext *** FAILED ***
 [info]   Expected Array(spark.sql.key.usedfortestonly=test.val.0,
 spark.sql.key.usedfortestonlyspark.sql.key.usedfortestonly=test.val.0test.val.0),
 but got
 Array(spark.sql.key.usedfortestonlyspark.sql.key.usedfortestonly=test.val.0test.val.0,
 spark.sql.key.usedfortestonly=test.val.0) (HiveQuerySuite.scala:541)

 I've seen this error before.  (In particular, I've seen it on my OS X 
 machine using Oracle JDK 8 but not on Fedora using OpenJDK.)  I've also seen 
 similar errors in topic branches (but not on master) that seem to indicate 
 that tests depend on sets of pairs arriving from Hive in a particular order; 
 it seems that this isn't a safe assumption.

 I just submitted a (trivial) PR to fix this spurious failure:  
 https://github.com/apache/spark/pull/2220


 best,
 wb

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-08-31 Thread Nicholas Chammas
-1: I believe I've found a regression from 1.0.2. The report is captured in
SPARK- https://issues.apache.org/jira/browse/SPARK-.


On Sat, Aug 30, 2014 at 6:07 PM, Patrick Wendell pwend...@gmail.com wrote:

 Please vote on releasing the following candidate as Apache Spark version
 1.1.0!

 The tag to be voted on is v1.1.0-rc3 (commit b2d0493b):

 https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=b2d0493b223c5f98a593bb6d7372706cc02bebad

 The release files, including signatures, digests, etc. can be found at:
 http://people.apache.org/~pwendell/spark-1.1.0-rc3/

 Release artifacts are signed with the following key:
 https://people.apache.org/keys/committer/pwendell.asc

 The staging repository for this release can be found at:
 https://repository.apache.org/content/repositories/orgapachespark-1030/

 The documentation corresponding to this release can be found at:
 http://people.apache.org/~pwendell/spark-1.1.0-rc3-docs/

 Please vote on releasing this package as Apache Spark 1.1.0!

 The vote is open until Tuesday, September 02, at 23:07 UTC and passes if
 a majority of at least 3 +1 PMC votes are cast.

 [ ] +1 Release this package as Apache Spark 1.1.0
 [ ] -1 Do not release this package because ...

 To learn more about Apache Spark, please see
 http://spark.apache.org/

 == Regressions fixed since RC1 ==
 - Build issue for SQL support:
 https://issues.apache.org/jira/browse/SPARK-3234
 - EC2 script version bump to 1.1.0.

 == What justifies a -1 vote for this release? ==
 This vote is happening very late into the QA period compared with
 previous votes, so -1 votes should only occur for significant
 regressions from 1.0.2. Bugs already present in 1.0.X will not block
 this release.

 == What default changes should I be aware of? ==
 1. The default value of spark.io.compression.codec is now snappy
 -- Old behavior can be restored by switching to lzf

 2. PySpark now performs external spilling during aggregations.
 -- Old behavior can be restored by setting spark.shuffle.spill to
 false.

 -
 To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
 For additional commands, e-mail: dev-h...@spark.apache.org




Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-08-31 Thread chutium
Has anyone tried to build it with hadoop.version=2.0.0-mr1-cdh4.3.0 or
hadoop.version=1.0.3-mapr-3.0.3?

see comments in
https://issues.apache.org/jira/browse/SPARK-3124
https://github.com/apache/spark/pull/2035

I built a Spark snapshot with hadoop.version=1.0.3-mapr-3.0.3,
and the ticket creator built with hadoop.version=2.0.0-mr1-cdh4.3.0.

Neither Hadoop version works.

On 1.0.3-mapr-3.0.3, when I try to start spark-shell, I get:

14/08/23 23:29:46 INFO SecurityManager: Changing view acls to: client09,
14/08/23 23:29:46 INFO SecurityManager: Changing modify acls to: client09,
14/08/23 23:29:46 INFO SecurityManager: SecurityManager: authentication
disabled; ui acls disabled; users with view permissions: Set(client09, );
users with modify permissions: Set(client09, )
14/08/23 23:29:50 INFO Slf4jLogger: Slf4jLogger started
14/08/23 23:29:50 INFO Remoting: Starting remoting
14/08/23 23:29:50 ERROR ActorSystemImpl: Uncaught fatal error from thread
[spark-akka.actor.default-dispatcher-2] shutting down ActorSystem [spark]
java.lang.VerifyError: (class:
org/jboss/netty/channel/socket/nio/NioWorkerPool, method: createWorker
signature:
(Ljava/util/concurrent/Executor;)Lorg/jboss/netty/channel/socket/nio/AbstractNioWorker;)
Wrong return type in function
at
akka.remote.transport.netty.NettyTransport.init(NettyTransport.scala:282)
at
akka.remote.transport.netty.NettyTransport.init(NettyTransport.scala:239)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at
akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:78)
at scala.util.Try$.apply(Try.scala:161)
...
...
...


It seems this Netty jar conflict affects not only the SQL component but
also some test cases.
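Such a VerifyError usually means two jars on the classpath bundle different versions of the same class. A hedged sketch of detecting duplicate classes across jars (the jar names below are hypothetical, created only for the demo; jars are just zip archives, so the stdlib suffices):

```python
import os
import tempfile
import zipfile
from collections import defaultdict

def duplicate_classes(jar_paths):
    """Map each .class entry to the jars containing it; keep duplicates."""
    owners = defaultdict(list)
    for jar in jar_paths:
        with zipfile.ZipFile(jar) as zf:
            for name in zf.namelist():
                if name.endswith(".class"):
                    owners[name].append(jar)
    return {cls: jars for cls, jars in owners.items() if len(jars) > 1}

if __name__ == "__main__":
    tmp = tempfile.mkdtemp()
    # Simulate two jars that both bundle the same Netty class.
    paths = []
    for jar_name in ("netty-3.x.jar", "some-assembly.jar"):
        path = os.path.join(tmp, jar_name)
        with zipfile.ZipFile(path, "w") as zf:
            zf.writestr(
                "org/jboss/netty/channel/socket/nio/NioWorkerPool.class", b"")
        paths.append(path)
    print(sorted(duplicate_classes(paths)))
```

Running the same scan over the real assembly and its dependencies would show which Netty class is shadowed.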



--
View this message in context: 
http://apache-spark-developers-list.1001551.n3.nabble.com/VOTE-Release-Apache-Spark-1-1-0-RC3-tp8147p8159.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Re: [VOTE] Release Apache Spark 1.1.0 (RC3)

2014-08-31 Thread Nicholas Chammas
On Sun, Aug 31, 2014 at 6:38 PM, chutium teng@gmail.com wrote:

 has anyone tried to build it on hadoop.version=2.0.0-mr1-cdh4.3.0 or
 hadoop.version=1.0.3-mapr-3.0.3 ?


Is the behavior you're seeing a regression from 1.0.2, or does 1.0.2 have
this same problem?

Nick


[VOTE] Release Apache Spark 1.1.0 (RC3)

2014-08-30 Thread Patrick Wendell
Please vote on releasing the following candidate as Apache Spark version 1.1.0!

The tag to be voted on is v1.1.0-rc3 (commit b2d0493b):
https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=b2d0493b223c5f98a593bb6d7372706cc02bebad

The release files, including signatures, digests, etc. can be found at:
http://people.apache.org/~pwendell/spark-1.1.0-rc3/

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/pwendell.asc

The staging repository for this release can be found at:
https://repository.apache.org/content/repositories/orgapachespark-1030/

The documentation corresponding to this release can be found at:
http://people.apache.org/~pwendell/spark-1.1.0-rc3-docs/

Please vote on releasing this package as Apache Spark 1.1.0!

The vote is open until Tuesday, September 02, at 23:07 UTC and passes if
a majority of at least 3 +1 PMC votes are cast.

[ ] +1 Release this package as Apache Spark 1.1.0
[ ] -1 Do not release this package because ...

To learn more about Apache Spark, please see
http://spark.apache.org/

== Regressions fixed since RC1 ==
- Build issue for SQL support: https://issues.apache.org/jira/browse/SPARK-3234
- EC2 script version bump to 1.1.0.

== What justifies a -1 vote for this release? ==
This vote is happening very late into the QA period compared with
previous votes, so -1 votes should only occur for significant
regressions from 1.0.2. Bugs already present in 1.0.X will not block
this release.

== What default changes should I be aware of? ==
1. The default value of spark.io.compression.codec is now snappy
-- Old behavior can be restored by switching to lzf

2. PySpark now performs external spilling during aggregations.
-- Old behavior can be restored by setting spark.shuffle.spill to false.
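As a configuration sketch, the two rollbacks described above might be expressed in `spark-defaults.conf` like this (property names and values are taken from this message; check the 1.1.0 configuration docs for authoritative syntax):

```
spark.io.compression.codec   lzf
spark.shuffle.spill          false
```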

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org