Repository: spark
Updated Branches:
  refs/heads/branch-1.6 b3ecff640 -> 909231d7a


[SPARK-17003][BUILD][BRANCH-1.6] release-build.sh is missing hive-thriftserver for scala 2.11

## What changes were proposed in this pull request?
hive-thriftserver works with Scala 2.11
(https://issues.apache.org/jira/browse/SPARK-8013), so let's publish the Scala
2.11 artifacts with the `-Phive-thriftserver` profile as well. This change also
fixes the docs.
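
For reference, with this change the Scala 2.11 publish step boils down to the
following (a sketch assembled from the diff below; the profile list comes from
`PUBLISH_PROFILES` in `dev/create-release/release-build.sh`, and the
`-DzincPort`/`--settings` options are omitted here):

    # Sketch only: publish Scala 2.11 artifacts including the thrift server.
    ./dev/change-scala-version.sh 2.11
    build/mvn --force -Dscala-2.11 -DskipTests \
      -Pyarn -Phive -Phive-thriftserver -Phadoop-2.2 \
      -Pspark-ganglia-lgpl -Pkinesis-asl \
      clean deploy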

Author: Yin Huai <yh...@databricks.com>

Closes #14586 from yhuai/SPARK-16453-branch-1.6.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/909231d7
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/909231d7
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/909231d7

Branch: refs/heads/branch-1.6
Commit: 909231d7aec591af2fcf0ffaf0612a8c034bcd7a
Parents: b3ecff6
Author: Yin Huai <yh...@databricks.com>
Authored: Fri Aug 12 10:29:05 2016 -0700
Committer: Yin Huai <yh...@databricks.com>
Committed: Fri Aug 12 10:29:05 2016 -0700

----------------------------------------------------------------------
 dev/create-release/release-build.sh | 10 ++++------
 docs/building-spark.md              |  2 --
 python/pyspark/sql/functions.py     |  2 +-
 3 files changed, 5 insertions(+), 9 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/909231d7/dev/create-release/release-build.sh
----------------------------------------------------------------------
diff --git a/dev/create-release/release-build.sh b/dev/create-release/release-build.sh
index 2c3af6a..840fb20 100755
--- a/dev/create-release/release-build.sh
+++ b/dev/create-release/release-build.sh
@@ -80,7 +80,7 @@ NEXUS_PROFILE=d63f592e7eac0 # Profile for Spark staging uploads
 BASE_DIR=$(pwd)
 
 MVN="build/mvn --force"
-PUBLISH_PROFILES="-Pyarn -Phive -Phadoop-2.2"
+PUBLISH_PROFILES="-Pyarn -Phive -Phive-thriftserver -Phadoop-2.2"
 PUBLISH_PROFILES="$PUBLISH_PROFILES -Pspark-ganglia-lgpl -Pkinesis-asl"
 
 rm -rf spark
@@ -187,7 +187,7 @@ if [[ "$1" == "package" ]]; then
   # We increment the Zinc port each time to avoid OOM's and other craziness if multiple builds
   # share the same Zinc server.
   make_binary_release "hadoop1" "-Psparkr -Phadoop-1 -Phive -Phive-thriftserver" "3030" &
-  make_binary_release "hadoop1-scala2.11" "-Psparkr -Phadoop-1 -Phive -Dscala-2.11" "3031" &
+  make_binary_release "hadoop1-scala2.11" "-Psparkr -Phadoop-1 -Phive -Phive-thriftserver -Dscala-2.11" "3031" &
   make_binary_release "cdh4" "-Psparkr -Phadoop-1 -Phive -Phive-thriftserver -Dhadoop.version=2.0.0-mr1-cdh4.2.0" "3032" &
   make_binary_release "hadoop2.3" "-Psparkr -Phadoop-2.3 -Phive -Phive-thriftserver -Pyarn" "3033" &
   make_binary_release "hadoop2.4" "-Psparkr -Phadoop-2.4 -Phive -Phive-thriftserver -Pyarn" "3034" &
@@ -256,8 +256,7 @@ if [[ "$1" == "publish-snapshot" ]]; then
   # Generate random point for Zinc
   export ZINC_PORT=$(python -S -c "import random; print random.randrange(3030,4030)")
 
-  $MVN -DzincPort=$ZINC_PORT --settings $tmp_settings -DskipTests $PUBLISH_PROFILES \
-    -Phive-thriftserver deploy
+  $MVN -DzincPort=$ZINC_PORT --settings $tmp_settings -DskipTests $PUBLISH_PROFILES deploy
   ./dev/change-scala-version.sh 2.11
   $MVN -DzincPort=$ZINC_PORT -Dscala-2.11 --settings $tmp_settings \
     -DskipTests $PUBLISH_PROFILES clean deploy
@@ -293,8 +292,7 @@ if [[ "$1" == "publish-release" ]]; then
   # Generate random point for Zinc
   export ZINC_PORT=$(python -S -c "import random; print random.randrange(3030,4030)")
 
-  $MVN -DzincPort=$ZINC_PORT -Dmaven.repo.local=$tmp_repo -DskipTests $PUBLISH_PROFILES \
-    -Phive-thriftserver clean install
+  $MVN -DzincPort=$ZINC_PORT -Dmaven.repo.local=$tmp_repo -DskipTests $PUBLISH_PROFILES clean install
 
   ./dev/change-scala-version.sh 2.11
 

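With the snapshot and release publish paths now taking the thrift server from
PUBLISH_PROFILES, the new "hadoop1-scala2.11" binary package picks it up too.
Roughly, and assuming make_binary_release delegates to make-distribution.sh as
the 1.x release scripts do, the equivalent manual build would be:

    # Illustrative only: build the Scala 2.11 binary package with the
    # thrift server, mirroring the flags passed to make_binary_release.
    ./dev/change-scala-version.sh 2.11
    ./make-distribution.sh --name hadoop1-scala2.11 --tgz \
      -Psparkr -Phadoop-1 -Phive -Phive-thriftserver -Dscala-2.11
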
http://git-wip-us.apache.org/repos/asf/spark/blob/909231d7/docs/building-spark.md
----------------------------------------------------------------------
diff --git a/docs/building-spark.md b/docs/building-spark.md
index 5f694dc..4348b38 100644
--- a/docs/building-spark.md
+++ b/docs/building-spark.md
@@ -129,8 +129,6 @@ To produce a Spark package compiled with Scala 2.11, use the `-Dscala-2.11` prop
     ./dev/change-scala-version.sh 2.11
     mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package
 
-Spark does not yet support its JDBC component for Scala 2.11.
-
 # Spark Tests in Maven
 
 Tests are run by default via the [ScalaTest Maven plugin](http://www.scalatest.org/user_guide/using_the_scalatest_maven_plugin).

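With the stale "does not yet support its JDBC component" caveat removed, a
Scala 2.11 build that includes the thrift server follows directly from the
doc's existing example plus the new profile (a sketch; -Phadoop-2.4 is just the
profile the doc already uses):

    # Sketch: Scala 2.11 build including the JDBC/thrift server component.
    ./dev/change-scala-version.sh 2.11
    mvn -Pyarn -Phadoop-2.4 -Phive -Phive-thriftserver -Dscala-2.11 -DskipTests clean package
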
http://git-wip-us.apache.org/repos/asf/spark/blob/909231d7/python/pyspark/sql/functions.py
----------------------------------------------------------------------
diff --git a/python/pyspark/sql/functions.py b/python/pyspark/sql/functions.py
index 1152954..6912cc9 100644
--- a/python/pyspark/sql/functions.py
+++ b/python/pyspark/sql/functions.py
@@ -1299,7 +1299,7 @@ def regexp_extract(str, pattern, idx):
     >>> df = sqlContext.createDataFrame([('100-200',)], ['str'])
     >>> df.select(regexp_extract('str', '(\d+)-(\d+)', 1).alias('d')).collect()
     [Row(d=u'100')]
-    >>> df = spark.createDataFrame([('aaaac',)], ['str'])
+    >>> df = sqlContext.createDataFrame([('aaaac',)], ['str'])
     >>> df.select(regexp_extract('str', '(a+)(b)?(c)', 2).alias('d')).collect()
     [Row(d=u'')]
     """

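The functions.py hunk is a doctest fix: branch-1.6 predates the 2.x `spark`
session object, so the doctest globals only provide `sqlContext`. To re-run the
affected doctests locally, something like the following should work (an
assumption: that the 1.6 `python/run-tests` script accepts `--modules`, as it
does in recent 1.x branches):

    # Run the pyspark-sql tests, which execute the functions.py doctests.
    ./python/run-tests --modules=pyspark-sql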
