Repository: spark
Updated Branches:
refs/heads/master 8161562ea -> b97ddff00
[SPARK-7684] [SQL] Refactoring MetastoreDataSourcesSuite to workaround
SPARK-7684
As stated in SPARK-7684, `TestHive.reset` currently has an execution-order
specific bug, which makes running specific test suites l
Repository: spark
Updated Branches:
refs/heads/master b97ddff00 -> db3fd054f
[SPARK-7853] [SQL] Fixes a class loader issue in Spark SQL
This PR is based on PR #6396 authored by chenghao-intel. Essentially, Spark SQL
should use context classloader to load SerDe classes.
yhuai helped updat
Repository: spark
Updated Branches:
refs/heads/branch-1.4 89fe93fc3 -> e07b71560
[SPARK-7853] [SQL] Fixes a class loader issue in Spark SQL
This PR is based on PR #6396 authored by chenghao-intel. Essentially, Spark SQL
should use context classloader to load SerDe classes.
yhuai hel
and
3. Renaming the title of the session page from `ThriftServer` to `JDBC/ODBC
Session`.
https://issues.apache.org/jira/browse/SPARK-7907
Author: Yin Huai
Closes #6448 from yhuai/JDBCServer and squashes the following commits:
eadcc3d [Yin Huai] Update test.
9168005 [Yin Huai] Use SQL as the tab n
that Hive Context fails
to create in spark shell because of the class loader issue.
Author: Yin Huai
Closes #6459 from yhuai/SPARK-7853 and squashes the following commits:
37ad33e [Yin Huai] Do not use hiveQlTable at all.
47cdb6d [Yin Huai] Move hiveconf.set to the end of setConf.
005649b [Yin H
Repository: spark
Updated Branches:
refs/heads/master a51b133de -> e7b617755
[SPARK-7950] [SQL] Sets spark.sql.hive.version in
HiveThriftServer2.startWithContext()
When starting `HiveThriftServer2` via `startWithContext`, property
`spark.sql.hive.version` isn't set. This causes Simba ODBC dr
Repository: spark
Updated Branches:
refs/heads/branch-1.4 23bd05fff -> caea7a618
[SPARK-7950] [SQL] Sets spark.sql.hive.version in
HiveThriftServer2.startWithContext()
When starting `HiveThriftServer2` via `startWithContext`, property
`spark.sql.hive.version` isn't set. This causes Simba ODB
Repository: spark
Updated Branches:
refs/heads/master bcb47ad77 -> 7b7f7b6c6
[SPARK-8020] [SQL] Spark SQL conf in spark-defaults.conf make metadataHive get
constructed too early
https://issues.apache.org/jira/browse/SPARK-8020
Author: Yin Huai
Closes #6571 from yhuai/SPARK-8020-1
Repository: spark
Updated Branches:
refs/heads/branch-1.4 9d6475b93 -> 4940630f5
[SPARK-8020] [SQL] Spark SQL conf in spark-defaults.conf make metadataHive get
constructed too early
https://issues.apache.org/jira/browse/SPARK-8020
Author: Yin Huai
Closes #6571 from yhuai/SPARK-8020-1
Repository: spark
Updated Branches:
refs/heads/branch-1.4 97fedf1a0 -> 8c3fc3a6c
[HOT-FIX] Add EvaluatedType back to RDG
https://github.com/apache/spark/commit/87941ff8c49a6661f22c31aa7b84ac1fce768135
accidentally removed the EvaluatedType.
Author: Yin Huai
Closes #6589 from yh
Repository: spark
Updated Branches:
refs/heads/master ad06727fe -> 686a45f0b
[SPARK-8014] [SQL] Avoid premature metadata discovery when writing a
HadoopFsRelation with a save mode other than Append
The current code references the schema of the DataFrame to be written before
checking save mod
Repository: spark
Updated Branches:
refs/heads/branch-1.4 815e05654 -> cbaf59544
[SPARK-8014] [SQL] Avoid premature metadata discovery when writing a
HadoopFsRelation with a save mode other than Append
The current code references the schema of the DataFrame to be written before
checking save
Repository: spark
Updated Branches:
refs/heads/master 28dbde387 -> f1646e102
[SPARK-7973] [SQL] Increase the timeout of two CliSuite tests.
https://issues.apache.org/jira/browse/SPARK-7973
Author: Yin Huai
Closes #6525 from yhuai/SPARK-7973 and squashes the following commits:
763b821 [
Repository: spark
Updated Branches:
refs/heads/branch-1.4 ee7f365bd -> 54a4ea407
[SPARK-7973] [SQL] Increase the timeout of two CliSuite tests.
https://issues.apache.org/jira/browse/SPARK-7973
Author: Yin Huai
Closes #6525 from yhuai/SPARK-7973 and squashes the following commits:
763b
Repository: spark
Updated Branches:
refs/heads/master 04b679993 -> 49efd03ba
[SPARK-12138][SQL] Escape \u in the generated comments of codegen
When \u appears in a comment block (i.e. in /**/), code gen will break. So, in
Expression and CodegenFallback, we escape \u to \\u.
yhuai Ple
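The escaping described above can be sketched in Python (a hypothetical analog; the actual change is in Scala's Expression and CodegenFallback). The point is that javac decodes `\u` escapes everywhere in a source file, including inside `/* */` comments, so a raw `\u` in a generated comment breaks compilation:

```python
def escape_unicode_in_comment(comment: str) -> str:
    """Escape the backslash-u marker so the Java compiler does not try
    to decode it inside a generated /* ... */ comment block.
    (Illustrative function name, not Spark's API.)"""
    return comment.replace("\\u", "\\\\u")

# A generated comment containing a raw \u sequence:
raw = r"input[0].toString() == \u0041"
print(escape_unicode_in_comment(raw))  # input[0].toString() == \\u0041
```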
Repository: spark
Updated Branches:
refs/heads/branch-1.6 c8747a9db -> 82a71aba0
[SPARK-12138][SQL] Escape \u in the generated comments of codegen
When \u appears in a comment block (i.e. in /**/), code gen will break. So, in
Expression and CodegenFallback, we escape \u to \\u.
yhuai Ple
Repository: spark
Updated Branches:
refs/heads/branch-1.5 8bbb3cdd1 -> 93a0510a5
[SPARK-12138][SQL] Escape \u in the generated comments of codegen
When \u appears in a comment block (i.e. in /**/), code gen will break. So, in
Expression and CodegenFallback, we escape \u to \\u.
yhuai Ple
Repository: spark
Updated Branches:
refs/heads/master 84b809445 -> 36282f78b
[SPARK-12184][PYTHON] Make python api doc for pivot consistent with scala doc
In SPARK-11946 the API for pivot was changed a bit and got updated doc, the doc
changes were not made for the python api though. This PR u
Repository: spark
Updated Branches:
refs/heads/branch-1.6 c8aa5f201 -> cdeb89b34
[SPARK-12184][PYTHON] Make python api doc for pivot consistent with scala doc
In SPARK-11946 the API for pivot was changed a bit and got updated doc, the doc
changes were not made for the python api though. This
Repository: spark
Updated Branches:
refs/heads/master 872a2ee28 -> 4bcb89494
[SPARK-12205][SQL] Pivot fails Analysis when aggregate is UnresolvedFunction
Delays application of ResolvePivot until all aggregates are resolved to prevent
problems with UnresolvedFunction and adds unit test
Author
Repository: spark
Updated Branches:
refs/heads/branch-1.6 1c8451b5e -> 9145bfb81
[SPARK-12205][SQL] Pivot fails Analysis when aggregate is UnresolvedFunction
Delays application of ResolvePivot until all aggregates are resolved to prevent
problems with UnresolvedFunction and adds unit test
Au
tps://cloud.githubusercontent.com/assets/2072857/11673132/1ba01192-9dcb-11e5-98d9-ac0b4e92e98c.png)
JIRA: https://issues.apache.org/jira/browse/SPARK-11678
Author: Yin Huai
Closes #10211 from yhuai/basePathDoc.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-
tps://cloud.githubusercontent.com/assets/2072857/11673132/1ba01192-9dcb-11e5-98d9-ac0b4e92e98c.png)
JIRA: https://issues.apache.org/jira/browse/SPARK-11678
Author: Yin Huai
Closes #10211 from yhuai/basePathDoc.
(cherry picked from commit ac8cdf1cdc148bd21290ecf4d4f9874f8c87cc14)
Signed-off-by:
Repository: spark
Updated Branches:
refs/heads/branch-1.6 93ef24638 -> e541f703d
[SPARK-12012][SQL][BRANCH-1.6] Show more comprehensive PhysicalRDD metadata
when visualizing SQL query plan
This PR backports PR #10004 to branch-1.6
It adds a private[sql] method metadata to SparkPlan, which ca
Repository: spark
Updated Branches:
refs/heads/master d9d354ed4 -> bc5f56aa6
[SPARK-12250][SQL] Allow users to define a UDAF without providing details of
its inputSchema
https://issues.apache.org/jira/browse/SPARK-12250
Author: Yin Huai
Closes #10236 from yhuai/SPARK-12250.
Proj
Repository: spark
Updated Branches:
refs/heads/branch-1.6 e541f703d -> 594fafc61
[SPARK-12250][SQL] Allow users to define a UDAF without providing details of
its inputSchema
https://issues.apache.org/jira/browse/SPARK-12250
Author: Yin Huai
Closes #10236 from yhuai/SPARK-12250.
(che
itch to a new one.
It may reduce the flakiness of our tests that need to
create HiveContext (e.g. HiveSparkSubmitSuite). I will test it more.
https://issues.apache.org/jira/browse/SPARK-12228
Author: Yin Huai
Closes #10204 from yhuai/derbyInMemory.
Project: h
Repository: spark
Updated Branches:
refs/heads/branch-1.6 5d3722f8e -> d09af2cb4
[SPARK-12258][SQL] passing null into ScalaUDF
Check nullability when passing values into ScalaUDF.
Closes #10249
Author: Davies Liu
Closes #10259 from davies/udf_null.
(cherry picked from commit b1b4ee7f3541d92c
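The nullability check described in SPARK-12258 can be sketched as a wrapper that refuses to invoke the underlying function on null inputs (a simplified Python sketch; the real fix adds null checks around ScalaUDF's input conversion, and these names are illustrative):

```python
def null_safe_udf(f):
    """Only invoke f when every input is non-null; otherwise return
    None, instead of passing null through to a function that may not
    expect it. (Hypothetical helper, not Spark's API.)"""
    def wrapper(*args):
        if any(a is None for a in args):
            return None
        return f(*args)
    return wrapper

upper = null_safe_udf(lambda s: s.upper())
print(upper("spark"))  # SPARK
print(upper(None))     # None
```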
Repository: spark
Updated Branches:
refs/heads/master 24d3357d6 -> b1b4ee7f3
[SPARK-12258][SQL] passing null into ScalaUDF
Check nullability and passing them into ScalaUDF.
Closes #10249
Author: Davies Liu
Closes #10259 from davies/udf_null.
Project: http://git-wip-us.apache.org/repos/as
Repository: spark
Updated Branches:
refs/heads/master a0ff6d16e -> 1e799d617
[SPARK-12298][SQL] Fix infinite loop in DataFrame.sortWithinPartitions
Modifies the String overload to call the Column overload and ensures this is
called in a test.
Author: Ankur Dave
Closes #10271 from ankurdave
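The infinite loop fixed in SPARK-12298 is a classic overload bug: the String-based variant called itself instead of converting its arguments and delegating to the Column-based variant. A minimal Python sketch of the delegation pattern (hypothetical class and method names, not Spark's API):

```python
class Column:
    def __init__(self, name):
        self.name = name

class DataFrame:
    """Sketch of the overload-delegation fix: the name-based variant
    must convert names to Columns and call the Column-based variant,
    never itself."""

    def sort_within_partitions_by_name(self, *names):
        # The buggy version re-invoked sort_within_partitions_by_name
        # here, recursing forever. The fix delegates:
        return self.sort_within_partitions_by_col(*(Column(n) for n in names))

    def sort_within_partitions_by_col(self, *cols):
        return [c.name for c in cols]  # stand-in for the real sort

df = DataFrame()
print(df.sort_within_partitions_by_name("a", "b"))  # ['a', 'b']
```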
Repository: spark
Updated Branches:
refs/heads/branch-1.6 c2f20469d -> 03d801587
[SPARK-12298][SQL] Fix infinite loop in DataFrame.sortWithinPartitions
Modifies the String overload to call the Column overload and ensures this is
called in a test.
Author: Ankur Dave
Closes #10271 from ankur
yhuai nongli marmbrus
Author: Davies Liu
Closes #10228 from davies/single_distinct.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/834e7148
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/834e7148
Diff: http://git
Repository: spark
Updated Branches:
refs/heads/master 2aecda284 -> 834e71489
http://git-wip-us.apache.org/repos/asf/spark/blob/834e7148/sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/utils.scala
--
diff --git
Repository: spark
Updated Branches:
refs/heads/master 834e71489 -> ed87f6d3b
[SPARK-12275][SQL] No plan for BroadcastHint in some condition
When SparkStrategies.BasicOperators's "case BroadcastHint(child) =>
apply(child)" is hit, it only recursively invokes BasicOperators.apply with
this "ch
Repository: spark
Updated Branches:
refs/heads/branch-1.6 fbf16da2e -> 94ce5025f
[SPARK-12275][SQL] No plan for BroadcastHint in some condition
When SparkStrategies.BasicOperators's "case BroadcastHint(child) =>
apply(child)" is hit, it only recursively invokes BasicOperators.apply with
this
Window functions.
cc rxin / yhuai
Author: Herman van Hovell
Closes #9819 from hvanhovell/SPARK-8641-2.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/658f66e6
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/658f66e6
Repository: spark
Updated Branches:
refs/heads/master ed6ebda5c -> 658f66e62
http://git-wip-us.apache.org/repos/asf/spark/blob/658f66e6/sql/hive/compatibility/src/test/scala/org/apache/spark/sql/hive/execution/HiveWindowFunctionQuerySuite.scala
--
Repository: spark
Updated Branches:
refs/heads/master 278281828 -> ee444fe4b
[SPARK-11619][SQL] cannot use UDTF in DataFrame.selectExpr
Description of the problem from cloud-fan
Actually this line:
https://github.com/apache/spark/blob/branch-1.5/sql/core/src/main/scala/org/apache/spark/sql/D
AND
expressions partially.
Author: Yin Huai
Closes #10362 from yhuai/SPARK-12218.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/41ee7c57
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/41ee7c57
Diff: http://git-
ted AND
expressions partially.
Author: Yin Huai
Closes #10362 from yhuai/SPARK-12218.
(cherry picked from commit 41ee7c57abd9f52065fd7ffb71a8af229603371d)
Signed-off-by: Yin Huai
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/sp
ted AND
expressions partially.
Author: Yin Huai
Closes #10362 from yhuai/SPARK-12218.
(cherry picked from commit 41ee7c57abd9f52065fd7ffb71a8af229603371d)
Signed-off-by: Yin Huai
Conflicts:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parq
Repository: spark
Updated Branches:
refs/heads/master 575a13279 -> b374a2583
[SPARK-12102][SQL] Cast a non-nullable struct field to a nullable field during
analysis
Compare both left and right side of the case expression ignoring nullablity
when checking for type equality.
Author: Dilip Bis
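The SPARK-12102 check above compares both sides of a CASE expression while ignoring nullability. A sketch in Python, encoding a type as a `(typeName, nullable, children)` tuple (a hypothetical encoding, not Spark's DataType classes):

```python
def same_type_ignoring_nullability(a, b):
    """Recursively compare two type trees, ignoring the nullable flag
    at every level -- the equality the analysis fix uses before
    inserting a cast from a non-nullable field to a nullable one."""
    name_a, _, kids_a = a
    name_b, _, kids_b = b
    return (name_a == name_b
            and len(kids_a) == len(kids_b)
            and all(same_type_ignoring_nullability(x, y)
                    for x, y in zip(kids_a, kids_b)))

nonnull = ("struct", False, [("int", False, [])])
nullable = ("struct", True, [("int", True, [])])
print(same_type_ignoring_nullability(nonnull, nullable))  # True
```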
Repository: spark
Updated Branches:
refs/heads/master 8d4940092 -> 8e23d8db7
[SPARK-12218] Fixes ORC conjunction predicate push down
This PR is a follow-up of PR #10362.
Two major changes:
1. The fix introduced in #10362 is OK for Parquet, but may disable ORC PPD in
many cases
PR #103
Repository: spark
Updated Branches:
refs/heads/master 1a91be807 -> 73862a1eb
[SPARK-11394][SQL] Throw IllegalArgumentException for unsupported types in
postgresql
If DataFrame has BYTE types, throws an exception:
org.postgresql.util.PSQLException: ERROR: type "byte" does not exist
Author: Ta
Repository: spark
Updated Branches:
refs/heads/branch-1.6 fd202485a -> 85a871818
[SPARK-11394][SQL] Throw IllegalArgumentException for unsupported types in
postgresql
If DataFrame has BYTE types, throws an exception:
org.postgresql.util.PSQLException: ERROR: type "byte" does not exist
Author
Repository: spark
Updated Branches:
refs/heads/master d80cc90b5 -> 8e629b10c
[SPARK-12530][BUILD] Fix build break at Spark-Master-Maven-Snapshots from #1293
Compilation error caused by string concatenations that are not constant.
Use a raw string literal to avoid string concatenation.
http
Repository: spark
Updated Branches:
refs/heads/master 8e629b10c -> f6ecf1433
[SPARK-11199][SPARKR] Improve R context management story and add getOrCreate
* Changes api.r.SQLUtils to use ```SQLContext.getOrCreate``` instead of
creating a new context.
* Adds a simple test
[SPARK-11199] #commen
add the licenses of these two projects to the
licenses directory. They are both under the ASL. srowen any thoughts?
cc yhuai
Author: Herman van Hovell
Closes #10402 from hvanhovell/SPARK-8641-docs.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.or
Repository: spark
Updated Branches:
refs/heads/master 8f659393b -> 6c83d938c
[SPARK-12579][SQL] Force user-specified JDBC driver to take precedence
Spark SQL's JDBC data source allows users to specify an explicit JDBC driver to
load (using the `driver` argument), but in the current code it's
Repository: spark
Updated Branches:
refs/heads/branch-1.6 b5a1f564a -> 7f37c1e45
[SPARK-12579][SQL] Force user-specified JDBC driver to take precedence
Spark SQL's JDBC data source allows users to specify an explicit JDBC driver to
load (using the `driver` argument), but in the current code i
Repository: spark
Updated Branches:
refs/heads/master d084a2de3 -> 34de24abb
[SPARK-12589][SQL] Fix UnsafeRowParquetRecordReader to properly set the row
length.
The reader was previously not setting the row length meaning it was wrong if
there were variable
length columns. This problem does
Repository: spark
Updated Branches:
refs/heads/branch-1.6 1005ee396 -> 8ac919809
[SPARK-12589][SQL] Fix UnsafeRowParquetRecordReader to properly set the row
length.
The reader was previously not setting the row length meaning it was wrong if
there were variable
length columns. This problem d
Repository: spark
Updated Branches:
refs/heads/branch-1.6 d9e4438b5 -> 5afa62b20
[SPARK-12647][SQL] Fix o.a.s.sqlexecution.ExchangeCoordinatorSuite.determining
the number of reducers: aggregate operator
change expected partition sizes
Author: Pete Robbins
Closes #10599 from robbinspg/branc
Repository: spark
Updated Branches:
refs/heads/branch-1.4 bc397753c -> d4914647a
Revert "[SPARK-12006][ML][PYTHON] Fix GMM failure if initialModel is not None"
This reverts commit fcd013cf70e7890aa25a8fe3cb6c8b36bf0e1f04.
Author: Yin Huai
Closes #10632 from yhuai/pythonSt
Repository: spark
Updated Branches:
refs/heads/master 834651835 -> 34dbc8af2
[SPARK-12580][SQL] Remove string concatenations from usage and extended in
@ExpressionDescription
Use multi-line string literals for ExpressionDescription with ``//
scalastyle:off line.size.limit`` and ``// scalasty
Repository: spark
Updated Branches:
refs/heads/master 659fd9d04 -> d9447cac7
http://git-wip-us.apache.org/repos/asf/spark/blob/d9447cac/sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
--
diff --git a/sql/hive/s
[SPARK-12593][SQL] Converts resolved logical plan back to SQL
This PR tries to enable Spark SQL to convert resolved logical plans back to SQL
query strings. For now, the major use case is to canonicalize Spark SQL native
view support. The major entry point is `SQLBuilder.toSQL`, which returns
Repository: spark
Updated Branches:
refs/heads/master 8fe928b4f -> 9559ac5f7
[SPARK-12744][SQL] Change parsing JSON integers to timestamps to treat integers
as number of seconds
JIRA: https://issues.apache.org/jira/browse/SPARK-12744
This PR makes parsing JSON integers to timestamps consiste
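The convention SPARK-12744 standardizes on, treating a JSON integer as a number of seconds since the epoch, looks like this in Python (a sketch of the semantics only; Spark's internal timestamp representation differs):

```python
from datetime import datetime, timezone

def json_int_to_timestamp(value: int) -> datetime:
    """Interpret a JSON integer as seconds since the Unix epoch
    (not milliseconds or microseconds)."""
    return datetime.fromtimestamp(value, tz=timezone.utc)

print(json_int_to_timestamp(0))      # 1970-01-01 00:00:00+00:00
print(json_int_to_timestamp(86400))  # 1970-01-02 00:00:00+00:00
```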
Repository: spark
Updated Branches:
refs/heads/branch-1.6 94b39f777 -> 03e523e52
Revert "[SPARK-12645][SPARKR] SparkR support hash function"
This reverts commit 8b5f23043322254c725c703c618ba3d3cc4a4240.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apac
Repository: spark
Updated Branches:
refs/heads/master f14922cff -> dc7b3870f
[SPARK-12558][SQL] AnalysisException when multiple functions applied in GROUP
BY clause
cloud-fan Can you please take a look ?
In this case, we are failing during check analysis while validating the
aggregation exp
Repository: spark
Updated Branches:
refs/heads/branch-1.6 f71e5cc12 -> dcdc864cf
[SPARK-12558][SQL] AnalysisException when multiple functions applied in GROUP
BY clause
cloud-fan Can you please take a look ?
In this case, we are failing during check analysis while validating the
aggregation
sue.
Author: Yin Huai
Closes #10742 from yhuai/fixStyle.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/d6fd9b37
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/d6fd9b37
Diff: http://git-wip-us.apache.org/repos/
Repository: spark
Updated Branches:
refs/heads/master ad1503f92 -> 513266c04
[SPARK-12833][HOT-FIX] Fix scala 2.11 compilation.
Seems
https://github.com/apache/spark/commit/5f83c6991c95616ecbc2878f8860c69b2826f56c
breaks scala 2.11 compilation.
Author: Yin Huai
Closes #10774 from yh
Repository: spark
Updated Branches:
refs/heads/master 5f843781e -> f6ddbb360
[SPARK-12833][HOT-FIX] Reset the locale after we set it.
Author: Yin Huai
Closes #10778 from yhuai/resetLocale.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.
Repository: spark
Updated Branches:
refs/heads/master 233d6cee9 -> db9a86058
[SPARK-12558][FOLLOW-UP] AnalysisException when multiple functions applied in
GROUP BY clause
Addresses the comments from Yin.
https://github.com/apache/spark/pull/10520
Author: Dilip Biswal
Closes #10758 from dil
Repository: spark
Updated Branches:
refs/heads/branch-1.6 5803fce90 -> 53184ce77
[SPARK-12558][FOLLOW-UP] AnalysisException when multiple functions applied in
GROUP BY clause
Addresses the comments from Yin.
https://github.com/apache/spark/pull/10520
Author: Dilip Biswal
Closes #10758 from
Repository: spark
Updated Branches:
refs/heads/master 38c3c0e31 -> 4f11e3f2a
[SPARK-12841][SQL] fix cast in filter
In SPARK-10743 we wrap cast with `UnresolvedAlias` to give `Cast` a better
alias if possible. However, for cases like `filter`, the `UnresolvedAlias`
can't be resolved and actua
Repository: spark
Updated Branches:
refs/heads/branch-1.6 d43704d7f -> 68265ac23
[SPARK-12841][SQL][BRANCH-1.6] fix cast in filter
In SPARK-10743 we wrap cast with `UnresolvedAlias` to give `Cast` a better
alias if possible. However, for cases like filter, the `UnresolvedAlias` can't
be reso
Repository: spark
Updated Branches:
refs/heads/master 0ddba6d88 -> e14817b52
[SPARK-12870][SQL] better format bucket id in file name
for a normal Parquet file without buckets, its file name ends with a jobUUID
which may be all numbers and mistakenly regarded as a bucket id. This PR improves
the fo
Repository: spark
Updated Branches:
refs/heads/master 015c8efb3 -> d60f8d74a
[SPARK-8968] [SQL] [HOT-FIX] Fix scala 2.11 build.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/d60f8d74
Tree: http://git-wip-us.apache.org/re
Repository: spark
Updated Branches:
refs/heads/master e789b1d2c -> 3327fd281
[SPARK-12624][PYSPARK] Checks row length when converting Java arrays to Python
rows
When actual row length doesn't conform to specified schema field length, we
should give a better error message instead of throwing
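The better error message described in SPARK-12624 amounts to validating row length against the schema up front. A minimal Python sketch (illustrative names, not Spark's conversion internals):

```python
def to_row(values, schema_fields):
    """Check the row's length against the schema before converting,
    raising a descriptive error instead of an opaque one."""
    if len(values) != len(schema_fields):
        raise ValueError(
            f"Length of object ({len(values)}) does not match "
            f"length of fields ({len(schema_fields)})")
    return dict(zip(schema_fields, values))

print(to_row([1, "a"], ["id", "name"]))  # {'id': 1, 'name': 'a'}
```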
Repository: spark
Updated Branches:
refs/heads/branch-1.6 f913f7ea0 -> 88614dd0f
[SPARK-12624][PYSPARK] Checks row length when converting Java arrays to Python
rows
When actual row length doesn't conform to specified schema field length, we
should give a better error message instead of throw
Repository: spark
Updated Branches:
refs/heads/master 7d877c343 -> 00026fa99
[SPARK-12901][SQL][HOT-FIX] Fix scala 2.11 compilation.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/00026fa9
Tree: http://git-wip-us.apache.o
Modified: spark/site/examples.html
URL:
http://svn.apache.org/viewvc/spark/site/examples.html?rev=1726699&r1=1726698&r2=1726699&view=diff
==
--- spark/site/examples.html (original)
+++ spark/site/examples.html Mon Jan 25 2
Modified: spark/site/news/spark-screencasts-published.html
URL:
http://svn.apache.org/viewvc/spark/site/news/spark-screencasts-published.html?rev=1726699&r1=1726698&r2=1726699&view=diff
==
--- spark/site/news/spark-screenc
Author: yhuai
Date: Mon Jan 25 21:57:32 2016
New Revision: 1726699
URL: http://svn.apache.org/viewvc?rev=1726699&view=rev
Log:
Update the Spark example page to include examples using high level APIs
Modified:
spark/_config.yml
spark/_layouts/global.html
spark/examples.md
s
Repository: spark
Updated Branches:
refs/heads/master ae0309a88 -> 08c781ca6
[SPARK-12682][SQL] Add support for (optionally) not storing tables in hive
metadata format
This PR adds a new table option (`skip_hive_metadata`) that'd allow the user to
skip storing the table metadata in hive meta
Repository: spark
Updated Branches:
refs/heads/branch-1.6 572bc3999 -> f0c98a60f
[SPARK-12682][SQL] Add support for (optionally) not storing tables in hive
metadata format
This PR adds a new table option (`skip_hive_metadata`) that'd allow the user to
skip storing the table metadata in hive
Repository: spark
Updated Branches:
refs/heads/branch-1.6 f0c98a60f -> 6ce3dd940
[SPARK-12682][SQL][HOT-FIX] Fix test compilation
Author: Yin Huai
Closes #10925 from yhuai/branch-1.6-hot-fix.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.
Repository: spark
Updated Branches:
refs/heads/branch-1.6 6ce3dd940 -> 85518eda4
[SPARK-12611][SQL][PYSPARK][TESTS] Fix test_infer_schema_to_local
Previously (when the PR was first created) not specifying b= explicitly was
fine (and treated as default null) - instead be explicit about b being
Repository: spark
Updated Branches:
refs/heads/master ce38a35b7 -> 58f5d8c1d
[SPARK-12728][SQL] Integrates SQL generation with native view
This PR is a follow-up of PR #10541. It integrates the newly introduced SQL
generation feature with native view to make native view canonical.
In this PR
Repository: spark
Updated Branches:
refs/heads/branch-1.6 17d1071ce -> 96e32db5c
[SPARK-10847][SQL][PYSPARK] Pyspark - DataFrame - Optional Metadata with `None`
triggers cryptic failure
The error message is now changed from "Do not support type class scala.Tuple2."
to "Do not support type cl
Repository: spark
Updated Branches:
refs/heads/branch-1.5 ae6fcc6bc -> 49dc8e7d3
[SPARK-10847][SQL][PYSPARK] Pyspark - DataFrame - Optional Metadata with `None`
triggers cryptic failure
The error message is now changed from "Do not support type class scala.Tuple2."
to "Do not support type cl
Repository: spark
Updated Branches:
refs/heads/master 41f0c85f9 -> edd473751
[SPARK-10847][SQL][PYSPARK] Pyspark - DataFrame - Optional Metadata with `None`
triggers cryptic failure
The error message is now changed from "Do not support type class scala.Tuple2."
to "Do not support type class
Repository: spark
Updated Branches:
refs/heads/master 87abcf7df -> 32f741115
[SPARK-13021][CORE] Fail fast when custom RDDs violate RDD.partition's API
contract
Spark's `Partition` and `RDD.partitions` APIs have a contract which requires
custom implementations of `RDD.partitions` to ensure t
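The contract SPARK-13021 enforces is that each partition's `index` equals its position in the partitions array, i.e. `rdd.partitions(i).index == i`. A fail-fast check can be sketched as (hypothetical Python class, not Spark's Scala API):

```python
class Partition:
    def __init__(self, index):
        self.index = index

def check_partitions_contract(partitions):
    """Fail fast if any Partition's index differs from its position
    in the partitions array."""
    for pos, part in enumerate(partitions):
        if part.index != pos:
            raise ValueError(
                f"partitions[{pos}].index == {part.index}, violating "
                "the contract that partition index equals its position")
    return True

print(check_partitions_contract([Partition(0), Partition(1)]))  # True
```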
Repository: spark
Updated Branches:
refs/heads/master ef96cd3c5 -> d702f0c17
[HOTFIX] Fix Scala 2.11 compilation
by explicitly marking annotated parameters as vals (SI-8813).
Caused by #10835.
Author: Andrew Or
Closes #10955 from andrewor14/fix-scala211.
Project: http://git-wip-us.apache
Repository: spark
Updated Branches:
refs/heads/master b8666fd0e -> 22ba21348
[SPARK-13087][SQL] Fix group by function for sort based aggregation
It is not valid to call `toAttribute` on a `NamedExpression` unless we know for
sure that the child produced that `NamedExpression`. The current co
Repository: spark
Updated Branches:
refs/heads/branch-1.6 70fcbf68e -> bd8efba8f
[SPARK-13087][SQL] Fix group by function for sort based aggregation
It is not valid to call `toAttribute` on a `NamedExpression` unless we know for
sure that the child produced that `NamedExpression`. The curren
Repository: spark
Updated Branches:
refs/heads/master 6de6a9772 -> 672032d0a
[SPARK-13020][SQL][TEST] fix random generator for map type
when we generate a map, we first randomly pick a length, then create a seq of
key-value pairs with the expected length, and finally call `toMap`. However, `toMa
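The pitfall in SPARK-13020 is that building a map from independently drawn keys silently drops duplicates, so the result can be shorter than requested. Python's `dict()` behaves like Scala's `toMap` here; drawing keys without replacement avoids the problem (a sketch with illustrative names):

```python
import random

def random_map(length, rand=random.Random(42)):
    """Generate a map with exactly `length` entries. Sampling keys
    without replacement guarantees distinct keys; picking them
    independently and calling dict() could collapse duplicates,
    yielding fewer entries than requested."""
    keys = rand.sample(range(1000), length)  # distinct keys
    return {k: rand.random() for k in keys}

m = random_map(10)
print(len(m))  # 10
```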
Repository: spark
Updated Branches:
refs/heads/master 9267bc68f -> 6f710f9fd
[SPARK-12476][SQL] Implement JdbcRelation#unhandledFilters for removing
unnecessary Spark Filter
Input: SELECT * FROM jdbcTable WHERE col0 = 'xxx'
Current plan:
```
== Optimized Logical Plan ==
Project [col0#0,col1#
PR build even if a PR only changes
sql/core. So, I am going to remove `ExtendedHiveTest` annotation from
`HiveCompatibilitySuite`.
https://issues.apache.org/jira/browse/SPARK-13475
Author: Yin Huai
Closes #11351 from yhuai/SPARK-13475.
Project: http://git-wip-us.apache.org/repos/asf/spark/r
run in PR build even if a PR only changes
sql/core. So, I am going to remove `ExtendedHiveTest` annotation from
`HiveCompatibilitySuite`.
https://issues.apache.org/jira/browse/SPARK-13475
Author: Yin Huai
Closes #11351 from yhuai/SPARK-13475.
(cherry picked from com
Modified: spark/site/news/two-weeks-to-spark-summit-2014.html
URL:
http://svn.apache.org/viewvc/spark/site/news/two-weeks-to-spark-summit-2014.html?rev=1732240&r1=1732239&r2=1732240&view=diff
==
--- spark/site/news/two-wee
Author: yhuai
Date: Wed Feb 24 23:07:27 2016
New Revision: 1732240
URL: http://svn.apache.org/viewvc?rev=1732240&view=rev
Log:
Add links to ASF on the nav bar
Modified:
spark/_layouts/global.html
spark/site/community.html
spark/site/documentation.html
spark/site/downloads.
by SPARK-13383. So, I am fixing the test.
Author: Yin Huai
Closes #11355 from yhuai/SPARK-13383-fix-test.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/cbb0b65a
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/cbb0b
Repository: spark
Updated Branches:
refs/heads/master 5a7af9e7a -> 2b042577f
[SPARK-13092][SQL] Add ExpressionSet for constraint tracking
This PR adds a new abstraction called an `ExpressionSet` which attempts to
canonicalize expressions to remove cosmetic differences. Deterministic
express
Repository: spark
Updated Branches:
refs/heads/master 50e60e36f -> 8afe49141
[SPARK-12941][SQL][MASTER] Spark-SQL JDBC Oracle dialect fails to map string
datatypes to Oracle VARCHAR datatype
## What changes were proposed in this pull request?
This Pull request is used for the fix SPARK-12941
Repository: spark
Updated Branches:
refs/heads/master 8afe49141 -> 26ac60806
[SPARK-13487][SQL] User-facing RuntimeConfig interface
## What changes were proposed in this pull request?
This patch creates the public API for runtime configuration and an
implementation for it. The public runtime