Repository: spark
Updated Branches:
refs/heads/branch-2.0 b3845fede -> b430aa98c
[SPARK-15431][SQL][HOTFIX] ignore 'list' command testcase from CliSuite for now
## What changes were proposed in this pull request?
The test cases for `list` command added in `CliSuite` by PR #13212 can not run
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13276#issuecomment-222183974
Merging to master and branch 2.0.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13212#discussion_r64926605
--- Diff:
sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/CliSuite.scala
---
@@ -238,4 +238,23 @@ class CliSuite extends
Repository: spark
Updated Branches:
refs/heads/master d5911d117 -> 6f95c6c03
[SPARK-15431][SQL][HOTFIX] ignore 'list' command testcase from CliSuite for now
## What changes were proposed in this pull request?
The test cases for `list` command added in `CliSuite` by PR #13212 can not run
in
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13276#issuecomment-222183379
It causes master builds to fail. I am merging it. Is the bug in the
implementation or in our test code?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13270#discussion_r64922718
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -212,11 +212,46 @@ class SessionCatalog
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13348#issuecomment-222065198
I created the jira because of
https://issues.apache.org/jira/browse/SPARK-15034?focusedCommentId=15301508&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13348#discussion_r64858995
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -55,7 +56,7 @@ object SQLConf {
val WAREHOUSE_PATH
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13348#discussion_r64858379
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -665,7 +666,8 @@ private[sql] class SQLConf extends Serializable
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13343#discussion_r64856240
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -871,6 +879,58 @@ class DDLSuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13348#discussion_r64853158
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -55,7 +56,7 @@ object SQLConf {
val WAREHOUSE_PATH
Repository: spark
Updated Branches:
refs/heads/branch-2.0 702755f92 -> 8e26b74fc
[SPARK-15583][SQL] Disallow altering datasource properties
## What changes were proposed in this pull request?
Certain table properties (and SerDe properties) are in the protected namespace
Repository: spark
Updated Branches:
refs/heads/master 6ab973ec5 -> 3fca635b4
[SPARK-15583][SQL] Disallow altering datasource properties
## What changes were proposed in this pull request?
Certain table properties (and SerDe properties) are in the protected namespace
`spark.sql.sources.`,
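The protected-namespace check the commit describes can be sketched as follows. This is a hypothetical guard using only the `spark.sql.sources.` prefix named in the message, not the PR's actual code:

```scala
// Hypothetical guard: reject attempts to alter table properties that live in
// the protected `spark.sql.sources.` namespace (prefix from the commit message).
object DatasourcePropertyGuard {
  val ProtectedPrefix = "spark.sql.sources."

  def checkAlterable(properties: Map[String, String]): Unit = {
    val illegal = properties.keys.filter(_.startsWith(ProtectedPrefix))
    if (illegal.nonEmpty) {
      throw new IllegalArgumentException(
        s"Cannot alter protected table properties: ${illegal.mkString(", ")}")
    }
  }
}
```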
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13341#issuecomment-222048281
Thanks. Merging to master and branch 2.0.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13336#issuecomment-222045290
sorry. It has been fixed.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13336#issuecomment-222045184
Seems it breaks the 1.6 build?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13343#discussion_r64849728
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -871,6 +879,58 @@ class DDLSuite extends QueryTest
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13341#issuecomment-222044580
lgtm
---
lic constructor to use
SparkSession.builder.getOrCreate and removes isRootContext from SQLContext.
## How was this patch tested?
Existing tests.
Author: Yin Huai <yh...@databricks.com>
Closes #13310 from yhuai/SPARK-15532.
(cherry picked from commit 3ac2363d757cc9cebc627974f17ecda3a263efdf)
S
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13310#issuecomment-222026576
Merging to master and branch 2.0.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13310#discussion_r64823932
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -69,13 +67,30 @@ class SQLContext private[sql](
// Note: Since Spark 2.0
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13315#issuecomment-221997409
lgtm
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/1#issuecomment-221968733
lgtm
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12313#issuecomment-221966605
@rdblue Thank you for this PR. Those improvements sound good. I chatted
with others. Here are two questions.
1. `sqlContext.table("source").write.byName.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13315#discussion_r64794004
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -289,37 +289,53 @@ case class TruncateTableCommand(
val
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13315#discussion_r64793761
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -289,37 +289,53 @@ case class TruncateTableCommand(
val
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13315#issuecomment-221949749
`sql("truncate table emp16")` should delete all partitions, right?
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13280#issuecomment-221935404
@rdblue Is there any perf evaluation of this new version that we can refer to?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13310#discussion_r64783746
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -70,6 +70,27 @@ object SQLConf {
.intConf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13302#discussion_r64688076
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -288,9 +288,10 @@ case class TruncateTableCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13290#discussion_r64687394
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1448,6 +1450,37 @@ class Analyzer
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13290#discussion_r64686215
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1448,6 +1450,37 @@ class Analyzer
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13307#issuecomment-221753653
Seems we should convert those disabled tests to our tests? Or should we move
those disabled tests into HiveQuerySuite or HiveWindowFunctionQuerySuite?
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13310#issuecomment-221753174
oh, some ml tests failed. Let me take a look.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13310#discussion_r64675651
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -103,7 +103,11 @@ class SparkSession private(
* A wrapped version
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13310#discussion_r64675631
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/MultiSQLContextsSuite.scala ---
@@ -0,0 +1,96 @@
+/*
+* Licensed to the Apache Software
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13310#discussion_r64675634
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -70,6 +70,27 @@ object SQLConf {
.intConf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13310#discussion_r64671030
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -70,6 +70,27 @@ object SQLConf {
.intConf
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/13310
[SPARK-15532] [SQL] Add SQLConf.ALLOW_MULTIPLE_CONTEXTS back and make the
error message configurable
## What changes were proposed in this pull request?
This PR adds
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13302#discussion_r64667194
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -288,9 +288,10 @@ case class TruncateTableCommand
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13302#issuecomment-221700058
lgtm
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/11371#discussion_r64521117
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1443,6 +1445,32 @@ class Analyzer
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/13290
[SQL] Prevent illegal NULL propagation when filtering outer-join results
## What changes were proposed in this pull request?
This is another approach for addressing SPARK-13484 (the first
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/11371#discussion_r64506496
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1443,6 +1445,32 @@ class Analyzer
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13273#issuecomment-221324600
test this please
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13212#discussion_r64325475
--- Diff:
sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/CliSuite.scala
---
@@ -238,4 +238,23 @@ class CliSuite extends
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13236#issuecomment-221133646
@rdblue Thanks. That is a good point :) I am closing this PR.
---
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/13236
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13236#issuecomment-221080895
We support different versions of Hive metastore (for example, Spark depends
on Hive 1.2.1 but we can talk to a metastore using Hive 0.12). Also, the Hadoop
used
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13236#issuecomment-221069266
I can create a Hadoop Configuration inside HiveClientImpl and use it to
create the HiveConf. The main issue is that we cannot pass a Hadoop
Configuration
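The approach described in the comment above can be sketched as follows. This is an illustrative fragment, not the PR's code; it assumes Hive and Hadoop on the classpath and uses the real `HiveConf(Configuration, Class)` constructor:

```scala
// Sketch: build a fresh Hadoop Configuration inside the client and derive the
// HiveConf from it, instead of passing a Configuration in from outside.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hive.conf.HiveConf

val hadoopConf = new Configuration()
val hiveConf = new HiveConf(hadoopConf, classOf[HiveConf])
```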
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13236#issuecomment-221050299
@rdblue Thank you for looking at this!
The reason that I added the flag to disable sharing Hadoop classes is that the
Hadoop versions used by Spark and the metastore client may
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13236#discussion_r64157490
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -73,7 +74,7 @@ import org.apache.spark.util.{CircularBuffer
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13236#discussion_r64157471
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -73,7 +74,7 @@ import org.apache.spark.util.{CircularBuffer
Repository: spark
Updated Branches:
refs/heads/master 201a51f36 -> c18fa464f
[SPARK-15280][Input/Output] Refactored OrcOutputWriter and moved serialization
to a new class.
## What changes were proposed in this pull request?
Refactoring: Separated ORC serialization logic from OrcOutputWriter
Repository: spark
Updated Branches:
refs/heads/branch-2.0 4148a9c2c -> 6871deb93
[SPARK-15280][Input/Output] Refactored OrcOutputWriter and moved serialization
to a new class.
## What changes were proposed in this pull request?
Refactoring: Separated ORC serialization logic from
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13066#issuecomment-220805144
Merging to master and branch 2.0.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13066#issuecomment-220795260
LGTM. I will merge this to master and branch 2.0 once the description is
updated. Also regarding "How was this patch tested?", we can say that manual
tests an
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13066#issuecomment-220795180
@seyfe Can you also update the description (since we are doing a
refactoring instead of adding a new public class)?
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13066#issuecomment-220750857
test this please
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13236#issuecomment-220750748
test this please
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13236#discussion_r64123606
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveUtils.scala
---
@@ -129,6 +129,14 @@ private[spark] object HiveUtils extends Logging
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/13236
[SPARK-15455] For IsolatedClientLoader, we need to provide a conf to
disable sharing Hadoop classes
## What changes were proposed in this pull request?
(Please fill in changes proposed
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13068#issuecomment-220728799
Thanks! LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13204#issuecomment-220706346
LGTM
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12714#discussion_r64079773
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -179,6 +173,11 @@ unsupportedHiveNativeCommands
| kw1
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13066#discussion_r64079348
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcRelation.scala ---
@@ -167,39 +166,69 @@ private[sql] class DefaultSource
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12772#issuecomment-220525633
Here, `A\tB\tC\t\t` represents 5 fields and `A\tB\tC\t` represents 4
fields, right?
---
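The field counts in the comment above follow from how trailing empty fields are kept when splitting with a negative limit; a quick illustration (not the PR's code):

```scala
// With limit -1, String.split keeps trailing empty strings, so each trailing
// delimiter contributes an empty field.
val five = "A\tB\tC\t\t".split("\t", -1) // Array("A", "B", "C", "", "") -> 5 fields
val four = "A\tB\tC\t".split("\t", -1)   // Array("A", "B", "C", "")     -> 4 fields
assert(five.length == 5 && four.length == 4)
```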
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12772#issuecomment-220525398
can you add a regression test?
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13204#issuecomment-220525308
I like 3. We can tell which operators are in a single WholeStageCodeGen
operator, and we can also know what the input operators of a WholeStageCodeGen
are.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12772#issuecomment-220499145
ok to test
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13207#issuecomment-220498914
test this please
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12932#issuecomment-220496773
@dosoft Thank you for the PR. What is the memory footprint of a leaked
driver?
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12932#issuecomment-220496577
ok to test
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13205#issuecomment-220476667
lgtm
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13128#issuecomment-220411852
Spark confs are not mutable. For SQL, those metastore-related confs are also
not mutable, considering the long-running metastore client that we have. I do
think
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/10061#discussion_r63822163
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -96,6 +100,7 @@ private[spark] object JsonProtocol
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13156#discussion_r63820108
--- Diff:
sql/hivecontext-compatibility/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala
---
@@ -58,4 +58,16 @@ class HiveContext private[hive
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13128#issuecomment-220203789
@gatorsmile For 2.0, all confs should go into spark conf or sql conf. What
is the reason to change this?
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13111#issuecomment-220203561
@gatorsmile For 2.0, we will use `spark.sql.warehouse.dir` instead of using
`hive.metastore.warehouse.dir` (the support of this has been dropped).
---
Repository: spark
Updated Branches:
refs/heads/branch-2.0 36acf8856 -> f5784459e
[SPARK-15192][SQL] null check for SparkSession.createDataFrame
## What changes were proposed in this pull request?
This PR adds null check in `SparkSession.createDataFrame`, so that we can make
sure the passed
Repository: spark
Updated Branches:
refs/heads/master 32be51fba -> ebfe3a1f2
[SPARK-15192][SQL] null check for SparkSession.createDataFrame
## What changes were proposed in this pull request?
This PR adds null check in `SparkSession.createDataFrame`, so that we can make
sure the passed in
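The eager null check this commit describes can be sketched as follows. This is illustrative only; the real `createDataFrame` builds a DataFrame, which is elided here:

```scala
// Hypothetical sketch of the null check in SPARK-15192: fail fast with a clear
// message instead of surfacing a NullPointerException later.
def createDataFrameSketch[T](data: Seq[T]): Seq[T] = {
  require(data != null, "data cannot be null")
  data
}
```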
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13008#issuecomment-220201533
Thanks! Merging to master and branch 2.0.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/10061#discussion_r63808009
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -96,6 +100,7 @@ private[spark] object JsonProtocol
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/10061#discussion_r63804220
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -96,6 +100,7 @@ private[spark] object JsonProtocol
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13173#discussion_r63732226
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -633,16 +633,16 @@ case class ShowCreateTableCommand(table
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13156#discussion_r63729506
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/MetastoreDataSourcesSuite.scala
---
@@ -622,7 +622,7 @@ class MetastoreDataSourcesSuite extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13156#discussion_r63727680
--- Diff:
sql/hivecontext-compatibility/src/test/scala/org/apache/spark/sql/hive/HiveContextCompatibilitySuite.scala
---
@@ -99,4 +105,41 @@ class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13008#discussion_r63726478
--- Diff: mllib/src/main/scala/org/apache/spark/mllib/fpm/FPGrowth.scala ---
@@ -116,7 +116,7 @@ object FPGrowthModel extends Loader[FPGrowthModel
ks.com>
Closes #13157 from yhuai/SPARK-14346-fix-scala2.10.
(cherry picked from commit 2a5db9c140b9d60a5ec91018be19bec7b80850ee)
Signed-off-by: Yin Huai <yh...@databricks.com>
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commi
;
Closes #13157 from yhuai/SPARK-14346-fix-scala2.10.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2a5db9c1
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/2a5db9c1
Diff: http://git-wip-us.apache.org/repos/asf/s
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13157#issuecomment-219897800
I am merging this.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13157#issuecomment-219885469
tested locally. The fix is good.
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/13157
[SPARK-14346] Fix scala-2.10 build
## What changes were proposed in this pull request?
Scala 2.10 build was broken by
https://github.com/apache/spark/pull/13079/files. I am reverting the change
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13079#discussion_r63620803
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -200,6 +207,7 @@ case class SimpleCatalogRelation
Repository: spark
Updated Branches:
refs/heads/branch-2.0 7b62b7c11 -> 2dddec40d
[SPARK-14346][SQL] Native SHOW CREATE TABLE for Hive tables/views
## What changes were proposed in this pull request?
This is a follow-up of #12781. It adds native `SHOW CREATE TABLE` support for
Hive tables
Repository: spark
Updated Branches:
refs/heads/master 8e8bc9f95 -> b674e67c2
[SPARK-14346][SQL] Native SHOW CREATE TABLE for Hive tables/views
## What changes were proposed in this pull request?
This is a follow-up of #12781. It adds native `SHOW CREATE TABLE` support for
Hive tables and
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13079#issuecomment-219878464
LGTM. I am merging this to master and 2.0. Let's change the error message
and exception class in a separate PR.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13079#discussion_r63618404
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -626,40 +626,142 @@ case class ShowCreateTableCommand(table
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13153#issuecomment-219877343
Maybe also put the size comparison in the description?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13156#discussion_r63613203
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/MultiDatabaseSuite.scala ---
@@ -202,7 +202,8 @@ class MultiDatabaseSuite extends QueryTest