Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13365
LGTM, thanks for doing this. Merging into master 2.0
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but you are still getting
this email, please contact infrastructure at infrastructure@apache.org or file
a JIRA ticket with INFRA.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13420
LGTM, I'll add the semicolon. Merging into master 2.0
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13412
LGTM merging into master 2.0
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13383
Merging into master 2.0
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13406
LGTM
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13406
Merging into master 2.0
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13415#discussion_r65285509
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -884,6 +884,10 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13415#discussion_r65285449
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/ParserUtils.scala
---
@@ -50,6 +50,17 @@ object ParserUtils
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13415#discussion_r65285301
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/ParserUtils.scala
---
@@ -50,6 +50,17 @@ object ParserUtils
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13427
looks good
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13427
FYI @gatorsmile
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/12618
@gatorsmile Why do we do name validation in so many places? Many of them
don't seem to have anything to do with specifying datasource paths. Can you try
to keep the behavior
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12618#discussion_r65272445
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -849,7 +894,7 @@ class SessionCatalog
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12618#discussion_r65272406
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -759,15 +804,15 @@ class SessionCatalog
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12618#discussion_r65272233
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -180,7 +237,7 @@ class SessionCatalog
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12618#discussion_r65272254
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -190,7 +247,7 @@ class SessionCatalog
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12618#discussion_r65272201
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -164,8 +215,14 @@ class SessionCatalog
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12618#discussion_r65272109
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -87,19 +89,68 @@ class SessionCatalog
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12618#discussion_r65272047
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -71,6 +71,8 @@ class SessionCatalog
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12618#discussion_r65271997
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -293,19 +345,18 @@ class SessionCatalog
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13384
Looks like that logic is removed in #13082 and we forgot to update the doc.
LGTM
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13379
ok to test
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13379
@jkbradley what do you think?
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13415#discussion_r65267470
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -937,7 +937,13 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13413
@techaddict please add a test
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13413#discussion_r65267181
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -855,7 +855,8 @@ class SessionCatalog
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13088#discussion_r65266575
--- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/Main.scala
---
@@ -88,12 +89,25 @@ object Main extends Logging
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13088#discussion_r65266515
--- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/Main.scala
---
@@ -88,12 +89,25 @@ object Main extends Logging
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13386
LGTM, minor comments only.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13386#discussion_r65266137
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionState.scala ---
@@ -139,22 +139,6 @@ private[hive] class HiveSessionState
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13386#discussion_r65265900
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -936,7 +936,39 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13415#discussion_r65265169
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -937,7 +937,13 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13395#discussion_r65255370
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -936,7 +936,47 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13416
retest this please
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13334
@techaddict please close this PR as this is not the right fix.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13407
or just add a special appResource like `PYSPARK_SHELL` or `SPARK_SHELL`
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13037
@kayousterhout? In the past she has had some opinions on similar issues.
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/13416
[SPARK-15596][SPARK-15635][SQL] ALTER TABLE RENAME fixes
## What changes were proposed in this pull request?
**SPARK-15596**: Even after we renamed a cached table, the plan would
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13316#discussion_r64947551
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -1255,6 +1255,7 @@ class Dataset[T] private[sql](
* :: Experimental
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13332#issuecomment-18837
OK, looks good. Merging into master 2.0
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13354#issuecomment-18441
Yes, this particular place broke the build. Can you please close this PR?
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13357#issuecomment-18282
Yeah I don't think we want to change this behavior. The motivation you
described in the JIRA isn't super compelling.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13321#discussion_r64946601
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala ---
@@ -347,15 +347,16 @@ private[sql] object DataSourceScanExec
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13330#issuecomment-16763
Merging into master 2.0
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13344#issuecomment-16500
Merging into master 2.0
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13345#issuecomment-16174
Merging into master 2.0
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13349#issuecomment-15897
Merging into master 2.0
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13352#issuecomment-15279
@dongjoon-hyun actually the `builder.sparkContext` method was added
recently so there are other existing places where we could use that. Would you
mind submitting
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13321#issuecomment-14850
@watermen Thanks for fixing this. Once you address the remaining comments
I'll go ahead and merge this.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13321#discussion_r64944786
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala ---
@@ -347,15 +347,16 @@ private[sql] object DataSourceScanExec
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13327#discussion_r64944224
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2329,8 +2330,14 @@ class Dataset[T] private[sql](
* @since 2.0.0
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13327#discussion_r64943992
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -50,6 +50,11 @@ case class CatalogStorageFormat
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13330#issuecomment-12493
This looks OK, but in the future @xinhhuynh it's better to include more
changes even in a minor patch so there's less review overhead.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13330#issuecomment-12264
retest this please
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13345#issuecomment-11500
Looks great
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13352#discussion_r64942596
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala ---
@@ -1178,8 +1176,9 @@ private[python] class PythonMLLibAPI
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13352#discussion_r64942547
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala ---
@@ -1178,8 +1176,9 @@ private[python] class PythonMLLibAPI
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13343#discussion_r64858434
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -871,6 +879,58 @@ class DDLSuite extends QueryTest
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13343#discussion_r64855963
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -871,6 +879,58 @@ class DDLSuite extends QueryTest
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13349#issuecomment-222056689
thanks for working on this
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13349#issuecomment-222056682
LGTM
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13315#issuecomment-222042460
Thanks for all the LGTMs. I'm going to merge this into master 2.0.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13341#discussion_r64845679
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -255,6 +255,23 @@ case class
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/13343
[SPARK-15594][SQL] ALTER TABLE SERDEPROPERTIES does not respect partition
spec
## What changes were proposed in this pull request?
These commands ignore the partition spec and change
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/13341
[SPARK-15583][SQL] Disallow altering datasource properties
## What changes were proposed in this pull request?
Certain table properties (and SerDe properties) are in the protected
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13336#issuecomment-221995295
LGTM
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13334#issuecomment-221994991
@techaddict you've misunderstood the JIRA. I removed them from
`HiveCompatibilitySuite` on purpose because they don't pass there. Instead you
should add them back
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13315#issuecomment-221994189
I tested partitioned data source tables too. If you add data back then the
partitions will be created again. I think that's OK.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13319#issuecomment-221978572
m2.0
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13088#issuecomment-221968007
@xwu0226 thanks for working on this. This patch currently exposes too much
to the user. The config is internal and should be kept that way; that's the
reason why
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13088#discussion_r64804303
--- Diff: conf/spark-defaults.conf.template ---
@@ -25,3 +25,4 @@
# spark.serializer
org.apache.spark.serializer.KryoSerializer
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13088#discussion_r64804173
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -43,6 +43,10 @@ class SparkILoop(in0: Option[BufferedReader], out
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13088#discussion_r64803892
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -585,6 +585,18 @@ class SparkSession private(
sparkContext.stop
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13088#discussion_r64803748
--- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/Main.scala
---
@@ -88,10 +88,22 @@ object Main extends Logging
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13088#discussion_r64803716
--- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/Main.scala
---
@@ -88,10 +88,22 @@ object Main extends Logging
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13330#discussion_r64802782
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -1665,7 +1665,7 @@ class Dataset[T] private[sql
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13309#issuecomment-221964506
Merging into master 2.0
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13307#issuecomment-221964196
https://issues.apache.org/jira/browse/SPARK-15576
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13307#issuecomment-221963953
OK, let's do that in a follow-up patch. Thanks, merging into master 2.0.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13315#issuecomment-221949078
@sureshthalamati what behavior did you expect? I would think that
truncating a partitioned table without specifying the specs should just delete
data from all
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13088#issuecomment-221924482
ok to test
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13319#issuecomment-221924363
retest this please
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13315#issuecomment-221751879
@hvanhovell @yhuai
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13307#issuecomment-221751101
@yhuai
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13305#issuecomment-221750415
@sureshthalamati I've taken over this patch at #13315 with a different
solution. Thanks for working on this.
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/13315
[SPARK-15538][SQL] Fix TRUNCATE TABLE for datasource tables
## What changes were proposed in this pull request?
Before:
```
scala> sql("CREATE TABLE boxes (length INT
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13309#issuecomment-221742573
retest this please
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13305#issuecomment-221735811
```
scala> sql("CREATE TABLE boxes (length INT, height INT, width INT) USING
parquet")
scala> (1 to 3).map { i => (i, i * 2, i * 3) }.toD
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13305#issuecomment-221736081
@sureshthalamati I was able to get it working with your example as well. I
think it does work with datasource table. We just need to add a call
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13305#issuecomment-221736590
Actually, it looks like it's a slightly bigger change so we can always do
it in a future patch if you prefer.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13305#issuecomment-221735424
Actually I was able to get `TRUNCATE TABLE` to work on data source tables,
though I have to add a call to `spark.catalog.refreshTable` every time after I
call
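[Editor's note: the manual workaround described in the comment above (truncating a data source table, then refreshing the catalog so cached file listings are invalidated) can be sketched roughly as below. This is an illustrative spark-shell sketch assuming `spark` is the active SparkSession; the `boxes` table comes from the example quoted later in this thread, and this is not the final fix that landed in the PR.]

```scala
// Create and populate a data source (parquet) table.
sql("CREATE TABLE boxes (length INT, height INT, width INT) USING parquet")
sql("INSERT INTO boxes VALUES (1, 2, 3)")

// Truncate it, then refresh the cached relation so subsequent
// reads do not serve stale cached file listings.
sql("TRUNCATE TABLE boxes")
spark.catalog.refreshTable("boxes")

// The table should now read as empty.
sql("SELECT * FROM boxes").show()
```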
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/13307
[SPARK-15539][SQL] DROP TABLE throw exception if table doesn't exist
## What changes were proposed in this pull request?
Same as #13302, but for DROP TABLE.
## How
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13307#issuecomment-221726577
@hvanhovell
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13305#discussion_r64662860
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -292,10 +292,18 @@ case class TruncateTableCommand
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13305#issuecomment-221724081
add to whitelist
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13305#discussion_r64662799
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveCommandSuite.scala
---
@@ -345,6 +345,19 @@ class HiveCommandSuite extends
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13305#issuecomment-221723642
@sureshthalamati can you also add a case for EXTERNAL table? You can also
add `[SPARK-15536]` to the title
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/13302#issuecomment-221722627
Merging into master 2.0
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13305#discussion_r64662478
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -292,10 +292,18 @@ case class TruncateTableCommand
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13305#discussion_r64662387
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -292,10 +292,18 @@ case class TruncateTableCommand