Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16975#discussion_r102569401
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -470,12 +470,25 @@ class SparkContext(config: SparkConf) extends Logging
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16975#discussion_r101887609
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -466,7 +466,7 @@ object SparkSubmit extends CommandLineUtils
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16975#discussion_r101887589
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -466,7 +466,7 @@ object SparkSubmit extends CommandLineUtils
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16975#discussion_r101857385
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -466,7 +466,7 @@ object SparkSubmit extends CommandLineUtils
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16975#discussion_r101857230
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -466,7 +466,7 @@ object SparkSubmit extends CommandLineUtils
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/16975
[SPARK-19522] Fix executor memory in local-cluster mode
## What changes were proposed in this pull request?
```
bin/spark-shell --master local-cluster[2,1,2048]
```
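For readers unfamiliar with this master URL, the three bracketed numbers are the number of workers, cores per worker, and memory per worker in MB. A rough Java sketch of that parsing, for illustration only (Spark's actual parser is Scala code in SparkContext; the class and regex below are hypothetical):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LocalClusterMaster {
    // Hypothetical parser: the real one is a Scala regex in SparkContext.
    // The master string encodes [numWorkers, coresPerWorker, memoryPerWorkerMB].
    static final Pattern LOCAL_CLUSTER =
        Pattern.compile("local-cluster\\[\\s*(\\d+)\\s*,\\s*(\\d+)\\s*,\\s*(\\d+)\\s*\\]");

    static int[] parse(String master) {
        Matcher m = LOCAL_CLUSTER.matcher(master);
        if (!m.matches()) {
            throw new IllegalArgumentException("Not a local-cluster master: " + master);
        }
        return new int[] {
            Integer.parseInt(m.group(1)),  // workers
            Integer.parseInt(m.group(2)),  // cores per worker
            Integer.parseInt(m.group(3))   // memory per worker, in MB
        };
    }

    public static void main(String[] args) {
        int[] p = parse("local-cluster[2,1,2048]");
        System.out.println(p[0] + " workers, " + p[1] + " core(s) each, " + p[2] + " MB each");
    }
}
```

The bug the PR fixes concerns how that third field (per-worker memory) is propagated to executors.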
Github user andrewor14 closed the pull request at:
https://github.com/apache/spark/pull/13899
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/13899
Closing for now; too many conflicts.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/16819
I agree. Resource managers generally expect applications to request more
than what's available already so we don't have to do it again ourselves in
Spark.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/16823
This is a bad idea! First it breaks backward compatibility, and second, we
intentionally didn't want to make it so general that the user can pass in any
objects. Can you please close this PR
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15396#discussion_r98218538
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -1589,7 +1589,8 @@ abstract class RDD[T: ClassTag](
* This is introduced
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/16081
and 2.0
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/16073
and 2.0
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/16073
LGTM merging into master 2.1.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/16081
Ok, merging into master 2.1
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15462
@kiszk is there a JIRA associated specifically with adding tests for
`InMemoryRelation`?
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15462
LGTM, merging into master 2.1 thanks.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15993
Sounds good. Merging into master 2.1.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15462#discussion_r89205894
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala ---
@@ -20,18 +20,83 @@ package
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15462
retest this please
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15462#discussion_r89205780
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala ---
@@ -58,6 +123,12 @@ class
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15462#discussion_r89205541
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala ---
@@ -20,18 +20,83 @@ package
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15462#discussion_r89205861
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala ---
@@ -246,4 +317,59 @@ class
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15462#discussion_r89205730
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala ---
@@ -20,18 +20,83 @@ package
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15978
(Oops never mind, not my fault! :p)
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15978
@cloud-fan can you make a patch for 2.0?
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15978
Oops, that was my fault. Thanks merging into master 2.1
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15811
I did
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15896
I personally think `UNCACHE TABLE IF EXISTS` is best. It preserves the old
behavior but lets the user make sure a table is not cached if they really want.
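The trade-off can be illustrated with a toy model of the proposed semantics (a hypothetical class, not Spark code): a plain uncache of a table that was never cached is an error, while IF EXISTS turns it into a no-op.

```java
import java.util.HashSet;
import java.util.NoSuchElementException;
import java.util.Set;

public class CacheRegistry {
    // Toy model of the proposed semantics, not Spark code:
    // plain uncache of an uncached table fails; IF EXISTS makes it a no-op.
    private final Set<String> cached = new HashSet<>();

    void cache(String table) {
        cached.add(table);
    }

    void uncache(String table, boolean ifExists) {
        boolean removed = cached.remove(table);
        if (!removed && !ifExists) {
            throw new NoSuchElementException("Table " + table + " is not cached");
        }
    }

    public static void main(String[] args) {
        CacheRegistry registry = new CacheRegistry();
        registry.cache("t");
        registry.uncache("t", false);  // fine: the table was cached
        registry.uncache("t", true);   // no-op thanks to IF EXISTS
        System.out.println("IF EXISTS made the second uncache a no-op");
    }
}
```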
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15953
Makes sense. This LGTM merging into master and 2.0. Thanks.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15811
LGTM merging into master thanks.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15811
Looks good, just one question.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15811#discussion_r88115031
--- Diff: python/pyspark/rdd.py ---
@@ -181,6 +181,7 @@ def __init__(self, jrdd, ctx,
jrdd_deserializer=AutoBatchedSerializer(PickleSeri
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15811
ok to test
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15833#discussion_r88113719
--- Diff: core/src/main/scala/org/apache/spark/deploy/Client.scala ---
@@ -221,7 +221,9 @@ object Client {
val conf = new SparkConf
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15766
retest this please
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15756
retest this please
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15739
Also there's another patch trying to solve the same issue: #15742
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15756
LGTM. That's a massive amount of time spent in `Class.getSimpleName`!
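`Class.getSimpleName` recomputes the name on every call, so hot paths like event-log JSON serialization pay for it repeatedly. A minimal sketch of the caching idea, assuming a simple memoization map (illustrative only; the actual fix lives in Spark's `JsonProtocol`):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SimpleNameCache {
    // Cache the computed simple names so each class pays the cost only once.
    private static final Map<Class<?>, String> CACHE = new ConcurrentHashMap<>();

    static String simpleNameOf(Class<?> cls) {
        return CACHE.computeIfAbsent(cls, c -> c.getSimpleName());
    }

    public static void main(String[] args) {
        System.out.println(simpleNameOf(String.class));  // String
        System.out.println(simpleNameOf(String.class));  // served from the cache
    }
}
```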
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15756#discussion_r86422590
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -540,7 +544,8 @@ private[spark] object JsonProtocol {
def
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15756#discussion_r86422652
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -540,7 +544,8 @@ private[spark] object JsonProtocol {
def
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15756#discussion_r86422521
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -540,7 +544,8 @@ private[spark] object JsonProtocol {
def
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15739
ok to test @vanzin
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15698
LGTM retest this please
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15410
We shouldn't display file names but we should display application names and
IDs, something the user understands. We don't have to do that as part of this
issue.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15458
I see. Then maybe we should add a comment above the config to note that
several commands don't work (e.g. ALTER TABLE) if this is turned on, even if
it's only internal.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15458
Yes that's why it's `internal`
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15456
Merging into master
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15458
JK, actually it doesn't merge in 2.0.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15458
Cool beans. Merging into master 2.0.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15400
This one LGTM I'm merging it into master. Thanks for working on this.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15400
Usually we retest the PR if it's been a few days since it last ran tests.
We have had build breaks before where we merged a PR that passed tests a long
time ago.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15410
ok to test
I think the idea is good, but it would be a better UX if we display the
pending applications as rows in the existing table (or a new one) and indicate
there that it's still
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15410#discussion_r83044768
--- Diff: core/src/main/scala/org/apache/spark/deploy/history/HistoryPage.scala ---
@@ -38,6 +39,13 @@ private[history] class HistoryPage(parent
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15396
Looks good. I left a suggestion that I think will make the code cleaner.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15396#discussion_r83043442
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -1589,7 +1589,8 @@ abstract class RDD[T: ClassTag](
* This is introduced
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15396#discussion_r83042522
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -1589,7 +1589,8 @@ abstract class RDD[T: ClassTag](
* This is introduced
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15400
retest this please
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15353
@keypointt by "working" I mean it should be replaced by a line break, not a
space
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15405
Thanks for working on this. It's great to see how small the patch turned
out to be!
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15405#discussion_r82671287
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala ---
@@ -637,6 +637,16 @@ private[deploy] class Master
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15405
add to whitelist
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15347
OK, this change by itself LGTM. @dafrista would you mind creating a
separate JIRA (or point me to an existing one) about the TODO then? Merging
this into master
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15353
But this isn't the original intention, which is to actually add a line
break where `\n` is today. IIRC this works correctly on Chrome but not on
Safari (or the other way round?). If you can make
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15353
Also this is a more general problem, not just for streaming
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15347#discussion_r82042420
--- Diff: core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeExternalSorter.java ---
@@ -145,7 +145,9 @@ private UnsafeExternalSorter
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15350
I think that's OK. This is supposed to be a unit test for the BlockManager,
not how BlockManager interacts with the rest of the system. LGTM
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15290
Merging into master
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15290
ok to test
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15247#discussion_r81218774
--- Diff: core/src/main/scala/org/apache/spark/deploy/history/ApplicationHistoryProvider.scala ---
@@ -109,4 +109,11 @@ private[history] abstract class
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15247#discussion_r81219143
--- Diff: core/src/main/scala/org/apache/spark/deploy/history/ApplicationHistoryProvider.scala ---
@@ -109,4 +109,11 @@ private[history] abstract class
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15247
LGTM merging into master 2.0, thanks.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15247#discussion_r81219049
--- Diff: core/src/main/scala/org/apache/spark/status/api/v1/ApiRootResource.scala ---
@@ -222,6 +222,7 @@ private[spark] object ApiRootResource
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15247#discussion_r81218992
--- Diff: core/src/main/scala/org/apache/spark/deploy/history/HistoryServer.scala ---
@@ -182,6 +182,10 @@ class HistoryServer
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15295
retest this please
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15221
This looks reasonable. Merging into master. I will leave it out from
branch-2.0 just in case.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15295#discussion_r81150854
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala ---
@@ -791,7 +791,7 @@ object SparkSession {
// Get the session
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15295
LGTM. Pretty straightforward.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15244
Thanks merged into master 2.0 and 1.6
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15243#discussion_r80478992
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/WorkerWatcher.scala ---
@@ -21,7 +21,7 @@ import org.apache.spark.internal.Logging
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15137
Got it. I think back when I wrote that comment we didn't yet support cluster mode with Python. It's just a little confusing if we're reading that comment in isolation
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15137
I've merged it. One more thing, would you mind correcting the comment in
`PythonRunner#formatPath`? Right now it says "we currently only support local
python files", which is apparentl
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15137
Good catch. I'm merging this into master, 2.0 and 1.6. Thanks.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15181
Thanks for the clean up. I'm merging this into master. Because this patch touches multiple files in the critical scheduler code, I'm hesitant about backporting this.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15181#discussion_r79939560
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskResultGetter.scala ---
@@ -118,14 +118,14 @@ private[spark] class TaskResultGetter(sparkEnv
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15181#discussion_r79894168
--- Diff: core/src/main/scala/org/apache/spark/scheduler/TaskResultGetter.scala ---
@@ -118,14 +118,14 @@ private[spark] class TaskResultGetter(sparkEnv
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15159#discussion_r79867613
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -70,6 +70,8 @@ private[deploy] class SparkSubmitArguments(args
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15001
LGTM, merging into master 2.0 thanks @zsxwing!
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15001#discussion_r79449912
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/ui/ApplicationPage.scala ---
@@ -70,6 +70,16 @@ private[ui] class ApplicationPage(parent
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15133
Yeah `SparkSession` will be the new thing moving forward. `SparkContext` is
kind of just a legacy thing.
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15133
We should probably just make it a random UUID in all cases to be
consistent. I don't know if people check whether `spark.app.name` is set, so
that might be a backward compatibility concern
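The UUID-default idea can be sketched as follows (hypothetical helper, not the actual SparkSession code): use the configured name when present, otherwise generate a random one.

```java
import java.util.UUID;

public class AppNameDefault {
    // Hypothetical helper: fall back to a random UUID when no name is configured.
    static String appNameOrDefault(String configured) {
        if (configured != null && !configured.isEmpty()) {
            return configured;
        }
        return "app-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        System.out.println(appNameOrDefault("my-app"));  // my-app
        System.out.println(appNameOrDefault(null));      // random, e.g. app-<uuid>
    }
}
```

The backward-compatibility concern above is exactly that callers who check whether a name was explicitly set could no longer tell the difference.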
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/14338
I am opposed to this change because these are internal states that were
never intended for the user to tweak. If there is a specific problem you're
trying to fix then we should address that more
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/14644
Tim, please file a JIRA!
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/14765
From the JIRA description it seems that this issue arises not only in the
context of DA. If that's the case then we should definitely not just
arbitrarily remove code from
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/14862
What exactly does this change buy us? It doesn't allow us to remove any of
the inheritance code. `TestHiveSessionState` is not fundamentally different
from `HiveSessionState` so I think it's
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15099
LGTM!
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/13822
By the way I'm not super active in this community anymore. If you want a
quicker response you could try your luck by pinging @yhuai or @cloud-fan
instead.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13822#discussion_r79247914
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -660,6 +662,10 @@ case class ShowPartitionsCommand
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13822#discussion_r79247651
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala ---
@@ -595,6 +595,19 @@ class HiveDDLSuite
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/13822#discussion_r79247588
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala ---
@@ -595,6 +595,19 @@ class HiveDDLSuite