Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/17521
@gatorsmile @cloud-fan @ueshin Sorry, I was in transit from work. Sure, I
will give it a try. However, I wanted to understand this a bit more. In my
understanding, the current problem we are
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17512
**[Test build #75534 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75534/testReport)**
for PR 17512 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/17532
I don't think this is directly related, but I tried to make a similar change
to the Spark Scala implementation of this type of conversion and couldn't make
it work, just because it changed a bunch of
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17512#discussion_r109833836
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala ---
@@ -456,7 +460,8 @@ class CatalogImpl(sparkSession: SparkSession)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17258
**[Test build #3638 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3638/testReport)**
for PR 17258 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17535
**[Test build #75533 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75533/testReport)**
for PR 17535 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17526
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
GitHub user carsonwang opened a pull request:
https://github.com/apache/spark/pull/17535
[SPARK-20222][SQL] Bring back the Spark SQL UI when executing queries in
Spark SQL CLI
## What changes were proposed in this pull request?
There is no Spark SQL UI when executing queries in
Github user danielyli commented on the issue:
https://github.com/apache/spark/pull/15899
Hey,
Checking in again on this PR. Can we please support `withFilter` for pair
RDDs? For-expressions are a central sugar in Scala syntax, and without them
developers are hampered in
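For context, a guard in a Scala for-expression desugars into a `withFilter` call, which is why the method's absence blocks this sugar. A minimal sketch over a plain `List` (standing in for the pair RDD, which is not shown here):

```scala
val pairs = List(("a", 1), ("b", 2), ("c", 3))

// A for-expression with a guard...
val keys = for ((k, v) <- pairs if v > 1) yield k

// ...desugars into roughly this chain, which requires a withFilter method:
val keysDesugared = pairs
  .withFilter { case (_, v) => v > 1 }
  .map { case (k, _) => k }
```

Both forms yield the same result, so a type without `withFilter` cannot be used with guards in for-expressions.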
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17534
Added one more:
2. '/applications/[app-id]/stages/[stage-id]' in REST API, remove the
redundant description "?status=[active|complete|pending|failed] list only
stages in the state."
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17511#discussion_r109833091
--- Diff: R/pkg/R/DataFrame.R ---
@@ -557,7 +557,7 @@ setMethod("insertInto",
jmode <- convertToJSaveMode(ifelse(overwrite,
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17534
Because I only found the problem today, while developing against this API.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17525
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75527/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17525
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17525
**[Test build #75527 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75527/testReport)**
for PR 17525 at commit
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109830934
--- Diff: core/src/main/scala/org/apache/spark/status/api/v1/api.scala ---
@@ -111,7 +115,11 @@ class RDDDataDistribution private[spark](
val
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109832240
--- Diff: core/src/main/scala/org/apache/spark/storage/StorageUtils.scala
---
@@ -176,26 +178,51 @@ class StorageStatus(val blockManagerId:
BlockManagerId,
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109830395
--- Diff: core/src/main/scala/org/apache/spark/ui/exec/ExecutorsPage.scala
---
@@ -115,8 +115,9 @@ private[spark] object ExecutorsPage {
val
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109830705
--- Diff: core/src/main/scala/org/apache/spark/ui/exec/ExecutorsPage.scala
---
@@ -81,6 +115,11 @@ private[spark] object ExecutorsPage {
val
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109831142
--- Diff: core/src/main/scala/org/apache/spark/storage/StorageUtils.scala
---
@@ -35,7 +35,13 @@ import org.apache.spark.internal.Logging
* class
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/17534
Why didn't you add this to your last PR? What other related changes do you
want to make? Making several small closely related PRs is frowned on
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17521
@dilipbiswal Can you please try it based on what @cloud-fan and @ueshin
suggested? Does it resolve the issue you report?
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109827813
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf)
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17471
cc @cloud-fan / @ueshin / @sameeragarwal can you review this?
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109827145
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf)
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109827042
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf)
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109827045
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf)
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109826890
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf)
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109826847
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf)
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17532
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17532
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75530/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17532
**[Test build #75530 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75530/testReport)**
for PR 17532 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17533
**[Test build #75532 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75532/testReport)**
for PR 17533 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17534
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17532
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75528/
Test PASSed.
---
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17534
[SPARK-20218] '/applications/[app-id]/stages' in REST API, add description.
## What changes were proposed in this pull request?
'/applications/[app-id]/stages' in rest api.status
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17532
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17532
**[Test build #75528 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75528/testReport)**
for PR 17532 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17533
**[Test build #75531 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75531/testReport)**
for PR 17533 at commit
Github user ueshin commented on the issue:
https://github.com/apache/spark/pull/17521
I agree with @cloud-fan's suggestion, the `ConfigEntryWithDefaultFunction`
approach. Ideally the default value should be fixed when the `SQLConf` instance
is created, using the timezone in the JVM at that
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17533
**[Test build #75529 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75529/testReport)**
for PR 17533 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17532
**[Test build #75530 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75530/testReport)**
for PR 17532 at commit
GitHub user jinxing64 opened a pull request:
https://github.com/apache/spark/pull/17533
[SPARK-20219] Schedule tasks based on size of input from ShuffledRDD
## What changes were proposed in this pull request?
When data is highly skewed on `ShuffledRDD`, it makes sense to
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17532
cc @jkbradley @MLnick @holdenk
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17532
**[Test build #75528 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75528/testReport)**
for PR 17532 at commit
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/17532
[SPARK-20214][ML] Make sure converted csc matrix has sorted indices
## What changes were proposed in this pull request?
`_convert_to_vector` converts a scipy sparse matrix to csc matrix for
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17520
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17520
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75526/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17525
**[Test build #75527 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75527/testReport)**
for PR 17525 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17520
**[Test build #75526 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75526/testReport)**
for PR 17520 at commit
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/17525
Thanks for the change. It is easier to understand when things are being
triggered now. LGTM.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17521
These test cases are timezone sensitive, right? If so, the changes made by
@dilipbiswal are reasonable to me.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17521
also cc @ueshin , I think the default value of `SESSION_LOCAL_TIMEZONE`
should always be the current timezone in the JVM. We may need something like
`ConfigEntryWithDefaultFunction`, so that the
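The idea can be sketched as a config entry whose default is a function evaluated on every read, rather than a value captured at definition time. The names below are hypothetical, taken from the discussion rather than actual Spark internals:

```scala
// Hypothetical sketch: the default is a thunk, re-evaluated on each read,
// so it always reflects the current JVM default timezone.
class ConfigEntryWithDefaultFunction[T](val key: String, defaultFn: () => T) {
  def defaultValue: T = defaultFn()
}

val sessionLocalTimezone = new ConfigEntryWithDefaultFunction[String](
  "spark.sql.session.timeZone",
  () => java.util.TimeZone.getDefault.getID)
```

Changing the JVM default timezone between two reads of `defaultValue` is then reflected, unlike a default captured once as a plain value.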
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/17525
left one minor comment, otherwise LGTM
---
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/17525#discussion_r109821142
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/ProcessingTimeExecutorSuite.scala
---
@@ -35,6 +42,56 @@ class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17521
`SESSION_LOCAL_TIMEZONE` sounds like a static SQLConf. Do we allow users to
change it at runtime? It sounds like our code does not allow users to
change/refresh it.
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/17521
@cloud-fan @nsyca A quick update: I ran the problematic tests and they pass
with a change that moves the time zone setting code in PlanTest.scala to just
before we create the SQLConf, like the following
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17531
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17531
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75525/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17531
**[Test build #75525 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75525/testReport)**
for PR 17531 at commit
Github user merlintang commented on the issue:
https://github.com/apache/spark/pull/16165
@vanzin sorry, I meant 2.1.1
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17525
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75524/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17525
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17525
**[Test build #75524 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75524/testReport)**
for PR 17525 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17459
Please modify the PR title also. Thanks.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17459#discussion_r109817633
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/IndexedRowMatrix.scala
---
@@ -108,8 +108,64 @@ class IndexedRowMatrix
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17459#discussion_r109817581
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/IndexedRowMatrixSuite.scala
---
@@ -89,11 +89,19 @@ class IndexedRowMatrixSuite
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17511#discussion_r109816897
--- Diff: R/pkg/R/DataFrame.R ---
@@ -557,7 +557,7 @@ setMethod("insertInto",
jmode <- convertToJSaveMode(ifelse(overwrite, "overwrite",
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17512#discussion_r109815601
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala ---
@@ -367,6 +367,7 @@ class CatalogImpl(sparkSession: SparkSession)
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
Hi, @cloud-fan and @gatorsmile .
If there is something to do more, please let me know.
---
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/17471
cc - @rxin, @kayousterhout, @squito
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16165
2.1 has already shipped and this is not a regression.
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/17521
@cloud-fan @nsyca Changing it to be lazy works for the test cases I have
tried. I am running the full tests now.
---
Github user nsyca commented on the issue:
https://github.com/apache/spark/pull/17521
@dilipbiswal has narrowed down that this PR is changing the behaviour. He
will continue to investigate and will post an update in the next hour or so
before he calls it a day.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17521
@nsyca can you try marking `SQLConf.SESSION_LOCAL_TIMEZONE` as `lazy val`?
I think the issue is that, once `object SQLConf` is instantiated, the default
value for `SESSION_LOCAL_TIMEZONE` is
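The eager-capture problem being described can be reproduced outside Spark; an illustrative sketch (names are not actual SQLConf code):

```scala
import java.util.TimeZone

object TimezoneDefaults {
  // A plain val is evaluated once, when the object is first initialized,
  // freezing whatever the JVM default timezone was at that moment.
  val eagerTz: String = TimeZone.getDefault.getID

  // A lazy val is evaluated on first access, so a timezone set later
  // (e.g. by a test harness) is still picked up.
  lazy val lazyTz: String = TimeZone.getDefault.getID
}
```

Reading `eagerTz` forces initialization; if `TimeZone.setDefault` is called afterwards, `eagerTz` keeps the old value while the first read of `lazyTz` sees the new one.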
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17520
**[Test build #75526 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75526/testReport)**
for PR 17520 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17336
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17524
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75523/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17524
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17524
**[Test build #75523 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75523/testReport)**
for PR 17524 at commit
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/17336
Thanks a lot for the second update! This LGTM
Merging with master
---
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/17336#discussion_r109810945
--- Diff: mllib/src/test/scala/org/apache/spark/ml/fpm/FPGrowthSuite.scala
---
@@ -85,38 +85,58 @@ class FPGrowthSuite extends SparkFunSuite with
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17531
**[Test build #75525 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75525/testReport)**
for PR 17531 at commit
GitHub user ericl opened a pull request:
https://github.com/apache/spark/pull/17531
[SPARK-20217][core] Executor should not fail stage if killed task throws
non-interrupted exception
## What changes were proposed in this pull request?
If tasks throw non-interrupted
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17501
---
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/17501
Thanks! As you can tell, interrupts happen for me too : P
Merging with master
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17525
**[Test build #75524 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75524/testReport)**
for PR 17525 at commit
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/17525#discussion_r109805354
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/TriggerExecutor.scala
---
@@ -83,6 +84,6 @@ case class
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/17525#discussion_r109805306
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala
---
@@ -207,80 +207,91 @@ class StreamingQuerySuite extends
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17398
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17512#discussion_r109803579
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala ---
@@ -456,7 +460,8 @@ class CatalogImpl(sparkSession: SparkSession)
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/17398
LGTM. Merging to master. Thanks!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17520
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75522/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17520
**[Test build #75522 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75522/testReport)**
for PR 17520 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17520
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17524
**[Test build #75523 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75523/testReport)**
for PR 17524 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17524
retest this please
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17512#discussion_r109798818
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/catalog/Catalog.scala ---
@@ -447,6 +447,7 @@ abstract class Catalog {
/**
*
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17512#discussion_r109798528
--- Diff: python/pyspark/sql/catalog.py ---
@@ -276,14 +277,24 @@ def clearCache(self):
@since(2.0)
def refreshTable(self,
Github user merlintang commented on the issue:
https://github.com/apache/spark/pull/16165
should we backport this into 2.1? @vanzin
---