Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14185
**[Test build #62289 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62289/consoleFull)**
for PR 14185 at commit
[`7fe36f5`](https://github.com/apache/spark/commit/7
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/14022
To reduce the risk, how about changing the semantics to
```
* - spark/sparkconf/hiveconf: looks for the value in the Spark config
* - system: looks for the value in the system properties
```
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14185
ok to test
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14185#discussion_r70731128
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkLauncher.java ---
@@ -418,14 +414,26 @@ public SparkAppHandle
startApplication(SparkAppHandle
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14185
This PR should really be referencing SPARK-14702, which is the older bug.
SPARK-16511 should be closed as a duplicate of it.
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14185#discussion_r70730228
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkLauncher.java ---
@@ -418,14 +414,26 @@ public SparkAppHandle
startApplication(SparkAppHandle
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14022
> Wouldn't enabling it by default break backwards compatibility?
Yes, maybe. But having a flag to disable everything would also potentially
break features that rely on it... although you coul
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14190
Hi, @rxin .
Could you review this trivial PR exposing `sql()` in PySpark Shell for
consistency?
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14185#discussion_r70729812
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkLauncher.java ---
@@ -418,14 +414,26 @@ public SparkAppHandle
startApplication(SparkAppHandle
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14184
Hi, @liancheng .
Could you review this PR?
It was made by you at
https://github.com/apache/spark/commit/72981bc8f0d421e2563e2543a8c16a8cc76ad3aa#diff-e59968489e4f36f43010dd7acd6034
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14185#discussion_r70729733
--- Diff:
core/src/test/scala/org/apache/spark/launcher/LauncherBackendSuite.scala ---
@@ -17,15 +17,16 @@
package org.apache.spark.launcher
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14185#discussion_r70729726
--- Diff:
core/src/test/scala/org/apache/spark/launcher/LauncherBackendSuite.scala ---
@@ -17,15 +17,16 @@
package org.apache.spark.launcher
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/14022
Wouldn't enabling it by default break backwards compatibility? I agree that
would be better, but it seems likely that '${...}' may be used in existing
configs.
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14177
Does the Hive metastore not shut down properly even if we do
`sparkSession.stop()` in all the test files? The reason I'm trying to avoid
having `enableHiveMetastore=F` in most test files is that Hi
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14022
> how about a global flag for enabling config expansion
I think that would be more confusing. Why would someone disable expansion?
My only concern with enabling it for all options is
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14022#discussion_r70728894
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/ConfigEntry.scala ---
@@ -99,13 +118,83 @@ private class FallbackConfigEntry[T] (
key:
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/14022
Instead of selectively enabling this for certain confs / config builders,
how about a global flag for enabling config expansion? I think that would be
less likely to be confusing. Also, perhaps a warn
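The expansion being discussed in PR 14022 substitutes `${...}` references inside config values. As a rough stand-alone sketch of that idea (illustrative only, not Spark's actual implementation; the `expand` function and its resolution rules are made up here):

```python
import re

def expand(value, conf, system_props):
    """Replace ${key} with a value from conf, or ${system:key} with a
    system property; unresolvable references are left untouched.
    (Hypothetical helper, not Spark's real ConfigEntry logic.)"""
    def repl(m):
        ref = m.group(1)
        if ref.startswith("system:"):
            return system_props.get(ref[len("system:"):], m.group(0))
        return conf.get(ref, m.group(0))
    return re.sub(r"\$\{([^}]+)\}", repl, value)

conf = {"spark.master": "local[4]"}
props = {"user.name": "alice"}
print(expand("master=${spark.master}, user=${system:user.name}", conf, props))
# -> master=local[4], user=alice
```

This also illustrates the compatibility concern raised above: any existing config value that happens to contain a literal `${...}` would change meaning once expansion is on by default.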
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14079
Just some minor stuff, I'll let people more familiar with the scheduler
comment further.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14139
**[Test build #62288 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62288/consoleFull)**
for PR 14139 at commit
[`82d3711`](https://github.com/apache/spark/commit/8
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
@rxin I think this version is the minimal change. Since the partition
discovery logic is inside HadoopFsRelation in 1.6 and the refresh is triggered
by using lazy val, passing a flag down will introdu
Github user ericl commented on a diff in the pull request:
https://github.com/apache/spark/pull/14022#discussion_r70728529
--- Diff:
core/src/main/scala/org/apache/spark/internal/config/ConfigEntry.scala ---
@@ -99,13 +118,83 @@ private class FallbackConfigEntry[T] (
key:
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14190
**[Test build #62286 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62286/consoleFull)**
for PR 14190 at commit
[`c5dc235`](https://github.com/apache/spark/commit/c
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14139
**[Test build #62287 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62287/consoleFull)**
for PR 14139 at commit
[`5d66df7`](https://github.com/apache/spark/commit/5
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/13670
Following up on my previous post: this works on YARN and the history server
but not on standalone; as a user I can't figure out what's broken.
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/14190
[SPARK-16536][SQL][PYSPARK] Expose `sql` in PySpark Shell
## What changes were proposed in this pull request?
This PR exposes `sql` in PySpark Shell like Scala/R Shells.
**Ba
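The proposal is to give the PySpark shell a top-level `sql` helper like the Scala/R shells have. A minimal stand-alone sketch of the binding pattern (using a made-up `FakeSession` placeholder, since this is not the PR's actual code):

```python
# Illustrative only: FakeSession stands in for SparkSession so the
# example runs without Spark installed.
class FakeSession:
    def sql(self, query):
        # A real SparkSession.sql() returns a DataFrame; here we just echo.
        return f"executed: {query}"

spark = FakeSession()

# The shell startup script can expose the helper by binding the method
# into the interactive namespace alongside `spark`:
sql = spark.sql

print(sql("SELECT 1"))  # -> executed: SELECT 1
```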
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r70728183
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/BlacklistTrackerSuite.scala ---
@@ -0,0 +1,282 @@
+/*
+ * Licensed to the Apache Software Fou
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14139#discussion_r70727924
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -273,6 +273,22 @@ private[hive] class HiveMetastoreCatalog(val cl
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r70727964
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/BlacklistTrackerSuite.scala ---
@@ -0,0 +1,282 @@
+/*
+ * Licensed to the Apache Software Fou
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r70727854
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/BlacklistTrackerSuite.scala ---
@@ -0,0 +1,282 @@
+/*
+ * Licensed to the Apache Software Fou
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r70727727
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/BlacklistIntegrationSuite.scala
---
@@ -51,37 +54,67 @@ class BlacklistIntegrationSuite extends
Sch
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r70727301
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -611,16 +614,32 @@ private[spark] class TaskSetManager(
//
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r70727231
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -292,8 +288,12 @@ private[spark] class TaskSetManager(
{
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14189
**[Test build #62285 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62285/consoleFull)**
for PR 14189 at commit
[`815aa05`](https://github.com/apache/spark/commit/8
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r70726798
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -280,11 +304,25 @@ private[spark] class TaskSchedulerImpl(
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14173
right, that line would not be necessary.
---
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/13670
Thanks for all the fixes, I checked it out and am in the middle of looking
at it. It looks great on the history server for me, but the page just doesn't
load or show any form of errors when I try t
Github user keypointt commented on the issue:
https://github.com/apache/spark/pull/14189
I've just changed the pom.xml files in a few of the sub-projects; if this is
a valid patch, I'll change all of the pom.xml files.
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r70726463
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -0,0 +1,223 @@
+/*
+ * Licensed to the Apache Software Foundati
GitHub user keypointt opened a pull request:
https://github.com/apache/spark/pull/14189
[SPARK-16535] In pom.xml, remove groupId which is redundant definition and
inherited from the parent
https://issues.apache.org/jira/browse/SPARK-16535
## What changes were proposed in th
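For context, a generic Maven example of the redundancy this PR targets (illustrative fragment only; the artifact IDs and version shown are placeholders, not necessarily from the PR's actual POM files): a child module inherits `groupId` from its `<parent>` declaration, so restating it is redundant.

```xml
<!-- Child pom.xml: groupId is inherited from the parent declaration,
     so the commented-out line below is redundant and can be removed. -->
<project>
  <parent>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-parent_2.11</artifactId>
    <version>2.0.0-SNAPSHOT</version>
  </parent>
  <!-- <groupId>org.apache.spark</groupId>  redundant; inherited -->
  <artifactId>spark-core_2.11</artifactId>
</project>
```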
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14173
I managed to reproduce the issue that Jenkins was hitting. It had to do
with using `@method` on `as.DataFrame`, which was causing an error in HTML
generation. I just removed that and it seems to be
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r70726067
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -0,0 +1,223 @@
+/*
+ * Licensed to the Apache Software Foundati
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14173
I ran both knitr and lintr on this, both succeeded.
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r70725583
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -0,0 +1,223 @@
+/*
+ * Licensed to the Apache Software Foundati
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14173
**[Test build #62284 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62284/consoleFull)**
for PR 14173 at commit
[`3299242`](https://github.com/apache/spark/commit/3
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14173
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62283/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14173
**[Test build #62283 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62283/consoleFull)**
for PR 14173 at commit
[`6c9309e`](https://github.com/apache/spark/commit/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14173
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14173
**[Test build #62283 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62283/consoleFull)**
for PR 14173 at commit
[`6c9309e`](https://github.com/apache/spark/commit/6
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14022
I'll leave this around a little more but if I don't see more feedback I
plan to push it before the weekend.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14155
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62282/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14155
**[Test build #62282 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62282/consoleFull)**
for PR 14155 at commit
[`b8e0eee`](https://github.com/apache/spark/commit/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14155
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14183
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14183
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62277/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14183
**[Test build #62277 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62277/consoleFull)**
for PR 14183 at commit
[`77c4a6e`](https://github.com/apache/spark/commit/
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14187
---
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/14187
@BryanCutler Thanks for taking a look! I'll merge this with master and
branch-2.0 then
---
Github user mgummelt closed the pull request at:
https://github.com/apache/spark/pull/14188
---
GitHub user mgummelt opened a pull request:
https://github.com/apache/spark/pull/14188
.
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How was this patch tested?
(Please explain how this patch was
Github user BryanCutler commented on the issue:
https://github.com/apache/spark/pull/14187
LGTM, I took a quick look (but didn't build the docs)
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14148
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14148
LGTM. Merging to master and branch 2.0
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14022
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62275/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14022
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14022
**[Test build #62275 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62275/consoleFull)**
for PR 14022 at commit
[`b928a55`](https://github.com/apache/spark/commit/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14162
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14162
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62281/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14162
**[Test build #62281 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62281/consoleFull)**
for PR 14162 at commit
[`226a165`](https://github.com/apache/spark/commit/
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14178
---
Github user cloud-fan closed the pull request at:
https://github.com/apache/spark/pull/14071
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14071
I'm closing it, as it turns out that we still need this Hive-specific stuff
in CatalogStorageFormat. Hiding it in properties doesn't seem to have much
benefit; what we need is just adding a 'provide
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13670
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13670
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62274/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13670
**[Test build #62274 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62274/consoleFull)**
for PR 13670 at commit
[`faf814f`](https://github.com/apache/spark/commit/
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14173
@felixcheung Can you check if you see the same error as Jenkins on your
machine? On my machine the install and tests seem to pass, so I think this is
an R / roxygen / devtools version problem.
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14178
LGTM. Merging this to master, branch-2.0
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14178#discussion_r70717248
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -237,7 +237,7 @@ test_that("read csv as DataFrame", {
"Empty,Dummy,Placeho
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14178#discussion_r70716537
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -237,7 +237,7 @@ test_that("read csv as DataFrame", {
"Empty,Dummy,Plac
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14158#discussion_r70716204
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/AllExecutionsPage.scala
---
@@ -146,11 +158,35 @@ private[ui] abstract class Executi
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14178#discussion_r70715940
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -237,7 +237,7 @@ test_that("read csv as DataFrame", {
"Empty,Dummy,Placeho
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14178
Merged build finished. Test PASSed.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r70715360
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -313,18 +313,48 @@ class SparkSqlAstBuilder(conf: SQLConf) ext
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14178
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62280/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14178
**[Test build #62280 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62280/consoleFull)**
for PR 14178 at commit
[`ab0af9f`](https://github.com/apache/spark/commit/
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14162
**[Test build #62281 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62281/consoleFull)**
for PR 14162 at commit
[`226a165`](https://github.com/apache/spark/commit/2
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r70715057
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -115,6 +116,9 @@ case class CatalogTablePartition(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14155
**[Test build #62282 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62282/consoleFull)**
for PR 14155 at commit
[`b8e0eee`](https://github.com/apache/spark/commit/b
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14187
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62278/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14187
Merged build finished. Test PASSed.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r70714896
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -49,12 +50,12 @@ case class CatalogStorageFormat(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14187
**[Test build #62278 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62278/consoleFull)**
for PR 14187 at commit
[`4b21994`](https://github.com/apache/spark/commit/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14186
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62273/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14186
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14186
**[Test build #62273 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62273/consoleFull)**
for PR 14186 at commit
[`89a4eed`](https://github.com/apache/spark/commit/
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14178
**[Test build #62280 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62280/consoleFull)**
for PR 14178 at commit
[`ab0af9f`](https://github.com/apache/spark/commit/a
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70711218
--- Diff: docs/sparkr.md ---
@@ -312,7 +310,82 @@ head(ldf, 3)
Apply a function to each group of a `SparkDataFrame`. The function is to
be applied t
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70711263
--- Diff: docs/sparkr.md ---
@@ -312,7 +310,82 @@ head(ldf, 3)
Apply a function to each group of a `SparkDataFrame`. The function is to
be applied t
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r7076
--- Diff: docs/sparkr.md ---
@@ -263,7 +263,7 @@ In SparkR, we support several kinds of User-Defined
Functions:
# dapply
Apply a function t
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14173
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14173
**[Test build #62279 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62279/consoleFull)**
for PR 14173 at commit
[`a2275ae`](https://github.com/apache/spark/commit/