Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14065
**[Test build #61912 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61912/consoleFull)** for PR 14065 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13620
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14091
**[Test build #61913 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61913/consoleFull)** for PR 14091 at commit
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/14065#discussion_r69916415
--- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/token/AMDelegationTokenRenewer.scala ---
@@ -171,10 +174,9 @@ private[yarn] class
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/14065
I took a quick look through.
It might be nice to think about how we could handle other credentials. For instance, Apache Kafka currently doesn't have tokens, so you need a keytab or
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/14008#discussion_r69928094
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala ---
@@ -652,6 +654,145 @@ case class
Github user a-roberts commented on the issue:
https://github.com/apache/spark/pull/11956
@robbinspg and I are evaluating this from a functional and performance perspective; full disclosure: we both work for IBM with @kiszk.
All unit tests pass, including the new ones Ishizaki
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14090
**[Test build #61911 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61911/consoleFull)** for PR 14090 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14089
Merged build finished. Test PASSed.
---
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/13680#discussion_r69927398
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/UnsafeArrayDataBenchmark.scala ---
@@ -0,0 +1,251 @@
+/*
+ * Licensed to
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/14008#discussion_r69927073
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala ---
@@ -652,6 +654,145 @@ case class
GitHub user NarineK opened a pull request:
https://github.com/apache/spark/pull/14090
[SPARK-16112][SparkR] Programming guide for gapply/gapplyCollect
## What changes were proposed in this pull request?
Updates programming guide for spark.gapply/spark.gapplyCollect.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14089
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61909/
Test PASSed.
---
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/14065#discussion_r69918095
--- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -390,8 +390,9 @@ private[spark] class Client(
// Upload Spark and
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/14065#discussion_r69917965
--- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/token/HDFSTokenProvider.scala ---
@@ -0,0 +1,116 @@
+/*
+ * Licensed to the Apache
Github user JustinPihony commented on the issue:
https://github.com/apache/spark/pull/14077
Thanks. I will have to wait until SPARK-16401 is resolved, though, or else the code will not pass tests.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14091
**[Test build #61913 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61913/consoleFull)** for PR 14091 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14090
**[Test build #61911 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61911/consoleFull)** for PR 14090 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14090
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14090
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61911/
Test PASSed.
---
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r69910685
--- Diff: yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala ---
@@ -125,8 +125,11 @@ private[spark] abstract class
GitHub user kiszk opened a pull request:
https://github.com/apache/spark/pull/14091
[SPARK-16412][SQL] Generate Java code that gets an array in each column of CachedBatch when DataFrame.cache() is called
## What changes were proposed in this pull request?
Waiting #11956 to
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14091
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61913/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14091
Merged build finished. Test FAILed.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13701
@yhuai BTW, when reading more row groups, the performance improvement is much greater.
Before this patch:
Java HotSpot(TM) 64-Bit Server VM 1.8.0_71-b15 on Linux 3.19.0-25-generic
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/14088
Can you please fix the description? "Fix bugs for 'Can not get user config when calling SparkHadoopUtil.get.conf in other places'." doesn't make sense to me. Where exactly is SparkHadoopUtil
Github user janplus commented on a diff in the pull request:
https://github.com/apache/spark/pull/14008#discussion_r69905666
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala ---
@@ -652,6 +654,145 @@ case class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14089
**[Test build #61909 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61909/consoleFull)** for PR 14089 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13620
**[Test build #61910 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61910/consoleFull)** for PR 13620 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13620
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61910/
Test PASSed.
---
Github user nblintao commented on the issue:
https://github.com/apache/spark/pull/13620
I believe this commit has resolved the bugs reported by @ajbozarth. It looks good on the history server pages now, and it keeps the status of the other tables while changing one.
Could you please
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13123
I think this change was made by another PR, #13677. We can close this one now.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14065
Merged build finished. Test FAILed.
---
Github user MasterDDT commented on the issue:
https://github.com/apache/spark/pull/14092
cc @JoshRosen @rxin
I wasn't sure whether the right fix here is for `Expression` to override `equals` and use `semanticEquals`; that would be a bigger change, but I think it would work.
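The idea in the comment above can be sketched roughly as follows. This is a hypothetical illustration of semantic equality via a canonical form, not Spark's actual `Expression` API; the `Attr` class and method names here are made up for the example.

```java
// Illustrative only: a tiny stand-in for semantic equality of expressions.
// Two attributes that differ only cosmetically (e.g. in case) compare as
// semantically equal because both are canonicalized before comparison.
final class Attr {
    private final String name;

    Attr(String name) { this.name = name; }

    // Canonical form: strip cosmetic differences before comparing.
    String canonicalName() { return name.toLowerCase(); }

    boolean semanticEquals(Attr other) {
        return canonicalName().equals(other.canonicalName());
    }
}
```

Spark's real mechanism is richer (it canonicalizes whole expression trees, including expression IDs), but the comparison-by-canonical-form shape is the same.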
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14077
@JustinPihony How about moving the `copy` function first in your PR? Then we can review your PR before SPARK-16401 is resolved.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14028#discussion_r69936163
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonGenerator.scala ---
@@ -17,74 +17,180 @@
package
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14093#discussion_r69942754
--- Diff: core/src/main/java/org/apache/spark/shuffle/sort/UnsafeShuffleWriter.java ---
@@ -349,12 +349,19 @@ void forceSorterToSpill() throws IOException {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14081
**[Test build #61918 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61918/consoleFull)** for PR 14081 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14081
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61918/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14081
Merged build finished. Test PASSed.
---
Github user JustinPihony commented on the issue:
https://github.com/apache/spark/pull/14077
@gatorsmile As I said above, I actually think it might be better to keep
the work that was already done and am waiting for Reynold's feedback.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14092
Can one of the admins verify this patch?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11956
**[Test build #61915 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61915/consoleFull)** for PR 11956 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/11956
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61915/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14071
**[Test build #61914 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61914/consoleFull)** for PR 14071 at commit
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/14008#discussion_r69928758
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala ---
@@ -652,6 +654,160 @@ case class
Github user janplus commented on a diff in the pull request:
https://github.com/apache/spark/pull/14008#discussion_r69932808
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala ---
@@ -652,6 +654,145 @@ case class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13701
@viirya Maybe you have not read my discussion with @rdblue. @rdblue already explained how Parquet works internally. As I said above, I think we still need a test to confirm whether
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13701
@rdblue Ah, I see. Thank you for your explanation! My suggestion above is to confirm what you said in @viirya's test cases. We expect to see the same results as what you mentioned.
It
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14093#discussion_r69943578
--- Diff: core/src/main/java/org/apache/spark/shuffle/sort/UnsafeShuffleWriter.java ---
@@ -349,12 +349,19 @@ void forceSorterToSpill() throws IOException {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13680
**[Test build #61917 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61917/consoleFull)** for PR 13680 at commit
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13765#discussion_r69951435
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala ---
@@ -370,8 +370,11 @@ package object dsl {
case
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13765#discussion_r69930213
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala ---
@@ -370,8 +370,11 @@ package object dsl {
case plan
GitHub user MasterDDT opened a pull request:
https://github.com/apache/spark/pull/14092
[SPARK-16419][SQL] EnsureRequirements adds extra Sort to already sorted cached table
## What changes were proposed in this pull request?
EnsureRequirements compares the required and
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14071
cc @yhuai @gatorsmile @liancheng @clockfly
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/13778
From another point of view, is it necessary to propagate the Python UDF from the Python side to the JVM side? IIUC the serialization of a Python UDT happens on the Python side, and the JVM side can only see
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14004
**[Test build #61920 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61920/consoleFull)** for PR 14004 at commit
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/14051
This one broke branch 1.6. I just reverted it. Please resubmit a backport
for branch 1.6.
---
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13765#discussion_r69930648
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala ---
@@ -537,12 +537,19 @@ object CollapseProject extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14028#discussion_r69936226
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonGenerator.scala ---
@@ -17,74 +17,180 @@
package
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14028#discussion_r69936170
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonGenerator.scala ---
@@ -17,74 +17,180 @@
package
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14004#discussion_r69942945
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/regexpExpressions.scala ---
@@ -198,6 +203,66 @@ case class
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/14012
cc @liancheng please review this PR, thanks!
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11956
**[Test build #61919 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61919/consoleFull)** for PR 11956 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14071
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14071
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61914/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14065
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61912/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14065
**[Test build #61912 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61912/consoleFull)** for PR 14065 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14071
**[Test build #61914 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61914/consoleFull)** for PR 14071 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/11956
Merged build finished. Test FAILed.
---
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/14081#discussion_r69944850
--- Diff: examples/src/main/java/org/apache/spark/examples/ml/JavaPipelineExample.java ---
@@ -1,88 +0,0 @@
-/*
- * Licensed to the Apache
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14081
**[Test build #61918 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61918/consoleFull)** for PR 14081 at commit
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/13765
Under what circumstances would a user use two or more adjacent re-partitioning operators?
---
Github user janplus commented on a diff in the pull request:
https://github.com/apache/spark/pull/14008#discussion_r69933267
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala ---
@@ -652,6 +654,145 @@ case class
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/14052#discussion_r69933155
--- Diff: core/src/main/scala/org/apache/spark/deploy/rest/RestSubmissionServer.scala ---
@@ -93,6 +94,14 @@ private[spark] abstract class
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/13701
@gatorsmile, we've not seen a penalty from running row-group-level tests when no row groups are filtered, and we've decided to turn on dictionary filtering by default. You may see a penalty from
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11956
**[Test build #61915 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61915/consoleFull)** for PR 11956 at commit
GitHub user rdblue opened a pull request:
https://github.com/apache/spark/pull/14093
SPARK-16420: Ensure compression streams are closed.
## What changes were proposed in this pull request?
This uses the try/finally pattern to ensure streams are closed after use.
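The pattern the PR describes can be sketched as follows. This is a minimal illustration of try/finally around a compression stream, assuming a gzip stream for concreteness; it is not the actual patch to UnsafeShuffleWriter.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.util.zip.GZIPOutputStream;

// Minimal sketch of the try/finally pattern: the compression stream is
// always closed, which finishes the deflater and writes the gzip trailer
// even if the write itself throws.
class CompressedWrite {
    static void writeCompressed(OutputStream out, byte[] payload) {
        try {
            GZIPOutputStream gzip = new GZIPOutputStream(out);
            try {
                gzip.write(payload);
            } finally {
                gzip.close(); // always runs: flushes buffers, writes trailer
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Without the finally (or an equivalent try-with-resources), an exception during the write would leak the stream and leave the output truncated with no trailer.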
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14004#discussion_r69943328
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/regexpExpressions.scala ---
@@ -198,6 +203,67 @@ case class
Github user lovexi commented on the issue:
https://github.com/apache/spark/pull/14080
@rxin Sure. Get a cleaner title instead.
---
Github user janplus commented on a diff in the pull request:
https://github.com/apache/spark/pull/14008#discussion_r69932567
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala ---
@@ -652,6 +654,160 @@ case class
Github user vlad17 commented on the issue:
https://github.com/apache/spark/pull/13778
LGTM +1
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14090
cc @felixcheung @mengxr
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14093
**[Test build #61916 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61916/consoleFull)** for PR 14093 at commit
Github user rdblue commented on a diff in the pull request:
https://github.com/apache/spark/pull/14093#discussion_r69943044
--- Diff: core/src/main/java/org/apache/spark/shuffle/sort/UnsafeShuffleWriter.java ---
@@ -349,12 +349,19 @@ void forceSorterToSpill() throws IOException {
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/13984#discussion_r69945470
--- Diff: R/pkg/R/SQLContext.R ---
@@ -744,6 +747,9 @@ read.df.default <- function(path = NULL, source = NULL, schema = NULL, ...) { if
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14004#discussion_r69948642
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/regexpExpressions.scala ---
@@ -198,6 +203,67 @@ case class
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14051
@zsxwing Crumbs, thanks for that. It seems reasonably certain it's related, though I still can't quite figure out how it would cause this failure:
```
[error]
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/14030
LGTM. Merging to master and 2.0.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13765#discussion_r69951981
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/dsl/package.scala ---
@@ -370,8 +370,11 @@ package object dsl {
case
Github user krishnakalyan3 commented on the issue:
https://github.com/apache/spark/pull/13894
@holdenk @MLnick Sorry for so many changes; newbie here. Please let me know if the current state is okay.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14089
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14093
Merged build finished. Test FAILed.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14022
> Spark SQL allows env:xxx and system:xxx. We should follow the same here.
Sounds good. I looked briefly at the code and they could potentially be
merged later, but to avoid issues like "how
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14089
Thanks - merging in master/2.0.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14081
**[Test build #61922 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61922/consoleFull)** for PR 14081 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14081
**[Test build #61922 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61922/consoleFull)** for PR 14081 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14004
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61920/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14004
Merged build finished. Test PASSed.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13765
There are three possibilities:
1. User mistakes. (Rarely.)
2. Intermediate results of optimization. (More frequently.)
3. `View` (or a pre-designed `Dataset`).
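The adjacent-repartition collapse under discussion can be sketched like this. `Plan`, `Scan`, and `Repartition` are illustrative stand-ins, not Spark's actual Catalyst classes; the point is only that when two repartition operators are adjacent, the outermost one determines the final partitioning, so the inner one can be dropped.

```java
// Illustrative plan nodes, not Spark's Catalyst API.
interface Plan {}

final class Scan implements Plan {
    final String table;
    Scan(String table) { this.table = table; }
}

final class Repartition implements Plan {
    final int numPartitions;
    final Plan child;
    Repartition(int numPartitions, Plan child) {
        this.numPartitions = numPartitions;
        this.child = child;
    }

    // Collapse Repartition(n, Repartition(m, child)) into Repartition(n, child):
    // only the outermost repartitioning survives.
    static Plan collapse(Plan plan) {
        if (plan instanceof Repartition) {
            Repartition outer = (Repartition) plan;
            if (outer.child instanceof Repartition) {
                Repartition inner = (Repartition) outer.child;
                return collapse(new Repartition(outer.numPartitions, inner.child));
            }
        }
        return plan;
    }
}
```

The recursion handles chains of any length, which covers the "intermediate results of optimization" case where several redundant repartitions accumulate.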
---