Github user zero323 commented on the issue:
https://github.com/apache/spark/pull/17672
> btw, fyi, you don't have to rebase or squash each time you push
Noted :) But yeah, there was a conflict after SPARK-20375.
> At least we should include how this goes into a
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/17636
@hvanhovell Thanks a lot.
@gatorsmile Thank you for doing a very thorough review. Really appreciate it!
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastructure@apache.org or file a JIRA ticket with INFRA.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17649
**[Test build #76004 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76004/testReport)**
for PR 17649 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17649
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76004/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17649
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15125
**[Test build #76003 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76003/testReport)**
for PR 15125 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112571942
--- Diff:
resource-managers/yarn/src/main/resources/META-INF/services/org.apache.spark.deploy.security.ServiceCredentialProvider
---
@@ -0,0 +1,3 @@
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17342
**[Test build #76000 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76000/testReport)**
for PR 17342 at commit
Github user budde commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112565900
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisInputDStream.scala
---
@@ -249,6 +252,17 @@ object
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17649
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17649
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76002/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15125
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76003/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15125
Merged build finished. Test PASSed.
---
Github user zero323 commented on the issue:
https://github.com/apache/spark/pull/17469
@map222
[`ignore_unicode_prefix`](https://github.com/apache/spark/blob/8ddf0d2a60795a2306f94df8eac6e265b1fe5230/python/pyspark/rdd.py#L146-L156)
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17087
**[Test build #75999 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75999/testReport)**
for PR 17087 at commit
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/17540
@cloud-fan, I finally tracked down the PySpark problem. The in-memory sink
had `dataframe.collect()` calls inside the streaming batch write. I think this
is ready to be merged now!
---
Github user zero323 commented on the issue:
https://github.com/apache/spark/pull/17469
You'll have to use it on the wrapped function. Like:
contains = ignore_unicode_prefix(_bin_op("contains"))
---
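The decorator pattern being discussed can be sketched in plain Python. The names `ignore_unicode_prefix` and `_bin_op` mirror the PySpark internals linked above, but the bodies here are simplified illustrative stand-ins, not Spark's actual code:

```python
import re

def ignore_unicode_prefix(f):
    # Illustrative stand-in for PySpark's helper: strip u'' prefixes
    # from the docstring so doctests match under Python 3.
    if f.__doc__ is not None:
        f.__doc__ = re.sub(r"\bu(['\"])", r"\1", f.__doc__)
    return f

def _bin_op(name):
    # Illustrative stand-in for a Column._bin_op-style factory: build a
    # method that delegates to the named operator on the receiver.
    def op(self, other):
        return getattr(self, name)(other)
    op.__doc__ = "Example: df.filter(col.contains(u'foo'))"
    return op

# As suggested, the decorator must wrap the *generated* function:
contains = ignore_unicode_prefix(_bin_op("contains"))
```

After wrapping, the generated function's docstring no longer carries the `u''` prefixes, which is what the doctests need.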
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17582
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17582
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75998/
Test PASSed.
---
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17705
I am merging this.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17649
**[Test build #76002 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76002/testReport)**
for PR 17649 at commit
Github user map222 commented on the issue:
https://github.com/apache/spark/pull/17469
Ok, I added `ignore_unicode_prefix` to the 6 functions, and it passed local
tests. I think it is ready for Jenkins again.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17707
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75997/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17707
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75996/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #75996 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75996/testReport)**
for PR 17540 at commit
Github user budde commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112566462
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala
---
@@ -147,6 +152,14 @@ class
Github user CodingCat commented on the issue:
https://github.com/apache/spark/pull/16291
what's the current status of this PR?
---
Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112570247
--- Diff:
resource-managers/yarn/src/main/resources/META-INF/services/org.apache.spark.deploy.security.ServiceCredentialProvider
---
@@ -0,0 +1,3 @@
Github user map222 commented on the issue:
https://github.com/apache/spark/pull/17469
@zero323 Aaaah, I had even identified what I needed to do! So I just need
to decorate `_unary_op` and `_bin_op`, yes?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17342
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17342
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76000/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17582
**[Test build #75998 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75998/testReport)**
for PR 17582 at commit
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17636
Merging to master/branch-2.2. Thanks!
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17680
@ash211, thanks for your approval.
---
Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112570582
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -30,6 +30,7 @@ import scala.util.Properties
import
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17680
**[Test build #76006 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76006/testReport)**
for PR 17680 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17707
**[Test build #75997 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75997/testReport)**
for PR 17707 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17087
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75999/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17087
Merged build finished. Test PASSed.
---
Github user ptkool commented on a diff in the pull request:
https://github.com/apache/spark/pull/17708#discussion_r112549462
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
---
@@ -387,6 +387,13 @@ case class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17708
**[Test build #76005 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76005/testReport)**
for PR 17708 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17636
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17705
---
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/17710
[SPARK-20420][SQL] Add events to the external catalog
## What changes were proposed in this pull request?
It is often useful to be able to track changes to the `ExternalCatalog`.
##
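The event-tracking idea in this PR can be sketched with a simple listener pattern. All names below are hypothetical and for illustration only; this is not the `ExternalCatalog` API:

```python
class EventEmittingCatalog:
    """Illustrative listener pattern for tracking catalog changes.

    All names are hypothetical; this is not Spark's ExternalCatalog.
    """

    def __init__(self):
        self._tables = {}
        self._listeners = []

    def add_listener(self, listener):
        # listener: a callable invoked with each event as it is posted
        self._listeners.append(listener)

    def _post(self, event):
        for listener in self._listeners:
            listener(event)

    def create_table(self, name, schema):
        # Fire a pre-event, apply the change, then fire a post-event,
        # so listeners can observe both sides of the mutation.
        self._post(("CreateTablePreEvent", name))
        self._tables[name] = schema
        self._post(("CreateTableEvent", name))
```

A consumer registers a listener and then sees each change as a pre/post event pair.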
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17665
**[Test build #76008 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76008/testReport)**
for PR 17665 at commit
Github user kevinyu98 commented on the issue:
https://github.com/apache/spark/pull/12646
@hvanhovell: Would you have some cycles to review this PR? I would
appreciate some feedback on it. Thanks.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17693#discussion_r112584338
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -149,7 +149,7 @@ case class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17693#discussion_r112584915
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -149,7 +149,7 @@ case class
Github user umehrot2 commented on the issue:
https://github.com/apache/spark/pull/17445
@kayousterhout @mridulm @rxin @lins05 @markhamstra @tgravescs @squito Can
you take a look at this?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17710
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76007/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17710
Merged build finished. Test PASSed.
---
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17670
ping
---
Github user zuotingbing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112595663
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -50,22 +49,22 @@ import org.apache.spark.util.{JsonProtocol,
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112595717
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -405,9 +405,7 @@ class SparkContext(config: SparkConf) extends Logging {
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17641
LGTM, merging to master/2.2
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17711#discussion_r112602578
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SparkSqlParserSuite.scala
---
@@ -290,4 +290,14 @@ class SparkSqlParserSuite extends
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17681
The proposal LGTM
---
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17711#discussion_r112602983
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SparkSqlParserSuite.scala
---
@@ -290,4 +290,14 @@ class SparkSqlParserSuite extends
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17711#discussion_r112602929
--- Diff: sql/core/src/test/resources/sql-tests/inputs/string-functions.sql
---
@@ -1,3 +1,6 @@
+-- A pipe operation for string concatenation
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17711
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17708
**[Test build #76005 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76005/testReport)**
for PR 17708 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17693
I like the idea but I am not sure about `DROPMALFORMED` mode though. If we
use an expression with the mode enabled, the whole record (not only the column
but all columns) will be dropped in some json
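For context, the parse-mode semantics under discussion can be sketched in plain Python. This is an illustrative model of `PERMISSIVE`/`DROPMALFORMED`/`FAILFAST` behavior, not Spark's implementation:

```python
import json

def parse_records(lines, mode="PERMISSIVE"):
    # Illustrative model of Spark's JSON parse modes (not Spark's code):
    # PERMISSIVE keeps malformed rows as None, DROPMALFORMED drops the
    # whole record, and FAILFAST raises on the first bad record.
    out = []
    for line in lines:
        try:
            out.append(json.loads(line))
        except ValueError:
            if mode == "FAILFAST":
                raise
            if mode == "DROPMALFORMED":
                continue  # the entire record is dropped, not one column
            out.append(None)
    return out
```

This makes the concern above concrete: under `DROPMALFORMED`, a single bad value removes the whole record, not just the offending column.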
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17665
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17665
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76008/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17665
**[Test build #76008 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76008/testReport)**
for PR 17665 at commit
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17711#discussion_r112591328
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1483,4 +1483,12 @@ class SparkSqlAstBuilder(conf: SQLConf)
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17711
Okay, I'll add it soon.
---
Github user zuotingbing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112593597
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -405,9 +405,7 @@ class SparkContext(config: SparkConf) extends Logging {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17711
**[Test build #76010 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76010/testReport)**
for PR 17711 at commit
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112594234
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -50,22 +49,22 @@ import org.apache.spark.util.{JsonProtocol,
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17703
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17707
looks like we need to fix a test
---
Github user zuotingbing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112596553
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -50,22 +49,22 @@ import org.apache.spark.util.{JsonProtocol,
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17641
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17670
---
Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112572593
--- Diff:
core/src/main/scala/org/apache/spark/deploy/security/ConfigurableCredentialManager.scala
---
@@ -41,15 +41,17 @@ import
Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112572777
--- Diff:
core/src/main/scala/org/apache/spark/deploy/security/ServiceCredentialProvider.scala
---
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the
Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112572802
--- Diff:
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
---
@@ -174,6 +177,24 @@ private[spark] class
Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112572874
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSecurityManager.scala
---
@@ -0,0 +1,66 @@
+/*
+ *
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/17693
Not suggesting doing it in this PR, but maybe adding a SQL option to let
users choose the error-handling strategy for all the JSON functions makes more
sense here? The Spark JSON data
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17710
**[Test build #76007 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76007/testReport)**
for PR 17710 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17711
**[Test build #76009 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76009/testReport)**
for PR 17711 at commit
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17582
Just updated the description; please review again @vanzin, thanks!
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112595579
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -50,22 +49,22 @@ import org.apache.spark.util.{JsonProtocol,
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r112597680
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkAppHandle.java ---
@@ -95,7 +95,8 @@ public boolean isFinal() {
void kill();
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17711#discussion_r112599634
--- Diff: sql/core/src/test/resources/sql-tests/inputs/string-functions.sql
---
@@ -1,3 +1,6 @@
+-- A pipe operation for string concatenation
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17540#discussion_r112601511
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/util/DataFrameCallbackSuite.scala
---
@@ -183,21 +183,22 @@ class DataFrameCallbackSuite extends
Github user zuotingbing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112601515
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -50,22 +49,22 @@ import org.apache.spark.util.{JsonProtocol,
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/17712
[SPARK-20416][SQL] Print UDF names in EXPLAIN
## What changes were proposed in this pull request?
This PR adds `withName` to `UserDefinedFunction` so that UDF names are
printed in EXPLAIN.
##
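The `withName` idea can be sketched as follows; this is a hypothetical model for illustration, not Spark's actual `UserDefinedFunction` class:

```python
class NamedUDF:
    # Hypothetical model of the withName idea: carry an optional display
    # name so a plan printer can show it instead of an anonymous function.
    def __init__(self, func, name=None):
        self.func = func
        self.name = name

    def with_name(self, name):
        # Return a copy with the display name attached.
        return NamedUDF(self.func, name)

    def explain_fragment(self):
        # What an EXPLAIN-style printer would render for this UDF.
        return self.name if self.name is not None else "UDF"
```

Without a name the plan prints a generic `UDF`; after `with_name("plus_one")` the printer can show `plus_one` instead.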
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17191#discussion_r112603375
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -136,6 +136,7 @@ class Analyzer(
Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112573960
--- Diff:
resource-managers/yarn/src/main/resources/META-INF/services/org.apache.spark.deploy.security.ServiceCredentialProvider
---
@@ -0,0 +1,3 @@
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17703
thanks, merging to master/2.2!
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17712
**[Test build #76013 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76013/testReport)**
for PR 17712 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17710
**[Test build #76007 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76007/testReport)**
for PR 17710 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17680
**[Test build #76006 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76006/testReport)**
for PR 17680 at commit
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/17711
[SPARK-19951][SQL] Add string concatenate operator || to Spark SQL
## What changes were proposed in this pull request?
This PR adds support for the `||` operator for string concatenation. This string
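The operator's expected semantics can be sketched in plain Python, assuming SQL-standard NULL propagation as with `CONCAT`; this is not Spark's parser or analyzer code:

```python
def sql_concat(left, right):
    # Sketch of the semantics 'a' || 'b' is expected to have: string
    # concatenation with SQL NULL propagation (None in, None out).
    # Assumed behavior mirroring CONCAT, for illustration only.
    if left is None or right is None:
        return None
    return str(left) + str(right)
```

So `SELECT 'a' || 'b'` would yield `ab`, while a NULL operand yields NULL.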
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17700
Thanks @squito for the clarification; sorry, I misunderstood it.
Regarding this new `memoryMetrics`, will all the memory-related metrics be
shown here, like what you mentioned in the JIRA?
Github user zuotingbing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112594056
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -50,22 +49,22 @@ import org.apache.spark.util.{JsonProtocol,