Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13326
@tnachen Can you check this?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13077
@srowen / @tnachen Can you check this?
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13072
@srowen Can you check this?
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13143
@tnachen Can you check this?
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/16801
[SPARK-13619] [WEBUI] [CORE] Jobs page UI shows wrong number of failed tasks
## What changes were proposed in this pull request?
When the Failed/Killed Task End events come after
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13072
MesosClusterDispatcher also has multiple threads, like Executor; when any one thread terminates in the MesosClusterDispatcher process due to some error/exception, it keeps running without
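The fix SPARK-15288 proposes amounts to installing a JVM-wide default uncaught exception handler so a dying thread is noticed and logged instead of leaving the dispatcher half-alive. A minimal, self-contained sketch in Java (the class name and message format here are illustrative, not Spark's actual `SparkUncaughtExceptionHandler`):

```java
// Sketch: a default handler records an uncaught exception from any thread
// instead of letting the thread vanish silently. Illustrative only.
public class UncaughtHandlerDemo {
    static volatile String lastError;  // captured for demonstration

    static String runDemo() {
        Thread.setDefaultUncaughtExceptionHandler((thread, error) ->
            lastError = thread.getName() + ": " + error.getMessage());
        Thread worker = new Thread(() -> {
            throw new IllegalStateException("misconfigured job");
        }, "dispatcher-worker");
        worker.start();
        try {
            worker.join();  // the handler has already run once join() returns
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return lastError;
    }

    public static void main(String[] args) {
        System.out.println(runDemo());  // dispatcher-worker: misconfigured job
    }
}
```

A real dispatcher would log the throwable and decide whether the error is fatal enough to exit the whole JVM rather than just record a string.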
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/16725
[SPARK-19377] [WEBUI] [CORE] Killed tasks should have the status as KILLED
## What changes were proposed in this pull request?
Copying of the killed status was missing while getting
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/16705
[SPARK-19354] [Core] Killed tasks are getting marked as FAILED
## What changes were proposed in this pull request?
Handling the exception which occurs during the kill and logging
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/13077#discussion_r94205390
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -559,15 +560,29
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13077
@tnachen, sorry for the delay, I will update the patch. Thanks
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/12753
I will update this PR with the ConfigReader and reopen the jira.
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/12753
@vanzin, SPARK-3767 was resolved as 'Won't Fix' by @srowen. I was under the
assumption that SPARK-16671 covers this as well.
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/12753
@vanzin Thanks for looking into this, I have resolved the conflicts.
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13143
MesosDriver doesn't throw any exception; it just returns with the value
Status.DRIVER_ABORTED.
```
registerLatch.await()
// propagate any error
```
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13077
Thanks @tnachen for looking into this, I will update this with the changes.
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/11996
@lw-lin I think it will release the resources and then it throws
TaskKilledException at
[Executor.scala#L307](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13323
@tnachen Thanks for your review, I have added a test for this, can you have
a look into it?
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13326
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/59989/testReport/
`org.apache.spark.scheduler.BlacklistIntegrationSuite.Bad node with
multiple executors, job
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13326
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/59989/
Test FAILed
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13326
Merged build finished. Test FAILed.
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13326
**[Test build #59989 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/59989/consoleFull)**
for PR 13326 at commit
[`7f4f34b`](https://github.com/apache/spark
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13326
**[Test build #59989 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/59989/consoleFull)**
for PR 13326 at commit
[`7f4f34b`](https://github.com/apache/spark
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13326
ok to test
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13407
Thanks @vanzin for review and merging.
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/13326#discussion_r65748851
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -188,10 +188,10 @@ private[spark] class
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13407
Thanks @vanzin and @andrewor14 for looking into this, sorry for the delay.
> If SparkSubmit can still process --kill and --status with those, then
that's fine too (just
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/13407
[SPARK-15665] [CORE] spark-submit --kill and --status are not working
## What changes were proposed in this pull request?
--kill and --status were not considered while handling
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11996#issuecomment-222589679
Thanks @kayousterhout.
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11996#issuecomment-222104830
@kayousterhout, I have added inline comments and the build is also fine
now, please have a look into it. Thanks
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11996#issuecomment-221968586
@kayousterhout Thanks a lot for your review and comments. I have fixed
them, please have a look into this.
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/13326
[SPARK-15560] [Mesos] Queued/Supervise drivers waiting for retry drivers
disappear for kill command in Mesos mode
## What changes were proposed in this pull request?
With the patch
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/13323
[SPARK-1] [Mesos] Driver with --supervise option cannot be killed in
Mesos mode
## What changes were proposed in this pull request?
Not adding the Killed applications for retry
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11996#issuecomment-221531103
@kayousterhout, can you have a look into this?
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/11996#discussion_r63738214
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala ---
@@ -789,6 +791,51 @@ class TaskSetManagerSuite extends
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/11996#discussion_r63736986
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala ---
@@ -789,6 +791,51 @@ class TaskSetManagerSuite extends
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11996#issuecomment-220082195
Thanks a lot @kayousterhout for the review.
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/13143
[SPARK-15359] [Mesos] Mesos dispatcher should handle DRIVER_ABORTED status
from mesosDriver.run()
## What changes were proposed in this pull request?
When the mesosDriver.run
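The pattern the PR asks for, treating `DRIVER_ABORTED` as an error even though `run()` returns normally, can be sketched as follows (the `Status` enum and the method names below are illustrative stand-ins, not the actual Mesos API):

```java
// Sketch of the status-checking pattern: the driver signals failure
// through its return value rather than an exception, so the caller must
// inspect the status itself. Illustrative stand-in types only.
public class DriverStatusDemo {
    enum Status { DRIVER_RUNNING, DRIVER_STOPPED, DRIVER_ABORTED }

    // Stand-in for mesosDriver.run(): returns a status, never throws.
    static Status runDriver(boolean registrationFails) {
        return registrationFails ? Status.DRIVER_ABORTED : Status.DRIVER_STOPPED;
    }

    // Dispatcher side: translate an abnormal status into an error instead
    // of continuing as if the driver were still healthy.
    static String superviseDriver(boolean registrationFails) {
        Status status = runDriver(registrationFails);
        if (status == Status.DRIVER_ABORTED) {
            return "error: driver aborted";
        }
        return "ok: " + status;
    }

    public static void main(String[] args) {
        System.out.println(superviseDriver(true));   // error: driver aborted
        System.out.println(superviseDriver(false));  // ok: DRIVER_STOPPED
    }
}
```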
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/13077
[SPARK-10748] [Mesos] Log error instead of crashing Spark Mesos dispatcher
when a job is misconfigured
## What changes were proposed in this pull request?
Now handling the spark
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/13073#issuecomment-218703662
@clockfly, seems the JIRA number mentioned in the title is wrong; I think it
should be SPARK-15253.
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11996#issuecomment-218671842
@kayousterhout, @markhamstra any comments, please?
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/13072
[SPARK-15288] [Mesos] Mesos dispatcher should handle gracefully when any
thread gets UncaughtException
## What changes were proposed in this pull request?
Adding the default
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/12031#issuecomment-216774818
Thanks a lot @zsxwing for pushing this.
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/12753#issuecomment-216760992
@rxin, please have a look into this and let me know if anything needs to be
done here. About @: M/R also uses @ for the task-id wildcard in java opts
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/12753#discussion_r61995355
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -166,14 +166,15 @@ private[spark
Github user devaraj-kavali closed the pull request at:
https://github.com/apache/spark/pull/12571
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/12753#issuecomment-215619844
Thanks @rxin for checking this, I don't think @ is used anywhere. Here
again we are replacing only the 'spark.executor.extraJavaOptions' value when
@execid
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/12753
[SPARK-3767] [CORE] Support wildcard in Spark properties
## What changes were proposed in this pull request?
Added provision to specify the 'spark.executor.extraJavaOptions' value
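The wildcard idea can be sketched as a simple placeholder substitution performed at executor launch time (the `@appid`/`@execid` placeholder names come from the discussion; the `expand` helper itself is an illustrative sketch, and the PR's exact syntax may differ):

```java
// Sketch: expand placeholders in spark.executor.extraJavaOptions with the
// concrete IDs when an executor is launched. Illustrative helper only.
public class JavaOptsExpander {
    static String expand(String opts, String appId, String execId) {
        return opts.replace("@appid", appId).replace("@execid", execId);
    }

    public static void main(String[] args) {
        String opts = "-verbose:gc -Xloggc:/tmp/gc-@appid-@execid.log";
        System.out.println(expand(opts, "app-20160427", "7"));
        // -verbose:gc -Xloggc:/tmp/gc-app-20160427-7.log
    }
}
```

This lets each executor write, for example, its own GC log file instead of all executors clobbering one path.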
Github user devaraj-kavali closed the pull request at:
https://github.com/apache/spark/pull/11778
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/12571#discussion_r61209327
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/SparkDeploySchedulerBackend.scala
---
@@ -66,12 +66,20 @@ private[spark] class
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-214982150
Thanks @tgravescs for the comment; users can still specify these GC params
as part of the java opts. If the user doesn't specify these GC params, then only
we
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-213522339
@srowen I have made the changes, Please have a look into this. Thanks
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/12571#issuecomment-212841070
Thanks @srowen for checking this immediately, I will make the changes as
per your explanation.
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/12031#issuecomment-212826864
ping @andrewor14, @zsxwing
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/12571
[SPARK-1989] [CORE] Exit executors faster if they get into a cycle of heavy
GC
## What changes were proposed in this pull request?
Added spark.executor.gcTimeLimit config
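A check of this shape could total the JVM's cumulative GC time via the standard management beans and compare it against a budget. Only the `spark.executor.gcTimeLimit` name comes from the PR description; the helper below is an illustrative sketch, not the PR's implementation:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

// Sketch: sum cumulative GC time across all collectors and flag the
// process when GC consumes more than a configured fraction of uptime.
public class GcTimeCheck {
    // Cumulative milliseconds spent in GC since JVM start.
    static long totalGcTimeMillis() {
        long total = 0;
        List<GarbageCollectorMXBean> beans =
            ManagementFactory.getGarbageCollectorMXBeans();
        for (GarbageCollectorMXBean bean : beans) {
            long t = bean.getCollectionTime();  // -1 if unsupported
            if (t > 0) total += t;
        }
        return total;
    }

    static boolean overGcLimit(long gcMillis, long uptimeMillis, double limitFraction) {
        return uptimeMillis > 0 && (double) gcMillis / uptimeMillis > limitFraction;
    }

    public static void main(String[] args) {
        long uptime = ManagementFactory.getRuntimeMXBean().getUptime();
        System.out.println("GC time so far: " + totalGcTimeMillis() + " ms of " + uptime + " ms");
        System.out.println("over 90% limit: " + overGcLimit(totalGcTimeMillis(), uptime, 0.9));
    }
}
```

An executor stuck in a heavy GC cycle would trip such a check and could exit quickly instead of limping along.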
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/12031#issuecomment-208725033
@andrewor14, Can you have a look into this when you find some time? Thanks
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/12031#issuecomment-206215752
Thanks @zsxwing for your comments. I have addressed them, Please have a
look into this.
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/12082#discussion_r58590060
--- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -1444,4 +1444,19 @@ object Client extends Logging
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/11996#discussion_r58563479
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala ---
@@ -789,6 +791,51 @@ class TaskSetManagerSuite extends
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/12031#discussion_r58342616
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -319,10 +319,14 @@ private[spark] class Executor
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11996#issuecomment-204313542
Thanks @tgravescs for checking this, I will add test for these changes.
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/12082#discussion_r58177914
--- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -1444,4 +1444,19 @@ object Client extends Logging
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/12082#issuecomment-204309628
Thanks @tgravescs for looking into the patch.
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/12082
[SPARK-13063] [YARN] Make the SPARK YARN STAGING DIR as configurable
## What changes were proposed in this pull request?
Made the SPARK YARN STAGING DIR as configurable
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/12031
[SPARK-14234] [CORE] Executor crashes for TaskRunner thread interruption
## What changes were proposed in this pull request?
Resetting the task interruption status before updating
Github user devaraj-kavali closed the pull request at:
https://github.com/apache/spark/pull/11916
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11916#issuecomment-202318900
I have moved these changes to the PR
https://github.com/apache/spark/pull/11996 for SPARK-10530. @tgravescs, please
have a look into https://github.com/apache
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/11996
[SPARK-10530] [CORE] Kill other task attempts when one task attempt
belonging to the same task succeeds in speculation
## What changes were proposed in this pull request
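The bookkeeping the PR adds can be sketched as: once one attempt of a task succeeds, every other still-running attempt of that task is selected for a kill request. The data structures below are illustrative stand-ins for `TaskSetManager`'s internal maps, not Spark's actual code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: given the running attempts of a task and the attempt that just
// succeeded, compute which attempts should be killed.
public class SpeculationDemo {
    static List<Long> attemptsToKill(Map<Integer, Set<Long>> running,
                                     int taskIndex, long succeededAttempt) {
        List<Long> toKill = new ArrayList<>();
        for (long attempt : running.getOrDefault(taskIndex, Set.of())) {
            if (attempt != succeededAttempt) toKill.add(attempt);  // the losers
        }
        Collections.sort(toKill);  // deterministic order for display
        return toKill;
    }

    public static void main(String[] args) {
        // Task 3 has three speculative attempts running; attempt 11 wins.
        Map<Integer, Set<Long>> running = Map.of(3, Set.of(10L, 11L, 12L));
        System.out.println(attemptsToKill(running, 3, 11L));  // [10, 12]
    }
}
```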
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/11916#discussion_r57459040
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -620,6 +620,14 @@ private[spark] class TaskSetManager
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/11916#discussion_r57349258
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -620,6 +620,14 @@ private[spark] class TaskSetManager
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/11916#discussion_r57340394
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -620,6 +620,14 @@ private[spark] class TaskSetManager
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11916#issuecomment-200740334
Thanks @rxin and @andrewor14 for looking into the patch.
These failed tests in the latest build are not related to this patch and
they have been failing
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/11916
[SPARK-13343] [CORE] speculative tasks that didn't commit shouldn't be
marked as success
## What changes were proposed in this pull request?
Now with this patch, killed tasks
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/11819
[SPARK-913] [CORE] log the size of each shuffle block in block manager
## What changes were proposed in this pull request?
Added a log message which shows the size of the block
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/11778
[SPARK-13965] [CORE] Driver should kill the other running task attempts if
any one task attempt succeeds for the same task
## What changes were proposed in this pull request?
core
Github user devaraj-kavali closed the pull request at:
https://github.com/apache/spark/pull/11819
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11819#issuecomment-198434010
Thanks @srowen and @JoshRosen for the details, I am closing this since the
BlockManager no longer handles shuffle blocks.
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11490#issuecomment-193282292
Sounds fine @srowen, I will update with the change.
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/11490#discussion_r5519
--- Diff: core/src/main/scala/org/apache/spark/ui/WebUI.scala ---
@@ -134,7 +134,8 @@ private[spark] abstract class WebUI(
def bind
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11490#issuecomment-192878456
Thanks @srowen and @zsxwing for the confirmation. I have updated the
description and fixed the review comment.
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11490#issuecomment-192118319
I agree @srowen, I see that SPARK_PUBLIC_DNS is not for binding purposes. I
have changed the env var to SPARK_LOCAL_IP.
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11490#issuecomment-191869171
I had overlooked it and it was my mistake; I think we need to consider both
env variables, something like:
```
serverInfo =
  Some(startJettyServer
```
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/11490
[SPARK-13117] [Web UI] WebUI should use the local ip not 0.0.0.0
## What changes were proposed in this pull request?
In WebUI, now Jetty Server starts with SPARK_PUBLIC_DNS config
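The resolution the discussion converges on, bind to `SPARK_LOCAL_IP` when it is set and otherwise fall back to the wildcard address, can be sketched as follows (the helper is illustrative; Spark's actual resolution logic lives in `WebUI`/`JettyUtils`):

```java
import java.util.Map;

// Sketch: pick the bind host for the web UI from the environment, with
// 0.0.0.0 as the fallback so the server stays reachable when nothing is
// configured. Illustrative helper only.
public class BindHostDemo {
    static String bindHost(Map<String, String> env) {
        String localIp = env.get("SPARK_LOCAL_IP");
        return (localIp == null || localIp.isEmpty()) ? "0.0.0.0" : localIp;
    }

    public static void main(String[] args) {
        System.out.println(bindHost(Map.of()));                             // 0.0.0.0
        System.out.println(bindHost(Map.of("SPARK_LOCAL_IP", "10.0.0.5"))); // 10.0.0.5
    }
}
```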
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/11474
[SPARK-13621] [CORE] TestExecutor.scala needs to be moved to test package
Moved TestExecutor.scala from src to test package and removed the unused
file TestClient.scala.
You can merge
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11133#issuecomment-189320534
@srowen
Would it be OK if we start the Jetty server with the default value as
"0.0.0.0" instead of the local host name and it can t
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11133#issuecomment-188678997
Earlier there was no problem in the test because the Jetty server was
getting started with "0.0.0.0" and was not honoring the value
configured
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11053#issuecomment-188062361
@yinxusen I will look into the issue SPARK-13462, Thanks for creating it.
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11133#issuecomment-187520503
@srowen, I have fixed the test failure, Can you have a look into this?
Thanks
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11133#issuecomment-187098980
It does not give clear details about the failure; exiting with the exit
code 1 is because of ***System.exit(1)***. I think we can skip this
***System.exit(1
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11133#issuecomment-187096991
I see the test/Jenkins failure is due to the PR change.
Here org.apache.spark.deploy.LogUrlsStandaloneSuite is failing because of
the below exception
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11053#issuecomment-186855380
Thanks @yinxusen for the good suggestion, I have addressed it.
> ModelSelectionViaTrainValidationSplitExam
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11133#issuecomment-186185243
Thanks @srowen for trying jenkins test to check this.
```
[info] - verify that correct log urls get propagated from workers (2
seconds
```
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11053#issuecomment-186082803
Thanks again @yinxusen for the review, I have addressed the comments.
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11132#issuecomment-186070514
Thanks again @yinxusen for the review, I have addressed them.
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11132#issuecomment-185629805
Thanks @yinxusen for the review, I have addressed them.
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11053#issuecomment-185611856
Thanks @yinxusen for your detailed review and comments. I have addressed
them.
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/11132#discussion_r53277719
--- Diff:
examples/src/main/java/org/apache/spark/examples/mllib/JavaSVDExample.java ---
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11053#issuecomment-185280685
Thanks @srowen for review and comments. I have removed serialVersionUID and
setters in Java Beans and also addressed the unnecessary spaces between braces
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11132#issuecomment-185064091
@yinxusen Thanks for reviewing, I have addressed the comments, Please have
a look into this.
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11133#issuecomment-185016457
@srowen I am investigating it, will update. Thanks
Github user devaraj-kavali commented on the pull request:
https://github.com/apache/spark/pull/11053#issuecomment-184538336
Thanks for the review @yinxusen. I have configured the code format in the IDE
and am using it for formatting the code. I will fix these comments and
update