Github user liyezhang556520 closed the pull request at:
https://github.com/apache/spark/pull/12296
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12038#issuecomment-208659447
@davies , I didn't see the commit in branch-1.6 either; it seems this commit
cannot simply be git cherry-picked because the file path is not the same as in
master
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12083#issuecomment-208329946
@davies , please see https://github.com/apache/spark/pull/12296
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12296#issuecomment-208218812
cc @davies
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/12296
[SPARK-14290][CORE][backport-1.6] avoid significant memory copy in Netty's
tran…
## What changes were proposed in this pull request?
When netty transfer data that is not `FileRegion
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12083#issuecomment-206180807
retest this please.
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12083#issuecomment-206168009
Jenkins, retest this please.
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12083#issuecomment-206096350
@vanzin , @zsxwing , I have changed the buffer limit to 256K. I do agree
that it's better that we handle this issue by manually copying data to a
directBuffer, so
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12083#issuecomment-205631561
@zsxwing , @vanzin Any further comments?
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12083#discussion_r58287737
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/protocol/MessageWithHeader.java
---
@@ -44,6 +45,14 @@
private long
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12083#discussion_r58287444
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/protocol/MessageWithHeader.java
---
@@ -44,6 +45,14 @@
private long
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12083#discussion_r58287274
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/protocol/MessageWithHeader.java
---
@@ -44,6 +45,14 @@
private long
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12083#issuecomment-204632236
>So if we write a 1M buffer, it can only write NIO_BUFFER_LIMIT (512K). And
we need to write the rest 512K again. So in this case, we need to copy 1M +
5
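The copy amplification described above can be bounded by slicing large writes to the channel. Below is a minimal, library-free sketch of the idea in plain NIO; the constant name `NIO_BUFFER_LIMIT` and the 256K value mirror the thread's discussion, and the code is an illustration, not Spark's actual implementation:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

public class ChunkedWrite {
    // Mirrors the limit discussed in the thread (changed to 256K in the PR);
    // the name and value here come from the discussion, not Spark source.
    static final int NIO_BUFFER_LIMIT = 256 * 1024;

    // Write buf in slices of at most NIO_BUFFER_LIMIT bytes, so any internal
    // heap-to-direct-buffer copy done per write() call stays bounded instead
    // of duplicating the whole (possibly multi-MB) buffer at once.
    static long writeFully(WritableByteChannel ch, ByteBuffer buf) throws IOException {
        long written = 0;
        while (buf.hasRemaining()) {
            int fullLimit = buf.limit();
            buf.limit(Math.min(buf.position() + NIO_BUFFER_LIMIT, fullLimit));
            written += ch.write(buf);      // copies at most one slice
            buf.limit(fullLimit);          // restore for the next slice
        }
        return written;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        WritableByteChannel ch = Channels.newChannel(sink);
        ByteBuffer buf = ByteBuffer.wrap(new byte[1024 * 1024]); // the 1M case
        long n = writeFully(ch, buf);
        System.out.println(n + " " + sink.size());
    }
}
```

With the 1M buffer from the quote, the loop issues four 256K writes rather than one 1M write, so the temporary copy per call is a quarter of the payload.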
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12083#issuecomment-204207886
Hi @vanzin , the memory copy place is given out by @zsxwing , the call
stack is as follows:
```
at java.nio.Bits.copyFromArray(Bits.java:754
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12038#issuecomment-204199865
@zsxwing , I updated the commit description. Thank you @zsxwing and @vanzin
for reviewing.
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12083#issuecomment-203872402
cc @rxin
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12083#discussion_r58032868
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/protocol/MessageWithHeader.java
---
@@ -44,6 +45,14 @@
private long
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/12083
[SPARK-14290][CORE][Network] avoid significant memory copy in netty's
transferTo
## What changes were proposed in this pull request?
When netty transfer data that is not `FileRegion
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12038#issuecomment-203724131
retest this please.
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12038#issuecomment-203715485
@vanzin
>That's better, but is it needed at all? I don't see any comments about why
consolidating the buffers is a win in the source for CompositeByte
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/12038#issuecomment-203213286
@vanzin , I think @zsxwing 's idea of using
`CompositeByteBuf.addComponents` is a better choice, which will only introduce
exactly one copy if the small buffer
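The trade-off described here — gather many small incoming buffers and copy each payload byte exactly once into the final frame, instead of repeatedly reallocating — can be sketched without Netty. The real patch uses Netty's `CompositeByteBuf.addComponents`; this stand-in works on plain NIO buffers purely to illustrate the one-copy goal:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class FrameConsolidation {
    // Accumulate small buffers, then copy each one exactly once into a
    // single frame buffer. (Netty's CompositeByteBuf can even avoid this
    // copy by composing the parts virtually; this sketch shows the
    // single-copy fallback discussed in the thread.)
    static ByteBuffer consolidate(List<ByteBuffer> parts) {
        int total = 0;
        for (ByteBuffer b : parts) total += b.remaining();
        ByteBuffer frame = ByteBuffer.allocate(total);
        for (ByteBuffer b : parts) frame.put(b); // each byte copied once
        frame.flip();
        return frame;
    }

    public static void main(String[] args) {
        List<ByteBuffer> parts = new ArrayList<>();
        parts.add(ByteBuffer.wrap("spark-".getBytes()));
        parts.add(ByteBuffer.wrap("frame".getBytes()));
        ByteBuffer frame = consolidate(parts);
        byte[] out = new byte[frame.remaining()];
        frame.get(out);
        System.out.println(new String(out));
    }
}
```

The contrast is with an approach that copies already-accumulated data again on every new arrival, which is quadratic in the number of small buffers.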
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12038#discussion_r57830669
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/util/TransportFrameDecoder.java
---
@@ -139,14 +139,18 @@ private ByteBuf
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/12038
[SPARK-14242][CORE][Network] avoid using compositeBuffer for frame decoder
## What changes were proposed in this pull request?
In this patch, we avoid using `compositeBuffer
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12038#discussion_r57741705
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/util/TransportFrameDecoder.java
---
@@ -139,14 +139,18 @@ private ByteBuf
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12038#discussion_r57742478
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/util/TransportFrameDecoder.java
---
@@ -139,14 +139,18 @@ private ByteBuf
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/7753#issuecomment-170942709
Hi @steveloughran , thank you for your attention and your comments; further
comments on this PR are much appreciated. Your advice is quite
correct
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/8765#issuecomment-163818782
@zsxwing , Since the original implementation has changed a lot, let me
close this PR/JIRA first, if the issue still exists, I'll reopen.
Github user liyezhang556520 closed the pull request at:
https://github.com/apache/spark/pull/8765
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r46152658
--- Diff:
core/src/test/scala/org/apache/spark/ui/memory/MemoryListenerSuite.scala ---
@@ -0,0 +1,258 @@
+/*
+ * Licensed to the Apache
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r46154647
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -85,6 +85,9 @@ private[spark] class Executor
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r46152192
--- Diff:
core/src/test/scala/org/apache/spark/ui/memory/MemoryListenerSuite.scala ---
@@ -0,0 +1,258 @@
+/*
+ * Licensed to the Apache
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r46153027
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/EventLoggingListenerSuite.scala
---
@@ -122,6 +122,105 @@ class EventLoggingListenerSuite
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r46152057
--- Diff:
core/src/main/scala/org/apache/spark/executor/ExecutorMetrics.scala ---
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r46152802
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/EventLoggingListenerSuite.scala
---
@@ -122,6 +122,105 @@ class EventLoggingListenerSuite
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r46153585
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -228,6 +261,46 @@ private[spark] class
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r45584487
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -20,8 +20,10 @@ package org.apache.spark.scheduler
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r45584427
--- Diff:
core/src/main/scala/org/apache/spark/network/netty/NettyBlockTransferService.scala
---
@@ -47,6 +50,39 @@ class NettyBlockTransferService
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r45585344
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -228,6 +260,40 @@ private[spark] class
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r45584147
--- Diff:
core/src/main/scala/org/apache/spark/executor/ExecutorMetrics.scala ---
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r45584922
--- Diff:
core/src/main/scala/org/apache/spark/executor/ExecutorMetrics.scala ---
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/7753#issuecomment-158924858
Jenkins, retest this please.
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/7753#issuecomment-158890951
@squito thank you for your comments, they helped a lot; I updated some unit
tests. If you have time, could you help review? Thanks.
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r45584881
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -91,7 +93,12 @@ private[spark] class EventLoggingListener
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/7753#issuecomment-153299381
jenkins, retest this please
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/7753#issuecomment-153354857
@squito, sorry for the long delay in updating; any further comments?
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r40302251
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -20,8 +20,11 @@ package org.apache.spark.scheduler
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r40302500
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -229,9 +228,11 @@ private[spark] object JsonProtocol {
def
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r40302144
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -447,7 +450,16 @@ private[spark] class Executor
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r40301519
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -85,6 +85,9 @@ private[spark] class Executor
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r40339731
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -152,8 +159,19 @@ private[spark] class
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/7753#issuecomment-142980989
>1. I don't think there is any need to separate out the memory used by the
client and server portions. These are internal details that the end-user
doesn't c
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/2134#issuecomment-140318245
ok, I'll close this PR
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/8765
[SPARK-10608][CORE] disable reduce locality as default
for details, please refer to
[SPARK-10608](https://issues.apache.org/jira/browse/SPARK-10608)
You can merge this pull request into a Git
Github user liyezhang556520 closed the pull request at:
https://github.com/apache/spark/pull/2134
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/8765#issuecomment-140355454
jenkins, retest this please
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/8412#issuecomment-134526849
@srowen , I added the missing params for the APIs that are meant to document
their params throughout the file. I'm wondering whether those missing `@param`
tags are on purpose
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/8412
[DOC] add missing parameters for scala doc
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/liyezhang556520/spark minorDoc
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/7753#issuecomment-133629862
Jenkins, retest this please
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/7753
[SPARK-9104][CORE][WIP] expose Netty network layer memory used in shuffle
read part
This is a sub-task of
[SPARK-9103](https://issues.apache.org/jira/browse/SPARK-9103), we'd like
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/7753#issuecomment-126163347
@jerryshao , thanks for your review and feedback; I'll separate it into
different PRs. And I will keep this PR open, waiting for others to see
whether
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/7753#discussion_r35832655
--- Diff:
core/src/main/scala/org/apache/spark/executor/ExecutorMetrics.scala ---
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/7562
[SPARK-9212][CORE] upgrade Netty version to 4.0.29.Final
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/liyezhang556520/spark SPARK
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/7562#issuecomment-123213266
Maybe "hack" was a misleading expression; I meant I didn't find a way to go
into Netty to get the memory usage for old versions, since they didn't
provide
Github user liyezhang556520 closed the pull request at:
https://github.com/apache/spark/pull/3971
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5934#discussion_r30971371
--- Diff:
core/src/test/scala/org/apache/spark/serializer/KryoSerializerSuite.scala ---
@@ -32,6 +32,36 @@ class KryoSerializerSuite extends FunSuite
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/6395
[SPARK-7854][test] refine Kryo test suite
this modification is according to @JoshRosen 's comments; for details, please
refer to [#5934](https://github.com/apache/spark/pull/5934/files
Github user liyezhang556520 closed the pull request at:
https://github.com/apache/spark/pull/2956
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/2956#issuecomment-103291479
I'm closing this, thanks.
Github user liyezhang556520 closed the pull request at:
https://github.com/apache/spark/pull/3825
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/5886#issuecomment-101308561
@vanzin , the current implementation will make SPARK-7189 worse; what about
introducing a hashMap to maintain each file's filename, modifiedTime, and file
size
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/5934#issuecomment-99499678
@ilganeli , Thank you for your comments, code updated.
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/5886#discussion_r29731021
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -186,13 +186,14 @@ private[history] class
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/5886#issuecomment-99295131
@vanzin , there is a time interval between getting the first file's
modification time and the last file's. Assume there are 3 files: F1, F2, F3.
And before scanning
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/5886#issuecomment-99322278
Hi @vanzin , you are correct that a single call to `listStatus` will return
all information about all the available log files, but I'm not sure about the
details
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/5934
[SPARK-7392][Core] bugfix: Kryo buffer size cannot be larger than 2M
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/liyezhang556520
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/5739#issuecomment-97635489
Hi @vanzin, sorry for my misleading expression. I mean, the exception thrown
from the `try` block is not visible in the `finally` block even if it is caught
by a `catch
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/5736#issuecomment-97282994
Thanks for @andrewor14's and @vanzin's comments
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/5739#issuecomment-97287644
Hi @vanzin , the way you suggested may not be able to get the exception thrown
from the `try` block, so some sibling exceptions may not be suppressed
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/5739#issuecomment-96964795
@rxin, I'm just thinking it's always more reasonable to use
`tryWithSafeFinally`, which can give a more informative error message. Also,
all the code in other places
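The suppression being discussed can be rendered in plain Java. Spark's actual helper is a Scala utility (`Utils.tryWithSafeFinally`); the sketch below only illustrates the behavior these comments argue for: if both the body and the cleanup throw, the cleanup's exception is attached as suppressed so the original failure is not lost.

```java
public class SafeFinally {
    // Illustrative helper, not Spark's code: run body, then cleanup; if
    // both throw, attach the cleanup failure to the body failure via
    // addSuppressed so callers see the root cause first.
    static void tryWithSafeFinally(Runnable body, Runnable cleanup) {
        Throwable primary = null;
        try {
            body.run();
        } catch (Throwable t) {
            primary = t;
            throw t;                        // rethrow the original error
        } finally {
            try {
                cleanup.run();
            } catch (Throwable t) {
                if (primary != null) {
                    primary.addSuppressed(t); // keep the original error
                } else {
                    throw t;                  // cleanup failed on its own
                }
            }
        }
    }

    public static void main(String[] args) {
        try {
            tryWithSafeFinally(
                () -> { throw new IllegalStateException("body failed"); },
                () -> { throw new RuntimeException("cleanup failed"); });
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage() + " / suppressed: "
                + e.getSuppressed()[0].getMessage());
        }
    }
}
```

A bare `try { ... } finally { close() }` would instead surface only the cleanup exception, hiding the original failure, which is the "sibling exceptions may not be suppressed" concern raised above.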
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/5736
[SPARK-6314][CORE] handle JsonParseException for history server
This is handled in the same way with
[SPARK-6197](https://issues.apache.org/jira/browse/SPARK-6197). The result of
this PR
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/5739
[Core][test][minor] replace try finally block with tryWithSafeFinally
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/liyezhang556520
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/2134#issuecomment-93649867
jenkins, retest this please
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/5331
[SPARK-6676][BUILD] add more hadoop version support for maven profile
support `-Phadoop-2.5` and `-Phadoop-2.6` when building and testing Spark
You can merge this pull request into a Git
Github user liyezhang556520 closed the pull request at:
https://github.com/apache/spark/pull/5331
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/5331#issuecomment-6497
Ok, I'll close this PR then, thanks for your feedback, @srowen
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/4927#issuecomment-77576837
@srowen , thanks for your comments, and I have updated the code.
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/4927#issuecomment-77600734
@viirya , that is because we need to check whether the `lines` iterator is
exhausted (by checking `lines.hasNext`) to decide whether we can ignore the
exception
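The rule being described — tolerate a parse failure only on the final line, since an in-progress log can legitimately end mid-record, while a malformed line in the middle is real corruption — can be sketched as follows. This is an illustration, not Spark's code: `Integer.parseInt` stands in for the real JSON event parser.

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ReplayTolerant {
    // Replay events from a log, ignoring a parse failure only when the
    // iterator is exhausted (lines.hasNext() == false), i.e. the failure
    // hit the truncated final line of an unfinished (.inprogress) file.
    static int replay(Iterator<String> lines) {
        int events = 0;
        while (lines.hasNext()) {
            String line = lines.next();
            try {
                Integer.parseInt(line); // stand-in for parsing a JSON event
                events++;
            } catch (NumberFormatException e) {
                if (lines.hasNext()) {
                    throw e;            // mid-file corruption: rethrow
                }
                // truncated last line of an in-progress log: ignore
            }
        }
        return events;
    }

    public static void main(String[] args) {
        // Two complete records followed by a line cut off mid-write.
        List<String> inProgress = Arrays.asList("1", "2", "{\"truncat");
        System.out.println(replay(inProgress.iterator()));
    }
}
```

The same malformed string anywhere before the last position would propagate the exception instead of being swallowed.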
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/4924
[CORE, DEPLOY][minor] align arguments order with docs of worker
The help message for starting `worker` is `Usage: Worker [options]
master`. While in `start-slaves.sh`, the format
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/4891#issuecomment-77366985
Hi @viirya , @andrewor14 , I don't think renaming `.inprogress` to a normal log
file is a good idea. If the log file remains with `.inprogress`, there must be
some
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4891#discussion_r25864977
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -202,13 +202,23 @@ private[spark] class
GitHub user liyezhang556520 opened a pull request:
https://github.com/apache/spark/pull/4927
[SPARK-6197][CORE] handle json exception when history file not finished
writing
For details, please refer to
[SPARK-6197](https://issues.apache.org/jira/browse/SPARK-6197)
You can merge
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/4927#issuecomment-77509856
cc @srowen, @andrewor14, @viirya
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/4927#issuecomment-77511029
Yes, I did encounter the problem; if you press Ctrl+C while tasks are
finishing, it easily happens.
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/4927#issuecomment-77511789
And I don't think the write operation is atomic; signals can still interrupt
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/4927#issuecomment-77511920
ok, thanks @zsxwing for answering the second question.
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4848#discussion_r25671520
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -774,7 +778,7 @@ private[spark] class Master(
case fnf
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/4848#issuecomment-76887024
Hi @srowen , @viirya , any further comments?
Github user liyezhang556520 commented on the pull request:
https://github.com/apache/spark/pull/4848#issuecomment-76717703
Jenkins, retest this please.
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4848#discussion_r25583715
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -736,30 +736,31 @@ private[spark] class Master(
val appName
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4848#discussion_r25584310
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -736,30 +736,31 @@ private[spark] class Master(
val appName