Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2958#issuecomment-60567168
The same replacement happened in https://github.com/apache/spark/pull/2276;
the same change in `runExecutorLauncher` is mentioned in that PR but was never made
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2955#issuecomment-60587013
You mean we make `ApplicationMasterArguments` accept the memory parameter in
two kinds of formats: one is the 2g style and the other is just a number in megabytes?
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2955#issuecomment-60588263
There is a better idea: we use `MemoryParam` to accept only the 2g-style
parameter in `ApplicationMaster` and have the memory string passed by
`ClientBase` appended
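The two formats under discussion can be illustrated with a small sketch. This is plain shell with a hypothetical helper name, not Spark's actual `MemoryParam`:

```shell
# mem_to_mb is a hypothetical helper, not Spark's MemoryParam: it accepts
# both the "2g"/"512m" style and a bare number interpreted as megabytes.
mem_to_mb() {
  case "$1" in
    *g|*G) echo $(( ${1%[gG]} * 1024 )) ;;  # "2g" -> 2048
    *m|*M) echo "${1%[mM]}" ;;              # "512m" -> 512
    *)     echo "$1" ;;                     # bare number, assumed MB
  esac
}

mem_to_mb 2g      # 2048
mem_to_mb 512m    # 512
mem_to_mb 1024    # 1024
```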
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2955#issuecomment-60589510
Code updated. How about it?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2955#issuecomment-60590793
At the beginning I misunderstood your point. Shame on me.
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2958#issuecomment-60706448
@vanzin @benoyantony Could you two help check this?
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2745#issuecomment-58916414
@srowen I checked and found the line already contains more than 100
characters, so I kept the wrapping. The period is also deleted.
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2745#issuecomment-58735051
Actually I don't understand what the `Runtime Environment` category means.
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2541#issuecomment-57803237
@JoshRosen I have tested it and it worked fine. You can also easily try it yourself.
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2541#issuecomment-57803284
@JoshRosen I have tested it and it worked fine. You can also easily try it yourself.
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2573#issuecomment-57285525
Actually HistoryServer can read application logs generated by Spark apps on
another node. The `spark.eventLog.dir` could differ between the two nodes. So
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2573#issuecomment-57131370
Looks like `spark.history.fs.logDirectory` and `spark.eventLog.dir` are the same
configuration item on different sides (driver side and HistoryServer side). I
think
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/2579
[SPARK-3722][Docs]minor improvement and fix in docs
https://issues.apache.org/jira/browse/SPARK-3722
You can merge this pull request into a Git repository by running:
$ git pull https
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2509#issuecomment-57082932
@liancheng I tried `export` and it worked. Thanks for the suggestion.
Also modified permission of `stop-thriftserver.sh`.
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2509#issuecomment-57097696
eh... I cloned the repository on another laptop and found it's executable,
as shown in the top-left corner of
https://github.com/WangTaoTheTonic/spark/blob
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/2567
[SPARK-3715][Docs]minor typo
https://issues.apache.org/jira/browse/SPARK-3715
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
Github user WangTaoTheTonic commented on a diff in the pull request:
https://github.com/apache/spark/pull/2509#discussion_r18123634
--- Diff: sbin/spark-daemon.sh ---
@@ -142,8 +142,12 @@ case $startStop in
spark_rotate_log $log
echo starting $command
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/2541
[SPARK-3696]Do not override the user-defined conf_dir
https://issues.apache.org/jira/browse/SPARK-3696
We check whether SPARK_CONF_DIR is already defined before assigning it.
You can merge
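The guard described above follows a standard shell idiom: only assign the variable when the user has not already set it. The path below is illustrative, not Spark's actual default:

```shell
# Keep a user-defined SPARK_CONF_DIR; fall back to a default only when unset.
# "/opt/spark/conf" is an example path, not Spark's real default location.
SPARK_CONF_DIR="${SPARK_CONF_DIR:-/opt/spark/conf}"
echo "$SPARK_CONF_DIR"
```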
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2509#issuecomment-56818354
@liancheng As you said, I put spark-submit as an option to achieve
generalization and used source (dot) instead of `exec` to make `SUBMISSION_OPTS
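The source-versus-`exec` point can be demonstrated in a few lines: sourcing runs a script in the current shell, so variables it sets remain visible afterwards, whereas `exec` would replace the shell process entirely. The script contents and value here are illustrative:

```shell
# Write a tiny script that sets a variable, then source it with ".".
tmp_script="$(mktemp)"
echo 'SUBMISSION_OPTS="--master local"' > "$tmp_script"

. "$tmp_script"            # runs in the current shell, so the variable survives
echo "$SUBMISSION_OPTS"    # --master local

rm -f "$tmp_script"
```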
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/2509
[SPARK-3658]Take thrift server as a daemon
https://issues.apache.org/jira/browse/SPARK-3658
And keep the `CLASS_NOT_FOUND_EXIT_STATUS` and exit message in
`SparkSubmit.scala`.
You
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2454#issuecomment-56276648
I think it is better to use a lazy val for readability (putting all elements of
defaultSparkProperties into the value properties is more comfortable than the
other way around
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2454#issuecomment-56252307
@vanzin Sorry for that. Fixed.
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-56020425
@sarutak Thanks, I see. I thought only committers could do it this way.
---
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/2445
[SPARK-3589][Minor]remove redundant code
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/WangTaoTheTonic/spark removeRedundant
Github user WangTaoTheTonic commented on a diff in the pull request:
https://github.com/apache/spark/pull/2445#discussion_r17765294
--- Diff: bin/spark-class ---
@@ -169,7 +169,6 @@ if [ -n $SPARK_SUBMIT_BOOTSTRAP_DRIVER ]; then
# This is used only if the properties file
Github user WangTaoTheTonic commented on a diff in the pull request:
https://github.com/apache/spark/pull/2445#discussion_r17766944
--- Diff: bin/spark-class ---
@@ -169,7 +169,6 @@ if [ -n $SPARK_SUBMIT_BOOTSTRAP_DRIVER ]; then
# This is used only if the properties file
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/2454
[SPARK-3599]Avoid loading properties file frequently
https://issues.apache.org/jira/browse/SPARK-3599
You can merge this pull request into a Git repository by running:
$ git pull https
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/2427
[SPARK-3565]Fix configuration item not consistent with document
https://issues.apache.org/jira/browse/SPARK-3565
spark.ports.maxRetries should be spark.port.maxRetries. Make
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2427#issuecomment-55920064
I am not clear why a test failed in
org.apache.spark.broadcast.BroadcastSuite, but it seems to have nothing to
do with this commit.
Could we
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-55984128
Use 127 instead; it is the largest prime number less than 128.
How about it, guys?
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2427#issuecomment-55984199
@andrewor14 Looks like Jenkins is not triggered?
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-55986443
Sorry for not noticing: "If a command is not found, the child process
created to execute it returns a status of 127. If a command is found but is not
executable
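The convention quoted above is easy to verify in a shell session; the file names below are throwaway examples:

```shell
# Command not found -> exit status 127.
/no/such/command 2>/dev/null || status=$?
echo "$status"    # 127

# Found but not executable -> exit status 126 (mktemp files have no exec bit).
tmp_file="$(mktemp)"
"$tmp_file" 2>/dev/null || status=$?
echo "$status"    # 126
rm -f "$tmp_file"
```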
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2427#issuecomment-55986575
He might be tired. -_-
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-55991903
Gosh, the test failed. I looked at block generator throttling in
NetworkReceiverSuite.scala but couldn't see why.
---
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/2421
Using a special exit code instead of 1 to represent ClassNotFoundException
As an improvement over https://github.com/apache/spark/pull/1944, we should use
a more specific exit code
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/2421#issuecomment-55843315
@liancheng What do you think?
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/1980#issuecomment-55356016
I have not studied SecurityManager very much, but it only does
authentication, not encryption of communication.
I am not sure
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/1106#issuecomment-54588914
The JIRA is: https://issues.apache.org/jira/browse/SPARK-3411.
Because the filter will create a copy of the workers, I changed the way of
filtering.
The shuffle
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/1106#issuecomment-54700647
@JoshRosen, the PR title is already modified, and so is the JIRA.
@andrewor14 I think keeping track of the workers' resources is too complex, so
I chose worker
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/2234
[SPARK-3344]Reformat code: add blank lines
Add blank lines between test cases.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
Github user WangTaoTheTonic closed the pull request at:
https://github.com/apache/spark/pull/2234
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/1926#issuecomment-53378027
All done. The issue is https://issues.apache.org/jira/browse/SPARK-3225.
Please check.
---
Github user WangTaoTheTonic closed the pull request at:
https://github.com/apache/spark/pull/1714
---
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/1926
Typo in script
use_conf_dir = user_conf_dir in load-spark-env.sh.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/WangTaoTheTonic/spark
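The reason such a typo slips through silently: in shell, referencing a misspelled (and therefore unset) variable simply expands to the empty string instead of raising an error. The values below are illustrative:

```shell
user_conf_dir="/opt/spark/conf"        # the intended variable (example value)
echo "correct: '$user_conf_dir'"       # correct: '/opt/spark/conf'
echo "typo:    '${use_conf_dir-}'"     # typo:    ''  (unset, expands empty)
```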
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/1838#issuecomment-51675161
Could someone merge this? Thanks. @rxin
---
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/1838
[Web UI]Make decision order of Worker's WebUI port consistent with Master's
The decision order of the Worker's WebUI port is --webui-port,
SPARK_WORKER_WEBUI_PORT, 8081 (default
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/1838#issuecomment-51499869
Sorry for my carelessness. Now I fixed it.
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/1714#issuecomment-51287555
Can anyone verify this?
---
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/1714
[SPARK-2750]Add Https support for Web UI
https://issues.apache.org/jira/browse/SPARK-2750
Already tested on a 1-master, 3-worker cluster.
You can merge this pull request into a Git
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/1106#issuecomment-46536007
In short, the commit better balances the load when a lot of drivers arrive,
while not causing bad performance when drivers are few
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/1106#issuecomment-46401368
Another situation is that the worker list changes frequently, which will
cause drivers to be relaunched a lot.
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/1105#issuecomment-46437289
Updated.
---
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/1105
Minor fix
The value env is never used in SparkContext.scala.
Add a detailed comment for the method setDelaySeconds in MetadataCleaner.scala
instead of the uncertain one.
You can merge
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/1106
Optimize the schedule procedure in Master
If the waiting driver array is too big, the drivers in it will be
dispatched to the first worker we get (if it has enough resources
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/1106#issuecomment-46390574
You mean the increased shuffling may lead to bad performance?
---
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/565#issuecomment-41492225
Updated, please check.
---
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/565
Handle the vals that are never used
In XORShiftRandom.scala, use the val million instead of the constant 1e6.toInt.
Delete vals that are never used in other files.
You can merge this pull request
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/565#issuecomment-41473587
Hi Owen, thanks for your suggestion.
I inspected the unused assignments, method parameters and symbols using
IntelliJ IDEA; here are the results, excluding test
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/565#issuecomment-41484541
Thanks for that, I already fixed it.
---
GitHub user WangTaoTheTonic opened a pull request:
https://github.com/apache/spark/pull/553
Delete the vals that are never used
It seems that the vals startTime and endTime are never used, so delete
them.
You can merge this pull request into a Git repository by running:
$ git pull