Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/19404
Thanks for the good inputs. Closing this PR.
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h
Github user rekhajoshm closed the pull request at:
https://github.com/apache/spark/pull/19404
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/20337
thanks @srowen done.
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/20337
[SPARK-11630] [core] ClosureCleaner moved from warning to debug
## What changes were proposed in this pull request?
ClosureCleaner moved from warning to debug
## How was this patch
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/20338
[SPARK-11222][BUILD][PYTHON] python code style checker update
## What changes were proposed in this pull request?
Referencing latest python code style checking from PyPi/pycodestyle
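pycodestyle (the renamed pep8 tool on PyPI) flags violations such as over-long lines (E501). As a minimal illustrative sketch of that class of check — not Spark's actual dev/lint-python logic, which invokes the real linter — assuming the max-line-length of 100 configured in dev/tox.ini:

```python
# Minimal sketch of a pycodestyle-style E501 (line too long) check.
# Illustrative only; the real check runs the pycodestyle tool itself.
MAX_LINE_LENGTH = 100  # matches max-line-length in dev/tox.ini

def check_line_length(source: str, max_len: int = MAX_LINE_LENGTH):
    """Return (line_number, length) pairs for lines exceeding max_len."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > max_len:
            violations.append((lineno, len(line)))
    return violations

code = "short = 1\n" + "long_name = " + "0" * 95 + "\n"
print(check_line_length(code))  # → [(2, 107)]
```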
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/20338
Thanks @HyukjinKwon for your review. Updated, please verify.
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/20338#discussion_r162837153
--- Diff: dev/tox.ini ---
@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/20338#discussion_r162838933
--- Diff: dev/tox.ini ---
@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/20347
[SPARK-20129][Core] JavaSparkContext should use SparkContext.getOrCreate
## What changes were proposed in this pull request?
Using SparkContext getOrCreate() instead of recreating new sc in
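The getOrCreate pattern returns an already-active context instead of constructing a new one each time. A hedged Python sketch of the pattern itself — the class and lock names here are illustrative, not Spark's actual SparkContext implementation:

```python
import threading

class Context:
    """Illustrative singleton-style context mirroring the getOrCreate pattern."""
    _active = None
    _lock = threading.RLock()

    def __init__(self, name: str):
        self.name = name

    @classmethod
    def get_or_create(cls, name: str = "default"):
        # Reuse the active instance if one exists; otherwise create it.
        with cls._lock:
            if cls._active is None:
                cls._active = cls(name)
            return cls._active

a = Context.get_or_create("first")
b = Context.get_or_create("second")  # reuses the existing context
print(a is b, a.name)  # → True first
```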
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/20338
@HyukjinKwon @ueshin I was a bit hesitant to change the variables/references, as pep8
is well established within Python circles as the Python style checker. However, after
pondering, I have made the change. Please
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/20378
[SPARK-11222][Build][Python] Python document style checker added
## What changes were proposed in this pull request?
Using pydocstyle for python document style checker
https://github.com
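pydocstyle checks conformance to PEP 257 docstring conventions, for example flagging missing docstrings (the D1xx codes). A minimal illustrative sketch of that class of check using the standard ast module — not the actual pydocstyle implementation:

```python
import ast

def missing_docstrings(source: str):
    """Return names of module-level functions/classes lacking a docstring,
    roughly what pydocstyle's D1xx (missing docstring) codes report."""
    tree = ast.parse(source)
    missing = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            if ast.get_docstring(node) is None:
                missing.append(node.name)
    return missing

sample = '''
def documented():
    """Has a docstring."""

def undocumented():
    pass
'''
print(missing_docstrings(sample))  # → ['undocumented']
```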
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/20347
Thank you @srowen, I admire you for doing what you do over all the JIRAs/PRs
I have studied and followed up on.
If it's ok, I will keep this PR open for a few days, and close it if the JIRA is
getti
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/20418
[SPARK-790][MESOS] Implement reregistered callback for MesosScheduler
## What changes were proposed in this pull request?
Implement reregistered callback for MesosScheduler.
There are
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/20378
Thanks @HyukjinKwon for review.
This checks only Python files, as $PATHS_TO_CHECK is passed, and it
takes only Python files (find . -name "*.py"). Also the single file
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/20378
Thanks @HyukjinKwon for your update.
@HyukjinKwon @holdenk @ueshin @viirya @icexelloss @felixcheung
@BryanCutler and @MrBago - While you are thinking on it, below is my analysis
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/20432
LGTM @ueshin
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/20378#discussion_r164926603
--- Diff: dev/lint-python ---
@@ -83,6 +84,53 @@ else
rm "$PEP8_REPORT_PATH"
fi
+ Python Document St
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/20500
[SPARK-14023][MLlib] Make exceptions consistent regarding fields and columns
## What changes were proposed in this pull request?
Make exceptions consistent regarding fields and columns
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/20501
[SPARK-22430][Docs] Unknown tag warnings when building R docs with Roxygen
6.0.1
## What changes were proposed in this pull request?
Removed @export tag to get rid of unknown tag warnings
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/20378
@HyukjinKwon I will check it over the weekend. Thanks.
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/20556
[SPARK-23367][Build] Include python document style checking
## What changes were proposed in this pull request?
Include python document style checking.
This PR includes the pydocstyle
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/20378#discussion_r167148657
--- Diff: dev/lint-python ---
@@ -83,6 +84,53 @@ else
rm "$PEP8_REPORT_PATH"
fi
+ Python Document St
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/20378
@HyukjinKwon Identifying docstyle failures does not help much, as it is not
straightforward to exclude in this version
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/20378
@HyukjinKwon @holdenk @ueshin @viirya @icexelloss @felixcheung @BryanCutler
and @MrBago - This was one of the possible approaches that I was running by you.
I have proposed another approach at
Github user rekhajoshm closed the pull request at:
https://github.com/apache/spark/pull/20378
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/20501
Ack. Thanks for the update @felixcheung @srowen. Closing this.
Github user rekhajoshm closed the pull request at:
https://github.com/apache/spark/pull/20501
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/20556
Build error unrelated to this PR. Retest this please.
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/20556#discussion_r170792015
--- Diff: dev/lint-python ---
@@ -21,10 +21,15 @@ SCRIPT_DIR="$( cd "$( dirname "$0" )" && pwd )"
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/20556#discussion_r170792029
--- Diff: dev/lint-python ---
@@ -21,10 +21,15 @@ SCRIPT_DIR="$( cd "$( dirname "$0" )" && pwd )"
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/20556#discussion_r170792051
--- Diff: dev/lint-python ---
@@ -82,6 +87,34 @@ else
rm "$PYCODESTYLE_REPORT_PATH"
fi
+# Check python document style,
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/20556#discussion_r170792037
--- Diff: dev/tox.ini ---
@@ -17,3 +17,5 @@
ignore=E402,E731,E241,W503,E226,E722,E741,E305
max-line-length=100
exclude=cloudpickle.py
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/20556#discussion_r170792111
--- Diff: dev/lint-python ---
@@ -82,6 +87,34 @@ else
rm "$PYCODESTYLE_REPORT_PATH"
fi
+# Check python document style,
GitHub user rekhajoshm reopened a pull request:
https://github.com/apache/spark/pull/20501
[SPARK-22430][Docs] Unknown tag warnings when building R docs with Roxygen
6.0.1
## What changes were proposed in this pull request?
Removed @export tag to get rid of unknown tag warnings
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/20501
done @felixcheung thanks
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/21604
[SPARK-24614][PySpark] : Fix for SyntaxWarning on tests.py
## What changes were proposed in this pull request?
Fix for SyntaxWarning on tests.py
## How was this patch tested
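The PR text does not show the specific warning being fixed. One classic source of SyntaxWarning in test files is asserting a parenthesized tuple, which is always truthy, so the assert can never fail; whether that was the exact issue in tests.py is an assumption here. A hedged sketch of detecting and fixing that pattern:

```python
import warnings

# Hypothetical examples: the bad form asserts a non-empty tuple (always true),
# the good form is the correct two-argument assert.
code_bad = 'assert(1 == 2, "values differ")'
code_good = 'assert 1 == 1, "values differ"'

def compile_syntax_warnings(src: str):
    """Compile src and return any SyntaxWarning messages it triggers."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        compile(src, "<test>", "exec")
        return [str(w.message) for w in caught
                if issubclass(w.category, SyntaxWarning)]

print(len(compile_syntax_warnings(code_bad)) > 0)  # the tuple assert warns
print(compile_syntax_warnings(code_good))          # the fixed form is clean
```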
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/21683
[SPARK-24507][Documentation] Update streaming guide
## What changes were proposed in this pull request?
Updated streaming guide for direct stream and link to integration guide
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/21660#discussion_r199335279
--- Diff:
resource-managers/kubernetes/integration-tests/src/test/scala/org/apache/spark/deploy/k8s/integrationtest/KubernetesSuite.scala
---
@@ -21,17
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/21684
[SPARK-24470][Core] RestSubmissionClient to be robust against 404 & non
json responses
## What changes were proposed in this pull request?
Added check for 404, to avoid json parsing on
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/21684
```
Discovery starting.
Discovery completed in 5 seconds, 946 milliseconds.
Run starting. Expected test count is: 16
StandaloneRestSubmitSuite:
- construct submit request
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/21684
Retest please. The failure is unrelated to this PR.
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/21684#discussion_r200735457
--- Diff:
core/src/main/scala/org/apache/spark/deploy/rest/RestSubmissionClient.scala ---
@@ -233,30 +233,41 @@ private[spark] class
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/21684#discussion_r200737560
--- Diff:
core/src/main/scala/org/apache/spark/deploy/rest/RestSubmissionClient.scala ---
@@ -233,30 +233,41 @@ private[spark] class
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/21684#discussion_r200741018
--- Diff:
core/src/test/scala/org/apache/spark/deploy/rest/StandaloneRestSubmitSuite.scala
---
@@ -366,7 +366,8 @@ class StandaloneRestSubmitSuite
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/21684#discussion_r200784938
--- Diff:
core/src/main/scala/org/apache/spark/deploy/rest/RestSubmissionClient.scala ---
@@ -233,30 +233,43 @@ private[spark] class
Github user rekhajoshm commented on the issue:
https://github.com/apache/spark/pull/21684
```
Discovery starting.
Discovery completed in 3 seconds, 75 milliseconds.
Run starting. Expected test count is: 16
StandaloneRestSubmitSuite:
- construct submit request
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/9440
[Spark-11478] [ML] ML StringIndexer return inconsistent schema
```val data = sc.parallelize(Seq((0, "a"), (1, "b"), (2, "c"), (3, "a"
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/9440#issuecomment-153831974
Thanks @yanboliang for your comments. My findings were similar to yours, and
nullable is the cause, driven by attr.toStructField(). This was a few-second
quick look
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/9525
[SPARK-11531] [ML] : SparseVector error Msg
PySpark SparseVector should have "Found duplicate indices" error message
You can merge this pull request into a Git repository by running:
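The requested behavior: a PySpark SparseVector constructed with duplicate indices should fail with a "Found duplicate indices" error message. A standalone hedged sketch of that validation — illustrative only, not PySpark's actual SparseVector class:

```python
def check_sparse_indices(indices):
    """Validate sparse-vector indices, raising the error message the PR
    proposes ("Found duplicate indices") when an index repeats."""
    seen = set()
    for i in indices:
        if i in seen:
            raise ValueError(f"Found duplicate indices: {i}")
        seen.add(i)
    return sorted(indices)

print(check_sparse_indices([0, 2, 5]))  # → [0, 2, 5]
try:
    check_sparse_indices([0, 2, 2])
except ValueError as e:
    print(e)  # → Found duplicate indices: 2
```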
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7602#issuecomment-133373808
Thanks @srowen @andrewor14 for the merge.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7253#issuecomment-120182769
Thanks @vanzin. Unless I am missing something, I intentionally used ascending
order for both start and end time. Users want to see which ended first
(ascending end time) on
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7253#issuecomment-120186854
Ack @vanzin, updating. Start time also in descending order - users want to see
which started last, first??
In any case, the concerned JIRA issue hits just case 4 in
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7253#issuecomment-120191046
@vanzin ok. With descending end time, what I am thinking is:
as we both agree in case 4, when a1 is incomplete (end time unknown, but
optimistically will occur
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7253#issuecomment-120207067
@vanzin thanks. Confirming you are ok with descending start time where
start time is checked for, and ascending end time is fine?
I can do descending start
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7253#issuecomment-120227192
IMO that would not be correct @vanzin: for the case when a1 is completed
and a2 is incomplete, if a1 starts before a2, doing a descending start time
check would
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7253#issuecomment-120755608
Hi. Updated the commit to order by descending start time if both
attempts are completed or running, else completed attempts before running
attempts. Maybe that is
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7253#issuecomment-121064833
Hmm, I see your point @vanzin. According to your prescription then, later
attempts before earlier attempts is the correct order. Did you consider all
possibilities
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7253#issuecomment-121092259
Thanks @vanzin, convinced; updated to have descending start time for all
cases.
Thanks for the conversation @vanzin
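The ordering this thread converges on: sort application attempts by descending start time for all cases. A hedged sketch of that sort key — the attempt fields below are illustrative, not the HistoryServer's actual data model:

```python
# Sort application attempts newest-first by start time, as agreed in the thread.
attempts = [
    {"id": "attempt-1", "start": 100, "completed": True},
    {"id": "attempt-2", "start": 250, "completed": False},
    {"id": "attempt-3", "start": 175, "completed": True},
]

ordered = sorted(attempts, key=lambda a: a["start"], reverse=True)
print([a["id"] for a in ordered])  # → ['attempt-2', 'attempt-3', 'attempt-1']
```

Descending start time sidesteps the case analysis discussed above (completed vs. incomplete attempts), because it never needs the possibly-unknown end time.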
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7253#issuecomment-121456683
done @vanzin thanks
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7253#issuecomment-121711991
done @vanzin thanks
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/7481
[SPARK-9118][ML]: Implement IntArrayParam in mllib
Implement IntArrayParam in mllib
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7253#issuecomment-122435567
thanks @vanzin @srowen @andrewor14 !
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7481#issuecomment-122508200
thanks @mengxr
Github user rekhajoshm closed the pull request at:
https://github.com/apache/spark/pull/6969
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/6969#issuecomment-116844189
ack @andrewor14
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/6973#issuecomment-116877384
@srowen done. Can you please review/approve it? Thanks.
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/6973#issuecomment-116913875
Thanks @andrewor14 for your quick input. The last commit 2ce5760 was updated
for the comment from @srowen, kindly check. Thanks.
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/6973#issuecomment-116932789
Had missed that inline comment. Updated, @andrewor14 @srowen, thanks.
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/6973#issuecomment-117281723
Thanks for your inputs @srowen @andrewor14. Done. Also removed unused
JavaConversions._, please review, thanks.
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/6973#issuecomment-117352799
thank you @andrewor14 @srowen for the reviews and merge.
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/5992#issuecomment-117846787
Thanks @jkbradley @mengxr for the discussion and quick inputs. Done, please
review. Thanks.
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/5992#issuecomment-117888609
Thanks @jkbradley for your review. Done, please have a look. Thanks.
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/5992#issuecomment-118206234
Thanks @mengxr for your comment. Done, please have a look. Also raised
SPARK-8806, please check if that makes sense? Thanks.
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/5992#issuecomment-118665283
thanks @jkbradley @mengxr for the reviews and merge.
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/7251
[SPARK-3164] [ML] DecisionTree Split.categories as Set
DecisionTree Split.categories as Set changes across Split, DecisionTree,
DecisionTreeModel and DecisionTreeSuite
You can merge this pull
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/7253
[SPARK-8593] [YARN]: History Server: some attempt completed to work with
showIncomplete
History Server: As I understood the issue, if some attempt of the task is
completed and in that case even
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7251#issuecomment-119278302
Closing as suggested by @jkbradley
Github user rekhajoshm closed the pull request at:
https://github.com/apache/spark/pull/7251
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7253#issuecomment-119699619
thanks @vanzin @tgravescs @srowen @andrewor14 for your comments.
My first line of attack was to remove early filtering in HistoryPage which
removes if first
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7602#issuecomment-126163542
Thanks for your comment @srowen. It does not make it worse, as toString()
was in the earlier code and was always doing it, in addition to the string
objects in the recursive
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7602#issuecomment-126524343
Thanks @srowen for your comment. The recursive calls for cluster child and
nodes within makeDotSubgraph internally append the result of each call and have
normal
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7602#issuecomment-126809040
Good point @srowen, the new StringBuilder was as in the original
code; corrected. Agree that it helps.
The OOM catch was suggestive, to back my comment on stack recursion/thread
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7602#issuecomment-126826912
Thanks for your comments @srowen. Done. Please can you have a look? Thanks!
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7602#issuecomment-127066526
Thanks @andrewor14 for your review comments. done. What you say is correct.
That said, in stack recursion it is simply bad practice to create more objects
than
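The point being debated here: building a large string via recursive concatenation allocates many intermediate objects, while threading one shared builder through the recursion does not. A hedged Python sketch of the two styles — illustrative only; the actual change is in Scala's RDDOperationGraph.makeDotSubgraph:

```python
def build_concat(node):
    """Concatenation style: each += allocates a new intermediate string."""
    s = f"subgraph {node['name']} {{\n"
    for child in node.get("children", []):
        s += build_concat(child)
    return s + "}\n"

def build_shared(node, out):
    """Shared-builder style: one list is appended to across the recursion,
    mirroring passing a single StringBuilder through the calls."""
    out.append(f"subgraph {node['name']} {{\n")
    for child in node.get("children", []):
        build_shared(child, out)
    out.append("}\n")

# Hypothetical toy graph standing in for an RDD operation graph.
graph = {"name": "root", "children": [{"name": "stage1"}, {"name": "stage2"}]}
parts = []
build_shared(graph, parts)
assert "".join(parts) == build_concat(graph)  # same output, fewer allocations
print("".join(parts))
```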
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/6969
[SPARK-8393] [Streaming] Handle InterruptedException on awaitTermination
variants
Handle InterruptedException on awaitTermination variants
You can merge this pull request into a Git repository
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/5992#issuecomment-114712687
Hi @mengxr, hmm - moving getParam out of Params, or modifying its visibility
for ${getParam(colName)}? Please can you also check with @JosephBradley of
SPARK-7137 on
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/6972
[SPARK-5768] [Web UI] Fix for incorrect memory in Spark UI
Fix for incorrect memory in Spark UI as per SPARK-5768
You can merge this pull request into a Git repository by running:
$ git
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/6973
[SPARK-2645] [Core] Fix for SparkContext stop behavior
Fix for SparkContext stop behavior
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/6973#issuecomment-115091698
Thanks for your quick inputs and review comments @sarutak @srowen
@andrewor14 @squito.
@sean - Based on my quick look yesterday, sc.stop() calls sparkEnv
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/6972#issuecomment-115093575
Thanks @andrewor14 @sarutak @srowen for the quick inputs and review comments.
Updated for the review comments. Thanks.
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/6972#issuecomment-115098070
Thanks @sarutak, done as suggested.
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/6972#issuecomment-115352921
Thanks @sarutak @srowen @sujkh85 for the quick inputs and merge.
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/6973#issuecomment-115363575
Thanks @srowen @squito for your review comments.
@srowen - log it for any kind of debugging, but can remove if not preferred
@squito - updated as suggested
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/6973#issuecomment-115518782
Thanks @vanzin for your comments. Updated the title. Other than SparkContext,
SparkEnv stop can be invoked from the Scheduler, or, if the Driver commanded a
shutdown, the Executor
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/6973#issuecomment-115825750
Thanks @srowen for your inputs. I had missed capturing the exception on
fail; updated (12f66b5). try/catch/fail(e) seems to be the accepted practice
when failing on
GitHub user rekhajoshm opened a pull request:
https://github.com/apache/spark/pull/7602
[SPARK-8889] [Core]: Fix for OOM for graph creation
Fix for OOM for graph creation
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/rekhajoshm
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7602#issuecomment-124270729
Thanks @JoshRosen for your review comment. Can you please have a look now?
Thanks!
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7602#issuecomment-125108253
ping @JoshRosen @andrewor14
Github user rekhajoshm commented on a diff in the pull request:
https://github.com/apache/spark/pull/7602#discussion_r35510171
--- Diff:
core/src/main/scala/org/apache/spark/ui/scope/RDDOperationGraph.scala ---
@@ -181,17 +181,26 @@ private[ui] object RDDOperationGraph extends
Github user rekhajoshm commented on the pull request:
https://github.com/apache/spark/pull/7602#issuecomment-125819970
@andrewor14 thanks for checking. IMO the OOM can only happen if the system is
running low on memory and/or GC is working 98% of the time and still being able
to free <