Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r29005360
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/timeline-view.js ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r29004786
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/timeline-view.js ---
@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r29006547
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -17,17 +17,172 @@
package org.apache.spark.ui.jobs
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r29007086
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -17,17 +17,172 @@
package org.apache.spark.ui.jobs
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r28989612
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -73,8 +76,13 @@ class JobProgressListener(conf: SparkConf) extends
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r28991619
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -154,6 +305,9 @@ private[ui] class AllJobsPage(parent: JobsTab) extends
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r28992557
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/timeline-view.js ---
@@ -0,0 +1,90 @@
+/*
+ * Licensed to the Apache Software
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4474#issuecomment-95805067
I'd be happy to have a patch that kills the JVM when this occurs, with a
warning message logged. I didn't realize in your original submission that this
was actually
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r29024275
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -17,17 +17,172 @@
package org.apache.spark.ui.jobs
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4450#issuecomment-95237844
Hey @sryza - two higher level questions as I'm doing a deeper review of
this.
1. This seems predicated on the idea that serialization streams can safely
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/4450#discussion_r28884412
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/ExternalSorter.scala ---
@@ -113,11 +114,21 @@ private[spark] class ExternalSorter[K, V, C
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/4450#discussion_r28882363
--- Diff:
core/src/main/scala/org/apache/spark/util/collection/PartitionedSerializedPairBuffer.scala
---
@@ -0,0 +1,254 @@
+/*
+ * Licensed
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2342#issuecomment-95284434
@sarutak will you have time to address the other feedback soon? We are
getting close to the merge deadline and I really want to make sure this feature
gets
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5633#issuecomment-95422532
This is how we usually do class loading, but IIRC, there is an issue with
certain JDBC drivers where they need to be loaded from the primordial
classloader or else
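The classloader issue mentioned here can be sketched as follows. This is an illustrative workaround pattern, not Spark's actual code (class and method bodies are assumptions): `DriverManager` only hands out drivers whose class is visible to the caller's classloader, so a JDBC driver loaded from a user-supplied jar is often wrapped in a thin delegate that was itself loaded by the primordial/application classloader and registered in the real driver's place.

```java
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverPropertyInfo;
import java.sql.SQLException;
import java.util.Properties;
import java.util.logging.Logger;

// Illustrative wrapper: loaded by the application classloader, delegating to a
// driver that may have been loaded by a child (user-jar) classloader.
class DriverWrapperSketch implements Driver {
    private final Driver wrapped;

    DriverWrapperSketch(Driver wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public boolean acceptsURL(String url) throws SQLException {
        return wrapped.acceptsURL(url);
    }

    @Override
    public Connection connect(String url, Properties info) throws SQLException {
        return wrapped.connect(url, info);
    }

    @Override
    public int getMajorVersion() { return wrapped.getMajorVersion(); }

    @Override
    public int getMinorVersion() { return wrapped.getMinorVersion(); }

    @Override
    public DriverPropertyInfo[] getPropertyInfo(String url, Properties info)
            throws SQLException {
        return wrapped.getPropertyInfo(url, info);
    }

    @Override
    public boolean jdbcCompliant() { return wrapped.jdbcCompliant(); }

    @Override
    public Logger getParentLogger() {
        throw new UnsupportedOperationException("not supported by this sketch");
    }
}
```

The wrapper would then be passed to `DriverManager.registerDriver(...)`, so lookups succeed even though the real driver class is invisible to the primordial classloader.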
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2342#issuecomment-95423610
If you have time to do an iteration in the next day or two, it would be
helpful. We can continue to go back and forth, ideally over the next week
to have it ready
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5641#issuecomment-95422599
Jenkins, test this please.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5430#discussion_r28916839
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -801,18 +801,18 @@ private[spark] object JsonProtocol {
def
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5430#discussion_r28916759
--- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
@@ -801,18 +801,18 @@ private[spark] object JsonProtocol {
def
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5149#issuecomment-94650812
Okay I'll pull this in now.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5149#issuecomment-94651348
Ah actually looks like there is a small bug caused by the refactoring (ping
@texasmichelle):
```
Traceback (most recent call last):
File ./dev
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2342#issuecomment-94889476
If you want you can keep the stage view for now, but we may decide to
remove it before merging this patch.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3913#issuecomment-94886124
Sure, sounds fine to me. We can have a static method for it.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5546#issuecomment-94884549
Yeah it's becoming much more common so I'd say let's just go for it across
both builds. Otherwise if there is some subtle issue caused by this, it will be
super
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5615#issuecomment-94921418
LGTM as well, pending tests.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5620#issuecomment-94949026
/cc @andrewor14
GitHub user pwendell opened a pull request:
https://github.com/apache/spark/pull/5620
[Minor] Comment improvements in ExternalSorter.
1. Clearly specifies the contract/interactions for users of this class.
2. Minor fix in one doc to avoid ambiguity.
You can merge this pull
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5620#issuecomment-94966747
Jenkins, retest this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5620#issuecomment-95021845
Thanks for checking this out Sandy, merging it.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5623#issuecomment-95031134
LGTM
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5149#issuecomment-94989301
Thanks! I've merged this in.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5149#issuecomment-94604740
LGTM
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5547#discussion_r28743323
--- Diff: core/src/main/resources/org/apache/spark/ui/static/jobs-graph.js
---
@@ -0,0 +1,118 @@
+function renderJobsGraphs(data) {
+ /* show
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r28726587
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -17,18 +17,137 @@
package org.apache.spark.ui.jobs
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r28727537
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -17,18 +17,137 @@
package org.apache.spark.ui.jobs
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2342#issuecomment-94583906
Hi @sarutak - I did a pretty thorough pass on this and I have some
feedback. First some high level stuff:
- The overall architecture. I'm +1 on using
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r28726698
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -17,18 +17,137 @@
package org.apache.spark.ui.jobs
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r28727987
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -17,18 +17,137 @@
package org.apache.spark.ui.jobs
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r28729612
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -17,18 +17,137 @@
package org.apache.spark.ui.jobs
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r28735427
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -73,8 +73,15 @@ class JobProgressListener(conf: SparkConf) extends
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5547#discussion_r28743664
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala ---
@@ -234,6 +235,8 @@ private[ui] class StagePage(parent: StagesTab) extends
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5547#discussion_r28743686
--- Diff: core/src/main/resources/org/apache/spark/ui/static/jobs-graph.js
---
@@ -0,0 +1,118 @@
+function renderJobsGraphs(data) {
+ /* show
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5403#issuecomment-94588226
Since it's a pretty simple implementation, I'd be fine if it were merged
in. But I think we should say clearly that it can be useful for benchmarking,
etc, but isn't
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5547#discussion_r28744020
--- Diff: core/src/main/resources/org/apache/spark/ui/static/jobs-graph.js
---
@@ -0,0 +1,118 @@
+function renderJobsGraphs(data) {
+ /* show
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/2342#discussion_r28726383
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -17,18 +17,137 @@
package org.apache.spark.ui.jobs
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5547#issuecomment-94607373
Thanks a lot for submitting this. It is a cool feature - we'll need to
think about whether we like this charting library vs the one in the timeline
view PR. I am going
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2342#issuecomment-94558457
@sarutak can you do some sanity testing to see how large this can
realistically scale to at the stage level? For instance, try with 10 tasks, 100
tasks, 500 tasks, 1000
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5149#discussion_r28713910
--- Diff: dev/merge_spark_pr.py ---
@@ -286,68 +280,149 @@ def resolve_jira_issues(title, merge_branches,
comment):
resolve_jira_issue
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5149#issuecomment-94528030
Left one minor comment about the usability of the prompt - but otherwise
looking good.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5565#issuecomment-94503276
@nkronenfeld we can do clean-ups in a way that's backwards compatible. But
we cannot remove old method signatures for the purpose of clean-up. This is why
we vet new
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5565#issuecomment-94366735
Hey @nkronenfeld - simply put, we cannot make binary-incompatible changes
to APIs in Spark due to our API guarantees, so this rules out many of your
proposed changes
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5559#issuecomment-94350661
Jenkins, this is okay to test.
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5149#discussion_r28663536
--- Diff: dev/merge_spark_pr.py ---
@@ -286,68 +281,145 @@ def resolve_jira_issues(title, merge_branches,
comment):
resolve_jira_issue
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5149#discussion_r28664751
--- Diff: dev/merge_spark_pr.py ---
@@ -286,68 +281,145 @@ def resolve_jira_issues(title, merge_branches,
comment):
resolve_jira_issue
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5149#discussion_r28664855
--- Diff: dev/merge_spark_pr.py ---
@@ -286,68 +281,145 @@ def resolve_jira_issues(title, merge_branches,
comment):
resolve_jira_issue
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5149#discussion_r28664956
--- Diff: dev/merge_spark_pr.py ---
@@ -286,68 +281,145 @@ def resolve_jira_issues(title, merge_branches,
comment):
resolve_jira_issue
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5149#discussion_r28664977
--- Diff: dev/merge_spark_pr.py ---
@@ -286,68 +281,145 @@ def resolve_jira_issues(title, merge_branches,
comment):
resolve_jira_issue
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5149#issuecomment-94361972
This is looking good in terms of functionality. I made some comments to
help simplify the code a bit.
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5149#discussion_r28569271
--- Diff: dev/merge_spark_pr.py ---
@@ -286,68 +281,137 @@ def resolve_jira_issues(title, merge_branches,
comment):
resolve_jira_issue
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-93889612
Hey @squito it looks like the automated dependency checking isn't working
so well for this PR. Can you do a diff and list all of the dependencies this is
adding
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5149#issuecomment-93891216
Hey @texasmichelle thanks for contributing this. It slipped off my radar but
it will be nice to get something like this in. One thing though, even though I
originally
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/3913#issuecomment-93786580
The reason I proposed to put in SparkContext is to avoid committing to the
current namespace/package of that object and just expose a narrower utility
function off
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5546#issuecomment-93857336
No there are no known issues, just general wondering whether this could
mess up compatibility in some subtle way, for instance if some longer exposed
classes
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4027#issuecomment-92653950
The trade-off is in predictability and how easily users can interpret the
implications of the options they are passing. I think it could be fine
later on to add
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4688#issuecomment-92537887
Hi @harishreedharan - could you add some more documentation for this? The
high level architecture here may be hard for users to see. Here are some places
you might
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5443#issuecomment-92607590
I think you may need to look at the source code to figure out what is going
on:
https://github.com/jenkinsci/ghprb-plugin/blob/master/src/main/java/org
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5335#issuecomment-92176190
I also feel that the current approach makes more sense than Josh's
alternative. Changes to SparkContext get a lot of scrutiny during code review,
so clear documentation
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5471#issuecomment-91989673
Does Maven, in general support building child modules independently without
first doing a mvn install (which anyways requires doing the entire build)?
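Maven does support this via its reactor options, which may be what the question is after. A sketch of the invocations (the `core` module name matches Spark's layout; flags are standard Maven CLI options):

```shell
# Build only the core module plus the modules it depends on
# (-pl = --projects, -am = --also-make), in one reactor,
# without a prior full "mvn install":
mvn -pl core -am -DskipTests package

# Without -am, Maven resolves sibling modules from the local
# repository, which is why "mvn install" is otherwise needed first:
mvn -pl core -DskipTests package
```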
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5335#issuecomment-91756792
ping @JoshRosen - I think he's proposed this exact change to me in the past.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5429#issuecomment-91327877
Jenkins, retest this please. LGTM
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5429#issuecomment-91132083
Actually sorry - I realized a problem with this. We can't merge this right
now.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5382#issuecomment-91125785
Great, LGTM - @WangTaoTheTonic does that look okay to you?
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5429#discussion_r28039180
--- Diff: dev/create-release/create-release.sh ---
@@ -119,13 +119,13 @@ if [[ ! $@ =~ --skip-publish ]]; then
rm -rf $SPARK_REPO
build
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5096#issuecomment-91084853
@shivaram - hey one thing I forgot to ask, how much time do the SparkR
tests add to the overall Spark tests?
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5429#issuecomment-91083610
LGTM - thanks for sending this. Jenkins, test this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5382#issuecomment-91088355
Great to have an improvement here. One thing I don't understand, there are
two curved arrows from the SparkContext to the Executors/Workers. However, in
the upper arrow
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5403#issuecomment-90736418
Yeah so my feeling on this one is I'm sure it's really useful for
benchmarks where you can size things so that data is in memory, but I'd be
really hesitant to expose
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5402#issuecomment-90752076
LGTM
On Tue, Apr 7, 2015 at 6:26 PM, UCB AMPLab notificati...@github.com wrote:
Test PASSed.
Refer to this link for build results (access rights
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5096#issuecomment-90768943
Okay LGTM from a packaging perspective. Once @andrewor14 signs off on the
spark-submit stuff I think this is ready to go.
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5372#discussion_r27794465
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1900,7 +1900,17 @@ object SparkContext extends Logging {
private[spark
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5372#issuecomment-90034656
Left a very minor comment but otherwise LGTM
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5327#issuecomment-90033364
Yeah this seems good. I think it's quite common to have this for daemon
scripts.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5372#issuecomment-89869007
Do we need to modify `isDriver` to accept the old version of the identifier
as well? For instance, if a newer version of the History server is replaying logs
from
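The backward-compatible check being asked about could look like the sketch below. The constants mirror the identifier change under review (an old `"<driver>"` id replaced by `"driver"`), but the class and constant names here are assumptions for illustration, not Spark's actual source:

```java
// Hypothetical sketch: accept both identifier forms so a newer History
// Server can replay event logs written by an older Spark version.
class ExecutorIds {
    static final String DRIVER_IDENTIFIER = "driver";          // assumed new id
    static final String LEGACY_DRIVER_IDENTIFIER = "<driver>"; // assumed old id

    static boolean isDriver(String executorId) {
        return DRIVER_IDENTIFIER.equals(executorId)
            || LEGACY_DRIVER_IDENTIFIER.equals(executorId);
    }
}
```

Keeping the legacy constant alongside the new one is the usual way to stay compatible with persisted logs without complicating call sites.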
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5328#issuecomment-89869200
Okay - let's see if we can figure that out and hopefully update the affects
and target version on the JIRA to be clear this is unrelated to 1.3.
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5363#discussion_r27776597
--- Diff: docs/building-spark.md ---
@@ -98,10 +98,10 @@ mvn -Pyarn -Phadoop-2.2 -Dhadoop.version=2.2.0
-DskipTests clean package
# Apache Hadoop 2.3.X
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5363#issuecomment-89756227
Hey Sean,
Thanks for posting this. So my straw man alternative is to just add
hadoop2.5 and hadoop2.6 profiles and associated documentation. My concern
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5328#issuecomment-89752537
@srowen @andrewor14 @vanzin - if this was caused by #5085, then does it
affect Spark 1.3?
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5096#discussion_r27752135
--- Diff: R/create-docs.sh ---
@@ -0,0 +1,46 @@
+#!/bin/bash
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5096#discussion_r27752106
--- Diff: R/SparkR_prep-0.1.sh ---
@@ -0,0 +1,52 @@
+#!/bin/sh
+
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5096#discussion_r27752993
--- Diff: R/README.md ---
@@ -0,0 +1,73 @@
+# R on Spark
+
+SparkR is an R package that provides a light-weight frontend to use Spark
from R
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5096#issuecomment-89422690
I took a pass on this. I still haven't wrapped my head totally around the
packaging, but here are a few comments:
- I'm not quite sure what all the scripts
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5286#issuecomment-88663368
Jenkins, retest this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5286#issuecomment-88728572
Jenkins, retest this please.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5286#issuecomment-88728689
/cc @brennonyork - this is giving me strange dependency messages even
though the patch doesn't touch any pom files.
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5223#discussion_r27587595
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1214,6 +1214,44 @@ private[spark] object Utils extends Logging
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5223#issuecomment-88554511
This LGTM with one minor comment - I looked closely at the external sorter
code, which seemed like the trickiest bit, and from what I can tell the
existing behavior
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5286#issuecomment-87954124
/cc @aarondav and @rxin, with whom I discussed some of the existing design.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/5286#issuecomment-87956528
We have to put that whenever we don't create a JIRA or else the scripts we
use get messed up. I can just make a JIRA for the overall clean-up.
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5286#discussion_r27454958
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -439,14 +439,10 @@ private[spark] class BlockManager
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/5286#discussion_r27454970
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -439,14 +439,10 @@ private[spark] class BlockManager
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4450#issuecomment-87956979
@sryza don't bother with my comments yet, still just taking a tour through
this part of the code.
501 - 600 of 4362 matches