Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1592#issuecomment-55306732
@staple The Jenkins pull request builder is in an odd state of flux right
now. I've manually re-triggered your build (I should have self-service retest
this please
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1951#issuecomment-55309457
This looks good to me, so I'm going to merge it into master. Thanks!
---
If your project is set up for it, you can reply to this email and have your
reply appear
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2363#issuecomment-55310770
There are two old JIRAs that seem relevant:
- [SPARK-2100](https://issues.apache.org/jira/browse/SPARK-2100): Allow
users to disable Jetty Spark UI in local
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2370#issuecomment-55429877
LGTM; I've merged this into `master`.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2372#issuecomment-55435003
According to Jenkins, this is the commit that was tested:
8f021acbd81fcf5826fe1a92639e101063e075dd. This corresponds to a commit that
was auto-generated by GitHub
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1934#issuecomment-55437586
This looks good to me. I tested `spark-submit` with both Scala and Java
examples and the error-free cases still work correctly. I also modified
`SparkPi` so
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2077#issuecomment-55440393
The "didn't merge cleanly" warning here is a false positive due to a bug in the
pull request builder; please ignore the spurious warning (I'll fix this in the
afternoon
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/260#issuecomment-55462913
To clarify, we don't have administrative access to this GitHub repository,
so we can't use the Close Issue button.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2365#issuecomment-55470194
Maybe we should wait a couple of days to hear back from the Pyrolite folks
and see if they will cut a new release.
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2336#discussion_r17509198
--- Diff: python/pyspark/shuffle.py ---
@@ -68,6 +68,11 @@ def _get_local_dirs(sub):
return [os.path.join(d, python, str(os.getpid()), sub) for d
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2369#issuecomment-55476949
This looks good to me. [There's some ongoing discussion on the
JIRA](https://issues.apache.org/jira/browse/SPARK-2797) over whether this
should be included in 1.1.1
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2369#issuecomment-55477091
I think this is clearly a bug, not a missing feature, since SchemaRDD
instances expose a public method that always throws an exception when called.
I'd like to merge
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2351#discussion_r17509328
--- Diff: python/pyspark/accumulators.py ---
@@ -215,6 +215,21 @@ def addInPlace(self, value1, value2):
COMPLEX_ACCUMULATOR_PARAM
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2365#discussion_r17509580
--- Diff: core/src/main/scala/org/apache/spark/api/python/SerDeUtil.scala
---
@@ -28,6 +30,56 @@ import org.apache.spark.rdd.RDD
/** Utilities
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2144#issuecomment-55478165
This looks good to me (Davies and I walked through the code offline). I'm
going to merge this into `master`. Thanks!
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2369#issuecomment-55479226
Backported into `branch-1.1` (a couple of minor merge conflicts, but only
in `tests.py`; I fixed them by hand).
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2369#discussion_r17510195
--- Diff: python/pyspark/tests.py ---
@@ -574,6 +574,34 @@ def test_broadcast_in_udf(self):
[res] = self.sqlCtx.sql("SELECT MYUDF('')").collect()
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2369#discussion_r17510242
--- Diff: python/pyspark/tests.py ---
@@ -574,6 +574,34 @@ def test_broadcast_in_udf(self):
[res] = self.sqlCtx.sql("SELECT MYUDF('')").collect()
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2339#issuecomment-55483793
At least some of the delay in SSH coming up could be due to security
updates being installed on machines launched with old AMIs as soon as they boot
up (take a look
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/577#issuecomment-55499135
> But I remember that if some file is using, Windows will prevent from
> deleting it.

I'm pretty sure that I observed this issue while trying to run the Maven
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1592#discussion_r17514641
--- Diff: python/pyspark/context.py ---
@@ -36,6 +37,65 @@
from py4j.java_collections import ListConverter
+__all__ = ['JavaStackTrace
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1592#discussion_r17514658
--- Diff: python/pyspark/rdd.py ---
@@ -704,7 +651,8 @@ def collect(self):
Return a list that contains all of the elements
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1592#discussion_r17514679
--- Diff: python/pyspark/sql.py ---
@@ -1624,15 +1636,40 @@ def count(self):
return self._jschema_rdd.count()
def collect(self
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1592#issuecomment-55507731
@staple @marmbrus Aside from my comments on moving the traceback functions
into their own file, this looks good to me.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2365#issuecomment-55507864
@mattf I just found out that this is blocking #2378, which is blocking
other MLlib Python API patches, so I'm going to consider merging this now...
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2336#discussion_r17514735
--- Diff: python/pyspark/worker.py ---
@@ -27,12 +27,11 @@
# copy_reg module.
from pyspark.accumulators import _accumulatorRegistry
from
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2336#issuecomment-55508153
This looks good to me.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2209#issuecomment-55509238
Jenkins will actually show you how long the tests took, which can be
helpful in narrowing down why we're seeing these timeouts. In this case, it
looks like
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1381#issuecomment-55509267
Unless anyone has objections / review feedback, I'd like to commit my
updated version of this PR. I'll do it tomorrow to give folks a chance to
weigh in.
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2383#discussion_r17514997
--- Diff: python/pyspark/tests.py ---
@@ -586,6 +586,17 @@ def test_repartitionAndSortWithinPartitions(self):
self.assertEquals(partitions[0
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2376#issuecomment-55509430
It seems like the issue here is that unnecessary objects are being included
in the closure, since presumably this bug would also manifest itself through
serialization
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1482#issuecomment-55509782
@aarondav Ah, I see your point: an OOM might be thrown from _anywhere_ and
hit the uncaught exception handler via a different code path.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2259#issuecomment-55510191
This looks good to me; merging it into master now. I wonder if we'll see a
net reduction in Jenkins flakiness due to using significantly fewer ephemeral
ports
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2385#issuecomment-55516585
Jenkins, this is ok to test.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1381#issuecomment-55626881
@aaronjosephs The binary search is a good idea, although I think there are
a few subtleties involved in getting it to work generally. Imagine that I call
sortByKey
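The comment above is cut off, but the idea it alludes to can be sketched in plain Python (a hypothetical illustration, not Spark's actual `lookup` implementation): after `sortByKey`, each partition holds key-sorted pairs, so a binary search via `bisect` can find a key's values without scanning the whole partition. One of the subtleties Josh mentions is that this only works if the partition's ordering matches the comparison the search uses (custom key functions break that assumption).

```python
from bisect import bisect_left, bisect_right

def lookup_sorted(pairs, key):
    """Find all values for `key` in a key-sorted list of (key, value) pairs.

    Assumes `pairs` is sorted by key under Python's default ordering.
    """
    keys = [k for k, _ in pairs]  # keys are already in sorted order
    lo = bisect_left(keys, key)   # first index where `key` could appear
    hi = bisect_right(keys, key)  # one past the last occurrence of `key`
    return [v for _, v in pairs[lo:hi]]

partition = [("a", 1), ("b", 2), ("b", 3), ("c", 4)]
print(lookup_sorted(partition, "b"))  # [2, 3]
```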
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/945#discussion_r17558868
--- Diff:
streaming/src/test/scala/org/apache/spark/streaming/NetworkReceiverSuite.scala
---
@@ -146,6 +146,44 @@ class NetworkReceiverSuite extends
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/945#discussion_r17561215
--- Diff:
streaming/src/test/scala/org/apache/spark/streaming/NetworkReceiverSuite.scala
---
@@ -146,6 +146,44 @@ class NetworkReceiverSuite extends
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1980#discussion_r17569562
--- Diff:
core/src/main/scala/org/apache/spark/deploy/master/WorkerInfo.scala ---
@@ -30,7 +30,7 @@ private[spark] class WorkerInfo(
val cores
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1980#discussion_r17569719
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -78,6 +78,7 @@ private[spark] class Worker(
var activeMasterUrl
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1980#discussion_r17569799
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -35,6 +35,8 @@ import org.json4s.jackson.JsonMethods.{pretty, render
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1980#discussion_r17569959
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -207,6 +210,48 @@ private[spark] object JettyUtils extends Logging
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1980#discussion_r17570027
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -207,6 +210,48 @@ private[spark] object JettyUtils extends Logging
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1980#discussion_r17570077
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -207,6 +210,48 @@ private[spark] object JettyUtils extends Logging
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1980#issuecomment-55662724
Is `spark.http.policy` the best name for this configuration option? Do you
think that this can be a boolean option, or are there cases for wanting to have
values
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1980#discussion_r17570651
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/WorkerArguments.scala ---
@@ -53,6 +53,9 @@ private[spark] class WorkerArguments(args: Array
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1980#discussion_r17570692
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/WorkerArguments.scala ---
@@ -53,6 +53,9 @@ private[spark] class WorkerArguments(args: Array
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1980#issuecomment-55663526
Also, it looks like this adds configuration options under several (new)
namespaces:
- `spark.http.policy`
- `spark.client.https.need-auth
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1980#discussion_r17571251
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -207,6 +210,48 @@ private[spark] object JettyUtils extends Logging
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1980#issuecomment-55665149
What happens if I've configured the web UI to use `https` then attempt to
browse to the `http` URL? Is it easy to set up an automatic redirect?
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2365#issuecomment-55685369
I'm going to merge this now.
As a reference / side note, http://bugs.python.org/issue2389 provides some
good context for why Python 2.6's array pickling
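The linked bug report concerns how Python pickles `array.array` objects, which is what the `SerDeUtil` changes in this PR have to interoperate with. As a rough illustration of the roundtrip shape (in a modern interpreter; Python 2.6's behavior, which the issue discusses, differed):

```python
import pickle
from array import array

# An array of C doubles; the typecode ("d") must survive pickling so the
# receiving side knows how to interpret the raw values.
a = array("d", [1.0, 2.0, 3.0])
b = pickle.loads(pickle.dumps(a))
print(b.typecode, b.tolist())  # d [1.0, 2.0, 3.0]
```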
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2378#issuecomment-55685928
Just merged #2365 in case you want to rebase.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2385#issuecomment-55687151
This looks good to me, so I'm going to merge it. Thanks!
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1592#issuecomment-55687685
Now that #2385 has been merged, this looks like it will be ready to merge
as soon as you rebase it on top of master.
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1616#discussion_r17618358
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -313,15 +313,83 @@ private[spark] object Utils extends Logging
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1616#issuecomment-55790784
@andrewor14 This seems fine to me, since it looks like the potential race
condition / collision issue has been addressed (via the new choice of
`cachedFileName
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2383#issuecomment-55792092
This looks good to me, so I'm going to merge it. Thanks!
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2406#discussion_r17621605
--- Diff: docs/configuration.md ---
@@ -520,10 +520,10 @@ Apart from these, the following properties are also
available, and may be useful
</tr>
<tr>
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2414#issuecomment-55793545
I've merged this. Thanks!
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2250#issuecomment-55798065
This seems like a good idea; I can see how the current behavior is
confusing, especially since I think it might be common for multiple apps to be
running with the same
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2292#issuecomment-55803235
This looks good to me; I'm going to merge this into master but leave the
JIRA open so that we remember to eventually remove the epydocs / etc.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/577#issuecomment-55807671
Apart from Andrew's minor comments, this looks good to me.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1616#issuecomment-55856032
This looks like a failure due to a known flaky test:
```
[info] SparkSinkSuite:
[info] - Success with ack *** FAILED ***
[info] 4000 did not equal
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2406#issuecomment-55856872
LGTM; thanks for updating the title! I'm going to merge this now.
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1977#discussion_r17678700
--- Diff: python/pyspark/rdd.py ---
@@ -1562,21 +1560,34 @@ def createZero():
return self.combineByKey(lambda v: func(createZero(), v
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1977#discussion_r17680613
--- Diff: python/pyspark/rdd.py ---
@@ -1562,21 +1560,34 @@ def createZero():
return self.combineByKey(lambda v: func(createZero(), v
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1977#discussion_r17681171
--- Diff: python/pyspark/rdd.py ---
@@ -1588,8 +1599,27 @@ def mergeCombiners(a, b):
a.extend(b)
return
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1977#issuecomment-55958078
Summarizing some of our in-person discussion (@davies, let me know if I've
made any mistakes here!):
`GroupByKey` and `SameKey` work together to address
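The comment is truncated, but the core trick in external group-by is streaming a merge of key-sorted spill runs and grouping adjacent equal keys. A minimal plain-Python sketch of that idea (illustrative only; PySpark's actual classes also handle spilling to and reading from disk):

```python
import heapq
from itertools import groupby
from operator import itemgetter

def merge_spills(spills):
    """Merge several key-sorted runs of (key, value) pairs, grouping values by key.

    Each spill must already be sorted by key, so `heapq.merge` can stream the
    combined sequence in sorted order without materializing it.
    """
    merged = heapq.merge(*spills, key=itemgetter(0))
    for key, group in groupby(merged, key=itemgetter(0)):
        yield key, [v for _, v in group]

spill1 = [("a", 1), ("b", 2)]
spill2 = [("a", 3), ("c", 4)]
print(list(merge_spills([spill1, spill2])))
# [('a', [1, 3]), ('b', [2]), ('c', [4])]
```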
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1977#issuecomment-55958996
This looks like a good patch. The code here is fairly complicated and had
some complex control flow, although after discussion I believe that it works
correctly
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1977#issuecomment-55959414
There's a bit of code duplication between `ExternalGroupBy` and
`ExternalMerger`, but maybe this is unavoidable. It would be nice to add a short
comment
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2413#issuecomment-55969817
Jenkins, retest this please.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2186#issuecomment-55970507
This almost certainly breaks binary compatibility; sorry for letting this
PR sit for so long. I'll try to update it today or tomorrow.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2333#issuecomment-55977729
@sarutak In the long run, I'd be interested in re-writing the UI in terms
of a richer REST API that exposes data as JSON, exactly for the visualization
use-case
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2417#issuecomment-56124617
LGTM. Surprising that the broadcast variable removal code was never
triggered in the test suite before; thanks for fixing that!
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2305#issuecomment-56124864
Sorry for not reviewing this until now; it sort of fell off my radar.
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2305#discussion_r17765356
--- Diff: docs/programming-guide.md ---
@@ -286,7 +286,7 @@ We describe operations on distributed datasets later on.
</div>
-One
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2313#issuecomment-56127611
This is a tricky issue.
Exact reproducibility / determinism crops up in two different senses here:
re-running an entire job and re-computing a lost partition
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2305#issuecomment-56238499
I think that Jenkins might have crashed or restarted overnight, but it
seems to be working now.
This looks good to me, so I'm going to merge it. Feel free
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2304#issuecomment-56239181
I'm going to merge this one, too, since this won't introduce any backwards
incompatibilities and makes the examples more understandable.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2465#issuecomment-56240579
I actually merged the old one using our CLI tool, so that commit should
have been included at a03e5b81e91d9d792b6a2e01d1505394ea303dd8, so I think we
can close
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2337#issuecomment-56280786
My initial thought was that a job group-based approach might be a bit
cleaner, but there are a few subtleties with that proposal that we need to
consider
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2337#issuecomment-56281339
@rxin @pwendell Since we have job groups and the ability to cancel all jobs
running in a job group (`sc.cancelJobGroup()`), then why do we need
FutureAction? It looks
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2250#issuecomment-56281680
I feel strongly that we should use the same application ID to refer to the
application in every context, since creating a different id based off
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2432#issuecomment-56281694
Can you add "closes #1067" to the description here, too?
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2432#issuecomment-56281779
Quoting @sarutak from #2250, regarding this PR:
> And for problem 2, when launching ExecutorBackends, launcher pass
> application id to ExecutorBackends
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2472#issuecomment-56281856
Ah, good catch! Since this is a doc-only markdown change, I'm going to
merge it without waiting for Jenkins.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2467#issuecomment-56281993
LGTM. Thanks!
GitHub user JoshRosen opened a pull request:
https://github.com/apache/spark/pull/2474
[SPARK-3616] Add basic Selenium tests to WebUISuite
This patch adds Selenium tests for Spark's web UI. To avoid adding extra
dependencies to the test environment, the tests use Selenium's
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2474#issuecomment-56286662
@pwendell [According to
Jenkins](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20614/testReport/org.apache.spark.ui/UISuite/),
UISuite took ~11
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2474#issuecomment-56290962
If the only issue here is test speed, maybe we can disable the slower tests
by default on Jenkins.
GitHub user JoshRosen opened a pull request:
https://github.com/apache/spark/pull/2482
[SPARK-3626] [WIP] Replace AsyncRDDActions with a more general runAsync()
mechanism
### Background
The `AsyncRDDActions` methods were introduced
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2337#issuecomment-56309633
I've opened #2482 , a pull request (WIP) illustrating my proposal to remove
`AsyncRDDActions` and replace it with a more general mechanism for
asynchronously launching
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2482#issuecomment-56310133
Fair enough, although the `AsyncRDDActions` class was marked as
`@Experimental` and the documentation for that annotation explicitly warns that
experimental APIs might
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2482#issuecomment-56315058
I've taken another pass at this. This time, I kept AsyncRDDActions but
re-implemented it using `runAsync`, but I'm actually on the fence about that
change. The one
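The `runAsync` proposal is roughly "submit a job and hand the caller a future to wait on or cancel." As a loose plain-Python analogy using `concurrent.futures` (the class and method names here are illustrative, not Spark's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

class MiniContext:
    """Toy stand-in for a context offering a runAsync-style method (illustrative only)."""

    def __init__(self):
        self._pool = ThreadPoolExecutor(max_workers=2)

    def run_async(self, action, *args):
        # Submit the action and return a Future the caller can block on,
        # poll, or cancel -- the shape AsyncRDDActions methods expose.
        return self._pool.submit(action, *args)

ctx = MiniContext()
future = ctx.run_async(sum, range(100))
print(future.result())  # 4950
```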
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2337#issuecomment-56407152
> Unless properties contains the job group info somehow.

It does, actually; the property is named `SparkContext.SPARK_JOB_GROUP_ID`.
Since `properties` can
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2337#issuecomment-56408573
> More than that, it's `private[spark]`, which means I have to hardcode the
> string in my code and hope it never changes...

Yeah, I wasn't suggesting
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2474#issuecomment-56411152
Note to self / reviewers: #2489 addresses another httpclient dependency
issue and will probably conflict with this.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/882#issuecomment-56419351
Hi @praveenr019,
I like the idea of exposing information from the web UI in a
machine-readable format. However, I'd like to do more up-front design on a
REST
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2333#issuecomment-56419802
I've opened [SPARK-3644](https://issues.apache.org/jira/browse/SPARK-3644)
as a forum for discussing the design of a REST API; sorry for the delay.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2356#issuecomment-56420439
Now that #2378 has been merged, is this unblocked?
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2492#issuecomment-56440200
BTW: it's a bit dangerous that a user can upload a new module to modify the
default behavior of the system. Currently, it's hard to find the correct
position to insert