Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4723#issuecomment-93380621
[Test build #30340 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30340/consoleFull)
for PR 4723 at commit
Github user 31z4 commented on the pull request:
https://github.com/apache/spark/pull/5361#issuecomment-93395164
@davies @jkbradley @rxin could someone look into this commit again and
trigger the Jenkins test?
---
If your project is set up for it, you can reply to this email and have
Github user shaananc commented on the pull request:
https://github.com/apache/spark/pull/5173#issuecomment-93394590
Good idea. I was on YARN 2.4, testing it with just two nodes.
I just tried running it locally rather than on the cluster and it worked
fine.
If you want
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/5526
[SQL] [WIP] Partitioning support for the data sources API
**NOTE:** Currently this is only a tentative PR discussing API design.
You can merge this pull request into a Git repository by running:
Github user petro-rudenko commented on the pull request:
https://github.com/apache/spark/pull/5510#issuecomment-93412249
Ideally, CrossValidator should handle the following cases:
1) No parameters at all: just run est.fit(dataset, new ParamMap)
2) One param: set that param on the estimator
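The cases above can be sketched with simplified stand-ins; `ParamMap` and `fitWithGrid` below are hypothetical illustrations, not Spark's real API:

```scala
// Simplified stand-in for Spark ML's ParamMap; illustrative only.
case class ParamMap(settings: Map[String, Any] = Map.empty)

// Hypothetical grid-fitting helper mirroring the cases above:
// no param maps -> fit once with an empty ParamMap;
// otherwise     -> fit one model per ParamMap.
def fitWithGrid(paramMaps: Array[ParamMap], fit: ParamMap => String): Seq[String] =
  if (paramMaps.isEmpty) Seq(fit(ParamMap()))   // case 1: no parameters at all
  else paramMaps.toSeq.map(fit)                 // case 2: apply each param map

val noParams = fitWithGrid(Array.empty, pm => s"model(${pm.settings.size} params)")
// a single model fitted with an empty ParamMap
```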
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5439#issuecomment-93354224
[Test build #30335 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30335/consoleFull)
for PR 5439 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5439#issuecomment-93348502
[Test build #30334 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30334/consoleFull)
for PR 5439 at commit
Github user isaias commented on the pull request:
https://github.com/apache/spark/pull/2055#issuecomment-93381091
@srowen done
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2447#issuecomment-93383384
@patmcdonough are you in a position to follow up on the comments above? I'm
wondering if this is alive or not or whether it should be closed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5439#issuecomment-93398033
[Test build #30335 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30335/consoleFull)
for PR 5439 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5471#issuecomment-93417801
[Test build #30336 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30336/consoleFull)
for PR 5471 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5526#issuecomment-93417227
[Test build #30344 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30344/consoleFull)
for PR 5526 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5471#issuecomment-93417860
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user emres commented on the pull request:
https://github.com/apache/spark/pull/5438#issuecomment-93419072
@srowen thanks for checking. @tdas does it look OK to you, too?
One thing I've realized is that I did not touch the streaming programming
guide, and I think I should
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/5528
SPARK-6846 [WEBUI] Stage kill URL easy to accidentally trigger and
possibility for security issue
Kill endpoints now only accept a POST (kill stage, master kill app, master
kill driver); kill link
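A minimal sketch of the idea, with hypothetical names rather than Spark's actual UI code: gate the kill action on the HTTP method so a GET link cannot trigger it accidentally.

```scala
// Hypothetical handler: only a POST may perform the kill; anything else
// gets 405 Method Not Allowed. Illustrative, not Spark's servlet code.
def handleKill(method: String, doKill: () => Unit): Int =
  if (method == "POST") { doKill(); 200 }
  else 405

var killed = false
handleKill("GET", () => killed = true)   // ignored: not a POST
handleKill("POST", () => killed = true)  // performs the kill
```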
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5439#issuecomment-93348547
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5429#issuecomment-93368600
[Test build #30338 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30338/consoleFull)
for PR 5429 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4980#discussion_r28414608
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveWriterContainers.scala ---
@@ -234,7 +234,13 @@ private[spark] class
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4910#issuecomment-93370416
@Leolh are you able to follow up on this? There are comments awaiting
changes.
Github user petro-rudenko commented on the pull request:
https://github.com/apache/spark/pull/5510#issuecomment-93373411
Maybe CrossValidator should handle an empty estimatorParamMaps?
```scala
/** @group setParam */
def setEstimatorParamMaps(value: Array[ParamMap]): this.type =
```
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4138#issuecomment-93380078
@maropu Jenkins is correct, you're modifying a public API here, since you
add a method to an abstract class. I think this one has stalled anyway; is
there another way to
Github user mce commented on a diff in the pull request:
https://github.com/apache/spark/pull/5439#discussion_r28419555
--- Diff:
extras/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisReceiver.scala
---
@@ -82,15 +82,19 @@ private[kinesis] class
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5471
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5256#issuecomment-93366627
@Sephiroth-Lin can you rebase this? And does it make sense to refer to
localhost instead? Consider also that we have `Utils.localHostName()`, which
is an attempt to
Github user squito commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-93369479
jenkins, retest this please
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4402#issuecomment-93382082
@maropu this looks stale. Is it still something you think should be merged?
maybe you can rebase and then it can be reviewed if so.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5429#issuecomment-93419944
[Test build #30338 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30338/consoleFull)
for PR 5429 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5429#issuecomment-93419968
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5528#issuecomment-93419945
[Test build #30346 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30346/consoleFull)
for PR 5528 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5510#issuecomment-93366639
[Test build #30337 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30337/consoleFull)
for PR 5510 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4886#issuecomment-93378547
@suyanNone from the description I'm still not sure I understand why this is
needed. It seems reasonable to try again later to put the data into memory, if
it is intended
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5439#issuecomment-93398087
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/5527
[SPARK-6857][SQL] Add support of numpy types for Python SQL schema inference
JIRA https://issues.apache.org/jira/browse/SPARK-6857
You can merge this pull request into a Git repository by running:
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5368#issuecomment-93360400
@rxin @sryza do you have an opinion on this? It'd be good to resolve this
and https://github.com/apache/spark/pull/5250 one way or the other. This is a
narrower approach
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5429#issuecomment-93367073
Jenkins, retest this please
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5429#issuecomment-93367242
@pwendell are you still OK with this, for 1.4? seems not-crazy, except of
course that now the project commits to publishing another artifact. I am not
sure about the
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-93372085
[Test build #30339 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30339/consoleFull)
for PR 4435 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2055#issuecomment-93385407
Cool, unless anyone pops up to object, I'll merge tomorrow.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5526#issuecomment-93411632
[Test build #30342 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30342/consoleFull)
for PR 5526 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5529#issuecomment-93420234
[Test build #30345 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30345/consoleFull)
for PR 5529 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4481#issuecomment-93371064
If there's not going to be follow up along these lines, would you mind
closing the PR?
(Aside: we now have a new syntax for specifying time properties, FWIW.)
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4273#issuecomment-93374066
This seems sort of stalled, and
https://issues.apache.org/jira/browse/SPARK-5561 suggests this should be
accomplished with a more generalized solution anyway. Is it best
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4887#discussion_r28416270
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -295,9 +296,9 @@ private[spark] class MemoryStore(blockManager:
BlockManager,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5527#issuecomment-93413194
[Test build #30343 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30343/consoleFull)
for PR 5527 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5471#issuecomment-93364836
[Test build #30336 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30336/consoleFull)
for PR 5471 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2055#issuecomment-93369197
@isaias can you make the change here? Let's use debug logging.
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/4723#issuecomment-93378234
Jenkins, retest this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2055#issuecomment-93382437
[Test build #30341 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30341/consoleFull)
for PR 2055 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5197#discussion_r28416458
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -589,10 +589,11 @@ private[spark] class BlockManager(
private def
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/3913#issuecomment-93384261
I'd like to resolve this one way or the other. My hesitation is mostly
about tacking on another method to `SparkContext`, developer API or no. Would
it really be a better
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5510#issuecomment-93418514
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5510#issuecomment-93418495
[Test build #30337 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30337/consoleFull)
for PR 5510 at commit
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/5529
[SPARK-6934][Core] Use 'spark.akka.askTimeout' for the ask timeout
Fixed my mistake in #4588
You can merge this pull request into a Git repository by running:
$ git pull
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/5529#issuecomment-93454776
cc @rxin
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5529#issuecomment-93456069
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5529#issuecomment-93456048
[Test build #30345 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30345/consoleFull)
for PR 5529 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5526#issuecomment-93424090
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4723#issuecomment-93429833
[Test build #30340 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30340/consoleFull)
for PR 4723 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4723#issuecomment-93429881
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-93434588
[Test build #30339 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30339/consoleFull)
for PR 4435 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-93434620
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2055#issuecomment-93430020
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2055#issuecomment-93429979
[Test build #30341 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30341/consoleFull)
for PR 2055 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5526#issuecomment-93427546
[Test build #30348 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30348/consoleFull)
for PR 5526 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5361#issuecomment-93444810
[Test build #677 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/677/consoleFull)
for PR 5361 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5526#issuecomment-93448783
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/5027#issuecomment-93452826
Yeah, I asked about that some time ago, and I believe the concern was about
surprising users (by changing defaults) + the fact that the Hadoop 2 distro
used by
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5528#issuecomment-93455478
[Test build #30346 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30346/consoleFull)
for PR 5528 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5528#issuecomment-93455509
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user mccheah closed the pull request at:
https://github.com/apache/spark/pull/4481
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5526#issuecomment-93432036
[Test build #30349 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30349/consoleFull)
for PR 5526 at commit
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/5526#discussion_r28424139
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/sources/interfaces.scala ---
@@ -78,6 +80,40 @@ trait SchemaRelationProvider {
schema:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5526#issuecomment-93424047
[Test build #30347 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30347/consoleFull)
for PR 5526 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5526#issuecomment-93424080
[Test build #30347 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30347/consoleFull)
for PR 5526 at commit
Github user alexliu68 commented on the pull request:
https://github.com/apache/spark/pull/5520#issuecomment-93444671
I vote to allow keywords as identifiers in OPTIONS. The existing code fails
without giving a meaningful error message if a keyword appears in OPTIONS.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5526#issuecomment-93448767
[Test build #30342 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30342/consoleFull)
for PR 5526 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5527#issuecomment-93422130
[Test build #30343 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30343/consoleFull)
for PR 5527 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5527#issuecomment-93422134
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/5526#discussion_r28425682
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/sources/interfaces.scala ---
@@ -197,3 +233,69 @@ trait InsertableRelation {
trait CatalystScan
Github user squito commented on the pull request:
https://github.com/apache/spark/pull/4435#issuecomment-93446557
Updated to go back to Java enums.
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/5510#issuecomment-93468468
Thinking more, I'm starting to feel like returning 1 empty ParamMap is
better than returning 0 ParamMaps. Basically, I think the natural unit grid
should be 1 empty
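That suggestion could look like the following guard, written here with a hypothetical, simplified `ParamMap` rather than Spark's actual class:

```scala
// Simplified stand-in for Spark ML's ParamMap; illustrative only.
case class ParamMap(settings: Map[String, Any] = Map.empty)

// Hypothetical normalization: an empty grid becomes one empty ParamMap,
// so downstream code always fits at least one model.
def normalizeGrid(maps: Array[ParamMap]): Array[ParamMap] =
  if (maps.isEmpty) Array(ParamMap()) else maps
```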
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/5488#issuecomment-93480065
@micaelcapitao Thanks. I updated the doc too.
I think you can use the JDBC data source API to create a temporary table and
then use a WHERE clause to add predicates.
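One way to picture the predicate part is a query-building helper; the function and table name below are hypothetical, not the JDBC data source's internals:

```scala
// Hypothetical sketch: append WHERE-clause predicates to the query that
// would be sent through a JDBC source. Table and predicates are made up.
def jdbcQueryWithPredicates(table: String, predicates: Seq[String]): String =
  if (predicates.isEmpty) s"SELECT * FROM $table"
  else s"SELECT * FROM $table WHERE " + predicates.mkString(" AND ")

val q = jdbcQueryWithPredicates("people", Seq("age >= 18"))
// q: "SELECT * FROM people WHERE age >= 18"
```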
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/5514#discussion_r28437452
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -49,11 +49,8 @@ private[history] class FsHistoryProvider(conf:
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5526#issuecomment-93480235
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user ilganeli commented on the pull request:
https://github.com/apache/spark/pull/4895#issuecomment-93486074
All - anything to do here?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5530#issuecomment-93486052
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5530#issuecomment-93486037
[Test build #30352 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30352/consoleFull)
for PR 5530 at commit
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/5027#issuecomment-93488039
Yeah spark-ec2 does not support Hadoop 2 right now, though there has been a
patch sitting around for a while now
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/4062#issuecomment-93488892
Sorry... Would you mind rebasing this again? Thanks!
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/5520#issuecomment-93489836
You can find the data type parser
[here](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/types/DataTypeParser.scala).
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5499#issuecomment-93491077
[Test build #30357 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30357/consoleFull)
for PR 5499 at commit
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/5478#issuecomment-93490885
That alternative solution makes sense to me. If it's not going to be added
to the classpath, it might make more sense to use a zip than a jar.
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/5520#issuecomment-93457079
I think this feature seems reasonable. The only question is: can we do it with
a regex parser, like we did with data types, instead?
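A regex-based check in that spirit might look like the following; the helper and pattern are hypothetical illustrations, not the actual DataTypeParser approach:

```scala
// Hypothetical: treat any identifier-shaped token as a valid OPTIONS key,
// so SQL keywords like `table` are not rejected. Pattern is illustrative.
def isValidOptionKey(s: String): Boolean =
  s.matches("[A-Za-z_][A-Za-z0-9_]*")

isValidOptionKey("path")   // true
isValidOptionKey("table")  // true: a keyword, but identifier-shaped
```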
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4537#issuecomment-93475010
Jenkins claims ...
```
[error] * method setConsumerOffsetMetadata(java.lang.String,scala.collection.immutable.Map)scala.util.Either in class
```
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5530#issuecomment-93478358
[Test build #30352 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30352/consoleFull)
for PR 5530 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5527#issuecomment-93478390
[Test build #30353 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30353/consoleFull)
for PR 5527 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5514#issuecomment-93481783
[Test build #30355 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30355/consoleFull)
for PR 5514 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5531#issuecomment-93490034
[Test build #30356 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/30356/consoleFull)
for PR 5531 at commit
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/5463#discussion_r28434599
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -755,104 +769,115 @@ private[spark] class BlockManager(
case _ =
Github user koeninger commented on the pull request:
https://github.com/apache/spark/pull/4537#issuecomment-93478250
That maybe makes a certain amount of sense... I'll try replacing the
default arguments with multiple overloaded methods and see if that passes MiMa.