Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-63936801
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-63936791
[Test build #23708 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23708/consoleFull)
for PR 3397 at commit
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/3262#issuecomment-63937032
@heathermiller @gzm0 - do you think this PR is good to merge now?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as
Github user zzcclp commented on the pull request:
https://github.com/apache/spark/pull/3398#issuecomment-63937355
@liancheng, thanks
---
Github user gzm0 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3262#discussion_r20702584
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1427,47 +1427,74 @@ object SparkContext extends Logging {
private[spark]
Github user gzm0 commented on the pull request:
https://github.com/apache/spark/pull/3262#issuecomment-63937449
Otherwise LGTM
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-63937868
[Test build #23715 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23715/consoleFull)
for PR 3397 at commit
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-63937895
@mengxr @jkbradley I changed the storage level to MEMORY_AND_DISK_SER
and moved them into Scala. Also added cache() for decision tree and random
forest (only three
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-63938419
@davies Let's use MEMORY_AND_DISK instead for best performance. For
decision tree, we still need to cache the input.
---
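The tradeoff behind the storage levels being discussed, serialized caching saves memory at the cost of serialization CPU, can be sketched with plain-Python pickling. This is an illustrative analogy only: Spark's MEMORY_AND_DISK_SER uses Java or Kryo serialization, not pickle, and the toy records and sizes below are invented.

```python
import pickle
import sys

# Toy records loosely resembling MLlib Rating objects: (user, product, rating).
# Hypothetical data, for illustration only.
records = [(i, i * 2, float(i % 5)) for i in range(1000)]

# "Deserialized" footprint: the list plus each live tuple object
# (shallow sizes only; real object graphs are larger still).
live_size = sys.getsizeof(records) + sum(sys.getsizeof(r) for r in records)

# "Serialized" footprint: one compact byte buffer for the whole list.
ser_size = len(pickle.dumps(records))

print(live_size, ser_size)  # the serialized form is much smaller
```

The same tradeoff drives the choice in the thread: MEMORY_AND_DISK keeps deserialized objects for the fastest access, while MEMORY_AND_DISK_SER trades CPU for a smaller cache, which is presumably why the bulky Rating input stays serialized while the tree inputs do not.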
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/3380#issuecomment-63938924
cc @aarondav
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/3341#issuecomment-63939012
LGTM too.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3225#discussion_r20703138
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -630,7 +634,10 @@ class SparkContext(config: SparkConf) extends
SparkStatusAPI with
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3398#issuecomment-63939167
[Test build #23710 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23710/consoleFull)
for PR 3398 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3398#issuecomment-63939173
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/3349#issuecomment-63939255
doc changes can go in until last minute before the release actually.
---
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/3399#issuecomment-63939425
@witgo `numExamples * miniBatchFraction` is the expected size. I thought it
should work well in practice. Did you observe failures?
---
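The expectation @mengxr cites follows from per-example Bernoulli sampling, i.e. each example is kept independently with probability miniBatchFraction (my reading of the discussion; this is a minimal stdlib sketch, not MLlib code):

```python
import random

def expected_minibatch_size(num_examples, fraction):
    # Linearity of expectation: each example contributes `fraction`
    # to the expected count, independently of the others.
    return num_examples * fraction

def simulate_minibatch(num_examples, fraction, seed=42):
    # Draw one Bernoulli trial per example and count the keepers.
    rng = random.Random(seed)
    return sum(1 for _ in range(num_examples) if rng.random() < fraction)

num_examples, fraction = 100_000, 0.1
print(expected_minibatch_size(num_examples, fraction))  # ~10000
print(simulate_minibatch(num_examples, fraction))       # close to 10000
```

The realized size fluctuates around the expectation (standard deviation sqrt(n * p * (1 - p)), about 95 here), which may be relevant to the convergence variance @witgo reports later in the thread.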
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/3262#discussion_r20703300
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1427,47 +1427,74 @@ object SparkContext extends Logging {
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3341#issuecomment-63939765
Looks good.
---
Github user gzm0 commented on the pull request:
https://github.com/apache/spark/pull/3262#issuecomment-63939837
LGTM
---
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-63939908
@mengxr Changed to MEMORY_AND_DISK. But for Rating, it uses
MEMORY_AND_DISK_SER.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3262#issuecomment-63939997
[Test build #23716 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23716/consoleFull)
for PR 3262 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3399#issuecomment-63940130
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3399#issuecomment-63940122
[Test build #23712 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23712/consoleFull)
for PR 3399 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-63940245
[Test build #23717 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23717/consoleFull)
for PR 3397 at commit
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/3341#issuecomment-63940666
Ok merging this in master and branch-1.2.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3349#discussion_r20703837
--- Diff: docs/running-on-mesos.md ---
@@ -183,6 +183,49 @@ node. Please refer to [Hadoop on
Mesos](https://github.com/mesos/hadoop).
In either case, HDFS
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/3225#discussion_r20703826
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -630,7 +634,10 @@ class SparkContext(config: SparkConf) extends
SparkStatusAPI with
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/3349#discussion_r20703846
--- Diff: docs/running-on-mesos.md ---
@@ -183,6 +183,49 @@ node. Please refer to [Hadoop on
Mesos](https://github.com/mesos/hadoop).
In either case, HDFS
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3399#issuecomment-63941966
@mengxr I'm not sure. In my test of #3222, the convergence rate of SGD
was less than expected; it may be affected by this issue.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3399#issuecomment-63942124
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3399#issuecomment-63942120
[Test build #23713 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23713/consoleFull)
for PR 3399 at commit
GitHub user watermen opened a pull request:
https://github.com/apache/spark/pull/3400
[SPARK-4535][Streaming] Fix the error in comments
change `NetworkInputDStream` to `ReceiverInputDStream`
change `ReceiverInputTracker` to `ReceiverTracker`
You can merge this pull request into
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3400#issuecomment-63943505
Can one of the admins verify this patch?
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63944081
[Test build #23714 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23714/consoleFull)
for PR 3009 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3009#issuecomment-63944088
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/3343#issuecomment-63945210
@felixmaximilian This was merged but Apache didn't close it automatically.
Do you mind closing it? Thanks!
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3375#issuecomment-63945383
[Test build #23718 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23718/consoleFull)
for PR 3375 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/3208#discussion_r20705901
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -339,18 +339,15 @@ class SqlParser extends AbstractSparkSQLParser {
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-63946469
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-63946460
[Test build #23715 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23715/consoleFull)
for PR 3397 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3399#issuecomment-63947706
[Test build #23719 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23719/consoleFull)
for PR 3399 at commit
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/3401
[SPARK-4536][SQL] Add sqrt and abs to Spark SQL DSL
Spark SQL has built-in sqrt and abs, but the DSL doesn't support those functions.
You can merge this pull request into a Git repository by running:
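To illustrate the shape of such a DSL addition, here is a hypothetical toy expression DSL in Python, not the actual Spark SQL Scala DSL; all names in it are invented for this sketch:

```python
import math

# Hypothetical sketch of exposing sqrt/abs in a tiny expression DSL,
# in the spirit of the proposal above (not actual Spark code).
class Expr:
    def __init__(self, fn):
        self.fn = fn

    def eval(self, row):
        return self.fn(row)

def col(name):
    # Reference a named column in a row (dict in this toy model).
    return Expr(lambda row: row[name])

def sqrt(e):
    return Expr(lambda row: math.sqrt(e.eval(row)))

def abs_(e):
    return Expr(lambda row: abs(e.eval(row)))

expr = sqrt(abs_(col("x")))
print(expr.eval({"x": -16.0}))  # 4.0
```

In the real PR the functions would live alongside the existing Scala expression builders; the toy only mirrors the composability (sqrt, abs nesting over columns) the change presumably aims for.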
GitHub user ScrapCodes opened a pull request:
https://github.com/apache/spark/pull/3402
[SPARK-4377] Fixed serialization issue by switching to akka provided
serializer.
... - there is no way around this for deserializing actorRef(s).
You can merge this pull request into a Git
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3401#issuecomment-63948961
[Test build #23720 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23720/consoleFull)
for PR 3401 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-63949458
[Test build #23717 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23717/consoleFull)
for PR 3397 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-63949467
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3375#issuecomment-63949550
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3375#issuecomment-63949544
[Test build #23718 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23718/consoleFull)
for PR 3375 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3402#issuecomment-63949587
[Test build #23721 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23721/consoleFull)
for PR 3402 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3262#issuecomment-63949699
[Test build #23716 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23716/consoleFull)
for PR 3262 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3262#issuecomment-63949710
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3402#issuecomment-63949791
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3402#issuecomment-63949788
[Test build #23721 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23721/consoleFull)
for PR 3402 at commit
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3402#issuecomment-63950396
@JoshRosen Please take a look and see if this fix works for us.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3402#issuecomment-63950858
[Test build #23722 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23722/consoleFull)
for PR 3402 at commit
Github user ScrapCodes commented on the pull request:
https://github.com/apache/spark/pull/3402#issuecomment-63951951
Even after this fix, someone can run into the same errors if they build
Spark with Scala 2.10, run the master first, and then try to recover it with
Spark built with
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3402#issuecomment-63953129
[Test build #23723 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23723/consoleFull)
for PR 3402 at commit
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/3397#discussion_r20708251
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala ---
@@ -74,13 +74,27 @@ class PythonMLLibAPI extends Serializable {
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/3397#discussion_r20708235
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala ---
@@ -74,13 +74,27 @@ class PythonMLLibAPI extends Serializable {
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3401#issuecomment-63956555
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3401#issuecomment-63956546
[Test build #23720 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23720/consoleFull)
for PR 3401 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3399#issuecomment-63956932
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user felixmaximilian closed the pull request at:
https://github.com/apache/spark/pull/3343
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3402#issuecomment-63959514
[Test build #23722 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23722/consoleFull)
for PR 3402 at commit
GitHub user lianhuiwang opened a pull request:
https://github.com/apache/spark/pull/3403
[SPARK-4534][Core]JavaSparkContext create new constructor to support
preferredNodeLocalityData with YARN
create new constructor to support preferredNodeLocalityData with YARN
example:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3403#issuecomment-63960552
[Test build #23724 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23724/consoleFull)
for PR 3403 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3403#issuecomment-63960658
[Test build #23724 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23724/consoleFull)
for PR 3403 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3403#issuecomment-63960659
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3402#issuecomment-63961842
[Test build #23723 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23723/consoleFull)
for PR 3402 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3402#issuecomment-63961848
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user preaudc commented on the pull request:
https://github.com/apache/spark/pull/2855#issuecomment-63969515
I have no clue what could cause this MiMa test failure, or how to fix it.
Can anybody give me a hand?
---
Github user debasish83 commented on the pull request:
https://github.com/apache/spark/pull/3098#issuecomment-63979463
More details about the API added and experiments are on the JIRA
---
Github user srowen closed the pull request at:
https://github.com/apache/spark/pull/3170
---
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-63999301
@mengxr fixed.
---
Github user adamnovak commented on the pull request:
https://github.com/apache/spark/pull/1297#issuecomment-6365
Can it be in Spark 1.3? This sort of functionality would really help us get
a Spark-based implementation of the stuff that
@ga4gh/global-alliance-committers is doing
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-64000207
[Test build #23726 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23726/consoleFull)
for PR 3397 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-64001766
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-64002601
[Test build #531 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/531/consoleFull)
for PR 3397 at commit
Github user arahuja commented on the pull request:
https://github.com/apache/spark/pull/3209#issuecomment-64003170
@vanzin that sounds reasonable, though confusing if #3233 does not go in
soon; but anyway, sounds fine to me. Is there something I should do for that?
Reopen this
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3157#issuecomment-64003495
[Test build #23727 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23727/consoleFull)
for PR 3157 at commit
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/3393#issuecomment-64003925
@pwendell Please merge it. :-)
---
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/3209#issuecomment-64009711
I don't know if it's possible to move a PR to a different branch (or
whether you need to create a new one). In any case, it's not a big deal if this
goes into master.
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/3262#issuecomment-64011914
I'm merging this in master. Thanks for working on this @zsxwing and
everybody else for reviewing.
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/3262#issuecomment-64011977
cc @mateiz @pwendell I'm leaving this out of branch-1.2, thinking it is too
last minute to merge something like this. Let me know if you want to
cherry-pick this into
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-64014157
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-64014150
[Test build #23726 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23726/consoleFull)
for PR 3397 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-64015022
[Test build #531 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/531/consoleFull)
for PR 3397 at commit
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/3262#issuecomment-64016979
Yeah merging to master sounds fine; it's too late to put it in 1.2.
---
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/3262#issuecomment-64017006
Thanks for the patch @zsxwing, this is very cool.
---
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/3403#issuecomment-64018115
preferredNodeLocalityData is currently broken (see SPARK-2089), and we're
discussing changing the API for it. I think it would be best to hold off on
this change until
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3157#issuecomment-64019070
[Test build #23727 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/23727/consoleFull)
for PR 3157 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3157#issuecomment-64019143
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user brennonyork opened a pull request:
https://github.com/apache/spark/pull/3404
SPARK-3182: Add geolocation bounding for Twitter Streaming
This PR adds an additional capability to the Twitter Streaming function
allowing a user to filter based on a series of
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3404#issuecomment-64030693
Can one of the admins verify this patch?
---
GitHub user EntilZha opened a pull request:
https://github.com/apache/spark/pull/3405
[SPARK-4543] Javadoc failure for network-common causes publish-local to fail
Pull request to accompany: https://issues.apache.org/jira/browse/SPARK-4543
Javadoc is missing from
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/3397#discussion_r20739035
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala ---
@@ -74,10 +74,28 @@ class PythonMLLibAPI extends Serializable {
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3405#issuecomment-64031328
Can one of the admins verify this patch?
---
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/3397#discussion_r20739110
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala ---
@@ -526,10 +515,15 @@ class PythonMLLibAPI extends Serializable {
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/3397#issuecomment-64031697
LGTM
@pwendell had questions about whether we should allow the user specify (in
the Python call) whether they want to use caching. CC @mengxr
---
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/3405#issuecomment-64032182
No, putting in a bunch of dummy TODOs is really not a solution. The
Javadoc messages are just warnings. I do not see why just the network-common
module ends up showing