Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/2742#issuecomment-58621877
Added a non-configurable version of the memory map pathway, with the
threshold you suggested (2MB, the size of a hugepage). Note that this fix will
also be included in
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2742#issuecomment-58622268
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21583/consoleFull)
for PR 2742 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2742#issuecomment-58627794
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2742#issuecomment-58627787
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/21583/consoleFull)
for PR 2742 at commit
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/2742#issuecomment-58628192
LGTM. Merged. Thanks!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/2742
---
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/2742#issuecomment-58724312
This needs to be configurable ... IIRC 1.1 had this configurable.
Different limits exist for VM vs. heap memory in YARN (for example).
---
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/2742#issuecomment-58727337
@mridulm Could you give an example of which way you would want to shift it
via config? Map more or less often?
---
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/2742#issuecomment-58728241
With 1.1, in experiments, we have done both: depending on whether our user code
is mmap'ing too much data (and so we pull things into heap .. using
libraries not in our
Github user mridulm commented on the pull request:
https://github.com/apache/spark/pull/2742#issuecomment-58728319
Note: this is required since there are heap and VM limits enforced, so we
juggle available memory around so that jobs can run to completion!
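The configurability mridulm asks for could look roughly like the sketch below: a threshold read from a system property, so operators constrained on virtual memory can raise it (map less, keep data on heap) while operators constrained on heap can lower it (map more). This is only an illustration; the property name `spark.storage.memoryMapThreshold` and the lookup mechanism are assumptions here, not the PR's final API.

```scala
// Hypothetical: read the mmap threshold from a system property so operators
// can shift traffic between the heap and mmap pathways per-deployment.
// The property name and default are assumptions for illustration.
def memoryMapThreshold: Long =
  sys.props.get("spark.storage.memoryMapThreshold")
    .map(_.toLong)
    .getOrElse(2L * 1024 * 1024) // default: 2 MB, the size of a hugepage
```

A YARN deployment hitting virtual-memory limits would set the property to a large value to avoid mapping; one hitting heap limits would lower it.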
On 11-Oct-2014 4:56 am,
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2742#discussion_r18689297
--- Diff: core/src/main/scala/org/apache/spark/network/ManagedBuffer.scala
---
@@ -72,7 +72,10 @@ final class FileSegmentManagedBuffer(val file: File, val
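The diff above touches `FileSegmentManagedBuffer`, where the two read pathways under discussion live. A minimal standalone sketch of the idea (not the PR's exact code; the helper name `readSegment` and the hard-coded 2 MB constant are assumptions for illustration): segments below the threshold are copied into a heap buffer, while larger ones are memory-mapped.

```scala
import java.io.{File, RandomAccessFile}
import java.nio.ByteBuffer
import java.nio.channels.FileChannel.MapMode

// Non-configurable threshold, per the comment above: 2 MB, one hugepage.
val MemoryMapThreshold: Long = 2L * 1024 * 1024

// Sketch of the two pathways: small segments are read onto the heap,
// large segments are memory-mapped read-only.
def readSegment(file: File, offset: Long, length: Long): ByteBuffer = {
  val channel = new RandomAccessFile(file, "r").getChannel
  try {
    if (length < MemoryMapThreshold) {
      // Heap pathway: allocate and fill a heap buffer.
      val buf = ByteBuffer.allocate(length.toInt)
      channel.position(offset)
      while (buf.remaining() > 0) {
        if (channel.read(buf) == -1)
          sys.error(s"Unexpected EOF reading $file at offset $offset")
      }
      buf.flip()
      buf
    } else {
      // Mmap pathway: the mapping stays valid after the channel closes.
      channel.map(MapMode.READ_ONLY, offset, length)
    }
  } finally channel.close()
}
```

The heap pathway avoids the cost of establishing (and eventually unmapping) a mapping for tiny segments, while the mmap pathway avoids copying large segments through the heap.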