GitHub user MartinWeindel commented on the pull request:
https://github.com/apache/spark/pull/5597#issuecomment-94562061
The value of 5 seconds is Mesos's default, which is used if the parameter is not
set or an invalid value is given. So at least with current versions of
Mesos
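The fallback rule described in the comment can be sketched as follows. This is a minimal illustration, not Mesos's actual code; the class and method names are hypothetical, and "invalid" is modeled here as negative or NaN:

```java
// A minimal sketch (not Mesos's actual implementation) of the fallback
// rule described above: Mesos substitutes its default of 5 seconds when
// refuse_seconds is unset or invalid.
public class RefuseSeconds {
    static final double MESOS_DEFAULT_REFUSE_SECONDS = 5.0;

    // null models "not set"; negative or NaN models an invalid value.
    static double effectiveRefuseSeconds(Double configured) {
        if (configured == null || configured.isNaN() || configured < 0.0) {
            return MESOS_DEFAULT_REFUSE_SECONDS;
        }
        return configured;
    }
}
```

Under this rule, explicitly setting refuse_seconds to the default (5.0) produces the same scheduling behavior as leaving it unset, while avoiding the invalid-value warning.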
GitHub user MartinWeindel opened a pull request:
https://github.com/apache/spark/pull/5597
Avoid warning message about invalid refuse_seconds value in Mesos >=0.21...
Starting with version 0.21.0, Apache Mesos is very noisy if the filter
parameter refuse_seconds is set to an invalid
GitHub user MartinWeindel commented on the pull request:
https://github.com/apache/spark/pull/4299#issuecomment-72538918
On 02.02.2015 at 22:07, UCB AMPLab wrote:
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https
GitHub user MartinWeindel commented on the pull request:
https://github.com/apache/spark/pull/4299#issuecomment-72387509
Moved comment after `if (!isWindows)` as suggested. Thanks!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub
GitHub user MartinWeindel commented on the pull request:
https://github.com/apache/spark/pull/4299#issuecomment-72333732
Yes, it's a regression. It worked with 1.2.0.
GitHub user MartinWeindel opened a pull request:
https://github.com/apache/spark/pull/4299
Disabling Utils.chmod700 for Windows
This patch makes Spark 1.2.1rc2 work again on Windows.
Without it you get the following log output on creating a Spark context:
INFO
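The idea behind the patch can be sketched as a platform guard. This is a minimal illustration, not the actual change; the class and method names are hypothetical. It assumes only that Spark's `Utils.chmod700` emulates POSIX `chmod 700` via the standard `java.io.File` permission setters, which do not map onto owner-only permissions on Windows:

```java
import java.io.File;

// A minimal sketch of the guard: skip the POSIX-style chmod 700 on
// Windows instead of letting the permission setters fail.
public class Chmod700 {
    static boolean isWindows() {
        return System.getProperty("os.name", "").toLowerCase().contains("windows");
    }

    static boolean chmod700IfSupported(File file) {
        if (isWindows()) {
            return true; // no-op on Windows; owner-only perms not expressible here
        }
        // Restrict read/write/execute to the owner (chmod 700 semantics).
        return file.setReadable(false, false) && file.setReadable(true, true)
            && file.setWritable(false, false) && file.setWritable(true, true)
            && file.setExecutable(false, false) && file.setExecutable(true, true);
    }
}
```

The design choice matches the PR discussion: rather than emulating the permissions on Windows, the call is simply disabled there, since the directory-permission hardening is a Unix-specific concern.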
GitHub user MartinWeindel commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-53260324
Yes, this becomes tricky. And I don't see a satisfying solution, as I would
have to predict how many tasks will run in parallel to ensure
GitHub user MartinWeindel commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-5888
OK, so I have reverted the work-around patch and added a known issue
paragraph to the running-on-mesos documentation.
GitHub user MartinWeindel commented on the pull request:
https://github.com/apache/spark/pull/1860#issuecomment-53203084
Hey Patrick,
First of all, let me emphasize again that this is only a work-around. The
real problem is that Mesos only makes offers
GitHub user MartinWeindel opened a pull request:
https://github.com/apache/spark/pull/1860
Work-around for a problem with Mesos offer semantics
When using Mesos with the fine-grained mode, a Spark job can run into a
deadlock on low allocatable memory on Mesos slaves. As a work
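The deadlock condition can be sketched as a resource check. This is a hypothetical illustration, not Spark's scheduler code; the names are invented. It assumes only the behavior described above: an offer is usable only if it carries enough memory for the executor and CPU for at least one task:

```java
// A hypothetical sketch of the offer check behind the deadlock: if every
// slave's offer stays below the executor's memory requirement, no offer
// is ever accepted and the job stalls indefinitely.
public class OfferCheck {
    static boolean canLaunch(double offerCpus, double offerMemMb,
                             double executorMemMb, double taskCpus) {
        return offerMemMb >= executorMemMb && offerCpus >= taskCpus;
    }
}
```

In this model, a slave whose free memory is permanently below `executorMemMb` declines every task, which is the low-memory stall the work-around targets.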