I see. One option would be to expose multiple disks as resources to
frameworks and have them use that. The task sandboxes (and other metadata)
will still be located in `work_dir`, but most of the tasks' I/O could be
directed towards those disks. Of course, this requires changes to the
frameworks, which is
You can configure multiple disks for persistent volumes. Please see this doc
for more details:
http://mesos.apache.org/documentation/latest/multiple-disk/
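As a rough illustration of what the linked doc describes, the agent can advertise extra disks via its `--resources` flag. This is a hedged sketch only; the paths, sizes, and master address below are made-up placeholders, not values from this thread:

```shell
# Hypothetical example: advertise two extra MOUNT disks to the agent,
# in addition to the disk backing the default work_dir. Paths, sizes,
# and the master address are placeholders.
mesos-agent \
  --master=zk://localhost:2181/mesos \
  --work_dir=/var/lib/mesos \
  --resources='[
    {"name": "disk", "type": "SCALAR", "scalar": {"value": 10240},
     "disk": {"source": {"type": "MOUNT", "mount": {"root": "/mnt/data1"}}}},
    {"name": "disk", "type": "SCALAR", "scalar": {"value": 10240},
     "disk": {"source": {"type": "MOUNT", "mount": {"root": "/mnt/data2"}}}}
  ]'
```

Frameworks then see these disks in offers and can create persistent volumes on them, which is how task I/O gets spread beyond the single `work_dir` disk.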
- Jie
On Wed, Nov 22, 2017 at 1:57 PM, Jeff Kubina wrote:
> Thanks, that is what I thought.
>
> Why: To spread the
Thanks, that is what I thought.
Why: To spread the I/O-workload of some frameworks across many disks.
--
Jeff Kubina
410-988-4436
On Wed, Nov 22, 2017 at 2:21 PM, Vinod Kone wrote:
> No. Why do you need that?
>
> On Wed, Nov 22, 2017 at 10:42 AM, Jeff Kubina
No. Why do you need that?
On Wed, Nov 22, 2017 at 10:42 AM, Jeff Kubina wrote:
> Is it possible to configure a mesos agent to use multiple work directories
> (the work_dir parameter)?
>
>
If you have an executor running on an agent, wait for an offer from *that
agent* and launch a new task with the *same* ExecutorInfo as the one you
originally used to launch the executor. In this case, Mesos will not launch
a new executor but will pass the task to the already running executor. Note
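A hedged sketch of what that looks like with the v1 scheduler HTTP API: an ACCEPT call whose TaskInfo carries the same executor ID as the original launch. All IDs, the master address, and the stream ID below are placeholders, and the `executor` field must repeat the full original ExecutorInfo (only its ID is shown here):

```shell
# Hypothetical sketch: launch a second task on an already-running
# executor by reusing the same ExecutorInfo in the ACCEPT call.
# Framework/offer/agent/executor IDs and the stream ID are placeholders.
curl -X POST http://master.example.com:5050/api/v1/scheduler \
  -H 'Content-Type: application/json' \
  -H 'Mesos-Stream-Id: <stream-id-from-SUBSCRIBE>' \
  -d '{
    "framework_id": {"value": "<framework-id>"},
    "type": "ACCEPT",
    "accept": {
      "offer_ids": [{"value": "<offer-id-from-that-agent>"}],
      "operations": [{
        "type": "LAUNCH",
        "launch": {
          "task_infos": [{
            "name": "task-2",
            "task_id": {"value": "task-2"},
            "agent_id": {"value": "<agent-id>"},
            "resources": [
              {"name": "cpus", "type": "SCALAR", "scalar": {"value": 0.1}},
              {"name": "mem",  "type": "SCALAR", "scalar": {"value": 32}}
            ],
            "executor": {"executor_id": {"value": "<same-executor-id>"}}
          }]
        }
      }]
    }
  }'
```

The key point from the answer above: the task still needs an offer (and non-zero resources), but because the ExecutorInfo matches, the agent delivers the task to the running executor instead of starting a new one.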
Vinod,
much more clear. Thanks.
I refined the first question inline.
On 22 November 2017 at 21:15, Vinod Kone wrote:
> Hi Alex,
>
> See my answers below
>
> 1. Launch a task without accepting an offer (on already existing executor).
>>
>
> This is not currently possible.
Is it possible to configure a mesos agent to use multiple work directories
(the work_dir parameter)?
Hi Alex,
See my answers below
1. Launch a task without accepting an offer (on already existing executor).
This is not currently possible. Every task needs some non-zero resources,
and hence an offer, to be launched. What's your use case?
> 2. Initiate an executor with no tasks (to launch
Ivan-
I ran the following:
zookeepercli -servers=10.10.10.51:2181 -c rm /marathon/state/migration-in-progress
2017-11-22 11:00:47 FATAL zk: node does not exist
and tried to restart Marathon, but hit the same issue. Does this appear to be
a ZooKeeper issue?
Thanks!
Alex
On Wed, Nov 22, 2017 at
Hi Alex,
If you are sure that the Marathon state in ZK is consistent, you can remove
the flag using zkCli.sh.
For instance, if the ZK connection string you use for Marathon is
zk://localhost:2181/marathon, then once connected to ZK using zkCli.sh,
just execute "rm
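For what it's worth, a zkCli.sh session for this might look like the sketch below. The host/port and path come from elsewhere in this thread; which delete command applies depends on your ZooKeeper version, and you should back up ZooKeeper data first:

```shell
# Hedged sketch: removing Marathon's migration flag with the stock
# ZooKeeper CLI. Connection string and znode path are the ones quoted
# in this thread; back up ZooKeeper data before deleting anything.
zkCli.sh -server localhost:2181

# Then, inside the zkCli.sh shell:
#   ls /marathon/state
#   delete /marathon/state/migration-in-progress
#   (for non-empty nodes, older ZK uses "rmr", ZK >= 3.5 uses "deleteall")
```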
Hey,
I've read Scheduler HTTP API
http://mesos.apache.org/documentation/latest/scheduler-http-api/
and do not see how to (what is a call for):
1. Launch a task without accepting an offer (on already existing executor).
2. Initiate an executor with no tasks (to launch them later).
3. How actually
Tomas-
thank you for the reply! I am running Marathon 1.5.2 and ZooKeeper 3.4.8-1.
I looked at the referenced GitHub page, but I really could not find the syntax
to remove the flag as suggested. Do you happen to know the syntax via the
ZooKeeper CLI?
Thank you again!
On Wed, Nov 22, 2017 at 8:50 AM,
Hi Alex,
looks like you've restarted Marathon during an election. Try to back up
ZooKeeper data and then go to Exhibitor / the ZooKeeper CLI and remove the
flag from the Marathon namespace:
/state/migration-in-progress
According to https://github.com/mesosphere/marathon/pull/5662 the flag
should be removed