Hi,
I had tried adjusting the allocation_interval flag earlier, but I guess I was
misusing it - it does exactly what I was looking for.
Thanks for the help!
Christopher
> On Jun 18, 2015, at 7:57 AM, Alex Rukletsov wrote:
Christopher,
have you tried to adjust the master allocation_interval flag?
On Thu, Jun 18, 2015 at 12:20 AM, Christopher Ketchum wrote:
Hi,
I think those logs were misleading, sorry. I am running the tests in Pycharm,
which aggregates all the logs onto one console so I selected only the mesos
messages that explicitly said they were from master. Here are those logs
without my editing. Again, the last two messages are almost a se
Looks like the hierarchical allocator doesn't trigger an allocation when
resources are recovered from a finished task (likely a bug; can you file a
ticket?). Instead it depends on the periodic allocation interval (default
1s, configurable via flags.allocation_interval) for the next allocation. In
t
You can see there is about a second delay between the last two messages. It's
not a huge amount of time, but it is noticeable, especially when testing with
many short tasks.
I0617 11:34:08.582778 185491456 master.cpp:4690] Removing task 1 with resources
cpus(*):3.9 of framework 20150617-113405-1
Chris,
```
driver_->requestResources(pendingResources);
```
The design is there, but as far as I'm concerned this is a noop. You can
try to track that and maybe implement a patch in the scheduler.
On Wed, Jun 17, 2015 at 1:18 PM, Vinod Kone wrote:
Can you paste the master logs for when the task is finished and the next
offer is sent?
On Wed, Jun 17, 2015 at 9:11 AM, Christopher Ketchum wrote:
Hi everyone,
Thanks for the responses. To clarify, I’m only running one framework with a
single slave for testing purposes, and it is the re-offers that I am trying to
adjust. When I watch the program run I see tasks updating to TASK_FINISHED, but
there is a noticeable delay where my framework
Hi Christopher,
To let a particular Mesos framework receive more offers than other
frameworks, we assign our frameworks weights. The higher the weight, the
more frequently the framework will receive an offer. See the '--weights'
and '--roles' options in the config:
http://mesos.apache.org/docume
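As a sketch of what this advice looks like in practice, the master could be started with roles and weights set at launch. The role names and weight values below are made-up examples, not anything from this thread:

```shell
# Hypothetical example: give role 'analytics' twice the allocation share
# of role 'batch', using the Mesos master's --roles and --weights flags.
# All values here are illustrative.
mesos-master \
  --roles="analytics,batch" \
  --weights="analytics=2,batch=1" \
  --work_dir=/var/lib/mesos
```

A framework then registers with one of these roles, and the allocator favors higher-weighted roles when handing out offers.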
Christopher,
try adjusting the master allocation_interval flag. It specifies how often
the allocator performs batch allocations to frameworks. As Ondrej pointed
out, if your framework explicitly declines offers, it won't be re-offered
the same resources for some period of time.
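For illustration, lowering the batch allocation interval on the master might look like this. The 500ms value is just an example; the default is 1 second:

```shell
# Example only: run the master with a shorter allocation interval so
# recovered resources are re-offered sooner (default: 1secs).
mesos-master \
  --allocation_interval=500ms \
  --work_dir=/var/lib/mesos
```

Note that a shorter interval trades allocator overhead for lower offer latency, which mainly matters for workloads with many short tasks like the one described in this thread.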
On Sat, Jun 13, 2015 at 8:3
Hi Christopher,
I don't know of any way to speed up the first resource offer;
in my experience new offers arrive almost immediately after framework
registration. It depends on the infrastructure you are testing your
framework on: are there any
other frameworks running? As is discussed in a
Hi,
I was wondering if there was any way to adjust the rate of resource offers to
the framework. I am writing a mesos framework, and when I am testing it I am
noticing a slight pause where the framework seems to be waiting for another
resource offer. I would like to know if there is any way to s