Re: Can I consider other framework tasks as a resource? Does it make sense?

2016-12-13 Thread haosdent
Hi, @Petr.

> For example, if I want to run my task collocated with some other tasks on the
same node, I have to make this decision somewhere.
Do you mean "POD" here?

In my case, when there are dependencies between my tasks, I use a database, a
message queue, or ZooKeeper to coordinate them.
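As an illustration of that pattern, a scheduler-side gate that blocks a task until its dependency is up could look like this minimal sketch. The readiness check is injected as a plain callable; in practice it might wrap an existence check against ZooKeeper (e.g. kazoo's `client.exists` on a known path), a database row, or a message on a queue — those specifics are assumptions, not something from this thread:

```python
import time

def wait_for_dependency(is_ready, timeout=60.0, interval=1.0):
    """Block until is_ready() returns True, or raise on timeout.

    is_ready is any zero-argument callable; in a real deployment it
    could wrap kazoo's client.exists("/services/upstream-task") or a
    similar check against a database or message queue.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if is_ready():
            return True
        time.sleep(interval)
    raise TimeoutError("dependency not ready within %.1fs" % timeout)
```

The scheduler would call this before launching the dependent task, keeping the dependency logic out of the executor.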

On Wed, Dec 14, 2016 at 3:09 AM, Petr Novak  wrote:

> Hello,
>
> I want to execute tasks which require some other tasks from other
> framework(s) to be already running. I'm thinking about where such
> logic/strategy/policy belongs in principle. I understand scheduling as a
> process that decides where to execute a task according to the availability
> of resources, typically CPU, memory, network, disk, etc.
>
>
>
> If my task requires other tasks to be running, could I generalize and
> consider those tasks from other frameworks a kind of required resource, and
> put this logic/strategy decision into the scheduler? For example, if I want
> to run my task collocated with some other tasks on the same node, I have to
> make this decision somewhere.
>
>
>
> Does it make any sense? I'm asking because I have never thought about other
> frameworks/tasks as "resources" that I could feed into a scheduler, which is
> how I understand a scheduler's role. Or does this logic rather belong
> higher, in a framework, or lower, in an executor? Should a scheduler be
> dedicated to decisions about the resources which are offered, and am I
> mixing concepts?
>
>
>
> Or should I just keep the distinction between resources and
> requirements/policies? Either way, does this kind of logic still belong in
> the scheduler, or should it be somewhere else? I'm trying to understand
> which logic should be in the scheduler and what should go elsewhere.
>
>
>
> Many thanks,
>
> Petr
>
>
>



-- 
Best Regards,
Haosdent Huang


Re: Quota

2016-12-13 Thread Vijay Srinivasaraghavan
Hi Alex,
>> Granularity in the allocator is a single agent.

Does this mean that if I have only one agent, then the moment I set any
quota, the framework running on the agent will not be allocated any
resources?

From the logs, I don't see much detail for the scenario where the quota is
set and a package is then deployed through Marathon. However, when I remove
the quota, I see the following messages in the master log: "Allocating
ports(*):[*]; disk(*):." "Sending 1 offers to framework XX (marathon)
at scheduler-XXX"

Regards,
Vijay
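For reference, a quota like the one discussed in this thread is set by POSTing a JSON body to the Mesos master's /quota endpoint. A sketch of building that body follows; treat the exact field names as assumptions to check against the operator documentation for your Mesos version:

```python
def quota_request(role, cpus, mem_mb):
    """Build a request body of the shape used with the Mesos
    master's /quota endpoint: a role plus a list of scalar
    resource guarantees."""
    def scalar(name, value):
        return {"name": name, "type": "SCALAR",
                "scalar": {"value": value}}
    return {"role": role,
            "guarantee": [scalar("cpus", cpus), scalar("mem", mem_mb)]}

# e.g. quota_request("test", 0.5, 1024.0) for the 0.5 CPU / 1 GB
# quota described later in this thread.
```

The resulting dict would be serialized to JSON and POSTed to the master (e.g. `http://<master>:5050/quota`).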

On Sunday, December 11, 2016 9:05 AM, Alex Rukletsov  
wrote:
 

Granularity in the allocator is a single agent. Hence, even though you set a
quota of 0.0001 CPU, at least one agent is "blocked". This is probably the
reason why Marathon is not getting offers. You can turn on verbose master
logging and check the allocator messages to confirm.

Alex.
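As a toy model of the agent-granularity behaviour described above (a simplification for illustration, not the actual allocator code):

```python
def agents_blocked_for_quota(agents, quota_cpus):
    """Toy model of agent-granularity quota accounting: agents are
    set aside whole until the quota guarantee is covered, so even a
    tiny non-zero quota blocks at least one entire agent."""
    blocked, covered = [], 0.0
    for agent in agents:
        if covered >= quota_cpus:
            break
        blocked.append(agent["id"])
        covered += agent["cpus"]
    return blocked
```

With a single 4-CPU agent, any non-zero quota blocks it entirely, consistent with Marathon receiving no offers; a zero-resource quota blocks nothing, consistent with the 0-cpu/0-mem experiment described later in this thread.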
On 10 Dec 2016 2:14 am, "Vijay"  wrote:

The dispatcher needs 1 CPU and 1 GB of memory.

Regards,
Vijay

Sent from my iPhone

> On Dec 9, 2016, at 4:51 PM, Vinod Kone  wrote:
>
> And how many resources does Spark need?
>
>> On Fri, Dec 9, 2016 at 4:05 PM, Vijay Srinivasaraghavan 
>>  wrote:
>> Here is the slave state info. I see Marathon is registered with the
>> "slave_public" role and is configured with
>> "default_accepted_resource_roles" set to "*".
>>
>> "slaves":[
>>       {
>>          "id":"69356344-e2c4-453d-baaf-22df4a4cc430-S0",
>>          "pid":"slave(1)@xxx.xxx.xxx.100:5051",
>>          "hostname":"xxx.xxx.xxx.100",
>>          "registered_time":1481267726.19244,
>>          "resources":{
>>             "disk":12099.0,
>>             "mem":14863.0,
>>             "gpus":0.0,
>>             "cpus":4.0,
>>             "ports":"[1025-2180, 2182-3887, 3889-5049, 5052-8079, 8082-8180, 8182-32000]"
>>          },
>>          "used_resources":{
>>             "disk":0.0,
>>             "mem":0.0,
>>             "gpus":0.0,
>>             "cpus":0.0
>>          },
>>          "offered_resources":{
>>             "disk":0.0,
>>             "mem":0.0,
>>             "gpus":0.0,
>>             "cpus":0.0
>>          },
>>          "reserved_resources":{
>>
>>          },
>>          "unreserved_resources":{
>>             "disk":12099.0,
>>             "mem":14863.0,
>>             "gpus":0.0,
>>             "cpus":4.0,
>>             "ports":"[1025-2180, 2182-3887, 3889-5049, 5052-8079, 8082-8180, 8182-32000]"
>>          },
>>          "attributes":{
>>
>>          },
>>          "active":true,
>>          "version":"1.0.1"
>>       }
>>    ],
>>
>> Regards
>> Vijay
>> On Friday, December 9, 2016 3:48 PM, Vinod Kone  wrote:
>>
>>
>> How many resources does the agent register with the master? How many
>> resources does the Spark task need?
>>
>> I'm guessing Marathon is not registered with the "test" role, so it is only
>> getting unreserved resources, which are not enough for the Spark task?
>>
>> On Fri, Dec 9, 2016 at 2:54 PM, Vijay Srinivasaraghavan 
>>  wrote:
>> I have a standalone DCOS setup (a single-node Vagrant VM running a DCOS
>> v1.9-dev build + Mesos 1.0.1 + Marathon 1.3.0). Both the master and the
>> agent are running on the same VM.
>>
>> Resources: 4 CPUs, 16 GB memory, 20 GB disk
>>
>> I have created a quota using the new V1 API, which creates a role "test"
>> with resource constraints of 0.5 CPU and 1 GB memory.
>>
>> When I try to deploy the Spark package, Marathon receives the request, but
>> the task stays in the "waiting" state because Marathon did not receive any
>> offers from the master, even though I don't see any resource constraints
>> from the hardware perspective.
>>
>> However, when I deleted the quota, Marathon was able to move forward with
>> the deployment, and Spark was deployed and running. I could see from the
>> Mesos master logs that it had sent an offer to the Marathon framework.
>>
>> To debug the issue, I created a quota again, but this time did not provide
>> any CPU or memory (0 cpus and 0 mem). After this, when I tried to deploy
>> Spark from the DCOS UI, I could see Marathon getting offers from the
>> master, and it was able to deploy Spark without the quota having to be
>> deleted this time.
>>
>> Did anyone notice similar behavior?
>>
>> Regards
>> Vijay
>>
>>
>>
>




Can I consider other framework tasks as a resource? Does it make sense?

2016-12-13 Thread Petr Novak
Hello,

I want to execute tasks which require some other tasks from other
framework(s) to be already running. I'm thinking about where such
logic/strategy/policy belongs in principle. I understand scheduling as a
process that decides where to execute a task according to the availability of
resources, typically CPU, memory, network, disk, etc.

 

If my task requires other tasks to be running, could I generalize and
consider those tasks from other frameworks a kind of required resource, and
put this logic/strategy decision into the scheduler? For example, if I want
to run my task collocated with some other tasks on the same node, I have to
make this decision somewhere.
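One way to keep such logic in the scheduler is to treat the other frameworks' placement as a filter over incoming offers. A minimal sketch follows; the offer shape and the source of `required_hosts` are assumptions (in practice the hostnames would come from querying the master's state for the other framework's running tasks):

```python
def colocated_offers(offers, required_hosts):
    """Keep only offers from agents that already run the tasks we
    depend on. required_hosts is a set of hostnames, which a real
    scheduler would derive from the master's state endpoint."""
    return [o for o in offers if o["hostname"] in required_hosts]
```

The scheduler would decline the filtered-out offers and launch only on the remaining ones, which keeps the co-location policy inside the scheduler rather than in an executor.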

 

Does it make any sense? I'm asking because I have never thought about other
frameworks/tasks as "resources" that I could feed into a scheduler, which is
how I understand a scheduler's role. Or does this logic rather belong higher,
in a framework, or lower, in an executor? Should a scheduler be dedicated to
decisions about the resources which are offered, and am I mixing concepts?

 

Or should I just keep the distinction between resources and
requirements/policies? Either way, does this kind of logic still belong in
the scheduler, or should it be somewhere else? I'm trying to understand which
logic should be in the scheduler and what should go elsewhere.

 

Many thanks, 

Petr

 



Re: Proposal: mesosadm, the command to bootstrap the mesos cluster.

2016-12-13 Thread Stephen Gran
Hi,

I'm quite happy with the current approach of bootstrapping a new agent with
the location of ZooKeeper and a set of credentials. This allows our
automation code to make new agents join the cluster automatically.
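That current approach amounts to starting the agent pointed at ZooKeeper with a credential file; roughly as below (the ZooKeeper URL and file paths are placeholders, not values from this thread):

```shell
# Illustrative agent bootstrap; the ZooKeeper URL and paths are
# placeholders for a real deployment.
mesos-agent \
  --master=zk://zk1.example.com:2181/mesos \
  --work_dir=/var/lib/mesos \
  --credential=/etc/mesos/agent-credential
```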

Not that I'm opposed to the two-step process you propose; I'm sure we can
make that happen automatically as well. But aside from making Mesos look
more like other solutions, does it bring semantics that would be useful?
I.e., are there actions that 'mesosadm init' would initiate? Or would this
be purely an interactive way to do the same things you can do now by seeding
out config files?

Cheers,

On 13/12/16 05:14, tommy xiao wrote:
> Hi team,
>
>
> I came from the China Mesos community. In today's group discussion, we came
> across a topic: how can we enhance the user's cluster experience?
>
> Newcomers are a top resource for a community. If we can improve the current
> Mesos cluster installation steps, it will help users bootstrap quickly.
>
> Why mesosadm?
>
> For example, the Docker Swarm cluster setup steps are:
>
> 1. docker swarm init
> 2. docker swarm join
>
> and the Kubernetes 1.5 cluster setup steps are:
>
> 1. kubeadm init
> 2. kubeadm join --token  
>
> So I think the init/join style is a good experience for normal users. What
> do you think?
>
>
>
> --
> Deshi Xiao
> Twitter: xds2000
> E-mail: xiaods(AT)gmail.com 

-- 
Stephen Gran
Senior Technical Architect

picture the possibilities | piksel.com