Re: Checking success of resource reservations

2016-04-15 Thread Klaus Ma
Please try "curl -s http://mesos_master:5050/roles | python -m json.tool"
to get role information, including reservations.
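A minimal sketch of scripting that check, assuming the usual shape of the /roles JSON (a top-level "roles" list whose entries carry "name" and "resources" — verify the field names against your Mesos version's output):

```python
import json
from urllib.request import urlopen


def role_reserved_resources(roles_json, role_name):
    """Return the resources entry for `role_name` from a parsed /roles
    response, or None if the role is absent. Field names are assumed
    from typical /roles output; check them against your version."""
    for role in roles_json.get("roles", []):
        if role.get("name") == role_name:
            return role.get("resources")
    return None


# Hedged usage against a live master (hostname is a placeholder):
# with urlopen("http://mesos_master:5050/roles") as resp:
#     data = json.load(resp)
# print(role_reserved_resources(data, "production"))
```

A script can fail early by treating a `None` return (or an empty/insufficient resources entry) as "reservation did not go through."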


Da (Klaus), Ma (马达) | PMP® | Advisory Software Engineer
Platform OpenSource Technology, STG, IBM GCG
+86-10-8245 4084 | klaus1982...@gmail.com | http://k82.me

On Sat, Apr 16, 2016 at 2:43 AM, Sammy Nguyen 
wrote:

> Hi everyone,
>
> I am making resource reservations and creating persistent volumes through
> the operator HTTP endpoints on v0.28.0. In order to see if the requests
> went through, the docs (
> http://mesos.apache.org/documentation/latest/reservation/) say to check
> at the appropriate slave's /state endpoint. However, we are not seeing
> anything in the JSON response from that endpoint which would indicate
> success of the reservation. Can anyone provide guidance on this?
>
> For context, I am working on a script to reserve or unreserve disk and
> create or destroy persistent volumes as needed, and we would like to fail
> early if the reservation or persistent volume cannot be made.
>
> Thanks,
>
> *Sammy Nguyen*
>
>


Re: Framework taking default resources even though a role is specified

2016-04-15 Thread Klaus Ma
Which version are you using? For your requirement, I think you can try
Quota: currently, resources beyond a framework's quota will not be offered
to it once its quota is satisfied. Quota also includes reserved resources.


Da (Klaus), Ma (马达) | PMP® | Advisory Software Engineer
Platform OpenSource Technology, STG, IBM GCG
+86-10-8245 4084 | klaus1982...@gmail.com | http://k82.me

On Sat, Apr 16, 2016 at 4:54 AM, Rodrick Brown 
wrote:

> You can try setting constraints on tasks in both Chronos and marathon that
> will limit deployment to only a certain set of nodes.
>
> Sent from Outlook for iPhone 
>
>
>
>
> On Fri, Apr 15, 2016 at 1:35 PM -0700, "June Taylor"  wrote:
>
> Evan,
>>
>> I'm not sure about it. We're new to the Mesos system and still learning.
>> We want to be able to classify resources so that our developers can run
>> tasks against them easily, without using more than they are permitted. It
>> seemed like resource roles were the appropriate solution, but they may not
>> go far enough if Mesos will still spill over into default resources.
>>
>>
>> Thanks,
>> June Taylor
>> System Administrator, Minnesota Population Center
>> University of Minnesota
>>
>> On Fri, Apr 15, 2016 at 3:27 PM, Evan Krall  wrote:
>>
>>> My understanding is that your framework would have to know not to accept
>>> offers for * resources. Marathon has an option to specify which roles to
>>> accept for a particular app, and has command line options for controlling
>>> the default. Maybe pyspark has something similar?
>>>
>>> On Fri, Apr 15, 2016 at 1:24 PM, June Taylor  wrote:
>>>
 Yep - we're waiting for it.


 Thanks,
 June Taylor
 System Administrator, Minnesota Population Center
 University of Minnesota

 On Fri, Apr 15, 2016 at 3:23 PM, Anand Mazumdar 
 wrote:

> FWIW, we recently fixed `mesos-execute` (command scheduler) to add
> support for roles. It should be available in the next release (0.29).
>
> https://issues.apache.org/jira/browse/MESOS-4744
>
> -anand
>
> On Apr 15, 2016, at 11:41 AM, June Taylor  wrote:
>
> Ken,
>
> Thanks for your reply.
>
> Is there a way to ensure a framework only receives the reserved
> resources?
>
> I would go ahead and take everything out of the * role, however, the
> 'mesos-execute' command doesn't support specifying a role, so that's the
> only way we can currently get mesos-execute to co-exist with pyspark.
>
> Any other thoughts from the group?
>
>
> Thanks,
> June Taylor
> System Administrator, Minnesota Population Center
> University of Minnesota
>
> On Fri, Apr 15, 2016 at 11:54 AM, Ken Sipe  wrote:
>
>> The framework with role “production” will receive production
>> resources and * resources
>> All other frameworks (assuming no role) will only receive * resources
>>
>> ken
>>
>> > On Apr 15, 2016, at 11:38 AM, June Taylor  wrote:
>> >
>> > We have a small cluster with 3 nodes in the * resource role
>> default, and 3 nodes in a "production" resource role.
>> >
>> > Starting up a framework which requests "production" properly
>> executes on the expected nodes, however, today we noticed that this job
>> also started up executors under the * resource role as well.
>> >
>> > We expect these tasks to only go on nodes with the "production"
>> resource role. Can you advise further?
>> >
>> > Thanks,
>> > June Taylor
>> > System Administrator, Minnesota Population Center
>> > University of Minnesota
>>
>>
>
>

>>>
>>
> *NOTICE TO RECIPIENTS*: This communication is confidential and intended
> for the use of the addressee only. If you are not an intended recipient of
> this communication, please delete it immediately and notify the sender by
> return email. Unauthorized reading, dissemination, distribution or copying
> of this communication is prohibited. This communication does not constitute
> an offer to sell or a solicitation of an indication of interest to purchase
> any loan, security or any other financial product or instrument, nor is it
> an offer to sell or a solicitation of an indication of interest to purchase
> any products or services to any persons who are prohibited from receiving
> such information under applicable law. The contents of this communication
> may not be accurate or complete and are subject to change without notice.
> As such, Orchard App, Inc. (including its subsidiaries and affiliates,
> "Orchard") makes no representation regarding the accuracy or completeness
> of the information contained herein. The intended recipient is advised to
> consult its own professional advisors, including those specializing in
> legal, tax and accounting matters. Orchard does not provide legal, tax or
> accounting advice.

Re: Framework taking default resources even though a role is specified

2016-04-15 Thread Rodrick Brown
You can try setting constraints on tasks in both Chronos and marathon that will 
limit deployment to only a certain set of nodes. 

Sent from Outlook for iPhone




On Fri, Apr 15, 2016 at 1:35 PM -0700, "June Taylor"  wrote:

Evan,
I'm not sure about it. We're new to the Mesos system and still learning. We 
want to be able to classify resources so that our developers can run tasks 
against them easily, without using more than they are permitted. It seemed like 
resource roles were the appropriate solution, but they may not go far enough if 
Mesos will still spill over into default resources.

Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota

On Fri, Apr 15, 2016 at 3:27 PM, Evan Krall  wrote:
My understanding is that your framework would have to know not to accept offers 
for * resources. Marathon has an option to specify which roles to accept for a 
particular app, and has command line options for controlling the default. Maybe 
pyspark has something similar?
On Fri, Apr 15, 2016 at 1:24 PM, June Taylor  wrote:
Yep - we're waiting for it.

Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota

On Fri, Apr 15, 2016 at 3:23 PM, Anand Mazumdar  wrote:
FWIW, we recently fixed `mesos-execute` (command scheduler) to add support for 
roles. It should be available in the next release (0.29).
https://issues.apache.org/jira/browse/MESOS-4744
-anand
On Apr 15, 2016, at 11:41 AM, June Taylor  wrote:
Ken,
Thanks for your reply.
Is there a way to ensure a framework only receives the reserved resources?
I would go ahead and take everything out of the * role, however, the 
'mesos-execute' command doesn't support specifying a role, so that's the only 
way we can currently get mesos-execute to co-exist with pyspark.
Any other thoughts from the group?

Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota

On Fri, Apr 15, 2016 at 11:54 AM, Ken Sipe  wrote:
The framework with role “production” will receive production resources and *
resources
All other frameworks (assuming no role) will only receive * resources

ken

> On Apr 15, 2016, at 11:38 AM, June Taylor  wrote:
>
> We have a small cluster with 3 nodes in the * resource role default, and 3
> nodes in a "production" resource role.
>
> Starting up a framework which requests "production" properly executes on
> the expected nodes, however, today we noticed that this job also started up
> executors under the * resource role as well.
>
> We expect these tasks to only go on nodes with the "production" resource
> role. Can you advise further?
>
> Thanks,
> June Taylor
> System Administrator, Minnesota Population Center
> University of Minnesota



Re: Framework taking default resources even though a role is specified

2016-04-15 Thread June Taylor
Evan,

I'm not sure about it. We're new to the Mesos system and still learning. We
want to be able to classify resources so that our developers can run tasks
against them easily, without using more than they are permitted. It seemed
like resource roles were the appropriate solution, but they may not go far
enough if Mesos will still spill over into default resources.


Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota

On Fri, Apr 15, 2016 at 3:27 PM, Evan Krall  wrote:

> My understanding is that your framework would have to know not to accept
> offers for * resources. Marathon has an option to specify which roles to
> accept for a particular app, and has command line options for controlling
> the default. Maybe pyspark has something similar?
>
> On Fri, Apr 15, 2016 at 1:24 PM, June Taylor  wrote:
>
>> Yep - we're waiting for it.
>>
>>
>> Thanks,
>> June Taylor
>> System Administrator, Minnesota Population Center
>> University of Minnesota
>>
>> On Fri, Apr 15, 2016 at 3:23 PM, Anand Mazumdar 
>> wrote:
>>
>>> FWIW, we recently fixed `mesos-execute` (command scheduler) to add
>>> support for roles. It should be available in the next release (0.29).
>>>
>>> https://issues.apache.org/jira/browse/MESOS-4744
>>>
>>> -anand
>>>
>>> On Apr 15, 2016, at 11:41 AM, June Taylor  wrote:
>>>
>>> Ken,
>>>
>>> Thanks for your reply.
>>>
>>> Is there a way to ensure a framework only receives the reserved
>>> resources?
>>>
>>> I would go ahead and take everything out of the * role, however, the
>>> 'mesos-execute' command doesn't support specifying a role, so that's the
>>> only way we can currently get mesos-execute to co-exist with pyspark.
>>>
>>> Any other thoughts from the group?
>>>
>>>
>>> Thanks,
>>> June Taylor
>>> System Administrator, Minnesota Population Center
>>> University of Minnesota
>>>
>>> On Fri, Apr 15, 2016 at 11:54 AM, Ken Sipe  wrote:
>>>
 The framework with role “production” will receive production resources
 and * resources
 All other frameworks (assuming no role) will only receive * resources

 ken

 > On Apr 15, 2016, at 11:38 AM, June Taylor  wrote:
 >
 > We have a small cluster with 3 nodes in the * resource role default,
 and 3 nodes in a "production" resource role.
 >
 > Starting up a framework which requests "production" properly executes
 on the expected nodes, however, today we noticed that this job also started
 up executors under the * resource role as well.
 >
 > We expect these tasks to only go on nodes with the "production"
 resource role. Can you advise further?
 >
 > Thanks,
 > June Taylor
 > System Administrator, Minnesota Population Center
 > University of Minnesota


>>>
>>>
>>
>


Re: Framework taking default resources even though a role is specified

2016-04-15 Thread Evan Krall
My understanding is that your framework would have to know not to accept
offers for * resources. Marathon has an option to specify which roles to
accept for a particular app, and has command line options for controlling
the default. Maybe pyspark has something similar?
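For a framework you control, "know not to accept offers for * resources" amounts to partitioning each offer's resources by role before deciding what to launch on. A sketch, assuming the usual offer JSON shape where each resource carries a "role" field (field names are assumptions; check your scheduler API's offer representation):

```python
def partition_offer_resources(offer, role):
    """Split an offer's resources into those reserved for `role` and
    everything else (including unreserved '*' resources), so a
    scheduler can launch only on the former and decline the rest."""
    wanted, other = [], []
    for res in offer.get("resources", []):
        (wanted if res.get("role") == role else other).append(res)
    return wanted, other
```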

On Fri, Apr 15, 2016 at 1:24 PM, June Taylor  wrote:

> Yep - we're waiting for it.
>
>
> Thanks,
> June Taylor
> System Administrator, Minnesota Population Center
> University of Minnesota
>
> On Fri, Apr 15, 2016 at 3:23 PM, Anand Mazumdar 
> wrote:
>
>> FWIW, we recently fixed `mesos-execute` (command scheduler) to add
>> support for roles. It should be available in the next release (0.29).
>>
>> https://issues.apache.org/jira/browse/MESOS-4744
>>
>> -anand
>>
>> On Apr 15, 2016, at 11:41 AM, June Taylor  wrote:
>>
>> Ken,
>>
>> Thanks for your reply.
>>
>> Is there a way to ensure a framework only receives the reserved resources?
>>
>> I would go ahead and take everything out of the * role, however, the
>> 'mesos-execute' command doesn't support specifying a role, so that's the
>> only way we can currently get mesos-execute to co-exist with pyspark.
>>
>> Any other thoughts from the group?
>>
>>
>> Thanks,
>> June Taylor
>> System Administrator, Minnesota Population Center
>> University of Minnesota
>>
>> On Fri, Apr 15, 2016 at 11:54 AM, Ken Sipe  wrote:
>>
>>> The framework with role “production” will receive production resources
>>> and * resources
>>> All other frameworks (assuming no role) will only receive * resources
>>>
>>> ken
>>>
>>> > On Apr 15, 2016, at 11:38 AM, June Taylor  wrote:
>>> >
>>> > We have a small cluster with 3 nodes in the * resource role default,
>>> and 3 nodes in a "production" resource role.
>>> >
>>> > Starting up a framework which requests "production" properly executes
>>> on the expected nodes, however, today we noticed that this job also started
>>> up executors under the * resource role as well.
>>> >
>>> > We expect these tasks to only go on nodes with the "production"
>>> resource role. Can you advise further?
>>> >
>>> > Thanks,
>>> > June Taylor
>>> > System Administrator, Minnesota Population Center
>>> > University of Minnesota
>>>
>>>
>>
>>
>


Re: Framework taking default resources even though a role is specified

2016-04-15 Thread June Taylor
Yep - we're waiting for it.


Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota

On Fri, Apr 15, 2016 at 3:23 PM, Anand Mazumdar  wrote:

> FWIW, we recently fixed `mesos-execute` (command scheduler) to add support
> for roles. It should be available in the next release (0.29).
>
> https://issues.apache.org/jira/browse/MESOS-4744
>
> -anand
>
> On Apr 15, 2016, at 11:41 AM, June Taylor  wrote:
>
> Ken,
>
> Thanks for your reply.
>
> Is there a way to ensure a framework only receives the reserved resources?
>
> I would go ahead and take everything out of the * role, however, the
> 'mesos-execute' command doesn't support specifying a role, so that's the
> only way we can currently get mesos-execute to co-exist with pyspark.
>
> Any other thoughts from the group?
>
>
> Thanks,
> June Taylor
> System Administrator, Minnesota Population Center
> University of Minnesota
>
> On Fri, Apr 15, 2016 at 11:54 AM, Ken Sipe  wrote:
>
>> The framework with role “production” will receive production resources
>> and * resources
>> All other frameworks (assuming no role) will only receive * resources
>>
>> ken
>>
>> > On Apr 15, 2016, at 11:38 AM, June Taylor  wrote:
>> >
>> > We have a small cluster with 3 nodes in the * resource role default,
>> and 3 nodes in a "production" resource role.
>> >
>> > Starting up a framework which requests "production" properly executes
>> on the expected nodes, however, today we noticed that this job also started
>> up executors under the * resource role as well.
>> >
>> > We expect these tasks to only go on nodes with the "production"
>> resource role. Can you advise further?
>> >
>> > Thanks,
>> > June Taylor
>> > System Administrator, Minnesota Population Center
>> > University of Minnesota
>>
>>
>
>


Re: Framework taking default resources even though a role is specified

2016-04-15 Thread Anand Mazumdar
FWIW, we recently fixed `mesos-execute` (command scheduler) to add support for 
roles. It should be available in the next release (0.29).

https://issues.apache.org/jira/browse/MESOS-4744 


-anand

> On Apr 15, 2016, at 11:41 AM, June Taylor  wrote:
> 
> Ken,
> 
> Thanks for your reply.
> 
> Is there a way to ensure a framework only receives the reserved resources?
> 
> I would go ahead and take everything out of the * role, however, the 
> 'mesos-execute' command doesn't support specifying a role, so that's the only 
> way we can currently get mesos-execute to co-exist with pyspark.
> 
> Any other thoughts from the group?
> 
> 
> Thanks,
> June Taylor
> System Administrator, Minnesota Population Center
> University of Minnesota
> 
> On Fri, Apr 15, 2016 at 11:54 AM, Ken Sipe wrote:
> The framework with role “production” will receive production resources and * 
> resources
> All other frameworks (assuming no role) will only receive * resources
> 
> ken
> 
> > On Apr 15, 2016, at 11:38 AM, June Taylor wrote:
> >
> > We have a small cluster with 3 nodes in the * resource role default, and 3 
> > nodes in a "production" resource role.
> >
> > Starting up a framework which requests "production" properly executes on 
> > the expected nodes, however, today we noticed that this job also started up 
> > executors under the * resource role as well.
> >
> > We expect these tasks to only go on nodes with the "production" resource 
> > role. Can you advise further?
> >
> > Thanks,
> > June Taylor
> > System Administrator, Minnesota Population Center
> > University of Minnesota
> 
> 



Re: Error on Teardown attempt: Framework is not connected via HTTP

2016-04-15 Thread Anand Mazumdar
The `py-spark` framework looks to be driver-based, i.e. it uses the
`MesosSchedulerDriver` underneath. You would need to use the `/teardown`
endpoint that takes the `frameworkId` as a query parameter for tearing it
down. For more details, see:
http://mesos.apache.org/documentation/latest/endpoints/master/teardown/

The `TEARDOWN` call to the `/api/v1/scheduler` endpoint only works if your
framework is using the new scheduler HTTP API. Hope this helps.
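A sketch of driving that operator endpoint from a script. The path and `frameworkId` parameter name are as described above; here the parameter is sent as POST form data (which is what `curl -d 'frameworkId=...'` would produce) — confirm the exact parameter placement against your version's /teardown docs:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen


def teardown_request(master_host, framework_id):
    """Build the operator /teardown request: a POST carrying
    frameworkId=<id>. Port 5050 and parameter name are taken from
    the thread above; verify against your master's endpoint docs."""
    body = urlencode({"frameworkId": framework_id}).encode()
    return Request("http://%s:5050/teardown" % master_host, data=body)


# Hedged usage (would actually tear the framework down):
# urlopen(teardown_request("cluster",
#                          "0c540ad0-a050-4c20-82df-7bd14ce95f51-0090"))
```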

-anand

> On Apr 15, 2016, at 12:56 PM, June Taylor  wrote:
> 
> We're getting the highlighted error message returned when attempting to tear 
> down a framework on our cluster:
> 
> june@cluster:~$ mesos frameworks
>  ID  NAMEHOST 
>   ACTIVE  TASKS   CPU MEM DISK
>  0c540ad0-a050-4c20-82df-7bd14ce95f51-0090  pyspark-shell  cluster   True 
> 4115.0  450560.0  0.0
> 
> 
> june@cluster:~$ curl -XPOST http://cluster:5050/api/v1/scheduler -d '{
> "framework_id": { "value": "0c540ad0-a050-4c20-82df-7bd14ce95f51-0090" },
> "type": "TEARDOWN"}' -H Content-Type:application/json
> Framework is not connected via HTTP
> 
> We cannot get this framework to shut down. I'm not sure why we're getting 
> this type of error message, as the same POST command has worked against other 
> framework IDs in the past.
> 
> Your thoughts are much appreciated.
> 
> Thanks,
> June Taylor
> System Administrator, Minnesota Population Center
> University of Minnesota



Re: Error on Teardown attempt: Framework is not connected via HTTP

2016-04-15 Thread Vinod Kone
That's not the endpoint you want (that's for frameworks to use). You want
the /teardown endpoint (that's for operators).


Error on Teardown attempt: Framework is not connected via HTTP

2016-04-15 Thread June Taylor
We're getting the highlighted error message returned when attempting to
tear down a framework on our cluster:

june@cluster:~$ mesos frameworks
 ID  NAMEHOST
ACTIVE  TASKS   CPU MEM DISK
 0c540ad0-a050-4c20-82df-7bd14ce95f51-0090  pyspark-shell  cluster   True
  4115.0  450560.0  0.0


june@cluster:~$ curl -XPOST http://cluster:5050/api/v1/scheduler -d '{
"framework_id": { "value": "0c540ad0-a050-4c20-82df-7bd14ce95f51-0090" },
"type": "TEARDOWN"}' -H Content-Type:application/json
Framework is not connected via HTTP

We cannot get this framework to shut down. I'm not sure why we're getting
this type of error message, as the same POST command has worked against
other framework IDs in the past.

Your thoughts are much appreciated.

Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota


Checking success of resource reservations

2016-04-15 Thread Sammy Nguyen
Hi everyone,

I am making resource reservations and creating persistent volumes through
the operator HTTP endpoints on v0.28.0. In order to see if the requests
went through, the docs (
http://mesos.apache.org/documentation/latest/reservation/) say to check at
the appropriate slave's /state endpoint. However, we are not seeing
anything in the JSON response from that endpoint which would indicate
success of the reservation. Can anyone provide guidance on this?

For context, I am working on a script to reserve or unreserve disk and
create or destroy persistent volumes as needed, and we would like to fail
early if the reservation or persistent volume cannot be made.

Thanks,

*Sammy Nguyen*


Re: Framework taking default resources even though a role is specified

2016-04-15 Thread Ken Sipe
The framework with role “production” will receive production resources and * 
resources
All other frameworks (assuming no role) will only receive * resources

ken

> On Apr 15, 2016, at 11:38 AM, June Taylor  wrote:
> 
> We have a small cluster with 3 nodes in the * resource role default, and 3 
> nodes in a "production" resource role.
> 
> Starting up a framework which requests "production" properly executes on the 
> expected nodes, however, today we noticed that this job also started up 
> executors under the * resource role as well.
> 
> We expect these tasks to only go on nodes with the "production" resource 
> role. Can you advise further?
> 
> Thanks,
> June Taylor
> System Administrator, Minnesota Population Center
> University of Minnesota



Framework taking default resources even though a role is specified

2016-04-15 Thread June Taylor
We have a small cluster with 3 nodes in the * resource role default, and 3
nodes in a "production" resource role.

Starting up a framework which requests "production" properly executes on
the expected nodes, however, today we noticed that this job also started up
executors under the * resource role as well.

We expect these tasks to only go on nodes with the "production" resource
role. Can you advise further?

Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota


Re: Prometheus Exporters on Marathon

2016-04-15 Thread June Taylor
Thanks for the tip - I am not familiar with Golang and just installed
whatever came from Ubuntu's packages. I see that is 1.2.1, so I will check
out a newer version.


Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota

On Fri, Apr 15, 2016 at 8:36 AM, Dick Davies  wrote:

> You are probably building on an older version of Golang - I think the
> Timeout attribute was added to http.Client around 1.5 or 1.6?
>
> On 15 April 2016 at 13:56, June Taylor  wrote:
> > David,
> >
> > Thanks for the assistance. How did you get the mesos-exporter installed?
> > When I tried the instructions from github.com/mesosphere/mesos-exporter,
> I
> > got this error:
> >
> > june@-cluster:~$ go get github.com/mesosphere/mesos-exporter
> > # github.com/mesosphere/mesos-exporter
> > gosrc/src/github.com/mesosphere/mesos-exporter/common.go:46: unknown
> > http.Client field 'Timeout' in struct literal
> > gosrc/src/github.com/mesosphere/mesos-exporter/master_state.go:73:
> unknown
> > http.Client field 'Timeout' in struct literal
> > gosrc/src/github.com/mesosphere/mesos-exporter/slave_monitor.go:56:
> unknown
> > http.Client field 'Timeout' in struct literal
> >
> >
> > Thanks,
> > June Taylor
> > System Administrator, Minnesota Population Center
> > University of Minnesota
> >
> > On Fri, Apr 15, 2016 at 4:29 AM, David Keijser  >
> > wrote:
> >>
> >> Sure. there is not a lot to it though.
> >>
> >> So we have simple service file like this
> >>
> >> /usr/lib/systemd/system/mesos_exporter.service
> >> ```
> >> [Unit]
> >> Description=Prometheus mesos exporter
> >>
> >> [Service]
> >> EnvironmentFile=-/etc/sysconfig/mesos_exporter
> >> ExecStart=/usr/bin/mesos_exporter $OPTIONS
> >> Restart=on-failure
> >> ```
> >>
> >> and the sysconfig is just a simple
> >>
> >> /etc/sysconfig/mesos_exporter
> >> ```
> >> OPTIONS=-master=http://10.4.72.253:5050
> >> ```
> >>
> >> - or -
> >>
> >> /etc/sysconfig/mesos_exporter
> >> ```
> >> OPTIONS=-slave=http://10.4.72.177:5051
> >> ```
> >>
> >> On Thu, Apr 14, 2016 at 12:22:56PM -0500, June Taylor wrote:
> >> > David,
> >> >
> >> > Thanks for the reply. Would you be able to share your configs for
> >> > starting
> >> > up the exporters?
> >> >
> >> >
> >> > Thanks,
> >> > June Taylor
> >> > System Administrator, Minnesota Population Center
> >> > University of Minnesota
> >> >
> >> > On Thu, Apr 14, 2016 at 11:27 AM, David Keijser
> >> > 
> >> > wrote:
> >> >
> >> > > We run the mesos exporter [1] and the node_exporter on each host
> >> > > directly
> >> > > managed by systemd. For other application specific exporters we have
> >> > > so far
> >> > > been baking them into the docker image of the application which is
> >> > > being
> >> > > run by marathon.
> >> > >
> >> > > 1) https://github.com/mesosphere/mesos_exporter
> >> > >
> >> > > On Thu, 14 Apr 2016 at 18:20 June Taylor  wrote:
> >> > >
> >> > >> Is anyone else running Prometheus exporters on their cluster? I am
> >> > >> stuck
> >> > >> because I can't get a working "go build" environment right now.
> >> > >>
> >> > >> Is anyone else running this directly on their nodes and masters?
> Or,
> >> > >> via
> >> > >> Marathon?
> >> > >>
> >> > >> If so, please share your setup specifics.
> >> > >>
> >> > >> Thanks,
> >> > >> June Taylor
> >> > >> System Administrator, Minnesota Population Center
> >> > >> University of Minnesota
> >> > >>
> >> > >
> >
> >
>


Re: Prometheus Exporters on Marathon

2016-04-15 Thread Dick Davies
You are probably building on an older version of Golang - I think the
Timeout attribute was added to http.Client around 1.5 or 1.6?

On 15 April 2016 at 13:56, June Taylor  wrote:
> David,
>
> Thanks for the assistance. How did you get the mesos-exporter installed?
> When I tried the instructions from github.com/mesosphere/mesos-exporter, I
> got this error:
>
> june@-cluster:~$ go get github.com/mesosphere/mesos-exporter
> # github.com/mesosphere/mesos-exporter
> gosrc/src/github.com/mesosphere/mesos-exporter/common.go:46: unknown
> http.Client field 'Timeout' in struct literal
> gosrc/src/github.com/mesosphere/mesos-exporter/master_state.go:73: unknown
> http.Client field 'Timeout' in struct literal
> gosrc/src/github.com/mesosphere/mesos-exporter/slave_monitor.go:56: unknown
> http.Client field 'Timeout' in struct literal
>
>
> Thanks,
> June Taylor
> System Administrator, Minnesota Population Center
> University of Minnesota
>
> On Fri, Apr 15, 2016 at 4:29 AM, David Keijser 
> wrote:
>>
>> Sure. there is not a lot to it though.
>>
>> So we have simple service file like this
>>
>> /usr/lib/systemd/system/mesos_exporter.service
>> ```
>> [Unit]
>> Description=Prometheus mesos exporter
>>
>> [Service]
>> EnvironmentFile=-/etc/sysconfig/mesos_exporter
>> ExecStart=/usr/bin/mesos_exporter $OPTIONS
>> Restart=on-failure
>> ```
>>
>> and the sysconfig is just a simple
>>
>> /etc/sysconfig/mesos_exporter
>> ```
>> OPTIONS=-master=http://10.4.72.253:5050
>> ```
>>
>> - or -
>>
>> /etc/sysconfig/mesos_exporter
>> ```
>> OPTIONS=-slave=http://10.4.72.177:5051
>> ```
>>
>> On Thu, Apr 14, 2016 at 12:22:56PM -0500, June Taylor wrote:
>> > David,
>> >
>> > Thanks for the reply. Would you be able to share your configs for
>> > starting
>> > up the exporters?
>> >
>> >
>> > Thanks,
>> > June Taylor
>> > System Administrator, Minnesota Population Center
>> > University of Minnesota
>> >
>> > On Thu, Apr 14, 2016 at 11:27 AM, David Keijser
>> > 
>> > wrote:
>> >
>> > > We run the mesos exporter [1] and the node_exporter on each host
>> > > directly
>> > > managed by systemd. For other application specific exporters we have
>> > > so far
>> > > been baking them into the docker image of the application which is
>> > > being
>> > > run by marathon.
>> > >
>> > > 1) https://github.com/mesosphere/mesos_exporter
>> > >
>> > > On Thu, 14 Apr 2016 at 18:20 June Taylor  wrote:
>> > >
>> > >> Is anyone else running Prometheus exporters on their cluster? I am
>> > >> stuck
>> > >> because I can't get a working "go build" environment right now.
>> > >>
>> > >> Is anyone else running this directly on their nodes and masters? Or,
>> > >> via
>> > >> Marathon?
>> > >>
>> > >> If so, please share your setup specifics.
>> > >>
>> > >> Thanks,
>> > >> June Taylor
>> > >> System Administrator, Minnesota Population Center
>> > >> University of Minnesota
>> > >>
>> > >
>
>


Re: Pyspark Cluster Mode

2016-04-15 Thread June Taylor
Pradeep,

Thanks for the assistance! We'll be trying this out and I'll certainly let
you know if we have questions.


Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota

On Fri, Apr 15, 2016 at 6:50 AM, Pradeep Chhetri <
pradeep.chhetr...@gmail.com> wrote:

> Hi June,
>
> Here is the spark marathon configuration you were asking:
> https://gist.github.com/pradeepchhetri/df6b71580a9f107378ffebc789d805ac
>
> I have included the script to start MesosClusterDispatcher too in the
> above gist
>
> I would suggest you to use this Dockerfile as the reference for building
> spark docker image:
> https://github.com/apache/spark/blob/master/external/docker/spark-mesos/Dockerfile
>
> I have modified my dockerfile to read env variables and fill the
> configuration template. These env variables are being passed thru marathon.
>
> And ofcourse, we are here to help you out.
>
> On Thu, Apr 14, 2016 at 5:03 PM, June Taylor  wrote:
>
>> Shuai,
>>
>> Thank you for your reply. Are you actually using this docker image in
>> Marathon successfully? If so, please share your JSON for the application,
>> as that would help me understand exactly what you suggest.
>>
>>
>> Thanks,
>> June Taylor
>> System Administrator, Minnesota Population Center
>> University of Minnesota
>>
>> On Thu, Apr 14, 2016 at 9:23 AM, Shuai Lin 
>> wrote:
>>
>>> To run the dispatcher  in marathon I would recommend use a docker image
>>> like mesosphere/spark https://hub.docker.com/r/mesosphere/spark/tags/
>>>
>>> One problem is how to access the dispatcher since it may be launched on
>>> any one the slaves. You can setup a service discovery mechanism like
>>> marathon-lb or mesos-dns for this purpose, but it may be a little overkill
>>> if you don't need them except here.
>>>
>>> One simple approach is to specify --net=host in the marathon task for the
>>> dispatcher, and run a haproxy on your master server that tries all
>>> the slaves:
>>>
>>> listen mesos-spark-dispatcher 0.0.0.0:7077
 server node1 10.0.1.1:7077 check
 server node2 10.0.1.2:7077 check
 server node3 10.0.1.3:7077 check
>>>
>>>
>>> Then use "--master=mesos://yourmaster:7077" in your spark-submit command.
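(Editorial sketch, not from the thread: with the haproxy frontend above in place, a cluster-mode submission would look roughly like this — the host name and job URL are placeholders, and in cluster mode the script must be reachable from the slaves, e.g. over HTTP or HDFS, since the driver runs inside the cluster.)

```
spark-submit \
  --master mesos://yourmaster:7077 \
  --deploy-mode cluster \
  http://yourmaster/jobs/my_job.py
```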
>>>
>>>
>>>
>>> On Thu, Apr 14, 2016 at 10:03 PM, June Taylor  wrote:
>>>
 Pradeep,

 Thank you for your reply. I have read that documentation, but it leaves
 out a lot of key pieces. Have you actually run MesosClusterDispatcher on
 Marathon? If so, can you please share your JSON configuration for the
 application?


 Thanks,
 June Taylor
 System Administrator, Minnesota Population Center
 University of Minnesota

 On Wed, Apr 13, 2016 at 11:32 AM, Pradeep Chhetri <
 pradeep.chhetr...@gmail.com> wrote:

> In cluster mode, you need to first run *MesosClusterDispatcher*
> application on marathon (Read more about that here:
> http://spark.apache.org/docs/latest/running-on-mesos.html#cluster-mode
> )
>
> In both client and cluster mode, you need to specify the --master flag
> while submitting the job; the only difference is that in cluster mode you
> will be specifying the URL of the dispatcher
> (mesos://<dispatcher-host>:<dispatcher-port>) while in client mode, you
> will be specifying the URL of the mesos-master
> (mesos://<mesos-master-host>:<mesos-master-port>)
>
> On Wed, Apr 13, 2016 at 3:24 PM, June Taylor  wrote:
>
>> I'm interested in what the "best practice" is for running pyspark
>> jobs against a mesos cluster.
>>
>> Right now, we're simply passing the --master mesos://host:5050 flag,
>> which appears to register a framework properly.
>>
>> However, I was told this isn't "cluster mode" - and I'm a bit
>> confused. What is the recommended method of doing this?
>>
>> Thanks,
>> June Taylor
>> System Administrator, Minnesota Population Center
>> University of Minnesota
>>
>
>
>
> --
> Regards,
> Pradeep Chhetri
>


>>>
>>
>
>
> --
> Regards,
> Pradeep Chhetri
>


Re: Prometheus Exporters on Marathon

2016-04-15 Thread June Taylor
David,

Thanks for the assistance. How did you get the mesos-exporter installed?
When I tried the instructions from github.com/mesosphere/mesos-exporter, I
got this error:

june@-cluster:~$ go get github.com/mesosphere/mesos-exporter
# github.com/mesosphere/mesos-exporter
gosrc/src/github.com/mesosphere/mesos-exporter/common.go:46: unknown
http.Client field 'Timeout' in struct literal
gosrc/src/github.com/mesosphere/mesos-exporter/master_state.go:73: unknown
http.Client field 'Timeout' in struct literal
gosrc/src/github.com/mesosphere/mesos-exporter/slave_monitor.go:56: unknown
http.Client field 'Timeout' in struct literal


Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota

On Fri, Apr 15, 2016 at 4:29 AM, David Keijser 
wrote:

> Sure. There is not a lot to it though.
>
> So we have a simple service file like this
>
> /usr/lib/systemd/system/mesos_exporter.service
> ```
> [Unit]
> Description=Prometheus mesos exporter
>
> [Service]
> EnvironmentFile=-/etc/sysconfig/mesos_exporter
> ExecStart=/usr/bin/mesos_exporter $OPTIONS
> Restart=on-failure
> ```
>
> and the sysconfig is just a simple one-liner
>
> /etc/sysconfig/mesos_exporter
> ```
> OPTIONS=-master=http://10.4.72.253:5050
> ```
>
> - or -
>
> /etc/sysconfig/mesos_exporter
> ```
> OPTIONS=-slave=http://10.4.72.177:5051
> ```
>
> On Thu, Apr 14, 2016 at 12:22:56PM -0500, June Taylor wrote:
> > David,
> >
> > Thanks for the reply. Would you be able to share your configs for
> starting
> > up the exporters?
> >
> >
> > Thanks,
> > June Taylor
> > System Administrator, Minnesota Population Center
> > University of Minnesota
> >
> > On Thu, Apr 14, 2016 at 11:27 AM, David Keijser <
> david.keij...@klarna.com>
> > wrote:
> >
> > > We run the mesos exporter [1] and the node_exporter on each host
> directly
> > > managed by systemd. For other application specific exporters we have
> so far
> > > been baking them into the docker image of the application which is
> being
> > > run by marathon.
> > >
> > > 1) https://github.com/mesosphere/mesos_exporter
> > >
> > > On Thu, 14 Apr 2016 at 18:20 June Taylor  wrote:
> > >
> > >> Is anyone else running Prometheus exporters on their cluster? I am
> stuck
> > >> because I can't get a working "go build" environment right now.
> > >>
> > >> Is anyone else running this directly on their nodes and masters? Or,
> via
> > >> Marathon?
> > >>
> > >> If so, please share your setup specifics.
> > >>
> > >> Thanks,
> > >> June Taylor
> > >> System Administrator, Minnesota Population Center
> > >> University of Minnesota
> > >>
> > >
>


Re: Pyspark Cluster Mode

2016-04-15 Thread Pradeep Chhetri
Hi June,

Here is the spark marathon configuration you were asking:
https://gist.github.com/pradeepchhetri/df6b71580a9f107378ffebc789d805ac

I have included the script to start MesosClusterDispatcher too in the above
gist

I would suggest you use this Dockerfile as a reference for building
spark docker image:
https://github.com/apache/spark/blob/master/external/docker/spark-mesos/Dockerfile

I have modified my dockerfile to read env variables and fill the
configuration template. These env variables are being passed through Marathon.

And of course, we are here to help you out.

On Thu, Apr 14, 2016 at 5:03 PM, June Taylor  wrote:

> Shuai,
>
> Thank you for your reply. Are you actually using this docker image in
> Marathon successfully? If so, please share your JSON for the application,
> as that would help me understand exactly what you suggest.
>
>
> Thanks,
> June Taylor
> System Administrator, Minnesota Population Center
> University of Minnesota
>
> On Thu, Apr 14, 2016 at 9:23 AM, Shuai Lin  wrote:
>
>> To run the dispatcher in marathon I would recommend using a docker image
>> like mesosphere/spark https://hub.docker.com/r/mesosphere/spark/tags/
>>
>> One problem is how to access the dispatcher since it may be launched on
>> any one of the slaves. You can set up a service discovery mechanism like
>> marathon-lb or mesos-dns for this purpose, but it may be a little overkill
>> if you don't need them except here.
>>
>> One simple approach is to specify --net=host in the marathon task for the
>> dispatcher, and run a haproxy on your master server that tries all
>> the slaves:
>>
>> listen mesos-spark-dispatcher 0.0.0.0:7077
>>> server node1 10.0.1.1:7077 check
>>> server node2 10.0.1.2:7077 check
>>> server node3 10.0.1.3:7077 check
>>
>>
>> Then use "--master=mesos://yourmaster:7077" in your spark-submit command.
>>
>>
>>
>> On Thu, Apr 14, 2016 at 10:03 PM, June Taylor  wrote:
>>
>>> Pradeep,
>>>
>>> Thank you for your reply. I have read that documentation, but it leaves
>>> out a lot of key pieces. Have you actually run MesosClusterDispatcher on
>>> Marathon? If so, can you please share your JSON configuration for the
>>> application?
>>>
>>>
>>> Thanks,
>>> June Taylor
>>> System Administrator, Minnesota Population Center
>>> University of Minnesota
>>>
>>> On Wed, Apr 13, 2016 at 11:32 AM, Pradeep Chhetri <
>>> pradeep.chhetr...@gmail.com> wrote:
>>>
 In cluster mode, you need to first run *MesosClusterDispatcher*
 application on marathon (Read more about that here:
 http://spark.apache.org/docs/latest/running-on-mesos.html#cluster-mode)

 In both client and cluster mode, you need to specify the --master flag
 while submitting the job; the only difference is that in cluster mode you
 will be specifying the URL of the dispatcher
 (mesos://<dispatcher-host>:<dispatcher-port>) while in client mode, you
 will be specifying the URL of the mesos-master
 (mesos://<mesos-master-host>:<mesos-master-port>)

 On Wed, Apr 13, 2016 at 3:24 PM, June Taylor  wrote:

> I'm interested in what the "best practice" is for running pyspark jobs
> against a mesos cluster.
>
> Right now, we're simply passing the --master mesos://host:5050 flag,
> which appears to register a framework properly.
>
> However, I was told this isn't "cluster mode" - and I'm a bit
> confused. What is the recommended method of doing this?
>
> Thanks,
> June Taylor
> System Administrator, Minnesota Population Center
> University of Minnesota
>



 --
 Regards,
 Pradeep Chhetri

>>>
>>>
>>
>


-- 
Regards,
Pradeep Chhetri


Re: Mesos Clusters on different Network

2016-04-15 Thread Stefano Bianchi
Maybe I found a solution; it is an OpenStack issue.
Probably I just need to use the same virtual router for both networks.
I will keep you updated.
On 15 Apr 2016 at 10:30, "Stefano Bianchi"  wrote:

> Hi
>
> I already asked this question, but I decided to open a new topic since the
> one where I wrote before was not the right place for it.
>
> So, as many of you have already understood, I'm working on OpenStack,
> where there is no DNS.
> I have made 2 networks, NetA and NetB, and on each of them there is a mesos
> cluster.
> I would like to connect them; I'm using zookeeper for master
> coordination and slave registrations.
> My question is: if I have a DNS which makes all the machines on NetA and
> NetB able to communicate with each other (ping, telnet, or ssh), and I
> configure the zookeeper files with private IPs, can I reach the condition
> of distributed mesos clusters?
> On 15 Apr 2016 at 04:05, "Rodrick Brown"  wrote:
>
>> I have hundreds of small spark jobs running on my Mesos cluster
>> causing starvation to other frameworks like Marathon on my cluster.
>>
> Is there a way to prevent these frameworks from getting offers so often?
>>
>> Apr 15 02:00:12 prod-mesos-m-3.$SERVER.com mesos-master[10259]: I0415
>> 02:00:12.503734 10266 master.cpp:3641] Processing DECLINE call for offers:
>> [ 50ceafa4-f3c1-4738-a9eb-c5d3bf0ff742-O7112667 ] for
>> framework 50ceafa4-f3c1-4738-a9eb-c5d3bf0ff742-15936 
>> (KafkaDirectConsumer[trades-topic])
>> at scheduler-9e557d33-e4a4-44ce-9dbe-0a7ca7c4842d@172.1.121.183:34858.
>>
>>
>>
>>
>>
>> --
>>
>> *Rodrick Brown* / Systems Engineer
>>
>> +1 917 445 6839 / rodr...@orchardplatform.com
>> 
>>
>> *Orchard Platform*
>>
>> 101 5th Avenue, 4th Floor, New York, NY 10003
>>
>> http://www.orchardplatform.com
>>
>> Orchard Blog  | Marketplace
>> Lending Meetup 
>>
>> *NOTICE TO RECIPIENTS*: This communication is confidential and intended
>> for the use of the addressee only. If you are not an intended recipient of
>> this communication, please delete it immediately and notify the sender
>> by return email. Unauthorized reading, dissemination, distribution or
>> copying of this communication is prohibited. This communication does not 
>> constitute
>> an offer to sell or a solicitation of an indication of interest to purchase
>> any loan, security or any other financial product or instrument, nor is it
>> an offer to sell or a solicitation of an indication of interest to purchase
>> any products or services to any persons who are prohibited from receiving
>> such information under applicable law. The contents of this communication
>> may not be accurate or complete and are subject to change without notice.
>> As such, Orchard App, Inc. (including its subsidiaries and affiliates,
>> "Orchard") makes no representation regarding the accuracy or
>> completeness of the information contained herein. The intended recipient is
>> advised to consult its own professional advisors, including those
>> specializing in legal, tax and accounting matters. Orchard does not
>> provide legal, tax or accounting advice.
>>
>


Re: Prometheus Exporters on Marathon

2016-04-15 Thread David Keijser
Sure. There is not a lot to it though.

So we have a simple service file like this

/usr/lib/systemd/system/mesos_exporter.service
```
[Unit]
Description=Prometheus mesos exporter

[Service]
EnvironmentFile=-/etc/sysconfig/mesos_exporter
ExecStart=/usr/bin/mesos_exporter $OPTIONS
Restart=on-failure
```

and the sysconfig is just a simple one-liner

/etc/sysconfig/mesos_exporter
```
OPTIONS=-master=http://10.4.72.253:5050
```

- or -

/etc/sysconfig/mesos_exporter
```
OPTIONS=-slave=http://10.4.72.177:5051
```
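(Editorial sketch, not from the thread: with a unit and sysconfig file like the ones above in place, the exporter would be picked up and enabled the usual systemd way — adjust service name and privileges for your distribution.)

```
sudo systemctl daemon-reload
sudo systemctl enable mesos_exporter
sudo systemctl start mesos_exporter
```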

On Thu, Apr 14, 2016 at 12:22:56PM -0500, June Taylor wrote:
> David,
> 
> Thanks for the reply. Would you be able to share your configs for starting
> up the exporters?
> 
> 
> Thanks,
> June Taylor
> System Administrator, Minnesota Population Center
> University of Minnesota
> 
> On Thu, Apr 14, 2016 at 11:27 AM, David Keijser 
> wrote:
> 
> > We run the mesos exporter [1] and the node_exporter on each host directly
> > managed by systemd. For other application specific exporters we have so far
> > been baking them into the docker image of the application which is being
> > run by marathon.
> >
> > 1) https://github.com/mesosphere/mesos_exporter
> >
> > On Thu, 14 Apr 2016 at 18:20 June Taylor  wrote:
> >
> >> Is anyone else running Prometheus exporters on their cluster? I am stuck
> >> because I can't get a working "go build" environment right now.
> >>
> >> Is anyone else running this directly on their nodes and masters? Or, via
> >> Marathon?
> >>
> >> If so, please share your setup specifics.
> >>
> >> Thanks,
> >> June Taylor
> >> System Administrator, Minnesota Population Center
> >> University of Minnesota
> >>
> >




Mesos Clusters on different Network

2016-04-15 Thread Stefano Bianchi
Hi

I already asked this question, but I decided to open a new topic since the
one where I wrote before was not the right place for it.

So, as many of you have already understood, I'm working on OpenStack, where
there is no DNS.
I have made 2 networks, NetA and NetB, and on each of them there is a mesos
cluster.
I would like to connect them; I'm using zookeeper for master
coordination and slave registrations.
My question is: if I have a DNS which makes all the machines on NetA and
NetB able to communicate with each other (ping, telnet, or ssh), and I
configure the zookeeper files with private IPs, can I reach the condition
of distributed mesos clusters?
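(Editorial sketch, not from the thread — the addresses are placeholders: once the machines in both networks can reach each other by IP, the masters and slaves of both clusters can share a single quorum by pointing every node at the same ZooKeeper connection string of private IPs.)

```
# /etc/mesos/zk on every master and slave in NetA and NetB
zk://10.0.1.10:2181,10.0.1.11:2181,10.0.2.10:2181/mesos
```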
On 15 Apr 2016 at 04:05, "Rodrick Brown"  wrote:

> I have hundreds of small spark jobs running on my Mesos cluster
> causing starvation to other frameworks like Marathon on my cluster.
>
> Is there a way to prevent these frameworks from getting offers so often?
>
> Apr 15 02:00:12 prod-mesos-m-3.$SERVER.com mesos-master[10259]: I0415
> 02:00:12.503734 10266 master.cpp:3641] Processing DECLINE call for offers:
> [ 50ceafa4-f3c1-4738-a9eb-c5d3bf0ff742-O7112667 ] for
> framework 50ceafa4-f3c1-4738-a9eb-c5d3bf0ff742-15936 
> (KafkaDirectConsumer[trades-topic])
> at scheduler-9e557d33-e4a4-44ce-9dbe-0a7ca7c4842d@172.1.121.183:34858.
>
>
>
>
>
> --
>
> *Rodrick Brown* / Systems Engineer
>
> +1 917 445 6839 / rodr...@orchardplatform.com
> 
>
> *Orchard Platform*
>
> 101 5th Avenue, 4th Floor, New York, NY 10003
>
> http://www.orchardplatform.com
>
> Orchard Blog  | Marketplace Lending
> Meetup 
>
>