Re: Debugging Scheduler HTTP API Failures

2016-08-14 Thread Dario Rexin
Zameer, the header value is enclosed in []. This is because headers can have multiple values and the library you use puts them into a list. You have to take the first item from that list and then it should work. > On Aug 14, 2016, at 10:19 PM, Zameer Manji wrote: > > Here is a MWE: https://git
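A minimal, self-contained sketch (not taken from Zameer's MWE) of the pitfall described above, assuming a client library that exposes header values as lists; the stream-id value shown is hypothetical:

    # Hypothetical illustration: a client library that stores each header's
    # values as a list. Forwarding str([...]) instead of the bare value makes
    # the master's Mesos-Stream-Id comparison fail.
    subscribe_headers = {"Mesos-Stream-Id": ["c2f84f39-0d28-4f0f-9e0f-1a2b3c4d5e6f"]}

    raw = subscribe_headers["Mesos-Stream-Id"]
    stream_id = raw[0] if isinstance(raw, list) else raw  # unwrap the single-element list

    call_headers = {"Mesos-Stream-Id": stream_id}  # bare string, not "['...']"
    print(call_headers)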

Re: Debugging Scheduler HTTP API Failures

2016-08-14 Thread Zameer Manji
Here is a MWE: https://github.com/zmanji/mesos-mwe Follow the instructions in the README to reproduce. On Sun, Aug 14, 2016 at 9:04 PM, Dario Rexin wrote: > Can you post the code somewhere? > > > On Aug 14, 2016, at 8:54 PM, Zameer Manji wrote: > > Dario, > > The logs show that no disconnectio

Re: Debugging Scheduler HTTP API Failures

2016-08-14 Thread Dario Rexin
Can you post the code somewhere? > On Aug 14, 2016, at 8:54 PM, Zameer Manji wrote: > > Dario, > > The logs show that no disconnections occur until after the second POST > request. I would expect a log entry indicating a disconnect between the two > POST requests if the stream id changed. >

Re: Debugging Scheduler HTTP API Failures

2016-08-14 Thread Zameer Manji
Dario, The logs show that no disconnections occur until after the second POST request. I would expect a log entry indicating a disconnect between the two POST requests if the stream id changed. On Sun, Aug 14, 2016 at 8:28 PM, Dario Rexin wrote: > You’re absolutely right. I just tried the exact

Re: Debugging Scheduler HTTP API Failures

2016-08-14 Thread Dario Rexin
You’re absolutely right. I just tried the exact same steps and it worked fine for me. I also don’t see the log message. Do you have any reconnection logic in place? Is it possible that your framework reconnected before you sent the call? The Stream Id would change in that case. > On Aug 14, 2

Re: resources in Mesos UI but none given to Spark

2016-08-14 Thread Peter Figliozzi
Problem solved-- I was incorrectly specifying the Spark directory in the command-line argument. It wants the Spark root directory, not the bin directory. I unpacked mine in /opt, so the right way: spark.mesos.executor.home=/opt/spark-2.0.0-bin-hadoop2.7/ Wrong way: spark.mesos.executor.home=/opt/spark-
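For readers who set this property programmatically instead of on the spark-shell command line, a minimal sketch assuming PySpark 2.0 and a placeholder Mesos master URL:

    # Sketch under assumed setup: equivalent to passing
    # --conf spark.mesos.executor.home=... to spark-shell.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master("mesos://zk://zk1.example.com:2181/mesos")  # placeholder master URL
        .config("spark.mesos.executor.home", "/opt/spark-2.0.0-bin-hadoop2.7/")  # Spark root, not bin/
        .appName("executor-home-example")
        .getOrCreate()
    )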

Re: resources in Mesos UI but none given to Spark

2016-08-14 Thread Peter Figliozzi
I notice the first thing that happens when I run the spark-shell is *three failed 'sandbox' tasks* appearing in the Mesos UI. (One for each agent.) Then another three, as if it tried twice. I attached the Mesos logs from the master and one of the agents. All of this happens when I run the spark-

Re: Debugging Scheduler HTTP API Failures

2016-08-14 Thread Zameer Manji
Dario, I do not think the case sensitivity matters here. If the master was expecting a header that was exactly 'Mesos-Stream-Id' and did not see it, I would expect to get the error response: `All non-subscribe calls should include the 'Mesos-Stream-Id' header`. That is the error response that you

what's the difference between mesos and yarn?

2016-08-14 Thread Yu Wei
Hi guys, What's the difference between YARN and Mesos in practice? If using YARN, is a container still needed? Thanks, Jared Software developer Interested in open source software, big data, Linux

Re: Debugging Scheduler HTTP API Failures

2016-08-14 Thread Dario Rexin
Oh, sorry, I didn't see you actually set the header (wall of text ;) ). That's an interesting issue: do you set the header case-sensitively? I know headers shouldn't be case sensitive, but maybe there's a bug in the Mesos code. I have not seen this issue before. > On Aug 14, 2016, at 5:58 PM, Zam

Re: Debugging Scheduler HTTP API Failures

2016-08-14 Thread Dario Rexin
Hi Zameer, when you send the SUBSCRIBE to Mesos, the response will contain a header 'Mesos-Stream-Id'. You have to send that header with every subsequent call you send to Mesos for that framework. -- Dario > On Aug 14, 2016, at 5:58 PM, Zameer Manji wrote: > > Hey, > > I'm using the Mesos H
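A hedged sketch of that flow using Python's requests library; the master address and framework fields are placeholders, not values from this thread:

    # Sketch of the Scheduler HTTP API flow described above (placeholder values).
    import requests

    MASTER = "http://mesos-master.example.com:5050/api/v1/scheduler"  # assumed address

    subscribe = {
        "type": "SUBSCRIBE",
        "subscribe": {"framework_info": {"user": "root", "name": "example-framework"}},
    }
    resp = requests.post(MASTER, json=subscribe, stream=True)

    # The SUBSCRIBE response carries the stream id as a header; it must be sent
    # back on every later call made for this framework.
    stream_id = resp.headers["Mesos-Stream-Id"]

    teardown = {
        "type": "TEARDOWN",
        "framework_id": {"value": "<framework-id-from-SUBSCRIBED-event>"},  # placeholder
    }
    requests.post(MASTER, json=teardown, headers={"Mesos-Stream-Id": stream_id})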

Debugging Scheduler HTTP API Failures

2016-08-14 Thread Zameer Manji
Hey, I'm using the Mesos HTTP API for the first time. I am currently encountering an issue where after a successful SUBSCRIBE call and receiving a SUBSCRIBED and HEARTBEAT event, a subsequent TEARDOWN call fails with HTTP 400 with a message of "The stream ID included in this request didn't match t

Re: resources in Mesos UI but none given to Spark

2016-08-14 Thread Michael Gummelt
Turning on Spark debug logs in conf/log4j.properties may help. The problem could be any number of things, including that you don't have enough resources for the default executor sizes. On Sun, Aug 14, 2016 at 2:37 PM, Peter Figliozzi wrote: > Hi All, I am new to Mesos. I set up a cluster this

resources in Mesos UI but none given to Spark

2016-08-14 Thread Peter Figliozzi
Hi All, I am new to Mesos. I set up a cluster this weekend with 3 agents, 1 master, Mesos 1.0.0. The resources show up in the Mesos UI and the agents are all listed in the Agents tab. So everything looks good from that vantage point. Next I installed Spark 2.0.0 on each agent and the master, in the same

Re: Mesos loses track of Docker containers

2016-08-14 Thread Paul
Thank you, Sivaram. That would seem to be 2 "votes" for upgrading. -Paul > On Aug 13, 2016, at 11:47 PM, Sivaram Kannan wrote: > > > I don't remember the condition exactly, but I have faced a similar issue in my > deployments and it was fixed when I moved to 0.26.0. Upgrade the marathon >

Re: Using mesos' cfs limits on a docker container?

2016-08-14 Thread Artem Harutyunyan
Hi Mark, Good to hear you figured it out. Can you please post the curl errors you were observing and describe your image repository setup? I'd like to make sure that we have instructions on how to mitigate those. Artem. On Sunday, August 14, 2016, Mark Hammons wrote: > In specific, I wanted t

Re: Using mesos' cfs limits on a docker container?

2016-08-14 Thread Avinash Sridharan
Hi Mark, That would be awesome (user-facing documentation for the `UnifiedContainerizer`). We have bits and pieces on the unified containerizer (and what it actually is), but it would be great to land more comprehensive documentation on its motivation and usability. Maybe have a separate `unified

Re: Using mesos' cfs limits on a docker container?

2016-08-14 Thread Mark Hammons
Specifically, I wanted the process control capabilities of a Mesos framework with custom schedulers and executors, but wanted to run my tasks in a framework-definable environment (like running my tasks on a copy of Ubuntu 14 with certain libs installed). Using mixed-mode containerization worked w

Re: Using mesos' cfs limits on a docker container?

2016-08-14 Thread haosdent
Personally, I suggest using the approach @Joseph and @Avinash mentioned, because zhitao's and my patches require Docker >= 1.7.0. On Mon, Aug 15, 2016 at 1:27 AM, haosdent wrote: > Not sure if this related to https://issues.apache.org/ > jira/browse/MESOS-2154 > So far we have a quick workaround

Re: Using mesos' cfs limits on a docker container?

2016-08-14 Thread haosdent
Not sure if this is related to https://issues.apache.org/jira/browse/MESOS-2154 So far we have a quick workaround: specify the `cpu-period` and `cpu-quota` in the parameters field of `DockerInfo`. Then `Docker::run` would delegate this to the docker daemon. And recently zhitao and I have been working on the fix
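A sketch of that workaround as it might appear in a framework's ContainerInfo, written as a Python dict mirroring the Mesos protobuf fields; the image and the period/quota numbers are example values:

    # Pass `cpu-period` and `cpu-quota` through DockerInfo.parameters so the
    # Docker daemon enforces the CFS limits itself (example values).
    container_info = {
        "type": "DOCKER",
        "docker": {
            "image": "ubuntu:14.04",                       # example image
            "parameters": [
                {"key": "cpu-period", "value": "100000"},  # 100 ms CFS period
                {"key": "cpu-quota",  "value": "50000"},   # 50 ms quota ~= 0.5 CPU
            ],
        },
    }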

Re: Using mesos' cfs limits on a docker container?

2016-08-14 Thread Erik Weathers
What was the problem and how did you overcome it? (Otherwise this would be a sad resolution to this thread for someone facing the same problem in the future.) On Sunday, August 14, 2016, Mark Hammons wrote: > I finally got this working after fiddling with it all night. It works > great so far!

Re: Using mesos' cfs limits on a docker container?

2016-08-14 Thread Mark Hammons
I finally got this working after fiddling with it all night. It works great so far! Mark Edgar Hammons II - Research Engineer at BioEmergences 0603695656 > On 14 Aug 2016, at 04:50, Joseph Wu wrote: > > If you're not against running Docker containers without the Docker daemon, > try using the