Re: How does Mesos parse hadoop command??

2015-11-04 Thread Elizabeth Lingg
As a follow-up: a place you would typically set the JAVA_HOME environment
variable is /etc/default/mesos-slave on Ubuntu; a minimal sketch follows.
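
A minimal sketch of what that file could contain (the paths are assumptions;
point them at the JVM and Hadoop actually installed on your agent):

```
# /etc/default/mesos-slave -- sourced by the mesos-slave init script on
# Ubuntu. The paths below are assumptions for illustration.
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_HOME=/opt/hadoop-2.6.0
```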

On Wed, Nov 4, 2015 at 11:38 AM, Elizabeth Lingg 
wrote:

> Ah, yes. I have seen this issue. Typically, it is because you have
> JAVA_HOME set on your host, but not on your Mesos agent. If you run a
> Marathon job and output "env", you will see that the JAVA_HOME environment
> variable is missing. You would need to set it in your agent init
> configuration as export JAVA_HOME=
>
> Thanks,
> Elizabeth
>
> On Wed, Nov 4, 2015 at 1:20 AM, haosdent  wrote:
>
>> How about adding this flag when launching the slave:
>>  --executor_environment_variables='{"HADOOP_HOME": "/opt/hadoop-2.6.0"}' ?
>>
>> On Wed, Nov 4, 2015 at 5:13 PM, Du, Fan  wrote:
>>
>>>
>>>
>>> On 2015/11/4 17:09, haosdent wrote:
>>>
 I notice
 ```
 "user":"root"
 ```
 Are you sure you can execute `hadoop version` as root?

>>>
>>>
>>> [root@tylersburg spark-1.5.1-bin-hadoop2.6]# whoami
>>> root
>>> [root@tylersburg spark-1.5.1-bin-hadoop2.6]# hadoop version
>>> Hadoop 2.6.0
>>> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
>>> e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
>>> Compiled by jenkins on 2014-11-13T21:10Z
>>> Compiled with protoc 2.5.0
>>> From source with checksum 18e43357c8f927c0695f1e9522859d6a
>>> This command was run using
>>> /opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar
>>>
>>> [root@tylersburg spark-1.5.1-bin-hadoop2.6]# ls -hl
>>> /opt/hadoop-2.6.0/bin/hadoop
>>> -rwxr-xr-x. 1 root root 5.4K Nov  3 08:36 /opt/hadoop-2.6.0/bin/hadoop
>>>
>>>
>>>
>>> On Wed, Nov 4, 2015 at 4:56 PM, Du, Fan wrote:



 On 2015/11/4 16:40, Tim Chen wrote:

 What OS are you running this with?

 And I assume if you run /bin/sh and try to run hadoop it can be
 found in
 your PATH as well?


 I'm using CentOS-7.2

 # /bin/sh hadoop version
 Hadoop 2.6.0
 Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
 e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
 Compiled by jenkins on 2014-11-13T21:10Z
 Compiled with protoc 2.5.0
 From source with checksum 18e43357c8f927c0695f1e9522859d6a
 This command was run using
 /opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar



 Tim

 On Wed, Nov 4, 2015 at 12:34 AM, Du, Fan wrote:

  Hi Mesos experts

  I setup a small mesos cluster with 1 master and 6 slaves,
  and deploy hdfs on the same cluster topology, both with
 root user role.

  #cat spark-1.5.1-bin-hadoop2.6/conf/spark-env.sh
  export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
  export


 JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/
  export
 SPARK_EXECUTOR_URI=hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz

  When I run a simple SparkPi test
  #export MASTER=mesos://Mesos_Master_IP:5050
  #spark-1.5.1-bin-hadoop2.6/bin/run-example SparkPi 1

  I got this on slaves:

  I1104 22:24:02.238471 14518 fetcher.cpp:414] Fetcher Info:


 {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/root","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"hdfs:\/\/test\/spark-1.5.1-bin-hadoop2.6.tgz"}}],"sandbox_directory":"\/ws\/mesos\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/frameworks\/556b49c1-7e6a-4f99-b320-c3f0c849e836-0003\/executors\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/runs\/9ec70f41-67d5-4a95-999f-933f3aa9e261","user":"root"}
  I1104 22:24:02.240910 14518 fetcher.cpp:369] Fetching URI
  'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
  I1104 22:24:02.240931 14518 fetcher.cpp:243] Fetching
 directly into
  the sandbox directory
  I1104 22:24:02.240952 14518 fetcher.cpp:180] Fetching URI
  'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
  E1104 22:24:02.245264 14518 shell.hpp:90] Command 'hadoop
 version
  2>&1' failed; this is the output:
  sh: hadoop: command not found
  Failed to fetch
 'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz':
  Skipping fetch with Hadoop client: Failed to execute
 'hadoop version
  2>&1'; the command was either not found or exited with 

Re: spark mesos shuffle service failing under marathon

2015-11-04 Thread Dean Wampler
Can you find anything in the logs that would indicate a failure?

On Wed, Nov 4, 2015 at 9:23 PM, Rodrick Brown 
wrote:

> Starting the Mesos shuffle service seems to background the process, so
> whenever Marathon tries to bring it up, it constantly keeps trying to
> start and never registers as started. Is there a fix for this?
>
>
> --
>
>
> Rodrick Brown / DevOPs Engineer
> +1 917 445 6839 / rodr...@orchardplatform.com
> 
>
> Orchard Platform
> 101 5th Avenue, 4th Floor, New York, NY 10003
> http://www.orchardplatform.com
>
> Orchard Blog | Marketplace Lending Meetup
>
>



-- 
*Dean Wampler, Ph.D.*
Typesafe 
Author: Programming Scala, 2nd Edition (O'Reilly)
@deanwampler 


Re: Apache Mesos Community Sync

2015-11-04 Thread Adam Bordelon
It's been a while since our last community sync, and tomorrow, Thursday Nov
5th shows up on my calendar as a 3pm Twitter-hosted meeting, since those
have traditionally been "Monthly on the first Thursday". After this, the
other meetings (third Thursday, or every other week?) can alternate between
9pm/9am. Let's get these on the calendar officially.

Vinod, are you/Twitter still planning to host the community sync tomorrow?

On Wed, Oct 14, 2015 at 1:01 AM, Adam Bordelon  wrote:

> We'll have the next community sync this Thursday (Oct. 15th) from 9-10am
> Pacific.
>
> Please add items to the agenda.
>
> We will use Hangouts on Air again. We will post the video stream link
> shortly before the meeting, and only active participants (especially people
> on the agenda) should join the actual hangout. Others can watch the video
> stream and ask brief questions on #mesos on IRC. If you have something
> lengthier to discuss, put it on the agenda and ping us on email/IRC to get
> into the hangout.
>
> To join in person, come to Mesosphere HQ at 88 Stevenson St and see
> reception on the 2nd floor.
>
>
> On Thu, Oct 1, 2015 at 9:30 AM, haosdent  wrote:
>
>> Got it. Thank you.
>>
>> On Fri, Oct 2, 2015 at 12:27 AM, Gilbert Song 
>> wrote:
>>
>> > Yes, the community sync is at 3 pm PST this afternoon. The video link is
>> > still not available. And here is the link for the meeting agenda/notes:
>> >
>> >
>> >
>> https://docs.google.com/document/d/153CUCj5LOJCFAVpdDZC7COJDwKh9RDjxaTA0S7lzwDA/edit?usp=sharing
>> >
>> > On Thu, Oct 1, 2015 at 9:19 AM, haosdent  wrote:
>> >
>> > > Is there a community sync today?
>> > >
>> > > On Fri, Sep 18, 2015 at 12:59 AM, Adam Bordelon 
>> > > wrote:
>> > >
>> > > > Today's community sync video/audio is archived at:
>> > > > http://youtu.be/ZQT6-fw8Ito
>> > > > The meeting agenda/notes are available at:
>> > > >
>> > > >
>> > >
>> >
>> https://docs.google.com/document/d/153CUCj5LOJCFAVpdDZC7COJDwKh9RDjxaTA0S7lzwDA/edit?usp=sharing
>> > > >
>> > > > For convenience, today's notes are reproduced below:
>> > > >
>> > > > - 0.21.2-0.24.1 Patch Releases [Adam]
>> > > >   - What’s the plan for how many releases we want to support? BenH:
>> > > >     support at least 3 versions (e.g. 0.22.x, 0.23.x, 0.24.x) for which
>> > > >     we will do patch fixes. Neil: or support an LTS version + recent
>> > > >     releases.
>> > > >   - Separate Release Manager for backports? Joris and MPark will RM for
>> > > >     these patch releases, with Adam shepherding. In general, patch/point
>> > > >     releases don’t need to be managed by the same person who did the
>> > > >     original release.
>> > > >   - Need some guidelines (on the website) for what is a
>> > > >     backport-able/critical patch.
>> > > >   - AI [Adam+0.25RMs]: Expand Release Guide with # of supported
>> > > >     releases, guidelines for critical patches, RM roles/responsibilities.
>> > > > - 0.25.0 Release Planning [Joris]: Dashboard
>> > > >   <https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12326859>
>> > > >   - Planning a triage meeting for Friday; hope to cut 0.25.0-rc1 by
>> > > >     Sept. 23rd.
>> > > > - MesosCon EU [Adam]: Schedule announced!
>> > > >   http://mesosconeu2015.sched.org/
>> > > >   - Register! Attend! Meet cool people! Learn awesome things!
>> > > >   - Want to grow the developer community as well as the user community.
>> > > >   - Community voting vs. Program Committee selection?
>> > > > - Mesos Developer Community Sync frequency to weekly [Joris, BenH]
>> > > >   - Rotating time for time zones.
>> > > >   - Weeklies can be video/hangout.
>> > > >   - 1/mo can be on-site @Twitter/Mesosphere/etc.
>> > > >   - AI [Joris]: Send out email proposal for times, weekly schedule.
>> > > >   - Proposal: send each meeting’s notes to the dev list (in plain text)
>> > > >     afterwards.
>> > > > - ReviewBot needs more OS/compiler coverage [Joseph, Joris]
>> > > >   - ReviewBot != Jenkins continuous build.
>> > > > - MESOS-3147: Allocator Refactor project kick-off; want to discuss more
>> > > >   about how we can proceed with this. [Guangya, MPark, AlexR]
>> > > >   - Shepherd: MPark
>> 

Join the Apache Aurora + Mesos Meetup next Tuesday in Palo Alto

2015-11-04 Thread Dave Lester
Join the Bay Area Apache Aurora and Apache Mesos communities for our November 
meetup, scheduled for Tuesday, November 10th, 2015 and hosted by our friends at 
Medallia in Palo Alto.

Register here: 
http://www.meetup.com/Bay-Area-Apache-Aurora-Users-Group/events/225412389/

The meetup will feature two talks:
 * Apache Aurora Overview and Project Updates / Roadmap, presented by Bill 
Farner (Apache Aurora VP)
 * Aurora+Docker at Medallia: Gradually transitioning to microservices, 
presented by Aasmund Eldhuset

Additionally, we’ve included time in the middle for lightning talks. If you 
have something you'd like to briefly present, please leave a comment on the 
meetup.com page and we'll do our best to add you to the schedule.

Hope to see members of the Mesos and Aurora communities there!

Dave

Re: Apache Mesos Community Sync

2015-11-04 Thread Jie Yu
Adam, since most of the Twitter folks are OOO this week, I chatted with
Artem/Vinod; we think it makes sense to host the sync at Mesosphere
tomorrow.

- Jie

On Wed, Nov 4, 2015 at 4:22 PM, Adam Bordelon  wrote:

> It's been a while since our last community sync, and tomorrow, Thursday
> Nov 5th shows up on my calendar as a 3pm Twitter-hosted meeting, since
> those have traditionally been "Monthly on the first Thursday". After
> this, the other meetings (third Thursday, or every other week?) can
> alternate between 9pm/9am. Let's get these on the calendar officially.
>
> Vinod, are you/Twitter still planning to host the community sync tomorrow?
>
> On Wed, Oct 14, 2015 at 1:01 AM, Adam Bordelon  wrote:
>
>> We'll have the next community sync this Thursday (Oct. 15th) from 9-10am
>> Pacific.
>>
>> Please add items to the agenda.
>>
>>
>> We will use Hangouts on Air again. We will post the video stream link
>> shortly before the meeting, and only active participants (especially people
>> on the agenda) should join the actual hangout. Others can watch the video
>> stream and ask brief questions on #mesos on IRC. If you have something
>> lengthier to discuss, put it on the agenda and ping us on email/IRC to get
>> into the hangout.
>>
>> To join in person, come to Mesosphere HQ at 88 Stevenson St and see
>> reception on the 2nd floor.
>>
>>
>> On Thu, Oct 1, 2015 at 9:30 AM, haosdent  wrote:
>>
>>> Got it. Thank you.
>>>
>>> On Fri, Oct 2, 2015 at 12:27 AM, Gilbert Song 
>>> wrote:
>>>
>>> > Yes, the community sync is at 3 pm PST this afternoon. The video link
>>> > is still not available. And here is the link for the meeting agenda/notes:
>>> >
>>> >
>>> >
>>> https://docs.google.com/document/d/153CUCj5LOJCFAVpdDZC7COJDwKh9RDjxaTA0S7lzwDA/edit?usp=sharing
>>> >
>>> > On Thu, Oct 1, 2015 at 9:19 AM, haosdent  wrote:
>>> >
>>> > > Is there a community sync today?
>>> > >
>>> > > On Fri, Sep 18, 2015 at 12:59 AM, Adam Bordelon 
>>> > > wrote:
>>> > >
>>> > > > Today's community sync video/audio is archived at:
>>> > > > http://youtu.be/ZQT6-fw8Ito
>>> > > > The meeting agenda/notes are available at:
>>> > > >
>>> > > >
>>> > >
>>> >
>>> https://docs.google.com/document/d/153CUCj5LOJCFAVpdDZC7COJDwKh9RDjxaTA0S7lzwDA/edit?usp=sharing
>>> > > >
>>> > > > For convenience, today's notes are reproduced below:
>>> > > >
>>> > > > - 0.21.2-0.24.1 Patch Releases [Adam]
>>> > > >   - What’s the plan for how many releases we want to support? BenH:
>>> > > >     support at least 3 versions (e.g. 0.22.x, 0.23.x, 0.24.x) for
>>> > > >     which we will do patch fixes. Neil: or support an LTS version +
>>> > > >     recent releases.
>>> > > >   - Separate Release Manager for backports? Joris and MPark will RM
>>> > > >     for these patch releases, with Adam shepherding. In general,
>>> > > >     patch/point releases don’t need to be managed by the same person
>>> > > >     who did the original release.
>>> > > >   - Need some guidelines (on the website) for what is a
>>> > > >     backport-able/critical patch.
>>> > > >   - AI [Adam+0.25RMs]: Expand Release Guide with # of supported
>>> > > >     releases, guidelines for critical patches, RM roles/responsibilities.
>>> > > > - 0.25.0 Release Planning [Joris]: Dashboard
>>> > > >   <https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12326859>
>>> > > >   - Planning a triage meeting for Friday; hope to cut 0.25.0-rc1 by
>>> > > >     Sept. 23rd.
>>> > > > - MesosCon EU [Adam]: Schedule announced!
>>> > > >   http://mesosconeu2015.sched.org/
>>> > > >   - Register! Attend! Meet cool people! Learn awesome things!
>>> > > >   - Want to grow the developer community as well as the user community.
>>> > > >   - Community voting vs. Program Committee selection?
>>> > > > - Mesos Developer Community Sync frequency to weekly [Joris, BenH]
>>> > > >   - Rotating time for time zones.
>>> > > >   - Weeklies can be video/hangout.
>>> > > >   - 1/mo can be on-site @Twitter/Mesosphere/etc.
>>> > > >   - AI [Joris]: Send out email proposal for times, weekly schedule.
>>> > > >   - Proposal: send each meeting’s notes to the dev list (in plain
>>> > > >     text) afterwards.

Re: Apache Mesos Community Sync

2015-11-04 Thread Adam Bordelon
Sounds great! Please join us at Mesosphere HQ, 88 Stevenson St., SF at 3pm
Pacific tomorrow.
We will use youtube-onair again, links to be posted to IRC/email shortly
before the meeting.

Please add agenda items:
https://docs.google.com/document/d/153CUCj5LOJCFAVpdDZC7COJDwKh9RDjxaTA0S7lzwDA/edit#heading=h.za1f9dpxisdr

On Wed, Nov 4, 2015 at 4:25 PM, Jie Yu  wrote:

> Adam, since most of the Twitter folks are OOO this week, I chatted with
> Artem/Vinod; we think it makes sense to host the sync at Mesosphere
> tomorrow.
>
> - Jie
>
> On Wed, Nov 4, 2015 at 4:22 PM, Adam Bordelon  wrote:
>
>> It's been a while since our last community sync, and tomorrow, Thursday
>> Nov 5th shows up on my calendar as a 3pm Twitter-hosted meeting, since
>> those have traditionally been "Monthly on the first Thursday". After
>> this, the other meetings (third Thursday, or every other week?) can
>> alternate between 9pm/9am. Let's get these on the calendar officially.
>>
>> Vinod, are you/Twitter still planning to host the community sync
>> tomorrow?
>>
>> On Wed, Oct 14, 2015 at 1:01 AM, Adam Bordelon 
>> wrote:
>>
>>> We'll have the next community sync this Thursday (Oct. 15th) from
>>> 9-10am Pacific.
>>>
>>> Please add items to the agenda.
>>>
>>>
>>> We will use Hangouts on Air again. We will post the video stream link
>>> shortly before the meeting, and only active participants (especially people
>>> on the agenda) should join the actual hangout. Others can watch the video
>>> stream and ask brief questions on #mesos on IRC. If you have something
>>> lengthier to discuss, put it on the agenda and ping us on email/IRC to get
>>> into the hangout.
>>>
>>> To join in person, come to Mesosphere HQ at 88 Stevenson St and see
>>> reception on the 2nd floor.
>>>
>>>
>>> On Thu, Oct 1, 2015 at 9:30 AM, haosdent  wrote:
>>>
 Got it. Thank you.

 On Fri, Oct 2, 2015 at 12:27 AM, Gilbert Song 
 wrote:

 > Yes, the community sync is at 3 pm PST this afternoon. The video link is
 > still not available. And here is the link for the meeting agenda/notes:
 >
 >
 >
 https://docs.google.com/document/d/153CUCj5LOJCFAVpdDZC7COJDwKh9RDjxaTA0S7lzwDA/edit?usp=sharing
 >
 > On Thu, Oct 1, 2015 at 9:19 AM, haosdent  wrote:
 >
 > > Is there a community sync today?
 > >
 > > On Fri, Sep 18, 2015 at 12:59 AM, Adam Bordelon 
 > > wrote:
 > >
 > > > Today's community sync video/audio is archived at:
 > > > http://youtu.be/ZQT6-fw8Ito
 > > > The meeting agenda/notes are available at:
 > > >
 > > >
 > >
 >
 https://docs.google.com/document/d/153CUCj5LOJCFAVpdDZC7COJDwKh9RDjxaTA0S7lzwDA/edit?usp=sharing
 > > >
 > > > For convenience, today's notes are reproduced below:
 > > >
 > > > - 0.21.2-0.24.1 Patch Releases [Adam]
 > > >   - What’s the plan for how many releases we want to support? BenH:
 > > >     support at least 3 versions (e.g. 0.22.x, 0.23.x, 0.24.x) for which
 > > >     we will do patch fixes. Neil: or support an LTS version + recent
 > > >     releases.
 > > >   - Separate Release Manager for backports? Joris and MPark will RM for
 > > >     these patch releases, with Adam shepherding. In general, patch/point
 > > >     releases don’t need to be managed by the same person who did the
 > > >     original release.
 > > >   - Need some guidelines (on the website) for what is a
 > > >     backport-able/critical patch.
 > > >   - AI [Adam+0.25RMs]: Expand Release Guide with # of supported
 > > >     releases, guidelines for critical patches, RM roles/responsibilities.
 > > > - 0.25.0 Release Planning [Joris]: Dashboard
 > > >   <https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12326859>
 > > >   - Planning a triage meeting for Friday; hope to cut 0.25.0-rc1 by
 > > >     Sept. 23rd.
 > > > - MesosCon EU [Adam]: Schedule announced!
 > > >   http://mesosconeu2015.sched.org/
 > > >   - Register! Attend! Meet cool people! Learn awesome things!
 > > >   - Want to grow the developer community as well as the user community.
 > > >   - Community voting vs. Program Committee selection?
 > > > - Mesos Developer 

spark mesos shuffle service failing under marathon

2015-11-04 Thread Rodrick Brown
Starting the Mesos shuffle service seems to background the process, so whenever
Marathon tries to bring it up, it constantly keeps trying to start and never
registers as started. Is there a fix for this?
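
A possible workaround (an assumption, not confirmed in this thread) is to run
the service in the foreground so Marathon can supervise it, invoking the
service class directly instead of the daemonizing start script:

```
# Sketch: run the Mesos external shuffle service in the foreground. The
# install path is a placeholder; the class name is from the Spark 1.5
# distribution layout, so verify it against your version.
cd /opt/spark-1.5.1-bin-hadoop2.6
./bin/spark-class org.apache.spark.deploy.mesos.MesosExternalShuffleService
```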


-- 
 
Rodrick Brown / DevOPs Engineer 
+1 917 445 6839 / rodr...@orchardplatform.com 

Orchard Platform 
101 5th Avenue, 4th Floor, New York, NY 10003 
http://www.orchardplatform.com 
Orchard Blog | Marketplace Lending Meetup



Re: Docker Multi-Host Networking and Mesos Isolation Strategies

2015-11-04 Thread John Omernik
I created a basic stub at https://issues.apache.org/jira/browse/MESOS-3828

Thanks!

John


On Wed, Nov 4, 2015 at 8:32 AM, haosdent  wrote:

> This new Docker feature looks exciting! To integrate with it, my quick
> idea is that we could implement it as a pluggable module and let the user
> choose which network isolator should be used. But this is just my opinion.
> Could you create a story for this in https://issues.apache.org/jira/browse/MESOS
> so we can track it better?
>
> On Wed, Nov 4, 2015 at 9:29 PM, John Omernik  wrote:
>
>> Hey all,
>>
>> I see Docker 1.9 has a neat multihost networking feature.
>>
>> http://blog.docker.com/2015/11/docker-multi-host-networking-ga/
>>
>> I am curious how this may integrate (if at all) with the Network
>> Isolation/IP per container strategy Mesos is looking at.  Is there overlap
>> here? Are there integration points? Are we looking at divergent or
>> convergent network strategies here?
>>
>> I would imagine that while Docker multi-host is Docker-specific, Mesos is
>> trying to solve the any-container-on-multiple-hosts problem, so the scope
>> may be larger; but could there be an opportunity for integration? The
>> reason I ask is that as I roll out Mesos PoCs, the dev team is excited
>> about these new features, and I want to understand how they may or may not
>> converge in the future.
>>
>> John
>>
>>
>>
>
>
> --
> Best Regards,
> Haosdent Huang
>


Re: How is Mesos doing certificate verification for resources in URIs?

2015-11-04 Thread Rad Gruchalski
Kamil,  

It’s perfect, thank you.

Kind regards,
Radek Gruchalski
ra...@gruchalski.com
de.linkedin.com/in/radgruchalski/



On Wednesday, 4 November 2015 at 12:31, Rad Gruchalski wrote:

> Kamil,
>  
>
> Kind regards,
> Radek Gruchalski
> ra...@gruchalski.com
> de.linkedin.com/in/radgruchalski/
>  
>  
>  
> On Wednesday, 4 November 2015 at 12:28, Kamil Chmielewski wrote:
>  
> > We had similar issues with a custom-built Mesos linked against libcurl4-nss:
> > https://github.com/apache/mesos/pull/48.
> > Everything works as expected when we use libcurl4-openssl. (A quick way to
> > check which libcurl a build links against is sketched at the end of this
> > thread.)
> >  
> > Cheers,
> > Kamil
> >  
> > 2015-11-04 12:19 GMT+01:00 Rad Gruchalski:
> > > Yes, this is from the agent:  
> > >  
> > > ~$ curl -i https://raw.githubusercontent.com/apache/spark/master/pom.xml
> > > HTTP/1.1 200 OK
> > > Content-Security-Policy: default-src 'none'
> > > X-XSS-Protection: 1; mode=block
> > > X-Frame-Options: deny
> > > X-Content-Type-Options: nosniff
> > > Strict-Transport-Security: max-age=31536000
> > > ETag: "762bfc728233533ab49336ff68dc02203407ea43"
> > > Content-Type: text/plain; charset=utf-8
> > > Cache-Control: max-age=300
> > > X-GitHub-Request-Id: B91F1318:509A:EEE5F90:5639E92E
> > > Content-Length: 87329
> > > Accept-Ranges: bytes
> > > Date: Wed, 04 Nov 2015 11:17:02 GMT
> > > Via: 1.1 varnish
> > > Connection: keep-alive
> > > X-Served-By: cache-lhr6327-LHR
> > > X-Cache: MISS
> > > X-Cache-Hits: 0
> > > Vary: Authorization,Accept-Encoding
> > > Access-Control-Allow-Origin: *
> > > X-Fastly-Request-ID: f3120a4d90968291aa84609c786626599809456d
> > > Expires: Wed, 04 Nov 2015 11:22:02 GMT
> > > Source-Age: 0
> > >  
> > > 
> > > Best Regards,
> > > Haosdent Huang
> > >  
> >  
>  
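
A quick way to check which SSL backend a Mesos build links against (a sketch;
the library path is an assumption, adjust it to your install prefix):

```
# Inspect which libcurl flavor libmesos.so was linked against. Per the
# thread above, NSS-based libcurl builds can fail CA verification where
# libcurl4-openssl works.
ldd /usr/local/lib/libmesos.so | grep -i curl
```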



Re: How does Mesos parse hadoop command??

2015-11-04 Thread Elizabeth Lingg
Ah, yes. I have seen this issue. Typically, it is because you have
JAVA_HOME set on your host, but not on your Mesos agent. If you run a
Marathon job and output "env", you will see that the JAVA_HOME environment
variable is missing (a quick way to check is sketched below). You would
need to set it in your agent init configuration as export JAVA_HOME=
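
A sketch of that check (the Marathon host, port, and app id are placeholders):

```
# Launch a throwaway Marathon app that dumps its environment, then inspect
# its stdout in the task sandbox for JAVA_HOME. Host and app id are
# placeholders.
curl -X POST http://marathon.host:8080/v2/apps \
  -H 'Content-Type: application/json' \
  -d '{"id": "/env-check", "cmd": "env && sleep 300", "cpus": 0.1, "mem": 32, "instances": 1}'
```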

Thanks,
Elizabeth

On Wed, Nov 4, 2015 at 1:20 AM, haosdent  wrote:

> How about adding this flag when launching the slave:
>  --executor_environment_variables='{"HADOOP_HOME": "/opt/hadoop-2.6.0"}' ?
>
> On Wed, Nov 4, 2015 at 5:13 PM, Du, Fan  wrote:
>
>>
>>
>> On 2015/11/4 17:09, haosdent wrote:
>>
>>> I notice
>>> ```
>>> "user":"root"
>>> ```
>>> Are you sure you can execute `hadoop version` as root?
>>>
>>
>>
>> [root@tylersburg spark-1.5.1-bin-hadoop2.6]# whoami
>> root
>> [root@tylersburg spark-1.5.1-bin-hadoop2.6]# hadoop version
>> Hadoop 2.6.0
>> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
>> e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
>> Compiled by jenkins on 2014-11-13T21:10Z
>> Compiled with protoc 2.5.0
>> From source with checksum 18e43357c8f927c0695f1e9522859d6a
>> This command was run using
>> /opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar
>>
>> [root@tylersburg spark-1.5.1-bin-hadoop2.6]# ls -hl
>> /opt/hadoop-2.6.0/bin/hadoop
>> -rwxr-xr-x. 1 root root 5.4K Nov  3 08:36 /opt/hadoop-2.6.0/bin/hadoop
>>
>>
>>
>> On Wed, Nov 4, 2015 at 4:56 PM, Du, Fan wrote:
>>>
>>>
>>> On 2015/11/4 16:40, Tim Chen wrote:
>>>
>>> What OS are you running this with?
>>>
>>> And I assume if you run /bin/sh and try to run hadoop it can be
>>> found in
>>> your PATH as well?
>>>
>>>
>>> I'm using CentOS-7.2
>>>
>>> # /bin/sh hadoop version
>>> Hadoop 2.6.0
>>> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
>>> e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
>>> Compiled by jenkins on 2014-11-13T21:10Z
>>> Compiled with protoc 2.5.0
>>> From source with checksum 18e43357c8f927c0695f1e9522859d6a
>>> This command was run using
>>> /opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar
>>>
>>>
>>>
>>> Tim
>>>
>>> On Wed, Nov 4, 2015 at 12:34 AM, Du, Fan wrote:
>>>
>>>  Hi Mesos experts
>>>
>>>  I setup a small mesos cluster with 1 master and 6 slaves,
>>>  and deploy hdfs on the same cluster topology, both with
>>> root user role.
>>>
>>>  #cat spark-1.5.1-bin-hadoop2.6/conf/spark-env.sh
>>>  export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
>>>  export
>>>
>>>
>>> JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/
>>>  export
>>> SPARK_EXECUTOR_URI=hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz
>>>
>>>  When I run a simple SparkPi test
>>>  #export MASTER=mesos://Mesos_Master_IP:5050
>>>  #spark-1.5.1-bin-hadoop2.6/bin/run-example SparkPi 1
>>>
>>>  I got this on slaves:
>>>
>>>  I1104 22:24:02.238471 14518 fetcher.cpp:414] Fetcher Info:
>>>
>>>
>>> {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/root","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"hdfs:\/\/test\/spark-1.5.1-bin-hadoop2.6.tgz"}}],"sandbox_directory":"\/ws\/mesos\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/frameworks\/556b49c1-7e6a-4f99-b320-c3f0c849e836-0003\/executors\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/runs\/9ec70f41-67d5-4a95-999f-933f3aa9e261","user":"root"}
>>>  I1104 22:24:02.240910 14518 fetcher.cpp:369] Fetching URI
>>>  'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
>>>  I1104 22:24:02.240931 14518 fetcher.cpp:243] Fetching
>>> directly into
>>>  the sandbox directory
>>>  I1104 22:24:02.240952 14518 fetcher.cpp:180] Fetching URI
>>>  'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
>>>  E1104 22:24:02.245264 14518 shell.hpp:90] Command 'hadoop
>>> version
>>>  2>&1' failed; this is the output:
>>>  sh: hadoop: command not found
>>>  Failed to fetch 'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz':
>>>  Skipping fetch with Hadoop client: Failed to execute
>>> 'hadoop version
>>>  2>&1'; the command was either not found or exited with a
>>> non-zero
>>>  exit status: 127
>>>  Failed to synchronize with slave (it's probably exited)
>>>
>>>
>>>  As for "sh: hadoop: command not found", it indicates when
>>> mesos
>>>  executes "hadoop version" command,
>>>  it cannot find any valid hadoop command, but actually when

Re: Docker Multi-Host Networking and Mesos Isolation Strategies

2015-11-04 Thread haosdent
This new Docker feature looks exciting! To integrate with it, my quick idea
is that we could implement it as a pluggable module and let the user choose
which network isolator should be used (a sketch of loading such a module is
below). But this is just my opinion. Could you create a story for this in
https://issues.apache.org/jira/browse/MESOS so we can track it better?
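
A hypothetical sketch of how such a module might be loaded (the module name
and library path are pure placeholders for illustration):

```
# Hypothetical: load a custom network isolator module into the slave and
# select it via --isolation. Names and paths are placeholders.
mesos-slave --master=zk://master.host:2181/mesos \
  --modules='{
    "libraries": [{
      "file": "/usr/local/lib/libcustom_network_isolator.so",
      "modules": [{"name": "com_example_CustomNetworkIsolator"}]
    }]
  }' \
  --isolation="com_example_CustomNetworkIsolator"
```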

On Wed, Nov 4, 2015 at 9:29 PM, John Omernik  wrote:

> Hey all,
>
> I see Docker 1.9 has a neat multihost networking feature.
>
> http://blog.docker.com/2015/11/docker-multi-host-networking-ga/
>
> I am curious how this may integrate (if at all) with the Network
> Isolation/IP per container strategy Mesos is looking at.  Is there overlap
> here? Are there integration points? Are we looking at divergent or
> convergent network strategies here?
>
> I would imagine that while Docker multi-host is Docker-specific, Mesos is
> trying to solve the any-container-on-multiple-hosts problem, so the scope
> may be larger; but could there be an opportunity for integration? The
> reason I ask is that as I roll out Mesos PoCs, the dev team is excited
> about these new features, and I want to understand how they may or may not
> converge in the future.
>
> John
>
>
>


-- 
Best Regards,
Haosdent Huang


Re: How does Mesos parse hadoop command??

2015-11-04 Thread Tim Chen
What OS are you running this with?

And I assume if you run /bin/sh and try to run hadoop it can be found in
your PATH as well?

Tim

On Wed, Nov 4, 2015 at 12:34 AM, Du, Fan  wrote:

> Hi Mesos experts
>
> I setup a small mesos cluster with 1 master and 6 slaves,
> and deploy hdfs on the same cluster topology, both with root user role.
>
> #cat spark-1.5.1-bin-hadoop2.6/conf/spark-env.sh
> export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
> export
> JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/
> export SPARK_EXECUTOR_URI=hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz
>
> When I run a simple SparkPi test
> #export MASTER=mesos://Mesos_Master_IP:5050
> #spark-1.5.1-bin-hadoop2.6/bin/run-example SparkPi 1
>
> I got this on slaves:
>
> I1104 22:24:02.238471 14518 fetcher.cpp:414] Fetcher Info:
> {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/root","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"hdfs:\/\/test\/spark-1.5.1-bin-hadoop2.6.tgz"}}],"sandbox_directory":"\/ws\/mesos\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/frameworks\/556b49c1-7e6a-4f99-b320-c3f0c849e836-0003\/executors\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/runs\/9ec70f41-67d5-4a95-999f-933f3aa9e261","user":"root"}
> I1104 22:24:02.240910 14518 fetcher.cpp:369] Fetching URI
> 'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
> I1104 22:24:02.240931 14518 fetcher.cpp:243] Fetching directly into the
> sandbox directory
> I1104 22:24:02.240952 14518 fetcher.cpp:180] Fetching URI
> 'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
> E1104 22:24:02.245264 14518 shell.hpp:90] Command 'hadoop version 2>&1'
> failed; this is the output:
> sh: hadoop: command not found
> Failed to fetch 'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz': Skipping
> fetch with Hadoop client: Failed to execute 'hadoop version 2>&1'; the
> command was either not found or exited with a non-zero exit status: 127
> Failed to synchronize with slave (it's probably exited)
>
>
> As for "sh: hadoop: command not found", it indicates when mesos executes
> "hadoop version" command,
> it cannot find any valid hadoop command, but actually when I log into the
> slave, "hadoop vesion"
> runs well, because I update hadoop path into PATH env.
>
> cat ~/.bashrc
> export
> JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/
> export HADOOP_PREFIX=/opt/hadoop-2.6.0
> export HADOOP_HOME=$HADOOP_PREFIX
> export HADOOP_COMMON_HOME=$HADOOP_PREFIX
> export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
> export HADOOP_HDFS_HOME=$HADOOP_PREFIX
> export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
> export HADOOP_YARN_HOME=$HADOOP_PREFIX
> export PATH=$PATH:$HADOOP_PREFIX/sbin:$HADOOP_PREFIX/bin
>
> I also tried to set HADOOP_HOME when launching mesos-slave; no luck, the
> slave complains it cannot find the JAVA_HOME env when executing "hadoop
> version".
>
> Finally I checked the Mesos code where this error happens; it looks quite
> straightforward.
>
>  ./src/hdfs/hdfs.hpp
>  44 // HTTP GET on hostname:port and grab the information in the
>  45 // <title>...</title> (this is the best hack I can think of to get
>  46 // 'fs.default.name' given the tools available).
>  47 struct HDFS
>  48 {
>  49   // Look for `hadoop' first where proposed, otherwise, look for
>  50   // HADOOP_HOME, otherwise, assume it's on the PATH.
>  51   explicit HDFS(const std::string& _hadoop)
>  52 : hadoop(os::exists(_hadoop)
>  53  ? _hadoop
>  54  : (os::getenv("HADOOP_HOME").isSome()
>  55 ? path::join(os::getenv("HADOOP_HOME").get(),
> "bin/hadoop")
>  56 : "hadoop")) {}
>  57
>  58   // Look for `hadoop' in HADOOP_HOME or assume it's on the PATH.
>  59   HDFS()
>  60 : hadoop(os::getenv("HADOOP_HOME").isSome()
>  61  ? path::join(os::getenv("HADOOP_HOME").get(),
> "bin/hadoop")
>  62  : "hadoop") {}
>  63
>  64   // Check if hadoop client is available at the path that was set.
>  65   // This can be done by executing `hadoop version` command and
>  66   // checking for status code == 0.
>  67   Try<bool> available()
>  68   {
>  69 Try<std::string> command = strings::format("%s version", hadoop);
>  70
>  71 CHECK_SOME(command);
>  72
>  73 // We are piping stderr to stdout so that we can see the error (if
>  74 // any) in the logs emitted by `os::shell()` in case of failure.
>  75 Try<std::string> out = os::shell(command.get() + " 2>&1");
>  76
>  77 if (out.isError()) {
>  78   return Error(out.error());
>  79 }
>  80
>  81 return true;
>  82   }
>
> It puzzled me for a while; am I missing something obvious?
> Thanks in advance.
>
>


Re: How does Mesos parse hadoop command??

2015-11-04 Thread haosdent
I notice
```
"user":"root"
```
Are you sure you can execute `hadoop version` as root?

On Wed, Nov 4, 2015 at 4:56 PM, Du, Fan  wrote:

>
>
> On 2015/11/4 16:40, Tim Chen wrote:
>
>> What OS are you running this with?
>>
>> And I assume if you run /bin/sh and try to run hadoop it can be found in
>> your PATH as well?
>>
>
> I'm using CentOS-7.2
>
> # /bin/sh hadoop version
> Hadoop 2.6.0
> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
> e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
> Compiled by jenkins on 2014-11-13T21:10Z
> Compiled with protoc 2.5.0
> From source with checksum 18e43357c8f927c0695f1e9522859d6a
> This command was run using
> /opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar
>
>
>
> Tim
>>
>> On Wed, Nov 4, 2015 at 12:34 AM, Du, Fan wrote:
>>
>> Hi Mesos experts
>>
>> I setup a small mesos cluster with 1 master and 6 slaves,
>> and deploy hdfs on the same cluster topology, both with root user
>> role.
>>
>> #cat spark-1.5.1-bin-hadoop2.6/conf/spark-env.sh
>> export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
>> export
>>
>> JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/
>> export SPARK_EXECUTOR_URI=hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz
>>
>> When I run a simple SparkPi test
>> #export MASTER=mesos://Mesos_Master_IP:5050
>> #spark-1.5.1-bin-hadoop2.6/bin/run-example SparkPi 1
>>
>> I got this on slaves:
>>
>> I1104 22:24:02.238471 14518 fetcher.cpp:414] Fetcher Info:
>>
>> {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/root","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"hdfs:\/\/test\/spark-1.5.1-bin-hadoop2.6.tgz"}}],"sandbox_directory":"\/ws\/mesos\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/frameworks\/556b49c1-7e6a-4f99-b320-c3f0c849e836-0003\/executors\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/runs\/9ec70f41-67d5-4a95-999f-933f3aa9e261","user":"root"}
>> I1104 22:24:02.240910 14518 fetcher.cpp:369] Fetching URI
>> 'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
>> I1104 22:24:02.240931 14518 fetcher.cpp:243] Fetching directly into
>> the sandbox directory
>> I1104 22:24:02.240952 14518 fetcher.cpp:180] Fetching URI
>> 'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
>> E1104 22:24:02.245264 14518 shell.hpp:90] Command 'hadoop version
>> 2>&1' failed; this is the output:
>> sh: hadoop: command not found
>> Failed to fetch 'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz':
>> Skipping fetch with Hadoop client: Failed to execute 'hadoop version
>> 2>&1'; the command was either not found or exited with a non-zero
>> exit status: 127
>> Failed to synchronize with slave (it's probably exited)
>>
>>
>> As for "sh: hadoop: command not found", it indicates when mesos
>> executes "hadoop version" command,
>> it cannot find any valid hadoop command, but actually when I log
>> into the slave, "hadoop vesion"
>> runs well, because I update hadoop path into PATH env.
>>
>> cat ~/.bashrc
>> export
>>
>> JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/
>> export HADOOP_PREFIX=/opt/hadoop-2.6.0
>> export HADOOP_HOME=$HADOOP_PREFIX
>> export HADOOP_COMMON_HOME=$HADOOP_PREFIX
>> export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
>> export HADOOP_HDFS_HOME=$HADOOP_PREFIX
>> export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
>> export HADOOP_YARN_HOME=$HADOOP_PREFIX
>> export PATH=$PATH:$HADOOP_PREFIX/sbin:$HADOOP_PREFIX/bin
>>
>> I also tried to set HADOOP_HOME when launching mesos-slave; no luck, the
>> slave complains it cannot find the JAVA_HOME env when executing "hadoop
>> version".
>>
>> Finally I checked the Mesos code where this error happens; it looks
>> quite straightforward.
>>
>>   ./src/hdfs/hdfs.hpp
>>   44 // HTTP GET on hostname:port and grab the information in the
>>   45 // <title>...</title> (this is the best hack I can think of to get
>>   46 // 'fs.default.name' given the tools available).
>>   47 struct HDFS
>>   48 {
>>   49   // Look for `hadoop' first where proposed, otherwise, look for
>>   50   // HADOOP_HOME, otherwise, assume it's on the PATH.
>>   51   explicit HDFS(const std::string& _hadoop)
>>   52 : hadoop(os::exists(_hadoop)
>>   53  ? _hadoop
>>   54  : (os::getenv("HADOOP_HOME").isSome()
>>   55 ? path::join(os::getenv("HADOOP_HOME").get(),
>> "bin/hadoop")
>>   56 : "hadoop")) {}
>>   57
>>   58   // Look for `hadoop' in HADOOP_HOME or assume it's on the PATH.
>>   59   HDFS()
>>   60 : hadoop(os::getenv("HADOOP_HOME").isSome()
>>   61  ? path::join(os::getenv("HADOOP_HOME").get(),
>> "bin/hadoop")
>>

Re: How does Mesos parse hadoop command??

2015-11-04 Thread Du, Fan



On 2015/11/4 17:09, haosdent wrote:

I notice
```
"user":"root"
```
Are you sure you can execute `hadoop version` as root?



[root@tylersburg spark-1.5.1-bin-hadoop2.6]# whoami
root
[root@tylersburg spark-1.5.1-bin-hadoop2.6]# hadoop version
Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 
e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1

Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using 
/opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar


[root@tylersburg spark-1.5.1-bin-hadoop2.6]# ls -hl 
/opt/hadoop-2.6.0/bin/hadoop

-rwxr-xr-x. 1 root root 5.4K Nov  3 08:36 /opt/hadoop-2.6.0/bin/hadoop




On Wed, Nov 4, 2015 at 4:56 PM, Du, Fan wrote:



On 2015/11/4 16:40, Tim Chen wrote:

What OS are you running this with?

And I assume if you run /bin/sh and try to run hadoop it can be
found in
your PATH as well?


I'm using CentOS-7.2

# /bin/sh hadoop version
Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using
/opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar



Tim

On Wed, Nov 4, 2015 at 12:34 AM, Du, Fan wrote:

 Hi Mesos experts

 I setup a small mesos cluster with 1 master and 6 slaves,
 and deploy hdfs on the same cluster topology, both with
root user role.

 #cat spark-1.5.1-bin-hadoop2.6/conf/spark-env.sh
 export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
 export


JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/
 export
SPARK_EXECUTOR_URI=hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz

 When I run a simple SparkPi test
 #export MASTER=mesos://Mesos_Master_IP:5050
 #spark-1.5.1-bin-hadoop2.6/bin/run-example SparkPi 1

 I got this on slaves:

 I1104 22:24:02.238471 14518 fetcher.cpp:414] Fetcher Info:


{"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/root","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"hdfs:\/\/test\/spark-1.5.1-bin-hadoop2.6.tgz"}}],"sandbox_directory":"\/ws\/mesos\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/frameworks\/556b49c1-7e6a-4f99-b320-c3f0c849e836-0003\/executors\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/runs\/9ec70f41-67d5-4a95-999f-933f3aa9e261","user":"root"}
 I1104 22:24:02.240910 14518 fetcher.cpp:369] Fetching URI
 'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
 I1104 22:24:02.240931 14518 fetcher.cpp:243] Fetching
directly into
 the sandbox directory
 I1104 22:24:02.240952 14518 fetcher.cpp:180] Fetching URI
 'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
 E1104 22:24:02.245264 14518 shell.hpp:90] Command 'hadoop
version
 2>&1' failed; this is the output:
 sh: hadoop: command not found
 Failed to fetch 'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz':
 Skipping fetch with Hadoop client: Failed to execute
'hadoop version
 2>&1'; the command was either not found or exited with a
non-zero
 exit status: 127
 Failed to synchronize with slave (it's probably exited)


 As for "sh: hadoop: command not found", it indicates when mesos
 executes "hadoop version" command,
 it cannot find any valid hadoop command, but actually when
I log
 into the slave, "hadoop vesion"
 runs well, because I update hadoop path into PATH env.

 cat ~/.bashrc
 export


JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/
 export HADOOP_PREFIX=/opt/hadoop-2.6.0
 export HADOOP_HOME=$HADOOP_PREFIX
 export HADOOP_COMMON_HOME=$HADOOP_PREFIX
 export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
 export HADOOP_HDFS_HOME=$HADOOP_PREFIX
 export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
 export HADOOP_YARN_HOME=$HADOOP_PREFIX
 export PATH=$PATH:$HADOOP_PREFIX/sbin:$HADOOP_PREFIX/bin

 I also tried to set HADOOP_HOME when launching mesos-slave; no luck,
 the slave complains it cannot find the JAVA_HOME env when executing
 "hadoop version".

 Finally I checked the Mesos code 

Re: How does Mesos parse hadoop command??

2015-11-04 Thread Du, Fan



On 2015/11/4 16:40, Tim Chen wrote:

What OS are you running this with?

And I assume if you run /bin/sh and try to run hadoop it can be found in
your PATH as well?


I'm using CentOS-7.2

# /bin/sh hadoop version
Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 
e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1

Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using 
/opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar





Tim

On Wed, Nov 4, 2015 at 12:34 AM, Du, Fan wrote:

Hi Mesos experts

I setup a small mesos cluster with 1 master and 6 slaves,
and deploy hdfs on the same cluster topology, both with root user role.

#cat spark-1.5.1-bin-hadoop2.6/conf/spark-env.sh
export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
export
JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/
export SPARK_EXECUTOR_URI=hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz

When I run a simple SparkPi test
#export MASTER=mesos://Mesos_Master_IP:5050
#spark-1.5.1-bin-hadoop2.6/bin/run-example SparkPi 1

I got this on slaves:

I1104 22:24:02.238471 14518 fetcher.cpp:414] Fetcher Info:

{"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/root","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"hdfs:\/\/test\/spark-1.5.1-bin-hadoop2.6.tgz"}}],"sandbox_directory":"\/ws\/mesos\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/frameworks\/556b49c1-7e6a-4f99-b320-c3f0c849e836-0003\/executors\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/runs\/9ec70f41-67d5-4a95-999f-933f3aa9e261","user":"root"}
I1104 22:24:02.240910 14518 fetcher.cpp:369] Fetching URI
'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
I1104 22:24:02.240931 14518 fetcher.cpp:243] Fetching directly into
the sandbox directory
I1104 22:24:02.240952 14518 fetcher.cpp:180] Fetching URI
'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
E1104 22:24:02.245264 14518 shell.hpp:90] Command 'hadoop version
2>&1' failed; this is the output:
sh: hadoop: command not found
Failed to fetch 'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz':
Skipping fetch with Hadoop client: Failed to execute 'hadoop version
2>&1'; the command was either not found or exited with a non-zero
exit status: 127
Failed to synchronize with slave (it's probably exited)


As for "sh: hadoop: command not found", it indicates when mesos
executes "hadoop version" command,
it cannot find any valid hadoop command, but actually when I log
into the slave, "hadoop vesion"
runs well, because I update hadoop path into PATH env.

cat ~/.bashrc
export
JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/
export HADOOP_PREFIX=/opt/hadoop-2.6.0
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export PATH=$PATH:$HADOOP_PREFIX/sbin:$HADOOP_PREFIX/bin

I also tried to set HADOOP_HOME when launching mesos-slave; no luck, the
slave complains it cannot find the JAVA_HOME env when executing "hadoop
version".

Finally I checked the Mesos code where this error happens; it looks quite
straightforward.

  ./src/hdfs/hdfs.hpp
  44 // HTTP GET on hostname:port and grab the information in the
  45 // <title>...</title> (this is the best hack I can think of to get
  46 // 'fs.default.name' given the tools available).
  47 struct HDFS
  48 {
  49   // Look for `hadoop' first where proposed, otherwise, look for
  50   // HADOOP_HOME, otherwise, assume it's on the PATH.
  51   explicit HDFS(const std::string& _hadoop)
  52 : hadoop(os::exists(_hadoop)
  53  ? _hadoop
  54  : (os::getenv("HADOOP_HOME").isSome()
  55 ? path::join(os::getenv("HADOOP_HOME").get(),
"bin/hadoop")
  56 : "hadoop")) {}
  57
  58   // Look for `hadoop' in HADOOP_HOME or assume it's on the PATH.
  59   HDFS()
  60 : hadoop(os::getenv("HADOOP_HOME").isSome()
  61  ? path::join(os::getenv("HADOOP_HOME").get(),
"bin/hadoop")
  62  : "hadoop") {}
  63
  64   // Check if hadoop client is available at the path that was set.
  65   // This can be done by executing `hadoop version` command and
  66   // checking for status code == 0.
  67   Try<bool> available()
  68   {
  69 Try<std::string> command = strings::format("%s version", hadoop);
  70
  71 CHECK_SOME(command);
  72
  73 // We are piping stderr to stdout so 

Re: How does Mesos parse hadoop command??

2015-11-04 Thread haosdent
How about adding this flag when launching the slave:
 --executor_environment_variables='{"HADOOP_HOME": "/opt/hadoop-2.6.0"}' ?
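
For example (a sketch; the master address is a placeholder, and passing
JAVA_HOME the same way is an assumption, in case the hadoop script needs it):

```
# Sketch: start the slave with HADOOP_HOME (and JAVA_HOME) injected into
# the executor environment. Master address and paths are placeholders.
mesos-slave --master=Mesos_Master_IP:5050 \
  --executor_environment_variables='{
    "HADOOP_HOME": "/opt/hadoop-2.6.0",
    "JAVA_HOME": "/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/"
  }'
```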

On Wed, Nov 4, 2015 at 5:13 PM, Du, Fan  wrote:

>
>
> On 2015/11/4 17:09, haosdent wrote:
>
>> I notice
>> ```
>> "user":"root"
>> ```
>> Are you sure you can execute `hadoop version` as root?
>>
>
>
> [root@tylersburg spark-1.5.1-bin-hadoop2.6]# whoami
> root
> [root@tylersburg spark-1.5.1-bin-hadoop2.6]# hadoop version
> Hadoop 2.6.0
> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
> e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
> Compiled by jenkins on 2014-11-13T21:10Z
> Compiled with protoc 2.5.0
> From source with checksum 18e43357c8f927c0695f1e9522859d6a
> This command was run using
> /opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar
>
> [root@tylersburg spark-1.5.1-bin-hadoop2.6]# ls -hl
> /opt/hadoop-2.6.0/bin/hadoop
> -rwxr-xr-x. 1 root root 5.4K Nov  3 08:36 /opt/hadoop-2.6.0/bin/hadoop
>
>
>
> On Wed, Nov 4, 2015 at 4:56 PM, Du, Fan wrote:
>>
>>
>>
>> On 2015/11/4 16:40, Tim Chen wrote:
>>
>> What OS are you running this with?
>>
>> And I assume if you run /bin/sh and try to run hadoop it can be
>> found in
>> your PATH as well?
>>
>>
>> I'm using CentOS-7.2
>>
>> # /bin/sh hadoop version
>> Hadoop 2.6.0
>> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
>> e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
>> Compiled by jenkins on 2014-11-13T21:10Z
>> Compiled with protoc 2.5.0
>> From source with checksum 18e43357c8f927c0695f1e9522859d6a
>> This command was run using
>> /opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar
>>
>>
>>
>> Tim
>>
>> On Wed, Nov 4, 2015 at 12:34 AM, Du, Fan wrote:
>>
>>  Hi Mesos experts
>>
>>  I setup a small mesos cluster with 1 master and 6 slaves,
>>  and deploy hdfs on the same cluster topology, both with
>> root user role.
>>
>>  #cat spark-1.5.1-bin-hadoop2.6/conf/spark-env.sh
>>  export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
>>  export
>>
>>
>> JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/
>>  export
>> SPARK_EXECUTOR_URI=hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz
>>
>>  When I run a simple SparkPi test
>>  #export MASTER=mesos://Mesos_Master_IP:5050
>>  #spark-1.5.1-bin-hadoop2.6/bin/run-example SparkPi 1
>>
>>  I got this on slaves:
>>
>>  I1104 22:24:02.238471 14518 fetcher.cpp:414] Fetcher Info:
>>
>>
>> {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/root","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"hdfs:\/\/test\/spark-1.5.1-bin-hadoop2.6.tgz"}}],"sandbox_directory":"\/ws\/mesos\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/frameworks\/556b49c1-7e6a-4f99-b320-c3f0c849e836-0003\/executors\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/runs\/9ec70f41-67d5-4a95-999f-933f3aa9e261","user":"root"}
>>  I1104 22:24:02.240910 14518 fetcher.cpp:369] Fetching URI
>>  'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
>>  I1104 22:24:02.240931 14518 fetcher.cpp:243] Fetching
>> directly into
>>  the sandbox directory
>>  I1104 22:24:02.240952 14518 fetcher.cpp:180] Fetching URI
>>  'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
>>  E1104 22:24:02.245264 14518 shell.hpp:90] Command 'hadoop
>> version
>>  2>&1' failed; this is the output:
>>  sh: hadoop: command not found
>>  Failed to fetch 'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz':
>>  Skipping fetch with Hadoop client: Failed to execute
>> 'hadoop version
>>  2>&1'; the command was either not found or exited with a
>> non-zero
>>  exit status: 127
>>  Failed to synchronize with slave (it's probably exited)
>>
>>
>>  As for "sh: hadoop: command not found": it indicates that when Mesos
>>  executes the "hadoop version" command, it cannot find any valid hadoop
>>  command. But when I actually log into the slave, "hadoop version" runs
>>  fine, because I added the hadoop path to the PATH env.
>>
>>  cat ~/.bashrc
>>  export
>>
>>
>> JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/
>>  export HADOOP_PREFIX=/opt/hadoop-2.6.0
>>  export HADOOP_HOME=$HADOOP_PREFIX
>>  export HADOOP_COMMON_HOME=$HADOOP_PREFIX
>>  export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
>>  

How does Mesos parse hadoop command??

2015-11-04 Thread Du, Fan

Hi Mesos experts

I setup a small mesos cluster with 1 master and 6 slaves,
and deploy hdfs on the same cluster topology, both with root user role.

#cat spark-1.5.1-bin-hadoop2.6/conf/spark-env.sh
export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
export 
JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/

export SPARK_EXECUTOR_URI=hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz

When I run a simple SparkPi test
#export MASTER=mesos://Mesos_Master_IP:5050
#spark-1.5.1-bin-hadoop2.6/bin/run-example SparkPi 1

I got this on slaves:

I1104 22:24:02.238471 14518 fetcher.cpp:414] Fetcher Info: 
{"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/root","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"hdfs:\/\/test\/spark-1.5.1-bin-hadoop2.6.tgz"}}],"sandbox_directory":"\/ws\/mesos\/slaves\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/frameworks\/556b49c1-7e6a-4f99-b320-c3f0c849e836-0003\/executors\/556b49c1-7e6a-4f99-b320-c3f0c849e836-S6\/runs\/9ec70f41-67d5-4a95-999f-933f3aa9e261","user":"root"}
I1104 22:24:02.240910 14518 fetcher.cpp:369] Fetching URI 
'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
I1104 22:24:02.240931 14518 fetcher.cpp:243] Fetching directly into the 
sandbox directory
I1104 22:24:02.240952 14518 fetcher.cpp:180] Fetching URI 
'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz'
E1104 22:24:02.245264 14518 shell.hpp:90] Command 'hadoop version 2>&1' 
failed; this is the output:

sh: hadoop: command not found
Failed to fetch 'hdfs://test/spark-1.5.1-bin-hadoop2.6.tgz': Skipping 
fetch with Hadoop client: Failed to execute 'hadoop version 2>&1'; the 
command was either not found or exited with a non-zero exit status: 127

Failed to synchronize with slave (it's probably exited)


As for "sh: hadoop: command not found", it indicates when mesos executes 
"hadoop version" command,
it cannot find any valid hadoop command, but actually when I log into 
the slave, "hadoop vesion"

runs well, because I update hadoop path into PATH env.

cat ~/.bashrc
export 
JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.91-2.6.2.1.el7_1.x86_64/jre/

export HADOOP_PREFIX=/opt/hadoop-2.6.0
export HADOOP_HOME=$HADOOP_PREFIX
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export PATH=$PATH:$HADOOP_PREFIX/sbin:$HADOOP_PREFIX/bin

I also tried to set HADOOP_HOME when launching mesos-slave; no luck, the
slave complains it cannot find the JAVA_HOME env when executing "hadoop
version".

Finally I checked the Mesos code where this error happens; it looks quite
straightforward.


 ./src/hdfs/hdfs.hpp
 44 // HTTP GET on hostname:port and grab the information in the
 45 // <title>...</title> (this is the best hack I can think of to get
 46 // 'fs.default.name' given the tools available).
 47 struct HDFS
 48 {
 49   // Look for `hadoop' first where proposed, otherwise, look for
 50   // HADOOP_HOME, otherwise, assume it's on the PATH.
 51   explicit HDFS(const std::string& _hadoop)
 52 : hadoop(os::exists(_hadoop)
 53  ? _hadoop
 54  : (os::getenv("HADOOP_HOME").isSome()
 55 ? path::join(os::getenv("HADOOP_HOME").get(), 
"bin/hadoop")

 56 : "hadoop")) {}
 57
 58   // Look for `hadoop' in HADOOP_HOME or assume it's on the PATH.
 59   HDFS()
 60 : hadoop(os::getenv("HADOOP_HOME").isSome()
 61  ? path::join(os::getenv("HADOOP_HOME").get(), 
"bin/hadoop")

 62  : "hadoop") {}
 63
 64   // Check if hadoop client is available at the path that was set.
 65   // This can be done by executing `hadoop version` command and
 66   // checking for status code == 0.
 67   Try<bool> available()
 68   {
 69 Try<std::string> command = strings::format("%s version", hadoop);
 70
 71 CHECK_SOME(command);
 72
 73 // We are piping stderr to stdout so that we can see the error (if
 74 // any) in the logs emitted by `os::shell()` in case of failure.
 75 Try<std::string> out = os::shell(command.get() + " 2>&1");
 76
 77 if (out.isError()) {
 78   return Error(out.error());
 79 }
 80
 81 return true;
 82   }

It puzzled me for a while; am I missing something obvious?
Thanks in advance.
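
The likely mechanism (a sketch of the reasoning, not confirmed in this
thread): ~/.bashrc is only sourced by interactive shells, while the fetcher
runs the check through a plain non-interactive /bin/sh, so PATH additions
made in ~/.bashrc are invisible to it. This can be reproduced from a login
shell:

```
# Interactive login shell: ~/.bashrc was sourced, so PATH contains hadoop.
hadoop version                        # prints "Hadoop 2.6.0 ..."

# Non-interactive shell with a clean environment, closer to what the
# slave daemon's os::shell() call sees when HADOOP_HOME and PATH were
# never exported to it:
env -i /bin/sh -c 'hadoop version'    # sh: hadoop: command not found
```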



How is Mesos doing certificate verification for resources in URIs?

2015-11-04 Thread Rad Gruchalski
Hi everyone,  

I’ve added the following URI to the URIs for the task: 
https://raw.githubusercontent.com/apache/spark/master/pom.xml. However, my task 
has failed because of:

Failed to fetch 
'https://raw.githubusercontent.com/apache/spark/master/pom.xml': Error 
downloading resource: Peer certificate cannot be authenticated with given CA 
certificates

This surely is a problem in mesos. Everybody else in the world claims that the 
certificate is valid. Or is there a setting for making this work?

Kind regards,
Radek Gruchalski
ra...@gruchalski.com
de.linkedin.com/in/radgruchalski/




Re: How is Mesos doing certificate verification for resources in URIs?

2015-11-04 Thread haosdent
Could you curl https://raw.githubusercontent.com/apache/spark/master/pom.xml
successfully on your slave?

On Wed, Nov 4, 2015 at 6:50 PM, Rad Gruchalski  wrote:

> Hi everyone,
>
> I’ve added the following URI to the URIs for the task:
> https://raw.githubusercontent.com/apache/spark/master/pom.xml. However,
> my task has failed because of:
>
> Failed to fetch '
> https://raw.githubusercontent.com/apache/spark/master/pom.xml': Error
> downloading resource: Peer certificate cannot be authenticated with given
> CA certificates
>
> This surely is a problem in mesos. Everybody else in the world claims that
> the certificate is valid. Or is there a setting for making this work?
>
> Kind regards,
> Radek Gruchalski
> ra...@gruchalski.com 
> de.linkedin.com/in/radgruchalski/
>
>
>



-- 
Best Regards,
Haosdent Huang


Re: How is Mesos doing certificate verification for resources in URIs?

2015-11-04 Thread Rad Gruchalski
Yes, this is from the agent:  

~$ curl -i https://raw.githubusercontent.com/apache/spark/master/pom.xml
HTTP/1.1 200 OK
Content-Security-Policy: default-src 'none'
X-XSS-Protection: 1; mode=block
X-Frame-Options: deny
X-Content-Type-Options: nosniff
Strict-Transport-Security: max-age=31536000
ETag: "762bfc728233533ab49336ff68dc02203407ea43"
Content-Type: text/plain; charset=utf-8
Cache-Control: max-age=300
X-GitHub-Request-Id: B91F1318:509A:EEE5F90:5639E92E
Content-Length: 87329
Accept-Ranges: bytes
Date: Wed, 04 Nov 2015 11:17:02 GMT
Via: 1.1 varnish
Connection: keep-alive
X-Served-By: cache-lhr6327-LHR
X-Cache: MISS
X-Cache-Hits: 0
Vary: Authorization,Accept-Encoding
Access-Control-Allow-Origin: *
X-Fastly-Request-ID: f3120a4d90968291aa84609c786626599809456d
Expires: Wed, 04 Nov 2015 11:22:02 GMT
Source-Age: 0


> Best Regards,
> Haosdent Huang