Yuliya, I would be interested in the patch for MapR. Is that a patch for
Myriad or a patch for Hadoop on MapR?  I wonder if there is a hadoop env
file I could modify in my TGZ to help address the issue on my nodes as
well. Can you describe what "mapr.host" is and whether I can force-overwrite
it in my ENV file, or will MapR clobber it at a later point in
execution? I am thinking that with some simple sed, I could "fix" the conf
file.
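
Something like this is what I have in mind (just a sketch; it assumes the
sandbox file is literally named "conf", as in my launch command below, and
that each property sits on a single line):

# Hypothetical: rewrite mapr.host to this node's FQDN before the file
# is copied to yarn-site.xml.
sed -i "s|<name>mapr.host</name><value>[^<]*</value>|<name>mapr.host</name><value>$(hostname -f)</value>|" conf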

Wait, I suppose there is no way for me to edit the command used to run the
node manager... there's a thought. Could Myriad provide an ENV value or
something that would allow us to edit the command, or insert something into
the command that is used to run the NM?  (Below is the command on my
cluster.)  Basically, if there was a way to template that and alter it in
the Myriad config, I could add commands to update the variables in the conf
file before it's copied to yarn-site on every node... just spitballing
ideas here...



sudo tar -zxpf hadoop-2.7.0-NM.tar.gz && sudo chown mapr . && cp conf
hadoop-2.7.0/etc/hadoop/yarn-site.xml; export YARN_HOME=hadoop-2.7.0; sudo
-E -u mapr -H env YARN_HOME=hadoop-2.7.0
YARN_NODEMANAGER_OPTS=-Dnodemanager.resource.io-spindles=4.0
-Dyarn.resourcemanager.hostname=myriad.marathon.mesos
-Dyarn.nodemanager.container-executor.class=org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor
-Dnodemanager.resource.cpu-vcores=2 -Dnodemanager.resource.memory-mb=8192
-Dmyriad.yarn.nodemanager.address=0.0.0.0:31984
-Dmyriad.yarn.nodemanager.localizer.address=0.0.0.0:31233
-Dmyriad.yarn.nodemanager.webapp.address=0.0.0.0:31716
-Dmyriad.mapreduce.shuffle.port=31786  /bin/yarn nodemanager
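
For example, if that command were templated, I could imagine inserting a
fix-up step between the untar and the copy, something like this (purely
hypothetical; Myriad has no such hook today, and the sed pattern is just an
illustration):

# Hypothetical templated launch: patch per-node values in conf before it
# becomes yarn-site.xml, leaving the rest of the command unchanged.
sudo tar -zxpf hadoop-2.7.0-NM.tar.gz && sudo chown mapr . && \
sed -i "s|<value>hadoopmapr2.brewingintel.com</value>|<value>$(hostname -f)</value>|" conf && \
cp conf hadoop-2.7.0/etc/hadoop/yarn-site.xml && ...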

On Tue, Nov 17, 2015 at 4:44 PM, yuliya Feldman <yufeld...@yahoo.com.invalid> wrote:

> Hadoop (not MapR) requires the whole path, starting from "/", to be owned
> by root and writable only by root.
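> For example, a quick way to verify every ancestor directory (a sketch; run
> it from wherever the NM tarball is unpacked):
>
> # Print owner and octal mode of each directory up to /.
> d=$(pwd); while [ "$d" != "/" ]; do stat -c '%U %a %n' "$d"; d=$(dirname "$d"); done; stat -c '%U %a %n' /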
> The second problem is exactly what I was talking about: the configuration
> taken from the RM overwrites the local one.
> I can give you a patch to mitigate the issue for MapR if you are building
> from source.
> Thanks,
> Yuliya
>       From: John Omernik <j...@omernik.com>
>  To: dev@myriad.incubator.apache.org
>  Sent: Tuesday, November 17, 2015 1:15 PM
>  Subject: Re: Struggling with Permissions
>
> Well, sure, /tmp is world writable, but /tmp/mesos is not world writable, so
> there is a sandbox to play in there... or am I missing something? Not to
> mention my /tmp is mode rwt (sticky bit set), which is world writable but
> only the creator or root can delete entries (based on the googles).
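> For reference, this is the sort of thing I mean (typical output; mode 1777
> shows up as drwxrwxrwt, the trailing t being the sticky bit):
>
> $ ls -ld /tmp
> drwxrwxrwt 12 root root 4096 Nov 17 13:00 /tmp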
> Yuliya:
>
> I am seeing a weird behavior with MapR as it relates to (I believe) the
> mapr_direct_shuffle.
>
> In the NodeManager logs, I see things starting, and a message saying
> "Checking for local volume, if local volume is not present command will
> create and mount it".
>
> Command invoked is: /opt/mapr/server/createTTVolume.sh
> hadoopmapr7.brewingintel.com
> /var/mapr/local/hadoopmapr2.brewingintel.com/mapred
> /var/mapr/local/hadoopmapr2.brewingintel.com/mapred/nodeManager yarn
>
>
> What is interesting here is that hadoopmapr7 is the node the NodeManager is
> trying to start on; however, the mount point it is trying to create
> references hadoopmapr2, which is the node the ResourceManager happened to
> land on... I was very confused by that, because nowhere should hadoopmapr2
> be "known" to the NodeManager, since it thinks the ResourceManager hostname
> is myriad.marathon.mesos.
>
> So why is it hard-coded to the node the ResourceManager is running on?
>
> Well, if I look at the conf file in the sandbox (the file that gets copied
> to be yarn-site.xml for the NodeManagers), there ARE four references to
> hadoopmapr2. Three of the four say source "programatically" and one is just
> set... that's mapr.host. Could there be some downstream hinkiness going on
> with how MapR is setting hostnames? All of these values seem "wrong", in
> that mapr.host (on the NodeManager) should be hadoopmapr7 in this case, and
> the ResourceManager addresses should all be myriad.marathon.mesos. I'd be
> interested in your thoughts here, because I am stumped as to how these are
> getting set.
>
>
> <property><name>yarn.resourcemanager.address</name><value>hadoopmapr2:8032</value><source>programatically</source></property>
> <property><name>mapr.host</name><value>hadoopmapr2.brewingintel.com</value></property>
> <property><name>yarn.resourcemanager.resource-tracker.address</name><value>hadoopmapr2:8031</value><source>programatically</source></property>
> <property><name>yarn.resourcemanager.admin.address</name><value>hadoopmapr2:8033</value><source>programatically</source></property>
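>
> A quick way to tally those references (a sketch; assumes the sandbox file is
> literally named conf, as in my launch command):
>
> grep -o 'hadoopmapr2[^<]*' conf | sort | uniq -c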
>
>
> On Tue, Nov 17, 2015 at 2:51 PM, Darin Johnson <dbjohnson1...@gmail.com>
> wrote:
>
> > Yuliya: Are you referencing yarn.nodemanager.hostname or a MapR-specific
> > option?
> >
> > I'm working right now on passing a
> > -Dyarn.nodemanager.hostname=offer.getHostName().  Useful if you've got
> > extra IPs for a SAN or management network.
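> >
> > The NM launch line would then end up carrying something like this (a
> > sketch; the value would come from the Mesos offer at launch time):
> >
> > -Dyarn.nodemanager.hostname=hadoopmapr7.brewingintel.com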
> >
> > John: Yeah, the permissions on the tarball are a pain to get right.  I'm
> > working on Docker support and a build script for the tarball, which should
> > make things easier.  Also, to the point of using world-writable
> > directories: it's a little scary from the security side of things to allow
> > executables to run there, especially things running as privileged users.
> > Many distros of Linux will mount /tmp noexec.
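> >
> > For example, via an fstab entry along these lines (illustrative only):
> >
> > tmpfs  /tmp  tmpfs  defaults,nosuid,noexec  0 0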
> >
> > Darin
> >
> > On Tue, Nov 17, 2015 at 2:53 PM, yuliya Feldman
> > <yufeld...@yahoo.com.invalid> wrote:
> >
> > > Please change the work directory for the Mesos slave to one that is not
> > > /tmp, and make sure that dir is owned by root.
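> > > For example (a sketch; the path is just an illustration):
> > >
> > > sudo mkdir -p /var/lib/mesos && sudo chown root:root /var/lib/mesos
> > > mesos-slave --master=... --work_dir=/var/lib/mesos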
> > > There is one more caveat with the binary distro and MapR: in the Myriad
> > > code for the binary distro, configuration is copied from the RM to the
> > > NMs. That does not work for MapR, since we need the hostname (yes, for
> > > the sake of local volumes) to be unique.
> > > MapR will have a Myriad release to handle this situation.
> > >      From: John Omernik <j...@omernik.com>
> > >  To: dev@myriad.incubator.apache.org
> > >  Sent: Tuesday, November 17, 2015 11:37 AM
> > >  Subject: Re: Struggling with Permissions
> > >
> > > Oh hey, I found a post by me back on Sept 9.  I looked at the JIRAs and
> > > followed the instructions, with the same errors. At this point, do I
> > > still need to have a place where the entire path is owned by root? That
> > > seems like an odd requirement (a change on each node to facilitate a
> > > framework).
> > >
> > > On Tue, Nov 17, 2015 at 1:25 PM, John Omernik <j...@omernik.com>
> wrote:
> > >
> > > > Hey all, I am struggling with permissions on Myriad, trying to get the
> > > > right permissions in the tgz as well as figure out who to run as.  I am
> > > > running on MapR, which means I need to run as mapr or root (otherwise
> > > > my volume creation scripts will fail on MapR; MapR folks, we should
> > > > talk more about those scripts).
> > > >
> > > > But back to the code, I've had lots of issues. When I run the
> > > > Frameworkuser and Superuser as mapr, it unpacks everything as mapr, and
> > > > I get an error that "/bin/container-executor" must be owned by root but
> > > > is owned by 700 (my mapr UID).
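> > > >
> > > > For what it's worth, the usual fix for that check seems to be the
> > > > following (a sketch; the hadoop group name and the path inside my
> > > > tarball are assumptions):
> > > >
> > > > sudo chown root:hadoop hadoop-2.7.0/bin/container-executor
> > > > sudo chmod 6050 hadoop-2.7.0/bin/container-executor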
> > > >
> > > > So now I am running as root, and I am getting the error below as it
> > > > relates to /tmp. I am not sure which /tmp this refers to: the /tmp that
> > > > my slave is executing in (i.e., my local Mesos agent's /tmp directory)
> > > > or my MapR-FS /tmp directory (both of which are world writable, as /tmp
> > > > typically is... or am I mistaken here?).
> > > >
> > > > Any thoughts on how to get this resolved? This happens when the
> > > > NodeManager is trying to start, running as root, with root for both of
> > > > my Myriad users.
> > > >
> > > > Thanks!
> > > >
> > > >
> > > > Caused by: ExitCodeException exitCode=24: File /tmp must not be world or group writable, but is 1777
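> > > >
> > > > A quick way to see exactly what that check is rejecting (sketch):
> > > >
> > > > stat -c '%a %U:%G %n' /tmp
> > > > # e.g. prints: 1777 root:root /tmp -- that 1777 mode is what exitCode=24 flags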
> > > >
> > >
> >
>
