John,
Understood. I don't think making the tempdir be set up that way is ideal.
We've had issues with other frameworks in the past.
Darin
On Sep 9, 2015 11:48 AM, "John Omernik" <[email protected]> wrote:

> Well, at this point my biggest issue is the root-user stuff in the other
> thread and figuring out how to get it to work without making my slave's
> mesos temp only writable by root (is there a workaround? And is this a best
> practice anyhow? What are the downstream effects of this, etc.?)
>
> On Wed, Sep 9, 2015 at 10:45 AM, Darin Johnson <[email protected]>
> wrote:
>
> > Hey John, I'm going to try to recreate the issue using vanilla hadoop
> > later today.  Any other settings I should know about?
> > Darin
> > On Sep 9, 2015 9:42 AM, "John Omernik" <[email protected]> wrote:
> >
> > > This was another "slipped in" question in my other thread; I am breaking
> > > it out for specific instructions.  Basically, I was struggling with some
> > > things in the wiki on this page:
> > >
> > > https://cwiki.apache.org/confluence/display/MYRIAD/Installing+for+Administrators
> > >
> > > In step 5:
> > > Step 5: Configure YARN to use Myriad
> > >
> > > Modify the */opt/hadoop-2.7.0/etc/hadoop/yarn-site.xml* file as instructed
> > > in Sample: myriad-config-default.yml
> > > <https://cwiki.apache.org/confluence/display/MYRIAD/Sample%3A+myriad-config-default.yml>.
> > >
> > >
> > > Issue 1: It should link to the yarn-site.xml page, not the
> > > myriad-config-default.yml page.
> > >
> > > Issue 2:
> > > It has us put that information in the yarn-site.xml. This makes sense;
> > > the resource manager needs to be aware of the Myriad stuff.
> > >
> > > Then I go to create a tarball (which I SHOULD be able to use for both the
> > > resource manager and the nodemanager... right?). However, the
> > > instructions state to remove the *.xml files.
> > >
> > > Step 6: Create the Tarball
> > >
> > > The tarball has all of the files needed for the Node Managers and
> > > Resource Managers. The following shows how to create the tarball and
> > > place it in HDFS:
> > > cd ~
> > > sudo cp -rp /opt/hadoop-2.7.0 .
> > > sudo rm hadoop-2.7.0/etc/hadoop/*.xml
> > > sudo tar -zcpf ~/hadoop-2.7.0.tar.gz hadoop-2.7.0
> > > hadoop fs -put ~/hadoop-2.7.0.tar.gz /dist
> > >
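> > > As a quick sanity check (just something I do, not part of the wiki steps),
> > > listing the tarball shows whether the xml files actually got stripped:
> > > # expect no output here if the rm step above worked
> > > tar -tzf ~/hadoop-2.7.0.tar.gz | grep 'etc/hadoop/.*\.xml'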
> > >
> > > What I ended up doing... since I am running the resourcemanager (myriad)
> > > in marathon, is I created two tarballs. One is my hadoop-2.7.0-RM.tar.gz,
> > > which still has all the xml files in the tarball for shipping to
> > > marathon. The other is hadoop-2.7.0-NM.tar.gz, which per the instructions
> > > removes the *.xml files from the /etc/hadoop/ directory.
> > >
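> > > Roughly, the commands I used were along these lines (adapted from the
> > > Step 6 commands above; the exact paths are from my setup, so treat this
> > > as a sketch rather than the wiki's procedure):
> > > cd ~
> > > sudo cp -rp /opt/hadoop-2.7.0 .
> > > # RM tarball: keep the *.xml files so the resource manager ships with yarn-site.xml
> > > sudo tar -zcpf ~/hadoop-2.7.0-RM.tar.gz hadoop-2.7.0
> > > # NM tarball: now strip the *.xml files per the wiki instructions
> > > sudo rm hadoop-2.7.0/etc/hadoop/*.xml
> > > sudo tar -zcpf ~/hadoop-2.7.0-NM.tar.gz hadoop-2.7.0
> > > hadoop fs -put ~/hadoop-2.7.0-RM.tar.gz /dist
> > > hadoop fs -put ~/hadoop-2.7.0-NM.tar.gz /dist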
> > >
> > > I guess... my logic is that myriad creates the conf directory for the
> > > nodemanagers... but then I thought, am I overthinking something? Am I
> > > missing something? Could that be factoring into what I am doing here?
> > >
> > >
> > > Obviously my first steps are to add the extra yarn-site.xml entries, but
> > > in this current setup they are only going into the resource manager
> > > yarn-site, as the node-managers don't have a yarn-site in their
> > > directories. Am I looking at this correctly?  Perhaps we could rethink
> > > the removal process of the XML files in the tarball to allow this to work
> > > correctly with a single tarball?
> > >
> > > If I am missing something here, please advise!
> > >
> > >
> > > John
> > >
> >
>
