[
https://issues.apache.org/jira/browse/HADOOP-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176937#comment-13176937
]
Eli Collins commented on HADOOP-7939:
-------------------------------------
bq. can you, please, let me know what is wrong with uniformly named environment
variables that are used in a very straightforward way in all of the scripts?
There's nothing wrong with using environment variables; I just don't think we
need them.
The problem statement mentions two issues which are, in my opinion, orthogonal:
(1) making it easy to treat all the projects independently, and (2) the
requirement that the sub-dirs (logs, pids, bin, etc.) be at pre-defined places,
which prevents them from being "dynamically registered/discovered during the
runtime".
Wrt #1, having a per-project HOME variable solves this issue, and IIUC we have
that today, though the code should be further simplified. E.g. replace
$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib with just $HDFS_HOME/lib, rather than
making it even more configurable. The packaging (here or in Bigtop) can link
lib to whatever host-specific location it prefers.
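A sketch of what I mean (the $HDFS_HOME name and the paths below are
hypothetical, just to illustrate the idea):
{code}
# Packaging side: install the jars wherever the distro prefers, then
# expose them under the one well-known location the scripts use.
mkdir -p /usr/lib/hadoop-hdfs
ln -sfn /usr/share/java/hadoop-hdfs /usr/lib/hadoop-hdfs/lib

# Script side: a single per-project variable, no extra knobs.
HDFS_HOME=${HDFS_HOME:-/usr/lib/hadoop-hdfs}
for jar in "$HDFS_HOME"/lib/*.jar; do
  CLASSPATH="$CLASSPATH:$jar"
done
{code}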
Wrt #2, I'm not sure we need to dynamically register and discover these
locations at runtime. Even if we do, I don't see how hard-coding the location
prevents this - alternatives allows paths to be dynamically registered and
discovered using symlinks, and the same approach should work here too.
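For example, with the alternatives system (names and paths are hypothetical,
shown only to illustrate the symlink-switching idea):
{code}
# Register candidate conf dirs behind the fixed path the scripts read.
update-alternatives --install /etc/hadoop/conf hadoop-conf \
    /etc/hadoop/conf.empty 10
update-alternatives --install /etc/hadoop/conf hadoop-conf \
    /etc/hadoop/conf.cluster 50
# Switch the active one at any time; scripts keep reading the
# hard-coded /etc/hadoop/conf and pick up the change transparently.
update-alternatives --set hadoop-conf /etc/hadoop/conf.cluster
{code}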
In short, I'd like to see the relevant code in Hadoop be simpler, with fewer
variables and fewer cases to test, which hopefully translates into less time
spent maintaining this code and fewer bugs. I think we can do that while
accomplishing #1 for packaging, w/o precluding the packaging from accomplishing
#2 if it wants to.
> Improve Hadoop subcomponent integration in Hadoop 0.23
> ------------------------------------------------------
>
> Key: HADOOP-7939
> URL: https://issues.apache.org/jira/browse/HADOOP-7939
> Project: Hadoop Common
> Issue Type: Improvement
> Components: build, conf, documentation, scripts
> Affects Versions: 0.23.0
> Reporter: Roman Shaposhnik
> Assignee: Roman Shaposhnik
> Fix For: 0.23.1
>
>
> h1. Introduction
> For the rest of this proposal it is assumed that the current set
> of Hadoop subcomponents is:
> * hadoop-common
> * hadoop-hdfs
> * hadoop-yarn
> * hadoop-mapreduce
> It must be noted, though, that this is an open-ended list. For example,
> implementations of additional frameworks on top of yarn (e.g. MPI) would
> also be considered subcomponents.
> h1. Problem statement
> Currently there is unfortunate coupling and hard-coding at the level of
> launcher scripts, configuration scripts and Java implementation code that
> prevents us from treating all subcomponents of Hadoop independently of each
> other. In a lot of places it is assumed that bits and pieces from individual
> subcomponents *must* be located at predefined places, and they cannot be
> dynamically registered/discovered at runtime. This prevents a truly flexible
> deployment of Hadoop 0.23.
> h1. Proposal
> NOTE: this is NOT a proposal for redefining the layout from HADOOP-6255.
> The goal here is to keep as much of that layout in place as possible,
> while permitting different deployment layouts.
> The aim of this proposal is to introduce the needed level of indirection and
> flexibility in order to accommodate the currently assumed layout of Hadoop
> tarball deployments as well as all other styles of deployment. To this end
> the following set of environment variables needs to be used uniformly in all
> of the subcomponents' launcher scripts, configuration scripts and Java code
> (<SC> stands for the literal name of a subcomponent). These variables are
> expected to be defined by <SC>-env.sh scripts, and sourcing those files is
> expected to set the environment up correctly (see the sketch after the list
> below).
> # HADOOP_<SC>_HOME
> ## root of the subtree in a filesystem where a subcomponent is expected to
> be installed
> ## default value: $0/..
> # HADOOP_<SC>_JARS
> ## a subdirectory with all of the jar files comprising the subcomponent's
> implementation
> ## default value: $(HADOOP_<SC>_HOME)/share/hadoop/$(<SC>)
> # HADOOP_<SC>_EXT_JARS
> ## a subdirectory with all of the jar files needed for extended
> functionality of the subcomponent (not essential for the basic
> functionality to work correctly)
> ## default value: $(HADOOP_<SC>_HOME)/share/hadoop/$(<SC>)/ext
> # HADOOP_<SC>_NATIVE_LIBS
> ## a subdirectory with all the native libraries that the component requires
> ## default value: $(HADOOP_<SC>_HOME)/share/hadoop/$(<SC>)/native
> # HADOOP_<SC>_BIN
> ## a subdirectory with all of the launcher scripts specific to the client
> side of the component
> ## default value: $(HADOOP_<SC>_HOME)/bin
> # HADOOP_<SC>_SBIN
> ## a subdirectory with all of the launcher scripts specific to the
> server/system side of the component
> ## default value: $(HADOOP_<SC>_HOME)/sbin
> # HADOOP_<SC>_LIBEXEC
> ## a subdirectory with all of the launcher scripts that are internal to
> the implementation and should *not* be invoked directly
> ## default value: $(HADOOP_<SC>_HOME)/libexec
> # HADOOP_<SC>_CONF
> ## a subdirectory containing configuration files for a subcomponent
> ## default value: $(HADOOP_<SC>_HOME)/conf
> # HADOOP_<SC>_DATA
> ## a subtree in the local filesystem for storing the subcomponent's
> persistent state
> ## default value: $(HADOOP_<SC>_HOME)/data
> # HADOOP_<SC>_LOG
> ## a subdirectory for the subcomponent's log files to be stored
> ## default value: $(HADOOP_<SC>_HOME)/log
> # HADOOP_<SC>_RUN
> ## a subdirectory with runtime system-specific information (e.g. pid files)
> ## default value: $(HADOOP_<SC>_HOME)/run
> # HADOOP_<SC>_TMP
> ## a subdirectory with temporary files
> ## default value: $(HADOOP_<SC>_HOME)/tmp
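> To make this concrete, here is a minimal sketch of an <SC>-env.sh for hdfs
> (illustrative only; every value below is just the default listed above, and
> the deployment may override any of them before this file is sourced):
> {code}
> # hdfs-env.sh -- illustrative sketch only
> export HADOOP_HDFS_HOME=${HADOOP_HDFS_HOME:-$(cd "$(dirname "$0")/.." && pwd)}
> export HADOOP_HDFS_JARS=${HADOOP_HDFS_JARS:-$HADOOP_HDFS_HOME/share/hadoop/hdfs}
> export HADOOP_HDFS_EXT_JARS=${HADOOP_HDFS_EXT_JARS:-$HADOOP_HDFS_JARS/ext}
> export HADOOP_HDFS_NATIVE_LIBS=${HADOOP_HDFS_NATIVE_LIBS:-$HADOOP_HDFS_JARS/native}
> export HADOOP_HDFS_BIN=${HADOOP_HDFS_BIN:-$HADOOP_HDFS_HOME/bin}
> export HADOOP_HDFS_SBIN=${HADOOP_HDFS_SBIN:-$HADOOP_HDFS_HOME/sbin}
> export HADOOP_HDFS_LIBEXEC=${HADOOP_HDFS_LIBEXEC:-$HADOOP_HDFS_HOME/libexec}
> export HADOOP_HDFS_CONF=${HADOOP_HDFS_CONF:-$HADOOP_HDFS_HOME/conf}
> export HADOOP_HDFS_DATA=${HADOOP_HDFS_DATA:-$HADOOP_HDFS_HOME/data}
> export HADOOP_HDFS_LOG=${HADOOP_HDFS_LOG:-$HADOOP_HDFS_HOME/log}
> export HADOOP_HDFS_RUN=${HADOOP_HDFS_RUN:-$HADOOP_HDFS_HOME/run}
> export HADOOP_HDFS_TMP=${HADOOP_HDFS_TMP:-$HADOOP_HDFS_HOME/tmp}
> {code}
> A launcher script would then set itself up along these lines:
> {code}
> # Source the env script relative to the launcher's own location
> # ($0/.., per the HADOOP_<SC>_HOME default above).
> . "$(dirname "$0")/../libexec/hdfs-env.sh"
> # Build the classpath purely from the variables; nothing else in the
> # script needs to know where the deployment actually put the files.
> CLASSPATH="$HADOOP_HDFS_CONF"
> for jar in "$HADOOP_HDFS_JARS"/*.jar "$HADOOP_HDFS_EXT_JARS"/*.jar; do
>   CLASSPATH="$CLASSPATH:$jar"
> done
> {code}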