+1 for option 4

On 10/12/11 9:50 AM, Eric Yang wrote:
Option #4 is the most practical choice for making a release.  Bleeding-edge
developers, however, may prefer to mix and match different versions of hdfs
and mapreduce.  Hence, it may be good to publish the single tarball for the
release, but continue to support component tarballs for developers and for
rpm/deb packaging, e.g. in case someone wants to run hdfs + hbase, but not
mapreduce, for a specialized application.  Component-separated tarballs
should continue to work for rpm/deb packaging.

regards,
Eric

On Oct 12, 2011, at 9:30 AM, Prashant Sharma wrote:

I support the idea of having #4 as an additional option.

On Wed, Oct 12, 2011 at 9:37 PM, Alejandro Abdelnur<t...@cloudera.com>  wrote:
Currently common, hdfs and mapred each create partial tars which are not usable
unless they are stitched together into a single tar.

With HADOOP-7642 the stitching happens as part of the build.
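
For reference, a minimal sketch of producing the stitched tar from trunk with
the mavenized build (the profile and flag names below are assumptions based on
the current Maven setup and may change):

  # build all modules, skip tests, and assemble the full distribution tarball
  mvn package -Pdist -DskipTests -Dtar
  # the stitched tar should land under hadoop-dist/target/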

The build currently produces the following tars:

1* common TAR
2* hdfs (partial) TAR
3* mapreduce (partial) TAR
4* hadoop (full, the stitched one) TAR

#1 on its own does not run anything, and #2 and #3 on their own don't run
either. #4 runs hdfs & mapreduce.

Questions:

Q1. Does it make sense to publish #1, #2 & #3? Or is #4 sufficient, and you
just start the services you want (i.e. HBase would just use HDFS)?
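
As an illustration of the "#4 is sufficient" case, a minimal sketch of starting
only HDFS from the full tarball; the tarball name is just an example, and the
script paths assume the sbin/ layout of the combined distribution:

  # unpack the full distribution and start only the HDFS daemons
  tar xzf hadoop-0.23.0-SNAPSHOT.tar.gz
  cd hadoop-0.23.0-SNAPSHOT
  sbin/start-dfs.sh   # namenode/datanodes only; no mapreduce daemons launched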

Q2. And what about a source TAR: does it make sense to have a source TAR per
component, or a single TAR for the whole?


For simplicity (for the build system and for users) I'd prefer a single
binary TAR and a single source TAR.

Thanks.

Alejandro



--

Prashant Sharma
Pramati Technologies
Begumpet, Hyderabad.



--
-Giri
