On Fri, Apr 22, 2011 at 10:17 PM, Allen Wittenauer
<awittena...@linkedin.com> wrote:
>        Could someone actually take the branch and try to install it from
> scratch (i.e., without copying a pre-existing config)? I found a multitude
> of problems with 203 doing this, but haven't had a chance to file JIRAs for
> all of them. (Pay attention to the log messages in particular.) I'm not
> going to be able to do this myself for various reasons.

Are the values in the default config broken and/or are some required
settings undocumented? The setup guide is almost surely out of sync.
Does the single-node setup still work? We should also test and
document an upgrade from an existing 0.20.2 cluster.
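
If it helps, here's a rough sketch (nothing that's in the branch, just an
illustration) of one quick way to see what a fresh install actually resolves
to: load the config the same way the daemons would and print a few key
settings. The class and key names assume the 0.20-era API
(org.apache.hadoop.conf.Configuration, org.apache.hadoop.mapred.JobConf,
fs.default.name, mapred.job.tracker); adjust if those have moved.

// Rough sketch: dump the settings a fresh install actually resolves to.
// Run it with the conf directory on the classpath (e.g. via bin/hadoop).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobConf;

public class DumpConf {
  public static void main(String[] args) {
    // new Configuration() picks up core-default.xml and core-site.xml from
    // the classpath; new JobConf() adds mapred-default.xml/mapred-site.xml.
    Configuration core = new Configuration();
    JobConf mapred = new JobConf();

    System.out.println("fs.default.name    = " + core.get("fs.default.name"));
    System.out.println("hadoop.tmp.dir     = " + core.get("hadoop.tmp.dir"));
    System.out.println("mapred.job.tracker = " + mapred.get("mapred.job.tracker"));
    System.out.println("mapred.local.dir   = " + mapred.get("mapred.local.dir"));
  }
}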

>        There are also a lot of places where hadoop-*-{core|hdfs|mapred} are 
> referenced in the documentation, but since this release breaks with 
> 0.20.{0-2}, all of those documentation references need to be fixed.

Are you talking about the jar naming since the Maven changes? I found a
couple of places in the mapred-tutorial that still use the old naming. Are
there others?

>        As it stands, I wouldn't want to use this release in production. In
> addition to the previously mentioned issues, the job size
> unpredictability/regression with the capacity scheduler is particularly
> disturbing. The math we're seeing doesn't seem to match the documentation
> at all. (Are we missing a test here?)

The documentation has almost surely lagged behind the changes in the
CapacityScheduler, particularly w.r.t. limits and how much of the queue
capacity a single user can claim. Can you file a JIRA with an example where
the assigned capacity doesn't match what you expected? That should
definitely be fixed. -C
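
P.S. For anyone comparing numbers: below is a back-of-the-envelope sketch of
the per-user limit as I read the documentation, not a claim about what the
203 scheduler actually does (which is exactly what's in question). The
100-slot queue and minimum-user-limit-percent of 25 are made-up values for
illustration.

// Sketch of the *documented* CapacityScheduler per-user limit as I read it:
// active users split the queue capacity evenly until the per-user share
// would drop below minimum-user-limit-percent, which then acts as the floor.
// This is not the scheduler code; the numbers are illustrative only.
public class UserLimitSketch {

  /** Max slots a single user should be able to claim, per the docs. */
  static int userLimitSlots(int queueCapacitySlots, int minUserLimitPercent,
                            int activeUsers) {
    int equalSharePct = 100 / Math.max(activeUsers, 1);
    int limitPct = Math.max(equalSharePct, minUserLimitPercent);
    return queueCapacitySlots * limitPct / 100;
  }

  public static void main(String[] args) {
    // A 100-slot queue with minimum-user-limit-percent = 25 should give:
    // 1 user -> 100 slots, 2 -> 50, 3 -> 33, 4 or more -> 25.
    for (int users = 1; users <= 5; users++) {
      System.out.println(users + " active user(s): "
          + userLimitSlots(100, 25, users) + " slots");
    }
  }
}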
