Just to follow up on my message below: a reboot seems to have fixed the service 
restart errors. There’s no reboot step mentioned in the installation guide, but 
I noticed a ‘waiting first boot’ (or something like that) entry in the output 
of ‘service --status-all’.
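
For anyone wanting to reproduce, this is roughly how I spotted it; the exact 
wording of the message may differ:

  # list the status of all init scripts and look for the 'first boot' entry
  service --status-all 2>&1 | grep -i boot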

I’d still be interested in the collective opinion of the developers on the 
latter part of the query though.

Cheers,
- Steve

From: Steven Nunez <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Friday, 6 December 2013 17:02
To: "[email protected]" <[email protected]>
Subject: Hue Using MR1?

Gents,

I’m starting an install from BigTop 0.7.0: HDFS, YARN and Hue, with the goal of 
building up from there to a minimal stack. In theory this should be as simple 
as ‘yum install hadoop\* hue\*’; in practice it turns out to be surprisingly 
broken. For example, after the yum install, Hue reports a configuration 
error:

hadoop.mapred_clusters.default.hadoop_mapred_home
  Current value: /usr/lib/hadoop-0.20-mapreduce
  Path does not exist on the filesystem.

This value isn’t set anywhere in the files that BigTop installed. Why is Hue 
looking for MR1 components at all?
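
In case it helps frame the question, this is the sort of change I’m guessing is 
needed in /etc/hue/conf/hue.ini; the submit_to flags, the hostname and the idea 
that this is the right knob are my assumptions, not something I’ve verified:

  [hadoop]
    [[mapred_clusters]]
      [[[default]]]
        # stop Hue from expecting the MR1 tree under /usr/lib/hadoop-0.20-mapreduce
        submit_to=False
    [[yarn_clusters]]
      [[[default]]]
        resourcemanager_host=localhost
        submit_to=True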

Fixing another misconfiguration reported by Hue:

hadoop.hdfs_clusters.default.webhdfs_url
  Current value: None
  Failed to access filesystem root

is simple enough. However, when restarting the daemons with "for i in 
hadoop-hdfs-namenode hadoop-hdfs-datanode ; do service $i restart ; done", both 
daemons fail to restart (say, what happened to start-dfs.sh?). Extracts from 
the logs show the reasons (a possible workaround is sketched after the list):


  * Datanode, because: java.net.BindException: Problem binding to 
    [0.0.0.0:50010] java.net.BindException: Address already in use
  * Namenode, because: java.io.IOException: Cannot lock storage 
    /var/lib/hadoop-hdfs/cache/hdfs/dfs/name. The directory is already locked
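
Presumably something like the following would avoid the collision. This is an 
untested sketch, and I’m assuming the bind/lock errors simply mean the daemons 
were still running after the install:

  # Check what is already running before (re)starting; the bind and lock
  # errors suggest something was still holding port 50010 and the name dir.
  for i in hadoop-hdfs-namenode hadoop-hdfs-datanode ; do
      service $i status
      service $i stop
      service $i start
  done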

So what I’ve done is a fresh install, modified the two config files described 
in the Hue configuration section 
<http://cloudera.github.io/hue/docs-3.5.0/manual.html#usage>, and tried to 
restart the HDFS daemons, which fail. This probably isn’t what I’d call 
‘working’.
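
For reference, the two changes were roughly these (the host and port are just 
my single-node assumptions):

In hdfs-site.xml:

  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>

In /etc/hue/conf/hue.ini:

  [hadoop]
    [[hdfs_clusters]]
      [[[default]]]
        webhdfs_url=http://localhost:50070/webhdfs/v1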

I know that BigTop is at 0.7.0, but then again version numbers don’t mean much 
these days. Is this kind of error to be expected after a fresh install? When 
will it be safe to assume that a simple stack (HDFS, YARN, Hue, Oozie, 
ZooKeeper) will work ‘out of the box’ without loads of manual configuration?

Perhaps I’m missing the point (I’m definitely missing the documentation), but 
at this stage it seems BigTop’s primary advantage is to ensure that the 
collection of packages is version compatible. Would that be fair to say? I’m 
not unappreciative of the value there, but just want to set expectations with 
people on what they’re getting with BigTop in its current state. I recall some 
earlier discussion of configuration, with somewhat different opinions; one of 
them, in my words and summarizing my understanding, was:

BigTop doesn’t do any configuration; that’s all left to the individual 
packages. BigTop just places them on the filesystem in a somewhat consistent 
manner (there are still .cfg, .ini and .conf files).

Would that be a fair statement?

Cheers,
- Steve

