Hi Andrei,
I'll give it a try and get back to you as soon as possible. Thanks for
your help!
Sebastian
On 29.02.2012 18:41, Andrei Savu wrote:
I think this is a real problem we need to address soon - maybe even
make a new release (0.7.2).
Can you try to change the install java function to make it work for you?
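A hedged sketch of what such a change could look like (the function name and candidate path list are illustrative, not Whirr's actual install java function):

```shell
# Illustrative sketch only -- not Whirr's actual install java function.
# Picks the first JAVA_HOME candidate that exists on the machine, so the
# same setup script works on Ubuntu and on the Amazon Linux AMIs.
find_java_home() {
  local candidate
  for candidate in "$@"; do
    if [ -d "$candidate" ]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  return 1
}

# Candidates cover the 64-bit Amazon AMI layout and the Ubuntu layout;
# fall back to the old hard-coded default if none of them exist.
JAVA_HOME=$(find_java_home \
  /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64 \
  /usr/lib/jvm/java-6-openjdk \
  /usr/lib/jvm/java-1.6.0-openjdk) || JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk
export JAVA_HOME
```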
On Feb 29, 2012 7:04 PM, "Sebastian Schoenherr"
<[email protected]
<mailto:[email protected]>> wrote:
Hi Andrei,
thanks a lot for your reply. No, the hadoop service is not
starting as expected. I added the suggested line to the end of my
property file but the result is still the same. I'll keep on trying.
Here is the log from /tmp/logs/stderr.log; it looks like my
hadoop-env.JAVA_HOME is not used.
+ export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk
+ JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk
+ echo 'export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk'
+ echo 'export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk'
+ alternatives --install /usr/bin/java java
/usr/lib/jvm/java-1.6.0-openjdk/bin/java 17000
+ alternatives --set java /usr/lib/jvm/java-1.6.0-openjdk/bin/java
+ java -version
/tmp/setup-user.sh: line 247: java: command not found
+ exit 1
Thanks,
Sebastian
On 29.02.2012 17:05, Andrei Savu wrote:
Thanks for the Whirr update. I just started a cluster with an
Ubuntu image and it works great. Unfortunately, I ran into
some problems when trying to set up a cluster with the basic
Amazon images (AMI 2011/09, 32-bit and 64-bit). I get a
'java: command not found' error on the instances.
But is Hadoop starting as expected?
No, Hadoop is not starting as expected.
I think the problem is that the Java path is different from the
one on the Ubuntu images. On 64-bit, the correct path would be
/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64 instead of
/usr/lib/jvm/java-1.6.0-openjdk.
We've done testing only on Ubuntu 10.04 LTS, and I think you are
right. You can work around this limitation by adding something
like this to your properties file:
hadoop-env.JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64
Maybe I'm missing something; here is my property file:
Thanks for your help,
Sebastian
whirr.cluster-name = 1330
whirr.instance-templates = 1 hadoop-jobtracker+hadoop-namenode,1 hadoop-datanode+hadoop-tasktracker
This should be hadoop-namenode+hadoop-jobtracker, *not* the other
way around.
whirr.cluster-user = my-user
whirr.provider = aws-ec2
whirr.image-id = us-east-1/ami-31814f58
whirr.login-user = ec2-user
This option is not required.
whirr.hardware-id = t1.micro
I recommend using at least m1.small; t1.micro is less than
ideal in this case.
whirr.private-key-file = ...
whirr.public-key-file = ...
whirr.identity = ...
whirr.credential = ...
whirr.hadoop.install-function = install_cdh_hadoop
whirr.hadoop.configure-function = configure_cdh_hadoop
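Putting the inline suggestions together, the property file would become something like the sketch below (the instance-template order is swapped, whirr.login-user is dropped, the hardware is bumped to m1.small, and the JAVA_HOME workaround is appended; the elided values stay elided):

```properties
whirr.cluster-name = 1330
whirr.instance-templates = 1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
whirr.cluster-user = my-user
whirr.provider = aws-ec2
whirr.image-id = us-east-1/ami-31814f58
whirr.hardware-id = m1.small
whirr.private-key-file = ...
whirr.public-key-file = ...
whirr.identity = ...
whirr.credential = ...
whirr.hadoop.install-function = install_cdh_hadoop
whirr.hadoop.configure-function = configure_cdh_hadoop
hadoop-env.JAVA_HOME = /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64
```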
On 29.02.2012 09:44, Andrei Savu wrote:
The Apache Whirr team is pleased to announce the release
of Apache Whirr 0.7.1.
Whirr is a library and a command line tool that can be used to
run distributed services in the cloud. It simplifies the
deployment of distributed systems on cloud infrastructure,
allowing you to launch and tear down complex cloud cluster
environments with a single command.
Supported services currently include most of the components of
the Apache Hadoop stack, Apache Mahout, Chef, Puppet, Ganglia,
elasticsearch, Apache Cassandra, Voldemort and Hama. Services
can be deployed to Amazon EC2 and to Rackspace Cloud.
The release is available here:
http://www.apache.org/dyn/closer.cgi/whirr/
The full change log is available here:
https://issues.apache.org/jira/browse/WHIRR/fixforversion/12319942
We welcome your help and feedback. For more information on how
to report problems, and to get involved, visit the project
website at http://whirr.apache.org/
The Apache Whirr Team
--
Sebastian Schoenherr
PhD student in Bioinformatics
Institute of Computer Science
Division of Genetic Epidemiology
[email protected]
http://dbis-informatik.uibk.ac.at
http://www.i-med.ac.at/genepi/