@Marco - Yup. My search led me to all the threads you posted, and it
seems like I walked exactly your path :)

@Joris - The recipe is no different. It's the same, just a couple of extra params.

Here are the ones that differ, nothing unusual:

hadoop-mapreduce.mapred.child.java.opts=-Xms512M -Xmx512M
hadoop-hdfs.dfs.replication=2
hadoop-mapreduce.mapred.tasktracker.reduce.tasks.maximum=2
hadoop-mapreduce.mapred.tasktracker.map.tasks.maximum=2

hadoop-mapreduce.mapred.map.child.java.opts=-Xms512M -Xmx512M
hadoop-mapreduce.mapred.reduce.child.java.opts=-Xms512M -Xmx512M

hadoop-env.JAVA_HOME=/usr/java/jdk1.6.0_30
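For context on why those slot/heap numbers matter (this ties into the "Could not reserve enough space for object heap" error further down the thread): with 2 map + 2 reduce slots per tasktracker, the child JVMs alone can demand roughly slots x heap. A back-of-the-envelope sketch, using the numbers from the params above:

```shell
# Rough worker memory budget: peak child-JVM heap is about
# (map slots + reduce slots) * per-child heap. Numbers below are
# just the ones from the params above, not a recommendation.
MAP_SLOTS=2
REDUCE_SLOTS=2
CHILD_HEAP_MB=512
TOTAL_MB=$(( (MAP_SLOTS + REDUCE_SLOTS) * CHILD_HEAP_MB ))
echo "peak child heap: ${TOTAL_MB}M"
```

If that total (plus the datanode/tasktracker daemons themselves) doesn't fit in the instance's RAM, the child JVMs fail at startup exactly like in the error below.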

The problem with my AMI was that it already had a /data drive, and a small
one at that. So I just added a line to the configure_cdh_hadoop.sh script to
remove that drive and use /mnt instead (which is part of the code). It took
me a while to smooth things out, but it worked.
The 2nd thing was that we had a root user as well, so in some places I
needed to use sudo, like here:

if [ ! -e /data ]; then sudo ln -s /mnt /data; fi
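For anyone copying this: the `-e` guard is what makes the line safe to re-run. Here is a sketch of the same trick exercised in a temp dir instead of the filesystem root, so it runs without sudo (the temp paths are stand-ins for the real /mnt and /data):

```shell
# Idempotent-symlink sketch: link "data" -> "mnt" only if "data"
# doesn't exist yet. Done in a temp dir so no sudo is needed;
# on the real box the paths were /mnt and /data.
TMP=$(mktemp -d)
mkdir "$TMP/mnt"
if [ ! -e "$TMP/data" ]; then ln -s "$TMP/mnt" "$TMP/data"; fi
# Running it a second time is a no-op: the -e test sees the link.
if [ ! -e "$TMP/data" ]; then ln -s "$TMP/mnt" "$TMP/data"; fi
LINK=$(readlink "$TMP/data")
rm -rf "$TMP"
```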

@Andrei - Is it possible to make the scripts more configurable? For
example, in configure_cdh_hadoop.sh we have /data hardcoded; we could have a
base dir from which all of this gets installed. Something like changing

mkdir -p /data/hadoop

to

mkdir -p <BASE_DIR>/hadoop
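A minimal sketch of what I mean (BASE_DIR is a hypothetical variable name, not something the script has today; the default keeps the current hardcoded behaviour):

```shell
# Suggested parameterisation: caller can override BASE_DIR,
# otherwise fall back to the script's current hardcoded /data.
BASE_DIR=${BASE_DIR:-/data}
HADOOP_DIR="$BASE_DIR/hadoop"
# The real script would then run: mkdir -p "$HADOOP_DIR"
echo "$HADOOP_DIR"
```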

Hope it helps

thanks all !

ashish

On Sat, Mar 3, 2012 at 1:07 AM, Andrei Savu <[email protected]> wrote:
> Great! Do you think we need to incorporate any of those changes in the
> trunk?
>
> On Mar 2, 2012 8:27 PM, "Ashish" <[email protected]> wrote:
>>
>> Damm ran into https://issues.apache.org/jira/browse/WHIRR-490
>>
>> moved to 0.7.1 and its running.. fewww...
>>
>> but a good thing was, got to hack a bit into whirr... made some minor
>> changes to functions to make stuff work custom AMI :)
>>
>> On Fri, Mar 2, 2012 at 10:51 PM, Ashish <[email protected]> wrote:
>> > On Fri, Mar 2, 2012 at 3:30 PM, Ashish <[email protected]> wrote:
>> >>>
>> >>> No problem, I faced the exact same problem along with this one
>> >>> https://issues.apache.org/jira/browse/WHIRR-490 .
>> >>> PS: you should update to version 0.7.1
>> >>
>> >> yup its on cards.. will do it next week. till that time 0.7.0 works so
>> >> keeping things as is :)
>> >
>> > Ran into another issue. My Map tasks wont start, they all die with same
>> > error
>> >
>> > "Error occurred during initialization of VM
>> > Could not reserve enough space for object heap"
>> >
>> > Setting Heap size to 1500M and running on m1.xlarge. top shows enough
>> > free memory.
>> >
>> > Have set this property
>> >
>> > hadoop-env.JAVA_HOME=/usr/java/jdk1.6.0_30
>> >
>> > What else I am messing up?
>>
>>
>>
>> --
>> thanks
>> ashish
>>
>> Blog: http://www.ashishpaliwal.com/blog
>> My Photo Galleries: http://www.pbase.com/ashishpaliwal



-- 
thanks
ashish

Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
