That problem is resolved now.
But a new issue has arisen. When I start the DFS, I see this line in the
namenode log - "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled".
What does it mean?
After that, nothing happens. JPS shows that the datanode is running, but
the web interface for dfshealth is not reachable.
Any idea?
Please help.


On Thu, May 9, 2013 at 3:37 PM, Mohammad Mustaqeem
<[email protected]>wrote:

> That problem is being resolved.
> But a new issue rises. When I start the dfs, I found a line namenode log -
> "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled" What does it
> mean.
> After it nothing happens. JPS shows that datanode is running but the web
> interface for dfshealth is not running.
> Any Idea?
> Please help.
>
>
> On Thu, May 9, 2013 at 2:22 AM, Serge Blazhievsky <[email protected]>wrote:
>
>> That's the one I use too. I think it's on the Apache web site.
>>
>> Sent from my iPhone
>>
>> On May 8, 2013, at 1:49 PM, Chris Embree <[email protected]> wrote:
>>
>> Here is a sample I stole from the web and modified slightly... I think.
>>
>> #!/bin/bash
>> # Note: this needs bash; the array syntax below is rejected by plain sh/dash.
>> HADOOP_CONF=/etc/hadoop/conf
>>
>> while [ $# -gt 0 ] ; do
>>   nodeArg=$1
>>   exec < ${HADOOP_CONF}/rack_info.txt
>>   result=""
>>   while read line ; do
>>     ar=( $line )
>>     if [ "${ar[0]}" = "$nodeArg" ] ; then
>>       result="${ar[1]}"
>>     fi
>>   done
>>   shift
>>   if [ -z "$result" ] ; then
>>     echo -n "/default/rack "
>>   else
>>     echo -n "$result "
>>   fi
>> done
>>
>>
>> The rack_info.txt file contains both the hostname AND the IP address of
>> each node:
>> 10.10.10.10  /dc1/rack1
>> 10.10.10.11  /dc1/rack2
>> datanode1  /dc1/rack1
>> datanode2  /dc1/rack2
>> ... etc.
>>
>>
>> On Wed, May 8, 2013 at 1:38 PM, Adam Faris <[email protected]> wrote:
>>
>>> Look between the <code> blocks starting at line 1336.
>>> http://lnkd.in/rJsqpV   Some day it will get included in the
>>> documentation with a future Hadoop release. :)
>>>
>>> -- Adam
>>>
>>> On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <[email protected]>
>>>  wrote:
>>>
>>> > If anybody have sample (topology.script.file.name) script then please
>>> share it.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <
>>> [email protected]> wrote:
>>> > @chris, I have tested it outside. It works fine.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <
>>> [email protected]> wrote:
>>> > Error in script.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:11 AM, Chris Embree <[email protected]>
>>> wrote:
>>> > Your script has an error in it.  Please test your script using both IP
>>> Addresses and Names, outside of hadoop.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
>>> [email protected]> wrote:
>>> > I have done this and found the following error in the log -
>>> >
>>> >
>>> > 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping:
>>> Exception running
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8:
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax
>>> error: "(" unexpected (expecting "done")
>>> >
>>> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>>> >       at org.apache.hadoop.util.Shell.run(Shell.java:129)
>>> >       at
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>>> >       at
>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>>> >       at
>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>>> >       at
>>> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>> >       at
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>>> >       at
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>>> >       at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>>> >       at
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>>> >       at
>>> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>>> >       at
>>> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>>> >       at
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>>> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>>> >       at java.security.AccessController.doPrivileged(Native Method)
>>> >       at javax.security.auth.Subject.doAs(Subject.java:415)
>>> >       at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>>> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>>> > 2013-05-08 18:53:45,223 ERROR
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve
>>> call returned null! Using /default-rack for host [127.0.0.1]
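For what it's worth, that `Syntax error: "(" unexpected` message is characteristic of the script being interpreted by a plain POSIX shell such as dash (the default /bin/sh on many Debian/Ubuntu systems, e.g. via a `#!/bin/sh` shebang), which does not understand bash array assignments like `ar=( $line )`. A small reproduction sketch:

```shell
#!/bin/bash
# dash rejects bash array assignments with exactly this kind of error:
#   $ dash -c 'ar=( a b )'
#   dash: 1: Syntax error: "(" unexpected
# Declaring bash in the shebang avoids it.
ar=( alpha beta )         # valid in bash, a syntax error in dash
echo "${ar[1]}"           # prints "beta"
```

If rack.sh starts with `#!/bin/sh` (or has no shebang at all), changing its first line to `#!/bin/bash` and making the file executable (`chmod +x rack.sh`) should let the namenode's resolve call succeed.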
>>> >
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <
>>> [email protected]> wrote:
>>> > You can put this parameter in core-site.xml or hdfs-site.xml.
>>> > Both are parsed during HDFS startup.
>>> >
>>> > Leonid
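As a concrete sketch of that answer: on Hadoop 2.x the key is usually written as `net.topology.script.file.name` in core-site.xml (the older `topology.script.file.name` is, as far as I know, kept as a deprecated alias). The path below is the one used elsewhere in this thread:

```xml
<property>
  <name>net.topology.script.file.name</name>
  <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
</property>
```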
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>>> [email protected]> wrote:
>>> > Hello everyone,
>>> >     I was searching for how to make the Hadoop cluster rack-aware, and
>>> > I found out from
>>> > http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness
>>> > that we can do this by setting the "topology.script.file.name" property.
>>> > But it is not written there where to put this:
>>> > <property>
>>> >               <name>topology.script.file.name</name>
>>> >
>>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>>> > </property>
>>> >
>>> > I mean, in which configuration file?
>>> > I am using hadoop-2.0.3-alpha.
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>>
>>>
>>
>
>
>
>
>


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270
