Hi,

The error you mentioned below, 'Name or service not known', means the hostname
cannot be resolved to an IP address, so the servers cannot reach each other.
Check the network configuration, in particular /etc/hosts and DNS.
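
A quick way to confirm that this is a resolution problem (a sketch; replace
'master' with the hostname actually used in your config):

$getent hosts master
$ping -c 1 master

If getent prints nothing, the name is missing from /etc/hosts (and DNS), which
is exactly what produces 'Name or service not known'.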

Sidharth
Mob: +91 8197555599
LinkedIn: www.linkedin.com/in/sidharthkumar2792

On 17-May-2017 12:13 PM, "Bhushan Pathak" <bhushan.patha...@gmail.com>
wrote:

Apologies for the delayed reply, was away due to some personal issues.

I tried the telnet command as well, but no luck. I get the response 'Name or
service not known'.

Thanks
Bhushan Pathak

On Wed, May 3, 2017 at 7:48 AM, Sidharth Kumar <sidharthkumar2...@gmail.com>
wrote:

> Can you check whether the port is open by running telnet? Run the command
> below from the source machine against the destination machine and check if
> it helps:
>
> $telnet <IP address> <port number>
> Ex: $telnet 192.168.1.60 9000
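>
> If telnet is not installed, netcat can run the same check (a sketch; option
> support varies a bit between netcat variants):
>
> $nc -v <IP address> <port number>
> Ex: $nc -v 192.168.1.60 9000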
>
>
> Let's Hadooping....!
>
> Bests
> Sidharth
> Mob: +91 8197555599
> LinkedIn: www.linkedin.com/in/sidharthkumar2792
>
> On 28-Apr-2017 10:32 AM, "Bhushan Pathak" <bhushan.patha...@gmail.com>
> wrote:
>
>> Hello All,
>>
>> 1. The slave & master can ping each other as well as use passwordless SSH
>> 2. The actual IPs start with 10.x.x.x; I have put masked values in the
>> shared config files as I cannot share the actual IPs
>> 3. The namenode is formatted. I executed 'hdfs namenode -format' again
>> just to rule out that possibility
>> 4. I did not configure anything in the master file. I don't think Hadoop
>> 2.7.3 has a master file to be configured
>> 5. The netstat command [sudo netstat -tulpn | grep '51150' ] does not
>> give any output.
>>
>> Even if I change the port number to a different one, say 52220 or 50000, I
>> still get the same error.
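>>
>> One comparison that seems relevant here is what 'master' resolves to versus
>> which addresses are actually bound locally (a sketch; hostname masked as
>> above):
>>
>> $hostname -i
>> $getent hosts master
>> $ip addr | grep 'inet '
>>
>> If the resolved address does not appear in the interface list, the namenode
>> cannot bind to it.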
>>
>> Thanks
>> Bhushan Pathak
>>
>> On Fri, Apr 28, 2017 at 7:52 AM, Lei Cao <charlie.c...@hotmail.com>
>> wrote:
>>
>>> Hi Mr. Bhushan,
>>>
>>> Have you tried to format namenode?
>>> Here's the command:
>>> hdfs namenode -format
>>>
>>> I've run into this problem of the namenode not starting before; this
>>> command fixed it for me.
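>>>
>>> (One caution, sketched as a typical sequence: on a cluster that already
>>> holds data, 'hdfs namenode -format' wipes the namenode metadata, so stop
>>> HDFS first and only format a fresh or disposable namespace:)
>>>
>>> $stop-dfs.sh
>>> $hdfs namenode -format
>>> $start-dfs.sh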
>>>
>>> Hope this can help you.
>>>
>>> Sincerely,
>>> Lei Cao
>>>
>>>
>>> On Apr 27, 2017, at 12:09, Brahma Reddy Battula <
>>> brahmareddy.batt...@huawei.com> wrote:
>>>
>>> Please check "hostname -i".
>>>
>>> 1) What is configured in the "master" file? (You shared only the slaves
>>> file.)
>>>
>>> 2) Are you able to "ping master"?
>>>
>>> 3) Can you configure it like this in /etc/hosts and check once?
>>>
>>>                 1.1.1.1 master
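>>>
>>> (For a 3-node cluster, the /etc/hosts on every node would look something
>>> like the below; 1.1.1.x and the slave names are placeholders, use the real
>>> addresses and hostnames:)
>>>
>>> 1.1.1.1   master
>>> 1.1.1.2   slave1
>>> 1.1.1.3   slave2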
>>>
>>> Regards
>>>
>>> Brahma Reddy Battula
>>>
>>>
>>>
>>> From: Bhushan Pathak [mailto:bhushan.patha...@gmail.com]
>>> Sent: 27 April 2017 18:16
>>> To: Brahma Reddy Battula
>>> Cc: user@hadoop.apache.org
>>> Subject: Re: Hadoop 2.7.3 cluster namenode not starting
>>>
>>>
>>>
>>> Some additional info -
>>>
>>> OS: CentOS 7
>>>
>>> RAM: 8GB
>>>
>>>
>>>
>>> Thanks
>>>
>>> Bhushan Pathak
>>>
>>> On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak <
>>> bhushan.patha...@gmail.com> wrote:
>>>
>>> Yes, I'm running the command on the master node.
>>>
>>>
>>>
>>> Attached are the config files & the hosts file. I have changed only the
>>> IP addresses, as per company policy, so that the original IP addresses are
>>> not shared.
>>>
>>>
>>>
>>> The same config files & hosts file exist on all 3 nodes.
>>>
>>>
>>>
>>> Thanks
>>>
>>> Bhushan Pathak
>>>
>>> On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <
>>> brahmareddy.batt...@huawei.com> wrote:
>>>
>>> Are you sure that you are starting it on the same machine (master)?
>>>
>>>
>>>
>>> Please share "/etc/hosts" and the configuration files.
>>>
>>> Regards
>>>
>>> Brahma Reddy Battula
>>>
>>>
>>>
>>> From: Bhushan Pathak [mailto:bhushan.patha...@gmail.com]
>>> Sent: 27 April 2017 17:18
>>> To: user@hadoop.apache.org
>>> Subject: Fwd: Hadoop 2.7.3 cluster namenode not starting
>>>
>>>
>>>
>>> Hello
>>>
>>>
>>>
>>> I have a 3-node cluster where I have installed Hadoop 2.7.3. I have
>>> updated the core-site.xml, mapred-site.xml, slaves, hdfs-site.xml,
>>> yarn-site.xml, and hadoop-env.sh files with basic settings on all 3 nodes.
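>>>
>>> For illustration, the relevant core-site.xml entry has this form (hostname
>>> and port as they appear in the error below):
>>>
>>> <property>
>>>   <name>fs.defaultFS</name>
>>>   <value>hdfs://master:51150</value>
>>> </property>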
>>>
>>>
>>>
>>> When I execute start-dfs.sh on the master node, the namenode does not
>>> start. The logs contain the following error -
>>>
>>> 2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
>>> java.net.BindException: Problem binding to [master:51150] java.net.BindException: Cannot assign requested address; For more details see:  http://wiki.apache.org/hadoop/BindException
>>>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>>>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>>>         at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
>>>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
>>>         at org.apache.hadoop.ipc.Server.bind(Server.java:425)
>>>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
>>>         at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
>>>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
>>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
>>>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
>>>         at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
>>> Caused by: java.net.BindException: Cannot assign requested address
>>>         at sun.nio.ch.Net.bind0(Native Method)
>>>         at sun.nio.ch.Net.bind(Net.java:433)
>>>         at sun.nio.ch.Net.bind(Net.java:425)
>>>         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>>>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>>>         at org.apache.hadoop.ipc.Server.bind(Server.java:408)
>>>         ... 13 more
>>> 2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
>>> 2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
>>> ************************************************************/
>>>
>>>
>>> I have changed the port number multiple times; every time I get the same
>>> error. How do I get past this?
>>>
>>> Thanks
>>>
>>> Bhushan Pathak
>>>
>>
