ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(1
-----------------------------------------------------------------------------

                 Key: HDFS-2076
                 URL: https://issues.apache.org/jira/browse/HDFS-2076
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: data-node
    Affects Versions: 0.20.2
         Environment: hadoop-hdfs
            Reporter: chakali ranga swamy


The DataNode log shows a socket/DataStreamer problem, and I am unable to upload a 
text file to DFS. I deleted the tmp folders (dfs and mapred), reformatted with 
"hadoop namenode -format", and ran start-all.sh (the sequence is sketched below). 
After that:
dfs folder contains: datanode, namenode, secondarynamenode
mapred folder: empty
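
For reference, a sketch of that reset sequence (paths follow the hadoop.tmp.dir 
value from core-site.xml below; adjust for other setups):
-----------------------------------------
stop-all.sh
# wipe the old HDFS/MapReduce state under hadoop.tmp.dir
rm -rf /tmp/Testinghadoop/dfs /tmp/Testinghadoop/mapred
hadoop namenode -format
start-all.sh
jps    # JDK tool; should list NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker
-----------------------------------------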
Disk space:
-----------------
linux-8ysi:/etc/hadoop/hadoop-0.20.2 # df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 25G 16G 7.4G 69% /
udev 987M 212K 986M 1% /dev
/dev/sda7 42G 5.5G 34G 14% /home
-------------------------------------------
NameNode web UI, http://localhost:50070/dfshealth.jsp:
------------------

NameNode 'localhost:54310'
Started: Wed Jun 15 04:13:14 IST 2011
Version: 0.20.2, r911707
Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
Upgrades: There are no upgrades in progress.

Cluster Summary
10 files and directories, 0 blocks = 10 total. Heap Size is 15.5 MB / 966.69 MB 
(1%)
Configured Capacity : 24.61 GB
DFS Used : 24 KB
Non DFS Used : 17.23 GB
DFS Remaining : 7.38 GB
DFS Used% : 0 %
DFS Remaining% : 29.99 %
Live Nodes : 1
Dead Nodes : 0

NameNode Storage:
Storage Directory Type State
/tmp/Testinghadoop/dfs/name IMAGE_AND_EDITS Active

----------------------------------------
core-site.xml
---------------------------------
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/Testinghadoop/</value>
<description>A base for other temporary directories.</description>
</property>

<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>

</configuration>
------------------------------------------------
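
With hadoop.tmp.dir set as above, all HDFS state lives under /tmp/Testinghadoop: 
the name dir shown in the NameNode Storage table above is derived from it, as is 
the DataNode block store (assuming the 0.20 defaults 
dfs.name.dir=${hadoop.tmp.dir}/dfs/name and dfs.data.dir=${hadoop.tmp.dir}/dfs/data). 
A quick consistency check:
-----------------------------------------
ls /tmp/Testinghadoop/dfs
# expect: data  name  namesecondary
grep namespaceID /tmp/Testinghadoop/dfs/name/current/VERSION \
                 /tmp/Testinghadoop/dfs/data/current/VERSION
# the two namespaceIDs must match; they can diverge after "hadoop namenode -format"
-----------------------------------------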
hdfs-site.xml
----------------------------------
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
<name>dfs.permissions</name>
<value>true</value>
<description>
If "true", enable permission checking in HDFS.
If "false", permission checking is turned off,
but all other behavior is unchanged.
Switching from one parameter value to the other does not change the mode,
owner, or group of files or directories.
</description>
</property>

<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>

</configuration>
---------------------------------------
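With dfs.replication=1, a single live DataNode should be enough for a put; the 
factor can also be set per command via the generic options (assuming the 0.20 
shell accepts -D here):
-----------------------------------------
hadoop dfs -D dfs.replication=1 -put spo.txt In
-----------------------------------------
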
mapred-site.xml
----------------------------------
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
</configuration>
----------------------------------------------------------------------------------
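
To confirm that the NameNode and JobTracker ports from these configs are actually 
listening, something like:
-----------------------------------------
netstat -tln | grep -E '54310|54311'
# 54310 = NameNode RPC (fs.default.name), 54311 = JobTracker (mapred.job.tracker)
-----------------------------------------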

Please give suggestions about this error:
------------------------------------------------------------------------------------------------------------------
linux-8ysi:/etc/hadoop/hadoop-0.20.2/conf # hadoop fsck /
RUN_JAVA
/usr/java/jre1.6.0_25/bin/java
.Status: HEALTHY
Total size: 0 B
Total dirs: 7
Total files: 1 (Files currently being written: 1)
Total blocks (validated): 0
Minimally replicated blocks: 0
Over-replicated blocks: 0
Under-replicated blocks: 0
Mis-replicated blocks: 0
Default replication factor: 1
Average block replication: 0.0
Corrupt blocks: 0
Missing replicas: 0
Number of data-nodes: 1
Number of racks: 1


The filesystem under path '/' is HEALTHY

linux-8ysi:/etc/hadoop/hadoop-0.20.2/conf # hadoop dfsadmin -report
RUN_JAVA
/usr/java/jre1.6.0_25/bin/java
Configured Capacity: 26425618432 (24.61 GB)
Present Capacity: 7923564544 (7.38 GB)
DFS Remaining: 7923539968 (7.38 GB)
DFS Used: 24576 (24 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)

Name: 127.0.0.1:50010
Decommission Status : Normal
Configured Capacity: 26425618432 (24.61 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 18502053888 (17.23 GB)
DFS Remaining: 7923539968(7.38 GB)
DFS Used%: 0%
DFS Remaining%: 29.98%
Last contact: Wed Jun 15 05:54:00 IST 2011

I got this error:
----------------------------

linux-8ysi:/etc/hadoop/hadoop-0.20.2 # hadoop dfs -put spo.txt In
RUN_JAVA
/usr/java/jre1.6.0_25/bin/java
11/06/15 04:50:18 WARN hdfs.DFSClient: DataStreamer Exception: 
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File 
/user/root/In/spo.txt could only be replicated to 0 nodes, instead of 1
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

at org.apache.hadoop.ipc.Client.call(Client.java:740)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy0.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy0.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
at 
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

11/06/15 04:50:18 WARN hdfs.DFSClient: Error Recovery for block null bad 
datanode[0] nodes == null
11/06/15 04:50:18 WARN hdfs.DFSClient: Could not get block locations. Source 
file "/user/root/In/spo.txt" - Aborting...
put: java.io.IOException: File /user/root/In/spo.txt could only be replicated 
to 0 nodes, instead of 1
11/06/15 04:50:18 ERROR hdfs.DFSClient: Exception closing file 
/user/root/In/spo.txt : org.apache.hadoop.ipc.RemoteException: 
java.io.IOException: File /user/root/In/spo.txt could only be replicated to 0 
nodes, instead of 1
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File 
/user/root/In/spo.txt could only be replicated to 0 nodes, instead of 1
(stack trace identical to the DataStreamer Exception above)
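
The "could only be replicated to 0 nodes" message means the NameNode could not 
pick any DataNode for the block, even though the report above lists one live node. 
Checks on the DataNode side (a sketch; log names assume the 0.20 defaults under 
the Hadoop logs directory):
-----------------------------------------
jps | grep DataNode                        # is the DataNode process still up?
tail -n 50 logs/hadoop-*-datanode-*.log    # the socket error mentioned above
netstat -tln | grep 50010                  # DataNode data-transfer port
-----------------------------------------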

Regards,
Ranga Swamy
8904524975 
