Looks like you're using hadoop-1.1.1

Have you looked at the DataNode log?

It would be helpful if you could pastebin the portion of the DataNode log
from when it shut down.
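Something like the sketch below should grab the relevant tail of the log for pasting. The log directory and file pattern are assumptions based on a default hadoop-1.1.1 layout under your install path; adjust them to your setup.

```shell
# grab_dn_log LOG_DIR OUT_FILE:
# copy the last 300 lines of the newest DataNode log in LOG_DIR to OUT_FILE.
grab_dn_log() {
  # Pick the most recently modified datanode log (name pattern is an assumption).
  latest=$(ls -t "$1"/hadoop-*-datanode-*.log 2>/dev/null | head -n 1)
  [ -n "$latest" ] && tail -n 300 "$latest" > "$2"
}

# Example usage (path guessed from the configs below; change as needed):
# grab_dn_log /home/e521596/hadoop-1.1.1/logs /tmp/datanode-shutdown.txt
```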

Cheers


On Tue, May 20, 2014 at 7:10 AM, AnushaGuntaka <[email protected]> wrote:

> Hi,
>
> Thanks in advance. Please help me figure out the cause of the following
> error and fix it.
>
> I am facing the error below while scanning an HBase table with a partial
> RowKey filter through a MapReduce program.
>
> Error: org.apache.hadoop.hdfs.server.datanode.DataNode:
> DatanodeRegistration():DataXceiver java.io.InterruptedIOException:
> Interruped while waiting for IO on channel
> java.nio.channels.SocketChannel[closed]
>
> The DataNode on the slave node is shutting down on this error.
>
> My MapReduce program runs map tasks up to 95% and then fails with this
> error.
>
> I have a Hadoop cluster with two machines.
>
> Table size: 652 GB (223 GB on the master node and 514 GB on the slave node)
>
> System disk details:
>
> Node            space available
> ---------------------------------
> master   ----   22 GB
> slave    ----   210 GB
>
> ------------------------------- core-site.xml -----------------------
>
> <configuration>
>   <property>
>     <name>fs.tmp.dir</name>
>     <value>/home/e521596/hadoop-1.1.1/full</value>
>   </property>
>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://172.20.193.234:9000</value>
>   </property>
>
>   <property>
>     <name>io.sort.factor</name>
>     <value>15</value>
>     <description>More streams merged at once while sorting
> files.</description>
>   </property>
>
>   <property>
>     <name>io.sort.mb</name>
>     <value>1000</value>
>     <description>Higher memory-limit while sorting data.</description>
>   </property>
>
>   <property>
>     <name>io.sort.record.percent</name>
>     <value>0.207</value>
>     <description>Higher memory-limit while sorting data.</description>
>   </property>
>
>   <property>
>     <name>io.sort.spill.percent</name>
>     <value>1</value>
>     <description>Higher memory-limit while sorting data.</description>
>   </property>
>
> </configuration>
> ------------------------------- mapred-site.xml -----------------------
>
> <configuration>
>   <property>
>         <name>mapred.job.tracker</name>
>         <value>fedora3:9001</value>
>   </property>
>   <property>
>        <name>mapred.reduce.tasks</name>
>        <value>6</value>
>   </property>
>   <property>
>         <name>mapred.tasktracker.map.tasks.maximum</name>
>         <value>6</value>
>   </property>
>   <property>
>         <name>mapred.tasktracker.reduce.tasks.maximum</name>
>         <value>6</value>
>   </property>
>   <property>
>        <name>mapred.textoutputformat.separator</name>
>        <value>#</value>
>   </property>
>
>   <property>
>         <name>mapred.compress.map.output</name>
>         <value>true</value>
>   </property>
>
>   <property>
>         <name>mapred.child.java.opts</name>
>         <value>-Xms1024M -Xmx2048M</value>
>   </property>
>
>
> </configuration>
> ---------------------------------------- hdfs-site.xml --------------------
>
> <configuration>
>   <property>
>         <name>dfs.name.dir</name>
>         <value>/home/e521596/hadoop-1.1.1/full/dfs/name</value>
>   </property>
>   <property>
>        <name>dfs.data.dir</name>
>        <value>/home/e521596/hadoop-1.1.1/full/dfs/data</value>
>   </property>
>   <property>
>      <name>dfs.replication</name>
>        <value>1</value>
>   </property>
>   <property>
>      <name>dfs.datanode.max.xcievers</name>
>      <value>5096</value>
>   </property>
>
>   <property>
>      <name>dfs.datanode.handler.count</name>
>      <value>200</value>
>   </property>
>
>   <property>
>      <name>dfs.datanode.socket.write.timeout</name>
>      <value>0</value>
>   </property>
>
>
> </configuration>
> ---------------------------------------------------------------------
>
>
> --
> View this message in context:
> http://apache-hbase.679495.n3.nabble.com/DataXceiver-java-io-InterruptedIOException-error-on-scannning-Hbase-table-tp4059419.html
> Sent from the HBase User mailing list archive at Nabble.com.
>
