Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The following page has been changed by stack:
http://wiki.apache.org/hadoop/Hbase/Troubleshooting

The comment on the change is:
Removed dup.

------------------------------------------------------------------------------
   1. [#1 Problem: Master initializes, but Region Servers do not]
   1. [#2 Problem: Created Root Directory for HBase through Hadoop DFS]
   1. [#3 Problem: Replay of hlog required, forcing regionserver restart]
-  2. [#4 Problem: Master initializes, but Region Servers do not]
-  1. [#5 Problem: On migration, no files in root directory]
+  1. [#4 Problem: On migration, no files in root directory]
-  1. [#6 Problem: "xceiverCount 258 exceeds the limit of concurrent xcievers 
256"]
+  1. [#5 Problem: "xceiverCount 258 exceeds the limit of concurrent xcievers 
256"]
-  1. [#7 Problem: "No live nodes contain current block"]
+  1. [#6 Problem: "No live nodes contain current block"]
  
  [[Anchor(1)]]
  == Problem: Master initializes, but Region Servers do not ==
@@ -67, +66 @@

  === Causes ===
  * RPC timeouts may happen because of IO contention, which blocks processes while the machine is swapping.
  === Resolution ===
-  * Either reduce the load or add more memory/machines.
+  * Either reduce the load or add more memory/machines.
  
  [[Anchor(4)]]
- == Problem: Master initializes, but Region Servers do not ==
-  * Master's log contains repeated instances of the following block:
-   ~-INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 
/127.0.0.1:60020. Already tried 1 time(s).[[BR]]
-   INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 
/127.0.0.1:60020. Already tried 2 time(s).[[BR]]
-   ...[[BR]]
-   INFO org.apache.hadoop.ipc.RPC: Server at /127.0.0.1:60020 not available 
yet, Zzzzz...-~
-  * Region Servers' logs contains repeated instances of the following block:
-   ~-INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 
masternode/192.168.100.50:60000. Already tried 1 time(s).[[BR]]
-   INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 
masternode/192.168.100.50:60000. Already tried 2 time(s).[[BR]]
-   ...[[BR]]
-   INFO org.apache.hadoop.ipc.RPC: Server at masternode/192.168.100.50:60000 
not available yet, Zzzzz...-~
-  * Note that the Master believes the Region Servers have the IP of 127.0.0.1 
- which is localhost and resolves to the master's own localhost.
- === Causes ===
-  * The Region Servers are erroneously informing the Master that their IP 
addresses are 127.0.0.1.
- === Resolution ===
-  * Modify '''/etc/hosts''' on the region servers, from
-   {{{
- # Do not remove the following line, or various programs
- # that require network functionality will fail.
- 127.0.0.1             fully.qualified.regionservername regionservername  
localhost.localdomain localhost
- ::1           localhost6.localdomain6 localhost6
- }}}
- 
-  * To (removing the master node's name from localhost)
-   {{{
- # Do not remove the following line, or various programs
- # that require network functionality will fail.
- 127.0.0.1             localhost.localdomain localhost
- ::1           localhost6.localdomain6 localhost6
- }}}
- 
- [[Anchor(5)]]
  == Problem: On migration, no files in root directory ==
  * On startup, the Master says that you need to run the HBase migrations script. Upon running it, the migrations script reports that there are no files in the root directory.
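  The migrations script is typically invoked along these lines (the exact subcommand and options vary between HBase versions, so check the usage output of '''${HBASE_HOME}/bin/hbase''' first; this is only an illustrative sketch):
   {{{
 ${HBASE_HOME}/bin/hbase migrate
 }}}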
  === Causes ===
@@ -118, +85 @@

  === Resolution ===
  * Either reduce the load or set '''dfs.datanode.max.xcievers''' (in hadoop-site.xml) to a value larger than the default (256). Note that in order to change this tunable, you need Hadoop 0.17.2 or 0.18.0 (HADOOP-3859).
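  For example, the limit can be raised by adding a property such as the following to '''hadoop-site.xml''' on each datanode and restarting the datanodes (the value 2048 is only an illustrative choice; pick one comfortably above your concurrent load):
   {{{
 <property>
   <name>dfs.datanode.max.xcievers</name>
   <value>2048</value>
 </property>
 }}}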
  
- [[Anchor(6)]]
+ [[Anchor(5)]]
  == Problem: "xceiverCount 258 exceeds the limit of concurrent xcievers 256" ==
  * See an exception with the above message in the logs (usually Hadoop 0.18.x).
  === Causes ===
@@ -127, +94 @@

   * Up the maximum by setting '''dfs.datanode.max.xcievers''' (sic).  See 
[http://mail-archives.apache.org/mod_mbox/hadoop-hbase-user/200810.mbox/%[email protected]%3e
 message from jean-adrien] for some background.
  
  
- [[Anchor(7)]]
+ [[Anchor(6)]]
  == Problem: "No live nodes contain current block" ==
  * See an exception with the above message in the logs (usually Hadoop 0.18.x).
  === Causes ===
