Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The following page has been changed by stack:
http://wiki.apache.org/hadoop/Hbase/Troubleshooting

------------------------------------------------------------------------------
   1. [#7 Problem: DFS instability and/or regionserver lease timeouts]
  
  [[Anchor(1)]]
- == Problem: Master initializes, but Region Servers do not ==
+ == 1. Problem: Master initializes, but Region Servers do not ==
   * Master's log contains repeated instances of the following block:
    ~-INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 
/127.0.0.1:60020. Already tried 1 time(s).[[BR]]
    INFO org.apache.hadoop.ipc.Client: Retrying connect to server: 
/127.0.0.1:60020. Already tried 2 time(s).[[BR]]
@@ -41, +41 @@

  ::1           localhost6.localdomain6 localhost6
  }}}
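  Whatever the existing entries look like, the key point is that the machine's own hostname must resolve to a routable address rather than to a loopback line. A rough sketch, using a hypothetical hostname regionserver1 and address 192.168.1.2:
  {{{
  # 192.168.1.2 and regionserver1 are placeholders - use your host's real address and name
  127.0.0.1     localhost.localdomain localhost
  ::1           localhost6.localdomain6 localhost6
  192.168.1.2   regionserver1.example.com regionserver1
  }}}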
  [[Anchor(2)]] 
- == Problem: Created Root Directory for HBase through Hadoop DFS ==
+ == 2. Problem: Created Root Directory for HBase through Hadoop DFS ==
   * On startup, the Master says that you need to run the hbase migrations script. Upon running it, the migrations script reports that there are no files in the root directory.
  === Causes ===
   * HBase expects the root directory to either not exist or to have already been initialized by a previous run of HBase. If you create a new directory for HBase yourself using Hadoop DFS, this error will occur.
@@ -49, +49 @@

   * Make sure the HBase root directory either does not exist or was initialized by a previous run of HBase. A sure-fire solution is to use Hadoop DFS to delete the HBase root (see the sketch below) and let HBase create and initialize the directory itself.
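  For example, assuming the HBase root lives at /hbase (the actual path is whatever hbase.rootdir points to in your configuration), the delete might look like this:
  {{{
  # Stop HBase first, then remove the root directory so HBase can recreate
  # and initialize it on the next start. The /hbase path is an assumption.
  bin/hadoop dfs -rmr /hbase
  }}}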
  
  [[Anchor(3)]]
- == Problem: Replay of hlog required, forcing regionserver restart ==
+ == 3. Problem: Replay of hlog required, forcing regionserver restart ==
   * Under a heavy write load, some region servers will go down with the following exception:
  {{{
  WARN org.apache.hadoop.dfs.DFSClient: Exception while reading from 
blk_xxxxxxxxxxxxxxx of /hbase/some_repository from IP_address:50010: 
java.io.IOException: Premeture EOF from inputStream
@@ -69, +69 @@

  === Resolution ===
  
  [[Anchor(4)]]
- == Problem: On migration, no files in root directory ==
+ == 4. Problem: On migration, no files in root directory ==
   * On startup, the Master says that you need to run the hbase migrations script. Upon running it, the migrations script reports that there are no files in the root directory.
  === Causes ===
   * HBase expects the root directory to either not exist or to have already been initialized by a previous run of HBase. If you create a new directory for HBase yourself using Hadoop DFS, this error will occur.
@@ -86, +86 @@

   * Either reduce the load or set dfs.datanode.max.xcievers (hadoop-site.xml) to a value larger than the default (256); an example entry is sketched below. Note that in order to change this tunable, you need Hadoop 0.17.2 or 0.18.0 (HADOOP-3859).
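  A sketch of the hadoop-site.xml entry; the value 2047 is only an illustrative choice, and the right limit is site-specific:
  {{{
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <!-- 2047 is only an example value; size it to your datanode load -->
    <value>2047</value>
  </property>
  }}}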
  
  [[Anchor(5)]]
- == Problem: "xceiverCount 258 exceeds the limit of concurrent xcievers 256" ==
+ == 5. Problem: "xceiverCount 258 exceeds the limit of concurrent xcievers 256" ==
   * See an exception with the above message in the logs, usually the datanode logs.
  === Causes ===
   * An upper bound on connections was added in Hadoop 
(HADOOP-3633/HADOOP-3859).
@@ -95, +95 @@

  
  
  [[Anchor(6)]]
- == Problem: "No live nodes contain current block" ==
+ == 6. Problem: "No live nodes contain current block" ==
   * See an exception with the above message in the logs (usually Hadoop 0.18.x).
  === Causes ===
   * Slow datanodes are marked as down by DFSClient; eventually all replicas 
are marked as 'bad' (HADOOP-3831).
@@ -105, +105 @@

  
  
  [[Anchor(7)]]
- == Problem: DFS instability and/or regionserver lease timeouts ==
+ == 7. Problem: DFS instability and/or regionserver lease timeouts ==
   * HBase regionserver leases expire during startup
   * HBase daemons cannot find block locations in HDFS during startup or other periods of load
  === Causes ===
