Andrew,

Here's more information about our setup:
- daemons do run as "hadoop" user (both HBase and Hadoop daemons)
- open file limit has been increased to 32768
- restarting HBase and HDFS does not solve the problem
- configuration parameters have *not* been changed per items 5 and 6. I
checked the datanode logs and did see xceiver exceptions...
   java.io.IOException: xceiverCount 257 exceeds the limit of concurrent
xcievers 256

  Hm. Looks like I should fiddle with those parameters.
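
Something along these lines in conf/hadoop-site.xml is what I have in mind --
the property name is taken from the exception above, but the 2048 value is
just a guess on my part:

   <property>
     <name>dfs.datanode.max.xcievers</name>
     <value>2048</value>
   </property>

(I assume the datanodes need a restart to pick that up.)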

Larry

On Thu, Jan 29, 2009 at 2:02 PM, Andrew Purtell <[email protected]> wrote:

> Hi Larry,
>
> If you shut down HBase *and* HDFS, and then restart them both, does
> that clear the problem?
>
> Are you running the Hadoop daemons (including DFS) under a user account
> such as "hadoop" or similar? Have you increased the open files limit
> (nofile in /etc/security/limits.conf on RedHat style systems) for that user
> from the default of 1024 to something substantially larger (I use 32768)?
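>
> For example (hypothetical values, and the exact form can vary by distro),
> the limits.conf entries for a "hadoop" user would look something like:
>
>    hadoop  soft  nofile  32768
>    hadoop  hard  nofile  32768
>
> You can verify it took effect by running "ulimit -n" in a shell as that
> user.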
>
> Have you adjusted the HDFS configuration as suggested at
> http://wiki.apache.org/hadoop/Hbase/Troubleshooting , items 5 and 6?
>
>   - Andy
>
> > From: Larry Compton
> > Subject: java.io.IOException: Could not obtain block
> > 2009-01-29 13:07:50,439 WARN
> > org.apache.hadoop.hdfs.DFSClient: DFS Read:
> > java.io.IOException: Could not obtain block:
> > blk_2439003473799601954_58348
> > file=/hbase/-ROOT-/70236052/info/mapfiles/2587717070724571438/data
> [...]
> >
> > Hadoop 0.19.0
> > HBase 0.19.0
> [...]
>
