HBase cannot start the RegionServer normally with a large amount of data.

2014-11-07 Thread hanked...@sina.cn
I've deployed a 2+4 cluster that has been running normally for a long time. The cluster holds more than 40 TB of data. When I deliberately shut down the HBase service and try to restart it, the RegionServers die. The RegionServer log shows that all the regions are opened. But in the

Re: Re: HBase cannot start the RegionServer normally with a large amount of data.

2014-11-07 Thread hanked...@sina.cn
Hi, my Hadoop runs fine when the HBase service is not started, and my network is normal; I checked. Now, when I restart the HBase service, HDFS read timeouts occur. I need your help, thanks! hanked...@sina.cn From: Jean-Marc Spaggiari Date: 2014-11-07 20:57 To: user Subject:

Re: HBase cannot start the RegionServer normally with a large amount of data.

2014-11-07 Thread hanked...@sina.cn
Hi, we are using HBase 0.96 and Hadoop 2.3. There is no exception information in the master log; the RegionServer WARN logs show: 2014-11-07 15:13:19,512 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O error constructing remote block reader. java.net.BindException: Cannot assign requested address at
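For context, a java.net.BindException with "Cannot assign requested address" while constructing a remote block reader usually means the HDFS client inside the RegionServer could not bind a local socket, commonly because ephemeral ports are exhausted or a configured local address is not actually assigned to any interface. A quick check on the affected RegionServer host (assuming a typical Linux box) might be:

  cat /proc/sys/net/ipv4/ip_local_port_range    # available ephemeral port range
  netstat -ant | grep -c TIME_WAIT              # a very large count suggests port exhaustion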

Re: HBase cannot start the RegionServer normally with a large amount of data.

2014-11-07 Thread Jean-Marc Spaggiari
What are your host names, and what is in your /etc/hosts file? Can you dig, dig -x, and ping all your hosts, including the master? Is the value returned by hostname mapped correctly to the IP? JM 2014-11-07 9:37 GMT-05:00 hanked...@sina.cn hanked...@sina.cn: Hi, we are using HBase 0.96 and Hadoop 2.3
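For reference, a minimal version of the check Jean-Marc suggests, run on every node (the host name and IP below are placeholders):

  hostname                      # should print the node's expected name
  hostname -i                   # should resolve to the node's real IP, not 127.0.0.1
  dig master.example.com        # forward lookup of the master
  dig -x 192.168.1.10           # reverse lookup of its IP
  ping -c 3 master.example.com  # basic reachability
  cat /etc/hosts                # entries should be consistent across all nodes, including the master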

Re: Re: HBase cannot start the RegionServer normally with a large amount of data.

2014-11-07 Thread hanked...@sina.cn
Hi, there is no mistake in the basic configuration. The cluster ran normally for a long time and stored a certain amount of data. When I restart the HBase service, this kind of problem appears! hanked...@sina.cn From: Jean-Marc Spaggiari Date: 2014-11-07 22:45 To: user CC: yuzhihong Subject: Re:

Re: s3n with HBase

2014-11-07 Thread Andrew Purtell
Admittedly it's been *years* since I experimented with pointing an HBase root at an s3 or s3n filesystem, but my (dated) experience is that it could take some time for newly written objects to show up in a bucket. The write will have completed and the file will be closed, but upon an immediate open attempt
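A rough illustration of the symptom Andrew describes, with a made-up bucket and path: a listing taken immediately after a writer closes a file may not show the object yet, while the same listing a little later does.

  hadoop fs -ls s3n://my-bucket/hbase/data/    # right after the write, the new file may be missing
  sleep 30
  hadoop fs -ls s3n://my-bucket/hbase/data/    # somewhat later, it shows up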

Re: s3n with HBase

2014-11-07 Thread Matteo Bertozzi
Another thing to keep in mind is that each rename() on s3 is a copy, and since we tend to move files around, our compaction looks like: - create the file in .tmp - copy the file to the region/family dir - copy the old files to the archive ...and an HFile copy is not cheap. Matteo On Fri, Nov 7,
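One rough way to see the cost Matteo is describing (bucket and file names here are hypothetical) is to time a move of a large HFile-sized object on s3n, which under the hood is a full copy followed by a delete:

  time hadoop fs -mv s3n://my-bucket/hbase/.tmp/hfile-0001 s3n://my-bucket/hbase/archive/hfile-0001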

Re: Random read operations at hundreds of requests per second

2014-11-07 Thread Pere Kyle
I think it may be a Thrift issue; have you tried playing with the connection queues? Set hbase.thrift.maxQueuedRequests to 0. From Varun Sharma: If you are opening persistent connections (connections that never close), you should probably set the queue size to 0, because those connections will
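For reference, the setting Pere and Varun mention lives in hbase-site.xml on the Thrift server host; the sketch below simply applies the suggested value of 0:

  <property>
    <name>hbase.thrift.maxQueuedRequests</name>
    <value>0</value>
  </property>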