There is an outstanding issue over in JIRA (currently down) to address
this: make it so hbase can ride over an FS restart.
St.Ack
On Fri, Apr 9, 2010 at 4:31 PM, Ted Yu wrote:
> If hadoop is restarted when hbase is still running, hbase table(s) would be
> corrupted.
>
> Is it possible to make hbase
If hadoop is restarted when hbase is still running, hbase table(s) would be
corrupted.
Is it possible to make hbase tolerate a hadoop restart?
Thanks
Seeking clarification from hbase-dev.
Export class calls:
TableMapReduceUtil.initTableMapperJob(tableName, s, Exporter.class,
    null, null, job);
which in turn calls:
job.getConfiguration().set(TableInputFormat.INPUT_TABLE, table);
But TableInputFormat.setConf() is not called - thus t
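The setConf() point can be sketched with a simplified model of Hadoop's Configurable contract (plain Java, no HBase classes; the Conf type here is just a Map stand-in, and FakeTableInputFormat is an illustrative stand-in for TableInputFormat, not its real code):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for org.apache.hadoop.conf.Configuration.
class Conf {
    private final Map<String, String> props = new HashMap<>();
    void set(String k, String v) { props.put(k, v); }
    String get(String k) { return props.get(k); }
}

// Simplified stand-in for TableInputFormat: the table name is only picked
// up when the framework invokes setConf(), not when the job configuration
// is populated by initTableMapperJob.
class FakeTableInputFormat {
    static final String INPUT_TABLE = "hbase.mapreduce.inputtable";
    private String table; // stays null until setConf() runs

    void setConf(Conf conf) { this.table = conf.get(INPUT_TABLE); }
    String getTable() { return table; }
}

public class SetConfDemo {
    public static void main(String[] args) {
        Conf jobConf = new Conf();
        // This mirrors what initTableMapperJob does: set the key on the config.
        jobConf.set(FakeTableInputFormat.INPUT_TABLE, "mytable");

        FakeTableInputFormat tif = new FakeTableInputFormat();
        System.out.println(tif.getTable()); // null: setConf() never called
        tif.setConf(jobConf);               // normally done by the framework
        System.out.println(tif.getTable()); // mytable
    }
}
```

So setting INPUT_TABLE on the job configuration is necessary but not sufficient; something still has to call setConf() on the input format instance.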
enhance hbase.util.Bytes.toBytes() with length limit
Key: HBASE-2432
URL: https://issues.apache.org/jira/browse/HBASE-2432
Project: Hadoop HBase
Issue Type: Improvement
Component
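A minimal sketch of what a length-limited toBytes could look like (the signature and truncation behavior here are hypothetical illustrations, not the actual HBASE-2432 patch):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical length-limited variant of hbase.util.Bytes.toBytes(String).
// Note: truncating at an arbitrary byte boundary can split a multi-byte
// UTF-8 character; a real implementation would have to decide how to
// handle that.
public class LimitedBytes {
    /** UTF-8 encode s, truncating the result to at most maxLen bytes. */
    public static byte[] toBytes(String s, int maxLen) {
        byte[] b = s.getBytes(StandardCharsets.UTF_8);
        return b.length <= maxLen ? b : Arrays.copyOf(b, maxLen);
    }

    public static void main(String[] args) {
        System.out.println(toBytes("hbase", 3).length);  // 3
        System.out.println(toBytes("hbase", 10).length); // 5
    }
}
```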
I repeatedly have the following problem with
0.20.3/dfs.datanode.socket.write.timeout=0: some RS is asked for some
data, the DFS cannot find it, and the client hangs until the timeout.
Grepping the cluster logs, I can see this:
1. at some time the DFS is asked to delete a block, blocks are deleted
from
Hi,
This is likely a multiple assignment bug.
Can you grep the NN log for the block ID 991235084167234271 ? This should
tell you which file it was originally allocated to, as well as what IP wrote
it. You should also see a deletion later. Also, the filename should give you
a clue as to which regi
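The log-grepping step can be scripted; a minimal sketch in Java (the log path and the `blk_` prefix are assumptions about the environment, not from the mail):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

// Sketch of the suggested diagnostic: pull every NameNode log line that
// mentions a given block ID to see its allocation, writer, and deletion.
public class BlockGrep {
    static List<String> linesMentioning(Path log, String blockId) throws IOException {
        return Files.readAllLines(log).stream()
                .filter(line -> line.contains(blockId))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) throws IOException {
        Path log = Path.of("hadoop-namenode.log"); // placeholder path
        if (Files.exists(log)) {
            for (String line : linesMentioning(log, "blk_991235084167234271")) {
                System.out.println(line);
            }
        }
    }
}
```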
On Fri, Apr 9, 2010 at 8:37 AM, Paul Smith wrote:
> However a clean unpack is working in non-distributed mode. My other copy I
> had been working on had a config like this:
>
> <configuration>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>file:///tmp/hbase-${user.name}/hbase</value>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>   </property>
> </configuration>
>>
>> hadoop:name=RegionServerStatistics,service=RegionServer
>> hadoop:name=RPCStatistics-60020,service=HBase
>>
>
> ok, well under my local simple testing (sort of straight out of the box
> unpacking, not distributed), the RegionServer does _not_ export that
> RegionServerStatistics which is
On 09/04/2010, at 9:23 PM, Gary Helmling wrote:
> Hi Paul,
>
> For the master process, you should see 2 MBeans registered (assuming
> distributed setup with default ports):
>
> hadoop:name=MasterStatistics,service=Master
> hadoop:name=RPCStatistics-6,service=HBase
>
> The first is master-s
Anyone who hasn't registered, don't forget.
http://www.meetup.com/hackathon/calendar/13063062/
Ryan and Todd... hope you guys can make it so sign up!
Hi Paul,
For the master process, you should see 2 MBeans registered (assuming
distributed setup with default ports):
hadoop:name=MasterStatistics,service=Master
hadoop:name=RPCStatistics-6,service=HBase
The first is master-specific metrics (just cluster request count I think?).
The second i
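One way to check which of those beans actually got registered is to query the platform MBeanServer for the "hadoop" domain. A self-contained sketch (it registers a dummy bean first so there is something to find; the real hadoop:* beans only exist inside a running master or regionserver JVM):

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Query the platform MBeanServer for beans in the "hadoop" domain, the
// same check you would do via jconsole against a live master.
public class JmxCheck {
    public interface DummyMBean { int getRequestCount(); }
    public static class Dummy implements DummyMBean {
        public int getRequestCount() { return 0; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Stand-in registration using the naming scheme from the thread.
        mbs.registerMBean(new Dummy(),
                new ObjectName("hadoop:name=MasterStatistics,service=Master"));

        Set<ObjectName> found = mbs.queryNames(new ObjectName("hadoop:*"), null);
        for (ObjectName on : found) {
            System.out.println(on);
        }
    }
}
```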
That's right. If you ever see more versions than the family setting of
maxVersions, something is broken in the read path and we should fix it.
> -----Original Message-----
> From: Lars George [mailto:lars.geo...@gmail.com]
> Sent: Friday, April 09, 2010 12:58 AM
> To: hbase-dev@hadoop.apache.org
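What "more versions than maxVersions" means can be illustrated with a plain-Java model of the trimming the read path is supposed to do (no HBase classes; this is a sketch of the invariant, not HBase's actual Store code):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Plain-Java model of maxVersions trimming: given all stored versions of
// a cell (as timestamps), a correct read returns only the newest
// maxVersions of them.
public class VersionTrim {
    static List<Long> newestVersions(List<Long> timestamps, int maxVersions) {
        List<Long> sorted = new ArrayList<>(timestamps);
        sorted.sort(Comparator.reverseOrder()); // newest first, HBase-style
        return sorted.subList(0, Math.min(maxVersions, sorted.size()));
    }

    public static void main(String[] args) {
        List<Long> stored = List.of(100L, 300L, 200L, 400L);
        // With maxVersions=3, a correct read returns exactly the three
        // newest; seeing all four would be the read-path bug discussed above.
        System.out.println(newestVersions(stored, 3)); // [400, 300, 200]
    }
}
```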
Oh, discussing this here, it seems I was too fast to answer. The Scan
object will set Store.versionsToReturn() to the configured max versions.
So I also have no idea what the issue is :)
Please rephrase.
Lars
On Fri, Apr 9, 2010 at 9:27 AM, Lars George wrote:
> Hi,
>
>>> The third is about the hbase shell
Yeah I have no idea where that might be coming from. I think if you
set some breakpoints and ran the HMaster you might see what pops up
:-)
good luck!
On Fri, Apr 9, 2010 at 12:40 AM, wrote:
> On 09/04/2010, at 17:15, Ryan Rawson wrote:
>
>> In the HMaster, there is this:
>>
>> priva
On 09/04/2010, at 17:15, Ryan Rawson wrote:
In the HMaster, there is this:
  private void startServiceThreads() {
    // Do after main thread name has been set
    this.metrics = new MasterMetrics();
which calls the MBeanUtil registration (a hadoop utility) which
registers the MasterMetrics
Hi,
>> The third is about the hbase shell, which implements an alter command
>> that can change, add, or delete a column family, but I remember the column
>> family cannot be changed after the table is created. I also want to know
>> the default max VERSIONS; when I create a table I give the VERSION
In the HMaster, there is this:
  private void startServiceThreads() {
    // Do after main thread name has been set
    this.metrics = new MasterMetrics();
which calls the MBeanUtil registration (a hadoop utility) which
registers the MasterMetrics which contains all those metrics you are
interest
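The registration path described here can be mimicked in a few lines of plain JMX (a sketch of the register-in-constructor pattern, not HBase's actual MasterMetrics or MBeanUtil code; the metric name is illustrative):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch of the pattern: the metrics object registers itself with the
// MBeanServer in its constructor, so the bean appears as soon as
// startServiceThreads() constructs it.
public class MetricsDemo {
    public interface MasterMetricsMBean { long getClusterRequests(); }

    public static class MasterMetrics implements MasterMetricsMBean {
        private long clusterRequests;

        public MasterMetrics() throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            // Same naming scheme as the beans listed earlier in the thread.
            mbs.registerMBean(this,
                    new ObjectName("hadoop:name=MasterStatistics,service=Master"));
        }

        public void incrementRequests() { clusterRequests++; }
        public long getClusterRequests() { return clusterRequests; }
    }

    public static void main(String[] args) throws Exception {
        MasterMetrics metrics = new MasterMetrics(); // registration happens here
        metrics.incrementRequests();
        System.out.println(metrics.getClusterRequests()); // 1
    }
}
```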