I was hoping to see more clues as to why
/homes/avandana/hbase/hbase-0.90.3/build/hbase/test/TestHRegion/testWritesWhileScanning
couldn't be deleted.
Did you verify whether the file was on your computer ?
BTW I looked at the output.txt on my laptop. The log about HDFS-826 and
HDFS-200 was there -
I tried to use RegionSplitter to rolling split an existing region,
bin/hbase org.apache.hadoop.hbase.util.RegionSplitter -r -o 2 usertable
The following exception was thrown:
Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS:
hdfs://cas01:54310/hbase/usertable/_balancedSplit,
PS: I'm using HBase 0.90.2.
2011/6/17 Sheng Chen chensheng2...@gmail.com
I tried to use RegionSplitter to rolling split an existing region,
bin/hbase org.apache.hadoop.hbase.util.RegionSplitter -r -o 2 usertable
The following exception was thrown:
Exception in thread "main"
Hi!
This morning, on our production system, we experienced very bad behavior from
HBase 0.20.6.
1- one of our region servers crashed
2- we restarted it successfully (no errors on the master nor on the region
servers)
3- but we discovered that our HBase clients were unable to recover from this
What happens if you add to your hbase-site a config which sets the key
fs.default.name (and fs.defaultFS) to the value you have for
hbase.rootdir (I presume it's hdfs://cas01:54310/hbase/)?
The script thinks the local filesystem is its filesystem when it should be HDFS.
In scripts in bin, before we
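For reference, a hedged sketch of that change in hbase-site.xml, assuming the
filesystem URI is hdfs://cas01:54310 (fs.default.name normally takes just the
filesystem URI, without the /hbase path):

<!-- Sketch only: pin the default filesystem to HDFS so tools launched via
     bin/hbase do not fall back to the local filesystem. -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://cas01:54310</value>
</property>
<property>
  <!-- newer Hadoop name for the same setting -->
  <name>fs.defaultFS</name>
  <value>hdfs://cas01:54310</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://cas01:54310/hbase</value>
</property>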
The file is not there on my computer. So is there something I need to do
before the build process?
On Jun 17, 2011, at 2:11 AM, Ted Yu wrote:
I was hoping to see more clues as to why
/homes/avandana/hbase/hbase-0.90.3/build/hbase/test/TestHRegion/testWritesWhileScanning
couldn't be
On Fri, Jun 17, 2011 at 6:42 AM, Vincent Barat vba...@ubikod.com wrote:
Each time a get() was performed, but ONLY ON THE BIGGEST TABLES, our HBase
clients triggered an exception (actually coming from the restarted region
server):
org.apache.hadoop.hbase.NotServingRegionException:
Hi,
I want to add a column family to an existing table. I used the following code,
but it fails saying the descriptor cannot be modified.
try {
  HTableDescriptor descriptor =
      new HTable(table.getConfiguration(), table.getTableName())
          .getTableDescriptor();
  descriptor.addFamily(cf);
You need to go through HBaseAdmin.
http://hbase.apache.org/book.html#schema.creation
Disable the table, add the CF, then re-enable the table.
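In code, assuming your existing Configuration conf, the table name, and the
HColumnDescriptor cf from your snippet, the HBaseAdmin route would look
roughly like this (a sketch against the 0.90 client API):

import org.apache.hadoop.hbase.client.HBaseAdmin;

// Sketch: the descriptor returned by HTable is read-only; schema changes
// go through HBaseAdmin, with the table disabled first.
HBaseAdmin admin = new HBaseAdmin(conf);
admin.disableTable(tableName);
admin.addColumn(tableName, cf);   // cf is the new HColumnDescriptor
admin.enableTable(tableName);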
-Original Message-
From: Eranda Sooriyabandara [mailto:0704...@gmail.com]
Sent: Friday, June 17, 2011 2:11 PM
To: user@hbase.apache.org
We need to load 10 million lines into HBase for processing. I have the file on
the Hadoop DFS and would like to map/reduce it to put each line into the
account that the line is for (i.e., route it right to the node).
I am considering putting a List<lines> in the Account, basically (i.e., a
column-family
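A rough sketch of the kind of mapper such a job might use, feeding
TableOutputFormat (assumptions: tab-separated input with the account id in
field 0, and a "lines" column family; none of these names are from the
original post):

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LineToAccountMapper
    extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
  @Override
  protected void map(LongWritable offset, Text line, Context context)
      throws IOException, InterruptedException {
    String[] fields = line.toString().split("\t");
    byte[] row = Bytes.toBytes(fields[0]);            // account id as row key
    Put put = new Put(row);
    put.add(Bytes.toBytes("lines"),                   // column family
        Bytes.toBytes(offset.get()),                  // qualifier: file offset
        Bytes.toBytes(line.toString()));              // the raw line
    context.write(new ImmutableBytesWritable(row), put);
  }
}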
Hi Doug,
Thanks for the quick reply. The changes you mentioned made my code work.
thanks
Eranda
What about using HBasene? Is it pretty good? It looks just like a distributed
Lucene, with the same API and everything.
Later,
Dean
-Original Message-
From: Mark Kerzner [mailto:markkerz...@gmail.com]
Sent: Wednesday, June 15, 2011 10:10 PM
To: user@hbase.apache.org
Subject: Re: What's
How do you pre-split tables and how big should the splits be? We will be doing
a 3 terabyte load into hbase in the near future.
We have raw files spit out from our Sybase that I can load once into the
Hadoop DFS, so we can wipe HBase and reload the data into it on every run of
our prototype
I tried to use TableMapper and TableOutputFormat
from org.apache.hadoop.hbase.mapreduce to write a map-reduce job which
incremented some columns. I noticed that TableOutputFormat.write() doesn't
support Increment, only Put and Delete.
Is there a reason that TableOutputFormat shouldn't support
+1
On Jun 17, 2011 4:43 PM, Leif Wickland leifwickl...@gmail.com wrote:
I tried to use TableMapper and TableOutputFormat
from org.apache.hadoop.hbase.mapreduce to write a map-reduce job which
incremented some columns. I noticed that TableOutputFormat.write() doesn't
support Increment, only Put
The pre-splitting question is relatively easy. There is an optional argument
to the create-table command that takes an array of keys. These keys are used
as the start keys of the regions. How big should the splits be? That is a
little harder. The rule of thumb is 1000 regions per server. You
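For example, with the Java client (a sketch against the 0.90 API; the table
name, family, and split keys below are made up, and conf is assumed to be an
existing Configuration):

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

HTableDescriptor desc = new HTableDescriptor("usertable");
desc.addFamily(new HColumnDescriptor("cf"));
// Three split keys give four regions: [, g), [g, n), [n, u), [u, )
byte[][] splitKeys = {
    Bytes.toBytes("g"), Bytes.toBytes("n"), Bytes.toBytes("u")
};
new HBaseAdmin(conf).createTable(desc, splitKeys);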
Go for it!
St.Ack
On Fri, Jun 17, 2011 at 1:43 PM, Leif Wickland leifwickl...@gmail.com wrote:
I tried to use TableMapper and TableOutputFormat
from org.apache.hadoop.hbase.mapreduce to write a map-reduce job which
incremented some columns. I noticed that TableOutputFormat.write() doesn't
Watch out - increment is not idempotent, so you will have to somehow
ensure that a map task runs exactly once, never more and never less.
Job failures will ruin the data as well.
-ryan
On Fri, Jun 17, 2011 at 1:57 PM, Stack st...@duboce.net wrote:
Go for it!
St.Ack
On Fri, Jun 17, 2011
Interesting (and mildly terrifying) point, Ryan.
Is there a valid pattern for storing a sum in HBase and then using MapReduce
to calculate an update to that sum based on incremental data updates?
It seems a cycle like the following would avoid double-increment problems,
but would suffer from a
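One idempotent alternative sometimes used (not from this thread; a hedged
sketch, with all table, family, and variable names made up): write each
batch's delta as a plain Put to a cell whose coordinates are derived from the
batch id, so a re-run task overwrites the same cell instead of
double-counting, and sum the cells at read time:

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

HTable table = new HTable(conf, "sums");
Put put = new Put(Bytes.toBytes("pageviews"));   // the counter's row
put.add(Bytes.toBytes("deltas"),                 // family of per-batch deltas
    Bytes.toBytes(batchId),                      // qualifier = batch id (String)
    Bytes.toBytes(delta));                       // this batch's delta (long)
table.put(put);
// A reader (or a periodic rollup job) sums every cell in "deltas"
// to produce the current total.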
I'm having difficulty figuring out why I am unable to get a scanner URL
programmatically via the org.apache.hadoop.hbase.rest.client.Client. I followed
the instructions on the Stargate wiki by starting up a jetty servlet container
and verified a few GET operations which succeeded. I then
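For comparison, a minimal sketch of creating a scanner through the Java REST
client (host, port, and table name are assumptions; the scanner URL should
come back in the 201 response's Location header):

import org.apache.hadoop.hbase.rest.client.Client;
import org.apache.hadoop.hbase.rest.client.Cluster;
import org.apache.hadoop.hbase.rest.client.Response;
import org.apache.hadoop.hbase.util.Bytes;

Cluster cluster = new Cluster();
cluster.add("localhost", 8080);                  // where Stargate is listening
Client client = new Client(cluster);
Response response = client.post("/usertable/scanner", "text/xml",
    Bytes.toBytes("<Scanner batch=\"10\"/>"));
if (response.getCode() == 201) {
  String scannerUrl = response.getLocation();    // the scanner URL
}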
Hi Bill,
At the recent HBase hackathon in Berlin there was some word of ACLs in (the
next release of?) HBase from the Trend Micro guys, I believe.
Check this: http://search-hadoop.com/?q=acl&fc_project=HBase&fc_type=jira
Otis
--
We're hiring HBase / Hadoop / Hive / Mahout engineers with