Hi,
I have a Cloudera CDH3 pseudo-distributed machine for development. I had
finished bulk-loading data into HBase 0.89 using a MapReduce program, and
HBase was working fine after the load completed. But when I restart the
machine, I can no longer log in to HBase; it is not working and throws a
MasterNotRunning exception.
The table size is close to 24 million rows.
The Hadoop version is 0.20.
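(A guess, since the logs are not shown: on pseudo-distributed setups the master often fails to come back after a reboot because hbase.rootdir defaults to a directory under /tmp, which many systems clear on restart. Pointing it at HDFS in hbase-site.xml keeps the data across reboots; the host and port below are only example values for a single-node box.)

```xml
<!-- hbase-site.xml: example only; adjust to your HDFS NameNode address -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:8020/hbase</value>
</property>
```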
Regards
Jason
On Thu, Feb 17, 2011 at 10:56 PM, Stack st...@duboce.net wrote:
On Thu, Feb 17, 2011 at 2:15 AM, praba karan prabas...@gmail.com wrote:
Hi all,
I've been trying to load a huge amount of data into HBase using a MapReduce
program. The HBase table contains 16 columns, and the row IDs are generated
from UUIDs. When I try to load, it takes a long time and then throws the
exception discussed in the following link.
, 2011, at 10:26 AM, Ryan Rawson ryano...@gmail.com wrote:
Or the natural business key?
On Feb 15, 2011 10:00 AM, Jean-Daniel Cryans jdcry...@apache.org
wrote:
Try UUIDs.
J-D
On Tue, Feb 15, 2011 at 8:57 AM, praba karan prabas...@gmail.com
wrote:
Hi,
I have a MapReduce program for uploading bulk data into HBase 0.89 from the
HDFS file system. I need a unique row ID for every row (millions of rows) so
that rows in the HBase table are not overwritten. Is there any solution to
the row-ID problem that avoids overwriting in the HBase table?
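(Following the "Try UUIDs" suggestion earlier in the thread, a minimal sketch of generating collision-free row keys with plain JDK java.util.UUID; no HBase API shown, and the class name is just for illustration:)

```java
import java.util.UUID;

public class RowKeys {
    // Generate a random (type 4) UUID string to use as an HBase row key.
    // Collisions are astronomically unlikely, so no coordination between
    // mappers is needed and existing rows are not overwritten.
    static String newRowKey() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        String key = newRowKey();
        System.out.println(key);          // e.g. 3f2c9a0e-...-... (random each run)
        System.out.println(key.length()); // always 36 characters (32 hex + 4 hyphens)
    }
}
```

Note that random keys spread writes evenly across regions, at the cost of losing any meaningful scan order (hence the "natural business key" question above).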
Guys,
I got past this error! It seems to be only a permission/visibility issue
with $HBASE_HOME/conf/hbase-site.xml. Just copying hbase-site.xml into the
$HADOOP_HOME/conf/ directory and restarting the cluster resolves the error.
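(For anyone hitting the same thing: a likely explanation, not confirmed here, is that the MapReduce job's client classpath includes $HADOOP_HOME/conf, and the HBase client needs the ZooKeeper settings from hbase-site.xml to locate the master. A minimal fragment with example values for a single-node setup:)

```xml
<!-- hbase-site.xml: example values for a pseudo-distributed box -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>localhost</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
```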
..Jason
On Sat, Feb 5, 2011 at 1:57 PM, praba karan prabas...@gmail.com wrote:
into it.
St.Ack
On Fri, Feb 4, 2011 at 4:44 AM, praba karan prabas...@gmail.com wrote:
Hi,
I have a Hadoop CDH3 pseudo-distributed environment. When I run the
MapReduce program to bulk-load data into HBase 0.89, I get the
following exception
Hi,
I have a Hadoop CDH3 pseudo-distributed environment. When I run the
MapReduce program to bulk-load data into HBase 0.89, I get the
following exception.
org.apache.hadoop.hbase.client.NoServerForRegionException: Timed out trying
to locate root region
        at
Thanks Mark,
But this is not what I need. I am trying to upload bulk data from the HDFS
file system into HBase 0.89, and I need a MapReduce program for that.
Regards
Jason
On Wed, Feb 2, 2011 at 7:46 PM, Mark Kerzner markkerz...@gmail.com wrote:
Jason,
attached is RowCounter.java
it without complexity
I don't want to use command-line tools such as completebulkload or
importtsv.
Thank you Mark
Regards
Prabakaran
On Wed, Feb 2, 2011 at 9:18 PM, Stack st...@duboce.net wrote:
See http://hbase.apache.org/bulk-loads.html
St.Ack
On Wed, Feb 2, 2011 at 3:26 PM, praba karan wrote:
that there are not enough published
examples, and I have started accumulating mine here:
http://hadoopinpractice.com/code.html
Cheers,
Mark
On Wed, Feb 2, 2011 at 9:56 AM, praba karan prabas...@gmail.com wrote:
Yeah, I have seen this. I developed the MapReduce program
Hey Ryan,
I just uploaded a small sample of data into HBase 0.89. I will post
the MapReduce code after completing the test; first I need to get rid of the
exception I am facing now. When I run the MapReduce program on my
machine, I get the following error.