Can anyone please guide me on how to load an HBase table into HDFS with a
specific column family?
Thanks,
Karthik
You can use the Export command, but I am not sure if you can export just one
column family: http://hbase.apache.org/0.94/book/ops_mgt.html#export
On Sep 21, 2016 10:42 PM, "Dima Spivak" wrote:
> Hey Karthik,
>
> This blog post [1] by our very own JD Cryans is a good place to start
>
By specifying "hbase.mapreduce.scan.column.family", you can export a selected
column family.
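In case it is useful, here is a minimal Java sketch of driving the Export
MapReduce job with that property set. The table name, column family and HDFS
output directory are placeholders, and the same thing is normally run from the
command line as hbase org.apache.hadoop.hbase.mapreduce.Export -D
hbase.mapreduce.scan.column.family=<family> <table> <outputdir>:

import org.apache.hadoop.hbase.mapreduce.Export;

public class ExportOneFamily {
  public static void main(String[] args) throws Exception {
    // Only cells from the named column family are scanned and written out.
    Export.main(new String[] {
        "-D", "hbase.mapreduce.scan.column.family=cf1",  // placeholder family
        "my_table",                                      // placeholder source table
        "/user/hbase/export/my_table_cf1"                // placeholder HDFS output dir
    });
  }
}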
On Wed, Sep 21, 2016 at 8:49 AM, sudhir patil
wrote:
> You can use the Export command, but I am not sure if you can export just one
> column family
Can you clarify your scenario?
Normally HBase is backed by HDFS - the table is already stored on HDFS.
Cheers
On Wed, Sep 21, 2016 at 4:46 AM, karthi keyan
wrote:
> Can anyone please guide me on how to load an HBase table into HDFS with a
> specific column family?
>
>
Hey Karthik,
This blog post [1] by our very own JD Cryans is a good place to start
understanding bulk load.
1.
http://blog.cloudera.com/blog/2013/09/how-to-use-hbase-bulk-loading-and-why/
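For the bulk-load route the blog describes, the final "load" step looks roughly
like the sketch below. This is only a minimal sketch against the HBase 1.x
client/mapreduce API (the class location and signatures vary across releases),
and the table name and HFile directory are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class CompleteBulkLoad {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName name = TableName.valueOf("my_table");           // placeholder table
    Path hfileDir = new Path("/user/hbase/hfiles/my_table");  // placeholder dir of prepared HFiles
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(name);
         Admin admin = conn.getAdmin();
         RegionLocator locator = conn.getRegionLocator(name)) {
      // Hand the pre-generated HFiles over to the region servers.
      new LoadIncrementalHFiles(conf).doBulkLoad(hfileDir, admin, table, locator);
    }
  }
}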
On Wednesday, September 21, 2016, karthi keyan
wrote:
> Can any one please
Hi HBase experts,
Our application is unable to scan or read from HBase tables when throttling is
set. We are getting ThrottlingException every time. The error is seen more
frequently when the number of HBase pre-splits is increased. The tables for
which this error shows up are empty (
Did you try using REQUEST_NUMBER as the throttle type?
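For reference, a minimal sketch of setting such a throttle through the Java
Admin API (the hbase shell set_quota command is the equivalent). The table
name, scope and limits below are placeholders, not the original poster's
settings:

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class SetThrottle {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("my_table");  // placeholder table
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Request-count throttle, as suggested above: at most 1000 requests per second.
      admin.setQuota(QuotaSettingsFactory.throttleTable(
          table, ThrottleType.REQUEST_NUMBER, 1000, TimeUnit.SECONDS));
      // Size-based alternative (roughly the kind of limit described later in
      // this thread): about 10GB of request data per second.
      // admin.setQuota(QuotaSettingsFactory.throttleTable(
      //     table, ThrottleType.REQUEST_SIZE, 10L * 1024 * 1024 * 1024, TimeUnit.SECONDS));
    }
  }
}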
2016-09-22 11:33 GMT+08:00 Sumit Nigam :
> Hi Guanghao,
>
> This throttling error shows up as soon as I start HBase. So, ideally there
> shouldn't be too many prior operations in play here. Plus, the error shows
>
No, I did not try request number. I want to use size as my throttling factor.
Thanks again!
From: Guanghao Zhang
To: user@hbase.apache.org; Sumit Nigam
Sent: Thursday, September 22, 2016 9:28 AM
Subject: Re: Hbase throttling issues
Did you
On behalf of the development community, I'm pleased to announce the
release of YCSB version 0.11.0.
Highlights:
* Support for ArangoDB. This is a new binding.
* Update to Apache Geode (incubating) to improve memory footprint.
* "couchbase" client deprecated in favor of "couchbase2".
*
Hi Guanghao,
This throttling error shows up as soon as I start HBase. So, ideally there
shouldn't be too many prior operations in play here. Plus, the error shows up
even when my table has hardly any data (possibly in KBs) and I have set the
throttling limit to ~10GB.
Thanks,
Sumit
From:
Were all scan operations throttled? It now uses the average size of all
previous operations to check the quota. Maybe the previous scan operations read
too much data.
2016-09-22 1:18 GMT+08:00 Sumit Nigam :
> Hi HBase experts,
>
> Our application is unable to scan or read
Which HBase release are you using?
Can you tell us the values of handler-related configs such as
hbase.regionserver.handler.count?
How many regions does M7 have?
How many servers does M7 span? Are the regions of M7 spread evenly?
Thanks
Hi all,
I am facing an issue while accessing data from an HBase M7 table which has
about 50 million records.
In a single API request, we make 3 calls to HBase M7 (a generic sketch of such
batched calls follows the list):
1. A single multi-get to fetch about 30 records
2. A single multi-put to update about 500 records
3. A single multi-get to fetch about 15
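Not the original poster's code, but a minimal sketch of what such batched
multi-get/multi-put calls typically look like with the HBase client API; the
table name, column family, qualifier and row keys are placeholders:

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchedAccess {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("my_table"))) {  // placeholder table
      // 1. One multi-get for a batch of row keys (~30 in the scenario above).
      List<Get> gets = new ArrayList<>();
      for (int i = 0; i < 30; i++) {
        gets.add(new Get(Bytes.toBytes("row-" + i)));
      }
      Result[] fetched = table.get(gets);

      // 2. One multi-put for a batch of updates (~500 in the scenario above).
      List<Put> puts = new ArrayList<>();
      for (int i = 0; i < 500; i++) {
        puts.add(new Put(Bytes.toBytes("row-" + i))
            .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value-" + i)));
      }
      table.put(puts);

      System.out.println("fetched " + fetched.length + " rows, wrote " + puts.size() + " rows");
    }
  }
}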