ng, below example is for TableInputFormat)
> TableSplit tableSplit = (TableSplit) context.getInputSplit();
> byte[] startRow = tableSplit.getStartRow();
> byte[] endRow = tableSplit.getEndRow();
>
> Regards,
> Ankit Singhal
>
> On Mon, Jun 5, 2017 at 5:38 AM, Rajeshkumar J <raje
Hi,
I am running MapReduce over an HBase table using Java, and I want to know the
start key and stop key of every region. As every mapper takes a region as
input, I need to know the start key and stop key of the region inside the
map function. Is there any function to get that, or is there any way to
ormation can be retrieved:
>
> Table
> RegionLocator
> Admin
> StartEndKeys (region boundaries)
>
> You can also take a look at calculateRebalancedSplits() to see how it
> rebalances the InputSplit's.
>
> FYI
>
> On Tue, May 30, 2017 at 11:53 PM, Rajeshkumar J <
Hi,
I want to custom-split my HBase input. Can anyone tell me what values are
available during this split process: only row key values, or others as well?
Thanks
Hi,
Can anyone point me to a custom input split example for HBase MapReduce?
Thanks
ompaction which restores data locality.
>
> Thanks
>
> > On May 26, 2017, at 6:08 AM, Rajeshkumar J <rajeshkumarit8...@gmail.com>
> wrote:
> >
> > Thanks Ted. If data blocks of the hfile may not be on the same node as
> the
> > region server then how
> By default, HDFS has a replication
> factor of 3.
> If you're not using read replica feature, any single region is served by
> one region server (however the data blocks of the hfile may not be on the
> same node as the region server).
>
> Cheers
>
> On Thu, May 25, 2017 at 11:45 PM,
Hi,
we have a region max file size of 10 GB. Will the HFiles of a region exist
on the same region server, or will they be distributed?
Thanks
I am running MapReduce over an HBase table. My row key is a combination of
country and place, for instance Delhi_India. Is there any way I can send
each matched row key to a separate mapper? For instance, if I have row key
values such as NewYork_USA, Boston_USA, Delhi_India, Mumbai_India, then
all
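Grouping such rows by country comes down to extracting the country suffix from each row key. A plain-Java sketch of that extraction (no Hadoop dependency; the class and method names are illustrative, and a real job would apply this inside a custom InputFormat or Partitioner):

```java
// Derives the "country" grouping key from a place_country row key.
public class CountryKeyDemo {
    static String countryOf(String rowKey) {
        int sep = rowKey.lastIndexOf('_');
        return sep < 0 ? rowKey : rowKey.substring(sep + 1); // suffix after '_'
    }

    public static void main(String[] args) {
        for (String k : new String[] {"NewYork_USA", "Boston_USA", "Delhi_India", "Mumbai_India"}) {
            System.out.println(k + " -> " + countryOf(k));
        }
    }
}
```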
Hi,
I came across the MultiTableInputFormat class, which converts HBase tabular
data from multiple scanners into a format that is consumable by Map/Reduce.
So what will be the input to the mapper class if I have passed a scan
object to this class which scans two tables? Whether the input to mapper
Hi,
Is there any way to customize an HBase input split based on a column value?
For instance, if I have a column family 'd' with a 'country' column
qualifier, and 'country' holds values such as USA, Australia, India, China,
then all the rows which hold the country value USA go to one mapper, then
<yuzhih...@gmail.com> wrote:
> Can you clarify your use case ?
>
> If your mapper calls Table#coprocessorService(), the recipient may not be
> on the same region server as the mapper.
>
> On Tue, May 16, 2017 at 10:58 PM, Rajeshkumar J <
> rajeshkumarit8...
Hi,
Can I call an HBase coprocessor in the map function of a MapReduce job?
Thanks
bin stack trace ?
>
> Please pastebin region server log as well.
>
> Thanks
>
> On Tue, May 16, 2017 at 6:49 AM, Rajeshkumar J <
> rajeshkumarit8...@gmail.com>
> wrote:
>
> > It is throwing error like
> >
> > org.apache.hadoop.hbase.client.Ret
hadoop/hbase/
> client/coprocessor/package-info.java
>
> If the endpoint hangs, can you select region server(s) and capture stack
> trace ?
>
> Pastebin the traces you capture.
>
> Which hbase release are you using ?
>
> Cheers
>
> On Mon, May 15, 2017 at 10:48 P
Hi,
I went through the examples of coprocessors in the HBase source. Can anyone
point me to a link where I can find a group-by coprocessor example?
Thanks
Hi,
I am currently analyzing HBase coprocessors and I tried running
RowCountEndpoint on a single-node cluster, where it works fine. But when I
try to run it on a multi-node cluster it does not throw an error; it hangs,
in the sense that it stops making progress. I have loaded the coprocessor dynamically.
Thanks
Hi,
Check this example in the below link
https://www.3pillarglobal.com/insights/hbase-coprocessors
Thanks
On Mon, May 15, 2017 at 4:00 PM, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:
> I have a Coprocessor which overrides postPut and I also want to call a
>
there will be any
issue??
Thanks
On Mon, May 15, 2017 at 2:22 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> Split policy may play a role here.
>
> Please take a look at:
> http://hbase.apache.org/book.html#_custom_split_policies
>
> On Mon, May 15, 2017 at 1:48 AM, Rajeshku
Hi,
When we run MapReduce over HBase, each region becomes the input for one
mapper. I have set the region max file size to 10 GB. If a region has only
about 5 GB, will the mapper take that 5 GB of data as input?
Thanks
> If locality gets messed up through RegionServer failures, Major Compactions
> should restore it.
>
> On Tue, May 9, 2017 at 8:00 AM, Rajeshkumar J <rajeshkumarit8...@gmail.com
> >
> wrote:
>
> > Hi,
> >
> >
> >I have a table with its region
Hi,
I have a table whose region max file size is 10 GB. If I run MapReduce over
this table, will there be any data locality? If not, please explain.
Thanks
Hi
If I am running MapReduce on HBase tables, what will be the input to the
mapper function?
Thanks
Also, Ashish, while specifying the region location, is there any option to
use a regular expression?
On Thu, Mar 16, 2017 at 5:55 PM, Rajeshkumar J <rajeshkumarit8...@gmail.com>
wrote:
> thanks ashish. I got that as that region doesn't contain any data and data
> is available in other regions
rectory from completebulkload.
>
> -Original Message-
> From: Rajeshkumar J [mailto:rajeshkumarit8...@gmail.com]
> Sent: 16 March 2017 17:44
> To: user@hbase.apache.org
> Subject: Re: hbase table creation
>
> Ashish,
>
> I have tried as you said but I don't have any dat
table properties as it is in cluster1 then you
> will have to create it manually in cluster2.
>
> I hope this helps.
>
> Regards,
> Ashish
>
> -Original Message-
> From: Rajeshkumar J [mailto:rajeshkumarit8...@gmail.com]
> Sent: 16 March 2017 16:46
> To: user@hbas
> > >
> > > On Thu, Mar 16, 2017 at 3:51 PM, sudhakara st <sudhakara...@gmail.com>
> > > wrote:
> > >
> > > > You have to use 'copytable', here is more info
> > > > https://hbase.apache.org/book.html#copy.table
> > > >
> > >
6, 2017 at 3:46 PM, Rajeshkumar J <
> rajeshkumarit8...@gmail.com>
> wrote:
>
> > I have copied hbase data of a table from one cluster to another. For
> > instance I have a table testing and its data will be in the path
> > /hbase/default/data/testing
> >
&
I have copied the HBase data of a table from one cluster to another. For
instance, I have a table named testing and its data is in the path
/hbase/default/data/testing
I have copied these files from the existing cluster to the new cluster. Is
there any possibility to create the table and load the data from these files?
property
>
> On which machine did you add the property ?
> If you added on client machine (normally outside the cluster), region
> servers wouldn't see them.
>
> Did you pass timeout values greater than the default ?
>
> Cheers
>
> On Wed, Mar 1, 2017 at 11:00 PM, Rajeshkumar J &
I am trying to scan a table from a Java class, so this is an HBase client.
I have created the HBase configuration using HBaseConfiguration.create() in
the class, and I have also placed my hbase-site.xml on the classpath. My
HBase services are up and running. Initially I don't have these properties
He <jerry...@gmail.com> wrote:
> These properties can be used on the client side and the server side.
> I assume you are asking about them on the server side.
>
> Unfortunately, these two are not supported yet for dynamic configuration.
>
> Thanks.
>
> Jerry
>
> O
Hi,
Can anyone tell me whether we can add the below properties dynamically?
1. hbase.client.scanner.timeout.period
2. hbase.rpc.timeout
Thanks,
Rajeshkumar
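Since the reply above notes these two properties are not supported for dynamic configuration, they would normally be set statically in hbase-site.xml (on both client and server side) and picked up on restart. A sketch; the 120000 ms values are illustrative only:

```xml
<!-- hbase-site.xml fragment; timeout values below are examples, not recommendations -->
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>120000</value>
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>120000</value>
</property>
```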
Hi,
hbase.client.scanner.caching
Description
Number of rows that we try to fetch when calling next on a scanner if it is
not served from (local, client) memory. This configuration works together
with hbase.client.scanner.max.result.size to try and use the network
efficiently. The default value
> Yu
>
> On 19 January 2017 at 20:08, Rajeshkumar J <rajeshkumarit8...@gmail.com>
> wrote:
>
> > I am using SingleColumnValueFilter for filtering based on some values.
> > Based on this I am getting lease expired exception during scan. So is
> there
> > any way to solve this?
> >
>
I am using SingleColumnValueFilter for filtering based on some values.
Based on this I am getting lease expired exception during scan. So is there
any way to solve this?
ggest that you can upgrade your cluster and client. 1.1.1
> has some data-loss bugs on scanning.
>
> Thanks,
> Phil
>
>
> 2016-12-22 17:26 GMT+08:00 Rajeshkumar J <rajeshkumarit8...@gmail.com>:
>
> > can you please explain what is the cause of this lease exception a
processingTime:
60136 totalTime: 60137
On Wed, Jan 11, 2017 at 4:45 PM, Rajeshkumar J <rajeshkumarit8...@gmail.com>
wrote:
> I have tried setting hbase.client.scanner.caching to 10, 100, 1000 and
> 1. But it still fails.
>
> On Wed, Jan 11, 2017 at 2:17 PM, Yu Li <car...@gmai
I have hbase.client.scanner.caching set to 100 and I am scanning a table.
For instance, say we have 500 rows matching the current scan. When I call
ResultScanner.next(), what will happen? Will it return 100 rows for each
next() operation, or something else?
Thanks
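For what it's worth, ResultScanner.next() returns one Result at a time; hbase.client.scanner.caching controls how many rows are fetched per RPC into a client-side cache that next() then drains. A plain-Java model of that batching (no HBase dependency; the class and field names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Illustrative model of client-side scanner caching: next() hands back one
// row, refilling a local cache from the "server" in batches of `caching`.
public class CachingScannerDemo {
    private final List<String> serverRows;   // stands in for rows on the region server
    private final int caching;               // models hbase.client.scanner.caching
    private final Queue<String> cache = new ArrayDeque<>();
    private int fetched = 0;                 // rows pulled from the server so far
    int rpcCount = 0;                        // number of batched fetches issued

    CachingScannerDemo(List<String> serverRows, int caching) {
        this.serverRows = serverRows;
        this.caching = caching;
    }

    // Mirrors ResultScanner.next(): returns a single row, or null when exhausted.
    String next() {
        if (cache.isEmpty() && fetched < serverRows.size()) {
            int end = Math.min(fetched + caching, serverRows.size());
            cache.addAll(serverRows.subList(fetched, end)); // one batched fetch
            fetched = end;
            rpcCount++;
        }
        return cache.poll();
    }

    // Convenience: scan totalRows rows and report how many fetches occurred.
    static int rpcCountFor(int totalRows, int caching) {
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < totalRows; i++) rows.add("row" + i);
        CachingScannerDemo scanner = new CachingScannerDemo(rows, caching);
        while (scanner.next() != null) { }
        return scanner.rpcCount;
    }

    public static void main(String[] args) {
        // 500 matching rows with caching=100: 500 next() calls, 5 fetches.
        System.out.println("fetches: " + rpcCountFor(500, 100));
    }
}
```

So in the 500-row example, next() is called 500 times but only five round trips to the server occur.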
> get/put/append/increment/multi hbase.rpc.timeout takes effect, unless you
> specified hbase.client.operation.timeout.
> Check ClientScanner#scannerTimeout, ScannerCallable#call,
> AbstractRpcClient#channelOperationTimeout and
> AbstractRpcClient#callBlockingMethod for more details if interest
Hi, as I got a lease expired exception, I have increased the values of the
properties hbase.client.scanner.timeout.period and hbase.rpc.timeout. Is
there any disadvantage to increasing these two values?
On Mon, Jan 2, 2017 at 11:01 AM, Rajeshkumar J <rajeshkumarit8...@gmail.com>
wrote:
> I
We have 7000 regions but only 20 of them contain data. Could this be the
reason?
On Thu, Jan 5, 2017 at 10:58 PM, Stack <st...@duboce.net> wrote:
> On Thu, Jan 5, 2017 at 6:09 AM, Rajeshkumar J <rajeshkumarit8...@gmail.com
> >
> wrote:
>
> >
> > your rowkey. HBase wasn't really intended to do random scans of columns
> > without rowkey filters which is why I think that timeout is set so low by
> > default.
> >
> > This page helps:
> >
> > http://hbase.apache.org/0.94/book/rowkey.design.html
> &
this?
On Wed, Jan 4, 2017 at 11:42 AM, Stack <st...@duboce.net> wrote:
> Yeah. Try it.
> S
>
> On Tue, Jan 3, 2017 at 9:18 PM, Rajeshkumar J <rajeshkumarit8...@gmail.com
> >
> wrote:
>
> > So I need to ad
//hbase.apache.org/book.html#trouble.log.levels
>
> St.Ack
>
> On Tue, Jan 3, 2017 at 6:58 PM, Rajeshkumar J <rajeshkumarit8...@gmail.com
> >
> wrote:
>
> > Do we have to include this property in hbase-site.xml?
> >
> > On Wed, Jan 4, 2017 at 2:12 AM, Stack &
Do we have to include this property in hbase-site.xml?
On Wed, Jan 4, 2017 at 2:12 AM, Stack <st...@duboce.net> wrote:
> On Mon, Jan 2, 2017 at 10:32 PM, Rajeshkumar J <
> rajeshkumarit8...@gmail.com>
> wrote:
>
> > Do all rpc calls are logged in hbase region ser
Are all RPC calls logged in the HBase region server log? Also, I need to
find the time taken for each scan call. Is this possible?
Thanks
:
> I am having an extremely difficult time parsing this one, Rajeshkumar.
>
> If you still have a question/something you don't understand, I'd
> appreciate it if you could try to re-state it a little more clearly.
>
>
> Rajeshkumar J wrote:
>
>> I am using hbase 1.1.1.
nce you have not shared the
> version of HBase which you are using, I would recommend that you look at
> the code. It should be very obvious what value is being used instead of the
> bogus value you provided.
>
>
> Rajeshkumar J wrote:
>
>> I have tried setting hba
know internals, you should look at the code - in IDE such as
> Eclipse.
> Start from StoreScanner and read the classes which reference it.
>
> Cheers
>
> On Wed, Dec 28, 2016 at 12:59 AM, Rajeshkumar J <
> rajeshkumarit8...@gmail.com
> > wrote:
>
> > Ca
Can anyone point me to where I can learn HBase internals, such as scans, in
depth?
E-13090 and
> https://issues.apache.org/jira/browse/HBASE-1 should help reduce the
> need for you to tweak configuration properties in the future.
>
>
> Rajeshkumar J wrote:
>
>> Hi,
>> I have following property value as below
Hi,
I have following property value as below
hbase.client.scanner.timeout.period
6
hbase.rpc.timeout
6
When scanning 30 tables in hbase(each table holds 45 million records) I got
WARN client.ScannerCallable [main]:
data-loss bugs on scanning.
>
> Thanks,
> Phil
>
>
> 2016-12-22 17:26 GMT+08:00 Rajeshkumar J <rajeshkumarit8...@gmail.com>:
>
> > can you please explain what is the cause of this lease exception and is
> > there any solve this in current version
> >
the scanner expired before its scanning done.
> hbase.client.scanner.timeout.period is used both at client and server, may
> be different if you change one of sides.
>
> BTW, I still suggest that you can upgrade your cluster and client. 1.1.1
> has some data-loss bugs on scanning.
>
on and retry when the lease
> is expired at server.
>
> Thanks,
> Phil
>
>
> 2016-12-22 16:51 GMT+08:00 Rajeshkumar J <rajeshkumarit8...@gmail.com>:
>
> > Also there is a solution what i have found from hbase user guide that
> > hbase.rpc.timeout must be greater than h
Also, there is a solution I found in the HBase user guide:
hbase.rpc.timeout must be greater than hbase.client.scanner.timeout.period.
How do these two properties play a part in the above exception? Can anyone
explain?
On Wed, Dec 21, 2016 at 9:39 PM, Rajeshkumar J <rajeshkumar
understanding of this issue
On Wed, Dec 21, 2016 at 7:31 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> Which hbase release are you using ?
>
> There is heartbeat support when scanning.
> Looks like the version you use doesn't have this support.
>
> Cheers
>
> > On Dec 21
r.next().
As far as I understand, when scanner.next() is called it will fetch the
number of rows given by hbase.client.scanner.caching. When this fetching
process takes longer than the lease period, the server will close the scanner
object. Is that why this exception is occurring?
Thanks,
Rajeshkumar J
On Wed, Dec 21, 2016 at 5:07 PM, R
What is a lease exception?
Thanks,
Rajeshkumar J
Hi,
We are moving from HBase 0.94 to HBase 1.1.1. Some methods of HBase 0.94
are deprecated in 1.1.1. Can anyone post the link to the API documentation
for HBase 1.1.1? I can't find it.
Thanks
:12 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> 0.98 uses thrift 0.9.0
>
> 0.9.0
>
> Can you pastebin the complete stack trace ?
>
> Were all the client connections encountering this problem ?
>
> Thanks
>
> On Tue, Feb 16, 2016 at 12:47 AM, Rajeshkumar J &l
Hi,
Please ignore my previous mail.
My C# code was compiled against thrift 0.92 and I am trying to connect to
hbase 0.98.17
Thanks
On Tue, Feb 16, 2016 at 2:02 PM, Rajeshkumar J <rajeshkumarit8...@gmail.com>
wrote:
> Hi,
>
> For both I have used hbase 0.98.17 only
>
&
n, Feb 15, 2016 at 11:04 PM, Rajeshkumar J <
> rajeshkumarit8...@gmail.com
> > wrote:
>
> > Hi,
> >
> >
> >I am connecting to thrift server via C# code and I am getting the
> > following exception as shown below
> >
> >Thrift.Protocol.TProtocolE
Hi,
I am connecting to the Thrift server via C# code and I am getting the
following exception:
Thrift.Protocol.TProtocolException: Missing version in readMessageBegin,
old client?
Can anyone explain why the above occurs?
Thanks
Hi,
I am trying to store output from a Pig relation into HBase using the
following code:
store hbasedata into 'hbase://evallist' USING
org.apache.pig.backend.hadoop.hbase.HBaseStorage('fa:ID,fa:date,fa:email,fa:customername');
But it throws
Caused by: java.lang.IllegalArgumentException:
Hi,
Sorry, this issue is on my side, as the Thrift 2 server didn't start
correctly.
Thanks
On Tue, Feb 2, 2016 at 1:35 AM, Stack <st...@duboce.net> wrote:
> On Tue, Jan 26, 2016 at 9:02 PM, Rajeshkumar J <
> rajeshkumarit8...@gmail.com>
> wrote:
>
> > Hi,
> &g
Hi,
Can anyone guide me on how to implement a fuzzy row filter with ranges in
HBase using the Java API?
Thanks
I am using HBase 0.98.17
Thanks
On Wed, Jan 27, 2016 at 11:08 AM, Ted Yu <yuzhih...@gmail.com> wrote:
> Which hbase release are you using ?
>
> There were some relatively recent fixes going into thrift2 server.
>
> Cheers
>
> On Tue, Jan 26, 2016 at 9:12 PM, Rajes
Hi,
I have a table named temp in HBase. First I started Thrift server 1 and
queried the table, and it returned results.
Then I stopped Thrift server 1, started Thrift server 2, and queried the
table with some queries which I had used earlier,
but it didn't
> underlying table ?
>
> BTW I assume thrift servers 1 and 2 use the same binary - they just resided
> on different machines.
>
> Cheers
>
> On Tue, Jan 26, 2016 at 9:02 PM, Rajeshkumar J <
> rajeshkumarit8...@gmail.com>
> wrote:
>
> > Hi,
> >
> &
doop/hbase/client/Result.html
>
> On 20 January 2016 at 12:02, Rajeshkumar J <rajeshkumarit8...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I want to get all the column values for a given row-key.
> >
> > Row-key col1 col2 col3 col4 col5
> >
e or better in performance as compared to Thrift
> api.
> > With Thrift api there is an extra hop. So, most of the time java api
> would
> > be better for performance.
> >
> > Sent from my iPhone
> >
> > > On Jan 12, 2016, at 4:29 AM, Rajeshkumar J <
&
try to
> retrieve the latest version of row key 100?
>
> Regards
> Ram
>
>
> On Fri, Jan 22, 2016 at 10:55 AM, Rajeshkumar J <
> rajeshkumarit8...@gmail.com
> > wrote:
>
> > Hi,
> >
> > For instance
> >
> > Row-key col1
Hi,
I have already posted this on the mailing list, but with changes in my use
case. Is there any option to retrieve all the columns of a row key at once?
ResultScanner resultScanner = table.getScanner(scan);
Iterator<Result> iterator = resultScanner.iterator();
while (iterator.hasNext()) {
sult is an individual KeyValue which you may have to
> iterate and get it.
> >> So is there any option to get all the column
> values of row-key at once.
> So this is already happening for you. Am I missing something here?
>
> On Fri, Jan 22, 2016 at 10:31 AM, Rajeshkumar
y possible.
>
> Regards
> Ram
>
> On Fri, Jan 22, 2016 at 11:11 AM, Rajeshkumar J <
> rajeshkumarit8...@gmail.com
> > wrote:
>
> > If that is the case if I do maintain only one versions of my data is this
> > retrieval is possible?
> >
> >
scan completed how
to get all the column values of id100 row key without iteration.
Thanks,
Rajeshkumar J
ly","some_qualifier");
> byte[] data = CellUtil.cloneValue(dataCell);
>
> The byte array "data" will represent the value of the column you've
> retrieved. Note that this code returns the latest version of the cell
> you've specified, and you'll have to do the necessary
new
> Get("id100".getBytes())); should do the trick.
>
> On Tue, Jan 19, 2016 at 7:17 AM, Rajeshkumar J <
> rajeshkumarit8...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I have implemented Range scan using Java API as shown below
> >
> >
Hi,
I am currently accessing records via the HBase Thrift server and it is fast.
If I access HBase using the Java API, will it be as fast as Thrift?
Thanks
hih...@gmail.com> wrote:
> I took a look at the stack trace you posted.
>
> Some thrift worker was in the state shown in HBASE-14926
>
> I wonder if the fix for HBASE-14926 would help in your case.
>
> Cheers
>
> On Mon, Jan 11, 2016 at 5:11 AM, Rajeshkumar J <
> rajeshkumar
va jstack tutorial' using google, you would find a
> lot of tutorials.
>
> You can use 'ps aux' command to search for the following class:
> org.apache.hadoop.hbase.thrift.ThriftServer
>
> Apply jstack on the corresponding process id.
>
> Cheers
>
> On Sun,
ng jstack
> command) when your query is running.
>
> This would give us some more clue as to the state thrift server was in when
> the problem happened.
>
> Cheers
>
> On Sun, Jan 10, 2016 at 9:11 AM, Rajeshkumar J <
> rajeshkumarit8...@gmail.com>
> wrote:
d a
> lot of tutorials.
>
> You can use 'ps aux' command to search for the following class:
> org.apache.hadoop.hbase.thrift.ThriftServer
>
> Apply jstack on the corresponding process id.
>
> Cheers
>
> On Sun, Jan 10, 2016 at 9:43 PM, Rajeshkumar J <
> rajeshkumarit8...@g
one help me in this?
Regards,
Rajeshkumar J
NFO [ConnectionCache_ChoreService_1]
zookeeper.ZooKeeper: Session: 0x1522613893e0005 closed
Thanks,
Rajeshkumar J
On Sun, Jan 10, 2016 at 10:31 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> Can you take a stack trace of the thrift server and pastebin the trace ?
>
> Thanks
>
> On Sun,
effected by the issue.
>
> Thanks
>
> 2016-01-06 23:29 GMT-08:00 Rajeshkumar J <rajeshkumarit8...@gmail.com>:
> > Hi,
> >
> > I am connecting via Hbase thrift server to access records in Hbase and
> I
> > am doing this in C# and i am using range scan to get
On Jan 4, 2016, at 3:44 AM, Rajeshkumar J <rajeshkumarit8...@gmail.com>
> wrote:
> >
> > Hi,
> >
> > We can use fuzzy row filter when rowkey has fixed length.I decided to
> > design row key as yyyy-mm-dd| platform id | control id where platform id
> > and co
dependency jars, but it still throws "Create constructor
FuzzyRowFilter(java.util.List)", and I am using HBase version 0.98.13.
Can anyone help me solve this?
Thanks
On Mon, Jan 4, 2016 at 12:16 PM, Rajeshkumar J <rajeshkumarit8...@gmail.com>
wrote:
> Hi,
>
>
w filter with
> substring comparator.
>
>
> Tariq, Mohammad
> about.me/mti
>
>
> On Mon, Jan 4, 2016 at 4:08 PM, Rajeshkumar J <rajeshkumarit8...@gmail.com
> >
> wrote:
>
> > Hi,
&g
Hi,
We can use the fuzzy row filter when the row key has a fixed length. I
decided to design the row key as yyyy-mm-dd|platform id|control id, where
platform id and control id are two-digit numbers.
We have a combo box through which we can select different values of date,
platform and control for the search. For this
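The fixed-length design above lends itself to FuzzyRowFilter, which takes a key/mask pair where, by its convention, a mask byte of 0 means "this byte must match" and 1 means "this byte may be anything". A plain-Java sketch of building the key and a mask that fuzzes only the date portion (no HBase dependency; the class and helper names are illustrative):

```java
import java.nio.charset.StandardCharsets;

// Fixed-length row key "yyyy-mm-dd|PP|CC" plus a FuzzyRowFilter-style mask
// (0 = byte must match, 1 = byte may be anything). Plain Java sketch.
public class FuzzyKeyDemo {
    // Zero-pads platform/control ids to two digits so every key has one length.
    static String rowKey(String date, int platformId, int controlId) {
        return String.format("%s|%02d|%02d", date, platformId, controlId);
    }

    // Mask for "any date, fixed platform and control": fuzz the 10 date bytes.
    static byte[] maskAnyDate() {
        byte[] mask = new byte[16];                // 10 (date) + 1 + 2 + 1 + 2
        for (int i = 0; i < 10; i++) mask[i] = 1;  // date bytes are fuzzy
        return mask;                               // separators and ids fixed (0)
    }

    public static void main(String[] args) {
        String key = rowKey("2016-01-04", 3, 12);
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        System.out.println(key + " len=" + keyBytes.length);
        // With HBase on the classpath, this pair would feed something like
        // new FuzzyRowFilter(Arrays.asList(new Pair<>(keyBytes, maskAnyDate()))).
    }
}
```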
Hi,
I have designed my row key as yyyy-mm-dd|platform|control and the
column values are the ids of the customers who visited the respective
platform and control on a particular day.
For instance
2016-01-01 | Bigdata | Bigdata 1002,20005,3009
Now we have a combo box through which we
AM, Ted Yu <yuzhih...@gmail.com> wrote:
> For the example given below, you can specify PrefixFilter for the scan.
>
> Please see also for examples of filter involving regex:
> https://issues.apache.org/jira/browse/HBASE-9428
>
> > On Dec 30, 2015, at 9:57 PM, Ra
Hi
Can anyone tell me the HBase shell command to use the
MultiRowRangeFilter?
Thanks
Hi,
Currently I am using a range scan for querying data in my HBase table. Is
there any option to use a regular expression in a range scan? For example,
this is my range scan query:
scan 'tmp', {STARTROW => '539428', ENDROW => '539430'}
Is there any option available so that we can use
scan
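A range scan cannot take a regex, but a common workaround for prefix-shaped queries is to compute the stop row as "prefix plus one": increment the last byte that is not 0xFF and drop everything after it. A plain-Java sketch of that calculation (no HBase dependency; the method name is illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Computes the exclusive stop row that makes [prefix, stop) cover exactly
// the keys starting with the given prefix.
public class PrefixStopRowDemo {
    static byte[] stopRowForPrefix(byte[] prefix) {
        byte[] stop = Arrays.copyOf(prefix, prefix.length);
        for (int i = stop.length - 1; i >= 0; i--) {
            if (stop[i] != (byte) 0xFF) {
                stop[i]++;                       // e.g. "5394" -> "5395"
                return Arrays.copyOf(stop, i + 1);
            }
        }
        return new byte[0];                      // all 0xFF: scan to end of table
    }

    public static void main(String[] args) {
        byte[] start = "5394".getBytes(StandardCharsets.UTF_8);
        byte[] stop = stopRowForPrefix(start);
        // In shell terms: scan 'tmp', {STARTROW => '5394', STOPROW => '5395'}
        // covers every key beginning with "5394".
        System.out.println(new String(stop, StandardCharsets.UTF_8));
    }
}
```

For true pattern matching on keys, a RowFilter with a regex comparator (as the reply below suggests) is the usual alternative, at the cost of scanning more data.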
etc to suffice requirement of your query.
> So, you will need to look at your data model and pick the best query.
>
> HTH,
> Anil
>
> On Thu, Dec 17, 2015 at 10:17 PM, Rajeshkumar J <
> rajeshkumarit8...@gmail.com
> > wrote:
>
> > Hi,
> >
> >My
cover your use case?
>
> On Fri, Dec 18, 2015 at 4:29 AM, Rajeshkumar J <
> rajeshkumarit8...@gmail.com>
> wrote:
>
> > Hi Anil,
> >
> >I have about 10 million rows with each rows having more than 10k
> > columns. I need to query this table ba
t; If you know exact rowkey of row that you need to fetch then you just need
> to use GET. If you know just the prefix of rowkey, then you can use range
> scans in HBase. Does the above 2 scenario's cover your use case?
>
> On Fri, Dec 18, 2015 at 4:29 AM, Rajeshkumar J <
>