Just to add, I verified that hbase-client-2.0.0-SNAPSHOT.jar (which
contains org.apache.hadoop.hbase.HColumnDescriptor) is in the HBase classpath.
On Fri, Jan 8, 2016 at 2:44 PM, Sreeram <sreera...@gmail.com> wrote:
> Hi,
>
> I built HBase using cygwin in my local machine (the maste
thoughts on what is going on here ?
Sreeram
counter?
>
> JMS
>
> 2016-03-18 5:33 GMT-04:00 Sreeram <sreera...@gmail.com>:
>
> > Hi,
> >
> > I am looking for suggestions from the community on implementing HBase
> > increment in an idempotent manner.
> >
> > My use case is a storm Hbase bol
.
Thank you.
Regards,
Sreeram
te those few columns together?
>
> 2016-03-18 6:14 GMT-04:00 Sreeram <sreera...@gmail.com>:
>
> > The incremented field is more like an amount field that will be storing
> the
> > aggregate amount. Since the field will be incremented concurrently by
> > multi
Hi Soufiani,
Can you try changing your configuration to have the region server listen on
0.0.0.0:16020 and the master listen on 0.0.0.0:16000?
127.0.0.1, being the local loopback address, will not be accessible from outside the host.
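If it helps, the bind addresses can usually be set in hbase-site.xml. A minimal sketch, assuming the standard property names (hbase.master.ipc.address and hbase.regionserver.ipc.address; please verify against the defaults shipped with your HBase version):

```
<!-- hbase-site.xml: bind the IPC listeners to all interfaces -->
<property>
  <name>hbase.master.ipc.address</name>
  <value>0.0.0.0</value>
</property>
<property>
  <name>hbase.regionserver.ipc.address</name>
  <value>0.0.0.0</value>
</property>
```

After changing these, the master and region server processes need a restart to pick up the new bind addresses.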
Regards,
Sreeram
On Fri, Apr 22, 2016 at 9:00 PM, SOUFIANI Mustapha | السفياني مصطفى
replicated successfully in cluster B.
I went through the WALEdit API and it is not obvious to me if it is
possible to retrieve the attributes set on the row mutation.
Kindly let me know your suggestions.
Regards,
Sreeram
Thank you very much Ted. I understand that fetching the tags will retrieve the
associated attributes for a mutation. I will try out the same.
Regards,
Sreeram
On 29 Jan 2017 00:37, "Ted Yu" <yuzhih...@gmail.com> wrote:
In CellUtil, there is the following method:
public static Tag
to check if there are any limits on the number of rows per
cluster.
Would it be advisable in such a situation to split the cluster into two or
more independent clusters? Would there be any impact on the read/write
throughput/latency as the table grows over time?
Please advise.
Regards,
Sreeram
Hi Ted,
From the link:
"Around 50-100 regions is a good number for a table with 1 or 2 column
families. Remember that a region is a contiguous segment of a column
family.".
Is this number of 50-100 regions per table at the level of an individual
region server, or for the entire cluster?
Th
WAL files in HDFS need to be written to? The network
drive has synchronous replication across data centers. If the WAL files can
be written to the replicated network drives, can we recover in-flight data
in the passive cluster and resume operations from there?
Regards,
Sreeram
to
the actual size of row in HBase.
Is my understanding correct? Kindly let me know.
Regards,
Sreeram
ks
On 23 Mar 2017 02:37, "Ted Yu" <yuzhih...@gmail.com> wrote:
> Sreeram:
> For #2, did you mean this method ?
>
> default void postWALRestore(final ObserverContext<? extends RegionCoprocessorEnvironment> ctx,
>
>                              HRegionInfo info, WALKey logKey, WALEdit logEdit)
: System coprocessor
Test.TestWALEditCP was loaded successfully with priority (536870912).
Any thoughts on what could be going wrong?
Thanks,
Sreeram
PS: My code is below.
package Test;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import
dit() coprocessor event
for transaction 0 be triggered before that of transaction 1?
Thanks,
Sreeram
PS: I use HBase version 1.2.0
only the latest values for
the column, provided they match the filter.
Is this the expected behaviour of ValueFilter? Are there any options I
should be setting to keep the older values out of
the result?
Thank you
Regards,
Sreeram
that I use is HBase 1.2.0-cdh5.8.2
Kindly let me know
Thank you
-Sreeram
the community on this.
Thanks,
Sreeram
(or maybe it's not obvious to me).
The version of HBase that I use is 1.2.0-cdh5.8.2
Any help in this regard?
Thanks,
Sreeram
to influence HBase to use higher values for the
socket read/write buffers when it does replication.
Any thoughts from the community on the same?
Thanks
Sreeram
[1] http://www.onlamp.com/pub/a/onlamp/2005/11/17/tcp_tuning.html
this in the HBase shell - scan with wildcard characters?
(Or) should we end up using Hive? What are the other options?
Can you please let me know.
-Sreeram
like info:regioninfo1, regioninfo2.
- Original Message -
From: lars hofhansl lhofha...@yahoo.com
To: user@hbase.apache.org user@hbase.apache.org; Sreeram K
sreeram...@yahoo.com
Cc:
Sent: Monday, December 12, 2011 10:45 PM
Subject: Re: HBase- Scan with wildcard character
First off, what
hofhansl lhofha...@yahoo.com
To: user@hbase.apache.org user@hbase.apache.org; Sreeram K
sreeram...@yahoo.com
Cc:
Sent: Tuesday, December 13, 2011 11:36 AM
Subject: Re: HBase- Scan with wildcard character
info:regioninfo is actually a serialized Java object (HRegionInfo). What you
see
Thanks Doug. I am looking at this more from the HBase shell side.
- Original Message -
From: Doug Meil doug.m...@explorysmedical.com
To: user@hbase.apache.org user@hbase.apache.org; Sreeram K
sreeram...@yahoo.com; lars hofhansl lhofha...@yahoo.com
Cc:
Sent: Tuesday, December 13, 2011 2:01 PM
Thank you Lars.
STOPROW did work in my hbase shell as you suggested
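For the archive, a minimal sketch of the kind of bounded scan being discussed, as run from the HBase shell (the table name and row keys are made up for illustration; STOPROW is exclusive):

```
hbase> scan 'mytable', {STARTROW => 'row-0100', STOPROW => 'row-0200'}
```

Because row keys are byte-ordered, a start/stop pair like this acts as a prefix-style "wildcard" over any keys sorting between the two bounds.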
- Original Message -
From: lars hofhansl lhofha...@yahoo.com
To: user@hbase.apache.org user@hbase.apache.org; Sreeram K
sreeram...@yahoo.com
Cc:
Sent: Tuesday, December 13, 2011 3:56 PM
Subject: Re: HBase- Scan
I have one more question.
Can we have a query in the HBase shell based on a column value?
I am looking at scan with a column ID; is that possible, the way we are doing
it with STARTROW?
Can you please point me to an example.
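For reference, a sketch of what a value-based query can look like in the HBase shell; the table, column family, and qualifier names here are illustrative, and the filter-string syntax should be checked against your HBase version:

```
hbase> scan 'mytable', {FILTER => "SingleColumnValueFilter('cf', 'amount', =, 'binary:100')"}
```

This filters server-side on the value of cf:amount, unlike STARTROW/STOPROW, which bound the scan by row key only.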
- Original Message -
From: Sreeram K sreeram...@yahoo.com
To: user
Hi,
I am looking for options to batch the output of HBase scan with prefix filter,
so that it can be paginated at the front end.
Please let me know if there are recommended methods to do the same.
Thank you.
Sreeram
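One common pattern for paginating a prefix scan is to limit each page (for example with a PageFilter) and start the next page at the last returned row key plus a trailing zero byte, which is the smallest key strictly greater than it in HBase's byte ordering. A minimal, self-contained sketch of that row-key arithmetic (nextStartRow is a hypothetical helper, not an HBase API):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class PaginationSketch {
    // Hypothetical helper: the exclusive start row for the next page,
    // i.e. the smallest byte array that sorts strictly after lastRow.
    static byte[] nextStartRow(byte[] lastRow) {
        byte[] next = Arrays.copyOf(lastRow, lastRow.length + 1);
        next[lastRow.length] = 0x00; // append the smallest possible suffix
        return next;
    }

    public static void main(String[] args) {
        // Last row key returned on the previous page (illustrative value).
        byte[] last = "row-0199".getBytes(StandardCharsets.UTF_8);
        byte[] next = nextStartRow(last);
        System.out.println(next.length); // 9
    }
}
```

On each page you would then build a new Scan with this start row, the same prefix filter, and the page-size limit; the loop stops once a page comes back with fewer rows than the limit.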