Yes, the command enable_table_replication checks whether the table exists
in the peer cluster and, if so, compares the CFs. You said it correctly: a
difference in the table descriptions makes this command fail. You
can enable replication at the source using the alter table command. We can fix
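As a sketch, setting the replication scope at the source from the HBase shell looks like the following (the table and CF names are placeholders; older HBase versions may require disabling the table around the alter):

```
# Run inside the HBase shell on the source cluster.
# 'my_table' and 'cf1' are hypothetical names.
disable 'my_table'
alter 'my_table', {NAME => 'cf1', REPLICATION_SCOPE => 1}
enable 'my_table'
```

The peer must already have a table with a matching descriptor for enable_table_replication to succeed.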
> ERROR [1955596624@qtp-58762-113898] mortbay.log:
/jmxjava.lang.IllegalStateException: Committed
This second exception trace is due to a code-level bug in
JMXJsonServlet. It won't occur once the JMX URL is corrected anyway. I
will file a jira to fix this code-level issue.
Anoop
On Tue,
In another mail thread Zheng Hu brought up an important jira fix:
https://issues.apache.org/jira/browse/HBASE-21657
Can you please check with this once?
Anoop
On Tue, Jun 9, 2020 at 8:08 PM Jan Van Besien wrote:
> On Sun, Jun 7, 2020 at 7:49 AM Anoop John wrote:
> > As per the abov
As per the above configs, it looks like the Bucket Cache is not being used;
only the on-heap LRU cache is in use.
@Jan - Is it possible for you to test with an off-heap Bucket Cache? Configure
the bucket cache in off-heap mode with a size of ~7.5 GB.
Do you have any DataBlockEncoding enabled on the CF?
Anoop
On Fri, Jun 5,
Congrats Viraj !!!
Anoop
On Tue, Dec 31, 2019 at 9:26 AM Sukumar Maddineni
wrote:
> Wow congrats Viraj and Keep up the good work.
>
> --
> Sukumar
>
> On Mon, Dec 30, 2019 at 5:45 PM 宾莉金(binlijin) wrote:
>
> > Welcome and Congratulations, Viraj!
> >
> > 张铎(Duo Zhang) wrote on Mon, Dec 30, 2019 at 1:18 PM:
Hi
When you did a put with a lower qualifier int (put 'mytable',
'MY_ROW', "pcf:\x0A", "\x00"), the system flow gets a valid cell at
the very first step, and that gets passed to the Filter. The Filter does
a seek which simply skips all the in-between deletes and puts
Congratulations Stephen.
-Anoop-
On Tue, Aug 6, 2019 at 11:19 AM Pankaj kr wrote:
> Congratulations Stephen..!!
>
> Regards,
> Pankaj
>
> -Original Message-
> From: Sean Busbey [mailto:bus...@apache.org]
> Sent: 06 August 2019 00:27
> To: dev ; user@hbase.apache.org
> Subject:
Congrats Zheng.
Anoop
On Tue, Aug 6, 2019 at 8:52 AM OpenInx wrote:
> I'm so glad to join the PMC, Apache HBase is a great open source project
> and the
> community is also very nice and friendly. In the coming days, will do
> more to make
> the project & community forward, also we need
We might need a static creator method which takes start and end time in TimeRange?
Anoop
On Tue, Aug 6, 2019 at 9:11 AM OpenInx wrote:
> Hi
> I've checked the code, I think you can use the deprecated TimeRange(ts+1)
> first, also the methods defined in TimeRange
> is not good enough now, we may need
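For reference, the TimeRange(ts, ts+1) trick mentioned above works because TimeRange treats minStamp as inclusive and maxStamp as exclusive. A minimal Python sketch of that containment rule (illustrative, not HBase code):

```python
# TimeRange semantics sketch: [min_stamp, max_stamp) -- min inclusive,
# max exclusive -- so (ts, ts + 1) matches exactly the timestamp ts.
def within_time_range(cell_ts, min_stamp, max_stamp):
    return min_stamp <= cell_ts < max_stamp

ts = 1565000000000
assert within_time_range(ts, ts, ts + 1)
assert not within_time_range(ts - 1, ts, ts + 1)
assert not within_time_range(ts + 1, ts, ts + 1)
```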
Congratulations Sakthi
-Anoop-
On Sat, Aug 3, 2019 at 2:11 AM Stack wrote:
> Hurray!
> S
>
> On Wed, Jul 31, 2019 at 5:04 PM Sean Busbey wrote:
>
> > On behalf of the HBase PMC, I'm pleased to announce that Sakthi has
> > accepted our invitation to become an HBase committer.
> >
> > We'd
Based on what you pasted as the config:
"
hbase.hregion.max.filesize
10737418240
Maximum HStoreFile size. If any one of a column family's HStoreFiles has
grown to exceed this value, the hosting HRegion is split in two.
"
I can say the issue is the version of HBase.
Older
Hi Srinidhi
You have the file-mode bucket cache. What is the size of the
cache? Did you configure a single file path for the cache, or more than one
path? If the former, splitting the cache into multiple files (paths can be
given comma-separated in the config) may help.
Anoop
On Fri, Apr 5, 2019 at
And once you successfully reset the encryption for your table:cf, do run a
major compaction for the table:cf so that the encrypted data gets rewritten,
this time with no encryption. Please remember that a major compaction involves
lots of IO and puts load on your RS too. It depends on total data
Congrats Jingyun !!!
-Anoop-
On Wed, Nov 14, 2018 at 8:18 AM Jingyun Tian wrote:
> Thank you all!
>
> Sincerely,
> Jingyun Tian
>
> On Wed, Nov 14, 2018 at 8:59 AM stack wrote:
>
> > Welcome jingyun.
> > S
> >
> > On Mon, Nov 12, 2018, 11:54 PM 张铎(Duo Zhang) wrote:
> >
> > > On behalf of the
the JIRA
> https://issues.apache.org/jira/browse/HBASE-14061, it seems this is only
> available in HBase 2.0?
>
> Thanks,
> Ming
>
> -----Original Message-
> From: Anoop John <anoop.hb...@gmail.com>
> Sent: Tuesday, April 17, 2018 1:42 PM
> To: user@hbase.apache.org
&g
When creating table, user can set storage policy on column family.
Each of the CF can have different policy. Pls see
setStoragePolicy(String) API in HColumnDescriptor.
-Anoop-
On Tue, Apr 17, 2018 at 7:16 AM, Ming wrote:
> Hi, all,
>
>
>
> HDFS support HSM, one can set a
Hi Jim
Just taking your example: using cell level labels along with
AggregationClient will NOT work. The reason is that the server-side
Aggregation impl creates a scanner directly over the Region.
As you know, the cell level security features work with the help of a
co
>can it affect visibility in any way? Does it
still follow the principle "When a client receives a "success" response for
any mutation, that mutation is immediately visible to both that client and
any client with whom it later communicates through side channels" ?
No, it won't affect the mentioned
we have the cache on write setting set to true, it will refuse to cache
>>> blocks for a file that is a newly compacted one. In our case we have sized
>>> the bucket cache to be big enough to hold all our data, and really want to
>>> avoid having to go to S3 until the
>>a) it was indeed one of the regions that was being compacted, major
compaction in one case, minor compaction in another, the issue started just
after compaction completed blowing away bucket cached blocks for the older
HFile's
About this part: yes, after the compaction, there is a step where
BulkDeleteEP allows you to pass a Scan. Whatever rows this Scan returns
can all be deleted. So you can create an appropriate Scan to fetch
all rows with a common prefix. Please try once.
-Anoop-
On Sat, Mar 3, 2018 at 2:49 AM, Ted Yu wrote:
> For #2, BulkDeleteEndpoint still
As which user are you running the scan shell command? It has to be
michaelthomasen, as you have set auths for that username. It looks like the
user running the command is the HBase superuser who started the RS process.
Then all cells will be returned irrespective of their visibility and the scan
auths.
Anoop
Congrats Peter..
Anoop
On Friday, February 23, 2018, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:
> Congratulations Peter !!!
>
> On Fri, Feb 23, 2018 at 3:40 PM, Peter Somogyi
> wrote:
>
> > Thank you very much everyone!
> >
> > On Thu, Feb 22, 2018 at
Hi
It seems you have write ops happening, as you mentioned
minor compactions. When a compaction happens, the compacted file's
blocks get evicted, whatever the value of
'hbase.rs.evictblocksonclose'. That config comes into play when the
Store is closed, meaning the region
can do this convert
-Anoop-
On Thu, Feb 1, 2018 at 1:07 PM, Anoop John <anoop.hb...@gmail.com> wrote:
> Theoretically I dont think what you are trying to do is correct. I
> mean the incoming keys are fully ignored and new one is been made at
> CP layers. When CP is been co
hought HBase will stop new coming puts, finish all of the puts
>> in the batch, and then try to split.
>> But this maybe not right according to the exception that I got.
>>
>> BTY , It seems that I can't add put
>> to MiniBatchOperationInProgress miniBatchOp. There are on
Another related question was also there: can you tell us the actual
requirement? For the incoming puts, do you want to change their RKs?
Or do you want to insert those as well as some new cells with a changed
RK?
-Anoop-
On Mon, Jan 29, 2018 at 3:49 PM, Yang Zhang wrote:
> Hello
The fix you tried now will help you. You cannot avoid the loop by
using the complete or bypass way, because that applies only to the present
context. Another put on the region will create a new context, and so it
continues. One more suggestion would be to look at the hook
preBatchMutate. You will
Karthick
What are your server and client versions? Are you using the Java
client or something else?
-Anoop-
On Thu, Jan 18, 2018 at 12:46 PM, ramkrishna vasudevan
wrote:
> Hi
>
> Which version of HBase you get this problem? Do you have any pb classpath
>
FYI
It has to be multiples of 256.
hbase.bucketcache.bucket.sizes
A comma-separated list of sizes for buckets for the
bucketcache.
Can be multiple sizes. List block sizes in order from smallest to largest.
The sizes you use will depend on your data access patterns.
Must
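A small Python sketch of the constraint above (the config string format is assumed for illustration): every size in hbase.bucketcache.bucket.sizes must be a multiple of 256 and the sizes must be listed in ascending order.

```python
# Validate a hypothetical hbase.bucketcache.bucket.sizes value:
# all sizes multiples of 256, listed smallest to largest.
def validate_bucket_sizes(csv):
    sizes = [int(s) for s in csv.split(",")]
    return all(sz % 256 == 0 for sz in sizes) and sizes == sorted(sizes)

assert validate_bucket_sizes("5120,9216,17408,33792")
assert not validate_bucket_sizes("5000,9216")   # 5000 is not a multiple of 256
assert not validate_bucket_sizes("9216,5120")   # not in ascending order
```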
Congrats and welcome Zheng...
-Anoop-
On Mon, Oct 23, 2017 at 11:51 AM, ramkrishna vasudevan
wrote:
> Welcome Zheng and congratulations !!!
>
> On Mon, Oct 23, 2017 at 11:48 AM, Duo Zhang wrote:
>
>> On behalf of the Apache HBase PMC, I am
Congrats Chia-Ping Welcome..
-Anoop-
On Sat, Sep 30, 2017 at 12:28 PM, Ashish Singhi wrote:
> Congratulations, Chia-Ping.
>
> On Sat, Sep 30, 2017 at 3:49 AM, Misty Stanley-Jones
> wrote:
>
>> The HBase PMC is delighted to announce that Chia-Ping
+1 on what Ram said about why the HM also needs this setting in your version.
The HM starts a RS internally. You can do one thing: use a different
hbase config xml for the HM process than the one used for the RS. Maybe
we also need some fix, as Ram said, especially when no regions are
allowed on the HM side.
In
You cannot rely on postDelete for cells removed because of TTL,
because that is not a user-initiated delete op and there won't be any
postDelete happening for such a cell. The cell stays in some
HFile even after the TTL time has elapsed. When a read happens,
the read flow will filter out
> since if i have 10 rows to update,then it means that 9 rows have no
> rowlocks.so is 'at least one rowlock' more symbolic than practical
> significane?
Please note that it is doMiniBatchMutation. There are 10 rows in your batch.
The batch might get completed in many mini batches. When one mini
batch
>We think minor compact is just combining files , say, if we have 10 hfiles
>using 100G disk space, after minor compact, it still should be 100G, if not
>less.
That is partially true. The thing is, while the new compacted file is being
created, there will temporarily be duplicate data. Meaning unless the
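The temporary duplication described above can be put in rough numbers (the figures are hypothetical; the actual savings depend on how much duplicate or deleted data the compaction drops):

```python
# While a compaction writes its output, old and new files coexist,
# so peak disk usage temporarily exceeds the steady-state size.
old_files_gb = 100
duplicate_ratio = 0.1                  # assume 10% duplicates/deletes dropped
new_file_gb = old_files_gb * (1 - duplicate_ratio)
peak_gb = old_files_gb + new_file_gb   # before the old files are deleted
assert peak_gb == 190.0
assert new_file_gb < old_files_gb      # size shrinks only after cleanup
```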
So on the 2nd table, even though there are 4 CFs, while scanning you need
data from only a single CF. And is this CF under test similar to what you
have in the 1st table? I mean the same encoding and compression scheme
and data size? How do you create the scan for the 2nd table? I hope
you do
Scan s = new
add release note to make it clear for
> user what the final changes are
>
> Best Regards,
> Yu
>
> On 7 August 2017 at 15:19, Anoop John <anoop.hb...@gmail.com> wrote:
>
>> Sorry for being late here Yu Li.
>> Regarding counting the rows (for the new metric) in multi.
Thanks for the write-up, Stack.
Thanks to Huawei and all sponsors, and especially to Jieshan. A few months
back, such a conference was just a dream, and it is mainly because of
him that this could happen. Hope we can continue this HBaseConAsia.
-Anoop-
On Tue, Aug 8, 2017 at 7:02 AM, Bijieshan
Sorry for being late here, Yu Li.
Regarding counting the rows (for the new metric) in multi: there
might be 2 Actions in a multi request for the same row. This is possible
sometimes. I don't think we should check that and try to make it
perfect. That would have a perf penalty as well. So just saying
t; >>
>> >> -Original Message-
>> >> From: ramkrishna vasudevan [mailto:ramkrishna.s.vasude...@gmail.com]
>> >> Sent: Tuesday, August 1, 2017 11:16 AM
>> >> To: user@hbase.apache.org
>> >> Subject: Re: Awesome HBase - a curated
This is great and very useful. Thanks.
On Mon, Jul 31, 2017 at 8:34 PM, Robert Yokota wrote:
> To help me keep track of all the awesome stuff in the HBase ecosystem, I
> started a list. Let me know if I missed anything awesome.
>
> https://github.com/rayokota/awesome-hbase
Congrats Abhishek
On Mon, Jul 31, 2017 at 1:48 PM, Yu Li wrote:
> Congratulations, Abhishek!
>
> Best Regards,
> Yu
>
> On 31 July 2017 at 15:40, Jingcheng Du wrote:
>
>> Congratulations!
>>
>> Regards,
>> Jingcheng
>>
>> 2017-07-31 12:49 GMT+08:00
You got it correct.
Within your EP (which handles one region), you can get that Region's
memstore size, and then add them all up at the client side.
-Anoop-
On Tue, Jul 25, 2017 at 12:10 AM, Dave Birdsall wrote:
> Hi,
>
> I think I understand the answer.
>
> My question was based on
You mean within your RegionObserver you are doing the get? Within
which hook? How are you doing the get? Can you paste
that sample code?
-Anoop-
On Wed, Jul 26, 2017 at 8:02 PM, Veerraju Tadimeti wrote:
> Hi,
>
> If i use GET operation, is there any chance of
t is
>> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl
>>
>> #1 way (Put scanner in local map) - may not be possible, cos if two
>> different scan operation with and without attribute hits at the same time,
>> how can we differentiate in postScanner
alScanner parameter
> instead of RegionScanner.
>
> FYI
>
> On Sun, Jul 9, 2017 at 10:57 PM, Anoop John <anoop.hb...@gmail.com> wrote:
>
>> Ya as Ted said, u are not getting Scan object in the postScannerNext
>> and so can not make use of the attribute in Scan wit
Yes, as Ted said, you do not get the Scan object in postScannerNext
and so cannot make use of the attribute in the Scan within this hook.
Just setting the sharedData variable will cause issues with concurrent
scans (as you imagine).
So I can think of solving this in 2 possible ways. (Maybe more ways
Congrats Chunhui..
On Wed, Jul 5, 2017 at 6:55 AM, Pankaj kr wrote:
> Congratulations Chunhui..!!
>
> Regards,
> Pankaj
>
>
> -Original Message-
> From: Yu Li [mailto:car...@gmail.com]
> Sent: Tuesday, July 04, 2017 1:24 PM
> To: d...@hbase.apache.org;
Yes, we have a per-cell TTL setting from the 0.98.9 release onwards.
That should be the best way for your use case.
See Mutation#setTTL(long)
-Anoop-
On Fri, Jun 23, 2017 at 2:17 PM, yonghu wrote:
> I did not quite understand what you mean by "row timestamp"? As far as I
>
Congrats and welcome...
-Anoop-
On Mon, Jun 19, 2017 at 11:44 AM, Huaxiang Sun wrote:
> Congratulations!
>
>
> On Sun, Jun 18, 2017 at 9:43 PM, ramkrishna vasudevan <
> ramkrishna.s.vasude...@gmail.com> wrote:
>
>> Congratulations !!!
>>
>> On Mon, Jun 19, 2017 at 9:11 AM,
Congrats Allan.. Welcome !!
-Anoop-
On Fri, Jun 9, 2017 at 9:27 AM, 张铎(Duo Zhang) wrote:
> Congratulations!
>
> 2017-06-09 11:55 GMT+08:00 Ted Yu :
>
>> Congratulations, Allan !
>>
>> On Thu, Jun 8, 2017 at 8:49 PM, Yu Li wrote:
You can configure this max WALs (as Yu said, hbase.regionserver.maxlogs).
When the total count of un-archived WAL files exceeds this, we do
force flushes and so release some of the WALs. As Yu mentioned, when
we use multi WAL and say we have 2 WAL groups, this WAL count
effectively will be
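The forced-flush behaviour can be sketched as follows (illustrative Python, not HBase internals; the real code flushes the regions whose edits pin the oldest WALs so those files can be archived):

```python
# When un-archived WALs exceed maxlogs, the oldest ones must be
# released; flushing the regions holding their edits allows archival.
def wals_to_release(unarchived_wals, maxlogs):
    excess = len(unarchived_wals) - maxlogs
    return unarchived_wals[:excess] if excess > 0 else []

wals = ["wal-1", "wal-2", "wal-3", "wal-4", "wal-5"]
assert wals_to_release(wals, maxlogs=3) == ["wal-1", "wal-2"]
assert wals_to_release(wals, maxlogs=8) == []
```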
Have a look at the classes VisibilityController or AccessController.
These are observers as well as Endpoints which expose some APIs for the
client.
-Anoop-
On Mon, May 15, 2017 at 4:34 PM, Rajeshkumar J
wrote:
> Hi,
>
>Check this example in the below link
>
>
I have also run a couple of tests and it looks like the TTL is not being
>> obeyed on the replica for any entry. Almost as if the TTL cell tags are not
>> being replicated. I couldn't find any significant clock skew. If it
>> matters, the HBase version on both sides is 1.0
Yes, can you check whether the replica cluster is NOT removing ANY of the
TTL-expired cells (as per your expectation from the master cluster), or only
some? Is there too much clock skew between the source RS and the peer cluster
RS? Just check.
BTW, can you see what the hbase.replication.rpc.codec configuration
Your earlier attempt to create this table would have failed in between, so
the status of the table in zk and in the master may differ. The table-exists
check might be looking at one, and the next steps of create table at another.
Sorry, I have forgotten that area of code, but I have seen this kind of
situation. Not sure whether
This will result in a big region. A row cannot be split
across 2 regions, so that region cannot get split.
-Anoop-
On Tue, Apr 25, 2017 at 2:58 PM, schausson wrote:
> OK, so this is not supported for now...
>
> Additional questions related to region size :
>
> If
This is because of the way filters and versions are checked
in the RS. We first do the filter op and then apply the version check. The
value filter might have filtered out the latest cell, but it is applied and
succeeds on an older version, and only then does the version count begin.
There is an open issue in HBase
On behalf of the Apache HBase PMC I'm pleased to announce that Yu Li
has accepted our invitation to become a PMC member on the Apache HBase
project. He has been an active contributor to HBase for many
years. Looking forward to many more contributions from him.
Welcome to the PMC, Yu Li...
You have to keep one thing in mind. The 128 MB memstore size refers to
the memstore's heap size, not the sum of all cells' key+value sizes.
When a Cell is added into the memstore there is a big overhead as well
(~100 bytes of Java overhead per cell). Your cell size is so small that
more than half of
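The arithmetic behind that statement, as a sketch (the ~100-byte figure is the approximation from the mail; the cell size is hypothetical):

```python
# With tiny cells, most of the memstore heap goes to per-cell
# Java overhead rather than to key+value data.
OVERHEAD_PER_CELL = 100          # approximate, per the mail
key_plus_value = 50              # hypothetical small cell, in bytes
heap_per_cell = key_plus_value + OVERHEAD_PER_CELL
overhead_fraction = OVERHEAD_PER_CELL / heap_per_cell
assert heap_per_cell == 150
assert overhead_fraction > 0.5   # more than half the heap is overhead
```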
Congrats and Welcome Anastasia !
-Anoop-
On Mon, Mar 27, 2017 at 6:07 PM, ramkrishna vasudevan
wrote:
> Hi All
>
> Welcome Anastasia Braginsky, one more female committer to HBase. She has
> been active now for a while with her Compacting memstore feature and
From the HBase server perspective, we need to restrict memstore size + block
cache size to a max of 80% of the heap. And the memstore size alone can go
down to 5%, if I am not wrong.
We need to be careful when using G1 and giving this 80%. The cache
will be mostly full since, as you said, it will be a read workload. Making
your working
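A sketch of that sizing rule, as fractions of the RS heap (the 0.8 cap and 0.05 floor are the figures stated above; the check itself is illustrative, not HBase code):

```python
# Memstore fraction + block cache fraction must stay within 0.8 of the
# heap, and the memstore fraction cannot drop below 0.05.
def valid_cache_config(memstore_frac, blockcache_frac):
    return (memstore_frac >= 0.05
            and memstore_frac + blockcache_frac <= 0.8)

assert valid_cache_config(0.05, 0.7)
assert not valid_cache_config(0.4, 0.5)    # sums to 0.9 > 0.8
assert not valid_cache_config(0.04, 0.5)   # memstore below the 5% floor
```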
96 GB of heap size?!!
What kind of workload is this?
-Anoop-
On Fri, Mar 10, 2017 at 10:54 AM, gehaijiang wrote:
> CMS GC program:
>
> 2017-03-10T10:15:25.741+0800: 4555916.378: [GC2017-03-10T10:15:25.741+0800:
> 4555916.378: [ParNew: 3067136K->340736K(3067136K),
From the 0.98.9 version onward, there is a per-cell TTL feature available
(see HBASE-10560). TTL can be controlled even at the per-cell level.
It might eat up more space, as the TTL is stored with every cell. What
you want is one TTL for one group and another for another group. The major
concern is you need
There is one config, "hbase.master.loadbalance.bytable", with which the
user can say whether the balancing has to be done per table or
globally. I believe you are using the StochasticLoadBalancer,
which is the default.
Are the move calls affecting your locality?
-Anoop-
On Wed, Mar 8, 2017 at
t;> replication = 3, then you will have regions with replica ids: {0, 1, 2}
>> where replica_id=0 is the primary.
>>
>> So you can do load-balancing with a get.setReplicaId(random() %
>> num_replicas) kind of pattern.
>>
>> Enis
>>
>>
>>
st? Has anyone else come up with
> anything on this?
> thanks
>
> ____
> From: Anoop John <anoop.hb...@gmail.com>
> Sent: Thursday, February 16, 2017 2:35:48 AM
> To: user@hbase.apache.org
> Subject: Re: On HBase Read Replicas
>
>
The blocking happens in the write path only. While reading there is no
blocking as such. Are you speaking about throttling?
In the write path the blocking comes in because of 2 reasons:
1. Per region you have 128 MB of flush size, meaning when the region
reaches this size, a flush is initiated. But that
The region replica feature came in to reduce the MTTR and so
increase data availability. When the RS hosting the primary region
dies, clients can read from the secondary regions. But keep
in mind that the data read from secondary regions may be a bit
out of sync, as the
Attributes added to a Mutation (via setAttribute) will NOT get
persisted into cells. As Ted suggested, tags are the way. There is
no way to pass Tags from the client as of now. I believe you have the
eventId thing on the server side only, so you can add Tags to cells on the
server side only (via some CP hooks).
Congrats Guanghao..
-Anoop-
On Tue, Dec 20, 2016 at 4:09 PM, ramkrishna vasudevan
wrote:
> Congratulations and welcome Guanghao.
>
> Regards
> Ram
>
> On Tue, Dec 20, 2016 at 4:07 PM, Guanghao Zhang wrote:
>
>> Thanks all. Looking forward
Congrats and welcome Binlijin.
-Anoop-
On Tue, Nov 29, 2016 at 3:18 PM, Duo Zhang wrote:
> On behalf of the Apache HBase PMC, I am pleased to announce that Lijin
> Bin(binlijin) has accepted the PMC's invitation to become a committer on
> the project. We appreciate all of
; 2.0 around the corner, some of us who build our own deploy may want to
>> > integrate into our builds. Thanks! These numbers look great
>> >
>> >> On Fri, Nov 18, 2016 at 12:20 PM Anoop John <anoop.hb...@gmail.com>
>> wrote:
>> >>
>> >&g
Hi Yu Li
Good to see that the off-heap work helps you. The perf
numbers look great. So this is a comparison of the on-heap L1 cache vs the
off-heap L2 cache (HBASE-11425 enabled). So for 2.0 we should make the L2
off-heap cache ON by default, I believe. Will raise a jira for that; we can
Hi
Yes, the QualifierFilter returns only the matched cells and skips
the remaining cells in the row. I cannot see any ready-made
Filter for your use case. You may have to write a custom one.
-Anoop-
On Tue, Nov 8, 2016 at 2:50 AM, Mukesh Jha wrote:
> Hello
easy for you. Please try. Let us know of any further doubts/issues.
-Anoop-
On Tue, Nov 8, 2016 at 5:55 AM, Manjeet Singh
<manjeet.chand...@gmail.com> wrote:
> Increment by y x+y
>
> On 7 Nov 2016 23:52, "Anoop John" <anoop.hb...@gmail.com> wrote:
>
>> So w
my 2 different ETL job having the same rowkey and one ETL
> process get the value and at same time second also get that value and
> update it now first ETL job will replace that updated value.
>
> it can happen in same ETL job too.
>
> Thanks
> Manjeet
>
> On Mon, Nov 7, 2
It seems you want to get an already-present row, do some op, and put the
updated value back. What is that op? If you can explain, we can try to
help you with ways (if available). As such, the above kind of API does not
guarantee you any atomicity.
-Anoop-
On Fri, Nov 4, 2016 at 4:12 AM, Ted Yu
I agree with your observation, but we wanted to get the DLR feature removed
because it is known to have issues. Otherwise we would need major work to
correct all those issues.
-Anoop-
On Tue, Oct 18, 2016 at 7:41 AM, Ted Yu wrote:
> If you have a cluster, I suggest you turn on DLR and
Congrats Stephen...
On Tue, Oct 18, 2016 at 10:20 AM, ramkrishna vasudevan
wrote:
> Congrats Stephen!!
>
> On Tue, Oct 18, 2016 at 2:37 AM, Stack wrote:
>
>> Wahoo!
>>
>> On Fri, Oct 14, 2016 at 11:27 AM, Enis Söztutar wrote:
Tell us more. What would you like to do? How are your data and the table
structured? Are you trying some aggregation or so? If you can tell us
more about what you are really trying to do, we can try to point you to the
best possible option wrt CPs.
-Anoop-
On Thu, Oct 13, 2016 at 8:39 PM, Nkechi Achara
Yes, what API is used for this update op you mention?
-Anoop-
On Mon, Oct 17, 2016 at 9:25 AM, Ted Yu wrote:
> Which release of hbase are you using ?
>
> How did you determine that coprocessor's postPut method is not triggered
> for the update ? By additional logging ?
>
> Can you
You will have to set the stop row also (as Phil Yang suggested).
Otherwise the scan might return rows from the next region (if this region
is empty).
-Anoop-
On Wed, Aug 3, 2016 at 12:23 PM, jinhong lu wrote:
> Thanks a lot. I make a mistake in my code, it should be:
>
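An illustration of why the stop row matters: a scan keyed only by a start row keeps returning rows lexicographically past the intended prefix (e.g. from the next region). The row keys here are hypothetical.

```python
# Row keys sort lexicographically as byte strings, like HBase rows.
rows = sorted([b"user1", b"user1a", b"user2", b"user3"])
start, stop = b"user1", b"user2"

bounded = [r for r in rows if start <= r < stop]    # with a stop row
unbounded = [r for r in rows if start <= r]         # start row only
assert bounded == [b"user1", b"user1a"]
assert b"user3" in unbounded   # spills past the intended range
```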
Yes, it comes with a write workload, not with concurrent reads.
Once the write is done (memstore write and WAL write), we mark the
MVCC operation corresponding to it as complete and wait for the global
read point to advance to at least this point. (Every write op will have
a number corresponding
; wrote:
>> > >
>> > > > Yeah, thanks for this Ram. Although in my testing I have found
>> > > > that a client user attempting to use the visibility expression
>> > > > resolver doesn't seem to have the ability to scan the hbase:labels
>>
Can you paste the code showing how you build the FilterList?
I think you want to use the MUST_PASS_ALL op in the FL.
Yes, the order of the filters passed to the FL can matter.
In your query case, you have to pass the filters as SCVF, KeyOnlyFilter,
PageFilter. The PageFilter must come at the end.
Any filter's working scope is within one
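A rough sketch of why the PageFilter must come last in a MUST_PASS_ALL list: a page filter counts rows it lets through, so if it runs before the value filter it spends its page budget on rows the later filter would reject. This is an illustrative simulation, not HBase's filter engine.

```python
# Simulate a value filter combined with a page limit, in both orders.
def run_filters(rows, value_pred, page_limit, page_first):
    kept, counted = [], 0
    for row, val in rows:
        if page_first:                  # page filter applied first
            if counted >= page_limit:
                break
            counted += 1
            if value_pred(val):
                kept.append(row)
        else:                           # value filter first, page filter last
            if value_pred(val):
                if counted >= page_limit:
                    break
                counted += 1
                kept.append(row)
    return kept

rows = [("r1", 0), ("r2", 1), ("r3", 1), ("r4", 1)]
# Page limit 2, keeping rows whose value == 1:
assert run_filters(rows, lambda v: v == 1, 2, page_first=True) == ["r2"]
assert run_filters(rows, lambda v: v == 1, 2, page_first=False) == ["r2", "r3"]
```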
Thanks Ram. Yes, that seems the best way, as CellCreator is a publicly
exposed class. Maybe we should explain this in the hbase book under
the Visibility labels area. Good to know you have a Visibility labels
based use case. Let us know in case of any trouble. Thanks.
-Anoop-
On Wed, Jun 8, 2016 at
In your test env do you also have numDataIndexLevels=2? Or is it only 1?
-Anoop-
On Mon, Jun 6, 2016 at 1:12 PM, Pankaj kr wrote:
> Thanks Ted for replying.
> Yeah, We have a plan to upgrade. But currently I want to know the reason
> behind this. I tried to reproduce this in
So you feel the call s.close() would have created some Exception and so
the removal of the entry from scannerReadPoints would not have happened.
Hmm, sounds possible.
  public synchronized void close() {
    if (storeHeap != null) {
      storeHeap.close();
      storeHeap = null;
    }
    if
Yes, as said, there is a warn message while doing the flush. Checking
every mutation for tags when the hfile version is < 3 would be a bit
costly. I suggest you change the version, and things
will work as expected for you.
-Anoop-
On Fri, May 6, 2016 at 10:09 PM, Huaxiang Sun
case too.
>
> Regards
> Ram
>
> On Fri, May 6, 2016 at 12:14 PM, Anoop John <anoop.hb...@gmail.com> wrote:
>
>> >>I am working on Hbase ACLs in order to lock a particular cell value for
>> writes by a user for an indefinite amount of time. This same user
Do you know all the possible column names for this
table?
-Anoop-
On Fri, May 6, 2016 at 11:59 AM, Anoop John <anoop.hb...@gmail.com> wrote:
> HBASE-11432 removed the cell first strategy
>
> -Anoop-
>
> On Thu, May 5, 2016 at 4:49 PM, Tokayer, Jason M.
> <jason.toka...@capitalone.com&
HBASE-11432 removed the cell first strategy
-Anoop-
On Thu, May 5, 2016 at 4:49 PM, Tokayer, Jason M.
wrote:
> Hi Ram,
>
> I very much appreciate the guidance. I should be able to run through the
> gambit of tests via code this afternoon and will report back when
els back (as ordinals, as a super user, and using
>> the KeyValueCodecWithTags codec) using the HBase shell?
>>
>> If so, what are the steps I need to take (i.e. doesn't seem to be working
>> for me, but then I've likely made a mistake setting the codec).
>>
>> Thank
There are different ways for you to achieve this:
1. preScannerOpen -> Here you get the Scan object. You can add a new
Filter into the Scan object and pass the Scan object (or the
attribute you look for) into this Filter. The later scan op will use
this Filter, and within it you can do the filtering of cells.
2.
So you have given the max heap size for the RS process via Cloudera
Manager, and you fear that that much is not taken by the process. You may
need to check the logs in CDH Manager and the RS to see whether it is
correctly set
So you are using the bucket cache (BC) in off-heap mode, right? In the 1.0
version there
Yes, having an HBase-side cache will be a better choice than the HDFS
cache, IMO. Yes, you are correct... You might not want to give a very
large size for the heap. You can make use of the off-heap BucketCache.
-Anoop-
On Thu, Mar 31, 2016 at 4:35 AM, Ted Yu wrote:
> For #1,
I see you set cacheBlocks to false on the Scan. By any chance, on
some other RS(s), is the data you are looking for already in the cache?
(From any previous scan or by cache-on-write.) And there are no concurrent
writes anyway, right? This much difference in time! One
possibility is that blocks are avail
r.IndexRegionObserver,
> org.apache.hadoop.hbase.JMXListener,
> org.apache.hadoop.hbase.index.coprocessor.wal.IndexWALObserver] |
> org.apache.hadoop.hbase.regionserver.HRegionServer.abort(HRegionServer.java:2063)
>
> Here also, no error details in DN/NN log.
>
> I am still