Re: Re: Can HBase 2.5.8 work with JDK17?

2024-04-11 Thread Reid Chan
It should be able to run on JDK17.

I think, at the very least, you need to provide your error logs and the
related environment variables.



On Wed, Apr 10, 2024 at 7:36 PM Bryan Beaudreault 
wrote:

> I can’t answer that because I know nothing about your environment or the
> error you are receiving. For us it simply started up. You may have to
> change the JVM flags in your hbase-env.sh if you are using old deprecated
> flags that may have been removed in JDK17.
>
> On Wed, Apr 10, 2024 at 3:01 AM lisoda  wrote:
>
> > Hi.
> > I am currently unable to start the RegionServer using JDK17 directly.
> > What adjustments do I need to make to use JDK17/21?
> > Tks.
> >
> > On 2024-04-09 18:52:34, "Bryan Beaudreault" wrote:
> > >We ran hbase under jdk17 for a few months. The only issue we saw was
> > >https://issues.apache.org/jira/browse/HBASE-28206 which was fixed in
> > 2.5.7.
> > >
> > >More recently we’ve upgraded again to jdk21 to gain access to
> generational
> > >zgc. That also has been working fine without any additional patches.
> > >
> > >We’re running with hdfs 3.3.6, which may or may not matter.
> > >
> > >On Tue, Apr 9, 2024 at 3:23 AM lisoda  wrote:
> > >
> > >> Hi.
> > >>
> > >> I'm using HBase 2.5.8 and I'd like to upgrade the JDK version of the
> > >> RegionServer in my cluster to JDK17. Does anyone have any experience
> > with
> > >> this?  Tks.
> >
>
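For reference, a minimal hbase-env.sh sketch of the flag change Bryan
describes above, assuming the startup failure is an "Unrecognized VM option"
from the old CMS flags (CMS was removed in JDK 14); the module-opens line is
an assumption, so check the HBase reference guide for the exact list your
version needs:

  # before (fails to start on JDK17):
  # export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"
  # after: a collector that still exists on JDK17/21
  export HBASE_OPTS="$HBASE_OPTS -XX:+UseG1GC"
  # or, on JDK21, the generational ZGC Bryan mentions:
  # export HBASE_OPTS="$HBASE_OPTS -XX:+UseZGC -XX:+ZGenerational"
  # JDK11+ module access HBase needs (assumed; verify against the ref guide):
  export HBASE_OPTS="$HBASE_OPTS --add-modules jdk.unsupported --add-opens java.base/java.nio=ALL-UNNAMED"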


Re: Region state is PENDING_CLOSE persists.

2023-08-06 Thread Reid Chan
Please apply this patch: https://issues.apache.org/jira/browse/HBASE-24099,
if you can't do any version upgrade.

After that, you can tune *hbase.regionserver.executor.closeregion.threads*
and *hbase.regionserver.executor.openregion.threads* to speed up closing
and opening regions.
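For example, in hbase-site.xml on the region servers (the values below are
illustrative, not recommendations; the defaults are small, typically 3):

  <property>
    <name>hbase.regionserver.executor.closeregion.threads</name>
    <value>6</value>
  </property>
  <property>
    <name>hbase.regionserver.executor.openregion.threads</name>
    <value>6</value>
  </property>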

---

Best Regards,
R.C


On Sun, Aug 6, 2023 at 12:11 PM Manimekalai Kunjithapatham <
k.manimeka...@gmail.com> wrote:

> Dear Team,
>
> In one of our HBase clusters, occasionally some of the regions get stuck
> in the PENDING_CLOSE state for a long time. The only thing that resolves
> it is restarting the particular region server holding the region.
>
> The cluster is write-heavy, as it receives replication from another
> cluster.
>
> The HBase version is 1.2.6.
>
> Please help solve this issue.
>
> Below is the thread dump
>
> regionserver//10.x.x.x:16020-shortCompactions-1691219096144" daemon prio=10 tid=0x7f07b06a8000 nid=0x38ff9 runnable [0x7f0741f21000]
>    java.lang.Thread.State: RUNNABLE
>         at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>         at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
>         at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
>         at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
>         - locked <0x0005418009f8> (a sun.nio.ch.Util$2)
>         - locked <0x000541800a08> (a java.util.Collections$UnmodifiableSet)
>         - locked <0x0005418009b0> (a sun.nio.ch.EPollSelectorImpl)
>         at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
>         at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
>         at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>         at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:209)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
>         at org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:186)
>         at org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:146)
>         - locked <0x0004c986b3c0> (a org.apache.hadoop.hdfs.RemoteBlockReader2)
>         at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:686)
>         at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:742)
>         - eliminated <0x0004c986b358> (a org.apache.hadoop.hdfs.DFSInputStream)
>         at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:799)
>         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:840)
>         - locked <0x0004c986b358> (a org.apache.hadoop.hdfs.DFSInputStream)
>         at java.io.DataInputStream.read(DataInputStream.java:149)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock.readWithExtra(HFileBlock.java:709)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$AbstractFSReader.readAtOffset(HFileBlock.java:1440)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1648)
>         at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1532)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:452)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:729)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.isNextBlock(HFileReaderV2.java:854)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.positionForNextBlock(HFileReaderV2.java:849)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2._next(HFileReaderV2.java:866)
>         at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:886)
>         at org.apache.hadoop.hbase.regionserver.StoreFileScanner.next(StoreFileScanner.java:154)
>         at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:111)
>         at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:588)
>         at org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:318)
>         at org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:111)
>         at ...

Re: EOL branch-1 and all 1.x ?

2021-10-17 Thread Reid Chan
+1 for EOL branch-1.4

Thanks for the work.

On Wed, Oct 13, 2021 at 1:39 PM 张铎(Duo Zhang)  wrote:

> Filed HBASE-26355 for releasing 1.4.14.
>
> 张铎 (Duo Zhang) wrote on Monday, Oct 11, 2021 at 5:31 PM:
>
> > So I think in this thread, the only concern raised was about performance
> > issues, which is why we decided to keep making new releases on branch-1.
> >
> > But at least I think we all agree to EOL other 1.x release lines,
> > especially branch-1.4 right?
> >
> > If no other concerns, let's do a final 1.4.14 release and then mark
> > branch-1.4 as EOL. There are 40 issues under 1.4.14 so I think it is
> worth
> > having a new release.
> >
> > Thanks.
> >
> > Andrew Purtell wrote on Tuesday, Jun 1, 2021 at 3:16 AM:
> >
> >> It would be good to do the performance work at least, if you are up for
> >> it. There are always going to be consequences for the kind of
> significant
> >> evolution that 2.x represents over 1.x.
> >>
> >> Regarding performance, a change always has positive and negative
> >> consequences. It is important to understand them both, informed by real
> >> world use cases. My guess is you have real world use cases, Reid. Your
> >> results will be meaningful.
> >>
> >> Synthetic benchmarks are less interesting unless the regression is
> >> obvious and more like a bug than a consequence. Sure they will report
> >> positive and negative changes, but does that actually mean anything? It
> >> depends. Sometimes it will only mean something if we care about
> supporting
> >> the synthetic benchmark as a first class use case. (Usually we don’t;
> but
> >> universal cross system bench tools like YCSB are exceptions.)
> >>
> >>
> >> > On May 31, 2021, at 9:25 AM, Reid Chan 
> wrote:
> >> >
> >> > Thanks to Andrew and Sean's help, I managed to release the first
> >> > candidate of 1.7.0 (at least it is a beginning, and I have graduated
> >> > from being a greenhorn).
> >> > BTW, the [VOTE] thread:
> >> > <https://lists.apache.org/thread.html/r0b96b6596fc423e17ff648633e5ea76fd897d9afb8a03ae6e09cdb8f%40%3Cdev.hbase.apache.org%3E>
> >> >
> >> > The following are my thoughts:
> >> > I'm willing to continue branch-1's life as its RM.
> >> > Before EOL'ing branch-1, I need to announce the EOL of branch-1.4.
> >> > While maintaining branch-1, I will also run some benchmarks between
> >> > 1.7+ and 2.4+ (the latest). If 2.4+ is better, cool. Otherwise, I'm
> >> > willing to spend some time digging in.
> >> > After the performance work is done, I need to review the upgrade path
> >> > from 1.x to 2.x. I remember someone wrote it up, but HBASE-25902 seems
> >> > to reveal some problems already.
> >> > I will announce the EOL of branch-1 once the items listed above are
> >> > done.
> >> >
> >> > By my estimation this will probably take more than a year if I have
> >> > to do it all alone. The most time-consuming parts should be the
> >> > performance deep dive (if there is a regression) and the upgrade
> >> > review.
> >> >
> >> > Any thought is appreciated.
> >> >
> >> >
> >> > ---
> >> > Best regards,
> >> > R.C
> >> >
> >> >
> >> >
> >> >
> >> >> On Tue, Apr 20, 2021 at 12:13 AM Reid Chan 
> >> wrote:
> >> >>
> >> >>
> >> >> FYI, a JDK issue when I was making the 1.7.0 release.
> >> >>
> >> >>
> >> >>
> >>
> https://lists.apache.org/thread.html/r118b08134676d9234362a28898249186fe73a1fb08535d6eec6a91d3%40%3Cdev.hbase.apache.org%3E
> >> >>
> >> >>
> >> >> ---
> >> >> Best Regards,
> >> >> R.C
> >> >>
> >> >>> On Thu, Apr 1, 2021 at 6:03 AM Andrew Purtell 
> >> wrote:
> >> >>>
> >> >>> Is it time to consider EOL of branch-1 and all 1.x releases ?
> >> >>>
> >> >>> There doesn't seem to be much developer interest in branch-1 beyond
> >> >>> occasional maintenance. This is understandable. Per our compatibility
> >> >>> guidelines, branch-1 commits must be compatible with Java 7, and the
> >> >>> range of acceptable versions of third party dependencies is also
> >> >>> restricted due to Java 7 comp

[ANNOUNCE] Apache HBase 1.7.0 is now available for download

2021-06-12 Thread Reid Chan
The HBase team is happy to announce the immediate availability of Apache
HBase 1.7.0!

Apache HBase is an open-source, distributed, versioned, non-relational
database. Apache HBase gives you low latency random access to billions of
rows with millions of columns atop non-specialized hardware. To learn more
about HBase, see https://hbase.apache.org/.

Download from http://hbase.apache.org/downloads

HBase 1.7.0 is the latest minor release of HBase version 1, continuing on
the theme of bringing a stable, reliable database to the Apache Big Data
ecosystem and beyond.

For instructions on verifying ASF release downloads, please see

https://www.apache.org/dyn/closer.cgi#verify

Project member signature keys can be found at

https://www.apache.org/dist/hbase/KEYS
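A typical verification session looks like the following (file names assume
the 1.7.0 binary tarball; the signature and checksum files sit next to the
tarball on the mirror):

  $ curl -LO https://www.apache.org/dist/hbase/KEYS
  $ gpg --import KEYS
  $ gpg --verify hbase-1.7.0-bin.tar.gz.asc hbase-1.7.0-bin.tar.gz
  $ sha512sum hbase-1.7.0-bin.tar.gz   # compare against the published .sha512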

Thanks to all the contributors who made this release possible!

A list of the 243 issues resolved in this release can be found at
https://issues.apache.org/jira/projects/HBASE/versions/12346510

Questions, comments, and problems are always welcome at:
d...@hbase.apache.org

Best,
The HBase Dev Team


Re: EOL branch-1 and all 1.x ?

2021-05-31 Thread Reid Chan
Thanks to Andrew and Sean's help, I managed to release the first candidate
of 1.7.0 (at least it is a beginning, and I have graduated from being a
greenhorn).
BTW, the [VOTE] thread:
<https://lists.apache.org/thread.html/r0b96b6596fc423e17ff648633e5ea76fd897d9afb8a03ae6e09cdb8f%40%3Cdev.hbase.apache.org%3E>

The following are my thoughts:
I'm willing to continue branch-1's life as its RM.
Before EOL'ing branch-1, I need to announce the EOL of branch-1.4.
While maintaining branch-1, I will also run some benchmarks between 1.7+
and 2.4+ (the latest). If 2.4+ is better, cool. Otherwise, I'm willing to
spend some time digging in.
After the performance work is done, I need to review the upgrade path from
1.x to 2.x. I remember someone wrote it up, but HBASE-25902 seems to reveal
some problems already.
I will announce the EOL of branch-1 once the items listed above are done.

By my estimation this will probably take more than a year if I have to do
it all alone. The most time-consuming parts should be the performance deep
dive (if there is a regression) and the upgrade review.

Any thought is appreciated.


---
Best regards,
R.C




On Tue, Apr 20, 2021 at 12:13 AM Reid Chan  wrote:

>
> FYI, a JDK issue I hit while making the 1.7.0 release.
>
>
> https://lists.apache.org/thread.html/r118b08134676d9234362a28898249186fe73a1fb08535d6eec6a91d3%40%3Cdev.hbase.apache.org%3E
>
>
> ---
> Best Regards,
> R.C
>
> On Thu, Apr 1, 2021 at 6:03 AM Andrew Purtell  wrote:
>
>> Is it time to consider EOL of branch-1 and all 1.x releases ?
>>
>> There doesn't seem to be much developer interest in branch-1 beyond
>> occasional maintenance. This is understandable. Per our compatibility
>> guidelines, branch-1 commits must be compatible with Java 7, and the range
>> of acceptable versions of third party dependencies is also restricted due
>> to Java 7 compatibility requirements. Most developers are writing code
>> with
>> Java 8+ idioms these days. For that reason and because the branch-1 code
>> base is generally aged at this point, all but trivial (or lucky!)
>> backports
>> require substantial changes in order to integrate adequately. Let me also
>> observe that branch-1 artifacts are not fully compatible with Java 11 or
>> later. (The shell is a good example of such issues: The version of
>> jruby-complete required by branch-1 is not compatible with Java 11 and
>> upgrading to the version used by branch-2 causes shell commands to error
>> out due to Ruby language changes.)
>>
>> We can a priori determine there is insufficient motivation for production
>> of release artifacts for the PMC to vote upon. Otherwise, someone would
>> have done it. We had 12 releases from branch-2 derived code in 2019, 13
>> releases from branch-2 derived code in 2020, and so far we have had 3
>> releases from branch-2 derived code in 2021. In contrast, we had 8
>> releases
> >> from branch-1 derived code in 2019, 2 releases from branch-1 in 2020, and
>> so far 0 releases from branch-1 in 2021.
>>
>> *  2021202020191.x0282.x31312*
>>
>> If there is someone interested in continuing branch-1, now is the time to
>> commit. However let me be clear that simply expressing an abstract desire
>> to see continued branch-1 releases will not be that useful. It will be
>> noted, but will not have much real world impact. Apache is a do-ocracy. In
>> the absence of intrinsic motivation of project participants, which is what
>> we seem to have here, you will need to do something: Fix the compatibility
>> issues, if any between the last release of 1.x and the current branch-1
>> head; fix any failing and flaky unit tests; produce release artifacts; and
>> submit those artifacts to the PMC for voting. Or, convince someone with
>> commit rights and/or PMC membership to undertake these actions on your
>> behalf.
>>
>> Otherwise, I respectfully submit for your consideration, it is time to
> >> declare branch-1 and all 1.x code lines EOL, simply acknowledging what
> >> has effectively already happened.
>>
>> --
>> Best regards,
>> Andrew
>>
>> Words like orphans lost among the crosstalk, meaning torn from truth's
>> decrepit hands
>>- A23, Crosstalk
>>
>


Re: [ANNOUNCE] New HBase Committer Xiaolin Ha(哈晓琳)

2021-05-16 Thread Reid Chan
Welcome Xiaolin.


--
Best regards,
R.C

On Sat, May 15, 2021 at 10:10 PM 张铎(Duo Zhang) 
wrote:

> On behalf of the Apache HBase PMC, I am pleased to announce that Xiaolin
> Ha(sunhelly) has accepted the PMC's invitation to become a committer on the
> project. We appreciate all of Xiaolin's generous contributions thus far and
> look forward to her continued involvement.
>
> Congratulations and welcome, Xiaolin Ha!
>
> On behalf of the Apache HBase PMC, I am pleased to announce that Xiaolin
> Ha has accepted our invitation to become a committer on the Apache HBase
> project. Thanks to Xiaolin Ha for her continued contributions to the HBase
> project, and we look forward to her taking on more responsibility in the
> future.
>
> Welcome, Xiaolin Ha!
>


Re: EOL branch-1 and all 1.x ?

2021-04-19 Thread Reid Chan
FYI, a JDK issue I hit while making the 1.7.0 release.

https://lists.apache.org/thread.html/r118b08134676d9234362a28898249186fe73a1fb08535d6eec6a91d3%40%3Cdev.hbase.apache.org%3E


---
Best Regards,
R.C

On Thu, Apr 1, 2021 at 6:03 AM Andrew Purtell  wrote:

> Is it time to consider EOL of branch-1 and all 1.x releases ?
>
> There doesn't seem to be much developer interest in branch-1 beyond
> occasional maintenance. This is understandable. Per our compatibility
> guidelines, branch-1 commits must be compatible with Java 7, and the range
> of acceptable versions of third party dependencies is also restricted due
> to Java 7 compatibility requirements. Most developers are writing code with
> Java 8+ idioms these days. For that reason and because the branch-1 code
> base is generally aged at this point, all but trivial (or lucky!) backports
> require substantial changes in order to integrate adequately. Let me also
> observe that branch-1 artifacts are not fully compatible with Java 11 or
> later. (The shell is a good example of such issues: The version of
> jruby-complete required by branch-1 is not compatible with Java 11 and
> upgrading to the version used by branch-2 causes shell commands to error
> out due to Ruby language changes.)
>
> We can a priori determine there is insufficient motivation for production
> of release artifacts for the PMC to vote upon. Otherwise, someone would
> have done it. We had 12 releases from branch-2 derived code in 2019, 13
> releases from branch-2 derived code in 2020, and so far we have had 3
> releases from branch-2 derived code in 2021. In contrast, we had 8 releases
> from branch-1 derived code in 2019, 2 releases from branch-1 in 2020, and
> so far 0 releases from branch-1 in 2021.
>
>          2021  2020  2019
>   1.x       0     2     8
>   2.x       3    13    12
>
> If there is someone interested in continuing branch-1, now is the time to
> commit. However let me be clear that simply expressing an abstract desire
> to see continued branch-1 releases will not be that useful. It will be
> noted, but will not have much real world impact. Apache is a do-ocracy. In
> the absence of intrinsic motivation of project participants, which is what
> we seem to have here, you will need to do something: Fix the compatibility
> issues, if any between the last release of 1.x and the current branch-1
> head; fix any failing and flaky unit tests; produce release artifacts; and
> submit those artifacts to the PMC for voting. Or, convince someone with
> commit rights and/or PMC membership to undertake these actions on your
> behalf.
>
> Otherwise, I respectfully submit for your consideration, it is time to
> declare branch-1 and all 1.x code lines EOL, simply acknowledging what has
> effectively already happened.
>
> --
> Best regards,
> Andrew
>
> Words like orphans lost among the crosstalk, meaning torn from truth's
> decrepit hands
>- A23, Crosstalk
>


Re: EOL branch-1 and all 1.x ?

2021-03-31 Thread Reid Chan
My only concern is about performance; once in a while there'll be
some emails like "2.x.y is slower than 1.x.y".


On Thu, Apr 1, 2021 at 6:03 AM Andrew Purtell  wrote:

> Is it time to consider EOL of branch-1 and all 1.x releases ?
>
> There doesn't seem to be much developer interest in branch-1 beyond
> occasional maintenance. This is understandable. Per our compatibility
> guidelines, branch-1 commits must be compatible with Java 7, and the range
> of acceptable versions of third party dependencies is also restricted due
> to Java 7 compatibility requirements. Most developers are writing code with
> Java 8+ idioms these days. For that reason and because the branch-1 code
> base is generally aged at this point, all but trivial (or lucky!) backports
> require substantial changes in order to integrate adequately. Let me also
> observe that branch-1 artifacts are not fully compatible with Java 11 or
> later. (The shell is a good example of such issues: The version of
> jruby-complete required by branch-1 is not compatible with Java 11 and
> upgrading to the version used by branch-2 causes shell commands to error
> out due to Ruby language changes.)
>
> We can a priori determine there is insufficient motivation for production
> of release artifacts for the PMC to vote upon. Otherwise, someone would
> have done it. We had 12 releases from branch-2 derived code in 2019, 13
> releases from branch-2 derived code in 2020, and so far we have had 3
> releases from branch-2 derived code in 2021. In contrast, we had 8 releases
> from branch-1 derived code in 2019, 2 releases from branch-1 in 2020, and
> so far 0 releases from branch-1 in 2021.
>
>          2021  2020  2019
>   1.x       0     2     8
>   2.x       3    13    12
>
> If there is someone interested in continuing branch-1, now is the time to
> commit. However let me be clear that simply expressing an abstract desire
> to see continued branch-1 releases will not be that useful. It will be
> noted, but will not have much real world impact. Apache is a do-ocracy. In
> the absence of intrinsic motivation of project participants, which is what
> we seem to have here, you will need to do something: Fix the compatibility
> issues, if any between the last release of 1.x and the current branch-1
> head; fix any failing and flaky unit tests; produce release artifacts; and
> submit those artifacts to the PMC for voting. Or, convince someone with
> commit rights and/or PMC membership to undertake these actions on your
> behalf.
>
> Otherwise, I respectfully submit for your consideration, it is time to
> declare branch-1 and all 1.x code lines EOL, simply acknowledging what has
> effectively already happened.
>
> --
> Best regards,
> Andrew
>
> Words like orphans lost among the crosstalk, meaning torn from truth's
> decrepit hands
>- A23, Crosstalk
>


Re: [ANNOUNCE] Please welcome Viraj Jasani to the Apache HBase PMC

2020-10-06 Thread Reid Chan
Welcome and well deserved, Viraj! (clap)




--

Best regards,
R.C




From: Andrew Purtell 
Sent: 06 October 2020 00:58
To: dev; Hbase-User
Subject: [ANNOUNCE] Please welcome Viraj Jasani to the Apache HBase PMC

On behalf of the Apache HBase PMC I am pleased to announce that
Viraj Jasani has accepted our invitation to become a PMC member on the
HBase project. We appreciate Viraj stepping up to take more
responsibility for the project.

Please join me in welcoming Viraj to the HBase PMC!


As a reminder, if anyone would like to nominate another person as a
committer or PMC member, even if you are not currently a committer or
PMC member, you can always drop a note to priv...@hbase.apache.org
to let us know.

--
Best regards,
Andrew


Re: EOL 1.3.x?

2020-08-12 Thread Reid Chan
+1




--

Best regards,
R.C




From: 张铎(Duo Zhang) 
Sent: 11 August 2020 16:01
To: HBase Dev List; hbase-user
Subject: EOL 1.3.x?

The last release for 1.3.x was on 2019-10-20, which means we have not had a
release from this release line for about 10 months.

Let's make it EOL and tell users to at least upgrade to 1.4.x?

Thanks.


Re: [ANNOUNCE] Please welcome Lijin Bin to the HBase PMC

2020-05-25 Thread Reid Chan


Welcome Lijin!



--

Best regards,
R.C




From: Guanghao Zhang 
Sent: 25 May 2020 22:22
To: HBase Dev List; Hbase-User
Subject: [ANNOUNCE] Please welcome Lijin Bin to the HBase PMC

On behalf of the Apache HBase PMC I am pleased to announce that Lijin Bin
has accepted our invitation to become a PMC member on the Apache HBase
project. We appreciate Lijin Bin stepping up to take more responsibility in
the HBase project.

Please join me in welcoming Lijin Bin to the HBase PMC!


Re: How to avoid write hot spot, While using cross row transactions.

2020-01-09 Thread Reid Chan
I think you need some more coding work to fulfill atomicity in a cross-region
scenario, with the aid of some third-party software such as ZooKeeper.

AFAIK, the Procedure framework in the Master may also have the ability to do
that, but I'm not sure about its details or whether it supports
client-customized procedures (I remember the answer is negative).

Last but not least, what about trying Phoenix?
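For the hot-spotting side specifically, one common compromise is to keep a
salt prefix but derive it deterministically from the entity key instead of
randomly, so a data row and its index row still share a prefix (and, with
the table pre-split on the salt byte, a region) while different entities
spread out. A hedged sketch, with all class, method, and key-layout names
illustrative rather than anything from your application:

  import java.nio.charset.StandardCharsets;
  import java.util.Arrays;
  import org.apache.hadoop.hbase.util.Bytes;

  public class SaltedKeys {
    // illustrative bucket count; pre-split the table on this first byte.
    // caveat: if a bucket later splits into several regions, the data row
    // and index row of one entity can drift apart, so size buckets to
    // stay within one region or re-split deliberately.
    private static final int BUCKETS = 16;

    // deterministic salt: both rows derive from the same entity id, so
    // they co-locate, while different entities spread across all buckets
    static byte saltOf(byte[] entityId) {
      return (byte) Math.floorMod(Arrays.hashCode(entityId), BUCKETS);
    }

    // data row: <salt><entityId>
    static byte[] dataRowKey(String entityId) {
      byte[] id = entityId.getBytes(StandardCharsets.UTF_8);
      return Bytes.add(new byte[] { saltOf(id) }, id);
    }

    // index row: <salt>idx:<indexedValue>:<entityId> -- same salt as the
    // data row, so a MultiRowMutation over both stays within one region
    static byte[] indexRowKey(String entityId, String indexedValue) {
      byte[] id = entityId.getBytes(StandardCharsets.UTF_8);
      byte[] tail = Bytes.toBytes("idx:" + indexedValue + ":" + entityId);
      return Bytes.add(new byte[] { saltOf(id) }, tail);
    }
  }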



--

Best regards,
R.C




From: Kang Minwoo 
Sent: 10 January 2020 12:51
To: user@hbase.apache.org
Subject: How to avoid write hot spot, While using cross row transactions.

Hello, users.

I use the MultiRowMutationEndpoint coprocessor for cross-row transactions.
It has a constraint that the rows must be located in the same region, so I
removed the random hash bytes from my row keys. After that, I suffer from a
write hot spot.

But cross-row transactions are a core feature of my application: when I put
a new data row, I also put an index row.

Before I used the MultiRowMutationEndpoint coprocessor, I would get
mismatches between the data row and the index row.

Is there any best practice in that situation?
I want to avoid write hot-spot and use an index.


Best regards,
Minwoo Kang


Re: [ANNOUNCE] New HBase committer Viraj Jasani

2019-12-29 Thread Reid Chan
Welcome and Congratulations, Viraj!


--

Best regards,
R.C




From: Peter Somogyi 
Sent: 27 December 2019 21:01
To: HBase Dev List; hbase-user
Subject: [ANNOUNCE] New HBase committer Viraj Jasani

On behalf of the Apache HBase PMC I am pleased to announce that
Viraj Jasani has accepted the PMC's invitation to become a
committer on the project.

Thanks so much for the work you've been contributing. We look forward
to your continued involvement.

Congratulations and welcome!


Re: [ANNOUNCE] Please welcome Balazs Meszaros to the Apache HBase PMC

2019-10-27 Thread Reid Chan
 Congratulations and welcome, Balazs!




--

Best regards,
R.C




From: Sean Busbey 
Sent: 24 October 2019 22:35
To: dev; Hbase-User
Subject: [ANNOUNCE] Please welcome Balazs Meszaros to the Apache HBase PMC

On behalf of the Apache HBase PMC I am pleased to announce that
Balazs Meszaros has accepted our invitation to become a PMC member on the
HBase project. We appreciate Balazs stepping up to take more
responsibility in the HBase project.

Please join me in welcoming Balazs to the HBase PMC!



As a reminder, if anyone would like to nominate another person as a
committer or PMC member, even if you are not currently a committer or
PMC member, you can always drop a note to priv...@hbase.apache.org to
let us know.


Re: [ANNOUNCE] Please welcome Wellington Chevreuil to the Apache HBase PMC

2019-10-24 Thread Reid Chan


Welcome Wellington! Congratulations!



--

Best regards,
R.C




From: Salvatore LaMendola (BLOOMBERG/ 731 LEX) 
Sent: 24 October 2019 04:19
To: d...@hbase.apache.org
Cc: user@hbase.apache.org
Subject: Re: [ANNOUNCE] Please welcome Wellington Chevreuil to the Apache HBase 
PMC

Congrats Sakthi and Wellington!

From: d...@hbase.apache.org At: 10/23/19 16:17:58To:  d...@hbase.apache.org
Cc:  user@hbase.apache.org
Subject: Re: [ANNOUNCE] Please welcome Wellington Chevreuil to the Apache HBase 
PMC

Congrats Wellington!

Sakthi

On Wed, Oct 23, 2019 at 1:16 PM Sean Busbey  wrote:

> On behalf of the Apache HBase PMC I am pleased to announce that
> Wellington Chevreuil has accepted our invitation to become a PMC member on
> the
> HBase project. We appreciate Wellington stepping up to take more
> responsibility in the HBase project.
>
> Please join me in welcoming Wellington to the HBase PMC!
>
>
>
> As a reminder, if anyone would like to nominate another person as a
> committer or PMC member, even if you are not currently a committer or
> PMC member, you can always drop a note to priv...@hbase.apache.org to
> let us know.
>




Re: [ANNOUNCE] Please welcome Sakthi to the Apache HBase PMC

2019-10-24 Thread Reid Chan
Welcome Sakthi! Congratulations!


--

Best regards,
R.C




From: Sean Busbey 
Sent: 24 October 2019 04:14
To: dev; Hbase-User
Subject: [ANNOUNCE] Please welcome Sakthi to the Apache HBase PMC

On behalf of the Apache HBase PMC I am pleased to announce that
Sakthi has accepted our invitation to become a PMC member on the
HBase project. We appreciate Sakthi stepping up to take more
responsibility in the HBase project.

Please join me in welcoming Sakthi to the HBase PMC!



As a reminder, if anyone would like to nominate another person as a
committer or PMC member, even if you are not currently a committer or
PMC member, you can always drop a note to priv...@hbase.apache.org to
let us know.


Re: Hbase errors for " ERROR: org.apache.hadoop.hbase.PleaseHoldException:Master is initializing"

2019-10-11 Thread Reid Chan
There's a quick work-around.

All regions get stuck when trying to assign themselves to host 'dtla1apps21'
(from the log messages). It indicates that the RS on that host has somehow
become problematic.

In this case, you can stop the RS on that host, which will trigger all STUCK
regions to assign themselves to other RSs.
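For example, on the problematic host (path assumes a standard tarball
install; use your distribution's service manager otherwise):

  $ $HBASE_HOME/bin/hbase-daemon.sh stop regionserver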


--

Best regards,
R.C





From: 青椒肉丝 <1336318...@qq.com>
Sent: 11 October 2019 15:24
To: busbey; user
Subject: Re:  Hbase errors for " ERROR: 
org.apache.hadoop.hbase.PleaseHoldException:Master is initializing"

hi Sean,
My HBase version is hbase2.1.0+cdh6.1.0, and I've checked the HBase master
logs, but there are only some RIT status entries, which are as follows:


“2019-10-11 15:12:00,696 WARN org.apache.hadoop.hbase.master.CatalogJanitor: CatalogJanitor is disabled! Enabled=true, maintenanceMode=false, am=org.apache.hadoop.hbase.master.assignment.AssignmentManager@60c2bb1b, metaLoaded=true, hasRIT=true clusterShutDown=false
2019-10-11 15:12:42,597 INFO SecurityLogger.org.apache.hadoop.hbase.Server: Connection from X.X.X.X:59064, version=2.1.0-cdh6.2.0, sasl=false, ugi=hbase (auth:SIMPLE), service=MasterService
2019-10-11 15:13:00,293 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_CGMROZNC85, region=8f4bd153aba8f5c727a9de409cff6136
2019-10-11 15:13:00,293 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_NYBD0OX5WG, region=b07600e84a852938d9dd2ec323246ac6
2019-10-11 15:13:00,293 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_B3XSLI6DBK, region=49beda7ec05ffec6d238ebbf568a8c6c
2019-10-11 15:13:00,293 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_5QDDBMS68Q, region=f711df5144d4b706ecd2af9fabc2eba7
2019-10-11 15:13:00,293 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_B3XSLI6DBK, region=0e23a092963bdebd4bbf92ed0a25ad40
2019-10-11 15:13:00,293 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_CGMROZNC85, region=f96d2c76ec9b31bbb166305507807af4
2019-10-11 15:13:00,293 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_KVR1JIOD9F, region=835dfd5d87640f0bed48f605cfa6c39c
2019-10-11 15:13:00,294 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_2ZJ8M4PT96, region=eb35edb36de7f0866af45a5921584197
2019-10-11 15:13:00,294 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_B3XSLI6DBK, region=bf040047d1d1db74cb92bd05ac75376b
2019-10-11 15:13:00,294 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_2ZJ8M4PT96, region=d8a94206df91132d28548b3ea48bcc57
2019-10-11 15:13:00,294 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_NYBD0OX5WG, region=5aa573da499a2b7a6a3b1b7b534d01e1
2019-10-11 15:13:00,294 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_KVR1JIOD9F, region=97b2010687b632449e1682b2d6030028
2019-10-11 15:13:00,294 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_B3XSLI6DBK, region=6103048c6aef0816ee69a20d294c6d19
2019-10-11 15:13:00,294 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_3L8MX901Y2, region=c67f935d8ce72ed3ea505b507968d93e
2019-10-11 15:13:00,294 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, table=KYLIN_2ZJ8M4PT96, region=e3a8eba16110f5a7f1e7ad20fbe45634
2019-10-11 15:13:00,294 WARN org.apache.hadoop.hbase.master.assignment.AssignmentManager: STUCK Region-In-Transition rit=OPENING, location=dtla1apps21,16020,1566557484692, ...

Re: Equivalent of Row Level Security for HBase

2019-09-02 Thread Reid Chan


HBase has `Cell`-level ACLs, which are much more fine-grained than `Row`
level; I think that may suit your need.

There's also one feature --- Visibility Labels:
http://hbase.apache.org/book.html#hbase.visibility.labels, which you might
want to give a shot.
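A rough sketch of the visibility-label approach for your shared-record
example, assuming the VisibilityController coprocessor is enabled (table,
column, user, and label names are all illustrative):

  hbase> add_labels ['GROUP1', 'GROUP2']
  hbase> set_auths 'group1_user', ['GROUP1']
  hbase> set_auths 'group2_user', ['GROUP2']
  # record1 visible to userGroup1 only; record2 visible to both groups
  hbase> put 'obs', 'record1', 'd:q', 'v1', {VISIBILITY => 'GROUP1'}
  hbase> put 'obs', 'record2', 'd:q', 'v2', {VISIBILITY => 'GROUP1|GROUP2'}
  hbase> put 'obs', 'record3', 'd:q', 'v3', {VISIBILITY => 'GROUP2'}

Scans then only return the cells the calling user's authorizations allow,
so the aggregations can run over a single table without UNIONs.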




--

Best regards,
R.C




From: Simon Mottram 
Sent: 03 September 2019 08:59
To: user@hbase.apache.org
Subject: Equivalent of Row Level Security for HBase

Hi

I'm a Java developer, very new to HBase and could use some directions

I'm working on a project where we have a combination of sparse
data columns (1000's) with added headaches of multi-tenancy/row level
security. Initially the database will be small but in the near/medium
future will expand to millions of rows. HBase looks great for the sparse
nature of the back end and looks perfect for the expected data load, but I
need to check that we can support the customer's security requirements.

Shared Data
===
Each record in the table must be secured, but there could be multiple
tenants for a record. Think 'shared' data.

So for example if you had 3 records

record1, some data columns
record2, some data columns, not all shared with record1
record3, some data columns, not all same as 1 and 2

We need
userGroup1 to be able to see record1 and record2
userGroup2 to be able to see record2 and record3

How would you handle this in HBase?  Off the top of my head We could:

1) use a Table per user group and do UNION queries, I have strong
reservations about performance here as a fundamental reason for the
system is to perform aggregations such as averages, standard deviations
etc across the data.  Think userGroups = bunches of
statisticians/scientists. Also the sparse data structure will make
unions problematic I think.

2) 'Row' level security. Can we customise the ACL system to allow the
equivalent of multiple tenants per record?

3) None of the above ?


Best Regards

Simon


Re: [ANNOUNCE] Please welcome Zheng Hu to the HBase PMC

2019-08-05 Thread Reid Chan
Congratulations, Zheng!
(Clapping)




--

Best regards,
R.C




From: Duo Zhang 
Sent: 05 August 2019 10:07
To: HBase Dev List; hbase-user
Subject: [ANNOUNCE] Please welcome Zheng Hu to the HBase PMC

On behalf of the Apache HBase PMC I am pleased to announce that Zheng Hu
has accepted our invitation to become a PMC member on the Apache HBase
project. We appreciate Zheng Hu stepping up to take more responsibility in
the HBase project.

Please join me in welcoming Zheng Hu to the HBase PMC!


Re: [ANNOUNCE] new HBase committer Sakthi

2019-08-01 Thread Reid Chan


Congratulations and welcome, Sakthi!



--

Best regards,
R.C




From: Sean Busbey 
Sent: 01 August 2019 08:04
To: user@hbase.apache.org; dev
Subject: [ANNOUNCE] new HBase committer Sakthi

On behalf of the HBase PMC, I'm pleased to announce that Sakthi has
accepted our invitation to become an HBase committer.

We'd like to thank Sakthi for all of his diligent contributions to the
project thus far. We look forward to his continued participation in our
community.

Congrats and welcome Sakthi!


Re: [Announce] 张铎 (Duo Zhang) is Apache HBase PMC chair

2019-07-19 Thread Reid Chan
Congratulations Duo!
Thanks Misty!
(clapping)




--

Best regards,
R.C




From: Misty Linville 
Sent: 19 July 2019 01:46
To: HBase Dev List; hbase-user
Cc: Duo Zhang; priv...@hbase.apache.org
Subject: [Announce] 张铎 (Duo Zhang) is Apache HBase PMC chair

Each Apache project has a project management committee (PMC) that oversees
governance of the project, votes on new committers and PMC members, and
ensures that the software we produce adheres to the standards of the
Foundation. One of the roles on the PMC is the PMC chair. The PMC chair
represents the project as a Vice President of the Foundation and
communicates to the board about the project's health, once per quarter and
at other times as needed.

It's been my honor to serve as your PMC chair since 2017, when I took over
from Andrew Purtell. I've decided to step back from my volunteer ASF
activities to leave room in my life for other things. The HBase PMC
nominated Duo for this role, and Duo has kindly agreed! The board passed
this resolution in its meeting yesterday[1] and it is already official[2].
Congratulations, Duo, and thank you for continuing to honor the project
with your dedication.

Misty

[1] The minutes have not yet posted at the time of this email, but will be
available at http://www.apache.org/foundation/records/minutes/2019/.
[2] https://www.apache.org/foundation/#who-runs-the-asf


Re: Bits getting flipped in record value

2019-03-21 Thread Reid Chan
Maybe you can try unsetting the DATA_BLOCK_ENCODING attribute, then bulk
loading and running the live ingest feed again, to see if the bit flips
still happen, if that is convenient.
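For example, using the 'd' family from the table description quoted below
('NONE' disables the encoding, and a major compaction rewrites the existing
blocks without it; the table name is a placeholder):

  hbase> alter 'mytable', {NAME => 'd', DATA_BLOCK_ENCODING => 'NONE'}
  hbase> major_compact 'mytable'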


--

Best regards,
R.C




From: Austin Heyne 
Sent: 21 March 2019 23:28
To: user@hbase.apache.org
Subject: Re: Bits getting flipped in record value

We're using FAST_DIFF for the block encoding. For completeness here's
the rest:

{TABLE_ATTRIBUTES => {coprocessor$1 =>
's3://optix.rtb16/root/lib/geomesa-hbase-distributed-runtime_2.11-2.1.0.jar|org.locationtech.geomesa.hbase.coprocessor.GeoMesaCoprocessor|1073741823|'},
{NAME => 'd', BLOOMFILTER => 'NONE', DATA_BLOCK_ENCODING => 'FAST_DIFF',
COMPRESSION => 'SNAPPY'}

Thanks,
Austin

On 3/20/19 23:01, Reid Chan wrote:
> Does your table have `DATA_BLOCK_ENCODING` set?
>
>
>
>
>
> --
>
> Best regards,
> R.C
>
>
>
> 
> From: aheyne 
> Sent: 21 March 2019 10:06
> To: Sean Busbey
> Cc: user@hbase.apache.org
> Subject: Re: Bits getting flipped in record value
>
> Correct, no records will ever be updated.
>
> We do have a custom coprocessor loaded but before I mention that you
> should have more context about what we're actually doing. We're running
> GeoMesa [1] on top of HBase. Practically what this means is that for
> every record we write, we write twice, once to a spatio-temporal indexed
> table and another to an attribute indexed table. What we have seen is
> that the value in one of the tables, but not the other, is becoming
> corrupt. As far as we can tell, no custom read path code is involved
> since we've validated the raw binary values using direct HBase API
> access and since the write path has been verified with a second bulk
> load. Additionally, the fact the same code is writing the values for
> each index, I feel confident ruling out the write path. As I understand
> it so far the only manipulation of the values is happening during
> compactions.
>
> The coprocessor we're using is available here [2]. It's not doing
> anything too crazy, just filtering and depending on the query type,
> deserializing the row values and/or rolling up some aggregations.
>
> Thanks,
> Austin
>
> [1] https://www.geomesa.org/
> [2]
> https://github.com/locationtech/geomesa/blob/master/geomesa-hbase/geomesa-hbase-datastore/src/main/scala/org/locationtech/geomesa/hbase/coprocessor/GeoMesaCoprocessor.scala
>
> On 2019-03-20 21:34, Sean Busbey wrote:
>> So you're saying no records should ever be updated, right?
>>
>> Do you have any coprocessors loaded?
>>
>> On Wed, Mar 20, 2019, 20:32 aheyne  wrote:
>>
>>> I don't have the WALs but due to the nature of the data each
>>> record/key
>>> is unique. The keys for the data are generated using
>>> spatial-temporal
>>> dimensions of the observation.
>>>
>>> -Austin
>>>
>>> On 2019-03-20 21:25, Sean Busbey wrote:
>>>> Have you examined the wals for writes to the impacted cells to
>>> verify
>>>> an
>>>> update wasn't written with the change to the value?
>>>>
>>>> On Wed, Mar 20, 2019, 17:47 Austin Heyne  wrote:
>>>>
>>>>> Hey all,
>>>>>
>>>>> We're running HBase 1.4.8 on EMR 5.20 backed by S3 and we're
>>> seeing a
>>>>> bit get flipped in some record values.
>>>>>
> >>>>> We've performed a bulk ingest and bulk load of a large chunk of
>>> data
>>>>> and
>>>>> then pointed a live ingest feed to that table. After a period of
>>> time
>>>>> we
>>>>> found that a few records in the table had been corrupted and were
>>> one
>>>>> bit different from their original value. Since we saved the
>>> output of
>>>>> the bulk ingest we re-loaded those files and verified that at the
>>> time
>>>>> of bulk load the record was correct. This seems to us to indicate
>>> that
>>>>> at some point during the live ingest writes the record was
>>> corrupted.
>>>>> I've verified that the region that the record is in has never
>>> been
>>>>> split
>>>>> but it has received over 2 million write requests so there very
>>> likely
>>>>> could have been some minor compactions there.
>>>>>
>>>>> Has anyone seen anything like this before?
>>>>>
>>>>> Thanks,
>>>>> Austin
>>>>>
>>>>> --
>>>>> Austin L. Heyne
>>>>>
>>>>>
--
Austin L. Heyne



Re: Bits getting flipped in record value

2019-03-20 Thread Reid Chan
Does your table have `DATA_BLOCK_ENCODING` set?
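You can check from the shell ('t1' is a placeholder); DATA_BLOCK_ENCODING
shows up in each column family's attributes:

  hbase> describe 't1'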





--

Best regards,
R.C




From: aheyne 
Sent: 21 March 2019 10:06
To: Sean Busbey
Cc: user@hbase.apache.org
Subject: Re: Bits getting flipped in record value

Correct, no records will ever be updated.

We do have a custom coprocessor loaded but before I mention that you
should have more context about what we're actually doing. We're running
GeoMesa [1] on top of HBase. Practically what this means is that for
every record we write, we write twice, once to a spatio-temporal indexed
table and another to an attribute indexed table. What we have seen is
that the value in one of the tables, but not the other, is becoming
corrupt. As far as we can tell, no custom read path code is involved
since we've validated the raw binary values using direct HBase API
access and since the write path has been verified with a second bulk
load. Additionally, the fact the same code is writing the values for
each index, I feel confident ruling out the write path. As I understand
it so far the only manipulation of the values is happening during
compactions.

The coprocessor we're using is available here [2]. It's not doing
anything too crazy, just filtering and depending on the query type,
deserializing the row values and/or rolling up some aggregations.

Thanks,
Austin

[1] https://www.geomesa.org/
[2]
https://github.com/locationtech/geomesa/blob/master/geomesa-hbase/geomesa-hbase-datastore/src/main/scala/org/locationtech/geomesa/hbase/coprocessor/GeoMesaCoprocessor.scala

On 2019-03-20 21:34, Sean Busbey wrote:
> So you're saying no records should ever be updated, right?
>
> Do you have any coprocessors loaded?
>
> On Wed, Mar 20, 2019, 20:32 aheyne  wrote:
>
>> I don't have the WALs but due to the nature of the data each
>> record/key
>> is unique. The keys for the data are generated using
>> spatial-temporal
>> dimensions of the observation.
>>
>> -Austin
>>
>> On 2019-03-20 21:25, Sean Busbey wrote:
>>> Have you examined the wals for writes to the impacted cells to
>> verify
>>> an
>>> update wasn't written with the change to the value?
>>>
>>> On Wed, Mar 20, 2019, 17:47 Austin Heyne  wrote:
>>>
 Hey all,

 We're running HBase 1.4.8 on EMR 5.20 backed by S3 and we're
>> seeing a
 bit get flipped in some record values.

 We've performed a bulk ingest and bulk load of a large chunk of
>> data
 and
 then pointed a live ingest feed to that table. After a period of
>> time
 we
 found that a few records in the table had been corrupted and were
>> one
 bit different from their original value. Since we saved the
>> output of
 the bulk ingest we re-loaded those files and verified that at the
>> time
 of bulk load the record was correct. This seems to us to indicate
>> that
 at some point during the live ingest writes the record was
>> corrupted.

 I've verified that the region that the record is in has never
>> been
 split
 but it has received over 2 million write requests so there very
>> likely
 could have been some minor compactions there.

 Has anyone seen anything like this before?

 Thanks,
 Austin

 --
 Austin L. Heyne




Re: [ANNOUNCE] Please welcome Peter Somogyi to the HBase PMC

2019-01-21 Thread Reid Chan
Congratulations! Peter! (clapping)


--

Best regards,
R.C




From: Guanghao Zhang 
Sent: 22 January 2019 11:07
To: Hbase-User
Cc: HBase Dev List
Subject: Re: [ANNOUNCE] Please welcome Peter Somogyi to the HBase PMC

Congratulations!

Yu Li wrote on Tuesday, Jan 22, 2019 at 10:48 AM:

> Congratulations, Peter!
>
> Best Regards,
> Yu
>
>
> On Tue, 22 Jan 2019 at 10:38, Guangxu Cheng 
> wrote:
>
> > Congratulations Peter!
> >
> > -
> > Best Regards
> > Guangxu Cheng
> >
> > > Allan Yang wrote on Tuesday, Jan 22, 2019 at 10:15 AM:
> >
> > > Congratulations Peter!
> > > Best Regards
> > > Allan Yang
> > >
> > >
> > > > Pankaj kr wrote on Tuesday, Jan 22, 2019 at 9:49 AM:
> > >
> > > >
> > > > Congratulations Peter...!!!
> > > >
> > > > Regards,
> > > > Pankaj
> > > >
> > > > --
> > > > Pankaj Kumar
> > > > M: +91-9535197664(India Contact Number)
> > > > E: pankaj...@huawei.com
> > > > 2012 Laboratories - Bangalore Research Institute, IT BU Branch
> > > > Dept., HTIPL
> > > > From:Duo Zhang 
> > > > To:HBase Dev List ;hbase-user <
> > > user@hbase.apache.org
> > > > >
> > > > Date:2019-01-22 07:06:43
> > > > Subject:[ANNOUNCE] Please welcome Peter Somogyi to the HBase PMC
> > > >
> > > > On behalf of the Apache HBase PMC I am pleased to announce that Peter
> > > > Somogyi
> > > > has accepted our invitation to become a PMC member on the Apache
> HBase
> > > > project.
> > > > We appreciate Peter stepping up to take more responsibility in the
> > HBase
> > > > project.
> > > >
> > > > Please join me in welcoming Peter to the HBase PMC!
> > > >
> > >
> >
>


Re: Does hbase support username and password?

2018-10-25 Thread Reid Chan
>> Does hbase support username and password?

No, it doesn't.

 >>  feature that HBase users could use username and password had contributed 
 >> this feature to the community

That's a customized feature: a combination of Apache Derby and some hacking
in the HDFS code, if I recall correctly.
It has not been contributed to HBase, for sure.

--

Best regards,
R.C




From: yuhang li <89389114...@gmail.com>
Sent: 25 October 2018 15:40
To: user@hbase.apache.org
Subject: Does hbase support username and password?

Hello everyone!
I am managing an HBase cluster for multiple users, and for safety I want to
control the authority of each of them. The native ACL feature is based on
Kerberos, but it is not easy for me to persuade my users to use Kerberos for
authorization and authentication, which is also not easy for them to use.
I heard at HBaseCon Asia 2018 that Alibaba had developed a feature that lets
HBase users use a username and password, and had contributed this feature to
the community. I wonder where I can find this contribution? In JIRA or a git
repository?
Can someone help clarify this?


Re: Unable to read from Kerberised HBase

2018-07-12 Thread Reid Chan
I think there's a possibility that some of the client logins failed.

Have you tried checking your krb5kdc.log to see the login audit, or turning
on -Dsun.security.krb5.debug=true?

Based on your situation, I suggest using a UPN (User Principal Name) with
the format "name@REALM" instead of an SPN.



R.C




From: Lalit Jadhav 
Sent: 12 July 2018 19:41:03
To: user@hbase.apache.org
Subject: Re: Unable to read from Kerberised HBase

Yes, Reid, every machine has its specific keytab and corresponding principal.


On Wed, Jul 11, 2018 at 3:29 PM, Reid Chan  wrote:

> Does every machine where the HBase client runs have your specific keytab
> and corresponding principal?
>
> From the snippet, I can tell that you're using a service principal to log
> in (with the name/hostname@REALM format), and each principal should be
> different due to the different hostnames.
>
>
>
> R.C
>
>
>
> 
> From: Lalit Jadhav 
> Sent: 11 July 2018 17:45:22
> To: user@hbase.apache.org
> Subject: Re: Unable to read from Kerberised HBase
>
> Yes.
>
> On Wed, Jul 11, 2018 at 2:58 PM, Reid Chan  wrote:
>
> > Does your hbase client run on multiple machines?
> >
> > R.C
> >
> >
> > 
> > From: Lalit Jadhav 
> > Sent: 11 July 2018 14:31:40
> > To: user@hbase.apache.org
> > Subject: Re: Unable to read from Kerberised HBase
> >
> > Tried with given snippet,
> >
> > It works when a table is placed on a single RegionServer, but when the
> > table is distributed across the cluster I am not able to scan it. Let me
> > know if I am going wrong somewhere.
> >
> > On Tue, Jul 10, 2018 at 2:13 PM, Reid Chan 
> wrote:
> >
> > > Try this way:
> > >
> > > // run the connection setup as the logged-in UGI; the try/catch is
> > > // needed because PrivilegedAction cannot throw the checked IOException
> > > Connection connection = ugi.doAs(new PrivilegedAction<Connection>() {
> > >   @Override
> > >   public Connection run() {
> > >     try {
> > >       return ConnectionFactory.createConnection(configuration);
> > >     } catch (IOException e) {
> > >       throw new RuntimeException(e);
> > >     }
> > >   }
> > > });
> > >
> > >
> > >
> > > R.C
> > >
> > >
> > >
> > > 
> > > From: Lalit Jadhav 
> > > Sent: 10 July 2018 16:35:15
> > > To: user@hbase.apache.org
> > > Subject: Re: Unable to read from Kerberised HBase
> > >
> > > Code Snipper:
> > >
> > > Configuration configuration = HBaseConfiguration.create();
> > > configuration.set("hbase.zookeeper.quorum",  "QUARAM");
> > > configuration.set("hbase.master", "MASTER");
> > > configuration.set("hbase.zookeeper.property.clientPort", "2181");
> > > configuration.set("hadoop.security.authentication", "kerberos");
> > > configuration.set("hbase.security.authentication", "kerberos");
> > > configuration.set("zookeeper.znode.parent", "/hbase-secure");
> > > configuration.set("hbase.cluster.distributed", "true");
> > > configuration.set("hbase.rpc.protection", "authentication");
> > > configuration.set("hbase.regionserver.kerberos.principal",
> > > "hbase/Principal@realm");
> > > configuration.set("hbase.regionserver.keytab.file",
> > > "/home/developers/Desktop/hbase.service.keytab3");
> > > configuration.set("hbase.master.kerberos.principal",
> > > "hbase/HbasePrincipal@realm");
> > > configuration.set("hbase.master.keytab.file",
> > > "/etc/security/keytabs/hbase.service.keytab");
> > >
> > > System.setProperty("java.security.krb5.conf","/etc/krb5.conf");
> > >
> > > String principal = System.getProperty("kerberosPrincipal",
> > > "hbase/HbasePrincipal@realm");
> > > String keytabLocation = System.getProperty("kerberosKeytab",
> > > "/etc/security/keytabs/hbase.service.keytab");
> > > UserGroupInformation.setConfiguration(configuration);
> > > UserGroupInformation.loginUserFromKeytab(principal,
> keytabLocation);
> > > UserGroupInformation userGroupInformation = UserGroupInformation.
> > > loginUserFromKeytabAndReturnUGI("hbase/HbasePrincipal@realm",
> > > "/etc/security/keytabs/hbase.service.keytab");
> > > UserGroupInformation.setLoginUser(us

Re: Query for OldWals and use of WAl for Hbase indexer

2018-07-12 Thread Reid Chan
Please check the comments I left on the JIRA you filed.


From: Manjeet Singh 
Sent: 12 July 2018 14:55:40
To: user@hbase.apache.org
Subject: Re: Query for OldWals and use of WAl for Hbase indexer

Hi Reid,

You suggested I directly delete the oldWALs using the HDFS command "hdfs
dfs -rm". The question is: is that really safe, so that I will not lose any
data or indexing from the system?

Thanks
Manjeet Singh

On Thu, Jul 12, 2018 at 12:09 PM, Manjeet Singh 
wrote:

> Hi
>
> I have created HBASE-20877
> <https://issues.apache.org/jira/browse/HBASE-20877> for this; I request
> that you please move it into an active sprint.
>
> Thanks
> Manjeet Singh
>
> On Thu, Jul 12, 2018 at 7:42 AM, Reid Chan  wrote:
>
>> oldWALs are supposed to be cleaned by a master background chore; I also
>> doubt they are needed.
>>
>> HBASE-20352 (for the 1.x versions) is to speed up cleaning oldWALs; it
>> may address your concern that "OldWals is quite huge".
>>
>>
>> R.C
>>
>>
>>
>> 
>> From: Manjeet Singh 
>> Sent: 12 July 2018 08:19:21
>> To: user@hbase.apache.org
>> Subject: Re: Query for OldWals and use of WAl for Hbase indexer
>>
>> I have one more question
>>
>> If Solr has its own data, meaning it maintains data in its shards, and
>> HBase maintains data in its data folder... why are the oldWALs still
>> needed?
>>
>> Thanks
>> Manjeet singh
>>
>> On Wed, 11 Jul 2018, 23:19 Manjeet Singh, 
>> wrote:
>>
>> > Thanks Sean for your reply
>> >
>> > I still have some questions unanswered:
>> > Q1: How is HBase synchronized with the HBase indexer?
>> > Q2: What optimizations can I apply?
>> > Q3: As is clear from my stats, the data in oldWALs is quite huge and is
>> > not getting cleared by my HMaster; how can I improve my HDFS space
>> > issue?
>> >
>> > Thanks
>> > Manjeet Singh
>> >
>> > On Wed, Jul 11, 2018 at 9:33 PM, Sean Busbey  wrote:
>> >
>> >> Presuming you're using the Lily indexer[1], yes it relies on hbase's
>> >> built in cross-cluster replication.
>> >>
>> >> The replication system stores WALs until it can successfully send them
>> >> for replication. If you look in ZK you should be able to see which
>> >> regionserver(s) are waiting to send those WALs over. The easiest way
>> >> to do this is probably to look at the "zk dump" web page on the
>> >> Master's web ui[2].
>> >>
>> >> Once you have the particular region server(s), take a look at their
>> >> logs for messages about difficulty sending edits to the replication
>> >> peer you have set up for the destination solr collection.
>> >>
>> >> If you remove the WALs then the solr collection will have a hole in
>> >> it. Depending on how far behind you are, it might be quicker to 1)
>> >> remove the replication peer, 2) wait for old wals to clear, 3)
>> >> reenable replication, 4) use a batch indexing tool to index data
>> >> already in the table.
>> >>
>> >> [1]:
>> >>
>> >> http://ngdata.github.io/hbase-indexer/
>> >>
>> >> [2]:
>> >>
>> >> The specifics will vary depending on your installation, but the page
>> >> is essentially at a URL like
>> >> https://active-master-host.example.com:22002/zk.jsp
>> >>
>> >> the link is on the master UI landing page, near the bottom, in the
>> >> description of the "ZooKeeper Quorum" row. it's the end of "Addresses
>> >> of all registered ZK servers. For more, see zk dump."
>> >>
>> >> On Wed, Jul 11, 2018 at 10:16 AM, Manjeet Singh
>> >>  wrote:
>> >> > Hi All
>> >> >
>> >> > I have a query regarding Hbase replication and OldWals
>> >> >
>> >> > Hbase version 1.2.1
>> >> >
>> >> > To enable Hbase indexing we use below command on table
>> >> >
>> >> > alter '', {NAME => 'CF1', REPLICATION_SCOPE => 1}
>> >> >
>> >> > By doing this, replication actually gets enabled, as hbase-indexer
>> >> > requires it; as per my understanding the indexer uses the HBase WAL
>> >> > (please correct me if I am wrong).
>> >> >
>> >> > So the question is: how does HBase synchronize with the Solr indexer?
>> >> > What is the role of replication? What optimizations can we apply in
>> >> > order to reduce the data size?
>> >> >
>> >> >
>> >> > I can see that our oldWALs are filling up; if the HMaster itself is
>> >> > taking care of them, why has it reached 7.2 TB? What if I delete it;
>> >> > does that impact Solr indexing?
>> >> >
>> >> > 7.2 K   21.5 K  /hbase/.hbase-snapshot
>> >> > 0   0   /hbase/.tmp
>> >> > 0   0   /hbase/MasterProcWALs
>> >> > 18.3 G  60.2 G  /hbase/WALs
>> >> > 28.7 G  86.1 G  /hbase/archive
>> >> > 0   0   /hbase/corrupt
>> >> > 1.7 T   5.2 T   /hbase/data
>> >> > 42  126 /hbase/hbase.id
>> >> > 7   21  /hbase/hbase.version
>> >> > 7.2 T   21.6 T  /hbase/oldWALs
>> >> >
>> >> >
>> >> >
>> >> >
>> >> > Thanks
>> >> > Manjeet Singh
>> >>
>> >
>> >
>> >
>> > --
>> > luv all
>> >
>>
>
>
>
> --
> luv all
>



--
luv all
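Sean's four steps above, sketched as commands. The peer id is whatever
list_peers shows for your indexer, and the batch-indexing invocation is an
assumption based on the Lily indexer's MapReduce tool, so check its
documentation for the exact jar name and flags:

  hbase> list_peers
  hbase> remove_peer '<peer-id-of-the-indexer>'   # step 1: remove the peer
  $ hdfs dfs -du -s -h /hbase/oldWALs             # step 2: watch it drain
  # step 3: re-register the indexer so it adds its peer back, then
  # step 4: batch index the existing data (assumed invocation):
  $ hadoop jar hbase-indexer-mr-*-job.jar \
      --hbase-indexer-zk zk1:2181 --hbase-indexer-name myindexer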


Re: Query for OldWals and use of WAl for Hbase indexer

2018-07-11 Thread Reid Chan
oldWALs are supposed to be cleaned by a master background chore; I also doubt
they are still needed.

HBASE-20352 (for the 1.x versions) speeds up the cleaning of oldWALs; it may
address your concern that "OldWals is quite huge".


R.C




From: Manjeet Singh 
Sent: 12 July 2018 08:19:21
To: user@hbase.apache.org
Subject: Re: Query for OldWals and use of WAl for Hbase indexer

I have one more question.

If Solr has its own data, meaning it maintains data in its shards, and HBase
maintains its data in the data folder... why are oldWALs still needed?

Thanks
Manjeet singh

On Wed, 11 Jul 2018, 23:19 Manjeet Singh, 
wrote:

> Thanks Sean for your reply
>
> I still have some questions unanswered:
> Q1: How is HBase synchronized with the HBase indexer?
> Q2: What optimizations can I apply?
> Q3: As is clear from my stats, the data in oldWALs is quite large and is not
> being cleared by my HMaster. How can I resolve the HDFS space issue caused
> by this?
>
> Thanks
> Manjeet Singh
>
> On Wed, Jul 11, 2018 at 9:33 PM, Sean Busbey  wrote:
>
>> Presuming you're using the Lily indexer[1], yes it relies on hbase's
>> built in cross-cluster replication.
>>
>> The replication system stores WALs until it can successfully send them
>> for replication. If you look in ZK you should be able to see which
>> regionserver(s) are waiting to send those WALs over. The easiest way
>> to do this is probably to look at the "zk dump" web page on the
>> Master's web ui[2].
>>
>> Once you have the particular region server(s), take a look at their
>> logs for messages about difficulty sending edits to the replication
>> peer you have set up for the destination solr collection.
>>
>> If you remove the WALs then the solr collection will have a hole in
>> it. Depending on how far behind you are, it might be quicker to 1)
>> remove the replication peer, 2) wait for old wals to clear, 3)
>> reenable replication, 4) use a batch indexing tool to index data
>> already in the table.
>>
>> [1]:
>>
>> http://ngdata.github.io/hbase-indexer/
>>
>> [2]:
>>
>> The specifics will vary depending on your installation, but the page
>> is essentially at a URL like
>> https://active-master-host.example.com:22002/zk.jsp
>>
>> the link is on the master UI landing page, near the bottom, in the
>> description of the "ZooKeeper Quorum" row. it's the end of "Addresses
>> of all registered ZK servers. For more, see zk dump."
>>
>> On Wed, Jul 11, 2018 at 10:16 AM, Manjeet Singh
>>  wrote:
>> > Hi All
>> >
>> > I have a query regarding Hbase replication and OldWals
>> >
>> > Hbase version 1.2.1
>> >
>> > To enable HBase indexing we use the below command on the table:
>> >
>> > alter '', {NAME => 'CF1', REPLICATION_SCOPE => 1}
>> >
>> > By doing this, replication actually gets enabled, as hbase-indexer
>> > requires it; as per my understanding, the indexer uses the HBase WAL
>> > (please correct me if I am wrong).
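
As a side note, a hedged Java equivalent of that shell alter, assuming a
hypothetical table name "my_table" and the 1.x admin API:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class EnableReplicationScope {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("my_table");   // hypothetical name
      HTableDescriptor desc = admin.getTableDescriptor(table);
      // Fetch the existing family so its other settings are preserved.
      HColumnDescriptor cf = desc.getFamily(Bytes.toBytes("CF1"));
      cf.setScope(HConstants.REPLICATION_SCOPE_GLOBAL);  // REPLICATION_SCOPE => 1
      admin.modifyColumn(table, cf);                     // push the change
    }
  }
}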
>> >
>> > So the question is: how does HBase synchronize with the Solr indexer?
>> > What is the role of replication? What optimizations can we apply in order
>> > to reduce the data size?
>> >
>> >
>> > I can see that our oldWALs are filling up. If the HMaster itself takes
>> > care of this, why has it reached 7.2 TB? And if I delete it, does that
>> > impact Solr indexing?
>> >
>> > 7.2 K   21.5 K  /hbase/.hbase-snapshot
>> > 0   0   /hbase/.tmp
>> > 0   0   /hbase/MasterProcWALs
>> > 18.3 G  60.2 G  /hbase/WALs
>> > 28.7 G  86.1 G  /hbase/archive
>> > 0   0   /hbase/corrupt
>> > 1.7 T   5.2 T   /hbase/data
>> > 42  126 /hbase/hbase.id
>> > 7   21  /hbase/hbase.version
>> > 7.2 T   21.6 T  /hbase/oldWALs
>> >
>> >
>> >
>> >
>> > Thanks
>> > Manjeet Singh
>>
>
>
>
> --
> luv all
>


Re: Unable to read from Kerberised HBase

2018-07-11 Thread Reid Chan
Does every machine where the hbase client runs have your specific keytab and
corresponding principal?

From the snippet, I can tell that you're using a service principal to log in
(with the name/hostname@REALM format), and each principal should be different
due to their different hostnames.
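
A minimal sketch of the usual alternative, logging in with a single
host-independent user principal; the "appuser@REALM" principal and keytab path
are hypothetical and would have to exist on every client machine:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.security.UserGroupInformation;

public class ClientLogin {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);
    // A user principal works from any host, unlike hbase/<hostname>@REALM.
    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        "appuser@REALM", "/etc/security/keytabs/appuser.keytab");
    System.out.println("Logged in as: " + ugi.getUserName());
  }
}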



R.C




From: Lalit Jadhav 
Sent: 11 July 2018 17:45:22
To: user@hbase.apache.org
Subject: Re: Unable to read from Kerberised HBase

Yes.

On Wed, Jul 11, 2018 at 2:58 PM, Reid Chan  wrote:

> Does your hbase client run on multiple machines?
>
> R.C
>
>
> 
> From: Lalit Jadhav 
> Sent: 11 July 2018 14:31:40
> To: user@hbase.apache.org
> Subject: Re: Unable to read from Kerberised HBase
>
> Tried with the given snippet.
>
> It works when a table is placed on a single RegionServer, but when the table
> is distributed across the cluster, I am not able to scan it. Let me know if
> I am going wrong somewhere.
>
> On Tue, Jul 10, 2018 at 2:13 PM, Reid Chan  wrote:
>
> > Try this way:
> >
> >
> > Connection connection = ugi.doAs(new PrivilegedAction<Connection>() {
> >
> > @Override
> > public Connection run() {
> >   return ConnectionFactory.createConnection(configuration);
> > }
> >   });
> >
> >
> >
> > R.C
> >
> >
> >
> > 
> > From: Lalit Jadhav 
> > Sent: 10 July 2018 16:35:15
> > To: user@hbase.apache.org
> > Subject: Re: Unable to read from Kerberised HBase
> >
> > Code snippet:
> >
> > Configuration configuration = HBaseConfiguration.create();
> > configuration.set("hbase.zookeeper.quorum",  "QUARAM");
> > configuration.set("hbase.master", "MASTER");
> > configuration.set("hbase.zookeeper.property.clientPort", "2181");
> > configuration.set("hadoop.security.authentication", "kerberos");
> > configuration.set("hbase.security.authentication", "kerberos");
> > configuration.set("zookeeper.znode.parent", "/hbase-secure");
> > configuration.set("hbase.cluster.distributed", "true");
> > configuration.set("hbase.rpc.protection", "authentication");
> > configuration.set("hbase.regionserver.kerberos.principal",
> > "hbase/Principal@realm");
> > configuration.set("hbase.regionserver.keytab.file",
> > "/home/developers/Desktop/hbase.service.keytab3");
> > configuration.set("hbase.master.kerberos.principal",
> > "hbase/HbasePrincipal@realm");
> > configuration.set("hbase.master.keytab.file",
> > "/etc/security/keytabs/hbase.service.keytab");
> >
> > System.setProperty("java.security.krb5.conf","/etc/krb5.conf");
> >
> > String principal = System.getProperty("kerberosPrincipal",
> > "hbase/HbasePrincipal@realm");
> > String keytabLocation = System.getProperty("kerberosKeytab",
> > "/etc/security/keytabs/hbase.service.keytab");
> > UserGroupInformation.setConfiguration(configuration);
> > UserGroupInformation.loginUserFromKeytab(principal, keytabLocation);
> > UserGroupInformation userGroupInformation = UserGroupInformation.
> > loginUserFromKeytabAndReturnUGI("hbase/HbasePrincipal@realm",
> > "/etc/security/keytabs/hbase.service.keytab");
> > UserGroupInformation.setLoginUser(userGroupInformation);
> >
> >Connection connection =
> > ConnectionFactory.createConnection(configuration);
> >
> >
> > Any more logs about login failure or success or related? - No, I only got
> > above logs.
> >
> >
> > On Tue, Jul 10, 2018 at 1:58 PM, Reid Chan 
> wrote:
> >
> > > Any more logs about login failure or success or related?
> > >
> > > And can you show the code snippet of connection creation?
> > > 
> > > From: Lalit Jadhav 
> > > Sent: 10 July 2018 16:06:32
> > > To: user@hbase.apache.org
> > > Subject: Re: Unable to read from Kerberised HBase
> > >
> > > Table only contains 100 rows. Still not able to scan.
> > >
> > > On Tue, Jul 10, 2018, 12:21 PM anil gupta 
> wrote:
> > >
> > > As per the error message, your scan ran for more than 1 minute but the
> > > timeout is set to 1 minute

Re: Unable to read from Kerberised HBase

2018-07-11 Thread Reid Chan
Does your hbase client run on multiple machines?

R.C



From: Lalit Jadhav 
Sent: 11 July 2018 14:31:40
To: user@hbase.apache.org
Subject: Re: Unable to read from Kerberised HBase

Tried with the given snippet.

It works when a table is placed on a single RegionServer, but when the table is
distributed across the cluster, I am not able to scan it. Let me know if
I am going wrong somewhere.

On Tue, Jul 10, 2018 at 2:13 PM, Reid Chan  wrote:

> Try this way:
>
>
> Connection connection = ugi.doAs(new PrivilegedAction<Connection>() {
>
> @Override
> public Connection run() {
>   return ConnectionFactory.createConnection(configuration);
> }
>   });
>
>
>
> R.C
>
>
>
> 
> From: Lalit Jadhav 
> Sent: 10 July 2018 16:35:15
> To: user@hbase.apache.org
> Subject: Re: Unable to read from Kerberised HBase
>
> Code snippet:
>
> Configuration configuration = HBaseConfiguration.create();
> configuration.set("hbase.zookeeper.quorum",  "QUARAM");
> configuration.set("hbase.master", "MASTER");
> configuration.set("hbase.zookeeper.property.clientPort", "2181");
> configuration.set("hadoop.security.authentication", "kerberos");
> configuration.set("hbase.security.authentication", "kerberos");
> configuration.set("zookeeper.znode.parent", "/hbase-secure");
> configuration.set("hbase.cluster.distributed", "true");
> configuration.set("hbase.rpc.protection", "authentication");
> configuration.set("hbase.regionserver.kerberos.principal",
> "hbase/Principal@realm");
> configuration.set("hbase.regionserver.keytab.file",
> "/home/developers/Desktop/hbase.service.keytab3");
> configuration.set("hbase.master.kerberos.principal",
> "hbase/HbasePrincipal@realm");
> configuration.set("hbase.master.keytab.file",
> "/etc/security/keytabs/hbase.service.keytab");
>
> System.setProperty("java.security.krb5.conf","/etc/krb5.conf");
>
> String principal = System.getProperty("kerberosPrincipal",
> "hbase/HbasePrincipal@realm");
> String keytabLocation = System.getProperty("kerberosKeytab",
> "/etc/security/keytabs/hbase.service.keytab");
> UserGroupInformation.setConfiguration(configuration);
> UserGroupInformation.loginUserFromKeytab(principal, keytabLocation);
> UserGroupInformation userGroupInformation = UserGroupInformation.
> loginUserFromKeytabAndReturnUGI("hbase/HbasePrincipal@realm",
> "/etc/security/keytabs/hbase.service.keytab");
> UserGroupInformation.setLoginUser(userGroupInformation);
>
>Connection connection =
> ConnectionFactory.createConnection(configuration);
>
>
> Any more logs about login failure or success or related? - No, I only got
> above logs.
>
>
> On Tue, Jul 10, 2018 at 1:58 PM, Reid Chan  wrote:
>
> > Any more logs about login failure or success or related?
> >
> > And can you show the code snippet of connection creation?
> > 
> > From: Lalit Jadhav 
> > Sent: 10 July 2018 16:06:32
> > To: user@hbase.apache.org
> > Subject: Re: Unable to read from Kerberised HBase
> >
> > Table only contains 100 rows. Still not able to scan.
> >
> > On Tue, Jul 10, 2018, 12:21 PM anil gupta  wrote:
> >
> > > As per the error message, your scan ran for more than 1 minute but the
> > > timeout is set to 1 minute; hence the error. Try doing smaller scans or
> > > increasing the timeout. (PS: HBase is mostly good for short scans, not
> > > full-table scans.)
> > >
> > > On Mon, Jul 9, 2018 at 8:37 PM, Lalit Jadhav <
> lalit.jad...@nciportal.com
> > >
> > > wrote:
> > >
> > > > While connecting to remote HBase cluster, I can create Table and get
> > > Table
> > > > Listing.  But unable to scan Table using Java API. Below is code
> > > >
> > > > configuration.set("hbase.zookeeper.quorum", "QUARAM");
> > > > configuration.set("hbase.master", "MASTER");
> > > > configuration.set("hbase.zookeeper.property.clientPort",
> "2181");
> > > > configuration.set("hadoop.security.authentication", "kerberos");
> > > > configuration.set("hbase.security.aut

Re: Unable to read from Kerberised HBase

2018-07-10 Thread Reid Chan
Try this way:


Connection connection = ugi.doAs(new PrivilegedAction<Connection>() {

@Override
public Connection run() {
  return ConnectionFactory.createConnection(configuration);
}
  });
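
Put together, a self-contained sketch of the pattern (keytab login, then
creating the connection and scanning inside doAs); the principal, keytab,
quorum, and table values below are hypothetical:

import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.security.UserGroupInformation;

public class SecureScan {
  public static void main(String[] args) throws Exception {
    final Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "zk1,zk2,zk3");       // hypothetical quorum
    conf.set("hadoop.security.authentication", "kerberos");
    conf.set("hbase.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        "appuser@REALM", "/etc/security/keytabs/appuser.keytab"); // hypothetical
    ugi.doAs(new PrivilegedExceptionAction<Void>() {
      @Override
      public Void run() throws Exception {
        // The connection and every RPC it makes run as the logged-in user.
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("DEMO_TABLE"));
             ResultScanner scanner = table.getScanner(new Scan())) {
          for (Result r : scanner) {
            System.out.println(r);
          }
        }
        return null;
      }
    });
  }
}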



R.C




From: Lalit Jadhav 
Sent: 10 July 2018 16:35:15
To: user@hbase.apache.org
Subject: Re: Unable to read from Kerberised HBase

Code snippet:

Configuration configuration = HBaseConfiguration.create();
configuration.set("hbase.zookeeper.quorum",  "QUARAM");
configuration.set("hbase.master", "MASTER");
configuration.set("hbase.zookeeper.property.clientPort", "2181");
configuration.set("hadoop.security.authentication", "kerberos");
configuration.set("hbase.security.authentication", "kerberos");
configuration.set("zookeeper.znode.parent", "/hbase-secure");
configuration.set("hbase.cluster.distributed", "true");
configuration.set("hbase.rpc.protection", "authentication");
configuration.set("hbase.regionserver.kerberos.principal",
"hbase/Principal@realm");
configuration.set("hbase.regionserver.keytab.file",
"/home/developers/Desktop/hbase.service.keytab3");
configuration.set("hbase.master.kerberos.principal",
"hbase/HbasePrincipal@realm");
configuration.set("hbase.master.keytab.file",
"/etc/security/keytabs/hbase.service.keytab");

System.setProperty("java.security.krb5.conf","/etc/krb5.conf");

String principal = System.getProperty("kerberosPrincipal",
"hbase/HbasePrincipal@realm");
String keytabLocation = System.getProperty("kerberosKeytab",
"/etc/security/keytabs/hbase.service.keytab");
UserGroupInformation.setConfiguration(configuration);
UserGroupInformation.loginUserFromKeytab(principal, keytabLocation);
UserGroupInformation userGroupInformation = UserGroupInformation.
loginUserFromKeytabAndReturnUGI("hbase/HbasePrincipal@realm",
"/etc/security/keytabs/hbase.service.keytab");
UserGroupInformation.setLoginUser(userGroupInformation);

   Connection connection =
ConnectionFactory.createConnection(configuration);


Any more logs about login failure or success or related? - No, I only got
above logs.


On Tue, Jul 10, 2018 at 1:58 PM, Reid Chan  wrote:

> Any more logs about login failure or success or related?
>
> And can you show the code snippet of connection creation?
> 
> From: Lalit Jadhav 
> Sent: 10 July 2018 16:06:32
> To: user@hbase.apache.org
> Subject: Re: Unable to read from Kerberised HBase
>
> Table only contains 100 rows. Still not able to scan.
>
> On Tue, Jul 10, 2018, 12:21 PM anil gupta  wrote:
>
> > As per the error message, your scan ran for more than 1 minute but the
> > timeout is set to 1 minute; hence the error. Try doing smaller scans or
> > increasing the timeout. (PS: HBase is mostly good for short scans, not
> > full-table scans.)
> >
> > On Mon, Jul 9, 2018 at 8:37 PM, Lalit Jadhav  >
> > wrote:
> >
> > > While connecting to remote HBase cluster, I can create Table and get
> > Table
> > > Listing.  But unable to scan Table using Java API. Below is code
> > >
> > > configuration.set("hbase.zookeeper.quorum", "QUARAM");
> > > configuration.set("hbase.master", "MASTER");
> > > configuration.set("hbase.zookeeper.property.clientPort", "2181");
> > > configuration.set("hadoop.security.authentication", "kerberos");
> > > configuration.set("hbase.security.authentication", "kerberos");
> > > configuration.set("zookeeper.znode.parent", "/hbase-secure");
> > > configuration.set("hbase.cluster.distributed", "true");
> > > configuration.set("hbase.rpc.protection", "authentication");
> > > configuration.set("hbase.regionserver.kerberos.principal",
> > > "hbase/Principal@realm");
> > > configuration.set("hbase.regionserver.keytab.file",
> > > "/home/developers/Desktop/hbase.service.keytab3");
> > > configuration.set("hbase.master.kerberos.principal",
> > > "hbase/HbasePrincipal@realm");
> > > configuration.set("hbase.master.keytab.file",
> > > "/etc/security/keytabs/hbase.service.keytab");
> > >
> > > System.setProperty("java.security.krb5.conf"

Re: Unable to read from Kerberised HBase

2018-07-10 Thread Reid Chan
Any more logs about login failure or success or related?

And can you show the code snippet of connection creation?

From: Lalit Jadhav 
Sent: 10 July 2018 16:06:32
To: user@hbase.apache.org
Subject: Re: Unable to read from Kerberised HBase

Table only contains 100 rows. Still not able to scan.

On Tue, Jul 10, 2018, 12:21 PM anil gupta  wrote:

> As per the error message, your scan ran for more than 1 minute but the timeout
> is set to 1 minute; hence the error. Try doing smaller scans or increasing the
> timeout. (PS: HBase is mostly good for short scans, not full-table scans.)
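
For illustration, a hedged sketch of both remedies on the client side; the
timeout value and row keys below are examples only:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanTuning {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Raise the per-call scanner timeout from the 60 s default (example: 3 min).
    conf.setInt("hbase.client.scanner.timeout.period", 180000);

    Scan scan = new Scan();
    scan.setCaching(100);  // fewer rows per RPC keeps each call under the timeout
    // Better still: bound the scan instead of walking the whole table.
    scan.setStartRow(Bytes.toBytes("row-0001"));
    scan.setStopRow(Bytes.toBytes("row-0100"));
  }
}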
>
> On Mon, Jul 9, 2018 at 8:37 PM, Lalit Jadhav 
> wrote:
>
> > While connecting to remote HBase cluster, I can create Table and get
> Table
> > Listing.  But unable to scan Table using Java API. Below is code
> >
> > configuration.set("hbase.zookeeper.quorum", "QUARAM");
> > configuration.set("hbase.master", "MASTER");
> > configuration.set("hbase.zookeeper.property.clientPort", "2181");
> > configuration.set("hadoop.security.authentication", "kerberos");
> > configuration.set("hbase.security.authentication", "kerberos");
> > configuration.set("zookeeper.znode.parent", "/hbase-secure");
> > configuration.set("hbase.cluster.distributed", "true");
> > configuration.set("hbase.rpc.protection", "authentication");
> > configuration.set("hbase.regionserver.kerberos.principal",
> > "hbase/Principal@realm");
> > configuration.set("hbase.regionserver.keytab.file",
> > "/home/developers/Desktop/hbase.service.keytab3");
> > configuration.set("hbase.master.kerberos.principal",
> > "hbase/HbasePrincipal@realm");
> > configuration.set("hbase.master.keytab.file",
> > "/etc/security/keytabs/hbase.service.keytab");
> >
> > System.setProperty("java.security.krb5.conf","/etc/krb5.conf");
> >
> > String principal = System.getProperty("kerberosPrincipal",
> > "hbase/HbasePrincipal@realm");
> > String keytabLocation = System.getProperty("kerberosKeytab",
> > "/etc/security/keytabs/hbase.service.keytab");
> > UserGroupInformation.setConfiguration(configuration);
> > UserGroupInformation.loginUserFromKeytab(principal, keytabLocation);
> > UserGroupInformation userGroupInformation =
> > UserGroupInformation.loginUserFromKeytabAndReturnUGI("hbase/HbasePrincipal@realm",
> > "/etc/security/keytabs/hbase.service.keytab");
> > UserGroupInformation.setLoginUser(userGroupInformation);
> >
> > I am getting the below errors:
> >
> > org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
> > attempts=36, exceptions: Mon Jul 09 18:45:57 IST 2018, null,
> > java.net.SocketTimeoutException: callTimeout=60000, callDuration=64965:
> > row '' on table 'DEMO_TABLE' at
> > region=DEMO_TABLE,,1529819280641.40f0e7dc4159937619da237915be8b11.,
> > hostname=dn1-devup.mstorm.com,60020,1531051433899, seqNum=526190
> >
> > Exception : java.io.IOException: Failed to get result within timeout,
> > timeout=60000ms
> >
> >
> > --
> > Regards,
> > Lalit Jadhav
> > Network Component Private Limited.
> >
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>


Re: HBase Replication Between Two Secure Clusters With Different Kerberos KDC's

2018-05-23 Thread Reid Chan
Have you checked the KDC's audit log (krb5kdc.log)?


> Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER

This indicates you were trying to authenticate an unknown principal. Please
check the log; it may give you a hint.


Also, do you have the following properties in hbase-site.xml?

  <property>
    <name>hbase.zookeeper.property.kerberos.removeHostFromPrincipal</name>
    <value>true</value>
  </property>

  <property>
    <name>hbase.zookeeper.property.kerberos.removeRealmFromPrincipal</name>
    <value>true</value>
  </property>


From: Saad Mufti <saad.mu...@gmail.com>
Sent: 23 May 2018 23:31:11
To: user@hbase.apache.org
Subject: Re: HBase Replication Between Two Secure Clusters With Different 
Kerberos KDC's

Thanks, here it is:

1. On the source cluster; it will be identical on the target cluster, as both
use the same Kerberos realm name, though each has its own cluster-specific
KDC:

$ more /etc/zookeeper/conf/server-jaas.conf
>
> /**
>
>  * Licensed to the Apache Software Foundation (ASF) under one
>
>  * or more contributor license agreements.  See the NOTICE file
>
>  * distributed with this work for additional information
>
>  * regarding copyright ownership.  The ASF licenses this file
>
>  * to you under the Apache License, Version 2.0 (the
>
>  * "License"); you may not use this file except in compliance
>
>  * with the License.  You may obtain a copy of the License at
>
>  * 
>
>  * http://www.apache.org/licenses/LICENSE-2.0
>
>  * 
>
>  * Unless required by applicable law or agreed to in writing, software
>
>  * distributed under the License is distributed on an "AS IS" BASIS,
>
>  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
> implied.
>
>  * See the License for the specific language governing permissions and
>
>  * limitations under the License.
>
>  */
>
> Server {
>
>   com.sun.security.auth.module.Krb5LoginModule required
>
>   useKeyTab=true
>
>   keyTab="/etc/zookeeper.keytab"
>
>   storeKey=true
>
>   useTicketCache=false
>
>   principal="zookeeper/@PGS.dev";
>
> };
>
>
2. I ran zkCli.sh after authenticating as Kerberos principal
zookeeper/@PGS.dev and got the following:

getAcl /hbase
>
> 'world,'anyone
>
> : r
>
> 'sasl,'hbase
>
> : cdrwa
>
>
3. I was logged in as Kerberos principal hbase/@PGS.dev when I ran
the add_peer command.

Thanks for taking the time to help me in any way you can.


Saad


On Wed, May 23, 2018 at 7:24 AM, Reid Chan <reidddc...@outlook.com> wrote:

> Three places to check:
>
> 1. Would you mind showing your "/etc/zookeeper/conf/server-jaas.conf"?
> 2. And using zkCli.sh to getAcl /hbase?
> 3. BTW, what was your login principal when executing "add_peer" in the
> hbase shell?
> 
> From: Saad Mufti <saad.mu...@gmail.com>
> Sent: 23 May 2018 01:48:17
> To: user@hbase.apache.org
> Subject: HBase Replication Between Two Secure Clusters With Different
> Kerberos KDC's
>
> Hi,
>
> Here is my scenario, I have two secure/authenticated EMR based HBase
> clusters, both have their own cluster dedicated KDC (using EMR support for
> this which means we get Kerberos support by just turning on a config flag).
>
> Now we want to get replication going between them. For other application
> reasons, we want both clusters to have the same Kerberos realm, let's say
> APP.COM, so Kerberos principals are like a...@app.com .
>
> I looked around the web and found the instructions at
> https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_hadoop-high-availability/content/hbase-cluster-replication-among-kerberos-secured-clusters.html
> so I tried to follow these directions. Of course the instructions are for
> replication between clusters with different realms, so I adapted this by
> adding only one principal "krbtgt/app@app.com" and gave it some
> arbitrary password. Followed the rest of the directions as well to pass a
> rule property to Zookeeper and the requisite Hadoop property in
> core-site.xml .
>
> After all this, when I set up replication from cluster1 to cluster2 using
> add_peer, I see error messages in the region servers for cluster1 of the
> following form:
>
>
>
> > 2018-05-22 17:27:45,763 INFO  [main-SendThread(xxx.net:2181)]
> > zookeeper.ClientCnxn: Opening socket connection to server
> > ip-10-194-247-88.aolp-ds-dev.us-east-1.ec2.aolcloud.net/xxx.yyy.zzz:2181.
> > Will attempt to SASL-authenticate using Login Context section 'Client'
> >
> > 2018-05-22 17:27:45,764 INFO  [main-SendThread(xxx.net:2181)]
> > zookeeper.ClientCnxn: Socket connection establi

Re: HBase Replication Between Two Secure Clusters With Different Kerberos KDC's

2018-05-23 Thread Reid Chan
Three places to check:

1. Would you mind showing your "/etc/zookeeper/conf/server-jaas.conf"?
2. And using zkCli.sh to getAcl /hbase?
3. BTW, what was your login principal when executing "add_peer" in the hbase
shell?
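
For the second check, a minimal programmatic equivalent of "getAcl /hbase",
using the plain ZooKeeper client; the quorum address is hypothetical:

import java.util.List;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Stat;

public class CheckHBaseAcl {
  public static void main(String[] args) throws Exception {
    ZooKeeper zk = new ZooKeeper("zk1.example.com:2181", 30000, new Watcher() {
      @Override
      public void process(WatchedEvent event) {
        // no-op watcher; we only need a one-shot read
      }
    });
    try {
      // A SASL-protected znode typically shows something like sasl:hbase:cdrwa.
      List<ACL> acls = zk.getACL("/hbase", new Stat());
      for (ACL acl : acls) {
        System.out.println(acl);
      }
    } finally {
      zk.close();
    }
  }
}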

From: Saad Mufti 
Sent: 23 May 2018 01:48:17
To: user@hbase.apache.org
Subject: HBase Replication Between Two Secure Clusters With Different Kerberos 
KDC's

Hi,

Here is my scenario, I have two secure/authenticated EMR based HBase
clusters, both have their own cluster dedicated KDC (using EMR support for
this which means we get Kerberos support by just turning on a config flag).

Now we want to get replication going between them. For other application
reasons, we want both clusters to have the same Kerberos realm, let's say
APP.COM, so Kerberos principals are like a...@app.com .

I looked around the web and found the instructions at
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_hadoop-high-availability/content/hbase-cluster-replication-among-kerberos-secured-clusters.html
so I tried to follow these directions. Of course the instructions are for
replication between clusters with different realms, so I adapted this by
adding only one principal "krbtgt/app@app.com" and gave it some
arbitrary password. Followed the rest of the directions as well to pass a
rule property to Zookeeper and the requisite Hadoop property in
core-site.xml .

After all this, when I set up replication from cluster1 to cluster2 using
add_peer, I see error messages in the region servers for cluster1 of the
following form:



> 2018-05-22 17:27:45,763 INFO  [main-SendThread(xxx.net:2181)]
> zookeeper.ClientCnxn: Opening socket connection to server
> ip-10-194-247-88.aolp-ds-dev.us-east-1.ec2.aolcloud.net/xxx.yyy.zzz:2181.
> Will attempt to SASL-authenticate using Login Context section 'Client'
>
> 2018-05-22 17:27:45,764 INFO  [main-SendThread(xxx.net:2181)]
> zookeeper.ClientCnxn: Socket connection established to
> ip-10-194-247-88.aolp-ds-dev.us-east-1.ec2.aolcloud.net/xxx.yyy.zzz:2181,
> initiating session
>
> 2018-05-22 17:27:45,777 INFO  [main-SendThread(xxx.net:2181)]
> zookeeper.ClientCnxn: Session establishment complete on server
> xxx.net/xxx.yyy.zzz:2181, sessionid = 0x16388599b300215, negotiated
> timeout = 4
> 2018-05-22 17:27:45,779 ERROR [main-SendThread(xxx.net:2181)]
> client.ZooKeeperSaslClient: An error: (java.security.PrivilegedActionException:
> javax.security.sasl.SaslException: GSS initiate
> failed [Caused by GSSException: No valid credentials provided (Mechanism
> level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)])
> occurred when evaluating Zookeeper Quorum Member's  received SASL token.
> Zookeeper Client will go to AUTH_FAILED state.
>
> 2018-05-22 17:27:45,779 ERROR [main-SendThread(xxx.net:2181)]
> zookeeper.ClientCnxn: SASL authentication with Zookeeper Quorum member
> failed: javax.security.sasl.SaslException: An error:
> (java.security.PrivilegedActionException:
> javax.security.sasl.SaslException: GSS initiate failed [Caused by
> GSSException: No valid credentials provided (Mechanism level: Server not
> found in Kerberos database (7) - LOOKING_UP_SERVER)]) occurred when
> evaluating Zookeeper Quorum Member's  received SASL token. Zookeeper
> Client will go to AUTH_FAILED state.
>
> 2018-05-22 17:28:12,574 WARN  [main-EventThread] zookeeper.ZKUtil:
> hconnection-0x4dcc1d33-0x16388599b300215, quorum=xyz.net:2181,
> baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid)
>
> org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode
> = AuthFailed for /hbase/hbaseid
>
> at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
>
> at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>
> at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1102)
>
> at
>
>
The Zookeeper start command looks like the following:

/usr/lib/jvm/java-openjdk/bin/java -Dzookeeper.log.dir=/var/log/zookeeper
> -Dzookeeper.root.logger=INFO,ROLLINGFILE -cp
> /usr/lib/zookeeper/bin/../build/classes:/usr/lib/zookeeper/bin/../build/lib/*.jar:
> /usr/lib/zookeeper/bin/../lib/slf4j-log4j12-1.6.1.jar:
> /usr/lib/zookeeper/bin/../lib/slf4j-api-1.6.1.jar:
> /usr/lib/zookeeper/bin/../lib/netty-3.10.5.Final.jar:
> /usr/lib/zookeeper/bin/../lib/log4j-1.2.16.jar:
> /usr/lib/zookeeper/bin/../lib/jline-2.11.jar:
> /usr/lib/zookeeper/bin/../zookeeper-3.4.10.jar:
> /usr/lib/zookeeper/bin/../src/java/lib/*.jar:
> /etc/zookeeper/conf::/etc/zookeeper/conf:/usr/lib/zookeeper/*:/usr/lib/zookeeper/lib/*
> -Djava.security.auth.login.config=/etc/zookeeper/conf/server-jaas.conf
> -Dzookeeper.security.auth_to_local=RULE:[2:\$1@\$0](.*@\QAPP.COM\E$)s/@\APP.COM\E$//DEFAULT
> -zookeeper.log.threshold=INFO
> -Dcom.sun.management.jmxremote
> -Dcom.sun.management.jmxremote.local.only=false
>