Re: get first row of every region

2016-08-01 Thread Phil Yang
If you pre-split the table with split points when you create it, a region's
start key may not be a prefix of its first row key. So you should use
setStartRow(regionInfo.getStartKey()),
setStopRow(nextRegionInfo.getStartKey()), and setBatch(1); if the result is
still null, the region really is empty.
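A sketch of that per-region scan (class and method names are illustrative, not from this thread; assumes the HBase 1.x client API used elsewhere in the discussion and a running cluster, so it is a sketch rather than a drop-in snippet):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class FirstRowPerRegion {
    // Scan from the region's start key to its end key (which equals the
    // next region's start key); setBatch(1)/setCaching(1) so at most one
    // row is fetched per region.
    static Result firstRow(Table table, HRegionInfo region) throws IOException {
        Scan scan = new Scan();
        scan.setStartRow(region.getStartKey());
        scan.setStopRow(region.getEndKey()); // empty end key = scan to table end
        scan.setBatch(1);
        scan.setCaching(1);
        try (ResultScanner scanner = table.getScanner(scan)) {
            return scanner.next(); // null means the region is empty
        }
    }
}
```

Note that using the region's own end key avoids having to look up the next region's HRegionInfo explicitly.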

Thanks,
Phil


2016-08-02 10:21 GMT+08:00 jinhong lu :

> Thanks. Here is my code, but in most cases r is null. Why does this happen?
>
> byte[] startRowkey =
> regionInfo.getStartKey();
> Scan sc = new Scan();
> sc.setBatch(1);
> sc.setRowPrefixFilter(startRowkey);
> try {
> scanner = table.getScanner(sc);
> r = scanner.next();
> scanner.close();
> }
> Thanks,
> lujinhong
>
> > On 2016-08-01 at 18:49, Ted Yu wrote:
> >
> > .
>
>


Re: get first row of every region

2016-08-01 Thread Jean-Marc Spaggiari
Well, then it should return the row from the next region.

So it might mean that the last region is empty, or the last X regions, no?

On 2016-08-01 at 7:52 PM, "Vladimir Rodionov" wrote:

> it means that for some regions you do not have any data.
>
> -Vlad
>
> On Mon, Aug 1, 2016 at 7:21 PM, jinhong lu  wrote:
>
> > Thanks. Here is my code, but in most cases r is null. Why does this happen?
> >
> > byte[] startRowkey =
> > regionInfo.getStartKey();
> > Scan sc = new Scan();
> > sc.setBatch(1);
> > sc.setRowPrefixFilter(startRowkey);
> > try {
> > scanner = table.getScanner(sc);
> > r = scanner.next();
> > scanner.close();
> > }
> > Thanks,
> > lujinhong
> >
> > > On 2016-08-01 at 18:49, Ted Yu <yuzhih...@gmail.com> wrote:
> > >
> > > .
> >
> >
>


Re: get first row of every region

2016-08-01 Thread Vladimir Rodionov
It means that for some regions you do not have any data.

-Vlad

On Mon, Aug 1, 2016 at 7:21 PM, jinhong lu  wrote:

> Thanks. Here is my code, but in most cases r is null. Why does this happen?
>
> byte[] startRowkey =
> regionInfo.getStartKey();
> Scan sc = new Scan();
> sc.setBatch(1);
> sc.setRowPrefixFilter(startRowkey);
> try {
> scanner = table.getScanner(sc);
> r = scanner.next();
> scanner.close();
> }
> Thanks,
> lujinhong
>
> > On 2016-08-01 at 18:49, Ted Yu wrote:
> >
> > .
>
>


Re: get first row of every region

2016-08-01 Thread jinhong lu
Thanks. Here is my code, but in most cases r is null. Why does this happen?


byte[] startRowkey = regionInfo.getStartKey();
Scan sc = new Scan();
sc.setBatch(1);
sc.setRowPrefixFilter(startRowkey);
try {
    scanner = table.getScanner(sc);
    r = scanner.next();
} finally {
    scanner.close();
}
Thanks,
lujinhong

> On 2016-08-01 at 18:49, Ted Yu wrote:
> 
> .



query multiple specified columns using filterlist

2016-08-01 Thread 乔彦克
Hi all,

Currently I am using the HBase client API to fetch data from HBase. I want
to get three columns (cf:c1, cf:c2, cf:c3), but only for rows where cf:c3
equals some value. I don't know how to construct the FilterList to achieve
this. How can I do that?
Any suggestion will be appreciated!


Best Regards,
QiaoYanke
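What the question describes does not strictly need a FilterList: selecting the three columns on the Scan and attaching a SingleColumnValueFilter on cf:c3 is enough. A hedged sketch (the column and family names come from the question; the class name and comparison value are illustrative, and it assumes the standard HBase 1.x client API):

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class ThreeColumnScan {
    // Build a Scan that fetches cf:c1, cf:c2, cf:c3 only for rows where
    // cf:c3 equals the wanted value.
    static Scan build(byte[] wanted) {
        Scan scan = new Scan();
        scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("c1"));
        scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("c2"));
        scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("c3"));
        SingleColumnValueFilter f = new SingleColumnValueFilter(
                Bytes.toBytes("cf"), Bytes.toBytes("c3"),
                CompareOp.EQUAL, wanted);
        f.setFilterIfMissing(true); // drop rows that have no cf:c3 at all
        scan.setFilter(f);
        return scan;
    }
}
```

If more conditions are needed later, the SingleColumnValueFilter can be wrapped in a FilterList with MUST_PASS_ALL.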


Re: HBase replication on secured clusters

2016-08-01 Thread Esteban Gutierrez
Hi,

Assuming both clusters have proper cross-realm authentication and ZooKeeper
has the right zookeeper.security.auth_to_local rules configured (the same as
the ones from Hadoop), you shouldn't have that problem. Also, your krb5.conf
should have the proper mappings between hostnames and realms in the
[domain_realm] section.
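For illustration, a minimal [domain_realm] fragment (hostnames and realm names below are placeholders, not taken from this thread):

```
[domain_realm]
    .cluster-a.example.com = CLUSTERA.EXAMPLE.COM
    .cluster-b.example.com = CLUSTERB.EXAMPLE.COM
```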

cheers,
esteban.


--
Cloudera, Inc.


On Mon, Aug 1, 2016 at 11:34 AM, maychau 
wrote:

> Hello everyone,
>
> I'm trying to write a Scala application to test HBase replication on
> secured (Kerberized) clusters. I'm using the Cloudera CDH 5.5.2 release.
> My keytab is for the hbase user. The program did pick up the keytab and
> was able to log in with it, based on the INFO message; however, I'm
> getting the error "KeeperErrorCode = NoAuth for /hbase/replication/peers".
> Does anyone know why it is not able to access that znode using the hbase
> keytab, even though I believe it should be able to, since that works
> through the hbase zkcli shell client?
>
>   def main(args: Array[String]) {
>val conf = HBaseConfiguration.create()
>
>val keytab = "path_to_hbase.keytab"
>val principle = ""
>System.setProperty("java.security.auth.login.config",
> "path_to_jaas.conf_file");
>
>UserGroupInformation.setConfiguration(conf)
>UserGroupInformation.loginUserFromKeytab(principle, keytab)
>
>   val connection = ConnectionFactory.createConnection(conf)
>
>   //FAILED HERE WHEN TRYING TO CONNECT TO ZK TO GET CHILDREN NODE
>   val replAdmin = new ReplicationAdmin(conf)
>  }
>
> [main] INFO org.apache.hadoop.security.UserGroupInformation - Login
> successful for user  using keytab file 
>
> [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut
> down
> Exception in thread "main" java.io.IOException: Error initializing the
> replication admin client.
> at
>
> org.apache.hadoop.hbase.client.replication.ReplicationAdmin.<init>(ReplicationAdmin.java:151)
> at com.thomsonreuters.bigdata.HbaseTest$.main(HbaseTest.scala:201)
> at com.thomsonreuters.bigdata.HbaseTest.main(HbaseTest.scala)
> Caused by: org.apache.hadoop.hbase.replication.ReplicationException: Error
> getting the list of peer clusters.
> at
>
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.addExistingPeers(ReplicationPeersZKImpl.java:361)
> at
>
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.init(ReplicationPeersZKImpl.java:104)
> at
>
> org.apache.hadoop.hbase.client.replication.ReplicationAdmin.<init>(ReplicationAdmin.java:132)
> ... 2 more
> Caused by: org.apache.zookeeper.KeeperException$NoAuthException:
> KeeperErrorCode = NoAuth for /hbase/replication/peers
> at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
> at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
> at
>
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(RecoverableZooKeeper.java:296)
> at
>
> org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenNoWatch(ZKUtil.java:575)
> at
>
> org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.addExistingPeers(ReplicationPeersZKImpl.java:359)
> ... 4 more
>
> Thank you
>
>
>
> --
> View this message in context:
> http://apache-hbase.679495.n3.nabble.com/HBase-replication-on-secured-clusters-tp4081486.html
> Sent from the HBase User mailing list archive at Nabble.com.
>


Re: some questions about hbase pre-split

2016-08-01 Thread Ted Yu
For #1, please take a look at split.rb :

Split entire table or pass a region to split individual region.  With the
second parameter, you can specify an explicit split key for the region.
Examples:
split 'tableName'
split 'namespace:tableName'
split 'regionName' # format: 'tableName,startKey,id'
split 'tableName', 'splitKey'
split 'regionName', 'splitKey'
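The same operation for #1 can also be triggered from the Java client API; a hedged sketch (assumes an already-open HBase 1.x Connection; the class and method names here are illustrative):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.util.Bytes;

public class SplitExample {
    // Ask the master to split a table at an explicit split key.
    static void splitAt(Connection conn, String table, String splitKey)
            throws IOException {
        try (Admin admin = conn.getAdmin()) {
            admin.split(TableName.valueOf(table), Bytes.toBytes(splitKey));
        }
    }
}
```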

For #2, hbase will redistribute the data (of parent region into daughter
regions).

Cheers

On Mon, Aug 1, 2016 at 3:59 PM, Baichuan YANG  wrote:

> Hi all,
>
> Currently I am working on a data simulation project in which I need to
> store a huge amount of data in one HBase table. I plan to pre-split the
> regions and disable auto-split, and here are my concerns:
> 1. Assume that originally I pre-split the table into 1000 regions. If in
> the future we need to split the table into 2000 regions or more, how do I
> achieve that (what is the hbase shell command or Java API)?
> 2. If I re-split the table as described above, how does HBase deal with
> the data already saved in the table? Will HBase automatically redistribute
> the data, or do I have to reload it?
>
> Regards,
> BC Y
>


some questions about hbase pre-split

2016-08-01 Thread Baichuan YANG
Hi all,

Currently I am working on a data simulation project in which I need to
store a huge amount of data in one HBase table. I plan to pre-split the
regions and disable auto-split, and here are my concerns:
1. Assume that originally I pre-split the table into 1000 regions. If in the
future we need to split the table into 2000 regions or more, how do I
achieve that (what is the hbase shell command or Java API)?
2. If I re-split the table as described above, how does HBase deal with the
data already saved in the table? Will HBase automatically redistribute the
data, or do I have to reload it?

Regards,
BC Y


HBase replication on secured clusters

2016-08-01 Thread maychau
Hello everyone,

I'm trying to write a Scala application to test HBase replication on secured
(Kerberized) clusters. I'm using the Cloudera CDH 5.5.2 release. My keytab is
for the hbase user. The program did pick up the keytab and was able to log in
with it, based on the INFO message; however, I'm getting the error
"KeeperErrorCode = NoAuth for /hbase/replication/peers". Does anyone know why
it is not able to access that znode using the hbase keytab, even though I
believe it should be able to, since that works through the hbase zkcli shell
client?

  def main(args: Array[String]) {
    val conf = HBaseConfiguration.create()

    val keytab = "path_to_hbase.keytab"
    val principle = ""
    System.setProperty("java.security.auth.login.config",
      "path_to_jaas.conf_file")

    UserGroupInformation.setConfiguration(conf)
    UserGroupInformation.loginUserFromKeytab(principle, keytab)

    val connection = ConnectionFactory.createConnection(conf)

    // FAILED HERE WHEN TRYING TO CONNECT TO ZK TO GET CHILDREN NODE
    val replAdmin = new ReplicationAdmin(conf)
  }

[main] INFO org.apache.hadoop.security.UserGroupInformation - Login
successful for user  using keytab file 

[main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut
down
Exception in thread "main" java.io.IOException: Error initializing the
replication admin client.
at
org.apache.hadoop.hbase.client.replication.ReplicationAdmin.<init>(ReplicationAdmin.java:151)
at com.thomsonreuters.bigdata.HbaseTest$.main(HbaseTest.scala:201)
at com.thomsonreuters.bigdata.HbaseTest.main(HbaseTest.scala)
Caused by: org.apache.hadoop.hbase.replication.ReplicationException: Error
getting the list of peer clusters.
at
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.addExistingPeers(ReplicationPeersZKImpl.java:361)
at
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.init(ReplicationPeersZKImpl.java:104)
at
org.apache.hadoop.hbase.client.replication.ReplicationAdmin.<init>(ReplicationAdmin.java:132)
... 2 more
Caused by: org.apache.zookeeper.KeeperException$NoAuthException:
KeeperErrorCode = NoAuth for /hbase/replication/peers
at org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
at
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getChildren(RecoverableZooKeeper.java:296)
at
org.apache.hadoop.hbase.zookeeper.ZKUtil.listChildrenNoWatch(ZKUtil.java:575)
at
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.addExistingPeers(ReplicationPeersZKImpl.java:359)
... 4 more

Thank you



--
View this message in context: 
http://apache-hbase.679495.n3.nabble.com/HBase-replication-on-secured-clusters-tp4081486.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: Shaded client for 0.94x

2016-08-01 Thread Ted Yu
Have you taken a look at
http://hbase.apache.org/book.html#hadoop2.hbase_0.94 ?

On Mon, Aug 1, 2016 at 1:04 PM, Igor Berman  wrote:

> Hi all,
> I have old hbase cluster 0.94x that I need to write some data to. The
> problem is that my setup already contains hadoop2 jars in classpath(the
> natural cluster is hadoop2x). I've found hbase-shaded-client project that
> could help me, but it seems that it started from hbase 1x
> I thought about using Thrift api(hoping it won't create classpath collision
> problems)
> another option would be to prepare shaded client from 0.94x. Is it
> possible?
> Any pointers to previous such work?
>
>
> any suggestions will be highly appreciated.
> Thanks in advance,
> Igor
>
>
>
> --
> View this message in context:
> http://apache-hbase.679495.n3.nabble.com/Shaded-client-for-0-94x-tp4081489.html
> Sent from the HBase User mailing list archive at Nabble.com.
>


Re: issue starting regionserver with SASL authentication failed

2016-08-01 Thread Dima Spivak
The stacktrace suggests you don't have a ticket-granting ticket. Have you
kinit'd as the service user?
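For example (the keytab path, principal, and realm below are illustrative placeholders, not taken from this thread):

```shell
# Acquire a TGT for the HBase service principal from its keytab
kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/$(hostname -f)@EXAMPLE.COM
# Confirm a ticket-granting ticket is now present
klist
```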

-Dima

On Sun, Jul 31, 2016 at 11:19 PM, Aneela Saleem 
wrote:

> Hi Dima,
>
> I followed the official reference guide now, but still same error.
> Attached is the hbase-site.xml file, please have a look. What's wrong there?
>
> On Thu, Jul 28, 2016 at 11:58 PM, Dima Spivak 
> wrote:
>
>> I haven't looked in detail at your hbase-site.xml, but if you're running
>> Apache HBase (and not a CDH release), I might recommend using the official
>> reference guide [1] to configure your cluster instead of the CDH 4.2.0
>> docs
>> since those would correspond to HBase 0.94, and might well have different
>> steps required to set up security. If you are trying out CDH HBase, be
>> sure
>> to use up-to-date documentation for your release.
>>
>> Let us know how it goes.
>>
>> [1] https://hbase.apache.org/book.html#hbase.secure.configuration
>>
>> -Dima
>>
>> On Thu, Jul 28, 2016 at 10:09 AM, Aneela Saleem 
>> wrote:
>>
>> > Hi Dima,
>> >
>> > I'm running Hbase version 1.2.2
>> >
>> > On Thu, Jul 28, 2016 at 8:35 PM, Dima Spivak 
>> wrote:
>> >
>> > > Hi Aneela,
>> > >
>> > > What version of HBase are you running?
>> > >
>> > > -Dima
>> > >
>> > > On Thursday, July 28, 2016, Aneela Saleem 
>> > wrote:
>> > >
>> > > > Hi,
>> > > >
>> > > > I have successfully configured Zookeeper with Kerberos
>> authentication.
>> > > Now
>> > > > i'm facing issue while configuring HBase with Kerberos
>> authentication.
>> > I
>> > > > have followed this link
>> > > > <
>> > >
>> >
>> http://www.cloudera.com/documentation/archive/cdh/4-x/4-2-0/CDH4-Security-Guide/cdh4sg_topic_8_2.html
>> > > >.
>> > > > Attached are the configuration files, i.e., hbase-site.xml and
>> > > > zk-jaas.conf.
>> > > >
>> > > > Following are the logs from regionserver:
>> > > >
>> > > > 016-07-28 17:44:56,881 WARN  [regionserver/hadoop-master/
>> > > > 192.168.23.206:16020] regionserver.HRegionServer: error telling
>> master
>> > > we
>> > > > are up
>> > > > com.google.protobuf.ServiceException: java.io.IOException: Could not
>> > set
>> > > > up IO Streams to hadoop-master/192.168.23.206:16000
>> > > > at
>> > > >
>> > >
>> >
>> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:240)
>> > > > at
>> > > >
>> > >
>> >
>> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
>> > > > at
>> > > >
>> > >
>> >
>> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:8982)
>> > > > at
>> > > >
>> > >
>> >
>> org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2284)
>> > > > at
>> > > >
>> > >
>> >
>> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:906)
>> > > > at java.lang.Thread.run(Thread.java:745)
>> > > > Caused by: java.io.IOException: Could not set up IO Streams to
>> > > > hadoop-master/192.168.23.206:16000
>> > > > at
>> > > >
>> > >
>> >
>> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:785)
>> > > > at
>> > > >
>> > >
>> >
>> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
>> > > > at
>> > > >
>> > >
>> >
>> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
>> > > > at
>> > >
>> org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
>> > > > at
>> > > >
>> > >
>> >
>> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
>> > > > ... 5 more
>> > > > Caused by: java.lang.RuntimeException: SASL authentication failed.
>> The
>> > > > most likely cause is missing or invalid credentials. Consider
>> 'kinit'.
>> > > > at
>> > > >
>> > >
>> >
>> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:685)
>> > > > at java.security.AccessController.doPrivileged(Native Method)
>> > > > at javax.security.auth.Subject.doAs(Subject.java:415)
>> > > > at
>> > > >
>> > >
>> >
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>> > > > at
>> > > >
>> > >
>> >
>> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:643)
>> > > > at
>> > > >
>> > >
>> >
>> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:751)
>> > > > ... 9 more
>> > > > Caused by: javax.security.sasl.SaslException: GSS initiate failed
>> > [Caused
>> > > > by GSSException: No valid credentials provided (Mechanism level:
>> Failed
>> > > to
>> > > > find any Kerberos tgt)]
>> > > > at
>> > > >
>> > >
>> >
>> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
>> > > > 

Shaded client for 0.94x

2016-08-01 Thread Igor Berman
Hi all,
I have an old HBase 0.94.x cluster that I need to write some data to. The
problem is that my setup already contains Hadoop 2 jars on the classpath
(the actual cluster is Hadoop 2.x). I've found the hbase-shaded-client
project, which could help me, but it seems it started from HBase 1.x.
I thought about using the Thrift API (hoping it won't create classpath
collision problems); another option would be to prepare a shaded client from
0.94.x. Is that possible?
Any pointers to previous such work?


any suggestions will be highly appreciated.
Thanks in advance,
Igor



--
View this message in context: 
http://apache-hbase.679495.n3.nabble.com/Shaded-client-for-0-94x-tp4081489.html
Sent from the HBase User mailing list archive at Nabble.com.


Re: get first row of every region

2016-08-01 Thread Ted Yu
You can issue a Scan with each of the start keys and setBatch(1).
Close each scan after next() is called.

On Mon, Aug 1, 2016 at 1:55 AM, jinhong lu  wrote:

> Hi, I want to get the first row of every region in a table. Is there any
> API for that? getStartKey() returns a key that may not be an existing row
> key, just a prefix.
>
>
>
> Thanks,
> lujinhong
>
>


Re: hbase java change baseZNode

2016-08-01 Thread kevin
val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.property.clientPort", "2181")
conf.set("hbase.zookeeper.quorum", zkQuorum)
conf.set("zookeeper.znode.parent","/hbase-unsecure")

val connection = ConnectionFactory.createConnection(conf);

It really works.

2016-08-01 18:41 GMT+08:00 Ted Yu :

> As mentioned in Kevin's first email, if /hbase-unsecure is the znode used
> by Ambari, setting zookeeper.znode.parent to hbase (or /hbase) wouldn't
> help.
>
> On Mon, Aug 1, 2016 at 3:39 AM, Adam Davidson <
> adam.david...@bigdatapartnership.com> wrote:
>
> > Hi Kevin,
> >
> > when creating the Configuration object for the HBase connection
> > (HBaseConfiguration.create()), you often need to set a number of
> properties
> > on the resulting object. The one you require in this case is
> > 'zookeeper.znode.parent'. Set that to 'hbase' and that should fix this
> > particular problem, though you may find other properties may need
> > attention. I believe it is all documented in the main Apache HBase docs.
> >
> > Regards,
> > Adam
> >
> > On Mon, 1 Aug 2016 at 11:36 kevin  wrote:
> >
> > > hi,all:
> > > I install hbase by ambari ,I found it's zookeeper url is
> /hbase-unsecure
> > .
> > > when I use java api to connect to hbase ,program hung up .
> > > after kill it ,I found message :
> > > WARN ZKUtil: hconnection-0x4d1d2788-0x25617464bd80032,
> > > quorum=Centosle02:2181,Centosle03:2181,Centosle01:2181,
> baseZNode=/hbase
> > > Unable to get data of znode /hbase/meta-region-server
> > > java.lang.InterruptedException
> > >
> > > it read /hbase data not  /hbase-unsecure
> > >
> >
> > --
> >
> >
> > *We're hiring!*
> >  Please check out our current positions *here*
> > *.*
> > --
> >
> > *NOTICE AND DISCLAIMER*
> >
> > This email (including attachments) is confidential. If you are not the
> > intended recipient, notify the sender immediately, delete this email from
> > your system and do not disclose or use for any purpose.
> >
> > Business Address: Eagle House, 163 City Road, London, EC1V 1NR. United
> > Kingdom
> > Registered Office: Finsgate, 5-7 Cranwood Street, London, EC1V 9EE.
> United
> > Kingdom
> > Big Data Partnership Limited is a company registered in England & Wales
> > with Company No 7904824
> >
>


Re: hbase java change baseZNode

2016-08-01 Thread Ted Yu
As mentioned in Kevin's first email, if /hbase-unsecure is the znode used
by Ambari, setting zookeeper.znode.parent to hbase (or /hbase) wouldn't
help.

On Mon, Aug 1, 2016 at 3:39 AM, Adam Davidson <
adam.david...@bigdatapartnership.com> wrote:

> Hi Kevin,
>
> when creating the Configuration object for the HBase connection
> (HBaseConfiguration.create()), you often need to set a number of properties
> on the resulting object. The one you require in this case is
> 'zookeeper.znode.parent'. Set that to 'hbase' and that should fix this
> particular problem, though you may find other properties may need
> attention. I believe it is all documented in the main Apache HBase docs.
>
> Regards,
> Adam
>
> On Mon, 1 Aug 2016 at 11:36 kevin  wrote:
>
> > hi,all:
> > I install hbase by ambari ,I found it's zookeeper url is /hbase-unsecure
> .
> > when I use java api to connect to hbase ,program hung up .
> > after kill it ,I found message :
> > WARN ZKUtil: hconnection-0x4d1d2788-0x25617464bd80032,
> > quorum=Centosle02:2181,Centosle03:2181,Centosle01:2181, baseZNode=/hbase
> > Unable to get data of znode /hbase/meta-region-server
> > java.lang.InterruptedException
> >
> > it read /hbase data not  /hbase-unsecure
> >
>
>


Re: hbase java change baseZNode

2016-08-01 Thread kevin
Thank you Adam Davidson.

2016-08-01 18:39 GMT+08:00 Adam Davidson <
adam.david...@bigdatapartnership.com>:

> Hi Kevin,
>
> when creating the Configuration object for the HBase connection
> (HBaseConfiguration.create()), you often need to set a number of properties
> on the resulting object. The one you require in this case is
> 'zookeeper.znode.parent'. Set that to 'hbase' and that should fix this
> particular problem, though you may find other properties may need
> attention. I believe it is all documented in the main Apache HBase docs.
>
> Regards,
> Adam
>
> On Mon, 1 Aug 2016 at 11:36 kevin  wrote:
>
> > hi,all:
> > I install hbase by ambari ,I found it's zookeeper url is /hbase-unsecure
> .
> > when I use java api to connect to hbase ,program hung up .
> > after kill it ,I found message :
> > WARN ZKUtil: hconnection-0x4d1d2788-0x25617464bd80032,
> > quorum=Centosle02:2181,Centosle03:2181,Centosle01:2181, baseZNode=/hbase
> > Unable to get data of znode /hbase/meta-region-server
> > java.lang.InterruptedException
> >
> > it read /hbase data not  /hbase-unsecure
> >
>
>


Re: hbase java change baseZNode

2016-08-01 Thread Ted Yu
How did your Java program obtain hbase-site.xml of the cluster ?

Looks like hbase-site.xml was not on the classpath.
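One way to check, as a sketch (the file path and class name are illustrative; adjust to your installation):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ConfCheck {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // If hbase-site.xml is not on the classpath, add it explicitly
        conf.addResource(new Path("/etc/hbase/conf/hbase-site.xml"));
        // Sanity-check which parent znode the client will use
        System.out.println(conf.get("zookeeper.znode.parent", "/hbase"));
    }
}
```

If the printed value is the default /hbase rather than /hbase-unsecure, the client never saw the cluster's hbase-site.xml.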

On Mon, Aug 1, 2016 at 3:36 AM, kevin  wrote:

> hi,all:
> I install hbase by ambari ,I found it's zookeeper url is /hbase-unsecure .
> when I use java api to connect to hbase ,program hung up .
> after kill it ,I found message :
> WARN ZKUtil: hconnection-0x4d1d2788-0x25617464bd80032,
> quorum=Centosle02:2181,Centosle03:2181,Centosle01:2181, baseZNode=/hbase
> Unable to get data of znode /hbase/meta-region-server
> java.lang.InterruptedException
>
> it read /hbase data not  /hbase-unsecure
>


hbase java change baseZNode

2016-08-01 Thread kevin
hi, all:
I installed HBase via Ambari and found that its ZooKeeper znode is
/hbase-unsecure. When I use the Java API to connect to HBase, the program
hangs. After killing it, I found this message:
WARN ZKUtil: hconnection-0x4d1d2788-0x25617464bd80032,
quorum=Centosle02:2181,Centosle03:2181,Centosle01:2181, baseZNode=/hbase
Unable to get data of znode /hbase/meta-region-server
java.lang.InterruptedException

It reads /hbase, not /hbase-unsecure.


get first row of every region

2016-08-01 Thread jinhong lu
Hi, I want to get the first row of every region in a table. Is there any API
for that? getStartKey() returns a key that may not be an existing row key,
just a prefix.



Thanks,
lujinhong



Re: issue starting regionserver with SASL authentication failed

2016-08-01 Thread Aneela Saleem
Hi Dima,

I have now followed the official reference guide, but I still get the same
error. Attached is the hbase-site.xml file; please have a look. What's wrong
there?

On Thu, Jul 28, 2016 at 11:58 PM, Dima Spivak  wrote:

> I haven't looked in detail at your hbase-site.xml, but if you're running
> Apache HBase (and not a CDH release), I might recommend using the official
> reference guide [1] to configure your cluster instead of the CDH 4.2.0 docs
> since those would correspond to HBase 0.94, and might well have different
> steps required to set up security. If you are trying out CDH HBase, be sure
> to use up-to-date documentation for your release.
>
> Let us know how it goes.
>
> [1] https://hbase.apache.org/book.html#hbase.secure.configuration
>
> -Dima
>
> On Thu, Jul 28, 2016 at 10:09 AM, Aneela Saleem 
> wrote:
>
> > Hi Dima,
> >
> > I'm running Hbase version 1.2.2
> >
> > On Thu, Jul 28, 2016 at 8:35 PM, Dima Spivak 
> wrote:
> >
> > > Hi Aneela,
> > >
> > > What version of HBase are you running?
> > >
> > > -Dima
> > >
> > > On Thursday, July 28, 2016, Aneela Saleem 
> > wrote:
> > >
> > > > Hi,
> > > >
> > > > I have successfully configured Zookeeper with Kerberos
> authentication.
> > > Now
> > > > i'm facing issue while configuring HBase with Kerberos
> authentication.
> > I
> > > > have followed this link
> > > > <
> > >
> >
> http://www.cloudera.com/documentation/archive/cdh/4-x/4-2-0/CDH4-Security-Guide/cdh4sg_topic_8_2.html
> > > >.
> > > > Attached are the configuration files, i.e., hbase-site.xml and
> > > > zk-jaas.conf.
> > > >
> > > > Following are the logs from regionserver:
> > > >
> > > > 016-07-28 17:44:56,881 WARN  [regionserver/hadoop-master/
> > > > 192.168.23.206:16020] regionserver.HRegionServer: error telling
> master
> > > we
> > > > are up
> > > > com.google.protobuf.ServiceException: java.io.IOException: Could not
> > set
> > > > up IO Streams to hadoop-master/192.168.23.206:16000
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:240)
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:8982)
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2284)
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:906)
> > > > at java.lang.Thread.run(Thread.java:745)
> > > > Caused by: java.io.IOException: Could not set up IO Streams to
> > > > hadoop-master/192.168.23.206:16000
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:785)
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
> > > > at
> > > org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1241)
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
> > > > ... 5 more
> > > > Caused by: java.lang.RuntimeException: SASL authentication failed.
> The
> > > > most likely cause is missing or invalid credentials. Consider
> 'kinit'.
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:685)
> > > > at java.security.AccessController.doPrivileged(Native Method)
> > > > at javax.security.auth.Subject.doAs(Subject.java:415)
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:643)
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:751)
> > > > ... 9 more
> > > > Caused by: javax.security.sasl.SaslException: GSS initiate failed
> > [Caused
> > > > by GSSException: No valid credentials provided (Mechanism level:
> Failed
> > > to
> > > > find any Kerberos tgt)]
> > > > at
> > > >
> > >
> >
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
> > > > at
> > > >
> > >
> >
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)