Re: Reg. HBase client API calls in secure cluster (Kerberos)

2014-12-10 Thread AnandaVelMurugan Chandra Mohan
Hi,

Thanks for responding. But now I get this error:

Failure to initialize security context [Caused by GSSException: Invalid
name provided (Mechanism level: Could not load configuration file
C:\Windows\krb5.ini (The system cannot find the file specified))]

My problem is very similar to this Stack Overflow question:

http://stackoverflow.com/questions/21193453/how-to-access-secure-kerberized-hadoop-using-just-java-api

Basically I want to run the examples in this link
http://java.dzone.com/articles/handling-big-data-hbase-part-4 against my
secure cluster.

Regards,
Anand

On Wed, Dec 10, 2014 at 11:58 AM, Srikanth Srungarapu srikanth...@gmail.com
 wrote:

 Hi,
 Please take a look at the patch added as part of HBASE-12366
 (https://issues.apache.org/jira/browse/HBASE-12366). There will be a new
 AuthUtil.launchAuthChore() which should help in your case. The
 documentation patch is in HBASE-12528
 (https://issues.apache.org/jira/browse/HBASE-12528), just in case. Hope this
 helps.
 Thanks,
 Srikanth.

 On Tue, Dec 9, 2014 at 10:11 PM, AnandaVelMurugan Chandra Mohan 
 ananthu2...@gmail.com wrote:

  Hi All,
 
  My HBase admin has set up Kerberos authentication in our cluster. Now all
  the HBase Java client API calls hang indefinitely.
  I can scan/get in the HBase shell, but when I do the same through the Java
  API, it hangs on the scan statement.
 
  This is code which was working earlier, but not now. Earlier I was
 running
  this code outside of the cluster without any impersonation.
 
  Configuration config = HBaseConfiguration.create();
  HTable table = new HTable(config, "Assets");
  Scan scan = new Scan();
  ResultScanner results = table.getScanner(scan);
 
  Do I need to impersonate a superuser to make this work now? How do I pass
  the Kerberos credentials? Any pointers would be greatly appreciated.
  --
  Regards,
  Anand
 




-- 
Regards,
Anand
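
[Editor's note] For readers hitting the same hang: a Kerberos-secured cluster also requires the client-side configuration to declare secure authentication, otherwise the RPC layer can wait indefinitely. A minimal client-side hbase-site.xml fragment might look like the sketch below; the principal names and the EXAMPLE.COM realm are placeholders for your cluster's actual values, though the property names themselves are standard HBase keys.

```xml
<configuration>
  <!-- Tell the client to use Kerberos for HBase RPCs -->
  <property>
    <name>hbase.security.authentication</name>
    <value>kerberos</value>
  </property>
  <!-- Server principals the client should expect; _HOST is expanded at runtime -->
  <property>
    <name>hbase.master.kerberos.principal</name>
    <value>hbase/_HOST@EXAMPLE.COM</value>
  </property>
  <property>
    <name>hbase.regionserver.kerberos.principal</name>
    <value>hbase/_HOST@EXAMPLE.COM</value>
  </property>
</configuration>
```

With this in place (and a valid ticket from kinit, or a keytab login), the Scan code above should authenticate instead of hanging.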


RE: Reg. HBase client API calls in secure cluster (Kerberos)

2014-12-10 Thread ashish singhi
Hi,

When I get this exception, I usually set
System.setProperty("java.security.krb5.conf", krbFilePath); in my client code,
where krbFilePath is the path to the krb5.conf file.
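
[Editor's note] The property must be set before any Kerberos/GSS machinery initializes, which is why the default Windows lookup of C:\Windows\krb5.ini fails in the error above. A minimal, self-contained sketch; the class and method names are illustrative (not part of any HBase API), and /etc/krb5.conf is an example path:

```java
public class Krb5Setup {
    /**
     * Points the JVM at an explicit Kerberos configuration file so GSS
     * does not fall back to platform defaults such as C:\Windows\krb5.ini.
     * Must run before any secure HBase/Hadoop client code touches Kerberos.
     */
    public static String useKrb5Conf(String krbFilePath) {
        System.setProperty("java.security.krb5.conf", krbFilePath);
        return System.getProperty("java.security.krb5.conf");
    }

    public static void main(String[] args) {
        // Example path; substitute the real location of your krb5.conf
        System.out.println(useKrb5Conf("/etc/krb5.conf"));
    }
}
```

The only real knob here is the java.security.krb5.conf system property; everything else is scaffolding.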

Regards

-Original Message-
From: AnandaVelMurugan Chandra Mohan [mailto:ananthu2...@gmail.com] 
Sent: 10 December 2014 15:41
To: user@hbase.apache.org
Subject: Re: Reg. HBase client API calls in secure cluster (Kerberos)



Re: My cdh5.2 cluster get FileNotFoundException when running hbase MR jobs

2014-12-10 Thread Ehud Lev
Hi Dima,
Thanks for the fast response,
Unfortunately this is not working for me. I tried:

hadoop jar /usr/lib/hbase/hbase-server-0.98.6-cdh5.2.1.jar rowcounter
-libjars /usr/lib/hbase/lib/hbase-client-0.98.6-cdh5.2.1.jar mytable
and
hadoop jar /usr/lib/hbase/hbase-server-0.98.6-cdh5.2.1.jar rowcounter
mytable -libjars /usr/lib/hbase/lib/hbase-client-0.98.6-cdh5.2.1.jar

Same error!

In addition, this works:
ls /usr/lib/hbase/lib/hbase-client-0.98.6-cdh5.2.1.jar
/usr/lib/hbase/lib/hbase-client-0.98.6-cdh5.2.1.jar

I also tried to run my own fat JARs that were working on cdh4, but after I
compiled them against cdh5 (same version as the cluster) I get the same error.
I guess this is an HBase environment issue, but I can't put my finger on it.
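
[Editor's note] One common way around a job that tries (and fails) to load local HBase jars from an HDFS path is to let HBase ship its own dependency jars into the job's distributed cache from the driver, rather than relying on -libjars. A sketch assuming a standard 0.98-era client; "rowcount" and the missing mapper setup are placeholders:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class RowCountJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "rowcount");
        job.setJarByClass(RowCountJob.class);
        // Ships the HBase client jars (and their transitive dependencies)
        // from the driver's local classpath into the job's distributed
        // cache, so tasks don't resolve local paths against HDFS.
        TableMapReduceUtil.addDependencyJars(job);
        // ... configure mapper/reducer as usual, then submit the job
    }
}
```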







On Wed, Dec 10, 2014 at 9:56 AM, Yaniv Yancovich ya...@gigya-inc.com
wrote:


 -- Forwarded message --
 From: Dima Spivak dspi...@cloudera.com
 Date: Tue, Dec 9, 2014 at 11:23 PM
 Subject: Re: My cdh5.2 cluster get FileNotFoundException when running
 hbase MR jobs
 To: user@hbase.apache.org user@hbase.apache.org
 Cc: Yaniv Yancovich ya...@gigya-inc.com


 Dear Ehud,

 You need the -libjars argument to move the dependency from your local
 file system into HDFS (the error occurs because that JAR is not there).

 -Dima

 On Tue, Dec 9, 2014 at 1:05 AM, Ehud Lev e...@gigya-inc.com wrote:

 My cdh5.2 cluster has a problem running HBase MR jobs.

 For example, I added the HBase classpath to the Hadoop classpath:
 vi /etc/hadoop/conf/hadoop-env.sh
 and added the line:
 export HADOOP_CLASSPATH=$(/usr/lib/hbase/bin/hbase classpath):$HADOOP_CLASSPATH

 And when I am running:
 hadoop jar /usr/lib/hbase/hbase-server-0.98.6-cdh5.2.1.jar rowcounter
 mytable

 I get the following exception:

 14/12/09 03:44:02 WARN security.UserGroupInformation:
 PriviledgedActionException as:root (auth:SIMPLE)
 cause:java.io.FileNotFoundException: File does not exist:
 hdfs://le-hds3-hb2/usr/lib/hbase/lib/hbase-client-0.98.6-cdh5.2.1.jar
 Exception in thread "main" java.lang.reflect.InvocationTargetException
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:606)
         at org.apache.hadoop.hbase.mapreduce.Driver.main(Driver.java:54)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:606)
         at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 Caused by: java.io.FileNotFoundException: File does not exist:
 hdfs://le-hds3-hb2/usr/lib/hbase/lib/hbase-client-0.98.6-cdh5.2.1.jar
         at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1083)
         at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1075)
         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1075)
         at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
         at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
         at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
         at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
         at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
         at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
         at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
         at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
         at java.security.AccessController.doPrivileged(Native Method)
         at javax.security.auth.Subject.doAs(Subject.java:415)
         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
         at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1313)
         at org.apache.hadoop.hbase.mapreduce.RowCounter.main(RowCounter.java:191)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at

[ANNOUNCE] HBase 0.99.2 (developer preview release) is now available for download

2014-12-10 Thread Enis Söztutar
The HBase Team is pleased to announce the immediate release of HBase 0.99.2.
Download it from your favorite Apache mirror [1] or maven repository.

THIS RELEASE IS NOT INTENDED FOR PRODUCTION USE, and does not carry any
backwards or forwards compatibility guarantees (even within minor versions of
0.99.x). Please refrain from deploying this over important data; use the
latest 0.98.x release instead. HBase 0.99.2 is a developer preview release,
and an odd-numbered release as defined in [2].

0.99.2 is the last planned release from the 0.99.x line of developer preview
releases. Please use this release as a test bed for the upcoming HBase 1.0
release, and report any problems you encounter or features you think need
fixing before 1.0. This release also contains some API changes and
deprecations of older APIs which won't be supported in the 2.0 series. Please
give them a try and let us know what you think. All contributions in terms of
testing, benchmarking, checking API / source / wire compatibility, checking
the documentation, and further code contributions are highly appreciated.
1.0 will be the first series in the 1.x line of releases, which are expected
to keep compatibility with previous 1.x releases. Thus it is very important
to check the client-side and server-side APIs for compatibility and
maintainability concerns for future releases.

0.99.2 builds on top of all the changes in the 0.99.1 and 0.99.0 releases
(an overview can be found at [4, 5]). The theme of the (eventual) 1.0 release
is to become a stable base for future 1.x series of releases. The 1.0 release
will aim to achieve at least the same level of stability as the 0.98 releases
without introducing too many new features.

The work to clearly mark and differentiate client-facing APIs, and to
redefine some of the client interfaces for improved semantics, ease of use,
and maintainability, has continued in the 0.99.2 release; the remaining work
can be found in HBASE-10602. Marking/remarking of interfaces with
InterfaceAudience has also been going on (HBASE-10462), which will identify
areas for compatibility (with clients, coprocessors, and dependent projects
like Phoenix) for future releases.

0.99.2 contains 190 issues fixed on top of 0.99.1. Some other notable
improvements in this release are:
 - [HBASE-12075] - Preemptive Fast Fail
 - [HBASE-12147] - Porting Online Config Change from 89-fb
 - [HBASE-12354] - Update dependencies in time for 1.0 release
 - [HBASE-12363] - Improve how KEEP_DELETED_CELLS works with MIN_VERSIONS
 - [HBASE-12434] - Add a command to compact all the regions in a
regionserver
 - [HBASE-8707] - Add LongComparator for filter
 - [HBASE-12286] - [shell] Add server/cluster online load of configuration
changes
 - [HBASE-12361] - Show data locality of region in table page
 - [HBASE-12496] - A blockedRequestsCount metric
 - Switch to using new style of client APIs internally (in a lot of places)
 - Improvements in visibility labels
 - Perf improvements
 - Some more documentation improvements
 - Numerous improvements in other areas and bug fixes.

The release has these changes in default behaviour (from 0.99.1):
 - The Distributed Log Replay feature is disabled by default. As in 0.98 and
   earlier releases, Distributed Log Split is the default.


The list of changes in this release can be found in the release notes [3].
Thanks to everybody who contributed to this release!

ps. The release announcement was delayed by a couple of days due to some
INFRA issues.

Cheers,
The HBase Team

1. http://www.apache.org/dyn/closer.cgi/hbase/
2. https://hbase.apache.org/book/upgrading.html#hbase.versioning
3.
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12328551
4.
https://mail-archives.apache.org/mod_mbox/hbase-dev/201409.mbox/%3ccamuu0w94oarqcz2zy4zlqy_aaqn70whhh1ycs_0bjpseeec...@mail.gmail.com%3E
5.
https://mail-archives.apache.org/mod_mbox/hbase-dev/201409.mbox/%3ccamuu0w9y_+afw6ww0ha_p8kbew35b3ncshbuqacfndzs8tc...@mail.gmail.com%3E


Re: My cdh5.2 cluster get FileNotFoundException when running hbase MR jobs

2014-12-10 Thread Bharath Vissapragada
Are you using YARN? If yes, can you try yarn jar /path/to/hbase-server.jar
rowcounter 't1' and see if that works?



[ANNOUNCE] Apache Phoenix 4.2.2 and 3.2.2 released

2014-12-10 Thread James Taylor
The Apache Phoenix team is pleased to announce the immediate
availability of the 4.2.2/3.2.2 release. For details of the release,
see our release announcement[1].

The Apache Phoenix team

[1] https://blogs.apache.org/phoenix/entry/announcing_phoenix_4_2_2