[ 
https://issues.apache.org/jira/browse/RANGER-752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371000#comment-15371000
 ] 

Selvamohan Neethiraj commented on RANGER-752:
---------------------------------------------

This issue was resolved in the user-group mailing thread quoted below:
{code}
From: ********
Reply-To: <[email protected]>
Date: Monday, November 30, 2015 at 10:47 AM
To: <[email protected]>
Subject: Re: hdfs plugin enable issue

yes it worked, name node is started 

On Mon, Nov 30, 2015 at 11:41 PM, ***** wrote:
Hafiz!

Please try placing it in hadoop/share/hdfs/lib and try again.

Thanks,
Ramesh
From: *******
Reply-To: "[email protected]" <[email protected]>
Date: Monday, November 30, 2015 at 10:21 AM

To: "[email protected]" <[email protected]>
Subject: Re: hdfs plugin enable issue

Ramesh!

I tried by copying ranger-hdfs-plugin-impl to hadoop/lib  but still same issue

On Mon, Nov 30, 2015 at 10:58 PM, Hafiz Mujadid <[email protected]> 
wrote:
Yes I am using manual install 

On Mon, Nov 30, 2015 at 10:56 PM, Don Bosco Durai <[email protected]> wrote:
Ramesh, Hafiz might be using manual install. Have we tried that with the recent 
code? We selectively pick jars to be packaged in the components.

Thanks

Bosco


From: ********
Reply-To: <[email protected]>
Date: Monday, November 30, 2015 at 9:53 AM

To: "[email protected]" <[email protected]>
Subject: Re: hdfs plugin enable issue

Hafiz,

Under lib directory there is a folder ranger-hdfs-plugin-impl, please copy this 
folder to hadoop/lib  and try?

Regards,
Ramesh
{code}

> NameNode not starting due to StackOverflowError exception 
> --------------------------------------------------------
>
>                 Key: RANGER-752
>                 URL: https://issues.apache.org/jira/browse/RANGER-752
>             Project: Ranger
>          Issue Type: Bug
>    Affects Versions: 0.6.0
>            Reporter: Mujadid khalid
>             Fix For: 0.6.0
>
>         Attachments: hadoop-hduser-namenode-vmubuntu2-VirtualBox.log, 
> install.properties
>
>
> When we enable the HDFS plugin and then restart Hadoop, the NameNode does not 
> start, failing with the following exception: 
> 2015-11-30 18:24:30,572 INFO org.apache.hadoop.util.GSet: Computing capacity 
> for map NameNodeRetryCache
> 2015-11-30 18:24:30,572 INFO org.apache.hadoop.util.GSet: VM type       = 
> 64-bit
> 2015-11-30 18:24:30,573 INFO org.apache.hadoop.util.GSet: 
> 0.029999999329447746% max memory 889 MB = 273.1 KB
> 2015-11-30 18:24:30,573 INFO org.apache.hadoop.util.GSet: capacity      = 
> 2^15 = 32768 entries
> 2015-11-30 18:24:30,860 ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem 
> initialization failed.
> java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
>       at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:843)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:673)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
> Caused by: java.lang.reflect.InvocationTargetException
>       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>       at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>       at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>       at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:132)
>       ... 8 more
> Caused by: java.lang.StackOverflowError
>       at java.lang.Exception.<init>(Exception.java:102)
>       at 
> java.lang.ReflectiveOperationException.<init>(ReflectiveOperationException.java:89)
>       at 
> java.lang.reflect.InvocationTargetException.<init>(InvocationTargetException.java:72)
>       at sun.reflect.GeneratedConstructorAccessor7.newInstance(Unknown Source)
>       at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>       at java.lang.Class.newInstance(Class.java:383)
>       at 
> org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer.init(RangerHdfsAuthorizer.java:65)
>       at 
> org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer.<init>(RangerHdfsAuthorizer.java:44)
>       at sun.reflect.GeneratedConstructorAccessor7.newInstance(Unknown Source)
>       at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>       at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
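The trace above ends in a cycle: RangerHdfsAuthorizer.<init> calls init(), which reflectively instantiates a class that resolves back to the authorizer itself, so construction recurses until the stack is exhausted. A minimal, hypothetical Java sketch of that failure mode (invented class names, not Ranger's actual code):

```java
// Hypothetical sketch of the failure mode in the stack trace above:
// a shim constructor reflectively instantiates an "implementation"
// class that, with the impl jars missing, resolves back to the shim
// itself, so construction recurses until StackOverflowError.
public class ShimRecursionDemo {

    static class ShimAuthorizer {
        ShimAuthorizer() throws Exception {
            // Stands in for the shim's classloader lookup; with the
            // impl jars absent it resolves to this very class.
            Class<?> impl = ShimAuthorizer.class;
            impl.getDeclaredConstructor().newInstance(); // recurses
        }
    }

    public static void main(String[] args) {
        try {
            new ShimAuthorizer();
        } catch (Throwable t) {
            // Reflection wraps each level in InvocationTargetException,
            // as in the NameNode log; walk down to the root cause.
            while (t.getCause() != null) {
                t = t.getCause();
            }
            System.out.println(t.getClass().getSimpleName());
        }
    }
}
```

This would be consistent with why copying the ranger-hdfs-plugin-impl folder into a directory on the NameNode's classpath (as suggested in the thread above) resolves the issue: the lookup then finds the real implementation instead of falling back on itself.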



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
