Well, so this is what I have:
- Ranger 0.7.1
- Hadoop 2.6
- HADOOP_HOME=/usr/share/hadoop
- HADOOP_CONF_DIR=/etc/hadoop
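An inventory like the one in this thread can be checked mechanically before digging into logs. A minimal sketch, assuming the Ranger 0.7.1 paths described here (adjust the variables for your own install; `-e` follows symlinks, so the symlinked jars count as present):

```shell
HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-/etc/hadoop}"
HDFS_LIB="${HADOOP_HOME:-/usr/share/hadoop}/share/hadoop/hdfs/lib"

checked=0
for f in \
  "$HADOOP_CONF_DIR/ranger-hdfs-security.xml" \
  "$HADOOP_CONF_DIR/ranger-hdfs-audit.xml" \
  "$HADOOP_CONF_DIR/ranger-policymgr-ssl.xml" \
  "$HADOOP_CONF_DIR/ranger-security.xml" \
  "$HDFS_LIB/ranger-hdfs-plugin-shim-0.7.1.jar" \
  "$HDFS_LIB/ranger-plugin-classloader-0.7.1.jar" \
  "$HDFS_LIB/ranger-hdfs-plugin-impl"; do
  checked=$((checked + 1))
  # -e is true for files, dirs, and resolvable symlinks alike
  if [ -e "$f" ]; then echo "PRESENT $f"; else echo "MISSING $f"; fi
done
```

Anything reported MISSING here would explain the plugin never initializing, independently of what the namenode log says.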
As part of HADOOP_CONF_DIR (/etc/hadoop) I have the following files:
- ranger-hdfs-security.xml
- ranger-policymgr-ssl.xml
- ranger-security.xml
- ranger-hdfs-audit.xml

hdfs-site.xml contains:

<property>
  <name>dfs.permissions.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.inode.attributes.provider.class</name>
  <value>org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer</value>
</property>

As part of /usr/share/hadoop/share/hadoop/hdfs/lib:
- ranger-hdfs-plugin-impl -> /usr/local/ranger-0.7.1-hdfs-plugin/lib/ranger-hdfs-plugin-impl
- ranger-hdfs-plugin-shim-0.7.1.jar -> /usr/local/ranger-0.7.1-hdfs-plugin/lib/ranger-hdfs-plugin-shim-0.7.1.jar
- ranger-plugin-classloader-0.7.1.jar -> /usr/local/ranger-0.7.1-hdfs-plugin/lib/ranger-plugin-classloader-0.7.1.jar

Is there anything missing? It looks good to me. When restarting the namenode, the only thing I see in the logs regarding Ranger is:

STARTUP_MSG: classpath = /etc/hadoop:/usr/share/hadoop/share/hadoop/common/lib/slf4j-api-1.7.5.jar: ... :/usr/share/hadoop/share/hadoop/hdfs/lib/ranger-plugin-classloader-0.7.1.jar::/usr/share/hadoop/share/hadoop/hdfs/lib/ranger-hdfs-plugin-shim-0.7.1.jar: ...

Any idea?

On Fri, 11 Jan 2019 at 23:00, Ramesh Mani <rm...@hortonworks.com> wrote:

> Odon,
>
> In your HDFS installation path and conf folder, do you see the following
> files and folders?
>
> E.g.
> [root@localhost lib]# pwd
> /usr/hdp/3.1.0.0-38/hadoop/lib
> [root@localhost lib]# ls -ltr ranger*
> -rw-r--r--. 1 root root   16801 Nov 27 06:06 ranger-plugin-classloader-1.2.0.3.1.0.0-38.jar
> -rw-r--r--. 1 root root   14063 Nov 27 06:06 ranger-hdfs-plugin-shim-1.2.0.3.1.0.0-38.jar
> drwxr-xr-x. 2 root root    4096 Dec 18 15:41 ranger-hdfs-plugin-impl
>
> [root@localhost lib]# ls -ltr /usr/hdp/3.1.0.0-38/hadoop/lib/ranger-hdfs-plugin-impl/
> total 17284
> -rw-r--r--. 1 root root  512742 Nov 27 06:06 woodstox-core-5.0.3.jar
> -rw-r--r--. 1 root root  161867 Nov 27 06:06 stax2-api-3.1.4.jar
> -rw-r--r--. 1 root root 1225258 Nov 27 06:06 solr-solrj-6.6.1.jar
> -rw-r--r--. 1 root root   12278 Nov 27 06:06 ranger-plugins-cred-1.2.0.3.1.0.0-38.jar
> -rw-r--r--. 1 root root   26720 Nov 27 06:06 noggit-0.6.jar
> -rw-r--r--. 1 root root  162126 Nov 27 06:06 javax.persistence-2.1.0.jar
> -rw-r--r--. 1 root root   41029 Nov 27 06:06 httpmime-4.5.3.jar
> -rw-r--r--. 1 root root  323824 Nov 27 06:06 httpcore-4.4.6.jar
> -rw-r--r--. 1 root root  747794 Nov 27 06:06 httpclient-4.5.3.jar
> -rw-r--r--. 1 root root 1502280 Nov 27 06:06 htrace-core4-4.1.0-incubating.jar
> -rw-r--r--. 1 root root 8632395 Nov 27 06:06 eclipselink-2.5.2.jar
> -rw-r--r--. 1 root root  616888 Nov 27 06:06 commons-configuration2-2.1.1.jar
> -rw-r--r--. 1 root root   53540 Nov 29 10:00 ranger-hdfs-plugin-1.2.0.3.1.0.0-42.jar
> -rw-r--r--. 1 root root  715832 Dec 11 11:24 ranger-plugins-common-1.2.0.3.1.0.0-78.jar
> -rw-r--r--. 1 root root 2734339 Dec 17 14:27 guava-25.1-jre.jar
> -rw-r--r--. 1 root root  195520 Dec 18 15:41 ranger-plugins-audit-1.2.0.3.1.0.0-78.jar
>
> Also, in /etc/hadoop/conf do you see the following ranger files?
>
> -rwxr--r--. 1 hdfs hadoop 1422 Nov 28 12:34 ranger-hdfs-audit.xml
> -rwxr--r--. 1 hdfs hadoop 1066 Nov 28 12:34 ranger-hdfs-security.xml
> -rwxr--r--. 1 hdfs hadoop 1061 Nov 28 12:34 ranger-policymgr-ssl.xml
> -rw-r--r--. 1 hdfs hadoop   64 Dec 18 15:45 ranger-security.xml
>
> Thanks,
> Ramesh
>
> From: Odon Copon <odonco...@gmail.com>
> Reply-To: "user@ranger.apache.org" <user@ranger.apache.org>
> Date: Friday, January 11, 2019 at 2:05 PM
> To: "user@ranger.apache.org" <user@ranger.apache.org>
> Subject: Re: Accessing Ranger Policy Manager API from HDFS plugin
>
> Abhay, yes, I can see dfs.namenode.inode.attributes.provider.class in my
> hdfs-site.xml with
> "org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer" as value.
> Kevin, or anyone, would it be possible for you to share where the different
> Ranger files end up after running the enable script? Maybe the structure is
> not the same, but it would help me debug what I should be looking for.
> Thanks.
>
> On Fri, 11 Jan 2019 at 18:11, Odon Copon <odonco...@gmail.com> wrote:
>
>> Added the DEBUG flag, and these are the only lines that contain references
>> to Ranger; no errors at all, and it doesn't seem to be loading anything:
>> 19/01/11 09:58:36 DEBUG mortbay.log: TLD search of
>> file:/home/poodah/apache-ranger-1.2.0/target/ranger-1.2.0-hdfs-plugin/lib/ranger-plugin-classloader-1.2.0.jar
>> 19/01/11 09:58:36 DEBUG mortbay.log: TLD search of
>> file:/usr/share/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar
>> 19/01/11 09:58:36 DEBUG mortbay.log: TLD search of
>> file:/home/poodah/apache-ranger-1.2.0/target/ranger-1.2.0-hdfs-plugin/lib/ranger-hdfs-plugin-shim-1.2.0.jar
>>
>> On Fri, 11 Jan 2019 at 17:48, Ramesh Mani <rm...@hortonworks.com> wrote:
>>
>>> Hi Odon,
>>>
>>> Could you please put the namenode in debug and check the logs? You can
>>> see if there are any exceptions related to Ranger when the namenode comes
>>> up.
>>>
>>> As Bosco and Kevin mentioned, most likely the configurations are not
>>> being picked up and the plugin is not getting enabled.
>>>
>>> Thanks,
>>> Ramesh
>>>
>>> From: Kevin Risden <kris...@apache.org>
>>> Reply-To: "user@ranger.apache.org" <user@ranger.apache.org>
>>> Date: Friday, January 11, 2019 at 9:27 AM
>>> To: "user@ranger.apache.org" <user@ranger.apache.org>
>>> Subject: Re: Accessing Ranger Policy Manager API from HDFS plugin
>>>
>>> Do you have the Ranger plugin on the HDFS classpath? Could it be that
>>> nothing is getting picked up because it's not installed in the right
>>> location?
>>>
>>> Kevin Risden
>>>
>>> On Fri, Jan 11, 2019, 12:19 Odon Copon <odonco...@gmail.com> wrote:
>>>
>>>> Ok, I'm running out of ideas to debug the issue.
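Ramesh's suggestion to put the namenode in debug can be scoped to the Ranger classes alone through log4j, which keeps the namenode log readable. A sketch that appends the logger to a scratch copy of log4j.properties (the conf path is an assumption; on a real cluster you would edit the actual file and restart the namenode):

```shell
LOG4J="${HADOOP_CONF_DIR:-/etc/hadoop}/log4j.properties"
WORK="$(mktemp)"

# Start from the existing file if present, otherwise from an empty scratch copy.
if [ -f "$LOG4J" ]; then cat "$LOG4J" > "$WORK"; fi

# DEBUG for the Ranger plugin packages only, not the whole namenode.
cat >> "$WORK" <<'EOF'
log4j.logger.org.apache.ranger=DEBUG
EOF

grep '^log4j.logger.org.apache.ranger' "$WORK"
```

With this in place, any Ranger plugin initialization failure should surface in the namenode log instead of being silently swallowed.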
>>>> No logs like Bosco posted before, and I cannot find a reason for this
>>>> not to work, when the endpoint is accessible and is correctly set in
>>>> install.properties.
>>>>
>>>> On Fri, 11 Jan 2019 at 15:42, Odon Copon <odonco...@gmail.com> wrote:
>>>>
>>>>> Thanks Bosco, that's really helpful.
>>>>> Ran the following search: "grep -irnw '/' -e 'AuditProviderFactory'
>>>>> --exclude \*.java --exclude \*.class --exclude \*.jar 2>/dev/null",
>>>>> trying to look for the "AuditProviderFactory" keyword, but there's
>>>>> nothing with that word in it (excluding java, class and jar files). So
>>>>> no log files like yours.
>>>>> What could be causing this? I'm restarting the namenode with the
>>>>> following command: "sudo /etc/init.d/hadoop-namenode restart". Is this
>>>>> enough, or is there any other internal component that needs to be
>>>>> restarted as well, like the nodemanager, for example?
>>>>>
>>>>> On Fri, 11 Jan 2019 at 15:34, Don Bosco Durai <bo...@apache.org>
>>>>> wrote:
>>>>>
>>>>>> You might want to look into the logs. Here are some of mine from the
>>>>>> HDFS namenode log:
>>>>>>
>>>>>> 2019-01-10 06:52:22,128 INFO provider.AuditProviderFactory
>>>>>> (AuditProviderFactory.java:init(150)) - AUDIT PROPERTY:
>>>>>> ranger.plugin.hdfs.policy.rest.url=http://orange1.mydomain.com:6080
>>>>>>
>>>>>> 2019-01-10 06:52:22,200 INFO service.RangerBasePlugin
>>>>>> (RangerBasePlugin.java:init(151)) - PolicyEngineOptions: { evaluatorType:
>>>>>> auto, cacheAuditResult: true, disableContextEnrichers: false,
>>>>>> disableCustomConditions: false, disableTrieLookupPrefilter: false }
>>>>>>
>>>>>> 2019-01-10 06:52:23,274 INFO util.PolicyRefresher
>>>>>> (PolicyRefresher.java:loadPolicyfromPolicyAdmin(277)) -
>>>>>> PolicyRefresher(serviceName=orange_hadoop): found updated version.
>>>>>> lastKnownVersion=-1; newVersion=3
>>>>>>
>>>>>> 2019-01-10 06:52:23,337 INFO util.RangerResourceTrie
>>>>>> (RangerResourceTrie.java:<init>(112)) - resourceName=path;
>>>>>> optIgnoreCase=false; optWildcard=true; wildcardChars=*?{}\; nodeCount=18;
>>>>>> leafNodeCount=1; singleChildNodeCount=17; maxDepth=18;
>>>>>> evaluatorListCount=0; wildcardEvaluatorListCount=2;
>>>>>> evaluatorListRefCount=17; wildcardEvaluatorListRefCount=15
>>>>>>
>>>>>> Bosco
>>>>>>
>>>>>> From: Odon Copon <odonco...@gmail.com>
>>>>>> Reply-To: <user@ranger.apache.org>
>>>>>> Date: Friday, January 11, 2019 at 7:23 AM
>>>>>> To: <user@ranger.apache.org>
>>>>>> Subject: Re: Accessing Ranger Policy Manager API from HDFS plugin
>>>>>>
>>>>>> On the namenode I'm editing the install.properties file and then
>>>>>> running "sudo ./enable-hdfs-plugin.sh".
>>>>>> Then I'm restarting the namenode service.
>>>>>>
>>>>>> On Fri, 11 Jan 2019 at 15:19, Don Bosco Durai <bo...@apache.org>
>>>>>> wrote:
>>>>>>
>>>>>> It is every 30 seconds. If you are not seeing anything, then it might
>>>>>> be a configuration issue. How are you enabling the HDFS plugin?
>>>>>>
>>>>>> Bosco
>>>>>>
>>>>>> From: Odon Copon <odonco...@gmail.com>
>>>>>> Reply-To: <user@ranger.apache.org>
>>>>>> Date: Friday, January 11, 2019 at 7:18 AM
>>>>>> To: <user@ranger.apache.org>
>>>>>> Subject: Re: Accessing Ranger Policy Manager API from HDFS plugin
>>>>>>
>>>>>> 1. How often is the plugin pulling policies from the API?
>>>>>> 2. I don't see anything in the logs regarding the plugin; how can I
>>>>>> ensure it is running correctly?
>>>>>> 3. If I run a manual curl requesting the policies, I can see the UI
>>>>>> showing that attempt to retrieve the policies, with a 200. But only the
>>>>>> manual curl requests show up; no requests coming from the plugin.
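For reference, the 30-second default Bosco mentions is the plugin's policy poll interval and is configurable in ranger-hdfs-security.xml. A sketch of the property (name and default as generated by enable-hdfs-plugin.sh in recent Ranger versions; verify against your own generated file):

```
<property>
  <name>ranger.plugin.hdfs.policy.pollIntervalMs</name>
  <value>30000</value>
</property>
```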
>>>>>> On Fri, 11 Jan 2019 at 14:58, Don Bosco Durai <bo...@apache.org>
>>>>>> wrote:
>>>>>>
>>>>>> Also make sure you have the correct hostname and port for Ranger
>>>>>> Admin.
>>>>>>
>>>>>> Bosco
>>>>>>
>>>>>> From: Velmurugan Periasamy <vperias...@hortonworks.com>
>>>>>> Reply-To: <user@ranger.apache.org>
>>>>>> Date: Friday, January 11, 2019 at 6:32 AM
>>>>>> To: "user@ranger.apache.org" <user@ranger.apache.org>
>>>>>> Subject: Re: Accessing Ranger Policy Manager API from HDFS plugin
>>>>>>
>>>>>> You can check the namenode log for any errors from the HDFS plugin.
>>>>>>
>>>>>> ------------------------------
>>>>>> From: Odon Copon <odonco...@gmail.com>
>>>>>> Sent: Friday, January 11, 2019 9:21 AM
>>>>>> To: user@ranger.apache.org
>>>>>> Subject: Re: Accessing Ranger Policy Manager API from HDFS plugin
>>>>>>
>>>>>> I manually fired a curl request to
>>>>>> "/service/plugins/policies/download/<service_name>" and now the UI is
>>>>>> displaying some information in the plugin tab.
>>>>>> 1. Does Ranger Admin think the call was made from the plugin, and is
>>>>>> it trying to list it?
>>>>>> 2. If the plugin had executed this request, the UI should have
>>>>>> displayed this information earlier, right?
>>>>>> 3. Any specific log to check for more information?
>>>>>>
>>>>>> On Fri, 11 Jan 2019 at 14:07, Velmurugan Periasamy <
>>>>>> vperias...@hortonworks.com> wrote:
>>>>>>
>>>>>> You should see the plugin syncing policies in the plugin tab. If it is
>>>>>> not showing up, you need to check the logs for any error messages.
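One way to answer "any specific log to check?" without relying on the Ranger Admin UI is to grep the namenode log for the plugin's startup markers; the class names come from Bosco's log excerpt earlier in the thread. A sketch using an inline sample excerpt in place of a real namenode log file:

```shell
# On a real cluster, point LOG at the namenode log instead, e.g. something like
# LOG=/var/log/hadoop/hadoop-hdfs-namenode-$(hostname).log   (path varies by install)
LOG="$(mktemp)"
cat > "$LOG" <<'EOF'
2019-01-10 06:52:22,128 INFO provider.AuditProviderFactory - AUDIT PROPERTY: ranger.plugin.hdfs.policy.rest.url=http://orange1.mydomain.com:6080
2019-01-10 06:52:22,200 INFO service.RangerBasePlugin - PolicyEngineOptions: { evaluatorType: auto }
2019-01-10 06:52:23,274 INFO util.PolicyRefresher - PolicyRefresher(serviceName=orange_hadoop): found updated version.
EOF

# If none of these markers appear after a namenode restart, the plugin
# never initialized, regardless of what the UI shows.
grep -E 'AuditProviderFactory|RangerBasePlugin|PolicyRefresher' "$LOG"
```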
>>>>>> ------------------------------
>>>>>> From: Odon Copon <odonco...@gmail.com>
>>>>>> Sent: Friday, January 11, 2019 8:47 AM
>>>>>> To: user@ranger.apache.org
>>>>>> Subject: Re: Accessing Ranger Policy Manager API from HDFS plugin
>>>>>>
>>>>>> OK, it seems "service/plugins/policies/download/" has public access,
>>>>>> which confirms what we have been discussing: no authorization is
>>>>>> required to download the policies. Good to know, thanks guys.
>>>>>> The question now is how to confirm the plugin is calling this endpoint
>>>>>> and working properly, because the UI doesn't display this plugin. Any
>>>>>> tip on this?
>>>>>>
>>>>>> On Fri, 11 Jan 2019 at 13:08, Odon Copon <odonco...@gmail.com> wrote:
>>>>>>
>>>>>> Yes, it makes sense to have 2-way SSL between the plugin and Ranger
>>>>>> Admin, but:
>>>>>> 1. Does it mean there's no authentication at all between them?
>>>>>> 2. If there's no authentication, shouldn't a simple curl work? At the
>>>>>> moment, if no user/pass is provided, the API returns 401. Or is there
>>>>>> another, different endpoint? If so, which one is it?
>>>>>> 3. What is the best way to verify the plugin is communicating, or
>>>>>> trying to communicate, with Ranger Admin?
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>> On Fri, 11 Jan 2019 at 12:53, Velmurugan Periasamy <
>>>>>> vperias...@hortonworks.com> wrote:
>>>>>>
>>>>>> If there is no Kerberos, the HDFS plugin uses the open download-policies
>>>>>> API, so it is recommended to use 2-way SSL between the HDFS plugin and
>>>>>> Ranger Admin.
>>>>>>
>>>>>> On Jan 11, 2019, at 5:26 AM, Odon Copon <odonco...@gmail.com> wrote:
>>>>>>
>>>>>> I cannot curl the API from the namenode without a user/password; I get
>>>>>> a 401 when doing that. So it might require credentials.
>>>>>> If I use the admin/password credentials or the rangerusersync
>>>>>> credentials, the curl works. So I'm wondering if those credentials need
>>>>>> to be set up somewhere.
>>>>>>
>>>>>> On Fri, 11 Jan 2019 at 10:15, Don Bosco Durai <bo...@apache.org>
>>>>>> wrote:
>>>>>>
>>>>>> > In terms of "no authentication", is the HDFS plugin using the Policy
>>>>>> Manager API with no credentials at all?
>>>>>>
>>>>>> No credentials, because there is no user/password for the HDFS service
>>>>>> user. It's been a while; I think we used to have admin/password before,
>>>>>> but it was taken out eventually. The code might still be there…
>>>>>>
>>>>>> > What's the first action the plugin performs to be detected by the UI
>>>>>> as active, with a 200 response?
>>>>>>
>>>>>> Abhay or Madhan might be able to give you more specifics. Since the
>>>>>> plugins are polling and each knows the previous version number, if there
>>>>>> are no changes, then it is not registered in the UI. The plugins
>>>>>> primarily pull the policies and tags from Ranger Admin. Everything else
>>>>>> is done by the plugin within the component.
>>>>>>
>>>>>> Bosco
>>>>>>
>>>>>> From: Odon Copon <odonco...@gmail.com>
>>>>>> Reply-To: <user@ranger.apache.org>
>>>>>> Date: Friday, January 11, 2019 at 2:03 AM
>>>>>> To: <user@ranger.apache.org>
>>>>>> Subject: Re: Accessing Ranger Policy Manager API from HDFS plugin
>>>>>>
>>>>>> Indeed, I know that at the moment, without something like Kerberos,
>>>>>> users can impersonate others, but I'm currently building a POC with
>>>>>> basic security to evaluate Ranger, and once it is ready, I'll start
>>>>>> improving the security and scalability. But thank you for pointing that
>>>>>> out.
>>>>>> In terms of "no authentication", is the HDFS plugin using the Policy
>>>>>> Manager API with no credentials at all, or default ones?
>>>>>> What's the first action the plugin performs to be detected by the UI
>>>>>> as active, with a 200 response? Some kind of ping/heartbeat, or just a
>>>>>> REST request to download the policies?
>>>>>> Is there anywhere in the logs where I can see what kind of actions the
>>>>>> plugin is doing? I can't find any log information coming from the
>>>>>> plugin.
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>> On Fri, 11 Jan 2019 at 09:53, Don Bosco Durai <bo...@apache.org>
>>>>>> wrote:
>>>>>>
>>>>>> If there is no Kerberos, then you have 2 options:
>>>>>> 1. No authentication (default)
>>>>>> 2. Two-way SSL to authenticate the request from the plugin.
>>>>>>
>>>>>> Note: if it is a non-Kerberos environment, then authorization cannot be
>>>>>> enforced, because users can impersonate anyone.
>>>>>>
>>>>>> Bosco
>>>>>>
>>>>>> From: Odon Copon <odonco...@gmail.com>
>>>>>> Reply-To: <user@ranger.apache.org>
>>>>>> Date: Friday, January 11, 2019 at 1:22 AM
>>>>>> To: <user@ranger.apache.org>
>>>>>> Subject: Re: Accessing Ranger Policy Manager API from HDFS plugin
>>>>>>
>>>>>> Mmm, but what if the system is not using Kerberos?
>>>>>>
>>>>>> On Fri, 11 Jan 2019, 04:21 Velmurugan Periasamy <
>>>>>> vperias...@hortonworks.com> wrote:
>>>>>>
>>>>>> Yes, that's what I referred to.
>>>>>>
>>>>>> ------------------------------
>>>>>> From: Odon Copon <odonco...@gmail.com>
>>>>>> Sent: Thursday, January 10, 2019 5:07 PM
>>>>>> To: user@ranger.apache.org
>>>>>> Subject: Re: Accessing Ranger Policy Manager API from HDFS plugin
>>>>>>
>>>>>> Are we talking about a Kerberos principal, or some other principal I'm
>>>>>> not understanding?
>>>>>> On Thu, 10 Jan 2019 at 18:05, Odon Copon <odonco...@gmail.com> wrote:
>>>>>>
>>>>>> What do you mean by "HDFS plugin uses the service (Namenode) user's
>>>>>> principal"?
>>>>>> Could you provide an example?
>>>>>> Thanks.
>>>>>>
>>>>>> On Thu, 10 Jan 2019 at 17:08, Velmurugan Periasamy <
>>>>>> vperias...@hortonworks.com> wrote:
>>>>>>
>>>>>> The HDFS plugin uses the service (Namenode) user's principal.
>>>>>>
>>>>>> ------------------------------
>>>>>> From: Odon Copon <odonco...@gmail.com>
>>>>>> Sent: Thursday, January 10, 2019 8:59 AM
>>>>>> To: user@ranger.apache.org
>>>>>> Subject: Accessing Ranger Policy Manager API from HDFS plugin
>>>>>>
>>>>>> Hi,
>>>>>> How does the Ranger HDFS plugin communicate with the Policy Manager
>>>>>> API? Is it using a specific user/password combination?
>>>>>> I know User Sync has the rangerusersync user and password, and all that
>>>>>> information is stored in rangerusersync.jceks, but what about the HDFS
>>>>>> plugin, or any other plugin?
>>>>>> I'm having issues with that: once enabled, my plugin doesn't get
>>>>>> displayed in the UI, and I would like to check the credentials the
>>>>>> plugin is using to call the API.
>>>>>> The User Sync - Policy Manager communication works fine.
>>>>>>
>>>>>> Thanks.
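For anyone landing on this thread: the unauthenticated download endpoint discussed above can be exercised by hand. A sketch that only builds and prints the command (host, port, and service name are placeholders; run the printed command against your own Ranger Admin, and a working setup should return 200 with the policy JSON):

```shell
RANGER_URL="http://ranger-admin.example.com:6080"  # placeholder host:port
SERVICE_NAME="hadoopdev"                           # placeholder repository/service name

# The path is the one confirmed in this thread; -w appends the HTTP status code.
CMD="curl -s -w '%{http_code}\n' $RANGER_URL/service/plugins/policies/download/$SERVICE_NAME"
echo "$CMD"
```

A 401 here, as discussed above, usually means the service is expecting Kerberos or 2-way SSL rather than the open endpoint.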