Hello!

I am writing to ask some questions about the plugins for the NameNode, Hive,
etc.

I read this description on the Hortonworks website:
###
Ranger plugins
Plugins are lightweight Java programs which embed within processes of each
cluster component. For example, the Apache Ranger plugin for Apache Hive is
embedded within HiveServer2. These plugins pull in policies from a central
server and store them locally in a file. When a user request comes through
the component, these plugins intercept the request and evaluate it against
the security policy. Plugins also collect data from the user request and
follow a separate thread to send this data back to the audit server.
###

Link :
http://hortonworks.com/hadoop/ranger/#section_2

My questions are:

Q1 - Is the path where the file is stored configurable? If yes, which POSIX
permissions should I set for the path and the file? hdfs:hdfs for the
NameNode, hive:hive for HiveServer2, etc.? And is 440 sufficient for the
file?
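To make Q1 concrete, this is the kind of setup I have in mind (the directory
and file names here are hypothetical, just to illustrate the question):

```shell
# Hypothetical policy cache location -- only to illustrate the question,
# not the actual path Ranger uses
POLICY_DIR=$(mktemp -d)
POLICY_FILE="$POLICY_DIR/hdfs_policies.json"

touch "$POLICY_FILE"

# The permissions I am proposing: the service user owns the cache directory,
# and the policy file is readable only by owner and group (mode 440)
chmod 750 "$POLICY_DIR"
chmod 440 "$POLICY_FILE"

# Show the resulting file mode
stat -c '%a' "$POLICY_FILE"
```

On a real cluster the `chown hdfs:hdfs` / `chown hive:hive` step would come
before the `chmod`, but I left it out here since it needs root.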

Q2 - How are these plugin Java programs launched on the hosts? By the Ranger
server? Or do I need to start them manually?

Q3 - These are in fact Ranger agents running on the different hosts, aren't
they? If yes, which component on the server side do they contact to get the
policies? And over which protocol?

Q4 - Same question, but this time for the audit logs: which component do the
Ranger "agents" contact, and over which protocol?

Sorry for all these questions. ^_^

I hope you can help me.

Best regards.

Lune
