[ https://issues.apache.org/jira/browse/HADOOP-15722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16627456#comment-16627456 ]

Brahma Reddy Battula commented on HADOOP-15722:
-----------------------------------------------

Here *user_a* and *user_b* are launched as different JVMs on the same machine, and 
*hive.exec.scratchdir* is used by the client to create the temp/intermediate files.

As Hive has the feature "impersonate the connected 
user (hive.server2.enable.doAs=true)", it might be overriding 'user.name' to 
support this (not sure how it does that; will try to find out and update here). 
But for their use case, this should be required.

*Example showing the system property is not getting resolved:*
{code:java}
public void testProxyUserFromEnvironment() throws IOException {
  String proxyUser = "foo.bar";
  System.setProperty(UserGroupInformation.HADOOP_PROXY_USER, proxyUser);
  UserGroupInformation proxyUser1 =
      UserGroupInformation.createProxyUser(proxyUser, UserGroupInformation.getLoginUser());
  final Configuration conf = new Configuration();
  // user.name.from.proxy is set to /tmp/${user.name} in the following xml.
  conf.addResource(new Path(
      "D:\\trunk\\hadoop\\hadoop-common-project\\hadoop-common\\src\\test\\resources\\fi-site.xml"));
  proxyUser1.doAs(new PrivilegedAction<Void>() {
    @Override
    public Void run() {
      String user = conf.get("user.name.from.proxy");
      System.out.print("####Value after Exporting: " + user);
      return null;
    }
  });
}

Output:
####Value after Exporting: /tmp/${user.name}
{code}
 
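For reference, Hadoop's Configuration expands {{${...}}} patterns in property values against other config properties and, when the parser is not restricted, against JVM system properties as well. A minimal self-contained sketch of that kind of substitution is below; the class and method names here are illustrative, not Hadoop's actual internals:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VarExpansion {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}$]+)\\}");

    // Expand ${name} against the config map first, then (optionally) JVM
    // system properties, mirroring unrestricted-parser behaviour.
    // Unknown variables are left as literal text.
    static String expand(String value, Map<String, String> conf, boolean allowSystemProps) {
        Matcher m = VAR.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String name = m.group(1);
            String repl = conf.get(name);
            if (repl == null && allowSystemProps) {
                repl = System.getProperty(name);
            }
            m.appendReplacement(sb,
                Matcher.quoteReplacement(repl != null ? repl : m.group(0)));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // With system properties allowed, ${user.name} resolves to the JVM user.
        System.out.println(expand("/tmp/${user.name}", conf, true));
        // With them disallowed, the value stays literal, as in the test output above.
        System.out.println(expand("/tmp/${user.name}", conf, false)); // prints /tmp/${user.name}
    }
}
```

This illustrates why the scratch-dir value comes back unexpanded when system-property resolution is switched off: the parser simply has no source for {{user.name}} and leaves the token in place.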
{quote}How do we decide what is safe?
{quote}
I was wondering the same. But only the admin will be configuring the cluster, so he 
will be aware of what is set; hence allowing system properties should be fine, I feel. In this 
particular case, they need to store the temp files at the user level.
{quote}How about not using the system property in the scratch dir path?
{quote}
Yes, this will work.

 

> regression: Hadoop 2.7.7 release breaks spark submit
> ----------------------------------------------------
>
>                 Key: HADOOP-15722
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15722
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build, conf, security
>    Affects Versions: 2.7.7
>            Reporter: Steve Loughran
>            Priority: Major
>
> SPARK-25330 highlights that upgrading spark to hadoop 2.7.7 is causing a 
> regression in client setup, with things only working when 
> {{Configuration.getRestrictParserDefault(Object resource)}} = false.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
