[ 
https://issues.apache.org/jira/browse/FLINK-25108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

aresyhzhang updated FLINK-25108:
--------------------------------
    Description: 
version:
Flink version: 1.14.0
Java version: 1.8
run mode: Flink native K8s session

problem:
When I use Flink SQL batch mode to read data from a Hive table (we submit the 
job via Flink native K8s session), the following exception appears:
Caused by: java.io.IOException: Can't get Master Kerberos principal for use as 
renewer
        at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:116)
 ~[hadoop.jar:2.6.5-10.0]
        at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
 ~[hadoop.jar:2.6.5-10.0]
        at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
 ~[hadoop.jar:2.6.5-10.0]
        at 
org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:205) 
~[hadoop.jar:2.6.5-10.0]
I think this exception is caused by the absence of yarn-site.xml in the 
directory pointed to by the environment variable HADOOP_CONF_DIR, whose default 
value is /opt/hadoop/conf.
I tried to change the value of HADOOP_CONF_DIR by specifying in 
pod-template.yaml:
- name: HADOOP_CONF_DIR
  value: "/etc/hive/conf"
because I have stored yarn-site.xml under /etc/hive/conf, but the value is 
always overwritten with "/opt/hadoop/conf".
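For reference, a complete pod-template.yaml sketch of the override I attempted 
(field names follow the standard Kubernetes Pod spec; "flink-main-container" is 
the container name Flink's native K8s mode matches on):

```yaml
# Hypothetical pod template; only the env override is the point here.
apiVersion: v1
kind: Pod
metadata:
  name: jobmanager-pod-template
spec:
  containers:
    - name: flink-main-container
      env:
        # Attempted override; in practice this value gets replaced
        # with the default /opt/hadoop/conf at deploy time.
        - name: HADOOP_CONF_DIR
          value: "/etc/hive/conf"
```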

Remark:
1. Looking at the source code of getHadoopConfigurationFileItems in the 
HadoopConfMountDecorator class, I find:
    final List<String> expectedFileNames = new ArrayList<>();
    expectedFileNames.add("core-site.xml");
    expectedFileNames.add("hdfs-site.xml");
Only core-site.xml and hdfs-site.xml are mounted here; yarn-site.xml is not, 
which leads to the Kerberos authentication failure.
Should I add another line of code:
    expectedFileNames.add("yarn-site.xml");
to pass Kerberos authentication?
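A minimal, self-contained sketch of the proposed change (the class and method 
names come from the Flink source; the standalone wrapper class here is only for 
illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in for the list built in
// HadoopConfMountDecorator#getHadoopConfigurationFileItems.
public class HadoopConfFileNames {

    static List<String> expectedFileNames() {
        final List<String> expectedFileNames = new ArrayList<>();
        expectedFileNames.add("core-site.xml");
        expectedFileNames.add("hdfs-site.xml");
        // Proposed addition: without yarn-site.xml the YARN ResourceManager
        // principal is unknown, so TokenCache cannot determine a renewer.
        expectedFileNames.add("yarn-site.xml");
        return expectedFileNames;
    }

    public static void main(String[] args) {
        // prints [core-site.xml, hdfs-site.xml, yarn-site.xml]
        System.out.println(expectedFileNames());
    }
}
```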

2. Or is there another way to actually change the value of the environment 
variable HADOOP_CONF_DIR so that it points to the "/etc/hive/conf" I want 
instead of "/opt/hadoop/conf", in order to pass Kerberos authentication?



> Flink Kerberos authentication error when the environment variable 
> HADOOP_CONF_DIR is set
> ----------------------------------------------------------------------------------------
>
>                 Key: FLINK-25108
>                 URL: https://issues.apache.org/jira/browse/FLINK-25108
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / Kubernetes
>    Affects Versions: 1.14.0
>            Reporter: aresyhzhang
>            Priority: Minor
>



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
