[jira] [Commented] (SPARK-23790) proxy-user failed connecting to a kerberos configured metastore
[ https://issues.apache.org/jira/browse/SPARK-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420472#comment-16420472 ] Apache Spark commented on SPARK-23790:

User 'skonto' has created a pull request for this issue: https://github.com/apache/spark/pull/20945

> proxy-user failed connecting to a kerberos configured metastore
> ---
>
> Key: SPARK-23790
> URL: https://issues.apache.org/jira/browse/SPARK-23790
> Project: Spark
> Issue Type: Bug
> Components: Mesos
> Affects Versions: 2.3.0
> Reporter: Stavros Kontopoulos
> Priority: Major
>
> This appeared at a customer trying to integrate with a kerberized HDFS cluster.
> It can be fixed with the change proposed [here|https://github.com/apache/spark/pull/17333]; the problem was first reported [here|https://issues.apache.org/jira/browse/SPARK-19995] for YARN.
> The other option is to add the delegation tokens to the current user's UGI, as in [here|https://github.com/apache/spark/pull/17335]. The latter fixes the problem but leads to a failure when someone uses a HadoopRDD, because HadoopRDD uses FileInputFormat to compute its splits, and FileInputFormat obtains tokens from the local ticket cache via TokenCache.obtainTokensForNamenodes. Eventually this fails with:
> {quote}Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token can be issued only with kerberos or web authentication
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:5896){quote}
> This implies that the security mode is SIMPLE and the Hadoop libraries there are not aware of Kerberos.
> This is related to this [issue|https://issues.apache.org/jira/browse/MAPREDUCE-6876], and the workaround decided on was to [trick|https://github.com/apache/spark/blob/a33655348c4066d9c1d8ad2055aadfbc892ba7fd/core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala#L795-L804] hadoop.
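The two fixes weighed in the description can be sketched with Hadoop's `UserGroupInformation` API. This is a minimal sketch, not the actual code of either pull request: the helper names (`connectAsRealUser`, `addHdfsTokens`) and the surrounding wiring are illustrative, while the Hadoop calls (`getCurrentUser`, `getRealUser`, `doAs`, `addDelegationTokens`, `addCredentials`) are real public API. It assumes Hadoop client jars on the classpath.

```scala
import java.security.PrivilegedExceptionAction

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.security.{Credentials, UserGroupInformation}

object ProxyUserOptions {

  // Option 1 (the PR 17333 approach): run the metastore connection as the
  // real, Kerberos-authenticated user sitting behind the proxy-user UGI.
  def connectAsRealUser[T](connect: => T): T = {
    val current = UserGroupInformation.getCurrentUser           // e.g. the proxy user (auth:PROXY)
    val real = Option(current.getRealUser).getOrElse(current)   // e.g. hive@LOCAL (auth:KERBEROS)
    real.doAs(new PrivilegedExceptionAction[T] {
      override def run(): T = connect
    })
  }

  // Option 2 (the PR 17335 approach): obtain HDFS delegation tokens up front
  // and attach them to the current user's UGI so later code can find them.
  // As the description notes, FileInputFormat's TokenCache path may still try
  // to fetch fresh tokens itself and fail when the client conf is SIMPLE.
  def addHdfsTokens(conf: Configuration, renewer: String): Unit = {
    val creds = new Credentials()
    FileSystem.get(conf).addDelegationTokens(renewer, creds)
    UserGroupInformation.getCurrentUser.addCredentials(creds)
  }
}
```

Option 1 keeps the kerberized call inside a `doAs` scope, which matches the `PrivilegedAction as:hive@LOCAL (auth:KERBEROS)` lines visible in the debug log later in this thread.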
--
This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Commented] (SPARK-23790) proxy-user failed connecting to a kerberos configured metastore
[ https://issues.apache.org/jira/browse/SPARK-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16414291#comment-16414291 ] Stavros Kontopoulos commented on SPARK-23790:

Yes, that is what I am saying. The initial fix here: [https://github.com/apache/spark/pull/17333] does the trick, but I want an approach similar to YARN's, which adds delegation tokens to the current user's UGI. When I did that, I hit the issue with HadoopRDD, which fetches its delegation tokens on its own.
[jira] [Commented] (SPARK-23790) proxy-user failed connecting to a kerberos configured metastore
[ https://issues.apache.org/jira/browse/SPARK-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16414208#comment-16414208 ] Marcelo Vanzin commented on SPARK-23790:

BTW, if what you're saying is that Yuming's fix also works for the issue you're seeing, we should probably dupe this to the other bug.
[jira] [Commented] (SPARK-23790) proxy-user failed connecting to a kerberos configured metastore
[ https://issues.apache.org/jira/browse/SPARK-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16414176#comment-16414176 ] Marcelo Vanzin commented on SPARK-23790:

I haven't had the time to see exactly what spark-cli is doing. This looks the same as SPARK-23639, and I don't like the place where the fix is being made. But I don't know enough about spark-cli yet to suggest something different.
[jira] [Commented] (SPARK-23790) proxy-user failed connecting to a kerberos configured metastore
[ https://issues.apache.org/jira/browse/SPARK-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412974#comment-16412974 ] Stavros Kontopoulos commented on SPARK-23790:

[~q79969786] I see the PRs you created to fix the other PR; by the way, the doAsRealUser approach does the work:
{quote}
18/03/23 19:26:18 DEBUG UserGroupInformation: PrivilegedAction as:hive@LOCAL (auth:KERBEROS) from:org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
18/03/23 19:26:18 DEBUG TSaslTransport: opening transport org.apache.thrift.transport.TSaslClientTransport@64201482
18/03/23 19:26:18 DEBUG TSaslClientTransport: Sending mechanism name GSSAPI and initial response of length 607
18/03/23 19:26:18 DEBUG TSaslTransport: CLIENT: Writing message with status START and payload length 6
18/03/23 19:26:18 DEBUG TSaslTransport: CLIENT: Writing message with status OK and payload length 607
18/03/23 19:26:18 DEBUG TSaslTransport: CLIENT: Start message handled
18/03/23 19:26:18 DEBUG TSaslTransport: CLIENT: Received message with status OK and payload length 108
18/03/23 19:26:18 DEBUG TSaslTransport: CLIENT: Writing message with status OK and payload length 0
18/03/23 19:26:18 DEBUG TSaslTransport: CLIENT: Received message with status OK and payload length 32
18/03/23 19:26:18 DEBUG TSaslTransport: CLIENT: Writing message with status COMPLETE and payload length 32
18/03/23 19:26:18 DEBUG TSaslTransport: CLIENT: Main negotiation loop complete
18/03/23 19:26:18 DEBUG TSaslTransport: CLIENT: SASL Client receiving last message
18/03/23 19:26:18 DEBUG TSaslTransport: CLIENT: Received message with status COMPLETE and payload length 0
18/03/23 19:26:18 INFO metastore: Connected to metastore.
{quote}
The reason I hit this is that I build from an earlier branch for the customer, which does not contain that commit. Thank you though; this is a regression I should know about for the next releases, and I will follow the work being done.
My problem is that I tried to fetch delegation tokens early, so that subsequent operations don't use a TGT all the time, but I hit this issue with HadoopRDD. I believed I could add the delegation tokens when the Mesos scheduler backend starts, as in the YARN case, where Client.java does something similar.
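The HadoopRDD failure path discussed in this thread can be seen in isolation by asking TokenCache for namenode tokens the way FileInputFormat's input listing does. This is a sketch under stated assumptions: it assumes Hadoop mapreduce client jars on the classpath, and the input path `hdfs:///input` is a placeholder; `TokenCache.obtainTokensForNamenodes(Credentials, Path[], Configuration)` itself is real Hadoop API.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.security.TokenCache
import org.apache.hadoop.security.Credentials

object SplitTokenFetch {
  def main(args: Array[String]): Unit = {
    // Security mode is SIMPLE unless core-site.xml on the classpath says otherwise.
    val conf = new Configuration()
    val creds = new Credentials()
    // FileInputFormat effectively makes this call while listing input paths
    // to compute splits. With a SIMPLE-mode client configuration against a
    // kerberized namenode, it ends in "Delegation Token can be issued only
    // with kerberos or web authentication", matching the stack trace quoted
    // in this issue.
    TokenCache.obtainTokensForNamenodes(creds, Array(new Path("hdfs:///input")), conf)
  }
}
```

This is why pre-fetched tokens on the UGI alone do not save a HadoopRDD job: split computation triggers its own token acquisition on the client side.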
[jira] [Commented] (SPARK-23790) proxy-user failed connecting to a kerberos configured metastore
[ https://issues.apache.org/jira/browse/SPARK-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412960#comment-16412960 ] Yuming Wang commented on SPARK-23790:

Can you try https://github.com/apache/spark/pull/20898?
[jira] [Commented] (SPARK-23790) proxy-user failed connecting to a kerberos configured metastore
[ https://issues.apache.org/jira/browse/SPARK-23790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412644#comment-16412644 ] Stavros Kontopoulos commented on SPARK-23790:

[~susanxhuynh] fyi. [~vanzin], [~jerryshao] do you think we should revert back to the other solution, with doAsRealUser(SessionState.start(state))? I don't think there is much progress [here|https://issues.apache.org/jira/browse/MAPREDUCE-6876].