Github user hemshankar commented on the pull request:

    https://github.com/apache/spark/pull/4405#issuecomment-171233550
  
    I have a few doubts about running in client mode vs. cluster mode.
    Currently I am using a Cloudera Hadoop single-node cluster (Kerberos enabled).
    
    In client mode I use the following commands:
    
        kinit
        spark-submit --master yarn-client --proxy-user cloudera examples/src/main/python/pi.py
    
    This works fine. In cluster mode I use the following command (no kinit done and no TGT present in the cache):
    
        spark-submit --principal <myprinc> --keytab <KT location> --master yarn-cluster examples/src/main/python/pi.py
    
    This also works fine. But when I use the following command in cluster mode (no kinit done and no TGT present in the cache),
    
        spark-submit --principal <myprinc> --keytab <KT location> --master yarn-cluster --proxy-user cloudera examples/src/main/python/pi.py
    
    it throws the following error:
    
          No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
    
    I guess in cluster mode spark-submit does not look for a TGT on the client machine... it transfers the keytab file to the cluster and then starts the Spark job. So why does specifying the "--proxy-user" option look for a TGT when submitting in "yarn-cluster" mode? Am I doing something wrong?
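    
    (For reference, below is a minimal sketch of the workaround I am considering, assuming the problem is only that "--proxy-user" needs a TGT in the local credential cache; the kinit call here uses the same principal and keytab as above.)
    
        # obtain a TGT from the keytab first, then submit with --proxy-user
        kinit -kt <KT location> <myprinc>
        spark-submit --principal <myprinc> --keytab <KT location> --master yarn-cluster --proxy-user cloudera examples/src/main/python/pi.py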

