artiship opened a new pull request #30301:
URL: https://github.com/apache/spark/pull/30301


   ### What changes were proposed in this pull request?
   Modify SparkSQLCLIDriver.scala to call the cli.printMasterAndAppId method before processing a file.
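
   The reordering can be sketched as follows. This is a minimal, self-contained illustration rather than the actual SparkSQLCLIDriver code: the method names mirror the PR description, but the bodies are stubs that only record the call order.

   ```scala
   import scala.collection.mutable.ListBuffer

   // Sketch of the call-order change in SparkSQLCLIDriver.
   // Names follow the PR description; the bodies are illustrative stubs.
   object CliSketch {
     val calls: ListBuffer[String] = ListBuffer.empty[String]

     def printMasterAndAppId(): Unit = calls += "printMasterAndAppId"
     def processFile(path: String): Unit = calls += s"processFile($path)"

     def run(sqlFile: String): Unit = {
       // Before the change, processFile effectively ran without the id being
       // printed for `spark-sql -f file.sql`. After the change, the master
       // and application id are printed first, then the file is processed:
       printMasterAndAppId()
       processFile(sqlFile)
     }
   }
   ```
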
   
   ### Why are the changes needed?
   SPARK-25043 already added printing of the application id, but it did not cover the case of processing a file. This small change makes spark-sql also print the application id when a file is processed (e.g. via `-f`).
   
   ### Does this PR introduce _any_ user-facing change?
   No.
   
   ### How was this patch tested?
   Tested manually in the following environment:
   
   ```
   spark version: 3.0.1
   os: centos 7
   ```
   
   /tmp/tmp.sql
   
   ```sql
   select 1;
   ```
   
   submit command:
   
   ```sh
   export HADOOP_USER_NAME=my-hadoop-user
   bin/spark-sql  \
   --master yarn \
   --deploy-mode client \
   --queue my.queue.name \
   --conf spark.driver.host=$(hostname -i) \
   --conf spark.app.name=spark-test  \
   --name "spark-test" \
   -f /tmp/tmp.sql 
   ```
   
   execution log:
   
   ```sh
   20/11/09 23:18:39 WARN NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
   20/11/09 23:18:40 WARN HiveConf: HiveConf of name 
hive.spark.client.rpc.server.address.use.ip does not exist
   20/11/09 23:18:40 WARN HiveConf: HiveConf of name 
hive.spark.client.submit.timeout.interval does not exist
   20/11/09 23:18:40 WARN HiveConf: HiveConf of name hive.enforce.bucketing 
does not exist
   20/11/09 23:18:40 WARN HiveConf: HiveConf of name 
hive.server2.enable.impersonation does not exist
   20/11/09 23:18:40 WARN HiveConf: HiveConf of name hive.run.timeout.seconds 
does not exist
   20/11/09 23:18:40 WARN HiveConf: HiveConf of name 
hive.support.sql11.reserved.keywords does not exist
   20/11/09 23:18:40 WARN DomainSocketFactory: The short-circuit local reads 
feature cannot be used because libhadoop cannot be loaded.
   20/11/09 23:18:41 WARN SparkConf: Note that spark.local.dir will be 
overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in 
mesos/standalone/kubernetes and LOCAL_DIRS in YARN).
   20/11/09 23:18:42 WARN Client: Neither spark.yarn.jars nor 
spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
   20/11/09 23:18:52 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted 
to request executors before the AM has registered!
   
   Spark master: yarn, Application Id: application_1567136266901_27355775
   1
   1
   Time taken: 4.974 seconds, Fetched 1 row(s)
   ```


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]
