KevinSmile commented on pull request #29653:
URL: https://github.com/apache/spark/pull/29653#issuecomment-688562460


   I think it only affects `run-example` in standalone-cluster mode, since client mode uses different logic.
   
   And for `spark-submit`, if `--jars` is provided (including your main jar), you will still get the right result, but with an error in the log. In the run below, the trailing `100` gets treated as the primary application jar, so Spark logs a `FileNotFoundException` for it, yet the job still succeeds because the real jars were shipped via `--jars`:
   
   
   ```
   ./bin/spark-submit \
   >   --class org.apache.spark.examples.SparkPi \
   >   --jars ./examples/jars/spark-examples_2.12-3.0.0.jar,./examples/jars/scopt_2.12-3.7.1.jar \
   >   --master spark://KevinMac.local:7077 \
   >   --deploy-mode client \
   >   100
   20/09/08 08:52:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
   20/09/08 08:52:28 WARN DependencyUtils: Local jar /Users/Kevin/Development/learnSpark/spark-3.0.0-bin-hadoop2.7/100 does not exist, skipping.
   20/09/08 08:52:29 INFO SparkContext: Running Spark version 3.0.0
   20/09/08 08:52:29 INFO ResourceUtils: ==============================================================
   20/09/08 08:52:29 INFO ResourceUtils: Resources for spark.driver:
   
   20/09/08 08:52:29 INFO ResourceUtils: ==============================================================
   20/09/08 08:52:29 INFO SparkContext: Submitted application: Spark Pi
   20/09/08 08:52:29 INFO SecurityManager: Changing view acls to: Kevin
   20/09/08 08:52:29 INFO SecurityManager: Changing modify acls to: Kevin
   20/09/08 08:52:29 INFO SecurityManager: Changing view acls groups to:
   20/09/08 08:52:29 INFO SecurityManager: Changing modify acls groups to:
   20/09/08 08:52:29 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(Kevin); groups with view permissions: Set(); users  with modify permissions: Set(Kevin); groups with modify permissions: Set()
   20/09/08 08:52:29 INFO Utils: Successfully started service 'sparkDriver' on port 62182.
   20/09/08 08:52:29 INFO SparkEnv: Registering MapOutputTracker
   20/09/08 08:52:29 INFO SparkEnv: Registering BlockManagerMaster
   20/09/08 08:52:29 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
   20/09/08 08:52:29 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
   20/09/08 08:52:29 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
   20/09/08 08:52:29 INFO DiskBlockManager: Created local directory at /private/var/folders/dg/b9vzh3ls5d57qmmffjpvc8540000gn/T/blockmgr-8ba808a8-d7dc-4e69-901d-578282febf22
   20/09/08 08:52:29 INFO MemoryStore: MemoryStore started with capacity 366.3 MiB
   20/09/08 08:52:29 INFO SparkEnv: Registering OutputCommitCoordinator
   20/09/08 08:52:30 INFO Utils: Successfully started service 'SparkUI' on port 4040.
   20/09/08 08:52:30 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.0.100:4040
   20/09/08 08:52:30 INFO SparkContext: Added JAR file:///Users/Kevin/Development/learnSpark/spark-3.0.0-bin-hadoop2.7/examples/jars/spark-examples_2.12-3.0.0.jar at spark://192.168.0.100:62182/jars/spark-examples_2.12-3.0.0.jar with timestamp 1599526350171
   20/09/08 08:52:30 INFO SparkContext: Added JAR file:///Users/Kevin/Development/learnSpark/spark-3.0.0-bin-hadoop2.7/examples/jars/scopt_2.12-3.7.1.jar at spark://192.168.0.100:62182/jars/scopt_2.12-3.7.1.jar with timestamp 1599526350172
   20/09/08 08:52:30 ERROR SparkContext: Failed to add file:/Users/Kevin/Development/learnSpark/spark-3.0.0-bin-hadoop2.7/100 to Spark environment
   java.io.FileNotFoundException: Jar /Users/Kevin/Development/learnSpark/spark-3.0.0-bin-hadoop2.7/100 not found
        at org.apache.spark.SparkContext.addLocalJarFile$1(SparkContext.scala:1827)
        at org.apache.spark.SparkContext.addJar(SparkContext.scala:1881)
        at org.apache.spark.SparkContext.$anonfun$new$11(SparkContext.scala:485)
        at org.apache.spark.SparkContext.$anonfun$new$11$adapted(SparkContext.scala:485)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:485)
        at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2555)
        at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$1(SparkSession.scala:930)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
        at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:30)
        at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:928)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   20/09/08 08:52:30 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://KevinMac.local:7077...
   20/09/08 08:52:30 INFO TransportClientFactory: Successfully created connection to KevinMac.local/192.168.0.100:7077 after 70 ms (0 ms spent in bootstraps)
   20/09/08 08:52:30 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20200908085230-0001
   20/09/08 08:52:30 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200908085230-0001/0 on worker-20200908082422-192.168.0.100-61280 (192.168.0.100:61280) with 4 core(s)
   20/09/08 08:52:30 INFO StandaloneSchedulerBackend: Granted executor ID app-20200908085230-0001/0 on hostPort 192.168.0.100:61280 with 4 core(s), 1024.0 MiB RAM
   20/09/08 08:52:30 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 62184.
   20/09/08 08:52:30 INFO NettyBlockTransferService: Server created on 192.168.0.100:62184
   20/09/08 08:52:30 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
   20/09/08 08:52:30 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200908085230-0001/0 is now RUNNING
   20/09/08 08:52:30 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.0.100, 62184, None)
   20/09/08 08:52:30 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.0.100:62184 with 366.3 MiB RAM, BlockManagerId(driver, 192.168.0.100, 62184, None)
   20/09/08 08:52:30 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.0.100, 62184, None)
   20/09/08 08:52:30 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.0.100, 62184, None)
   20/09/08 08:52:31 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
   20/09/08 08:52:32 INFO SparkContext: Starting job: reduce at SparkPi.scala:38
   20/09/08 08:52:32 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 2 output partitions
   20/09/08 08:52:32 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
   20/09/08 08:52:32 INFO DAGScheduler: Parents of final stage: List()
   20/09/08 08:52:32 INFO DAGScheduler: Missing parents: List()
   20/09/08 08:52:32 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
   20/09/08 08:52:32 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 3.1 KiB, free 366.3 MiB)
   20/09/08 08:52:32 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1816.0 B, free 366.3 MiB)
   20/09/08 08:52:32 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.0.100:62184 (size: 1816.0 B, free: 366.3 MiB)
   20/09/08 08:52:32 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1200
   20/09/08 08:52:32 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34) (first 15 tasks are for partitions Vector(0, 1))
   20/09/08 08:52:32 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
   20/09/08 08:52:34 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 1024, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
   20/09/08 08:52:35 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.0.100:62186) with ID 0
   20/09/08 08:52:35 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.0.100:62188 with 366.3 MiB RAM, BlockManagerId(0, 192.168.0.100, 62188, None)
   20/09/08 08:52:35 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.0.100, executor 0, partition 0, PROCESS_LOCAL, 7397 bytes)
   20/09/08 08:52:35 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 192.168.0.100, executor 0, partition 1, PROCESS_LOCAL, 7397 bytes)
   20/09/08 08:52:35 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.0.100:62188 (size: 1816.0 B, free: 366.3 MiB)
   20/09/08 08:52:36 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1387 ms on 192.168.0.100 (executor 0) (1/2)
   20/09/08 08:52:36 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 1340 ms on 192.168.0.100 (executor 0) (2/2)
   20/09/08 08:52:36 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
   20/09/08 08:52:36 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) finished in 4.205 s
   20/09/08 08:52:36 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
   20/09/08 08:52:36 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished
   20/09/08 08:52:36 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 4.375118 s
   Pi is roughly 3.1478357391786957
   20/09/08 08:52:36 INFO SparkUI: Stopped Spark web UI at http://192.168.0.100:4040
   20/09/08 08:52:36 INFO StandaloneSchedulerBackend: Shutting down all executors
   20/09/08 08:52:36 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
   20/09/08 08:52:36 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
   20/09/08 08:52:36 INFO MemoryStore: MemoryStore cleared
   20/09/08 08:52:36 INFO BlockManager: BlockManager stopped
   20/09/08 08:52:36 INFO BlockManagerMaster: BlockManagerMaster stopped
   20/09/08 08:52:36 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
   20/09/08 08:52:36 INFO SparkContext: Successfully stopped SparkContext
   20/09/08 08:52:36 INFO ShutdownHookManager: Shutdown hook called
   20/09/08 08:52:37 INFO ShutdownHookManager: Deleting directory /private/var/folders/dg/b9vzh3ls5d57qmmffjpvc8540000gn/T/spark-3c24377c-36fe-46d7-b948-3a6f8321451a
   20/09/08 08:52:37 INFO ShutdownHookManager: Deleting directory /private/var/folders/dg/b9vzh3ls5d57qmmffjpvc8540000gn/T/spark-ea62154a-ece4-4631-adb5-cc971e3d2b1f
   ```
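   
   For comparison, a minimal sketch of the usual invocation (paths assumed to match the run above), where the example jar is passed as the primary resource and `100` is a plain application argument, so no such error should appear:
   
   ```
   # Primary application jar goes last among the options, before the app arguments;
   # only extra dependencies (here scopt) need to be listed under --jars.
   ./bin/spark-submit \
     --class org.apache.spark.examples.SparkPi \
     --jars ./examples/jars/scopt_2.12-3.7.1.jar \
     --master spark://KevinMac.local:7077 \
     --deploy-mode client \
     ./examples/jars/spark-examples_2.12-3.0.0.jar \
     100
   ```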
   
   

