sarutak opened a new pull request #29620:
URL: https://github.com/apache/spark/pull/29620
### What changes were proposed in this pull request?
This PR changes `sc.listJars` and `sc.listFiles` so that they also list the
jars/files added via the `--jars`/`--files` options when apps run on YARN.
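For context, the listing API itself is unchanged: these `SparkContext` methods simply return the currently registered entries. Below is a minimal sketch (not from the diff) of the public API involved, using hypothetical paths; the jar/file must exist locally, and the snippet needs a Spark installation to run:

```
import org.apache.spark.sql.SparkSession

// Sketch of the public API this PR affects; run with spark-submit
// or paste the body into spark-shell. Paths are illustrative only.
object ListingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.master("local[*]").appName("ListingSketch").getOrCreate()
    val sc = spark.sparkContext
    sc.addJar("/tmp/extra.jar")   // jars added at runtime have always been listed
    sc.addFile("/tmp/extra.txt")
    sc.listJars.foreach(println)  // with this PR, --jars entries appear here on YARN too
    sc.listFiles.foreach(println)
    spark.stop()
  }
}
```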
### Why are the changes needed?
To fix an inconsistency between YARN and the other cluster managers.
Jars/files specified with the `--jars`/`--files` options are listed by
`sc.listJars` and `sc.listFiles` on every cluster manager except YARN.
When an app does not run on YARN, those files are served by the embedded
file server in the driver, and `listJars`/`listFiles` list the served files.
```
$ bin/spark-shell --master spark://localhost:7077 --jars /tmp/test1.jar --files /tmp/test1.txt
scala> sc.listJars
res0: Seq[String] = Vector(spark://192.168.1.204:35969/jars/test1.jar)
scala> sc.listFiles
res1: Seq[String] = Vector(spark://192.168.1.204:35969/files/test1.txt)
```
On YARN, however, files specified by those options are not served by the
embedded file server, so `listJars` and `listFiles` do not list them.
```
$ bin/spark-shell --master yarn --jars /tmp/test1.jar --files /tmp/test1.txt
scala> sc.listJars
res0: Seq[String] = Vector()
scala> sc.listFiles
res1: Seq[String] = Vector()
```
### Does this PR introduce _any_ user-facing change?
Yes. The behavior of `listJars`/`listFiles` is now consistent across the
supported cluster managers.
### How was this patch tested?
For YARN client mode, I ran the following code with spark-shell and
confirmed the result.
```
$ bin/spark-shell --master yarn --jars /tmp/test1.jar,hdfs:///tmp/test2.jar --files /tmp/test1.txt,hdfs:///tmp/test2.txt
scala> sc.listJars
res0: Seq[String] = Vector(file:///tmp/test1.jar, hdfs://namenode-ha-cluster/tmp/test2.jar)
scala> sc.listFiles
res1: Seq[String] = Vector(file:///tmp/test1.txt, hdfs://namenode-ha-cluster/tmp/test2.txt)
```
For YARN cluster mode, I ran the following code and confirmed the result in
the ApplicationMaster log.
```
import org.apache.spark.internal.Logging
import org.apache.spark.sql.SparkSession
object ListJarsTest extends Logging {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ListJarsTest").getOrCreate()
    val sc = spark.sparkContext
    println("Jars")
    sc.listJars.foreach(println)
    println("Files")
    sc.listFiles.foreach(println)
    // dummy job to exercise the session
    spark.range(1, 2).collect()
    spark.stop()
  }
}
```
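The cluster-mode run above can be reproduced with a submission along these lines (the application jar name is illustrative; a YARN cluster is required):

```
$ bin/spark-submit --master yarn --deploy-mode cluster \
    --class ListJarsTest \
    --jars /tmp/test1.jar,hdfs:///tmp/test2.jar \
    --files /tmp/test1.txt,hdfs:///tmp/test2.txt \
    list-jars-test.jar
```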
```
Container: container_1598943095836_0026_01_000001 on hadoop-slave00_39924
LogAggregationType: AGGREGATED
=========================================================================
LogType:stdout
LogLastModifiedTime: Tue Sep 01 20:49:29 +0900 2020
LogLength:137
LogContents:
Jars
file:///tmp/test1.jar
hdfs://namenode-ha-cluster/tmp/test2.jar
Files
file:///tmp/test1.txt
hdfs://namenode-ha-cluster/tmp/test2.txt
```
I also tested with spark-sql CLI.
```
spark-sql> LIST JARS;
20/09/01 21:02:56 INFO CodeGenerator: Code generated in 157.948349 ms
file:///tmp/test1.jar
hdfs://namenode-ha-cluster/tmp/test2.jar
Time taken: 1.895 seconds, Fetched 2 row(s)
20/09/01 21:02:56 INFO SparkSQLCLIDriver: Time taken: 1.895 seconds, Fetched 2 row(s)
spark-sql> LIST FILES;
file:///tmp/test1.txt
hdfs://namenode-ha-cluster/tmp/test2.txt
Time taken: 0.033 seconds, Fetched 2 row(s)
20/09/01 21:03:06 INFO SparkSQLCLIDriver: Time taken: 0.033 seconds, Fetched 2 row(s)
```