HyukjinKwon commented on code in PR #41415: URL: https://github.com/apache/spark/pull/41415#discussion_r1217286686
##########
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/artifact/SparkConnectArtifactManager.scala:
##########
@@ -154,6 +154,8 @@ class SparkConnectArtifactManager private[connect] {
       val canonicalUri =
         fragment.map(UriBuilder.fromUri(target.toUri).fragment).getOrElse(target.toUri)
       sessionHolder.session.sparkContext.addArchive(canonicalUri.toString)
+    } else if (remoteRelativePath.startsWith(s"files${File.separator}")) {
+      sessionHolder.session.sparkContext.addFile(target.toString)

Review Comment:
For regular files and archives, I don't intend to expose `org.apache.spark.SparkFiles` for now. Since the files and archives are always stored in the current working directory of executors in production, I was simply thinking of creating a session-dedicated directory and changing the current working directory to it. That way, end users would continue accessing their files with `./myfile.txt` or `./myarchive`.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
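The proposal above can be illustrated with a minimal, self-contained sketch. The directory layout, the `sessionDir`/`addFile` helpers, and the session id are all hypothetical names invented for illustration, not Spark's actual implementation: artifacts added for a session land in a session-dedicated directory, and if the executor's working directory were switched to that directory, a relative path like `./myfile.txt` would resolve to the session's copy.

```scala
import java.nio.file.{Files, Path, Paths}

object SessionScopedFiles {
  // Hypothetical session-dedicated directory, one per Spark Connect session.
  def sessionDir(sessionId: String): Path = {
    val dir = Paths.get(System.getProperty("java.io.tmpdir"), s"connect-session-$sessionId")
    Files.createDirectories(dir)
    dir
  }

  // Simulate adding a file for a session: store it under the session directory.
  def addFile(sessionId: String, name: String, contents: String): Path = {
    val target = sessionDir(sessionId).resolve(name)
    Files.write(target, contents.getBytes("UTF-8"))
    target
  }

  def main(args: Array[String]): Unit = {
    val written = addFile("abc123", "myfile.txt", "hello")
    // If the executor's working directory were this session's directory,
    // the user-visible relative path "./myfile.txt" would resolve to it:
    val resolved = sessionDir("abc123").resolve("./myfile.txt").normalize()
    println(resolved == written.normalize()) // prints "true"
  }
}
```

The point of the sketch is that nothing user-facing changes: users keep writing `./myfile.txt`, and the session scoping is achieved purely by where the working directory points.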