Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/12435#discussion_r60547975
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -443,6 +444,27 @@ object SQLConf {
     .booleanConf
     .createWithDefault(false)
+  val FILE_SINK_LOG_DELETION =
+    SQLConfigBuilder("spark.sql.streaming.fileSink.log.deletion")
+      .internal()
+      .doc("Whether to delete the expired log files in file stream sink.")
+      .booleanConf
+      .createWithDefault(true)
+
+  val FILE_SINK_LOG_COMPACT_INTERVAL =
+    SQLConfigBuilder("spark.sql.streaming.fileSink.log.compactInterval")
+      .internal()
+      .doc("Number of log files after which all the previous files " +
+        "are compacted into the next log file.")
+      .intConf
+      .createWithDefault(10)
+
+  val FILE_SINK_LOG_CLEANUP_DELAY =
+    SQLConfigBuilder("spark.sql.streaming.fileSink.log.cleanupDelay")
+      .internal()
+      .doc("How long in milliseconds a file is guaranteed to be visible for all readers.")
--- End diff ---
Ignore S3; look at S3N in Hadoop 2.4. Sadly, it [doesn't either](https://github.com/apache/hadoop/blob/release-2.4.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java#L556); I didn't fix that till 2.5 and HADOOP-9361/HADOOP-9597. Hadoop 2.4's s3n is broken in other ways too; look at HADOOP-10457.

To summarise: don't use s3n in Hadoop 2.4; it was the first update to a later Jets3t library and was under-tested. 2.5 fixed it, and 2.6.0 added s3a, though that wasn't ready for use until 2.7.

Best to do a check for existence up front (getFileStatus()), which works everywhere.
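
As a minimal sketch of that up-front probe, assuming the caller already has a Hadoop `FileSystem` handle (the helper name and the rename wrapper are illustrative, not code from this PR):

```scala
import java.io.FileNotFoundException

import org.apache.hadoop.fs.{FileSystem, Path}

// Illustrative helper: probe the destination with getFileStatus() before
// renaming, rather than trusting rename()'s bare boolean on old s3n.
def renameIfDestAbsent(fs: FileSystem, src: Path, dest: Path): Boolean = {
  val destExists =
    try {
      fs.getFileStatus(dest) // throws FileNotFoundException when absent
      true
    } catch {
      case _: FileNotFoundException => false
    }
  // Only attempt the rename when the destination is known to be absent.
  !destExists && fs.rename(src, dest)
}
```

`FileSystem.exists()` is just a convenience wrapper around the same `getFileStatus()` probe, so either form behaves consistently across HDFS and the object-store clients.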