felixcheung commented on a change in pull request #24548: [MINOR][SS][DOC] Added missing config `maxFileAge` in file streaming source
URL: https://github.com/apache/spark/pull/24548#discussion_r282334994
 
 

 ##########
 File path: docs/structured-streaming-programming-guide.md
 ##########
 @@ -510,8 +510,7 @@ returned by `SparkSession.readStream()`. In [R](api/R/read.stream.html), with th
 #### Input Sources
 There are a few built-in sources.
 
-  - **File source** - Reads files written in a directory as a stream of data. Supported file formats are text, csv, json, orc, parquet. See the docs of the DataStreamReader interface for a more up-to-date list, and supported options for each file format. Note that the files must be atomically placed in the given directory, which in most file systems, can be achieved by file move operations.
-
+  - **File source** - Reads files written in a directory as a stream of data. Files will be processed in the order of file modification time. If `latestFirst` is set, order will be reversed. Supported file formats are text, CSV, JSON, ORC, Parquet. See the docs of the DataStreamReader interface for a more up-to-date list, and supported options for each file format. Note that the files must be atomically placed in the given directory, which in most file systems, can be achieved by file move operations.
 
 Review comment:
   Why are `text, CSV, JSON, ORC, Parquet` capitalized? I thought file format names were lowercased.
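
For reference (not part of the PR diff), here is a minimal sketch of how the options discussed in this change would be passed to `DataStreamReader` for a file source. The input path, schema, and option values are illustrative assumptions; only `latestFirst` and `maxFileAge` correspond to the options named in this PR.

```scala
// Sketch of configuring a file streaming source with the options mentioned
// above. Assumes a CSV input directory at /tmp/streaming-input (hypothetical).
import org.apache.spark.sql.SparkSession

object FileSourceSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("file-source-sketch")
      .getOrCreate()

    // File sources require a user-specified schema unless streaming schema
    // inference is explicitly enabled.
    val input = spark.readStream
      .format("csv")
      .schema("name STRING, value INT")
      .option("header", "true")
      .option("latestFirst", "true")  // process the newest files first
      .option("maxFileAge", "1d")     // ignore files older than one day
      .load("/tmp/streaming-input")   // hypothetical directory

    // Write the stream to the console sink just to demonstrate the pipeline.
    val query = input.writeStream
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```

As with batch file sources, these settings are plain string key/value options on the reader; only the schema call differs in that streaming file sources require one up front.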
