Github user srowen commented on a diff in the pull request:
    --- Diff: docs/ ---
    @@ -644,17 +644,90 @@ methods for creating DStreams from files as input 
    -   Spark Streaming will monitor the directory `dataDirectory` and process 
any files created in that directory (files written in nested directories not 
supported). Note that
    +   Spark Streaming will monitor the directory `dataDirectory` and process 
any files created in that directory.
    +     + The files must have the same data format.
    +     + A simple directory can be monitored; all files directly under
    +       that path will be processed as they are discovered.
    +     + A POSIX glob pattern can be supplied, such as
    +       `hdfs://namenode:8040/logs/2016-??-31`.
    +       Here, the DStream will consist of all files directly under the
    +       directories matching the pattern.
    +       That is: it is a pattern of directories, not of files in directories.
    +     + A file is considered part of a time period based on its
    +       modification time, not its creation time.
    +     + Files must be created in/moved under the `dataDirectory`
    +       directory/directories by an atomic operation. In HDFS and similar
    +       filesystems, this can be done by *renaming* them into the data
    +       directory from another part of the same filesystem.
    +     + If a wildcard is used to identify directories, renaming an entire
    +       directory to match the path will add the directory to the list of
    +       monitored directories. Only the files in the directory whose
    +       modification time is within the current window will be included
    +       in the stream.
    +     + Once processed, changes to a file within the current window will 
not cause the file to be reread.
    +       That is: *updates are ignored*.
    +     + The more files under a directory/wildcard pattern, the longer it
    +       will take to scan for changes, even if no files have actually
    +       changed.
    +     + Calling `FileSystem.setTimes()` to fix the timestamp is a way to 
have the file picked
    +       up in a later window, even if its contents have not changed.
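The `FileSystem.setTimes()` trick above can be sketched on a local filesystem with the JDK's NIO API, which plays the role of Hadoop's `FileSystem.setTimes()` here; this is an illustration of the idea, not HDFS itself, and the timestamp value is arbitrary.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

public class TouchDemo {
    public static void main(String[] args) throws IOException {
        // A file whose contents never change after being written once.
        Path file = Files.createTempFile("events-", ".log");
        Files.write(file, "unchanged payload".getBytes());

        // Explicitly set the modification time (a whole-second value, so
        // it survives filesystems with coarse timestamp granularity).
        // A scanner that keys on modification time would now place the
        // file in the window covering this timestamp, even though its
        // contents have not changed.
        FileTime pinned = FileTime.fromMillis(1_700_000_000_000L);
        Files.setLastModifiedTime(file, pinned);

        System.out.println(Files.getLastModifiedTime(file).toMillis());
        Files.deleteIfExists(file);
    }
}
```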
    -     + The files must have the same data format.
    -     + The files must be created in the `dataDirectory` by atomically 
*moving* or *renaming* them into
    -     the data directory.
    -     + Once moved, the files must not be changed. So if the files are 
being continuously appended, the new data will not be read.
        For simple text files, there is an easier method 
`streamingContext.textFileStream(dataDirectory)`. File streams do not 
require running a receiver, so they do not require allocating cores.
        <span class="badge" style="background-color: grey">Python API</span> 
`fileStream` is not available in the Python API; only `textFileStream` is 
available.
    +    Special points for HDFS
    +    The HDFS filesystem does not update the modification time of a file
    +    while it is being written to. Specifically:
    +    + `FileSystem.create()`: a zero-byte file is listed; its creation
    +      and modification times are set to the current time as seen on the
    +      NameNode.
    +    + Writes to a file via the output stream returned by the `create()`
    +      call: the modification time *does not change*.
    +    + When `OutputStream.close()` is called, all remaining data is
    +      written, the file is closed and the NameNode is updated with the
    +      final size of the file. The modification time is set to the time
    +      the file was closed.
    +    + A file opened for appends via an `append()` operation does not
    +      change the modification time of the file until the `close()` call
    +      is made on the output stream.
    +    + `FileSystem.setTimes()` can be used to explicitly set the time on
    +      a file.
    +    + The rarely used operations `FileSystem.concat()`,
    +      `createSnapshot()`, `createSymlink()` and `truncate()` all update
    +      the modification time.
    +    Together, this means that when a file is opened, even before data
    +    has been completely written, it may be included in the DStream,
    +    after which updates to the file within the same window will be
    +    ignored. That is: changes may be missed, and data omitted from the
    +    stream.
    +    To guarantee that changes are picked up in a window, write the file
    +    to an unmonitored directory, then immediately after the output stream 
is closed,
    +    rename it into the destination directory. 
    +    Provided the renamed file appears in the scanned destination directory 
during the window
    +    of its creation, the new data will be picked up.
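The write-then-rename recipe above can be sketched on a local filesystem with the JDK's NIO API; `Files.move` with `ATOMIC_MOVE` stands in for HDFS's `FileSystem.rename()`, and the directory and file names are illustrative only.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicDrop {
    public static void main(String[] args) throws IOException {
        Path staging = Files.createTempDirectory("staging");     // not scanned
        Path monitored = Files.createTempDirectory("monitored"); // scanned by the stream

        // 1. Write the complete file somewhere the stream does not scan.
        Path tmp = staging.resolve("batch-0001.txt");
        Files.write(tmp, "line1\nline2\n".getBytes());

        // 2. Immediately after the output stream is closed, move the file
        //    into the monitored directory in a single atomic operation, so
        //    the scanner can never observe a half-written file.
        Path dst = monitored.resolve("batch-0001.txt");
        Files.move(tmp, dst, StandardCopyOption.ATOMIC_MOVE);

        System.out.println(Files.exists(dst) && !Files.exists(tmp));
    }
}
```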
    +    Object stores have a different set of limitations.
    +     + Wildcard directory enumeration can be very slow, especially if
    +       there are many directories or files to scan.
    +     + The file only becomes visible at the end of the write operation;
    +       this also defines the creation time of the file.
    +     + A file's modification time is always the same as its creation time.
    +     + The `FileSystem.setTimes()` command to set file timestamps will
    +       generally be ignored.
    +     + `FileSystem.rename(file)` of a single file will update the 
modification time.
    +       The time to rename a file is generally `O(length(file))`: the 
bigger the file, the longer
    +       it takes. 
    +     + `FileSystem.rename(directory)` is not atomic, and slower the more 
data there is to rename. 
    +       It may update the modification times of renamed files.
    +     + Object creation directly through a PUT operation is atomic,
    +       irrespective of the programming language or library used to
    +       upload the data.
    +     + Writing a file to an object store using Hadoop's APIs is an
    +       atomic operation: the object is only created via a PUT operation
    +       in the final `OutputStream.close()` call.
    +    Applications using an object store as the direct destination of data
    +    should use PUT operations to directly publish data for a DStream to
    +    pick up, even though this is not the mechanism to use when writing
    +    to HDFS or other filesystems. When using the Hadoop FileSystem API
    +    in Spark, that means: write the data directly to the target
    +    directory, knowing that the final `close()` call will make the file
    +    visible and set its creation and modification times.
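To make the scanning semantics concrete, here is an illustrative local-filesystem sketch of modification-time windowing with the JDK's NIO API. It is not Spark's actual implementation, just the same idea: a scan of a directory picks up only the files whose modification time falls inside the current window; the timestamps and file names are made up for the example.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.util.List;
import java.util.stream.Collectors;

public class WindowScan {
    // Return files in dir whose modification time falls inside [start, end).
    static List<Path> filesInWindow(Path dir, long start, long end) throws IOException {
        try (var paths = Files.list(dir)) {
            return paths.filter(p -> {
                try {
                    long m = Files.getLastModifiedTime(p).toMillis();
                    return m >= start && m < end;
                } catch (IOException e) {
                    return false;
                }
            }).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("scan");
        Path oldFile = dir.resolve("old.log");
        Path newFile = dir.resolve("new.log");
        Files.write(oldFile, "old".getBytes());
        Files.write(newFile, "new".getBytes());

        // Pin mtimes so window membership is deterministic.
        Files.setLastModifiedTime(oldFile, FileTime.fromMillis(1_000_000_000_000L));
        Files.setLastModifiedTime(newFile, FileTime.fromMillis(1_000_000_060_000L));

        // A window covering only the later minute picks up new.log alone;
        // old.log's contents are still there but its mtime is outside the window.
        List<Path> batch = filesInWindow(dir, 1_000_000_030_000L, 1_000_000_090_000L);
        System.out.println(batch.size() + " " + batch.get(0).getFileName());
    }
}
```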
    --- End diff --
    This is a lot of prose about HDFS semantics. Some of this seems like it's 
not specific to Spark's usage, and at best should be pointed to from these docs 
rather than repeated. What about these semantics are necessary to use this API?
