Documentation for FileSplitter, BlockReader and FileOutput operators.

Project: http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/repo
Commit: 
http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/commit/afbcfc21
Tree: http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/tree/afbcfc21
Diff: http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/diff/afbcfc21

Branch: refs/heads/devel-3
Commit: afbcfc21beb5c25735c7ef64adad302afbbc4ef3
Parents: 7b1a757
Author: Chandni Singh <[email protected]>
Authored: Mon Nov 9 18:48:45 2015 -0800
Committer: Thomas Weise <[email protected]>
Committed: Fri Mar 11 19:22:33 2016 -0800

----------------------------------------------------------------------
 docs/operators/block_reader.md                  | 226 +++++++++++++++++++
 docs/operators/file_output.md                   | 180 +++++++++++++++
 docs/operators/file_splitter.md                 | 163 +++++++++++++
 .../images/blockreader/classdiagram.png         | Bin 0 -> 48613 bytes
 .../images/blockreader/flowdiagram.png          | Bin 0 -> 48160 bytes
 .../images/blockreader/fsreaderexample.png      | Bin 0 -> 29927 bytes
 .../blockreader/totalBacklogProcessing.png      | Bin 0 -> 55944 bytes
 .../images/fileoutput/FileRotation.png          | Bin 0 -> 26067 bytes
 docs/operators/images/fileoutput/diagram1.png   | Bin 0 -> 30754 bytes
 .../images/filesplitter/baseexample.png         | Bin 0 -> 14493 bytes
 .../images/filesplitter/classdiagram.png        | Bin 0 -> 14513 bytes
 .../images/filesplitter/inputexample.png        | Bin 0 -> 16012 bytes
 docs/operators/images/filesplitter/sequence.png | Bin 0 -> 17020 bytes
 13 files changed, 569 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/blob/afbcfc21/docs/operators/block_reader.md
----------------------------------------------------------------------
diff --git a/docs/operators/block_reader.md b/docs/operators/block_reader.md
new file mode 100644
index 0000000..9b7628a
--- /dev/null
+++ b/docs/operators/block_reader.md
@@ -0,0 +1,226 @@
+Block Reader
+=============
+
+This is a scalable operator that reads and parses blocks of data sources into records. A data source can be a file or a message bus that contains records, and a block is a chunk of data in the source, defined by an offset into the source and a length.
+
+## Why is it needed?
+
+A Block Reader is needed to parallelize reading and parsing of a single data source, for example a file. Simple parallelism can be achieved by multiple partitions reading different sources of the same type (for files see [AbstractFileInputOperator](https://github.com/apache/incubator-apex-malhar/blob/devel-3/library/src/main/java/com/datatorrent/lib/io/fs/AbstractFileInputOperator.java)), but Block Reader partitions can read blocks of the same source in parallel and parse them for records, ensuring that no record is duplicated or missed.
+
+## Class Diagram
+
+![BlockReader class diagram](images/blockreader/classdiagram.png)
+
+## AbstractBlockReader
+This is the abstract implementation that serves as the base for different types of data sources. It defines how block metadata is processed. The flow diagram below describes this processing.
+
+![BlockReader flow diagram](images/blockreader/flowdiagram.png)
+
+### Ports
+
+- blocksMetadataInput: input port on which block metadata are received.
+
+- blocksMetadataOutput: output port on which block metadata are emitted if the port is connected. This port is useful when a downstream operator that receives records from the block reader is also interested in the details of the corresponding blocks.
+
+- messages: output port on which tuples of type 
`com.datatorrent.lib.io.block.AbstractBlockReader.ReaderRecord` are emitted. 
This class encapsulates a `record` and the `blockId` of the corresponding block.
+
+### readerContext
+
+This is one of the most important fields in the block reader. It is of type `com.datatorrent.lib.io.block.ReaderContext` and is responsible for fetching the bytes that make up a record. It also lets the reader know how many total bytes were consumed, which may not equal the bytes in the record itself because the consumed bytes also include bytes for the record delimiter, which may not be part of the actual record.
+ 
+Once the reader creates an input stream for the block (or reuses the previously opened stream if the current block is the successor of the previous block), it initializes the reader context by invoking `readerContext.initialize(stream, blockMetadata, consecutiveBlock);`. The initialize method is where any implementation of `ReaderContext` can perform the operations that have to be executed just before reading the block, or create state that is used during the lifetime of reading the block.
+
+Once the initialization is done, `readerContext.next()` is called repeatedly until it returns `null`. It is left to the `ReaderContext` implementations to decide when a block is completely processed. In cases when a record is split across adjacent blocks, the reader context may decide to read ahead of the current block boundary to completely fetch the split record (examples: `LineReaderContext` and `ReadAheadLineReaderContext`). In other cases, when there is no possibility of a split record (example: `FixedBytesReaderContext`), it returns `null` as soon as the block boundary is reached. `readerContext.next()` returns a `com.datatorrent.lib.io.block.ReaderContext.Entity`, which is just a wrapper for a `byte[]` that represents the record and the total bytes used in fetching it.
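+
+The snippet below is a simplified, hypothetical sketch of this read loop inside the operator; the method name `readBlock` and the accessors on `Entity` and `ReaderRecord` are assumed for illustration and may not match the actual implementation.
+
+```java
+//Simplified sketch of the per-block processing loop; names are illustrative.
+protected void readBlock(BlockMetadata blockMetadata) throws IOException
+{
+  //prepare the reader context for this block (stream and consecutiveBlock are operator fields)
+  readerContext.initialize(stream, blockMetadata, consecutiveBlock);
+  ReaderContext.Entity entity;
+  while ((entity = readerContext.next()) != null) {
+    //convert the fetched bytes into a record; a null record is treated as invalid and skipped
+    R record = convertToRecord(entity.getRecord());
+    if (record != null) {
+      //argument order of the ReaderRecord constructor is assumed
+      messages.emit(new ReaderRecord<>(blockMetadata.getBlockId(), record));
+    }
+  }
+}
+```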
+
+### Abstract methods
+
+- `STREAM setupStream(B block)`: creating a stream for a block depends on the type of source, which is not known to AbstractBlockReader. Sub-classes that deal with a specific data source provide this implementation.
+
+- `R convertToRecord(byte[] bytes)`<a name="convertToRecord"></a>: this converts the array of bytes into an instance of the actual record type.
+
+### Auto-scalability
+
+Block reader can auto-scale, that is, depending on the backlog (the total number of blocks waiting in the `blocksMetadataInput` port queue across all partitions) it can create more partitions or remove existing ones. Details are discussed in the last section, which covers the [partitioner and stats-listener](#partitioning).
+
+### Configuration
+
+1.  <a name="maxReaders"></a>**maxReaders**: when auto-scaling is enabled, this controls the maximum number of block reader partitions that can be created.
+2. <a name="minReaders"></a>**minReaders**: when auto-scaling is enabled, this controls the minimum number of block reader partitions that should always exist.
+3. <a name="collectStats"></a>**collectStats**: this enables or disables auto-scaling. When it is set to `true`, the stats (number of blocks in the queue) are collected, which triggers partitioning; otherwise auto-scaling is disabled.
+4. **intervalMillis**: when auto-scaling is enabled, this specifies the interval, in milliseconds, at which the reader triggers the logic of computing the backlog and auto-scaling. A sample configuration is shown after this list.
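+
+For example, auto-scaling could be configured in a properties file along the following lines; the operator name `Block-reader` and all values are illustrative, not defaults.
+
+```xml
+  <property>
+    <name>dt.operator.Block-reader.prop.collectStats</name>
+    <value>true</value>
+  </property>
+  <property>
+    <name>dt.operator.Block-reader.prop.minReaders</name>
+    <value>2</value>
+  </property>
+  <property>
+    <name>dt.operator.Block-reader.prop.maxReaders</name>
+    <value>16</value>
+  </property>
+  <property>
+    <name>dt.operator.Block-reader.prop.intervalMillis</name>
+    <value>60000</value>
+  </property>
+```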
+
+## <a name="AbstractFSBlockReader"></a> AbstractFSBlockReader
+
+This abstract implementation deals with files. Different types of file systems that are implementations of `org.apache.hadoop.fs.FileSystem` are supported. The user can override the `getFSInstance()` method to create an instance of a specific `FileSystem`. By default, the filesystem instance is created from the filesystem URI that comes from the default Hadoop configuration.
+
+```java
+protected FileSystem getFSInstance() throws IOException
+{
+  return FileSystem.newInstance(configuration);
+}
+```
+It uses this filesystem instance to set up a stream of type `org.apache.hadoop.fs.FSDataInputStream` to read the block.
+
+```java
+@Override
+protected FSDataInputStream setupStream(BlockMetadata.FileBlockMetadata block) 
throws IOException
+{
+  return fs.open(new Path(block.getFilePath()));
+}
+```
+All the ports and configurations are derived from the super class. It doesn't provide an implementation of the [`convertToRecord(byte[] bytes)`](#convertToRecord) method, which is delegated to concrete sub-classes.
+
+### Example Application
+This simple DAG demonstrates how any concrete implementation of `AbstractFSBlockReader` can be plugged into an application.
+
+![Application with FSBlockReader](images/blockreader/fsreaderexample.png)
+
+In the above application, the file splitter creates block metadata for files, which are sent to the block reader. Partitions of the block reader parse the file blocks for records, which are filtered, transformed and then persisted to a file (created per block). The block reader is therefore parallel-partitioned with the two downstream operators: the filter/converter and the record output operator. The code that implements this DAG is shown below.
+
+```java
+public class ExampleApplication implements StreamingApplication
+{
+  @Override
+  public void populateDAG(DAG dag, Configuration configuration)
+  {
+    FileSplitterInput input = dag.addOperator("File-splitter", new 
FileSplitterInput());
+    //any concrete implementation of AbstractFSBlockReader based on the 
use-case can be added here.
+    LineReader blockReader = dag.addOperator("Block-reader", new LineReader());
+    Filter filter = dag.addOperator("Filter", new Filter());
+    RecordOutputOperator recordOutputOperator = 
dag.addOperator("Record-writer", new RecordOutputOperator());
+
+    dag.addStream("file-block metadata", input.blocksMetadataOutput, 
blockReader.blocksMetadataInput);
+    dag.addStream("records", blockReader.messages, filter.input);
+    dag.addStream("filtered-records", filter.output, 
recordOutputOperator.input);
+  }
+
+  /**
+   * Concrete implementation of {@link AbstractFSBlockReader} for which a 
record is a line in the file.
+   */
+  public static class LineReader extends 
AbstractFSBlockReader.AbstractFSReadAheadLineReader<String>
+  {
+
+    @Override
+    protected String convertToRecord(byte[] bytes)
+    {
+      return new String(bytes);
+    }
+  }
+
+  /**
+   * Considers any line starting with a '.' as invalid. Emits the valid 
records.
+   */
+  public static class Filter extends BaseOperator
+  {
+    public final transient 
DefaultOutputPort<AbstractBlockReader.ReaderRecord<String>> output = new 
DefaultOutputPort<>();
+    public final transient 
DefaultInputPort<AbstractBlockReader.ReaderRecord<String>> input = new 
DefaultInputPort<AbstractBlockReader.ReaderRecord<String>>()
+    {
+      @Override
+      public void process(AbstractBlockReader.ReaderRecord<String> 
stringRecord)
+      {
+        //filter records and transform
+        //if the string starts with a '.' ignore the string.
+        if (!StringUtils.startsWith(stringRecord.getRecord(), ".")) {
+          output.emit(stringRecord);
+        }
+      }
+    };
+  }
+
+  /**
+   * Persists the valid records to corresponding block files.
+   */
+  public static class RecordOutputOperator extends 
AbstractFileOutputOperator<AbstractBlockReader.ReaderRecord<String>>
+  {
+    @Override
+    protected String getFileName(AbstractBlockReader.ReaderRecord<String> 
tuple)
+    {
+      return Long.toHexString(tuple.getBlockId());
+    }
+
+    @Override
+    protected byte[] getBytesForTuple(AbstractBlockReader.ReaderRecord<String> 
tuple)
+    {
+      return tuple.getRecord().getBytes();
+    }
+  }
+}
+```
+Configuration to parallel-partition the block reader with its downstream operators:
+
+```xml
+  <property>
+    <name>dt.operator.Filter.port.input.attr.PARTITION_PARALLEL</name>
+    <value>true</value>
+  </property>
+  <property>
+    <name>dt.operator.Record-writer.port.input.attr.PARTITION_PARALLEL</name>
+    <value>true</value>
+  </property>
+```
+
+## AbstractFSReadAheadLineReader
+
+This extension of [`AbstractFSBlockReader`](#AbstractFSBlockReader) parses 
lines from a block and binds the `readerContext` field to an instance of 
`ReaderContext.ReadAheadLineReaderContext`.
+
+It is abstract because it doesn't provide an implementation of 
[`convertToRecord(byte[] bytes)`](#convertToRecord) since the user may want to 
convert the bytes that make a line into some other type. 
+
+### ReadAheadLineReaderContext
+
+In order to handle a line split across adjacent blocks, `ReadAheadLineReaderContext` always reads beyond the block boundary and, for every block except the first block of the file, ignores the bytes up to the first end-of-line character. This ensures that no line is missed or incomplete.
+
+This is one of the most common ways of handling a split record. It doesn't 
require any further information to decide if a line is complete. However, the 
cost of this consistent way to handle a line split is that it always reads from 
the next block.
+
+## AbstractFSLineReader
+
+Like `AbstractFSReadAheadLineReader`, this operator also parses lines from a block. However, it binds the `readerContext` field to an instance of `ReaderContext.LineReaderContext`.
+
+### LineReaderContext
+
+This handles a line split differently from `ReadAheadLineReaderContext`: it doesn't always read from the next block. If the end of the last line is aligned with the block boundary then it stops processing the block; it reads from the next block only when the boundaries are not aligned, that is, when the last line extends beyond the block boundary. The result is an inconsistency in how the next block is read, as explained below.
+
+When the last line of the previous block was aligned with that block's boundary, the first line of the current block is a valid line. In the other case, however, the bytes from the block's start offset up to the first end-of-line character should be ignored. This means that any record formed by this reader context has to be validated. For example, if the lines are of fixed size then the size of each record can be validated, or if each line begins with a special field then that knowledge can be used to check whether a record is complete.
+
+If the validation of completeness fails for a line, then [`convertToRecord(byte[] bytes)`](#convertToRecord) should return `null`.
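+
+As an illustration, a hypothetical concrete reader whose lines are known to start with a fixed marker could validate records as follows; the class name and the `REC|` marker are made up for this example.
+
+```java
+public static class MarkerLineReader extends AbstractFSBlockReader.AbstractFSLineReader<String>
+{
+  @Override
+  protected String convertToRecord(byte[] bytes)
+  {
+    String line = new String(bytes);
+    //a line is considered complete only if it begins with the expected marker;
+    //returning null discards the incomplete fragment produced by a misaligned block start.
+    return line.startsWith("REC|") ? line : null;
+  }
+}
+```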
+
+## FSSliceReader
+
+A concrete extension of [`AbstractFSBlockReader`](#AbstractFSBlockReader) that 
reads fixed-size `byte[]` from a block and emits the byte array wrapped in 
`com.datatorrent.netlet.util.Slice`.
+
+This operator binds the `readerContext` to an instance of 
`ReaderContext.FixedBytesReaderContext`.
+
+### FixedBytesReaderContext
+
+This implementation of `ReaderContext` never reads beyond a block boundary, which can result in the last `byte[]` of a block being shorter than the rest of the records.
+
+### Configuration
+
+**readerContext.length**: length of each record. By default, this is initialized to the default HDFS block size.
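+
+For example, to read fixed 1 MB slices, the record length could be set in a properties file as shown below; the operator name `Block-reader` is assumed.
+
+```xml
+  <property>
+    <name>dt.operator.Block-reader.prop.readerContext.length</name>
+    <value>1048576</value>
+  </property>
+```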
+
+## Partitioner and StatsListener
+
+The logical instance of the block reader acts as the Partitioner (unless a custom partitioner is set using the operator attribute `PARTITIONER`) as well as a StatsListener. This is because `AbstractBlockReader` implements both the `com.datatorrent.api.Partitioner` and `com.datatorrent.api.StatsListener` interfaces and provides implementations of `definePartitions(...)` and `processStats(...)`, which make it auto-scalable.
+
+### processStats <a name="processStats"></a>
+
+The application master invokes `Response processStats(BatchedOperatorStats 
stats)` method on the logical instance with the stats (`tuplesProcessedPSMA`, 
`tuplesEmittedPSMA`, `latencyMA`, etc.) of each partition. The data which this 
operator is interested in is the `queueSize` of the input port 
`blocksMetadataInput`.
+
+Usually the `queueSize` of an input port gives the count of waiting control 
tuples plus data tuples. However, if a stats listener is interested only in the 
count of data tuples then that can be expressed by annotating the class with 
`@DataQueueSize`. In this case `AbstractBlockReader` itself is the 
`StatsListener` which is why it is annotated with `@DataQueueSize`.
+
+The logical instance caches the queue size per partition and at regular 
intervals (configured by `intervalMillis`) sums these values to find the total 
backlog which is then used to decide whether re-partitioning is needed. The 
flow-diagram below describes this logic.
+
+![Processing of total-backlog](images/blockreader/totalBacklogProcessing.png)
+
+The goal of this logic is to create as many partitions as needed, within bounds (see [`maxReaders`](#maxReaders) and [`minReaders`](#minReaders) above), to quickly reduce this backlog, or, if the backlog is small, to remove any idle partitions.
+
+### definePartitions
+
+Based on the `repartitionRequired` field of the `Response` object that is returned by the *[processStats](#processStats)* method, the application master invokes
+
+```java
+Collection<Partition<AbstractBlockReader<...>>> 
definePartitions(Collection<Partition<AbstractBlockReader<...>>> partitions, 
PartitioningContext context)
+```
+on the logical instance, which is also the partitioner instance. The implementation calculates the difference between the required number of partitions and the existing count. If this difference is negative, an equivalent number of partitions is removed; otherwise new partitions are created.
+
+Please note that auto-scaling can be disabled by setting [`collectStats`](#collectStats) to `false`. If the use-case requires only static partitioning, then that can be achieved by setting a [`StatelessPartitioner`](https://github.com/chandnisingh/incubator-apex-core/blob/master/common/src/main/java/com/datatorrent/common/partitioner/StatelessPartitioner.java) as the operator attribute `PARTITIONER` on the block reader.
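+
+For example, auto-scaling could be switched off and a fixed number of reader partitions requested from the properties file; the operator name `Block-reader`, the partition count and the string form of the attribute value are illustrative.
+
+```xml
+  <property>
+    <name>dt.operator.Block-reader.prop.collectStats</name>
+    <value>false</value>
+  </property>
+  <property>
+    <name>dt.operator.Block-reader.attr.PARTITIONER</name>
+    <value>com.datatorrent.common.partitioner.StatelessPartitioner:4</value>
+  </property>
+```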

http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/blob/afbcfc21/docs/operators/file_output.md
----------------------------------------------------------------------
diff --git a/docs/operators/file_output.md b/docs/operators/file_output.md
new file mode 100644
index 0000000..81f9482
--- /dev/null
+++ b/docs/operators/file_output.md
@@ -0,0 +1,180 @@
+AbstractFileOutputOperator
+===========================
+
+The abstract file output operator in the Apache Apex Malhar library, [`AbstractFileOutputOperator`](https://github.com/apache/incubator-apex-malhar/blob/devel-3/library/src/main/java/com/datatorrent/lib/io/fs/AbstractFileOutputOperator.java), writes streaming data to files. The main features of this operator are:
+
+1. Persisting data to files.
+2. Automatic rotation of files based on:  
+  a. maximum length of a file.  
+  b. time-based rotation where time is specified using a count of application 
windows.
+3. Fault-tolerance.
+4. Compression and encryption of data before it is persisted.
+
+In this tutorial we will cover the details of the basic structure and 
implementation of all the above features in `AbstractFileOutputOperator`. 
Configuration items related to each feature are discussed as they are 
introduced in the section of that feature.
+
+## Persisting data to files
+The principal function of this operator is to persist tuples to files 
efficiently. These files are created under a specific directory on the file 
system. The relevant configuration item is:
+
+**filePath**: path specifying the directory where files are written.
+
+Different types of file systems that are implementations of `org.apache.hadoop.fs.FileSystem` are supported. The file system instance used for creating streams is constructed from the `filePath` URI.
+
+```java
+FileSystem.newInstance(new Path(filePath).toUri(), new Configuration())
+```
+
+Tuples may belong to different files; therefore, expensive I/O operations like creating multiple output streams, flushing data to disk, and closing streams are handled carefully.
+
+### Ports
+- `input`: the input port on which tuples to be persisted are received.
+
+### `streamsCache`
+This transient state caches output streams per file in memory. The file to which the data is appended may change with incoming tuples. It would be highly inefficient to keep re-opening streams for a file just because tuples for that file are interleaved with tuples for another file. Therefore, the operator maintains a cache of limited size with open output streams.
+
+`streamsCache` is of type `com.google.common.cache.LoadingCache`. A `LoadingCache` has an attached `CacheLoader`, which is responsible for loading the value of a key when the key is not present in the cache. Details are explained here: [CachesExplained](https://github.com/google/guava/wiki/CachesExplained).
+
+The operator constructs this cache in `setup(...)`. It is built with the 
following configuration items:
+
+- **maxOpenFiles**: maximum size of the cache. The cache evicts entries that haven't been used recently when the cache size approaches this limit. *Default*: 100
+- **expireStreamAfterAccessMillis**: expires streams after the specified duration has passed since the stream was last accessed. *Default*: value of the attribute `OperatorContext.SPIN_MILLIS`.
+
+An important point to note here is that the Guava cache does not perform cleanup and evict values asynchronously, that is, instantly after a value expires. Instead, it performs small amounts of maintenance during write operations, or during occasional read operations if writes are rare.
+
+#### CacheLoader
+`streamsCache` is created with a `CacheLoader` that opens an 
`FSDataOutputStream` for a file which is not in the cache. The output stream is 
opened in either `append` or `create` mode and the basic logic to determine 
this is explained by the simple diagram below.
+
+![Opening an output stream](images/fileoutput/diagram1.png)
+
+This process gets more complicated when fault-tolerance (writing to temporary files) and rotation are added.
+
+Following are a few configuration items used for opening the streams:
+
+- **replication**: specifies the replication factor of the output files. *Default*: `fs.getDefaultReplication(new Path(filePath))`
+- **filePermission**: specifies the permission of the output files. The permission is an octal number similar to that used by the Unix chmod command. *Default*: 0777. A sample configuration of these and the cache-related properties appears after this list.
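+
+A sample properties-file configuration covering the items above; the operator name `Record-writer` and all values are illustrative.
+
+```xml
+  <property>
+    <name>dt.operator.Record-writer.prop.filePath</name>
+    <value>/user/output/records</value>
+  </property>
+  <property>
+    <name>dt.operator.Record-writer.prop.maxOpenFiles</name>
+    <value>100</value>
+  </property>
+  <property>
+    <name>dt.operator.Record-writer.prop.replication</name>
+    <value>3</value>
+  </property>
+  <property>
+    <name>dt.operator.Record-writer.prop.filePermission</name>
+    <value>0777</value>
+  </property>
+```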
+
+#### RemovalListener
+A Guava cache also allows specifying a removal listener, which can perform some operation when an entry is removed from the cache. Since `streamsCache` is of limited size and also has time-based expiry enabled, it is imperative that a stream evicted from the cache is closed properly. Therefore, we attach a removal listener to `streamsCache` that closes the stream when it is evicted.
+
+### `setup(OperatorContext context)`
+During setup the following main tasks are performed:
+
+1. FileSystem instance is created.
+2. The cache of streams is created.
+3. Files are recovered (see Fault-tolerance section).
+4. Stray part files are cleaned (see Automatic rotation section).
+
+### <a name="processTuple"></a>`processTuple(INPUT tuple)`
+The code snippet below highlights the basic steps of processing a tuple.
+
+```java
+protected void processTuple(INPUT tuple)
+{  
+  //which file to write to is derived from the tuple.
+  String fileName = getFileName(tuple);  
+
+  //streamsCache is queried for the output stream. If the stream is already 
opened then it is returned immediately otherwise the cache loader creates one.
+  FilterOutputStream fsOutput = streamsCache.get(fileName).getFilterStream();
+
+  byte[] tupleBytes = getBytesForTuple(tuple);
+
+  fsOutput.write(tupleBytes);
+}
+```
+
+### <a name="endWindow"></a>endWindow()
+It should be noted that while processing a tuple we do not flush the stream 
after every write. Since flushing is expensive it is done periodically for all 
the open streams in the operator's `endWindow()`.
+
+```java
+Map<String, FSFilterStreamContext> openStreams = streamsCache.asMap();
+for (FSFilterStreamContext streamContext: openStreams.values()) {
+  ...
+  //this flushes the stream
+  streamContext.finalizeContext();
+  ...
+}
+```
+`FSFilterStreamContext` will be explained with compression and encryption.
+
+### <a name="teardown"></a>teardown()
+When any operator in a DAG fails, the application master invokes `teardown()` for that operator and its downstream operators. In `AbstractFileOutputOperator` we have a number of open streams in the cache, and the operator (acting as an HDFS client) holds leases for all the corresponding files. It is important to release these leases for clean re-deployment. Therefore, we try to close all the open streams in `teardown()`.
+
+## Automatic rotation
+
+In a streaming application where data is being continuously processed, when 
this output operator is used, data will be continuously written to an output 
file. The users may want to be able to take the data from time to time to use 
it, copy it out of Hadoop or do some other processing. Having all the data in a 
single file makes it difficult as the user needs to keep track of how much data 
has been read from the file each time so that the same data is not read again. 
Also users may already have processes and scripts in place that work with full 
files and not partial data from a file.
+
+To help solve these problems the operator supports creating many smaller files instead of writing to just one big file. Data is written to a file and, when some condition is met, the file is finalized and data is written to a new file. This is called file rotation. The user can determine when the file gets rotated. Each of these files is called a part file, as each contains a portion of the data.
+
+### Part filename
+
+The filename for a part file is formed by using the original file name and the part number. The part number starts from 0 and is incremented each time a new part file is created. Assuming origfile represents the original filename and partnum represents the part number, the default filename has the format
+
+`origfile.partnum`
+
+This naming scheme can be changed by the user by overriding the following method:
+
+```java
+protected String getPartFileName(String fileName, int part)
+```
+
+This method is passed the original filename and part number as arguments and 
should return the part filename.
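+
+For example, a hypothetical override that places the part number before a fixed extension might look like this:
+
+```java
+@Override
+protected String getPartFileName(String fileName, int part)
+{
+  //produces names such as "events_0.txt", "events_1.txt", ...
+  return fileName + "_" + part + ".txt";
+}
+```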
+
+### Mechanisms
+
+The user has a couple of ways to specify when a file gets rotated: based on size or based on time. In the first case the files are limited by size; in the second they are rotated periodically by time.
+
+#### Size Based
+
+With size based rotation the user specifies a size limit. Once the size of the current file reaches this limit, the file is rotated. The size limit can be specified by setting the following property
+
+`maxLength`
+
+Like any other property, this can be set in Java application code or in the properties file, as illustrated below.
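+
+For instance, a 128 MB limit could be set in either of the following ways; the bean setter name `setMaxLength` and the operator name `Record-writer` are assumed.
+
+```java
+//rotate the file once roughly 128 MB have been written to it
+fileOutput.setMaxLength(134217728L);
+```
+
+```xml
+  <property>
+    <name>dt.operator.Record-writer.prop.maxLength</name>
+    <value>134217728</value>
+  </property>
+```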
+
+#### Time Based
+
+In time based rotation the user specifies a time interval expressed as a number of application windows. The files are rotated once the specified number of application windows have elapsed. Since the interval is based on application windows, it is not always an exactly constant amount of time. The interval can be specified using the following property
+
+`rotationWindows`
+
+### `setup(OperatorContext context)`
+
+When an operator is started there may be stray part files that need to be cleaned up. One common scenario in which these could be present is failure, where the node running the operator failed and a previous instance of the operator was killed. This cleanup and other initial processing for the part files happen in the operator setup. The following diagram describes this process:
+
+![Rotation setup](images/fileoutput/FileRotation.png)
+
+
+## Fault-tolerance
+There are two issues that should be addressed in order to make the operator 
fault-tolerant:
+
+1. The operator flushes data to the filesystem every application window. This 
implies that after a failure when the operator is re-deployed and tuples of a 
window are replayed, then duplicate data will be saved to the files. This is 
handled by recording how much the operator has written to each file every 
window in a state that is checkpointed and truncating files back to the 
recovery checkpoint after re-deployment.
+
+2. While writing to HDFS, if the operator gets killed and didn't have the opportunity to close a file, then later when it is redeployed it will attempt to truncate/restore that file. Restoring a file may fail because the lease that the previous process (the operator instance before the failure) had acquired from the namenode to write to the file may still linger, so there can be exceptions when the new process (the operator instance after the failure) tries to acquire the lease again. This is handled by always writing data to temporary files and renaming these files to the actual files when a file is finalized (closed) for writing, that is, when we are sure that no more data will be written to it. The relevant configuration item is:  
+  - **alwaysWriteToTmp**: enables/disables writing to a temporary file. 
*Default*: true.
+
+Most of the complexity in the code comes from making this operator 
fault-tolerant.
+
+### Checkpointed states needed for fault-tolerance
+
+- `endOffsets`: contains the size of each file as it is being updated by the 
operator. It helps the operator to restore a file during recovery in operator 
`setup(...)` and is also used while loading a stream to find out if the 
operator has seen a file before.
+
+- `fileNameToTmpName`: contains the name of the temporary file per actual file. It is needed because the name of a temporary file is generated at run time, based on the timestamp when the stream is created. During recovery the operator needs to know which temp file it was writing to, and if the file needs restoration then it creates a new temp file and updates this mapping.
+
+- `finalizedFiles`: contains the set of files that were requested to be finalized, per window id.
+
+- `finalizedPart`: contains the latest `part` of each file which was requested 
to be finalized.
+
+The use of `finalizedFiles` and `finalizedPart` is explained in detail under the [`requestFinalize(...)`](#requestFinalize) method.
+
+### Recovering files
+When the operator is re-deployed, it checks in its `setup(...)` method whether the state of a file that it has seen before the failure is consistent with the file's state on the file system, that is, the size of the file on the file system should match the size in `endOffsets`. When it doesn't, the operator truncates the file.
+
+For example, let's say the operator wrote 100 bytes to test1.txt by the end of 
window 10. It wrote another 20 bytes by the end of window 12 but failed in 
window 13. When the operator gets re-deployed it is restored with window 10 
(recovery checkpoint) state. In the previous run, by the end of window 10, the 
size of file on the filesystem was 100 bytes but now it is 120 bytes. Tuples 
for windows 11 and 12 are going to be replayed. Therefore, in order to avoid 
writing duplicates to test1.txt, the operator truncates the file to 100 bytes 
(size at the end of window 10) discarding the last 20 bytes.
+
+### <a name="requestFinalize"></a>`requestFinalize(String fileName)`
+When the operator is always writing to temporary files (in order to avoid HDFS lease exceptions), it is necessary to rename the temporary files to the actual files once it has been determined that the files are closed. This is referred to as *finalization* of files, and this method allows the user code to specify when a file is ready for finalization.
+
+In this method, the requested file (or, in the case of rotation, all the file parts that have not yet been requested for finalization, including the latest open part) is registered for finalization. Registration basically means adding the file names to the `finalizedFiles` state and updating `finalizedPart`.
+
+The actual *finalization* of all the files that were requested up to a window *w* is deferred until window *w* is committed. This is because until a window is committed it can be replayed after a failure, which means that a file can be open for writing even after it was requested for finalization.
+
+When rotation is enabled, part files are requested for finalization as and when they get completed. However, when rotation is not enabled, user code needs to invoke this method because this abstract operator has no knowledge of when a file is closed. A sketch of such user code is shown below.
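+
+The following is a hypothetical sketch only: `Record` is a made-up tuple type that knows the file it belongs to, the bytes to write, and whether it is the last record of its file.
+
+```java
+public static class RecordWriter extends AbstractFileOutputOperator<Record>
+{
+  @Override
+  protected String getFileName(Record tuple)
+  {
+    return tuple.getFileName();
+  }
+
+  @Override
+  protected byte[] getBytesForTuple(Record tuple)
+  {
+    return tuple.getBytes();
+  }
+
+  @Override
+  protected void processTuple(Record tuple)
+  {
+    super.processTuple(tuple);
+    //once the last record of a file has been written, request finalization so that
+    //the temporary file is renamed to the actual file when the window is committed.
+    if (tuple.isLastRecordOfFile()) {
+      requestFinalize(getFileName(tuple));
+    }
+  }
+}
+```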

http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/blob/afbcfc21/docs/operators/file_splitter.md
----------------------------------------------------------------------
diff --git a/docs/operators/file_splitter.md b/docs/operators/file_splitter.md
new file mode 100644
index 0000000..777e7b6
--- /dev/null
+++ b/docs/operators/file_splitter.md
@@ -0,0 +1,163 @@
+File Splitter
+===================
+
+This is a simple operator whose main function is to split a file virtually and 
create metadata describing the files and the splits. 
+
+## Why is it needed?
+It is a common operation to read a file and parse it. This operation can be parallelized by having multiple partitions of such operators, with each partition operating on different files. However, when a file is large, a single partition reading it can become a bottleneck. In these cases, throughput can be increased if the partitions of the operator can read and parse non-overlapping sets of file blocks. This is where the file splitter comes in handy. It creates metadata describing the blocks of a file, which serve as tasks handed out to downstream operator partitions. The downstream partitions can read/parse a block without needing to interact with other partitions.
+
+## Class Diagram
+![FileSplitter class hierarchy](images/filesplitter/classdiagram.png)
+
+## AbstractFileSplitter
+The abstract implementation defines the logic of processing `FileInfo`. This comprises the following tasks:  
+
+- building `FileMetadata` per file and emitting it. This metadata contains the 
file information such as filepath, no. of blocks in it, length of the file, all 
the block ids, etc.
+  
+- creating `BlockMetadataIterator` from `FileMetadata`. The iterator 
lazy-loads the block metadata when needed. We use an iterator because the no. 
of blocks in a file can be huge if the block size is small and loading all of 
them at once in memory may cause out of memory errors.
+ 
+- retrieving `BlockMetadata.FileBlockMetadata` from the block metadata iterator and emitting it. The FileBlockMetadata contains the block id, the start offset of the block, the length of the file in the block, etc. The number of block metadata tuples emitted per window is controlled by the `blocksThreshold` setting, which is 1 by default.  
+
+The main utility method that performs all the above tasks is the 
[`process()`](#process_method) method. Concrete implementations can invoke this 
method whenever they have data to process.
+
+### Ports
+Declares only output ports on which file metadata and block metadata are 
emitted.
+
+- filesMetadataOutput: metadata for each file is emitted on this port. 
+- blocksMetadataOutput: metadata for each block is emitted on this port. 
+
+### <a name="process_method"></a>`process()` method
+When process() is invoked, any pending blocks from the current file are emitted on the `blocksMetadataOutput` port. If the threshold for blocks per window is still not met, then a new input file is processed: its metadata is emitted on `filesMetadataOutput` and more of its blocks are emitted. This operation is repeated until `blocksThreshold` is reached or there are no more new files.
+
+```java
+  protected void process()
+  {
+    if (blockMetadataIterator != null && blockCount < blocksThreshold) {
+      emitBlockMetadata();
+    }
+
+    FileInfo fileInfo;
+    while (blockCount < blocksThreshold && (fileInfo = getFileInfo()) != null) 
{
+      if (!processFileInfo(fileInfo)) {
+        break;
+      }
+    }
+  }
+```
+### Abstract methods
+
+- `FileInfo getFileInfo()`: called from within `process()` and provides the next file to process.
+
+- `long getDefaultBlockSize()`: provides the block size that is used when the user hasn't configured the size.
+
+- `FileStatus getFileStatus(Path path)`: provides the 
`org.apache.hadoop.fs.FileStatus` instance for a path.   
+
+### Configuration
+1. **blockSize**: size of a block.
+2. **blocksThreshold**<a name="blocksThreshold"></a>: threshold on the number of blocks emitted by the file splitter every window. This setting is used for throttling the work for downstream operators. A sample configuration is shown below.
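+
+A sample configuration of these properties; the operator name `Splitter` and the values are illustrative.
+
+```xml
+  <property>
+    <name>dt.operator.Splitter.prop.blockSize</name>
+    <value>1048576</value>
+  </property>
+  <property>
+    <name>dt.operator.Splitter.prop.blocksThreshold</name>
+    <value>10</value>
+  </property>
+```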
+
+
+## FileSplitterBase
+Simple operator that receives tuples of type `FileInfo` on its `input` port. `FileInfo` contains information about the file (currently just the file path), which this operator uses to create file metadata and block metadata.
+### Example application
+This is a simple sub-DAG that demonstrates how FileSplitterBase can be plugged into an application.
+![Application with FileSplitterBase](images/filesplitter/baseexample.png)
+
+The upstream operator emits tuples of type `FileInfo` on its output port, which is connected to the splitter's input port. The downstream operator receives tuples of type `BlockMetadata.FileBlockMetadata` from the splitter's block metadata output port.
+
+```java
+public class ApplicationWithBaseSplitter implements StreamingApplication
+{
+  @Override
+  public void populateDAG(DAG dag, Configuration configuration)
+  {
+    JMSInput input = dag.addOperator("Input", new JMSInput());
+    FileSplitterBase splitter = dag.addOperator("Splitter", new 
FileSplitterBase());
+    FSSliceReader blockReader = dag.addOperator("BlockReader", new 
FSSliceReader());
+    ...
+    dag.addStream("file-info", input.output, splitter.input);
+    dag.addStream("block-metadata", splitter.blocksMetadataOutput, 
blockReader.blocksMetadataInput);
+    ...
+  }
+
+  public static class JMSInput extends 
AbstractJMSInputOperator<AbstractFileSplitter.FileInfo>
+  {
+
+    public final transient DefaultOutputPort<AbstractFileSplitter.FileInfo> 
output = new DefaultOutputPort<>();
+
+    @Override
+    protected AbstractFileSplitter.FileInfo convert(Message message) throws 
JMSException
+    {
+      //assuming the message is a text message containing the absolute path of 
the file.
+      return new AbstractFileSplitter.FileInfo(null, 
((TextMessage)message).getText());
+    }
+
+    @Override
+    protected void emit(AbstractFileSplitter.FileInfo payload)
+    {
+      output.emit(payload);
+    }
+  }
+}
+```
+
+### Ports
+Declares an input port on which it receives tuples from the upstream operator. 
Output ports are inherited from AbstractFileSplitter.
+ 
+- input: non-optional port on which tuples of type `FileInfo` are received.
+
+### Configuration
+1. **file**: path of the file from which the filesystem is inferred. 
FileSplitter creates an instance of `org.apache.hadoop.fs.FileSystem` which is 
why this path is needed.  
+```
+FileSystem.newInstance(new Path(file).toUri(), new Configuration());
+```
+The fs instance is then used to fetch the default block size and 
`org.apache.hadoop.fs.FileStatus` for each file path.
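+
+For example, in a properties file (the operator name `Splitter` and the path are illustrative):
+
+```xml
+  <property>
+    <name>dt.operator.Splitter.prop.file</name>
+    <value>hdfs://namenode:8020/user/input</value>
+  </property>
+```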
+
+## FileSplitterInput
+This is an input operator that discovers files itself. The scanning of the directories for new files is asynchronous and is handled by `TimeBasedDirectoryScanner`. The function of TimeBasedDirectoryScanner is to periodically scan specified directories and find files that were newly added or modified. The interaction between the operator and the scanner is depicted in the diagram below.
+
+![Interaction between operator and scanner](images/filesplitter/sequence.png)
+
+### Example application
+This is a simple sub-DAG that demonstrates how FileSplitterInput can be plugged into an application.
+
+![Application with FileSplitterInput](images/filesplitter/inputexample.png)
+
+Splitter is the input operator here that sends block metadata to the 
downstream BlockReader.
+
+```java
+  @Override
+  public void populateDAG(DAG dag, Configuration configuration)
+  {
+    FileSplitterInput input = dag.addOperator("Input", new 
FileSplitterInput());
+    FSSliceReader reader = dag.addOperator("Block Reader", new 
FSSliceReader());
+    ...
+    dag.addStream("block-metadata", input.blocksMetadataOutput, 
reader.blocksMetadataInput);
+    ...
+  }
+
+```
+### Ports
+Since it is an input operator there are no input ports and output ports are 
inherited from AbstractFileSplitter.
+
+### Configuration
+1. **scanner**: the component that scans directories asynchronously. It is of type `com.datatorrent.lib.io.fs.FileSplitter.TimeBasedDirectoryScanner`. The basic implementation of TimeBasedDirectoryScanner can be customized by users. A sample configuration is shown after this list.  
+  
+  a. **files**: comma separated list of directories to scan.  
+  
+  b. **recursive**: flag that controls whether the directories should be 
scanned recursively.  
+ 
+  c. **scanIntervalMillis**: interval specified in milliseconds after which 
another scan iteration is triggered.  
+  
+  d. **filePatternRegularExp**: regular expression for accepted file names.  
+  
+  e. **trigger**: a flag that triggers a scan iteration instantly. If the scanner thread is idling, it will initiate a scan immediately; otherwise, if a scan is in progress, the new iteration will be triggered immediately after the current one completes.  
+2. **idempotentStorageManager**: by default FileSplitterInput is idempotent. Idempotency ensures that the operator will process the same set of files/blocks in a window if it has seen that window previously, i.e., before a failure. For example, let's say the operator completed window 10 and failed somewhere during window 11. If the operator gets restored at window 10, then it will process the same files/blocks again in window 10 that it did in the previous run before the failure. Idempotency is important but comes with a higher cost, because at the end of each window the operator needs to persist some state with respect to that window. Therefore, if one doesn't care about idempotency, this property can be set to an instance of `com.datatorrent.lib.io.IdempotentStorageManager.NoopIdempotentStorageManager`.
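+
+A sample scanner configuration; the operator name `Input`, the directories and the values are illustrative.
+
+```xml
+  <property>
+    <name>dt.operator.Input.prop.scanner.files</name>
+    <value>/user/input/dir1,/user/input/dir2</value>
+  </property>
+  <property>
+    <name>dt.operator.Input.prop.scanner.recursive</name>
+    <value>true</value>
+  </property>
+  <property>
+    <name>dt.operator.Input.prop.scanner.scanIntervalMillis</name>
+    <value>10000</value>
+  </property>
+  <property>
+    <name>dt.operator.Input.prop.scanner.filePatternRegularExp</name>
+    <value>.*\.log</value>
+  </property>
+```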
+
+## Handling of split records
+Splitting of files to create tasks for downstream operator needs to be a 
simple operation that doesn't consume a lot of resources and is fast. This is 
why the file splitter doesn't open files to read. The downside of that is if 
the file contains records then a record may split across adjacent blocks. 
Handling of this is left to the downstream operator.
+
+We have created block readers in the Apex Malhar library that handle line splits efficiently. The two line readers, `AbstractFSLineReader` and `AbstractFSReadAheadLineReader`, can be found here: [AbstractFSBlockReader](https://github.com/apache/incubator-apex-malhar/blob/master/library/src/main/java/com/datatorrent/lib/io/block/AbstractFSBlockReader.java).

http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/blob/afbcfc21/docs/operators/images/blockreader/classdiagram.png
----------------------------------------------------------------------
diff --git a/docs/operators/images/blockreader/classdiagram.png 
b/docs/operators/images/blockreader/classdiagram.png
new file mode 100644
index 0000000..8fbd6fc
Binary files /dev/null and b/docs/operators/images/blockreader/classdiagram.png 
differ

http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/blob/afbcfc21/docs/operators/images/blockreader/flowdiagram.png
----------------------------------------------------------------------
diff --git a/docs/operators/images/blockreader/flowdiagram.png 
b/docs/operators/images/blockreader/flowdiagram.png
new file mode 100644
index 0000000..1b2897d
Binary files /dev/null and b/docs/operators/images/blockreader/flowdiagram.png 
differ

http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/blob/afbcfc21/docs/operators/images/blockreader/fsreaderexample.png
----------------------------------------------------------------------
diff --git a/docs/operators/images/blockreader/fsreaderexample.png 
b/docs/operators/images/blockreader/fsreaderexample.png
new file mode 100644
index 0000000..571b60a
Binary files /dev/null and 
b/docs/operators/images/blockreader/fsreaderexample.png differ

http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/blob/afbcfc21/docs/operators/images/blockreader/totalBacklogProcessing.png
----------------------------------------------------------------------
diff --git a/docs/operators/images/blockreader/totalBacklogProcessing.png 
b/docs/operators/images/blockreader/totalBacklogProcessing.png
new file mode 100644
index 0000000..2ed481f
Binary files /dev/null and 
b/docs/operators/images/blockreader/totalBacklogProcessing.png differ

http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/blob/afbcfc21/docs/operators/images/fileoutput/FileRotation.png
----------------------------------------------------------------------
diff --git a/docs/operators/images/fileoutput/FileRotation.png 
b/docs/operators/images/fileoutput/FileRotation.png
new file mode 100644
index 0000000..624c96e
Binary files /dev/null and b/docs/operators/images/fileoutput/FileRotation.png 
differ

http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/blob/afbcfc21/docs/operators/images/fileoutput/diagram1.png
----------------------------------------------------------------------
diff --git a/docs/operators/images/fileoutput/diagram1.png 
b/docs/operators/images/fileoutput/diagram1.png
new file mode 100644
index 0000000..0a260de
Binary files /dev/null and b/docs/operators/images/fileoutput/diagram1.png 
differ

http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/blob/afbcfc21/docs/operators/images/filesplitter/baseexample.png
----------------------------------------------------------------------
diff --git a/docs/operators/images/filesplitter/baseexample.png 
b/docs/operators/images/filesplitter/baseexample.png
new file mode 100644
index 0000000..6af2b44
Binary files /dev/null and b/docs/operators/images/filesplitter/baseexample.png 
differ

http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/blob/afbcfc21/docs/operators/images/filesplitter/classdiagram.png
----------------------------------------------------------------------
diff --git a/docs/operators/images/filesplitter/classdiagram.png 
b/docs/operators/images/filesplitter/classdiagram.png
new file mode 100644
index 0000000..6490368
Binary files /dev/null and 
b/docs/operators/images/filesplitter/classdiagram.png differ

http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/blob/afbcfc21/docs/operators/images/filesplitter/inputexample.png
----------------------------------------------------------------------
diff --git a/docs/operators/images/filesplitter/inputexample.png 
b/docs/operators/images/filesplitter/inputexample.png
new file mode 100644
index 0000000..65e199f
Binary files /dev/null and 
b/docs/operators/images/filesplitter/inputexample.png differ

http://git-wip-us.apache.org/repos/asf/incubator-apex-malhar/blob/afbcfc21/docs/operators/images/filesplitter/sequence.png
----------------------------------------------------------------------
diff --git a/docs/operators/images/filesplitter/sequence.png 
b/docs/operators/images/filesplitter/sequence.png
new file mode 100644
index 0000000..85cf702
Binary files /dev/null and b/docs/operators/images/filesplitter/sequence.png 
differ

