Repository: samza
Updated Branches:
  refs/heads/master db8b228d9 -> f01b28628


SAMZA-1025: Documentation for HdfsSystemConsumer


Project: http://git-wip-us.apache.org/repos/asf/samza/repo
Commit: http://git-wip-us.apache.org/repos/asf/samza/commit/f01b2862
Tree: http://git-wip-us.apache.org/repos/asf/samza/tree/f01b2862
Diff: http://git-wip-us.apache.org/repos/asf/samza/diff/f01b2862

Branch: refs/heads/master
Commit: f01b2862879b640456a46e0a6fe375cfe7e17b37
Parents: db8b228
Author: Hai Lu <lhai...@gmail.com>
Authored: Mon Jan 30 10:58:15 2017 -0800
Committer: vjagadish1989 <jvenk...@linkedin.com>
Committed: Mon Jan 30 10:58:46 2017 -0800

----------------------------------------------------------------------
 .../documentation/versioned/hdfs/consumer.md    | 109 +++++++++++++++++++
 .../documentation/versioned/hdfs/producer.md    |   2 +-
 docs/learn/documentation/versioned/index.html   |   1 +
 .../versioned/jobs/configuration-table.html     |  40 +++++++
 4 files changed, 151 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/samza/blob/f01b2862/docs/learn/documentation/versioned/hdfs/consumer.md
----------------------------------------------------------------------
diff --git a/docs/learn/documentation/versioned/hdfs/consumer.md b/docs/learn/documentation/versioned/hdfs/consumer.md
new file mode 100644
index 0000000..401b228
--- /dev/null
+++ b/docs/learn/documentation/versioned/hdfs/consumer.md
@@ -0,0 +1,109 @@
+---
+layout: page
+title: Reading from HDFS
+---
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+You can configure your Samza job to read from HDFS files using the [HdfsSystemConsumer](javadocs/org/apache/samza/system/hdfs/HdfsSystemConsumer.html). Avro-encoded records are supported out of the box, and it is easy to extend to support other formats (plain text, csv, json etc.). See the `Event format` section below.
+
+### Environment
+
+Your job needs to run on the same YARN cluster that hosts the HDFS cluster you want to consume from.
+
+### Partitioning
+
+Partitioning works at the level of individual HDFS files. Each file is treated as a stream partition, while a directory that contains these files is a stream. For example, if you want to read from an HDFS path that contains 10 individual files, 10 partitions will naturally be created. You can configure up to 10 Samza containers to process these partitions. If you want to read from a single HDFS file, there is currently no way to break down the consumption - only one container can process the file.
+
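+For example, assuming an input directory of exactly 10 files as above, you could run one container per partition with the standard `yarn.container.count` setting (the value below is purely illustrative):
+
+```
+# One container per file-backed partition; with fewer containers,
+# Samza simply assigns multiple partitions to each container.
+yarn.container.count=10
+```
+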
+### Event format
+
+[HdfsSystemConsumer](javadocs/org/apache/samza/system/hdfs/HdfsSystemConsumer.html) currently supports reading from avro files. The received [IncomingMessageEnvelope](javadocs/org/apache/samza/system/IncomingMessageEnvelope.html) contains three significant fields:
+1. The key, which is empty
+2. The message, which is set to the avro [GenericRecord](https://avro.apache.org/docs/1.7.6/api/java/org/apache/avro/generic/GenericRecord.html)
+3. The stream partition, which is set to the name of the HDFS file
+
+To extend the support beyond avro files (e.g. json, csv, etc.), you can implement the interface [SingleFileHdfsReader](javadocs/org/apache/samza/system/hdfs/reader/SingleFileHdfsReader.html) (take a look at the implementation of [AvroFileHdfsReader](javadocs/org/apache/samza/system/hdfs/reader/AvroFileHdfsReader.html) as a sample).
+
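+A minimal sketch of a task that consumes these envelopes (the task class and the avro field name `member_id` are hypothetical, for illustration only):
+
+```
+import org.apache.avro.generic.GenericRecord;
+import org.apache.samza.system.IncomingMessageEnvelope;
+import org.apache.samza.task.MessageCollector;
+import org.apache.samza.task.StreamTask;
+import org.apache.samza.task.TaskCoordinator;
+
+public class ClickstreamTask implements StreamTask {
+  @Override
+  public void process(IncomingMessageEnvelope envelope, MessageCollector collector,
+      TaskCoordinator coordinator) {
+    // The message is an avro GenericRecord; the key is empty for HDFS input.
+    GenericRecord record = (GenericRecord) envelope.getMessage();
+    Object memberId = record.get("member_id"); // hypothetical avro field
+    // The stream partition identifies the HDFS file this record came from.
+    System.out.println(envelope.getSystemStreamPartition() + ": " + memberId);
+  }
+}
+```
+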
+### End of stream support
+
+One major difference between HDFS data and Kafka data is that while a Kafka topic is an unbounded stream of messages, HDFS files are bounded and have a notion of EOF.
+
+You can choose to implement [EndOfStreamListenerTask](javadocs/org/apache/samza/task/EndOfStreamListenerTask.html) to receive a callback when all partitions are at end of stream. When all partitions being processed by the task are at end of stream (i.e. EOF has been reached for all files), the Samza job exits automatically.
+
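+A minimal sketch of a task that also listens for end of stream (the class is hypothetical; confirm the exact callback signature against the javadocs above):
+
+```
+import org.apache.samza.system.IncomingMessageEnvelope;
+import org.apache.samza.task.EndOfStreamListenerTask;
+import org.apache.samza.task.MessageCollector;
+import org.apache.samza.task.StreamTask;
+import org.apache.samza.task.TaskCoordinator;
+
+public class BoundedClickstreamTask implements StreamTask, EndOfStreamListenerTask {
+  @Override
+  public void process(IncomingMessageEnvelope envelope, MessageCollector collector,
+      TaskCoordinator coordinator) {
+    // ... per-record processing ...
+  }
+
+  @Override
+  public void onEndOfStream(MessageCollector collector, TaskCoordinator coordinator) {
+    // Invoked once every partition handled by this task has reached EOF;
+    // flush any buffered output or state here before the job shuts down.
+  }
+}
+```
+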
+### Basic Configuration
+
+Here are a few of the basic configs for setting up HdfsSystemConsumer:
+
+```
+# The HDFS system consumer is implemented under the org.apache.samza.system.hdfs package,
+# so use HdfsSystemFactory as the system factory for your system
+systems.hdfs-clickstream.samza.factory=org.apache.samza.system.hdfs.HdfsSystemFactory
+
+# You need to specify the path of files you want to consume in task.inputs
+task.inputs=hdfs-clickstream.hdfs:/data/clickstream/2016/09/11
+
+# You can specify a white list of files you want your job to process (in Java Pattern style)
+systems.hdfs-clickstream.partitioner.defaultPartitioner.whitelist=.*avro
+
+# You can specify a black list of files you don't want your job to process (in Java Pattern style);
+# by default it's empty.
+# Note that you can have both a white list and a black list, in which case both will be applied.
+systems.hdfs-clickstream.partitioner.defaultPartitioner.blacklist=somefile.avro
+```
+
+### Security Configuration
+
+The following additional configs are required when accessing HDFS clusters that have Kerberos enabled:
+
+```
+# Use the SamzaYarnSecurityManagerFactory, which fetches and renews the Kerberos
+# delegation tokens when the job is running in a secure environment.
+job.security.manager.factory=org.apache.samza.job.yarn.SamzaYarnSecurityManagerFactory
+
+# Kerberos principal
+yarn.kerberos.principal=your-principal-name
+
+# Path of the keytab file (local path)
+yarn.kerberos.keytab=/tmp/keytab
+```
+
+### Advanced Configuration
+
+Some of the advanced configurations you might need to set up:
+
+```
+# Specify the group pattern for advanced partitioning.
+systems.hdfs-clickstream.partitioner.defaultPartitioner.groupPattern=part-[id]-.*
+```
+
+The advanced partitioning goes beyond the basic assumption that each file is a partition. With advanced partitioning you can group files into partitions arbitrarily. For example, if you have a set of files as [part-01-a.avro, part-01-b.avro, part-02-a.avro, part-02-b.avro, part-03-a.avro] that you want to organize into three partitions as (part-01-a.avro, part-01-b.avro), (part-02-a.avro, part-02-b.avro), (part-03-a.avro), where the numbers in the middle act as a "group identifier", you can then set this property to "part-[id]-.*" (note that **[id]** is a reserved term here, i.e. you have to literally put it as **[id]**). The partitioner will apply this pattern to all file names and extract the "group identifier" ("[id]" in the pattern), then use the "group identifier" to group files into partitions.
+
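+To make the grouping concrete, here is a hypothetical, standalone illustration of the idea - it is not Samza's actual partitioner code, it merely mimics treating **[id]** as a capture group:
+
+```
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+
+public class GroupPatternDemo {
+  public static void main(String[] args) {
+    // Treat "[id]" as a capture group, as the partitioner conceptually does.
+    Pattern pattern = Pattern.compile("part-[id]-.*".replace("[id]", "(.+)"));
+    Map<String, List<String>> partitions = new TreeMap<>();
+    for (String file : Arrays.asList("part-01-a.avro", "part-01-b.avro",
+        "part-02-a.avro", "part-02-b.avro", "part-03-a.avro")) {
+      Matcher m = pattern.matcher(file);
+      if (m.matches()) {
+        partitions.computeIfAbsent(m.group(1), k -> new ArrayList<>()).add(file);
+      }
+    }
+    // Prints {01=[part-01-a.avro, part-01-b.avro],
+    //         02=[part-02-a.avro, part-02-b.avro], 03=[part-03-a.avro]}
+    System.out.println(partitions);
+  }
+}
+```
+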
+```
+# Specify the type of files your job wants to process (only avro is supported for now)
+systems.hdfs-clickstream.consumer.reader=avro
+
+# Max number of retries (per partition) before the container fails.
+systems.hdfs-clickstream.consumer.numMaxRetries=10
+```
+
+For the list of all configs, check out the configuration table page [here](../jobs/configuration-table.html).
+
+### More Information
+[HdfsSystemConsumer design doc](https://issues.apache.org/jira/secure/attachment/12827670/HDFSSystemConsumer.pdf)
+
+## [Security &raquo;](../operations/security.html)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/samza/blob/f01b2862/docs/learn/documentation/versioned/hdfs/producer.md
----------------------------------------------------------------------
diff --git a/docs/learn/documentation/versioned/hdfs/producer.md b/docs/learn/documentation/versioned/hdfs/producer.md
index b0e936f..a157cd8 100644
--- a/docs/learn/documentation/versioned/hdfs/producer.md
+++ b/docs/learn/documentation/versioned/hdfs/producer.md
@@ -67,4 +67,4 @@ systems.hdfs-clickstream.producer.hdfs.write.batch.size.bytes=134217728
 
 The above configuration assumes a Metrics and Serde implementation has been properly configured against the `some-serde-impl` and `some-metrics-impl` labels somewhere else in the same `job.properties` file. Each of these properties has a reasonable default, so you can leave out the ones you don't need to customize for your job run.
 
-## [Security &raquo;](../operations/security.html)
\ No newline at end of file
+## [Reading from HDFS &raquo;](../hdfs/consumer.html)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/samza/blob/f01b2862/docs/learn/documentation/versioned/index.html
----------------------------------------------------------------------
diff --git a/docs/learn/documentation/versioned/index.html b/docs/learn/documentation/versioned/index.html
index d0b14ec..6651b3b 100644
--- a/docs/learn/documentation/versioned/index.html
+++ b/docs/learn/documentation/versioned/index.html
@@ -82,6 +82,7 @@ title: Documentation
   <li><a href="yarn/isolation.html">Isolation</a></li>
   <li><a href="yarn/yarn-host-affinity.html">Host Affinity & Yarn</a></li>
   <li><a href="hdfs/producer.html">Writing to HDFS</a></li>
+  <li><a href="hdfs/consumer.html">Reading from HDFS</a></li>
   <li><a href="hdfs/yarn-security.html">Yarn Security</a></li>
 <!-- TODO write yarn pages
   <li><a href="">Fault Tolerance</a></li>

http://git-wip-us.apache.org/repos/asf/samza/blob/f01b2862/docs/learn/documentation/versioned/jobs/configuration-table.html
----------------------------------------------------------------------
diff --git a/docs/learn/documentation/versioned/jobs/configuration-table.html b/docs/learn/documentation/versioned/jobs/configuration-table.html
index ba5ebbc..7bac935 100644
--- a/docs/learn/documentation/versioned/jobs/configuration-table.html
+++ b/docs/learn/documentation/versioned/jobs/configuration-table.html
@@ -1810,6 +1810,46 @@
                 </tr>
 
                 <tr>
+                    <th colspan="3" class="section" id="hdfs-system-consumer"><a href="../hdfs/consumer.html">Reading from HDFS</a></th>
+                </tr>
+
+                <tr>
+                    <td class="property" id="hdfs-consumer-buffer-capacity">systems.*.consumer.bufferCapacity</td>
+                    <td class="default">10</td>
+                    <td class="description">Capacity of the HDFS consumer buffer - the blocking queue used for storing messages. A larger buffer capacity typically leads to better throughput but consumes more memory.</td>
+                </tr>
+                <tr>
+                    <td class="property" id="hdfs-consumer-numMaxRetries">systems.*.consumer.numMaxRetries</td>
+                    <td class="default">10</td>
+                    <td class="description">The number of retry attempts when there is a failure to fetch messages from HDFS, before the container fails.</td>
+                </tr>
+                <tr>
+                    <td class="property" id="hdfs-partitioner-whitelist">systems.*.partitioner.defaultPartitioner.whitelist</td>
+                    <td class="default">.*</td>
+                    <td class="description">White list used by the directory partitioner to select files in an HDFS directory, in Java Pattern style.</td>
+                </tr>
+                <tr>
+                    <td class="property" id="hdfs-partitioner-blacklist">systems.*.partitioner.defaultPartitioner.blacklist</td>
+                    <td class="default"></td>
+                    <td class="description">Black list used by the directory partitioner to filter out unwanted files in an HDFS directory, in Java Pattern style.</td>
+                </tr>
+                <tr>
+                    <td class="property" id="hdfs-partitioner-group-pattern">systems.*.partitioner.defaultPartitioner.groupPattern</td>
+                    <td class="default"></td>
+                    <td class="description">Group pattern used by the directory partitioner for advanced partitioning. Advanced partitioning goes beyond the basic assumption that each file is a partition: you can group files into partitions arbitrarily. For example, if you have a set of files as [part-01-a.avro, part-01-b.avro, part-02-a.avro, part-02-b.avro, part-03-a.avro], and you want to organize the partitions as (part-01-a.avro, part-01-b.avro), (part-02-a.avro, part-02-b.avro), (part-03-a.avro), where the numbers in the middle act as a "group identifier", you can then set this property to "part-[id]-.*" (note that "[id]" is a reserved term here, i.e. you have to literally put it as "[id]"). The partitioner will apply this pattern to all file names and extract the "group identifier" ("[id]" in the pattern), then use the "group identifier" to group files into partitions. See more details in the <a href="https://issues.apache.org/jira/secure/attachment/12827670/HDFSSystemConsumer.pdf">HdfsSystemConsumer design doc</a></td>
+                </tr>
+                <tr>
+                    <td class="property" id="hdfs-consumer-reader-type">systems.*.consumer.reader</td>
+                    <td class="default">avro</td>
+                    <td class="description">Type of the file reader for different event formats (avro, plain, json, etc.). "avro" is the only type supported for now.</td>
+                </tr>
+                <tr>
+                    <td class="property" id="hdfs-staging-directory">systems.*.stagingDirectory</td>
+                    <td class="default"></td>
+                    <td class="description">Staging directory for storing partition descriptions. By default (if not set by the user) the value is inherited from "yarn.job.staging.directory" internally. The default value is typically good enough unless you want to explicitly use a separate location.</td>
+                </tr>
+
+                <tr>
                     <th colspan="3" class="section" id="task-migration">
                         Migrating from Samza 0.9.1 to 0.10.0<br>
                         <span class="subtitle">
