[ https://issues.apache.org/jira/browse/BEAM-3079?focusedWorklogId=117757&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-117757 ]

ASF GitHub Bot logged work on BEAM-3079:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 30/Jun/18 04:49
            Start Date: 30/Jun/18 04:49
    Worklog Time Spent: 10m 
      Work Description: asfgit closed pull request #471: [BEAM-3079]: Samza Runner docs and capability matrix
URL: https://github.com/apache/beam-site/pull/471
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

diff --git a/src/_data/capability-matrix.yml b/src/_data/capability-matrix.yml
index acac0ad40..508ac1f42 100644
--- a/src/_data/capability-matrix.yml
+++ b/src/_data/capability-matrix.yml
@@ -17,6 +17,8 @@ columns:
     name: JStorm
   - class: ibmstreams
     name: IBM Streams
+  - class: samza
+    name: Apache Samza
 
 categories:
   - description: What is being computed?
@@ -64,6 +66,10 @@ categories:
             l1: 'Yes'
             l2: fully supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: fully supported
+            l3: Supported with per-element transformation.
       - name: GroupByKey
         values:
           - class: model
@@ -102,6 +108,10 @@ categories:
             l1: 'Yes'
             l2: fully supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: fully supported
+            l3: "Uses Samza's partitionBy for key grouping and Beam's logic for window aggregation and triggering."
       - name: Flatten
         values:
           - class: model
@@ -140,6 +150,10 @@ categories:
             l1: 'Yes'
             l2: fully supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: fully supported
+            l3: ''
       - name: Combine
         values:
           - class: model
@@ -178,6 +192,10 @@ categories:
             l1: 'Yes'
             l2: fully supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: fully supported
+            l3: Use combiner for efficient pre-aggregation.
       - name: Composite Transforms
         values:
           - class: model
@@ -216,6 +234,10 @@ categories:
             l1: 'Partially'
             l2: supported via inlining
             l3: ''
+          - class: samza
+            l1: 'Partially'
+            l2: supported via inlining
+            l3: ''
       - name: Side Inputs
         values:
           - class: model
@@ -254,6 +276,10 @@ categories:
             l1: 'Yes'
             l2: fully supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: fully supported
+            l3: Uses Samza's broadcast operator to distribute the side inputs.
       - name: Source API
         values:
           - class: model
@@ -292,6 +318,10 @@ categories:
             l1: 'Yes'
             l2: fully supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: fully supported
+            l3: ''
       - name: Splittable DoFn
         values:
           - class: model
@@ -329,7 +359,11 @@ categories:
           - class: ibmstreams
             l1: 'No'
             l2: not implemented
-            l3: 
+            l3:
+          - class: samza
+            l1: 'No'
+            l2: not implemented
+            l3:
       - name: Metrics
         values:
           - class: model
@@ -368,6 +402,10 @@ categories:
             l1: 'Partially'
             l2: All metrics types are supported.
             l3: Only attempted values are supported. No committed values for metrics.
+          - class: samza
+            l1: 'Partially'
+            l2: Counter and Gauge are supported.
+            l3: Only attempted values are supported. No committed values for metrics.
       - name: Stateful Processing
         values:
           - class: model
@@ -406,6 +444,10 @@ categories:
             l1: 'Partially'
             l2: non-merging windows
             l3: ''
+          - class: samza
+            l1: 'Partially'
+            l2: non-merging windows
+            l3: 'State is backed by either a RocksDB KV store or an in-memory hash map, and persisted using a changelog.'
   - description: Where in event time?
     anchor: where
     color-b: '37d'
@@ -451,6 +493,10 @@ categories:
             l1: 'Yes'
             l2: supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: supported
+            l3: ''
       - name: Fixed windows
         values:
           - class: model
@@ -489,6 +535,10 @@ categories:
             l1: 'Yes'
             l2: supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: supported
+            l3: ''
       - name: Sliding windows
         values:
           - class: model
@@ -527,6 +577,10 @@ categories:
             l1: 'Yes'
             l2: supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: supported
+            l3: ''
       - name: Session windows
         values:
           - class: model
@@ -565,6 +619,10 @@ categories:
             l1: 'Yes'
             l2: supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: supported
+            l3: ''
       - name: Custom windows
         values:
           - class: model
@@ -603,6 +661,10 @@ categories:
             l1: 'Yes'
             l2: supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: supported
+            l3: ''
       - name: Custom merging windows
         values:
           - class: model
@@ -641,6 +703,10 @@ categories:
             l1: 'Yes'
             l2: supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: supported
+            l3: ''
       - name: Timestamp control
         values:
           - class: model
@@ -679,6 +745,10 @@ categories:
             l1: 'Yes'
             l2: supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: supported
+            l3: ''
 
   - description: When in processing time?
     anchor: when
@@ -726,6 +796,10 @@ categories:
             l1: 'Yes'
             l2: fully supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: fully supported
+            l3: ''
 
       - name: Event-time triggers
         values:
@@ -765,6 +839,10 @@ categories:
             l1: 'Yes'
             l2: fully supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: fully supported
+            l3: ''
 
       - name: Processing-time triggers
         values:
@@ -804,6 +882,10 @@ categories:
             l1: 'Yes'
             l2: fully supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: fully supported
+            l3: ''
 
       - name: Count triggers
         values:
@@ -843,6 +925,10 @@ categories:
             l1: 'Yes'
             l2: fully supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: fully supported
+            l3: ''
 
       - name: '[Meta]data driven triggers'
         values:
@@ -882,7 +968,11 @@ categories:
           - class: ibmstreams
             l1: 'No'
             l2: pending model support
-            l3: 
+            l3:
+          - class: samza
+            l1: 'No'
+            l2: pending model support
+            l3:
 
       - name: Composite triggers
         values:
@@ -922,6 +1012,10 @@ categories:
             l1: 'Yes'
             l2: fully supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: fully supported
+            l3: ''
 
       - name: Allowed lateness
         values:
@@ -961,6 +1055,10 @@ categories:
             l1: 'Yes'
             l2: fully supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: fully supported
+            l3: ''
 
       - name: Timers
         values:
@@ -1000,6 +1098,10 @@ categories:
             l1: 'Partially'
             l2: non-merging windows
             l3: ''
+          - class: samza
+            l1: 'No'
+            l2: ''
+            l3: ''
 
   - description: How do refinements relate?
     anchor: how
@@ -1047,6 +1149,10 @@ categories:
             l1: 'Yes'
             l2: fully supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: fully supported
+            l3: ''
 
       - name: Accumulating
         values:
@@ -1086,6 +1192,10 @@ categories:
             l1: 'Yes'
             l2: fully supported
             l3: ''
+          - class: samza
+            l1: 'Yes'
+            l2: fully supported
+            l3: ''
 
       - name: 'Accumulating & Retracting'
         values:
@@ -1126,3 +1236,7 @@ categories:
             l1: 'No'
             l2: pending model support
             l3: ''
+          - class: samza
+            l1: 'No'
+            l2: pending model support
+            l3: ''
\ No newline at end of file
diff --git a/src/_includes/section-menu/runners.html b/src/_includes/section-menu/runners.html
index a05fcd938..08212d572 100644
--- a/src/_includes/section-menu/runners.html
+++ b/src/_includes/section-menu/runners.html
@@ -6,3 +6,4 @@
 <li><a href="{{ site.baseurl }}/documentation/runners/gearpump/">Apache Gearpump</a></li>
 <li><a href="{{ site.baseurl }}/documentation/runners/spark/">Apache Spark</a></li>
 <li><a href="{{ site.baseurl }}/documentation/runners/dataflow/">Google Cloud Dataflow</a></li>
+<li><a href="{{ site.baseurl }}/documentation/runners/samza/">Apache Samza</a></li>
diff --git a/src/documentation/index.md b/src/documentation/index.md
index e7b36cc16..8c008f138 100644
--- a/src/documentation/index.md
+++ b/src/documentation/index.md
@@ -46,6 +46,7 @@ A Beam Runner runs a Beam pipeline on a specific (often distributed) data proces
 * [SparkRunner]({{ site.baseurl }}/documentation/runners/spark/): Runs on [Apache Spark](http://spark.apache.org).
 * [DataflowRunner]({{ site.baseurl }}/documentation/runners/dataflow/): Runs on [Google Cloud Dataflow](https://cloud.google.com/dataflow), a fully managed service within [Google Cloud Platform](https://cloud.google.com/).
 * [GearpumpRunner]({{ site.baseurl }}/documentation/runners/gearpump/): Runs on [Apache Gearpump (incubating)](http://gearpump.apache.org).
+* [SamzaRunner]({{ site.baseurl }}/documentation/runners/samza/): Runs on [Apache Samza](http://samza.apache.org).
 
 ### Choosing a Runner
 
diff --git a/src/documentation/runners/samza.md b/src/documentation/runners/samza.md
new file mode 100644
index 000000000..2b63189ef
--- /dev/null
+++ b/src/documentation/runners/samza.md
@@ -0,0 +1,151 @@
+---
+layout: section
+title: "Apache Samza Runner"
+section_menu: section-menu/runners.html
+permalink: /documentation/runners/samza/
+redirect_from: /learn/runners/Samza/
+---
+# Using the Apache Samza Runner
+
+The Apache Samza Runner can be used to execute Beam pipelines using [Apache Samza](http://samza.apache.org/). The Samza Runner executes a Beam pipeline in a Samza application and can run it locally. The application can further be built into a .tgz file, and deployed to a YARN cluster or a Samza standalone cluster with Zookeeper.
+
+The Samza Runner and Samza are suitable for large-scale, stateful streaming jobs, and provide:
+
+* First-class support for local state (with a RocksDB store). This allows fast state access for high-frequency streaming jobs.
+* Fault tolerance with support for incremental checkpointing of state instead of full snapshots. This enables Samza to scale to applications with very large state.
+* A fully asynchronous processing engine that makes remote calls efficient.
+* A flexible deployment model for running applications in any hosting environment with Zookeeper.
+* Features like canaries, upgrades, and rollbacks that support extremely large deployments with minimal downtime.
+
+The [Beam Capability Matrix]({{ site.baseurl }}/documentation/runners/capability-matrix/) documents the currently supported capabilities of the Samza Runner.
+
+## Samza Runner prerequisites and setup
+
+The Samza Runner is built on a Samza version greater than 0.14.1, and uses Scala version 2.11.
+
+### Specify your dependency
+
+<span class="language-java">You can specify your dependency on the Samza Runner by adding the following to your `pom.xml`:</span>
+```java
+<dependency>
+  <groupId>org.apache.beam</groupId>
+  <artifactId>beam-runners-samza_2.11</artifactId>
+  <version>{{ site.release_latest }}</version>
+  <scope>runtime</scope>
+</dependency>
+
+<!-- Samza dependencies -->
+<dependency>
+  <groupId>org.apache.samza</groupId>
+  <artifactId>samza-api</artifactId>
+  <version>${samza.version}</version>
+</dependency>
+
+<dependency>
+  <groupId>org.apache.samza</groupId>
+  <artifactId>samza-core_2.11</artifactId>
+  <version>${samza.version}</version>
+</dependency>
+
+<dependency>
+  <groupId>org.apache.samza</groupId>
+  <artifactId>samza-kafka_2.11</artifactId>
+  <version>${samza.version}</version>
+  <scope>runtime</scope>
+</dependency>
+
+<dependency>
+  <groupId>org.apache.samza</groupId>
+  <artifactId>samza-kv_2.11</artifactId>
+  <version>${samza.version}</version>
+  <scope>runtime</scope>
+</dependency>
+
+<dependency>
+  <groupId>org.apache.samza</groupId>
+  <artifactId>samza-kv-rocksdb_2.11</artifactId>
+  <version>${samza.version}</version>
+  <scope>runtime</scope>
+</dependency>
+    
+```
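The dependency snippet above references a `${samza.version}` property that it does not define. As an illustration (not part of this patch), it could be declared in the `<properties>` section of the same `pom.xml`:

```xml
<!-- Illustrative sketch, not part of this patch: defines the samza.version
     property referenced by the Samza dependency declarations above. -->
<properties>
  <!-- Use a Samza version that satisfies the runner's prerequisites. -->
  <samza.version>0.14.1</samza.version>
</properties>
```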
+
+## Executing a pipeline with Samza Runner
+
+If you run your pipeline locally or deploy it to a standalone cluster with all the jars and resource files, no packaging is required. For example, the following command runs the WordCount example:
+
+```
+$ mvn exec:java -Dexec.mainClass=org.apache.beam.examples.WordCount \
+    -Psamza-runner \
+    -Dexec.args="--runner=SamzaRunner \
+      --inputFile=/path/to/input \
+      --output=/path/to/counts"
+```
+
+To deploy your pipeline to a YARN cluster, see the [instructions](https://samza.apache.org/startup/hello-samza/latest/) for deploying a sample Samza job. First, package your application jars and resource files into a `.tgz` archive file, and make it available for YARN containers to download. In your config, specify the URI of this `.tgz` file location:
+
+```
+yarn.package.path=${your_job_tgz_URI}
+
+job.name=${your_job_name}
+job.factory.class=org.apache.samza.job.yarn.YarnJobFactory
+job.coordinator.system=${job_coordinator_system}
+job.default.system=${job_default_system}
+```
+
+For more details on the configuration, see the [Samza Configuration Reference](https://samza.apache.org/learn/documentation/latest/jobs/configuration-table.html).
+
+Pass in the config file by setting the command-line argument `--configFilePath=/path/to/config.properties`. With that, you can run the main class of your Beam pipeline against a YARN Resource Manager, and the Samza Runner will submit a YARN job under the hood.
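For illustration (this class is hypothetical and not part of this patch), a main class might wire up the Samza runner programmatically instead of relying solely on command-line flags; it assumes `SamzaPipelineOptions` from the `beam-runners-samza` module and the standard Beam `PipelineOptionsFactory`:

```java
// Hypothetical sketch, not part of this patch. Assumes the Beam Java SDK and
// the beam-runners-samza module (declared above) are on the classpath.
import org.apache.beam.runners.samza.SamzaPipelineOptions;
import org.apache.beam.runners.samza.SamzaRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class MyBeamMain {
  public static void main(String[] args) {
    // Parses flags such as --configFilePath=/path/to/config.properties.
    SamzaPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(SamzaPipelineOptions.class);
    options.setRunner(SamzaRunner.class);

    Pipeline pipeline = Pipeline.create(options);
    // ... apply your transforms here ...
    pipeline.run().waitUntilFinish();
  }
}
```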
+
+## Pipeline options for the Samza Runner
+
+When executing your pipeline with the Samza Runner, you can use the following pipeline options.
+
+<table class="table table-bordered">
+<tr>
+  <th>Field</th>
+  <th>Description</th>
+  <th>Default Value</th>
+</tr>
+<tr>
+  <td><code>runner</code></td>
+  <td>The pipeline runner to use. This option allows you to determine the pipeline runner at runtime.</td>
+  <td>Set to <code>SamzaRunner</code> to run using Samza.</td>
+</tr>
+<tr>
+  <td><code>configFilePath</code></td>
+  <td>The config for Samza using a properties file.</td>
+  <td><code>empty</code>, i.e. use local execution.</td>
+</tr>
+<tr>
+  <td><code>configOverride</code></td>
+  <td>The config override to set programmatically.</td>
+  <td><code>empty</code>, i.e. use config file or local execution.</td>
+</tr>
+<tr>
+  <td><code>watermarkInterval</code></td>
+  <td>The interval to check for watermarks in milliseconds.</td>
+  <td><code>1000</code></td>
+</tr>
+<tr>
+  <td><code>systemBufferSize</code></td>
+  <td>The maximum number of messages to buffer for a given system.</td>
+  <td><code>5000</code></td>
+</tr>
+<tr>
+  <td><code>maxSourceParallelism</code></td>
+  <td>The maximum parallelism allowed for any data source.</td>
+  <td><code>1</code></td>
+</tr>
+<tr>
+  <td><code>storeBatchGetSize</code></td>
+  <td>The batch get size limit for the state store.</td>
+  <td><code>10000</code></td>
+</tr>
+</table>
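Tying the table back to the earlier WordCount invocation, a run that overrides a few of these defaults might look like the following (the option values here are illustrative, not recommendations from this patch):

```
$ mvn exec:java -Dexec.mainClass=org.apache.beam.examples.WordCount \
    -Psamza-runner \
    -Dexec.args="--runner=SamzaRunner \
      --configFilePath=/path/to/config.properties \
      --watermarkInterval=2000 \
      --maxSourceParallelism=4 \
      --inputFile=/path/to/input \
      --output=/path/to/counts"
```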
+
+## Monitoring your job
+
+You can monitor your pipeline job using metrics emitted from both Beam and Samza, e.g. Beam source metrics such as `elements_read` and `backlog_elements`, and Samza job metrics such as `job-healthy` and `process-envelopes`. A complete list of Samza metrics is in the [Samza Metrics Reference](https://samza.apache.org/learn/documentation/latest/container/metrics-table.html). You can view your job's metrics via JMX in development, and send the metrics to a graphing system such as [Graphite](http://graphiteapp.org/). For more details, please see [Samza Metrics](https://samza.apache.org/learn/documentation/latest/container/metrics.html).
+
+For a running Samza YARN job, you can use the YARN web UI to monitor the job status and check logs.
diff --git a/src/get-started/beam-overview.md b/src/get-started/beam-overview.md
index 1b2fbfc7e..68b16deff 100644
--- a/src/get-started/beam-overview.md
+++ b/src/get-started/beam-overview.md
@@ -37,6 +37,7 @@ Beam currently supports Runners that work with the following distributed process
 * Apache Gearpump (incubating) ![Apache Gearpump logo]({{ "/images/logos/runners/gearpump.png" | prepend: site.baseurl }})
 * Apache Spark ![Apache Spark logo]({{ "/images/logos/runners/spark.png" | prepend: site.baseurl }})
 * Google Cloud Dataflow ![Google Cloud Dataflow logo]({{ "/images/logos/runners/dataflow.png" | prepend: site.baseurl }})
+* Apache Samza ![Apache Samza logo]({{ "/images/logos/runners/samza.png" | prepend: site.baseurl }})
 
 **Note:** You can always execute your pipeline locally for testing and debugging purposes.
 
diff --git a/src/get-started/quickstart-java.md b/src/get-started/quickstart-java.md
index c80576f13..0775ab826 100644
--- a/src/get-started/quickstart-java.md
+++ b/src/get-started/quickstart-java.md
@@ -113,6 +113,11 @@ $ mvn compile exec:java -Dexec.mainClass=org.apache.beam.examples.WordCount \
      -Pdataflow-runner
 ```
 
+{:.runner-samza-local}
+```
+$ mvn compile exec:java -Dexec.mainClass=org.apache.beam.examples.WordCount \
+     -Dexec.args="--inputFile=pom.xml --output=/tmp/counts --runner=SamzaRunner" -Psamza-runner
+```
 
 ## Inspect the results
 
@@ -148,6 +153,11 @@ $ ls counts*
 $ gsutil ls gs://<your-gcs-bucket>/counts*
 ```
 
+{:.runner-samza-local}
+```
+$ ls /tmp/counts*
+```
+
 When you look into the contents of the file, you'll see that they contain unique words and the number of occurrences of each word. The order of elements within the file may differ because the Beam model does not generally guarantee ordering, again to allow runners to optimize for efficiency.
 
 {:.runner-direct}
@@ -228,6 +238,19 @@ barrenly: 1
 ...
 ```
 
+{:.runner-samza-local}
+```  
+$ more /tmp/counts*
+api: 7
+are: 2
+can: 2
+com: 14
+end: 14
+for: 14
+has: 2
+...
+```
+
 ## Next Steps
 
 * Learn more about the [Beam SDK for Java]({{ site.baseurl }}/documentation/sdks/java/)
diff --git a/src/get-started/wordcount-example.md b/src/get-started/wordcount-example.md
index 41f21f56f..6a7025b30 100644
--- a/src/get-started/wordcount-example.md
+++ b/src/get-started/wordcount-example.md
@@ -365,6 +365,12 @@ $ mvn compile exec:java -Dexec.mainClass=org.apache.beam.examples.WordCount \
      -Pdataflow-runner
 ```
 
+{:.runner-samza-local}
+```
+$ mvn compile exec:java -Dexec.mainClass=org.apache.beam.examples.WordCount \
+     -Dexec.args="--inputFile=pom.xml --output=counts --runner=SamzaRunner" -Psamza-runner
+```
+
 To view the full code in Java, see
 **[WordCount](https://github.com/apache/beam/blob/master/examples/java/src/main/java/org/apache/beam/examples/WordCount.java).**
 
@@ -406,6 +412,11 @@ python -m apache_beam.examples.wordcount --input gs://dataflow-samples/shakespea
                                          --temp_location gs://YOUR_GCS_BUCKET/tmp/
 ```
 
+{:.runner-samza-local}
+```
+This runner is not yet available for the Python SDK.
+```
+
 To view the full code in Python, see
 **[wordcount.py](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/wordcount.py).**
 
@@ -448,6 +459,11 @@ $ wordcount --input gs://dataflow-samples/shakespeare/kinglear.txt \
             --worker_harness_container_image=apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515
 ```
 
+{:.runner-samza-local}
+```
+This runner is not yet available for the Go SDK.
+```
+
 To view the full code in Go, see
 **[wordcount.go](https://github.com/apache/beam/blob/master/sdks/go/examples/wordcount/wordcount.go).**
 
@@ -676,6 +692,12 @@ $ mvn compile exec:java -Dexec.mainClass=org.apache.beam.examples.DebuggingWordC
      -Pdataflow-runner
 ```
 
+{:.runner-samza-local}
+```
+$ mvn compile exec:java -Dexec.mainClass=org.apache.beam.examples.DebuggingWordCount \
+     -Dexec.args="--runner=SamzaRunner --output=counts" -Psamza-runner
+```
+
 To view the full code in Java, see
 [DebuggingWordCount](https://github.com/apache/beam/blob/master/examples/java/src/main/java/org/apache/beam/examples/DebuggingWordCount.java).
 
@@ -717,6 +739,11 @@ python -m apache_beam.examples.wordcount_debugging --input gs://dataflow-samples
                                          --temp_location gs://YOUR_GCS_BUCKET/tmp/
 ```
 
+{:.runner-samza-local}
+```
+This runner is not yet available for the Python SDK.
+```
+
 To view the full code in Python, see
 **[wordcount_debugging.py](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/wordcount_debugging.py).**
 
@@ -759,6 +786,11 @@ $ debugging_wordcount --input gs://dataflow-samples/shakespeare/kinglear.txt \
                       --worker_harness_container_image=apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515
 ```
 
+{:.runner-samza-local}
+```
+This runner is not yet available for the Go SDK.
+```
+
 To view the full code in Go, see
 **[debugging_wordcount.go](https://github.com/apache/beam/blob/master/sdks/go/examples/debugging_wordcount/debugging_wordcount.go).**
 
@@ -981,6 +1013,12 @@ $ mvn compile exec:java -Dexec.mainClass=org.apache.beam.examples.WindowedWordCo
      -Pdataflow-runner
 ```
 
+{:.runner-samza-local}
+```
+$ mvn compile exec:java -Dexec.mainClass=org.apache.beam.examples.WindowedWordCount \
+     -Dexec.args="--runner=SamzaRunner --inputFile=pom.xml --output=counts" -Psamza-runner
+```
+
 To view the full code in Java, see
 **[WindowedWordCount](https://github.com/apache/beam/blob/master/examples/java/src/main/java/org/apache/beam/examples/WindowedWordCount.java).**
 
@@ -1026,6 +1064,11 @@ python -m apache_beam.examples.windowed_wordcount --input YOUR_INPUT_FILE \
                                          --temp_location gs://YOUR_GCS_BUCKET/tmp/
 ```
 
+{:.runner-samza-local}
+```
+This runner is not yet available for the Python SDK.
+```
+
 To view the full code in Python, see
 **[windowed_wordcount.py](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/windowed_wordcount.py).**
 
@@ -1068,6 +1111,11 @@ $ windowed_wordcount --input gs://dataflow-samples/shakespeare/kinglear.txt \
             --worker_harness_container_image=apache-docker-beam-snapshots-docker.bintray.io/beam/go:20180515
 ```
 
+{:.runner-samza-local}
+```
+This runner is not yet available for the Go SDK.
+```
+
 To view the full code in Go, see
 **[windowed_wordcount.go](https://github.com/apache/beam/blob/master/sdks/go/examples/windowed_wordcount/windowed_wordcount.go).**
 
@@ -1294,6 +1342,11 @@ python -m apache_beam.examples.streaming_wordcount \
   --streaming
 ```
 
+{:.runner-samza-local}
+```
+This runner is not yet available for the Python SDK.
+```
+
 To view the full code in Python, see
 **[streaming_wordcount.py](https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/streaming_wordcount.py).**
 
diff --git a/src/images/logo_samza.png b/src/images/logo_samza.png
new file mode 100644
index 000000000..88e5ba322
Binary files /dev/null and b/src/images/logo_samza.png differ
diff --git a/src/images/logos/runners/samza.png b/src/images/logos/runners/samza.png
new file mode 100644
index 000000000..88e5ba322
Binary files /dev/null and b/src/images/logos/runners/samza.png differ
diff --git a/src/index.md b/src/index.md
index bb5eaece5..a71d27e19 100644
--- a/src/index.md
+++ b/src/index.md
@@ -18,6 +18,9 @@ logos:
 - title: Gearpump
   image_url: /images/logo_gearpump.png
   url: http://gearpump.apache.org/
+- title: Samza
+  image_url: /images/logo_samza.png
+  url: http://samza.apache.org/
 
 pillars:
 - title: Unified


 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 117757)
    Time Spent: 4h 40m  (was: 4.5h)

> Samza runner
> ------------
>
>                 Key: BEAM-3079
>                 URL: https://issues.apache.org/jira/browse/BEAM-3079
>             Project: Beam
>          Issue Type: Wish
>          Components: runner-samza
>            Reporter: Xinyu Liu
>            Assignee: Kenneth Knowles
>            Priority: Major
>             Fix For: Not applicable
>
>          Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Apache Samza is a distributed data-processing platform which supports both 
> stream and batch processing. It'll be awesome if we can run BEAM's advanced 
> data transform and multi-language sdks on top of Samza.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
