Repository: kafka-site
Updated Branches:
  refs/heads/asf-site 04dce4fcf -> 970abca91


http://git-wip-us.apache.org/repos/asf/kafka-site/blob/970abca9/0101/streams.html
----------------------------------------------------------------------
diff --git a/0101/streams.html b/0101/streams.html
index 74620ec..ee63358 100644
--- a/0101/streams.html
+++ b/0101/streams.html
@@ -15,367 +15,410 @@
   ~ limitations under the License.
   ~-->
 
-<h3><a id="streams_overview" href="#streams_overview">9.1 Overview</a></h3>
-
-<p>
-Kafka Streams is a client library for processing and analyzing data stored in 
Kafka and either write the resulting data back to Kafka or send the final 
output to an external system. It builds upon important stream processing 
concepts such as properly distinguishing between event time and processing 
time, windowing support, and simple yet efficient management of application 
state.
-Kafka Streams has a <b>low barrier to entry</b>: You can quickly write and run 
a small-scale proof-of-concept on a single machine; and you only need to run 
additional instances of your application on multiple machines to scale up to 
high-volume production workloads. Kafka Streams transparently handles the load 
balancing of multiple instances of the same application by leveraging Kafka's 
parallelism model.
-</p>
-<p>
-Some highlights of Kafka Streams:
-</p>
-
-<ul>
-    <li>Designed as a <b>simple and lightweight client library</b>, which can 
be easily embedded in any Java application and integrated with any existing 
packaging, deployment and operational tools that users have for their streaming 
applications.</li>
-    <li>Has <b>no external dependencies on systems other than Apache Kafka 
itself</b> as the internal messaging layer; notably, it uses Kafka's 
partitioning model to horizontally scale processing while maintaining strong 
ordering guarantees.</li>
-    <li>Supports <b>fault-tolerant local state</b>, which enables very fast 
and efficient stateful operations like joins and windowed aggregations.</li>
-    <li>Employs <b>one-record-at-a-time processing</b> to achieve low 
processing latency, and supports <b>event-time based windowing 
operations</b>.</li>
-    <li>Offers necessary stream processing primitives, along with a 
<b>high-level Streams DSL</b> and a <b>low-level Processor API</b>.</li>
-
-</ul>
-
-<h3><a id="streams_developer" href="#streams_developer">9.2 Developer 
Guide</a></h3>
-
-<p>
-There is a <a href="#quickstart_kafkastreams">quickstart</a> example that 
provides how to run a stream processing program coded in the Kafka Streams 
library.
-This section focuses on how to write, configure, and execute a Kafka Streams 
application.
-</p>
-
-<h4><a id="streams_concepts" href="#streams_concepts">Core Concepts</a></h4>
-
-<p>
-We first summarize the key concepts of Kafka Streams.
-</p>
-
-<h5><a id="streams_topology" href="#streams_topology">Stream Processing 
Topology</a></h5>
-
-<ul>
-    <li>A <b>stream</b> is the most important abstraction provided by Kafka 
Streams: it represents an unbounded, continuously updating data set. A stream 
is an ordered, replayable, and fault-tolerant sequence of immutable data 
records, where a <b>data record</b> is defined as a key-value pair.</li>
-    <li>A <b>stream processing application</b> written in Kafka Streams 
defines its computational logic through one or more <b>processor 
topologies</b>, where a processor topology is a graph of stream processors 
(nodes) that are connected by streams (edges).</li>
-    <li>A <b>stream processor</b> is a node in the processor topology; it 
represents a processing step to transform data in streams by receiving one 
input record at a time from its upstream processors in the topology, applying 
its operation to it, and may subsequently producing one or more output records 
to its downstream processors.</li>
-</ul>
-
-<p>
-Kafka Streams offers two ways to define the stream processing topology: the <a 
href="#streams_dsl"><b>Kafka Streams DSL</b></a> provides
-the most common data transformation operations such as <code>map</code> and 
<code>filter</code>; the lower-level <a href="#streams_processor"><b>Processor 
API</b></a> allows
-developers define and connect custom processors as well as to interact with <a 
href="#streams_state">state stores</a>.
-</p>
-
-<h5><a id="streams_time" href="#streams_time">Time</a></h5>
-
-<p>
-A critical aspect in stream processing is the notion of <b>time</b>, and how 
it is modeled and integrated.
-For example, some operations such as <b>windowing</b> are defined based on 
time boundaries.
-</p>
-<p>
-Common notions of time in streams are:
-</p>
-
-<ul>
-    <li><b>Event time</b> - The point in time when an event or data record 
occurred, i.e. was originally created "at the source".</li>
-    <li><b>Processing time</b> - The point in time when the event or data 
record happens to be processed by the stream processing application, i.e. when 
the record is being consumed. The processing time may be milliseconds, hours, 
or days etc. later than the original event time.</li>
-    <li><b>Ingestion time</b> - The point in time when an event or data record 
is stored in a topic partition by a Kafka broker. The difference to event time 
is that this ingestion timestamp is generated when the record is appended to 
the target topic by the Kafka broker, not when the record is created "at the 
source". The difference to processing time is that processing time is when the 
stream processing application processes the record. For example, if a record is 
never processed, there is no notion of processing time for it, but it still has 
an ingestion time.
-</ul>
-<p>
-The choice between event-time and ingestion-time is actually done through the 
configuration of Kafka (not Kafka Streams): From Kafka 0.10.x onwards, 
timestamps are automatically embedded into Kafka messages. Depending on Kafka's 
configuration these timestamps represent event-time or ingestion-time. The 
respective Kafka configuration setting can be specified on the broker level or 
per topic. The default timestamp extractor in Kafka Streams will retrieve these 
embedded timestamps as-is. Hence, the effective time semantics of your 
application depend on the effective Kafka configuration for these embedded 
timestamps.
-</p>
-<p>
-Kafka Streams assigns a <b>timestamp</b> to every data record
-via the <code>TimestampExtractor</code> interface.
-Concrete implementations of this interface may retrieve or compute timestamps 
based on the actual contents of data records such as an embedded timestamp field
-to provide event-time semantics, or use any other approach such as returning 
the current wall-clock time at the time of processing,
-thereby yielding processing-time semantics to stream processing applications.
-Developers can thus enforce different notions of time depending on their 
business needs. For example,
-per-record timestamps describe the progress of a stream with regards to time 
(although records may be out-of-order within the stream) and
-are leveraged by time-dependent operations such as joins.
-</p>
-
-<p>
-  Finally, whenever a Kafka Streams application writes records to Kafka, then 
it will also assign timestamps to these new records. The way the timestamps are 
assigned depends on the context:
-  <ul>
-    <li> When new output records are generated via processing some input 
record, for example, <code>context.forward()</code> triggered in the 
<code>process()</code> function call, output record timestamps are inherited 
from input record timestamps directly.</li>
-    <li> When new output records are generated via periodic functions such as 
<code>punctuate()</code>, the output record timestamp is defined as the current 
internal time (obtained through <code>context.timestamp()</code>) of the stream 
task.</li>
-    <li> For aggregations, the timestamp of a resulting aggregate update 
record will be that of the latest arrived input record that triggered the 
update.</li>
-  </ul>
-</p>
-
-<h5><a id="streams_state" href="#streams_state">States</a></h5>
-
-<p>
-Some stream processing applications don't require state, which means the 
processing of a message is independent from
-the processing of all other messages.
-However, being able to maintain state opens up many possibilities for 
sophisticated stream processing applications: you
-can join input streams, or group and aggregate data records. Many such 
stateful operators are provided by the <a href="#streams_dsl"><b>Kafka Streams 
DSL</b></a>.
-</p>
-<p>
-Kafka Streams provides so-called <b>state stores</b>, which can be used by 
stream processing applications to store and query data.
-This is an important capability when implementing stateful operations.
-Every task in Kafka Streams embeds one or more state stores that can be 
accessed via APIs to store and query data required for processing.
-These state stores can either be a persistent key-value store, an in-memory 
hashmap, or another convenient data structure.
-Kafka Streams offers fault-tolerance and automatic recovery for local state 
stores.
-</p>
-<p>
-  Kafka Streams allows direct read-only queries of the state stores by 
methods, threads, processes or applications external to the stream processing 
application that created the state stores. This is provided through a feature 
called <b>Interactive Queries</b>. All stores are named and Interactive Queries 
exposes only the read operations of the underlying implementation. 
-</p>
-<br>
-<p>
-As we have mentioned above, the computational logic of a Kafka Streams 
application is defined as a <a href="#streams_topology">processor topology</a>.
-Currently Kafka Streams provides two sets of APIs to define the processor 
topology, which will be described in the subsequent sections.
-</p>
-
-<h4><a id="streams_processor" href="#streams_processor">Low-Level Processor 
API</a></h4>
-
-<h5><a id="streams_processor_process" 
href="#streams_processor_process">Processor</a></h5>
-
-<p>
-Developers can define their customized processing logic by implementing the 
<code>Processor</code> interface, which
-provides <code>process</code> and <code>punctuate</code> methods. The 
<code>process</code> method is performed on each
-of the received record; and the <code>punctuate</code> method is performed 
periodically based on elapsed time.
-In addition, the processor can maintain the current 
<code>ProcessorContext</code> instance variable initialized in the
-<code>init</code> method, and use the context to schedule the punctuation 
period (<code>context().schedule</code>), to
-forward the modified / new key-value pair to downstream processors 
(<code>context().forward</code>), to commit the current
-processing progress (<code>context().commit</code>), etc.
-</p>
-
-<pre>
-    public class MyProcessor extends Processor<String, String> {
-        private ProcessorContext context;
-        private KeyValueStore<String, Integer> kvStore;
-
-        @Override
-        @SuppressWarnings("unchecked")
-        public void init(ProcessorContext context) {
-            this.context = context;
-            this.context.schedule(1000);
-            this.kvStore = (KeyValueStore<String, Integer>) 
context.getStateStore("Counts");
-        }
-
-        @Override
-        public void process(String dummy, String line) {
-            String[] words = line.toLowerCase().split(" ");
-
-            for (String word : words) {
-                Integer oldValue = this.kvStore.get(word);
-
-                if (oldValue == null) {
-                    this.kvStore.put(word, 1);
-                } else {
-                    this.kvStore.put(word, oldValue + 1);
+<script><!--#include virtual="js/templateData.js" --></script>
+
+<script id="streams-template" type="text/x-handlebars-template">
+    <h1>Streams</h1>
+
+        <ol class="toc">
+            <li>
+                <a href="#streams_overview">Overview</a>
+            </li>
+            <li>
+                <a href="#streams_developer">Developer guide</a>
+                <ul>
+                    <li><a href="#streams_concepts">Core concepts</a>
+                    <li><a href="#streams_processor">Low-level processor 
API</a>
+                    <li><a href="#streams_dsl">High-level streams DSL</a>
+                </ul>
+            </li>
+        </ol>
+
+        <h2><a id="streams_overview" href="#overview">Overview</a></h2>
+
+        <p>
+        Kafka Streams is a client library for processing and analyzing data stored in Kafka, and for either writing the resulting data back to Kafka or sending the final output to an external system. It builds upon important stream processing concepts such as properly distinguishing between event time and processing time, windowing support, and simple yet efficient management of application state.
+        Kafka Streams has a <b>low barrier to entry</b>: You can quickly write and run a small-scale proof-of-concept on a single machine, and you only need to run additional instances of your application on multiple machines to scale up to high-volume production workloads. Kafka Streams transparently handles the load balancing of multiple instances of the same application by leveraging Kafka's parallelism model.
+        </p>
+        <p>
+        Some highlights of Kafka Streams:
+        </p>
+
+        <ul>
+            <li>Designed as a <b>simple and lightweight client library</b>, 
which can be easily embedded in any Java application and integrated with any 
existing packaging, deployment and operational tools that users have for their 
streaming applications.</li>
+            <li>Has <b>no external dependencies on systems other than Apache 
Kafka itself</b> as the internal messaging layer; notably, it uses Kafka's 
partitioning model to horizontally scale processing while maintaining strong 
ordering guarantees.</li>
+            <li>Supports <b>fault-tolerant local state</b>, which enables very 
fast and efficient stateful operations like joins and windowed 
aggregations.</li>
+            <li>Employs <b>one-record-at-a-time processing</b> to achieve low 
processing latency, and supports <b>event-time based windowing 
operations</b>.</li>
+            <li>Offers necessary stream processing primitives, along with a 
<b>high-level Streams DSL</b> and a <b>low-level Processor API</b>.</li>
+
+        </ul>
+
+        <h2><a id="streams_developer" href="#streams_developer">Developer 
Guide</a></h2>
+
+        <p>
+        There is a <a href="#quickstart_kafkastreams">quickstart</a> example 
that provides how to run a stream processing program coded in the Kafka Streams 
library.
+        This section focuses on how to write, configure, and execute a Kafka 
Streams application.
+        </p>
+
+        <h4><a id="streams_concepts" href="#streams_concepts">Core 
Concepts</a></h4>
+
+        <p>
+        We first summarize the key concepts of Kafka Streams.
+        </p>
+
+        <h5><a id="streams_topology" href="#streams_topology">Stream 
Processing Topology</a></h5>
+
+        <ul>
+            <li>A <b>stream</b> is the most important abstraction provided by 
Kafka Streams: it represents an unbounded, continuously updating data set. A 
stream is an ordered, replayable, and fault-tolerant sequence of immutable data 
records, where a <b>data record</b> is defined as a key-value pair.</li>
+            <li>A <b>stream processing application</b> written in Kafka 
Streams defines its computational logic through one or more <b>processor 
topologies</b>, where a processor topology is a graph of stream processors 
(nodes) that are connected by streams (edges).</li>
+            <li>A <b>stream processor</b> is a node in the processor topology; it represents a processing step that transforms data in streams by receiving one input record at a time from its upstream processors in the topology, applying its operation to it, and possibly producing one or more output records to its downstream processors.</li>
+        </ul>
+
+        <p>
+        Kafka Streams offers two ways to define the stream processing 
topology: the <a href="#streams_dsl"><b>Kafka Streams DSL</b></a> provides
+        the most common data transformation operations such as 
<code>map</code> and <code>filter</code>; the lower-level <a 
href="#streams_processor"><b>Processor API</b></a> allows
+        developers to define and connect custom processors as well as to interact with <a href="#streams_state">state stores</a>.
+        </p>
+
+        <h5><a id="streams_time" href="#streams_time">Time</a></h5>
+
+        <p>
+        A critical aspect in stream processing is the notion of <b>time</b>, 
and how it is modeled and integrated.
+        For example, some operations such as <b>windowing</b> are defined 
based on time boundaries.
+        </p>
+        <p>
+        Common notions of time in streams are:
+        </p>
+
+        <ul>
+            <li><b>Event time</b> - The point in time when an event or data 
record occurred, i.e. was originally created "at the source".</li>
+            <li><b>Processing time</b> - The point in time when the event or data record happens to be processed by the stream processing application, i.e. when the record is being consumed. The processing time may be milliseconds, hours, or even days later than the original event time.</li>
+            <li><b>Ingestion time</b> - The point in time when an event or 
data record is stored in a topic partition by a Kafka broker. The difference to 
event time is that this ingestion timestamp is generated when the record is 
appended to the target topic by the Kafka broker, not when the record is 
created "at the source". The difference to processing time is that processing 
time is when the stream processing application processes the record. For 
example, if a record is never processed, there is no notion of processing time 
for it, but it still has an ingestion time.</li>
+        </ul>
+        <p>
+        The choice between event-time and ingestion-time is actually made through the configuration of Kafka (not Kafka Streams): From Kafka 0.10.x 
onwards, timestamps are automatically embedded into Kafka messages. Depending 
on Kafka's configuration these timestamps represent event-time or 
ingestion-time. The respective Kafka configuration setting can be specified on 
the broker level or per topic. The default timestamp extractor in Kafka Streams 
will retrieve these embedded timestamps as-is. Hence, the effective time 
semantics of your application depend on the effective Kafka configuration for 
these embedded timestamps.
+        </p>
+        <p>
+        Kafka Streams assigns a <b>timestamp</b> to every data record
+        via the <code>TimestampExtractor</code> interface.
+        Concrete implementations of this interface may retrieve or compute 
timestamps based on the actual contents of data records such as an embedded 
timestamp field
+        to provide event-time semantics, or use any other approach such as 
returning the current wall-clock time at the time of processing,
+        thereby yielding processing-time semantics to stream processing 
applications.
+        Developers can thus enforce different notions of time depending on 
their business needs. For example,
+        per-record timestamps describe the progress of a stream with regard to time (although records may be out-of-order within the stream) and
+        are leveraged by time-dependent operations such as joins.
+        </p>
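+
+        <p>
+        As an illustration, below is a minimal sketch of a custom extractor that provides event-time semantics. It assumes a hypothetical <code>Order</code> value type that carries its own event timestamp; for any other value it falls back to the timestamp embedded in the Kafka message. Such an extractor would then be registered via the timestamp extractor setting in <code>StreamsConfig</code>.
+        </p>
+
+        <pre>
+            public class OrderTimestampExtractor implements TimestampExtractor {
+
+                @Override
+                public long extract(ConsumerRecord<Object, Object> record) {
+                    // use the event timestamp carried inside the record value, if present
+                    if (record.value() instanceof Order) {
+                        return ((Order) record.value()).getTimestampMs();
+                    }
+
+                    // otherwise fall back to the timestamp embedded in the Kafka message
+                    // (event-time or ingestion-time, depending on the broker/topic configuration)
+                    return record.timestamp();
+                }
+            }
+        </pre>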
+
+        <p>
+        Finally, whenever a Kafka Streams application writes records to Kafka, it will also assign timestamps to these new records. The way the timestamps are assigned depends on the context:
+        <ul>
+            <li> When new output records are generated via processing some 
input record, for example, <code>context.forward()</code> triggered in the 
<code>process()</code> function call, output record timestamps are inherited 
from input record timestamps directly.</li>
+            <li> When new output records are generated via periodic functions 
such as <code>punctuate()</code>, the output record timestamp is defined as the 
current internal time (obtained through <code>context.timestamp()</code>) of 
the stream task.</li>
+            <li> For aggregations, the timestamp of a resulting aggregate 
update record will be that of the latest arrived input record that triggered 
the update.</li>
+        </ul>
+        </p>
+
+        <h5><a id="streams_state" href="#streams_state">States</a></h5>
+
+        <p>
+        Some stream processing applications don't require state, which means the processing of a message is independent of the processing of all other messages.
+        However, being able to maintain state opens up many possibilities for 
sophisticated stream processing applications: you
+        can join input streams, or group and aggregate data records. Many such 
stateful operators are provided by the <a href="#streams_dsl"><b>Kafka Streams 
DSL</b></a>.
+        </p>
+        <p>
+        Kafka Streams provides so-called <b>state stores</b>, which can be 
used by stream processing applications to store and query data.
+        This is an important capability when implementing stateful operations.
+        Every task in Kafka Streams embeds one or more state stores that can 
be accessed via APIs to store and query data required for processing.
+        These state stores can either be a persistent key-value store, an 
in-memory hashmap, or another convenient data structure.
+        Kafka Streams offers fault-tolerance and automatic recovery for local 
state stores.
+        </p>
+        <p>
+        Kafka Streams allows direct read-only queries of the state stores by 
methods, threads, processes or applications external to the stream processing 
application that created the state stores. This is provided through a feature 
called <b>Interactive Queries</b>. All stores are named and Interactive Queries 
exposes only the read operations of the underlying implementation. 
+        </p>
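+        <p>
+        As a rough sketch of how Interactive Queries can be used (assuming a running <code>KafkaStreams</code> instance named <code>streams</code> and a key-value store registered under the hypothetical name "Counts"):
+        </p>
+
+        <pre>
+            // obtain a read-only view of the local "Counts" key-value store
+            ReadOnlyKeyValueStore<String, Long> counts =
+                streams.store("Counts", QueryableStoreTypes.<String, Long>keyValueStore());
+
+            // point lookup against the store
+            Long count = counts.get("alice");
+        </pre>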
+        <br>
+        <p>
+        As we have mentioned above, the computational logic of a Kafka Streams 
application is defined as a <a href="#streams_topology">processor topology</a>.
+        Currently Kafka Streams provides two sets of APIs to define the 
processor topology, which will be described in the subsequent sections.
+        </p>
+
+        <h4><a id="streams_processor" href="#streams_processor">Low-Level 
Processor API</a></h4>
+
+        <h5><a id="streams_processor_process" 
href="#streams_processor_process">Processor</a></h5>
+
+        <p>
+        Developers can define their customized processing logic by implementing the <code>Processor</code> interface, which
+        provides the <code>process</code> and <code>punctuate</code> methods. The <code>process</code> method is invoked on each
+        received record, and the <code>punctuate</code> method is invoked periodically based on elapsed time.
+        In addition, the processor can keep a reference to the <code>ProcessorContext</code> instance passed to its
+        <code>init</code> method, and use the context to schedule the punctuation period (<code>context().schedule</code>), to
+        forward a modified or new key-value pair to downstream processors (<code>context().forward</code>), to commit the current
+        processing progress (<code>context().commit</code>), etc.
+        </p>
+
+        <pre>
+            public class MyProcessor implements Processor<String, String> {
+                private ProcessorContext context;
+                private KeyValueStore<String, Integer> kvStore;
+
+                @Override
+                @SuppressWarnings("unchecked")
+                public void init(ProcessorContext context) {
+                    this.context = context;
+                    this.context.schedule(1000);
+                    this.kvStore = (KeyValueStore<String, Integer>) 
context.getStateStore("Counts");
                 }
-            }
-        }
 
-        @Override
-        public void punctuate(long timestamp) {
-            KeyValueIterator<String, Integer> iter = this.kvStore.all();
+                @Override
+                public void process(String dummy, String line) {
+                    String[] words = line.toLowerCase().split(" ");
 
-            while (iter.hasNext()) {
-                KeyValue<String, Integer> entry = iter.next();
-                context.forward(entry.key, entry.value.toString());
-            }
+                    for (String word : words) {
+                        Integer oldValue = this.kvStore.get(word);
 
-            iter.close();
-            context.commit();
-        }
+                        if (oldValue == null) {
+                            this.kvStore.put(word, 1);
+                        } else {
+                            this.kvStore.put(word, oldValue + 1);
+                        }
+                    }
+                }
+
+                @Override
+                public void punctuate(long timestamp) {
+                    KeyValueIterator<String, Integer> iter = 
this.kvStore.all();
+
+                    while (iter.hasNext()) {
+                        KeyValue<String, Integer> entry = iter.next();
+                        context.forward(entry.key, entry.value.toString());
+                    }
+
+                    iter.close();
+                    context.commit();
+                }
+
+                @Override
+                public void close() {
+                    this.kvStore.close();
+                }
+            };
+        </pre>
+
+        <p>
+        In the above implementation, the following actions are performed:
+
+        <ul>
+            <li>In the <code>init</code> method, schedule the punctuation 
every 1 second and retrieve the local state store by its name "Counts".</li>
+            <li>In the <code>process</code> method, upon each received record, split the value string into words and update their counts in the state store (state stores are covered later in this section).</li>
+            <li>In the <code>punctuate</code> method, iterate over the local state store, send the aggregated counts to the downstream processor, and commit the current stream state.</li>
+        </ul>
+        </p>
+
+        <h5><a id="streams_processor_topology" 
href="#streams_processor_topology">Processor Topology</a></h5>
+
+        <p>
+        With the customized processors defined in the Processor API, 
developers can use the <code>TopologyBuilder</code> to build a processor 
topology
+        by connecting these processors together:
 
-        @Override
-        public void close() {
-            this.kvStore.close();
-        }
-    };
-</pre>
+        <pre>
+            TopologyBuilder builder = new TopologyBuilder();
 
-<p>
-In the above implementation, the following actions are performed:
+            builder.addSource("SOURCE", "src-topic")
 
-<ul>
-    <li>In the <code>init</code> method, schedule the punctuation every 1 
second and retrieve the local state store by its name "Counts".</li>
-    <li>In the <code>process</code> method, upon each received record, split 
the value string into words, and update their counts into the state store (we 
will talk about this feature later in the section).</li>
-    <li>In the <code>punctuate</code> method, iterate the local state store 
and send the aggregated counts to the downstream processor, and commit the 
current stream state.</li>
-</ul>
-</p>
+                .addProcessor("PROCESS1", MyProcessor1::new /* the 
ProcessorSupplier that can generate MyProcessor1 */, "SOURCE")
+                .addProcessor("PROCESS2", MyProcessor2::new /* the 
ProcessorSupplier that can generate MyProcessor2 */, "PROCESS1")
+                .addProcessor("PROCESS3", MyProcessor3::new /* the 
ProcessorSupplier that can generate MyProcessor3 */, "PROCESS1")
 
-<h5><a id="streams_processor_topology" 
href="#streams_processor_topology">Processor Topology</a></h5>
+                .addSink("SINK1", "sink-topic1", "PROCESS1")
+                .addSink("SINK2", "sink-topic2", "PROCESS2")
+                .addSink("SINK3", "sink-topic3", "PROCESS3");
+        </pre>
 
-<p>
-With the customized processors defined in the Processor API, developers can 
use the <code>TopologyBuilder</code> to build a processor topology
-by connecting these processors together:
+        There are several steps in the above code to build the topology, and here is a quick walk-through:
 
-<pre>
-    TopologyBuilder builder = new TopologyBuilder();
+        <ul>
+            <li>First of all, a source node named "SOURCE" is added to the topology using the <code>addSource</code> method, with one Kafka topic "src-topic" fed to it.</li>
+            <li>Three processor nodes are then added using the <code>addProcessor</code> method; here the first processor is a child of the "SOURCE" node, but is the parent of the other two processors (the <code>ProcessorSupplier</code> arguments are illustrated in the sketch after this list).</li>
+            <li>Finally, three sink nodes are added to complete the topology using the <code>addSink</code> method, each piping from a different parent processor node and writing to a separate topic.</li>
+        </ul>
+        </p>
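+
+        <p>
+        Note that the <code>MyProcessor1::new</code> argument passed to <code>addProcessor</code> above is simply a <code>ProcessorSupplier</code>. For illustration, a sketch of an equivalent explicit supplier (without Java 8 method references) could look like this:
+        </p>
+
+        <pre>
+            // an explicit ProcessorSupplier, equivalent to the MyProcessor1::new reference above;
+            // Kafka Streams calls get() to create a separate Processor instance for each stream task
+            ProcessorSupplier<String, String> process1Supplier = new ProcessorSupplier<String, String>() {
+                @Override
+                public Processor<String, String> get() {
+                    return new MyProcessor1();
+                }
+            };
+
+            builder.addProcessor("PROCESS1", process1Supplier, "SOURCE");
+        </pre>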
 
-    builder.addSource("SOURCE", "src-topic")
+        <h5><a id="streams_processor_statestore" 
href="#streams_processor_statestore">Local State Store</a></h5>
 
-        .addProcessor("PROCESS1", MyProcessor1::new /* the ProcessorSupplier 
that can generate MyProcessor1 */, "SOURCE")
-        .addProcessor("PROCESS2", MyProcessor2::new /* the ProcessorSupplier 
that can generate MyProcessor2 */, "PROCESS1")
-        .addProcessor("PROCESS3", MyProcessor3::new /* the ProcessorSupplier 
that can generate MyProcessor3 */, "PROCESS1")
-
-        .addSink("SINK1", "sink-topic1", "PROCESS1")
-        .addSink("SINK2", "sink-topic2", "PROCESS2")
-        .addSink("SINK3", "sink-topic3", "PROCESS3");
-</pre>
-
-There are several steps in the above code to build the topology, and here is a 
quick walk through:
-
-<ul>
-    <li>First of all a source node named "SOURCE" is added to the topology 
using the <code>addSource</code> method, with one Kafka topic "src-topic" fed 
to it.</li>
-    <li>Three processor nodes are then added using the 
<code>addProcessor</code> method; here the first processor is a child of the 
"SOURCE" node, but is the parent of the other two processors.</li>
-    <li>Finally three sink nodes are added to complete the topology using the 
<code>addSink</code> method, each piping from a different parent processor node 
and writing to a separate topic.</li>
-</ul>
-</p>
-
-<h5><a id="streams_processor_statestore" 
href="#streams_processor_statestore">Local State Store</a></h5>
-
-<p>
-Note that the Processor API is not limited to only accessing the current 
records as they arrive, but can also maintain local state stores
-that keep recently arrived records to use in stateful processing operations 
such as aggregation or windowed joins.
-To take advantage of this local states, developers can use the 
<code>TopologyBuilder.addStateStore</code> method when building the
-processor topology to create the local state and associate it with the 
processor nodes that needs to access it; or they can connect a created
-local state store with the existing processor nodes through 
<code>TopologyBuilder.connectProcessorAndStateStores</code>.
-
-<pre>
-    TopologyBuilder builder = new TopologyBuilder();
-
-    builder.addSource("SOURCE", "src-topic")
-
-        .addProcessor("PROCESS1", MyProcessor1::new, "SOURCE")
-        // create the in-memory state store "COUNTS" associated with processor 
"PROCESS1"
-        
.addStateStore(Stores.create("COUNTS").withStringKeys().withStringValues().inMemory().build(),
 "PROCESS1")
-        .addProcessor("PROCESS2", MyProcessor3::new /* the ProcessorSupplier 
that can generate MyProcessor3 */, "PROCESS1")
-        .addProcessor("PROCESS3", MyProcessor3::new /* the ProcessorSupplier 
that can generate MyProcessor3 */, "PROCESS1")
-
-        // connect the state store "COUNTS" with processor "PROCESS2"
-        .connectProcessorAndStateStores("PROCESS2", "COUNTS");
-
-        .addSink("SINK1", "sink-topic1", "PROCESS1")
-        .addSink("SINK2", "sink-topic2", "PROCESS2")
-        .addSink("SINK3", "sink-topic3", "PROCESS3");
-</pre>
-
-</p>
-
-In the next section we present another way to build the processor topology: 
the Kafka Streams DSL.
-
-<h4><a id="streams_dsl" href="#streams_dsl">High-Level Streams DSL</a></h4>
-
-To build a processor topology using the Streams DSL, developers can apply the 
<code>KStreamBuilder</code> class, which is extended from the 
<code>TopologyBuilder</code>.
-A simple example is included with the source code for Kafka in the 
<code>streams/examples</code> package. The rest of this section will walk
-through some code to demonstrate the key steps in creating a topology using 
the Streams DSL, but we recommend developers to read the full example source
-codes for details. 
-
-<h5><a id="streams_kstream_ktable" href="#streams_kstream_ktable">KStream and 
KTable</a></h5>
-The DSL uses two main abstractions. A <b>KStream</b> is an abstraction of a 
record stream, where each data record represents a self-contained datum in the 
unbounded data set. A <b>KTable</b> is an abstraction of a changelog stream, 
where each data record represents an update. More precisely, the value in a 
data record is considered to be an update of the last value for the same record 
key, if any (if a corresponding key doesn't exist yet, the update will be 
considered a create). To illustrate the difference between KStreams and 
KTables, let’s imagine the following two data records are being sent to the 
stream: <code>("alice", 1) --> ("alice", 3)</code>. If these records a KStream 
and the stream processing application were to sum the values it would return 
<code>4</code>. If these records were a KTable, the return would be 
<code>3</code>, since the last record would be considered as an update.
-
-
-
-<h5><a id="streams_dsl_source" href="#streams_dsl_source">Create Source 
Streams from Kafka</a></h5>
-
-<p>
-Either a <b>record stream</b> (defined as <code>KStream</code>) or a 
<b>changelog stream</b> (defined as <code>KTable</code>)
-can be created as a source stream from one or more Kafka topics (for 
<code>KTable</code> you can only create the source stream
-from a single topic).
-</p>
-
-<pre>
-    KStreamBuilder builder = new KStreamBuilder();
-
-    KStream<String, GenericRecord> source1 = builder.stream("topic1", 
"topic2");
-    KTable<String, GenericRecord> source2 = builder.table("topic3", 
"stateStoreName");
-</pre>
-
-<h5><a id="streams_dsl_windowing" href="#streams_dsl_windowing">Windowing a 
stream</a></h5>
-A stream processor may need to divide data records into time buckets, i.e. to 
<b>window</b> the stream by time. This is usually needed for join and 
aggregation operations, etc. Kafka Streams currently defines the following 
types of windows:
-<ul>
-  <li><b>Hopping time windows</b> are windows based on time intervals. They 
model fixed-sized, (possibly) overlapping windows. A hopping window is defined 
by two properties: the window's size and its advance interval (aka "hop"). The 
advance interval specifies by how much a window moves forward relative to the 
previous one. For example, you can configure a hopping window with a size 5 
minutes and an advance interval of 1 minute. Since hopping windows can overlap 
a data record may belong to more than one such windows.</li>
-  <li><b>Tumbling time windows</b> are a special case of hopping time windows 
and, like the latter, are windows based on time intervals. They model 
fixed-size, non-overlapping, gap-less windows. A tumbling window is defined by 
a single property: the window's size. A tumbling window is a hopping window 
whose window size is equal to its advance interval. Since tumbling windows 
never overlap, a data record will belong to one and only one window.</li>
-  <li><b>Sliding windows</b> model a fixed-size window that slides 
continuously over the time axis; here, two data records are said to be included 
in the same window if the difference of their timestamps is within the window 
size. Thus, sliding windows are not aligned to the epoch, but on the data 
record timestamps. In Kafka Streams, sliding windows are used only for join 
operations, and can be specified through the <code>JoinWindows</code> 
class.</li>
-  </ul>
-
-<h5><a id="streams_dsl_joins" href="#streams_dsl_joins">Joins</a></h5>
-A <b>join</b> operation merges two streams based on the keys of their data 
records, and yields a new stream. A join over record streams usually needs to 
be performed on a windowing basis because otherwise the number of records that 
must be maintained for performing the join may grow indefinitely. In Kafka 
Streams, you may perform the following join operations:
-<ul>
-  <li><b>KStream-to-KStream Joins</b> are always windowed joins, since 
otherwise the memory and state required to compute the join would grow 
infinitely in size. Here, a newly received record from one of the streams is 
joined with the other stream's records within the specified window interval to 
produce one result for each matching pair based on user-provided 
<code>ValueJoiner</code>. A new <code>KStream</code> instance representing the 
result stream of the join is returned from this operator.</li>
-  
-  <li><b>KTable-to-KTable Joins</b> are join operations designed to be 
consistent with the ones in relational databases. Here, both changelog streams 
are materialized into local state stores first. When a new record is received 
from one of the streams, it is joined with the other stream's materialized 
state stores to produce one result for each matching pair based on 
user-provided ValueJoiner. A new <code>KTable</code> instance representing the 
result stream of the join, which is also a changelog stream of the represented 
table, is returned from this operator.</li>
-  <li><b>KStream-to-KTable Joins</b> allow you to perform table lookups 
against a changelog stream (<code>KTable</code>) upon receiving a new record 
from another record stream (KStream). An example use case would be to enrich a 
stream of user activities (<code>KStream</code>) with the latest user profile 
information (<code>KTable</code>). Only records received from the record stream 
will trigger the join and produce results via <code>ValueJoiner</code>, not 
vice versa (i.e., records received from the changelog stream will be used only 
to update the materialized state store). A new <code>KStream</code> instance 
representing the result stream of the join is returned from this operator.</li>
-  </ul>
-
-Depending on the operands the following join operations are supported: 
<b>inner joins</b>, <b>outer joins</b> and <b>left joins</b>. Their semantics 
are similar to the corresponding operators in relational databases.
-a
-<h5><a id="streams_dsl_transform" href="#streams_dsl_transform">Transform a 
stream</a></h5>
-
-<p>
-There is a list of transformation operations provided for <code>KStream</code> 
and <code>KTable</code> respectively.
-Each of these operations may generate either one or more <code>KStream</code> 
and <code>KTable</code> objects and
-can be translated into one or more connected processors into the underlying 
processor topology.
-All these transformation methods can be chained together to compose a complex 
processor topology.
-Since <code>KStream</code> and <code>KTable</code> are strongly typed, all 
these transformation operations are defined as
-generics functions where users could specify the input and output data types.
-</p>
-
-<p>
-Among these transformations, <code>filter</code>, <code>map</code>, 
<code>mapValues</code>, etc, are stateless
-transformation operations and can be applied to both <code>KStream</code> and 
<code>KTable</code>,
-where users can usually pass a customized function to these functions as a 
parameter, such as <code>Predicate</code> for <code>filter</code>,
-<code>KeyValueMapper</code> for <code>map</code>, etc:
-
-</p>
-
-<pre>
-    // written in Java 8+, using lambda expressions
-    KStream<String, GenericRecord> mapped = source1.mapValue(record -> 
record.get("category"));
-</pre>
-
-<p>
-Stateless transformations, by definition, do not depend on any state for 
processing, and hence implementation-wise
-they do not require a state store associated with the stream processor; 
Stateful transformations, on the other hand,
-require accessing an associated state for processing and producing outputs.
-For example, in <code>join</code> and <code>aggregate</code> operations, a 
windowing state is usually used to store all the received records
-within the defined window boundary so far. The operators can then access these 
accumulated records in the store and compute
-based on them.
-</p>
-
-<pre>
-    // written in Java 8+, using lambda expressions
-    KTable<Windowed<String>, Long> counts = source1.groupByKey().aggregate(
-        () -> 0L,  // initial value
-        (aggKey, value, aggregate) -> aggregate + 1L,   // aggregating value
-        TimeWindows.of("counts", 5000L).advanceBy(1000L), // intervals in 
milliseconds
-        Serdes.Long() // serde for aggregated value
-    );
-
-    KStream<String, String> joined = source1.leftJoin(source2,
-        (record1, record2) -> record1.get("user") + "-" + 
record2.get("region");
-    );
-</pre>
-
-<h5><a id="streams_dsl_sink" href="#streams_dsl_sink">Write streams back to 
Kafka</a></h5>
-
-<p>
-At the end of the processing, users can choose to (continuously) write the 
final resulted streams back to a Kafka topic through
-<code>KStream.to</code> and <code>KTable.to</code>.
-</p>
-
-<pre>
-    joined.to("topic4");
-</pre>
-
-If your application needs to continue reading and processing the records after 
they have been materialized
-to a topic via <code>to</code> above, one option is to construct a new stream 
that reads from the output topic;
-Kafka Streams provides a convenience method called <code>through</code>:
-
-<pre>
-    // equivalent to
-    //
-    // joined.to("topic4");
-    // materialized = builder.stream("topic4");
-    KStream<String, String> materialized = joined.through("topic4");
-</pre>
-
-
-<br>
-<p>
-Besides defining the topology, developers will also need to configure their 
applications
-in <code>StreamsConfig</code> before running it. A complete list of
-Kafka Streams configs can be found <a href="#streamsconfigs"><b>here</b></a>.
-</p>
+        <p>
+        Note that the Processor API is not limited to only accessing the current records as they arrive, but can also maintain local state stores
+        that keep recently arrived records to use in stateful processing operations such as aggregation or windowed joins.
+        To take advantage of such local state, developers can use the <code>TopologyBuilder.addStateStore</code> method when building the
+        processor topology to create the local state store and associate it with the processor nodes that need to access it; or they can connect an already created
+        local state store with existing processor nodes through <code>TopologyBuilder.connectProcessorAndStateStores</code>.
+
+        <pre>
+            TopologyBuilder builder = new TopologyBuilder();
+
+            builder.addSource("SOURCE", "src-topic")
+
+                .addProcessor("PROCESS1", MyProcessor1::new, "SOURCE")
+                // create the in-memory state store "COUNTS" associated with 
processor "PROCESS1"
+                
.addStateStore(Stores.create("COUNTS").withStringKeys().withStringValues().inMemory().build(),
 "PROCESS1")
+                .addProcessor("PROCESS2", MyProcessor3::new /* the 
ProcessorSupplier that can generate MyProcessor3 */, "PROCESS1")
+                .addProcessor("PROCESS3", MyProcessor3::new /* the 
ProcessorSupplier that can generate MyProcessor3 */, "PROCESS1")
+
+                // connect the state store "COUNTS" with processor "PROCESS2"
+                .connectProcessorAndStateStores("PROCESS2", "COUNTS");
+
+                .addSink("SINK1", "sink-topic1", "PROCESS1")
+                .addSink("SINK2", "sink-topic2", "PROCESS2")
+                .addSink("SINK3", "sink-topic3", "PROCESS3");
+        </pre>
+
+        </p>
+
+        In the next section we present another way to build the processor 
topology: the Kafka Streams DSL.
+
+        <h4><a id="streams_dsl" href="#streams_dsl">High-Level Streams 
DSL</a></h4>
+
+        To build a processor topology using the Streams DSL, developers can use the <code>KStreamBuilder</code> class, which extends the <code>TopologyBuilder</code>.
+        A simple example is included with the source code for Kafka in the <code>streams/examples</code> package. The rest of this section will walk
+        through some code to demonstrate the key steps in creating a topology using the Streams DSL, but we recommend developers read the full example source
+        code for details.
+
+        <h5><a id="streams_kstream_ktable" 
href="#streams_kstream_ktable">KStream and KTable</a></h5>
+        The DSL uses two main abstractions. A <b>KStream</b> is an abstraction of a record stream, where each data record represents a self-contained datum in the unbounded data set. A <b>KTable</b> is an abstraction of a changelog stream, where each data record represents an update. More precisely, the value in a data record is considered to be an update of the last value for the same record key, if any (if a corresponding key doesn't exist yet, the update will be considered a create). To illustrate the difference between KStreams and KTables, let’s imagine the following two data records are being sent to the stream: <code>("alice", 1) --> ("alice", 3)</code>. If these records were a KStream and the stream processing application were to sum the values, it would return <code>4</code>. If these records were a KTable, the return would be <code>3</code>, since the last record would be considered an update.
+
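+        <p>
+        For illustration, the summing example above could be expressed as a record-stream aggregation roughly as follows (the topic, store name and serdes are hypothetical, assuming String keys and Integer values):
+        </p>
+
+        <pre>
+            // consume the records as a record stream (KStream) and sum the values per key:
+            // ("alice", 1) followed by ("alice", 3) yields ("alice", 4)
+            KStream<String, Integer> clicks = builder.stream(Serdes.String(), Serdes.Integer(), "clicks-topic");
+
+            KTable<String, Integer> sums = clicks
+                .groupByKey(Serdes.String(), Serdes.Integer())
+                .reduce((v1, v2) -> v1 + v2, "clicks-sum-store");
+
+            // if the same records were instead consumed as a changelog stream (KTable),
+            // only the latest value per key would be retained: ("alice", 3)
+        </pre>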
+
+
+        <h5><a id="streams_dsl_source" href="#streams_dsl_source">Create 
Source Streams from Kafka</a></h5>
+
+        <p>
+        Either a <b>record stream</b> (defined as <code>KStream</code>) or a 
<b>changelog stream</b> (defined as <code>KTable</code>)
+        can be created as a source stream from one or more Kafka topics (for 
<code>KTable</code> you can only create the source stream
+        from a single topic).
+        </p>
+
+        <pre>
+            KStreamBuilder builder = new KStreamBuilder();
+
+            KStream<String, GenericRecord> source1 = builder.stream("topic1", 
"topic2");
+            KTable<String, GenericRecord> source2 = builder.table("topic3", 
"stateStoreName");
+        </pre>
+
+        <h5><a id="streams_dsl_windowing" 
href="#streams_dsl_windowing">Windowing a stream</a></h5>
+        A stream processor may need to divide data records into time buckets, 
i.e. to <b>window</b> the stream by time. This is usually needed for join and 
aggregation operations, etc. Kafka Streams currently defines the following 
types of windows:
+        <ul>
+        <li><b>Hopping time windows</b> are windows based on time intervals. They model fixed-sized, (possibly) overlapping windows. A hopping window is defined by two properties: the window's size and its advance interval (aka "hop"). The advance interval specifies by how much a window moves forward relative to the previous one. For example, you can configure a hopping window with a size of 5 minutes and an advance interval of 1 minute. Since hopping windows can overlap, a data record may belong to more than one such window (see the sketch after this list).</li>
+        <li><b>Tumbling time windows</b> are a special case of hopping time 
windows and, like the latter, are windows based on time intervals. They model 
fixed-size, non-overlapping, gap-less windows. A tumbling window is defined by 
a single property: the window's size. A tumbling window is a hopping window 
whose window size is equal to its advance interval. Since tumbling windows 
never overlap, a data record will belong to one and only one window.</li>
+        <li><b>Sliding windows</b> model a fixed-size window that slides continuously over the time axis; here, two data records are said to be included in the same window if the difference of their timestamps is within the window size. Thus, sliding windows are not aligned to the epoch, but to the data record timestamps. In Kafka Streams, sliding windows are used only for join operations, and can be specified through the <code>JoinWindows</code> class.</li>
+        </ul>
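+
+        <p>
+        For example, hopping and tumbling windows can be specified through <code>TimeWindows</code>, which is also used in the aggregation example later in this section (the window names below are hypothetical, sizes are given in milliseconds):
+        </p>
+
+        <pre>
+            // a hopping window with a size of 5 minutes and an advance interval of 1 minute
+            TimeWindows.of("hopping-counts", 5 * 60 * 1000L).advanceBy(60 * 1000L);
+
+            // a tumbling window with a size of 5 minutes: the advance interval equals the window size
+            TimeWindows.of("tumbling-counts", 5 * 60 * 1000L);
+        </pre>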
+
+        <h5><a id="streams_dsl_joins" href="#streams_dsl_joins">Joins</a></h5>
+        A <b>join</b> operation merges two streams based on the keys of their 
data records, and yields a new stream. A join over record streams usually needs 
to be performed on a windowing basis because otherwise the number of records 
that must be maintained for performing the join may grow indefinitely. In Kafka 
Streams, you may perform the following join operations:
+        <ul>
+        <li><b>KStream-to-KStream Joins</b> are always windowed joins, since 
otherwise the memory and state required to compute the join would grow 
infinitely in size. Here, a newly received record from one of the streams is 
joined with the other stream's records within the specified window interval to 
produce one result for each matching pair based on user-provided 
<code>ValueJoiner</code>. A new <code>KStream</code> instance representing the 
result stream of the join is returned from this operator.</li>
+        
+        <li><b>KTable-to-KTable Joins</b> are join operations designed to be 
consistent with the ones in relational databases. Here, both changelog streams 
are materialized into local state stores first. When a new record is received 
from one of the streams, it is joined with the other stream's materialized 
state stores to produce one result for each matching pair based on 
user-provided ValueJoiner. A new <code>KTable</code> instance representing the 
result stream of the join, which is also a changelog stream of the represented 
table, is returned from this operator.</li>
+        <li><b>KStream-to-KTable Joins</b> allow you to perform table lookups 
against a changelog stream (<code>KTable</code>) upon receiving a new record 
from another record stream (KStream). An example use case would be to enrich a 
stream of user activities (<code>KStream</code>) with the latest user profile 
information (<code>KTable</code>). Only records received from the record stream 
will trigger the join and produce results via <code>ValueJoiner</code>, not 
vice versa (i.e., records received from the changelog stream will be used only 
to update the materialized state store). A new <code>KStream</code> instance 
representing the result stream of the join is returned from this operator.</li>
+        </ul>
+
+        Depending on the operands, the following join operations are supported: <b>inner joins</b>, <b>outer joins</b> and <b>left joins</b>. Their semantics are similar to the corresponding operators in relational databases.
+        <h5><a id="streams_dsl_transform" 
href="#streams_dsl_transform">Transform a stream</a></h5>
+
+        <p>
+        There is a list of transformation operations provided for <code>KStream</code> and <code>KTable</code> respectively.
+        Each of these operations may generate one or more <code>KStream</code> and <code>KTable</code> objects and
+        can be translated into one or more connected processors in the underlying processor topology.
+        All these transformation methods can be chained together to compose a complex processor topology.
+        Since <code>KStream</code> and <code>KTable</code> are strongly typed, all these transformation operations are defined as
+        generic functions where users can specify the input and output data types.
+        </p>
+
+        <p>
+        Among these transformations, <code>filter</code>, <code>map</code>, <code>mapValues</code>, etc., are stateless
+        transformation operations that can be applied to both <code>KStream</code> and <code>KTable</code>,
+        where users can usually pass a customized function to these operations as a parameter, such as <code>Predicate</code> for <code>filter</code>
+        and <code>KeyValueMapper</code> for <code>map</code>:
+
+        </p>
+
+        <pre>
+            // written in Java 8+, using lambda expressions
+            KStream<String, GenericRecord> mapped = source1.mapValues(record -> record.get("category"));
+        </pre>
+
+        <p>
+        Stateless transformations, by definition, do not depend on any state for processing, and hence implementation-wise
+        they do not require a state store associated with the stream processor. Stateful transformations, on the other hand,
+        require accessing an associated state for processing and producing outputs.
+        For example, in <code>join</code> and <code>aggregate</code> 
operations, a windowing state is usually used to store all the received records
+        within the defined window boundary so far. The operators can then 
access these accumulated records in the store and compute
+        based on them.
+        </p>
+
+        <pre>
+            // written in Java 8+, using lambda expressions
+            KTable<Windowed<String>, Long> counts = 
source1.groupByKey().aggregate(
+                () -> 0L,  // initial value
+                (aggKey, value, aggregate) -> aggregate + 1L,   // aggregating 
value
+                TimeWindows.of("counts", 5000L).advanceBy(1000L), // intervals 
in milliseconds
+                Serdes.Long() // serde for aggregated value
+            );
+
+            KStream<String, String> joined = source1.leftJoin(source2,
+                (record1, record2) -> record1.get("user") + "-" + record2.get("region")
+            );
+        </pre>
+
+        <h5><a id="streams_dsl_sink" href="#streams_dsl_sink">Write streams 
back to Kafka</a></h5>
+
+        <p>
+        At the end of the processing, users can choose to (continuously) write the final result streams back to a Kafka topic through
+        <code>KStream.to</code> and <code>KTable.to</code>.
+        </p>
+
+        <pre>
+            joined.to("topic4");
+        </pre>
+
+        If your application needs to continue reading and processing the 
records after they have been materialized
+        to a topic via <code>to</code> above, one option is to construct a new 
stream that reads from the output topic;
+        Kafka Streams provides a convenience method called 
<code>through</code>:
+
+        <pre>
+            // equivalent to
+            //
+            // joined.to("topic4");
+            // materialized = builder.stream("topic4");
+            KStream<String, String> materialized = joined.through("topic4");
+        </pre>
+
+
+        <br>
+        <p>
+        Besides defining the topology, developers will also need to configure their application
+        via <code>StreamsConfig</code> before running it. A complete list of
+        Kafka Streams configs can be found <a 
href="#streamsconfigs"><b>here</b></a>.
+        </p>
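+        <p>
+        As a rough sketch (the application id and bootstrap servers below are placeholders, and <code>builder</code> refers to the <code>KStreamBuilder</code> or <code>TopologyBuilder</code> constructed in the previous sections), a typical application assembles its configuration, creates a <code>KafkaStreams</code> instance from the builder and the configuration, and starts it:
+        </p>
+
+        <pre>
+            Properties props = new Properties();
+            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");      // placeholder application id
+            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder broker list
+            props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
+            props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
+
+            KafkaStreams streams = new KafkaStreams(builder, props);
+            streams.start();
+
+            // ... and eventually, when shutting down the application:
+            streams.close();
+        </pre>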
+</script>
+
+<!--#include virtual="../includes/_header.htm" -->
+<!--#include virtual="../includes/_top.htm" -->
+<div class="content documentation documentation--current">
+       <!--#include virtual="../includes/_nav.htm" -->
+       <div class="right">
+               <!--#include virtual="../includes/_docs_banner.htm" -->
+        <ul class="breadcrumbs">
+            <li><a href="/documentation">Documentation</a></li>
+        </ul>
+        <div class="p-streams"></div>
+    </div>
+</div>
+<!--#include virtual="../includes/_footer.htm" -->
+<script>
+$(function() {
+  // Show selected style on nav item
+  $('.b-nav__streams').addClass('selected');
+
+  // Display docs subnav items
+  $('.b-nav__docs').parent().toggleClass('nav__item__with__subs--expanded');
+});
+</script>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/970abca9/documentation/index.html
----------------------------------------------------------------------
diff --git a/documentation/index.html b/documentation/index.html
new file mode 100644
index 0000000..59ac251
--- /dev/null
+++ b/documentation/index.html
@@ -0,0 +1,2 @@
+<!-- should always link to the latest release's documentation -->
+<!--#include virtual="../0101/documentation.html" -->
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/970abca9/documentation/streams.html
----------------------------------------------------------------------
diff --git a/documentation/streams.html b/documentation/streams.html
new file mode 100644
index 0000000..178b0d8
--- /dev/null
+++ b/documentation/streams.html
@@ -0,0 +1,2 @@
+<!-- should always link to the latest release's documentation -->
+<!--#include virtual="../0101/streams.html" -->
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/970abca9/includes/_footer.htm
----------------------------------------------------------------------
diff --git a/includes/_footer.htm b/includes/_footer.htm
index a104f23..1fd1332 100644
--- a/includes/_footer.htm
+++ b/includes/_footer.htm
@@ -15,6 +15,35 @@
                </div>
        </body>
 
+       <script src="https://cdnjs.cloudflare.com/ajax/libs/handlebars.js/2.0.0/handlebars.js"></script>
+       <script>
+               $(function () {
+                       // list of pages that are rendered with Handlebars
+                       var templates = [
+                               'introduction',
+                               'implementation',
+                               'design',
+                               'api',
+                               'configuration',
+                               'ops',
+                               'security',
+                               'connect',
+                               'streams'
+                       ];
+
+                       // loop through all Handlebars templates on the page and render them
+                       for(var i = 0; i < templates.length; i++) {
+                               var templateScript = $("#" + templates[i] + "-template").html();
+                               if(templateScript) {
+                                       var template = Handlebars.compile(templateScript);
+                                       var html = template(context);
+                                       $(".p-" + templates[i]).html(html);
+                               }
+                       }
+               });
+       </script>
+
+       <script src="/js/jquery.sticky-kit.min.js"></script>
        <script>
                $(function() {
                        // Set mobile scroll position on nav
@@ -71,6 +100,7 @@
                        };
                });
        </script>
+
        <script>
                
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
                (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new 
Date();a=s.createElement(o),

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/970abca9/includes/_header.htm
----------------------------------------------------------------------
diff --git a/includes/_header.htm b/includes/_header.htm
index db0e12e..f48e8a5 100644
--- a/includes/_header.htm
+++ b/includes/_header.htm
@@ -17,5 +17,25 @@
                <meta property="og:type" value="website" />
                <link 
href="https://fonts.googleapis.com/css?family=Cutive+Mono|Roboto:400,700,900" 
rel="stylesheet">
                <script src="/js/jquery.min.js"></script>
-               <script src="/js/jquery.sticky-kit.min.js"></script>
+               <script>
+                       // Legacy versions of the documentation to leave alone
+                       var legacyDocPaths = [
+                               '/07/documentation',
+                               '/08/documentation',
+                               '/081/documentation',
+                               '/082/documentation',
+                               '/090/documentation',
+                               '/0100/documentation'
+                       ];
+
+                       // Redirect direct requests for Streams documentation on current (non-legacy)
+                       // doc pages to the standalone Streams doc page
+                       var currentPath = window.location.pathname;
+                       if (legacyDocPaths.indexOf(currentPath) === -1 &&
+                               currentPath.includes('/documentation') &&
+                               currentPath !== '/documentation/streams' &&
+                               window.location.hash.includes('#streams_')) {
+                               window.location.pathname = '/documentation/streams';
+                       }
+               </script>
        </head>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/970abca9/includes/_nav.htm
----------------------------------------------------------------------
diff --git a/includes/_nav.htm b/includes/_nav.htm
index 85ad126..a1038bb 100644
--- a/includes/_nav.htm
+++ b/includes/_nav.htm
@@ -15,7 +15,7 @@
         <a class="nav__item nav__sub__item" 
href="/documentation#operations">operations</a>
         <a class="nav__item nav__sub__item" 
href="/documentation#security">security</a>
         <a class="nav__item nav__sub__item" 
href="/documentation#connect">kafka connect</a>
-        <a class="nav__item nav__sub__item" 
href="/documentation#streams">kafka streams</a>
+        <a class="b-nav__streams nav__item nav__sub__item" 
href="/documentation/streams">kafka streams</a>
       </div>
       <a class="b-nav__performance nav__item" 
href="/performance">performance</a>
       <a class="b-nav__poweredby nav__item" target="_blank" 
href="https://cwiki.apache.org/confluence/display/KAFKA/Powered+By";>powered 
by</a>

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/970abca9/intro.html
----------------------------------------------------------------------
diff --git a/intro.html b/intro.html
index 460e65d..c3a4033 100644
--- a/intro.html
+++ b/intro.html
@@ -4,7 +4,7 @@
   <!--#include virtual="includes/_nav.htm" -->
   <div class="right">
                <h1>Introduction</h1>
-<!--#include virtual="0101/introduction.html" -->
+    <!--#include virtual="0101/introduction.html" -->
 
 <!--#include virtual="includes/_footer.htm" -->
 

http://git-wip-us.apache.org/repos/asf/kafka-site/blob/970abca9/styles.css
----------------------------------------------------------------------
diff --git a/styles.css b/styles.css
index ac3efeb..190dc93 100644
--- a/styles.css
+++ b/styles.css
@@ -35,7 +35,7 @@ code b, pre b,
 h1, h2, h3, h4 {
        line-height: 1.2;
 }
-h1 a, h2 a, h3 a, h4 a {
+h1 a, h2 a, h3 a, h4 a, h5 a, h6 a {
        color: #000000;
 }
 h1 {
@@ -106,15 +106,22 @@ ul {
        padding: 0;
        list-style: none;
        text-transform: uppercase;
+       margin-bottom: 5rem;
+}
+.toc li {
+       margin-bottom: .5rem;
+}
+.toc > li {
+       margin-bottom: .6rem;
 }
 .toc a {
        color: #000000;
 }
 .toc ul {
-       padding: 0 3.2rem;
+       padding: .5rem 1.7rem 0;
        text-transform: none;
        font-weight: 400;
-       margin-bottom: 1.5rem;
+       margin: 0 0 2rem;
 }
 .toc a:hover,
 .toc ul a {
@@ -124,6 +131,13 @@ ul {
        padding: 0 1.8rem;
        margin-bottom: 0;
 }
+ol.toc {
+       list-style: decimal;
+       padding: 0 0 0 2rem;
+}
+ol.toc > li {
+       padding-left: .2rem;
+}
 .data-table {
        display: block;
   border: none;
@@ -215,6 +229,29 @@ ul {
        overflow: hidden;
        margin: 3rem 0 1rem;
 }
+.breadcrumbs {
+       list-style: none;
+    padding: 0;
+    text-transform: uppercase;
+       overflow: hidden;
+       margin-bottom: 0;
+}
+.breadcrumbs li {
+       display: flex;
+       align-items: center;
+}
+.breadcrumbs li::after {
+       content: '\00BB';
+    margin-left: .5rem;
+       height: 1rem;
+    line-height: .9rem;
+}
+.breadcrumbs a {
+       color: #000000;
+}
+.breadcrumbs a:hover {
+       color: #0B6D88;
+}
 .content {
        margin-top: 3rem;
 }
