This is an automated email from the ASF dual-hosted git repository.
rhauch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/kafka-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new e4e3004 MINOR: Fix renamed and reformatted quickstart, broken since 2.6.0 release (#286)
e4e3004 is described below
commit e4e30045807d0faa3af4745a2a399e2745d73391
Author: Randall Hauch <[email protected]>
AuthorDate: Mon Aug 10 14:17:56 2020 -0500
MINOR: Fix renamed and reformatted quickstart, broken since 2.6.0 release (#286)
---
26/quickstart-docker.html | 204 +++++++++++++++++++++++++++++++
26/quickstart-zookeeper.html | 277 +++++++++++++++++++++++++++++++++++++++++++
26/quickstart.html | 250 --------------------------------------
3 files changed, 481 insertions(+), 250 deletions(-)
diff --git a/26/quickstart-docker.html b/26/quickstart-docker.html
new file mode 100644
index 0000000..d8816ba
--- /dev/null
+++ b/26/quickstart-docker.html
@@ -0,0 +1,204 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script><!--#include virtual="js/templateData.js" --></script>
+
+<script id="quickstart-docker-template" type="text/x-handlebars-template">
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+ <a class="anchor-link" id="step-1-get-kafka" href="#step-1-get-kafka"></a>
+ <a href="#step-1-get-kafka">Step 1: Get Kafka</a>
+</h4>
+
+<p>
+  This docker-compose file will run everything for you via <a href="https://www.docker.com/" rel="nofollow">Docker</a>.
+  Copy and paste it into a file named <code>docker-compose.yml</code> on your local filesystem.
+</p>
+<pre class="line-numbers"><code class="language-bash">---
+ version: '2'
+
+ services:
+ broker:
+ image: apache-kafka/broker:2.5.0
+ hostname: kafka-broker
+ container_name: kafka-broker
+
+ # ...rest omitted...</code></pre>
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+ <a class="anchor-link" id="step-2-start-kafka"
href="#step-2-start-kafka"></a>
+ <a href="#step-2-start-kafka">Step 2: Start the Kafka environment</a>
+</h4>
+
+<p>
+  From the directory containing the <code>docker-compose.yml</code> file created in the previous step, run the
+  following command to start all services in the correct order:
+</p>
+<pre class="line-numbers"><code class="language-bash">$ docker-compose
up</code></pre>
+<p>
+  Once all services have successfully launched, you will have a basic Kafka environment running and ready to use.
+</p>
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+ <a class="anchor-link" id="step-3-create-a-topic"
href="#step-3-create-a-topic"></a>
+ <a href="#step-3-create-a-topic">Step 3: Create a topic to store your
events</a>
+</h4>
+<p>Kafka is a distributed <em>event streaming platform</em> that lets you read, write, store, and process
+<a href="/documentation/#messages" rel="nofollow"><em>events</em></a> (also called <em>records</em> or <em>messages</em> in the documentation)
+across many machines.
+Example events are payment transactions, geolocation updates from mobile phones, shipping orders, sensor measurements
+from IoT devices or medical equipment, and much more.
+These events are organized and stored in <a href="/documentation/#intro_topics" rel="nofollow"><em>topics</em></a>.
+Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder.</p>
+<p>So before you can write your first events, you must create a topic:</p>
+<pre class="line-numbers"><code class="language-bash">$ docker exec -it
kafka-broker kafka-topics.sh --create --topic quickstart-events</code></pre>
+<p>All of Kafka's command line tools have additional options: run the <code>kafka-topics.sh</code> command without any
+arguments to display usage information.
+For example, it can also show you
+<a href="/documentation/#intro_topics" rel="nofollow">details such as the partition count</a> of the new topic:</p>
+<pre class="line-numbers"><code class="language-bash">$ docker exec -it
kafka-broker kafka-topics.sh --describe --topic quickstart-events
+ Topic:quickstart-events PartitionCount:1 ReplicationFactor:1 Configs:
+ Topic: quickstart-events Partition: 0 Leader: 0 Replicas: 0 Isr:
0</code></pre>
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+ <a class="anchor-link" id="step-4-write-events"
href="#step-4-write-events"></a>
+ <a href="#step-4-write-events">Step 4: Write some events into the topic</a>
+</h4>
+<p>A Kafka client communicates with the Kafka brokers via the network for writing (or reading) events.
+Once received, the brokers will store the events in a durable and fault-tolerant manner for as long as you
+need—even forever.</p>
+<p>Run the console producer client to write a few events into your topic.
+By default, each line you enter will result in a separate event being written to the topic.</p>
+<pre class="line-numbers"><code class="language-bash">$ docker exec -it
kafka-broker kafka-console-producer.sh --topic quickstart-events
+This is my first event
+This is my second event</code></pre>
+<p>You can stop the producer client with <code>Ctrl-C</code> at any time.</p>
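+<p>Under the hood, the console producer uses the same Java producer API that your own applications would use.
+The following is a minimal sketch of that API, assuming a broker reachable at <code>localhost:9092</code>
+(adjust the address for your environment):</p>
+<pre class="line-numbers"><code class="language-java">import java.util.Properties;
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.Producer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+
+Properties props = new Properties();
+props.put("bootstrap.servers", "localhost:9092");
+props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+
+// The producer is AutoCloseable; close() flushes any buffered events.
+try (Producer<String, String> producer = new KafkaProducer<>(props)) {
+    producer.send(new ProducerRecord<>("quickstart-events", "This is my first event"));
+}</code></pre>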
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+ <a class="anchor-link" id="step-5-read-the-events"
href="#step-5-read-the-events"></a>
+ <a href="#step-5-read-the-events">Step 5: Read the events</a>
+</h4>
+<p>Open another terminal session and run the console consumer client to read the events you just created:</p>
+<pre class="line-numbers"><code class="language-bash">$ docker exec -it kafka-broker kafka-console-consumer.sh --topic quickstart-events --from-beginning
+This is my first event
+This is my second event</code></pre>
+<p>You can stop the consumer client with <code>Ctrl-C</code> at any time.</p>
+<p>Feel free to experiment: for example, switch back to your producer terminal (previous step) to write
+additional events, and see how the events immediately show up in your consumer terminal.</p>
+<p>Because events are durably stored in Kafka, they can be read as many times and by as many consumers as you want.
+You can easily verify this by opening yet another terminal session and re-running the previous command.</p>
+
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+ <a class="anchor-link" id="step-5-read-the-events"
href="#step-5-read-the-events"></a>
+ <a href="#step-5-read-the-events">Step 6: Import/export your data as
streams of events with Kafka Connect</a>
+</h4>
+<p>You probably have lots of data in existing systems like relational databases or traditional messaging systems, along
+with many applications that already use these systems.
+<a href="/documentation/#connect" rel="nofollow">Kafka Connect</a> allows you to continuously ingest data from external
+systems into Kafka, and vice versa, making it very easy to integrate existing systems with Kafka. To make this process even
+easier, there are hundreds of such connectors readily available.</p>
+<p>Take a look at the <a href="/documentation/#connect" rel="nofollow">Kafka Connect section</a> in the documentation to
+learn more about how to continuously import/export your data into and out of Kafka.</p>
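+<p>A connector is configured with a small set of properties. As a taste, this is roughly what the configuration of
+the file source connector that ships with Kafka looks like (see <code>config/connect-file-source.properties</code>
+in a Kafka distribution); it tails a file and writes each line as an event to a topic:</p>
+<pre class="line-numbers"><code class="language-text">name=local-file-source
+connector.class=FileStreamSource
+tasks.max=1
+file=test.txt
+topic=connect-test</code></pre>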
+
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+ <a class="anchor-link" id="step-7-process-events"
href="#step-7-process-events"></a>
+ <a href="#step-7-process-events">Step 7: Process your events with Kafka
Streams</a>
+</h4>
+
+<p>Once your data is stored in Kafka as events, you can process the data with the
+<a href="/documentation/streams" rel="nofollow">Kafka Streams</a> client library for Java/Scala.
+It allows you to implement mission-critical real-time applications and microservices, where the input and/or output data
+is stored in Kafka topics. Kafka Streams combines the simplicity of writing and deploying standard Java and Scala
+applications on the client side with the benefits of Kafka's server-side cluster technology to make these applications
+highly scalable, elastic, fault-tolerant, and distributed. The library supports exactly-once processing, stateful
+operations and aggregations, windowing, joins, processing based on event-time, and much more.</p>
+<p>To give you a first taste, here's how one would implement the popular <code>WordCount</code> algorithm:</p>
+<pre class="line-numbers"><code class="language-java">KStream<String, String> textLines = builder.stream("quickstart-events");
+
+KTable<String, Long> wordCounts = textLines
+    .flatMapValues(line -> Arrays.asList(line.toLowerCase().split(" ")))
+    .groupBy((keyIgnored, word) -> word)
+    .count();
+
+wordCounts.toStream().to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));</code></pre>
+<p>The <a href="/25/documentation/streams/quickstart" rel="nofollow">Kafka
Streams demo</a> and the
+<a href="/25/documentation/streams/tutorial" rel="nofollow">app development
tutorial</a> demonstrate how to code and run
+such a streaming application from start to finish.</p>
+
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+ <a class="anchor-link" id="step-8-terminate" href="#step-8-terminate"></a>
+ <a href="#step-8-terminate">Step 8: Terminate the Kafka environment</a>
+</h4>
+<p>Now that you have reached the end of the quickstart, feel free to tear down the Kafka environment, or continue playing around.</p>
+<p>Run the following command to tear down the environment, which also deletes any events you have created along the way:</p>
+<pre class="line-numbers"><code class="language-bash">$ docker-compose down</code></pre>
+
+</div>
+
+<div class="quickstart-step">
+<h4 class="anchor-heading">
+ <a class="anchor-link" id="quickstart_kafkacongrats"
href="#quickstart_kafkacongrats"></a>
+ <a href="#quickstart_kafkacongrats">Congratulations!</a>
+ </h4>
+
+  <p>You have successfully finished the Apache Kafka quickstart.</p>
+
+  <div>
+
+ <p>To learn more, we suggest the following next steps:</p>
+
+ <ul>
+ <li>
+            Read through the brief <a href="/intro">Introduction</a> to learn how Kafka works at a high level, its
+            main concepts, and how it compares to other technologies. To understand Kafka in more detail, head over to the
+ <a href="/documentation/">Documentation</a>.
+ </li>
+ <li>
+            Browse through the <a href="/powered-by">Use Cases</a> to learn how other users in our world-wide
+            community are getting value out of Kafka.
+ </li>
+ <!--
+ <li>
+            Learn how _Kafka compares to other technologies_ [note to design team: this new page is not yet written] you might be familiar with.
+ </li>
+ -->
+ <li>
+ Join a <a href="/events">local Kafka meetup group</a> and
+ <a href="https://kafka-summit.org/past-events/">watch talks from
Kafka Summit</a>,
+ the main conference of the Kafka community.
+ </li>
+ </ul>
+</div>
+</script>
+
+<div class="p-quickstart-docker"></div>
diff --git a/26/quickstart-zookeeper.html b/26/quickstart-zookeeper.html
new file mode 100644
index 0000000..e9fea43
--- /dev/null
+++ b/26/quickstart-zookeeper.html
@@ -0,0 +1,277 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script>
+ <!--#include virtual="js/templateData.js" -->
+</script>
+
+<script id="quickstart-template" type="text/x-handlebars-template">
+
+ <div class="quickstart-step">
+ <h4 class="anchor-heading">
+ <a class="anchor-link" id="quickstart_download"
href="#quickstart_download"></a>
+ <a href="#quickstart_download">Step 1: Get Kafka</a>
+ </h4>
+
+ <p>
+        <a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/2.6.0/kafka_2.13-2.6.0.tgz">Download</a>
+        the latest Kafka release and extract it:
+ </p>
+
+<pre class="line-numbers"><code class="language-bash">$ tar -xzf
kafka_2.13-2.6.0.tgz
+$ cd kafka_2.13-2.6.0</code></pre>
+ </div>
+
+ <div class="quickstart-step">
+ <h4 class="anchor-heading">
+ <a class="anchor-link" id="quickstart_startserver"
href="#quickstart_startserver"></a>
+ <a href="#quickstart_startserver">Step 2: Start the Kafka
environment</a>
+ </h4>
+
+ <p class="note">
+ NOTE: Your local environment must have Java 8+ installed.
+ </p>
+
+ <p>
+        Run the following commands to start all services in the correct order:
+ </p>
+
+<pre class="line-numbers"><code class="language-bash"># Start the ZooKeeper
service
+# Note: Soon, ZooKeeper will no longer be required by Apache Kafka.
+$ bin/zookeeper-server-start.sh config/zookeeper.properties</code></pre>
+
+ <p>
+ Open another terminal session and run:
+ </p>
+
+<pre class="line-numbers"><code class="language-bash"># Start the Kafka broker
service
+$ bin/kafka-server-start.sh config/server.properties</code></pre>
+
+ <p>
+        Once all services have successfully launched, you will have a basic Kafka environment running and ready to use.
+ </p>
+ </div>
+
+ <div class="quickstart-step">
+ <h4 class="anchor-heading">
+ <a class="anchor-link" id="quickstart_createtopic"
href="#quickstart_createtopic"></a>
+ <a href="#quickstart_createtopic">Step 3: Create a topic to store
your events</a>
+ </h4>
+
+ <p>
+        Kafka is a distributed <em>event streaming platform</em> that lets you read, write, store, and process
+        <a href="/documentation/#messages"><em>events</em></a> (also called <em>records</em> or
+        <em>messages</em> in the documentation)
+        across many machines.
+ </p>
+
+ <p>
+        Example events are payment transactions, geolocation updates from mobile phones, shipping orders, sensor measurements
+        from IoT devices or medical equipment, and much more. These events are organized and stored in
+        <a href="/documentation/#intro_topics"><em>topics</em></a>.
+        Very simplified, a topic is similar to a folder in a filesystem, and the events are the files in that folder.
+ </p>
+
+ <p>
+        So before you can write your first events, you must create a topic. Open another terminal session and run:
+ </p>
+
+<pre class="line-numbers"><code class="language-bash">$ bin/kafka-topics.sh
--create --topic quickstart-events --bootstrap-server
localhost:9092</code></pre>
+
+ <p>
+        All of Kafka's command line tools have additional options: run the <code>kafka-topics.sh</code> command without any
+        arguments to display usage information. For example, it can also show you
+        <a href="/documentation/#intro_topics">details such as the partition count</a>
+        of the new topic:
+ </p>
+
+<pre class="line-numbers"><code class="language-bash">$ bin/kafka-topics.sh
--describe --topic quickstart-events --bootstrap-server localhost:9092
+Topic:quickstart-events PartitionCount:1 ReplicationFactor:1 Configs:
+ Topic: quickstart-events Partition: 0 Leader: 0 Replicas: 0 Isr:
0</code></pre>
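+
+    <p>Topics can also be created programmatically. Here is a minimal sketch using the Java <code>Admin</code> client,
+    assuming the broker started above at <code>localhost:9092</code> (error handling omitted):</p>
+
+<pre class="line-numbers"><code class="language-java">import java.util.Collections;
+import java.util.Properties;
+import org.apache.kafka.clients.admin.Admin;
+import org.apache.kafka.clients.admin.NewTopic;
+
+Properties props = new Properties();
+props.put("bootstrap.servers", "localhost:9092");
+try (Admin admin = Admin.create(props)) {
+    // One partition and replication factor 1, matching the describe output above.
+    NewTopic topic = new NewTopic("quickstart-events", 1, (short) 1);
+    admin.createTopics(Collections.singletonList(topic)).all().get(); // throws on failure
+}</code></pre>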
+ </div>
+
+ <div class="quickstart-step">
+ <h4 class="anchor-heading">
+ <a class="anchor-link" id="quickstart_send"
href="#quickstart_send"></a>
+ <a href="#quickstart_send">Step 4: Write some events into the
topic</a>
+ </h4>
+
+ <p>
+        A Kafka client communicates with the Kafka brokers via the network for writing (or reading) events.
+        Once received, the brokers will store the events in a durable and fault-tolerant manner for as long as you
+        need—even forever.
+ </p>
+
+ <p>
+        Run the console producer client to write a few events into your topic.
+        By default, each line you enter will result in a separate event being written to the topic.
+ </p>
+
+<pre class="line-numbers"><code class="language-bash">$
bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server
localhost:9092
+This is my first event
+This is my second event</code></pre>
+
+ <p>
+        You can stop the producer client with <code>Ctrl-C</code> at any time.
+ </p>
+ </div>
+
+ <div class="quickstart-step">
+ <h4 class="anchor-heading">
+ <a class="anchor-link" id="quickstart_consume"
href="#quickstart_consume"></a>
+ <a href="#quickstart_consume">Step 5: Read the events</a>
+ </h4>
+
+    <p>Open another terminal session and run the console consumer client to read the events you just created:</p>
+
+<pre class="line-numbers"><code class="language-bash">$
bin/kafka-console-consumer.sh --topic quickstart-events --from-beginning
--bootstrap-server localhost:9092
+This is my first event
+This is my second event</code></pre>
+
+ <p>You can stop the consumer client with <code>Ctrl-C</code> at any time.</p>
+
+    <p>Feel free to experiment: for example, switch back to your producer terminal (previous step) to write
+    additional events, and see how the events immediately show up in your consumer terminal.</p>
+
+    <p>Because events are durably stored in Kafka, they can be read as many times and by as many consumers as you want.
+    You can easily verify this by opening yet another terminal session and re-running the previous command.</p>
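+
+    <p>The console consumer is likewise a thin wrapper around the Java consumer API. A minimal sketch, assuming a
+    broker at <code>localhost:9092</code> and a hypothetical group id <code>quickstart-group</code>:</p>
+
+<pre class="line-numbers"><code class="language-java">import java.time.Duration;
+import java.util.Collections;
+import java.util.Properties;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+
+Properties props = new Properties();
+props.put("bootstrap.servers", "localhost:9092");
+props.put("group.id", "quickstart-group");  // hypothetical group id
+props.put("auto.offset.reset", "earliest"); // read from the start, like --from-beginning
+props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+
+try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
+    consumer.subscribe(Collections.singletonList("quickstart-events"));
+    // A real application would poll in a loop; one poll suffices for this sketch.
+    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
+    records.forEach(record -> System.out.println(record.value()));
+}</code></pre>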
+ </div>
+
+ <div class="quickstart-step">
+ <h4 class="anchor-heading">
+ <a class="anchor-link" id="quickstart_kafkaconnect"
href="#quickstart_kafkaconnect"></a>
+ <a href="#quickstart_kafkaconnect">Step 6: Import/export your data as
streams of events with Kafka Connect</a>
+ </h4>
+
+ <p>
+        You probably have lots of data in existing systems like relational databases or traditional messaging systems,
+        along with many applications that already use these systems.
+        <a href="/documentation/#connect">Kafka Connect</a> allows you to continuously ingest
+        data from external systems into Kafka, and vice versa, making it very easy to integrate existing systems with
+        Kafka. To make this process even easier, there are hundreds of such connectors readily available.
+ </p>
+
+    <p>Take a look at the <a href="/documentation/#connect">Kafka Connect section</a> to
+    learn more about how to continuously import/export your data into and out of Kafka.</p>
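+
+    <p>For example, the file connectors that ship with Kafka can be run in a single standalone process to import a
+    file into a topic and export that topic back out to another file:</p>
+
+<pre class="line-numbers"><code class="language-bash">$ bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-source.properties config/connect-file-sink.properties</code></pre>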
+
+ </div>
+
+ <div class="quickstart-step">
+ <h4 class="anchor-heading">
+ <a class="anchor-link" id="quickstart_kafkastreams"
href="#quickstart_kafkastreams"></a>
+ <a href="#quickstart_kafkastreams">Step 7: Process your events with
Kafka Streams</a>
+ </h4>
+
+ <p>
+        Once your data is stored in Kafka as events, you can process the data with the
+        <a href="/documentation/streams">Kafka Streams</a> client library for Java/Scala.
+        It allows you to implement mission-critical real-time applications and microservices, where the input
+        and/or output data is stored in Kafka topics. Kafka Streams combines the simplicity of writing and deploying
+        standard Java and Scala applications on the client side with the benefits of Kafka's server-side cluster
+        technology to make these applications highly scalable, elastic, fault-tolerant, and distributed. The library
+        supports exactly-once processing, stateful operations and aggregations, windowing, joins, processing based
+        on event-time, and much more.
+ </p>
+
+    <p>To give you a first taste, here's how one would implement the popular <code>WordCount</code> algorithm:</p>
+
+<pre class="line-numbers"><code class="language-bash">KStream<String,
String> textLines = builder.stream("quickstart-events");
+
+KTable<String, Long> wordCounts = textLines
+ .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("
")))
+ .groupBy((keyIgnored, word) -> word)
+ .count();
+
+wordCounts.toStream().to("output-topic"), Produced.with(Serdes.String(),
Serdes.Long()));</code></pre>
+
+ <p>
+ The <a href="/25/documentation/streams/quickstart">Kafka Streams demo</a>
+ and the <a href="/25/documentation/streams/tutorial">app development
tutorial</a>
+ demonstrate how to code and run such a streaming application from start
to finish.
+ </p>
+
+ </div>
+
+ <div class="quickstart-step">
+ <h4 class="anchor-heading">
+ <a class="anchor-link" id="quickstart_kafkaterminate"
href="#quickstart_kafkaterminate"></a>
+ <a href="#quickstart_kafkaterminate">Step 8: Terminate the Kafka
environment</a>
+ </h4>
+
+ <p>
+        Now that you have reached the end of the quickstart, feel free to tear down the Kafka environment, or
+        continue playing around.
+ </p>
+
+ <ol>
+ <li>
+            Stop the producer and consumer clients with <code>Ctrl-C</code>, if you haven't done so already.
+ </li>
+ <li>
+ Stop the Kafka broker with <code>Ctrl-C</code>.
+ </li>
+ <li>
+ Lastly, stop the ZooKeeper server with <code>Ctrl-C</code>.
+ </li>
+ </ol>
+
+ <p>
+        If you also want to delete any data of your local Kafka environment, including any events you have created
+        along the way, run the command:
+ </p>
+
+<pre class="line-numbers"><code class="language-bash">$ rm -rf /tmp/kafka-logs
/tmp/zookeeper</code></pre>
+
+ </div>
+
+ <div class="quickstart-step">
+ <h4 class="anchor-heading">
+ <a class="anchor-link" id="quickstart_kafkacongrats"
href="#quickstart_kafkacongrats"></a>
+ <a href="#quickstart_kafkacongrats">Congratulations!</a>
+ </h4>
+
+    <p>You have successfully finished the Apache Kafka quickstart.</p>
+
+    <div>
+
+ <p>To learn more, we suggest the following next steps:</p>
+
+ <ul>
+ <li>
+ Read through the brief <a href="/intro">Introduction</a>
+            to learn how Kafka works at a high level, its main concepts, and how it compares to other
+            technologies. To understand Kafka in more detail, head over to the
+ <a href="/documentation/">Documentation</a>.
+ </li>
+ <li>
+            Browse through the <a href="/powered-by">Use Cases</a> to learn how
+            other users in our world-wide community are getting value out of Kafka.
+ </li>
+ <!--
+ <li>
+            Learn how _Kafka compares to other technologies_ you might be familiar with.
+ [note to design team: this new page is not yet written]
+ </li>
+ -->
+ <li>
+ Join a <a href="/events">local Kafka meetup group</a> and
+ <a href="/past-events">watch talks from Kafka Summit</a>,
+ the main conference of the Kafka community.
+ </li>
+ </ul>
+ </div>
+</script>
+
+<div class="p-quickstart"></div>
diff --git a/26/quickstart.html b/26/quickstart.html
deleted file mode 100644
index 929985b..0000000
--- a/26/quickstart.html
+++ /dev/null
@@ -1,250 +0,0 @@
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements. See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-
-<script><!--#include virtual="js/templateData.js" --></script>
-
-<script id="quickstart-template" type="text/x-handlebars-template">
-<p>
-This tutorial assumes you are starting fresh and have no existing Kafka or ZooKeeper data.
-Since Kafka console scripts are different for Unix-based and Windows platforms, on Windows platforms use <code>bin\windows\</code> instead of <code>bin/</code>, and change the script extension to <code>.bat</code>.
-</p>
-
-<h4><a id="quickstart_download" href="#quickstart_download">Step 1: Download
the code</a></h4>
-
-<a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/{{fullDotVersion}}/kafka_{{scalaVersion}}-{{fullDotVersion}}.tgz" title="Kafka downloads">Download</a> the {{fullDotVersion}} release and un-tar it.
-
-<pre class="line-numbers"><code class="language-bash">> tar -xzf
kafka_{{scalaVersion}}-{{fullDotVersion}}.tgz
-> cd kafka_{{scalaVersion}}-{{fullDotVersion}}</code></pre>
-
-<h4><a id="quickstart_startserver" href="#quickstart_startserver">Step 2:
Start the server</a></h4>
-
-<p>
-Kafka uses <a href="https://zookeeper.apache.org/">ZooKeeper</a> so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with kafka to get a quick-and-dirty single-node ZooKeeper instance.
-</p>
-
-<pre class="line-numbers"><code class="language-bash">>
bin/zookeeper-server-start.sh config/zookeeper.properties
-[2013-04-22 15:01:37,495] INFO Reading configuration from:
config/zookeeper.properties
(org.apache.zookeeper.server.quorum.QuorumPeerConfig)
-...</code></pre>
-
-<p>Now start the Kafka server:</p>
-<pre class="line-numbers"><code class="language-bash">>
bin/kafka-server-start.sh config/server.properties
-[2013-04-22 15:01:47,028] INFO Verifying properties
(kafka.utils.VerifiableProperties)
-[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden
to 1048576 (kafka.utils.VerifiableProperties)
-...</code></pre>
-
-<h4><a id="quickstart_createtopic" href="#quickstart_createtopic">Step 3:
Create a topic</a></h4>
-
-<p>Let's create a topic named "test" with a single partition and only one replica:</p>
-<pre class="line-numbers"><code class="language-bash">> bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test</code></pre>
-
-<p>We can now see that topic if we run the list topic command:</p>
-<pre class="line-numbers"><code class="language-bash">> bin/kafka-topics.sh
--list --bootstrap-server localhost:9092
-test</code></pre>
-<p>Alternatively, instead of manually creating topics you can also configure
your brokers to auto-create topics when a non-existent topic is published
to.</p>
-
-<h4><a id="quickstart_send" href="#quickstart_send">Step 4: Send some
messages</a></h4>
-
-<p>Kafka comes with a command line client that will take input from a file or from standard input and send it out as messages to the Kafka cluster. By default, each line will be sent as a separate message.</p>
-<p>
-Run the producer and then type a few messages into the console to send to the server.</p>
-
-<pre class="line-numbers"><code class="language-bash">>
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test
-This is a message
-This is another message</code></pre>
-
-<h4><a id="quickstart_consume" href="#quickstart_consume">Step 5: Start a
consumer</a></h4>
-
-<p>Kafka also has a command line consumer that will dump out messages to standard output.</p>
-
-<pre class="line-numbers"><code class="language-bash">>
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
--from-beginning
-This is a message
-This is another message</code></pre>
-<p>
-If you have each of the above commands running in a different terminal then you should now be able to type messages into the producer terminal and see them appear in the consumer terminal.
-</p>
-<p>
-All of the command line tools have additional options; running the command with no arguments will display usage information documenting them in more detail.
-</p>
-
-<h4><a id="quickstart_multibroker" href="#quickstart_multibroker">Step 6:
Setting up a multi-broker cluster</a></h4>
-
-<p>So far we have been running against a single broker, but that's no fun. For Kafka, a single broker is just a cluster of size one, so nothing much changes other than starting a few more broker instances. But just to get feel for it, let's expand our cluster to three nodes (still all on our local machine).</p>
-<p>
-First we make a config file for each of the brokers (on Windows use the <code>copy</code> command instead):
-</p>
-<pre class="line-numbers"><code class="language-bash">> cp config/server.properties config/server-1.properties
-> cp config/server.properties config/server-2.properties</code></pre>
-
-<p>
-Now edit these new files and set the following properties:
-</p>
-<pre class="line-numbers"><code class="language-text">
-config/server-1.properties:
- broker.id=1
- listeners=PLAINTEXT://:9093
- log.dirs=/tmp/kafka-logs-1
-
-config/server-2.properties:
- broker.id=2
- listeners=PLAINTEXT://:9094
- log.dirs=/tmp/kafka-logs-2</code></pre>
-<p>The <code>broker.id</code> property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running these all on the same machine and we want to keep the brokers from all trying to register on the same port or overwrite each other's data.</p>
-<p>
-We already have Zookeeper and our single node started, so we just need to start the two new nodes:
-</p>
-<pre class="line-numbers"><code class="language-bash">> bin/kafka-server-start.sh config/server-1.properties &
-...
-> bin/kafka-server-start.sh config/server-2.properties &
-...</code></pre>
-
-<p>Now create a new topic with a replication factor of three:</p>
-<pre class="line-numbers"><code class="language-bash">> bin/kafka-topics.sh
--create --bootstrap-server localhost:9092 --replication-factor 3 --partitions
1 --topic my-replicated-topic</code></pre>
-
-<p>Okay but now that we have a cluster how can we know which broker is doing what? To see that run the "describe topics" command:</p>
-<pre class="line-numbers"><code class="language-bash">> bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic my-replicated-topic
-Topic:my-replicated-topic  PartitionCount:1  ReplicationFactor:3  Configs:
-  Topic: my-replicated-topic  Partition: 0  Leader: 1  Replicas: 1,2,0  Isr: 1,2,0</code></pre>
-<p>Here is an explanation of output. The first line gives a summary of all the partitions, each additional line gives information about one partition. Since we have only one partition for this topic there is only one line.</p>
-<ul>
- <li>"leader" is the node responsible for all reads and writes for the given
partition. Each node will be the leader for a randomly selected portion of the
partitions.
- <li>"replicas" is the list of nodes that replicate the log for this
partition regardless of whether they are the leader or even if they are
currently alive.
- <li>"isr" is the set of "in-sync" replicas. This is the subset of the
replicas list that is currently alive and caught-up to the leader.
-</ul>
-<p>Note that in my example node 1 is the leader for the only partition of the topic.</p>
-<p>
-We can run the same command on the original topic we created to see where it is:
-</p>
-<pre class="line-numbers"><code class="language-bash">> bin/kafka-topics.sh
--describe --bootstrap-server localhost:9092 --topic test
-Topic:test PartitionCount:1 ReplicationFactor:1 Configs:
- Topic: test Partition: 0 Leader: 0 Replicas: 0 Isr:
0</code></pre>
-<p>So there is no surprise there—the original topic has no replicas and
is on server 0, the only server in our cluster when we created it.</p>
-<p>
-Let's publish a few messages to our new topic:
-</p>
-<pre class="line-numbers"><code class="language-bash">>
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic
my-replicated-topic
-...
-my test message 1
-my test message 2
-^C</code></pre>
-<p>Now let's consume these messages:</p>
-<pre class="line-numbers"><code class="language-bash">>
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092
--from-beginning --topic my-replicated-topic
-...
-my test message 1
-my test message 2
-^C</code></pre>
-
-<p>Now let's test out fault-tolerance. Broker 1 was acting as the leader so let's kill it:</p>
-<pre class="line-numbers"><code class="language-bash">> ps aux | grep server-1.properties
-7564 ttys002    0:15.91 /System/Library/Frameworks/JavaVM.framework/Versions/1.8/Home/bin/java...
-> kill -9 7564</code></pre>
-
-On Windows use:
-<pre class="line-numbers"><code class="language-bash">> wmic process where
"caption = 'java.exe' and commandline like '%server-1.properties%'" get
processid
-ProcessId
-6016
-> taskkill /pid 6016 /f</code></pre>
-
-<p>Leadership has switched to one of the followers and node 1 is no longer in the in-sync replica set:</p>
-
-<pre class="line-numbers"><code class="language-bash">> bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic my-replicated-topic
-Topic:my-replicated-topic  PartitionCount:1  ReplicationFactor:3  Configs:
-  Topic: my-replicated-topic  Partition: 0  Leader: 2  Replicas: 1,2,0  Isr: 2,0</code></pre>
-<p>But the messages are still available for consumption even though the leader that took the writes originally is down:</p>
-<pre class="line-numbers"><code class="language-bash">> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
-...
-my test message 1
-my test message 2
-^C</code></pre>
-
-
-<h4><a id="quickstart_kafkaconnect" href="#quickstart_kafkaconnect">Step 7:
Use Kafka Connect to import/export data</a></h4>
-
-<p>Reading data from the console and writing it back to the console is a convenient place to start, but you'll probably want
-to use data from other sources or export data from Kafka to other systems. For many systems, instead of writing custom
-integration code you can use Kafka Connect to import or export data.</p>
-
-<p>Kafka Connect is a tool included with Kafka that imports and exports data to Kafka. It is an extensible tool that runs
-<i>connectors</i>, which implement the custom logic for interacting with an external system. In this quickstart we'll see
-how to run Kafka Connect with simple connectors that import data from a file to a Kafka topic and export data from a
-Kafka topic to a file.</p>
-
-<p>First, we'll start by creating some seed data to test with:</p>
-
-<pre class="line-numbers"><code class="language-bash">> echo -e "foo\nbar"
> test.txt</code></pre>
-Or on Windows:
-<pre class="line-numbers"><code class="language-bash">> echo foo> test.txt
-> echo bar>> test.txt</code></pre>
-
-<p>Next, we'll start two connectors running in <i>standalone</i> mode, which means they run in a single, local, dedicated
-process. We provide three configuration files as parameters. The first is always the configuration for the Kafka Connect
-process, containing common configuration such as the Kafka brokers to connect to and the serialization format for data.
-The remaining configuration files each specify a connector to create. These files include a unique connector name, the connector
-class to instantiate, and any other configuration required by the connector.</p>
-
-<pre class="line-numbers"><code class="language-bash">>
bin/connect-standalone.sh config/connect-standalone.properties
config/connect-file-source.properties
config/connect-file-sink.properties</code></pre>
-
-<p>
-These sample configuration files, included with Kafka, use the default local cluster configuration you started earlier
-and create two connectors: the first is a source connector that reads lines from an input file and produces each to a Kafka topic
-and the second is a sink connector that reads messages from a Kafka topic and produces each as a line in an output file.
-</p>
-
-<p>
-During startup you'll see a number of log messages, including some indicating that the connectors are being instantiated.
-Once the Kafka Connect process has started, the source connector should start reading lines from <code>test.txt</code> and
-producing them to the topic <code>connect-test</code>, and the sink connector should start reading messages from the topic <code>connect-test</code>
-and write them to the file <code>test.sink.txt</code>. We can verify the data has been delivered through the entire pipeline
-by examining the contents of the output file:
-</p>
-
-
-<pre class="line-numbers"><code class="language-bash">> more test.sink.txt
-foo
-bar</code></pre>
-
-<p>
-Note that the data is being stored in the Kafka topic <code>connect-test</code>, so we can also run a console consumer to see the
-data in the topic (or use custom consumer code to process it):
-</p>
-
-
-<pre class="line-numbers"><code class="language-bash">>
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic
connect-test --from-beginning
-{"schema":{"type":"string","optional":false},"payload":"foo"}
-{"schema":{"type":"string","optional":false},"payload":"bar"}
-...</code></pre>
-
-<p>The connectors continue to process data, so we can add data to the file and see it move through the pipeline:</p>
-
-<pre class="line-numbers"><code class="language-bash">> echo Another line>>
test.txt</code></pre>
-
-<p>You should see the line appear in the console consumer output and in the sink file.</p>
-
-<h4><a id="quickstart_kafkastreams" href="#quickstart_kafkastreams">Step 8:
Use Kafka Streams to process data</a></h4>
-
-<p>
-  Kafka Streams is a client library for building mission-critical real-time applications and microservices,
-  where the input and/or output data is stored in Kafka clusters. Kafka Streams combines the simplicity of
-  writing and deploying standard Java and Scala applications on the client side with the benefits of Kafka's
-  server-side cluster technology to make these applications highly scalable, elastic, fault-tolerant, distributed,
-  and much more. This <a href="/{{version}}/documentation/streams/quickstart">quickstart example</a> will demonstrate how
-  to run a streaming application coded in this library.
-</p>
-
-
-</script>
-
-<div class="p-quickstart"></div>