This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit 14d281a3918b6d13d385b3511a2c79581194fb26
Author: Xiang Fu <fx19880...@gmail.com>
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

    Adding kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 46 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 44 insertions(+), 2 deletions(-)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..7117534 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -162,7 +162,9 @@ How to build and release Pinot package with Kafka 2.x connector
 How to use Kafka 2.x connector
 ------------------------------
 
-Below is a sample `streamConfigs` used to create a realtime table with Kafka Stream(High) level consumer:
+Below is a sample ``streamConfigs`` used to create a realtime table with Kafka Stream(High) level consumer.
+
+Similar to the Kafka 0.9 HLC consumer, the Kafka 2.x HLC consumer sets the config ``stream.kafka.consumer.factory.class.name`` to ``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory``.
 
 .. code-block:: none
 
@@ -177,10 +179,50 @@ Below is a sample realtime table with Kafka Str
     "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
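+
+As a sketch (assuming the other keys in the HLC sample above stay unchanged), the factory entry in the HLC ``streamConfigs`` would look like:
+
+.. code-block:: none
+
+    "stream.kafka.consumer.factory.class.name": "org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",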
 
+Below is a sample table config used to create a realtime table with Kafka Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+    "tableName": "meetupRsvp",
+    "tableType": "REALTIME",
+    "segmentsConfig": {
+      "timeColumnName": "mtime",
+      "timeType": "MILLISECONDS",
+      "segmentPushType": "APPEND",
+      "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+      "schemaName": "meetupRsvp",
+      "replication": "1",
+      "replicasPerPartition": "1"
+    },
+    "tenants": {},
+    "tableIndexConfig": {
+      "loadMode": "MMAP",
+      "streamConfigs": {
+        "streamType": "kafka",
+        "stream.kafka.consumer.type": "simple",
+        "stream.kafka.topic.name": "meetupRSVPEvents",
+        "stream.kafka.decoder.class.name": "org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+        "stream.kafka.consumer.factory.class.name": "org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+        "stream.kafka.zk.broker.url": "localhost:2191/kafka",
+        "stream.kafka.broker.list": "localhost:19092"
+      }
+    },
+    "metadata": {
+      "customConfigs": {}
+    }
+  }
+
+Please note that:
+
+1. The config ``replicasPerPartition`` under ``segmentsConfig`` is required to specify the table replication.
+2. The config ``stream.kafka.consumer.type`` must be set to ``simple`` to use the partition level consumer.
+3. The configs ``stream.kafka.zk.broker.url`` and ``stream.kafka.broker.list`` are required under ``tableIndexConfig.streamConfigs`` to provide Kafka-related information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 -------------------------------------------------------
 
-* Update  table config:
+* Update table config for both high level and low level consumers:
 Update config: ``stream.kafka.consumer.factory.class.name`` from ``org.apache.pinot.core.realtime.impl.kafka.KafkaConsumerFactory`` to ``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory``.
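+
+For example, a minimal before/after sketch of this change (only this key changes; all other ``streamConfigs`` entries are assumed to stay the same):
+
+.. code-block:: none
+
+  Before (Kafka 0.9 connector):
+    "stream.kafka.consumer.factory.class.name": "org.apache.pinot.core.realtime.impl.kafka.KafkaConsumerFactory"
+
+  After (Kafka 2.x connector):
+    "stream.kafka.consumer.factory.class.name": "org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory"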
 
 * If using Stream(High) level consumer:

