[incubator-pinot] 01/01: Adding Kafka 2.0 doc for using simple consumer

2019-08-03 Thread xiangfu
This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch kafka2.0_doc
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git

commit c4dceab933ac2529ad67cc76ff12a9fb5d161b3c
Author: Xiang Fu 
AuthorDate: Sat Aug 3 16:22:18 2019 +0800

Adding Kafka 2.0 doc for using simple consumer
---
 docs/pluggable_streams.rst | 51 +++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 49 insertions(+), 2 deletions(-)

diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ff27cde..b07c38b 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -162,7 +162,11 @@ How to build and release Pinot package with Kafka 2.x connector
 How to use Kafka 2.x connector
 ------------------------------
 
-Below is a sample `streamConfigs` used to create a realtime table with Kafka Stream(High) level consumer:
+- **Use Kafka Stream(High) Level Consumer**
+
+Below is a sample ``streamConfigs`` used to create a realtime table with the Kafka Stream(High) level consumer.
+
+The Kafka 2.x HLC consumer uses ``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory``, set via the ``stream.kafka.consumer.factory.class.name`` config.
 
 .. code-block:: none
 
@@ -177,10 +181,53 @@ Below is a sample `streamConfigs` used to create a realtime table with Kafka Str
 "stream.kafka.hlc.bootstrap.server": "localhost:19092"
   }
 
+
+- **Use Kafka Partition(Low) Level Consumer**
+
+Below is a sample table config used to create a realtime table with Kafka Partition(Low) level consumer:
+
+.. code-block:: none
+
+  {
+    "tableName": "meetupRsvp",
+    "tableType": "REALTIME",
+    "segmentsConfig": {
+      "timeColumnName": "mtime",
+      "timeType": "MILLISECONDS",
+      "segmentPushType": "APPEND",
+      "segmentAssignmentStrategy": "BalanceNumSegmentAssignmentStrategy",
+      "schemaName": "meetupRsvp",
+      "replication": "1",
+      "replicasPerPartition": "1"
+    },
+    "tenants": {},
+    "tableIndexConfig": {
+      "loadMode": "MMAP",
+      "streamConfigs": {
+        "streamType": "kafka",
+        "stream.kafka.consumer.type": "simple",
+        "stream.kafka.topic.name": "meetupRSVPEvents",
+        "stream.kafka.decoder.class.name": "org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
+        "stream.kafka.consumer.factory.class.name": "org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
+        "stream.kafka.zk.broker.url": "localhost:2191/kafka",
+        "stream.kafka.broker.list": "localhost:19092"
+      }
+    },
+    "metadata": {
+      "customConfigs": {}
+    }
+  }
+
+Please note that:
+
+1. The config ``replicasPerPartition`` under ``segmentsConfig`` is required to specify table replication.
+2. The config ``stream.kafka.consumer.type`` should be set to ``simple`` to use the partition-level consumer.
+3. The configs ``stream.kafka.zk.broker.url`` and ``stream.kafka.broker.list`` are required under ``tableIndexConfig.streamConfigs`` to provide Kafka-related information.
+
 Upgrade from Kafka 0.9 connector to Kafka 2.x connector
 -------------------------------------------------------
 
-* Update  table config:
+* Update table config for both high-level and low-level consumers:
 Update config: ``stream.kafka.consumer.factory.class.name`` from ``org.apache.pinot.core.realtime.impl.kafka.KafkaConsumerFactory`` to ``org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory``.
 
 * If using Stream(High) level consumer:

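For reference, a complete HLC ``streamConfigs`` would look roughly like the sketch below. Only the ``stream.kafka.hlc.bootstrap.server`` line is visible in the hunk above; the remaining keys, including the ``highLevel`` consumer type and the ``stream.kafka.hlc.zk.connect.string`` key, are assumptions that mirror the partition-level sample:

.. code-block:: none

  "streamConfigs": {
    "streamType": "kafka",
    "stream.kafka.consumer.type": "highLevel",
    "stream.kafka.topic.name": "meetupRSVPEvents",
    "stream.kafka.decoder.class.name": "org.apache.pinot.core.realtime.impl.kafka.KafkaJSONMessageDecoder",
    "stream.kafka.consumer.factory.class.name": "org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory",
    "stream.kafka.hlc.zk.connect.string": "localhost:2191/kafka",
    "stream.kafka.hlc.bootstrap.server": "localhost:19092"
  }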

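Concretely, the upgrade step amounts to a one-line change in each table's ``streamConfigs``, using the two factory class names named in the hunk above (surrounding keys elided):

.. code-block:: none

  Before (Kafka 0.9 connector):
  "stream.kafka.consumer.factory.class.name": "org.apache.pinot.core.realtime.impl.kafka.KafkaConsumerFactory"

  After (Kafka 2.x connector):
  "stream.kafka.consumer.factory.class.name": "org.apache.pinot.core.realtime.impl.kafka2.KafkaConsumerFactory"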