Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/22703#discussion_r224936431
--- Diff: docs/streaming-kafka-0-10-integration.md ---
@@ -3,7 +3,11 @@ layout: global
title: Spark Streaming + Kafka Integration Guide (Kafka broker version 0.10.0 or higher)
---
-The Spark Streaming integration for Kafka 0.10 is similar in design to the 0.8 [Direct Stream approach](streaming-kafka-0-8-integration.html#approach-2-direct-approach-no-receivers). It provides simple parallelism, 1:1 correspondence between Kafka partitions and Spark partitions, and access to offsets and metadata. However, because the newer integration uses the [new Kafka consumer API](http://kafka.apache.org/documentation.html#newconsumerapi) instead of the simple API, there are notable differences in usage. This version of the integration is marked as experimental, so the API is potentially subject to change.
+The Spark Streaming integration for Kafka 0.10 provides simple parallelism, 1:1 correspondence between Kafka
+partitions and Spark partitions, and access to offsets and metadata. However, because the newer integration uses
+the [new Kafka consumer API](https://kafka.apache.org/documentation.html#newconsumerapi) instead of the simple API,
+there are notable differences in usage. This version of the integration is marked as experimental, so the API is
--- End diff ---
Yeah, good general point. Is the Kafka 0.10 integration at all experimental anymore? Is anything that survives from 2.x to 3.x still experimental? I'd say "no" in almost all cases. What are your personal views on that?
---