Cool! Thanks for your input, Jacek and Mark!

From: Mark Hamstra [mailto:m...@clearstorydata.com]
Sent: 13 January 2017 12:59
To: Phadnis, Varun <phad...@sky.optymyze.com>
Cc: user@spark.apache.org
Subject: Re: Spark and Kafka integration

See "API compatibility" in http://spark.apache.org/versioning-policy.html

While code that is annotated as Experimental is still a good faith effort to 
provide a stable and useful API, the fact is that we're not yet confident 
enough that we've got the public API in exactly the form that we want to commit 
to maintaining until at least the next major release.  That means that the API 
may change in the next minor/feature-level release (but it shouldn't in a 
patch/bugfix-level release), which would require that your source code be 
rewritten to use the new API.  In the most extreme case, we may decide that the 
experimental code didn't work out the way we wanted, so it could be withdrawn 
entirely.  Complete withdrawal of the Kafka code is unlikely, but it may well 
change in incompatible ways in future releases, even before Spark 3.0.0.
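
To make that concrete, here is a minimal sketch of the current 0-10 direct 
stream API in Scala (the broker address, topic, group id, and app name are 
placeholders, not taken from anyone's actual setup). Call sites like this are 
exactly what would need rewriting if the API changes in a later minor release:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val conf = new SparkConf().setAppName("kafka-010-sketch")
val ssc = new StreamingContext(conf, Seconds(5))

// Placeholder connection settings; adjust for the actual cluster.
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "example-group",
  "auto.offset.reset" -> "latest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)

// createDirectStream and the location/consumer strategy helpers are among
// the pieces marked @Experimental in 2.0.x, so their shapes may change
// between minor releases.
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](Seq("example-topic"), kafkaParams)
)

stream.map(record => (record.key, record.value)).print()
ssc.start()
ssc.awaitTermination()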

On Thu, Jan 12, 2017 at 5:57 AM, Phadnis, Varun 
<phad...@sky.optymyze.com> wrote:
Hello,

We are using Spark 2.0 with Kafka 0.10.

As I understand it, much of the API packaged in the following dependency that 
we are targeting is marked "@Experimental":

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
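
By "marked", I mean the org.apache.spark.annotation.Experimental annotation 
that appears on the public classes and methods in that artifact, roughly like 
this (a hypothetical class for illustration, not copied from the Spark 
sources):

import org.apache.spark.annotation.Experimental

// Hypothetical example type; the real Kafka integration classes carry
// the same marker on their declarations.
@Experimental
class ExampleKafkaSource {
  def poll(): Seq[String] = Seq.empty
}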

What are the implications of this being marked as experimental? Is the API 
stable enough for production use?

Thanks,
Varun

