[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/16918



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-14 Thread tcondie
Github user tcondie commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r101168119
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -187,50 +306,68 @@ The following options must be set for the Kafka 
source.
 The following configurations are optional:
 
 
-Option | value | default | meaning
+Option | value | default | query type | meaning
 
   startingOffsets
-  earliest, latest, or json string
-  {"topicA":{"0":23,"1":-1},"topicB":{"0":-2}}
+  "earliest", "latest" (streaming only), or json string
+  """ {"topicA":{"0":23,"1":-1},"topicB":{"0":-2}} """
   
-  latest
+  "latest" for streaming, "earliest" for batch
+  streaming and batch
   The start point when a query is started, either "earliest" which is 
from the earliest offsets,
   "latest" which is just from the latest offsets, or a json string 
specifying a starting offset for
   each TopicPartition.  In the json, -2 as an offset can be used to refer 
to earliest, -1 to latest.
-  Note: This only applies when a new Streaming query is started, and that 
resuming will always pick
-  up from where the query left off. Newly discovered partitions during a 
query will start at
+  Note: For Batch queries, latest (either implicitly or by using -1 in 
json) is not allowed.
--- End diff --

Agreed. The previous version had "S" caps, so I wasn't sure if this was a 
convention or not. I'll make the correction.



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-14 Thread tdas
Github user tdas commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r101160486
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -187,50 +306,68 @@ The following options must be set for the Kafka 
source.
 The following configurations are optional:
 
 
-Option | value | default | meaning
+Option | value | default | query type | meaning
 
   startingOffsets
-  earliest, latest, or json string
-  {"topicA":{"0":23,"1":-1},"topicB":{"0":-2}}
+  "earliest", "latest" (streaming only), or json string
+  """ {"topicA":{"0":23,"1":-1},"topicB":{"0":-2}} """
   
-  latest
+  "latest" for streaming, "earliest" for batch
+  streaming and batch
   The start point when a query is started, either "earliest" which is 
from the earliest offsets,
   "latest" which is just from the latest offsets, or a json string 
specifying a starting offset for
   each TopicPartition.  In the json, -2 as an offset can be used to refer 
to earliest, -1 to latest.
-  Note: This only applies when a new Streaming query is started, and that 
resuming will always pick
-  up from where the query left off. Newly discovered partitions during a 
query will start at
+  Note: For Batch queries, latest (either implicitly or by using -1 in 
json) is not allowed.
--- End diff --

nit: why are B and S in Batch queries and Streaming queries randomly capitalized? I 
don't think "batch query" is the proper name of any API, and neither is "streaming 
query", unless you are referring to the specific class "StreamingQuery".



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-14 Thread tdas
Github user tdas commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r101160295
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -187,50 +306,68 @@ The following options must be set for the Kafka 
source.
 The following configurations are optional:
 
 
-Option | value | default | meaning
+Option | value | default | query type | meaning
 
   startingOffsets
-  earliest, latest, or json string
-  {"topicA":{"0":23,"1":-1},"topicB":{"0":-2}}
+  "earliest", "latest" (streaming only), or json string
+  """ {"topicA":{"0":23,"1":-1},"topicB":{"0":-2}} """
   
-  latest
+  "latest" for streaming, "earliest" for batch
+  streaming and batch
   The start point when a query is started, either "earliest" which is 
from the earliest offsets,
   "latest" which is just from the latest offsets, or a json string 
specifying a starting offset for
   each TopicPartition.  In the json, -2 as an offset can be used to refer 
to earliest, -1 to latest.
-  Note: This only applies when a new Streaming query is started, and that 
resuming will always pick
-  up from where the query left off. Newly discovered partitions during a 
query will start at
+  Note: For Batch queries, latest (either implicitly or by using -1 in 
json) is not allowed.
+  For Streaming queries, this only applies when a new query is started, 
and that resuming will
+  always pick up from where the query left off. Newly discovered 
partitions during a query will start at
   earliest.
 
 
+  endingOffsets
+  latest or json string
+  {"topicA":{"0":23,"1":-1},"topicB":{"0":-1}}
+  
+  latest
+  batch only
+  The end point when a batch query is ended, either "latest" which is 
just referred to the
+  latest, or a json string specifying an ending offset for each 
TopicPartition.  In the json, -1
+  as an offset can be used to refer to latest, and -2 (earliest) as an 
offset is not allowed.
+
+
   failOnDataLoss
   true or false
   true
-  Whether to fail the query when it's possible that data is lost 
(e.g., topics are deleted, or 
+  streaming only
+  Whether to fail the query when it's possible that data is lost 
(e.g., topics are deleted, or
--- End diff --

nit: the *streaming* query
also, could you add what the behavior is for batch queries? That is, a 
batch query will always fail if it fails to read any data.
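
For reference, a minimal sketch of how the option would be passed to a streaming read (broker and topic names are illustrative, taken from the examples above):

```python
# Hypothetical broker/topic names. failOnDataLoss applies to streaming
# queries; a batch query simply fails whenever it cannot read the data
# it was asked for.
df = spark \
  .readStream \
  .format("kafka") \
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
  .option("subscribe", "topic1") \
  .option("failOnDataLoss", "false") \
  .load()
```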




[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-14 Thread tdas
Github user tdas commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r101122357
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -187,50 +305,68 @@ The following options must be set for the Kafka 
source.
 The following configurations are optional:
 
 
-Option | value | default | meaning
+Option | value | default | mode | meaning
 
   startingOffsets
-  earliest, latest, or json string
+  earliest, latest (streaming only), or json string
   {"topicA":{"0":23,"1":-1},"topicB":{"0":-2}}
   
-  latest
+  streaming=latest, batch=earliest
+  streaming and batch
   The start point when a query is started, either "earliest" which is 
from the earliest offsets,
   "latest" which is just from the latest offsets, or a json string 
specifying a starting offset for
   each TopicPartition.  In the json, -2 as an offset can be used to refer 
to earliest, -1 to latest.
-  Note: This only applies when a new Streaming query is started, and that 
resuming will always pick
-  up from where the query left off. Newly discovered partitions during a 
query will start at
+  Note: For Batch, latest (either implicitly or by using -1 in json) is 
not allowed.
+  For Streaming, this only applies when a new Streaming query is started, 
and that resuming will
--- End diff --

nit: For streaming queries, this only ... when a new query is started



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-14 Thread tdas
Github user tdas commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r101122005
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -187,50 +305,68 @@ The following options must be set for the Kafka 
source.
 The following configurations are optional:
 
 
-Option | value | default | meaning
+Option | value | default | mode | meaning
 
   startingOffsets
-  earliest, latest, or json string
+  earliest, latest (streaming only), or json string
   {"topicA":{"0":23,"1":-1},"topicB":{"0":-2}}
   
-  latest
+  streaming=latest, batch=earliest
--- End diff --

same as above, put the values in quotes. It's clearer as: 
"latest" for streaming, 
"earliest" for batch



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-14 Thread tdas
Github user tdas commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r101121799
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -187,50 +305,68 @@ The following options must be set for the Kafka 
source.
 The following configurations are optional:
 
 
-Option | value | default | meaning
+Option | value | default | mode | meaning
 
   startingOffsets
-  earliest, latest, or json string
+  earliest, latest (streaming only), or json string
   {"topicA":{"0":23,"1":-1},"topicB":{"0":-2}}
   
-  latest
+  streaming=latest, batch=earliest
+  streaming and batch
   The start point when a query is started, either "earliest" which is 
from the earliest offsets,
   "latest" which is just from the latest offsets, or a json string 
specifying a starting offset for
   each TopicPartition.  In the json, -2 as an offset can be used to refer 
to earliest, -1 to latest.
-  Note: This only applies when a new Streaming query is started, and that 
resuming will always pick
-  up from where the query left off. Newly discovered partitions during a 
query will start at
+  Note: For Batch, latest (either implicitly or by using -1 in json) is 
not allowed.
--- End diff --

nit: For batch queries...



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-14 Thread tdas
Github user tdas commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r101121720
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -187,50 +305,68 @@ The following options must be set for the Kafka 
source.
 The following configurations are optional:
 
 
-Option | value | default | meaning
+Option | value | default | mode | meaning
 
   startingOffsets
-  earliest, latest, or json string
+  earliest, latest (streaming only), or json string
--- End diff --

can you put the possible values in quotes. e.g. 
```
"earliest", "latest" (only for streaming) or json string """  
{"topicA":{"0":23,"1":-1},"topicB":{"0":-2}} """
```



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-14 Thread tdas
Github user tdas commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r101120977
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -187,50 +305,68 @@ The following options must be set for the Kafka 
source.
 The following configurations are optional:
 
 
-Option | value | default | meaning
+Option | value | default | mode | meaning
--- End diff --

rather than "mode", how about "query type"




[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-14 Thread tdas
Github user tdas commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r101120862
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -152,7 +270,7 @@ Each row in the source has the following schema:
 
 
 
-The following options must be set for the Kafka source.
+The following options must be set for the Kafka source (streaming and 
batch).
--- End diff --

nit: Kafka source for both batch and streaming queries.



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-13 Thread zsxwing
Github user zsxwing commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r100961000
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -119,6 +119,124 @@ ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS 
STRING)")
 
 
 
+### Creating a Kafka Source Batch
+If you have a use case that is better suited to batch processing,
+you can create a Dataset/DataFrame for a defined range of offsets.
+
+
+
+{% highlight scala %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+val ds1 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to multiple topics, specifying explicit Kafka offsets
+val ds2 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1,topic2")
+  .option("startingOffsets", 
"""{"topic1":{"0":23,"1":-2},"topic2":{"0":-2}}""")
+  .option("endingOffsets", 
"""{"topic1":{"0":50,"1":-1},"topic2":{"0":-1}}""")
+  .load()
+ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to a pattern, at the earliest and latest offsets
+val ds3 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribePattern", "topic.*")
+  .option("startingOffsets", "earliest")
+  .option("endingOffsets", "latest")
+  .load()
+ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+{% endhighlight %}
+
+
+{% highlight java %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+Dataset&lt;Row&gt; ds1 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load();
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
+
+// Subscribe to multiple topics, specifying explicit Kafka offsets
+Dataset&lt;Row&gt; ds2 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1,topic2")
+  .option("startingOffsets", 
"{\"topic1\":{\"0\":23,\"1\":-2},\"topic2\":{\"0\":-2}}")
+  .option("endingOffsets", 
"{\"topic1\":{\"0\":50,\"1\":-1},\"topic2\":{\"0\":-1}}")
+  .load();
+ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
+
+// Subscribe to a pattern, at the earliest and latest offsets
+Dataset&lt;Row&gt; ds3 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribePattern", "topic.*")
+  .option("startingOffsets", "earliest")
+  .option("endingOffsets", "latest")
+  .load();
+ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
+
+{% endhighlight %}
+
+
+{% highlight python %}
+
+# Subscribe to 1 topic defaults to the earliest and latest offsets
+ds1 = spark \
+  .read
--- End diff --

You need to add `\` to all lines.
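
For reference, a minimal sketch of the batch read with the continuation characters added (host and topic names are from the example above):

```python
# Without a trailing `\`, Python would parse each line as a separate
# statement and the chained reader calls below would fail.
ds1 = spark \
  .read \
  .format("kafka") \
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
  .option("subscribe", "topic1") \
  .load()
ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
```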



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-13 Thread zsxwing
Github user zsxwing commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r100937348
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -119,6 +119,122 @@ ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS 
STRING)")
 
 
 
+### Creating a Kafka Source Batch
+
+
+
+{% highlight scala %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+val ds1 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to multiple topics, specifying explicit Kafka offsets
+val ds2 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1,topic2")
+  .option("startingOffsets", 
"""{"topic1":{"0":23,"1":-2},"topic2":{"0":-2}}""")
+  .option("endingOffsets", 
"""{"topic1":{"0":50,"1":-1},"topic2":{"0":-1}}""")
+  .load()
+ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to a pattern, at the earliest and latest offsets
+val ds3 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribePattern", "topic.*")
+  .option("startingOffsets", "earliest")
+  .option("endingOffsets", "latest")
+  .load()
+ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+{% endhighlight %}
+
+
+{% highlight java %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+Dataset&lt;Row&gt; ds1 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+
+// Subscribe to multiple topics, specifying explicit Kafka offsets
+Dataset&lt;Row&gt; ds2 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1,topic2")
+  .option("startingOffsets", 
"{\"topic1\":{\"0\":23,\"1\":-2},\"topic2\":{\"0\":-2}}")
+  .option("endingOffsets", 
"{\"topic1\":{\"0\":50,\"1\":-1},\"topic2\":{\"0\":-1}}")
+  .load()
+ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+
+// Subscribe to a pattern, at the earliest and latest offsets
+Dataset&lt;Row&gt; ds3 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribePattern", "topic.*")
+  .option("startingOffsets", "earliest")
+  .option("endingOffsets", "latest")
+  .load()
--- End diff --

nit: missing ";"



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-13 Thread zsxwing
Github user zsxwing commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r100934876
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -187,50 +303,68 @@ The following options must be set for the Kafka 
source.
 The following configurations are optional:
 
 
-Option | value | default | meaning
+Option | value | default | mode | meaning
 
   startingOffsets
-  earliest, latest, or json string
+  earliest, latest (streaming only), or json string
   {"topicA":{"0":23,"1":-1},"topicB":{"0":-2}}
   
-  latest
+  streaming=latest, batch=earliest
+  streaming and batch
   The start point when a query is started, either "earliest" which is 
from the earliest offsets,
   "latest" which is just from the latest offsets, or a json string 
specifying a starting offset for
   each TopicPartition.  In the json, -2 as an offset can be used to refer 
to earliest, -1 to latest.
-  Note: This only applies when a new Streaming query is started, and that 
resuming will always pick
-  up from where the query left off. Newly discovered partitions during a 
query will start at
+  Note: For Batch, latest (either implicitly or by using -1 in json) is 
not allowed.
+  For Streaming, this only applies when a new Streaming query is started, 
and that resuming will
+  always pick up from where the query left off. Newly discovered 
partitions during a query will start at
   earliest.
 
 
+  endingOffsets
+  latest or json string
+  {"topicA":{"0":23,"1":-1},"topicB":{"0":-1}}
+  
+  latest
+  batch only
+  The end point when a batch query is started, either "latest" which 
is just from the latest
--- End diff --

nit: The end point when a batch query is **started**, either "latest" which 
is just **from** the latest -> The end point when a batch query is **ended**, 
either "latest" which is just **referred to** the latest



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-13 Thread zsxwing
Github user zsxwing commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r100937332
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -119,6 +119,122 @@ ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS 
STRING)")
 
 
 
+### Creating a Kafka Source Batch
+
+
+
+{% highlight scala %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+val ds1 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to multiple topics, specifying explicit Kafka offsets
+val ds2 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1,topic2")
+  .option("startingOffsets", 
"""{"topic1":{"0":23,"1":-2},"topic2":{"0":-2}}""")
+  .option("endingOffsets", 
"""{"topic1":{"0":50,"1":-1},"topic2":{"0":-1}}""")
+  .load()
+ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to a pattern, at the earliest and latest offsets
+val ds3 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribePattern", "topic.*")
+  .option("startingOffsets", "earliest")
+  .option("endingOffsets", "latest")
+  .load()
+ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+{% endhighlight %}
+
+
+{% highlight java %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+Dataset&lt;Row&gt; ds1 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+
+// Subscribe to multiple topics, specifying explicit Kafka offsets
+Dataset&lt;Row&gt; ds2 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1,topic2")
+  .option("startingOffsets", 
"{\"topic1\":{\"0\":23,\"1\":-2},\"topic2\":{\"0\":-2}}")
+  .option("endingOffsets", 
"{\"topic1\":{\"0\":50,\"1\":-1},\"topic2\":{\"0\":-1}}")
+  .load()
--- End diff --

nit: missing ";"



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-13 Thread zsxwing
Github user zsxwing commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r100937891
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -119,6 +119,122 @@ ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS 
STRING)")
 
 
 
+### Creating a Kafka Source Batch
+
+
+
+{% highlight scala %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+val ds1 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to multiple topics, specifying explicit Kafka offsets
+val ds2 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1,topic2")
+  .option("startingOffsets", 
"""{"topic1":{"0":23,"1":-2},"topic2":{"0":-2}}""")
+  .option("endingOffsets", 
"""{"topic1":{"0":50,"1":-1},"topic2":{"0":-1}}""")
+  .load()
+ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to a pattern, at the earliest and latest offsets
+val ds3 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribePattern", "topic.*")
+  .option("startingOffsets", "earliest")
+  .option("endingOffsets", "latest")
+  .load()
+ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+{% endhighlight %}
+
+
+{% highlight java %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+Dataset&lt;Row&gt; ds1 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+
+// Subscribe to multiple topics, specifying explicit Kafka offsets
+Dataset&lt;Row&gt; ds2 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1,topic2")
+  .option("startingOffsets", 
"{\"topic1\":{\"0\":23,\"1\":-2},\"topic2\":{\"0\":-2}}")
+  .option("endingOffsets", 
"{\"topic1\":{\"0\":50,\"1\":-1},\"topic2\":{\"0\":-1}}")
+  .load()
+ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+
+// Subscribe to a pattern, at the earliest and latest offsets
+Dataset&lt;Row&gt; ds3 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribePattern", "topic.*")
+  .option("startingOffsets", "earliest")
+  .option("endingOffsets", "latest")
+  .load()
+ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+
+{% endhighlight %}
+
+
+{% highlight python %}
+
+# Subscribe to 1 topic defaults to the earliest and latest offsets
+ds1 = spark
--- End diff --

You need to add `\` to tell the Python interpreter that the statement continues on the next line.



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-13 Thread zsxwing
Github user zsxwing commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r100937314
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -119,6 +119,122 @@ ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS 
STRING)")
 
 
 
+### Creating a Kafka Source Batch
+
+
+
+{% highlight scala %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+val ds1 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to multiple topics, specifying explicit Kafka offsets
+val ds2 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1,topic2")
+  .option("startingOffsets", 
"""{"topic1":{"0":23,"1":-2},"topic2":{"0":-2}}""")
+  .option("endingOffsets", 
"""{"topic1":{"0":50,"1":-1},"topic2":{"0":-1}}""")
+  .load()
+ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to a pattern, at the earliest and latest offsets
+val ds3 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribePattern", "topic.*")
+  .option("startingOffsets", "earliest")
+  .option("endingOffsets", "latest")
+  .load()
+ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+{% endhighlight %}
+
+
+{% highlight java %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+Dataset&lt;Row&gt; ds1 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
--- End diff --

nit: missing ";"



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-13 Thread zsxwing
Github user zsxwing commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r100937204
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -187,50 +303,68 @@ The following options must be set for the Kafka 
source.
 The following configurations are optional:
 
 
-Option | value | default | meaning
+Option | value | default | mode | meaning
 
   startingOffsets
-  earliest, latest, or json string
+  earliest, latest (streaming only), or json string
   {"topicA":{"0":23,"1":-1},"topicB":{"0":-2}}
   
-  latest
+  streaming=latest, batch=earliest
+  streaming and batch
   The start point when a query is started, either "earliest" which is 
from the earliest offsets,
   "latest" which is just from the latest offsets, or a json string 
specifying a starting offset for
   each TopicPartition.  In the json, -2 as an offset can be used to refer 
to earliest, -1 to latest.
-  Note: This only applies when a new Streaming query is started, and that 
resuming will always pick
-  up from where the query left off. Newly discovered partitions during a 
query will start at
+  Note: For Batch, latest (either implicitly or by using -1 in json) is 
not allowed.
+  For Streaming, this only applies when a new Streaming query is started, 
and that resuming will
+  always pick up from where the query left off. Newly discovered 
partitions during a query will start at
   earliest.
 
 
+  endingOffsets
+  latest or json string
+  {"topicA":{"0":23,"1":-1},"topicB":{"0":-1}}
+  
+  latest
+  batch only
+  The end point when a batch query is started, either "latest" which 
is just from the latest
--- End diff --

nit: a json string specifying **a starting** offset for each 
TopicPartition. -> a json string specifying **an ending** offset for each 
TopicPartition.



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-13 Thread zsxwing
Github user zsxwing commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r100937342
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -119,6 +119,122 @@ ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS 
STRING)")
 
 
 
+### Creating a Kafka Source Batch
+
+
+
+{% highlight scala %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+val ds1 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to multiple topics, specifying explicit Kafka offsets
+val ds2 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1,topic2")
+  .option("startingOffsets", 
"""{"topic1":{"0":23,"1":-2},"topic2":{"0":-2}}""")
+  .option("endingOffsets", 
"""{"topic1":{"0":50,"1":-1},"topic2":{"0":-1}}""")
+  .load()
+ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to a pattern, at the earliest and latest offsets
+val ds3 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribePattern", "topic.*")
+  .option("startingOffsets", "earliest")
+  .option("endingOffsets", "latest")
+  .load()
+ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+{% endhighlight %}
+
+
+{% highlight java %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+Dataset&lt;Row&gt; ds1 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+
+// Subscribe to multiple topics, specifying explicit Kafka offsets
+Dataset&lt;Row&gt; ds2 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1,topic2")
+  .option("startingOffsets", 
"{\"topic1\":{\"0\":23,\"1\":-2},\"topic2\":{\"0\":-2}}")
+  .option("endingOffsets", 
"{\"topic1\":{\"0\":50,\"1\":-1},\"topic2\":{\"0\":-1}}")
+  .load()
+ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
--- End diff --

nit: missing ";"



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-13 Thread zsxwing
Github user zsxwing commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r100937355
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -119,6 +119,122 @@ ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS 
STRING)")
 
 
 
+### Creating a Kafka Source Batch
+
+
+
+{% highlight scala %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+val ds1 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to multiple topics, specifying explicit Kafka offsets
+val ds2 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1,topic2")
+  .option("startingOffsets", 
"""{"topic1":{"0":23,"1":-2},"topic2":{"0":-2}}""")
+  .option("endingOffsets", 
"""{"topic1":{"0":50,"1":-1},"topic2":{"0":-1}}""")
+  .load()
+ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to a pattern, at the earliest and latest offsets
+val ds3 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribePattern", "topic.*")
+  .option("startingOffsets", "earliest")
+  .option("endingOffsets", "latest")
+  .load()
+ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+{% endhighlight %}
+
+
+{% highlight java %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+Dataset&lt;Row&gt; ds1 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+
+// Subscribe to multiple topics, specifying explicit Kafka offsets
+Dataset&lt;Row&gt; ds2 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1,topic2")
+  .option("startingOffsets", 
"{\"topic1\":{\"0\":23,\"1\":-2},\"topic2\":{\"0\":-2}}")
+  .option("endingOffsets", 
"{\"topic1\":{\"0\":50,\"1\":-1},\"topic2\":{\"0\":-1}}")
+  .load()
+ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+
+// Subscribe to a pattern, at the earliest and latest offsets
+Dataset&lt;Row&gt; ds3 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribePattern", "topic.*")
+  .option("startingOffsets", "earliest")
+  .option("endingOffsets", "latest")
+  .load()
+ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
--- End diff --

nit: missing ";"



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-13 Thread zsxwing
Github user zsxwing commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r100934257
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -119,6 +119,122 @@ ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS 
STRING)")
 
 
 
+### Creating a Kafka Source Batch
+
--- End diff --

It's better to add a sentence here before the code, such as `If you have a 
use case that is better suited to batch processing, you can create a 
Dataset/DataFrame for a defined range of offsets.`
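
For reference, a minimal sketch of such a batch read over a defined range of offsets (topic name and offsets are illustrative):

```python
# Reads partition 0 of topic1 from offset 23 up to, but not including,
# offset 50; endingOffsets is exclusive.
df = spark \
  .read \
  .format("kafka") \
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
  .option("subscribe", "topic1") \
  .option("startingOffsets", """{"topic1":{"0":23}}""") \
  .option("endingOffsets", """{"topic1":{"0":50}}""") \
  .load()
```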



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-13 Thread zsxwing
Github user zsxwing commented on a diff in the pull request:

https://github.com/apache/spark/pull/16918#discussion_r100937326
  
--- Diff: docs/structured-streaming-kafka-integration.md ---
@@ -119,6 +119,122 @@ ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS 
STRING)")
 
 
 
+### Creating a Kafka Source Batch
+
+
+
+{% highlight scala %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+val ds1 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to multiple topics, specifying explicit Kafka offsets
+val ds2 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1,topic2")
+  .option("startingOffsets", 
"""{"topic1":{"0":23,"1":-2},"topic2":{"0":-2}}""")
+  .option("endingOffsets", 
"""{"topic1":{"0":50,"1":-1},"topic2":{"0":-1}}""")
+  .load()
+ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+// Subscribe to a pattern, at the earliest and latest offsets
+val ds3 = spark
+  .read
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribePattern", "topic.*")
+  .option("startingOffsets", "earliest")
+  .option("endingOffsets", "latest")
+  .load()
+ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
+  .as[(String, String)]
+
+{% endhighlight %}
+
+
+{% highlight java %}
+
+// Subscribe to 1 topic defaults to the earliest and latest offsets
+Dataset&lt;Row&gt; ds1 = spark
+  .read()
+  .format("kafka")
+  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
+  .option("subscribe", "topic1")
+  .load()
+ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
--- End diff --

nit: missing ";"



[GitHub] spark pull request #16918: [SPARK-19584] [SS] [DOCS] update structured strea...

2017-02-13 Thread tcondie
GitHub user tcondie opened a pull request:

https://github.com/apache/spark/pull/16918

[SPARK-19584] [SS] [DOCS] update structured streaming documentation around 
batch mode

## What changes were proposed in this pull request?

Revision to structured-streaming-kafka-integration.md to reflect new Batch 
query specification and options. 

@zsxwing @tdas

## How was this patch tested?

N/A

Please review http://spark.apache.org/contributing.html before opening a 
pull request.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tcondie/spark kafka-docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/16918.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #16918


commit debe111da3cf1981a4c666554eba669be248f9d4
Author: Tyson Condie 
Date:   2017-02-14T00:01:06Z

update structured streaming documentation around batch mode



