Repository: spark
Updated Branches:
  refs/heads/branch-2.0 59032570f -> f35b10ab1


[SPARK-17264][SQL] DataStreamWriter should document that it only supports Parquet for now

## What changes were proposed in this pull request?

Clarify that only Parquet files are supported by DataStreamWriter for now.
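
For context, a minimal sketch of the API this doc change describes: writing a streaming Dataset out with `format("parquet")`, the only built-in file sink called out in the updated docs. The socket source, output path, and checkpoint location below are illustrative placeholders, not part of this patch.

```scala
// Minimal usage sketch, not part of this patch: the "socket" source, output
// path, and checkpoint location are illustrative placeholders.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("ParquetSinkSketch").getOrCreate()

// Read a stream of text lines from a socket (a built-in test source).
val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

// Write the stream with the "parquet" sink -- per this change, the only
// built-in file format documented for DataStreamWriter for now.
val query = lines.writeStream
  .format("parquet")
  .option("path", "/tmp/parquet-sink")               // output directory (placeholder)
  .option("checkpointLocation", "/tmp/checkpoints")  // required by the file sink
  .start()

query.awaitTermination()
```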

## How was this patch tested?

(Doc build -- no functional changes to test)

Author: Sean Owen <[email protected]>

Closes #14860 from srowen/SPARK-17264.

(cherry picked from commit befab9c1c6b59ad90f63a7d10e12b186be897f15)
Signed-off-by: Sean Owen <[email protected]>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f35b10ab
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f35b10ab
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f35b10ab

Branch: refs/heads/branch-2.0
Commit: f35b10ab1556e3ea76ce2322af7b6749b7f1357f
Parents: 5903257
Author: Sean Owen <[email protected]>
Authored: Tue Aug 30 11:19:45 2016 +0100
Committer: Sean Owen <[email protected]>
Committed: Tue Aug 30 11:19:53 2016 +0100

----------------------------------------------------------------------
 python/pyspark/sql/streaming.py                                    | 2 +-
 .../scala/org/apache/spark/sql/streaming/DataStreamWriter.scala    | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/f35b10ab/python/pyspark/sql/streaming.py
----------------------------------------------------------------------
diff --git a/python/pyspark/sql/streaming.py b/python/pyspark/sql/streaming.py
index 3761d2b..9487f9d 100644
--- a/python/pyspark/sql/streaming.py
+++ b/python/pyspark/sql/streaming.py
@@ -589,7 +589,7 @@ class DataStreamWriter(object):
 
         .. note:: Experimental.
 
-        :param source: string, name of the data source, e.g. 'json', 'parquet'.
+        :param source: string, name of the data source, which for now can be 'parquet'.
 
         >>> writer = sdf.writeStream.format('json')
         """

http://git-wip-us.apache.org/repos/asf/spark/blob/f35b10ab/sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala b/sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
index d38e3e5..f70c7d0 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala
@@ -122,7 +122,7 @@ final class DataStreamWriter[T] private[sql](ds: Dataset[T]) {
 
   /**
    * :: Experimental ::
-   * Specifies the underlying output data source. Built-in options include "parquet", "json", etc.
+   * Specifies the underlying output data source. Built-in options include "parquet" for now.
    *
    * @since 2.0.0
    */

