Repository: spark
Updated Branches:
  refs/heads/master 4f01265f7 -> 2300eb58a


[SPARK-3773][PySpark][Doc] Sphinx build warning

Building the Sphinx documentation for PySpark produces 12 warnings, most of
them caused by docstrings in broken ReST format.

To reproduce this issue, run the following commands on commit
6e27cb630de69fa5acb510b4e2f6b980742b1957:

```bash
$ cd ./python/docs
$ make clean html
...
/Users/<user>/MyRepos/Scala/spark/python/pyspark/__init__.py:docstring of pyspark.SparkContext.sequenceFile:4: ERROR: Unexpected indentation.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/__init__.py:docstring of pyspark.RDD.saveAsSequenceFile:4: ERROR: Unexpected indentation.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.LogisticRegressionWithSGD.train:14: ERROR: Unexpected indentation.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.LogisticRegressionWithSGD.train:16: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.LogisticRegressionWithSGD.train:17: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.SVMWithSGD.train:14: ERROR: Unexpected indentation.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.SVMWithSGD.train:16: WARNING: Definition list ends without a blank line; unexpected unindent.
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/classification.py:docstring of pyspark.mllib.classification.SVMWithSGD.train:17: WARNING: Block quote ends without a blank line; unexpected unindent.
/Users/<user>/MyRepos/Scala/spark/python/docs/pyspark.mllib.rst:50: WARNING: missing attribute mentioned in :members: or __all__: module pyspark.mllib.regression, attribute RidgeRegressionModelLinearRegressionWithSGD
/Users/<user>/MyRepos/Scala/spark/python/pyspark/mllib/tree.py:docstring of pyspark.mllib.tree.DecisionTreeModel.predict:3: ERROR: Unexpected indentation.
...
checking consistency... 
/Users/<user>/MyRepos/Scala/spark/python/docs/modules.rst:: WARNING: document isn't included in any toctree
...
copying static files... WARNING: html_static_path entry u'/Users/<user>/MyRepos/Scala/spark/python/docs/_static' does not exist
...
build succeeded, 12 warnings.
```
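Most of these warnings come down to a single ReST rule: an indented block (an
enumerated list, definition list, or field list) inside a docstring must be
separated from the preceding paragraph by a blank line, otherwise the ReST
parser reports "Unexpected indentation". A minimal sketch with hypothetical
functions (not Spark code) illustrating the before/after:

```python
# Sketch of the ReST rule behind the "Unexpected indentation" errors:
# an indented block that directly follows a paragraph, with no blank
# line in between, is rejected by Sphinx. The fix in this commit is to
# insert a single blank line before each such block.

def broken():
    """Read a file. The mechanism is as follows:
        1. a Java RDD is created
        2. serialization is attempted
    """

def fixed():
    """Read a file. The mechanism is as follows:

        1. a Java RDD is created
        2. serialization is attempted
    """

# The only difference is the blank line before the indented list.
print("\n\n" in fixed.__doc__)   # True
print("\n\n" in broken.__doc__)  # False
```

This is exactly the shape of the one-line `+` hunks in context.py, tree.py,
and rdd.py below.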

Author: cocoatomo <[email protected]>

Closes #2653 from cocoatomo/issues/3773-sphinx-build-warnings and squashes the following commits:

6f65661 [cocoatomo] [SPARK-3773][PySpark][Doc] Sphinx build warning


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2300eb58
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/2300eb58
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/2300eb58

Branch: refs/heads/master
Commit: 2300eb58ae79a86e65b3ff608a578f5d4c09892b
Parents: 4f01265
Author: cocoatomo <[email protected]>
Authored: Mon Oct 6 14:08:40 2014 -0700
Committer: Josh Rosen <[email protected]>
Committed: Mon Oct 6 14:08:40 2014 -0700

----------------------------------------------------------------------
 python/docs/modules.rst                |  7 -------
 python/pyspark/context.py              |  1 +
 python/pyspark/mllib/classification.py | 26 ++++++++++++++++----------
 python/pyspark/mllib/regression.py     | 15 +++++++++------
 python/pyspark/mllib/tree.py           |  1 +
 python/pyspark/rdd.py                  |  1 +
 6 files changed, 28 insertions(+), 23 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/2300eb58/python/docs/modules.rst
----------------------------------------------------------------------
diff --git a/python/docs/modules.rst b/python/docs/modules.rst
deleted file mode 100644
index 1835646..0000000
--- a/python/docs/modules.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-.
-=
-
-.. toctree::
-   :maxdepth: 4
-
-   pyspark

http://git-wip-us.apache.org/repos/asf/spark/blob/2300eb58/python/pyspark/context.py
----------------------------------------------------------------------
diff --git a/python/pyspark/context.py b/python/pyspark/context.py
index e941832..a45d79d 100644
--- a/python/pyspark/context.py
+++ b/python/pyspark/context.py
@@ -410,6 +410,7 @@ class SparkContext(object):
         Read a Hadoop SequenceFile with arbitrary key and value Writable class from HDFS,
         a local file system (available on all nodes), or any Hadoop-supported file system URI.
         The mechanism is as follows:
+
             1. A Java RDD is created from the SequenceFile or other InputFormat, and the key
                and value Writable classes
             2. Serialization is attempted via Pyrolite pickling

http://git-wip-us.apache.org/repos/asf/spark/blob/2300eb58/python/pyspark/mllib/classification.py
----------------------------------------------------------------------
diff --git a/python/pyspark/mllib/classification.py b/python/pyspark/mllib/classification.py
index ac142fb..a765b1c 100644
--- a/python/pyspark/mllib/classification.py
+++ b/python/pyspark/mllib/classification.py
@@ -89,11 +89,14 @@ class LogisticRegressionWithSGD(object):
         @param regParam:          The regularizer parameter (default: 1.0).
         @param regType:           The type of regularizer used for training
                                   our model.
-                                  Allowed values: "l1" for using L1Updater,
-                                                  "l2" for using
-                                                       SquaredL2Updater,
-                                                  "none" for no regularizer.
-                                  (default: "none")
+
+                                  :Allowed values:
+                                     - "l1" for using L1Updater
+                                     - "l2" for using SquaredL2Updater
+                                     - "none" for no regularizer
+
+                                     (default: "none")
+
         @param intercept:         Boolean parameter which indicates the use
                                   or not of the augmented representation for
                                   training data (i.e. whether bias features
@@ -158,11 +161,14 @@ class SVMWithSGD(object):
         @param initialWeights:    The initial weights (default: None).
         @param regType:           The type of regularizer used for training
                                   our model.
-                                  Allowed values: "l1" for using L1Updater,
-                                                  "l2" for using
-                                                       SquaredL2Updater,
-                                                  "none" for no regularizer.
-                                  (default: "none")
+
+                                  :Allowed values:
+                                     - "l1" for using L1Updater
+                                     - "l2" for using SquaredL2Updater,
+                                     - "none" for no regularizer.
+
+                                     (default: "none")
+
         @param intercept:         Boolean parameter which indicates the use
                                   or not of the augmented representation for
                                   training data (i.e. whether bias features

http://git-wip-us.apache.org/repos/asf/spark/blob/2300eb58/python/pyspark/mllib/regression.py
----------------------------------------------------------------------
diff --git a/python/pyspark/mllib/regression.py b/python/pyspark/mllib/regression.py
index 8fe8c6d..54f34a9 100644
--- a/python/pyspark/mllib/regression.py
+++ b/python/pyspark/mllib/regression.py
@@ -22,7 +22,7 @@ from pyspark import SparkContext
 from pyspark.mllib.linalg import SparseVector, _convert_to_vector
 from pyspark.serializers import PickleSerializer, AutoBatchedSerializer
 
-__all__ = ['LabeledPoint', 'LinearModel', 'LinearRegressionModel', 'RidgeRegressionModel'
+__all__ = ['LabeledPoint', 'LinearModel', 'LinearRegressionModel', 'RidgeRegressionModel',
            'LinearRegressionWithSGD', 'LassoWithSGD', 'RidgeRegressionWithSGD']
 
 
@@ -155,11 +155,14 @@ class LinearRegressionWithSGD(object):
         @param regParam:          The regularizer parameter (default: 1.0).
         @param regType:           The type of regularizer used for training
                                   our model.
-                                  Allowed values: "l1" for using L1Updater,
-                                                  "l2" for using
-                                                       SquaredL2Updater,
-                                                  "none" for no regularizer.
-                                  (default: "none")
+
+                                  :Allowed values:
+                                     - "l1" for using L1Updater,
+                                     - "l2" for using SquaredL2Updater,
+                                     - "none" for no regularizer.
+
+                                     (default: "none")
+
         @param intercept:         Boolean parameter which indicates the use
                                   or not of the augmented representation for
                                   training data (i.e. whether bias features

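The `__all__` hunk above also explains the odd attribute name in the build log
(`RidgeRegressionModelLinearRegressionWithSGD`): with the trailing comma
missing, Python's implicit string literal concatenation fuses the two adjacent
names into one. A quick sketch (illustrative lists, not the real `__all__`):

```python
# Without the comma after 'RidgeRegressionModel', adjacent string
# literals are concatenated at compile time, producing one fused name --
# exactly the attribute Sphinx reported as missing.
broken_all = ['LinearRegressionModel', 'RidgeRegressionModel'
              'LinearRegressionWithSGD']
fixed_all = ['LinearRegressionModel', 'RidgeRegressionModel',
             'LinearRegressionWithSGD']

print(broken_all[-1])  # RidgeRegressionModelLinearRegressionWithSGD
print(len(broken_all), len(fixed_all))  # 2 3
```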
http://git-wip-us.apache.org/repos/asf/spark/blob/2300eb58/python/pyspark/mllib/tree.py
----------------------------------------------------------------------
diff --git a/python/pyspark/mllib/tree.py b/python/pyspark/mllib/tree.py
index afdcdbd..5d7abfb 100644
--- a/python/pyspark/mllib/tree.py
+++ b/python/pyspark/mllib/tree.py
@@ -48,6 +48,7 @@ class DecisionTreeModel(object):
     def predict(self, x):
         """
         Predict the label of one or more examples.
+
         :param x:  Data point (feature vector),
                    or an RDD of data points (feature vectors).
         """

http://git-wip-us.apache.org/repos/asf/spark/blob/2300eb58/python/pyspark/rdd.py
----------------------------------------------------------------------
diff --git a/python/pyspark/rdd.py b/python/pyspark/rdd.py
index dc64977..e77669a 100644
--- a/python/pyspark/rdd.py
+++ b/python/pyspark/rdd.py
@@ -1208,6 +1208,7 @@ class RDD(object):
         Output a Python RDD of key-value pairs (of form C{RDD[(K, V)]}) to any Hadoop file
         system, using the L{org.apache.hadoop.io.Writable} types that we convert from the
         RDD's key and value types. The mechanism is as follows:
+
             1. Pyrolite is used to convert pickled Python RDD into RDD of Java objects.
             2. Keys and values of this Java RDD are converted to Writables and written out.
 

