Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/6499#discussion_r32748830
--- Diff: docs/mllib-clustering.md ---
@@ -593,6 +593,58 @@ ssc.start()
ssc.awaitTermination()
{% endhighlight %}
+</div>
+
+<div data-lang="python" markdown="1">
+First we import the necessary classes.
+
+{% highlight python %}
+
+from pyspark.mllib.linalg import Vectors
+from pyspark.mllib.regression import LabeledPoint
+from pyspark.mllib.clustering import StreamingKMeans
+
+{% endhighlight %}
+
+Then we make an input stream of vectors for training, as well as a stream of labeled data
+points for testing. We assume a StreamingContext `ssc` has been created; see the
+[Spark Streaming Programming Guide](streaming-programming-guide.html#initializing) for more info.
+
+{% highlight python %}
+
+trainingData = ssc.textFileStream("/training/data/dir").map(Vectors.parse)
+testData = ssc.textFileStream("/testing/data/dir").map(LabeledPoint.parse)
+
+{% endhighlight %}
+
+We create a model with random clusters and specify the number of clusters to find.
+
+{% highlight python %}
+
+numDimensions = 3
+numClusters = 2
+model = StreamingKMeans()
+model.setK(numClusters)
--- End diff --
chain setters
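Since MLlib's setters return the model itself, the two statements above can collapse into a single chained expression. A minimal sketch of that fluent-setter pattern (the `StreamingKMeansSketch` class below is a hypothetical stand-in to illustrate the idiom, not the real MLlib class):

```python
# Sketch of the fluent-setter pattern: each setter mutates the object
# and returns self, so calls can be chained in one expression.
class StreamingKMeansSketch:
    def __init__(self):
        self.k = None
        self.centers = None

    def setK(self, k):
        self.k = k
        return self

    def setRandomCenters(self, dim, weight, seed):
        # Placeholder: the real implementation would draw random centers;
        # here we just allocate k zero-vectors of the given dimension.
        self.centers = [[0.0] * dim for _ in range(self.k)]
        return self

numDimensions = 3
numClusters = 2
model = StreamingKMeansSketch().setK(numClusters).setRandomCenters(numDimensions, 1.0, 0)
print(model.k)  # 2
```

The same shape applied to the doc snippet would turn the separate `model.setK(...)` call into one `StreamingKMeans().setK(numClusters)...` chain.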