Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/363#discussion_r11554158
--- Diff: docs/sql-programming-guide.md ---
@@ -176,6 +202,32 @@ List<String> teenagerNames = teenagers.map(new Function<Row, String>() {
</div>
+<div data-lang="python" markdown="1">
+
+One type of table that is supported by Spark SQL is an RDD of dictionaries. The keys of the
+dictionary define the column names of the table, and the types are inferred by looking at the first
+row. Any RDD of dictionaries can be converted to a SchemaRDD and then registered as a table. Tables
+can be used in subsequent SQL statements.
+
+{% highlight python %}
+# Load a text file and convert each line to a dictionary.
+lines = sc.textFile("examples/src/main/resources/people.txt")
+parts = lines.map(lambda l: l.split(","))
+people = parts.map(lambda p: {"name": p[0], "age": int(p[1])})
+
+# Infer the schema, and register the SchemaRDD as a table.
+peopleTable = sqlCtx.inferSchema(people)
+peopleTable.registerAsTable("people")
+
+# SQL can be run over SchemaRDDs that have been registered as a table.
+teenagers = sqlCtx.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
+
+# The results of SQL queries are RDDs and support all the normal RDD operations.
+teenNames = teenagers.map(lambda p: "Name: " + p.name)
+{% endhighlight %}
+
--- End diff ---
Maybe add something saying that in future versions of PySpark, we'd like to
support RDDs with other data types in registerAsTable too.
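For readers curious about the inference step the quoted doc describes (column names from the dict keys, types from the first row), it can be sketched in plain Python without Spark. Note that `infer_schema` below is a hypothetical illustration of the mechanism, not the actual PySpark `SQLContext.inferSchema` implementation:

```python
# Sketch of the schema inference the doc describes: column names come
# from the dict keys, and column types from the values in the first row.
# infer_schema is a hypothetical helper for illustration, not PySpark API.

def infer_schema(rows):
    """Return {column_name: python_type} inferred from the first dict in rows."""
    first = rows[0]
    return {key: type(value) for key, value in first.items()}

# Stand-in for the contents of examples/src/main/resources/people.txt
lines = ["Michael,29", "Andy,30", "Justin,19"]
parts = [line.split(",") for line in lines]
people = [{"name": p[0], "age": int(p[1])} for p in parts]

schema = infer_schema(people)
# schema == {"name": str, "age": int}
```

Because only the first row is inspected, rows whose values have different types (or missing keys) would not be caught by this sketch, which is one reason supporting richer input types in `registerAsTable` is worth calling out in the docs.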
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---