Github user marmbrus commented on a diff in the pull request:

    https://github.com/apache/spark/pull/999#discussion_r13827732
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -310,37 +325,190 @@ parquetFile = sqlCtx.parquetFile("people.parquet")
     # Parquet files can also be registered as tables and then used in SQL statements.
     parquetFile.registerAsTable("parquetFile");
     teenagers = sqlCtx.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19")
    -
    +teenNames = teenagers.map(lambda p: "Name: " + p.name)
    +for teenName in teenNames.collect():
    +  print teenName
     {% endhighlight %}
     
     </div>
     
     </div>
     
    -## Writing Language-Integrated Relational Queries
    +## JSON Datasets
    +<div class="codetabs">
    +
    +<div data-lang="scala"  markdown="1">
    +Spark SQL supports querying JSON datasets. To query a JSON dataset, a SchemaRDD must first be created for it. There are two ways to do so:
     
    -**Language-Integrated queries are currently only supported in Scala.**
    +1. Creating the SchemaRDD from text files that store one JSON object per line.
    +2. Creating the SchemaRDD from an RDD of strings (`RDD[String]`) in which each string stores one JSON object.
     
    -Spark SQL also supports a domain specific language for writing queries.  Once again,
    -using the data from the above examples:
    +The schema of a JSON dataset is automatically inferred when the SchemaRDD is created.
     
     {% highlight scala %}
    -val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    -import sqlContext._
    -val people: RDD[Person] = ... // An RDD of case class objects, from the first example.
    +// sc is an existing SparkContext.
    +val sqlCtx = new org.apache.spark.sql.SQLContext(sc)
    +
    +// A JSON dataset is pointed to by path.
    +// The path can be either a single text file or a directory storing text files.
    +val path = "examples/src/main/resources/people.json"
    +// Create a SchemaRDD from the file(s) pointed to by path
    +val people = sqlCtx.jsonFile(path)
    +
    +// Because the schema of a JSON dataset is inferred automatically,
    +// it is helpful to inspect the schema before writing queries.
    +people.printSchema()
    +// The schema of people is ...
    +// root
    +//  |-- age: IntegerType
    +//  |-- name: StringType
    +
    +// Register this SchemaRDD as a table.
    +people.registerAsTable("people")
     
    -// The following is the same as 'SELECT name FROM people WHERE age >= 10 AND age <= 19'
    -val teenagers = people.where('age >= 10).where('age <= 19).select('name)
    +// SQL statements can be run by using the sql method provided by sqlCtx.
    +val teenagers = sqlCtx.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
    +
    +// The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
    +// The columns of a row in the result can be accessed by ordinal.
    +teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
    --- End diff --
    
    This is probably overkill as we have shown this multiple times in this guide.
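
    As context for the section under review: `jsonFile` expects one JSON object per line, and the inferred schema is merged across all records (which is why `age` appears even if some records omit it). A toy sketch of that idea in plain Python follows; it uses only the standard library, requires no Spark, and `infer_schema` is a hypothetical helper for illustration, not a Spark API:

    ```python
    import json

    # Input in the "one JSON object per line" format that jsonFile reads.
    lines = [
        '{"name": "Michael"}',
        '{"name": "Andy", "age": 30}',
        '{"name": "Justin", "age": 19}',
    ]

    def infer_schema(json_lines):
        # Merge the fields seen across all records into a single schema,
        # recording the Python type name of the first value seen per field.
        schema = {}
        for line in json_lines:
            record = json.loads(line)
            for field, value in record.items():
                schema.setdefault(field, type(value).__name__)
        return schema

    print(infer_schema(lines))  # {'name': 'str', 'age': 'int'}
    ```

    Spark's real inference additionally reconciles conflicting types across records; this sketch only shows why a field missing from some lines still lands in the schema.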

