Looks like a typo, try:

df.select(df("name"), df("age") + 1)

Or

df.select("name", "age")

PRs to fix docs are always appreciated :)
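
For anyone curious why the compiler rejects the mixed call, here is a minimal, self-contained Scala sketch (not Spark itself; the `Column` and `DF` types below are stand-ins that mimic DataFrame.select's two overloads) showing why a (String, Column) argument list matches neither overload, while all-Column or all-String argument lists resolve fine:

```scala
// Stand-in for Spark's Column: supports the `+ 1` expression from the example.
case class Column(expr: String) {
  def +(n: Int): Column = Column(s"($expr + $n)")
}

// Stand-in for DataFrame with select's two overloads, as in the error message:
//   (col: String, cols: String*)  and  (cols: Column*)
class DF {
  def apply(name: String): Column = Column(name)
  def select(col: String, cols: String*): String = "string-overload"
  def select(cols: Column*): String = "column-overload"
}

val df = new DF

// df.select("name", df("age") + 1)  // does not compile: (String, Column)
//                                   // matches neither overload

println(df.select(df("name"), df("age") + 1)) // all Columns: Column* overload
println(df.select("name", "age"))             // all Strings: String overload
```

So the fix is simply to make every argument the same kind: wrap the bare `"name"` as `df("name")` (or use only string column names).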
On Apr 2, 2015 7:44 PM, "java8964" <java8...@hotmail.com> wrote:

> The import command already run.
>
> Forgot to mention: the rest of the examples related to "df" all work; just
> this one causes a problem.
>
> Thanks
>
> Yong
>
> ------------------------------
> Date: Fri, 3 Apr 2015 10:36:45 +0800
> From: fightf...@163.com
> To: java8...@hotmail.com; user@spark.apache.org
> Subject: Re: Cannot run the example in the Spark 1.3.0 following the
> document
>
> Hi, there
>
> you may need to add :
>   import sqlContext.implicits._
>
> Best,
> Sun
>
> ------------------------------
> fightf...@163.com
>
>
> *From:* java8964 <java8...@hotmail.com>
> *Date:* 2015-04-03 10:15
> *To:* user@spark.apache.org
> *Subject:* Cannot run the example in the Spark 1.3.0 following the
> document
> I wanted to try out Spark SQL 1.3.0. I installed it and followed the
> online document here:
>
> http://spark.apache.org/docs/latest/sql-programming-guide.html
>
> In the example, it shows something like this:
>
> // Select everybody, but increment the age by 1
> df.select("name", df("age") + 1).show()
> // name    (age + 1)
> // Michael null
> // Andy    31
> // Justin  20
>
>
> But what I got on my Spark 1.3.0 is the following error:
>
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 1.3.0
>       /_/
> Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_43)
>
> scala> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
> sqlContext: org.apache.spark.sql.SQLContext =
> org.apache.spark.sql.SQLContext@1c845f64
>
> scala> val df = sqlContext.jsonFile("/user/yzhang/people.json")
> df: org.apache.spark.sql.DataFrame = [age: bigint, name: string]
>
> scala> df.printSchema
> root
>  |-- age: long (nullable = true)
>  |-- name: string (nullable = true)
>
> scala> df.select("name", df("age") + 1).show()
> <console>:30: error: overloaded method value select with alternatives:
>   (col: String,cols: String*)org.apache.spark.sql.DataFrame <and>
>   (cols: org.apache.spark.sql.Column*)org.apache.spark.sql.DataFrame
>  cannot be applied to (String, org.apache.spark.sql.Column)
>               df.select("name", df("age") + 1).show()
>                  ^
>
>
> Is this a bug in Spark 1.3.0, or does my build have some problem?
>
> Thanks
>
>
