If you use the spark-csv package:

$ spark-shell  --packages com.databricks:spark-csv_2.11:1.3.0
....


scala> val df = sc.parallelize(Array(Array(1, 2, 3), Array(2, 3, 4), Array(3, 4, 6))).map(x => (x(0), x(1), x(2))).toDF()
df: org.apache.spark.sql.DataFrame = [_1: int, _2: int, _3: int]

scala> df.write.format("com.databricks.spark.csv").option("delimiter", " ").save("1.txt")

$ cat 1.txt/*
1 2 3
2 3 4
3 4 6
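
If you'd rather not pull in the spark-csv package, the plain RDD API can do the same thing: join each inner array's elements with a space and write one line per record. A minimal sketch, assuming the same `sc` you get in spark-shell and the same data:

```scala
// Alternative without spark-csv (a sketch; assumes spark-shell's `sc`):
// format each record as a space-separated line, then write it out.
val rdd = sc.parallelize(Array(Array(1, 2, 3), Array(2, 3, 4), Array(3, 4, 6)))
rdd.map(_.mkString(" ")).saveAsTextFile("1.txt")
```

Like save() above, saveAsTextFile writes a directory of part-* files, so `cat 1.txt/*` prints the same three lines.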


-Don


On Sat, Feb 27, 2016 at 7:20 PM, Bonsen <hengbohe...@126.com> wrote:

> I get results from an RDD, like:
> Array(Array(1,2,3),Array(2,3,4),Array(3,4,6))
> How can I output them to 1.txt, like:
> 1 2 3
> 2 3 4
> 3 4 6
>
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/output-the-datas-txt-tp26350.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>


-- 
Donald Drake
Drake Consulting
http://www.drakeconsulting.com/
https://twitter.com/dondrake
800-733-2143
