Here is the code snippet:

// The entry point into all functionality in Spark is the SparkSession
// class. To create a basic SparkSession, just use SparkSession.builder():

SparkSession spark = SparkSession.builder()
        .appName("Java Spark SQL Example")
        .master("local")
        .getOrCreate();

// With a SparkSession, applications can create DataFrames from an existing
// RDD, from a Hive table, or from Spark data sources.

Dataset<Row> rows_salaries =
        spark.read().json("/Users/sreeharsha/Downloads/rows_salaries.json");

// Register the DataFrame as a SQL temporary view

rows_salaries.createOrReplaceTempView("salaries");

// SQL statements can be run by using the sql methods provided by spark

List<Row> df = spark.sql("select * from salaries").collectAsList();

for (Row r : df) {
    if (r.get(0) != null) {
        System.out.println(r.get(0).toString());
    }
}


Actual Output:

WrappedArray(WrappedArray(1, B9B42DE1-E810-4489-9735-B365A47A4012, 1,
1467358044, 697390, 1467358044, 697390, null, Aaron,Patricia G,
Facilities/Office Services II, A03031, OED-Employment Dev (031),
1979-10-24T00:00:00, 56705.00, 54135.44))

Expected Output:

I need the individual elements of the WrappedArray, not the whole array printed as a single string.
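
For clarity, here is a rough, untested sketch of the kind of per-element access I am after. It assumes the first column of each row holds an array of arrays, as the output above suggests; it relies only on Row.getList() and the Scala Seq iterator.

import java.util.List;
import org.apache.spark.sql.Row;

for (Row r : df) {
    if (r.get(0) == null) continue;

    // getList(0) returns the outer array column as a java.util.List;
    // each element should itself be a Scala sequence (one record)
    List<Object> records = r.getList(0);
    for (Object record : records) {
        // walk the inner WrappedArray through its Scala iterator
        scala.collection.Iterator<?> fields =
                ((scala.collection.Seq<?>) record).iterator();
        while (fields.hasNext()) {
            System.out.println(fields.next());
        }
    }
}

If the array column has a proper name in the schema (rows_salaries.printSchema() would show it), something like "select explode(<column name>) from salaries" might be cleaner, but I have not tried that either.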

Below you can find the attached .json file:


rows_salaries.json (4M) 
<http://apache-spark-user-list.1001560.n3.nabble.com/attachment/27615/0/rows_salaries.json>



