I am using the following releases:
Spark 1.1 (built with sbt/sbt -Dhadoop.version=2.2.0 -Phive assembly),
Apache HDFS 2.2

My job is able to create, add data to, and read Parquet-formatted Hive tables
using HiveContext. However, after the table schema is changed (a column is
added), the job can no longer read the existing data and throws the following
exception:
java.lang.ArrayIndexOutOfBoundsException: 2
        at org.apache.hadoop.hive.ql.io.parquet.serde.ArrayWritableObjectInspector.getStructFieldData(ArrayWritableObjectInspector.java:127)
        at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$1.apply(TableReader.scala:284)
        at org.apache.spark.sql.hive.HadoopTableReader$$anonfun$fillObject$1.apply(TableReader.scala:278)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
        at scala.collection.AbstractIterator.to(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
        at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
        at org.apache.spark.rdd.RDD$$anonfun$16.apply(RDD.scala:774)
        at org.apache.spark.rdd.RDD$$anonfun$16.apply(RDD.scala:774)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        at org.apache.spark.scheduler.Task.run(Task.scala:54)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
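
For what it's worth, after the ALTER TABLE in the snippet below the metastore
reports three columns, while the Parquet files written before the schema
change only contain two. That mismatch can be seen with a quick check along
these lines (a hypothetical call for illustration only, not part of the
original job):

    // Hypothetical check, not part of the job below: print what the metastore
    // reports for the table after the schema change (name, age, gender).
    for (Row column : hiveContext.sql("DESCRIBE people_table").collect()) {
        logger.info(column);
    }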


Please find the code snippet below:

public static void main(String[] args) {
    SparkConf sparkConf = new SparkConf()
            .setAppName("SchemaChangeTest")
            .set("spark.cores.max", "16")
            .set("spark.executor.memory", "8g");
    JavaSparkContext sparkContext = new JavaSparkContext(sparkConf);
    JavaHiveContext hiveContext = new JavaHiveContext(sparkContext);

    // logger and getSchema(...) are defined elsewhere in the class.

    List<String> people1List = new ArrayList<String>();
    people1List.add("Michael,30");
    people1List.add("William,31");
    JavaRDD<String> people1RDD = sparkContext.parallelize(people1List);

    // String-encoded schema #1
    String schema1String = "name STRING,age INT";
    // Generate the schema based on the schema string
    StructType people1Schema = getSchema(schema1String);
    // Convert records of the RDD (people) to Rows
    JavaRDD<Row> people1RowRDD = people1RDD.map(new Function<String, Row>() {
        public Row call(String record) throws Exception {
            String[] fields = record.split(",");
            return Row.create(fields[0], Integer.parseInt(fields[1].trim()));
        }
    });
    // Apply the schema and register as a temporary table
    hiveContext.applySchema(people1RowRDD, people1Schema)
            .registerTempTable("temp_table_people1");
    // Create the people table
    hiveContext.sql("CREATE EXTERNAL TABLE IF NOT EXISTS people_table "
            + "(name STRING, age INT) "
            + "ROW FORMAT SERDE 'parquet.hive.serde.ParquetHiveSerDe' "
            + "STORED AS "
            + "INPUTFORMAT 'parquet.hive.DeprecatedParquetInputFormat' "
            + "OUTPUTFORMAT 'parquet.hive.DeprecatedParquetOutputFormat'");
    // Add new data
    hiveContext.sql("INSERT INTO TABLE people_table "
            + "SELECT name, age FROM temp_table_people1");
    // Fetch rows and print
    JavaSchemaRDD people1TableRows = hiveContext.sql("SELECT * FROM people_table");
    logger.info(people1TableRows.collect());

    // Up to this point everything is fine: the job creates the new table,
    // adds data to it, and is able to read it back.

    // ----------------------------------------------------
    // ------------------ Change Schema -------------------
    // ----------------------------------------------------
    hiveContext.sql("ALTER TABLE people_table ADD COLUMNS (gender STRING)");

    List<String> people2List = new ArrayList<String>();
    people2List.add("David,32,M");
    people2List.add("Lorena,33,F");
    JavaRDD<String> people2RDD = sparkContext.parallelize(people2List);

    // String-encoded schema #2
    String schema2String = "name STRING,age INT,gender STRING";
    // Generate the schema based on the schema string
    StructType people2Schema = getSchema(schema2String);
    // Convert records of the RDD (people) to Rows
    JavaRDD<Row> people2RowRDD = people2RDD.map(new Function<String, Row>() {
        public Row call(String record) throws Exception {
            String[] fields = record.split(",");
            return Row.create(fields[0], Integer.parseInt(fields[1].trim()),
                    fields[2].trim());
        }
    });
    // Apply the schema and register as a temporary table
    hiveContext.applySchema(people2RowRDD, people2Schema)
            .registerTempTable("temp_table_people2");
    // Add new data
    hiveContext.sql("INSERT INTO TABLE people_table "
            + "SELECT name, age, gender FROM temp_table_people2");

    // Fetch rows and print
    JavaSchemaRDD people2TableRows = hiveContext.sql("SELECT * FROM people_table");
    logger.info(people2TableRows.collect()); // Exception is thrown here
}
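
The getSchema(...) helper is not shown above; it only turns the comma-separated
"name TYPE" string into a StructType. A minimal sketch of how it might look (an
assumed implementation, using the Spark 1.1 Java API in
org.apache.spark.sql.api.java and handling just STRING and INT):

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.spark.sql.api.java.DataType;
    import org.apache.spark.sql.api.java.StructField;
    import org.apache.spark.sql.api.java.StructType;

    // Assumed implementation of the helper used above: parses a string such as
    // "name STRING,age INT" into a StructType via the Spark 1.1 Java API.
    private static StructType getSchema(String schemaString) {
        List<StructField> fields = new ArrayList<StructField>();
        for (String field : schemaString.split(",")) {
            String[] nameAndType = field.trim().split("\\s+");
            DataType type = "INT".equalsIgnoreCase(nameAndType[1])
                    ? DataType.IntegerType
                    : DataType.StringType;
            fields.add(DataType.createStructField(nameAndType[0], type, true));
        }
        return DataType.createStructType(fields);
    }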

Any pointers toward the root cause, a solution, or a workaround are
appreciated.




