[jira] [Updated] (HIVE-14058) UPDATE/DELETE statement with condition on extended column causes ArrayIndexOutOfBoundsException

2016-06-20 Hong Dai Thanh (JIRA)

 [ https://issues.apache.org/jira/browse/HIVE-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hong Dai Thanh updated HIVE-14058:
--
Description: 
Create a transactional table and insert some data into it. Then we extend the 
schema of the table by adding a column at the end, and insert more data using 
the extended schema.

{code}
drop table if exists test purge;

create table test (
  a int,
  b int
)
clustered by (a) into 10 buckets
stored as orc
tblproperties ('transactional' = 'true');

insert into test values (1, 1), (2, 2), (3, 3);
insert into test values (4, 4), (5, 5), (6, 6);


alter table test add columns (c int);

insert into test values (10, 10, 10), (11, 11, 11), (12, 12, 12);
{code}
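
For reference (standard Hive behaviour, not stated in the original report): rows written before the {{ALTER TABLE}} read back with {{c}} as NULL. A quick sanity-check sketch:

{code}
-- Sketch, not from the original report: confirm the extended schema and that
-- rows written before the ALTER TABLE are expected to read back with c = NULL.
describe test;
select a, b, c from test;
{code}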

Whenever we run any {{UPDATE}} or {{DELETE}} statement with a condition on the 
added column, for example:

{code}
update test set c = b where c is null;
update test set c = 0 where c is null;
update test set c = c + 1 where c is not null;
update test set c = 0 where c >= 11;
update test set c = c + 1 where c >= 11;
update test set b = b + 1 where c >= 11;
delete from test where c is not null;
{code}

an {{ArrayIndexOutOfBoundsException}} occurs.

Full stack trace below:

{code}
ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1466049070874_0069_1_00, diagnostics=[Task failed, taskId=task_1466049070874_0069_1_00_00, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 9
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:173)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:344)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:181)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:172)
    at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:168)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 9
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:196)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:142)
    at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:113)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:61)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:328)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:150)
    ... 14 more
Caused by: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 9
    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
    at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:253)
    at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:193)
    ... 19 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 9
    at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$SargApplier.pickRowGroups(RecordReaderImpl.java:730)
    at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.pickRowGroups(RecordReaderImpl.java:777)
    at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.readStripe(RecordReaderImpl.java:803)
    at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:1013)
    at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:1046)
{code}
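An observation beyond the original report: the failure is raised in {{RecordReaderImpl$SargApplier.pickRowGroups}}, which suggests the pushed-down filter on the added column {{c}} is being applied to ORC delta files written before the {{ALTER TABLE}}, whose row-group statistics cover only the original columns. A minimal, unverified mitigation sketch (assuming {{hive.optimize.index.filter}}, the setting that controls pushing filter predicates down to the ORC reader, is enabled in this environment):

{code}
-- Unverified sketch: disable predicate pushdown into the ORC reader for the
-- session, then retry one of the statements that previously failed.
set hive.optimize.index.filter=false;
update test set c = 0 where c is null;
{code}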

[jira] [Updated] (HIVE-14058) UPDATE statement with condition on extended column causes ArrayIndexOutOfBoundsException

2016-06-20 Hong Dai Thanh (JIRA)

 [ https://issues.apache.org/jira/browse/HIVE-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hong Dai Thanh updated HIVE-14058:
--
Attachment: hive-site.xml

Added hive-site.xml

> UPDATE statement with condition on extended column causes ArrayIndexOutOfBoundsException
> -
>
> Key: HIVE-14058
> URL: https://issues.apache.org/jira/browse/HIVE-14058
> Project: Hive
>  Issue Type: Bug
> Environment: HDP 2.4.2/Hive 1.2.1
>Reporter: Hong Dai Thanh
> Attachments: hive-site.xml
>
>

[jira] [Comment Edited] (HIVE-14017) Compaction failed when run on ACID table with extended schema

2016-06-19 Hong Dai Thanh (JIRA)

 [ https://issues.apache.org/jira/browse/HIVE-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15338963#comment-15338963 ]

Hong Dai Thanh edited comment on HIVE-14017 at 6/20/16 4:10 AM:


Added hive-site.xml configuration file


was (Author: nhahtdh):
hive-site.xml

> Compaction failed when run on ACID table with extended schema
> -
>
> Key: HIVE-14017
> URL: https://issues.apache.org/jira/browse/HIVE-14017
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
> Environment: HDP 2.4.0/Hive 1.2.1 on RHEL 6
>Reporter: Hong Dai Thanh
> Attachments: hive-site.xml
>
>
> Create an ACID table, insert some data into the table. Then we extend the 
> schema of the table by adding a column at the end, and add data to the table 
> with the extended schema.
> {code:borderStyle=solid}
> drop table if exists test purge;
> create table test (
>   a int,
>   b int
> )
> clustered by (a) into 10 buckets
> stored as orc
> tblproperties ('transactional' = 'true');
> insert into test values (1, 1), (2, 2), (3, 3);
> insert into test values (4, 4), (5, 5), (6, 6);
> alter table test add columns (c int);
> insert into test values (10, 10, 10), (11, 11, 11), (12, 12, 12);
> {code}
> We then run compaction on the table:
> {code}alter table test compact 'major';{code}
> However, the compaction job fails with the following exception:
> {code}
> 2016-06-15 09:54:52,517 INFO [IPC Server handler 5 on 25906] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1465960802609_0030_m_08_0 is : 0.0
> 2016-06-15 09:54:52,525 FATAL [IPC Server handler 4 on 25906] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1465960802609_0030_m_08_0 - exited : java.io.IOException: subtype 9 exceeds the included array size 9 fileTypes [kind: STRUCT
> subtypes: 1
> subtypes: 2
> subtypes: 3
> subtypes: 4
> subtypes: 5
> subtypes: 6
> fieldNames: "operation"
> fieldNames: "originalTransaction"
> fieldNames: "bucket"
> fieldNames: "rowId"
> fieldNames: "currentTransaction"
> fieldNames: "row"
> , kind: INT
> , kind: LONG
> , kind: INT
> , kind: LONG
> , kind: LONG
> , kind: STRUCT
> subtypes: 7
> subtypes: 8
> subtypes: 9
> fieldNames: "_col0"
> fieldNames: "_col1"
> fieldNames: "_col2"
> , kind: INT
> , kind: INT
> , kind: INT
> ] schemaTypes [kind: STRUCT
> subtypes: 1
> subtypes: 2
> subtypes: 3
> subtypes: 4
> subtypes: 5
> subtypes: 6
> fieldNames: "operation"
> fieldNames: "originalTransaction"
> fieldNames: "bucket"
> fieldNames: "rowId"
> fieldNames: "currentTransaction"
> fieldNames: "row"
> , kind: INT
> , kind: LONG
> , kind: INT
> , kind: LONG
> , kind: LONG
> , kind: STRUCT
> subtypes: 7
> subtypes: 8
> subtypes: 9
> fieldNames: "_col0"
> fieldNames: "_col1"
> fieldNames: "_col2"
> , kind: INT
> , kind: INT
> , kind: INT
> ] innerStructSubtype -1
>   at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:2066)
>   at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2492)
>   at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:2072)
>   at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2492)
>   at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:219)
>   at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:598)
>   at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.<init>(OrcRawRecordMerger.java:179)
>   at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:476)
>   at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRawReader(OrcInputFormat.java:1463)
>   at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:573)
>   at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:552)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
> {code}





[jira] [Updated] (HIVE-14017) Compaction failed when run on ACID table with extended schema

2016-06-19 Hong Dai Thanh (JIRA)

 [ https://issues.apache.org/jira/browse/HIVE-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hong Dai Thanh updated HIVE-14017:
--
Attachment: hive-site.xml

hive-site.xml






[jira] [Updated] (HIVE-14017) Compaction failed when run on ACID table with extended schema

2016-06-15 Hong Dai Thanh (JIRA)

 [ https://issues.apache.org/jira/browse/HIVE-14017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hong Dai Thanh updated HIVE-14017:
--
Description: 
Create an ACID table and insert some data into it. Then we extend the schema of 
the table by adding a column at the end, and insert more data using the extended 
schema.

{code:borderStyle=solid}
drop table if exists test purge;

create table test (
  a int,
  b int
)
clustered by (a) into 10 buckets
stored as orc
tblproperties ('transactional' = 'true');

insert into test values (1, 1), (2, 2), (3, 3);
insert into test values (4, 4), (5, 5), (6, 6);


alter table test add columns (c int);

insert into test values (10, 10, 10), (11, 11, 11), (12, 12, 12);
{code}

We then run compaction on the table:

{code}alter table test compact 'major';{code}
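
{{ALTER TABLE ... COMPACT}} only enqueues the request; the compaction itself runs asynchronously as a MapReduce job launched by the metastore compactor, which is where the failure below surfaces. As an aside not in the original report, the queued or running request can be watched with:

{code}
-- Sketch: list queued and running compaction requests, including this one.
show compactions;
{code}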

However, the compaction job fails with the following exception:

{code}
2016-06-15 09:54:52,517 INFO [IPC Server handler 5 on 25906] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1465960802609_0030_m_08_0 is : 0.0
2016-06-15 09:54:52,525 FATAL [IPC Server handler 4 on 25906] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1465960802609_0030_m_08_0 - exited : java.io.IOException: subtype 9 exceeds the included array size 9 fileTypes [kind: STRUCT
subtypes: 1
subtypes: 2
subtypes: 3
subtypes: 4
subtypes: 5
subtypes: 6
fieldNames: "operation"
fieldNames: "originalTransaction"
fieldNames: "bucket"
fieldNames: "rowId"
fieldNames: "currentTransaction"
fieldNames: "row"
, kind: INT
, kind: LONG
, kind: INT
, kind: LONG
, kind: LONG
, kind: STRUCT
subtypes: 7
subtypes: 8
subtypes: 9
fieldNames: "_col0"
fieldNames: "_col1"
fieldNames: "_col2"
, kind: INT
, kind: INT
, kind: INT
] schemaTypes [kind: STRUCT
subtypes: 1
subtypes: 2
subtypes: 3
subtypes: 4
subtypes: 5
subtypes: 6
fieldNames: "operation"
fieldNames: "originalTransaction"
fieldNames: "bucket"
fieldNames: "rowId"
fieldNames: "currentTransaction"
fieldNames: "row"
, kind: INT
, kind: LONG
, kind: INT
, kind: LONG
, kind: LONG
, kind: STRUCT
subtypes: 7
subtypes: 8
subtypes: 9
fieldNames: "_col0"
fieldNames: "_col1"
fieldNames: "_col2"
, kind: INT
, kind: INT
, kind: INT
] innerStructSubtype -1
    at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:2066)
    at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2492)
    at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:2072)
    at org.apache.hadoop.hive.ql.io.orc.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2492)
    at org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:219)
    at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:598)
    at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.<init>(OrcRawRecordMerger.java:179)
    at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:476)
    at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRawReader(OrcInputFormat.java:1463)
    at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:573)
    at org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:552)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
{code}

  was:
Create an ACID table, insert the data into the table, then extend the schema of 
the table by adding a column at the end, then add data to the table with the 
extended schema.

{code:borderStyle=solid}
drop table if exists test purge;

create table test (
  a int,
  b int
)
clustered by (a) into 10 buckets
stored as orc
tblproperties ('transactional' = 'true');

insert into test values (1, 1), (2, 2), (3, 3);
insert into test values (4, 4), (5, 5), (6, 6);


alter table test add columns (c int);

insert into test values (10, 10, 10), (11, 11, 11), (12, 12, 12);
{code}

We then run compaction on the table:

{code}alter table test compact 'major';{code}

However, the compaction job fails with the following exception:

{code}
2016-06-15 09:54:52,517 INFO [IPC Server handler 5 on 25906] 
org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt 
attempt_1465960802609_0030_m_08_0 is : 0.0
2016-06-15 09:54:52,525 FATAL [IPC Server handler 4