[jira] [Created] (HIVE-9961) HookContext for view should return a table type of VIRTUAL_VIEW

2015-03-13 Thread Szehon Ho (JIRA)
Szehon Ho created HIVE-9961:
---

 Summary: HookContext for view should return a table type of 
VIRTUAL_VIEW
 Key: HIVE-9961
 URL: https://issues.apache.org/jira/browse/HIVE-9961
 Project: Hive
  Issue Type: Bug
Reporter: Szehon Ho
Assignee: Szehon Ho


Run a 'create view' statement.

The view entity (which is in the hook's outputs) has a table with tableType 
'MANAGED_TABLE'.  It should be of type 'VIRTUAL_VIEW' so that auditing tools 
can correctly identify it as a view.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HIVE-9962) JsonSerDe does not support reader schema different from data schema

2015-03-13 Thread Johndee Burks (JIRA)
Johndee Burks created HIVE-9962:
---

 Summary: JsonSerDe does not support reader schema different from 
data schema
 Key: HIVE-9962
 URL: https://issues.apache.org/jira/browse/HIVE-9962
 Project: Hive
  Issue Type: Improvement
  Components: HCatalog, Serializers/Deserializers
Reporter: Johndee Burks
Priority: Minor


To reproduce the limitation, do the following. 

Create two tables: the first with the full schema and the second with a partial 
schema. 

{code}
add jar 
/opt/cloudera/parcels/CDH/lib/hive-hcatalog/share/hcatalog/hive-hcatalog-core.jar;

CREATE TABLE json_full
(autopolicy struct<is_active:boolean, policy_holder_name:string,
policy_num:string, vehicle:struct<brand:struct<model:string, year:int>,
price:double, vin:string>>)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe';

CREATE TABLE json_part
(autopolicy struct<is_active:boolean, policy_holder_name:string,
policy_num:string, vehicle:struct<brand:struct<model:string, year:int>,
price:double>>)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe';
{code}

The data for the table is below: 

{code}
{"autopolicy": {"policy_holder_name": "someone", "policy_num": "20141012",
"is_active": true, "vehicle": {"brand": {"model": "Lexus", "year": 2012},
"vin": "RANDOM123", "price": 23450.50}}}
{code}

I put that data into a file and load it into the tables like this: 

{code}
load data local inpath 'data.json' into table json_full;
load data local inpath 'data.json' into table json_part;
{code}

Then do a select against each table: 

{code}
select * from json_full;
select * from json_part;
{code}

The second select should fail with an error similar to the one below: 

{code}
15/03/12 23:19:30 [main]: ERROR CliDriver: Failed with exception 
java.io.IOException:java.lang.NullPointerException
{code}

The code that throws this error is below: 

{code}
{code}
private void populateRecord(List<Object> r, JsonToken token, JsonParser p,
    HCatSchema s) throws IOException {
  if (token != JsonToken.FIELD_NAME) {
    throw new IOException("Field name expected");
  }
  String fieldName = p.getText();
  int fpos;
  try {
    fpos = s.getPosition(fieldName);
  } catch (NullPointerException npe) {
    fpos = getPositionFromHiveInternalColumnName(fieldName);
    LOG.debug("NPE finding position for field [{}] in schema [{}]", fieldName, s);
    if (!fieldName.equalsIgnoreCase(getHiveInternalColumnName(fpos))) {
      LOG.error("Hive internal column name {} and position "
          + "encoding {} for the column name are at odds", fieldName, fpos);
      throw npe;
    }
    if (fpos == -1) {
      return; // unknown field, we return.
    }
{code}
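One way to avoid the NullPointerException path in the code above is to resolve field names through a lookup that returns a sentinel for fields the reader schema does not know, so the caller can skip them. A minimal sketch of that idea in plain Java (the `SchemaLookup` helper is hypothetical, not HCatalog's actual `HCatSchema`):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical helper: maps field names to column positions, returning -1
// for fields present in the data but absent from the reader schema,
// instead of throwing NullPointerException.
public class SchemaLookup {
    private final Map<String, Integer> positions = new HashMap<>();

    public SchemaLookup(List<String> readerColumns) {
        for (int i = 0; i < readerColumns.size(); i++) {
            positions.put(readerColumns.get(i).toLowerCase(), i);
        }
    }

    /** Returns the column position, or -1 if the field is unknown (caller skips it). */
    public int getPosition(String fieldName) {
        Integer pos = positions.get(fieldName.toLowerCase());
        return pos == null ? -1 : pos;
    }
}
```

With such a lookup, a data field like `vin` that is missing from `json_part`'s schema would resolve to -1 and be skipped, rather than surfacing as a wrapped NullPointerException.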






[jira] [Created] (HIVE-9965) CBO (Calcite Return Path): Improvement in the cost calculation algorithm for Aggregate and Join operators [CBO Branch]

2015-03-13 Thread Jesus Camacho Rodriguez (JIRA)
Jesus Camacho Rodriguez created HIVE-9965:
-

 Summary: CBO (Calcite Return Path): Improvement in the cost 
calculation algorithm for Aggregate and Join operators [CBO Branch]
 Key: HIVE-9965
 URL: https://issues.apache.org/jira/browse/HIVE-9965
 Project: Hive
  Issue Type: Sub-task
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez








[jira] [Created] (HIVE-9966) Get rid of customBucketMapJoin from MapJoinDesc

2015-03-13 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HIVE-9966:
--

 Summary: Get rid of customBucketMapJoin from MapJoinDesc
 Key: HIVE-9966
 URL: https://issues.apache.org/jira/browse/HIVE-9966
 Project: Hive
  Issue Type: Task
  Components: Query Planning, Tez
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan


Currently, it's used to determine whether BMJ is running in the mapper or the 
reducer in the ReduceSinkMapJoinProc rule. But this determination can be made 
locally by examining the operator tree in the rule.





[jira] [Created] (HIVE-9964) CBO (Calcite Return Path): Traits propagation for Aggregate operator [CBO Branch]

2015-03-13 Thread Jesus Camacho Rodriguez (JIRA)
Jesus Camacho Rodriguez created HIVE-9964:
-

 Summary: CBO (Calcite Return Path): Traits propagation for 
Aggregate operator [CBO Branch]
 Key: HIVE-9964
 URL: https://issues.apache.org/jira/browse/HIVE-9964
 Project: Hive
  Issue Type: Sub-task
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez








Re: Review Request 32015: HIVE-9947 ScriptOperator replaceAll uses unescaped dot and result is not assigned

2015-03-13 Thread Alexander Pivovarov

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/32015/
---

(Updated March 13, 2015, 10:59 p.m.)


Review request for hive, Alan Gates and Gopal V.


Changes
---

fixed transform_acid.q.out


Bugs: HIVE-9947
https://issues.apache.org/jira/browse/HIVE-9947


Repository: hive-git


Description
---

HIVE-9947 ScriptOperator replaceAll uses unescaped dot and result is not 
assigned
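Both halves of the bug named in the title are easy to demonstrate in isolation: `String.replaceAll` treats its first argument as a regex, so an unescaped `.` matches every character, and since `String` is immutable the return value must be assigned or the call has no effect:

```java
public class ReplaceAllDemo {
    public static void main(String[] args) {
        String name = "a.b.c";

        // Bug 1: an unescaped dot is a regex wildcard -- every character is replaced.
        String wrong = name.replaceAll(".", "_");      // "_____"

        // Bug 2: ignoring the return value changes nothing; String is immutable.
        name.replaceAll("\\.", "_");                   // result discarded
        // 'name' is still "a.b.c" here.

        // Correct: use the non-regex char replace (or escape the dot) and assign.
        String fixed = name.replace('.', '_');         // "a_b_c"

        System.out.println(wrong + " " + name + " " + fixed);
    }
}
```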


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/ScriptOperator.java 
6f6f5fa6fe8e84d62f1a0e6ac191f82b488b7554 
  ql/src/test/results/clientpositive/transform_acid.q.out 
29d0638bd493d3d358ea96834a81d07f8a5781e2 

Diff: https://reviews.apache.org/r/32015/diff/


Testing
---


Thanks,

Alexander Pivovarov



[jira] [Created] (HIVE-9963) HiveServer2 deregister command doesn't provide any feedback

2015-03-13 Thread Deepesh Khandelwal (JIRA)
Deepesh Khandelwal created HIVE-9963:


 Summary: HiveServer2 deregister command doesn't provide any 
feedback
 Key: HIVE-9963
 URL: https://issues.apache.org/jira/browse/HIVE-9963
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.14.0
Reporter: Deepesh Khandelwal


HiveServer2 deregister functionality provided by HIVE-8288 doesn't provide any 
feedback upon completion. Here is a sample console output:
{noformat}
$ hive --service hiveserver2 --deregister 0.14.0-SNAPSHOT
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/root/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/root/hive/lib/hive-jdbc-0.14.0-SNAPSHOT-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
{noformat}
This will not change even if the znode did not exist. Ideally we should print 
some feedback after the command completes, e.g. "HiveServer2 with version 
'0.14.0-SNAPSHOT' deregistered successfully", or in case of failure an 
appropriate reason, e.g. "No HiveServer2 with version '0.14.0-SNAPSHOT' exists 
to deregister".





[jira] [Created] (HIVE-9956) use BigDecimal.valueOf instead of new in TestFileDump

2015-03-13 Thread Alexander Pivovarov (JIRA)
Alexander Pivovarov created HIVE-9956:
-

 Summary: use BigDecimal.valueOf instead of new in TestFileDump
 Key: HIVE-9956
 URL: https://issues.apache.org/jira/browse/HIVE-9956
 Project: Hive
  Issue Type: Bug
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
Priority: Minor


TestFileDump builds a data row where one of the columns is a BigDecimal.
The test adds value 2.
There are two ways to create a BigDecimal object:
1. use new
2. use valueOf

In this particular case:
1. new will create 2.222153
2. valueOf will use the canonical String representation and the result will be 
2.

Probably we should use valueOf to create the BigDecimal object.

TestTimestampWritable and TestHCatStores use valueOf.
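The difference between the two construction styles is easy to see in isolation: the `BigDecimal(double)` constructor preserves the exact binary value of the double, while `BigDecimal.valueOf(double)` goes through `Double.toString` and yields the short canonical form (shown here with 2.2 as an illustrative value):

```java
import java.math.BigDecimal;

public class BigDecimalDemo {
    public static void main(String[] args) {
        // Constructor: the exact binary expansion of the double 2.2,
        // a long decimal tail beginning 2.2000000000000001776...
        BigDecimal viaNew = new BigDecimal(2.2);

        // valueOf: canonical String representation ("2.2").
        BigDecimal viaValueOf = BigDecimal.valueOf(2.2);

        System.out.println(viaNew);
        System.out.println(viaValueOf);   // 2.2
    }
}
```

This is why `valueOf` is the right choice when a test wants the literal decimal it wrote down, not the underlying IEEE 754 approximation.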





[jira] [Created] (HIVE-9955) TestVectorizedRowBatchCtx compares byte[] using equals() method

2015-03-13 Thread Alexander Pivovarov (JIRA)
Alexander Pivovarov created HIVE-9955:
-

 Summary: TestVectorizedRowBatchCtx compares byte[] using equals() 
method
 Key: HIVE-9955
 URL: https://issues.apache.org/jira/browse/HIVE-9955
 Project: Hive
  Issue Type: Bug
  Components: Vectorization
Reporter: Alexander Pivovarov
Assignee: Alexander Pivovarov
Priority: Minor
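The pitfall behind this ticket is plain Java, shown here as a minimal illustration (not the test's actual code): arrays inherit `equals()` from `Object`, so it compares references, not contents; `java.util.Arrays.equals` is what compares element-by-element:

```java
import java.util.Arrays;

public class ByteArrayEqualsDemo {
    public static void main(String[] args) {
        byte[] a = {1, 2, 3};
        byte[] b = {1, 2, 3};

        // Object.equals on arrays is an identity comparison -- false even
        // though the contents are equal.
        System.out.println(a.equals(b));         // false

        // Arrays.equals compares contents -- what a test assertion wants.
        System.out.println(Arrays.equals(a, b)); // true
    }
}
```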








Review Request 32026: HIVE-9955 TestVectorizedRowBatchCtx compares byte[] using equals() method

2015-03-13 Thread Alexander Pivovarov

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/32026/
---

Review request for hive, Ashutosh Chauhan, Gopal V, and Sergey Shelukhin.


Repository: hive-git


Description
---

HIVE-9955 TestVectorizedRowBatchCtx compares byte[] using equals() method


Diffs
-

  
ql/src/test/org/apache/hadoop/hive/ql/exec/vector/TestVectorizedRowBatchCtx.java
 f7fea17b6037bab15fe53f8c8ef51e92c95de4e5 

Diff: https://reviews.apache.org/r/32026/diff/


Testing
---


Thanks,

Alexander Pivovarov



Review Request 32027: HIVE-9956 use BigDecimal.valueOf instead of new in TestFileDump

2015-03-13 Thread Alexander Pivovarov

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/32027/
---

Review request for hive and Brock Noland.


Bugs: HIVE-9956
https://issues.apache.org/jira/browse/HIVE-9956


Repository: hive-git


Description
---

HIVE-9956 use BigDecimal.valueOf instead of new in TestFileDump


Diffs
-

  ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestFileDump.java 
00afdac274d390dc6605c266b29a0ae490c54910 

Diff: https://reviews.apache.org/r/32027/diff/


Testing
---


Thanks,

Alexander Pivovarov



[jira] [Created] (HIVE-9957) Hive 1.1.0 not compatible with Hadoop 2.4.0

2015-03-13 Thread Vivek Shrivastava (JIRA)
Vivek Shrivastava created HIVE-9957:
---

 Summary: Hive 1.1.0 not compatible with Hadoop 2.4.0
 Key: HIVE-9957
 URL: https://issues.apache.org/jira/browse/HIVE-9957
 Project: Hive
  Issue Type: Bug
  Components: Encryption
Reporter: Vivek Shrivastava


Getting this exception while accessing data through Hive. 

Exception in thread "main" java.lang.NoSuchMethodError: 
org.apache.hadoop.hdfs.DFSClient.getKeyProvider()Lorg/apache/hadoop/crypto/key/KeyProvider;
at 
org.apache.hadoop.hive.shims.Hadoop23Shims$HdfsEncryptionShim.init(Hadoop23Shims.java:1152)
at 
org.apache.hadoop.hive.shims.Hadoop23Shims.createHdfsEncryptionShim(Hadoop23Shims.java:1279)
at 
org.apache.hadoop.hive.ql.session.SessionState.getHdfsEncryptionShim(SessionState.java:392)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.isPathEncrypted(SemanticAnalyzer.java:1756)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getStagingDirectoryPathname(SemanticAnalyzer.java:1875)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1689)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1427)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:10132)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10147)
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:192)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:222)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:421)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:307)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1112)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1160)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1039)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:207)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:159)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:370)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:754)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
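A `NoSuchMethodError` like the one above is raised at link time when a compiled call site (here `DFSClient.getKeyProvider`, which the Hadoop 2.4.0 jars do not have) is resolved against older jars. One common way compatibility layers guard such version-specific APIs is reflection-based detection; a minimal, self-contained sketch of the idea (using a JDK class as a stand-in, not Hive's actual shim code):

```java
import java.lang.reflect.Method;

public class FeatureProbe {
    /** Returns true if the given class declares a zero-argument method with this name. */
    public static boolean hasMethod(Class<?> cls, String name) {
        try {
            Method m = cls.getMethod(name);
            return m != null;
        } catch (NoSuchMethodException e) {
            return false;  // API not present in this version -- fall back gracefully.
        }
    }

    public static void main(String[] args) {
        // Stand-in probe: every String has length(); none has getKeyProvider().
        System.out.println(hasMethod(String.class, "length"));          // true
        System.out.println(hasMethod(String.class, "getKeyProvider"));  // false
    }
}
```

A shim could consult such a probe once at startup and disable the encryption path when the method is absent, instead of failing during query compilation.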






Re: Is Hive 1.1.0 compatible with Hadoop 2.4.0?

2015-03-13 Thread Vivek Shrivastava
https://issues.apache.org/jira/browse/HIVE-9957


On Thu, Mar 12, 2015 at 3:48 PM, Thejas Nair the...@hortonworks.com wrote:

 Looks like this would need a code change. I can't think of any workaround.
 Can you please open a jira ?
 This change is part of the changes to support the encryption feature. Hive
 1.0.0 should not have this issue.

 -Thejas


 On 3/12/15, 2:34 AM, Vivek Shrivastava vivshrivast...@gmail.com wrote:

 Hi,
 
 It seems Hive 1.1.0 does not work with Apache Hadoop 2.4.0. I am getting
 this exception while running the hive command. Even the build was not
 successful when I used hadoop version 2.4.0 instead of 2.6.0 in the pom file.
 Is there any way I can run it on Hadoop 2.4.0?
 
 Thanks,
 
 Vivek
 




[jira] [Created] (HIVE-9958) LLAP: YARN registry for Auto-organizing Slider instances

2015-03-13 Thread Gopal V (JIRA)
Gopal V created HIVE-9958:
-

 Summary: LLAP: YARN registry for Auto-organizing Slider instances 
 Key: HIVE-9958
 URL: https://issues.apache.org/jira/browse/HIVE-9958
 Project: Hive
  Issue Type: Sub-task
Affects Versions: llap
Reporter: Gopal V
Assignee: Gopal V
 Fix For: llap


The Slider deployed instances start on random machines without any pre-planned 
organization.

Allow the llap-daemon-site.xml to refer to the Slider registry by using 
indirection references instead of explicit host names:

{code}
  <property>
    <name>llap.daemon.service.hosts</name>
    <value>@llap0</value>
  </property>
{code}





[jira] [Created] (HIVE-9959) CBO (Calcite Return Path): Use table md to calculate column size instead of Calcite default values [CBO branch]

2015-03-13 Thread Jesus Camacho Rodriguez (JIRA)
Jesus Camacho Rodriguez created HIVE-9959:
-

 Summary: CBO (Calcite Return Path): Use table md to calculate 
column size instead of Calcite default values [CBO branch]
 Key: HIVE-9959
 URL: https://issues.apache.org/jira/browse/HIVE-9959
 Project: Hive
  Issue Type: Sub-task
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez








Re: Review Request 32027: HIVE-9956 use BigDecimal.valueOf instead of new in TestFileDump

2015-03-13 Thread Xuefu Zhang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/32027/#review76364
---

Ship it!


Ship It!

- Xuefu Zhang


On March 13, 2015, 6:41 a.m., Alexander Pivovarov wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/32027/
 ---
 
 (Updated March 13, 2015, 6:41 a.m.)
 
 
 Review request for hive and Brock Noland.
 
 
 Bugs: HIVE-9956
 https://issues.apache.org/jira/browse/HIVE-9956
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-9956 use BigDecimal.valueOf instead of new in TestFileDump
 
 
 Diffs
 -
 
   ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestFileDump.java 
 00afdac274d390dc6605c266b29a0ae490c54910 
 
 Diff: https://reviews.apache.org/r/32027/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Alexander Pivovarov
 




[jira] [Created] (HIVE-9960) Hive not backward compatible while adding optional new field to struct in parquet files

2015-03-13 Thread Arup Malakar (JIRA)
Arup Malakar created HIVE-9960:
--

 Summary: Hive not backward compatible while adding optional new 
field to struct in parquet files
 Key: HIVE-9960
 URL: https://issues.apache.org/jira/browse/HIVE-9960
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Reporter: Arup Malakar


I recently added an optional field to a struct. When I try to query old data 
with the new Hive table, which has the new field as a column, it throws an 
error. Any clue how I can make it backward compatible so that I am still able 
to query old data with the new table definition?
 
I am using the hive-0.14.0 release with the HIVE-8909 patch applied.

Details:

New optional field in a struct
{code}
struct Event {
  1: optional Type type;
  2: optional map<string, string> values;
  3: optional i32 num = -1; // <-- New field
}
{code}

Main thrift definition
{code}
 10: optional list<Event> events;
{code}

Corresponding hive table definition
{code}
  events array<struct<type: string, values: map<string, string>, num: int>>
{code}

Try to read something from the old data, using the new table definition
{{select events from table1 limit 1;}}

Failed with exception:
{code}
java.io.IOException:org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.ArrayIndexOutOfBoundsException: 2

Error thrown:

15/03/12 17:23:43 [main]: ERROR CliDriver: Failed with exception 
java.io.IOException:org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.ArrayIndexOutOfBoundsException: 2

java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.ArrayIndexOutOfBoundsException: 2
    at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:152)
    at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1621)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:267)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:199)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:410)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:783)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:616)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.ArrayIndexOutOfBoundsException: 2
    at