[jira] [Created] (HIVE-27211) Backport HIVE-22453: Describe table unnecessarily fetches partitions

2023-04-02 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-27211:
---

 Summary: Backport HIVE-22453: Describe table unnecessarily fetches 
partitions
 Key: HIVE-27211
 URL: https://issues.apache.org/jira/browse/HIVE-27211
 Project: Hive
  Issue Type: Sub-task
Affects Versions: 3.1.2
Reporter: Nikhil Gupta
 Fix For: 3.2.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-27210) Backport HIVE-23338: Bump jackson version to 2.10.0

2023-04-02 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-27210:
---

 Summary: Backport HIVE-23338: Bump jackson version to 2.10.0
 Key: HIVE-27210
 URL: https://issues.apache.org/jira/browse/HIVE-27210
 Project: Hive
  Issue Type: Sub-task
Reporter: Nikhil Gupta
 Fix For: 3.2.0








[jira] [Created] (HIVE-27209) Backport HIVE-24569: LLAP daemon leaks file descriptors/log4j appenders

2023-04-02 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-27209:
---

 Summary: Backport HIVE-24569: LLAP daemon leaks file 
descriptors/log4j appenders
 Key: HIVE-27209
 URL: https://issues.apache.org/jira/browse/HIVE-27209
 Project: Hive
  Issue Type: Sub-task
  Components: llap
Affects Versions: 2.2.0
Reporter: Nikhil Gupta








[jira] [Created] (HIVE-26284) ClassCastException: java.io.PushbackInputStream cannot be cast to org.apache.hadoop.fs.Seekable when table properties contains 'skip.header.line.count' = '1' and datafile

2022-06-01 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-26284:
---

 Summary: ClassCastException: java.io.PushbackInputStream cannot be 
cast to org.apache.hadoop.fs.Seekable when table properties contains 
'skip.header.line.count' = '1' and datafiles are in UTF-16 encoding
 Key: HIVE-26284
 URL: https://issues.apache.org/jira/browse/HIVE-26284
 Project: Hive
  Issue Type: Bug
Reporter: Nikhil Gupta


{noformat}
ERROR : Vertex failed, vertexName=Map 4, 
vertexId=vertex_1648118653114_0507_2_00, diagnostics=[Vertex 
vertex_1648118653114_0507_2_00 [Map 4] killed/failed due 
to:ROOT_INPUT_INIT_FAILURE, Vertex Input:  initializer failed, 
vertex=vertex_1648118653114_0507_2_00 [Map 4], java.lang.ClassCastException: 
java.io.PushbackInputStream cannot be cast to org.apache.hadoop.fs.Seekable
  at 
org.apache.hadoop.fs.FSDataInputStream.getPos(FSDataInputStream.java:78)
  at 
org.apache.hadoop.hive.ql.io.SkippingTextInputFormat.getCachedStartIndex(SkippingTextInputFormat.java:120)
  at 
org.apache.hadoop.hive.ql.io.SkippingTextInputFormat.makeSplitInternal(SkippingTextInputFormat.java:73)
  at 
org.apache.hadoop.hive.ql.io.SkippingTextInputFormat.makeSplit(SkippingTextInputFormat.java:66)
  at 
org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:379)
  at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:532)
  at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:789)
  at 
org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:243)
  at 
org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
  at 
org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
  at java.security.AccessController.doPrivileged(Native Method) 
 at javax.security.auth.Subject.doAs(Subject.java:422)
  at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1732)
  at 
org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
  at 
org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253){noformat}
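The failure pattern above suggests header-skipping logic computing byte offsets on a stream that is no longer seekable once encoding handling has wrapped it. A minimal Python sketch (illustrative only, not Hive code) of why UTF-16 input should be decoded before skipping a header line, rather than split at raw byte positions:

```python
import io

# A small UTF-16 "CSV" with one header line (the codec adds the BOM).
data = "col1,col2\n1,2\n".encode("utf-16")

# Decoding first, then dropping the header line, works for any encoding;
# skipping a fixed number of raw bytes could land mid code-unit in UTF-16.
text = io.TextIOWrapper(io.BytesIO(data), encoding="utf-16")
rows = text.readlines()[1:]  # analogous to skip.header.line.count = 1
print(rows)  # ['1,2\n']
```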





[jira] [Created] (HIVE-26232) AcidUtils getLogicalLength shouldn't be called for external tables

2022-05-17 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-26232:
---

 Summary: AcidUtils getLogicalLength shouldn't be called for 
external tables
 Key: HIVE-26232
 URL: https://issues.apache.org/jira/browse/HIVE-26232
 Project: Hive
  Issue Type: Bug
Affects Versions: 3.1.2
Reporter: Nikhil Gupta








[jira] [Created] (HIVE-25802) Log4j2 Vulnerability in Hive Storage API

2021-12-13 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-25802:
---

 Summary: Log4j2 Vulnerability in Hive Storage API
 Key: HIVE-25802
 URL: https://issues.apache.org/jira/browse/HIVE-25802
 Project: Hive
  Issue Type: Bug
  Components: storage-api
Affects Versions: 4.0.0
Reporter: Nikhil Gupta
 Fix For: 4.0.0


The Storage API branch also brings in a log4j2 dependency (<= 2.14.1) that can still expose a vulnerability in Hive.





[jira] [Created] (HIVE-25795) [CVE-2021-44228] Update log4j2 version to 2.15.0

2021-12-10 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-25795:
---

 Summary: [CVE-2021-44228] Update log4j2 version to 2.15.0
 Key: HIVE-25795
 URL: https://issues.apache.org/jira/browse/HIVE-25795
 Project: Hive
  Issue Type: Bug
  Components: Logging
Reporter: Nikhil Gupta








[jira] [Created] (HIVE-25659) Divide IN/(NOT IN) queries based on number of max parameters SQL engine can support

2021-10-28 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-25659:
---

 Summary: Divide IN/(NOT IN) queries based on number of max 
parameters SQL engine can support
 Key: HIVE-25659
 URL: https://issues.apache.org/jira/browse/HIVE-25659
 Project: Hive
  Issue Type: Bug
  Components: Standalone Metastore
Affects Versions: 3.1.0, 4.0.0
Reporter: Nikhil Gupta
 Fix For: 4.0.0


 

 





[jira] [Created] (HIVE-25600) Compaction job creates redundant base/delta folder within base/delta folder

2021-10-07 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-25600:
---

 Summary: Compaction job creates redundant base/delta folder within 
base/delta folder
 Key: HIVE-25600
 URL: https://issues.apache.org/jira/browse/HIVE-25600
 Project: Hive
  Issue Type: Bug
Affects Versions: 3.1.2, 3.1.0
Reporter: Nikhil Gupta


{noformat}
Hive table 'myntra_wms.myntra_wms_item' is corrupt. Found sub-directory 
'abfs://bifrostx-hive-d...@gen2hivebifros.dfs.core.windows.net/prod-data/myntra_wms.db/myntra_wms_item/part_created_on=202105/base_0004042/base_0004042'
 in bucket directory for partition: part_created_on=202105
 at 
io.prestosql.plugin.hive.BackgroundHiveSplitLoader.loadPartition(BackgroundHiveSplitLoader.java:543)
 at 
io.prestosql.plugin.hive.BackgroundHiveSplitLoader.loadSplits(BackgroundHiveSplitLoader.java:325)
 at 
io.prestosql.plugin.hive.BackgroundHiveSplitLoader$HiveSplitLoaderTask.process(BackgroundHiveSplitLoader.java:254)
 at io.prestosql.plugin.hive.util.ResumableTasks$1.run(ResumableTasks.java:38)
 at io.prestosql.$gen.Presto_34720210615_143054_2.run(Unknown Source)
 at io.airlift.concurrent.BoundedExecutor.drainQueue(BoundedExecutor.java:80)
 at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
 at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base/java.lang.Thread.run(Thread.java:829);{noformat}
Why it happens:
Multiple compaction jobs for the same transactions can be triggered if the HMS gets restarted while the MR job is still in progress.
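A quick way to spot the corruption pattern from the error above (illustrative Python, not Presto or Hive code) is to check for a base/delta directory nested directly inside another one:

```python
import re

# Matches a base_N or delta_N directory nested directly inside an identical
# one, which is the corruption pattern reported above (illustrative only).
NESTED = re.compile(r"/(base|delta)_(\d+)/\1_\2(/|$)")

path = ("/prod-data/myntra_wms.db/myntra_wms_item/"
        "part_created_on=202105/base_0004042/base_0004042")
print(bool(NESTED.search(path)))  # True
```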





[jira] [Created] (HIVE-25452) CLONE - Hive job fails while closing reducer output - Unable to rename

2021-08-16 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-25452:
---

 Summary: CLONE - Hive job fails while closing reducer output - 
Unable to rename
 Key: HIVE-25452
 URL: https://issues.apache.org/jira/browse/HIVE-25452
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.13.1, 2.3.0
 Environment: OS: 2.6.18-194.el5xen #1 SMP Fri Apr 2 15:34:40 EDT 2010 
x86_64 x86_64 x86_64 GNU/Linux
Hadoop 1.1.2
Reporter: Nikhil Gupta
Assignee: Oleksiy Sayankin
 Attachments: HIVE-4605.2.patch, HIVE-4605.3.patch, HIVE-4605.patch

1. Create a table with the ORC storage model:
{code}
create table iparea_analysis_orc (network int, ip string, ...)
stored as ORC;
{code}
2. {{insert table iparea_analysis_orc select network, ip, ...}} succeeds, but fails after adding the *OVERWRITE* keyword. The main error log is listed here:
{code}
java.lang.RuntimeException: Hive Runtime Error while closing operators: Unable 
to rename output from: 
hdfs://qa3hop001.uucun.com:9000/tmp/hive-hadoop/hive_2013-05-24_15-11-06_511_7746839019590922068/_task_tmp.-ext-1/_tmp.00_0
 to: 
hdfs://qa3hop001.uucun.com:9000/tmp/hive-hadoop/hive_2013-05-24_15-11-06_511_7746839019590922068/_tmp.-ext-1/00_0
at 
org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:317)
at 
org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:530)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename 
output from: 
hdfs://qa3hop001.uucun.com:9000/tmp/hive-hadoop/hive_2013-05-24_15-11-06_511_7746839019590922068/_task_tmp.-ext-1/_tmp.00_0
 to: 
hdfs://qa3hop001.uucun.com:9000/tmp/hive-hadoop/hive_2013-05-24_15-11-06_511_7746839019590922068/_tmp.-ext-1/00_0
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.commit(FileSinkOperator.java:197)
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.access$300(FileSinkOperator.java:108)
at 
org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:867)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:588)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:597)
at 
org.apache.hadoop.hive.ql.exec.ExecReducer.close(ExecReducer.java:309)
... 7 more
{code}





[jira] [Created] (HIVE-25291) Fix q.out files after HIVE-25240

2021-06-25 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-25291:
---

 Summary: Fix q.out files after HIVE-25240
 Key: HIVE-25291
 URL: https://issues.apache.org/jira/browse/HIVE-25291
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 4.0.0
Reporter: Nikhil Gupta
 Fix For: 4.0.0








[jira] [Created] (HIVE-25268) date_format udf doesn't work for dates prior to 1900 if the timezone is different from UTC

2021-06-18 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-25268:
---

 Summary: date_format udf doesn't work for dates prior to 1900 if 
the timezone is different from UTC
 Key: HIVE-25268
 URL: https://issues.apache.org/jira/browse/HIVE-25268
 Project: Hive
  Issue Type: Bug
  Components: UDF
Affects Versions: 3.1.2, 3.1.1, 3.1.0, 4.0.0
Reporter: Nikhil Gupta
 Fix For: 4.0.0


*HDI 3.6 (Hive 1.2.1)*:
{code:java}
select date_format('1400-01-14 01:00:00', 'yyyy-MM-dd HH:mm:ss Z');
+----------------------------+
|            _c0             |
+----------------------------+
| 1400-01-14 01:00:00 +0700  |
+----------------------------+
{code}
*HDI 4.0(Hive 3.1):*
{code:java}
select date_format('1400-01-14 01:00:00', 'yyyy-MM-dd HH:mm:ss Z');
+----------------------------+
|            _c0             |
+----------------------------+
| 1400-01-06 01:17:56 +0700  |
+----------------------------+{code}
The VM timezone is set to 'Asia/Bangkok'.
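The shift in the HDI 4.0 output is consistent with the newer formatter applying the zone's historical local mean time for dates before Bangkok standardised on UTC+7. A Python sketch (assuming standard tzdata is installed) showing that Asia/Bangkok's offset for a year-1400 timestamp is not the modern +07:00:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# For dates before the zone's first recorded transition, tzdata reports the
# historical local mean time offset rather than the modern standard offset.
old = datetime(1400, 1, 14, 1, 0, tzinfo=ZoneInfo("Asia/Bangkok"))
new = datetime(2021, 1, 14, 1, 0, tzinfo=ZoneInfo("Asia/Bangkok"))

print(new.utcoffset())  # 7:00:00
print(old.utcoffset() == timedelta(hours=7))  # False: pre-1920 LMT offset
```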





[jira] [Created] (HIVE-24803) WorkloadManager doesn't update allocation and metrics after Kill Trigger action

2021-02-21 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-24803:
---

 Summary: WorkloadManager doesn't update allocation and metrics 
after Kill Trigger action
 Key: HIVE-24803
 URL: https://issues.apache.org/jira/browse/HIVE-24803
 Project: Hive
  Issue Type: Bug
Affects Versions: 4.0.0
Reporter: Nikhil Gupta


At present, after a query is killed, the following metrics are not updated by the Workload Manager:
 # numRunningQueries
 # numExecutors

Also, the WorkloadManager doesn't update the pool allocations after the query is killed, i.e. poolsToRedistribute doesn't contain the pool name for which the query was killed.
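The missing bookkeeping can be sketched as follows (hypothetical names, not the actual WorkloadManager API):

```python
# Hypothetical sketch of the bookkeeping the report says is missing; the
# names (metrics, pools_to_redistribute) are illustrative, not Hive's API.
def on_query_killed(metrics, pools_to_redistribute, pool_name, executors_freed):
    metrics["numRunningQueries"] -= 1
    metrics["numExecutors"] -= executors_freed
    # mark the pool so its allocation is recomputed on the next pass
    pools_to_redistribute.add(pool_name)

metrics = {"numRunningQueries": 3, "numExecutors": 8}
pools = set()
on_query_killed(metrics, pools, "bi_pool", executors_freed=2)
print(metrics, pools)  # {'numRunningQueries': 2, 'numExecutors': 6} {'bi_pool'}
```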





[jira] [Created] (HIVE-24751) Everyone should have kill query access if authorization is not enabled

2021-02-08 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-24751:
---

 Summary: Everyone should have kill query access if authorization 
is not enabled
 Key: HIVE-24751
 URL: https://issues.apache.org/jira/browse/HIVE-24751
 Project: Hive
  Issue Type: Bug
Affects Versions: 4.0.0
Reporter: Nikhil Gupta
 Fix For: 4.0.0


At present, the Kill Query access check does not consider whether authorization is enabled at all.
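The intended rule can be sketched as (hypothetical function, not HiveServer2's actual code):

```python
# Hypothetical sketch of the proposed rule: with authorization disabled,
# any user may kill a query; otherwise fall back to a privilege check.
def can_kill_query(authorization_enabled: bool, is_admin: bool) -> bool:
    if not authorization_enabled:
        return True
    return is_admin

print(can_kill_query(False, False))  # True
print(can_kill_query(True, False))   # False
```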





[jira] [Created] (HIVE-24621) TEXT and varchar datatype does not support unicode encoding in MSSQL

2021-01-11 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-24621:
---

 Summary: TEXT and varchar datatype does not support unicode 
encoding in MSSQL
 Key: HIVE-24621
 URL: https://issues.apache.org/jira/browse/HIVE-24621
 Project: Hive
  Issue Type: Bug
  Components: Standalone Metastore
Affects Versions: 4.0.0
Reporter: Nikhil Gupta
Assignee: Nikhil Gupta


Why is Unicode support required?
In the following example, the Chinese characters cannot be interpreted properly:
{noformat}
CREATE VIEW `test_view` AS select `test_tbl_char`.`col1` from 
`test_db5`.`test_tbl_char` where `test_tbl_char`.`col1`='你好'; 

show create table test_view;
+----------------------------------------------------+
|                   createtab_stmt                   |
+----------------------------------------------------+
| CREATE VIEW `test_view` AS select `test_tbl_char`.`col1` from `test_db5`.`test_tbl_char` where `test_tbl_char`.`col1`='??' |
+----------------------------------------------------+
{noformat}
 
This issue occurs because the TBLS table is defined as follows:
 
{code:sql}
CREATE TABLE TBLS
(
 TBL_ID bigint NOT NULL,
 CREATE_TIME int NOT NULL,
 DB_ID bigint NULL,
 LAST_ACCESS_TIME int NOT NULL,
 OWNER nvarchar(767) NULL,
 OWNER_TYPE nvarchar(10) NULL,
 RETENTION int NOT NULL,
 SD_ID bigint NULL,
 TBL_NAME nvarchar(256) NULL,
 TBL_TYPE nvarchar(128) NULL,
 VIEW_EXPANDED_TEXT text NULL,
 VIEW_ORIGINAL_TEXT text NULL,
 IS_REWRITE_ENABLED bit NOT NULL DEFAULT 0,
 WRITE_ID bigint NOT NULL DEFAULT 0
);
{code}

The text data type does not support Unicode encoding irrespective of collation. The varchar data type does not support Unicode encoding prior to SQL Server 2019, and even there a UTF-8 enabled collation needs to be defined to use Unicode characters.
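The '??' in the stored view text is the classic symptom of forcing Unicode text through a non-Unicode column; a Python illustration of the round-trip loss:

```python
# Storing Unicode text in a non-Unicode (single-byte) column behaves like
# encoding with a legacy codepage: unrepresentable characters become '?'.
original = "你好"
stored = original.encode("latin-1", errors="replace").decode("latin-1")
print(stored)  # ??
```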





[jira] [Created] (HIVE-24529) Metastore truncates milliseconds while storing timestamp column stats

2020-12-13 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-24529:
---

 Summary: Metastore truncates milliseconds while storing timestamp 
column stats
 Key: HIVE-24529
 URL: https://issues.apache.org/jira/browse/HIVE-24529
 Project: Hive
  Issue Type: Bug
Affects Versions: 4.0.0
Reporter: Nikhil Gupta
Assignee: Nikhil Gupta


Steps to reproduce the issue:

{code:sql}
create table tnikhil (t timestamp);
insert into tnikhil values ('2019-01-01 23:12:45.123456');
analyze table tnikhil compute statistics for columns;
select * from tnikhil;
{code}

{noformat}
+-----------------------------+
|          tnikhil.t          |
+-----------------------------+
| 2019-01-01 23:12:45.123456  |
+-----------------------------+
{noformat}
desc formatted tnikhil t; 
{noformat}
+------------+------------+
|  col_name  | data_type  |
+------------+------------+
| col_name   | t          |
| data_type  | timestamp  |
| min        | 1546384365 |
| max        | 1546384365 |
+------------+------------+
{noformat}
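The min/max values above are the timestamp collapsed to whole epoch seconds; the truncation can be reproduced in Python:

```python
from datetime import datetime, timezone

# 2019-01-01 23:12:45.123456, interpreted as UTC for a stable epoch value
ts = datetime(2019, 1, 1, 23, 12, 45, 123456, tzinfo=timezone.utc)

epoch = int(ts.timestamp())    # fractional seconds are dropped
print(epoch)                   # 1546384365, matching the stats above
print(ts.timestamp() - epoch)  # the lost fraction (~0.123456)
```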
 





[jira] [Created] (HIVE-24317) External Table is not replicated for Cloud store (e.g. Microsoft ADLS Gen2)

2020-10-28 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-24317:
---

 Summary: External Table is not replicated for Cloud store (e.g. 
Microsoft ADLS Gen2)
 Key: HIVE-24317
 URL: https://issues.apache.org/jira/browse/HIVE-24317
 Project: Hive
  Issue Type: Bug
  Components: repl
Affects Versions: 4.0.0
Reporter: Nikhil Gupta
Assignee: Nikhil Gupta


The external table is not replicated properly because of the distcp options used.





[jira] [Created] (HIVE-24174) Create Table command fails for a JOIN query

2020-09-17 Thread Nikhil Gupta (Jira)
Nikhil Gupta created HIVE-24174:
---

 Summary: Create Table command fails for a JOIN query
 Key: HIVE-24174
 URL: https://issues.apache.org/jira/browse/HIVE-24174
 Project: Hive
  Issue Type: Bug
  Components: Hive, HiveServer2
Reporter: Nikhil Gupta
Assignee: Nikhil Gupta


When creating a table over the results of a join query, the command fails with the following exception:
 
{code:java}
0: jdbc:hive2://zk1-nikhil.q5dzd3jj30bupgln50> create temporary table temp as 
(select * from textTable a join texttable b);
Error: Error while compiling statement: FAILED: SemanticException [Error 
10036]: Duplicate column name: semiotic_class (state=42000,code=10036)
0: jdbc:hive2://zk1-nikhil.q5dzd3jj30bupgln50> create table temp as (select * 
from textTable a join texttable b);
Error: Error while compiling statement: FAILED: SemanticException [Error 
10036]: Duplicate column name: semiotic_class (state=42000,code=10036){code}
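The check that rejects the statement is a uniqueness pass over the select-list column names; a Python sketch of the same validation (illustrative, not Hive's code), showing why aliasing the join columns avoids the clash:

```python
# Sketch of a column-name uniqueness check like the one in
# ParseUtils.validateColumnNameUniqueness (illustrative, not Hive's code).
def validate_column_name_uniqueness(col_names):
    seen = set()
    for name in col_names:
        key = name.lower()
        if key in seen:
            raise ValueError(f"Duplicate column name: {name}")
        seen.add(key)

# select * over a self-join yields the same column twice:
try:
    validate_column_name_uniqueness(["semiotic_class", "semiotic_class"])
except ValueError as e:
    print(e)  # Duplicate column name: semiotic_class

# aliasing the columns in the select list avoids the clash
validate_column_name_uniqueness(["a_semiotic_class", "b_semiotic_class"])
```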

{{Full Stacktrace:}}



 
{noformat}
2020-09-17T08:35:08,528 ERROR [5cf31486-eb7d-4023-b0c8-4f32a6847945 
HiveServer2-HttpHandler-Pool: Thread-11353] ql.Driver: FAILED: 
SemanticException [Error 10036]: Duplicate column name: semiotic_class
org.apache.hadoop.hive.ql.parse.SemanticException: Duplicate column name: 
semiotic_class
at 
org.apache.hadoop.hive.ql.parse.ParseUtils.validateColumnNameUniqueness(ParseUtils.java:141)
at 
org.apache.hadoop.hive.ql.plan.CreateTableDesc.validate(CreateTableDesc.java:551)
at 
org.apache.hadoop.hive.ql.parse.TaskCompiler.compile(TaskCompiler.java:329)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12481)
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:361)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:289)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:664)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1869)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1816)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1811)
at 
org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
at 
org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:197)
at 
org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:262)
at 
org.apache.hive.service.cli.operation.Operation.run(Operation.java:260)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:575)
at 
org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:561)
at 
org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:315)
at 
org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:566)
at 
org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1557)
at 
org.apache.hive.service.rpc.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1542)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.thrift.server.TServlet.doPost(TServlet.java:83)
at 
org.apache.hive.service.cli.thrift.ThriftHttpServlet.doPost(ThriftHttpServlet.java:208)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:493)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:539)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at ...
{noformat}