[jira] [Commented] (HIVE-24437) Add more removed configs for(Don't fail config validation for removed configs)

2020-11-27 Thread JiangZhu (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-24437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17239649#comment-17239649
 ] 

JiangZhu commented on HIVE-24437:
-

Add more removed configs for this

> Add more removed configs for(Don't fail config validation for removed configs)
> --
>
> Key: HIVE-24437
> URL: https://issues.apache.org/jira/browse/HIVE-24437
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.3.7
>Reporter: JiangZhu
>Priority: Major
> Fix For: 2.3.0, 2.3.7
>
> Attachments: HIVE-24437.patch
>
>
> Add more removed configs for(HIVE-14132 Don't fail config validation for 
> removed configs)
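
For context, a minimal sketch of the shape of change such a patch typically makes, with placeholder key names (this is not the attached HIVE-24437.patch and not the actual HiveConf fields): validation consults a set of known-removed keys and skips them instead of failing.

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: keep a set of config keys known to have been removed so
// that validation can warn and skip them instead of failing. The key names
// below are placeholders, not the configs the attached patch actually adds.
public class RemovedConfigs {
  private static final Set<String> REMOVED = new HashSet<>(Arrays.asList(
      "hive.example.removed.key.one",
      "hive.example.removed.key.two"));

  public static boolean isRemoved(String key) {
    return REMOVED.contains(key);
  }
}
{code}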



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-24437) Add more removed configs for(Don't fail config validation for removed configs)

2020-11-27 Thread JiangZhu (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-24437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17239648#comment-17239648
 ] 

JiangZhu commented on HIVE-24437:
-

Add more removed configs for this

> Add more removed configs for(Don't fail config validation for removed configs)
> --
>
> Key: HIVE-24437
> URL: https://issues.apache.org/jira/browse/HIVE-24437
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.3.7
>Reporter: JiangZhu
>Priority: Major
> Fix For: 2.3.0, 2.3.7
>
> Attachments: HIVE-24437.patch
>
>
> Add more removed configs for(HIVE-14132 Don't fail config validation for 
> removed configs)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-24437) Add more removed configs for(Don't fail config validation for removed configs)

2020-11-27 Thread JiangZhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiangZhu updated HIVE-24437:

Attachment: HIVE-24437.patch

> Add more removed configs for(Don't fail config validation for removed configs)
> --
>
> Key: HIVE-24437
> URL: https://issues.apache.org/jira/browse/HIVE-24437
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.3.7
>Reporter: JiangZhu
>Priority: Major
> Fix For: 2.3.0, 2.3.7
>
> Attachments: HIVE-24437.patch
>
>
> Add more removed configs for(HIVE-14132 Don't fail config validation for 
> removed configs)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-24433) AutoCompaction is not getting triggered for CamelCase Partition Values

2020-11-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24433?focusedWorklogId=517278&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-517278
 ]

ASF GitHub Bot logged work on HIVE-24433:
-

Author: ASF GitHub Bot
Created on: 27/Nov/20 07:59
Start Date: 27/Nov/20 07:59
Worklog Time Spent: 10m 
  Work Description: nareshpr commented on pull request #1712:
URL: https://github.com/apache/hive/pull/1712#issuecomment-734699849


   Thanks for looking into this @pvargacl. I added a testcase in TestInitiator; 
please review.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 517278)
Time Spent: 0.5h  (was: 20m)

> AutoCompaction is not getting triggered for CamelCase Partition Values
> --
>
> Key: HIVE-24433
> URL: https://issues.apache.org/jira/browse/HIVE-24433
> Project: Hive
>  Issue Type: Bug
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> PartitionKeyValue is getting converted into lower case in the 2 places below.
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L2728]
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L2851]
> Because of this, the TXN_COMPONENTS & HIVE_LOCKS tables do not have entries 
> with the proper partition values.
> When the query completes, the entry moves from TXN_COMPONENTS to 
> COMPLETED_TXN_COMPONENTS. Hive AutoCompaction does not recognize the 
> partition & considers it an invalid partition.
> {code:java}
> create table abc(name string) partitioned by(city string) stored as orc 
> tblproperties('transactional'='true');
> insert into abc partition(city='Bangalore') values('aaa');
> {code}
> Example entry in COMPLETED_TXN_COMPONENTS
> {noformat}
> +-----------+--------------+-----------+----------------+---------------------+-------------+-------------------+
> | CTC_TXNID | CTC_DATABASE | CTC_TABLE | CTC_PARTITION  | CTC_TIMESTAMP       | CTC_WRITEID | CTC_UPDATE_DELETE |
> +-----------+--------------+-----------+----------------+---------------------+-------------+-------------------+
> |         2 | default      | abc       | city=bangalore | 2020-11-25 09:26:59 |           1 | N                 |
> +-----------+--------------+-----------+----------------+---------------------+-------------+-------------------+
> {noformat}
>  
> AutoCompaction fails to get triggered, with the error below:
> {code:java}
> 2020-11-25T09:35:10,364 INFO [Thread-9]: compactor.Initiator 
> (Initiator.java:run(98)) - Checking to see if we should compact 
> default.abc.city=bangalore
> 2020-11-25T09:35:10,380 INFO [Thread-9]: compactor.Initiator 
> (Initiator.java:run(155)) - Can't find partition 
> default.compaction_test.city=bangalore, assuming it has been dropped and 
> moving on{code}
> I verified the 4 SQLs below with my PR; they all produced the correct 
> PartitionKeyValue, 
> i.e., COMPLETED_TXN_COMPONENTS.CTC_PARTITION="city=Bangalore"
> {code:java}
> insert into table abc PARTITION(CitY='Bangalore') values('Dan');
> insert overwrite table abc partition(CiTy='Bangalore') select Name from abc;
> update table abc set Name='xy' where CiTy='Bangalore';
> delete from abc where CiTy='Bangalore';{code}
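
As an aside, a minimal hypothetical helper illustrating the behaviour the report asks for (this is not the actual TxnHandler code nor the attached PR): lower-case only the partition key name and preserve the user-supplied value casing.

{code:java}
// Hypothetical helper, not the actual TxnHandler code: normalize the partition
// *key* but keep the *value* casing, so "CitY"/"Bangalore" becomes
// "city=Bangalore", not "city=bangalore".
final class PartNames {
  static String normalize(String key, String value) {
    return key.toLowerCase() + "=" + value;
  }
}
{code}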



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-24439) HS2 memory leak when commitTxn fails and queries involve partitioned tables

2020-11-27 Thread Stamatis Zampetakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stamatis Zampetakis updated HIVE-24439:
---
Attachment: HIVE-24439-gc-roots.png

> HS2 memory leak when commitTxn fails and queries involve partitioned tables
> ---
>
> Key: HIVE-24439
> URL: https://issues.apache.org/jira/browse/HIVE-24439
> Project: Hive
>  Issue Type: Task
>Reporter: Stamatis Zampetakis
>Priority: Major
> Attachments: HIVE-24439-gc-roots.png, HIVE-24439-hive.log.gz, 
> heap_dump_overview.png
>
>
> Running explain plans on queries involving partitioned tables with many 
> partitions (for instance TPC-DS 30TB) leads to a memory leak when there are 
> failures during the commit of a transaction. 
> The heap dump shows many {{FieldSchema}} instances which cannot be garbage 
> collected since they are retained in the {{Context}} of the 
> {{DriverTxnHandler}} due to a [shutdown 
> hook|https://github.com/apache/hive/blob/aed7c86cdd59f0b2a4979633fbd191d451f2fd75/ql/src/java/org/apache/hadoop/hive/ql/DriverTxnHandler.java#L124]
>  that keeps a reference to the enclosing instance of DriverTxnHandler.
> !heap_dump_overview.png!
> In this case the commit failures are due to a metastore with a broken schema 
> (see stacktrace below) but I think that similar kinds of failures can lead to 
> the same situation.
> {noformat}
> 2020-11-27T05:45:32,629 ERROR [c69f30a1-864e-4b66-973a-0cc03fb81f3f main] 
> ql.Driver: FAILED: Hive Internal Error: 
> org.apache.hadoop.hive.ql.lockmgr.LockException(Error communicating with the 
> metastore)
> org.apache.hadoop.hive.ql.lockmgr.LockException: Error communicating with the 
> metastore
> at 
> org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.commitTxn(DbTxnManager.java:535)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.commitOrRollback(DriverTxnHandler.java:572)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.endTransactionAndCleanup(DriverTxnHandler.java:554)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.endTransactionAndCleanup(DriverTxnHandler.java:537)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.handleTransactionAfterExecution(DriverTxnHandler.java:487)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:333)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:149)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:144)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:164)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:230)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processCmd1(CliDriver.java:203)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:129)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:355)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:744)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:714)
> at 
> org.apache.hadoop.hive.cli.control.CorePerfCliDriver.runTest(CorePerfCliDriver.java:103)
> at 
> org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:157)
> at 
> org.apache.hadoop.hive.cli.TestTezTPCDS30TBPerfCliDriver.testCliDriver(TestTezTPCDS30TBPerfCliDriver.java:79)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.apache.hadoop.hive.cli.TestTezTPCDS30TBPerfCliDriver$1.evaluate(TestTezTPCDS30TBPerfCliDriver.java:62)
> Caused by: MetaException(message:Unable to update transaction database 
> org.postgresql.util.PSQLException: ERROR: column "CQ_TXN_ID" does not exist
>   Position: 271
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2532)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2267)
> at 
> 

[jira] [Assigned] (HIVE-24438) Review shutdown hooks for memory leak

2020-11-27 Thread Stamatis Zampetakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stamatis Zampetakis reassigned HIVE-24438:
--


> Review shutdown hooks for memory leak
> -
>
> Key: HIVE-24438
> URL: https://issues.apache.org/jira/browse/HIVE-24438
> Project: Hive
>  Issue Type: Task
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Major
>
> Shutdown hooks cannot be garbage collected until the JVM shuts down, making the 
> application prone to memory leaks. In many cases shutdown hooks are 
> registered through the use of an anonymous class. If the class is created 
> from a non-static context, the hook implicitly holds a reference to an 
> enclosing instance, which is not always desirable and can lead to memory leaks.
> The goal of this issue is to review calls registering shutdown hooks and 
> eliminate any reference to enclosing instances if that is possible. 
> Check the callers of:
>  * ShutdownHookManager#addShutdownHook(java.lang.Runnable)
>  * ShutdownHookManager#addShutdownHook(java.lang.Runnable, int)
>  
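
To illustrate the anonymous-class capture described above, a minimal hypothetical sketch using the plain JDK API (not Hive's ShutdownHookManager, and not actual Hive code):

{code:java}
// Hypothetical sketch of the pattern under review (not actual Hive code).
class Service {
  // Stands in for any large state held by the enclosing instance.
  private final byte[] largeState = new byte[64 * 1024 * 1024];

  void registerLeaky() {
    // An anonymous class created from non-static context implicitly keeps a
    // reference to this Service (and its largeState) until the JVM exits.
    Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
      @Override public void run() { System.out.println("shutting down"); }
    }));
  }

  void registerContained() {
    // A hook that captures only what it needs holds no reference to the
    // enclosing instance, so the Service itself stays collectible.
    final String name = "service";
    Runtime.getRuntime().addShutdownHook(
        new Thread(() -> System.out.println("shutting down " + name)));
  }
}
{code}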



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-24439) HS2 memory leak when commitTxn fails and queries involve partitioned tables

2020-11-27 Thread Stamatis Zampetakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stamatis Zampetakis updated HIVE-24439:
---
Attachment: heap_dump_overview.png

> HS2 memory leak when commitTxn fails and queries involve partitioned tables
> ---
>
> Key: HIVE-24439
> URL: https://issues.apache.org/jira/browse/HIVE-24439
> Project: Hive
>  Issue Type: Task
>Reporter: Stamatis Zampetakis
>Priority: Major
> Attachments: heap_dump_overview.png
>
>
> Running explain plans on queries involving partitioned tables with many 
> partitions (for instance TPC-DS 30TB) leads to a memory leak when there are 
> failures during the commit of a transaction. 
> The heap dump shows many {{FieldSchema}} instances which cannot be garbage 
> collected since they are retained in the {{Context}} of the 
> {{DriverTxnHandler}} due to a [shutdown 
> hook|https://github.com/apache/hive/blob/aed7c86cdd59f0b2a4979633fbd191d451f2fd75/ql/src/java/org/apache/hadoop/hive/ql/DriverTxnHandler.java#L124]
>  that keeps a reference to the enclosing instance of DriverTxnHandler.
> !heap_dump_overview.png!
> In this case the commit failures are due to a metastore with a broken schema 
> (see stacktrace below) but I think that similar kinds of failures can lead to 
> the same situation.
> {noformat}
> 2020-11-27T05:45:32,629 ERROR [c69f30a1-864e-4b66-973a-0cc03fb81f3f main] 
> ql.Driver: FAILED: Hive Internal Error: 
> org.apache.hadoop.hive.ql.lockmgr.LockException(Error communicating with the 
> metastore)
> org.apache.hadoop.hive.ql.lockmgr.LockException: Error communicating with the 
> metastore
> at 
> org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.commitTxn(DbTxnManager.java:535)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.commitOrRollback(DriverTxnHandler.java:572)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.endTransactionAndCleanup(DriverTxnHandler.java:554)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.endTransactionAndCleanup(DriverTxnHandler.java:537)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.handleTransactionAfterExecution(DriverTxnHandler.java:487)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:333)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:149)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:144)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:164)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:230)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processCmd1(CliDriver.java:203)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:129)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:355)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:744)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:714)
> at 
> org.apache.hadoop.hive.cli.control.CorePerfCliDriver.runTest(CorePerfCliDriver.java:103)
> at 
> org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:157)
> at 
> org.apache.hadoop.hive.cli.TestTezTPCDS30TBPerfCliDriver.testCliDriver(TestTezTPCDS30TBPerfCliDriver.java:79)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.apache.hadoop.hive.cli.TestTezTPCDS30TBPerfCliDriver$1.evaluate(TestTezTPCDS30TBPerfCliDriver.java:62)
> Caused by: MetaException(message:Unable to update transaction database 
> org.postgresql.util.PSQLException: ERROR: column "CQ_TXN_ID" does not exist
>   Position: 271
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2532)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2267)
> at 
> 

[jira] [Commented] (HIVE-24439) HS2 memory leak when commitTxn fails and queries involve partitioned tables

2020-11-27 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-24439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17239715#comment-17239715
 ] 

Stamatis Zampetakis commented on HIVE-24439:


I will include steps/patch to reproduce when HIVE-23965 gets merged to master. 

We should examine why the context in DriverTxnHandler is not cleaned up 
properly and whether we can avoid passing everything to the shutdown hook.
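
One possible direction, sketched with hypothetical names (not the actual DriverTxnHandler API): have the hook capture only a small snapshot of identifiers rather than the enclosing handler.

{code:java}
// Hypothetical sketch (not the actual DriverTxnHandler code): the hook closes
// over a small snapshot of identifiers instead of the enclosing handler, so
// the Context and its FieldSchema instances stay collectible after the query.
final class TxnCleanupHook {
  static Runnable forTxn(long txnId, String queryId) {
    return () -> System.err.println("Shutdown: releasing txn " + txnId + " of query " + queryId);
  }
}
// e.g. ShutdownHookManager.addShutdownHook(TxnCleanupHook.forTxn(42L, "query-1"));
{code}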

 

> HS2 memory leak when commitTxn fails and queries involve partitioned tables
> ---
>
> Key: HIVE-24439
> URL: https://issues.apache.org/jira/browse/HIVE-24439
> Project: Hive
>  Issue Type: Task
>Reporter: Stamatis Zampetakis
>Priority: Major
> Attachments: heap_dump_overview.png
>
>
> Running explain plans on queries involving partitioned tables with many 
> partitions (for instance TPC-DS 30TB) leads to a memory leak when there are 
> failures during the commit of a transaction. 
> The heap dump shows many {{FieldSchema}} instances which cannot be garbage 
> collected since they are retained in the {{Context}} of the 
> {{DriverTxnHandler}} due to a [shutdown 
> hook|https://github.com/apache/hive/blob/aed7c86cdd59f0b2a4979633fbd191d451f2fd75/ql/src/java/org/apache/hadoop/hive/ql/DriverTxnHandler.java#L124]
>  that keeps a reference to the enclosing instance of DriverTxnHandler.
> !heap_dump_overview.png!
> In this case the commit failures are due to a metastore with a broken schema 
> (see stacktrace below) but I think that similar kinds of failures can lead to 
> the same situation.
> {noformat}
> 2020-11-27T05:45:32,629 ERROR [c69f30a1-864e-4b66-973a-0cc03fb81f3f main] 
> ql.Driver: FAILED: Hive Internal Error: 
> org.apache.hadoop.hive.ql.lockmgr.LockException(Error communicating with the 
> metastore)
> org.apache.hadoop.hive.ql.lockmgr.LockException: Error communicating with the 
> metastore
> at 
> org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.commitTxn(DbTxnManager.java:535)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.commitOrRollback(DriverTxnHandler.java:572)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.endTransactionAndCleanup(DriverTxnHandler.java:554)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.endTransactionAndCleanup(DriverTxnHandler.java:537)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.handleTransactionAfterExecution(DriverTxnHandler.java:487)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:333)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:149)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:144)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:164)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:230)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processCmd1(CliDriver.java:203)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:129)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:355)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:744)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:714)
> at 
> org.apache.hadoop.hive.cli.control.CorePerfCliDriver.runTest(CorePerfCliDriver.java:103)
> at 
> org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:157)
> at 
> org.apache.hadoop.hive.cli.TestTezTPCDS30TBPerfCliDriver.testCliDriver(TestTezTPCDS30TBPerfCliDriver.java:79)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.apache.hadoop.hive.cli.TestTezTPCDS30TBPerfCliDriver$1.evaluate(TestTezTPCDS30TBPerfCliDriver.java:62)
> Caused by: MetaException(message:Unable to update transaction database 
> org.postgresql.util.PSQLException: ERROR: column "CQ_TXN_ID" does not exist
>   Position: 271
> at 
> 

[jira] [Updated] (HIVE-24439) HS2 memory leak when commitTxn fails and queries involve partitioned tables

2020-11-27 Thread Stamatis Zampetakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stamatis Zampetakis updated HIVE-24439:
---
Attachment: HIVE-24439-hive.log.gz

> HS2 memory leak when commitTxn fails and queries involve partitioned tables
> ---
>
> Key: HIVE-24439
> URL: https://issues.apache.org/jira/browse/HIVE-24439
> Project: Hive
>  Issue Type: Task
>Reporter: Stamatis Zampetakis
>Priority: Major
> Attachments: HIVE-24439-hive.log.gz, heap_dump_overview.png
>
>
> Running explain plans on queries involving partitioned tables with many 
> partitions (for instance TPC-DS 30TB) leads to a memory leak when there are 
> failures during the commit of a transaction. 
> The heap dump shows many {{FieldSchema}} instances which cannot be garbage 
> collected since they are retained in the {{Context}} of the 
> {{DriverTxnHandler}} due to a [shutdown 
> hook|https://github.com/apache/hive/blob/aed7c86cdd59f0b2a4979633fbd191d451f2fd75/ql/src/java/org/apache/hadoop/hive/ql/DriverTxnHandler.java#L124]
>  that keeps a reference to the enclosing instance of DriverTxnHandler.
> !heap_dump_overview.png!
> In this case the commit failures are due to a metastore with a broken schema 
> (see stacktrace below) but I think that similar kinds of failures can lead to 
> the same situation.
> {noformat}
> 2020-11-27T05:45:32,629 ERROR [c69f30a1-864e-4b66-973a-0cc03fb81f3f main] 
> ql.Driver: FAILED: Hive Internal Error: 
> org.apache.hadoop.hive.ql.lockmgr.LockException(Error communicating with the 
> metastore)
> org.apache.hadoop.hive.ql.lockmgr.LockException: Error communicating with the 
> metastore
> at 
> org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.commitTxn(DbTxnManager.java:535)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.commitOrRollback(DriverTxnHandler.java:572)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.endTransactionAndCleanup(DriverTxnHandler.java:554)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.endTransactionAndCleanup(DriverTxnHandler.java:537)
> at 
> org.apache.hadoop.hive.ql.DriverTxnHandler.handleTransactionAfterExecution(DriverTxnHandler.java:487)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:333)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:149)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:144)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:164)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:230)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processCmd1(CliDriver.java:203)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:129)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:424)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:355)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:744)
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:714)
> at 
> org.apache.hadoop.hive.cli.control.CorePerfCliDriver.runTest(CorePerfCliDriver.java:103)
> at 
> org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:157)
> at 
> org.apache.hadoop.hive.cli.TestTezTPCDS30TBPerfCliDriver.testCliDriver(TestTezTPCDS30TBPerfCliDriver.java:79)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.apache.hadoop.hive.cli.TestTezTPCDS30TBPerfCliDriver$1.evaluate(TestTezTPCDS30TBPerfCliDriver.java:62)
> Caused by: MetaException(message:Unable to update transaction database 
> org.postgresql.util.PSQLException: ERROR: column "CQ_TXN_ID" does not exist
>   Position: 271
> at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2532)
> at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2267)
> at 
> 

[jira] [Work logged] (HIVE-24423) Improve DbNotificationListener Thread

2020-11-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24423?focusedWorklogId=517415&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-517415
 ]

ASF GitHub Bot logged work on HIVE-24423:
-

Author: ASF GitHub Bot
Created on: 27/Nov/20 17:48
Start Date: 27/Nov/20 17:48
Worklog Time Spent: 10m 
  Work Description: belugabehr commented on a change in pull request #1703:
URL: https://github.com/apache/hive/pull/1703#discussion_r531715773



##
File path: hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
##
@@ -1242,64 +1244,50 @@ private void process(NotificationEvent event, ListenerEvent listenerEvent) throw
   }
 
   private static class CleanerThread extends Thread {
-private RawStore rs;
+private final RawStore rs;
 private int ttl;
-private boolean shouldRun = true;
 private long sleepTime;
 
 CleanerThread(Configuration conf, RawStore rs) {
   super("DB-Notification-Cleaner");
-  this.rs = rs;
-  boolean isReplEnabled = MetastoreConf.getBoolVar(conf, ConfVars.REPLCMENABLED);
-  if(isReplEnabled){
-setTimeToLive(MetastoreConf.getTimeVar(conf, ConfVars.REPL_EVENT_DB_LISTENER_TTL,
-TimeUnit.SECONDS));
-  }
-  else {
-setTimeToLive(MetastoreConf.getTimeVar(conf, ConfVars.EVENT_DB_LISTENER_TTL,
-TimeUnit.SECONDS));
-  }
-  setCleanupInterval(MetastoreConf.getTimeVar(conf, ConfVars.EVENT_DB_LISTENER_CLEAN_INTERVAL,
-  TimeUnit.MILLISECONDS));
   setDaemon(true);
+  this.rs = Objects.requireNonNull(rs);
+
+  boolean isReplEnabled = MetastoreConf.getBoolVar(conf, ConfVars.REPLCMENABLED);
+  ConfVars ttlConf = (isReplEnabled) ? ConfVars.REPL_EVENT_DB_LISTENER_TTL : ConfVars.EVENT_DB_LISTENER_TTL;
+  setTimeToLive(MetastoreConf.getTimeVar(conf, ttlConf, TimeUnit.SECONDS));
+  setCleanupInterval(
+  MetastoreConf.getTimeVar(conf, ConfVars.EVENT_DB_LISTENER_CLEAN_INTERVAL, TimeUnit.MILLISECONDS));
 }
 
 @Override
 public void run() {
-  while (shouldRun) {
+  while (true) {
+LOG.debug("Cleaner thread running");
 try {
   rs.cleanNotificationEvents(ttl);
   rs.cleanWriteNotificationEvents(ttl);
 } catch (Exception ex) {
-  //catching exceptions here makes sure that the thread doesn't die in case of unexpected
-  //exceptions
-  LOG.warn("Exception received while cleaning notifications: ", ex);
+  LOG.warn("Exception received while cleaning notifications", ex);

Review comment:
   Hey, fair question, and I considered that. However, 
`InterruptedException` is a checked exception and neither of these two methods 
declares it, so it will never be thrown here (unless the signature later changes 
to throw one). In fact, these methods do not throw any checked exceptions at 
all.
   
   ```
void cleanWriteNotificationEvents(int olderThan);
void cleanNotificationEvents(int olderThan);
   ```
   
   So, if the thread is interrupted at any point, it will eventually make its 
way to the `Thread.sleep()` call and throw the `InterruptedException` at that 
time (and exit).
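
   For readers following along, a simplified sketch of the control flow being described (not the actual DbNotificationListener code; `doCleanup()`, `LOG` and `sleepTime` are stand-ins for the real members):

   ```java
// Simplified sketch: the cleanup methods declare no checked exceptions, so an
// interrupt only surfaces once the thread reaches Thread.sleep(), at which
// point it logs and exits.
public void run() {
  while (true) {
    try {
      doCleanup();
    } catch (Exception ex) {
      LOG.warn("Exception received while cleaning notifications", ex);
    }
    try {
      Thread.sleep(sleepTime);
    } catch (InterruptedException ie) {
      LOG.info("Cleaner thread interrupted, exiting");
      return;
    }
  }
}
   ```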





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 517415)
Time Spent: 0.5h  (was: 20m)

> Improve DbNotificationListener Thread
> -
>
> Key: HIVE-24423
> URL: https://issues.apache.org/jira/browse/HIVE-24423
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Clean up and simplify the {{DbNotificationListener}} thread class.
> Most importantly, stop the thread and wait for it to finish before launching 
> a new thread.
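
As an illustration of that ordering, a minimal hypothetical sketch (the field and method names are assumptions, not the actual listener members):

{code:java}
// Hypothetical sketch of the "stop, then start" ordering described above.
private Thread cleaner;

synchronized void restartCleaner(Runnable cleanup) throws InterruptedException {
  if (cleaner != null) {
    cleaner.interrupt();   // ask the old cleaner to stop
    cleaner.join();        // wait for it to finish before replacing it
  }
  cleaner = new Thread(cleanup, "DB-Notification-Cleaner");
  cleaner.setDaemon(true);
  cleaner.start();
}
{code}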



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-24204) LLAP: Invalid TEZ Job token in multi fragment query

2020-11-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24204?focusedWorklogId=517476&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-517476
 ]

ASF GitHub Bot logged work on HIVE-24204:
-

Author: ASF GitHub Bot
Created on: 28/Nov/20 00:44
Start Date: 28/Nov/20 00:44
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] commented on pull request #1530:
URL: https://github.com/apache/hive/pull/1530#issuecomment-735019885


   This pull request has been automatically marked as stale because it has not 
had recent activity. It will be closed if no further activity occurs.
   Feel free to reach out on the d...@hive.apache.org list if the patch is in 
need of reviews.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 517476)
Time Spent: 20m  (was: 10m)

> LLAP: Invalid TEZ Job token in multi fragment query
> ---
>
> Key: HIVE-24204
> URL: https://issues.apache.org/jira/browse/HIVE-24204
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 2.3.0
>Reporter: Yuriy Baltovskyy
>Assignee: Yuriy Baltovskyy
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When using the LLAP server in a Kerberized environment and submitting a query 
> via the LLAP client that is planned as multiple fragments (multiple splits), 
> the following error occurs and the query fails:
> org.apache.hadoop.ipc.Server: javax.security.sasl.SaslException: DIGEST-MD5: 
> digest response format violation. Mismatched response.
> This occurs because each split uses its own connection to the LLAP server and 
> its own TEZ job token, while the LLAP server stores only one token, binding it 
> to the whole query rather than to the individual fragment. When the LLAP 
> server communicates with the clients and uses the stored token, this causes a 
> SASL exception due to an invalid token.
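
Purely as an illustration of the direction the description implies, a hypothetical per-fragment token registry (all names are invented; the real fix lives in LLAP's security/registry code):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: store one token per (queryId, fragmentId) instead of a
// single token per query, so each fragment's connection validates against the
// token it was actually issued.
final class FragmentTokenRegistry {
  private final Map<String, byte[]> tokens = new ConcurrentHashMap<>();

  private static String key(String queryId, int fragmentId) {
    return queryId + "/" + fragmentId;
  }

  void register(String queryId, int fragmentId, byte[] token) {
    tokens.put(key(queryId, fragmentId), token);
  }

  byte[] lookup(String queryId, int fragmentId) {
    return tokens.get(key(queryId, fragmentId));
  }
}
{code}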



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-23737) LLAP: Reuse dagDelete Feature Of Tez Custom Shuffle Handler Instead Of LLAP's dagDelete

2020-11-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23737?focusedWorklogId=517477&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-517477
 ]

ASF GitHub Bot logged work on HIVE-23737:
-

Author: ASF GitHub Bot
Created on: 28/Nov/20 00:44
Start Date: 28/Nov/20 00:44
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] commented on pull request #1195:
URL: https://github.com/apache/hive/pull/1195#issuecomment-735019890


   This pull request has been automatically marked as stale because it has not 
had recent activity. It will be closed if no further activity occurs.
   Feel free to reach out on the d...@hive.apache.org list if the patch is in 
need of reviews.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 517477)
Time Spent: 1.5h  (was: 1h 20m)

> LLAP: Reuse dagDelete Feature Of Tez Custom Shuffle Handler Instead Of LLAP's 
> dagDelete
> ---
>
> Key: HIVE-23737
> URL: https://issues.apache.org/jira/browse/HIVE-23737
> Project: Hive
>  Issue Type: Improvement
>  Components: llap
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> LLAP has a dagDelete feature added as part of HIVE-9911, but now that Tez 
> has added support for dagDelete in its custom shuffle handler (TEZ-3362) we 
> could re-use that feature in LLAP. 
> There are some added advantages to using Tez's dagDelete feature rather than 
> LLAP's current dagDelete feature:
> 1) We can easily extend this feature to accommodate upcoming features such as 
> vertex and failed-task-attempt shuffle data clean-up; refer to TEZ-3363 and 
> TEZ-4129.
> 2) It will be easier to maintain this feature by separating it out from 
> Hive's code path. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-24431) Null Pointer exception while sending data to jdbc

2020-11-27 Thread Fabien Carrion (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabien Carrion updated HIVE-24431:
--
Attachment: (was: check_null.patch)

> Null Pointer exception while sending data to jdbc
> -
>
> Key: HIVE-24431
> URL: https://issues.apache.org/jira/browse/HIVE-24431
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC storage handler
>Affects Versions: All Versions
>Reporter: Fabien Carrion
>Priority: Trivial
>
> I was receiving a null pointer exception while writing to the DB:
> {quote}ERROR : FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Reducer 
> 3, vertexId=vertex_1604850281565_5081_1_02, diagnostics=[Task failed, 
> taskId=task_1604850281565_5081_1_02_01, diagnostics=[TaskAttempt 0 
> failed, info=[Error: Error while running task ( failure ) : 
> attempt_1604850281565_5081_1_02_01_0:java.lang.RuntimeException: 
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
> Hive Runtime Error while processing row
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
>  at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
>  at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
>  at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
>  at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
>  at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
>  at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
>  at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
>  at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
>  at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row
>  at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:304)
>  at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:318)
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
>  ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row
>  at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:378)
>  at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:294)
>  ... 18 more
> Caused by: java.lang.NullPointerException
>  at org.apache.hive.storage.jdbc.JdbcSerDe.serialize(JdbcSerDe.java:166)
>  at org.apache.hive.storage.jdbc.JdbcSerDe.serialize(JdbcSerDe.java:59)
>  at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:961)
>  at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
>  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
>  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
>  at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
>  at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
>  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
>  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
>  at 
> org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.handleOutputRows(PTFOperator.java:337)
>  at 
> org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.processRow(PTFOperator.java:325)
>  at org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:139)
>  at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
>  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
>  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
>  at 
> 

[jira] [Updated] (HIVE-24431) Null Pointer exception while sending data to jdbc

2020-11-27 Thread Fabien Carrion (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabien Carrion updated HIVE-24431:
--
Attachment: check_null.patch

> Null Pointer exception while sending data to jdbc
> -
>
> Key: HIVE-24431
> URL: https://issues.apache.org/jira/browse/HIVE-24431
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC storage handler
>Affects Versions: All Versions
>Reporter: Fabien Carrion
>Priority: Trivial
> Attachments: check_null.patch
>
>
> I was receiving a null pointer exception while writing to the DB:
> {quote}ERROR : FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Reducer 
> 3, vertexId=vertex_1604850281565_5081_1_02, diagnostics=[Task failed, 
> taskId=task_1604850281565_5081_1_02_01, diagnostics=[TaskAttempt 0 
> failed, info=[Error: Error while running task ( failure ) : 
> attempt_1604850281565_5081_1_02_01_0:java.lang.RuntimeException: 
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
> Hive Runtime Error while processing row
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
>  at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
>  at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
>  at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
>  at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
>  at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
>  at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
>  at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
>  at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
>  at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row
>  at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:304)
>  at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:318)
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
>  ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row
>  at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:378)
>  at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:294)
>  ... 18 more
> Caused by: java.lang.NullPointerException
>  at org.apache.hive.storage.jdbc.JdbcSerDe.serialize(JdbcSerDe.java:166)
>  at org.apache.hive.storage.jdbc.JdbcSerDe.serialize(JdbcSerDe.java:59)
>  at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:961)
>  at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
>  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
>  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
>  at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
>  at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
>  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
>  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
>  at 
> org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.handleOutputRows(PTFOperator.java:337)
>  at 
> org.apache.hadoop.hive.ql.exec.PTFOperator$PTFInvocation.processRow(PTFOperator.java:325)
>  at org.apache.hadoop.hive.ql.exec.PTFOperator.process(PTFOperator.java:139)
>  at org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
>  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
>  at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
>  at 
> 
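
The attached check_null.patch is not reproduced here; purely as an illustration of the general kind of guard such an NPE usually calls for (hypothetical, not the actual JdbcSerDe code):

{code:java}
// Hypothetical illustration only (not the contents of check_null.patch):
// convert a field to its JDBC value, emitting SQL NULL for null inputs
// instead of dereferencing them and throwing a NullPointerException.
final class JdbcNullGuard {
  static Object toJdbcValue(Object fieldValue) {
    return (fieldValue == null) ? null : fieldValue.toString();
  }
}
{code}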

[jira] [Assigned] (HIVE-24437) Add more removed configs for(Don't fail config validation for removed configs)

2020-11-27 Thread JiangZhu (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JiangZhu reassigned HIVE-24437:
---

Assignee: JiangZhu

> Add more removed configs for(Don't fail config validation for removed configs)
> --
>
> Key: HIVE-24437
> URL: https://issues.apache.org/jira/browse/HIVE-24437
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.3.7
>Reporter: JiangZhu
>Assignee: JiangZhu
>Priority: Major
> Fix For: 2.3.0, 2.3.7
>
> Attachments: HIVE-24437.patch
>
>
> Add more removed configs for(HIVE-14132 Don't fail config validation for 
> removed configs)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)