Re: Review Request 71784: HiveProtoLoggingHook might consume lots of memory

2019-11-20 Thread Harish Jaiprakash via Review Board


> On Nov. 20, 2019, 7:32 a.m., Harish Jaiprakash wrote:
> > ql/src/test/org/apache/hadoop/hive/ql/hooks/TestHiveProtoLoggingHook.java
> > Lines 176 (patched)
> > 
> >
> > We expect the dequeue not to have happened by this time. There is no 
> > guarantee, since it's another thread. Can we at least add a comment that this 
> > test can fail intermittently?
> 
> Attila Magyar wrote:
> I guess this affects the existing tests as well, right? However, I don't 
> remember seeing any of those failing. Maybe it's because we're calling 
> shutdown() on the evtLogger. According to its javadoc it waits for already 
> submitted tasks to complete.

The existing tests do not rely on the order of execution between threads. But this 
test relies on the ExecutorService thread not being scheduled before the 4 
submits to the executor service are executed, and hence can cause intermittent 
failures. The only way to fix it would be to expose the underlying executor 
service, submit a task which will block the thread, and then do the rest of the 
calls.
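
A minimal sketch of that idea, assuming a hypothetical test-only accessor 
(getExecutorService()) were exposed on the event logger; it does not exist in 
the current patch:

{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;

// Sketch only: park the single worker thread so nothing is dequeued while
// the test performs its submits and assertions.
CountDownLatch blocker = new CountDownLatch(1);
ExecutorService executor = evtLogger.getExecutorService(); // hypothetical accessor
executor.submit(() -> {
  try {
    blocker.await(); // hold the only worker thread
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
  }
});
// ...perform the 4 submits here; none of them can be picked up yet...
// ...assert on queue size / overflow behaviour deterministically...
blocker.countDown(); // release the worker so shutdown() can complete
{code}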


- Harish


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71784/#review218710
---


On Nov. 19, 2019, 9:13 p.m., Attila Magyar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71784/
> ---
> 
> (Updated Nov. 19, 2019, 9:13 p.m.)
> 
> 
> Review request for hive, Laszlo Bodor, Harish Jaiprakash, Mustafa Iman, and 
> Panos Garefalakis.
> 
> 
> Bugs: HIVE-22514
> https://issues.apache.org/jira/browse/HIVE-22514
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer 
> tasks and to periodically handle rollover. The built-in 
> ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced 
> from the outside. If log events are generated at a very fast rate, this queue 
> can grow large.
> 
> Since ScheduledThreadPoolExecutor does not support changing the default 
> unbounded queue to a bounded one, the queue capacity is checked manually by 
> the patch.
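> 
> A minimal sketch of that manual bound (illustrative only, not the actual 
> patch; the capacity value would come from the new HiveConf setting):
> 
> {code}
> import java.util.concurrent.ScheduledThreadPoolExecutor;
> 
> // Sketch: the internal queue stays unbounded, but getQueue() exposes it,
> // so its size can be checked before every submit.
> class BoundedSubmitter {
>   private final ScheduledThreadPoolExecutor executor =
>       new ScheduledThreadPoolExecutor(1);
>   private final int queueCapacity; // e.g. read from HiveConf
> 
>   BoundedSubmitter(int queueCapacity) {
>     this.queueCapacity = queueCapacity;
>   }
> 
>   boolean trySubmit(Runnable writerTask) {
>     if (executor.getQueue().size() >= queueCapacity) {
>       return false; // drop the event rather than let memory grow without bound
>     }
>     executor.submit(writerTask);
>     return true;
>   }
> }
> {code}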
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java a7687d59004 
>   ql/src/java/org/apache/hadoop/hive/ql/hooks/HiveProtoLoggingHook.java 
> 8eab54859bf 
>   ql/src/test/org/apache/hadoop/hive/ql/hooks/TestHiveProtoLoggingHook.java 
> 450a0b544d6 
> 
> 
> Diff: https://reviews.apache.org/r/71784/diff/1/
> 
> 
> Testing
> ---
> 
> unittest
> 
> 
> Thanks,
> 
> Attila Magyar
> 
>



[jira] [Created] (HIVE-22522) llap doesn't work using complex join operation

2019-11-20 Thread lv haiyang (Jira)
lv haiyang created HIVE-22522:
-

 Summary: llap doesn't work using complex join operation
 Key: HIVE-22522
 URL: https://issues.apache.org/jira/browse/HIVE-22522
 Project: Hive
  Issue Type: Bug
  Components: llap
Affects Versions: 3.1.1
Reporter: lv haiyang


ERROR : FAILED: Execution Error, return code 2 from 
org.apache.hadoop.hive.ql.exec.tez.TezTask. 
 Dag received [DAG_TERMINATE, SERVICE_PLUGIN_ERROR] in RUNNING state.
 Error reported by TaskScheduler [[2:LLAP]][SERVICE_UNAVAILABLE] 
 No LLAP Daemons are runningVertex killed, vertexName=Reducer 3, 
vertexId=vertex_1574126686177_0029_47_08,
 diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not 
succeed due to DAG_TERMINATED, 
 failedTasks:0 killedTasks:1, Vertex vertex_1574126686177_0029_47_08 [Reducer 
3] killed/failed due to:
 DAG_TERMINATED]Vertex killed, vertexName=Map 1, 
vertexId=vertex_1574126686177_0029_47_05, 
 diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not 
succeed due to DAG_TERMINATED,
 failedTasks:0 killedTasks:23, Vertex vertex_1574126686177_0029_47_05 [Map 1] 
killed/failed due to:
 DAG_TERMINATED]Vertex killed, vertexName=Reducer 2, 
vertexId=vertex_1574126686177_0029_47_07, 
 diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not 
succeed due to DAG_TERMINATED,
 failedTasks:0 killedTasks:68, Vertex vertex_1574126686177_0029_47_07 [Reducer 
2] killed/failed due to:
 DAG_TERMINATED]Vertex killed, vertexName=Reducer 4, 
vertexId=vertex_1574126686177_0029_47_06,
 diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not 
succeed due to DAG_TERMINATED,
 failedTasks:0 killedTasks:72, Vertex vertex_1574126686177_0029_47_06 [Reducer 
4] killed/failed due to:
 DAG_TERMINATED]DAG did not succeed due to SERVICE_PLUGIN_ERROR. 
failedVertices:0 killedVertices:4
INFO : Completed executing 
command(queryId=hive_20191120101841_c7d177d8-28bb-48f8-a14f-eb65fc3b); 
Time taken: 557.077 seconds
Error: Error while processing statement: FAILED: Execution Error,
 return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. 
 Dag received [DAG_TERMINATE, SERVICE_PLUGIN_ERROR] in RUNNING state.
 Error reported by TaskScheduler [[2:LLAP]][SERVICE_UNAVAILABLE] 
 No LLAP Daemons are runningVertex killed, vertexName=Reducer 3, 
vertexId=vertex_1574126686177_0029_47_08,
 diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not 
succeed due to DAG_TERMINATED, 
 failedTasks:0 killedTasks:1, Vertex vertex_1574126686177_0029_47_08 [Reducer 
3] killed/failed due to:
 DAG_TERMINATED]Vertex killed, vertexName=Map 1, 
vertexId=vertex_1574126686177_0029_47_05, 
 diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not 
succeed due to DAG_TERMINATED, 
 failedTasks:0 killedTasks:23, Vertex vertex_1574126686177_0029_47_05 [Map 1] 
killed/failed due to:
 DAG_TERMINATED]Vertex killed, vertexName=Reducer 2, 
vertexId=vertex_1574126686177_0029_47_07, 
 diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not 
succeed due to DAG_TERMINATED, 
 failedTasks:0 killedTasks:68, Vertex vertex_1574126686177_0029_47_07 [Reducer 
2] killed/failed due to:
 DAG_TERMINATED]Vertex killed, vertexName=Reducer 4, 
vertexId=vertex_1574126686177_0029_47_06, 
 diagnostics=[Vertex received Kill while in RUNNING state., Vertex did not 
succeed due to DAG_TERMINATED, 
 failedTasks:0 killedTasks:72, Vertex vertex_1574126686177_0029_47_06 [Reducer 
4] killed/failed due to:
 DAG_TERMINATED]DAG did not succeed due to SERVICE_PLUGIN_ERROR. 
failedVertices:0 killedVertices:
 4 (state=08S01,code=2)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Review Request 71775: HIVE-22280: Q tests for partitioned temporary tables

2019-11-20 Thread Laszlo Pinter via Review Board

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71775/
---

Review request for hive, Marta Kuczora, Peter Vary, and Adam Szita.


Repository: hive-git


Description
---

HIVE-22280: Q tests for partitioned temporary tables


Diffs
-

  itests/src/test/resources/testconfiguration.properties 
2918a6852c6f8448ea44472df0be9d521d5c3b27 
  ql/src/test/queries/clientnegative/temp_table_addpart1.q PRE-CREATION 
  
ql/src/test/queries/clientnegative/temp_table_alter_rename_partition_failure.q 
PRE-CREATION 
  
ql/src/test/queries/clientnegative/temp_table_alter_rename_partition_failure2.q 
PRE-CREATION 
  
ql/src/test/queries/clientnegative/temp_table_alter_rename_partition_failure3.q 
PRE-CREATION 
  ql/src/test/queries/clientnegative/temp_table_drop_partition_failure.q 
PRE-CREATION 
  ql/src/test/queries/clientnegative/temp_table_drop_partition_filter_failure.q 
PRE-CREATION 
  ql/src/test/queries/clientnegative/temp_table_exchange_partitions.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_add_part_exist.q PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_add_part_multiple.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_add_part_with_loc.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_alter_partition_change_col.q 
PRE-CREATION 
  
ql/src/test/queries/clientpositive/temp_table_alter_partition_clusterby_sortby.q
 PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_alter_partition_coltype.q 
PRE-CREATION 
  
ql/src/test/queries/clientpositive/temp_table_alter_partition_onto_nocurrent_db.q
 PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_alter_rename_partition.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_avro_partitioned.q PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_avro_partitioned_native.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_default_partition_name.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_drop_multi_partitions.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_drop_partitions_filter.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_drop_partitions_filter2.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_drop_partitions_filter3.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_drop_partitions_filter4.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_exchange_partition.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_exchange_partition2.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_exchange_partition3.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_exchgpartition2lel.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_insert1_overwrite_partitions.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_insert2_overwrite_partitions.q 
PRE-CREATION 
  
ql/src/test/queries/clientpositive/temp_table_insert_values_dynamic_partitioned.q
 PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_insert_values_partitioned.q 
PRE-CREATION 
  
ql/src/test/queries/clientpositive/temp_table_insert_with_move_files_from_source_dir.q
 PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_llap_partitioned.q PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_load_dyn_part1.q PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_loadpart1.q PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_loadpart2.q PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_merge_dynamic_partition.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_merge_dynamic_partition2.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_merge_dynamic_partition3.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_merge_dynamic_partition4.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_merge_dynamic_partition5.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_multi_insert_partitioned.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_orc_diff_part_cols.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_orc_diff_part_cols2.q 
PRE-CREATION 
  
ql/src/test/queries/clientpositive/temp_table_parquet_mixed_partition_formats.q 
PRE-CREATION 
  
ql/src/test/queries/clientpositive/temp_table_parquet_mixed_partition_formats2.q
 PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_parquet_partitioned.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_parquet_ppd_partition.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_partInit.q PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_partcols1.q PRE-CREATION 
  ql/src/test/queries/clientpositive/temp_table_partition_boolean.q 
PRE-CREATION 
  ql/src/test/q

[jira] [Created] (HIVE-22521) Both Driver and SessionState has a userName

2019-11-20 Thread Zoltan Haindrich (Jira)
Zoltan Haindrich created HIVE-22521:
---

 Summary: Both Driver and SessionState has a userName
 Key: HIVE-22521
 URL: https://issues.apache.org/jira/browse/HIVE-22521
 Project: Hive
  Issue Type: Bug
Reporter: Zoltan Haindrich
Assignee: Zoltan Haindrich


This caused some confusing behaviour for me, especially when the two values 
were different.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Review Request 71792: COMPLETED_TXN_COMPONENTS table is never cleaned up unless Compactor runs

2019-11-20 Thread Denys Kuzmenko via Review Board

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71792/#review218723
---



Not ready. Need to handle aborted and currently active compactions.

- Denys Kuzmenko


On Nov. 20, 2019, 12:20 p.m., Denys Kuzmenko wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71792/
> ---
> 
> (Updated Nov. 20, 2019, 12:20 p.m.)
> 
> 
> Review request for hive, Laszlo Pinter and Peter Vary.
> 
> 
> Bugs: HIVE-21917
> https://issues.apache.org/jira/browse/HIVE-21917
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> The Initiator thread in the metastore repeatedly loops over entries in the 
> COMPLETED_TXN_COMPONENTS table to determine which partitions / tables might 
> need to be compacted. However, entries are never removed from this table 
> except by a completed Compactor run.
> 
> In a cluster where most tables / partitions are write-once read-many, this 
> results in stale entries in this table never being cleaned up. In a small 
> test cluster, we have observed approximately 45k entries in this table 
> (virtually equal to the number of partitions in the cluster) while < 100 of 
> these tables have delta files at all. Since most of the tables will never get 
> enough writes to trigger a compaction (and in fact have only ever been 
> written to once), the initiator thread keeps trying to evaluate them on every 
> loop.
> 
> 
> Diffs
> -
> 
>   ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Initiator.java 
> 610cf05204 
>   
> ql/src/test/org/apache/hadoop/hive/metastore/txn/TestCompactionTxnHandler.java
>  b28b57779b 
>   
> standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java
>  8253ccb9c9 
>   
> standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
>  6281208247 
>   
> standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnStore.java
>  e840758c9d 
> 
> 
> Diff: https://reviews.apache.org/r/71792/diff/1/
> 
> 
> Testing
> ---
> 
> Unit tests
> 
> 
> Thanks,
> 
> Denys Kuzmenko
> 
>



Review Request 71792: COMPLETED_TXN_COMPONENTS table is never cleaned up unless Compactor runs

2019-11-20 Thread Denys Kuzmenko via Review Board

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71792/
---

Review request for hive, Laszlo Pinter and Peter Vary.


Bugs: HIVE-21917
https://issues.apache.org/jira/browse/HIVE-21917


Repository: hive-git


Description
---

The Initiator thread in the metastore repeatedly loops over entries in the 
COMPLETED_TXN_COMPONENTS table to determine which partitions / tables might 
need to be compacted. However, entries are never removed from this table except 
by a completed Compactor run.

In a cluster where most tables / partitions are write-once read-many, this 
results in stale entries in this table never being cleaned up. In a small test 
cluster, we have observed approximately 45k entries in this table (virtually 
equal to the number of partitions in the cluster) while < 100 of these tables 
have delta files at all. Since most of the tables will never get enough writes 
to trigger a compaction (and in fact have only ever been written to once), the 
initiator thread keeps trying to evaluate them on every loop.
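
A hedged sketch of the cleanup direction (illustrative only; the column names 
match the metastore schema, but the exact SQL in the patch may differ, and as 
noted in the review it still has to handle aborted and currently active 
compactions):

{code}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch: when the Initiator decides an entry needs no compaction, purge its
// rows from COMPLETED_TXN_COMPONENTS so it is not re-evaluated on every loop.
static void purgeStaleEntries(Connection dbConn, String db, String table,
                              String partition) throws SQLException {
  try (PreparedStatement stmt = dbConn.prepareStatement(
      "DELETE FROM COMPLETED_TXN_COMPONENTS"
      + " WHERE CTC_DATABASE = ? AND CTC_TABLE = ? AND CTC_PARTITION = ?")) {
    stmt.setString(1, db);
    stmt.setString(2, table);
    stmt.setString(3, partition);
    stmt.executeUpdate();
    dbConn.commit();
  }
}
{code}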


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Initiator.java 610cf05204 
  
ql/src/test/org/apache/hadoop/hive/metastore/txn/TestCompactionTxnHandler.java 
b28b57779b 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java
 8253ccb9c9 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java
 6281208247 
  
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnStore.java
 e840758c9d 


Diff: https://reviews.apache.org/r/71792/diff/1/


Testing
---

Unit tests


Thanks,

Denys Kuzmenko



[jira] [Created] (HIVE-22520) MS-SQL server: Load partition throws error in TxnHandler (ACID dataset)

2019-11-20 Thread Rajesh Balamohan (Jira)
Rajesh Balamohan created HIVE-22520:
---

 Summary: MS-SQL server: Load partition throws error in TxnHandler 
(ACID dataset)
 Key: HIVE-22520
 URL: https://issues.apache.org/jira/browse/HIVE-22520
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 3.1.2
Reporter: Rajesh Balamohan


When loading an ACID table with MS-SQL server as the backend, it ends up 
throwing the following exception.

 
{noformat}
 thrift.ProcessFunction: Internal error processing add_dynamic_partitions
org.apache.hadoop.hive.metastore.api.MetaException: Unable to insert into from 
transaction database com.microsoft.sqlserver.jdbc.SQLServerException: The 
incoming request has too many parameters. The server supports a maximum of 2100 
parameters. Reduce the number of parameters and resend the request.
at 
com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:254)
at 
com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1608)
at 
com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:578)
at 
com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:508)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7240)
at 
com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2869)
at 
com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:243)
at 
com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:218)
at 
com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeUpdate(SQLServerPreparedStatement.java:461)
at 
com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
at 
com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.addDynamicPartitions(TxnHandler.java:3149)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.add_dynamic_partitions(HiveMetaStore.java:7824)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
at 
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
at com.sun.proxy.$Proxy32.add_dynamic_partitions(Unknown Source)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$add_dynamic_partitions.getResult(ThriftHiveMetastore.java:19038)
at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$add_dynamic_partitions.getResult(ThriftHiveMetastore.java:19022)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at 
org.apache.hadoop.hive.metastore.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:48)
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{noformat}

https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L3258
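
One plausible direction is to chunk the multi-row INSERT so each statement 
stays below the 2100-bind cap. A hedged sketch follows (not the actual 
TxnHandler code; the table and columns are abbreviated for illustration):

{code}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Sketch: 2 binds per row * 1000 rows = 2000 parameters, safely under 2100.
static void insertChunked(Connection conn, long txnId, List<String> partNames)
    throws SQLException {
  final int chunkRows = 1000;
  for (int from = 0; from < partNames.size(); from += chunkRows) {
    List<String> chunk =
        partNames.subList(from, Math.min(from + chunkRows, partNames.size()));
    StringBuilder sql = new StringBuilder(
        "INSERT INTO TXN_COMPONENTS (TC_TXNID, TC_PARTITION) VALUES ");
    for (int i = 0; i < chunk.size(); i++) {
      sql.append(i == 0 ? "(?, ?)" : ", (?, ?)");
    }
    try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
      int idx = 1;
      for (String part : chunk) {
        ps.setLong(idx++, txnId);
        ps.setString(idx++, part);
      }
      ps.executeUpdate();
    }
  }
}
{code}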
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HIVE-22519) TestMiniLlapLocalCliDriver#testCliDriver[sysdb_schq] fails intermittently

2019-11-20 Thread Zoltan Haindrich (Jira)
Zoltan Haindrich created HIVE-22519:
---

 Summary: TestMiniLlapLocalCliDriver#testCliDriver[sysdb_schq] 
fails intermittently
 Key: HIVE-22519
 URL: https://issues.apache.org/jira/browse/HIVE-22519
 Project: Hive
  Issue Type: Bug
  Components: Tests
Reporter: Zoltan Haindrich
Assignee: Zoltan Haindrich


Sometimes there is some "desc formatted"-like output in the q.out, which shows 
up as a difference.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HIVE-22518) SQLStdHiveAuthorizerFactoryForTest doesn't work correctly for llap tests

2019-11-20 Thread Zoltan Haindrich (Jira)
Zoltan Haindrich created HIVE-22518:
---

 Summary: SQLStdHiveAuthorizerFactoryForTest doesn't work correctly 
for llap tests
 Key: HIVE-22518
 URL: https://issues.apache.org/jira/browse/HIVE-22518
 Project: Hive
  Issue Type: Bug
Reporter: Zoltan Haindrich






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HIVE-22517) Sysdb related qtests also output the sysdb sql commands to q.out

2019-11-20 Thread Zoltan Haindrich (Jira)
Zoltan Haindrich created HIVE-22517:
---

 Summary: Sysdb related qtests also output the sysdb sql commands 
to q.out
 Key: HIVE-22517
 URL: https://issues.apache.org/jira/browse/HIVE-22517
 Project: Hive
  Issue Type: Bug
  Components: Test
Reporter: Zoltan Haindrich
Assignee: Zoltan Haindrich


It would be better not to have these commands in the outputs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (HIVE-22516) TestScheduledQueryIntegration fails occasionally

2019-11-20 Thread Zoltan Haindrich (Jira)
Zoltan Haindrich created HIVE-22516:
---

 Summary: TestScheduledQueryIntegration fails occasionally
 Key: HIVE-22516
 URL: https://issues.apache.org/jira/browse/HIVE-22516
 Project: Hive
  Issue Type: Bug
Reporter: Zoltan Haindrich
Assignee: Zoltan Haindrich


The failure seems to be caused by some filesystem-level operation:

{code}

Failed
org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.testScheduledQueryExecutionImpersonation

Failing for the past 2 builds (Since Failed#19506 )
Took 21 sec.
Error Message
java.io.IOException: ExitCodeException exitCode=1: chmod: cannot access 
‘/home/hiveptest/35.224.52.88-hiveptest-0/apache-github-source-source/target/tmp/junit9072291964634791171/scratchdir/hiveptest/_tez_session_dir/d1aa15eb-d23c-4248-b509-0b29c456a1cd/.tez/application_1574237195383_0001_wd/localmode-log-dir’:
 No such file or directory
Stacktrace
java.lang.RuntimeException: 
java.io.IOException: ExitCodeException exitCode=1: chmod: cannot access 
‘/home/hiveptest/35.224.52.88-hiveptest-0/apache-github-source-source/target/tmp/junit9072291964634791171/scratchdir/hiveptest/_tez_session_dir/d1aa15eb-d23c-4248-b509-0b29c456a1cd/.tez/application_1574237195383_0001_wd/localmode-log-dir’:
 No such file or directory

at 
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:701)
at 
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:606)
at 
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:586)
at 
org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.createDriver(TestScheduledQueryIntegration.java:164)
at 
org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.runAsUser(TestScheduledQueryIntegration.java:132)
at 
org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.testScheduledQueryExecutionImpersonation(TestScheduledQueryIntegration.java:115)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413)
Caused by: java.io.IOException: ExitCodeException exitCode=1: chmod: cannot 
access 
‘/home/hiveptest/35.224.52.88-hiveptest-0/apache-github-source-source/target/tmp/junit9072291964634791171/scratchdir/hiveptest/_tez_session_dir/d1aa15eb-d23c-4248-b509-0b29c456a1cd/.tez/application_1574237195383_0001_wd/localmode-log-dir’:
 No s

Re: Review Request 71784: HiveProtoLoggingHook might leak memory

2019-11-20 Thread Attila Magyar


> On Nov. 20, 2019, 1:58 a.m., Harish Jaiprakash wrote:
> > Thanks for the change. This does solve the memory problem and it looks good 
> > to me.
> > 
> > We need a follow-up JIRA to address why the queue size was 17,000 events. 
> > Was this hdfs or s3fs? In either case we should add some more 
> > optimizations, like:
> > * if there are a lot of events, batch the flushes to hdfs (see the sketch 
> > below);
> > * if it's one-event-per-file mode, increase parallelism since writes are not 
> > happening in different files.
> > 
> > FYI, the dropped events are lost since they are not written to the hdfs file 
> > and DAS will not get these events. But that is better than crashing 
> > hiveserver2.

Thanks for the review. The hive.hook.proto.base-directory points to an s3a path.
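
A generic sketch of the batch-flush idea from the comment above, using plain 
JDK types rather than the hook's actual writer classes:

{code}
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

// Sketch: drain every queued (already serialized) event and flush once per
// batch instead of once per event; on s3a/hdfs this amortizes the expensive
// flush round trip across the whole batch.
static void flushBatch(BlockingQueue<byte[]> queue, OutputStream out,
                       int maxBatch) throws IOException {
  List<byte[]> batch = new ArrayList<>(maxBatch);
  queue.drainTo(batch, maxBatch);
  for (byte[] event : batch) {
    out.write(event); // buffered write, no flush yet
  }
  if (!batch.isEmpty()) {
    out.flush(); // single flush for the whole batch
  }
}
{code}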


- Attila


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71784/#review218709
---


On Nov. 19, 2019, 3:43 p.m., Attila Magyar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71784/
> ---
> 
> (Updated Nov. 19, 2019, 3:43 p.m.)
> 
> 
> Review request for hive, Laszlo Bodor, Harish Jaiprakash, Mustafa Iman, and 
> Panos Garefalakis.
> 
> 
> Bugs: HIVE-22514
> https://issues.apache.org/jira/browse/HIVE-22514
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer 
> tasks and to periodically handle rollover. The built-in 
> ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced 
> from the outside. If log events are generated at a very fast rate, this queue 
> can grow large.
> 
> Since ScheduledThreadPoolExecutor does not support changing the default 
> unbounded queue to a bounded one, the queue capacity is checked manually by 
> the patch.
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java a7687d59004 
>   ql/src/java/org/apache/hadoop/hive/ql/hooks/HiveProtoLoggingHook.java 
> 8eab54859bf 
>   ql/src/test/org/apache/hadoop/hive/ql/hooks/TestHiveProtoLoggingHook.java 
> 450a0b544d6 
> 
> 
> Diff: https://reviews.apache.org/r/71784/diff/1/
> 
> 
> Testing
> ---
> 
> unittest
> 
> 
> Thanks,
> 
> Attila Magyar
> 
>



Re: Review Request 71784: HiveProtoLoggingHook might leak memory

2019-11-20 Thread Attila Magyar


> On Nov. 20, 2019, 2:02 a.m., Harish Jaiprakash wrote:
> > ql/src/test/org/apache/hadoop/hive/ql/hooks/TestHiveProtoLoggingHook.java
> > Lines 176 (patched)
> > 
> >
> > We expect the dequeue not to have happened by this time. There is no 
> > guarantee, since it's another thread. Can we at least add a comment that this 
> > test can fail intermittently?

I guess this affects the existing tests as well, right? However, I don't 
remember seeing any of those failing. Maybe it's because we're calling 
shutdown() on the evtLogger. According to its javadoc it waits for already 
submitted tasks to complete.
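
For reference, the standard JDK pattern (a generic sketch, not the hook's exact 
shutdown code): shutdown() only stops new submissions and lets already 
submitted tasks run; actually blocking until they finish takes awaitTermination().

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// shutdown() rejects new tasks but lets previously submitted ones complete;
// awaitTermination() is what actually blocks until the queue drains.
static void shutdownAndWait(ExecutorService executor) throws InterruptedException {
  executor.shutdown();
  if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
    executor.shutdownNow(); // give up and interrupt whatever is left
  }
}
{code}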


- Attila


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/71784/#review218710
---


On Nov. 19, 2019, 3:43 p.m., Attila Magyar wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/71784/
> ---
> 
> (Updated Nov. 19, 2019, 3:43 p.m.)
> 
> 
> Review request for hive, Laszlo Bodor, Harish Jaiprakash, Mustafa Iman, and 
> Panos Garefalakis.
> 
> 
> Bugs: HIVE-22514
> https://issues.apache.org/jira/browse/HIVE-22514
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer 
> tasks and to periodically handle rollover. The built-in 
> ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced 
> from the outside. If log events are generated at a very fast rate, this queue 
> can grow large.
> 
> Since ScheduledThreadPoolExecutor does not support changing the default 
> unbounded queue to a bounded one, the queue capacity is checked manually by 
> the patch.
> 
> 
> Diffs
> -
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java a7687d59004 
>   ql/src/java/org/apache/hadoop/hive/ql/hooks/HiveProtoLoggingHook.java 
> 8eab54859bf 
>   ql/src/test/org/apache/hadoop/hive/ql/hooks/TestHiveProtoLoggingHook.java 
> 450a0b544d6 
> 
> 
> Diff: https://reviews.apache.org/r/71784/diff/1/
> 
> 
> Testing
> ---
> 
> unittest
> 
> 
> Thanks,
> 
> Attila Magyar
> 
>