[jira] [Commented] (HIVE-22275) OperationManager.queryIdOperation does not properly clean up multiple queryIds
[ https://issues.apache.org/jira/browse/HIVE-22275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16965032#comment-16965032 ]

Dinesh Chitlangia commented on HIVE-22275:
------------------------------------------

Thanks [~jdere]

> OperationManager.queryIdOperation does not properly clean up multiple queryIds
> ------------------------------------------------------------------------------
>
>                 Key: HIVE-22275
>                 URL: https://issues.apache.org/jira/browse/HIVE-22275
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>            Reporter: Jason Dere
>            Assignee: Jason Dere
>            Priority: Major
>             Fix For: 4.0.0
>
>         Attachments: HIVE-22275.1.patch, HIVE-22275.2.patch
>
>
> In the case that multiple statements are run by a single Session before being cleaned up, it appears that OperationManager.queryIdOperation is not cleaned up properly.
> See the log statements below - with the exception of the first "Removed queryId:" log line, the queryId listed during cleanup is the same, when each of these handles should have their own queryId. Looks like only the last queryId executed is being cleaned up.
> As a result, HS2 can run out of memory as OperationManager.queryIdOperation grows and never cleans these queryIds/Operations up.
> {noformat}
> 2019-09-13T08:37:36,785 INFO [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=dfed4c18-a284-4640-9f4a-1a20527105f9]
> 2019-09-13T08:37:38,432 INFO [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - Removed queryId: hive_20190913083736_c49cf3cc-cfe8-48a1-bd22-8b924dfb0396 corresponding to operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=dfed4c18-a284-4640-9f4a-1a20527105f9] with tag: null
> 2019-09-13T08:37:38,469 INFO [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=24d0030c-0e49-45fb-a918-2276f0941cfb]
> 2019-09-13T08:37:52,662 INFO [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=b983802c-1dec-4fa0-8680-d05ab555321b]
> 2019-09-13T08:37:56,239 INFO [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=75dbc531-2964-47b2-84d7-85b59f88999c]
> 2019-09-13T08:38:02,551 INFO [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=72c79076-9d67-4894-a526-c233fa5450b2]
> 2019-09-13T08:38:10,558 INFO [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=17b30a62-612d-4b70-9ba7-4287d2d9229b]
> 2019-09-13T08:38:16,930 INFO [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=ea97e99d-cc77-470b-b49a-b869c73a4615]
> 2019-09-13T08:38:20,440 INFO [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=a277b789-ebb8-4925-878f-6728d3e8c5fb]
> 2019-09-13T08:38:26,303 INFO [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=9a023ab8-aa80-45db-af88-94790cc83033]
> 2019-09-13T08:38:30,791 INFO [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=b697c801-7da0-4544-bcfa-442eb1d3bd77]
> 2019-09-13T08:39:10,187 INFO [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - Adding operation: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=bda93c8f-0822-4592-a61c-4701720a1a5c]
> 2019-09-13T08:39:15,471 INFO [8eaa1601-f045-4ad5-9c2e-1e5944b75f6a HiveServer2-Handler-Pool: Thread-202]: operation.OperationManager (:()) - Removed queryId:
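The leak described above is, at bottom, a map-bookkeeping problem: each OperationHandle must be able to find its own queryId at close time. Below is a minimal sketch of correct per-handle bookkeeping; the class and method names are hypothetical and this is not Hive's actual OperationManager code. If the manager instead kept a single shared "current queryId" field per session, only the most recently executed queryId would ever be removed, which matches the log pattern above.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical, simplified model: queryIdToHandle plays the role of
// queryIdOperation, and handleToQueryId records each handle's OWN queryId
// so that closing a handle removes the right map entry.
public class QueryIdRegistry {
    private final Map<String, String> queryIdToHandle = new HashMap<>();
    private final Map<String, String> handleToQueryId = new HashMap<>();

    // Register an operation for a queryId and return its handle.
    public String add(String queryId) {
        String handle = UUID.randomUUID().toString();
        queryIdToHandle.put(queryId, handle);
        handleToQueryId.put(handle, queryId);
        return handle;
    }

    // Close an operation: look up the queryId via the handle, then
    // remove the matching entry from the queryId map as well.
    public void close(String handle) {
        String queryId = handleToQueryId.remove(handle);
        if (queryId != null) {
            queryIdToHandle.remove(queryId);
        }
    }

    public int openQueryIds() {
        return queryIdToHandle.size();
    }
}
```

With this shape, running N statements on one session and closing all N handles leaves the queryId map empty, so the map cannot grow without bound.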
[jira] [Commented] (HIVE-22275) OperationManager.queryIdOperation does not properly clean up multiple queryIds
[ https://issues.apache.org/jira/browse/HIVE-22275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16964874#comment-16964874 ]

Dinesh Chitlangia commented on HIVE-22275:
------------------------------------------

[~jdere] Does this issue also impact LLAP, or only HiveServer2?

> OperationManager.queryIdOperation does not properly clean up multiple queryIds
> ------------------------------------------------------------------------------
>
>                 Key: HIVE-22275
>                 URL: https://issues.apache.org/jira/browse/HIVE-22275
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>            Reporter: Jason Dere
>            Assignee: Jason Dere
>            Priority: Major
>             Fix For: 4.0.0
>
>         Attachments: HIVE-22275.1.patch, HIVE-22275.2.patch
>
>
> In the case that multiple statements are run by a single Session before being cleaned up, it appears that OperationManager.queryIdOperation is not cleaned up properly.
> See the log statements below - with the exception of the first "Removed queryId:" log line, the queryId listed during cleanup is the same, when each of these handles should have their own queryId. Looks like only the last queryId executed is being cleaned up.
> As a result, HS2 can run out of memory as OperationManager.queryIdOperation grows and never cleans these queryIds/Operations up.
[jira] [Commented] (HIVE-22336) The updates should be pushed to the Metastore backend DB before creating the notification event
[ https://issues.apache.org/jira/browse/HIVE-22336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16954935#comment-16954935 ]

Dinesh Chitlangia commented on HIVE-22336:
------------------------------------------

[~kuczoram] Thanks for filing this patch. Latest patch looks clean.

> The updates should be pushed to the Metastore backend DB before creating the notification event
> -----------------------------------------------------------------------------------------------
>
>                 Key: HIVE-22336
>                 URL: https://issues.apache.org/jira/browse/HIVE-22336
>             Project: Hive
>          Issue Type: Bug
>          Components: Metastore
>    Affects Versions: 4.0.0
>            Reporter: Marta Kuczora
>            Assignee: Marta Kuczora
>            Priority: Major
>         Attachments: HIVE-22336.1.patch, HIVE-22336.2.patch, HIVE-22336.3.patch
>
>
> There was an issue on HDP-3.1 where a table couldn't be deleted, because some related objects (like the storage descriptor) were missing from the metastore. There was a previous delete attempt on that table which went wrong, but no rollback happened, which is why the SD was missing. In that previous delete, the notification creation swallowed the error which came from the backend DB, which is why no rollback happened. Here are the steps which happened in the first delete attempt:
>
> # Open a transaction (transaction_1) - this step was successful
> # Delete all the objects which are related to the table - this step was successful too, so the SD and other objects were deleted
> # Delete the table - this step failed in the backend DB, but according to the log the delete happens in a batch statement, so it won't necessarily be executed right at this moment, so we won't see an error here
> # Create a notification about the table delete:
> ## Open another transaction for the notification creation (transaction_2) - call the ObjectStore.openTransaction method, which increases a counter for open transactions and then checks if there is already an active transaction. If there is, it just returns true and doesn't really create a new transaction.
> ## Lock the notification id in the metastore backend db for update - here is where the exception from the backend DB (let's call it "MySQL Exception") manifests
> ## If an exception occurs during acquiring the lock, retry - The "MySQL Exception" was caught, and since there is no check on the exception, the retry mechanism thinks it happened because the lock for the notification id couldn't be acquired, so it retries and "forgets" about the "MySQL Exception".
> ## If the lock was acquired successfully, create the notification - The second time, the lock was acquired successfully, so the notification creation was successful.
> ## Commit transaction_2 - This just decreases the transaction counter, but doesn't actually commit anything.
> # Commit transaction_1 - This commits the transaction, but since the error already got manifested and kind of "handled", we won't see any error here, just that the commit was successful, so no rollback happens and the table object is left in an invalid state.
> # If the commit was not successful then rollback
> In the customer setup, this issue could be fixed by adding a flush call before creating the notification event, so all the updates would be pushed to the backend db and the error would manifest at this point. With this, the error would go back to the HiveMetastore class, which would do the rollback, and the delete table operation would fail as it should, since the table couldn't be deleted. But then the HiveMetastore retry mechanism could try the table deletion again.

--
This message was sent by Atlassian Jira (v8.3.4#803005)
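The nested-transaction behavior in step 4 is the crux: the inner "commit" is only counter bookkeeping, so it can neither flush pending writes nor surface backend errors. A simplified model of that bookkeeping (hypothetical names, not ObjectStore's actual code) looks like this:

```java
// Simplified sketch of ObjectStore-style nested transaction accounting:
// only the outermost openTransaction() begins a real DB transaction, and
// only the outermost commitTransaction() actually commits. An inner
// "commit" (like transaction_2's) just decrements the depth counter, so
// batched statements and their errors stay pending until the outer commit.
public class NestedTxn {
    private int depth = 0;
    private int dbBegins = 0;   // real BEGINs issued to the backend
    private int dbCommits = 0;  // real COMMITs issued to the backend

    public void openTransaction() {
        if (depth == 0) {
            dbBegins++;         // only the outermost open starts a DB txn
        }
        depth++;
    }

    public void commitTransaction() {
        depth--;
        if (depth == 0) {
            dbCommits++;        // only the outermost commit hits the DB
        }
    }

    public int realBegins()  { return dbBegins; }
    public int realCommits() { return dbCommits; }
}
```

In the scenario above, transaction_2's commit happens at depth 2, so nothing reaches the database; this is why a flush before the notification event is needed to make the backend error manifest while rollback is still possible.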
[jira] [Updated] (HIVE-22255) Hive don't trigger Major Compaction automatically if table contains only base files
[ https://issues.apache.org/jira/browse/HIVE-22255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dinesh Chitlangia updated HIVE-22255:
-------------------------------------
    Summary: Hive don't trigger Major Compaction automatically if table contains only base files  (was: Hive don't trigger Major Compaction automatically if table contains all base files)

> Hive don't trigger Major Compaction automatically if table contains only base files
> -----------------------------------------------------------------------------------
>
>                 Key: HIVE-22255
>                 URL: https://issues.apache.org/jira/browse/HIVE-22255
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive, Transactions
>    Affects Versions: 3.1.2
>         Environment: Hive-3.1.1
>            Reporter: Rajkumar Singh
>            Assignee: Rajkumar Singh
>            Priority: Major
>
> A user may run into this issue if the table consists of all base files but no deltas; then the following condition will yield false and automatic major compaction will be skipped.
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Initiator.java#L313]
>
> Steps to Reproduce:
> # Create an ACID table
> {code:java}
> create table myacid(id int);
> {code}
> # Run multiple inserts
> {code:java}
> insert overwrite table myacid values(1);
> insert overwrite table myacid values(2),(3),(4);
> {code}
> # DFS ls output
> {code:java}
> dfs -ls -R /warehouse/tablespace/managed/hive/myacid;
> +------------------------------------------------------------------------------------------------------------------+
> |                                                    DFS Output                                                    |
> +------------------------------------------------------------------------------------------------------------------+
> | drwxrwx---+ - hive hadoop   0 2019-09-27 16:42 /warehouse/tablespace/managed/hive/myacid/base_001                |
> | -rw-rw+     3 hive hadoop   1 2019-09-27 16:42 /warehouse/tablespace/managed/hive/myacid/base_001/_orc_acid_version |
> | -rw-rw+     3 hive hadoop 610 2019-09-27 16:42 /warehouse/tablespace/managed/hive/myacid/base_001/bucket_0      |
> | drwxrwx---+ - hive hadoop   0 2019-09-27 16:43 /warehouse/tablespace/managed/hive/myacid/base_002                |
> | -rw-rw+     3 hive hadoop   1 2019-09-27 16:43 /warehouse/tablespace/managed/hive/myacid/base_002/_orc_acid_version |
> | -rw-rw+     3 hive hadoop 633 2019-09-27 16:43 /warehouse/tablespace/managed/hive/myacid/base_002/bucket_0      |
> +------------------------------------------------------------------------------------------------------------------+
> {code}
>
> You will see that major compaction will not be triggered until you run alter table compact MAJOR.

--
This message was sent by Atlassian Jira (v8.3.4#803005)
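The skip can be illustrated with a simplified stand-in for the condition the report points at; this is a sketch of the general shape (hypothetical names), not the actual code at Initiator.java#L313:

```java
// Simplified sketch: if major compaction is only initiated when delta
// files exist, a table rewritten via repeated INSERT OVERWRITE (all
// base_N directories, zero deltas) is never picked up automatically.
public class CompactionCheck {
    public static boolean initiateMajor(int numDeltas, long deltaSize,
                                        long baseSize, float threshold) {
        if (numDeltas == 0) {
            return false; // only base files present: skipped, as reported
        }
        // Otherwise compare accumulated delta size against the base.
        return baseSize == 0 || (float) deltaSize / baseSize > threshold;
    }
}
```

With the directory listing above (two base_N directories, no delta_N), `numDeltas` is 0 and the check short-circuits to false on every Initiator pass, so only a manual ALTER TABLE ... COMPACT 'major' triggers compaction.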
[jira] [Commented] (HIVE-22255) Hive don't trigger Major Compaction automatically if table contains all base files
[ https://issues.apache.org/jira/browse/HIVE-22255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16939771#comment-16939771 ]

Dinesh Chitlangia commented on HIVE-22255:
------------------------------------------

[~Rajkumar Singh] Thanks for filing this issue. Isn't {{insert overwrite}} supposed to wipe out the existing base file and create a new one?

> Hive don't trigger Major Compaction automatically if table contains all base files
> ----------------------------------------------------------------------------------
>
>                 Key: HIVE-22255
>                 URL: https://issues.apache.org/jira/browse/HIVE-22255
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive, Transactions
>    Affects Versions: 3.1.2
>         Environment: Hive-3.1.1
>            Reporter: Rajkumar Singh
>            Assignee: Rajkumar Singh
>            Priority: Major
>
> A user may run into this issue if the table consists of all base files but no deltas; then the following condition will yield false and automatic major compaction will be skipped.
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Initiator.java#L313]

--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-22118) Log the table name while skipping the compaction because it's sorted table/partitions
[ https://issues.apache.org/jira/browse/HIVE-22118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913473#comment-16913473 ]

Dinesh Chitlangia commented on HIVE-22118:
------------------------------------------

Very useful improvement. Thanks for filing this and for the fix [~Rajkumar Singh]

> Log the table name while skipping the compaction because it's sorted table/partitions
> -------------------------------------------------------------------------------------
>
>                 Key: HIVE-22118
>                 URL: https://issues.apache.org/jira/browse/HIVE-22118
>             Project: Hive
>          Issue Type: Improvement
>          Components: Transactions
>    Affects Versions: 3.1.1
>            Reporter: Rajkumar Singh
>            Assignee: Rajkumar Singh
>            Priority: Minor
>         Attachments: HIVE-22118.patch
>
>
> From a debugging perspective it's good if we log the full table name while skipping the table for compaction; otherwise it's tedious to know why the compaction is not happening for the target table.

--
This message was sent by Atlassian Jira (v8.3.2#803003)
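The shape of the improvement is simple: the skip message should carry the fully qualified table name so operators can tell which table was skipped and why. A hypothetical sketch of such a message (not the actual patch's wording):

```java
// Hypothetical helper illustrating the improved log line: include the
// db.table name in the "skipping compaction" message instead of a
// generic message that leaves the operator guessing.
public class SkipLog {
    public static String skipMessage(String db, String table) {
        return String.format(
            "Skipping compaction: %s.%s is a sorted table", db, table);
    }
}
```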
[jira] [Commented] (HIVE-21917) COMPLETED_TXN_COMPONENTS table is never cleaned up unless Compactor runs
[ https://issues.apache.org/jira/browse/HIVE-21917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16892195#comment-16892195 ]

Dinesh Chitlangia commented on HIVE-21917:
------------------------------------------

[~ccondit] Thanks for filing this and writing an elaborate description. We hit this and your issue description certainly helped us unblock ourselves!

> COMPLETED_TXN_COMPONENTS table is never cleaned up unless Compactor runs
> ------------------------------------------------------------------------
>
>                 Key: HIVE-21917
>                 URL: https://issues.apache.org/jira/browse/HIVE-21917
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>    Affects Versions: 3.1.0, 3.1.1
>            Reporter: Craig Condit
>            Priority: Major
>
> The Initiator thread in the metastore repeatedly loops over entries in the COMPLETED_TXN_COMPONENTS table to determine which partitions / tables might need to be compacted. However, entries are never removed from this table except by a completed Compactor run.
> In a cluster where most tables / partitions are write-once read-many, this results in stale entries in this table never being cleaned up. In a small test cluster, we have observed approximately 45k entries in this table (virtually equal to the number of partitions in the cluster) while < 100 of these tables have delta files at all. Since most of the tables will never get enough writes to trigger a compaction (and in fact have only ever been written to once), the initiator thread keeps trying to evaluate them on every loop.
> On this test cluster, it takes approximately 10 minutes to loop through all the entries and results in severe performance degradation on metastore operations. With the default run timing of 5 minutes, the initiator basically never stops running.
> On a production cluster with 2M partitions, this would be a non-starter.
> The initiator thread should proactively remove entries from COMPLETED_TXN_COMPONENTS when it determines that a compaction is not needed, so that they are not evaluated again on the next loop.

--
This message was sent by Atlassian JIRA (v7.6.14#76016)
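The proposed fix in the last paragraph can be sketched as a sweep that drops entries as soon as they are judged not to need compaction; this is a simplified in-memory model (hypothetical names), not Hive's actual Initiator or its SQL against the backing store:

```java
import java.util.Iterator;
import java.util.List;
import java.util.Set;

// Sketch of the proposed behavior: while scanning candidates, the
// initiator removes entries it decides never need compaction, so a
// write-once partition is evaluated once instead of on every loop.
public class InitiatorSweep {
    public static int sweep(List<String> completedTxnComponents,
                            Set<String> needsCompaction) {
        int removed = 0;
        Iterator<String> it = completedTxnComponents.iterator();
        while (it.hasNext()) {
            String partition = it.next();
            if (!needsCompaction.contains(partition)) {
                it.remove();   // proactively drop the stale entry
                removed++;
            }
        }
        return removed;
    }
}
```

On the 45k-entry test cluster described above, where fewer than 100 tables have any delta files, a sweep like this would shrink the working set by orders of magnitude on the first pass.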
[jira] [Commented] (HIVE-19477) Hiveserver2 in http mode not emitting metric default.General.open_connections
[ https://issues.apache.org/jira/browse/HIVE-19477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16474797#comment-16474797 ]

Dinesh Chitlangia commented on HIVE-19477:
------------------------------------------

[~jcamachorodriguez] thank you for fixing this!

> Hiveserver2 in http mode not emitting metric default.General.open_connections
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-19477
>                 URL: https://issues.apache.org/jira/browse/HIVE-19477
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>            Reporter: Dinesh Chitlangia
>            Assignee: Jesus Camacho Rodriguez
>            Priority: Minor
>             Fix For: 3.0.0
>
>         Attachments: HIVE-19477.01.patch, HIVE-19477.patch
>
>
> Instances in binary mode are emitting the metric _default.General.open_connections_ but the instances operating in http mode are not emitting this metric.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
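The bug class described above is transport-dependent metric accounting: the counter only moves in the binary transport's connection path. A hypothetical sketch of transport-independent accounting (not the actual HiveServer2 fix) keeps the increment/decrement in one shared open/close path:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: a single counter updated by a shared connection
// lifecycle path, so both the binary and HTTP transports contribute to
// the same open_connections gauge instead of only one of them.
public class ConnectionMetrics {
    private final AtomicInteger openConnections = new AtomicInteger();

    public void onOpen()  { openConnections.incrementAndGet(); }
    public void onClose() { openConnections.decrementAndGet(); }

    // Value exported as default.General.open_connections.
    public int open()     { return openConnections.get(); }
}
```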
[jira] [Issue Comment Deleted] (HIVE-19477) Hiveserver2 in http mode not emitting metric default.General.open_connections
[ https://issues.apache.org/jira/browse/HIVE-19477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dinesh Chitlangia updated HIVE-19477:
-------------------------------------
    Comment: was deleted

(was: [~jcamachorodriguez] - Thanks for opening this jira.)

> Hiveserver2 in http mode not emitting metric default.General.open_connections
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-19477
>                 URL: https://issues.apache.org/jira/browse/HIVE-19477
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>            Reporter: Dinesh Chitlangia
>            Assignee: Jesus Camacho Rodriguez
>            Priority: Minor
>         Attachments: HIVE-19477.01.patch, HIVE-19477.patch
>
>
> Instances in binary mode are emitting the metric _default.General.open_connections_ but the instances operating in http mode are not emitting this metric.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19477) Hiveserver2 in http mode not emitting metric default.General.open_connections
[ https://issues.apache.org/jira/browse/HIVE-19477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16469126#comment-16469126 ]

Dinesh Chitlangia commented on HIVE-19477:
------------------------------------------

[~jcamachorodriguez] - Thanks for opening this jira.

> Hiveserver2 in http mode not emitting metric default.General.open_connections
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-19477
>                 URL: https://issues.apache.org/jira/browse/HIVE-19477
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>            Reporter: Dinesh Chitlangia
>            Assignee: Jesus Camacho Rodriguez
>            Priority: Minor
>         Attachments: HIVE-19477.patch
>
>
> Instances in binary mode are emitting the metric _default.General.open_connections_ but the instances operating in http mode are not emitting this metric.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)