[jira] [Commented] (HIVE-16886) HMS log notifications may have duplicated event IDs if multiple HMS are running concurrently
[ https://issues.apache.org/jira/browse/HIVE-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106834#comment-16106834 ] anishek commented on HIVE-16886: [~spena] I am not exactly sure why we don't use the NotificationLog Id; I think there is some limitation when we use an Oracle DB as the metastore RDBMS. I think [~thejas] might be able to provide a more definitive answer, but: * the test above would be more comprehensive if you can provide an example of a unique NL_ID with a duplicate EVENT_ID. * looking at the mapping, NL_ID is generated using the *native* strategy, which falls back to sequence/identity/increment depending on the DB, and the default values for these strategies differ by underlying DB. If *increment* gets used anywhere, then in HMS HA mode each instance will hold a cache of 10 sequence numbers that DataNucleus maintains, and this will definitely lead to duplicates / failures (since we have a PK constraint); we might want to add additional retry logic in that case. * if we can move to *sequence*-based identity generation for NL_ID, that would be great, though we might have to check whether all supported RDBMSs work with it. > HMS log notifications may have duplicated event IDs if multiple HMS are > running concurrently > > > Key: HIVE-16886 > URL: https://issues.apache.org/jira/browse/HIVE-16886 > Project: Hive > Issue Type: Bug > Components: Hive, Metastore > Reporter: Sergio Peña > > When running multiple Hive Metastore servers with DB notifications enabled, I > could see that notifications can be persisted with a duplicated event ID. > This does not happen when running multiple threads in a single HMS node, due > to the locking acquired on the DbNotificationsLog class, but multiple HMS > instances can conflict. > The issue is in the ObjectStore#addNotificationEvent() method. The event ID > fetched from the datastore is used for the new notification, incremented in > the server itself, then persisted or updated back to the datastore. If 2 > servers read the same ID, then these 2 servers write a new notification with > the same ID. > The event ID is neither unique nor a primary key.
> Here's a test case using the TestObjectStore class that confirms this issue:
> {noformat}
> @Test
> public void testConcurrentAddNotifications() throws ExecutionException, InterruptedException {
>   final int NUM_THREADS = 2;
>   CountDownLatch countIn = new CountDownLatch(NUM_THREADS);
>   CountDownLatch countOut = new CountDownLatch(1);
>   HiveConf conf = new HiveConf();
>   conf.setVar(HiveConf.ConfVars.METASTORE_EXPRESSION_PROXY_CLASS,
>       MockPartitionExpressionProxy.class.getName());
>   ExecutorService executorService = Executors.newFixedThreadPool(NUM_THREADS);
>   FutureTask<Void>[] tasks = new FutureTask[NUM_THREADS];
>   for (int i = 0; i < NUM_THREADS; ++i) {
>     final int n = i;
>     tasks[i] = new FutureTask<Void>(new Callable<Void>() {
>       @Override
>       public Void call() throws Exception {
>         ObjectStore store = new ObjectStore();
>         store.setConf(conf);
>         NotificationEvent dbEvent = new NotificationEvent(0, 0,
>             EventMessage.EventType.CREATE_DATABASE.toString(), "CREATE DATABASE DB" + n);
>         System.out.println("ADDING NOTIFICATION");
>         countIn.countDown();
>         countOut.await();
>         store.addNotificationEvent(dbEvent);
>         System.out.println("FINISH NOTIFICATION");
>         return null;
>       }
>     });
>     executorService.execute(tasks[i]);
>   }
>   countIn.await();
>   countOut.countDown();
>   for (int i = 0; i < NUM_THREADS; ++i) {
>     tasks[i].get();
>   }
>   NotificationEventResponse eventResponse =
>       objectStore.getNextNotification(new NotificationEventRequest());
>   Assert.assertEquals(2, eventResponse.getEventsSize());
>   Assert.assertEquals(1, eventResponse.getEvents().get(0).getEventId());
>   // This fails because the next notification has an event ID = 1
>   Assert.assertEquals(2, eventResponse.getEvents().get(1).getEventId());
> }
> {noformat}
> The last assertion fails: the second event comes back with event ID 1 instead of the expected 2. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
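To make the retry suggestion above concrete, here is a minimal sketch, assuming the duplicate NL_ID surfaces as a javax.jdo.JDODataStoreException when the PK constraint fires; the wrapper class, the MAX_RETRIES bound, and the exception choice are illustrative assumptions, not the actual ObjectStore change.
{code:java}
// Minimal retry sketch for the duplicate-NL_ID race described above.
// Assumptions: the PK violation surfaces as JDODataStoreException, and
// addNotificationEvent() re-reads the next event id on every call.
import javax.jdo.JDODataStoreException;
import org.apache.hadoop.hive.metastore.ObjectStore;
import org.apache.hadoop.hive.metastore.api.NotificationEvent;

public final class NotificationRetrySketch {
  private static final int MAX_RETRIES = 10; // illustrative bound

  public static void addWithRetry(ObjectStore store, NotificationEvent event) {
    for (int attempt = 1; ; attempt++) {
      try {
        store.addNotificationEvent(event); // may collide on NL_ID/EVENT_ID
        return;
      } catch (JDODataStoreException e) {
        // another HMS instance won the race; retry and pick up a fresh id
        if (attempt >= MAX_RETRIES) {
          throw e;
        }
      }
    }
  }
}
{code}
A sequence-based NL_ID, as suggested in the comment, would make the id allocation itself race-free and leave only the EVENT_ID ordering to coordinate.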
[jira] [Updated] (HIVE-17174) LLAP: ShuffleHandler: optimize fadvise calls for broadcast edge
[ https://issues.apache.org/jira/browse/HIVE-17174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajesh Balamohan updated HIVE-17174: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Thanks [~gopalv]. Committed to master. > LLAP: ShuffleHandler: optimize fadvise calls for broadcast edge > --- > > Key: HIVE-17174 > URL: https://issues.apache.org/jira/browse/HIVE-17174 > Project: Hive > Issue Type: Improvement > Reporter: Rajesh Balamohan > Assignee: Rajesh Balamohan > Priority: Minor > Fix For: 3.0.0 > > Attachments: HIVE-17174.1.patch, HIVE-17174.2.patch > > > Currently, once the data is transferred, an fadvise call is invoked to throw > away the pages. This may not be very helpful for broadcast edges, as they tend > to transfer the same data to multiple downstream tasks. > e.g. Q50 at 1 TB scale:
> {noformat}
> Edges:
>   Map 1 <- Map 5 (BROADCAST_EDGE)
>   Map 6 <- Reducer 2 (BROADCAST_EDGE), Reducer 3 (BROADCAST_EDGE), Reducer 4 (BROADCAST_EDGE)
>   Reducer 2 <- Map 1 (CUSTOM_SIMPLE_EDGE)
>   Reducer 3 <- Map 1 (CUSTOM_SIMPLE_EDGE)
>   Reducer 4 <- Map 1 (CUSTOM_SIMPLE_EDGE)
>   Reducer 7 <- Map 1 (CUSTOM_SIMPLE_EDGE), Map 10 (BROADCAST_EDGE), Map 11 (BROADCAST_EDGE), Map 6 (CUSTOM_SIMPLE_EDGE)
>   Reducer 8 <- Reducer 7 (SIMPLE_EDGE)
>   Reducer 9 <- Reducer 8 (SIMPLE_EDGE)
> Status: Running (Executing on YARN cluster with App id application_1490656001509_6084)
> ----------------------------------------------------------------------------------
> VERTICES    MODE    STATUS     TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
> ----------------------------------------------------------------------------------
> Map 5 ....  llap    SUCCEEDED      1          1        0        0       0       0
> Map 1 ....  llap    SUCCEEDED     11         11        0        0       0       0
> Reducer 4   llap    SUCCEEDED      1          1        0        0       0       0
> Reducer 2   llap    SUCCEEDED      1          1        0        0       0       0
> Reducer 3   llap    SUCCEEDED      1          1        0        0       0       0
> Map 6 ....  llap    SUCCEEDED    139        139        0        0       0       0
> Map 10 ...  llap    SUCCEEDED      1          1        0        0       0       0
> Map 11 ...  llap    SUCCEEDED      1          1        0        0       0       0
> Reducer 7   llap    SUCCEEDED    834        834        0        0       0       0
> Reducer 8   llap    SUCCEEDED     24         24        0        0       0       0
> Reducer 9   llap    SUCCEEDED      1          1        0        0       0       0
> ----------------------------------------------------------------------------------
>
> e.g. count of evictions per file:
>   139  /grid/3/hadoop/yarn/local/usercache/rbalamohan/appcache/application_1490656001509_6084/1/output/attempt_1490656001509_6084_1_05_00_0_18387/file.out
>   834  /grid/3/hadoop/yarn/local/usercache/rbalamohan/appcache/application_1490656001509_6084/1/output/attempt_1490656001509_6084_1_07_00_0_18420_1/file.out
>   834  /grid/3/hadoop/yarn/local/usercache/rbalamohan/appcache/application_1490656001509_6084/1/output/attempt_1490656001509_6084_1_07_00_0_18420_2/file.out
> {noformat}
> It would be good to fadvise only for cases when "partition != 0". This would help > retain the pages for broadcast. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
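As a rough illustration of the proposal, here is a hedged sketch of the guard; the NativeIO.POSIX call is Hadoop's real fadvise wrapper, while the surrounding method, its partition parameter, and the "broadcast lives in partition 0" convention are assumptions based on the description above.
{code:java}
// Sketch: evict transferred pages from the OS cache except for broadcast
// output, which many downstream tasks re-read. Assumption: broadcast data
// lives in partition 0; everything else gets POSIX_FADV_DONTNEED.
import java.io.FileDescriptor;
import org.apache.hadoop.io.nativeio.NativeIO;
import org.apache.hadoop.io.nativeio.NativeIOException;

public final class FadviseSketch {
  static void maybeEvictPages(String identifier, FileDescriptor fd,
      long offset, long len, int partition) throws NativeIOException {
    if (partition != 0) {
      // non-broadcast output: read once downstream, safe to drop the pages
      NativeIO.POSIX.posixFadviseIfPossible(identifier, fd, offset, len,
          NativeIO.POSIX.POSIX_FADV_DONTNEED);
    }
    // partition == 0 (broadcast, by assumption): keep pages cached so the
    // 139/834 repeat reads shown above hit memory instead of disk
  }
}
{code}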
[jira] [Commented] (HIVE-17195) Long chain of tasks created by REPL LOAD shouldn't cause stack corruption.
[ https://issues.apache.org/jira/browse/HIVE-17195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106795#comment-16106795 ] Sankar Hariappan commented on HIVE-17195: - explainuser_3.q is a flaky test case; there are no test failures due to this patch. > Long chain of tasks created by REPL LOAD shouldn't cause stack corruption. > -- > > Key: HIVE-17195 > URL: https://issues.apache.org/jira/browse/HIVE-17195 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl > Affects Versions: 2.1.0 > Reporter: Sankar Hariappan > Assignee: Sankar Hariappan > Labels: DAG, DR, Executor, replication > Fix For: 3.0.0 > > Attachments: HIVE-17195.01.patch, HIVE-17195.02.patch > > > Currently, a long chain of REPL LOAD tasks leads to deep recursive calls when trying > to traverse the DAG. > For example, the getMRTasks, getTezTasks, getSparkTasks and iterateTasks methods > run recursively to traverse the DAG. > We need to modify this traversal logic to reduce stack usage. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
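For context, a minimal sketch of the usual fix: drive the DAG walk with an explicit work list instead of recursion, so stack depth stays constant no matter how long the task chain is. Task and getChildTasks() mirror Hive's ql.exec API; the visitor callback and the de-duplication detail are illustrative assumptions, not the committed patch.
{code:java}
// Iterative DAG traversal sketch: bounded heap use instead of deep call stack.
import java.io.Serializable;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Consumer;
import org.apache.hadoop.hive.ql.exec.Task;

public final class IterativeTraversalSketch {
  static void iterateTasks(Task<? extends Serializable> root,
      Consumer<Task<? extends Serializable>> visitor) {
    Deque<Task<? extends Serializable>> pending = new ArrayDeque<>();
    Set<Task<? extends Serializable>> seen = new HashSet<>(); // children can be shared in a DAG
    pending.push(root);
    while (!pending.isEmpty()) {
      Task<? extends Serializable> task = pending.pop();
      if (!seen.add(task)) {
        continue; // already visited via another parent
      }
      visitor.accept(task);
      List<Task<? extends Serializable>> children = task.getChildTasks();
      if (children != null) {
        for (Task<? extends Serializable> child : children) {
          pending.push(child);
        }
      }
    }
  }
}
{code}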
[jira] [Commented] (HIVE-17209) ObjectCacheFactory should return null when tez shared object registry is not setup
[ https://issues.apache.org/jira/browse/HIVE-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106764#comment-16106764 ] Hive QA commented on HIVE-17209: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12879552/HIVE-17209.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 11018 tests executed *Failed tests:*
{noformat}
TestPerfCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=235)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[create_merge_compressed] (batchId=240)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite] (batchId=240)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_vectorized_dynamic_partition_pruning] (batchId=168)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3] (batchId=99)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionRegistrationWithCustomSchema (batchId=179)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionSpecRegistrationWithCustomSchema (batchId=179)
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation (batchId=179)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testConnection (batchId=241)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testIsValid (batchId=241)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testIsValidNeg (batchId=241)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testNegativeProxyAuth (batchId=241)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testNegativeTokenAuth (batchId=241)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testProxyAuth (batchId=241)
org.apache.hive.minikdc.TestJdbcWithDBTokenStore.testTokenAuth (batchId=241)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6193/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6193/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6193/ Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}
This message is automatically generated. ATTACHMENT ID: 12879552 - PreCommit-HIVE-Build > ObjectCacheFactory should return null when tez shared object registry is not > setup > -- > > Key: HIVE-17209 > URL: https://issues.apache.org/jira/browse/HIVE-17209 > Project: Hive > Issue Type: Bug > Reporter: Rajesh Balamohan > Assignee: Rajesh Balamohan > Priority: Minor > Attachments: HIVE-17209.1.patch > > > HIVE-15269 introduced the dynamic min/max bloom filter > ("hive.tez.dynamic.semijoin.reduction=true"). This needs to access the > ObjectCache, and in Tez an ObjectCache can only be created by {{TezProcessor}}. > In the following case {{AM --> splits --> > OrcInputFormat.pickStripes::evaluatePredicateMinMax --> > DynamicValue.getLiteral --> objectCache access}}, the AM ends up throwing lots of > NPEs since the AM has not created an ObjectCache. > The ORC reader catches these exceptions, skips PPD and proceeds further. For > example, in Q95 it ends up throwing ~30,000 NPEs before it finishes computing > split information. > ObjectCacheFactory should return null when the Tez shared object registry is not > set up. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
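The contract being proposed can be sketched as follows. This is a self-contained illustration: the thread-local registry field and the isRegistryConfigured() name are assumptions standing in for however TezProcessor actually wires up the shared registry, while HiveConf.getVar and HIVE_EXECUTION_ENGINE are real Hive API.
{code:java}
// Sketch of the proposed contract: return null instead of letting AM-side
// callers (which never ran TezProcessor) trip over an NPE later.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.conf.HiveConf;

public final class ObjectCacheFactorySketch {
  // hypothetical stand-in for the Tez-side registry, set only in task JVMs
  private static final ThreadLocal<Object> REGISTRY = new ThreadLocal<>();

  static boolean isRegistryConfigured() {
    return REGISTRY.get() != null;
  }

  /** Returns null in processes (like the AM) where no registry was set up. */
  public static Object getCache(Configuration conf, String queryId) {
    boolean isTez = "tez".equals(
        HiveConf.getVar(conf, HiveConf.ConfVars.HIVE_EXECUTION_ENGINE));
    if (isTez && !isRegistryConfigured()) {
      return null; // caller skips the cache (and PPD falls back gracefully)
    }
    return REGISTRY.get(); // real code returns an engine-specific ObjectCache
  }
}
{code}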
[jira] [Updated] (HIVE-17209) ObjectCacheFactory should return null when tez shared object registry is not setup
[ https://issues.apache.org/jira/browse/HIVE-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajesh Balamohan updated HIVE-17209: Assignee: Rajesh Balamohan Status: Patch Available (was: Open) > ObjectCacheFactory should return null when tez shared object registry is not > setup > -- > > Key: HIVE-17209 > URL: https://issues.apache.org/jira/browse/HIVE-17209 > Project: Hive > Issue Type: Bug > Reporter: Rajesh Balamohan > Assignee: Rajesh Balamohan > Priority: Minor > Attachments: HIVE-17209.1.patch > > > HIVE-15269 introduced the dynamic min/max bloom filter > ("hive.tez.dynamic.semijoin.reduction=true"). This needs to access the > ObjectCache, and in Tez an ObjectCache can only be created by {{TezProcessor}}. > In the following case {{AM --> splits --> > OrcInputFormat.pickStripes::evaluatePredicateMinMax --> > DynamicValue.getLiteral --> objectCache access}}, the AM ends up throwing lots of > NPEs since the AM has not created an ObjectCache. > The ORC reader catches these exceptions, skips PPD and proceeds further. For > example, in Q95 it ends up throwing ~30,000 NPEs before it finishes computing > split information. > ObjectCacheFactory should return null when the Tez shared object registry is not > set up. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17209) ObjectCacheFactory should return null when tez shared object registry is not setup
[ https://issues.apache.org/jira/browse/HIVE-17209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajesh Balamohan updated HIVE-17209: Attachment: HIVE-17209.1.patch \cc [~gopalv] > ObjectCacheFactory should return null when tez shared object registry is not > setup > -- > > Key: HIVE-17209 > URL: https://issues.apache.org/jira/browse/HIVE-17209 > Project: Hive > Issue Type: Bug > Reporter: Rajesh Balamohan > Priority: Minor > Attachments: HIVE-17209.1.patch > > > HIVE-15269 introduced the dynamic min/max bloom filter > ("hive.tez.dynamic.semijoin.reduction=true"). This needs to access the > ObjectCache, and in Tez an ObjectCache can only be created by {{TezProcessor}}. > In the following case {{AM --> splits --> > OrcInputFormat.pickStripes::evaluatePredicateMinMax --> > DynamicValue.getLiteral --> objectCache access}}, the AM ends up throwing lots of > NPEs since the AM has not created an ObjectCache. > The ORC reader catches these exceptions, skips PPD and proceeds further. For > example, in Q95 it ends up throwing ~30,000 NPEs before it finishes computing > split information. > ObjectCacheFactory should return null when the Tez shared object registry is not > set up. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17177) move TestSuite.java to the right position
[ https://issues.apache.org/jira/browse/HIVE-17177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106722#comment-16106722 ] Saijin Huang commented on HIVE-17177: - [~lirui] can you please take a quick review and commit? > move TestSuite.java to the right position > - > > Key: HIVE-17177 > URL: https://issues.apache.org/jira/browse/HIVE-17177 > Project: Hive > Issue Type: Bug > Affects Versions: 3.0.0 > Reporter: Saijin Huang > Assignee: Saijin Huang > Priority: Minor > Fix For: 3.0.0 > > Attachments: HIVE-17177.1.patch > > > TestSuite.java does not currently belong to the package > org.apache.hive.storage.jdbc. Move it to the right position. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-16998) Add config to enable HoS DPP only for map-joins
[ https://issues.apache.org/jira/browse/HIVE-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-16998: Resolution: Fixed Status: Resolved (was: Patch Available) > Add config to enable HoS DPP only for map-joins > --- > > Key: HIVE-16998 > URL: https://issues.apache.org/jira/browse/HIVE-16998 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer, Spark > Reporter: Sahil Takiar > Assignee: Janaki Lahorani > Attachments: HIVE16998.1.patch, HIVE16998.2.patch, HIVE16998.3.patch, > HIVE16998.4.patch, HIVE16998.5.patch > > > HoS DPP will split a given operator tree in two under the following > conditions: it has detected that the query can benefit from DPP, and the > filter is not a map-join (see SplitOpTreeForDPP). > This can hurt performance if the non-partitioned side of the join > involves a complex operator tree - e.g. the query {{select count(*) from > srcpart where srcpart.ds in (select max(srcpart.ds) from srcpart union all > select min(srcpart.ds) from srcpart)}} will require running the subquery > twice, once in each Spark job. > Queries with map-joins don't get split into two operator trees and thus don't > suffer from this drawback. Thus, it would be nice to have a config key that > enables DPP on HoS only for map-joins. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-16998) Add config to enable HoS DPP only for map-joins
[ https://issues.apache.org/jira/browse/HIVE-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106720#comment-16106720 ] Sahil Takiar commented on HIVE-16998: - Thanks [~janulatha], committed to master. > Add config to enable HoS DPP only for map-joins > --- > > Key: HIVE-16998 > URL: https://issues.apache.org/jira/browse/HIVE-16998 > Project: Hive > Issue Type: Sub-task > Components: Logical Optimizer, Spark > Reporter: Sahil Takiar > Assignee: Janaki Lahorani > Attachments: HIVE16998.1.patch, HIVE16998.2.patch, HIVE16998.3.patch, > HIVE16998.4.patch, HIVE16998.5.patch > > > HoS DPP will split a given operator tree in two under the following > conditions: it has detected that the query can benefit from DPP, and the > filter is not a map-join (see SplitOpTreeForDPP). > This can hurt performance if the non-partitioned side of the join > involves a complex operator tree - e.g. the query {{select count(*) from > srcpart where srcpart.ds in (select max(srcpart.ds) from srcpart union all > select min(srcpart.ds) from srcpart)}} will require running the subquery > twice, once in each Spark job. > Queries with map-joins don't get split into two operator trees and thus don't > suffer from this drawback. Thus, it would be nice to have a config key that > enables DPP on HoS only for map-joins. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
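A hedged sketch of what using such a key could look like; the map-join-only property name below is an assumption modeled on the existing hive.spark.dynamic.partition.pruning key, since this thread does not spell out the final name.
{code:java}
// Hypothetical configuration for map-join-only DPP on Hive-on-Spark.
import org.apache.hadoop.hive.conf.HiveConf;

public final class DppConfigSketch {
  static HiveConf mapJoinOnlyDpp() {
    HiveConf conf = new HiveConf();
    // leave full DPP off: avoids splitting the operator tree and running
    // complex subqueries once per Spark job, as described above
    conf.setBoolean("hive.spark.dynamic.partition.pruning", false);
    // hypothetical key: apply DPP only when the target is a map-join,
    // where no operator-tree split is needed
    conf.setBoolean("hive.spark.dynamic.partition.pruning.map.join.only", true);
    return conf;
  }
}
{code}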
[jira] [Commented] (HIVE-17176) Add ASF header for LlapAllocatorBuffer.java
[ https://issues.apache.org/jira/browse/HIVE-17176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106718#comment-16106718 ] Saijin Huang commented on HIVE-17176: - [~lirui] can you please take a quick review and commit? > Add ASF header for LlapAllocatorBuffer.java > --- > > Key: HIVE-17176 > URL: https://issues.apache.org/jira/browse/HIVE-17176 > Project: Hive > Issue Type: Bug > Affects Versions: 3.0.0 > Reporter: Saijin Huang > Assignee: Saijin Huang > Priority: Minor > Fix For: 3.0.0 > > Attachments: HIVE-17176.1.patch > > > Reproduced the problem from HIVE-16233 and found the ASF header missing. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-16972) FetchOperator: filter out inputSplits which length is zero
[ https://issues.apache.org/jira/browse/HIVE-16972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chaozhong Yang updated HIVE-16972: -- Resolution: Won't Fix Status: Resolved (was: Patch Available) > FetchOperator: filter out inputSplits which length is zero > -- > > Key: HIVE-16972 > URL: https://issues.apache.org/jira/browse/HIVE-16972 > Project: Hive > Issue Type: Improvement > Components: Physical Optimizer > Affects Versions: 2.1.0, 2.1.1 > Reporter: Chaozhong Yang > Assignee: Chaozhong Yang > Attachments: HIVE-16972.2.patch, HIVE-16972.3.patch, > HIVE-16972.4.patch, HIVE-16972.5.patch, HIVE-16972.6.patch, HIVE-16972.patch > > > * Background > We can describe the basic workflow of a common HQL query as follows: > 1. compile and execute > 2. fetch results > In many cases, we don't need to worry about fetching results > from HDFS (iff there are MapReduce jobs generated in the planning step). However, > the number of result files on HDFS and their data distribution will affect the > final status of an HQL query, especially for HiveServer2. We have some map-only > queries, e.g.:
> {code:sql}
> select * from myTable where date > '20170101' and date <= '20170301' and id = 88;
> {code}
> This query will generate more than 20,000 files (see the screenshot > uploaded) on HDFS, and most of those files are empty. Of course, they are very > sparse. If we send a TFetchResultsRequest from a HiveServer2 client with typical > parameters (timeout: 90s, maxRows: 1024), FetchOperator cannot fetch 1024 rows > in 90 seconds, and our HiveServer2 client will mark this TFetchResultsRequest > as a timed-out failure. Why? In fact, it's expensive to fetch results from > empty files. In our HDFS cluster (5000+ DataNodes), reading data from an > empty file costs almost 100 ms (100 ms * 1000 files => 100 s > 90 s timeout). > Obviously, we can filter out those empty files or splits to speed up the > process of FetchResults. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
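A minimal sketch of the split filtering being described, assuming access to the mapred InputSplit array before fetching; InputSplit#getLength() is the real Hadoop API, while the helper itself is illustrative rather than the actual FetchOperator patch.
{code:java}
// Sketch: drop zero-length splits before fetching, since even an empty
// file costs a NameNode/DataNode round trip (~100 ms in the cluster above).
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.mapred.InputSplit;

public final class SplitFilterSketch {
  static InputSplit[] filterEmptySplits(InputSplit[] splits) throws IOException {
    List<InputSplit> nonEmpty = new ArrayList<>(splits.length);
    for (InputSplit split : splits) {
      if (split.getLength() > 0) { // keep only splits with actual data
        nonEmpty.add(split);
      }
    }
    return nonEmpty.toArray(new InputSplit[0]);
  }
}
{code}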
[jira] [Commented] (HIVE-17208) Repl dump should pass in db/table information to authorization API
[ https://issues.apache.org/jira/browse/HIVE-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106707#comment-16106707 ] Hive QA commented on HIVE-17208: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12879547/HIVE-17208.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 11018 tests executed *Failed tests:*
{noformat}
TestPerfCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=235)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_7] (batchId=240)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_vectorized_dynamic_partition_pruning] (batchId=168)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3] (batchId=99)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[repl_dump_requires_admin] (batchId=90)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[repl_load_requires_admin] (batchId=90)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionRegistrationWithCustomSchema (batchId=179)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionSpecRegistrationWithCustomSchema (batchId=179)
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation (batchId=179)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6192/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6192/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6192/ Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}
This message is automatically generated. ATTACHMENT ID: 12879547 - PreCommit-HIVE-Build > Repl dump should pass in db/table information to authorization API > -- > > Key: HIVE-17208 > URL: https://issues.apache.org/jira/browse/HIVE-17208 > Project: Hive > Issue Type: Bug > Components: Authorization > Reporter: Daniel Dai > Assignee: Daniel Dai > Attachments: HIVE-17208.1.patch > > > "repl dump" does not provide db/table information. That is necessary for > authorization replication in Ranger. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
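A hedged sketch of the missing piece: building authorizable objects that carry db/table names so a plugin like Ranger can replicate policies. The HivePrivilegeObject constructor and type enum are Hive's real plugin API, while the helper and its wiring into REPL DUMP are assumptions.
{code:java}
// Sketch: construct the input objects a "repl dump db[.table]" command
// would pass to the authorization API instead of an empty input set.
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hive.ql.security.authorization.plugin.HivePrivilegeObject;
import org.apache.hadoop.hive.ql.security.authorization.plugin.HivePrivilegeObject.HivePrivilegeObjectType;

public final class ReplDumpAuthSketch {
  /** tableName == null means a whole-database dump (illustrative helper). */
  static List<HivePrivilegeObject> inputsFor(String dbName, String tableName) {
    List<HivePrivilegeObject> inputs = new ArrayList<>();
    if (tableName == null) {
      inputs.add(new HivePrivilegeObject(
          HivePrivilegeObjectType.DATABASE, dbName, dbName));
    } else {
      inputs.add(new HivePrivilegeObject(
          HivePrivilegeObjectType.TABLE_OR_VIEW, dbName, tableName));
    }
    return inputs;
  }
}
{code}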
[jira] [Updated] (HIVE-17208) Repl dump should pass in db/table information to authorization API
[ https://issues.apache.org/jira/browse/HIVE-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-17208: -- Attachment: HIVE-17208.1.patch > Repl dump should pass in db/table information to authorization API > -- > > Key: HIVE-17208 > URL: https://issues.apache.org/jira/browse/HIVE-17208 > Project: Hive > Issue Type: Bug > Components: Authorization > Reporter: Daniel Dai > Assignee: Daniel Dai > Attachments: HIVE-17208.1.patch > > > "repl dump" does not provide db/table information. That is necessary for > authorization replication in Ranger. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17208) Repl dump should pass in db/table information to authorization API
[ https://issues.apache.org/jira/browse/HIVE-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-17208: -- Status: Patch Available (was: Open) > Repl dump should pass in db/table information to authorization API > -- > > Key: HIVE-17208 > URL: https://issues.apache.org/jira/browse/HIVE-17208 > Project: Hive > Issue Type: Bug > Components: Authorization > Reporter: Daniel Dai > Assignee: Daniel Dai > Attachments: HIVE-17208.1.patch > > > "repl dump" does not provide db/table information. That is necessary for > authorization replication in Ranger. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17208) Repl dump should pass in db/table information to authorization API
[ https://issues.apache.org/jira/browse/HIVE-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai reassigned HIVE-17208: - > Repl dump should pass in db/table information to authorization API > -- > > Key: HIVE-17208 > URL: https://issues.apache.org/jira/browse/HIVE-17208 > Project: Hive > Issue Type: Bug > Components: Authorization > Reporter: Daniel Dai > Assignee: Daniel Dai > > "repl dump" does not provide db/table information. That is necessary for > authorization replication in Ranger. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17185) TestHiveMetaStoreStatsMerge.testStatsMerge is failing
[ https://issues.apache.org/jira/browse/HIVE-17185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-17185: --- Fix Version/s: 3.0.0 > TestHiveMetaStoreStatsMerge.testStatsMerge is failing > - > > Key: HIVE-17185 > URL: https://issues.apache.org/jira/browse/HIVE-17185 > Project: Hive > Issue Type: Sub-task > Components: Metastore, Test >Affects Versions: 3.0.0 >Reporter: Ashutosh Chauhan >Assignee: Pengcheng Xiong > Fix For: 3.0.0 > > > Likely because of HIVE-16997 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (HIVE-17185) TestHiveMetaStoreStatsMerge.testStatsMerge is failing
[ https://issues.apache.org/jira/browse/HIVE-17185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong resolved HIVE-17185. Resolution: Fixed updated tests. > TestHiveMetaStoreStatsMerge.testStatsMerge is failing > - > > Key: HIVE-17185 > URL: https://issues.apache.org/jira/browse/HIVE-17185 > Project: Hive > Issue Type: Sub-task > Components: Metastore, Test >Affects Versions: 3.0.0 >Reporter: Ashutosh Chauhan >Assignee: Pengcheng Xiong > > Likely because of HIVE-16997 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-17185) TestHiveMetaStoreStatsMerge.testStatsMerge is failing
[ https://issues.apache.org/jira/browse/HIVE-17185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong reassigned HIVE-17185: -- Assignee: Pengcheng Xiong > TestHiveMetaStoreStatsMerge.testStatsMerge is failing > - > > Key: HIVE-17185 > URL: https://issues.apache.org/jira/browse/HIVE-17185 > Project: Hive > Issue Type: Test > Components: Metastore, Test >Affects Versions: 3.0.0 >Reporter: Ashutosh Chauhan >Assignee: Pengcheng Xiong > > Likely because of HIVE-16997 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-17185) TestHiveMetaStoreStatsMerge.testStatsMerge is failing
[ https://issues.apache.org/jira/browse/HIVE-17185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pengcheng Xiong updated HIVE-17185: --- Issue Type: Sub-task (was: Test) Parent: HIVE-16995 > TestHiveMetaStoreStatsMerge.testStatsMerge is failing > - > > Key: HIVE-17185 > URL: https://issues.apache.org/jira/browse/HIVE-17185 > Project: Hive > Issue Type: Sub-task > Components: Metastore, Test >Affects Versions: 3.0.0 >Reporter: Ashutosh Chauhan >Assignee: Pengcheng Xiong > > Likely because of HIVE-16997 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-15305) Add tests for METASTORE_EVENT_LISTENERS
[ https://issues.apache.org/jira/browse/HIVE-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106547#comment-16106547 ] Hive QA commented on HIVE-15305: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12879535/HIVE-15305.1.patch {color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 11042 tests executed *Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=233)
TestPerfCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=235)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_vectorized_dynamic_partition_pruning] (batchId=168)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] (batchId=100)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainuser_3] (batchId=99)
org.apache.hadoop.hive.metastore.TestHiveMetaStoreStatsMerge.testStatsMerge (batchId=206)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionRegistrationWithCustomSchema (batchId=179)
org.apache.hive.hcatalog.api.TestHCatClient.testPartitionSpecRegistrationWithCustomSchema (batchId=179)
org.apache.hive.hcatalog.api.TestHCatClient.testTableSchemaPropagation (batchId=179)
org.apache.hive.hcatalog.listener.TestTransactionalDbNotificationListener.sqlInsertPartition (batchId=233)
{noformat}
Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/6191/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/6191/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-6191/ Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}
This message is automatically generated. ATTACHMENT ID: 12879535 - PreCommit-HIVE-Build > Add tests for METASTORE_EVENT_LISTENERS > --- > > Key: HIVE-15305 > URL: https://issues.apache.org/jira/browse/HIVE-15305 > Project: Hive > Issue Type: Bug > Reporter: Mohit Sabharwal > Assignee: Mohit Sabharwal > Attachments: HIVE-15305.1.patch, HIVE-15305.patch > > > HIVE-15232 reused TestDbNotificationListener to test > METASTORE_TRANSACTIONAL_EVENT_LISTENERS and removed unit testing of the > METASTORE_EVENT_LISTENERS config. We should test both. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-15305) Add tests for METASTORE_EVENT_LISTENERS
[ https://issues.apache.org/jira/browse/HIVE-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mohit Sabharwal updated HIVE-15305: --- Attachment: HIVE-15305.1.patch > Add tests for METASTORE_EVENT_LISTENERS > --- > > Key: HIVE-15305 > URL: https://issues.apache.org/jira/browse/HIVE-15305 > Project: Hive > Issue Type: Bug >Reporter: Mohit Sabharwal >Assignee: Mohit Sabharwal > Attachments: HIVE-15305.1.patch, HIVE-15305.patch > > > HIVE-15232 reused TestDbNotificationListener to test > METASTORE_TRANSACTIONAL_EVENT_LISTENERS and removed unit testing of > METASTORE_EVENT_LISTENERS config. We should test both. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-15305) Add tests for METASTORE_EVENT_LISTENERS
[ https://issues.apache.org/jira/browse/HIVE-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16106526#comment-16106526 ] Mohit Sabharwal commented on HIVE-15305: Incorporated review feedback and rebased patch. Apologies for the delay on this. [~vgumashta], could you please take a look? RB link: https://reviews.apache.org/r/61244/ > Add tests for METASTORE_EVENT_LISTENERS > --- > > Key: HIVE-15305 > URL: https://issues.apache.org/jira/browse/HIVE-15305 > Project: Hive > Issue Type: Bug > Reporter: Mohit Sabharwal > Assignee: Mohit Sabharwal > Attachments: HIVE-15305.patch > > > HIVE-15232 reused TestDbNotificationListener to test > METASTORE_TRANSACTIONAL_EVENT_LISTENERS and removed unit testing of the > METASTORE_EVENT_LISTENERS config. We should test both. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
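A minimal sketch of the test setup the issue asks for, exercising both listener paths; the two ConfVars are real HiveConf entries, and using DbNotificationListener for both is an illustrative assumption rather than the attached patch.
{code:java}
// Sketch: configure both the post-commit and the transactional listener
// paths so a test can assert that each one fires.
import org.apache.hadoop.hive.conf.HiveConf;

public final class ListenerConfSketch {
  static HiveConf withBothListeners() {
    String listener = "org.apache.hive.hcatalog.listener.DbNotificationListener";
    HiveConf conf = new HiveConf();
    // non-transactional path: invoked after the metastore operation completes
    conf.setVar(HiveConf.ConfVars.METASTORE_EVENT_LISTENERS, listener);
    // transactional path: invoked within the same metastore transaction
    conf.setVar(HiveConf.ConfVars.METASTORE_TRANSACTIONAL_EVENT_LISTENERS, listener);
    return conf;
  }
}
{code}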