[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270613&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270613
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 02/Jul/19 05:52
Start Date: 02/Jul/19 05:52
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #684: 
HIVE-21880 : Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r299311334
 
 

 ##
 File path: standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
 ##
 @@ -3236,18 +3238,28 @@ public NotificationEventResponse getNextNotification(long lastEventId, int maxEv
     NotificationEventResponse filtered = new NotificationEventResponse();
     if (rsp != null && rsp.getEvents() != null) {
       long nextEventId = lastEventId + 1;
+      long prevEventId = lastEventId;
       for (NotificationEvent e : rsp.getEvents()) {
+        LOG.debug("Got event with id : " + e.getEventId());
         if (e.getEventId() != nextEventId) {
-          LOG.error("Requested events are found missing in NOTIFICATION_LOG table. Expected: {}, Actual: {}. "
-              + "Probably, cleaner would've cleaned it up. "
-              + "Try setting higher value for hive.metastore.event.db.listener.timetolive. "
-              + "Also, bootstrap the system again to get back the consistent replicated state.",
-              nextEventId, e.getEventId());
-          throw new IllegalStateException(REPL_EVENTS_MISSING_IN_METASTORE);
+          if (e.getEventId() == prevEventId) {
+            LOG.error("NOTIFICATION_LOG table has multiple events with the same event Id {}. " +
+                "Something went wrong when inserting notification events. Bootstrap the system " +
+                "again to get back the consistent replicated state.", prevEventId);
 
 Review comment:
   If it happens by any chance, say because of a DB error, the reported error 
is quite misleading, i.e. the cleaner has not removed any events in that case. 
So I think it's better to differentiate between the two cases. If duplicate 
eventIds ever show up, a proper error message will be conveyed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270613)
Time Spent: 2.5h  (was: 2h 20m)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch, 
> HIVE-21880.03.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Need to enable 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites,
>  which is disabled because it is flaky and randomly fails with the below error.
> {code}
> Error Message
> Notification events are missing in the meta store.
> Stacktrace
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> 

[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270611&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270611
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 02/Jul/19 05:50
Start Date: 02/Jul/19 05:50
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #684: 
HIVE-21880 : Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r299310923
 
 

 ##
 File path: 
itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestConcurrentDbNotificationListener.java
 ##
 @@ -0,0 +1,201 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.hive.hcatalog.listener;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.fail;
+
+import java.util.concurrent.TimeUnit;
+import java.lang.reflect.Field;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Stack;
+
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Lists;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.cli.CliSessionState;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
+import org.apache.hadoop.hive.metastore.IMetaStoreClient;
+import org.apache.hadoop.hive.metastore.MetaStoreEventListener;
+import org.apache.hadoop.hive.metastore.MetaStoreEventListenerConstants;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.Database;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.FireEventRequest;
+import org.apache.hadoop.hive.metastore.api.FireEventRequestData;
+import org.apache.hadoop.hive.metastore.api.Function;
+import org.apache.hadoop.hive.metastore.api.FunctionType;
+import org.apache.hadoop.hive.metastore.api.InsertEventRequestData;
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.api.NotificationEvent;
+import org.apache.hadoop.hive.metastore.api.NotificationEventResponse;
+import org.apache.hadoop.hive.metastore.api.NotificationEventsCountRequest;
+import org.apache.hadoop.hive.metastore.api.Partition;
+import org.apache.hadoop.hive.metastore.api.PrincipalType;
+import org.apache.hadoop.hive.metastore.api.ResourceType;
+import org.apache.hadoop.hive.metastore.api.ResourceUri;
+import org.apache.hadoop.hive.metastore.api.SerDeInfo;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
 
 Review comment:
   Removed. Sorry.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270611)
Time Spent: 2h 20m  (was: 2h 10m)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: Major
> 

[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270610&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270610
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 02/Jul/19 05:46
Start Date: 02/Jul/19 05:46
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #684: 
HIVE-21880 : Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r299310307
 
 

 ##
 File path: hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
 ##
 @@ -997,6 +998,15 @@ private void addNotificationLog(NotificationEvent event, ListenerEvent listenerE
         stmt.execute("SET @@session.sql_mode=ANSI_QUOTES");
       }
 
+      // Derby doesn't allow FOR UPDATE to lock the row being selected (see
+      // https://db.apache.org/derby/docs/10.1/ref/rrefsqlj31783.html), so lock the whole table.
+      // Since there's only one row in the table, this shouldn't cause any performance
+      // degradation. Also, we do not suggest using Derby in production, so that's fine.
+      if (sqlGenerator.getDbProduct() == DatabaseProduct.DERBY) {
+        String lockingQuery = "lock table \"NOTIFICATION_SEQUENCE\" in exclusive mode";
+        LOG.info("Going to execute query <" + lockingQuery + ">");
+        stmt.executeUpdate(lockingQuery);
+      }
       String s = sqlGenerator.addForUpdateClause("select \"NEXT_EVENT_ID\" " +
 
 Review comment:
   sqlGenerator.addForUpdateClause() does not add a FOR UPDATE clause for Derby, 
so the effect is the same.
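
For context, a minimal sketch of the behaviour described above, using a local 
stand-in enum rather than the metastore's real DatabaseProduct/SQLGenerator types 
(illustrative only, not the actual code):

{code}
// Sketch only: Derby gets an exclusive table lock up front; other databases
// rely on a row-level lock via FOR UPDATE on the select itself.
import java.sql.SQLException;
import java.sql.Statement;

class NextEventIdSelect {
  enum Db { DERBY, OTHER }  // stand-in for the metastore's DatabaseProduct enum

  static String buildSelect(Statement stmt, Db db) throws SQLException {
    String select = "select \"NEXT_EVENT_ID\" from \"NOTIFICATION_SEQUENCE\"";
    if (db == Db.DERBY) {
      // Derby cannot lock just the selected row with FOR UPDATE, so take an
      // exclusive lock on the whole (single-row) table instead.
      stmt.executeUpdate("lock table \"NOTIFICATION_SEQUENCE\" in exclusive mode");
      return select;
    }
    // Other databases: row-level lock via FOR UPDATE.
    return select + " for update";
  }
}
{code}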
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270610)
Time Spent: 2h 10m  (was: 2h)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch, 
> HIVE-21880.03.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Need to enable 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites,
>  which is disabled because it is flaky and randomly fails with the below error.
> {code}
> Error Message
> Notification events are missing in the meta store.
> Stacktrace
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361)
>   at 

[jira] [Assigned] (HIVE-21941) Use checkstyle ruleset in Pre Upgrade Tool project

2019-07-01 Thread Krisztian Kasa (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krisztian Kasa reassigned HIVE-21941:
-


> Use checkstyle ruleset in Pre Upgrade Tool project 
> ---
>
> Key: HIVE-21941
> URL: https://issues.apache.org/jira/browse/HIVE-21941
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Major
> Fix For: 4.0.0
>
>
> The project upgrade-acid/pre-upgrade does not use the same checkstyle 
> ruleset as the Hive root project.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21571) SHOW COMPACTIONS shows column names as its first output row

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876581#comment-16876581
 ] 

Hive QA commented on HIVE-21571:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973358/HIVE-21571.04.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16359 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingLeaderEmptyConfig.testHouseKeepingThreadExistence
 (batchId=242)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17811/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17811/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17811/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973358 - PreCommit-HIVE-Build

> SHOW COMPACTIONS shows column names as its first output row
> ---
>
> Key: HIVE-21571
> URL: https://issues.apache.org/jira/browse/HIVE-21571
> Project: Hive
>  Issue Type: Bug
>Reporter: Todd Lipcon
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21571.01.patch, HIVE-21571.02.patch, 
> HIVE-21571.03.patch, HIVE-21571.04.patch, HIVE-21571.patch
>
>
> SHOW COMPACTIONS yields a resultset with nice column names, and then the 
> first row of data is a repetition of those column names. This is somewhat 
> confusing and hard to read.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21571) SHOW COMPACTIONS shows column names as its first output row

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876566#comment-16876566
 ] 

Hive QA commented on HIVE-21571:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
49s{color} | {color:blue} ql in master has 2254 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17811/dev-support/hive-personality.sh
 |
| git revision | master / 2d9e0e4 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17811/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> SHOW COMPACTIONS shows column names as its first output row
> ---
>
> Key: HIVE-21571
> URL: https://issues.apache.org/jira/browse/HIVE-21571
> Project: Hive
>  Issue Type: Bug
>Reporter: Todd Lipcon
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21571.01.patch, HIVE-21571.02.patch, 
> HIVE-21571.03.patch, HIVE-21571.04.patch, HIVE-21571.patch
>
>
> SHOW COMPACTIONS yields a resultset with nice column names, and then the 
> first row of data is a repetition of those column names. This is somewhat 
> confusing and hard to read.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21133) Support views with rewriting enabled useful for debugging

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876557#comment-16876557
 ] 

Hive QA commented on HIVE-21133:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12955863/HIVE-21133.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17809/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17809/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17809/

Messages:
{noformat}
 This message was trimmed, see log for full details 
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-17809/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-07-01 23:24:09.769
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 2d9e0e4 HIVE-19831: Hiveserver2 should skip doAuth checks for 
CREATE DATABASE/TABLE if database/table already exists (Rajkumar Singh, 
reviewed by Daniel Dai)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 2d9e0e4 HIVE-19831: Hiveserver2 should skip doAuth checks for 
CREATE DATABASE/TABLE if database/table already exists (Rajkumar Singh, 
reviewed by Daniel Dai)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-07-01 23:24:10.908
+ rm -rf ../yetus_PreCommit-HIVE-Build-17809
+ mkdir ../yetus_PreCommit-HIVE-Build-17809
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-17809
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-17809/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/common/src/java/org/apache/hadoop/hive/conf/Constants.java: does not 
exist in index
error: a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not 
exist in index
error: a/itests/src/test/resources/testconfiguration.properties: does not exist 
in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/Context.java: does not exist in 
index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java: does not 
exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveMaterializedViewsRegistry.java:
 does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/RelOptHiveTable.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/plan/CreateViewDesc.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/plan/ImportTableDesc.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java: does not 
exist in index
error: a/ql/src/test/results/clientpositive/llap/subquery_scalar.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query14.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query23.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query24.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query28.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query54.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query58.q.out: does 
not exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query6.q.out: does not 
exist in index
error: a/ql/src/test/results/clientpositive/perf/tez/cbo_query61.q.out: does 
not exist in index
error: 

[jira] [Commented] (HIVE-21923) Vectorized MapJoin may miss results when only the join key is selected

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876552#comment-16876552
 ] 

Hive QA commented on HIVE-21923:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973346/HIVE-21923.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17808/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17808/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17808/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-07-01 23:14:57.111
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-17808/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-07-01 23:14:57.115
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 2d9e0e4 HIVE-19831: Hiveserver2 should skip doAuth checks for 
CREATE DATABASE/TABLE if database/table already exists (Rajkumar Singh, 
reviewed by Daniel Dai)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 2d9e0e4 HIVE-19831: Hiveserver2 should skip doAuth checks for 
CREATE DATABASE/TABLE if database/table already exists (Rajkumar Singh, 
reviewed by Daniel Dai)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-07-01 23:14:58.265
+ rm -rf ../yetus_PreCommit-HIVE-Build-17808
+ mkdir ../yetus_PreCommit-HIVE-Build-17808
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-17808
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-17808/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
ql/src/test/results/clientpositive/llap/hybridgrace_hashjoin_2.q.out:1392
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/hybridgrace_hashjoin_2.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/tez/hybridgrace_hashjoin_2.q.out:1374
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/tez/hybridgrace_hashjoin_2.q.out' with 
conflicts.
Going to apply patch with: git apply -p0
/data/hiveptest/working/scratch/build.patch:187: trailing whitespace.
Map 5 
/data/hiveptest/working/scratch/build.patch:232: trailing whitespace.
Map 6 
/data/hiveptest/working/scratch/build.patch:240: trailing whitespace.
Reducer 2 
/data/hiveptest/working/scratch/build.patch:273: trailing whitespace.
sort order: 
/data/hiveptest/working/scratch/build.patch:276: trailing whitespace.
Reducer 4 
error: patch failed: 
ql/src/test/results/clientpositive/llap/hybridgrace_hashjoin_2.q.out:1392
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/hybridgrace_hashjoin_2.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/tez/hybridgrace_hashjoin_2.q.out:1374
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/tez/hybridgrace_hashjoin_2.q.out' with 
conflicts.
U ql/src/test/results/clientpositive/llap/hybridgrace_hashjoin_2.q.out
U ql/src/test/results/clientpositive/tez/hybridgrace_hashjoin_2.q.out
warning: squelched 151 whitespace errors
warning: 156 lines add whitespace errors.
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-17808
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973346 - PreCommit-HIVE-Build


[jira] [Commented] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876528#comment-16876528
 ] 

Hive QA commented on HIVE-21910:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973337/HIVE-21910.4.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 16363 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.org.apache.hadoop.hive.ql.TestWarehouseExternalDir
 (batchId=255)
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.testExternalDefaultPaths 
(batchId=255)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17807/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17807/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17807/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973337 - PreCommit-HIVE-Build

> Multiple target location generation in HostAffinitySplitLocationProvider
> 
>
> Key: HIVE-21910
> URL: https://issues.apache.org/jira/browse/HIVE-21910
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21910.2.patch, HIVE-21910.3.patch, 
> HIVE-21910.4.patch, HIVE-21910.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We need HostAffinitySplitLocationProvider to generate multiple target locations, 
> so that we have deterministic fallback nodes in case the target node is disabled 
> (a hedged sketch of the idea follows below).
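>
> A hypothetical sketch of the idea (not the actual HostAffinitySplitLocationProvider 
> code): hash the split deterministically and pick a few consecutive hosts from a 
> stable, sorted host list, so every node computes the same ordered fallback candidates.
> {code}
> // Hypothetical sketch only; names and hashing scheme are assumptions.
> import java.util.ArrayList;
> import java.util.List;
>
> class FallbackLocations {
>   // Returns up to numCandidates hosts for a split: the first is the preferred
>   // target, the rest are deterministic fallbacks if that node is disabled.
>   static List<String> candidateHosts(String splitId, List<String> sortedHosts, int numCandidates) {
>     List<String> result = new ArrayList<>();
>     int start = Math.floorMod(splitId.hashCode(), sortedHosts.size());
>     for (int i = 0; i < Math.min(numCandidates, sortedHosts.size()); i++) {
>       result.add(sortedHosts.get((start + i) % sortedHosts.size()));
>     }
>     return result;
>   }
> }
> {code}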



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876499#comment-16876499
 ] 

Hive QA commented on HIVE-21910:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} llap-tez in master has 17 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
3s{color} | {color:blue} ql in master has 2254 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} The patch common passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch llap-tez passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} ql: The patch generated 0 new + 41 unchanged - 1 
fixed = 41 total (was 42) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
11s{color} | {color:red} ql generated 1 new + 2253 unchanged - 1 fixed = 2254 
total (was 2254) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Null passed for non-null parameter of new java.util.HashSet(Collection) 
in new 
org.apache.hadoop.hive.ql.exec.tez.HostAffinitySplitLocationProvider(List, 
boolean, int)  Method invoked at HostAffinitySplitLocationProvider.java:of new 
java.util.HashSet(Collection) in new 
org.apache.hadoop.hive.ql.exec.tez.HostAffinitySplitLocationProvider(List, 
boolean, int)  Method invoked at HostAffinitySplitLocationProvider.java:[line 
68] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17807/dev-support/hive-personality.sh
 |
| git revision | master / 2d9e0e4 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17807/yetus/new-findbugs-ql.html
 |
| modules | C: common llap-tez ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17807/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Multiple target location generation in HostAffinitySplitLocationProvider
> 

[jira] [Commented] (HIVE-21579) Introduce more complex SQL:2016 datetime formats

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876471#comment-16876471
 ] 

Hive QA commented on HIVE-21579:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973336/HIVE-21579.01.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16361 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning]
 (batchId=192)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17806/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17806/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17806/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973336 - PreCommit-HIVE-Build

> Introduce more complex SQL:2016 datetime formats
> 
>
> Key: HIVE-21579
> URL: https://issues.apache.org/jira/browse/HIVE-21579
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21579.01.patch
>
>
> Enable Hive to parse the following datetime formats when any 
> combination/subset of these or previously implemented patterns is provided in 
> one string. Also catch combinations that conflict (an illustrative sketch of 
> such a conflict check follows the list below).
>  * MONTH
>  * MON
>  * D
>  * DAY
>  * DY
>  * Q
>  * WW
>  * W
> [https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit]
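>
> A purely illustrative sketch of the "catch combinations that conflict" part; the 
> token-to-field mapping below is an assumption for the example, not the patch's 
> actual parser or the exact SQL:2016 semantics.
> {code}
> // Illustrative only: flag two tokens that would set the same datetime field.
> import java.util.Arrays;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
>
> class TokenConflictCheck {
>   private static final Map<String, String> FIELD_OF = new HashMap<>();
>   static {
>     FIELD_OF.put("MONTH", "month name");
>     FIELD_OF.put("MON", "month name");
>     FIELD_OF.put("DAY", "day of week");
>     FIELD_OF.put("DY", "day of week");
>     FIELD_OF.put("D", "day of week");
>   }
>
>   static void checkConflicts(List<String> tokens) {
>     Map<String, String> seen = new HashMap<>();
>     for (String token : tokens) {
>       String field = FIELD_OF.get(token);
>       if (field == null) {
>         continue;
>       }
>       String earlier = seen.put(field, token);
>       if (earlier != null && !earlier.equals(token)) {
>         throw new IllegalArgumentException(
>             "Conflicting tokens " + earlier + " and " + token + " both set the " + field);
>       }
>     }
>   }
>
>   public static void main(String[] args) {
>     checkConflicts(Arrays.asList("DAY", "DY")); // throws: both set the day of week
>   }
> }
> {code}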



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21571) SHOW COMPACTIONS shows column names as its first output row

2019-07-01 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-21571:
--
Status: Open  (was: Patch Available)

> SHOW COMPACTIONS shows column names as its first output row
> ---
>
> Key: HIVE-21571
> URL: https://issues.apache.org/jira/browse/HIVE-21571
> Project: Hive
>  Issue Type: Bug
>Reporter: Todd Lipcon
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21571.01.patch, HIVE-21571.02.patch, 
> HIVE-21571.03.patch, HIVE-21571.04.patch, HIVE-21571.patch
>
>
> SHOW COMPACTIONS yields a resultset with nice column names, and then the 
> first row of data is a repetition of those column names. This is somewhat 
> confusing and hard to read.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21571) SHOW COMPACTIONS shows column names as its first output row

2019-07-01 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-21571:
--
Attachment: HIVE-21571.04.patch
Status: Patch Available  (was: Open)

> SHOW COMPACTIONS shows column names as its first output row
> ---
>
> Key: HIVE-21571
> URL: https://issues.apache.org/jira/browse/HIVE-21571
> Project: Hive
>  Issue Type: Bug
>Reporter: Todd Lipcon
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21571.01.patch, HIVE-21571.02.patch, 
> HIVE-21571.03.patch, HIVE-21571.04.patch, HIVE-21571.patch
>
>
> SHOW COMPACTIONS yields a resultset with nice column names, and then the 
> first row of data is a repetition of those column names. This is somewhat 
> confusing and hard to read.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19831) Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if database/table already exists

2019-07-01 Thread Rajkumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876461#comment-16876461
 ] 

Rajkumar Singh commented on HIVE-19831:
---

I think this will filter out the databases on which the user doesn't have auth: 
https://github.com/apache/hive/blob/fcd4721591e0ba7d9ba24821f9528dadb7ec16fd/standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java#L1830
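
For reference, a hedged sketch of the kind of client-side filtering being referred 
to, assuming the filtering goes through the MetaStoreFilterHook extension point 
(illustrative only, not the code at the linked line):

{code}
// Sketch only: databases the caller is not authorized to see are dropped
// from the result before it is returned to the client.
import java.util.List;
import org.apache.hadoop.hive.metastore.MetaStoreFilterHook;
import org.apache.hadoop.hive.metastore.api.MetaException;

class AuthorizedDatabases {
  static List<String> visibleTo(MetaStoreFilterHook filterHook, List<String> allDatabases)
      throws MetaException {
    return filterHook.filterDatabases(allDatabases);
  }
}
{code}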

> Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if 
> database/table already exists
> 
>
> Key: HIVE-19831
> URL: https://issues.apache.org/jira/browse/HIVE-19831
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19831.01.patch, HIVE-19831.patch
>
>
> With sqlstdauth on, CREATE DATABASE IF NOT EXISTS takes TOO LONG if there are too 
> many objects inside the database directory. Hive should not run the doAuth 
> checks for all the objects within the database if the database already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21579) Introduce more complex SQL:2016 datetime formats

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876442#comment-16876442
 ] 

Hive QA commented on HIVE-21579:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
33s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
54s{color} | {color:blue} ql in master has 2254 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} common: The patch generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17806/dev-support/hive-personality.sh
 |
| git revision | master / 2d9e0e4 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17806/yetus/diff-checkstyle-common.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17806/yetus/whitespace-eol.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17806/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Introduce more complex SQL:2016 datetime formats
> 
>
> Key: HIVE-21579
> URL: https://issues.apache.org/jira/browse/HIVE-21579
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21579.01.patch
>
>
> Enable Hive to parse the following datetime formats when any 
> combination/subset of these or previously implemented patterns is provided in 
> one string. Also catch combinations that conflict.
>  * MONTH
>  * MON
>  * D
>  * DAY
>  * DY
>  * Q
>  * WW
>  * W
> [https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21571) SHOW COMPACTIONS shows column names as its first output row

2019-07-01 Thread Daniel Dai (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876430#comment-16876430
 ] 

Daniel Dai commented on HIVE-21571:
---

+1 pending tests.

> SHOW COMPACTIONS shows column names as its first output row
> ---
>
> Key: HIVE-21571
> URL: https://issues.apache.org/jira/browse/HIVE-21571
> Project: Hive
>  Issue Type: Bug
>Reporter: Todd Lipcon
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-21571.01.patch, HIVE-21571.02.patch, 
> HIVE-21571.03.patch, HIVE-21571.patch
>
>
> SHOW COMPACTIONS yields a resultset with nice column names, and then the 
> first row of data is a repetition of those column names. This is somewhat 
> confusing and hard to read.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21437) Vectorization: Decimal64 division with integer columns

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876429#comment-16876429
 ] 

Hive QA commented on HIVE-21437:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973330/HIVE-21437.5.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 16331 tests 
executed
*Failed tests:*
{noformat}
TestReplAcrossInstancesWithJsonMessageFormat - did not produce a TEST-*.xml 
file (likely timed out) (batchId=255)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_decimal_col_scalar_division]
 (batchId=40)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17805/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17805/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17805/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973330 - PreCommit-HIVE-Build

> Vectorization: Decimal64 division with integer columns
> --
>
> Key: HIVE-21437
> URL: https://issues.apache.org/jira/browse/HIVE-21437
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Attila Magyar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21437.1.patch, HIVE-21437.2.patch, 
> HIVE-21437.3.patch, HIVE-21437.4.patch, HIVE-21437.5.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Vectorizer fails for
> {code}
> CREATE temporary TABLE `catalog_Sales`(
>   `cs_quantity` int, 
>   `cs_wholesale_cost` decimal(7,2), 
>   `cs_list_price` decimal(7,2), 
>   `cs_sales_price` decimal(7,2), 
>   `cs_ext_discount_amt` decimal(7,2), 
>   `cs_ext_sales_price` decimal(7,2), 
>   `cs_ext_wholesale_cost` decimal(7,2), 
>   `cs_ext_list_price` decimal(7,2), 
>   `cs_ext_tax` decimal(7,2), 
>   `cs_coupon_amt` decimal(7,2), 
>   `cs_ext_ship_cost` decimal(7,2), 
>   `cs_net_paid` decimal(7,2), 
>   `cs_net_paid_inc_tax` decimal(7,2), 
>   `cs_net_paid_inc_ship` decimal(7,2), 
>   `cs_net_paid_inc_ship_tax` decimal(7,2), 
>   `cs_net_profit` decimal(7,2))
>  ;
> explain vectorization detail select max((((cs_ext_list_price - 
> cs_ext_wholesale_cost) - cs_ext_discount_amt) + cs_ext_sales_price) / 2) from 
> catalog_sales;
> {code}
> {code}
> 'Map Vectorization:'
> 'enabled: true'
> 'enabledConditionsMet: 
> hive.vectorized.use.vectorized.input.format IS true'
> 'inputFileFormats: 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> 'notVectorizedReason: SELECT operator: Could not instantiate 
> DecimalColDivideDecimalScalar with arguments arguments: [21, 20, 22], 
> argument classes: [Integer, Integer, Integer], exception: 
> java.lang.IllegalArgumentException: java.lang.ClassCastException@63b56be0 
> stack trace: 
> sun.reflect.GeneratedConstructorAccessor.newInstance(Unknown 
> Source), 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45),
>  java.lang.reflect.Constructor.newInstance(Constructor.java:423), 
> org.apache.hadoop.hive.ql.exec.vector.VectorizationContext.instantiateExpression(VectorizationContext.java:2088),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4662),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4602),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.vectorizeSelectOperator(Vectorizer.java:4584),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperator(Vectorizer.java:5171),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChild(Vectorizer.java:923),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChildren(Vectorizer.java:809),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperatorTree(Vectorizer.java:776),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.access$2400(Vectorizer.java:240),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2038),
>  
> 

[jira] [Commented] (HIVE-19831) Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if database/table already exists

2019-07-01 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876426#comment-16876426
 ] 

Sergey Shelukhin commented on HIVE-19831:
-

Hmm... doesn't this expose the existence of the database by that name to an 
unauthorized user?

> Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if 
> database/table already exists
> 
>
> Key: HIVE-19831
> URL: https://issues.apache.org/jira/browse/HIVE-19831
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19831.01.patch, HIVE-19831.patch
>
>
> With sqlstdauth on, CREATE DATABASE IF NOT EXISTS takes TOO LONG if there are too 
> many objects inside the database directory. Hive should not run the doAuth 
> checks for all the objects within the database if the database already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19831) Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if database/table already exists

2019-07-01 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-19831:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 4.0.0
   Status: Resolved  (was: Patch Available)

+1.

Ptest was clean last time and the only diff is a comment, so there is no need to 
wait for the ptest result.

Patch pushed to master. Thanks Rajkumar!

> Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if 
> database/table already exists
> 
>
> Key: HIVE-19831
> URL: https://issues.apache.org/jira/browse/HIVE-19831
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19831.01.patch, HIVE-19831.patch
>
>
> With sqlstdauth on, CREATE DATABASE IF NOT EXISTS takes TOO LONG if there are too 
> many objects inside the database directory. Hive should not run the doAuth 
> checks for all the objects within the database if the database already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21437) Vectorization: Decimal64 division with integer columns

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876400#comment-16876400
 ] 

Hive QA commented on HIVE-21437:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
58s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17805/dev-support/hive-personality.sh
 |
| git revision | master / be6bf93 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17805/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Vectorization: Decimal64 division with integer columns
> --
>
> Key: HIVE-21437
> URL: https://issues.apache.org/jira/browse/HIVE-21437
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Attila Magyar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21437.1.patch, HIVE-21437.2.patch, 
> HIVE-21437.3.patch, HIVE-21437.4.patch, HIVE-21437.5.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Vectorizer fails for
> {code}
> CREATE temporary TABLE `catalog_Sales`(
>   `cs_quantity` int, 
>   `cs_wholesale_cost` decimal(7,2), 
>   `cs_list_price` decimal(7,2), 
>   `cs_sales_price` decimal(7,2), 
>   `cs_ext_discount_amt` decimal(7,2), 
>   `cs_ext_sales_price` decimal(7,2), 
>   `cs_ext_wholesale_cost` decimal(7,2), 
>   `cs_ext_list_price` decimal(7,2), 
>   `cs_ext_tax` decimal(7,2), 
>   `cs_coupon_amt` decimal(7,2), 
>   `cs_ext_ship_cost` decimal(7,2), 
>   `cs_net_paid` decimal(7,2), 
>   `cs_net_paid_inc_tax` decimal(7,2), 
>   `cs_net_paid_inc_ship` decimal(7,2), 
>   `cs_net_paid_inc_ship_tax` decimal(7,2), 
>   `cs_net_profit` decimal(7,2))
>  ;
> explain vectorization detail select max((((cs_ext_list_price - 
> cs_ext_wholesale_cost) - cs_ext_discount_amt) + cs_ext_sales_price) / 2) from 
> catalog_sales;
> {code}
> {code}
> 'Map Vectorization:'
> 'enabled: true'
> 'enabledConditionsMet: 
> hive.vectorized.use.vectorized.input.format IS true'
> '

[jira] [Commented] (HIVE-19831) Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if database/table already exists

2019-07-01 Thread Rajkumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876383#comment-16876383
 ] 

Rajkumar Singh commented on HIVE-19831:
---

Thanks [~daijy]. Included the suggested comment and updated a fresh patch for a 
clean run.

> Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if 
> database/table already exists
> 
>
> Key: HIVE-19831
> URL: https://issues.apache.org/jira/browse/HIVE-19831
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-19831.01.patch, HIVE-19831.patch
>
>
> With sqlstdauth on, CREATE DATABASE IF NOT EXISTS takes TOO LONG if there are too 
> many objects inside the database directory. Hive should not run the doAuth 
> checks for all the objects within the database if the database already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19831) Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if database/table already exists

2019-07-01 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-19831:
--
Status: Open  (was: Patch Available)

> Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if 
> database/table already exists
> 
>
> Key: HIVE-19831
> URL: https://issues.apache.org/jira/browse/HIVE-19831
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 2.1.0, 1.2.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-19831.01.patch, HIVE-19831.patch
>
>
> With sqlstdauth on, CREATE DATABASE IF NOT EXISTS takes TOO LONG if there are too 
> many objects inside the database directory. Hive should not run the doAuth 
> checks for all the objects within the database if the database already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19831) Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if database/table already exists

2019-07-01 Thread Rajkumar Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-19831:
--
Attachment: HIVE-19831.01.patch
Status: Patch Available  (was: Open)

> Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if 
> database/table already exists
> 
>
> Key: HIVE-19831
> URL: https://issues.apache.org/jira/browse/HIVE-19831
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 2.1.0, 1.2.1
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-19831.01.patch, HIVE-19831.patch
>
>
> With sqlstdauth on, CREATE DATABASE IF NOT EXISTS takes TOO LONG if there are too 
> many objects inside the database directory. Hive should not run the doAuth 
> checks for all the objects within the database if the database already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876381#comment-16876381
 ] 

Hive QA commented on HIVE-21911:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973327/HIVE-21911.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16361 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=110)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17804/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17804/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17804/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973327 - PreCommit-HIVE-Build

> Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
> -
>
> Key: HIVE-21911
> URL: https://issues.apache.org/jira/browse/HIVE-21911
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap, Tez
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21911.2.patch, HIVE-21911.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We need to have a way to plug in different listeners which act upon the 
> LlapDaemon statistics.
> This listener should be able to disable / resize the LlapDaemons based on 
> health data.
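As a rough, purely illustrative sketch of such a pluggable listener (the interface, method, and metric names below are hypothetical, not the actual Hive API):

{code:java}
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical sketch only; names are illustrative, not the actual Hive interfaces.
interface LlapMetricsListener {
  /** Invoked on the Tez side whenever fresh statistics for a daemon arrive. */
  void onDaemonMetrics(String daemonIdentity, Map<String, Long> metrics);
}

/** Example listener that asks the scheduler, via a callback, to disable unhealthy daemons. */
class DisableUnhealthyDaemonListener implements LlapMetricsListener {
  private final Consumer<String> disableDaemon;

  DisableUnhealthyDaemonListener(Consumer<String> disableDaemon) {
    this.disableDaemon = disableDaemon;
  }

  @Override
  public void onDaemonMetrics(String daemonIdentity, Map<String, Long> metrics) {
    // Illustrative health rule: too many failed tasks means the daemon gets disabled.
    if (metrics.getOrDefault("failedTasks", 0L) > 100L) {
      disableDaemon.accept(daemonIdentity);
    }
  }
}
{code}

A resizing listener would follow the same shape, only acting on different metrics.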



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21935) Hive Vectorization : degraded performance with vectorize UDF

2019-07-01 Thread Rajkumar Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876376#comment-16876376
 ] 

Rajkumar Singh commented on HIVE-21935:
---

[~gopalv] This is greatly affecting job performance: with 
hive.vectorized.adaptor.usage.mode=all the same job takes ~5000 seconds, 
while with the property set to none it completes in ~21 seconds. Is 
there a way we can increase the rows queued for buffering?

> Hive Vectorization : degraded performance with vectorize UDF  
> --
>
> Key: HIVE-21935
> URL: https://issues.apache.org/jira/browse/HIVE-21935
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 3.1.1
> Environment: Hive-3, JDK-8
>Reporter: Rajkumar Singh
>Priority: Major
>  Labels: performance
> Attachments: CustomSplit-1.0-SNAPSHOT.jar
>
>
> With vectorization turned on and hive.vectorized.adaptor.usage.mode=all we 
> were seeing severe performance degradation. Looking at the task jstacks, it 
> seems that it is running the code which vectorizes the UDF and is stuck in some loop.
> {code:java}
> jstack -l 14954 | grep 0x3af0 -A20
> "TezChild" #15 daemon prio=5 os_prio=0 tid=0x7f157538d800 nid=0x3af0 
> runnable [0x7f1547581000]
>java.lang.Thread.State: RUNNABLE
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorAssignRow.assignRowColumn(VectorAssignRow.java:573)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorAssignRow.assignRowColumn(VectorAssignRow.java:350)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.setResult(VectorUDFAdaptor.java:205)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.evaluate(VectorUDFAdaptor.java:150)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression.evaluateChildren(VectorExpression.java:271)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.ListIndexColScalar.evaluate(ListIndexColScalar.java:59)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:146)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938)
>   at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:889)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
>   at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
> [yarn@hdp32b ~]$ jstack -l 14954 | grep 0x3af0 -A20
> "TezChild" #15 daemon prio=5 os_prio=0 tid=0x7f157538d800 nid=0x3af0 
> runnable [0x7f1547581000]
>java.lang.Thread.State: RUNNABLE
>   at 
> org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector.ensureSize(BytesColumnVector.java:554)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorAssignRow.assignRowColumn(VectorAssignRow.java:570)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorAssignRow.assignRowColumn(VectorAssignRow.java:350)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.setResult(VectorUDFAdaptor.java:205)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.udf.VectorUDFAdaptor.evaluate(VectorUDFAdaptor.java:150)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression.evaluateChildren(VectorExpression.java:271)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.expressions.ListIndexColScalar.evaluate(ListIndexColScalar.java:59)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorSelectOperator.process(VectorSelectOperator.java:146)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.vectorForward(Operator.java:965)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:938)
>   at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:125)
>   at 
> org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:889)
>   at 
> 

[jira] [Commented] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876340#comment-16876340
 ] 

Hive QA commented on HIVE-21911:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
25s{color} | {color:blue} llap-tez in master has 17 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} llap-tez: The patch generated 1 new + 76 unchanged - 0 
fixed = 77 total (was 76) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17804/dev-support/hive-personality.sh
 |
| git revision | master / be6bf93 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17804/yetus/diff-checkstyle-llap-tez.txt
 |
| modules | C: common llap-tez U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17804/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
> -
>
> Key: HIVE-21911
> URL: https://issues.apache.org/jira/browse/HIVE-21911
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap, Tez
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21911.2.patch, HIVE-21911.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We need to have a way to plug in different listeners which act upon the 
> LlapDaemon statistics.
> This listener should be able to disable / resize the LlapDaemons based on 
> health data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21133) Support views with rewriting enabled useful for debugging

2019-07-01 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21133:
---
Description: 
Implement virtual views with rewriting enabled, useful to check whether a 
certain rewriting will be triggered. These view definitions will be stored in 
the user session, and they will only be used when simulation mode is enabled 
and the user runs {{explain cbo}} / {{explain cbo extended}}.

{code}
set hive.simulation.enable=true;

create view mv1_n2 enable rewrite as
select * from emps_n3 where empid < 150;

explain cbo
select *
from (select * from emps_n3 where empid < 120) t
join depts_n2 using (deptno);

drop view mv1_n2;
{code}

  was:
Implement simulated materialized views, useful to check whether a certain 
rewriting will be triggered. Simulated materialized views definitions will be 
stored in the user session, and they will only be used when simulation mode is 
enabled and the user runs {{explain cbo}} / {{explain cbo extended}}.

{code}
set hive.simulation.enable=true;

create simulated materialized view mv1_n2 as
select * from emps_n3 where empid < 150;

explain cbo
select *
from (select * from emps_n3 where empid < 120) t
join depts_n2 using (deptno);

drop simulated materialized view mv1_n2;
{code}


> Support views with rewriting enabled useful for debugging
> -
>
> Key: HIVE-21133
> URL: https://issues.apache.org/jira/browse/HIVE-21133
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21133.01.patch, HIVE-21133.02.patch, 
> HIVE-21133.02.patch, HIVE-21133.patch
>
>
> Implement virtual views with rewriting enabled, useful to check whether a 
> certain rewriting will be triggered. These view definitions will be stored in 
> the user session, and they will only be used when simulation mode is enabled 
> and the user runs {{explain cbo}} / {{explain cbo extended}}.
> {code}
> set hive.simulation.enable=true;
> create view mv1_n2 enable rewrite as
> select * from emps_n3 where empid < 150;
> explain cbo
> select *
> from (select * from emps_n3 where empid < 120) t
> join depts_n2 using (deptno);
> drop view mv1_n2;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21133) Support views with rewriting enabled useful for debugging

2019-07-01 Thread Jesus Camacho Rodriguez (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-21133:
---
Summary: Support views with rewriting enabled useful for debugging  (was: 
Add simulated materialized views useful for rewriting debugging)

> Support views with rewriting enabled useful for debugging
> -
>
> Key: HIVE-21133
> URL: https://issues.apache.org/jira/browse/HIVE-21133
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-21133.01.patch, HIVE-21133.02.patch, 
> HIVE-21133.02.patch, HIVE-21133.patch
>
>
> Implement simulated materialized views, useful to check whether a certain 
> rewriting will be triggered. Simulated materialized views definitions will be 
> stored in the user session, and they will only be used when simulation mode 
> is enabled and the user runs {{explain cbo}} / {{explain cbo extended}}.
> {code}
> set hive.simulation.enable=true;
> create simulated materialized view mv1_n2 as
> select * from emps_n3 where empid < 150;
> explain cbo
> select *
> from (select * from emps_n3 where empid < 120) t
> join depts_n2 using (deptno);
> drop simulated materialized view mv1_n2;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21874) Implement add partitions related methods on temporary table

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876317#comment-16876317
 ] 

Hive QA commented on HIVE-21874:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973326/HIVE-21874.05.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16588 tests 
executed
*Failed tests:*
{noformat}
TestReplAcrossInstancesWithJsonMessageFormat - did not produce a TEST-*.xml 
file (likely timed out) (batchId=255)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17803/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17803/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17803/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973326 - PreCommit-HIVE-Build

> Implement add partitions related methods on temporary table
> ---
>
> Key: HIVE-21874
> URL: https://issues.apache.org/jira/browse/HIVE-21874
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-21874.01.patch, HIVE-21874.02.patch, 
> HIVE-21874.03.patch, HIVE-21874.04.patch, HIVE-21874.05.patch
>
>
> IMetaStoreClient exposes the following add partition related methods:
> {code:java}
> Partition add_partition(Partition partition);
> int add_partitions(List<Partition> partitions);
> int add_partitions_pspec(PartitionSpecProxy partitionSpec);
> List<Partition> add_partitions(List<Partition> partitions, boolean 
> ifNotExists, boolean needResults);
> {code}
> These methods should be implemented in order to handle addition of partitions 
> to temporary tables.
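A very rough sketch of the session-local bookkeeping this could involve (class and method names are made up for illustration; the real work lives in the temp-table metastore client):

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch only: session-local partition storage for a temporary table.
class TempTablePartitionStore<P> {
  // Keyed by "db.table"; values are the partitions added within this session.
  private final Map<String, List<P>> partitionsByTable = new HashMap<>();

  /** Mirrors add_partitions(partitions, ifNotExists, needResults) in spirit. */
  List<P> addPartitions(String dbTable, List<P> partitions,
                        boolean ifNotExists, boolean needResults) {
    List<P> existing = partitionsByTable.computeIfAbsent(dbTable, k -> new ArrayList<>());
    List<P> added = new ArrayList<>();
    for (P partition : partitions) {
      if (existing.contains(partition)) {
        if (!ifNotExists) {
          throw new IllegalStateException("Partition already exists: " + partition);
        }
        continue;  // ifNotExists: silently skip duplicates
      }
      existing.add(partition);
      added.add(partition);
    }
    return needResults ? added : null;
  }
}
{code}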



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21923) Vectorized MapJoin may miss results when only the join key is selected

2019-07-01 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-21923:

Attachment: HIVE-21923.02.patch

> Vectorized MapJoin may miss results when only the join key is selected
> --
>
> Key: HIVE-21923
> URL: https://issues.apache.org/jira/browse/HIVE-21923
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-21923.01.patch, HIVE-21923.02.patch
>
>
> HIVE-21189 has introduced some resultset changes
> in ql/src/test/results/clientpositive/llap/hybridgrace_hashjoin_2.q.out
> https://github.com/apache/hive/commit/5799398450c17d06e8ef144ce835a8524f5abec9#diff-56b3ab96b6c90fdbebe2c4f84e8595afL500



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21874) Implement add partitions related methods on temporary table

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876270#comment-16876270
 ] 

Hive QA commented on HIVE-21874:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  1m  
9s{color} | {color:blue} standalone-metastore/metastore-server in master has 
179 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
57s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 1 new + 15 unchanged - 0 fixed 
= 16 total (was 15) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17803/dev-support/hive-personality.sh
 |
| git revision | master / be6bf93 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17803/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17803/yetus/whitespace-eol.txt
 |
| modules | C: standalone-metastore/metastore-server ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17803/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Implement add partitions related methods on temporary table
> ---
>
> Key: HIVE-21874
> URL: https://issues.apache.org/jira/browse/HIVE-21874
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-21874.01.patch, HIVE-21874.02.patch, 
> HIVE-21874.03.patch, HIVE-21874.04.patch, HIVE-21874.05.patch
>
>
> IMetaStoreClient exposes the following add partition related methods:
> {code:java}
> Partition add_partition(Partition partition);
> int add_partitions(List<Partition> partitions);
> int add_partitions_pspec(PartitionSpecProxy partitionSpec);
> List<Partition> add_partitions(List<Partition> partitions, boolean 
> ifNotExists, boolean 

[jira] [Updated] (HIVE-21578) Introduce SQL:2016 formats FM, FX, and nested strings

2019-07-01 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21578:
-
Status: Open  (was: Patch Available)

> Introduce SQL:2016 formats FM, FX, and nested strings
> -
>
> Key: HIVE-21578
> URL: https://issues.apache.org/jira/browse/HIVE-21578
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21578.01.patch, HIVE-21578.02.patch, 
> HIVE-21578.02.patch, HIVE-21578.03.patch
>
>
> Enable Hive to parse the following datetime formats when any combination or 
> subset of these or previously implemented formats is provided in one string. 
>  * "text" (nested strings)
>  * FM
>  * FX
> [Definitions 
> here|https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21578) Introduce SQL:2016 formats FM, FX, and nested strings

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876241#comment-16876241
 ] 

Hive QA commented on HIVE-21578:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973320/HIVE-21578.03.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 16361 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17802/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17802/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17802/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973320 - PreCommit-HIVE-Build

> Introduce SQL:2016 formats FM, FX, and nested strings
> -
>
> Key: HIVE-21578
> URL: https://issues.apache.org/jira/browse/HIVE-21578
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21578.01.patch, HIVE-21578.02.patch, 
> HIVE-21578.02.patch, HIVE-21578.03.patch
>
>
> Enable Hive to parse the following datetime formats when any combination or 
> subset of these or previously implemented formats is provided in one string. 
>  * "text" (nested strings)
>  * FM
>  * FX
> [Definitions 
> here|https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270210=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270210
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 14:18
Start Date: 01/Jul/19 14:18
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #684: 
HIVE-21880 : Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r299064763
 
 

 ##
 File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
 ##
 @@ -1,17 +1,25 @@ private void prepareQuotes() throws SQLException {
 }
   }
 
-  private void lockForUpdate() throws MetaException {
-String selectQuery = "select \"NEXT_EVENT_ID\" from 
\"NOTIFICATION_SEQUENCE\"";
-String selectForUpdateQuery = sqlGenerator.addForUpdateClause(selectQuery);
-new RetryingExecutor(conf, () -> {
-  prepareQuotes();
-  Query query = pm.newQuery("javax.jdo.query.SQL", selectForUpdateQuery);
-  query.setUnique(true);
-  // only need to execute it to get db Lock
-  query.execute();
-  query.closeAll();
-}).run();
+  private void lockNotificationSequenceForUpdate() throws MetaException {
+if (sqlGenerator.getDbProduct() == DatabaseProduct.DERBY && directSql != 
null) {
+  // Derby doesn't allow FOR UPDATE to lock the row being selected (See 
https://db.apache
+  // .org/derby/docs/10.1/ref/rrefsqlj31783.html) . So lock the whole 
table. Since there's
+  // only one row in the table, this shouldn't cause any performance 
degradation. Also we not
+  // suggest Derby to be used in production so that's fine.
+  directSql.lockDbTable("NOTIFICATION_SEQUENCE");
 
 Review comment:
   LOCK table is more restrictive than SELECT FOR UPDATE. E.g. LOCK table in 
exclusive mode will block any SELECTs as well, but SELECT FOR UPDATE will allow 
SELECT. So, it's better to restrict this method only to Derby where SELECT FOR 
UPDATE is not possible.
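A small JDBC-flavoured sketch of the two locking strategies being compared (assuming the NOTIFICATION_SEQUENCE table from the quoted code; connection management and retries are omitted):

{code:java}
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative sketch only, contrasting the two locking strategies discussed above.
final class NotificationSequenceLock {

  /** Row-level lock: blocks other FOR UPDATE readers but still allows plain SELECTs. */
  static void lockRowForUpdate(Connection conn) throws SQLException {
    try (Statement stmt = conn.createStatement()) {
      // Result set is not consumed; executing the statement is enough to take the lock.
      stmt.executeQuery("select \"NEXT_EVENT_ID\" from \"NOTIFICATION_SEQUENCE\" for update");
    }
  }

  /** Derby fallback: an exclusive table lock, which also blocks plain SELECTs. */
  static void lockWholeTable(Connection conn) throws SQLException {
    try (Statement stmt = conn.createStatement()) {
      stmt.execute("LOCK TABLE NOTIFICATION_SEQUENCE IN EXCLUSIVE MODE");
    }
  }
}
{code}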
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270210)
Time Spent: 2h  (was: 1h 50m)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch, 
> HIVE-21880.03.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Need to enable 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
>  which is disabled as it is flaky and randomly failing with below error.
> {code}
> Error Message
> Notification events are missing in the meta store.
> Stacktrace
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231)
>   at 
> 

[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270206=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270206
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 14:10
Start Date: 01/Jul/19 14:10
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #684: 
HIVE-21880 : Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r299057717
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
 ##
 @@ -518,6 +518,8 @@
   REPL_INVALID_DB_OR_TABLE_PATTERN(20021,
   "Invalid pattern for the DB or table name in the replication policy. 
"
   + "It should be a valid regex enclosed within single or 
double quotes."),
+  REPL_EVENTS_WITH_DUPLICATE_ID_IN_METASTORE(20026, "Notification events with 
duplicate " +
 
 Review comment:
   Done.
   
   Also, I noticed that the new error code added is not next to the 
previous one. I don't remember why I did that. Corrected it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270206)
Time Spent: 1h 50m  (was: 1h 40m)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch, 
> HIVE-21880.03.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Need to enable 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
>  which is disabled as it is flaky and randomly failing with below error.
> {code}
> Error Message
> Notification events are missing in the meta store.
> Stacktrace
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1782)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.run(WarehouseInstance.java:227)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:282)
>   at 
> 

[jira] [Commented] (HIVE-21578) Introduce SQL:2016 formats FM, FX, and nested strings

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876208#comment-16876208
 ] 

Hive QA commented on HIVE-21578:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
59s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17802/dev-support/hive-personality.sh
 |
| git revision | master / be6bf93 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17802/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Introduce SQL:2016 formats FM, FX, and nested strings
> -
>
> Key: HIVE-21578
> URL: https://issues.apache.org/jira/browse/HIVE-21578
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21578.01.patch, HIVE-21578.02.patch, 
> HIVE-21578.02.patch, HIVE-21578.03.patch
>
>
> Enable Hive to parse the following datetime formats when any combination or 
> subset of these or previously implemented formats is provided in one string. 
>  * "text" (nested strings)
>  * FM
>  * FX
> [Definitions 
> here|https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270202=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270202
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 14:02
Start Date: 01/Jul/19 14:02
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #684: 
HIVE-21880 : Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r299057717
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
 ##
 @@ -518,6 +518,8 @@
   REPL_INVALID_DB_OR_TABLE_PATTERN(20021,
   "Invalid pattern for the DB or table name in the replication policy. 
"
   + "It should be a valid regex enclosed within single or 
double quotes."),
+  REPL_EVENTS_WITH_DUPLICATE_ID_IN_METASTORE(20026, "Notification events with 
duplicate " +
 
 Review comment:
   Done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270202)
Time Spent: 1h 40m  (was: 1.5h)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch, 
> HIVE-21880.03.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Need to enable 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
>  which is disabled as it is flaky and randomly failing with below error.
> {code}
> Error Message
> Notification events are missing in the meta store.
> Stacktrace
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1782)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.run(WarehouseInstance.java:227)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:282)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:265)
>   at 
> 

[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270199=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270199
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 13:58
Start Date: 01/Jul/19 13:58
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #684: 
HIVE-21880 : Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r299055847
 
 

 ##
 File path: 
hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
 ##
 @@ -997,6 +998,15 @@ private void addNotificationLog(NotificationEvent event, 
ListenerEvent listenerE
 stmt.execute("SET @@session.sql_mode=ANSI_QUOTES");
   }
 
+  // Derby doesn't allow FOR UPDATE to lock the row being selected (See 
https://db.apache
+  // .org/derby/docs/10.1/ref/rrefsqlj31783.html) . So lock the whole 
table. Since there's
+  // only one row in the table, this shouldn't cause any performance 
degradation. Also we not
+  // suggest Derby to be used in production so that's fine.
 
 Review comment:
   You are right! Removed.
   
   I was misled because of 
https://cwiki.apache.org/confluence/display/Hive/AdminManual+Metastore+Administration#AdminManualMetastoreAdministration-Local/EmbeddedMetastoreDatabase(Derby),
 where it says it's only for unit tests. But that's only for the embedded db. Sorry 
for the confusion.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270199)
Time Spent: 1.5h  (was: 1h 20m)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch, 
> HIVE-21880.03.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Need to enable 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
>  which is disabled as it is flaky and randomly failing with below error.
> {code}
> Error Message
> Notification events are missing in the meta store.
> Stacktrace
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788)
>   at 

[jira] [Commented] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider

2019-07-01 Thread Adam Szita (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876184#comment-16876184
 ] 

Adam Szita commented on HIVE-21910:
---

Thanks for the patch, Peter. +1 pending tests

> Multiple target location generation in HostAffinitySplitLocationProvider
> 
>
> Key: HIVE-21910
> URL: https://issues.apache.org/jira/browse/HIVE-21910
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21910.2.patch, HIVE-21910.3.patch, 
> HIVE-21910.4.patch, HIVE-21910.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We need to generate multiple target locations by 
> HostAffinitySplitLocationProvider, so we will have deterministic fallback 
> nodes in case the target node is disabled
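
Not the actual HostAffinitySplitLocationProvider implementation, but a minimal sketch of the idea: derive the primary location and an ordered list of fallbacks from a deterministic hash of the split path, so every node computes the same list. The method names and the hash function are assumptions:

{code}
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class FallbackLocationSketch {
  /**
   * Returns numLocations consecutive hosts starting at a position derived from a
   * deterministic hash of the split path, so the same split always maps to the
   * same primary host and the same ordered fallbacks. Assumes a non-empty host list.
   */
  static List<String> preferredLocations(List<String> hosts, String splitPath, int numLocations) {
    List<String> result = new ArrayList<>(numLocations);
    int start = Math.floorMod(hash(splitPath.getBytes(StandardCharsets.UTF_8)), hosts.size());
    for (int i = 0; i < numLocations && i < hosts.size(); i++) {
      result.add(hosts.get((start + i) % hosts.size()));
    }
    return result;
  }

  // Simple FNV-1a style hash; the real provider may use a different function.
  private static int hash(byte[] bytes) {
    int h = 0x811C9DC5;
    for (byte b : bytes) {
      h = (h ^ (b & 0xff)) * 0x01000193;
    }
    return h;
  }
}
{code}

If the host at position 0 is disabled, the scheduler can fall through to position 1, and because the hash is deterministic the fallback choice is identical on every node.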



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19831) Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if database/table already exists

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876182#comment-16876182
 ] 

Hive QA commented on HIVE-19831:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12927542/HIVE-19831.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 16330 tests 
executed
*Failed tests:*
{noformat}
TestReplAcrossInstancesWithJsonMessageFormat - did not produce a TEST-*.xml 
file (likely timed out) (batchId=255)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions
 (batchId=275)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll
 (batchId=275)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes
 (batchId=275)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17801/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17801/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17801/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12927542 - PreCommit-HIVE-Build

> Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if 
> database/table already exists
> 
>
> Key: HIVE-19831
> URL: https://issues.apache.org/jira/browse/HIVE-19831
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-19831.patch
>
>
> with sqlstdauth on, CREATE DATABASE IF NOT EXISTS takes TOO LONG if there are too 
> many objects inside the database directory. Hive should not run the doAuth 
> checks for all the objects within the database if the database already exists.
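
A minimal sketch of the proposed short-circuit, using a hypothetical existence predicate rather than Hive's real authorization entry points:

{code}
import java.util.function.Predicate;

public class CreateAuthShortCircuitSketch {
  /**
   * Decides whether the full per-object authorization walk is needed for a
   * CREATE DATABASE/TABLE ... IF NOT EXISTS statement. If the target object
   * already exists, the statement is a no-op, so the expensive doAuth pass
   * over every child object can be skipped.
   */
  static boolean needsFullAuthCheck(String objectName, boolean ifNotExists,
      Predicate<String> existsInMetastore) {
    if (ifNotExists && existsInMetastore.test(objectName)) {
      return false; // no-op statement: skip the per-object checks
    }
    return true;
  }
}
{code}

For example, needsFullAuthCheck("default", true, name -> true) returns false, modelling CREATE DATABASE IF NOT EXISTS against a database that is already present.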



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19831) Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if database/table already exists

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876152#comment-16876152
 ] 

Hive QA commented on HIVE-19831:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
59s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
18s{color} | {color:red} ql generated 1 new + 2253 unchanged - 0 fixed = 2254 
total (was 2253) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Nullcheck of op at line 1136 of value previously dereferenced in 
org.apache.hadoop.hive.ql.Driver.doAuthorization(HiveOperation, 
BaseSemanticAnalyzer, String)  At Driver.java:1136 of value previously 
dereferenced in org.apache.hadoop.hive.ql.Driver.doAuthorization(HiveOperation, 
BaseSemanticAnalyzer, String)  At Driver.java:[line 1085] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17801/dev-support/hive-personality.sh
 |
| git revision | master / be6bf93 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17801/yetus/whitespace-eol.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17801/yetus/new-findbugs-ql.html
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17801/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if 
> database/table already exists
> 
>
> Key: HIVE-19831
> URL: https://issues.apache.org/jira/browse/HIVE-19831
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-19831.patch
>
>
> with sqlstdauth on, CREATE DATABASE IF NOT EXISTS takes TOO LONG if there are too 
> many objects inside the database directory. Hive should not run the doAuth 
> checks for all the objects within the database if the database already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21940) Metastore: Postgres text <-> clob mismatch for PARTITION_PARAMS/PARAM_VALUE

2019-07-01 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21940:

Description: 
The issue is reproducible on a cluster with postgres metastore db by the 
following statements:
{code}
USE default;
drop table if exists my_table;
create external table my_table (col1 int, col3 int) partitioned by (col2 
string) STORED AS TEXTFILE;
insert into my_table VALUES(11,201,"F");
SELECT pp.* FROM sys.partition_params pp join sys.partitions p on p.part_id = 
pp.part_id join sys.tbls t on t.tbl_id = p.tbl_id where t.tbl_name = "my_table";
{code}

sys query results in:
{code}
+-------------+------------------------+-----------------+
| pp.part_id  |      pp.param_key      | pp.param_value  |
+-------------+------------------------+-----------------+
| 151         | rawDataSize            | 28629           |
| 151         | numRows                | 28628           |
| 151         | transient_lastDdlTime  | 28627           |
| 151         | COLUMN_STATS_ACCURATE  | 28626           |
| 151         | numFiles               | 28625           |
| 151         | totalSize              | 28622           |
+-------------+------------------------+-----------------+
{code}


Seems like (probably) since HIVE-20833/HIVE-20221 there is an inconvenience 
while using PARTITION_PARAMS/PARAM_VALUE, because in postgres there is no such 
type as CLOB, and metastore simply saves large object ids into this field. More 
interesting is that the large object can be resolved in some codepaths. In case 
of a describe for partition it works correctly:
{code}
describe formatted my_table_for_sqoop partition (col2='F');

...

| Partition Parameters: | NULL  
 | NULL   |
|   | COLUMN_STATS_ACCURATE 
 | 
{\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"col1\":\"true\",\"col3\":\"true\"}}
 |
|   | numFiles  
 | 1  |
|   | numRows   
 | 1  |
|   | rawDataSize   
 | 6  |
|   | totalSize 
 | 7  |
|   | transient_lastDdlTime 
 | 1561976024 |
|   | NULL  
 | NULL   |

{code}

But in case of a direct metastore query (from hive's sys schema, but the same 
result for direct postgres), it shows the result above (see sys query output). 
This is an issue when hive treats these ids as if they were real values, but they 
are obviously not correct, and this causes various failures (e.g. using serde 
parameter serialization.format=28392)



param_value values above are large object ids, according to pg_dump
| 151 | COLUMN_STATS_ACCURATE  | 28626   |

{code}
SELECT pg_catalog.lo_open('28626', 131072);
SELECT pg_catalog.lowrite(0, 
'\x7b2242415349435f5354415453223a2274727565222c22434f4c554d4e5f5354415453223a7b22636f6c31223a2274727565222c22636f6c33223a2274727565227d7d');
SELECT pg_catalog.lo_close(0);

{code}
decoded large object value:
{code}
{"BASIC_STATS":"true","COLUMN_STATS":{"col1":"true","col3":"true"}}
{code}
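
For reference, a minimal JDBC sketch of reading such a large object back as text from Java. It assumes a PostgreSQL 9.4+ metastore database (for the built-in lo_get function), the PostgreSQL JDBC driver on the classpath, and placeholder connection settings:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ResolveLargeObjectSketch {
  public static void main(String[] args) throws Exception {
    // Connection parameters are placeholders for a Postgres-backed metastore.
    try (Connection conn = DriverManager.getConnection(
             "jdbc:postgresql://localhost:5432/metastore", "hive", "hive");
         Statement stmt = conn.createStatement();
         // 28626 is the large object id shown for COLUMN_STATS_ACCURATE above.
         ResultSet rs = stmt.executeQuery("SELECT convert_from(lo_get(28626), 'UTF8')")) {
      while (rs.next()) {
        // Expected: {"BASIC_STATS":"true","COLUMN_STATS":{"col1":"true","col3":"true"}}
        System.out.println(rs.getString(1));
      }
    }
  }
}
{code}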


  was:
The issue is reproducible on a cluster with postgres metastore db by the 
following statements:
{code}
USE default;
drop table if exists my_table;
create external table my_table (col1 int, col3 int) partitioned by (col2 
string) STORED AS TEXTFILE;
insert into my_table VALUES(11,201,"F");
SELECT pp.* FROM sys.partition_params pp join sys.partitions p on p.part_id = 
pp.part_id join sys.tbls t on t.tbl_id = p.tbl_id where t.tbl_name = "my_table";
{code}

sys query results in:
{code}
+-++-+
| pp.part_id  |  pp.param_key  | pp.param_value  |
+-++-+
| 151 | rawDataSize| 28629   |
| 151 | numRows| 28628   |
| 151 | transient_lastDdlTime  | 28627   |
| 151 | COLUMN_STATS_ACCURATE  | 28626   |
| 151 | numFiles   | 28625   |
| 151 | totalSize  | 28622   |
+-++-+
{code}


Seems like (probably) since HIVE-20833/HIVE-20221 there is an inconvenience 
while using 

[jira] [Updated] (HIVE-21940) Metastore: Postgres text <-> clob mismatch for PARTITION_PARAMS/PARAM_VALUE

2019-07-01 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21940:

Description: 
The issue is reproducible on a cluster with postgres metastore db by the 
following statements:
{code}
USE default;
drop table if exists my_table;
create external table my_table (col1 int, col3 int) partitioned by (col2 
string) STORED AS TEXTFILE;
insert into my_table VALUES(11,201,"F");
SELECT pp.* FROM sys.partition_params pp join sys.partitions p on p.part_id = 
pp.part_id join sys.tbls t on t.tbl_id = p.tbl_id where t.tbl_name = "my_table";
{code}

sys query results in:
{code}
+-++-+
| pp.part_id  |  pp.param_key  | pp.param_value  |
+-++-+
| 151 | rawDataSize| 28629   |
| 151 | numRows| 28628   |
| 151 | transient_lastDdlTime  | 28627   |
| 151 | COLUMN_STATS_ACCURATE  | 28626   |
| 151 | numFiles   | 28625   |
| 151 | totalSize  | 28622   |
+-++-+
{code}


Seems like (probably) since HIVE-20833/HIVE-20221 there is an inconvenience 
while using PARTITION_PARAMS/PARAM_VALUE, because in postgres there is no such 
type as CLOB, and metastore simply saves large object ids into this field. More 
interesting is that the large object can be resolved in some codepaths. In case 
of a describe for partition it works correctly:
{code}
describe formatted my_table_for_sqoop partition (col2='F');

...

| Partition Parameters: | NULL  
 | NULL   |
|   | COLUMN_STATS_ACCURATE 
 | 
{\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"col1\":\"true\",\"col3\":\"true\"}}
 |
|   | numFiles  
 | 1  |
|   | numRows   
 | 1  |
|   | rawDataSize   
 | 6  |
|   | totalSize 
 | 7  |
|   | transient_lastDdlTime 
 | 1561976024 |
|   | NULL  
 | NULL   |

{code}

But in case of a direct metastore query (from hive's sys schema, but the same 
result for direct postgres), it shows the result above. This is an issue when 
hive treats these ids as is, but they are obviously not correct, and this 
causes various failures (e.g. using serde parameter serialization.format=28392)



param_value values above are large object ids, according to pg_dump
| 151 | COLUMN_STATS_ACCURATE  | 28626   |

{code}
SELECT pg_catalog.lo_open('28626', 131072);
SELECT pg_catalog.lowrite(0, 
'\x7b2242415349435f5354415453223a2274727565222c22434f4c554d4e5f5354415453223a7b22636f6c31223a2274727565222c22636f6c33223a2274727565227d7d');
SELECT pg_catalog.lo_close(0);

{code}
decoded large object value:
{code}
{"BASIC_STATS":"true","COLUMN_STATS":{"col1":"true","col3":"true"}}
{code}


  was:
The issue is reproducible on a cluster with postgres metastore db by the 
following statements:
{code}
USE default;
drop table if exists my_table;
create external table my_table (col1 int, col3 int) partitioned by (col2 
string) STORED AS TEXTFILE;
insert into my_table VALUES(11,201,"F");
SELECT pp.* FROM sys.partition_params pp join sys.partitions p on p.part_id = 
pp.part_id join sys.tbls t on t.tbl_id = p.tbl_id where t.tbl_name = "my_table";
{code}

Seems like (probably) since HIVE-20833/HIVE-20221 there is an inconvenience 
while using PARTITION_PARAMS/PARAM_VALUE, because in postgres there is no such 
type as CLOB, and metastore simply saves large object ids into this field. More 
interesting is that the large object can be resolved in some codepaths. In case 
of a describe for partition it works correctly:
{code}
describe formatted my_table_for_sqoop partition (col2='F');

...

| Partition Parameters: | NULL  
 | NULL   |
|   | COLUMN_STATS_ACCURATE 
 | 
{\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"col1\":\"true\",\"col3\":\"true\"}}
 |
| 

[jira] [Updated] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider

2019-07-01 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21910:
--
Attachment: HIVE-21910.4.patch

> Multiple target location generation in HostAffinitySplitLocationProvider
> 
>
> Key: HIVE-21910
> URL: https://issues.apache.org/jira/browse/HIVE-21910
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21910.2.patch, HIVE-21910.3.patch, 
> HIVE-21910.4.patch, HIVE-21910.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We need to generate multiple target locations by 
> HostAffinitySplitLocationProvider, so we will have deterministic fallback 
> nodes in case the target node is disabled



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21940) Metastore: Postgres text <-> clob mismatch for PARTITION_PARAMS/PARAM_VALUE

2019-07-01 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21940:

Description: 
The issue is reproducible on a cluster with postgres metastore db by the 
following statements:
{code}
USE default;
drop table if exists my_table;
create external table my_table (col1 int, col3 int) partitioned by (col2 
string) STORED AS TEXTFILE;
insert into my_table VALUES(11,201,"F");
SELECT pp.* FROM sys.partition_params pp join sys.partitions p on p.part_id = 
pp.part_id join sys.tbls t on t.tbl_id = p.tbl_id where t.tbl_name = "my_table";
{code}

Seems like (probably) since HIVE-20833/HIVE-20221 there is an inconvenience 
while using PARTITION_PARAMS/PARAM_VALUE, because in postgres there is no such 
type as CLOB, and metastore simply saves large object ids into this field. More 
interesting is that the large object can be resolved in some codepaths. In case 
of a describe for partition it works correctly:
{code}
describe formatted my_table_for_sqoop partition (col2='F');

...

| Partition Parameters: | NULL  
 | NULL   |
|   | COLUMN_STATS_ACCURATE 
 | 
{\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"col1\":\"true\",\"col3\":\"true\"}}
 |
|   | numFiles  
 | 1  |
|   | numRows   
 | 1  |
|   | rawDataSize   
 | 6  |
|   | totalSize 
 | 7  |
|   | transient_lastDdlTime 
 | 1561976024 |
|   | NULL  
 | NULL   |

{code}

But in case of a direct metastore query (from hive's sys schema, but the same 
result for direct postgres), the query returns:
{code}
+-++-+
| pp.part_id  |  pp.param_key  | pp.param_value  |
+-++-+
| 151 | rawDataSize| 28629   |
| 151 | numRows| 28628   |
| 151 | transient_lastDdlTime  | 28627   |
| 151 | COLUMN_STATS_ACCURATE  | 28626   |
| 151 | numFiles   | 28625   |
| 151 | totalSize  | 28622   |
+-++-+
{code}



param_value values above are large object ids, according to pg_dump
| 151 | COLUMN_STATS_ACCURATE  | 28626   |

{code}
SELECT pg_catalog.lo_open('28626', 131072);
SELECT pg_catalog.lowrite(0, 
'\x7b2242415349435f5354415453223a2274727565222c22434f4c554d4e5f5354415453223a7b22636f6c31223a2274727565222c22636f6c33223a2274727565227d7d');
SELECT pg_catalog.lo_close(0);

{code}
decoded large object value:
{code}
{"BASIC_STATS":"true","COLUMN_STATS":{"col1":"true","col3":"true"}}
{code}


  was:
The issue is reproducible on a cluster with the following statements:
{code}
USE default;
drop table if exists my_table;
create external table my_table (col1 int, col3 int) partitioned by (col2 
string) STORED AS TEXTFILE;
insert into my_table VALUES(11,201,"F");
SELECT pp.* FROM sys.partition_params pp join sys.partitions p on p.part_id = 
pp.part_id join sys.tbls t on t.tbl_id = p.tbl_id where t.tbl_name = "my_table";
{code}

Seems like (probably) since HIVE-20833/HIVE-20221 there is an inconvenience 
while using PARTITION_PARAMS/PARAM_VALUE, because in postgres there is no such 
type as CLOB, and metastore simply saves large object ids into this field. More 
interesting is that the large object can be resolved in some codepaths. In case 
of a describe for partition it works correctly:
{code}
describe formatted my_table_for_sqoop partition (col2='F');

...

| Partition Parameters: | NULL  
 | NULL   |
|   | COLUMN_STATS_ACCURATE 
 | 
{\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"col1\":\"true\",\"col3\":\"true\"}}
 |
|   | numFiles  
 | 1  |
|   | numRows   
 | 1 

[jira] [Updated] (HIVE-21940) Metastore: Postgres text <-> clob mismatch for PARTITION_PARAMS/PARAM_VALUE

2019-07-01 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21940:

Description: 
The issue is reproducible on a cluster with the following statements:
{code}
USE default;
drop table if exists my_table;
create external table my_table (col1 int, col3 int) partitioned by (col2 
string) STORED AS TEXTFILE;
insert into my_table VALUES(11,201,"F");
SELECT pp.* FROM sys.partition_params pp join sys.partitions p on p.part_id = 
pp.part_id join sys.tbls t on t.tbl_id = p.tbl_id where t.tbl_name = "my_table";
{code}

Seems like (probably) since HIVE-20833/HIVE-20221 there is an inconvenience 
while using PARTITION_PARAMS/PARAM_VALUE, because in postgres there is no such 
type as CLOB, and metastore simply saves large object ids into this field. More 
interesting is that the large object can be resolved in some codepaths. In case 
of a describe for partition it works correctly:
{code}
describe formatted my_table_for_sqoop partition (col2='F');

...

| Partition Parameters: | NULL  
 | NULL   |
|   | COLUMN_STATS_ACCURATE 
 | 
{\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"col1\":\"true\",\"col3\":\"true\"}}
 |
|   | numFiles  
 | 1  |
|   | numRows   
 | 1  |
|   | rawDataSize   
 | 6  |
|   | totalSize 
 | 7  |
|   | transient_lastDdlTime 
 | 1561976024 |
|   | NULL  
 | NULL   |

{code}

But in case of a direct metastore query (from hive's sys schema, but the same 
result for direct postgres), the query returns:
{code}
+-++-+
| pp.part_id  |  pp.param_key  | pp.param_value  |
+-++-+
| 151 | rawDataSize| 28629   |
| 151 | numRows| 28628   |
| 151 | transient_lastDdlTime  | 28627   |
| 151 | COLUMN_STATS_ACCURATE  | 28626   |
| 151 | numFiles   | 28625   |
| 151 | totalSize  | 28622   |
+-++-+
{code}



param_value values above are large object ids, according to pg_dump
| 151 | COLUMN_STATS_ACCURATE  | 28626   |

{code}
SELECT pg_catalog.lo_open('28626', 131072);
SELECT pg_catalog.lowrite(0, 
'\x7b2242415349435f5354415453223a2274727565222c22434f4c554d4e5f5354415453223a7b22636f6c31223a2274727565222c22636f6c33223a2274727565227d7d');
SELECT pg_catalog.lo_close(0);

{code}
decoded large object value:
{code}
{"BASIC_STATS":"true","COLUMN_STATS":{"col1":"true","col3":"true"}}
{code}


  was:
The issue is reproducible on a cluster with the following statements:
{code}
USE default;
drop table if exists my_table;
create external table my_table (col1 int, col3 int) partitioned by (col2 
string) STORED AS TEXTFILE;
insert into my_table VALUES(11,201,"F");
SELECT pp.* FROM sys.partition_params pp join sys.partitions p on p.part_id = 
pp.part_id join sys.tbls t on t.tbl_id = p.tbl_id where t.tbl_name = "my_table";
{code}

Seems like (probably) since HIVE-20833/HIVE-20221 there is an inconvenience 
while using PARTITION_PARAMS/PARAM_VALUE, because in postgres there is no such 
type as CLOB, and metastore simply saves large object ids into this field. More 
interesting is that the large object can be resolved in some codepaths. In case 
of a describe for partition it works correctly:
{code}
describe formatted my_table_for_sqoop partition (col2='F');

...

| Partition Parameters: | NULL  
 | NULL   |
|   | COLUMN_STATS_ACCURATE 
 | 
{\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"col1\":\"true\",\"col3\":\"true\"}}
 |
|   | numFiles  
 | 1  |
|   | numRows   
 | 1  

[jira] [Updated] (HIVE-21940) Metastore: Postgres text <-> clob mismatch for PARTITION_PARAMS/PARAM_VALUE

2019-07-01 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21940:

Fix Version/s: 4.0.0

> Metastore: Postgres text <-> clob mismatch for PARTITION_PARAMS/PARAM_VALUE
> ---
>
> Key: HIVE-21940
> URL: https://issues.apache.org/jira/browse/HIVE-21940
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Fix For: 4.0.0
>
>
> The issue is reproducible on a cluster with the following statements:
> {code}
> USE default;
> drop table if exists my_table;
> create external table my_table (col1 int, col3 int) partitioned by (col2 
> string) STORED AS TEXTFILE;
> insert into my_table VALUES(11,201,"F");
> SELECT pp.* FROM sys.partition_params pp join sys.partitions p on p.part_id = 
> pp.part_id join sys.tbls t on t.tbl_id = p.tbl_id where t.tbl_name = 
> "my_table";
> {code}
> Seems like (probably) since HIVE-20833/HIVE-20221 there is an inconvenience 
> while using PARTITION_PARAMS/PARAM_VALUE, because in postgres there is no 
> such type as CLOB, and metastore simply saves large object ids into this 
> field. More interesting is that the large object can be resolved in some 
> codepaths. In case of a describe for partition it works correctly:
> {code}
> describe formatted my_table_for_sqoop partition (col2='F');
> ...
> | Partition Parameters: | NULL
>| NULL   |
> |   | COLUMN_STATS_ACCURATE   
>| 
> {\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"col1\":\"true\",\"col3\":\"true\"}}
>  |
> |   | numFiles
>| 1  |
> |   | numRows 
>| 1  |
> |   | rawDataSize 
>| 6  |
> |   | totalSize   
>| 7  |
> |   | transient_lastDdlTime   
>| 1561976024 |
> |   | NULL
>| NULL   |
> {code}
> But in case of a direct metastore query (from hive's sys schema, but the same 
> result for direct postgres), the query returns:
> {code}
> +-++-+
> | pp.part_id  |  pp.param_key  | pp.param_value  |
> +-++-+
> | 151 | rawDataSize| 28629   |
> | 151 | numRows| 28628   |
> | 151 | transient_lastDdlTime  | 28627   |
> | 151 | COLUMN_STATS_ACCURATE  | 28626   |
> | 151 | numFiles   | 28625   |
> | 151 | totalSize  | 28622   |
> +-++-+
> {code}
> param_value values above are large object ids, according to pg_dump
> | 151 | COLUMN_STATS_ACCURATE  | 28626   |
> {code}
> SELECT pg_catalog.lo_open('28626', 131072);
> SELECT pg_catalog.lowrite(0, 
> '\x7b2242415349435f5354415453223a2274727565222c22434f4c554d4e5f5354415453223a7b22636f6c31223a2274727565222c22636f6c33223a2274727565227d7d');
> SELECT pg_catalog.lo_close(0);
> {code}
> decoded large object value:
> {code}
> {"BASIC_STATS":"true","COLUMN_STATS":{"col1":"true","col3":"true"}}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21940) Metastore: Postgres text <-> clob mismatch for PARTITION_PARAMS/PARAM_VALUE

2019-07-01 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21940:

Affects Version/s: 3.2.0

> Metastore: Postgres text <-> clob mismatch for PARTITION_PARAMS/PARAM_VALUE
> ---
>
> Key: HIVE-21940
> URL: https://issues.apache.org/jira/browse/HIVE-21940
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.2.0
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Fix For: 4.0.0
>
>
> The issue is reproducible on a cluster with the following statements:
> {code}
> USE default;
> drop table if exists my_table;
> create external table my_table (col1 int, col3 int) partitioned by (col2 
> string) STORED AS TEXTFILE;
> insert into my_table VALUES(11,201,"F");
> SELECT pp.* FROM sys.partition_params pp join sys.partitions p on p.part_id = 
> pp.part_id join sys.tbls t on t.tbl_id = p.tbl_id where t.tbl_name = 
> "my_table";
> {code}
> Seems like (probably) since HIVE-20833/HIVE-20221 there is an inconvenience 
> while using PARTITION_PARAMS/PARAM_VALUE, because in postgres there is no 
> such type as CLOB, and metastore simply saves large object ids into this 
> field. More interesting is that the large object can be resolved in some 
> codepaths. In case of a describe for partition it works correctly:
> {code}
> describe formatted my_table_for_sqoop partition (col2='F');
> ...
> | Partition Parameters: | NULL
>| NULL   |
> |   | COLUMN_STATS_ACCURATE   
>| 
> {\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"col1\":\"true\",\"col3\":\"true\"}}
>  |
> |   | numFiles
>| 1  |
> |   | numRows 
>| 1  |
> |   | rawDataSize 
>| 6  |
> |   | totalSize   
>| 7  |
> |   | transient_lastDdlTime   
>| 1561976024 |
> |   | NULL
>| NULL   |
> {code}
> But in case of a direct metastore query (from hive's sys schema, but the same 
> result for direct postgres), the query returns:
> {code}
> +-++-+
> | pp.part_id  |  pp.param_key  | pp.param_value  |
> +-++-+
> | 151 | rawDataSize| 28629   |
> | 151 | numRows| 28628   |
> | 151 | transient_lastDdlTime  | 28627   |
> | 151 | COLUMN_STATS_ACCURATE  | 28626   |
> | 151 | numFiles   | 28625   |
> | 151 | totalSize  | 28622   |
> +-++-+
> {code}
> param_value values above are large object ids, according to pg_dump
> | 151 | COLUMN_STATS_ACCURATE  | 28626   |
> {code}
> SELECT pg_catalog.lo_open('28626', 131072);
> SELECT pg_catalog.lowrite(0, 
> '\x7b2242415349435f5354415453223a2274727565222c22434f4c554d4e5f5354415453223a7b22636f6c31223a2274727565222c22636f6c33223a2274727565227d7d');
> SELECT pg_catalog.lo_close(0);
> {code}
> decoded large object value:
> {code}
> {"BASIC_STATS":"true","COLUMN_STATS":{"col1":"true","col3":"true"}}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21940) Metastore: Postgres text <-> clob mismatch for PARTITION_PARAMS/PARAM_VALUE

2019-07-01 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21940:

Description: 
The issue is reproducible on a cluster with the following statements:
{code}
USE default;
drop table if exists my_table;
create external table my_table (col1 int, col3 int) partitioned by (col2 
string) STORED AS TEXTFILE;
insert into my_table VALUES(11,201,"F");
SELECT pp.* FROM sys.partition_params pp join sys.partitions p on p.part_id = 
pp.part_id join sys.tbls t on t.tbl_id = p.tbl_id where t.tbl_name = "my_table";
{code}

Seems like (probably) since HIVE-20833/HIVE-20221 there is an inconvenience 
while using PARTITION_PARAMS/PARAM_VALUE, because in postgres there is no such 
type as CLOB, and metastore simply saves large object ids into this field. More 
interesting is that the large object can be resolved in some codepaths. In case 
of a describe for partition it works correctly:
{code}
describe formatted my_table_for_sqoop partition (col2='F');

...

| Partition Parameters: | NULL  
 | NULL   |
|   | COLUMN_STATS_ACCURATE 
 | 
{\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"col1\":\"true\",\"col3\":\"true\"}}
 |
|   | numFiles  
 | 1  |
|   | numRows   
 | 1  |
|   | rawDataSize   
 | 6  |
|   | totalSize 
 | 7  |
|   | transient_lastDdlTime 
 | 1561976024 |
|   | NULL  
 | NULL   |

{code}

But in case of a direct metastore query (from hive's sys schema, but the same 
result for direct postgres), the query returns:
{code}
+-++-+
| pp.part_id  |  pp.param_key  | pp.param_value  |
+-++-+
| 151 | rawDataSize| 28629   |
| 151 | numRows| 28628   |
| 151 | transient_lastDdlTime  | 28627   |
| 151 | COLUMN_STATS_ACCURATE  | 28626   |
| 151 | numFiles   | 28625   |
| 151 | totalSize  | 28622   |
+-++-+
{code}

param_value values above are large object ids, according to pg_dump
{code}
SELECT pg_catalog.lo_open('28626', 131072);
SELECT pg_catalog.lowrite(0, 
'\x7b2242415349435f5354415453223a2274727565222c22434f4c554d4e5f5354415453223a7b22636f6c31223a2274727565222c22636f6c33223a2274727565227d7d');
SELECT pg_catalog.lo_close(0);

| 151 | COLUMN_STATS_ACCURATE  | 28626   |
{code}
decoded large object value:
{code}
```{"BASIC_STATS":"true","COLUMN_STATS":{"col1":"true","col3":"true"}}```
{code}


  was:
The issue is reproducible on a cluster with the following statements:
{code}
USE default;
drop table if exists my_table;
create external table my_table (col1 int, col3 int) partitioned by (col2 
string) STORED AS TEXTFILE;
insert into my_table VALUES(11,201,"F");
SELECT pp.* FROM sys.partition_params pp join sys.partitions p on p.part_id = 
pp.part_id join sys.tbls t on t.tbl_id = p.tbl_id where t.tbl_name = "my_table";
{code}

Seems like (probably) since HIVE-20833/HIVE-20221 there is an inconvenience 
while using PARTITION_PARAMS/PARAM_VALUE, because in postgres there is no such 
type as CLOB, and metastore simply saves large object ids in this field. More 
interesting is that the large object can be resolved in some codepath. In case 
of a describe for partition:
{code}
describe formatted my_table_for_sqoop partition (col2='F');

...

| Partition Parameters: | NULL  
 | NULL   |
|   | COLUMN_STATS_ACCURATE 
 | 
{\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"col1\":\"true\",\"col3\":\"true\"}}
 |
|   | numFiles  
 | 1  |
|   | numRows   
 | 1  |
|   

[jira] [Updated] (HIVE-21940) Metastore: Postgres text <-> clob mismatch for PARTITION_PARAMS/PARAM_VALUE

2019-07-01 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-21940:

Description: 
The issue is reproducible on a cluster with the following statements:
{code}
USE default;
drop table if exists my_table;
create external table my_table (col1 int, col3 int) partitioned by (col2 
string) STORED AS TEXTFILE;
insert into my_table VALUES(11,201,"F");
SELECT pp.* FROM sys.partition_params pp join sys.partitions p on p.part_id = 
pp.part_id join sys.tbls t on t.tbl_id = p.tbl_id where t.tbl_name = "my_table";
{code}

Seems like (probably) since HIVE-20833/HIVE-20221 there is an inconvenience 
while using PARTITION_PARAMS/PARAM_VALUE, because in postgres there is no such 
type as CLOB, and metastore simply saves large object ids in this field. More 
interesting is that the large object can be resolved in some codepath. In case 
of a describe for partition:
{code}
describe formatted my_table_for_sqoop partition (col2='F');

...

| Partition Parameters: | NULL  
 | NULL   |
|   | COLUMN_STATS_ACCURATE 
 | 
{\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"col1\":\"true\",\"col3\":\"true\"}}
 |
|   | numFiles  
 | 1  |
|   | numRows   
 | 1  |
|   | rawDataSize   
 | 6  |
|   | totalSize 
 | 7  |
|   | transient_lastDdlTime 
 | 1561976024 |
|   | NULL  
 | NULL   |

{code}

But in case of a direct metastore query (from hive's sys schema, but the same 
result for direct postgres), the query returns:
{code}
+-++-+
| pp.part_id  |  pp.param_key  | pp.param_value  |
+-++-+
| 151 | rawDataSize| 28629   |
| 151 | numRows| 28628   |
| 151 | transient_lastDdlTime  | 28627   |
| 151 | COLUMN_STATS_ACCURATE  | 28626   |
| 151 | numFiles   | 28625   |
| 151 | totalSize  | 28622   |
+-++-+
{code}


> Metastore: Postgres text <-> clob mismatch for PARTITION_PARAMS/PARAM_VALUE
> ---
>
> Key: HIVE-21940
> URL: https://issues.apache.org/jira/browse/HIVE-21940
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
>
> The issue is reproducible on a cluster with the following statements:
> {code}
> USE default;
> drop table if exists my_table;
> create external table my_table (col1 int, col3 int) partitioned by (col2 
> string) STORED AS TEXTFILE;
> insert into my_table VALUES(11,201,"F");
> SELECT pp.* FROM sys.partition_params pp join sys.partitions p on p.part_id = 
> pp.part_id join sys.tbls t on t.tbl_id = p.tbl_id where t.tbl_name = 
> "my_table";
> {code}
> Seems like (probably) since HIVE-20833/HIVE-20221 there is an inconvenience 
> while using PARTITION_PARAMS/PARAM_VALUE, because in postgres there is no 
> such type as CLOB, and metastore simply saves large object ids in this field. 
> More interesting is that the large object can be resolved in some codepath. 
> In case of a describe for partition:
> {code}
> describe formatted my_table_for_sqoop partition (col2='F');
> ...
> | Partition Parameters: | NULL
>| NULL   |
> |   | COLUMN_STATS_ACCURATE   
>| 
> {\"BASIC_STATS\":\"true\",\"COLUMN_STATS\":{\"col1\":\"true\",\"col3\":\"true\"}}
>  |
> |   | numFiles
>| 1  |
> |   | numRows 
>| 1  |
> |   | rawDataSize   

[jira] [Assigned] (HIVE-21580) Introduce ISO 8601 week numbering SQL:2016 formats

2019-07-01 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage reassigned HIVE-21580:


Assignee: Karen Coppage

> Introduce ISO 8601 week numbering SQL:2016 formats
> --
>
> Key: HIVE-21580
> URL: https://issues.apache.org/jira/browse/HIVE-21580
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>
> Enable Hive to parse the following datetime formats when any 
> combination/subset of these or previously implemented patterns is provided in 
> one string. Also catch combinations that conflict.
>  * IYYY
>  * IYY
>  * IY
>  * I
>  * IW
> [https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit]
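
Purely as an illustration of what the ISO 8601 week-numbering patterns stand for (this is java.time, not Hive's formatter API), IYYY and IW correspond to the week-based year and the week of the week-based year:

{code}
import java.time.LocalDate;
import java.time.temporal.IsoFields;

public class IsoWeekFieldsSketch {
  public static void main(String[] args) {
    // 2018-12-31 falls in ISO week 1 of week-based year 2019.
    LocalDate date = LocalDate.of(2018, 12, 31);
    int weekBasedYear = date.get(IsoFields.WEEK_BASED_YEAR);       // IYYY -> 2019
    int weekOfYear = date.get(IsoFields.WEEK_OF_WEEK_BASED_YEAR);  // IW   -> 1
    System.out.printf("IYYY=%d IW=%02d%n", weekBasedYear, weekOfYear);
  }
}
{code}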



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21579) Introduce more complex SQL:2016 datetime formats

2019-07-01 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21579:
-
Attachment: HIVE-21579.01.patch
Status: Patch Available  (was: Open)

> Introduce more complex SQL:2016 datetime formats
> 
>
> Key: HIVE-21579
> URL: https://issues.apache.org/jira/browse/HIVE-21579
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Priority: Major
> Attachments: HIVE-21579.01.patch
>
>
> Enable Hive to parse the following datetime formats when any 
> combination/subset of these or previously implemented patterns is provided in 
> one string. Also catch combinations that conflict.
>  * MONTH
>  * MON
>  * D
>  * DAY
>  * DY
>  * Q
>  * WW
>  * W
> [https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21579) Introduce more complex SQL:2016 datetime formats

2019-07-01 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage reassigned HIVE-21579:


Assignee: Karen Coppage

> Introduce more complex SQL:2016 datetime formats
> 
>
> Key: HIVE-21579
> URL: https://issues.apache.org/jira/browse/HIVE-21579
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21579.01.patch
>
>
> Enable Hive to parse the following datetime formats when any 
> combination/subset of these or previously implemented patterns is provided in 
> one string. Also catch combinations that conflict.
>  * MONTH
>  * MON
>  * D
>  * DAY
>  * DY
>  * Q
>  * WW
>  * W
> [https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21940) Metastore: Postgres text <-> clob mismatch for PARTITION_PARAMS/PARAM_VALUE

2019-07-01 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor reassigned HIVE-21940:
---

Assignee: Laszlo Bodor

> Metastore: Postgres text <-> clob mismatch for PARTITION_PARAMS/PARAM_VALUE
> ---
>
> Key: HIVE-21940
> URL: https://issues.apache.org/jira/browse/HIVE-21940
> Project: Hive
>  Issue Type: Bug
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21939) protoc:2.5.0 dependence has broken building on aarch64

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-21939:
--
Labels: pull-request-available  (was: )

> protoc:2.5.0  dependence has broken building on aarch64
> ---
>
> Key: HIVE-21939
> URL: https://issues.apache.org/jira/browse/HIVE-21939
> Project: Hive
>  Issue Type: Bug
>Reporter: liusheng
>Assignee: liusheng
>Priority: Blocker
>  Labels: pull-request-available
>
> When I try to build master of Hive from source code on "aarch64" server, I 
> met following error:
> [ERROR] Failed to execute goal 
> com.github.os72:protoc-jar-maven-plugin:3.5.1.1:run (default) on project 
> hive-standalone-metastore-common: Error resolving artifact: 
> com.google.protobuf:protoc:2.5.0: Could not find artifact 
> com.google.protobuf:protoc:exe:linux-aarch_64:2.5.0 in central 
> ([https://repo.maven.apache.org/maven2)]
> that is because Hive using the "com.google.protobuf:protoc:2.5.0" as required 
> artifact, which does not have released package for "aarch64" platform.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21939) protoc:2.5.0 dependence has broken building on aarch64

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21939?focusedWorklogId=270098=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270098
 ]

ASF GitHub Bot logged work on HIVE-21939:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 12:09
Start Date: 01/Jul/19 12:09
Worklog Time Spent: 10m 
  Work Description: liu-sheng commented on pull request #694: HIVE-21939: 
Switch to use protoc of version 2.6.1-build3 than 2.5.0
URL: https://github.com/apache/hive/pull/694
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270098)
Time Spent: 10m
Remaining Estimate: 0h

> protoc:2.5.0  dependence has broken building on aarch64
> ---
>
> Key: HIVE-21939
> URL: https://issues.apache.org/jira/browse/HIVE-21939
> Project: Hive
>  Issue Type: Bug
>Reporter: liusheng
>Assignee: liusheng
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When I try to build master of Hive from source code on "aarch64" server, I 
> met following error:
> [ERROR] Failed to execute goal 
> com.github.os72:protoc-jar-maven-plugin:3.5.1.1:run (default) on project 
> hive-standalone-metastore-common: Error resolving artifact: 
> com.google.protobuf:protoc:2.5.0: Could not find artifact 
> com.google.protobuf:protoc:exe:linux-aarch_64:2.5.0 in central 
> ([https://repo.maven.apache.org/maven2)]
> that is because Hive using the "com.google.protobuf:protoc:2.5.0" as required 
> artifact, which does not have released package for "aarch64" platform.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21933) Remove unused methods from Utilities

2019-07-01 Thread Laszlo Bodor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876139#comment-16876139
 ] 

Laszlo Bodor commented on HIVE-21933:
-

+1

> Remove unused methods from Utilities
> 
>
> Key: HIVE-21933
> URL: https://issues.apache.org/jira/browse/HIVE-21933
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Ivan Suller
>Assignee: Ivan Suller
>Priority: Trivial
> Attachments: HIVE-21933.1.patch
>
>
> Over the years it seems org.apache.hadoop.hive.ql.exec.Utilities collected 
> many methods which are not used anymore. Removing them is the right thing to 
> do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21939) protoc:2.5.0 dependence has broken building on aarch64

2019-07-01 Thread liusheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated HIVE-21939:

Environment: (was: When I try to build master of Hive from source code 
on "aarch64" server, I met following error:

[ERROR] Failed to execute goal 
com.github.os72:protoc-jar-maven-plugin:3.5.1.1:run (default) on project 
hive-standalone-metastore-common: Error resolving artifact: 
com.google.protobuf:protoc:2.5.0: Could not find artifact 
com.google.protobuf:protoc:exe:linux-aarch_64:2.5.0 in central 
([https://repo.maven.apache.org/maven2)]

that is because Hive using the "com.google.protobuf:protoc:2.5.0" as required 
artifact, which does not have released package for "aarch64" platform.)

> protoc:2.5.0  dependence has broken building on aarch64
> ---
>
> Key: HIVE-21939
> URL: https://issues.apache.org/jira/browse/HIVE-21939
> Project: Hive
>  Issue Type: Bug
>Reporter: liusheng
>Assignee: liusheng
>Priority: Blocker
>
> When I try to build master of Hive from source code on "aarch64" server, I 
> met following error:
> [ERROR] Failed to execute goal 
> com.github.os72:protoc-jar-maven-plugin:3.5.1.1:run (default) on project 
> hive-standalone-metastore-common: Error resolving artifact: 
> com.google.protobuf:protoc:2.5.0: Could not find artifact 
> com.google.protobuf:protoc:exe:linux-aarch_64:2.5.0 in central 
> ([https://repo.maven.apache.org/maven2)]
> that is because Hive using the "com.google.protobuf:protoc:2.5.0" as required 
> artifact, which does not have released package for "aarch64" platform.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21939) protoc:2.5.0 dependence has broken building on aarch64

2019-07-01 Thread liusheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng updated HIVE-21939:

Description: 
When I try to build master of Hive from source code on an "aarch64" server, I get the 
following error:

[ERROR] Failed to execute goal 
com.github.os72:protoc-jar-maven-plugin:3.5.1.1:run (default) on project 
hive-standalone-metastore-common: Error resolving artifact: 
com.google.protobuf:protoc:2.5.0: Could not find artifact 
com.google.protobuf:protoc:exe:linux-aarch_64:2.5.0 in central 
([https://repo.maven.apache.org/maven2])

That is because Hive uses "com.google.protobuf:protoc:2.5.0" as a required 
artifact, which does not have a released package for the "aarch64" platform.

> protoc:2.5.0  dependence has broken building on aarch64
> ---
>
> Key: HIVE-21939
> URL: https://issues.apache.org/jira/browse/HIVE-21939
> Project: Hive
>  Issue Type: Bug
> Environment: When I try to build master of Hive from source code on 
> "aarch64" server, I met following error:
> [ERROR] Failed to execute goal 
> com.github.os72:protoc-jar-maven-plugin:3.5.1.1:run (default) on project 
> hive-standalone-metastore-common: Error resolving artifact: 
> com.google.protobuf:protoc:2.5.0: Could not find artifact 
> com.google.protobuf:protoc:exe:linux-aarch_64:2.5.0 in central 
> ([https://repo.maven.apache.org/maven2)]
> that is because Hive using the "com.google.protobuf:protoc:2.5.0" as required 
> artifact, which does not have released package for "aarch64" platform.
>Reporter: liusheng
>Assignee: liusheng
>Priority: Blocker
>
> When I try to build master of Hive from source code on "aarch64" server, I 
> met following error:
> [ERROR] Failed to execute goal 
> com.github.os72:protoc-jar-maven-plugin:3.5.1.1:run (default) on project 
> hive-standalone-metastore-common: Error resolving artifact: 
> com.google.protobuf:protoc:2.5.0: Could not find artifact 
> com.google.protobuf:protoc:exe:linux-aarch_64:2.5.0 in central 
> ([https://repo.maven.apache.org/maven2)]
> that is because Hive using the "com.google.protobuf:protoc:2.5.0" as required 
> artifact, which does not have released package for "aarch64" platform.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21939) protoc:2.5.0 dependence has broken building on aarch64

2019-07-01 Thread liusheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liusheng reassigned HIVE-21939:
---


> protoc:2.5.0  dependence has broken building on aarch64
> ---
>
> Key: HIVE-21939
> URL: https://issues.apache.org/jira/browse/HIVE-21939
> Project: Hive
>  Issue Type: Bug
> Environment: When I try to build master of Hive from source code on 
> "aarch64" server, I met following error:
> [ERROR] Failed to execute goal 
> com.github.os72:protoc-jar-maven-plugin:3.5.1.1:run (default) on project 
> hive-standalone-metastore-common: Error resolving artifact: 
> com.google.protobuf:protoc:2.5.0: Could not find artifact 
> com.google.protobuf:protoc:exe:linux-aarch_64:2.5.0 in central 
> ([https://repo.maven.apache.org/maven2)]
> that is because Hive using the "com.google.protobuf:protoc:2.5.0" as required 
> artifact, which does not have released package for "aarch64" platform.
>Reporter: liusheng
>Assignee: liusheng
>Priority: Blocker
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876137#comment-16876137
 ] 

Hive QA commented on HIVE-21910:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973314/HIVE-21910.3.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 16362 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=232)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17800/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17800/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17800/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973314 - PreCommit-HIVE-Build

> Multiple target location generation in HostAffinitySplitLocationProvider
> 
>
> Key: HIVE-21910
> URL: https://issues.apache.org/jira/browse/HIVE-21910
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21910.2.patch, HIVE-21910.3.patch, HIVE-21910.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We need to generate multiple target locations by 
> HostAffinitySplitLocationProvider, so we will have deterministic fallback 
> nodes in case the target node is disabled



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876114#comment-16876114
 ] 

Hive QA commented on HIVE-21910:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
24s{color} | {color:blue} llap-tez in master has 17 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
0s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 5 new + 41 unchanged - 1 fixed 
= 46 total (was 42) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
12s{color} | {color:red} ql generated 1 new + 2252 unchanged - 1 fixed = 2253 
total (was 2253) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Null passed for non-null parameter of new java.util.HashSet(Collection) 
in new 
org.apache.hadoop.hive.ql.exec.tez.HostAffinitySplitLocationProvider(List, 
boolean, int)  Method invoked at HostAffinitySplitLocationProvider.java:of new 
java.util.HashSet(Collection) in new 
org.apache.hadoop.hive.ql.exec.tez.HostAffinitySplitLocationProvider(List, 
boolean, int)  Method invoked at HostAffinitySplitLocationProvider.java:[line 
68] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17800/dev-support/hive-personality.sh
 |
| git revision | master / be6bf93 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17800/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17800/yetus/new-findbugs-ql.html
 |
| modules | C: common llap-tez ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17800/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Multiple target location generation in HostAffinitySplitLocationProvider
> 
>
> Key: HIVE-21910
> URL: https://issues.apache.org/jira/browse/HIVE-21910
> Project: Hive
>  

[jira] [Updated] (HIVE-21437) Vectorization: Decimal64 division with integer columns

2019-07-01 Thread Attila Magyar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-21437:
-
Attachment: HIVE-21437.5.patch

> Vectorization: Decimal64 division with integer columns
> --
>
> Key: HIVE-21437
> URL: https://issues.apache.org/jira/browse/HIVE-21437
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Attila Magyar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21437.1.patch, HIVE-21437.2.patch, 
> HIVE-21437.3.patch, HIVE-21437.4.patch, HIVE-21437.5.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Vectorizer fails for
> {code}
> CREATE temporary TABLE `catalog_Sales`(
>   `cs_quantity` int, 
>   `cs_wholesale_cost` decimal(7,2), 
>   `cs_list_price` decimal(7,2), 
>   `cs_sales_price` decimal(7,2), 
>   `cs_ext_discount_amt` decimal(7,2), 
>   `cs_ext_sales_price` decimal(7,2), 
>   `cs_ext_wholesale_cost` decimal(7,2), 
>   `cs_ext_list_price` decimal(7,2), 
>   `cs_ext_tax` decimal(7,2), 
>   `cs_coupon_amt` decimal(7,2), 
>   `cs_ext_ship_cost` decimal(7,2), 
>   `cs_net_paid` decimal(7,2), 
>   `cs_net_paid_inc_tax` decimal(7,2), 
>   `cs_net_paid_inc_ship` decimal(7,2), 
>   `cs_net_paid_inc_ship_tax` decimal(7,2), 
>   `cs_net_profit` decimal(7,2))
>  ;
> explain vectorization detail select max((((cs_ext_list_price - 
> cs_ext_wholesale_cost) - cs_ext_discount_amt) + cs_ext_sales_price) / 2) from 
> catalog_sales;
> {code}
> {code}
> 'Map Vectorization:'
> 'enabled: true'
> 'enabledConditionsMet: 
> hive.vectorized.use.vectorized.input.format IS true'
> 'inputFileFormats: 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> 'notVectorizedReason: SELECT operator: Could not instantiate 
> DecimalColDivideDecimalScalar with arguments arguments: [21, 20, 22], 
> argument classes: [Integer, Integer, Integer], exception: 
> java.lang.IllegalArgumentException: java.lang.ClassCastException@63b56be0 
> stack trace: 
> sun.reflect.GeneratedConstructorAccessor.newInstance(Unknown 
> Source), 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45),
>  java.lang.reflect.Constructor.newInstance(Constructor.java:423), 
> org.apache.hadoop.hive.ql.exec.vector.VectorizationContext.instantiateExpression(VectorizationContext.java:2088),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4662),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4602),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.vectorizeSelectOperator(Vectorizer.java:4584),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperator(Vectorizer.java:5171),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChild(Vectorizer.java:923),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChildren(Vectorizer.java:809),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperatorTree(Vectorizer.java:776),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.access$2400(Vectorizer.java:240),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2038),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:1990),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapWork(Vectorizer.java:1963),
>  ...'
> 'vectorized: false'
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21437) Vectorization: Decimal64 division with integer columns

2019-07-01 Thread Attila Magyar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-21437:
-
Status: Patch Available  (was: Open)

> Vectorization: Decimal64 division with integer columns
> --
>
> Key: HIVE-21437
> URL: https://issues.apache.org/jira/browse/HIVE-21437
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Attila Magyar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21437.1.patch, HIVE-21437.2.patch, 
> HIVE-21437.3.patch, HIVE-21437.4.patch, HIVE-21437.5.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Vectorizer fails for
> {code}
> CREATE temporary TABLE `catalog_Sales`(
>   `cs_quantity` int, 
>   `cs_wholesale_cost` decimal(7,2), 
>   `cs_list_price` decimal(7,2), 
>   `cs_sales_price` decimal(7,2), 
>   `cs_ext_discount_amt` decimal(7,2), 
>   `cs_ext_sales_price` decimal(7,2), 
>   `cs_ext_wholesale_cost` decimal(7,2), 
>   `cs_ext_list_price` decimal(7,2), 
>   `cs_ext_tax` decimal(7,2), 
>   `cs_coupon_amt` decimal(7,2), 
>   `cs_ext_ship_cost` decimal(7,2), 
>   `cs_net_paid` decimal(7,2), 
>   `cs_net_paid_inc_tax` decimal(7,2), 
>   `cs_net_paid_inc_ship` decimal(7,2), 
>   `cs_net_paid_inc_ship_tax` decimal(7,2), 
>   `cs_net_profit` decimal(7,2))
>  ;
> explain vectorization detail select max((((cs_ext_list_price - 
> cs_ext_wholesale_cost) - cs_ext_discount_amt) + cs_ext_sales_price) / 2) from 
> catalog_sales;
> {code}
> {code}
> 'Map Vectorization:'
> 'enabled: true'
> 'enabledConditionsMet: 
> hive.vectorized.use.vectorized.input.format IS true'
> 'inputFileFormats: 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> 'notVectorizedReason: SELECT operator: Could not instantiate 
> DecimalColDivideDecimalScalar with arguments arguments: [21, 20, 22], 
> argument classes: [Integer, Integer, Integer], exception: 
> java.lang.IllegalArgumentException: java.lang.ClassCastException@63b56be0 
> stack trace: 
> sun.reflect.GeneratedConstructorAccessor.newInstance(Unknown 
> Source), 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45),
>  java.lang.reflect.Constructor.newInstance(Constructor.java:423), 
> org.apache.hadoop.hive.ql.exec.vector.VectorizationContext.instantiateExpression(VectorizationContext.java:2088),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4662),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4602),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.vectorizeSelectOperator(Vectorizer.java:4584),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperator(Vectorizer.java:5171),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChild(Vectorizer.java:923),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChildren(Vectorizer.java:809),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperatorTree(Vectorizer.java:776),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.access$2400(Vectorizer.java:240),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2038),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:1990),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapWork(Vectorizer.java:1963),
>  ...'
> 'vectorized: false'
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21437) Vectorization: Decimal64 division with integer columns

2019-07-01 Thread Attila Magyar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-21437:
-
Status: Open  (was: Patch Available)

> Vectorization: Decimal64 division with integer columns
> --
>
> Key: HIVE-21437
> URL: https://issues.apache.org/jira/browse/HIVE-21437
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Attila Magyar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21437.1.patch, HIVE-21437.2.patch, 
> HIVE-21437.3.patch, HIVE-21437.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Vectorizer fails for
> {code}
> CREATE temporary TABLE `catalog_Sales`(
>   `cs_quantity` int, 
>   `cs_wholesale_cost` decimal(7,2), 
>   `cs_list_price` decimal(7,2), 
>   `cs_sales_price` decimal(7,2), 
>   `cs_ext_discount_amt` decimal(7,2), 
>   `cs_ext_sales_price` decimal(7,2), 
>   `cs_ext_wholesale_cost` decimal(7,2), 
>   `cs_ext_list_price` decimal(7,2), 
>   `cs_ext_tax` decimal(7,2), 
>   `cs_coupon_amt` decimal(7,2), 
>   `cs_ext_ship_cost` decimal(7,2), 
>   `cs_net_paid` decimal(7,2), 
>   `cs_net_paid_inc_tax` decimal(7,2), 
>   `cs_net_paid_inc_ship` decimal(7,2), 
>   `cs_net_paid_inc_ship_tax` decimal(7,2), 
>   `cs_net_profit` decimal(7,2))
>  ;
> explain vectorization detail select max((((cs_ext_list_price - 
> cs_ext_wholesale_cost) - cs_ext_discount_amt) + cs_ext_sales_price) / 2) from 
> catalog_sales;
> {code}
> {code}
> 'Map Vectorization:'
> 'enabled: true'
> 'enabledConditionsMet: 
> hive.vectorized.use.vectorized.input.format IS true'
> 'inputFileFormats: 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> 'notVectorizedReason: SELECT operator: Could not instantiate 
> DecimalColDivideDecimalScalar with arguments arguments: [21, 20, 22], 
> argument classes: [Integer, Integer, Integer], exception: 
> java.lang.IllegalArgumentException: java.lang.ClassCastException@63b56be0 
> stack trace: 
> sun.reflect.GeneratedConstructorAccessor.newInstance(Unknown 
> Source), 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45),
>  java.lang.reflect.Constructor.newInstance(Constructor.java:423), 
> org.apache.hadoop.hive.ql.exec.vector.VectorizationContext.instantiateExpression(VectorizationContext.java:2088),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4662),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4602),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.vectorizeSelectOperator(Vectorizer.java:4584),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperator(Vectorizer.java:5171),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChild(Vectorizer.java:923),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChildren(Vectorizer.java:809),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperatorTree(Vectorizer.java:776),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.access$2400(Vectorizer.java:240),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2038),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:1990),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapWork(Vectorizer.java:1963),
>  ...'
> 'vectorized: false'
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21437) Vectorization: Decimal64 division with integer columns

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876090#comment-16876090
 ] 

Hive QA commented on HIVE-21437:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973309/HIVE-21437.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16360 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[transform_acid] 
(batchId=21)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17799/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17799/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17799/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973309 - PreCommit-HIVE-Build

> Vectorization: Decimal64 division with integer columns
> --
>
> Key: HIVE-21437
> URL: https://issues.apache.org/jira/browse/HIVE-21437
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Attila Magyar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21437.1.patch, HIVE-21437.2.patch, 
> HIVE-21437.3.patch, HIVE-21437.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Vectorizer fails for
> {code}
> CREATE temporary TABLE `catalog_Sales`(
>   `cs_quantity` int, 
>   `cs_wholesale_cost` decimal(7,2), 
>   `cs_list_price` decimal(7,2), 
>   `cs_sales_price` decimal(7,2), 
>   `cs_ext_discount_amt` decimal(7,2), 
>   `cs_ext_sales_price` decimal(7,2), 
>   `cs_ext_wholesale_cost` decimal(7,2), 
>   `cs_ext_list_price` decimal(7,2), 
>   `cs_ext_tax` decimal(7,2), 
>   `cs_coupon_amt` decimal(7,2), 
>   `cs_ext_ship_cost` decimal(7,2), 
>   `cs_net_paid` decimal(7,2), 
>   `cs_net_paid_inc_tax` decimal(7,2), 
>   `cs_net_paid_inc_ship` decimal(7,2), 
>   `cs_net_paid_inc_ship_tax` decimal(7,2), 
>   `cs_net_profit` decimal(7,2))
>  ;
> explain vectorization detail select max((((cs_ext_list_price - 
> cs_ext_wholesale_cost) - cs_ext_discount_amt) + cs_ext_sales_price) / 2) from 
> catalog_sales;
> {code}
> {code}
> 'Map Vectorization:'
> 'enabled: true'
> 'enabledConditionsMet: 
> hive.vectorized.use.vectorized.input.format IS true'
> 'inputFileFormats: 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> 'notVectorizedReason: SELECT operator: Could not instantiate 
> DecimalColDivideDecimalScalar with arguments arguments: [21, 20, 22], 
> argument classes: [Integer, Integer, Integer], exception: 
> java.lang.IllegalArgumentException: java.lang.ClassCastException@63b56be0 
> stack trace: 
> sun.reflect.GeneratedConstructorAccessor.newInstance(Unknown 
> Source), 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45),
>  java.lang.reflect.Constructor.newInstance(Constructor.java:423), 
> org.apache.hadoop.hive.ql.exec.vector.VectorizationContext.instantiateExpression(VectorizationContext.java:2088),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4662),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4602),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.vectorizeSelectOperator(Vectorizer.java:4584),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperator(Vectorizer.java:5171),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChild(Vectorizer.java:923),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChildren(Vectorizer.java:809),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperatorTree(Vectorizer.java:776),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.access$2400(Vectorizer.java:240),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2038),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:1990),
>  
> 

[jira] [Updated] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons

2019-07-01 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21911:
--
Attachment: HIVE-21911.2.patch

> Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
> -
>
> Key: HIVE-21911
> URL: https://issues.apache.org/jira/browse/HIVE-21911
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap, Tez
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21911.2.patch, HIVE-21911.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We need to have a way to plug in different listeners which act upon the 
> LlapDaemon statistics.
> This listener should be able to disable / resize the LlapDaemons based on 
> health data.
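
A hypothetical shape for such a listener, just to make the idea concrete (interface and method names below are assumptions for illustration, not the actual Hive API; imports omitted):

{code:java}
// A plug-in point the AM-side scheduler could call after each metrics collection
// round; an implementation may react by disabling or resizing individual daemons.
public interface DaemonMetricsListener {
  /** Fresh metrics arrived for one LLAP daemon. */
  void onDaemonMetrics(String daemonIdentity, LlapMetrics metrics);

  /** A full collection round finished; cluster-wide decisions can be made here. */
  void onClusterMetrics(Map<String, LlapMetrics> metricsPerDaemon);
}
{code}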



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21911?focusedWorklogId=270059=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270059
 ]

ASF GitHub Bot logged work on HIVE-21911:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 09:45
Start Date: 01/Jul/19 09:45
Worklog Time Spent: 10m 
  Work Description: pvary commented on pull request #691: HIVE-21911: 
Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
URL: https://github.com/apache/hive/pull/691#discussion_r298962751
 
 

 ##
 File path: 
llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/metrics/LlapMetricsCollector.java
 ##
 @@ -58,26 +61,44 @@
   private final Map llapClients;
   private final Map instanceStatisticsMap;
   private final long metricsCollectionMs;
+  @VisibleForTesting
+  final LlapMetricsListener listener;
 
 
-  public LlapMetricsCollector(Configuration conf) {
+  public LlapMetricsCollector(Configuration conf, LlapRegistryService 
registry) {
 this(
 conf,
 Executors.newSingleThreadScheduledExecutor(
 new 
ThreadFactoryBuilder().setDaemon(true).setNameFormat(THREAD_NAME)
 .build()),
-LlapManagementProtocolClientImplFactory.basicInstance(conf));
+LlapManagementProtocolClientImplFactory.basicInstance(conf),
+registry);
   }
 
   @VisibleForTesting
   LlapMetricsCollector(Configuration conf, ScheduledExecutorService 
scheduledMetricsExecutor,
-   LlapManagementProtocolClientImplFactory clientFactory) {
+   LlapManagementProtocolClientImplFactory clientFactory,
+   LlapRegistryService registry) {
 this.scheduledMetricsExecutor = scheduledMetricsExecutor;
 this.clientFactory = clientFactory;
 this.llapClients = new HashMap<>();
 this.instanceStatisticsMap = new ConcurrentHashMap<>();
 this.metricsCollectionMs = HiveConf.getTimeVar(conf,
 
HiveConf.ConfVars.LLAP_TASK_SCHEDULER_AM_COLLECT_DAEMON_METRICS_MS, 
TimeUnit.MILLISECONDS);
+String listenerClass = HiveConf.getVar(conf,
+
HiveConf.ConfVars.LLAP_TASK_SCHEDULER_AM_COLLECT_DAEMON_METRICS_LISTENER);
+if (Strings.isBlank(listenerClass)) {
+  listener = null;
+} else {
+  try {
+listener = 
(LlapMetricsListener)Class.forName(listenerClass.trim()).newInstance();
 
 Review comment:
   After some thought I decided against it.
   ReflectionUtil is specifically designed to create instances where a 
Configuration object is the only thing needed at instantiation time. Here 
we need an LlapRegistryService too. Adding two methods, setConf and setRegistry, and 
defining their call order and so on would result in code that is more complicated and 
harder to understand than the current version.
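
   For reference, a condensed sketch of the construction path being defended here (illustrative only; the actual code is the LlapMetricsCollector constructor quoted above, imports are omitted, and the init hook at the end is an assumption, not a confirmed API):

{code:java}
// Load the configured listener with plain reflection instead of ReflectionUtil,
// because the collector has to hand over an LlapRegistryService as well as the
// Configuration. Names other than the quoted config key are illustrative.
static LlapMetricsListener createListener(Configuration conf, LlapRegistryService registry)
    throws Exception {
  String listenerClass = HiveConf.getVar(conf,
      HiveConf.ConfVars.LLAP_TASK_SCHEDULER_AM_COLLECT_DAEMON_METRICS_LISTENER);
  if (listenerClass == null || listenerClass.trim().isEmpty()) {
    return null;  // no listener configured: the feature stays disabled
  }
  LlapMetricsListener listener =
      (LlapMetricsListener) Class.forName(listenerClass.trim()).newInstance();
  // Hypothetical initialization hook; the real listener contract may wire the
  // registry differently.
  // listener.init(conf, registry);
  return listener;
}
{code}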
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270059)
Time Spent: 1h 20m  (was: 1h 10m)

> Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
> -
>
> Key: HIVE-21911
> URL: https://issues.apache.org/jira/browse/HIVE-21911
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap, Tez
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21911.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We need to have a way to plug in different listeners which act upon the 
> LlapDaemon statistics.
> This listener should be able to disable / resize the LlapDaemons based on 
> health data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21437) Vectorization: Decimal64 division with integer columns

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876061#comment-16876061
 ] 

Hive QA commented on HIVE-21437:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
0s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 14 new + 470 unchanged - 0 
fixed = 484 total (was 470) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 15 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17799/dev-support/hive-personality.sh
 |
| git revision | master / be6bf93 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17799/yetus/diff-checkstyle-ql.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17799/yetus/whitespace-eol.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17799/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Vectorization: Decimal64 division with integer columns
> --
>
> Key: HIVE-21437
> URL: https://issues.apache.org/jira/browse/HIVE-21437
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Attila Magyar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21437.1.patch, HIVE-21437.2.patch, 
> HIVE-21437.3.patch, HIVE-21437.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Vectorizer fails for
> {code}
> CREATE temporary TABLE `catalog_Sales`(
>   `cs_quantity` int, 
>   `cs_wholesale_cost` decimal(7,2), 
>   `cs_list_price` decimal(7,2), 
>   `cs_sales_price` decimal(7,2), 
>   `cs_ext_discount_amt` decimal(7,2), 
>   `cs_ext_sales_price` decimal(7,2), 
>   `cs_ext_wholesale_cost` decimal(7,2), 
>   `cs_ext_list_price` decimal(7,2), 
>   `cs_ext_tax` decimal(7,2), 
>   `cs_coupon_amt` decimal(7,2), 
>   `cs_ext_ship_cost` decimal(7,2), 
>   `cs_net_paid` decimal(7,2), 
>   `cs_net_paid_inc_tax` decimal(7,2), 
>   `cs_net_paid_inc_ship` decimal(7,2), 
>   `cs_net_paid_inc_ship_tax` decimal(7,2), 
>   `cs_net_profit` decimal(7,2))
>  ;
> explain vectorization detail select 

[jira] [Updated] (HIVE-21874) Implement add partitions related methods on temporary table

2019-07-01 Thread Laszlo Pinter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter updated HIVE-21874:
-
Attachment: HIVE-21874.05.patch

> Implement add partitions related methods on temporary table
> ---
>
> Key: HIVE-21874
> URL: https://issues.apache.org/jira/browse/HIVE-21874
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-21874.01.patch, HIVE-21874.02.patch, 
> HIVE-21874.03.patch, HIVE-21874.04.patch, HIVE-21874.05.patch
>
>
> IMetaStoreClient exposes the following add partition related methods:
> {code:java}
> Partition add_partition(Partition partition);
> int add_partitions(List<Partition> partitions);
> int add_partitions_pspec(PartitionSpecProxy partitionSpec);
> List<Partition> add_partitions(List<Partition> partitions, boolean 
> ifNotExists, boolean needResults);
> {code}
> These methods should be implemented in order to handle addition of partitions 
> to temporary tables.
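
As an illustration of what the temporary-table support has to cover, a minimal usage sketch of the methods listed above (database, table and partition values are made up, storage descriptor setup, imports and error handling are omitted):

{code:java}
// Build one partition for a temporary table and add it through the metastore client.
// With temporary tables these calls must be served from the session-local store
// rather than the remote metastore.
IMetaStoreClient client = new HiveMetaStoreClient(new HiveConf());
Partition p = new Partition();
p.setDbName("default");
p.setTableName("tmp_sales");                           // assumed temporary table
p.setValues(Collections.singletonList("2019-07-01"));  // one partition key value
client.add_partition(p);                               // single-partition variant
client.add_partitions(Collections.singletonList(p));   // batch variant, returns count
{code}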



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21874) Implement add partitions related methods on temporary table

2019-07-01 Thread Laszlo Pinter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter updated HIVE-21874:
-
Attachment: (was: HIVE-18735.05.patch)

> Implement add partitions related methods on temporary table
> ---
>
> Key: HIVE-21874
> URL: https://issues.apache.org/jira/browse/HIVE-21874
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-21874.01.patch, HIVE-21874.02.patch, 
> HIVE-21874.03.patch, HIVE-21874.04.patch, HIVE-21874.05.patch
>
>
> IMetaStoreClient exposes the following add partition related methods:
> {code:java}
> Partition add_partition(Partition partition);
> int add_partitions(List<Partition> partitions);
> int add_partitions_pspec(PartitionSpecProxy partitionSpec);
> List<Partition> add_partitions(List<Partition> partitions, boolean 
> ifNotExists, boolean needResults);
> {code}
> These methods should be implemented in order to handle addition of partitions 
> to temporary tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21874) Implement add partitions related methods on temporary table

2019-07-01 Thread Laszlo Pinter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter updated HIVE-21874:
-
Attachment: HIVE-21874.05.patch

> Implement add partitions related methods on temporary table
> ---
>
> Key: HIVE-21874
> URL: https://issues.apache.org/jira/browse/HIVE-21874
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-18735.05.patch, HIVE-21874.01.patch, 
> HIVE-21874.02.patch, HIVE-21874.03.patch, HIVE-21874.04.patch
>
>
> IMetaStoreClient exposes the following add partition related methods:
> {code:java}
> Partition add_partition(Partition partition);
> int add_partitions(List<Partition> partitions);
> int add_partitions_pspec(PartitionSpecProxy partitionSpec);
> List<Partition> add_partitions(List<Partition> partitions, boolean 
> ifNotExists, boolean needResults);
> {code}
> These methods should be implemented in order to handle addition of partitions 
> to temporary tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21874) Implement add partitions related methods on temporary table

2019-07-01 Thread Laszlo Pinter (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter updated HIVE-21874:
-
Attachment: (was: HIVE-21874.05.patch)

> Implement add partitions related methods on temporary table
> ---
>
> Key: HIVE-21874
> URL: https://issues.apache.org/jira/browse/HIVE-21874
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Laszlo Pinter
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-18735.05.patch, HIVE-21874.01.patch, 
> HIVE-21874.02.patch, HIVE-21874.03.patch, HIVE-21874.04.patch
>
>
> IMetaStoreClient exposes the following add partition related methods:
> {code:java}
> Partition add_partition(Partition partition);
> int add_partitions(List<Partition> partitions);
> int add_partitions_pspec(PartitionSpecProxy partitionSpec);
> List<Partition> add_partitions(List<Partition> partitions, boolean 
> ifNotExists, boolean needResults);
> {code}
> These methods should be implemented in order to handle addition of partitions 
> to temporary tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21578) Introduce SQL:2016 formats FM, FX, and nested strings

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876036#comment-16876036
 ] 

Hive QA commented on HIVE-21578:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973307/HIVE-21578.02.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 16361 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=232)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17798/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17798/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17798/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973307 - PreCommit-HIVE-Build

> Introduce SQL:2016 formats FM, FX, and nested strings
> -
>
> Key: HIVE-21578
> URL: https://issues.apache.org/jira/browse/HIVE-21578
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21578.01.patch, HIVE-21578.02.patch, 
> HIVE-21578.02.patch, HIVE-21578.03.patch
>
>
> Enable Hive to parse the following datetime formats when any combination or 
> subset of these or previously implemented formats is provided in one string. 
>  * "text" (nested strings)
>  * FM
>  * FX
> [Definitions 
> here|https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons

2019-07-01 Thread Adam Szita (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876033#comment-16876033
 ] 

Adam Szita commented on HIVE-21911:
---

Hi Peter, patch looks good to me.

I would've added a new constructor with an LlapServiceRegistry parameter in 
LlapMetricsCollector, instead of modifying the original one, but I will leave 
this up to you as there are not too many usages of this class yet.

+1
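
A rough sketch of the overload mentioned above (hypothetical; the patch instead changes the existing constructor signature):

{code:java}
// Keep the old single-argument constructor compiling by delegating to a new
// overload that also accepts the registry. Field wiring is left out; names
// follow the quoted diff but the delegation itself is only an illustration.
public LlapMetricsCollector(Configuration conf) {
  this(conf, null);  // existing callers keep working, just without a registry
}

public LlapMetricsCollector(Configuration conf, LlapRegistryService registry) {
  // original initialization, plus wiring up the registry-backed listener
}
{code}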

> Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
> -
>
> Key: HIVE-21911
> URL: https://issues.apache.org/jira/browse/HIVE-21911
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap, Tez
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21911.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We need to have a way to plug in different listeners which act upon the 
> LlapDaemon statistics.
> This listener should be able to disable / resize the LlapDaemons based on 
> health data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21578) Introduce SQL:2016 formats FM, FX, and nested strings

2019-07-01 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21578:
-
Attachment: HIVE-21578.03.patch
Status: Patch Available  (was: Open)

> Introduce SQL:2016 formats FM, FX, and nested strings
> -
>
> Key: HIVE-21578
> URL: https://issues.apache.org/jira/browse/HIVE-21578
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21578.01.patch, HIVE-21578.02.patch, 
> HIVE-21578.02.patch, HIVE-21578.03.patch
>
>
> Enable Hive to parse the following datetime formats when any combination or 
> subset of these or previously implemented formats is provided in one string. 
>  * "text" (nested strings)
>  * FM
>  * FX
> [Definitions 
> here|https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21578) Introduce SQL:2016 formats FM, FX, and nested strings

2019-07-01 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21578:
-
Status: Open  (was: Patch Available)

> Introduce SQL:2016 formats FM, FX, and nested strings
> -
>
> Key: HIVE-21578
> URL: https://issues.apache.org/jira/browse/HIVE-21578
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21578.01.patch, HIVE-21578.02.patch, 
> HIVE-21578.02.patch
>
>
> Enable Hive to parse the following datetime formats when any combination or 
> subset of these or previously implemented formats is provided in one string. 
>  * "text" (nested strings)
>  * FM
>  * FX
> [Definitions 
> here|https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21911?focusedWorklogId=270037=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270037
 ]

ASF GitHub Bot logged work on HIVE-21911:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 08:40
Start Date: 01/Jul/19 08:40
Worklog Time Spent: 10m 
  Work Description: pvary commented on pull request #691: HIVE-21911: 
Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
URL: https://github.com/apache/hive/pull/691#discussion_r298936551
 
 

 ##
 File path: 
llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/metrics/LlapMetricsCollector.java
 ##
 @@ -101,13 +122,27 @@ void collectMetrics() {
 LlapDaemonProtocolProtos.GetDaemonMetricsResponseProto metrics =
 client.getDaemonMetrics(null,
 
LlapDaemonProtocolProtos.GetDaemonMetricsRequestProto.newBuilder().build());
-instanceStatisticsMap.put(identity, new LlapMetrics(metrics));
-
+LlapMetrics newMetrics = new LlapMetrics(metrics);
 
 Review comment:
   Yes, I see several use-cases where we need the statistics without the listener. 
For example: it might be a good idea that, when we choose a random node for task 
execution, we distribute these tasks based on the available executors.
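
   A small illustration of that idea (not part of the patch; the method name and the map it consumes are made up, imports omitted):

{code:java}
// Weight the "random" node choice by the executors each daemon reports as free,
// so lightly loaded daemons are picked proportionally more often.
static String pickNode(Map<String, Integer> freeExecutorsByHost, Random rnd) {
  int total = freeExecutorsByHost.values().stream().mapToInt(Integer::intValue).sum();
  List<String> hosts = new ArrayList<>(freeExecutorsByHost.keySet());
  if (total <= 0) {
    return hosts.get(rnd.nextInt(hosts.size()));  // no capacity info: fall back to uniform
  }
  int pick = rnd.nextInt(total);
  for (String host : hosts) {
    pick -= freeExecutorsByHost.get(host);
    if (pick < 0) {
      return host;  // this host's weight covered the drawn value
    }
  }
  return hosts.get(hosts.size() - 1);  // defensive fallback, not normally reached
}
{code}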
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270037)
Time Spent: 1h 10m  (was: 1h)

> Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
> -
>
> Key: HIVE-21911
> URL: https://issues.apache.org/jira/browse/HIVE-21911
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap, Tez
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21911.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We need to have a way to plug in different listeners which act upon the 
> LlapDaemon statistics.
> This listener should be able to disable / resize the LlapDaemons based on 
> health data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21911?focusedWorklogId=270036=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270036
 ]

ASF GitHub Bot logged work on HIVE-21911:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 08:39
Start Date: 01/Jul/19 08:39
Worklog Time Spent: 10m 
  Work Description: pvary commented on pull request #691: HIVE-21911: 
Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
URL: https://github.com/apache/hive/pull/691#discussion_r298935935
 
 

 ##
 File path: 
llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/metrics/LlapMetricsCollector.java
 ##
 @@ -58,26 +61,44 @@
   private final Map llapClients;
   private final Map instanceStatisticsMap;
   private final long metricsCollectionMs;
+  @VisibleForTesting
+  final LlapMetricsListener listener;
 
 Review comment:
   I was thinking along the same lines, but we do not need them now, so I 
decided to keep this as simple as possible.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270036)
Time Spent: 1h  (was: 50m)

> Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
> -
>
> Key: HIVE-21911
> URL: https://issues.apache.org/jira/browse/HIVE-21911
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap, Tez
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21911.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> We need to have a way to plug in different listeners which act upon the 
> LlapDaemon statistics.
> This listener should be able to disable / resize the LlapDaemons based on 
> health data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21911) Pluggable LlapMetricsListener on Tez side to disable / resize Daemons

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21911?focusedWorklogId=270035=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270035
 ]

ASF GitHub Bot logged work on HIVE-21911:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 08:38
Start Date: 01/Jul/19 08:38
Worklog Time Spent: 10m 
  Work Description: pvary commented on pull request #691: HIVE-21911: 
Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
URL: https://github.com/apache/hive/pull/691#discussion_r298935738
 
 

 ##
 File path: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
 ##
 @@ -4353,6 +4353,9 @@ private static void 
populateLlapDaemonVarsSet(Set llapDaemonVarsSetLocal
   new TimeValidator(TimeUnit.MILLISECONDS), "Collect llap daemon metrics 
in the AM every given milliseconds,\n" +
   "so that the AM can use this information, to make better scheduling 
decisions.\n" +
   "If it's set to 0, then the feature is disabled."),
+
LLAP_TASK_SCHEDULER_AM_COLLECT_DAEMON_METRICS_LISTENER("hive.llap.task.scheduler.am.collect.daemon.metrics.listener",
 "",
 
 Review comment:
   Since all of the related configs currently have AM in their names, I would keep 
it this way, and when we move this out we can rename them as needed.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270035)
Time Spent: 50m  (was: 40m)

> Pluggable LlapMetricsListener on Tez side to disable / resize Daemons
> -
>
> Key: HIVE-21911
> URL: https://issues.apache.org/jira/browse/HIVE-21911
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap, Tez
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21911.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> We need to have a way to plug in different listeners which act upon the 
> LlapDaemon statistics.
> This listener should be able to disable / resize the LlapDaemons based on 
> health data.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21578) Introduce SQL:2016 formats FM, FX, and nested strings

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876005#comment-16876005
 ] 

Hive QA commented on HIVE-21578:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
36s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
52s{color} | {color:blue} ql in master has 2253 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} common: The patch generated 3 new + 0 unchanged - 0 
fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17798/dev-support/hive-personality.sh
 |
| git revision | master / be6bf93 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17798/yetus/diff-checkstyle-common.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17798/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Introduce SQL:2016 formats FM, FX, and nested strings
> -
>
> Key: HIVE-21578
> URL: https://issues.apache.org/jira/browse/HIVE-21578
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21578.01.patch, HIVE-21578.02.patch, 
> HIVE-21578.02.patch
>
>
> Enable Hive to parse the following datetime formats when any combination or 
> subset of these or previously implemented formats is provided in one string. 
>  * "text" (nested strings)
>  * FM
>  * FX
> [Definitions 
> here|https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19831) Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if database/table already exists

2019-07-01 Thread Daniel Dai (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16876004#comment-16876004
 ] 

Daniel Dai commented on HIVE-19831:
---

Can you add a comment saying that when the db exists, we expect Hive to throw an 
exception, so there is no need to do the auth check? Otherwise, from the code alone, it 
looks like we simply skip the auth check whenever the db exists.
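
Something along these lines is what the requested comment could spell out (the helper names and structure below are illustrative, not the actual HiveServer2 code):

{code:java}
// When the database already exists, the subsequent CREATE DATABASE either throws
// AlreadyExistsException or is a no-op (IF NOT EXISTS), so nothing new gets created
// and there is nothing extra to authorize; skip the expensive per-object doAuth walk.
if (databaseExists(dbName)) {
  return;
}
authorizeCreateDatabase(dbName);
{code}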

> Hiveserver2 should skip doAuth checks for CREATE DATABASE/TABLE if 
> database/table already exists
> 
>
> Key: HIVE-19831
> URL: https://issues.apache.org/jira/browse/HIVE-19831
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-19831.patch
>
>
> With sqlstdauth on, CREATE DATABASE IF NOT EXISTS takes TOO LONG if there are too 
> many objects inside the database directory. Hive should not run the doAuth 
> checks for all the objects within the database if the database already exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21910) Multiple target location generation in HostAffinitySplitLocationProvider

2019-07-01 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-21910:
--
Attachment: HIVE-21910.3.patch

> Multiple target location generation in HostAffinitySplitLocationProvider
> 
>
> Key: HIVE-21910
> URL: https://issues.apache.org/jira/browse/HIVE-21910
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21910.2.patch, HIVE-21910.3.patch, HIVE-21910.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We need HostAffinitySplitLocationProvider to generate multiple target 
> locations, so that we have deterministic fallback 
> nodes in case the target node is disabled.
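
For illustration, a minimal self-contained sketch of deterministic fallback selection (this is an assumption about the general approach, not the actual HostAffinitySplitLocationProvider code; the real implementation may hash differently):

{code:java}
import java.util.ArrayList;
import java.util.List;

public final class DeterministicLocations {
  // Derive several candidate hosts for a split from a stable hash of its path, so a
  // disabled primary host always has the same fallback order across all tasks.
  public static List<String> pick(String splitPath, List<String> hosts, int numLocations) {
    List<String> result = new ArrayList<>();
    int n = hosts.size();
    if (n == 0) {
      return result;                                      // no hosts registered
    }
    int start = Math.floorMod(splitPath.hashCode(), n);   // stable primary location
    for (int i = 0; i < Math.min(numLocations, n); i++) {
      result.add(hosts.get((start + i) % n));             // later entries are the fallbacks
    }
    return result;
  }
}
{code}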



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270022=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270022
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 08:01
Start Date: 01/Jul/19 08:01
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #684: HIVE-21880 : 
Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r298921626
 
 

 ##
 File path: 
itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/TestConcurrentDbNotificationListener.java
 ##
 @@ -0,0 +1,201 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.hive.hcatalog.listener;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.fail;
+
+import java.util.concurrent.TimeUnit;
+import java.lang.reflect.Field;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+import java.util.Stack;
+
+import com.google.common.collect.ImmutableMap;
+import com.google.common.collect.Lists;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.cli.CliSessionState;
+import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
+import org.apache.hadoop.hive.metastore.IMetaStoreClient;
+import org.apache.hadoop.hive.metastore.MetaStoreEventListener;
+import org.apache.hadoop.hive.metastore.MetaStoreEventListenerConstants;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.Database;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.FireEventRequest;
+import org.apache.hadoop.hive.metastore.api.FireEventRequestData;
+import org.apache.hadoop.hive.metastore.api.Function;
+import org.apache.hadoop.hive.metastore.api.FunctionType;
+import org.apache.hadoop.hive.metastore.api.InsertEventRequestData;
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.api.NotificationEvent;
+import org.apache.hadoop.hive.metastore.api.NotificationEventResponse;
+import org.apache.hadoop.hive.metastore.api.NotificationEventsCountRequest;
+import org.apache.hadoop.hive.metastore.api.Partition;
+import org.apache.hadoop.hive.metastore.api.PrincipalType;
+import org.apache.hadoop.hive.metastore.api.ResourceType;
+import org.apache.hadoop.hive.metastore.api.ResourceUri;
+import org.apache.hadoop.hive.metastore.api.SerDeInfo;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
 
 Review comment:
   Are all these imports required?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270022)
Time Spent: 1h 10m  (was: 1h)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: 

[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270023=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270023
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 08:01
Start Date: 01/Jul/19 08:01
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #684: HIVE-21880 : 
Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r298922242
 
 

 ##
 File path: 
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
 ##
 @@ -3236,18 +3238,28 @@ public NotificationEventResponse 
getNextNotification(long lastEventId, int maxEv
 NotificationEventResponse filtered = new NotificationEventResponse();
 if (rsp != null && rsp.getEvents() != null) {
   long nextEventId = lastEventId + 1;
+  long prevEventId = lastEventId;
   for (NotificationEvent e : rsp.getEvents()) {
+LOG.debug("Got event with id : " + e.getEventId());
 if (e.getEventId() != nextEventId) {
-  LOG.error("Requested events are found missing in NOTIFICATION_LOG 
table. Expected: {}, Actual: {}. "
-  + "Probably, cleaner would've cleaned it up. "
-  + "Try setting higher value for 
hive.metastore.event.db.listener.timetolive. "
-  + "Also, bootstrap the system again to get back the 
consistent replicated state.",
-  nextEventId, e.getEventId());
-  throw new IllegalStateException(REPL_EVENTS_MISSING_IN_METASTORE);
+  if (e.getEventId() == prevEventId) {
+LOG.error("NOTIFICATION_LOG table has multiple events with the 
same event Id {}. " +
+"Something went wrong when inserting notification events.  
Bootstrap the system " +
+"again to get back teh consistent replicated state.", 
prevEventId);
 
 Review comment:
   I am not sure if we need to handle this separately. As per the code, this can 
never happen now.
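
A minimal, self-contained sketch of the check the diff above introduces, for readers following the thread. It only shows how tracking the previous event id separates duplicate ids from missing events; the class name, the main() driver and the message strings are placeholders, not the real HiveMetaStoreClient code.

{code}
import java.util.Arrays;
import java.util.List;

public class EventIdCheckSketch {
  // Placeholder messages standing in for the ErrorMsg constants used by the patch.
  static final String MISSING = "Notification events are missing in the meta store.";
  static final String DUPLICATE = "Notification events with duplicate event ids in the meta store.";

  static void validate(long lastEventId, List<Long> eventIds) {
    long nextEventId = lastEventId + 1;
    long prevEventId = lastEventId;
    for (long id : eventIds) {
      if (id != nextEventId) {
        if (id == prevEventId) {
          // Same id seen twice: duplicate rows in NOTIFICATION_LOG, not the cleaner's doing.
          throw new IllegalStateException(DUPLICATE);
        }
        // Id jumped past the expected value: events were removed, e.g. by the cleaner.
        throw new IllegalStateException(MISSING);
      }
      prevEventId = id;
      nextEventId++;
    }
  }

  public static void main(String[] args) {
    validate(10, Arrays.asList(11L, 12L, 13L)); // consecutive ids pass
    try {
      validate(10, Arrays.asList(11L, 11L, 12L)); // id 11 repeats
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage());
    }
  }
}
{code}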
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270023)
Time Spent: 1h 20m  (was: 1h 10m)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch, 
> HIVE-21880.03.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Need to enable 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
>  which is disabled as it is flaky and randomly failing with below error.
> {code}
> Error Message
> Notification events are missing in the meta store.
> Stacktrace
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231)
>   at 
> 

[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270019=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270019
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 07:57
Start Date: 01/Jul/19 07:57
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #684: HIVE-21880 : 
Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r298902499
 
 

 ##
 File path: ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
 ##
 @@ -518,6 +518,8 @@
   REPL_INVALID_DB_OR_TABLE_PATTERN(20021,
   "Invalid pattern for the DB or table name in the replication policy. 
"
   + "It should be a valid regex enclosed within single or 
double quotes."),
+  REPL_EVENTS_WITH_DUPLICATE_ID_IN_METASTORE(20026, "Notification events with 
duplicate " +
 
 Review comment:
   Add a comment similar to REPL_EVENTS_MISSING_IN_METASTORE -- "// if the error 
message is changed for ..."
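
A rough illustration of that suggestion; the exact wording of the existing comment on REPL_EVENTS_MISSING_IN_METASTORE is not quoted here, and the minimal enum below only stands in for ql's ErrorMsg, so treat the comment text as an assumption.

{code}
// Minimal stand-in for ErrorMsg, showing where the cautionary comment would go.
public enum ReplErrorMsgSketch {
  // NOTE: if this error message is changed, also update the callers that match on its
  // text (the same caution documented next to REPL_EVENTS_MISSING_IN_METASTORE).
  REPL_EVENTS_WITH_DUPLICATE_ID_IN_METASTORE(20026,
      "Notification events with duplicate event ids found in the metastore.");

  private final int errorCode;
  private final String mesg;

  ReplErrorMsgSketch(int errorCode, String mesg) {
    this.errorCode = errorCode;
    this.mesg = mesg;
  }

  public int getErrorCode() { return errorCode; }
  public String getMsg() { return mesg; }
}
{code}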
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270019)
Time Spent: 40m  (was: 0.5h)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch, 
> HIVE-21880.03.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Need to enable 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
>  which is disabled as it is flaky and randomly failing with below error.
> {code}
> Error Message
> Notification events are missing in the meta store.
> Stacktrace
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1782)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.run(WarehouseInstance.java:227)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:282)
>   at 
> 

[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270021=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270021
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 07:57
Start Date: 01/Jul/19 07:57
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #684: HIVE-21880 : 
Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r298918317
 
 

 ##
 File path: 
hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
 ##
 @@ -997,6 +998,15 @@ private void addNotificationLog(NotificationEvent event, 
ListenerEvent listenerE
 stmt.execute("SET @@session.sql_mode=ANSI_QUOTES");
   }
 
+  // Derby doesn't allow FOR UPDATE to lock the row being selected (See 
https://db.apache
+  // .org/derby/docs/10.1/ref/rrefsqlj31783.html) . So lock the whole 
table. Since there's
+  // only one row in the table, this shouldn't cause any performance 
degradation. Also we do not
+  // suggest Derby to be used in production so that's fine.
+  if (sqlGenerator.getDbProduct() == DatabaseProduct.DERBY) {
+String lockingQuery = "lock table \"NOTIFICATION_SEQUENCE\" in 
exclusive mode";
+LOG.info("Going to execute query <" + lockingQuery + ">");
+stmt.executeUpdate(lockingQuery);
+  }
   String s = sqlGenerator.addForUpdateClause("select \"NEXT_EVENT_ID\" " +
 
 Review comment:
   add "for update " clause for other kind of databases only
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270021)
Time Spent: 1h  (was: 50m)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch, 
> HIVE-21880.03.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Need to enable 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
>  which is disabled as it is flaky and randomly failing with below error.
> {code}
> Error Message
> Notification events are missing in the meta store.
> Stacktrace
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028)
>   at 

[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270018=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270018
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 07:57
Start Date: 01/Jul/19 07:57
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #684: HIVE-21880 : 
Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r298917275
 
 

 ##
 File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
 ##
 @@ -2673,4 +2673,13 @@ private void getStatsTableListResult(
   query.closeAll();
 }
   }
+
+  public void lockDbTable(String tableName) throws MetaException {
+String lockCommand = "lock table \"" + tableName + "\" in exclusive mode";
+try {
+  executeNoResult(lockCommand);
+} catch (SQLException sqle) {
 
 Review comment:
   I think retrying on exception should be handled here.
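
One way to read that, sketched below; the attempt count and backoff are made-up values and the metastore's own RetryingExecutor is not reproduced, so this is only an outline of retrying the lock instead of failing on the first SQLException.

{code}
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public final class LockWithRetrySketch {
  private LockWithRetrySketch() {}

  // Retry the exclusive table lock a few times before surfacing the failure.
  static void lockDbTable(Connection conn, String tableName) throws SQLException {
    final int maxAttempts = 3;
    SQLException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try (Statement stmt = conn.createStatement()) {
        stmt.executeUpdate("lock table \"" + tableName + "\" in exclusive mode");
        return;
      } catch (SQLException e) {
        last = e;
        try {
          Thread.sleep(1000L * attempt); // simple linear backoff between attempts
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          break;
        }
      }
    }
    throw last;
  }
}
{code}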
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270018)
Time Spent: 0.5h  (was: 20m)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch, 
> HIVE-21880.03.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Need to enable 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
>  which is disabled as it is flaky and randomly failing with below error.
> {code}
> Error Message
> Notification events are missing in the meta store.
> Stacktrace
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1782)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.run(WarehouseInstance.java:227)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:282)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:265)
>  

[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270016=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270016
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 07:57
Start Date: 01/Jul/19 07:57
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #684: HIVE-21880 : 
Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r298898096
 
 

 ##
 File path: 
hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
 ##
 @@ -997,6 +998,15 @@ private void addNotificationLog(NotificationEvent event, 
ListenerEvent listenerE
 stmt.execute("SET @@session.sql_mode=ANSI_QUOTES");
   }
 
+  // Derby doesn't allow FOR UPDATE to lock the row being selected (See 
https://db.apache
+  // .org/derby/docs/10.1/ref/rrefsqlj31783.html) . So lock the whole 
table. Since there's
+  // only one row in the table, this shouldn't cause any performance 
degradation. Also we do not
+  // suggest Derby to be used in production so that's fine.
 
 Review comment:
   I am not sure if this is officially mentioned anywhere by Hive: "Also we do not 
suggest Derby to be used in production so that's fine."
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270016)
Time Spent: 20m  (was: 10m)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch, 
> HIVE-21880.03.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Need to enable 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
>  which is disabled as it is flaky and randomly failing with below error.
> {code}
> Error Message
> Notification events are missing in the meta store.
> Stacktrace
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1782)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162)
>   at 
> 

[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270017=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270017
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 07:57
Start Date: 01/Jul/19 07:57
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #684: HIVE-21880 : 
Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r298920790
 
 

 ##
 File path: 
hcatalog/server-extensions/src/main/java/org/apache/hive/hcatalog/listener/DbNotificationListener.java
 ##
 @@ -997,6 +998,15 @@ private void addNotificationLog(NotificationEvent event, 
ListenerEvent listenerE
 stmt.execute("SET @@session.sql_mode=ANSI_QUOTES");
   }
 
+  // Derby doesn't allow FOR UPDATE to lock the row being selected (See 
https://db.apache
 
 Review comment:
   For addNotificationLog, the lock is already taken by the methods in the 
txnHandler class. addNotificationLog is, and always should be, called only for events 
generated by the txnHandler class, so I think this lock is not required.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270017)
Time Spent: 0.5h  (was: 20m)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch, 
> HIVE-21880.03.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Need to enable 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
>  which is disabled as it is flaky and randomly failing with below error.
> {code}
> Error Message
> Notification events are missing in the meta store.
> Stacktrace
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1782)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223)
>   at 
> org.apache.hadoop.hive.ql.parse.WarehouseInstance.run(WarehouseInstance.java:227)
>   at 
> 

[jira] [Work logged] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.

2019-07-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21880?focusedWorklogId=270020=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-270020
 ]

ASF GitHub Bot logged work on HIVE-21880:
-

Author: ASF GitHub Bot
Created on: 01/Jul/19 07:57
Start Date: 01/Jul/19 07:57
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #684: HIVE-21880 : 
Enable flaky test 
TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
URL: https://github.com/apache/hive/pull/684#discussion_r298916973
 
 

 ##
 File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
 ##
 @@ -1,17 +1,25 @@ private void prepareQuotes() throws SQLException {
 }
   }
 
-  private void lockForUpdate() throws MetaException {
-String selectQuery = "select \"NEXT_EVENT_ID\" from 
\"NOTIFICATION_SEQUENCE\"";
-String selectForUpdateQuery = sqlGenerator.addForUpdateClause(selectQuery);
-new RetryingExecutor(conf, () -> {
-  prepareQuotes();
-  Query query = pm.newQuery("javax.jdo.query.SQL", selectForUpdateQuery);
-  query.setUnique(true);
-  // only need to execute it to get db Lock
-  query.execute();
-  query.closeAll();
-}).run();
+  private void lockNotificationSequenceForUpdate() throws MetaException {
+if (sqlGenerator.getDbProduct() == DatabaseProduct.DERBY && directSql != 
null) {
+  // Derby doesn't allow FOR UPDATE to lock the row being selected (See 
https://db.apache
+  // .org/derby/docs/10.1/ref/rrefsqlj31783.html) . So lock the whole 
table. Since there's
+  // only one row in the table, this shouldn't cause any performance 
degradation. Also we do not
+  // suggest Derby to be used in production so that's fine.
+  directSql.lockDbTable("NOTIFICATION_SEQUENCE");
 
 Review comment:
   If it can only ever have one record, does select for update make any sense? 
Can we make it lock the table for other kinds of DBs also?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 270020)
Time Spent: 50m  (was: 40m)

> Enable flaky test 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
> ---
>
> Key: HIVE-21880
> URL: https://issues.apache.org/jira/browse/HIVE-21880
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Affects Versions: 4.0.0
>Reporter: Sankar Hariappan
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch, 
> HIVE-21880.03.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Need to enable 
> TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites
>  which is disabled as it is flaky and randomly failing with below error.
> {code}
> Error Message
> Notification events are missing in the meta store.
> Stacktrace
> java.lang.IllegalStateException: Notification events are missing in the meta 
> store.
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159)
>   at 
> org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231)
>   at 
> org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121)
>   at 

[jira] [Commented] (HIVE-21938) Add database and table filter options to PreUpgradeTool

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875991#comment-16875991
 ] 

Hive QA commented on HIVE-21938:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973305/HIVE-21938.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 16332 tests 
executed
*Failed tests:*
{noformat}
TestReplAcrossInstancesWithJsonMessageFormat - did not produce a TEST-*.xml 
file (likely timed out) (batchId=255)
org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession
 (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError 
(batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=232)
org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty 
(batchId=232)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17797/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17797/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17797/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973305 - PreCommit-HIVE-Build

> Add database and table filter options to PreUpgradeTool
> ---
>
> Key: HIVE-21938
> URL: https://issues.apache.org/jira/browse/HIVE-21938
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Blocker
> Fix For: 4.0.0
>
> Attachments: HIVE-21938.1.patch
>
>
> By default pre upgrade tool scans all databases and tables in the warehouse. 
> Add database and table filter options to run the tool for a specific subset 
> of databases and tables only.
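
A hedged sketch of what such filtering could look like; the regex-based matching and the example option value below are assumptions for illustration, not the actual PreUpgradeTool flags.

{code}
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class PreUpgradeFilterSketch {
  // Keep only the database/table names that match the user-supplied pattern.
  static List<String> filter(List<String> names, String regex) {
    Pattern p = Pattern.compile(regex);
    return names.stream().filter(n -> p.matcher(n).matches()).collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<String> dbs = Arrays.asList("default", "sales_db", "sales_staging", "tmp");
    // e.g. a hypothetical --dbRegex 'sales.*' option would restrict the scan to these:
    System.out.println(filter(dbs, "sales.*")); // [sales_db, sales_staging]
  }
}
{code}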



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21437) Vectorization: Decimal64 division with integer columns

2019-07-01 Thread Attila Magyar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-21437:
-
Attachment: HIVE-21437.4.patch

> Vectorization: Decimal64 division with integer columns
> --
>
> Key: HIVE-21437
> URL: https://issues.apache.org/jira/browse/HIVE-21437
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Attila Magyar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21437.1.patch, HIVE-21437.2.patch, 
> HIVE-21437.3.patch, HIVE-21437.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Vectorizer fails for
> {code}
> CREATE temporary TABLE `catalog_Sales`(
>   `cs_quantity` int, 
>   `cs_wholesale_cost` decimal(7,2), 
>   `cs_list_price` decimal(7,2), 
>   `cs_sales_price` decimal(7,2), 
>   `cs_ext_discount_amt` decimal(7,2), 
>   `cs_ext_sales_price` decimal(7,2), 
>   `cs_ext_wholesale_cost` decimal(7,2), 
>   `cs_ext_list_price` decimal(7,2), 
>   `cs_ext_tax` decimal(7,2), 
>   `cs_coupon_amt` decimal(7,2), 
>   `cs_ext_ship_cost` decimal(7,2), 
>   `cs_net_paid` decimal(7,2), 
>   `cs_net_paid_inc_tax` decimal(7,2), 
>   `cs_net_paid_inc_ship` decimal(7,2), 
>   `cs_net_paid_inc_ship_tax` decimal(7,2), 
>   `cs_net_profit` decimal(7,2))
>  ;
> explain vectorization detail select max((((cs_ext_list_price - 
> cs_ext_wholesale_cost) - cs_ext_discount_amt) + cs_ext_sales_price) / 2) from 
> catalog_sales;
> {code}
> {code}
> 'Map Vectorization:'
> 'enabled: true'
> 'enabledConditionsMet: 
> hive.vectorized.use.vectorized.input.format IS true'
> 'inputFileFormats: 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> 'notVectorizedReason: SELECT operator: Could not instantiate 
> DecimalColDivideDecimalScalar with arguments arguments: [21, 20, 22], 
> argument classes: [Integer, Integer, Integer], exception: 
> java.lang.IllegalArgumentException: java.lang.ClassCastException@63b56be0 
> stack trace: 
> sun.reflect.GeneratedConstructorAccessor.newInstance(Unknown 
> Source), 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45),
>  java.lang.reflect.Constructor.newInstance(Constructor.java:423), 
> org.apache.hadoop.hive.ql.exec.vector.VectorizationContext.instantiateExpression(VectorizationContext.java:2088),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4662),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4602),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.vectorizeSelectOperator(Vectorizer.java:4584),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperator(Vectorizer.java:5171),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChild(Vectorizer.java:923),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChildren(Vectorizer.java:809),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperatorTree(Vectorizer.java:776),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.access$2400(Vectorizer.java:240),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2038),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:1990),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapWork(Vectorizer.java:1963),
>  ...'
> 'vectorized: false'
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-21437) Vectorization: Decimal64 division with integer columns

2019-07-01 Thread Attila Magyar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar reassigned HIVE-21437:


Assignee: Attila Magyar  (was: Teddy Choi)

> Vectorization: Decimal64 division with integer columns
> --
>
> Key: HIVE-21437
> URL: https://issues.apache.org/jira/browse/HIVE-21437
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Attila Magyar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21437.1.patch, HIVE-21437.2.patch, 
> HIVE-21437.3.patch, HIVE-21437.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Vectorizer fails for
> {code}
> CREATE temporary TABLE `catalog_Sales`(
>   `cs_quantity` int, 
>   `cs_wholesale_cost` decimal(7,2), 
>   `cs_list_price` decimal(7,2), 
>   `cs_sales_price` decimal(7,2), 
>   `cs_ext_discount_amt` decimal(7,2), 
>   `cs_ext_sales_price` decimal(7,2), 
>   `cs_ext_wholesale_cost` decimal(7,2), 
>   `cs_ext_list_price` decimal(7,2), 
>   `cs_ext_tax` decimal(7,2), 
>   `cs_coupon_amt` decimal(7,2), 
>   `cs_ext_ship_cost` decimal(7,2), 
>   `cs_net_paid` decimal(7,2), 
>   `cs_net_paid_inc_tax` decimal(7,2), 
>   `cs_net_paid_inc_ship` decimal(7,2), 
>   `cs_net_paid_inc_ship_tax` decimal(7,2), 
>   `cs_net_profit` decimal(7,2))
>  ;
> explain vectorization detail select max((((cs_ext_list_price - 
> cs_ext_wholesale_cost) - cs_ext_discount_amt) + cs_ext_sales_price) / 2) from 
> catalog_sales;
> {code}
> {code}
> 'Map Vectorization:'
> 'enabled: true'
> 'enabledConditionsMet: 
> hive.vectorized.use.vectorized.input.format IS true'
> 'inputFileFormats: 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> 'notVectorizedReason: SELECT operator: Could not instantiate 
> DecimalColDivideDecimalScalar with arguments arguments: [21, 20, 22], 
> argument classes: [Integer, Integer, Integer], exception: 
> java.lang.IllegalArgumentException: java.lang.ClassCastException@63b56be0 
> stack trace: 
> sun.reflect.GeneratedConstructorAccessor.newInstance(Unknown 
> Source), 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45),
>  java.lang.reflect.Constructor.newInstance(Constructor.java:423), 
> org.apache.hadoop.hive.ql.exec.vector.VectorizationContext.instantiateExpression(VectorizationContext.java:2088),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4662),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4602),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.vectorizeSelectOperator(Vectorizer.java:4584),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperator(Vectorizer.java:5171),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChild(Vectorizer.java:923),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChildren(Vectorizer.java:809),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperatorTree(Vectorizer.java:776),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.access$2400(Vectorizer.java:240),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2038),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:1990),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapWork(Vectorizer.java:1963),
>  ...'
> 'vectorized: false'
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21437) Vectorization: Decimal64 division with integer columns

2019-07-01 Thread Attila Magyar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-21437:
-
Status: Open  (was: Patch Available)

> Vectorization: Decimal64 division with integer columns
> --
>
> Key: HIVE-21437
> URL: https://issues.apache.org/jira/browse/HIVE-21437
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Attila Magyar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21437.1.patch, HIVE-21437.2.patch, 
> HIVE-21437.3.patch, HIVE-21437.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Vectorizer fails for
> {code}
> CREATE temporary TABLE `catalog_Sales`(
>   `cs_quantity` int, 
>   `cs_wholesale_cost` decimal(7,2), 
>   `cs_list_price` decimal(7,2), 
>   `cs_sales_price` decimal(7,2), 
>   `cs_ext_discount_amt` decimal(7,2), 
>   `cs_ext_sales_price` decimal(7,2), 
>   `cs_ext_wholesale_cost` decimal(7,2), 
>   `cs_ext_list_price` decimal(7,2), 
>   `cs_ext_tax` decimal(7,2), 
>   `cs_coupon_amt` decimal(7,2), 
>   `cs_ext_ship_cost` decimal(7,2), 
>   `cs_net_paid` decimal(7,2), 
>   `cs_net_paid_inc_tax` decimal(7,2), 
>   `cs_net_paid_inc_ship` decimal(7,2), 
>   `cs_net_paid_inc_ship_tax` decimal(7,2), 
>   `cs_net_profit` decimal(7,2))
>  ;
> explain vectorization detail select max((((cs_ext_list_price - 
> cs_ext_wholesale_cost) - cs_ext_discount_amt) + cs_ext_sales_price) / 2) from 
> catalog_sales;
> {code}
> {code}
> 'Map Vectorization:'
> 'enabled: true'
> 'enabledConditionsMet: 
> hive.vectorized.use.vectorized.input.format IS true'
> 'inputFileFormats: 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> 'notVectorizedReason: SELECT operator: Could not instantiate 
> DecimalColDivideDecimalScalar with arguments arguments: [21, 20, 22], 
> argument classes: [Integer, Integer, Integer], exception: 
> java.lang.IllegalArgumentException: java.lang.ClassCastException@63b56be0 
> stack trace: 
> sun.reflect.GeneratedConstructorAccessor.newInstance(Unknown 
> Source), 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45),
>  java.lang.reflect.Constructor.newInstance(Constructor.java:423), 
> org.apache.hadoop.hive.ql.exec.vector.VectorizationContext.instantiateExpression(VectorizationContext.java:2088),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4662),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4602),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.vectorizeSelectOperator(Vectorizer.java:4584),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperator(Vectorizer.java:5171),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChild(Vectorizer.java:923),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChildren(Vectorizer.java:809),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperatorTree(Vectorizer.java:776),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.access$2400(Vectorizer.java:240),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2038),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:1990),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapWork(Vectorizer.java:1963),
>  ...'
> 'vectorized: false'
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21437) Vectorization: Decimal64 division with integer columns

2019-07-01 Thread Attila Magyar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-21437:
-
Status: Patch Available  (was: Open)

> Vectorization: Decimal64 division with integer columns
> --
>
> Key: HIVE-21437
> URL: https://issues.apache.org/jira/browse/HIVE-21437
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 4.0.0
>Reporter: Gopal V
>Assignee: Attila Magyar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-21437.1.patch, HIVE-21437.2.patch, 
> HIVE-21437.3.patch, HIVE-21437.4.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Vectorizer fails for
> {code}
> CREATE temporary TABLE `catalog_Sales`(
>   `cs_quantity` int, 
>   `cs_wholesale_cost` decimal(7,2), 
>   `cs_list_price` decimal(7,2), 
>   `cs_sales_price` decimal(7,2), 
>   `cs_ext_discount_amt` decimal(7,2), 
>   `cs_ext_sales_price` decimal(7,2), 
>   `cs_ext_wholesale_cost` decimal(7,2), 
>   `cs_ext_list_price` decimal(7,2), 
>   `cs_ext_tax` decimal(7,2), 
>   `cs_coupon_amt` decimal(7,2), 
>   `cs_ext_ship_cost` decimal(7,2), 
>   `cs_net_paid` decimal(7,2), 
>   `cs_net_paid_inc_tax` decimal(7,2), 
>   `cs_net_paid_inc_ship` decimal(7,2), 
>   `cs_net_paid_inc_ship_tax` decimal(7,2), 
>   `cs_net_profit` decimal(7,2))
>  ;
> explain vectorization detail select max((((cs_ext_list_price - 
> cs_ext_wholesale_cost) - cs_ext_discount_amt) + cs_ext_sales_price) / 2) from 
> catalog_sales;
> {code}
> {code}
> 'Map Vectorization:'
> 'enabled: true'
> 'enabledConditionsMet: 
> hive.vectorized.use.vectorized.input.format IS true'
> 'inputFileFormats: 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> 'notVectorizedReason: SELECT operator: Could not instantiate 
> DecimalColDivideDecimalScalar with arguments arguments: [21, 20, 22], 
> argument classes: [Integer, Integer, Integer], exception: 
> java.lang.IllegalArgumentException: java.lang.ClassCastException@63b56be0 
> stack trace: 
> sun.reflect.GeneratedConstructorAccessor.newInstance(Unknown 
> Source), 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45),
>  java.lang.reflect.Constructor.newInstance(Constructor.java:423), 
> org.apache.hadoop.hive.ql.exec.vector.VectorizationContext.instantiateExpression(VectorizationContext.java:2088),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4662),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.fixDecimalDataTypePhysicalVariations(Vectorizer.java:4602),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.vectorizeSelectOperator(Vectorizer.java:4584),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperator(Vectorizer.java:5171),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChild(Vectorizer.java:923),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.doProcessChildren(Vectorizer.java:809),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.validateAndVectorizeOperatorTree(Vectorizer.java:776),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer.access$2400(Vectorizer.java:240),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:2038),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapOperators(Vectorizer.java:1990),
>  
> org.apache.hadoop.hive.ql.optimizer.physical.Vectorizer$VectorizationDispatcher.validateAndVectorizeMapWork(Vectorizer.java:1963),
>  ...'
> 'vectorized: false'
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21578) Introduce SQL:2016 formats FM, FX, and nested strings

2019-07-01 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21578:
-
Status: Open  (was: Patch Available)

> Introduce SQL:2016 formats FM, FX, and nested strings
> -
>
> Key: HIVE-21578
> URL: https://issues.apache.org/jira/browse/HIVE-21578
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21578.01.patch, HIVE-21578.02.patch, 
> HIVE-21578.02.patch
>
>
> Enable Hive to parse the following datetime formats when any combination or 
> subset of these or previously implemented formats is provided in one string. 
>  * "text" (nested strings)
>  * FM
>  * FX
> [Definitions 
> here|https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-21578) Introduce SQL:2016 formats FM, FX, and nested strings

2019-07-01 Thread Karen Coppage (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-21578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21578:
-
Attachment: HIVE-21578.02.patch
Status: Patch Available  (was: Open)

> Introduce SQL:2016 formats FM, FX, and nested strings
> -
>
> Key: HIVE-21578
> URL: https://issues.apache.org/jira/browse/HIVE-21578
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Attachments: HIVE-21578.01.patch, HIVE-21578.02.patch, 
> HIVE-21578.02.patch
>
>
> Enable Hive to parse the following datetime formats when any combination or 
> subset of these or previously implemented formats is provided in one string. 
>  * "text" (nested strings)
>  * FM
>  * FX
> [Definitions 
> here|https://docs.google.com/document/d/1V7k6-lrPGW7_uhqM-FhKl3QsxwCRy69v2KIxPsGjc1k/edit]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21888) Set hive.parquet.timestamp.skip.conversion default to true

2019-07-01 Thread Karen Coppage (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875981#comment-16875981
 ] 

Karen Coppage commented on HIVE-21888:
--

Thanks [~jcamachorodriguez]!

 

> Set hive.parquet.timestamp.skip.conversion default to true
> --
>
> Key: HIVE-21888
> URL: https://issues.apache.org/jira/browse/HIVE-21888
> Project: Hive
>  Issue Type: Bug
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
> Fix For: 4.0.0, 3.2.0
>
> Attachments: HIVE-21888.02.patch, HIVE-21888.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-21938) Add database and table filter options to PreUpgradeTool

2019-07-01 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-21938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16875964#comment-16875964
 ] 

Hive QA commented on HIVE-21938:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
21s{color} | {color:blue} upgrade-acid/pre-upgrade in master has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} upgrade-acid/pre-upgrade: The patch generated 42 new + 
288 unchanged - 12 fixed = 330 total (was 300) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-17797/dev-support/hive-personality.sh
 |
| git revision | master / be6bf93 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17797/yetus/diff-checkstyle-upgrade-acid_pre-upgrade.txt
 |
| modules | C: upgrade-acid/pre-upgrade U: upgrade-acid/pre-upgrade |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17797/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Add database and table filter options to PreUpgradeTool
> ---
>
> Key: HIVE-21938
> URL: https://issues.apache.org/jira/browse/HIVE-21938
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.1.0
>Reporter: Krisztian Kasa
>Assignee: Krisztian Kasa
>Priority: Blocker
> Fix For: 4.0.0
>
> Attachments: HIVE-21938.1.patch
>
>
> By default pre upgrade tool scans all databases and tables in the warehouse. 
> Add database and table filter options to run the tool for a specific subset 
> of databases and tables only.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

