[jira] [Commented] (HIVE-18859) Incorrect handling of thrift metastore exceptions

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387426#comment-16387426
 ] 

Hive QA commented on HIVE-18859:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9504/dev-support/hive-personality.sh
 |
| git revision | master / 8d88cfa |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9504/yetus/patch-asflicense-problems.txt
 |
| modules | C: standalone-metastore U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9504/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Incorrect handling of thrift metastore exceptions
> -
>
> Key: HIVE-18859
> URL: https://issues.apache.org/jira/browse/HIVE-18859
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.0, 2.1.1
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-18859.patch
>
>
> Currently, any runtime exception thrown in the thrift metastore during the 
> following operations is not sent back to the hive execution engine:
>  * grant/revoke role
>  * grant/revoke privileges
>  * create role
> This is because ThriftHiveMetastore only handles MetaException and throws 
> TException while processing these requests. So the command simply fails on 
> the thrift metastore side when a runtime exception occurs (the exception can 
> be seen in the metastore log), but the hive execution engine keeps waiting 
> for a response from the thrift metastore.
>  
> Steps to reproduce this problem:
> Launch the thrift metastore.
> Launch the hive cli with --hiveconf 
> hive.metastore.uris=thrift://127.0.0.1:1 (pass the thrift metastore host 
> and port).
> Execute the following commands:
>  # set role admin
>  # create role test; (succeeds)
>  # create role test; (hive version 2.1.1: the command is stuck waiting for a 
> response from the thrift metastore; hive version 1.2.1: the command fails 
> with the exception reported as null)
>  
> I have uploaded a patch that handles checked exceptions with MetaException 
> and wraps unchecked exceptions in TException, which fixes the problem. 
> Please review and suggest if there is a better way of handling this issue.
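As a hedged illustration of the fix the reporter describes, here is a
self-contained Java sketch with stand-in names (it is not the actual
HiveMetaStore or Thrift-generated code): any unchecked exception from the
store operation is converted into a checked exception that the handler is
allowed to declare, so the client always gets a response instead of hanging.

{code:java}
import java.util.concurrent.Callable;

// Simplified, hypothetical sketch of the approach described above, not the
// actual HiveMetaStore code: the Thrift handler may only declare checked
// exceptions, so a runtime exception that escapes it produces no response at
// all and the CLI waits forever. Wrapping it keeps the declared contract.
public final class HandlerExceptionGuard {

  /** Checked exception standing in for Thrift's MetaException in this sketch. */
  public static final class MetaExceptionSketch extends Exception {
    public MetaExceptionSketch(String message) { super(message); }
  }

  public static <T> T runHandler(String opName, Callable<T> storeOperation)
      throws MetaExceptionSketch {
    try {
      return storeOperation.call();
    } catch (RuntimeException e) {
      // Previously this escaped unhandled; wrapping it lets the client see the
      // failure immediately instead of waiting for a response that never comes.
      throw new MetaExceptionSketch(opName + " failed: " + e.getMessage());
    } catch (Exception e) {
      // Checked exceptions from the store are surfaced the same way.
      throw new MetaExceptionSketch(opName + " failed: " + e.getMessage());
    }
  }
}
{code}

In this sketch, each of the handlers named above (grant/revoke role,
grant/revoke privileges, create role) would wrap its store call the same way.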

[jira] [Commented] (HIVE-18832) Support change management for trashing data files from ACID tables.

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387407#comment-16387407
 ] 

Hive QA commented on HIVE-18832:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913147/HIVE-18832.0.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 100 failed/errored test(s), 13477 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)


[jira] [Commented] (HIVE-18670) Prevent DROP DATABASE If Other Data Exists

2018-03-05 Thread Scott Eade (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387373#comment-16387373
 ] 

Scott Eade commented on HIVE-18670:
---

{code:java}
-- Example 3
> CREATE DATABASE my_important_database LOCATION '/hive/my_important_database';
-- Add a bunch of tables and data.

-- Now create a second database in the same location.
> CREATE DATABASE trouble_maker LOCATION '/hive/my_important_database';
-- This will succeed and delete the important data.
> DROP DATABASE trouble_maker;{code}
This is based on a real example. HDFS snapshots and distcp backups to the 
rescue.

> Prevent DROP DATABASE If Other Data Exists
> --
>
> Key: HIVE-18670
> URL: https://issues.apache.org/jira/browse/HIVE-18670
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 2.3.2
>Reporter: BELUGA BEHR
>Priority: Major
>
> A user cannot drop a database that has tables under it unless they include 
> the _CASCADE_ keyword in their DROP DATABASE statement.
> [https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-DropTable]
> I would like to propose that, if {{hive.mapred.mode}} is set to 'strict', 
> Hive also checks for other data before dropping the database.
> For example, if the database is stored within HDFS, then Hive should check 
> whether any other data, not necessarily even related to Hive, exists in the 
> database's HDFS directory before dropping it (a sketch of such a check 
> follows after the examples below).
> The examples are:
> {code:java|title=Example 1}
> /hive/my_database
> /hive/my_database/my_table
> -- Does not succeed because 'my_table' exists
> > DROP DATABASE my_database;
> -- Succeeds and removes the root directory /hive/my_database
> > DROP DATABASE my_database CASCADE;
> {code}
> {code:java|title=Example 2}
> /hive/my_database
> /hive/my_database/my_important_file.txt
> -- Succeeds because no tables exist, but I just lost my "important" file
> > DROP DATABASE my_database;
> {code}
> This "feature" is just to prevent people from shooting themselves in the 
> foot, even if they shouldn't be using Hive space for storing unrelated data.
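A minimal sketch of the strict-mode check proposed above, using the Hadoop
FileSystem API. The class, the method, and the way the known table locations
are collected are assumptions for illustration, not existing Hive code.

{code:java}
import java.io.IOException;
import java.util.Set;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical guard, not existing Hive code: before dropping a database in
// strict mode, refuse if its directory contains anything that is not a known
// table directory (e.g. my_important_file.txt from Example 2, or another
// database created at the same location as in Example 3).
public final class DropDatabaseGuard {

  public static void checkNoForeignData(FileSystem fs, Path dbDir,
      Set<Path> knownTableDirs) throws IOException {
    for (FileStatus child : fs.listStatus(dbDir)) {
      if (!knownTableDirs.contains(child.getPath())) {
        throw new IOException("Refusing DROP DATABASE in strict mode: "
            + dbDir + " contains non-table data " + child.getPath());
      }
    }
  }
}
{code}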



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18859) Incorrect handling of thrift metastore exceptions

2018-03-05 Thread Ganesha Shreedhara (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesha Shreedhara updated HIVE-18859:
--
Status: Patch Available  (was: Open)

> Incorrect handling of thrift metastore exceptions
> -
>
> Key: HIVE-18859
> URL: https://issues.apache.org/jira/browse/HIVE-18859
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1, 1.2.0
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-18859.patch
>
>
> Currently, any runtime exception thrown in the thrift metastore during the 
> following operations is not sent back to the hive execution engine:
>  * grant/revoke role
>  * grant/revoke privileges
>  * create role
> This is because ThriftHiveMetastore only handles MetaException and throws 
> TException while processing these requests. So the command simply fails on 
> the thrift metastore side when a runtime exception occurs (the exception can 
> be seen in the metastore log), but the hive execution engine keeps waiting 
> for a response from the thrift metastore.
>  
> Steps to reproduce this problem:
> Launch the thrift metastore.
> Launch the hive cli with --hiveconf 
> hive.metastore.uris=thrift://127.0.0.1:1 (pass the thrift metastore host 
> and port).
> Execute the following commands:
>  # set role admin
>  # create role test; (succeeds)
>  # create role test; (hive version 2.1.1: the command is stuck waiting for a 
> response from the thrift metastore; hive version 1.2.1: the command fails 
> with the exception reported as null)
>  
> I have uploaded a patch that handles checked exceptions with MetaException 
> and wraps unchecked exceptions in TException, which fixes the problem. 
> Please review and suggest if there is a better way of handling this issue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18859) Incorrect handling of thrift metastore exceptions

2018-03-05 Thread Ganesha Shreedhara (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesha Shreedhara updated HIVE-18859:
--
Status: Open  (was: Patch Available)

> Incorrect handling of thrift metastore exceptions
> -
>
> Key: HIVE-18859
> URL: https://issues.apache.org/jira/browse/HIVE-18859
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1, 1.2.0
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-18859.patch
>
>
> Currently, any runtime exception thrown in the thrift metastore during the 
> following operations is not sent back to the hive execution engine:
>  * grant/revoke role
>  * grant/revoke privileges
>  * create role
> This is because ThriftHiveMetastore only handles MetaException and throws 
> TException while processing these requests. So the command simply fails on 
> the thrift metastore side when a runtime exception occurs (the exception can 
> be seen in the metastore log), but the hive execution engine keeps waiting 
> for a response from the thrift metastore.
>  
> Steps to reproduce this problem:
> Launch the thrift metastore.
> Launch the hive cli with --hiveconf 
> hive.metastore.uris=thrift://127.0.0.1:1 (pass the thrift metastore host 
> and port).
> Execute the following commands:
>  # set role admin
>  # create role test; (succeeds)
>  # create role test; (hive version 2.1.1: the command is stuck waiting for a 
> response from the thrift metastore; hive version 1.2.1: the command fails 
> with the exception reported as null)
>  
> I have uploaded a patch that handles checked exceptions with MetaException 
> and wraps unchecked exceptions in TException, which fixes the problem. 
> Please review and suggest if there is a better way of handling this issue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18859) Incorrect handling of thrift metastore exceptions

2018-03-05 Thread Ganesha Shreedhara (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387357#comment-16387357
 ] 

Ganesha Shreedhara commented on HIVE-18859:
---

[~akolb] Submitted a code review request (https://reviews.apache.org/r/65913/). 
Please review the patch. 

> Incorrect handling of thrift metastore exceptions
> -
>
> Key: HIVE-18859
> URL: https://issues.apache.org/jira/browse/HIVE-18859
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.0, 2.1.1
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-18859.patch
>
>
> Currently, any runtime exception thrown in the thrift metastore during the 
> following operations is not sent back to the hive execution engine:
>  * grant/revoke role
>  * grant/revoke privileges
>  * create role
> This is because ThriftHiveMetastore only handles MetaException and throws 
> TException while processing these requests. So the command simply fails on 
> the thrift metastore side when a runtime exception occurs (the exception can 
> be seen in the metastore log), but the hive execution engine keeps waiting 
> for a response from the thrift metastore.
>  
> Steps to reproduce this problem:
> Launch the thrift metastore.
> Launch the hive cli with --hiveconf 
> hive.metastore.uris=thrift://127.0.0.1:1 (pass the thrift metastore host 
> and port).
> Execute the following commands:
>  # set role admin
>  # create role test; (succeeds)
>  # create role test; (hive version 2.1.1: the command is stuck waiting for a 
> response from the thrift metastore; hive version 1.2.1: the command fails 
> with the exception reported as null)
>  
> I have uploaded a patch that handles checked exceptions with MetaException 
> and wraps unchecked exceptions in TException, which fixes the problem. 
> Please review and suggest if there is a better way of handling this issue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17580) Remove dependency of get_fields_with_environment_context API to serde

2018-03-05 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387354#comment-16387354
 ] 

Vihang Karajgaonkar commented on HIVE-17580:


Combining the enums is not going to solve the problem, since the metastore 
needs compile-time access to ObjectInspector.TypeCategory unless we are okay 
with breaking compatibility by modifying the signature of the getCategory 
method in TypeInfo. TypeInfo was annotated as public in HIVE-17157, so 
technically there has not been any released version of Hive that exposes it as 
a public API. So I think we are allowed to break compatibility in that 
respect. What do you think?

The other suggested option, moving ObjectInspector to standalone-metastore, 
sounds odd to me since it has nothing to do with the metastore. Moving 
ObjectInspector to storage-api is not ideal but is at least the lesser of the 
two evils. It could be argued that it is more of a storage API than a 
metastore API, since it is related to how the serialized and deserialized data 
is interpreted. I don't see a particular logical grouping of classes in 
storage-api (for example, HiveDecimal is in storage-api, but the other types 
are in serde). I think in the longer run we would need to reorganize this into 
more consistent modules anyway.

> Remove dependency of get_fields_with_environment_context API to serde
> -
>
> Key: HIVE-17580
> URL: https://issues.apache.org/jira/browse/HIVE-17580
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17580.003-standalone-metastore.patch, 
> HIVE-17580.04-standalone-metastore.patch, 
> HIVE-17580.05-standalone-metastore.patch, 
> HIVE-17580.06-standalone-metastore.patch, 
> HIVE-17580.07-standalone-metastore.patch, 
> HIVE-17580.08-standalone-metastore.patch, 
> HIVE-17580.09-standalone-metastore.patch, 
> HIVE-17580.092-standalone-metastore.patch
>
>
> The {{get_fields_with_environment_context}} metastore API uses the 
> {{Deserializer}} class to access the field metadata in cases where it is 
> stored along with the data files (Avro tables). The problem is that the 
> Deserializer class is defined in the hive-serde module, and in order to make 
> the metastore independent of Hive we will have to remove this dependency (at 
> the very least we should change it from a compile-time to a runtime 
> dependency).
> The other option is to investigate whether we can use SearchArgument to 
> provide this functionality.
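As a hedged illustration of the "runtime instead of compile-time dependency"
alternative mentioned in the description (the helper below is an assumption
for illustration, not the patch's actual approach), the metastore could load
the serde class reflectively so hive-serde is only needed on the classpath
when that code path is actually exercised:

{code:java}
// Illustrative only: loading a deserializer class by name at runtime removes
// the compile-time dependency on hive-serde; the class merely has to be
// present at runtime for tables that store their schema with the data files
// (e.g. Avro).
public final class RuntimeSerdeLoader {

  public static Object instantiate(String deserializerClassName)
      throws ReflectiveOperationException {
    Class<?> clazz = Class.forName(deserializerClassName);
    return clazz.getDeclaredConstructor().newInstance();
  }

  public static void main(String[] args) throws Exception {
    // Any class with a no-arg constructor works for the demonstration.
    Object o = instantiate("java.util.ArrayList");
    System.out.println(o.getClass().getName());
  }
}
{code}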



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18832) Support change management for trashing data files from ACID tables.

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387351#comment-16387351
 ] 

Hive QA commented on HIVE-18832:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
47s{color} | {color:red} ql in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} standalone-metastore: The patch generated 3 new + 32 
unchanged - 18 fixed = 35 total (was 50) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9503/dev-support/hive-personality.sh
 |
| git revision | master / 8d88cfa |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9503/yetus/patch-mvninstall-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9503/yetus/diff-checkstyle-standalone-metastore.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9503/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9503/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Support change management for trashing data files from ACID tables.
> ---
>
> Key: HIVE-18832
> URL: https://issues.apache.org/jira/browse/HIVE-18832
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: anishek
>Assignee: anishek
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18832.0.patch
>
>
> Currently, the cleaner process and DDL drop operations delete the data 
> files. The scope for supporting change management in the source warehouse 
> for ACID table operations is given below.
> 1. The cleaner process deletes older files after compaction, aborted files, 
> etc. These need to be archived to the cmroot path.
> 2. DDL operations such as dropping a table or partition already archive the 
> deleted files. This needs to be extended to ACID tables too.
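A rough sketch of the change-management idea described above. The helper and
its method are hypothetical, not Hive's actual ReplChangeManager code: the
point is simply to copy a data file under the cmroot path before the cleaner
or a DDL drop deletes it.

{code:java}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

// Hypothetical helper, not Hive's actual change-management implementation:
// archive the file to cmroot first so replication can still locate the data,
// then delete the original as the cleaner/DDL operation intended.
public final class CmArchiver {

  public static void archiveThenDelete(FileSystem fs, Path dataFile, Path cmRoot)
      throws IOException {
    Path archived = new Path(cmRoot, dataFile.getName());
    FileUtil.copy(fs, dataFile, fs, archived, /* deleteSource */ false,
        fs.getConf());
    fs.delete(dataFile, /* recursive */ false);
  }
}
{code}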



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18571) stats issues for MM tables

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387337#comment-16387337
 ] 

Hive QA commented on HIVE-18571:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913136/HIVE-18571.05.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 31 failed/errored test(s), 13476 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)


[jira] [Commented] (HIVE-18571) stats issues for MM tables

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387301#comment-16387301
 ] 

Hive QA commented on HIVE-18571:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
39s{color} | {color:red} ql in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} common: The patch generated 1 new + 428 unchanged - 1 
fixed = 429 total (was 429) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} itests/hive-unit: The patch generated 5 new + 657 
unchanged - 0 fixed = 662 total (was 657) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
44s{color} | {color:red} ql: The patch generated 12 new + 1091 unchanged - 4 
fixed = 1103 total (was 1095) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} standalone-metastore: The patch generated 8 new + 732 
unchanged - 7 fixed = 740 total (was 739) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9502/dev-support/hive-personality.sh
 |
| git revision | master / 8d88cfa |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9502/yetus/patch-mvninstall-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9502/yetus/diff-checkstyle-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9502/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9502/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9502/yetus/diff-checkstyle-standalone-metastore.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9502/yetus/whitespace-eol.txt 
|
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9502/yetus/patch-asflicense-problems.txt
 |
| modules | C: common itests/hive-unit ql standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9502/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> stats issues for MM tables
> --
>
> 

[jira] [Updated] (HIVE-18859) Incorrect handling of thrift metastore exceptions

2018-03-05 Thread Ganesha Shreedhara (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesha Shreedhara updated HIVE-18859:
--
Attachment: HIVE-18859.patch

> Incorrect handling of thrift metastore exceptions
> -
>
> Key: HIVE-18859
> URL: https://issues.apache.org/jira/browse/HIVE-18859
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.0, 2.1.1
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-18859.patch
>
>
> Currently, any runtime exception thrown in the thrift metastore during the 
> following operations is not sent back to the hive execution engine:
>  * grant/revoke role
>  * grant/revoke privileges
>  * create role
> This is because ThriftHiveMetastore only handles MetaException and throws 
> TException while processing these requests. So the command simply fails on 
> the thrift metastore side when a runtime exception occurs (the exception can 
> be seen in the metastore log), but the hive execution engine keeps waiting 
> for a response from the thrift metastore.
>  
> Steps to reproduce this problem:
> Launch the thrift metastore.
> Launch the hive cli with --hiveconf 
> hive.metastore.uris=thrift://127.0.0.1:1 (pass the thrift metastore host 
> and port).
> Execute the following commands:
>  # set role admin
>  # create role test; (succeeds)
>  # create role test; (hive version 2.1.1: the command is stuck waiting for a 
> response from the thrift metastore; hive version 1.2.1: the command fails 
> with the exception reported as null)
>  
> I have uploaded a patch that handles checked exceptions with MetaException 
> and wraps unchecked exceptions in TException, which fixes the problem. 
> Please review and suggest if there is a better way of handling this issue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18859) Incorrect handling of thrift metastore exceptions

2018-03-05 Thread Ganesha Shreedhara (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesha Shreedhara updated HIVE-18859:
--
Attachment: (was: HIVE-18859.patch)

> Incorrect handling of thrift metastore exceptions
> -
>
> Key: HIVE-18859
> URL: https://issues.apache.org/jira/browse/HIVE-18859
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.0, 2.1.1
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-18859.patch
>
>
> Currently, any runtime exception thrown in the thrift metastore during the 
> following operations is not sent back to the hive execution engine:
>  * grant/revoke role
>  * grant/revoke privileges
>  * create role
> This is because ThriftHiveMetastore only handles MetaException and throws 
> TException while processing these requests. So the command simply fails on 
> the thrift metastore side when a runtime exception occurs (the exception can 
> be seen in the metastore log), but the hive execution engine keeps waiting 
> for a response from the thrift metastore.
>  
> Steps to reproduce this problem:
> Launch the thrift metastore.
> Launch the hive cli with --hiveconf 
> hive.metastore.uris=thrift://127.0.0.1:1 (pass the thrift metastore host 
> and port).
> Execute the following commands:
>  # set role admin
>  # create role test; (succeeds)
>  # create role test; (hive version 2.1.1: the command is stuck waiting for a 
> response from the thrift metastore; hive version 1.2.1: the command fails 
> with the exception reported as null)
>  
> I have uploaded a patch that handles checked exceptions with MetaException 
> and wraps unchecked exceptions in TException, which fixes the problem. 
> Please review and suggest if there is a better way of handling this issue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18859) Incorrect handling of thrift metastore exceptions

2018-03-05 Thread Ganesha Shreedhara (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesha Shreedhara updated HIVE-18859:
--
Status: Patch Available  (was: In Progress)

> Incorrect handling of thrift metastore exceptions
> -
>
> Key: HIVE-18859
> URL: https://issues.apache.org/jira/browse/HIVE-18859
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1, 1.2.0
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-18859.patch
>
>
> Currently, any runtime exception thrown in the thrift metastore during the 
> following operations is not sent back to the hive execution engine:
>  * grant/revoke role
>  * grant/revoke privileges
>  * create role
> This is because ThriftHiveMetastore only handles MetaException and throws 
> TException while processing these requests. So the command simply fails on 
> the thrift metastore side when a runtime exception occurs (the exception can 
> be seen in the metastore log), but the hive execution engine keeps waiting 
> for a response from the thrift metastore.
>  
> Steps to reproduce this problem:
> Launch the thrift metastore.
> Launch the hive cli with --hiveconf 
> hive.metastore.uris=thrift://127.0.0.1:1 (pass the thrift metastore host 
> and port).
> Execute the following commands:
>  # set role admin
>  # create role test; (succeeds)
>  # create role test; (hive version 2.1.1: the command is stuck waiting for a 
> response from the thrift metastore; hive version 1.2.1: the command fails 
> with the exception reported as null)
>  
> I have uploaded a patch that handles checked exceptions with MetaException 
> and wraps unchecked exceptions in TException, which fixes the problem. 
> Please review and suggest if there is a better way of handling this issue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17896) TopNKey: Create a standalone vectorizable TopNKey operator

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387273#comment-16387273
 ] 

Hive QA commented on HIVE-17896:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913133/HIVE-17896.7.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 97 failed/errored test(s), 13483 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)


[jira] [Updated] (HIVE-18832) Support change management for trashing data files from ACID tables.

2018-03-05 Thread anishek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anishek updated HIVE-18832:
---
Status: Patch Available  (was: Open)

> Support change management for trashing data files from ACID tables.
> ---
>
> Key: HIVE-18832
> URL: https://issues.apache.org/jira/browse/HIVE-18832
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: anishek
>Assignee: anishek
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18832.0.patch
>
>
> Currently, the cleaner process and DDL drop operations delete the data 
> files. The scope for supporting change management in the source warehouse 
> for ACID table operations is given below.
> 1. The cleaner process deletes older files after compaction, aborted files, 
> etc. These need to be archived to the cmroot path.
> 2. DDL operations such as dropping a table or partition already archive the 
> deleted files. This needs to be extended to ACID tables too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18832) Support change management for trashing data files from ACID tables.

2018-03-05 Thread anishek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anishek updated HIVE-18832:
---
Attachment: HIVE-18832.0.patch

> Support change management for trashing data files from ACID tables.
> ---
>
> Key: HIVE-18832
> URL: https://issues.apache.org/jira/browse/HIVE-18832
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: anishek
>Assignee: anishek
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18832.0.patch
>
>
> Currently, the cleaner process and DDL drop operations delete the data 
> files. The scope for supporting change management in the source warehouse 
> for ACID table operations is given below.
> 1. The cleaner process deletes older files after compaction, aborted files, 
> etc. These need to be archived to the cmroot path.
> 2. DDL operations such as dropping a table or partition already archive the 
> deleted files. This needs to be extended to ACID tables too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17896) TopNKey: Create a standalone vectorizable TopNKey operator

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387239#comment-16387239
 ] 

Hive QA commented on HIVE-17896:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 37 new + 420 unchanged - 0 
fixed = 457 total (was 420) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9501/dev-support/hive-personality.sh
 |
| git revision | master / 8d88cfa |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9501/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9501/yetus/patch-asflicense-problems.txt
 |
| modules | C: common serde itests ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9501/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> TopNKey: Create a standalone vectorizable TopNKey operator
> --
>
> Key: HIVE-17896
> URL: https://issues.apache.org/jira/browse/HIVE-17896
> Project: Hive
>  Issue Type: New Feature
>  Components: Operators
>Affects Versions: 3.0.0
>Reporter: Gopal V
>Assignee: Teddy Choi
>Priority: Major
> Attachments: HIVE-17896.1.patch, HIVE-17896.3.patch, 
> HIVE-17896.4.patch, HIVE-17896.5.patch, HIVE-17896.6.patch, HIVE-17896.7.patch
>
>
> For TPC-DS Query27, the TopN operation is delayed by the group-by: the 
> group-by operator buffers up all the rows before 99% of them are discarded 
> by the TopN hash within the ReduceSink operator.
> The RS TopN operator is very restrictive, as it only supports filtering on 
> the shuffle keys, but it is better to do this before breaking the vectors 
> into rows and losing the isRepeating properties.
> Adding a TopN Key operator to the physical operator tree allows the 
> following to happen:
> GBY->RS(Top=1)
> can become 
> TNK(1)->GBY->RS(Top=1)
> so that the TopNKey can remove rows before they are buffered into the GBY 
> and consume memory.
> Here's the equivalent implementation in Presto
> 
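A minimal sketch of the TopN Key idea described above. This is not Hive's
actual operator (and ties at the boundary are simply dropped in this
simplification): it keeps the N smallest keys seen so far and drops rows whose
key can no longer make the top N, so the downstream group-by buffers far fewer
rows.

{code:java}
import java.util.TreeSet;

// Illustrative filter, not Hive's TopNKey operator: rows whose key cannot be
// in the current top N are dropped before they reach the group-by.
public final class TopNKeyFilter<K extends Comparable<K>> {

  private final int n;
  private final TreeSet<K> topKeys = new TreeSet<>();

  public TopNKeyFilter(int n) { this.n = n; }

  /** Returns true if a row with this key should be forwarded downstream. */
  public boolean accept(K key) {
    if (topKeys.contains(key) || topKeys.size() < n) {
      topKeys.add(key);
      return true;
    }
    if (key.compareTo(topKeys.last()) < 0) {
      topKeys.pollLast();       // evict the current worst of the top N keys
      topKeys.add(key);
      return true;
    }
    return false;               // key cannot be in the top N; drop the row
  }
}
{code}

In this model, placing a TopNKeyFilter with n = 1 in front of the group-by
corresponds to the TNK(1) operator in the plan sketch above.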

[jira] [Updated] (HIVE-18859) Incorrect handling of thrift metastore exceptions

2018-03-05 Thread Ganesha Shreedhara (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesha Shreedhara updated HIVE-18859:
--
Status: In Progress  (was: Patch Available)

> Incorrect handling of thrift metastore exceptions
> -
>
> Key: HIVE-18859
> URL: https://issues.apache.org/jira/browse/HIVE-18859
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1, 1.2.0
>Reporter: Ganesha Shreedhara
>Assignee: Ganesha Shreedhara
>Priority: Major
> Attachments: HIVE-18859.patch
>
>
> Currently, any runtime exception thrown in the thrift metastore during the 
> following operations is not sent back to the hive execution engine:
>  * grant/revoke role
>  * grant/revoke privileges
>  * create role
> This is because ThriftHiveMetastore only handles MetaException and throws 
> TException while processing these requests. So the command simply fails on 
> the thrift metastore side when a runtime exception occurs (the exception can 
> be seen in the metastore log), but the hive execution engine keeps waiting 
> for a response from the thrift metastore.
>  
> Steps to reproduce this problem:
> Launch the thrift metastore.
> Launch the hive cli with --hiveconf 
> hive.metastore.uris=thrift://127.0.0.1:1 (pass the thrift metastore host 
> and port).
> Execute the following commands:
>  # set role admin
>  # create role test; (succeeds)
>  # create role test; (hive version 2.1.1: the command is stuck waiting for a 
> response from the thrift metastore; hive version 1.2.1: the command fails 
> with the exception reported as null)
>  
> I have uploaded a patch that handles checked exceptions with MetaException 
> and wraps unchecked exceptions in TException, which fixes the problem. 
> Please review and suggest if there is a better way of handling this issue.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18718) Integer like types throws error when there is a mismatch

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387210#comment-16387210
 ] 

Hive QA commented on HIVE-18718:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913118/HIVE-18718.3.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 13478 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Commented] (HIVE-18846) Query results cache: Allow queries to refer to the pending results of a query that has not finished yet

2018-03-05 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387193#comment-16387193
 ] 

Jason Dere commented on HIVE-18846:
---

Work-in-progress patch: it passes the existing qfile tests but still needs 
testing with concurrent queries to see whether it causes queries to wait on 
the executing query to finish.
This patch makes the query wait for the pending results by blocking during 
query compilation; I am not sure whether that is the best approach. If 
HIVE-17626 gets committed, we would have another possible approach where the 
query could block during query execution (and cause a retryable failure if it 
turns out that the pending results did not produce a valid cacheable result).
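A rough sketch of the "wait on pending results" idea in the comment above. The
class and its methods are hypothetical, not the patch's actual cache API: a
later query that finds an entry still in PENDING state blocks until the
producing query either publishes valid results or invalidates the entry.

{code:java}
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch, not Hive's actual query results cache: models a cache
// entry whose results are still being produced by another query.
public final class PendingCacheEntry {

  public enum State { PENDING, VALID, INVALID }

  private volatile State state = State.PENDING;
  private final CountDownLatch done = new CountDownLatch(1);

  /** Called by the query that is producing the cached results. */
  public void complete(boolean valid) {
    state = valid ? State.VALID : State.INVALID;
    done.countDown();
  }

  /**
   * Called by a later query, e.g. while it blocks during compilation as the
   * patch above does. Returns true if the finished results can be reused.
   */
  public boolean awaitUsable() throws InterruptedException {
    done.await();
    return state == State.VALID;
  }
}
{code}

In this model, the blocking during compilation corresponds to calling
awaitUsable() before deciding whether to rewrite the query against the cached
results.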

> Query results cache: Allow queries to refer to the pending results of a query 
> that has not finished yet
> ---
>
> Key: HIVE-18846
> URL: https://issues.apache.org/jira/browse/HIVE-18846
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Planning
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-18846.1.patch
>
>
> Currently, a query's results can only be looked up in the cache if the query 
> has completely finished execution. Allow new queries to use the results cache 
> to find queries that are still executing so they can re-use the results when 
> the query has finished.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18846) Query results cache: Allow queries to refer to the pending results of a query that has not finished yet

2018-03-05 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-18846:
--
Attachment: HIVE-18846.1.patch

> Query results cache: Allow queries to refer to the pending results of a query 
> that has not finished yet
> ---
>
> Key: HIVE-18846
> URL: https://issues.apache.org/jira/browse/HIVE-18846
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Planning
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-18846.1.patch
>
>
> Currently, a query's results can only be looked up in the cache if the query 
> has completely finished execution. Allow new queries to use the results cache 
> to find queries that are still executing so they can re-use the results when 
> the query has finished.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18718) Integer like types throws error when there is a mismatch

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387174#comment-16387174
 ] 

Hive QA commented on HIVE-18718:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9500/dev-support/hive-personality.sh
 |
| git revision | master / b4e7530 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9500/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9500/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Integer like types throws error when there is a mismatch
> 
>
> Key: HIVE-18718
> URL: https://issues.apache.org/jira/browse/HIVE-18718
> Project: Hive
>  Issue Type: Improvement
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18718.1.patch, HIVE-18718.2.patch, 
> HIVE-18718.3.patch
>
>
> If a value is saved with the long type and read as the int type, it results in
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-16992) LLAP: better default lambda for LRFU policy

2018-03-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-16992:
---

Assignee: Sergey Shelukhin  (was: Gopal V)

> LLAP: better default lambda for LRFU policy
> ---
>
> Key: HIVE-16992
> URL: https://issues.apache.org/jira/browse/HIVE-16992
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>
> LRFU is currently skewed heavily towards LRU; tens or hundreds of thousands 
> of buffers are tracked during a typical workload, but the heap size is 
> around 700. We should see whether making it closer to LFU (by tweaking the 
> lambda) improves the hit rate for small queries infrequently interleaved 
> with large scans, and whether it has negative effects due to perf overhead.
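For background on what "tweaking the lambda" changes, here is the standard
LRFU weighting in a tiny sketch (the generic LRFU formula, not LLAP's actual
cache policy code): each access contributes 0.5^(lambda * age) to a buffer's
priority, so a lambda near 0 behaves like LFU and a lambda near 1 behaves like
LRU.

{code:java}
// Generic LRFU bookkeeping (standard formula, not LLAP's implementation):
// priority = sum over past accesses of 0.5^(lambda * age). A lambda close to
// 0 weighs all accesses almost equally (LFU-like); a lambda close to 1 makes
// only the most recent access matter (LRU-like).
public final class LrfuPriority {

  private final double lambda;
  private double crf = 0.0;          // combined recency/frequency value
  private long lastAccess = 0;

  public LrfuPriority(double lambda) { this.lambda = lambda; }

  public void touch(long now) {
    // Decay the accumulated value by the elapsed time, then count this access.
    crf = crf * Math.pow(0.5, lambda * (now - lastAccess)) + 1.0;
    lastAccess = now;
  }

  public double priorityAt(long now) {
    return crf * Math.pow(0.5, lambda * (now - lastAccess));
  }
}
{code}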



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18837) add a flag and disable some object pools in LLAP until further testing

2018-03-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18837:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the review!

> add a flag and disable some object pools in LLAP until further testing
> --
>
> Key: HIVE-18837
> URL: https://issues.apache.org/jira/browse/HIVE-18837
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18837.patch
>
>
> There appears to be a subtle concurrency issue in FixedSizedObjectPool that 
> happens with multiple consumers, where some object may be retrieved twice.
> Unfortunately, running a load test for hours does not trigger it for me, and 
> overall it happens extremely rarely and only on non-specific tests; it is 
> hard to add debug info at this level to determine how it could have 
> happened, and interlocked operations in the trace may actually eliminate the 
> issue. I suspect it has something to do with aggressive assumptions made 
> about locking, array elements, and the memory model. Maybe that can be 
> simplified without much perf loss.
> Anyway, for now we will disable the pools where multiple consumers use them.
> We need to test perf to see if these two pools even matter; if so, we can 
> simplify the model as described above or debug the issue some other way.
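An illustrative sketch of the mitigation described above, with hypothetical
names rather than the actual FixedSizedObjectPool change: behind a flag,
multi-consumer call sites bypass the shared pool entirely, so each consumer
gets a freshly created object and the suspected double-take race cannot occur.

{code:java}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Supplier;

// Hypothetical wrapper, not Hive code: when pooling is disabled by the flag,
// always allocate a new object instead of sharing pooled instances between
// multiple consumers.
public final class MaybePooled<T> {

  private final boolean poolingEnabled;                 // the new config flag
  private final Queue<T> pool = new ConcurrentLinkedQueue<>();
  private final Supplier<T> factory;

  public MaybePooled(boolean poolingEnabled, Supplier<T> factory) {
    this.poolingEnabled = poolingEnabled;
    this.factory = factory;
  }

  public T take() {
    if (!poolingEnabled) {
      return factory.get();          // pooling disabled: no shared state at all
    }
    T pooled = pool.poll();
    return pooled != null ? pooled : factory.get();
  }

  public void offer(T obj) {
    if (poolingEnabled) {
      pool.offer(obj);
    }
  }
}
{code}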



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17990) Add Thrift and DB storage for Schema Registry objects

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387149#comment-16387149
 ] 

Hive QA commented on HIVE-17990:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913116/HIVE-17990.2.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 13526 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)


[jira] [Comment Edited] (HIVE-18825) Define ValidTxnList before starting query optimization

2018-03-05 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387145#comment-16387145
 ] 

Eugene Koifman edited comment on HIVE-18825 at 3/6/18 2:02 AM:
---

I think there is a problem here that I should have thought of earlier.
Right now we lock in the snapshot after lock acquisition. This ordering is 
important if we ever want to support lock-based concurrency control 
(something I think we should do).

Suppose you have two concurrent transactions, both running "update T1 set x = 
x + 1". If we acquire the update lock first and then record the snapshot, the 
second txn to get the lock will see the result of the previous one's write. 
If we lock in the snapshot before acquiring the lock, both transactions may 
lock in exactly the same snapshot, and locking becomes useless because the 
second one will still read an "old" snapshot.

Could the predicates you want be inserted at compile time, but bound to 
actual values in some post-processing after (or at the end of) 
{{Driver.acquireLocks()}} as currently implemented?


was (Author: ekoifman):
I think there is a problem here that I should have thought of earlier.
Right now we lock in the snapshot after lock acquisition. This ordering is 
important if we ever want to support lock-based concurrency control.

Suppose you have two concurrent transactions, both running "update T1 set x = 
x + 1". If we acquire the update lock first and then record the snapshot, the 
second txn to get the lock will see the result of the previous one's write. 
If we lock in the snapshot before acquiring the lock, both transactions may 
lock in exactly the same snapshot, and locking becomes useless because the 
second one will still read an "old" snapshot.

Could the predicates you want be inserted at compile time, but bound to 
actual values in some post-processing after (or at the end of) 
{{Driver.acquireLocks()}} as currently implemented?

> Define ValidTxnList before starting query optimization
> --
>
> Key: HIVE-18825
> URL: https://issues.apache.org/jira/browse/HIVE-18825
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18825.01.patch, HIVE-18825.02.patch, 
> HIVE-18825.03.patch, HIVE-18825.04.patch, HIVE-18825.patch
>
>
> Consider a set of tables used by a materialized view where inserts happened 
> after the materialization was created. To compute incremental view 
> maintenance, we need to be able to filter only new rows from those base 
> tables. That can be done by inserting a filter operator with condition e.g. 
> {{ROW\_\_ID.transactionId < highwatermark and ROW\_\_ID.transactionId NOT 
> IN()}} on top of the MV's query definition and triggering the 
> rewriting (which should in turn produce a partial rewriting). However, to do 
> that, we need to have a value for {{ValidTxnList}} during query compilation 
> so we know the snapshot that we are querying.
> This patch aims to generate {{ValidTxnList}} before query optimization. There 
> should not be any visible changes for the end user.
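
For readers unfamiliar with the virtual {{ROW__ID}} column, here is a minimal
HiveQL sketch of the kind of filter described above. The table name, columns,
and the literal high-water mark and open-transaction id are hypothetical; the
real values would come from the {{ValidTxnList}} captured at compile time.

{noformat}
-- Illustrative only: 'sales' is assumed to be an ACID (transactional) table.
-- The rewriting would conceptually restrict the base-table scan to rows
-- written after the materialization was built:
SELECT key, val
FROM sales
WHERE ROW__ID.transactionId < 1000          -- assumed high-water mark
  AND ROW__ID.transactionId NOT IN (998);   -- assumed still-open/aborted txns
{noformat}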



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18825) Define ValidTxnList before starting query optimization

2018-03-05 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387145#comment-16387145
 ] 

Eugene Koifman commented on HIVE-18825:
---

I think there is a problem here that I should've thought of earlier.
Right now we lock in the snapshot after lock acquisition.  This ordering is 
important if we ever want to support lock based concurrency control.

Suppose you have 2 concurrent transactions, both running "update T1 set x = x + 
1".  If we acquire the update lock first and then record the snapshot, the 2nd 
txn to get the lock will see the result of the previous one's write.  If we 
lock in the snapshot before acquiring the lock, both transactions may lock in 
exactly the same snapshot, and locking becomes useless as the 2nd will still 
read an "old" snapshot.

Could the predicates you want be inserted at compile time, but bound to actual 
values as some post processing after (or at the end of) 
{{Driver.acquireLocks()}} as currently implemented?
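
To make the two-transaction scenario above concrete, here is a minimal HiveQL
sketch; the table definition and the session labels are illustrative and not
part of any patch.

{noformat}
-- Assumed ACID table (illustrative only):
-- CREATE TABLE t1 (x INT) STORED AS ORC TBLPROPERTIES ('transactional'='true');

-- Sessions A and B both issue:
UPDATE t1 SET x = x + 1;

-- Lock-then-snapshot (current order): the second session to get the update lock
-- records its snapshot afterwards, so it sees the first session's write and x
-- ends up incremented twice.
-- Snapshot-then-lock: both sessions may record the same snapshot before locking,
-- so the second still reads the "old" x and one increment is lost even though
-- the lock serialized the writes.
{noformat}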

> Define ValidTxnList before starting query optimization
> --
>
> Key: HIVE-18825
> URL: https://issues.apache.org/jira/browse/HIVE-18825
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18825.01.patch, HIVE-18825.02.patch, 
> HIVE-18825.03.patch, HIVE-18825.04.patch, HIVE-18825.patch
>
>
> Consider a set of tables used by a materialized view where inserts happened 
> after the materialization was created. To compute incremental view 
> maintenance, we need to be able to filter only new rows from those base 
> tables. That can be done by inserting a filter operator with condition e.g. 
> {{ROW\_\_ID.transactionId < highwatermark and ROW\_\_ID.transactionId NOT 
> IN()}} on top of the MV's query definition and triggering the 
> rewriting (which should in turn produce a partial rewriting). However, to do 
> that, we need to have a value for {{ValidTxnList}} during query compilation 
> so we know the snapshot that we are querying.
> This patch aims to generate {{ValidTxnList}} before query optimization. There 
> should not be any visible changes for the end user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18571) stats issues for MM tables

2018-03-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387138#comment-16387138
 ] 

Sergey Shelukhin commented on HIVE-18571:
-

This patch updates the stats for selects where there used to be incorrect stats 
and now there are no row-count stats, extends the fix for the create-table 
WriteEntry to also work for default props (mm_default bad update), fixes the 
replication test via a test bypass (I filed a follow-up JIRA), and addresses a 
couple more CR comments.

> stats issues for MM tables
> --
>
> Key: HIVE-18571
> URL: https://issues.apache.org/jira/browse/HIVE-18571
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18571.01.patch, HIVE-18571.02.patch, 
> HIVE-18571.03.patch, HIVE-18571.04.patch, HIVE-18571.05.patch, 
> HIVE-18571.patch
>
>
> There are multiple stats aggregation issues with MM tables.
> Some simple stats are double counted and some stats (simple stats) are 
> invalid for ACID table dirs altogether. 
> I have a patch almost ready, need to fix some more stuff and clean up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18571) stats issues for MM tables

2018-03-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18571:

Attachment: HIVE-18571.05.patch

> stats issues for MM tables
> --
>
> Key: HIVE-18571
> URL: https://issues.apache.org/jira/browse/HIVE-18571
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18571.01.patch, HIVE-18571.02.patch, 
> HIVE-18571.03.patch, HIVE-18571.04.patch, HIVE-18571.05.patch, 
> HIVE-18571.patch
>
>
> There are multiple stats aggregation issues with MM tables.
> Some simple stats are double counted and some stats (simple stats) are 
> invalid for ACID table dirs altogether. 
> I have a patch almost ready, need to fix some more stuff and clean up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18571) stats issues for MM tables

2018-03-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18571:

Attachment: (was: HIVE-18571.05.patch)

> stats issues for MM tables
> --
>
> Key: HIVE-18571
> URL: https://issues.apache.org/jira/browse/HIVE-18571
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18571.01.patch, HIVE-18571.02.patch, 
> HIVE-18571.03.patch, HIVE-18571.04.patch, HIVE-18571.patch
>
>
> There are multiple stats aggregation issues with MM tables.
> Some simple stats are double counted and some stats (simple stats) are 
> invalid for ACID table dirs altogether. 
> I have a patch almost ready, need to fix some more stuff and clean up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18571) stats issues for MM tables

2018-03-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18571:

Attachment: HIVE-18571.05.patch

> stats issues for MM tables
> --
>
> Key: HIVE-18571
> URL: https://issues.apache.org/jira/browse/HIVE-18571
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18571.01.patch, HIVE-18571.02.patch, 
> HIVE-18571.03.patch, HIVE-18571.04.patch, HIVE-18571.patch
>
>
> There are multiple stats aggregation issues with MM tables.
> Some simple stats are double counted and some stats (simple stats) are 
> invalid for ACID table dirs altogether. 
> I have a patch almost ready, need to fix some more stuff and clean up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17896) TopNKey: Create a standalone vectorizable TopNKey operator

2018-03-05 Thread Teddy Choi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387122#comment-16387122
 ] 

Teddy Choi commented on HIVE-17896:
---

Updated the patch with the latest master branch.

> TopNKey: Create a standalone vectorizable TopNKey operator
> --
>
> Key: HIVE-17896
> URL: https://issues.apache.org/jira/browse/HIVE-17896
> Project: Hive
>  Issue Type: New Feature
>  Components: Operators
>Affects Versions: 3.0.0
>Reporter: Gopal V
>Assignee: Teddy Choi
>Priority: Major
> Attachments: HIVE-17896.1.patch, HIVE-17896.3.patch, 
> HIVE-17896.4.patch, HIVE-17896.5.patch, HIVE-17896.6.patch, HIVE-17896.7.patch
>
>
> For TPC-DS Query27, the TopN operation is delayed by the group-by - the 
> group-by operator buffers up all the rows before discarding 99% of the 
> rows in the TopN Hash within the ReduceSink Operator.
> The RS TopN operator is very restrictive as it only supports doing the 
> filtering on the shuffle keys, but it is better to do this before breaking 
> the vectors into rows and losing the isRepeating properties.
> Adding a TopN Key operator in the physical operator tree allows the following 
> to happen.
> GBY->RS(Top=1)
> can become 
> TNK(1)->GBY->RS(Top=1)
> This way, the TopNKey can remove rows before they are buffered into the GBY 
> and consume memory.
> Here's the equivalent implementation in Presto
> https://github.com/prestodb/presto/blob/master/presto-main/src/main/java/com/facebook/presto/operator/TopNOperator.java#L35
> Adding this as a sub-feature of GroupBy prevents further optimizations if the 
> GBY is on keys "a,b,c" and the TopNKey is on just "a".
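
For illustration, a query of the shape this targets might look like the
following; the table and column names are made up, and the operator
abbreviations in the comments are the ones used above.

{noformat}
-- Hypothetical aggregation-with-limit query (TPC-DS Query27 is the real case):
SELECT item_id, AVG(quantity) AS avg_qty
FROM store_sales_demo
GROUP BY item_id
ORDER BY item_id
LIMIT 1;

-- Current plan shape: GBY -> RS(Top=1), i.e. every group is materialized before
-- the ReduceSink's TopN hash throws most of them away.
-- Proposed shape: TNK(1) -> GBY -> RS(Top=1), so rows whose key cannot be in the
-- top 1 are dropped before the group-by buffers them.
{noformat}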



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18571) stats issues for MM tables

2018-03-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387121#comment-16387121
 ] 

Sergey Shelukhin commented on HIVE-18571:
-

The stats changes in select explain plans are due to disappearing incorrect 
stats.
For example, the table in acid_nullscan has 11 rows, but stats currently report 1 
row because of the last transaction inserting 1 row (I think).
New results report 88 rows because the stats don't have a row count at all; the 
row count in explain is therefore generated from some heuristic.

> stats issues for MM tables
> --
>
> Key: HIVE-18571
> URL: https://issues.apache.org/jira/browse/HIVE-18571
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18571.01.patch, HIVE-18571.02.patch, 
> HIVE-18571.03.patch, HIVE-18571.04.patch, HIVE-18571.patch
>
>
> There are multiple stats aggregation issues with MM tables.
> Some simple stats are double counted and some stats (simple stats) are 
> invalid for ACID table dirs altogether. 
> I have a patch almost ready, need to fix some more stuff and clean up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17896) TopNKey: Create a standalone vectorizable TopNKey operator

2018-03-05 Thread Teddy Choi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teddy Choi updated HIVE-17896:
--
Attachment: HIVE-17896.7.patch

> TopNKey: Create a standalone vectorizable TopNKey operator
> --
>
> Key: HIVE-17896
> URL: https://issues.apache.org/jira/browse/HIVE-17896
> Project: Hive
>  Issue Type: New Feature
>  Components: Operators
>Affects Versions: 3.0.0
>Reporter: Gopal V
>Assignee: Teddy Choi
>Priority: Major
> Attachments: HIVE-17896.1.patch, HIVE-17896.3.patch, 
> HIVE-17896.4.patch, HIVE-17896.5.patch, HIVE-17896.6.patch, HIVE-17896.7.patch
>
>
> For TPC-DS Query27, the TopN operation is delayed by the group-by - the 
> group-by operator buffers up all the rows before discarding 99% of the 
> rows in the TopN Hash within the ReduceSink Operator.
> The RS TopN operator is very restrictive as it only supports doing the 
> filtering on the shuffle keys, but it is better to do this before breaking 
> the vectors into rows and losing the isRepeating properties.
> Adding a TopN Key operator in the physical operator tree allows the following 
> to happen.
> GBY->RS(Top=1)
> can become 
> TNK(1)->GBY->RS(Top=1)
> This way, the TopNKey can remove rows before they are buffered into the GBY 
> and consume memory.
> Here's the equivalent implementation in Presto
> https://github.com/prestodb/presto/blob/master/presto-main/src/main/java/com/facebook/presto/operator/TopNOperator.java#L35
> Adding this as a sub-feature of GroupBy prevents further optimizations if the 
> GBY is on keys "a,b,c" and the TopNKey is on just "a".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18867) create_with_constraints_duplicate_name and default_constraint_invalid_default_value_length failing

2018-03-05 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387104#comment-16387104
 ] 

Ashutosh Chauhan commented on HIVE-18867:
-

+1

> create_with_constraints_duplicate_name and 
> default_constraint_invalid_default_value_length failing 
> ---
>
> Key: HIVE-18867
> URL: https://issues.apache.org/jira/browse/HIVE-18867
> Project: Hive
>  Issue Type: Test
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18867.1.patch
>
>
> The output files for both of these need to be updated
> {noformat}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing default_constraint_invalid_default_value_length.q 
> 1c1
> < FAILED: SemanticException [Error 10326]: Invalid Constraint syntax Invalid 
> Default value:  
> '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'
>  .Maximum character length allowed is 255 .
> ---
> > FAILED: SemanticException [Error 10326]: Invalid Constraint syntax Invalid 
> > Default value:  
> > '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'Maximum
> >  character length allowed is 255 .
> {noformat}
> {noformat}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing create_with_constraints_duplicate_name.q 
> 13c13
> < FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> InvalidObjectException(message:Constraint name already exists: pk1)
> ---
> > FAILED: Execution Error, return code 1 from 
> > org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:One or more 
> > instances could not be made persistent)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17990) Add Thrift and DB storage for Schema Registry objects

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387086#comment-16387086
 ] 

Hive QA commented on HIVE-17990:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} hcatalog-unit in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
29s{color} | {color:red} standalone-metastore: The patch generated 149 new + 
1400 unchanged - 6 fixed = 1549 total (was 1406) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 285 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9499/dev-support/hive-personality.sh
 |
| git revision | master / b4e7530 |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9499/yetus/patch-mvninstall-itests_hcatalog-unit.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9499/yetus/diff-checkstyle-standalone-metastore.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9499/yetus/whitespace-eol.txt 
|
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9499/yetus/patch-asflicense-problems.txt
 |
| modules | C: itests/hcatalog-unit standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9499/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Add Thrift and DB storage for Schema Registry objects
> -
>
> Key: HIVE-17990
> URL: https://issues.apache.org/jira/browse/HIVE-17990
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
> Attachments: Adding-Schema-Registry-to-Metastore.pdf, 
> HIVE-17990.2.patch, HIVE-17990.patch
>
>
> This JIRA tracks changes to Thrift, RawStore, and DB scripts to support 
> objects in the Schema Registry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18864) WriteId high water mark (HWM) is incorrect if ValidWriteIdList is obtained after allocating writeId by current transaction.

2018-03-05 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387081#comment-16387081
 ] 

Eugene Koifman commented on HIVE-18864:
---

Alternatively, write_HWM should always be set to the value that corresponds to 
txn_HWM, rather than explicitly marking it 'open'.

> WriteId high water mark (HWM) is incorrect if ValidWriteIdList is obtained 
> after allocating writeId by current transaction.
> ---
>
> Key: HIVE-18864
> URL: https://issues.apache.org/jira/browse/HIVE-18864
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID
> Fix For: 3.0.0
>
>
> For multi-statement txns, it is possible that write on a table happens after 
> a read. Let's see the below scenario.
>  # Committed txn=9 writes on table T1 with writeId=5.
>  # Open txn=10. ValidTxnList(open:null, txn_HWM=10),
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Open txn=11, writes on table T1 with writeid=6.
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Write table T1 from txn=10 with writeId=7.
>  # Read table T1 from txn=10. {color:#d04437}*ValidWriteIdList(open:null, 
> write_HWM=7)*. – This read will be able to see rows added by txn=11, which is 
> still open.{color}
> {color:#d04437}So, we need to rebuild the open/aborted list of 
> ValidWriteIdList based on txn_HWM. Any writeId allocated by a txnId > txn_HWM 
> should be marked as open.{color}
> cc [~ekoifman], [~thejas]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18749) Need to replace transactionId with writeId in RecordIdentifier and other relevant contexts.

2018-03-05 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387067#comment-16387067
 ] 

Eugene Koifman commented on HIVE-18749:
---

+1

> Need to replace transactionId with writeId in RecordIdentifier and other 
> relevant contexts.
> ---
>
> Key: HIVE-18749
> URL: https://issues.apache.org/jira/browse/HIVE-18749
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Minor
>  Labels: ACID, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18749.01.patch, HIVE-18749.02.patch
>
>
> The per-table write ID implementation (HIVE-18192) has replaced the global 
> transaction ID with a write ID as the primary key for a row, marked by 
> RecordIdentifier.Field.transactionId.
> Need to replace the same with writeId and update all test result files.
> Also, need to update other references (methods/variables) that currently 
> use transactionId instead of writeId.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17626) Query reoptimization using cached runtime statistics

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387058#comment-16387058
 ] 

Hive QA commented on HIVE-17626:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913111/HIVE-17626.10.patch

{color:green}SUCCESS:{color} +1 due to 16 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 102 failed/errored test(s), 13495 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)


[jira] [Commented] (HIVE-17626) Query reoptimization using cached runtime statistics

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387038#comment-16387038
 ] 

Hive QA commented on HIVE-17626:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} The patch common passed checkstyle {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m  
7s{color} | {color:red} root: The patch generated 92 new + 1957 unchanged - 167 
fixed = 2049 total (was 2124) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} druid-handler: The patch generated 0 new + 1 
unchanged - 1 fixed = 1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} itests/util: The patch generated 0 new + 103 
unchanged - 1 fixed = 103 total (was 104) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
48s{color} | {color:red} ql: The patch generated 92 new + 1424 unchanged - 164 
fixed = 1516 total (was 1588) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} standalone-metastore: The patch generated 1 new + 3 
unchanged - 2 fixed = 4 total (was 5) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9498/dev-support/hive-personality.sh
 |
| git revision | master / b4e7530 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9498/yetus/diff-checkstyle-root.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9498/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9498/yetus/diff-checkstyle-standalone-metastore.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9498/yetus/whitespace-eol.txt 
|
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9498/yetus/patch-asflicense-problems.txt
 |
| modules | C: common . druid-handler 

[jira] [Commented] (HIVE-18867) create_with_constraints_duplicate_name and default_constraint_invalid_default_value_length failing

2018-03-05 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16387022#comment-16387022
 ] 

Vineet Garg commented on HIVE-18867:


[~ashutoshc] Can you take a look?

> create_with_constraints_duplicate_name and 
> default_constraint_invalid_default_value_length failing 
> ---
>
> Key: HIVE-18867
> URL: https://issues.apache.org/jira/browse/HIVE-18867
> Project: Hive
>  Issue Type: Test
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18867.1.patch
>
>
> The output files for both of these need to be updated
> {noformat}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing default_constraint_invalid_default_value_length.q 
> 1c1
> < FAILED: SemanticException [Error 10326]: Invalid Constraint syntax Invalid 
> Default value:  
> '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'
>  .Maximum character length allowed is 255 .
> ---
> > FAILED: SemanticException [Error 10326]: Invalid Constraint syntax Invalid 
> > Default value:  
> > '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'Maximum
> >  character length allowed is 255 .
> {noformat}
> {noformat}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing create_with_constraints_duplicate_name.q 
> 13c13
> < FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> InvalidObjectException(message:Constraint name already exists: pk1)
> ---
> > FAILED: Execution Error, return code 1 from 
> > org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:One or more 
> > instances could not be made persistent)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18868) fix TestReplicationScenarios & TRSAcross... tests to use ACID txn manager

2018-03-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18868:

Summary: fix TestReplicationScenarios & TRSAcross... tests to use ACID txn 
manager  (was: fix TestReplicationScenarios* tests to use ACID txn manager)

> fix TestReplicationScenarios & TRSAcross... tests to use ACID txn manager
> -
>
> Key: HIVE-18868
> URL: https://issues.apache.org/jira/browse/HIVE-18868
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Thejas M Nair
>Priority: Major
>
> These two tests use ACID tables with dummy txn manager, which is not supposed 
> to work. 
> There was a bug in ACID check in SemanticAnalyzer that enabled these tests to 
> run. The bug is being fixed in HIVE-18571, after which creating an ACID table 
> starts failing.
> I tried to switch them to DbTxnManager; it fixed the tests using ACID tables, 
> but some other test cases start to hang for some reason that is not entirely 
> obvious (metastore connection issues?).
> I added another in-test flag in HIVE-18571 for now, but overall these tests 
> should either stop using ACID tables, or set up ACID stuff properly 
> (DbTxnManager and concurrency on).
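
For reference, the "proper ACID setup" mentioned above usually amounts to the
following session settings; this is a sketch of the intent, not the exact
change the tests would need.

{noformat}
-- Run ACID tables with the ACID transaction manager instead of DummyTxnManager:
SET hive.support.concurrency=true;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
-- With these set, CREATE TABLE ... TBLPROPERTIES ('transactional'='true') is
-- expected to work; with the dummy manager it should fail once the HIVE-18571
-- check is in place.
{noformat}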



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18868) fix TestReplicationScenarios* tests to use ACID txn manager

2018-03-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18868:

Description: 
These two tests use ACID tables with dummy txn manager, which is not supposed 
to work. 
There was a bug in ACID check in SemanticAnalyzer that enabled these tests to 
run. The bug is being fixed in HIVE-18571, after which creating an ACID table 
starts failing.
I tried to switch them to DbTxnManager; it fixed the tests using ACID tables, 
but some other test cases start to hang for some reason that is not entirely 
obvious (metastore connection issues?).

I added another in-test flag in HIVE-18571 for now, but overall these tests 
should either stop using ACID tables, or set up ACID stuff properly 
(DbTxnManager and concurrency on).

  was:
These two tests use ACID tables with dummy txn manager, which is not supposed 
to work. 
There was a bug in ACID check in SemanticAnalyzer which is being fixed in 
HIVE-18571, after which create ACID table starts failing in these.
I tried to switch them to DbTxnManager; it fixed the tests using ACID tables, 
but some tests start to hang for some reason that is not entirely obvious 
(metastore connection issues?).

I added another in-test flag in HIVE-18571 for now, but overall these tests 
should either stop using ACID tables, or set up ACID stuff properly 
(DbTxnManager and concurrency on).


> fix TestReplicationScenarios* tests to use ACID txn manager
> ---
>
> Key: HIVE-18868
> URL: https://issues.apache.org/jira/browse/HIVE-18868
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Thejas M Nair
>Priority: Major
>
> These two tests use ACID tables with dummy txn manager, which is not supposed 
> to work. 
> There was a bug in ACID check in SemanticAnalyzer that enabled these tests to 
> run. The bug is being fixed in HIVE-18571, after which creating an ACID table 
> starts failing.
> I tried to switch them to DbTxnManager; it fixed the tests using ACID tables, 
> but some other test cases start to hang for some reason that is not entirely 
> obvious (metastore connection issues?).
> I added another in-test flag in HIVE-18571 for now, but overall these tests 
> should either stop using ACID tables, or set up ACID stuff properly 
> (DbTxnManager and concurrency on).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18868) fix TestReplicationScenarios* tests to use ACID txn manager

2018-03-05 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-18868:
---


> fix TestReplicationScenarios* tests to use ACID txn manager
> ---
>
> Key: HIVE-18868
> URL: https://issues.apache.org/jira/browse/HIVE-18868
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Thejas M Nair
>Priority: Major
>
> These two tests use ACID tables with dummy txn manager, which is not supposed 
> to work. 
> There was a bug in ACID check in SemanticAnalyzer which is being fixed in 
> HIVE-18571, after which create ACID table starts failing in these.
> I tried to switch them to DbTxnManager; it fixed the tests using ACID tables, 
> but some tests start to hang for some reason that is not entirely obvious 
> (metastore connection issues?).
> I added another in-test flag in HIVE-18571 for now, but overall these tests 
> should either stop using ACID tables, or set up ACID stuff properly 
> (DbTxnManager and concurrency on).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18867) create_with_constraints_duplicate_name and default_constraint_invalid_default_value_length failing

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386954#comment-16386954
 ] 

Hive QA commented on HIVE-18867:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913109/HIVE-18867.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 13080 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)


[jira] [Assigned] (HIVE-18863) trunc() calls itself trunk() in an error message

2018-03-05 Thread Bharathkrishna Guruvayoor Murali (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharathkrishna Guruvayoor Murali reassigned HIVE-18863:
---

Assignee: Bharathkrishna Guruvayoor Murali

> trunc() calls itself trunk() in an error message
> 
>
> Key: HIVE-18863
> URL: https://issues.apache.org/jira/browse/HIVE-18863
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Reporter: Tim Armstrong
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Minor
>  Labels: newbie
>
> {noformat}
> > select  trunc('millennium', cast('2001-02-16 20:38:40' as timestamp))
> FAILED: SemanticException Line 0:-1 Argument type mismatch ''2001-02-16 
> 20:38:40'': trunk() only takes STRING/CHAR/VARCHAR types as second argument, 
> got TIMESTAMP
> {noformat}
> I saw this on a derivative of Hive 1.1.0 (cdh5.15.0), but the string still 
> seems to be present on master:
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java#L262
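
For context, only the message text is wrong; the function itself behaves as
documented. A couple of valid invocations (dates and format strings chosen
arbitrarily):

{noformat}
SELECT trunc('2001-02-16', 'MM');    -- 2001-02-01
SELECT trunc('2001-02-16', 'YYYY');  -- 2001-01-01
-- The failing query above passed the format string as the first argument
-- (Impala's argument order), which is what triggers the misspelled "trunk()"
-- error message.
{noformat}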



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18718) Integer like types throws error when there is a mismatch

2018-03-05 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-18718:
---
Attachment: HIVE-18718.3.patch

> Integer like types throws error when there is a mismatch
> 
>
> Key: HIVE-18718
> URL: https://issues.apache.org/jira/browse/HIVE-18718
> Project: Hive
>  Issue Type: Improvement
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18718.1.patch, HIVE-18718.2.patch, 
> HIVE-18718.3.patch
>
>
> If a value is saved with long type and read as int type it results in
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
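
A minimal sketch of the mismatch being described, assuming a schema-on-read
setup; the table names, file format, and warehouse path are hypothetical.

{noformat}
-- Data written with a BIGINT column:
CREATE TABLE t_bigint (c BIGINT) STORED AS PARQUET;
INSERT INTO t_bigint VALUES (1);

-- The same files read back through an INT schema:
CREATE EXTERNAL TABLE t_int (c INT)
STORED AS PARQUET
LOCATION '/apps/hive/warehouse/t_bigint';   -- hypothetical path

-- Per the description above, this read currently fails with
-- "Execution Error, return code 2 from ...mr.MapRedTask" instead of either
-- converting the value or reporting a clearer type error.
SELECT c FROM t_int;
{noformat}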



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17552) Enable bucket map join by default for Tez

2018-03-05 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-17552:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to master.

 

Thanks [~jdere] for the review.

> Enable bucket map join by default for Tez
> -
>
> Key: HIVE-17552
> URL: https://issues.apache.org/jira/browse/HIVE-17552
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning, Tez
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-17552.1.patch, HIVE-17552.10.patch, 
> HIVE-17552.11.patch, HIVE-17552.2.patch, HIVE-17552.3.patch, 
> HIVE-17552.4.patch, HIVE-17552.5.patch, HIVE-17552.6.patch, 
> HIVE-17552.7.patch, HIVE-17552.8.patch, HIVE-17552.9.patch
>
>
> Currently bucket map join is disabled by default; however, it is potentially the 
> most optimal join we have. Need to enable it by default.
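
For readers unfamiliar with the feature, a rough sketch of the setup it applies
to; the tables are made up, and the config key shown is assumed to be the
Tez-specific switch whose default this issue changes.

{noformat}
-- Two tables bucketed on the join key (illustrative):
CREATE TABLE orders_b    (id BIGINT, cust_id BIGINT)
  CLUSTERED BY (cust_id) INTO 32 BUCKETS;
CREATE TABLE customers_b (cust_id BIGINT, name STRING)
  CLUSTERED BY (cust_id) INTO 32 BUCKETS;

-- Before this change the optimization had to be requested explicitly:
SET hive.convert.join.bucket.mapjoin.tez=true;

SELECT o.id, c.name
FROM orders_b o JOIN customers_b c ON o.cust_id = c.cust_id;
{noformat}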



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17990) Add Thrift and DB storage for Schema Registry objects

2018-03-05 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386907#comment-16386907
 ] 

Alan Gates commented on HIVE-17990:
---

New version of the patch incorporating Thejas' feedback on the Thrift API.

> Add Thrift and DB storage for Schema Registry objects
> -
>
> Key: HIVE-17990
> URL: https://issues.apache.org/jira/browse/HIVE-17990
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
> Attachments: Adding-Schema-Registry-to-Metastore.pdf, 
> HIVE-17990.2.patch, HIVE-17990.patch
>
>
> This JIRA tracks changes to Thrift, RawStore, and DB scripts to support 
> objects in the Schema Registry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17990) Add Thrift and DB storage for Schema Registry objects

2018-03-05 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-17990:
--
Attachment: HIVE-17990.2.patch

> Add Thrift and DB storage for Schema Registry objects
> -
>
> Key: HIVE-17990
> URL: https://issues.apache.org/jira/browse/HIVE-17990
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
> Attachments: Adding-Schema-Registry-to-Metastore.pdf, 
> HIVE-17990.2.patch, HIVE-17990.patch
>
>
> This JIRA tracks changes to Thrift, RawStore, and DB scripts to support 
> objects in the Schema Registry.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17552) Enable bucket map join by default for Tez

2018-03-05 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-17552:
--
Component/s: Tez
 Query Planning

> Enable bucket map join by default for Tez
> -
>
> Key: HIVE-17552
> URL: https://issues.apache.org/jira/browse/HIVE-17552
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning, Tez
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-17552.1.patch, HIVE-17552.10.patch, 
> HIVE-17552.11.patch, HIVE-17552.2.patch, HIVE-17552.3.patch, 
> HIVE-17552.4.patch, HIVE-17552.5.patch, HIVE-17552.6.patch, 
> HIVE-17552.7.patch, HIVE-17552.8.patch, HIVE-17552.9.patch
>
>
> Currently bucket map join is disabled by default; however, it is potentially the 
> most optimal join we have. Need to enable it by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17552) Enable bucket map join by default for Tez

2018-03-05 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-17552:
--
Summary: Enable bucket map join by default for Tez  (was: Enable bucket map 
join by default)

> Enable bucket map join by default for Tez
> -
>
> Key: HIVE-17552
> URL: https://issues.apache.org/jira/browse/HIVE-17552
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-17552.1.patch, HIVE-17552.10.patch, 
> HIVE-17552.11.patch, HIVE-17552.2.patch, HIVE-17552.3.patch, 
> HIVE-17552.4.patch, HIVE-17552.5.patch, HIVE-17552.6.patch, 
> HIVE-17552.7.patch, HIVE-17552.8.patch, HIVE-17552.9.patch
>
>
> Currently bucket map join is disabled by default; however, it is potentially the 
> most optimal join we have. Need to enable it by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17552) Enable bucket map join by default

2018-03-05 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386904#comment-16386904
 ] 

Jason Dere commented on HIVE-17552:
---

+1


> Enable bucket map join by default
> -
>
> Key: HIVE-17552
> URL: https://issues.apache.org/jira/browse/HIVE-17552
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-17552.1.patch, HIVE-17552.10.patch, 
> HIVE-17552.11.patch, HIVE-17552.2.patch, HIVE-17552.3.patch, 
> HIVE-17552.4.patch, HIVE-17552.5.patch, HIVE-17552.6.patch, 
> HIVE-17552.7.patch, HIVE-17552.8.patch, HIVE-17552.9.patch
>
>
> Currently bucket map join is disabled by default; however, it is potentially the 
> most optimal join we have. Need to enable it by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18571) stats issues for MM tables

2018-03-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386897#comment-16386897
 ] 

Sergey Shelukhin commented on HIVE-18571:
-

TestReplicationScenarios was actually never supposed to work because it sets 
DummyTxnManager in setup (explicitly, dunno why) but then uses ACID tables.
Before this patch, for create table, ACID would not check whether ACID config 
is correct because of the WriteEntry bug. 
After fixing the bug, ACID table creation in the test fails - ACID table cannot 
be created with dummy txn manager.
I'm going to try to change this test to use proper ACID config, but if I run 
into problems I'll just add a test flag to the check.
cc [~thejas]

> stats issues for MM tables
> --
>
> Key: HIVE-18571
> URL: https://issues.apache.org/jira/browse/HIVE-18571
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18571.01.patch, HIVE-18571.02.patch, 
> HIVE-18571.03.patch, HIVE-18571.04.patch, HIVE-18571.patch
>
>
> There are multiple stats aggregation issues with MM tables.
> Some simple stats are double counted and some stats (simple stats) are 
> invalid for ACID table dirs altogether. 
> I have a patch almost ready, need to fix some more stuff and clean up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18863) trunc() calls itself trunk() in an error message

2018-03-05 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386891#comment-16386891
 ] 

Vihang Karajgaonkar commented on HIVE-18863:


oh I see it now :) Sorry for the multiple round trips.. 

> trunc() calls itself trunk() in an error message
> 
>
> Key: HIVE-18863
> URL: https://issues.apache.org/jira/browse/HIVE-18863
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Reporter: Tim Armstrong
>Priority: Minor
>  Labels: newbie
>
> {noformat}
> > select  trunc('millennium', cast('2001-02-16 20:38:40' as timestamp))
> FAILED: SemanticException Line 0:-1 Argument type mismatch ''2001-02-16 
> 20:38:40'': trunk() only takes STRING/CHAR/VARCHAR types as second argument, 
> got TIMESTAMP
> {noformat}
> I saw this on a derivative of Hive 1.1.0 (cdh5.15.0), but the string still 
> seems to be present on master:
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java#L262



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18571) stats issues for MM tables

2018-03-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386888#comment-16386888
 ] 

Sergey Shelukhin commented on HIVE-18571:
-

Setting up the WriteEntry properly seems to have uncovered some issue in 
replication tests.
Stats in many tests for ACID tables appear to have changed from incorrect to 
incorrect, but different. I will take a look, but might just update the tests; 
it doesn't matter whether an 11-row table is reported as 1-row or 88-row :)

> stats issues for MM tables
> --
>
> Key: HIVE-18571
> URL: https://issues.apache.org/jira/browse/HIVE-18571
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18571.01.patch, HIVE-18571.02.patch, 
> HIVE-18571.03.patch, HIVE-18571.04.patch, HIVE-18571.patch
>
>
> There are multiple stats aggregation issues with MM tables.
> Some simple stats are double counted and some stats (simple stats) are 
> invalid for ACID table dirs altogether. 
> I have a patch almost ready, need to fix some more stuff and clean up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18863) trunc() calls itself trunk() in an error message

2018-03-05 Thread Tim Armstrong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386886#comment-16386886
 ] 

Tim Armstrong commented on HIVE-18863:
--

The name of the function in the error message is wrong. trunc != trunk.

> trunc() calls itself trunk() in an error message
> 
>
> Key: HIVE-18863
> URL: https://issues.apache.org/jira/browse/HIVE-18863
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Reporter: Tim Armstrong
>Priority: Minor
>  Labels: newbie
>
> {noformat}
> > select  trunc('millennium', cast('2001-02-16 20:38:40' as timestamp))
> FAILED: SemanticException Line 0:-1 Argument type mismatch ''2001-02-16 
> 20:38:40'': trunk() only takes STRING/CHAR/VARCHAR types as second argument, 
> got TIMESTAMP
> {noformat}
> I saw this on a derivative of Hive 1.1.0 (cdh5.15.0), but the string still 
> seems to be present on master:
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java#L262



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17552) Enable bucket map join by default

2018-03-05 Thread Deepak Jaiswal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386857#comment-16386857
 ] 

Deepak Jaiswal commented on HIVE-17552:
---

[~jdere] can you please review? The test failures in NegativeCli are unrelated 
and known to be due to other issues.

The other failures are also unrelated.

> Enable bucket map join by default
> -
>
> Key: HIVE-17552
> URL: https://issues.apache.org/jira/browse/HIVE-17552
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-17552.1.patch, HIVE-17552.10.patch, 
> HIVE-17552.11.patch, HIVE-17552.2.patch, HIVE-17552.3.patch, 
> HIVE-17552.4.patch, HIVE-17552.5.patch, HIVE-17552.6.patch, 
> HIVE-17552.7.patch, HIVE-17552.8.patch, HIVE-17552.9.patch
>
>
> Currently bucket map join is disabled by default; however, it is potentially the 
> most optimal join we have. Need to enable it by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18863) trunc() calls itself trunk() in an error message

2018-03-05 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386849#comment-16386849
 ] 

Vihang Karajgaonkar commented on HIVE-18863:


The JIRA title seems to suggest that trunc() calls itself (runs twice) in the 
error message. But it looks like {{trunc()}} is just a String value passed to 
the exception as the message. It doesn't look like it's calling the UDF twice. Are 
you suggesting changing the message text to make it clearer?

> trunc() calls itself trunk() in an error message
> 
>
> Key: HIVE-18863
> URL: https://issues.apache.org/jira/browse/HIVE-18863
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Reporter: Tim Armstrong
>Priority: Minor
>  Labels: newbie
>
> {noformat}
> > select  trunc('millennium', cast('2001-02-16 20:38:40' as timestamp))
> FAILED: SemanticException Line 0:-1 Argument type mismatch ''2001-02-16 
> 20:38:40'': trunk() only takes STRING/CHAR/VARCHAR types as second argument, 
> got TIMESTAMP
> {noformat}
> I saw this on a derivative of Hive 1.1.0 (cdh5.15.0), but the string still 
> seems to be present on master:
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java#L262



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18867) create_with_constraints_duplicate_name and default_constraint_invalid_default_value_length failing

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386847#comment-16386847
 ] 

Hive QA commented on HIVE-18867:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
52s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  1m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9497/dev-support/hive-personality.sh
 |
| git revision | master / 3d5d7c7 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9497/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9497/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> create_with_constraints_duplicate_name and 
> default_constraint_invalid_default_value_length failing 
> ---
>
> Key: HIVE-18867
> URL: https://issues.apache.org/jira/browse/HIVE-18867
> Project: Hive
>  Issue Type: Test
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18867.1.patch
>
>
> The output files for both of these need to be updated
> {noformat}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing default_constraint_invalid_default_value_length.q 
> 1c1
> < FAILED: SemanticException [Error 10326]: Invalid Constraint syntax Invalid 
> Default value:  
> '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'
>  .Maximum character length allowed is 255 .
> ---
> > FAILED: SemanticException [Error 10326]: Invalid Constraint syntax Invalid 
> > Default value:  
> > '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'Maximum
> >  character length allowed is 255 .
> {noformat}
> {noformat}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing create_with_constraints_duplicate_name.q 
> 13c13
> < FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> InvalidObjectException(message:Constraint name already exists: pk1)
> ---
> > FAILED: Execution Error, return code 1 from 
> > org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:One or more 
> > instances could not be made persistent)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17626) Query reoptimization using cached runtime statistics

2018-03-05 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-17626:

Attachment: HIVE-17626.10.patch

> Query reoptimization using cached runtime statistics
> 
>
> Key: HIVE-17626
> URL: https://issues.apache.org/jira/browse/HIVE-17626
> Project: Hive
>  Issue Type: New Feature
>  Components: Logical Optimizer
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-17626.01.patch, HIVE-17626.01wip01.patch, 
> HIVE-17626.02.patch, HIVE-17626.03.patch, HIVE-17626.04.patch, 
> HIVE-17626.05.patch, HIVE-17626.06.patch, HIVE-17626.07A.patch, 
> HIVE-17626.07B.patch, HIVE-17626.08.patch, HIVE-17626.09.patch, 
> HIVE-17626.10.patch, runtimestats.patch
>
>
> Something similar to "EXPLAIN ANALYZE", where we annotate the explain plan with 
> actual and estimated statistics. The runtime stats can be cached at the query 
> level, and subsequent executions of the same query can make use of the cached 
> statistics from the previous run for better optimization. 
> Some use cases:
> 1) re-planning join queries (mapjoin failures can be converted to shuffle joins)
> 2) better statistics for table scan operator if dynamic partition pruning is 
> involved
> 3) Better estimates for bloom filter initialization (setting expected entries 
> during merge)
> This can be extended to support wider queries by caching fragments of operator 
> plans scanning the same table(s) or matching some operator sequences.
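
As a rough sketch of the caching idea described above (illustrative only, not the HIVE-17626 implementation; the class, key scheme, and OperatorStats payload below are hypothetical), the runtime stats could be kept in a simple map keyed by an operator signature and consulted on re-execution:

{code}
// Hypothetical sketch of a query-level runtime stats cache; the key scheme and
// the OperatorStats payload are illustrative, not Hive's actual classes.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RuntimeStatsCache {

  public static final class OperatorStats {
    public final long estimatedRows;
    public final long actualRows;

    public OperatorStats(long estimatedRows, long actualRows) {
      this.estimatedRows = estimatedRows;
      this.actualRows = actualRows;
    }
  }

  // Key could be a normalized query text plus an operator signature.
  private final Map<String, OperatorStats> cache = new ConcurrentHashMap<>();

  // Called at the end of a run with the observed row counts.
  public void record(String operatorKey, long estimatedRows, long actualRows) {
    cache.put(operatorKey, new OperatorStats(estimatedRows, actualRows));
  }

  // Consulted by a subsequent run of the same query before falling back to
  // compile-time estimates; returns null when nothing was recorded.
  public OperatorStats lookup(String operatorKey) {
    return cache.get(operatorKey);
  }
}
{code}

On a mapjoin failure, for example, the re-planner would look up the recorded actual row counts and switch to a shuffle join when the compile-time estimate turns out to have been too low.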



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18693) Snapshot Isolation does not work for Micromanaged table when an insert transaction is aborted

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386829#comment-16386829
 ] 

Hive QA commented on HIVE-18693:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913101/HIVE-18693.03.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 13084 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)


[jira] [Updated] (HIVE-18867) create_with_constraints_duplicate_name and default_constraint_invalid_default_value_length failing

2018-03-05 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18867:
---
Status: Patch Available  (was: Open)

> create_with_constraints_duplicate_name and 
> default_constraint_invalid_default_value_length failing 
> ---
>
> Key: HIVE-18867
> URL: https://issues.apache.org/jira/browse/HIVE-18867
> Project: Hive
>  Issue Type: Test
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18867.1.patch
>
>
> The output files for both of these need to be updated
> {noformat}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing default_constraint_invalid_default_value_length.q 
> 1c1
> < FAILED: SemanticException [Error 10326]: Invalid Constraint syntax Invalid 
> Default value:  
> '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'
>  .Maximum character length allowed is 255 .
> ---
> > FAILED: SemanticException [Error 10326]: Invalid Constraint syntax Invalid 
> > Default value:  
> > '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'Maximum
> >  character length allowed is 255 .
> {noformat}
> {noformat}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing create_with_constraints_duplicate_name.q 
> 13c13
> < FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> InvalidObjectException(message:Constraint name already exists: pk1)
> ---
> > FAILED: Execution Error, return code 1 from 
> > org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:One or more 
> > instances could not be made persistent)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18867) create_with_constraints_duplicate_name and default_constraint_invalid_default_value_length failing

2018-03-05 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18867:
---
Description: 
The output files for both of these need to be updated
{noformat}
Client Execution succeeded but contained differences (error code = 1) after 
executing default_constraint_invalid_default_value_length.q 
1c1
< FAILED: SemanticException [Error 10326]: Invalid Constraint syntax Invalid 
Default value:  
'12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'
 .Maximum character length allowed is 255 .
---
> FAILED: SemanticException [Error 10326]: Invalid Constraint syntax Invalid 
> Default value:  
> '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'Maximum
>  character length allowed is 255 .
{noformat}

{noformat}
Client Execution succeeded but contained differences (error code = 1) after 
executing create_with_constraints_duplicate_name.q 
13c13
< FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask. 
InvalidObjectException(message:Constraint name already exists: pk1)
---
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:One or more 
> instances could not be made persistent)
{noformat}

  was:The output file for both of these need to be updated


> create_with_constraints_duplicate_name and 
> default_constraint_invalid_default_value_length failing 
> ---
>
> Key: HIVE-18867
> URL: https://issues.apache.org/jira/browse/HIVE-18867
> Project: Hive
>  Issue Type: Test
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18867.1.patch
>
>
> The output files for both of these need to be updated
> {noformat}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing default_constraint_invalid_default_value_length.q 
> 1c1
> < FAILED: SemanticException [Error 10326]: Invalid Constraint syntax Invalid 
> Default value:  
> '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'
>  .Maximum character length allowed is 255 .
> ---
> > FAILED: SemanticException [Error 10326]: Invalid Constraint syntax Invalid 
> > Default value:  
> > '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'Maximum
> >  character length allowed is 255 .
> {noformat}
> {noformat}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing create_with_constraints_duplicate_name.q 
> 13c13
> < FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> InvalidObjectException(message:Constraint name already exists: pk1)
> ---
> > FAILED: Execution Error, return code 1 from 
> > org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:One or more 
> > instances could not be made persistent)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18867) create_with_constraints_duplicate_name and default_constraint_invalid_default_value_length failing

2018-03-05 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18867:
---
Attachment: HIVE-18867.1.patch

> create_with_constraints_duplicate_name and 
> default_constraint_invalid_default_value_length failing 
> ---
>
> Key: HIVE-18867
> URL: https://issues.apache.org/jira/browse/HIVE-18867
> Project: Hive
>  Issue Type: Test
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18867.1.patch
>
>
> The output files for both of these need to be updated
> {noformat}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing default_constraint_invalid_default_value_length.q 
> 1c1
> < FAILED: SemanticException [Error 10326]: Invalid Constraint syntax Invalid 
> Default value:  
> '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'
>  .Maximum character length allowed is 255 .
> ---
> > FAILED: SemanticException [Error 10326]: Invalid Constraint syntax Invalid 
> > Default value:  
> > '12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234'Maximum
> >  character length allowed is 255 .
> {noformat}
> {noformat}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing create_with_constraints_duplicate_name.q 
> 13c13
> < FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.DDLTask. 
> InvalidObjectException(message:Constraint name already exists: pk1)
> ---
> > FAILED: Execution Error, return code 1 from 
> > org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:One or more 
> > instances could not be made persistent)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18867) create_with_constraints_duplicate_name and default_constraint_invalid_default_value_length failing

2018-03-05 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg reassigned HIVE-18867:
--


> create_with_constraints_duplicate_name and 
> default_constraint_invalid_default_value_length failing 
> ---
>
> Key: HIVE-18867
> URL: https://issues.apache.org/jira/browse/HIVE-18867
> Project: Hive
>  Issue Type: Test
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>
> The output files for both of these need to be updated



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18793) Round udf should support variable as second argument

2018-03-05 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-18793:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master.

> Round udf should support variable as second argument
> 
>
> Key: HIVE-18793
> URL: https://issues.apache.org/jira/browse/HIVE-18793
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18793.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18793) Round udf should support variable as second argument

2018-03-05 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386808#comment-16386808
 ] 

Jesus Camacho Rodriguez commented on HIVE-18793:


+1

> Round udf should support variable as second argument
> 
>
> Key: HIVE-18793
> URL: https://issues.apache.org/jira/browse/HIVE-18793
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-18793.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18693) Snapshot Isolation does not work for Micromanaged table when an insert transaction is aborted

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386775#comment-16386775
 ] 

Hive QA commented on HIVE-18693:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
47s{color} | {color:red} ql in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
36s{color} | {color:red} ql: The patch generated 4 new + 283 unchanged - 3 
fixed = 287 total (was 286) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} standalone-metastore: The patch generated 2 new + 539 
unchanged - 1 fixed = 541 total (was 540) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9496/dev-support/hive-personality.sh
 |
| git revision | master / 6be6af0 |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9496/yetus/patch-mvninstall-hcatalog_streaming.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9496/yetus/patch-mvninstall-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9496/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9496/yetus/diff-checkstyle-standalone-metastore.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9496/yetus/whitespace-eol.txt 
|
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9496/yetus/patch-asflicense-problems.txt
 |
| modules | C: hcatalog/streaming ql standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9496/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Snapshot Isolation does not work for Micromanaged table when an insert 
> transaction is aborted
> 
>
> Key: HIVE-18693
> URL: https://issues.apache.org/jira/browse/HIVE-18693
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Steve Yeom
>   

[jira] [Commented] (HIVE-17552) Enable bucket map join by default

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386748#comment-16386748
 ] 

Hive QA commented on HIVE-17552:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913082/HIVE-17552.11.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 98 failed/errored test(s), 13477 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)


[jira] [Updated] (HIVE-18693) Snapshot Isolation does not work for Micromanaged table when an insert transaction is aborted

2018-03-05 Thread Steve Yeom (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Yeom updated HIVE-18693:
--
Attachment: HIVE-18693.03.patch

> Snapshot Isolation does not work for Micromanaged table when an insert 
> transaction is aborted
> 
>
> Key: HIVE-18693
> URL: https://issues.apache.org/jira/browse/HIVE-18693
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Steve Yeom
>Assignee: Steve Yeom
>Priority: Major
> Attachments: HIVE-18693.01.patch, HIVE-18693.02.patch, 
> HIVE-18693.03.patch
>
>
> TestTxnCommands2#writeBetweenWorkerAndCleaner with minor 
> changes (changing the delete command to an insert command) fails on an MM table.
> Specifically, the last SELECT command returns wrong results, 
> but this test works fine with a full ACID table. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18866) Semijoin: Implement a Long -> Hash64 vector fast-path

2018-03-05 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-18866:
---
Issue Type: Improvement  (was: Bug)

> Semijoin: Implement a Long -> Hash64 vector fast-path
> -
>
> Key: HIVE-18866
> URL: https://issues.apache.org/jira/browse/HIVE-18866
> Project: Hive
>  Issue Type: Improvement
>  Components: Vectorization
>Reporter: Gopal V
>Priority: Major
>  Labels: performance
> Attachments: perf-hash64-long.png
>
>
> A significant amount of CPU is wasted due to JMM restrictions on byte[] arrays.
> To transform one Long -> another Long, the value currently goes through a byte[] 
> array, which shows up as a hotspot.
> !perf-hash64-long.png!
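
To make the hotspot concrete, here is a minimal sketch contrasting the byte[] round-trip with a direct long-to-long mix. The mix constants are the standard MurmurHash3 fmix64 finalizer; the class and method names are illustrative, not Hive's actual Murmur3 code.

{code}
// Sketch only: why a Long -> Hash64 fast-path avoids the byte[] detour.
public final class Hash64Sketch {

  // Slow path: serialize the long into a byte[] and hash the bytes.
  static long hashViaBytes(long value) {
    byte[] buf = new byte[8];
    for (int i = 0; i < 8; i++) {
      buf[i] = (byte) (value >>> (8 * i));
    }
    return hashBytes(buf); // stand-in for a byte[]-based hash64
  }

  // Fast path: mix the long directly, no intermediate array allocation.
  static long hash64(long k) {
    k ^= k >>> 33;
    k *= 0xff51afd7ed558ccdL;
    k ^= k >>> 33;
    k *= 0xc4ceb9fe1a85ec53L;
    k ^= k >>> 33;
    return k;
  }

  private static long hashBytes(byte[] data) {
    long h = 0;
    for (byte b : data) {
      h = 31 * h + b; // placeholder; a real implementation would use Murmur3
    }
    return h;
  }
}
{code}

The fast path never allocates, which is the point of the proposed fast-path.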



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18866) Semijoin: Implement a Long -> Hash64 vector fast-path

2018-03-05 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-18866:
---
Labels: performance  (was: )

> Semijoin: Implement a Long -> Hash64 vector fast-path
> -
>
> Key: HIVE-18866
> URL: https://issues.apache.org/jira/browse/HIVE-18866
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Gopal V
>Priority: Major
>  Labels: performance
> Attachments: perf-hash64-long.png
>
>
> A significant amount of CPU is wasted due to JMM restrictions on byte[] arrays.
> To transform one Long -> another Long, the value currently goes through a byte[] 
> array, which shows up as a hotspot.
> !perf-hash64-long.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18865) Check filesystem calls return value (codescan)

2018-03-05 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386709#comment-16386709
 ] 

Sahil Takiar commented on HIVE-18865:
-

Just curious, do you mean a codescan tool was used to generate these warnings? 
What codescan tool was used?

> Check filesystem calls return value (codescan)
> --
>
> Key: HIVE-18865
> URL: https://issues.apache.org/jira/browse/HIVE-18865
> Project: Hive
>  Issue Type: Bug
>Reporter: Thejas M Nair
>Priority: Major
>
> There are a few places where the return values of certain filesystem operations 
> are not being checked.
> Hive should at the very least log these failures.
> 1. Overview : The method saveDir() in BeeLineOpts.java ignores the value 
> returned by mkdirs() on line 174, which could cause the program to overlook 
> unexpected states and conditions.
> In the file BeeLineOpts.java similar issues were on line numbers 174
> 2. Overview : The method compile() in CompileProcessor.java ignores the value 
> returned by mkdir() on line 226, which could cause the program to overlook 
> unexpected states and conditions.
> In the file CompileProcessor.java similar issues were on line numbers 234, 226
> 3. Overview : The method deleteTmpFile() in FileUtils.java ignores the value 
> returned by delete() on line 939, which could cause the program to overlook 
> unexpected states and conditions.
> In the file FileUtils.java similar issues were on line numbers 939
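
A minimal sketch of the pattern being asked for, using plain java.io.File and SLF4J; the class and method names are illustrative, not the actual BeeLineOpts, CompileProcessor, or FileUtils code:

{code}
// Check and at least log the boolean returned by filesystem calls instead of
// discarding it silently.
import java.io.File;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SaveDirExample {
  private static final Logger LOG = LoggerFactory.getLogger(SaveDirExample.class);

  static void ensureParentDir(File file) {
    File parent = file.getParentFile();
    if (parent != null && !parent.exists() && !parent.mkdirs()) {
      // Previously the return value was ignored; now the failure is visible.
      LOG.warn("Failed to create directory {}", parent.getAbsolutePath());
    }
  }
}
{code}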



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18866) Semijoin: Implement a Long -> Hash64 vector fast-path

2018-03-05 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-18866:
---
Attachment: perf-hash64-long.png

> Semijoin: Implement a Long -> Hash64 vector fast-path
> -
>
> Key: HIVE-18866
> URL: https://issues.apache.org/jira/browse/HIVE-18866
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Gopal V
>Priority: Major
> Attachments: perf-hash64-long.png
>
>
> A significant amount of CPU is wasted due to JMM restrictions on byte[] arrays.
> To transform one Long -> another Long, the value currently goes through a byte[] 
> array, which shows up as a hotspot.
> !perf-hash64-long.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18864) WriteId high water mark (HWM) is incorrect if ValidWriteIdList is obtained after allocating writeId by current transaction.

2018-03-05 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18864:

Description: 
For multi-statement txns, it is possible that a write on a table happens after a 
read. Let's see the scenario below.
 # Committed txn=9 writes on table T1 with writeId=5.
 # Open txn=10. ValidTxnList(open:null, txn_HWM=10),
 # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
 # Open txn=11, writes on table T1 with writeid=6.
 # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
 # Write table T1 from txn=10 with writeId=7.
 # Read table T1 from txn=10. {color:#d04437}*ValidWriteIdList(open:null, 
write_HWM=7)*. – This read will be able to see rows added by txn=11, which is 
still open.{color}

{color:#d04437}So the open/aborted list of ValidWriteIdList needs to be rebuilt 
based on txn_HWM. Any writeId allocated by a txnId > txn_HWM should be marked 
as open.{color}

{color:#33}{color:#d04437}cc{color}{color} 
[~ekoifman]{color:#33},{color} [~thejas]

  was:
For multi-statement txns, it is possible that write on a table happens after a 
read. Let's see the below scenario.
 # Committed txn=9 writes on table T1 with writeId=5.
 # Open txn=10. ValidTxnList(open:null, txn_HWM=10),
 # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
 # Open txn=11, writes on table T1 with writeid=6.
 # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
 # Write table T1 from txn=10 with writeId=7.
 # Read table T1 from txn=10. {color:#d04437}*ValidWriteIdList(open:null, 
write_HWM=7)*. – This read will able to see rows added by txn=11 which is still 
open.{color}

{color:#d04437}So, it is needed to rebuild the open/aborted list of 
ValidWriteIdList based on txn_HWM. Any writeId allocated by txnId > txn_HWM 
should be marked as open.{color}

{color:#d04437}{color:#33}cc{color} [~ekoifman]{color:#33},{color} 
[~thejas]{color}


> WriteId high water mark (HWM) is incorrect if ValidWriteIdList is obtained 
> after allocating writeId by current transaction.
> ---
>
> Key: HIVE-18864
> URL: https://issues.apache.org/jira/browse/HIVE-18864
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID
> Fix For: 3.0.0
>
>
> For multi-statement txns, it is possible that a write on a table happens after 
> a read. Let's see the scenario below.
>  # Committed txn=9 writes on table T1 with writeId=5.
>  # Open txn=10. ValidTxnList(open:null, txn_HWM=10),
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Open txn=11, writes on table T1 with writeid=6.
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Write table T1 from txn=10 with writeId=7.
>  # Read table T1 from txn=10. {color:#d04437}*ValidWriteIdList(open:null, 
> write_HWM=7)*. – This read will be able to see rows added by txn=11, which is 
> still open.{color}
> {color:#d04437}So the open/aborted list of ValidWriteIdList needs to be rebuilt 
> based on txn_HWM. Any writeId allocated by a txnId > txn_HWM should be marked 
> as open.{color}
> {color:#33}{color:#d04437}cc{color}{color} 
> [~ekoifman]{color:#33},{color} [~thejas]
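
A minimal sketch of the rule stated in the description, namely that any writeId allocated by a txnId above the reader's txn_HWM is treated as open; the types and method here are illustrative, not Hive's actual TxnHandler/ValidWriteIdList code:

{code}
// Rebuild the "open" writeId list from the reader's txn high water mark.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class WriteIdRebuildSketch {
  static List<Long> openWriteIds(Map<Long, Long> writeIdToTxnId, long txnHighWaterMark) {
    List<Long> open = new ArrayList<>();
    for (Map.Entry<Long, Long> e : writeIdToTxnId.entrySet()) {
      if (e.getValue() > txnHighWaterMark) {
        open.add(e.getKey()); // allocated by a txn the reader cannot see yet
      }
    }
    return open;
  }
}
{code}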



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18864) WriteId high water mark (HWM) is incorrect if ValidWriteIdList is obtained after allocating writeId by current transaction.

2018-03-05 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18864:

Description: 
For multi-statement txns, it is possible that a write on a table happens after a 
read. Let's see the scenario below.
 # Committed txn=9 writes on table T1 with writeId=5.
 # Open txn=10. ValidTxnList(open:null, txn_HWM=10),
 # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
 # Open txn=11, writes on table T1 with writeid=6.
 # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
 # Write table T1 from txn=10 with writeId=7.
 # Read table T1 from txn=10. {color:#d04437}*ValidWriteIdList(open:null, 
write_HWM=7)*. – This read will be able to see rows added by txn=11, which is 
still open.{color}

{color:#d04437}So the open/aborted list of ValidWriteIdList needs to be rebuilt 
based on txn_HWM. Any writeId allocated by a txnId > txn_HWM should be marked 
as open.{color}

{color:#d04437}{color:#33}cc{color} [~ekoifman]{color:#33},{color} 
[~thejas]{color}

  was:
For multi-statement txns, it is possible that write on a table happens after a 
read. Let's see the below scenario.
 # Committed txn=9 writes on table T1 with writeId=5.
 # Open txn=10. ValidTxnList(open:null, txn_HWM=10),
 # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
 # Open txn=11, writes on table T1 with writeid=6.
 # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
 # Write table T1 from txn=10 with writeId=7.
 # Read table T1 from txn=10. {color:#d04437}*ValidWriteIdList(open:null, 
write_HWM=7)*. – This read will able to see rows added by txn=11 which is still 
open.{color}

{color:#d04437}{color:#33}So, it is needed to rebuild the open/aborted list 
of ValidWriteIdList based on txn_HWM. Any writeId allocated by txnId > txn_HWM 
should be marked as open.{color}
{color}


> WriteId high water mark (HWM) is incorrect if ValidWriteIdList is obtained 
> after allocating writeId by current transaction.
> ---
>
> Key: HIVE-18864
> URL: https://issues.apache.org/jira/browse/HIVE-18864
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID
> Fix For: 3.0.0
>
>
> For multi-statement txns, it is possible that a write on a table happens after 
> a read. Let's see the scenario below.
>  # Committed txn=9 writes on table T1 with writeId=5.
>  # Open txn=10. ValidTxnList(open:null, txn_HWM=10),
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Open txn=11, writes on table T1 with writeid=6.
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Write table T1 from txn=10 with writeId=7.
>  # Read table T1 from txn=10. {color:#d04437}*ValidWriteIdList(open:null, 
> write_HWM=7)*. – This read will be able to see rows added by txn=11, which is 
> still open.{color}
> {color:#d04437}So the open/aborted list of ValidWriteIdList needs to be rebuilt 
> based on txn_HWM. Any writeId allocated by a txnId > txn_HWM should be marked 
> as open.{color}
> {color:#d04437}{color:#33}cc{color} [~ekoifman]{color:#33},{color} 
> [~thejas]{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18860) fix TestAcidOnTez#testGetSplitsLocks

2018-03-05 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1638#comment-1638
 ] 

Sankar Hariappan commented on HIVE-18860:
-

We already have a ticket, HIVE-18751, to track this test failure with IOException 
after HIVE-18192.

I noticed this test was failing even before I merged HIVE-18192, but for 
different reasons which I'm not sure about.

> fix TestAcidOnTez#testGetSplitsLocks
> 
>
> Key: HIVE-18860
> URL: https://issues.apache.org/jira/browse/HIVE-18860
> Project: Hive
>  Issue Type: Bug
>  Components: Test, Transactions
>Reporter: Zoltan Haindrich
>Priority: Major
>
> it seems to me that the HIVE-18665 patch has broken this test: 
> https://travis-ci.org/kgyrtkirk/hive/builds/345287889



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15393) Update Guava version

2018-03-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386656#comment-16386656
 ] 

Sergey Shelukhin commented on HIVE-15393:
-

Actually this looks like a mess right now.
storage-api uses Guava 14, Druid uses 16, and main Hive uses 19.
There may be API compat issues between those (there definitely are somewhere 
between 11 and 19).
Can we revert or at least bring all versions in line? 
Ideally we also shouldn't upgrade ahead of important library dependencies like 
Hadoop and Tez.

> Update Guava version
> 
>
> Key: HIVE-15393
> URL: https://issues.apache.org/jira/browse/HIVE-15393
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: slim bouguerra
>Assignee: Ashutosh Chauhan
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HIVE-15393.2.patch, HIVE-15393.3.patch, 
> HIVE-15393.5.patch, HIVE-15393.6.patch, HIVE-15393.7.patch, 
> HIVE-15393.8.patch, HIVE-15393.9.patch, HIVE-15393.patch
>
>
> The Druid code base is using a newer version of Guava (16.0.1) that is not 
> compatible with the current version used by Hive.
> FYI, the Hadoop project is moving to Guava 18; not sure if it is better to move 
> to Guava 18 or even 19.
> https://issues.apache.org/jira/browse/HADOOP-10101



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17552) Enable bucket map join by default

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386649#comment-16386649
 ] 

Hive QA commented on HIVE-17552:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9495/dev-support/hive-personality.sh
 |
| git revision | master / 6be6af0 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9495/yetus/patch-asflicense-problems.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9495/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Enable bucket map join by default
> -
>
> Key: HIVE-17552
> URL: https://issues.apache.org/jira/browse/HIVE-17552
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-17552.1.patch, HIVE-17552.10.patch, 
> HIVE-17552.11.patch, HIVE-17552.2.patch, HIVE-17552.3.patch, 
> HIVE-17552.4.patch, HIVE-17552.5.patch, HIVE-17552.6.patch, 
> HIVE-17552.7.patch, HIVE-17552.8.patch, HIVE-17552.9.patch
>
>
> Currently bucket map join is disabled by default; however, it is potentially 
> the most optimal join we have. We need to enable it by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18864) WriteId high water mark (HWM) is incorrect if ValidWriteIdList is obtained after allocating writeId by current transaction.

2018-03-05 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan reassigned HIVE-18864:
---


> WriteId high water mark (HWM) is incorrect if ValidWriteIdList is obtained 
> after allocating writeId by current transaction.
> ---
>
> Key: HIVE-18864
> URL: https://issues.apache.org/jira/browse/HIVE-18864
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID
> Fix For: 3.0.0
>
>
> For multi-statement txns, it is possible that a write on a table happens after 
> a read. Let's see the scenario below.
>  # Committed txn=9 writes on table T1 with writeId=5.
>  # Open txn=10. ValidTxnList(open:null, txn_HWM=10),
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Open txn=11, writes on table T1 with writeid=6.
>  # Read table T1 from txn=10. ValidWriteIdList(open:null, write_HWM=5).
>  # Write table T1 from txn=10 with writeId=7.
>  # Read table T1 from txn=10. {color:#d04437}*ValidWriteIdList(open:null, 
> write_HWM=7)*. – This read will be able to see rows added by txn=11, which is 
> still open.{color}
> {color:#d04437}So the open/aborted list of ValidWriteIdList needs to be rebuilt 
> based on txn_HWM. Any writeId allocated by a txnId > txn_HWM should be marked 
> as open.{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15393) Update Guava version

2018-03-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386646#comment-16386646
 ] 

Sergey Shelukhin commented on HIVE-15393:
-

Hmm... what version of Guava does Hadoop/Tez/etc. use?
I wonder if we can actually upgrade without breaking these. 

> Update Guava version
> 
>
> Key: HIVE-15393
> URL: https://issues.apache.org/jira/browse/HIVE-15393
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: slim bouguerra
>Assignee: Ashutosh Chauhan
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HIVE-15393.2.patch, HIVE-15393.3.patch, 
> HIVE-15393.5.patch, HIVE-15393.6.patch, HIVE-15393.7.patch, 
> HIVE-15393.8.patch, HIVE-15393.9.patch, HIVE-15393.patch
>
>
> The Druid code base is using a newer version of Guava (16.0.1) that is not 
> compatible with the current version used by Hive.
> FYI, the Hadoop project is moving to Guava 18; not sure if it is better to move 
> to Guava 18 or even 19.
> https://issues.apache.org/jira/browse/HADOOP-10101



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18858) System properties in job configuration not resolved when submitting MR job

2018-03-05 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386635#comment-16386635
 ] 

Aihua Xu commented on HIVE-18858:
-

[~dvoros] Yeah, we were also thinking of this approach of setting the right 
configuration before calling the MR job, but that would also have a performance 
impact. For now we have just worked around the issue on the test side, but this 
is probably the option we will have to take.
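
A minimal sketch of the approach discussed above, assuming the fix is to copy the configuration with substitutions already resolved before it is wrapped in a JobConf; the helper name is hypothetical, and iterating the whole conf is where the performance cost mentioned above comes from:

{code}
// Resolve variable substitutions (e.g. ${test.tmp.dir} -> system property)
// into literal values up front, so restricted-mode resolution later on no
// longer matters.
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public class ConfResolver {
  static Configuration resolveAll(Configuration conf) {
    Configuration resolved = new Configuration(false);
    for (Map.Entry<String, String> e : conf) {
      // conf.get() performs substitution, so the copy stores expanded values.
      resolved.set(e.getKey(), conf.get(e.getKey()));
    }
    return resolved;
  }
}
{code}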

> System properties in job configuration not resolved when submitting MR job
> --
>
> Key: HIVE-18858
> URL: https://issues.apache.org/jira/browse/HIVE-18858
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Hadoop 3.0.0
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Attachments: HIVE-18858.1.patch
>
>
> Since [this hadoop 
> commit|https://github.com/apache/hadoop/commit/5eb7dbe9b31a45f57f2e1623aa1c9ce84a56c4d1]
>  that was first released in 3.0.0, Configuration has a restricted mode that 
> disables the resolution of system properties (which happens when retrieving a 
> configuration option).
> This leads to test failures when switching to Hadoop 3.0.0 (instead of 
> 3.0.0-beta1), since we're relying on the [substitution of 
> test.tmp.dir|https://github.com/apache/hive/blob/05d4719eefc56676a3e0e8f706e1c5e5e1f6b345/data/conf/hive-site.xml#L37]
>  during the [maven 
> build|https://github.com/apache/hive/blob/05d4719eefc56676a3e0e8f706e1c5e5e1f6b345/pom.xml#L83].
>  See test results on HIVE-18327.
> When we're passing job configurations to Hadoop, I believe there's no way to 
> disable the restricted mode, since we go through some Hadoop MR calls first, 
> see here:
> {code}
> "HiveServer2-Background-Pool: Thread-105@9500" prio=5 tid=0x69 nid=NA runnable
>   java.lang.Thread.State: RUNNABLE
> at 
> org.apache.hadoop.conf.Configuration.addResourceObject(Configuration.java:970)
> - locked <0x2fe6> (a org.apache.hadoop.mapred.JobConf)
> at 
> org.apache.hadoop.conf.Configuration.addResource(Configuration.java:895)
> at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:476)
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.<init>(LocalJobRunner.java:162)
> at 
> org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:788)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:254)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:-1)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:-1)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at 
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:415)
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:149)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2314)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1985)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1687)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1438)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1432)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:248)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:90)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:340)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:-1)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at 
> 

[jira] [Updated] (HIVE-18726) Implement DEFAULT constraint

2018-03-05 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18726:
---
Labels: TODOC3.0  (was: )

> Implement DEFAULT constraint
> 
>
> Key: HIVE-18726
> URL: https://issues.apache.org/jira/browse/HIVE-18726
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Planning, Query Processor
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>  Labels: TODOC3.0
> Fix For: 3.0.0
>
> Attachments: HIVE-18726.1.patch, HIVE-18726.2.patch, 
> HIVE-18726.3.patch, HIVE-18726.4.patch, HIVE-18726.5.patch, 
> HIVE-18726.6.patch, HIVE-18726.7.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18860) fix TestAcidOnTez#testGetSplitsLocks

2018-03-05 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386630#comment-16386630
 ] 

Eugene Koifman commented on HIVE-18860:
---

cc [~sankarh]

> fix TestAcidOnTez#testGetSplitsLocks
> 
>
> Key: HIVE-18860
> URL: https://issues.apache.org/jira/browse/HIVE-18860
> Project: Hive
>  Issue Type: Bug
>  Components: Test, Transactions
>Reporter: Zoltan Haindrich
>Priority: Major
>
> it seems to me that the HIVE-18665 patch has broken this test: 
> https://travis-ci.org/kgyrtkirk/hive/builds/345287889



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18860) fix TestAcidOnTez#testGetSplitsLocks

2018-03-05 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18860:
--
Component/s: Transactions

> fix TestAcidOnTez#testGetSplitsLocks
> 
>
> Key: HIVE-18860
> URL: https://issues.apache.org/jira/browse/HIVE-18860
> Project: Hive
>  Issue Type: Bug
>  Components: Test, Transactions
>Reporter: Zoltan Haindrich
>Priority: Major
>
> it seems to me that the HIVE-18665 patch has broken this test: 
> https://travis-ci.org/kgyrtkirk/hive/builds/345287889



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18571) stats issues for MM tables

2018-03-05 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386622#comment-16386622
 ] 

Sergey Shelukhin commented on HIVE-18571:
-

The negative tests OOMed, which seems to be a typical problem lately...

> stats issues for MM tables
> --
>
> Key: HIVE-18571
> URL: https://issues.apache.org/jira/browse/HIVE-18571
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18571.01.patch, HIVE-18571.02.patch, 
> HIVE-18571.03.patch, HIVE-18571.04.patch, HIVE-18571.patch
>
>
> There are multiple stats aggregation issues with MM tables.
> Some simple stats are double counted, and some simple stats are invalid for 
> ACID table dirs altogether. 
> I have a patch almost ready; I need to fix some more things and clean up.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17478) Move filesystem stats collection from metastore to ql

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386611#comment-16386611
 ] 

Hive QA commented on HIVE-17478:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913070/HIVE-17478.04.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 524 failed/errored test(s), 13468 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)


[jira] [Commented] (HIVE-18746) add_months should validate the date first

2018-03-05 Thread Kryvenko Igor (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386593#comment-16386593
 ] 

Kryvenko Igor commented on HIVE-18746:
--

[~vgarg] Thanks for review

> add_months should validate the date first
> -
>
> Key: HIVE-18746
> URL: https://issues.apache.org/jira/browse/HIVE-18746
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Subhasis Gorai
>Assignee: Kryvenko Igor
>Priority: Minor
> Attachments: HIVE-18746.1.patch, HIVE-18746.3.patch, 
> HIVE-18746.4.patch, HIVE-18746.5.patch, HIVE-18746.6.patch, 
> HIVE-18746.7.patch, HIVE-18746.patch
>
>
> hive (sbg_hvc_ods)> select add_months('2017-02-28', 1);
> OK
> _c0
> 2017-03-31
> Time taken: 0.107 seconds, Fetched: 1 row(s)
> hive (sbg_hvc_ods)> select add_months('2017-02-29', 1);
> OK
> _c0
> 2017-04-01
> Time taken: 0.084 seconds, Fetched: 1 row(s)
> hive (sbg_hvc_ods)>
>  
> '2017-02-29' is an invalid date.
>  
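
For illustration, a strict java.time-based check rejects such dates before any month arithmetic happens. This is a hedged sketch, not the actual GenericUDFAddMonths change; the real UDF keeps its own end-of-month semantics, as the add_months('2017-02-28', 1) output above shows.

{code}
// Reject impossible dates such as 2017-02-29 up front.
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.time.format.ResolverStyle;

public class StrictDateCheck {
  private static final DateTimeFormatter STRICT =
      DateTimeFormatter.ofPattern("uuuu-MM-dd").withResolverStyle(ResolverStyle.STRICT);

  // Returns true only for calendar-valid yyyy-MM-dd strings.
  static boolean isValidDate(String date) {
    try {
      LocalDate.parse(date, STRICT);
      return true;
    } catch (DateTimeParseException e) {
      return false; // e.g. "2017-02-29": the UDF could return NULL here
    }
  }
}
{code}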



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18746) add_months should validate the date first

2018-03-05 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18746:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

I have pushed it to master. Thanks for the patch [~vbeshka]

> add_months should validate the date first
> -
>
> Key: HIVE-18746
> URL: https://issues.apache.org/jira/browse/HIVE-18746
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Subhasis Gorai
>Assignee: Kryvenko Igor
>Priority: Minor
> Attachments: HIVE-18746.1.patch, HIVE-18746.3.patch, 
> HIVE-18746.4.patch, HIVE-18746.5.patch, HIVE-18746.6.patch, 
> HIVE-18746.7.patch, HIVE-18746.patch
>
>
> hive (sbg_hvc_ods)> select add_months('2017-02-28', 1);
> OK
> _c0
> 2017-03-31
> Time taken: 0.107 seconds, Fetched: 1 row(s)
> hive (sbg_hvc_ods)> select add_months('2017-02-29', 1);
> OK
> _c0
> 2017-04-01
> Time taken: 0.084 seconds, Fetched: 1 row(s)
> hive (sbg_hvc_ods)>
>  
> '2017-02-29' is an invalid date.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-17580) Remove dependency of get_fields_with_environment_context API to serde

2018-03-05 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386564#comment-16386564
 ] 

Owen O'Malley edited comment on HIVE-17580 at 3/5/18 7:10 PM:
--

The problem with putting ObjectInspector into storage-api is that 
ObjectInspector by itself doesn't do anything. You need the cloud of stuff 
around ObjectInspector to do anything. It is also ill-fitting because 
storage-api is the *vectorized* API. It by design does not include the 
ObjectInspector and the associated slow legacy path for Hive.

Maybe a simpler fix is to move ObjectInspector into the standalone metastore and 
make serde depend on it. That would at least not pull ObjectInspector into 
storage-api and would only put it where it is needed.

You could even combine the two with:
{code}
public enum MetastoreTypeCategory {...};
{code}

and have ObjectInspector.TypeCategory with:
{code}
public MetastoreTypeCategory toMetastore();
{code}



was (Author: owen.omalley):
The problem with putting ObjectInspector into storage-api is that 
ObjectInspector by itself doesn't do anything. You need the cloud of stuff 
around ObjectInspector to do anything. It is also ill fitting because 
storage-api is the *vectorized* api. It by design does not include the 
ObjectInspector and the associated slow legacy path for Hive.

Maybe a simpler fix is to move ObjectInspector into standalone metastore and 
make serde depend on it. That would at least not pull ObjectInspect into the 
storage-api and only put it where need is.

You could even combine the two with:
{{public enum MetastoreTypeCategory {...};}}

and have ObjectInspector.TypeCategory with:
{{public MetastoreTypeCategory toMetastore();}}


> Remove dependency of get_fields_with_environment_context API to serde
> -
>
> Key: HIVE-17580
> URL: https://issues.apache.org/jira/browse/HIVE-17580
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17580.003-standalone-metastore.patch, 
> HIVE-17580.04-standalone-metastore.patch, 
> HIVE-17580.05-standalone-metastore.patch, 
> HIVE-17580.06-standalone-metastore.patch, 
> HIVE-17580.07-standalone-metastore.patch, 
> HIVE-17580.08-standalone-metastore.patch, 
> HIVE-17580.09-standalone-metastore.patch, 
> HIVE-17580.092-standalone-metastore.patch
>
>
> {{get_fields_with_environment_context}} metastore API uses {{Deserializer}} 
> class to access the fields metadata for the cases where it is stored along 
> with the data files (Avro tables). The problem is that the Deserializer class 
> is defined in the hive-serde module, and in order to make the metastore 
> independent of Hive we will have to remove this dependency (at least we should 
> change it to a runtime dependency instead of a compile-time one).
> The other option is to investigate whether we can use SearchArgument to provide 
> this functionality.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18863) trunc() calls itself trunk() in an error message

2018-03-05 Thread Tim Armstrong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386579#comment-16386579
 ] 

Tim Armstrong commented on HIVE-18863:
--

The bug is in the JIRA title.




> trunc() calls itself trunk() in an error message
> 
>
> Key: HIVE-18863
> URL: https://issues.apache.org/jira/browse/HIVE-18863
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Reporter: Tim Armstrong
>Priority: Minor
>  Labels: newbie
>
> {noformat}
> > select  trunc('millennium', cast('2001-02-16 20:38:40' as timestamp))
> FAILED: SemanticException Line 0:-1 Argument type mismatch ''2001-02-16 
> 20:38:40'': trunk() only takes STRING/CHAR/VARCHAR types as second argument, 
> got TIMESTAMP
> {noformat}
> I saw this on a derivative of Hive 1.1.0 (cdh5.15.0), but the string still 
> seems to be present on master:
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java#L262



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17580) Remove dependency of get_fields_with_environment_context API to serde

2018-03-05 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386564#comment-16386564
 ] 

Owen O'Malley commented on HIVE-17580:
--

The problem with putting ObjectInspector into storage-api is that 
ObjectInspector by itself doesn't do anything. You need the cloud of stuff 
around ObjectInspector to do anything. It is also ill-fitting because 
storage-api is the *vectorized* API. It by design does not include the 
ObjectInspector and the associated slow legacy path for Hive.

Maybe a simpler fix is to move ObjectInspector into the standalone metastore and 
make serde depend on it. That would at least not pull ObjectInspector into 
storage-api and would only put it where it is needed.

You could even combine the two with:
{{public enum MetastoreTypeCategory {...};}}

and have ObjectInspector.TypeCategory with:
{{public MetastoreTypeCategory toMetastore();}}
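
Spelling out that combination as a sketch (illustrative only; apart from MetastoreTypeCategory and toMetastore() quoted above, the names and the category set are assumptions, not the actual Hive classes):

{code}
// A metastore-side category enum with no dependency on hive-serde.
public enum MetastoreTypeCategory {
  PRIMITIVE, LIST, MAP, STRUCT, UNION
}

// Hypothetical serde-side counterpart, shown only to make the mapping concrete;
// ObjectInspector's own Category stays in hive-serde.
enum TypeCategory {
  PRIMITIVE, LIST, MAP, STRUCT, UNION;

  // Bridge from the serde-side category to the metastore-side one.
  public MetastoreTypeCategory toMetastore() {
    return MetastoreTypeCategory.valueOf(name());
  }
}
{code}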


> Remove dependency of get_fields_with_environment_context API to serde
> -
>
> Key: HIVE-17580
> URL: https://issues.apache.org/jira/browse/HIVE-17580
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17580.003-standalone-metastore.patch, 
> HIVE-17580.04-standalone-metastore.patch, 
> HIVE-17580.05-standalone-metastore.patch, 
> HIVE-17580.06-standalone-metastore.patch, 
> HIVE-17580.07-standalone-metastore.patch, 
> HIVE-17580.08-standalone-metastore.patch, 
> HIVE-17580.09-standalone-metastore.patch, 
> HIVE-17580.092-standalone-metastore.patch
>
>
> The {{get_fields_with_environment_context}} metastore API uses the 
> {{Deserializer}} class to access field metadata when it is stored along with 
> the data files (Avro tables). The problem is that the {{Deserializer}} class 
> is defined in the hive-serde module, and in order to make the metastore 
> independent of Hive we will have to remove this dependency (at least we 
> should change it to a runtime dependency instead of a compile-time one).
> The other option is to investigate whether we can use SearchArgument to 
> provide this functionality.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18858) System properties in job configuration not resolved when submitting MR job

2018-03-05 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386559#comment-16386559
 ] 

Sahil Takiar commented on HIVE-18858:
-

CC: [~aihuaxu] as I know he was working on this too.

> System properties in job configuration not resolved when submitting MR job
> --
>
> Key: HIVE-18858
> URL: https://issues.apache.org/jira/browse/HIVE-18858
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Hadoop 3.0.0
>Reporter: Daniel Voros
>Assignee: Daniel Voros
>Priority: Major
> Attachments: HIVE-18858.1.patch
>
>
> Since [this hadoop 
> commit|https://github.com/apache/hadoop/commit/5eb7dbe9b31a45f57f2e1623aa1c9ce84a56c4d1], 
> first released in 3.0.0, Configuration has a restricted mode that disables 
> the resolution of system properties (which normally happens when a 
> configuration option is retrieved).
> This leads to test failures when switching to Hadoop 3.0.0 (instead of 
> 3.0.0-beta1), since we rely on the [substitution of 
> test.tmp.dir|https://github.com/apache/hive/blob/05d4719eefc56676a3e0e8f706e1c5e5e1f6b345/data/conf/hive-site.xml#L37]
>  during the [maven 
> build|https://github.com/apache/hive/blob/05d4719eefc56676a3e0e8f706e1c5e5e1f6b345/pom.xml#L83].
>  See the test results on HIVE-18327.
> When we pass job configurations to Hadoop, I believe there is no way to 
> disable the restricted mode, since we go through some Hadoop MR calls first; 
> see here:
> {code}
> "HiveServer2-Background-Pool: Thread-105@9500" prio=5 tid=0x69 nid=NA runnable
>   java.lang.Thread.State: RUNNABLE
> at 
> org.apache.hadoop.conf.Configuration.addResourceObject(Configuration.java:970)
> - locked <0x2fe6> (a org.apache.hadoop.mapred.JobConf)
> at 
> org.apache.hadoop.conf.Configuration.addResource(Configuration.java:895)
> at org.apache.hadoop.mapred.JobConf.(JobConf.java:476)
> at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.(LocalJobRunner.java:162)
> at 
> org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:788)
> at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:254)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
> at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:-1)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
> at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:-1)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at 
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:415)
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:149)
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205)
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2314)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1985)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1687)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1438)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1432)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:248)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.access$700(SQLOperation.java:90)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:340)
> at 
> java.security.AccessController.doPrivileged(AccessController.java:-1)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at 
> org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:353)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> 
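
For reference, a minimal self-contained illustration of the system-property 
substitution the description above relies on (a generic Hadoop 
{{Configuration}} sketch with example property values, not part of the patch):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ConfSubstitutionSketch {
  public static void main(String[] args) {
    // hive-site.xml references ${test.tmp.dir}, which Configuration normally
    // expands from a system property when the value is read.
    System.setProperty("test.tmp.dir", "/tmp/hive-test");

    Configuration conf = new Configuration(false);
    conf.set("hive.exec.scratchdir", "${test.tmp.dir}/scratch");

    // An unrestricted Configuration prints "/tmp/hive-test/scratch"; with the
    // restricted mode described above, the ${test.tmp.dir} reference is not
    // resolved from the system property, which is what breaks the tests.
    System.out.println(conf.get("hive.exec.scratchdir"));
  }
}
{code}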

[jira] [Commented] (HIVE-18749) Need to replace transactionId with writeId in RecordIdentifier and other relevant contexts.

2018-03-05 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386550#comment-16386550
 ] 

Sankar Hariappan commented on HIVE-18749:
-

No new test failures due to this patch.

Request [~ekoifman], [~thejas], [~anishek] to review the patch!

> Need to replace transactionId with writeId in RecordIdentifier and other 
> relevant contexts.
> ---
>
> Key: HIVE-18749
> URL: https://issues.apache.org/jira/browse/HIVE-18749
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Minor
>  Labels: ACID, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18749.01.patch, HIVE-18749.02.patch
>
>
> The per-table write ID implementation (HIVE-18192) has replaced the global 
> transaction ID with a write ID as the primary key for a row marked by 
> RecordIdentifier.Field.transactionId.
> We need to replace the same with writeId and update all test result files.
> Also, we need to update other references (methods/variables) that currently 
> use transactionId instead of writeId.
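
For readers unfamiliar with the ACID row key, here is a deliberately simplified 
sketch of the identifier discussed above (this is not the real 
{{org.apache.hadoop.hive.ql.io.RecordIdentifier}}, only an illustration of the 
field whose meaning and name change):

{code:java}
// Simplified illustration: the first component of the ACID row identifier
// switches from the global transaction id to the per-table write id (HIVE-18192).
final class RowIdSketch {
  final long writeId;  // previously exposed as transactionId
  final int bucketId;
  final long rowId;

  RowIdSketch(long writeId, int bucketId, long rowId) {
    this.writeId = writeId;
    this.bucketId = bucketId;
    this.rowId = rowId;
  }

  @Override
  public String toString() {
    return "{writeid: " + writeId + ", bucketid: " + bucketId + ", rowid: " + rowId + "}";
  }
}
{code}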



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18863) trunc() calls itself trunk() in an error message

2018-03-05 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386551#comment-16386551
 ] 

Vihang Karajgaonkar commented on HIVE-18863:


Can you please clarify what the bug is here? Based on the error message, it 
looks like the trunc UDF accepts string/char/varchar as the second argument, 
but in this case it got a timestamp.

> trunc() calls itself trunk() in an error message
> 
>
> Key: HIVE-18863
> URL: https://issues.apache.org/jira/browse/HIVE-18863
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Reporter: Tim Armstrong
>Priority: Minor
>  Labels: newbie
>
> {noformat}
> > select  trunc('millennium', cast('2001-02-16 20:38:40' as timestamp))
> FAILED: SemanticException Line 0:-1 Argument type mismatch ''2001-02-16 
> 20:38:40'': trunk() only takes STRING/CHAR/VARCHAR types as second argument, 
> got TIMESTAMP
> {noformat}
> I saw this on a derivative of Hive 1.1.0 (cdh5.15.0), but the string still 
> seems to be present on master:
> https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFTrunc.java#L262



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17478) Move filesystem stats collection from metastore to ql

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386507#comment-16386507
 ] 

Hive QA commented on HIVE-17478:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 2 new + 644 unchanged - 2 
fixed = 646 total (was 646) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} standalone-metastore: The patch generated 0 new + 
756 unchanged - 41 fixed = 756 total (was 797) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9494/dev-support/hive-personality.sh
 |
| git revision | master / 4047bef |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9494/yetus/diff-checkstyle-ql.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9494/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9494/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Move filesystem stats collection from metastore to ql
> -
>
> Key: HIVE-17478
> URL: https://issues.apache.org/jira/browse/HIVE-17478
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-17478.01.patch, HIVE-17478.01wip01.patch, 
> HIVE-17478.02.patch, HIVE-17478.03.patch, HIVE-17478.04.patch
>
>
> Filesystem-level stats are collected automatically on the metastore server 
> side... however, computing these stats earlier, during planning or query 
> execution, may make it possible to launch stat collection on a newly added 
> partition only when needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules

2018-03-05 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386489#comment-16386489
 ] 

Vihang Karajgaonkar commented on HIVE-17751:


I think hive-metastore-client should come under the standalone-metastore module 
along with metastore-common. Otherwise, projects other than Hive will have to 
write their own HiveMetastoreClient (possible, but they probably won't, since 
that may be too much work). Ideally, non-Hive projects should only need the 
metastore-common and metastore-client jars on their classpaths to be able to 
talk to the metastore server. What are the maven issues you are running into if 
you move the client under standalone-metastore?
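
For illustration, the layout being discussed might look roughly like this 
(module names other than standalone-metastore are assumptions based on this 
thread, not a committed structure):

{noformat}
standalone-metastore/
  metastore-common/   shared thrift-generated classes and utilities
  metastore-client/   thin client library for external projects
  metastore-server/   HMS server implementation
{noformat}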

> Separate HMS Client and HMS server into separate sub-modules
> 
>
> Key: HIVE-17751
> URL: https://issues.apache.org/jira/browse/HIVE-17751
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-17751.06-standalone-metastore.patch
>
>
> External applications that interface with HMS should ideally only include the 
> HMS client library instead of one big library containing the server as well. 
> We should ideally have a thin client library so that cross-version support 
> for external applications is easier. We should sub-divide the standalone 
> module into possibly 3 modules (one for common classes, one for client 
> classes and one for server) or 2 sub-modules (one for client and one for 
> server) so that we can generate separate jars for the HMS client and server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18749) Need to replace transactionId with writeId in RecordIdentifier and other relevant contexts.

2018-03-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16386451#comment-16386451
 ] 

Hive QA commented on HIVE-18749:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12913068/HIVE-18749.02.patch

{color:green}SUCCESS:{color} +1 due to 30 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 13468 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=94)


[jira] [Updated] (HIVE-17552) Enable bucket map join by default

2018-03-05 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-17552:
--
Attachment: HIVE-17552.11.patch

> Enable bucket map join by default
> -
>
> Key: HIVE-17552
> URL: https://issues.apache.org/jira/browse/HIVE-17552
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-17552.1.patch, HIVE-17552.10.patch, 
> HIVE-17552.11.patch, HIVE-17552.2.patch, HIVE-17552.3.patch, 
> HIVE-17552.4.patch, HIVE-17552.5.patch, HIVE-17552.6.patch, 
> HIVE-17552.7.patch, HIVE-17552.8.patch, HIVE-17552.9.patch
>
>
> Currently bucket map join is disabled by default; however, it is potentially 
> the most efficient join we have. We need to enable it by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


  1   2   >