[jira] [Commented] (HIVE-16605) Enforce NOT NULL constraints

2018-01-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328404#comment-16328404
 ] 

Hive QA commented on HIVE-16605:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
47s{color} | {color:red} ql: The patch generated 66 new + 1567 unchanged - 0 
fixed = 1633 total (was 1567) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 798a17c |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8651/yetus/diff-checkstyle-ql.txt
 |
| modules | C: common ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8651/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Enforce NOT NULL constraints
> 
>
> Key: HIVE-16605
> URL: https://issues.apache.org/jira/browse/HIVE-16605
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Carter Shanklin
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-16605.1.patch
>
>
> Since NOT NULL is so common, it would be great to have tables start enforcing 
> it.
> [~ekoifman] described a possible approach in HIVE-16575:
> {quote}
> One way to enforce a NOT NULL constraint is to have the optimizer add an 
> enforce_not_null UDF which throws if it sees a NULL and is otherwise a 
> pass-through.
> So if 'b' has a NOT NULL constraint,
> Insert into T select a,b,c... would become
> Insert into T select a, enforce_not_null(b), c.
> This would work for any table type.
> {quote}
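The quoted approach can be sketched in plain Java. This is not the actual Hive GenericUDF implementation; the class and method names below are hypothetical, for illustration only:

```java
// Hypothetical sketch of the enforce_not_null idea from the quote above:
// pass the value through unchanged, or fail fast when it is NULL.
public class EnforceNotNull {
    static <T> T enforceNotNull(T value, String column) {
        if (value == null) {
            throw new IllegalArgumentException(
                "NOT NULL constraint violated for column " + column);
        }
        return value; // pass-through for non-null values
    }

    public static void main(String[] args) {
        // Mirrors: Insert into T select a, enforce_not_null(b), c
        System.out.println(enforceNotNull("b-value", "b")); // prints b-value
        try {
            enforceNotNull(null, "b");
        } catch (IllegalArgumentException e) {
            // the constraint violation surfaces as a runtime failure
            System.out.println(e.getMessage());
        }
    }
}
```

In the real patch this logic would live in a UDF that the optimizer wires into the plan, so the failure aborts the INSERT for any table type, as the quote notes.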



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18466) Enhance LazySimpleSerDe (Text) to optionally output data escaped for serialization

2018-01-16 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18466:

Attachment: HIVE-18466.01.patch

> Enhance LazySimpleSerDe (Text) to optionally output data escaped for 
> serialization
> --
>
> Key: HIVE-18466
> URL: https://issues.apache.org/jira/browse/HIVE-18466
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
> Environment: Allows SELECTing out TEXTFILE columns but retaining 
> STRING data type family escapes so the output is still TEXTFILE.
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18466.01.patch
>
>






[jira] [Updated] (HIVE-18466) Enhance LazySimpleSerDe (Text) to optionally output data escaped for serialization

2018-01-16 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18466:

Status: Patch Available  (was: Open)

> Enhance LazySimpleSerDe (Text) to optionally output data escaped for 
> serialization
> --
>
> Key: HIVE-18466
> URL: https://issues.apache.org/jira/browse/HIVE-18466
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
> Environment: Allows SELECTing out TEXTFILE columns but retaining 
> STRING data type family escapes so the output is still TEXTFILE.
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18466.01.patch
>
>






[jira] [Updated] (HIVE-18465) Hive metastore schema initialization failing on postgres

2018-01-16 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-18465:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

The changes are only in the Postgres script, so there is no need for a ptest run.

Thanks for the patch [~deepesh]!

> Hive metastore schema initialization failing on postgres
> 
>
> Key: HIVE-18465
> URL: https://issues.apache.org/jira/browse/HIVE-18465
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18465.patch
>
>
> The Hive metastore schema on Postgres is broken after the commit for HIVE-14498. 
> The following error is seen during schema initialization:
> {noformat}
> 0: jdbc:postgresql://localhost.localdomain:54> ALTER TABLE ONLY 
> "MV_CREATION_METADATA" ADD CONSTRAINT "MV_CREATION_METADATA_FK" FOREIGN KEY 
> ("TBL_ID") REFERENCES "TBLS"("TBL_ID") DEFERRABLE
> Error: ERROR: there is no unique constraint matching given keys for 
> referenced table "TBLS" (state=42830,code=0)
> Closing: 0: jdbc:postgresql://localhost.localdomain:5432/hive
> org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization 
> FAILED! Metastore state would be inconsistent !!
> Underlying cause: java.io.IOException : Schema script failed, errorcode 2
> org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization 
> FAILED! Metastore state would be inconsistent !!
>   at 
> org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:586)
>   at 
> org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:559)
>   at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1183)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> Caused by: java.io.IOException: Schema script failed, errorcode 2
>   at 
> org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:957)
>   at 
> org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:935)
>   at 
> org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:582)
>   ... 8 more
> *** schemaTool failed ***{noformat}
> In the file metastore/scripts/upgrade/postgres/hive-schema-3.0.0.postgres.sql 
> the statement
> {noformat}
> ALTER TABLE ONLY "MV_CREATION_METADATA"
>  ADD CONSTRAINT "MV_CREATION_METADATA_FK" FOREIGN KEY ("TBL_ID") REFERENCES 
> "TBLS"("TBL_ID") DEFERRABLE;{noformat}
> appears before the definition of the unique constraints on TBLS, which causes 
> the failure.
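Given that, the fix is presumably a simple reordering in hive-schema-3.0.0.postgres.sql: declare the unique/primary key on "TBLS"("TBL_ID") before any foreign key references it. A sketch of the intended order (the constraint name "TBLS_pkey" is an assumption, not taken from the actual script):

{noformat}
-- The unique constraint on the referenced column must exist first:
ALTER TABLE ONLY "TBLS" ADD CONSTRAINT "TBLS_pkey" PRIMARY KEY ("TBL_ID");

-- Only then can the foreign key be created:
ALTER TABLE ONLY "MV_CREATION_METADATA"
 ADD CONSTRAINT "MV_CREATION_METADATA_FK" FOREIGN KEY ("TBL_ID") REFERENCES
 "TBLS"("TBL_ID") DEFERRABLE;
{noformat}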





[jira] [Assigned] (HIVE-18466) Enhance LazySimpleSerDe (Text) to optionally output data escaped for serialization

2018-01-16 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline reassigned HIVE-18466:
---


> Enhance LazySimpleSerDe (Text) to optionally output data escaped for 
> serialization
> --
>
> Key: HIVE-18466
> URL: https://issues.apache.org/jira/browse/HIVE-18466
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
> Environment: Allows SELECTing out TEXTFILE columns but retaining 
> STRING data type family escapes so the output is still TEXTFILE.
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
>






[jira] [Commented] (HIVE-18465) Hive metastore schema initialization failing on postgres

2018-01-16 Thread Deepesh Khandelwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328395#comment-16328395
 ] 

Deepesh Khandelwal commented on HIVE-18465:
---

[~jcamachorodriguez] can you review the patch?

> Hive metastore schema initialization failing on postgres
> 
>
> Key: HIVE-18465
> URL: https://issues.apache.org/jira/browse/HIVE-18465
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18465.patch
>
>





[jira] [Updated] (HIVE-18465) Hive metastore schema initialization failing on postgres

2018-01-16 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-18465:
--
Fix Version/s: 3.0.0
   Status: Patch Available  (was: Open)

> Hive metastore schema initialization failing on postgres
> 
>
> Key: HIVE-18465
> URL: https://issues.apache.org/jira/browse/HIVE-18465
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18465.patch
>
>





[jira] [Updated] (HIVE-18465) Hive metastore schema initialization failing on postgres

2018-01-16 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-18465:
--
Attachment: HIVE-18465.patch

> Hive metastore schema initialization failing on postgres
> 
>
> Key: HIVE-18465
> URL: https://issues.apache.org/jira/browse/HIVE-18465
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
>Priority: Major
> Attachments: HIVE-18465.patch
>
>





[jira] [Assigned] (HIVE-18465) Hive metastore schema initialization failing on postgres

2018-01-16 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal reassigned HIVE-18465:
-


> Hive metastore schema initialization failing on postgres
> 
>
> Key: HIVE-18465
> URL: https://issues.apache.org/jira/browse/HIVE-18465
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 3.0.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
>Priority: Major
>





[jira] [Commented] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet

2018-01-16 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328390#comment-16328390
 ] 

Vihang Karajgaonkar commented on HIVE-18323:


I was porting the patch to branch-2 when I realized that branch-2 still uses 
Java 7, so this code will not compile there:
{code}
case INT64:
  long seconds = 0;
  long nanoSeconds = 0;
  switch (type.getOriginalType()) {
  case TIMESTAMP_MILLIS:
    long miliSeconds = dataColumn.readLong();
    seconds = miliSeconds / TEN_TO_POW_3;
    nanoSeconds = (miliSeconds - seconds * TEN_TO_POW_3) * TEN_TO_POW_6;
    break;
  default:
    throw new IOException(
        "Unsupported parquet logical type: " + type.getOriginalType() + " for timestamp");
  }
  c.set(rowId, Timestamp.from(Instant.ofEpochSecond(seconds, nanoSeconds)));
{code}

because {{Timestamp.from(Instant.ofEpochSecond(seconds, nanoSeconds))}} uses 
the {{Instant}} class, which is only available from Java 8. Also, I noticed that in 
{{DataWritableWriter}} the timestampWritable object is written as INT96. Do we 
even support writing timestamps as INT64 in Parquet in Hive? Any ideas [~Ferd] 
[~aihuaxu] [~spena]?
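For branch-2, the same millis-to-Timestamp conversion can presumably be written with plain {{java.sql.Timestamp}} and no {{Instant}}, keeping it Java 7 compatible. A sketch, not the actual backport (constants are inlined and only positive epoch values are considered, mirroring the snippet above):

```java
import java.sql.Timestamp;

public class MillisToTimestamp {
    // Java 7 compatible equivalent of
    // Timestamp.from(Instant.ofEpochSecond(seconds, nanoSeconds)):
    // split epoch millis into whole seconds plus a nanosecond remainder.
    static Timestamp fromMillis(long milliSeconds) {
        long seconds = milliSeconds / 1000L;                            // TEN_TO_POW_3
        long nanoSeconds = (milliSeconds - seconds * 1000L) * 1000000L; // TEN_TO_POW_6
        Timestamp ts = new Timestamp(seconds * 1000L); // whole seconds only, nanos start at 0
        ts.setNanos((int) nanoSeconds);                // set the sub-second part
        return ts;
    }

    public static void main(String[] args) {
        Timestamp ts = fromMillis(1516147200123L);
        System.out.println(ts.getTime()); // 1516147200123
    }
}
```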

> Vectorization: add the support of timestamp in 
> VectorizedPrimitiveColumnReader for parquet
> --
>
> Key: HIVE-18323
> URL: https://issues.apache.org/jira/browse/HIVE-18323
> Project: Hive
>  Issue Type: Sub-task
>  Components: Vectorization
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, 
> HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, 
> HIVE-18323.07.patch, HIVE-18323.1.patch
>
>
> {noformat}
> CREATE TABLE `t1`(
>   `ts` timestamp,
>   `s1` string)
> STORED AS PARQUET;
> set hive.vectorized.execution.enabled=true;
> SELECT * from t1 SORT BY s1;
> {noformat}
> This query will throw an exception since timestamp is not supported here yet.
> {noformat}
> Caused by: java.io.IOException: java.io.IOException: Unsupported type: 
> optional int96 ts
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116)
> {noformat}





[jira] [Commented] (HIVE-18462) Explain formatted for queries with map join has columnExprMap with unformatted column name

2018-01-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328384#comment-16328384
 ] 

Hive QA commented on HIVE-18462:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12906329/HIVE-18462.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 11543 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join0] (batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parallel_join0] 
(batchId=74)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_join3] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_join4] 
(batchId=85)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_outer_join6] 
(batchId=41)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver
 (batchId=177)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=121)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query39] 
(batchId=248)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=229)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8650/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8650/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8650/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 21 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12906329 - PreCommit-HIVE-Build

> Explain formatted for queries with map join has columnExprMap with 
> unformatted column name
> --
>
> Key: HIVE-18462
> URL: https://issues.apache.org/jira/browse/HIVE-18462
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18462.1.patch
>
>
> e.g.
> {code:sql}
> "columnExprMap:":{  
>   "_col0":"0:Column[_col0]",
>   "_col1":"0:Column[_col1]",
>   "_col2":"1:Column[_col0]",
>   "_col3":"1:Column[_col1]"
>   }
> {code}
> It is better formatted as:
> {code:sql}
> "columnExprMap:":{  
>  "_col0":"0:_col0",
>  "_col1":"0:_col1",
>  "_col2":"1:_col0",
>  "_col3":"1:_col1"
>  }
> {code}





[jira] [Commented] (HIVE-18422) Vectorized input format should not be used when input format is excluded and row.serde is enabled

2018-01-16 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328367#comment-16328367
 ] 

Vihang Karajgaonkar commented on HIVE-18422:


[~mmccline] Did you get a chance to take a look at this patch? Thanks!

> Vectorized input format should not be used when input format is excluded and 
> row.serde is enabled
> -
>
> Key: HIVE-18422
> URL: https://issues.apache.org/jira/browse/HIVE-18422
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-18422.01.patch, HIVE-18422.02.patch
>
>
> HIVE-17534 introduced a config that makes it possible to exclude certain 
> input formats from vectorized execution without affecting other input formats. 
> If an input format is excluded and row.serde is enabled at the same time, 
> the vectorizer still sets {{useVectorizedInputFormat}} to true, which causes 
> vectorized readers to be used in row.serde mode.
> In order to reproduce:
> {noformat}
> set hive.fetch.task.conversion=none;
> set hive.vectorized.use.row.serde.deserialize=true;
> set hive.vectorized.use.vector.serde.deserialize=true;
> set hive.vectorized.execution.enabled=true;
> set hive.vectorized.execution.reduce.enabled=true;
> set hive.vectorized.row.serde.inputformat.excludes=;
> -- SORT_QUERY_RESULTS
> -- exclude MapredParquetInputFormat from vectorization, this should cause 
> mapwork vectorization to be disabled
> set 
> hive.vectorized.input.format.excludes=org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat,org.apache.hadoop.hive.ql.io.orc.OrcInputFormat;
> set hive.vectorized.use.vectorized.input.format=true;
> create table orcTbl (t1 tinyint, t2 tinyint)
> stored as orc;
> insert into orcTbl values (54, 9), (-104, 25), (-112, 24);
> explain vectorization select t1, t2, (t1+t2) from orcTbl where (t1+t2) > 10;
> select t1, t2, (t1+t2) from orcTbl where (t1+t2) > 10;
> {noformat}





[jira] [Commented] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet

2018-01-16 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328363#comment-16328363
 ] 

Vihang Karajgaonkar commented on HIVE-18323:


The test failures are unrelated and have been failing for a while. HIVE-17055 
reports TestMiniLlapCliDriver.testCliDriver[llap_smb] as flaky (I confirmed it 
fails without the patch as well).

> Vectorization: add the support of timestamp in 
> VectorizedPrimitiveColumnReader for parquet
> --
>
> Key: HIVE-18323
> URL: https://issues.apache.org/jira/browse/HIVE-18323
> Project: Hive
>  Issue Type: Sub-task
>  Components: Vectorization
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, 
> HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, 
> HIVE-18323.07.patch, HIVE-18323.1.patch
>
>
> {noformat}
> CREATE TABLE `t1`(
>   `ts` timestamp,
>   `s1` string)
> STORED AS PARQUET;
> set hive.vectorized.execution.enabled=true;
> SELECT * from t1 SORT BY s1;
> {noformat}
> This query will throw an exception since timestamp is not supported here yet.
> {noformat}
> Caused by: java.io.IOException: java.io.IOException: Unsupported type: 
> optional int96 ts
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116)
> {noformat}





[jira] [Commented] (HIVE-18462) Explain formatted for queries with map join has columnExprMap with unformatted column name

2018-01-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328327#comment-16328327
 ] 

Hive QA commented on HIVE-18462:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
31s{color} | {color:red} ql: The patch generated 1 new + 21 unchanged - 0 fixed 
= 22 total (was 21) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 798a17c |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8650/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8650/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Explain formatted for queries with map join has columnExprMap with 
> unformatted column name
> --
>
> Key: HIVE-18462
> URL: https://issues.apache.org/jira/browse/HIVE-18462
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18462.1.patch
>
>
> e.g.
> {code:sql}
> "columnExprMap:":{  
>   "_col0":"0:Column[_col0]",
>   "_col1":"0:Column[_col1]",
>   "_col2":"1:Column[_col0]",
>   "_col3":"1:Column[_col1]"
>   }
> {code}
> It is better formatted as:
> {code:sql}
> "columnExprMap:":{  
>  "_col0":"0:_col0",
>  "_col1":"0:_col1",
>  "_col2":"1:_col0",
>  "_col3":"1:_col1"
>  }
> {code}





[jira] [Commented] (HIVE-18461) Fix precommit hive job

2018-01-16 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328324#comment-16328324
 ] 

Vihang Karajgaonkar commented on HIVE-18461:


Precommits are working fine now. Resolving this.

> Fix precommit hive job
> --
>
> Key: HIVE-18461
> URL: https://issues.apache.org/jira/browse/HIVE-18461
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Blocker
> Attachments: HIVE-18461.01.patch
>
>
> JIRA was upgraded over the weekend and the precommit job has been failing 
> since then. There are potentially two issues at play here. One is with the 
> precommit admin job, which automates the patch testing; I think YETUS-594 
> should fix it. But manual submission of Hive jobs is failing with the 
> exception below. We should get this fixed to get the automated testing back 
> on track.
> {noformat}
> + local 
> 'PTEST_CLASSPATH=/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
> + java -cp 
> '/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
>  org.apache.hive.ptest.api.client.PTestClient --command testStart --outputDir 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target
>  --password '[***]' --testHandle PreCommit-HIVE-Build-8631 --endpoint 
> http://104.198.109.242:8080/hive-ptest-1.0 --logsEndpoint 
> http://104.198.109.242/logs/ --profile master-mr2 --patch 
> https://issues.apache.org/jira/secure/attachment/12906251/HIVE-18323.05.patch 
> --jira HIVE-18323
> Exception in thread "main" javax.net.ssl.SSLException: Received fatal alert: 
> protocol_version
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
>   at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:1979)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1086)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1301)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
>   at java.net.URL.openStream(URL.java:1041)
>   at 
> com.google.common.io.Resources$UrlByteSource.openStream(Resources.java:72)
>   at com.google.common.io.ByteSource.read(ByteSource.java:257)
>   at com.google.common.io.Resources.toByteArray(Resources.java:99)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.testStart(PTestClient.java:126)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.main(PTestClient.java:320)
> {noformat}





[jira] [Updated] (HIVE-18461) Fix precommit hive job

2018-01-16 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-18461:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Fix precommit hive job
> --
>
> Key: HIVE-18461
> URL: https://issues.apache.org/jira/browse/HIVE-18461
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Blocker
> Attachments: HIVE-18461.01.patch
>
>
> JIRA was upgraded over the weekend and the precommit job has been failing 
> since then. There are potentially two issues at play here. One is with the 
> precommit admin job, which automates the patch testing; I think YETUS-594 
> should fix it. But manual submission of Hive jobs is failing with the 
> exception below. We should get this fixed to get the automated testing back 
> on track.
> {noformat}
> + local 
> 'PTEST_CLASSPATH=/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
> + java -cp 
> '/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
>  org.apache.hive.ptest.api.client.PTestClient --command testStart --outputDir 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target
>  --password '[***]' --testHandle PreCommit-HIVE-Build-8631 --endpoint 
> http://104.198.109.242:8080/hive-ptest-1.0 --logsEndpoint 
> http://104.198.109.242/logs/ --profile master-mr2 --patch 
> https://issues.apache.org/jira/secure/attachment/12906251/HIVE-18323.05.patch 
> --jira HIVE-18323
> Exception in thread "main" javax.net.ssl.SSLException: Received fatal alert: 
> protocol_version
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
>   at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:1979)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1086)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1301)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
>   at java.net.URL.openStream(URL.java:1041)
>   at 
> com.google.common.io.Resources$UrlByteSource.openStream(Resources.java:72)
>   at com.google.common.io.ByteSource.read(ByteSource.java:257)
>   at com.google.common.io.Resources.toByteArray(Resources.java:99)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.testStart(PTestClient.java:126)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.main(PTestClient.java:320)
> {noformat}





[jira] [Commented] (HIVE-18411) Fix ArrayIndexOutOfBoundsException for VectorizedListColumnReader

2018-01-16 Thread Colin Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328314#comment-16328314
 ] 

Colin Ma commented on HIVE-18411:
-

[~Ferd], the failed test is not patch-related.

> Fix ArrayIndexOutOfBoundsException for VectorizedListColumnReader
> -
>
> Key: HIVE-18411
> URL: https://issues.apache.org/jira/browse/HIVE-18411
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Colin Ma
>Assignee: Colin Ma
>Priority: Critical
> Attachments: HIVE-18411.001.patch
>
>
> ColumnVector should be initialized to the default size at the beginning of 
> readBatch(); otherwise, an ArrayIndexOutOfBoundsException will be thrown 
> because the size of the ColumnVector may have been updated by the previous 
> readBatch().
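The hazard described above can be sketched in plain Java. This is an
illustrative model only, not Hive's actual VectorizedListColumnReader; the
ToyColumnVector and ToyReader names and the default size are assumptions made
for the sketch:

```java
/** Toy model of a column vector whose backing array is reused across batches. */
class ToyColumnVector {
    static final int DEFAULT_SIZE = 1024;
    long[] vector = new long[DEFAULT_SIZE];

    /** Grow the backing array if the upcoming batch needs more room. */
    void ensureSize(int size) {
        if (vector.length < size) {
            vector = new long[size];
        }
    }
}

class ToyReader {
    final ToyColumnVector col = new ToyColumnVector();

    /** Fills 'rows' values. Without the ensureSize() call up front, writing
     *  into the vector can throw ArrayIndexOutOfBoundsException when a
     *  previous batch left the array smaller than 'rows'. */
    void readBatch(int rows) {
        col.ensureSize(rows);          // the fix: (re)initialize before filling
        for (int i = 0; i < rows; i++) {
            col.vector[i] = i;         // safe now; would overflow otherwise
        }
    }
}
```

The point of the guard is that sizing happens at the start of every batch, so
no batch depends on how the previous one left the vector.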





[jira] [Commented] (HIVE-18350) load data should rename files consistent with insert statements

2018-01-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328307#comment-16328307
 ] 

Hive QA commented on HIVE-18350:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12906327/HIVE-18350.5.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 75 failed/errored test(s), 11565 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_text] (batchId=74)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_data_rename] 
(batchId=7)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_fs] (batchId=82)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_fs_overwrite] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_orc] (batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_orc_part] 
(batchId=14)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_loaddata] (batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[offset_limit_global_optimizer]
 (batchId=19)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[load_fs2] 
(batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=178)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[index_bitmap3]
 (batchId=179)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[index_bitmap_auto]
 (batchId=177)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[load_fs2] 
(batchId=179)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[index_bitmap3] 
(batchId=92)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[index_bitmap_auto] 
(batchId=91)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[bucket_mapjoin_mismatch1]
 (batchId=94)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[load_data_into_acid]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[load_orc_negative2]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[load_orc_negative_part]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=121)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=219)
org.apache.hadoop.hive.ql.TestAcidOnTez.testInsertWithRemoveUnion (batchId=222)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadDataNonAcid2AcidConversion 
(batchId=257)
org.apache.hadoop.hive.ql.TestTxnLoadData.loadDataNonAcid2AcidConversionVectorized
 (batchId=257)
org.apache.hadoop.hive.ql.TestTxnNoBuckets.testToAcidConversionMultiBucket 
(batchId=278)
org.apache.hadoop.hive.ql.TestTxnNoBucketsVectorized.testToAcidConversionMultiBucket
 (batchId=278)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hadoop.hive.ql.metadata.TestHiveCopyFiles.testCopyExistingFilesOnDifferentFileSystem[0]
 (batchId=278)
org.apache.hadoop.hive.ql.metadata.TestHiveCopyFiles.testCopyExistingFilesOnDifferentFileSystem[15]
 (batchId=278)
org.apache.hadoop.hive.ql.metadata.TestHiveCopyFiles.testCopyNewFilesOnDifferentFileSystem[0]
 (batchId=278)
org.apache.hadoop.hive.ql.metadata.TestHiveCopyFiles.testCopyNewFilesOnDifferentFileSystem[15]
 (batchId=278)
org.apache.hadoop.hive.ql.metadata.TestHiveCopyFiles.testRenameExistingFilesOnSameFileSystem[0]
 (batchId=278)
org.apache.hadoop.hive.ql.metadata.TestHiveCopyFiles.testRenameExistingFilesOnSameFileSystem[15]
 (batchId=278)
org.apache.hadoop.hive.ql.metadata.TestHiveCopyFiles.testRenameNewFilesOnSameFileSystem[0]
 (batchId=278)
org.apache.hadoop.hive.ql.metadata.TestHiveCopyFiles.testRenameNewFilesOnSameFileSystem[15]
 (batchId=278)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConcatenatePartitionedTable
 (batchId=226)

[jira] [Updated] (HIVE-17848) Bucket Map Join : Implement an efficient way to minimize loading hash table

2018-01-16 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-17848:
--
Attachment: HIVE-17848.4.patch

> Bucket Map Join : Implement an efficient way to minimize loading hash table
> ---
>
> Key: HIVE-17848
> URL: https://issues.apache.org/jira/browse/HIVE-17848
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-17848.2.patch, HIVE-17848.4.patch
>
>
> In a bucket map join, each task loads its own copy of the hash table, which 
> is inefficient: the load is IO-heavy, and with multiple copies of the same 
> hash table, the tables may get GCed on a busy system.
> Implement a subcache holding a soft reference to each hash table, keyed by 
> its bucket ID, so that a hash table can be reused across tasks.
> This needs changes on the Tez side to push the bucket ID to TezProcessor.
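The subcache idea can be sketched in plain Java. This is a hypothetical
sketch, not the actual Hive/Tez implementation; the cache class, the Loader
interface, and the use of a generic T as a stand-in for the hash table type
are all assumptions:

```java
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch of a per-bucket hash table subcache using soft references, so the
 *  GC may reclaim unused tables under memory pressure instead of OOMing. */
class BucketHashTableCache<T> {
    private final Map<Integer, SoftReference<T>> cache = new ConcurrentHashMap<>();

    interface Loader<T> { T load(int bucketId); }

    /** Return the cached table for this bucket, reloading only if it was
     *  never loaded or the soft reference has been cleared by the GC. */
    T get(int bucketId, Loader<T> loader) {
        SoftReference<T> ref = cache.get(bucketId);
        T table = (ref == null) ? null : ref.get();
        if (table == null) {                         // first use, or collected
            table = loader.load(bucketId);           // expensive IO-heavy load
            cache.put(bucketId, new SoftReference<>(table));
        }
        return table;
    }
}
```

A soft reference is the natural fit here: tasks sharing a bucket avoid
redundant loads, yet the JVM can still drop a table when memory gets tight.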





[jira] [Commented] (HIVE-18385) mergejoin fails with java.lang.IllegalStateException

2018-01-16 Thread Deepak Jaiswal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328277#comment-16328277
 ] 

Deepak Jaiswal commented on HIVE-18385:
---

Won't this mean the test may still fail on macOS due to the stats issue?

+1 pending test results. I guess you need to reattach the patch to trigger a 
run.

> mergejoin fails with java.lang.IllegalStateException
> 
>
> Key: HIVE-18385
> URL: https://issues.apache.org/jira/browse/HIVE-18385
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-18385.1.patch, HIVE-18385.2.patch, hive.log
>
>
> mergejoin test fails with java.lang.IllegalStateException when run in 
> MiniLlapLocal.
> This is the query for which it fails,
> [ERROR]   TestMiniLlapLocalCliDriver.testCliDriver:59 Client execution failed 
> with error code = 2 running "
> select count(*) from tab a join tab_part b on a.key = b.key join src1 c on 
> a.value = c.value" fname=mergejoin.q 
> This is the stack trace,
> failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: b initializer failed, 
> vertex=vertex_1515180518813_0001_42_05 [Map 8], java.lang.RuntimeException: 
> ORC split generation failed with exception: java.lang.IllegalStateException: 
> Failed to retrieve dynamic value for RS_12_a_key_min
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1784)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1872)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:499)
> at 
> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:684)
> at 
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:196)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
> at 
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IllegalStateException: Failed to retrieve dynamic value for 
> RS_12_a_key_min
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1778)
> ... 17 more
> Caused by: java.lang.IllegalStateException: Failed to retrieve dynamic value 
> for RS_12_a_key_min
> at 
> org.apache.hadoop.hive.ql.plan.DynamicValue.getValue(DynamicValue.java:142)
> at 
> org.apache.hadoop.hive.ql.plan.DynamicValue.getJavaValue(DynamicValue.java:97)
> at 
> org.apache.hadoop.hive.ql.plan.DynamicValue.getLiteral(DynamicValue.java:93)
> at 
> org.apache.hadoop.hive.ql.io.sarg.SearchArgumentImpl$PredicateLeafImpl.getLiteralList(SearchArgumentImpl.java:120)
> at 
> org.apache.orc.impl.RecordReaderImpl.evaluatePredicateMinMax(RecordReaderImpl.java:553)
> at 
> org.apache.orc.impl.RecordReaderImpl.evaluatePredicateRange(RecordReaderImpl.java:463)
> at 
> org.apache.orc.impl.RecordReaderImpl.evaluatePredicate(RecordReaderImpl.java:440)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.isStripeSatisfyPredicate(OrcInputFormat.java:2163)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.pickStripesInternal(OrcInputFormat.java:2140)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.pickStripes(OrcInputFormat.java:2131)
> at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.access$3000(OrcInputFormat.java:157)

[jira] [Commented] (HIVE-18350) load data should rename files consistent with insert statements

2018-01-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328271#comment-16328271
 ] 

Hive QA commented on HIVE-18350:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
35s{color} | {color:red} ql: The patch generated 10 new + 540 unchanged - 5 
fixed = 550 total (was 545) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 798a17c |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8649/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8649/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> load data should rename files consistent with insert statements
> ---
>
> Key: HIVE-18350
> URL: https://issues.apache.org/jira/browse/HIVE-18350
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18350.1.patch, HIVE-18350.2.patch, 
> HIVE-18350.3.patch, HIVE-18350.4.patch, HIVE-18350.5.patch
>
>
> Insert statements create files with names ending in _0 (e.g. 0001_0). 
> However, load data keeps the input file name. That results in an 
> inconsistent naming convention, which makes SMB joins difficult in some 
> scenarios and may cause trouble for other types of queries in the future.
> We need a consistent naming convention.
> For a non-bucketed table, Hive renames all the files regardless of how they 
> were named by the user.
>  For a bucketed table, in non-strict mode Hive relies on the user to name 
> the files to match the bucket, and assumes that the data in a file belongs 
> to a single bucket. In strict mode, loading a bucketed table is disabled.
> This will likely affect most of the tests that load data, which is 
> significant enough that the work is further divided into two subtasks for a 
> smoother merge.
> For existing tables in a customer database, it is recommended to reload 
> bucketed tables; otherwise, if the customer runs an SMB join and there is a 
> bucket for which there is no split, incorrect results are possible. However, 
> this is not a regression, as it would happen even without the patch.
> With this patch and a reload of the data, the results should be correct.
> For 
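The non-bucketed renaming described above can be sketched in plain Java. This
is an illustrative sketch only; the LoadDataRenamer name and the exact
insert-style name format (e.g. 000000_0) are assumptions, not Hive's actual
rename logic:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: map arbitrarily named loaded files to sequential, insert-style
 *  names so load data and insert produce a consistent convention. */
final class LoadDataRenamer {
    static List<String> renameAll(List<String> inputFiles) {
        List<String> renamed = new ArrayList<>();
        for (int i = 0; i < inputFiles.size(); i++) {
            // Original user-chosen names are discarded entirely.
            renamed.add(String.format("%06d_0", i));
        }
        return renamed;
    }
}
```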

[jira] [Updated] (HIVE-17495) CachedStore: prewarm improvement (avoid multiple sql calls to read partition column stats), refactoring and caching some aggregate stats

2018-01-16 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-17495:

Attachment: HIVE-17495.10.patch

> CachedStore: prewarm improvement (avoid multiple sql calls to read partition 
> column stats), refactoring and caching some aggregate stats
> 
>
> Key: HIVE-17495
> URL: https://issues.apache.org/jira/browse/HIVE-17495
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-17495.1.patch, HIVE-17495.10.patch, 
> HIVE-17495.10.patch, HIVE-17495.2.patch, HIVE-17495.3.patch, 
> HIVE-17495.4.patch, HIVE-17495.5.patch, HIVE-17495.6.patch, 
> HIVE-17495.7.patch, HIVE-17495.8.patch, HIVE-17495.9.patch
>
>
> We would like to make the following optimizations only when CachedStore is 
> enabled:
> 1. During CachedStore prewarm, use one SQL call to retrieve the column stats 
> objects for a db and store them in the cache.
> 2. Cache some aggregate stats (e.g. aggregate stats for all partitions, 
> which seem to be commonly used) to speed up query compilation.
> 3. There was a bug in {{MetaStoreUtils.aggrPartitionStats}}, which called 
> iterator.next() without checking iterator.hasNext(). This patch refactors 
> some code to fix that.
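The iterator hazard in point 3 can be illustrated in plain Java. This is a
generic sketch, not the actual MetaStoreUtils.aggrPartitionStats code; the
StatsAggregator name and the row-count aggregation are assumptions:

```java
import java.util.Iterator;
import java.util.List;

/** Sketch of aggregating per-partition stats with a hasNext() guard.
 *  Calling next() unconditionally throws NoSuchElementException on an
 *  empty input, which is the shape of the bug being fixed. */
final class StatsAggregator {
    static long sumRowCounts(List<Long> perPartitionRowCounts) {
        long total = 0;
        Iterator<Long> it = perPartitionRowCounts.iterator();
        while (it.hasNext()) {   // guard before every next()
            total += it.next();
        }
        return total;
    }
}
```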





[jira] [Updated] (HIVE-16605) Enforce NOT NULL constraints

2018-01-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-16605:
---
Status: Patch Available  (was: Open)

> Enforce NOT NULL constraints
> 
>
> Key: HIVE-16605
> URL: https://issues.apache.org/jira/browse/HIVE-16605
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Carter Shanklin
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-16605.1.patch
>
>
> Since NOT NULL is so common, it would be great to have tables start to 
> enforce it.
> [~ekoifman] described a possible approach in HIVE-16575:
> {quote}
> One way to enforce not null constraint is to have the optimizer add 
> enforce_not_null UDF which throws if it sees a NULL, otherwise it's pass 
> through.
> So if 'b' has not null constraint,
> Insert into T select a,b,c... would become
> Insert into T select a, enforce_not_null(b), c.
> This would work for any table type.
> {quote}
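The quoted approach boils down to a pass-through function that fails fast on NULL. A minimal model of those semantics in plain Java (illustrative only; a real Hive UDF would implement the GenericUDF API):

```java
public class EnforceNotNull {
    // Pass the value through unchanged, or throw on NULL -- the semantics
    // proposed for the enforce_not_null UDF.
    static <T> T enforceNotNull(T value) {
        if (value == null) {
            throw new RuntimeException("NOT NULL constraint violated");
        }
        return value; // pass-through for non-null values
    }

    public static void main(String[] args) {
        System.out.println(enforceNotNull("b-value")); // prints b-value
        try {
            enforceNotNull(null);
        } catch (RuntimeException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Wrapping only the constrained column means unconstrained columns pay no cost, and the check works for any table type because it runs in the query plan rather than the storage layer.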



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16605) Enforce NOT NULL constraints

2018-01-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-16605:
---
Attachment: HIVE-16605.1.patch

> Enforce NOT NULL constraints
> 
>
> Key: HIVE-16605
> URL: https://issues.apache.org/jira/browse/HIVE-16605
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Carter Shanklin
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-16605.1.patch
>
>
> Since NOT NULL is so common, it would be great to have tables start to 
> enforce it.
> [~ekoifman] described a possible approach in HIVE-16575:
> {quote}
> One way to enforce a NOT NULL constraint is to have the optimizer add an 
> enforce_not_null UDF which throws if it sees a NULL and is otherwise a 
> pass-through.
> So if 'b' has a NOT NULL constraint,
> Insert into T select a, b, c... would become
> Insert into T select a, enforce_not_null(b), c.
> This would work for any table type.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet

2018-01-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328263#comment-16328263
 ] 

Hive QA commented on HIVE-18323:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12906323/HIVE-18323.07.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 13 failed/errored test(s), 11562 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=121)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8648/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8648/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8648/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 13 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12906323 - PreCommit-HIVE-Build

> Vectorization: add the support of timestamp in 
> VectorizedPrimitiveColumnReader for parquet
> --
>
> Key: HIVE-18323
> URL: https://issues.apache.org/jira/browse/HIVE-18323
> Project: Hive
>  Issue Type: Sub-task
>  Components: Vectorization
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, 
> HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, 
> HIVE-18323.07.patch, HIVE-18323.1.patch
>
>
> {noformat}
> CREATE TABLE `t1`(
>   `ts` timestamp,
>   `s1` string)
> STORED AS PARQUET;
> set hive.vectorized.execution.enabled=true;
> SELECT * from t1 SORT BY s1;
> {noformat}
> This query will throw an exception since timestamp is not supported here yet.
> {noformat}
> Caused by: java.io.IOException: java.io.IOException: Unsupported type: 
> optional int96 ts
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18462) Explain formatted for queries with map join has columnExprMap with unformatted column name

2018-01-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18462:
---
Status: Patch Available  (was: Open)

> Explain formatted for queries with map join has columnExprMap with 
> unformatted column name
> --
>
> Key: HIVE-18462
> URL: https://issues.apache.org/jira/browse/HIVE-18462
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18462.1.patch
>
>
> e.g.
> {code:sql}
> "columnExprMap:":{  
>   "_col0":"0:Column[_col0]",
>   "_col1":"0:Column[_col1]",
>   "_col2":"1:Column[_col0]",
>   "_col3":"1:Column[_col1]"
>   }
> {code}
> It is better formatted as:
> {code:sql}
> "columnExprMap:":{  
>  "_col0":"0:_col0",
>  "_col1":"0:_col1",
>  "_col2":"1:_col0",
>  "_col3":"1:_col1"
>  }
> {code}
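The proposed change is a mechanical rewrite of each map value from `N:Column[name]` to `N:name`. A hypothetical sketch of that transformation in plain Java (the names and regex here are assumptions for illustration, not the actual patch):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ColumnExprMapFormat {
    // Matches values such as "0:Column[_col0]" and captures the tag and name.
    private static final Pattern COLUMN = Pattern.compile("^(\\d+):Column\\[(.+)\\]$");

    // Rewrites "0:Column[_col0]" as "0:_col0"; leaves other values untouched.
    static String format(String value) {
        Matcher m = COLUMN.matcher(value);
        return m.matches() ? m.group(1) + ":" + m.group(2) : value;
    }

    public static void main(String[] args) {
        Map<String, String> exprMap = new LinkedHashMap<>();
        exprMap.put("_col0", "0:Column[_col0]");
        exprMap.put("_col2", "1:Column[_col0]");
        exprMap.replaceAll((k, v) -> format(v));
        System.out.println(exprMap); // {_col0=0:_col0, _col2=1:_col0}
    }
}
```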



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16605) Enforce NOT NULL constraints

2018-01-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-16605:
---
Attachment: (was: HIVE-16605.1.patch)

> Enforce NOT NULL constraints
> 
>
> Key: HIVE-16605
> URL: https://issues.apache.org/jira/browse/HIVE-16605
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Carter Shanklin
>Assignee: Vineet Garg
>Priority: Major
>
> Since NOT NULL is so common, it would be great to have tables start to 
> enforce it.
> [~ekoifman] described a possible approach in HIVE-16575:
> {quote}
> One way to enforce a NOT NULL constraint is to have the optimizer add an 
> enforce_not_null UDF which throws if it sees a NULL and is otherwise a 
> pass-through.
> So if 'b' has a NOT NULL constraint,
> Insert into T select a, b, c... would become
> Insert into T select a, enforce_not_null(b), c.
> This would work for any table type.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18462) Explain formatted for queries with map join has columnExprMap with unformatted column name

2018-01-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18462:
---
Attachment: HIVE-18462.1.patch

> Explain formatted for queries with map join has columnExprMap with 
> unformatted column name
> --
>
> Key: HIVE-18462
> URL: https://issues.apache.org/jira/browse/HIVE-18462
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18462.1.patch
>
>
> e.g.
> {code:sql}
> "columnExprMap:":{  
>   "_col0":"0:Column[_col0]",
>   "_col1":"0:Column[_col1]",
>   "_col2":"1:Column[_col0]",
>   "_col3":"1:Column[_col1]"
>   }
> {code}
> It is better formatted as:
> {code:sql}
> "columnExprMap:":{  
>  "_col0":"0:_col0",
>  "_col1":"0:_col1",
>  "_col2":"1:_col0",
>  "_col3":"1:_col1"
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-16605) Enforce NOT NULL constraints

2018-01-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-16605:
---
Status: Open  (was: Patch Available)

> Enforce NOT NULL constraints
> 
>
> Key: HIVE-16605
> URL: https://issues.apache.org/jira/browse/HIVE-16605
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Carter Shanklin
>Assignee: Vineet Garg
>Priority: Major
>
> Since NOT NULL is so common, it would be great to have tables start to 
> enforce it.
> [~ekoifman] described a possible approach in HIVE-16575:
> {quote}
> One way to enforce a NOT NULL constraint is to have the optimizer add an 
> enforce_not_null UDF which throws if it sees a NULL and is otherwise a 
> pass-through.
> So if 'b' has a NOT NULL constraint,
> Insert into T select a, b, c... would become
> Insert into T select a, enforce_not_null(b), c.
> This would work for any table type.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18462) Explain formatted for queries with map join has columnExprMap with unformatted column name

2018-01-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18462:
---
Status: Open  (was: Patch Available)

> Explain formatted for queries with map join has columnExprMap with 
> unformatted column name
> --
>
> Key: HIVE-18462
> URL: https://issues.apache.org/jira/browse/HIVE-18462
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>
> e.g.
> {code:sql}
> "columnExprMap:":{  
>   "_col0":"0:Column[_col0]",
>   "_col1":"0:Column[_col1]",
>   "_col2":"1:Column[_col0]",
>   "_col3":"1:Column[_col1]"
>   }
> {code}
> It is better formatted as:
> {code:sql}
> "columnExprMap:":{  
>  "_col0":"0:_col0",
>  "_col1":"0:_col1",
>  "_col2":"1:_col0",
>  "_col3":"1:_col1"
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18462) Explain formatted for queries with map join has columnExprMap with unformatted column name

2018-01-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18462:
---
Attachment: (was: HIVE-18462.1.patch)

> Explain formatted for queries with map join has columnExprMap with 
> unformatted column name
> --
>
> Key: HIVE-18462
> URL: https://issues.apache.org/jira/browse/HIVE-18462
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>
> e.g.
> {code:sql}
> "columnExprMap:":{  
>   "_col0":"0:Column[_col0]",
>   "_col1":"0:Column[_col1]",
>   "_col2":"1:Column[_col0]",
>   "_col3":"1:Column[_col1]"
>   }
> {code}
> It is better formatted as:
> {code:sql}
> "columnExprMap:":{  
>  "_col0":"0:_col0",
>  "_col1":"0:_col1",
>  "_col2":"1:_col0",
>  "_col3":"1:_col1"
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18350) load data should rename files consistent with insert statements

2018-01-16 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-18350:
--
Attachment: HIVE-18350.5.patch

> load data should rename files consistent with insert statements
> ---
>
> Key: HIVE-18350
> URL: https://issues.apache.org/jira/browse/HIVE-18350
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18350.1.patch, HIVE-18350.2.patch, 
> HIVE-18350.3.patch, HIVE-18350.4.patch, HIVE-18350.5.patch
>
>
> Insert statements create files with names like 000000_0, 000001_0, etc. 
> However, load data keeps the input file name. That results in an 
> inconsistent naming convention which makes SMB joins difficult in some 
> scenarios and may cause trouble for other types of queries in the future.
> We need a consistent naming convention.
> For a non-bucketed table, Hive renames all the files regardless of how they 
> were named by the user.
>  For a bucketed table, Hive relies on the user to name the files to match 
> the buckets in non-strict mode; Hive assumes that all the data in a file 
> belongs to the same bucket. In strict mode, loading a bucketed table is 
> disabled.
> This will likely affect most of the tests that load data, which is 
> significant, so the work is further divided into two subtasks for a 
> smoother merge.
> For existing tables in a customer database, it is recommended to reload 
> bucketed tables; otherwise, if the customer runs an SMB join and there is a 
> bucket for which there is no split, there is a possibility of incorrect 
> results. This is not a regression, however, as it would happen even without 
> the patch.
> With this patch, after reloading the data, the results should be correct.
> For non-bucketed tables and external tables, the behavior is unchanged and 
> reloading data is not needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet

2018-01-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328240#comment-16328240
 ] 

Hive QA commented on HIVE-18323:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 798a17c |
| Default Java | 1.8.0_111 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8648/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Vectorization: add the support of timestamp in 
> VectorizedPrimitiveColumnReader for parquet
> --
>
> Key: HIVE-18323
> URL: https://issues.apache.org/jira/browse/HIVE-18323
> Project: Hive
>  Issue Type: Sub-task
>  Components: Vectorization
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, 
> HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, 
> HIVE-18323.07.patch, HIVE-18323.1.patch
>
>
> {noformat}
> CREATE TABLE `t1`(
>   `ts` timestamp,
>   `s1` string)
> STORED AS PARQUET;
> set hive.vectorized.execution.enabled=true;
> SELECT * from t1 SORT BY s1;
> {noformat}
> This query will throw an exception since timestamp is not supported here yet.
> {noformat}
> Caused by: java.io.IOException: java.io.IOException: Unsupported type: 
> optional int96 ts
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet

2018-01-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328232#comment-16328232
 ] 

Hive QA commented on HIVE-18323:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12906251/HIVE-18323.05.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 11515 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
 (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
 (batchId=84)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=94)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=121)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=254)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8632/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8632/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8632/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 17 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12906251 - PreCommit-HIVE-Build

> Vectorization: add the support of timestamp in 
> VectorizedPrimitiveColumnReader for parquet
> --
>
> Key: HIVE-18323
> URL: https://issues.apache.org/jira/browse/HIVE-18323
> Project: Hive
>  Issue Type: Sub-task
>  Components: Vectorization
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, 
> HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, 
> HIVE-18323.07.patch, HIVE-18323.1.patch
>
>
> {noformat}
> CREATE TABLE `t1`(
>   `ts` timestamp,
>   `s1` string)
> STORED AS PARQUET;
> set hive.vectorized.execution.enabled=true;
> SELECT * from t1 SORT BY s1;
> {noformat}
> This query will throw an exception since timestamp is not supported here yet.
> {noformat}
> Caused by: java.io.IOException: java.io.IOException: Unsupported type: 
> optional int96 ts
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet

2018-01-16 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-18323:
---
Attachment: HIVE-18323.07.patch

> Vectorization: add the support of timestamp in 
> VectorizedPrimitiveColumnReader for parquet
> --
>
> Key: HIVE-18323
> URL: https://issues.apache.org/jira/browse/HIVE-18323
> Project: Hive
>  Issue Type: Sub-task
>  Components: Vectorization
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, 
> HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, 
> HIVE-18323.07.patch, HIVE-18323.1.patch
>
>
> {noformat}
> CREATE TABLE `t1`(
>   `ts` timestamp,
>   `s1` string)
> STORED AS PARQUET;
> set hive.vectorized.execution.enabled=true;
> SELECT * from t1 SORT BY s1;
> {noformat}
> This query will throw an exception since timestamp is not supported here yet.
> {noformat}
> Caused by: java.io.IOException: java.io.IOException: Unsupported type: 
> optional int96 ts
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18461) Fix precommit hive job

2018-01-16 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328225#comment-16328225
 ] 

Vihang Karajgaonkar commented on HIVE-18461:


Patch committed to master.

> Fix precommit hive job
> --
>
> Key: HIVE-18461
> URL: https://issues.apache.org/jira/browse/HIVE-18461
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Blocker
> Attachments: HIVE-18461.01.patch
>
>
> JIRA was upgraded over the weekend and the precommit job has been failing 
> since then. There are potentially two issues at play here. One is with the 
> precommit admin job, which automates the patch testing; I think YETUS-594 
> should fix that. But manual submission of Hive jobs is failing with the 
> exception below. We need this fix to get the automated testing back on 
> track.
> {noformat}
> + local 
> 'PTEST_CLASSPATH=/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
> + java -cp 
> '/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
>  org.apache.hive.ptest.api.client.PTestClient --command testStart --outputDir 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target
>  --password '[***]' --testHandle PreCommit-HIVE-Build-8631 --endpoint 
> http://104.198.109.242:8080/hive-ptest-1.0 --logsEndpoint 
> http://104.198.109.242/logs/ --profile master-mr2 --patch 
> https://issues.apache.org/jira/secure/attachment/12906251/HIVE-18323.05.patch 
> --jira HIVE-18323
> Exception in thread "main" javax.net.ssl.SSLException: Received fatal alert: 
> protocol_version
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
>   at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:1979)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1086)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1301)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
>   at java.net.URL.openStream(URL.java:1041)
>   at 
> com.google.common.io.Resources$UrlByteSource.openStream(Resources.java:72)
>   at com.google.common.io.ByteSource.read(ByteSource.java:257)
>   at com.google.common.io.Resources.toByteArray(Resources.java:99)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.testStart(PTestClient.java:126)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.main(PTestClient.java:320)
> {noformat}
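A `protocol_version` fatal alert during the handshake typically means the client offered an older TLS version than the server now accepts (common for Java clients defaulting to TLSv1 after a server-side upgrade). One general JSSE-level workaround is to restrict the client to TLSv1.2 via the `https.protocols` system property; this is a standard JDK knob, not necessarily the fix applied in the HIVE-18461 patch:

```java
public class TlsWorkaround {
    // Force JSSE-based clients (URL.openStream, HttpsURLConnection) to offer
    // TLSv1.2; equivalent to passing -Dhttps.protocols=TLSv1.2 on the java
    // command line. Must run before the first HTTPS connection is opened.
    static void forceTls12() {
        System.setProperty("https.protocols", "TLSv1.2");
    }

    public static void main(String[] args) {
        forceTls12();
        System.out.println(System.getProperty("https.protocols")); // TLSv1.2
    }
}
```

Note that `https.protocols` only affects `HttpsURLConnection`-style clients; code using raw `SSLSocket` or a custom `SSLContext` needs the protocol set there instead.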



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18461) Fix precommit hive job

2018-01-16 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-18461:
---
Status: Patch Available  (was: Open)

Attaching the patch. The precommit job doesn't test the Ptest code anyway, so 
there is no point waiting for a broken precommit job to fail on this one ;)

I tested the patch locally and it works fine.

> Fix precommit hive job
> --
>
> Key: HIVE-18461
> URL: https://issues.apache.org/jira/browse/HIVE-18461
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Blocker
> Attachments: HIVE-18461.01.patch
>
>
> JIRA was upgraded over the weekend and the precommit job has been failing 
> since then. There are potentially two issues at play here. One is with the 
> precommit admin job, which automates the patch testing; I think YETUS-594 
> should fix that. But manual submission of Hive jobs is failing with the 
> exception below. We need this fix to get the automated testing back on 
> track.
> {noformat}
> + local 
> 'PTEST_CLASSPATH=/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
> + java -cp 
> '/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
>  org.apache.hive.ptest.api.client.PTestClient --command testStart --outputDir 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target
>  --password '[***]' --testHandle PreCommit-HIVE-Build-8631 --endpoint 
> http://104.198.109.242:8080/hive-ptest-1.0 --logsEndpoint 
> http://104.198.109.242/logs/ --profile master-mr2 --patch 
> https://issues.apache.org/jira/secure/attachment/12906251/HIVE-18323.05.patch 
> --jira HIVE-18323
> Exception in thread "main" javax.net.ssl.SSLException: Received fatal alert: 
> protocol_version
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
>   at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:1979)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1086)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1301)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
>   at java.net.URL.openStream(URL.java:1041)
>   at 
> com.google.common.io.Resources$UrlByteSource.openStream(Resources.java:72)
>   at com.google.common.io.ByteSource.read(ByteSource.java:257)
>   at com.google.common.io.Resources.toByteArray(Resources.java:99)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.testStart(PTestClient.java:126)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.main(PTestClient.java:320)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18461) Fix precommit hive job

2018-01-16 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-18461:
---
Attachment: HIVE-18461.01.patch

> Fix precommit hive job
> --
>
> Key: HIVE-18461
> URL: https://issues.apache.org/jira/browse/HIVE-18461
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Blocker
> Attachments: HIVE-18461.01.patch
>
>
> JIRA was upgraded over the weekend and the precommit job has been failing 
> since then. There are potentially two issues at play here. One is with the 
> precommit admin job, which automates the patch testing; I think YETUS-594 
> should fix it. But manual submission of Hive jobs is failing with the 
> exception below. We should get this fixed to get the automated testing back 
> on track.
> {noformat}
> + local 
> 'PTEST_CLASSPATH=/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
> + java -cp 
> '/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
>  org.apache.hive.ptest.api.client.PTestClient --command testStart --outputDir 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target
>  --password '[***]' --testHandle PreCommit-HIVE-Build-8631 --endpoint 
> http://104.198.109.242:8080/hive-ptest-1.0 --logsEndpoint 
> http://104.198.109.242/logs/ --profile master-mr2 --patch 
> https://issues.apache.org/jira/secure/attachment/12906251/HIVE-18323.05.patch 
> --jira HIVE-18323
> Exception in thread "main" javax.net.ssl.SSLException: Received fatal alert: 
> protocol_version
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
>   at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:1979)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1086)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1301)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
>   at java.net.URL.openStream(URL.java:1041)
>   at 
> com.google.common.io.Resources$UrlByteSource.openStream(Resources.java:72)
>   at com.google.common.io.ByteSource.read(ByteSource.java:257)
>   at com.google.common.io.Resources.toByteArray(Resources.java:99)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.testStart(PTestClient.java:126)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.main(PTestClient.java:320)
> {noformat}





[jira] [Commented] (HIVE-18461) Fix precommit hive job

2018-01-16 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328203#comment-16328203
 ] 

Vihang Karajgaonkar commented on HIVE-18461:


Okay, the issue is with the Java version. Since JIRA was upgraded, it no 
longer works with Java 7 clients: in Java 7 the default TLS version is 
TLSv1.0, which triggers the protocol_version error above, while Java 8 
defaults to TLSv1.2. Ideally the solution is to upgrade the pre-commit 
Jenkins job to use Java 8 instead of Java 7; I will create an INFRA ticket 
for that. Meanwhile I have a workaround which fixes this issue and will post 
a patch soon.
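As a rough sketch of the workaround direction (an assumption, not the actual patch posted here), a Java 7 client can opt in to TLSv1.2 explicitly, since the protocol is implemented in Java 7 but not enabled by default for client sockets:

```java
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;

public class Tls12Client {
    public static void main(String[] args) throws Exception {
        // On Java 7, TLSv1.2 is implemented but not enabled by default for
        // client sockets; selecting it explicitly avoids the server rejecting
        // the handshake with a protocol_version alert.
        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        ctx.init(null, null, null); // default key and trust managers
        HttpsURLConnection.setDefaultSSLSocketFactory(ctx.getSocketFactory());
        System.out.println(ctx.getProtocol());
    }
}
```

A lighter-weight alternative is to launch the client with `-Dhttps.protocols=TLSv1.2`, which HttpsURLConnection honors without code changes.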

> Fix precommit hive job
> --
>
> Key: HIVE-18461
> URL: https://issues.apache.org/jira/browse/HIVE-18461
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Blocker
>
> JIRA was upgraded over the weekend and the precommit job has been failing 
> since then. There are potentially two issues at play here. One is with the 
> precommit admin job, which automates the patch testing; I think YETUS-594 
> should fix it. But manual submission of Hive jobs is failing with the 
> exception below. We should get this fixed to get the automated testing back 
> on track.
> {noformat}
> + local 
> 'PTEST_CLASSPATH=/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
> + java -cp 
> '/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
>  org.apache.hive.ptest.api.client.PTestClient --command testStart --outputDir 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target
>  --password '[***]' --testHandle PreCommit-HIVE-Build-8631 --endpoint 
> http://104.198.109.242:8080/hive-ptest-1.0 --logsEndpoint 
> http://104.198.109.242/logs/ --profile master-mr2 --patch 
> https://issues.apache.org/jira/secure/attachment/12906251/HIVE-18323.05.patch 
> --jira HIVE-18323
> Exception in thread "main" javax.net.ssl.SSLException: Received fatal alert: 
> protocol_version
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
>   at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:1979)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1086)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1301)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
>   at java.net.URL.openStream(URL.java:1041)
>   at 
> com.google.common.io.Resources$UrlByteSource.openStream(Resources.java:72)
>   at com.google.common.io.ByteSource.read(ByteSource.java:257)
>   at com.google.common.io.Resources.toByteArray(Resources.java:99)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.testStart(PTestClient.java:126)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.main(PTestClient.java:320)
> {noformat}





[jira] [Assigned] (HIVE-18461) Fix precommit hive job

2018-01-16 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar reassigned HIVE-18461:
--

Assignee: Vihang Karajgaonkar

> Fix precommit hive job
> --
>
> Key: HIVE-18461
> URL: https://issues.apache.org/jira/browse/HIVE-18461
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Blocker
>
> JIRA was upgraded over the weekend and the precommit job has been failing 
> since then. There are potentially two issues at play here. One is with the 
> precommit admin job, which automates the patch testing; I think YETUS-594 
> should fix it. But manual submission of Hive jobs is failing with the 
> exception below. We should get this fixed to get the automated testing back 
> on track.
> {noformat}
> + local 
> 'PTEST_CLASSPATH=/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
> + java -cp 
> '/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
>  org.apache.hive.ptest.api.client.PTestClient --command testStart --outputDir 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target
>  --password '[***]' --testHandle PreCommit-HIVE-Build-8631 --endpoint 
> http://104.198.109.242:8080/hive-ptest-1.0 --logsEndpoint 
> http://104.198.109.242/logs/ --profile master-mr2 --patch 
> https://issues.apache.org/jira/secure/attachment/12906251/HIVE-18323.05.patch 
> --jira HIVE-18323
> Exception in thread "main" javax.net.ssl.SSLException: Received fatal alert: 
> protocol_version
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
>   at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:1979)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1086)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1301)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
>   at java.net.URL.openStream(URL.java:1041)
>   at 
> com.google.common.io.Resources$UrlByteSource.openStream(Resources.java:72)
>   at com.google.common.io.ByteSource.read(ByteSource.java:257)
>   at com.google.common.io.Resources.toByteArray(Resources.java:99)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.testStart(PTestClient.java:126)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.main(PTestClient.java:320)
> {noformat}





[jira] [Work started] (HIVE-17915) Enable VectorizedOrcAcidRowBatchReader to be used with LLAP IO elevator over original acid files

2018-01-16 Thread Teddy Choi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-17915 started by Teddy Choi.
-
> Enable VectorizedOrcAcidRowBatchReader to be used with LLAP IO elevator over 
> original acid files
> 
>
> Key: HIVE-17915
> URL: https://issues.apache.org/jira/browse/HIVE-17915
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Teddy Choi
>Priority: Critical
>
> Since HIVE-12631, LLAP IO can support Acid tables, but not when reading 
> "original" files.
> HIVE-17458 enables VectorizedOrcAcidRowBatchReader to vectorize reads over 
> "original" files, but not with LLAP IO.
> The current implementation of _OrcSplit.canUseLlapIo()_ is the same as in 
> HIVE-12631.
> This can and should be improved. There are two parts to this:
> 1. When a read of an "original" file is performed such that the data doesn't 
> need to be decorated with ROW__ID (see 
> _VectorizedOrcAcidRowBatchReader.canUseLlapForAcid()_), 
> VectorizedOrcAcidRowBatchReader as of HIVE-17458 should be usable with LLAP 
> IO, but when I tried it I got _ArrayIndexOutOfBoundsException_ in various 
> places in the stack. This is the more important one.
> 2. Reading "original" acid files (when ROW__IDs are needed) requires using 
> _org.apache.hadoop.hive.ql.io.orc.RecordReader.getRowNumber()_ in 
> _VectorizedOrcAcidRowBatchReader_. This API is not available on the reader 
> that _LlapRecordReader_ provides. It would be better if getRowNumber() were 
> available, for performance as well as simpler logic in the code.
> cc [~sershe], [~teddy.choi]





[jira] [Commented] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet

2018-01-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328194#comment-16328194
 ] 

Hive QA commented on HIVE-18323:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 78d5572 |
| Default Java | 1.8.0_111 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8632/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Vectorization: add the support of timestamp in 
> VectorizedPrimitiveColumnReader for parquet
> --
>
> Key: HIVE-18323
> URL: https://issues.apache.org/jira/browse/HIVE-18323
> Project: Hive
>  Issue Type: Sub-task
>  Components: Vectorization
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, 
> HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, 
> HIVE-18323.1.patch
>
>
> {noformat}
> CREATE TABLE `t1`(
>   `ts` timestamp,
>   `s1` string)
> STORED AS PARQUET;
> set hive.vectorized.execution.enabled=true;
> SELECT * from t1 SORT BY s1;
> {noformat}
> This query will throw an exception since timestamp is not supported here yet.
> {noformat}
> Caused by: java.io.IOException: java.io.IOException: Unsupported type: 
> optional int96 ts
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116)
> {noformat}





[jira] [Assigned] (HIVE-18438) WM RP: it's impossible to unset things

2018-01-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-18438:
---

Assignee: Sergey Shelukhin

> WM RP: it's impossible to unset things
> --
>
> Key: HIVE-18438
> URL: https://issues.apache.org/jira/browse/HIVE-18438
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>
> It should be possible to unset the default pool and query parallelism for an 
> RP, and also the scheduling policy for a pool, although that does have a 
> magic value 'default'.





[jira] [Commented] (HIVE-18457) improve show plan output (triggers, mappings)

2018-01-16 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328174#comment-16328174
 ] 

Sergey Shelukhin commented on HIVE-18457:
-

Updated the output.

I'm not sure why it doesn't look like the flag is updated... it seems to work 
fine in Derby. Are there any errors from alter? We might need to investigate 
why DN won't update the table if this reproduces.

> improve show plan output (triggers, mappings)
> -
>
> Key: HIVE-18457
> URL: https://issues.apache.org/jira/browse/HIVE-18457
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18457.patch
>
>
> I ran the following sequence to add triggers to UNMANAGED. I can see the 
> triggers added to the metastore, but the IS_IN_UNMANAGED flag is not set in 
> the metastore. Also, show resource plans does not show triggers in the 
> unmanaged pool.
> {code}
> 0: jdbc:hive2://localhost:1> show resource plans;
> +--+--++
> | rp_name  |  status  | query_parallelism  |
> +--+--++
> | global   | ACTIVE   | NULL   |
> | llap | ENABLED  | NULL   |
> +--+--++
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN llap ACTIVATE;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global DISABLE;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.highly_parallel WHEN 
> TOTAL_TASKS > 40 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.highly_parallel ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.big_hdfs_read WHEN 
> HDFS_BYTES_READ > 30 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.big_hdfs_read ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.slow_query WHEN 
> EXECUTION_TIME > 10 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.slow_query ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.some_spills WHEN 
> SPILLED_RECORDS > 10 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.some_spills ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global ENABLE;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global ACTIVATE;
> 0: jdbc:hive2://localhost:1> show resource plan global;
> ++
> |line|
> ++
> | global[status=ACTIVE,parallelism=null,defaultPool=default] |
> | default[allocFraction=1.0,schedulingPolicy=null,parallelism=4] |
> ++
> {code}
> {code:title=mysql}
> mysql> select * from wm_trigger;
> ++---+-+--+---+-+
> | TRIGGER_ID | RP_ID | NAME| TRIGGER_EXPRESSION   | 
> ACTION_EXPRESSION | IS_IN_UNMANAGED |
> ++---+-+--+---+-+
> | 29 | 1 | highly_parallel | TOTAL_TASKS > 40 | KILL  
> ||
> | 33 | 1 | big_hdfs_read   | HDFS_BYTES_READ > 30 | KILL  
> ||
> | 34 | 1 | slow_query  | EXECUTION_TIME > 10  | KILL  
> ||
> | 35 | 1 | some_spills | SPILLED_RECORDS > 10 | KILL  
> ||
> ++---+-+--+---+-+
> {code}
> From the above mysql table, IS_IN_UNMANAGED is not set, and 'show resource 
> plan global' does not show the triggers defined in the unmanaged pool.





[jira] [Updated] (HIVE-18457) improve show plan output (triggers, mappings)

2018-01-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18457:

Status: Patch Available  (was: Open)

> improve show plan output (triggers, mappings)
> -
>
> Key: HIVE-18457
> URL: https://issues.apache.org/jira/browse/HIVE-18457
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18457.patch
>
>
> I ran the following sequence to add triggers to UNMANAGED. I can see the 
> triggers added to the metastore, but the IS_IN_UNMANAGED flag is not set in 
> the metastore. Also, show resource plans does not show triggers in the 
> unmanaged pool.
> {code}
> 0: jdbc:hive2://localhost:1> show resource plans;
> +--+--++
> | rp_name  |  status  | query_parallelism  |
> +--+--++
> | global   | ACTIVE   | NULL   |
> | llap | ENABLED  | NULL   |
> +--+--++
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN llap ACTIVATE;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global DISABLE;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.highly_parallel WHEN 
> TOTAL_TASKS > 40 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.highly_parallel ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.big_hdfs_read WHEN 
> HDFS_BYTES_READ > 30 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.big_hdfs_read ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.slow_query WHEN 
> EXECUTION_TIME > 10 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.slow_query ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.some_spills WHEN 
> SPILLED_RECORDS > 10 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.some_spills ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global ENABLE;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global ACTIVATE;
> 0: jdbc:hive2://localhost:1> show resource plan global;
> ++
> |line|
> ++
> | global[status=ACTIVE,parallelism=null,defaultPool=default] |
> | default[allocFraction=1.0,schedulingPolicy=null,parallelism=4] |
> ++
> {code}
> {code:title=mysql}
> mysql> select * from wm_trigger;
> ++---+-+--+---+-+
> | TRIGGER_ID | RP_ID | NAME| TRIGGER_EXPRESSION   | 
> ACTION_EXPRESSION | IS_IN_UNMANAGED |
> ++---+-+--+---+-+
> | 29 | 1 | highly_parallel | TOTAL_TASKS > 40 | KILL  
> ||
> | 33 | 1 | big_hdfs_read   | HDFS_BYTES_READ > 30 | KILL  
> ||
> | 34 | 1 | slow_query  | EXECUTION_TIME > 10  | KILL  
> ||
> | 35 | 1 | some_spills | SPILLED_RECORDS > 10 | KILL  
> ||
> ++---+-+--+---+-+
> {code}
> From the above mysql table, IS_IN_UNMANAGED is not set, and 'show resource 
> plan global' does not show the triggers defined in the unmanaged pool.





[jira] [Updated] (HIVE-18457) improve show plan output (triggers, mappings)

2018-01-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18457:

Attachment: HIVE-18457.patch

> improve show plan output (triggers, mappings)
> -
>
> Key: HIVE-18457
> URL: https://issues.apache.org/jira/browse/HIVE-18457
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18457.patch
>
>
> I ran the following sequence to add triggers to UNMANAGED. I can see the 
> triggers added to the metastore, but the IS_IN_UNMANAGED flag is not set in 
> the metastore. Also, show resource plans does not show triggers in the 
> unmanaged pool.
> {code}
> 0: jdbc:hive2://localhost:1> show resource plans;
> +--+--++
> | rp_name  |  status  | query_parallelism  |
> +--+--++
> | global   | ACTIVE   | NULL   |
> | llap | ENABLED  | NULL   |
> +--+--++
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN llap ACTIVATE;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global DISABLE;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.highly_parallel WHEN 
> TOTAL_TASKS > 40 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.highly_parallel ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.big_hdfs_read WHEN 
> HDFS_BYTES_READ > 30 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.big_hdfs_read ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.slow_query WHEN 
> EXECUTION_TIME > 10 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.slow_query ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.some_spills WHEN 
> SPILLED_RECORDS > 10 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.some_spills ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global ENABLE;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global ACTIVATE;
> 0: jdbc:hive2://localhost:1> show resource plan global;
> ++
> |line|
> ++
> | global[status=ACTIVE,parallelism=null,defaultPool=default] |
> | default[allocFraction=1.0,schedulingPolicy=null,parallelism=4] |
> ++
> {code}
> {code:title=mysql}
> mysql> select * from wm_trigger;
> ++---+-+--+---+-+
> | TRIGGER_ID | RP_ID | NAME| TRIGGER_EXPRESSION   | 
> ACTION_EXPRESSION | IS_IN_UNMANAGED |
> ++---+-+--+---+-+
> | 29 | 1 | highly_parallel | TOTAL_TASKS > 40 | KILL  
> ||
> | 33 | 1 | big_hdfs_read   | HDFS_BYTES_READ > 30 | KILL  
> ||
> | 34 | 1 | slow_query  | EXECUTION_TIME > 10  | KILL  
> ||
> | 35 | 1 | some_spills | SPILLED_RECORDS > 10 | KILL  
> ||
> ++---+-+--+---+-+
> {code}
> From the above mysql table, IS_IN_UNMANAGED is not set, and 'show resource 
> plan global' does not show the triggers defined in the unmanaged pool.





[jira] [Updated] (HIVE-18323) Vectorization: add the support of timestamp in VectorizedPrimitiveColumnReader for parquet

2018-01-16 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-18323:
---
Attachment: HIVE-18323.06.patch

> Vectorization: add the support of timestamp in 
> VectorizedPrimitiveColumnReader for parquet
> --
>
> Key: HIVE-18323
> URL: https://issues.apache.org/jira/browse/HIVE-18323
> Project: Hive
>  Issue Type: Sub-task
>  Components: Vectorization
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Vihang Karajgaonkar
>Priority: Major
> Attachments: HIVE-18323.02.patch, HIVE-18323.03.patch, 
> HIVE-18323.04.patch, HIVE-18323.05.patch, HIVE-18323.06.patch, 
> HIVE-18323.1.patch
>
>
> {noformat}
> CREATE TABLE `t1`(
>   `ts` timestamp,
>   `s1` string)
> STORED AS PARQUET;
> set hive.vectorized.execution.enabled=true;
> SELECT * from t1 SORT BY s1;
> {noformat}
> This query will throw an exception since timestamp is not supported here yet.
> {noformat}
> Caused by: java.io.IOException: java.io.IOException: Unsupported type: 
> optional int96 ts
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:365)
> at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:116)
> {noformat}





[jira] [Commented] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer

2018-01-16 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328109#comment-16328109
 ] 

Eugene Koifman commented on HIVE-18460:
---

Compactor uses StringableMap to pass a HashMap as a string. I don't remember 
the details of the mechanics.

> Compactor doesn't pass Table properties to the Orc writer
> -
>
> Key: HIVE-18460
> URL: https://issues.apache.org/jira/browse/HIVE-18460
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.13
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-18460.01.patch, HIVE-18460.02.patch
>
>
>  
>  CompactorMap.getWrite()/getDeleteEventWriter() both do 
> AcidOutputFormat.Options.tableProperties() but
> OrcOutputFormat.getRawRecordWriter() does
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getConfiguration());
> {noformat}
> which ignores tableProperties value.
> It should do 
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getTableProperties(), 
> options.getConfiguration());
> {noformat}





[jira] [Updated] (HIVE-17952) Fix license headers to avoid dangling javadoc warnings

2018-01-16 Thread Andrew Sherman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Sherman updated HIVE-17952:
--
Attachment: HIVE-17952.2.patch

> Fix license headers to avoid dangling javadoc warnings
> --
>
> Key: HIVE-17952
> URL: https://issues.apache.org/jira/browse/HIVE-17952
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Andrew Sherman
>Priority: Trivial
> Attachments: HIVE-17952.1.patch, HIVE-17952.2.patch
>
>
> All license headers starts with "/**" which are assumed to be javadocs and 
> IDE warns about dangling javadoc pointing to license headers.





[jira] [Commented] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer

2018-01-16 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328101#comment-16328101
 ] 

Prasanth Jayachandran commented on HIVE-18460:
--

.2 patch looks good to me. +1, pending tests.

bq. 4 is the length of the value of the property
Why is this required when we already have ':' as the value separator?
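One plausible rationale (an assumption about the design, not something confirmed in this thread) is that a bare ':' separator becomes ambiguous as soon as a key or value itself contains ':'; a length prefix lets the parser take exactly that many characters with no escaping. A minimal sketch of the idea, not Hive's actual StringableMap implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LengthPrefixedMap {
    // Encode each entry as <keyLen>:<key><valLen>:<value>, so strings that
    // contain ':' round-trip without escaping (hypothetical sketch only).
    static String encode(Map<String, String> m) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : m.entrySet()) {
            sb.append(e.getKey().length()).append(':').append(e.getKey());
            sb.append(e.getValue().length()).append(':').append(e.getValue());
        }
        return sb.toString();
    }

    static Map<String, String> decode(String s) {
        Map<String, String> m = new LinkedHashMap<>();
        int i = 0;
        while (i < s.length()) {
            // Read the key: length, ':', then exactly that many characters.
            int sep = s.indexOf(':', i);
            int klen = Integer.parseInt(s.substring(i, sep));
            String key = s.substring(sep + 1, sep + 1 + klen);
            i = sep + 1 + klen;
            // Read the value the same way.
            sep = s.indexOf(':', i);
            int vlen = Integer.parseInt(s.substring(i, sep));
            String val = s.substring(sep + 1, sep + 1 + vlen);
            i = sep + 1 + vlen;
            m.put(key, val);
        }
        return m;
    }

    public static void main(String[] args) {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("orc.compress", "ZLIB");
        m.put("note", "a:b"); // value contains the separator character
        System.out.println(decode(encode(m)).equals(m));
    }
}
```

Without the length prefix, the decoder could not tell whether the ':' in "a:b" separates fields or is part of the value.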

> Compactor doesn't pass Table properties to the Orc writer
> -
>
> Key: HIVE-18460
> URL: https://issues.apache.org/jira/browse/HIVE-18460
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.13
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-18460.01.patch, HIVE-18460.02.patch
>
>
>  
>  CompactorMap.getWrite()/getDeleteEventWriter() both do 
> AcidOutputFormat.Options.tableProperties() but
> OrcOutputFormat.getRawRecordWriter() does
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getConfiguration());
> {noformat}
> which ignores the tableProperties value.
> It should instead do:
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getTableProperties(), 
> options.getConfiguration());
> {noformat}





[jira] [Resolved] (HIVE-18435) output mappings summary in plan description

2018-01-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-18435.
-
Resolution: Implemented

Will include in HIVE-18457

> output mappings summary in plan description
> ---
>
> Key: HIVE-18435
> URL: https://issues.apache.org/jira/browse/HIVE-18435
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>






[jira] [Updated] (HIVE-18457) improve show plan output (triggers, mappings)

2018-01-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18457:

Summary: improve show plan output (triggers, mappings)  (was: Triggers in 
unmanaged pools are not shown)

> improve show plan output (triggers, mappings)
> -
>
> Key: HIVE-18457
> URL: https://issues.apache.org/jira/browse/HIVE-18457
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Sergey Shelukhin
>Priority: Major
>
> Did the following sequence to add triggers to UNMANAGED. I can see the
> triggers added to the metastore, but the IS_IN_UNMANAGED flag is not set in
> the metastore. Also, show resource plans does not show triggers in the
> unmanaged pool.
> {code}
> 0: jdbc:hive2://localhost:1> show resource plans;
> +--+--++
> | rp_name  |  status  | query_parallelism  |
> +--+--++
> | global   | ACTIVE   | NULL   |
> | llap | ENABLED  | NULL   |
> +--+--++
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN llap ACTIVATE;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global DISABLE;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.highly_parallel WHEN 
> TOTAL_TASKS > 40 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.highly_parallel ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.big_hdfs_read WHEN 
> HDFS_BYTES_READ > 30 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.big_hdfs_read ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.slow_query WHEN 
> EXECUTION_TIME > 10 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.slow_query ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.some_spills WHEN 
> SPILLED_RECORDS > 10 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.some_spills ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global ENABLE;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global ACTIVATE;
> 0: jdbc:hive2://localhost:1> show resource plan global;
> ++
> |line|
> ++
> | global[status=ACTIVE,parallelism=null,defaultPool=default] |
> | default[allocFraction=1.0,schedulingPolicy=null,parallelism=4] |
> ++
> {code}
> {code:title=mysql}
> mysql> select * from wm_trigger;
> ++---+-+--+---+-+
> | TRIGGER_ID | RP_ID | NAME| TRIGGER_EXPRESSION   | 
> ACTION_EXPRESSION | IS_IN_UNMANAGED |
> ++---+-+--+---+-+
> | 29 | 1 | highly_parallel | TOTAL_TASKS > 40 | KILL  
> ||
> | 33 | 1 | big_hdfs_read   | HDFS_BYTES_READ > 30 | KILL  
> ||
> | 34 | 1 | slow_query  | EXECUTION_TIME > 10  | KILL  
> ||
> | 35 | 1 | some_spills | SPILLED_RECORDS > 10 | KILL  
> ||
> ++---+-+--+---+-+
> {code}
> From the above mysql table, IS_IN_UNMANAGED is not set, and 'show resource
> plan global' does not show the triggers defined in the unmanaged pool.





[jira] [Updated] (HIVE-18350) load data should rename files consistent with insert statements

2018-01-16 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-18350:
--
Description: 
Insert statements create files with names ending in _0, 0001_0, etc.
However, load data uses the input file name. That results in an inconsistent
naming convention, which makes SMB joins difficult in some scenarios and may
cause trouble for other types of queries in the future.

We need a consistent naming convention.

For non-bucketed tables, Hive renames all the files regardless of how they
were named by the user.
 For bucketed tables, in non-strict mode Hive relies on the user to name the
files to match the bucket; Hive assumes that all the data in a file belongs to
the same bucket. In strict mode, loading bucketed tables is disabled.

This will likely affect most of the tests that load data, which is significant
enough that the work is further divided into two subtasks for a smoother
merge.

For existing tables in a customer database, it is recommended to reload
bucketed tables; otherwise, if the customer runs an SMB join and there is a
bucket for which there is no split, there is a possibility of incorrect
results. However, this is not a regression, as it would happen even without
the patch.
With this patch, after reloading the data, the results should be correct.

For non-bucketed tables and external tables, there is no difference in
behavior, and reloading data is not needed.

  was:
Insert statements create files of format ending with _0, 0001_0 etc. 
However, the load data uses the input file name. That results in inconsistent 
naming convention which makes SMB joins difficult in some scenarios and may 
cause trouble for other types of queries in future.

We need consistent naming convention.


For non-bucketed table, hive renames all the files regardless of how they were 
named by the user.
For bucketed table, hive relies on user to name the files matching the bucket 
in non-strict mode. Hive assumes that the data belongs to same bucket in a 
file. In strict mode, loading bucketed table is disabled.

This will likely affect most of the tests which load data which is pretty 
significant due to which it is further divided into two subtasks for smoother 
merge.


> load data should rename files consistent with insert statements
> ---
>
> Key: HIVE-18350
> URL: https://issues.apache.org/jira/browse/HIVE-18350
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18350.1.patch, HIVE-18350.2.patch, 
> HIVE-18350.3.patch, HIVE-18350.4.patch
>
>
> Insert statements create files with names ending in _0, 0001_0, etc.
> However, load data uses the input file name. That results in an inconsistent
> naming convention, which makes SMB joins difficult in some scenarios and may
> cause trouble for other types of queries in the future.
> We need a consistent naming convention.
> For non-bucketed tables, Hive renames all the files regardless of how they
> were named by the user.
> For bucketed tables, in non-strict mode Hive relies on the user to name the
> files to match the bucket; Hive assumes that all the data in a file belongs
> to the same bucket. In strict mode, loading bucketed tables is disabled.
> This will likely affect most of the tests that load data, which is
> significant enough that the work is further divided into two subtasks for a
> smoother merge.
> For existing tables in a customer database, it is recommended to reload
> bucketed tables; otherwise, if the customer runs an SMB join and there is a
> bucket for which there is no split, there is a possibility of incorrect
> results. However, this is not a regression, as it would happen even without
> the patch.
> With this patch, after reloading the data, the results should be correct.
> For non-bucketed tables and external tables, there is no difference in
> behavior, and reloading data is not needed.





[jira] [Updated] (HIVE-18463) Typo in the JdbcConnectionParams constructor

2018-01-16 Thread Oleg Danilov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Danilov updated HIVE-18463:

Attachment: HIVE-18463.patch

> Typo in the JdbcConnectionParams constructor
> 
>
> Key: HIVE-18463
> URL: https://issues.apache.org/jira/browse/HIVE-18463
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleg Danilov
>Priority: Trivial
> Attachments: HIVE-18463.patch
>
>
> Seems like the last one should be params.rejectedHostZnodePaths as well:
> *jdbc/src/java/org/apache/hive/jdbc/Utils.java:*
> {code:java}
> public JdbcConnectionParams(JdbcConnectionParams params) {
> this.host = params.host;
> this.port = params.port;
> ...
> this.rejectedHostZnodePaths.addAll(rejectedHostZnodePaths);
> }
> {code}
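For context, the failure mode is a classic copy-constructor slip: the unqualified field name resolves to the destination object's own (empty) list, so the addAll is a no-op self-copy. A minimal self-contained sketch of the corrected pattern (a hypothetical class, not the actual Utils.java):

```java
import java.util.ArrayList;
import java.util.List;

class ConnParams {
    final List<String> rejectedHostZnodePaths = new ArrayList<>();

    ConnParams() { }

    // Copy constructor: the source must be qualified as
    // params.rejectedHostZnodePaths. An unqualified rejectedHostZnodePaths
    // would refer to this.rejectedHostZnodePaths, silently copying the
    // destination's own empty list into itself.
    ConnParams(ConnParams params) {
        this.rejectedHostZnodePaths.addAll(params.rejectedHostZnodePaths);
    }
}
```

The bug compiles cleanly because both readings are valid, which is why it survives review so easily.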





[jira] [Updated] (HIVE-18463) Typo in the JdbcConnectionParams constructor

2018-01-16 Thread Oleg Danilov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Danilov updated HIVE-18463:

Status: Patch Available  (was: Open)

> Typo in the JdbcConnectionParams constructor
> 
>
> Key: HIVE-18463
> URL: https://issues.apache.org/jira/browse/HIVE-18463
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleg Danilov
>Priority: Trivial
> Attachments: HIVE-18463.patch
>
>
> Seems like the last one should be params.rejectedHostZnodePaths as well:
> *jdbc/src/java/org/apache/hive/jdbc/Utils.java:*
> {code:java}
> public JdbcConnectionParams(JdbcConnectionParams params) {
> this.host = params.host;
> this.port = params.port;
> ...
> this.rejectedHostZnodePaths.addAll(rejectedHostZnodePaths);
> }
> {code}





[jira] [Commented] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer

2018-01-16 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328076#comment-16328076
 ] 

Eugene Koifman commented on HIVE-18460:
---

patch 2 has the additional test

> Compactor doesn't pass Table properties to the Orc writer
> -
>
> Key: HIVE-18460
> URL: https://issues.apache.org/jira/browse/HIVE-18460
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.13
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-18460.01.patch, HIVE-18460.02.patch
>
>
>  
>  CompactorMap.getWrite()/getDeleteEventWriter() both do 
> AcidOutputFormat.Options.tableProperties() but
> OrcOutputFormat.getRawRecordWriter() does
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getConfiguration());
> {noformat}
> which ignores the tableProperties value.
> It should instead do:
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getTableProperties(), 
> options.getConfiguration());
> {noformat}





[jira] [Updated] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer

2018-01-16 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18460:
--
Attachment: HIVE-18460.02.patch

> Compactor doesn't pass Table properties to the Orc writer
> -
>
> Key: HIVE-18460
> URL: https://issues.apache.org/jira/browse/HIVE-18460
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.13
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-18460.01.patch, HIVE-18460.02.patch
>
>
>  
>  CompactorMap.getWrite()/getDeleteEventWriter() both do 
> AcidOutputFormat.Options.tableProperties() but
> OrcOutputFormat.getRawRecordWriter() does
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getConfiguration());
> {noformat}
> which ignores the tableProperties value.
> It should instead do:
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getTableProperties(), 
> options.getConfiguration());
> {noformat}





[jira] [Updated] (HIVE-18463) Typo in the JdbcConnectionParams constructor

2018-01-16 Thread Oleg Danilov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Danilov updated HIVE-18463:

Description: 
Seems like the last one should be params.rejectedHostZnodePaths as well:

*jdbc/src/java/org/apache/hive/jdbc/Utils.java:*
{code:java}
public JdbcConnectionParams(JdbcConnectionParams params) {
this.host = params.host;
this.port = params.port;
...
this.rejectedHostZnodePaths.addAll(rejectedHostZnodePaths);
}

{code}

  was:
Seems like the last one should be params.rejectedHostZnodePaths as well:

 
{code:java}
public JdbcConnectionParams(JdbcConnectionParams params) {
this.host = params.host;
this.port = params.port;
...
this.rejectedHostZnodePaths.addAll(rejectedHostZnodePaths);
}

{code}


> Typo in the JdbcConnectionParams constructor
> 
>
> Key: HIVE-18463
> URL: https://issues.apache.org/jira/browse/HIVE-18463
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleg Danilov
>Priority: Trivial
>
> Seems like the last one should be params.rejectedHostZnodePaths as well:
> *jdbc/src/java/org/apache/hive/jdbc/Utils.java:*
> {code:java}
> public JdbcConnectionParams(JdbcConnectionParams params) {
> this.host = params.host;
> this.port = params.port;
> ...
> this.rejectedHostZnodePaths.addAll(rejectedHostZnodePaths);
> }
> {code}





[jira] [Commented] (HIVE-18398) WITH SERDEPROPERTIES option is broken without org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe

2018-01-16 Thread Jyoti (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328070#comment-16328070
 ] 

Jyoti commented on HIVE-18398:
--

We are also hitting the same issue on Hive 1.2.2

Repro steps:

1. Create a table like below:

 
{code:java}
CREATE external TABLE `sample5`(`id1` int,`id2` int) ROW FORMAT DELIMITED 
FIELDS TERMINATED BY ',' ESCAPED BY '\\'
LOCATION 
'hdfs://localhost:9000/user/hive/warehouse/sample';
{code}
2. Get the table schema

 

 
{code:java}
show create table sample5
{code}
Result: 

 

 
{code:java}
+-+--+
| createtab_stmt |
+-+--+
| CREATE EXTERNAL TABLE `sample5`( |
| `id1` int, |
| `id2` int) |
| ROW FORMAT DELIMITED |
| FIELDS TERMINATED BY ',' |
| WITH SERDEPROPERTIES ( |
| 'escape.delim'='\\') |
| STORED AS INPUTFORMAT |
| 'org.apache.hadoop.mapred.TextInputFormat' |
| OUTPUTFORMAT |
| 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' |
| LOCATION |
| 'hdfs://localhost:9000/user/hive/warehouse/sample' |
| TBLPROPERTIES ( |
| 'COLUMN_STATS_ACCURATE'='false', |
| 'numFiles'='0', |
| 'numRows'='-1', |
| 'rawDataSize'='-1', |
| 'totalSize'='0', |
| 'transient_lastDdlTime'='1515808983') |
+-+--+
{code}
Running the above create table statement throws the same exception described
above.

Some more observations:
 # The issue doesn't happen in Hive 2.3.
 # However, the show create table command outputs the result below:

{code:java}
++
| createtab_stmt |
++
| CREATE EXTERNAL TABLE `sample5`( |
| `id1` int, |
| `id2` int) |
| ROW FORMAT SERDE |
| 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' |
| WITH SERDEPROPERTIES ( |
| 'escape.delim'='\\', |
| 'field.delim'=',', |
| 'serialization.format'=',') |
| STORED AS INPUTFORMAT |
| 'org.apache.hadoop.mapred.TextInputFormat' |
| OUTPUTFORMAT |
| 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' |
| LOCATION |
| 'hdfs://localhost:9000/user/hive/warehouse/sample' |
| TBLPROPERTIES ( |
| 'transient_lastDdlTime'='1515807845') |
++
{code}

 

 

> WITH SERDEPROPERTIES option is broken without 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
> 
>
> Key: HIVE-18398
> URL: https://issues.apache.org/jira/browse/HIVE-18398
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1
>Reporter: Rajkumar Singh
>Priority: Minor
>
> *Steps to reproduce:*
> 1. Create table 
> {code}
> create table test_serde(id int,value string) ROW FORMAT DELIMITED FIELDS 
> TERMINATED BY '|' ESCAPED BY '\\' 
> {code}
> 2. show create table produce following output
> {code}
> CREATE TABLE `test_serde`(
>   `id` int, 
>   `value` string)
> ROW FORMAT DELIMITED 
>   FIELDS TERMINATED BY '|' 
> WITH SERDEPROPERTIES ( 
>   'escape.delim'='\\') 
> STORED AS INPUTFORMAT 
>   'org.apache.hadoop.mapred.TextInputFormat' 
> OUTPUTFORMAT 
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
> LOCATION
>   'hdfs://hdp262a.hdp.local:8020/apps/hive/warehouse/test_serde'
> TBLPROPERTIES (
>   'COLUMN_STATS_ACCURATE'='{\"BASIC_STATS\":\"true\"}', 
>   'numFiles'='0', 
>   'numRows'='0', 
>   'rawDataSize'='0', 
>   'totalSize'='0', 
>   'transient_lastDdlTime'='1515448894')
> {code}
> 3. Running the create table statement using the output of show create table
> fails with a parsing error:
> {code}
> NoViableAltException(296@[1876:103: ( tableRowFormatMapKeysIdentifier )?])
>   at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
>   at org.antlr.runtime.DFA.predict(DFA.java:116)
>   .
> FAILED: ParseException line 6:0 cannot recognize input near 'WITH' 
> 'SERDEPROPERTIES' '(' in serde properties specification
> {code}
> 4. A table created with LazySimpleSerDe does not have this issue.
> {code}
> hive> CREATE TABLE `foo`( 
> > `col` string) 
> > ROW FORMAT SERDE 
> > 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' 
> > WITH SERDEPROPERTIES ( 
> > 'serialization.encoding'='UTF-8') 
> > STORED AS INPUTFORMAT 
> > 'org.apache.hadoop.mapred.TextInputFormat' 
> > OUTPUTFORMAT 
> > 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' ;
> OK
> Time taken: 0.375 seconds
> {code}





[jira] [Updated] (HIVE-18452) work around HADOOP-15171

2018-01-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18452:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the review!

> work around HADOOP-15171
> 
>
> Key: HIVE-18452
> URL: https://issues.apache.org/jira/browse/HIVE-18452
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18452.patch
>
>






[jira] [Commented] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer

2018-01-16 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328044#comment-16328044
 ] 

Eugene Koifman commented on HIVE-18460:
---

"size4" is not a typo - 4 is the length of the value of the property. This is
how the table properties (a Properties object) are encoded in the Configuration.
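To illustrate why a length prefix is still needed when ':' is the separator, here is a hypothetical length-prefixed codec (names and format are made up for illustration; this is not the actual Hive serialization code). Without the length field, a value that itself contains ':' could not be decoded unambiguously.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class PropsCodec {
    // Encode each property as key:<valueLength>:<value>, e.g. size:4:1234.
    static String encode(Map<String, String> props) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : props.entrySet()) {
            sb.append(e.getKey()).append(':')
              .append(e.getValue().length()).append(':')
              .append(e.getValue());
        }
        return sb.toString();
    }

    // Decode by reading the key up to ':', the length up to the next ':',
    // then exactly <length> characters of value. The length makes values
    // containing ':' unambiguous.
    static Map<String, String> decode(String s) {
        Map<String, String> out = new LinkedHashMap<>();
        int i = 0;
        while (i < s.length()) {
            int c1 = s.indexOf(':', i);        // end of key
            int c2 = s.indexOf(':', c1 + 1);   // end of length field
            int len = Integer.parseInt(s.substring(c1 + 1, c2));
            out.put(s.substring(i, c1), s.substring(c2 + 1, c2 + 1 + len));
            i = c2 + 1 + len;
        }
        return out;
    }
}
```

With the length field, a value such as "a:b" round-trips correctly even though it contains the separator.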

> Compactor doesn't pass Table properties to the Orc writer
> -
>
> Key: HIVE-18460
> URL: https://issues.apache.org/jira/browse/HIVE-18460
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.13
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-18460.01.patch
>
>
>  
>  CompactorMap.getWrite()/getDeleteEventWriter() both do 
> AcidOutputFormat.Options.tableProperties() but
> OrcOutputFormat.getRawRecordWriter() does
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getConfiguration());
> {noformat}
> which ignores tableProperties value.
> It should do 
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getTableProperties(), 
> options.getConfiguration());
> {noformat}





[jira] [Updated] (HIVE-18462) Explain formatted for queries with map join has columnExprMap with unformatted column name

2018-01-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18462:
---
Attachment: HIVE-18462.1.patch

> Explain formatted for queries with map join has columnExprMap with 
> unformatted column name
> --
>
> Key: HIVE-18462
> URL: https://issues.apache.org/jira/browse/HIVE-18462
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18462.1.patch
>
>
> e.g.
> {code:sql}
> "columnExprMap:":{  
>   "_col0":"0:Column[_col0]",
>   "_col1":"0:Column[_col1]",
>   "_col2":"1:Column[_col0]",
>   "_col3":"1:Column[_col1]"
>   }
> {code}
> It is better formatted as:
> {code:sql}
> "columnExprMap:":{  
>  "_col0":"0:_col0",
>  "_col1":"0:_col1",
>  "_col2":"1:_col0",
>  "_col3":"1:_col1"
>  }
> {code}
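The requested change amounts to collapsing "N:Column[name]" down to "N:name". A hypothetical sketch of that string transform (illustrative only, not the actual explain-formatter code):

```java
class ExprMapFormat {
    // Rewrites "0:Column[_col0]" as "0:_col0"; strings without the
    // Column[...] wrapper are returned unchanged.
    static String format(String v) {
        return v.replaceAll("Column\\[([^\\]]*)\\]", "$1");
    }
}
```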





[jira] [Commented] (HIVE-17495) CachedStore: prewarm improvement (avoid multiple sql calls to read partition column stats), refactoring and caching some aggregate stats

2018-01-16 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328036#comment-16328036
 ] 

Vaibhav Gumashta commented on HIVE-17495:
-

Rebased on master due to HIVE-17982

> CachedStore: prewarm improvement (avoid multiple sql calls to read partition 
> column stats), refactoring and caching some aggregate stats
> 
>
> Key: HIVE-17495
> URL: https://issues.apache.org/jira/browse/HIVE-17495
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-17495.1.patch, HIVE-17495.10.patch, 
> HIVE-17495.2.patch, HIVE-17495.3.patch, HIVE-17495.4.patch, 
> HIVE-17495.5.patch, HIVE-17495.6.patch, HIVE-17495.7.patch, 
> HIVE-17495.8.patch, HIVE-17495.9.patch
>
>
> When CachedStore is enabled, we would like to make the following
> optimizations:
> 1. During CachedStore prewarm, use one SQL call to retrieve the column stats
> objects for a db and store them in the cache.
> 2. Cache some aggregate stats (e.g. aggregate stats for all partitions,
> which seem to be commonly used) for query compilation speedup.
> 3. There was a bug in {{MetaStoreUtils.aggrPartitionStats}}, which called
> iterator.next() without checking iterator.hasNext(). This patch refactors
> some code to fix that.
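Item 3 refers to the standard guarded-iteration pattern: calling iterator.next() on an empty collection throws NoSuchElementException, so hasNext() must be checked first. A self-contained sketch (illustrative, not the actual aggrPartitionStats code):

```java
import java.util.Iterator;
import java.util.List;

class SafeFirst {
    // Returns the first element, or a default when the list is empty.
    // Calling it.next() unconditionally would throw NoSuchElementException
    // for an empty list.
    static <T> T firstOrDefault(List<T> items, T dflt) {
        Iterator<T> it = items.iterator();
        return it.hasNext() ? it.next() : dflt;
    }
}
```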





[jira] [Updated] (HIVE-18462) Explain formatted for queries with map join has columnExprMap with unformatted column name

2018-01-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18462:
---
Status: Patch Available  (was: Open)

> Explain formatted for queries with map join has columnExprMap with 
> unformatted column name
> --
>
> Key: HIVE-18462
> URL: https://issues.apache.org/jira/browse/HIVE-18462
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Attachments: HIVE-18462.1.patch
>
>
> e.g.
> {code:sql}
> "columnExprMap:":{  
>   "_col0":"0:Column[_col0]",
>   "_col1":"0:Column[_col1]",
>   "_col2":"1:Column[_col0]",
>   "_col3":"1:Column[_col1]"
>   }
> {code}
> It is better formatted as:
> {code:sql}
> "columnExprMap:":{  
>  "_col0":"0:_col0",
>  "_col1":"0:_col1",
>  "_col2":"1:_col0",
>  "_col3":"1:_col1"
>  }
> {code}





[jira] [Updated] (HIVE-17495) CachedStore: prewarm improvement (avoid multiple sql calls to read partition column stats), refactoring and caching some aggregate stats

2018-01-16 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-17495:

Attachment: HIVE-17495.10.patch

> CachedStore: prewarm improvement (avoid multiple sql calls to read partition 
> column stats), refactoring and caching some aggregate stats
> 
>
> Key: HIVE-17495
> URL: https://issues.apache.org/jira/browse/HIVE-17495
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-17495.1.patch, HIVE-17495.10.patch, 
> HIVE-17495.2.patch, HIVE-17495.3.patch, HIVE-17495.4.patch, 
> HIVE-17495.5.patch, HIVE-17495.6.patch, HIVE-17495.7.patch, 
> HIVE-17495.8.patch, HIVE-17495.9.patch
>
>
> When CachedStore is enabled, we would like to make the following
> optimizations:
> 1. During CachedStore prewarm, use one SQL call to retrieve the column stats
> objects for a db and store them in the cache.
> 2. Cache some aggregate stats (e.g. aggregate stats for all partitions,
> which seem to be commonly used) for query compilation speedup.
> 3. There was a bug in {{MetaStoreUtils.aggrPartitionStats}}, which called
> iterator.next() without checking iterator.hasNext(). This patch refactors
> some code to fix that.





[jira] [Updated] (HIVE-18462) Explain formatted for queries with map join has columnExprMap with unformatted column name

2018-01-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18462:
---
Description: 
e.g.

{code:sql}
"columnExprMap:":{  
  "_col0":"0:Column[_col0]",
  "_col1":"0:Column[_col1]",
  "_col2":"1:Column[_col0]",
  "_col3":"1:Column[_col1]"
  }
{code}

It is better formatted as:

{code:sql}
"columnExprMap:":{  
 "_col0":"0:_col0",
 "_col1":"0:_col1",
 "_col2":"1:_col0",
 "_col3":"1:_col1"
 }
{code}

  was:
e.g.

{code:sql}
"columnExprMap:":{  
  "_col0":"0:Column[_col0]",
  "_col1":"0:Column[_col1]",
  "_col2":"1:Column[_col0]",
  "_col3":"1:Column[_col1]"
   }
{code}

It is better formatted as:

"columnExprMap:":{  
  "_col0":"0:_col0",
  "_col1":"0:_col1",
  "_col2":"1:_col0",
  "_col3":"1:_col1"
   }


> Explain formatted for queries with map join has columnExprMap with 
> unformatted column name
> --
>
> Key: HIVE-18462
> URL: https://issues.apache.org/jira/browse/HIVE-18462
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>
> e.g.
> {code:sql}
> "columnExprMap:":{  
>   "_col0":"0:Column[_col0]",
>   "_col1":"0:Column[_col1]",
>   "_col2":"1:Column[_col0]",
>   "_col3":"1:Column[_col1]"
>   }
> {code}
> It is better formatted as:
> {code:sql}
> "columnExprMap:":{  
>  "_col0":"0:_col0",
>  "_col1":"0:_col1",
>  "_col2":"1:_col0",
>  "_col3":"1:_col1"
>  }
> {code}





[jira] [Assigned] (HIVE-18462) Explain formatted for queries with map join has columnExprMap with unformatted column name

2018-01-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg reassigned HIVE-18462:
--


> Explain formatted for queries with map join has columnExprMap with 
> unformatted column name
> --
>
> Key: HIVE-18462
> URL: https://issues.apache.org/jira/browse/HIVE-18462
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
>
> e.g.
> {code:sql}
> "columnExprMap:":{  
>   "_col0":"0:Column[_col0]",
>   "_col1":"0:Column[_col1]",
>   "_col2":"1:Column[_col0]",
>   "_col3":"1:Column[_col1]"
>}
> {code}
> It is better formatted as:
> "columnExprMap:":{  
>   "_col0":"0:_col0",
>   "_col1":"0:_col1",
>   "_col2":"1:_col0",
>   "_col3":"1:_col1"
>}





[jira] [Commented] (HIVE-18461) Fix precommit hive job

2018-01-16 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328014#comment-16328014
 ] 

Vihang Karajgaonkar commented on HIVE-18461:


I suspect that this is because the PTestClient is using Java 7 while the 
upgraded JIRA version doesn't support the TLS protocols requested.
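On Java 7, TLSv1.2 is implemented but not enabled by default for client connections, which would produce exactly this protocol_version alert against a server that has disabled older protocols. A hedged sketch of requesting it explicitly (this is an assumption about the fix, not a verified one; running the client with -Dhttps.protocols=TLSv1.2, or moving to Java 8 where TLSv1.2 is the default, are the usual alternatives):

```java
import javax.net.ssl.SSLContext;
import java.security.GeneralSecurityException;

class TlsCheck {
    // Requests an SSLContext that negotiates TLSv1.2; on Java 7 the default
    // HttpsURLConnection context offers only up to TLSv1, which servers that
    // require TLSv1.2 reject with a protocol_version alert.
    static SSLContext tls12Context() {
        try {
            SSLContext ctx = SSLContext.getInstance("TLSv1.2");
            ctx.init(null, null, null);  // default key and trust managers
            return ctx;
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("TLSv1.2 unavailable", e);
        }
    }
}
```

The returned context's socket factory could then be installed on HttpsURLConnection before PTestClient opens the URL.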

> Fix precommit hive job
> --
>
> Key: HIVE-18461
> URL: https://issues.apache.org/jira/browse/HIVE-18461
> Project: Hive
>  Issue Type: Task
>  Components: Testing Infrastructure
>Reporter: Vihang Karajgaonkar
>Priority: Blocker
>
> JIRA was upgraded over the weekend and the precommit job has been failing
> since then. There are potentially two issues at play here. One is with the
> precommit admin job, which automates the patch testing. I think YETUS-594
> should fix the precommit admin job. But manual submission of Hive jobs is
> failing with the exception below. We should get this fixed to get the
> automated testing back on track.
> {noformat}
> + local 
> 'PTEST_CLASSPATH=/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
> + java -cp 
> '/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/hive-ptest-3.0-classes.jar:/home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target/lib/*'
>  org.apache.hive.ptest.api.client.PTestClient --command testStart --outputDir 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HIVE-Build/hive/build/hive/testutils/ptest2/target
>  --password '[***]' --testHandle PreCommit-HIVE-Build-8631 --endpoint 
> http://104.198.109.242:8080/hive-ptest-1.0 --logsEndpoint 
> http://104.198.109.242/logs/ --profile master-mr2 --patch 
> https://issues.apache.org/jira/secure/attachment/12906251/HIVE-18323.05.patch 
> --jira HIVE-18323
> Exception in thread "main" javax.net.ssl.SSLException: Received fatal alert: 
> protocol_version
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
>   at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:1979)
>   at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1086)
>   at 
> sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1332)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1359)
>   at 
> sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1343)
>   at 
> sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
>   at 
> sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
>   at 
> sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1301)
>   at 
> sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
>   at java.net.URL.openStream(URL.java:1041)
>   at 
> com.google.common.io.Resources$UrlByteSource.openStream(Resources.java:72)
>   at com.google.common.io.ByteSource.read(ByteSource.java:257)
>   at com.google.common.io.Resources.toByteArray(Resources.java:99)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.testStart(PTestClient.java:126)
>   at 
> org.apache.hive.ptest.api.client.PTestClient.main(PTestClient.java:320)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17495) CachedStore: prewarm improvement (avoid multiple sql calls to read partition column stats), refactoring and caching some aggregate stats

2018-01-16 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328011#comment-16328011
 ] 

Vaibhav Gumashta commented on HIVE-17495:
-

The failed tests in the latest run seems to be failing elsewhere too: e.g 
HIVE-18061, HIVE-18452, 

> CachedStore: prewarm improvement (avoid multiple sql calls to read partition 
> column stats), refactoring and caching some aggregate stats
> 
>
> Key: HIVE-17495
> URL: https://issues.apache.org/jira/browse/HIVE-17495
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-17495.1.patch, HIVE-17495.2.patch, 
> HIVE-17495.3.patch, HIVE-17495.4.patch, HIVE-17495.5.patch, 
> HIVE-17495.6.patch, HIVE-17495.7.patch, HIVE-17495.8.patch, HIVE-17495.9.patch
>
>
> Only when CachedStore is enabled, we would like to make the following 
> optimizations:
> 1. During CachedStore prewarm, use one sql call to retrieve column stats 
> objects for a db and store it in the cache.
> 2. Cache some aggregate stats  (e.g. aggregate stats for all partitions, 
> which seems to be commonly used) for query compilation speedup.
> 3. There was a bug in {{MetaStoreUtils.aggrPartitionStats}}, which would use 
> an iterator.next w/o checking with iterator.hasNext. This patch refactors 
> some code to fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17495) CachedStore: prewarm improvement (avoid multiple sql calls to read partition column stats), refactoring and caching some aggregate stats

2018-01-16 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328007#comment-16328007
 ] 

Daniel Dai commented on HIVE-17495:
---

Finally a clean run, +1.

> CachedStore: prewarm improvement (avoid multiple sql calls to read partition 
> column stats), refactoring and caching some aggregate stats
> 
>
> Key: HIVE-17495
> URL: https://issues.apache.org/jira/browse/HIVE-17495
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Major
> Attachments: HIVE-17495.1.patch, HIVE-17495.2.patch, 
> HIVE-17495.3.patch, HIVE-17495.4.patch, HIVE-17495.5.patch, 
> HIVE-17495.6.patch, HIVE-17495.7.patch, HIVE-17495.8.patch, HIVE-17495.9.patch
>
>
> Only when CachedStore is enabled, we would like to make the following 
> optimizations:
> 1. During CachedStore prewarm, use one sql call to retrieve column stats 
> objects for a db and store it in the cache.
> 2. Cache some aggregate stats  (e.g. aggregate stats for all partitions, 
> which seems to be commonly used) for query compilation speedup.
> 3. There was a bug in {{MetaStoreUtils.aggrPartitionStats}}, which would use 
> an iterator.next w/o checking with iterator.hasNext. This patch refactors 
> some code to fix that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17331) Path must be used as key type of the pathToAlises

2018-01-16 Thread Oleg Danilov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Danilov updated HIVE-17331:

Attachment: HIVE-17331.2.patch

> Path must be used as key type of the pathToAlises
> -
>
> Key: HIVE-17331
> URL: https://issues.apache.org/jira/browse/HIVE-17331
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleg Danilov
>Assignee: Oleg Danilov
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-17331.2.patch, HIVE-17331.patch
>
>
> This code uses String instead of Path as key type of the pathToAliases map, 
> so seems like get(String) always null.
> +*GenMapRedUtils.java*+
> {code:java}
> for (int pos = 0; pos < size; pos++) {
>   String taskTmpDir = taskTmpDirLst.get(pos);
>   TableDesc tt_desc = tt_descLst.get(pos);
>   MapWork mWork = plan.getMapWork();
>   if (mWork.getPathToAliases().get(taskTmpDir) == null) {
> taskTmpDir = taskTmpDir.intern();
> Path taskTmpDirPath = 
> StringInternUtils.internUriStringsInPath(new Path(taskTmpDir));
> mWork.removePathToAlias(taskTmpDirPath);
> mWork.addPathToAlias(taskTmpDirPath, taskTmpDir);
> mWork.addPathToPartitionInfo(taskTmpDirPath, new 
> PartitionDesc(tt_desc, null));
> mWork.getAliasToWork().put(taskTmpDir, topOperators.get(pos));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer

2018-01-16 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328000#comment-16328000
 ] 

Prasanth Jayachandran commented on HIVE-18460:
--

"orc.compress.size4:3141"
Looks like a typo "size4"? 
Can you also add a test with tblproperties of table set to non-default 
compression buffer size and see if it is retained (and inherited by deltas) 
after major/minor compactions?

> Compactor doesn't pass Table properties to the Orc writer
> -
>
> Key: HIVE-18460
> URL: https://issues.apache.org/jira/browse/HIVE-18460
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.13
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-18460.01.patch
>
>
>  
>  CompactorMap.getWrite()/getDeleteEventWriter() both do 
> AcidOutputFormat.Options.tableProperties() but
> OrcOutputFormat.getRawRecordWriter() does
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getConfiguration());
> {noformat}
> which ignores tableProperties value.
> It should do 
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getTableProperties(), 
> options.getConfiguration());
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17331) Path must be used as key type of the pathToAlises

2018-01-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327999#comment-16327999
 ] 

ASF GitHub Bot commented on HIVE-17331:
---

GitHub user dosoft opened a pull request:

https://github.com/apache/hive/pull/292

HIVE-17331: Use Path instead of String as key type of the pathToAliases



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dosoft/hive HIVE-17331

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/292.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #292


commit 661897e01fe48a0f80c3ee4e9168667b7d926ba9
Author: Oleg Danilov 
Date:   2017-08-16T10:34:39Z

HIVE-17331: Use Path instead of String as key type of the pathToAliases




> Path must be used as key type of the pathToAlises
> -
>
> Key: HIVE-17331
> URL: https://issues.apache.org/jira/browse/HIVE-17331
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleg Danilov
>Assignee: Oleg Danilov
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-17331.patch
>
>
> This code uses String instead of Path as key type of the pathToAliases map, 
> so seems like get(String) always null.
> +*GenMapRedUtils.java*+
> {code:java}
> for (int pos = 0; pos < size; pos++) {
>   String taskTmpDir = taskTmpDirLst.get(pos);
>   TableDesc tt_desc = tt_descLst.get(pos);
>   MapWork mWork = plan.getMapWork();
>   if (mWork.getPathToAliases().get(taskTmpDir) == null) {
> taskTmpDir = taskTmpDir.intern();
> Path taskTmpDirPath = 
> StringInternUtils.internUriStringsInPath(new Path(taskTmpDir));
> mWork.removePathToAlias(taskTmpDirPath);
> mWork.addPathToAlias(taskTmpDirPath, taskTmpDir);
> mWork.addPathToPartitionInfo(taskTmpDirPath, new 
> PartitionDesc(tt_desc, null));
> mWork.getAliasToWork().put(taskTmpDir, topOperators.get(pos));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17331) Path must be used as key type of the pathToAlises

2018-01-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327994#comment-16327994
 ] 

ASF GitHub Bot commented on HIVE-17331:
---

Github user dosoft closed the pull request at:

https://github.com/apache/hive/pull/233


> Path must be used as key type of the pathToAlises
> -
>
> Key: HIVE-17331
> URL: https://issues.apache.org/jira/browse/HIVE-17331
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleg Danilov
>Assignee: Oleg Danilov
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-17331.patch
>
>
> This code uses String instead of Path as key type of the pathToAliases map, 
> so seems like get(String) always null.
> +*GenMapRedUtils.java*+
> {code:java}
> for (int pos = 0; pos < size; pos++) {
>   String taskTmpDir = taskTmpDirLst.get(pos);
>   TableDesc tt_desc = tt_descLst.get(pos);
>   MapWork mWork = plan.getMapWork();
>   if (mWork.getPathToAliases().get(taskTmpDir) == null) {
> taskTmpDir = taskTmpDir.intern();
> Path taskTmpDirPath = 
> StringInternUtils.internUriStringsInPath(new Path(taskTmpDir));
> mWork.removePathToAlias(taskTmpDirPath);
> mWork.addPathToAlias(taskTmpDirPath, taskTmpDir);
> mWork.addPathToPartitionInfo(taskTmpDirPath, new 
> PartitionDesc(tt_desc, null));
> mWork.getAliasToWork().put(taskTmpDir, topOperators.get(pos));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17331) Path must be used as key type of the pathToAlises

2018-01-16 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-17331:
--
Labels: pull-request-available  (was: )

> Path must be used as key type of the pathToAlises
> -
>
> Key: HIVE-17331
> URL: https://issues.apache.org/jira/browse/HIVE-17331
> Project: Hive
>  Issue Type: Bug
>Reporter: Oleg Danilov
>Assignee: Oleg Danilov
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HIVE-17331.patch
>
>
> This code uses String instead of Path as key type of the pathToAliases map, 
> so seems like get(String) always null.
> +*GenMapRedUtils.java*+
> {code:java}
> for (int pos = 0; pos < size; pos++) {
>   String taskTmpDir = taskTmpDirLst.get(pos);
>   TableDesc tt_desc = tt_descLst.get(pos);
>   MapWork mWork = plan.getMapWork();
>   if (mWork.getPathToAliases().get(taskTmpDir) == null) {
> taskTmpDir = taskTmpDir.intern();
> Path taskTmpDirPath = 
> StringInternUtils.internUriStringsInPath(new Path(taskTmpDir));
> mWork.removePathToAlias(taskTmpDirPath);
> mWork.addPathToAlias(taskTmpDirPath, taskTmpDir);
> mWork.addPathToPartitionInfo(taskTmpDirPath, new 
> PartitionDesc(tt_desc, null));
> mWork.getAliasToWork().put(taskTmpDir, topOperators.get(pos));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18393) Error returned when some other type is read as string from parquet tables

2018-01-16 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-18393:
---
Attachment: HIVE-18393.4.patch

> Error returned when some other type is read as string from parquet tables
> -
>
> Key: HIVE-18393
> URL: https://issues.apache.org/jira/browse/HIVE-18393
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18393.1.patch, HIVE-18393.1.patch, 
> HIVE-18393.2.patch, HIVE-18393.3.patch, HIVE-18393.4.patch
>
>
> TimeStamp, Decimal, Double, Float, BigInt, Int, SmallInt, Tinyint and Boolean 
> when read as String, Varchar or Char should return the correct data.  Now 
> this results in error for parquet tables.
> Test Case:
> {code}
> drop table if exists testAltCol;
> create table testAltCol
> (cId  TINYINT,
>  cTimeStamp TIMESTAMP,
>  cDecimal   DECIMAL(38,18),
>  cDoubleDOUBLE,
>  cFloat   FLOAT,
>  cBigIntBIGINT,
>  cInt INT,
>  cSmallInt  SMALLINT,
>  cTinyint   TINYINT,
>  cBoolean   BOOLEAN);
> insert into testAltCol values
> (1,
>  '2017-11-07 09:02:49.9',
>  12345678901234567890.123456789012345678,
>  1.79e308,
>  3.4e38,
>  1234567890123456789,
>  1234567890,
>  12345,
>  123,
>  TRUE);
> insert into testAltCol values
> (2,
>  '1400-01-01 01:01:01.1',
>  1.1,
>  2.2,
>  3.3,
>  1,
>  2,
>  3,
>  4,
>  FALSE);
> insert into testAltCol values
> (3,
>  '1400-01-01 01:01:01.1',
>  10.1,
>  20.2,
>  30.3,
>  1234567890123456789,
>  1234567890,
>  12345,
>  123,
>  TRUE);
> select cId, cTimeStamp from testAltCol order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltCol order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltCol order by cId;
> select cId, cBoolean from testAltCol order by cId;
> drop table if exists testAltColP;
> create table testAltColP stored as parquet as select * from testAltCol;
> select cId, cTimeStamp from testAltColP order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltColP order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId;
> select cId, cBoolean from testAltColP order by cId;
> alter table testAltColP replace columns
> (cId  TINYINT,
>  cTimeStamp STRING,
>  cDecimal   STRING,
>  cDoubleSTRING,
>  cFloat   STRING,
>  cBigIntSTRING,
>  cInt STRING,
>  cSmallInt  STRING,
>  cTinyint   STRING,
>  cBoolean   STRING);
> select cId, cTimeStamp from testAltColP order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltColP order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId;
> select cId, cBoolean from testAltColP order by cId;
> alter table testAltColP replace columns
> (cId  TINYINT,
>  cTimeStamp VARCHAR(100),
>  cDecimal   VARCHAR(100),
>  cDoubleVARCHAR(100),
>  cFloat   VARCHAR(100),
>  cBigIntVARCHAR(100),
>  cInt VARCHAR(100),
>  cSmallInt  VARCHAR(100),
>  cTinyint   VARCHAR(100),
>  cBoolean   VARCHAR(100));
> select cId, cTimeStamp from testAltColP order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltColP order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId;
> select cId, cBoolean from testAltColP order by cId;
> alter table testAltColP replace columns
> (cId  TINYINT,
>  cTimeStamp CHAR(100),
>  cDecimal   CHAR(100),
>  cDoubleCHAR(100),
>  cFloat   CHAR(100),
>  cBigIntCHAR(100),
>  cInt CHAR(100),
>  cSmallInt  CHAR(100),
>  cTinyint   CHAR(100),
>  cBoolean   CHAR(100));
> select cId, cTimeStamp from testAltColP order by cId;
> select cId, cDecimal, cDouble, cFloat from testAltColP order by cId;
> select cId, cBigInt, cInt, cSmallInt, cTinyint from testAltColP order by cId;
> select cId, cBoolean from testAltColP order by cId;
> drop table if exists testAltColP;
> {code}
> {code}
> Error:
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> Excerpt for log:
> 2018-01-05T15:54:05,756 ERROR [LocalJobRunner Map Task Executor #0] 
> mr.ExecMapper: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row [Error getting row data with exception 
> java.lang.UnsupportedOperationException: Cannot inspect 
> org.apache.hadoop.hive.serde2.io.TimestampWritable
>   at 
> org.apache.hadoop.hive.ql.io.parquet.serde.primitive.ParquetStringInspector.getPrimitiveJavaObject(ParquetStringInspector.java:77)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17257) Hive should merge empty files

2018-01-16 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HIVE-17257:

Attachment: (was: HIVE-17257.3.patch)

> Hive should merge empty files
> -
>
> Key: HIVE-17257
> URL: https://issues.apache.org/jira/browse/HIVE-17257
> Project: Hive
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HIVE-17257.0.patch, HIVE-17257.1.patch, 
> HIVE-17257.2.patch, HIVE-17257.3.patch
>
>
> Currently if merging file option is turned on and the dest dir contains large 
> number of empty files, Hive will not trigger merge task:
> {code}
>   private long getMergeSize(FileSystem inpFs, Path dirPath, long avgSize) {
> AverageSize averageSize = getAverageSize(inpFs, dirPath);
> if (averageSize.getTotalSize() <= 0) {
>   return -1;
> }
> if (averageSize.getNumFiles() <= 1) {
>   return -1;
> }
> if (averageSize.getTotalSize()/averageSize.getNumFiles() < avgSize) {
>   return averageSize.getTotalSize();
> }
> return -1;
>   }
> {code}
> This logic doesn't seem right as the it seems better to combine these empty 
> files into one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17257) Hive should merge empty files

2018-01-16 Thread Chao Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HIVE-17257:

Attachment: HIVE-17257.3.patch

> Hive should merge empty files
> -
>
> Key: HIVE-17257
> URL: https://issues.apache.org/jira/browse/HIVE-17257
> Project: Hive
>  Issue Type: Bug
>Reporter: Chao Sun
>Assignee: Chao Sun
>Priority: Major
> Attachments: HIVE-17257.0.patch, HIVE-17257.1.patch, 
> HIVE-17257.2.patch, HIVE-17257.3.patch
>
>
> Currently if merging file option is turned on and the dest dir contains large 
> number of empty files, Hive will not trigger merge task:
> {code}
>   private long getMergeSize(FileSystem inpFs, Path dirPath, long avgSize) {
> AverageSize averageSize = getAverageSize(inpFs, dirPath);
> if (averageSize.getTotalSize() <= 0) {
>   return -1;
> }
> if (averageSize.getNumFiles() <= 1) {
>   return -1;
> }
> if (averageSize.getTotalSize()/averageSize.getNumFiles() < avgSize) {
>   return averageSize.getTotalSize();
> }
> return -1;
>   }
> {code}
> This logic doesn't seem right as the it seems better to combine these empty 
> files into one.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer

2018-01-16 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18460:
--
Status: Patch Available  (was: Open)

[~prasanth_j] could you review please

FYI, [~jcamachorodriguez]

> Compactor doesn't pass Table properties to the Orc writer
> -
>
> Key: HIVE-18460
> URL: https://issues.apache.org/jira/browse/HIVE-18460
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.13
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-18460.01.patch
>
>
>  
>  CompactorMap.getWrite()/getDeleteEventWriter() both do 
> AcidOutputFormat.Options.tableProperties() but
> OrcOutputFormat.getRawRecordWriter() does
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getConfiguration());
> {noformat}
> which ignores tableProperties value.
> It should do 
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getTableProperties(), 
> options.getConfiguration());
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer

2018-01-16 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18460:
--
Attachment: HIVE-18460.01.patch

> Compactor doesn't pass Table properties to the Orc writer
> -
>
> Key: HIVE-18460
> URL: https://issues.apache.org/jira/browse/HIVE-18460
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.13
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-18460.01.patch
>
>
>  
>  CompactorMap.getWrite()/getDeleteEventWriter() both do 
> AcidOutputFormat.Options.tableProperties() but
> OrcOutputFormat.getRawRecordWriter() does
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getConfiguration());
> {noformat}
> which ignores tableProperties value.
> It should do 
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getTableProperties(), 
> options.getConfiguration());
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18350) load data should rename files consistent with insert statements

2018-01-16 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-18350:
--
Hadoop Flags: Incompatible change

> load data should rename files consistent with insert statements
> ---
>
> Key: HIVE-18350
> URL: https://issues.apache.org/jira/browse/HIVE-18350
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18350.1.patch, HIVE-18350.2.patch, 
> HIVE-18350.3.patch, HIVE-18350.4.patch
>
>
> Insert statements create files of format ending with _0, 0001_0 etc. 
> However, the load data uses the input file name. That results in inconsistent 
> naming convention which makes SMB joins difficult in some scenarios and may 
> cause trouble for other types of queries in future.
> We need consistent naming convention.
> For non-bucketed table, hive renames all the files regardless of how they 
> were named by the user.
> For bucketed table, hive relies on user to name the files matching the bucket 
> in non-strict mode. Hive assumes that the data belongs to same bucket in a 
> file. In strict mode, loading bucketed table is disabled.
> This will likely affect most of the tests which load data which is pretty 
> significant due to which it is further divided into two subtasks for smoother 
> merge.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18386) Create dummy materialized views registry and make it configurable

2018-01-16 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-18386:
---
Attachment: HIVE-18386.02.patch

> Create dummy materialized views registry and make it configurable
> -
>
> Key: HIVE-18386
> URL: https://issues.apache.org/jira/browse/HIVE-18386
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18386.01.patch, HIVE-18386.02.patch
>
>
> HiveMaterializedViewsRegistry keeps the materialized views plans in memory to 
> have quick access when queries are planned. For debugging purposes, we will 
> create a dummy materialized views registry that forwards all calls to 
> metastore and make the choice configurable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17952) Fix license headers to avoid dangling javadoc warnings

2018-01-16 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327944#comment-16327944
 ] 

Andrew Sherman commented on HIVE-17952:
---

[~prasanth_j] thanks, will do rebase today or tomorrow

> Fix license headers to avoid dangling javadoc warnings
> --
>
> Key: HIVE-17952
> URL: https://issues.apache.org/jira/browse/HIVE-17952
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Andrew Sherman
>Priority: Trivial
> Attachments: HIVE-17952.1.patch
>
>
> All license headers starts with "/**" which are assumed to be javadocs and 
> IDE warns about dangling javadoc pointing to license headers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18435) output mappings summary in plan description

2018-01-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-18435:
---

Assignee: Sergey Shelukhin

> output mappings summary in plan description
> ---
>
> Key: HIVE-18435
> URL: https://issues.apache.org/jira/browse/HIVE-18435
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18452) work around HADOOP-15171

2018-01-16 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327946#comment-16327946
 ] 

Ashutosh Chauhan commented on HIVE-18452:
-

+1

> work around HADOOP-15171
> 
>
> Key: HIVE-18452
> URL: https://issues.apache.org/jira/browse/HIVE-18452
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-18452.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17952) Fix license headers to avoid dangling javadoc warnings

2018-01-16 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327940#comment-16327940
 ] 

Prasanth Jayachandran commented on HIVE-17952:
--

[~asherman] The patch has some missing files likely because of the standalone 
metastore move. Can you please rebase the patch? I will get it committed after 
the rebase. 

> Fix license headers to avoid dangling javadoc warnings
> --
>
> Key: HIVE-17952
> URL: https://issues.apache.org/jira/browse/HIVE-17952
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Andrew Sherman
>Priority: Trivial
> Attachments: HIVE-17952.1.patch
>
>
> All license headers starts with "/**" which are assumed to be javadocs and 
> IDE warns about dangling javadoc pointing to license headers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17952) Fix license headers to avoid dangling javadoc warnings

2018-01-16 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327934#comment-16327934
 ] 

Prasanth Jayachandran commented on HIVE-17952:
--

lgtm, +1. Will commit it shortly if there are no conflicts. 

> Fix license headers to avoid dangling javadoc warnings
> --
>
> Key: HIVE-17952
> URL: https://issues.apache.org/jira/browse/HIVE-17952
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Andrew Sherman
>Priority: Trivial
> Attachments: HIVE-17952.1.patch
>
>
> All license headers starts with "/**" which are assumed to be javadocs and 
> IDE warns about dangling javadoc pointing to license headers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17952) Fix license headers to avoid dangling javadoc warnings

2018-01-16 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327932#comment-16327932
 ] 

Andrew Sherman commented on HIVE-17952:
---

[~prasanth_j] could you please review and push this change?

> Fix license headers to avoid dangling javadoc warnings
> --
>
> Key: HIVE-17952
> URL: https://issues.apache.org/jira/browse/HIVE-17952
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Andrew Sherman
>Priority: Trivial
> Attachments: HIVE-17952.1.patch
>
>
> All license headers starts with "/**" which are assumed to be javadocs and 
> IDE warns about dangling javadoc pointing to license headers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18457) Triggers in unmanaged pools are not shown

2018-01-16 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327931#comment-16327931
 ] 

Prasanth Jayachandran commented on HIVE-18457:
--

For debugging, pasting the state of metastore tables here
{code}
mysql> select * from wm_resourceplan;
+---++---+-+-+
| RP_ID | NAME   | QUERY_PARALLELISM | STATUS  | DEFAULT_POOL_ID |
+---++---+-+-+
| 1 | global |  NULL | ACTIVE  |   1 |
| 2 | llap   |  NULL | ENABLED |   2 |
+---++---+-+-+
2 rows in set (0.00 sec)

mysql> select * from wm_pool;
+-+---+-++---+---+
| POOL_ID | RP_ID | PATH| ALLOC_FRACTION | QUERY_PARALLELISM | 
SCHEDULING_POLICY |
+-+---+-++---+---+
|   1 | 1 | default |  1 | 4 | NULL 
 |
|   2 | 2 | default |  1 | 4 | NULL 
 |
+-+---+-++---+---+
2 rows in set (0.01 sec)

mysql> select * from wm_pool_to_trigger;
Empty set (0.01 sec)

mysql> select * from wm_mapping;
Empty set (0.00 sec)

mysql> select * from wm_trigger;
++---+-+--+---+-+
| TRIGGER_ID | RP_ID | NAME| TRIGGER_EXPRESSION   | 
ACTION_EXPRESSION | IS_IN_UNMANAGED |
++---+-+--+---+-+
| 29 | 1 | highly_parallel | TOTAL_TASKS > 40 | KILL
  ||
| 33 | 1 | big_hdfs_read   | HDFS_BYTES_READ > 30 | KILL
  ||
| 34 | 1 | slow_query  | EXECUTION_TIME > 10  | KILL
  ||
| 35 | 1 | some_spills | SPILLED_RECORDS > 10 | KILL
  ||
++---+-+--+---+-+
4 rows in set (0.00 sec)
{code}


> Triggers in unmanaged pools are not shown
> -
>
> Key: HIVE-18457
> URL: https://issues.apache.org/jira/browse/HIVE-18457
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Sergey Shelukhin
>Priority: Major
>
> Did the following sequence to add triggers to UNMANAGED. I can see the 
> triggers added to metastore by IS_IN_UNAMANGED flag is not set in metastore. 
> Also show resource plans does not show triggers in unmanaged pool.
> {code}
> 0: jdbc:hive2://localhost:1> show resource plans;
> +--+--++
> | rp_name  |  status  | query_parallelism  |
> +--+--++
> | global   | ACTIVE   | NULL   |
> | llap | ENABLED  | NULL   |
> +--+--++
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN llap ACTIVATE;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global DISABLE;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.highly_parallel WHEN 
> TOTAL_TASKS > 40 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.highly_parallel ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.big_hdfs_read WHEN 
> HDFS_BYTES_READ > 30 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.big_hdfs_read ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.slow_query WHEN 
> EXECUTION_TIME > 10 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.slow_query ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.some_spills WHEN 
> SPILLED_RECORDS > 10 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.some_spills ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global ENABLE;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global ACTIVATE;
> 0: jdbc:hive2://localhost:1> show resource plan global;
> +----------------------------------------------------------------+
> | line                                                           |
> +----------------------------------------------------------------+
> | global[status=ACTIVE,parallelism=null,defaultPool=default]     |
> | default[allocFraction=1.0,schedulingPolicy=null,parallelism=4] |
> +----------------------------------------------------------------+
> {code}
> {code:title=mysql}
> mysql> select * from wm_trigger;
> 

[jira] [Updated] (HIVE-17983) Make the standalone metastore generate tarballs etc.

2018-01-16 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-17983:
--
Status: Patch Available  (was: Open)

The testing for various install and upgrade scripts has changed since the 
comment given above on 4 Nov.  This patch includes tests for all four non-Derby 
DBs for install and upgrade.  See standalone-metastore/DEV-README for details 
on how to run these.

Each of these is run in a docker container.  The developer must have docker 
available locally to run them.  For the Oracle test, the developer must 
independently download the Oracle JDBC driver (it's behind a license 
agreement).  Everything else, including the required docker containers and JDBC 
drivers for the other DB types, is handled automatically by maven and the 
tests.

Since these tests are slow (~1 min each) and require external downloads, they 
are not run by default.  They are implemented as integration tests using the 
failsafe plugin.
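The failsafe-based setup described above would typically be wired into the module's pom roughly as follows. This is a hedged sketch: the plugin version, `includes` pattern, and phase bindings are illustrative assumptions, not taken from the actual patch; see standalone-metastore/DEV-README for the authoritative instructions.

```xml
<!-- Sketch: bind ITest* classes to the integration-test phase so that
     surefire's default `mvn test` run skips them and they only execute
     under `mvn verify`. Version and includes pattern are illustrative. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.20.1</version>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <includes>
      <include>**/ITest*.java</include>
    </includes>
  </configuration>
</plugin>
```

Because failsafe binds to the integration-test and verify phases rather than test, these slow, download-heavy tests stay out of a plain `mvn test` build, matching the "not run by default" behavior described above.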

> Make the standalone metastore generate tarballs etc.
> 
>
> Key: HIVE-17983
> URL: https://issues.apache.org/jira/browse/HIVE-17983
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17983.patch
>
>
> In order to be separately installable the standalone metastore needs its own 
> tarballs, startup scripts, etc.  All of the SQL installation and upgrade 
> scripts also need to move from metastore to standalone-metastore.
> I also plan to create Dockerfiles for different database types so that 
> developers can test the SQL installation and upgrade scripts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18457) Triggers in unmanaged pools are not shown

2018-01-16 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327921#comment-16327921
 ] 

Prasanth Jayachandran commented on HIVE-18457:
--

I am using "show resource plan global;" (see description), but I still don't 
see the unmanaged triggers.

> Triggers in unmanaged pools are not shown
> -
>
> Key: HIVE-18457
> URL: https://issues.apache.org/jira/browse/HIVE-18457
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Sergey Shelukhin
>Priority: Major
>
> I did the following sequence to add triggers to UNMANAGED. I can see the 
> triggers added to the metastore, but the IS_IN_UNMANAGED flag is not set. 
> Also, show resource plans does not show triggers in the unmanaged pool.
> {code}
> 0: jdbc:hive2://localhost:1> show resource plans;
> +----------+----------+--------------------+
> | rp_name  |  status  | query_parallelism  |
> +----------+----------+--------------------+
> | global   | ACTIVE   | NULL               |
> | llap     | ENABLED  | NULL               |
> +----------+----------+--------------------+
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN llap ACTIVATE;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global DISABLE;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.highly_parallel WHEN 
> TOTAL_TASKS > 40 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.highly_parallel ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.big_hdfs_read WHEN 
> HDFS_BYTES_READ > 30 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.big_hdfs_read ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.slow_query WHEN 
> EXECUTION_TIME > 10 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.slow_query ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>CREATE TRIGGER global.some_spills WHEN 
> SPILLED_RECORDS > 10 DO KILL;
> 0: jdbc:hive2://localhost:1>ALTER TRIGGER global.some_spills ADD TO 
> UNMANAGED;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global ENABLE;
> 0: jdbc:hive2://localhost:1>ALTER RESOURCE PLAN global ACTIVATE;
> 0: jdbc:hive2://localhost:1> show resource plan global;
> +----------------------------------------------------------------+
> | line                                                           |
> +----------------------------------------------------------------+
> | global[status=ACTIVE,parallelism=null,defaultPool=default]     |
> | default[allocFraction=1.0,schedulingPolicy=null,parallelism=4] |
> +----------------------------------------------------------------+
> {code}
> {code:title=mysql}
> mysql> select * from wm_trigger;
> +------------+-------+-----------------+----------------------+-------------------+-----------------+
> | TRIGGER_ID | RP_ID | NAME            | TRIGGER_EXPRESSION   | ACTION_EXPRESSION | IS_IN_UNMANAGED |
> +------------+-------+-----------------+----------------------+-------------------+-----------------+
> |         29 |     1 | highly_parallel | TOTAL_TASKS > 40     | KILL              |                 |
> |         33 |     1 | big_hdfs_read   | HDFS_BYTES_READ > 30 | KILL              |                 |
> |         34 |     1 | slow_query      | EXECUTION_TIME > 10  | KILL              |                 |
> |         35 |     1 | some_spills     | SPILLED_RECORDS > 10 | KILL              |                 |
> +------------+-------+-----------------+----------------------+-------------------+-----------------+
> {code}
> As the mysql table above shows, IS_IN_UNMANAGED is not set, and 'show resource 
> plan global' does not show the triggers defined in the unmanaged pool.
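The symptom can be reduced to a tiny model (a hypothetical data model, not Hive's actual classes): the display path would only list a trigger under the unmanaged pool when its IS_IN_UNMANAGED flag was persisted, so if ALTER TRIGGER ... ADD TO UNMANAGED never sets the flag, the listing comes back empty.

```python
# Hypothetical model of the trigger rows shown in the mysql output above;
# the is_in_unmanaged flag mirrors the (unset) IS_IN_UNMANAGED column.
triggers = [
    {"name": "highly_parallel", "is_in_unmanaged": False},
    {"name": "big_hdfs_read",   "is_in_unmanaged": False},
    {"name": "slow_query",      "is_in_unmanaged": False},
    {"name": "some_spills",     "is_in_unmanaged": False},
]

def unmanaged_triggers(rows):
    """Triggers a 'show resource plan' display would list under the
    unmanaged pool: only rows whose flag was actually persisted."""
    return [row["name"] for row in rows if row["is_in_unmanaged"]]

print(unmanaged_triggers(triggers))  # flag never set, so: []
```

Under this model, fixing either half (persisting the flag, or the display reading it) alone would still leave the list empty; both the write path and the read path have to agree on the flag.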



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17983) Make the standalone metastore generate tarballs etc.

2018-01-16 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-17983:
--
Attachment: HIVE-17983.patch

> Make the standalone metastore generate tarballs etc.
> 
>
> Key: HIVE-17983
> URL: https://issues.apache.org/jira/browse/HIVE-17983
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-17983.patch
>
>
> In order to be separately installable the standalone metastore needs its own 
> tarballs, startup scripts, etc.  All of the SQL installation and upgrade 
> scripts also need to move from metastore to standalone-metastore.
> I also plan to create Dockerfiles for different database types so that 
> developers can test the SQL installation and upgrade scripts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17983) Make the standalone metastore generate tarballs etc.

2018-01-16 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-17983:
--
Labels: pull-request-available  (was: )

> Make the standalone metastore generate tarballs etc.
> 
>
> Key: HIVE-17983
> URL: https://issues.apache.org/jira/browse/HIVE-17983
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
>
> In order to be separately installable the standalone metastore needs its own 
> tarballs, startup scripts, etc.  All of the SQL installation and upgrade 
> scripts also need to move from metastore to standalone-metastore.
> I also plan to create Dockerfiles for different database types so that 
> developers can test the SQL installation and upgrade scripts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17983) Make the standalone metastore generate tarballs etc.

2018-01-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327915#comment-16327915
 ] 

ASF GitHub Bot commented on HIVE-17983:
---

GitHub user alanfgates opened a pull request:

https://github.com/apache/hive/pull/291

HIVE-17983 Make the standalone metastore generate tarballs etc.

See JIRA for full comments.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alanfgates/hive hive17983

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/291.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #291


commit 1ba9b62d9ef488355e1a97dbc7237c1472349a24
Author: Alan Gates 
Date:   2017-10-19T23:49:38Z

HIVE-17983 Make the standalone metastore generate tarballs etc.




> Make the standalone metastore generate tarballs etc.
> 
>
> Key: HIVE-17983
> URL: https://issues.apache.org/jira/browse/HIVE-17983
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
>
> In order to be separately installable the standalone metastore needs its own 
> tarballs, startup scripts, etc.  All of the SQL installation and upgrade 
> scripts also need to move from metastore to standalone-metastore.
> I also plan to create Dockerfiles for different database types so that 
> developers can test the SQL installation and upgrade scripts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer

2018-01-16 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327885#comment-16327885
 ] 

Eugene Koifman commented on HIVE-18460:
---

Because of this, any properties overridden as part of an Alter Table requesting 
compaction (HIVE-13354) won't be honored either.
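The fix boils down to a precedence rule: table-level properties must be layered over the job configuration when building writer options. A minimal sketch of that merge, with hypothetical helper names standing in for the real OrcFile.writerOptions overloads:

```python
def writer_options(table_properties, configuration):
    """Sketch of the intended precedence: start from the job
    configuration, then let table properties override it, mirroring
    OrcFile.writerOptions(tableProperties, conf). The buggy path
    builds options from the configuration alone, dropping the
    table-level overrides entirely."""
    opts = dict(configuration)      # base: values from the job conf
    opts.update(table_properties)   # table properties win on conflict
    return opts

conf = {"orc.compress.size": "262144", "orc.compress": "ZLIB"}
table_props = {"orc.compress.size": "8192"}  # e.g. set via ALTER TABLE

print(writer_options(table_props, conf)["orc.compress.size"])  # 8192
```

With the one-argument form the compactor effectively computes `dict(configuration)` and the ALTER TABLE override is silently lost, which is exactly the behavior the issue describes.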

> Compactor doesn't pass Table properties to the Orc writer
> -
>
> Key: HIVE-18460
> URL: https://issues.apache.org/jira/browse/HIVE-18460
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.13
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
>
>  
>  CompactorMap.getWrite()/getDeleteEventWriter() both do 
> AcidOutputFormat.Options.tableProperties() but
> OrcOutputFormat.getRawRecordWriter() does
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getConfiguration());
> {noformat}
> which ignores the tableProperties value.
> It should instead do
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getTableProperties(), 
> options.getConfiguration());
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer

2018-01-16 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18460:
--
Affects Version/s: (was: 0.14.0)
   0.13

> Compactor doesn't pass Table properties to the Orc writer
> -
>
> Key: HIVE-18460
> URL: https://issues.apache.org/jira/browse/HIVE-18460
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.13
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
>
>  
>  CompactorMap.getWrite()/getDeleteEventWriter() both do 
> AcidOutputFormat.Options.tableProperties() but
> OrcOutputFormat.getRawRecordWriter() does
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getConfiguration());
> {noformat}
> which ignores the tableProperties value.
> It should instead do
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getTableProperties(), 
> options.getConfiguration());
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-18460) Compactor doesn't pass Table properties to the Orc writer

2018-01-16 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman reassigned HIVE-18460:
-


> Compactor doesn't pass Table properties to the Orc writer
> -
>
> Key: HIVE-18460
> URL: https://issues.apache.org/jira/browse/HIVE-18460
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 0.14.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
>
>  
>  CompactorMap.getWrite()/getDeleteEventWriter() both do 
> AcidOutputFormat.Options.tableProperties() but
> OrcOutputFormat.getRawRecordWriter() does
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getConfiguration());
> {noformat}
> which ignores the tableProperties value.
> It should instead do
> {noformat}
> final OrcFile.WriterOptions opts =
> OrcFile.writerOptions(options.getTableProperties(), 
> options.getConfiguration());
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18458) Workload manager initializes even when interactive queue is not set

2018-01-16 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327876#comment-16327876
 ] 

Prasanth Jayachandran commented on HIVE-18458:
--

bq. Does it not work?
Doesn't seem to work. Also, I don't have any mappings defined. With no mappings, 
I expected an unmanaged session to be created here:
https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManagerFederation.java#L43
But that can only happen when the mapping is null, which is never the case, 
since UserPoolMapping is always non-null.
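The dead branch can be shown with a toy version of the decision (a hypothetical shape, not the actual WorkloadManagerFederation code): the unmanaged path is guarded by a null check on the mapping, but the caller always supplies a non-null UserPoolMapping, so the guard never fires.

```python
def get_session(mapping):
    """Toy version of the session choice described above."""
    if mapping is None:      # unmanaged path: unreachable in practice,
        return "unmanaged"   # because callers always pass a mapping object
    return "managed"

# Even with no user mappings defined, the caller wraps them in a
# (non-null) UserPoolMapping-like object, so we always end up managed.
empty_mapping = {}           # stands in for an empty UserPoolMapping
print(get_session(empty_mapping))  # managed
```

The sketch suggests why "no mappings defined" is not enough: the condition would need to treat an *empty* mapping like a missing one (or check whether WM is configured at all) for the unmanaged session to ever be created.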


> Workload manager initializes even when interactive queue is not set
> ---
>
> Key: HIVE-18458
> URL: https://issues.apache.org/jira/browse/HIVE-18458
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-18458.1.patch
>
>
> Workload manager gets initialized even when interactive queue is not defined 
> (however there is an active resource plan in metastore). Active resource plan 
> is used for tez in this case. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-13563) Hive Streaming does not honor orc.compress.size and orc.stripe.size table properties

2018-01-16 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13563:
--
Component/s: Transactions

> Hive Streaming does not honor orc.compress.size and orc.stripe.size table 
> properties
> 
>
> Key: HIVE-13563
> URL: https://issues.apache.org/jira/browse/HIVE-13563
> Project: Hive
>  Issue Type: Bug
>  Components: ORC, Transactions
>Affects Versions: 2.1.0
>Reporter: Wei Zheng
>Assignee: Wei Zheng
>Priority: Major
> Fix For: 1.3.0, 2.1.0
>
> Attachments: HIVE-13563.1.patch, HIVE-13563.2.patch, 
> HIVE-13563.3.patch, HIVE-13563.4.patch, HIVE-13563.branch-1.patch
>
>
> According to the doc:
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC#LanguageManualORC-HiveQLSyntax
> One should be able to specify tblproperties for many ORC options.
> But the settings for orc.compress.size and orc.stripe.size don't take effect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17982) Move metastore specific itests

2018-01-16 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-17982:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Patch 2 committed.  Thanks Peter for the review.

> Move metastore specific itests
> --
>
> Key: HIVE-17982
> URL: https://issues.apache.org/jira/browse/HIVE-17982
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-17982.2.patch, HIVE-17982.patch
>
>
> There are a number of tests in itests/hive-unit/.../metastore that are 
> metastore specific.  I suspect they were initially placed in itests only 
> because the metastore pulls in a few plugins from ql.
> Given that we need to be able to release the metastore separately, we need to 
> be able to test it completely as a standalone entity.  So I propose to move a 
> number of the itests over into standalone-metastore.  I will only move tests 
> that are isolated to the metastore.  Anything that tests wider functionality 
> I plan to leave in itests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


  1   2   >