[jira] [Commented] (HIVE-14580) Introduce || operator

2016-10-22 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15599106#comment-15599106
 ] 

Lefty Leverenz commented on HIVE-14580:
---

Also, [~kgyrtkirk] wrote on Oct. 4:

* Note: I think this precedence table should be documented somewhere

> Introduce || operator
> -
>
> Key: HIVE-14580
> URL: https://issues.apache.org/jira/browse/HIVE-14580
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Zoltan Haindrich
>  Labels: TODOC2.2
> Fix For: 2.2.0
>
> Attachments: HIVE-14580.1.patch, HIVE-14580.2.patch, 
> HIVE-14580.3.patch, HIVE-14580.4.patch
>
>
> Functionally equivalent to the concat() UDF, but the SQL standard allows 
> using || for string concatenation.
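
A minimal illustration of the equivalence described above (the table and column names are invented for the example):

{code}
-- Standard SQL concatenation operator introduced by this patch:
SELECT first_name || ' ' || last_name FROM employees;

-- Equivalent call using the existing concat() UDF:
SELECT concat(first_name, ' ', last_name) FROM employees;
{code}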



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14580) Introduce || operator

2016-10-22 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15599096#comment-15599096
 ] 

Lefty Leverenz commented on HIVE-14580:
---

This should be documented in the Operators & UDFs wikidoc, although I'm not 
sure which section it belongs in.  Does it need a new subsection under 
Operators or can it just be included with concat() under Functions?

A crossreference with the logical operator || would also be helpful for 
disambiguation.

* [Operators and UDFs | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF]
** [Operators and UDFs -- Built-in Functions -- String Functions | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-StringFunctions]
** [Operators and UDFs -- Built-in Operators -- Logical Operators | 
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-LogicalOperators]

Added a TODOC2.2 label. 



[jira] [Updated] (HIVE-14580) Introduce || operator

2016-10-22 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-14580:
--
Labels: TODOC2.2  (was: )



[jira] [Commented] (HIVE-12765) Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)

2016-10-22 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15599065#comment-15599065
 ] 

Lefty Leverenz commented on HIVE-12765:
---

Should this be documented now, or wait for the rest of HIVE-12764's subtasks?

> Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)
> ---
>
> Key: HIVE-12765
> URL: https://issues.apache.org/jira/browse/HIVE-12765
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.1.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Fix For: 2.2.0
>
> Attachments: HIVE-12765.01.patch, HIVE-12765.02.patch, 
> HIVE-12765.03.patch, HIVE-12765.04.patch, HIVE-12765.05.patch, 
> HIVE-12765.06.patch, HIVE-12765.07.patch, HIVE-12765.08.patch
>
>






[jira] [Updated] (HIVE-14924) MSCK REPAIR table with single threaded is throwing null pointer exception

2016-10-22 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-14924:
---
Status: Patch Available  (was: Open)

> MSCK REPAIR table with single threaded is throwing null pointer exception
> -
>
> Key: HIVE-14924
> URL: https://issues.apache.org/jira/browse/HIVE-14924
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 2.2.0
>Reporter: Ratheesh Kamoor
>Assignee: Pengcheng Xiong
> Attachments: HIVE-14924.01.patch
>
>
> MSCK REPAIR TABLE throws a NullPointerException when running in 
> single-threaded mode (hive.mv.files.thread=0)
> Error:
> 2016-10-10T22:27:13,564 ERROR [e9ce04a8-2a84-426d-8e79-a2d15b8cee09 
> main([])]: exec.DDLTask (DDLTask.java:failed(581)) - 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.checkPartitionDirs(HiveMetaStoreChecker.java:423)
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.findUnknownPartitions(HiveMetaStoreChecker.java:315)
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.checkTable(HiveMetaStoreChecker.java:291)
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.checkTable(HiveMetaStoreChecker.java:236)
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.checkMetastore(HiveMetaStoreChecker.java:113)
>   at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1834)
> To reproduce: set hive.mv.files.thread=0 and run the MSCK REPAIR TABLE 
> command.
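
The reproduction above can be sketched as a session (the partitioned table name is hypothetical; the scenario assumes partition directories exist on the filesystem that are not yet registered in the metastore):

{code}
SET hive.mv.files.thread=0;              -- force single-threaded directory checking
MSCK REPAIR TABLE my_partitioned_table;  -- throws NullPointerException before this patch
{code}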





[jira] [Updated] (HIVE-14924) MSCK REPAIR table with single threaded is throwing null pointer exception

2016-10-22 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-14924:
---
Attachment: HIVE-14924.01.patch



[jira] [Updated] (HIVE-14924) MSCK REPAIR table with single threaded is throwing null pointer exception

2016-10-22 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-14924:
---
Status: Open  (was: Patch Available)



[jira] [Updated] (HIVE-14924) MSCK REPAIR table with single threaded is throwing null pointer exception

2016-10-22 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-14924:
---
Attachment: (was: HIVE-14924.01.patch)



[jira] [Updated] (HIVE-15023) SimpleFetchOptimizer needs to optimize limit=0

2016-10-22 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-15023:
---
Status: Patch Available  (was: Open)

> SimpleFetchOptimizer needs to optimize limit=0
> --
>
> Key: HIVE-15023
> URL: https://issues.apache.org/jira/browse/HIVE-15023
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-15023.01.patch, HIVE-15023.02.patch
>
>
> on current master
> {code}
> hive> explain select key from src limit 0;
> OK
> STAGE DEPENDENCIES:
>   Stage-0 is a root stage
> STAGE PLANS:
>   Stage: Stage-0
> Fetch Operator
>   limit: 0
>   Processor Tree:
> TableScan
>   alias: src
>   Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE 
> Column stats: NONE
>   Select Operator
> expressions: key (type: string)
> outputColumnNames: _col0
> Statistics: Num rows: 500 Data size: 5312 Basic stats: COMPLETE 
> Column stats: NONE
> Limit
>   Number of rows: 0
>   Statistics: Num rows: 0 Data size: 0 Basic stats: NONE Column 
> stats: NONE
>   ListSink
> Time taken: 7.534 seconds, Fetched: 20 row(s)
> {code}





[jira] [Updated] (HIVE-15023) SimpleFetchOptimizer needs to optimize limit=0

2016-10-22 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-15023:
---
Status: Open  (was: Patch Available)



[jira] [Updated] (HIVE-15023) SimpleFetchOptimizer needs to optimize limit=0

2016-10-22 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-15023:
---
Attachment: HIVE-15023.02.patch



[jira] [Updated] (HIVE-14909) Preserve the "parent location" of the table when an "alter table rename to " is submitted (the case when the db location is not specified and the Hive de

2016-10-22 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-14909:
---
Attachment: HIVE-14909.1.patch

> Preserve the "parent location" of the table when an "alter table  
> rename to " is submitted (the case when the db location is not 
> specified and the Hive default db is outside the same encrypted zone).
> --
>
> Key: HIVE-14909
> URL: https://issues.apache.org/jira/browse/HIVE-14909
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Adriano
>Assignee: Chaoyu Tang
> Attachments: HIVE-14909.1.patch, HIVE-14909.patch, HIVE-14909.patch
>
>
> Alter Table operation for db_enc.rename_test failed to move data due to: 
> '/hdfs/encrypted_path/db_enc/rename_test can't be moved from an encryption 
> zone.'
> When Hive renames a managed table, it always creates the new renamed table 
> directory under its database directory in order to keep a db/table hierarchy. 
> In this case, the renamed table directory is created under the "default" db 
> directory, typically set to /hive/warehouse/.
> This error doesn't appear if you first create a database that points to a 
> directory outside /hive/warehouse/, say '/hdfs/encrypted_path'. For example: 
> create database db_enc location '/hdfs/encrypted_path/db_enc'; 
> use db_enc; 
> create table rename_test (...) location 
> '/hdfs/encrypted_path/db_enc/rename_test'; 
> alter table rename_test rename to test_rename; 
> The renamed test_rename directory is created under 
> /hdfs/encrypted_path/db_enc. 
> Considering that filesystem encryption is often part of the gradual 
> hardening of a system (where the system and its data may already exist), 
> that a db can be created without a location set (because one is not strictly 
> required), and that the default db may be outside the encryption zone (or in 
> an unencrypted zone), the alter table rename operation will fail.
> Improvement:
> Preserve the "parent location" of the table when an "alter table  
> rename to " is submitted (the case when the db location is not 
> specified and the Hive default db is outside the same encrypted zone).





[jira] [Updated] (HIVE-14909) Preserve the "parent location" of the table when an "alter table rename to " is submitted (the case when the db location is not specified and the Hive de

2016-10-22 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-14909:
---
Attachment: (was: HIVE-14909.1.patch)



[jira] [Updated] (HIVE-14909) Preserve the "parent location" of the table when an "alter table rename to " is submitted (the case when the db location is not specified and the Hive de

2016-10-22 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-14909:
---
Attachment: HIVE-14909.1.patch

A new patch to fix the failed tests.



[jira] [Commented] (HIVE-15036) Druid code recently included in Hive pulls in GPL jar

2016-10-22 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15598619#comment-15598619
 ] 

Alan Gates commented on HIVE-15036:
---

The dependency tree shown by Maven is:
{code}
[INFO] +- io.druid:druid-processing:jar:0.9.1.1:compile
[INFO] |  +- io.druid:druid-common:jar:0.9.1.1:compile
[INFO] |  |  +- com.metamx:java-util:jar:0.27.9:compile
[INFO] |  |  |  \- com.jayway.jsonpath:json-path:jar:2.1.0:compile
[INFO] |  |  +- io.druid:druid-api:jar:0.9.1.1:compile
[INFO] |  |  |  \- io.airlift:airline:jar:0.7:compile
[INFO] |  |  | \- com.google.code.findbugs:annotations:jar:2.0.3:compile
{code}
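
If the jar cannot be replaced upstream, one possible (untested) workaround sketch is to exclude the flagged findbugs annotations artifact from the Druid dependency in the pom, using the coordinates shown in the tree above:

{code:xml}
<dependency>
  <groupId>io.druid</groupId>
  <artifactId>druid-processing</artifactId>
  <version>0.9.1.1</version>
  <exclusions>
    <!-- Exclude the GPL-licensed annotations jar pulled in transitively -->
    <exclusion>
      <groupId>com.google.code.findbugs</groupId>
      <artifactId>annotations</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}

Whether Hive actually compiles without the annotations on the classpath would need to be verified.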

> Druid code recently included in Hive pulls in GPL jar
> -
>
> Key: HIVE-15036
> URL: https://issues.apache.org/jira/browse/HIVE-15036
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Affects Versions: 2.2.0
>Reporter: Alan Gates
>Assignee: Gunther Hagleitner
>Priority: Blocker
>
> Druid pulls in a jar annotation-2.3.jar.  According to its pom file it is 
> licensed under GPL.  We cannot ship a binary distribution that includes this 
> jar.





[jira] [Updated] (HIVE-15035) Clean up Hive licenses for binary distribution

2016-10-22 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-15035:
--
Attachment: HIVE-15035.2.patch

Forgot to write out contents of the note file before creating the previous 
patch.

> Clean up Hive licenses for binary distribution
> --
>
> Key: HIVE-15035
> URL: https://issues.apache.org/jira/browse/HIVE-15035
> Project: Hive
>  Issue Type: Bug
>  Components: distribution
>Affects Versions: 2.1.0
>Reporter: Alan Gates
>Assignee: Alan Gates
> Attachments: HIVE-15035.2.patch, HIVE-15035.patch
>
>
> Hive's current LICENSE file contains information not needed for the source 
> distribution.  For the binary distribution we are missing many license files 
> as a number of jars included in Hive come with various licenses.  This all 
> needs to be cleaned up.





[jira] [Commented] (HIVE-15036) Druid code recently included in Hive pulls in GPL jar

2016-10-22 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15598613#comment-15598613
 ] 

Alan Gates commented on HIVE-15036:
---

cc [~jcamachorodriguez]



[jira] [Updated] (HIVE-15035) Clean up Hive licenses for binary distribution

2016-10-22 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-15035:
--
Status: Patch Available  (was: Open)

NO PRECOMMIT TESTS



[jira] [Updated] (HIVE-15035) Clean up Hive licenses for binary distribution

2016-10-22 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-15035:
--
Attachment: HIVE-15035.patch

This patch does a number of things:
# I removed all verbiage from the LICENSE file that relates only to things 
that are part of the binary distribution.  That is, the top-level LICENSE file 
is now relevant only for the source release.
# I created a licenses directory where I placed copies of all the licenses for 
jars included in Hive's binary distribution.  I also placed a NOTICE file in 
that directory to handle "weak copyleft" licenses such as Mozilla and CDDL, per 
the instructions at https://www.apache.org/legal/resolved.html
# I changed the binary distribution to include the contents of the new licenses 
directory.
# I put a notes file in the licenses directory explaining anything I didn't 
think was obvious.  This will not be distributed with the binary distribution 
but is intended to guide future committers trying to navigate this license 
maze.



[jira] [Resolved] (HIVE-13579) F304: EXCEPT ALL

2016-10-22 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong resolved HIVE-13579.

Resolution: Fixed

> F304: EXCEPT ALL
> 
>
> Key: HIVE-13579
> URL: https://issues.apache.org/jira/browse/HIVE-13579
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Carter Shanklin
>Assignee: Pengcheng Xiong
>
> This is a part of the SQL:2011 Analytics Complete Umbrella JIRA HIVE-13554. 
> EXCEPT ALL is a common set function, mandatory in the SQL standard, and would 
> be a good addition to Hive.
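
A small hypothetical example of the bag (multiset) semantics of EXCEPT ALL (table names and contents are invented):

{code}
-- t1 contains: 1, 1, 2, 3    t2 contains: 1, 2
SELECT c FROM t1
EXCEPT ALL
SELECT c FROM t2;
-- result: 1, 3  (only one of the two 1s in t1 is cancelled by the 1 in t2)
{code}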





[jira] [Updated] (HIVE-13580) E071-03: EXCEPT DISTINCT

2016-10-22 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13580:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> E071-03: EXCEPT DISTINCT
> 
>
> Key: HIVE-13580
> URL: https://issues.apache.org/jira/browse/HIVE-13580
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Carter Shanklin
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13580.01.patch
>
>
> This is a part of the SQL:2011 Analytics Complete Umbrella JIRA HIVE-13554. 
> EXCEPT DISTINCT (aka EXCEPT) is a common set function, mandatory in the SQL 
> standard, and would be a good addition to Hive.





[jira] [Resolved] (HIVE-13581) F302-01 and F302-02: INTERSECT DISTINCT and INTERSECT ALL

2016-10-22 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong resolved HIVE-13581.

Resolution: Fixed

> F302-01 and F302-02: INTERSECT DISTINCT and INTERSECT ALL
> -
>
> Key: HIVE-13581
> URL: https://issues.apache.org/jira/browse/HIVE-13581
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Carter Shanklin
>Assignee: Pengcheng Xiong
>
> This is a part of the SQL:2011 Analytics Complete Umbrella JIRA HIVE-13554. 
> INTERSECT DISTINCT and INTERSECT ALL are common set functions, mandatory 
> within the SQL standard, and would be good additions to Hive.
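
A small hypothetical example contrasting the two forms (table names and contents are invented):

{code}
-- t1 contains: 1, 1, 2    t2 contains: 1, 1, 3
SELECT c FROM t1 INTERSECT DISTINCT SELECT c FROM t2;  -- result: 1
SELECT c FROM t1 INTERSECT ALL SELECT c FROM t2;       -- result: 1, 1
{code}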



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12765) Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)

2016-10-22 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12765:
---
Fix Version/s: 2.2.0

> Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)
> ---
>
> Key: HIVE-12765
> URL: https://issues.apache.org/jira/browse/HIVE-12765
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.1.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Fix For: 2.2.0
>
> Attachments: HIVE-12765.01.patch, HIVE-12765.02.patch, 
> HIVE-12765.03.patch, HIVE-12765.04.patch, HIVE-12765.05.patch, 
> HIVE-12765.06.patch, HIVE-12765.07.patch, HIVE-12765.08.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12765) Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)

2016-10-22 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15598543#comment-15598543
 ] 

Pengcheng Xiong commented on HIVE-12765:


pushed to master. Thanks [~ashutoshc] for the review.

> Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)
> ---
>
> Key: HIVE-12765
> URL: https://issues.apache.org/jira/browse/HIVE-12765
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.1.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Fix For: 2.2.0
>
> Attachments: HIVE-12765.01.patch, HIVE-12765.02.patch, 
> HIVE-12765.03.patch, HIVE-12765.04.patch, HIVE-12765.05.patch, 
> HIVE-12765.06.patch, HIVE-12765.07.patch, HIVE-12765.08.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12765) Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)

2016-10-22 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12765:
---
Affects Version/s: 2.1.0

> Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)
> ---
>
> Key: HIVE-12765
> URL: https://issues.apache.org/jira/browse/HIVE-12765
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.1.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Fix For: 2.2.0
>
> Attachments: HIVE-12765.01.patch, HIVE-12765.02.patch, 
> HIVE-12765.03.patch, HIVE-12765.04.patch, HIVE-12765.05.patch, 
> HIVE-12765.06.patch, HIVE-12765.07.patch, HIVE-12765.08.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14580) Introduce || operator

2016-10-22 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-14580:
---
Affects Version/s: 2.1.0

> Introduce || operator
> -
>
> Key: HIVE-14580
> URL: https://issues.apache.org/jira/browse/HIVE-14580
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Zoltan Haindrich
> Fix For: 2.2.0
>
> Attachments: HIVE-14580.1.patch, HIVE-14580.2.patch, 
> HIVE-14580.3.patch, HIVE-14580.4.patch
>
>
> Functionally equivalent to the concat() UDF, but the SQL standard allows the 
> use of || for string concatenation.
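
The equivalence claimed above can be sketched in Python (an illustration only, not Hive code; `concat` here mimics the UDF for plain string arguments, ignoring NULL handling):

```python
def concat(*parts):
    """Mimics Hive's concat() UDF for non-NULL string arguments:
    joins all arguments into one string."""
    return ''.join(parts)

# SELECT 'key' || '=' || 'value'  would behave like  concat('key', '=', 'value')
assert 'key' + '=' + 'value' == concat('key', '=', 'value')
print(concat('key', '=', 'value'))  # -> key=value
```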



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14580) Introduce || operator

2016-10-22 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-14580:
---
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

pushed to master. Thanks [~kgyrtkirk] for the patch!

> Introduce || operator
> -
>
> Key: HIVE-14580
> URL: https://issues.apache.org/jira/browse/HIVE-14580
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Zoltan Haindrich
> Fix For: 2.2.0
>
> Attachments: HIVE-14580.1.patch, HIVE-14580.2.patch, 
> HIVE-14580.3.patch, HIVE-14580.4.patch
>
>
> Functionally equivalent to the concat() UDF, but the SQL standard allows the 
> use of || for string concatenation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14993) make WriteEntity distinguish writeType

2016-10-22 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14993:
--
   Resolution: Fixed
Fix Version/s: 2.2.0
   Status: Resolved  (was: Patch Available)

committed to master
thanks Wei for the review

> make WriteEntity distinguish writeType
> --
>
> Key: HIVE-14993
> URL: https://issues.apache.org/jira/browse/HIVE-14993
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 2.2.0
>
> Attachments: HIVE-14993.2.patch, HIVE-14993.3.patch, 
> HIVE-14993.4.patch, HIVE-14993.5.patch, HIVE-14993.6.patch, 
> HIVE-14993.7.patch, HIVE-14993.8.patch, HIVE-14993.patch, 
> debug.not2checkin.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14913) Add new unit tests

2016-10-22 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15598125#comment-15598125
 ] 

Vineet Garg commented on HIVE-14913:


[~ashutoshc] HIVE-15034 has the patch available and tested

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Fix For: 2.2.0
>
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch, HIVE-14913.8.patch, HIVE-14913.9.patch
>
>
> Moving a bunch of tests from system tests to Hive unit tests to reduce 
> testing overhead



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13873) Column pruning for nested fields

2016-10-22 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-13873:

Attachment: HIVE-13873.5.patch

> Column pruning for nested fields
> 
>
> Key: HIVE-13873
> URL: https://issues.apache.org/jira/browse/HIVE-13873
> Project: Hive
>  Issue Type: New Feature
>  Components: Logical Optimizer
>Reporter: Xuefu Zhang
>Assignee: Ferdinand Xu
> Attachments: HIVE-13873.1.patch, HIVE-13873.2.patch, 
> HIVE-13873.3.patch, HIVE-13873.4.patch, HIVE-13873.5.patch, HIVE-13873.patch, 
> HIVE-13873.wip.patch
>
>
> Some columnar file formats such as Parquet also store the fields of a struct 
> type column by column, using the encoding described in the Google Dremel 
> paper. It is very common in big data for data to be stored in structs while 
> queries need only a subset of the fields in those structs. However, Hive 
> presently needs to read the whole struct regardless of whether all fields 
> are selected. Therefore, pruning unwanted sub-fields of structs (nested 
> fields) at file reading time would be a big performance boost for such 
> scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13873) Column pruning for nested fields

2016-10-22 Thread Ferdinand Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ferdinand Xu updated HIVE-13873:

Attachment: (was: HIVE-13873.5.patch)

> Column pruning for nested fields
> 
>
> Key: HIVE-13873
> URL: https://issues.apache.org/jira/browse/HIVE-13873
> Project: Hive
>  Issue Type: New Feature
>  Components: Logical Optimizer
>Reporter: Xuefu Zhang
>Assignee: Ferdinand Xu
> Attachments: HIVE-13873.1.patch, HIVE-13873.2.patch, 
> HIVE-13873.3.patch, HIVE-13873.4.patch, HIVE-13873.5.patch, HIVE-13873.patch, 
> HIVE-13873.wip.patch
>
>
> Some columnar file formats such as Parquet also store the fields of a struct 
> type column by column, using the encoding described in the Google Dremel 
> paper. It is very common in big data for data to be stored in structs while 
> queries need only a subset of the fields in those structs. However, Hive 
> presently needs to read the whole struct regardless of whether all fields 
> are selected. Therefore, pruning unwanted sub-fields of structs (nested 
> fields) at file reading time would be a big performance boost for such 
> scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13589) beeline support prompt for password with '-p' option

2016-10-22 Thread Ferdinand Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597914#comment-15597914
 ] 

Ferdinand Xu commented on HIVE-13589:
-

LGTM. +1 pending tests.

> beeline support prompt for password with '-p' option
> 
>
> Key: HIVE-13589
> URL: https://issues.apache.org/jira/browse/HIVE-13589
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Thejas M Nair
>Assignee: Vihang Karajgaonkar
> Fix For: 2.2.0
>
> Attachments: HIVE-13589.1.patch, HIVE-13589.10.patch, 
> HIVE-13589.11.patch, HIVE-13589.2.patch, HIVE-13589.3.patch, 
> HIVE-13589.4.patch, HIVE-13589.5.patch, HIVE-13589.6.patch, 
> HIVE-13589.7.patch, HIVE-13589.8.patch, HIVE-13589.9.patch
>
>
> Specifying connection string using commandline options in beeline is 
> convenient, as it gets saved in shell command history, and it is easy to 
> retrieve it from there.
> However, specifying the password in command prompt is not secure as it gets 
> displayed on screen and saved in the history.
> It should be possible to specify '-p' without an argument to make beeline 
> prompt for password.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14866) Remove logic to set global limit from SemanticAnalyzer

2016-10-22 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14866:
---
Attachment: HIVE-14866.03.patch

> Remove logic to set global limit from SemanticAnalyzer
> --
>
> Key: HIVE-14866
> URL: https://issues.apache.org/jira/browse/HIVE-14866
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14866.02.patch, HIVE-14866.03.patch, 
> HIVE-14866.patch
>
>
> Currently, we set up the global limit for the query in the SemanticAnalyzer. 
> In addition, we have an optimization rule GlobalLimitOptimizer that prunes 
> the input depending on the global limit and under certain conditions (off by 
> default).
> We would like to remove the dependency on the SemanticAnalyzer and set the 
> global limit within GlobalLimitOptimizer.
> Further, we need to solve the problem with SimpleFetchOptimizer, which only 
> checks the limit but does not take into account the offset of the query, 
> which I think might lead to incorrect results if FetchOptimizer kicks in (not 
> verified yet).
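
To make the offset concern in the last paragraph concrete, here is a minimal Python sketch (assumed semantics for illustration, not Hive's actual optimizer code) of why pruning the input down to `limit` rows without accounting for the offset can drop rows:

```python
def fetch(rows, limit, offset):
    """Correct LIMIT ... OFFSET semantics: skip `offset` rows,
    then return up to `limit` rows."""
    return rows[offset:offset + limit]

def buggy_prune_then_fetch(rows, limit, offset):
    """The failure mode described above: the input is pruned to `limit`
    rows *before* the offset is applied, so the offset can skip past
    the entire pruned input."""
    pruned = rows[:limit]
    return pruned[offset:offset + limit]

rows = list(range(10))
print(fetch(rows, limit=3, offset=5))                    # -> [5, 6, 7]
print(buggy_prune_then_fetch(rows, limit=3, offset=5))   # -> []
```

An optimizer that prunes input by a limit therefore has to prune to `offset + limit` rows, not `limit` rows, to stay correct.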



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14866) Remove logic to set global limit from SemanticAnalyzer

2016-10-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597575#comment-15597575
 ] 

Hive QA commented on HIVE-14866:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834800/HIVE-14866.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1755/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1755/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1755/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2016-10-22 10:03:24.800
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-1755/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2016-10-22 10:03:24.803
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 6cca991 HIVE-14913 : addendum patch
+ git clean -f -d
Removing metastore/scripts/upgrade/derby/037-HIVE-14496.derby.sql
Removing metastore/scripts/upgrade/mssql/022-HIVE-14496.mssql.sql
Removing metastore/scripts/upgrade/mysql/037-HIVE-14496.mysql.sql
Removing metastore/scripts/upgrade/oracle/037-HIVE-14496.oracle.sql
Removing metastore/scripts/upgrade/postgres/036-HIVE-14496.postgres.sql
Removing 
metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ViewDescriptor.java
Removing 
metastore/src/model/org/apache/hadoop/hive/metastore/model/MViewDescriptor.java
Removing 
ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveMaterializedViewsRegistry.java
Removing ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/views/
Removing ql/src/test/queries/clientpositive/materialized_view_create_rewrite.q
Removing 
ql/src/test/results/clientpositive/materialized_view_create_rewrite.q.out
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 6cca991 HIVE-14913 : addendum patch
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2016-10-22 10:03:25.876
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/SimpleFetchOptimizer.java:27
error: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/SimpleFetchOptimizer.java: 
patch does not apply
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834800 - PreCommit-HIVE-Build

> Remove logic to set global limit from SemanticAnalyzer
> --
>
> Key: HIVE-14866
> URL: https://issues.apache.org/jira/browse/HIVE-14866
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14866.02.patch, HIVE-14866.patch
>
>
> Currently, we set up the global limit for the query in the SemanticAnalyzer. 
> In addition, we have an optimization rule GlobalLimitOptimizer that prunes 
> the input depending on the global limit and under certain conditions (off by 
> default).
> We would like to remove the dependency on the SemanticAnalyzer and set the 
> global limit within GlobalLimitOptimizer.
> Further, we need to solve the problem with SimpleFetchOptimizer, which only 
> checks the limit but does not take into account the offset of the query, 
> which I think might lead to incorrect results if FetchOptimizer kicks in (not 
> verified yet).

[jira] [Commented] (HIVE-14496) Enable Calcite rewriting with materialized views

2016-10-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597571#comment-15597571
 ] 

Hive QA commented on HIVE-14496:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834795/HIVE-14496.02.patch

{color:green}SUCCESS:{color} +1 due to 18 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10565 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] 
(batchId=132)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[current_date_timestamp]
 (batchId=144)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=164)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1754/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1754/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1754/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834795 - PreCommit-HIVE-Build

> Enable Calcite rewriting with materialized views
> 
>
> Key: HIVE-14496
> URL: https://issues.apache.org/jira/browse/HIVE-14496
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14496.01.patch, HIVE-14496.02.patch, 
> HIVE-14496.patch
>
>
> Calcite already supports query rewriting using materialized views. We will 
> use it to support this feature in Hive.
> In order to do that, we need to register the existing materialized views with 
> Calcite view service and enable the materialized views rewriting rules. 
> We should include a HiveConf flag to completely disable query rewriting using 
> materialized views if necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15029) Add logic to estimate stats for BETWEEN operator

2016-10-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597474#comment-15597474
 ] 

Hive QA commented on HIVE-15029:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834798/HIVE-15029.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 47 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_between_columns] 
(batchId=62)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] 
(batchId=132)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[current_date_timestamp]
 (batchId=144)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_4]
 (batchId=143)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_predicate_pushdown]
 (batchId=136)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[parquet_predicate_pushdown]
 (batchId=140)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1]
 (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_vector_dynpart_hashjoin_1]
 (batchId=149)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_between_columns]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_between_in]
 (batchId=146)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_4] 
(batchId=91)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query12] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query13] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query20] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query21] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query22] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query25] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query28] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query29] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query32] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query34] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query40] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query48] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query51] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query54] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query58] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query64] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query65] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query66] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query67] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query68] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query70] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query73] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query79] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query80] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query82] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query85] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query87] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query90] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query94] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query95] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query97] 
(batchId=219)
org.apache.hadoop.hive.cli.TestPerfCliDriver.testCliDriver[query98] 
(batchId=219)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[vector_between_in] 
(batchId=116)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=164)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1753/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1753/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1753/

Messages:
{noformat}
Executing 

[jira] [Commented] (HIVE-15030) Fixes in inference of collation for Tez cost model

2016-10-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597390#comment-15597390
 ] 

Hive QA commented on HIVE-15030:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834799/HIVE-15030.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] 
(batchId=132)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[columnstats_part_coltype]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[current_date_timestamp]
 (batchId=144)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=164)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1752/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1752/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1752/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834799 - PreCommit-HIVE-Build

> Fixes in inference of collation for Tez cost model
> --
>
> Key: HIVE-15030
> URL: https://issues.apache.org/jira/browse/HIVE-15030
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-15030.patch
>
>
> Tez cost model might get NPE if collation returned by join algorithm is null.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15034) Fix orc_ppd_basic & current_date_timestamp tests

2016-10-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597326#comment-15597326
 ] 

Hive QA commented on HIVE-15034:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834792/HIVE-15034.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=164)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1751/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1751/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1751/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834792 - PreCommit-HIVE-Build

> Fix orc_ppd_basic & current_date_timestamp tests
> 
>
> Key: HIVE-15034
> URL: https://issues.apache.org/jira/browse/HIVE-15034
> Project: Hive
>  Issue Type: Test
>  Components: Test
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-15034.1.patch
>
>
> Started failing following HIVE-14913's failure



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14866) Remove logic to set global limit from SemanticAnalyzer

2016-10-22 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14866:
---
Attachment: HIVE-14866.02.patch

> Remove logic to set global limit from SemanticAnalyzer
> --
>
> Key: HIVE-14866
> URL: https://issues.apache.org/jira/browse/HIVE-14866
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14866.02.patch, HIVE-14866.patch
>
>
> Currently, we set up the global limit for the query in the SemanticAnalyzer. 
> In addition, we have an optimization rule GlobalLimitOptimizer that prunes 
> the input depending on the global limit and under certain conditions (off by 
> default).
> We would like to remove the dependency on the SemanticAnalyzer and set the 
> global limit within GlobalLimitOptimizer.
> Further, we need to solve the problem with SimpleFetchOptimizer, which only 
> checks the limit but does not take into account the offset of the query, 
> which I think might lead to incorrect results if FetchOptimizer kicks in (not 
> verified yet).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14866) Remove logic to set global limit from SemanticAnalyzer

2016-10-22 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14866:
---
Status: Patch Available  (was: In Progress)

> Remove logic to set global limit from SemanticAnalyzer
> --
>
> Key: HIVE-14866
> URL: https://issues.apache.org/jira/browse/HIVE-14866
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14866.02.patch, HIVE-14866.patch
>
>
> Currently, we set up the global limit for the query in the SemanticAnalyzer. 
> In addition, we have an optimization rule GlobalLimitOptimizer that prunes 
> the input depending on the global limit and under certain conditions (off by 
> default).
> We would like to remove the dependency on the SemanticAnalyzer and set the 
> global limit within GlobalLimitOptimizer.
> Further, we need to solve the problem with SimpleFetchOptimizer, which only 
> checks the limit but does not take into account the offset of the query, 
> which I think might lead to incorrect results if FetchOptimizer kicks in (not 
> verified yet).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14866) Remove logic to set global limit from SemanticAnalyzer

2016-10-22 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14866:
---
Attachment: (was: HIVE-14866.02.patch)

> Remove logic to set global limit from SemanticAnalyzer
> --
>
> Key: HIVE-14866
> URL: https://issues.apache.org/jira/browse/HIVE-14866
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14866.02.patch, HIVE-14866.patch
>
>
> Currently, we set up the global limit for the query in the SemanticAnalyzer. 
> In addition, we have an optimization rule, GlobalLimitOptimizer, that prunes 
> the input based on the global limit under certain conditions (it is off by 
> default).
> We would like to remove the dependency on the SemanticAnalyzer and set the 
> global limit within GlobalLimitOptimizer instead.
> Further, we need to solve a problem with SimpleFetchOptimizer, which checks 
> the limit but does not take the offset of the query into account; I think 
> this might lead to incorrect results if FetchOptimizer kicks in (not 
> verified yet).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15030) Fixes in inference of collation for Tez cost model

2016-10-22 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-15030:
---
Attachment: HIVE-15030.patch

> Fixes in inference of collation for Tez cost model
> --
>
> Key: HIVE-15030
> URL: https://issues.apache.org/jira/browse/HIVE-15030
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-15030.patch
>
>
> The Tez cost model might hit an NPE if the collation returned by the join 
> algorithm is null.
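The fix amounts to a null guard before the collation is used. A minimal Python sketch (a toy stand-in for the Java cost model, with a hypothetical cost function) of treating a missing collation as "no ordering":

```python
def collation_cost(collation):
    # treat a missing collation (None) as an empty key list instead of
    # dereferencing it directly, which is the NPE analogue in Python
    keys = collation if collation is not None else []
    # toy cost: proportional to the number of sort keys
    return len(keys)

print(collation_cost(["a", "b"]))  # 2
print(collation_cost(None))        # 0 (would previously have raised)
```

The same pattern in the Java code would be a null check that substitutes an empty collation list before the cost computation.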



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15029) Add logic to estimate stats for BETWEEN operator

2016-10-22 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-15029:
---
Attachment: (was: HIVE-15029.01.patch)

> Add logic to estimate stats for BETWEEN operator
> 
>
> Key: HIVE-15029
> URL: https://issues.apache.org/jira/browse/HIVE-15029
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-15029.01.patch
>
>
> Currently, the BETWEEN operator falls into the default case: it reduces the 
> input rows by half. This may lead to wrong estimates for the number of rows 
> produced by Filter operators.
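A minimal Python sketch of why range-aware estimation beats the 50% default (a simplified uniform-distribution model, not Hive's actual stats rules): estimate the rows passing `col BETWEEN lo AND hi` from the column's min/max statistics.

```python
def estimate_between_rows(num_rows, col_min, col_max, lo, hi):
    """Range-based estimate; falls back to the 50% default when stats are missing."""
    if col_min is None or col_max is None or col_max <= col_min:
        return num_rows // 2          # the current default-case behavior
    # clamp the predicate range to the column's domain
    lo, hi = max(lo, col_min), min(hi, col_max)
    if hi < lo:
        return 0                      # disjoint ranges select nothing
    selectivity = (hi - lo) / (col_max - col_min)
    return int(num_rows * selectivity)

# A narrow range over a wide domain selects far fewer rows than the default:
print(estimate_between_rows(1000, 0, 100, 10, 20))      # 100
print(estimate_between_rows(1000, None, None, 10, 20))  # 500 (default case)
```

When the predicate covers 10% of the column's domain, the default 50% estimate overstates the Filter output by 5x, which then distorts every operator above it in the plan.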



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15030) Fixes in inference of collation for Tez cost model

2016-10-22 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-15030:
---
Attachment: (was: HIVE-15030.patch)

> Fixes in inference of collation for Tez cost model
> --
>
> Key: HIVE-15030
> URL: https://issues.apache.org/jira/browse/HIVE-15030
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>
> The Tez cost model might hit an NPE if the collation returned by the join 
> algorithm is null.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15029) Add logic to estimate stats for BETWEEN operator

2016-10-22 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-15029:
---
Attachment: HIVE-15029.01.patch

> Add logic to estimate stats for BETWEEN operator
> 
>
> Key: HIVE-15029
> URL: https://issues.apache.org/jira/browse/HIVE-15029
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-15029.01.patch
>
>
> Currently, the BETWEEN operator falls into the default case: it reduces the 
> input rows by half. This may lead to wrong estimates for the number of rows 
> produced by Filter operators.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14866) Remove logic to set global limit from SemanticAnalyzer

2016-10-22 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14866:
---
Attachment: (was: HIVE-14866.01.patch)

> Remove logic to set global limit from SemanticAnalyzer
> --
>
> Key: HIVE-14866
> URL: https://issues.apache.org/jira/browse/HIVE-14866
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14866.02.patch, HIVE-14866.patch
>
>
> Currently, we set up the global limit for the query in the SemanticAnalyzer. 
> In addition, we have an optimization rule, GlobalLimitOptimizer, that prunes 
> the input based on the global limit under certain conditions (it is off by 
> default).
> We would like to remove the dependency on the SemanticAnalyzer and set the 
> global limit within GlobalLimitOptimizer instead.
> Further, we need to solve a problem with SimpleFetchOptimizer, which checks 
> the limit but does not take the offset of the query into account; I think 
> this might lead to incorrect results if FetchOptimizer kicks in (not 
> verified yet).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14866) Remove logic to set global limit from SemanticAnalyzer

2016-10-22 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14866:
---
Attachment: HIVE-14866.02.patch

> Remove logic to set global limit from SemanticAnalyzer
> --
>
> Key: HIVE-14866
> URL: https://issues.apache.org/jira/browse/HIVE-14866
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14866.02.patch, HIVE-14866.patch
>
>
> Currently, we set up the global limit for the query in the SemanticAnalyzer. 
> In addition, we have an optimization rule, GlobalLimitOptimizer, that prunes 
> the input based on the global limit under certain conditions (it is off by 
> default).
> We would like to remove the dependency on the SemanticAnalyzer and set the 
> global limit within GlobalLimitOptimizer instead.
> Further, we need to solve a problem with SimpleFetchOptimizer, which checks 
> the limit but does not take the offset of the query into account; I think 
> this might lead to incorrect results if FetchOptimizer kicks in (not 
> verified yet).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14496) Enable Calcite rewriting with materialized views

2016-10-22 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14496:
---
Attachment: (was: HIVE-14496.02.patch)

> Enable Calcite rewriting with materialized views
> 
>
> Key: HIVE-14496
> URL: https://issues.apache.org/jira/browse/HIVE-14496
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14496.01.patch, HIVE-14496.02.patch, 
> HIVE-14496.patch
>
>
> Calcite already supports query rewriting using materialized views. We will 
> use it to support this feature in Hive.
> In order to do that, we need to register the existing materialized views with 
> the Calcite view service and enable the materialized view rewriting rules. 
> We should include a HiveConf flag to completely disable query rewriting using 
> materialized views if necessary.
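A minimal Python sketch of the overall shape (a hypothetical toy, not the Calcite API or Hive code; the registry, flag, and exact-match rewrite are all illustrative assumptions): registered materialized views are consulted at rewrite time, gated by an on/off switch analogous to the proposed HiveConf flag.

```python
# Toy registry mapping a view's defining query to the view's table name.
MATERIALIZED_VIEWS = {
    "SELECT dept, SUM(sal) FROM emp GROUP BY dept": "mv_dept_sal",
}
REWRITING_ENABLED = True  # stands in for the proposed HiveConf flag

def rewrite(query):
    # exact-match rewrite only; Calcite's real matching is far more general
    if REWRITING_ENABLED and query in MATERIALIZED_VIEWS:
        return "SELECT * FROM " + MATERIALIZED_VIEWS[query]
    return query  # no matching view: leave the query untouched

print(rewrite("SELECT dept, SUM(sal) FROM emp GROUP BY dept"))
# SELECT * FROM mv_dept_sal
```

Flipping the flag off makes `rewrite` the identity function, which is the behavior the proposed HiveConf setting would provide as an escape hatch.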



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14496) Enable Calcite rewriting with materialized views

2016-10-22 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14496:
---
Attachment: HIVE-14496.02.patch

> Enable Calcite rewriting with materialized views
> 
>
> Key: HIVE-14496
> URL: https://issues.apache.org/jira/browse/HIVE-14496
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14496.01.patch, HIVE-14496.02.patch, 
> HIVE-14496.patch
>
>
> Calcite already supports query rewriting using materialized views. We will 
> use it to support this feature in Hive.
> In order to do that, we need to register the existing materialized views with 
> the Calcite view service and enable the materialized view rewriting rules. 
> We should include a HiveConf flag to completely disable query rewriting using 
> materialized views if necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)