[jira] [Created] (HIVE-21602) Dropping an external table created by migration case should delete the data directory.

2019-04-10 Thread Sankar Hariappan (JIRA)
Sankar Hariappan created HIVE-21602:
---

 Summary: Dropping an external table created by migration case 
should delete the data directory.
 Key: HIVE-21602
 URL: https://issues.apache.org/jira/browse/HIVE-21602
 Project: Hive
  Issue Type: Bug
  Components: repl
Affects Versions: 4.0.0
Reporter: Sankar Hariappan
Assignee: Sankar Hariappan


For an external table, dropping the table does not remove the data location. However, if the source table is managed and is converted to an external table at the target (the migration case), the table location should be removed when the table is dropped. The replication flow should set the additional table property "external.table.purge"="true" when migrating a managed table to an external table.
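
A minimal sketch of the intended behavior on the target cluster (the table name is hypothetical):

{code}
-- Replication would set this property on a table migrated from managed to external:
ALTER TABLE migrated_tbl SET TBLPROPERTIES ('external.table.purge'='true');

-- With the property in place, dropping the external table also deletes its
-- data directory, preserving the original managed-table drop semantics.
DROP TABLE migrated_tbl;
{code}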





[jira] [Created] (HIVE-21601) Hive JDBC Storage Handler query fail because projected timestamp max precision is not valid for mysql

2019-04-10 Thread Rajkumar Singh (JIRA)
Rajkumar Singh created HIVE-21601:
-

 Summary: Hive JDBC Storage Handler query fail because projected 
timestamp max precision is not valid for mysql
 Key: HIVE-21601
 URL: https://issues.apache.org/jira/browse/HIVE-21601
 Project: Hive
  Issue Type: Bug
  Components: Hive, JDBC
Affects Versions: 3.1.1
 Environment: Hive-3.1
Reporter: Rajkumar Singh


Steps to reproduce:
{code}
--mysql table
mysql> show create table dd_timestamp_error;

CREATE TABLE `dd_timestamp_error` (
  `col1` text,
  `col2` timestamp(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

1 row in set (0.00 sec)

-- hive table

CREATE EXTERNAL TABLE `dd_timestamp_error`(
  `col1` string COMMENT 'from deserializer',
  `col2` timestamp COMMENT 'from deserializer')
ROW FORMAT SERDE
  'org.apache.hive.storage.jdbc.JdbcSerDe'
STORED BY
  'org.apache.hive.storage.jdbc.JdbcStorageHandler'
WITH SERDEPROPERTIES (
  'serialization.format'='1')
TBLPROPERTIES (
  'bucketing_version'='2',
  'hive.sql.database.type'='MYSQL',
  'hive.sql.dbcp.maxActive'='1',
  'hive.sql.dbcp.password'='testuser',
  'hive.sql.dbcp.username'='testuser',
  'hive.sql.jdbc.driver'='com.mysql.jdbc.Driver',
  'hive.sql.jdbc.url'='jdbc:mysql://c46-node3.squadron-labs.com/test',
  'hive.sql.table'='dd_timestamp_error',
  'transient_lastDdlTime'='1554910389')

--query failure

0: jdbc:hive2://c46-node2.squadron-labs.com:2>  select * from 
dd_timestamp_error where col2 = '2019-04-03 15:54:21.543654';

Error: java.io.IOException: java.io.IOException: 
org.apache.hive.storage.jdbc.exception.HiveJdbcDatabaseAccessException: Caught 
exception while trying to execute query:You have an error in your SQL syntax; 
check the manual that corresponds to your MySQL server version for the right 
syntax to use near 'TIMESTAMP(9)) AS `col2`


--
explain select * from dd_timestamp_error where col2 = '2019-04-03 15:54:21.543654';

TableScan [TS_0]
  Output:["col1","col2"], properties:{"hive.sql.query":"SELECT `col1`, CAST(TIMESTAMP '2019-04-03 15:54:21.543654000' AS TIMESTAMP(9)) AS `col2`\nFROM `dd_timestamp_error`\nWHERE `col2` = TIMESTAMP '2019-04-03 15:54:21.543654000'","hive.sql.query.fieldNames":"col1,col2","hive.sql.query.fieldTypes":"string,timestamp","hive.sql.query.split":"true"}
{code}

The problem seems to be with convertedFilterExpr (the converted filter expression for "where col2 = '2019-04-03 15:54:21.543654'") when comparing a timestamp column with a constant:

https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/RexNodeConverter.java#L856
https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveTypeSystemImpl.java#L38

Hive's MAX_TIMESTAMP_PRECISION is 9, and Hive appears to push that precision into the query projection (the JDBC project) generated for MySQL, which fails the query because MySQL supports a fractional-second precision of at most 6.
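
For illustration, here is the difference between the pushed-down projection and a form MySQL would accept; the rewritten statement below is a hypothetical sketch of a valid pushdown, not the actual fix:

{code}
-- Query pushed down by Hive today; MySQL rejects the TIMESTAMP(9) cast:
SELECT `col1`, CAST(TIMESTAMP '2019-04-03 15:54:21.543654000' AS TIMESTAMP(9)) AS `col2`
FROM `dd_timestamp_error`
WHERE `col2` = TIMESTAMP '2019-04-03 15:54:21.543654000';

-- A form MySQL accepts, with the precision capped at its maximum of 6:
SELECT `col1`, CAST('2019-04-03 15:54:21.543654' AS DATETIME(6)) AS `col2`
FROM `dd_timestamp_error`
WHERE `col2` = TIMESTAMP '2019-04-03 15:54:21.543654';
{code}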









[jira] [Created] (HIVE-21600) GenTezUtils.removeSemiJoinOperator may throw out of bounds exception for TS with multiple children

2019-04-10 Thread Jesus Camacho Rodriguez (JIRA)
Jesus Camacho Rodriguez created HIVE-21600:
--

 Summary: GenTezUtils.removeSemiJoinOperator may throw out of 
bounds exception for TS with multiple children
 Key: HIVE-21600
 URL: https://issues.apache.org/jira/browse/HIVE-21600
 Project: Hive
  Issue Type: Bug
Reporter: Jesus Camacho Rodriguez
Assignee: Jesus Camacho Rodriguez


The method does not reset the context when it loops through the children of the TableScan (TS). Hence, if the TS has multiple Filter (FIL) children, we can end up with a mangled context.





[jira] [Created] (HIVE-21599) Remove predicate on partition columns from Table Scan operator

2019-04-10 Thread Vineet Garg (JIRA)
Vineet Garg created HIVE-21599:
--

 Summary: Remove predicate on partition columns from Table Scan 
operator
 Key: HIVE-21599
 URL: https://issues.apache.org/jira/browse/HIVE-21599
 Project: Hive
  Issue Type: Improvement
  Components: Query Planning
Reporter: Vineet Garg
Assignee: Vineet Garg


Filter predicates are pushed to the Table Scan operator so that they can be passed to and used by the storage handler or input format. Such predicates may reference partition columns, which are of no use to the storage handler or input format. Therefore, predicates on partition columns should be removed from the TS filter expression.
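
A hypothetical example of the issue (table and column names are made up):

{code}
-- Partitioned table whose data is read through an input format / storage handler.
CREATE EXTERNAL TABLE sales (id INT, amount DOUBLE)
PARTITIONED BY (ds STRING)
STORED AS ORC;

-- Partition pruning already handles ds = '2019-04-10'; only amount > 100 is
-- useful to the storage handler / input format, so the predicate on the
-- partition column should not remain in the TS filter expression.
SELECT id, amount FROM sales WHERE ds = '2019-04-10' AND amount > 100;
{code}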





Review Request 70448: Break up DDLTask - extract Privilege related operations

2019-04-10 Thread Miklos Gergely

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70448/
---

Review request for hive and Zoltan Haindrich.


Bugs: HIVE-21593
https://issues.apache.org/jira/browse/HIVE-21593


Repository: hive-git


Description
---

DDLTask is a huge class, more than 5000 lines long. The related DDLWork is also a huge class, which has a field for each DDL operation it supports. The goal is to refactor these so that everything is cut into more manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:

- have a separate class for each operation
- have a package for each operation group (database ddl, table ddl, etc.), so the number of classes under a package stays manageable
- make all the requests (DDLDesc subclasses) immutable
- DDLTask should be agnostic to the actual operations
- for now, ignore the issue that some operations handled by DDLTask are not actual DDL operations (lock, unlock, desc...)

In the interim, while there are two DDLTask and DDLWork classes in the code base, the new ones in the new package are called DDLTask2 and DDLWork2, avoiding the use of fully qualified class names where both the old and the new classes are in use.

Step #5: extract all the privilege-related operations from the old DDLTask and move them under the new package.


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/ddl/database/AlterDatabaseDesc.java 
547b3515c0 
  
ql/src/java/org/apache/hadoop/hive/ql/ddl/database/ShowCreateDatabaseDesc.java 
29dc266ebf 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/database/ShowDatabasesDesc.java 
4814fd3e8c 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/function/DescFunctionDesc.java 
7f1aa0c90e 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/function/ShowFunctionsDesc.java 
2affa32786 
  
ql/src/java/org/apache/hadoop/hive/ql/ddl/function/ShowFunctionsOperation.java 
d76312d691 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/CreateRoleDesc.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/CreateRoleOperation.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/DropRoleDesc.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/DropRoleOperation.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/GrantOperation.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/GrantRoleDesc.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/GrantRoleOperation.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/RevokeOperation.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/RevokeRoleDesc.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/RevokeRoleOperation.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/RoleUtils.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/SetRoleDesc.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/SetRoleOperation.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/ShowCurrentRoleDesc.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/ShowCurrentRoleOperation.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/ShowGrantOperation.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/ShowPrincipalsDesc.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/ShowPrincipalsOperation.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/ShowRoleGrantDesc.java 
PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/ShowRoleGrantOperation.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/ShowRolesDesc.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/ShowRolesOperation.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/privilege/package-info.java 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/table/DescTableDesc.java 0cfffd2032 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/table/ShowCreateTableDesc.java 
8fa1ef16aa 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/table/ShowTablePropertiesDesc.java 
72caa58607 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/table/ShowTableStatusDesc.java 
8c312a0c5e 
  ql/src/java/org/apache/hadoop/hive/ql/ddl/table/ShowTablesDesc.java 
584433b0a0 
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 269cd852bf 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/load/LoadDatabase.java
 c892b40224 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
d187d197a0 
  
ql/src/java/org/apache/hadoop/hive/ql/parse/authorization/AuthorizationParseUtils.java
 de5c90769a 
  

[jira] [Created] (HIVE-21598) CTAS on ACID table during incremental does not replicate data

2019-04-10 Thread Ashutosh Bapat (JIRA)
Ashutosh Bapat created HIVE-21598:
-

 Summary: CTAS on ACID table during incremental does not replicate 
data
 Key: HIVE-21598
 URL: https://issues.apache.org/jira/browse/HIVE-21598
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, repl
Reporter: Ashutosh Bapat


Scenario

create database dumpdb with dbproperties('repl.source.for'='1,2,3');

use dumpdb;

create table t1 (id int) clustered by(id) into 3 buckets stored as orc 
tblproperties ("transactional"="true");

insert into t1 values(1);

insert into t1 values(2);

repl dump dumpdb;

repl load loaddb from ;

use loaddb;

select * from t1;

+--------+
| t1.id  |
+--------+
| 1      |
| 2      |
+--------+

use dumpdb;

create table t6 stored as orc tblproperties ("transactional"="true") as select 
* from t1;

select * from t6;

+--------+
| t6.id  |
+--------+
| 1      |
| 2      |
+--------+

repl dump dumpdb from 

repl load loaddb from ;

use loaddb;

select * from t6;

+--------+
| t6.id  |
+--------+
+--------+

t6 gets created but there's no data.

 

On further investigation, I see that the CommitTxnEvent's dump directory has _files, but it is empty. It looks like we do not log the names of the files created as part of CTAS.





Review Request 70442: HIVE-21597: WM trigger validation should happen at the time of create or alter

2019-04-10 Thread j . prasanth . j

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/70442/
---

Review request for hive and Daniel Dai.


Bugs: HIVE-21597
https://issues.apache.org/jira/browse/HIVE-21597


Repository: hive-git


Description
---

HIVE-21597: WM trigger validation should happen at the time of create or alter


Diffs
-

  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java 
a1d795fb08b163f016673fbf707b347ba63cf818 
  ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TriggerValidatorRunnable.java 
670184b0ac3ce4b47b3c44b392c54e5a28a9cfed 
  ql/src/test/queries/clientpositive/resourceplan.q 
46aae72a1100f9efd568a606f08242013c6fc016 
  ql/src/test/results/clientpositive/llap/resourceplan.q.out 
9ae68f487f040f06d786a1e4d999f1b41acb6457 
  service/src/java/org/apache/hive/service/server/KillQueryImpl.java 
c7f2c9117b9605882a3de849de592e027dcab484 


Diff: https://reviews.apache.org/r/70442/diff/1/


Testing
---


Thanks,

Prasanth_J



[jira] [Created] (HIVE-21597) WM trigger validation should happen at the time of create or alter

2019-04-10 Thread Prasanth Jayachandran (JIRA)
Prasanth Jayachandran created HIVE-21597:


 Summary: WM trigger validation should happen at the time of create 
or alter
 Key: HIVE-21597
 URL: https://issues.apache.org/jira/browse/HIVE-21597
 Project: Hive
  Issue Type: Bug
Affects Versions: 4.0.0, 3.2.0
Reporter: Prasanth Jayachandran
Assignee: Prasanth Jayachandran


When a query guardrail trigger is created, the trigger expression is not validated immediately upon creating or altering the trigger. Instead, it gets validated at HS2 startup, which can prevent resource plans from being applied correctly. The trigger expression validation should happen in DDLTask.
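
A minimal sketch of the DDL involved (the resource plan, trigger name, and threshold are hypothetical):

{code}
CREATE RESOURCE PLAN rp_guardrails;

-- The trigger expression should be validated here, at CREATE/ALTER TRIGGER time,
-- rather than only when HS2 starts and the resource plan is activated.
CREATE TRIGGER rp_guardrails.kill_long_running
  WHEN ELAPSED_TIME > 600000 DO KILL;
{code}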


