[jira] [Updated] (HIVE-21761) Support table level replication in Hive
[ https://issues.apache.org/jira/browse/HIVE-21761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

mahesh kumar behera updated HIVE-21761:
---------------------------------------
    Description: 
*Requirements:*
{code:java}
- User needs to define a replication policy to replicate any specific table. This enables the user to replicate only the business-critical tables instead of all tables, which may throttle network bandwidth and storage and also slow down Hive replication.
- User needs to define a replication policy using regular expressions (such as db.sales_*), and needs to include additional tables that do not match the given pattern and exclude some tables that do match it.
- User needs to be able to dynamically add/remove tables to the list by manually changing the replication policy at run time.
{code}

*Design:*
{code:java}
1. Hive continues to support the DB-level replication policy format, but logically we support the policy as <dbname>.'t1|t3|…' or <dbname>.'t*'.
2. A regular expression can also be supported as the replication policy, for example <dbname>.'<regex>' or <dbname>.'<*suffix>'.
3. User can provide include and exclude lists to specify the tables covered by the replication policy.
   a. The include list specifies the tables to be included.
   b. The exclude list specifies the tables to be excluded even if they satisfy the include-list expression.
   c. So the set of tables included in the policy is a - b.
   d. For backward compatibility, if no include or exclude list is given, all tables are included in the policy.
4. The new replication-policy format has 3 parts, all separated with a dot (.).
   a. First part is the DB name.
   b. Second part is the include list: a valid Java regex within single quotes.
   c. Third part is the exclude list: a valid Java regex within single quotes.
   - <dbname> -- Full DB replication, as currently supported.
   - <dbname>.'.*?' -- Full DB replication.
   - <dbname>.'t1|t3' -- DB replication with a static list of tables, t1 and t3, included.
   - <dbname>.'(t1*)|t2'.'t100' -- DB replication with all tables having prefix t1, plus table t2 which doesn't have the prefix t1, and excluding t100 which has the prefix t1.
5. If the DB property "repl.source.for" is set, then by default all tables in the DB are enabled for replication and continue to archive deleted data to the CM path.
6. REPL DUMP takes 2 inputs along with the existing FROM and WITH clauses.
   a. REPL DUMP <current_repl_policy> [REPLACE <previous_repl_policy>] [FROM ...] [WITH ...]; current_repl_policy and previous_repl_policy can be any format mentioned in point 4.
   b. The REPLACE clause takes the previous repl policy as input.
   c. The rest of the format remains the same.
7. REPL DUMP on this DB replicates the tables based on current_repl_policy.
8. Single-table replication of the format <dbname>.t1 is not supported. The user can express the same with the <dbname>.'t1' format.
9. Any table that gets added dynamically, either due to a change in the regular expression or by being added to the include list, should be bootstrapped.
   a. Hive automatically figures out the list of newly included tables by comparing the current_repl_policy and previous_repl_policy inputs, and combines the bootstrap dump for the added tables with the incremental dump. Since the first incremental can be combined with the bootstrap dump, this removes the current limitation of the target DB being inconsistent after bootstrap until the first incremental replication runs.
   b. If a table is renamed, it may get dynamically added to or removed from replication based on the defined replication policy plus the include/exclude lists. So Hive performs a bootstrap for a table that becomes included after a rename.
   c. Also, if a renamed table is excluded from the replication policy, the old table needs to be dropped at the target as well.
10. Only the initial bootstrap load expects the target DB to be empty; an intermediate bootstrap of tables due to a regex or include/exclude-list change, or due to renames, does not expect the target DB or table to be empty. If a table with the same name exists during such a bootstrap, the table is overwritten, including its data.
{code}
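A minimal sketch of the include/exclude resolution described in point 3 above (included set = a - b); the class and method names are illustrative, not Hive's implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class ReplPolicySketch {
    // included tables = (matches of include regex) - (matches of exclude regex).
    // A missing include list means "everything" and a missing exclude list excludes
    // nothing (the backward-compatibility rule in point 3d).
    public static List<String> resolve(List<String> allTables, String includeRegex, String excludeRegex) {
        Pattern include = Pattern.compile(includeRegex == null ? ".*" : includeRegex);
        Pattern exclude = excludeRegex == null ? null : Pattern.compile(excludeRegex);
        List<String> result = new ArrayList<>();
        for (String t : allTables) {
            if (include.matcher(t).matches() && (exclude == null || !exclude.matcher(t).matches())) {
                result.add(t);
            }
        }
        return result;
    }
}
```

With the policy example from point 4, `resolve(tables, "(t1.*)|t2", "t100")` keeps all t1-prefixed tables plus t2 while dropping t100.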
[jira] [Commented] (HIVE-21970) Avoid using RegistryUtils.currentUser()
[ https://issues.apache.org/jira/browse/HIVE-21970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880938#comment-16880938 ]

Gopal V commented on HIVE-21970:
--------------------------------

LGTM - +1, tests pending

> Avoid using RegistryUtils.currentUser()
> ---------------------------------------
>
>                 Key: HIVE-21970
>                 URL: https://issues.apache.org/jira/browse/HIVE-21970
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 4.0.0, 3.2.0
>            Reporter: Prasanth Jayachandran
>            Assignee: Prasanth Jayachandran
>            Priority: Major
>         Attachments: HIVE-21970.1.patch
>
> RegistryUtils.currentUser() does replacement of '_' with '-' for DNS reasons.
> This is used inconsistently in some places, causing issues wrt ZK (delegation
> token secret manager, llap cluster membership for external clients).
>
> Replace RegistryUtils.currentUser() with
> UserGroupInformation.getCurrentUser().getShortUserName() for consistency.

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
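For context on the inconsistency described in HIVE-21970 above: the DNS-safe conversion means the two user-name forms disagree for any user with '_' in the name, so code paths that mix them compute different ZK paths. A trivial, illustrative sketch (the class name is ours, not hadoop's):

```java
// Illustrative: the '_' -> '-' DNS-safe conversion that
// RegistryUtils.currentUser() effectively applies to the short user name.
// Any user name containing '_' therefore differs between the two APIs.
public class UserNameSketch {
    public static String dnsSafe(String shortName) {
        return shortName.replace('_', '-');
    }
}
```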
[jira] [Commented] (HIVE-21973) "show locks" print the header twice.
[ https://issues.apache.org/jira/browse/HIVE-21973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880937#comment-16880937 ]

Gopal V commented on HIVE-21973:
--------------------------------

+1, tests pending

> "show locks" print the header twice.
> ------------------------------------
>
>                 Key: HIVE-21973
>                 URL: https://issues.apache.org/jira/browse/HIVE-21973
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive
>    Affects Versions: 3.1.1
>            Reporter: Rajkumar Singh
>            Assignee: Rajkumar Singh
>            Priority: Minor
>         Attachments: HIVE-21973.patch
>
> show locks;
> -- output
> {code:java}
> +---------+----------+-------+-----------+------------+------------+-----------+----------------+----------------+-------------+------+----------+------------+
> | lockid  | database | table | partition | lock_state | blocked_by | lock_type | transaction_id | last_heartbeat | acquired_at | user | hostname | agent_info |
> +---------+----------+-------+-----------+------------+------------+-----------+----------------+----------------+-------------+------+----------+------------+
> | Lock ID | Database | Table | Partition | State      | Blocked By | Type      | Transaction ID | Last Heartbeat | Acquired At | User | Hostname | Agent Info |
> +---------+----------+-------+-----------+------------+------------+-----------+----------------+----------------+-------------+------+----------+------------+
> {code}

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (HIVE-21972) "show transactions" display the header twice
[ https://issues.apache.org/jira/browse/HIVE-21972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880936#comment-16880936 ]

Gopal V commented on HIVE-21972:
--------------------------------

+1, tests pending

> "show transactions" display the header twice
> --------------------------------------------
>
>                 Key: HIVE-21972
>                 URL: https://issues.apache.org/jira/browse/HIVE-21972
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive
>    Affects Versions: 3.1.1
>            Reporter: Rajkumar Singh
>            Assignee: Rajkumar Singh
>            Priority: Major
>         Attachments: HIVE-21972.patch
>
> show transactions;
> {code:java}
> +----------------+-------------------+---------------+---------------------+------+----------+
> | txnid          | state             | startedtime   | lastheartbeattime   | user | host     |
> +----------------+-------------------+---------------+---------------------+------+----------+
> | Transaction ID | Transaction State | Started Time  | Last Heartbeat Time | User | Hostname |
> | 896            | ABORTED           | 1560209607000 | 1560209607000       | hive | hostname |
> +----------------+-------------------+---------------+---------------------+------+----------+
> {code}

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HIVE-21973) "show locks" print the header twice.
[ https://issues.apache.org/jira/browse/HIVE-21973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rajkumar Singh updated HIVE-21973:
----------------------------------
    Attachment: HIVE-21973.patch
        Status: Patch Available  (was: Open)

> "show locks" print the header twice.
> ------------------------------------
>
>                 Key: HIVE-21973
>                 URL: https://issues.apache.org/jira/browse/HIVE-21973
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive
>    Affects Versions: 3.1.1
>            Reporter: Rajkumar Singh
>            Assignee: Rajkumar Singh
>            Priority: Minor
>         Attachments: HIVE-21973.patch
>
> show locks;
> -- output
> {code:java}
> +---------+----------+-------+-----------+------------+------------+-----------+----------------+----------------+-------------+------+----------+------------+
> | lockid  | database | table | partition | lock_state | blocked_by | lock_type | transaction_id | last_heartbeat | acquired_at | user | hostname | agent_info |
> +---------+----------+-------+-----------+------------+------------+-----------+----------------+----------------+-------------+------+----------+------------+
> | Lock ID | Database | Table | Partition | State      | Blocked By | Type      | Transaction ID | Last Heartbeat | Acquired At | User | Hostname | Agent Info |
> +---------+----------+-------+-----------+------------+------------+-----------+----------------+----------------+-------------+------+----------+------------+
> {code}

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Assigned] (HIVE-21973) "show locks" print the header twice.
[ https://issues.apache.org/jira/browse/HIVE-21973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rajkumar Singh reassigned HIVE-21973:
-------------------------------------

> "show locks" print the header twice.
> ------------------------------------
>
>                 Key: HIVE-21973
>                 URL: https://issues.apache.org/jira/browse/HIVE-21973
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive
>    Affects Versions: 3.1.1
>            Reporter: Rajkumar Singh
>            Assignee: Rajkumar Singh
>            Priority: Minor
>
> show locks;
> -- output
> {code:java}
> +---------+----------+-------+-----------+------------+------------+-----------+----------------+----------------+-------------+------+----------+------------+
> | lockid  | database | table | partition | lock_state | blocked_by | lock_type | transaction_id | last_heartbeat | acquired_at | user | hostname | agent_info |
> +---------+----------+-------+-----------+------------+------------+-----------+----------------+----------------+-------------+------+----------+------------+
> | Lock ID | Database | Table | Partition | State      | Blocked By | Type      | Transaction ID | Last Heartbeat | Acquired At | User | Hostname | Agent Info |
> +---------+----------+-------+-----------+------------+------------+-----------+----------------+----------------+-------------+------+----------+------------+
> {code}

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HIVE-21972) "show transactions" display the header twice
[ https://issues.apache.org/jira/browse/HIVE-21972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rajkumar Singh updated HIVE-21972:
----------------------------------
    Attachment: HIVE-21972.patch
        Status: Patch Available  (was: Open)

> "show transactions" display the header twice
> --------------------------------------------
>
>                 Key: HIVE-21972
>                 URL: https://issues.apache.org/jira/browse/HIVE-21972
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive
>    Affects Versions: 3.1.1
>            Reporter: Rajkumar Singh
>            Assignee: Rajkumar Singh
>            Priority: Major
>         Attachments: HIVE-21972.patch
>
> show transactions;
> {code:java}
> +----------------+-------------------+---------------+---------------------+------+----------+
> | txnid          | state             | startedtime   | lastheartbeattime   | user | host     |
> +----------------+-------------------+---------------+---------------------+------+----------+
> | Transaction ID | Transaction State | Started Time  | Last Heartbeat Time | User | Hostname |
> | 896            | ABORTED           | 1560209607000 | 1560209607000       | hive | hostname |
> +----------------+-------------------+---------------+---------------------+------+----------+
> {code}

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HIVE-21972) "show transactions" display the header twice
[ https://issues.apache.org/jira/browse/HIVE-21972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rajkumar Singh updated HIVE-21972:
----------------------------------
    Description: 
show transactions;
{code:java}
+----------------+-------------------+---------------+---------------------+------+------------------+
| txnid          | state             | startedtime   | lastheartbeattime   | user | host             |
+----------------+-------------------+---------------+---------------------+------+------------------+
| Transaction ID | Transaction State | Started Time  | Last Heartbeat Time | User | Hostname         |
| 896            | ABORTED           | 1560209607000 | 1560209607000       | hive | hdp32b.hdp.local |
+----------------+-------------------+---------------+---------------------+------+------------------+
{code}

  was:
show transactions;
+----------------+-------------------+---------------+---------------------+------+------------------+
| txnid          | state             | startedtime   | lastheartbeattime   | user | host             |
+----------------+-------------------+---------------+---------------------+------+------------------+
| Transaction ID | Transaction State | Started Time  | Last Heartbeat Time | User | Hostname         |
| 896            | ABORTED           | 1560209607000 | 1560209607000       | hive | hdp32b.hdp.local |
+----------------+-------------------+---------------+---------------------+------+------------------+

> "show transactions" display the header twice
> --------------------------------------------
>
>                 Key: HIVE-21972
>                 URL: https://issues.apache.org/jira/browse/HIVE-21972
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive
>    Affects Versions: 3.1.1
>            Reporter: Rajkumar Singh
>            Assignee: Rajkumar Singh
>            Priority: Major
>
> show transactions;
> {code:java}
> +----------------+-------------------+---------------+---------------------+------+------------------+
> | txnid          | state             | startedtime   | lastheartbeattime   | user | host             |
> +----------------+-------------------+---------------+---------------------+------+------------------+
> | Transaction ID | Transaction State | Started Time  | Last Heartbeat Time | User | Hostname         |
> | 896            | ABORTED           | 1560209607000 | 1560209607000       | hive | hdp32b.hdp.local |
> +----------------+-------------------+---------------+---------------------+------+------------------+
> {code}

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HIVE-21972) "show transactions" display the header twice
[ https://issues.apache.org/jira/browse/HIVE-21972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rajkumar Singh updated HIVE-21972:
----------------------------------
    Description: 
show transactions;
{code:java}
+----------------+-------------------+---------------+---------------------+------+----------+
| txnid          | state             | startedtime   | lastheartbeattime   | user | host     |
+----------------+-------------------+---------------+---------------------+------+----------+
| Transaction ID | Transaction State | Started Time  | Last Heartbeat Time | User | Hostname |
| 896            | ABORTED           | 1560209607000 | 1560209607000       | hive | hostname |
+----------------+-------------------+---------------+---------------------+------+----------+
{code}

  was:
show transactions;
{code:java}
+----------------+-------------------+---------------+---------------------+------+------------------+
| txnid          | state             | startedtime   | lastheartbeattime   | user | host             |
+----------------+-------------------+---------------+---------------------+------+------------------+
| Transaction ID | Transaction State | Started Time  | Last Heartbeat Time | User | Hostname         |
| 896            | ABORTED           | 1560209607000 | 1560209607000       | hive | hdp32b.hdp.local |
+----------------+-------------------+---------------+---------------------+------+------------------+
{code}

> "show transactions" display the header twice
> --------------------------------------------
>
>                 Key: HIVE-21972
>                 URL: https://issues.apache.org/jira/browse/HIVE-21972
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive
>    Affects Versions: 3.1.1
>            Reporter: Rajkumar Singh
>            Assignee: Rajkumar Singh
>            Priority: Major
>
> show transactions;
> {code:java}
> +----------------+-------------------+---------------+---------------------+------+----------+
> | txnid          | state             | startedtime   | lastheartbeattime   | user | host     |
> +----------------+-------------------+---------------+---------------------+------+----------+
> | Transaction ID | Transaction State | Started Time  | Last Heartbeat Time | User | Hostname |
> | 896            | ABORTED           | 1560209607000 | 1560209607000       | hive | hostname |
> +----------------+-------------------+---------------+---------------------+------+----------+
> {code}

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Assigned] (HIVE-21972) "show transactions" display the header twice
[ https://issues.apache.org/jira/browse/HIVE-21972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rajkumar Singh reassigned HIVE-21972:
-------------------------------------

> "show transactions" display the header twice
> --------------------------------------------
>
>                 Key: HIVE-21972
>                 URL: https://issues.apache.org/jira/browse/HIVE-21972
>             Project: Hive
>          Issue Type: Improvement
>          Components: Hive
>    Affects Versions: 3.1.1
>            Reporter: Rajkumar Singh
>            Assignee: Rajkumar Singh
>            Priority: Major
>
> show transactions;
> +----------------+-------------------+---------------+---------------------+------+------------------+
> | txnid          | state             | startedtime   | lastheartbeattime   | user | host             |
> +----------------+-------------------+---------------+---------------------+------+------------------+
> | Transaction ID | Transaction State | Started Time  | Last Heartbeat Time | User | Hostname         |
> | 896            | ABORTED           | 1560209607000 | 1560209607000       | hive | hdp32b.hdp.local |
> +----------------+-------------------+---------------+---------------------+------+------------------+

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (HIVE-21938) Add database and table filter options to PreUpgradeTool
[ https://issues.apache.org/jira/browse/HIVE-21938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880927#comment-16880927 ]

Hive QA commented on HIVE-21938:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973964/HIVE-21938.7.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:green}SUCCESS:{color} +1 due to 16366 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17920/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17920/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17920/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973964 - PreCommit-HIVE-Build

> Add database and table filter options to PreUpgradeTool
> -------------------------------------------------------
>
>                 Key: HIVE-21938
>                 URL: https://issues.apache.org/jira/browse/HIVE-21938
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>    Affects Versions: 3.1.0
>            Reporter: Krisztian Kasa
>            Assignee: Krisztian Kasa
>            Priority: Blocker
>             Fix For: 4.0.0
>
>         Attachments: HIVE-21938.1.patch, HIVE-21938.2.patch, HIVE-21938.3.patch, HIVE-21938.4.patch, HIVE-21938.5.patch, HIVE-21938.6.patch, HIVE-21938.7.patch
>
> By default the pre upgrade tool scans all databases and tables in the warehouse.
> Add database and table filter options to run the tool for a specific subset of databases and tables only.

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Resolved] (HIVE-21951) Llap query on external table with header or footer returns incorrect row count.
[ https://issues.apache.org/jira/browse/HIVE-21951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sankar Hariappan resolved HIVE-21951.
-------------------------------------
    Resolution: Won't Fix

This fix is partial, as it won't work for cases where the query has predicates or aggregates. HIVE-21924 has a proposal to fix this properly.

> Llap query on external table with header or footer returns incorrect row
> count.
> ------------------------------------------------------------------------
>
>                 Key: HIVE-21951
>                 URL: https://issues.apache.org/jira/browse/HIVE-21951
>             Project: Hive
>          Issue Type: Bug
>          Components: llap, Query Processor
>    Affects Versions: 2.4.0, 4.0.0, 3.2.0
>            Reporter: Sankar Hariappan
>            Assignee: Sankar Hariappan
>            Priority: Major
>         Attachments: HIVE-21951.01.patch, HIVE-21951.02.patch
>
> Create a table with header and footer as follows.
> {code}
> CREATE EXTERNAL TABLE IF NOT EXISTS externaltableOpenCSV (eid int, name String, salary String, destination String)
> COMMENT 'Employee details'
> ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
> STORED AS TEXTFILE
> LOCATION '/externaltableOpenCSV'
> tblproperties ("skip.header.line.count"="1", "skip.footer.line.count"="2");
> {code}
> Now, a query on this table returns an incorrect row count, as the header/footer are not skipped.

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
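A minimal illustration (not Hive's reader code) of the semantics the table properties in HIVE-21951 above are expected to have: with {{skip.header.line.count}}=1 and {{skip.footer.line.count}}=2, the first line and the last two lines of each file are dropped before rows are returned.

```java
import java.util.ArrayList;
import java.util.List;

public class HeaderFooterSkipSketch {
    // Drop the first `header` lines and the last `footer` lines of a file's rows,
    // which is what skip.header.line.count / skip.footer.line.count request.
    public static List<String> effectiveRows(List<String> lines, int header, int footer) {
        int from = Math.min(header, lines.size());
        int to = Math.max(from, lines.size() - footer);
        return new ArrayList<>(lines.subList(from, to));
    }
}
```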
[jira] [Updated] (HIVE-21951) Llap query on external table with header or footer returns incorrect row count.
[ https://issues.apache.org/jira/browse/HIVE-21951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sankar Hariappan updated HIVE-21951:
------------------------------------
    Status: Open  (was: Patch Available)

> Llap query on external table with header or footer returns incorrect row
> count.
> ------------------------------------------------------------------------
>
>                 Key: HIVE-21951
>                 URL: https://issues.apache.org/jira/browse/HIVE-21951
>             Project: Hive
>          Issue Type: Bug
>          Components: llap, Query Processor
>    Affects Versions: 2.4.0, 4.0.0, 3.2.0
>            Reporter: Sankar Hariappan
>            Assignee: Sankar Hariappan
>            Priority: Major
>         Attachments: HIVE-21951.01.patch, HIVE-21951.02.patch
>
> Create a table with header and footer as follows.
> {code}
> CREATE EXTERNAL TABLE IF NOT EXISTS externaltableOpenCSV (eid int, name String, salary String, destination String)
> COMMENT 'Employee details'
> ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
> STORED AS TEXTFILE
> LOCATION '/externaltableOpenCSV'
> tblproperties ("skip.header.line.count"="1", "skip.footer.line.count"="2");
> {code}
> Now, a query on this table returns an incorrect row count, as the header/footer are not skipped.

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (HIVE-21924) Split text files even if header/footer exists
[ https://issues.apache.org/jira/browse/HIVE-21924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880913#comment-16880913 ]

Sankar Hariappan commented on HIVE-21924:
-----------------------------------------

Currently, the header and footer skipping logic is done in FetchOperator, which works fine if we perform "select * from table" but returns incorrect results if predicates or aggregates are specified in the query.
https://github.com/apache/hive/blob/f923b7490a6085292e2d05425dcdc83b994d42e4/ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java#L572-L582
{code}
headerCount = Utilities.getHeaderCount(currDesc.getTableDesc());
footerCount = Utilities.getFooterCount(currDesc.getTableDesc(), job);

// Skip header lines.
opNotEOF = Utilities.skipHeader(currRecReader, headerCount, key, value);

// Initialize footer buffer.
if (opNotEOF && footerCount > 0) {
  footerBuffer = new FooterBuffer();
  opNotEOF = footerBuffer.initializeBuffer(job, currRecReader, footerCount, key, value);
}
{code}
Changing the split generation logic as mentioned in the description would resolve this issue.

> Split text files even if header/footer exists
> ---------------------------------------------
>
>                 Key: HIVE-21924
>                 URL: https://issues.apache.org/jira/browse/HIVE-21924
>             Project: Hive
>          Issue Type: Improvement
>          Components: File Formats
>    Affects Versions: 2.4.0, 4.0.0, 3.2.0
>            Reporter: Prasanth Jayachandran
>            Priority: Major
>
> https://github.com/apache/hive/blob/967a1cc98beede8e6568ce750ebeb6e0d048b8ea/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java#L494-L503
>
> {code}
> int headerCount = 0;
> int footerCount = 0;
> if (table != null) {
>   headerCount = Utilities.getHeaderCount(table);
>   footerCount = Utilities.getFooterCount(table, conf);
>   if (headerCount != 0 || footerCount != 0) {
>     // Input file has header or footer, cannot be splitted.
>     HiveConf.setLongVar(conf, ConfVars.MAPREDMINSPLITSIZE, Long.MAX_VALUE);
>   }
> }
> {code}
> This piece of code makes CSV files (or any text files with header/footer) not splittable if a header or footer is present.
> If only a header is present, we can find the offset after the first line break and use that to split. Similarly for the footer, we can read a few KBs of data at the end and find the last line-break offset, and use that to determine the data range which can be used for splitting. A few reads during split generation are cheaper than not splitting the file at all.

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
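The split-range idea proposed in HIVE-21924 above (skip past the first line break for the header; cut at the last line break for the footer) can be sketched as follows. This is a simplified, in-memory, single-header/single-footer illustration, not Hive's split-generation code:

```java
public class SplitRangeSketch {
    // Returns {start, end} byte offsets of the region that can safely be split:
    // start is the offset just past the first line break (header line skipped),
    // end is the offset of the last line break before the footer line.
    public static long[] splittableRange(byte[] file) {
        int start = 0;
        // Skip past the first line break (the header line).
        while (start < file.length && file[start] != '\n') start++;
        if (start < file.length) start++; // move past the '\n' itself
        int end = file.length;
        // Drop a trailing newline, then cut at the previous line break (the footer line).
        if (end > start && file[end - 1] == '\n') end--;
        while (end > start && file[end - 1] != '\n') end--;
        return new long[]{start, end};
    }
}
```

For a file "hdr\nrow1\nrow2\nftr\n", the returned range covers exactly "row1\nrow2\n"; in a real implementation only the first and last few KBs of the file would be read to find these offsets.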
[jira] [Updated] (HIVE-21966) Llap external client - Arrow Serializer throws ArrayIndexOutOfBoundsException in some cases
[ https://issues.apache.org/jira/browse/HIVE-21966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sankar Hariappan updated HIVE-21966:
------------------------------------
    Resolution: Fixed
 Fix Version/s: 4.0.0
        Status: Resolved  (was: Patch Available)

2.patch is committed to master!
Thanks [~ShubhamChaurasia] for the contribution!

> Llap external client - Arrow Serializer throws ArrayIndexOutOfBoundsException
> in some cases
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-21966
>                 URL: https://issues.apache.org/jira/browse/HIVE-21966
>             Project: Hive
>          Issue Type: Bug
>          Components: llap, Serializers/Deserializers
>    Affects Versions: 3.1.1
>            Reporter: Shubham Chaurasia
>            Assignee: Shubham Chaurasia
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
>         Attachments: HIVE-21966.1.patch, HIVE-21966.2.patch
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When we submit a query through llap-ext-client, the arrow serializer throws
> ArrayIndexOutOfBoundsException when 1), 2) and 3) below are satisfied.
> 1) {{hive.vectorized.execution.filesink.arrow.native.enabled=true}} to take the arrow serializer code path.
> 2) Query contains a filter or limit clause which enforces {{VectorizedRowBatch#selectedInUse=true}}
> 3) Projection involves a column of type {{MultiValuedColumnVector}}.
> Sample stacktrace:
> {code}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 150
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.writeGeneric(Serializer.java:679)
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.writePrimitive(Serializer.java:518)
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:276)
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.writeStruct(Serializer.java:342)
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:282)
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.writeList(Serializer.java:365)
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:279)
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.serializeBatch(Serializer.java:199)
> 	at org.apache.hadoop.hive.ql.exec.vector.filesink.VectorFileSinkArrowOperator.process(VectorFileSinkArrowOperator.java:135)
> 	... 30 more
> {code}
> It can be reproduced from beeline:
> {code}
> CREATE TABLE complex_tbl(c1 array<struct<f1:string,f2:string>>) STORED AS ORC;
> INSERT INTO complex_tbl SELECT ARRAY(NAMED_STRUCT('f1','v11', 'f2','v21'), NAMED_STRUCT('f1','v21', 'f2','v22'));
> {code}
> and when we fire the query {{select * from complex_tbl limit 1}} through llap-ext-client.

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
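Conditions 2) and 3) in HIVE-21966 above hit the selected-vector contract of vectorized execution: when {{selectedInUse}} is true, the valid rows of a batch live at the offsets listed in the {{selected}} array, not at 0..size-1, so indexing column values with the logical row number reads wrong (or out-of-range) slots. A minimal, illustrative model of the correct indexing, not the actual Serializer code:

```java
public class SelectedVectorSketch {
    // Model of a vectorized batch projection: when selectedInUse is true,
    // logical row i must be read from values[selected[i]], not values[i].
    // Using i directly is the kind of bug that produces the AIOOBE above.
    public static long[] project(long[] values, int size, boolean selectedInUse, int[] selected) {
        long[] out = new long[size];
        for (int i = 0; i < size; i++) {
            int row = selectedInUse ? selected[i] : i; // map through selected[]
            out[i] = values[row];
        }
        return out;
    }
}
```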
[jira] [Commented] (HIVE-21966) Llap external client - Arrow Serializer throws ArrayIndexOutOfBoundsException in some cases
[ https://issues.apache.org/jira/browse/HIVE-21966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880901#comment-16880901 ]

Sankar Hariappan commented on HIVE-21966:
-----------------------------------------

+1

> Llap external client - Arrow Serializer throws ArrayIndexOutOfBoundsException
> in some cases
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-21966
>                 URL: https://issues.apache.org/jira/browse/HIVE-21966
>             Project: Hive
>          Issue Type: Bug
>          Components: llap, Serializers/Deserializers
>    Affects Versions: 3.1.1
>            Reporter: Shubham Chaurasia
>            Assignee: Shubham Chaurasia
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-21966.1.patch, HIVE-21966.2.patch
>
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> When we submit a query through llap-ext-client, the arrow serializer throws
> ArrayIndexOutOfBoundsException when 1), 2) and 3) below are satisfied.
> 1) {{hive.vectorized.execution.filesink.arrow.native.enabled=true}} to take the arrow serializer code path.
> 2) Query contains a filter or limit clause which enforces {{VectorizedRowBatch#selectedInUse=true}}
> 3) Projection involves a column of type {{MultiValuedColumnVector}}.
> Sample stacktrace:
> {code}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 150
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.writeGeneric(Serializer.java:679)
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.writePrimitive(Serializer.java:518)
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:276)
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.writeStruct(Serializer.java:342)
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:282)
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.writeList(Serializer.java:365)
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:279)
> 	at org.apache.hadoop.hive.ql.io.arrow.Serializer.serializeBatch(Serializer.java:199)
> 	at org.apache.hadoop.hive.ql.exec.vector.filesink.VectorFileSinkArrowOperator.process(VectorFileSinkArrowOperator.java:135)
> 	... 30 more
> {code}
> It can be reproduced from beeline:
> {code}
> CREATE TABLE complex_tbl(c1 array<struct<f1:string,f2:string>>) STORED AS ORC;
> INSERT INTO complex_tbl SELECT ARRAY(NAMED_STRUCT('f1','v11', 'f2','v21'), NAMED_STRUCT('f1','v21', 'f2','v22'));
> {code}
> and when we fire the query {{select * from complex_tbl limit 1}} through llap-ext-client.

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (HIVE-21938) Add database and table filter options to PreUpgradeTool
[ https://issues.apache.org/jira/browse/HIVE-21938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880900#comment-16880900 ]

Hive QA commented on HIVE-21938:
--------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 10s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 22s{color} | {color:blue} upgrade-acid/pre-upgrade in master has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 10s{color} | {color:red} upgrade-acid/pre-upgrade: The patch generated 1 new + 56 unchanged - 12 fixed = 57 total (was 68) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 11s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17920/dev-support/hive-personality.sh |
| git revision | master / 8e64482 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17920/yetus/diff-checkstyle-upgrade-acid_pre-upgrade.txt |
| modules | C: upgrade-acid/pre-upgrade U: upgrade-acid/pre-upgrade |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17920/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |

This message was automatically generated.

> Add database and table filter options to PreUpgradeTool
> -------------------------------------------------------
>
>                 Key: HIVE-21938
>                 URL: https://issues.apache.org/jira/browse/HIVE-21938
>             Project: Hive
>          Issue Type: Bug
>          Components: Transactions
>    Affects Versions: 3.1.0
>            Reporter: Krisztian Kasa
>            Assignee: Krisztian Kasa
>            Priority: Blocker
>             Fix For: 4.0.0
>
>         Attachments: HIVE-21938.1.patch, HIVE-21938.2.patch, HIVE-21938.3.patch, HIVE-21938.4.patch, HIVE-21938.5.patch, HIVE-21938.6.patch, HIVE-21938.7.patch
>
> By default the pre upgrade tool scans all databases and tables in the warehouse.
> Add database and table filter options to run the tool for a specific subset of databases and tables only.

-- 
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HIVE-21971) HS2 leaks classloader due to `ReflectionUtils::CONSTRUCTOR_CACHE` with temporary functions + GenericUDF
[ https://issues.apache.org/jira/browse/HIVE-21971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajesh Balamohan updated HIVE-21971: Description: https://issues.apache.org/jira/browse/HIVE-10329 helped in moving away from hadoop's ReflectionUtils constructor cache issue (https://issues.apache.org/jira/browse/HADOOP-10513). However, there are corner cases where hadoop's {{ReflectionUtils}} is in use and this causes gradual build up of memory in HS2. I have observed this in Hive 2.3. But the codepath in master for this has not changed much. Easiest way to repro would be to add a temp function which extends {{GenericUDF}}. In {{FunctionRegistry::cloneGenericUDF,}} this would end up using {{org.apache.hadoop.util.ReflectionUtils.newInstance}} which in turn lands up in COSNTRUCTOR_CACHE of ReflectionUtils. {noformat} CREATE TEMPORARY FUNCTION dummy AS 'com.hive.test.DummyGenericUDF' USING JAR 'file:///home/test/udf/dummy.jar'; select dummy(); at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:107) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.cloneGenericUDF(FunctionRegistry.java:1353) at org.apache.hadoop.hive.ql.exec.FunctionInfo.getGenericUDF(FunctionInfo.java:122) at org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:983) at org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:1359) at org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90) at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105) at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89) at org.apache.hadoop.hive.ql.lib.ExpressionWalker.walk(ExpressionWalker.java:76) at org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120) {noformat} Note: Reflection based invocation of 
hadoop's {{ReflectionUtils::clear}} was removed in 2.x. 
> HS2 leaks classloader due to `ReflectionUtils::CONSTRUCTOR_CACHE` with > temporary functions + GenericUDF > - > > Key: HIVE-21971 > URL: https://issues.apache.org/jira/browse/HIVE-21971 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 2.3.4 >Reporter: Rajesh Balamohan >Priority: Critical > > https://issues.apache.org/jira/browse/HIVE-10329 helped in moving away from > hadoop's ReflectionUtils constructor cache issue > (https://issues.apache.org/jira/browse/HADOOP-10513). > However, there are corner cases where hadoop's {{ReflectionUtils}} is in use > and this
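The leak mechanism described above can be sketched in a few lines. This is a minimal illustration, not Hadoop's actual `ReflectionUtils` code: a static map keyed by `Class` holds `Constructor` objects, each of which references its declaring class and, transitively, the classloader that defined it, so entries pin temp-UDF session classloaders in memory.

```java
import java.lang.reflect.Constructor;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch (not Hadoop's actual code) of why a static constructor
// cache keyed by Class can pin classes -- and therefore the classloaders
// that defined them -- for the lifetime of the JVM.
public class ConstructorCacheDemo {

    // Like Hadoop ReflectionUtils' CONSTRUCTOR_CACHE, this static map holds
    // strong references to Constructor objects; each Constructor references
    // its declaring Class, and each Class references its ClassLoader.
    private static final Map<Class<?>, Constructor<?>> CACHE = new ConcurrentHashMap<>();

    public static <T> T newInstance(Class<T> clazz) {
        try {
            Constructor<?> ctor = CACHE.get(clazz);
            if (ctor == null) {
                ctor = clazz.getDeclaredConstructor();
                ctor.setAccessible(true);
                CACHE.put(clazz, ctor); // the entry is never evicted
            }
            return clazz.cast(ctor.newInstance());
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static boolean isCached(Class<?> clazz) {
        return CACHE.containsKey(clazz);
    }

    public static void main(String[] args) {
        newInstance(StringBuilder.class);
        // The class now stays reachable via the static cache even after all
        // other references (e.g. a temp UDF's session classloader) are gone.
        System.out.println(isCached(StringBuilder.class));
    }
}
```

This is also why the removal of the reflection-based `ReflectionUtils::clear` call matters: without it, there is no way to evict these entries when a session ends.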
[jira] [Commented] (HIVE-21928) Fix statistics annotation in nested AND expressions
[ https://issues.apache.org/jira/browse/HIVE-21928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880878#comment-16880878 ] Hive QA commented on HIVE-21928: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973954/HIVE-21928.02.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17919/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17919/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17919/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12973954/HIVE-21928.02.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12973954 - PreCommit-HIVE-Build > Fix for statistics annotation in nested AND expressions > --- > > Key: HIVE-21928 > URL: https://issues.apache.org/jira/browse/HIVE-21928 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Critical > Labels: pull-request-available > Attachments: HIVE-21928.01.patch, HIVE-21928.01.patch, > HIVE-21928.02.patch, HIVE-21928.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Discovered while working on HIVE-21867. Having predicates with nested AND > expressions may result in different stats, even if predicates are basically > similar (from stats estimation standpoint). > For instance, stats for {{AND(x=5, true, true)}} are different from {{x=5}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
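The fix the issue describes amounts to normalizing predicates before stats estimation. The following is a hypothetical sketch (not Hive's optimizer code; all names are illustrative) of the kind of simplification that makes {{AND(x=5, true, true)}} and {{x=5}} yield the same statistics: literal TRUE operands are dropped, and a one-child AND collapses to the child.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of normalizing nested AND expressions so that
// AND(x=5, true, true) and x=5 are equivalent for stats estimation.
public class AndSimplifier {
    interface Expr {}
    static final class True implements Expr {}
    static final class Pred implements Expr {      // e.g. x = 5
        final String text;
        Pred(String text) { this.text = text; }
    }
    static final class And implements Expr {
        final List<Expr> children;
        And(List<Expr> children) { this.children = children; }
    }

    // Drop literal TRUE children; collapse a single-child AND to the child.
    static Expr simplify(Expr e) {
        if (!(e instanceof And)) return e;
        List<Expr> kept = new ArrayList<>();
        for (Expr c : ((And) e).children) {
            Expr s = simplify(c);
            if (!(s instanceof True)) kept.add(s);
        }
        if (kept.isEmpty()) return new True();
        if (kept.size() == 1) return kept.get(0);
        return new And(kept);
    }

    public static void main(String[] args) {
        Expr e = new And(List.of(new Pred("x = 5"), new True(), new True()));
        Expr s = simplify(e);
        // AND(x=5, true, true) reduces to the single predicate x = 5
        System.out.println(s instanceof Pred && ((Pred) s).text.equals("x = 5"));
    }
}
```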
[jira] [Commented] (HIVE-21928) Fix for statistics annotation in nested AND expressions
[ https://issues.apache.org/jira/browse/HIVE-21928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880873#comment-16880873 ] Jesus Camacho Rodriguez commented on HIVE-21928: [~kgyrtkirk], could you take a look? Btw, I removed the continue block. > Fix for statistics annotation in nested AND expressions > --- > > Key: HIVE-21928 > URL: https://issues.apache.org/jira/browse/HIVE-21928 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Critical > Labels: pull-request-available > Attachments: HIVE-21928.01.patch, HIVE-21928.01.patch, > HIVE-21928.02.patch, HIVE-21928.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Discovered while working on HIVE-21867. Having predicates with nested AND > expressions may result in different stats, even if predicates are basically > similar (from stats estimation standpoint). > For instance, stats for {{AND(x=5, true, true)}} are different from {{x=5}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21928) Fix for statistics annotation in nested AND expressions
[ https://issues.apache.org/jira/browse/HIVE-21928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880870#comment-16880870 ] Hive QA commented on HIVE-21928: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973954/HIVE-21928.02.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16362 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17918/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17918/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17918/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12973954 - PreCommit-HIVE-Build > Fix for statistics annotation in nested AND expressions > --- > > Key: HIVE-21928 > URL: https://issues.apache.org/jira/browse/HIVE-21928 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Critical > Labels: pull-request-available > Attachments: HIVE-21928.01.patch, HIVE-21928.01.patch, > HIVE-21928.02.patch, HIVE-21928.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Discovered while working on HIVE-21867. Having predicates with nested AND > expressions may result in different stats, even if predicates are basically > similar (from stats estimation standpoint). > For instance, stats for {{AND(x=5, true, true)}} are different from {{x=5}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21958) The list of table expressions in the inclusion and exclusion list should be separated by '|' instead of a comma.
[ https://issues.apache.org/jira/browse/HIVE-21958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-21958: --- Status: Patch Available (was: Open) > The list of table expressions in the inclusion and exclusion list should be > separated by '|' instead of a comma. > - > > Key: HIVE-21958 > URL: https://issues.apache.org/jira/browse/HIVE-21958 > Project: Hive > Issue Type: Sub-task >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21958.01.patch, HIVE-21958.02.patch, > HIVE-21958.03.patch, HIVE-21958.04.patch, HIVE-21958.05.patch > > Time Spent: 3h 10m > Remaining Estimate: 0h > > Java regex expressions do not support the comma as a separator. If the user > wants multiple expressions in the include or exclude list, the expressions > can be provided separated by the pipe ('|') character. The policy will look > something like db_name.'(t1*)|(t3)'.'t100' -- This message was sent by Atlassian JIRA (v7.6.3#76005)
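The include/exclude semantics can be illustrated with plain `java.util.regex`. This is an illustrative sketch, not Hive's implementation (class and method names here are hypothetical): for a policy such as db.'t.*'.'t100', a table is selected when it matches the include regex and does not match the exclude regex, and '|' inside one regex separates alternatives.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical sketch of evaluating a replication policy's include and
// exclude lists: selected tables = matches(include) minus matches(exclude).
public class ReplPolicyFilter {
    public static List<String> selectTables(List<String> tables,
                                            String includeRegex,
                                            String excludeRegex) {
        Pattern include = Pattern.compile(includeRegex);
        Pattern exclude = excludeRegex == null ? null : Pattern.compile(excludeRegex);
        List<String> selected = new ArrayList<>();
        for (String t : tables) {
            boolean in = include.matcher(t).matches();
            boolean out = exclude != null && exclude.matcher(t).matches();
            if (in && !out) selected.add(t);   // include-set minus exclude-set
        }
        return selected;
    }

    public static void main(String[] args) {
        // Policy db.'t.*'.'t100': t1 and t3 are replicated, t100 is carved
        // out by the exclude list, sales never matched the include list.
        // Inside a regex, '|' separates alternatives (e.g. "t1|t3"); a comma
        // would be a literal character, which is why '|' is the separator.
        List<String> tables = List.of("t1", "t3", "t100", "sales");
        System.out.println(selectTables(tables, "t.*", "t100"));
    }
}
```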
[jira] [Updated] (HIVE-21880) Enable flaky test TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites.
[ https://issues.apache.org/jira/browse/HIVE-21880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-21880: --- Resolution: Fixed Status: Resolved (was: Patch Available) HIVE-21880.06.patch committed to master. Thanks [~ashutosh.bapat] for fixing the issue. > Enable flaky test > TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites. > --- > > Key: HIVE-21880 > URL: https://issues.apache.org/jira/browse/HIVE-21880 > Project: Hive > Issue Type: Bug > Components: repl >Affects Versions: 4.0.0 >Reporter: Sankar Hariappan >Assignee: Ashutosh Bapat >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21880.01.patch, HIVE-21880.02.patch, > HIVE-21880.03.patch, HIVE-21880.04.patch, HIVE-21880.05.patch, > HIVE-21880.06.patch > > Time Spent: 3h > Remaining Estimate: 0h > > Need to enable > TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites > which is disabled as it is flaky and randomly failing with the below error. > {code} > Error Message > Notification events are missing in the meta store. > Stacktrace > java.lang.IllegalStateException: Notification events are missing in the meta > store. 
> at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getNextNotification(HiveMetaStoreClient.java:3246) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212) > at com.sun.proxy.$Proxy58.getNextNotification(Unknown Source) > at > org.apache.hadoop.hive.ql.metadata.events.EventUtils$MSClientNotificationFetcher.getNextNotificationEvents(EventUtils.java:107) > at > org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.fetchNextBatch(EventUtils.java:159) > at > org.apache.hadoop.hive.ql.metadata.events.EventUtils$NotificationEventIterator.hasNext(EventUtils.java:189) > at > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.incrementalDump(ReplDumpTask.java:231) > at > org.apache.hadoop.hive.ql.exec.repl.ReplDumpTask.execute(ReplDumpTask.java:121) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:103) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2709) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2361) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:2028) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1788) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1782) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:162) > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:223) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.run(WarehouseInstance.java:227) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:282) > at > 
org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:265) > at > org.apache.hadoop.hive.ql.parse.WarehouseInstance.dump(WarehouseInstance.java:289) > at > org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites(TestReplicationScenariosAcidTablesBootstrap.java:328) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at
[jira] [Updated] (HIVE-21958) The list of table expressions in the inclusion and exclusion list should be separated by '|' instead of a comma.
[ https://issues.apache.org/jira/browse/HIVE-21958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-21958: --- Attachment: HIVE-21958.05.patch > The list of table expressions in the inclusion and exclusion list should be > separated by '|' instead of a comma. > - > > Key: HIVE-21958 > URL: https://issues.apache.org/jira/browse/HIVE-21958 > Project: Hive > Issue Type: Sub-task >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21958.01.patch, HIVE-21958.02.patch, > HIVE-21958.03.patch, HIVE-21958.04.patch, HIVE-21958.05.patch > > Time Spent: 3h 10m > Remaining Estimate: 0h > > Java regex expressions do not support the comma as a separator. If the user > wants multiple expressions in the include or exclude list, the expressions > can be provided separated by the pipe ('|') character. The policy will look > something like db_name.'(t1*)|(t3)'.'t100' -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21958) The list of table expressions in the inclusion and exclusion list should be separated by '|' instead of a comma.
[ https://issues.apache.org/jira/browse/HIVE-21958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-21958: --- Status: Open (was: Patch Available) > The list of table expressions in the inclusion and exclusion list should be > separated by '|' instead of a comma. > - > > Key: HIVE-21958 > URL: https://issues.apache.org/jira/browse/HIVE-21958 > Project: Hive > Issue Type: Sub-task >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21958.01.patch, HIVE-21958.02.patch, > HIVE-21958.03.patch, HIVE-21958.04.patch > > Time Spent: 3h 10m > Remaining Estimate: 0h > > Java regex expressions do not support the comma as a separator. If the user > wants multiple expressions in the include or exclude list, the expressions > can be provided separated by the pipe ('|') character. The policy will look > something like db_name.'(t1*)|(t3)'.'t100' -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18842) CLUSTERED ON/DISTRIBUTED ON+SORTED ON support for materialized views
[ https://issues.apache.org/jira/browse/HIVE-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880849#comment-16880849 ] Jesus Camacho Rodriguez commented on HIVE-18842: Had to regenerate a few q files in latest patch due to HIVE-21947. > CLUSTERED ON/DISTRIBUTED ON+SORTED ON support for materialized views > > > Key: HIVE-18842 > URL: https://issues.apache.org/jira/browse/HIVE-18842 > Project: Hive > Issue Type: New Feature > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available, todoc4.0 > Attachments: HIVE-18842.01.patch, HIVE-18842.01.patch, > HIVE-18842.02.patch, HIVE-18842.03.patch, HIVE-18842.03.patch, > HIVE-18842.04.patch, HIVE-18842.patch > > Time Spent: 1.5h > Remaining Estimate: 0h > > We should support defining a {{CLUSTERED ON/DISTRIBUTED ON+SORTED ON}} > specification for materialized views. > The syntax should be extended as follows: > {code:sql} > CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db_name.]materialized_view_name > [COMMENT materialized_view_comment] > [PARTITIONED ON (col_name, ...)] > [CLUSTERED ON (col_name, ...) | DISTRIBUTED ON (col_name, ...) SORTED ON > (col_name, ...)] -- NEW! > [ >[ROW FORMAT row_format] >[STORED AS file_format] > | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)] > ] > [LOCATION hdfs_path] > [TBLPROPERTIES (property_name=property_value, ...)] > AS select_statement; > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18842) CLUSTERED ON/DISTRIBUTED ON+SORTED ON support for materialized views
[ https://issues.apache.org/jira/browse/HIVE-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18842: --- Attachment: HIVE-18842.04.patch > CLUSTERED ON/DISTRIBUTED ON+SORTED ON support for materialized views > > > Key: HIVE-18842 > URL: https://issues.apache.org/jira/browse/HIVE-18842 > Project: Hive > Issue Type: New Feature > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available, todoc4.0 > Attachments: HIVE-18842.01.patch, HIVE-18842.01.patch, > HIVE-18842.02.patch, HIVE-18842.03.patch, HIVE-18842.03.patch, > HIVE-18842.04.patch, HIVE-18842.patch > > Time Spent: 1.5h > Remaining Estimate: 0h > > We should support defining a {{CLUSTERED ON/DISTRIBUTED ON+SORTED ON}} > specification for materialized views. > The syntax should be extended as follows: > {code:sql} > CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db_name.]materialized_view_name > [COMMENT materialized_view_comment] > [PARTITIONED ON (col_name, ...)] > [CLUSTERED ON (col_name, ...) | DISTRIBUTED ON (col_name, ...) SORTED ON > (col_name, ...)] -- NEW! > [ >[ROW FORMAT row_format] >[STORED AS file_format] > | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)] > ] > [LOCATION hdfs_path] > [TBLPROPERTIES (property_name=property_value, ...)] > AS select_statement; > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21928) Fix for statistics annotation in nested AND expressions
[ https://issues.apache.org/jira/browse/HIVE-21928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880846#comment-16880846 ] Hive QA commented on HIVE-21928: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 29s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 1s{color} | {color:blue} ql in master has 2252 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 9s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17918/dev-support/hive-personality.sh | | git revision | master / d7214ea | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17918/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Fix for statistics annotation in nested AND expressions > --- > > Key: HIVE-21928 > URL: https://issues.apache.org/jira/browse/HIVE-21928 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Critical > Labels: pull-request-available > Attachments: HIVE-21928.01.patch, HIVE-21928.01.patch, > HIVE-21928.02.patch, HIVE-21928.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Discovered while working on HIVE-21867. Having predicates with nested AND > expressions may result in different stats, even if predicates are basically > similar (from stats estimation standpoint). > For instance, stats for {{AND(x=5, true, true)}} are different from {{x=5}}. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18842) CLUSTERED ON/DISTRIBUTED ON+SORTED ON support for materialized views
[ https://issues.apache.org/jira/browse/HIVE-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880829#comment-16880829 ] Hive QA commented on HIVE-18842: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973944/HIVE-18842.03.patch {color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 16365 tests executed *Failed tests:* {noformat} TestStatsReplicationScenariosACIDNoAutogather - did not produce a TEST-*.xml file (likely timed out) (batchId=251) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_cluster] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_distribute_sort] (batchId=178) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_partition_cluster] (batchId=182) org.apache.hadoop.hive.metastore.TestMetastoreHousekeepingLeaderEmptyConfig.testHouseKeepingThreadExistence (batchId=242) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17917/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17917/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17917/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 5 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12973944 - PreCommit-HIVE-Build > CLUSTERED ON/DISTRIBUTED ON+SORTED ON support for materialized views > > > Key: HIVE-18842 > URL: https://issues.apache.org/jira/browse/HIVE-18842 > Project: Hive > Issue Type: New Feature > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available, todoc4.0 > Attachments: HIVE-18842.01.patch, HIVE-18842.01.patch, > HIVE-18842.02.patch, HIVE-18842.03.patch, HIVE-18842.03.patch, > HIVE-18842.patch > > Time Spent: 1.5h > Remaining Estimate: 0h > > We should support defining a {{CLUSTERED ON/DISTRIBUTED ON+SORTED ON}} > specification for materialized views. > The syntax should be extended as follows: > {code:sql} > CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db_name.]materialized_view_name > [COMMENT materialized_view_comment] > [PARTITIONED ON (col_name, ...)] > [CLUSTERED ON (col_name, ...) | DISTRIBUTED ON (col_name, ...) SORTED ON > (col_name, ...)] -- NEW! > [ >[ROW FORMAT row_format] >[STORED AS file_format] > | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)] > ] > [LOCATION hdfs_path] > [TBLPROPERTIES (property_name=property_value, ...)] > AS select_statement; > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21637) Synchronized metastore cache
[ https://issues.apache.org/jira/browse/HIVE-21637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21637: -- Attachment: HIVE-21637.21.patch > Synchronized metastore cache > > > Key: HIVE-21637 > URL: https://issues.apache.org/jira/browse/HIVE-21637 > Project: Hive > Issue Type: New Feature >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21637-1.patch, HIVE-21637.10.patch, > HIVE-21637.11.patch, HIVE-21637.12.patch, HIVE-21637.13.patch, > HIVE-21637.14.patch, HIVE-21637.15.patch, HIVE-21637.16.patch, > HIVE-21637.17.patch, HIVE-21637.18.patch, HIVE-21637.19.patch, > HIVE-21637.19.patch, HIVE-21637.2.patch, HIVE-21637.20.patch, > HIVE-21637.21.patch, HIVE-21637.3.patch, HIVE-21637.4.patch, > HIVE-21637.5.patch, HIVE-21637.6.patch, HIVE-21637.7.patch, > HIVE-21637.8.patch, HIVE-21637.9.patch > > > Currently, HMS has a cache implemented by CachedStore. The cache is > asynchronous, and in an HMS HA setting we can only get eventual consistency. > In this Jira, we try to make it synchronized. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18842) CLUSTERED ON/DISTRIBUTED ON+SORTED ON support for materialized views
[ https://issues.apache.org/jira/browse/HIVE-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880813#comment-16880813 ] Hive QA commented on HIVE-18842: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 58s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 52s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 33s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 2s{color} | {color:blue} ql in master has 2252 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 42s{color} | {color:red} ql: The patch generated 11 new + 707 unchanged - 2 fixed = 718 total (was 709) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 22s{color} | {color:red} ql generated 2 new + 2251 unchanged - 1 fixed = 2253 total (was 2252) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 47s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Dead store to LA29_5 in org.apache.hadoop.hive.ql.parse.HiveLexer$DFA29.specialStateTransition(int, IntStream) At HiveLexer.java:org.apache.hadoop.hive.ql.parse.HiveLexer$DFA29.specialStateTransition(int, IntStream) At HiveLexer.java:[line 12954] | | | Should org.apache.hadoop.hive.ql.parse.HiveParser$DFA240 be a _static_ inner class? At HiveParser.java:inner class? At HiveParser.java:[lines 48890-48903] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17917/dev-support/hive-personality.sh | | git revision | master / d7214ea | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17917/yetus/diff-checkstyle-ql.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-17917/yetus/new-findbugs-ql.html | | modules | C: common ql itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17917/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > CLUSTERED ON/DISTRIBUTED ON+SORTED ON support for materialized views > > > Key: HIVE-18842 > URL: https://issues.apache.org/jira/browse/HIVE-18842 > Project: Hive > Issue Type: New Feature > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho
[jira] [Commented] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally
[ https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880809#comment-16880809 ] Vineet Garg commented on HIVE-21225: I have removed {{hiddenDirFilter}} in {{HIVE-21225.9.patch}}, but there are still a few cases producing wrong results (as mentioned earlier) > ACID: getAcidState() should cache a recursive dir listing locally > - > > Key: HIVE-21225 > URL: https://issues.apache.org/jira/browse/HIVE-21225 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Gopal V >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-21225.1.patch, HIVE-21225.2.patch, > HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, > HIVE-21225.5.patch, HIVE-21225.6.patch, HIVE-21225.7.patch, > HIVE-21225.7.patch, HIVE-21225.8.patch, HIVE-21225.9.patch, async-pid-44-2.svg > > > Currently getAcidState() makes 3 calls into the FS api which could be > answered by making a single recursive listDir call and reusing the same data > to check for isRawFormat() and isValidBase(). > All delta operations for a single partition can go against a single listed > directory snapshot instead of interacting with the NameNode or ObjectStore > within the inner loop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
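The single-recursive-listing idea described in this issue can be sketched as below. This is an illustrative Java sketch only, not Hive's actual getAcidState()/getHdfsDirSnapshots code: the listing is simulated as a map, and `snapshot`, `dirExists`, and `containsFile` are hypothetical helpers standing in for the real FS-backed checks.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: rather than issuing a separate filesystem call for each check
// (existence, isRawFormat()-style, isValidBase()-style), list the partition
// tree once and answer every later question from the cached snapshot.
public class DirSnapshotSketch {

  /** Builds a path -> file-list snapshot from one (simulated) recursive listing. */
  public static Map<String, List<String>> snapshot(Map<String, List<String>> recursiveListing) {
    return new HashMap<>(recursiveListing);
  }

  /** A check that previously cost an FS round-trip now hits the local map. */
  public static boolean dirExists(Map<String, List<String>> snapshot, String dir) {
    return snapshot.containsKey(dir);
  }

  /** Content checks inside the inner loop also run against the snapshot,
      never the NameNode or ObjectStore. */
  public static boolean containsFile(Map<String, List<String>> snapshot, String dir, String file) {
    return snapshot.containsKey(dir) && snapshot.get(dir).contains(file);
  }
}
```

All delta-directory checks for one partition then share a single snapshot, which is exactly the consistency point the issue description argues for.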
[jira] [Updated] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally
[ https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-21225: --- Attachment: HIVE-21225.9.patch > ACID: getAcidState() should cache a recursive dir listing locally > - > > Key: HIVE-21225 > URL: https://issues.apache.org/jira/browse/HIVE-21225 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Gopal V >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-21225.1.patch, HIVE-21225.2.patch, > HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, > HIVE-21225.5.patch, HIVE-21225.6.patch, HIVE-21225.7.patch, > HIVE-21225.7.patch, HIVE-21225.8.patch, HIVE-21225.9.patch, async-pid-44-2.svg > > > Currently getAcidState() makes 3 calls into the FS api which could be > answered by making a single recursive listDir call and reusing the same data > to check for isRawFormat() and isValidBase(). > All delta operations for a single partition can go against a single listed > directory snapshot instead of interacting with the NameNode or ObjectStore > within the inner loop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally
[ https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-21225: --- Status: Open (was: Patch Available) > ACID: getAcidState() should cache a recursive dir listing locally > - > > Key: HIVE-21225 > URL: https://issues.apache.org/jira/browse/HIVE-21225 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Gopal V >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-21225.1.patch, HIVE-21225.2.patch, > HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, > HIVE-21225.5.patch, HIVE-21225.6.patch, HIVE-21225.7.patch, > HIVE-21225.7.patch, HIVE-21225.8.patch, HIVE-21225.9.patch, async-pid-44-2.svg > > > Currently getAcidState() makes 3 calls into the FS api which could be > answered by making a single recursive listDir call and reusing the same data > to check for isRawFormat() and isValidBase(). > All delta operations for a single partition can go against a single listed > directory snapshot instead of interacting with the NameNode or ObjectStore > within the inner loop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally
[ https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-21225: --- Status: Patch Available (was: Open) > ACID: getAcidState() should cache a recursive dir listing locally > - > > Key: HIVE-21225 > URL: https://issues.apache.org/jira/browse/HIVE-21225 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Gopal V >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-21225.1.patch, HIVE-21225.2.patch, > HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, > HIVE-21225.5.patch, HIVE-21225.6.patch, HIVE-21225.7.patch, > HIVE-21225.7.patch, HIVE-21225.8.patch, HIVE-21225.9.patch, async-pid-44-2.svg > > > Currently getAcidState() makes 3 calls into the FS api which could be > answered by making a single recursive listDir call and reusing the same data > to check for isRawFormat() and isValidBase(). > All delta operations for a single partition can go against a single listed > directory snapshot instead of interacting with the NameNode or ObjectStore > within the inner loop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally
[ https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880807#comment-16880807 ] Vineet Garg commented on HIVE-21225: [~vgumashta] {{hiddenDirFilter}} is filtering out directories for temporary tables, causing wrong results. Is this filter required at all? There is another issue with tables loaded with UNION queries. The directory layout of such a table could be {{//HIVE_UNION_SUBDIR_1}} but {{getHdfsDirSnapshots}} ends up returning only one sub-dir. Can you create a pull request for this patch? > ACID: getAcidState() should cache a recursive dir listing locally > - > > Key: HIVE-21225 > URL: https://issues.apache.org/jira/browse/HIVE-21225 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Gopal V >Assignee: Vaibhav Gumashta >Priority: Major > Attachments: HIVE-21225.1.patch, HIVE-21225.2.patch, > HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, > HIVE-21225.5.patch, HIVE-21225.6.patch, HIVE-21225.7.patch, > HIVE-21225.7.patch, HIVE-21225.8.patch, async-pid-44-2.svg > > > Currently getAcidState() makes 3 calls into the FS api which could be > answered by making a single recursive listDir call and reusing the same data > to check for isRawFormat() and isValidBase(). > All delta operations for a single partition can go against a single listed > directory snapshot instead of interacting with the NameNode or ObjectStore > within the inner loop. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21957) Create temporary table like should omit transactional properties
[ https://issues.apache.org/jira/browse/HIVE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880801#comment-16880801 ] Hive QA commented on HIVE-21957: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973940/HIVE-21957.03.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16362 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17916/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17916/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17916/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12973940 - PreCommit-HIVE-Build > Create temporary table like should omit transactional properties > > > Key: HIVE-21957 > URL: https://issues.apache.org/jira/browse/HIVE-21957 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 4.0.0 >Reporter: Laszlo Pinter >Assignee: Laszlo Pinter >Priority: Major > Attachments: HIVE-21957.01.patch, HIVE-21957.02.patch, > HIVE-21957.03.patch > > > In case of create temporary table like queries, where the source table is > transactional, the transactional properties should not be copied over to the > new table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
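The fix described in this issue amounts to dropping the ACID-related keys when the source table's properties are cloned for the temporary copy. A minimal Java sketch, assuming the clone is represented as a plain property map: the `TempTableProps` helper is hypothetical, while the two key names are the standard Hive ACID table properties.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch: clone the source table's properties but strip the keys that would
// make the temporary table transactional.
public class TempTableProps {
  // Standard Hive ACID table-property keys.
  private static final Set<String> ACID_PROPS =
      Set.of("transactional", "transactional_properties");

  /** Returns a copy of the source properties with the ACID keys removed. */
  public static Map<String, String> withoutTransactional(Map<String, String> source) {
    Map<String, String> copy = new HashMap<>(source);
    copy.keySet().removeAll(ACID_PROPS);
    return copy;
  }
}
```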
[jira] [Updated] (HIVE-21970) Avoid using RegistryUtils.currentUser()
[ https://issues.apache.org/jira/browse/HIVE-21970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-21970: - Status: Patch Available (was: Open) > Avoid using RegistryUtils.currentUser() > --- > > Key: HIVE-21970 > URL: https://issues.apache.org/jira/browse/HIVE-21970 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0, 3.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-21970.1.patch > > > RegistryUtils.currentUser() does replacement of '_' with '-' for DNS reasons. > This is used inconsistently in some places causing issues wrt. ZK (deletion > token secret manager, llap cluster membership for external clients). > > Replace RegistryUtils.currentUser() with > UserGroupInformation.getCurrentUser().getShortUserName() for consistency. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
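The inconsistency described in this issue comes down to the underscore substitution itself. A minimal illustrative sketch, not the actual RegistryUtils or UGI code: `dnsSafe` mimics the DNS-motivated conversion, so a user name containing '_' produces two different strings depending on which API a code path happens to call, and ZK paths built from them no longer match.

```java
// Sketch of the mismatch: registry-style names are DNS-safe, UGI short names
// are not, so the same user yields two different ZK path components.
public class UserNameMismatch {
  /** Mimics the '_' -> '-' replacement done for DNS reasons. */
  public static String dnsSafe(String user) {
    return user.replace('_', '-');
  }

  /** Paths only line up if every code path uses the same form of the name. */
  public static boolean pathsAgree(String userA, String userB) {
    return userA.equals(userB);
  }
}
```

Standardizing on one form everywhere (the issue proposes the UGI short user name) makes `pathsAgree` trivially true for all call sites.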
[jira] [Updated] (HIVE-21970) Avoid using RegistryUtils.currentUser()
[ https://issues.apache.org/jira/browse/HIVE-21970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-21970: - Attachment: HIVE-21970.1.patch > Avoid using RegistryUtils.currentUser() > --- > > Key: HIVE-21970 > URL: https://issues.apache.org/jira/browse/HIVE-21970 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0, 3.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-21970.1.patch > > > RegistryUtils.currentUser() does replacement of '_' with '-' for DNS reasons. > This is used inconsistently in some places causing issues wrt. ZK (deletion > token secret manager, llap cluster membership for external clients). > > Replace RegistryUtils.currentUser() with > UserGroupInformation.getCurrentUser().getShortUserName() for consistency. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21848) Table property name definition between ORC and Parquet encryption
[ https://issues.apache.org/jira/browse/HIVE-21848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880790#comment-16880790 ] Owen O'Malley commented on HIVE-21848: -- [~sha...@uber.com], I think the appropriate response is to throw an exception if there are conflicting directions about how to encrypt. For now, I don't think we should add an exemption list of children to not encrypt, although if there are user requests we can add it later. > Table property name definition between ORC and Parquet encryption > > > Key: HIVE-21848 > URL: https://issues.apache.org/jira/browse/HIVE-21848 > Project: Hive > Issue Type: Task > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Xinli Shang >Assignee: Xinli Shang >Priority: Major > Fix For: 3.0.0 > > > The goal of this Jira is to define a superset of unified table property names > that can be used for both Parquet and ORC column encryption. There is no code > change needed for this Jira. > *Background:* > ORC-14 and Parquet-1178 introduced column encryption to ORC and Parquet. To > configure encryption (e.g. which columns are sensitive, which master key to > use, which algorithm), table properties can be used. It is important that > both Parquet and ORC can use unified names. > According to the slide > [https://www.slideshare.net/oom65/fine-grain-access-control-for-big-data-orc-column-encryption-137308692], > ORC uses table properties like orc.encrypt.pii, orc.encrypt.credit. The > Parquet community is still discussing several approaches; using table > properties is one of the options, but there is no detailed design of the > property names yet. > So it is a good time for the two communities to agree on a unified superset > of names. > *Proposal:* > There are several encryption properties that need to be specified for a > table. Here is the list. This is the superset of Parquet and ORC. Some of > them might not apply to both. 
> # PII columns including nested columns > # Column key metadata, master key metadata > # Encryption algorithm, for example, Parquet supports AES_GCM and AES_CTR. > ORC might support AES_CTR. > # Encryption footer - Parquet allows the footer to be encrypted or plaintext > # Footer key metadata > Here is the table properties proposal. > |*Table Property Name*|*Value*|*Notes*| > |encrypt_algorithm|aes_ctr, aes_gcm|The algorithm to be used for encryption.| > |encrypt_footer_plaintext|true, false|Parquet supports plaintext and encrypted > footers. By default, the footer is encrypted.| > |encrypt_footer_key_metadata|base64 string of footer key metadata|It is up to > the KMS to define what key metadata is. The metadata should have enough > information for the KMS to figure out the corresponding key. | > |encrypt_col_xxx|base64 string of column key metadata|‘xxx’ is the column > name, for example ‘address.zipcode’. > > It is up to the KMS to define what key metadata is. The metadata should have > enough information for the KMS to figure out the corresponding key.| > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
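The proposed property names could be assembled per table roughly as below. This is only a sketch following the names in this proposal thread — the `EncryptionProps.forColumn` helper is hypothetical, and the key metadata bytes are a placeholder rather than real KMS output.

```java
import java.util.Base64;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: build the proposed table-property map for one encrypted column.
public class EncryptionProps {
  public static Map<String, String> forColumn(String column, byte[] keyMetadata) {
    Map<String, String> props = new LinkedHashMap<>();
    props.put("encrypt_algorithm", "aes_gcm");        // or aes_ctr
    props.put("encrypt_footer_plaintext", "false");   // Parquet default: encrypted footer
    // Column key metadata is opaque to the table; the KMS interprets it.
    props.put("encrypt_col_" + column, Base64.getEncoder().encodeToString(keyMetadata));
    return props;
  }
}
```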
[jira] [Updated] (HIVE-21970) Avoid using RegistryUtils.currentUser()
[ https://issues.apache.org/jira/browse/HIVE-21970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-21970: - Description: RegistryUtils.currentUser() does replacement of '_' with '-' for DNS reasons. This is used inconsistently in some places causing issues wrt. ZK (deletion token secret manager, llap cluster membership for external clients). Replace RegistryUtils.currentUser() with UserGroupInformation.getCurrentUser().getShortUserName() for consistency. was: RegistryUtils.currentUser() does replacement of '_' with '-' for DNS reasons. This is used inconsistently in some places causing issues wrt. ZK (deletion token secret manager, llap cluster membership for external clients). Replace RegistryUtils.currentUser() with UserGroupInformation.getLoginUser().getShortUserName() for consistency. > Avoid using RegistryUtils.currentUser() > --- > > Key: HIVE-21970 > URL: https://issues.apache.org/jira/browse/HIVE-21970 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0, 3.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > > RegistryUtils.currentUser() does replacement of '_' with '-' for DNS reasons. > This is used inconsistently in some places causing issues wrt. ZK (deletion > token secret manager, llap cluster membership for external clients). > > Replace RegistryUtils.currentUser() with > UserGroupInformation.getCurrentUser().getShortUserName() for consistency. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21970) Avoid using RegistryUtils.currentUser()
[ https://issues.apache.org/jira/browse/HIVE-21970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-21970: - Description: RegistryUtils.currentUser() does replacement of '_' with '-' for DNS reasons. This is used inconsistently in some places causing issues wrt. ZK (deletion token secret manager, llap cluster membership for external clients). Replace RegistryUtils.currentUser() with UserGroupInformation.getLoginUser().getShortUserName() for consistency. was:RegistryUtils.currentUser() does replacement of '_' with '-' for DNS reasons. This is used inconsistently in some places causing issues wrt. ZK (deletion token secret manager, llap cluster membership for external clients). > Avoid using RegistryUtils.currentUser() > --- > > Key: HIVE-21970 > URL: https://issues.apache.org/jira/browse/HIVE-21970 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0, 3.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > > RegistryUtils.currentUser() does replacement of '_' with '-' for DNS reasons. > This is used inconsistently in some places causing issues wrt. ZK (deletion > token secret manager, llap cluster membership for external clients). > > Replace RegistryUtils.currentUser() with > UserGroupInformation.getLoginUser().getShortUserName() for consistency. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21970) Avoid using RegistryUtils.currentUser()
[ https://issues.apache.org/jira/browse/HIVE-21970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran reassigned HIVE-21970: > Avoid using RegistryUtils.currentUser() > --- > > Key: HIVE-21970 > URL: https://issues.apache.org/jira/browse/HIVE-21970 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0, 3.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > > RegistryUtils.currentUser() does replacement of '_' with '-' for DNS reasons. > This is used inconsistently in some places causing issues wrt. ZK (deletion > token secret manager, llap cluster membership for external clients). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21224) Upgrade tests JUnit3 to JUnit4
[ https://issues.apache.org/jira/browse/HIVE-21224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bruno Pusztahazi updated HIVE-21224: Status: Patch Available (was: Open) > Upgrade tests JUnit3 to JUnit4 > -- > > Key: HIVE-21224 > URL: https://issues.apache.org/jira/browse/HIVE-21224 > Project: Hive > Issue Type: Improvement >Reporter: Bruno Pusztahazi >Assignee: Bruno Pusztahazi >Priority: Major > Attachments: HIVE-21224.1.patch, HIVE-21224.10.patch, > HIVE-21224.11.patch, HIVE-21224.12.patch, HIVE-21224.13.patch, > HIVE-21224.13.patch, HIVE-21224.14.patch, HIVE-21224.2.patch, > HIVE-21224.3.patch, HIVE-21224.4.patch, HIVE-21224.5.patch, > HIVE-21224.6.patch, HIVE-21224.7.patch, HIVE-21224.8.patch, HIVE-21224.9.patch > > > Old JUnit3 tests should be upgraded to JUnit4 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21224) Upgrade tests JUnit3 to JUnit4
[ https://issues.apache.org/jira/browse/HIVE-21224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bruno Pusztahazi updated HIVE-21224: Attachment: HIVE-21224.14.patch > Upgrade tests JUnit3 to JUnit4 > -- > > Key: HIVE-21224 > URL: https://issues.apache.org/jira/browse/HIVE-21224 > Project: Hive > Issue Type: Improvement >Reporter: Bruno Pusztahazi >Assignee: Bruno Pusztahazi >Priority: Major > Attachments: HIVE-21224.1.patch, HIVE-21224.10.patch, > HIVE-21224.11.patch, HIVE-21224.12.patch, HIVE-21224.13.patch, > HIVE-21224.13.patch, HIVE-21224.14.patch, HIVE-21224.2.patch, > HIVE-21224.3.patch, HIVE-21224.4.patch, HIVE-21224.5.patch, > HIVE-21224.6.patch, HIVE-21224.7.patch, HIVE-21224.8.patch, HIVE-21224.9.patch > > > Old JUnit3 tests should be upgraded to JUnit4 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21224) Upgrade tests JUnit3 to JUnit4
[ https://issues.apache.org/jira/browse/HIVE-21224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bruno Pusztahazi updated HIVE-21224: Attachment: HIVE-21224.13.patch > Upgrade tests JUnit3 to JUnit4 > -- > > Key: HIVE-21224 > URL: https://issues.apache.org/jira/browse/HIVE-21224 > Project: Hive > Issue Type: Improvement >Reporter: Bruno Pusztahazi >Assignee: Bruno Pusztahazi >Priority: Major > Attachments: HIVE-21224.1.patch, HIVE-21224.10.patch, > HIVE-21224.11.patch, HIVE-21224.12.patch, HIVE-21224.13.patch, > HIVE-21224.13.patch, HIVE-21224.14.patch, HIVE-21224.2.patch, > HIVE-21224.3.patch, HIVE-21224.4.patch, HIVE-21224.5.patch, > HIVE-21224.6.patch, HIVE-21224.7.patch, HIVE-21224.8.patch, HIVE-21224.9.patch > > > Old JUnit3 tests should be upgraded to JUnit4 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21224) Upgrade tests JUnit3 to JUnit4
[ https://issues.apache.org/jira/browse/HIVE-21224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bruno Pusztahazi updated HIVE-21224: Status: Open (was: Patch Available) > Upgrade tests JUnit3 to JUnit4 > -- > > Key: HIVE-21224 > URL: https://issues.apache.org/jira/browse/HIVE-21224 > Project: Hive > Issue Type: Improvement >Reporter: Bruno Pusztahazi >Assignee: Bruno Pusztahazi >Priority: Major > Attachments: HIVE-21224.1.patch, HIVE-21224.10.patch, > HIVE-21224.11.patch, HIVE-21224.12.patch, HIVE-21224.13.patch, > HIVE-21224.13.patch, HIVE-21224.14.patch, HIVE-21224.2.patch, > HIVE-21224.3.patch, HIVE-21224.4.patch, HIVE-21224.5.patch, > HIVE-21224.6.patch, HIVE-21224.7.patch, HIVE-21224.8.patch, HIVE-21224.9.patch > > > Old JUnit3 tests should be upgraded to JUnit4 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21957) Create temporary table like should omit transactional properties
[ https://issues.apache.org/jira/browse/HIVE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880776#comment-16880776 ] Hive QA commented on HIVE-21957: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 3s{color} | {color:blue} ql in master has 2252 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17916/dev-support/hive-personality.sh | | git revision | master / d7214ea | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17916/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Create temporary table like should omit transactional properties > > > Key: HIVE-21957 > URL: https://issues.apache.org/jira/browse/HIVE-21957 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 4.0.0 >Reporter: Laszlo Pinter >Assignee: Laszlo Pinter >Priority: Major > Attachments: HIVE-21957.01.patch, HIVE-21957.02.patch, > HIVE-21957.03.patch > > > In case of create temporary table like queries, where the source table is > transactional, the transactional properties should not be copied over to the > new table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21959) Clean up Concatenate and Msck DDL commands
[ https://issues.apache.org/jira/browse/HIVE-21959?focusedWorklogId=273571=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-273571 ] ASF GitHub Bot logged work on HIVE-21959: - Author: ASF GitHub Bot Created on: 08/Jul/19 22:34 Start Date: 08/Jul/19 22:34 Worklog Time Spent: 10m Work Description: miklosgergely commented on issue #703: HIVE-21959 Clean up Concatenate and Msck DDL commands URL: https://github.com/apache/hive/pull/703#issuecomment-509416637 New patch passed the tests, please merge this. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 273571) Time Spent: 1h 50m (was: 1h 40m) > Clean up Concatenate and Msck DDL commands > -- > > Key: HIVE-21959 > URL: https://issues.apache.org/jira/browse/HIVE-21959 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available, refactor-ddl > Fix For: 4.0.0 > > Attachments: HIVE-21959.01.patch, HIVE-21959.02.patch, > HIVE-21959.03.patch, HIVE-21959.04.patch > > Time Spent: 1h 50m > Remaining Estimate: 0h > > Concatenate and Msck DDL use basic data structure implementations instead of > their interfaces for variable declaration, like ArrayList instead of List, > LinkedHashMap instead of Map, etc. This is due to a lot of similar issues in > the code, which needs to be cleaned. > Concatenate also had a non-immutable Desc class, that needs to be transformed > into an immutable one. Concatenate operation code should be cut into smaller > functions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21959) Clean up Concatenate and Msck DDL commands
[ https://issues.apache.org/jira/browse/HIVE-21959?focusedWorklogId=273568=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-273568 ] ASF GitHub Bot logged work on HIVE-21959: - Author: ASF GitHub Bot Created on: 08/Jul/19 22:33 Start Date: 08/Jul/19 22:33 Worklog Time Spent: 10m Work Description: miklosgergely commented on pull request #703: HIVE-21959 Clean up Concatenate and Msck DDL commands URL: https://github.com/apache/hive/pull/703#discussion_r301327744 ## File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java ## @@ -108,15 +108,16 @@ public int repair(MsckInfo msckInfo) { boolean success = false; long txnId = -1; int ret = 0; +long partitionExpirySeconds = msckInfo.getPartitionExpirySeconds(); try { Table table = getMsc().getTable(msckInfo.getCatalogName(), msckInfo.getDbName(), msckInfo.getTableName()); qualifiedTableName = Warehouse.getCatalogQualifiedTableName(table); if (getConf().getBoolean(MetastoreConf.ConfVars.MSCK_REPAIR_ENABLE_PARTITION_RETENTION.getHiveName(), false)) { - msckInfo.setPartitionExpirySeconds(PartitionManagementTask.getRetentionPeriodInSeconds(table)); -LOG.info("{} - Retention period ({}s) for partition is enabled for MSCK REPAIR..", - qualifiedTableName, msckInfo.getPartitionExpirySeconds()); +partitionExpirySeconds = PartitionManagementTask.getRetentionPeriodInSeconds(table); Review comment: Now it is set in the constructor, the logic that overwrites the value from the config is there at the creation of the object. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 273568) Time Spent: 1h 20m (was: 1h 10m) > Clean up Concatenate and Msck DDL commands > -- > > Key: HIVE-21959 > URL: https://issues.apache.org/jira/browse/HIVE-21959 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available, refactor-ddl > Fix For: 4.0.0 > > Attachments: HIVE-21959.01.patch, HIVE-21959.02.patch, > HIVE-21959.03.patch, HIVE-21959.04.patch > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Concatenate and Msck DDL use basic data structure implementations instead of > their interfaces for variable declaration, like ArrayList instead of List, > LinkedHashMap instead of Map, etc. This is due to a lot of similar issues in > the code, which needs to be cleaned. > Concatenate also had a non-immutable Desc class, that needs to be transformed > into an immutable one. Concatenate operation code should be cut into smaller > functions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21959) Clean up Concatenate and Msck DDL commands
[ https://issues.apache.org/jira/browse/HIVE-21959?focusedWorklogId=273569=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-273569 ] ASF GitHub Bot logged work on HIVE-21959: - Author: ASF GitHub Bot Created on: 08/Jul/19 22:33 Start Date: 08/Jul/19 22:33 Worklog Time Spent: 10m Work Description: miklosgergely commented on pull request #703: HIVE-21959 Clean up Concatenate and Msck DDL commands URL: https://github.com/apache/hive/pull/703#discussion_r301327935 ## File path: ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java ## @@ -2061,11 +2057,9 @@ private void analyzeAlterTablePartMergeFiles(ASTNode ast, } // throw a HiveException for other than rcfile and orcfile. - if (!((inputFormatClass.equals(RCFileInputFormat.class) || - (inputFormatClass.equals(OrcInputFormat.class) { + if (!(inputFormatClass.equals(RCFileInputFormat.class) || inputFormatClass.equals(OrcInputFormat.class))) { Review comment: indeed, that should be fixed at a later change. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 273569) Time Spent: 1.5h (was: 1h 20m) > Clean up Concatenate and Msck DDL commands > -- > > Key: HIVE-21959 > URL: https://issues.apache.org/jira/browse/HIVE-21959 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available, refactor-ddl > Fix For: 4.0.0 > > Attachments: HIVE-21959.01.patch, HIVE-21959.02.patch, > HIVE-21959.03.patch, HIVE-21959.04.patch > > Time Spent: 1.5h > Remaining Estimate: 0h > > Concatenate and Msck DDL use basic data structure implementations instead of > their interfaces for variable declaration, like ArrayList instead of List, > LinkedHashMap instead of Map, etc. This is due to a lot of similar issues in > the code, which needs to be cleaned. > Concatenate also had a non-immutable Desc class, that needs to be transformed > into an immutable one. Concatenate operation code should be cut into smaller > functions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21959) Clean up Concatenate and Msck DDL commands
[ https://issues.apache.org/jira/browse/HIVE-21959?focusedWorklogId=273570=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-273570 ] ASF GitHub Bot logged work on HIVE-21959: - Author: ASF GitHub Bot Created on: 08/Jul/19 22:33 Start Date: 08/Jul/19 22:33 Worklog Time Spent: 10m Work Description: miklosgergely commented on pull request #703: HIVE-21959 Clean up Concatenate and Msck DDL commands URL: https://github.com/apache/hive/pull/703#discussion_r301327984 ## File path: ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/MsckDesc.java ## @@ -36,19 +34,16 @@ private static final long serialVersionUID = 1L; private final String tableName; - private final ArrayList> partitionsSpecs; + private final List> partitionsSpecs; private final String resFile; private final boolean repairPartitions; private final boolean addPartitions; private final boolean dropPartitions; - public MsckDesc(String tableName, List> partitionSpecs, Path resFile, + public MsckDesc(String tableName, List> partitionsSpecs, Path resFile, boolean repairPartitions, boolean addPartitions, boolean dropPartitions) { this.tableName = tableName; -this.partitionsSpecs = new ArrayList>(partitionSpecs.size()); Review comment: No need for a new copy. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 273570) Time Spent: 1h 40m (was: 1.5h) > Clean up Concatenate and Msck DDL commands > -- > > Key: HIVE-21959 > URL: https://issues.apache.org/jira/browse/HIVE-21959 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available, refactor-ddl > Fix For: 4.0.0 > > Attachments: HIVE-21959.01.patch, HIVE-21959.02.patch, > HIVE-21959.03.patch, HIVE-21959.04.patch > > Time Spent: 1h 40m > Remaining Estimate: 0h > > Concatenate and Msck DDL use basic data structure implementations instead of > their interfaces for variable declaration, like ArrayList instead of List, > LinkedHashMap instead of Map, etc. This is due to a lot of similar issues in > the code, which needs to be cleaned. > Concatenate also had a non-immutable Desc class, that needs to be transformed > into an immutable one. Concatenate operation code should be cut into smaller > functions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
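The immutability goal in the description — together with the "No need for a new copy" review comment above — amounts to final fields assigned once in the constructor and no setters. A minimal hypothetical sketch (not the real MsckDesc) of that shape, wrapping the incoming list instead of defensively copying it:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Hypothetical immutable descriptor: every field is final and set exactly
// once in the constructor. Rather than copying the caller's list, it is
// wrapped so it cannot be modified through this object.
final class ImmutableDescSketch {
    private final String tableName;
    private final List<Map<String, String>> partitionsSpecs;
    private final boolean repairPartitions;

    ImmutableDescSketch(String tableName, List<Map<String, String>> partitionsSpecs,
            boolean repairPartitions) {
        this.tableName = tableName;
        this.partitionsSpecs = Collections.unmodifiableList(partitionsSpecs);
        this.repairPartitions = repairPartitions;
    }

    String getTableName() { return tableName; }
    List<Map<String, String>> getPartitionsSpecs() { return partitionsSpecs; }
    boolean isRepairPartitions() { return repairPartitions; }
}
```

Whether wrapping or copying is appropriate depends on whether the caller may mutate the list afterwards; the review comment above takes the position that a copy is unnecessary here.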
[jira] [Work logged] (HIVE-21959) Clean up Concatenate and Msck DDL commands
[ https://issues.apache.org/jira/browse/HIVE-21959?focusedWorklogId=273567=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-273567 ] ASF GitHub Bot logged work on HIVE-21959: - Author: ASF GitHub Bot Created on: 08/Jul/19 22:32 Start Date: 08/Jul/19 22:32 Worklog Time Spent: 10m Work Description: miklosgergely commented on pull request #703: HIVE-21959 Clean up Concatenate and Msck DDL commands URL: https://github.com/apache/hive/pull/703#discussion_r301327744 ## File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java ## @@ -108,15 +108,16 @@ public int repair(MsckInfo msckInfo) { boolean success = false; long txnId = -1; int ret = 0; +long partitionExpirySeconds = msckInfo.getPartitionExpirySeconds(); try { Table table = getMsc().getTable(msckInfo.getCatalogName(), msckInfo.getDbName(), msckInfo.getTableName()); qualifiedTableName = Warehouse.getCatalogQualifiedTableName(table); if (getConf().getBoolean(MetastoreConf.ConfVars.MSCK_REPAIR_ENABLE_PARTITION_RETENTION.getHiveName(), false)) { - msckInfo.setPartitionExpirySeconds(PartitionManagementTask.getRetentionPeriodInSeconds(table)); -LOG.info("{} - Retention period ({}s) for partition is enabled for MSCK REPAIR..", - qualifiedTableName, msckInfo.getPartitionExpirySeconds()); +partitionExpirySeconds = PartitionManagementTask.getRetentionPeriodInSeconds(table); Review comment: Now it is set in the constructor, the logic that overwrites the value from the config is there. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 273567) Time Spent: 1h 10m (was: 1h) > Clean up Concatenate and Msck DDL commands > -- > > Key: HIVE-21959 > URL: https://issues.apache.org/jira/browse/HIVE-21959 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available, refactor-ddl > Fix For: 4.0.0 > > Attachments: HIVE-21959.01.patch, HIVE-21959.02.patch, > HIVE-21959.03.patch, HIVE-21959.04.patch > > Time Spent: 1h 10m > Remaining Estimate: 0h > > Concatenate and Msck DDL use basic data structure implementations instead of > their interfaces for variable declaration, like ArrayList instead of List, > LinkedHashMap instead of Map, etc. This is due to a lot of similar issues in > the code, which needs to be cleaned. > Concatenate also had a non-immutable Desc class, that needs to be transformed > into an immutable one. Concatenate operation code should be cut into smaller > functions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
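The review exchange above describes a "default, then conditionally override" pattern: the expiry carried by MsckInfo is the starting value, replaced by the table-level retention period only when the partition-retention config flag is enabled. A hypothetical reduction of that logic (not the actual Msck.java code):

```java
// Hypothetical sketch of the override logic discussed in the review:
// start from the value MsckInfo carries, and use the table's retention
// period instead only when the retention feature is switched on.
final class ExpirySketch {
    static long effectiveExpirySeconds(long fromMsckInfo, boolean retentionEnabled,
            long tableRetentionSeconds) {
        return retentionEnabled ? tableRetentionSeconds : fromMsckInfo;
    }
}
```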
[jira] [Work logged] (HIVE-21959) Clean up Concatenate and Msck DDL commands
[ https://issues.apache.org/jira/browse/HIVE-21959?focusedWorklogId=273566=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-273566 ] ASF GitHub Bot logged work on HIVE-21959: - Author: ASF GitHub Bot Created on: 08/Jul/19 22:32 Start Date: 08/Jul/19 22:32 Worklog Time Spent: 10m Work Description: miklosgergely commented on pull request #703: HIVE-21959 Clean up Concatenate and Msck DDL commands URL: https://github.com/apache/hive/pull/703#discussion_r301327649 ## File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/Msck.java ## @@ -108,15 +108,16 @@ public int repair(MsckInfo msckInfo) { boolean success = false; long txnId = -1; int ret = 0; +long partitionExpirySeconds = msckInfo.getPartitionExpirySeconds(); Review comment: fixed This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 273566) Time Spent: 1h (was: 50m) > Clean up Concatenate and Msck DDL commands > -- > > Key: HIVE-21959 > URL: https://issues.apache.org/jira/browse/HIVE-21959 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available, refactor-ddl > Fix For: 4.0.0 > > Attachments: HIVE-21959.01.patch, HIVE-21959.02.patch, > HIVE-21959.03.patch, HIVE-21959.04.patch > > Time Spent: 1h > Remaining Estimate: 0h > > Concatenate and Msck DDL use basic data structure implementations instead of > their interfaces for variable declaration, like ArrayList instead of List, > LinkedHashMap instead of Map, etc. This is due to a lot of similar issues in > the code, which needs to be cleaned. 
> Concatenate also had a non-immutable Desc class, that needs to be transformed > into an immutable one. Concatenate operation code should be cut into smaller > functions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21959) Clean up Concatenate and Msck DDL commands
[ https://issues.apache.org/jira/browse/HIVE-21959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880760#comment-16880760 ] Hive QA commented on HIVE-21959: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973933/HIVE-21959.04.patch {color:green}SUCCESS:{color} +1 due to 8 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16362 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17915/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17915/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17915/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12973933 - PreCommit-HIVE-Build > Clean up Concatenate and Msck DDL commands > -- > > Key: HIVE-21959 > URL: https://issues.apache.org/jira/browse/HIVE-21959 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available, refactor-ddl > Fix For: 4.0.0 > > Attachments: HIVE-21959.01.patch, HIVE-21959.02.patch, > HIVE-21959.03.patch, HIVE-21959.04.patch > > Time Spent: 50m > Remaining Estimate: 0h > > Concatenate and Msck DDL use basic data structure implementations instead of > their interfaces for variable declaration, like ArrayList instead of List, > LinkedHashMap instead of Map, etc. This is due to a lot of similar issues in > the code, which needs to be cleaned. > Concatenate also had a non-immutable Desc class, that needs to be transformed > into an immutable one. 
Concatenate operation code should be cut into smaller > functions. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21959) Clean up Concatenate and Msck DDL commands
[ https://issues.apache.org/jira/browse/HIVE-21959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880721#comment-16880721 ] Hive QA commented on HIVE-21959: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 34s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 10s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 7s{color} | {color:blue} standalone-metastore/metastore-server in master has 179 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 10s{color} | {color:blue} ql in master has 2252 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s{color} | {color:red} standalone-metastore/metastore-server: The patch generated 1 new + 57 unchanged - 8 fixed = 58 total (was 65) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} ql: The patch generated 0 new + 1428 unchanged - 67 fixed = 1428 total (was 1495) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 31m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17915/dev-support/hive-personality.sh | | git revision | master / d7214ea | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17915/yetus/diff-checkstyle-standalone-metastore_metastore-server.txt | | modules | C: standalone-metastore/metastore-server ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17915/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Clean up Concatenate and Msck DDL commands > -- > > Key: HIVE-21959 > URL: https://issues.apache.org/jira/browse/HIVE-21959 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available, refactor-ddl > Fix For: 4.0.0 > > Attachments: HIVE-21959.01.patch, HIVE-21959.02.patch, > HIVE-21959.03.patch, HIVE-21959.04.patch > > Time Spent: 50m > Remaining Estimate: 0h > > Concatenate and Msck DDL use basic data structure implementations instead of > their interfaces
[jira] [Commented] (HIVE-21966) Llap external client - Arrow Serializer throws ArrayIndexOutOfBoundsException in some cases
[ https://issues.apache.org/jira/browse/HIVE-21966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880704#comment-16880704 ] Hive QA commented on HIVE-21966: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973923/HIVE-21966.2.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17914/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17914/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17914/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12973923/HIVE-21966.2.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12973923 - PreCommit-HIVE-Build > Llap external client - Arrow Serializer throws ArrayIndexOutOfBoundsException > in some cases > --- > > Key: HIVE-21966 > URL: https://issues.apache.org/jira/browse/HIVE-21966 > Project: Hive > Issue Type: Bug > Components: llap, Serializers/Deserializers >Affects Versions: 3.1.1 >Reporter: Shubham Chaurasia >Assignee: Shubham Chaurasia >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21966.1.patch, HIVE-21966.2.patch > > Time Spent: 1h 10m > Remaining Estimate: 0h > > When we submit query through llap-ext-client, arrow serializer throws > ArrayIndexOutOfBoundsException when 1), 2) and 3) below are satisfied. > 1) {{hive.vectorized.execution.filesink.arrow.native.enabled=true}} to take > arrow serializer code path. > 2) Query contains a filter or limit clause which enforces > {{VectorizedRowBatch#selectedInUse=true}} > 3) Projection involves a column of type {{MultiValuedColumnVector}}. 
> Sample stacktrace: > {code} > Caused by: java.lang.ArrayIndexOutOfBoundsException: 150 > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.writeGeneric(Serializer.java:679) > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.writePrimitive(Serializer.java:518) > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:276) > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.writeStruct(Serializer.java:342) > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:282) > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.writeList(Serializer.java:365) > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:279) > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.serializeBatch(Serializer.java:199) > at > org.apache.hadoop.hive.ql.exec.vector.filesink.VectorFileSinkArrowOperator.process(VectorFileSinkArrowOperator.java:135) > ... 30 more > {code} > It can be reproduced by: > from beeline: > {code} > CREATE TABLE complex_tbl(c1 array<struct<f1:string,f2:string>>) STORED AS ORC; > INSERT INTO complex_tbl SELECT ARRAY(NAMED_STRUCT('f1','v11', 'f2','v21'), > NAMED_STRUCT('f1','v21', 'f2','v22')); > {code} > and when we fire query: {{select * from complex_tbl limit 1}} through > llap-ext-client. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
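The role of {{selectedInUse}} in this report can be sketched abstractly: when a filter or limit populates the selected array, the physical row behind logical position i is selected[i], not i itself, and indexing by i directly can run past the vector's populated range. This is an illustration of that indexing rule only, not the actual Serializer fix:

```java
// Minimal sketch (not the Hive Serializer code) of the row-indexing rule
// implied by the bug report: with selectedInUse, logical position i maps
// to the physical row selected[i]; without it, logical == physical.
final class SelectedIndexSketch {
    static int physicalRow(boolean selectedInUse, int[] selected, int logical) {
        return selectedInUse ? selected[logical] : logical;
    }
}
```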
[jira] [Commented] (HIVE-21965) Implement parallel processing in HiveStrictManagedMigration
[ https://issues.apache.org/jira/browse/HIVE-21965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880703#comment-16880703 ] Hive QA commented on HIVE-21965: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973918/HIVE-21965.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16361 tests executed *Failed tests:* {noformat} org.apache.hive.hcatalog.templeton.TestConcurrentJobRequestsThreadsAndTimeout.ConcurrentListJobsVerifyExceptions (batchId=207) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17913/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17913/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17913/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12973918 - PreCommit-HIVE-Build > Implement parallel processing in HiveStrictManagedMigration > --- > > Key: HIVE-21965 > URL: https://issues.apache.org/jira/browse/HIVE-21965 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.0 >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-21965.1.patch > > > This process, kicked off from Ambari can take many days for systems with > 1000's of tables. The process needs to support parallel execution as it > iterates through the Databases and Tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
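The improvement requested above — parallelizing the iteration over databases and tables — can be sketched with a fixed-size thread pool; the class and method names here are hypothetical, and the real HiveStrictManagedMigration structure may differ:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: submit one migration task per table to a bounded
// pool instead of processing tables sequentially, then wait for completion.
final class ParallelMigrationSketch {
    static int migrateAll(List<String> tables, int threads) {
        AtomicInteger migrated = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (String table : tables) {
            pool.submit(() -> {
                // Placeholder for the real per-table migration work.
                migrated.incrementAndGet();
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return migrated.get();
    }
}
```

For a migration touching thousands of tables, the pool size would typically be configurable, and per-table failures would need to be collected rather than allowed to abort the whole run.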
[jira] [Updated] (HIVE-21963) TransactionalValidationListener.validateTableStructure should check the partition directories in the case of partitioned tables
[ https://issues.apache.org/jira/browse/HIVE-21963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-21963: -- Attachment: (was: HIVE-21963.1.patch) > TransactionalValidationListener.validateTableStructure should check the > partition directories in the case of partitioned tables > --- > > Key: HIVE-21963 > URL: https://issues.apache.org/jira/browse/HIVE-21963 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-21963.1.patch > > > The transactional validation check is checking just the base table directory, > but for partitioned tables this should be checking the partitioned > directories (some of which may not even be in the base table directory). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21963) TransactionalValidationListener.validateTableStructure should check the partition directories in the case of partitioned tables
[ https://issues.apache.org/jira/browse/HIVE-21963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-21963: -- Attachment: HIVE-21963.1.patch > TransactionalValidationListener.validateTableStructure should check the > partition directories in the case of partitioned tables > --- > > Key: HIVE-21963 > URL: https://issues.apache.org/jira/browse/HIVE-21963 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-21963.1.patch > > > The transactional validation check is checking just the base table directory, > but for partitioned tables this should be checking the partitioned > directories (some of which may not even be in the base table directory). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21969) Beeline doesnt support HiveQL
[ https://issues.apache.org/jira/browse/HIVE-21969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16880693#comment-16880693 ] Mithun Radhakrishnan commented on HIVE-21969: - {quote}I can provide repro steps if they arent obvious {quote} Please do. HiveQL is rather the point of Beeline. > Beeline doesnt support HiveQL > - > > Key: HIVE-21969 > URL: https://issues.apache.org/jira/browse/HIVE-21969 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 2.3.5 >Reporter: Valeriy Trofimov >Priority: Major > > Beeline doesnt support HiveQL. I can provide repro steps if they arent > obvious - run a HiveQL query in Beeline, watch it fail. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21965) Implement parallel processing in HiveStrictManagedMigration
[ https://issues.apache.org/jira/browse/HIVE-21965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880677#comment-16880677 ] Hive QA commented on HIVE-21965: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 1s{color} | {color:blue} ql in master has 2252 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s{color} | {color:red} ql: The patch generated 11 new + 40 unchanged - 12 fixed = 51 total (was 52) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 21s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17913/dev-support/hive-personality.sh | | git revision | master / 67e515f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17913/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17913/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Implement parallel processing in HiveStrictManagedMigration > --- > > Key: HIVE-21965 > URL: https://issues.apache.org/jira/browse/HIVE-21965 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.0 >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-21965.1.patch > > > This process, kicked off from Ambari can take many days for systems with > 1000's of tables. The process needs to support parallel execution as it > iterates through the Databases and Tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21968) Remove index related codes
[ https://issues.apache.org/jira/browse/HIVE-21968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880672#comment-16880672 ] Jesus Camacho Rodriguez commented on HIVE-21968: +1 (pending tests) > Remove index related codes > -- > > Key: HIVE-21968 > URL: https://issues.apache.org/jira/browse/HIVE-21968 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21968.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Hive doesn't support indexes since 3.0.0, still some index related tests were > left behind, and some code to disable them. Also some index related code is > still in the codebase. They should be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21915) Hive with TEZ UNION ALL and UDTF results in data loss
[ https://issues.apache.org/jira/browse/HIVE-21915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-21915: --- Resolution: Fixed Fix Version/s: 4.0.0 Status: Resolved (was: Patch Available) +1. Pushed to master. Thanks for providing the patch [~zhangweilst] > Hive with TEZ UNION ALL and UDTF results in data loss > - > > Key: HIVE-21915 > URL: https://issues.apache.org/jira/browse/HIVE-21915 > Project: Hive > Issue Type: Bug > Components: Query Processor >Affects Versions: 1.2.1 >Reporter: Wei Zhang >Assignee: Wei Zhang >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-21915.01.patch, HIVE-21915.02.patch, > HIVE-21915.03.patch, HIVE-21915.04.patch > > > The HQL syntax is like this: > CREATE TEMPORARY TABLE tez_union_all_loss_data AS > SELECT xxx, yyy, zzz,1 as tag > FROM ods_1 > UNION ALL > SELECT xxx, yyy, zzz, tag > FROM > ( > SELECT xxx > ,get_json_object(get_json_object(tb,'$.a'),'$.b') AS yyy > ,zzz > ,2 as tag > FROM ods_2 > LATERAL VIEW EXPLODE(some_udf(uuu)) team_number AS tb > ) tbl > ; > > With above HQL, we are expecting that rows with both tag = 2 and tag = 1 > appear. In our case however, all the rows with tag = 1 are lost. > Dig deeper we can find that the generated two maps have identical task tmp > paths. And that results from when UDTF is present, the FileSinkOperator will > be processed twice generating the tmp path in > GenTezUtils.removeUnionOperators(); > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21966) Llap external client - Arrow Serializer throws ArrayIndexOutOfBoundsException in some cases
[ https://issues.apache.org/jira/browse/HIVE-21966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880654#comment-16880654 ] Hive QA commented on HIVE-21966: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973923/HIVE-21966.2.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16363 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17912/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17912/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17912/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12973923 - PreCommit-HIVE-Build > Llap external client - Arrow Serializer throws ArrayIndexOutOfBoundsException > in some cases > --- > > Key: HIVE-21966 > URL: https://issues.apache.org/jira/browse/HIVE-21966 > Project: Hive > Issue Type: Bug > Components: llap, Serializers/Deserializers >Affects Versions: 3.1.1 >Reporter: Shubham Chaurasia >Assignee: Shubham Chaurasia >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21966.1.patch, HIVE-21966.2.patch > > Time Spent: 1h 10m > Remaining Estimate: 0h > > When we submit query through llap-ext-client, arrow serializer throws > ArrayIndexOutOfBoundsException when 1), 2) and 3) below are satisfied. > 1) {{hive.vectorized.execution.filesink.arrow.native.enabled=true}} to take > arrow serializer code path. > 2) Query contains a filter or limit clause which enforces > {{VectorizedRowBatch#selectedInUse=true}} > 3) Projection involves a column of type {{MultiValuedColumnVector}}. 
> Sample stacktrace: > {code} > Caused by: java.lang.ArrayIndexOutOfBoundsException: 150 > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.writeGeneric(Serializer.java:679) > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.writePrimitive(Serializer.java:518) > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:276) > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.writeStruct(Serializer.java:342) > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:282) > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.writeList(Serializer.java:365) > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.write(Serializer.java:279) > at > org.apache.hadoop.hive.ql.io.arrow.Serializer.serializeBatch(Serializer.java:199) > at > org.apache.hadoop.hive.ql.exec.vector.filesink.VectorFileSinkArrowOperator.process(VectorFileSinkArrowOperator.java:135) > ... 30 more > {code} > It can be reproduced by: > from beeline: > {code} > CREATE TABLE complex_tbl(c1 array<struct<f1:string,f2:string>>) STORED AS ORC; > INSERT INTO complex_tbl SELECT ARRAY(NAMED_STRUCT('f1','v11', 'f2','v21'), > NAMED_STRUCT('f1','v21', 'f2','v22')); > {code} > and when we fire query: {{select * from complex_tbl limit 1}} through > llap-ext-client. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21966) Llap external client - Arrow Serializer throws ArrayIndexOutOfBoundsException in some cases
[ https://issues.apache.org/jira/browse/HIVE-21966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880623#comment-16880623 ] Hive QA commented on HIVE-21966: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 0s{color} | {color:blue} ql in master has 2252 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 40s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s{color} | {color:red} ql: The patch generated 1 new + 211 unchanged - 0 fixed = 212 total (was 211) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 17s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17912/dev-support/hive-personality.sh | | git revision | master / 67e515f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17912/yetus/diff-checkstyle-ql.txt | | modules | C: ql itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17912/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Llap external client - Arrow Serializer throws ArrayIndexOutOfBoundsException > in some cases > --- > > Key: HIVE-21966 > URL: https://issues.apache.org/jira/browse/HIVE-21966 > Project: Hive > Issue Type: Bug > Components: llap, Serializers/Deserializers >Affects Versions: 3.1.1 >Reporter: Shubham Chaurasia >Assignee: Shubham Chaurasia >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21966.1.patch, HIVE-21966.2.patch > > Time Spent: 1h 10m > Remaining Estimate: 0h > > When we submit query through llap-ext-client, arrow serializer throws > ArrayIndexOutOfBoundsException when 1), 2) and 3) below are satisfied. > 1) {{hive.vectorized.execution.filesink.arrow.native.enabled=true}} to take > arrow serializer code path. > 2) Query contains a filter or limit clause which enforces > {{VectorizedRowBatch#selectedInUse=true}} > 3) Projection involves a column of type {{MultiValuedColumnVector}}.
[jira] [Updated] (HIVE-21967) Clean up CreateTableLikeOperation
[ https://issues.apache.org/jira/browse/HIVE-21967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-21967: -- Labels: pull-request-available refactor-ddl (was: refactor-ddl) > Clean up CreateTableLikeOperation > - > > Key: HIVE-21967 > URL: https://issues.apache.org/jira/browse/HIVE-21967 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available, refactor-ddl > Fix For: 4.0.0 > > Attachments: HIVE-21967.01.patch > > > CreateTableLikeOperation has two sub types, creating from view or table. A > lot of their codes is common, they should be reused. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21967) Clean up CreateTableLikeOperation
[ https://issues.apache.org/jira/browse/HIVE-21967?focusedWorklogId=273467=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-273467 ] ASF GitHub Bot logged work on HIVE-21967: - Author: ASF GitHub Bot Created on: 08/Jul/19 18:39 Start Date: 08/Jul/19 18:39 Worklog Time Spent: 10m Work Description: miklosgergely commented on pull request #708: HIVE-21967 Clean up CreateTableLikeOperation URL: https://github.com/apache/hive/pull/708 CreateTableLikeOperation has two sub types, creating from view or table. A lot of their codes is common, they should be reused. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 273467) Time Spent: 10m Remaining Estimate: 0h > Clean up CreateTableLikeOperation > - > > Key: HIVE-21967 > URL: https://issues.apache.org/jira/browse/HIVE-21967 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available, refactor-ddl > Fix For: 4.0.0 > > Attachments: HIVE-21967.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > CreateTableLikeOperation has two sub types, creating from view or table. A > lot of their codes is common, they should be reused. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21224) Upgrade tests JUnit3 to JUnit4
[ https://issues.apache.org/jira/browse/HIVE-21224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880597#comment-16880597 ] Hive QA commented on HIVE-21224: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973908/HIVE-21224.13.patch {color:green}SUCCESS:{color} +1 due to 165 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 16345 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.ql.io.TestRCFile.initializationError (batchId=310) org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFMonthsBetween.initializationError (batchId=305) org.apache.hadoop.hive.serde2.binarysortable.TestBinarySortableFast.initializationError (batchId=345) org.apache.hadoop.hive.serde2.lazy.TestLazySimpleFast.initializationError (batchId=345) org.apache.hive.common.util.TestFixedSizedObjectPool.initializationError (batchId=297) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17911/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17911/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17911/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 5 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12973908 - PreCommit-HIVE-Build > Upgrade tests JUnit3 to JUnit4 > -- > > Key: HIVE-21224 > URL: https://issues.apache.org/jira/browse/HIVE-21224 > Project: Hive > Issue Type: Improvement >Reporter: Bruno Pusztahazi >Assignee: Bruno Pusztahazi >Priority: Major > Attachments: HIVE-21224.1.patch, HIVE-21224.10.patch, > HIVE-21224.11.patch, HIVE-21224.12.patch, HIVE-21224.13.patch, > HIVE-21224.2.patch, HIVE-21224.3.patch, HIVE-21224.4.patch, > HIVE-21224.5.patch, HIVE-21224.6.patch, HIVE-21224.7.patch, > HIVE-21224.8.patch, HIVE-21224.9.patch > > > Old JUnit3 tests should be upgraded to JUnit4 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21968) Remove index related codes
[ https://issues.apache.org/jira/browse/HIVE-21968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880595#comment-16880595 ] Miklos Gergely commented on HIVE-21968: --- Created [https://github.com/apache/hive/pull/707], [~jcamachorodriguez] please review. Also fixed tons of formatting issues in Operation2Privilege.java > Remove index related codes > -- > > Key: HIVE-21968 > URL: https://issues.apache.org/jira/browse/HIVE-21968 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21968.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Hive doesn't support indexes since 3.0.0, still some index related tests were > left behind, and some code to disable them. Also some index related code is > still in the codebase. They should be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21968) Remove index related codes
[ https://issues.apache.org/jira/browse/HIVE-21968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-21968: -- Labels: pull-request-available (was: ) > Remove index related codes > -- > > Key: HIVE-21968 > URL: https://issues.apache.org/jira/browse/HIVE-21968 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21968.01.patch > > > Hive doesn't support indexes since 3.0.0, still some index related tests were > left behind, and some code to disable them. Also some index related code is > still in the codebase. They should be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21968) Remove index related codes
[ https://issues.apache.org/jira/browse/HIVE-21968?focusedWorklogId=273446=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-273446 ] ASF GitHub Bot logged work on HIVE-21968: - Author: ASF GitHub Bot Created on: 08/Jul/19 18:02 Start Date: 08/Jul/19 18:02 Worklog Time Spent: 10m Work Description: miklosgergely commented on pull request #707: HIVE-21968 Remove index related codes URL: https://github.com/apache/hive/pull/707 Hive doesn't support indexes since 3.0.0, still some index related tests were left behind, and some code to disable them. Also some index related code is still in the codebase. They should be removed. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 273446) Time Spent: 10m Remaining Estimate: 0h > Remove index related codes > -- > > Key: HIVE-21968 > URL: https://issues.apache.org/jira/browse/HIVE-21968 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21968.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Hive doesn't support indexes since 3.0.0, still some index related tests were > left behind, and some code to disable them. Also some index related code is > still in the codebase. They should be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21968) Remove index related codes
[ https://issues.apache.org/jira/browse/HIVE-21968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21968: -- Attachment: (was: HIVE-21968.01.patch) > Remove index related codes > -- > > Key: HIVE-21968 > URL: https://issues.apache.org/jira/browse/HIVE-21968 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-21968.01.patch > > > Hive doesn't support indexes since 3.0.0, still some index related tests were > left behind, and some code to disable them. Also some index related code is > still in the codebase. They should be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21968) Remove index related codes
[ https://issues.apache.org/jira/browse/HIVE-21968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21968: -- Attachment: HIVE-21968.01.patch > Remove index related codes > -- > > Key: HIVE-21968 > URL: https://issues.apache.org/jira/browse/HIVE-21968 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-21968.01.patch > > > Hive doesn't support indexes since 3.0.0, still some index related tests were > left behind, and some code to disable them. Also some index related code is > still in the codebase. They should be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21224) Upgrade tests JUnit3 to JUnit4
[ https://issues.apache.org/jira/browse/HIVE-21224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880591#comment-16880591 ] Hive QA commented on HIVE-21224: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 35s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 52s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 5s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 42s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 35s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 42s{color} | {color:blue} serde in master has 193 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 0s{color} | {color:blue} ql in master has 2252 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 39s{color} | {color:blue} service in master has 48 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 31s{color} | {color:blue} accumulo-handler in master has 21 extant Findbugs warnings. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} cli in master has 8 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 23s{color} | {color:blue} contrib in master has 10 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 29s{color} | {color:blue} druid-handler in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 34s{color} | {color:blue} hbase-handler in master has 15 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 36s{color} | {color:blue} hcatalog/core in master has 29 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 28s{color} | {color:blue} hcatalog/webhcat/java-client in master has 3 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 40s{color} | {color:blue} hcatalog/webhcat/svr in master has 96 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 39s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 7s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 57s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s{color} | {color:red} common: The patch generated 1 new + 24 unchanged - 3 fixed = 25 total (was 27) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s{color} | {color:red} serde: The patch generated 271 new + 267 unchanged - 43 fixed = 538 total (was 310) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 56s{color} | {color:red} ql: The patch generated 8 new + 1546 unchanged - 132 fixed = 1554 total (was 1678) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s{color} | {color:red} service: The patch generated 1 new + 17 unchanged - 6 fixed = 18 total (was 23) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} The patch accumulo-handler passed checkstyle {color} | | {color:green}+1{color} |
[jira] [Updated] (HIVE-21968) Remove index related codes
[ https://issues.apache.org/jira/browse/HIVE-21968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21968: -- Attachment: HIVE-21968.01.patch > Remove index related codes > -- > > Key: HIVE-21968 > URL: https://issues.apache.org/jira/browse/HIVE-21968 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-21968.01.patch > > > Hive doesn't support indexes since 3.0.0, still some index related tests were > left behind, and some code to disable them. Also some index related code is > still in the codebase. They should be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21968) Remove index related codes
[ https://issues.apache.org/jira/browse/HIVE-21968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21968: -- Status: Patch Available (was: Open) > Remove index related codes > -- > > Key: HIVE-21968 > URL: https://issues.apache.org/jira/browse/HIVE-21968 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Fix For: 4.0.0 > > Attachments: HIVE-21968.01.patch > > > Hive doesn't support indexes since 3.0.0, still some index related tests were > left behind, and some code to disable them. Also some index related code is > still in the codebase. They should be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21968) Remove index related codes
[ https://issues.apache.org/jira/browse/HIVE-21968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21968: -- Description: Hive doesn't support indexes since 3.0.0, still some index related tests were left behind, and some code to disable them. Also some index related code is still in the codebase. They should be removed. (was: Hive doesn't support indexes since 3.0.0, still some index related tests and some code were left behind, and some code to disable them. They should be removed.) > Remove index related codes > -- > > Key: HIVE-21968 > URL: https://issues.apache.org/jira/browse/HIVE-21968 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Fix For: 4.0.0 > > > Hive doesn't support indexes since 3.0.0, still some index related tests were > left behind, and some code to disable them. Also some index related code is > still in the codebase. They should be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21968) Remove index related codes
[ https://issues.apache.org/jira/browse/HIVE-21968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21968: -- Description: Hive doesn't support indexes since 3.0.0, still some index related tests and some code were left behind, and some code to disable them. They should be removed. (was: Hive doesn't support indexes since 3.0.0, still some index related tests were left behind, and some code to disable them. They should be removed.) > Remove index related codes > -- > > Key: HIVE-21968 > URL: https://issues.apache.org/jira/browse/HIVE-21968 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Fix For: 4.0.0 > > > Hive doesn't support indexes since 3.0.0, still some index related tests and > some code were left behind, and some code to disable them. They should be > removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21968) Remove index related codes
[ https://issues.apache.org/jira/browse/HIVE-21968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21968: -- Summary: Remove index related codes (was: Remove index related tests) > Remove index related codes > -- > > Key: HIVE-21968 > URL: https://issues.apache.org/jira/browse/HIVE-21968 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Fix For: 4.0.0 > > > Hive doesn't support indexes since 3.0.0, still some index related tests were > left behind, and some code to disable them. They should be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21968) Remove index related tests
[ https://issues.apache.org/jira/browse/HIVE-21968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely reassigned HIVE-21968: - > Remove index related tests > -- > > Key: HIVE-21968 > URL: https://issues.apache.org/jira/browse/HIVE-21968 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Fix For: 4.0.0 > > > Hive doesn't support indexes since 3.0.0, still some index related tests were > left behind, and some code to disable them. They should be removed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21938) Add database and table filter options to PreUpgradeTool
[ https://issues.apache.org/jira/browse/HIVE-21938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krisztian Kasa updated HIVE-21938: -- Status: Open (was: Patch Available) > Add database and table filter options to PreUpgradeTool > --- > > Key: HIVE-21938 > URL: https://issues.apache.org/jira/browse/HIVE-21938 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.1.0 >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Blocker > Fix For: 4.0.0 > > Attachments: HIVE-21938.1.patch, HIVE-21938.2.patch, > HIVE-21938.3.patch, HIVE-21938.4.patch, HIVE-21938.5.patch, > HIVE-21938.6.patch, HIVE-21938.7.patch > > > By default pre upgrade tool scans all databases and tables in the warehouse. > Add database and table filter options to run the tool for a specific subset > of databases and tables only. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21938) Add database and table filter options to PreUpgradeTool
[ https://issues.apache.org/jira/browse/HIVE-21938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krisztian Kasa updated HIVE-21938: -- Attachment: HIVE-21938.7.patch > Add database and table filter options to PreUpgradeTool > --- > > Key: HIVE-21938 > URL: https://issues.apache.org/jira/browse/HIVE-21938 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.1.0 >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Blocker > Fix For: 4.0.0 > > Attachments: HIVE-21938.1.patch, HIVE-21938.2.patch, > HIVE-21938.3.patch, HIVE-21938.4.patch, HIVE-21938.5.patch, > HIVE-21938.6.patch, HIVE-21938.7.patch > > > By default pre upgrade tool scans all databases and tables in the warehouse. > Add database and table filter options to run the tool for a specific subset > of databases and tables only. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21938) Add database and table filter options to PreUpgradeTool
[ https://issues.apache.org/jira/browse/HIVE-21938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krisztian Kasa updated HIVE-21938: -- Status: Patch Available (was: Open) > Add database and table filter options to PreUpgradeTool > --- > > Key: HIVE-21938 > URL: https://issues.apache.org/jira/browse/HIVE-21938 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.1.0 >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Blocker > Fix For: 4.0.0 > > Attachments: HIVE-21938.1.patch, HIVE-21938.2.patch, > HIVE-21938.3.patch, HIVE-21938.4.patch, HIVE-21938.5.patch, > HIVE-21938.6.patch, HIVE-21938.7.patch > > > By default pre upgrade tool scans all databases and tables in the warehouse. > Add database and table filter options to run the tool for a specific subset > of databases and tables only. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
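The database/table filtering described for HIVE-21938 amounts to restricting the warehouse scan to names matching user-supplied patterns. A minimal sketch of that idea, assuming regex-style filters (the helper name and filter semantics here are hypothetical, not the tool's actual options):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class UpgradeToolFilterDemo {
    // Hypothetical filter: keep only names fully matching the given regex.
    // Sketches the "scan a subset instead of everything" behavior only.
    static List<String> filterNames(List<String> names, String regex) {
        Pattern p = Pattern.compile(regex);
        return names.stream()
                .filter(n -> p.matcher(n).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Scan only databases starting with "sales" instead of the whole warehouse.
        List<String> dbs = List.of("sales", "hr", "sales_eu");
        System.out.println(filterNames(dbs, "sales.*"));
    }
}
```

The same matching would be applied a second time per database to narrow the table list.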
[jira] [Updated] (HIVE-21967) Clean up CreateTableLikeOperation
[ https://issues.apache.org/jira/browse/HIVE-21967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21967: -- Attachment: HIVE-21967.01.patch > Clean up CreateTableLikeOperation > - > > Key: HIVE-21967 > URL: https://issues.apache.org/jira/browse/HIVE-21967 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: refactor-ddl > Fix For: 4.0.0 > > Attachments: HIVE-21967.01.patch > > > CreateTableLikeOperation has two sub types, creating from view or table. A > lot of their codes is common, they should be reused. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21967) Clean up CreateTableLikeOperation
[ https://issues.apache.org/jira/browse/HIVE-21967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21967: -- Status: Patch Available (was: Open) > Clean up CreateTableLikeOperation > - > > Key: HIVE-21967 > URL: https://issues.apache.org/jira/browse/HIVE-21967 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: refactor-ddl > Fix For: 4.0.0 > > Attachments: HIVE-21967.01.patch > > > CreateTableLikeOperation has two sub types, creating from view or table. A > lot of their codes is common, they should be reused. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21967) Clean up CreateTableLikeOperation
[ https://issues.apache.org/jira/browse/HIVE-21967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely reassigned HIVE-21967: - > Clean up CreateTableLikeOperation > - > > Key: HIVE-21967 > URL: https://issues.apache.org/jira/browse/HIVE-21967 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: refactor-ddl > Fix For: 4.0.0 > > > CreateTableLikeOperation has two sub types, creating from view or table. A > lot of their codes is common, they should be reused. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21958) The list of table expression in the inclusion and exclusion list should be separated by '|' instead of comma.
[ https://issues.apache.org/jira/browse/HIVE-21958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880529#comment-16880529 ] Hive QA commented on HIVE-21958: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973905/HIVE-21958.04.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 16361 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty (batchId=232) org.apache.hadoop.hive.metastore.TestPartitionManagement.testPartitionDiscoveryTransactionalTable (batchId=222) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17910/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17910/console Test logs: 
http://104.198.109.242/logs/PreCommit-HIVE-Build-17910/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 15 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12973905 - PreCommit-HIVE-Build > The list of table expression in the inclusion and exclusion list should be > separated by '|' instead of comma. > - > > Key: HIVE-21958 > URL: https://issues.apache.org/jira/browse/HIVE-21958 > Project: Hive > Issue Type: Sub-task >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21958.01.patch, HIVE-21958.02.patch, > HIVE-21958.03.patch, HIVE-21958.04.patch > > Time Spent: 3h 10m > Remaining Estimate: 0h > > Java regex expression does not support comma. If user wants multiple > expression to be present in the include or exclude list, then the expressions > can be provided separated by pipe ('|') character. The policy will look > something like db_name.'(t1*)|(t3)'.'t100' -- This message was sent by Atlassian JIRA (v7.6.3#76005)
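The pipe-separated policy format described above can be exercised directly with `java.util.regex`. This is an illustrative sketch — the table names and the `(t1.*)|(t3)` / `t100` patterns are made up for demonstration, not taken from the patch — of how an include/exclude regex pair selects tables for replication:

```java
import java.util.regex.Pattern;

// Hypothetical check mirroring a policy such as db.'(t1.*)|(t3)'.'t100':
// a table is replicated when it matches the include regex
// and does not match the exclude regex.
public class ReplPolicyCheck {
    static boolean isReplicated(String table, String includeRegex, String excludeRegex) {
        return Pattern.matches(includeRegex, table)
                && !Pattern.matches(excludeRegex, table);
    }

    public static void main(String[] args) {
        // Multiple expressions are joined with '|', as the fix requires;
        // a comma would be treated as a literal character by the regex engine.
        String include = "(t1.*)|(t3)";
        String exclude = "t100";
        for (String t : new String[] {"t1", "t100", "t3", "t2"}) {
            System.out.println(t + " -> " + isReplicated(t, include, exclude));
        }
        // t1 and t3 pass the include list; t100 matches the exclude list;
        // t2 matches neither alternative in the include list.
    }
}
```

Note that `t100` here is rejected by the exclude pattern even though `t1.*` would otherwise admit it — which is exactly the a-minus-b semantics the design in HIVE-21761 describes.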
[jira] [Commented] (HIVE-21958) The list of table expression in the inclusion and exclusion list should be separated by '|' instead of comma.
[ https://issues.apache.org/jira/browse/HIVE-21958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880503#comment-16880503 ] Hive QA commented on HIVE-21958: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 47s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 11s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 28s{color} | {color:blue} standalone-metastore/metastore-common in master has 31 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 7s{color} | {color:blue} ql in master has 2252 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 38s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 10s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 21s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s{color} | {color:red} itests/hive-unit: The patch generated 2 new + 106 unchanged - 0 fixed = 108 total (was 106) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 17s{color} | {color:red} ql generated 9 new + 2243 unchanged - 9 fixed = 2252 total (was 2252) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 38s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 1s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Should org.apache.hadoop.hive.ql.parse.HiveParser$DFA235 be a _static_ inner class? At HiveParser.java:inner class? 
At HiveParser.java:[lines 48087-48100] | | | Dead store to LA29_128 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA29.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA29.specialStateTransition(int, IntStream) At HiveParser.java:[line 47827] | | | Dead store to LA29_130 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA29.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA29.specialStateTransition(int, IntStream) At HiveParser.java:[line 47840] | | | Dead store to LA29_132 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA29.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA29.specialStateTransition(int, IntStream) At HiveParser.java:[line 47853] | | | Dead store to LA29_134 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA29.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA29.specialStateTransition(int, IntStream) At HiveParser.java:[line 47866] | | | Dead store to LA29_136 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA29.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA29.specialStateTransition(int, IntStream) At HiveParser.java:[line 47879] | | | Dead store to LA29_138 in
[jira] [Updated] (HIVE-21928) Fix for statistics annotation in nested AND expressions
[ https://issues.apache.org/jira/browse/HIVE-21928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-21928: --- Attachment: HIVE-21928.02.patch > Fix for statistics annotation in nested AND expressions > --- > > Key: HIVE-21928 > URL: https://issues.apache.org/jira/browse/HIVE-21928 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Critical > Labels: pull-request-available > Attachments: HIVE-21928.01.patch, HIVE-21928.01.patch, > HIVE-21928.02.patch, HIVE-21928.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Discovered while working on HIVE-21867. Having predicates with nested AND > expressions may result in different stats, even if predicates are basically > similar (from stats estimation standpoint). > For instance, stats for {{AND(x=5, true, true)}} are different from {{x=5}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
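The stats mismatch the issue describes can be observed by comparing annotated plans for equivalent predicates. A minimal sketch — table name `t` and column `x` are illustrative, not from the patch:

```sql
-- Before the fix, the nested form could carry different estimated row
-- counts than the flat form, even though the predicates are equivalent
-- from a stats-estimation standpoint. After the fix, both EXPLAIN
-- outputs should show the same "Statistics" annotation on the filter.
EXPLAIN SELECT * FROM t WHERE x = 5;
EXPLAIN SELECT * FROM t WHERE (x = 5 AND true) AND true;
```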
[jira] [Updated] (HIVE-21928) Fix for statistics annotation in nested AND expressions
[ https://issues.apache.org/jira/browse/HIVE-21928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-21928: --- Attachment: HIVE-21928.01.patch > Fix for statistics annotation in nested AND expressions > --- > > Key: HIVE-21928 > URL: https://issues.apache.org/jira/browse/HIVE-21928 > Project: Hive > Issue Type: Bug > Components: Physical Optimizer >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Critical > Labels: pull-request-available > Attachments: HIVE-21928.01.patch, HIVE-21928.01.patch, > HIVE-21928.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Discovered while working on HIVE-21867. Having predicates with nested AND > expressions may result in different stats, even if predicates are basically > similar (from stats estimation standpoint). > For instance, stats for {{AND(x=5, true, true)}} are different from {{x=5}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21938) Add database and table filter options to PreUpgradeTool
[ https://issues.apache.org/jira/browse/HIVE-21938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880460#comment-16880460 ] Hive QA commented on HIVE-21938: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12973903/HIVE-21938.6.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 16363 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.llap.security.TestLlapSignerImpl.testSigning (batchId=350) org.apache.hadoop.hive.metastore.TestObjectStore.catalogs (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testConcurrentDropPartitions (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDatabaseOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDeprecatedConfigIsOverwritten (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropParitionsCleanup (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSQLDropPartitionsCacheCrossSession (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testDirectSqlErrorMetrics (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testEmptyTrustStoreProps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testMasterKeyOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testMaxEventResponse (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testPartitionOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testQueryCloseOnError (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testRoleOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testTableOps (batchId=232) org.apache.hadoop.hive.metastore.TestObjectStore.testUseSSLProperty (batchId=232) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17909/testReport Console output: 
https://builds.apache.org/job/PreCommit-HIVE-Build/17909/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17909/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 16 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12973903 - PreCommit-HIVE-Build > Add database and table filter options to PreUpgradeTool > --- > > Key: HIVE-21938 > URL: https://issues.apache.org/jira/browse/HIVE-21938 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.1.0 >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Blocker > Fix For: 4.0.0 > > Attachments: HIVE-21938.1.patch, HIVE-21938.2.patch, > HIVE-21938.3.patch, HIVE-21938.4.patch, HIVE-21938.5.patch, HIVE-21938.6.patch > > > By default pre upgrade tool scans all databases and tables in the warehouse. > Add database and table filter options to run the tool for a specific subset > of databases and tables only. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18842) CLUSTERED ON/DISTRIBUTED ON+SORTED ON support for materialized views
[ https://issues.apache.org/jira/browse/HIVE-18842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-18842: --- Attachment: HIVE-18842.03.patch > CLUSTERED ON/DISTRIBUTED ON+SORTED ON support for materialized views > > > Key: HIVE-18842 > URL: https://issues.apache.org/jira/browse/HIVE-18842 > Project: Hive > Issue Type: New Feature > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available, todoc4.0 > Attachments: HIVE-18842.01.patch, HIVE-18842.01.patch, > HIVE-18842.02.patch, HIVE-18842.03.patch, HIVE-18842.03.patch, > HIVE-18842.patch > > Time Spent: 1.5h > Remaining Estimate: 0h > > We should support defining a {{CLUSTERED ON/DISTRIBUTED ON+SORTED ON}} > specification for materialized views. > The syntax should be extended as follows: > {code:sql} > CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db_name.]materialized_view_name > [COMMENT materialized_view_comment] > [PARTITIONED ON (col_name, ...)] > [CLUSTERED ON (col_name, ...) | DISTRIBUTED ON (col_name, ...) SORTED ON > (col_name, ...)] -- NEW! > [ >[ROW FORMAT row_format] >[STORED AS file_format] > | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)] > ] > [LOCATION hdfs_path] > [TBLPROPERTIES (property_name=property_value, ...)] > AS select_statement; > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
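A concrete instance of the extended syntax above might look like the following — the database, view, and column names are illustrative only, the clause shapes come from the grammar in the issue description:

```sql
-- Materialized view with the new CLUSTERED ON clause; partition
-- columns are drawn from the select list as with regular MV syntax.
CREATE MATERIALIZED VIEW db1.mv_sales_by_customer
PARTITIONED ON (region)
CLUSTERED ON (customer_id)
STORED AS ORC
AS
SELECT customer_id, SUM(amount) AS total_amount, region
FROM db1.sales
GROUP BY customer_id, region;
```

Per the review comment above, this data organization is applied on a full (`INSERT OVERWRITE`) rebuild; incremental rebuild is deferred to HIVE-21953.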
[jira] [Work logged] (HIVE-18842) CLUSTERED ON/DISTRIBUTED ON+SORTED ON support for materialized views
[ https://issues.apache.org/jira/browse/HIVE-18842?focusedWorklogId=273332=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-273332 ] ASF GitHub Bot logged work on HIVE-18842: - Author: ASF GitHub Bot Created on: 08/Jul/19 15:14 Start Date: 08/Jul/19 15:14 Worklog Time Spent: 10m Work Description: jcamachor commented on pull request #696: HIVE-18842 URL: https://github.com/apache/hive/pull/696#discussion_r301150194 ## File path: ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java ## @@ -7297,8 +7435,19 @@ protected Operator genFileSinkPlan(String dest, QB qb, Operator input) // Add NOT NULL constraint check input = genConstraintsPlan(dest, qb, input); - // Add sorting/bucketing if needed - input = genBucketingSortingDest(dest, input, qb, tableDescriptor, destinationTable, rsCtx); + if (destinationTable.isMaterializedView() && + mvRebuildMode == MaterializationRebuildMode.INSERT_OVERWRITE_REBUILD) { +// Data organization (DISTRIBUTED, SORTED, CLUSTERED) for materialized view +// TODO: We only do this for a full rebuild Review comment: I was thinking about this and there is no need to log this info. This is a missing feature, and there is a `TODO` and a follow-up JIRA (https://issues.apache.org/jira/browse/HIVE-21953). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 273332) Time Spent: 1.5h (was: 1h 20m) > CLUSTERED ON/DISTRIBUTED ON+SORTED ON support for materialized views > > > Key: HIVE-18842 > URL: https://issues.apache.org/jira/browse/HIVE-18842 > Project: Hive > Issue Type: New Feature > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: pull-request-available, todoc4.0 > Attachments: HIVE-18842.01.patch, HIVE-18842.01.patch, > HIVE-18842.02.patch, HIVE-18842.03.patch, HIVE-18842.patch > > Time Spent: 1.5h > Remaining Estimate: 0h > > We should support defining a {{CLUSTERED ON/DISTRIBUTED ON+SORTED ON}} > specification for materialized views. > The syntax should be extended as follows: > {code:sql} > CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db_name.]materialized_view_name > [COMMENT materialized_view_comment] > [PARTITIONED ON (col_name, ...)] > [CLUSTERED ON (col_name, ...) | DISTRIBUTED ON (col_name, ...) SORTED ON > (col_name, ...)] -- NEW! > [ >[ROW FORMAT row_format] >[STORED AS file_format] > | STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)] > ] > [LOCATION hdfs_path] > [TBLPROPERTIES (property_name=property_value, ...)] > AS select_statement; > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21957) Create temporary table like should omit transactional properties
[ https://issues.apache.org/jira/browse/HIVE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Pinter updated HIVE-21957: - Attachment: HIVE-21957.03.patch > Create temporary table like should omit transactional properties > > > Key: HIVE-21957 > URL: https://issues.apache.org/jira/browse/HIVE-21957 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 4.0.0 >Reporter: Laszlo Pinter >Assignee: Laszlo Pinter >Priority: Major > Attachments: HIVE-21957.01.patch, HIVE-21957.02.patch, > HIVE-21957.03.patch > > > In case of create temporary table like queries, where the source table is > transactional, the transactional properties should not be copied over to the > new table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
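The scenario in the description can be sketched as follows — table and column names are illustrative:

```sql
-- Source table is transactional (ACID):
CREATE TABLE db1.acid_src (id INT, val STRING)
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

-- With this fix, the temporary copy must not inherit
-- 'transactional'='true' from the source, since temporary
-- tables cannot be transactional:
CREATE TEMPORARY TABLE tmp_src LIKE db1.acid_src;
```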
[jira] [Commented] (HIVE-21191) I want to extends lag/lead functions to Implementing some special functions, And I met some problems
[ https://issues.apache.org/jira/browse/HIVE-21191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880419#comment-16880419 ] Ittay Dror commented on HIVE-21191: --- This is happening to me as well. It seems like Hive doesn't support user defined windowing functions > I want to extends lag/lead functions to Implementing some special functions, > And I met some problems > - > > Key: HIVE-21191 > URL: https://issues.apache.org/jira/browse/HIVE-21191 > Project: Hive > Issue Type: Wish > Components: Hive, UDF, Windows >Affects Versions: 1.1.0 >Reporter: one >Priority: Minor > Labels: LAG(), UDAF, UDF, window_function > > i want a distinctLag functions ,The function is like lag, but the difference > is to select different values in front of it. > Example: > {color:#14892c}select * from active{color} > ||session||sq||channel|| > |1|1|A| > |1|2|B| > |1|3|B| > |1|4|C| > |1|5|B| > |2|1|C| > |2|2|B| > |2|3|B| > |2|4|A| > |2|5|B| > {color:#14892c} > select session,sq,lag(channel)over(partition by session order by sq) from > active{color} > ||session||sq||channel|| > |1|1|null| > |1|2|A| > |1|3|B| > |1|4|B| > |1|5|C| > |2|1|null| > |2|2|C| > |2|3|B| > |2|4|B| > |2|5|A| > The function I want is:{color:#14892c} > select session,sq,distinctLag(channel)over(partition by session order by sq) > from active{color} > ||session||sq||channel|| > |1|1|null| > |1|2|A| > |1|3|A| > |1|4|B| > |1|5|C| > |2|1|null| > |2|2|C| > |2|3|C| > |2|4|B| > |2|5|A| > > i try to extend GenericUDFLeadLag and Override: > {code:java} > import org.apache.hadoop.hive.ql.exec.Description; > import org.apache.hadoop.hive.ql.metadata.HiveException; > import org.apache.hadoop.hive.ql.udf.UDFType; > import org.apache.hadoop.hive.ql.udf.generic.GenericUDFLeadLag; > import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils; > import > org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.ObjectInspectorCopyOption; > @Description( > name = "distinctLag", > value = 
"distinctLag (scalar_expression [,offset] [,default]) OVER > ([query_partition_clause] order_by_clause); " > + "The distinctLag function is used to access data from a > distinct previous row.", > extended = "Example:\n " > + "select p1.p_mfgr, p1.p_name, p1.p_size,\n" > + " p1.p_size - distinctLag(p1.p_size,1,p1.p_size) over( distribute > by p1.p_mfgr sort by p1.p_name) as deltaSz\n" > + " from part p1 join part p2 on p1.p_partkey = p2.p_partkey") > @UDFType(impliesOrder = true) > public class GenericUDFDistinctLag extends GenericUDFLeadLag { > @Override > public Object evaluate(DeferredObject[] arguments) throws HiveException > { > Object defaultVal = null; > if (arguments.length == 3) { > defaultVal = > ObjectInspectorUtils.copyToStandardObject(getDefaultValueConverter().convert(arguments[2].get()), > getDefaultArgOI()); > } > int idx = getpItr().getIndex() - 1; > int start = 0; > int end = getpItr().getPartition().size(); > try { > Object currValue = > ObjectInspectorUtils.copyToStandardObject(getExprEvaluator().evaluate(getpItr().resetToIndex(idx)), > getFirstArgOI(), ObjectInspectorCopyOption.WRITABLE); > Object ret = null; > int newIdx = idx; > do { > --newIdx; > if (newIdx >= end || newIdx < start) { > ret = defaultVal; > return ret; > }else{ > ret = > ObjectInspectorUtils.copyToStandardObject(getExprEvaluator().evaluate(getpItr().lag(1)), > getFirstArgOI(), ObjectInspectorCopyOption.WRITABLE); > if(ret.equals(currValue)){ > setAmt(getAmt() - 1); > } > } > } while (getAmt() > 0); > return ret; > } finally { > Object currRow = getpItr().resetToIndex(idx); > // reevaluate expression on current Row, to trigger the > Lazy object > // caches to be reset to the current row. > getExprEvaluator().evaluate(currRow); > } > } > @Override > protected String _getFnName(){ >return "distinctLag"; > } > @Override > protected Object getRow(int
[jira] [Commented] (HIVE-21938) Add database and table filter options to PreUpgradeTool
[ https://issues.apache.org/jira/browse/HIVE-21938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880397#comment-16880397 ] Hive QA commented on HIVE-21938: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 20s{color} | {color:blue} upgrade-acid/pre-upgrade in master has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 9s{color} | {color:red} upgrade-acid/pre-upgrade: The patch generated 2 new + 56 unchanged - 12 fixed = 58 total (was 68) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17909/dev-support/hive-personality.sh | | git revision | master / 67e515f | | Default Java | 1.8.0_111 | | findbugs | v3.0.1 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17909/yetus/diff-checkstyle-upgrade-acid_pre-upgrade.txt | | modules | C: upgrade-acid/pre-upgrade U: upgrade-acid/pre-upgrade | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17909/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Add database and table filter options to PreUpgradeTool > --- > > Key: HIVE-21938 > URL: https://issues.apache.org/jira/browse/HIVE-21938 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.1.0 >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Blocker > Fix For: 4.0.0 > > Attachments: HIVE-21938.1.patch, HIVE-21938.2.patch, > HIVE-21938.3.patch, HIVE-21938.4.patch, HIVE-21938.5.patch, HIVE-21938.6.patch > > > By default pre upgrade tool scans all databases and tables in the warehouse. > Add database and table filter options to run the tool for a specific subset > of databases and tables only. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21923) Vectorized MapJoin may miss results when only the join key is selected
[ https://issues.apache.org/jira/browse/HIVE-21923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880358#comment-16880358 ] Hive QA commented on HIVE-21923: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 50s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 53s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 24s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 19m 20s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 33s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 5s{color} | {color:blue} ql in master has 2252 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 59s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 25s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 29s{color} | {color:red} common: The patch generated 1 new + 1922 unchanged - 0 fixed = 1923 total (was 1922) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 12m 52s{color} | {color:red} root: The patch generated 1 new + 73140 unchanged - 0 fixed = 73141 total (was 73140) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 34s{color} | {color:red} patch/common cannot run setBugDatabaseInfo from findbugs {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 20s{color} | {color:red} patch/ql cannot run setBugDatabaseInfo from findbugs {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 97m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17906/dev-support/hive-personality.sh | | git revision | master / 67e515f | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17906/yetus/diff-checkstyle-common.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17906/yetus/diff-checkstyle-root.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-17906/yetus/patch-findbugs-common.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-17906/yetus/patch-findbugs-ql.txt | | modules | C: common ql . U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17906/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Vectorized MapJoin may miss results when only the join key is selected > -- > > Key: HIVE-21923 > URL: https://issues.apache.org/jira/browse/HIVE-21923 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >
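A query shape matching the issue title — a map join where only the join key is projected — would look something like this; the table names are illustrative, and the two settings shown are the standard switches for vectorization and map-join conversion:

```sql
SET hive.vectorized.execution.enabled=true;
SET hive.auto.convert.join=true;

-- Only the join key b.key is selected; this is the kind of query
-- shape the bug report says may miss results under vectorization.
SELECT b.key
FROM big b JOIN small s ON (b.key = s.key);
```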
[jira] [Commented] (HIVE-21957) Create temporary table like should omit transactional properties
[ https://issues.apache.org/jira/browse/HIVE-21957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880349#comment-16880349 ]

Hive QA commented on HIVE-21957:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973901/HIVE-21957.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17908/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17908/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17908/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-07-08 13:29:46.950
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-17908/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-07-08 13:29:46.953
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 67e515f HIVE-21571: SHOW COMPACTIONS shows column names as its first output row (Rajkumar Singh, reviewed by Daniel Dai)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 67e515f HIVE-21571: SHOW COMPACTIONS shows column names as its first output row (Rajkumar Singh, reviewed by Daniel Dai)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-07-08 13:29:48.109
+ rm -rf ../yetus_PreCommit-HIVE-Build-17908
+ mkdir ../yetus_PreCommit-HIVE-Build-17908
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-17908
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-17908/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
Going to apply patch with: git apply -p0
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q -Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc8234374177639365020.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc8234374177639365020.exe, -I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore, --java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources, /data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator Version 3.5.2
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (process-resource-bundles) on project hive-shims: Execution process-resource-bundles of goal org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process failed. ConcurrentModificationException -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn -rf :hive-shims
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-17908
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973901 - PreCommit-HIVE-Build

> Create temporary table like should omit transactional properties
>
[jira] [Commented] (HIVE-21225) ACID: getAcidState() should cache a recursive dir listing locally
[ https://issues.apache.org/jira/browse/HIVE-21225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16880346#comment-16880346 ]

Hive QA commented on HIVE-21225:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12973895/HIVE-21225.8.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17907/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17907/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17907/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12973895/HIVE-21225.8.patch was found in seen patch url's cache and a test was probably run already on it. Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12973895 - PreCommit-HIVE-Build

> ACID: getAcidState() should cache a recursive dir listing locally
> -----------------------------------------------------------------
>
>                 Key: HIVE-21225
>                 URL: https://issues.apache.org/jira/browse/HIVE-21225
>             Project: Hive
>          Issue Type: Improvement
>          Components: Transactions
>            Reporter: Gopal V
>            Assignee: Vaibhav Gumashta
>            Priority: Major
>         Attachments: HIVE-21225.1.patch, HIVE-21225.2.patch, HIVE-21225.3.patch, HIVE-21225.4.patch, HIVE-21225.4.patch, HIVE-21225.5.patch, HIVE-21225.6.patch, HIVE-21225.7.patch, HIVE-21225.7.patch, HIVE-21225.8.patch, async-pid-44-2.svg
>
> Currently getAcidState() makes 3 calls into the FS api which could be answered by making a single recursive listDir call and reusing the same data to check for isRawFormat() and isValidBase().
> All delta operations for a single partition can go against a single listed directory snapshot instead of interacting with the NameNode or ObjectStore within the inner loop.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
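The idea in the HIVE-21225 description — take one recursive directory listing up front, then answer every subsequent delta/base existence check from that in-memory snapshot instead of going back to the NameNode — can be sketched outside Hive with plain `java.nio`. This is a minimal illustration, not Hive's actual `getAcidState()` code; the `DirSnapshot` class and its method names are hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical sketch: perform ONE recursive walk of a partition directory,
// then answer all later "which deltas exist?" / "is there a base?" queries
// from the cached listing, with no further filesystem round-trips.
public class DirSnapshot {
    private final List<Path> entries;

    public DirSnapshot(Path root) throws IOException {
        // The single filesystem interaction: a recursive listing of root.
        try (Stream<Path> walk = Files.walk(root)) {
            entries = walk.collect(Collectors.toList());
        }
    }

    // Answered purely from the snapshot (e.g. all delta_* directories).
    public List<Path> childrenMatching(String prefix) {
        return entries.stream()
                .filter(p -> p.getFileName().toString().startsWith(prefix))
                .collect(Collectors.toList());
    }

    // Also answered from the snapshot, replacing a per-query exists() call.
    public boolean contains(String name) {
        return entries.stream()
                .anyMatch(p -> p.getFileName().toString().equals(name));
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("partition");
        Files.createDirectories(root.resolve("delta_0000001_0000001"));
        Files.createDirectories(root.resolve("delta_0000002_0000002"));
        Files.createDirectories(root.resolve("base_0000002"));

        DirSnapshot snap = new DirSnapshot(root);
        // Both queries reuse the one listing taken in the constructor.
        System.out.println(snap.childrenMatching("delta_").size()); // prints 2
        System.out.println(snap.contains("base_0000002"));          // prints true
    }
}
```

The point of the design is that the cost of metadata access moves out of the inner loop: N existence checks become N in-memory filters over one listing, which matters most when the filesystem is a remote NameNode or an object store where each call is a network round-trip.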