[hive] branch master updated (cca0da1 -> 2dc3311)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git.

from cca0da1  HIVE-24871: Initiator / Cleaner performance performance metrics (Denys Kuzmenko, reviewed by Peter Varga)
 add 2dc3311  HIVE-23779: BasicStatsTask Info is not getting printed in beeline console (Naresh Panchetty Ramanaiah, reviewed by Miklos Gergely)

No new revisions were added by this update.

Summary of changes:
 ql/src/java/org/apache/hadoop/hive/ql/log/LogDivertAppender.java | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
[hive] branch master updated (b0539cf -> 84dc08f)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git.

from b0539cf  HIVE-24628: Decimal values are displayed as scientific notation in beeline (Naresh Panchetty Ramanaiah, reviewed by Miklos Gergely)
 add 84dc08f  HIVE-24610: Remove superfluous throws IOException from Context (Miklos Gergely, reviewed by Krisztian Kasa)

No new revisions were added by this update.

Summary of changes:
 common/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java | 4
 ql/src/java/org/apache/hadoop/hive/ql/Context.java | 4 ++--
 ql/src/java/org/apache/hadoop/hive/ql/Driver.java | 8 ++--
 .../hive/ql/io/rcfile/truncate/ColumnTruncateTask.java | 12 +++-
 .../hadoop/hive/ql/parse/AcidExportSemanticAnalyzer.java | 2 +-
 .../hadoop/hive/ql/parse/ColumnStatsAutoGatherContext.java | 4 ++--
 .../hadoop/hive/ql/parse/ColumnStatsSemanticAnalyzer.java | 6 +-
 .../apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java | 12
 .../java/org/apache/hadoop/hive/ql/parse/ParseUtils.java | 4 ++--
 .../hadoop/hive/ql/parse/RewriteSemanticAnalyzer.java | 14 +-
 .../org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java | 2 +-
 .../hive/ql/udf/generic/GenericUDTFGetSQLSchema.java | 3 +--
 .../hadoop/hive/ql/udf/generic/GenericUDTFGetSplits.java | 2 +-
 .../test/org/apache/hadoop/hive/ql/exec/TestContext.java | 2 +-
 .../org/apache/hadoop/hive/ql/exec/tez/TestTezTask.java | 5 ++---
 .../ql/optimizer/physical/TestNullScanTaskDispatcher.java | 2 +-
 .../org/apache/hadoop/hive/ql/tool/TestLineageInfo.java | 2 +-
 17 files changed, 30 insertions(+), 58 deletions(-)
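HIVE-24610 above is a pure signature cleanup: a `throws IOException` declaration that no code path can actually raise forces every caller into needless try/catch or further propagation. A minimal before/after illustration (the class and method names below are invented for this example, not taken from the patch):

```java
import java.io.IOException;

public class ThrowsCleanupDemo {
    // Before: a superfluous checked exception in the signature; the body
    // never throws, yet every caller must still handle IOException.
    static String pathBefore() throws IOException {
        return "/tmp/scratch";
    }

    // After: same body, honest signature; callers need no handler.
    static String pathAfter() {
        return "/tmp/scratch";
    }

    public static void main(String[] args) throws IOException {
        System.out.println(pathBefore());
        System.out.println(pathAfter());
    }
}
```

Removing the declaration is source-compatible for callers (their now-unreachable catch blocks become dead code to clean up), which is why such a change touches many files, as the diffstat above shows.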
[hive] branch master updated (fa68362 -> b0539cf)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git.

from fa68362  HIVE-24589 : Drop catalog failing with deadlock error for Oracle backend dbms. (Mahesh Kumar Behera, reviewed by Miklos Gergely)
 add b0539cf  HIVE-24628: Decimal values are displayed as scientific notation in beeline (Naresh Panchetty Ramanaiah, reviewed by Miklos Gergely)

No new revisions were added by this update.

Summary of changes:
 beeline/src/java/org/apache/hive/beeline/Rows.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
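The beeline fix in HIVE-24628 concerns how decimal values are rendered as strings. As a standalone illustration of the underlying Java behavior (this is not the actual `Rows.java` patch): `BigDecimal.toString()` switches to scientific notation for small magnitudes, while `toPlainString()` always emits the full decimal expansion.

```java
import java.math.BigDecimal;

public class DecimalDisplayDemo {
    public static void main(String[] args) {
        BigDecimal tiny = new BigDecimal("0.00000001");
        // toString() uses scientific notation once the adjusted exponent
        // drops below -6
        System.out.println(tiny.toString());      // prints "1E-8"
        // toPlainString() always renders the plain decimal expansion
        System.out.println(tiny.toPlainString()); // prints "0.00000001"
    }
}
```

A display layer that formats result values with `toString()` therefore surprises users with `1E-8` where they expect `0.00000001`; rendering via `toPlainString()` (or an explicit formatter) avoids that.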
[hive] branch master updated (ab11fbb -> 83cd0df)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git.

from ab11fbb  HIVE-24615: Remove unnecessary FileSystem listing from Initiator (#1848) (Peter Varga, reviewed by Laszlo Pinter)
 add 83cd0df  HIVE-24611: Remove unnecessary parameter from AbstractAlterTableOperation (Miklos Gergely, reviewed by Krisztian Kasa)

No new revisions were added by this update.

Summary of changes:
 .../ql/ddl/table/AbstractAlterTableOperation.java | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)
[hive] branch master updated (230dbce -> 65e1180)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git.

from 230dbce  HIVE-24559: Fix some spelling issues (#1818)
 add 65e1180  HIVE-24593 Clean up checkstyle violations in ddl (Miklos Gergely, reviewed by Krisztian Kasa)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hive/ql/ddl/misc/msck/MsckAnalyzer.java | 3 +-
 .../show/compactions/ShowCompactionsDesc.java | 2 +
 .../ql/ddl/table/AbstractAlterTableOperation.java | 1 -
 .../add/AlterTableAddConstraintAnalyzer.java | 3 +-
 .../drop/AlterTableDropConstraintDesc.java | 2 -
 .../ddl/table/lock/show/ShowDbLocksAnalyzer.java | 3 --
 .../ql/ddl/table/lock/show/ShowLocksAnalyzer.java | 3 --
 .../partition/add/AlterTableAddPartitionDesc.java | 3 +-
 .../drop/AbstractDropPartitionAnalyzer.java | 20 ---
 .../rename/AlterTableRenamePartitionOperation.java | 1 -
 .../AlterTableSetSkewedLocationAnalyzer.java | 1 -
 .../ql/ddl/view/create/AlterViewAsAnalyzer.java | 2 +-
 .../AlterMaterializedViewRebuildAnalyzer.java | 2 -
 .../hive/ql/metadata/formatting/MapBuilder.java | 62 +++---
 14 files changed, 40 insertions(+), 68 deletions(-)
[hive] branch master updated (113f6af -> fc2d47f)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git.

from 113f6af  HIVE-24565: Implement standard trim function (Krisztian Kasa, reviewed by Jesus Camacho Rodriguez, Zoltan Haindrich)
 add fc2d47f  HIVE-24509: Move show specific codes under DDL and cut MetaDataFormatter classes to pieces (Miklos Gergely, reviewed by David Mollitor)

No new revisions were added by this update.

Summary of changes:
 .../hadoop/hive/ql/ddl/DDLOperationContext.java | 8 -
 .../org/apache/hadoop/hive/ql/ddl/DDLUtils.java | 67 --
 .../org/apache/hadoop/hive/ql/ddl/ShowUtils.java | 406 +
 .../ddl/database/desc/DescDatabaseFormatter.java | 116 +++
 .../ddl/database/desc/DescDatabaseOperation.java | 9 +-
 .../ddl/database/show/ShowDatabasesFormatter.java | 69 ++
 .../ddl/database/show/ShowDatabasesOperation.java | 7 +-
 .../showcreate/ShowCreateDatabaseOperation.java | 10 +-
 .../ddl/function/desc/DescFunctionOperation.java | 4 +-
 .../ddl/function/show/ShowFunctionsOperation.java | 4 +-
 .../hive/ql/ddl/misc/conf/ShowConfOperation.java | 11 +-
 .../hive/ql/ddl/privilege/PrivilegeUtils.java | 6 +-
 .../privilege/show/grant/ShowGrantOperation.java | 24 +-
 .../show/principals/ShowPrincipalsOperation.java | 16 +-
 .../show/rolegrant/ShowRoleGrantOperation.java | 12 +-
 .../show/compactions/ShowCompactionsOperation.java | 4 +-
 .../transactions/ShowTransactionsOperation.java | 4 +-
 .../table/column/show/ShowColumnsOperation.java | 9 +-
 .../create/show/ShowCreateTableOperation.java | 6 +-
 .../ql/ddl/table/info/desc/DescTableOperation.java | 35 +-
 .../info/desc/formatter/DescTableFormatter.java | 47 +
 .../desc/formatter/JsonDescTableFormatter.java | 265 ++
 .../desc/formatter/TextDescTableFormatter.java | 565
 .../properties/ShowTablePropertiesOperation.java | 14 +-
 .../info/show/status/ShowTableStatusOperation.java | 9 +-
 .../formatter/JsonShowTableStatusFormatter.java | 95 ++
 .../status/formatter/ShowTableStatusFormatter.java | 181
 .../formatter/TextShowTableStatusFormatter.java | 139 +++
 .../info/show/tables/ShowTablesFormatter.java | 120 +++
 .../info/show/tables/ShowTablesOperation.java | 12 +-
 .../ql/ddl/table/lock/show/ShowLocksOperation.java | 6 +-
 .../partition/show/ShowPartitionsFormatter.java | 124 +++
 .../partition/show/ShowPartitionsOperation.java | 7 +-
 .../show/ShowMaterializedViewsFormatter.java | 127 +++
 .../show/ShowMaterializedViewsOperation.java | 7 +-
 .../hive/ql/ddl/view/show/ShowViewsOperation.java | 8 +-
 .../AlterResourcePlanValidateOperation.java | 7 +-
 .../show/ShowResourcePlanOperation.java | 10 +-
 .../formatter/JsonShowResourcePlanFormatter.java | 182
 .../show/formatter/ShowResourcePlanFormatter.java | 304 +++
 .../formatter/TextShowResourcePlanFormatter.java | 193
 .../metadata/formatting/JsonMetaDataFormatter.java | 748 +---
 .../metadata/formatting/MetaDataFormatUtils.java | 970 +
 .../ql/metadata/formatting/MetaDataFormatter.java | 76 +-
 .../metadata/formatting/TextMetaDataFormatter.java | 776 +
 .../ql/metadata/formatting/TextMetaDataTable.java | 61 --
 .../metadata/formatting/TestJsonRPFormatter.java | 3 +-
 47 files changed, 3096 insertions(+), 2787 deletions(-)
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/ShowUtils.java
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseFormatter.java
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/database/show/ShowDatabasesFormatter.java
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/DescTableFormatter.java
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/JsonDescTableFormatter.java
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/desc/formatter/TextDescTableFormatter.java
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/JsonShowTableStatusFormatter.java
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/ShowTableStatusFormatter.java
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/formatter/TextShowTableStatusFormatter.java
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/tables/ShowTablesFormatter.java
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/table/partition/show/ShowPartitionsFormatter.java
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/materialized/show/ShowMaterializedViewsFormatter.java
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/workloadmanagement/resourceplan
[hive] branch master updated (2597088 -> c309914)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git.

from 2597088  HIVE-24380: NullScanTaskDispatcher should liststatus in parallel (Mustafa Iman, reviewed by Rajesh Balamohan)
 add c309914  HIVE-24333: Cut long methods in Driver to smaller, more manageable pieces (Miklos Gergely, reviewed by David Mollitor) (#1629)

No new revisions were added by this update.

Summary of changes:
 ql/src/java/org/apache/hadoop/hive/ql/Driver.java | 406 +++--
 .../org/apache/hadoop/hive/ql/DriverContext.java | 8 +-
 2 files changed, 222 insertions(+), 192 deletions(-)
[hive] branch master updated (2f60a9e -> b24534e)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git.

from 2f60a9e  HIVE-23695: [CachedStore] Add check/default constraints in CachedStore (Ashish Sharma, reviewed by Adesh Rao, Sankar Hariappan)
 add b24534e  HIVE-24282: Show columns shouldn't sort table output columns unless explicitly mentioned (Naresh Panchetty Ramanaiah reviewed by Miklos Gergely)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/hadoop/hive/ql/parse/HiveParser.g | 4 +-
 .../ddl/table/column/show/ShowColumnsAnalyzer.java | 14 ++-
 .../ql/ddl/table/column/show/ShowColumnsDesc.java | 9 +-
 .../table/column/show/ShowColumnsOperation.java | 17 ++--
 ql/src/test/queries/clientpositive/show_columns.q | 8 ++
 .../results/clientpositive/llap/show_columns.q.out | 101 +
 6 files changed, 122 insertions(+), 31 deletions(-)
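HIVE-24282 changes SHOW COLUMNS so that columns come back in their declaration order unless sorting is explicitly requested. A minimal sketch of that behavioral contract (the class and helper below are hypothetical stand-ins, not code from ShowColumnsOperation):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ShowColumnsDemo {
    // Hypothetical helper mirroring the described behavior: preserve the
    // table's declaration order, sorting only when explicitly requested.
    static List<String> showColumns(List<String> declared, boolean sorted) {
        List<String> result = new ArrayList<>(declared);
        if (sorted) {
            result.sort(Comparator.naturalOrder());
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> declared = List.of("zip", "name", "id");
        // Default: declaration order is preserved
        System.out.println(showColumns(declared, false)); // prints "[zip, name, id]"
        // Only with explicit sorting does the output get ordered
        System.out.println(showColumns(declared, true));  // prints "[id, name, zip]"
    }
}
```

The HiveParser.g change in the diffstat above supplies the grammar hook for requesting the sorted variant; the operation class then applies the ordering conditionally rather than unconditionally.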
[hive] branch master updated (75d6057 -> 0303da0)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git.

from 75d6057  HIVE-24235: Drop and recreate table during MR compaction leaves behind base/delta directory (Karen Coppage, reviewed by Peter Vary)
 add 0303da0  HIVE-24184: Re-order methods in Driver (Miklos Gergely reviewed by David Mollitor)

No new revisions were added by this update.

Summary of changes:
 ql/src/java/org/apache/hadoop/hive/ql/Driver.java | 636 +++---
 1 file changed, 317 insertions(+), 319 deletions(-)
[hive] branch master updated: HIVE-24178: Add managed location to SHOW CREATE DATABASE (Miklos Gergely, reviewed by David Mollitor)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new 0416770  HIVE-24178: Add managed location to SHOW CREATE DATABASE (Miklos Gergely, reviewed by David Mollitor)

0416770 is described below

commit 04167704772828b37ed749e2a098d6d8a6838bf2
Author: Miklos Gergely
AuthorDate: Sun Oct 4 10:15:19 2020 +0200

    HIVE-24178: Add managed location to SHOW CREATE DATABASE (Miklos Gergely, reviewed by David Mollitor)
---
 .../ddl/database/showcreate/ShowCreateDatabaseOperation.java | 4
 ql/src/test/queries/clientpositive/database_location.q | 2 ++
 .../test/results/clientpositive/llap/database_location.q.out | 11 +++
 3 files changed, 17 insertions(+)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/showcreate/ShowCreateDatabaseOperation.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/showcreate/ShowCreateDatabaseOperation.java
index 1500b8f..dc96a27 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/showcreate/ShowCreateDatabaseOperation.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/showcreate/ShowCreateDatabaseOperation.java
@@ -60,6 +60,10 @@ public class ShowCreateDatabaseOperation extends DDLOperation
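A rough sketch of what adding the managed location to SHOW CREATE DATABASE output amounts to: the generated DDL gains a MANAGEDLOCATION clause whenever one is set on the database. The class below is a hypothetical stand-in for illustration, not the real ShowCreateDatabaseOperation:

```java
public class ShowCreateDatabaseSketch {
    // Hypothetical assembly of the SHOW CREATE DATABASE output: location
    // clauses are appended only when the corresponding value is set.
    static String showCreateDatabase(String name, String location, String managedLocation) {
        StringBuilder sb = new StringBuilder("CREATE DATABASE `" + name + "`\n");
        if (location != null) {
            sb.append("LOCATION\n'").append(location).append("'\n");
        }
        // The new behavior: emit the managed warehouse location as well
        if (managedLocation != null) {
            sb.append("MANAGEDLOCATION\n'").append(managedLocation).append("'\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(showCreateDatabase("db1", "/ext/db1", "/warehouse/db1"));
    }
}
```

Before this change, a database created with a distinct managed location could not be faithfully recreated from the SHOW CREATE DATABASE output, since that clause was silently dropped.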
[hive] branch master updated: HIVE-23339: SBA does not check permissions for DB location specified in Create database query (Shubham Chaurasia, reviewed by Miklos Gergely) (#1011)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new 5b545df  HIVE-23339: SBA does not check permissions for DB location specified in Create database query (Shubham Chaurasia, reviewed by Miklos Gergely) (#1011)

5b545df is described below

commit 5b545df298e403f79e4fb5082fdf0c6a6d8586c7
Author: Shubham Chaurasia
AuthorDate: Fri Jul 17 14:34:36 2020 +0530

    HIVE-23339: SBA does not check permissions for DB location specified in Create database query (Shubham Chaurasia, reviewed by Miklos Gergely) (#1011)

    * HIVE-23339: SBA does not check permissions for DB location specified in Create database query

    * HIVE-23339: -p option in mkdir to fix the test authorization_sba_create_db_with_loc.q
---
 .../SemanticAnalysis/HCatSemanticAnalyzerBase.java | 4 ++--
 .../storagehandler/DummyHCatAuthProvider.java | 7 +--
 .../DummyHiveMetastoreAuthorizationProvider.java | 6 +-
 .../BitSetCheckedAuthorizationProvider.java | 6 --
 .../authorization/HiveAuthorizationProvider.java | 12 ---
 .../MetaStoreAuthzAPIAuthorizerEmbedOnly.java | 6 +-
 .../StorageBasedAuthorizationProvider.java | 24 --
 .../authorization/command/CommandAuthorizerV1.java | 11 ++
 .../authorization_sba_alter_db_loc.q | 14 +
 .../authorization_sba_create_db_with_loc.q | 14 +
 .../authorization_sba_alter_db_loc.q.out | 1 +
 .../authorization_sba_create_db_with_loc.q.out | 1 +
 12 files changed, 89 insertions(+), 17 deletions(-)

diff --git a/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzerBase.java b/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzerBase.java
index 8487e3a..f1e8669 100644
--- a/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzerBase.java
+++ b/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzerBase.java
@@ -19,7 +19,6 @@
 package org.apache.hive.hcatalog.cli.SemanticAnalysis;

-import java.io.Serializable;
 import java.util.List;

 import org.apache.hadoop.hive.metastore.api.Database;
@@ -123,7 +122,8 @@ public class HCatSemanticAnalyzerBase extends AbstractSemanticAnalyzerHook {
   protected void authorize(Privilege[] inputPrivs, Privilege[] outputPrivs)
       throws AuthorizationException, SemanticException {
     try {
-      getAuthProvider().authorize(inputPrivs, outputPrivs);
+      getAuthProvider().authorizeDbLevelOperations(inputPrivs, outputPrivs,
+          null, null);
     } catch (HiveException ex) {
       throw new SemanticException(ex);
     }
diff --git a/hcatalog/core/src/main/java/org/apache/hive/hcatalog/storagehandler/DummyHCatAuthProvider.java b/hcatalog/core/src/main/java/org/apache/hive/hcatalog/storagehandler/DummyHCatAuthProvider.java
index 86d9a18..46d1d9e 100644
--- a/hcatalog/core/src/main/java/org/apache/hive/hcatalog/storagehandler/DummyHCatAuthProvider.java
+++ b/hcatalog/core/src/main/java/org/apache/hive/hcatalog/storagehandler/DummyHCatAuthProvider.java
@@ -19,10 +19,13 @@
 package org.apache.hive.hcatalog.storagehandler;

+import java.util.Collection;
 import java.util.List;

 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hive.metastore.api.Database;
+import org.apache.hadoop.hive.ql.hooks.ReadEntity;
+import org.apache.hadoop.hive.ql.hooks.WriteEntity;
 import org.apache.hadoop.hive.ql.metadata.AuthorizationException;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
 import org.apache.hadoop.hive.ql.metadata.Partition;
@@ -77,8 +80,8 @@ class DummyHCatAuthProvider implements HiveAuthorizationProvider {
    * org.apache.hadoop.hive.ql.security.authorization.Privilege[])
    */
   @Override
-  public void authorize(Privilege[] readRequiredPriv,
-      Privilege[] writeRequiredPriv) throws HiveException,
+  public void authorizeDbLevelOperations(Privilege[] readRequiredPriv, Privilege[] writeRequiredPriv,
+      Collection inputs, Collection outputs) throws HiveException,
       AuthorizationException {
   }
diff --git a/itests/util/src/main/java/org/apache/hadoop/hive/ql/security/DummyHiveMetastoreAuthorizationProvider.java b/itests/util/src/main/java/org/apache/hadoop/hive/ql/security/DummyHiveMetastoreAuthorizationProvider.java
index 3fdacac..77c7c22 100644
--- a/itests/util/src/main/java/org/apache/hadoop/hive/ql/security/DummyHiveMetastoreAuthorizationProvider.java
+++ b/itests/util/src/main/java/org/apache/hadoop/hive/ql/security/DummyHiveMetastoreAuthorizationProvider.java
@@ -19,8 +19,11 @@
 package org.apache.hadoop.hive.ql.security;

 import java.util.ArrayList;
+import java.util.Collection
[hive] branch master updated (110b5ca -> 4b5286d)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git.

from 110b5ca  HIVE-23857: Fix HiveParser 'code too large' problem (Miklos Gergely, reviewed by David Mollitor and Gopal Vijayaraghavan) (#1258)
 add 4b5286d  HIVE-23244 Extract Create View analyzer from SemanticAnalyzer (Miklos Gergely, reviewed by David Mollitor) (#1125)

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/hive/ql/ErrorMsg.java | 1 +
 .../org/apache/hadoop/hive/ql/parse/HiveParser.g | 3 +-
 .../org/apache/hadoop/hive/ql/ddl/DDLUtils.java | 14 +
 .../view/create/AbstractCreateViewAnalyzer.java | 130 +++
 .../ql/ddl/view/create/AbstractCreateViewDesc.java | 74
 .../ql/ddl/view/create/AlterViewAsAnalyzer.java | 92 +
 .../view/create/AlterViewAsDesc.java} | 26 +-
 .../create/AlterViewAsOperation.java} | 24 +-
 ...ewDesc.java => CreateMaterializedViewDesc.java} | 182 +++--
 ...n.java => CreateMaterializedViewOperation.java} | 54 +--
 .../ql/ddl/view/create/CreateViewAnalyzer.java | 214 +++
 .../hive/ql/ddl/view/create/CreateViewDesc.java | 425 ++---
 .../ql/ddl/view/create/CreateViewOperation.java | 97 +++--
 .../hadoop/hive/ql/exec/repl/ReplLoadTask.java | 14 +-
 .../org/apache/hadoop/hive/ql/io/AcidUtils.java | 4 +-
 .../hadoop/hive/ql/parse/CalcitePlanner.java | 8 +-
 .../apache/hadoop/hive/ql/parse/ParseContext.java | 8 +-
 .../java/org/apache/hadoop/hive/ql/parse/QB.java | 16 +-
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java | 417 +++-
 .../apache/hadoop/hive/ql/parse/StorageFormat.java | 2 +-
 .../apache/hadoop/hive/ql/parse/TaskCompiler.java | 24 +-
 .../apache/hadoop/hive/ql/plan/HiveOperation.java | 4 +-
 .../apache/hadoop/hive/ql/plan/LoadFileDesc.java | 10 +-
 .../org/apache/hadoop/hive/ql/plan/PlanUtils.java | 6 +-
 .../clientnegative/create_or_replace_view4.q.out | 2 +-
 .../clientnegative/create_view_failure10.q.out | 2 +-
 .../clientnegative/create_view_failure3.q.out | 2 +-
 .../clientnegative/create_view_failure5.q.out | 2 +-
 .../clientnegative/create_view_failure6.q.out | 2 +-
 .../clientnegative/create_view_failure7.q.out | 2 +-
 .../clientnegative/create_view_failure8.q.out | 2 +-
 .../clientnegative/create_view_failure9.q.out | 2 +-
 .../test/results/clientnegative/masking_mv.q.out | 2 +-
 .../clientnegative/selectDistinctStarNeg_1.q.out | 2 +-
 .../results/clientpositive/llap/create_view.q.out | 8 +-
 .../llap/create_view_translate.q.out | 8 +-
 .../results/clientpositive/llap/explain_ddl.q.out | 8 +-
 .../clientpositive/llap/explainuser_1.q.out | 4 +-
 .../results/clientpositive/llap/lineage3.q.out | 6 +-
 .../results/clientpositive/llap/masking_mv.q.out | 4 +-
 .../llap/materialized_view_cluster.q.out | 2 +-
 .../llap/materialized_view_create_rewrite_3.q.out | 2 +-
 .../llap/materialized_view_create_rewrite_4.q.out | 2 +-
 ...ialized_view_create_rewrite_rebuild_dummy.q.out | 2 +-
 ...erialized_view_create_rewrite_time_window.q.out | 2 +-
 .../llap/materialized_view_distribute_sort.q.out | 6 +-
 .../llap/materialized_view_partition_cluster.q.out | 2 +-
 .../llap/materialized_view_partitioned.q.out | 2 +-
 .../llap/materialized_view_partitioned_3.q.out | 2 +-
 .../clientpositive/llap/selectDistinctStar.q.out | 16 +-
 .../llap/sketches_materialized_view_safety.q.out | 2 +-
 .../clientpositive/llap/union_top_level.q.out | 8 +-
 .../clientpositive/llap/vector_windowing.q.out | 20 +-
 .../clientpositive/tez/explainanalyze_3.q.out | 4 +-
 .../results/clientpositive/tez/explainuser_3.q.out | 4 +-
 55 files changed, 940 insertions(+), 1043 deletions(-)
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/AbstractCreateViewAnalyzer.java
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/AbstractCreateViewDesc.java
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/AlterViewAsAnalyzer.java
 copy ql/src/java/org/apache/hadoop/hive/ql/{exec/repl/AckWork.java => ddl/view/create/AlterViewAsDesc.java} (65%)
 copy ql/src/java/org/apache/hadoop/hive/ql/ddl/{process/abort/AbortTransactionsOperation.java => view/create/AlterViewAsOperation.java} (54%)
 copy ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/{CreateViewDesc.java => CreateMaterializedViewDesc.java} (66%)
 copy ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/{CreateViewOperation.java => CreateMaterializedViewOperation.java} (55%)
 create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/CreateViewAnalyzer.java
[hive] branch master updated: HIVE-23857: Fix HiveParser 'code too large' problem (Miklos Gergely, reviewed by David Mollitor and Gopal Vijayaraghavan) (#1258)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new 110b5ca  HIVE-23857: Fix HiveParser 'code too large' problem (Miklos Gergely, reviewed by David Mollitor and Gopal Vijayaraghavan) (#1258)

110b5ca is described below

commit 110b5ca414e1e2b9dc434db64b3c6b85aeac0268
Author: Miklos Gergely
AuthorDate: Thu Jul 16 02:40:06 2020 +0200

    HIVE-23857: Fix HiveParser 'code too large' problem (Miklos Gergely, reviewed by David Mollitor and Gopal Vijayaraghavan) (#1258)
---
 parser/bin/fixHiveParser.sh | 33 +
 parser/pom.xml | 16
 2 files changed, 49 insertions(+)

diff --git a/parser/bin/fixHiveParser.sh b/parser/bin/fixHiveParser.sh
new file mode 100755
index 000..d469388
--- /dev/null
+++ b/parser/bin/fixHiveParser.sh
@@ -0,0 +1,33 @@
+#!/bin/bash
+
+# This is a temporary solution for the issue of the "code too large" problem related to HiveParser.java
+# We got to a point where adding anything to the antlr files lead to an issue about having a HiveParser.java that can not be compiled due to the compiled code size limitation in java (maximum 65536 bytes), so to avoid it we temorarly add this script to remove the huge tokenNames array into a separate file.
+# The real solution would be to switch to antlr 4
+
+input="target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveParser.java"
+output="target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveParser.java-fixed"
+tokenFile="target/generated-sources/antlr3/org/apache/hadoop/hive/ql/parse/HiveParserTokens.java"
+
+# create HiveParserTokens containing the tokenNames
+
+rm $tokenFile > /dev/null 2>&1
+
+cat << EOT >> $tokenFile
+package org.apache.hadoop.hive.ql.parse;
+
+public class HiveParserTokens {
+EOT
+
+awk '/tokenNames/ { matched = 1 } matched' $input | awk '{print} /};/ {exit}' >> $tokenFile
+
+echo "}" >> $tokenFile
+
+# remove tokenNames array from the original file
+
+rm $output > /dev/null 2>&1
+
+awk '/tokenNames/ {exit} {print}' $input >> $output
+echo "  public static final String[] tokenNames = HiveParserTokens.tokenNames;" >> $output
+awk 'matched; /};$/ { matched = 1 }' $input >> $output
+
+mv $output $input
diff --git a/parser/pom.xml b/parser/pom.xml
index 41fee3b..bdaa5cb 100644
--- a/parser/pom.xml
+++ b/parser/pom.xml
@@ -90,6 +90,22 @@
+exec-maven-plugin
+org.codehaus.mojo
+
+HiveParser.java fix
+generate-sources
+
+ exec
+
+ ${basedir}/bin/fixHiveParser.sh
+
 org.codehaus.mojo
 build-helper-maven-plugin
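The script above works around the JVM's 64 KB per-method bytecode limit (which also applies to a class's generated static initializer) by relocating the huge tokenNames array into its own class and aliasing it back from the parser. The toy classes below illustrate the shape of that split; Tokens and ParserSplitDemo are invented stand-ins for the generated HiveParserTokens and HiveParser:

```java
// The large array lives in its own class, so the bytecode of its static
// initializer is accounted against this class, not the parser.
class Tokens { // stands in for the generated HiveParserTokens
    public static final String[] tokenNames = { "ID", "COMMA", "SELECT" };
}

public class ParserSplitDemo { // stands in for HiveParser after the rewrite
    // Same aliasing line the script appends to HiveParser.java: the parser
    // keeps its public tokenNames field, but the initializer is one load,
    // not thousands of string constants.
    public static final String[] tokenNames = Tokens.tokenNames;

    public static void main(String[] args) {
        System.out.println(tokenNames.length); // prints "3"
    }
}
```

Clients of the parser see an identical field; only the class that pays the initializer cost changes, which is exactly why the script can be applied to generated sources without touching any caller.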
[hive] branch master updated (2731daf -> 955bae0)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git.

from 2731daf  HIVE-23244 Extract Create View analyzer from SemanticAnalyzer (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) (#1125)
 add 955bae0  Revert "HIVE-23244 Extract Create View analyzer from SemanticAnalyzer (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) (#1125)"

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/hadoop/hive/ql/ErrorMsg.java | 1 -
 .../org/apache/hadoop/hive/ql/parse/HiveParser.g | 3 +-
 .../org/apache/hadoop/hive/ql/ddl/DDLUtils.java | 14 -
 .../view/create/AbstractCreateViewAnalyzer.java | 130 ---
 .../ql/ddl/view/create/AbstractCreateViewDesc.java | 74
 .../ql/ddl/view/create/AlterViewAsAnalyzer.java | 92 -
 .../hive/ql/ddl/view/create/AlterViewAsDesc.java | 37 --
 .../ql/ddl/view/create/AlterViewAsOperation.java | 52 ---
 .../view/create/CreateMaterializedViewDesc.java | 416
 .../create/CreateMaterializedViewOperation.java | 76
 .../ql/ddl/view/create/CreateViewAnalyzer.java | 214 ---
 .../hive/ql/ddl/view/create/CreateViewDesc.java | 425 +++--
 .../ql/ddl/view/create/CreateViewOperation.java | 97 ++---
 .../hadoop/hive/ql/exec/repl/ReplLoadTask.java | 14 +-
 .../org/apache/hadoop/hive/ql/io/AcidUtils.java | 4 +-
 .../hadoop/hive/ql/parse/CalcitePlanner.java | 8 +-
 .../apache/hadoop/hive/ql/parse/ParseContext.java | 8 +-
 .../java/org/apache/hadoop/hive/ql/parse/QB.java | 16 +-
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java | 417 +---
 .../apache/hadoop/hive/ql/parse/StorageFormat.java | 2 +-
 .../apache/hadoop/hive/ql/parse/TaskCompiler.java | 24 +-
 .../apache/hadoop/hive/ql/plan/HiveOperation.java | 4 +-
 .../apache/hadoop/hive/ql/plan/LoadFileDesc.java | 10 +-
 .../org/apache/hadoop/hive/ql/plan/PlanUtils.java | 6 +-
 .../clientnegative/create_or_replace_view4.q.out | 2 +-
 .../clientnegative/create_view_failure10.q.out | 2 +-
 .../clientnegative/create_view_failure3.q.out | 2 +-
 .../clientnegative/create_view_failure5.q.out | 2 +-
 .../clientnegative/create_view_failure6.q.out | 2 +-
 .../clientnegative/create_view_failure7.q.out | 2 +-
 .../clientnegative/create_view_failure8.q.out | 2 +-
 .../clientnegative/create_view_failure9.q.out | 2 +-
 .../test/results/clientnegative/masking_mv.q.out | 2 +-
 .../clientnegative/selectDistinctStarNeg_1.q.out | 2 +-
 .../results/clientpositive/llap/create_view.q.out | 8 +-
 .../llap/create_view_translate.q.out | 8 +-
 .../results/clientpositive/llap/explain_ddl.q.out | 8 +-
 .../clientpositive/llap/explainuser_1.q.out | 4 +-
 .../results/clientpositive/llap/lineage3.q.out | 6 +-
 .../results/clientpositive/llap/masking_mv.q.out | 4 +-
 .../llap/materialized_view_cluster.q.out | 2 +-
 .../llap/materialized_view_create_rewrite_3.q.out | 2 +-
 .../llap/materialized_view_create_rewrite_4.q.out | 2 +-
 ...ialized_view_create_rewrite_rebuild_dummy.q.out | 2 +-
 ...erialized_view_create_rewrite_time_window.q.out | 2 +-
 .../llap/materialized_view_distribute_sort.q.out | 6 +-
 .../llap/materialized_view_partition_cluster.q.out | 2 +-
 .../llap/materialized_view_partitioned.q.out | 2 +-
 .../llap/materialized_view_partitioned_3.q.out | 2 +-
 .../clientpositive/llap/selectDistinctStar.q.out | 16 +-
 .../llap/sketches_materialized_view_safety.q.out | 2 +-
 .../clientpositive/llap/union_top_level.q.out | 8 +-
 .../clientpositive/llap/vector_windowing.q.out | 20 +-
 .../clientpositive/tez/explainanalyze_3.q.out | 4 +-
 .../results/clientpositive/tez/explainuser_3.q.out | 4 +-
 55 files changed, 844 insertions(+), 1434 deletions(-)
 delete mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/AbstractCreateViewAnalyzer.java
 delete mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/AbstractCreateViewDesc.java
 delete mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/AlterViewAsAnalyzer.java
 delete mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/AlterViewAsDesc.java
 delete mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/AlterViewAsOperation.java
 delete mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/CreateMaterializedViewDesc.java
 delete mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/CreateMaterializedViewOperation.java
 delete mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/CreateViewAnalyzer.java
[hive] branch master updated: Clean up Driver (Miklos Gergely, reviewed by Peter Vary) (#1222)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 9418c08 Clean up Driver (Miklos Gergely, reviewed by Peter Vary) (#1222) 9418c08 is described below commit 9418c08225dcdfe9fdb360ab6037caeaa1847863 Author: Miklos Gergely AuthorDate: Wed Jul 15 14:46:13 2020 +0200 Clean up Driver (Miklos Gergely, reviewed by Peter Vary) (#1222) --- .../plugin/TestHiveAuthorizerCheckInvocation.java | 2 +- ql/src/java/org/apache/hadoop/hive/ql/Driver.java | 113 ++--- .../org/apache/hadoop/hive/ql/DriverFactory.java | 12 ++- .../org/apache/hadoop/hive/ql/DriverState.java | 9 +- .../apache/hadoop/hive/ql/DriverTxnHandler.java| 52 +- .../org/apache/hadoop/hive/ql/DriverUtils.java | 13 +++ .../java/org/apache/hadoop/hive/ql/Executor.java | 7 +- .../apache/hadoop/hive/ql/HiveDriverRunHook.java | 6 +- .../hadoop/hive/ql/HiveDriverRunHookContext.java | 4 +- ql/src/java/org/apache/hadoop/hive/ql/IDriver.java | 6 +- .../hive/ql/udf/generic/GenericUDTFGetSplits.java | 2 +- 11 files changed, 120 insertions(+), 106 deletions(-) diff --git a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/authorization/plugin/TestHiveAuthorizerCheckInvocation.java b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/authorization/plugin/TestHiveAuthorizerCheckInvocation.java index 2c2d96c..79d494f 100644 --- a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/authorization/plugin/TestHiveAuthorizerCheckInvocation.java +++ b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/security/authorization/plugin/TestHiveAuthorizerCheckInvocation.java @@ -302,7 +302,7 @@ public class TestHiveAuthorizerCheckInvocation { assertTrue("db name", dbName.equalsIgnoreCase(dbObj.getDbname())); // actually create the permanent function -driver.run(null, true); +driver.run(); // Verify 
privilege objects reset(mockedAuthorizer); diff --git a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java b/ql/src/java/org/apache/hadoop/hive/ql/Driver.java index 523b25f..5590cf3 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/Driver.java @@ -47,7 +47,6 @@ import org.apache.hadoop.hive.ql.metadata.formatting.JsonMetaDataFormatter; import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils; import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatter; import org.apache.hadoop.hive.ql.parse.ExplainConfiguration.AnalyzeState; -import org.apache.hadoop.hive.ql.plan.HiveOperation; import org.apache.hadoop.hive.ql.plan.mapper.PlanMapper; import org.apache.hadoop.hive.ql.plan.mapper.StatsSource; import org.apache.hadoop.hive.ql.processors.CommandProcessorException; @@ -64,6 +63,9 @@ import org.slf4j.LoggerFactory; import com.google.common.annotations.VisibleForTesting; import com.google.common.base.Strings; +/** + * Compiles and executes HQL commands. + */ public class Driver implements IDriver { private static final String CLASS_NAME = Driver.class.getName(); @@ -99,7 +101,7 @@ public class Driver implements IDriver { } /** - * Set the maximum number of rows returned by getResults + * Set the maximum number of rows returned by getResults. 
*/ @Override public void setMaxRows(int maxRows) { @@ -111,8 +113,7 @@ public class Driver implements IDriver { this(new QueryState.Builder().withGenerateNewQueryId(true).withHiveConf(conf).build()); } - // Pass lineageState when a driver instantiates another Driver to run - // or compile another query + // Pass lineageState when a driver instantiates another Driver to run or compile another query public Driver(HiveConf conf, Context ctx, LineageState lineageState) { this(QueryState.getNewQueryState(conf, lineageState), null); context = ctx; @@ -140,8 +141,9 @@ public class Driver implements IDriver { } /** - * Compile a new query, but potentially reset taskID counter. Not resetting task counter - * is useful for generating re-entrant QL queries. + * Compiles a new HQL command, but potentially resets taskID counter. Not resetting task counter is useful for + * generating re-entrant QL queries. + * * @param command The HiveQL query to compile * @param resetTaskIds Resets taskID counter if true. * @return 0 for ok @@ -155,9 +157,14 @@ public class Driver implements IDriver { } } - // deferClose indicates if the close/destroy should be deferred when the process has been - // interrupted, it should be set to true if the compile is called within another method like - // runInternal, which defers the close to the called in that method. + /** + * Compiles an HQL command, creates an exec
[hive] branch master updated: HIVE-23418 : Add test.local.warehouse.dir for TestMiniLlapLocalDriver tests (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new ed639c4 HIVE-23418 : Add test.local.warehouse.dir for TestMiniLlapLocalDriver tests (Miklos Gergely, reviewed by Zoltan Haindrich) ed639c4 is described below commit ed639c4cc9a00324711ce1659e355bb36876115a Author: Miklos Gergely AuthorDate: Wed Jun 17 14:57:48 2020 +0200 HIVE-23418 : Add test.local.warehouse.dir for TestMiniLlapLocalDriver tests (Miklos Gergely, reviewed by Zoltan Haindrich) --- .../test/resources/testconfiguration.properties| 16 - pom.xml| 2 + ql/src/test/queries/clientpositive/input44.q | 2 +- ql/src/test/queries/clientpositive/msck_repair_0.q | 8 +- ql/src/test/queries/clientpositive/msck_repair_1.q | 4 +- ql/src/test/queries/clientpositive/msck_repair_2.q | 6 +- ql/src/test/queries/clientpositive/msck_repair_3.q | 2 +- ql/src/test/queries/clientpositive/msck_repair_4.q | 4 +- ql/src/test/queries/clientpositive/msck_repair_5.q | 6 +- .../test/queries/clientpositive/msck_repair_acid.q | 10 +- .../queries/clientpositive/msck_repair_batchsize.q | 12 +- .../test/queries/clientpositive/msck_repair_drop.q | 164 - ql/src/test/queries/clientpositive/nullformat.q| 2 +- .../test/queries/clientpositive/nullformatCTAS.q | 2 +- .../queries/clientpositive/partition_discovery.q | 8 +- ql/src/test/queries/clientpositive/repair.q| 6 +- .../clientpositive/symlink_text_input_format.q | 8 +- .../clientpositive/{ => llap}/input44.q.out| 0 .../clientpositive/{ => llap}/msck_repair_0.q.out | 0 .../clientpositive/{ => llap}/msck_repair_1.q.out | 0 .../clientpositive/{ => llap}/msck_repair_2.q.out | 0 .../clientpositive/{ => llap}/msck_repair_3.q.out | 0 .../clientpositive/{ => llap}/msck_repair_4.q.out | 0 .../clientpositive/{ => llap}/msck_repair_5.q.out | 0 .../clientpositive/{ => llap}/msck_repair_6.q.out | 0 .../{ => 
llap}/msck_repair_acid.q.out | 0 .../{ => llap}/msck_repair_batchsize.q.out | 0 .../{ => llap}/msck_repair_drop.q.out | 0 .../clientpositive/{ => llap}/nullformat.q.out | 0 .../clientpositive/{ => llap}/nullformatCTAS.q.out | 160 - .../{ => llap}/partition_discovery.q.out | 0 .../results/clientpositive/{ => llap}/repair.q.out | 0 .../{ => llap}/symlink_text_input_format.q.out | 374 - 33 files changed, 403 insertions(+), 393 deletions(-) diff --git a/itests/src/test/resources/testconfiguration.properties b/itests/src/test/resources/testconfiguration.properties index f430a13..810de56 100644 --- a/itests/src/test/resources/testconfiguration.properties +++ b/itests/src/test/resources/testconfiguration.properties @@ -205,7 +205,6 @@ mr.query.files=\ infer_bucket_sort.q,\ input37.q,\ input39.q,\ - input44.q,\ inputwherefalse.q,\ join_map_ppr.q,\ join_vc.q,\ @@ -222,24 +221,10 @@ mr.query.files=\ mapjoin_subquery2.q,\ mapjoin_test_outer.q,\ masking_5.q,\ - msck_repair_0.q,\ - msck_repair_1.q,\ - msck_repair_2.q,\ - msck_repair_3.q,\ - msck_repair_4.q,\ - msck_repair_5.q,\ - msck_repair_6.q,\ - msck_repair_acid.q,\ - msck_repair_batchsize.q,\ - msck_repair_drop.q,\ nonmr_fetch.q,\ nonreserved_keywords_input37.q,\ - nullformat.q,\ - nullformatCTAS.q,\ parenthesis_star_by.q,\ - partition_discovery.q,\ partition_vs_table_metadata.q,\ - repair.q,\ row__id.q,\ sample_islocalmode_hook.q,\ sample_islocalmode_hook_use_metadata.q,\ @@ -280,7 +265,6 @@ mr.query.files=\ sort_merge_join_desc_7.q,\ sort_merge_join_desc_8.q,\ stats_noscan_2.q,\ - symlink_text_input_format.q,\ timestamptz_2.q,\ transform_acid.q,\ type_change_test_fraction_vectorized.q,\ diff --git a/pom.xml b/pom.xml index 44fff7d..bb93b52 100644 --- a/pom.xml +++ b/pom.xml @@ -91,6 +91,7 @@ INFO ${project.build.directory}/warehouse + ${project.build.directory}/localfs/warehouse pfile:// @@ -1412,6 +1413,7 @@ ${test.dfs.mkdir} ${test.output.overwrite} ${test.warehouse.scheme}${test.warehouse.dir} + 
${test.warehouse.scheme}${test.local.warehouse.dir} true diff --git a/ql/src/test/queries/clientpositive/input44.q b/ql/src/test/queries/clientpositive/input44.q index c4ed032..21a3af9 100644 --- a/ql/src/test/queries/clientpositive/input44.q +++ b/ql/src/test/queries/clientpositive/input44.q @@ -4,4 +4,4 @@ CREATE TABLE dest_n0(key INT, value STRING) STORED AS TEXTFILE; SET hive.output.file.extension=.txt; INSERT OVERWRITE TABLE dest_n0 SELECT src.* FROM src; -dfs -cat ${sy
[hive] branch master updated: HIVE-21952 : Allow unsetting of serde properties (Miklos Gergely, reviewed by David Mollitor)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 8208df9 HIVE-21952 : Allow unsetting of serde properties (Miklos Gergely, reviewed by David Mollitor) 8208df9 is described below commit 8208df93b465bfa829052fd042a43d89bab86c31 Author: Miklos Gergely AuthorDate: Tue Jun 16 09:45:33 2020 +0200 HIVE-21952 : Allow unsetting of serde properties (Miklos Gergely, reviewed by David Mollitor) --- .../cli/SemanticAnalysis/HCatSemanticAnalyzer.java | 6 +- .../org/apache/hadoop/hive/ql/parse/HiveParser.g | 15 ++- .../serde/AlterTableSetSerdePropsAnalyzer.java | 2 +- ...java => AlterTableUnsetSerdePropsAnalyzer.java} | 10 +- .../serde/AlterTableUnsetSerdePropsDesc.java | 46 +++ .../serde/AlterTableUnsetSerdePropsOperation.java | 43 +++ .../apache/hadoop/hive/ql/plan/HiveOperation.java | 6 +- ql/src/test/queries/clientpositive/table_storage.q | 14 +++ .../clientpositive/llap/table_storage.q.out| 137 + 9 files changed, 264 insertions(+), 15 deletions(-) diff --git a/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java b/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java index cd54e28..941f6b8 100644 --- a/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java +++ b/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java @@ -127,7 +127,8 @@ public class HCatSemanticAnalyzer extends HCatSemanticAnalyzerBase { case HiveParser.TOK_ALTERTABLE_ADDPARTS: case HiveParser.TOK_ALTERTABLE_ADDCOLS: case HiveParser.TOK_ALTERTABLE_CHANGECOL_AFTER_POSITION: -case HiveParser.TOK_ALTERTABLE_SERDEPROPERTIES: +case HiveParser.TOK_ALTERTABLE_SETSERDEPROPERTIES: +case HiveParser.TOK_ALTERTABLE_UNSETSERDEPROPERTIES: case 
HiveParser.TOK_ALTERTABLE_CLUSTER_SORT: case HiveParser.TOK_ALTERTABLE_DROPPARTS: case HiveParser.TOK_ALTERTABLE_PROPERTIES: @@ -212,7 +213,8 @@ public class HCatSemanticAnalyzer extends HCatSemanticAnalyzerBase { case HiveParser.TOK_ALTERTABLE_ADDPARTS: case HiveParser.TOK_ALTERTABLE_ADDCOLS: case HiveParser.TOK_ALTERTABLE_CHANGECOL_AFTER_POSITION: - case HiveParser.TOK_ALTERTABLE_SERDEPROPERTIES: + case HiveParser.TOK_ALTERTABLE_SETSERDEPROPERTIES: + case HiveParser.TOK_ALTERTABLE_UNSETSERDEPROPERTIES: case HiveParser.TOK_ALTERTABLE_CLUSTER_SORT: case HiveParser.TOK_ALTERTABLE_DROPPARTS: case HiveParser.TOK_ALTERTABLE_PROPERTIES: diff --git a/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g b/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g index 768a3a1..0f9caae 100644 --- a/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g +++ b/parser/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g @@ -172,8 +172,10 @@ TOK_ALTERPARTITION_MERGEFILES; TOK_ALTERTABLE_TOUCH; TOK_ALTERTABLE_ARCHIVE; TOK_ALTERTABLE_UNARCHIVE; -TOK_ALTERTABLE_SERDEPROPERTIES; -TOK_ALTERPARTITION_SERDEPROPERTIES; +TOK_ALTERTABLE_SETSERDEPROPERTIES; +TOK_ALTERPARTITION_SETSERDEPROPERTIES; +TOK_ALTERTABLE_UNSETSERDEPROPERTIES; +TOK_ALTERPARTITION_UNSETSERDEPROPERTIES; TOK_ALTERTABLE_SERIALIZER; TOK_ALTERPARTITION_SERIALIZER; TOK_ALTERTABLE_UPDATECOLSTATS; @@ -1452,14 +1454,17 @@ alterViewSuffixProperties ; alterStatementSuffixSerdeProperties[boolean partition] -@init { pushMsg("alter serdes statement", state); } +@init { pushMsg("alter serde statement", state); } @after { popMsg(state); } : KW_SET KW_SERDE serdeName=StringLiteral (KW_WITH KW_SERDEPROPERTIES tableProperties)? -> {partition}? ^(TOK_ALTERPARTITION_SERIALIZER $serdeName tableProperties?) -> ^(TOK_ALTERTABLE_SERIALIZER $serdeName tableProperties?) | KW_SET KW_SERDEPROPERTIES tableProperties --> {partition}? 
^(TOK_ALTERPARTITION_SERDEPROPERTIES tableProperties) --> ^(TOK_ALTERTABLE_SERDEPROPERTIES tableProperties) +-> {partition}? ^(TOK_ALTERPARTITION_SETSERDEPROPERTIES tableProperties) +-> ^(TOK_ALTERTABLE_SETSERDEPROPERTIES tableProperties) +| KW_UNSET KW_SERDEPROPERTIES tableProperties +-> {partition}? ^(TOK_ALTERPARTITION_UNSETSERDEPROPERTIES tableProperties) +-> ^(TOK_ALTERTABLE_UNSETSERDEPROPERTIES tableProperties) ; tablePartitionPrefix diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/storage/serde/AlterTableSetSerdePropsAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/storage/serde/AlterTableSetSerdePropsAnalyzer.java index 2be5dc6..16453e1 100644 --- a/
[hive] branch master updated: HIVE-23547 Enforce testconfiguration.properties file format and alphabetical order (Miklos Gergely, reviewed by Laszlo Bodor)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new f49d257 HIVE-23547 Enforce testconfiguration.properties file format and alphabetical order (Miklos Gergely, reviewed by Laszlo Bodor) f49d257 is described below commit f49d257c560c81c38259e95023b20c544acb4d10 Author: miklosgergely AuthorDate: Mon May 25 14:00:13 2020 +0200 HIVE-23547 Enforce testconfiguration.properties file format and alphabetical order (Miklos Gergely, reviewed by Laszlo Bodor) --- itests/bin/validateTestConfiguration.sh| 60 itests/pom.xml | 25 ++ .../test/resources/testconfiguration.properties| 304 ++--- .../clientpositive/{tez-tag.q => tez_tag.q}| 0 .../tez/{tez-tag.q.out => tez_tag.q.out} | 0 5 files changed, 237 insertions(+), 152 deletions(-) diff --git a/itests/bin/validateTestConfiguration.sh b/itests/bin/validateTestConfiguration.sh new file mode 100755 index 000..6d57520 --- /dev/null +++ b/itests/bin/validateTestConfiguration.sh @@ -0,0 +1,60 @@ +#!/bin/bash + +echo "Validating testconfiguration.properties format" + +HIVE_ROOT=$1 +export LC_ALL=C + +state="out" +row=0 +last_test_name= +group= +while IFS= read -r line; do + row=$((row+1)) + if [ "$state" == "out" ]; then +[ -z "$line" ] && continue +[[ $line == \#* ]] && continue + +parts=(${line//=/ }) +if [[ ${#parts[@]} != 2 ]]; then + echo "group declaration should contain exactly one '=', but in row $row: '$line'" + exit 1 +fi + +group=${parts[0]} +last_test_name= +state="in" + else +if ! [[ "$line" =~ [[:space:]][[:space:]]* ]]; then + echo "lines within group should start with two spaces, but in row $row: '$line'" + exit 1 +fi + +file=${line:2} +if [[ ${line: -2} == ",\\" ]]; then + file=${file%??} +else + state="out" +fi + +if ! 
[[ ${file: -2} == ".q" ]]; then + echo "file name should end with '.q', but in row $row: '$line'" + exit 1 +fi + +test_name=${file%??} +if [[ "$test_name" = *[^a-zA-Z0-9_]* ]]; then + echo "test name should contain only letters, numbers and '_' characters, but in row $row: '$line'" + exit 1 +fi + +if [[ $last_test_name > $test_name ]]; then + echo "files should be in alphabetic order within group, but in group $group in row $row: $test_name < $last_test_name " + exit 1 +fi + +last_test_name=$test_name + fi +done < $HIVE_ROOT/itests/src/test/resources/testconfiguration.properties + +echo "Validation of testconfiguration.properties finished successfully" diff --git a/itests/pom.xml b/itests/pom.xml index d4fb252..faadce3 100644 --- a/itests/pom.xml +++ b/itests/pom.xml @@ -482,4 +482,29 @@ + + + +org.apache.maven.plugins +maven-antrun-plugin + + +validate testconfiguration.properties +generate-sources + + run + + + + + + + + + + + + + + diff --git a/itests/src/test/resources/testconfiguration.properties b/itests/src/test/resources/testconfiguration.properties index 92ae8c2..1fd09eb 100644 --- a/itests/src/test/resources/testconfiguration.properties +++ b/itests/src/test/resources/testconfiguration.properties @@ -14,18 +14,18 @@ minitez.query.files.shared=\ minitez.query.files=\ acid_vectorization_original_tez.q,\ delete_orig_table.q,\ - explainuser_3.q,\ explainanalyze_1.q,\ explainanalyze_3.q,\ explainanalyze_4.q,\ explainanalyze_5.q,\ + explainuser_3.q,\ multi_count_distinct.q,\ orc_merge12.q,\ orc_vectorization_ppd.q,\ - tez-tag.q,\ - tez_union_with_udf.q,\ - tez_union_udtf.q,\ tez_complextype_with_null.q,\ + tez_tag.q,\ + tez_union_udtf.q,\ + tez_union_with_udf.q,\ update_orig_table.q,\ vector_join_part_col_char.q,\ vector_non_string_partition.q @@ -37,89 +37,31 @@ minillap.query.files=\ add_part_with_loc.q,\ alter_table_location2.q,\ alter_table_location3.q,\ + autoColumnStats_6.q,\ + autogen_colalias.q,\ + binary_output_format.q,\ bucket5.q,\ bucket6.q,\ + 
compressed_skip_header_footer_aggr.q,\ + create_genericudaf.q,\ + create_udaf.q,\ + create_view.q,\ cte_2.q,\ cte_4.q,\ + cttl.q,\ + dynamic_partition_pruning_2.q,\ dynamic_semijoin_user_level.q,\ + dynpart_cast.q,\ + empty_dir_in_table.q,\ except_distinct.q,\ exp
[hive] branch master updated (ef7a9de -> 9ab45e2)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git. from ef7a9de HIVE-23480: use the JsonPropertyOrder annotation to ensure the ordering of the serialized properties. (Panos G via Ashutosh Chauhan) add 9ab45e2 HIVE-23510 TestMiniLlapLocalCliDriver should be the default driver for q tests ADDENDUM remove row left in accidentally (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) No new revisions were added by this update. Summary of changes: itests/src/test/resources/testconfiguration.properties | 1 - 1 file changed, 1 deletion(-)
[hive] branch master updated: HIVE-23513 Fix Json output for SHOW TABLES and SHOW MATERIALIZED VIEWS (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new c641497 HIVE-23513 Fix Json output for SHOW TABLES and SHOW MATERIALIZED VIEWS (Miklos Gergely, reviewed by Zoltan Haindrich) c641497 is described below commit c64149747826e305dd5f9156ee9e6e8cf0c1e863 Author: miklosgergely AuthorDate: Tue May 19 22:14:27 2020 +0200 HIVE-23513 Fix Json output for SHOW TABLES and SHOW MATERIALIZED VIEWS (Miklos Gergely, reviewed by Zoltan Haindrich) --- .../test/resources/testconfiguration.properties| 3 +- .../metadata/formatting/JsonMetaDataFormatter.java | 53 +-- .../test/queries/clientpositive/show_json_format.q | 25 ++ .../clientpositive/llap/show_json_format.q.out | 100 + 4 files changed, 150 insertions(+), 31 deletions(-) diff --git a/itests/src/test/resources/testconfiguration.properties b/itests/src/test/resources/testconfiguration.properties index 0d06d02..38a8103 100644 --- a/itests/src/test/resources/testconfiguration.properties +++ b/itests/src/test/resources/testconfiguration.properties @@ -3025,7 +3025,8 @@ minillaplocal.query.files=\ windowing_range_multiorder.q,\ windowing_streaming.q,\ windowing_udaf.q,\ - windowing_windowspec3.q + windowing_windowspec3.q,\ + show_json_format.q encrypted.query.files=encryption_join_unencrypted_tbl.q,\ encryption_insert_partition_static.q,\ diff --git a/ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/JsonMetaDataFormatter.java b/ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/JsonMetaDataFormatter.java index a1611e3..00ddb1d 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/JsonMetaDataFormatter.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/JsonMetaDataFormatter.java @@ -35,6 +35,8 @@ import org.apache.commons.lang3.StringUtils; import org.slf4j.Logger; import 
org.slf4j.LoggerFactory; +import com.google.common.collect.ImmutableMap; + import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.Path; @@ -130,48 +132,38 @@ public class JsonMetaDataFormatter implements MetaDataFormatter { * Show a list of tables including table types. */ @Override - public void showTablesExtended(DataOutputStream out, List tables) - throws HiveException { + public void showTablesExtended(DataOutputStream out, List tables) throws HiveException { if (tables.isEmpty()) { - // Nothing to do return; } -MapBuilder builder = MapBuilder.create(); -ArrayList> res = new ArrayList>(); +List> tableDataList = new ArrayList>(); for (Table table : tables) { - final String tableName = table.getTableName(); - final String tableType = table.getTableType().toString(); - res.add(builder - .put("Table Name", tableName) - .put("Table Type", tableType) - .build()); + Map tableData = ImmutableMap.of( + "Table Name", table.getTableName(), + "Table Type", table.getTableType().toString()); + tableDataList.add(tableData); } -asJson(out, builder.put("tables", res).build()); +asJson(out, ImmutableMap.of("tables", tableDataList)); } /** * Show a list of materialized views. */ @Override - public void showMaterializedViews(DataOutputStream out, List materializedViews) - throws HiveException { + public void showMaterializedViews(DataOutputStream out, List materializedViews) throws HiveException { if (materializedViews.isEmpty()) { - // Nothing to do return; } -MapBuilder builder = MapBuilder.create(); -ArrayList> res = new ArrayList>(); -for (Table mv : materializedViews) { - final String mvName = mv.getTableName(); - final String rewriteEnabled = mv.isRewriteEnabled() ? 
"Yes" : "No"; +List> materializedViewDataList = new ArrayList>(); +for (Table materializedView : materializedViews) { // Currently, we only support manual refresh // TODO: Update whenever we have other modes - final String refreshMode = "Manual refresh"; - final String timeWindowString = mv.getProperty(MATERIALIZED_VIEW_REWRITING_TIME_WINDOW); - final String mode; - if (!org.apache.commons.lang3.StringUtils.isEmpty(timeWindowString)) { + String refreshMode = "Manual refresh"; + String timeWindowString = materializedView.getProperty(MATERIALIZED_VIEW_REWRITING_TIME_WINDOW); + String mode; + if (!StringUtils.isEmpty(timeWindowString)) { long time = HiveConf.toTime(timeWindowString, HiveConf.getDefaultTimeUnit(HiveConf.ConfVars.HIVE_MATERI
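The HIVE-23513 change above rewrites JsonMetaDataFormatter.showTablesExtended to emit one map per table under a "tables" key. A minimal Python sketch of the resulting JSON shape — the field names ("Table Name", "Table Type", "tables") come from the diff; the function name and sample table data are hypothetical:

```python
import json

def show_tables_json(tables):
    """Mirror the 'tables' JSON structure produced by
    JsonMetaDataFormatter.showTablesExtended after the fix:
    a top-level 'tables' array of {'Table Name', 'Table Type'} maps."""
    return json.dumps({
        "tables": [
            {"Table Name": name, "Table Type": table_type}
            for name, table_type in tables
        ]
    })

# Hypothetical sample data:
print(show_tables_json([("src", "MANAGED_TABLE"), ("v1", "VIRTUAL_VIEW")]))
```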
[hive] branch master updated: HIVE-23508 Do not show parameters column for non-extended desc database (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new fb7d52e HIVE-23508 Do not show parameters column for non-extended desc database (Miklos Gergely, reviewed by Zoltan Haindrich) fb7d52e is described below commit fb7d52ec577daef859722eab551463ad10f981aa Author: miklosgergely AuthorDate: Tue May 19 17:34:00 2020 +0200 HIVE-23508 Do not show parameters column for non-extended desc database (Miklos Gergely, reviewed by Zoltan Haindrich) --- .../ql/ddl/database/desc/DescDatabaseAnalyzer.java | 2 +- .../ql/ddl/database/desc/DescDatabaseDesc.java | 19 +-- .../ddl/database/desc/DescDatabaseOperation.java | 2 +- .../queries/clientpositive/describe_database.q | 14 +++-- .../clientpositive/beeline/escape_comments.q.out | 2 +- .../clientpositive/llap/alter_db_owner.q.out | 6 +- .../llap/authorization_owner_actions_db.q.out | 2 +- .../clientpositive/llap/database_properties.q.out | 2 +- .../clientpositive/llap/db_ddl_explain.q.out | 2 +- .../clientpositive/llap/describe_database.q.out| 66 +++--- .../clientpositive/tez/explainanalyze_3.q.out | 2 +- .../results/clientpositive/tez/explainuser_3.q.out | 2 +- 12 files changed, 93 insertions(+), 28 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseAnalyzer.java index b460811..6b4860b 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseAnalyzer.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseAnalyzer.java @@ -56,6 +56,6 @@ public class DescDatabaseAnalyzer extends BaseSemanticAnalyzer { rootTasks.add(task); task.setFetchSource(true); -setFetchTask(createFetchTask(DescDatabaseDesc.DESC_DATABASE_SCHEMA)); +setFetchTask(createFetchTask(desc.getSchema())); } } diff 
--git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseDesc.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseDesc.java index b92ed21..09751ee 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseDesc.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseDesc.java @@ -33,21 +33,26 @@ public class DescDatabaseDesc implements DDLDesc, Serializable { private static final long serialVersionUID = 1L; public static final String DESC_DATABASE_SCHEMA = + "db_name,comment,location,managedLocation,owner_name,owner_type#string:string:string:string:string:string"; + + public static final String DESC_DATABASE_SCHEMA_EXTENDED = "db_name,comment,location,managedLocation,owner_name,owner_type,parameters#" + "string:string:string:string:string:string:string"; private final String resFile; private final String dbName; - private final boolean isExt; + private final boolean isExtended; - public DescDatabaseDesc(Path resFile, String dbName, boolean isExt) { + public DescDatabaseDesc(Path resFile, String dbName, boolean isExtended) { this.resFile = resFile.toString(); this.dbName = dbName; -this.isExt = isExt; +this.isExtended = isExtended; } - public boolean isExt() { -return isExt; + @Explain(displayName = "extended", displayOnlyOnTrue=true, + explainLevels = { Level.USER, Level.DEFAULT, Level.EXTENDED }) + public boolean isExtended() { +return isExtended; } @Explain(displayName = "database", explainLevels = { Level.USER, Level.DEFAULT, Level.EXTENDED }) @@ -59,4 +64,8 @@ public class DescDatabaseDesc implements DDLDesc, Serializable { public String getResFile() { return resFile; } + + public String getSchema() { +return isExtended ? 
DESC_DATABASE_SCHEMA_EXTENDED : DESC_DATABASE_SCHEMA; + } } diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseOperation.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseOperation.java index 406397d..52b7eb9 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseOperation.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/desc/DescDatabaseOperation.java @@ -48,7 +48,7 @@ public class DescDatabaseOperation extends DDLOperation { } SortedMap params = null; - if (desc.isExt()) { + if (desc.isExtended()) { params = new TreeMap<>(database.getParameters()); } diff --git a/ql/src/test/queries/clientpositive/describe_database.q b/ql/src/test/queries/clientpositive/describe_database.q index 961bf55..15bbca0 100644 --- a/ql/src/test/queries/clientpositive/describe_database.q +++ b/ql/src/test/queries/cli
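The HIVE-23508 diff above makes the fetch-task schema depend on whether `DESCRIBE DATABASE ... EXTENDED` was requested, so the `parameters` column only appears in extended output. A minimal sketch of that schema-selection pattern follows; the class name and constants are simplified stand-ins mirroring the diff, not Hive's actual `DescDatabaseDesc` API.

```java
// Simplified stand-in for the schema selection introduced in HIVE-23508.
// The analyzer asks the desc object for its schema instead of hard-coding
// a single constant, so the fetch schema matches the produced columns.
public class DescDatabaseDescSketch {
    // Plain DESCRIBE DATABASE: no "parameters" column.
    static final String SCHEMA =
        "db_name,comment,location,managedLocation,owner_name,owner_type#"
        + "string:string:string:string:string:string";
    // DESCRIBE DATABASE ... EXTENDED: adds the "parameters" column.
    static final String SCHEMA_EXTENDED =
        "db_name,comment,location,managedLocation,owner_name,owner_type,parameters#"
        + "string:string:string:string:string:string:string";

    private final boolean isExtended;

    DescDatabaseDescSketch(boolean isExtended) {
        this.isExtended = isExtended;
    }

    // Mirrors the getSchema() added to DescDatabaseDesc in the diff above.
    String getSchema() {
        return isExtended ? SCHEMA_EXTENDED : SCHEMA;
    }

    public static void main(String[] args) {
        // Column list before the '#' separator, with and without EXTENDED.
        System.out.println(new DescDatabaseDescSketch(false).getSchema().split("#")[0]);
        System.out.println(new DescDatabaseDescSketch(true).getSchema().split("#")[0]);
    }
}
```

The point of routing the schema through the desc object is that `DescDatabaseAnalyzer` no longer hard-codes `DESC_DATABASE_SCHEMA`, so the fetch task's column set always agrees with what the operation actually writes.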
[hive] 01/01: move files to llap
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch HIVE-23470_rb in repository https://gitbox.apache.org/repos/asf/hive.git commit 4f4aef81ee293838fee56657ffe07381f19492b4 Author: miklosgergely AuthorDate: Thu May 14 22:56:52 2020 +0200 move files to llap --- .../clientpositive/{ => llap}/autoColumnStats_6.q.out | 0 .../clientpositive/{ => llap}/autogen_colalias.q.out| 0 .../clientpositive/{ => llap}/binary_output_format.q.out| 0 .../clientpositive/{ => llap}/create_genericudaf.q.out | 0 .../results/clientpositive/{ => llap}/create_udaf.q.out | 0 .../results/clientpositive/{ => llap}/create_view.q.out | 0 .../clientpositive/{ => llap}/gen_udf_example_add10.q.out | 0 .../results/clientpositive/{ => llap}/groupby_bigdata.q.out | 0 .../clientpositive/{ => llap}/infer_bucket_sort.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/input14.q.out | 0 .../results/clientpositive/{ => llap}/input14_limit.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/input17.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/input18.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/input20.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/input33.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/input34.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/input35.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/input36.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/input38.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/input5.q.out | 0 .../results/clientpositive/{ => llap}/insert_into3.q.out| 0 .../results/clientpositive/{ => llap}/insert_into4.q.out| 0 .../results/clientpositive/{ => llap}/insert_into5.q.out| 0 .../results/clientpositive/{ => llap}/insert_into6.q.out| 0 .../clientpositive/{ => llap}/load_binary_data.q.out| Bin ql/src/test/results/clientpositive/{ => llap}/macro_1.q.out | 0 .../results/clientpositive/{ => llap}/macro_duplicate.q.out | 0 
.../test/results/clientpositive/{ => llap}/mapreduce3.q.out | 0 .../test/results/clientpositive/{ => llap}/mapreduce4.q.out | 0 .../test/results/clientpositive/{ => llap}/mapreduce7.q.out | 0 .../test/results/clientpositive/{ => llap}/mapreduce8.q.out | 0 .../{ => llap}/merge_test_dummy_operator.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/newline.q.out | 0 .../{ => llap}/nonreserved_keywords_insert_into1.q.out | 0 .../test/results/clientpositive/{ => llap}/nullscript.q.out | 0 .../results/clientpositive/{ => llap}/orc_createas1.q.out | 0 .../test/results/clientpositive/{ => llap}/partcols1.q.out | 0 .../results/clientpositive/{ => llap}/ppd_transform.q.out | 0 .../results/clientpositive/{ => llap}/query_with_semi.q.out | 0 .../results/clientpositive/{ => llap}/rcfile_bigdata.q.out | 0 .../results/clientpositive/{ => llap}/regexp_extract.q.out | 0 .../results/clientpositive/{ => llap}/script_env_var1.q.out | 0 .../results/clientpositive/{ => llap}/script_env_var2.q.out | 0 .../results/clientpositive/{ => llap}/script_pipe.q.out | 0 .../results/clientpositive/{ => llap}/scriptfile1.q.out | 0 .../clientpositive/{ => llap}/select_transform_hint.q.out | 0 .../test/results/clientpositive/{ => llap}/str_to_map.q.out | 0 .../clientpositive/{ => llap}/temp_table_partcols1.q.out| 0 .../test/results/clientpositive/{ => llap}/transform1.q.out | 0 .../test/results/clientpositive/{ => llap}/transform2.q.out | 0 .../test/results/clientpositive/{ => llap}/transform3.q.out | 0 .../results/clientpositive/{ => llap}/transform_acid.q.out | 0 .../results/clientpositive/{ => llap}/transform_ppr1.q.out | 0 .../results/clientpositive/{ => llap}/transform_ppr2.q.out | 0 .../results/clientpositive/{ => llap}/udaf_sum_list.q.out | 0 .../test/results/clientpositive/{ => llap}/udf_printf.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/union23.q.out | 0 .../results/clientpositive/{ => llap}/union_script.q.out| 0 .../{ => llap}/vector_custom_udf_configure.q.out| 0 
.../results/clientpositive/{ => llap}/vector_udf3.q.out | 0 60 files changed, 0 insertions(+), 0 deletions(-) diff --git a/ql/src/test/results/clientpositive/autoColumnStats_6.q.out b/ql/src/test/results/clientpositive/llap/autoColumnStats_6.q.out similarity index 100% rename from ql/src/test/results/clientpositive/autoColumnStats_6.q.out rename to ql/src/test/results/clientpositive/llap/autoColumnStats_6.q.out diff --git a/ql/src/test/results/clientpositive/autogen_colalias.q.out b/ql/src/test/results/clie
[hive] branch HIVE-23470_rb updated (0b00a83 -> 4f4aef8)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a change to branch HIVE-23470_rb in repository https://gitbox.apache.org/repos/asf/hive.git. discard 0b00a83 move files to tez add ee4daec HIVE-23414: Detail Hive Java Compatibility (David Mollitor, reviewed by Naveen Gangam) add fdf6758 HIVE-23351: Ranger Replication Scheduling (Aasha Medhi, reviewed by Pravin Kumar Sinha) add 8bfdd18 HIVE-23436: Staging directory is not removed for stats gathering tasks (Peter Vary reviewed by Zoltan Haindrich) add 7ebc546 HIVE-23442: ACID major compaction doesn't read base directory correctly if it was written by insert overwrite (Marta Kuczora, reviewed by Peter Vary) add 57c1593 HIVE-23445 : Remove mapreduce.workflow.* configs (Ashutosh Chauhan via Gopal V) add 2ff6370 HIVE-23053: Clean Up Stats Mergers (David Mollitor, reviewed by Ashutosh Chauhan) add 9ffbbdc HIVE-23409 : If TezSession application reopen fails for Timeline service down, default TezSession from SessionPool is closed after a retry ( Naresh PR via Ashutosh Chauhan) add 3193589 HIVE-23338: Bump jackson version to 2.10.0 (Karen Coppage via Peter Vary) add 472aca8 HIVE-23344: Bump scala version to 2.12.4, spark to 2.4.5 (Karen Coppage via Peter Vary) add b63c35a HIVE-23451 : FileSinkOperator calls deleteOnExit (hdfs call) twice for the same file ( Rajesh Balamohan via Ashutosh Chauhan ) add 9f40d7c HIVE-23423 : Check of disabling hash aggregation ignores grouping set ( Gopal V via Ashutosh Chauhan) add ce53f3e HIVE-23133: Numeric operations can have different result across hardware archs (Zhenyu Zheng, reviewed by Chinna Rao L) add 43ac992 HIVE-23407: Prompt Beeline Users To Enable Verbose Logging on Error (David Mollitor, reviewed by Ashutosh Chauhan) add 390ad7d HIVE-23099: Improve Logger for Operation Child Classes (David Mollitor, reviewed by Ashutosh Chauhan) new 4f4aef8 move files to llap This update added new revisions after undoing existing revisions. 
That is to say, some revisions that were in the old version of the branch are not in the new version. This situation occurs when a user --force pushes a change and generates a repository containing something like this: * -- * -- B -- O -- O -- O (0b00a83) \ N -- N -- N refs/heads/HIVE-23470_rb (4f4aef8) You should already have received notification emails for all of the O revisions, and so the following emails describe only the N revisions from the common base, B. Any revisions marked "omit" are not gone; other references still refer to them. Any revisions marked "discard" are gone forever. The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: README.md | 19 +- beeline/src/main/resources/BeeLine.properties |9 +- .../java/org/apache/hadoop/hive/conf/HiveConf.java | 16 + .../hive/ql/parse/TestReplicationScenarios.java| 11 +- .../TestReplicationScenariosAcrossInstances.java | 68 + .../hive/ql/txn/compactor/TestCompactor.java | 40 +- itests/qtest-druid/pom.xml |2 +- kafka-handler/pom.xml |4 +- pom.xml| 19 +- ql/pom.xml |5 + .../apache/hadoop/hive/ql/plan/api/StageType.java | 10 +- .../java/org/apache/hadoop/hive/ql/Compiler.java |3 - ql/src/java/org/apache/hadoop/hive/ql/Context.java | 16 +- ql/src/java/org/apache/hadoop/hive/ql/Driver.java |2 +- .../java/org/apache/hadoop/hive/ql/Executor.java |2 - .../AlterMaterializedViewRebuildAnalyzer.java |2 +- .../org/apache/hadoop/hive/ql/exec/DagUtils.java |4 +- .../hadoop/hive/ql/exec/FileSinkOperator.java |5 - .../apache/hadoop/hive/ql/exec/TaskFactory.java|6 + .../org/apache/hadoop/hive/ql/exec/Utilities.java | 22 - .../hadoop/hive/ql/exec/repl/RangerDumpTask.java | 130 ++ .../hadoop/hive/ql/exec/repl/RangerDumpWork.java | 50 + .../hadoop/hive/ql/exec/repl/RangerLoadTask.java | 137 ++ .../hadoop/hive/ql/exec/repl/RangerLoadWork.java 
| 59 + .../hadoop/hive/ql/exec/repl/ReplDumpTask.java | 31 +- .../hadoop/hive/ql/exec/repl/ReplLoadTask.java | 550 +++ .../hadoop/hive/ql/exec/repl/ReplLoadWork.java |8 +- .../ql/exec/repl/ranger/NoOpRangerRestClient.java | 71 + .../ql/exec/repl/ranger/RangerBaseModelObject.java | 191 +++ .../exec/repl/ranger/RangerExportPolicyList.java | 52 + .../hive/ql/exec/repl/ranger/RangerPolicy.java | 1513 .../hive/ql/exec/repl/ranger/RangerPolicyL
[hive] 01/01: move files to tez
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch HIVE-23470_rb in repository https://gitbox.apache.org/repos/asf/hive.git commit 0b00a835d6be320a850d5b430934d68a472e6e3f Author: miklosgergely AuthorDate: Thu May 14 16:54:20 2020 +0200 move files to tez --- .../clientpositive/{ => tez}/autoColumnStats_6.q.out| 0 .../results/clientpositive/{ => tez}/autogen_colalias.q.out | 0 .../clientpositive/{ => tez}/binary_output_format.q.out | 0 .../clientpositive/{ => tez}/create_genericudaf.q.out | 0 .../test/results/clientpositive/{ => tez}/create_udaf.q.out | 0 .../test/results/clientpositive/{ => tez}/create_view.q.out | 0 .../test/results/clientpositive/{ => tez}/f_is_null.q.out | 0 .../clientpositive/{ => tez}/gen_udf_example_add10.q.out| 0 .../results/clientpositive/{ => tez}/groupby_bigdata.q.out | 0 .../clientpositive/{ => tez}/infer_bucket_sort.q.out| 0 ql/src/test/results/clientpositive/{ => tez}/input14.q.out | 0 .../results/clientpositive/{ => tez}/input14_limit.q.out| 0 ql/src/test/results/clientpositive/{ => tez}/input17.q.out | 0 ql/src/test/results/clientpositive/{ => tez}/input18.q.out | 0 ql/src/test/results/clientpositive/{ => tez}/input20.q.out | 0 ql/src/test/results/clientpositive/{ => tez}/input33.q.out | 0 ql/src/test/results/clientpositive/{ => tez}/input34.q.out | 0 ql/src/test/results/clientpositive/{ => tez}/input35.q.out | 0 ql/src/test/results/clientpositive/{ => tez}/input36.q.out | 0 ql/src/test/results/clientpositive/{ => tez}/input38.q.out | 0 ql/src/test/results/clientpositive/{ => tez}/input5.q.out | 0 .../results/clientpositive/{ => tez}/insert_into3.q.out | 0 .../results/clientpositive/{ => tez}/insert_into4.q.out | 0 .../results/clientpositive/{ => tez}/insert_into5.q.out | 0 .../results/clientpositive/{ => tez}/insert_into6.q.out | 0 .../results/clientpositive/{ => tez}/load_binary_data.q.out | Bin .../results/clientpositive/{ => tez}/localtimezone.q.out| 0 
ql/src/test/results/clientpositive/{ => tez}/macro_1.q.out | 0 .../results/clientpositive/{ => tez}/macro_duplicate.q.out | 0 .../test/results/clientpositive/{ => tez}/mapreduce3.q.out | 0 .../test/results/clientpositive/{ => tez}/mapreduce4.q.out | 0 .../test/results/clientpositive/{ => tez}/mapreduce7.q.out | 0 .../test/results/clientpositive/{ => tez}/mapreduce8.q.out | 0 .../{ => tez}/merge_test_dummy_operator.q.out | 0 ql/src/test/results/clientpositive/{ => tez}/newline.q.out | 0 .../{ => tez}/nonreserved_keywords_insert_into1.q.out | 0 .../test/results/clientpositive/{ => tez}/nullscript.q.out | 0 .../results/clientpositive/{ => tez}/orc_createas1.q.out| 0 .../test/results/clientpositive/{ => tez}/partcols1.q.out | 0 .../{ => tez}/partition_vs_table_metadata.q.out | 0 .../results/clientpositive/{ => tez}/ppd_transform.q.out| 0 .../results/clientpositive/{ => tez}/query_with_semi.q.out | 0 .../results/clientpositive/{ => tez}/rcfile_bigdata.q.out | 0 .../results/clientpositive/{ => tez}/regexp_extract.q.out | 0 .../results/clientpositive/{ => tez}/script_env_var1.q.out | 0 .../results/clientpositive/{ => tez}/script_env_var2.q.out | 0 .../test/results/clientpositive/{ => tez}/script_pipe.q.out | 0 .../test/results/clientpositive/{ => tez}/scriptfile1.q.out | 0 .../clientpositive/{ => tez}/select_transform_hint.q.out| 0 .../test/results/clientpositive/{ => tez}/str_to_map.q.out | 0 .../clientpositive/{ => tez}/temp_table_partcols1.q.out | 0 .../results/clientpositive/{ => tez}/timestamptz_2.q.out| 0 .../test/results/clientpositive/{ => tez}/transform1.q.out | 0 .../test/results/clientpositive/{ => tez}/transform2.q.out | 0 .../test/results/clientpositive/{ => tez}/transform3.q.out | 0 .../results/clientpositive/{ => tez}/transform_ppr1.q.out | 0 .../results/clientpositive/{ => tez}/transform_ppr2.q.out | 0 .../{ => tez}/type_change_test_fraction_vectorized.q.out| 0 .../{ => tez}/type_change_test_int_vectorized.q.out | 0 .../results/clientpositive/{ => 
tez}/typechangetest.q.out | 0 .../results/clientpositive/{ => tez}/udaf_sum_list.q.out| 0 .../test/results/clientpositive/{ => tez}/udf_printf.q.out | 0 ql/src/test/results/clientpositive/{ => tez}/union23.q.out | 0 .../results/clientpositive/{ => tez}/union_script.q.out | 0 .../{ => tez}/vector_custom_udf_configure.q.out | 0 .../test/results/clientpositive/{ => tez}/vector_udf3.q.out | 0 .../clientpositive/{ => tez}/windowing_windowspec.q.ou
[hive] branch HIVE-23470_rb created (now 0b00a83)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a change to branch HIVE-23470_rb in repository https://gitbox.apache.org/repos/asf/hive.git. at 0b00a83 move files to tez This branch includes the following new commits: new 0b00a83 move files to tez The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.
[hive] branch HIVE-23440_280_rb created (now 5efb0ba)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a change to branch HIVE-23440_280_rb in repository https://gitbox.apache.org/repos/asf/hive.git. at 5efb0ba move files to llap This branch includes the following new commits: new 5efb0ba move files to llap The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.
[hive] 01/01: move files to llap
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch HIVE-23440_280_rb in repository https://gitbox.apache.org/repos/asf/hive.git commit 5efb0bac6344065e06d058d6440c41cb900f4e7f Author: miklosgergely AuthorDate: Mon May 11 19:10:00 2020 +0200 move files to llap --- .../clientpositive/{ => llap}/temp_table_merge_dynamic_partition.q.out| 0 .../clientpositive/{ => llap}/temp_table_merge_dynamic_partition2.q.out | 0 .../clientpositive/{ => llap}/temp_table_merge_dynamic_partition3.q.out | 0 .../clientpositive/{ => llap}/temp_table_merge_dynamic_partition4.q.out | 0 .../clientpositive/{ => llap}/temp_table_merge_dynamic_partition5.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/temp_table_options1.q.out | 0 .../{ => llap}/temp_table_parquet_mixed_partition_formats2.q.out | 0 .../results/clientpositive/{ => llap}/temp_table_partition_boolexpr.q.out | 0 .../{ => llap}/temp_table_partition_condition_remover.q.out | 0 .../results/clientpositive/{ => llap}/temp_table_partition_ctas.q.out | 0 .../clientpositive/{ => llap}/temp_table_partition_multilevels.q.out | 0 .../results/clientpositive/{ => llap}/temp_table_partition_pruning.q.out | 0 .../clientpositive/{ => llap}/temp_table_windowing_expressions.q.out | 0 .../test/results/clientpositive/{ => llap}/test_teradatabinaryfile.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/timestamp.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/timestamp_comparison3.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/timestamp_ints_casts.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/timestamp_literal.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/timestamptz.q.out | 0 .../test/results/clientpositive/{ => llap}/truncate_column_buckets.q.out | 0 .../results/clientpositive/{ => llap}/truncate_column_list_bucket.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/type_cast_1.q.out | 0 ql/src/test/results/clientpositive/{ => 
llap}/type_widening.q.out | 0 .../test/results/clientpositive/{ => llap}/udaf_binarysetfunctions.q.out | 0 .../clientpositive/{ => llap}/udaf_binarysetfunctions_no_cbo.q.out| 0 ql/src/test/results/clientpositive/{ => llap}/udaf_number_format.q.out| 0 .../results/clientpositive/{ => llap}/udaf_percentile_approx_23.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udaf_percentile_cont.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udaf_percentile_disc.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf1.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf2.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf3.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf4.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf5.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf6.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf7.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf8.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf9.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf_10_trims.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf_E.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf_PI.q.out| 0 ql/src/test/results/clientpositive/{ => llap}/udf_abs.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf_add_months.q.out| 0 ql/src/test/results/clientpositive/{ => llap}/udf_aes_decrypt.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf_aes_encrypt.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf_array.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf_ascii.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf_between.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf_bitwise_shiftleft.q.out | 0 .../test/results/clientpositive/{ => llap}/udf_bitwise_shiftright.q.out | 0 .../clientpositive/{ => llap}/udf_bitwise_shiftrightunsigned.q.out| 0 ql/src/test/results/clientpositive/{ => llap}/udf_case.q.out | 0 
.../test/results/clientpositive/{ => llap}/udf_case_column_pruning.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf_case_thrift.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf_cbrt.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf_character_length.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/udf_concat_insert1.q.out| 0
[hive] branch master updated: HIVE-23359 'show tables like' support for SQL wildcard characters (% and _) ADDENDUM - remove unused imports (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 134f3b2 HIVE-23359 'show tables like' support for SQL wildcard characters (% and _) APPENDUM - remove unused imports (Miklos Gergely, reviewed by Zoltan Haiandrich) 134f3b2 is described below commit 134f3b2d9446fda9cc7a8d7b02290742f0b3b64a Author: miklosgergely AuthorDate: Fri May 8 20:52:44 2020 +0200 HIVE-23359 'show tables like' support for SQL wildcard characters (% and _) APPENDUM - remove unused imports (Miklos Gergely, reviewed by Zoltan Haiandrich) --- .../hadoop/hive/ql/ddl/table/info/show/tables/ShowTablesOperation.java | 2 -- .../org/apache/hadoop/hive/ql/ddl/view/show/ShowViewsOperation.java | 2 -- 2 files changed, 4 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/tables/ShowTablesOperation.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/tables/ShowTablesOperation.java index b1d2dd2..bb2356a 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/tables/ShowTablesOperation.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/tables/ShowTablesOperation.java @@ -26,8 +26,6 @@ import java.util.ArrayList; import java.util.Collections; import java.util.Comparator; import java.util.List; -import java.util.SortedSet; -import java.util.TreeSet; import java.util.regex.Pattern; import java.util.stream.Collectors; diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/view/show/ShowViewsOperation.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/view/show/ShowViewsOperation.java index a2d92e2..c899ede 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/view/show/ShowViewsOperation.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/view/show/ShowViewsOperation.java @@ -21,8 +21,6 @@ package org.apache.hadoop.hive.ql.ddl.view.show; import 
java.io.DataOutputStream; import java.util.Collections; import java.util.List; -import java.util.SortedSet; -import java.util.TreeSet; import java.util.regex.Pattern; import java.util.stream.Collectors;
[hive] branch master updated: HIVE-23359 'show tables like' support for SQL wildcard characters (% and _) (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new c419a7b HIVE-23359 'show tables like' support for SQL wildcard characters (% and _) (Miklos Gergely, reviewed by Zoltan Haindrich) c419a7b is described below commit c419a7b99a69d1c1bede9394fc361591e57f4de6 Author: miklosgergely AuthorDate: Mon May 4 16:52:38 2020 +0200 HIVE-23359 'show tables like' support for SQL wildcard characters (% and _) (Miklos Gergely, reviewed by Zoltan Haindrich) --- .../ddl/database/show/ShowDatabasesOperation.java | 2 +- .../info/show/status/ShowTableStatusOperation.java | 51 - .../info/show/tables/ShowTablesOperation.java | 73 +++-- .../show/ShowMaterializedViewsOperation.java | 31 +++--- .../hive/ql/ddl/view/show/ShowViewsOperation.java | 30 +++--- .../org/apache/hadoop/hive/ql/metadata/Hive.java | 14 +-- .../metadata/formatting/JsonMetaDataFormatter.java | 3 +- .../ql/metadata/formatting/MetaDataFormatter.java | 3 +- .../metadata/formatting/TextMetaDataFormatter.java | 3 +- .../test/queries/clientnegative/show_tablestatus.q | 2 +- ql/src/test/queries/clientpositive/alter1.q| 2 +- ql/src/test/queries/clientpositive/alter2.q| 4 +- ql/src/test/queries/clientpositive/alter3.q| 2 +- ql/src/test/queries/clientpositive/alter4.q| 2 +- ql/src/test/queries/clientpositive/alter5.q| 2 +- ql/src/test/queries/clientpositive/create_view.q | 4 +- .../queries/clientpositive/describe_table_json.q | 4 +- .../clientpositive/encryption_auto_purge_tables.q | 4 +- .../queries/clientpositive/encryption_drop_table.q | 10 +- .../queries/clientpositive/encryption_move_tbl.q | 2 +- ql/src/test/queries/clientpositive/input2.q| 6 +- ql/src/test/queries/clientpositive/input3.q| 4 +- ql/src/test/queries/clientpositive/rename_column.q | 4 +- .../clientpositive/show_materialized_views.q | 16 +-- 
ql/src/test/queries/clientpositive/show_tables.q | 24 ++--- .../test/queries/clientpositive/show_tablestatus.q | 13 ++- ql/src/test/queries/clientpositive/show_views.q| 16 +-- .../test/queries/clientpositive/temp_table_names.q | 2 +- .../queries/clientpositive/temp_table_truncate.q | 2 +- .../results/clientnegative/show_tablestatus.q.out | 2 +- .../test/results/clientpositive/create_view.q.out | 8 +- .../encrypted/encryption_auto_purge_tables.q.out | 8 +- .../encrypted/encryption_drop_table.q.out | 20 ++-- .../encrypted/encryption_move_tbl.q.out| 4 +- .../test/results/clientpositive/llap/alter1.q.out | 4 +- .../test/results/clientpositive/llap/alter2.q.out | 8 +- .../test/results/clientpositive/llap/alter3.q.out | 4 +- .../test/results/clientpositive/llap/alter4.q.out | 4 +- .../test/results/clientpositive/llap/alter5.q.out | 4 +- .../clientpositive/llap/describe_table_json.q.out | 8 +- .../test/results/clientpositive/llap/input2.q.out | 14 +-- .../test/results/clientpositive/llap/input3.q.out | 8 +- .../clientpositive/llap/rename_column.q.out| 8 +- .../llap/show_materialized_views.q.out | 55 +++--- .../results/clientpositive/llap/show_tables.q.out | 115 - .../clientpositive/llap/show_tablestatus.q.out | 68 ++-- .../results/clientpositive/llap/show_views.q.out | 54 +++--- .../clientpositive/llap/temp_table_names.q.out | 4 +- .../clientpositive/llap/temp_table_truncate.q.out | 4 +- 49 files changed, 386 insertions(+), 353 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/show/ShowDatabasesOperation.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/show/ShowDatabasesOperation.java index 625a48e..12a5299 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/show/ShowDatabasesOperation.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/show/ShowDatabasesOperation.java @@ -45,7 +45,7 @@ public class ShowDatabasesOperation extends DDLOperation { List databases = context.getDb().getAllDatabases(); if 
(desc.getPattern() != null) {
      LOG.debug("pattern: {}", desc.getPattern());
-      Pattern pattern = Pattern.compile(UDFLike.likePatternToRegExp(desc.getPattern()));
+      Pattern pattern = Pattern.compile(UDFLike.likePatternToRegExp(desc.getPattern()), Pattern.CASE_INSENSITIVE);
      databases = databases.stream().filter(name -> pattern.matcher(name).matches()).collect(Collectors.toList());
    }
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/ShowTableStatusOperation.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/show/status/ShowTableStatusOperation.java
index 914e63d..75ea0e3 100644
---
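The CASE_INSENSITIVE flag added above makes SHOW DATABASES (and the analogous SHOW TABLES / SHOW VIEWS paths) match patterns regardless of case. The sketch below illustrates the general LIKE-to-regex filtering approach; the likeToRegex helper is an illustrative stand-in, not Hive's actual UDFLike.likePatternToRegExp implementation.

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class LikeFilter {
    // Illustrative LIKE-to-regex conversion: '%' matches any sequence,
    // '_' matches exactly one character; everything else is quoted literally.
    public static String likeToRegex(String like) {
        StringBuilder sb = new StringBuilder();
        for (char c : like.toCharArray()) {
            if (c == '%') {
                sb.append(".*");
            } else if (c == '_') {
                sb.append(".");
            } else {
                sb.append(Pattern.quote(String.valueOf(c)));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Mirrors the filtering style in ShowDatabasesOperation: compile once,
        // then keep only the names that match, ignoring case.
        Pattern pattern = Pattern.compile(likeToRegex("SRC%"), Pattern.CASE_INSENSITIVE);
        List<String> names = Arrays.asList("src", "src1", "srcpart", "alltypes");
        List<String> matched = names.stream()
            .filter(name -> pattern.matcher(name).matches())
            .collect(Collectors.toList());
        System.out.println(matched); // [src, src1, srcpart]
    }
}
```

Note that LIKE wildcards require a full-match semantic, which is why the filter uses matches() rather than find().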
[hive] branch HIVE-23403_280_rb created (now d777a44)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a change to branch HIVE-23403_280_rb in repository https://gitbox.apache.org/repos/asf/hive.git. at d777a44 move files to llap This branch includes the following new commits: new d777a44 move files to llap The 1 revision listed above as "new" is entirely new to this repository and will be described in a separate email. The revisions listed as "add" were already present in the repository and have only been added to this reference.
[hive] 01/01: move files to llap
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch HIVE-23403_280_rb in repository https://gitbox.apache.org/repos/asf/hive.git commit d777a4487cba8dce20d8aae951937c9c65bb00d1 Author: miklosgergely AuthorDate: Thu May 7 15:58:35 2020 +0200 move files to llap --- .../clientpositive/{ => llap}/mergejoins_mixed.q.out| 0 .../clientpositive/{ => llap}/metadataOnlyOptimizer.q.out | 0 .../test/results/clientpositive/{ => llap}/mm_buckets.q.out | 0 .../results/clientpositive/{ => llap}/msck_repair_0.q.out | 0 .../results/clientpositive/{ => llap}/msck_repair_1.q.out | 0 .../results/clientpositive/{ => llap}/msck_repair_2.q.out | 0 .../results/clientpositive/{ => llap}/msck_repair_3.q.out | 0 .../clientpositive/{ => llap}/msck_repair_acid.q.out| 0 .../clientpositive/{ => llap}/msck_repair_batchsize.q.out | 0 .../clientpositive/{ => llap}/msck_repair_drop.q.out| 0 .../clientpositive/{ => llap}/multi_insert_distinct.q.out | 0 .../clientpositive/{ => llap}/multi_insert_gby.q.out| 0 .../clientpositive/{ => llap}/multi_insert_gby2.q.out | 0 .../clientpositive/{ => llap}/multi_insert_gby3.q.out | 0 .../clientpositive/{ => llap}/multi_insert_gby4.q.out | 0 .../clientpositive/{ => llap}/multi_insert_mixed.q.out | 0 .../multi_insert_move_tasks_share_dependencies.q.out| 0 .../clientpositive/{ => llap}/multi_insert_union_src.q.out | 0 .../clientpositive/{ => llap}/multi_insert_with_join2.q.out | 0 .../clientpositive/{ => llap}/multi_join_union.q.out| 0 .../clientpositive/{ => llap}/multigroupby_singlemr.q.out | 0 .../clientpositive/{ => llap}/named_column_join.q.out | 0 .../clientpositive/{ => llap}/nested_column_pruning.q.out | 0 .../test/results/clientpositive/{ => llap}/no_hooks.q.out | 0 .../results/clientpositive/{ => llap}/noalias_subq1.q.out | 0 .../clientpositive/{ => llap}/nonblock_op_deduplicate.q.out | 0 .../results/clientpositive/{ => llap}/notable_alias1.q.out | 0 .../results/clientpositive/{ => 
llap}/notable_alias2.q.out | 0 .../test/results/clientpositive/{ => llap}/null_cast.q.out | 0 .../{ => llap}/nullability_transitive_inference.q.out | 0 .../test/results/clientpositive/{ => llap}/nullgroup.q.out | 0 .../test/results/clientpositive/{ => llap}/nullgroup2.q.out | 0 .../test/results/clientpositive/{ => llap}/nullgroup3.q.out | 0 .../test/results/clientpositive/{ => llap}/nullgroup4.q.out | 0 .../{ => llap}/nullgroup4_multi_distinct.q.out | 0 .../test/results/clientpositive/{ => llap}/nullgroup5.q.out | 0 .../clientpositive/{ => llap}/num_op_type_conv.q.out| 0 .../{ => llap}/offset_limit_global_optimizer.q.out | 0 .../clientpositive/{ => llap}/optimize_filter_literal.q.out | 0 .../results/clientpositive/{ => llap}/optional_outer.q.out | 0 .../{ => llap}/orc_avro_partition_uniontype.q.out | 0 .../clientpositive/{ => llap}/orc_int_type_promotion.q.out | 0 .../{ => llap}/orc_nested_column_pruning.q.out | 0 .../clientpositive/{ => llap}/orc_ppd_str_conversion.q.out | 0 .../{ => llap}/orc_schema_evolution_float.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/order.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/order3.q.out | 0 .../results/clientpositive/{ => llap}/order_by_expr_1.q.out | 0 .../results/clientpositive/{ => llap}/order_by_expr_2.q.out | 0 .../results/clientpositive/{ => llap}/order_by_pos.q.out| 0 .../results/clientpositive/{ => llap}/outer_join_ppr.q.out | 0 .../{ => llap}/outer_reference_windowed.q.out | 0 .../results/clientpositive/{ => llap}/parallel_join0.q.out | 0 .../results/clientpositive/{ => llap}/parallel_join1.q.out | 0 .../clientpositive/{ => llap}/parallel_orderby.q.out| 0 .../clientpositive/{ => llap}/parenthesis_star_by.q.out | 0 .../results/clientpositive/{ => llap}/parquet_create.q.out | 0 .../parquet_int64_timestamp_int96_compatibility.q.out | 0 .../results/clientpositive/{ => llap}/parquet_join.q.out| 0 .../{ => llap}/parquet_mixed_partition_formats2.q.out | 0 .../clientpositive/{ => 
llap}/parquet_no_row_serde.q.out| 0 .../clientpositive/{ => llap}/parquet_ppd_boolean.q.out | 0 .../clientpositive/{ => llap}/parquet_ppd_char.q.out| 0 .../clientpositive/{ => llap}/parquet_ppd_date.q.out| 0 .../clientpositive/{ => llap}/parquet_ppd_decimal.q.out | 0 .../clientpositive/{ => llap}/parquet_ppd_timestamp.q.out | 0 .../cli
[hive] branch master updated: HIVE-23372 Project not defined correctly after reordering a join ADDENDUM - fix sharedwork.q (Krisztian Kasa, reviewed by Miklos Gergely)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new dbc04ef HIVE-23372 Project not defined correctly after reordering a join ADDENDUM - fix sharedwork.q (Krisztian Kasa, reviewed by Miklos Gergely) dbc04ef is described below commit dbc04ef2861d4bf1d917c7b923be3969af26f718 Author: miklosgergely AuthorDate: Tue May 5 13:34:34 2020 +0200 HIVE-23372 Project not defined correctly after reordering a join ADDENDUM - fix sharedwork.q (Krisztian Kasa, reviewed by Miklos Gergely) --- .../results/clientpositive/llap/sharedwork.q.out | 178 ++--- 1 file changed, 12 insertions(+), 166 deletions(-) diff --git a/ql/src/test/results/clientpositive/llap/sharedwork.q.out b/ql/src/test/results/clientpositive/llap/sharedwork.q.out index 5308daf..175141f 100644 --- a/ql/src/test/results/clientpositive/llap/sharedwork.q.out +++ b/ql/src/test/results/clientpositive/llap/sharedwork.q.out @@ -696,72 +696,21 @@ STAGE PLANS: alias: part Statistics: Num rows: 26 Data size: 5954 Basic stats: COMPLETE Column stats: COMPLETE GatherStats: false -<<<<<<< ours - Filter Operator -isSamplingPred: false -predicate: p_size is not null (type: boolean) -Statistics: Num rows: 26 Data size: 2808 Basic stats: COMPLETE Column stats: COMPLETE -Select Operator - expressions: (p_size + 1) (type: int), p_type (type: string) - outputColumnNames: _col0, _col1 - Statistics: Num rows: 26 Data size: 2808 Basic stats: COMPLETE Column stats: COMPLETE - Group By Operator -aggregations: count(), count(_col1) -keys: _col0 (type: int) -minReductionHashAggr: 0.5 -mode: hash -outputColumnNames: _col0, _col1, _col2 -Statistics: Num rows: 13 Data size: 260 Basic stats: COMPLETE Column stats: COMPLETE -Reduce Output Operator - bucketingVersion: 2 - key expressions: _col0 (type: int) - null sort order: z - numBuckets: -1 - sort order: + - 
Map-reduce partition columns: _col0 (type: int) - Statistics: Num rows: 13 Data size: 260 Basic stats: COMPLETE Column stats: COMPLETE - tag: -1 - value expressions: _col1 (type: bigint), _col2 (type: bigint) - auto parallelism: true - Filter Operator -isSamplingPred: false -predicate: (p_size is not null and p_type is not null) (type: boolean) -Statistics: Num rows: 26 Data size: 2808 Basic stats: COMPLETE Column stats: COMPLETE -Select Operator - expressions: p_type (type: string), (p_size + 1) (type: int) - outputColumnNames: _col0, _col1 - Statistics: Num rows: 26 Data size: 2808 Basic stats: COMPLETE Column stats: COMPLETE - Group By Operator -keys: _col1 (type: int), _col0 (type: string) -minReductionHashAggr: 0.0 -mode: hash -outputColumnNames: _col0, _col1 -Statistics: Num rows: 13 Data size: 1404 Basic stats: COMPLETE Column stats: COMPLETE -Reduce Output Operator - bucketingVersion: 2 - key expressions: _col0 (type: int), _col1 (type: string) - null sort order: zz - numBuckets: -1 - sort order: ++ - Map-reduce partition columns: _col0 (type: int), _col1 (type: string) - Statistics: Num rows: 13 Data size: 1404 Basic stats: COMPLETE Column stats: COMPLETE - tag: -1 - auto parallelism: true -=== Select Operator expressions: p_name (type: string), p_type (type: string), (p_size + 1) (type: int) outputColumnNames: _col0, _col1, _col2 Statistics: Num rows: 26 Data size: 5954 Basic stats: COMPLETE Column stats: COMPLETE Reduce Output Operator + bucketingVersion: 2 key expressions: _col2 (type: int)
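The addendum above regenerates sharedwork.q.out, whose previous version had unresolved merge-conflict blocks ("<<<<<<< ours" markers and their separators) committed into it. A hedged sketch of scanning a results directory for such leftovers before committing (the directory layout here is illustrative, not part of Hive's build):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ConflictMarkerScan {
    // Returns the regular files under root that still contain a
    // git merge-conflict marker at the start of a line.
    public static List<Path> filesWithConflictMarkers(Path root) throws IOException {
        try (Stream<Path> files = Files.walk(root)) {
            return files.filter(Files::isRegularFile)
                .filter(f -> {
                    try (Stream<String> lines = Files.lines(f)) {
                        return lines.anyMatch(l -> l.startsWith("<<<<<<<")
                            || l.startsWith(">>>>>>>"));
                    } catch (IOException | UncheckedIOException e) {
                        return false; // skip unreadable or binary files
                    }
                })
                .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        // Build a small sample tree: one file with a leftover marker, one clean.
        Path dir = Files.createTempDirectory("qout");
        Files.write(dir.resolve("sharedwork.q.out"),
            List.of("Map Operator Tree:", "<<<<<<< ours", "Filter Operator"));
        Files.write(dir.resolve("clean.q.out"), List.of("Select Operator"));
        System.out.println(filesWithConflictMarkers(dir).size()); // 1
    }
}
```

Matching only `<<<<<<<` and `>>>>>>>` avoids false positives from legitimate runs of `=` in generated output.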
[hive] branch HIVE-23337_280_rb created (now 6a6d69c)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a change to branch HIVE-23337_280_rb in repository https://gitbox.apache.org/repos/asf/hive.git. at 6a6d69c move files to llap This branch includes the following new commits: new 6a6d69c move files to llap The 1 revision listed above as "new" is entirely new to this repository and will be described in a separate email. The revisions listed as "add" were already present in the repository and have only been added to this reference.
[hive] 01/01: move files to llap
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch HIVE-23337_280_rb in repository https://gitbox.apache.org/repos/asf/hive.git commit 6a6d69cc62035f058669cd4540a0398271961753 Author: miklosgergely AuthorDate: Thu Apr 30 01:58:14 2020 +0200 move files to llap --- ql/src/test/results/clientpositive/{ => llap}/groupby9.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_complex_types.q.out | 0 .../{ => llap}/groupby_complex_types_multi_single_reducer.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_cube1.q.out | 0 .../test/results/clientpositive/{ => llap}/groupby_cube_multi_gby.q.out | 0 .../test/results/clientpositive/{ => llap}/groupby_distinct_samekey.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_duplicate_key.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_grouping_id3.q.out | 0 .../test/results/clientpositive/{ => llap}/groupby_grouping_sets1.q.out | 0 .../test/results/clientpositive/{ => llap}/groupby_grouping_sets2.q.out | 0 .../test/results/clientpositive/{ => llap}/groupby_grouping_sets3.q.out | 0 .../test/results/clientpositive/{ => llap}/groupby_grouping_sets4.q.out | 0 .../test/results/clientpositive/{ => llap}/groupby_grouping_sets5.q.out | 0 .../test/results/clientpositive/{ => llap}/groupby_grouping_sets6.q.out | 0 .../clientpositive/{ => llap}/groupby_grouping_sets_grouping.q.out| 0 .../results/clientpositive/{ => llap}/groupby_grouping_sets_limit.q.out | 0 .../results/clientpositive/{ => llap}/groupby_grouping_sets_view.q.out| 0 .../test/results/clientpositive/{ => llap}/groupby_grouping_window.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_join_pushdown.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_map_ppr.q.out | 0 .../clientpositive/{ => llap}/groupby_map_ppr_multi_distinct.q.out| 0 .../clientpositive/{ => llap}/groupby_multi_insert_common_distinct.q.out | 0 .../results/clientpositive/{ => 
llap}/groupby_multi_single_reducer.q.out | 0 .../results/clientpositive/{ => llap}/groupby_multi_single_reducer2.q.out | 0 .../results/clientpositive/{ => llap}/groupby_multi_single_reducer3.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_multialias.q.out| 0 ql/src/test/results/clientpositive/{ => llap}/groupby_nocolumnalign.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_position.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_ppd.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_ppr.q.out | 0 .../results/clientpositive/{ => llap}/groupby_ppr_multi_distinct.q.out| 0 ql/src/test/results/clientpositive/{ => llap}/groupby_rollup1.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_sort_10.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_sort_11.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_sort_1_23.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_sort_2.q.out| 0 ql/src/test/results/clientpositive/{ => llap}/groupby_sort_3.q.out| 0 ql/src/test/results/clientpositive/{ => llap}/groupby_sort_4.q.out| 0 ql/src/test/results/clientpositive/{ => llap}/groupby_sort_5.q.out| 0 ql/src/test/results/clientpositive/{ => llap}/groupby_sort_6.q.out| 0 ql/src/test/results/clientpositive/{ => llap}/groupby_sort_7.q.out| 0 ql/src/test/results/clientpositive/{ => llap}/groupby_sort_8.q.out| 0 ql/src/test/results/clientpositive/{ => llap}/groupby_sort_9.q.out| 0 .../test/results/clientpositive/{ => llap}/groupby_sort_skew_1_23.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/groupby_sort_test_1.q.out | 0 .../test/results/clientpositive/{ => llap}/groupingset_high_columns.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/hashjoin.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/having2.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/hll.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/implicit_cast1.q.out| 0 
.../results/clientpositive/{ => llap}/implicit_cast_during_insert.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/implicit_decimal.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/in_typecheck_char.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/in_typecheck_mixed.q.out| 0 .../test/results/clientpositive/{ => llap}/in_typecheck_pointlook.q.out | 0 ql/src/test/results/clientpositive/{ => llap}/in_typecheck_varchar.q.out | 0 .../clientpositive/{ => llap}/infer_bucket_sort_convert_join.q.out| 0
[hive] branch master updated: HIVE-22028 Clean up Add Partition ADDENDUM - fix typo (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 342f8fb HIVE-22028 Clean up Add Partition ADDENDUM - fix typo (Miklos Gergely, reviewed by Zoltan Haindrich) 342f8fb is described below commit 342f8fbd521ce5728098887cd1749fc7931e370f Author: miklosgergely AuthorDate: Sat May 2 16:42:30 2020 +0200 HIVE-22028 Clean up Add Partition ADDENDUM - fix typo (Miklos Gergely, reviewed by Zoltan Haindrich) --- .../table/partition/add/AlterTableAddPartitionOperation.java | 10 +- ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java | 4 +--- 2 files changed, 6 insertions(+), 8 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/partition/add/AlterTableAddPartitionOperation.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/partition/add/AlterTableAddPartitionOperation.java index 6910e10..ddc47a4 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/partition/add/AlterTableAddPartitionOperation.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/partition/add/AlterTableAddPartitionOperation.java @@ -157,7 +157,7 @@ public class AlterTableAddPartitionOperation extends DDLOperation partitions) throws HiveException { // TODO: normally, the result is not necessary; might make sense to pass false List outPartitions = new ArrayList<>(); -for (Partition outPart : context.getDb().addPartition(partitions, desc.isIfNotExists(), true)) { +for (Partition outPart : context.getDb().addPartitions(partitions, desc.isIfNotExists(), true)) { outPartitions.add(new org.apache.hadoop.hive.ql.metadata.Partition(table, outPart)); } return outPartitions; @@ -170,7 +170,7 @@ public class AlterTableAddPartitionOperation extends DDLOperation partitionsToAdd = new ArrayList<>(); -List partitionssToAlter = new ArrayList<>(); +List partitionsToAlter = new ArrayList<>(); List 
partitionNames = new ArrayList<>(); for (Partition partition : partitions){ partitionNames.add(getPartitionName(table, partition)); @@ -178,7 +178,7 @@ public class AlterTableAddPartitionOperation extends DDLOperation outPartitions = new ArrayList<>(); -for (Partition outPartition : context.getDb().addPartition(partitionsToAdd, desc.isIfNotExists(), true)) { +for (Partition outPartition : context.getDb().addPartitions(partitionsToAdd, desc.isIfNotExists(), true)) { outPartitions.add(new org.apache.hadoop.hive.ql.metadata.Partition(table, outPartition)); } @@ -199,7 +199,7 @@ public class AlterTableAddPartitionOperation extends DDLOperation addPartition( + public List addPartitions( List partitions, boolean ifNotExists, boolean needResults) throws HiveException { try {
[hive] branch master updated: HIVE-23315 Remove empty line from the end of SHOW EXTENDED TABLES and SHOW MATERIALIZED VIEWS (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 5c675ac HIVE-23315 Remove empty line from the end of SHOW EXTENDED TABLES and SHOW MATERIALIZED VIEWS (Miklos Gergely, reviewed by Zoltan Haindrich) 5c675ac is described below commit 5c675acea490ed6d907d516a647b66fb7d77d1c9 Author: miklosgergely AuthorDate: Sun Feb 23 18:22:52 2020 +0100 HIVE-23315 Remove empty line from the end of SHOW EXTENDED TABLES and SHOW MATERIALIZED VIEWS (Miklos Gergely, reviewed by Zoltan Haindrich) --- .../hive/ql/metadata/formatting/TextMetaDataFormatter.java | 2 -- .../results/clientpositive/llap/show_materialized_views.q.out | 10 -- ql/src/test/results/clientpositive/llap/show_tables.q.out | 3 --- 3 files changed, 15 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/TextMetaDataFormatter.java b/ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/TextMetaDataFormatter.java index 4700573..d64ae44 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/TextMetaDataFormatter.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/metadata/formatting/TextMetaDataFormatter.java @@ -149,7 +149,6 @@ class TextMetaDataFormatter implements MetaDataFormatter { // In case the query is served by HiveServer2, don't pad it with spaces, // as HiveServer2 output is consumed by JDBC/ODBC clients. out.write(mdt.renderTable(!SessionState.get().isHiveServerQuery()).getBytes("UTF-8")); - out.write(terminator); } catch (IOException e) { throw new HiveException(e); } @@ -198,7 +197,6 @@ class TextMetaDataFormatter implements MetaDataFormatter { // In case the query is served by HiveServer2, don't pad it with spaces, // as HiveServer2 output is consumed by JDBC/ODBC clients. 
out.write(mdt.renderTable(!SessionState.get().isHiveServerQuery()).getBytes("UTF-8")); - out.write(terminator); } catch (IOException e) { throw new HiveException(e); } diff --git a/ql/src/test/results/clientpositive/llap/show_materialized_views.q.out b/ql/src/test/results/clientpositive/llap/show_materialized_views.q.out index 57bd93b..d377f97 100644 --- a/ql/src/test/results/clientpositive/llap/show_materialized_views.q.out +++ b/ql/src/test/results/clientpositive/llap/show_materialized_views.q.out @@ -149,7 +149,6 @@ POSTHOOK: type: SHOWMATERIALIZEDVIEWS shtb_full_view2Yes Manual refresh (Valid for 5min) shtb_test1_view1 No Manual refresh shtb_test1_view2 Yes Manual refresh (Valid always) - PREHOOK: query: EXPLAIN SHOW MATERIALIZED VIEWS '*test*' PREHOOK: type: SHOWMATERIALIZEDVIEWS POSTHOOK: query: EXPLAIN SHOW MATERIALIZED VIEWS '*test*' @@ -177,7 +176,6 @@ POSTHOOK: type: SHOWMATERIALIZEDVIEWS # MV Name Rewriting Enabled Mode shtb_test1_view1 No Manual refresh shtb_test1_view2 Yes Manual refresh (Valid always) - PREHOOK: query: SHOW MATERIALIZED VIEWS '*view2' PREHOOK: type: SHOWMATERIALIZEDVIEWS POSTHOOK: query: SHOW MATERIALIZED VIEWS '*view2' @@ -185,7 +183,6 @@ POSTHOOK: type: SHOWMATERIALIZEDVIEWS # MV Name Rewriting Enabled Mode shtb_full_view2Yes Manual refresh (Valid for 5min) shtb_test1_view2 Yes Manual refresh (Valid always) - PREHOOK: query: EXPLAIN SHOW MATERIALIZED VIEWS LIKE 'shtb_test1_view1|shtb_test1_view2' PREHOOK: type: SHOWMATERIALIZEDVIEWS POSTHOOK: query: EXPLAIN SHOW MATERIALIZED VIEWS LIKE 'shtb_test1_view1|shtb_test1_view2' @@ -213,7 +210,6 @@ POSTHOOK: type: SHOWMATERIALIZEDVIEWS # MV Name Rewriting Enabled Mode shtb_test1_view1 No Manual refresh shtb_test1_view2 Yes Manual refresh (Valid always) - PREHOOK: query: USE test2 PREHOOK: type: SWITCHDATABASE PREHOOK: Input: database:test2 @@ -227,7 +223,6 @@ POSTHOOK: type: SHOWMATERIALIZEDVIEWS # MV Name Rewriting Enabled Mode shtb_test1_view1 No Manual refresh shtb_test2_view2 No Manual 
refresh - PREHOOK: query: USE default PREHOOK: type: SWITCHDATABASE PREHOOK: Input: database:default @@ -261,7 +256,6 @@ POSTHOOK: type: SHOWMATERIALIZEDVIEWS shtb_full_view2Yes Manual refresh (Valid for 5min) shtb_test1_view1 No Manual refresh shtb_test1_view2 Yes Manual r
[hive] branch master updated: HIVE-23316 Add tests to cover database managed location related DDL and fix minor issues (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 0897480 HIVE-23316 Add tests to cover database managed location related DDL and fix minor issues (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) 0897480 is described below commit 08974803c881f2dc652764776a28683af9f9c8d9 Author: miklosgergely AuthorDate: Tue Apr 28 23:12:25 2020 +0200 HIVE-23316 Add tests to cover database managed location related DDL and fix minor issues (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) --- .../location/AlterDatabaseSetLocationDesc.java | 20 +--- .../AlterDatabaseSetLocationOperation.java | 37 +++- .../AlterDatabaseSetManagedLocationAnalyzer.java | 4 +- ...va => AlterDatabaseSetManagedLocationDesc.java} | 30 ++ ... AlterDatabaseSetManagedLocationOperation.java} | 44 - .../database/create/CreateDatabaseAnalyzer.java| 9 +- .../ql/ddl/database/create/CreateDatabaseDesc.java | 13 +-- .../database/create/CreateDatabaseOperation.java | 21 +++-- .../ql/ddl/database/desc/DescDatabaseDesc.java | 5 +- .../ddl/database/drop/DropDatabaseOperation.java | 3 +- .../table/create/show/ShowCreateTableAnalyzer.java | 4 +- .../create/show/ShowCreateTableOperation.java | 2 +- .../hive/ql/ddl/table/drop/DropTableOperation.java | 3 +- .../drop/AlterTableDropPartitionOperation.java | 6 +- .../AlterMaterializedViewRebuildAnalyzer.java | 1 - .../ql/exec/repl/bootstrap/load/LoadDatabase.java | 2 +- .../repl/load/message/CreateDatabaseHandler.java | 2 +- .../clientnegative/database_location_conflict.q| 4 + .../clientnegative/database_location_conflict2.q | 5 + .../clientnegative/database_location_conflict3.q | 4 + .../queries/clientpositive/database_location.q | 27 ++ .../database_location_conflict.q.out | 6 ++ .../database_location_conflict2.q.out | 14 +++ .../database_location_conflict3.q.out | 14 +++ 
.../results/clientpositive/database_location.q.out | 104 + 25 files changed, 254 insertions(+), 130 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/alter/location/AlterDatabaseSetLocationDesc.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/alter/location/AlterDatabaseSetLocationDesc.java index 16d28f2..ddb3206 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/alter/location/AlterDatabaseSetLocationDesc.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/alter/location/AlterDatabaseSetLocationDesc.java @@ -29,31 +29,15 @@ import org.apache.hadoop.hive.ql.plan.Explain.Level; public class AlterDatabaseSetLocationDesc extends AbstractAlterDatabaseDesc { private static final long serialVersionUID = 1L; - private String location = null; - private String managedLocation = null; + private final String location; public AlterDatabaseSetLocationDesc(String databaseName, String location) { -this(databaseName, location,null); - } - - public AlterDatabaseSetLocationDesc(String databaseName, String location, String managedLocation) { super(databaseName, null); -if (location != null) { - this.location = location; -} - -if (managedLocation != null) { - this.managedLocation = managedLocation; -} +this.location = location; } @Explain(displayName="location") public String getLocation() { return location; } - - @Explain(displayName="managedLocation") - public String getManagedLocation() { -return managedLocation; - } } diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/alter/location/AlterDatabaseSetLocationOperation.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/alter/location/AlterDatabaseSetLocationOperation.java index 0c4ade3..949c9ae 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/alter/location/AlterDatabaseSetLocationOperation.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/alter/location/AlterDatabaseSetLocationOperation.java @@ -23,10 +23,12 @@ import 
java.net.URISyntaxException; import java.util.Map; import org.apache.commons.lang3.StringUtils; +import org.apache.hadoop.fs.Path; import org.apache.hadoop.hive.metastore.api.Database; import org.apache.hadoop.hive.ql.ErrorMsg; import org.apache.hadoop.hive.ql.ddl.DDLOperationContext; import org.apache.hadoop.hive.ql.ddl.database.alter.AbstractAlterDatabaseOperation; +import org.apache.hadoop.hive.ql.exec.Utilities; import org.apache.hadoop.hive.ql.metadata.HiveException; /** @@ -40,34 +42,23 @@ public class AlterDatabaseSetLocationOperation extends AbstractAlterDatabaseOper @Override protected void doAlteration(Database database, Map params) throws H
[hive] branch master updated: HIVE-23272 Do not expose the timestampTZ from TimestampLocalTZWritable (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 730b026 HIVE-23272 Do not expose the timestampTZ from TimestampLocalTZWritable (Miklos Gergely, reviewed by Zoltan Haindrich) 730b026 is described below commit 730b026a87efcb06193a646e361924d14865fe20 Author: miklosgergely AuthorDate: Thu Apr 23 20:25:11 2020 +0200 HIVE-23272 Do not expose the timestampTZ from TimestampLocalTZWritable (Miklos Gergely, reviewed by Zoltan Haindrich) --- .../apache/hadoop/hive/cli/control/CliConfigs.java | 1 - .../results/clientpositive/timestamptz_2.q.out | 80 ++ .../hive/serde2/io/TimestampLocalTZWritable.java | 4 +- .../hive/serde2/io/TestTimestampTZWritable.java| 10 +++ 4 files changed, 92 insertions(+), 3 deletions(-) diff --git a/itests/util/src/main/java/org/apache/hadoop/hive/cli/control/CliConfigs.java b/itests/util/src/main/java/org/apache/hadoop/hive/cli/control/CliConfigs.java index 4c9f60c..1c0c62f 100644 --- a/itests/util/src/main/java/org/apache/hadoop/hive/cli/control/CliConfigs.java +++ b/itests/util/src/main/java/org/apache/hadoop/hive/cli/control/CliConfigs.java @@ -70,7 +70,6 @@ public class CliConfigs { excludeQuery("udaf_corr.q"); // disabled in HIVE-20741 excludeQuery("udaf_histogram_numeric.q"); // disabled in HIVE-20715 excludeQuery("vector_groupby_reduce.q"); // Disabled in HIVE-21396 -excludeQuery("timestamptz_2.q"); // Disabled in HIVE-22722 excludeQuery("constprog_cast.q"); // TODO: Enable when we move to Calcite 1.23 setResultsDir("ql/src/test/results/clientpositive"); diff --git a/ql/src/test/results/clientpositive/timestamptz_2.q.out b/ql/src/test/results/clientpositive/timestamptz_2.q.out new file mode 100644 index 000..7f614c0 --- /dev/null +++ b/ql/src/test/results/clientpositive/timestamptz_2.q.out @@ -0,0 +1,80 @@ +PREHOOK: query: drop table tstz2 +PREHOOK: 
type: DROPTABLE +POSTHOOK: query: drop table tstz2 +POSTHOOK: type: DROPTABLE +PREHOOK: query: create table tstz2(t timestamp with local time zone) +PREHOOK: type: CREATETABLE +PREHOOK: Output: database:default +PREHOOK: Output: default@tstz2 +POSTHOOK: query: create table tstz2(t timestamp with local time zone) +POSTHOOK: type: CREATETABLE +POSTHOOK: Output: database:default +POSTHOOK: Output: default@tstz2 +PREHOOK: query: insert into table tstz2 values + ('2005-04-03 03:01:00.04067 GMT-07:00'),('2005-01-03 02:01:00 GMT'),('2005-01-03 06:01:00 GMT+04:00'), + ('2013-06-03 02:01:00.30547 GMT+01:00'),('2016-01-03 12:26:34.0123 GMT+08:00') +PREHOOK: type: QUERY +PREHOOK: Input: _dummy_database@_dummy_table +PREHOOK: Output: default@tstz2 +POSTHOOK: query: insert into table tstz2 values + ('2005-04-03 03:01:00.04067 GMT-07:00'),('2005-01-03 02:01:00 GMT'),('2005-01-03 06:01:00 GMT+04:00'), + ('2013-06-03 02:01:00.30547 GMT+01:00'),('2016-01-03 12:26:34.0123 GMT+08:00') +POSTHOOK: type: QUERY +POSTHOOK: Input: _dummy_database@_dummy_table +POSTHOOK: Output: default@tstz2 +POSTHOOK: Lineage: tstz2.t SCRIPT [] +PREHOOK: query: select * from tstz2 where t='2005-01-02 19:01:00 GMT-07:00' +PREHOOK: type: QUERY +PREHOOK: Input: default@tstz2 + A masked pattern was here +POSTHOOK: query: select * from tstz2 where t='2005-01-02 19:01:00 GMT-07:00' +POSTHOOK: type: QUERY +POSTHOOK: Input: default@tstz2 + A masked pattern was here +2005-01-03 02:01:00.0 UTC +2005-01-03 02:01:00.0 UTC +PREHOOK: query: select * from tstz2 where t>'2013-06-03 02:01:00.30547 GMT+01:00' +PREHOOK: type: QUERY +PREHOOK: Input: default@tstz2 + A masked pattern was here +POSTHOOK: query: select * from tstz2 where t>'2013-06-03 02:01:00.30547 GMT+01:00' +POSTHOOK: type: QUERY +POSTHOOK: Input: default@tstz2 + A masked pattern was here +2016-01-03 04:26:34.0123 UTC +PREHOOK: query: select min(t),max(t) from tstz2 +PREHOOK: type: QUERY +PREHOOK: Input: default@tstz2 + A masked pattern was here +POSTHOOK: 
query: select min(t),max(t) from tstz2 +POSTHOOK: type: QUERY +POSTHOOK: Input: default@tstz2 + A masked pattern was here +2005-01-03 02:01:00.0 UTC 2016-01-03 04:26:34.0123 UTC +PREHOOK: query: select t from tstz2 group by t order by t +PREHOOK: type: QUERY +PREHOOK: Input: default@tstz2 + A masked pattern was here +POSTHOOK: query: select t from tstz2 group by t order by t +POSTHOOK: type: QUERY +POSTHOOK: Input: default@tstz2 + A masked pattern was here +2005-01-03 02:01:00.0 UTC +2005-04-03 10:01:00.04067 UTC +2013-06-03 01:01:00.30547 UTC +2016-01-03 04:26:34.0123 UTC +PREHOOK: query: select * from tstz2 a join tstz2 b on a.t=b.t order by a.t +PREHOOK: type: QUERY +PREHOOK: Input: default@tstz2 + A masked patter
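The timestamptz_2.q.out output above shows literals entered with different zone offsets ('2005-01-02 19:01:00 GMT-07:00', '2005-01-03 02:01:00 GMT') selecting the same stored value, '2005-01-03 02:01:00.0 UTC'. A minimal standalone java.time sketch of that behavior (not Hive code — Hive's TimestampLocalTZWritable has its own representation; ISO-style offset strings are used here for simplicity):

```java
import java.time.Instant;
import java.time.OffsetDateTime;

public class LocalTzDemo {
    // Normalize a timestamp carrying an explicit offset to a single UTC instant,
    // which is the semantics TIMESTAMP WITH LOCAL TIME ZONE relies on.
    static Instant toInstant(String isoWithOffset) {
        return OffsetDateTime.parse(isoWithOffset).toInstant();
    }

    public static void main(String[] args) {
        // Both literals denote the same instant, so both normalize to the same UTC value.
        Instant a = toInstant("2005-01-02T19:01:00-07:00");
        Instant b = toInstant("2005-01-03T02:01:00Z");
        System.out.println(a);           // 2005-01-03T02:01:00Z
        System.out.println(a.equals(b)); // true
    }
}
```

Because every offset collapses to one instant, equality, range predicates, and min/max in the q.out all operate on the normalized UTC value.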
[hive] branch master updated: HIVE-23273 Add fix order to cbo_limit.q queries + improve readability (Miklos Gergely, reviewed by Laszlo Bodor)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 88053b2 HIVE-23273 Add fix order to cbo_limit.q queries + improve readability (Miklos Gergely, reviewed by Laszlo Bodor) 88053b2 is described below commit 88053b254fe4dbd3ab4b5bd5c93ae2ab17bc8d77 Author: miklosgergely AuthorDate: Wed Apr 22 19:09:07 2020 +0200 HIVE-23273 Add fix order to cbo_limit.q queries + improve readability (Miklos Gergely, reviewed by Laszlo Bodor) --- ql/src/test/queries/clientpositive/cbo_limit.q | 99 +- .../results/clientpositive/llap/cbo_limit.q.out| 208 +++-- 2 files changed, 281 insertions(+), 26 deletions(-) diff --git a/ql/src/test/queries/clientpositive/cbo_limit.q b/ql/src/test/queries/clientpositive/cbo_limit.q index 1abb76a..d273c994 100644 --- a/ql/src/test/queries/clientpositive/cbo_limit.q +++ b/ql/src/test/queries/clientpositive/cbo_limit.q @@ -9,14 +9,101 @@ set hive.stats.fetch.column.stats=true; set hive.auto.convert.join=false; -- 7. 
Test Select + TS + Join + Fil + GB + GB Having + Limit -select key, (c_int+1)+2 as x, sum(c_int) from cbo_t1 group by c_float, cbo_t1.c_int, key order by x, key limit 1; -select x, y, count(*) from (select key, (c_int+c_float+1+2) as x, sum(c_int) as y from cbo_t1 group by c_float, cbo_t1.c_int, key) R group by y, x order by x,y limit 1; -select key from(select key from (select key from cbo_t1 limit 5)cbo_t2 limit 5)cbo_t3 limit 5; -select key, c_int from(select key, c_int from (select key, c_int from cbo_t1 order by c_int limit 5)cbo_t1 order by c_int limit 5)cbo_t2 order by c_int limit 5; + SELECT key, (c_int+1)+2 AS x, sum(c_int) +FROM cbo_t1 +GROUP BY c_float, cbo_t1.c_int, key +ORDER BY x, key + LIMIT 1; -select cbo_t3.c_int, c, count(*) from (select key as a, c_int+1 as b, sum(c_int) as c from cbo_t1 where (cbo_t1.c_int + 1 >= 0) and (cbo_t1.c_int > 0 or cbo_t1.c_float >= 0) group by c_float, cbo_t1.c_int, key order by a limit 5) cbo_t1 join (select key as p, c_int+1 as q, sum(c_int) as r from cbo_t2 where (cbo_t2.c_int + 1 >= 0) and (cbo_t2.c_int > 0 or cbo_t2.c_float >= 0) group by c_float, cbo_t2.c_int, key order by q/10 desc, r asc limit 5) cbo_t2 on cbo_t1.a=p join cbo_t3 on cbo_t1 [...] + SELECT x, y, count(*) +FROM (SELECT key, (c_int+c_float+1+2) AS x, sum(c_int) AS y +FROM cbo_t1 +GROUP BY c_float, cbo_t1.c_int, key + ) R +GROUP BY y, x +ORDER BY x, y + LIMIT 1; -select cbo_t3.c_int, c, count(*) from (select key as a, c_int+1 as b, sum(c_int) as c from cbo_t1 where (cbo_t1.c_int + 1 >= 0) and (cbo_t1.c_int > 0 or cbo_t1.c_float >= 0) group by c_float, cbo_t1.c_int, key having cbo_t1.c_float > 0 and (c_int >=1 or c_float >= 1) and (c_int + c_float) >= 0 order by b % c asc, b desc limit 5) cbo_t1 left outer join (select key as p, c_int+1 as q, sum(c_int) as r from cbo_t2 where (cbo_t2.c_int + 1 >= 0) and (cbo_t2.c_int > 0 or cbo_t2.c_float >= 0) [...] 
+SELECT key + FROM (SELECT key + FROM (SELECT key + FROM cbo_t1 + ORDER BY key + LIMIT 5 + ) cbo_t2 + LIMIT 5 + ) cbo_t3 + LIMIT 5; + + SELECT key, c_int +FROM (SELECT key, c_int +FROM (SELECT key, c_int +FROM cbo_t1 +ORDER BY c_int, key + LIMIT 5 + ) cbo_t1 +ORDER BY c_int + LIMIT 5 + ) cbo_t2 +ORDER BY c_int + LIMIT 5; + + SELECT cbo_t3.c_int, c, count(*) +FROM (SELECT key AS a, c_int+1 AS b, sum(c_int) AS c +FROM cbo_t1 + WHERE (cbo_t1.c_int + 1 >= 0) + AND (cbo_t1.c_int > 0 OR cbo_t1.c_float >= 0) +GROUP BY c_float, cbo_t1.c_int, key +ORDER BY a, b + LIMIT 5 + ) cbo_t1 +JOIN (SELECT key AS p, c_int+1 AS q, sum(c_int) AS r +FROM cbo_t2 + WHERE (cbo_t2.c_int + 1 >= 0) + AND (cbo_t2.c_int > 0 OR cbo_t2.c_float >= 0) +GROUP BY c_float, cbo_t2.c_int, key +ORDER BY q/10 DESC, r ASC, p ASC + LIMIT 5 + ) cbo_t2 ON cbo_t1.a = p +JOIN cbo_t3 ON cbo_t1.a = key + WHERE (b + cbo_t2.q >= 0) + AND (b > 0 OR c_int >= 0) +GROUP BY cbo_t3.c_int, c +ORDER BY cbo_t3.c_int + c DESC, c ASC + LIMIT 5; + + SELECT cbo_t3.c_int, c, count(*) + FROM (SELECT key AS a, c_int+1 AS b, sum(c_int) AS c + FROM cbo_t1 +WHERE (cbo_t1.c_int + 1 >= 0) + AND (cbo_t1.c_int > 0 OR cbo_t1.c_float >= 0) + GROUP BY c_float, cbo_t1.c_int, key + HAVING cbo_t1.c_float > 0 + AND (c_int >=1 OR c_float >= 1) +
[hive] branch master updated: HIVE-23243 Accept SQL type like pattern for Show Databases (Miklos Gergely, reviewed by David Mollitor)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 7ff8bd0 HIVE-23243 Accept SQL type like pattern for Show Databases (Miklos Gergely, reviewed by David Mollitor) 7ff8bd0 is described below commit 7ff8bd0b7061199c86d3f77bbff729bd7c8eaaf9 Author: miklosgergely AuthorDate: Sun Apr 19 15:20:06 2020 +0200 HIVE-23243 Accept SQL type like pattern for Show Databases (Miklos Gergely, reviewed by David Mollitor) --- .../ql/ddl/database/show/ShowDatabasesOperation.java| 11 ++- ql/src/test/queries/clientpositive/database.q | 7 +-- .../queries/clientpositive/describe_database_json.q | 2 +- ql/src/test/results/clientpositive/llap/database.q.out | 17 + .../clientpositive/llap/describe_database_json.q.out| 4 ++-- 5 files changed, 27 insertions(+), 14 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/show/ShowDatabasesOperation.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/show/ShowDatabasesOperation.java index d7cc033..625a48e 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/show/ShowDatabasesOperation.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/show/ShowDatabasesOperation.java @@ -20,6 +20,8 @@ package org.apache.hadoop.hive.ql.ddl.database.show; import java.io.DataOutputStream; import java.util.List; +import java.util.regex.Pattern; +import java.util.stream.Collectors; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hive.ql.ErrorMsg; @@ -27,6 +29,7 @@ import org.apache.hadoop.hive.ql.ddl.DDLOperation; import org.apache.hadoop.hive.ql.ddl.DDLOperationContext; import org.apache.hadoop.hive.ql.ddl.DDLUtils; import org.apache.hadoop.hive.ql.metadata.HiveException; +import org.apache.hadoop.hive.ql.udf.UDFLike; import org.apache.hadoop.io.IOUtils; /** @@ -39,13 +42,11 @@ public class ShowDatabasesOperation 
extends DDLOperation { @Override public int execute() throws HiveException { -// get the databases for the desired pattern - populate the output stream -List databases = null; +List databases = context.getDb().getAllDatabases(); if (desc.getPattern() != null) { LOG.debug("pattern: {}", desc.getPattern()); - databases = context.getDb().getDatabasesByPattern(desc.getPattern()); -} else { - databases = context.getDb().getAllDatabases(); + Pattern pattern = Pattern.compile(UDFLike.likePatternToRegExp(desc.getPattern())); + databases = databases.stream().filter(name -> pattern.matcher(name).matches()).collect(Collectors.toList()); } LOG.info("Found {} database(s) matching the SHOW DATABASES statement.", databases.size()); diff --git a/ql/src/test/queries/clientpositive/database.q b/ql/src/test/queries/clientpositive/database.q index 36c3c0b..bfa9d1a 100644 --- a/ql/src/test/queries/clientpositive/database.q +++ b/ql/src/test/queries/clientpositive/database.q @@ -39,11 +39,14 @@ CREATE DATABASE test_db; SHOW DATABASES; -- SHOW pattern -SHOW DATABASES LIKE 'test*'; +SHOW DATABASES LIKE 'test%'; -- SHOW pattern -SHOW DATABASES LIKE '*ef*'; +SHOW DATABASES LIKE '%ef%'; +-- SHOW pattern +SHOW DATABASES LIKE 'test_d_'; +SHOW DATABASES LIKE 'test__'; USE test_db; SHOW DATABASES; diff --git a/ql/src/test/queries/clientpositive/describe_database_json.q b/ql/src/test/queries/clientpositive/describe_database_json.q index 67ca68f..cff2fba 100644 --- a/ql/src/test/queries/clientpositive/describe_database_json.q +++ b/ql/src/test/queries/clientpositive/describe_database_json.q @@ -12,7 +12,7 @@ DESCRIBE SCHEMA EXTENDED jsondb1; SHOW DATABASES; -SHOW DATABASES LIKE 'json*'; +SHOW DATABASES LIKE 'json%'; DROP DATABASE jsondb1; diff --git a/ql/src/test/results/clientpositive/llap/database.q.out b/ql/src/test/results/clientpositive/llap/database.q.out index 5027ccd..193cc31 100644 --- a/ql/src/test/results/clientpositive/llap/database.q.out +++ 
b/ql/src/test/results/clientpositive/llap/database.q.out @@ -87,16 +87,25 @@ POSTHOOK: query: SHOW DATABASES POSTHOOK: type: SHOWDATABASES default test_db -PREHOOK: query: SHOW DATABASES LIKE 'test*' +PREHOOK: query: SHOW DATABASES LIKE 'test%' PREHOOK: type: SHOWDATABASES -POSTHOOK: query: SHOW DATABASES LIKE 'test*' +POSTHOOK: query: SHOW DATABASES LIKE 'test%' POSTHOOK: type: SHOWDATABASES test_db -PREHOOK: query: SHOW DATABASES LIKE '*ef*' +PREHOOK: query: SHOW DATABASES LIKE '%ef%' PREHOOK: type: SHOWDATABASES -POSTHOOK: query: SHOW DATABASES LIKE '*ef*' +POSTHOOK: query: SHOW DATABASES LIKE '%ef%' POSTHOOK: type: SHOWDATABASES default +PREHOOK: query: SHOW DATABASES LIKE 'test_d_' +PREHOOK: type: SHOWDATABASES +POSTHOOK: query: SHOW DATABASES LIKE 'test_d_' +POSTHOOK: type: SHOWDATABASES +test_d
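The HIVE-23243 change above fetches all database names and filters them through a regex compiled from the SQL LIKE pattern ('%' for any sequence, '_' for one character) via UDFLike.likePatternToRegExp. A standalone sketch of the same filtering, where likeToRegex is a simplified stand-in for Hive's converter (it only handles '%' and '_', not escapes):

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class LikeFilterDemo {
    // Simplified stand-in for UDFLike.likePatternToRegExp: '%' matches any
    // sequence, '_' matches exactly one character, anything else is literal.
    static String likeToRegex(String like) {
        StringBuilder sb = new StringBuilder();
        for (char c : like.toCharArray()) {
            if (c == '%') sb.append(".*");
            else if (c == '_') sb.append('.');
            else sb.append(Pattern.quote(String.valueOf(c)));
        }
        return sb.toString();
    }

    // Mirrors ShowDatabasesOperation: fetch all names, keep full matches only.
    static List<String> filter(List<String> names, String like) {
        Pattern p = Pattern.compile(likeToRegex(like));
        return names.stream().filter(n -> p.matcher(n).matches()).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> dbs = Arrays.asList("default", "test_db");
        System.out.println(filter(dbs, "test%"));   // [test_db]
        System.out.println(filter(dbs, "%ef%"));    // [default]
        System.out.println(filter(dbs, "test_d_")); // [test_db]
    }
}
```

These three patterns correspond to the updated database.q queries: the old glob-style 'test*' / '*ef*' patterns become SQL-style 'test%' / '%ef%', and '_' now matches a single character, so 'test_d_' matches test_db.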
[hive] 01/01: move files to llap
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch HIVE-23274_280_rb in repository https://gitbox.apache.org/repos/asf/hive.git commit 0148884af597cece1d0a4cd42d97d7e760ac556a Author: miklosgergely AuthorDate: Wed Apr 22 22:10:40 2020 +0200 move files to llap --- .../results/clientpositive/{ => llap}/acid_mapjoin.q.out| 0 .../results/clientpositive/{ => llap}/acid_nullscan.q.out | 0 .../results/clientpositive/{ => llap}/acid_stats2.q.out | 0 .../results/clientpositive/{ => llap}/acid_stats3.q.out | 0 .../results/clientpositive/{ => llap}/acid_stats4.q.out | 0 .../results/clientpositive/{ => llap}/acid_stats5.q.out | 0 .../clientpositive/{ => llap}/acid_table_stats.q.out| 0 .../clientpositive/{ => llap}/acid_view_delete.q.out| 0 .../clientpositive/{ => llap}/alias_casted_column.q.out | 0 .../clientpositive/{ => llap}/allcolref_in_udf.q.out| 0 .../clientpositive/{ => llap}/alterColumnStatsPart.q.out| 0 .../{ => llap}/alter_change_db_location.q.out | 0 .../results/clientpositive/{ => llap}/alter_db_owner.q.out | 0 .../clientpositive/{ => llap}/alter_partition_coltype.q.out | 0 .../results/clientpositive/{ => llap}/ambiguitycheck.q.out | 0 .../results/clientpositive/{ => llap}/ambiguous_col.q.out | 0 .../{ => llap}/analyze_table_null_partition.q.out | 0 .../clientpositive/{ => llap}/analyze_tbl_date.q.out| 0 .../{ => llap}/annotate_stats_deep_filters.q.out| 0 .../clientpositive/{ => llap}/annotate_stats_filter.q.out | 0 .../clientpositive/{ => llap}/annotate_stats_groupby.q.out | 0 .../clientpositive/{ => llap}/annotate_stats_groupby2.q.out | 0 .../clientpositive/{ => llap}/annotate_stats_join.q.out | 0 .../{ => llap}/annotate_stats_join_pkfk.q.out | 0 .../clientpositive/{ => llap}/annotate_stats_limit.q.out| 0 .../clientpositive/{ => llap}/annotate_stats_part.q.out | 0 .../clientpositive/{ => llap}/annotate_stats_select.q.out | 0 .../clientpositive/{ => llap}/annotate_stats_table.q.out| 0 .../clientpositive/{ => 
llap}/annotate_stats_udtf.q.out | 0 .../clientpositive/{ => llap}/annotate_stats_union.q.out| 0 .../clientpositive/{ => llap}/ansi_sql_arithmetic.q.out | 0 .../{ => llap}/array_map_access_nonconstant.q.out | 0 .../clientpositive/{ => llap}/array_size_estimation.q.out | 0 .../results/clientpositive/{ => llap}/authorization_9.q.out | 0 .../clientpositive/{ => llap}/authorization_explain.q.out | 0 .../{ => llap}/authorization_owner_actions_db.q.out | 0 .../clientpositive/{ => llap}/authorization_view_1.q.out| 0 .../{ => llap}/authorization_view_disable_cbo_1.q.out | 0 .../clientpositive/{ => llap}/autoColumnStats_11.q.out | 0 .../clientpositive/{ => llap}/autoColumnStats_4.q.out | 0 .../clientpositive/{ => llap}/autoColumnStats_5.q.out | 0 .../clientpositive/{ => llap}/autoColumnStats_5a.q.out | 0 .../clientpositive/{ => llap}/autoColumnStats_7.q.out | 0 .../clientpositive/{ => llap}/autoColumnStats_8.q.out | 0 .../clientpositive/{ => llap}/autoColumnStats_9.q.out | 0 .../results/clientpositive/{ => llap}/auto_join10.q.out | 0 .../results/clientpositive/{ => llap}/auto_join11.q.out | 0 .../results/clientpositive/{ => llap}/auto_join12.q.out | 0 .../results/clientpositive/{ => llap}/auto_join13.q.out | 0 .../results/clientpositive/{ => llap}/auto_join14.q.out | 0 .../results/clientpositive/{ => llap}/auto_join15.q.out | 0 .../results/clientpositive/{ => llap}/auto_join16.q.out | 0 .../results/clientpositive/{ => llap}/auto_join17.q.out | 0 .../results/clientpositive/{ => llap}/auto_join18.q.out | 0 .../{ => llap}/auto_join18_multi_distinct.q.out | 0 .../results/clientpositive/{ => llap}/auto_join19.q.out | 0 .../clientpositive/{ => llap}/auto_join19_inclause.q.out| 0 .../test/results/clientpositive/{ => llap}/auto_join2.q.out | 0 .../results/clientpositive/{ => llap}/auto_join20.q.out | 0 .../results/clientpositive/{ => llap}/auto_join22.q.out | 0 .../results/clientpositive/{ => llap}/auto_join23.q.out | 0 .../results/clientpositive/{ => llap}/auto_join24.q.out | 0 
.../results/clientpositive/{ => llap}/auto_join25.q.out | 0 .../results/clientpositive/{ => llap}/auto_join26.q.out | 0 .../results/clientpositive/{ => llap}/auto_join27.q.out | 0 .../results/clientpositive/{ => llap}/auto_join28.q.out | 0 .../test/results/clientpositive/{ => llap}/auto_j
[hive] branch HIVE-23274_280_rb created (now 0148884)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch HIVE-23274_280_rb
in repository https://gitbox.apache.org/repos/asf/hive.git.

      at  0148884  move files to llap

This branch includes the following new commits:

     new  0148884  move files to llap

The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.
[hive] branch master updated: HIVE-23264 Make partition_wise_fileformat12.q deterministic with order by clauses (Miklos Gergely, reviewed by Laszlo Bodor)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 3bb2bf8 HIVE-23264 Make partition_wise_fileformat12.q deterministic with order by clauses (Miklos Gergely, reviewed by Laszlo Bodor) 3bb2bf8 is described below commit 3bb2bf8813cb4cc578b16459025aa151f3aeb8bd Author: miklosgergely AuthorDate: Tue Apr 21 16:11:49 2020 +0200 HIVE-23264 Make partition_wise_fileformat12.q deterministic with order by clauses (Miklos Gergely, reviewed by Laszlo Bodor) --- .../clientpositive/partition_wise_fileformat12.q | 16 .../llap/partition_wise_fileformat12.q.out | 48 +++--- 2 files changed, 32 insertions(+), 32 deletions(-) diff --git a/ql/src/test/queries/clientpositive/partition_wise_fileformat12.q b/ql/src/test/queries/clientpositive/partition_wise_fileformat12.q index c9379f4..1933d96 100644 --- a/ql/src/test/queries/clientpositive/partition_wise_fileformat12.q +++ b/ql/src/test/queries/clientpositive/partition_wise_fileformat12.q @@ -6,22 +6,22 @@ create table partition_test_partitioned_n9(key string, value string) partitioned alter table partition_test_partitioned_n9 set serde 'org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe'; insert overwrite table partition_test_partitioned_n9 partition(dt='1') select * from src where key = 238; -select * from partition_test_partitioned_n9 where dt is not null; -select key+key, value from partition_test_partitioned_n9 where dt is not null; +select * from partition_test_partitioned_n9 where dt is not null order by key, value, dt; +select key+key, value from partition_test_partitioned_n9 where dt is not null order by key; set hive.metastore.disallow.incompatible.col.type.changes=false; alter table partition_test_partitioned_n9 change key key int; reset hive.metastore.disallow.incompatible.col.type.changes; -select key+key, value from 
partition_test_partitioned_n9 where dt is not null; -select * from partition_test_partitioned_n9 where dt is not null; +select key+key, value from partition_test_partitioned_n9 where dt is not null order by key, value; +select * from partition_test_partitioned_n9 where dt is not null order by key, value, dt; insert overwrite table partition_test_partitioned_n9 partition(dt='2') select * from src where key = 97; alter table partition_test_partitioned_n9 add columns (value2 string); -select key+key, value from partition_test_partitioned_n9 where dt is not null; -select * from partition_test_partitioned_n9 where dt is not null; +select key+key, value from partition_test_partitioned_n9 where dt is not null order by key, value; +select * from partition_test_partitioned_n9 where dt is not null order by key, value, value2, dt; insert overwrite table partition_test_partitioned_n9 partition(dt='3') select key, value, value from src where key = 200; -select key+key, value, value2 from partition_test_partitioned_n9 where dt is not null; -select * from partition_test_partitioned_n9 where dt is not null; +select key+key, value, value2 from partition_test_partitioned_n9 where dt is not null order by key, value, value2; +select * from partition_test_partitioned_n9 where dt is not null order by key, value, value2, dt; diff --git a/ql/src/test/results/clientpositive/llap/partition_wise_fileformat12.q.out b/ql/src/test/results/clientpositive/llap/partition_wise_fileformat12.q.out index c91e2b7..bd1952e 100644 --- a/ql/src/test/results/clientpositive/llap/partition_wise_fileformat12.q.out +++ b/ql/src/test/results/clientpositive/llap/partition_wise_fileformat12.q.out @@ -24,24 +24,24 @@ POSTHOOK: Input: default@src POSTHOOK: Output: default@partition_test_partitioned_n9@dt=1 POSTHOOK: Lineage: partition_test_partitioned_n9 PARTITION(dt=1).key SIMPLE [(src)src.FieldSchema(name:key, type:string, comment:default), ] POSTHOOK: Lineage: partition_test_partitioned_n9 PARTITION(dt=1).value 
SIMPLE [(src)src.FieldSchema(name:value, type:string, comment:default), ] -PREHOOK: query: select * from partition_test_partitioned_n9 where dt is not null +PREHOOK: query: select * from partition_test_partitioned_n9 where dt is not null order by key, value, dt PREHOOK: type: QUERY PREHOOK: Input: default@partition_test_partitioned_n9 PREHOOK: Input: default@partition_test_partitioned_n9@dt=1 A masked pattern was here -POSTHOOK: query: select * from partition_test_partitioned_n9 where dt is not null +POSTHOOK: query: select * from partition_test_partitioned_n9 where dt is not null order by key, value, dt POSTHOOK: type: QUERY POSTHOOK: Input: default@partition_test_partitioned_n9 POSTHOOK: Input: default@partition_test_partitioned_n9@dt=1 A masked pattern was here 238val_238 1 238val_238 1 -PREHOOK: query: select key+key, value from
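HIVE-23264 (like HIVE-23245 further down) adds ORDER BY clauses over the projected columns so the golden-file (.q.out) comparison no longer depends on the order in which rows arrive from the partitions. The same idea in a standalone sketch (not Hive code): sort each run's rows before comparing them line by line. Rows are modeled here as tab-joined strings, so lexicographic sort stands in for ORDER BY over string columns.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class DeterministicRows {
    // Golden-file comparisons fail when row order varies between runs;
    // sorting on the projected columns — what the added ORDER BY clauses do —
    // makes two runs with the same rows compare equal line by line.
    static List<String> stable(List<String> rows) {
        List<String> copy = new ArrayList<>(rows);
        Collections.sort(copy);
        return copy;
    }

    public static void main(String[] args) {
        // Two runs returning the same rows in different order.
        List<String> runA = List.of("238\tval_238\t1", "97\tval_97\t2");
        List<String> runB = List.of("97\tval_97\t2", "238\tval_238\t1");
        System.out.println(stable(runA).equals(stable(runB))); // true
    }
}
```

Ordering by every selected column (key, value, dt) rather than just one is what makes the output fully deterministic even when several rows share a key.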
[hive] branch master updated: HIVE-23263 Add fix order to cbo_rp_limit.q queries + improve readability (Miklos Gergely, reviewed by Laszlo Bodor)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new df825b9 HIVE-23263 Add fix order to cbo_rp_limit.q queries + improve readability (Miklos Gergely, reviewed by Laszlo Bodor) df825b9 is described below commit df825b9ac052e003b32b26efeb844dc0e0f332ed Author: miklosgergely AuthorDate: Tue Apr 21 13:22:21 2020 +0200 HIVE-23263 Add fix order to cbo_rp_limit.q queries + improve readability (Miklos Gergely, reviewed by Laszlo Bodor) --- ql/src/test/queries/clientpositive/cbo_rp_limit.q | 105 +- .../results/clientpositive/llap/cbo_rp_limit.q.out | 218 ++--- 2 files changed, 294 insertions(+), 29 deletions(-) diff --git a/ql/src/test/queries/clientpositive/cbo_rp_limit.q b/ql/src/test/queries/clientpositive/cbo_rp_limit.q index 19c0522..1fe7edc 100644 --- a/ql/src/test/queries/clientpositive/cbo_rp_limit.q +++ b/ql/src/test/queries/clientpositive/cbo_rp_limit.q @@ -9,13 +9,104 @@ set hive.stats.fetch.column.stats=true; set hive.auto.convert.join=false; -- 7. 
Test Select + TS + Join + Fil + GB + GB Having + Limit -select key, (c_int+1)+2 as x, sum(c_int) from cbo_t1 group by c_float, cbo_t1.c_int, key order by x,key limit 1; + SELECT key, (c_int+1)+2 AS x, sum(c_int) +FROM cbo_t1 +GROUP BY c_float, cbo_t1.c_int, key +ORDER BY x, key + LIMIT 1; + -- annoying spaces in the key -select distinct "<"||key||">", (c_int+c_float+1+2) as x, sum(c_int) as y from cbo_t1 group by c_float, cbo_t1.c_int, key limit 2; -select x, y, count(*) from (select key, (c_int+c_float+1+2) as x, sum(c_int) as y from cbo_t1 group by c_float, cbo_t1.c_int, key) R group by y, x order by x,y limit 5; -select key from(select key from (select key from cbo_t1 limit 5)cbo_t2 limit 5)cbo_t3 limit 5; -select key, c_int from(select key, c_int from (select key, c_int from cbo_t1 order by c_int limit 5)cbo_t1 order by c_int limit 5)cbo_t2 order by c_int limit 5; + SELECT DISTINCT "<"||key||">", (c_int+c_float+1+2) AS x, sum(c_int) AS y +FROM cbo_t1 +GROUP BY c_float, cbo_t1.c_int, key + LIMIT 2; + + SELECT x, y, count(*) +FROM (SELECT key, (c_int+c_float+1+2) AS x, sum(c_int) AS y +FROM cbo_t1 +GROUP BY c_float, cbo_t1.c_int, key + ) R +GROUP BY y, x +ORDER BY x, y + LIMIT 5; + +SELECT key + FROM (SELECT key + FROM (SELECT key + FROM cbo_t1 + ORDER BY key + LIMIT 5 + ) cbo_t2 + LIMIT 5 + ) cbo_t3 + LIMIT 5; + + SELECT key, c_int +FROM (SELECT key, c_int +FROM (SELECT key, c_int +FROM cbo_t1 +ORDER BY c_int, key + LIMIT 5 + ) cbo_t1 +ORDER BY c_int + LIMIT 5 + ) cbo_t2 +ORDER BY c_int + LIMIT 5; -select cbo_t3.c_int, c, count(*) from (select key as a, c_int+1 as b, sum(c_int) as c from cbo_t1 where (cbo_t1.c_int + 1 >= 0) and (cbo_t1.c_int > 0 or cbo_t1.c_float >= 0) group by c_float, cbo_t1.c_int, key order by a limit 5) cbo_t1 join (select key as p, c_int+1 as q, sum(c_int) as r from cbo_t2 where (cbo_t2.c_int + 1 >= 0) and (cbo_t2.c_int > 0 or cbo_t2.c_float >= 0) group by c_float, cbo_t2.c_int, key order by q/10 desc, r asc limit 5) cbo_t2 on cbo_t1.a=p 
join cbo_t3 on cbo_t1 [...] + SELECT cbo_t3.c_int, c, count(*) +FROM (SELECT key AS a, c_int+1 AS b, sum(c_int) AS c +FROM cbo_t1 + WHERE (cbo_t1.c_int + 1 >= 0) + AND (cbo_t1.c_int > 0 OR cbo_t1.c_float >= 0) +GROUP BY c_float, cbo_t1.c_int, key +ORDER BY a, b + LIMIT 5 + ) cbo_t1 +JOIN (SELECT key AS p, c_int+1 AS q, sum(c_int) AS r +FROM cbo_t2 + WHERE (cbo_t2.c_int + 1 >= 0) + AND (cbo_t2.c_int > 0 OR cbo_t2.c_float >= 0) +GROUP BY c_float, cbo_t2.c_int, key +ORDER BY q/10 DESC, r ASC, p ASC + LIMIT 5 + ) cbo_t2 ON cbo_t1.a = p +JOIN cbo_t3 ON cbo_t1.a = key + WHERE (b + cbo_t2.q >= 0) + AND (b > 0 OR c_int >= 0) +GROUP BY cbo_t3.c_int, c +ORDER BY cbo_t3.c_int + c DESC, c ASC + LIMIT 5; -select cbo_t3.c_int, c, count(*) from (select key as a, c_int+1 as b, sum(c_int) as c from cbo_t1 where (cbo_t1.c_int + 1 >= 0) and (cbo_t1.c_int > 0 or cbo_t1.c_float >= 0) group by c_float, cbo_t1.c_int, key having cbo_t1.c_float > 0 and (c_int >=1 or c_float >= 1) and (c_int + c_float) >= 0 order by b % c asc, b desc limit 5) cbo_t1 left outer join (select key as p, c_int+1 as q, sum(c_int) as r from cbo_t2 where (cbo_t2.c_int + 1 >= 0) and (cbo_t2.c_int > 0 or cbo_t2.c_float >= 0) [...] + SELECT cbo_t3.c_int, c,
[hive] branch master updated: HIVE-23245 Ensure result order for partition_date.q partition_timestamp.q queries (Miklos Gergely, reviewed by Laszlo Bodor)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 7b6bacb HIVE-23245 Ensure result order for partition_date.q partition_timestamp.q queries (Miklos Gergely, reviewed by Laszlo Bodor) 7b6bacb is described below commit 7b6bacb3e3f9f47e7d6e702d23cf2c909842b993 Author: miklosgergely AuthorDate: Mon Apr 20 00:22:50 2020 +0200 HIVE-23245 Ensure result order for partition_date.q partition_timestamp.q queries (Miklos Gergely, reviewed by Laszlo Bodor) --- ql/src/test/queries/clientpositive/partition_date.q | 2 +- ql/src/test/queries/clientpositive/partition_timestamp.q | 2 +- ql/src/test/queries/clientpositive/temp_table_partition_date.q| 2 +- ql/src/test/queries/clientpositive/temp_table_partition_timestamp.q | 2 +- ql/src/test/results/clientpositive/llap/partition_date.q.out | 4 ++-- ql/src/test/results/clientpositive/llap/partition_timestamp.q.out | 4 ++-- .../test/results/clientpositive/llap/temp_table_partition_date.q.out | 4 ++-- .../results/clientpositive/llap/temp_table_partition_timestamp.q.out | 4 ++-- 8 files changed, 12 insertions(+), 12 deletions(-) diff --git a/ql/src/test/queries/clientpositive/partition_date.q b/ql/src/test/queries/clientpositive/partition_date.q index 3d131d8..1f16d74 100644 --- a/ql/src/test/queries/clientpositive/partition_date.q +++ b/ql/src/test/queries/clientpositive/partition_date.q @@ -16,7 +16,7 @@ insert overwrite table partition_date_1 partition(dt='2013-08-08', region= '10') select * from src tablesample (11 rows); -select distinct dt from partition_date_1; +select distinct dt from partition_date_1 order by dt; select * from partition_date_1 where dt = '2000-01-01' and region = '2' order by key,value; -- 15 diff --git a/ql/src/test/queries/clientpositive/partition_timestamp.q b/ql/src/test/queries/clientpositive/partition_timestamp.q index 
17c164a..55fbca1 100644 --- a/ql/src/test/queries/clientpositive/partition_timestamp.q +++ b/ql/src/test/queries/clientpositive/partition_timestamp.q @@ -15,7 +15,7 @@ insert overwrite table partition_timestamp_1 partition(dt='2001-01-01 02:00:00', insert overwrite table partition_timestamp_1 partition(dt='2001-01-01 03:00:00', region= '10') select * from src tablesample (11 rows); -select distinct dt from partition_timestamp_1; +select distinct dt from partition_timestamp_1 order by dt; select * from partition_timestamp_1 where dt = '2000-01-01 01:00:00' and region = '2' order by key,value; -- 10 diff --git a/ql/src/test/queries/clientpositive/temp_table_partition_date.q b/ql/src/test/queries/clientpositive/temp_table_partition_date.q index 6f3fd0a..bd85039 100644 --- a/ql/src/test/queries/clientpositive/temp_table_partition_date.q +++ b/ql/src/test/queries/clientpositive/temp_table_partition_date.q @@ -16,7 +16,7 @@ insert overwrite table partition_date_1_temp partition(dt='2013-08-08', region= select * from src tablesample (11 rows); -select distinct dt from partition_date_1_temp; +select distinct dt from partition_date_1_temp order by dt; select * from partition_date_1_temp where dt = '2000-01-01' and region = '2' order by key,value; -- 15 diff --git a/ql/src/test/queries/clientpositive/temp_table_partition_timestamp.q b/ql/src/test/queries/clientpositive/temp_table_partition_timestamp.q index 09d4417..ea36e80 100644 --- a/ql/src/test/queries/clientpositive/temp_table_partition_timestamp.q +++ b/ql/src/test/queries/clientpositive/temp_table_partition_timestamp.q @@ -15,7 +15,7 @@ insert overwrite table partition_timestamp_1_temp partition(dt='2001-01-01 02:00 insert overwrite table partition_timestamp_1_temp partition(dt='2001-01-01 03:00:00', region= '10') select * from src tablesample (11 rows); -select distinct dt from partition_timestamp_1_temp; +select distinct dt from partition_timestamp_1_temp order by dt; select * from partition_timestamp_1_temp where 
dt = '2000-01-01 01:00:00' and region = '2' order by key,value; -- 10 diff --git a/ql/src/test/results/clientpositive/llap/partition_date.q.out b/ql/src/test/results/clientpositive/llap/partition_date.q.out index bbb8a04..caa1163 100644 --- a/ql/src/test/results/clientpositive/llap/partition_date.q.out +++ b/ql/src/test/results/clientpositive/llap/partition_date.q.out @@ -70,7 +70,7 @@ POSTHOOK: Input: default@src POSTHOOK: Output: default@partition_date_1@dt=2013-08-08/region=10 POSTHOOK: Lineage: partition_date_1 PARTITION(dt=2013-08-08,region=10).key SIMPLE [(src)src.FieldSchema(name:key, type:string, comment:default), ] POSTHOOK: Lineage: partition_date_1 PARTITION(dt=2013-08-08,region=10).value SIMPLE [(src)src.FieldSchema(name:value, type:string, comment:default), ] -PREHOOK: query: select distinct dt from partition_date_1 +PREHOOK
[hive] branch master updated: HIVE-23221 Ignore flaky test testHouseKeepingThreadExistence in TestMetastoreHousekeepingLeaderEmptyConfig and TestMetastoreHousekeepingLeader
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 110d398 HIVE-23221 Ignore flaky test testHouseKeepingThreadExistence in TestMetastoreHousekeepingLeaderEmptyConfig and TestMetastoreHousekeepingLeader 110d398 is described below commit 110d398fc66fa26bea51856b0b542303486a2ca4 Author: miklosgergely AuthorDate: Thu Apr 16 22:20:18 2020 +0200 HIVE-23221 Ignore flaky test testHouseKeepingThreadExistence in TestMetastoreHousekeepingLeaderEmptyConfig and TestMetastoreHousekeepingLeader --- .../apache/hadoop/hive/metastore/TestMetastoreHousekeepingLeader.java | 3 +++ .../hive/metastore/TestMetastoreHousekeepingLeaderEmptyConfig.java | 3 +++ 2 files changed, 6 insertions(+) diff --git a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestMetastoreHousekeepingLeader.java b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestMetastoreHousekeepingLeader.java index e8b820d..03a8161 100644 --- a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestMetastoreHousekeepingLeader.java +++ b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestMetastoreHousekeepingLeader.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hive.metastore; import org.junit.Assert; import org.junit.Before; +import org.junit.Ignore; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -37,6 +38,8 @@ public class TestMetastoreHousekeepingLeader extends MetastoreHousekeepingLeader internalSetup("localhost"); } + @Ignore("HIVE-23221 Ignore flaky test testHouseKeepingThreadExistence in TestMetastoreHousekeepingLeaderEmptyConfig" + + " and TestMetastoreHousekeepingLeader") @Test public void testHouseKeepingThreadExistence() throws Exception { searchHousekeepingThreads(); diff --git 
a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestMetastoreHousekeepingLeaderEmptyConfig.java b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestMetastoreHousekeepingLeaderEmptyConfig.java index 202a677..75ea637 100644 --- a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestMetastoreHousekeepingLeaderEmptyConfig.java +++ b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/metastore/TestMetastoreHousekeepingLeaderEmptyConfig.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hive.metastore; import org.junit.Assert; import org.junit.Before; +import org.junit.Ignore; import org.junit.Test; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -39,6 +40,8 @@ public class TestMetastoreHousekeepingLeaderEmptyConfig extends MetastoreHouseke internalSetup(""); } + @Ignore("HIVE-23221 Ignore flaky test testHouseKeepingThreadExistence in TestMetastoreHousekeepingLeaderEmptyConfig" + + " and TestMetastoreHousekeepingLeader") @Test public void testHouseKeepingThreadExistence() throws Exception { searchHousekeepingThreads();
[hive] branch master updated: HIVE-23123 Disable export/import of views and materialized views (Miklos Gergely, reviewed by Mahesh Kumar Behera)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new c4023c5 HIVE-23123 Disable export/import of views and materialized views (Miklos Gergely, reviewed by Mahesh Kumar Behera) c4023c5 is described below commit c4023c5a23c21933d68187b7c57b9b9c4f3cbdbb Author: miklosgergely AuthorDate: Sat Apr 4 15:02:03 2020 +0200 HIVE-23123 Disable export/import of views and materialized views (Miklos Gergely, reviewed by Mahesh Kumar Behera) --- .../hadoop/hive/ql/exec/repl/ReplLoadTask.java | 88 -- .../bootstrap/events/filesystem/FSTableEvent.java | 4 + .../hive/ql/parse/AcidExportSemanticAnalyzer.java | 4 + .../org/apache/hadoop/hive/ql/parse/EximUtil.java | 14 + .../hive/ql/parse/ExportSemanticAnalyzer.java | 4 + .../hive/ql/parse/ImportSemanticAnalyzer.java | 18 +- .../ql/parse/repl/load/message/TableHandler.java | 31 +- .../hadoop/hive/ql/plan/ImportTableDesc.java | 316 - .../clientnegative/export_materialized_view.q | 5 + ql/src/test/queries/clientnegative/export_view.q | 5 + .../clientnegative/export_materialized_view.q.out | 19 ++ .../test/results/clientnegative/export_view.q.out | 20 ++ 12 files changed, 213 insertions(+), 315 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java index a593555..b578d48 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java @@ -20,12 +20,15 @@ package org.apache.hadoop.hive.ql.exec.repl; import com.google.common.collect.Collections2; import org.apache.commons.lang3.StringUtils; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.hive.common.TableName; import org.apache.hadoop.hive.common.repl.ReplScope; import org.apache.hadoop.hive.conf.HiveConf; 
+import org.apache.hadoop.hive.metastore.TableType; import org.apache.hadoop.hive.metastore.api.Database; import org.apache.hadoop.hive.ql.ErrorMsg; import org.apache.hadoop.hive.ql.ddl.DDLWork; import org.apache.hadoop.hive.ql.ddl.database.alter.poperties.AlterDatabaseSetPropertiesDesc; +import org.apache.hadoop.hive.ql.ddl.view.create.CreateViewDesc; import org.apache.hadoop.hive.ql.exec.Task; import org.apache.hadoop.hive.ql.exec.TaskFactory; import org.apache.hadoop.hive.ql.exec.repl.bootstrap.events.BootstrapEvent; @@ -33,9 +36,9 @@ import org.apache.hadoop.hive.ql.exec.repl.bootstrap.events.ConstraintEvent; import org.apache.hadoop.hive.ql.exec.repl.bootstrap.events.DatabaseEvent; import org.apache.hadoop.hive.ql.exec.repl.bootstrap.events.FunctionEvent; import org.apache.hadoop.hive.ql.exec.repl.bootstrap.events.PartitionEvent; -import org.apache.hadoop.hive.ql.exec.repl.bootstrap.events.TableEvent; import org.apache.hadoop.hive.ql.exec.repl.bootstrap.events.filesystem.BootstrapEventsIterator; import org.apache.hadoop.hive.ql.exec.repl.bootstrap.events.filesystem.ConstraintEventsIterator; +import org.apache.hadoop.hive.ql.exec.repl.bootstrap.events.filesystem.FSTableEvent; import org.apache.hadoop.hive.ql.exec.repl.bootstrap.load.LoadConstraint; import org.apache.hadoop.hive.ql.exec.repl.bootstrap.load.LoadDatabase; import org.apache.hadoop.hive.ql.exec.repl.bootstrap.load.LoadFunction; @@ -49,10 +52,13 @@ import org.apache.hadoop.hive.ql.exec.repl.util.TaskTracker; import org.apache.hadoop.hive.ql.exec.util.DAGTraversal; import org.apache.hadoop.hive.ql.metadata.Hive; import org.apache.hadoop.hive.ql.metadata.HiveException; +import org.apache.hadoop.hive.ql.metadata.Table; +import org.apache.hadoop.hive.ql.parse.HiveTableName; import org.apache.hadoop.hive.ql.parse.ReplicationSpec; import org.apache.hadoop.hive.ql.parse.SemanticAnalyzer; import org.apache.hadoop.hive.ql.parse.SemanticException; import org.apache.hadoop.hive.ql.parse.repl.ReplLogger; +import 
org.apache.hadoop.hive.ql.parse.repl.load.MetaData; import org.apache.hadoop.hive.ql.plan.api.StageType; import java.io.Serializable; @@ -67,6 +73,7 @@ import static org.apache.hadoop.hive.ql.exec.repl.bootstrap.load.LoadDatabase.Al import static org.apache.hadoop.hive.ql.exec.repl.ReplAck.LOAD_ACKNOWLEDGEMENT; public class ReplLoadTask extends Task implements Serializable { + private static final long serialVersionUID = 1L; private final static int ZERO_TASKS = 0; @Override @@ -84,7 +91,7 @@ public class ReplLoadTask extends Task implements Serializable { * by the driver. It does not track details across multiple runs of LoadTask. */ private static class Scope { -boolean database = false, table = false, partition = false; +boolean database = false, table = false; List> rootTasks = new ArrayL
[hive] branch master updated: HIVE-23200 Remove semijoin_reddedup without '.q' from testconfiguration.properties (Miklos Gergely, reviewed by Laszlo Bodor)

This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new 871c678  HIVE-23200 Remove semijoin_reddedup without '.q' from testconfiguration.properties (Miklos Gergely, reviewed by Laszlo Bodor)
871c678 is described below

commit 871c678f2664b2c1924dfbf59a77b5416d2d56cc
Author: miklosgergely
AuthorDate: Tue Apr 14 15:56:38 2020 +0200

    HIVE-23200 Remove semijoin_reddedup without '.q' from testconfiguration.properties (Miklos Gergely, reviewed by Laszlo Bodor)
---
 itests/src/test/resources/testconfiguration.properties | 1 -
 1 file changed, 1 deletion(-)

diff --git a/itests/src/test/resources/testconfiguration.properties b/itests/src/test/resources/testconfiguration.properties
index c4864e3..48f90fe 100644
--- a/itests/src/test/resources/testconfiguration.properties
+++ b/itests/src/test/resources/testconfiguration.properties
@@ -789,7 +789,6 @@ minillaplocal.query.files=\
   semijoin7.q,\
   semijoin_hint.q,\
   sharedwork.q,\
-  semijoin_reddedup,\
   semijoin_reddedup.q,\
   sharedworkext.q,\
   sharedworkresidual.q,\
[hive] branch master updated: HIVE-23205 ADDENDUM Do not run TestMTQueries until HIVE-23138 is finished (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)

This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new 684473f  HIVE-23205 ADDENDUM Do not run TestMTQueries until HIVE-23138 is finished (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
684473f is described below

commit 684473f082458b1cb4fb1f7be1fc840b7f7b9bce
Author: miklosgergely
AuthorDate: Wed Apr 15 17:36:37 2020 +0200

    HIVE-23205 ADDENDUM Do not run TestMTQueries until HIVE-23138 is finished (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
---
 .../src/test/java/org/apache/hadoop/hive/ql/TestMTQueries.java | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestMTQueries.java b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestMTQueries.java
index d72c14a..f2c81fd 100644
--- a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestMTQueries.java
+++ b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestMTQueries.java
@@ -19,12 +19,15 @@ package org.apache.hadoop.hive.ql;
 
 import java.io.File;
+
+import org.junit.Ignore;
 import org.junit.Test;
 
 import static org.junit.Assert.fail;
 
 /**
  * Suite for testing running of queries in multi-threaded mode.
  */
+@Ignore("Ignore until HIVE-23138 is finished")
 public class TestMTQueries extends BaseTestQueries {
 
   public TestMTQueries() {
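Both HIVE-23205 and HIVE-23221 above disable tests by attaching JUnit 4's @Ignore annotation, whose reason string the runner reports when it skips the test. The mechanism can be sketched with the standard library alone: a runtime-retained annotation that a runner reads reflectively. The `Skip` annotation and the `TestMTQueries` stand-in class below are hypothetical names for illustration, not JUnit's own types.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class IgnoreDemo {
    // A runtime-retained marker annotation carrying a reason string,
    // analogous to org.junit.Ignore.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Skip {
        String value() default "";
    }

    // Stand-in for an ignored test class, like TestMTQueries in the patch.
    @Skip("HIVE-23138 makes this suite flaky")
    static class TestMTQueries { }

    // What a test runner does: check for the annotation and, if present,
    // skip the class and surface the reason.
    public static String skipReason(Class<?> testClass) {
        Skip skip = testClass.getAnnotation(Skip.class);
        return skip == null ? null : skip.value();
    }
}
```

A runner that finds a non-null reason skips the whole class instead of executing its test methods, which is exactly what annotating the class (rather than individual methods) buys in the patch above.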
[hive] branch master updated: HIVE-23179 Show create table is not showing SerDe Properties in unicode (Naresh P R, reviewed by Miklos Gergely)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 37ac058 HIVE-23179 Show create table is not showing SerDe Properties in unicode (Naresh P R, reviewed by Miklos Gergely) 37ac058 is described below commit 37ac05861660e9863d51fb7cc34a0dcf13c5f1e5 Author: Naresh P R AuthorDate: Mon Apr 13 19:53:13 2020 +0200 HIVE-23179 Show create table is not showing SerDe Properties in unicode (Naresh P R, reviewed by Miklos Gergely) --- .../apache/hive/common/util/HiveStringUtils.java | 14 ++ .../create/show/ShowCreateTableOperation.java | 10 +- .../clientpositive/show_create_table_delimited.q | 15 ++ .../show_create_table_delimited.q.out | 156 + 4 files changed, 190 insertions(+), 5 deletions(-) diff --git a/common/src/java/org/apache/hive/common/util/HiveStringUtils.java b/common/src/java/org/apache/hive/common/util/HiveStringUtils.java index 22948e3..6499ac1 100644 --- a/common/src/java/org/apache/hive/common/util/HiveStringUtils.java +++ b/common/src/java/org/apache/hive/common/util/HiveStringUtils.java @@ -43,6 +43,7 @@ import com.google.common.base.Splitter; import org.apache.commons.lang3.StringUtils; import org.apache.commons.lang3.text.translate.CharSequenceTranslator; import org.apache.commons.lang3.text.translate.EntityArrays; +import org.apache.commons.lang3.text.translate.JavaUnicodeEscaper; import org.apache.commons.lang3.text.translate.LookupTranslator; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hive.common.classification.InterfaceAudience; @@ -83,6 +84,9 @@ public class HiveStringUtils { }).with( new LookupTranslator(EntityArrays.JAVA_CTRL_CHARS_ESCAPE())); + private static final CharSequenceTranslator UNICODE_CONVERTER = + JavaUnicodeEscaper.outsideOf(32, 127); + static { NumberFormat numberFormat = NumberFormat.getNumberInstance(Locale.ENGLISH); 
decimalFormat = (DecimalFormat) numberFormat; @@ -652,6 +656,16 @@ public class HiveStringUtils { } /** + * Escape java unicode characters. + * + * @param str Original string + * @return Escaped string + */ + public static String escapeUnicode(String str) { +return UNICODE_CONVERTER.translate(str); + } + + /** * Unescape commas in the string using the default escape char * @param str a string * @return an unescaped string diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/create/show/ShowCreateTableOperation.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/create/show/ShowCreateTableOperation.java index 51d9f10..bf91344 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/create/show/ShowCreateTableOperation.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/create/show/ShowCreateTableOperation.java @@ -18,10 +18,6 @@ package org.apache.hadoop.hive.ql.ddl.table.create.show; -import org.apache.hadoop.hive.ql.ddl.DDLOperationContext; -import org.apache.hadoop.hive.ql.ddl.DDLUtils; -import org.apache.hadoop.hive.ql.ddl.table.create.CreateTableOperation; - import static org.apache.hadoop.hive.metastore.api.hive_metastoreConstants.META_TABLE_STORAGE; import java.io.DataOutputStream; @@ -47,6 +43,9 @@ import org.apache.hadoop.hive.metastore.api.SerDeInfo; import org.apache.hadoop.hive.metastore.api.SkewedInfo; import org.apache.hadoop.hive.metastore.api.StorageDescriptor; import org.apache.hadoop.hive.ql.ddl.DDLOperation; +import org.apache.hadoop.hive.ql.ddl.DDLOperationContext; +import org.apache.hadoop.hive.ql.ddl.DDLUtils; +import org.apache.hadoop.hive.ql.ddl.table.create.CreateTableOperation; import org.apache.hadoop.hive.ql.metadata.HiveException; import org.apache.hadoop.hive.ql.metadata.Table; import org.apache.hadoop.hive.ql.util.DirectionUtils; @@ -322,7 +321,8 @@ public class ShowCreateTableOperation extends DDLOperation SortedMap sortedSerdeParams = new TreeMap(serdeParams); List serdeCols = new ArrayList(); for (Entry entry : 
sortedSerdeParams.entrySet()) { - serdeCols.add(" '" + entry.getKey() + "'='" + HiveStringUtils.escapeHiveCommand(entry.getValue()) + "'"); + serdeCols.add(" '" + entry.getKey() + "'='" + + HiveStringUtils.escapeUnicode(HiveStringUtils.escapeHiveCommand(entry.getValue())) + "'"); } builder diff --git a/ql/src/test/queries/clientpositive/show_create_table_delimited.q b/ql/src/test/queries/clientpositive/show_create_table_delimited.q index 7722964..4eef9d5 100644 --- a/ql/src/test/queries/clientpositive/show_create_table_delimited.q +++ b/ql/src/test/queries/clientpositive/show_create_table_delimited.q @@ -7,3 +7,18 @@ LOCATION 'file:${system:test.tmp.dir}/tmp_showcrt1'; SHOW CREATE TABLE tmp_showcrt1; DROP TABLE
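The HIVE-23179 patch above routes SerDe property values through `JavaUnicodeEscaper.outsideOf(32, 127)` so that SHOW CREATE TABLE prints control and non-ASCII characters as visible \uXXXX escapes. A stdlib-only sketch of that escaping rule follows; the class and method names are illustrative, not Hive's, and commons-lang3's escaper differs in minor details such as surrogate handling.

```java
public class UnicodeEscape {
    // Mimics JavaUnicodeEscaper.outsideOf(32, 127): characters below U+0020
    // or above U+007F become \uXXXX escapes; printable ASCII passes through.
    public static String escapeUnicode(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < 32 || c > 127) {
                // %04X pads to four uppercase hex digits, e.g. \u00E9.
                sb.append(String.format("\\u%04X", (int) c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

This matters for SerDe properties like `field.delim` set to a control character (e.g. \u0001), which would otherwise render as an invisible byte in the generated DDL.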
[hive] branch master updated: HIVE-23139 Improve q test result masking (Miklos Gergely, reviewed by Laszlo Bodor)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 897e1e8 HIVE-23139 Improve q test result masking (Miklos Gergely, reviewed by Laszlo Bodor) 897e1e8 is described below commit 897e1e8687cdc9904cca9186868e58c6ea8eca58 Author: miklosgergely AuthorDate: Thu Apr 2 16:24:30 2020 +0200 HIVE-23139 Improve q test result masking (Miklos Gergely, reviewed by Laszlo Bodor) --- .../org/apache/hadoop/hive/ql/QOutProcessor.java | 120 ++--- 1 file changed, 82 insertions(+), 38 deletions(-) diff --git a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QOutProcessor.java b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QOutProcessor.java index b3a00d8..879a204 100644 --- a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QOutProcessor.java +++ b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QOutProcessor.java @@ -89,53 +89,68 @@ public class QOutProcessor { } private final Pattern[] planMask = toPattern(new String[] { - ".*file:.*", - ".*pfile:.*", - ".*/tmp/.*", - ".*invalidscheme:.*", - ".*lastUpdateTime.*", - ".*lastAccessTime.*", - ".*lastModifiedTime.*", - ".*[Oo]wner.*", - ".*CreateTime.*", - ".*LastAccessTime.*", - ".*Location.*", - ".*LOCATION '.*", - ".*transient_lastDdlTime.*", - ".*last_modified_.*", - ".*at org.*", - ".*at sun.*", - ".*at java.*", - ".*at junit.*", - ".*Caused by:.*", - ".*LOCK_QUERYID:.*", - ".*LOCK_TIME:.*", - ".*grantTime.*", ".*[.][.][.] 
[0-9]* more.*", - ".*job_[0-9_]*.*", - ".*job_local[0-9_]*.*", - ".*USING 'java -cp.*", - "^Deleted.*", - ".*DagName:.*", - ".*DagId:.*", - ".*Input:.*/data/files/.*", - ".*Output:.*/data/files/.*", - ".*total number of created files now is.*", - ".*.hive-staging.*", "pk_-?[0-9]*_[0-9]*_[0-9]*", "fk_-?[0-9]*_[0-9]*_[0-9]*", "uk_-?[0-9]*_[0-9]*_[0-9]*", "nn_-?[0-9]*_[0-9]*_[0-9]*", // not null constraint name "dc_-?[0-9]*_[0-9]*_[0-9]*", // default constraint name - ".*at com\\.sun\\.proxy.*", - ".*at com\\.jolbox.*", - ".*at com\\.zaxxer.*", "org\\.apache\\.hadoop\\.hive\\.metastore\\.model\\.MConstraint@([0-9]|[a-z])*", - "^Repair: Added partition to metastore.*", - "^latestOffsets.*", - "^minimumLag.*" }); + // Using patterns for matching the whole line can take a long time, therefore we should try to avoid it + // in case of really long lines trying to match a .*some string.* may take up to 4 seconds each! + + // Using String.startsWith instead of pattern, as it is much faster + private final String[] maskIfStartsWith = new String[] { + "Deleted", + "Repair: Added partition to metastore", + "latestOffsets", + "minimumLag" + }; + + // Using String.contains instead of pattern, as it is much faster + private final String[] maskIfContains = new String[] { + "file:", + "pfile:", + "/tmp/", + "invalidscheme:", + "lastUpdateTime", + "lastAccessTime", + "lastModifiedTim", + "Owner", + "owner", + "CreateTime", + "LastAccessTime", + "Location", + "LOCATION '", + "transient_lastDdlTime", + "last_modified_", + "at org", + "at sun", + "at java", + "at junit", + "Caused by:", + "LOCK_QUERYID:", + "LOCK_TIME:", + "grantTime", + "job_", + "USING 'java -cp", + "DagName:", + "DagId:", + "total number of created files now is", + "hive-staging", + "at com.sun.proxy", + "at com.jolbox", + "at com.zaxxer" + }; + + // Using String.contains instead of pattern, as it is much faster + private final String[][] maskIfContainsMultiple = new String[][] { +{"Input:", "/data/files/"}, 
+{"Output:", "/data/files/"} + }; + private final QTestReplaceHandler
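The HIVE-23139 patch above replaces whole-line `.*some string.*` regexes with plain startsWith/contains checks, because such patterns can backtrack for seconds on very long q-test output lines. The dispatch logic can be sketched as below; `shouldMask` and the trimmed-down mask lists are hypothetical names standing in for QOutProcessor's internals.

```java
public class MaskCheck {
    // Small excerpts of the patch's three mask lists, for illustration.
    static final String[] MASK_IF_STARTS_WITH = { "Deleted", "latestOffsets" };
    static final String[] MASK_IF_CONTAINS = { "file:", "transient_lastDdlTime" };
    // A line is masked only if it contains every fragment of a row.
    static final String[][] MASK_IF_CONTAINS_ALL = { { "Input:", "/data/files/" } };

    // Constant-pattern checks: linear scans instead of regex backtracking.
    public static boolean shouldMask(String line) {
        for (String prefix : MASK_IF_STARTS_WITH) {
            if (line.startsWith(prefix)) {
                return true;
            }
        }
        for (String fragment : MASK_IF_CONTAINS) {
            if (line.contains(fragment)) {
                return true;
            }
        }
        for (String[] fragments : MASK_IF_CONTAINS_ALL) {
            boolean all = true;
            for (String fragment : fragments) {
                if (!line.contains(fragment)) {
                    all = false;
                    break;
                }
            }
            if (all) {
                return true;
            }
        }
        return false;
    }
}
```

Patterns that genuinely need regex features (the constraint-name shapes like `pk_-?[0-9]*_[0-9]*_[0-9]*`) stay in the `planMask` array; only the fixed-string ones move to the cheap checks.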
[hive] branch master updated: HIVE-23136 Do not compare q test result if test.output.overwrite is specified (Miklos Gergely, reviewed by Laszlo Bodor)

This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new 2de6ae8  HIVE-23136 Do not compare q test result if test.output.overwrite is specified (Miklos Gergely, reviewed by Laszlo Bodor)
2de6ae8 is described below

commit 2de6ae874c71d7cd8d7b25bd1f3025d53715d127
Author: miklosgergely
AuthorDate: Sat Apr 4 10:05:11 2020 +0200

    HIVE-23136 Do not compare q test result if test.output.overwrite is specified (Miklos Gergely, reviewed by Laszlo Bodor)
---
 itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
index ffc0b2f..bebc37e 100644
--- a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
+++ b/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java
@@ -934,16 +934,14 @@ public class QTestUtil {
     String outFileName = outPath(outDir, tname + outFileExtension);
     File f = new File(logDir, tname + outFileExtension);
-
     qOutProcessor.maskPatterns(f.getPath(), tname);
-    QTestProcessExecResult exitVal = qTestResultProcessor.executeDiffCommand(f.getPath(), outFileName, false, tname);
 
     if (QTestSystemProperties.shouldOverwriteResults()) {
       qTestResultProcessor.overwriteResults(f.getPath(), outFileName);
       return QTestProcessExecResult.createWithoutOutput(0);
+    } else {
+      return qTestResultProcessor.executeDiffCommand(f.getPath(), outFileName, false, tname);
     }
-
-    return exitVal;
   }

  public QTestProcessExecResult checkCompareCliDriverResults(String tname, List outputs) throws Exception {
[hive] branch master updated: HIVE-23131 Remove ql/src/test/results/clientnegative/orc_type_promotion3_acid.q (Miklos Gergely, reviewed by Laszlo Bodor)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new d676dfb  HIVE-23131 Remove ql/src/test/results/clientnegative/orc_type_promotion3_acid.q (Miklos Gergely, reviewed by Laszlo Bodor)
d676dfb is described below

commit d676dfbc0d1e4db61425fe024004ef8a229bc697
Author: miklosgergely
AuthorDate: Thu Apr 2 21:49:19 2020 +0200

    HIVE-23131 Remove ql/src/test/results/clientnegative/orc_type_promotion3_acid.q (Miklos Gergely, reviewed by Laszlo Bodor)
---
 .../results/clientnegative/orc_type_promotion3_acid.q | 18 --
 1 file changed, 18 deletions(-)

diff --git a/ql/src/test/results/clientnegative/orc_type_promotion3_acid.q b/ql/src/test/results/clientnegative/orc_type_promotion3_acid.q
deleted file mode 100644
index bd33c6c..000
--- a/ql/src/test/results/clientnegative/orc_type_promotion3_acid.q
+++ /dev/null
@@ -1,18 +0,0 @@
-PREHOOK: query: -- Currently, double to smallint conversion is not supported because it isn't in the lossless
--- TypeIntoUtils.implicitConvertible conversions.
-create table src_orc (key double, val string) clustered by (val) into 2 buckets stored as orc TBLPROPERTIES ('transactional'='true')
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@src_orc
-POSTHOOK: query: -- Currently, double to smallint conversion is not supported because it isn't in the lossless
--- TypeIntoUtils.implicitConvertible conversions.
-create table src_orc (key double, val string) clustered by (val) into 2 buckets stored as orc TBLPROPERTIES ('transactional'='true')
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@src_orc
-PREHOOK: query: alter table src_orc change key key smallint
-PREHOOK: type: ALTERTABLE_RENAMECOL
-PREHOOK: Input: default@src_orc
-PREHOOK: Output: default@src_orc
-FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. The following columns have types incompatible with the existing columns in their respective positions :
-key
[hive] branch master updated: HIVE-22566 Drop table involved in materialized view leaves the table in inconsistent state (Pablo Junge, reviewed by Miklos Gergely)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new c3afb57 HIVE-22566 Drop table involved in materialized view leaves the table in inconsistent state (Pablo Junge, reviewed by Miklos Gergely) c3afb57 is described below commit c3afb57bdb1041f566fbbd896f625328fc9656a0 Author: Pablo Junge AuthorDate: Thu Apr 2 16:34:13 2020 +0200 HIVE-22566 Drop table involved in materialized view leaves the table in inconsistent state (Pablo Junge, reviewed by Miklos Gergely) --- .../hcatalog/listener/DummyRawStoreFailEvent.java | 5 ++ .../org/apache/hadoop/hive/ql/metadata/Hive.java | 7 --- .../clientnegative/drop_table_used_by_mv2.q| 12 .../clientnegative/drop_table_used_by_mv.q.out | 3 +- .../clientnegative/drop_table_used_by_mv2.q.out| 72 ++ .../hadoop/hive/metastore/HiveMetaStore.java | 9 +++ .../apache/hadoop/hive/metastore/ObjectStore.java | 38 .../org/apache/hadoop/hive/metastore/RawStore.java | 9 +++ .../hadoop/hive/metastore/cache/CachedStore.java | 4 ++ .../metastore/DummyRawStoreControlledCommit.java | 6 ++ .../metastore/DummyRawStoreForJdoConnection.java | 5 ++ 11 files changed, 162 insertions(+), 8 deletions(-) diff --git a/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java b/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java index 26c4937..4984138 100644 --- a/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java +++ b/itests/hcatalog-unit/src/test/java/org/apache/hive/hcatalog/listener/DummyRawStoreFailEvent.java @@ -270,6 +270,11 @@ public class DummyRawStoreFailEvent implements RawStore, Configurable { } @Override + public List isPartOfMaterializedView(String catName, String dbName, String tblName) { +return 
objectStore.isPartOfMaterializedView(catName, dbName, tblName); + } + + @Override public Table getTable(String catName, String dbName, String tableName) throws MetaException { return objectStore.getTable(catName, dbName, tableName); } diff --git a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java b/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java index 3b0b56d..1f9fb3b 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java @@ -1235,13 +1235,6 @@ public class Hive { if (!ignoreUnknownTab) { throw new HiveException(e); } -} catch (MetaException e) { - int idx = ExceptionUtils.indexOfType(e, SQLIntegrityConstraintViolationException.class); - if (idx != -1 && ExceptionUtils.getThrowables(e)[idx].getMessage().contains("MV_TABLES_USED")) { -throw new HiveException("Cannot drop table since it is used by at least one materialized view definition. " + -"Please drop any materialized view that uses the table before dropping it", e); - } - throw new HiveException(e); } catch (Exception e) { throw new HiveException(e); } diff --git a/ql/src/test/queries/clientnegative/drop_table_used_by_mv2.q b/ql/src/test/queries/clientnegative/drop_table_used_by_mv2.q new file mode 100644 index 000..458cc9e --- /dev/null +++ b/ql/src/test/queries/clientnegative/drop_table_used_by_mv2.q @@ -0,0 +1,12 @@ +create table mytable (key int, value string); +insert into mytable values (1, 'val1'), (2, 'val2'); +create view myview as select * from mytable; + +create materialized view mv1 disable rewrite as +select key, value from myview; +create materialized view mv2 disable rewrite as +select count(*) from myview; + +-- dropping the view is fine, as the MV uses not the view itself, but it's query for creating it's own during it's creation +drop view myview; +drop table mytable; diff --git a/ql/src/test/results/clientnegative/drop_table_used_by_mv.q.out 
b/ql/src/test/results/clientnegative/drop_table_used_by_mv.q.out index 0a20203..5d980c1 100644 --- a/ql/src/test/results/clientnegative/drop_table_used_by_mv.q.out +++ b/ql/src/test/results/clientnegative/drop_table_used_by_mv.q.out @@ -32,4 +32,5 @@ PREHOOK: query: drop table mytable PREHOOK: type: DROPTABLE PREHOOK: Input: default@mytable PREHOOK: Output: default@mytable -FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.ddl.DDLTask. Cannot drop table since it is used by at least one materialized view definition. Please drop any materialized view that uses the table before dropping it +FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.ddl.DDLTask. MetaException(message:Cannot drop table as it is used in the following materialized views [default.mv
[hive] branch master updated: HIVE-23063 Use the same PerfLogger all over Compiler (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 2c0080b HIVE-23063 Use the same PerfLogger all over Compiler (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) 2c0080b is described below commit 2c0080bcbb23d698b989e79be4b518a6f7d93e13 Author: miklosgergely AuthorDate: Sat Mar 21 01:21:22 2020 +0100 HIVE-23063 Use the same PerfLogger all over Compiler (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) --- ql/src/java/org/apache/hadoop/hive/ql/Compiler.java | 17 + 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/Compiler.java b/ql/src/java/org/apache/hadoop/hive/ql/Compiler.java index a559d90..aa42fd5 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/Compiler.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/Compiler.java @@ -75,6 +75,7 @@ public class Compiler { private final Context context; private final DriverContext driverContext; private final DriverState driverState; + private final PerfLogger perfLogger = SessionState.getPerfLogger(); private ASTNode tree; @@ -123,7 +124,7 @@ public class Compiler { } private void initialize(String rawCommand) throws CommandProcessorException { -SessionState.getPerfLogger().PerfLogBegin(CLASS_NAME, PerfLogger.COMPILE); +perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.COMPILE); driverState.compilingWithLocking(); VariableSubstitution variableSubstitution = new VariableSubstitution(new HiveVariableSource() { @@ -157,7 +158,7 @@ public class Compiler { } private void parse() throws ParseException { -SessionState.getPerfLogger().PerfLogBegin(CLASS_NAME, PerfLogger.PARSE); +perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.PARSE); // Trigger query hook before compilation driverContext.getHookRunner().runBeforeParseHook(context.getCmd()); @@ -169,11 +170,11 @@ public class Compiler { } 
finally { driverContext.getHookRunner().runAfterParseHook(context.getCmd(), !success); } -SessionState.getPerfLogger().PerfLogEnd(CLASS_NAME, PerfLogger.PARSE); +perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.PARSE); } private BaseSemanticAnalyzer analyze() throws Exception { -SessionState.getPerfLogger().PerfLogBegin(CLASS_NAME, PerfLogger.ANALYZE); +perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.ANALYZE); driverContext.getHookRunner().runBeforeCompileHook(context.getCmd()); @@ -233,7 +234,7 @@ public class Compiler { // validate the plan sem.validate(); -SessionState.getPerfLogger().PerfLogEnd(CLASS_NAME, PerfLogger.ANALYZE); +perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.ANALYZE); return sem; } @@ -399,7 +400,7 @@ public class Compiler { HiveConf.getBoolVar(driverContext.getConf(), HiveConf.ConfVars.HIVE_AUTHORIZATION_ENABLED)) { try { -SessionState.getPerfLogger().PerfLogBegin(CLASS_NAME, PerfLogger.DO_AUTHORIZATION); +perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.DO_AUTHORIZATION); // Authorization check for kill query will be in KillQueryImpl // As both admin or operation owner can perform the operation. // Which is not directly supported in authorizer @@ -410,7 +411,7 @@ public class Compiler { CONSOLE.printError("Authorization failed:" + authExp.getMessage() + ". 
Use SHOW GRANT to get more details."); throw DriverUtils.createProcessorException(driverContext, 403, authExp.getMessage(), "42000", null); } finally { -SessionState.getPerfLogger().PerfLogEnd(CLASS_NAME, PerfLogger.DO_AUTHORIZATION); +perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.DO_AUTHORIZATION); } } } @@ -466,7 +467,7 @@ public class Compiler { } } -double duration = SessionState.getPerfLogger().PerfLogEnd(CLASS_NAME, PerfLogger.COMPILE) / 1000.00; +double duration = perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.COMPILE) / 1000.00; ImmutableMap compileHMSTimings = Hive.dumpMetaCallTimingWithoutEx("compilation"); driverContext.getQueryDisplay().setHmsTimings(QueryDisplay.Phase.COMPILATION, compileHMSTimings);
[hive] branch master updated: HIVE-23059 In constraint name uniqueness query use the MTable instead of it's id (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)

This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new 3f06375  HIVE-23059 In constraint name uniqueness query use the MTable instead of it's id (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
3f06375 is described below

commit 3f063750e98aa1fb6aafbfbdbfc1ee2066ff3d26
Author: miklosgergely
AuthorDate: Fri Mar 20 18:11:18 2020 +0100

    HIVE-23059 In constraint name uniqueness query use the MTable instead of it's id (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
---
 .../src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
index 8a826d2..ffc8607 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java
@@ -4605,10 +4605,10 @@ public class ObjectStore implements RawStore, Configurable {
       constraintName = normalizeIdentifier(constraintName);
       constraintExistsQuery = pm.newQuery(MConstraint.class,
           "parentTable == parentTableP && constraintName == constraintNameP");
-      constraintExistsQuery.declareParameters("java.lang.Long parentTableP, java.lang.String constraintNameP");
+      constraintExistsQuery.declareParameters("MTable parentTableP, java.lang.String constraintNameP");
       constraintExistsQuery.setUnique(true);
       constraintExistsQuery.setResult("constraintName");
-      constraintNameIfExists = (String) constraintExistsQuery.executeWithArray(table.getId(), constraintName);
+      constraintNameIfExists = (String) constraintExistsQuery.executeWithArray(table, constraintName);
       commited = commitTransaction();
     } finally {
       rollbackAndCleanup(commited, constraintExistsQuery);
[hive] branch master updated: HIVE-22955 PreUpgradeTool can fail because access to CharsetDecoder is not synchronized (Gergely Hanko, reviewed by Miklos Gergely)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new 2c5a109  HIVE-22955 PreUpgradeTool can fail because access to CharsetDecoder is not synchronized (Gergely Hanko, reviewed by Miklos Gergely)
2c5a109 is described below

commit 2c5a1095b1f39cbfdfbcf472f21b11239ad1b49e
Author: Gergely Hanko
AuthorDate: Wed Mar 18 17:56:51 2020 +0100

    HIVE-22955 PreUpgradeTool can fail because access to CharsetDecoder is not synchronized (Gergely Hanko, reviewed by Miklos Gergely)
---
 .../hadoop/hive/upgrade/acid/PreUpgradeTool.java   | 100 ++---
 .../hive/upgrade/acid/TestPreUpgradeTool.java      |  40 +
 2 files changed, 90 insertions(+), 50 deletions(-)

diff --git a/upgrade-acid/pre-upgrade/src/main/java/org/apache/hadoop/hive/upgrade/acid/PreUpgradeTool.java b/upgrade-acid/pre-upgrade/src/main/java/org/apache/hadoop/hive/upgrade/acid/PreUpgradeTool.java
index 5b0ad7c..b72b236 100644
--- a/upgrade-acid/pre-upgrade/src/main/java/org/apache/hadoop/hive/upgrade/acid/PreUpgradeTool.java
+++ b/upgrade-acid/pre-upgrade/src/main/java/org/apache/hadoop/hive/upgrade/acid/PreUpgradeTool.java
@@ -26,6 +26,7 @@ import java.nio.ByteBuffer;
 import java.nio.charset.CharacterCodingException;
 import java.nio.charset.Charset;
 import java.nio.charset.CharsetDecoder;
+import java.nio.charset.StandardCharsets;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collections;
@@ -123,7 +124,7 @@ public class PreUpgradeTool implements AutoCloseable {
   public static void main(String[] args) throws Exception {
     Options cmdLineOptions = createCommandLineOptions();
     CommandLineParser parser = new GnuParser();
-    CommandLine line ;
+    CommandLine line;
     try {
       line = parser.parse(cmdLineOptions, args);
     } catch (ParseException e) {
@@ -149,8 +150,7 @@
     try (PreUpgradeTool tool = new PreUpgradeTool(runOptions)) {
       tool.prepareAcidUpgradeInternal();
     }
-    }
-    catch(Exception ex) {
+    } catch(Exception ex) {
       LOG.error("PreUpgradeTool failed", ex);
       throw ex;
     }
@@ -230,8 +230,7 @@
       cmdLineOptions.addOption(tablePoolSizeOption);
       return cmdLineOptions;
-    }
-    catch(Exception ex) {
+    } catch(Exception ex) {
       LOG.error("init()", ex);
       throw ex;
     }
@@ -278,7 +277,7 @@
     }
   }

-  /**
+  /*
   * todo: change script comments to a preamble instead of a footer
   */
   private void prepareAcidUpgradeInternal()
@@ -328,29 +327,29 @@
       final String state = e.getState();
       boolean removed;
       switch (state) {
-        case TxnStore.CLEANING_RESPONSE:
-        case TxnStore.SUCCEEDED_RESPONSE:
-          removed = compactTablesState.getMetaInfo().getCompactionIds().remove(e.getId());
-          if(removed) {
-            LOG.debug("Required compaction succeeded: " + e.toString());
-          }
-          break;
-        case TxnStore.ATTEMPTED_RESPONSE:
-        case TxnStore.FAILED_RESPONSE:
-          removed = compactTablesState.getMetaInfo().getCompactionIds().remove(e.getId());
-          if(removed) {
-            LOG.warn("Required compaction failed: " + e.toString());
-          }
-          break;
-        case TxnStore.INITIATED_RESPONSE:
-          //may flood the log
-          //LOG.debug("Still waiting on: " + e.toString());
-          break;
-        case TxnStore.WORKING_RESPONSE:
-          LOG.debug("Still working on: " + e.toString());
-          break;
-        default://shouldn't be any others
-          LOG.error("Unexpected state for : " + e.toString());
+        case TxnStore.CLEANING_RESPONSE:
+        case TxnStore.SUCCEEDED_RESPONSE:
+          removed = compactTablesState.getMetaInfo().getCompactionIds().remove(e.getId());
+          if(removed) {
+            LOG.debug("Required compaction succeeded: " + e.toString());
+          }
+          break;
+        case TxnStore.ATTEMPTED_RESPONSE:
+        case TxnStore.FAILED_RESPONSE:
+          removed = compactTablesState.getMetaInfo().getCompactionIds().remove(e.getId());
+          if(removed) {
+            LOG.warn("Required compaction failed: " + e.toString());
+          }
+          break;
+        case TxnStore.INITIATED_RESPONSE:
+          //may flood the log
+          //LOG.debug("Still waiting on: " + e.toString());
+          break;
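The race HIVE-22955 fixes comes from `CharsetDecoder` keeping internal state between calls, so one instance shared across threads can corrupt output or throw. A minimal sketch of one common remedy, giving each thread its own decoder via `ThreadLocal` (the actual PreUpgradeTool fix may differ in detail):

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.StandardCharsets;

public class DecoderPerThread {
    // CharsetDecoder keeps internal state, so a single shared instance must not
    // be used from several threads at once. A per-thread decoder avoids both
    // the race and the cost of a synchronized block.
    private static final ThreadLocal<CharsetDecoder> DECODER =
        ThreadLocal.withInitial(StandardCharsets.UTF_8::newDecoder);

    static String decode(byte[] bytes) throws CharacterCodingException {
        // The convenience decode(ByteBuffer) resets the decoder before use.
        return DECODER.get().decode(ByteBuffer.wrap(bytes)).toString();
    }

    public static void main(String[] args) throws Exception {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    try {
                        String s = decode("hello".getBytes(StandardCharsets.UTF_8));
                        if (!s.equals("hello")) {
                            throw new IllegalStateException("corrupt decode: " + s);
                        }
                    } catch (CharacterCodingException e) {
                        throw new RuntimeException(e);
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("all threads decoded successfully");
    }
}
```

A shared decoder without synchronization can fail intermittently under the same load, which is exactly the kind of flaky failure the commit title describes.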
[hive] branch master updated: HIVE-23013 Fix UnitTestPropertiesParser creation log message (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new 9be51fd  HIVE-23013 Fix UnitTestPropertiesParser creation log message (Miklos Gergely, reviewed by Zoltan Haindrich)
9be51fd is described below

commit 9be51fdaa5a72300d6f3abe7570b348fcca9c9a3
Author: miklosgergely
AuthorDate: Wed Mar 11 14:37:07 2020 +0100

    HIVE-23013 Fix UnitTestPropertiesParser creation log message (Miklos Gergely, reviewed by Zoltan Haindrich)
---
 .../org/apache/hive/ptest/execution/conf/UnitTestPropertiesParser.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/conf/UnitTestPropertiesParser.java b/testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/conf/UnitTestPropertiesParser.java
index 490c23b..d110cec 100644
--- a/testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/conf/UnitTestPropertiesParser.java
+++ b/testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/conf/UnitTestPropertiesParser.java
@@ -90,7 +90,7 @@ class UnitTestPropertiesParser {
       File sourceDirectory, Logger logger, FileListProvider fileListProvider,
       Set excludedProvided, boolean inTest) {
-    logger.info("{} created with sourceDirectory={}, testCasePropertyName={}, excludedProvide={}",
+    logger.info("{} created with sourceDirectory={}, testCasePropertyName={}, excludedProvide={}" +
         "fileListProvider={}, inTest={}",
         UnitTestPropertiesParser.class.getSimpleName(), sourceDirectory, testCasePropertyName, excludedProvided,
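The one-character fix above restores a dropped string concatenation, so the SLF4J-style format string again carries one `{}` placeholder per logged argument. A toy placeholder counter (not SLF4J itself, just an illustration) shows what the missing `+` cost:

```java
public class PlaceholderCheck {
    // Counts "{}" placeholders in an SLF4J-style format string. This is a toy
    // helper to show why a dropped string concatenation leaves more arguments
    // than placeholders: the extra arguments are silently never printed.
    static int placeholders(String format) {
        int count = 0;
        for (int i = format.indexOf("{}"); i >= 0; i = format.indexOf("{}", i + 2)) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Before the fix the second half of the message string was not joined
        // onto the first, so only four placeholders existed for six arguments.
        String broken = "{} created with sourceDirectory={}, testCasePropertyName={}, excludedProvide={}";
        String fixed  = broken + " fileListProvider={}, inTest={}";
        System.out.println(placeholders(broken)); // 4
        System.out.println(placeholders(fixed));  // 6
    }
}
```

SLF4J itself does not warn at runtime about surplus arguments, which is why a bug like this only shows up as an incomplete log line.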
[hive] branch master updated: HIVE-22972 Allow table id to be set for table creation requests (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new f148293  HIVE-22972 Allow table id to be set for table creation requests (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
f148293 is described below

commit f148293e58d397b31676bc1dae8792f49b8fc3cb
Author: miklosgergely
AuthorDate: Wed Mar 4 07:26:30 2020 +0100

    HIVE-22972 Allow table id to be set for table creation requests (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
---
 .../java/org/apache/hadoop/hive/metastore/HiveMetaStore.java     | 6 +++---
 .../java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java | 9 +
 2 files changed, 4 insertions(+), 11 deletions(-)

diff --git a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
index 662a098..de3c44b 100644
--- a/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
+++ b/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
@@ -2040,9 +2040,9 @@ public class HiveMetaStore extends ThriftHiveMetastore {
         }
       }
       if (tbl.isSetId()) {
-        throw new InvalidObjectException("Id shouldn't be set but table "
-            + tbl.getDbName() + "." + tbl.getTableName() + " has the Id set to "
-            + tbl.getId() + ". It's a read-only option");
+        LOG.debug("Id shouldn't be set but table {}.{} has the Id set to {}. Id is ignored.", tbl.getDbName(),
+            tbl.getTableName(), tbl.getId());
+        tbl.unsetId();
       }
       SkewedInfo skew = tbl.getSd().getSkewedInfo();
       if (skew != null) {
diff --git a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
index 40a4ef6..5f85165 100644
--- a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
+++ b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
@@ -1850,14 +1850,7 @@ public abstract class TestHiveMetaStore {
         .addCol("bar", "string")
         .build(conf);
     table.setId(1);
-    try {
-      client.createTable(table);
-      Assert.fail("An error should happen when setting the id"
-          + " to create a table");
-    } catch (InvalidObjectException e) {
-      Assert.assertTrue(e.getMessage().contains("Id shouldn't be set"));
-      Assert.assertTrue(e.getMessage().contains(tblName));
-    }
+    client.createTable(table);
   }

   @Test
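The behavioral change in HIVE-22972: instead of rejecting a create-table request whose read-only id field is set, the metastore now logs the fact and clears the id. A simplified sketch of that "sanitize instead of reject" pattern, using a hypothetical `Table` class rather than the real Thrift object:

```java
public class CreateTableRequest {
    // Hypothetical stand-in for the Thrift Table object; only the fields
    // needed to illustrate the pattern are modeled here.
    static final class Table {
        Long id;           // read-only field, assigned by the metastore
        final String name;
        Table(String name) { this.name = name; }
        boolean isSetId() { return id != null; }
        void unsetId() { id = null; }
    }

    static void create(Table tbl) {
        if (tbl.isSetId()) {
            // Previously this threw InvalidObjectException and failed the
            // whole request; now the client-supplied id is simply dropped.
            System.out.println("Id shouldn't be set but table " + tbl.name
                + " has the Id set to " + tbl.id + ". Id is ignored.");
            tbl.unsetId();
        }
        // ... creation proceeds; the store would assign the real id here ...
    }

    public static void main(String[] args) {
        Table t = new Table("default.demo");
        t.id = 1L;
        create(t);
        System.out.println("id after create request: " + t.id); // null
    }
}
```

This is why the test in the diff shrinks from a try/catch asserting an exception to a single `client.createTable(table)` call that is expected to succeed.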
[hive] branch master updated: HIVE-22907 Break up DDLSemanticAnalyzer - extract the rest of the Alter Table analyzers (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new 3bed626  HIVE-22907 Break up DDLSemanticAnalyzer - extract the rest of the Alter Table analyzers (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
3bed626 is described below

commit 3bed626d5b6a7bab3659bb0422c67b4168935ee6
Author: miklosgergely
AuthorDate: Tue Feb 18 19:43:47 2020 +0100

    HIVE-22907 Break up DDLSemanticAnalyzer - extract the rest of the Alter Table analyzers (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
---
 .../ql/ddl/table/AbstractAlterTableAnalyzer.java   | 136 +---
 .../ql/ddl/table/AbstractAlterTableOperation.java  |   4 +-
 ...er.java => AbstractBaseAlterTableAnalyzer.java} |  35 +-
 .../drop/AlterTableDropConstraintOperation.java    |   4 +-
 .../hive/ql/ddl/table/drop/DropTableOperation.java |   2 +-
 .../AlterTableUpdateColumnStatistictAnalyzer.java  |  92 +++
 .../misc/owner/AlterTableSetOwnerAnalyzer.java     |  60 ++
 .../misc/{ => owner}/AlterTableSetOwnerDesc.java   |   2 +-
 .../{ => owner}/AlterTableSetOwnerOperation.java   |   2 +-
 .../misc/{ => preinsert}/PreInsertTableDesc.java   |   2 +-
 .../{ => preinsert}/PreInsertTableOperation.java   |   2 +-
 .../AbstractAlterTablePropertiesAnalyzer.java      | 146
 .../AbstractAlterTableSetPropertiesAnalyzer.java   |  50 ++
 .../AbstractAlterTableUnsetPropertiesAnalyzer.java |  61 ++
 .../AlterTableSetPropertiesAnalyzer.java}          |  33 +-
 .../AlterTableSetPropertiesDesc.java               |   2 +-
 .../AlterTableSetPropertiesOperation.java          |   2 +-
 .../AlterTableUnsetPropertiesAnalyzer.java}        |  32 +-
 .../AlterTableUnsetPropertiesDesc.java             |   2 +-
 .../AlterTableUnsetPropertiesOperation.java        |   2 +-
 .../rename/AbstractAlterTableRenameAnalyzer.java   |  56 ++
 .../AlterTableRenameAnalyzer.java}                 |  32 +-
 .../misc/{ => rename}/AlterTableRenameDesc.java    |   2 +-
 .../{ => rename}/AlterTableRenameOperation.java    |   8 +-
 .../table/misc/touch/AlterTableTouchAnalyzer.java  |  72 ++
 .../misc/{ => touch}/AlterTableTouchDesc.java      |   2 +-
 .../misc/{ => touch}/AlterTableTouchOperation.java |   2 +-
 .../table/misc/truncate/TruncateTableAnalyzer.java | 326
 .../misc/{ => truncate}/TruncateTableDesc.java     |   2 +-
 .../{ => truncate}/TruncateTableOperation.java     |   2 +-
 .../drop/AlterTableDropPartitionOperation.java     |   2 +-
 .../archive/AlterTableArchiveOperation.java        |   6 +-
 .../archive/AlterTableUnarchiveOperation.java      |   6 +-
 .../compact/AlterTableCompactOperation.java        |   1 -
 .../hive/ql/ddl/view/drop/DropViewOperation.java   |   2 +-
 .../AlterMaterializedViewRewriteAnalyzer.java      |   4 +-
 .../drop/DropMaterializedViewOperation.java        |   2 +-
 .../AlterViewSetPropertiesAnalyzer.java}           |  33 +-
 .../AlterViewUnsetPropertiesAnalyzer.java}         |  33 +-
 .../rename/AlterViewRenameAnalyzer.java}           |  33 +-
 .../incremental/IncrementalLoadTasksBuilder.java   |   2 +-
 .../hadoop/hive/ql/exec/repl/util/ReplUtils.java   |   2 +-
 .../hive/ql/parse/AcidExportSemanticAnalyzer.java  |   4 +-
 .../hadoop/hive/ql/parse/BaseSemanticAnalyzer.java |  63 +-
 .../hadoop/hive/ql/parse/DDLSemanticAnalyzer.java  | 818 -
 .../hadoop/hive/ql/parse/SemanticAnalyzer.java     |   6 +-
 .../hive/ql/parse/SemanticAnalyzerFactory.java     |  19 -
 .../HiveAuthorizationTaskFactoryImpl.java          |   3 +-
 .../repl/load/message/RenameTableHandler.java      |   2 +-
 .../load/message/TruncatePartitionHandler.java     |   2 +-
 .../repl/load/message/TruncateTableHandler.java    |   2 +-
 .../parse/authorization/AuthorizationTestUtil.java |   4 -
 ...bleprops_external_with_default_constraint.q.out |   2 +-
 ...bleprops_external_with_notnull_constraint.q.out |   2 +-
 54 files changed, 1056 insertions(+), 1172 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/AbstractAlterTableAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/AbstractAlterTableAnalyzer.java
index 0acd501..6a28ef0 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/AbstractAlterTableAnalyzer.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/AbstractAlterTableAnalyzer.java
@@ -21,31 +21,17 @@ package org.apache.hadoop.hive.ql.ddl.table;

 import java.util.Map;

 import org.apache.hadoop.hive.common.TableName;
-import org.apache.hadoop.hive.conf.HiveConf;
-import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants;
 import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils;
-import org.apache.hadoop.hive.ql.ErrorMsg;
 import org.apache.hadoop.hive.ql.QueryState;
-import org.apache.hadoop.hive.ql.ddl.DD
[hive] branch master updated (fd08239 -> 2705e93)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git.

from fd08239  HIVE-22881: Revise non-recommended Calcite api calls (Zoltan Haindrich reviewed by Jesus Camacho Rodriguez)
 add 2705e93  HIVE-22897 Remove enforcing of package-info.java files from the rest of the checkstyle files (Miklos Gergely, reviewed by Peter Vary)

No new revisions were added by this update.

Summary of changes:
 standalone-metastore/checkstyle/checkstyle.xml | 3 ---
 storage-api/checkstyle/checkstyle.xml          | 3 ---
 2 files changed, 6 deletions(-)
[hive] branch master updated: HIVE-22868 Extract ValidTxnManager from Driver (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new a742593  HIVE-22868 Extract ValidTxnManager from Driver (Miklos Gergely, reviewed by Zoltan Haindrich)
a742593 is described below

commit a742593be8d39e5689b5501af9a0f915ee733661
Author: miklosgergely
AuthorDate: Tue Feb 11 11:38:37 2020 +0100

    HIVE-22868 Extract ValidTxnManager from Driver (Miklos Gergely, reviewed by Zoltan Haindrich)
---
 ql/src/java/org/apache/hadoop/hive/ql/Driver.java  | 244 ++-
 .../org/apache/hadoop/hive/ql/ValidTxnManager.java | 265 +
 2 files changed, 279 insertions(+), 230 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java b/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
index 1f8bc12..48ebc4f 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
@@ -20,14 +20,9 @@ package org.apache.hadoop.hive.ql;

 import java.io.IOException;
 import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.HashSet;
 import java.util.LinkedList;
 import java.util.List;
-import java.util.Map;
 import java.util.Queue;
-import java.util.Set;
-import java.util.stream.Collectors;

 import org.apache.hadoop.conf.Configurable;
 import org.apache.hadoop.fs.FSDataInputStream;
@@ -42,11 +37,7 @@ import org.apache.hadoop.hive.common.metrics.common.MetricsFactory;
 import org.apache.hadoop.hive.conf.Constants;
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
-import org.apache.hadoop.hive.metastore.api.LockComponent;
-import org.apache.hadoop.hive.metastore.api.LockType;
 import org.apache.hadoop.hive.metastore.api.Schema;
-import org.apache.hadoop.hive.metastore.api.TxnType;
-import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils;
 import org.apache.hadoop.hive.ql.cache.results.CacheUsage;
 import org.apache.hadoop.hive.ql.cache.results.QueryResultsCache;
 import org.apache.hadoop.hive.ql.ddl.DDLDesc.DDLDescWithWriteId;
@@ -54,19 +45,13 @@ import org.apache.hadoop.hive.ql.exec.AbstractFileMergeOperator;
 import org.apache.hadoop.hive.ql.exec.ConditionalTask;
 import org.apache.hadoop.hive.ql.exec.ExplainTask;
 import org.apache.hadoop.hive.ql.exec.FetchTask;
-import org.apache.hadoop.hive.ql.exec.Operator;
-import org.apache.hadoop.hive.ql.exec.TableScanOperator;
 import org.apache.hadoop.hive.ql.exec.Task;
 import org.apache.hadoop.hive.ql.exec.TaskFactory;
 import org.apache.hadoop.hive.ql.exec.Utilities;
 import org.apache.hadoop.hive.ql.exec.spark.session.SparkSession;
-import org.apache.hadoop.hive.ql.hooks.Entity;
-import org.apache.hadoop.hive.ql.io.AcidUtils;
 import org.apache.hadoop.hive.ql.lock.CompileLock;
 import org.apache.hadoop.hive.ql.lock.CompileLockFactory;
-import org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
 import org.apache.hadoop.hive.ql.lockmgr.HiveLock;
-import org.apache.hadoop.hive.ql.lockmgr.HiveLockMode;
 import org.apache.hadoop.hive.ql.lockmgr.HiveTxnManager;
 import org.apache.hadoop.hive.ql.lockmgr.LockException;
 import org.apache.hadoop.hive.ql.log.PerfLogger;
@@ -91,7 +76,6 @@ import org.apache.hadoop.hive.ql.wm.WmContext;
 import org.apache.hadoop.hive.serde2.ByteStream;
 import org.apache.hadoop.util.StringUtils;
 import org.apache.hive.common.util.ShutdownHookManager;
-import org.apache.hive.common.util.TxnIdUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -109,12 +93,13 @@ public class Driver implements IDriver {
   private int maxRows = 100;
   private ByteStream.Output bos = new ByteStream.Output();

-  private Context context;
   private final DriverContext driverContext;
-  private TaskQueue taskQueue;
+  private final DriverState driverState = new DriverState();
   private final List hiveLocks = new ArrayList();
+  private final ValidTxnManager validTxnManager;

-  private DriverState driverState = new DriverState();
+  private Context context;
+  private TaskQueue taskQueue;

   @Override
   public Schema getSchema() {
@@ -158,11 +143,6 @@ public class Driver implements IDriver {
     this(queryState, queryInfo, null);
   }

-  public Driver(QueryState queryState, QueryInfo queryInfo, HiveTxnManager txnManager) {
-    driverContext = new DriverContext(queryState, queryInfo, new HookRunner(queryState.getConf(), CONSOLE),
-        txnManager);
-  }
-
   public Driver(QueryState queryState, QueryInfo queryInfo, HiveTxnManager txnManager,
      ValidWriteIdList compactionWriteIds, long compactorTxnId) {
     this(queryState, queryInfo, txnManager);
@@ -170,6 +150,12 @@
     driverContext.setCompactorTxnId(compactorTxnId);
   }

+  public Driver(QueryState queryState, QueryInfo queryInfo, HiveTxnManager txnManager) {
+    driverContext
[hive] branch master updated: HIVE-22728 Limit the scope of uniqueness of constraint name to database (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new 40921bd  HIVE-22728 Limit the scope of uniqueness of constraint name to database (Miklos Gergely, reviewed by Zoltan Haindrich)
40921bd is described below

commit 40921bdd6b49dc04842614701a0d95acff356c79
Author: miklosgergely
AuthorDate: Mon Feb 10 14:27:18 2020 +0100

    HIVE-22728 Limit the scope of uniqueness of constraint name to database (Miklos Gergely, reviewed by Zoltan Haindrich)
---
 .../upgrade/hive/hive-schema-4.0.0.hive.sql        |   2 +-
 .../clientnegative/constraint_duplicate_name.q     |   2 -
 .../create_with_constraints_duplicate_name.q       |  13 +-
 .../clientnegative/constraint_duplicate_name.q.out |  13 --
 .../create_with_constraints_duplicate_name.q.out   |  58 +--
 .../hadoop/hive/metastore/HiveMetaStore.java       |   2 +-
 .../apache/hadoop/hive/metastore/ObjectStore.java  | 164 ++-
 .../hadoop/hive/metastore/model/MConstraint.java   | 178 +++--
 .../apache/hadoop/hive/metastore/model/MTable.java |  31
 .../src/main/resources/package.jdo                 |   6 +-
 .../src/main/sql/derby/hive-schema-4.0.0.derby.sql |  18 ++-
 .../sql/derby/upgrade-3.2.0-to-4.0.0.derby.sql     |   5 +-
 .../src/main/sql/mssql/hive-schema-4.0.0.mssql.sql |   4 +-
 .../sql/mssql/upgrade-3.2.0-to-4.0.0.mssql.sql     |   4 +
 .../src/main/sql/mysql/hive-schema-4.0.0.mysql.sql |   2 +-
 .../sql/mysql/upgrade-3.2.0-to-4.0.0.mysql.sql     |   4 +
 .../main/sql/oracle/hive-schema-4.0.0.oracle.sql   |   2 +-
 .../sql/oracle/upgrade-3.2.0-to-4.0.0.oracle.sql   |   5 +
 .../sql/postgres/hive-schema-4.0.0.postgres.sql    |   2 +-
 .../postgres/upgrade-3.2.0-to-4.0.0.postgres.sql   |   8 +-
 20 files changed, 321 insertions(+), 202 deletions(-)

diff --git a/metastore/scripts/upgrade/hive/hive-schema-4.0.0.hive.sql b/metastore/scripts/upgrade/hive/hive-schema-4.0.0.hive.sql
index e3f5eb9..fde6f02 100644
--- a/metastore/scripts/upgrade/hive/hive-schema-4.0.0.hive.sql
+++ b/metastore/scripts/upgrade/hive/hive-schema-4.0.0.hive.sql
@@ -900,7 +900,7 @@ CREATE EXTERNAL TABLE IF NOT EXISTS `KEY_CONSTRAINTS`
   `DELETE_RULE` string,
   `ENABLE_VALIDATE_RELY` int,
   `DEFAULT_VALUE` string,
-  CONSTRAINT `SYS_PK_KEY_CONSTRAINTS` PRIMARY KEY (`CONSTRAINT_NAME`, `POSITION`) DISABLE
+  CONSTRAINT `SYS_PK_KEY_CONSTRAINTS` PRIMARY KEY (`PARENT_TBL_ID`, `CONSTRAINT_NAME`, `POSITION`) DISABLE
 )
 STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler'
 TBLPROPERTIES (
diff --git a/ql/src/test/queries/clientnegative/constraint_duplicate_name.q b/ql/src/test/queries/clientnegative/constraint_duplicate_name.q
deleted file mode 100644
index 2b7429d..000
--- a/ql/src/test/queries/clientnegative/constraint_duplicate_name.q
+++ /dev/null
@@ -1,2 +0,0 @@
-create table t(i int constraint c1 not null enable);
-create table t1(j int constraint c1 default 4);
diff --git a/ql/src/test/queries/clientnegative/create_with_constraints_duplicate_name.q b/ql/src/test/queries/clientnegative/create_with_constraints_duplicate_name.q
index a0bc7f6..79d8d1a 100644
--- a/ql/src/test/queries/clientnegative/create_with_constraints_duplicate_name.q
+++ b/ql/src/test/queries/clientnegative/create_with_constraints_duplicate_name.q
@@ -1,2 +1,11 @@
-create table t1(x int, constraint pk1 primary key (x) disable);
-create table t2(x int, constraint pk1 primary key (x) disable);
+create database db1;
+use db1;
+create table t1(x int, constraint constraint_name primary key (x) disable);
+
+-- same constraint name in different db or different table is valid, thus only the foreign key creation should fail
+create database db2;
+use db2;
+create table t1(x int, constraint constraint_name primary key (x) disable);
+create table t2(x int, constraint constraint_name primary key (x) disable);
+
+alter table t1 add constraint constraint_name foreign key (x) references t2(x) disable novalidate rely;
diff --git a/ql/src/test/results/clientnegative/constraint_duplicate_name.q.out b/ql/src/test/results/clientnegative/constraint_duplicate_name.q.out
deleted file mode 100644
index e66e8c1..000
--- a/ql/src/test/results/clientnegative/constraint_duplicate_name.q.out
+++ /dev/null
@@ -1,13 +0,0 @@
-PREHOOK: query: create table t(i int constraint c1 not null enable)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@t
-POSTHOOK: query: create table t(i int constraint c1 not null enable)
-POSTHOOK: type: CREATETABLE
-POSTHOOK: Output: database:default
-POSTHOOK: Output: default@t
-PREHOOK: query: create table t1(j int constraint c1 default 4)
-PREHOOK: type: CREATETABLE
-PREHOOK: Output: database:default
-PREHOOK: Output: default@t1
-FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.ddl.DDLTask
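The schema change above makes `(PARENT_TBL_ID, CONSTRAINT_NAME, POSITION)` the primary key, so a constraint name only has to be unique within its parent table rather than globally. A hypothetical in-memory sketch of that uniqueness rule (not Hive's actual metastore logic):

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

public class ConstraintRegistry {
    // Composite key mirroring the new primary key: the name alone no longer
    // has to be unique, only the (parent table, name) pair does.
    static final class Key {
        final String table; // stands in for PARENT_TBL_ID
        final String name;  // stands in for CONSTRAINT_NAME
        Key(String table, String name) { this.table = table; this.name = name; }
        @Override public boolean equals(Object o) {
            return o instanceof Key
                && ((Key) o).table.equals(table) && ((Key) o).name.equals(name);
        }
        @Override public int hashCode() { return Objects.hash(table, name); }
    }

    private final Set<Key> constraints = new HashSet<>();

    // Returns false when the (table, name) pair is already taken.
    boolean add(String table, String constraintName) {
        return constraints.add(new Key(table, constraintName));
    }

    public static void main(String[] args) {
        ConstraintRegistry r = new ConstraintRegistry();
        System.out.println(r.add("db1.t1", "constraint_name")); // true
        System.out.println(r.add("db2.t1", "constraint_name")); // true  - same name, other table
        System.out.println(r.add("db2.t2", "constraint_name")); // true
        System.out.println(r.add("db2.t1", "constraint_name")); // false - duplicate within one table
    }
}
```

This matches the updated negative test: reusing `constraint_name` across databases and tables succeeds, and only the final `alter table ... add constraint` that duplicates the name on `t1` fails.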
[hive] branch master updated: HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new d9aa6dc  HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers (Miklos Gergely, reviewed by Zoltan Haindrich)
d9aa6dc is described below

commit d9aa6dc2b6f4e6610a7e8d32c164d78a2dce9b29
Author: miklosgergely
AuthorDate: Tue Jan 7 12:13:38 2020 +0100

    HIVE-22747 Break up DDLSemanticAnalyzer - extract Table info and lock analyzers (Miklos Gergely, reviewed by Zoltan Haindrich)
---
 .../cli/SemanticAnalysis/HCatSemanticAnalyzer.java |   6 +-
 .../org/apache/hadoop/hive/ql/ddl/DDLUtils.java    |  18 +
 .../ql/ddl/table/info/desc/DescTableAnalyzer.java  | 192
 .../ddl/table/info/{ => desc}/DescTableDesc.java   |   2 +-
 .../table/info/{ => desc}/DescTableOperation.java  |   4 +-
 .../properties/ShowTablePropertiesAnalyzer.java    |  57 +++
 .../properties}/ShowTablePropertiesDesc.java       |   2 +-
 .../properties}/ShowTablePropertiesOperation.java  |   2 +-
 .../info/show/status/ShowTableStatusAnalyzer.java  |  83
 .../{ => show/status}/ShowTableStatusDesc.java     |  11 +-
 .../status}/ShowTableStatusOperation.java          |   2 +-
 .../table/info/show/tables/ShowTablesAnalyzer.java |  83
 .../info/{ => show/tables}/ShowTablesDesc.java     |  40 +-
 .../{ => show/tables}/ShowTablesOperation.java     |  48 +-
 .../hive/ql/ddl/table/lock/LockTableAnalyzer.java  |  64 +++
 .../ql/ddl/table/lock/UnlockTableAnalyzer.java     |  60 +++
 .../ddl/table/lock/show/ShowDbLocksAnalyzer.java   |  67 +++
 .../ql/ddl/table/lock/show/ShowLocksAnalyzer.java  |  88
 .../ddl/table/lock/{ => show}/ShowLocksDesc.java   |   2 +-
 .../table/lock/{ => show}/ShowLocksOperation.java  |   2 +-
 .../show/ShowMaterializedViewsAnalyzer.java        |  78 +++
 .../show/ShowMaterializedViewsDesc.java}           |  35 +-
 .../show/ShowMaterializedViewsOperation.java       |  66 +++
 .../hive/ql/ddl/view/show/ShowViewsAnalyzer.java   |  78 +++
 .../show/ShowViewsDesc.java}                       |  33 +-
 .../hive/ql/ddl/view/show/ShowViewsOperation.java  |  64 +++
 .../hadoop/hive/ql/lockmgr/DbLockManager.java      |   2 +-
 .../org/apache/hadoop/hive/ql/metadata/Hive.java   |  14 +
 .../metadata/formatting/TextMetaDataFormatter.java |   2 +-
 .../hadoop/hive/ql/parse/DDLSemanticAnalyzer.java  | 547 -
 .../hive/ql/parse/SemanticAnalyzerFactory.java     |   9 -
 .../clientpositive/show_materialized_views.q       |   6 +
 ql/src/test/queries/clientpositive/show_tables.q   |   8 +
 ql/src/test/queries/clientpositive/show_views.q    |   6 +
 .../clientpositive/show_materialized_views.q.out   | 117 +
 .../test/results/clientpositive/show_tables.q.out  | 175 +++
 .../test/results/clientpositive/show_views.q.out   | 117 +
 37 files changed, 1495 insertions(+), 695 deletions(-)

diff --git a/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java b/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java
index f92478c..8277d34 100644
--- a/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java
+++ b/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/HCatSemanticAnalyzer.java
@@ -25,9 +25,9 @@ import org.apache.hadoop.hive.ql.ddl.database.desc.DescDatabaseDesc;
 import org.apache.hadoop.hive.ql.ddl.database.drop.DropDatabaseDesc;
 import org.apache.hadoop.hive.ql.ddl.database.show.ShowDatabasesDesc;
 import org.apache.hadoop.hive.ql.ddl.database.use.SwitchDatabaseDesc;
-import org.apache.hadoop.hive.ql.ddl.table.info.DescTableDesc;
-import org.apache.hadoop.hive.ql.ddl.table.info.ShowTableStatusDesc;
-import org.apache.hadoop.hive.ql.ddl.table.info.ShowTablesDesc;
+import org.apache.hadoop.hive.ql.ddl.table.info.desc.DescTableDesc;
+import org.apache.hadoop.hive.ql.ddl.table.info.show.status.ShowTableStatusDesc;
+import org.apache.hadoop.hive.ql.ddl.table.info.show.tables.ShowTablesDesc;
 import org.apache.hadoop.hive.ql.ddl.table.partition.drop.AlterTableDropPartitionDesc;
 import org.apache.hadoop.hive.ql.ddl.table.partition.show.ShowPartitionsDesc;
 import org.apache.hadoop.hive.ql.ddl.table.storage.AlterTableSetLocationDesc;
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLUtils.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLUtils.java
index eb8b858..b82fc5e 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLUtils.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLUtils.java
@@ -41,6 +41,7 @@ import org.apache.hadoop.hive.ql.metadata.Hive;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
 import org.apache.hadoop.hive.ql.metadata.Partition;
 import org.apache.hadoop.hive.ql.me
[hive] branch master updated: HIVE-22876 Do not enforce package-info.java files by checkstyle (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new c060c84  HIVE-22876 Do not enforce package-info.java files by checkstyle (Miklos Gergely, reviewed by Zoltan Haindrich)
c060c84 is described below

commit c060c841797fd5814030da9cd1dd2bf059b74506
Author: miklosgergely
AuthorDate: Tue Feb 11 21:09:30 2020 +0100

    HIVE-22876 Do not enforce package-info.java files by checkstyle (Miklos Gergely, reviewed by Zoltan Haindrich)
---
 checkstyle/checkstyle.xml                          |  3 ---
 .../metastore/tools/metatool/package-info.java     | 20 --
 .../hadoop/hive/llap/cli/service/package-info.java | 23 -
 .../hadoop/hive/llap/cli/status/package-info.java  | 24 --
 .../ddl/database/alter/location/package-info.java  | 20 --
 .../ql/ddl/database/alter/owner/package-info.java  | 20 --
 .../hive/ql/ddl/database/alter/package-info.java   | 20 --
 .../ddl/database/alter/poperties/package-info.java | 20 --
 .../hive/ql/ddl/database/create/package-info.java  | 20 --
 .../hive/ql/ddl/database/desc/package-info.java    | 20 --
 .../hive/ql/ddl/database/drop/package-info.java    | 20 --
 .../hive/ql/ddl/database/lock/package-info.java    | 20 --
 .../hadoop/hive/ql/ddl/database/package-info.java  | 20 --
 .../hive/ql/ddl/database/show/package-info.java    | 20 --
 .../ql/ddl/database/showcreate/package-info.java   | 20 --
 .../hive/ql/ddl/database/unlock/package-info.java  | 20 --
 .../hive/ql/ddl/database/use/package-info.java     | 20 --
 .../hive/ql/ddl/function/create/package-info.java  | 20 --
 .../hive/ql/ddl/function/desc/package-info.java    | 20 --
 .../hive/ql/ddl/function/drop/package-info.java    | 20 --
 .../ql/ddl/function/macro/create/package-info.java | 20 --
 .../ql/ddl/function/macro/drop/package-info.java   | 20 --
 .../hadoop/hive/ql/ddl/function/package-info.java  | 20 --
 .../hive/ql/ddl/function/reload/package-info.java  | 20 --
 .../hive/ql/ddl/function/show/package-info.java    | 20 --
 .../hadoop/hive/ql/ddl/misc/conf/package-info.java | 20 --
 .../hive/ql/ddl/misc/flags/package-info.java       | 20 --
 .../hive/ql/ddl/misc/hooks/package-info.java       | 20 --
 .../hive/ql/ddl/misc/metadata/package-info.java    | 20 --
 .../hadoop/hive/ql/ddl/misc/msck/package-info.java | 20 --
 .../hadoop/hive/ql/ddl/misc/package-info.java      | 20 --
 .../apache/hadoop/hive/ql/ddl/package-info.java    | 20 --
 .../hive/ql/ddl/privilege/grant/package-info.java  | 20 --
 .../hadoop/hive/ql/ddl/privilege/package-info.java | 20 --
 .../hive/ql/ddl/privilege/revoke/package-info.java | 20 --
 .../ql/ddl/privilege/role/create/package-info.java | 20 --
 .../ql/ddl/privilege/role/drop/package-info.java   | 20 --
 .../ql/ddl/privilege/role/grant/package-info.java  | 20 --
 .../ql/ddl/privilege/role/revoke/package-info.java | 20 --
 .../ql/ddl/privilege/role/set/package-info.java    | 21 ---
 .../ql/ddl/privilege/role/show/package-info.java   | 20 --
 .../ql/ddl/privilege/show/grant/package-info.java  | 20 --
 .../privilege/show/principals/package-info.java    | 20 --
 .../ddl/privilege/show/rolegrant/package-info.java | 20 --
 .../hive/ql/ddl/process/abort/package-info.java    | 20 --
 .../hive/ql/ddl/process/kill/package-info.java     | 20 --
 .../hadoop/hive/ql/ddl/process/package-info.java   | 20 --
 .../ddl/process/show/compactions/package-info.java | 20 --
 .../process/show/transactions/package-info.java    | 20 --
 .../hive/ql/ddl/table/column/add/package-info.java | 20 --
 .../ql/ddl/table/column/change/package-info.java   | 20 --
 .../ql/ddl/table/column/replace/package-info.java  | 20 --
 .../ql/ddl/table/column/show/package-info.java     | 20 --
 .../ql/ddl/table/column/update/package-info.java   | 20 --
 .../ql/ddl/table/constraint/add/package-info.java  | 20 --
 .../ql/ddl/table/constraint/drop/package-info.java | 20 --
 .../hive/ql/ddl/table/constraint/package-info.java | 20 --
 .../hive/ql/ddl/table/create/like/package
[hive] branch master updated: HIVE-22864 Add option to DatabaseRule to run the Schema Tool in verbose mode for tests (Miklos Gergely, reviewed by Laszlo Bodor)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 8f46884 HIVE-22864 Add option to DatabaseRule to run the Schema Tool in verbose mode for tests (Miklos Gergely, reviewed by Laszlo Bodor) 8f46884 is described below commit 8f46884b3989bb3c97eeebd4c4cc221f9e9ad8ee Author: miklosgergely AuthorDate: Mon Feb 10 21:59:34 2020 +0100 HIVE-22864 Add option to DatabaseRule to run the Schema Tool in verbose mode for tests (Miklos Gergely, reviewed by Laszlo Bodor) --- standalone-metastore/DEV-README | 2 ++ .../apache/hadoop/hive/metastore/dbinstall/rules/DatabaseRule.java| 4 2 files changed, 6 insertions(+) diff --git a/standalone-metastore/DEV-README b/standalone-metastore/DEV-README index ab5df26..84ed938 100644 --- a/standalone-metastore/DEV-README +++ b/standalone-metastore/DEV-README @@ -51,6 +51,8 @@ Supported databases for testing: -Dit.test=ITestPostgres -Dit.test=ITestSqlServer +By adding -Dverbose.schematool the Schema Tool output becomes more detailed. 
+ You can download the Oracle driver at http://www.oracle.com/technetwork/database/features/jdbc/index-091264.html You should download Oracle 11g Release 1, ojdbc6.jar diff --git a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/dbinstall/rules/DatabaseRule.java b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/dbinstall/rules/DatabaseRule.java index 3f82891..a6d22d1 100644 --- a/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/dbinstall/rules/DatabaseRule.java +++ b/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/dbinstall/rules/DatabaseRule.java @@ -61,6 +61,10 @@ public abstract class DatabaseRule extends ExternalResource { private boolean verbose; + public DatabaseRule() { +verbose = System.getProperty("verbose.schematool") != null; + } + public DatabaseRule setVerbose(boolean verbose) { this.verbose = verbose; return this;
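The pattern behind HIVE-22864 is small but handy: a boolean that defaults to whether a JVM `-D` flag was supplied (presence alone is enough, no value needed), while still allowing an explicit setter override for individual tests. A minimal sketch, using a hypothetical `SchemaToolSettings` class in place of Hive's `DatabaseRule`:

```java
// Sketch of the DatabaseRule pattern: verbosity defaults to the presence of
// the -Dverbose.schematool JVM flag. "SchemaToolSettings" is an illustrative
// stand-in, not Hive code.
public class SchemaToolSettings {
    private boolean verbose;

    public SchemaToolSettings() {
        // System.getProperty returns null when the -D flag is absent, so the
        // mere presence of -Dverbose.schematool turns verbose output on.
        verbose = System.getProperty("verbose.schematool") != null;
    }

    public SchemaToolSettings setVerbose(boolean verbose) {
        this.verbose = verbose;
        return this;
    }

    public boolean isVerbose() {
        return verbose;
    }

    public static void main(String[] args) {
        SchemaToolSettings settings = new SchemaToolSettings();
        System.out.println("default verbose=" + settings.isVerbose());
        System.out.println("after override=" + settings.setVerbose(true).isVerbose());
    }
}
```

Running the test JVM with `-Dverbose.schematool` (e.g. via `mvn verify -Dverbose.schematool -Dit.test=ITestPostgres`) would flip the default without touching any test code.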
[hive] branch master updated: HIVE-22835 Extract Executor from Driver (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 0d9deba HIVE-22835 Extract Executor from Driver (Miklos Gergely, reviewed by Zoltan Haindrich) 0d9deba is described below commit 0d9deba3c15038df4c64ea9b8494d554eb8eea2f Author: miklosgergely AuthorDate: Wed Feb 5 18:55:55 2020 +0100 HIVE-22835 Extract Executor from Driver (Miklos Gergely, reviewed by Zoltan Haindrich) --- ql/src/java/org/apache/hadoop/hive/ql/Driver.java | 578 +--- .../org/apache/hadoop/hive/ql/DriverContext.java | 23 + .../java/org/apache/hadoop/hive/ql/Executor.java | 593 + ql/src/java/org/apache/hadoop/hive/ql/IDriver.java | 2 +- .../apache/hadoop/hive/ql/reexec/ReExecDriver.java | 4 +- 5 files changed, 635 insertions(+), 565 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java b/ql/src/java/org/apache/hadoop/hive/ql/Driver.java index 5191800..1f8bc12 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/Driver.java @@ -18,14 +18,10 @@ package org.apache.hadoop.hive.ql; -import java.io.DataInput; import java.io.IOException; -import java.net.InetAddress; import java.util.ArrayList; import java.util.HashMap; import java.util.HashSet; -import java.util.LinkedHashMap; -import java.util.LinkedHashSet; import java.util.LinkedList; import java.util.List; import java.util.Map; @@ -53,26 +49,18 @@ import org.apache.hadoop.hive.metastore.api.TxnType; import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils; import org.apache.hadoop.hive.ql.cache.results.CacheUsage; import org.apache.hadoop.hive.ql.cache.results.QueryResultsCache; -import org.apache.hadoop.hive.ql.cache.results.QueryResultsCache.CacheEntry; import org.apache.hadoop.hive.ql.ddl.DDLDesc.DDLDescWithWriteId; import org.apache.hadoop.hive.ql.exec.AbstractFileMergeOperator; 
import org.apache.hadoop.hive.ql.exec.ConditionalTask; -import org.apache.hadoop.hive.ql.exec.DagUtils; import org.apache.hadoop.hive.ql.exec.ExplainTask; import org.apache.hadoop.hive.ql.exec.FetchTask; import org.apache.hadoop.hive.ql.exec.Operator; import org.apache.hadoop.hive.ql.exec.TableScanOperator; import org.apache.hadoop.hive.ql.exec.Task; import org.apache.hadoop.hive.ql.exec.TaskFactory; -import org.apache.hadoop.hive.ql.exec.TaskResult; -import org.apache.hadoop.hive.ql.exec.TaskRunner; import org.apache.hadoop.hive.ql.exec.Utilities; import org.apache.hadoop.hive.ql.exec.spark.session.SparkSession; -import org.apache.hadoop.hive.ql.history.HiveHistory.Keys; import org.apache.hadoop.hive.ql.hooks.Entity; -import org.apache.hadoop.hive.ql.hooks.HookContext; -import org.apache.hadoop.hive.ql.hooks.PrivateHookContext; -import org.apache.hadoop.hive.ql.hooks.WriteEntity; import org.apache.hadoop.hive.ql.io.AcidUtils; import org.apache.hadoop.hive.ql.lock.CompileLock; import org.apache.hadoop.hive.ql.lock.CompileLockFactory; @@ -82,7 +70,6 @@ import org.apache.hadoop.hive.ql.lockmgr.HiveLockMode; import org.apache.hadoop.hive.ql.lockmgr.HiveTxnManager; import org.apache.hadoop.hive.ql.lockmgr.LockException; import org.apache.hadoop.hive.ql.log.PerfLogger; -import org.apache.hadoop.hive.ql.metadata.Hive; import org.apache.hadoop.hive.ql.metadata.HiveException; import org.apache.hadoop.hive.ql.metadata.Table; import org.apache.hadoop.hive.ql.metadata.formatting.JsonMetaDataFormatter; @@ -102,7 +89,6 @@ import org.apache.hadoop.hive.ql.session.SessionState; import org.apache.hadoop.hive.ql.session.SessionState.LogHelper; import org.apache.hadoop.hive.ql.wm.WmContext; import org.apache.hadoop.hive.serde2.ByteStream; -import org.apache.hadoop.mapreduce.MRJobConfig; import org.apache.hadoop.util.StringUtils; import org.apache.hive.common.util.ShutdownHookManager; import org.apache.hive.common.util.TxnIdUtils; @@ -111,7 +97,6 @@ import org.slf4j.LoggerFactory; 
import com.google.common.annotations.VisibleForTesting; import com.google.common.base.Strings; -import com.google.common.collect.ImmutableMap; public class Driver implements IDriver { @@ -124,15 +109,11 @@ public class Driver implements IDriver { private int maxRows = 100; private ByteStream.Output bos = new ByteStream.Output(); - private DataInput resStream; private Context context; private final DriverContext driverContext; private TaskQueue taskQueue; private final List hiveLocks = new ArrayList(); - // HS2 operation handle guid string - private String operationId; - private DriverState driverState = new DriverState(); @Override @@ -945,7 +926,9 @@ public class Driver implements IDriver { } try { -execute(); +taskQueue = new TaskQueue(context); // for canceling the query (should be bound to session
[hive] branch master updated: HIVE-22723 Add backticks to identifiers within structs at SHOW CREATE TABLE (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 22f5ab5 HIVE-22723 Add backtics to identifiers within structs at SHOW CREATE TABLE (Miklos Gergely, reviewed by Zoltan Haindrich) 22f5ab5 is described below commit 22f5ab51660aa660fdae58bb56a8c5cc44b60630 Author: miklosgergely AuthorDate: Fri Jan 24 16:15:44 2020 +0100 HIVE-22723 Add backtics to identifiers within structs at SHOW CREATE TABLE (Miklos Gergely, reviewed by Zoltan Haindrich) --- .../create/show/ShowCreateTableOperation.java | 62 -- .../clientpositive/show_create_table_db_table.q| 3 ++ .../show_create_table_db_table.q.out | 36 + 3 files changed, 97 insertions(+), 4 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/create/show/ShowCreateTableOperation.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/create/show/ShowCreateTableOperation.java index 9c584ae..e07559f 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/create/show/ShowCreateTableOperation.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/create/show/ShowCreateTableOperation.java @@ -51,6 +51,12 @@ import org.apache.hadoop.hive.ql.metadata.HiveException; import org.apache.hadoop.hive.ql.metadata.Table; import org.apache.hadoop.hive.ql.util.DirectionUtils; import org.apache.hadoop.hive.serde.serdeConstants; +import org.apache.hadoop.hive.serde2.typeinfo.ListTypeInfo; +import org.apache.hadoop.hive.serde2.typeinfo.MapTypeInfo; +import org.apache.hadoop.hive.serde2.typeinfo.StructTypeInfo; +import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo; +import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoUtils; +import org.apache.hadoop.hive.serde2.typeinfo.UnionTypeInfo; import org.apache.hive.common.util.HiveStringUtils; import org.stringtemplate.v4.ST; @@ -142,16 +148,64 @@ public class ShowCreateTableOperation 
extends DDLOperation private String getColumns(Table table) { List columnDescs = new ArrayList(); -for (FieldSchema col : table.getCols()) { - String columnDesc = " `" + col.getName() + "` " + col.getType(); - if (col.getComment() != null) { -columnDesc += " COMMENT '" + HiveStringUtils.escapeHiveCommand(col.getComment()) + "'"; +for (FieldSchema column : table.getCols()) { + String columnType = formatType(TypeInfoUtils.getTypeInfoFromTypeString(column.getType())); + String columnDesc = " `" + column.getName() + "` " + columnType; + if (column.getComment() != null) { +columnDesc += " COMMENT '" + HiveStringUtils.escapeHiveCommand(column.getComment()) + "'"; } columnDescs.add(columnDesc); } return StringUtils.join(columnDescs, ", \n"); } + /** Struct fields are identifiers, need to be put between ``. */ + private String formatType(TypeInfo typeInfo) { +switch (typeInfo.getCategory()) { +case PRIMITIVE: + return typeInfo.getTypeName(); +case STRUCT: + StringBuilder structFormattedType = new StringBuilder(); + + StructTypeInfo structTypeInfo = (StructTypeInfo)typeInfo; + for (int i = 0; i < structTypeInfo.getAllStructFieldNames().size(); i++) { +if (structFormattedType.length() != 0) { + structFormattedType.append(", "); +} + +String structElementName = structTypeInfo.getAllStructFieldNames().get(i); +String structElementType = formatType(structTypeInfo.getAllStructFieldTypeInfos().get(i)); + +structFormattedType.append("`" + structElementName + "`:" + structElementType); + } + return "struct<" + structFormattedType.toString() + ">"; +case LIST: + ListTypeInfo listTypeInfo = (ListTypeInfo)typeInfo; + String elementType = formatType(listTypeInfo.getListElementTypeInfo()); + return "array<" + elementType + ">"; +case MAP: + MapTypeInfo mapTypeInfo = (MapTypeInfo)typeInfo; + String keyTypeInfo = mapTypeInfo.getMapKeyTypeInfo().getTypeName(); + String valueTypeInfo = formatType(mapTypeInfo.getMapValueTypeInfo()); + return "map<" + keyTypeInfo + "," + valueTypeInfo + ">"; 
+case UNION: + StringBuilder unionFormattedType = new StringBuilder(); + + UnionTypeInfo unionTypeInfo = (UnionTypeInfo)typeInfo; + for (TypeInfo unionElementTypeInfo : unionTypeInfo.getAllUnionObjectTypeInfos()) { +if (unionFormattedType.length() != 0) { + unionFormattedType.append(", "); +} + +String unionElementType = formatType(unionElementTypeInfo); +unionFormattedType.append(unionEl
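The core of HIVE-22723 is a recursive walk over the type tree that backtick-quotes struct field names at every nesting level, since field names are identifiers. A self-contained sketch with simplified stand-ins for Hive's `TypeInfo` hierarchy (the `Primitive`/`Struct`/`Array` classes here are illustrative, not Hive code):

```java
import java.util.List;

// Simplified model of the recursive type formatting: struct field names are
// identifiers, so they get backticks; list/primitive types are passed through.
abstract class ColType {
    abstract String format();
}

class Primitive extends ColType {
    final String name;
    Primitive(String name) { this.name = name; }
    String format() { return name; }
}

class Array extends ColType {
    final ColType element;
    Array(ColType element) { this.element = element; }
    String format() { return "array<" + element.format() + ">"; }
}

class Struct extends ColType {
    final List<String> fieldNames;
    final List<ColType> fieldTypes;
    Struct(List<String> names, List<ColType> types) {
        fieldNames = names;
        fieldTypes = types;
    }
    String format() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fieldNames.size(); i++) {
            if (sb.length() != 0) {
                sb.append(", ");
            }
            // Backtick-quote the field name, then recurse into the field type.
            sb.append("`").append(fieldNames.get(i)).append("`:")
              .append(fieldTypes.get(i).format());
        }
        return "struct<" + sb + ">";
    }
}

public class TypeFormatDemo {
    public static void main(String[] args) {
        ColType t = new Struct(List.of("id", "tags"),
            List.of(new Primitive("int"), new Array(new Primitive("string"))));
        System.out.println(t.format());
        // struct<`id`:int, `tags`:array<string>>
    }
}
```

The real patch handles maps and unions the same way, recursing into value and member types while quoting only the struct-field identifiers.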
[hive] branch master updated: HIVE-22644 Break up DDLSemanticAnalyzer - extract Table column analyzers (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 8776087 HIVE-22644 Break up DDLSemanticAnalyzer - extract Table column analyzers (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) 8776087 is described below commit 87760874b1cb7eab1c278f1c09949103c4625451 Author: miklosgergely AuthorDate: Fri Dec 13 10:47:44 2019 +0100 HIVE-22644 Break up DDLSemanticAnalyzer - extract Table column analyzers (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) --- .../ql/ddl/table/AbstractAlterTableAnalyzer.java | 130 +++ .../ql/ddl/table/AbstractAlterTableOperation.java | 3 +- .../hadoop/hive/ql/ddl/table/AlterTableUtils.java | 10 + .../column/add/AlterTableAddColumnsAnalyzer.java | 64 ++ .../column/{ => add}/AlterTableAddColumnsDesc.java | 2 +- .../{ => add}/AlterTableAddColumnsOperation.java | 2 +- .../ddl/table/column/{ => add}/package-info.java | 4 +- .../change/AlterTableChangeColumnAnalyzer.java | 173 +++ .../{ => change}/AlterTableChangeColumnDesc.java | 2 +- .../AlterTableChangeColumnOperation.java | 2 +- .../table/column/{ => change}/package-info.java| 4 +- .../replace/AlterTableReplaceColumnsAnalyzer.java | 64 ++ .../AlterTableReplaceColumnsDesc.java | 2 +- .../AlterTableReplaceColumnsOperation.java | 2 +- .../table/column/{ => replace}/package-info.java | 4 +- .../ddl/table/column/show/ShowColumnsAnalyzer.java | 87 .../table/column/{ => show}/ShowColumnsDesc.java | 6 +- .../column/{ => show}/ShowColumnsOperation.java| 2 +- .../ddl/table/column/{ => show}/package-info.java | 4 +- .../update/AlterTableUpdateColumnsAnalyzer.java| 57 + .../{ => update}/AlterTableUpdateColumnsDesc.java | 2 +- .../AlterTableUpdateColumnsOperation.java | 2 +- .../table/column/{ => update}/package-info.java| 4 +- .../ql/ddl/table/constraint/ConstraintsUtils.java | 109 ++ 
.../hadoop/hive/ql/parse/BaseSemanticAnalyzer.java | 105 - .../hadoop/hive/ql/parse/DDLSemanticAnalyzer.java | 241 + .../hadoop/hive/ql/parse/SemanticAnalyzer.java | 9 +- .../hive/ql/parse/SemanticAnalyzerFactory.java | 1 - 28 files changed, 721 insertions(+), 376 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/AbstractAlterTableAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/AbstractAlterTableAnalyzer.java index 026f251..1adcef6 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/AbstractAlterTableAnalyzer.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/AbstractAlterTableAnalyzer.java @@ -21,8 +21,18 @@ package org.apache.hadoop.hive.ql.ddl.table; import java.util.Map; import org.apache.hadoop.hive.common.TableName; +import org.apache.hadoop.hive.conf.HiveConf; +import org.apache.hadoop.hive.metastore.api.hive_metastoreConstants; import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils; +import org.apache.hadoop.hive.ql.ErrorMsg; import org.apache.hadoop.hive.ql.QueryState; +import org.apache.hadoop.hive.ql.ddl.DDLDesc.DDLDescWithWriteId; +import org.apache.hadoop.hive.ql.hooks.ReadEntity; +import org.apache.hadoop.hive.ql.hooks.WriteEntity; +import org.apache.hadoop.hive.ql.hooks.WriteEntity.WriteType; +import org.apache.hadoop.hive.ql.io.AcidUtils; +import org.apache.hadoop.hive.ql.metadata.Partition; +import org.apache.hadoop.hive.ql.metadata.Table; import org.apache.hadoop.hive.ql.parse.ASTNode; import org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer; import org.apache.hadoop.hive.ql.parse.HiveParser; @@ -33,6 +43,9 @@ import org.apache.hadoop.hive.ql.parse.SemanticException; * tableName command partitionSpec? */ public abstract class AbstractAlterTableAnalyzer extends BaseSemanticAnalyzer { + // Equivalent to acidSinks, but for DDL operations that change data. 
+ private DDLDescWithWriteId ddlDescWithWriteId; + public AbstractAlterTableAnalyzer(QueryState queryState) throws SemanticException { super(queryState); } @@ -61,4 +74,121 @@ public abstract class AbstractAlterTableAnalyzer extends BaseSemanticAnalyzer { protected abstract void analyzeCommand(TableName tableName, Map partitionSpec, ASTNode command) throws SemanticException; + + protected void setAcidDdlDesc(DDLDescWithWriteId descWithWriteId) { +if(this.ddlDescWithWriteId != null) { + throw new IllegalStateException("ddlDescWithWriteId is already set: " + this.ddlDescWithWriteId); +} +this.ddlDescWithWriteId = descWithWriteId; + } + + @Override + public DDLDescWithWriteId getAcidDdlDesc() { +return ddlDescWithWriteId; + } + + protected void addInputsOutputsAlterT
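The `setAcidDdlDesc` method above is a write-once setter: assigning the field twice indicates a programming error, so it fails fast with `IllegalStateException` rather than silently overwriting. The same guard in isolation (the generic `WriteOnce` wrapper is an illustration, not Hive code):

```java
// Write-once holder mirroring the setAcidDdlDesc guard: a second set() is a
// bug in the caller and fails fast instead of clobbering the first value.
public class WriteOnce<T> {
    private T value;

    public void set(T value) {
        if (this.value != null) {
            throw new IllegalStateException("value is already set: " + this.value);
        }
        this.value = value;
    }

    public T get() {
        return value;
    }

    public static void main(String[] args) {
        WriteOnce<String> desc = new WriteOnce<>();
        desc.set("first");
        try {
            desc.set("second");
        } catch (IllegalStateException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```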
[hive] branch master updated: HIVE-22608 Reduce the number of public methods in Driver (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new d981e8d HIVE-22608 Reduce the number of public methods in Driver (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) d981e8d is described below commit d981e8d3d7c6cbbed5d5793042cc4d91ffef3580 Author: miklosgergely AuthorDate: Mon Dec 9 17:00:28 2019 +0100 HIVE-22608 Reduce the number of public methods in Driver (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) --- ql/src/java/org/apache/hadoop/hive/ql/Driver.java | 30 +- .../org/apache/hadoop/hive/ql/DriverUtils.java | 3 +-- .../java/org/apache/hadoop/hive/ql/QueryState.java | 15 +++ .../hadoop/hive/ql/parse/DDLSemanticAnalyzer.java | 2 +- .../apache/hadoop/hive/ql/reexec/ReExecDriver.java | 5 ++-- 5 files changed, 27 insertions(+), 28 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java b/ql/src/java/org/apache/hadoop/hive/ql/Driver.java index 7549144..342d463 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/Driver.java @@ -139,10 +139,6 @@ public class Driver implements IDriver { return driverContext.getSchema(); } - public Schema getExplainSchema() { -return new Schema(ExplainTask.getResultSchema(), null); - } - @Override public Context getContext() { return context; @@ -168,7 +164,7 @@ public class Driver implements IDriver { // Pass lineageState when a driver instantiates another Driver to run // or compile another query public Driver(HiveConf conf, Context ctx, LineageState lineageState) { -this(getNewQueryState(conf, lineageState), null); +this(QueryState.getNewQueryState(conf, lineageState), null); context = ctx; } @@ -185,18 +181,11 @@ public class Driver implements IDriver { txnManager); } - /** - * Generating the new QueryState object. 
Making sure, that the new queryId is generated. - * @param conf The HiveConf which should be used - * @param lineageState a LineageState to be set in the new QueryState object - * @return The new QueryState object - */ - public static QueryState getNewQueryState(HiveConf conf, LineageState lineageState) { -return new QueryState.Builder() -.withGenerateNewQueryId(true) -.withHiveConf(conf) -.withLineageState(lineageState) -.build(); + public Driver(QueryState queryState, QueryInfo queryInfo, HiveTxnManager txnManager, + ValidWriteIdList compactionWriteIds, long compactorTxnId) { +this(queryState, queryInfo, txnManager); +driverContext.setCompactionWriteIds(compactionWriteIds); +driverContext.setCompactorTxnId(compactorTxnId); } /** @@ -1816,7 +1805,7 @@ public class Driver implements IDriver { // Close and release resources within a running query process. Since it runs under // driver state COMPILING, EXECUTING or INTERRUPT, it would not have race condition // with the releases probably running in the other closing thread. 
- public int closeInProcess(boolean destroyed) { + private int closeInProcess(boolean destroyed) { releaseTaskQueue(); releasePlan(); releaseCachedResult(); @@ -1930,9 +1919,4 @@ public class Driver implements IDriver { return driverContext.getPlan().getFetchTask() != null && driverContext.getPlan().getResultSchema() != null && driverContext.getPlan().getResultSchema().isSetFieldSchemas(); } - - void setCompactionWriteIds(ValidWriteIdList compactionWriteIds, long compactorTxnId) { -driverContext.setCompactionWriteIds(compactionWriteIds); -driverContext.setCompactorTxnId(compactorTxnId); - } } diff --git a/ql/src/java/org/apache/hadoop/hive/ql/DriverUtils.java b/ql/src/java/org/apache/hadoop/hive/ql/DriverUtils.java index 1eacf69..21e5f72 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/DriverUtils.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/DriverUtils.java @@ -64,8 +64,7 @@ public final class DriverUtils { boolean isOk = false; try { QueryState qs = new QueryState.Builder().withHiveConf(conf).withGenerateNewQueryId(true).nonIsolated().build(); - Driver driver = new Driver(qs, null, null); - driver.setCompactionWriteIds(writeIds, compactorTxnId); + Driver driver = new Driver(qs, null, null, writeIds, compactorTxnId); try { try { driver.run(query); diff --git a/ql/src/java/org/apache/hadoop/hive/ql/QueryState.java b/ql/src/java/org/apache/hadoop/hive/ql/QueryState.java index 267f7d0..280b7a4 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/QueryState.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/QueryState.java @@ -58,6 +58,7 @@ public class QueryState { private long numModifiedRows = 0; static public final S
[hive] branch master updated: HIVE-22657 Add log message when stats have to be computed during calcite (Miklos Gergely)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 4dcbfb3 HIVE-22657 Add log message when stats have to to computed during calcite (Miklos Gergely) 4dcbfb3 is described below commit 4dcbfb317927ceecbb2f4cfd95a88c8f79601ac2 Author: miklosgergely AuthorDate: Fri Dec 20 14:55:26 2019 +0100 HIVE-22657 Add log message when stats have to to computed during calcite (Miklos Gergely) --- .../org/apache/hadoop/hive/ql/optimizer/calcite/RelOptHiveTable.java| 2 ++ 1 file changed, 2 insertions(+) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/RelOptHiveTable.java b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/RelOptHiveTable.java index 001156a..1f6e1bc 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/RelOptHiveTable.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/RelOptHiveTable.java @@ -689,6 +689,8 @@ public class RelOptHiveTable implements RelOptTable { } } if (!projIndxSet.isEmpty()) { + LOG.info("Calculating column statistics for {}, projIndxSet: {}, allowMissingStats: {}", name, + projIndxLst, allowMissingStats); updateColStats(projIndxSet, allowMissingStats); for (Integer i : projIndxSet) { colStatsBldr.add(hiveColStatsMap.get(i));
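The added log line uses SLF4J's `{}` placeholders, which defer string construction until the INFO level is actually enabled, instead of eager concatenation. The toy formatter below mimics that substitution purely for illustration; real code simply calls `LOG.info("Calculating column statistics for {} ...", name, ...)` as the patch does:

```java
// Toy re-implementation of SLF4J-style "{}" substitution, for illustration
// only: each "{}" in the template is replaced by the next argument in order.
public class PlaceholderDemo {
    public static String fmt(String template, Object... args) {
        StringBuilder out = new StringBuilder();
        int argIdx = 0;
        int pos = 0;
        int hit;
        while ((hit = template.indexOf("{}", pos)) >= 0 && argIdx < args.length) {
            out.append(template, pos, hit).append(args[argIdx++]);
            pos = hit + 2; // skip past the "{}" token
        }
        return out.append(template.substring(pos)).toString();
    }

    public static void main(String[] args) {
        System.out.println(fmt(
            "Calculating column statistics for {}, allowMissingStats: {}",
            "db.tbl", true));
    }
}
```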
[hive] branch master updated: HIVE-21860 Incorrect FQDN of HadoopThriftAuthBridge23 in ShimLoader (Oleksiy Sayankin via Miklos Gergely)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new d5aec85 HIVE-21860 Incorrect FQDN of HadoopThriftAuthBridge23 in ShimLoader (Oleksiy Sayankin via Miklos Gergely) d5aec85 is described below commit d5aec854a09bbf4dd6fbc6a24ef506d7f30f4648 Author: miklosgergely AuthorDate: Tue Dec 17 15:04:58 2019 +0100 HIVE-21860 Incorrect FQDN of HadoopThriftAuthBridge23 in ShimLoader (Oleksiy Sayankin via Miklos Gergely) --- shims/common/src/main/java/org/apache/hadoop/hive/shims/ShimLoader.java | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/shims/common/src/main/java/org/apache/hadoop/hive/shims/ShimLoader.java b/shims/common/src/main/java/org/apache/hadoop/hive/shims/ShimLoader.java index 206a6d9..20b0f60 100644 --- a/shims/common/src/main/java/org/apache/hadoop/hive/shims/ShimLoader.java +++ b/shims/common/src/main/java/org/apache/hadoop/hive/shims/ShimLoader.java @@ -63,7 +63,7 @@ public abstract class ShimLoader { static { HADOOP_THRIFT_AUTH_BRIDGE_CLASSES.put(HADOOP23VERSIONNAME, -"org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge23"); +"org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge23"); }
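The one-line fix above swaps a stale fully qualified class name in a version-to-class map; since the string is only handed to `Class.forName` at runtime, the wrong package never failed at compile time. A sketch of the lookup pattern (the `"0.23"` key and `bridgeClassFor` helper are illustrative assumptions; the class-name string is the corrected value from the patch):

```java
import java.util.HashMap;
import java.util.Map;

// Version-keyed registry of fully qualified class names, as in ShimLoader.
// A typo in the FQCN here only surfaces when Class.forName is called.
public class BridgeLoaderDemo {
    private static final Map<String, String> AUTH_BRIDGE_CLASSES = new HashMap<>();

    static {
        AUTH_BRIDGE_CLASSES.put("0.23",
            "org.apache.hadoop.hive.metastore.security.HadoopThriftAuthBridge23");
    }

    public static String bridgeClassFor(String version) {
        String className = AUTH_BRIDGE_CLASSES.get(version);
        if (className == null) {
            throw new IllegalArgumentException("No auth bridge for Hadoop " + version);
        }
        return className; // the caller would pass this to Class.forName(...)
    }

    public static void main(String[] args) {
        System.out.println(bridgeClassFor("0.23"));
    }
}
```

Failing fast on an unknown version key gives a clearer error than letting `Class.forName(null)` or a misspelled FQCN throw deep inside the shim layer.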
[hive] branch master updated: HIVE-22638 Fix insert statement issue with return path (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 95410e6 HIVE-22638 Fix insert statement issue with return path (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) 95410e6 is described below commit 95410e69ae1997148b6de7a00ee7722b3f4009a5 Author: miklosgergely AuthorDate: Thu Dec 12 15:40:03 2019 +0100 HIVE-22638 Fix insert statement issue with return path (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) --- .../opconventer/HiveTableFunctionScanVisitor.java | 15 ++ .../opconventer/HiveTableScanVisitor.java | 1 + .../hadoop/hive/ql/parse/CalcitePlanner.java | 21 + .../hadoop/hive/ql/parse/SemanticAnalyzer.java | 2 +- .../hive/ql/util/TestUpgradeToolRerturnPath.java | 34 ++ 5 files changed, 62 insertions(+), 11 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/opconventer/HiveTableFunctionScanVisitor.java b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/opconventer/HiveTableFunctionScanVisitor.java index 55455f0..7c2d424 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/opconventer/HiveTableFunctionScanVisitor.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/opconventer/HiveTableFunctionScanVisitor.java @@ -26,6 +26,7 @@ import java.util.Map; import java.util.stream.Collectors; import org.apache.calcite.rex.RexCall; +import org.apache.hadoop.hive.conf.HiveConf; import org.apache.hadoop.hive.ql.exec.ColumnInfo; import org.apache.hadoop.hive.ql.exec.FunctionInfo; import org.apache.hadoop.hive.ql.exec.FunctionRegistry; @@ -66,7 +67,7 @@ class HiveTableFunctionScanVisitor extends HiveRelNodeVisitor fieldNames = new ArrayList<>(scanRel.getRowType().getFieldNames()); -List exprNames = new ArrayList<>(fieldNames); +List functionFieldNames = new 
ArrayList<>(); List exprCols = new ArrayList<>(); Map colExprMap = new HashMap<>(); for (int pos = 0; pos < call.getOperands().size(); pos++) { @@ -74,22 +75,25 @@ class HiveTableFunctionScanVisitor extends HiveRelNodeVisitor output = OperatorFactory.getAndMakeChild(new SelectDesc(exprCols, fieldNames, false), +Operator output = OperatorFactory.getAndMakeChild(new SelectDesc(exprCols, functionFieldNames, false), new RowSchema(rowResolver.getRowSchema()), op); output.setColumnExprMap(colExprMap); -Operator funcOp = genUDTFPlan(call, fieldNames, output, rowResolver); +Operator funcOp = genUDTFPlan(call, functionFieldNames, output, rowResolver); return new OpAttr(null, new HashSet(), funcOp); } @@ -133,6 +137,7 @@ class HiveTableFunctionScanVisitor extends HiveRelNodeVisitor { // 2. Setup TableScan TableScanOperator ts = (TableScanOperator) OperatorFactory.get( hiveOpConverter.getSemanticAnalyzer().getOpContext(), tsd, new RowSchema(colInfos)); +ts.setBucketingVersion(tsd.getTableMetadata().getBucketingVersion()); //now that we let Calcite process subqueries we might have more than one // tablescan with same alias. 
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java index e837cde..0a6e995 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java @@ -111,7 +111,6 @@ import org.apache.calcite.sql.validate.SqlValidatorUtil; import org.apache.calcite.tools.Frameworks; import org.apache.calcite.util.CompositeList; import org.apache.calcite.util.ImmutableBitSet; -import org.apache.calcite.util.ImmutableIntList; import org.apache.calcite.util.Pair; import org.apache.hadoop.fs.Path; import org.apache.hadoop.hive.conf.Constants; @@ -1577,8 +1576,7 @@ public class CalcitePlanner extends SemanticAnalyzer { RowResolver hiveRootRR = genRowResolver(hiveRoot, getQB()); opParseCtx.put(hiveRoot, new OpParseContext(hiveRootRR)); String dest = getQB().getParseInfo().getClauseNames().iterator().next(); -if (getQB().getParseInfo().getDestSchemaForClause(dest) != null -&& this.getQB().getTableDesc() == null) { +if (isInsertInto(getQB().getParseInfo(), dest)) { Operator selOp = handleInsertStatement(dest, hiveRoot, hiveRootRR, getQB()); return genFileSinkPlan(dest, getQB(), selOp); } else { @@ -1598,7 +1596,8 @@ public class CalcitePlanner extends SemanticAnalyzer { } ASTNode selExprList = qb.getParseInfo().getSelForClause(dest); -RowResolver out_rwsch = handleInsertStatementSpec(colList, dest, inputRR, qb, selExprList); +RowResolver rowResolver = createRowResolver(columns); +rowRes
[hive] branch master updated (78a4dd7 -> 3b36273)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git. from 78a4dd7 HIVE-22408: The fix for CALCITE-2991 creates wrong results on edge case (Vineet Garg, reviewed by Zoltan Haindrich) add 3b36273 HIVE-22557 Break up DDLSemanticAnalyzer - extract Table constraints analyzers (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) No new revisions were added by this update. Summary of changes: .../hive/ql/ddl/DDLSemanticAnalyzerFactory.java| 54 ++- .../ql/ddl/table/AbstractAlterTableAnalyzer.java | 64 +++ .../ql/ddl/table/AbstractAlterTableOperation.java | 2 +- .../AbstractAlterTableWithConstraintsDesc.java | 2 +- ...e-info.java => AlterTableAnalyzerCategory.java} | 20 +- .../table/column/AlterTableChangeColumnDesc.java | 2 +- .../{constaint => constraint}/Constraints.java | 2 +- .../ql/ddl/table/constraint/ConstraintsUtils.java | 420 .../add/AlterTableAddConstraintAnalyzer.java | 85 .../add}/AlterTableAddConstraintDesc.java | 3 +- .../add}/AlterTableAddConstraintOperation.java | 3 +- .../add}/package-info.java | 4 +- .../drop/AlterTableDropConstraintAnalyzer.java | 50 +++ .../drop}/AlterTableDropConstraintDesc.java| 2 +- .../drop}/AlterTableDropConstraintOperation.java | 2 +- .../drop}/package-info.java| 4 +- .../{constaint => constraint}/package-info.java| 2 +- .../hadoop/hive/ql/parse/BaseSemanticAnalyzer.java | 439 + .../hadoop/hive/ql/parse/DDLSemanticAnalyzer.java | 71 +--- .../hive/ql/parse/SemanticAnalyzerFactory.java | 2 +- .../repl/load/message/AddForeignKeyHandler.java| 4 +- .../load/message/AddNotNullConstraintHandler.java | 4 +- .../repl/load/message/AddPrimaryKeyHandler.java| 4 +- .../load/message/AddUniqueConstraintHandler.java | 4 +- .../repl/load/message/DropConstraintHandler.java | 2 +- 25 files changed, 740 insertions(+), 511 deletions(-) create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/table/AbstractAlterTableAnalyzer.java 
copy ql/src/java/org/apache/hadoop/hive/ql/ddl/table/{constaint/package-info.java => AlterTableAnalyzerCategory.java} (51%) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/table/{constaint => constraint}/Constraints.java (98%) create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/table/constraint/ConstraintsUtils.java create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/table/constraint/add/AlterTableAddConstraintAnalyzer.java rename ql/src/java/org/apache/hadoop/hive/ql/ddl/table/{constaint => constraint/add}/AlterTableAddConstraintDesc.java (93%) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/table/{constaint => constraint/add}/AlterTableAddConstraintOperation.java (96%) copy ql/src/java/org/apache/hadoop/hive/ql/ddl/table/{constaint => constraint/add}/package-info.java (86%) create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/table/constraint/drop/AlterTableDropConstraintAnalyzer.java rename ql/src/java/org/apache/hadoop/hive/ql/ddl/table/{constaint => constraint/drop}/AlterTableDropConstraintDesc.java (97%) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/table/{constaint => constraint/drop}/AlterTableDropConstraintOperation.java (97%) copy ql/src/java/org/apache/hadoop/hive/ql/ddl/table/{constaint => constraint/drop}/package-info.java (86%) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/table/{constaint => constraint}/package-info.java (94%)
[hive] branch master updated: HIVE-22526 Extract Compiler from Driver (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 23db35e HIVE-22526 Extract Compiler from Driver (Miklos Gergely, reviewed by Zoltan Haindrich) 23db35e is described below commit 23db35e092ce1d09c5993b45c8b0f790505fc1a5 Author: miklosgergely AuthorDate: Thu Nov 21 13:38:46 2019 +0100 HIVE-22526 Extract Compiler from Driver (Miklos Gergely, reviewed by Zoltan Haindrich) --- .../java/org/apache/hadoop/hive/ql/Compiler.java | 483 ++ ql/src/java/org/apache/hadoop/hive/ql/Driver.java | 721 - .../org/apache/hadoop/hive/ql/DriverContext.java | 8 + .../org/apache/hadoop/hive/ql/DriverUtils.java | 74 ++- .../apache/hadoop/hive/ql/exec/ExplainTask.java| 54 +- .../org/apache/hadoop/hive/ql/metadata/Hive.java | 9 + 6 files changed, 753 insertions(+), 596 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/Compiler.java b/ql/src/java/org/apache/hadoop/hive/ql/Compiler.java new file mode 100644 index 000..a559d90 --- /dev/null +++ b/ql/src/java/org/apache/hadoop/hive/ql/Compiler.java @@ -0,0 +1,483 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.hive.ql; + +import java.io.IOException; +import java.util.List; +import java.util.Map; + +import org.apache.hadoop.hive.common.JavaUtils; +import org.apache.hadoop.hive.common.ValidTxnList; +import org.apache.hadoop.hive.conf.HiveConf; +import org.apache.hadoop.hive.conf.HiveVariableSource; +import org.apache.hadoop.hive.conf.VariableSubstitution; +import org.apache.hadoop.hive.conf.HiveConf.ConfVars; +import org.apache.hadoop.hive.metastore.HiveMetaStoreUtils; +import org.apache.hadoop.hive.metastore.api.FieldSchema; +import org.apache.hadoop.hive.metastore.api.Schema; +import org.apache.hadoop.hive.metastore.api.TxnType; +import org.apache.hadoop.hive.ql.exec.ExplainTask; +import org.apache.hadoop.hive.ql.exec.FetchTask; +import org.apache.hadoop.hive.ql.exec.repl.util.ReplUtils; +import org.apache.hadoop.hive.ql.hooks.HookUtils; +import org.apache.hadoop.hive.ql.io.AcidUtils; +import org.apache.hadoop.hive.ql.lockmgr.HiveTxnManager; +import org.apache.hadoop.hive.ql.lockmgr.LockException; +import org.apache.hadoop.hive.ql.log.PerfLogger; +import org.apache.hadoop.hive.ql.metadata.AuthorizationException; +import org.apache.hadoop.hive.ql.metadata.Hive; +import org.apache.hadoop.hive.ql.metadata.HiveException; +import org.apache.hadoop.hive.ql.parse.ASTNode; +import org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer; +import org.apache.hadoop.hive.ql.parse.HiveSemanticAnalyzerHookContext; +import org.apache.hadoop.hive.ql.parse.HiveSemanticAnalyzerHookContextImpl; +import org.apache.hadoop.hive.ql.parse.ParseException; +import org.apache.hadoop.hive.ql.parse.ParseUtils; +import org.apache.hadoop.hive.ql.parse.SemanticAnalyzerFactory; +import org.apache.hadoop.hive.ql.plan.HiveOperation; +import org.apache.hadoop.hive.ql.plan.TableDesc; +import org.apache.hadoop.hive.ql.processors.CommandProcessorException; +import 
org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizer; +import org.apache.hadoop.hive.ql.session.SessionState; +import org.apache.hadoop.hive.ql.session.SessionState.LogHelper; +import org.apache.hadoop.util.StringUtils; +import org.apache.thrift.TException; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import com.google.common.collect.ImmutableMap; + +/** + * The compiler compiles the command, by creating a QueryPlan from a String command. + * Also opens a transaction if necessary. + */ +public class Compiler { + private static final String CLASS_NAME = Driver.class.getName(); + private static final Logger LOG = LoggerFactory.getLogger(CLASS_NAME); + private static final LogHelper CONSOLE = new LogHelper(LOG); + + private final Context context; + private final DriverContext driverContext; + private final DriverState driverState; + + private ASTNode tree; + + public Compiler(Context context, DriverContext driverContext, Driv
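HIVE-22526 pulls the compile phase out of the monolithic Driver into a dedicated Compiler that, per its javadoc above, "compiles the command, by creating a QueryPlan from a String command." A minimal, hypothetical sketch of that extraction (the real class takes Context, DriverContext, and DriverState and runs parsing plus semantic analysis; everything below is simplified for illustration):

```java
// Simplified, hypothetical illustration of the Driver/Compiler split:
// the Driver delegates parsing and plan creation to a Compiler object
// instead of doing everything inline.
class QueryPlan {
    final String query;
    QueryPlan(String query) { this.query = query; }
}

class Compiler {
    // In Hive this would parse to an ASTNode and run a semantic analyzer;
    // here we only validate and wrap the command.
    QueryPlan compile(String command) {
        if (command == null || command.trim().isEmpty()) {
            throw new IllegalArgumentException("empty command");
        }
        return new QueryPlan(command.trim());
    }
}

class Driver {
    private final Compiler compiler = new Compiler();

    QueryPlan compileAndRespond(String command) {
        // Compilation concerns now live in Compiler, shrinking Driver.
        return compiler.compile(command);
    }
}

public class CompilerDemo {
    public static void main(String[] args) {
        QueryPlan plan = new Driver().compileAndRespond("SELECT 1");
        System.out.println(plan.query); // prints "SELECT 1"
    }
}
```

The diffstat reflects exactly this shape of change: Driver.java shrinks by hundreds of lines while the new Compiler.java absorbs them.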
[hive] branch master updated: HIVE-22488 Break up DDLSemanticAnalyzer - extract Table creation analyzers (Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 8d31922 HIVE-22488 Break up DDLSemanticAnalyzer - extract Table creation analyzers (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) 8d31922 is described below commit 8d31922a1bf96aaa9f1139ec0ddd43b6c4728e6c Author: miklosgergely AuthorDate: Wed Nov 13 11:13:11 2019 +0100 HIVE-22488 Break up DDLSemanticAnalyzer - extract Table creation analyzers (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) --- .../cli/SemanticAnalysis/CreateTableHook.java | 2 +- .../ql/metadata/DummySemanticAnalyzerHook.java | 2 +- .../ql/metadata/DummySemanticAnalyzerHook1.java| 2 +- .../{creation => create}/CreateTableDesc.java | 2 +- .../{creation => create}/CreateTableOperation.java | 9 ++-- .../like}/CreateTableLikeDesc.java | 2 +- .../like}/CreateTableLikeOperation.java| 3 +- .../{creation => create/like}/package-info.java| 4 +- .../table/{creation => create}/package-info.java | 4 +- .../table/create/show/ShowCreateTableAnalyzer.java | 57 .../show}/ShowCreateTableDesc.java | 2 +- .../show}/ShowCreateTableOperation.java| 3 +- .../{creation => create/show}/package-info.java| 4 +- .../hive/ql/ddl/table/drop/DropTableAnalyzer.java | 62 ++ .../table/{creation => drop}/DropTableDesc.java| 2 +- .../{creation => drop}/DropTableOperation.java | 2 +- .../ddl/table/{creation => drop}/package-info.java | 4 +- .../exec/repl/bootstrap/load/table/LoadTable.java | 2 +- .../org/apache/hadoop/hive/ql/io/AcidUtils.java| 2 +- .../hive/ql/parse/AcidExportSemanticAnalyzer.java | 4 +- .../hadoop/hive/ql/parse/DDLSemanticAnalyzer.java | 37 - .../hive/ql/parse/ImportSemanticAnalyzer.java | 2 +- .../apache/hadoop/hive/ql/parse/ParseContext.java | 2 +- .../java/org/apache/hadoop/hive/ql/parse/QB.java | 2 +- .../hadoop/hive/ql/parse/SemanticAnalyzer.java | 4 +- 
.../hive/ql/parse/SemanticAnalyzerFactory.java | 2 - .../apache/hadoop/hive/ql/parse/TaskCompiler.java | 2 +- .../parse/repl/load/message/DropTableHandler.java | 2 +- .../hadoop/hive/ql/plan/ImportTableDesc.java | 2 +- .../apache/hadoop/hive/ql/plan/LoadFileDesc.java | 2 +- .../org/apache/hadoop/hive/ql/plan/PlanUtils.java | 2 +- .../ql/txn/compactor/MmMajorQueryCompactor.java| 2 +- .../hadoop/hive/ql/parse/TestHiveDecimalParse.java | 2 +- 33 files changed, 159 insertions(+), 78 deletions(-) diff --git a/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/CreateTableHook.java b/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/CreateTableHook.java index 98a44b8..084bbfe 100644 --- a/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/CreateTableHook.java +++ b/hcatalog/core/src/main/java/org/apache/hive/hcatalog/cli/SemanticAnalysis/CreateTableHook.java @@ -28,7 +28,7 @@ import org.apache.hadoop.fs.Path; import org.apache.hadoop.hive.metastore.api.FieldSchema; import org.apache.hadoop.hive.ql.ddl.DDLDesc; import org.apache.hadoop.hive.ql.ddl.DDLTask; -import org.apache.hadoop.hive.ql.ddl.table.creation.CreateTableDesc; +import org.apache.hadoop.hive.ql.ddl.table.create.CreateTableDesc; import org.apache.hadoop.hive.ql.exec.Task; import org.apache.hadoop.hive.ql.metadata.Hive; import org.apache.hadoop.hive.ql.metadata.HiveException; diff --git a/itests/util/src/main/java/org/apache/hadoop/hive/ql/metadata/DummySemanticAnalyzerHook.java b/itests/util/src/main/java/org/apache/hadoop/hive/ql/metadata/DummySemanticAnalyzerHook.java index 8ccbf97..06f4a44 100644 --- a/itests/util/src/main/java/org/apache/hadoop/hive/ql/metadata/DummySemanticAnalyzerHook.java +++ b/itests/util/src/main/java/org/apache/hadoop/hive/ql/metadata/DummySemanticAnalyzerHook.java @@ -24,7 +24,7 @@ import java.util.List; import java.util.Map; import org.apache.hadoop.hive.ql.ddl.DDLTask; -import 
org.apache.hadoop.hive.ql.ddl.table.creation.CreateTableDesc; +import org.apache.hadoop.hive.ql.ddl.table.create.CreateTableDesc; import org.apache.hadoop.hive.ql.exec.Task; import org.apache.hadoop.hive.ql.parse.ASTNode; import org.apache.hadoop.hive.ql.parse.AbstractSemanticAnalyzerHook; diff --git a/itests/util/src/main/java/org/apache/hadoop/hive/ql/metadata/DummySemanticAnalyzerHook1.java b/itests/util/src/main/java/org/apache/hadoop/hive/ql/metadata/DummySemanticAnalyzerHook1.java index 00e7582..e2a0eaea 100644 --- a/itests/util/src/main/java/org/apache/hadoop/hive/ql/metadata/DummySemanticAnalyzerHook1.java +++ b/itests/util/src/main/java/org/apache/hadoop/hive/ql/metadata/DummySemanticAnalyzerHook1.java
[hive] branch master updated (df8e185 -> 13fc651)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git. from df8e185 HIVE-22513: Constant propagation of casted column in filter ops can cause incorrect results (Adam Szita, reviewed by Zoltan Haindrich, Peter Vary) add 13fc651 HIVE-22369 Handle HiveTableFunctionScan at return path (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) No new revisions were added by this update. Summary of changes: .../org/apache/hive/jdbc/BaseJdbcWithMiniLlap.java | 38 + .../apache/hive/jdbc/TestNewGetSplitsFormat.java | 7 +- .../jdbc/TestNewGetSplitsFormatReturnPath.java | 25 +++--- .../hive/ql/optimizer/calcite/HiveCalciteUtil.java | 6 +- .../calcite/translator/HiveOpConverter.java| 96 ++ 5 files changed, 114 insertions(+), 58 deletions(-) copy cli/src/test/org/apache/hadoop/hive/cli/TestCliSessionState.java => itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestNewGetSplitsFormatReturnPath.java (60%)
[hive] branch master updated (c032cbe -> 44697f0)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git. from c032cbe HIVE-22497 : Remove default value for Capabilities from HiveConf. (Naveen Gangam, reviewed by Sam An) add 44697f0 HIVE-21198 Introduce a database object reference class (David Lavati, reviewed by Zoltan Haindrich) No new revisions were added by this update. Summary of changes: .../hive/accumulo/AccumuloStorageHandler.java | 3 +- .../test/results/positive/accumulo_queries.q.out | 4 +- .../accumulo_single_sourced_multi_insert.q.out | 2 +- beeline/src/java/org/apache/hive/beeline/Rows.java | 2 +- .../test/results/clientnegative/serde_regex.q.out | 2 +- .../results/clientpositive/fileformat_base64.q.out | 2 +- .../test/results/clientpositive/serde_regex.q.out | 2 +- .../src/test/results/positive/hbase_ddl.q.out | 2 +- .../src/test/results/positive/hbase_queries.q.out | 4 +- .../hbase_single_sourced_multi_insert.q.out| 2 +- .../src/test/results/positive/hbasestats.q.out | 2 +- .../cli/SemanticAnalysis/CreateTableHook.java | 3 +- .../cli/SemanticAnalysis/HCatSemanticAnalyzer.java | 4 +- .../hive/hcatalog/cli/TestSemanticAnalysis.java| 2 +- .../hive/hcatalog/api/HCatAddPartitionDesc.java| 2 +- .../hive/hcatalog/api/HCatCreateTableDesc.java | 2 +- .../hadoop/hive/metastore/HiveMetaStoreUtils.java | 2 +- ql/src/java/org/apache/hadoop/hive/ql/Driver.java | 9 +- .../hive/ql/ddl/table/AbstractAlterTableDesc.java | 14 +- .../ql/ddl/table/AbstractAlterTableOperation.java | 8 +- .../AbstractAlterTableWithConstraintsDesc.java | 3 +- .../ddl/table/column/AlterTableAddColumnsDesc.java | 3 +- .../table/column/AlterTableChangeColumnDesc.java | 3 +- .../column/AlterTableChangeColumnOperation.java| 2 +- .../table/column/AlterTableReplaceColumnsDesc.java | 3 +- .../column/AlterTableReplaceColumnsOperation.java | 6 +- .../table/column/AlterTableUpdateColumnsDesc.java | 3 +- 
.../constaint/AlterTableAddConstraintDesc.java | 3 +- .../AlterTableAddConstraintOperation.java | 4 +- .../constaint/AlterTableDropConstraintDesc.java| 16 +- .../AlterTableDropConstraintOperation.java | 9 +- .../ql/ddl/table/creation/CreateTableDesc.java | 35 +- .../ddl/table/creation/CreateTableOperation.java | 2 +- .../hive/ql/ddl/table/info/DescTableDesc.java | 11 +- .../hive/ql/ddl/table/info/DescTableOperation.java | 21 +- .../ql/ddl/table/info/ShowTablePropertiesDesc.java | 7 +- .../ql/ddl/table/misc/AlterTableRenameDesc.java| 3 +- .../ddl/table/misc/AlterTableRenameOperation.java | 6 +- .../ql/ddl/table/misc/AlterTableSetOwnerDesc.java | 3 +- .../table/misc/AlterTableSetPropertiesDesc.java| 3 +- .../table/misc/AlterTableUnsetPropertiesDesc.java | 3 +- .../hive/ql/ddl/table/misc/TruncateTableDesc.java | 14 +- .../partition/AlterTableDropPartitionDesc.java | 7 +- .../partition/AlterTableRenamePartitionDesc.java | 11 +- .../table/storage/AlterTableClusteredByDesc.java | 3 +- .../ddl/table/storage/AlterTableCompactDesc.java | 5 +- .../table/storage/AlterTableConcatenateDesc.java | 5 +- .../table/storage/AlterTableIntoBucketsDesc.java | 3 +- .../table/storage/AlterTableNotClusteredDesc.java | 3 +- .../ddl/table/storage/AlterTableNotSkewedDesc.java | 3 +- .../ddl/table/storage/AlterTableNotSortedDesc.java | 3 +- .../table/storage/AlterTableSetFileFormatDesc.java | 3 +- .../storage/AlterTableSetFileFormatOperation.java | 2 +- .../table/storage/AlterTableSetLocationDesc.java | 3 +- .../ddl/table/storage/AlterTableSetSerdeDesc.java | 3 +- .../table/storage/AlterTableSetSerdeOperation.java | 2 +- .../table/storage/AlterTableSetSerdePropsDesc.java | 3 +- .../storage/AlterTableSetSkewedLocationDesc.java | 3 +- .../ddl/table/storage/AlterTableSkewedByDesc.java | 5 +- .../AlterMaterializedViewRebuildAnalyzer.java | 28 +- .../AlterMaterializedViewRewriteAnalyzer.java | 15 +- .../org/apache/hadoop/hive/ql/exec/Utilities.java | 80 +-- 
.../apache/hadoop/hive/ql/exec/mr/ExecDriver.java | 2 +- .../repl/bootstrap/load/table/LoadPartitions.java | 3 +- .../incremental/IncrementalLoadTasksBuilder.java | 6 +- .../hadoop/hive/ql/exec/repl/util/ReplUtils.java | 6 +- .../org/apache/hadoop/hive/ql/io/AcidUtils.java| 2 +- .../org/apache/hadoop/hive/ql/metadata/Hive.java | 19 +- .../hive/ql/parse/AcidExportSemanticAnalyzer.java | 7 +- .../hadoop/hive/ql/parse/BaseSemanticAnalyzer.java | 267 +- .../hadoop/hive/ql/parse/DDLSemanticAnalyzer.java | 570 ++--- .../hive/ql/parse/ExportSemanticAnalyzer.java | 4 +- .../apache/hadoop/hive/ql/parse/HiveTableName.java | 142 + .../hive/ql/parse
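HIVE-21198's new HiveTableName centralizes database-object references that were previously passed around as raw "db.table" strings throughout the analyzers. A hedged sketch of what such a value class looks like (simplified and hypothetical; not the actual Hive signature):

```java
import java.util.Objects;

// Hypothetical, simplified version of a database object reference:
// an immutable (db, table) pair replacing ad-hoc "db.table" string handling.
final class TableRef {
    private final String db;
    private final String table;

    TableRef(String db, String table) {
        this.db = Objects.requireNonNull(db);
        this.table = Objects.requireNonNull(table);
    }

    // Parse "db.table", defaulting the database when it is absent.
    static TableRef fromString(String name, String defaultDb) {
        int dot = name.indexOf('.');
        return dot < 0
            ? new TableRef(defaultDb, name)
            : new TableRef(name.substring(0, dot), name.substring(dot + 1));
    }

    String getDb() { return db; }
    String getTable() { return table; }

    @Override public String toString() { return db + "." + table; }
    @Override public boolean equals(Object o) {
        return o instanceof TableRef
            && ((TableRef) o).db.equals(db) && ((TableRef) o).table.equals(table);
    }
    @Override public int hashCode() { return Objects.hash(db, table); }
}

public class TableRefDemo {
    public static void main(String[] args) {
        System.out.println(TableRef.fromString("sales.orders", "default")); // sales.orders
        System.out.println(TableRef.fromString("orders", "default"));       // default.orders
    }
}
```

Centralizing the parsing in one class is what lets a change like this touch so many files at once, as the long diffstat above shows: each call site swaps string splitting for the shared reference type.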
[hive] branch master updated (90fa906 -> 83222b0)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git. from 90fa906 HIVE-20256 Remove unused classes from Hive QL (Miklos Gergely, reviewed by David Mollitor) add 83222b0 HIVE-22347 Break up DDLSemanticAnalyzer - extract Other analyzers (Miklos Gergely, reviewed by Jesus Camacho Rodriguez) No new revisions were added by this update. Summary of changes: .../hive/ql/ddl/misc/conf/ShowConfAnalyzer.java| 52 ++ .../hive/ql/ddl/misc/{ => conf}/ShowConfDesc.java | 2 +- .../ql/ddl/misc/{ => conf}/ShowConfOperation.java | 2 +- .../package-info.java} | 26 +-- .../ReplRemoveFirstIncLoadPendFlagDesc.java| 2 +- .../ReplRemoveFirstIncLoadPendFlagOperation.java | 2 +- .../package-info.java} | 26 +-- .../ddl/misc/{ => hooks}/InsertCommitHookDesc.java | 2 +- .../{ => hooks}/InsertCommitHookOperation.java | 2 +- .../package-info.java} | 26 +-- .../ddl/misc/metadata/CacheMetadataAnalyzer.java | 64 +++ .../ddl/misc/{ => metadata}/CacheMetadataDesc.java | 2 +- .../{ => metadata}/CacheMetadataOperation.java | 2 +- .../package-info.java} | 26 +-- .../hadoop/hive/ql/ddl/misc/msck/MsckAnalyzer.java | 83 + .../hive/ql/ddl/misc/{ => msck}/MsckDesc.java | 2 +- .../hive/ql/ddl/misc/{ => msck}/MsckOperation.java | 2 +- .../package-info.java} | 26 +-- .../incremental/IncrementalLoadTasksBuilder.java | 2 +- .../hadoop/hive/ql/parse/AnalyzeCommandUtils.java | 11 +- .../hadoop/hive/ql/parse/BaseSemanticAnalyzer.java | 53 ++ .../hadoop/hive/ql/parse/DDLSemanticAnalyzer.java | 205 ++--- .../hadoop/hive/ql/parse/SemanticAnalyzer.java | 2 +- .../hive/ql/parse/SemanticAnalyzerFactory.java | 1 - 24 files changed, 294 insertions(+), 329 deletions(-) create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/conf/ShowConfAnalyzer.java rename ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{ => conf}/ShowConfDesc.java (97%) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{ => 
conf}/ShowConfOperation.java (97%) copy ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{CacheMetadataOperation.java => conf/package-info.java} (52%) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{ => flags}/ReplRemoveFirstIncLoadPendFlagDesc.java (97%) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{ => flags}/ReplRemoveFirstIncLoadPendFlagOperation.java (97%) copy ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{CacheMetadataOperation.java => flags/package-info.java} (52%) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{ => hooks}/InsertCommitHookDesc.java (97%) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{ => hooks}/InsertCommitHookOperation.java (97%) copy ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{CacheMetadataOperation.java => hooks/package-info.java} (52%) create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/metadata/CacheMetadataAnalyzer.java rename ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{ => metadata}/CacheMetadataDesc.java (97%) copy ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{ => metadata}/CacheMetadataOperation.java (96%) copy ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{CacheMetadataOperation.java => metadata/package-info.java} (52%) create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/msck/MsckAnalyzer.java rename ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{ => msck}/MsckDesc.java (98%) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{ => msck}/MsckOperation.java (98%) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/misc/{CacheMetadataOperation.java => msck/package-info.java} (52%)
[hive] branch branch-3.0 updated: HIVE-20256 Remove unused classes from Hive QL (Miklos Gergely, reviewed by David Mollitor)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch branch-3.0 in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/branch-3.0 by this push: new 0ecbd12 HIVE-20256 Remove unused classes from Hive QL (Miklos Gergely, reviewed by David Mollitor) 0ecbd12 is described below commit 0ecbd12df518bd9666a26b877d02a26b3e5bcbdc Author: miklosgergely AuthorDate: Wed Nov 13 09:08:08 2019 +0100 HIVE-20256 Remove unused classes from Hive QL (Miklos Gergely, reviewed by David Mollitor) --- ql/src/java/org/apache/hadoop/hive/ql/exec/NumericUDAF.java | 1 + .../java/org/apache/hadoop/hive/ql/exec/vector/AggregateDefinition.java | 1 + .../apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java| 1 + .../ql/exec/vector/mapjoin/fast/VectorMapJoinFastBytesHashUtil.java | 1 + .../hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastHashMap.java | 1 + ql/src/java/org/apache/hadoop/hive/ql/metadata/RandomDimension.java | 1 + ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/ExprPrunerInfo.java | 1 + ql/src/java/org/apache/hadoop/hive/ql/parse/InputSignature.java | 2 +- ql/src/java/org/apache/hadoop/hive/ql/parse/PrintOpTreeProcessor.java | 1 + ql/src/java/org/apache/hadoop/hive/ql/parse/TezWalker.java | 1 + .../hadoop/hive/ql/parse/repl/dump/io/VersionCompatibleSerializer.java | 1 + ql/src/java/org/apache/hadoop/hive/ql/plan/ExplosionDesc.java | 1 + ql/src/java/org/apache/hadoop/hive/ql/plan/SchemaDesc.java | 1 + 13 files changed, 13 insertions(+), 1 deletion(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericUDAF.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericUDAF.java index 0e96d07..a6f1056 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericUDAF.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericUDAF.java @@ -22,6 +22,7 @@ package org.apache.hadoop.hive.ql.exec; * Base class of numeric UDAFs like sum and avg which need a * 
NumericUDAFEvaluatorResolver. */ +@Deprecated public class NumericUDAF extends UDAF { /** diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/AggregateDefinition.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/AggregateDefinition.java index 3772979..ddce3f7 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/AggregateDefinition.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/AggregateDefinition.java @@ -22,6 +22,7 @@ import org.apache.hadoop.hive.ql.exec.vector.expressions.aggregates.VectorAggreg import org.apache.hadoop.hive.ql.plan.GroupByDesc; import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator; +@Deprecated class AggregateDefinition { private String name; diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java index c555464..736b754 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java @@ -56,6 +56,7 @@ import org.apache.hive.common.util.DateUtils; * This class is used as a static factory for VectorColumnAssign. * Is capable of building assigners from expression nodes or from object inspectors. 
*/ +@Deprecated public class VectorColumnAssignFactory { private static abstract class VectorColumnAssignVectorBase diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastBytesHashUtil.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastBytesHashUtil.java index 7a3b3e2..80b0bf3 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastBytesHashUtil.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastBytesHashUtil.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hive.ql.exec.vector.mapjoin.fast; import org.apache.hadoop.hive.serde2.WriteBuffers; +@Deprecated public class VectorMapJoinFastBytesHashUtil { public static String displayBytes(byte[] bytes, int start, int length) { diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastHashMap.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastHashMap.java index 3e91667..87e207b 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastHashMap.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastHashMap.java @@ -21,6 +21,7 @@ package org.apache.hadoop.hive.ql.exec.vector.mapjoin.fast; import org.apache.hadoop.hive.ql.exec.vector.mapjoin.hashtable.VectorMapJoinHashMap; import
[hive] branch branch-3 updated: HIVE-20256 Remove unused classes from Hive QL (Miklos Gergely, reviewed by David Mollitor)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch branch-3 in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/branch-3 by this push: new f55ee60 HIVE-20256 Remove unused classes from Hive QL (Miklos Gergely, reviewed by David Mollitor) f55ee60 is described below commit f55ee60a3e87bf26c8fcd2aa7dc3869a234e7720 Author: miklosgergely AuthorDate: Wed Nov 13 09:08:08 2019 +0100 HIVE-20256 Remove unused classes from Hive QL (Miklos Gergely, reviewed by David Mollitor) --- ql/src/java/org/apache/hadoop/hive/ql/exec/NumericUDAF.java | 1 + .../java/org/apache/hadoop/hive/ql/exec/vector/AggregateDefinition.java | 1 + .../apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java| 1 + .../ql/exec/vector/mapjoin/fast/VectorMapJoinFastBytesHashUtil.java | 1 + .../hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastHashMap.java | 1 + ql/src/java/org/apache/hadoop/hive/ql/metadata/RandomDimension.java | 1 + ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/ExprPrunerInfo.java | 1 + ql/src/java/org/apache/hadoop/hive/ql/parse/InputSignature.java | 2 +- ql/src/java/org/apache/hadoop/hive/ql/parse/PrintOpTreeProcessor.java | 1 + ql/src/java/org/apache/hadoop/hive/ql/parse/TezWalker.java | 1 + .../hadoop/hive/ql/parse/repl/dump/io/VersionCompatibleSerializer.java | 1 + ql/src/java/org/apache/hadoop/hive/ql/plan/ExplosionDesc.java | 1 + ql/src/java/org/apache/hadoop/hive/ql/plan/SchemaDesc.java | 1 + 13 files changed, 13 insertions(+), 1 deletion(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericUDAF.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericUDAF.java index 0e96d07..a6f1056 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericUDAF.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericUDAF.java @@ -22,6 +22,7 @@ package org.apache.hadoop.hive.ql.exec; * Base class of numeric UDAFs like sum and avg which need a * 
NumericUDAFEvaluatorResolver. */ +@Deprecated public class NumericUDAF extends UDAF { /** diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/AggregateDefinition.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/AggregateDefinition.java index 3772979..ddce3f7 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/AggregateDefinition.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/AggregateDefinition.java @@ -22,6 +22,7 @@ import org.apache.hadoop.hive.ql.exec.vector.expressions.aggregates.VectorAggreg import org.apache.hadoop.hive.ql.plan.GroupByDesc; import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator; +@Deprecated class AggregateDefinition { private String name; diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java index 39a124f..227c25a 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorColumnAssignFactory.java @@ -55,6 +55,7 @@ import org.apache.hadoop.io.Writable; * This class is used as a static factory for VectorColumnAssign. * Is capable of building assigners from expression nodes or from object inspectors. 
*/ +@Deprecated public class VectorColumnAssignFactory { private static abstract class VectorColumnAssignVectorBase diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastBytesHashUtil.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastBytesHashUtil.java index 7a3b3e2..80b0bf3 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastBytesHashUtil.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastBytesHashUtil.java @@ -20,6 +20,7 @@ package org.apache.hadoop.hive.ql.exec.vector.mapjoin.fast; import org.apache.hadoop.hive.serde2.WriteBuffers; +@Deprecated public class VectorMapJoinFastBytesHashUtil { public static String displayBytes(byte[] bytes, int start, int length) { diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastHashMap.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastHashMap.java index 3e91667..87e207b 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastHashMap.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/mapjoin/fast/VectorMapJoinFastHashMap.java @@ -21,6 +21,7 @@ package org.apache.hadoop.hive.ql.exec.vector.mapjoin.fast; import org.apache.hadoop.hive.ql.exec.vector.mapjoin.hashtable.VectorMapJoinHashMap; import org.apache.hadoop.hive.ql.exec.vector.mapjoin.hashtable.VectorMapJoinHashMapResult
[hive] branch master updated: HIVE-20256 Remove unused classes from Hive QL (Miklos Gergely, reviewed by David Mollitor)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new 90fa906  HIVE-20256 Remove unused classes from Hive QL (Miklos Gergely, reviewed by David Mollitor)

90fa906 is described below

commit 90fa9064f2c6907fbe6237cb46d5937eebd8ea31
Author: miklosgergely
AuthorDate: Fri Mar 15 14:50:19 2019 +0100

    HIVE-20256 Remove unused classes from Hive QL (Miklos Gergely, reviewed by David Mollitor)
---
 .../apache/hadoop/hive/ql/exec/NumericUDAF.java    |  33 --
 .../hive/ql/exec/vector/AggregateDefinition.java   |  52 --
 .../ql/exec/vector/VectorColumnAssignFactory.java  | 608 -
 .../fast/VectorMapJoinFastBytesHashUtil.java       |  37 --
 .../mapjoin/fast/VectorMapJoinFastHashMap.java     |  40 --
 .../hadoop/hive/ql/metadata/RandomDimension.java   |  41 --
 .../hive/ql/optimizer/ppr/ExprPrunerInfo.java      |  41 --
 .../hadoop/hive/ql/parse/InputSignature.java       | 119
 .../hadoop/hive/ql/parse/PrintOpTreeProcessor.java |  95
 .../org/apache/hadoop/hive/ql/parse/TezWalker.java |  66 ---
 .../repl/dump/io/VersionCompatibleSerializer.java  |  37 --
 .../apache/hadoop/hive/ql/plan/ExplosionDesc.java  |  58 --
 .../org/apache/hadoop/hive/ql/plan/SchemaDesc.java |  46 --
 13 files changed, 1273 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericUDAF.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericUDAF.java
deleted file mode 100644
index 0e96d07..000
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/NumericUDAF.java
+++ /dev/null
@@ -1,33 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hive.ql.exec;
-
-/**
- * Base class of numeric UDAFs like sum and avg which need a
- * NumericUDAFEvaluatorResolver.
- */
-public class NumericUDAF extends UDAF {
-
-  /**
-   * Constructor.
-   */
-  public NumericUDAF() {
-    setResolver(new NumericUDAFEvaluatorResolver(this.getClass()));
-  }
-}
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/AggregateDefinition.java b/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/AggregateDefinition.java
deleted file mode 100644
index 3772979..000
--- a/ql/src/java/org/apache/hadoop/hive/ql/exec/vector/AggregateDefinition.java
+++ /dev/null
@@ -1,52 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.hadoop.hive.ql.exec.vector;
-
-import org.apache.hadoop.hive.ql.exec.vector.expressions.aggregates.VectorAggregateExpression;
-import org.apache.hadoop.hive.ql.plan.GroupByDesc;
-import org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator;
-
-class AggregateDefinition {
-
-  private String name;
-  private VectorExpressionDescriptor.ArgumentType type;
-  private GenericUDAFEvaluator.Mode udafEvaluatorMode;
-  private Class aggClass;
-
-  AggregateDefinition(String name, VectorExpressionDescriptor.ArgumentType type,
-      GenericUDAFEvaluator.Mode udafEvaluatorMode, Class aggClass) {
-    this.name = name;
-    this.type = type;
-    this.udafEvaluatorMode = udafEvaluatorMode;
-    this.aggClass = aggClass;
-  }
-
-  String getName() {
-    return name;
-  }
-  VectorExpressionDescriptor.ArgumentType getType() {
-    return type;
-  }
-  GenericUDAFEvaluator.Mode getUdafEvaluatorM
[hive] branch master updated (bc1bf43 -> ef591e8)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git.

 from bc1bf43  HIVE-22398: Remove legacy code that can cause issue with new Yarn releases (Slim Bouguerra via Ashutosh Chauhan)
 add ef591e8  HIVE-22378 Remove return path related code duplications (Miklos Gergely, reviewed by Zoltan Haindrich)

No new revisions were added by this update.

Summary of changes:
 .../calcite/translator/HiveOpConverter.java | 23 +-
 .../hadoop/hive/ql/parse/CalcitePlanner.java | 27 --
 2 files changed, 6 insertions(+), 44 deletions(-)
[hive] branch master updated: HIVE-22340 Prevent shaded imports (Miklos Gergely, reviewed by Zoltan Haindrich)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 5ce0b40 HIVE-22340 Prevent shaded imports (Miklos Gergely, reviewed by Zoltan Haindrich) 5ce0b40 is described below commit 5ce0b4008ed1d6b7bdce0d037cc6cf7010bd948b Author: miklosgergely AuthorDate: Tue Oct 15 01:50:01 2019 +0200 HIVE-22340 Prevent shaded imports (Miklos Gergely, reviewed by Zoltan Haindrich) --- .../hive/ql/txn/compactor/TestCompactor.java | 3 +- .../org/apache/hadoop/hive/kudu/KuduTestSetup.java | 3 +- .../hadoop/hive/kudu/TestKuduInputFormat.java | 2 +- .../hadoop/hive/kudu/TestKuduOutputFormat.java | 3 +- .../hadoop/hive/kudu/TestKuduPredicateHandler.java | 2 +- .../org/apache/hadoop/hive/kudu/TestKuduSerDe.java | 3 +- .../hive/llap/metrics/ReadWriteLockMetrics.java| 3 +- pom.xml| 27 .../ql/optimizer/signature/RelTreeSignature.java | 6 +--- .../org/apache/hadoop/hive/ql/TestTxnCommands.java | 3 +- .../hadoop/hive/ql/TestTxnCommandsForMmTable.java | 36 -- .../hive/ql/stats/TestStatsUpdaterThread.java | 3 +- .../hadoop/hive/ql/stats/TestStatsUtils.java | 3 +- .../generic/TestGenericUDAFBinarySetFunctions.java | 3 +- .../hadoop/hive/metastore/utils/FileUtils.java | 3 +- 15 files changed, 48 insertions(+), 55 deletions(-) diff --git a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCompactor.java b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCompactor.java index 5b8fb4b..f2ea30b 100644 --- a/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCompactor.java +++ b/itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCompactor.java @@ -38,7 +38,6 @@ import java.util.TreeSet; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; -import 
org.apache.curator.shaded.com.google.common.collect.Lists; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; @@ -100,6 +99,8 @@ import org.junit.runners.Parameterized; import org.slf4j.Logger; import org.slf4j.LoggerFactory; +import com.google.common.collect.Lists; + public class TestCompactor { private static final AtomicInteger salt = new AtomicInteger(new Random().nextInt()); private static final Logger LOG = LoggerFactory.getLogger(TestCompactor.class); diff --git a/itests/util/src/main/java/org/apache/hadoop/hive/kudu/KuduTestSetup.java b/itests/util/src/main/java/org/apache/hadoop/hive/kudu/KuduTestSetup.java index 2a0f04c..85ab1eb 100644 --- a/itests/util/src/main/java/org/apache/hadoop/hive/kudu/KuduTestSetup.java +++ b/itests/util/src/main/java/org/apache/hadoop/hive/kudu/KuduTestSetup.java @@ -26,9 +26,10 @@ import org.apache.kudu.Type; import org.apache.kudu.client.CreateTableOptions; import org.apache.kudu.client.KuduClient; import org.apache.kudu.client.KuduException; -import org.apache.kudu.shaded.com.google.common.collect.ImmutableList; import org.apache.kudu.test.cluster.MiniKuduCluster; +import com.google.common.collect.ImmutableList; + import java.io.File; import java.util.Arrays; diff --git a/kudu-handler/src/test/org/apache/hadoop/hive/kudu/TestKuduInputFormat.java b/kudu-handler/src/test/org/apache/hadoop/hive/kudu/TestKuduInputFormat.java index feb6f75..653cdc2 100644 --- a/kudu-handler/src/test/org/apache/hadoop/hive/kudu/TestKuduInputFormat.java +++ b/kudu-handler/src/test/org/apache/hadoop/hive/kudu/TestKuduInputFormat.java @@ -17,6 +17,7 @@ */ package org.apache.hadoop.hive.kudu; +import com.google.common.collect.ImmutableList; import com.google.common.collect.Lists; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.hive.kudu.KuduInputFormat.KuduInputSplit; @@ -42,7 +43,6 @@ import org.apache.kudu.client.KuduSession; import 
org.apache.kudu.client.KuduTable; import org.apache.kudu.client.PartialRow; import org.apache.kudu.client.RowResult; -import org.apache.kudu.shaded.com.google.common.collect.ImmutableList; import org.apache.kudu.test.KuduTestHarness; import org.junit.Before; import org.junit.Rule; diff --git a/kudu-handler/src/test/org/apache/hadoop/hive/kudu/TestKuduOutputFormat.java b/kudu-handler/src/test/org/apache/hadoop/hive/kudu/TestKuduOutputFormat.java index 8a1cf26..c208e38 100644 --- a/kudu-handler/src/test/org/apache/hadoop/hive/kudu/TestKuduOutputFormat.java +++ b/kudu-handler/src/test/org/apache/hadoop/hive/kudu/TestKuduOutputFormat.java @@ -28,12 +28,13 @@ import org.apache.kudu.client.KuduScanner; import org.apache.kudu.client.KuduTable; import org.apache.kudu.client.PartialRow; import
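The theme of HIVE-22340 is mechanical: each `org.apache.kudu.shaded.*` or `org.apache.curator.shaded.*` Guava import is swapped for the direct `com.google.common.*` one, and the 27-line pom.xml change presumably wires in a build check so shaded imports cannot creep back in. One common way to enforce such a ban is a Checkstyle `IllegalImport` rule; this is a hypothetical sketch, not Hive's actual build configuration:

```xml
<!-- Sketch only: fail the build when code imports classes from shaded packages.
     The package prefixes below are taken from the imports removed in this commit;
     the real list (and enforcement mechanism) lives in Hive's build config. -->
<module name="TreeWalker">
  <module name="IllegalImport">
    <property name="illegalPkgs"
              value="org.apache.kudu.shaded, org.apache.curator.shaded"/>
  </module>
</module>
```

The payoff of the direct imports is that test code compiles against one Guava, instead of whichever relocated copy a transitive dependency happens to ship.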
[hive] branch master updated (d85533e -> 7845b37)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git.

 from d85533e  HIVE-22323 ADDENDUM Fix Desc Table bugs (Miklos Gergely reviewed by Jesus Camacho Rodriguez)
 add 7845b37  HIVE-22323 ADDENDUM 2 Fix Desc Table bugs (Miklos Gergely reviewed by Jesus Camacho Rodriguez)

No new revisions were added by this update.

Summary of changes:
 .../clientpositive/partitioned_table_stats.q.out | 1092 ++--
 1 file changed, 546 insertions(+), 546 deletions(-)
[hive] branch master updated: HIVE-22323 ADDENDUM Fix Desc Table bugs (Miklos Gergely reviewed by Jesus Camacho Rodriguez)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new d85533e HIVE-22323 ADDENDUM Fix Desc Table bugs (Miklos Gergely reviewed by Jesus Camacho Rodriguez) d85533e is described below commit d85533e257004fe18b67fe39201a4f954ed3ef82 Author: miklosgergely AuthorDate: Tue Oct 15 17:47:04 2019 +0200 HIVE-22323 ADDENDUM Fix Desc Table bugs (Miklos Gergely reviewed by Jesus Camacho Rodriguez) --- .../results/clientpositive/beeline/desc_table_formatted.q.out | 8 ql/src/test/results/clientpositive/desc_table_formatted.q.out | 8 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/ql/src/test/results/clientpositive/beeline/desc_table_formatted.q.out b/ql/src/test/results/clientpositive/beeline/desc_table_formatted.q.out index 5bcb12c..e961744 100644 --- a/ql/src/test/results/clientpositive/beeline/desc_table_formatted.q.out +++ b/ql/src/test/results/clientpositive/beeline/desc_table_formatted.q.out @@ -165,7 +165,7 @@ POSTHOOK: type: DESCTABLE POSTHOOK: Input: default@datatype_stats_n0 col_name f data_type float -min45454.3984375 +min45454.30078125 max45454.3984375 num_nulls 1 distinct_count 2 @@ -184,7 +184,7 @@ POSTHOOK: type: DESCTABLE POSTHOOK: Input: default@datatype_stats_n0 col_name d data_type double -min454.6565 +min454.6564 max454.6565 num_nulls 1 distinct_count 2 @@ -564,14 +564,14 @@ PREHOOK: Input: default@datatype_stats_n0 POSTHOOK: query: DESC FORMATTED datatype_stats_n0 f POSTHOOK: type: DESCTABLE POSTHOOK: Input: default@datatype_stats_n0 -{"columns":[{"name":"f","type":"float","comment":"from deserializer","min":45454.3984375,"max":45454.3984375,"numNulls":1,"distinctCount":2}]} +{"columns":[{"name":"f","type":"float","comment":"from deserializer","min":45454.30078125,"max":45454.3984375,"numNulls":1,"distinctCount":2}]} PREHOOK: query: DESC FORMATTED 
datatype_stats_n0 d PREHOOK: type: DESCTABLE PREHOOK: Input: default@datatype_stats_n0 POSTHOOK: query: DESC FORMATTED datatype_stats_n0 d POSTHOOK: type: DESCTABLE POSTHOOK: Input: default@datatype_stats_n0 -{"columns":[{"name":"d","type":"double","comment":"from deserializer","min":454.6565,"max":454.6565,"numNulls":1,"distinctCount":2}]} +{"columns":[{"name":"d","type":"double","comment":"from deserializer","min":454.6564,"max":454.6565,"numNulls":1,"distinctCount":2}]} PREHOOK: query: DESC FORMATTED datatype_stats_n0 dem PREHOOK: type: DESCTABLE PREHOOK: Input: default@datatype_stats_n0 diff --git a/ql/src/test/results/clientpositive/desc_table_formatted.q.out b/ql/src/test/results/clientpositive/desc_table_formatted.q.out index 93b818f..0a5c363 100644 --- a/ql/src/test/results/clientpositive/desc_table_formatted.q.out +++ b/ql/src/test/results/clientpositive/desc_table_formatted.q.out @@ -165,7 +165,7 @@ POSTHOOK: type: DESCTABLE POSTHOOK: Input: default@datatype_stats_n0 col_name f data_type float -min45454.3984375 +min45454.30078125 max45454.3984375 num_nulls 1 distinct_count 2 @@ -184,7 +184,7 @@ POSTHOOK: type: DESCTABLE POSTHOOK: Input: default@datatype_stats_n0 col_name d data_type double -min454.6565 +min454.6564 max454.6565 num_nulls 1 distinct_count 2 @@ -564,14 +564,14 @@ PREHOOK: Input: default@datatype_stats_n0 POSTHOOK: query: DESC FORMATTED datatype_stats_n0 f POSTHOOK: type: DESCTABLE POSTHOOK: Input: default@datatype_stats_n0 -{"columns":[{"name":"f","type":"float","comment":"from deserializer","min":45454.3984375,"max":45454.3984375,"numNulls":1,"distinctCount":2}]} +{"columns":[{"name":"f","type":"float","comment":"from deserializer","min":45454.30078125,"max":45454.3984375,"numNulls":1,"distinctCount":2}]} PREHOOK: query: DESC FORMATTED datatype_stats_n0 d PREHOOK: type: DESCTABLE PREHOOK: Input: default@datatype_stats_n0 POSTHOOK: query: DESC FORMATTED datatype_stats_n0 d POSTHOOK: type: DESCTABLE POSTHOOK: Input: default@datatype_stats_n0 
-{"columns":[{"name":"d","type":"double","comment":"from deserializer","min":454.6565,"max":454.6565,"numNulls":1,"distinctCount":2}]} +{"columns":[{"name":"d","type":"double","comment":"from deserializer","min":454.6564,"max":454.6565,"numNulls":1,"distinctCount":2}]} PREHOOK: query: DESC FORMATTED datatype_stats_n0 dem PREHOOK: type: DESCTABLE PREHOOK: Input: default@datatype_stats_n0
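The corrected float stats in this q.out look uneven at first glance (min 45454.30078125 against max 45454.3984375). Assuming the test data was inserted as the decimals 45454.3 and 45454.4 (an inference from the output, not stated in the commit), these are exactly the nearest 32-bit float values to those decimals, widened back to double when the stats are printed. A standalone check:

```java
// Demonstrates why float column stats print "uneven" decimals: 45454.3 and
// 45454.4 are not exactly representable as 32-bit floats, so what is stored
// (and later widened to double for display) is the nearest representable value.
public class FloatStatsDemo {
    public static void main(String[] args) {
        System.out.println((double) (float) 45454.3); // 45454.30078125
        System.out.println((double) (float) 45454.4); // 45454.3984375
    }
}
```

The double column's 454.6565 vs. 454.6564 difference is a display-rounding matter rather than a representation one, which is why the addendum only had to touch the expected q.out values, not the stats code.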
[hive] branch master updated (6fa1e91 -> d6d4f2e)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git. from 6fa1e91 HIVE-22284: Improve LLAP CacheContentsTracker to collect and display correct statistics (Adam Szita, reviewed by Peter Vary) add d6d4f2e HIVE-22276 Break up DDLSemanticAnalyzer - extract View related analyzers (Miklos Gergely reviewed by Jesus Camacho Rodriguez) No new revisions were added by this update. Summary of changes: .../hive/ql/ddl/DDLSemanticAnalyzerFactory.java| 10 +- .../ql/ddl/view/{ => create}/CreateViewDesc.java | 2 +- .../ddl/view/{ => create}/CreateViewOperation.java | 2 +- .../hive/ql/ddl/view/create/package-info.java | 20 .../hive/ql/ddl/view/drop/DropViewAnalyzer.java| 59 .../hive/ql/ddl/view/{ => drop}/DropViewDesc.java | 2 +- .../ql/ddl/view/{ => drop}/DropViewOperation.java | 2 +- .../hadoop/hive/ql/ddl/view/drop/package-info.java | 20 .../AlterMaterializedViewRebuildAnalyzer.java} | 73 +++--- .../materialized/alter/rebuild/package-info.java | 20 .../AlterMaterializedViewRewriteAnalyzer.java | 107 + .../rewrite}/AlterMaterializedViewRewriteDesc.java | 2 +- .../AlterMaterializedViewRewriteOperation.java | 2 +- .../materialized/alter/rewrite/package-info.java | 20 .../drop/DropMaterializedViewAnalyzer.java | 59 .../drop}/DropMaterializedViewDesc.java| 2 +- .../drop}/DropMaterializedViewOperation.java | 2 +- .../ddl/view/materialized/drop/package-info.java | 20 .../update}/MaterializedViewUpdateDesc.java| 8 +- .../update}/MaterializedViewUpdateOperation.java | 2 +- .../ddl/view/materialized/update/package-info.java | 20 .../hadoop/hive/ql/parse/DDLSemanticAnalyzer.java | 106 .../org/apache/hadoop/hive/ql/parse/HiveParser.g | 35 --- .../apache/hadoop/hive/ql/parse/ParseContext.java | 4 +- .../java/org/apache/hadoop/hive/ql/parse/QB.java | 2 +- .../hadoop/hive/ql/parse/SemanticAnalyzer.java | 4 +- .../hive/ql/parse/SemanticAnalyzerFactory.java | 18 
.../apache/hadoop/hive/ql/parse/TaskCompiler.java | 6 +- .../hadoop/hive/ql/plan/ImportTableDesc.java | 2 +- .../apache/hadoop/hive/ql/plan/LoadFileDesc.java | 2 +- .../org/apache/hadoop/hive/ql/plan/PlanUtils.java | 2 +- .../test/results/clientnegative/masking_mv.q.out | 2 +- .../beeline/materialized_view_create_rewrite.q.out | 2 +- .../clientpositive/druid/druidmini_mv.q.out| 2 +- .../llap/materialized_view_cluster.q.out | 10 +- .../llap/materialized_view_create_rewrite.q.out| 2 +- .../llap/materialized_view_create_rewrite_3.q.out | 4 +- .../llap/materialized_view_create_rewrite_4.q.out | 10 +- .../llap/materialized_view_create_rewrite_5.q.out | 8 +- .../materialized_view_create_rewrite_dummy.q.out | 4 +- ...ialized_view_create_rewrite_rebuild_dummy.q.out | 4 +- ...erialized_view_create_rewrite_time_window.q.out | 4 +- .../llap/materialized_view_distribute_sort.q.out | 10 +- .../llap/materialized_view_partition_cluster.q.out | 10 +- .../llap/materialized_view_partitioned.q.out | 6 +- .../llap/materialized_view_partitioned_3.q.out | 2 +- .../test/results/clientpositive/masking_mv.q.out | 4 +- 47 files changed, 476 insertions(+), 243 deletions(-) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/view/{ => create}/CreateViewDesc.java (99%) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/view/{ => create}/CreateViewOperation.java (98%) create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/create/package-info.java create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/drop/DropViewAnalyzer.java rename ql/src/java/org/apache/hadoop/hive/ql/ddl/view/{ => drop}/DropViewDesc.java (97%) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/view/{ => drop}/DropViewOperation.java (97%) create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/drop/package-info.java rename ql/src/java/org/apache/hadoop/hive/ql/{parse/MaterializedViewRebuildSemanticAnalyzer.java => ddl/view/materialized/alter/rebuild/AlterMaterializedViewRebuildAnalyzer.java} (63%) 
create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/materialized/alter/rebuild/package-info.java create mode 100644 ql/src/java/org/apache/hadoop/hive/ql/ddl/view/materialized/alter/rewrite/AlterMaterializedViewRewriteAnalyzer.java rename ql/src/java/org/apache/hadoop/hive/ql/ddl/view/{ => materialized/alter/rewrite}/AlterMaterializedViewRewriteDesc.java (97%) rename ql/src/java/org/apache/hadoop/hive/ql/ddl/view/{ => materialized/alter/rewrite}/AlterMaterializedViewRewriteOperation.java (97%) create mode 100644 ql/src/java/org/apache/hadoop/hi
[hive] branch master updated (f16509a -> ee359ad)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git.

 from f16509a  HIVE-21344: CBO: Reduce compilation time in presence of materialized views (Jesus Camacho Rodriguez, reviewed by Vineet Garg)
 add ee359ad  HIVE-22328 Min value for column in stats is not set correctly for some data types in partitioned tables (Miklos Gergely reviewed by Jesus Camacho Rodriguez)

No new revisions were added by this update.

Summary of changes:
 .../clientpositive/partitioned_table_stats.q       |  66 ++
 .../clientpositive/partitioned_table_stats.q.out   | 918 +
 .../aggr/DateColumnStatsAggregator.java            |  13 +-
 .../aggr/DecimalColumnStatsAggregator.java         |  23 +-
 .../aggr/DoubleColumnStatsAggregator.java          |  10 +-
 .../aggr/LongColumnStatsAggregator.java            |   8 +-
 .../columnstats/merge/DateColumnStatsMerger.java   |   4 +-
 .../merge/DecimalColumnStatsMerger.java            |   8 +-
 .../columnstats/merge/DoubleColumnStatsMerger.java |   6 +-
 .../columnstats/merge/LongColumnStatsMerger.java   |   4 +-
 10 files changed, 1021 insertions(+), 39 deletions(-)
 create mode 100644 ql/src/test/queries/clientpositive/partitioned_table_stats.q
 create mode 100644 ql/src/test/results/clientpositive/partitioned_table_stats.q.out
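The symptom fixed by HIVE-22328 (and visible in the later HIVE-22248 q.out diffs, where `min` read 0 instead of the true minimum) is the classic per-partition aggregation bug: the running low value must be folded with min() and the high value with max(), and the accumulator must be seeded so that no real value loses the comparison. A minimal sketch of a correct fold; the names here are illustrative, not Hive's actual aggregator/merger API:

```java
import java.util.List;

// Illustrative sketch: folding per-partition column stats into table-level
// min/max. Seeding lo/hi with the identity elements (rather than 0) and using
// min()/max() consistently avoids the "min is always 0" symptom.
public class ColumnStatsFold {
    // Each entry is a per-partition pair: {lowValue, highValue}.
    static long[] fold(List<long[]> partitionStats) {
        long lo = Long.MAX_VALUE;
        long hi = Long.MIN_VALUE;
        for (long[] s : partitionStats) {
            lo = Math.min(lo, s[0]); // low value folds with min()
            hi = Math.max(hi, s[1]); // high value folds with max()
        }
        return new long[] {lo, hi};
    }

    public static void main(String[] args) {
        long[] r = fold(List.of(new long[] {3, 45}, new long[] {7, 456}));
        System.out.println(r[0] + " " + r[1]); // prints: 3 456
    }
}
```

Seeding with 0 instead of `Long.MAX_VALUE` makes `Math.min(0, anythingPositive)` return 0 forever, which matches the wrong expected values the commit corrects.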
[hive] branch master updated (8bcf7b9a -> 7ae6756)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/hive.git. from 8bcf7b9a HIVE-22305: Add the kudu-handler to the packaging module (Grant Henke, reviewed by Gopal Vijayaraghavan) add 7ae6756 HIVE-22235 CommandProcessorResponse should not be an exception (Miklos Gergely reviewed by Zoltan Haindrich) No new revisions were added by this update. Summary of changes: .../java/org/apache/hadoop/hive/cli/CliDriver.java | 145 ++- .../hadoop/hive/cli/TestCliDriverMethods.java | 17 +- .../java/org/apache/hive/hcatalog/cli/HCatCli.java | 25 +- .../org/apache/hive/hcatalog/cli/HCatDriver.java | 27 +- .../hive/hcatalog/cli/TestSemanticAnalysis.java| 144 ++- .../apache/hive/hcatalog/cli/TestUseDatabase.java | 29 +- .../hive/hcatalog/data/HCatDataCheckUtil.java | 14 +- .../mapreduce/TestHCatDynamicPartitioned.java | 26 +- .../hcatalog/mapreduce/TestHCatInputFormat.java| 10 +- .../mapreduce/TestHCatInputFormatMethods.java |6 +- .../hcatalog/mapreduce/TestHCatNonPartitioned.java | 13 +- .../hcatalog/mapreduce/TestHCatPartitioned.java|6 +- .../hcatalog/mapreduce/TestPassProperties.java |3 +- .../hive/hcatalog/pig/AbstractHCatLoaderTest.java | 15 +- .../hive/hcatalog/pig/AbstractHCatStorerTest.java | 25 +- .../apache/hive/hcatalog/pig/TestE2EScenarios.java |6 +- .../hcatalog/pig/TestHCatLoaderEncryption.java | 27 +- .../hive/hcatalog/pig/TestHCatLoaderStorer.java| 23 +- .../hive/hcatalog/pig/TestHCatStorerWrapper.java |3 +- .../hcatalog/listener/TestMsgBusConnection.java|5 +- .../hive/hcatalog/streaming/TestStreaming.java | 18 +- .../hcatalog/api/repl/commands/TestCommands.java | 59 +- .../hcatalog/hbase/TestPigHBaseStorageHandler.java | 49 +- .../mapreduce/TestHCatHiveCompatibility.java |4 +- .../mapreduce/TestHCatHiveThriftCompatibility.java | 12 +- .../mapreduce/TestSequenceFileReadWrite.java | 13 +- ...BasedMetastoreAuthorizationProviderWithACL.java | 12 + 
.../hive/metastore/TestMetastoreVersion.java | 34 +- .../org/apache/hadoop/hive/ql/TestAcidOnTez.java | 12 +- .../TestDDLWithRemoteMetastoreSecondNamenode.java | 14 +- .../ql/exec/spark/TestSmallTableCacheEviction.java | 12 +- .../ql/exec/spark/TestSparkSessionTimeout.java | 22 +- .../hive/ql/exec/spark/TestSparkStatistics.java|7 +- .../hadoop/hive/ql/history/TestHiveHistory.java|5 +- .../hive/ql/metadata/TestAlterTableMetadata.java | 17 +- .../metadata/TestSemanticAnalyzerHookLoading.java | 30 +- .../parse/BaseReplicationScenariosAcidTables.java |6 +- .../hive/ql/parse/TestReplicationScenarios.java| 74 +- .../parse/TestReplicationScenariosAcidTables.java | 47 +- .../TestReplicationScenariosAcrossInstances.java | 25 +- .../hadoop/hive/ql/parse/WarehouseInstance.java| 32 +- .../TestClientSideAuthorizationProvider.java | 46 +- .../TestMetastoreAuthorizationProvider.java| 100 +- ...torageBasedClientSideAuthorizationProvider.java | 10 +- .../plugin/TestHiveAuthorizerCheckInvocation.java | 22 +- .../plugin/TestHiveAuthorizerShowFilters.java |4 +- .../hive/ql/txn/compactor/TestCompactor.java |6 +- .../ql/txn/compactor/TestCrudCompactorOnTez.java | 22 +- .../java/org/apache/hive/jdbc/TestJdbcDriver2.java |2 - .../TestJdbcWithSQLAuthorization.java |5 - .../control/AbstractCoreBlobstoreCliDriver.java| 17 +- .../hive/cli/control/CoreAccumuloCliDriver.java|9 +- .../hadoop/hive/cli/control/CoreCliDriver.java | 10 +- .../hive/cli/control/CoreCompareCliDriver.java | 10 +- .../hive/cli/control/CoreHBaseCliDriver.java |9 +- .../cli/control/CoreHBaseNegativeCliDriver.java|7 +- .../hadoop/hive/cli/control/CoreKuduCliDriver.java |9 +- .../cli/control/CoreKuduNegativeCliDriver.java |8 +- .../hive/cli/control/CoreNegativeCliDriver.java|7 +- .../hadoop/hive/cli/control/CorePerfCliDriver.java | 10 +- .../java/org/apache/hadoop/hive/ql/QTestUtil.java | 77 +- .../hive/ql/dataset/QTestDatasetHandler.java | 10 +- ql/src/java/org/apache/hadoop/hive/ql/Driver.java | 259 ++--- 
.../org/apache/hadoop/hive/ql/DriverState.java | 27 +- .../org/apache/hadoop/hive/ql/DriverUtils.java | 11 +- ql/src/java/org/apache/hadoop/hive/ql/IDriver.java |9 +- .../hive/ql/parse/ExplainSemanticAnalyzer.java | 16 +- .../hive/ql/processors/AddResourceProcessor.java | 13 +- .../hive/ql/processors/CommandProcessor.java |2 +- .../ql/processors/CommandProcessorException.java | 85 ++ .../ql/processors/CommandProcessorFactory.java |2 +- .../ql/processors/CommandProcessorResponse.java| 87 +- .../hadoop/hive/ql/processors
[hive] branch master updated: HIVE-22248 Fix statistics persisting issues (Miklos Gergely reviewed by Jesus Camacho Rodriguez)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new 9502d06  HIVE-22248 Fix statistics persisting issues (Miklos Gergely reviewed by Jesus Camacho Rodriguez)

9502d06 is described below

commit 9502d06d2c36b80e9fe4ecf9d37e7b5d94d3b04e
Author: miklosgergely
AuthorDate: Wed Oct 2 11:07:58 2019 +0200

    HIVE-22248 Fix statistics persisting issues (Miklos Gergely reviewed by Jesus Camacho Rodriguez)
---
 .../clientpositive/alter_table_update_status.q.out | 26 -
 ...ter_table_update_status_disable_bitvector.q.out | 26 -
 .../clientpositive/llap/vector_coalesce_3.q.out    |  6 +-
 .../results/clientpositive/vector_coalesce_3.q.out |  6 +-
 .../columnstats/merge/DateColumnStatsMerger.java   | 55 ++---
 .../merge/DecimalColumnStatsMerger.java            | 55 ++---
 .../columnstats/merge/DoubleColumnStatsMerger.java | 26 -
 .../columnstats/merge/LongColumnStatsMerger.java   | 26 -
 .../columnstats/merge/StringColumnStatsMerger.java |  2 +
 .../merge/DecimalColumnStatsMergerTest.java        | 68 ++
 10 files changed, 208 insertions(+), 88 deletions(-)

diff --git a/ql/src/test/results/clientpositive/alter_table_update_status.q.out b/ql/src/test/results/clientpositive/alter_table_update_status.q.out
index 6453391..e643863 100644
--- a/ql/src/test/results/clientpositive/alter_table_update_status.q.out
+++ b/ql/src/test/results/clientpositive/alter_table_update_status.q.out
@@ -339,7 +339,7 @@ POSTHOOK: type: DESCTABLE
 POSTHOOK: Input: default@datatype_stats_n0
 col_name       s
 data_type      smallint
-min    0
+min    3
 max    3
 num_nulls      1
 distinct_count 1
@@ -358,7 +358,7 @@ POSTHOOK: type: DESCTABLE
 POSTHOOK: Input: default@datatype_stats_n0
 col_name       i
 data_type      int
-min    0
+min    45
 max    45
 num_nulls      1
 distinct_count 1
@@ -377,7 +377,7 @@ POSTHOOK: type: DESCTABLE
 POSTHOOK: Input: default@datatype_stats_n0
 col_name       b
 data_type      bigint
-min    0
+min    456
 max    456
 num_nulls      1
 distinct_count 1
@@ -396,7 +396,7 @@ POSTHOOK: type: DESCTABLE
 POSTHOOK: Input: default@datatype_stats_n0
 col_name       f
 data_type      float
-min    0.0
+min
[hive] branch master updated: HIVE-22222 Clean up the error handling in Driver - get rid of global variables (Miklos Gergely reviewed by Jesus Camacho Rodriguez)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 7575732 HIVE-2 Clean up the error handling in Driver - get rid of global variables (Miklos Gergely reviewed by Jesus Camacho Rodriguez) 7575732 is described below commit 757573272a097957c2b084119cb448c0304e5c2c Author: miklosgergely AuthorDate: Fri Sep 20 18:08:47 2019 +0200 HIVE-2 Clean up the error handling in Driver - get rid of global variables (Miklos Gergely reviewed by Jesus Camacho Rodriguez) --- ql/src/java/org/apache/hadoop/hive/ql/Driver.java | 270 - .../hadoop/hive/ql/parse/TestHiveDecimalParse.java | 92 --- 2 files changed, 170 insertions(+), 192 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java b/ql/src/java/org/apache/hadoop/hive/ql/Driver.java index 64375c1..00b21d5 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/Driver.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/Driver.java @@ -37,7 +37,6 @@ import java.util.Queue; import java.util.Set; import java.util.stream.Collectors; -import org.apache.commons.lang.StringUtils; import org.apache.commons.lang3.tuple.ImmutablePair; import org.apache.commons.lang3.tuple.Pair; import org.apache.hadoop.conf.Configurable; @@ -121,6 +120,7 @@ import org.apache.hadoop.hive.ql.session.SessionState.LogHelper; import org.apache.hadoop.hive.ql.wm.WmContext; import org.apache.hadoop.hive.serde2.ByteStream; import org.apache.hadoop.mapreduce.MRJobConfig; +import org.apache.hadoop.util.StringUtils; import org.apache.hive.common.util.ShutdownHookManager; import org.apache.hive.common.util.TxnIdUtils; import org.apache.thrift.TException; @@ -150,9 +150,6 @@ public class Driver implements IDriver { private DriverContext driverCxt; private QueryPlan plan; private Schema schema; - private String errorMessage; - private String SQLState; - private Throwable 
downstreamError; private FetchTask fetchTask; private List hiveLocks = new ArrayList(); @@ -264,8 +261,7 @@ public class Driver implements IDriver { try { lst = HiveMetaStoreUtils.getFieldsFromDeserializer(tableName, td.getDeserializer(conf)); } catch (Exception e) { - LOG.warn("Error getting schema: " - + org.apache.hadoop.util.StringUtils.stringifyException(e)); + LOG.warn("Error getting schema: " + StringUtils.stringifyException(e)); } if (lst != null) { schema = new Schema(lst, null); @@ -356,7 +352,7 @@ public class Driver implements IDriver { // interrupted, it should be set to true if the compile is called within another method like // runInternal, which defers the close to the called in that method. @VisibleForTesting - void compile(String command, boolean resetTaskIds, boolean deferClose) throws CommandProcessorResponse { + public void compile(String command, boolean resetTaskIds, boolean deferClose) throws CommandProcessorResponse { PerfLogger perfLogger = SessionState.getPerfLogger(); perfLogger.PerfLogBegin(CLASS_NAME, PerfLogger.COMPILE); driverState.lock(); @@ -568,11 +564,8 @@ public class Driver implements IDriver { CommandAuthorizer.doAuthorization(queryState.getHiveOperation(), sem, command); } } catch (AuthorizationException authExp) { - console.printError("Authorization failed:" + authExp.getMessage() - + ". Use SHOW GRANT to get more details."); - errorMessage = authExp.getMessage(); - SQLState = "42000"; - throw createProcessorResponse(403); + console.printError("Authorization failed:" + authExp.getMessage() + ". 
Use SHOW GRANT to get more details."); + throw createProcessorResponse(403, authExp.getMessage(), "42000", null); } finally { perfLogger.PerfLogEnd(CLASS_NAME, PerfLogger.DO_AUTHORIZATION); } @@ -598,7 +591,7 @@ public class Driver implements IDriver { compileError = true; ErrorMsg error = ErrorMsg.getErrorMsg(e.getMessage()); - errorMessage = "FAILED: " + e.getClass().getSimpleName(); + String errorMessage = "FAILED: " + e.getClass().getSimpleName(); if (error != ErrorMsg.GENERIC_ERROR) { errorMessage += " [Error " + error.getErrorCode() + "]:"; } @@ -614,11 +607,8 @@ public class Driver implements IDriver { errorMessage += ". Failed command: " + queryStr; } - SQLState = error.getSQLState(); - downstreamError = e; - console.printError(errorMessage, "\n" - + org.apache.hadoop.util.StringUtils.stringifyException(e)); - throw cre
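The pattern in this refactoring is worth spelling out: instead of parking `errorMessage`, `SQLState`, and `downstreamError` in Driver fields (mutable shared state that callers had to read back after the fact), the error details now travel inside the thrown object itself, as the new `createProcessorResponse(403, authExp.getMessage(), "42000", null)` call shape shows. A minimal sketch of that exception shape; the class and accessor names are assumed, not Hive's exact API:

```java
// Sketch: an exception that carries the full error context, so callers no
// longer need to read error state back out of the Driver after catching.
public class ProcessorException extends Exception {
    private final int responseCode;
    private final String sqlState;

    public ProcessorException(int responseCode, String message, String sqlState, Throwable cause) {
        super(message, cause);
        this.responseCode = responseCode;
        this.sqlState = sqlState;
    }

    public int getResponseCode() { return responseCode; }
    public String getSqlState() { return sqlState; }

    public static void main(String[] args) {
        try {
            throw new ProcessorException(403, "Authorization failed", "42000", null);
        } catch (ProcessorException e) {
            System.out.println(e.getResponseCode() + " " + e.getSqlState()); // prints: 403 42000
        }
    }
}
```

Making the fields final on the exception removes the data races and stale-state bugs that the commit message's "get rid of global variables" refers to.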
[hive] branch master updated: HIVE-22203 Break up DDLSemanticAnalyzer - extract Process related analyzers (Miklos Gergely reviewed by Jesus Camacho Rodriguez)
This is an automated email from the ASF dual-hosted git repository. mgergely pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/hive.git The following commit(s) were added to refs/heads/master by this push: new 5b4deef HIVE-22203 Break up DDLSemanticAnalyzer - extract Process related analyzers (Miklos Gergely reviewed by Jesus Camacho Rodriguez) 5b4deef is described below commit 5b4deefc7a06dfc87bf6c37f2e5ed222f7bdb30d Author: miklosgergely AuthorDate: Thu Sep 12 10:44:05 2019 +0200 HIVE-22203 Break up DDLSemanticAnalyzer - extract Process related analyzers (Miklos Gergely reviewed by Jesus Camacho Rodriguez) --- .../org/apache/hadoop/hive/ql/ddl/DDLUtils.java| 21 +++ .../process/abort/AbortTransactionsAnalyzer.java | 53 .../process/{ => abort}/AbortTransactionsDesc.java | 2 +- .../{ => abort}/AbortTransactionsOperation.java| 2 +- .../package-info.java} | 22 +-- .../ql/ddl/process/kill/KillQueriesAnalyzer.java | 57 + .../ql/ddl/process/{ => kill}/KillQueriesDesc.java | 2 +- .../process/{ => kill}/KillQueriesOperation.java | 2 +- .../package-info.java} | 22 +-- .../show/compactions/ShowCompactionsAnalyzer.java | 50 +++ .../compactions}/ShowCompactionsDesc.java | 2 +- .../compactions}/ShowCompactionsOperation.java | 2 +- .../compactions/package-info.java} | 22 +-- .../transactions/ShowTransactionsAnalyzer.java | 50 +++ .../transactions}/ShowTransactionsDesc.java| 2 +- .../transactions}/ShowTransactionsOperation.java | 2 +- .../transactions/package-info.java}| 22 +-- .../hadoop/hive/ql/parse/DDLSemanticAnalyzer.java | 71 -- .../hive/ql/parse/SemanticAnalyzerFactory.java | 4 -- 19 files changed, 247 insertions(+), 163 deletions(-) diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLUtils.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLUtils.java index c81c574..3dc6bf5 100644 --- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLUtils.java +++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLUtils.java @@ -33,13 +33,17 @@ import 
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
 import org.apache.hadoop.hive.ql.exec.Utilities;
 import org.apache.hadoop.hive.ql.hooks.WriteEntity;
+import org.apache.hadoop.hive.ql.hooks.Entity.Type;
 import org.apache.hadoop.hive.ql.metadata.Hive;
 import org.apache.hadoop.hive.ql.metadata.HiveException;
 import org.apache.hadoop.hive.ql.metadata.Partition;
 import org.apache.hadoop.hive.ql.metadata.Table;
 import org.apache.hadoop.hive.ql.parse.ReplicationSpec;
+import org.apache.hadoop.hive.ql.parse.SemanticException;
+import org.apache.hadoop.hive.ql.session.SessionState;
 import org.apache.hadoop.hive.serde2.Deserializer;
 import org.apache.hive.common.util.HiveStringUtils;
 import org.apache.hive.common.util.ReflectionUtil;
@@ -198,4 +202,21 @@ public final class DDLUtils {
       builder.append(value);
     }
   }
+
+  public static void addServiceOutput(HiveConf conf, Set<WriteEntity> outputs) throws SemanticException {
+    String hs2Hostname = getHS2Host(conf);
+    if (hs2Hostname != null) {
+      outputs.add(new WriteEntity(hs2Hostname, Type.SERVICE_NAME));
+    }
+  }
+
+  private static String getHS2Host(HiveConf conf) throws SemanticException {
+    if (SessionState.get().isHiveServerQuery()) {
+      return SessionState.get().getHiveServer2Host();
+    } else if (conf.getBoolVar(ConfVars.HIVE_TEST_AUTHORIZATION_SQLSTD_HS2_MODE)) {
+      return "dummyHostnameForTest";
+    }
+
+    throw new SemanticException("Kill query is only supported in HiveServer2 (not hive cli)");
+  }
 }
diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/process/abort/AbortTransactionsAnalyzer.java b/ql/src/java/org/apache/hadoop/hive/ql/ddl/process/abort/AbortTransactionsAnalyzer.java
new file mode 100644
index 000..21116a8
--- /dev/null
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/process/abort/AbortTransactionsAnalyzer.java
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
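The host-resolution logic added to DDLUtils in the diff above can be sketched as a standalone, dependency-free model. This is only a sketch: `Hs2HostSketch`, `Hs2Session`, and the `testMode` flag are illustrative stand-ins for Hive's SessionState and the HIVE_TEST_AUTHORIZATION_SQLSTD_HS2_MODE HiveConf flag, not Hive's actual classes.

```java
// Simplified model of the getHS2Host branch order shown in the DDLUtils diff:
// an HS2 query yields the real server host, test mode yields a dummy host,
// and anything else (e.g. hive cli) is an error. All names are illustrative.
public class Hs2HostSketch {

    // Stand-in for SessionState: just the two facts getHS2Host consults.
    static class Hs2Session {
        final boolean isHiveServerQuery;
        final String hiveServer2Host;

        Hs2Session(boolean isHiveServerQuery, String hiveServer2Host) {
            this.isHiveServerQuery = isHiveServerQuery;
            this.hiveServer2Host = hiveServer2Host;
        }
    }

    // Mirrors the branch order of the new private getHS2Host method.
    static String getHS2Host(Hs2Session session, boolean testMode) {
        if (session.isHiveServerQuery) {
            return session.hiveServer2Host;
        } else if (testMode) {
            return "dummyHostnameForTest";
        }
        throw new IllegalStateException(
            "Kill query is only supported in HiveServer2 (not hive cli)");
    }

    public static void main(String[] args) {
        System.out.println(getHS2Host(new Hs2Session(true, "hs2-node-1"), false));
        System.out.println(getHS2Host(new Hs2Session(false, null), true));
    }
}
```

The null-check in addServiceOutput is then just defensive: both successful branches of getHS2Host return a non-null name, and the failure path throws instead of returning null.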
[hive] branch master updated: HIVE-22199 Ugrade findbugs to 3.0.5 (Miklos Gergely reviewed by Jesus Camacho Rodriguez)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new d8e2e40  HIVE-22199 Ugrade findbugs to 3.0.5 (Miklos Gergely reviewed by Jesus Camacho Rodriguez)
d8e2e40 is described below

commit d8e2e40a28b27144f10c0bbb57144be3d921fa94
Author: miklosgergely
AuthorDate: Thu Sep 12 18:06:08 2019 +0200

    HIVE-22199 Ugrade findbugs to 3.0.5 (Miklos Gergely reviewed by Jesus Camacho Rodriguez)
---
 findbugs/findbugs-exclude.xml | 43 ++-
 pom.xml                       |  4 ++--
 2 files changed, 28 insertions(+), 19 deletions(-)

diff --git a/findbugs/findbugs-exclude.xml b/findbugs/findbugs-exclude.xml
index c165c62..b31e79f 100644
--- a/findbugs/findbugs-exclude.xml
+++ b/findbugs/findbugs-exclude.xml
@@ -14,22 +14,31 @@
   See the License for the specific language governing permissions and
   limitations under the License.
-->
[hunk body unrecoverable: the XML element content of this diff was lost in extraction; only the +/- markers survived]

diff --git a/pom.xml b/pom.xml
index a1b45b4..e061f64 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1446,7 +1446,7 @@
         org.codehaus.mojo
         findbugs-maven-plugin
-        3.0.0
+        3.0.5
         true
         2048
@@ -1461,7 +1461,7 @@
         org.codehaus.mojo
         findbugs-maven-plugin
-        3.0.0
+        3.0.5
         true
         2048
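The two pom.xml hunks above amount to a one-line version bump of the findbugs plugin, applied in both places the plugin is declared. The surrounding XML tags were lost in extraction, but a minimal fragment following standard Maven plugin-declaration conventions (element names are the usual Maven ones, reconstructed here, not copied from the commit) would look like:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>findbugs-maven-plugin</artifactId>
  <!-- bumped from 3.0.0 to 3.0.5 by HIVE-22199 -->
  <version>3.0.5</version>
</plugin>
```

The remaining bare values in the hunk (`true`, `2048`) are plugin configuration settings whose element names did not survive extraction, so they are left as-is above.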
[hive] branch master updated: HIVE-22194 Break up DDLSemanticAnalyzer - extract Privilege related analyzers (Miklos Gergely reviewed by Jesus Camacho Rodriguez)
This is an automated email from the ASF dual-hosted git repository.

mgergely pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/hive.git

The following commit(s) were added to refs/heads/master by this push:
     new 396c161  HIVE-22194 Break up DDLSemanticAnalyzer - extract Privilege related analyzers (Miklos Gergely reviewed by Jesus Camacho Rodriguez)
396c161 is described below

commit 396c1617429d1d0a6ae5e89761bd55bee0a0ab78
Author: miklosgergely
AuthorDate: Sun Sep 8 14:44:56 2019 +0200

    HIVE-22194 Break up DDLSemanticAnalyzer - extract Privilege related analyzers (Miklos Gergely reviewed by Jesus Camacho Rodriguez)
---
 .../hive/ql/ddl/DDLSemanticAnalyzerFactory.java    | 16 ++
 .../hive/ql/ddl/database/alter/package-info.java   |  2 +-
 .../ddl/privilege/AbstractPrivilegeAnalyzer.java   | 64
 .../hive/ql/ddl/privilege/PrivilegeUtils.java      |  8 +-
 .../hive/ql/ddl/privilege/grant/GrantAnalyzer.java | 53 +++
 .../ql/ddl/privilege/{ => grant}/GrantDesc.java    |  5 +-
 .../ddl/privilege/{ => grant}/GrantOperation.java  |  3 +-
 .../alter => privilege/grant}/package-info.java    |  4 +-
 .../ql/ddl/privilege/revoke/RevokeAnalyzer.java    | 53 +++
 .../ql/ddl/privilege/{ => revoke}/RevokeDesc.java  |  5 +-
 .../privilege/{ => revoke}/RevokeOperation.java    |  3 +-
 .../alter => privilege/revoke}/package-info.java   |  4 +-
 .../privilege/role/create/CreateRoleAnalyzer.java  | 53 +++
 .../{ => role/create}/CreateRoleDesc.java          |  2 +-
 .../{ => role/create}/CreateRoleOperation.java     |  4 +-
 .../role/create}/package-info.java                 |  4 +-
 .../ddl/privilege/role/drop/DropRoleAnalyzer.java  | 53 +++
 .../privilege/{ => role/drop}/DropRoleDesc.java    |  2 +-
 .../{ => role/drop}/DropRoleOperation.java         |  4 +-
 .../role/drop}/package-info.java                   |  4 +-
 .../privilege/role/grant/GrantRoleAnalyzer.java    | 53 +++
 .../privilege/{ => role/grant}/GrantRoleDesc.java  |  3 +-
 .../{ => role/grant}/GrantRoleOperation.java       |  3 +-
 .../role/grant}/package-info.java                  |  4 +-
 .../privilege/role/revoke/RevokeRoleAnalyzer.java  | 53 +++
 .../{ => role/revoke}/RevokeRoleDesc.java          |  3 +-
 .../{ => role/revoke}/RevokeRoleOperation.java     |  3 +-
 .../role/revoke}/package-info.java                 |  4 +-
 .../ql/ddl/privilege/role/set/SetRoleAnalyzer.java | 51 ++
 .../ddl/privilege/{ => role/set}/SetRoleDesc.java  |  2 +-
 .../privilege/{ => role/set}/SetRoleOperation.java |  4 +-
 .../alter => privilege/role/set}/package-info.java |  5 +-
 .../role/show/ShowCurrentRoleAnalyzer.java         | 58 +++
 .../{ => role/show}/ShowCurrentRoleDesc.java       |  2 +-
 .../{ => role/show}/ShowCurrentRoleOperation.java  |  3 +-
 .../ddl/privilege/role/show/ShowRolesAnalyzer.java | 58 +++
 .../privilege/{ => role/show}/ShowRolesDesc.java   |  2 +-
 .../{ => role/show}/ShowRolesOperation.java        |  3 +-
 .../role/show}/package-info.java                   |  4 +-
 .../privilege/show/grant/ShowGrantAnalyzer.java    | 57 +++
 .../privilege/{ => show/grant}/ShowGrantDesc.java  |  4 +-
 .../{ => show/grant}/ShowGrantOperation.java       |  3 +-
 .../show/grant}/package-info.java                  |  4 +-
 .../show/principals/ShowPrincipalsAnalyzer.java    | 58 +++
 .../{ => show/principals}/ShowPrincipalsDesc.java  |  2 +-
 .../principals}/ShowPrincipalsOperation.java       |  3 +-
 .../show/principals}/package-info.java             |  4 +-
 .../show/rolegrant/ShowRoleGrantAnalyzer.java      | 58 +++
 .../{ => show/rolegrant}/ShowRoleGrantDesc.java    |  2 +-
 .../rolegrant}/ShowRoleGrantOperation.java         |  3 +-
 .../show/rolegrant}/package-info.java              |  4 +-
 .../hadoop/hive/ql/parse/DDLSemanticAnalyzer.java  | 172 -
 .../org/apache/hadoop/hive/ql/parse/HiveParser.g   | 11 +-
 .../hive/ql/parse/SemanticAnalyzerFactory.java     | 11 --
 .../HiveAuthorizationTaskFactoryImpl.java          | 24 +--
 .../apache/hadoop/hive/ql/plan/HiveOperation.java  |  5 +-
 .../parse/authorization/AuthorizationTestUtil.java |  7 +-
 .../ql/parse/authorization/PrivilegesTestBase.java |  2 +-
 .../TestHiveAuthorizationTaskFactory.java          | 16 +-
 59 files changed, 848 insertions(+), 266 deletions(-)

diff --git a/ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLSemanticAnalyzerFactory.java
b/ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLSemanticAnalyzerFactory.java
index bc93d75..efbd90f 100644
--- a/ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLSemanticAnalyzerFactory.java
+++ b/ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLSemanticAnalyzerFactory.java
@@ -26,10 +26,13 @@ import java.util.Map;
 import java.util.Set;

 import org.apache.hadoop.hive.ql.QueryState;
+import org.apache.hadoop.hive
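The direction of HIVE-22194 — one small analyzer class per DDL statement, resolved from a central factory instead of one monolithic DDLSemanticAnalyzer — can be illustrated with a minimal, self-contained registry. All names below (`AnalyzerFactorySketch`, `Analyzer`, the `TOK_*` keys) are hypothetical; Hive's real DDLSemanticAnalyzerFactory dispatches on parser token types, so treat this purely as a sketch of the pattern.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Minimal model of a per-command analyzer factory: each DDL command
// registers its own analyzer under a token key, so adding a command
// means adding one class and one registration, not editing a monolith.
public class AnalyzerFactorySketch {

    interface Analyzer {
        String analyze();
    }

    static final Map<String, Supplier<Analyzer>> REGISTRY = new HashMap<>();
    static {
        // Illustrative registrations mirroring the grant/revoke split in the commit.
        REGISTRY.put("TOK_GRANT", () -> () -> "grant analyzed");
        REGISTRY.put("TOK_REVOKE", () -> () -> "revoke analyzed");
    }

    // Look up and instantiate the analyzer for a token, failing loudly
    // for unregistered commands.
    static Analyzer analyzerFor(String token) {
        Supplier<Analyzer> supplier = REGISTRY.get(token);
        if (supplier == null) {
            throw new IllegalArgumentException("no analyzer registered for " + token);
        }
        return supplier.get();
    }

    public static void main(String[] args) {
        System.out.println(analyzerFor("TOK_GRANT").analyze());
    }
}
```

The payoff visible in the diffstat above is exactly this shape: 172 lines leave DDLSemanticAnalyzer while dozens of ~50-line single-purpose analyzers appear, each independently testable.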