This is an automated email from the ASF dual-hosted git repository.
joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git
The following commit(s) were added to refs/heads/master by this push:
new bd88b71ed IMPALA-11524: Bump up the CDP GBN to 33375775 and remove workarounds
bd88b71ed is described below
commit bd88b71edd305dd448e9ec33a0cfd10f97177c4e
Author: Joe McDonnell <[email protected]>
AuthorDate: Mon Oct 17 21:26:11 2022 -0700
IMPALA-11524: Bump up the CDP GBN to 33375775 and remove workarounds
This patch bumps up the GBN to 33375775, which contains the
fix for HADOOP-18410 as well as HADOOP-18456, which should help
with the stability of S3 tests. This removes the workaround
for HADOOP-18410 introduced in IMPALA-11514.
This also picks up newer versions of Ozone, Iceberg, Hive, etc.
After HIVE-26071, hive-standalone-metastore starts relying on
jetty-servlet, which in turn requires jetty-security. Since Impala bans
all jetty-related dependencies unless otherwise specified, this patch
adds jetty-servlet, jetty-security, and jetty-util-ajax as allowed
dependencies in order to compile Impala with Hive.
Hive Metastore introduced several new APIs in its interface, so
this adds implementations for them (e.g. HIVE-25303, HIVE-22782).
For example, HIVE-26149 introduced a new HMS API drop_database_req,
so we need to implement this API in CatalogServerHandler as well.
This patch also fixes a bug where the third parameter of drop_database
was treated as "ignoreUnknownDb" when it should be "cascade".
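That positional mix-up can be sketched as follows. This is a hedged Python illustration, not Impala's Java handler; the `DropDatabaseRequest` class and `drop_database` helper below are hypothetical stand-ins mirroring the HMS Thrift types. The point is that the third positional argument must map to the cascade field, never ignoreUnknownDb:

```python
from dataclasses import dataclass

@dataclass
class DropDatabaseRequest:
    # Hypothetical stand-in for the HMS request object with named fields.
    name: str
    delete_data: bool = False
    cascade: bool = False
    ignore_unknown_db: bool = False

def drop_database(name, delete_data, cascade):
    # Per the HMS Thrift signature, the third positional argument is
    # 'cascade'; the old handler mislabeled it as 'ignoreUnknownDb'.
    # Mapping to explicitly named request fields removes the ambiguity.
    return DropDatabaseRequest(name=name, delete_data=delete_data,
                               cascade=cascade, ignore_unknown_db=False)

req = drop_database("db1", True, True)
assert req.cascade and not req.ignore_unknown_db
```

The Java fix in this patch follows the same shape: drop_database builds a DropDatabaseRequest with setCascade(cascade) and setIgnoreUnknownDb(false) and delegates to drop_database_req.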
The number of files produced on systems with 32GB of memory changed.
Until we track down the specific cause, this tunes the YARN memory
slightly higher to keep the number of files produced unchanged on
systems with 32GB of memory.
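The effect of that tuning can be sketched with the memory envelope from yarn-site.xml.py. This is a minimal Python sketch; `yarn_nm_ram_mb` is a hypothetical helper mirroring the `_get_yarn_nm_ram_mb` logic, parameterized by the reservation the patch changes:

```python
def yarn_nm_ram_mb(available_ram_gb, reserve_gb):
    # Envelope: need at least 4GB, leave reserve_gb for other services,
    # and never use more than 48GB.
    return min(max(available_ram_gb * 1024 - reserve_gb * 1024, 4096), 48 * 1024)

# On a 32GB host, lowering the reservation from 24GB to 20GB raises the
# NodeManager allocation from 8192MB to 12288MB; the patch makes this
# change to avoid file-count differences on 32GB systems.
print(yarn_nm_ram_mb(32, 24))  # 8192
print(yarn_nm_ram_mb(32, 20))  # 12288
```

Larger hosts are unaffected because they hit the 48GB cap either way.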
The content in this patch comes from several developers: Yu-Wen Lai,
Kishen Das, and Fang-Yu Rao.
Testing:
- Ran core jobs
- Ran CatalogHmsSyncToLatestEventIdTest tests
- Hand tested on a machine with 32GB of memory
Change-Id: Iea7e1785f5c93f61179cc336968c3a86c53e9ed1
Reviewed-on: http://gerrit.cloudera.org:8080/19149
Tested-by: Impala Public Jenkins <[email protected]>
Reviewed-by: Csaba Ringhofer <[email protected]>
Reviewed-by: Wenzhe Zhou <[email protected]>
---
bin/impala-config.sh | 22 ++--
fe/pom.xml | 42 +++++++
.../metastore/CatalogMetastoreServiceHandler.java | 49 +++++++-
.../impala/catalog/metastore/HmsApiNameEnum.java | 1 +
.../catalog/metastore/MetastoreServiceHandler.java | 125 ++++++++++++++++++++-
.../common/etc/hadoop/conf/core-site.xml.py | 3 -
.../common/etc/hadoop/conf/yarn-site.xml.py | 4 +-
7 files changed, 221 insertions(+), 25 deletions(-)
diff --git a/bin/impala-config.sh b/bin/impala-config.sh
index 718984009..212e87b49 100755
--- a/bin/impala-config.sh
+++ b/bin/impala-config.sh
@@ -198,19 +198,19 @@ fi
: ${IMPALA_TOOLCHAIN_HOST:=native-toolchain.s3.amazonaws.com}
export IMPALA_TOOLCHAIN_HOST
-export CDP_BUILD_NUMBER=31397203
+export CDP_BUILD_NUMBER=33375775
export CDP_MAVEN_REPOSITORY=\
"https://${IMPALA_TOOLCHAIN_HOST}/build/cdp_components/${CDP_BUILD_NUMBER}/maven"
-export CDP_AVRO_JAVA_VERSION=1.8.2.7.2.16.0-164
-export CDP_HADOOP_VERSION=3.1.1.7.2.16.0-164
-export CDP_HBASE_VERSION=2.4.6.7.2.16.0-164
-export CDP_HIVE_VERSION=3.1.3000.7.2.16.0-164
-export CDP_ICEBERG_VERSION=0.14.0.7.2.16.0-164
-export CDP_KNOX_VERSION=1.3.0.7.2.16.0-164
-export CDP_OZONE_VERSION=1.1.0.7.2.16.0-164
-export CDP_PARQUET_VERSION=1.10.99.7.2.16.0-164
-export CDP_RANGER_VERSION=2.3.0.7.2.16.0-164
-export CDP_TEZ_VERSION=0.9.1.7.2.16.0-164
+export CDP_AVRO_JAVA_VERSION=1.8.2.7.2.16.0-233
+export CDP_HADOOP_VERSION=3.1.1.7.2.16.0-233
+export CDP_HBASE_VERSION=2.4.6.7.2.16.0-233
+export CDP_HIVE_VERSION=3.1.3000.7.2.16.0-233
+export CDP_ICEBERG_VERSION=0.14.1.7.2.16.0-233
+export CDP_KNOX_VERSION=1.3.0.7.2.16.0-233
+export CDP_OZONE_VERSION=1.3.0.7.2.16.0-233
+export CDP_PARQUET_VERSION=1.10.99.7.2.16.0-233
+export CDP_RANGER_VERSION=2.3.0.7.2.16.0-233
+export CDP_TEZ_VERSION=0.9.1.7.2.16.0-233
# Ref: https://infra.apache.org/release-download-pages.html#closer
: ${APACHE_MIRROR:="https://www.apache.org/dyn/closer.cgi"}
diff --git a/fe/pom.xml b/fe/pom.xml
index 0c42bb45c..b4c5f0d80 100644
--- a/fe/pom.xml
+++ b/fe/pom.xml
@@ -64,6 +64,39 @@ under the License.
</exclusions>
</dependency>
+ <dependency>
+ <groupId>org.apache.hadoop</groupId>
+ <artifactId>hadoop-hdfs</artifactId>
+ <version>${hadoop.version}</version>
+ <exclusions>
+ <exclusion>
+ <groupId>io.netty</groupId>
+ <artifactId>*</artifactId>
+ </exclusion>
+ <exclusion>
+ <!-- IMPALA-9108: Avoid pulling in leveldbjni, which is unneeded. -->
+ <groupId>org.fusesource.leveldbjni</groupId>
+ <artifactId>*</artifactId>
+ </exclusion>
+ <exclusion>
+ <groupId>org.eclipse.jetty</groupId>
+ <artifactId>jetty-util-ajax</artifactId>
+ </exclusion>
+ <exclusion>
+ <groupId>org.eclipse.jetty</groupId>
+ <artifactId>jetty-server</artifactId>
+ </exclusion>
+ <exclusion>
+ <groupId>com.sun.jersey</groupId>
+ <artifactId>jersey-server</artifactId>
+ </exclusion>
+ <exclusion>
+ <groupId>log4j</groupId>
+ <artifactId>log4j</artifactId>
+ </exclusion>
+ </exclusions>
+ </dependency>
+
<dependency>
<groupId>org.apache.hudi</groupId>
<artifactId>hudi-hadoop-mr</artifactId>
@@ -420,6 +453,10 @@ under the License.
<groupId>io.netty</groupId>
<artifactId>*</artifactId>
</exclusion>
+ <exclusion>
+ <groupId>com.sun.jersey</groupId>
+ <artifactId>jersey-server</artifactId>
+ </exclusion>
</exclusions>
</dependency>
<dependency>
@@ -815,10 +852,15 @@ under the License.
<include>org.eclipse.jetty:jetty-client</include>
<include>org.eclipse.jetty:jetty-http</include>
<include>org.eclipse.jetty:jetty-io</include>
+ <!-- jetty-security is needed by jetty-servlet. -->
+ <include>org.eclipse.jetty:jetty-security</include>
+ <!-- jetty-servlet is required by hive-standalone-metastore after HIVE-26071. -->
+ <include>org.eclipse.jetty:jetty-servlet</include>
<!-- jetty-server is required when HiveMetaStoreClient is
instantiated after HIVE-21456. -->
<include>org.eclipse.jetty:jetty-server</include>
<!-- hadoop-yarn-common depends on some Jetty utilities.
-->
<include>org.eclipse.jetty:jetty-util</include>
+ <include>org.eclipse.jetty:jetty-util-ajax</include>
<!-- Include the allowed versions specifically -->
<include>org.apache.hadoop:*:${hadoop.version}</include>
<include>org.apache.hadoop:*:${ozone.version}</include>
diff --git a/fe/src/main/java/org/apache/impala/catalog/metastore/CatalogMetastoreServiceHandler.java b/fe/src/main/java/org/apache/impala/catalog/metastore/CatalogMetastoreServiceHandler.java
index 963753046..57a212671 100644
--- a/fe/src/main/java/org/apache/impala/catalog/metastore/CatalogMetastoreServiceHandler.java
+++ b/fe/src/main/java/org/apache/impala/catalog/metastore/CatalogMetastoreServiceHandler.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.hive.metastore.api.AlterTableRequest;
import org.apache.hadoop.hive.metastore.api.AlterTableResponse;
import org.apache.hadoop.hive.metastore.api.CreateTableRequest;
import org.apache.hadoop.hive.metastore.api.Database;
+import org.apache.hadoop.hive.metastore.api.DropDatabaseRequest;
import org.apache.hadoop.hive.metastore.api.DropPartitionsRequest;
import org.apache.hadoop.hive.metastore.api.DropPartitionsResult;
import org.apache.hadoop.hive.metastore.api.EnvironmentContext;
@@ -60,6 +61,7 @@ import org.apache.hadoop.hive.metastore.api.SQLUniqueConstraint;
import org.apache.hadoop.hive.metastore.api.TruncateTableRequest;
import org.apache.hadoop.hive.metastore.api.TruncateTableResponse;
import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils;
+import org.apache.hadoop.util.StringUtils;
import org.apache.impala.catalog.CatalogHmsAPIHelper;
import org.apache.impala.catalog.events.DeleteEventLog;
import org.apache.impala.catalog.events.MetastoreEvents;
@@ -239,11 +241,11 @@ public class CatalogMetastoreServiceHandler extends MetastoreServiceHandler {
@Override
public void drop_database(String databaseName, boolean deleteData,
- boolean ignoreUnknownDb) throws NoSuchObjectException,
+ boolean cascade) throws NoSuchObjectException,
InvalidOperationException, MetaException, TException {
if (!BackendConfig.INSTANCE.enableCatalogdHMSCache() ||
!BackendConfig.INSTANCE.enableSyncToLatestEventOnDdls()) {
- super.drop_database(databaseName, deleteData, ignoreUnknownDb);
+ super.drop_database(databaseName, deleteData, cascade);
return;
}
// TODO: The complete logic can be moved to
@@ -255,7 +257,7 @@ public class CatalogMetastoreServiceHandler extends MetastoreServiceHandler {
try {
try {
currentEventId = super.get_current_notificationEventId().getEventId();
- super.drop_database(databaseName, deleteData, ignoreUnknownDb);
+ super.drop_database(databaseName, deleteData, cascade);
} catch (NoSuchObjectException e) {
// db does not exist in metastore, remove it from
// catalog if exists
@@ -265,7 +267,46 @@ public class CatalogMetastoreServiceHandler extends MetastoreServiceHandler {
}
throw e;
}
- dropDbIfExists(databaseName, ignoreUnknownDb, currentEventId, apiName);
+ dropDbIfExists(databaseName, false, currentEventId, apiName);
+ } finally {
+ catalogOpExecutor_.getMetastoreDdlLock().unlock();
+ }
+ }
+
+ @Override
+ public void drop_database_req(final DropDatabaseRequest dropDatabaseRequest)
+ throws NoSuchObjectException, InvalidOperationException, MetaException {
+ if (!BackendConfig.INSTANCE.enableCatalogdHMSCache() ||
+ !BackendConfig.INSTANCE.enableSyncToLatestEventOnDdls()) {
+ super.drop_database_req(dropDatabaseRequest);
+ return;
+ }
+ String apiName = HmsApiNameEnum.DROP_DATABASE_REQ.apiName();
+ String dbName =
+ MetaStoreUtils.parseDbName(dropDatabaseRequest.getName(), serverConf_)[1];
+ long currentEventId = -1;
+ catalogOpExecutor_.getMetastoreDdlLock().lock();
+ try {
+ try {
+ currentEventId = super.get_current_notificationEventId().getEventId();
+ super.drop_database_req(dropDatabaseRequest);
+ } catch (NoSuchObjectException e) {
+ // db does not exist in metastore, remove it from
+ // catalog if exists
+ if (catalog_.removeDb(dbName) != null) {
+ LOG.info("Db {} not known to metastore, removed it from catalog for " +
+ "metastore api {}", dbName, apiName);
+ }
+ throw e;
+ // TODO: We should add TException to method signature in hive and we can remove
+ // following two catch blocks.
+ } catch (InvalidOperationException|MetaException e) {
+ throw e;
+ } catch (TException e) {
+ throw new MetaException(StringUtils.stringifyException(e));
+ }
+ dropDbIfExists(dropDatabaseRequest.getName(),
+ dropDatabaseRequest.isIgnoreUnknownDb(), currentEventId, apiName);
} finally {
catalogOpExecutor_.getMetastoreDdlLock().unlock();
}
diff --git a/fe/src/main/java/org/apache/impala/catalog/metastore/HmsApiNameEnum.java b/fe/src/main/java/org/apache/impala/catalog/metastore/HmsApiNameEnum.java
index b68013481..4610d1634 100644
--- a/fe/src/main/java/org/apache/impala/catalog/metastore/HmsApiNameEnum.java
+++ b/fe/src/main/java/org/apache/impala/catalog/metastore/HmsApiNameEnum.java
@@ -27,6 +27,7 @@ public enum HmsApiNameEnum {
GET_PARTITION_BY_NAMES("get_partitions_by_names_req"),
CREATE_DATABASE("create_database"),
DROP_DATABASE("drop_database"),
+ DROP_DATABASE_REQ("drop_database_req"),
ALTER_DATABASE("alter_database"),
CREATE_TABLE("create_table"),
CREATE_TABLE_REQ("create_table_req"),
diff --git a/fe/src/main/java/org/apache/impala/catalog/metastore/MetastoreServiceHandler.java b/fe/src/main/java/org/apache/impala/catalog/metastore/MetastoreServiceHandler.java
index 0db30cc05..2566703c3 100644
--- a/fe/src/main/java/org/apache/impala/catalog/metastore/MetastoreServiceHandler.java
+++ b/fe/src/main/java/org/apache/impala/catalog/metastore/MetastoreServiceHandler.java
@@ -37,6 +37,7 @@ import org.apache.hadoop.hive.metastore.api.AddDefaultConstraintRequest;
import org.apache.hadoop.hive.metastore.api.AddDynamicPartitions;
import org.apache.hadoop.hive.metastore.api.AddForeignKeyRequest;
import org.apache.hadoop.hive.metastore.api.AddNotNullConstraintRequest;
+import org.apache.hadoop.hive.metastore.api.AddPackageRequest;
import org.apache.hadoop.hive.metastore.api.AddPartitionsRequest;
import org.apache.hadoop.hive.metastore.api.AddPartitionsResult;
import org.apache.hadoop.hive.metastore.api.AddPrimaryKeyRequest;
@@ -44,6 +45,8 @@ import org.apache.hadoop.hive.metastore.api.AddUniqueConstraintRequest;
import org.apache.hadoop.hive.metastore.api.AggrStats;
import org.apache.hadoop.hive.metastore.api.AllocateTableWriteIdsRequest;
import org.apache.hadoop.hive.metastore.api.AllocateTableWriteIdsResponse;
+import org.apache.hadoop.hive.metastore.api.AllTableConstraintsRequest;
+import org.apache.hadoop.hive.metastore.api.AllTableConstraintsResponse;
import org.apache.hadoop.hive.metastore.api.AlreadyExistsException;
import org.apache.hadoop.hive.metastore.api.AlterCatalogRequest;
import org.apache.hadoop.hive.metastore.api.AlterISchemaRequest;
@@ -75,6 +78,8 @@ import org.apache.hadoop.hive.metastore.api.DefaultConstraintsRequest;
import org.apache.hadoop.hive.metastore.api.DefaultConstraintsResponse;
import org.apache.hadoop.hive.metastore.api.DropCatalogRequest;
import org.apache.hadoop.hive.metastore.api.DropConstraintRequest;
+import org.apache.hadoop.hive.metastore.api.DropDatabaseRequest;
+import org.apache.hadoop.hive.metastore.api.DropPackageRequest;
import org.apache.hadoop.hive.metastore.api.DropPartitionsRequest;
import org.apache.hadoop.hive.metastore.api.DropPartitionsResult;
import org.apache.hadoop.hive.metastore.api.EnvironmentContext;
@@ -102,7 +107,9 @@ import org.apache.hadoop.hive.metastore.api.GetFileMetadataResult;
import org.apache.hadoop.hive.metastore.api.GetLatestCommittedCompactionInfoRequest;
import org.apache.hadoop.hive.metastore.api.GetLatestCommittedCompactionInfoResponse;
import org.apache.hadoop.hive.metastore.api.GetOpenTxnsInfoResponse;
+import org.apache.hadoop.hive.metastore.api.GetOpenTxnsRequest;
import org.apache.hadoop.hive.metastore.api.GetOpenTxnsResponse;
+import org.apache.hadoop.hive.metastore.api.GetPackageRequest;
import org.apache.hadoop.hive.metastore.api.GetPartitionsByNamesRequest;
import org.apache.hadoop.hive.metastore.api.GetPartitionsByNamesResult;
import org.apache.hadoop.hive.metastore.api.GetPartitionsResponse;
@@ -143,6 +150,8 @@ import org.apache.hadoop.hive.metastore.api.ISchemaName;
import org.apache.hadoop.hive.metastore.api.InvalidInputException;
import org.apache.hadoop.hive.metastore.api.InvalidObjectException;
import org.apache.hadoop.hive.metastore.api.InvalidOperationException;
+import org.apache.hadoop.hive.metastore.api.ListPackageRequest;
+import org.apache.hadoop.hive.metastore.api.ListStoredProcedureRequest;
import org.apache.hadoop.hive.metastore.api.LockRequest;
import org.apache.hadoop.hive.metastore.api.LockResponse;
import org.apache.hadoop.hive.metastore.api.MapSchemaVersionToSerdeRequest;
@@ -163,6 +172,7 @@ import org.apache.hadoop.hive.metastore.api.NotificationEventsCountResponse;
import org.apache.hadoop.hive.metastore.api.OpenTxnRequest;
import org.apache.hadoop.hive.metastore.api.OpenTxnsResponse;
import org.apache.hadoop.hive.metastore.api.OptionalCompactionInfoStruct;
+import org.apache.hadoop.hive.metastore.api.Package;
import org.apache.hadoop.hive.metastore.api.Partition;
import org.apache.hadoop.hive.metastore.api.PartitionEventType;
import org.apache.hadoop.hive.metastore.api.PartitionSpec;
@@ -211,6 +221,8 @@ import org.apache.hadoop.hive.metastore.api.ShowCompactRequest;
import org.apache.hadoop.hive.metastore.api.ShowCompactResponse;
import org.apache.hadoop.hive.metastore.api.ShowLocksRequest;
import org.apache.hadoop.hive.metastore.api.ShowLocksResponse;
+import org.apache.hadoop.hive.metastore.api.StoredProcedure;
+import org.apache.hadoop.hive.metastore.api.StoredProcedureRequest;
import org.apache.hadoop.hive.metastore.api.Table;
import org.apache.hadoop.hive.metastore.api.TableMeta;
import org.apache.hadoop.hive.metastore.api.TableStatsRequest;
@@ -265,6 +277,7 @@ import org.apache.hadoop.hive.metastore.api.WriteNotificationLogResponse;
import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
import org.apache.hadoop.hive.metastore.conf.MetastoreConf.ConfVars;
import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils;
+import org.apache.hadoop.util.StringUtils;
import org.apache.impala.catalog.CatalogHmsAPIHelper;
import org.apache.impala.catalog.DatabaseNotFoundException;
import org.apache.impala.catalog.CatalogServiceCatalog;
@@ -472,22 +485,42 @@ public abstract class MetastoreServiceHandler extends AbstractThriftHiveMetastor
@Override
public void drop_database(String databaseName, boolean deleteData,
- boolean ignoreUnknownDb) throws NoSuchObjectException,
+ boolean cascade) throws NoSuchObjectException,
InvalidOperationException, MetaException, TException {
+ String[] parsedCatDbName = MetaStoreUtils.parseDbName(databaseName, serverConf_);
+
+ DropDatabaseRequest req = new DropDatabaseRequest();
+ req.setName(parsedCatDbName[1]);
+ req.setCatalogName(parsedCatDbName[0]);
+ req.setIgnoreUnknownDb(false);
+ req.setDeleteData(deleteData);
+ req.setCascade(cascade);
+ drop_database_req(req);
+ }
+
+ @Override
+ public void drop_database_req(final DropDatabaseRequest dropDatabaseRequest)
+ throws NoSuchObjectException, InvalidOperationException, MetaException {
long currentEventId = -1;
catalogOpExecutor_.getMetastoreDdlLock().lock();
try (MetaStoreClient client = catalog_.getMetaStoreClient()) {
currentEventId = getCurrentEventId(client);
- client.getHiveClient().getThriftClient()
- .drop_database(databaseName, deleteData, ignoreUnknownDb);
+ client.getHiveClient().getThriftClient().drop_database_req(dropDatabaseRequest);
+ // TODO: We should add TException to method signature in hive and we can remove
+ // following two catch blocks.
+ } catch (NoSuchObjectException|InvalidOperationException|MetaException e) {
+ throw e;
+ } catch (TException e) {
+ throw new MetaException(StringUtils.stringifyException(e));
} finally {
catalogOpExecutor_.getMetastoreDdlLock().unlock();
}
if (!BackendConfig.INSTANCE.invalidateCatalogdHMSCacheOnDDLs() ||
- !BackendConfig.INSTANCE.enableCatalogdHMSCache()) {
+ !BackendConfig.INSTANCE.enableCatalogdHMSCache()) {
return;
}
- dropDbIfExists(databaseName, ignoreUnknownDb, currentEventId, "drop_database");
+ dropDbIfExists(dropDatabaseRequest.getName(), dropDatabaseRequest.isIgnoreUnknownDb(),
+ currentEventId, "drop_database");
}
@Override
@@ -694,6 +727,16 @@ public abstract class MetastoreServiceHandler extends AbstractThriftHiveMetastor
}
}
+ @Override
+ public Table translate_table_dryrun(CreateTableRequest createTableRequest) throws
+ AlreadyExistsException, InvalidObjectException, MetaException,
+ NoSuchObjectException, TException {
+ try (MetaStoreClient client = catalog_.getMetaStoreClient()) {
+ return client.getHiveClient().getThriftClient()
+ .translate_table_dryrun(createTableRequest);
+ }
+ }
+
@Override
public void drop_table(String dbname, String tblname,
boolean deleteData) throws NoSuchObjectException,
@@ -1718,6 +1761,16 @@ public abstract class MetastoreServiceHandler extends AbstractThriftHiveMetastor
}
}
+ @Override
+ public AllTableConstraintsResponse get_all_table_constraints(
+ AllTableConstraintsRequest request) throws TException, MetaException,
+ NoSuchObjectException {
+ try (MetaStoreClient client = catalog_.getMetaStoreClient()) {
+ return client.getHiveClient().getThriftClient()
+ .get_all_table_constraints(request);
+ }
+ }
+
@Override
public boolean update_table_column_statistics(ColumnStatistics columnStatistics)
throws NoSuchObjectException, InvalidObjectException, MetaException,
@@ -2119,6 +2172,68 @@ public abstract class MetastoreServiceHandler extends AbstractThriftHiveMetastor
}
}
+ @Override
+ public GetOpenTxnsResponse get_open_txns_req(GetOpenTxnsRequest getOpenTxnsRequest)
+ throws TException {
+ try (MetaStoreClient client = catalog_.getMetaStoreClient()) {
+ return client.getHiveClient().getThriftClient()
+ .get_open_txns_req(getOpenTxnsRequest);
+ }
+ }
+ @Override
+ public void create_stored_procedure(StoredProcedure proc)
+ throws NoSuchObjectException, MetaException, TException {
+ try (MetaStoreClient client = catalog_.getMetaStoreClient()) {
+ client.getHiveClient().getThriftClient().create_stored_procedure(proc);
+ }
+ }
+
+ @Override
+ public StoredProcedure get_stored_procedure(StoredProcedureRequest request)
+ throws MetaException, NoSuchObjectException, TException {
+ try (MetaStoreClient client = catalog_.getMetaStoreClient()) {
+ return client.getHiveClient().getThriftClient().get_stored_procedure(request);
+ }
+ }
+
+ @Override
+ public void drop_stored_procedure(StoredProcedureRequest request)
+ throws MetaException, TException {
+ try (MetaStoreClient client = catalog_.getMetaStoreClient()) {
+ client.getHiveClient().getThriftClient().drop_stored_procedure(request);
+ }
+ }
+
+ @Override
+ public Package find_package(GetPackageRequest request)
+ throws MetaException, NoSuchObjectException, TException {
+ try (MetaStoreClient client = catalog_.getMetaStoreClient()) {
+ return client.getHiveClient().getThriftClient().find_package(request);
+ }
+ }
+
+ @Override
+ public void add_package(AddPackageRequest request) throws MetaException, TException {
+ try (MetaStoreClient client = catalog_.getMetaStoreClient()) {
+ client.getHiveClient().getThriftClient().add_package(request);
+ }
+ }
+
+ @Override
+ public List<String> get_all_packages(ListPackageRequest request)
+ throws MetaException, TException {
+ try (MetaStoreClient client = catalog_.getMetaStoreClient()) {
+ return client.getHiveClient().getThriftClient().get_all_packages(request);
+ }
+ }
+
+ @Override
+ public void drop_package(DropPackageRequest request) throws MetaException, TException {
+ try (MetaStoreClient client = catalog_.getMetaStoreClient()) {
+ client.getHiveClient().getThriftClient().drop_package(request);
+ }
+ }
+
@Override
public GetOpenTxnsInfoResponse get_open_txns_info() throws TException {
try (MetaStoreClient client = catalog_.getMetaStoreClient()) {
diff --git a/testdata/cluster/node_templates/common/etc/hadoop/conf/core-site.xml.py b/testdata/cluster/node_templates/common/etc/hadoop/conf/core-site.xml.py
index 1614faed1..4d5f8d8c6 100644
--- a/testdata/cluster/node_templates/common/etc/hadoop/conf/core-site.xml.py
+++ b/testdata/cluster/node_templates/common/etc/hadoop/conf/core-site.xml.py
@@ -112,9 +112,6 @@ CONFIG = {
if target_filesystem == 's3':
CONFIG.update({'fs.s3a.connection.maximum': 1500})
- # As a workaround for HADOOP-18410, set the async drain threshold to an absurdly large
- # value to turn off the async drain codepath.
- CONFIG.update({'fs.s3a.input.async.drain.threshold': '512G'})
s3guard_enabled = os.environ.get("S3GUARD_ENABLED") == 'true'
if s3guard_enabled:
CONFIG.update({
diff --git a/testdata/cluster/node_templates/common/etc/hadoop/conf/yarn-site.xml.py b/testdata/cluster/node_templates/common/etc/hadoop/conf/yarn-site.xml.py
index eb6359365..73f7edc1d 100644
--- a/testdata/cluster/node_templates/common/etc/hadoop/conf/yarn-site.xml.py
+++ b/testdata/cluster/node_templates/common/etc/hadoop/conf/yarn-site.xml.py
@@ -36,9 +36,9 @@ def _get_yarn_nm_ram_mb():
available_ram_gb = int(os.getenv("IMPALA_CLUSTER_MAX_MEM_GB", str(sys_ram / 1024)))
# Fit into the following envelope:
# - need 4GB at a bare minimum
- # - leave at least 24G for other services
+ # - leave at least 20G for other services
# - don't need more than 48G
- ret = min(max(available_ram_gb * 1024 - 24 * 1024, 4096), 48 * 1024)
+ ret = min(max(available_ram_gb * 1024 - 20 * 1024, 4096), 48 * 1024)
print >>sys.stderr, "Configuring Yarn NM to use {0}MB RAM".format(ret)
return ret