lide-reed commented on code in PR #15125:
URL: https://github.com/apache/doris/pull/15125#discussion_r1053076672
##########
fe/fe-core/src/main/java/org/apache/doris/catalog/HiveMetaStoreClientHelper.java:
##########
@@ -175,6 +176,11 @@ public static String getHiveDataFiles(HiveTable hiveTable, ExprNodeGenericFuncDe
                                             List<TBrokerFileStatus> fileStatuses, Table remoteHiveTbl,
                                             StorageBackend.StorageType type)
             throws DdlException {
         BlobStorage storage = BlobStorage.create("HiveMetaStore", type, hiveTable.getHiveProperties());
+        // as of ofs files, use hdfs storage, but it's type should be ofs
+        if (type == StorageBackend.StorageType.OFS) {
+            storage.setType(type);
Review Comment:
Would it be better to move this into BlobStorage.create()?
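Just to illustrate the idea, a rough sketch; `doCreate` here is only a placeholder for whatever the factory currently does internally, not an existing method:

```java
public static BlobStorage create(String name, StorageBackend.StorageType type,
                                 Map<String, String> properties) {
    // doCreate() is a placeholder for the existing creation logic in this factory.
    BlobStorage storage = doCreate(name, type, properties);
    // OFS goes through the HDFS-backed storage, but its reported type should
    // stay OFS, so set it once here instead of at every call site.
    if (type == StorageBackend.StorageType.OFS) {
        storage.setType(type);
    }
    return storage;
}
```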
##########
fe/fe-core/src/main/java/org/apache/doris/catalog/HiveMetaStoreClientHelper.java:
##########
@@ -211,6 +217,7 @@ public static String normalizeS3LikeSchema(String location) {
     private static String getAllFileStatus(List<TBrokerFileStatus> fileStatuses,
                                            List<RemoteIterator<LocatedFileStatus>> remoteIterators,
                                            BlobStorage storage) throws UserException {
         boolean onS3 = storage instanceof S3Storage;
+        boolean onOFS = storage.getStorageType() == StorageBackend.StorageType.OFS;
Review Comment:
How about replacing the two boolean statements with a single "StorageBackend.StorageType storageType = storage.getStorageType();" and using storageType directly later on?
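Roughly like this, just a sketch of the idea (assuming the enum has an S3 value that matches the current instanceof check):

```java
StorageBackend.StorageType storageType = storage.getStorageType();
for (RemoteIterator<LocatedFileStatus> iterator : remoteIterators) {
    // branch on storageType directly instead of carrying separate booleans
    if (storageType == StorageBackend.StorageType.S3) {
        // existing S3 handling
    } else if (storageType == StorageBackend.StorageType.OFS) {
        // OFS handling added in this PR
    } else {
        // default handling
    }
}
```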
##########
fe/fe-core/src/main/java/org/apache/doris/catalog/HiveTable.java:
##########
@@ -79,7 +79,7 @@ public Map<String, String> getHiveProperties() {
     private void validate(Map<String, String> properties) throws DdlException {
         if (properties == null) {
             throw new DdlException("Please set properties of hive table, "
-                + "they are: database, table and 'hive.metastore.uris'");
+                + "they are: database, table and 'hive.metastore.uris'");
Review Comment:
Could you keep the old formatting here?
##########
fe/fe-core/src/main/java/org/apache/doris/catalog/HiveMetaStoreClientHelper.java:
##########
@@ -761,7 +770,8 @@ public static String showCreateTable(org.apache.hadoop.hive.metastore.api.Table
         if (remoteTable.getPartitionKeys().size() > 0) {
             output.append("PARTITIONED BY (\n")
                     .append(remoteTable.getPartitionKeys().stream().map(
-                            partition -> String.format(" `%s` `%s`", partition.getName(), partition.getType()))
+                            partition ->
+                                    String.format(" `%s` `%s`", partition.getName(), partition.getType()))
Review Comment:
Could you keep the old formatting here as well?
##########
fe/fe-core/src/main/java/org/apache/doris/catalog/HiveMetaStoreClientHelper.java:
##########
@@ -264,7 +271,7 @@ private static String getAllFileStatus(List<TBrokerFileStatus> fileStatuses,
      * @throws DdlException when connect hiveMetaStore failed.
      */
     public static List<Partition> getHivePartitions(String metaStoreUris, Table remoteHiveTbl,
-            ExprNodeGenericFuncDesc hivePartitionPredicate) throws DdlException {
+            ExprNodeGenericFuncDesc hivePartitionPredicate) throws DdlException {
Review Comment:
Same here, could you keep the old formatting?
##########
fe/fe-core/src/main/java/org/apache/doris/catalog/HiveTable.java:
##########
@@ -146,7 +146,8 @@ private void validate(Map<String, String> properties) throws DdlException {
         while (iter.hasNext()) {
             Map.Entry<String, String> entry = iter.next();
             String key = entry.getKey();
-            if (key.startsWith(HdfsResource.HADOOP_FS_PREFIX) || key.startsWith(S3Resource.S3_PROPERTIES_PREFIX)) {
+            if (key.startsWith(HdfsResource.HADOOP_FS_PREFIX) || key.startsWith(S3Resource.S3_PROPERTIES_PREFIX)
+                    || key.equalsIgnoreCase(HdfsResource.HADOOP_FS_NAME)) {
Review Comment:
If this fixes a bug, could you open a separate issue to track it?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]