This is an automated email from the ASF dual-hosted git repository.

morningman pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/doris.git


The following commit(s) were added to refs/heads/master by this push:
     new 370ddd8fb2e [feat](metatable) support table$partitions for hive table (#40774)
370ddd8fb2e is described below

commit 370ddd8fb2ed09f560147db4d48bae55e215d7a2
Author: Mingyu Chen <[email protected]>
AuthorDate: Tue Sep 24 16:58:22 2024 +0800

    [feat](metatable) support table$partitions for hive table (#40774)
    
    Support new grammar: `table_name$partitions`
    
    Users can query a table's partition information with:
    
    ```
    select * from table_name$partitions
    ```
    
    `table_name$partitions` is a special meta table corresponding to the
    table. Its schema consists of the table's partition columns, with their
    original column types, and each row holds the values of one partition.
    
    You can use `DESC table_name$partitions` to view its schema:
    
    ```
    mysql> desc partition_values_all_types$partitions;
    +-------+----------------+------+------+---------+-------+
    | Field | Type           | Null | Key  | Default | Extra |
    +-------+----------------+------+------+---------+-------+
    | p1    | boolean        | Yes  | true | NULL    | NONE  |
    | p2    | tinyint        | Yes  | true | NULL    | NONE  |
    | p3    | smallint       | Yes  | true | NULL    | NONE  |
    | p4    | int            | Yes  | true | NULL    | NONE  |
    | p5    | bigint         | Yes  | true | NULL    | NONE  |
    | p6    | date           | Yes  | true | NULL    | NONE  |
    | p7    | datetime(6)    | Yes  | true | NULL    | NONE  |
    | p8    | varchar(65533) | Yes  | true | NULL    | NONE  |
    | p9    | decimal(9,2)   | Yes  | true | NULL    | NONE  |
    | p10   | decimal(18,10) | Yes  | true | NULL    | NONE  |
    | p11   | decimal(38,9)  | Yes  | true | NULL    | NONE  |
    +-------+----------------+------+------+---------+-------+
    ```
    where the `pN` columns are the partition columns of table `partition_values_all_types`.
    
    ```
    mysql> select * from partition_values_all_types$partitions order by p1,p2,p3;
    +------+------+--------+-------------+----------------------+------------+----------------------------+--------+-------------+----------------------+------------------------------------------+
    | p1   | p2   | p3     | p4          | p5                   | p6         | p7                         | p8     | p9          | p10                  | p11                                      |
    +------+------+--------+-------------+----------------------+------------+----------------------------+--------+-------------+----------------------+------------------------------------------+
    | NULL | NULL |   NULL |        NULL |                 NULL | NULL       | NULL                       | NULL   |        NULL |                 NULL |                                     NULL |
    |    0 | -128 | -32768 | -2147483648 | -9223372036854775808 | 1900-01-01 | 1899-01-01 23:59:59.000000 | NULL   | -9999999.99 | -99999999.9999999999 | -99999999999999999999999999999.999999999 |
    |    0 |  127 |  32767 |  2147483647 |  9223372036854775807 | 9999-12-31 | 0001-01-01 00:00:01.321000 | boston |  9999999.99 |  99999999.9999999999 |  99999999999999999999999999999.999999999 |
    +------+------+--------+-------------+----------------------+------------+----------------------------+--------+-------------+----------------------+------------------------------------------+
    ```
    
    Currently, this grammar can only be used for partitioned tables in the
    Hive catalog. Tables in other catalogs, and non-partitioned tables,
    cannot use this grammar.
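    
    As an illustration, the suffix check behind this name resolution can be
    sketched as a small standalone helper (hypothetical class and method
    names; the actual parsing lives in the catalog implementation):
    
    ```java
    import java.util.Optional;
    
    // Hypothetical sketch of recognizing "table$partitions" style names.
    // A bare "$partitions" with no table part is not a valid reference.
    public class MetaTableName {
        static final String SUFFIX = "$partitions";
    
        // Returns the source table name if tableName references a meta table,
        // empty otherwise.
        static Optional<String> sourceTable(String tableName) {
            if (tableName.endsWith(SUFFIX) && tableName.length() > SUFFIX.length()) {
                return Optional.of(tableName.substring(0, tableName.length() - SUFFIX.length()));
            }
            return Optional.empty();
        }
    
        public static void main(String[] args) {
            System.out.println(sourceTable("orders$partitions")); // Optional[orders]
            System.out.println(sourceTable("orders"));            // Optional.empty
        }
    }
    ```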
    
    Internally, it is implemented as a table valued function:
    `partition_values`
    
    ```
    mysql> select * from partition_values("catalog" = "hive", "database" = "multi_catalog", "table" = "partition_values_all_types");
    +------+------+--------+-------------+----------------------+------------+----------------------------+--------+-------------+----------------------+------------------------------------------+
    | p1   | p2   | p3     | p4          | p5                   | p6         | p7                         | p8     | p9          | p10                  | p11                                      |
    +------+------+--------+-------------+----------------------+------------+----------------------------+--------+-------------+----------------------+------------------------------------------+
    |    0 |  127 |  32767 |  2147483647 |  9223372036854775807 | 9999-12-31 | 0001-01-01 00:00:01.321000 | boston |  9999999.99 |  99999999.9999999999 |  99999999999999999999999999999.999999999 |
    |    0 | -128 | -32768 | -2147483648 | -9223372036854775808 | 1900-01-01 | 1899-01-01 23:59:59.000000 | NULL   | -9999999.99 | -99999999.9999999999 | -99999999999999999999999999999.999999999 |
    | NULL | NULL |   NULL |        NULL |                 NULL | NULL       | NULL                       | NULL   |        NULL |                 NULL |                                     NULL |
    +------+------+--------+-------------+----------------------+------------+----------------------------+--------+-------------+----------------------+------------------------------------------+
    ```
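    
    As an aside, the new `TimeUtils.convertToDateTimeV2` in this change packs
    all date-time fields into one long. A minimal sketch of that layout (the
    shift amounts are from the commit; the field widths are inferred from
    them, and this class is illustrative, not the actual Doris code):
    
    ```java
    // Bit layout, low to high:
    // microsecond(20) | second(6) | minute(6) | hour(5) | day(5) | month(4) | year(18)
    public class DateTimeV2Layout {
        static long pack(int year, int month, int day, int hour, int minute, int second, int microsecond) {
            return (long) microsecond | (long) second << 20 | (long) minute << 26 | (long) hour << 32
                    | (long) day << 37 | (long) month << 42 | (long) year << 46;
        }
    
        // Inverse of pack(), to check that the layout round-trips.
        static int[] unpack(long v) {
            return new int[] {
                    (int) (v >>> 46),          // year
                    (int) ((v >>> 42) & 0xF),  // month
                    (int) ((v >>> 37) & 0x1F), // day
                    (int) ((v >>> 32) & 0x1F), // hour
                    (int) ((v >>> 26) & 0x3F), // minute
                    (int) ((v >>> 20) & 0x3F), // second
                    (int) (v & 0xFFFFF)        // microsecond
            };
        }
    
        public static void main(String[] args) {
            long v = pack(2024, 9, 24, 16, 58, 22, 0);
            System.out.println(java.util.Arrays.toString(unpack(v))); // [2024, 9, 24, 16, 58, 22, 0]
        }
    }
    ```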
---
 be/src/vec/exec/scan/vmeta_scanner.cpp             | 170 +++++++++++--------
 be/src/vec/exec/scan/vmeta_scanner.h               |   2 +
 .../antlr4/org/apache/doris/nereids/DorisParser.g4 |   8 +-
 .../org/apache/doris/analysis/DescribeStmt.java    |  28 +++-
 .../doris/catalog/BuiltinTableValuedFunctions.java |   4 +-
 .../java/org/apache/doris/catalog/TableIf.java     |   2 -
 .../org/apache/doris/common/util/TimeUtils.java    |  35 ++++
 .../org/apache/doris/datasource/CatalogIf.java     |  23 +++
 .../doris/datasource/hive/HMSExternalCatalog.java  |  92 +++++++++++
 .../doris/datasource/hive/HMSExternalTable.java    |   2 +-
 .../doris/nereids/parser/LogicalPlanBuilder.java   |   7 +-
 .../doris/nereids/rules/analysis/BindRelation.java |  24 ++-
 .../functions/table/PartitionValues.java           |  76 +++++++++
 .../visitor/TableValuedFunctionVisitor.java        |   5 +
 .../apache/doris/nereids/util/RelationUtil.java    |   6 +-
 .../doris/tablefunction/MetadataGenerator.java     | 143 ++++++++++++++--
 .../PartitionValuesTableValuedFunction.java        | 180 +++++++++++++++++++++
 .../PartitionsTableValuedFunction.java             |   2 +-
 .../doris/tablefunction/TableValuedFunctionIf.java |   2 +
 .../apache/doris/common/util/TimeUtilsTest.java    |  51 ++++++
 .../nereids/rules/analysis/BindRelationTest.java   |   9 ++
 gensrc/thrift/Data.thrift                          |   1 +
 gensrc/thrift/FrontendService.thrift               |   1 +
 gensrc/thrift/PlanNodes.thrift                     |   8 +
 gensrc/thrift/Types.thrift                         |   3 +-
 .../hive/test_hive_partition_values_tvf.out        | 120 ++++++++++++++
 .../hive/test_hive_partition_values_tvf.groovy     | 142 ++++++++++++++++
 27 files changed, 1053 insertions(+), 93 deletions(-)

diff --git a/be/src/vec/exec/scan/vmeta_scanner.cpp b/be/src/vec/exec/scan/vmeta_scanner.cpp
index f5864924a38..74fed8c80c7 100644
--- a/be/src/vec/exec/scan/vmeta_scanner.cpp
+++ b/be/src/vec/exec/scan/vmeta_scanner.cpp
@@ -74,16 +74,6 @@ Status VMetaScanner::prepare(RuntimeState* state, const VExprContextSPtrs& conju
     VLOG_CRITICAL << "VMetaScanner::prepare";
     RETURN_IF_ERROR(VScanner::prepare(_state, conjuncts));
     _tuple_desc = state->desc_tbl().get_tuple_descriptor(_tuple_id);
-    bool has_col_nullable =
-            std::any_of(std::begin(_tuple_desc->slots()), std::end(_tuple_desc->slots()),
-                        [](SlotDescriptor* slot_desc) { return slot_desc->is_nullable(); });
-
-    if (has_col_nullable) {
-        // We do not allow any columns to be Nullable here, since FE can not
-        // transmit a NULL value to BE, so we can not distinguish a empty string
-        // from a NULL value.
-        return Status::InternalError("Logical error, VMetaScanner do not allow ColumnNullable");
-    }
     return Status::OK();
 }
 
@@ -146,60 +136,90 @@ Status VMetaScanner::_fill_block_with_remote_data(const std::vector<MutableColum
         }
 
         for (int _row_idx = 0; _row_idx < _batch_data.size(); _row_idx++) {
-            // No need to check nullable column since no nullable column is
-            // guaranteed in VMetaScanner::prepare
             vectorized::IColumn* col_ptr = columns[col_idx].get();
-            switch (slot_desc->type().type) {
-            case TYPE_BOOLEAN: {
-                bool data = _batch_data[_row_idx].column_value[col_idx].boolVal;
-                reinterpret_cast<vectorized::ColumnVector<vectorized::UInt8>*>(col_ptr)
-                        ->insert_value((uint8_t)data);
-                break;
-            }
-            case TYPE_INT: {
-                int64_t data = _batch_data[_row_idx].column_value[col_idx].intVal;
-                reinterpret_cast<vectorized::ColumnVector<vectorized::Int32>*>(col_ptr)
-                        ->insert_value(data);
-                break;
-            }
-            case TYPE_BIGINT: {
-                int64_t data = _batch_data[_row_idx].column_value[col_idx].longVal;
-                reinterpret_cast<vectorized::ColumnVector<vectorized::Int64>*>(col_ptr)
-                        ->insert_value(data);
-                break;
-            }
-            case TYPE_FLOAT: {
-                double data = _batch_data[_row_idx].column_value[col_idx].doubleVal;
-                reinterpret_cast<vectorized::ColumnVector<vectorized::Float32>*>(col_ptr)
-                        ->insert_value(data);
-                break;
-            }
-            case TYPE_DOUBLE: {
-                double data = _batch_data[_row_idx].column_value[col_idx].doubleVal;
-                reinterpret_cast<vectorized::ColumnVector<vectorized::Float64>*>(col_ptr)
-                        ->insert_value(data);
-                break;
-            }
-            case TYPE_DATETIMEV2: {
-                uint64_t data = _batch_data[_row_idx].column_value[col_idx].longVal;
-                reinterpret_cast<vectorized::ColumnVector<vectorized::UInt64>*>(col_ptr)
-                        ->insert_value(data);
-                break;
-            }
-            case TYPE_STRING:
-            case TYPE_CHAR:
-            case TYPE_VARCHAR: {
-                std::string data = _batch_data[_row_idx].column_value[col_idx].stringVal;
-                reinterpret_cast<vectorized::ColumnString*>(col_ptr)->insert_data(data.c_str(),
-                                                                                  data.length());
-                break;
-            }
-            default: {
-                std::string error_msg =
-                        fmt::format("Invalid column type {} on column: {}.",
-                                    slot_desc->type().debug_string(), slot_desc->col_name());
-                return Status::InternalError(std::string(error_msg));
-            }
+            TCell& cell = _batch_data[_row_idx].column_value[col_idx];
+            if (cell.__isset.isNull && cell.isNull) {
+                DCHECK(slot_desc->is_nullable())
+                        << "cell is null but column is not nullable: " << slot_desc->col_name();
+                auto& null_col = reinterpret_cast<ColumnNullable&>(*col_ptr);
+                null_col.get_nested_column().insert_default();
+                null_col.get_null_map_data().push_back(1);
+            } else {
+                if (slot_desc->is_nullable()) {
+                    auto& null_col = reinterpret_cast<ColumnNullable&>(*col_ptr);
+                    null_col.get_null_map_data().push_back(0);
+                    col_ptr = null_col.get_nested_column_ptr();
+                }
+                switch (slot_desc->type().type) {
+                case TYPE_BOOLEAN: {
+                    bool data = cell.boolVal;
+                    reinterpret_cast<vectorized::ColumnVector<vectorized::UInt8>*>(col_ptr)
+                            ->insert_value((uint8_t)data);
+                    break;
+                }
+                case TYPE_TINYINT: {
+                    int8_t data = (int8_t)cell.intVal;
+                    reinterpret_cast<vectorized::ColumnVector<vectorized::Int8>*>(col_ptr)
+                            ->insert_value(data);
+                    break;
+                }
+                case TYPE_SMALLINT: {
+                    int16_t data = (int16_t)cell.intVal;
+                    reinterpret_cast<vectorized::ColumnVector<vectorized::Int16>*>(col_ptr)
+                            ->insert_value(data);
+                    break;
+                }
+                case TYPE_INT: {
+                    int32_t data = cell.intVal;
+                    reinterpret_cast<vectorized::ColumnVector<vectorized::Int32>*>(col_ptr)
+                            ->insert_value(data);
+                    break;
+                }
+                case TYPE_BIGINT: {
+                    int64_t data = cell.longVal;
+                    reinterpret_cast<vectorized::ColumnVector<vectorized::Int64>*>(col_ptr)
+                            ->insert_value(data);
+                    break;
+                }
+                case TYPE_FLOAT: {
+                    double data = cell.doubleVal;
+                    reinterpret_cast<vectorized::ColumnVector<vectorized::Float32>*>(col_ptr)
+                            ->insert_value(data);
+                    break;
+                }
+                case TYPE_DOUBLE: {
+                    double data = cell.doubleVal;
+                    reinterpret_cast<vectorized::ColumnVector<vectorized::Float64>*>(col_ptr)
+                            ->insert_value(data);
+                    break;
+                }
+                case TYPE_DATEV2: {
+                    uint32_t data = (uint32_t)cell.longVal;
+                    reinterpret_cast<vectorized::ColumnVector<vectorized::UInt32>*>(col_ptr)
+                            ->insert_value(data);
+                    break;
+                }
+                case TYPE_DATETIMEV2: {
+                    uint64_t data = cell.longVal;
+                    reinterpret_cast<vectorized::ColumnVector<vectorized::UInt64>*>(col_ptr)
+                            ->insert_value(data);
+                    break;
+                }
+                case TYPE_STRING:
+                case TYPE_CHAR:
+                case TYPE_VARCHAR: {
+                    std::string data = cell.stringVal;
+                    reinterpret_cast<vectorized::ColumnString*>(col_ptr)->insert_data(
+                            data.c_str(), data.length());
+                    break;
+                }
+                default: {
+                    std::string error_msg =
+                            fmt::format("Invalid column type {} on column: {}.",
+                                        slot_desc->type().debug_string(), slot_desc->col_name());
+                    return Status::InternalError(std::string(error_msg));
+                }
+                }
             }
         }
     }
@@ -241,6 +261,9 @@ Status VMetaScanner::_fetch_metadata(const TMetaScanRange& meta_scan_range) {
     case TMetadataType::TASKS:
         RETURN_IF_ERROR(_build_tasks_metadata_request(meta_scan_range, &request));
         break;
+    case TMetadataType::PARTITION_VALUES:
+        RETURN_IF_ERROR(_build_partition_values_metadata_request(meta_scan_range, &request));
+        break;
     default:
         _meta_eos = true;
         return Status::OK();
@@ -461,6 +484,27 @@ Status VMetaScanner::_build_tasks_metadata_request(const TMetaScanRange& meta_sc
     return Status::OK();
 }
 
+Status VMetaScanner::_build_partition_values_metadata_request(
+        const TMetaScanRange& meta_scan_range, TFetchSchemaTableDataRequest* request) {
+    VLOG_CRITICAL << "VMetaScanner::_build_partition_values_metadata_request";
+    if (!meta_scan_range.__isset.partition_values_params) {
+        return Status::InternalError(
+                "Can not find TPartitionValuesMetadataParams from meta_scan_range.");
+    }
+
+    // create request
+    request->__set_schema_table_name(TSchemaTableName::METADATA_TABLE);
+
+    // create TMetadataTableRequestParams
+    TMetadataTableRequestParams metadata_table_params;
+    metadata_table_params.__set_metadata_type(TMetadataType::PARTITION_VALUES);
+    metadata_table_params.__set_partition_values_metadata_params(
+            meta_scan_range.partition_values_params);
+
+    request->__set_metada_table_params(metadata_table_params);
+    return Status::OK();
+}
+
 Status VMetaScanner::close(RuntimeState* state) {
     VLOG_CRITICAL << "VMetaScanner::close";
     RETURN_IF_ERROR(VScanner::close(state));
diff --git a/be/src/vec/exec/scan/vmeta_scanner.h b/be/src/vec/exec/scan/vmeta_scanner.h
index a9975300cdc..350e0fbf807 100644
--- a/be/src/vec/exec/scan/vmeta_scanner.h
+++ b/be/src/vec/exec/scan/vmeta_scanner.h
@@ -86,6 +86,8 @@ private:
                                          TFetchSchemaTableDataRequest* request);
     Status _build_queries_metadata_request(const TMetaScanRange& meta_scan_range,
                                            TFetchSchemaTableDataRequest* request);
+    Status _build_partition_values_metadata_request(const TMetaScanRange& meta_scan_range,
+                                                    TFetchSchemaTableDataRequest* request);
     bool _meta_eos;
     TupleId _tuple_id;
     TUserIdentity _user_identity;
diff --git a/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/DorisParser.g4 b/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/DorisParser.g4
index 8403d471890..22edc655a85 100644
--- a/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/DorisParser.g4
+++ b/fe/fe-core/src/main/antlr4/org/apache/doris/nereids/DorisParser.g4
@@ -1269,12 +1269,12 @@ optScanParams
 
 relationPrimary
     : multipartIdentifier optScanParams? materializedViewName? tableSnapshot? specifiedPartition?
-       tabletList? tableAlias sample? relationHint? lateralView*           #tableName
-    | LEFT_PAREN query RIGHT_PAREN tableAlias lateralView*                 #aliasedQuery
+       tabletList? tableAlias sample? relationHint? lateralView*                          #tableName
+    | LEFT_PAREN query RIGHT_PAREN tableAlias lateralView*                                #aliasedQuery
     | tvfName=identifier LEFT_PAREN
       (properties=propertyItemList)?
-      RIGHT_PAREN tableAlias                                               #tableValuedFunction
-    | LEFT_PAREN relations RIGHT_PAREN                                     #relationList
+      RIGHT_PAREN tableAlias                                                              #tableValuedFunction
+    | LEFT_PAREN relations RIGHT_PAREN                                                    #relationList
     ;
 
 materializedViewName
diff --git a/fe/fe-core/src/main/java/org/apache/doris/analysis/DescribeStmt.java b/fe/fe-core/src/main/java/org/apache/doris/analysis/DescribeStmt.java
index 7e503d52586..f5f7cdaf749 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/analysis/DescribeStmt.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/analysis/DescribeStmt.java
@@ -32,6 +32,7 @@ import org.apache.doris.common.AnalysisException;
 import org.apache.doris.common.ErrorCode;
 import org.apache.doris.common.ErrorReport;
 import org.apache.doris.common.FeConstants;
+import org.apache.doris.common.Pair;
 import org.apache.doris.common.UserException;
 import org.apache.doris.common.proc.IndexSchemaProcNode;
 import org.apache.doris.common.proc.ProcNodeInterface;
@@ -44,6 +45,7 @@ import org.apache.doris.qe.ConnectContext;
 import org.apache.doris.qe.ShowResultSetMetaData;
 
 import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
 import com.google.common.collect.Lists;
 import com.google.common.collect.Sets;
 import org.apache.commons.lang3.StringUtils;
@@ -55,6 +57,7 @@ import java.util.Arrays;
 import java.util.LinkedList;
 import java.util.List;
 import java.util.Map;
+import java.util.Optional;
 import java.util.Set;
 
 public class DescribeStmt extends ShowStmt implements NotFallbackInParser {
@@ -123,6 +126,25 @@ public class DescribeStmt extends ShowStmt implements NotFallbackInParser {
 
     @Override
     public void analyze(Analyzer analyzer) throws UserException {
+        // First handle meta table.
+        // It will convert this to corresponding table valued functions
+        // eg: DESC table$partitions -> partition_values(...)
+        if (dbTableName != null) {
+            dbTableName.analyze(analyzer);
+            CatalogIf catalog = Env.getCurrentEnv().getCatalogMgr().getCatalogOrAnalysisException(dbTableName.getCtl());
+            Pair<String, String> sourceTableNameWithMetaName = catalog.getSourceTableNameWithMetaTableName(
+                    dbTableName.getTbl());
+            if (!Strings.isNullOrEmpty(sourceTableNameWithMetaName.second)) {
+                isTableValuedFunction = true;
+                Optional<TableValuedFunctionRef> optTvfRef = catalog.getMetaTableFunctionRef(
+                        dbTableName.getDb(), dbTableName.getTbl());
+                if (!optTvfRef.isPresent()) {
+                    throw new AnalysisException("meta table not found: " + sourceTableNameWithMetaName.second);
+                }
+                tableValuedFunctionRef = optTvfRef.get();
+            }
+        }
+
         if (!isAllTables && isTableValuedFunction) {
             tableValuedFunctionRef.analyze(analyzer);
             List<Column> columns = tableValuedFunctionRef.getTable().getBaseSchema();
@@ -148,8 +170,6 @@ public class DescribeStmt extends ShowStmt implements NotFallbackInParser {
             }
         }
 
-        dbTableName.analyze(analyzer);
-
         if (!Env.getCurrentEnv().getAccessManager()
                 .checkTblPriv(ConnectContext.get(), dbTableName, PrivPredicate.SHOW)) {
             ErrorReport.reportAnalysisException(ErrorCode.ERR_TABLEACCESS_DENIED_ERROR, "DESCRIBE",
@@ -159,8 +179,7 @@ public class DescribeStmt extends ShowStmt implements NotFallbackInParser {
 
         CatalogIf catalog = Env.getCurrentEnv().getCatalogMgr().getCatalogOrAnalysisException(dbTableName.getCtl());
         DatabaseIf db = catalog.getDbOrAnalysisException(dbTableName.getDb());
-        TableIf table = db.getTableOrAnalysisException(dbTableName.getTbl());
-
+        TableIf table = db.getTableOrDdlException(dbTableName.getTbl());
         table.readLock();
         try {
             if (!isAllTables) {
@@ -387,3 +406,4 @@ public class DescribeStmt extends ShowStmt implements NotFallbackInParser {
         return emptyRow;
     }
 }
+
diff --git a/fe/fe-core/src/main/java/org/apache/doris/catalog/BuiltinTableValuedFunctions.java b/fe/fe-core/src/main/java/org/apache/doris/catalog/BuiltinTableValuedFunctions.java
index db66c260e56..88c98162093 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/catalog/BuiltinTableValuedFunctions.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/catalog/BuiltinTableValuedFunctions.java
@@ -29,6 +29,7 @@ import org.apache.doris.nereids.trees.expressions.functions.table.Jobs;
 import org.apache.doris.nereids.trees.expressions.functions.table.Local;
 import org.apache.doris.nereids.trees.expressions.functions.table.MvInfos;
 import org.apache.doris.nereids.trees.expressions.functions.table.Numbers;
+import org.apache.doris.nereids.trees.expressions.functions.table.PartitionValues;
 import org.apache.doris.nereids.trees.expressions.functions.table.Partitions;
 import org.apache.doris.nereids.trees.expressions.functions.table.Query;
 import org.apache.doris.nereids.trees.expressions.functions.table.S3;
@@ -59,7 +60,8 @@ public class BuiltinTableValuedFunctions implements FunctionHelper {
             tableValued(Partitions.class, "partitions"),
             tableValued(Jobs.class, "jobs"),
             tableValued(Tasks.class, "tasks"),
-            tableValued(Query.class, "query")
+            tableValued(Query.class, "query"),
+            tableValued(PartitionValues.class, "partition_values")
     );
 
     public static final BuiltinTableValuedFunctions INSTANCE = new BuiltinTableValuedFunctions();
diff --git a/fe/fe-core/src/main/java/org/apache/doris/catalog/TableIf.java b/fe/fe-core/src/main/java/org/apache/doris/catalog/TableIf.java
index 83c07c485a1..ed40840239a 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/catalog/TableIf.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/catalog/TableIf.java
@@ -65,8 +65,6 @@ public interface TableIf {
     default void readUnlock() {
     }
 
-    ;
-
     default void writeLock() {
     }
 
diff --git a/fe/fe-core/src/main/java/org/apache/doris/common/util/TimeUtils.java b/fe/fe-core/src/main/java/org/apache/doris/common/util/TimeUtils.java
index 55b2ed1c58d..e7066846c30 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/common/util/TimeUtils.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/common/util/TimeUtils.java
@@ -340,4 +340,39 @@ public class TimeUtils {
                 parts.length > 3 ? String.format(" %02d:%02d:%02d", Integer.parseInt(parts[3]),
                         Integer.parseInt(parts[4]), Integer.parseInt(parts[5])) : "");
     }
+
+
+    // Refer to be/src/vec/runtime/vdatetime_value.h
+    public static long convertToDateTimeV2(
+            int year, int month, int day, int hour, int minute, int second, int microsecond) {
+        return (long) microsecond | (long) second << 20 | (long) minute << 26 | (long) hour << 32
+                | (long) day << 37 | (long) month << 42 | (long) year << 46;
+    }
+
+    // Refer to be/src/vec/runtime/vdatetime_value.h
+    public static long convertToDateV2(
+            int year, int month, int day) {
+        return (long) day | (long) month << 5 | (long) year << 9;
+    }
+
+    public static long convertStringToDateTimeV2(String dateTimeStr, int scale) {
+        String format = "yyyy-MM-dd HH:mm:ss";
+        if (scale > 0) {
+            format += ".";
+            for (int i = 0; i < scale; i++) {
+                format += "S";
+            }
+        }
+        DateTimeFormatter formatter = DateTimeFormatter.ofPattern(format);
+        LocalDateTime dateTime = TimeUtils.formatDateTimeAndFullZero(dateTimeStr, formatter);
+        return convertToDateTimeV2(dateTime.getYear(), dateTime.getMonthValue(), dateTime.getDayOfMonth(),
+                dateTime.getHour(), dateTime.getMinute(), dateTime.getSecond(), dateTime.getNano() / 1000);
+    }
+
+    public static long convertStringToDateV2(String dateStr) {
+        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd");
+        LocalDateTime dateTime = TimeUtils.formatDateTimeAndFullZero(dateStr, formatter);
+        return convertToDateV2(dateTime.getYear(), dateTime.getMonthValue(), dateTime.getDayOfMonth());
+    }
 }
+
diff --git a/fe/fe-core/src/main/java/org/apache/doris/datasource/CatalogIf.java b/fe/fe-core/src/main/java/org/apache/doris/datasource/CatalogIf.java
index a79897c67df..41cb44ef0b5 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/datasource/CatalogIf.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/datasource/CatalogIf.java
@@ -22,6 +22,7 @@ import org.apache.doris.analysis.CreateTableStmt;
 import org.apache.doris.analysis.DropDbStmt;
 import org.apache.doris.analysis.DropTableStmt;
 import org.apache.doris.analysis.TableName;
+import org.apache.doris.analysis.TableValuedFunctionRef;
 import org.apache.doris.analysis.TruncateTableStmt;
 import org.apache.doris.catalog.DatabaseIf;
 import org.apache.doris.catalog.Env;
@@ -30,7 +31,9 @@ import org.apache.doris.common.AnalysisException;
 import org.apache.doris.common.DdlException;
 import org.apache.doris.common.ErrorCode;
 import org.apache.doris.common.MetaNotFoundException;
+import org.apache.doris.common.Pair;
 import org.apache.doris.common.UserException;
+import org.apache.doris.nereids.trees.expressions.functions.table.TableValuedFunction;
 
 import com.google.common.base.Strings;
 import com.google.common.collect.Lists;
@@ -197,4 +200,24 @@ public interface CatalogIf<T extends DatabaseIf> {
     void dropTable(DropTableStmt stmt) throws DdlException;
 
     void truncateTable(TruncateTableStmt truncateTableStmt) throws DdlException;
+
+    /**
+     * Try to parse meta table name from table name.
+     * Some catalog allow querying meta table like "table_name$partitions".
+     * Catalog can override this method to parse meta table name from table name.
+     *
+     * @param tableName table name like "table_name" or "table_name$partitions"
+     * @return pair of source table name and meta table name
+     */
+    default Pair<String, String> getSourceTableNameWithMetaTableName(String tableName) {
+        return Pair.of(tableName, "");
+    }
+
+    default Optional<TableValuedFunction> getMetaTableFunction(String dbName, String sourceNameWithMetaName) {
+        return Optional.empty();
+    }
+
+    default Optional<TableValuedFunctionRef> getMetaTableFunctionRef(String dbName, String sourceNameWithMetaName) {
+        return Optional.empty();
+    }
 }
diff --git a/fe/fe-core/src/main/java/org/apache/doris/datasource/hive/HMSExternalCatalog.java b/fe/fe-core/src/main/java/org/apache/doris/datasource/hive/HMSExternalCatalog.java
index f5baa9cc7bf..99fd96f65db 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/datasource/hive/HMSExternalCatalog.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/datasource/hive/HMSExternalCatalog.java
@@ -17,11 +17,13 @@
 
 package org.apache.doris.datasource.hive;
 
+import org.apache.doris.analysis.TableValuedFunctionRef;
 import org.apache.doris.catalog.Env;
 import org.apache.doris.catalog.HdfsResource;
 import org.apache.doris.cluster.ClusterNamespace;
 import org.apache.doris.common.Config;
 import org.apache.doris.common.DdlException;
+import org.apache.doris.common.Pair;
 import org.apache.doris.common.ThreadPoolManager;
 import org.apache.doris.common.security.authentication.AuthenticationConfig;
 import org.apache.doris.common.security.authentication.HadoopAuthenticator;
@@ -39,10 +41,15 @@ import org.apache.doris.datasource.property.constants.HMSProperties;
 import org.apache.doris.fs.FileSystemProvider;
 import org.apache.doris.fs.FileSystemProviderImpl;
 import org.apache.doris.fs.remote.dfs.DFSFileSystem;
+import org.apache.doris.nereids.exceptions.AnalysisException;
+import org.apache.doris.nereids.trees.expressions.functions.table.PartitionValues;
+import org.apache.doris.nereids.trees.expressions.functions.table.TableValuedFunction;
 import org.apache.doris.transaction.TransactionManagerFactory;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Strings;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
 import lombok.Getter;
 import org.apache.commons.lang3.math.NumberUtils;
 import org.apache.hadoop.hive.conf.HiveConf;
@@ -52,6 +59,7 @@ import org.apache.logging.log4j.Logger;
 import java.util.List;
 import java.util.Map;
 import java.util.Objects;
+import java.util.Optional;
 import java.util.concurrent.ThreadPoolExecutor;
 
 /**
@@ -277,6 +285,36 @@ public class HMSExternalCatalog extends ExternalCatalog {
         }
     }
 
+    @Override
+    public Pair<String, String> getSourceTableNameWithMetaTableName(String tableName) {
+        for (MetaTableFunction metaFunction : MetaTableFunction.values()) {
+            if (metaFunction.containsMetaTable(tableName)) {
+                return Pair.of(metaFunction.getSourceTableName(tableName), metaFunction.name().toLowerCase());
+            }
+        }
+        return Pair.of(tableName, "");
+    }
+
+    @Override
+    public Optional<TableValuedFunction> getMetaTableFunction(String dbName, String sourceNameWithMetaName) {
+        for (MetaTableFunction metaFunction : MetaTableFunction.values()) {
+            if (metaFunction.containsMetaTable(sourceNameWithMetaName)) {
+                return Optional.of(metaFunction.createFunction(name, dbName, sourceNameWithMetaName));
+            }
+        }
+        return Optional.empty();
+    }
+
+    @Override
+    public Optional<TableValuedFunctionRef> getMetaTableFunctionRef(String dbName, String sourceNameWithMetaName) {
+        for (MetaTableFunction metaFunction : MetaTableFunction.values()) {
+            if (metaFunction.containsMetaTable(sourceNameWithMetaName)) {
+                return Optional.of(metaFunction.createFunctionRef(name, dbName, sourceNameWithMetaName));
+            }
+        }
+        return Optional.empty();
+    }
+
     public String getHiveMetastoreUris() {
         return catalogProperty.getOrDefault(HMSProperties.HIVE_METASTORE_URIS, "");
     }
@@ -292,4 +330,58 @@ public class HMSExternalCatalog extends ExternalCatalog {
     public boolean isEnableHmsEventsIncrementalSync() {
         return enableHmsEventsIncrementalSync;
     }
+
+    /**
+     * Enum for meta tables in hive catalog.
+     * eg: tbl$partitions
+     */
+    private enum MetaTableFunction {
+        PARTITIONS("partition_values");
+
+        private final String suffix;
+        private final String tvfName;
+
+        MetaTableFunction(String tvfName) {
+            this.suffix = "$" + name().toLowerCase();
+            this.tvfName = tvfName;
+        }
+
+        boolean containsMetaTable(String tableName) {
+            return tableName.endsWith(suffix) && (tableName.length() > suffix.length());
+        }
+
+        String getSourceTableName(String tableName) {
+            return tableName.substring(0, tableName.length() - suffix.length());
+        }
+
+        public TableValuedFunction createFunction(String ctlName, String dbName, String sourceNameWithMetaName) {
+            switch (this) {
+                case PARTITIONS:
+                    List<String> nameParts = Lists.newArrayList(ctlName, dbName,
+                            getSourceTableName(sourceNameWithMetaName));
+                    return PartitionValues.create(nameParts);
+                default:
+                    throw new AnalysisException("Unsupported meta function type: " + this);
+            }
+        }
+
+        public TableValuedFunctionRef createFunctionRef(String ctlName, String dbName, String sourceNameWithMetaName) {
+            switch (this) {
+                case PARTITIONS:
+                    Map<String, String> params = Maps.newHashMap();
+                    params.put("catalog", ctlName);
+                    params.put("database", dbName);
+                    params.put("table", getSourceTableName(sourceNameWithMetaName));
+                    try {
+                        return new TableValuedFunctionRef(tvfName, null, params);
+                    } catch (org.apache.doris.common.AnalysisException e) {
+                        LOG.warn("should not happen. {}.{}.{}", ctlName, dbName, sourceNameWithMetaName);
+                        return null;
+                    }
+                default:
+                    throw new AnalysisException("Unsupported meta function type: " + this);
+            }
+        }
+    }
 }
+
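The `MetaTableFunction` enum above drives the `tbl$partitions` name split: a table name maps to a meta table only when a non-empty source name precedes the `$partitions` suffix. A standalone sketch of that rule (not the Doris class itself; the class name `MetaTableNameSketch` is invented here for illustration):

```java
// Sketch of the "$partitions" suffix rule implemented by MetaTableFunction above.
public class MetaTableNameSketch {
    static final String SUFFIX = "$partitions";

    // True only when a non-empty source table name precedes the suffix.
    static boolean isMetaTable(String tableName) {
        return tableName.endsWith(SUFFIX) && tableName.length() > SUFFIX.length();
    }

    // Strip the suffix to recover the source table name.
    static String sourceTableName(String tableName) {
        return tableName.substring(0, tableName.length() - SUFFIX.length());
    }

    public static void main(String[] args) {
        if (!isMetaTable("tbl$partitions")) throw new AssertionError();
        if (isMetaTable("$partitions")) throw new AssertionError(); // no source table
        if (isMetaTable("tbl")) throw new AssertionError();         // no suffix
        if (!sourceTableName("tbl$partitions").equals("tbl")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

The length check is what makes a bare `$partitions` fall through to normal table resolution instead of being treated as a meta table.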
diff --git a/fe/fe-core/src/main/java/org/apache/doris/datasource/hive/HMSExternalTable.java b/fe/fe-core/src/main/java/org/apache/doris/datasource/hive/HMSExternalTable.java
index 1c1a28242f4..6179bf5f19c 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/datasource/hive/HMSExternalTable.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/datasource/hive/HMSExternalTable.java
@@ -922,6 +922,6 @@ public class HMSExternalTable extends ExternalTable implements MTMVRelatedTableI
     @Override
     public boolean isPartitionedTable() {
         makeSureInitialized();
-        return remoteTable.getPartitionKeysSize() > 0;
+        return !isView() && remoteTable.getPartitionKeysSize() > 0;
     }
 }
diff --git a/fe/fe-core/src/main/java/org/apache/doris/nereids/parser/LogicalPlanBuilder.java b/fe/fe-core/src/main/java/org/apache/doris/nereids/parser/LogicalPlanBuilder.java
index 3e5ba15b0f7..3b4c8ef0674 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/nereids/parser/LogicalPlanBuilder.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/nereids/parser/LogicalPlanBuilder.java
@@ -1407,7 +1407,7 @@ public class LogicalPlanBuilder extends DorisParserBaseVisitor<Object> {
 
     @Override
     public LogicalPlan visitTableName(TableNameContext ctx) {
-        List<String> tableId = visitMultipartIdentifier(ctx.multipartIdentifier());
+        List<String> nameParts = visitMultipartIdentifier(ctx.multipartIdentifier());
         List<String> partitionNames = new ArrayList<>();
         boolean isTempPart = false;
         if (ctx.specifiedPartition() != null) {
@@ -1455,8 +1455,9 @@ public class LogicalPlanBuilder extends DorisParserBaseVisitor<Object> {
 
         TableSample tableSample = ctx.sample() == null ? null : (TableSample) visit(ctx.sample());
         UnboundRelation relation = new UnboundRelation(StatementScopeIdGenerator.newRelationId(),
-                        tableId, partitionNames, isTempPart, tabletIdLists, relationHints,
-                        Optional.ofNullable(tableSample), indexName, scanParams, Optional.ofNullable(tableSnapshot));
+                nameParts, partitionNames, isTempPart, tabletIdLists, relationHints,
+                Optional.ofNullable(tableSample), indexName, scanParams, Optional.ofNullable(tableSnapshot));
+
         LogicalPlan checkedRelation = LogicalPlanBuilderAssistant.withCheckPolicy(relation);
         LogicalPlan plan = withTableAlias(checkedRelation, ctx.tableAlias());
         for (LateralViewContext lateralViewContext : ctx.lateralView()) {
diff --git a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/analysis/BindRelation.java b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/analysis/BindRelation.java
index c81fcc25b83..fee31b68fe4 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/analysis/BindRelation.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/nereids/rules/analysis/BindRelation.java
@@ -65,6 +65,7 @@ import org.apache.doris.nereids.trees.expressions.functions.agg.Max;
 import org.apache.doris.nereids.trees.expressions.functions.agg.Min;
 import org.apache.doris.nereids.trees.expressions.functions.agg.QuantileUnion;
 import org.apache.doris.nereids.trees.expressions.functions.agg.Sum;
+import org.apache.doris.nereids.trees.expressions.functions.table.TableValuedFunction;
 import org.apache.doris.nereids.trees.expressions.literal.TinyIntLiteral;
 import org.apache.doris.nereids.trees.plans.Plan;
 import org.apache.doris.nereids.trees.plans.PreAggStatus;
@@ -81,6 +82,7 @@ import org.apache.doris.nereids.trees.plans.logical.LogicalOlapScan;
 import org.apache.doris.nereids.trees.plans.logical.LogicalPlan;
 import org.apache.doris.nereids.trees.plans.logical.LogicalSchemaScan;
 import org.apache.doris.nereids.trees.plans.logical.LogicalSubQueryAlias;
+import org.apache.doris.nereids.trees.plans.logical.LogicalTVFRelation;
 import org.apache.doris.nereids.trees.plans.logical.LogicalTestScan;
 import org.apache.doris.nereids.trees.plans.logical.LogicalView;
 import org.apache.doris.nereids.util.RelationUtil;
@@ -366,14 +368,32 @@ public class BindRelation extends OneAnalysisRuleFactory {
         return scan;
     }
 
+    private Optional<LogicalPlan> handleMetaTable(TableIf table, UnboundRelation unboundRelation,
+            List<String> qualifiedTableName) {
+        Optional<TableValuedFunction> tvf = table.getDatabase().getCatalog().getMetaTableFunction(
+                qualifiedTableName.get(1), qualifiedTableName.get(2));
+        if (tvf.isPresent()) {
+            return Optional.of(new LogicalTVFRelation(unboundRelation.getRelationId(), tvf.get()));
+        }
+        return Optional.empty();
+    }
+
     private LogicalPlan getLogicalPlan(TableIf table, UnboundRelation unboundRelation,
-                                               List<String> qualifiedTableName, CascadesContext cascadesContext) {
-        // for create view stmt replace tablNname to ctl.db.tableName
+                                       List<String> qualifiedTableName, CascadesContext cascadesContext) {
+        // for create view stmt replace tableName to ctl.db.tableName
         unboundRelation.getIndexInSqlString().ifPresent(pair -> {
             StatementContext statementContext = cascadesContext.getStatementContext();
             statementContext.addIndexInSqlToString(pair,
                     Utils.qualifiedNameWithBackquote(qualifiedTableName));
         });
+
+        // Handle meta table like "table_name$partitions"
+        // qualifiedTableName should be like "ctl.db.tbl$partitions"
+        Optional<LogicalPlan> logicalPlan = handleMetaTable(table, unboundRelation, qualifiedTableName);
+        if (logicalPlan.isPresent()) {
+            return logicalPlan.get();
+        }
+
         List<String> qualifierWithoutTableName = Lists.newArrayList();
         qualifierWithoutTableName.addAll(qualifiedTableName.subList(0, qualifiedTableName.size() - 1));
         boolean isView = false;
diff --git a/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/table/PartitionValues.java b/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/table/PartitionValues.java
new file mode 100644
index 00000000000..0389bb62637
--- /dev/null
+++ b/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/functions/table/PartitionValues.java
@@ -0,0 +1,76 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.nereids.trees.expressions.functions.table;
+
+import org.apache.doris.catalog.FunctionSignature;
+import org.apache.doris.nereids.exceptions.AnalysisException;
+import org.apache.doris.nereids.trees.expressions.Properties;
+import org.apache.doris.nereids.trees.expressions.visitor.ExpressionVisitor;
+import org.apache.doris.nereids.types.coercion.AnyDataType;
+import org.apache.doris.tablefunction.PartitionValuesTableValuedFunction;
+import org.apache.doris.tablefunction.TableValuedFunctionIf;
+
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Maps;
+
+import java.util.List;
+import java.util.Map;
+
+/**
+ * partition_values
+ */
+public class PartitionValues extends TableValuedFunction {
+    public PartitionValues(Properties properties) {
+        super("partition_values", properties);
+    }
+
+    /**
+     * Create PartitionValues function.
+     * @param qualifiedTableName ctl.db.tbl
+     * @return PartitionValues function
+     */
+    public static TableValuedFunction create(List<String> qualifiedTableName) {
+        Preconditions.checkArgument(qualifiedTableName != null && qualifiedTableName.size() == 3);
+        Map<String, String> parameters = Maps.newHashMap();
+        parameters.put(PartitionValuesTableValuedFunction.CATALOG, qualifiedTableName.get(0));
+        parameters.put(PartitionValuesTableValuedFunction.DB, qualifiedTableName.get(1));
+        parameters.put(PartitionValuesTableValuedFunction.TABLE, qualifiedTableName.get(2));
+        return new PartitionValues(new Properties(parameters));
+    }
+
+    @Override
+    public FunctionSignature customSignature() {
+        return FunctionSignature.of(AnyDataType.INSTANCE_WITHOUT_INDEX, getArgumentsTypes());
+    }
+
+    @Override
+    protected TableValuedFunctionIf toCatalogFunction() {
+        try {
+            Map<String, String> arguments = getTVFProperties().getMap();
+            return new PartitionValuesTableValuedFunction(arguments);
+        } catch (Throwable t) {
+            throw new AnalysisException("Can not build PartitionValuesTableValuedFunction by "
+                    + this + ": " + t.getMessage(), t);
+        }
+    }
+
+    @Override
+    public <R, C> R accept(ExpressionVisitor<R, C> visitor, C context) {
+        return visitor.visitPartitionValues(this, context);
+    }
+}
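`PartitionValues.create` above builds the `catalog`/`database`/`table` parameter map that the catalog-side TVF later validates: keys are case-insensitive, only those three are accepted, and all three are required. A standalone sketch of that validation rule (the class name `TvfParamsSketch` is invented here, and `IllegalArgumentException` stands in for Doris's `AnalysisException`):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the property validation performed when the partition_values TVF is built.
public class TvfParamsSketch {
    static Map<String, String> validate(Map<String, String> params) {
        Map<String, String> valid = new HashMap<>();
        for (Map.Entry<String, String> e : params.entrySet()) {
            String key = e.getKey().toLowerCase();
            // Reject anything other than the three known properties.
            if (!key.equals("catalog") && !key.equals("database") && !key.equals("table")) {
                throw new IllegalArgumentException("'" + e.getKey() + "' is invalid property");
            }
            valid.put(key, e.getValue());
        }
        // All three properties must be present and non-empty.
        for (String required : new String[] {"catalog", "database", "table"}) {
            String v = valid.get(required);
            if (v == null || v.isEmpty()) {
                throw new IllegalArgumentException("catalog, database and table are required");
            }
        }
        return valid;
    }

    public static void main(String[] args) {
        Map<String, String> in = new HashMap<>();
        in.put("Catalog", "hive");
        in.put("database", "db1");
        in.put("TABLE", "t1");
        Map<String, String> out = validate(in);
        if (!out.get("catalog").equals("hive")) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Lowercasing the keys up front is what lets users write the properties in any case while the rest of the code looks them up with fixed constants.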
diff --git a/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/visitor/TableValuedFunctionVisitor.java b/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/visitor/TableValuedFunctionVisitor.java
index ca14edd87de..0b4b57e11dc 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/visitor/TableValuedFunctionVisitor.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/nereids/trees/expressions/visitor/TableValuedFunctionVisitor.java
@@ -29,6 +29,7 @@ import org.apache.doris.nereids.trees.expressions.functions.table.Jobs;
 import org.apache.doris.nereids.trees.expressions.functions.table.Local;
 import org.apache.doris.nereids.trees.expressions.functions.table.MvInfos;
 import org.apache.doris.nereids.trees.expressions.functions.table.Numbers;
+import org.apache.doris.nereids.trees.expressions.functions.table.PartitionValues;
 import org.apache.doris.nereids.trees.expressions.functions.table.Partitions;
 import org.apache.doris.nereids.trees.expressions.functions.table.Query;
 import org.apache.doris.nereids.trees.expressions.functions.table.S3;
@@ -59,6 +60,10 @@ public interface TableValuedFunctionVisitor<R, C> {
         return visitTableValuedFunction(partitions, context);
     }
 
+    default R visitPartitionValues(PartitionValues partitionValues, C context) {
+        return visitTableValuedFunction(partitionValues, context);
+    }
+
     default R visitJobs(Jobs jobs, C context) {
         return visitTableValuedFunction(jobs, context);
     }
diff --git a/fe/fe-core/src/main/java/org/apache/doris/nereids/util/RelationUtil.java b/fe/fe-core/src/main/java/org/apache/doris/nereids/util/RelationUtil.java
index b145338ff81..b72498a2227 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/nereids/util/RelationUtil.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/nereids/util/RelationUtil.java
@@ -100,8 +100,10 @@ public class RelationUtil {
         try {
             DatabaseIf<TableIf> db = catalog.getDb(dbName).orElseThrow(() -> new AnalysisException(
                     "Database [" + dbName + "] does not exist."));
-            TableIf table = db.getTable(tableName).orElseThrow(() -> new AnalysisException(
-                    "Table [" + tableName + "] does not exist in database [" + dbName + "]."));
+            Pair<String, String> sourceTblNameWithMetaTblName = catalog.getSourceTableNameWithMetaTableName(tableName);
+            String sourceTableName = sourceTblNameWithMetaTblName.first;
+            TableIf table = db.getTable(sourceTableName).orElseThrow(() -> new AnalysisException(
+                    "Table [" + sourceTableName + "] does not exist in database [" + dbName + "]."));
             return Pair.of(db, table);
         } catch (Throwable e) {
             throw new AnalysisException(e.getMessage(), e.getCause());
diff --git a/fe/fe-core/src/main/java/org/apache/doris/tablefunction/MetadataGenerator.java b/fe/fe-core/src/main/java/org/apache/doris/tablefunction/MetadataGenerator.java
index 103c659a990..ba73e5f44ef 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/tablefunction/MetadataGenerator.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/tablefunction/MetadataGenerator.java
@@ -30,10 +30,13 @@ import org.apache.doris.catalog.OlapTable;
 import org.apache.doris.catalog.Partition;
 import org.apache.doris.catalog.PartitionItem;
 import org.apache.doris.catalog.PartitionType;
+import org.apache.doris.catalog.ScalarType;
 import org.apache.doris.catalog.SchemaTable;
 import org.apache.doris.catalog.Table;
 import org.apache.doris.catalog.TableIf;
+import org.apache.doris.catalog.TableIf.TableType;
 import org.apache.doris.catalog.TableProperty;
+import org.apache.doris.catalog.Type;
 import org.apache.doris.common.AnalysisException;
 import org.apache.doris.common.ClientPool;
 import org.apache.doris.common.Pair;
@@ -42,11 +45,14 @@ import org.apache.doris.common.proc.FrontendsProcNode;
 import org.apache.doris.common.proc.PartitionsProcDir;
 import org.apache.doris.common.util.NetUtils;
 import org.apache.doris.common.util.TimeUtils;
+import org.apache.doris.common.util.Util;
 import org.apache.doris.datasource.CatalogIf;
 import org.apache.doris.datasource.ExternalCatalog;
 import org.apache.doris.datasource.ExternalMetaCacheMgr;
 import org.apache.doris.datasource.InternalCatalog;
+import org.apache.doris.datasource.TablePartitionValues;
 import org.apache.doris.datasource.hive.HMSExternalCatalog;
+import org.apache.doris.datasource.hive.HMSExternalTable;
 import org.apache.doris.datasource.hive.HiveMetaStoreCache;
 import org.apache.doris.datasource.hudi.source.HudiCachedPartitionProcessor;
 import org.apache.doris.datasource.iceberg.IcebergExternalCatalog;
@@ -78,6 +84,7 @@ import org.apache.doris.thrift.TMaterializedViewsMetadataParams;
 import org.apache.doris.thrift.TMetadataTableRequestParams;
 import org.apache.doris.thrift.TMetadataType;
 import org.apache.doris.thrift.TNetworkAddress;
+import org.apache.doris.thrift.TPartitionValuesMetadataParams;
 import org.apache.doris.thrift.TPartitionsMetadataParams;
 import org.apache.doris.thrift.TPipelineWorkloadGroup;
 import org.apache.doris.thrift.TRow;
@@ -204,7 +211,8 @@ public class MetadataGenerator {
         }
         TFetchSchemaTableDataResult result;
         TMetadataTableRequestParams params = request.getMetadaTableParams();
-        switch (request.getMetadaTableParams().getMetadataType()) {
+        TMetadataType metadataType = request.getMetadaTableParams().getMetadataType();
+        switch (metadataType) {
             case ICEBERG:
                 result = icebergMetadataResult(params);
                 break;
@@ -232,11 +240,17 @@ public class MetadataGenerator {
             case TASKS:
                 result = taskMetadataResult(params);
                 break;
+            case PARTITION_VALUES:
+                result = partitionValuesMetadataResult(params);
+                break;
             default:
                 return errorResult("Metadata table params is not set.");
         }
         if (result.getStatus().getStatusCode() == TStatusCode.OK) {
-            filterColumns(result, params.getColumnsName(), params.getMetadataType(), params);
+            if (metadataType != TMetadataType.PARTITION_VALUES) {
+                // partition_values' result already sorted by column names
+                filterColumns(result, params.getColumnsName(), params.getMetadataType(), params);
+            }
         }
         if (LOG.isDebugEnabled()) {
             LOG.debug("getMetadataTable() end.");
@@ -329,7 +343,8 @@ public class MetadataGenerator {
                     TRow trow = new TRow();
                     LocalDateTime committedAt = LocalDateTime.ofInstant(Instant.ofEpochMilli(
                             snapshot.timestampMillis()), TimeUtils.getTimeZone().toZoneId());
-                    long encodedDatetime = convertToDateTimeV2(committedAt.getYear(), committedAt.getMonthValue(),
+                    long encodedDatetime = TimeUtils.convertToDateTimeV2(committedAt.getYear(),
+                            committedAt.getMonthValue(),
                             committedAt.getDayOfMonth(), committedAt.getHour(), committedAt.getMinute(),
                             committedAt.getSecond(), committedAt.getNano() / 1000);
 
@@ -785,12 +800,6 @@ public class MetadataGenerator {
         result.setDataBatch(filterColumnsRows);
     }
 
-    private static long convertToDateTimeV2(
-            int year, int month, int day, int hour, int minute, int second, int microsecond) {
-        return (long) microsecond | (long) second << 20 | (long) minute << 26 | (long) hour << 32
-                | (long) day << 37 | (long) month << 42 | (long) year << 46;
-    }
-
     private static TFetchSchemaTableDataResult mtmvMetadataResult(TMetadataTableRequestParams params) {
         if (LOG.isDebugEnabled()) {
             LOG.debug("mtmvMetadataResult() start");
@@ -1476,4 +1485,120 @@ public class MetadataGenerator {
             }
         }
     }
+
+    private static TFetchSchemaTableDataResult partitionValuesMetadataResult(TMetadataTableRequestParams params) {
+        if (!params.isSetPartitionValuesMetadataParams()) {
+            return errorResult("partition values metadata params is not set.");
+        }
+
+        TPartitionValuesMetadataParams metaParams = params.getPartitionValuesMetadataParams();
+        String ctlName = metaParams.getCatalog();
+        String dbName = metaParams.getDatabase();
+        String tblName = metaParams.getTable();
+        List<TRow> dataBatch;
+        try {
+            TableIf table = PartitionValuesTableValuedFunction.analyzeAndGetTable(ctlName, dbName, tblName, false);
+            TableType tableType = table.getType();
+            switch (tableType) {
+                case HMS_EXTERNAL_TABLE:
+                    dataBatch = partitionValuesMetadataResultForHmsTable((HMSExternalTable) table,
+                            params.getColumnsName());
+                    break;
+                default:
+                    return errorResult("not support table type " + tableType.name());
+            }
+            TFetchSchemaTableDataResult result = new TFetchSchemaTableDataResult();
+            result.setDataBatch(dataBatch);
+            result.setStatus(new TStatus(TStatusCode.OK));
+            return result;
+        } catch (Throwable t) {
+            LOG.warn("error when get partition values metadata. {}.{}.{}", ctlName, dbName, tblName, t);
+            return errorResult("error when get partition values metadata: " + Util.getRootCauseMessage(t));
+        }
+    }
+
+    private static List<TRow> partitionValuesMetadataResultForHmsTable(HMSExternalTable tbl, List<String> colNames)
+            throws AnalysisException {
+        List<Column> partitionCols = tbl.getPartitionColumns();
+        List<Integer> colIdxs = Lists.newArrayList();
+        List<Type> types = Lists.newArrayList();
+        for (String colName : colNames) {
+            for (int i = 0; i < partitionCols.size(); ++i) {
+                if (partitionCols.get(i).getName().equalsIgnoreCase(colName)) {
+                    colIdxs.add(i);
+                    types.add(partitionCols.get(i).getType());
+                }
+            }
+        }
+        if (colIdxs.size() != colNames.size()) {
+            throw new AnalysisException(
+                    "column " + colNames + " does not match partition columns of table " + tbl.getName());
+        }
+
+        HiveMetaStoreCache cache = Env.getCurrentEnv().getExtMetaCacheMgr()
+                .getMetaStoreCache((HMSExternalCatalog) tbl.getCatalog());
+        HiveMetaStoreCache.HivePartitionValues hivePartitionValues = cache.getPartitionValues(
+                tbl.getDbName(), tbl.getName(), tbl.getPartitionColumnTypes());
+        Map<Long, List<String>> valuesMap = hivePartitionValues.getPartitionValuesMap();
+        List<TRow> dataBatch = Lists.newArrayList();
+        for (Map.Entry<Long, List<String>> entry : valuesMap.entrySet()) {
+            TRow trow = new TRow();
+            List<String> values = entry.getValue();
+            if (values.size() != partitionCols.size()) {
+                continue;
+            }
+
+            for (int i = 0; i < colIdxs.size(); ++i) {
+                int idx = colIdxs.get(i);
+                String partitionValue = values.get(idx);
+                if (partitionValue == null || partitionValue.equals(TablePartitionValues.HIVE_DEFAULT_PARTITION)) {
+                    trow.addToColumnValue(new TCell().setIsNull(true));
+                } else {
+                    Type type = types.get(i);
+                    switch (type.getPrimitiveType()) {
+                        case BOOLEAN:
+                            trow.addToColumnValue(new TCell().setBoolVal(Boolean.valueOf(partitionValue)));
+                            break;
+                        case TINYINT:
+                        case SMALLINT:
+                        case INT:
+                            trow.addToColumnValue(new TCell().setIntVal(Integer.valueOf(partitionValue)));
+                            break;
+                        case BIGINT:
+                            trow.addToColumnValue(new TCell().setLongVal(Long.valueOf(partitionValue)));
+                            break;
+                        case FLOAT:
+                            trow.addToColumnValue(new TCell().setDoubleVal(Float.valueOf(partitionValue)));
+                            break;
+                        case DOUBLE:
+                            trow.addToColumnValue(new TCell().setDoubleVal(Double.valueOf(partitionValue)));
+                            break;
+                        case VARCHAR:
+                        case CHAR:
+                        case STRING:
+                            trow.addToColumnValue(new TCell().setStringVal(partitionValue));
+                            break;
+                        case DATE:
+                        case DATEV2:
+                            trow.addToColumnValue(
+                                    new TCell().setLongVal(TimeUtils.convertStringToDateV2(partitionValue)));
+                            break;
+                        case DATETIME:
+                        case DATETIMEV2:
+                            trow.addToColumnValue(
+                                    new TCell().setLongVal(TimeUtils.convertStringToDateTimeV2(partitionValue,
+                                            ((ScalarType) type).getScalarScale())));
+                            break;
+                        default:
+                            throw new AnalysisException(
+                                    "Unsupported partition column type for $partitions sys table " + type);
+                    }
+                }
+            }
+            dataBatch.add(trow);
+        }
+        return dataBatch;
+    }
+
 }
+
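The hunks above also replace the local `convertToDateTimeV2` helper with the shared `TimeUtils` version. The bit layout it packs, visible in the removed lines, reserves 20 bits for microseconds, then 6/6/5/5/4 bits for second/minute/hour/day/month, with the year in the top bits. A standalone round-trip sketch (the class name `DateTimeV2Sketch` and the decode side are added here purely for illustration and are not part of the patch):

```java
// Sketch of the datetimeV2 bit packing used by convertToDateTimeV2:
// microsecond(20) | second(6) | minute(6) | hour(5) | day(5) | month(4) | year(top bits).
public class DateTimeV2Sketch {
    static long encode(int year, int month, int day, int hour, int minute, int second, int microsecond) {
        return (long) microsecond | (long) second << 20 | (long) minute << 26 | (long) hour << 32
                | (long) day << 37 | (long) month << 42 | (long) year << 46;
    }

    // Decode by shifting each field back out and masking to its width.
    static int[] decode(long v) {
        return new int[] {
                (int) (v >>> 46),          // year
                (int) (v >>> 42) & 0xF,    // month
                (int) (v >>> 37) & 0x1F,   // day
                (int) (v >>> 32) & 0x1F,   // hour
                (int) (v >>> 26) & 0x3F,   // minute
                (int) (v >>> 20) & 0x3F,   // second
                (int) (v & 0xFFFFF)        // microsecond
        };
    }

    public static void main(String[] args) {
        long v = encode(2024, 9, 24, 16, 58, 22, 0);
        int[] parts = decode(v);
        int[] expected = {2024, 9, 24, 16, 58, 22, 0};
        for (int i = 0; i < expected.length; i++) {
            if (parts[i] != expected[i]) throw new AssertionError();
        }
        System.out.println("ok");
    }
}
```

Because each field is shifted into non-overlapping bit ranges, the packed value also sorts in chronological order when compared as an unsigned long.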
diff --git a/fe/fe-core/src/main/java/org/apache/doris/tablefunction/PartitionValuesTableValuedFunction.java b/fe/fe-core/src/main/java/org/apache/doris/tablefunction/PartitionValuesTableValuedFunction.java
new file mode 100644
index 00000000000..59efe9fcefd
--- /dev/null
+++ b/fe/fe-core/src/main/java/org/apache/doris/tablefunction/PartitionValuesTableValuedFunction.java
@@ -0,0 +1,180 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.doris.tablefunction;
+
+import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.DatabaseIf;
+import org.apache.doris.catalog.Env;
+import org.apache.doris.catalog.TableIf;
+import org.apache.doris.catalog.TableIf.TableType;
+import org.apache.doris.common.ErrorCode;
+import org.apache.doris.common.MetaNotFoundException;
+import org.apache.doris.datasource.CatalogIf;
+import org.apache.doris.datasource.hive.HMSExternalCatalog;
+import org.apache.doris.datasource.hive.HMSExternalTable;
+import org.apache.doris.datasource.maxcompute.MaxComputeExternalCatalog;
+import org.apache.doris.mysql.privilege.PrivPredicate;
+import org.apache.doris.nereids.exceptions.AnalysisException;
+import org.apache.doris.qe.ConnectContext;
+import org.apache.doris.thrift.TMetaScanRange;
+import org.apache.doris.thrift.TMetadataType;
+import org.apache.doris.thrift.TPartitionValuesMetadataParams;
+
+import com.google.common.base.Preconditions;
+import com.google.common.collect.ImmutableSet;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+
+/**
+ * The implementation of table valued function
+ * partition_values("catalog"="ctl1", "database" = "db1","table" = "table1").
+ */
+public class PartitionValuesTableValuedFunction extends MetadataTableValuedFunction {
+    private static final Logger LOG = LogManager.getLogger(PartitionValuesTableValuedFunction.class);
+
+    public static final String NAME = "partition_values";
+
+    public static final String CATALOG = "catalog";
+    public static final String DB = "database";
+    public static final String TABLE = "table";
+
+    private static final ImmutableSet<String> PROPERTIES_SET = ImmutableSet.of(CATALOG, DB, TABLE);
+
+    private final String catalogName;
+    private final String databaseName;
+    private final String tableName;
+    private TableIf table;
+    private List<Column> schema;
+
+    public PartitionValuesTableValuedFunction(Map<String, String> params) throws AnalysisException {
+        Map<String, String> validParams = Maps.newHashMap();
+        for (String key : params.keySet()) {
+            if (!PROPERTIES_SET.contains(key.toLowerCase())) {
+                throw new AnalysisException("'" + key + "' is invalid property");
+            }
+            // check ctl, db, tbl
+            validParams.put(key.toLowerCase(), params.get(key));
+        }
+        String catalogName = validParams.get(CATALOG);
+        String dbName = validParams.get(DB);
+        String tableName = validParams.get(TABLE);
+        if (StringUtils.isEmpty(catalogName) || StringUtils.isEmpty(dbName) || StringUtils.isEmpty(tableName)) {
+            throw new AnalysisException("catalog, database and table are required");
+        }
+        this.table = analyzeAndGetTable(catalogName, dbName, tableName, true);
+        this.catalogName = catalogName;
+        this.databaseName = dbName;
+        this.tableName = tableName;
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("PartitionValuesTableValuedFunction() end");
+        }
+    }
+
+    public static TableIf analyzeAndGetTable(String catalogName, String dbName, String tableName, boolean checkAuth) {
+        if (checkAuth) {
+            // This method will be called at 2 places:
+            // One is when planning the query, which should check the privilege of the user.
+            // The other is when BE calls FE to fetch partition values, which should not check the privilege.
+            if (!Env.getCurrentEnv().getAccessManager()
+                    .checkTblPriv(ConnectContext.get(), catalogName, dbName,
+                            tableName, PrivPredicate.SHOW)) {
+                String message = ErrorCode.ERR_TABLEACCESS_DENIED_ERROR.formatErrorMsg("SHOW PARTITIONS",
+                        ConnectContext.get().getQualifiedUser(), ConnectContext.get().getRemoteIP(),
+                        catalogName + ": " + dbName + ": " + tableName);
+                throw new AnalysisException(message);
+            }
+        }
+        CatalogIf catalog = 
Env.getCurrentEnv().getCatalogMgr().getCatalog(catalogName);
+        if (catalog == null) {
+            throw new AnalysisException("can not find catalog: " + 
catalogName);
+        }
+        // disallow unsupported catalog
+        if (!(catalog.isInternalCatalog() || catalog instanceof 
HMSExternalCatalog
+                || catalog instanceof MaxComputeExternalCatalog)) {
+            throw new AnalysisException(String.format("Catalog of type '%s' is 
not allowed in ShowPartitionsStmt",
+                    catalog.getType()));
+        }
+
+        Optional<DatabaseIf> db = catalog.getDb(dbName);
+        if (!db.isPresent()) {
+            throw new AnalysisException("can not find database: " + dbName);
+        }
+        TableIf table;
+        try {
+            table = db.get().getTableOrMetaException(tableName, TableType.OLAP,
+                    TableType.HMS_EXTERNAL_TABLE, 
TableType.MAX_COMPUTE_EXTERNAL_TABLE);
+        } catch (MetaNotFoundException e) {
+            throw new AnalysisException(e.getMessage(), e);
+        }
+
+        if (!(table instanceof HMSExternalTable)) {
+            throw new AnalysisException("Currently only support hive table's 
partition values meta table");
+        }
+        HMSExternalTable hmsTable = (HMSExternalTable) table;
+        if (!hmsTable.isPartitionedTable()) {
+            throw new AnalysisException("Table " + tableName + " is not a 
partitioned table");
+        }
+        return table;
+    }
+
+    @Override
+    public TMetadataType getMetadataType() {
+        return TMetadataType.PARTITION_VALUES;
+    }
+
+    @Override
+    public TMetaScanRange getMetaScanRange() {
+        if (LOG.isDebugEnabled()) {
+            LOG.debug("getMetaScanRange() start");
+        }
+        TMetaScanRange metaScanRange = new TMetaScanRange();
+        metaScanRange.setMetadataType(TMetadataType.PARTITION_VALUES);
+        TPartitionValuesMetadataParams partitionParam = new 
TPartitionValuesMetadataParams();
+        partitionParam.setCatalog(catalogName);
+        partitionParam.setDatabase(databaseName);
+        partitionParam.setTable(tableName);
+        metaScanRange.setPartitionValuesParams(partitionParam);
+        return metaScanRange;
+    }
+
+    @Override
+    public String getTableName() {
+        return "PartitionsValuesTableValuedFunction";
+    }
+
+    @Override
+    public List<Column> getTableColumns() throws AnalysisException {
+        Preconditions.checkNotNull(table);
+        // TODO: support other type of sys tables
+        if (schema == null) {
+            List<Column> partitionColumns = ((HMSExternalTable) 
table).getPartitionColumns();
+            schema = Lists.newArrayList();
+            for (Column column : partitionColumns) {
+                schema.add(new Column(column));
+            }
+        }
+        return schema;
+    }
+}
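
The constructor above accepts the `catalog`, `database`, and `table` properties case-insensitively and requires all three to be present and non-empty. A standalone sketch of that validation logic (plain Java outside the Doris codebase; the class name and use of `IllegalArgumentException` here are illustrative, not the Doris API):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TvfParamCheck {
    // Mirrors PROPERTIES_SET in the TVF: the only accepted keys.
    private static final Set<String> ALLOWED =
            new HashSet<>(Arrays.asList("catalog", "database", "table"));

    // Keys are matched case-insensitively; all three must be present and non-empty.
    static Map<String, String> validate(Map<String, String> params) {
        Map<String, String> valid = new HashMap<>();
        for (Map.Entry<String, String> e : params.entrySet()) {
            String key = e.getKey().toLowerCase();
            if (!ALLOWED.contains(key)) {
                throw new IllegalArgumentException("'" + e.getKey() + "' is an invalid property");
            }
            valid.put(key, e.getValue());
        }
        for (String required : ALLOWED) {
            String value = valid.get(required);
            if (value == null || value.isEmpty()) {
                throw new IllegalArgumentException("catalog, database and table are required");
            }
        }
        return valid;
    }
}
```

For example, `validate(Map.of("CATALOG", "ctl1", "database", "db1", "table", "t1"))` normalizes the mixed-case key and passes, while omitting any of the three keys throws.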
diff --git a/fe/fe-core/src/main/java/org/apache/doris/tablefunction/PartitionsTableValuedFunction.java b/fe/fe-core/src/main/java/org/apache/doris/tablefunction/PartitionsTableValuedFunction.java
index 1ceddeb89cf..00169ad555f 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/tablefunction/PartitionsTableValuedFunction.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/tablefunction/PartitionsTableValuedFunction.java
@@ -55,7 +55,7 @@ import java.util.Optional;
 
 /**
  * The Implement of table valued function
- * partitions("database" = "db1","table" = "table1").
+ * partitions("catalog"="ctl1", "database" = "db1","table" = "table1").
  */
 public class PartitionsTableValuedFunction extends MetadataTableValuedFunction {
     private static final Logger LOG = LogManager.getLogger(PartitionsTableValuedFunction.class);
diff --git a/fe/fe-core/src/main/java/org/apache/doris/tablefunction/TableValuedFunctionIf.java b/fe/fe-core/src/main/java/org/apache/doris/tablefunction/TableValuedFunctionIf.java
index 6b6fda088a8..d4faa460195 100644
--- a/fe/fe-core/src/main/java/org/apache/doris/tablefunction/TableValuedFunctionIf.java
+++ b/fe/fe-core/src/main/java/org/apache/doris/tablefunction/TableValuedFunctionIf.java
@@ -77,6 +77,8 @@ public abstract class TableValuedFunctionIf {
                 return new GroupCommitTableValuedFunction(params);
             case QueryTableValueFunction.NAME:
                 return QueryTableValueFunction.createQueryTableValueFunction(params);
+            case PartitionValuesTableValuedFunction.NAME:
+                return new PartitionValuesTableValuedFunction(params);
             default:
                 throw new AnalysisException("Could not find table function " + funcName);
         }
diff --git a/fe/fe-core/src/test/java/org/apache/doris/common/util/TimeUtilsTest.java b/fe/fe-core/src/test/java/org/apache/doris/common/util/TimeUtilsTest.java
index 114ddada9cd..f57e8fe2fe6 100644
--- a/fe/fe-core/src/test/java/org/apache/doris/common/util/TimeUtilsTest.java
+++ b/fe/fe-core/src/test/java/org/apache/doris/common/util/TimeUtilsTest.java
@@ -22,6 +22,7 @@ import org.apache.doris.catalog.PrimitiveType;
 import org.apache.doris.catalog.ScalarType;
 import org.apache.doris.common.AnalysisException;
 import org.apache.doris.common.DdlException;
+import org.apache.doris.common.ExceptionChecker;
 import org.apache.doris.common.FeConstants;
 
 import mockit.Expectations;
@@ -31,6 +32,7 @@ import org.junit.Before;
 import org.junit.Test;
 
 import java.time.ZoneId;
+import java.time.format.DateTimeParseException;
 import java.util.Calendar;
 import java.util.Date;
 import java.util.LinkedList;
@@ -201,4 +203,53 @@ public class TimeUtilsTest {
         Assert.assertNull(TimeUtils.getHourAsDate("111"));
         Assert.assertNull(TimeUtils.getHourAsDate("-1"));
     }
+
+    @Test
+    public void testConvertToBEDateType() {
+        long result = TimeUtils.convertStringToDateV2("2021-01-01");
+        Assert.assertEquals(1034785, result);
+        result = TimeUtils.convertStringToDateV2("1900-01-01");
+        Assert.assertEquals(972833, result);
+        result = TimeUtils.convertStringToDateV2("1899-12-31");
+        Assert.assertEquals(972703, result);
+        result = TimeUtils.convertStringToDateV2("9999-12-31");
+        Assert.assertEquals(5119903, result);
+
+        ExceptionChecker.expectThrows(DateTimeParseException.class, () -> TimeUtils.convertStringToDateV2("2021-1-1"));
+        ExceptionChecker.expectThrows(DateTimeParseException.class, () -> TimeUtils.convertStringToDateV2("1900-01-1"));
+        ExceptionChecker.expectThrows(DateTimeParseException.class, () -> TimeUtils.convertStringToDateV2("20210101"));
+        ExceptionChecker.expectThrows(DateTimeParseException.class, () -> TimeUtils.convertStringToDateV2(""));
+        ExceptionChecker.expectThrows(NullPointerException.class, () -> TimeUtils.convertStringToDateV2(null));
+        ExceptionChecker.expectThrows(DateTimeParseException.class,
+                () -> TimeUtils.convertStringToDateV2("2024:12:31"));
+    }
+
+    @Test
+    public void testConvertToBEDatetimeV2Type() {
+        long result = TimeUtils.convertStringToDateTimeV2("2021-01-01 10:10:10", 0);
+        Assert.assertEquals(142219811099770880L, result);
+        result = TimeUtils.convertStringToDateTimeV2("1900-01-01 00:00:00.12", 2);
+        Assert.assertEquals(133705149423146176L, result);
+        result = TimeUtils.convertStringToDateTimeV2("1899-12-31 23:59:59.000", 3);
+        Assert.assertEquals(133687385164611584L, result);
+        result = TimeUtils.convertStringToDateTimeV2("9999-12-31 23:59:59.123456", 6);
+        Assert.assertEquals(703674213003812984L, result);
+
+        ExceptionChecker.expectThrows(DateTimeParseException.class,
+                () -> TimeUtils.convertStringToDateTimeV2("2021-1-1", 0));
+        ExceptionChecker.expectThrows(DateTimeParseException.class,
+                () -> TimeUtils.convertStringToDateTimeV2("1900-01-1", 0));
+        ExceptionChecker.expectThrows(DateTimeParseException.class,
+                () -> TimeUtils.convertStringToDateTimeV2("20210101", 0));
+        ExceptionChecker.expectThrows(DateTimeParseException.class, () -> TimeUtils.convertStringToDateTimeV2("", 0));
+        ExceptionChecker.expectThrows(NullPointerException.class, () -> TimeUtils.convertStringToDateTimeV2(null, 0));
+        ExceptionChecker.expectThrows(DateTimeParseException.class,
+                () -> TimeUtils.convertStringToDateTimeV2("2024-10-10", 0));
+        ExceptionChecker.expectThrows(DateTimeParseException.class,
+                () -> TimeUtils.convertStringToDateTimeV2("2024-10-10 10", 0));
+        ExceptionChecker.expectThrows(DateTimeParseException.class,
+                () -> TimeUtils.convertStringToDateTimeV2("2024:12:31", 0));
+        ExceptionChecker.expectThrows(DateTimeParseException.class,
+                () -> TimeUtils.convertStringToDateTimeV2("2024-10-10 11:11:11", 6));
+    }
 }
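
The expected constants in the two tests above are consistent with bit-packed DateV2/DateTimeV2 layouts: DateV2 packs `year << 9 | month << 5 | day`, and DateTimeV2 packs `year << 46 | month << 42 | day << 37 | hour << 32 | minute << 26 | second << 20 | microsecond`. A minimal sketch reproducing those constants (plain Java, independent of `TimeUtils`; the layout is inferred from the test values, and the scale-6 case suggests extra fractional truncation in `TimeUtils` that this sketch does not model):

```java
public class DorisDateCodec {
    // DateV2 layout (inferred): year | 4-bit month | 5-bit day.
    static long encodeDateV2(int year, int month, int day) {
        return ((long) year << 9) | ((long) month << 5) | day;
    }

    // DateTimeV2 layout (inferred): year | month(4) | day(5) | hour(5)
    // | minute(6) | second(6) | microsecond(20).
    static long encodeDateTimeV2(int year, int month, int day,
                                 int hour, int minute, int second, int microsecond) {
        return ((long) year << 46) | ((long) month << 42) | ((long) day << 37)
                | ((long) hour << 32) | ((long) minute << 26) | ((long) second << 20)
                | microsecond;
    }
}
```

For instance, `encodeDateV2(2021, 1, 1)` yields `2021*512 + 1*32 + 1 = 1034785`, matching the first assertion, and `encodeDateTimeV2(1900, 1, 1, 0, 0, 0, 120000)` matches the `"1900-01-01 00:00:00.12"` case (`.12` seconds at scale 2 is 120000 microseconds).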
diff --git a/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/analysis/BindRelationTest.java b/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/analysis/BindRelationTest.java
index 67115e67687..369a57017cb 100644
--- a/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/analysis/BindRelationTest.java
+++ b/fe/fe-core/src/test/java/org/apache/doris/nereids/rules/analysis/BindRelationTest.java
@@ -18,6 +18,8 @@
 package org.apache.doris.nereids.rules.analysis;
 
 import org.apache.doris.catalog.Column;
+import org.apache.doris.catalog.Database;
+import org.apache.doris.catalog.DatabaseIf;
 import org.apache.doris.catalog.KeysType;
 import org.apache.doris.catalog.OlapTable;
 import org.apache.doris.catalog.PartitionInfo;
@@ -98,6 +100,8 @@ class BindRelationTest extends TestWithFeService implements GeneratedPlanPattern
                 new Column("name", Type.VARCHAR)
         );
 
+        Database externalDatabase = new Database(10000, DEFAULT_CLUSTER_PREFIX + DB1);
+
         OlapTable externalOlapTable = new OlapTable(1, tableName, externalTableColumns, KeysType.DUP_KEYS,
                 new PartitionInfo(), new RandomDistributionInfo(10)) {
             @Override
@@ -109,6 +113,11 @@ class BindRelationTest extends TestWithFeService implements GeneratedPlanPattern
             public boolean hasDeleteSign() {
                 return false;
             }
+
+            @Override
+            public DatabaseIf getDatabase() {
+                return externalDatabase;
+            }
         };
 
         CustomTableResolver customTableResolver = qualifiedTable -> {
diff --git a/gensrc/thrift/Data.thrift b/gensrc/thrift/Data.thrift
index d1163821076..dc1190c6e42 100644
--- a/gensrc/thrift/Data.thrift
+++ b/gensrc/thrift/Data.thrift
@@ -54,6 +54,7 @@ struct TCell {
   3: optional i64 longVal
   4: optional double doubleVal
   5: optional string stringVal
+  6: optional bool isNull
   // add type: date datetime
 }
 
diff --git a/gensrc/thrift/FrontendService.thrift b/gensrc/thrift/FrontendService.thrift
index 3190d331851..32d69ce081c 100644
--- a/gensrc/thrift/FrontendService.thrift
+++ b/gensrc/thrift/FrontendService.thrift
@@ -1029,6 +1029,7 @@ struct TMetadataTableRequestParams {
   10: optional PlanNodes.TTasksMetadataParams tasks_metadata_params
   11: optional PlanNodes.TPartitionsMetadataParams partitions_metadata_params
   12: optional PlanNodes.TMetaCacheStatsParams meta_cache_stats_params
+  13: optional PlanNodes.TPartitionValuesMetadataParams partition_values_metadata_params
 }
 
 struct TSchemaTableRequestParams {
diff --git a/gensrc/thrift/PlanNodes.thrift b/gensrc/thrift/PlanNodes.thrift
index c77ab48b2d4..426c3217918 100644
--- a/gensrc/thrift/PlanNodes.thrift
+++ b/gensrc/thrift/PlanNodes.thrift
@@ -538,6 +538,12 @@ struct TPartitionsMetadataParams {
   3: optional string table
 }
 
+struct TPartitionValuesMetadataParams {
+  1: optional string catalog
+  2: optional string database
+  3: optional string table
+}
+
 struct TJobsMetadataParams {
   1: optional string type
   2: optional Types.TUserIdentity current_user_ident
@@ -555,6 +561,7 @@ struct TQueriesMetadataParams {
   4: optional TJobsMetadataParams jobs_params
   5: optional TTasksMetadataParams tasks_params
   6: optional TPartitionsMetadataParams partitions_params
+  7: optional TPartitionValuesMetadataParams partition_values_params
 }
 
 struct TMetaCacheStatsParams {
@@ -571,6 +578,7 @@ struct TMetaScanRange {
   8: optional TTasksMetadataParams tasks_params
   9: optional TPartitionsMetadataParams partitions_params
   10: optional TMetaCacheStatsParams meta_cache_stats_params
+  11: optional TPartitionValuesMetadataParams partition_values_params
 }
 
 // Specification of an individual data range which is held in its entirety
diff --git a/gensrc/thrift/Types.thrift b/gensrc/thrift/Types.thrift
index 85a88544484..35a78e068de 100644
--- a/gensrc/thrift/Types.thrift
+++ b/gensrc/thrift/Types.thrift
@@ -733,7 +733,8 @@ enum TMetadataType {
   JOBS,
   TASKS,
   WORKLOAD_SCHED_POLICY,
-  PARTITIONS;
+  PARTITIONS,
+  PARTITION_VALUES;
 }
 
 enum TIcebergQueryType {
diff --git a/regression-test/data/external_table_p0/hive/test_hive_partition_values_tvf.out b/regression-test/data/external_table_p0/hive/test_hive_partition_values_tvf.out
new file mode 100644
index 00000000000..bc69c716277
--- /dev/null
+++ b/regression-test/data/external_table_p0/hive/test_hive_partition_values_tvf.out
@@ -0,0 +1,120 @@
+-- This file is automatically generated. You should know what you did if you want to edit this
+-- !sql01 --
+\N     0.1     test1
+\N     0.2     test2
+100    0.3     test3
+
+-- !sql02 --
+\N     0.1     test1
+\N     0.2     test2
+100    0.3     test3
+
+-- !sql03 --
+\N     0.1     test1
+\N     0.2     test2
+100    0.3     test3
+
+-- !sql11 --
+\N     0.1
+\N     0.2
+100    0.3
+
+-- !sql12 --
+0.1    \N
+0.2    \N
+0.3    100
+
+-- !sql13 --
+test1  \N
+test2  \N
+test3  100
+
+-- !sql21 --
+test3  100     0.3
+
+-- !sql22 --
+test1
+test2
+test3
+
+-- !sql22 --
+3
+
+-- !sql22 --
+3
+
+-- !sql31 --
+0.1    \N
+0.2    \N
+0.3    100
+
+-- !sql41 --
+100
+
+-- !sql42 --
+test3
+
+-- !sql51 --
+test3
+
+-- !sql61 --
+
+-- !sql71 --
+\N     0.2     test2
+100    0.3     test3
+\N     0.1     test1
+
+-- !sql72 --
+
+-- !sql73 --
+test3
+
+-- !sql81 --
+100    0.3     test3   100     0.3     test3
+
+-- !sql91 --
+t_int  int     Yes     true    \N      NONE
+t_float        float   Yes     true    \N      NONE
+t_string       varchar(65533)  Yes     true    \N      NONE
+
+-- !sql92 --
+t_int  int     Yes     true    \N      NONE
+t_float        float   Yes     true    \N      NONE
+t_string       varchar(65533)  Yes     true    \N      NONE
+
+-- !sql93 --
+t_int  int     Yes     true    \N      NONE
+
+-- !sql94 --
+t_int  int     Yes     true    \N      NONE
+
+-- !sql95 --
+\N     0.1     test1
+\N     0.2     test2
+100    0.3     test3
+
+-- !sql101 --
+k1     int     Yes     true    \N      
+
+-- !sql102 --
+
+-- !sql111 --
+p1     boolean Yes     true    \N      NONE
+p2     tinyint Yes     true    \N      NONE
+p3     smallint        Yes     true    \N      NONE
+p4     int     Yes     true    \N      NONE
+p5     bigint  Yes     true    \N      NONE
+p6     date    Yes     true    \N      NONE
+p7     datetime(6)     Yes     true    \N      NONE
+p8     varchar(65533)  Yes     true    \N      NONE
+
+-- !sql112 --
+1	test1	true	-128	-32768	-2147483648	-9223372036854775808	1900-01-01	1899-01-01T23:59:59	\N
+2	\N	false	127	32767	2147483647	9223372036854775807	9999-12-31	0001-01-01T00:00:01.321	boston
+3		\N	\N	\N	\N	\N	\N	\N	\N
+
+-- !sql113 --
+\N	\N	\N	\N	\N	\N	\N	\N
+false	-128	-32768	-2147483648	-9223372036854775808	1900-01-01	1899-01-01T23:59:59	\N
+false	127	32767	2147483647	9223372036854775807	9999-12-31	0001-01-01T00:00:01.321	boston
+
diff --git a/regression-test/suites/external_table_p0/hive/test_hive_partition_values_tvf.groovy b/regression-test/suites/external_table_p0/hive/test_hive_partition_values_tvf.groovy
new file mode 100644
index 00000000000..7cb764d0165
--- /dev/null
+++ b/regression-test/suites/external_table_p0/hive/test_hive_partition_values_tvf.groovy
@@ -0,0 +1,142 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+suite("test_hive_partition_values_tvf", "p0,external,hive,external_docker,external_docker_hive") {
+    String enabled = context.config.otherConfigs.get("enableHiveTest")
+    if (enabled == null || !enabled.equalsIgnoreCase("true")) {
+        logger.info("disable Hive test.")
+        return;
+    }
+    for (String hivePrefix : ["hive3"]) {
+        String extHiveHmsHost = context.config.otherConfigs.get("externalEnvIp")
+        String extHiveHmsPort = context.config.otherConfigs.get(hivePrefix + "HmsPort")
+        String catalog_name = "${hivePrefix}_test_external_catalog_hive_partition"
+
+        sql """drop catalog if exists ${catalog_name};"""
+        sql """
+            create catalog if not exists ${catalog_name} properties (
+                'type'='hms',
+                'hive.metastore.uris' = 'thrift://${extHiveHmsHost}:${extHiveHmsPort}'
+            );
+        """
+
+        // 1. test qualifier
+        qt_sql01 """ select * from ${catalog_name}.multi_catalog.orc_partitioned_columns\$partitions order by t_int, t_float, t_string"""
+        sql """ switch ${catalog_name} """
+        qt_sql02 """ select * from multi_catalog.orc_partitioned_columns\$partitions order by t_int, t_float, t_string"""
+        sql """ use multi_catalog"""
+        qt_sql03 """ select * from orc_partitioned_columns\$partitions order by t_int, t_float, t_string"""
+
+        // 2. test select order
+        qt_sql11 """ select * except(t_string) from orc_partitioned_columns\$partitions order by t_int, t_float, t_string"""
+        qt_sql12 """ select t_float, t_int from orc_partitioned_columns\$partitions order by t_int, t_float, t_string"""
+        qt_sql13 """ select t_string, t_int from orc_partitioned_columns\$partitions order by t_int, t_float, t_string"""
+
+        // 3. test agg
+        qt_sql21 """ select max(t_string), max(t_int), max(t_float) from orc_partitioned_columns\$partitions"""
+        qt_sql22 """ select max(t_string) from orc_partitioned_columns\$partitions group by t_int, t_float order by t_int, t_float"""
+        qt_sql22 """ select count(*) from orc_partitioned_columns\$partitions;"""
+        qt_sql22 """ select count(1) from orc_partitioned_columns\$partitions;"""
+
+        // 4. test alias
+        qt_sql31 """ select pv.t_float, pv.t_int from orc_partitioned_columns\$partitions as pv group by t_int, t_float order by t_int, t_float"""
+
+        // 5. test CTE
+        qt_sql41 """ with v1 as (select t_string, t_int from orc_partitioned_columns\$partitions order by t_int, t_float, t_string) select max(t_int) from v1; """
+        qt_sql42 """ with v1 as (select t_string, t_int from orc_partitioned_columns\$partitions order by t_int, t_float, t_string) select c1 from (select max(t_string) as c1 from v1) x; """
+
+        // 6. test subquery
+        qt_sql51 """select c1 from (select max(t_string) as c1 from (select * from multi_catalog.orc_partitioned_columns\$partitions)x)y;"""
+
+        // 7. test where
+        qt_sql61 """select * from orc_partitioned_columns\$partitions where t_int != "__HIVE_DEFAULT_PARTITION__" order by t_int, t_float, t_string; """
+
+        // 8. test view
+        sql """drop database if exists internal.partition_values_db"""
+        sql """create database if not exists internal.partition_values_db"""
+        sql """create view internal.partition_values_db.v1 as select * from ${catalog_name}.multi_catalog.orc_partitioned_columns\$partitions"""
+        qt_sql71 """select * from internal.partition_values_db.v1"""
+        qt_sql72 """select t_string, t_int from internal.partition_values_db.v1 where t_int != "__HIVE_DEFAULT_PARTITION__""""
+        qt_sql73 """with v1 as (select t_string, t_int from internal.partition_values_db.v1 order by t_int, t_float, t_string) select c1 from (select max(t_string) as c1 from v1) x;"""
+
+        // 9. test join
+        qt_sql81 """select * from orc_partitioned_columns\$partitions p1 join orc_partitioned_columns\$partitions p2 on p1.t_int = p2.t_int order by p1.t_int, p1.t_float"""
+
+        // 10. test desc
+        qt_sql91 """desc orc_partitioned_columns\$partitions"""
+        qt_sql92 """desc function partition_values("catalog" = "${catalog_name}", "database" = "multi_catalog", "table" = "orc_partitioned_columns");"""
+        qt_sql93 """desc orc_partitioned_one_column\$partitions"""
+        qt_sql94 """desc function partition_values("catalog" = "${catalog_name}", "database" = "multi_catalog", "table" = "orc_partitioned_one_column");"""
+        qt_sql95 """select * from partition_values("catalog" = "${catalog_name}", "database" = "multi_catalog", "table" = "orc_partitioned_columns") order by t_int, t_float"""
+
+        // 11. test non partition table
+        test {
+            sql """select * from hive_text_complex_type\$partitions"""
+            exception "is not a partitioned table"
+        }
+        test {
+            sql """desc hive_text_complex_type\$partitions"""
+            exception "is not a partitioned table"
+        }
+
+        // 12. test inner table
+        sql """create table internal.partition_values_db.pv_inner1 (k1 int) distributed by hash (k1) buckets 1 properties("replication_num" = "1")"""
+        qt_sql101 """desc internal.partition_values_db.pv_inner1"""
+        qt_sql102 """select * from internal.partition_values_db.pv_inner1"""
+        test {
+            sql """desc internal.partition_values_db.pv_inner1\$partitions"""
+            exception """Unknown table 'pv_inner1\$partitions'"""
+        }
+
+        test {
+            sql """select * from internal.partition_values_db.pv_inner1\$partitions"""
+            exception """Table [pv_inner1\$partitions] does not exist in database [partition_values_db]"""
+        }
+
+        // 13. test all types of partition columns
+        sql """switch ${catalog_name}"""
+        sql """drop database if exists partition_values_db""";
+        sql """create database partition_values_db"""
+        sql """use partition_values_db"""
+
+        sql """create table partition_values_all_types (
+            k1 int,
+            k2 string,
+            p1 boolean,
+            p2 tinyint,
+            p3 smallint,
+            p4 int,
+            p5 bigint,
+            p6 date,
+            p7 datetime,
+            p8 string
+        ) partition by list(p1, p2, p3, p4, p5, p6, p7, p8)();
+        """
+
+        qt_sql111 """desc partition_values_all_types\$partitions;"""
+
+        sql """insert into partition_values_all_types values
+            (1, "test1", true, -128, -32768, -2147483648, -9223372036854775808, "1900-01-01", "1899-01-01 23:59:59", ""),
+            (2, null, false, 127, 32767, 2147483647, 9223372036854775807, "9999-12-31", "0001-01-01 00:00:01.321", "boston"),
+            (3, "", null, null, null, null, null, null, null, null);
+        """
+
+        qt_sql112 """select * from partition_values_all_types order by k1;"""
+        qt_sql113 """select * from partition_values_all_types\$partitions order by p1,p2,p3;"""
+    }
+}
+


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

