[
https://issues.apache.org/jira/browse/HIVE-27898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17819172#comment-17819172
]
yongzhi.shao edited comment on HIVE-27898 at 2/21/24 10:27 AM:
---------------------------------------------------------------
That's strange. Our execution plans are different:
you: one map task
me: one map -> one reduce
{code:java}
----------------------------------------------------------------------------------------------
        VERTICES      MODE        STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
----------------------------------------------------------------------------------------------
Map 1 ..........      container     SUCCEEDED      1          1        0        0       0       0
Reducer 2 ......      container     SUCCEEDED      1          1        0        0       0       0
----------------------------------------------------------------------------------------------
VERTICES: 02/02  [==========================>>] 100%  ELAPSED TIME: 26.27 s
----------------------------------------------------------------------------------------------
INFO : Status: DAG finished successfully in 26.06 seconds
INFO : DAG ID: dag_1706163520799_38211_2
INFO :
INFO : Query Execution Summary
INFO :
----------------------------------------------------------------------------------------------
INFO : OPERATION DURATION
INFO :
----------------------------------------------------------------------------------------------
INFO : Compile Query 0.00s
INFO : Prepare Plan 0.00s
INFO : Get Query Coordinator (AM) 0.00s
INFO : Submit Plan 1708509885.24s
INFO : Start DAG 0.04s
INFO : Run DAG 26.05s
INFO :
----------------------------------------------------------------------------------------------
INFO :
INFO : Task Execution Summary
INFO :
----------------------------------------------------------------------------------------------
INFO  :  VERTICES      DURATION(ms)   CPU_TIME(ms)   GC_TIME(ms)   INPUT_RECORDS   OUTPUT_RECORDS
INFO  :
----------------------------------------------------------------------------------------------
INFO  :      Map 1         3949.00          8,360           118               2                2
INFO  :  Reducer 2            0.00            820             0               2                0
INFO :
----------------------------------------------------------------------------------------------
INFO :
INFO : org.apache.tez.common.counters.DAGCounter:
INFO : NUM_SUCCEEDED_TASKS: 2
INFO : TOTAL_LAUNCHED_TASKS: 2
INFO : RACK_LOCAL_TASKS: 1
INFO : AM_CPU_MILLISECONDS: 2650
INFO : WALL_CLOCK_MILLIS: 3912
INFO : AM_GC_TIME_MILLIS: 0
INFO : INITIAL_HELD_CONTAINERS: 0
INFO : TOTAL_CONTAINERS_USED: 1
INFO : TOTAL_CONTAINER_ALLOCATION_COUNT: 1
INFO : TOTAL_CONTAINER_LAUNCH_COUNT: 1
INFO : TOTAL_CONTAINER_REUSE_COUNT: 1
INFO : File System Counters:
INFO : FILE_BYTES_READ: 0
INFO : FILE_BYTES_WRITTEN: 0
INFO : FILE_READ_OPS: 0
INFO : FILE_LARGE_READ_OPS: 0
INFO : FILE_WRITE_OPS: 0
INFO : HDFS_BYTES_READ: 1210
INFO : HDFS_BYTES_WRITTEN: 121
INFO : HDFS_READ_OPS: 8
INFO : HDFS_LARGE_READ_OPS: 0
INFO : HDFS_WRITE_OPS: 2
INFO : org.apache.tez.common.counters.TaskCounter:
INFO : SPILLED_RECORDS: 0
INFO : NUM_SHUFFLED_INPUTS: 0
INFO : NUM_FAILED_SHUFFLE_INPUTS: 0
INFO : GC_TIME_MILLIS: 118
INFO : CPU_MILLISECONDS: 9180
INFO : WALL_CLOCK_MILLISECONDS: 3726
INFO : PHYSICAL_MEMORY_BYTES: 3435134976
INFO : VIRTUAL_MEMORY_BYTES: 7564333056
INFO : COMMITTED_HEAP_BYTES: 3435134976
INFO : INPUT_RECORDS_PROCESSED: 4
INFO : INPUT_SPLIT_LENGTH_BYTES: 880
INFO : OUTPUT_RECORDS: 2
INFO : APPROXIMATE_INPUT_RECORDS: 2
INFO : OUTPUT_LARGE_RECORDS: 0
INFO : OUTPUT_BYTES: 8
INFO : OUTPUT_BYTES_WITH_OVERHEAD: 18
INFO : OUTPUT_BYTES_PHYSICAL: 46
INFO : ADDITIONAL_SPILLS_BYTES_WRITTEN: 0
INFO : ADDITIONAL_SPILLS_BYTES_READ: 0
INFO : ADDITIONAL_SPILL_COUNT: 0
INFO : SHUFFLE_BYTES: 0
INFO : SHUFFLE_BYTES_DECOMPRESSED: 0
INFO : SHUFFLE_BYTES_TO_MEM: 0
INFO : SHUFFLE_BYTES_TO_DISK: 0
INFO : SHUFFLE_BYTES_DISK_DIRECT: 0
INFO : SHUFFLE_PHASE_TIME: 58
INFO : FIRST_EVENT_RECEIVED: 58
INFO : LAST_EVENT_RECEIVED: 58
INFO : DATA_BYTES_VIA_EVENT: 22
INFO : HIVE:
INFO : CREATED_FILES: 1
INFO : DESERIALIZE_ERRORS: 0
INFO : RECORDS_IN_Map_1: 2
INFO : RECORDS_OUT_0: 2
INFO : RECORDS_OUT_INTERMEDIATE_Map_1: 2
INFO : RECORDS_OUT_INTERMEDIATE_Reducer_2: 0
INFO : RECORDS_OUT_OPERATOR_FS_13: 2
INFO : RECORDS_OUT_OPERATOR_LIM_11: 2
INFO : RECORDS_OUT_OPERATOR_LIM_8: 2
INFO : RECORDS_OUT_OPERATOR_MAP_0: 0
INFO : RECORDS_OUT_OPERATOR_RS_10: 2
INFO : RECORDS_OUT_OPERATOR_SEL_12: 2
INFO : RECORDS_OUT_OPERATOR_SEL_9: 2
INFO : RECORDS_OUT_OPERATOR_TS_0: 2
INFO : org.apache.hadoop.hive.ql.exec.tez.HiveInputCounters:
INFO : GROUPED_INPUT_SPLITS_Map_1: 1
INFO : INPUT_DIRECTORIES_Map_1: 1
INFO : INPUT_FILES_Map_1: 1
INFO : RAW_INPUT_SPLITS_Map_1: 1
INFO  : Completed executing command(queryId=hive_20240221180444_99c0a1c4-ebb5-4895-8d0f-19df4f7d7c68); Time taken: 26.792 seconds
s1.id    NULL
s1.name  a
s1.id    NULL
s1.name  b
2 rows selected (27.535 seconds)
0: jdbc:hive2://smaster01:10000> select * from (select * from iceberg_dwd.test_data_02 limit 10) s1; {code}
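A quick way to confirm why the plans differ is to compare EXPLAIN output for the wrapped query (a sketch, using the iceberg_dwd.test_data_02 table from the log above; whether a Reducer vertex appears depends on how Hive enforces the outer LIMIT):
{code:sql}
-- If Hive enforces the outer LIMIT in a separate stage, the plan shows a
-- second vertex (Reducer 2) fed by Map 1; otherwise it is a single map task.
EXPLAIN
SELECT *
FROM (SELECT * FROM iceberg_dwd.test_data_02 LIMIT 10) s1;
{code}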
was (Author: lisoda):
That's strange. Our execution plans are different?
> HIVE4 can't use ICEBERG table in subqueries
> -------------------------------------------
>
> Key: HIVE-27898
> URL: https://issues.apache.org/jira/browse/HIVE-27898
> Project: Hive
> Issue Type: Improvement
> Components: Iceberg integration
> Affects Versions: 4.0.0-beta-1
> Reporter: yongzhi.shao
> Priority: Critical
>
> We found that with HIVE4-BETA1, selecting columns from a subquery over an
> ICEBERG table returns no data.
> I have cross-validated with HIVE3 on TEZ, and HIVE3 does not have this
> problem when querying ICEBERG.
> {code:java}
> --spark3.4.1+iceberg 1.4.2
> CREATE TABLE datacenter.dwd.b_std_trade (
> uni_order_id STRING,
> data_from BIGINT,
> partner STRING,
> plat_code STRING,
> order_id STRING,
> uni_shop_id STRING,
> uni_id STRING,
> guide_id STRING,
> shop_id STRING,
> plat_account STRING,
> total_fee DOUBLE,
> item_discount_fee DOUBLE,
> trade_discount_fee DOUBLE,
> adjust_fee DOUBLE,
> post_fee DOUBLE,
> discount_rate DOUBLE,
> payment_no_postfee DOUBLE,
> payment DOUBLE,
> pay_time STRING,
> product_num BIGINT,
> order_status STRING,
> is_refund STRING,
> refund_fee DOUBLE,
> insert_time STRING,
> created STRING,
> endtime STRING,
> modified STRING,
> trade_type STRING,
> receiver_name STRING,
> receiver_country STRING,
> receiver_state STRING,
> receiver_city STRING,
> receiver_district STRING,
> receiver_town STRING,
> receiver_address STRING,
> receiver_mobile STRING,
> trade_source STRING,
> delivery_type STRING,
> consign_time STRING,
> orders_num BIGINT,
> is_presale BIGINT,
> presale_status STRING,
> first_fee_paytime STRING,
> last_fee_paytime STRING,
> first_paid_fee DOUBLE,
> tenant STRING,
> tidb_modified STRING,
> step_paid_fee DOUBLE,
> seller_flag STRING,
> is_used_store_card BIGINT,
> store_card_used DOUBLE,
> store_card_basic_used DOUBLE,
> store_card_expand_used DOUBLE,
> order_promotion_num BIGINT,
> item_promotion_num BIGINT,
> buyer_remark STRING,
> seller_remark STRING,
> trade_business_type STRING)
> USING iceberg
> PARTITIONED BY (uni_shop_id, truncate(4, created))
> LOCATION '/iceberg-catalog/warehouse/dwd/b_std_trade'
> TBLPROPERTIES (
> 'current-snapshot-id' = '7217819472703702905',
> 'format' = 'iceberg/orc',
> 'format-version' = '1',
> 'hive.stored-as' = 'iceberg',
> 'read.orc.vectorization.enabled' = 'true',
> 'sort-order' = 'uni_shop_id ASC NULLS FIRST, created ASC NULLS FIRST',
> 'write.distribution-mode' = 'hash',
> 'write.format.default' = 'orc',
> 'write.metadata.delete-after-commit.enabled' = 'true',
> 'write.metadata.previous-versions-max' = '3',
> 'write.orc.bloom.filter.columns' = 'order_id',
> 'write.orc.compression-codec' = 'zstd')
> --hive-iceberg
> CREATE EXTERNAL TABLE iceberg_dwd.b_std_trade
> STORED BY 'org.apache.iceberg.mr.hive.HiveIcebergStorageHandler'
> LOCATION 'hdfs://xxxx/iceberg-catalog/warehouse/dwd/b_std_trade'
> TBLPROPERTIES
> ('iceberg.catalog'='location_based_table','engine.hive.enabled'='true');
> select * from iceberg_dwd.b_std_trade
> where uni_shop_id = 'TEST|11111' limit 10 --10 rows
> select *
> from (
> select * from iceberg_dwd.b_std_trade
> where uni_shop_id = 'TEST|11111' limit 10
> ) t1; --10 rows
> select uni_shop_id
> from (
> select * from iceberg_dwd.b_std_trade
> where uni_shop_id = 'TEST|11111' limit 10
> ) t1; --0 rows
> select uni_shop_id
> from (
> select uni_shop_id as uni_shop_id from iceberg_dwd.b_std_trade
> where uni_shop_id = 'TEST|11111' limit 10
> ) t1; --0 rows
> --hive-orc
> select uni_shop_id
> from (
> select * from iceberg_dwd.trade_test
> where uni_shop_id = 'TEST|11111' limit 10
> ) t1; --10 ROWS{code}
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)