[
https://issues.apache.org/jira/browse/IMPALA-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17033108#comment-17033108
]
Abhishek Rawat commented on IMPALA-9338:
----------------------------------------
Looking at the explain output for the ROJ plan, the partition-by expressions of
the EXCHANGE nodes on both legs of the join reference the wrong table. That
would definitely produce this kind of error when a row descriptor tries to
resolve an invalid tuple id.
{code:java}
Max Per-Host Resource Reservation: Memory=11.97MB Threads=5
Per-Host Resource Estimates: Memory=76MB
WARNING: The following tables are missing relevant table and/or column statistics.
edw.dim

PLAN-ROOT SINK
|
09:EXCHANGE [UNPARTITIONED]
|
06:HASH JOIN [RIGHT OUTER JOIN, PARTITIONED]
|  hash predicates: dim.`YEAR` = fact.`YEAR`, dim.`YEAR` = fact.`YEAR`, fact.cem_bor_act_sfx = dim.ce_bor_act_sfx, fact.CEM_BOR_SSN = dim.ce_bor_ssn, fact.cem_ce_eff_dt = dim.ce_eff_dt
|  runtime filters: RF000 <- dim.`YEAR`, RF001 <- dim.`YEAR`, RF002 <- dim.ce_bor_act_sfx, RF003 <- dim.ce_bor_ssn, RF004 <- dim.ce_eff_dt
|  row-size=108B cardinality=0
|
|--08:EXCHANGE [HASH(fact.`YEAR`,fact.`YEAR`,dim.ce_bor_act_sfx,dim.ce_bor_ssn,dim.ce_eff_dt)]
|  |
|  01:SUBPLAN
|  |  row-size=39B cardinality=0
|  |
|  |--04:NESTED LOOP JOIN [CROSS JOIN]
|  |  |  row-size=39B cardinality=10
|  |  |
|  |  |--02:SINGULAR ROW SRC
|  |  |     row-size=35B cardinality=1
|  |  |
|  |  03:UNNEST [edw.dim.ce_lon_map_cd amap]
|  |     row-size=0B cardinality=10
|  |
|  00:SCAN HDFS [edw.dim]
|     partition predicates: dim.`year` IN (2018, 2019)
|     partitions=0/0 files=0 size=0B
|     predicates: CAST(dim.ce_msg_tp_cd AS STRING) LIKE '%B295%', !empty(edw.dim.ce_lon_map_cd)
|     row-size=35B cardinality=0
|
07:EXCHANGE [HASH(dim.`YEAR`,dim.`YEAR`,fact.cem_bor_act_sfx,fact.CEM_BOR_SSN,fact.cem_ce_eff_dt)]
|
05:SCAN HDFS [edw.fact]
   partition predicates: fact.`year` IN (2018, 2019)
   HDFS partitions=1/52 files=10 size=26.20KB
   runtime filters: RF000 -> fact.`YEAR`, RF001 -> fact.`YEAR`, RF002 -> fact.cem_bor_act_sfx, RF003 -> fact.CEM_BOR_SSN, RF004 -> fact.cem_ce_eff_dt
   row-size=69B cardinality=10
{code}
The EXCHANGE (07) above the SCAN (05) references the dim table for its
partitioning key columns. Similarly, the EXCHANGE (08) above the subplan (01)
and SCAN (00) references the fact table for its partitioning key columns.
There is also some redundancy in the expressions used for HASH(..) in the
Exchange node (the `YEAR` column appears twice). We should probably address the
redundancy in a separate JIRA.
> Impala crashing in impala::RowDescriptor::TupleIsNullable(int)
> --------------------------------------------------------------
>
> Key: IMPALA-9338
> URL: https://issues.apache.org/jira/browse/IMPALA-9338
> Project: IMPALA
> Issue Type: Bug
> Components: Backend
> Affects Versions: Impala 3.3.0
> Reporter: Abhishek Rawat
> Assignee: Abhishek Rawat
> Priority: Blocker
> Labels: crash
>
> Repro:
> {code:java}
> create database default;
> CREATE EXTERNAL TABLE default.dimension ( ssn_id INT, act_num CHAR(1), eff_dt
> CHAR(10), seq_num SMALLINT, entry_dt CHAR(10), map ARRAY<INT>, src CHAR(10),
> msg CHAR(1), msg_num CHAR(3), remarks CHAR(3), description CHAR(26),
> default_load_ts CHAR(26), map_cd VARCHAR(50) ) PARTITIONED BY ( year INT,
> ssn_hash INT ) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\u001C' WITH
> SERDEPROPERTIES ('colelction.delim'=',', 'field.delim'='\u001C',
> 'serialization.format'='\u001C') STORED AS PARQUET --LOCATION
> 'hdfs://prdnameservice/user/hive/warehouse/default.db/dimension'
> TBLPROPERTIES ('DO_NOT_UPDATE_STATS'='true', 'STATS_GENERATED'='TASK',
> 'STATS_GENERATED_VIA_STATS_TASK'='true',
> 'impala.lastComputeStatsTime'='1579246708', 'last_modified_by'='a00811p',
> 'last_modified_time'='1489791214', 'numRows'='7357715311',
> 'totalSize'='235136295799');
> CREATE EXTERNAL TABLE default.fact ( ssn_id_n INT, bor_act_sfx CHAR(1),
> start_dt CHAR(10), seq_num SMALLINT, msg_n CHAR(8), end_dt CHAR(10), reviews
> CHAR(50), description CHAR(50), detail CHAR(50), default_load_ts CHAR(26) )
> PARTITIONED BY ( year INT, ssn_hash INT ) ROW FORMAT DELIMITED FIELDS
> TERMINATED BY '\u0016' WITH SERDEPROPERTIES ('field.delim'='\u0016',
> 'serialization.format'='\u0016') STORED AS PARQUET --LOCATION
> 'hdfs://prdnameservice/user/hive/warehouse/default.db/fact' TBLPROPERTIES
> ('DO_NOT_UPDATE_STATS'='true', 'STATS_GENERATED'='TASK',
> 'STATS_GENERATED_VIA_STATS_TASK'='true',
> 'impala.lastComputeStatsTime'='1579242111', 'last_modified_by'='e32940',
> 'last_modified_time'='1484186332', 'numRows'='5142832439',
> 'totalSize'='105397898347');
> use default;
> select ssn_id_n, bor_act_sfx, amap.item, start_dt, reviews, concat(msg,
> msg_num) corr_code from dimension, dimension.map amap LEFT JOIN fact ON
> dimension.ssn_id = fact.ssn_id_n AND dimension.act_num = fact.bor_act_sfx AND
> dimension.eff_dt = fact.start_dt and dimension.year = fact.year --and
> dimension.month(cast(eff_dt as timestamp)) = fact.month(cast(start_dt as
> timestamp)) AND dimension.YEAR = fact.YEAR AND fact.year in (2018,2019) where
> dimension.msg like '%B295%' AND dimension.year in (2018,2019);{code}
> Stack Trace:
> {code:java}
> #0 0x0000000000f8b1b9 in impala::RowDescriptor::TupleIsNullable(int) const ()
> #1 0x000000000130911f in impala::SlotRef::Init(impala::RowDescriptor const&,
> impala::RuntimeState*) ()
> #2 0x000000000130748e in impala::ScalarExpr::Create(impala::TExpr const&,
> impala::RowDescriptor const&, impala::RuntimeState*, impala::ObjectPool*,
> impala::ScalarExpr**) ()
> #3 0x00000000013075e5 in
> impala::ScalarExpr::Create(std::vector<impala::TExpr,
> std::allocator<impala::TExpr> > const&, impala::RowDescriptor const&,
> impala::RuntimeState*, impala::ObjectPool*, std::vector<impala::ScalarExpr*,
> std::allocator<impala::ScalarExpr*> >*) ()
> #4 0x000000000130769f in
> impala::ScalarExpr::Create(std::vector<impala::TExpr,
> std::allocator<impala::TExpr> > const&, impala::RowDescriptor const&,
> impala::RuntimeState*, std::vector<impala::ScalarExpr*,
> std::allocator<impala::ScalarExpr*> >*) ()
> #5 0x000000000149c1aa in
> impala::KrpcDataStreamSender::Init(std::vector<impala::TExpr,
> std::allocator<impala::TExpr> > const&, impala::TDataSink const&,
> impala::RuntimeState*) ()
> #6 0x0000000001208ad3 in impala::DataSink::Create(impala::TPlanFragmentCtx
> const&, impala::TPlanFragmentInstanceCtx const&, impala::RowDescriptor
> const*, impala::RuntimeState*, impala::DataSink**) ()
> #7 0x0000000000fac9a4 in impala::FragmentInstanceState::Prepare() ()
> #8 0x0000000000fad3dd in impala::FragmentInstanceState::Exec() ()
> #9 0x0000000000f98e77 in
> impala::QueryState::ExecFInstance(impala::FragmentInstanceState*) ()
> #10 0x00000000011a1490 in impala::Thread::SuperviseThread(std::string const&,
> std::string const&, boost::function<void ()>, impala::ThreadDebugInfo const*,
> impala::Promise<long, (impala::PromiseMode)0>*) ()
> #11 0x00000000011a203a in boost::detail::thread_data<boost::_bi::bind_t<void,
> void (std::string const&, std::string const&, boost::function<void ()>,
> impala::ThreadDebugInfo const*, impala::Promise<long,
> (impala::PromiseMode)0>), boost::_bi::list5<boost::_bi::value<std::string>,
> boost::_bi::value<std::string>, boost::_bi::value<boost::function<void ()> >,
> boost::_bi::value<impala::ThreadDebugInfo>,
> boost::_bi::value<impala::Promise<long, (impala::PromiseMode)0>*> > >
> >::run() ()
> #12 0x00000000017909ca in thread_proxy ()
> #13 0x00007f8832fa6aa1 in __pthread_initialize_minimal_internal () from
> /lib64/libpthread.so.0
> #14 0x0000000000000000 in ?? ()
> {code}
>
> The crash only happens when the ROJ plan is selected. If the LOJ plan is
> selected, the query runs successfully.
> Initial investigation indicates that the scalar expression being constructed
> in the above stack trace references an invalid tuple id in the row
> descriptor.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)