[
https://issues.apache.org/jira/browse/IMPALA-10267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17569156#comment-17569156
]
Csaba Ringhofer commented on IMPALA-10267:
------------------------------------------
As it turned out, the fix is incomplete.
> Impala crashes in HdfsScanner::WriteTemplateTuples() with negative num_tuples
> -----------------------------------------------------------------------------
>
> Key: IMPALA-10267
> URL: https://issues.apache.org/jira/browse/IMPALA-10267
> Project: IMPALA
> Issue Type: Bug
> Components: Backend
> Affects Versions: Impala 4.0.0
> Reporter: Joe McDonnell
> Assignee: Csaba Ringhofer
> Priority: Critical
> Labels: broken-build, flaky
> Fix For: Impala 4.2.0
>
>
> An exhaustive job hit two Impalad crashes with the following stack:
> {noformat}
> 2 impalad!google::LogMessageFatal::~LogMessageFatal() + 0x9
> rbx = 0x0000000000000000 rbp = 0x00007f82f98ad7a0
> rsp = 0x00007f82f98ad6a0 r12 = 0x0000000000000000
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x0000000005209129
> Found by: call frame info
> 3 impalad!impala::HdfsScanner::WriteTemplateTuples(impala::TupleRow*, int)
> [hdfs-scanner.cc : 235 + 0xf]
> rbx = 0x0000000000000000 rbp = 0x00007f82f98ad7a0
> rsp = 0x00007f82f98ad6b0 r12 = 0x0000000000000000
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x0000000002802013
> Found by: call frame info
> 4 impalad!impala::HdfsAvroScanner::ProcessRange(impala::RowBatch*)
> [hdfs-avro-scanner.cc : 553 + 0x19]
> rbx = 0x0000000000000400 rbp = 0x00007f82f98adc60
> rsp = 0x00007f82f98ad7b0 r12 = 0x0000000000000000
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x000000000283880d
> Found by: call frame info
> 5 impalad!impala::BaseSequenceScanner::GetNextInternal(impala::RowBatch*)
> [base-sequence-scanner.cc : 189 + 0x2b]
> rbx = 0x0000000000000000 rbp = 0x00007f82f98adf40
> rsp = 0x00007f82f98adc70 r12 = 0x0000000000000000
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x00000000029302b5
> Found by: call frame info
> 6 impalad!impala::HdfsScanner::ProcessSplit() [hdfs-scanner.cc : 143 + 0x39]
> rbx = 0x000000000292fbd4 rbp = 0x00007f82f98ae000
> rsp = 0x00007f82f98adf50 r12 = 0x0000000000000000
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x00000000028011c9
> Found by: call frame info
> 7 impalad!impala::HdfsScanNode::ProcessSplit(std::vector<impala::FilterContext, std::allocator<impala::FilterContext> > const&, impala::MemPool*, impala::io::ScanRange*, long*) [hdfs-scan-node.cc : 500 + 0x28]
> rbx = 0x0000000000008000 rbp = 0x00007f82f98ae390
> rsp = 0x00007f82f98ae010 r12 = 0x0000000000000000
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x000000000297aa3d
> Found by: call frame info
> 8 impalad!impala::HdfsScanNode::ScannerThread(bool, long)
> [hdfs-scan-node.cc : 418 + 0x27]
> rbx = 0x00000001abc6a760 rbp = 0x00007f82f98ae750
> rsp = 0x00007f82f98ae3a0 r12 = 0x0000000000000000
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x0000000002979dbe
> Found by: call frame info
> 9 impalad!impala::HdfsScanNode::ThreadTokenAvailableCb(impala::ThreadResourcePool*)::{lambda()#1}::operator()() const + 0x30
> rbx = 0x0000000000000bbf rbp = 0x00007f82f98ae770
> rsp = 0x00007f82f98ae760 r12 = 0x0000000008e18f40
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x0000000002979126
> Found by: call frame info
> {noformat}
> This seems to happen when running
> query_test/test_scanners_fuzz.py::TestScannersFuzzing::test_fuzz_alltypes on
> Avro. Reading the code in HdfsAvroScanner::ProcessRange(), it seems
> impossible for num_tuples to be negative, so it's unclear what is happening.
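
One way a seemingly impossible negative count can appear is integer narrowing: if a per-block record count is decoded from untrusted file metadata into a 64-bit value and later truncated to a 32-bit int, a corrupted header can flip the sign. The sketch below is illustrative only and makes no claim about Impala's actual code path; WriteTemplateTuplesSketch and the narrowing scenario are assumptions for demonstration.

{code:cpp}
// Illustrative sketch only, not Impala source. Assumption: a fuzzed Avro
// file can place an arbitrary 64-bit record count in a block header.
#include <cstdint>
#include <cstdio>

// Hypothetical stand-in for the assertion at the top of
// HdfsScanner::WriteTemplateTuples(), which expects num_tuples >= 0.
void WriteTemplateTuplesSketch(int num_tuples) {
  if (num_tuples < 0) {
    std::fprintf(stderr, "FATAL: negative num_tuples = %d\n", num_tuples);
  }
}

int main() {
  // Pretend this count was decoded from a corrupted block header.
  int64_t untrusted_record_count = INT64_C(0x80000000);  // 2147483648

  // Narrowing to a 32-bit int flips the sign on two's-complement
  // platforms: the result is -2147483648.
  int num_tuples = static_cast<int>(untrusted_record_count);
  WriteTemplateTuplesSketch(num_tuples);

  // A defensive scanner would validate untrusted metadata before narrowing:
  if (untrusted_record_count < 0 || untrusted_record_count > INT32_MAX) {
    std::fprintf(stderr, "corrupt block: record count out of range\n");
    return 1;
  }
  return 0;
}
{code}

Since test_fuzz_alltypes deliberately corrupts file contents, out-of-range metadata of this kind is plausibly what the fuzzer is exercising.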