[
https://issues.apache.org/jira/browse/IMPALA-10267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17218490#comment-17218490
]
Tim Armstrong commented on IMPALA-10267:
----------------------------------------
Backtrace is:
{noformat}
F1019 11:07:48.783948 3007 hdfs-scanner.cc:235]
554c6dd1a97a44c1:710f12d800000003] Check failed: num_tuples >= 0 (-25 vs. 0)
*** Check failure stack trace: ***
@ 0x5205bcc google::LogMessage::Fail()
@ 0x52074bc google::LogMessage::SendToLog()
@ 0x520552a google::LogMessage::Flush()
@ 0x5209128 google::LogMessageFatal::~LogMessageFatal()
@ 0x2802012 impala::HdfsScanner::WriteTemplateTuples()
@ 0x283880c impala::HdfsAvroScanner::ProcessRange()
@ 0x29302b4 impala::BaseSequenceScanner::GetNextInternal()
@ 0x28011c8 impala::HdfsScanner::ProcessSplit()
@ 0x297aa3c impala::HdfsScanNode::ProcessSplit()
@ 0x2979dbd impala::HdfsScanNode::ScannerThread()
@ 0x2979125
_ZZN6impala12HdfsScanNode22ThreadTokenAvailableCbEPNS_18ThreadResourcePoolEENKUlvE_clEv
@ 0x297b4de
_ZN5boost6detail8function26void_function_obj_invoker0IZN6impala12HdfsScanNode22ThreadTokenAvailableCbEPNS3_18ThreadResourcePoolEEUlvE_vE6invokeERNS1_15function_bufferE
@ 0x213a2c9 boost::function0<>::operator()()
@ 0x271b041 impala::Thread::SuperviseThread()
@ 0x2722fde boost::_bi::list5<>::operator()<>()
@ 0x2722f02 boost::_bi::bind_t<>::operator()()
@ 0x2722ec3 boost::detail::thread_data<>::run()
@ 0x3f0c701 thread_proxy
@ 0x7f83eb6d7e24 start_thread
@ 0x7f83e816e34c __clone
{noformat}
> Impala crashes in HdfsScanner::WriteTemplateTuples() with negative num_tuples
> -----------------------------------------------------------------------------
>
> Key: IMPALA-10267
> URL: https://issues.apache.org/jira/browse/IMPALA-10267
> Project: IMPALA
> Issue Type: Bug
> Components: Backend
> Affects Versions: Impala 4.0
> Reporter: Joe McDonnell
> Priority: Critical
> Labels: broken-build, flaky
>
> An exhaustive job hit two Impalad crashes with the following stack:
> {noformat}
> 2 impalad!google::LogMessageFatal::~LogMessageFatal() + 0x9
> rbx = 0x0000000000000000 rbp = 0x00007f82f98ad7a0
> rsp = 0x00007f82f98ad6a0 r12 = 0x0000000000000000
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x0000000005209129
> Found by: call frame info
> 3 impalad!impala::HdfsScanner::WriteTemplateTuples(impala::TupleRow*, int)
> [hdfs-scanner.cc : 235 + 0xf]
> rbx = 0x0000000000000000 rbp = 0x00007f82f98ad7a0
> rsp = 0x00007f82f98ad6b0 r12 = 0x0000000000000000
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x0000000002802013
> Found by: call frame info
> 4 impalad!impala::HdfsAvroScanner::ProcessRange(impala::RowBatch*)
> [hdfs-avro-scanner.cc : 553 + 0x19]
> rbx = 0x0000000000000400 rbp = 0x00007f82f98adc60
> rsp = 0x00007f82f98ad7b0 r12 = 0x0000000000000000
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x000000000283880d
> Found by: call frame info
> 5 impalad!impala::BaseSequenceScanner::GetNextInternal(impala::RowBatch*)
> [base-sequence-scanner.cc : 189 + 0x2b]
> rbx = 0x0000000000000000 rbp = 0x00007f82f98adf40
> rsp = 0x00007f82f98adc70 r12 = 0x0000000000000000
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x00000000029302b5
> Found by: call frame info
> 6 impalad!impala::HdfsScanner::ProcessSplit() [hdfs-scanner.cc : 143 + 0x39]
> rbx = 0x000000000292fbd4 rbp = 0x00007f82f98ae000
> rsp = 0x00007f82f98adf50 r12 = 0x0000000000000000
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x00000000028011c9
> Found by: call frame info
> 7
> impalad!impala::HdfsScanNode::ProcessSplit(std::vector<impala::FilterContext,
> std::allocator<impala::FilterContext> > const&, impala::MemPool*,
> impala::io::ScanRange*, long*) [hdfs-scan-node.cc : 500 + 0x28]
> rbx = 0x0000000000008000 rbp = 0x00007f82f98ae390
> rsp = 0x00007f82f98ae010 r12 = 0x0000000000000000
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x000000000297aa3d
> Found by: call frame info
> 8 impalad!impala::HdfsScanNode::ScannerThread(bool, long)
> [hdfs-scan-node.cc : 418 + 0x27]
> rbx = 0x00000001abc6a760 rbp = 0x00007f82f98ae750
> rsp = 0x00007f82f98ae3a0 r12 = 0x0000000000000000
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x0000000002979dbe
> Found by: call frame info
> 9
> impalad!impala::HdfsScanNode::ThreadTokenAvailableCb(impala::ThreadResourcePool*)::{lambda()#1}::operator()()
> const + 0x30
> rbx = 0x0000000000000bbf rbp = 0x00007f82f98ae770
> rsp = 0x00007f82f98ae760 r12 = 0x0000000008e18f40
> r13 = 0x00007f8306dd1690 r14 = 0x000000002f6631a0
> r15 = 0x0000000072b8f2f0 rip = 0x0000000002979126
> Found by: call frame info{noformat}
> This seems to happen when running
> query_test/test_scanners_fuzz.py::TestScannersFuzzing::test_fuzz_alltypes on
> Avro. Reading the code in HdfsAvroScanner::ProcessRange(), it seems
> impossible for this value to be negative, so it's unclear what is happening.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]