[
https://issues.apache.org/jira/browse/IMPALA-7402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16578985#comment-16578985
]
Tim Armstrong commented on IMPALA-7402:
---------------------------------------
My best guess is that there's something wonky with the lifecycle of the
HdfsScanner or ScannerContext. The cleanup code for the two has evolved
more than it has been designed, so it's not clear what the expectations are
about who calls HdfsScanner::Close() and/or
ScannerContext::ReleaseCompletedResources() and/or ScanRange::Cancel(). I
suspect there's some code path where we miss calling one of those.
> DCHECK failed min_bytes_to_write <= dirty_unpinned_pages_ in buffer-pool
> ------------------------------------------------------------------------
>
> Key: IMPALA-7402
> URL: https://issues.apache.org/jira/browse/IMPALA-7402
> Project: IMPALA
> Issue Type: Bug
> Components: Backend
> Affects Versions: Impala 3.1.0
> Reporter: Vuk Ercegovac
> Assignee: Tim Armstrong
> Priority: Blocker
> Labels: broken-build
>
> One of the impalad's crashed with the following DCHECK failure:
> {noformat}
> F0806 01:26:21.905500 5101 buffer-pool.cc:645] Check failed:
> min_bytes_to_write <= dirty_unpinned_pages_.bytes() (8192 vs. 0)
> {noformat}
> Here is the backtrace:
> {noformat}
> #0 0x0000003af1e328e5 in raise () from /lib64/libc.so.6
> #1 0x0000003af1e340c5 in abort () from /lib64/libc.so.6
> #2 0x000000000437f454 in google::DumpStackTraceAndExit() ()
> #3 0x0000000004375ead in google::LogMessage::Fail() ()
> #4 0x0000000004377752 in google::LogMessage::SendToLog() ()
> #5 0x0000000004375887 in google::LogMessage::Flush() ()
> #6 0x0000000004378e4e in google::LogMessageFatal::~LogMessageFatal() ()
> #7 0x000000000205ad16 in impala::BufferPool::Client::WriteDirtyPagesAsync
> (this=0x17d03f0e0, min_bytes_to_write=8192) at
> /data/jenkins/workspace/impala-cdh6.x-exhaustive-centos6/repos/Impala/be/src/runtime/bufferpool/buffer-pool.cc:645
>
> #8 0x000000000205a835 in impala::BufferPool::Client::CleanPages
> (this=0x17d03f0e0, client_lock=0x7f324cb12220, len=8192) at
> /data/jenkins/workspace/impala-cdh6.x-exhaustive-centos6/repos/Impala/be/src/runtime/bufferpool/buffer-pool.cc:625
>
> #9 0x000000000205a646 in impala::BufferPool::Client::DecreaseReservationTo
> (this=0x17d03f0e0, max_decrease=8192, target_bytes=8192) at
> /data/jenkins/workspace/impala-cdh6.x-exhaustive-centos6/repos/Impala/be/src/runtime/bufferpool/buffer-pool.cc:609
>
> #10 0x0000000002057583 in
> impala::BufferPool::ClientHandle::DecreaseReservationTo (this=0x181c0a990,
> max_decrease=8192, target_bytes=8192) at
> /data/jenkins/workspace/impala-cdh6.x-exhaustive-centos6/repos/Impala/be/src/runtime/bufferpool/buffer-pool.cc:319
>
> #11 0x00000000020d9419 in
> impala::HdfsScanNode::ReturnReservationFromScannerThread (this=0x181c0a800,
> lock=..., bytes=8192) at
> /data/jenkins/workspace/impala-cdh6.x-exhaustive-centos6/repos/Impala/be/src/exec/hdfs-scan-node.cc:194
>
> #12 0x00000000020da485 in impala::HdfsScanNode::ScannerThread
> (this=0x181c0a800, first_thread=false, scanner_thread_reservation=8192) at
> /data/jenkins/workspace/impala-cdh6.x-exhaustive-centos6/repos/Impala/be/src/exec/hdfs-scan-node.cc:367
>
> #13 0x00000000020d96b0 in impala::HdfsScanNode::<lambda()>::operator()(void)
> const (__closure=0x7f324cb12b88) at
> /data/jenkins/workspace/impala-cdh6.x-exhaustive-centos6/repos/Impala/be/src/exec/hdfs-scan-node.cc:261
>
> #14 0x00000000020db6d6 in
> boost::detail::function::void_function_obj_invoker0<impala::HdfsScanNode::ThreadTokenAvailableCb(impala::ThreadResourcePool*)::<lambda()>,
> void>::invoke(boost::detail::function::function_buffer &)
> (function_obj_ptr=...) at
> /data/jenkins/workspace/impala-cdh6.x-exhaustive-centos6/Impala-Toolchain/boost-1.57.0-p3/include/boost/function/function_template.hpp:153
> ...
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]