Hi, no, SIGABRT is usually caused by a failing assert macro. We use asserts in a lot of places. If you get that signal, first check the logs for errors; they should have more information.
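For illustration, this is roughly what an assert-style macro does (a minimal sketch, not Hypertable's actual macro): on failure it logs the condition and calls abort(), which raises SIGABRT and shows up as abort() in a backtrace.

// Minimal sketch of an assert-style macro (not Hypertable's actual macro).
#include <cstdio>
#include <cstdlib>

#define MY_ASSERT(cond) \
  do { \
    if (!(cond)) { \
      std::fprintf(stderr, "assertion '%s' failed at %s:%d\n", \
                   #cond, __FILE__, __LINE__); \
      std::abort();  /* delivers SIGABRT */ \
    } \
  } while (0)

int main() {
  MY_ASSERT(2 + 2 == 5);  // fails, logs to stderr, then the process aborts
  return 0;
}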
Bye,
Christoph

2012/8/15 bin fang <[email protected]>:

Hi guys,

I have been following your RangeServer crash problem for several days. I am new to Hypertable and interested in it, so please allow me to say something :-)

Does SIGABRT mean the process ran short of memory? We can check whether that is really the situation. If not, the problem may not be so simple; we may be facing memory fragmentation. Perhaps it is not a good idea to allocate such a big chunk of memory with a plain new?


2012/8/15 <[email protected]>:

Today's Topic Summary

Group: http://groups.google.com/group/hypertable-dev/topics

- RangeServer crashes [3 Updates]
- RangeServer/DfsBroker failed to come up after restart [1 Update]


RangeServer crashes
<http://groups.google.com/group/hypertable-dev/t/5f5af3bea0d3046b>

"Kenny F." <[email protected]> Aug 14 06:51AM -0700

0.9.6.0.95b0abc stack trace:

# screen -r

Program received signal SIGABRT, Aborted.
[Switching to Thread 0x99ec9b70 (LWP 15640)]
0xffffe424 in __kernel_vsyscall ()

# where

#0  0xffffe424 in __kernel_vsyscall ()
#1  0xb7b28781 in raise () from /lib/i686/cmov/libc.so.6
#2  0xb7b2bbb2 in abort () from /lib/i686/cmov/libc.so.6
#3  0xb7d34959 in __gnu_cxx::__verbose_terminate_handler() () from /opt/hypertable/0.9.6.0.95b0abc/lib/libstdc++.so.6
#4  0xb7d32865 in ?? () from /opt/hypertable/0.9.6.0.95b0abc/lib/libstdc++.so.6
#5  0xb7d328a2 in std::terminate() () from /opt/hypertable/0.9.6.0.95b0abc/lib/libstdc++.so.6
#6  0xb7d329da in __cxa_throw () from /opt/hypertable/0.9.6.0.95b0abc/lib/libstdc++.so.6
#7  0xb7d33033 in operator new(unsigned int) () from /opt/hypertable/0.9.6.0.95b0abc/lib/libstdc++.so.6
#8  0xb7d3311d in operator new[](unsigned int) () from /opt/hypertable/0.9.6.0.95b0abc/lib/libstdc++.so.6
#9  0x0869ef81 in Hypertable::DynamicBuffer::grow (this=0x99ec7ff4, new_size=1000004, nocopy=false) at /root/src/hypertable/src/cc/Common/DynamicBuffer.h:120
#10 0x0869f095 in Hypertable::DynamicBuffer::reserve (this=0x99ec7ff4, len=1000004, nocopy=false) at /root/src/hypertable/src/cc/Common/DynamicBuffer.h:72
#11 0x08773887 in Hypertable::FillScanBlock (scanner=..., dbuf=..., buffer_size=1000000) at /root/src/hypertable/src/cc/Hypertable/RangeServer/FillScanBlock.cc:104
#12 0x086628c1 in Hypertable::RangeServer::create_scanner (this=0x8c95460, cb=0x99ec92a8, table=0x99ec929c, range_spec=0x99ec9290, scan_spec=0x99ec9200, cache_key=0x99ec9278) at /root/src/hypertable/src/cc/Hypertable/RangeServer/RangeServer.cc:1371
#13 0x087d53e6 in Hypertable::RequestHandlerCreateScanner::run (this=0x331d9840) at /root/src/hypertable/src/cc/Hypertable/RangeServer/RequestHandlerCreateScanner.cc:59
#14 0x0862debc in Hypertable::ApplicationQueue::Worker::operator() (this=0x8c83cf8) at /root/src/hypertable/src/cc/AsyncComm/ApplicationQueue.h:172
#15 0x0862df18 in boost::detail::thread_data<Hypertable::ApplicationQueue::Worker>::run (this=0x8c83c28) at /usr/local/include/boost/thread/detail/thread.hpp:61
#16 0xb7ebfe68 in thread_proxy () from /opt/hypertable/0.9.6.0.95b0abc/lib/libboost_thread.so.1.44.0
#17 0xb7e05955 in start_thread () from /lib/i686/cmov/libpthread.so.0
#18 0xb7bca5ee in clone () from /lib/i686/cmov/libc.so.6
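A note on the trace above: frames #3-#8 show the abort originating from an exception thrown inside operator new rather than from an assert. When new cannot satisfy a request it throws std::bad_alloc, and if no handler catches it the runtime calls std::terminate(), which aborts the process with SIGABRT; that is the __cxa_throw -> std::terminate -> abort chain in the backtrace. Below is a minimal sketch of that failure mode (plain C++, not Hypertable code); note that on a 32-bit process like this one, a ~1 MB request typically only fails once the address space is exhausted or badly fragmented.

// Minimal sketch (not Hypertable code): an uncaught std::bad_alloc from
// operator new ends in std::terminate()/abort(), i.e. SIGABRT.
#include <cstdio>
#include <new>
#include <vector>

int main() {
  std::vector<char *> chunks;
  try {
    // Keep allocating ~1 MB blocks, similar in size to the buffer that
    // DynamicBuffer::grow() was trying to allocate (new_size=1000004).
    // On a 32-bit process this eventually exhausts the address space and
    // operator new[] throws std::bad_alloc.  (With Linux overcommit, a
    // 64-bit build may instead be killed by the OOM killer.)
    for (;;)
      chunks.push_back(new char[1000004]);
  }
  catch (const std::bad_alloc &e) {
    // With a handler in place we get a chance to log and back off ...
    std::fprintf(stderr, "allocation failed: %s\n", e.what());
    for (size_t i = 0; i < chunks.size(); ++i)
      delete[] chunks[i];
    return 1;
  }
  // ... whereas without one the exception escapes the thread and the C++
  // runtime calls std::terminate(), producing exactly the
  // __cxa_throw -> std::terminate -> abort frames seen in the backtrace.
  return 0;
}

So a crash like this may point less at the single 1000004-byte request itself and more at overall address-space exhaustion or fragmentation in the 32-bit RangeServer process.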
Mehmet Ali Cetinkaya <[email protected]> Aug 14 07:38AM -0700

Hello,

I have a Hadoop (0.20) + Hypertable (0.9.6) system with one master and two slaves.

We successfully inserted 1 million records into Hypertable. But now, when I use the select query (select meta:eklemetarihi from urls;), Hypertable freezes after showing approximately 300,000 results.

I didn't see any errors in the Hypertable logs.

I read some documents on the internet, and their suggested solution is "you must erase Hypertable's cell cache", but I couldn't find out how to erase the cell cache.

What can be done to resolve this issue?

Thanks,
mali


Doug Judd <[email protected]> Aug 14 03:53PM -0700

Hi Mali,

This could be a number of things. First, check to make sure the RangeServers (slaves) are idle when the query appears to be hanging. I've witnessed situations where a query appears to hang but is actually still being executed, scanning over a large section of data that, for example, does not contain the column "meta:eklemetarihi". The next thing to do is to double-check that there are no errors in the Hypertable logs. You can do this with:

grep -i ERROR /opt/hypertable/current/log/*.log
grep -i Except /opt/hypertable/current/log/*.log

Next, double-check the Hadoop logs to be sure there are no errors (see your Hadoop distro documentation for the location of the logs). If none of the above turns up anything, let us know and we can help you capture stack traces of the RangeServers to determine if they're stuck in a deadlock.

- Doug

--
Doug Judd
CEO, Hypertable Inc.


RangeServer/DfsBroker failed to come up after restart
<http://groups.google.com/group/hypertable-dev/t/ab5cdbc8fdd3329e>

herb <[email protected]> Aug 14 09:10AM -0700

Christoph -

How would one go about deleting a range that landed in this data loss? Ideally we could do this wholesale as we run into these CellStores.
