[ https://issues.apache.org/jira/browse/PHOENIX-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17722742#comment-17722742 ]

Istvan Toth commented on PHOENIX-6956:
--------------------------------------

This is an old HBase version.
Even the latest HBase 2.2.x release has known use-after-free bugs in the
off-heap code.
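For context, the crashing frame in the quoted log is sun.misc.Unsafe.copyMemory, reached via org.apache.hadoop.hbase.util.UnsafeAccess. The minimal standalone sketch below (illustrative only, not Phoenix code) shows that primitive at work, and why handing it a stale off-heap address after a use-after-free produces a native SIGSEGV in libjvm.so rather than a catchable Java exception:

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class OffheapCopyDemo {
    public static void main(String[] args) throws Exception {
        // Obtain Unsafe reflectively, as HBase's UnsafeAccess does internally.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        // Allocate 16 bytes of raw off-heap memory and fill it.
        long addr = unsafe.allocateMemory(16);
        for (int i = 0; i < 16; i++) {
            unsafe.putByte(addr + i, (byte) i);
        }

        // The same primitive as the crashing frame: copy from a raw off-heap
        // address into a Java byte[]. No bounds or liveness checks are done;
        // this is only valid while `addr` is still allocated.
        byte[] dst = new byte[16];
        unsafe.copyMemory(null, addr, dst, Unsafe.ARRAY_BYTE_BASE_OFFSET, 16);

        unsafe.freeMemory(addr);
        // A copyMemory from `addr` at this point would be a use-after-free:
        // the JVM dereferences unmapped or recycled pages directly, and can
        // die with SIGSEGV inside libjvm.so, as in the hs_err log. (Not
        // executed here, since it would crash this demo too.)

        System.out.println(dst[15]); // prints 15
    }
}
```

This is why the failure shows up as a fatal hs_err report instead of an exception: the bad access happens in native VM code, below the reach of Java's safety checks.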

You may want to try disabling the BucketCache in HBase and see whether that helps.
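For reference, the BucketCache is disabled by setting hbase.bucketcache.ioengine to an empty value in hbase-site.xml (the property values below are a sketch; regionservers must be restarted for the change to take effect):

```xml
<!-- hbase-site.xml: an empty ioengine disables the BucketCache entirely,
     so block caching falls back to the on-heap LruBlockCache. -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value></value>
</property>
```

If the crashes stop with the BucketCache off, that points strongly at the off-heap read path rather than at Phoenix itself.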

Alternatively, this may be data corruption on the network.

> HBase regionserver with Phoenix process crashes with a JVM fatal error.
> ------------------------------------------------------------------------
>
>                 Key: PHOENIX-6956
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6956
>             Project: Phoenix
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 5.1.2
>         Environment: hbase: 2.2.2
> phoenix: hbase-2.2-phoenix-5.1.2
>            Reporter: Jepson
>            Priority: Major
>              Labels: crash, jvm
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> HBase regionserver with Phoenix process crashes with a JVM fatal error.
>  
> [hbase@hadoop62 ~]$ *more hs_err_pid97203.log* 
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x00007fc60858ad3e, pid=97203, tid=0x00007fbd8291a700
> #
> # JRE version: Java(TM) SE Runtime Environment (8.0_241-b07) (build 
> 1.8.0_241-b07)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.241-b07 mixed mode 
> linux-amd64 )
> # Problematic frame:
> # V  [libjvm.so+0x7ddd3e]
> #
> # Failed to write core dump. Core dumps have been disabled. To enable core 
> dumping, try "ulimit -c unlimited" before starting Java again
> #
> # If you would like to submit a bug report, please visit:
> #   http://bugreport.java.com/bugreport/crash.jsp
> #
> ---------------  T H R E A D  ---------------
> Current thread (0x00007fc60329a000):  JavaThread 
> "RpcServer.default.RWQ.Fifo.write.handler=72,queue=0,port=16020" daemon 
> [_thread_in_vm, id=98127, stack(0x00007fbd8281a000,0x00007fbd8291b000)]
> siginfo: si_signo: 11 (SIGSEGV), si_code: 2 (SEGV_ACCERR), si_addr: 
> 0x00007fbcc3a90000
> Registers:
> RAX=0x00007fbcc3ad11de, RBX=0x00007fc60329a000, RCX=0x00007fbe36e17c40, 
> RDX=0xffffffffffff7dc4
> RSP=0x00007fbd82918f88, RBP=0x00007fbd82918fd0, RSI=0x0000000000000000, 
> RDI=0x00007fbcc39d11e6
> R8 =0x0000000000080000, R9 =0x0000000000f00018, R10=0x00007fc5f16d63a7, 
> R11=0x00007fc5f16d6358
> R12=0x0000000000100000, R13=0x00007fbd82919000, R14=0x0000000000f00018, 
> R15=0x0000000000000000
> RIP=0x00007fc60858ad3e, EFLAGS=0x0000000000010282, CSGSFS=0x0000000000000033, 
> ERR=0x0000000000000004
>   TRAPNO=0x000000000000000e
> ....................
> ....................
> 0x00007fc60858ad2e:   f0 48 89 74 d1 f0 48 8b 74 d0 f8 48 89 74 d1 f8
> 0x00007fc60858ad3e:   48 8b 34 d0 48 89 34 d1 48 83 c2 04 7e d4 48 83
> 0x00007fc60858ad4e:   ea 04 7c 93 eb a1 49 f7 c0 01 00 00 00 74 0c 66 
> Register to memory mapping:
> RAX=0x00007fbcc3ad11de is pointing into the stack for thread: 
> 0x00007fc603d65000
> RBX=0x00007fc60329a000 is a thread
> RCX=
> [error occurred during error reporting (printing register info), id 0xb]
> Stack: [0x00007fbd8281a000,0x00007fbd8291b000],  sp=0x00007fbd82918f88,  free 
> space=1019k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> V  [libjvm.so+0x7ddd3e]
> J 3060  
> sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V
>  (0 bytes) @ 0x00007fc5f16d6421 [0x00007fc5f16d6340+0xe1]
> j  
> org.apache.hadoop.hbase.util.UnsafeAccess.unsafeCopy(Ljava/lang/Object;JLjava/lang/Object;JJ)V+36
> j  
> org.apache.hadoop.hbase.util.UnsafeAccess.copy(Ljava/nio/ByteBuffer;I[BII)V+69
> J 20259 C2 
> org.apache.phoenix.coprocessor.GlobalIndexRegionScanner.apply(Lorg/apache/hadoop/hbase/client/Put;Lorg/apache/hadoop/hbase/client/P
> ut;)V (95 bytes) @ 0x00007fc5f4539414 [0x00007fc5f4538d00+0x714]
> J 32073 C2 
> org.apache.phoenix.hbase.index.IndexRegionObserver.preBatchMutate(Lorg/apache/hadoop/hbase/coprocessor/ObserverContext;Lorg/apache/
> hadoop/hbase/regionserver/MiniBatchOperationInProgress;)V (31 bytes) @ 
> 0x00007fc5f5e8c8ac [0x00007fc5f5e8a6e0+0x21cc]
> J 31164 C2 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(Lorg/apache/hadoop/hbase/regionserver/HRegion$BatchOperation;)V
>  (500 bytes) @ 0x00007fc5f4ec0e58 [0x00007fc5f4ec0640+0x818]
> J 31154 C2 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(Lorg/apache/hadoop/hbase/regionserver/HRegion$BatchOperation;)[Lorg/apache
> /hadoop/hbase/regionserver/OperationStatus; (171 bytes) @ 0x00007fc5f5968218 
> [0x00007fc5f5967ea0+0x378]
> J 31155 C2 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$Region
> ActionResult$Builder;Lorg/apache/hadoop/hbase/regionserver/HRegion;Lorg/apache/hadoop/hbase/quotas/OperationQuota;Ljava/util/List;Lorg/apache/
> hadoop/hbase/CellScanner;Lorg/apache/hadoop/hbase/quotas/ActivePolicyEnforcement;Z)V
>  (646 bytes) @ 0x00007fc5f53fa1d4 [0x00007fc5f53f9620+0xbb4]
> J 20397 C2 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(Lorg/apache/hadoop/hbase/regionserver/HRegion;Lorg/apa
> che/hadoop/hbase/quotas/OperationQuota;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionAction;Lorg/apache/hadoop/hbase/C
> ellScanner;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionActionResult$Builder;Ljava/util/List;JLorg/apache/hadoop/hbas
> e/regionserver/RSRpcServices$RegionScannersCloseCallBack;Lorg/apache/hadoop/hbase/ipc/RpcCallContext;Lorg/apache/hadoop/hbase/quotas/ActivePol
> icyEnforcement;)Ljava/util/List; (905 bytes) @ 0x00007fc5f4673be8 
> [0x00007fc5f4673360+0x888]
> J 20323 C2 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(Lorg/apache/hbase/thirdparty/com/google/protobuf/RpcController;Lorg/apache
> /hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiRequest;)Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MultiRespon
> se; (698 bytes) @ 0x00007fc5f46106c4 [0x00007fc5f460ea60+0x1c64]
> J 16595 C2 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(Lorg/apache/hbase/thirdparty/com/
> google/protobuf/Descriptors$MethodDescriptor;Lorg/apache/hbase/thirdparty/com/google/protobuf/RpcController;Lorg/apache/hbase/thirdparty/com/g
> oogle/protobuf/Message;)Lorg/apache/hbase/thirdparty/com/google/protobuf/Message;
>  (221 bytes) @ 0x00007fc5f3d6f870 [0x00007fc5f3d6f5c0+0x2b0]
> J 23693 C2 
> org.apache.hadoop.hbase.ipc.RpcServer.call(Lorg/apache/hadoop/hbase/ipc/RpcCall;Lorg/apache/hadoop/hbase/monitoring/MonitoredRPCHan
> dler;)Lorg/apache/hadoop/hbase/util/Pair; (562 bytes) @ 0x00007fc5f4c620c8 
> [0x00007fc5f4c61980+0x748]
> J 21876 C2 org.apache.hadoop.hbase.ipc.CallRunner.run()V (1376 bytes) @ 
> 0x00007fc5f36b62f0 [0x00007fc5f36b5440+0xeb0]
> J 16563 C2 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(Lorg/apache/hadoop/hbase/ipc/CallRunner;)V
>  (268 bytes) @ 0x00007fc5f3d4db54 [0x00007fc5f3d4da60+0xf4]
> J 20398% C2 org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run()V (72 bytes) 
> @ 0x00007fc5f4657fd4 [0x00007fc5f4657f60+0x74]
> v  ~StubRoutines::call_stub
> V  [libjvm.so+0x6894eb]
> V  [libjvm.so+0x686db3]
> V  [libjvm.so+0x687377]
> V  [libjvm.so+0x6f34ec]
> V  [libjvm.so+0xa8166b]
> V  [libjvm.so+0xa81971]
> V  [libjvm.so+0x90f542]
> C  [libpthread.so.0+0x7dd5]  start_thread+0xc5



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
