bq. master and region server running on the same node(single node install).

Maybe. This combination should be used for testing only.

bq.  zookeeper is not managed by Hbase

Since the classes involved in the stack trace are for Procedure V2, zookeeper
was likely not the issue.
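For reference, here is one way to hunt for the holder of a contended ReentrantLock in a thread dump. This is only a sketch: the file name, thread names, and lock id below are illustrative placeholders, not taken from the attached dump. On a live cluster, capture the dump with `jstack -l <master-pid>`, since only `-l` prints the "Locked ownable synchronizers" section, which is where jstack reports ReentrantLock ownership.

```shell
# Illustrative placeholders throughout; on a real cluster:
#   jstack -l <hmaster-pid> > "$DUMP"
DUMP=/tmp/master_jstack.txt

# Build a tiny sample dump so the grep commands below are runnable as-is.
cat > "$DUMP" <<'EOF'
"B.defaultRpcServer.handler=29" #59 daemon waiting on condition
   java.lang.Thread.State: WAITING (parking)
        - parking to wait for  <0x000000001a027b15> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)

"WALProcedureStore-sync" #12 daemon runnable
   java.lang.Thread.State: RUNNABLE
   Locked ownable synchronizers:
        - <0x000000001a027b15> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)
EOF

# Threads blocked waiting for the lock (context lines show the thread name):
grep -B2 'parking to wait for  <0x000000001a027b15>' "$DUMP"

# The thread that holds it: the one whose "Locked ownable synchronizers"
# section lists the same object id.
grep -B3 -F -- '- <0x000000001a027b15>' "$DUMP"
```

Substitute the `NonfairSync@...` id from your own dump for the placeholder id above.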

On Thu, Aug 17, 2017 at 11:13 AM, Pradheep Shanmugam <
[email protected]> wrote:

> Hi Ted,
>
>
> Just to get some more info: what causes this deadlock in HBase? In my
> case it happened for a create table through Phoenix; can this happen for
> an insert query also? It does not happen always, but fairly frequently.
> Does it have anything to do with both the hbase master and region server
> running on the same node (single node install)? Also, zookeeper is not
> managed by Hbase in my case.
>
>
> Thanks,
> Pradheep
>
> ________________________________
> From: Ted Yu <[email protected]>
> Sent: Wednesday, August 16, 2017 11:21 AM
> To: [email protected]
> Subject: Re: Create table could not proceed
>
> I went through the file you sent - I didn't see which thread was holding
> the lock.
>
> Looking at the git log for WALProcedureStore.java, the latest fix in
> branch-1.1 was HBASE-16056, which went into 1.1.6.
>
> Please upgrade your hbase and see if the problem persists.
>
> On Wed, Aug 16, 2017 at 8:08 AM, Pradheep Shanmugam <
> [email protected]> wrote:
>
> > hi,
> >
> >
> > There are around 30 threads waiting for the lock
> >
> >
> > In the master log, the create table is being assigned to different
> > handlers like below:
> >
> >
> > 2017-08-15 17:56:09,174 INFO  [B.defaultRpcServer.handler=4,queue=1,port=16020] master.HMaster: Client=vagrant/null create 'DEFAULT.FLYWAYSCHEMAVERSION',
> > {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|',
> > coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
> > coprocessor$3 => '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|',
> > coprocessor$4 => '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|',
> > coprocessor$5 => '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder',
> > coprocessor$6 => '|org.apache.hadoop.hbase.regionserver.LocalIndexSplitter|805306366|'},
> > {NAME => '0', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE',
> > DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0',
> > BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
> >
> >
> > From the thread dump, around 30 threads are waiting on the lock, but I
> > can't say which one is holding it. The dump is attached.
> >
> > Thanks,
> > Pradheep
> >
> > ------------------------------
> > *From:* Ted Yu <[email protected]>
> > *Sent:* Wednesday, August 16, 2017 10:59 AM
> > *To:* [email protected]
> > *Subject:* Re: Create table could not proceed
> >
> > From the complete stack trace, do you see which thread was holding
> > ReentrantLock$NonfairSync@1a027b15?
> > What do you see in the master log around this time?
> >
> > 1.1.2 is quite old.
> >
> > Looks like the vote for 1.1.12 will pass. Consider upgrading.
> >
> > On Wed, Aug 16, 2017 at 7:52 AM, Pradheep Shanmugam <
> > [email protected]> wrote:
> >
> > > hi,
> > >
> > >
> > > I am running HBase 1.1.2. When I create a table through Phoenix, it
> > > could not proceed; the master keeps retrying but is not able to get
> > > the lock.
> > >
> > > It is a single-node cluster with the master and region server on the
> > > same server. The thread dump shows the log below.
> > >
> > > Any reason why this happens, and how to resolve it? And which thread
> > > is holding the lock?
> > >
> > > Thread 60 (PriorityRpcServer.handler=0,queue=0,port=16020):
> > >   State: WAITING
> > >   Blocked count: 1
> > >   Waited count: 564
> > >   Waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@249c706b
> > >   Stack:
> > >     sun.misc.Unsafe.park(Native Method)
> > >     java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> > >     java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> > >     java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> > >     org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:127)
> > >     org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> > >     java.lang.Thread.run(Thread.java:745)
> > >
> > >
> > > Thread 59 (B.defaultRpcServer.handler=29,queue=2,port=16020):
> > >   State: WAITING
> > >   Blocked count: 0
> > >   Waited count: 2
> > >   Waiting on java.util.concurrent.locks.ReentrantLock$NonfairSync@1a027b15
> > >   Stack:
> > >     sun.misc.Unsafe.park(Native Method)
> > >     java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> > >     java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> > >     java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
> > >     java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
> > >     java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
> > >     java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
> > >     org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.pushData(WALProcedureStore.java:457)
> > >     org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.insert(WALProcedureStore.java:340)
> > >     org.apache.hadoop.hbase.procedure2.ProcedureExecutor.submitProcedure(ProcedureExecutor.java:524)
> > >     org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1459)
> > >     org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:422)
> > >     org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:48502)
> > >     org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> > >     org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> > >     org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> > >     org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> > >     java.lang.Thread.run(Thread.java:745)
> > >
> > >
> > >
> > > Thanks,
> > >
> > > Pradheep
> > >
> >
>
