Re: Uninitialized Message Exception thrown while getting values.

2018-01-17 Thread ramkrishna vasudevan
Hi

Which version of HBase are you seeing this problem on? Do you have any
protobuf (pb) classpath issues?

Regards
Ram

On Thu, Jan 18, 2018 at 12:40 PM, Karthick Ram 
wrote:

> "UninitializedMessageException : Message missing required fields : region,
> get", is thrown while performing Get. Due to this all the Get requests to
> the same Region Server are getting stalled.
>
> com.google.protobuf.UninitializedMessageException: Message missing required fields: region, get
> at com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$GetRequest$Builder.build(ClientProtos.java:6377)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$GetRequest$Builder.build(ClientProtos.java:6309)
> at org.apache.hadoop.hbase.ipc.RpcServer$Connection.processRequest(RpcServer.java:1840)
> at org.apache.hadoop.hbase.ipc.RpcServer$Connection.processOneRpc(RpcServer.java:1775)
> at org.apache.hadoop.hbase.ipc.RpcServer$Connection.process(RpcServer.java:1623)
> at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1603)
> at org.apache.hadoop.hbase.ipc.RpcServer$Listener.doRead(RpcServer.java:861)
> at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:643)
> at org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:619)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
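For context, a minimal sketch of how this failure mode surfaces with the protobuf 2
generated classes: GetRequest declares 'region' and 'get' as required fields, so calling
build() on a builder that lacks them throws exactly the UninitializedMessageException in
the stack trace above. This is only an illustration; in a healthy client the RPC layer
fills in both fields before sending, so a server-side failure like this usually points to
a truncated/corrupted request or to mixed protobuf versions on the classpath.

{code}
import com.google.protobuf.UninitializedMessageException;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos;

public class GetRequestBuildCheck {
  public static void main(String[] args) {
    // An empty builder is missing the required 'region' and 'get' fields.
    ClientProtos.GetRequest.Builder builder = ClientProtos.GetRequest.newBuilder();
    if (!builder.isInitialized()) {
      System.err.println("required fields not set: region, get");
    }
    try {
      builder.build();  // throws, as in the stack trace above
    } catch (UninitializedMessageException e) {
      System.err.println("caught: " + e.getMessage());
    }
  }
}
{code}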


Re: Encryption of existing data in Stripe Compaction

2017-06-14 Thread ramkrishna vasudevan
Hi
Very interesting case. Stripe compaction does not need to undergo a major
compaction if the table is already running under stripe compaction (that is
my reading of the docs).
I believe you are facing this issue because you enabled encryption at a
later point in time. The naive workaround I can think of is to alter the
table to the default compaction so that a major compaction runs, and once
that is done, move back to Stripe compaction. Will that work?
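A rough sketch of that workaround using the 1.x Admin API, assuming stripe compaction is
enabled through the hbase.hstore.engine.class table property; the table name and the
wait-for-compaction handling are placeholders, not tested code:

{code}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class StripeToDefaultAndBack {
  public static void main(String[] args) throws Exception {
    TableName table = TableName.valueOf("timeseries");  // placeholder table name
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // 1. Switch the table to the default store engine so a major compaction is allowed.
      HTableDescriptor htd = admin.getTableDescriptor(table);
      htd.setConfiguration("hbase.hstore.engine.class",
          "org.apache.hadoop.hbase.regionserver.DefaultStoreEngine");
      admin.modifyTable(table, htd);

      // 2. Major-compact so every existing HFile is rewritten (and therefore encrypted).
      admin.majorCompact(table);
      // ...wait until the compaction has actually finished before the next step...

      // 3. Move the table back to the stripe store engine.
      htd = admin.getTableDescriptor(table);
      htd.setConfiguration("hbase.hstore.engine.class",
          "org.apache.hadoop.hbase.regionserver.StripeStoreEngine");
      admin.modifyTable(table, htd);
    }
  }
}
{code}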

I would like to hear the opinions of others who have experience with Stripe
compaction.

Regards
Ram

On Wed, Jun 14, 2017 at 10:25 AM, Karthick Ram 
wrote:

> We have a table which holds time series data, with Stripe Compaction enabled.
> After encryption was enabled for this table, newer entries are encrypted on
> insert. However, to encrypt the existing data in the table, a major
> compaction has to run. Since stripe compaction doesn't allow a major
> compaction to run, we are unable to encrypt the previous data. Please
> suggest some ways to rectify this problem.
>
> Regards,
> Karthick R
>


Re: A question about conflict between class 'ZeroCopyLiteralString' and latest protobuf

2017-05-24 Thread ramkrishna vasudevan
Hi

During one of the tasks we were working on, we wanted to upgrade to pb 3,
and we hit exactly the same issue you describe here, with ByteString
encapsulating LiteralByteString.

That task was against HBase trunk (which will be released as HBase 2.0).
As part of this JIRA
https://issues.apache.org/jira/browse/HBASE-16567
the HBaseZeroCopyByteString changes have been removed. Beyond that, we have
shaded protobuf 3.1 inside HBase so that we can accommodate our own
extensions/changes to protobuf to suit our needs.

Which version of HBase are you using? Since your classpath already has the
latest pb while your version of HBase uses the older pb, you may well be
seeing conflicts. I don't have a solution for this right now, but I want to
highlight that the community has already prepared itself for future pb
releases and changes.
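To make the ByteString part concrete, here is a small sketch of the wrap direction on
stock protobuf 3.1+ (the class names come from protobuf itself, not from HBase, and this
is only an illustration): UnsafeByteOperations.unsafeWrap gives the no-copy wrap that the
old LiteralByteString-based helper provided, while the reverse direction (handing back the
underlying array, what 'zeroCopyGetBytes' did) is not exposed directly, which is part of
why the shaded protobuf in HBase 2.0 carries its own changes.

{code}
import com.google.protobuf.ByteString;
import com.google.protobuf.UnsafeByteOperations;

public class ZeroCopyWrapSketch {
  public static void main(String[] args) {
    byte[] rowKey = new byte[] {'r', 'o', 'w', '1'};

    // Wrap without copying; the caller must not mutate rowKey afterwards,
    // which is the same contract the old zero-copy helper relied on.
    ByteString wrapped = UnsafeByteOperations.unsafeWrap(rowKey);

    // Reading back without a copy: only a read-only ByteBuffer view is public,
    // not the backing array itself.
    java.nio.ByteBuffer view = wrapped.asReadOnlyByteBuffer();
    System.out.println(wrapped.size() + " bytes wrapped, view has " + view.remaining() + " remaining");
  }
}
{code}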

Regards
Ram

On Wed, May 24, 2017 at 2:06 PM, Jing Zhang  wrote:

> Hi, developers,
>
> My company uses HBase, and in our code base, we need to use both HBase and
> protobuf.
>
> However, for some reason we need to upgrade protobuf to the latest version,
> and I found that the recent protobuf changes affect HBase. The class
> 'LiteralByteString', which HBase uses, has been made a private static class
> inside 'ByteString' since protobuf 3.0.0 Beta 3 (or some version around it).
>
> In class 'ZeroCopyLiteralString'
> (hbase-protocol/src/main/java/com/google/protobuf/), I tried to use
> 'UnsafeByteOperations.unsafeWrap' instead, but failed at the method
> 'zeroCopyGetBytes'.
>
> I wonder whether there is an approach in HBase to accommodate protobuf's
> change, and what you would suggest for making the changes. Also, do you plan
> to change the relevant code to be compatible with the new version of
> protobuf?
>
> Hope to get your help! Perhaps my question has been asked before, but I'm
> new to HBase and it's difficult to find the topic in a large mailing list. So
> please share the link if it does exist.
>
> Thanks very much!
>
> --
>
> Jing.
>


[jira] [Commented] (HBASE-16643) Reverse scanner heap creation may not allow MSLAB closure due to improper ref counting of segments

2016-09-22 Thread ramkrishna vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15515508#comment-15515508
 ] 

ramkrishna vasudevan commented on HBASE-16643:
--

Hi
Are you referring to some new issue, @sunyu?

Regards
Ram




> Reverse scanner heap creation may not allow MSLAB closure due to improper ref 
> counting of segments
> --
>
> Key: HBASE-16643
> URL: https://issues.apache.org/jira/browse/HBASE-16643
> Project: HBase
>  Issue Type: Bug
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-16643.patch, HBASE-16643_1.patch, 
> HBASE-16643_2.patch, HBASE-16643_3.patch, HBASE-16643_4.patch, 
> HBASE-16643_5.patch
>
>
> In the reverse scanner case, while doing 'initBackwardHeapIfNeeded' in
> MemstoreScanner to set up the backward heap, we do a
> {code}
> if ((backwardHeap == null) && (forwardHeap != null)) {
> forwardHeap.close();
> forwardHeap = null;
> // before building the heap seek for the relevant key on the scanners,
> // for the heap to be built from the scanners correctly
> for (KeyValueScanner scan : scanners) {
>   if (toLast) {
> res |= scan.seekToLastRow();
>   } else {
> res |= scan.backwardSeek(cell);
>   }
> }
> {code}
> forwardHeap.close(). This internally decrements the MSLAB ref counter for
> the current active segment and the snapshot segment.
> When the scan is actually closed we call close() again, and that decrements
> the count once more. The count can then go negative, so the actual MSLAB
> closure, which checks for refCount == 0, will never happen.
> Apart from this, if the refCount becomes 0 after the first close and any
> other thread then asks to close the segment, we end up with a corrupted
> segment because the segment could be put back into the MSLAB pool.
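To illustrate the double-decrement problem outside of HBase internals, here is a
hypothetical ref-counted segment (not the actual Segment/MSLAB code) where a second close
pushes the count below zero, so the refCount == 0 release path never runs:

{code}
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for an MSLAB-backed segment; not the HBase class.
class RefCountedSegment {
  private final AtomicInteger refCount = new AtomicInteger(1);

  void retain() { refCount.incrementAndGet(); }

  // Returns true only when this call released the last reference,
  // i.e. when the chunks may safely go back to the pool.
  boolean release() {
    int now = refCount.decrementAndGet();
    if (now < 0) {
      // The double close described above lands here: the count goes negative
      // and the "close when refCount == 0" check never fires.
      throw new IllegalStateException("released more times than retained");
    }
    return now == 0;
  }
}

public class DoubleCloseDemo {
  public static void main(String[] args) {
    RefCountedSegment segment = new RefCountedSegment();
    System.out.println("first close frees segment: " + segment.release());  // true
    try {
      segment.release();  // the scan's second close(), without a matching retain()
    } catch (IllegalStateException e) {
      System.out.println("caught: " + e.getMessage());
    }
  }
}
{code}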



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Fwd: Subscribe

2015-09-20 Thread ramkrishna vasudevan
-- Forwarded message --
From: ramkrishna vasudevan 
Date: Fri, Sep 18, 2015 at 10:34 AM
Subject: Subscribe
To: issues-subscr...@hbase.apache.org


Re: [jira] [Commented] (HBASE-10800) Use CellComparator instead of KVComparator

2015-04-03 Thread ramkrishna vasudevan
Oops. Will check out the test failures.
On Apr 3, 2015 5:37 PM, "Hadoop QA (JIRA)"  wrote:

>
> [
> https://issues.apache.org/jira/browse/HBASE-10800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394348#comment-14394348
> ]
>
> Hadoop QA commented on HBASE-10800:
> ---
>
> {color:red}-1 overall{color}.  Here are the results of testing the latest
> attachment
>
> http://issues.apache.org/jira/secure/attachment/12709202/HBASE-10800_3.patch
>   against master branch at commit d8b10656d00779e194c3caca118995136babce99.
>   ATTACHMENT ID: 12709202
>
> {color:green}+1 @author{color}.  The patch does not contain any
> @author tags.
>
> {color:green}+1 tests included{color}.  The patch appears to include
> 146 new or modified tests.
>
> {color:green}+1 hadoop versions{color}. The patch compiles with all
> supported hadoop versions (2.4.1 2.5.2 2.6.0)
>
> {color:green}+1 javac{color}.  The applied patch does not increase the
> total number of javac compiler warnings.
>
> {color:green}+1 protoc{color}.  The applied patch does not increase
> the total number of protoc compiler warnings.
>
> {color:red}-1 javadoc{color}.  The javadoc tool appears to have
> generated 16 warning messages.
>
> {color:red}-1 checkstyle{color}.  The applied patch
> generated 1945 checkstyle errors (more than the master's current 1924
> errors).
>
> {color:green}+1 findbugs{color}.  The patch does not introduce any
> new Findbugs (version 2.0.3) warnings.
>
> {color:green}+1 release audit{color}.  The applied patch does not
> increase the total number of release audit warnings.
>
> {color:red}-1 lineLengths{color}.  The patch introduces the following
> lines longer than 100:
> +  public int compareRows(Cell left, int loffset, int llength, Cell
> right, int roffset, int rlength) {
> +  public int compareRows(Cell left, int loffset, int llength, byte[]
> right, int roffset, int rlength) {
> +  Bytes.putLong(newKey, rightKey.length -
> KeyValue.TIMESTAMP_TYPE_SIZE, HConstants.LATEST_TIMESTAMP);
> +&& leftKey[KeyValue.ROW_LENGTH_SIZE + diffIdx] ==
> rightKey[KeyValue.ROW_LENGTH_SIZE + diffIdx]) {
> +public int compareRows(Cell left, int loffset, int llength, Cell
> right, int roffset, int rlength) {
> +  public static int findCommonPrefixInQualifierPart(Cell left, Cell
> right, int qualifierCommonPrefix) {
> +  public static int getDelimiter(final byte[] b, int offset, final int
> length, final int delimiter) {
> +  comp = samePrefixComparator.compareCommonRowPrefix(seekCell,
> currentCell, rowCommonPrefix);
> +  public EncodedSeeker createSeeker(CellComparator comparator,
> HFileBlockDecodingContext decodingCtx) {
>
>   {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.
>
>  {color:red}-1 core tests{color}.  The patch failed these unit tests:
>org.apache.hadoop.hbase.client.TestScannerTimeout
>
> org.apache.hadoop.hbase.replication.regionserver.TestReplicationWALReaderManager
>
> org.apache.hadoop.hbase.wal.TestBoundedRegionGroupingProvider
>   org.apache.hadoop.hbase.regionserver.wal.TestWALReplay
>
> org.apache.hadoop.hbase.regionserver.TestCompoundBloomFilter
>
> org.apache.hadoop.hbase.wal.TestDefaultWALProviderWithHLogKey
>   org.apache.hadoop.hbase.wal.TestDefaultWALProvider
>
> org.apache.hadoop.hbase.replication.TestReplicationKillMasterRS
>
> org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint
>
> org.apache.hadoop.hbase.regionserver.TestHRegionReplayEvents
>   org.apache.hadoop.hbase.regionserver.wal.TestLogRolling
>
> org.apache.hadoop.hbase.replication.TestReplicationDisableInactivePeer
>   org.apache.hadoop.hbase.mapreduce.TestWALRecordReader
>
> org.apache.hadoop.hbase.replication.regionserver.TestReplicationSink
>
> org.apache.hadoop.hbase.replication.TestReplicationKillSlaveRS
>   org.apache.hadoop.hbase.replication.TestReplicationSource
>
> org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsReplication
>   org.apache.hadoop.hbase.TestZooKeeper
>
> org.apache.hadoop.hbase.regionserver.TestRegionReplicaFailover
>   org.apache.hadoop.hbase.mapreduce.TestHLogRecordReader
>   org.apache.hadoop.hbase.wal.TestWALFactory
>
> org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelReplicationWithExpAsString
>   org.apache.hadoop.hbase.regionserver.wal.TestDurability
>   org.apache.hadoop.hbase.mapreduce.TestWALPlayer
>   org.apache.hadoop.hbase.wal.TestWALSplit
>
> org.apache.hadoop.hbase.replication.TestReplicationSyncUpTool
>   org.apache.hadoop.hbase.TestFullLogReconstruction
>
> org.apache.hadoop.hbase.replication.TestPerTableCFReplication
>
> org.apache.hadoop.hbase.master.TestDistribute

[jira] [Commented] (HBASE-10800) Use CellComparator instead of KVComparator

2015-04-03 Thread ramkrishna vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14394398#comment-14394398
 ] 

ramkrishna vasudevan commented on HBASE-10800:
--

Oops. Will check out the test failures.



> Use CellComparator instead of KVComparator
> --
>
> Key: HBASE-10800
> URL: https://issues.apache.org/jira/browse/HBASE-10800
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 1.1.0
>
> Attachments: HBASE-10800_1.patch, HBASE-10800_2.patch, 
> HBASE-10800_3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10499) In write heavy scenario one of the regions does not get flushed causing RegionTooBusyException

2015-01-23 Thread ramkrishna vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290311#comment-14290311
 ] 

ramkrishna vasudevan commented on HBASE-10499:
--

> [ https://issues.apache.org/jira/browse/HBASE-10499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
> In write heavy scenario one of the regions does not get flushed causing RegionTooBusyException
> ... 10499-v6.txt, HBASE-10499.patch, HBASE-10499_v5.patch,
> compaction-queue.png, hbase-root-master-ip-10-157-0-229.zip,
> hbase-root-regionserver-ip-10-93-128-92.zip, master_4e39.log,
> master_576f.log, rs_4e39.log, rs_576f.log, t1.dump, t2.dump,
> workloada_0.98.dat
> ... to this version.  Doesn't seem so to me.
> ... has 200 regions.  In one of the run with 0.98 server and 0.98 client I
> faced this problem like the hlogs became more and the system requested
> flushes for those many regions.
> ... remained unflushed.  The ripple effect of this on the client side:
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed
> 54 actions: RegionTooBusyException: 54 times,
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed
> 54 actions: RegionTooBusyException: 54 times,
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:187)
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:171)
> org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:897)
> org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:961)
> org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1225)
> wal.FSHLog: Too many hlogs: logs=38, maxlogs=32; forcing flush of 23
> regions(s): 97d8ae2f78910cc5ded5fbb1ddad8492,
> d396b8a1da05c871edcb68a15608fdf2, 01a68742a1be3a9705d574ad68fec1d7,
> 1250381046301e7465b6cf398759378e, 127c133f47d0419bd5ab66675aff76d4,
> 9f01c5d25ddc6675f750968873721253, 29c055b5690839c2fa357cd8e871741e,
> ca4e33e3eb0d5f8314ff9a870fc43463, acfc6ae756e193b58d956cb71ccf0aa3,
> 187ea304069bc2a3c825bc10a59c7e84, 0ea411edc32d5c924d04bf126fa52d1e,
> e2f9331fc7208b1b230a24045f3c869e, d9309ca864055eddf766a330352efc7a,
> 1a71bdf457288d449050141b5ff00c69, 0ba9089db28e977f86a27f90bbab9717,
> fdbb3242d3b673bbe4790a47bc30576f, bbadaa1f0e62d8a8650080b824187850,
> b1a5de30d8603bd5d9022e09c574501b, cc6a9fabe44347ed65e7c325faa72030,
> 313b17dbff2497f5041b57fe13fa651e, 6b788c498503ddd3e1433a4cd3fb4e39,
> 3d71274fe4f815882e9626e1cfa050d1, acc43e4b42c1a041078774f4f20a3ff5
> wal.FSHLog: Too many hlogs: logs=53, maxlogs=32; forcing flush of 2
> regions(s): fdbb3242d3b673bbe4790a47bc30576f,
> 6b788c498503ddd3e1433a4cd3fb4e39
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting
> flush for region
> usertable,user3654,1392107806977.fdbb3242d3b673bbe4790a47bc30576f. after a
> delay of 16689
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting
> flush for region
> usertable,user6264,1392107806983.6b788c498503ddd3e1433a4cd3fb4e39. after a
> delay of 15868
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting
> flush for region
> usertable,user3654,1392107806977.fdbb3242d3b673bbe4790a47bc30576f. after a
> delay of 20847
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting
> flush for region
> usertable,user6264,1392107806983.6b788c498503ddd3e1433a4cd3fb4e39. after a
> delay of 20099
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting
> flush for region
> usertable,user3654,1392107806977.fdbb3242d3b673bbe4790a47bc30576f. after a
> delay of 8677
> wal.FSHLog: Too many hlogs: logs=54, maxlogs=32; forcing flush of 1
> regions(s): fdbb3242d3b673bbe4790a47bc30576f
> ... regions but this region stays with the RS that has this issue.  One
> important observation is that in HRegion.internalflushCache() we need to
> add a debug log here
> ... does not happen and no logs related to flush are printed in the logs. so
> due to some reason this memstore.size() has become 0( I assume this).  The
> earlier bugs were also due to similar reason.


> In write heavy scenario one of the regions does not get flushed causing 
> RegionTooBusyException
> --
>
> Key: HBASE-10499
> URL: https://issues.apache.org/jira/browse/HBASE-10499
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: 10499-0.98.txt, 10499-1.0.txt, 10499-v2.txt, 
> 10499-v3.txt, 10499-v4.txt, 10499-v6.txt, 10499-v6.txt, 10499-v7.txt, 
> 10499-v8.txt, HBASE-10499.patch, HBASE-10499_v5.patch, compaction-queue.png, 
> hbase-root-master-ip-10-157-0-229.zip, 

[jira] [Commented] (HBASE-10499) In write heavy scenario one of the regions does not get flushed causing RegionTooBusyException

2015-01-23 Thread ramkrishna vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290308#comment-14290308
 ] 

ramkrishna vasudevan commented on HBASE-10499:
--

+1 on patch
> [ https://issues.apache.org/jira/browse/HBASE-10499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
> In write heavy scenario one of the regions does not get flushed causing RegionTooBusyException
> ... 10499-v6.txt, HBASE-10499.patch, HBASE-10499_v5.patch,
> compaction-queue.png, hbase-root-master-ip-10-157-0-229.zip,
> hbase-root-regionserver-ip-10-93-128-92.zip, master_4e39.log,
> master_576f.log, rs_4e39.log, rs_576f.log, t1.dump, t2.dump,
> workloada_0.98.dat
> ... this version.  Doesn't seem so to me.
> ... 200 regions.  In one of the run with 0.98 server and 0.98 client I faced
> this problem like the hlogs became more and the system requested flushes
> for those many regions.
> ... remained unflushed.  The ripple effect of this on the client side:
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed
> 54 actions: RegionTooBusyException: 54 times,
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed
> 54 actions: RegionTooBusyException: 54 times,
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:187)
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:171)
> org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:897)
> org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:961)
> org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1225)
> Too many hlogs: logs=38, maxlogs=32; forcing flush of 23 regions(s):
> 97d8ae2f78910cc5ded5fbb1ddad8492, d396b8a1da05c871edcb68a15608fdf2,
> 01a68742a1be3a9705d574ad68fec1d7, 1250381046301e7465b6cf398759378e,
> 127c133f47d0419bd5ab66675aff76d4, 9f01c5d25ddc6675f750968873721253,
> 29c055b5690839c2fa357cd8e871741e, ca4e33e3eb0d5f8314ff9a870fc43463,
> acfc6ae756e193b58d956cb71ccf0aa3, 187ea304069bc2a3c825bc10a59c7e84,
> 0ea411edc32d5c924d04bf126fa52d1e, e2f9331fc7208b1b230a24045f3c869e,
> d9309ca864055eddf766a330352efc7a, 1a71bdf457288d449050141b5ff00c69,
> 0ba9089db28e977f86a27f90bbab9717, fdbb3242d3b673bbe4790a47bc30576f,
> bbadaa1f0e62d8a8650080b824187850, b1a5de30d8603bd5d9022e09c574501b,
> cc6a9fabe44347ed65e7c325faa72030, 313b17dbff2497f5041b57fe13fa651e,
> 6b788c498503ddd3e1433a4cd3fb4e39, 3d71274fe4f815882e9626e1cfa050d1,
> acc43e4b42c1a041078774f4f20a3ff5
> Too many hlogs: logs=53, maxlogs=32; forcing flush of 2 regions(s):
> fdbb3242d3b673bbe4790a47bc30576f, 6b788c498503ddd3e1433a4cd3fb4e39
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting
> flush for region
> usertable,user3654,1392107806977.fdbb3242d3b673bbe4790a47bc30576f. after a
> delay of 16689
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting
> flush for region
> usertable,user6264,1392107806983.6b788c498503ddd3e1433a4cd3fb4e39. after a
> delay of 15868
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting
> flush for region
> usertable,user3654,1392107806977.fdbb3242d3b673bbe4790a47bc30576f. after a
> delay of 20847
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting
> flush for region
> usertable,user6264,1392107806983.6b788c498503ddd3e1433a4cd3fb4e39. after a
> delay of 20099
> regionserver.HRegionServer: regionserver60020.periodicFlusher requesting
> flush for region
> usertable,user3654,1392107806977.fdbb3242d3b673bbe4790a47bc30576f. after a
> delay of 8677
> Too many hlogs: logs=54, maxlogs=32; forcing flush of 1 regions(s):
> fdbb3242d3b673bbe4790a47bc30576f
> ... regions but this region stays with the RS that has this issue.  One
> important observation is that in HRegion.internalflushCache() we need to
> add a debug log here
> ... not happen and no logs related to flush are printed in the logs. so due to
> some reason this memstore.size() has become 0( I assume this).  The earlier
> bugs were also due to similar reason.


> In write heavy scenario one of the regions does not get flushed causing 
> RegionTooBusyException
> --
>
> Key: HBASE-10499
> URL: https://issues.apache.org/jira/browse/HBASE-10499
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
> Fix For: 1.0.0, 2.0.0, 0.98.10, 1.1.0
>
> Attachments: 10499-0.98.txt, 10499-1.0.txt, 10499-v2.txt, 
> 10499-v3.txt, 10499-v4.txt, 10499-v6.txt, 10499-v6.txt, 10499-v7.txt, 
> 10499-v8.txt, HBASE-10499.patch, HBASE-10499_v5.patch, compaction-queue.png, 
> hbase-root-master-ip-10-157-0-229.zip, 
> hbase-root-regionserver-ip-10-93-128-92.zip, master_4

Re: [jira] [Updated] (HBASE-10499) In write heavy scenario one of the regions does not get flushed causing RegionTooBusyException

2015-01-23 Thread ramkrishna vasudevan
+1 on patch
On Jan 22, 2015 9:12 PM, "Ted Yu (JIRA)"  wrote:
>
>
>  [
https://issues.apache.org/jira/browse/HBASE-10499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
>
> Ted Yu updated HBASE-10499:
> ---
> Attachment: 10499-v6.txt
>
> > In write heavy scenario one of the regions does not get flushed causing
RegionTooBusyException
> >
--
> >
> > Key: HBASE-10499
> > URL: https://issues.apache.org/jira/browse/HBASE-10499
> > Project: HBase
> >  Issue Type: Bug
> >Affects Versions: 0.98.0
> >Reporter: ramkrishna.s.vasudevan
> >Assignee: ramkrishna.s.vasudevan
> >Priority: Critical
> > Fix For: 2.0.0, 1.1.0
> >
> > Attachments: 10499-v2.txt, 10499-v3.txt, 10499-v4.txt,
10499-v6.txt, HBASE-10499.patch, HBASE-10499_v5.patch,
compaction-queue.png, hbase-root-master-ip-10-157-0-229.zip,
hbase-root-regionserver-ip-10-93-128-92.zip, master_4e39.log,
master_576f.log, rs_4e39.log, rs_576f.log, t1.dump, t2.dump,
workloada_0.98.dat
> >
> >
> > I got this while testing 0.98RC.  But am not sure if it is specific to
this version.  Doesn't seem so to me.
> > Also it is something similar to HBASE-5312 and HBASE-5568.
> > Using 10 threads i do writes to 4 RS using YCSB. The table created has
200 regions.  In one of the run with 0.98 server and 0.98 client I faced
this problem like the hlogs became more and the system requested flushes
for those many regions.
> > One by one everything was flushed except one and that one thing
remained unflushed.  The ripple effect of this on the client side
> > {code}
> > com.yahoo.ycsb.DBException:
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed
54 actions: RegionTooBusyException: 54 times,
> > at com.yahoo.ycsb.db.HBaseClient.cleanup(HBaseClient.java:245)
> > at com.yahoo.ycsb.DBWrapper.cleanup(DBWrapper.java:73)
> > at com.yahoo.ycsb.ClientThread.run(Client.java:307)
> > Caused by:
org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed
54 actions: RegionTooBusyException: 54 times,
> > at
org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:187)
> > at
org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$500(AsyncProcess.java:171)
> > at
org.apache.hadoop.hbase.client.AsyncProcess.getErrors(AsyncProcess.java:897)
> > at
org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:961)
> > at
org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1225)
> > at com.yahoo.ycsb.db.HBaseClient.cleanup(HBaseClient.java:232)
> > ... 2 more
> > {code}
> > On one of the RS
> > {code}
> > 2014-02-11 08:45:58,714 INFO  [regionserver60020.logRoller] wal.FSHLog:
Too many hlogs: logs=38, maxlogs=32; forcing flush of 23 regions(s):
97d8ae2f78910cc5ded5fbb1ddad8492, d396b8a1da05c871edcb68a15608fdf2,
01a68742a1be3a9705d574ad68fec1d7, 1250381046301e7465b6cf398759378e,
127c133f47d0419bd5ab66675aff76d4, 9f01c5d25ddc6675f750968873721253,
29c055b5690839c2fa357cd8e871741e, ca4e33e3eb0d5f8314ff9a870fc43463,
acfc6ae756e193b58d956cb71ccf0aa3, 187ea304069bc2a3c825bc10a59c7e84,
0ea411edc32d5c924d04bf126fa52d1e, e2f9331fc7208b1b230a24045f3c869e,
d9309ca864055eddf766a330352efc7a, 1a71bdf457288d449050141b5ff00c69,
0ba9089db28e977f86a27f90bbab9717, fdbb3242d3b673bbe4790a47bc30576f,
bbadaa1f0e62d8a8650080b824187850, b1a5de30d8603bd5d9022e09c574501b,
cc6a9fabe44347ed65e7c325faa72030, 313b17dbff2497f5041b57fe13fa651e,
6b788c498503ddd3e1433a4cd3fb4e39, 3d71274fe4f815882e9626e1cfa050d1,
acc43e4b42c1a041078774f4f20a3ff5
> > ..
> > 2014-02-11 08:47:49,580 INFO  [regionserver60020.logRoller] wal.FSHLog:
Too many hlogs: logs=53, maxlogs=32; forcing flush of 2 regions(s):
fdbb3242d3b673bbe4790a47bc30576f, 6b788c498503ddd3e1433a4cd3fb4e39
> > {code}
> > {code}
> > 2014-02-11 09:42:44,237 INFO  [regionserver60020.periodicFlusher]
regionserver.HRegionServer: regionserver60020.periodicFlusher requesting
flush for region
usertable,user3654,1392107806977.fdbb3242d3b673bbe4790a47bc30576f. after a
delay of 16689
> > 2014-02-11 09:42:44,237 INFO  [regionserver60020.periodicFlusher]
regionserver.HRegionServer: regionserver60020.periodicFlusher requesting
flush for region
usertable,user6264,1392107806983.6b788c498503ddd3e1433a4cd3fb4e39. after a
delay of 15868
> > 2014-02-11 09:42:54,238 INFO  [regionserver60020.periodicFlusher]
regionserver.HRegionServer: regionserver60020.periodicFlusher requesting
flush for region
usertable,user3654,1392107806977.fdbb3242d3b673bbe4790a47bc30576f. after a
delay of 20847
> > 2014-02-11 09:42:54,238 INFO  [regionserver60020.periodicFlusher]
regionserver.HRegionServer: regionserver60020.periodicFlusher requesting
flus
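The "Too many hlogs" messages quoted above are emitted when the number of WAL files
passes the region server's configured maximum, at which point the regions holding the
oldest unflushed edits are asked to flush so that older WAL files can be archived. A
minimal sketch of reading that limit, assuming the standard property name
hbase.regionserver.maxlogs and its usual default of 32:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class MaxLogsCheck {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Matches the "maxlogs=32" seen in the log lines above.
    int maxLogs = conf.getInt("hbase.regionserver.maxlogs", 32);
    System.out.println("hbase.regionserver.maxlogs = " + maxLogs);
  }
}
{code}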

[jira] [Commented] (HBASE-7307) MetaReader.tableExists should not return false if the specified table regions has been split

2012-12-08 Thread ramkrishna vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13527211#comment-13527211
 ] 

ramkrishna vasudevan commented on HBASE-7307:
-

Fine Rajesh...




> MetaReader.tableExists should not return false if the specified table regions 
> has been split
> 
>
> Key: HBASE-7307
> URL: https://issues.apache.org/jira/browse/HBASE-7307
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.92.3, 0.96.0, 0.94.4
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 0.92.2, 0.94.3
>
>
> If a region is a split parent we do not add it to the META scan results during 
> a full scan. 
> {code}
> if (!isInsideTable(this.current, tableNameBytes)) return false;
> if (this.current.isSplitParent()) return true;
> // Else call super and add this Result to the collection.
> super.visit(r);
> {code}
> If all regions of a table have been split, the result size will be zero and 
> we return false.
> {code} 
> fullScan(catalogTracker, visitor, 
> getTableStartRowForMeta(tableNameBytes));
> // If visitor has results >= 1 then table exists.
> return visitor.getResults().size() >= 1;
> {code}
> Even though the table is present we return false, which is not correct (this is 
> highly likely in the case of tables with one region).
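A hypothetical, self-contained sketch of the fix direction this implies (not the actual
patch): have the visitor remember that it saw any row for the table, split parent or not,
and use that flag for the existence check instead of results.size() >= 1. All class and
method names below are illustrative stand-ins, not the real MetaReader code.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TableExistsSketch {

  // Minimal stand-in for a catalog row: which table it belongs to and whether
  // it is a split parent.
  static final class MetaRow {
    final byte[] table;
    final boolean splitParent;
    MetaRow(byte[] table, boolean splitParent) { this.table = table; this.splitParent = splitParent; }
  }

  static final class Visitor {
    private final byte[] tableNameBytes;
    private final List<MetaRow> results = new ArrayList<MetaRow>();
    private boolean sawAnyRegion = false;  // remember split parents too

    Visitor(byte[] tableNameBytes) { this.tableNameBytes = tableNameBytes; }

    boolean visit(MetaRow row) {
      if (!Arrays.equals(row.table, tableNameBytes)) return false; // past this table, stop
      sawAnyRegion = true;                // a split parent still proves the table exists
      if (row.splitParent) return true;   // excluded from results, but keep scanning
      results.add(row);
      return true;
    }

    boolean tableExists() { return sawAnyRegion; }  // instead of results.size() >= 1
  }

  public static void main(String[] args) {
    byte[] t = "t1".getBytes();
    Visitor v = new Visitor(t);
    v.visit(new MetaRow(t, true));        // the table's only region was just split
    System.out.println("table exists: " + v.tableExists());  // true
  }
}
{code}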

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira