[jira] [Commented] (HBASE-9116) Add a view/edit tool for favored node mappings for regions
[ https://issues.apache.org/jira/browse/HBASE-9116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13750997#comment-13750997 ]

Devaraj Das commented on HBASE-9116:

bq. Why not just call initialize() at the end of the constructor body?

The reason being I wanted to keep the construction of the object cheap. The initialization does some IO since it scans the meta, and I wanted to keep this out of the basic constructor...

bq. Is it worth RegionPlacementMaintainer extending AbstractHBaseTool?

IMHO this is not going to add that much value. Do you mind if I look at this in a follow-up?

bq. Is this necessary -- can you instead separate out case 2 as a second test?

{code}
-@Test(timeout = 18)
+@Test(timeout = 180)
{code}

Fixed the timeout (that was a good catch). The reason the test is clubbed into one is that it tries out the various favored-node utilities against a single table that it creates in the beginning.

Add a view/edit tool for favored node mappings for regions
--
Key: HBASE-9116
URL: https://issues.apache.org/jira/browse/HBASE-9116
Project: HBase
Issue Type: Improvement
Components: Region Assignment
Affects Versions: 0.95.0
Reporter: Devaraj Das
Assignee: Devaraj Das
Fix For: 0.96.0
Attachments: 9116-1.txt, 9116-2.txt, 9116-2.txt, 9116-2.txt, 9116-3.txt, 9116-4.txt, 9116-5.txt

Add a tool that one can run offline to view the favored node mappings for regions, and also fix the mappings if needed. Such a tool exists in the 0.89-fb branch. Will port it over to trunk/0.95.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
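The cheap-constructor pattern described in the first answer (no IO in the constructor, an explicit initialize() that does the expensive meta scan exactly once) can be sketched roughly as below; the class, field, and return value are illustrative stand-ins, not the actual RegionPlacementMaintainer code:

```java
// Sketch of the cheap-constructor / explicit-initialize pattern:
// construction allocates nothing expensive; the costly work (a meta-table
// scan in the real tool) happens only when initialize() is called.
public class PlacementTool {
    private boolean initialized = false;
    private String snapshot;   // stands in for the scanned meta state

    public PlacementTool() {
        // intentionally cheap: no IO here
    }

    // Performs the expensive work exactly once; safe to call repeatedly.
    public void initialize() {
        if (initialized) {
            return;
        }
        this.snapshot = scanMeta();
        this.initialized = true;
    }

    public String getSnapshot() {
        if (!initialized) {
            throw new IllegalStateException("call initialize() first");
        }
        return snapshot;
    }

    private String scanMeta() {
        // placeholder for the real meta-table scan
        return "region->favored-node mapping";
    }
}
```

The trade-off is that callers must remember to call initialize() before use; the guard in getSnapshot() makes a forgotten call fail fast instead of returning bad state.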
[jira] [Updated] (HBASE-9116) Add a view/edit tool for favored node mappings for regions
[ https://issues.apache.org/jira/browse/HBASE-9116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Devaraj Das updated HBASE-9116:

Attachment: 9116-6.txt

The patch with the findbugs warnings taken care of, and one of Nick's comments addressed.
[jira] [Commented] (HBASE-9321) Contention getting the current user in RpcClient$Connection.writeRequest
[ https://issues.apache.org/jira/browse/HBASE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751018#comment-13751018 ]

Devaraj Das commented on HBASE-9321:

bq. If one connection per user, we may have many connections if there are many users.

I would not be worried about supporting 100s of users, each with their own separate connection, from the REST server (assuming a modest configuration of the system). I'd suggest, to start with, reusing connections for the same user. If you ask the Oozie folks, they will tell you about a cache they implemented in their layer to deal with the many-users problem (in their case they ended up creating too many filesystem instances, even for the same end user). The cache is from user to ProxyUGI, right [~tucu00]? I believe Hadoop core has done (or is doing) work to reuse connections for multiple users (not sure whether proxy users have been covered there). I am +1 for both caching the realuser UGI and reusing connections across different proxy users for the same realuser within the RPC layer. Maybe a good one for 0.98?

Contention getting the current user in RpcClient$Connection.writeRequest
--
Key: HBASE-9321
URL: https://issues.apache.org/jira/browse/HBASE-9321
Project: HBase
Issue Type: Bug
Affects Versions: 0.95.2
Reporter: Jean-Daniel Cryans
Fix For: 0.98.0, 0.96.0
Attachments: trunk-9321.patch

I've been running tests on clusters with lots of regions, about 400, and I'm seeing weird contention in the client. This one I see a lot; hundreds and sometimes thousands of threads are blocked like this:

{noformat}
htable-pool4-t74 daemon prio=10 tid=0x7f2254114000 nid=0x2a99 waiting for monitor entry [0x7f21f9e94000]
   java.lang.Thread.State: BLOCKED (on object monitor)
	at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:466)
	- waiting to lock 0xfb5ad000 (a java.lang.Class for org.apache.hadoop.security.UserGroupInformation)
	at org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1013)
	at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1407)
	at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1634)
	at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1691)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:27339)
	at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:105)
	at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:183)
{noformat}

While the holder is doing this:

{noformat}
htable-pool17-t55 daemon prio=10 tid=0x7f2244408000 nid=0x2a98 runnable [0x7f21f9f95000]
   java.lang.Thread.State: RUNNABLE
	at java.security.AccessController.getStackAccessControlContext(Native Method)
	at java.security.AccessController.getContext(AccessController.java:487)
	at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:466)
	- locked 0xfb5ad000 (a java.lang.Class for org.apache.hadoop.security.UserGroupInformation)
	at org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1013)
	at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1407)
	at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1634)
	at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1691)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:27339)
	at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:105)
	at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:183)
{noformat}
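The caching idea discussed in the comment above (resolve the user once instead of taking the synchronized getCurrentUser() path on every request) can be sketched as below. UserCache and its string-valued lookup are hypothetical stand-ins for a user-to-UGI cache, not the actual RpcClient fix:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the user-cache idea: resolve the lock-protected, expensive
// user lookup once per name and cache the result, so concurrent callers
// stop contending on the slow path. The real cache would map user names
// to UGI instances; a String stands in here.
public class UserCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Stands in for the synchronized UGI lookup that serializes all callers
    // (the monitor on the UserGroupInformation class in the stack traces).
    private String slowLookup(String name) {
        return "ugi:" + name;
    }

    public String get(String name) {
        // computeIfAbsent takes the slow path at most once per key;
        // subsequent callers hit the lock-free fast path.
        return cache.computeIfAbsent(name, this::slowLookup);
    }
}
```

With such a cache in the connection, writeRequest would no longer need to re-resolve the user per call, removing the class-level monitor from the hot path.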
[jira] [Updated] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-9230:

Summary: Fix the server so it can take a pure pb request param and return a pure pb result (was: Fix the server so it can take a pure pb request param and return a pure pb resutl)

Fix the server so it can take a pure pb request param and return a pure pb result
--
Key: HBASE-9230
URL: https://issues.apache.org/jira/browse/HBASE-9230
Project: HBase
Issue Type: Bug
Reporter: stack
Assignee: stack
Priority: Critical
Fix For: 0.96.0

Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server currently stands. That is an oversight. Let's fix it so clients can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up the release but we should get it in to help the asynchbase conversion.
[jira] [Commented] (HBASE-9278) Reading Pre-namespace meta table edits kills the reader
[ https://issues.apache.org/jira/browse/HBASE-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751024#comment-13751024 ]

Hadoop QA commented on HBASE-9278:

{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12600088/HBase-9278-v1-1.patch
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new or modified tests.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:green}+1 core tests{color}. The patch passed unit tests in .

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6920//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6920//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6920//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6920//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6920//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6920//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6920//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6920//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6920//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6920//console

This message is automatically generated.

Reading Pre-namespace meta table edits kills the reader
--
Key: HBASE-9278
URL: https://issues.apache.org/jira/browse/HBASE-9278
Project: HBase
Issue Type: Bug
Components: migration, wal
Affects Versions: 0.95.2
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
Priority: Critical
Fix For: 0.98.0, 0.96.0
Attachments: HBase-9278-v0.patch, HBase-9278-v1-1.patch, HBase-9278-v1.patch

In upgrading to 0.96, there might be some meta/root table edits. Currently, we just kill the SplitLogWorker thread if it sees any META or ROOT WALEdit, which blocks log splitting/replaying of the remaining WALs.

{code}
2013-08-20 15:45:16,998 ERROR regionserver.SplitLogWorker (SplitLogWorker.java:run(210)) - unexpected error
java.lang.IllegalArgumentException: .META. no longer exists. The table has been renamed to hbase:meta
	at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:269)
	at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:261)
	at org.apache.hadoop.hbase.regionserver.wal.HLogKey.readFields(HLogKey.java:338)
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1898)
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1938)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.readNext(SequenceFileLogReader.java:215)
	at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:98)
	at org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getNextLogLine(HLogSplitter.java:582)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:292)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:209)
{code}
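One way to tolerate such pre-namespace edits, sketched under the assumption that the legacy name is simply translated before TableName.valueOf is called rather than rejected (the helper below is illustrative; the real fix lives in the WAL-reading path):

```java
// Sketch: map a pre-namespace table name from an old WAL edit to its
// namespaced equivalent instead of letting TableName.valueOf throw.
// The real patch would also have to decide what to do with -ROOT- edits
// (the root table was removed in 0.96); this helper only handles .META.
public class LegacyTableName {
    public static String toNamespaced(String tableName) {
        // Pre-namespace ".META." became "hbase:meta" in 0.96; all other
        // names pass through untouched.
        return ".META.".equals(tableName) ? "hbase:meta" : tableName;
    }
}
```

Applied at the point where HLogKey.readFields parses the table name, this would let log splitting proceed past old meta edits instead of killing the SplitLogWorker.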
[jira] [Updated] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-9230:

Fix Version/s: 0.98.0
Status: Patch Available (was: Open)
Attachments: 9230.txt
[jira] [Updated] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-9230:

Attachment: 9230.txt

The patch looks bigger than it really is because it has pb changes:

{code}
M hbase-client/src/main/java/org/apache/hadoop/hbase/client/MultiServerCallable.java
  Add a check: if we are not doing cellblocks, send the request as pure pb.
  It may look like duplicate code in the below; it is not. The calls to
  RequestConverter are different, taking different params (overrides).
  isCellBlock does a test of whether we are to send cellblocks by looking
  at the Configuration.
M hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/IPCUtil.java
  If a CellScanner is not null and the codec is, throw an exception.
M hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/PayloadCarryingRpcController.java
  Allow a null value.
M hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcClient.java
  Allow for no codec being specified.
M hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ResponseConverter.java
  The ScanResult protobuf changed slightly. We do not have the
  ResultCellMeta anymore; its content moved into the pb Result. Also,
  ScanResult may now carry its results inline as protobuf rather than
  always as cellblocks.
M hbase-common/src/main/java/org/apache/hadoop/hbase/CellScanner.java
  Fix javadoc.
M hbase-common/src/main/java/org/apache/hadoop/hbase/CellUtil.java
  Allow for a null scanner. Fix javadoc too.
M hbase-protocol/src/main/protobuf/Client.proto
  Removed ResultCellMeta; moved its content into ScanResponse. Also allow
  carrying Results in the ScanResponse rather than always as cellblocks.
M hbase-protocol/src/main/protobuf/RPC.proto
  Remove the default codec so it is possible to ask for NO codec.
M hbase-server/src/main/java/org/apache/hadoop/hbase/catalog/MetaEditor.java
  Remove an unused import.
M hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcCallContext.java
  isClientCellBlockSupport: true if the client wants the response as
  cellblocks.
M hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
  Add support for isClientCellBlockSupport. It's in RpcCallContext so the
  server can tell, among the many connected clients, which support
  cellblock returns.
M hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServerInterface.java
  Removed imports.
M hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
  Return results pb'd if the client wants pb-only.
M hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java
M hbase-server/src/test/java/org/apache/hadoop/hbase/catalog/TestMetaReaderEditorNoCluster.java
M hbase-server/src/test/java/org/apache/hadoop/hbase/master/MockRegionServer.java
M hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java
  Adjust because there is no more ResultCellMeta.
A hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSideNoCodec.java
  A small unit test that does basic ops w/o using a codec/cellblock.
M hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/TestIPC.java
  Add a test for no codec.
M src/main/docbkx/rpc.xml
  Add some doc.
{code}
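The isCellBlock decision described for MultiServerCallable (consult the configuration and fall back to pure pb when no codec is set) might look roughly like this; the class and method names are illustrative, not the actual HBase API:

```java
// Sketch: decide per client whether to encode payloads as cellblocks or
// as inline protobuf, based on whether a codec class is configured.
// With no codec (null or empty), the client must send/receive pure pb.
public class CodecChoice {
    public static boolean useCellBlocks(String codecClass) {
        return codecClass != null && !codecClass.isEmpty();
    }
}
```

The server-side half is the mirror image: RpcServer records per connection whether the client advertised a codec, and HRegionServer consults that flag to return results inline in the pb ScanResponse when cellblocks are unsupported.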
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751028#comment-13751028 ]

Hadoop QA commented on HBASE-9230:

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12600104/9230.txt
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 17 new or modified tests.
{color:red}-1 hadoop1.0{color}. The patch failed to compile against the hadoop 1.0 profile.

Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6922//console

This message is automatically generated.
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751029#comment-13751029 ]

stack commented on HBASE-9230:

Here is rb: https://reviews.apache.org/r/13844/
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751031#comment-13751031 ]

Hadoop QA commented on HBASE-9230:

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12600104/9230.txt
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 17 new or modified tests.
{color:red}-1 hadoop1.0{color}. The patch failed to compile against the hadoop 1.0 profile.

Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6923//console

This message is automatically generated.
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751039#comment-13751039 ]

ramkrishna.s.vasudevan commented on HBASE-9230:

@Stack Does this allow the server to have a different codec when it sends the cells to the client? I will review this patch later today.
[jira] [Commented] (HBASE-9116) Add a view/edit tool for favored node mappings for regions
[ https://issues.apache.org/jira/browse/HBASE-9116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751052#comment-13751052 ]

Hadoop QA commented on HBASE-9116:

{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12600102/9116-6.txt
against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 6 new or modified tests.
{color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:green}+1 core tests{color}. The patch passed unit tests in .

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6921//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6921//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6921//console

This message is automatically generated.
[jira] [Created] (HBASE-9350) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException
chendihao created HBASE-9350:

Summary: In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException
Key: HBASE-9350
URL: https://issues.apache.org/jira/browse/HBASE-9350
Project: HBase
Issue Type: Bug
Components: test
Affects Versions: 0.94.0
Reporter: chendihao
Fix For: 0.94.0

The first parameter of HBaseAdmin.move(final byte[] encodedRegionName, final byte[] destServerName) should be the encoded region name. Otherwise, it can throw UnknownRegionException and result in failure of this action.

encodedRegionName: the encoded region name, i.e. the hash that makes up the region name suffix. E.g. if the region name is TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., then the encoded region name is 527db22f95c8a9e0116f0cc13c680396.
[jira] [Commented] (HBASE-9344) RegionServer not shutting down upon KeeperException in open region
[ https://issues.apache.org/jira/browse/HBASE-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751095#comment-13751095 ]

Andrew Purtell commented on HBASE-9344:

+1 - What branches? All? lgtm

RegionServer not shutting down upon KeeperException in open region
--
Key: HBASE-9344
URL: https://issues.apache.org/jira/browse/HBASE-9344
Project: HBase
Issue Type: Bug
Reporter: Lars Hofhansl
Attachments: 9344-trunk.txt

We ran into a situation where, due to a Kerberos configuration problem, one of our region servers could not connect to ZK when opening a region. Instead of shutting down, it continued trying to reconnect. Eventually the master would assign the region to another region server. Each time that region server was assigned a region, it would sit there for 5 mins with the region offline. It would have been better if the region server had shut itself down. This is in the logs:

{quote}
2013-08-16 17:31:35,999 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: hconnection-0x2407b842ff2012d-0x2407b842ff2012d-0x2407b842ff2012d Unable to set watcher on znode (/hbase/hbaseid)
org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /hbase/hbaseid
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
	at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
	at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041)
	at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:172)
	at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:450)
	at org.apache.hadoop.hbase.zookeeper.ClusterId.readClusterIdZNode(ClusterId.java:61)
	at org.apache.hadoop.hbase.zookeeper.ClusterId.getId(ClusterId.java:50)
	at org.apache.hadoop.hbase.zookeeper.ClusterId.hasId(ClusterId.java:44)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.ensureZookeeperTrackers(HConnectionManager.java:616)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:882)
	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:857)
	at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:233)
	at org.apache.hadoop.hbase.client.HTable.init(HTable.java:173)
	at org.apache.hadoop.hbase.catalog.MetaReader.getHTable(MetaReader.java:201)
	at org.apache.hadoop.hbase.catalog.MetaReader.getMetaHTable(MetaReader.java:227)
	at org.apache.hadoop.hbase.catalog.MetaReader.getCatalogHTable(MetaReader.java:214)
	at org.apache.hadoop.hbase.catalog.MetaEditor.putToCatalogTable(MetaEditor.java:91)
	at org.apache.hadoop.hbase.catalog.MetaEditor.updateLocation(MetaEditor.java:296)
	at org.apache.hadoop.hbase.catalog.MetaEditor.updateRegionLocation(MetaEditor.java:276)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.postOpenDeployTasks(HRegionServer.java:1828)
	at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler$PostOpenDeployTasksThread.run(OpenRegionHandler.java:240)
{quote}

I think the RS should shut itself down instead.
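The proposed behavior (abort the region server on an unrecoverable ZK failure during region open, rather than retrying forever) can be sketched as follows; the interface and method names are illustrative stand-ins, not the actual OpenRegionHandler code:

```java
// Sketch: treat a ZooKeeper failure during region open as fatal. Instead
// of looping on reconnect attempts, report the failure to the server's
// abort hook so the process shuts down and the master can reassign.
public class OpenGuard {
    public interface Abortable {
        void abort(String why);
    }

    // zkWork stands in for the post-open ZK/meta update; a RuntimeException
    // stands in for KeeperException here to keep the sketch self-contained.
    public static boolean openRegion(Abortable server, Runnable zkWork) {
        try {
            zkWork.run();
            return true;
        } catch (RuntimeException e) {
            server.abort("ZK failure during region open: " + e.getMessage());
            return false;
        }
    }
}
```

The design choice is fail-fast: a server that cannot authenticate to ZK can never finish opening a region, so aborting immediately is cheaper than leaving regions offline for the assignment timeout each time.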
[jira] [Updated] (HBASE-9350) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException
[ https://issues.apache.org/jira/browse/HBASE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chendihao updated HBASE-9350: - Description: The first parameter in HBaseAdmin.move(final byte [] encodedRegionName, final byte [] destServerName) should be encoded. Otherwise, it could throw UnknownRegionException and result in failure of this action. {code} encodedRegionName The encoded region name; i.e. the hash that makes up the region name suffix: e.g. if regionname is TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. {code} was: The first parameter in HBaseAdmin.move(final byte [] encodedRegionName, final byte [] destServerName) should be encoded. Otherwise, it could throw UnknownRegionException and result in failure of this action. encodedRegionName The encoded region name; i.e. the hash that makes up the region name suffix: e.g. if regionname is TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException -- Key: HBASE-9350 URL: https://issues.apache.org/jira/browse/HBASE-9350 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Fix For: 0.94.0 The first parameter in HBaseAdmin.move(final byte [] encodedRegionName, final byte [] destServerName) should be encoded. Otherwise, it could throw UnknownRegionException and result in failure of this action. {code} encodedRegionName The encoded region name; i.e. the hash that makes up the region name suffix: e.g. if regionname is TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. {code} -- This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
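The encoded-name derivation quoted in the description above can be sketched in plain Java. This helper is hypothetical and for illustration only; real callers should prefer HRegionInfo.getEncodedName() from the HBase API rather than parsing strings by hand:

```java
// Sketch: deriving the encoded region name suffix that HBaseAdmin.move()
// expects, from a full region name such as
// "TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396."
// Hypothetical helper; prefer HRegionInfo.getEncodedName() in real code.
public class EncodedNameSketch {
    static String encodedName(String fullRegionName) {
        // The encoded name is the hash between the last two dots.
        String s = fullRegionName;
        if (s.endsWith(".")) {
            s = s.substring(0, s.length() - 1); // drop the trailing dot
        }
        int lastDot = s.lastIndexOf('.');
        if (lastDot < 0) {
            throw new IllegalArgumentException("not a full region name: " + fullRegionName);
        }
        return s.substring(lastDot + 1);
    }

    public static void main(String[] args) {
        String full = "TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396.";
        System.out.println(encodedName(full)); // 527db22f95c8a9e0116f0cc13c680396
    }
}
```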
[jira] [Commented] (HBASE-9153) Create a deprecation policy enforcement check
[ https://issues.apache.org/jira/browse/HBASE-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751098#comment-13751098 ] Andrew Purtell commented on HBASE-9153: --- This is git only? Fine by me, but we have an SVN-based process, so should it be SVN to be a general project tool? Maybe it's better to not name things by or mention the singularity if it is to be a tool for use by any RM at every release? Create a deprecation policy enforcement check - Key: HBASE-9153 URL: https://issues.apache.org/jira/browse/HBASE-9153 Project: HBase Issue Type: Task Reporter: Jonathan Hsieh Attachments: HBASE-9153-v1.patch We've had a few issues now where we've removed APIs without deprecating them, or deprecated them only in the latest release. (HBASE-9142, HBASE-9093) We should just have a tool that enforces our API deprecation policy as a release-time check or as a precommit check. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
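As a rough illustration of what such a check could look like (this is not the attached patch; the API-baseline shape and all names here are invented), a release-time tool might diff the public API recorded at the previous release against the current one and flag any removal that skipped a deprecation cycle:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical deprecation-policy check: previousApi maps each public method
// of the last release to whether it was @Deprecated there; any method that
// disappeared without having been deprecated first is a policy violation.
public class DeprecationCheck {
    static List<String> violations(Map<String, Boolean> previousApi, Set<String> currentApi) {
        List<String> bad = new ArrayList<>();
        for (Map.Entry<String, Boolean> e : previousApi.entrySet()) {
            boolean removed = !currentApi.contains(e.getKey());
            boolean wasDeprecated = e.getValue();
            if (removed && !wasDeprecated) {
                bad.add(e.getKey()); // removed without a deprecation cycle
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        Map<String, Boolean> prev = new HashMap<>();
        prev.put("get", false);        // still present -> fine
        prev.put("oldGet", false);     // removed, never deprecated -> violation
        prev.put("legacyPut", true);   // removed, but was deprecated -> fine
        Set<String> current = new HashSet<>(Arrays.asList("get", "put"));
        System.out.println(violations(prev, current)); // [oldGet]
    }
}
```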
[jira] [Updated] (HBASE-9350) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException
[ https://issues.apache.org/jira/browse/HBASE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chendihao updated HBASE-9350: - Labels: test (was: ) Release Note: patch for 0.94.x Hadoop Flags: Reviewed Status: Patch Available (was: Open) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException -- Key: HBASE-9350 URL: https://issues.apache.org/jira/browse/HBASE-9350 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Labels: test Fix For: 0.94.0 The first parameter in HBaseAdmin.move(final byte [] encodedRegionName, final byte [] destServerName) should be encoded. Otherwise, it could throw UnknownRegionException and result in failure of this action. {code} encodedRegionName The encoded region name; i.e. the hash that makes up the region name suffix: e.g. if regionname is TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9350) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException
[ https://issues.apache.org/jira/browse/HBASE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chendihao updated HBASE-9350: - Status: Open (was: Patch Available) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException -- Key: HBASE-9350 URL: https://issues.apache.org/jira/browse/HBASE-9350 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Labels: test Fix For: 0.94.0 The first parameter in HBaseAdmin.move(final byte [] encodedRegionName, final byte [] destServerName) should be encoded. Otherwise, it could throw UnknownRegionException and result in failure of this action. {code} encodedRegionName The encoded region name; i.e. the hash that makes up the region name suffix: e.g. if regionname is TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9350) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException
[ https://issues.apache.org/jira/browse/HBASE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chendihao updated HBASE-9350: - Attachment: MoveRegionsOfTableAction.java.patch patch for 0.94.x In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException -- Key: HBASE-9350 URL: https://issues.apache.org/jira/browse/HBASE-9350 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Labels: test Fix For: 0.94.0 Attachments: MoveRegionsOfTableAction.java.patch The first parameter in HBaseAdmin.move(final byte [] encodedRegionName, final byte [] destServerName) should be encoded. Otherwise, it could throw UnknownRegionException and result in failure of this action. {code} encodedRegionName The encoded region name; i.e. the hash that makes up the region name suffix: e.g. if regionname is TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9350) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException
[ https://issues.apache.org/jira/browse/HBASE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chendihao updated HBASE-9350: - Status: Patch Available (was: Open) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException -- Key: HBASE-9350 URL: https://issues.apache.org/jira/browse/HBASE-9350 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Labels: test Fix For: 0.94.0 Attachments: MoveRegionsOfTableAction.java.patch The first parameter in HBaseAdmin.move(final byte [] encodedRegionName, final byte [] destServerName) should be encoded. Otherwise, it could throw UnknownRegionException and result in failure of this action. {code} encodedRegionName The encoded region name; i.e. the hash that makes up the region name suffix: e.g. if regionname is TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9349) [0.92] NPE in HMaster during shutdown
[ https://issues.apache.org/jira/browse/HBASE-9349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751107#comment-13751107 ] Hudson commented on HBASE-9349: --- FAILURE: Integrated in HBase-0.92 #621 (See [https://builds.apache.org/job/HBase-0.92/621/]) HBASE-9349. [0.92] NPE in HMaster during shutdown (apurtell: rev 1517752) * /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/master/HMaster.java [0.92] NPE in HMaster during shutdown - Key: HBASE-9349 URL: https://issues.apache.org/jira/browse/HBASE-9349 Project: HBase Issue Type: Bug Affects Versions: 0.92.3 Reporter: Andrew Purtell Assignee: Andrew Purtell Priority: Minor Fix For: 0.92.3 Attachments: 9349.patch Found this in a run of TestWALObserver: {noformat} java.lang.NullPointerException at org.apache.hadoop.hbase.master.HMaster.shutdown(HMaster.java:1510) at org.apache.hadoop.hbase.util.JVMClusterUtil.shutdown(JVMClusterUtil.java:226) at org.apache.hadoop.hbase.LocalHBaseCluster.shutdown(LocalHBaseCluster.java:424) at org.apache.hadoop.hbase.MiniHBaseCluster.shutdown(MiniHBaseCluster.java:417) at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniHBaseCluster(HBaseTestingUtility.java:607) at org.apache.hadoop.hbase.HBaseTestingUtility.shutdownMiniCluster(HBaseTestingUtility.java:583) at org.apache.hadoop.hbase.coprocessor.TestWALObserver.teardownAfterClass(TestWALObserver.java:111) {noformat} if the active master in the minicluster is terminated before fully initialized. Needs null checks in HMaster#shutdown. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
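A minimal sketch of the null-guard pattern the fix calls for. The class and field names below are illustrative stand-ins, not the actual HMaster members: the point is that shutdown only dereferences fields that late initialization may never have set:

```java
// Sketch: HMaster#shutdown dereferenced fields that are only assigned once
// the master is fully initialized, so shutting down a half-started master
// threw an NPE. Guarding each nullable field makes shutdown safe at any
// point in the lifecycle. Names here are hypothetical.
public class GuardedShutdown {
    Object balancer;        // assigned during late initialization; may be null
    Object executorService; // assigned during late initialization; may be null

    void shutdown() {
        if (balancer != null) {
            // stop the balancer ...
        }
        if (executorService != null) {
            // shut down executor threads ...
        }
    }

    public static void main(String[] args) {
        // Terminating before initialization completes: no NPE.
        new GuardedShutdown().shutdown();
        System.out.println("clean shutdown");
    }
}
```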
[jira] [Updated] (HBASE-9350) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException
[ https://issues.apache.org/jira/browse/HBASE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chendihao updated HBASE-9350: - Attachment: MoveRegionsOfTableAction-v2.patch take the diff at HBASE_HOME_DIR In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException -- Key: HBASE-9350 URL: https://issues.apache.org/jira/browse/HBASE-9350 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Labels: test Fix For: 0.94.0 Attachments: MoveRegionsOfTableAction.java.patch, MoveRegionsOfTableAction-v2.patch The first parameter in HBaseAdmin.move(final byte [] encodedRegionName, final byte [] destServerName) should be encoded. Otherwise, it could throw UnknownRegionException and result in failure of this action. {code} encodedRegionName The encoded region name; i.e. the hash that makes up the region name suffix: e.g. if regionname is TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9138) the name of function getHaseIntegrationTestingUtility() is a misspelling
[ https://issues.apache.org/jira/browse/HBASE-9138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chendihao updated HBASE-9138: - Fix Version/s: 0.94.0 Affects Version/s: (was: 0.94.4) 0.94.0 Release Note: patch for 0.94.x Hadoop Flags: Reviewed Status: Patch Available (was: Open) the name of function getHaseIntegrationTestingUtility() is a misspelling Key: HBASE-9138 URL: https://issues.apache.org/jira/browse/HBASE-9138 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Priority: Trivial Fix For: 0.94.0 Attachments: ChaosMonkey.java.patch, ChaosMonkey-v2.patch The function getHaseIntegrationTestingUtility() in ChaosMonkey.java should be getHBaseIntegrationTestingUtility(), just a spelling mistake.
{code}
/**
 * Context for Action's
 */
public static class ActionContext {
  private IntegrationTestingUtility util;

  public ActionContext(IntegrationTestingUtility util) {
    this.util = util;
  }

  public IntegrationTestingUtility getHaseIntegrationTestingUtility() {
    return util;
  }

  public HBaseCluster getHBaseCluster() {
    return util.getHBaseClusterInterface();
  }
}
{code}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9136) RPC side changes to have a different codec for server to client communication
[ https://issues.apache.org/jira/browse/HBASE-9136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751117#comment-13751117 ] Andrew Purtell commented on HBASE-9136: --- Why can't we have different codecs set up for the client and server side (by configuration)? Seems generally useful. Putting flags in a monolithic codec produces a result limited to whatever we hardcode the codec to do when dealing with the flags. It's a lot more flexible to be able to swap in and out codecs on the client and server side. RPC side changes to have a different codec for server to client communication - Key: HBASE-9136 URL: https://issues.apache.org/jira/browse/HBASE-9136 Project: HBase Issue Type: Sub-task Reporter: ramkrishna.s.vasudevan Assignee: ramkrishna.s.vasudevan Fix For: 0.98.0 With reference to the mail sent in the dev list, http://comments.gmane.org/gmane.comp.java.hadoop.hbase.devel/38984 We should have a provision such that the codec on the server side could be different from the one on the client side. This would help to remove the tags for security usecases. This JIRA is aimed to provide that capability in the codec itself. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
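A sketch of the configuration-driven alternative suggested in the comment, with made-up property names: each side resolves its own codec class from configuration, so client and server can differ without flags baked into one monolithic codec:

```java
import java.util.Properties;

// Illustrative only: the property keys and codec class names below are
// invented, not actual HBase configuration. The idea is simply that the
// client-side and server-side codec classes are looked up independently.
public class CodecConfigSketch {
    static String codecClass(Properties conf, String key, String defaultClass) {
        return conf.getProperty(key, defaultClass);
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("rpc.client.codec", "org.example.KeyValueCodec");
        conf.setProperty("rpc.server.codec", "org.example.KeyValueCodecWithTags");
        String client = codecClass(conf, "rpc.client.codec", "org.example.KeyValueCodec");
        String server = codecClass(conf, "rpc.server.codec", "org.example.KeyValueCodec");
        System.out.println(client.equals(server)); // false: the sides differ
    }
}
```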
[jira] [Created] (HBASE-9351) Connection capability negotiation
Andrew Purtell created HBASE-9351: - Summary: Connection capability negotiation Key: HBASE-9351 URL: https://issues.apache.org/jira/browse/HBASE-9351 Project: HBase Issue Type: Improvement Reporter: Andrew Purtell Would be useful to support negotiation at connection setup time beyond SASL. Consider: Start with a default baseline profile. Both client and server sides can begin communicating immediately (or after SASL completes if security is active), with a baseline set of messages and codecs. For more interesting use cases, support configuration messages that negotiate connection configuration going forward after both sides ack the changes: codec, configuration, compression. Any nack aborts the upgrade request and leaves both sides still in the default profile. Should be a pluggable implementation. For example, codec implementations should be automatically discovered at runtime if shipped with the client or server, and the set of available options should be communicated to the other side. Features like codecs should all be versioned themselves. Negotiation should be version-aware, and decision-making on whether a given pair of component versions is compatible should be delegated to the component. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
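The ack/nack semantics described above can be sketched as follows. All names here are hypothetical; a real implementation would run this exchange over the RPC connection-setup handshake:

```java
// Sketch of the proposed negotiation flow: both sides start on a default
// baseline profile; an upgrade takes effect only after the peer acks it,
// and any nack aborts the upgrade, leaving the connection on the baseline.
public class NegotiationSketch {
    static final String BASELINE = "baseline-codec";
    String activeCodec = BASELINE;

    // Returns true if the peer acked and the upgrade was applied.
    boolean proposeUpgrade(String proposedCodec, boolean peerAcks) {
        if (peerAcks) {
            activeCodec = proposedCodec; // both sides switch after the ack
            return true;
        }
        return false; // nack: stay on the baseline profile
    }

    public static void main(String[] args) {
        NegotiationSketch conn = new NegotiationSketch();
        conn.proposeUpgrade("fancy-codec", false);
        System.out.println(conn.activeCodec); // baseline-codec (nack aborted it)
        conn.proposeUpgrade("fancy-codec", true);
        System.out.println(conn.activeCodec); // fancy-codec
    }
}
```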
[jira] [Created] (HBASE-9352) Refactor HFile block encoding
Andrew Purtell created HBASE-9352: - Summary: Refactor HFile block encoding Key: HBASE-9352 URL: https://issues.apache.org/jira/browse/HBASE-9352 Project: HBase Issue Type: Improvement Reporter: Andrew Purtell The set of block encoders available for processing HFiles is fixed at compile time. Rather than hardcode the set of available encoders in an enum, consider a registry and a couple of APIs for adding additional encoders at runtime. Modify HFile (V3) metadata to specify encoders by string or classname. Consider a stackable encoding API, so coprocessors can watch, change, or override block coding in upcalls. Block encoders are tightly bound to the particulars of HFile version internals. It would be good if some of that can go away. Could also consider dynamic loading of encoder implementations. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
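A sketch of the registry idea, replacing the compile-time enum with runtime registration by name. The Encoder interface here is a stand-in for illustration, not the actual HBase block-encoding API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Sketch: encoders registered under a string name at runtime, with HFile
// metadata referring to that name instead of an enum ordinal. Hypothetical
// shapes; the real HBase encoder API differs.
public class EncoderRegistry {
    interface Encoder { String name(); }

    private final Map<String, Supplier<Encoder>> registry = new ConcurrentHashMap<>();

    void register(String name, Supplier<Encoder> factory) {
        registry.put(name, factory);
    }

    Encoder create(String name) {
        Supplier<Encoder> f = registry.get(name);
        if (f == null) {
            throw new IllegalArgumentException("unknown encoder: " + name);
        }
        return f.get();
    }

    public static void main(String[] args) {
        EncoderRegistry r = new EncoderRegistry();
        // A plugin (or coprocessor) could add encoders like this at runtime.
        r.register("PREFIX", () -> new Encoder() {
            public String name() { return "PREFIX"; }
        });
        System.out.println(r.create("PREFIX").name()); // PREFIX
    }
}
```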
[jira] [Updated] (HBASE-9351) Connection capability negotiation
[ https://issues.apache.org/jira/browse/HBASE-9351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-9351: -- Affects Version/s: 0.98.0 Connection capability negotiation - Key: HBASE-9351 URL: https://issues.apache.org/jira/browse/HBASE-9351 Project: HBase Issue Type: Improvement Affects Versions: 0.98.0 Reporter: Andrew Purtell Would be useful to support negotiation at connection setup time beyond SASL. Consider: Start with a default baseline profile. Both client and server sides can begin communicating immediately (or after SASL completes if security is active), with a baseline set of messages and codecs. For more interesting use cases, support configuration messages that negotiate connection configuration going forward after both sides ack the changes: codec, configuration, compression. Any nack aborts the upgrade request and leaves both sides still in the default profile. Should be a pluggable implementation. For example, codec implementations should be automatically discovered at runtime if shipped with the client or server, and the set of available options should be communicated to the other side. Features like codecs should all be versioned themselves. Negotiation should be version-aware, and decision-making on whether a given pair of component versions is compatible should be delegated to the component. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6222) Add per-KeyValue Security
[ https://issues.apache.org/jira/browse/HBASE-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-6222: -- Affects Version/s: (was: 0.98.0) (was: 0.95.2) Add per-KeyValue Security - Key: HBASE-6222 URL: https://issues.apache.org/jira/browse/HBASE-6222 Project: HBase Issue Type: New Feature Components: security Reporter: stack Assignee: Andrew Purtell Attachments: 6222.pdf, cell-acls-kv-tags-not-for-review.zip, HBaseCellRow-LevelSecurityDesignDoc.docx, HBaseCellRow-LevelSecurityPRD.docx Saw an interesting article: http://www.fiercegovernmentit.com/story/sasc-accumulo-language-pro-open-source-say-proponents/2012-06-14 The Senate Armed Services Committee version of the fiscal 2013 national defense authorization act (S. 3254) would require DoD agencies to foreswear the Accumulo NoSQL database after Sept. 30, 2013, unless the DoD CIO certifies that there exists either no viable commercial open source database with security features comparable to [Accumulo] (such as the HBase or Cassandra databases)... Not sure what a 'commercial open source database' is, and I'm not sure whats going on in the article, but tra-la-la'ing, if we had per-KeyValue 'security' like Accumulo's, we might put ourselves in the running for federal contributions? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9352) Refactor HFile block encoding
[ https://issues.apache.org/jira/browse/HBASE-9352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-9352: -- Affects Version/s: 0.98.0 Refactor HFile block encoding - Key: HBASE-9352 URL: https://issues.apache.org/jira/browse/HBASE-9352 Project: HBase Issue Type: Improvement Affects Versions: 0.98.0 Reporter: Andrew Purtell The set of block encoders available for processing HFiles is fixed at compile time. Rather than hardcode the set of available encoders in an enum, consider a registry and a couple of APIs for adding additional encoders at runtime. Modify HFile (V3) metadata to specify encoders by string or classname. Consider a stackable encoding API, so coprocessors can watch, change, or override block coding in upcalls. Block encoders are tightly bound to the particulars of HFile version internals. It would be good if some of that can go away. Could also consider dynamic loading of encoder implementations. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7664) [Per-KV security] Shell support
[ https://issues.apache.org/jira/browse/HBASE-7664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-7664: -- Affects Version/s: (was: 0.95.2) 0.98.0 [Per-KV security] Shell support --- Key: HBASE-7664 URL: https://issues.apache.org/jira/browse/HBASE-7664 Project: HBase Issue Type: Sub-task Components: security, shell Affects Versions: 0.98.0 Reporter: Andrew Purtell Assignee: Andrew Purtell Support simple exploration and validation of per-KV ACLs via the shell. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6873) Clean up Coprocessor loading failure handling
[ https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-6873: -- Affects Version/s: (was: 0.94.5) (was: 0.95.2) 0.98.0 Clean up Coprocessor loading failure handling - Key: HBASE-6873 URL: https://issues.apache.org/jira/browse/HBASE-6873 Project: HBase Issue Type: Sub-task Components: Coprocessors, regionserver Affects Versions: 0.98.0 Reporter: David Arthur Assignee: Andrew Purtell When registering a coprocessor with a missing dependency, the regionserver gets stuck in an infinite fail loop. Restarting the regionserver and/or master has no effect. E.g., load a coprocessor from my-coproc.jar that uses an external dependency (kafka) that is not included with HBase.
{code}
12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME = 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY = '', ENDKEY = '', ENCODED = 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking as FAILED_OPEN in ZK
12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: regionserver:60020-0x139f43af2a70043 Attempting to transition node 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to RS_ZK_REGION_FAILED_OPEN
12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: regionserver:60020-0x139f43af2a70043 Successfully transitioned node 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to RS_ZK_REGION_FAILED_OPEN
12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: regionserver:60020-0x139f43af2a70043 Attempting to transition node 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: regionserver:60020-0x139f43af2a70043 Successfully transitioned node 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to RS_ZK_REGION_OPENING 12/09/24 13:13:15 DEBUG regionserver.HRegion: Opening region: {NAME = 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY = '', ENDKEY = '', ENCODED = 6d1e1b7bb93486f096173bd401e8ef6b,} 12/09/24 13:13:15 INFO regionserver.HRegion: Setting up tabledescriptor config now ... 12/09/24 13:13:15 INFO coprocessor.CoprocessorHost: Class com.mycompany.hbase.documents.DocumentObserverCoprocessor needs to be loaded from a file - file:/path/to/my-coproc.jar. 12/09/24 13:13:16 ERROR handler.OpenRegionHandler: Failed open of region=documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b., starting to roll back the global memstore size. java.lang.IllegalStateException: Could not instantiate a region instance. 
at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3595) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3733) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:680) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3592) ... 7 more Caused by: java.lang.NoClassDefFoundError: kafka/common/NoBrokersForPartitionException at java.lang.Class.getDeclaredConstructors0(Native Method) at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389) at java.lang.Class.getConstructor0(Class.java:2699) at java.lang.Class.newInstance0(Class.java:326) at java.lang.Class.newInstance(Class.java:308) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:254) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:227) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.loadTableCoprocessors(RegionCoprocessorHost.java:162) at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.init(RegionCoprocessorHost.java:126) at
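One possible shape for the cleanup being asked for (illustrative only, not the eventual HBase behavior): make load-failure handling an explicit policy, catching Throwable so linkage errors like the NoClassDefFoundError above either abort the server or skip the coprocessor with a warning, instead of retrying the region open forever:

```java
// Hypothetical policy for coprocessor load failures. NoClassDefFoundError is
// an Error, not an Exception, so catching Throwable is the key detail: a
// catch (Exception e) would miss exactly the failure mode shown above.
public class CoprocessorLoadPolicy {
    enum OnLoadFailure { ABORT, SKIP }

    static boolean load(String className, OnLoadFailure policy) {
        try {
            Class.forName(className).getDeclaredConstructor().newInstance();
            return true;
        } catch (Throwable t) { // covers ClassNotFoundException and linkage Errors
            if (policy == OnLoadFailure.ABORT) {
                throw new RuntimeException("coprocessor load failed, aborting: " + className, t);
            }
            System.err.println("skipping unloadable coprocessor: " + className);
            return false;
        }
    }

    public static void main(String[] args) {
        // SKIP policy: the region comes up without the broken coprocessor.
        System.out.println(load("com.example.MissingObserver", OnLoadFailure.SKIP)); // false
    }
}
```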
[jira] [Commented] (HBASE-9153) Create a deprecation policy enforcement check
[ https://issues.apache.org/jira/browse/HBASE-9153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751156#comment-13751156 ] Hadoop QA commented on HBASE-9153: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12599735/HBASE-9153-v1.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6924//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6924//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6924//console This message is automatically generated. Create a deprecation policy enforcement check - Key: HBASE-9153 URL: https://issues.apache.org/jira/browse/HBASE-9153 Project: HBase Issue Type: Task Reporter: Jonathan Hsieh Attachments: HBASE-9153-v1.patch We've had a few issues now where we've removed API's without deprecating or deprecating in the late release. (HBASE-9142, HBASE-9093) We should just have a tool that enforces our api deprecation policy as a release time check or as a precommit check. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-7661) [Per-KV security] Store and apply per cell ACLs in a shadow CF
[ https://issues.apache.org/jira/browse/HBASE-7661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Purtell updated HBASE-7661: -- Affects Version/s: (was: 0.95.2) [Per-KV security] Store and apply per cell ACLs in a shadow CF -- Key: HBASE-7661 URL: https://issues.apache.org/jira/browse/HBASE-7661 Project: HBase Issue Type: Sub-task Components: Coprocessors, security Reporter: Andrew Purtell Assignee: Andrew Purtell Attachments: 7661-0.94.patch, 7661.patch Coprocessor based implementation of per-KV security that extends the existing AccessController and ACL model to cover per cell permissions. More comments on this approach can be found on the parent issue. Stores and consults the additional metadata in a shadow column family managed by the AccessController. Preserves existing user facing semantics. Does not require any changes to core code except, optionally, the security shell commands, for testing and prototyping convenience. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9138) the name of function getHaseIntegrationTestingUtility() is a misspelling
[ https://issues.apache.org/jira/browse/HBASE-9138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751158#comment-13751158 ] Hadoop QA commented on HBASE-9138: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12596476/ChaosMonkey-v2.patch against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests.
{color:red}-1 patch{color}. The patch command could not apply the patch.
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6926//console This message is automatically generated. the name of function getHaseIntegrationTestingUtility() is a misspelling Key: HBASE-9138 URL: https://issues.apache.org/jira/browse/HBASE-9138 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Priority: Trivial Fix For: 0.94.0 Attachments: ChaosMonkey.java.patch, ChaosMonkey-v2.patch The function getHaseIntegrationTestingUtility() in ChaosMonkey.java should be getHBaseIntegrationTestingUtility(), just a spelling mistake.
{code}
/**
 * Context for Action's
 */
public static class ActionContext {
  private IntegrationTestingUtility util;

  public ActionContext(IntegrationTestingUtility util) {
    this.util = util;
  }

  public IntegrationTestingUtility getHaseIntegrationTestingUtility() {
    return util;
  }

  public HBaseCluster getHBaseCluster() {
    return util.getHBaseClusterInterface();
  }
}
{code}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9321) Contention getting the current user in RpcClient$Connection.writeRequest
[ https://issues.apache.org/jira/browse/HBASE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751164#comment-13751164 ] Andrew Purtell commented on HBASE-9321: --- bq. One option here would be to eliminate usage of User in RpcClient.ConnectionId, and only keep a reference to User in HConnection. So if you want to authenticate as a new User, you obtain a new HConnection. User could be specified explicitly as a parameter to HCM.createConnection(), and could use User.getCurrent() to populate the value for the old signatures. This sounds reasonable to me. One connection per user could be useful for reasoning about admission control if we are doing per user operation quotas or QoS at some future time. Otherwise that stuff would have to look at each RPC to make a decision. I would +1 changes to REST introducing a higher level connection cache for impersonation. Contention getting the current user in RpcClient$Connection.writeRequest Key: HBASE-9321 URL: https://issues.apache.org/jira/browse/HBASE-9321 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Jean-Daniel Cryans Fix For: 0.98.0, 0.96.0 Attachments: trunk-9321.patch I've been running tests on clusters with lots of regions, about 400, and I'm seeing weird contention in the client. 
This one I see a lot, hundreds and sometimes thousands of threads are blocked like this: {noformat} htable-pool4-t74 daemon prio=10 tid=0x7f2254114000 nid=0x2a99 waiting for monitor entry [0x7f21f9e94000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:466) - waiting to lock 0xfb5ad000 (a java.lang.Class for org.apache.hadoop.security.UserGroupInformation) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1013) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1407) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1634) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1691) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:27339) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:105) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:183) {noformat} While the holder is doing this: {noformat} htable-pool17-t55 daemon prio=10 tid=0x7f2244408000 nid=0x2a98 runnable [0x7f21f9f95000] java.lang.Thread.State: RUNNABLE at java.security.AccessController.getStackAccessControlContext(Native Method) at java.security.AccessController.getContext(AccessController.java:487) at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:466) - locked 0xfb5ad000 (a java.lang.Class for org.apache.hadoop.security.UserGroupInformation) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1013) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1407) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1634) at 
org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1691) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:27339) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:105) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:183) {noformat}
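The fix direction discussed above is to stop calling the globally synchronized user lookup on every RPC write and instead resolve the user once per connection. The sketch below illustrates that caching pattern only; `CachedUserConnection`, `expensiveGetCurrentUser`, and the lookup counter are hypothetical stand-ins, not the actual HBase patch or the UserGroupInformation API.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: capture the current user once at connection construction instead of
// taking a class-wide lock (like UserGroupInformation.getCurrentUser()) on
// every request write.
public class CachedUserConnection {
    static final AtomicInteger lookups = new AtomicInteger();

    // Stand-in for the expensive, globally synchronized getCurrentUser() call
    // that the blocked threads in the stack traces are waiting on.
    static synchronized String expensiveGetCurrentUser() {
        lookups.incrementAndGet();
        return System.getProperty("user.name", "unknown");
    }

    private final String user; // resolved once, reused for every RPC

    CachedUserConnection() {
        this.user = expensiveGetCurrentUser();
    }

    String writeRequest(String payload) {
        // No global lock taken here; the user was captured up front.
        return user + ":" + payload;
    }

    public static void main(String[] args) {
        CachedUserConnection conn = new CachedUserConnection();
        for (int i = 0; i < 1000; i++) {
            conn.writeRequest("req" + i);
        }
        System.out.println("lookups=" + lookups.get());
    }
}
```

With 1000 requests the lookup runs once, which is the point of moving the User reference into the connection (and why authenticating as a different user would then require a new connection).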
[jira] [Commented] (HBASE-9350) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException
[ https://issues.apache.org/jira/browse/HBASE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751169#comment-13751169 ] Hadoop QA commented on HBASE-9350: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600123/MoveRegionsOfTableAction-v2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6925//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6925//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6925//console This message is automatically generated. In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException -- Key: HBASE-9350 URL: https://issues.apache.org/jira/browse/HBASE-9350 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Labels: test Fix For: 0.94.0 Attachments: MoveRegionsOfTableAction.java.patch, MoveRegionsOfTableAction-v2.patch The first parameter in HBaseAdmin.move(final byte [] encodedRegionName, final byte [] destServerName) should be encoded. Otherwise, it could throw UnknowRegionException and result in failure of this action. 
{code} encodedRegionName The encoded region name; i.e. the hash that makes up the region name suffix: e.g. if regionname is TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. {code}
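The javadoc quoted above can be made concrete with a small sketch. This is string parsing for illustration only; real code should obtain the encoded name from HRegionInfo rather than reparse the region name, and the class and method names here are hypothetical.

```java
// Sketch: pull the encoded region name (the hash suffix) out of a full region
// name of the form "table,startkey,timestamp.<encoded-name>."
public class EncodedRegionName {
    static String encodedNameOf(String regionName) {
        // The encoded name sits between the last two '.' separators; the full
        // region name ends with a trailing '.'.
        int end = regionName.endsWith(".") ? regionName.length() - 1 : regionName.length();
        int start = regionName.lastIndexOf('.', end - 1) + 1;
        return regionName.substring(start, end);
    }

    public static void main(String[] args) {
        String name = "TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396.";
        System.out.println(encodedNameOf(name));
    }
}
```

Passing the full region name where HBaseAdmin.move() expects this encoded suffix is exactly what triggered the UnknownRegionException in the ChaosMonkey action.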
[jira] [Commented] (HBASE-9343) Implement stateless scanner for Stargate
[ https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751170#comment-13751170 ] Andrew Purtell commented on HBASE-9343: --- bq. The current scanner implementation for scanner stores state and hence not very suitable for REST server failure scenarios That's not quite how I would describe it. The current scanner implementation expects clients to restart scans if there is a REST server failure in the midst. The tradeoff is a pretty close semantic mapping - though definitely not RESTful - to the client API on the one hand, and loss of the cursor upon process failure on the other. Sure, that can be problematic. Why introduce new resources and a new model of scanning? Most of what you are trying to do can be done with Gets. Extend the existing resources for that. Do we need ProtobufStreamingUtil if REST already has internally a Generator API for iterating over results returned by scanners? Did you partially reimplement that here? Implement stateless scanner for Stargate Key: HBASE-9343 URL: https://issues.apache.org/jira/browse/HBASE-9343 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9343_94.00.patch The current scanner implementation for scanner stores state and hence not very suitable for REST server failure scenarios. The current JIRA proposes to implement a stateless scanner. In the first version of the patch, a new resource class ScanResource has been added and all the scan parameters will be specified as query params. The following are the scan parameters startrow - The start row for the scan. endrow - The end row for the scan. columns - The columns to scan. starttime, endtime - To only retrieve columns within a specific range of version timestamps,both start and end time must be specified. 
maxversions - To limit the number of versions of each column to be returned. batchsize - To limit the maximum number of values returned for each call to next(). limit - The number of rows to return in the scan operation. More on start row, end row and limit parameters. 1. If start row, end row and limit not specified, then the whole table will be scanned. 2. If start row and limit (say N) is specified, then the scan operation will return N rows from the start row specified. 3. If only limit parameter is specified, then the scan operation will return N rows from the start of the table. 4. If limit and end row are specified, then the scan operation will return N rows from start of table till the end row. If the end row is reached before N rows (say M and M < N), then M rows will be returned to the user. 5. If start row, end row and limit (say N) are specified and N < the number of rows between start row and end row, then N rows from start row will be returned to the user. If N > the number of rows between start row and end row (say M), then M rows will be returned to the user.
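Since the proposal is that all scan state travels as query parameters, a stateless scan request is just a URL. The sketch below assembles the parameters listed above into one; the endpoint path `/mytable/scanner` is an assumption for illustration, not the resource path defined by the patch.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: build a stateless-scan URL from the query parameters described in
// the issue (startrow, endrow, limit, ...). Because every parameter is in the
// request itself, any REST server can serve it; no server-side cursor exists
// to lose on failure.
public class ScanUrl {
    static String build(String base, Map<String, String> params) {
        return base + "?" + params.entrySet().stream()
            .map(e -> e.getKey() + "=" + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
            .collect(Collectors.joining("&"));
    }

    public static void main(String[] args) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("startrow", "aaa");
        p.put("endrow", "zzz");
        p.put("limit", "100");
        System.out.println(build("/mytable/scanner", p));
    }
}
```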
[jira] [Comment Edited] (HBASE-9343) Implement stateless scanner for Stargate
[ https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751170#comment-13751170 ] Andrew Purtell edited comment on HBASE-9343 at 8/27/13 10:40 AM: - bq. The current scanner implementation for scanner stores state and hence not very suitable for REST server failure scenarios That's not quite how I would describe it. The current scanner implementation expects clients to restart scans if there is a REST server failure in the midst. The tradeoff is a pretty close semantic mapping - though definitely not RESTful - to the client API on the one hand, and loss of the cursor upon process failure on the other. Sure, that can be problematic. Why introduce new resources and a new model of scanning? Most of what you are trying to do can be done with Gets. Extend the existing resources for that. Do we need ProtobufStreamingUtil if REST already has internally a Generator API for iterating over results returned by scanners? Did you partially reimplement that here? What about XML or JSON? I am -0 on the changes as is. was (Author: apurtell): bq. The current scanner implementation for scanner stores state and hence not very suitable for REST server failure scenarios That's not quite how I would describe it. The current scanner implementation expects clients to restart scans if there is a REST server failure in the midst. The tradeoff is a pretty close semantic mapping - though definitely not RESTful - to the client API on the one hand, and loss of the cursor upon process failure on the other. Sure, that can be problematic. Why introduce new resources and a new model of scanning? Most of what you are trying to do can be done with Gets. Extend the existing resources for that. Do we need ProtobufStreamingUtil if REST already has internally a Generator API for iterating over results returned by scanners? Did you partially reimplement that here? 
Implement stateless scanner for Stargate Key: HBASE-9343 URL: https://issues.apache.org/jira/browse/HBASE-9343 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9343_94.00.patch The current scanner implementation for scanner stores state and hence not very suitable for REST server failure scenarios. The current JIRA proposes to implement a stateless scanner. In the first version of the patch, a new resource class ScanResource has been added and all the scan parameters will be specified as query params. The following are the scan parameters startrow - The start row for the scan. endrow - The end row for the scan. columns - The columns to scan. starttime, endtime - To only retrieve columns within a specific range of version timestamps, both start and end time must be specified. maxversions - To limit the number of versions of each column to be returned. batchsize - To limit the maximum number of values returned for each call to next(). limit - The number of rows to return in the scan operation. More on start row, end row and limit parameters. 1. If start row, end row and limit not specified, then the whole table will be scanned. 2. If start row and limit (say N) is specified, then the scan operation will return N rows from the start row specified. 3. If only limit parameter is specified, then the scan operation will return N rows from the start of the table. 4. If limit and end row are specified, then the scan operation will return N rows from start of table till the end row. If the end row is reached before N rows (say M and M < N), then M rows will be returned to the user. 5. If start row, end row and limit (say N) are specified and N < the number of rows between start row and end row, then N rows from start row will be returned to the user.
If N > the number of rows between start row and end row (say M), then M rows will be returned to the user.
[jira] [Commented] (HBASE-9343) Implement stateless scanner for Stargate
[ https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751174#comment-13751174 ] Andrew Purtell commented on HBASE-9343: --- I considered briefly once keeping the REST scan cursor state in ZooKeeper for transparent failover of scans upon REST process failure. This would not have the same scalability as native scanners on account of ZooKeeper operation throughput limits but could surely support on the order of 100s of concurrent scanners open on a REST farm. Clients that need scanner failover would have it without API changes, though they would need to handle possible HTTP redirects. Expectation would be the majority of clients could live with loss of the cursor upon REST process failure though. No need to do it this way, just providing a historical note. Implement stateless scanner for Stargate Key: HBASE-9343 URL: https://issues.apache.org/jira/browse/HBASE-9343 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9343_94.00.patch The current scanner implementation for scanner stores state and hence not very suitable for REST server failure scenarios. The current JIRA proposes to implement a stateless scanner. In the first version of the patch, a new resource class ScanResource has been added and all the scan parameters will be specified as query params. The following are the scan parameters startrow - The start row for the scan. endrow - The end row for the scan. columns - The columns to scan. starttime, endtime - To only retrieve columns within a specific range of version timestamps,both start and end time must be specified. maxversions - To limit the number of versions of each column to be returned. batchsize - To limit the maximum number of values returned for each call to next(). limit - The number of rows to return in the scan operation. 
More on start row, end row and limit parameters. 1. If start row, end row and limit not specified, then the whole table will be scanned. 2. If start row and limit (say N) is specified, then the scan operation will return N rows from the start row specified. 3. If only limit parameter is specified, then the scan operation will return N rows from the start of the table. 4. If limit and end row are specified, then the scan operation will return N rows from start of table till the end row. If the end row is reached before N rows (say M and M < N), then M rows will be returned to the user. 5. If start row, end row and limit (say N) are specified and N < the number of rows between start row and end row, then N rows from start row will be returned to the user. If N > the number of rows between start row and end row (say M), then M rows will be returned to the user.
[jira] [Commented] (HBASE-4811) Support reverse Scan
[ https://issues.apache.org/jira/browse/HBASE-4811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751189#comment-13751189 ] Andrew Purtell commented on HBASE-4811: --- There was some discussion on including this in 0.98 on the dev@ list. Seems in a reasonable state already, but the changes will need to be rebased after HBASE-9245 and subtasks. Support reverse Scan Key: HBASE-4811 URL: https://issues.apache.org/jira/browse/HBASE-4811 Project: HBase Issue Type: New Feature Components: Client Affects Versions: 0.20.6, 0.94.7 Reporter: John Carrino Assignee: chunhui shen Fix For: 0.98.0 Attachments: 4811-0.94-v3.txt, 4811-trunk-v10.txt, 4811-trunk-v5.patch, HBase-4811-0.94.3modified.txt, HBase-4811-0.94-v2.txt, hbase-4811-trunkv11.patch, hbase-4811-trunkv12.patch, hbase-4811-trunkv13.patch, hbase-4811-trunkv14.patch, hbase-4811-trunkv15.patch, hbase-4811-trunkv16.patch, hbase-4811-trunkv17.patch, hbase-4811-trunkv18.patch, hbase-4811-trunkv1.patch, hbase-4811-trunkv4.patch, hbase-4811-trunkv6.patch, hbase-4811-trunkv7.patch, hbase-4811-trunkv8.patch, hbase-4811-trunkv9.patch Reversed scan means scan the rows backward, with StartRow bigger than StopRow in a reversed scan. For example, for the following rows: aaa/c1:q1/value1 aaa/c1:q2/value2 bbb/c1:q1/value1 bbb/c1:q2/value2 ccc/c1:q1/value1 ccc/c1:q2/value2 ddd/c1:q1/value1 ddd/c1:q2/value2 eee/c1:q1/value1 eee/c1:q2/value2 you could do a reversed scan from 'ddd' to 'bbb' (exclusive) like this: Scan scan = new Scan(); scan.setStartRow(Bytes.toBytes("ddd")); scan.setStopRow(Bytes.toBytes("bbb")); scan.setReversed(true); for (Result result : htable.getScanner(scan)) { System.out.println(result); } Also you could do the reversed scan with the shell like this: hbase> scan 'table', {REVERSED => true, STARTROW => 'ddd', STOPROW => 'bbb'} And the output is: ddd/c1:q1/value1 ddd/c1:q2/value2 ccc/c1:q1/value1 ccc/c1:q2/value2 NOTE: when setting reversed as true for a client scan, you must set the start row, else an exception will be thrown.
Through {@link Scan#createBiggestByteArray(int)}, you could get a big enough byte array as the start row. All the documentation I find about HBase says that if you want forward and reverse scans you should just build 2 tables, one ascending and one descending. Is there a fundamental reason that HBase only supports forward Scan? It seems like a lot of extra space overhead and coding overhead (to keep them in sync) to support 2 tables. I am assuming this has been discussed before, but I can't find the discussions anywhere about it or why it would be infeasible.
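The inclusive-start/exclusive-stop semantics of the reverse scan in the example above can be sketched without a cluster using a plain sorted map; this is an illustration of the ordering semantics, not the HBase client API.

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Sketch: reverse-scan semantics from the HBASE-4811 example, modeled on a
// TreeMap. Start row 'ddd' is inclusive, stop row 'bbb' is exclusive, rows
// come back in descending order.
public class ReverseScanDemo {
    public static void main(String[] args) {
        NavigableMap<String, String> rows = new TreeMap<>();
        for (String r : new String[]{"aaa", "bbb", "ccc", "ddd", "eee"}) {
            rows.put(r, "value1");
        }
        // descendingMap() flips the iteration order; in that order "ddd"
        // precedes "bbb", so it is the sub-map's from-key.
        for (String row : rows.descendingMap().subMap("ddd", true, "bbb", false).keySet()) {
            System.out.println(row);
        }
    }
}
```

This mirrors the shell output in the description: 'ddd' and 'ccc' are returned, 'bbb' is excluded.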
[jira] [Commented] (HBASE-9347) Support for adding filters for client requests
[ https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751193#comment-13751193 ] Andrew Purtell commented on HBASE-9347: --- Looks reasonable. Although, we are starting to dup what is normally done with servlet container configuration and WAR manifests. Should we bring back the WAR packaging target? Support for adding filters for client requests -- Key: HBASE-9347 URL: https://issues.apache.org/jira/browse/HBASE-9347 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Attachments: HBASE-9347_94.00.patch Currently there is no support for specifying filters for filtering client requests. It will be useful if filters can be configured through hbase configuration.
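Configuring filters "through hbase configuration" typically means listing filter class names in a property and instantiating them reflectively at startup. The sketch below shows that wiring pattern only; the property value format and the `FilterLoader` class are assumptions for illustration, and JDK classes stand in for real servlet filters so the example is self-contained.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: instantiate a comma-separated list of filter class names from a
// configuration value, the way a REST server might wire request filters
// declared in hbase-site.xml instead of a WAR's web.xml.
public class FilterLoader {
    static List<Object> load(String classList) throws Exception {
        List<Object> filters = new ArrayList<>();
        for (String cls : classList.split(",")) {
            if (!cls.trim().isEmpty()) {
                // Each entry must name a class with a public no-arg constructor.
                filters.add(Class.forName(cls.trim()).getDeclaredConstructor().newInstance());
            }
        }
        return filters;
    }

    public static void main(String[] args) throws Exception {
        // JDK classes as stand-ins for filter implementations.
        System.out.println(load("java.util.ArrayList, java.util.HashMap").size());
    }
}
```

This reflective wiring is what duplicates the servlet container's own filter configuration, which is the overlap Purtell's comment points out.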
[jira] [Commented] (HBASE-9350) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException
[ https://issues.apache.org/jira/browse/HBASE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751196#comment-13751196 ] Hadoop QA commented on HBASE-9350: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600123/MoveRegionsOfTableAction-v2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6927//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6927//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6927//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6927//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6927//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6927//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6927//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6927//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6927//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6927//console This message is automatically generated. In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException -- Key: HBASE-9350 URL: https://issues.apache.org/jira/browse/HBASE-9350 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Labels: test Fix For: 0.94.0 Attachments: MoveRegionsOfTableAction.java.patch, MoveRegionsOfTableAction-v2.patch The first parameter in HBaseAdmin.move(final byte [] encodedRegionName, final byte [] destServerName) should be encoded. Otherwise, it could throw UnknowRegionException and result in failure of this action. 
{code} encodedRegionName The encoded region name; i.e. the hash that makes up the region name suffix: e.g. if regionname is TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. {code}
[jira] [Updated] (HBASE-9138) the name of function getHaseIntegrationTestingUtility() is a misspelling
[ https://issues.apache.org/jira/browse/HBASE-9138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] chendihao updated HBASE-9138: - Attachment: ChaosMonkey-v3.patch update patch for trunk the name of function getHaseIntegrationTestingUtility() is a misspelling Key: HBASE-9138 URL: https://issues.apache.org/jira/browse/HBASE-9138 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Priority: Trivial Fix For: 0.94.0 Attachments: ChaosMonkey.java.patch, ChaosMonkey-v2.patch, ChaosMonkey-v3.patch The function getHaseIntegrationTestingUtility() in ChaosMonkey.java should be getHBaseIntegrationTestingUtility(), just a spelling mistake. {code} /** * Context for Action's */ public static class ActionContext { private IntegrationTestingUtility util; public ActionContext(IntegrationTestingUtility util) { this.util = util; } public IntegrationTestingUtility getHaseIntegrationTestingUtility() { return util; } public HBaseCluster getHBaseCluster() { return util.getHBaseClusterInterface(); } } {code}
[jira] [Commented] (HBASE-9138) the name of function getHaseIntegrationTestingUtility() is a misspelling
[ https://issues.apache.org/jira/browse/HBASE-9138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751231#comment-13751231 ] chendihao commented on HBASE-9138: -- [~hadoopqa] Please try again ^^ the name of function getHaseIntegrationTestingUtility() is a misspelling Key: HBASE-9138 URL: https://issues.apache.org/jira/browse/HBASE-9138 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Priority: Trivial Fix For: 0.94.0 Attachments: ChaosMonkey.java.patch, ChaosMonkey-v2.patch, ChaosMonkey-v3.patch The function getHaseIntegrationTestingUtility() in ChaosMonkey.java should be getHBaseIntegrationTestingUtility(), just a spelling mistake. {code} /** * Context for Action's */ public static class ActionContext { private IntegrationTestingUtility util; public ActionContext(IntegrationTestingUtility util) { this.util = util; } public IntegrationTestingUtility getHaseIntegrationTestingUtility() { return util; } public HBaseCluster getHBaseCluster() { return util.getHBaseClusterInterface(); } } {code}
[jira] [Updated] (HBASE-8112) Deprecate HTable#batch(final List&lt;? extends Row&gt;)
[ https://issues.apache.org/jira/browse/HBASE-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-8112: --- Status: Open (was: Patch Available) Deprecate HTable#batch(final List&lt;? extends Row&gt;) - Key: HBASE-8112 URL: https://issues.apache.org/jira/browse/HBASE-8112 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Jean-Marc Spaggiari Priority: Minor Attachments: HBASE-8112-v0-trunk.patch, HBASE-8112-v1-trunk.patch This was brought up by Amit's inquiry on the mailing list, entitled 'Batch returned value and exception handling'. Here is his sample code: {code} Object[] res = null; try { res = table.batch(batch); } catch (RetriesExhaustedWithDetailsException retriesExhaustedWithDetailsException) { retriesExhaustedWithDetailsException.printStackTrace(); } if (res == null) { System.out.println("No results - returned null."); } {code} When RetriesExhaustedWithDetailsException was thrown from the batch() call, the variable res carried a value of null, meaning the user wouldn't get partial results along with the exception. We should deprecate {code}HTable#batch(final List&lt;? extends Row&gt;){code} and refer to the following method: void batch(final List&lt;? extends Row&gt; actions, final Object[] results) throws IOException, InterruptedException;
[jira] [Updated] (HBASE-8112) Deprecate HTable#batch(final List&lt;? extends Row&gt;)
[ https://issues.apache.org/jira/browse/HBASE-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-8112: --- Attachment: HBASE-8112-v1-trunk.patch Deprecate HTable#batch(final List&lt;? extends Row&gt;) - Key: HBASE-8112 URL: https://issues.apache.org/jira/browse/HBASE-8112 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Jean-Marc Spaggiari Priority: Minor Attachments: HBASE-8112-v0-trunk.patch, HBASE-8112-v1-trunk.patch This was brought up by Amit's inquiry on the mailing list, entitled 'Batch returned value and exception handling'. Here is his sample code: {code} Object[] res = null; try { res = table.batch(batch); } catch (RetriesExhaustedWithDetailsException retriesExhaustedWithDetailsException) { retriesExhaustedWithDetailsException.printStackTrace(); } if (res == null) { System.out.println("No results - returned null."); } {code} When RetriesExhaustedWithDetailsException was thrown from the batch() call, the variable res carried a value of null, meaning the user wouldn't get partial results along with the exception. We should deprecate {code}HTable#batch(final List&lt;? extends Row&gt;){code} and refer to the following method: void batch(final List&lt;? extends Row&gt; actions, final Object[] results) throws IOException, InterruptedException;
[jira] [Updated] (HBASE-8112) Deprecate HTable#batch(final List<? extends Row>)
[ https://issues.apache.org/jira/browse/HBASE-8112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Marc Spaggiari updated HBASE-8112: --- Status: Patch Available (was: Open) {quote} + * partially executed results. Use {@link #batchCallback(List, Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)} instead. Can you wrap long line? {quote} This doesn't fit in a single line :( * {@link #batchCallback(List, Object[], org.apache.hadoop.hbase.client.coprocessor.Batch.Callback)} is longer than 100 characters. Attached is (I think) the best we can do. Deprecate HTable#batch(final List<? extends Row>) - Key: HBASE-8112 URL: https://issues.apache.org/jira/browse/HBASE-8112 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Jean-Marc Spaggiari Priority: Minor Attachments: HBASE-8112-v0-trunk.patch, HBASE-8112-v1-trunk.patch This was brought up by Amit's inquiry on the mailing list, entitled 'Batch returned value and exception handling'. Here is his sample code:
{code}
Object[] res = null;
try {
  res = table.batch(batch);
} catch (RetriesExhaustedWithDetailsException retriesExhaustedWithDetailsException) {
  retriesExhaustedWithDetailsException.printStackTrace();
}
if (res == null) {
  System.out.println("No results - returned null.");
}
{code}
When RetriesExhaustedWithDetailsException was thrown from the batch() call, variable res carried a value of null, meaning the user wouldn't get partial results along with the exception. We should deprecate {code}HTable#batch(final List<? extends Row>){code} and refer to the following method: void batch(final List<? extends Row> actions, final Object[] results) throws IOException, InterruptedException; -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
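The contract that motivates the deprecation can be sketched without a cluster: with the two-argument form, the caller-supplied results array survives the exception, so partial results are still visible. `mockBatch` below is an illustrative stand-in for HTable#batch(List, Object[]), not the real API.

```java
import java.util.Arrays;
import java.util.List;

// Stand-in for HTable#batch(List, Object[]): fills `results` as actions
// complete, then throws once a failure is hit, so the caller keeps the
// results accumulated so far. `mockBatch` and the "bad" failure marker
// are assumptions for illustration only.
public class BatchPartialResults {
    static void mockBatch(List<String> actions, Object[] results) throws Exception {
        for (int i = 0; i < actions.size(); i++) {
            if (actions.get(i).equals("bad")) {
                throw new Exception("retries exhausted at action " + i);
            }
            results[i] = "ok:" + actions.get(i);
        }
    }

    public static void main(String[] args) {
        List<String> actions = Arrays.asList("a", "b", "bad", "c");
        Object[] results = new Object[actions.size()];
        try {
            mockBatch(actions, results);
        } catch (Exception e) {
            // With the deprecated Object[] batch(List) form, the caller's
            // reference would still be null here; with the caller-supplied
            // array, the results for "a" and "b" survive the exception.
            System.out.println("partial: " + Arrays.toString(results));
        }
    }
}
```

This is exactly Amit's scenario from the description: the exception no longer erases what already succeeded.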
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13751351#comment-13751351 ] stack commented on HBASE-9230: -- [~ram_krish] no. just allows no codec so we do pb all the time (we expected clients to support cellblocks -- they shouldn't have to...) Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server is currently. That is an oversight. Let's fix it so clients can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up the release but we should get it in to help the asynchbase conversion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9230: - Attachment: 9230v2.txt Rebase Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt, 9230v2.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server is currently. That is an oversight. Let's fix it so clients can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up the release but we should get it in to help the asynchbase conversion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13751357#comment-13751357 ] Hadoop QA commented on HBASE-9230: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600182/9230v2.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 17 new or modified tests. {color:red}-1 hadoop1.0{color}. The patch failed to compile against the hadoop 1.0 profile. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6930//console This message is automatically generated. Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt, 9230v2.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server is currently. That is an oversight. Let's fix it so clients can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up the release but we should get it in to help the asynchbase conversion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9230: - Release Note: If the client does not specify a codec, the server will not respond using cellblocks; instead it will respond with a pure protobuf message. This is slower but easier for clients to make sense of. It should make version one of a client implementation easier to do. To make the hbase client do non-cellblocking communication, set hbase.client.default.rpc.codec to the empty string and do not set hbase.client.rpc.codec. Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt, 9230v2.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server is currently. That is an oversight. Let's fix it so clients can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up the release but we should get it in to help the asynchbase conversion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
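Following the release note above, the client-side configuration for pure-protobuf responses would look roughly like this; the property name comes from the note itself, and the surrounding file layout is the usual hbase-site.xml convention.

```xml
<!-- hbase-site.xml (client side): force pure-protobuf responses.
     Per the release note: set the default codec to the empty string
     and leave hbase.client.rpc.codec unset. -->
<property>
  <name>hbase.client.default.rpc.codec</name>
  <value></value>
</property>
```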
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13751360#comment-13751360 ] Hadoop QA commented on HBASE-9230: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600182/9230v2.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 17 new or modified tests. {color:red}-1 hadoop1.0{color}. The patch failed to compile against the hadoop 1.0 profile. Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6931//console This message is automatically generated. Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt, 9230v2.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server is currently. That is an oversight. Let's fix it so clients can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up the release but we should get it in to help the asynchbase conversion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9348) TerminatedWrapper error decoding, skipping skippable types
[ https://issues.apache.org/jira/browse/HBASE-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751366#comment-13751366 ] Nicolas Liochon commented on HBASE-9348: +1 TerminatedWrapper error decoding, skipping skippable types -- Key: HBASE-9348 URL: https://issues.apache.org/jira/browse/HBASE-9348 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.98.0, 0.96.0 Attachments: 0001-HBASE-9348-TerminatedWrapper-skippable-types-bug.patch When {{TerminatedWrapper}} wraps a type which {{isSkippable}}, it does not consider the terminator when updating the source buffer position after skipping or decoding a value. The tests only covered the non-skippable case. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9348) TerminatedWrapper error decoding, skipping skippable types
[ https://issues.apache.org/jira/browse/HBASE-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751386#comment-13751386 ] Ted Yu commented on HBASE-9348: --- Integrated to 0.95 and trunk. Thanks for the patch, Nick. Thanks for the review, Nicolas. TerminatedWrapper error decoding, skipping skippable types -- Key: HBASE-9348 URL: https://issues.apache.org/jira/browse/HBASE-9348 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.98.0, 0.96.0 Attachments: 0001-HBASE-9348-TerminatedWrapper-skippable-types-bug.patch When {{TerminatedWrapper}} wraps a type which {{isSkippable}}, it does not consider the terminator when updating the source buffer position after skipping or decoding a value. The tests only covered the non-skippable case. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9315) TestLruBlockCache.testBackgroundEvictionThread fails on suse
[ https://issues.apache.org/jira/browse/HBASE-9315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9315: Fix Version/s: 0.96.0 0.98.0 TestLruBlockCache.testBackgroundEvictionThread fails on suse Key: HBASE-9315 URL: https://issues.apache.org/jira/browse/HBASE-9315 Project: HBase Issue Type: Test Affects Versions: 0.95.2 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.98.0, 0.96.0 Attachments: 0001-HBASE-9315-improve-test-stability-of-TestLruBlockCac.patch One of our build machines is consistently having trouble with this test. {noformat} Error Message expected:2 but was:1 Stacktrace java.lang.AssertionError: expected:2 but was:1 at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.junit.Assert.assertEquals(Assert.java:542) at org.apache.hadoop.hbase.io.hfile.TestLruBlockCache.testBackgroundEvictionThread(TestLruBlockCache.java:85) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.junit.runners.Suite.runChild(Suite.java:127) at org.junit.runners.Suite.runChild(Suite.java:26) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Standard Output Background Evictions run: 2 Standard Error 2013-08-22 11:02:58,331 INFO [pool-1-thread-1] hbase.ResourceChecker(147): before: io.hfile.TestLruBlockCache#testBackgroundEvictionThread Thread=35, OpenFileDescriptor=277, MaxFileDescriptor=95000, SystemLoadAverage=119, ProcessCount=75, AvailableMemoryMB=8884, ConnectionCount=1 2013-08-22 11:02:58,338 INFO [pool-1-thread-1] hbase.ResourceChecker(171): after: io.hfile.TestLruBlockCache#testBackgroundEvictionThread Thread=36 (was 35) - Thread LEAK? -, OpenFileDescriptor=279 (was 277) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=95000 (was 95000), SystemLoadAverage=119 (was 119), ProcessCount=75 (was 75), AvailableMemoryMB=8884 (was 8884), ConnectionCount=1 (was 1) 2013-08-22 11:07:58,331 DEBUG [LRU Statistics #0] hfile.LruBlockCache(728): Stats: total=87.01 KB, free=10.65 KB, max=97.66 KB, blocks=8, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=2, evicted=2, evictedPerRun=1.0 2013-08-22 11:12:58,331 DEBUG [LRU Statistics #0] hfile.LruBlockCache(728): Stats: total=87.01 KB, free=10.65 KB, max=97.66 KB, blocks=8, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=2, evicted=2, evictedPerRun=1.0 2013-08-22 11:17:58,331 DEBUG [LRU Statistics #0] hfile.LruBlockCache(728): Stats: total=87.01 KB, free=10.65 KB, max=97.66 KB, blocks=8, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=2, evicted=2, evictedPerRun=1.0 {noformat} -- This message is
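The flaky assertion in HBASE-9315 checks a background eviction count after a fixed delay. A common way to de-flake such timing-dependent tests is to poll for the condition with a deadline instead of asserting once; the sketch below is a generic illustration under that assumption, not the patch attached to the issue.

```java
import java.util.concurrent.atomic.AtomicLong;

// Poll-until-condition helper: waits for a counter (e.g. an eviction
// count) to reach an expected value instead of asserting after a fixed
// sleep, which is what makes timing tests flaky on slow machines.
public class WaitFor {
    // Returns true once `counter` reaches `expected`, or false after
    // `timeoutMs` elapses without it getting there.
    static boolean waitForCount(AtomicLong counter, long expected, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (counter.get() >= expected) {
                return true;
            }
            Thread.sleep(10);  // short poll interval keeps the test fast when it passes
        }
        return counter.get() >= expected;
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicLong evictions = new AtomicLong();
        // Simulate a background eviction thread bumping the counter.
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            evictions.incrementAndGet();
            evictions.incrementAndGet();
        }).start();
        System.out.println(waitForCount(evictions, 2, 2000));  // true: counter reaches 2 well inside the deadline
    }
}
```

With a generous deadline the test passes quickly on fast machines and still tolerates a loaded build host like the suse box above.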
[jira] [Created] (HBASE-9353) HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta()
Ted Yu created HBASE-9353: - Summary: HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta() Key: HBASE-9353 URL: https://issues.apache.org/jira/browse/HBASE-9353 Project: HBase Issue Type: Bug Reporter: Ted Yu Here is related code:
{code}
public static void addRegionToMeta(CatalogTracker catalogTracker, HRegionInfo regionInfo,
    HRegionInfo splitA, HRegionInfo splitB) throws IOException {
  addRegionToMeta(MetaReader.getMetaHTable(catalogTracker), regionInfo, splitA, splitB);
}
{code}
HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta() -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
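The leak pattern reported in HBASE-9353 and its usual fix can be modeled without HBase: a handle obtained inside the method must be released in a finally block. `FakeTable` is a hypothetical stand-in for the HTable returned by MetaReader#getMetaHTable(); the shape of the fix, not its exact code, is what the sketch shows.

```java
import java.io.Closeable;

// Sketch of the fix: a handle created inside the method is closed in a
// finally block, so it is released even if the meta edit throws.
public class CloseHandle {
    static class FakeTable implements Closeable {
        boolean closed = false;
        void put() { /* stands in for the write to meta */ }
        @Override public void close() { closed = true; }
    }

    static FakeTable addRegionToMeta() {
        FakeTable table = new FakeTable();
        try {
            table.put();   // the actual meta edit
        } finally {
            table.close(); // this is the call the reported code is missing
        }
        return table;
    }

    public static void main(String[] args) {
        System.out.println(addRegionToMeta().closed);  // true: handle released on every path
    }
}
```

In Java 7+ code the same intent reads as try-with-resources: `try (FakeTable table = ...) { table.put(); }`.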
[jira] [Updated] (HBASE-9315) TestLruBlockCache.testBackgroundEvictionThread fails on suse
[ https://issues.apache.org/jira/browse/HBASE-9315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9315: - Resolution: Fixed Status: Resolved (was: Patch Available) Committed to 0.95 and trunk. Thanks N. TestLruBlockCache.testBackgroundEvictionThread fails on suse Key: HBASE-9315 URL: https://issues.apache.org/jira/browse/HBASE-9315 Project: HBase Issue Type: Test Affects Versions: 0.95.2 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.98.0, 0.96.0 Attachments: 0001-HBASE-9315-improve-test-stability-of-TestLruBlockCac.patch One of our build machines is consistently having trouble with this test. {noformat} Error Message expected:2 but was:1 Stacktrace java.lang.AssertionError: expected:2 but was:1 at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.junit.Assert.assertEquals(Assert.java:542) at org.apache.hadoop.hbase.io.hfile.TestLruBlockCache.testBackgroundEvictionThread(TestLruBlockCache.java:85) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.junit.runners.Suite.runChild(Suite.java:127) at org.junit.runners.Suite.runChild(Suite.java:26) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Standard Output Background Evictions run: 2 Standard Error 2013-08-22 11:02:58,331 INFO [pool-1-thread-1] hbase.ResourceChecker(147): before: io.hfile.TestLruBlockCache#testBackgroundEvictionThread Thread=35, OpenFileDescriptor=277, MaxFileDescriptor=95000, SystemLoadAverage=119, ProcessCount=75, AvailableMemoryMB=8884, ConnectionCount=1 2013-08-22 11:02:58,338 INFO [pool-1-thread-1] hbase.ResourceChecker(171): after: io.hfile.TestLruBlockCache#testBackgroundEvictionThread Thread=36 (was 35) - Thread LEAK? -, OpenFileDescriptor=279 (was 277) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=95000 (was 95000), SystemLoadAverage=119 (was 119), ProcessCount=75 (was 75), AvailableMemoryMB=8884 (was 8884), ConnectionCount=1 (was 1) 2013-08-22 11:07:58,331 DEBUG [LRU Statistics #0] hfile.LruBlockCache(728): Stats: total=87.01 KB, free=10.65 KB, max=97.66 KB, blocks=8, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=2, evicted=2, evictedPerRun=1.0 2013-08-22 11:12:58,331 DEBUG [LRU Statistics #0] hfile.LruBlockCache(728): Stats: total=87.01 KB, free=10.65 KB, max=97.66 KB, blocks=8, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=2, evicted=2, evictedPerRun=1.0 2013-08-22 11:17:58,331 DEBUG [LRU Statistics #0] hfile.LruBlockCache(728): Stats: total=87.01 KB, free=10.65 KB, max=97.66 KB, blocks=8, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=2,
[jira] [Updated] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9230: - Attachment: 9230v3.txt Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt, 9230v2.txt, 9230v3.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server is currently. That is an oversight. Let's fix it so clients can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up the release but we should get it in to help the asynchbase conversion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9314) Dropping a table always prints a TableInfoMissingException in the master log
[ https://issues.apache.org/jira/browse/HBASE-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Daniel Cryans updated HBASE-9314: -- Affects Version/s: 0.94.10 This also started showing up in 0.94.10, so let's check the list of changes between .9 and .10 to see what added more printing. Dropping a table always prints a TableInfoMissingException in the master log Key: HBASE-9314 URL: https://issues.apache.org/jira/browse/HBASE-9314 Project: HBase Issue Type: Improvement Affects Versions: 0.95.2, 0.94.10 Reporter: Jean-Daniel Cryans Priority: Minor Fix For: 0.98.0, 0.96.0 Every time I drop a table I get the same stack trace in the master's log: {noformat} 2013-08-22 23:11:31,939 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Table 't' archived! 2013-08-22 23:11:31,939 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Removing 't' descriptor. 2013-08-22 23:11:31,940 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Marking 't' as deleted. 2013-08-22 23:11:31,944 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.zookeeper.lock.ZKInterProcessLockBase: Released /hbase/table-lock/t/write-master:602 2013-08-22 23:11:32,024 DEBUG [RpcServer.handler=0,port=6] org.apache.hadoop.hbase.util.FSTableDescriptors: Exception during readTableDecriptor. 
Current table name = t org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file under hdfs://jdec2hbase0403-1.vpc.cloudera.com:9000/hbase/data/default/t at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorAndModtime(FSTableDescriptors.java:503) at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorAndModtime(FSTableDescriptors.java:496) at org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:170) at org.apache.hadoop.hbase.master.HMaster.getTableDescriptors(HMaster.java:2629) at org.apache.hadoop.hbase.protobuf.generated.MasterMonitorProtos$MasterMonitorService$2.callBlockingMethod(MasterMonitorProtos.java:4634) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2156) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1861) 2013-08-22 23:11:32,024 WARN [RpcServer.handler=0,port=6] org.apache.hadoop.hbase.util.FSTableDescriptors: The following folder is in HBase's root directory and doesn't contain a table descriptor, do consider deleting it: t {noformat} But the operation completes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9350) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException
[ https://issues.apache.org/jira/browse/HBASE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9350: - Resolution: Fixed Fix Version/s: (was: 0.94.0) 0.95.0 0.98.0 Status: Resolved (was: Patch Available) Committed to 0.95 and to trunk. Thanks for the patch, Chendihao. In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException -- Key: HBASE-9350 URL: https://issues.apache.org/jira/browse/HBASE-9350 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Labels: test Fix For: 0.98.0, 0.95.0 Attachments: MoveRegionsOfTableAction.java.patch, MoveRegionsOfTableAction-v2.patch The first parameter in HBaseAdmin.move(final byte [] encodedRegionName, final byte [] destServerName) should be encoded. Otherwise, it could throw UnknownRegionException and result in failure of this action.
{code}
encodedRegionName The encoded region name; i.e. the hash that makes up the region name suffix:
e.g. if the region name is TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396.,
then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396.
{code}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
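The encoded/full region name distinction behind HBASE-9350 is easy to see from the format quoted in the description: the encoded name is the hash between the final two dots of the full region name. In HBase itself HRegionInfo#getEncodedName() does this for you; the standalone parse below just illustrates the format using the example from the issue.

```java
public class EncodedRegionName {
    // Extracts the hash suffix (the encoded name) from a full region name
    // of the form "table,startkey,timestamp.<hash>." -- illustration only;
    // real code should use HRegionInfo#getEncodedName().
    static String encodedName(String fullRegionName) {
        int end = fullRegionName.length() - 1;            // index of the trailing '.'
        int start = fullRegionName.lastIndexOf('.', end - 1) + 1;
        return fullRegionName.substring(start, end);
    }

    public static void main(String[] args) {
        String full = "TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396.";
        // HBaseAdmin#move() expects this encoded form, not `full`;
        // passing the full name is what triggered UnknownRegionException.
        System.out.println(encodedName(full));  // 527db22f95c8a9e0116f0cc13c680396
    }
}
```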
[jira] [Commented] (HBASE-9116) Add a view/edit tool for favored node mappings for regions
[ https://issues.apache.org/jira/browse/HBASE-9116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13751434#comment-13751434 ] Nick Dimiduk commented on HBASE-9116: - Overall +1. Everything below is just nits. --- bq. IMHO this is not going to add that much value. Do you mind if I look at this in a follow up. No problem. I don't think AbstractHBaseTool is used very often within the codebase anyway. bq. Fixed the timeout (that was a good catch). The reason the test is clubbed into one is that the test tries out the various favorednode utilities with a single table that it creates in the beginning. I prefer independent tests to run in isolation of each other when at all possible. Not a show-stopper for the patch -- I regard tweaking test timeouts as a kind of code smell.
{noformat}
+  <Match>
+    <!--
+      The logic explicitly checks equality of two floating point numbers. Ignore the warning
+    -->
+    <Class name="org.apache.hadoop.hbase.master.AssignmentVerificationReport"/>
+    <Bug pattern="FE_FLOATING_POINT_EQUALITY"/>
+  </Match>
{noformat}
AssignmentVerificationReport has separate logic blocks for floating point if (A > B) else if (A == B) -- I wonder if this will be a future source of bugs. Add a view/edit tool for favored node mappings for regions -- Key: HBASE-9116 URL: https://issues.apache.org/jira/browse/HBASE-9116 Project: HBase Issue Type: Improvement Components: Region Assignment Affects Versions: 0.95.0 Reporter: Devaraj Das Assignee: Devaraj Das Fix For: 0.96.0 Attachments: 9116-1.txt, 9116-2.txt, 9116-2.txt, 9116-2.txt, 9116-3.txt, 9116-4.txt, 9116-5.txt, 9116-6.txt Add a tool that one can run offline to view the favored node mappings for regions, and also fix the mappings if needed. Such a tool exists in the 0.89-fb branch. Will port it over to trunk/0.95. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
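The FE_FLOATING_POINT_EQUALITY warning Nick mentions flags `==` on floats/doubles. When both sides come from the same computation path, exact equality can be intentional (hence the findbugs exclusion above); the usual defensive alternative is an epsilon comparison, sketched here in isolation from the HBase code.

```java
public class FloatEquality {
    // Epsilon comparison: the standard workaround when two doubles may
    // differ only by binary rounding error.
    static boolean nearlyEqual(double a, double b, double eps) {
        return Math.abs(a - b) <= eps;
    }

    public static void main(String[] args) {
        double a = 0.1 + 0.2;                             // 0.30000000000000004 in binary
        System.out.println(a == 0.3);                     // false: exact equality trips on rounding
        System.out.println(nearlyEqual(a, 0.3, 1e-9));    // true
    }
}
```

The `if (A > B) ... else if (A == B)` split in AssignmentVerificationReport is safe only as long as both values really flow through the same arithmetic, which is what Nick's "future source of bugs" worry is about.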
[jira] [Updated] (HBASE-9336) Two css files raise release audit warning
[ https://issues.apache.org/jira/browse/HBASE-9336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9336: - Resolution: Fixed Fix Version/s: 0.96.0 0.98.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Committed to trunk and 0.95. Thanks for the patch Nick. Two css files raise release audit warning - Key: HBASE-9336 URL: https://issues.apache.org/jira/browse/HBASE-9336 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Nick Dimiduk Fix For: 0.98.0, 0.96.0 Attachments: 0001-HBASE-9336-Add-missing-license-headers-to-css-files.patch From https://builds.apache.org/job/PreCommit-HBASE-Build/6869/artifact/trunk/patchprocess/patchReleaseAuditProblems.txt : {code} !? /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/resources/hbase-webapps/static/css/bootstrap-theme.css !? /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/resources/hbase-webapps/static/css/bootstrap-theme.min.css Lines that start with ? in the release audit report indicate files that do not have an Apache license header. {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7709) Infinite loop possible in Master/Master replication
[ https://issues.apache.org/jira/browse/HBASE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13751447#comment-13751447 ] Ted Yu commented on HBASE-7709: --- +1 Infinite loop possible in Master/Master replication --- Key: HBASE-7709 URL: https://issues.apache.org/jira/browse/HBASE-7709 Project: HBase Issue Type: Bug Components: Replication Affects Versions: 0.94.6, 0.95.1 Reporter: Lars Hofhansl Assignee: Vasu Mariyala Fix For: 0.98.0, 0.94.12, 0.96.0 Attachments: 095-trunk.patch, 0.95-trunk-rev1.patch, 0.95-trunk-rev2.patch, 0.95-trunk-rev3.patch, 0.95-trunk-rev4.patch, HBASE-7709.patch, HBASE-7709-rev1.patch, HBASE-7709-rev2.patch, HBASE-7709-rev3.patch, HBASE-7709-rev4.patch, HBASE-7709-rev5.patch We just discovered the following scenario: # Cluster A and B are set up in master/master replication # By accident we had Cluster C replicate to Cluster A. Now all edits originating from C will be bouncing between A and B. Forever! The reason is that when the edits come in from C the cluster ID is already set and won't be reset. We have a couple of options here: # Optionally only support master/master (not cycles of more than two clusters). In that case we can always reset the cluster ID in the ReplicationSource. That means that cycles > 2 will have the data cycle forever. This is the only option that requires no changes in the HLog format. # Instead of a single cluster id per edit, maintain an (unordered) set of cluster ids that have seen this edit. Then in ReplicationSource we drop any edit that the sink has seen already. This is the cleanest approach, but it might need a lot of data stored per edit if there are many clusters involved. # Maintain a configurable counter of the maximum cycle size we want to support. Could default to 10 (or maybe even lower). Store a hop-count in the WAL and have the ReplicationSource increase that hop-count on each hop. If we're over the max, just drop the edit. 
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
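Option 3 above (a hop-count carried in the WAL) can be sketched in plain Java. This is an illustration only; `HopCountedEdit`, `shouldShip`, and `nextHop` are hypothetical names, not HBase's actual replication classes:

```java
// Sketch of option 3: drop replicated edits whose hop count exceeds a
// configurable maximum. All names here are hypothetical, for illustration
// only; this is not HBase's actual WAL or ReplicationSource code.
public class HopCountedEdit {
    private final int hopCount; // hops recorded alongside the WAL entry

    public HopCountedEdit(int hopCount) {
        this.hopCount = hopCount;
    }

    // A replication source would call this before shipping the edit:
    // once the edit has made maxHops hops, it is silently dropped,
    // bounding any replication cycle.
    public static boolean shouldShip(HopCountedEdit edit, int maxHops) {
        return edit.hopCount < maxHops;
    }

    // Each hop re-writes the entry with an incremented count.
    public HopCountedEdit nextHop() {
        return new HopCountedEdit(hopCount + 1);
    }

    public int getHopCount() {
        return hopCount;
    }

    public static void main(String[] args) {
        HopCountedEdit edit = new HopCountedEdit(0);
        int maxHops = 10;
        // Simulate an edit bouncing between clusters until it is dropped.
        int hops = 0;
        while (shouldShip(edit, maxHops)) {
            edit = edit.nextHop();
            hops++;
        }
        System.out.println("dropped after " + hops + " hops");
    }
}
```

The point of the sketch is that the loop terminates regardless of the cluster topology, which is exactly what the unbounded cluster-ID scheme cannot guarantee for cycles of more than two clusters.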
[jira] [Commented] (HBASE-9344) RegionServer not shutting down upon KeeperException in open region
[ https://issues.apache.org/jira/browse/HBASE-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751451#comment-13751451 ] Lars Hofhansl commented on HBASE-9344: -- Yeah, was planning on all branches. A RS in that state is a drag on the cluster. Will commit in a bit. RegionServer not shutting down upon KeeperException in open region -- Key: HBASE-9344 URL: https://issues.apache.org/jira/browse/HBASE-9344 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Fix For: 0.98.0, 0.95.2, 0.94.12 Attachments: 9344-trunk.txt We ran into a situation where, due to a Kerberos configuration problem, one of our region servers could not connect to ZK when opening a region. Instead of shutting down, it continued trying to reconnect. Eventually the master would assign the region to another region server. Each time that region server was assigned a region it would sit there for 5 mins with the region offline. It would have been better if the region server had shut itself down.
This is in the logs: {quote} 2013-08-16 17:31:35,999 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: hconnection-0x2407b842ff2012d-0x2407b842ff2012d-0x2407b842ff2012d Unable to set watcher on znode (/hbase/hbaseid) org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /hbase/hbaseid at org.apache.zookeeper.KeeperException.create(KeeperException.java:123) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:172) at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:450) at org.apache.hadoop.hbase.zookeeper.ClusterId.readClusterIdZNode(ClusterId.java:61) at org.apache.hadoop.hbase.zookeeper.ClusterId.getId(ClusterId.java:50) at org.apache.hadoop.hbase.zookeeper.ClusterId.hasId(ClusterId.java:44) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.ensureZookeeperTrackers(HConnectionManager.java:616) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:882) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:857) at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:233) at org.apache.hadoop.hbase.client.HTable.init(HTable.java:173) at org.apache.hadoop.hbase.catalog.MetaReader.getHTable(MetaReader.java:201) at org.apache.hadoop.hbase.catalog.MetaReader.getMetaHTable(MetaReader.java:227) at org.apache.hadoop.hbase.catalog.MetaReader.getCatalogHTable(MetaReader.java:214) at org.apache.hadoop.hbase.catalog.MetaEditor.putToCatalogTable(MetaEditor.java:91) at org.apache.hadoop.hbase.catalog.MetaEditor.updateLocation(MetaEditor.java:296) at org.apache.hadoop.hbase.catalog.MetaEditor.updateRegionLocation(MetaEditor.java:276) at 
org.apache.hadoop.hbase.regionserver.HRegionServer.postOpenDeployTasks(HRegionServer.java:1828) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler$PostOpenDeployTasksThread.run(OpenRegionHandler.java:240) {quote} I think the RS should shut itself down instead. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9344) RegionServer not shutting down upon KeeperException in open region
[ https://issues.apache.org/jira/browse/HBASE-9344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-9344: - Fix Version/s: 0.95.2 0.94.12 0.98.0 RegionServer not shutting down upon KeeperException in open region -- Key: HBASE-9344 URL: https://issues.apache.org/jira/browse/HBASE-9344 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Fix For: 0.98.0, 0.95.2, 0.94.12 Attachments: 9344-trunk.txt We ran into a situation where due to a Kerberos configuration problem one of our region server could not connect to ZK when opening a region. Instead of shutting down it continue to try to reconnect. Eventually the master would assign the region to another region server. Each time that region server was assigned a region it would sit there for 5 mins with the region offline. It would have been better if the region server had shut itself down. This is in the logs: {quote} 2013-08-16 17:31:35,999 WARN org.apache.hadoop.hbase.zookeeper.ZKUtil: hconnection-0x2407b842ff2012d-0x2407b842ff2012d-0x2407b842ff2012d Unable to set watcher on znode (/hbase/hbaseid) org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /hbase/hbaseid at org.apache.zookeeper.KeeperException.create(KeeperException.java:123) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1041) at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:172) at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:450) at org.apache.hadoop.hbase.zookeeper.ClusterId.readClusterIdZNode(ClusterId.java:61) at org.apache.hadoop.hbase.zookeeper.ClusterId.getId(ClusterId.java:50) at org.apache.hadoop.hbase.zookeeper.ClusterId.hasId(ClusterId.java:44) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.ensureZookeeperTrackers(HConnectionManager.java:616) at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:882) at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:857) at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:233) at org.apache.hadoop.hbase.client.HTable.init(HTable.java:173) at org.apache.hadoop.hbase.catalog.MetaReader.getHTable(MetaReader.java:201) at org.apache.hadoop.hbase.catalog.MetaReader.getMetaHTable(MetaReader.java:227) at org.apache.hadoop.hbase.catalog.MetaReader.getCatalogHTable(MetaReader.java:214) at org.apache.hadoop.hbase.catalog.MetaEditor.putToCatalogTable(MetaEditor.java:91) at org.apache.hadoop.hbase.catalog.MetaEditor.updateLocation(MetaEditor.java:296) at org.apache.hadoop.hbase.catalog.MetaEditor.updateRegionLocation(MetaEditor.java:276) at org.apache.hadoop.hbase.regionserver.HRegionServer.postOpenDeployTasks(HRegionServer.java:1828) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler$PostOpenDeployTasksThread.run(OpenRegionHandler.java:240) {quote} I think the RS should shut itself down instead. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-4811) Support reverse Scan
[ https://issues.apache.org/jira/browse/HBASE-4811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751455#comment-13751455 ] Lars Hofhansl commented on HBASE-4811: -- [~zjushch] bq. The patch for 0.94 version is quite old, should be re-made. You mean it needs to be rebased (it still applies mostly fine), or that you added a lot of stuff to the trunk version that should be in the 0.94 version as well? Support reverse Scan Key: HBASE-4811 URL: https://issues.apache.org/jira/browse/HBASE-4811 Project: HBase Issue Type: New Feature Components: Client Affects Versions: 0.20.6, 0.94.7 Reporter: John Carrino Assignee: chunhui shen Fix For: 0.98.0 Attachments: 4811-0.94-v3.txt, 4811-trunk-v10.txt, 4811-trunk-v5.patch, HBase-4811-0.94.3modified.txt, HBase-4811-0.94-v2.txt, hbase-4811-trunkv11.patch, hbase-4811-trunkv12.patch, hbase-4811-trunkv13.patch, hbase-4811-trunkv14.patch, hbase-4811-trunkv15.patch, hbase-4811-trunkv16.patch, hbase-4811-trunkv17.patch, hbase-4811-trunkv18.patch, hbase-4811-trunkv1.patch, hbase-4811-trunkv4.patch, hbase-4811-trunkv6.patch, hbase-4811-trunkv7.patch, hbase-4811-trunkv8.patch, hbase-4811-trunkv9.patch Reversed scan means scanning the rows backward, with StartRow bigger than StopRow in a reversed scan.
For example, for the following rows:
aaa/c1:q1/value1 aaa/c1:q2/value2
bbb/c1:q1/value1 bbb/c1:q2/value2
ccc/c1:q1/value1 ccc/c1:q2/value2
ddd/c1:q1/value1 ddd/c1:q2/value2
eee/c1:q1/value1 eee/c1:q2/value2
you could do a reversed scan from 'ddd' to 'bbb' (excluded) like this:
{code}
Scan scan = new Scan();
scan.setStartRow(Bytes.toBytes("ddd"));
scan.setStopRow(Bytes.toBytes("bbb"));
scan.setReversed(true);
for (Result result : htable.getScanner(scan)) {
  System.out.println(result);
}
{code}
Also you could do the reversed scan with the shell like this:
hbase> scan 'table', {REVERSED => true, STARTROW => 'ddd', STOPROW => 'bbb'}
And the output is:
ddd/c1:q1/value1 ddd/c1:q2/value2
ccc/c1:q1/value1 ccc/c1:q2/value2
NOTE: when setting reversed as true for a client scan, you must set the start row, else an exception will be thrown. Through {@link Scan#createBiggestByteArray(int)}, you could get a big enough byte array as the start row. All the documentation I find about HBase says that if you want forward and reverse scans you should just build 2 tables, one ascending and one descending. Is there a fundamental reason that HBase only supports forward Scan? It seems like a lot of extra space overhead and coding overhead (to keep them in sync) to support 2 tables. I am assuming this has been discussed before, but I can't find the discussions anywhere about it or why it would be infeasible. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
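The reversed-scan semantics described above (start row 'ddd' down to stop row 'bbb', with the stop row excluded) can be simulated outside HBase with a plain sorted map. `ReverseScanDemo` is a hypothetical name and none of this touches the HBase API; it only demonstrates the expected row ordering and boundary behavior:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Simulates the reversed scan from the example above with a TreeMap:
// iterate from start row "ddd" down to stop row "bbb", stop row excluded,
// mirroring the start-inclusive / stop-exclusive semantics of a reversed
// Scan. Illustration only; not HBase code.
public class ReverseScanDemo {
    public static NavigableMap<String, String> reverseScan(
            NavigableMap<String, String> rows, String startRow, String stopRow) {
        // Take a descending view, then keep keys in (stopRow, startRow]:
        // on the descending view, headMap(stopRow, false) drops stopRow and
        // everything below it, tailMap(startRow, true) starts at startRow.
        return rows.descendingMap().headMap(stopRow, false).tailMap(startRow, true);
    }

    public static void main(String[] args) {
        NavigableMap<String, String> rows = new TreeMap<>();
        for (String key : new String[] {"aaa", "bbb", "ccc", "ddd", "eee"}) {
            rows.put(key, "value1");
        }
        // Prints [ddd, ccc]: same rows, same order as the scan example.
        System.out.println(reverseScan(rows, "ddd", "bbb").keySet());
    }
}
```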
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751467#comment-13751467 ] Hadoop QA commented on HBASE-9230: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600198/9230v3.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 17 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): at org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking(TestRegionObserverScannerOpenHook.java:231) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6932//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6932//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6932//console This message is automatically generated. 
Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt, 9230v2.txt, 9230v3.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server is currently. That is an oversight. Lets fix so can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up release but we should get it in to help the asynchbase convertion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9283) Struct and StructIterator should properly handle trailing nulls
[ https://issues.apache.org/jira/browse/HBASE-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nick Dimiduk updated HBASE-9283: Status: Patch Available (was: Open) Struct and StructIterator should properly handle trailing nulls --- Key: HBASE-9283 URL: https://issues.apache.org/jira/browse/HBASE-9283 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.98.0, 0.96.0 Attachments: 0001-HBASE-9283-Struct-trailing-null-handling.patch, 0001-HBASE-9283-Struct-trailing-null-handling.patch, 0001-HBASE-9283-Struct-trailing-null-handling.patch For a composite row key, Phoenix strips off trailing null columns values in the row key. The reason this is important is that then new nullable row key columns can be added to a schema without requiring any data upgrade to existing rows. Otherwise, adding new row key columns to the end of a schema becomes extremely cumbersome, as you'd need to delete all existing rows and add them back with a row key that includes a null value. Rather than Phoenix needing to modify the iteration code everywhere (as [~ndimiduk] outlined here: https://issues.apache.org/jira/browse/HBASE-8693?focusedCommentId=13744499page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13744499), it'd be better if StructIterator handled this out-of-the-box. Otherwise, if Phoenix has to specialize this, we'd lose the interop piece which is the justification for switching our type system to this new one in the first place. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
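The trailing-null behavior requested above can be illustrated with a self-contained sketch: decode a composite row key against a fixed schema and report null for columns whose bytes were trimmed off the end. The delimiter, class, and method names here are hypothetical and unrelated to the actual Struct/StructIterator encoding:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of out-of-the-box trailing-null handling: a schema of fieldCount
// columns is decoded from a row key that may omit trailing nullable
// columns. Hypothetical names and a toy '|' delimiter, for illustration
// only; the real Struct encoding is binary.
public class TrailingNullDemo {
    public static List<String> decode(String encodedKey, int fieldCount) {
        String[] present = encodedKey.isEmpty()
                ? new String[0]
                : encodedKey.split("\\|", -1);
        // Columns beyond the encoded bytes read as null instead of
        // failing, so rows written before a schema gained new trailing
        // nullable columns need no rewrite.
        String[] out = new String[fieldCount];
        System.arraycopy(present, 0, out, 0, Math.min(present.length, fieldCount));
        return Arrays.asList(out);
    }

    public static void main(String[] args) {
        // Schema has 4 columns; this stored key only encodes the first 2.
        System.out.println(decode("a|b", 4)); // [a, b, null, null]
    }
}
```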
[jira] [Updated] (HBASE-9353) HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta()
[ https://issues.apache.org/jira/browse/HBASE-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-9353: --- Attachment: HBASE-9353-v0.patch Looks like it is more than addRegionToMeta()... e.g. putToMetaTable(), putToCatalogTable(), ...and others. HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta() --- Key: HBASE-9353 URL: https://issues.apache.org/jira/browse/HBASE-9353 Project: HBase Issue Type: Bug Reporter: Ted Yu Attachments: HBASE-9353-v0.patch Here is related code:
{code}
public static void addRegionToMeta(CatalogTracker catalogTracker, HRegionInfo regionInfo,
    HRegionInfo splitA, HRegionInfo splitB) throws IOException {
  addRegionToMeta(MetaReader.getMetaHTable(catalogTracker), regionInfo, splitA, splitB);
}
{code}
HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta() -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
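The usual shape of the fix (close the table in the method that obtained it, for example via try-with-resources) can be sketched with a stand-in class; `FakeTable` and `CloseOnExitDemo` are hypothetical names, not the HBase HTable API:

```java
// Sketch of the leak fix: the method that obtains a Closeable resource
// guarantees close(), even when the write throws. FakeTable stands in for
// HTable; all names are hypothetical, for illustration only.
public class CloseOnExitDemo {
    static class FakeTable implements AutoCloseable {
        boolean closed = false;

        void put(String row) {
            // a real table write would go here
        }

        @Override
        public void close() {
            closed = true;
        }
    }

    // Mirrors the intent of the patch for addRegionToMeta(): obtain the
    // table, use it, and always release it. The table is returned only so
    // the demo can verify that close() ran.
    public static FakeTable addRegionToMeta(String region) {
        FakeTable table = new FakeTable();
        try (FakeTable t = table) {
            t.put(region);
        }
        return table;
    }

    public static void main(String[] args) {
        System.out.println(addRegionToMeta("region-1").closed); // true
    }
}
```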
[jira] [Updated] (HBASE-8692) [AccessController] Restrict HTableDescriptor enumeration
[ https://issues.apache.org/jira/browse/HBASE-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Daniel Cryans updated HBASE-8692: -- Fix Version/s: (was: 0.94.9) 0.94.10 Fixing the Fix Version from 0.94.9 to 0.94.10; the former's RC was cut just before this was committed. Also this caused HBASE-9314, so now every time we delete a table we get a TableInfoMissingException+stack trace in the master log. [AccessController] Restrict HTableDescriptor enumeration Key: HBASE-8692 URL: https://issues.apache.org/jira/browse/HBASE-8692 Project: HBase Issue Type: Improvement Components: Coprocessors, security Affects Versions: 0.98.0, 0.95.1, 0.94.9 Reporter: Andrew Purtell Assignee: Andrew Purtell Fix For: 0.98.0, 0.95.2, 0.94.10 Attachments: 8692-0.94.patch, 8692-0.94.patch, 8692-0.94.patch, 8692-0.94.patch, 8692.patch, 8692.patch, 8692.patch, 8692.patch Some users are concerned about having table schema exposed to every user and would like it protected, similar to the rest of the admin operations for schema. This used to be hopeless because META would leak HTableDescriptors in HRegionInfo, but that is no longer the case in 0.94+. Consider adding CP hooks in the master for intercepting HMasterInterface#getHTableDescriptors and HMasterInterface#getHTableDescriptors(List<String>). Add support in the AccessController for only allowing GLOBAL ADMIN to the first method. Add support in the AccessController for allowing access to the descriptors for the table names in the list of the second method only if the user has TABLE ADMIN privilege for all of the listed table names. Then, fix the code in HBaseAdmin (and elsewhere) that expects to be able to enumerate all table descriptors e.g. in deleteTable. A TABLE ADMIN can delete a table but won't have GLOBAL ADMIN privilege to enumerate the total list. So a minor fixup is needed here, and in other places like this which make the same assumption. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9353) HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta()
[ https://issues.apache.org/jira/browse/HBASE-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751485#comment-13751485 ] Ted Yu commented on HBASE-9353: --- Thanks for the quick action. +1 HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta() --- Key: HBASE-9353 URL: https://issues.apache.org/jira/browse/HBASE-9353 Project: HBase Issue Type: Bug Reporter: Ted Yu Attachments: HBASE-9353-v0.patch Here is related code: {code} public static void addRegionToMeta(CatalogTracker catalogTracker, HRegionInfo regionInfo, HRegionInfo splitA, HRegionInfo splitB) throws IOException { addRegionToMeta(MetaReader.getMetaHTable(catalogTracker), regionInfo, splitA, splitB); } {code} HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta() -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (HBASE-9314) Dropping a table always prints a TableInfoMissingException in the master log
[ https://issues.apache.org/jira/browse/HBASE-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751482#comment-13751482 ] Jean-Daniel Cryans edited comment on HBASE-9314 at 8/27/13 5:40 PM: I traced it back to HBASE-8692 which was marked as fixed for 0.94.9 but was really fixed in .10. What happened is that HMaster.getHTableDescriptors(List) is now called instead of HMaster.getHTableDescriptors() in HBaseAdmin, and by design we call this method for specific tables until they are gone so not getting a HTD is expected. was (Author: jdcryans): I traced it back to HBASE-8692 which was marked as fixed for 0.94.9 but was really fixed in .10. What happened is that HMaster.getHTableDescriptors(List) is now called instead of HMaster.getHTableDescriptors() in HBaseAdmin, and by design we call this method for specific tables until it is gone so not getting a HTD is expected. Dropping a table always prints a TableInfoMissingException in the master log Key: HBASE-9314 URL: https://issues.apache.org/jira/browse/HBASE-9314 Project: HBase Issue Type: Improvement Affects Versions: 0.95.2, 0.94.10 Reporter: Jean-Daniel Cryans Priority: Minor Fix For: 0.98.0, 0.94.12, 0.96.0 Everytime I drop a table I get the same stack trace in the master's log: {noformat} 2013-08-22 23:11:31,939 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Table 't' archived! 2013-08-22 23:11:31,939 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Removing 't' descriptor. 2013-08-22 23:11:31,940 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Marking 't' as deleted. 
2013-08-22 23:11:31,944 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.zookeeper.lock.ZKInterProcessLockBase: Released /hbase/table-lock/t/write-master:602 2013-08-22 23:11:32,024 DEBUG [RpcServer.handler=0,port=6] org.apache.hadoop.hbase.util.FSTableDescriptors: Exception during readTableDecriptor. Current table name = t org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file under hdfs://jdec2hbase0403-1.vpc.cloudera.com:9000/hbase/data/default/t at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorAndModtime(FSTableDescriptors.java:503) at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorAndModtime(FSTableDescriptors.java:496) at org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:170) at org.apache.hadoop.hbase.master.HMaster.getTableDescriptors(HMaster.java:2629) at org.apache.hadoop.hbase.protobuf.generated.MasterMonitorProtos$MasterMonitorService$2.callBlockingMethod(MasterMonitorProtos.java:4634) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2156) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1861) 2013-08-22 23:11:32,024 WARN [RpcServer.handler=0,port=6] org.apache.hadoop.hbase.util.FSTableDescriptors: The following folder is in HBase's root directory and doesn't contain a table descriptor, do consider deleting it: t {noformat} But the operation completes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack updated HBASE-9230: - Attachment: 9230v3.txt A zombie maker has snuck back in looking at test runs over last few days. Hopefully not this patch. Retrying. Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt, 9230v2.txt, 9230v3.txt, 9230v3.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server is currently. That is an oversight. Lets fix so can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up release but we should get it in to help the asynchbase convertion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9314) Dropping a table always prints a TableInfoMissingException in the master log
[ https://issues.apache.org/jira/browse/HBASE-9314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jean-Daniel Cryans updated HBASE-9314: -- Fix Version/s: 0.94.12 I traced it back to HBASE-8692 which was marked as fixed for 0.94.9 but was really fixed in .10. What happened is that HMaster.getHTableDescriptors(List) is now called instead of HMaster.getHTableDescriptors() in HBaseAdmin, and by design we call this method for specific tables until it is gone so not getting a HTD is expected. Dropping a table always prints a TableInfoMissingException in the master log Key: HBASE-9314 URL: https://issues.apache.org/jira/browse/HBASE-9314 Project: HBase Issue Type: Improvement Affects Versions: 0.95.2, 0.94.10 Reporter: Jean-Daniel Cryans Priority: Minor Fix For: 0.98.0, 0.94.12, 0.96.0 Everytime I drop a table I get the same stack trace in the master's log: {noformat} 2013-08-22 23:11:31,939 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Table 't' archived! 2013-08-22 23:11:31,939 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Removing 't' descriptor. 2013-08-22 23:11:31,940 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.master.handler.DeleteTableHandler: Marking 't' as deleted. 2013-08-22 23:11:31,944 DEBUG [MASTER_TABLE_OPERATIONS-jdec2hbase0403-1:6-0] org.apache.hadoop.hbase.zookeeper.lock.ZKInterProcessLockBase: Released /hbase/table-lock/t/write-master:602 2013-08-22 23:11:32,024 DEBUG [RpcServer.handler=0,port=6] org.apache.hadoop.hbase.util.FSTableDescriptors: Exception during readTableDecriptor. 
Current table name = t org.apache.hadoop.hbase.TableInfoMissingException: No table descriptor file under hdfs://jdec2hbase0403-1.vpc.cloudera.com:9000/hbase/data/default/t at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorAndModtime(FSTableDescriptors.java:503) at org.apache.hadoop.hbase.util.FSTableDescriptors.getTableDescriptorAndModtime(FSTableDescriptors.java:496) at org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:170) at org.apache.hadoop.hbase.master.HMaster.getTableDescriptors(HMaster.java:2629) at org.apache.hadoop.hbase.protobuf.generated.MasterMonitorProtos$MasterMonitorService$2.callBlockingMethod(MasterMonitorProtos.java:4634) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2156) at org.apache.hadoop.hbase.ipc.RpcServer$Handler.run(RpcServer.java:1861) 2013-08-22 23:11:32,024 WARN [RpcServer.handler=0,port=6] org.apache.hadoop.hbase.util.FSTableDescriptors: The following folder is in HBase's root directory and doesn't contain a table descriptor, do consider deleting it: t {noformat} But the operation completes. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8640) ServerName in master may not initialize with the configured ipc address of hbase.master.ipc.address
[ https://issues.apache.org/jira/browse/HBASE-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751488#comment-13751488 ] Jean-Daniel Cryans commented on HBASE-8640: --- Can I get a +1 to revert from someone who +1'd it? I should probably do it in a different jira too... ServerName in master may not initialize with the configured ipc address of hbase.master.ipc.address --- Key: HBASE-8640 URL: https://issues.apache.org/jira/browse/HBASE-8640 Project: HBase Issue Type: Bug Components: master Reporter: rajeshbabu Assignee: rajeshbabu Fix For: 0.98.0, 0.95.2, 0.94.9 Attachments: HBASE-8640_94.patch, HBASE-8640.patch We are starting the rpc server with the default interface hostname or the configured ipc address:
{code}
this.rpcServer = HBaseRPC.getServer(this,
    new Class<?>[]{HMasterInterface.class, HMasterRegionInterface.class},
    initialIsa.getHostName(), // This is bindAddress if set else it's hostname
    initialIsa.getPort(),
    numHandlers,
    0, // we don't use high priority handlers in master
    conf.getBoolean("hbase.rpc.verbose", false),
    conf,
    0); // this is a DNC w/o high priority handlers
{code}
But we are always initializing the ServerName with the default hostname; the master znode also has this hostname:
{code}
String hostname = Strings.domainNamePointerToHostName(DNS.getDefaultHost(
    conf.get("hbase.master.dns.interface", "default"),
    conf.get("hbase.master.dns.nameserver", "default")));
...
this.serverName = new ServerName(hostname, this.isa.getPort(), System.currentTimeMillis());
{code}
If the default interface hostname and the configured ipc address are not the same, clients will get MasterNotRunningException. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9353) HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta()
[ https://issues.apache.org/jira/browse/HBASE-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-9353: --- Assignee: Matteo Bertozzi Status: Patch Available (was: Open) HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta() --- Key: HBASE-9353 URL: https://issues.apache.org/jira/browse/HBASE-9353 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Matteo Bertozzi Attachments: HBASE-9353-v0.patch Here is related code: {code} public static void addRegionToMeta(CatalogTracker catalogTracker, HRegionInfo regionInfo, HRegionInfo splitA, HRegionInfo splitB) throws IOException { addRegionToMeta(MetaReader.getMetaHTable(catalogTracker), regionInfo, splitA, splitB); } {code} HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta() -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9354) allow getting all partitions for table to also use direct SQL path
Sergey Shelukhin created HBASE-9354: --- Summary: allow getting all partitions for table to also use direct SQL path Key: HBASE-9354 URL: https://issues.apache.org/jira/browse/HBASE-9354 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin While testing some queries I noticed that getPartitions can be very slow (which happens e.g. in non-strict mode with no partition column filter); with a table with many partitions it can take 10-12s easily. SQL perf path can also be used for this path. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Deleted] (HBASE-9354) allow getting all partitions for table to also use direct SQL path
[ https://issues.apache.org/jira/browse/HBASE-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin deleted HBASE-9354: allow getting all partitions for table to also use direct SQL path -- Key: HBASE-9354 URL: https://issues.apache.org/jira/browse/HBASE-9354 Project: HBase Issue Type: Improvement Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin While testing some queries I noticed that getPartitions can be very slow (which happens e.g. in non-strict mode with no partition column filter); with a table with many partitions it can easily take 10-12s. The direct SQL perf path can also be used for this call. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751506#comment-13751506 ] Hadoop QA commented on HBASE-9230: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600198/9230v3.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 17 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): at org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking(TestRegionObserverScannerOpenHook.java:231) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6934//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6934//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6934//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6934//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6934//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6934//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6934//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6934//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6934//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6934//console This message is automatically generated. 
Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt, 9230v2.txt, 9230v3.txt, 9230v3.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server is currently. That is an oversight. Let's fix it so clients can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up the release but we should get it in to help the asynchbase conversion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9138) the name of function getHaseIntegrationTestingUtility() is a misspelling
[ https://issues.apache.org/jira/browse/HBASE-9138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751504#comment-13751504 ] Hadoop QA commented on HBASE-9138: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600152/ChaosMonkey-v3.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 45 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6933//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6933//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6933//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6933//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6933//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6933//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6933//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6933//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6933//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6933//console This message is automatically generated. the name of function getHaseIntegrationTestingUtility() is a misspelling Key: HBASE-9138 URL: https://issues.apache.org/jira/browse/HBASE-9138 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Priority: Trivial Fix For: 0.94.0 Attachments: ChaosMonkey.java.patch, ChaosMonkey-v2.patch, ChaosMonkey-v3.patch The function getHaseIntegrationTestingUtility() in ChaosMonkey.java should be getHBaseIntegrationTestingUtility(), just a spelling mistake. 
{code}
/**
 * Context for Action's
 */
public static class ActionContext {
  private IntegrationTestingUtility util;

  public ActionContext(IntegrationTestingUtility util) {
    this.util = util;
  }

  public IntegrationTestingUtility getHaseIntegrationTestingUtility() {
    return util;
  }

  public HBaseCluster getHBaseCluster() {
    return util.getHBaseClusterInterface();
  }
}
{code}
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9116) Add a view/edit tool for favored node mappings for regions
[ https://issues.apache.org/jira/browse/HBASE-9116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751512#comment-13751512 ] Jimmy Xiang commented on HBASE-9116: You already defined UpdateFavoredNodesRequest, why not use it? These two calls may diverge later on. It is better to use a different message so that it will be easier to maintain compatibility. Why does this one take a RegionOpenInfo? At least, it doesn't need the zk node version, right?
{noformat}
+message UpdateFavoredNodesRequest {
+  repeated OpenRegionRequest.RegionOpenInfo updateRegionInfo = 1;
+}
{noformat}
Add a view/edit tool for favored node mappings for regions -- Key: HBASE-9116 URL: https://issues.apache.org/jira/browse/HBASE-9116 Project: HBase Issue Type: Improvement Components: Region Assignment Affects Versions: 0.95.0 Reporter: Devaraj Das Assignee: Devaraj Das Fix For: 0.96.0 Attachments: 9116-1.txt, 9116-2.txt, 9116-2.txt, 9116-2.txt, 9116-3.txt, 9116-4.txt, 9116-5.txt, 9116-6.txt Add a tool that one can run offline to view the favored node mappings for regions, and also fix the mappings if needed. Such a tool exists in the 0.89-fb branch. Will port it over to trunk/0.95. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9353) HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta()
[ https://issues.apache.org/jira/browse/HBASE-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matteo Bertozzi updated HBASE-9353: --- Attachment: HBASE-9353-v1.patch Too many... putToCatalogTable() is using a put() helper that internally closes the table provided, but there are still others, like splitRegion() and mergeRegions(), that are not closing the table. HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta() --- Key: HBASE-9353 URL: https://issues.apache.org/jira/browse/HBASE-9353 Project: HBase Issue Type: Bug Reporter: Ted Yu Assignee: Matteo Bertozzi Attachments: HBASE-9353-v0.patch, HBASE-9353-v1.patch Here is related code:
{code}
public static void addRegionToMeta(CatalogTracker catalogTracker,
    HRegionInfo regionInfo, HRegionInfo splitA, HRegionInfo splitB)
    throws IOException {
  addRegionToMeta(MetaReader.getMetaHTable(catalogTracker),
      regionInfo, splitA, splitB);
}
{code}
HTable returned by MetaReader#getMetaHTable() is not closed in MetaEditor#addRegionToMeta() -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9116) Add a view/edit tool for favored node mappings for regions
[ https://issues.apache.org/jira/browse/HBASE-9116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751515#comment-13751515 ] Devaraj Das commented on HBASE-9116: Thanks, Nick. bq. I regard tweaking test timeouts as a kind of code smell. I had increased the timeout so I could debug various things from within eclipse without the test timing out on me. The timeout annotation in the new patch is 3 minutes (which is what it was set to earlier), but I could actually remove that timeout annotation altogether on commit. The test runs in less than 2 minutes consistently. bq. AssignmentVerificationReport has separate logic blocks for floating point if (A > B) else if (A == B) – I wonder if this will be a future source of bugs. I inspected that part some. The logic for reporting the max/min dispersion is built around the check for equality. This is used only for reporting purposes and we can tune this later if needed. Add a view/edit tool for favored node mappings for regions -- Key: HBASE-9116 URL: https://issues.apache.org/jira/browse/HBASE-9116 Project: HBase Issue Type: Improvement Components: Region Assignment Affects Versions: 0.95.0 Reporter: Devaraj Das Assignee: Devaraj Das Fix For: 0.96.0 Attachments: 9116-1.txt, 9116-2.txt, 9116-2.txt, 9116-2.txt, 9116-3.txt, 9116-4.txt, 9116-5.txt, 9116-6.txt Add a tool that one can run offline to view the favored node mappings for regions, and also fix the mappings if needed. Such a tool exists in the 0.89-fb branch. Will port it over to trunk/0.95. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9355) HBaseTestingUtility#cleanupDataTestDirOnTestFS() doesn't close the FileSystem
Ted Yu created HBASE-9355: - Summary: HBaseTestingUtility#cleanupDataTestDirOnTestFS() doesn't close the FileSystem Key: HBASE-9355 URL: https://issues.apache.org/jira/browse/HBASE-9355 Project: HBase Issue Type: Test Reporter: Ted Yu Priority: Minor Here is related code:
{code}
public boolean cleanupDataTestDirOnTestFS() throws IOException {
  boolean ret = getTestFileSystem().delete(dataTestDirOnTestFS, true);
  if (ret) dataTestDirOnTestFS = null;
  return ret;
}
{code}
The FileSystem returned by getTestFileSystem() is not closed. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9116) Add a view/edit tool for favored node mappings for regions
[ https://issues.apache.org/jira/browse/HBASE-9116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751539#comment-13751539 ] Devaraj Das commented on HBASE-9116: bq. You already defined UpdateFavoredNodesRequest, why not use it? Hmm. I had defined it and not used it... I will do the needful for using UpdateFavoredNodesRequest in my next patch... Add a view/edit tool for favored node mappings for regions -- Key: HBASE-9116 URL: https://issues.apache.org/jira/browse/HBASE-9116 Project: HBase Issue Type: Improvement Components: Region Assignment Affects Versions: 0.95.0 Reporter: Devaraj Das Assignee: Devaraj Das Fix For: 0.96.0 Attachments: 9116-1.txt, 9116-2.txt, 9116-2.txt, 9116-2.txt, 9116-3.txt, 9116-4.txt, 9116-5.txt, 9116-6.txt Add a tool that one can run offline to view the favored node mappings for regions, and also fix the mappings if needed. Such a tool exists in the 0.89-fb branch. Will port it over to trunk/0.95. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-8640) ServerName in master may not initialize with the configured ipc address of hbase.master.ipc.address
[ https://issues.apache.org/jira/browse/HBASE-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751552#comment-13751552 ] stack commented on HBASE-8640: -- +1 after talking to [~jdcryans] (I +1'd the patch) ServerName in master may not initialize with the configured ipc address of hbase.master.ipc.address --- Key: HBASE-8640 URL: https://issues.apache.org/jira/browse/HBASE-8640 Project: HBase Issue Type: Bug Components: master Reporter: rajeshbabu Assignee: rajeshbabu Fix For: 0.98.0, 0.95.2, 0.94.9 Attachments: HBASE-8640_94.patch, HBASE-8640.patch We are starting the rpc server with the default interface hostname or the configured ipc address:
{code}
this.rpcServer = HBaseRPC.getServer(this,
    new Class<?>[]{HMasterInterface.class, HMasterRegionInterface.class},
    initialIsa.getHostName(), // This is bindAddress if set else it's hostname
    initialIsa.getPort(),
    numHandlers,
    0, // we dont use high priority handlers in master
    conf.getBoolean("hbase.rpc.verbose", false),
    conf, 0); // this is a DNC w/o high priority handlers
{code}
But we are always initializing the server name with the default hostname, and the master znode also has this hostname:
{code}
String hostname = Strings.domainNamePointerToHostName(DNS.getDefaultHost(
    conf.get("hbase.master.dns.interface", "default"),
    conf.get("hbase.master.dns.nameserver", "default")));
...
this.serverName = new ServerName(hostname, this.isa.getPort(),
    System.currentTimeMillis());
{code}
If the default interface hostname and the configured ipc address are not the same, clients will get MasterNotRunningException. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9343) Implement stateless scanner for Stargate
[ https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751557#comment-13751557 ] Francis Liu commented on HBASE-9343: [~andrew.purt...@gmail.com] Just to clarify, there are two big motivating pieces to creating a new scanner resource: 1. Make the REST server stateless: let's keep all state in hbase and have the server merely function as a proxy; this makes the system much simpler to scale and manage. 2. Stream the data instead of issuing a new http request for each batch; this solves #1 as well as making scans more performant (fewer rpc calls, delegating flow control to the tcp layer). This also eases the pressure of having to keep a lot of scan data in memory in order to make it performant. This patch should include support for json, xml and protobuf. Implement stateless scanner for Stargate Key: HBASE-9343 URL: https://issues.apache.org/jira/browse/HBASE-9343 Project: HBase Issue Type: Improvement Components: REST Affects Versions: 0.94.11 Reporter: Vandana Ayyalasomayajula Assignee: Vandana Ayyalasomayajula Priority: Minor Attachments: HBASE-9343_94.00.patch The current scanner implementation stores state and hence is not very suitable for REST server failure scenarios. This JIRA proposes to implement a stateless scanner. In the first version of the patch, a new resource class ScanResource has been added and all the scan parameters are specified as query params. The scan parameters are:
startrow - The start row for the scan.
endrow - The end row for the scan.
columns - The columns to scan.
starttime, endtime - To retrieve only columns within a specific range of version timestamps, both start and end time must be specified.
maxversions - To limit the number of versions of each column to be returned.
batchsize - To limit the maximum number of values returned for each call to next().
limit - The number of rows to return in the scan operation.
More on the start row, end row and limit parameters:
1. If start row, end row and limit are not specified, then the whole table will be scanned.
2. If start row and limit (say N) are specified, then the scan operation will return N rows from the start row specified.
3. If only the limit parameter is specified, then the scan operation will return N rows from the start of the table.
4. If limit and end row are specified, then the scan operation will return N rows from the start of the table till the end row. If the end row is reached before N rows (say M, with M < N), then M rows will be returned to the user.
5. If start row, end row and limit (say N) are specified and N < the number of rows between start row and end row, then N rows from the start row will be returned to the user. If N > the number of rows between start row and end row (say M), then M rows will be returned to the user.
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
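All five rules above reduce to the same arithmetic: the number of rows returned is the minimum of the limit N and the number of rows actually available between the effective start and end rows. A tiny sketch of that invariant (method and parameter names are illustrative, not from the patch):

```java
public class ScanLimitSemantics {
    // Rows the stateless scanner returns: min(limit, rows available in range).
    static int rowsReturned(int limit, int rowsInRange) {
        return Math.min(limit, rowsInRange);
    }

    public static void main(String[] args) {
        System.out.println(rowsReturned(10, 7));  // end row reached first (case 4): prints 7
        System.out.println(rowsReturned(5, 100)); // limit governs (case 5): prints 5
    }
}
```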
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751564#comment-13751564 ] Hadoop QA commented on HBASE-9230: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600207/9230v3.txt against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 17 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:red}-1 core tests{color}. The patch failed these unit tests: {color:red}-1 core zombie tests{color}. 
There are 1 zombie test(s): at org.apache.hadoop.hbase.coprocessor.TestRegionObserverScannerOpenHook.testRegionObserverCompactionTimeStacking(TestRegionObserverScannerOpenHook.java:231) Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6936//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6936//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6936//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6936//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6936//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6936//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6936//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6936//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6936//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6936//console This message is automatically generated. 
Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt, 9230v2.txt, 9230v3.txt, 9230v3.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server is currently. That is an oversight. Let's fix it so clients can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up the release but we should get it in to help the asynchbase conversion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9230) Fix the server so it can take a pure pb request param and return a pure pb result
[ https://issues.apache.org/jira/browse/HBASE-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751571#comment-13751571 ] Elliott Clark commented on HBASE-9230: -- I put a few small comments on RB. For the most part it looks good. I'll dive a little deeper. Fix the server so it can take a pure pb request param and return a pure pb result - Key: HBASE-9230 URL: https://issues.apache.org/jira/browse/HBASE-9230 Project: HBase Issue Type: Bug Reporter: stack Assignee: stack Priority: Critical Fix For: 0.98.0, 0.96.0 Attachments: 9230.txt, 9230v2.txt, 9230v3.txt, 9230v3.txt Working on the asynchbase update w/ B this afternoon so it can run against 0.95/0.96, I noticed that clients HAVE TO do cellblocks as the server is currently. That is an oversight. Lets fix so can do all pb all the time too (I thought this was there but it is not); it will make it easier dev'ing simple clients. This issue shouldn't hold up release but we should get it in to help the asynchbase convertion. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HBASE-9356) [0.94] SecureServer.INSECURE_VERSIONS is declared incorrectly
Lars Hofhansl created HBASE-9356: Summary: [0.94] SecureServer.INSECURE_VERSIONS is declared incorrectly Key: HBASE-9356 URL: https://issues.apache.org/jira/browse/HBASE-9356 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Priority: Trivial Fix For: 0.94.12 I just found spurious messages of the form: 2013-08-27 18:52:58,389 WARN org.apache.hadoop.hbase.ipc.SecureServer: Incorrect header or version mismatch from host:port got version 3 expected version 4 Version 3 means insecure, and the code tries to test for it, but the insecure versions are declared in a Set<Byte> and are then tested against an int, which apparently is always false. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-9356) [0.94] SecureServer.INSECURE_VERSIONS is declared incorrectly
[ https://issues.apache.org/jira/browse/HBASE-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lars Hofhansl updated HBASE-9356: - Attachment: 9356.txt Simple fix. [0.94] SecureServer.INSECURE_VERSIONS is declared incorrectly - Key: HBASE-9356 URL: https://issues.apache.org/jira/browse/HBASE-9356 Project: HBase Issue Type: Bug Reporter: Lars Hofhansl Priority: Trivial Fix For: 0.94.12 Attachments: 9356.txt I just found spurious messages of the form: 2013-08-27 18:52:58,389 WARN org.apache.hadoop.hbase.ipc.SecureServer: Incorrect header or version mismatch from host:port got version 3 expected version 4 Version 3 means insecure, and the code tries to test for it, but the insecure versions are declared in a Set<Byte> and are then tested against an int, which apparently is always false. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
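The underlying bug is a classic autoboxing trap: Set<Byte>.contains(...) takes an Object, so an int argument boxes to an Integer, and Integer.equals(Byte) is always false even when the numeric values match. A standalone demonstration (names are illustrative, not the actual SecureServer fields):

```java
import java.util.HashSet;
import java.util.Set;

public class InsecureVersionCheck {
    public static void main(String[] args) {
        Set<Byte> insecureVersions = new HashSet<>();
        insecureVersions.add((byte) 3);

        int version = 3; // the version read off the wire, held as an int

        // The int boxes to Integer, which never equals a Byte, so this
        // lookup is always false despite the matching numeric value.
        System.out.println(insecureVersions.contains(version));        // prints "false"

        // Casting to byte restores the intended membership check.
        System.out.println(insecureVersions.contains((byte) version)); // prints "true"
    }
}
```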
[jira] [Commented] (HBASE-9315) TestLruBlockCache.testBackgroundEvictionThread fails on suse
[ https://issues.apache.org/jira/browse/HBASE-9315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751596#comment-13751596 ] Hudson commented on HBASE-9315: --- SUCCESS: Integrated in hbase-0.95 #496 (See [https://builds.apache.org/job/hbase-0.95/496/]) HBASE-9315 TestLruBlockCache.testBackgroundEvictionThread fails on suse (stack: rev 1517869) * /hbase/branches/0.95/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java TestLruBlockCache.testBackgroundEvictionThread fails on suse Key: HBASE-9315 URL: https://issues.apache.org/jira/browse/HBASE-9315 Project: HBase Issue Type: Test Affects Versions: 0.95.2 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.98.0, 0.96.0 Attachments: 0001-HBASE-9315-improve-test-stability-of-TestLruBlockCac.patch One of our build machines is consistently having trouble with this test. {noformat} Error Message expected:2 but was:1 Stacktrace java.lang.AssertionError: expected:2 but was:1 at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.junit.Assert.assertEquals(Assert.java:542) at org.apache.hadoop.hbase.io.hfile.TestLruBlockCache.testBackgroundEvictionThread(TestLruBlockCache.java:85) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.junit.runners.Suite.runChild(Suite.java:127) at org.junit.runners.Suite.runChild(Suite.java:26) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Standard Output Background Evictions run: 2 Standard Error 2013-08-22 11:02:58,331 INFO [pool-1-thread-1] hbase.ResourceChecker(147): before: io.hfile.TestLruBlockCache#testBackgroundEvictionThread Thread=35, OpenFileDescriptor=277, MaxFileDescriptor=95000, SystemLoadAverage=119, ProcessCount=75, AvailableMemoryMB=8884, ConnectionCount=1 2013-08-22 11:02:58,338 INFO [pool-1-thread-1] hbase.ResourceChecker(171): after: io.hfile.TestLruBlockCache#testBackgroundEvictionThread Thread=36 (was 35) - Thread LEAK? -, OpenFileDescriptor=279 (was 277) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=95000 (was 95000), SystemLoadAverage=119 (was 119), ProcessCount=75 (was 75), AvailableMemoryMB=8884 (was 8884), ConnectionCount=1 (was 1) 2013-08-22 11:07:58,331 DEBUG [LRU Statistics #0] hfile.LruBlockCache(728): Stats: total=87.01 KB, free=10.65 KB, max=97.66 KB, blocks=8, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=2, evicted=2, evictedPerRun=1.0 2013-08-22 11:12:58,331 DEBUG [LRU Statistics #0] hfile.LruBlockCache(728): Stats: total=87.01 KB, free=10.65 KB, max=97.66 KB, blocks=8, accesses=0, hits=0, hitRatio=0, cachingAccesses=0, cachingHits=0, cachingHitsRatio=0,evictions=2, evicted=2, evictedPerRun=1.0
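The failure above is the classic race between an assertion and a background thread: the test reads the eviction count before the eviction thread has finished its run. A common stabilization (a sketch only; `evictionCount` and `waitFor` are illustrative names, not HBase API) is to poll until the expected count is reached or a deadline passes:

```java
// Hypothetical sketch: poll for the background thread's result instead of
// asserting immediately. Names are illustrative, not the HBase test's API.
public class WaitForEvictions {
    // stands in for cache.getStats().getEvictionCount() in the real test
    static volatile long evictionCount = 0;

    static boolean waitFor(long timeoutMs, java.util.function.BooleanSupplier cond)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!cond.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) return false;
            Thread.sleep(10);  // brief back-off between checks
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        // simulate a background eviction thread that updates the counter late
        new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException e) { }
            evictionCount = 2;
        }).start();
        System.out.println(waitFor(5000, () -> evictionCount == 2));
    }
}
```

On a loaded build machine (note the SystemLoadAverage=119 in the log) a fixed sleep is never long enough; a deadline-bounded poll tolerates arbitrary scheduling delay without slowing the fast case.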
[jira] [Commented] (HBASE-9348) TerminatedWrapper error decoding, skipping skippable types
[ https://issues.apache.org/jira/browse/HBASE-9348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13751595#comment-13751595 ] Hudson commented on HBASE-9348: --- SUCCESS: Integrated in hbase-0.95 #496 (See [https://builds.apache.org/job/hbase-0.95/496/]) HBASE-9348 TerminatedWrapper error decoding, skipping skippable types (Nick) (tedyu: rev 1517857) * /hbase/branches/0.95/hbase-common/src/main/java/org/apache/hadoop/hbase/types/TerminatedWrapper.java * /hbase/branches/0.95/hbase-common/src/test/java/org/apache/hadoop/hbase/types/TestTerminatedWrapper.java TerminatedWrapper error decoding, skipping skippable types -- Key: HBASE-9348 URL: https://issues.apache.org/jira/browse/HBASE-9348 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.98.0, 0.96.0 Attachments: 0001-HBASE-9348-TerminatedWrapper-skippable-types-bug.patch When {{TerminatedWrapper}} wraps a type which {{isSkippable}}, it does not consider the terminator when updating the source buffer position after skipping or decoding a value. The tests only covered the non-skippable case. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
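The shape of the bug can be sketched in a few lines (a simplified model, not the actual TerminatedWrapper code; a single zero byte stands in for the configurable terminator): after skipping or decoding the wrapped value, the position must also advance past the terminator bytes, or the next read starts on the terminator itself.

```java
// Hypothetical sketch of the off-by-terminator bug described above.
public class TerminatedSkip {
    static final byte TERM = 0x00;  // assumed single-byte terminator

    // returns the new position after skipping one terminated value
    static int skip(byte[] src, int pos) {
        while (src[pos] != TERM) pos++;  // consume the wrapped value
        return pos + 1;                  // the buggy version forgot the + 1
    }

    public static void main(String[] args) {
        byte[] src = { 'a', 'b', TERM, 'c', TERM };
        int pos = skip(src, 0);
        System.out.println(pos);            // next value starts at 'c'
        System.out.println(skip(src, pos)); // buffer fully consumed
    }
}
```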
[jira] [Commented] (HBASE-9336) Two css files raise release audit warning
[ https://issues.apache.org/jira/browse/HBASE-9336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13751604#comment-13751604 ] Elliott Clark commented on HBASE-9336: -- This was already put into the pom as an exclude (HBASE-9342) so that we didn't have to change the source files. What's the best way to handle it going forward? Two css files raise release audit warning - Key: HBASE-9336 URL: https://issues.apache.org/jira/browse/HBASE-9336 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Nick Dimiduk Fix For: 0.98.0, 0.96.0 Attachments: 0001-HBASE-9336-Add-missing-license-headers-to-css-files.patch From https://builds.apache.org/job/PreCommit-HBASE-Build/6869/artifact/trunk/patchprocess/patchReleaseAuditProblems.txt : {code} !? /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/resources/hbase-webapps/static/css/bootstrap-theme.css !? /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/resources/hbase-webapps/static/css/bootstrap-theme.min.css Lines that start with ? in the release audit report indicate files that do not have an Apache license header. {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9336) Two css files raise release audit warning
[ https://issues.apache.org/jira/browse/HBASE-9336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13751610#comment-13751610 ] stack commented on HBASE-9336: -- [~eclark] Pardon me. Didn't see that. I think adding third-party files to the pom exclude list is the way to go. Should I revert this? (I committed because I thought we were still getting warnings). Two css files raise release audit warning - Key: HBASE-9336 URL: https://issues.apache.org/jira/browse/HBASE-9336 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Nick Dimiduk Fix For: 0.98.0, 0.96.0 Attachments: 0001-HBASE-9336-Add-missing-license-headers-to-css-files.patch From https://builds.apache.org/job/PreCommit-HBASE-Build/6869/artifact/trunk/patchprocess/patchReleaseAuditProblems.txt : {code} !? /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/resources/hbase-webapps/static/css/bootstrap-theme.css !? /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/resources/hbase-webapps/static/css/bootstrap-theme.min.css Lines that start with ? in the release audit report indicate files that do not have an Apache license header. {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
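The pom-level exclude the comments refer to would look roughly like the fragment below (a sketch only, assuming the Apache Rat Maven plugin is the audit tool and that the glob matches the two bundled Bootstrap files; the actual HBASE-9342 change may differ):

{code}
<!-- Hypothetical sketch: exempt third-party css from the release audit -->
<plugin>
  <groupId>org.apache.rat</groupId>
  <artifactId>apache-rat-plugin</artifactId>
  <configuration>
    <excludes>
      <exclude>**/hbase-webapps/static/css/bootstrap-theme*.css</exclude>
    </excludes>
  </configuration>
</plugin>
{code}

Excluding in the pom keeps the vendored files byte-identical to upstream, whereas adding Apache license headers to third-party sources both mislabels their provenance and gets clobbered on the next Bootstrap upgrade.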
[jira] [Commented] (HBASE-9283) Struct and StructIterator should properly handle trailing nulls
[ https://issues.apache.org/jira/browse/HBASE-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751607#comment-13751607 ] Hadoop QA commented on HBASE-9283: -- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12600084/0001-HBASE-9283-Struct-trailing-null-handling.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 8 new or modified tests. {color:green}+1 hadoop1.0{color}. The patch compiles against the hadoop 1.0 profile. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100 {color:green}+1 site{color}. The mvn site goal succeeds with this patch. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/6935//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6935//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6935//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6935//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6935//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6935//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6935//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6935//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/6935//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/6935//console This message is automatically generated. Struct and StructIterator should properly handle trailing nulls --- Key: HBASE-9283 URL: https://issues.apache.org/jira/browse/HBASE-9283 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Nick Dimiduk Assignee: Nick Dimiduk Fix For: 0.98.0, 0.96.0 Attachments: 0001-HBASE-9283-Struct-trailing-null-handling.patch, 0001-HBASE-9283-Struct-trailing-null-handling.patch, 0001-HBASE-9283-Struct-trailing-null-handling.patch For a composite row key, Phoenix strips off trailing null columns values in the row key. 
The reason this is important is that then new nullable row key columns can be added to a schema without requiring any data upgrade to existing rows. Otherwise, adding new row key columns to the end of a schema becomes extremely cumbersome, as you'd need to delete all existing rows and add them back with a row key that includes a null value. Rather than Phoenix needing to modify the iteration code everywhere (as [~ndimiduk] outlined here: https://issues.apache.org/jira/browse/HBASE-8693?focusedCommentId=13744499page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13744499), it'd be better if StructIterator handled this out-of-the-box. Otherwise, if Phoenix has to specialize this, we'd lose the interop piece which is the justification for switching our type system to this new one in the first place. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
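The requested behavior can be sketched as an iterator that keeps producing fields for the full schema but yields null once the encoded bytes run out (illustrative only; the real StructIterator decodes typed fields from a PositionedByteRange rather than walking a String array):

```java
// Hypothetical sketch: trailing schema fields absent from the encoded row
// decode as nulls, so rows written under a shorter schema stay readable.
import java.util.Iterator;

public class TrailingNullIterator implements Iterator<String> {
    private final String[] encoded;  // stands in for the decoded field values
    private final int schemaFields;  // fields in the (possibly longer) schema
    private int i = 0;

    TrailingNullIterator(String[] encoded, int schemaFields) {
        this.encoded = encoded;
        this.schemaFields = schemaFields;
    }

    public boolean hasNext() { return i < schemaFields; }

    public String next() {
        int idx = i++;
        // past the end of the encoded row: treat the field as a trailing null
        return idx < encoded.length ? encoded[idx] : null;
    }

    public static void main(String[] args) {
        // a row written with 2 key columns, read back with a 3-column schema
        Iterator<String> it = new TrailingNullIterator(new String[] {"a", "b"}, 3);
        StringBuilder sb = new StringBuilder();
        while (it.hasNext()) sb.append(it.next()).append(",");
        System.out.println(sb);
    }
}
```

With this contract, adding a nullable column to the end of a schema requires no rewrite of existing rows, which is exactly the Phoenix upgrade path described above.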
[jira] [Updated] (HBASE-9321) Contention getting the current user in RpcClient$Connection.writeRequest
[ https://issues.apache.org/jira/browse/HBASE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang updated HBASE-9321: --- Assignee: Jimmy Xiang Status: Open (was: Patch Available) Contention getting the current user in RpcClient$Connection.writeRequest Key: HBASE-9321 URL: https://issues.apache.org/jira/browse/HBASE-9321 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Jean-Daniel Cryans Assignee: Jimmy Xiang Fix For: 0.98.0, 0.96.0 Attachments: trunk-9321.patch I've been running tests on clusters with lots of regions, about 400, and I'm seeing weird contention in the client. This one I see a lot, hundreds and sometimes thousands of threads are blocked like this: {noformat} htable-pool4-t74 daemon prio=10 tid=0x7f2254114000 nid=0x2a99 waiting for monitor entry [0x7f21f9e94000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:466) - waiting to lock 0xfb5ad000 (a java.lang.Class for org.apache.hadoop.security.UserGroupInformation) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1013) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1407) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1634) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1691) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:27339) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:105) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:183) {noformat} While the holder is doing this: {noformat} htable-pool17-t55 daemon prio=10 tid=0x7f2244408000 nid=0x2a98 runnable [0x7f21f9f95000] java.lang.Thread.State: 
RUNNABLE at java.security.AccessController.getStackAccessControlContext(Native Method) at java.security.AccessController.getContext(AccessController.java:487) at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:466) - locked 0xfb5ad000 (a java.lang.Class for org.apache.hadoop.security.UserGroupInformation) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1013) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1407) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1634) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1691) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:27339) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:105) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:183) {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9321) Contention getting the current user in RpcClient$Connection.writeRequest
[ https://issues.apache.org/jira/browse/HBASE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751632#comment-13751632 ] Jimmy Xiang commented on HBASE-9321: Ok, let me fall back to my original solution for REST impersonation: one connection per user. Let's defer the connection reusing to 0.98. Contention getting the current user in RpcClient$Connection.writeRequest Key: HBASE-9321 URL: https://issues.apache.org/jira/browse/HBASE-9321 Project: HBase Issue Type: Bug Affects Versions: 0.95.2 Reporter: Jean-Daniel Cryans Assignee: Jimmy Xiang Fix For: 0.98.0, 0.96.0 Attachments: trunk-9321.patch I've been running tests on clusters with lots of regions, about 400, and I'm seeing weird contention in the client. This one I see a lot, hundreds and sometimes thousands of threads are blocked like this: {noformat} htable-pool4-t74 daemon prio=10 tid=0x7f2254114000 nid=0x2a99 waiting for monitor entry [0x7f21f9e94000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:466) - waiting to lock 0xfb5ad000 (a java.lang.Class for org.apache.hadoop.security.UserGroupInformation) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1013) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1407) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1634) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1691) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:27339) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:105) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:183) {noformat} While the holder is doing this: 
{noformat} htable-pool17-t55 daemon prio=10 tid=0x7f2244408000 nid=0x2a98 runnable [0x7f21f9f95000] java.lang.Thread.State: RUNNABLE at java.security.AccessController.getStackAccessControlContext(Native Method) at java.security.AccessController.getContext(AccessController.java:487) at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:466) - locked 0xfb5ad000 (a java.lang.Class for org.apache.hadoop.security.UserGroupInformation) at org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1013) at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1407) at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1634) at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1691) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:27339) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:105) at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:43) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:183) {noformat} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
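The stack traces show every request thread serializing on the synchronized static `UserGroupInformation.getCurrentUser()`. One way to take that lookup off the hot path, sketched below with stand-in names (not the actual RpcClient change; the ticket discussion also considers one connection per user), is to resolve the user once when the connection is set up and reuse the cached value in `writeRequest`:

```java
// Hypothetical sketch: cache the user per connection instead of doing a
// synchronized lookup on every request.
import java.util.concurrent.atomic.AtomicInteger;

public class CachedUserConnection {
    static final AtomicInteger lookups = new AtomicInteger();

    // stands in for UserGroupInformation.getCurrentUser(), a synchronized
    // static method and therefore a global bottleneck under load
    static synchronized String getCurrentUser() {
        lookups.incrementAndGet();
        return "hbase-client";
    }

    private final String user = getCurrentUser();  // resolved once per connection

    void writeRequest() {
        // hot path uses the cached user; no class-level lock taken
        if (user == null) throw new IllegalStateException("no user");
    }

    public static void main(String[] args) {
        CachedUserConnection conn = new CachedUserConnection();
        for (int i = 0; i < 1000; i++) conn.writeRequest();
        System.out.println(lookups.get());  // one lookup for 1000 requests
    }
}
```

The trade-off is that a cached identity is only safe while the connection really is bound to a single user, which is why impersonation scenarios (as in the REST gateway comment above) push toward one connection per user instead.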
[jira] [Commented] (HBASE-9336) Two css files raise release audit warning
[ https://issues.apache.org/jira/browse/HBASE-9336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13751642#comment-13751642 ] Hudson commented on HBASE-9336: --- SUCCESS: Integrated in HBase-TRUNK #4438 (See [https://builds.apache.org/job/HBase-TRUNK/4438/]) HBASE-9336 Two css files raise release audit warning (stack: rev 1517884) * /hbase/trunk/hbase-server/src/main/resources/hbase-webapps/static/css/bootstrap-theme.css * /hbase/trunk/hbase-server/src/main/resources/hbase-webapps/static/css/bootstrap-theme.min.css Two css files raise release audit warning - Key: HBASE-9336 URL: https://issues.apache.org/jira/browse/HBASE-9336 Project: HBase Issue Type: Task Reporter: Ted Yu Assignee: Nick Dimiduk Fix For: 0.98.0, 0.96.0 Attachments: 0001-HBASE-9336-Add-missing-license-headers-to-css-files.patch From https://builds.apache.org/job/PreCommit-HBASE-Build/6869/artifact/trunk/patchprocess/patchReleaseAuditProblems.txt : {code} !? /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/resources/hbase-webapps/static/css/bootstrap-theme.css !? /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/resources/hbase-webapps/static/css/bootstrap-theme.min.css Lines that start with ? in the release audit report indicate files that do not have an Apache license header. {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-9350) In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException
[ https://issues.apache.org/jira/browse/HBASE-9350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13751644#comment-13751644 ] Hudson commented on HBASE-9350: --- SUCCESS: Integrated in HBase-TRUNK #4438 (See [https://builds.apache.org/job/HBase-TRUNK/4438/]) HBASE-9350 In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException (stack: rev 1517874) * /hbase/trunk/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/MoveRegionsOfTableAction.java In ChaosMonkey, MoveRegionsOfTableAction throws UnknownRegionException -- Key: HBASE-9350 URL: https://issues.apache.org/jira/browse/HBASE-9350 Project: HBase Issue Type: Bug Components: test Affects Versions: 0.94.0 Reporter: chendihao Labels: test Fix For: 0.98.0, 0.95.0 Attachments: MoveRegionsOfTableAction.java.patch, MoveRegionsOfTableAction-v2.patch The first parameter in HBaseAdmin.move(final byte [] encodedRegionName, final byte [] destServerName) must be the encoded region name. Otherwise, it can throw UnknownRegionException and cause this action to fail. {code} encodedRegionName The encoded region name; i.e. the hash that makes up the region name suffix: e.g. if regionname is TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396., then the encoded region name is: 527db22f95c8a9e0116f0cc13c680396. {code} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
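The relationship between the full region name and the encoded suffix that move() expects can be sketched as below (illustrative string parsing only; in real code one would take the encoded name from the region's HRegionInfo rather than re-parse the full name):

```java
// Hypothetical sketch: extracting the encoded-name suffix from a full
// region name of the form table,startkey,timestamp.<encoded>.
public class EncodedRegionName {
    static String encodedName(String regionName) {
        int end = regionName.lastIndexOf('.');            // trailing dot
        int start = regionName.lastIndexOf('.', end - 1); // dot before the hash
        return regionName.substring(start + 1, end);
    }

    public static void main(String[] args) {
        String full =
            "TestTable,0094429456,1289497600452.527db22f95c8a9e0116f0cc13c680396.";
        System.out.println(encodedName(full));
    }
}
```

Passing the full name instead of this suffix is exactly the mistake the patch fixes: the master looks up regions by encoded name, so the full name is an unknown region to it.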