[jira] [Assigned] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs
[ https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Hofhansl reassigned HBASE-3996:
------------------------------------

    Assignee: Lars Hofhansl  (was: Eran Kutner)

> Support multiple tables and scanners as input to the mapper in map/reduce jobs
> ------------------------------------------------------------------------------
>
>                 Key: HBASE-3996
>                 URL: https://issues.apache.org/jira/browse/HBASE-3996
>             Project: HBase
>          Issue Type: Improvement
>          Components: mapreduce
>            Reporter: Eran Kutner
>            Assignee: Lars Hofhansl
>             Fix For: 0.94.3, 0.96.0
>
>         Attachments: 3996-v2.txt, 3996-v3.txt, 3996-v4.txt, 3996-v5.txt, 3996-v6.txt, 3996-v7.txt, HBase-3996.patch
>
> It seems that in many cases feeding data from multiple tables or multiple scanners on a single table can save a lot of time when running map/reduce jobs. I propose a new MultiTableInputFormat class that would allow doing this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
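The core idea of the issue above - one MR job fed by several (table, scan) pairs - can be illustrated with a small sketch. This is not the HBASE-3996 patch; the `TableScan` class and `getSplits` method below are hypothetical stand-ins showing how an input format might derive one split per configured scan, so mappers read from multiple tables or scan ranges in parallel:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch (not the actual HBASE-3996 patch): an input format
// configured with several (table, scan-range) pairs can derive one input
// split per pair, letting a single map/reduce job read from multiple
// tables and scanners at once.
public class MultiScanSplitter {

    // Minimal stand-in for one configured scan over one table.
    public static final class TableScan {
        final String table;
        final String startRow;
        final String stopRow;
        public TableScan(String table, String startRow, String stopRow) {
            this.table = table;
            this.startRow = startRow;
            this.stopRow = stopRow;
        }
    }

    // One split per configured scan; a real implementation would further
    // split each scan along region boundaries for locality.
    public static List<String> getSplits(List<TableScan> scans) {
        List<String> splits = new ArrayList<>();
        for (TableScan s : scans) {
            splits.add(s.table + "[" + s.startRow + "," + s.stopRow + ")");
        }
        return splits;
    }
}
```

A real MultiTableInputFormat would additionally serialize the scans into the job configuration; this sketch only shows the split-per-scan shape of the idea.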
[jira] [Commented] (HBASE-3996) Support multiple tables and scanners as input to the mapper in map/reduce jobs
[ https://issues.apache.org/jira/browse/HBASE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481195#comment-13481195 ]

Lars Hofhansl commented on HBASE-3996:
--------------------------------------

I'm going to see if I can finish this in the next few days.
[jira] [Commented] (HBASE-5257) Allow filter to be evaluated after version handling
[ https://issues.apache.org/jira/browse/HBASE-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481276#comment-13481276 ]

Varun Sharma commented on HBASE-5257:
-------------------------------------

Currently, ColumnCountGetFilter and ColumnPaginationFilter suffer from this issue - they always undercount when there are multiple versions of a cell (even when max versions of the column family is set to 1; I think this is because the old versions exist until compaction happens).

I looked at the ScanQueryMatcher/StoreScanner/ColumnTracker code, and there is one other plausible approach to resolving this. Currently, if a filter wants to skip over a KeyValue pair, it has two options: skip to the next KeyValue, which could be in the same column (SKIP), or skip to the next column (SEEK_NEXT_COL). Although we give filters these two ways to really skip when they exclude a value, we don't when they include one - INCLUDE always causes a seek to the next KeyValue. That probably makes sense for the ColumnTracker, since for column tracking we never want to seek across columns after an INCLUDE, but for filters we probably want symmetry between including and excluding KeyValue pairs.

So, I was proposing something like:
1) Introduce INCLUDE_AND_SEEK_NEXT_COL to Filter.ReturnCode
2) Introduce INCLUDE_AND_SEEK_NEXT_COL to ScanQueryMatcher.MatchCode
3) Modify StoreScanner accordingly to seek to the next column after the include, and link the above two types in the match() function
4) Finally modify ColumnPaginationFilter to return SEEK_NEXT_COL,INCLUDE_AND_SEEK_NEXT_COL instead of SKIP,INCLUDE_AND_SEEK_NEXT_COL respectively. Similarly for ColumnCountGetFilter

This might be a more direct way of resolving the issue, and it would avoid sandwiching the column tracker between two layers of filters.

What do you think, Lars?

Varun

> Allow filter to be evaluated after version handling
> ---------------------------------------------------
>
>                 Key: HBASE-5257
>                 URL: https://issues.apache.org/jira/browse/HBASE-5257
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Lars Hofhansl
>
> There are various use cases and filter types where evaluating the filter before versions are handled either does not make sense, or makes filter handling more complicated. Also see this comment in ScanQueryMatcher:
> {code}
> /**
>  * Filters should be checked before checking column trackers. If we do
>  * otherwise, as was previously being done, ColumnTracker may increment its
>  * counter for even that KV which may be discarded later on by Filter. This
>  * would lead to incorrect results in certain cases.
>  */
> {code}
> So we had Filters after the column trackers (which do the version checking), and then moved it. It should be at the discretion of the Filter. We could either add a new method to FilterBase (maybe excludeVersions() or something), or have a new Filter wrapper (like WhileMatchFilter) that should only be used as the outermost filter and indicates the same (maybe ExcludeVersionsFilter). See latest comments on HBASE-5229 for motivation.
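The undercounting Varun describes can be shown with a toy model. This is not HBase code: the lists of strings below stand in for a store's KeyValues ("c1v2" = column c1, version 2), and the two methods contrast counting per KeyValue (the behaviour he reports) with counting one hit per distinct column (what a seek-to-next-column return code would enable):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Toy model of the undercount discussed in HBASE-5257. Old versions of a
// cell remain in the store until compaction, so a filter that counts every
// KeyValue it sees exhausts its limit on stale versions.
public class VersionCountDemo {

    // Distinct columns returned when the filter spends its limit on raw
    // KeyValues, versions included (the pre-fix behaviour).
    public static int columnsReturnedPerKeyValue(List<String> kvs, int limit) {
        Set<String> columns = new LinkedHashSet<>();
        int seen = 0;
        for (String kv : kvs) {
            if (seen++ >= limit) break;
            columns.add(kv.substring(0, kv.indexOf('v'))); // column part only
        }
        return columns.size();
    }

    // Distinct columns returned when each column counts once, i.e. the
    // filter seeks past remaining versions after the first include.
    public static int columnsReturnedPerColumn(List<String> kvs, int limit) {
        Set<String> columns = new LinkedHashSet<>();
        for (String kv : kvs) {
            if (columns.size() >= limit) break;
            columns.add(kv.substring(0, kv.indexOf('v')));
        }
        return columns.size();
    }
}
```

With two versions per column and a limit of 3, the per-KeyValue count returns only 2 columns while the per-column count returns the requested 3.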
[jira] [Commented] (HBASE-5257) Allow filter to be evaluated after version handling
[ https://issues.apache.org/jira/browse/HBASE-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481278#comment-13481278 ]

Varun Sharma commented on HBASE-5257:
-------------------------------------

Hey guys,

Sorry, I did not see the work log and the earlier proposal(s) - I am new to JIRA. Regarding my thought above: it is closer to the excludeVersions() option mentioned in the issue, since versions are discarded in this approach.

Btw, we would also need one change to FilterList.filterKeyValue(), where INCLUDE_AND_SEEK_NEXT_COL would override INCLUDE: if one filter returned INCLUDE and another INCLUDE_AND_SEEK_NEXT_COL, the result would be INCLUDE_AND_SEEK_NEXT_COL. This would mean we can mix ColumnPaginationFilter/ColumnCountGetFilter with other filters, but we would only get back the latest column versions. I doubt there is a compelling use case for counting/paginating versions... my 2 cents.

Thanks!

Varun
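The FilterList override rule in the comment above can be sketched as follows. The enum and merge function are local stand-ins, not HBase's Filter.ReturnCode (INCLUDE_AND_SEEK_NEXT_COL did not exist at the time of this discussion), and the precedence order is my reading of the proposal, not committed behaviour:

```java
// Hypothetical sketch of the FilterList.filterKeyValue() merge rule
// proposed above: for MUST_PASS_ALL semantics, any exclusion wins, and
// among inclusions the proposed INCLUDE_AND_SEEK_NEXT_COL beats a plain
// INCLUDE. Local stand-in enum, not HBase's Filter.ReturnCode.
public class ReturnCodeMerge {

    public enum Code { INCLUDE, INCLUDE_AND_SEEK_NEXT_COL, SKIP, SEEK_NEXT_COL }

    public static Code mergeAll(Code a, Code b) {
        // Exclusions dominate inclusions; the stronger seek hint wins.
        if (a == Code.SEEK_NEXT_COL || b == Code.SEEK_NEXT_COL) {
            return Code.SEEK_NEXT_COL;
        }
        if (a == Code.SKIP || b == Code.SKIP) {
            return Code.SKIP;
        }
        if (a == Code.INCLUDE_AND_SEEK_NEXT_COL
                || b == Code.INCLUDE_AND_SEEK_NEXT_COL) {
            return Code.INCLUDE_AND_SEEK_NEXT_COL;
        }
        return Code.INCLUDE;
    }
}
```

This is exactly the case Varun calls out: INCLUDE combined with INCLUDE_AND_SEEK_NEXT_COL yields INCLUDE_AND_SEEK_NEXT_COL, so pagination filters compose with other filters but only the latest versions come back.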
[jira] [Commented] (HBASE-6945) Compilation errors when using non-Sun JDKs to build HBase-0.94
[ https://issues.apache.org/jira/browse/HBASE-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481342#comment-13481342 ]

Kumar Ravi commented on HBASE-6945:
-----------------------------------

Following is a comment I added in HBASE-6965. Although related, I believe these are two separate issues. HBASE-6965 introduces a new utility bean class; this issue uses the new bean class in a JUnit test case. Please let me know if you have further questions. Thanks!

--

bq. What JVMs and OSes did you test on, out of interest? How many different vendor and OS strings did you test your patch against?

This was tested on Sun (Oracle) JDK 6 (1.6.0_34), OpenJDK 6 and IBM Java 7, all on RHEL 6.3.

bq. It seems a bit hacky looking for 'IBM' in the vendor string to figure out whether it is an IBM JVM or not. Are you sure it's always upper case? Any other property you could check to be sure it is the JVM you think? Does IBM only make a Linux JDK?

We borrowed this idea from the code here in Hadoop: http://svn.apache.org/repos/asf/hadoop/common/tags/release-1.0.3/src/core/org/apache/hadoop/security/UserGroupInformation.java. See methods getOSLoginModuleName() - line 262 and getOsPrincipalClass() - line 276.

> Compilation errors when using non-Sun JDKs to build HBase-0.94
> --------------------------------------------------------------
>
>                 Key: HBASE-6945
>                 URL: https://issues.apache.org/jira/browse/HBASE-6945
>             Project: HBase
>          Issue Type: Bug
>          Components: build
>    Affects Versions: 0.94.1
>         Environment: RHEL 6.3, IBM Java 7
>            Reporter: Kumar Ravi
>            Assignee: Kumar Ravi
>              Labels: patch
>             Fix For: 0.94.3
>
>         Attachments: ResourceCheckerJUnitListener_HBASE_6945-trunk.patch
>
> When using IBM Java 7 to build HBase-0.94.1, the following compilation error is seen:
> {code}
> [ERROR] COMPILATION ERROR :
> [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[23,25] error: package com.sun.management does not exist
> [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[46,25] error: cannot find symbol
> [ERROR] symbol: class UnixOperatingSystemMXBean location: class ResourceAnalyzer
> [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[75,29] error: cannot find symbol
> [ERROR] symbol: class UnixOperatingSystemMXBean location: class ResourceAnalyzer
> [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[76,23] error: cannot find symbol
> [INFO] 4 errors
> [INFO] BUILD FAILURE
> {code}
> I have a patch available which should work for all JDKs including Sun's. I am in the process of testing this patch. Preliminary tests indicate the build is working fine with it; I will post the patch when I am done testing.
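The reviewer's question above - "are you sure it's always upper case?" - suggests a more defensive check than a plain substring match. A minimal sketch of that idea, normalizing case and consulting both standard vendor properties (the class and method names are illustrative, not from the patch):

```java
import java.util.Locale;

// Sketch of a defensive JVM-vendor check of the kind discussed above
// (the idea borrowed from Hadoop's UserGroupInformation). Normalizing
// case and checking both "java.vendor" and "java.vm.vendor" guards
// against casing or property differences across JVM releases.
public class JvmVendor {

    // Testable core: decides from explicit property values.
    public static boolean isIbmJvm(String vendor, String vmVendor) {
        String combined = (vendor == null ? "" : vendor) + " "
                        + (vmVendor == null ? "" : vmVendor);
        return combined.toLowerCase(Locale.ROOT).contains("ibm");
    }

    // Convenience wrapper reading the running JVM's properties.
    public static boolean isIbmJvm() {
        return isIbmJvm(System.getProperty("java.vendor"),
                        System.getProperty("java.vm.vendor"));
    }
}
```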
[jira] [Created] (HBASE-7024) TableMapReduceUtil.initTableMapperJob unnecessarily limits the types of outputKeyClass and outputValueClass
Dave Beech created HBASE-7024:
---------------------------------

             Summary: TableMapReduceUtil.initTableMapperJob unnecessarily limits the types of outputKeyClass and outputValueClass
                 Key: HBASE-7024
                 URL: https://issues.apache.org/jira/browse/HBASE-7024
             Project: HBase
          Issue Type: Bug
          Components: mapreduce
            Reporter: Dave Beech
            Priority: Minor

The various initTableMapperJob methods in TableMapReduceUtil take outputKeyClass and outputValueClass parameters which need to extend WritableComparable and Writable respectively. Because of this, it is not convenient to use an alternative serialization like Avro (I wanted to set these parameters to AvroKey and AvroValue). The methods in the MapReduce API to set map output key and value types do not impose this restriction, so is there a reason to do it here?
[jira] [Updated] (HBASE-6942) Endpoint implementation for bulk delete rows
[ https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anoop Sam John updated HBASE-6942:
----------------------------------

    Attachment: HBASE-6942_Trunk.patch

Patch for trunk

> Endpoint implementation for bulk delete rows
> --------------------------------------------
>
>                 Key: HBASE-6942
>                 URL: https://issues.apache.org/jira/browse/HBASE-6942
>             Project: HBase
>          Issue Type: Improvement
>          Components: Coprocessors, Performance
>            Reporter: Anoop Sam John
>            Assignee: Anoop Sam John
>             Fix For: 0.94.3, 0.96.0
>
>         Attachments: HBASE-6942_DeleteTemplate.patch, HBASE-6942.patch, HBASE-6942_Trunk.patch, HBASE-6942_V2.patch, HBASE-6942_V3.patch, HBASE-6942_V4.patch, HBASE-6942_V5.patch, HBASE-6942_V6.patch, HBASE-6942_V7.patch
>
> We can provide an endpoint implementation for doing a bulk deletion of rows (based on a scan) at the server side. This can reduce the time taken for such an operation, as right now the client needs to scan the rows and issue delete(s) using the row keys - a query like "delete from table1 where...".
[jira] [Updated] (HBASE-6942) Endpoint implementation for bulk delete rows
[ https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anoop Sam John updated HBASE-6942:
----------------------------------

    Status: Patch Available  (was: Open)
[jira] [Commented] (HBASE-5257) Allow filter to be evaluated after version handling
[ https://issues.apache.org/jira/browse/HBASE-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481462#comment-13481462 ]

Ted Yu commented on HBASE-5257:
-------------------------------

bq. 4) Finally modify ColumnPaginationFilter to return SEEK_NEXT_COL,INCLUDE_AND_SEEK_NEXT_COL instead of SKIP,INCLUDE_AND_SEEK_NEXT_COL respectively. Similarly for ColumnCountGetFilter

I guess you meant this:

4) Finally modify ColumnPaginationFilter to return SEEK_NEXT_COL,INCLUDE_AND_SEEK_NEXT_COL instead of SKIP,SEEK_NEXT_COL respectively. Similarly for ColumnCountGetFilter.

@Varun: If you can provide a patch as you outlined above, that would be nice.
[jira] [Commented] (HBASE-6728) [89-fb] prevent OOM possibility due to per connection responseQueue being unbounded
[ https://issues.apache.org/jira/browse/HBASE-6728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481464#comment-13481464 ]

Ted Yu commented on HBASE-6728:
-------------------------------

Plan to integrate to trunk this afternoon.

> [89-fb] prevent OOM possibility due to per connection responseQueue being unbounded
> -----------------------------------------------------------------------------------
>
>                 Key: HBASE-6728
>                 URL: https://issues.apache.org/jira/browse/HBASE-6728
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Kannan Muthukkaruppan
>            Assignee: Michal Gregorczyk
>             Fix For: 0.96.0
>
>         Attachments: 6728-trunk.txt
>
> The per-connection responseQueue is an unbounded queue. The request handler threads today try to send the response inline, but if things start to back up, the response is sent via a per-connection responder thread. This intermediate queue, because it has no bounds, can be another source of OOMs. [Have not looked at this issue in trunk, so it may or may not be applicable there.]
[jira] [Commented] (HBASE-6787) Convert RowProcessorProtocol to protocol buffer service
[ https://issues.apache.org/jira/browse/HBASE-6787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481469#comment-13481469 ]

Devaraj Das commented on HBASE-6787:
------------------------------------

Did my response to the questions make sense, [~stack]? Anything else pending from my end on this one?

> Convert RowProcessorProtocol to protocol buffer service
> -------------------------------------------------------
>
>                 Key: HBASE-6787
>                 URL: https://issues.apache.org/jira/browse/HBASE-6787
>             Project: HBase
>          Issue Type: Sub-task
>          Components: Coprocessors
>            Reporter: Gary Helmling
>            Assignee: Devaraj Das
>             Fix For: 0.96.0
>
>         Attachments: 6787-1.patch, 6787-2.patch
>
> With coprocessor endpoints now exposed as protobuf defined services, we should convert over all of our built-in endpoints to PB services.
[jira] [Commented] (HBASE-6942) Endpoint implementation for bulk delete rows
[ https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481471#comment-13481471 ]

Hadoop QA commented on HBASE-6942:
----------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12550303/HBASE-6942_Trunk.patch
against trunk revision .

    {color:green}+1 @author{color}. The patch does not contain any @author tags.

    {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests.

    {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile.

    {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 82 warning messages.

    {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

    {color:red}-1 findbugs{color}. The patch appears to introduce 7 new Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}. The patch failed these unit tests:
        org.apache.hadoop.hbase.regionserver.TestSplitTransaction

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3110//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3110//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3110//console

This message is automatically generated.
[jira] [Commented] (HBASE-6966) Compressed RPCs for HBase (HBASE-5355) port to trunk
[ https://issues.apache.org/jira/browse/HBASE-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481478#comment-13481478 ]

Devaraj Das commented on HBASE-6966:
------------------------------------

[~enis] I'll take a look at incorporating your comment #1. I agree that comment #2 could be handled as a follow-up jira.

[~lhofhansl] I was thinking of running a table operation 1000 times or so, and checking the time taken with/without the patch. Makes sense? Anything else?

> Compressed RPCs for HBase (HBASE-5355) port to trunk
> ----------------------------------------------------
>
>                 Key: HBASE-6966
>                 URL: https://issues.apache.org/jira/browse/HBASE-6966
>             Project: HBase
>          Issue Type: Improvement
>          Components: IPC/RPC
>            Reporter: Devaraj Das
>            Assignee: Devaraj Das
>             Fix For: 0.96.0
>
>         Attachments: 6966-1.patch, 6966-v2.txt
>
> This jira will address the port of the compressed RPC implementation to trunk. I am expecting the patch to be significantly different due to the PB stuff in trunk, and hence filed a separate jira.
[jira] [Commented] (HBASE-6998) Uncaught exception in main() makes the HMaster/HRegionServer process suspend
[ https://issues.apache.org/jira/browse/HBASE-6998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481482#comment-13481482 ]

Ted Yu commented on HBASE-6998:
-------------------------------

Integrated to trunk. Thanks for the patch, Liang. Thanks for the review, Lars.

> Uncaught exception in main() makes the HMaster/HRegionServer process suspend
> ----------------------------------------------------------------------------
>
>                 Key: HBASE-6998
>                 URL: https://issues.apache.org/jira/browse/HBASE-6998
>             Project: HBase
>          Issue Type: Bug
>          Components: master, regionserver
>    Affects Versions: 0.94.2, 0.96.0
>         Environment: CentOS 6.2 + CDH4.1 HDFS + HBase 0.94.2
>            Reporter: liang xie
>            Assignee: liang xie
>         Attachments: HBASE-6998.patch
>
> I am trying the HDFS QJM feature in our test env. After a misconfiguration, I found the HMaster/HRegionServer process stays up even though the main thread is dead. Here is the stack trace:
> {code}
> Exception in thread "main" java.net.UnknownHostException: unknown host: cluster1
>     at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:214)
>     at org.apache.hadoop.ipc.Client.getConnection(Client.java:1196)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1050)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
>     at $Proxy8.getProtocolVersion(Unknown Source)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
>     at org.apache.hadoop.hbase.regionserver.HRegionServer.startRegionServer(HRegionServer.java:3647)
>     at org.apache.hadoop.hbase.regionserver.HRegionServer.startRegionServer(HRegionServer.java:3631)
>     at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:61)
>     at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:75)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>     at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76)
>     at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:3691)
> {code}
> Then I need to kill the process manually to clean up each time, which is annoying. With the attached patch applied, the process exits as expected, and I am happy again :)
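The failure mode in this report - the JVM stays alive because non-daemon threads spawned before the exception keep running - is commonly fixed by a top-level catch-all that forces an exit status. A minimal sketch of that pattern (illustrative only, not the actual HBASE-6998 patch; the class and method names are hypothetical):

```java
// Illustrative sketch of the fix pattern for HBASE-6998: if main() throws
// after non-daemon threads have started, the JVM does not exit on its own.
// Wrapping startup in a catch-all and returning a process status that the
// real main() would pass to System.exit() ensures a half-started server
// goes down instead of suspending.
public class ServerMainSketch {

    // Returns the exit status main() would use: 0 on success, 1 when
    // startup threw. A real main() would call System.exit(status), which
    // terminates lingering non-daemon threads.
    public static int runGuarded(Runnable startup) {
        try {
            startup.run();
            return 0;
        } catch (Throwable t) {   // catch Throwable: even Errors must not suspend us
            t.printStackTrace();
            return 1;
        }
    }

    public static void main(String[] args) {
        System.exit(runGuarded(() -> {
            // server startup would go here
        }));
    }
}
```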
[jira] [Commented] (HBASE-6942) Endpoint implementation for bulk delete rows
[ https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481491#comment-13481491 ]

Ted Yu commented on HBASE-6942:
-------------------------------

Please remove the trailing CR (^M) in the patch. The trunk patch looks good overall.

{code}
+public class BulkDeleteEndpoint extends BulkDeleteService implements CoprocessorService,^M
+Coprocessor {^M
{code}
Please add javadoc to the above class.

{code}
+scanner.close();^M
+} catch (IOException ioe) {^M
+LOG.debug(ioe);^M
{code}
Should LOG.error() be used above?

{code}
+byte[] versionsDeleted = deleteWithLockArr[i].getFirst().getAttribute(^M
+NO_OF_VERSIONS_TO_DELETE);^M
+if (versionsDeleted != null) {^M
+totalVersionsDeleted += Bytes.toInt(versionsDeleted);^M
+}^M
{code}
Should the above be enclosed in an if (deleteType == DeleteType.VERSION) block?

{code}
+h = h + 13 * Bytes.hashCode(this.family);^M
+h = h + 13 * Bytes.hashCode(this.qualifier);^M
{code}
Would multiplication by 13 result in overflow? Take a look at the following code from HColumnDescriptor:
{code}
result ^= Byte.valueOf(COLUMN_DESCRIPTOR_VERSION).hashCode();
result ^= values.hashCode();
{code}

{code}
+ * Copyright 2011 The Apache Software Foundation^M
{code}
The above line is not needed.
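On Ted's overflow question above: in Java, int multiplication overflows by silently wrapping (two's complement), so a 13-multiplier hash never throws - it just mixes bits - while the XOR style from HColumnDescriptor cannot overflow at all. A small sketch contrasting the two forms (class and method names are illustrative, using java.util.Arrays.hashCode in place of HBase's Bytes.hashCode):

```java
import java.util.Arrays;

// Contrast of the two hashCode styles discussed above. Multiplication can
// wrap around Integer.MAX_VALUE, which is harmless for a hash code; XOR
// avoids the question entirely. Illustrative sketch, not HBase code.
public class HashSketch {

    public static int multiplicativeHash(byte[] family, byte[] qualifier) {
        int h = 17;
        h = h + 13 * Arrays.hashCode(family);    // may wrap; still a valid int
        h = h + 13 * Arrays.hashCode(qualifier);
        return h;
    }

    public static int xorHash(byte[] family, byte[] qualifier) {
        int h = 0;
        h ^= Arrays.hashCode(family);            // XOR cannot overflow
        h ^= Arrays.hashCode(qualifier);
        return h;
    }
}
```

Both are deterministic and legal; the XOR form just sidesteps the wraparound question, which is likely why the reviewer points to it.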
[jira] [Updated] (HBASE-6951) Allow the master info server to be started in a read only mode.
[ https://issues.apache.org/jira/browse/HBASE-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Elliott Clark updated HBASE-6951:
---------------------------------

      Resolution: Fixed
   Fix Version/s: 0.96.0
                  0.94.3
                  0.92.3
    Hadoop Flags: Reviewed
          Status: Resolved  (was: Patch Available)

> Allow the master info server to be started in a read only mode.
> ---------------------------------------------------------------
>
>                 Key: HBASE-6951
>                 URL: https://issues.apache.org/jira/browse/HBASE-6951
>             Project: HBase
>          Issue Type: Improvement
>          Components: UI
>            Reporter: Elliott Clark
>            Assignee: Elliott Clark
>            Priority: Critical
>              Labels: noob
>             Fix For: 0.92.3, 0.94.3, 0.96.0
>
>         Attachments: HBASE-6951-092-0.patch, HBASE-6951-094-0.patch, HBASE-6951-trunk-0.patch
>
> There are some cases where a user could want the web UI to be accessible but might not want the split and compact functionality to be usable. Allowing the web UI to start in a readOnly mode would be good.
[jira] [Commented] (HBASE-6597) Block Encoding Size Estimation
[ https://issues.apache.org/jira/browse/HBASE-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481530#comment-13481530 ]

Phabricator commented on HBASE-6597:
------------------------------------

Kannan has commented on the revision "[jira] [HBASE-6597] [89-fb] Incremental data block encoding".

INLINE COMMENTS
  src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java:29 Regarding the second bullet (in my comment), I suppose it is ok to leave this as is, as it does simplify the calling logic a little bit. We should just add in comments here that:
  * this is used for non-encoded blocks, and
  * it keeps blocks in the old format (without the DBE-specific headers).
  src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java:35 "use integer compression for key, value and prefix (7-bit encoding)" -- should read "use integer compression for key, value and prefix lengths (7-bit encoding)"
  src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java:34 ditto (as other comment): s/prefix/prefix lengths

REVISION DETAIL
  https://reviews.facebook.net/D5895

To: Kannan, Karthik, Liyin, aaiyer, avf, JIRA, mbautin
Cc: tedyu

> Block Encoding Size Estimation
> ------------------------------
>
>                 Key: HBASE-6597
>                 URL: https://issues.apache.org/jira/browse/HBASE-6597
>             Project: HBase
>          Issue Type: Improvement
>          Components: io
>    Affects Versions: 0.89-fb
>            Reporter: Brian Nixon
>            Assignee: Mikhail Bautin
>            Priority: Minor
>         Attachments: D5895.1.patch, D5895.2.patch, D5895.3.patch, D5895.4.patch
>
> Block boundaries as created by current writers are determined by the size of the unencoded data. However, blocks in memory are kept encoded. By using an estimate of the encoded size of the block, we can get greater consistency in size.
[jira] [Commented] (HBASE-6597) Block Encoding Size Estimation
[ https://issues.apache.org/jira/browse/HBASE-6597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481531#comment-13481531 ]

Phabricator commented on HBASE-6597:
------------------------------------

Kannan has commented on the revision "[jira] [HBASE-6597] [89-fb] Incremental data block encoding".

Mikhail - looks great. Pending comments are very trivial.

REVISION DETAIL
  https://reviews.facebook.net/D5895

To: Kannan, Karthik, Liyin, aaiyer, avf, JIRA, mbautin
Cc: tedyu
[jira] [Commented] (HBASE-5257) Allow filter to be evaluated after version handling
[ https://issues.apache.org/jira/browse/HBASE-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481535#comment-13481535 ] Varun Sharma commented on HBASE-5257: - Sure, I will put together a patch and submit it as soon as it's ready. Thanks! Allow filter to be evaluated after version handling --- Key: HBASE-5257 URL: https://issues.apache.org/jira/browse/HBASE-5257 Project: HBase Issue Type: Improvement Reporter: Lars Hofhansl There are various use cases and filter types where evaluating the filter before versions are handled either does not make sense or makes filter handling more complicated. Also see this comment in ScanQueryMatcher: {code} /** * Filters should be checked before checking column trackers. If we do * otherwise, as was previously being done, ColumnTracker may increment its * counter for even that KV which may be discarded later on by Filter. This * would lead to incorrect results in certain cases. */ {code} So we had Filters after the column trackers (which do the version checking), and then moved it. This should be at the discretion of the Filter. We could either add a new method to FilterBase (maybe excludeVersions() or something), or have a new Filter wrapper (like WhileMatchFilter) that should only be used as the outermost filter and indicates the same (maybe ExcludeVersionsFilter). See latest comments on HBASE-5229 for motivation. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
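The ordering issue the description refers to can be shown with a toy model. This is a deliberately simplified sketch, not the real ScanQueryMatcher or ColumnTracker logic; the names and data are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: with max-versions = 1, whether the filter runs before or after
// the version counter changes which cell (if any) survives.
public class FilterOrderDemo {
    // Two versions of the same cell, newest first, as a scanner sees them.
    static final String[] VERSIONS = {"v2-excluded", "v1-kept"};

    // A filter that drops any value containing "excluded".
    static boolean filterAccepts(String value) {
        return !value.contains("excluded");
    }

    // Filter BEFORE version counting: the excluded newest version does not
    // consume the single version slot, so the older version is returned.
    static List<String> filterFirst(int maxVersions) {
        List<String> out = new ArrayList<>();
        int seen = 0;
        for (String v : VERSIONS) {
            if (!filterAccepts(v)) continue;       // filter runs first
            if (seen++ < maxVersions) out.add(v);  // then the version counter
        }
        return out;
    }

    // Version counting BEFORE the filter: the newest version consumes the
    // slot even though the filter later discards it, so nothing is returned.
    static List<String> versionsFirst(int maxVersions) {
        List<String> out = new ArrayList<>();
        int seen = 0;
        for (String v : VERSIONS) {
            if (seen++ >= maxVersions) continue;   // version counter runs first
            if (filterAccepts(v)) out.add(v);      // then the filter
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(filterFirst(1));    // [v1-kept]
        System.out.println(versionsFirst(1));  // []
    }
}
```

Neither ordering is universally right, which is why the issue proposes leaving the choice to the Filter itself.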
[jira] [Commented] (HBASE-6998) Uncaught exception in main() makes the HMaster/HRegionServer process suspend
[ https://issues.apache.org/jira/browse/HBASE-6998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481541#comment-13481541 ] Hudson commented on HBASE-6998: --- Integrated in HBase-TRUNK #3470 (See [https://builds.apache.org/job/HBase-TRUNK/3470/]) HBASE-6998 Uncaught exception in main() makes the HMaster/HRegionServer process suspend (Liang Xie) (Revision 1400940) Result = FAILURE tedyu : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/ServerCommandLine.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/OOMERegionServer.java Uncaught exception in main() makes the HMaster/HRegionServer process suspend Key: HBASE-6998 URL: https://issues.apache.org/jira/browse/HBASE-6998 Project: HBase Issue Type: Bug Components: master, regionserver Affects Versions: 0.94.2, 0.96.0 Environment: CentOS6.2 + CDH4.1 HDFS + hbase0.94.2 Reporter: liang xie Assignee: liang xie Attachments: HBASE-6998.patch I am trying the HDFS QJM feature in our test env. After a misconfig, I found the HMaster/HRegionServer process stays up even though the main thread is dead.
Here is the stack trace: Exception in thread "main" java.net.UnknownHostException: unknown host: cluster1 at org.apache.hadoop.ipc.Client$Connection.init(Client.java:214) at org.apache.hadoop.ipc.Client.getConnection(Client.java:1196) at org.apache.hadoop.ipc.Client.call(Client.java:1050) at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225) at $Proxy8.getProtocolVersion(Unknown Source) at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396) at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379) at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119) at org.apache.hadoop.hdfs.DFSClient.init(DFSClient.java:238) at org.apache.hadoop.hdfs.DFSClient.init(DFSClient.java:203) at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123) at org.apache.hadoop.hbase.regionserver.HRegionServer.startRegionServer(HRegionServer.java:3647) at org.apache.hadoop.hbase.regionserver.HRegionServer.startRegionServer(HRegionServer.java:3631) at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:61) at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:75) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:76) at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:3691) Then I need to kill the process manually to clean up each time, which is annoying. After applying the attached patch, the process exits as expected, and I am happy again :) -- This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
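Per the changed ServerCommandLine.java above, the fix boils down to catching Throwable around the main body and forcing the process down, so a startup failure cannot leave non-daemon threads keeping a half-initialized process alive. A minimal sketch of that pattern, with a hypothetical runGuarded helper standing in for the real doMain change:

```java
// Sketch of the fix pattern (hedged: the actual change is in
// ServerCommandLine/HMaster/HRegionServer): without a top-level catch, an
// uncaught exception only kills the main thread, while other non-daemon
// threads keep the JVM alive in a suspended, half-started state.
public class DoMainSketch {
    // Returns the exit code instead of calling System.exit directly,
    // purely so the pattern is easy to exercise.
    static int runGuarded(Runnable mainBody) {
        try {
            mainBody.run();
            return 0;
        } catch (Throwable t) {
            // Log the failure, then signal a non-zero exit so the caller
            // can bring the whole JVM down.
            System.err.println("Failed to start: " + t);
            return 1;
        }
    }

    public static void main(String[] args) {
        int rc = runGuarded(() -> {
            throw new RuntimeException("unknown host: cluster1");
        });
        System.out.println("exit code: " + rc); // exit code: 1
        // A real server would call System.exit(rc) here when rc != 0.
    }
}
```

Catching Throwable (not just Exception) matters here: startup can also die from Errors such as NoClassDefFoundError, and those must abort the process too.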
[jira] [Updated] (HBASE-6998) Uncaught exception in main() makes the HMaster/HRegionServer process suspend
[ https://issues.apache.org/jira/browse/HBASE-6998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-6998: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available)
[jira] [Updated] (HBASE-6998) Uncaught exception in main() makes the HMaster/HRegionServer process suspend
[ https://issues.apache.org/jira/browse/HBASE-6998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-6998: -- Fix Version/s: 0.96.0
[jira] [Commented] (HBASE-7008) Set scanner caching to a better default
[ https://issues.apache.org/jira/browse/HBASE-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481549#comment-13481549 ] Jean-Daniel Cryans commented on HBASE-7008: --- Could a bigger default with a hbase.client.scanner.max.result.size that doesn't default to Long.MAX_VALUE be more appropriate? Set scanner caching to a better default --- Key: HBASE-7008 URL: https://issues.apache.org/jira/browse/HBASE-7008 Project: HBase Issue Type: Bug Components: Client Reporter: liang xie Assignee: liang xie Fix For: 0.94.3, 0.96.0 Attachments: 7008-0.94.txt, 7008-0.94-v2.txt, 7008-v3.txt, 7008-v4.txt, HBASE-7008.patch, HBASE-7008-v2.patch per http://search-hadoop.com/m/qaRu9iM2f02/Set+scanner+caching+to+a+better+default%253Fsubj=Set+scanner+caching+to+a+better+default+ let's set to 100 by default -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7023) Forward-port HBASE-6727 size-based HBaseServer callQueue throttle from 0.89fb branch
[ https://issues.apache.org/jira/browse/HBASE-7023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481561#comment-13481561 ] Jean-Daniel Cryans commented on HBASE-7023: --- bq. It's nicer than what we have in trunk where we just count queue items. We stopped doing that since 0.94.0 with HBASE-5190. Forward-port HBASE-6727 size-based HBaseServer callQueue throttle from 0.89fb branch Key: HBASE-7023 URL: https://issues.apache.org/jira/browse/HBASE-7023 Project: HBase Issue Type: Improvement Components: IPC/RPC Reporter: stack Assignee: Ted Yu Fix For: 0.96.0 Attachments: 6727-fb.txt Forward-port the size-based throttle that is out in the 0.89fb branch. It's nicer than what we have in trunk where we just count queue items. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7008) Set scanner caching to a better default
[ https://issues.apache.org/jira/browse/HBASE-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481563#comment-13481563 ] stack commented on HBASE-7008: -- Chatting w/ LarsH, we should be doing size-based rather than count of items. This'll do for now. +1 on patch (even if it's super conservative -- Thanks Lars).
[jira] [Commented] (HBASE-6951) Allow the master info server to be started in a read only mode.
[ https://issues.apache.org/jira/browse/HBASE-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481570#comment-13481570 ] Hudson commented on HBASE-6951: --- Integrated in HBase-0.94 #545 (See [https://builds.apache.org/job/HBase-0.94/545/]) HBASE-6951 Allow the master info server to be started in a read only mode. (Revision 1400957) Result = FAILURE eclark : Files : * /hbase/branches/0.94/src/main/resources/hbase-webapps/master/table.jsp * /hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/TestInfoServers.java Allow the master info server to be started in a read only mode. --- Key: HBASE-6951 URL: https://issues.apache.org/jira/browse/HBASE-6951 Project: HBase Issue Type: Improvement Components: UI Reporter: Elliott Clark Assignee: Elliott Clark Priority: Critical Labels: noob Fix For: 0.92.3, 0.94.3, 0.96.0 Attachments: HBASE-6951-092-0.patch, HBASE-6951-094-0.patch, HBASE-6951-trunk-0.patch There are some cases that a user could want a web ui to be accessible but might not want the split and compact functionality to be usable. Allowing the web ui to start in a readOnly mode would be good. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6410) Move RegionServer Metrics to metrics2
[ https://issues.apache.org/jira/browse/HBASE-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-6410: - Status: Patch Available (was: Open) Move RegionServer Metrics to metrics2 - Key: HBASE-6410 URL: https://issues.apache.org/jira/browse/HBASE-6410 Project: HBase Issue Type: Sub-task Components: metrics Affects Versions: 0.96.0 Reporter: Elliott Clark Assignee: Elliott Clark Priority: Blocker Attachments: HBASE-6410-1.patch, HBASE-6410-2.patch, HBASE-6410-3.patch, HBASE-6410.patch Move RegionServer Metrics to metrics2 -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6410) Move RegionServer Metrics to metrics2
[ https://issues.apache.org/jira/browse/HBASE-6410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-6410: - Attachment: HBASE-6410-3.patch Rebased on trunk and added the blocked updates time metrics that were added earlier.
[jira] [Updated] (HBASE-6665) ROOT region should not be split even with META row as explicit split key
[ https://issues.apache.org/jira/browse/HBASE-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] rajeshbabu updated HBASE-6665: -- Summary: ROOT region should not be split even with META row as explicit split key (was: ROOT table is allowing to split the table.) ROOT region should not be split even with META row as explicit split key --- Key: HBASE-6665 URL: https://issues.apache.org/jira/browse/HBASE-6665 Project: HBase Issue Type: Bug Reporter: Y. SREENIVASULU REDDY Calling the split operation on the ROOT table with .META. as the split key splits the ROOT table into two regions, which are then assigned to some region server. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7008) Set scanner caching to a better default
[ https://issues.apache.org/jira/browse/HBASE-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481601#comment-13481601 ] Andrew Purtell commented on HBASE-7008: --- Sorry I'm late to the party. 10 is better than 1, but 100 seems better than that IMHO. Just $0.02.
[jira] [Commented] (HBASE-7008) Set scanner caching to a better default
[ https://issues.apache.org/jira/browse/HBASE-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481607#comment-13481607 ] Lars Hofhansl commented on HBASE-7008: -- I'd prefer 100 too. Let's make it 100 in trunk and 10 in 0.94... Just to be safe?
[jira] [Commented] (HBASE-6733) [0.92 UNIT TESTS] TestReplication.queueFailover occasionally fails [Part-2]
[ https://issues.apache.org/jira/browse/HBASE-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481612#comment-13481612 ] Sergey Shelukhin commented on HBASE-6733: - Should be ok in this one, better for git commit tracking. Thanks! [0.92 UNIT TESTS] TestReplication.queueFailover occasionally fails [Part-2] --- Key: HBASE-6733 URL: https://issues.apache.org/jira/browse/HBASE-6733 Project: HBase Issue Type: Bug Reporter: Devaraj Das Assignee: Devaraj Das Fix For: 0.96.0 Attachments: 6733-1.patch, 6733-2.patch, 6733-3.patch, HBASE-6733-0.94.patch The failure is in TestReplication.queueFailover (fails due to unreplicated rows). I have come across two problems: 1. The sleepMultiplier is not properly reset when the currentPath is changed (in ReplicationSource.java). 2. ReplicationExecutor sometime removes files to replicate from the queue too early, resulting in corresponding edits missing. Here the problem is due to the fact the log-file length that the replication executor finds is not the most updated one, and hence it doesn't read anything from there, and ultimately, when there is a log roll, the replication-queue gets a new entry, and the executor drops the old entry out of the queue. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-7008) Set scanner caching to a better default
[ https://issues.apache.org/jira/browse/HBASE-7008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481613#comment-13481613 ] stack commented on HBASE-7008: -- I can bend on the 100 since three folks say it. Just trying to be conservative (it doesn't suit me well I can tell). Sorry for making extra work around a small change.
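Jean-Daniel's suggestion above pairs a larger caching default with a bounded hbase.client.scanner.max.result.size. A back-of-the-envelope sketch of how the two caps would interact per scanner RPC (illustrative names and arithmetic, not the actual HBase client code):

```java
// Sketch: bound each scanner RPC by BOTH a row-count cap (caching) and a
// byte-size cap (max.result.size), so a big caching default can't blow up
// client or server memory on fat rows.
public class ScannerBatchSketch {
    // Rows fetched in one RPC: the smaller of the count cap and however many
    // average-sized rows fit in the byte budget.
    static int rowsPerRpc(int caching, long maxResultBytes, long avgRowBytes) {
        long byBytes = Math.max(1, maxResultBytes / avgRowBytes); // always make progress
        return (int) Math.min(caching, byBytes);
    }

    public static void main(String[] args) {
        // Skinny 200-byte rows: the count cap dominates, 100 rows per RPC.
        System.out.println(rowsPerRpc(100, 2 * 1024 * 1024, 200));
        // 1 MB rows: the size cap dominates, 2 rows per RPC instead of 100.
        System.out.println(rowsPerRpc(100, 2 * 1024 * 1024, 1024 * 1024));
    }
}
```

With a size bound in place, a caching default of 100 is cheap for small rows yet safe for large ones, which is the trade-off the thread is circling around.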
[jira] [Updated] (HBASE-6665) ROOT region should not be split even with META row as explicit split key
[ https://issues.apache.org/jira/browse/HBASE-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] rajeshbabu updated HBASE-6665: -- Description: The split operation on the ROOT table with an explicit split key of .META. closes the ROOT region and takes a long time to roll back the failed split. I think we can skip the split for the ROOT table, as we already do for the META region. was: call the split operation on ROOT table by keeping split key as .META. root table is splited into two regions. and assigned to some regionserver.
[jira] [Commented] (HBASE-7016) port HBASE-6518 'Bytes.toBytesBinary() incorrect trailing backslash escape' to 0.94
[ https://issues.apache.org/jira/browse/HBASE-7016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481625#comment-13481625 ] Sergey Shelukhin commented on HBASE-7016: - Should it be ok to commit now? Thanks. port HBASE-6518 'Bytes.toBytesBinary() incorrect trailing backslash escape' to 0.94 --- Key: HBASE-7016 URL: https://issues.apache.org/jira/browse/HBASE-7016 Project: HBase Issue Type: Task Affects Versions: 0.94.2 Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Trivial Attachments: HBASE-7016.patch Porting a bugfix... -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HBASE-6665) ROOT region should not be split even with META row as explicit split key
[ https://issues.apache.org/jira/browse/HBASE-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] rajeshbabu updated HBASE-6665: -- Attachment: HBASE-6665_trunk.patch Patch for trunk. Please review and provide comments/suggestions.
[jira] [Updated] (HBASE-6665) ROOT region should not be split even with META row as explicit split key
[ https://issues.apache.org/jira/browse/HBASE-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] rajeshbabu updated HBASE-6665: -- Status: Patch Available (was: Open)
[jira] [Commented] (HBASE-6942) Endpoint implementation for bulk delete rows
[ https://issues.apache.org/jira/browse/HBASE-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481633#comment-13481633 ] Andrew Purtell commented on HBASE-6942: --- Sorry for being late to the party. Really glad to see your interest and effort in submitting a coprocessor example. We need more of these! I took a look at the trunk patch after reading through this issue. +1 after addressing Ted's comments, except for the hashcode multiplication thing. Endpoint implementation for bulk delete rows Key: HBASE-6942 URL: https://issues.apache.org/jira/browse/HBASE-6942 Project: HBase Issue Type: Improvement Components: Coprocessors, Performance Reporter: Anoop Sam John Assignee: Anoop Sam John Fix For: 0.94.3, 0.96.0 Attachments: HBASE-6942_DeleteTemplate.patch, HBASE-6942.patch, HBASE-6942_Trunk.patch, HBASE-6942_V2.patch, HBASE-6942_V3.patch, HBASE-6942_V4.patch, HBASE-6942_V5.patch, HBASE-6942_V6.patch, HBASE-6942_V7.patch We can provide an endpoint implementation for doing bulk deletion of rows (based on a scan) at the server side. This can reduce the time taken for such an operation, as right now it needs to scan to the client and issue delete(s) using row keys. Query like delete from table1 where... -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6428) Pluggable Compaction policies
[ https://issues.apache.org/jira/browse/HBASE-6428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481639#comment-13481639 ] Andrew Purtell commented on HBASE-6428: --- It does make sense, yes. Pluggable Compaction policies - Key: HBASE-6428 URL: https://issues.apache.org/jira/browse/HBASE-6428 Project: HBase Issue Type: New Feature Reporter: Lars Hofhansl For some use cases it is useful to allow more control over how KVs get compacted. For example, one could envision storing old versions of a KV in separate HFiles, which then rarely have to be touched/cached by queries querying for new data. In addition, these date-ranged HFiles can easily be used for backups while maintaining historical data. This would be a major change, allowing compactions to provide multiple targets (not just a filter). -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Assigned] (HBASE-7012) Create hbase-client module
[ https://issues.apache.org/jira/browse/HBASE-7012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark reassigned HBASE-7012: Assignee: Elliott Clark Create hbase-client module -- Key: HBASE-7012 URL: https://issues.apache.org/jira/browse/HBASE-7012 Project: HBase Issue Type: Task Reporter: Elliott Clark Assignee: Elliott Clark Priority: Critical Fix For: 0.96.0 I just tried creating a project that uses 0.95-SNAPSHOT and had to import org.apache.hbase:hbase-server as the module that I depend on. This will be confusing to users. In addition this brings in lots of dependencies that are not really needed. Let's create a client module that has all of the client in it. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6728) [89-fb] prevent OOM possibility due to per connection responseQueue being unbounded
[ https://issues.apache.org/jira/browse/HBASE-6728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481649#comment-13481649 ] stack commented on HBASE-6728: -- Nit: Could assign and declare in the one step instead of have it straddle constructor: {code} +this.currentSize = new AtomicLong(0); {code} Do we have to pollute metrics w/ an actual instance of HRegionServer: {code} + private HRegionServer regionServer; {code} Do we have to add new public method on HRegionServer to getResponseQueueSize? Lets not put this in trunk, not until after hbase-6410 goes in. It does this properly. Can go into 0.94. [89-fb] prevent OOM possibility due to per connection responseQueue being unbounded --- Key: HBASE-6728 URL: https://issues.apache.org/jira/browse/HBASE-6728 Project: HBase Issue Type: Bug Reporter: Kannan Muthukkaruppan Assignee: Michal Gregorczyk Fix For: 0.96.0 Attachments: 6728-trunk.txt The per connection responseQueue is an unbounded queue. The request handler threads today try to send the response in line, but if things start to backup, the response is sent via a per connection responder thread. This intermediate queue, because it has no bounds, can be another source of OOMs. [Have not looked at this issue in trunk. So it may or may not be applicable there.] -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
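The idea behind the patch, bounding the per-connection responseQueue by accumulated bytes tracked in an AtomicLong, can be sketched as follows. This is illustrative only, not the actual 89-fb code; it also folds in the review nit above about assigning and declaring in one step:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a byte-bounded per-connection response queue: track queued bytes
// and refuse new responses past a cap, so a slow reader can't make the queue
// (and the heap) grow without bound.
public class BoundedResponseQueue {
    private final Queue<byte[]> queue = new ArrayDeque<>();
    // Declared and assigned in one step, per the review nit on the patch.
    private final AtomicLong currentSize = new AtomicLong(0);
    private final long maxBytes;

    BoundedResponseQueue(long maxBytes) { this.maxBytes = maxBytes; }

    // Returns false when the response would exceed the byte budget; the
    // caller can then throttle or drop the connection instead of queueing.
    synchronized boolean offer(byte[] response) {
        if (currentSize.get() + response.length > maxBytes) return false;
        queue.add(response);
        currentSize.addAndGet(response.length);
        return true;
    }

    synchronized byte[] poll() {
        byte[] r = queue.poll();
        if (r != null) currentSize.addAndGet(-r.length);
        return r;
    }

    long sizeInBytes() { return currentSize.get(); }

    public static void main(String[] args) {
        BoundedResponseQueue q = new BoundedResponseQueue(10);
        System.out.println(q.offer(new byte[6])); // true
        System.out.println(q.offer(new byte[6])); // false: 12 > 10 budget
        q.poll();
        System.out.println(q.offer(new byte[6])); // true again after draining
    }
}
```

Bounding by bytes rather than item count is the same theme as HBASE-6727/HBASE-7023: a count bound alone says nothing about memory when individual responses are large.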
[jira] [Commented] (HBASE-7016) port HBASE-6518 'Bytes.toBytesBinary() incorrect trailing backslash escape' to 0.94
[ https://issues.apache.org/jira/browse/HBASE-7016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481651#comment-13481651 ] Enis Soztutar commented on HBASE-7016: -- This bug only applies when the String ends with \, in which case it will throw an exception. So it should be safe to backport. [~lhofhansl] if you give the go, I'll commit this to 0.94. port HBASE-6518 'Bytes.toBytesBinary() incorrect trailing backslash escape' to 0.94 --- Key: HBASE-7016 URL: https://issues.apache.org/jira/browse/HBASE-7016 Project: HBase Issue Type: Task Affects Versions: 0.94.2 Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Trivial Attachments: HBASE-7016.patch Porting a bugfix...
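The trailing-backslash case is easiest to see in a simplified re-implementation. This sketch is not the real Bytes.toBytesBinary (it only handles \xHH escapes); it illustrates how a bounds check before the look-ahead keeps a String ending in \ from throwing:

```java
import java.util.Arrays;

class BytesSketch {
    // Simplified sketch of toBytesBinary; only \xHH hex escapes are handled.
    static byte[] toBytesBinary(String in) {
        byte[] b = new byte[in.length()];
        int size = 0;
        for (int i = 0; i < in.length(); ++i) {
            char ch = in.charAt(i);
            // Bounds check before looking ahead: a String ending in '\'
            // falls through to the literal branch instead of throwing.
            if (ch == '\\' && i + 3 < in.length() && in.charAt(i + 1) == 'x') {
                b[size++] = (byte) Integer.parseInt(in.substring(i + 2, i + 4), 16);
                i += 3; // skip over "xHH"
            } else {
                b[size++] = (byte) ch;
            }
        }
        return Arrays.copyOf(b, size);
    }
}
```

The actual fix in HBASE-6518 is the same idea: verify the remaining length before reading the escape characters.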
[jira] [Commented] (HBASE-6728) [89-fb] prevent OOM possibility due to per connection responseQueue being unbounded
[ https://issues.apache.org/jira/browse/HBASE-6728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481652#comment-13481652 ] Ted Yu commented on HBASE-6728: --- Integrated to trunk. Thanks for the patch, Michal. [89-fb] prevent OOM possibility due to per connection responseQueue being unbounded --- Key: HBASE-6728 URL: https://issues.apache.org/jira/browse/HBASE-6728 Project: HBase Issue Type: Bug Reporter: Kannan Muthukkaruppan Assignee: Michal Gregorczyk Fix For: 0.96.0 Attachments: 6728-trunk.txt
[jira] [Commented] (HBASE-6945) Compilation errors when using non-Sun JDKs to build HBase-0.94
[ https://issues.apache.org/jira/browse/HBASE-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481657#comment-13481657 ] stack commented on HBASE-6945: -- @Kumar I asked other questions in the above. Mind addressing them? Thanks. Compilation errors when using non-Sun JDKs to build HBase-0.94 -- Key: HBASE-6945 URL: https://issues.apache.org/jira/browse/HBASE-6945 Project: HBase Issue Type: Bug Components: build Affects Versions: 0.94.1 Environment: RHEL 6.3, IBM Java 7 Reporter: Kumar Ravi Assignee: Kumar Ravi Labels: patch Fix For: 0.94.3 Attachments: ResourceCheckerJUnitListener_HBASE_6945-trunk.patch When using IBM Java 7 to build HBase-0.94.1, the following compilation error is seen. [INFO] - [ERROR] COMPILATION ERROR : [INFO] - [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[23,25] error: package com.sun.management does not exist [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[46,25] error: cannot find symbol [ERROR] symbol: class UnixOperatingSystemMXBean location: class ResourceAnalyzer /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[75,29] error: cannot find symbol [ERROR] symbol: class UnixOperatingSystemMXBean location: class ResourceAnalyzer /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[76,23] error: cannot find symbol [INFO] 4 errors [INFO] - [INFO] [INFO] BUILD FAILURE [INFO] I have a patch available which should work for all JDKs including Sun. I am in the process of testing this patch. Preliminary tests indicate the build is working fine with this patch. I will post this patch when I am done testing.
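The usual fix for this class of build break is to drop the compile-time dependency on com.sun.management and reach the bean via reflection. The sketch below is hypothetical, not the attached patch; it shows how ResourceChecker could obtain the open-file-descriptor count without importing UnixOperatingSystemMXBean:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.reflect.Method;

class ResourceCheckerSketch {
    // Returns the open-fd count via reflection, or null when the platform
    // MXBean does not expose it (e.g. non-Unix or non-Sun JVMs).
    static Long getOpenFileDescriptorCount() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        try {
            Method m = os.getClass().getMethod("getOpenFileDescriptorCount");
            m.setAccessible(true); // the implementation class is typically non-public
            return (Long) m.invoke(os);
        } catch (Exception e) {
            return null; // method absent or inaccessible: degrade gracefully
        }
    }
}
```

Because the com.sun class name never appears in the source, the file compiles on any JDK, and the capability is probed at run time instead.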
[jira] [Commented] (HBASE-7023) Forward-port HBASE-6727 size-based HBaseServer callQueue throttle from 0.89fb branch
[ https://issues.apache.org/jira/browse/HBASE-7023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481660#comment-13481660 ] Jean-Daniel Cryans commented on HBASE-7023: --- If my understanding is correct, the main difference is that they are blocking on adding to the call queue whereas what I did in HBASE-5190 is sending the request back to the client for retries. Also they are using SizeBasedThrottler which seems to be only in 0.89-fb. Forward-port HBASE-6727 size-based HBaseServer callQueue throttle from 0.89fb branch Key: HBASE-7023 URL: https://issues.apache.org/jira/browse/HBASE-7023 Project: HBase Issue Type: Improvement Components: IPC/RPC Reporter: stack Assignee: Ted Yu Fix For: 0.96.0 Attachments: 6727-fb.txt Forward port the size-based throttle that is out in the 0.89fb branch. It's nicer than what we have in trunk, where we just count queue items.
[jira] [Commented] (HBASE-6728) [89-fb] prevent OOM possibility due to per connection responseQueue being unbounded
[ https://issues.apache.org/jira/browse/HBASE-6728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481661#comment-13481661 ] Ted Yu commented on HBASE-6728: --- Backed out after seeing Stack's comment. [89-fb] prevent OOM possibility due to per connection responseQueue being unbounded --- Key: HBASE-6728 URL: https://issues.apache.org/jira/browse/HBASE-6728 Project: HBase Issue Type: Bug Reporter: Kannan Muthukkaruppan Assignee: Michal Gregorczyk Fix For: 0.96.0 Attachments: 6728-trunk.txt
[jira] [Commented] (HBASE-7024) TableMapReduceUtil.initTableMapperJob unnecessarily limits the types of outputKeyClass and outputValueClass
[ https://issues.apache.org/jira/browse/HBASE-7024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481663#comment-13481663 ] stack commented on HBASE-7024: -- Not that I know of. My guess is that it is historical and long since addressed over in Hadoop. TableMapReduceUtil.initTableMapperJob unnecessarily limits the types of outputKeyClass and outputValueClass --- Key: HBASE-7024 URL: https://issues.apache.org/jira/browse/HBASE-7024 Project: HBase Issue Type: Bug Components: mapreduce Reporter: Dave Beech Priority: Minor The various initTableMapperJob methods in TableMapReduceUtil take outputKeyClass and outputValueClass parameters which need to extend WritableComparable and Writable respectively. Because of this, it is not convenient to use an alternative serialization like Avro. (I wanted to set these parameters to AvroKey and AvroValue). The methods in the MapReduce API to set map output key and value types do not impose this restriction, so is there a reason to do it here?
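The restriction lives purely in the method signature. The types below are hypothetical stand-ins, not the real Hadoop or Avro classes; the sketch shows why a bounded Class parameter rejects AvroKey at compile time while an unbounded Class<?> parameter, as the MapReduce API itself uses, accepts anything:

```java
// Stand-in types; NOT the real Hadoop interfaces or Avro classes.
interface Writable {}
interface WritableComparable extends Writable {}
class AvroKey<T> {} // stands in for an alternative serialization's key type

class TableMapReduceUtilSketch {
    // Bounded like initTableMapperJob: passing AvroKey.class here is a compile error.
    static String initBounded(Class<? extends WritableComparable> keyClass) {
        return keyClass.getSimpleName();
    }

    // Unbounded like Job.setMapOutputKeyClass(Class<?>): accepts any type.
    static String initUnbounded(Class<?> keyClass) {
        return keyClass.getSimpleName();
    }
}
```

Relaxing initTableMapperJob to Class<?> would therefore match what the underlying Job setters already allow.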
[jira] [Commented] (HBASE-6951) Allow the master info server to be started in a read only mode.
[ https://issues.apache.org/jira/browse/HBASE-6951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481665#comment-13481665 ] Hudson commented on HBASE-6951: --- Integrated in HBase-TRUNK #3471 (See [https://builds.apache.org/job/HBase-TRUNK/3471/]) HBASE-6951 Allow the master info server to be started in a read only mode. (Revision 1400956) Result = FAILURE eclark : Files : * /hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/table.jsp * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/TestInfoServers.java Allow the master info server to be started in a read only mode. --- Key: HBASE-6951 URL: https://issues.apache.org/jira/browse/HBASE-6951 Project: HBase Issue Type: Improvement Components: UI Reporter: Elliott Clark Assignee: Elliott Clark Priority: Critical Labels: noob Fix For: 0.92.3, 0.94.3, 0.96.0 Attachments: HBASE-6951-092-0.patch, HBASE-6951-094-0.patch, HBASE-6951-trunk-0.patch There are some cases where a user could want the web UI to be accessible but might not want the split and compact functionality to be usable. Allowing the web UI to start in a readOnly mode would be good.
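Assuming the change follows the usual HBase configuration pattern, enabling the read-only UI is a single boolean property in hbase-site.xml. The property name below is what later HBase releases expose for this feature; verify it against the version you run, as the patch under review may differ:

```xml
<!-- hbase-site.xml: serve the master info server without split/compact actions -->
<property>
  <name>hbase.master.ui.readonly</name>
  <value>true</value>
</property>
```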
[jira] [Commented] (HBASE-6992) Coprocessors semantic issues: post async operations, helper methods, ...
[ https://issues.apache.org/jira/browse/HBASE-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481666#comment-13481666 ] Andrew Purtell commented on HBASE-6992: --- The basic problem is we need pre/post on the RPC path for managing interactions with the client, and pre/post on the background async processing. There is no single semantic here. Perhaps the naming can be improved, though, to make this clearer or less confusing. Suggestions? Coprocessors semantic issues: post async operations, helper methods, ... Key: HBASE-6992 URL: https://issues.apache.org/jira/browse/HBASE-6992 Project: HBase Issue Type: Brainstorming Components: Coprocessors Affects Versions: 0.92.2, 0.94.2, 0.96.0 Reporter: Matteo Bertozzi Assignee: Matteo Bertozzi Discussion ticket around coprocessor pre/post semantic. For each rpc in HMaster we have a pre/post operation that allows a coprocessor to execute some code before and after the operation * preOperation() * my operation * postOperation() This is used for example by the AccessController to verify if the user can execute or not the operation. Everything is fine, unless the master operation is asynchronous (like create/delete table) * preOperation() * executor.submit(new OperationHandler()) * postOperation() The pre operation is still fine, since is executed before the operation and need to throw exceptions to the client in case of failures... The post operation, instead, is no longer post... is just post submit. And if someone subscribe to postCreateTable() the notification can arrive before the table creation. 
To solve this problem, HBASE-5584 added pre/post handlers, and now the situation looks like this:
{code}
client request                        client response
      |                                     ^
 (HMaster)   pre op --- submit op --- post op
                            |
 (executor)        pre handler --- handler --- post handler
{code}
Now we have two types of pre/post operation, and the semantically correct pair is preOperation() and postOperationHandler(), since preOperation() needs to reply to the client (e.g. AccessController NotAllowException) and postOperationHandler() is really post-operation. postOperation() is not post... and preOperationHandler() can't communicate with the client. The AccessController coprocessor uses postOperation(), which is fine for sync operations like addColumn(), deleteColumn()... but in case of failure of async operations like deleteTable() we've removed rights that we still need. I think that we should get back to just the single pre/post operation, but with the right semantics... Other than the when-it-is-executed problem, we also have functions that can be described with other, simpler functions. For example: modifyTable() is just a helper to avoid multiple addColumn()/deleteColumn() calls, but the problem here is that modifyTable() has its own pre/post operation(), and if I've implemented pre/post addColumn I don't get notified when I call modifyTable(). This is another problem in the access controller coprocessor. In this case I'm not sure what the best solution can be... but this way, adding new helper methods means breaking the coprocessors, because they don't get notified even if something has changed... Any ideas, thoughts, ...?
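The timing gap described above can be reproduced in a few lines. This is a hypothetical sketch, not HBase coprocessor code: the post-submit hook fires as soon as the executor accepts the task, while only the handler-side hook runs after the work is actually done.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;

class AsyncHookSketch {
    static final List<String> events = new CopyOnWriteArrayList<>();

    static CompletableFuture<Void> createTable(ExecutorService pool) {
        events.add("preCreateTable"); // runs on the RPC path; can veto and reply to the client
        CompletableFuture<Void> f = CompletableFuture.runAsync(() -> {
            events.add("createTableHandler");     // the actual work
            events.add("postCreateTableHandler"); // the only hook truly after the operation
        }, pool);
        events.add("postCreateTable"); // fires at submit time, possibly before the table exists
        return f;
    }
}
```

A subscriber to the plain post hook can therefore observe "table created" before any region exists, which is exactly the AccessController hazard described above.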
[jira] [Updated] (HBASE-6987) Port HBASE-6920 to trunk (?)
[ https://issues.apache.org/jira/browse/HBASE-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gregory Chanan updated HBASE-6987: -- Resolution: Fixed Status: Resolved (was: Patch Available) Thanks for the review, Lars. Committed to trunk. Port HBASE-6920 to trunk (?) Key: HBASE-6987 URL: https://issues.apache.org/jira/browse/HBASE-6987 Project: HBase Issue Type: Bug Reporter: Gregory Chanan Assignee: Gregory Chanan Priority: Minor Fix For: 0.96.0 Attachments: HBASE-6987.patch Need to investigate whether we need to port HBASE-6920 to trunk.
[jira] [Updated] (HBASE-6987) Port HBASE-6920 to trunk (?)
[ https://issues.apache.org/jira/browse/HBASE-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gregory Chanan updated HBASE-6987: -- Issue Type: Task (was: Bug) Port HBASE-6920 to trunk (?) Key: HBASE-6987 URL: https://issues.apache.org/jira/browse/HBASE-6987 Project: HBase Issue Type: Task Reporter: Gregory Chanan Assignee: Gregory Chanan Priority: Minor Fix For: 0.96.0 Attachments: HBASE-6987.patch
[jira] [Commented] (HBASE-6987) Port HBASE-6920 to trunk (?)
[ https://issues.apache.org/jira/browse/HBASE-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481679#comment-13481679 ] stack commented on HBASE-6987: -- +1 Port HBASE-6920 to trunk (?) Key: HBASE-6987 URL: https://issues.apache.org/jira/browse/HBASE-6987 Project: HBase Issue Type: Task Reporter: Gregory Chanan Assignee: Gregory Chanan Priority: Minor Fix For: 0.96.0 Attachments: HBASE-6987.patch
[jira] [Commented] (HBASE-7009) Port HBaseCluster interface/tests to 0.94
[ https://issues.apache.org/jira/browse/HBASE-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481684#comment-13481684 ] Andrew Purtell commented on HBASE-7009: --- As [~enis] points out I did +1 this at the meetup. While it is a fat patch, it is all test code, not functional changes, and we sorely need such test code. Also +1 on an addition to the book that explains how to run nightlies with this framework with 0.94. I want to start doing this. Port HBaseCluster interface/tests to 0.94 - Key: HBASE-7009 URL: https://issues.apache.org/jira/browse/HBASE-7009 Project: HBase Issue Type: Sub-task Components: test Affects Versions: 0.94.3 Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Fix For: 0.94.3 Attachments: HBASE-7009-p1.patch, HBASE-7009.patch, HBASE-7009-v2-squashed.patch Need to port. I am porting the V5 patch from the original JIRA; I have a partially ported (V3) patch from Enis with protocol buffers reverted to HRegionInterface/HMasterInterface
[jira] [Commented] (HBASE-7001) Fix the RCN Correctness Warning in MemStoreFlusher class
[ https://issues.apache.org/jira/browse/HBASE-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481690#comment-13481690 ] Andrew Purtell commented on HBASE-7001: --- +1, are you going to commit this, Ted? Fix the RCN Correctness Warning in MemStoreFlusher class Key: HBASE-7001 URL: https://issues.apache.org/jira/browse/HBASE-7001 Project: HBase Issue Type: Bug Reporter: liang xie Assignee: liang xie Priority: Minor Attachments: HBASE-7001.patch https://builds.apache.org/job/PreCommit-HBASE-Build/3057//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html#Warnings_CORRECTNESS shows : Bug type RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE (click for details) In class org.apache.hadoop.hbase.regionserver.MemStoreFlusher In method org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher$FlushRegionEntry) Value loaded from region Return value of org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushRegionEntry.access$000(MemStoreFlusher$FlushRegionEntry) At MemStoreFlusher.java:[line 346] Redundant null check at MemStoreFlusher.java:[line 363]
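For readers unfamiliar with this FindBugs warning class: RCN flags a null check on a value that an earlier dereference already proved non-null. A hypothetical minimal reproduction, unrelated to the actual MemStoreFlusher code, plus the usual fix:

```java
class RcnSketch {
    // The pattern FindBugs flags: 'region' is dereferenced first, so the
    // later null check is redundant (an NPE would already have been thrown).
    static String flushRegionBuggy(String region) {
        int len = region.length(); // NPE here if region is null
        if (region == null) {      // RCN: redundant, this branch is dead
            return "skipped";
        }
        return "flushed:" + len;
    }

    // The fix: perform the null check once, before any dereference.
    static String flushRegionFixed(String region) {
        if (region == null) {
            return "skipped";
        }
        return "flushed:" + region.length();
    }
}
```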
[jira] [Commented] (HBASE-6583) Enhance Hbase load test tool to automatically create column families if not present
[ https://issues.apache.org/jira/browse/HBASE-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481692#comment-13481692 ] Andrew Purtell commented on HBASE-6583: --- [~lhofhansl] What do you think about backporting this to 0.94? I use LoadTestTool, happy to do it if you give the ok. Enhance Hbase load test tool to automatically create column families if not present --- Key: HBASE-6583 URL: https://issues.apache.org/jira/browse/HBASE-6583 Project: HBase Issue Type: Bug Components: test Reporter: Karthik Ranganathan Assignee: Sergey Shelukhin Labels: noob Fix For: 0.96.0 Attachments: HBASE-6583.patch, HBASE-6583.patch The load test tool currently disables the table and applies any changes to the cf descriptor if any, but does not create the cf if not present.
[jira] [Updated] (HBASE-7001) Fix the RCN Correctness Warning in MemStoreFlusher class
[ https://issues.apache.org/jira/browse/HBASE-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HBASE-7001: -- Resolution: Fixed Fix Version/s: 0.96.0 Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Fix the RCN Correctness Warning in MemStoreFlusher class Key: HBASE-7001 URL: https://issues.apache.org/jira/browse/HBASE-7001 Project: HBase Issue Type: Bug Reporter: liang xie Assignee: liang xie Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7001.patch
[jira] [Commented] (HBASE-7009) Port HBaseCluster interface/tests to 0.94
[ https://issues.apache.org/jira/browse/HBASE-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481707#comment-13481707 ] stack commented on HBASE-7009: -- The doc issue is HBASE-6302 Port HBaseCluster interface/tests to 0.94 - Key: HBASE-7009 URL: https://issues.apache.org/jira/browse/HBASE-7009 Project: HBase Issue Type: Sub-task Components: test Affects Versions: 0.94.3 Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Fix For: 0.94.3 Attachments: HBASE-7009-p1.patch, HBASE-7009.patch, HBASE-7009-v2-squashed.patch
[jira] [Resolved] (HBASE-2689) Implement common gateway service daemon for Avro and Thrift servers
[ https://issues.apache.org/jira/browse/HBASE-2689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] stack resolved HBASE-2689. -- Resolution: Won't Fix Closing. If this happens, it doesn't have to be in core. (Thanks for noticing, Liang.) Implement common gateway service daemon for Avro and Thrift servers --- Key: HBASE-2689 URL: https://issues.apache.org/jira/browse/HBASE-2689 Project: HBase Issue Type: Improvement Components: avro, Thrift Reporter: Jeff Hammerbacher
[jira] [Commented] (HBASE-6728) [89-fb] prevent OOM possibility due to per connection responseQueue being unbounded
[ https://issues.apache.org/jira/browse/HBASE-6728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481709#comment-13481709 ] stack commented on HBASE-6728: -- [~ted_yu] Thanks Ted. Elliott's coming patch has means of addressing my bugaboo. [89-fb] prevent OOM possibility due to per connection responseQueue being unbounded --- Key: HBASE-6728 URL: https://issues.apache.org/jira/browse/HBASE-6728 Project: HBase Issue Type: Bug Reporter: Kannan Muthukkaruppan Assignee: Michal Gregorczyk Fix For: 0.96.0 Attachments: 6728-trunk.txt
[jira] [Commented] (HBASE-6843) loading lzo error when using coprocessor
[ https://issues.apache.org/jira/browse/HBASE-6843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481713#comment-13481713 ] Andrew Purtell commented on HBASE-6843: --- bq. Is this the typical lzo compression library folks would use? That would indeed be pretty bad. It is, and it is, but {{com.hadoop}} classes are from the Hadoop GPL codec project. The trivial thing to do here is add them to the whitelist, but there are two larger issues IMHO. The precedent is bad, whitelisting non-ASF code. How often will we do this? And the whitelist should not be hardcoded anyway. At least, it should be extensible. loading lzo error when using coprocessor Key: HBASE-6843 URL: https://issues.apache.org/jira/browse/HBASE-6843 Project: HBase Issue Type: Bug Components: Coprocessors Affects Versions: 0.94.1 Reporter: Zhou wenjian Assignee: Zhou wenjian Priority: Critical Fix For: 0.94.3, 0.96.0 Attachments: HBASE-6843-trunk.patch After applying HBASE-6308, we found the following error: 2012-09-06 00:44:38,341 DEBUG org.apache.hadoop.hbase.coprocessor.CoprocessorClassLoader: Finding class: com.hadoop.compression.lzo.LzoCodec 2012-09-06 00:44:38,351 ERROR com.hadoop.compression.lzo.GPLNativeCodeLoader: Could not load native gpl library java.lang.UnsatisfiedLinkError: Native Library /home/zhuzhuang/hbase/0.94.0-ali-1.0/lib/native/Linux-amd64-64/libgplcompression.so already loaded in another classloader at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1772) at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1732) at java.lang.Runtime.loadLibrary0(Runtime.java:823) at java.lang.System.loadLibrary(System.java:1028) at com.hadoop.compression.lzo.GPLNativeCodeLoader.<clinit>(GPLNativeCodeLoader.java:32) at com.hadoop.compression.lzo.LzoCodec.<clinit>(LzoCodec.java:67) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:113) at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm$1.getCodec(Compression.java:107) at org.apache.hadoop.hbase.io.hfile.Compression$Algorithm.getCompressor(Compression.java:243) at org.apache.hadoop.hbase.util.CompressionTest.testCompression(CompressionTest.java:85) at org.apache.hadoop.hbase.regionserver.HRegion.checkCompressionCodecs(HRegion.java:3793) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3782) at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3732) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332) at org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108) at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) 2012-09-06 00:44:38,355 DEBUG org.apache.hadoop.hbase.coprocessor.CoprocessorClassLoader: Skipping exempt class java.io.PrintWriter - delegating directly to parent 2012-09-06 00:44:38,355 ERROR com.hadoop.compression.lzo.LzoCodec: Cannot load native-lzo without native-hadoop
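The "exempt classes" whitelist under discussion can be sketched as a classloader that delegates matching prefixes straight to its parent, so a class that loads a native library such as libgplcompression.so (which the JVM ties to exactly one classloader) is only ever loaded once. Names and prefixes below are illustrative, not the real CoprocessorClassLoader:

```java
// Illustrative only; prefixes and names are not the real CoprocessorClassLoader.
class ExemptingClassLoader extends ClassLoader {
    private static final String[] EXEMPT_PREFIXES = {
        "java.", "org.apache.hadoop.", "com.hadoop.compression.lzo."
    };

    ExemptingClassLoader(ClassLoader parent) {
        super(parent);
    }

    static boolean isExempt(String className) {
        for (String prefix : EXEMPT_PREFIXES) {
            if (className.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        if (isExempt(name)) {
            // Delegate to the parent so native-library-loading classes
            // are resolved by a single classloader, avoiding the
            // UnsatisfiedLinkError seen in the report above.
            return getParent().loadClass(name);
        }
        // A child-first lookup over coprocessor jars would go here.
        return super.loadClass(name, resolve);
    }
}
```

Making EXEMPT_PREFIXES configurable rather than hardcoded is exactly the extensibility point raised in the comment.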
[jira] [Commented] (HBASE-7019) Can't pass SplitAlgo in hbase shell
[ https://issues.apache.org/jira/browse/HBASE-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481715#comment-13481715 ] stack commented on HBASE-7019: -- +1 on patch (Might want to do as Ted suggests too on commit)? Can't pass SplitAlgo in hbase shell --- Key: HBASE-7019 URL: https://issues.apache.org/jira/browse/HBASE-7019 Project: HBase Issue Type: Bug Components: shell Affects Versions: 0.96.0 Reporter: Gregory Chanan Assignee: Gregory Chanan Fix For: 0.96.0 Attachments: HBASE-7019.patch {noformat} hbase(main):002:0> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'} ERROR: uninitialized constant Hbase::Admin::RegionSplitter {noformat}
[jira] [Created] (HBASE-7025) Metric for how many WAL files a regionserver is carrying
stack created HBASE-7025: Summary: Metric for how many WAL files a regionserver is carrying Key: HBASE-7025 URL: https://issues.apache.org/jira/browse/HBASE-7025 Project: HBase Issue Type: Improvement Components: metrics Reporter: stack A metric that shows how many WAL files a regionserver is carrying at any one time would be useful for fingering those servers that are always over the upper bounds and in need of attention.
[jira] [Commented] (HBASE-6665) ROOT region should not be splitted even with META row as explicit split key
[ https://issues.apache.org/jira/browse/HBASE-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481733#comment-13481733 ] Hadoop QA commented on HBASE-6665: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12550327/HBASE-6665_trunk.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 82 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 5 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in . 
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3112//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3112//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3112//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3112//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3112//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3112//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3112//console This message is automatically generated. ROOT region should not be splitted even with META row as explicit split key --- Key: HBASE-6665 URL: https://issues.apache.org/jira/browse/HBASE-6665 Project: HBase Issue Type: Bug Reporter: Y. SREENIVASULU REDDY Attachments: HBASE-6665_trunk.patch A split operation on the ROOT table with .META. given as an explicit split key closes the ROOT region and takes a long time to roll back the failed split. I think we can skip the split for the ROOT table, as we do for the META region.
[jira] [Commented] (HBASE-4850) hbase tests need to be made Hadoop version agnostic
[ https://issues.apache.org/jira/browse/HBASE-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481746#comment-13481746 ] stack commented on HBASE-4850: -- I do not believe the first premise holds any more, that a single hbase.jar can work against multiple hadoop versions. Therefore, we can't do what you suggest? hbase tests need to be made Hadoop version agnostic --- Key: HBASE-4850 URL: https://issues.apache.org/jira/browse/HBASE-4850 Project: HBase Issue Type: Improvement Components: test Affects Versions: 0.92.0, 0.92.1, 0.94.0 Reporter: Roman Shaposhnik Priority: Critical Currently it is possible to have a single hbase jar that can work with multiple versions of Hadoop. It would be nice if hbase-test.jar also followed the suit. For now I'm aware of the following problems (but there could be more): 1. org.apache.hadoop.hbase.mapreduce.NMapInputFormat is failing because org.apache.hadoop.mapreduce.JobContext is either class or interface depending on which version of Hadoop you compile it against. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6302) Document how to run integration tests
[ https://issues.apache.org/jira/browse/HBASE-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481749#comment-13481749 ] Enis Soztutar commented on HBASE-6302: -- Sorry I could not get to this sooner. Other than the CLI commands for ChaosMonkey, is there anything else missing or to be improved upon? I can update the patch with your suggestions. Document how to run integration tests - Key: HBASE-6302 URL: https://issues.apache.org/jira/browse/HBASE-6302 Project: HBase Issue Type: Sub-task Components: documentation Reporter: stack Assignee: Enis Soztutar Priority: Blocker Fix For: 0.96.0 Attachments: HBASE-6302_v1.patch HBASE-6203 has attached the old IT doc with some mods. When we figure out how ITs are to be run, update it and apply the documentation under this issue. Making this a blocker against 0.96.
[jira] [Commented] (HBASE-6508) [0.89-fb] Filter out edits at log split time
[ https://issues.apache.org/jira/browse/HBASE-6508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481750#comment-13481750 ] Sergey Shelukhin commented on HBASE-6508: - It appears that the new message will not be processed correctly by the old server on new protocols. Looks like we won't be porting this... [0.89-fb] Filter out edits at log split time Key: HBASE-6508 URL: https://issues.apache.org/jira/browse/HBASE-6508 Project: HBase Issue Type: Improvement Components: master, regionserver, wal Affects Versions: 0.89-fb Reporter: Alex Feinberg Assignee: Alex Feinberg Fix For: 0.89-fb At log splitting time, we can filter out many edits if we have a conservative estimate of what was saved last in each region. This patch does the following: 1) When a region server flushes a MemStore to an HFile, store the last flushed sequence id for the region in a map. 2) Send the map to the master as part of the region server report. 3) Adds an RPC call in HMasterRegionInterface to allow a region server to query the last flushed sequence id for a region. 4) Skips any log entry with a sequence id lower than the last flushed sequence id for the region during log split time. 5) When a region is removed from a region server, remove the entry for that region from the map, so that it isn't sent during the next report. This can reduce downtime quite a bit when a regionserver goes down.
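The skip rule in steps (1)-(5) above can be sketched with a toy class. This is a minimal illustration, not the patch's actual API; the names LogSplitFilter, recordFlush, and shouldReplay are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the HBASE-6508 idea: a WAL entry whose sequence id is at or below
// the last flushed sequence id reported for its region is already persisted in
// an HFile, so log splitting can skip it.
public class LogSplitFilter {
    private final Map<String, Long> lastFlushedSeqIds = new HashMap<>();

    // Step (1): remember the last flushed sequence id per region.
    public void recordFlush(String regionName, long seqId) {
        lastFlushedSeqIds.put(regionName, seqId);
    }

    // Step (4): returns true if the edit must be replayed, false if skippable.
    public boolean shouldReplay(String regionName, long entrySeqId) {
        Long flushed = lastFlushedSeqIds.get(regionName);
        // No flush info for the region: be conservative and replay.
        return flushed == null || entrySeqId > flushed;
    }
}
```

The conservative default (replay when nothing is known) matters: skipping is only safe when a flush is positively known to cover the edit.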
[jira] [Commented] (HBASE-6929) Publish Hbase 0.94 artifacts build against hadoop-2.0
[ https://issues.apache.org/jira/browse/HBASE-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481752#comment-13481752 ] stack commented on HBASE-6929: -- Yes, we have not published a secure jar to maven. We should have. We just shipped two tgz bundles, secure and insecure. Yes, this is a problem we need to fix. On changing the version being the only way around this issue, I agree. Jon Hsieh and I talked w/ a Maven'y fellow who suggested similar -- Andrew Bayer. Even if we could make it work w/ classifiers, it would probably be a deceit; I would not trust that, once downloaded, maven would pull in the proper dependencies. The hadoop-api-evolution table made me violently sick. Roman, on HBASE-4850, I don't think we can do what is suggested there (commented to that effect). I like the patch you suggest, Enis. What do we do in hbase 1.0, say, where the minimum required is indeed h2? Don't we want our version to be hbase in that case and not hbase-*-hadoop2? You think w/ your patch, the maven repo will have hbase-*-hadoop2 and it will be just fine when hbase itself moves to hadoop2? (We'll have a hadoop3 version around this time too?) Oh, we don't need to do a security artifact in maven for 0.96. We should have published one for 0.94 and before. Publish Hbase 0.94 artifacts build against hadoop-2.0 - Key: HBASE-6929 URL: https://issues.apache.org/jira/browse/HBASE-6929 Project: HBase Issue Type: Task Components: build Affects Versions: 0.94.2 Reporter: Enis Soztutar Attachments: 6929.txt, hbase-6929_v2.patch Downstream projects (flume, hive, pig, etc) depend on hbase, but since the hbase binaries built with hadoop-2.0 are not pushed to maven, they cannot depend on them. AFAIK, hadoop 1 and 2 are not binary compatible, so we should also push hbase jars built with the hadoop2.0 profile into maven, possibly with a version string like 0.94.2-hadoop2.0.
[jira] [Commented] (HBASE-6945) Compilation errors when using non-Sun JDKs to build HBase-0.94
[ https://issues.apache.org/jira/browse/HBASE-6945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481754#comment-13481754 ] Kumar Ravi commented on HBASE-6945: --- @stack, I was hoping we could address concerns in HBASE-6965, since this JIRA was riding on that. Here are my responses to your questions for this JIRA: bq. We seem to be removing the body of the class. The functionality of this class is already there in OSMXBean. If this is going to be an issue, we need to discuss the best approach to reorganizing the classes. Most of the APIs and methods in the above abstract classes access APIs and methods not available in some JVMs. bq. Should this class, OSMXBean, be renamed OS since it answers questions about the OS in a way that insulates us against differences in JVM? Do you mean OSBean instead of OSMXBean? I am open to renaming the class. At this point, though, OSMXBean has already been committed to trunk. If you could let me know how to back out the patch now, I can do that. I would like to make sure we are in concurrence on the overall class and the methods. bq. Maybe a better name would be JVM. Then you'd ask it for an implementation of UnixOperatingSystemMXBean. It would take care of returning the IBM or Oracle implementation. They both implement the UnixOperatingSystemMXBean Interface? I am not sure I follow. IBM Java will not be able to implement the UnixOperatingSystemMXBean interface, as the IBM Java SDK does not contain the com.sun.management package. OSMXBean acts like a wrapper and invokes UnixOperatingSystemMXBean if the JDK is Sun's, and for IBM it provides the equivalent functionality.
Compilation errors when using non-Sun JDKs to build HBase-0.94 -- Key: HBASE-6945 URL: https://issues.apache.org/jira/browse/HBASE-6945 Project: HBase Issue Type: Bug Components: build Affects Versions: 0.94.1 Environment: RHEL 6.3, IBM Java 7 Reporter: Kumar Ravi Assignee: Kumar Ravi Labels: patch Fix For: 0.94.3 Attachments: ResourceCheckerJUnitListener_HBASE_6945-trunk.patch When using IBM Java 7 to build HBase-0.94.1, the following compilation error is seen. [ERROR] COMPILATION ERROR : [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[23,25] error: package com.sun.management does not exist [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[46,25] error: cannot find symbol [ERROR] symbol: class UnixOperatingSystemMXBean location: class ResourceAnalyzer [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[75,29] error: cannot find symbol [ERROR] symbol: class UnixOperatingSystemMXBean location: class ResourceAnalyzer [ERROR] /home/hadoop/hbase-0.94/src/test/java/org/apache/hadoop/hbase/ResourceChecker.java:[76,23] error: cannot find symbol [INFO] 4 errors [INFO] BUILD FAILURE I have a patch available which should work for all JDKs including Sun's. I am in the process of testing this patch. Preliminary tests indicate the build is working fine with it. I will post the patch when I am done testing.
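The wrapper pattern discussed above (probe for com.sun.management at run time instead of compiling against it) can be sketched as follows. This is a hedged illustration, not the OSMXBean class from the patch; JvmOsProbe and its method names are hypothetical:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

// Sketch: keep com.sun.management out of the compile-time classpath by loading
// UnixOperatingSystemMXBean reflectively, so the class still compiles and loads
// on JVMs (e.g. IBM's) that do not ship that package.
public class JvmOsProbe {
    public static long openFileDescriptorCount() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        try {
            Class<?> unixBean = Class.forName("com.sun.management.UnixOperatingSystemMXBean");
            if (unixBean.isInstance(os)) {
                // Invoke through the interface class, never a compile-time import.
                return (Long) unixBean.getMethod("getOpenFileDescriptorCount").invoke(os);
            }
        } catch (ReflectiveOperationException e) {
            // Fall through: not a Sun/Oracle JVM, or the reflective call failed.
        }
        return -1; // a vendor-specific (e.g. IBM) fallback would go here
    }
}
```

On a Sun/Oracle JVM this returns the real descriptor count; elsewhere it degrades to -1 instead of failing to compile.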
[jira] [Commented] (HBASE-6929) Publish Hbase 0.94 artifacts build against hadoop-2.0
[ https://issues.apache.org/jira/browse/HBASE-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481755#comment-13481755 ] Jarek Jarcec Cecho commented on HBASE-6929: --- Another idea - the version number can contain a string. So instead of creating a special dot release, what about creating release ${normalversion}-hadoop2? Publish Hbase 0.94 artifacts build against hadoop-2.0 - Key: HBASE-6929 URL: https://issues.apache.org/jira/browse/HBASE-6929 Project: HBase Issue Type: Task Components: build Affects Versions: 0.94.2 Reporter: Enis Soztutar Attachments: 6929.txt, hbase-6929_v2.patch Downstream projects (flume, hive, pig, etc) depend on hbase, but since the hbase binaries built with hadoop-2.0 are not pushed to maven, they cannot depend on them. AFAIK, hadoop 1 and 2 are not binary compatible, so we should also push hbase jars built with the hadoop2.0 profile into maven, possibly with a version string like 0.94.2-hadoop2.0.
[jira] [Updated] (HBASE-7019) Can't pass SplitAlgo in hbase shell
[ https://issues.apache.org/jira/browse/HBASE-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gregory Chanan updated HBASE-7019: -- Attachment: HBASE-7019-v2.patch Attached v2 patch. Includes test case. I ran the test case with the command: mvn test -PlocalTests -Dtest=org.apache.hadoop.hbase.client.TestShell it passes with patch, fails without. Can't pass SplitAlgo in hbase shell --- Key: HBASE-7019 URL: https://issues.apache.org/jira/browse/HBASE-7019 Project: HBase Issue Type: Bug Components: shell Affects Versions: 0.96.0 Reporter: Gregory Chanan Assignee: Gregory Chanan Fix For: 0.96.0 Attachments: HBASE-7019.patch, HBASE-7019-v2.patch {noformat} hbase(main):002:0> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'} ERROR: uninitialized constant Hbase::Admin::RegionSplitter {noformat}
[jira] [Commented] (HBASE-7019) Can't pass SplitAlgo in hbase shell
[ https://issues.apache.org/jira/browse/HBASE-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481759#comment-13481759 ] Ted Yu commented on HBASE-7019: --- +1 on patch v2. Can't pass SplitAlgo in hbase shell --- Key: HBASE-7019 URL: https://issues.apache.org/jira/browse/HBASE-7019 Project: HBase Issue Type: Bug Components: shell Affects Versions: 0.96.0 Reporter: Gregory Chanan Assignee: Gregory Chanan Fix For: 0.96.0 Attachments: HBASE-7019.patch, HBASE-7019-v2.patch {noformat} hbase(main):002:0> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'} ERROR: uninitialized constant Hbase::Admin::RegionSplitter {noformat}
[jira] [Commented] (HBASE-2600) Change how we do meta tables; from tablename+STARTROW+randomid to instead, tablename+ENDROW+randomid
[ https://issues.apache.org/jira/browse/HBASE-2600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481772#comment-13481772 ] Jesse Yates commented on HBASE-2600: We've been doing a lot of thinking over here at Salesforce about this issue, and I was thinking about picking up work on this if Alex is busy. The current approach is pretty good, and has a lot of merit. We also discussed the option of using the multi-row transaction stuff (which would be another reason why we couldn't split META). I did a full write-up/analysis of the options (see https://dl.dropbox.com/u/6147077/Proposal-HBASE-2600.docx). What I ended up coming up with is a little bit crazy, but I think it works. (I'm not dealing with tablenames as hashes, but that is pretty trivial). What I'm looking to solve: (1) replacing start keys with end keys, (2) ensuring correct sorting, (3) ensuring correct split behavior to avoid META holes, (4) moving the compound key components to their own family/qualifier. There seem to be a couple of pieces we can put together to ensure we meet all the above goals. First, row keys are encoded as follows. For all non-terminal regions: {code} tablename + 0x00 + endkey {code} For the terminal region: {code} tablename + 0x01 {code} Then we can move the encoded name into its own cell, under the “info:encodedname” column. Next, the regionid is moved to the timestamp and used for all updates to the region in META (this includes offlining and marking the parent as split). Since regionids are already timestamps by convention, this doesn't stray that far afield. META then looks something like:
{code}
tablename+0x00+endkey | info | encodedname      | regionid  | md5 hash
                      |      | regioninfo       | regionid  | hri-1
                      |      | server           | regionid  | server:port
                      |      | server.startcode | regionid  | startcode
                      |      | splitA           | regionid  | hri-3
                      |      | splitB           | regionid  | hri-4
tablename+0x01        | info | encodedname      | regionid2 | hri-4
                      |      | ...              | regionid2 | ...
{code}
Obviously there are some serious implications for how lookups and splits work.
Splits need to take the opposite approach with respect to putting children in META. Currently, we write the bottom and then the top child, counting on the htable to retry when it finds an offlined region. Now, we just flip the ordering by: (1) offlining the parent, (2) putting the 'top' child, and then (3) inserting the 'bottom' child. The problem lies in making sure that the bottom child sorts before the parent. In the previous scheme we ensured that ordering by putting a regionid in the row key. With the above scheme, the 'top' child will always sort before the parent because it has a lower endkey. The 'bottom' child actually has _exactly the same row key_ as the parent. However, the bottom child still sorts first because it has a larger regionid (which is also already baked into the code). We also must do a check of the timestamp vs. the expected regionid to ensure that we get the correct region, but that is a minor overhead. NOTE: this also gives us provenance of regions, at least until the catalog janitor cleans up parent regions. For lookups, you would query for the first region that matches (similar to the current mechanism): {code} tablename + 0x00 + desired key + 99…… {code} which still finds the correct (bottom) child, because its regionid must be greater than its parent's, causing it to sort _before_ its parent in the same row. This gives us correct sorting, an easily readable META, and no holes. Oh, and we can remove all the backwards scanning.
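The key layout sketched above can be made concrete with a few lines of byte-array code. This is an illustration of the proposal only, assuming a 0x00 separator for non-terminal regions and a lone 0x01 suffix for the terminal region; MetaKeys and its helpers are hypothetical names:

```java
import java.util.Arrays;

// Sketch of the proposed META row-key encoding:
//   non-terminal region: table + 0x00 + endkey
//   terminal region:     table + 0x01
// Because 0x01 > 0x00 at the byte right after the table name, the terminal
// region always sorts after every non-terminal key of the same table.
public class MetaKeys {
    static byte[] concat(byte[] a, byte[] b) {
        byte[] out = Arrays.copyOf(a, a.length + b.length);
        System.arraycopy(b, 0, out, a.length, b.length);
        return out;
    }

    static byte[] nonTerminalKey(byte[] table, byte[] endKey) {
        return concat(concat(table, new byte[] {0x00}), endKey);
    }

    static byte[] terminalKey(byte[] table) {
        return concat(table, new byte[] {0x01});
    }

    // Unsigned lexicographic compare, the order HBase applies to row keys.
    static int compare(byte[] a, byte[] b) {
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }
}
```

A region with a lower end key sorts before one with a higher end key, and both sort before the terminal region, which is exactly the hole-free ordering the proposal is after.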
Change how we do meta tables; from tablename+STARTROW+randomid to instead, tablename+ENDROW+randomid Key: HBASE-2600 URL: https://issues.apache.org/jira/browse/HBASE-2600 Project: HBase Issue Type: Bug Reporter: stack Assignee: Alex Newman Attachments: 0001-Changed-regioninfo-format-to-use-endKey-instead-of-s.patch, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen.patch, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v2.patch, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v4.patch, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v6.patch, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v7.2.patch, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v8, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v8.1, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v9.patch, 0001-HBASE-2600.v10.patch, 0001-HBASE-2600-v11.patch,
[jira] [Commented] (HBASE-2600) Change how we do meta tables; from tablename+STARTROW+randomid to instead, tablename+ENDROW+randomid
[ https://issues.apache.org/jira/browse/HBASE-2600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481773#comment-13481773 ] Jesse Yates commented on HBASE-2600: As an aside, if we don't roll in the hashed tablenames here, we can do easy end-key extraction by encoding the length of the table name into the row key as the last 4 bytes of the key. Then you would read an int from the last 4 bytes to jump right to the correct location in the key for the endkey. This still sorts correctly because the prefix up to that length will always sort the same way, so the suffix doesn't affect sorting. Change how we do meta tables; from tablename+STARTROW+randomid to instead, tablename+ENDROW+randomid Key: HBASE-2600 URL: https://issues.apache.org/jira/browse/HBASE-2600 Project: HBase Issue Type: Bug Reporter: stack Assignee: Alex Newman Attachments: 0001-Changed-regioninfo-format-to-use-endKey-instead-of-s.patch, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen.patch, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v2.patch, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v4.patch, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v6.patch, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v7.2.patch, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v8, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v8.1, 0001-HBASE-2600.-Change-how-we-do-meta-tables-from-tablen-v9.patch, 0001-HBASE-2600.v10.patch, 0001-HBASE-2600-v11.patch, 2600-trunk-01-17.txt, HBASE-2600+5217-Sun-Mar-25-2012-v3.patch, HBASE-2600+5217-Sun-Mar-25-2012-v4.patch, hbase-2600-root.dir.tgz, jenkins.pdf This is an idea that Ryan and I have been kicking around on and off for a while now.
If regionnames were made of tablename+endrow instead of tablename+startrow, then in the metatables, doing a search for the region that contains the wanted row, we'd just have to open a scanner using passed row and the first row found by the scan would be that of the region we need (If offlined parent, we'd have to scan to the next row). If we redid the meta tables in this format, we'd be using an access that is natural to hbase, a scan as opposed to the perverse, expensive getClosestRowBefore we currently have that has to walk backward in meta finding a containing region. This issue is about changing the way we name regions. If we were using scans, prewarming client cache would be near costless (as opposed to what we'll currently have to do which is first a getClosestRowBefore and then a scan from the closestrowbefore forward). Converting to the new method, we'd have to run a migration on startup changing the content in meta. Up to this, the randomid component of a region name has been the timestamp of region creation. HBASE-2531 32-bit encoding of regionnames waaay too susceptible to hash clashes proposes changing the randomid so that it contains actual name of the directory in the filesystem that hosts the region. If we had this in place, I think it would help with the migration to this new way of doing the meta because as is, the region name in fs is a hash of regionname... changing the format of the regionname would mean we generate a different hash... so we'd need hbase-2531 to be in place before we could do this change. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
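The lookup change described above (a forward scan instead of getClosestRowBefore) can be illustrated with an ordinary sorted map. EndKeyLookup is a toy stand-in, not HBase code; note that because end keys are exclusive, a row equal to some region's end key belongs to the next region, so the lookup must be strictly greater than the row (the proposal's padded search key achieves the same effect):

```java
import java.util.Map;
import java.util.TreeMap;

// Toy model of endkey-keyed META: with regions sorted by (exclusive) end key,
// "first entry strictly after the wanted row" is the containing region -- the
// forward-scan equivalent of what getClosestRowBefore does by walking backward.
public class EndKeyLookup {
    private final TreeMap<String, String> regionsByEndKey = new TreeMap<>();

    void addRegion(String endKey, String regionName) {
        regionsByEndKey.put(endKey, regionName);
    }

    String findRegion(String row) {
        // higherEntry, not ceilingEntry: a row equal to an end key falls into
        // the NEXT region, since end keys are exclusive.
        Map.Entry<String, String> e = regionsByEndKey.higherEntry(row);
        return e == null ? null : e.getValue();
    }
}
```

In real META the "map" is the table itself and higherEntry becomes a short forward scan, which is the cheap, cache-friendly access pattern the issue argues for.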
[jira] [Created] (HBASE-7026) Make metrics collection in StoreScanner.java more efficient
Karthik Ranganathan created HBASE-7026: -- Summary: Make metrics collection in StoreScanner.java more efficient Key: HBASE-7026 URL: https://issues.apache.org/jira/browse/HBASE-7026 Project: HBase Issue Type: Sub-task Reporter: Karthik Ranganathan Assignee: Karthik Ranganathan Per the benchmarks I ran, the following block of code in StoreScanner.java seems to be inefficient:
{code}
public synchronized boolean next(List<KeyValue> outResult, int limit, String metric) throws IOException {
  // ...
  // update the counter
  if (addedResultsSize > 0 && metric != null) {
    HRegion.incrNumericMetric(this.metricNamePrefix + metric, addedResultsSize);
  }
  // ...
}
{code}
Removing this block increased throughput by 10%. We should move this to the outer layer.
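One way to read "move this to the outer layer" is to accumulate the size in a plain local counter inside the hot next() loop and publish it to the shared metric once per scan. This is a hedged sketch of that pattern, not the actual HBase change; BatchedMetric and its methods are hypothetical:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: keep the per-call path to a single unsynchronized long addition, and
// touch the shared (contended) counter only once, when the scanner closes.
public class BatchedMetric {
    private final AtomicLong shared = new AtomicLong(); // the "outer" metric
    private long local;                                 // per-scanner counter

    void onResults(long addedResultsSize) {
        local += addedResultsSize; // cheap: no contention in the hot loop
    }

    void close() {
        shared.addAndGet(local);   // publish once, outside the hot path
        local = 0;
    }

    long value() { return shared.get(); }
}
```

The trade-off is freshness: the metric only reflects a scan after it closes, which is usually acceptable for throughput-style counters.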
[jira] [Commented] (HBASE-6728) [89-fb] prevent OOM possibility due to per connection responseQueue being unbounded
[ https://issues.apache.org/jira/browse/HBASE-6728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481779#comment-13481779 ] Hudson commented on HBASE-6728: --- Integrated in HBase-TRUNK #3472 (See [https://builds.apache.org/job/HBase-TRUNK/3472/]) HBASE-6728 revert (Revision 1401012) HBASE-6728 prevent OOM possibility due to per connection responseQueue being unbounded (Revision 1401008) Result = FAILURE tedyu : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/RegionServerDynamicMetrics.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/SizeBasedThrottler.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestSizeBasedThrottler.java tedyu : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/HBaseServer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/metrics/RegionServerDynamicMetrics.java * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/SizeBasedThrottler.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestSizeBasedThrottler.java [89-fb] prevent OOM possibility due to per connection responseQueue being unbounded --- Key: HBASE-6728 URL: https://issues.apache.org/jira/browse/HBASE-6728 Project: HBase Issue Type: Bug Reporter: Kannan Muthukkaruppan Assignee: Michal Gregorczyk Fix For: 0.96.0 Attachments: 6728-trunk.txt The per connection responseQueue is an unbounded queue. The request handler threads today try to send the response in line, but if things start to backup, the response is sent via a per connection responder thread. 
This intermediate queue, because it has no bounds, can be another source of OOMs. [Have not looked at this issue in trunk. So it may or may not be applicable there.] -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6987) Port HBASE-6920 to trunk (?)
[ https://issues.apache.org/jira/browse/HBASE-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481778#comment-13481778 ] Hudson commented on HBASE-6987: --- Integrated in HBase-TRUNK #3472 (See [https://builds.apache.org/job/HBase-TRUNK/3472/]) HBASE-6987 Port HBASE-6920 to trunk (?) (Revision 1401015) Result = FAILURE gchanan : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientTimeouts.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/RandomTimeoutRpcEngine.java Port HBASE-6920 to trunk (?) Key: HBASE-6987 URL: https://issues.apache.org/jira/browse/HBASE-6987 Project: HBase Issue Type: Task Reporter: Gregory Chanan Assignee: Gregory Chanan Priority: Minor Fix For: 0.96.0 Attachments: HBASE-6987.patch Need to investigate whether we need to port HBASE-6920 to trunk. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6920) On timeout connecting to master, client can get stuck and never make progress
[ https://issues.apache.org/jira/browse/HBASE-6920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481780#comment-13481780 ] Hudson commented on HBASE-6920: --- Integrated in HBase-TRUNK #3472 (See [https://builds.apache.org/job/HBase-TRUNK/3472/]) HBASE-6987 Port HBASE-6920 to trunk (?) (Revision 1401015) Result = FAILURE gchanan : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestClientTimeouts.java * /hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/ipc/RandomTimeoutRpcEngine.java On timeout connecting to master, client can get stuck and never make progress - Key: HBASE-6920 URL: https://issues.apache.org/jira/browse/HBASE-6920 Project: HBase Issue Type: Bug Affects Versions: 0.94.2 Reporter: Gregory Chanan Assignee: Gregory Chanan Priority: Critical Fix For: 0.94.2 Attachments: 6920-addendum.txt, HBASE-6920.patch, HBASE-6920-v2.patch HBASE-5058 appears to have introduced an issue where a timeout in HConnection.getMaster() can cause the client to never be able to connect to the master. So, for example, an HBaseAdmin object can never successfully be initialized. The issue is here: {code} if (tryMaster.isMasterRunning()) { this.master = tryMaster; this.masterLock.notifyAll(); break; } {code} If isMasterRunning times out, it throws an UndeclaredThrowableException, which is already not ideal, because it can be returned to the application. But if the first call to getMaster succeeds, it will set masterChecked = true, which makes us never try to reconnect; that is, we will set this.master = null and just throw MasterNotRunningExceptions, without even trying to connect. I tried out a 94 client (actually a 92 client with some 94 patches) on a cluster with some network issues, and it would constantly get stuck as described above. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
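The fix direction the report implies is that a timeout while probing the master should be retried on the next attempt, never cached as a permanent "master not running" state. A minimal sketch of that retry behaviour, with MasterRetry as a hypothetical helper (not the HBASE-6920 patch):

```java
import java.util.concurrent.Callable;

// Sketch: retry a probe (e.g. the isMasterRunning() check) a bounded number of
// times. The key property is that a failed attempt leaves no sticky state --
// unlike caching masterChecked = true after a timeout.
public class MasterRetry {
    public static <T> T retry(Callable<T> probe, int attempts) throws Exception {
        Exception last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return probe.call();
            } catch (Exception e) {
                last = e; // timeout or transient failure: just try again
            }
        }
        throw last; // surface the real cause, not an UndeclaredThrowableException
    }
}
```

A caller would wrap the master probe in this instead of recording the first outcome forever, so a transient network blip does not permanently wedge the client.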
[jira] [Commented] (HBASE-7001) Fix the RCN Correctness Warning in MemStoreFlusher class
[ https://issues.apache.org/jira/browse/HBASE-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481781#comment-13481781 ] Hudson commented on HBASE-7001: --- Integrated in HBase-TRUNK #3472 (See [https://builds.apache.org/job/HBase-TRUNK/3472/]) HBASE-7001 Fix the RCN Correctness Warning in MemStoreFlusher class (Liang Xie) (Revision 1401037) Result = FAILURE tedyu : Files : * /hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreFlusher.java Fix the RCN Correctness Warning in MemStoreFlusher class Key: HBASE-7001 URL: https://issues.apache.org/jira/browse/HBASE-7001 Project: HBase Issue Type: Bug Reporter: liang xie Assignee: liang xie Priority: Minor Fix For: 0.96.0 Attachments: HBASE-7001.patch https://builds.apache.org/job/PreCommit-HBASE-Build/3057//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html#Warnings_CORRECTNESS shows : Bug type RCN_REDUNDANT_NULLCHECK_WOULD_HAVE_BEEN_A_NPE (click for details) In class org.apache.hadoop.hbase.regionserver.MemStoreFlusher In method org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher$FlushRegionEntry) Value loaded from region Return value of org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushRegionEntry.access$000(MemStoreFlusher$FlushRegionEntry) At MemStoreFlusher.java:[line 346] Redundant null check at MemStoreFlusher.java:[line 363] -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HBASE-6929) Publish Hbase 0.94 artifacts build against hadoop-2.0
[ https://issues.apache.org/jira/browse/HBASE-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481791#comment-13481791 ] Enis Soztutar commented on HBASE-6929: -- @Stack bq. What do we do in hbase 1.0, say, where minimum required is indeed h2. Don't we want our version to be hbase in that case and not hbase-hadoop2? You think w/ your patch, maven repo will have hbase-hadoop2 and it will be just fine when hbase itself moves to hadoop2? (We'll have a hadoop3 version around this time too?) If HBase moves to hadoop2, and drops support for h1, then we can just call the main version hbase without the hadoop2 suffix. As long as it is clear which version of HBase requires which version of hadoop, it should not be too confusing, wdyt? @Jarek bq. Another idea - version number can contain string. So instead of creating special dot release, what about creating release ${normalversion}-hadoop2? This is what I did in the patch. We will release, for example, hbase-0.94.3-hadoop2. Publish Hbase 0.94 artifacts build against hadoop-2.0 - Key: HBASE-6929 URL: https://issues.apache.org/jira/browse/HBASE-6929 Project: HBase Issue Type: Task Components: build Affects Versions: 0.94.2 Reporter: Enis Soztutar Attachments: 6929.txt, hbase-6929_v2.patch Downstream projects (flume, hive, pig, etc) depend on hbase, but since the hbase binaries built with hadoop-2.0 are not pushed to maven, they cannot depend on them. AFAIK, hadoop 1 and 2 are not binary compatible, so we should also push hbase jars built with the hadoop2.0 profile into maven, possibly with a version string like 0.94.2-hadoop2.0.
[jira] [Created] (HBASE-7027) Use the correct port of info server of region servers
Elliott Clark created HBASE-7027: Summary: Use the correct port of info server of region servers Key: HBASE-7027 URL: https://issues.apache.org/jira/browse/HBASE-7027 Project: HBase Issue Type: Improvement Components: UI Reporter: Elliott Clark Assignee: Elliott Clark Right now the master ui just guesses that the port of the info server will always be the same on all servers. This is not a good assumption: setting it per server is possible, and setting the conf variable to 0 will make the info server choose a port randomly.
[jira] [Updated] (HBASE-7027) Use the correct port of info server of region servers
[ https://issues.apache.org/jira/browse/HBASE-7027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-7027: - Attachment: HBASE-7027-0.patch Patch that adds the info server port to the server load which is sent to the master. Use the correct port of info server of region servers - Key: HBASE-7027 URL: https://issues.apache.org/jira/browse/HBASE-7027 Project: HBase Issue Type: Improvement Components: UI Reporter: Elliott Clark Assignee: Elliott Clark Attachments: HBASE-7027-0.patch Right now the master UI just guesses that the port of the info server will always be the same on all servers. This is not a good assumption: setting the port per server is possible, and setting the conf variable to 0 will make the info server choose a port randomly.
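The fix described above (each region server reports its actual info port as part of the load it sends to the master, so the master UI can build correct links) can be sketched as follows. This is an illustrative Python sketch, not the actual Java patch; the class and function names are hypothetical.

```python
class ServerLoad:
    """Hypothetical heartbeat payload: besides the usual load metrics,
    carry the region server's actual info-server port."""

    def __init__(self, hostname, info_port):
        self.hostname = hostname
        self.info_port = info_port  # the port the info server really bound to


def info_server_link(load):
    # The master UI builds the link from the reported port instead of
    # assuming every server uses the same configured port.
    return "http://%s:%d/rs-status" % (load.hostname, load.info_port)
```

With a fixed shared config this changes nothing; the benefit shows up when servers override the port or use port 0 (random bind).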
[jira] [Updated] (HBASE-7027) Use the correct port of info server of region servers
[ https://issues.apache.org/jira/browse/HBASE-7027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-7027: - Status: Patch Available (was: Open) Use the correct port of info server of region servers - Key: HBASE-7027 URL: https://issues.apache.org/jira/browse/HBASE-7027 Project: HBase Issue Type: Improvement Components: UI Reporter: Elliott Clark Assignee: Elliott Clark Attachments: HBASE-7027-0.patch Right now the master UI just guesses that the port of the info server will always be the same on all servers. This is not a good assumption: setting the port per server is possible, and setting the conf variable to 0 will make the info server choose a port randomly.
[jira] [Commented] (HBASE-6977) Multithread processing ZK assignment events
[ https://issues.apache.org/jira/browse/HBASE-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481815#comment-13481815 ] Jimmy Xiang commented on HBASE-6977: Thanks Ted for the review. I posted the second patch to RB: https://reviews.apache.org/r/7682/ Multithread processing ZK assignment events --- Key: HBASE-6977 URL: https://issues.apache.org/jira/browse/HBASE-6977 Project: HBase Issue Type: Improvement Components: Region Assignment Reporter: Jimmy Xiang Assignee: Jimmy Xiang Priority: Minor Attachments: trunk-6977_v1.patch Related to HBASE-6976 and HBASE-6611. ZK event processing is a bottleneck for assignments, since there is only one ZK event thread. If we can use multiple threads, it should be better. With multiple threads, the order of events could be messed up. However, if we always pass all events related to one region to the same worker thread, the order should be kept. We need to play with it and find out how much performance improvement we can get.
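The ordering idea in the description (route every event for a given region to the same worker, so per-region order is preserved even with many workers) can be sketched like this. An illustrative Python sketch with hypothetical names, not the actual patch:

```python
from concurrent.futures import ThreadPoolExecutor


class RegionEventDispatcher:
    """Spread ZK assignment events across several workers while keeping
    per-region ordering: all events for one region hash to one worker."""

    def __init__(self, num_workers=4):
        # One single-threaded executor per slot: within a slot, tasks run
        # FIFO, so every region mapped to that slot keeps its event order.
        self.workers = [ThreadPoolExecutor(max_workers=1)
                        for _ in range(num_workers)]

    def submit(self, region_name, handler, *args):
        slot = hash(region_name) % len(self.workers)
        return self.workers[slot].submit(handler, *args)

    def shutdown(self):
        for w in self.workers:
            w.shutdown(wait=True)
```

Events for different regions can still interleave freely across slots, which is where the parallelism comes from.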
[jira] [Assigned] (HBASE-7015) We need major_compact --force to rewrite all store files for a table
[ https://issues.apache.org/jira/browse/HBASE-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Himanshu Vashishtha reassigned HBASE-7015: -- Assignee: Himanshu Vashishtha We need major_compact --force to rewrite all store files for a table Key: HBASE-7015 URL: https://issues.apache.org/jira/browse/HBASE-7015 Project: HBase Issue Type: Improvement Affects Versions: 0.96.0 Reporter: Kevin Odell Assignee: Himanshu Vashishtha Having major_compact --force would have some advantages: 1.) Changing compression type and making sure all storefiles are written with the new value 2.) Can help with TTL and expiring all old data
[jira] [Created] (HBASE-7028) Bump JRuby to 1.7.0
Elliott Clark created HBASE-7028: Summary: Bump JRuby to 1.7.0 Key: HBASE-7028 URL: https://issues.apache.org/jira/browse/HBASE-7028 Project: HBase Issue Type: Task Reporter: Elliott Clark Assignee: Elliott Clark Priority: Minor JRuby released 1.7.0, which includes InvokeDynamic support and speeds lots of things up.
[jira] [Created] (HBASE-7029) Result array serialization improvements
Karthik Ranganathan created HBASE-7029: -- Summary: Result array serialization improvements Key: HBASE-7029 URL: https://issues.apache.org/jira/browse/HBASE-7029 Project: HBase Issue Type: Sub-task Reporter: Karthik Ranganathan Assignee: Karthik Ranganathan The Result[] is serialized very inefficiently - there are 2 for loops over each result and we instantiate every object. A better way is to make it a data block, and use delta block encoding to make it more efficient.
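A rough illustration of the delta-encoding idea for sorted keys, in Python; HBase's actual data block encoders are more involved, and the function names here are hypothetical:

```python
def delta_encode(keys):
    """Prefix-delta encode a sorted list of byte strings: store each key
    as (shared_prefix_len, suffix) relative to the previous key, so long
    common prefixes (typical for sorted row keys) are stored once."""
    out, prev = [], b""
    for k in keys:
        common = 0
        for a, b in zip(prev, k):
            if a != b:
                break
            common += 1
        out.append((common, k[common:]))
        prev = k
    return out


def delta_decode(encoded):
    """Reverse of delta_encode: rebuild each key from the previous one."""
    keys, prev = [], b""
    for common, suffix in encoded:
        k = prev[:common] + suffix
        keys.append(k)
        prev = k
    return keys
```

The same principle applied to a whole Result[] block avoids per-object instantiation on the wire: the receiver can scan one contiguous buffer instead of deserializing every KeyValue individually.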
[jira] [Updated] (HBASE-7028) Bump JRuby to 1.7.0
[ https://issues.apache.org/jira/browse/HBASE-7028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-7028: - Attachment: HBASE-7028-0.patch One line pom change. Bump JRuby to 1.7.0 --- Key: HBASE-7028 URL: https://issues.apache.org/jira/browse/HBASE-7028 Project: HBase Issue Type: Task Reporter: Elliott Clark Assignee: Elliott Clark Priority: Minor Attachments: HBASE-7028-0.patch JRuby released 1.7.0, which includes InvokeDynamic support and speeds lots of things up.
[jira] [Updated] (HBASE-7028) Bump JRuby to 1.7.0
[ https://issues.apache.org/jira/browse/HBASE-7028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elliott Clark updated HBASE-7028: - Status: Patch Available (was: Open) Bump JRuby to 1.7.0 --- Key: HBASE-7028 URL: https://issues.apache.org/jira/browse/HBASE-7028 Project: HBase Issue Type: Task Reporter: Elliott Clark Assignee: Elliott Clark Priority: Minor Attachments: HBASE-7028-0.patch JRuby released 1.7.0, which includes InvokeDynamic support and speeds lots of things up.
[jira] [Commented] (HBASE-7019) Can't pass SplitAlgo in hbase shell
[ https://issues.apache.org/jira/browse/HBASE-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481828#comment-13481828 ] Hadoop QA commented on HBASE-7019: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12550343/HBASE-7019-v2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 82 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 4 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.client.TestHCM org.apache.hadoop.hbase.master.TestMasterMetrics Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3113//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3113//console This message is automatically generated. Can't pass SplitAlgo in hbase shell --- Key: HBASE-7019 URL: https://issues.apache.org/jira/browse/HBASE-7019 Project: HBase Issue Type: Bug Components: shell Affects Versions: 0.96.0 Reporter: Gregory Chanan Assignee: Gregory Chanan Fix For: 0.96.0 Attachments: HBASE-7019.patch, HBASE-7019-v2.patch {noformat} hbase(main):002:0> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'} ERROR: uninitialized constant Hbase::Admin::RegionSplitter {noformat}
[jira] [Commented] (HBASE-7019) Can't pass SplitAlgo in hbase shell
[ https://issues.apache.org/jira/browse/HBASE-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481833#comment-13481833 ] Gregory Chanan commented on HBASE-7019: --- Ran the failing tests locally; they passed. Going to commit to trunk. Can't pass SplitAlgo in hbase shell --- Key: HBASE-7019 URL: https://issues.apache.org/jira/browse/HBASE-7019 Project: HBase Issue Type: Bug Components: shell Affects Versions: 0.96.0 Reporter: Gregory Chanan Assignee: Gregory Chanan Fix For: 0.96.0 Attachments: HBASE-7019.patch, HBASE-7019-v2.patch {noformat} hbase(main):002:0> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'} ERROR: uninitialized constant Hbase::Admin::RegionSplitter {noformat}
[jira] [Updated] (HBASE-7019) Can't pass SplitAlgo in hbase shell
[ https://issues.apache.org/jira/browse/HBASE-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gregory Chanan updated HBASE-7019: -- Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk. Thanks for the review, Ted. Can't pass SplitAlgo in hbase shell --- Key: HBASE-7019 URL: https://issues.apache.org/jira/browse/HBASE-7019 Project: HBase Issue Type: Bug Components: shell Affects Versions: 0.96.0 Reporter: Gregory Chanan Assignee: Gregory Chanan Fix For: 0.96.0 Attachments: HBASE-7019.patch, HBASE-7019-v2.patch {noformat} hbase(main):002:0> create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'} ERROR: uninitialized constant Hbase::Admin::RegionSplitter {noformat}
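For reference, HexStringSplit partitions a fixed-width hexadecimal keyspace into uniform ranges. Something in the spirit of that split-key computation can be sketched as follows; this is an illustrative Python sketch, not HBase's actual RegionSplitter code:

```python
def hex_string_splits(num_regions, width=8):
    """Compute num_regions - 1 uniformly spaced split keys over the hex
    keyspace [0, 16**width), analogous in spirit to HexStringSplit."""
    space = 16 ** width
    return [format(space * i // num_regions, "0%dx" % width)
            for i in range(1, num_regions)]
```

So `create 't1', 'f1', {NUMREGIONS => 15, SPLITALGO => 'HexStringSplit'}` corresponds to pre-splitting the table at 14 evenly spaced hex boundaries.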
[jira] [Commented] (HBASE-6929) Publish Hbase 0.94 artifacts build against hadoop-2.0
[ https://issues.apache.org/jira/browse/HBASE-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481845#comment-13481845 ] Jarek Jarcec Cecho commented on HBASE-6929: --- [~enis] I see, I originally thought that you wanted to increase the version number. Adding a -hadoop2 suffix seems like a reasonable workaround to me, but it's up to you HBase guys to agree on the final solution :-) Publish Hbase 0.94 artifacts build against hadoop-2.0 - Key: HBASE-6929 URL: https://issues.apache.org/jira/browse/HBASE-6929 Project: HBase Issue Type: Task Components: build Affects Versions: 0.94.2 Reporter: Enis Soztutar Attachments: 6929.txt, hbase-6929_v2.patch Downstream projects (flume, hive, pig, etc.) depend on hbase, but since the hbase binaries built with hadoop-2.0 are not pushed to maven, they cannot depend on them. AFAIK, hadoop 1 and 2 are not binary compatible, so we should also push hbase jars built with the hadoop2.0 profile into maven, possibly with a version string like 0.94.2-hadoop2.0.
[jira] [Commented] (HBASE-4850) hbase tests need to be made Hadoop version agnostic
[ https://issues.apache.org/jira/browse/HBASE-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481846#comment-13481846 ] Roman Shaposhnik commented on HBASE-4850: - if that's the case I think this JIRA needs to be closed as won't fix :-( hbase tests need to be made Hadoop version agnostic --- Key: HBASE-4850 URL: https://issues.apache.org/jira/browse/HBASE-4850 Project: HBase Issue Type: Improvement Components: test Affects Versions: 0.92.0, 0.92.1, 0.94.0 Reporter: Roman Shaposhnik Priority: Critical Currently it is possible to have a single hbase jar that can work with multiple versions of Hadoop. It would be nice if hbase-test.jar also followed suit. For now I'm aware of the following problems (but there could be more): 1. org.apache.hadoop.hbase.mapreduce.NMapInputFormat is failing because org.apache.hadoop.mapreduce.JobContext is either a class or an interface depending on which version of Hadoop you compile it against.
[jira] [Resolved] (HBASE-6744) Per table balancing could cause regions unbalanced overall
[ https://issues.apache.org/jira/browse/HBASE-6744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jimmy Xiang resolved HBASE-6744. Resolution: Won't Fix Per table balancing could cause regions unbalanced overall -- Key: HBASE-6744 URL: https://issues.apache.org/jira/browse/HBASE-6744 Project: HBase Issue Type: Improvement Reporter: Jimmy Xiang Per table balancing just balances regions based on tables. However, overall, regions could be seriously unbalanced. For example, if you shut down almost all region servers in a cluster, then create tons of new tables (no region pre-split), then start up all region servers, you will see that the regions won't move to other region servers since they are balanced per table (only one region per table at this moment). If we can make the balance algorithm sophisticated enough, we don't need the configuration hbase.master.loadbalance.bytable. We can do the regular and bytable balancing at the same time.
[jira] [Commented] (HBASE-6929) Publish Hbase 0.94 artifacts build against hadoop-2.0
[ https://issues.apache.org/jira/browse/HBASE-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481850#comment-13481850 ] Roman Shaposhnik commented on HBASE-6929: - @stack if HBASE-4850 is not an option I believe you guys are on the right track. You definitely don't want to use a classifier for this, so that leaves mucking with the version string. As long as you come up with a reasonable encoding strategy for pointing at various artifacts (e.g. hadoop2-secure, etc.) you can append that to the version string. Watch out for things in the pom.xml that need to be tweaked for each of these combinations -- I got bitten by default settings in specially versioned pom files a couple of times. Publish Hbase 0.94 artifacts build against hadoop-2.0 - Key: HBASE-6929 URL: https://issues.apache.org/jira/browse/HBASE-6929 Project: HBase Issue Type: Task Components: build Affects Versions: 0.94.2 Reporter: Enis Soztutar Attachments: 6929.txt, hbase-6929_v2.patch Downstream projects (flume, hive, pig, etc.) depend on hbase, but since the hbase binaries built with hadoop-2.0 are not pushed to maven, they cannot depend on them. AFAIK, hadoop 1 and 2 are not binary compatible, so we should also push hbase jars built with the hadoop2.0 profile into maven, possibly with a version string like 0.94.2-hadoop2.0.
[jira] [Commented] (HBASE-5314) Gracefully rolling restart region servers in rolling-restart.sh
[ https://issues.apache.org/jira/browse/HBASE-5314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481854#comment-13481854 ] Enis Soztutar commented on HBASE-5314: -- The patch for 0.94 looks good, and it seems it can be made use of for a 0.92 to 0.94 rolling restart. [~lhofhansl] any objections? Gracefully rolling restart region servers in rolling-restart.sh --- Key: HBASE-5314 URL: https://issues.apache.org/jira/browse/HBASE-5314 Project: HBase Issue Type: Improvement Components: scripts Reporter: Yifeng Jiang Assignee: Yifeng Jiang Priority: Minor Fix For: 0.96.0 Attachments: HBASE-5314-0.94.patch, HBASE-5314.patch, HBASE-5314.patch.2 The rolling-restart.sh has a --rs-only option which simply restarts all region servers in the cluster. Consider improving it to gracefully restart region servers, to avoid offline time for the regions deployed on each server and to keep the region distribution the same as it was before the restart.
[jira] [Commented] (HBASE-5914) Bulk assign regions in the process of ServerShutdownHandler
[ https://issues.apache.org/jira/browse/HBASE-5914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481869#comment-13481869 ] Enis Soztutar commented on HBASE-5914: -- [~ram_krish] Ram, do you think that it is safe to backport into 0.94? It seems a good improvement for SSH. Bulk assign regions in the process of ServerShutdownHandler --- Key: HBASE-5914 URL: https://issues.apache.org/jira/browse/HBASE-5914 Project: HBase Issue Type: Improvement Reporter: chunhui shen Assignee: chunhui shen Fix For: 0.96.0 Attachments: HBASE-5914.patch, HBASE-5914v2.patch, HBASE-5914v3.patch In the process of ServerShutdownHandler, we currently assign regions singly. In a large cluster, one regionserver always carries many regions, so this action is quite slow. What about using bulk region assignment, like at cluster startup? In the current logic, if we fail to assign many regions to one destination server, we will wait until timeout; however, in the process of ServerShutdownHandler, we should retry on another server.
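The bulk-assignment idea in the description (group the dead server's regions by destination so each live server receives one batched open call instead of one RPC per region) can be sketched like this. An illustrative Python sketch with hypothetical names, not the actual patch:

```python
from collections import defaultdict


def bulk_assignment_plan(regions, servers):
    """Round-robin the regions of a dead server across the live servers
    and group the result by destination, so that each destination can be
    sent a single bulk open request rather than one RPC per region."""
    plan = defaultdict(list)
    for i, region in enumerate(regions):
        plan[servers[i % len(servers)]].append(region)
    return dict(plan)
```

With N regions and S servers this issues S bulk requests instead of N individual ones, which is where the speedup on large clusters comes from; retrying a failed batch against a different server then replaces waiting for the per-region timeout.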
[jira] [Commented] (HBASE-7028) Bump JRuby to 1.7.0
[ https://issues.apache.org/jira/browse/HBASE-7028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13481871#comment-13481871 ] Hadoop QA commented on HBASE-7028: -- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12550354/HBASE-7028-0.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop 2.0 profile. {color:red}-1 javadoc{color}. The javadoc tool appears to have generated 82 warning messages. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:red}-1 findbugs{color}. The patch appears to introduce 4 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. 
The patch failed these unit tests: org.apache.hadoop.hbase.regionserver.TestSplitTransaction Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/3115//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3115//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3115//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3115//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3115//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/3115//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/3115//console This message is automatically generated. Bump JRuby to 1.7.0 --- Key: HBASE-7028 URL: https://issues.apache.org/jira/browse/HBASE-7028 Project: HBase Issue Type: Task Reporter: Elliott Clark Assignee: Elliott Clark Priority: Minor Attachments: HBASE-7028-0.patch JRuby released 1.7.0, which includes InvokeDynamic support and speeds lots of things up.
[jira] [Commented] (HBASE-7028) Bump JRuby to 1.7.0
[ https://issues.apache.org/jira/browse/HBASE-7028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481874#comment-13481874 ] stack commented on HBASE-7028: -- I think 1.6.x is 9MB and 1.7.x is 14MB. The size difference is pretty big. Bump JRuby to 1.7.0 --- Key: HBASE-7028 URL: https://issues.apache.org/jira/browse/HBASE-7028 Project: HBase Issue Type: Task Reporter: Elliott Clark Assignee: Elliott Clark Priority: Minor Attachments: HBASE-7028-0.patch JRuby released 1.7.0, which includes InvokeDynamic support and speeds lots of things up.
[jira] [Commented] (HBASE-6972) HBase Shell deleteall requires column to be defined
[ https://issues.apache.org/jira/browse/HBASE-6972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13481877#comment-13481877 ] David S. Wang commented on HBASE-6972: -- +1 on patch. HBase Shell deleteall requires column to be defined Key: HBASE-6972 URL: https://issues.apache.org/jira/browse/HBASE-6972 Project: HBase Issue Type: Bug Components: shell Affects Versions: 0.96.0 Reporter: Ricky Saltzer Assignee: Ricky Saltzer Fix For: 0.96.0 Attachments: HBASE-6972.2.patch, HBASE-6972.patch It appears that the shell does not allow users to delete a row without specifying a column (deleteall). It looks like the deleteall.rb used to pre-define column as nil, making it optional. I've created a patch and confirmed it to be working in standalone mode, I will upload it shortly.